If you give a large group of researchers the same data and the same research questions, how many of them will come up with the same answer? That is the question behind a large crowd-sourced research project in which Sascha Füllbrunn, Sven Nolte, Utz Weitzel and Stefan Zeisberger of the Department of Economics and Business Economics at Nijmegen School of Management collaborated. The paper, involving a total of 343 authors in 164 teams, was recently accepted for publication in the leading Journal of Finance.
In the project, teams of researchers were given the same data on financial market transactions and asked to test six hypotheses about relationships commonly studied in finance. The differences in outcomes were substantial: the dispersion across teams' estimates, dubbed the "non-standard error", turned out to be about the same size as the mean standard error. Whether a team consisted of experienced researchers made no difference, although mutual feedback between teams did reduce the variation to some extent.
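To make the idea concrete, here is a minimal sketch (not the paper's actual methodology) of how a non-standard error differs from an ordinary standard error. All numbers are hypothetical: each team's divergent analysis choices are modeled simply as analysing a different random subsample of one shared dataset.

```python
import random
import statistics

random.seed(0)

# Shared synthetic dataset: a hypothetical "true" effect of 0.5 observed
# with unit noise, for illustration only.
true_effect = 0.5
data = [true_effect + random.gauss(0, 1) for _ in range(1000)]

# Model each team's divergent analysis choices as analysing a different
# random subsample of the same shared data.
n_teams = 164
estimates, std_errors = [], []
for _ in range(n_teams):
    sample = random.sample(data, 200)
    estimates.append(statistics.fmean(sample))
    std_errors.append(statistics.stdev(sample) / len(sample) ** 0.5)

# Non-standard error: dispersion of point estimates ACROSS teams,
# compared with the mean of the teams' own (within-team) standard errors.
non_standard_error = statistics.stdev(estimates)
mean_standard_error = statistics.fmean(std_errors)
print(f"non-standard error:  {non_standard_error:.3f}")
print(f"mean standard error: {mean_standard_error:.3f}")
```

In this toy setup the two quantities come out at a comparable magnitude, mirroring the project's headline finding: the variation introduced by who analyses the data can be as large as the statistical uncertainty each individual analysis reports.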
The project underlines the value of research, but also shows that research outcomes depend, more than previously assumed, on the approach individual researchers take. This calls for further investigation of cases in which similar, but not identical, methods were used.