The Unscientific Method

Given the recent hubbub over the errors behind Reinhart and Rogoff’s famous claim that growth falls off sharply once the debt-to-GDP ratio passes 90 percent, perhaps it’s time for a serious re-examination of publishing practices in our field. Sometimes embarrassments like this can prompt us to action, and this one may.

While this particular error is squarely in the lap of Reinhart and Rogoff, I think it is symptomatic of a broader failure to ensure that empirical results are replicable; replicability is the “gold standard by which the reliability of scientific claims are judged” (National Research Council, 2001). The lack of replicability in empirical economics should be an embarrassment to a field that has been trying (mistakenly, in my view) to catch up with the big boys in the natural sciences.

The NSF requires that researchers who receive funding archive their data so that others can replicate the results, but many academic journals in economics impose no such requirement. Perhaps we should follow the NSF’s lead in this regard. A number of economists do upload data to their personal or university websites, making it freely available to others. That is a great practice, but it is obviously no substitute for a uniform policy. While referees or editors will sometimes ask for access to data when reviewing scholarly papers, this seems to happen far less often than it should; moreover, the practice is anything but uniform across journals and editors. Even those journals with policies requiring submission of data do not seem to offer particularly compelling incentives for authors to actually cooperate: in a 2007 paper, Daniel Hamermesh pointed out that the editor of the Journal of Money, Credit and Banking (JMCB) sought data sets and documentation from authors of accepted papers, but received them for only about one-third.

Mark Thoma at Economist’s View linked to an NPR story asking how much we should trust economics. Given the lack of true transparency in empirical work, that’s a very good question indeed.

Author: Brandon Dupont
