Streetwise Professor

November 29, 2009

Let (Peer Reviewed) Publishing Perish?

Filed under: Climate Change, Economics — The Professor @ 10:59 pm

The whole CRU disaster has got me thinking about whether, given modern information technology, peer review is an anachronism that impedes, rather than advances, scientific knowledge (including social scientific knowledge).  It is quite entertaining–in a perverse, watching-a-car-crash kind of way–to observe the defenders of the climate change consensus repeat “peer review” like a magic spell that will somehow ward off evil (skeptic) spirits.  But if anything, the whole fiasco calls into question the reliability of peer review.

Indeed, the whole display brings to mind the comments of my former colleague, the late Roger Kormendi.  Somebody once mentioned to Roger that he should pay particular attention to a certain piece of research because it had been peer reviewed.  To which Roger replied: “Oh, that means it’s completely arbitrary.”

But as with any institution, it is not sufficient to point out peer review’s flaws to justify replacing it.  It is necessary to make a comparative analysis of realistic alternatives.  What would those alternatives be?

In the modern era, the ability to disseminate papers nearly universally and instantaneously, and to make people aware of their existence through things like SSRN, makes it possible for myriad scholars to access and evaluate papers, rather than just one or two reviewers.  Moreover, the same information technology makes it feasible to provide access to data and code to facilitate replication, examinations of robustness, and the testing of alternative specifications and models on a given set of data.  In this way, it is possible to harness the knowledge of myriad, dispersed individuals with specialized expertise, rather than one or two or even three individuals.  Moreover, the open-entry model mitigates the incentive problems associated with peer review, where individuals with weak and often perverse incentives exert incredible influence over what gets published and what doesn’t.
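
To make the replication point concrete, consider a minimal sketch (the data and specifications below are invented for illustration) of what shared data and code enable: anyone can refit the same outcome under alternative specifications and see at a glance how robust a headline coefficient is.

    # A minimal sketch, with invented data, of the robustness checking that
    # shared data and code make cheap: refit the same outcome variable under
    # several plausible specifications and compare the coefficient of interest.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 200
    x = rng.normal(size=n)            # regressor of interest
    z = 0.5 * x + rng.normal(size=n)  # a correlated control
    y = 1.0 * x + 0.3 * z + rng.normal(size=n)

    def slope_on_x(y, columns):
        """OLS coefficient on x (the first column), intercept included."""
        X = np.column_stack([np.ones(len(y))] + columns)
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        return beta[1]

    specs = {"bivariate": [x], "with control": [x, z], "with x^2": [x, x**2]}
    for name, cols in specs.items():
        print(f"{name:>12}: coefficient on x = {slope_on_x(y, cols):.3f}")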

Furthermore, wiki-type mechanisms can be employed to collect and aggregate commentary and critiques about particular papers or groups of papers.  This is another way of harnessing the dispersed knowledge of crowds.  In particular, it would facilitate the exploitation of comparative advantage, allowing, for instance, statisticians to comment on statistical methodologies, computational experts to critique numerical techniques, and so on.

Just think of how things might have evolved differently if climate data collection and paleoclimate reconstruction had been done under this model, rather than via the peer review mechanism.  The “hockey stick” reconstructions would have been subjected to the critique of expert statisticians, who would have uncovered Mann et al.’s misuse of principal component methods.  Making raw climate data available would have made it possible to evaluate the sensitivity of results to data selection and to the methods used to “clean” (quotes definitely needed) the data.*  In sum, we wouldn’t be where we are now, not knowing with any confidence just what the climate record tells us, or even what the climate record actually is.
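
The principal components point lends itself to a simple demonstration.  The toy sketch below is emphatically not Mann et al.’s code, and its “proxies” are pure noise; it simply illustrates the statistical mechanism the critics identified: centering the data on a short late sub-period, rather than the full period, makes the first principal component load preferentially on series that happen to trend during that sub-period, so even trendless red noise can yield a hockey-stick-shaped PC1.

    # A toy illustration (not Mann et al.'s actual code) of the decentering
    # critique: run PCA on pure red noise, once centered on the full period
    # and once centered only on the final 80 "years."
    import numpy as np

    rng = np.random.default_rng(1)
    n_years, n_series, rho = 600, 70, 0.9

    # Simulate AR(1) "proxy" series: red noise containing no signal at all.
    shocks = rng.normal(size=(n_years, n_series))
    proxies = np.zeros_like(shocks)
    for t in range(1, n_years):
        proxies[t] = rho * proxies[t - 1] + shocks[t]

    def first_pc(data, center_rows):
        """PC1 scores after subtracting the mean of only the selected rows."""
        centered = data - data[center_rows].mean(axis=0)
        u, s, _ = np.linalg.svd(centered, full_matrices=False)
        return u[:, 0] * s[0]

    def hockey_stick_index(series, blade=80):
        """Distance of the final `blade` points from the overall mean, in SDs."""
        return abs(series[-blade:].mean() - series.mean()) / series.std()

    pc_full = first_pc(proxies, slice(None))        # conventional centering
    pc_short = first_pc(proxies, slice(-80, None))  # short (decentered) centering

    print(f"hockey-stick index, full centering:  {hockey_stick_index(pc_full):.2f}")
    print(f"hockey-stick index, short centering: {hockey_stick_index(pc_short):.2f}")

Run repeatedly, the short-centered index tends to be the larger of the two by a wide margin, which is the whole point: the method, not the data, supplies the blade.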

The open kimono approach with code and data would also provide an extremely strong deterrent to fraud.  Moreover, reducing reliance on journals would reduce resources devoted to rent seeking activities (e.g., influencing journal editors, gaming submissions, torpedoing competitors, spending time devising submission strategies).  It would also enhance competition, and reduce the rents that incumbent “gatekeepers” can extract.  Reduced reliance on journals would also mitigate the file drawer effect, because journals inevitably condition acceptance on measures of statistical significance.  This leads scholars to abandon research that does not generate such results, and encourages specification searches and other “econometric cons,” meaning that published results are likely to present a biased picture of the true state of the evidence.  A more open model would likely reduce these (statistical) size-distorting incentives.
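
The size distortion is easy to simulate.  In the sketch below (an illustration with simulated studies, not a model of any particular literature), the true effect in every study is exactly zero, yet the subset that clears the significance filter reports substantial effects.

    # A minimal simulation of the file drawer effect: every study estimates
    # an effect whose true value is zero, but only p < 0.05 results are
    # "published," so the published record overstates the evidence.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    n_studies, n_obs = 2000, 30

    all_estimates, published = [], []
    for _ in range(n_studies):
        sample = rng.normal(loc=0.0, scale=1.0, size=n_obs)  # true effect = 0
        _, p_value = stats.ttest_1samp(sample, popmean=0.0)
        all_estimates.append(sample.mean())
        if p_value < 0.05:
            published.append(sample.mean())

    print(f"mean |estimate|, all studies:    {np.mean(np.abs(all_estimates)):.3f}")
    print(f"mean |estimate|, published only: {np.mean(np.abs(published)):.3f}")
    print(f"published share: {len(published) / n_studies:.1%} (nominal size: 5%)")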

One challenge posed by this alternative model is that the hiring, tenure, and promotion mechanisms at modern research universities are adapted to the journal publishing mechanism.  Since citations are probably a better metric of quality than whether something is published in Journal A or Journal B, or published at all, perhaps a citation-based mechanism could suffice.  (Though if journals faded in importance, this would raise the questions: Cited in what?  And how do you compare the quality of citations?  Perhaps citation quality could be measured recursively, weighting each citation by the number of citations the citing paper itself receives.)  Also, since participation in wikis, etc., contributes to knowledge, it would be desirable to provide incentives for that kind of activity–which would inevitably require some (inevitably noisy) measurement technology.
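
That recursive idea, where a citation counts for more when the citing paper is itself heavily cited, is the same fixed-point logic that PageRank computes.  A sketch on a hypothetical four-paper citation graph (the papers and links are invented):

    # A sketch of recursive citation weighting: credit flows from citing
    # papers to cited papers, so a citation from a well-cited paper counts
    # for more.  The four papers and their citations are hypothetical.
    import numpy as np

    papers = ["A", "B", "C", "D"]
    cites = {"A": [], "B": ["A"], "C": ["A", "B"], "D": ["C"]}  # citer -> cited

    n = len(papers)
    idx = {p: i for i, p in enumerate(papers)}
    M = np.zeros((n, n))  # M[i, j]: share of paper j's credit flowing to paper i
    for citer, cited in cites.items():
        for target in cited:
            M[idx[target], idx[citer]] = 1.0 / len(cited)

    # Power iteration to the fixed point.  (Paper A cites nothing, so a
    # little mass leaks; that is acceptable in a toy example.)
    damping, score = 0.85, np.full(n, 1.0 / n)
    for _ in range(100):
        score = (1 - damping) / n + damping * M @ score

    for p in sorted(papers, key=lambda q: -score[idx[q]]):
        print(f"paper {p}: score {score[idx[p]]:.3f}")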

These hiring and P&T issues are not immaterial, but to me the first-order issues are reducing the costs of producing knowledge, discovering error, and deterring fraud.  The open source, wisdom-of-swarms, collaborative, wiki-based model seems to offer many advantages over the received, hierarchical, journal-based model.  Open access, open source, “swarm,” and wiki models are already threatening other information dissemination mechanisms–notably journalism.  So why not journals too?  Why not have reviews by tens or hundreds or thousands of peers who bring comparative advantages to bear (e.g., statisticians critiquing work done by non-statisticians employing statistical techniques), and who are self-selected for their interest, rather than reviews by fewer than a handful of inevitably distracted, sometimes conscripted, and often conflicted peers?

The whole journal-based, peer review process is arguably well adapted to a particular technology for producing and disseminating information.  Given the radical changes in information technology, it is at least worth considering whether this received mechanism is still optimal.  I, for one, have serious doubts.

* As a (relevant) aside, one of the most outrageous admissions to come from the Hadley CRU fiasco is that (a) original source data was allegedly destroyed some time ago, and (b) East Anglia University/Hadley have the audacity to claim that only “value added,” processed data was retained.

The arrogance of this claim is beyond belief.  We are supposed to accept that CRU’s methods maximized “value added” for all possible uses of the data?  That every one of the myriad choices that CRU made when processing, filtering, and adjusting the data was the right one for every possible use of the data, and beyond question, let alone reproach?  We should just take this on faith?

How can we test this remarkable assertion?

Oh, we can’t–because they destroyed what would be necessary to do so.

Just think of the hundreds of possible ways of transforming the raw data to deal with problems such as missing observations, or aggregating individual station data to characterize climate over wide areas.  Hadley made a set of choices, and due to their destruction of the original data, we have to live with that, perhaps forever.
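
A toy example, with made-up station records, shows how little it takes for two defensible processing choices to produce two different “regional” histories:

    # Made-up data: three stations share a warming trend; station 2 runs
    # warm and has a ten-year gap.  Two defensible ways of handling the
    # gap yield visibly different regional records.
    import numpy as np

    rng = np.random.default_rng(3)
    years = np.arange(50)
    offsets = np.array([[0.0], [2.0], [-1.0]])  # station-specific biases
    stations = 0.02 * years + offsets + rng.normal(0.0, 0.3, size=(3, 50))
    stations[1, 20:30] = np.nan  # station 2's gap

    def infill(data):
        """Choice B: fill each station's gaps by linear interpolation."""
        out = data.copy()
        for row in out:
            bad = np.isnan(row)
            row[bad] = np.interp(np.flatnonzero(bad),
                                 np.flatnonzero(~bad), row[~bad])
        return out

    regional_a = np.nanmean(stations, axis=0)  # choice A: skip missing values
    regional_b = infill(stations).mean(axis=0)
    gap = np.max(np.abs(regional_a - regional_b))
    print(f"largest disagreement between the two records: {gap:.2f} degrees")

During the gap, the two records disagree by the better part of a degree, purely as a result of which stations happen to report.  Multiply that by hundreds of such choices, and the case for preserving the raw data makes itself.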

Who the hell died and made them the last word?

With open source data and open source code, we would not have to live with the systemic risk inherent in relying on a single set of choices.  Maybe the choices were right–but if they are not, our ability to adapt is severely constrained.  And maybe they were right at a particular time, but we are now saddled with choices made in light of the techniques available when they were made; it is impossible to bring new techniques to bear on the old data.

Value added my foot.  Who the hell is Hadley to make that assertion?  Just a supercilious, self-serving effort at CYA.

As #1 SWP daughter said in a discussion of these issues: “PAY NO ATTENTION TO THAT LITTLE MAN BEHIND THE CURTAIN!”  That analogy is spot on.


7 Comments »

  1. SWP, it would be interesting to know how your own peer community has responded to the CRU situation, as well as to your thoughts written above. From what little I know of the academic world, it would seem that your ideas would be considered radical, with or without the GW tribal warfare inserted into the discussion (though your quote from Roger Kormendi was interesting).

    “Value-added data.” Yeah right, whatever.

    Lastly, I was reading a random comment submitted by a reader on a CRU data article that amused me considerably. He said:

    “Let me get this right. We have Leonardo’s and Tycho Brahe’s notebooks from the sixteenth century, Newton’s notebooks from the seventeenth, and we have Darwin’s notebooks from the nineteenth century, but CRU threw away all their climate data, already on magnetic tape, because it was ‘taking up too much room’?”

    I love the wisdom of the common man.

    Comment by Howard Roark — November 30, 2009 @ 4:10 am

  2. Craig,
    I agree with your criticisms of the status quo and the potential advantages of a more open source model. Another tradeoff to throw into the mix is the incentives to produce large original data sets. If data is too freely available, the optimal strategy for the opportunistic academic (but I repeat myself) is to let someone else put in the two years of hard labor collecting and entering the data and then swoop in with some canned econometric software and whip off a few quick papers. New technology has certainly lowered data collection and preparation costs, so maybe this, too, will be less of a problem. But the CRU case suggests not completely. If the CRU guys really did destroy their raw data (and that’s not yet clear; that may just have been their way of putting off requests for it), they assembled the data from various other sources (weather stations, etc.). In principle, it should be possible to recreate the raw data by collecting the same data from the same sources. The problem is that that would presumably be a long and costly process. (However, one that would be justified given what is at stake here.)

    None of this is meant to excuse the CRU for what is clearly very bad behavior. It is just another factor complicating a solution to the problem of creating and disseminating new knowledge.

    Comment by Scott — November 30, 2009 @ 7:38 am

  3. Scott–

    I agree/understand. I discussed that some in my post a couple of days back (the one originally written in China in ’06). I contemplated discussing that further in this post, but demurred to avoid making it even more involved than it already was. I appreciate your bringing it up.

    The key issue is how best to provide incentives to create this kind of data. Property rights are one way, as you suggest. Subsidy for the creation of the data is another. Creative Commons (something I don’t know a lot about) is yet another.

    Part of the issue I have with CRU is that they took the subsidy (gov’t grant support to collect data) and then exercised property rights. That seems the worst of both worlds.

    I would condition any taxpayer support for data collection on making the data open source.

    I’m going to revisit Posner & Landes’s book on IP to help me focus my thoughts on this matter. It is a trade-off between the incentives to create information in the first place, and the ability to use already created information in subsequent creative endeavors.

    Comment by The Professor — November 30, 2009 @ 7:57 am

  4. Craig,

    Sorry. I’d missed the earlier post. We’re on the same page: Data collection paid for by governments needs to be public, even if it were to mean paying more to collect it than currently. (Of course, even then, we have the issue of whether data sets assembled by faculty at public universities automatically become public…) Beyond that, it would be a surprise indeed if the enormous change in information technology didn’t change the optimal (i.e., least bad) governance arrangement. Recognizing the appropriate change in advance is the challenge.

    Comment by Scott — November 30, 2009 @ 12:22 pm

  5. […] Let (Peer Reviewed) Publishing Perish? The whole CRU disaster has got me thinking about whether, given modern information technology, peer review is an anachronism that impedes, rather than advances, scientific knowledge… […]

    Pingback by Global Warming, Global Cooling or Global Taxing? - Page 102 - PPRuNe Forums — November 30, 2009 @ 4:18 pm

  6. […] to the streetwiseprofessor.com page http://url4.eu/qtCO http://ow.ly/HoxB   2 tweet […]

    Pingback by Twitter Trackbacks for Streetwise Professor » Let (Peer Reviewed) Publishing Perish? [streetwiseprofessor.com] on Topsy.com — December 1, 2009 @ 5:31 am

  7. My take – http://www.sublimeoblivion.com/2009/12/01/deeper-meaning-climategate/

    Comment by Sublime Oblivion — December 2, 2009 @ 1:07 am
