In a couple of excellent posts, Shannon Love at Chicago Boyz notes that one of the most disturbing revelations resulting from the ripping open of the Hadley CRU’s kimono is the shockingly bad, ad hoc, sloppy, and (fill in your own pejoratives here) nature of the computer code underlying the quantitative work that is such an important prop for the entire climate change policy edifice. Love points out that the software itself is not peer reviewed, and that scientists are for the most part self-taught programmers who do not follow the strict protocols associated with commercial software development. For an endeavor like that undertaken at Hadley, incremental changes are made on the fly with little documentation, and soon the code resembles a rat’s nest, or an overgrown, weed-choked garden.
The code of the Hadley folks and their confreres (or should it be co-conspirators?) is mainly related to data preparation and analysis. Many of the tasks it performs are relatively pedestrian in concept; the difficulties arise from dealing with the messiness of the underlying data (and, arguably, the perceived necessity of fitting the data to the theory).
But it does raise questions in my mind about the other major prop of the climate change policy edifice: dynamic climate change models. These are huge and complex. I know from much personal experience on simpler but related problems in finance that the kinds of equations they are intended to solve are extremely touchy. Solution techniques can be very brittle. Errors can be subtle and hard to catch.
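To give a flavor of what “brittle” means here, consider a toy sketch (my own illustration, nothing to do with any actual climate code): even the simplest numerical scheme, forward Euler, applied to the simplest stiff equation, dy/dt = −50y, either tracks the true decaying solution or explodes catastrophically depending on nothing more than the step size. The scheme, the constants, and the step sizes below are all chosen purely for illustration.

```python
# Toy illustration (not climate code): numerical "brittleness".
# We integrate the stiff ODE dy/dt = -50*y, y(0) = 1, whose exact
# solution y(t) = exp(-50*t) decays smoothly toward zero.

import math

def euler(rate, y0, dt, steps):
    """Explicit (forward) Euler: y_{n+1} = y_n + dt * rate * y_n."""
    y = y0
    for _ in range(steps):
        y += dt * rate * y
    return y

rate = -50.0
exact = math.exp(rate * 1.0)  # y(1), a vanishingly small number

# dt = 0.05 violates the stability bound |1 + rate*dt| < 1
# (here 1 + (-50)(0.05) = -1.5), so each step *amplifies* the error.
bad = euler(rate, 1.0, dt=0.05, steps=20)

# dt = 0.001 sits inside the stability region and decays as it should.
good = euler(rate, 1.0, dt=0.001, steps=1000)

print(bad)   # huge: the scheme has blown up
print(good)  # tiny, like the exact solution
```

The unstable run does not fail with an error message; it just produces enormous, wrong numbers, and in a less transparent setting (a slightly too-large time step buried in a million-line model) the damage can be far subtler than an obvious blow-up. That is the sense in which errors can be hard to catch.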
It is my understanding that this code, like the Hadley programs, is written by scientists. So, my questions: what is the quality of the climate model code? Is it documented properly? Has it been tested? Has it been audited? By whom? What confidence can we have in its reliability? (Reliability in the relatively simple sense that it is bug-free, and properly performs the calculations implied by the underlying theories it is intended to implement. The reliability and completeness of the underlying theories–relying, as they do, on “fudge factor” parameterizations and incomplete characterizations of potentially first order phenomena like clouds–are other issues altogether.)