You might remember that much of the hysteria surrounding the IPCC's assessment of global warming was based on Mann et al.'s "hockey stick" graph. This seemed to show reasonably steady temperatures followed by very rapid warming in recent years (while mysteriously erasing the Mediaeval Warm Period).
The IPCC's reports, most notably the 2001 Third Assessment Report, leaned heavily on this graph, and much of the IPCC's credibility was staked on it. Which was a pity. For the IPCC, I mean.
Because two interested men, Steve McIntyre and Ross McKitrick, demonstrated that the hockey-stick graph was utterly unreliable. And a large part of that unreliability infects almost all AGW predictions, because every one of these models requires substantial amounts of historical temperature data.
The trouble is that we simply don't have enough data: we have only been measuring and recording temperatures since the late 1800s (and even then, mostly in the USA) and, obviously, accuracy and method have improved since that time. Unfortunately, urban sprawl has spread over the same period, contaminating many of those measurement sites, as surfacestations.org has been demonstrating.
As such, we are forced to rely on proxies: these include ice cores (most notably the Vostok cores) and, in the case of the hockey-stick graph, tree rings above all. And the trouble is that tree rings are, as Climate Skeptic explains, more than a little unreliable.
One of the issues scientists are facing with tree ring analyses is called "divergence". Basically, when tree rings are measured, they have "data" in the form of rings and ring widths going back as much as 1,000 years (if you pick the right tree!). This data must be scaled -- a ring width variation of 0.02mm must be scaled in some way so that it translates to a temperature variation. What scientists do is take the last few decades of tree rings, for which we have simultaneous surface temperature recordings, and scale the two data sets against each other. Then they can use this scale when going backwards to convert ring widths to temperatures.
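To make that scaling step concrete, here is a minimal Python sketch of the idea. Every number in it is invented purely for illustration, and real reconstructions are considerably more elaborate, but the core move is just a least-squares fit over the overlap period, applied backwards in time:

```python
# A toy illustration of proxy calibration (all data invented for the example).
import numpy as np

rng = np.random.default_rng(0)

# Pretend we have 1,000 years of ring widths (mm), but only the last
# 50 years of instrumental temperature measurements (deg C).
ring_widths = 1.0 + 0.02 * rng.standard_normal(1000)
measured_temp = 14.0 + 0.5 * rng.standard_normal(50)

# Calibrate: least-squares fit of temperature against ring width
# over the 50-year overlap period.
overlap_widths = ring_widths[-50:]
slope, intercept = np.polyfit(overlap_widths, measured_temp, 1)

# Apply the same scaling backwards to "reconstruct" the other 950 years.
reconstructed_temp = slope * ring_widths[:-50] + intercept
print(f"scale: {slope:.2f} deg C per mm of ring width")
```

The divergence problem described next is, in these terms, the discovery that a slope and intercept fitted on one overlap period fail badly when checked against a later one.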
But a funny thing happened on the way to the Nobel Prize ceremony. It turns out that if you go back to the same trees 10 years later and gather updated samples, the ring widths, based on the scaling factors derived previously, do not match well with what we know current temperatures to be.
The initial reaction from Mann and his peers was to try to save their analysis by arguing that there was some other modern anthropogenic effect that was throwing off the scaling for current temperatures (though no one could name what such an effect might be). Upon further reflection, though, scientists are starting to wonder whether tree rings have much predictive power at all. Even Keith Briffa, the man brought into the fourth IPCC assessment to try to save the hockey stick after Mann was discredited, has recently expressed concerns:

"There exists very large potential for over-calibration in multiple regressions and in spatial reconstructions, due to numerous chronology predictors (lag variables or networks of chronologies – even when using PC regression techniques). Frequently, the much vaunted ‘verification’ of tree-ring regression equations is of limited rigour, and tells us virtually nothing about the validity of long-timescale climate estimates or those that represent extrapolations beyond the range of calibrated variability. Using smoothed data from multiple source regions, it is all too easy to calibrate large scale (NH) temperature trends, perhaps by chance alone."
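Briffa's "by chance alone" worry is easy to demonstrate numerically. The following toy Python script (pure random noise, no real data, and no resemblance to anyone's actual reconstruction method) regresses a "temperature" record against a few dozen random "chronologies"; the calibration-period fit looks superb even though there is, by construction, no signal at all:

```python
# Toy demonstration of over-calibration: with enough unrelated predictor
# series, ordinary least squares will "calibrate" to a temperature record
# by chance alone. Everything here is random noise.
import numpy as np

rng = np.random.default_rng(1)
n_years, n_proxies = 50, 40

temps = rng.standard_normal(n_years)                  # target "temperature"
proxies = rng.standard_normal((n_years, n_proxies))   # pure-noise "chronologies"

# Fit the temperatures as a linear combination of the noise proxies.
X = np.column_stack([proxies, np.ones(n_years)])
coefs, *_ = np.linalg.lstsq(X, temps, rcond=None)
fitted = X @ coefs

# The in-sample correlation looks impressive despite zero real signal.
r = np.corrcoef(fitted, temps)[0, 1]
print(f"calibration-period correlation: {r:.2f}")     # typically above 0.9
```

With forty free coefficients and only fifty data points, a near-perfect fit is almost guaranteed; genuine skill only shows up, if at all, in a held-back verification period, which is exactly where these series fall over.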
Every single climate projection model is based on proxies: sure, most will use a mix of different proxies, but many of the original proxy studies used other proxies to attempt to verify their data.
As such, each scientist builds on the inaccuracies of the previous ones, and the proxy temperature graphs require more... er... "adjustment". And when people then build climate models on top of such data, we are really in trouble. As we have seen.
But this is what really got me the other day. Steve McIntyre (who else?) has a post that analyses each of the tree-ring series in the latest Mann hockey stick. Each series has a calibration period, in which the scaling is set, and a verification period, an additional span for which we have measured temperature data that can be used to check the scaling (a toy version of this check is sketched after the list below). A couple of points were obvious as he stepped through each series:
- Each series individually has terrible predictive ability. Each can be scaled, but each contains so much noise that in many cases standard t-tests can't even be run, and when they can, the confidence intervals are huge. For example, the series NOAMER PC1 (the series McIntyre showed years ago dominates the hockey stick) predicts that the mean temperature in the verification period should be somewhere between -1C and -16C. For a mean temperature, that is an unbelievably wide range. To give a sense of scale, it is a 27F spread, roughly equivalent to the difference in average annual temperature between Phoenix and Minneapolis. A temperature forecast with error bars that could encompass both Phoenix and Minneapolis is not very useful.
- Even with those huge confidence intervals, the series above does not verify! (Its verification value is -0.19.) In fact, only one of the numerous data series individually verifies, and even that one had to be manually fudged to make it work.
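For the curious, here is a toy Python version of that calibration/verification check. The data is invented, and McIntyre's actual verification statistics (RE, CE, verification r2) are more involved than the simple correlation used here, but the shape of the test is the same: fit the scaling on one period, then see whether it predicts anything in the period held back:

```python
# A sketch of a calibration/verification split (invented data throughout;
# the real verification statistics used in this debate are more involved).
import numpy as np

rng = np.random.default_rng(2)
years = 100
temps = 14.0 + 0.5 * rng.standard_normal(years)   # measured temperatures
proxy = 1.0 + 0.02 * rng.standard_normal(years)   # a noisy ring-width series

# Calibrate on the first 50 years of overlap...
slope, intercept = np.polyfit(proxy[:50], temps[:50], 1)

# ...then test the fitted scaling on the held-back verification period.
predicted = slope * proxy[50:] + intercept
r = np.corrcoef(predicted, temps[50:])[0, 1]
print(f"verification correlation: {r:.2f}")       # near zero for pure noise
```

A series that calibrates beautifully but fails this second test is telling you that the calibration was noise-fitting, which is precisely McIntyre's point.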
And this is why those who put their faith in catastrophic anthropogenic climate change are making a severe mistake: many of them have absolute faith in the IPCC and take its word as Gospel.
But, if the IPCC's conclusions and models are built on false (and demonstrably falsified) data, then those models and conclusions are simply not reliable. Apart from anything else, if even the IPCC thought that it was absolutely correct, it wouldn't have to keep updating its conclusions, would it?