Dr. Roy Spencer recently had an article on his blog that was picked up by other climate skeptic megaphones and mouthpieces as a new, telling slam on climate models. What he did was show an amalgamation of climate model projections (apparently run with a variety of initial conditions and different forcings) for the tropical mid-troposphere, and then compare this to current temperatures measured in the lower troposphere by his group and one other using satellite data. There's a divergence going on.
I was pretty sure that there were problems with this approach, but no refutation immediately appeared in the skeptic-debunking press. That is, until today. Turns out that what Spency did was play real fast and loose with what he showed of the models for comparison. So much so, in fact, that what he's showing has no information content --
-- thus rendering any comparisons to it USELESS.
Here's the most telling part of what was said, and then you can read the whole thing to find out why.
"This
is such a horrendous abuse of statistics that it is difficult to know
how to begin to address it. One simply wishes to bitch-slap whoever it
was that assembled the graph and ensure that they never work or publish
in the field of science or statistics ever again. One cannot generate an
ensemble of independent and identically distributed models that have
different code. One might, possibly, generate a single model that
generates an ensemble of predictions by using uniform deviates (random
numbers) to seed “noise” (representing uncertainty) in the inputs.
What I’m trying to say is that the variance and mean of the
“ensemble” of models is completely meaningless, statistically because the
inputs do not possess the most basic properties required for a
meaningful interpretation. They are not independent, their differences
are not based on a random distribution of errors, there is no reason
whatsoever to believe that the errors or differences are unbiased (given
that the only way humans can generate unbiased anything is through the
use of e.g. dice or other objectively random instruments)."
That's a pretty serious knockdown.
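To make the distinction concrete, here's a minimal sketch in Python, with a made-up toy "model" and made-up numbers (nothing here is Spencer's data or any actual climate code), of the kind of ensemble the quote says would be statistically meaningful: one model, run many times, with random deviates injected into an uncertain input. The mean and spread of that ensemble estimate something real, namely the spread of outcomes given the input uncertainty. Pooling the outputs of a grab-bag of structurally different models has no comparable interpretation, because those runs are neither independent nor drawn from any common distribution.

import numpy as np

rng = np.random.default_rng(0)

def toy_model(sensitivity, years=30):
    # A made-up stand-in for "a model": warming trend proportional to an
    # uncertain sensitivity parameter. Not any real climate model.
    return 0.02 * sensitivity * np.arange(years)

# One model, many runs, uncertainty fed in as random deviates on the input.
sensitivities = rng.normal(loc=1.0, scale=0.2, size=1000)
ensemble = np.array([toy_model(s) for s in sensitivities])

ens_mean = ensemble.mean(axis=0)    # this mean has a statistical meaning
ens_spread = ensemble.std(axis=0)   # so does this spread (input uncertainty)
print(f"year-30 mean: {ens_mean[-1]:.2f}, spread: {ens_spread[-1]:.2f}")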
What's ultimately interesting about this commentary is that he notes what real scientists would do: they would look at the models that came the closest to the real data, try to figure out why, and work out how to improve the models so that they gave even closer fits to the data. If you look beyond the distracting and meaningless (his words) black line, which isn't even an average as it purports to be (read the article), you start to see that there are a few model runs on the low end that aren't that far off the actual data. A scientist would ask, "So what are these models doing right?"
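As a rough sketch of what that first step might look like (purely hypothetical arrays standing in for the model runs and the satellite series; I'm not reproducing anyone's actual data), you'd score each run against the observations and see which ones land closest:

import numpy as np

rng = np.random.default_rng(1)
years = np.arange(30)

# Stand-ins: a noisy "observed" series and 20 model runs with varying trends.
observed = 0.01 * years + rng.normal(0.0, 0.05, size=years.size)
model_runs = np.vstack([0.02 * s * years for s in np.linspace(0.3, 1.5, 20)])

# Score each run by RMSE against the observations and rank them.
rmse = np.sqrt(((model_runs - observed) ** 2).mean(axis=1))
closest = np.argsort(rmse)[:3]
print("closest runs:", closest, "with RMSE:", np.round(rmse[closest], 3))

The interesting scientific question is then what those closest runs are doing differently, which is exactly the "what are these models doing right?" question above.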
Obviously the Roy-man is so blinded by trying to prove his point with this "horrendous abuse of statistics" (a phrase that certainly bore repeating) that he can't act like a real scientist anymore. Which is a shame, because he actually used to be one.