A needed NOAA temperature research program

May 27th, 2017 | Climate | 12 Comments

NOAA’s global and US temperature estimates have become highly controversial. The core issue is accuracy. These estimates are produced by very complex statistical models that are sensitive to a large number of factors, but the magnitude of the sensitivity to each factor is unknown. NOAA’s present practice of stating temperatures with precision is untenable, because it ignores these significant uncertainties.

Thus NOAA needs a focused research program to try to determine the accuracy range of these controversial temperature estimates. Below is a brief outline of the factors to be explored. The research goal is to systematically explore the uncertainty each factor contributes to the temperature estimates.

1. The urban heat island effect (UHI). This is known to exist but its specific effect on the temperature recording stations at any given time and place is uncertain.

2. Local heat contamination of temperature readings. Extensive investigation has shown that this is a widespread problem. Its overall extent and effect are highly uncertain.

3. The limited accuracy of individual thermometer readings. The average temperature cannot be more accurate than the individual readings that go into it. It has been suggested that in some cases this inaccuracy is an entire degree.

4. Other temperature recording station factors, to be identified and explored. Several have been discussed in the literature.

5. Adjustments to temperature data, to be systematically identified and explored. There are numerous adjustments made to the raw temperature data. These need to be cataloged, and then analyzed for uncertainty.

6. Homogenization, which assumes that temperature change is uniform over large areas, is a particularly troubling adjustment deserving of special attention.

7. The use of sea surface temperature (SST) proxies in global temperature estimates. Proxies always add significant uncertainty. In the global case the majority of the surface is oceanic.

8. The use of an availability or convenience sample rather than a random sample. It is a canon of statistical sampling theory that convenience samples are unreliable. How much uncertainty this creates in the temperature estimates is a major issue.

9. Area averaging. This is the basic method used in the surface temperature estimating model and it is a nonstandard statistical method, which creates its own uncertainties. For example, different thermometers are in effect given very different weights. Plus the global average is an average of averages.

10. Interpolation or in-filling. Many of the area averaging grid cells do not have good temperature data, so interpolation is used to fill them in. This can be done in many different ways, which creates another major uncertainty.
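To make items 9 and 10 concrete, here is a minimal sketch of how a gridded area average with in-filling might work. The grid size, the station anomalies, and the inverse-distance weighting scheme are illustrative assumptions for this post, not NOAA’s actual procedure; the point is only to show where the weighting and in-filling choices enter.

```python
import math

# Hypothetical 4-by-8 lat/lon grid; real models use finer cells.
NLAT, NLON = 4, 8

# Illustrative station anomalies, keyed by (lat_index, lon_index).
# Most cells are empty, as is common over the oceans.
stations = {
    (0, 1): [0.2, 0.3],   # two stations sharing one cell
    (1, 4): [0.5],
    (2, 2): [-0.1, 0.0],
    (3, 6): [0.4],
}

# Step 1: average the stations within each cell (an average of averages).
cell = {k: sum(v) / len(v) for k, v in stations.items()}

# Step 2: in-fill each empty cell by inverse-distance weighting
# from the cells that do have data. Other weighting choices would
# give different in-filled values -- that is the uncertainty.
def infill(i, j):
    num = den = 0.0
    for (fi, fj), t in cell.items():
        d = math.hypot(i - fi, j - fj)  # d > 0: (i, j) has no data
        w = 1.0 / d
        num += w * t
        den += w
    return num / den

grid = [[cell[(i, j)] if (i, j) in cell else infill(i, j)
         for j in range(NLON)] for i in range(NLAT)]

# Step 3: area-weight by cos(latitude), since grid cells shrink
# toward the poles -- thermometers get very different weights.
def cell_lat(i):
    return -90 + (i + 0.5) * (180 / NLAT)

num = den = 0.0
for i in range(NLAT):
    w = math.cos(math.radians(cell_lat(i)))
    for j in range(NLON):
        num += w * grid[i][j]
        den += w

global_mean = num / den
print(round(global_mean, 3))
```

Note that with only four observed cells, most of the numbers entering the final average are interpolated, and the result shifts if the weighting rule or grid resolution changes.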

Other factors are likely to be identified and explored as this research proceeds. To the extent that the uncertainty range contributed by each factor can be quantified, these ranges can then be combined and added into the statistical temperature model. How to do this is itself a research need.

Note that it is not a matter of adjusting the estimate, which is what is presently done. One cannot adjust away an uncertainty. The resulting temperature estimates will at best be in the form of a likely range, not a specific value as is now done. This range may be large. For example, if each of the ten uncertainty factors listed above were to be about 0.1 degrees, then the sum might be a whole degree or more.
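The arithmetic in that example can be checked directly. Below is a minimal sketch using the hypothetical figure of 0.1 degrees per factor; note that whether the ranges add linearly (the worst case, when errors push the same way) or in quadrature (independent random errors) is itself part of the research question.

```python
import math

# Hypothetical: ten uncertainty factors of roughly 0.1 degrees each,
# matching the example in the text.
factors = [0.1] * 10

# Worst case: the errors all push in the same direction,
# so the ranges add linearly (about 1.0 degree).
linear_sum = sum(factors)

# If the errors are independent and random, they add in
# quadrature, giving a smaller combined range (about 0.32 degrees).
quadrature = math.sqrt(sum(f * f for f in factors))

print(round(linear_sum, 2), round(quadrature, 2))
```

Either way, the combined range is far larger than the hundredth-of-a-degree precision with which the estimates are now reported.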

Note also that most of this research will be applicable to the other surface temperature estimation models, such as GISS, HadCRUT and BEST. All of these models use roughly the same data and methods, though with many differences in detail.


  1. Li D May 27, 2017 at 8:12 AM

    Would you like to put up Hansen et al 1981 versus observations as an example to illustrate your point?

    • David Wojick May 31, 2017 at 10:51 AM

      Can you clarify this, Li D? What observations and what point? To my knowledge mine is the first attempt at a comprehensive catalog of uncertainties in the global surface temperature statistical models. As such there are many points.

      The overall point is that we need a research program to examine all these uncertainties, quantifying them if possible. Only then will we know how to handle this shaky data. I do not see how Hansen et al 1981 is helpful in this regard.

      As for observations, the best we have are the satellite estimates. These too require a certain amount of statistical analysis, but they are far superior to the surface statistical models. They actually sample most of the atmosphere.

      So what specifically do you mean?

  2. Immortal600 May 27, 2017 at 9:11 AM

    Another good article, Dr. Wojick. Ignore the troll below who follows you, article after article.

    • David Wojick May 31, 2017 at 11:11 AM

      I try to work with commenters whenever possible, to clarify the debate.

      • Immortal600 May 31, 2017 at 11:29 AM

        I commend you for attempting dialogue with the AGW believers. They can’t be swayed away from the failed theory.

        • David Wojick May 31, 2017 at 11:38 AM

          This may be true but it is necessary to flesh out the arguments in order to see it.

  3. Biologyteacher100 May 30, 2017 at 2:29 PM

    I was impressed by the Berkeley Earth Project. Professor Muller took money from the Koch Brothers to reanalyze billions of temperature measurements. His team found that the NASA and NOAA data are spot on.

    • David Wojick May 31, 2017 at 11:08 AM

      The Berkeley Earth’s BEST statistical model uses the same basic approach as NASA GIStemp, NOAA and HadCRUT, so it is no surprise that the results basically agree. All suffer from the fundamental weaknesses I have listed as research topics.

      Ironically, BEST may actually be the worst, for several reasons. First, they use a lot of short-term records that the others have rejected. Second, they use a great deal more interpolation. The other models use area averaging; if they do any interpolation, it is to fill in empty grid-cell temperatures. But BEST constructs a continuous temperature field covering the Earth. Thus they interpolate a made-up temperature for every place on Earth, then average all these made-up numbers.

      Note too that the other models disagree among themselves, so they cannot be spot on, whatever that means. This is especially true since Karl et al adjusted the NOAA model sea surface temperatures to make the hiatus disappear, which the other models have declined to do. But the uncertainties that I am listing are vastly greater than the differences between the various surface temperature models, which are all basically the same.

  4. Francisco Machado May 31, 2017 at 12:59 PM

    Temperature variations are presented in very small increments – small enough, I would think, to show parallels with economic activity. Does an increase in cash flow increase urban heat island average temperature as use of air conditioning increases, winter thermostats are set higher, more people commute, goods are manufactured and moved, people utilize or can afford more energy use? The Progressive/AGW endeavor to keep the economy from improving may be an ideologically motivated move to rescue the planet from the effects of the Free Market.

    • David Wojick May 31, 2017 at 1:23 PM

      Indeed, some time ago Ross McKitrick did a lot of work showing the close correlation between economic growth and growth in the surface temperature estimates. Given that the satellites do not show this warming in the atmosphere, it is most likely due to heat contamination. But in addition to UHI there is local heat contamination in less urban areas, as Anthony Watts has demonstrated.

      As for rescuing the planet from the effects of the Free Market, that is frequently stated by some alarmist groups.

      • Francisco Machado May 31, 2017 at 2:17 PM

        I’m amused by the vehement denial by the AGW cabal that thirty years ago any serious climate scientist thought we were threatened by a pending ice age or – Horrors! – even contemplated spreading coal ash on the Arctic to reduce heat reflection. The transition to warming gave them no trouble. And some theorists now posit a near term cooling of some considerable duration.
        A century ago, most heat came from burning coal. Mass transit was by coal-fired trains or ships (even the advanced Titanic burned coal), and you were very likely to be carried to the point of embarkation by a horse-drawn conveyance. Given that the rate of technological change is accelerating, it might be more beneficial to use the abundant energy supplies we have (an estimated six-hundred-year reserve) and apply our economic energies to developing advanced fusion energy sources that don’t have the inherent problems of current “renewable energy” and don’t kill birds, which, if done by any conventional energy source, would drive the greenies ballistic. A century in the future, who knows, we may have home generators like we now have furnaces, and flashlights without batteries that you don’t have to slam against something to get them lit.

        • David Wojick May 31, 2017 at 2:42 PM

          True, but we seem to be wandering away from my temperature research topic.
