NOAA’s global and US temperature estimates have become highly controversial. The core issue is accuracy. These estimates are produced by very complex statistical models that are sensitive to a large number of factors, and the magnitude of each sensitivity is unknown. NOAA’s present practice of reporting these temperatures as precise values is clearly untenable, because it ignores these significant uncertainties.

NOAA therefore needs a focused research program to determine the accuracy range of these controversial temperature estimates. Below is a brief outline of the factors to be examined. The research goal is to systematically quantify the uncertainty that each factor contributes to the temperature estimates.

1. The urban heat island effect (UHI). This is known to exist but its specific effect on the temperature recording stations at any given time and place is uncertain.

2. Local heat contamination of temperature readings. Extensive investigation has shown that this is a widespread problem, but its overall extent and effect are highly uncertain.

3. The limited accuracy of individual thermometer readings. The accuracy of the average is limited by the accuracy of the individual readings that go into it, especially where the reading errors are systematic rather than random. It has been suggested that in some cases this inaccuracy is a full degree.

4. Other temperature recording station factors, to be identified and explored. Several have been discussed in the literature.

5. Adjustments to the temperature data, to be systematically identified and explored. Numerous adjustments are made to the raw readings; these need to be cataloged and then analyzed for the uncertainty they introduce.

6. Homogenization, which assumes that temperature change is uniform over large areas, is a particularly troubling adjustment deserving of special attention.

7. The use of sea surface temperature (SST) proxies in global temperature estimates. Proxies always add significant uncertainty, and in the global case the majority of the surface (roughly 70 percent) is ocean, so these proxies carry a great deal of the weight.

8. The use of an availability or convenience sample rather than a random sample. It is a canon of statistical sampling theory that convenience samples are unreliable. How much uncertainty this creates in the temperature estimates is a major issue.

9. Area averaging. This is the basic method used in the surface temperature estimation model, and it is a nonstandard statistical method that creates its own uncertainties. For example, different thermometers are in effect given very different weights, and the global average is an average of averages.

10. Interpolation, or in-filling. Many of the area-averaging grid cells have little or no good temperature data, so interpolation is used to fill them in. This can be done in many different ways, which creates another major uncertainty. A simple sketch illustrating items 8 through 10 follows this list.
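To make items 8 through 10 concrete, here is a minimal sketch in Python. The clustered station network, the coarse 30-degree grid, the cosine-latitude weighting and the crude in-fill rule are all assumptions made purely for illustration; none of them are NOAA's actual network, grid or procedure. The point is only that the sampling pattern, the weighting scheme and the in-fill choice each move the computed average.

    # Illustrative sketch only: synthetic station anomalies on a hypothetical
    # 30-degree grid, comparing a plain station mean, a cosine-latitude
    # area-weighted grid mean, and a crude in-fill rule for empty cells.
    import math
    import random

    random.seed(0)
    LAT_STEP = LON_STEP = 30  # coarse grid purely for illustration

    # A clustered "convenience" network: most stations in the northern
    # mid-latitudes, only a few in the southern hemisphere.
    stations = [(random.uniform(30, 60), random.uniform(-120, 30),
                 random.gauss(0.5, 0.3)) for _ in range(80)]
    stations += [(random.uniform(-60, 0), random.uniform(-180, 180),
                  random.gauss(0.2, 0.3)) for _ in range(10)]

    def cell_of(lat, lon):
        """Index of the 30-degree grid cell containing a station."""
        return (math.floor(lat / LAT_STEP) * LAT_STEP,
                math.floor(lon / LON_STEP) * LON_STEP)

    # Average the stations falling in each occupied grid cell.
    cells = {}
    for lat, lon, anom in stations:
        cells.setdefault(cell_of(lat, lon), []).append(anom)
    cell_means = {c: sum(v) / len(v) for c, v in cells.items()}

    def area_weighted_mean(values_by_cell):
        """Cosine-latitude weighted mean over whichever cells have values."""
        num = den = 0.0
        for (lat0, _), val in values_by_cell.items():
            w = math.cos(math.radians(lat0 + LAT_STEP / 2))  # cell-centre weight
            num += w * val
            den += w
        return num / den

    # One crude in-fill rule: give every empty cell the unweighted mean of the
    # occupied cells. Dropping empty cells entirely is another possible choice.
    filled = dict(cell_means)
    fill_value = sum(cell_means.values()) / len(cell_means)
    for lat0 in range(-90, 90, LAT_STEP):
        for lon0 in range(-180, 180, LON_STEP):
            filled.setdefault((lat0, lon0), fill_value)

    print("plain mean of all stations:           %.3f"
          % (sum(a for _, _, a in stations) / len(stations)))
    print("area-weighted, empty cells dropped:   %.3f"
          % area_weighted_mean(cell_means))
    print("area-weighted, empty cells in-filled: %.3f"
          % area_weighted_mean(filled))

Running the sketch prints three different "global" averages from the same synthetic readings, which is precisely the sampling, weighting and in-filling uncertainty at issue.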

Other factors are likely to be identified and explored as this research proceeds. To the extent that the uncertainty range contributed by each factor can be quantified, these ranges can then be combined and incorporated into the statistical temperature model. How to do this is itself a research need.

Note that this is not a matter of adjusting the estimate, which is the present practice; one cannot adjust away an uncertainty. The resulting temperature estimates will at best take the form of a likely range, not a specific value as is now reported. This range may be large. For example, if each of the ten uncertainty factors listed above were to contribute about 0.1 degrees, then the sum might be a whole degree or more.
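As a worked illustration of how much the combination rule itself matters, here is the arithmetic both ways, in the same Python style as the earlier sketch; the 0.1-degree figures are the hypothetical values from the paragraph above, not measured results.

    # Two common ways of combining ten uncertainty contributions of about
    # 0.1 degrees each. The values are hypothetical, taken from the text.
    contributions = [0.1] * 10

    linear_sum = sum(contributions)                              # 1.0 degrees
    root_sum_square = sum(c ** 2 for c in contributions) ** 0.5  # about 0.32 degrees

    print("linear sum:          %.2f degrees" % linear_sum)
    print("root-sum-of-squares: %.2f degrees" % root_sum_square)

The linear sum treats all ten contributions as pushing in the same direction, while the root-sum-of-squares treats them as independent. Which rule, if either, is appropriate for each factor is itself part of the research question.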

Note also that most of this research will be applicable to the other surface temperature estimation models, such as GISS, HadCRU and BEST, all of which use roughly the same data and methods, albeit with many differences in detail.