Warming
by Proxy
by
John L. Daly
Introduction
The key evidence to support claims of future global warming comes from computer models of the climate system, the accuracy of which has been frequently questioned, particularly in respect of the size of the warming they predict. The real Earth has so far responded neither in the way, nor to the degree, predicted by the models.
This leaves the greenhouse industry in something of a dilemma. There is a clear unwillingness on the part of society and governments to undertake massive structural changes to our economic life on the mere say-so of computer models of questionable accuracy. This has led the industry to seek other, more acceptable physical evidence that either -
(a) the warming is already under way, or
(b) present climate is already the warmest in x years, or
(c) warmings in the ancient past point to similar future warmings.
To this end, using the principle of "Research and ye shall find", the industry has seen an outpouring of generously funded research papers intended to demonstrate past and present warming on the basis of so-called `proxy' studies, and uses these results to offer physical evidence in support of the models.
What is a `Proxy' study?
Accurate measurement of temperature only began in the 19th century, patchily at first, becoming more global in the 20th century. These temperature measurements were intended for immediate meteorological purposes, such as forecasting tomorrow's weather, providing storm warnings etc., and were usually taken by non-scientists using instruments which were unsuitable for later climatological analysis. (Click here for a full account of how unreliable these records are).
We have no direct means of determining temperature prior to the 19th century, so some researchers have resorted to `proxy' means. Indirect indicators such as tree rings, oxygen isotopes in trapped ice, pollens, and plant species have all been used at various times to give an indication of past temperatures and climates.
Proxy studies by their very nature can only give a very rough indication of past climates, but the greenhouse industry has found them an inexhaustible source of data with which to stir up public alarm. Researchers in this field do not merely publish their results discreetly in scientific journals, but promptly rush to the public media to make a public circus of their `findings'.
Invariably such studies employ esoteric statistical techniques to confuse and confound any potential critics on the "blind them with numbers" principle. Several shoddy studies of recent years have been exposed on this website, but the greenhouse industry continues to peddle them regardless, each one keeping to the familiar theme of Armageddon tomorrow.
The primary purpose of such studies is to demonstrate that the Earth is now warmer than at any time in the recent or distant past, caused of course by man's emissions of CO2. In some cases, this aim even leads to the re-writing of climate history, such as the latest claims that the Medieval Warm Epoch never really happened.
How Critical Thinkers Can Detect Shoddy Science
One does not need to be a scientist to detect bad science, any more than one needs to be a qualified pilot to know that a plane is being flown badly. It is not even necessary to know anything about the esoteric statistics with which the researchers torture their data. It is a well-known feature of modern statistics that the result of any analysis can be pre-determined by the particular statistical technique chosen. Since there are hundreds of different techniques to choose from, each one processing the same data in a different way, it is now possible to `prove' anything you want to.
For example, by selective use of `record-breaking' data - the warmest temperature here, the coldest there, the wettest somewhere else - it is possible to claim either that the world is heating up, or that an ice age is on the way, simply by being selective in how one chooses and processes the data. Virtually all the proxy studies currently being produced exhibit selectivity about which data set is used, and in what manner it is used.
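To see just how easy this is, consider the following minimal sketch. The `temperature' series here is purely invented noise with no trend at all; the code illustrates nothing more than the arithmetic of window selection, not any published analysis.

```python
# A sketch using purely synthetic data: a century of 'temperatures' that are
# nothing but noise around a constant mean. By scanning sub-windows of the
# record, one can find both an impressive 'warming' and an impressive
# 'cooling' in the very same data.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1900, 2000)
temps = 14.0 + rng.normal(0.0, 0.3, size=years.size)   # no underlying trend at all

def trend_per_decade(start, end):
    """Least-squares slope over the window [start, end], in degrees per decade."""
    mask = (years >= start) & (years <= end)
    return np.polyfit(years[mask], temps[mask], 1)[0] * 10

# Consider every window at least 20 years long and report the extremes.
windows = [(int(a), int(b)) for a in years for b in years if b - a >= 20]
slopes = {w: trend_per_decade(*w) for w in windows}
warmest = max(slopes, key=slopes.get)
coolest = min(slopes, key=slopes.get)
print(f"'Warming' window {warmest}: {slopes[warmest]:+.2f} deg/decade")
print(f"'Cooling' window {coolest}: {slopes[coolest]:+.2f} deg/decade")
```

Both headline figures come from the same trendless series; only the choice of window differs.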
Any scientific paper (or one which purports to be scientific) can be divided into essentially three parts.
1) The Inputs - These comprise the raw data which the study uses, the criteria for selecting that data, and the underlying assumptions upon which the study is based.
2) The Processing - This stage involves the processing of the data into some kind of ordered pattern, possibly highly statistical or mathematical, a stage which many non-specialists may have difficulty in following.
3) The Outputs - These are the results of the processing, and the conclusions which are drawn.
For a non-specialist, the processing stage provides the greatest deterrent to critical thinking about the paper, since if the processing is not understood, how can any rational critique be made? In this situation, most will simply defer to the authority of the authors and accept the findings of the study.
But such resigned acceptance is not necessary. Many studies are clearly flawed at the inputs stage. Perhaps the criteria used in selecting the data are open to question. Perhaps the data itself is flawed. Perhaps the underlying assumptions are weak, invalid, or questionable. If the inputs are in any way flawed, no amount of mathematics or statistical processing can turn bad data into good, or make sound conclusions from bad input data. "Garbage in, garbage out", or GIGO, is not just an over-used cliche, but a very sound principle upon which all good science is based. Unfortunately, neglect of this principle has become a common practice in greenhouse science.
Some Recent Examples
Example 1
A paper by Tom Wigley and Benjamin Santer ("Anthropogenic Influence on the Autocorrelation Structure of Hemispheric-Mean Temperatures", Science, 282, 27 Nov 1998, pp. 1676-1679) made a statistical analysis of the surface temperatures of the northern (NH) and southern (SH) hemispheres, concluding that CO2 really was warming the atmosphere. Their paper implicitly assumed that the 115-year hemispheric temperature series they used in the study is actually suitable for this kind of statistical processing.
The US National Research Council (the research arm of the US National Academy of Sciences), recently said in respect of surface temperature records (from which the hemispheric series are derived), quote -
"Deficiencies in the accuracy, quality and continuity of the records ... place serious limitations on the confidence that can be placed in the research results."
What often may seem like `climate change' at weather stations may only be the result of instrument errors, instrument changes, and procedural changes. The NRC found the basic problem was that the major systems used to collect climate data were never really designed for that purpose.
While disputes do exist about the presence of heat island distortions in the hemispheric data (only the USA and western Europe part of the series can be considered reliable at present), it is nonetheless possible, in theory, to develop a generally acceptable temperature series for the NH, even back to the mid-19th century.
But not for the Southern Hemisphere!
Comprehensive station data from the SH only really dates from the 1950s, particularly beginning with the 1957 International Geophysical Year. Prior to that, data collection was fragmentary, with vast ocean areas not covered at all.
Oceans cover some 83% of the SH, and ice a further 5%. The only reasonable source of pre-1950s data for the SH comes from Australasia and South Africa, but even in these cases, the Australian Bureau of Meteorology has been quite emphatic that anything prior to 1910 is useless for historical comparison purposes. Indeed, many Australian stations show quite inconsistent and erratic records pre-1910, records which are virtually impossible to correct accurately.
The oceans have been poorly covered throughout the 1865-1994 period, especially the southeast Pacific, while many records from South America and Africa are erratic even up to the present day. In other words, SH surface data is essentially worthless from 1865 to 1910, highly suspect from 1910 to 1957, and probably flawed even since 1957.
Wigley and Santer even conceded this very point in an earlier paper (Journal of Geophysical Research, 27 Dec 1997), in which they acknowledged that their global surface temperature estimates (also used by the IPCC) were affected by a lack of surface coverage in some regions of the world, particularly in the middle to high latitude areas of the Southern Oceans. That admission throws further doubt upon the reliability of the surface record which they used in the later study.
Applying the GIGO principle again, the flawed inputs used by Wigley and Santer render their subsequent statistical analysis quite meaningless and hardly deserving of the "excellent and exciting" assessment by the paper's referees. In other words, even if a non-specialist has only limited understanding of how their statistical processing worked, it is a straightforward matter to see the flaws in the inputs and thus discard the conclusions of the paper as being equally flawed.
Example 2
This example is a media report by William Cook of US News, featuring a proxy study by Annette Menzel and Peter Fabian of the University of Munich.
"Evidence of global warming may be as close as the back yard. In the United States, robins are appearing in the North earlier than usual this year. And in Europe, flowers are blooming earlier and leaves are falling later, making the growing season just a little bit longer each year. Compared with four decades ago, spring in Europe is now arriving six days earlier, and fall is coming nearly five days later."
Can you see the inherent input flaw?
"Compared with four decades ago ..." That's it right there in that key assumption hidden discreetly away in an otherwise alarmist text. "Four decades ago" takes us back to the late 1950s, right in the middle of the post-war cooling. It is self-evident that any study, whether using proxy indicators or simple temperature measurements, would find that a warming had occurred between a known cool period and a known warm period. But why "four decades"? Surely the researchers could have gone back further? A longer time frame for comparison would surely make the conclusions more compelling.
If the study were taken back six decades, we would be comparing today with the pre-war warm period of the 1930s, giving a fairly uninteresting and neutral result. All the critical thinker has to do is find those assumptions and check them out. In this instance, the choice of four decades for comparison, rather than a longer period, profoundly affected the outcome of the study.
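The arithmetic involved is trivial, as the short sketch below shows. The decadal figures are invented purely to mimic the pattern just described - a warm 1930s, a cool 1950s and a warm 1990s - and are not real measurements.

```python
# Hypothetical decadal mean temperatures (degrees C), invented only to mimic
# the pattern described in the text: warm 1930s, cool 1950s, warm 1990s.
decade_mean = {1930: 14.4, 1940: 14.3, 1950: 14.1, 1960: 14.2,
               1970: 14.2, 1980: 14.3, 1990: 14.4}

today = decade_mean[1990]
for baseline in (1950, 1930):            # "four decades ago" vs "six decades ago"
    print(f"Warming since the {baseline}s: {today - decade_mean[baseline]:+.1f} C")
# Measured against the cool 1950s the warming looks substantial; measured
# against the warm 1930s it disappears entirely.
```

The same present-day value yields either a warming or no change at all, depending solely on the baseline chosen.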
Example 3
A common feature of some greenhouse papers is the comparison of selected points in a climate cycle. Since arbitrary dates in a climate cycle can occur at atypical points in the cycle (such as during minima or maxima in the series), this can lead to conclusions which are unsupportable when a longer period of analysis is chosen. An example of this was another paper by Benjamin Santer and Tom Wigley ("A Search For Human Influences On The Thermal Structure Of The Atmosphere", Nature, 382, 4 July 1996, pp. 39-46) which inspired the notorious "discernible human influence ..." phrase in the last IPCC report. In it, they presented upper atmosphere temperatures as measured by balloons, but instead of using the full range of dates available (1958-1995), they used 1963-1987 as the basis for their comparison. The effect was quite dramatic.
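The general mechanism is easy to reproduce with synthetic data. The sketch below is not the balloon record itself, merely an invented cyclic series with no long-term trend: a trend fitted from a trough to a crest of the cycle looks dramatic, while the full record shows essentially none.

```python
# An invented cyclic series with no long-term trend. Fitting a trend from a
# trough to a crest of the cycle produces a large apparent warming; fitting
# over the full record produces essentially nothing.
import numpy as np

years = np.arange(0, 100)
series = 0.3 * np.sin(2 * np.pi * years / 50)    # pure 50-year cycle, no trend

def slope_per_decade(start, end):
    mask = (years >= start) & (years <= end)
    return np.polyfit(years[mask], series[mask], 1)[0] * 10

print(f"Trough-to-crest window (years 37-62): {slope_per_decade(37, 62):+.2f} deg/decade")
print(f"Full record (years 0-99):             {slope_per_decade(0, 99):+.2f} deg/decade")
```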
Example 4
Another variation on this idea is to compare temperatures today with those of 600 years ago, the obvious conclusion being that today is much warmer.
Here is the abstract of one such example (Pollack, H.N., Huang, S. and Shen, P., "Climate change record in subsurface temperatures: a global perspective", Science, 282, pp. 279-281, 1998):
"Analyses of underground temperature measurements from 358 boreholes in eastern North America, central Europe, southern Africa, and Australia indicate that, in the 20th century, the average surface temperature of the Earth has increased by about 0.5 C and that the 20th century has been the warmest of the past five centuries. The subsurface temperatures also indicate that Earth's mean temperature has increased by about 1.0 C over the past five centuries. The geothermal data offer an independent confirmation of the unusual character of 20th century climate that has emerged from recent multiproxy studies."
Can you see the flaw?
"... over the past five centuries ..." That takes us back to the Little Ice Age!
Of course things are warmer today! The Little Ice Age was caused by a low point in solar activity. Since the proxies used were boreholes, it would have been a straightforward matter to extend the comparison to a full millennium. But this would have taken us back to the Medieval Warm Epoch, giving an overall cooling instead of warming.
Example 5
The Medieval Warm Epoch itself has long been a sore point with the greenhouse industry, as it suggests that 20th century warmth is not unique or even unusual. 800 years ago, during the Viking period, the climate was much warmer than today, allowing the Vikings to colonise Greenland and even America. This was a global phenomenon induced by purely natural climate processes.
Enter Jonathan Overpeck of NOAA, who now claims that the Medieval Warm Epoch never happened. According to him, it was all just a regional, not global, phenomenon - a very convenient idea for the industry. However, his closing remark raises questions as to his objectivity -
"20th century global warming is a reality and should be taken seriously."
Proxy evidence from the other side of the world in Tasmania shows that tree ring data from ancient Huon Pine trees give a very clear imprint of the Medieval Warm Epoch, half a world away from Greenland.
While tree ring analysis is a favourite proxy technique, it is surprising how easily the researchers involved ignore the CO2 `Fertilizer Effect', which enhances plant growth in the 20th century, an effect quite unrelated to the claimed CO2 climate effects. If tree rings are analysed without taking the Fertilizer Effect into account (as occurred in the case of the Tasmanian Huon Pines study), the wider tree rings of the 20th century caused by CO2 fertilization will convey an impression of greater warmth than really exists.
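A toy illustration of the bias follows. The numbers are wholly invented and the growth model is deliberately crude; it shows only the direction of the error, not its real size.

```python
# A deliberately crude toy model. Ring widths respond to temperature, but
# 20th century widths are also boosted by CO2 fertilization. Reading the raw
# widths as if they reflected temperature alone overstates recent warmth.
import numpy as np

years = np.arange(1800, 2000)
true_warming = np.where(years >= 1900, 0.2, 0.0)      # invented modest warming signal
fertilization = np.where(years >= 1900, 0.15, 0.0)    # invented CO2 growth boost

ring_width = 1.0 + true_warming + fertilization       # toy growth response

inferred_uncorrected = ring_width - 1.0               # widths read as temperature alone
inferred_corrected = ring_width - 1.0 - fertilization # fertilization removed first

print(f"Apparent 20th century warmth, uncorrected: {inferred_uncorrected[years >= 1900].mean():.2f} C")
print(f"Apparent 20th century warmth, corrected:   {inferred_corrected[years >= 1900].mean():.2f} C")
```

Left uncorrected, the fertilization boost is silently booked as extra warmth.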
The recent interest in denying and revising the climate history of the last millennium is clearly intended to portray this century as being not only the warmest of all, but also a warming without natural precedent.
Hints and Innuendo in the Conclusions
The conclusions of any proxy study provide a revealing insight into the real motives of the researchers. It is here that they press home whatever point they wished to demonstrate in the rest of the paper, just in case you did not get it first time.
Watch for words like `might', `could', `may', `possibly', a sure sign the researchers have not proved anything, but want to get you worried anyway. In other words, they are guessing.
Another favourite theme in the conclusions is that of correlations. It is not difficult to establish a pattern, or correlation, between two variables, especially when complex statistical programs are employed. However, when any two variables correlate, this does not automatically establish which variable causes which, or whether there is a third, unknown, causal variable.
For example, airborne CO2 and global temperature are well correlated over the last 160,000 years, based on ice core analysis. But which causes which? It is a crucial question for the greenhouse warming theory. Not surprisingly, papers produced by the greenhouse industry convey the misleading impression to the public that changes in CO2 `cause' the associated changes in temperature, thus enhancing the idea that CO2 is a potent driver of climate. But any objective analysis of the relationship indicates that the changes in CO2 lag the changes in temperature by several centuries! This makes it impossible for CO2 to be the cause. Rather, it is temperature which has been changing the CO2 level, a point confirmed in a recent paper in `Science' (12 March 1999).
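How such a lag can be detected is itself straightforward. The sketch below uses two synthetic series, not the ice core data: `co2' is constructed to follow `temp' by a fixed delay, and the lag shows up as the shift at which the correlation between them peaks.

```python
# Two synthetic series: 'co2' is simply 'temp' delayed by a fixed number of
# steps. Scanning candidate lags, the correlation peaks at that delay,
# revealing which series leads and which follows.
import numpy as np

rng = np.random.default_rng(1)
n, delay = 2000, 8                        # invented series length and lag
temp = np.cumsum(rng.normal(size=n))      # a smooth random-walk "temperature"
co2 = np.roll(temp, delay)                # "CO2" follows temperature by 'delay' steps
co2[:delay] = temp[0]                     # tidy up the wrapped-around start

def corr_at_lag(lag):
    """Correlation between temp now and co2 'lag' steps later."""
    if lag == 0:
        return np.corrcoef(temp, co2)[0, 1]
    return np.corrcoef(temp[:-lag], co2[lag:])[0, 1]

best = max(range(30), key=corr_at_lag)
print(f"Correlation is highest with CO2 taken {best} steps after temperature")
```

With real proxy data the same test is complicated by dating uncertainties, but the principle - check which way the lag runs before assigning cause and effect - is the same.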
Another favourite conclusion is the concept of `consistency'. Conclusions often end with words like these - "Our proxy results are not inconsistent with the growth of anthropogenic greenhouse gases, particularly CO2." Apart from the misleading use of the double negative (`not inconsistent with ...'), the fact that the claimed results may be `not inconsistent with' a dozen other variables as well is deliberately left unsaid. To single out greenhouse gases as the only `consistency' variable, to the exclusion of all others, is not only unscientific, but also tells us a lot about the agenda of the author(s) and possibly their funding source.
1998!
1998 was a very warm year globally, as confirmed by the satellites, being about +0.46°C warmer than the 1979-1998 average. The anomalous character of 1998 stands out clearly in the satellite record.
The reason for this warmth is clearly linked to the major 1997-98 El Nino, itself an oceanic phenomenon unrelated to global warming. Just why 1998 should be so much warmer than 1983, when a similarly severe El Nino occurred, becomes clear when it is realised that the 1982-83 event coincided with the El Chichon volcano, which prevented global temperature from soaring in the same way. A full discussion of this can be found here.
So, we know 1998 was warm. We also know why it was warm, and the factors which allowed global temperature to rise, unfettered by volcanic dampening. But this creates both an opportunity and a problem for the greenhouse industry.
The opportunity is that 1998 can be used in a host of proxy studies as a basis for comparison with some previous period, and even to highlight events in 1998 associated with the anomalous warmth of that year, such as increased glacier ice melt, sea ice shrinkage, early spring, and any other proxy which would be affected by such a warm year. Such studies are already being promoted widely in the media.
But 1998 also presents a serious problem for the future. Since it was such an extraordinarily warm year, unique among all the previous years, it is clearly a one-off event, induced by El Nino, and unlikely to be equalled for a very long time. The industry certainly revelled in the glory of announcing one broken warm record after another. Dean Collins of Australia's National Climate Centre even proclaimed in a public internet announcement on 5th Jan 1999,
"Not to be left out of the record annual mean temperature action, Australia last year recorded its warmest on record annual mean temperature."
Clearly, 1998 has seen a wild feeding frenzy for the industry.
But as the years go by and the sun becomes less active (as shown by the latest solar cycle being the weakest in many decades), the record-breaking euphoria of 1998 is unlikely to be repeated, and once the millennium turns it will become an increasingly remote memory. Record-breaking warm temperatures will become increasingly hard to come by, and instead we may see an increasing number of cold records being broken (as indeed has already happened all across the northern hemisphere this winter). The warmth of 1998 has already ended, as the satellites now show global temperature has returned to the long-term average.
Epilogue
When reading of new proxy studies, the critical thinker should take account of the factors described above and ask themselves the following questions -
1) Are the underlying assumptions sound?
2) Has data selectivity been based on objective criteria?
3) Is the input data suitable for the processing being applied to it?
4) Have the author(s) been overly selective about time scales used for comparisons?
5) Is excessive use made of peak warm years (eg. 1998) or peak cold years?
6) Are the author(s) making unsupportable conclusions?
Applying such questions to recent proxy studies would expose the lack of scientific rigour inherent in many of them and reveal a tendency to use data to support pre-determined conclusions.