The Discovery of Global Warming (April 2024)

Chaos in the Atmosphere

Before they could understand how climates change, scientists would have to understand the basic principles for how any complicated system can change. Early studies, using highly simplified models, could see nothing but simple and predictable behavior, either stable or cyclical. But in the 1950s, work with slightly more complex physical and computer models turned up hints that even quite simple systems could lurch in unexpected ways. During the 1960s, computer experts working on weather prediction realized that such surprises were common in systems with realistic feedbacks. The climate system in particular might wobble all on its own without any external push, in a "chaotic" fashion that by its very nature was unforeseeable. By the mid 1970s, many experts found it plausible that at some indeterminate point a small push could trigger severe climate change. While the largest effects could be predicted, important details might lie forever beyond calculation. In the following decades a consensus developed that the climate system was unlikely to jump into an altogether different state. The most likely future was one of gradual change, with low odds for an abrupt catastrophe—yet the odds were not zero, and critical details remained beyond calculation.

Few natural phenomena change so radically and unpredictably as the daily weather. Meteorologists had long understood how the atmosphere in a given locality could be capricious and unstable from hour to hour. As one authority explained in 1957, tiny disturbances in the air, far below the limits of observation, could grow into large weather systems within a few days. Nobody could predict these unstable processes, so "there is an effective time barrier beyond which the detailed prediction of a weather system may well remain impossible." Beyond that limit, which might be only a few days, one could only look to statistics, the probability of rain or frost in a given month.(1)
Climate was expected to be steadier. Climate was the statistics, defined as a long-term average. People assumed that daily fluctuations would cancel one another out over the long run, for the atmospheric system was supposed to be self-stabilizing. True, it was undeniable that even a large system could be unstable. Back around the end of the 19th century, the great French mathematician Henri Poincaré had noted that even the orbit of a planet could depend on some tiny fluctuation, as difficult to predict as whether a ball rolling down a knife-edge would fall to left or right. In the 1920s, quantum physicists showed that a lack of certainty was altogether fundamental. This was Werner Heisenberg's Uncertainty Principle, made vivid by Erwin Schrödinger's fable of a cat that might be alive or dead depending on the strictly random decay of a single atom. These ideas worked their way only gradually into common awareness. For decades, scientists who studied complex systems mostly just ignored the ideas. Few questioned that the automatic self-correction of the great natural systems would always keep planets in their accustomed orbits, and that over future decades the rains would fall pretty much as they had in the past.

Of course, anyone who lived through the harrowing Dust Bowl drought of the 1930s, or heard grandfathers talk about the freezing winters of the 1890s, knew that climate could be seriously different from one decade to the next. Few wanted to explain this as mere random drifting. Surely nature was not altogether capricious? Every change must have its specific explanation. Perhaps, for example, rainfall decreased when soils were dried up by overgrazing, and perhaps cold spells followed an increase in smoke from volcanic eruptions. Even more popular than this idea of particular causes for particular deviations was an assumption that features of nature follow periodic patterns, diverging only to return. Things from tides to rabbit populations go through regular cycles, and it was easy to suppose that climate too was cyclical. The idea fascinated many professional and amateur meteorologists. If you could detect a regular cycle in climate, you could develop a scientific explanation for climate change, and use it to calculate predictions of economic value — and perhaps make a killing on the wheat futures exchange!  
From the 19th century forward, then, people who liked to play with data labored to extract climate cycles from weather statistics. One or another worker discovered a plausibly regular rise and decline of temperature or of rainfall over months or decades in this region or that. Given enough different bodies of data, people could also turn up correlations between a weather cycle and some other natural ebb and flow, notably the eleven-year cycle of sunspots. A 1941 U.S. Weather Bureau publication noted that some 50 climate cycles had been reported, ranging from days to centuries (not to mention the ice ages, which seemed to come and go regularly over hundreds of thousands of years). "Each man who has proposed one or more of these cycles," the Bureau remarked, "has become convinced that he has found a particular rhythm."(2)
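How easily a "cycle" can emerge from pure chance is simple to demonstrate with a modern computation. The sketch below (a minimal Python illustration with made-up data, not a reconstruction of any historical analysis; the record length and random seed are arbitrary choices) generates a random series with no true periodicity, computes its periodogram, and reports the strongest apparent rhythm.

# Minimal sketch: a "cycle" always stands out, even in purely random data.
# Record length and seed are arbitrary; this illustrates the pitfall, nothing more.
import numpy as np

rng = np.random.default_rng(0)
years = 120                                # a century-scale "temperature record"
series = rng.normal(size=years)            # pure noise: no real cycle by construction
series -= series.mean()

power = np.abs(np.fft.rfft(series))**2     # periodogram: power at each frequency
freqs = np.fft.rfftfreq(years, d=1.0)      # frequencies in cycles per year

best = np.argmax(power[1:]) + 1            # skip the zero-frequency (mean) term
print(f"strongest apparent 'cycle': about {1.0 / freqs[best]:.1f} years")

Which rhythm stands out depends entirely on the random draw; rerun with a different seed and a different "cycle" appears, which is essentially the point the Weather Bureau was making.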

Many meteorologists repudiated the whole enterprise, seeing nothing but random fluctuations from the norm. There remained a good number who believed that cycles were probably there, just at the edge of what the data could prove. An indicator of middle-of-the-road opinion was Helmut Landsberg's authoritative climatology textbook of 1941. Among other cases, Landsberg described how "widespread attention" had focused upon a cycle of around 33 years in the level of lakes (which gave a good measure of average precipitation). Detected in the 1890s, the cycle had been used to predict how much rain would fall in the late 1920s — but the prediction had failed ignominiously. Nevertheless, Landsberg thought there was a real effect at work, perhaps an irregular rhythm that varied between 30 and 40 years long. "Scientific skepticism is well warranted in the research on climate cycles," he admitted. "Nevertheless some of them seem to have much more than chance characteristics..."(3) Meanwhile the stock of weather observations increased rapidly and calculation techniques improved, so that it became increasingly possible to offer solid proof of whether or not a given cycle was valid. The answers were usually negative.  
By the middle of the 20th century, opinion among meteorologists was divided about the same way as at the start of the century. Some expected that a few cycles would eventually be pinned down, while others believed that no cycles existed — the variations of climate were purely random. Progress was stymied unless clues could be found in some new approach.

A big clue came in the 1950s, when a few scientists decided to build actual physical models of climate. Perhaps if they studied how a fluid behaved in a rotating pan, they would learn something general about the behavior of fluid systems like the rotating planet's atmosphere. These "dishpan" studies turned out to be surprisingly effective in modeling features of the atmosphere like weather fronts. What was most thought-provoking was the way the circulation of a fluid in the laboratory could show different patterns even when the external conditions remained the same. Stir the fluid in the rotating dishpan with a pencil, and you couldn't predict which of two or three possible states the circulation would settle into. The choice of pattern depended in some arbitrary, unpredictable way on the system's past history.(4)  
Of course, random behavior could be no surprise to anyone who watched the tumbling of fluids. When water flows through a channel, if the speed gradually increases, at some point the smooth, steady flow gives way to a turbulent flow with vortexes swirling unpredictably. In 1921, Vilhelm Bjerknes had suggested that a similar instability might be at the root of major daily disturbances of the atmosphere. Beyond some critical point, the symmetric flow of wind would become unstable and spin off storms.(5) In 1956, Edward Lorenz proposed an explanation along those lines for the dishpan experiments. As the dishpan experiments were refined, however, they seemed to point to something much more unfamiliar.(6)  
A second essential clue came from another new field, digital computation. As scientists applied computers to a variety of tasks, oddities kept popping up. An important example came in 1953 when a group at Los Alamos, led by the great physicist Enrico Fermi, used the pioneer computer MANIAC to study how a complex mechanical system behaved. They wrote equations that described a large number of "nonlinear" oscillators (the mathematical equivalent of springs with flaws that kept them from stretching smoothly), all coupled to one another. Physical intuition insisted that the distribution of energy among the oscillators in such a system should eventually settle down into a steady state, as a shaken glass of water will gradually come to rest. That was indeed what Fermi's group saw, after the computer had ground away at the numbers for a while. Then one day, by accident, they left the computer running long after the steady state had been reached. Fermi's group was amazed to find that the system had only lingered for a while in its steady state. Then it reassembled itself back into something resembling the initial distribution of energy. Like the flow in the rotating dishpan, the system had at least two states that it could flip between for no obvious reason. Further computer experiments showed that the system shifted unpredictably among several "quasi-states."(7) In retrospect, this was the first true computer experiment, with an outcome that foreshadowed much that came later. The lesson, scarcely recognized at the time, was that complex systems did not necessarily settle down into a calm stable state, but could organize themselves in surprising large-scale ways.  
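The kind of numerical experiment Fermi's group ran can be re-created in a few dozen lines today. The sketch below is an illustrative reconstruction in Python, not the original MANIAC code; the time step and run length are arbitrary choices. It couples a chain of oscillators through a weakly nonlinear force, starts all the energy in the lowest vibrational mode, and prints how the energy is shared among the first few modes. Run long enough, the energy drifts back toward the starting mode instead of spreading evenly, the surprise the group reported.

# Illustrative re-creation of a Fermi-Pasta-Ulam-style experiment (not the original code).
# A chain of oscillators with a quadratic "flaw" in the springs; energy starts in mode 1.
import numpy as np

N, alpha, dt = 32, 0.25, 0.05              # oscillators, nonlinearity; dt is an arbitrary choice
x = np.sin(np.pi * np.arange(1, N + 1) / (N + 1))   # all energy starts in the lowest mode
v = np.zeros(N)

def accel(x):
    # force on each oscillator from its neighbors, with the weak nonlinear term
    xp = np.concatenate(([0.0], x, [0.0]))           # fixed ends
    lin = xp[2:] - 2.0 * xp[1:-1] + xp[:-2]
    nonlin = alpha * ((xp[2:] - xp[1:-1]) ** 2 - (xp[1:-1] - xp[:-2]) ** 2)
    return lin + nonlin

def mode_energies(x, v, kmax=4):
    # energy held in the first few normal modes of the linearized chain
    k = np.arange(1, kmax + 1)[:, None]
    j = np.arange(1, N + 1)[None, :]
    S = np.sqrt(2.0 / (N + 1)) * np.sin(np.pi * k * j / (N + 1))
    A, Adot = S @ x, S @ v
    omega = 2.0 * np.sin(np.pi * np.arange(1, kmax + 1) / (2 * (N + 1)))
    return 0.5 * (Adot ** 2 + (omega * A) ** 2)

a = accel(x)
for n in range(250_000):                   # a long run; takes on the order of a minute
    v += 0.5 * dt * a                      # velocity-Verlet (kick-drift-kick) integration
    x += dt * v
    a = accel(x)
    v += 0.5 * dt * a
    if n % 25_000 == 0:
        print(f"t = {n * dt:8.1f}   mode energies 1-4: {np.round(mode_energies(x, v), 4)}")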
Fermi's group described these wholly unexpected results at a few meetings during the 1950s, stirring curiosity among physicists and mathematicians. Meanwhile there were hints that such behavior was not confined to abstract mathematical systems. For example, a pair of scientists wrote a simple system of equations for the exchanges of carbon dioxide gas among the Earth's atmosphere, oceans, and biosphere, and ran the equations through a computer. The computations tended to run away into self-sustaining oscillations. In the real world that would mean climate instability — or even fluctuations with no regularities at all.(8) Nothing specific came of these and other peculiar results. It is not uncommon for scientists to turn up mildly anomalous calculations. They stick them away in the back of their minds until someone can explain what, if anything, it all has to do with the real world.  
The more people worked with computers, the more examples they found of oddly unstable results. Start two computations with exactly the same initial conditions, and they must always come to precisely the same conclusion. But make the slightest change in the fifth decimal place of some initial number, and as the machine cycled through thousands of arithmetic operations the difference might grow and grow, in the end giving a seriously different result. Of course people had long understood that a pencil balanced on its point could fall left or right depending on the tiniest difference in initial conditions, to say nothing of the quantum uncertainties. Scientists had always supposed that this kind of situation only arose under radically simplified circumstances, far from the stable balance of real-world systems like global climate. It was not until the 1950s, when people got digital machines that could do many series of huge computations, that a few began to wonder whether their surprising sensitivity pointed to some fundamental difficulty.

At first the problem had seemed simply a matter of starting off with the right equations and numbers. That caught attention as early as 1922, when Lewis Fry Richardson published the results of a heroic attempt to compute by hand how a weather pattern developed over eight hours. His starting point was an observed pattern of winds and barometric pressure. As the numerical simulation advanced, Richardson's numbers veered off into something utterly unlike real weather. He thought his calculation would have worked out if only he could have begun with more accurate wind data. But as the meteorologist Carl-Gustaf Rossby pointed out in 1956, people routinely made decent 24-hour predictions by looking at weather maps drawn from very primitive data. "The reasons for the failure of Richardson's prognosis," the puzzled Rossby concluded, "must therefore be more fundamental."(9*)

The question of unstable computations was addressed most persistently by Philip Thompson, who had taken up weather prediction with the pioneering ENIAC computer group. In 1956, Thompson estimated that because of the way small irregularities got magnified as a computation went forward, it would never be possible to compute an accurate prediction of weather more than about two weeks ahead.(10) Most scientists felt that all this resulted from the way computers chopped up reality into a simplified grid (and in fact some clever changes in the mathematics stabilized the pioneering weather model of Norman Phillips). As another computer pioneer remarked, "meteorologists get so used to the idea that something bad is going to go wrong with their forecast that you're not surprised" if a calculation couldn't be made to work.(11) The real world itself was presumably not so arbitrary.
There had long been a few meteorologists, however, who felt that the atmosphere was so "delicately balanced" that a relatively minor perturbation could trigger not just a week's storm, but a large and durable shift.(12) In the 1950s, the idea was developed in speculative models of climate that showed abrupt variations, due to self-sustaining feedbacks involving factors such as snow cover. Support came from new data which suggested that climate conditions in the past had sometimes in reality jumped quite rapidly into a different state. The respected U.S. Weather Bureau leader, Harry Wexler, warned that "the human race is poised precariously on a thin climatic knife-edge." If the global warming trend that seemed to be underway continued, it might trigger changes with "a crucial influence on the future of the human race."(13)

The intellectual basis of the new viewpoint was well expressed in 1961 by R.C. Sutcliffe at an international climate conference. Using the popular new language of cybernetics, he described climate as a complex nonlinear feedback system. Unceasing variation might be "built-in," an intrinsic feature of the climate system. Thus it might be pointless to look for external causes of climate change, such as solar variations or volcanic eruptions. Every season the pattern of the general circulation of the atmosphere was newly created, perhaps in a quite arbitrary way. The "sudden jumps" seen in the climate record, Sutcliffe concluded, are "suggestive of a system controlling its own evolution."(14)

The father of cybernetics himself, mathematician Norbert Wiener, insisted that attempts to model the weather by crunching physics equations with computers, as if meteorology were an exact science like astronomy, were doomed to fail. Quoting the old nursery rhyme that told how a kingdom was lost "for want of a nail" (which caused the loss of a horseshoe that kept a knight out of a crucial battle), Wiener warned that "the self-amplification of small details" would foil any attempt to predict weather. One pioneer in computer prediction recalled that Wiener went so far as to say privately that leaders of the work were "misleading the public by pretending that the atmosphere was predictable."(15)  
In 1961, an accident cast new light on the question. Luck in science comes to those in the right place and time with the right set of mind, and that was where Edward Lorenz stood. He taught meteorology at the Massachusetts Institute of Technology, where Wiener was spreading his cybernetics ideas and development of computer models was in the air. Lorenz was one of a new breed of professionals who were combining meteorology with mathematics, and he had devised a simple computer model that produced impressive simulacra of weather patterns. One day he decided to repeat a computation in order to run it longer from a particular point. His computer worked things out to six decimal places, but to get a compact printout he had truncated the numbers, printing out only the first three digits. Lorenz entered these digits back into his computer. After a simulated month or so, the weather pattern diverged from the original result. A difference in the fourth decimal place was amplified in the thousands of arithmetic operations, spreading through the computation to bring a totally new outcome. "It was possible to plug the uncertainty into an actual equation," Lorenz later recalled, "and watch the things grow, step by step."

Lorenz was astonished. While the problem of sensitivity to initial numbers was well known in abstract mathematics, and computer experts were familiar with the dangers of truncating numbers, he had expected his system to behave like real weather. The truncation errors in the fourth decimal place were tiny compared with any of a hundred minor factors that might nudge the temperature or wind speed from minute to minute. Lorenz had assumed that such variations could lead only to slightly different solutions for the equations, "recognizable as the same solution a month or a year afterwards... and it turned out to be quite different from this." Storms appeared or disappeared from the weather forecasts as if by chance.(16)  
Lorenz did not shove this into the back of his mind, as scientists too often do when some anomaly gets in the way of their work. For one thing, the anomaly reminded him of the sudden transitions in rotating dishpans, which he had worked on but never quite solved. He launched himself into a deep and original analysis. In 1963 he published a landmark investigation of the type of equations that might be used to predict daily weather. "All the solutions are found to be unstable," he concluded. Therefore, "precise very-long-range forecasting would seem to be non-existent."(17)  
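The sensitivity Lorenz described is easy to reproduce. The sketch below (a minimal Python illustration) integrates the three-variable system from his 1963 paper twice, with the two starting points differing only in the fourth decimal place, and prints how far apart the trajectories drift. His 1961 accident actually involved a larger twelve-variable weather model, so this illustrates the phenomenon rather than re-creating that run.

# Minimal sketch: sensitivity to initial conditions in the three-variable system
# of Lorenz (1963). Starting values and run length are arbitrary illustrative choices.
import numpy as np

def lorenz(s, sigma=10.0, r=28.0, b=8.0 / 3.0):
    # the three equations of Lorenz's 1963 paper
    x, y, z = s
    return np.array([sigma * (y - x), x * (r - z) - y, x * y - b * z])

def rk4_step(s, dt=0.01):
    # classical fourth-order Runge-Kutta step
    k1 = lorenz(s)
    k2 = lorenz(s + 0.5 * dt * k1)
    k3 = lorenz(s + 0.5 * dt * k2)
    k4 = lorenz(s + dt * k3)
    return s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

run1 = np.array([1.0, 1.0, 1.0])
run2 = run1.copy()
run2[0] += 1e-4                             # a difference in the fourth decimal place

for n in range(1, 3001):                    # 30 time units of simulated "weather"
    run1, run2 = rk4_step(run1), rk4_step(run2)
    if n % 500 == 0:
        print(f"t = {n * 0.01:5.1f}   separation = {np.linalg.norm(run1 - run2):12.6f}")

The separation stays tiny at first, then grows until the two runs are as different from each other as two randomly chosen states of the system, a vivid picture of why precise long-range forecasting seemed hopeless.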
That did not necessarily apply to the climate system, which averaged over many states of weather. So Lorenz next constructed a simulacrum of climate in a simple mathematical model with some feedbacks, and ran it repeatedly through a computer with minor changes in the initial conditions. His initial plan was simply to compile statistics for the various ways his model climate diverged from its normal state. He wanted to check the validity of the procedures some meteorologists were promoting for long-range "statistical forecasting," along the lines of the traditional idea that climate was an average over temporary variations. But he could not find any valid way to statistically combine the different computer results to predict a future state. It was impossible to prove that a "climate" existed at all, in the traditional sense of a stable long-term average. Like the fluid circulation in some of the dishpan experiments, it seemed that climate could shift in a completely arbitrary way.(18)  
These ideas spread among climate scientists, especially at a landmark conference on "Causes of Climate Change" held in Boulder, Colorado, in August 1965. Lorenz, invited to give the opening address, explained that the slightest change of initial conditions might randomly bring a huge change in the future climate. "Climate may or may not be deterministic," he concluded. "We shall probably never know for sure."(19) Other meteorologists at the conference pored over new evidence that almost trivial astronomical shifts of the Earth's orbit might have "triggered" past ice ages. Summing up a consensus at the end of the conference, leaders of the field agreed that minor and transitory changes in the past "may have sufficed to 'flip' the atmospheric circulation from one state to another."(20)

These concerns were timely. Around the mid 1960s, many people were starting to worry about environmental change in general as something that could come arbitrarily and even catastrophically. This was connected with a growing recognition, in many fields of science and in the public mind as well, that the planet's environment was a hugely complicated structure with points of vulnerability. Almost anything might be acutely sensitive to changes in anything else. So it was hopeless to look for comfortably regular weather cycles driven by single causes. The many forces that acted upon climate, all interacting with one another, added up to a system with an intrinsic tendency to vary, hard to distinguish from random fluctuation.

A tentative endorsement of Lorenz's ideas came in a comprehensive 1971 review of climate change. While the authors did not feel Lorenz had proved his case for certain, they found it "conceivable" that sensitivity to initial conditions "could be a 'cause' of climate change."(21) A typical textbook of the time spoke of the atmosphere as an overwhelmingly complex system of different "types of circulation" with rapid transitions among them. "The restlessness of the atmosphere sets a theoretical limit to its predictability," the author concluded. That not only ruined any hope of forecasting weather beyond a week or so, but similarly hampered our ability to foresee climate change. A high-level panel on climate change agreed in 1974 that "we may very well discover that the behavior of the system is not inherently predictable."(22)  
In the early 1970s, concern about arbitrary climate change was redoubled by news reports of devastation from droughts in Africa and elsewhere. The most dramatic studies and warnings came from meteorologist Reid Bryson, who pointed out that the African drought had "minuscule causes," which "suggests that our climate pattern is fragile rather than robust."(23) Meanwhile speculative new models suggested that a slight variation of external conditions could push the climate over an edge, plunging us from the current warmth into an ice age.(24) Studies of dramatic past climate events added plausibility to these models. It was a short step to imagining a system so precariously balanced that it would go through self-sustaining fluctuations without any external trigger at all. As an author of one of the simple models put it, the results raised "the disturbing thought" that science could do no more than follow the history of climate as it evolved.(25)

Many meteorologists rejected this approach, which one prestigious panel dismissed as "the pessimistic null hypothesis that nothing is predictable." After all, the entire program of the postwar physics-based revolution in meteorology aimed at prediction. Scientists holding to this ideal expected that gross changes could in principle be predicted, although perhaps not their timing and details.(26*) In 1976 a theoretical physicist, Klaus Hasselmann, solved the problem. He showed that even though the chaotic nature of weather makes it impossible to predict storms a month ahead, computer models could calculate climate a century ahead, within particular limits. He worked out equations in which changes in climate resembled the classic physics problem called a "random walk." It was as if the atmosphere were staggering like a drunkard among a multitude of possible states. The steps (that is, weather events) this way and that could add up to a large excursion in a random direction. If this picture was valid, then the places the drunken climate reached would be halfway predictable, if never entirely so.(27)
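Hasselmann's picture can be sketched in a few lines of code. The toy below is an illustrative Python sketch with invented parameter values, not the formalism of his paper: uncorrelated day-to-day "weather" noise forces a slowly responding climate variable that is weakly damped back toward equilibrium, and the long-term averages are then examined.

# Illustrative toy of Hasselmann's stochastic-climate idea: fast random "weather"
# kicks integrated by a slow, weakly damped climate variable. All numbers invented.
import numpy as np

rng = np.random.default_rng(1)
days = 365 * 100                            # a century of daily steps
damping = 1.0 / 1000.0                      # weak pull back toward equilibrium (per day)
noise_amp = 0.1                             # strength of the daily weather "kicks"

T = np.zeros(days)                          # slow climate variable, e.g. an ocean temperature anomaly
weather = rng.normal(size=days)             # uncorrelated day-to-day weather forcing
for t in range(1, days):
    T[t] = T[t - 1] - damping * T[t - 1] + noise_amp * weather[t]

yearly = T.reshape(100, 365).mean(axis=1)
decadal = yearly.reshape(10, 10).mean(axis=1)
print("decadal averages:", np.round(decadal, 2))

Although each day's forcing is pure noise, the decadal averages wander through substantial excursions, the halfway-predictable drunkard's walk Hasselmann described.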

The real world did follow a halfway predictable path, according to one interpretation of new field studies. In 1976, analysis of deep-sea cores revealed a prominent 100,000-year cycle in the ebb and flow of ice ages. That corresponded to a predictable astronomical cycle of variations in the Earth's orbit. However, the cyclical changes of sunlight reaching the Earth seemed trivially small. The group of scientists who published the evidence thought the cycle of glacial periods must be almost self-sustaining, and the orbital changes only nudged it into the shifts between states.(28) They called the variations in the Earth's orbit the "pacemaker of the ice ages." In other words, the astronomical cycle triggered the timing of the advance and retreat of ice sheets but was not itself the driving force.(29) Without the timing set by this external stimulus, the ice cycles might wander without any pattern at all. Or changes could be set off arbitrarily with a nudge from any of various other forces that were easily as strong as the slight deviations of sunlight. Indeed the record showed, in addition to the main cycles, a great many fluctuations that looked entirely random and unrelated to orbital variations. Meanwhile, computer weather modelers were starting to admit they could find no way to circumvent Lorenz’s randomness.

The new viewpoint was captured in a fine review by the leading meteorologist J. Murray Mitchell. He pointed out that climate is variable on all timescales from days to millions of years. There were naturally many theories trying to explain this multifarious system, he said, and almost any given theory might partly explain some aspect. "It is likely that no one process will be found adequate to account for all the variability that is observed on any given time scale of variation." Furthermore, the sheer randomness of things set a limit on how accurately scientists could predict future changes.(30)

Similar ideas were gradually becoming known during the 1970s to the entire scientific community and beyond under a new name: "catastrophe theory," later generalized as "chaos theory."(31) The magnification of tiny initial variations, and the unpredictable fluctuation among a few relatively stable states, were found to matter in many fields besides meteorology. Most people eventually heard some version of the question Lorenz asked at a 1979 meeting, "Does the flap of a butterfly's wings in Brazil set off a tornado in Texas?" (Already in 1975 a science journalist had asked, "can I start an ice age by waving my arm?") Lorenz's answer — perhaps yes — became part of the common understanding of educated people.(32)

To be sure, generations of historians had debated the "want of a nail" or "Cleopatra's nose" question. How far could the course of human affairs be diverted by a chance event, such as the beauty of one person, or even the weather, like the typhoon that sank Kublai Khan's attempted invasion of Japan? Those who thought about the question more deeply recognized that you could not pin a great consequence on one particular butterfly or horseshoe nail, but that in certain circumstances the outcome might depend on the influence of a great many such factors, each individually insignificant.  
Until the 1970s, scientists had paid little attention to such ideas, concentrating their efforts on systems where analyzing a few simple influences could indeed predict the outcome. But once scientists started to look for less easily analyzed systems, important examples turned up in fields from astronomy to zoology. Could the configuration of a set of planets around a star happen to be radically different from our own solar system? Did blind chance determine the particular mix of species in an ecosystem? Lengthy computer runs, backed up by field observations, gave answers that mostly pointed toward unpredictability. During the 1980s, people began to describe these developments as a major scientific revolution.(33)

The meteorological questions that had launched chaos theory remained among the hardest to answer. Some scientists now insisted that the climate system's intrinsic fluctuations would utterly defeat any attempt to calculate its changes. Thus the 1980 edition of one classic textbook said that predictions of greenhouse effect warming were dubious because of chaotic "autovariations." Lorenz and others argued that the recently observed global warming might be no evidence of a greenhouse effect or any other external influence, but only a chance excursion in the drunkard's random walk.(34)

Most scientists agreed that climate has features of a chaotic system, but they did not think it was wholly unpredictable. To be sure, it was impossible to predict well in advance, with any computer that could ever be built in the actual universe, that a tornado would hit a particular town in Texas on a particular day (not because of one guilty butterfly, of course, but as the net result of countless tiny initial influences). Yet tornado seasons came on schedule. That type of consistency showed up in the supercomputer simulations constructed in the 1980s and after. Start a variety of model runs with different initial conditions, and they would show, like most calculations with complex nonlinear feedbacks, random variations in the weather patterns computed for one or another region and season. However, their predictions for global average temperature usually remained within a fairly narrow range under given conditions. Critics replied that the computer models had been loaded with artificial assumptions in order to force them to produce regular-looking results. But gradually the most arbitrary assumptions were pared away. The models continued to reproduce, with increasing precision, many kinds of past changes, all the way back through the ice ages. As the computer work became more plausible, it set limits on the amount of variation that might be ascribed to pure chance. (In physics language, weather is an "initial conditions" problem, where everything depends on the precise values at the start of the calculation, whereas climate is a "boundary values" problem, where the system eventually settles into a particular general state regardless of the starting point.)  
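The distinction can be illustrated with a toy ensemble, a deliberately crude Python sketch with invented numbers, not an actual climate model: the same damped, noise-driven "climate" is run many times from different initial states under a steady forcing. The runs wiggle differently year by year, but their long-term averages cluster around the value set by the forcing, the boundary condition, regardless of where each run started.

# Toy ensemble illustrating "initial conditions" vs. "boundary values".
# Each member starts differently and has its own weather noise; all numbers invented.
import numpy as np

rng = np.random.default_rng(2)
years, members = 200, 20
damping, noise_amp, forcing = 0.1, 0.3, 0.2     # invented values for illustration

runs = np.empty((members, years))
for m in range(members):
    T = 2.0 * rng.normal()                      # a different initial state for each member
    for t in range(years):
        T += -damping * T + forcing + noise_amp * rng.normal()
        runs[m, t] = T

print("equilibrium implied by the forcing:", forcing / damping)
means = runs[:, -50:].mean(axis=1)              # each member's average over the last 50 years
print("members' long-run averages span:",
      round(means.min(), 2), "to", round(means.max(), 2))

In this toy the members start several degrees apart, yet their long-run averages all cluster near the value fixed by the forcing.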
What if you refused to trust computers? The fact remained that climate over millions of years had responded in a quite regular way to variations of sunlight (Milankovitch cycles). And when gigantic volcanic outbursts had massively polluted the upper atmosphere, weather patterns had reverted to normal within a few years. This set limits on how far the climate system could drive its own variations independent of outside forces.  
As computer models grew ever more complex they remained fragile, requiring delicate balancing of ever more parameters. For example, around 2018 a major upgrading of one of the most important models had difficulties reproducing historical climates in every detail. Among other things, in many of the runs the model covered the Labrador Sea with ice to an extent that had never actually happened. The modelers found a workaround by starting a run only with initial conditions that gave realistic sea ice. Presumably they had discovered a real-world butterfly effect: if the weather had been slightly different in 1850, the Labrador Sea actually would have become ice-bound.(35)
On the global scale, however, any decent computer model, run with any plausible initial conditions plus a rise of greenhouse gases, predicted warming. As the world's average temperature did in fact climb, it seemed less and less likely that the match with the models was mere accident. However, different models got different results for the future climate in any particular region. And a given model for a given region might come up with a surprising shift of the weather pattern in the middle of a run. Some of these regional fluctuations might be fundamentally chaotic. Occasionally a run of an entire global model would diverge widely for a time, for example if an unusual combination of factors perturbed the delicate balance of ocean circulation. But these divergences were within limits set by the overall long-term average global warming. In fact, it had become a test of a good model that it should show fluctuations and variations, just as the real climate did. For predicting future climates, it became common practice to run a supercomputer model a few times (usually three to five), with slight variations in the initial conditions. The details of the results would differ only modestly, and the modeler would confidently publish an average of the numbers.(36)
To be sure, the models were built to be stable. When a new model was constructed it tended to run away into implausible climate states, until the modelers adjusted parameters to make it resemble the actual current climate. Meanwhile researchers kept turning up possible triggers for a change beyond anything known in recent centuries. Could freshwater from melting Arctic ice abruptly shut down the circulation of the North Atlantic? (Evidently just that had happened some ten thousand years ago.) Could the warming caused by emissions of methane gas make warming tundra or seabeds emit still more methane in a runaway feedback? (There were signs of something like that during a cataclysmic climate shift 55 million years back.) What about a runaway mechanism nobody had even imagined, as the planet warmed beyond anything seen in millions of years? An analysis of deep-sea records from warm periods in the distant past indicated that small perturbations had sometimes triggered processes, of an unknown nature, that brought extreme heating. Those events, however, had played out over tens of thousands of years. The odds against a sudden catastrophe seemed long, but it was impossible to be certain that the planet was not approaching some fatal "tipping point."

The term "tipping point" began to be widely applied to the climate system around 2005, but the concept was much older. Unlike the butterfly's flapping wing, which brought an unpredictably chaotic outcome, at a tipping point a slight change could make a generally stable state flip into a well-defined alternative state. For example, perhaps a minute increase of mean annual temperature could tip an ice-covered Arctic Ocean, which stayed cold by reflecting sunlight, into an ice-free ocean, which stayed warm by absorbing sunlight. The term suggested a rapid transition, and some preferred to speak instead of a 'critical threshold," a point where a transition became irreversible although it might take centuries for the change to work to its end..(37)  
By the early 2000s scientists had found at least half a dozen potential critical thresholds in the climate system. Mathematical techniques and computers were now powerful enough to begin to explore these situations with some rigor. At what global temperature might a given threshold be passed? Which transitions, once begun, could or could not be reversed? Could you analyze observations to detect that a given system was approaching its threshold? Potential tipping points are discussed in other essays, in particular Rapid Climate Change.  
Until the future actually came, there would be no way to say how well the modelers understood all the essential forces. What was no longer in doubt was the most important insight produced by countless computer experiments. Under some circumstances a small change in conditions, even something so slight as an increase of trace gases that made up a tiny fraction of the atmosphere, could nudge the planet's climate into a seriously different state. The climate looked less like a simple predictable system than like a confused beast, which a dozen different forces were prodding in different directions. It responded sluggishly, but once it began to move it would be hard to stop.


 NOTES

1. Mason (1957), p. 192. BACK

2. Russell (1941), p. 91. BACK

3. Landsberg (1941, rev. ed. 1947, 1960), pp. 261-268; he cites Brückner (1890). BACK

4. Fultz et al. (1959). BACK

5. Bjerknes (1921). BACK

6. Lorenz (1967), p. 124. BACK

7. Fermi et al. (1965), see introduction by S. Ulam, pp. 977-78; Metropolis (1992), p. 129; note also Ulam (1976), pp. 226-28. BACK

8. The authors called these "ergodic" fluctuations. Eriksson and Welander (1956), see p. 168. BACK

9. Richardson (1922); Rossby (1959), p. 30 [this is a translation of Rossby (1956)]; recent analysis shows that Richardson's primitive computation could have succeeded fairly well if he had started with perfect data. But his process of computation with a large time-step grossly magnified the wind data errors, which a human forecaster would have intuitively adjusted in gazing at the map. Worse, the process failed to filter out the random pressure oscillations ("gravity waves") that show up in the complete solution of equations for a fluid. See discussion by Lorenz (1967), p. 131; Norton and Suppe (2001), p. 93; for modern recalculation by P. Lynch, see Hayes (2001). BACK

10. Levenson (1989), p. 89. BACK

11. Norman Phillips, interview by T. Hollingsworth, W. Washington, J. Tribbia and A. Kasahara, Oct. 1989, p. 40, copies at National Center for Atmospheric Research, Boulder, CO, and AIP. BACK

12. C.E.P. Brooks quoted by Engel (1953); Nebeker (1995), p. 189. BACK

13. Wexler (1956), p. 480. BACK

14. Sutcliffe (1963), pp. 278-79. Instead of "external" he speaks of "extraneous" causes. BACK

15. "Self-amplification": Wiener (1956), p. 247, also warning that observations were "a very sketchy sampling of the true data"; by "misleading" Wiener meant von Neumann and Charney. Jule Charney and Walter Munk, "Early History of Computing in Meteorology," unpublished, copy from Arakawa's papers kindly furnished by Paul Edwards, p. 9. See also Cressman (1996), p. 31. BACK

16. "Dialogue between Phil Thompson and Ed Lorenz," 31 July 1986, copies at National Center for Atmospheric Research, Boulder, CO; Gleick (1987) 1295, pp. 11-18. BACK

17. Lorenz (1963), pp. 130, 141. This paper, now considered a classic, was not noticed by mathematicians for nearly a decade. Like nearly all the stories in these essays, there is a lot more to this one, notably work by Barry Saltzman and other mathematicians. For additional details, see Dalmedico (2001). BACK

18. His term for arbitrary was "aperiodic." Lorenz (1964); Gleick (1987), pp. 21-31, 168-169. BACK

19. Lorenz (1968), quote p. 3; he described the randomness of a system of 26 equations (which was not very many for meteorology), published in Lorenz (1965); see also Kraus and Lorenz (1966); Lorenz (1970). BACK

20. Mitchell said his printed "Concluding Remarks" were based on Roger Revelle's summary at the conference itself. Mitchell (1968), pp. 157-58. BACK

21. Wilson and Matthews (1971), p. 109. BACK

22. Stringer (1972), pp. 300, 307-08; Federal Council for Science and Technology (1974); this is included as an appendix in United States Congress (95:1) (1977), p. iii; another review accepting continual and unpredictable change was Kutzbach (1976), p. 475. BACK

23. In sum, "numerically small changes in climatic variables may produce significant environment changes," Bryson (1974), pp. 753, 756, 759. BACK

24. E.g., Newell (1974). BACK

25. Lorenz (1970); as cited by Sellers (1973), p. 253. BACK

26. GARP (1975), pp. 32-33. BACK

27. Hasselmann (1976). BACK

28. Hays et al. (1976). BACK

29. "orbital variations control the timing but not the amplitude." Hays, Imbrie and Shackleton, reply to Evans and Freeland (1977), p. 530. BACK

30. "stochastic" or "probabilistic" variability Mitchell (1976), p. 481. BACK

31. Catastrophe theory was developed by the French mathematician René Thom in the 1960s and popularized in the 1970s. Google's nGram viewer finds a spike of the phrase "catastrophe theory" in books starting in the mid 1970s, falling off after 1980 and overtaken ca. 1990 by "chaos theory". See Lorenz (1993), p. 120. BACK

32. "Predictability: Does the flap of a butterfly's wings..." was the title of an address by Lorenz to the American Association for the Advancement of Science, Washington, DC, Dec. 29, 1979. Waving arm: Calder (1975), p. 129 . BACK

33. Gleick (1987). BACK

34. Trewartha and Horn (1980) (5th edition), pp. 392-95. Lorenz continued to press this view into the 1990s. BACK

35. The Community Earth System Model (CESM2), Danabasoglu and Lamarque (2021). BACK

36. E.g., a computer run with a spontaneous North Atlantic excursion: Hall and Stouffer (2001).  Such an excursion could make detection difficult: Randall et al. (2007), p. 643. BACK

37. Arnscheidt and Rothman (2021). On the "tipping point" concept see this note in the essay on The Public and Climate. BACK

copyright © 2003-2024 Spencer Weart & American Institute of Physics