Filed under CSAG Blog.

In my experience, when scientists talk about climate change, many distinguish between weather and climate as noise about a trend: the weather is the noise and the climate is the trend, the signal hidden in the noise. I have a fundamental dislike for this treatment of weather as “noise” (see also the blog by Roger Pielke Sr). It implies that weather is some sort of stochastic process that has no influence on the climate. The description conjures up the image of Climate, the merciless Juggernaut, pushing weather out of its way on a mission to heat (or cool) the world. However, the notion of climate is no more than a conceptual statistical tool which allows us to think about the changing likelihood of weather conditions in the long term, at least in relation to the atmosphere. In short, the noise matters!

“Climate” has no single definition, which is unfortunate in the world of science and academia as it means scientists regularly talk past each other. I hope to write a blog shortly on the multiple definitions of climate and, more importantly, why they matter, but for now I want to maintain focus on why we should not treat the current state of the atmosphere, ocean and other components of the climate system as simply noise in relation to climate change. Moreover, I hope to convey that the initial state of the climate system (the observed temperature, pressure and humidity fields, etc.) is surely relevant for climate prediction on all time scales.

Initialisation of climate model experiments is seen as an essential component of improving decadal forecasts, but why stop at decadal forecasts? Traditionally, climate change has been viewed as a boundary condition problem. However, I struggle to understand the logic that because the fraction of total forecast uncertainty attributable to initial conditions decreases at long prediction lead times, relative to other sources such as model error and the future GHG emissions scenario (Hawkins and Sutton, 2009), it is therefore less important to consider initial conditions in projecting climate change. Less important for what? Are we less interested in the range of future states consistent with the climate system under different forcing conditions? It is no doubt important to continue improving the reliability of climate models and perhaps (if possible) to reduce our emissions scenario uncertainty, but we cannot do this at the expense of the very thing we are interested in: climate under climate change.
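To make that distinction concrete, here is a toy numerical sketch (the numbers are entirely illustrative and are not taken from Hawkins and Sutton, 2009) of how the initial-condition fraction of forecast uncertainty can shrink with lead time even while the absolute initial-condition spread keeps growing:

```python
# Toy variance decomposition: illustrative numbers only, not from any
# published uncertainty partition.

def variance_components(lead_time_years):
    """Return (initial-condition variance, its fraction of the total)."""
    internal = 0.10 + 0.01 * lead_time_years   # IC-related spread keeps growing
    model = 0.20 + 0.02 * lead_time_years      # model error grows too
    scenario = 0.001 * lead_time_years ** 2    # scenario uncertainty dominates late
    total = internal + model + scenario
    return internal, internal / total

lead_times = (1, 10, 50, 100)
ic_absolute = [variance_components(t)[0] for t in lead_times]
ic_fraction = [variance_components(t)[1] for t in lead_times]
# The fraction shrinks with lead time even though the absolute
# initial-condition spread is still growing: "less important" only
# in a relative sense.
```

In other words, a shrinking fraction says nothing about whether the range of states consistent with the initial conditions has stopped mattering.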

This blog was spurred by a recent publication by Miller et al. (2012), which provides evidence suggesting that the Little Ice Age (LIA) was initially triggered by volcanism and maintained for many centuries by a sea-ice feedback mechanism. The authors state:

“Increased sea ice export may have engaged a self-sustaining sea-ice/ocean feedback unique to the northern North Atlantic region that maintained suppressed summer air temperatures for centuries after volcanic aerosols were removed from the atmosphere…The persistence of cold summers is best explained by consequent sea-ice/ocean feedbacks during a hemispheric summer insolation minimum; large changes in solar radiance are not required.”

We can extract some important lessons about how best to utilise models to forecast the future climate. Firstly, understanding how feedbacks in the climate system operate, and ensuring models can recreate such feedbacks, is an essential component of the climate forecasting problem. Secondly, determining the likelihood of such feedbacks being triggered requires a systematic appraisal of possible triggering mechanisms. The reason I am addressing initialisation of climate models here is largely in relation to the second point. Consequently, I present a hypothesis: the persistence of cold summers during the LIA was a function of both the underlying boundary conditions and the initial state of the climate system. It follows that for another initial state, the triggering of the sea ice feedback and the subsequent persistence of the LIA may not have occurred.
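To illustrate the flavour of this hypothesis (with a deliberately cartoonish one-variable system, not the sea-ice/ocean mechanism of Miller et al., 2012), consider a bistable “climate” with a stable warm state and a stable cold state. A transient negative forcing pulse, standing in for the volcanic trigger, can flip the system into the cold state, where it then persists long after the forcing is removed:

```python
def tendency(x, forcing=0.0):
    """A cartoon bistable climate: stable warm (+1) and cold (-1) states."""
    return x - x ** 3 + forcing

def integrate(x, n_steps, forcing=0.0, dt=0.01):
    """Crude forward-Euler integration; fine for a qualitative sketch."""
    for _ in range(n_steps):
        x += dt * tendency(x, forcing)
    return x

warm = 1.0                                     # start in the warm state
pulsed = integrate(warm, 2000, forcing=-0.5)   # a transient "volcanic" cooling pulse
after_pulse = integrate(pulsed, 5000)          # forcing removed, yet...
control = integrate(warm, 7000)                # same duration, no pulse
# `after_pulse` settles near the cold state (-1) long after the forcing is
# gone, while `control` stays near +1: the trigger is transient, but the
# response persists. Whether the flip happens at all depends on the state
# the system is in when the pulse arrives.
```

Nothing here is meant to model the real mechanism; it only shows why establishing whether a feedback gets triggered requires sampling initial states as well as forcings.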

To test this hypothesis, we need to enter the world of chaos. The presence of chaos (broadly defined as sensitivity to initial conditions) is well known to place a limit on the accuracy of deterministic model predictions of the global atmospheric circulation; this limit is on the order of two weeks and explains why a weather forecast beyond a few days is highly speculative. Yet the return of the seasons (related to Earth’s orbit around the sun) and the controlling influence of the oceans make climate forecasting a possibility. This does not, however, mean that chaos is irrelevant to climate forecasting. Rather, we become more interested in the presence of chaos in other, more slowly evolving sub-systems of the climate which ultimately constrain the behaviour of the atmosphere: the ocean, the cryosphere and so on. There are some pretty big unresolved theoretical questions here relating to concepts such as ergodicity, transitivity and coexisting attractors. The pioneering work of Edward Lorenz, which explored the role of chaos in weather and climate prediction, is still an active area of research today. However, the scope of this topic is a bit too large to begin dissecting within this blog post. Rather, I would simply like to illustrate the relevance of pursuing research to better understand the impact of initial conditions in climate forecasting.

One model run won’t give us a very good idea of the model’s climate under climate change; it will simply tell us what a particular model version does for a particular initial condition. To understand how models treat climate as a distribution of possible states consistent with a particular forcing scenario, we need to run large initial condition ensembles, initialised within the uncertainty in the current state of the climate system. Some progress is being made in this area using the climateprediction.net platform. A project titled weatherathome is running for the southern African region to investigate possible climate system trajectories from multiple initial conditions. This project could produce some illuminating results addressing intriguing questions such as: what weather could have been experienced in southern Africa over the 1961 to 2010 period had the initial state of the system been slightly different? Personally, I think this is a positive and much needed step for the climate modelling community if we are to make progress in understanding past, current and future climates under climate change.
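A minimal sketch of what such an ensemble provides (again using the toy Lorenz 1963 system as a stand-in for a climate model; the ensemble size, perturbation scale and averaging windows are arbitrary choices): each member starts within a small “observational uncertainty” of the same best estimate, and the model “climate” is then read off as a distribution of outcomes rather than a single number:

```python
import random

def lorenz63(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Time derivatives of the Lorenz (1963) system, standard parameters."""
    x, y, z = state
    return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

def rk4_step(state, dt=0.01):
    """One classical fourth-order Runge-Kutta step."""
    def nudged(s, k, f):
        return tuple(si + f * ki for si, ki in zip(s, k))
    k1 = lorenz63(state)
    k2 = lorenz63(nudged(state, k1, dt / 2))
    k3 = lorenz63(nudged(state, k2, dt / 2))
    k4 = lorenz63(nudged(state, k3, dt))
    return tuple(si + dt / 6 * (a + 2 * b + 2 * c + d)
                 for si, a, b, c, d in zip(state, k1, k2, k3, k4))

def time_mean_z(ic, n_spinup=500, n_sample=1500):
    """Mean of the z coordinate after discarding a spin-up period."""
    state, total = ic, 0.0
    for i in range(n_spinup + n_sample):
        state = rk4_step(state)
        if i >= n_spinup:
            total += state[2]
    return total / n_sample

random.seed(1)
analysis = (1.0, 1.0, 1.0)                 # the single "best estimate" state
members = [tuple(c + random.gauss(0.0, 1e-4) for c in analysis)
           for _ in range(40)]             # 40-member initial condition ensemble
means = sorted(time_mean_z(m) for m in members)
# The model "climate" here is the whole spread of `means`,
# not any single member's value.
```

Every member is equally consistent with the “observations”; reporting only one of them would hide the very distribution we are interested in.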

To conclude, the goal of climate prediction should not simply be to improve the “skill” of climate forecasts as measured by the ability to reproduce traditional climate variables such as global mean temperature (see previous post). We may well be able to improve skill yet continue to provide forecasts that are woefully inadequate for informing societal decision making. The goal, therefore, should be to provide forecast information that is useful to decision makers. I struggle to see how initialisation of models can be deemed relevant for weather forecasting, seasonal climate forecasting and now decadal climate forecasting but irrelevant for long-term climate forecasts aimed at providing guidance for climate change adaptation.

5 Responses to “Climate change is an initial condition problem too!”

  1. Erica

    Ah – thanks for the link!

    Re the ensemble members, I suppose it depends how big and complicated you think the attractor is. It is surprisingly hard to wrap your brain or indeed your computer around more than a handful of dimensions. I can’t see how the attractor could be anything other than very big and very complicated, in which case you need not just 40 or 1000, but 10^lots of ensembles to explore it. Pretty difficult.

    (The volume of hyperspace is unexplorably large; the ratio of the volume of an n-sphere to an inscribed n-cube goes towards infinity frighteningly quickly)

    So how many ensemble members are you planning on?

    I suspect it is common practice to throw out the ensemble members that do anything interesting on grounds that if they do do anything interesting, the parameters must be wrong. This I find slightly frightening reasoning. May I have an e-copy of your thesis?

    Also, I agree climatology doesn’t mean anything and especially not in the absence of a definition of “climate” (I pointed this out to my MSc students last week to some consternation). And don’t get me started on “trends”…

  2. Joseph

    Sorry for the length of that previous comment…it’s just nice to be able to respond to comments on the blog. I may well be less enthusiastic in the future!

  3. Joseph

    Thanks for the comments! I definitely prefer music to noise…so then perhaps weather is an individual song and climate is the entire musical?! A really bad song can spoil a musical! (not sure I want to push that analogy any further)

    In relation to the Branstator and Teng paper, I point you towards the rebuttal in the blog post by Roger Pielke Sr. I too remain unconvinced that initial conditions have a limited “information” lifetime in a climate system which includes modes of variability on all time scales. It may be true that some measures of skill for particular variables in certain regions of the world show little improvement from including large initial condition ensembles. However, such measures often place too much weight on the mean, which isn’t always the statistical measure of interest. If one member out of 100 does something interesting, it may not really affect the mean but it is surely still relevant; especially if we are designing systems to be robust to 0.5% probability events.

    I also don’t like the statement in the abstract, “…beyond a range of about 10 yr is a boundary condition problem rather than an initial-value problem” – why can’t it be both? The nature of the problem surely just gets more complicated on longer time scales. Just because getting the boundary conditions right may produce an improved skill score doesn’t mean the problem is no longer an initial-value problem. Furthermore, 40 ensemble members isn’t really very much! In my PhD thesis, I show experiments in the coupled Lorenz 84/Stommel 61 model where 3 initial condition members do one thing and the remaining 9997 members do another.

    Finally, the relevance of converging to climatology seems questionable given nonlinear and perhaps abrupt climate changes. Under transient climate change, climatology becomes a confused concept. The forcings (GHG, solar etc.) associated with the climate in 1961 were not the same as in 1990, yet we still combine 1961 and 1990 data (and all of the years in between) in the same distribution and then base decisions on the statistics of this distribution. It would be really nice to know how different that period could have been for different initial conditions. That’s why I am excited by the weatherathome project!

  4. Erica

    Also, returning to your first paragraph, Brian always says “it’s not noise, it’s music” – which I think is a useful perspective, although at times the music may be somewhat… modern?!

  5. Erica

    “I struggle to see how initialisation of models can be deemed relevant for weather forecasting, seasonal climate forecasting and now decadal climate forecasting but irrelevant for long-term climate forecasts aimed at providing guidance for climate change adaptation.”

    In some circumstances it may be irrelevant, because the information in the model decays to climatology over that timescale; see for example Branstator and Teng (J. Climate, 2010). On the other hand, I think it’s an interesting question: if the initial conditions are giving no information after 10 years, where is the “information” in the output actually coming from?