In theory, in practice: Defending the climate model


In his famous allegory of the cave, Plato imagines a group of people, prisoners, facing a wall, unable to direct their vision at anything else. Behind them lies reality, whose true form is projected onto the wall the prisoners are doomed to watch indefinitely. The prisoners see only the shadows and echoes of the reality behind them.

Plato held mathematics in high regard, and one purpose of this allegory is to draw a distinction between the people who are able to infer the true nature of reality from the snippets that are observable, and those who cannot – who take the images to be reality itself.

The people who ponder the true cause of what they see are practising what we know today as Science. These people – let's call them 'Scientists' – piece together ideas and try to explain their observations by creating a model, which describes the way they believe something works from what (little) they may know.

The allegory serves to illustrate a simple symptom of thinking inductively as opposed to deductively, building an image of something that may never have been directly observed from the data available – science is hard. Few areas of research typify this point better than climate research and the construction of climate models.

The aim when making a climate model is to recreate our world in pure maths and code. In theory, by parameterising every single feature of the planet, from pressure to phytoplankton, it should be possible to generate a simulation that acts in parallel with our world. Building a climate model is an iterative process: start with the big variables, like solar radiation and the Earth's reflectivity to that radiation (its albedo), and keep pumping in more and more equations – those that describe ocean currents, for example. What emerges from this enormous collection of physical laws and elbow grease is a model.
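That first iteration – just solar radiation and albedo – can be written down in a few lines. The sketch below is the standard zero-dimensional energy-balance model, balancing absorbed sunlight against the thermal radiation the planet emits; the constants are standard physical values, but the model itself is the deliberate oversimplification described above, before any further equations are pumped in.

```python
# A minimal first iteration of a climate model: balance the sunlight
# the planet absorbs against the heat it radiates (Stefan-Boltzmann law).

SOLAR_CONSTANT = 1361.0   # W/m^2, sunlight arriving at the top of the atmosphere
ALBEDO = 0.3              # fraction of sunlight reflected straight back to space
SIGMA = 5.67e-8           # Stefan-Boltzmann constant, W/m^2/K^4

def equilibrium_temperature(solar=SOLAR_CONSTANT, albedo=ALBEDO):
    """Temperature at which emitted radiation equals absorbed sunlight."""
    absorbed = solar * (1 - albedo) / 4      # divide by 4: averaged over the sphere
    return (absorbed / SIGMA) ** 0.25        # invert sigma * T^4 = absorbed

print(equilibrium_temperature())  # roughly 255 K -- cold, because there is no greenhouse effect yet
```

The answer comes out well below freezing, which is itself instructive: the gap between this number and the roughly 288 K we actually enjoy is the greenhouse effect, one of the many features later iterations must add.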


However, as is so often the case, it isn't quite that simple, and the hurdles are large and numerous. Today's most sophisticated climate models are called 'General Circulation Models', or GCMs for short, and they weigh in at about 500,000 lines of code, all dedicated to recreating conditions here on Earth. Clearly, solving these models is beyond the capabilities of the humble pen and paper, and is better suited to the relentless number-crunching power of supercomputers – but even they aren't perfect.

It is in the nature of a computer to have a maximum resolution – to be pixelated. So whereas our universe only stops being continuous at the quantum scale, a computer gives up far sooner. The consequence is that the simulated world is split into three-dimensional cells, within which every variable we have programmed in, like temperature or atmospheric methane concentration, takes a single value – which is obviously not the case in the world we live in. Computers are also limited in how often they can run calculations, meaning that time must be split into intervals too: again, an imperfect fit for the smooth arrow of time that we perceive.

If we were able to shrink these time steps and cells to infinitesimal size, we would tend towards the nature of our world. But alas, such a feat is not possible given current computing power. In practice, typical cells are on the order of hundreds of kilometres across, and time steps range from minutes to hours. If we wish to improve the resolution of the simulation in a bid to make it a better fit, we forfeit time: the calculation takes much longer.
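The trade-off is brutally nonlinear. A rough sketch of the arithmetic (illustrative, not drawn from any particular GCM): halving the cell edge gives eight times as many cells, and numerical stability typically forces the time step to shrink in proportion, so each halving costs roughly sixteen times the work.

```python
# Illustrative cost scaling for refining a model grid. Halving the cell
# edge multiplies the cell count by 8 (2^3 in three dimensions), and the
# time step must shrink by the same factor of 2 for stability, so the
# total work grows as the fourth power of the refinement.

def relative_cost(cell_km, base_km=200.0):
    refinement = base_km / cell_km
    return refinement ** 4   # 2^3 more cells x 2 more time steps, per halving

for cell in (200, 100, 50, 25):
    print(f"{cell:>3} km cells -> {relative_cost(cell):>6.0f}x the compute")
```

Going from 200 km cells down to 25 km cells – still far too coarse to resolve a single thunderstorm – already demands thousands of times the computing power.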

Further complications arise when it comes to stitching the correct bits of code together. A climate model is typically made in separate chunks – for example, scientists may model the atmospheric, oceanic and land systems separately, and these must then be appropriately interwoven. Atmospheric CO2 levels have a direct and significant effect on the concentration of CO2 in the oceans, and the blocks of code must share this information accordingly.
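A hypothetical sketch of that stitching: two separately written components exchange CO2 through a simple coupler each time step. The component names, units and exchange rule here are invented for illustration – real coupled models exchange many fields through far more sophisticated coupling software – but the essential idea, moving a quantity down its gradient while conserving the total, is the same.

```python
# Two independently written model components, joined by a coupler that
# passes CO2 between them each step. Units and rates are illustrative.

class Atmosphere:
    def __init__(self, co2):
        self.co2 = co2

class Ocean:
    def __init__(self, co2):
        self.co2 = co2

def couple_step(atm, ocean, exchange_rate=0.01):
    """Move CO2 down the concentration gradient, conserving the total."""
    flux = exchange_rate * (atm.co2 - ocean.co2)
    atm.co2 -= flux
    ocean.co2 += flux

atm, ocean = Atmosphere(420.0), Ocean(300.0)
for _ in range(100):
    couple_step(atm, ocean)
print(round(atm.co2, 1), round(ocean.co2, 1))  # the two reservoirs converge
```

Get the coupling wrong – lose a little CO2 at each exchange, say – and the error compounds over millions of simulated time steps, which is why conservation checks like the one in this sketch matter so much in real models.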


When it comes to firing up the model, it is actually common practice to first let the simulation do its thing: start the sun shining and the Earth spinning, and let the climate reach equilibrium. From there it's a case of feeding in data for different parts of the world and observing what the model – specifically, what those aforementioned 500,000 lines of equations – thinks will happen in the future, given the changes happening now and in recent years.
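This 'spin-up' can be demonstrated on a toy scale by time-stepping the simple energy-balance physics from earlier: start the model in a deliberately wrong state and let it find its own equilibrium before any experiment begins. The heat capacity value below is an illustrative effective number, not a measured constant.

```python
# Toy spin-up: step a zero-dimensional climate forward in time from an
# arbitrary starting temperature until it settles into equilibrium.

SOLAR = 1361.0        # W/m^2
ALBEDO = 0.3
SIGMA = 5.67e-8       # W/m^2/K^4
HEAT_CAPACITY = 1e7   # J/m^2/K -- illustrative effective value
DT = 86400.0          # one day per time step

def step(temp):
    """Advance temperature one step: heating from sunlight, cooling by radiation."""
    absorbed = SOLAR * (1 - ALBEDO) / 4
    emitted = SIGMA * temp ** 4
    return temp + DT * (absorbed - emitted) / HEAT_CAPACITY

temp = 200.0              # deliberately wrong starting state
for day in range(20000):  # let the simulated climate settle
    temp = step(temp)
print(round(temp, 1))     # converges to roughly 255 K regardless of the start
```

The same equilibrium emerges whether the run starts too cold or too hot – which is exactly the property that makes spin-up a trustworthy starting point for the real experiment.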

But why should we trust the model? The Central England temperature series, started in 1659, is the longest-running temperature record, and over the last century humans have been watching our climate with ever-keener eyes. Thanks to the scientist's visceral desire to measure things, we are able to pose our model a question to which we already know the answer. We may task our model with predicting the climate in the year 1999 given data from the early 1900s. By exercising the model on the past like this – a practice known as hindcasting – and assessing whether it matches what actually happened, we are able to adapt and improve features of the code and greatly improve its accuracy. If you'd prefer, you could even feed your model a diet of data from the last ice age and see if it correctly predicts the observations we make from ice cores today.
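The hindcast test boils down to a comparison and a score. In the sketch below both the 'observed' record and the 'predicted' output are invented placeholder numbers – a real test would compare an actual GCM run against something like the Central England series – but the scoring method, root-mean-square error, is a genuinely standard measure of hindcast skill.

```python
# Score a hindcast: how far, on average, did the model's reconstruction
# of the past stray from what was actually measured?

observed = [9.1, 9.3, 9.2, 9.6, 9.8, 10.1]   # invented annual means, deg C
predicted = [9.0, 9.4, 9.3, 9.5, 9.9, 10.0]  # invented model output, deg C

def rmse(obs, pred):
    """Root-mean-square error between two equal-length series."""
    return (sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs)) ** 0.5

print(f"hindcast RMSE: {rmse(observed, predicted):.2f} deg C")
```

A small score means the model reproduces the past well; a large one points at a feature of the code that needs the adapting and improving described above.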

Sometimes the models even exceed expectation. No one is entirely sure what causes the unpredictable changes in weather around the Pacific Ocean known as El Niño, and so we are unable to account for El Niño directly in the code. Despite this, however, it still appears in our models, emerging simply from the complex interactions between the basic equations and laws laid out in the code. El Niño is generated by our model while remaining a mystery to us.

So no, climate models, much like science in general, are not perfect; they are limited by computing power and by human understanding. They will not tell us the weather forecast for January the 8th 2067. This does not, however, mean we shouldn't listen to them – that we should complacently rest our future in the hands of fate and of those who look at the images on the wall and see just images on a wall. Thinking inductively is hard and sometimes leads to a dead end, but the alternative involves waiting a few centuries so we can say with absolute deductive certainty that the world is indeed just as arid and devoid of life as our model predicted.
