Rain, brains and climate change

[Image: MRI scan of the skull]

14 June 2013

Comparing the results of different climate models is harder than you'd think; Adam Levy describes how he's tackling the problem with techniques developed to analyse medical images.

Five years ago, I spent half an hour in an MRI scanner trying to obey instructions not to look away from the huge crosshairs filling my vision. Many thoughts ran through my head as I lay there.

Mostly, I wondered whether the scientists could tell each time I cheated at their test by glancing away, and whether this would jeopardise my payment. I gave no thought to how the images of my brain would be analysed, and it certainly did not occur to me that in five short years I'd be using the very same techniques to help predict climate change.

When I started my doctorate in 2011, one of the first things I learned about climate science was just how difficult it is. Although it's relatively straightforward to demonstrate that increased carbon dioxide will increase the Earth's temperature, understanding how this will affect different regions at different times of the year is much harder.

Climate scientists now use incredibly complex, physics-based computer models to try to simulate the climate under different scenarios. These allow us to visualise and understand the climate change we've seen over the past century, and help us predict the changes that we might see in the future.

Frustratingly, though, these models have always struggled to simulate rainfall. Changes in rain patterns will have a huge impact on agriculture, potentially threatening the livelihoods and lives of millions of the world's poorest people. Yet when we compare the results of different climate models, not only do they disagree on how climate change will affect future rainfall in different regions of the world - they even struggle to simulate present-day rainfall.

It's possible that the disagreement between the models is partly caused by each model simulating weather features in different locations. For example, if two models agree that a monsoon will become wetter in future years, but each places that monsoon somewhere different, then a point-by-point comparison will imply they disagree.
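
The effect is easy to reproduce. Here's a toy Python sketch (my own illustration, not from the study): two simulated rainfall curves contain an identical feature in slightly different places, and a point-by-point comparison makes them look as if they disagree badly.

```python
# Two "models" place the same rainfall feature 10 grid points apart.
# Compared point by point they look very different, even though they
# agree on the feature itself.
import numpy as np

x = np.arange(100)

def bump(centre):
    """A smooth rainfall 'feature' centred at `centre`."""
    return np.exp(-0.5 * ((x - centre) / 5.0) ** 2)

model_a = bump(40)   # model A puts the monsoon here...
model_b = bump(50)   # ...model B puts the same monsoon 10 points away

print(np.mean((model_a - model_b) ** 2))   # large point-by-point mismatch
print(np.mean((model_a - bump(40)) ** 2))  # zero once the features align
```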

This is a bit like trying to compare two photos taken of the same scene, but from different viewpoints - even though they represent the same thing, if you put one on top of the other, the images won't line up. The purpose of my work is to find a way to get all the climate features in all the models lined up before we compare them... and this is where the brain scans come in.

Warped imagery

After scanning my brain, the scientists compared the images to the brain scans of other individuals, so they could understand what effects the experiment was having on me. To do this, they first needed to align the 3D images from the scans so that all the anatomical features overlapped.

Medical image-analysis researchers do this by using image-manipulation tools, which distort (or 'warp') an image of one patient's brain so that its features line up with those of another patient. Imagine our photos from different viewpoints are printed on rubber sheets - to get them to line up better, we can stretch and squash them, but only within reason. We don't want to fold or tear an image, as then we'll be losing parts of the scene it captured.
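
To make the rubber-sheet picture concrete, here is a minimal Python sketch (an illustration under my own assumptions, not the medical software itself) that warps a 2D image with a displacement field; smoothing that field is what keeps the stretching and squashing 'within reason'.

```python
# A minimal warp: sample the image at smoothly displaced coordinates.
# Smoothing the displacement field suppresses sharp distortions, so
# the "rubber sheet" stretches without folding or tearing.
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def smooth_warp(image, dy, dx, sigma=5.0):
    """Warp a 2D `image` by displacements (dy, dx), smoothed by `sigma`."""
    dy = gaussian_filter(dy, sigma)
    dx = gaussian_filter(dx, sigma)
    rows, cols = np.indices(image.shape)
    coords = np.array([rows + dy, cols + dx])   # where to sample from
    return map_coordinates(image, coords, order=1, mode='nearest')

# Example: apply a gentle, spatially varying distortion to a random scene
scene = np.random.rand(64, 64)
dy = np.random.randn(64, 64) * 2.0   # rough field; smoothed inside the warp
dx = np.random.randn(64, 64) * 2.0
warped = smooth_warp(scene, dy, dx)
```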

In just the same way, when we're comparing climate models, we want to get the climate features to overlap. Regardless of the original application, the idea makes good sense - in both fields we want to compare features, but have to make sure that they're correctly lined up first.

To test out the potential of this technique, I started by looking at 14 state-of-the-art climate models' simulations of the present day. I applied the medical imaging software in such a way that it could adjust the climate simulations instead of brain images. Each model's simulation was warped to get its rainfall features - the patterns where rain fell - to line up better with the observed patterns. In other words, the models' simulations end up looking more like what actually happened.
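
The article doesn't name the registration algorithm, so as a drastically simplified stand-in, here's a sketch that 'registers' a model's rainfall field to observations by searching for the single rigid shift that minimises the mismatch; the real method fits smooth, spatially varying warps like the one sketched above.

```python
# Toy registration: find the integer shift that best lines up a model's
# present-day rainfall field with observed rainfall. Real registration
# fits smooth, spatially varying warps rather than a single shift.
import numpy as np

def best_shift(model_rain, obs_rain, max_shift=5):
    """Return the (dy, dx) shift minimising the squared mismatch."""
    best, best_err = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(model_rain, dy, axis=0), dx, axis=1)
            err = np.mean((shifted - obs_rain) ** 2)
            if err < best_err:
                best, best_err = (dy, dx), err
    return best
```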

I then applied the same warps to the models' predictions of future rainfall under an extreme climate-change scenario. If the technique worked, the rainfall features in the predictions would be better lined up, and so there would be a reduction in disagreement between the models.
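
Continuing the toy sketch above (with made-up placeholder fields rather than real model output), the crucial step is that the warp is fitted on the present day and then reused, unchanged, on the future prediction:

```python
import numpy as np

rng = np.random.default_rng(0)
observations = rng.random((32, 32))                    # placeholder field
model_present = np.roll(observations, 3, axis=1)       # same pattern, shifted
model_future = np.roll(1.2 * observations, 3, axis=1)  # wetter, same shift

dy, dx = best_shift(model_present, observations)       # fit on the present day
future_aligned = np.roll(np.roll(model_future, dy, axis=0), dx, axis=1)
# future_aligned now has its features where the observations put them
```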

For a scientist, calculating something like this is one of those moments when your heart stops. I ran the code that compares the models before and after applying my technique and waited for two numbers to pop up on my computer screen. If the second number was smaller, then the idea had worked. If not, I was going to have to think of another idea for my thesis.

The numbers appeared. I breathed a sigh of relief, and then quickly plugged the numbers into my calculator. I'd managed to remove 15 per cent of the disagreement between climate models by using software originally designed for looking at images of people's brains. Not only that, but in some areas this improvement was much higher, more than halving the disagreement.
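
The article doesn't spell out the disagreement measure, but a plausible sketch of that before-and-after check is the across-model variance of the future rainfall maps, and the percentage of it removed by the alignment:

```python
# Hedged sketch: measure inter-model disagreement as the variance
# across models at each grid point, averaged over the map, then see
# how much of it the alignment removes. The fields below are random
# placeholders standing in for the 14 models' future rainfall.
import numpy as np

def disagreement(fields):
    """Across-model variance at each grid point, averaged over the map."""
    return np.var(np.stack(fields), axis=0).mean()

rng = np.random.default_rng(1)
raw_fields = [rng.random((32, 32)) for _ in range(14)]
mean_map = np.mean(raw_fields, axis=0)
aligned_fields = [0.9 * f + 0.1 * mean_map for f in raw_fields]  # mildly "aligned"

before, after = disagreement(raw_fields), disagreement(aligned_fields)
print(f"Disagreement removed: {100 * (1 - after / before):.0f}%")
```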

Now that I've shown that these tools can help improve our predictions, the next step is to develop a version of the software that is designed from the get-go to look at rainfall. The medical software is not set up to deal with warping images on the surface of a sphere, and so there are limitations in applying it to global rainfall. Once I've corrected this, and dealt with some other shortcomings, I should be able to achieve bigger improvements in the agreement between models.

I'll then apply the software to a broad range of emissions scenarios, which could provide a much clearer understanding of how rainfall will change. My hope is that these results will help inform policy and adaptation to avoid some of the worst consequences of global warming.


Adam Levy is a doctoral student in the Atmospheric Physics department of the University of Oxford.