On Wednesday 26 May, the Leeds-Bradford local group
held an event titled ‘Meteorology: Climate change, coverage bias and rainfall’. We were lucky to hear talks from three speakers from around the UK.
Our first speaker was Kevin Cowtan from the York Structural Biology Laboratory, University of York. He spoke about a method from the 1970s that accounts for the supposed false pause in global warming.
Kevin introduced an observed pause in global warming between the 1940s and the 1980s and asked whether it was genuine (figure 1). His central thesis was that the hiatus is a statistical artefact arising from decisions made about sampling methods and how data are summarised. Contrary to intuition, more data will not always yield less-biased estimates. More observations are only better when those observations are random; unfortunately, for a lot of climate data, they are not. To further complicate matters, the arithmetic mean is typically used to summarise global temperature but, as a least-squares estimator, it assumes independence of observations. Alas, observations from around the globe are dependent in space and time.
Thankfully, Kevin provided a solution to overcome these problems: generalised least squares. With generalised least squares, we can use a covariance matrix to weight observations according to the amount of independent information they contain. For example, figure 2 shows a situation where we have complete coverage of one hemisphere but sparse coverage of the other (the colour scale indicates the magnitude of the weights). We can use generalised least squares to estimate the mean global temperature by up-weighting the sparse observations. Note that the sparse observations at the poles have low weights because, despite what the distorted map projection suggests, they are very close to the polar observations from the other hemisphere. Hence, they contribute less independent information to the global mean temperature than the equatorial observations do.
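The idea can be sketched in a few lines of Python. This is an illustrative toy, not Kevin's actual implementation: the covariance matrix and temperatures below are made up, with three tightly correlated "clustered" stations and one isolated station. The generalised least squares weights for the mean are Σ⁻¹1 / (1ᵀΣ⁻¹1), which down-weights the correlated cluster because its members share information.

```python
import numpy as np

# Hypothetical covariance matrix: stations 0-2 are clustered and highly
# correlated with one another; station 3 is isolated and independent.
Sigma = np.array([
    [1.0, 0.9, 0.9, 0.0],
    [0.9, 1.0, 0.9, 0.0],
    [0.9, 0.9, 1.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
])

ones = np.ones(4)
Sigma_inv = np.linalg.inv(Sigma)

# GLS weights for the mean: proportional to Sigma^-1 * 1, normalised to sum to 1.
w = Sigma_inv @ ones / (ones @ Sigma_inv @ ones)

temps = np.array([0.4, 0.5, 0.45, 0.2])  # made-up temperature anomalies
gls_mean = w @ temps

print(w)         # the isolated station carries more weight than any clustered one
print(gls_mean)  # differs from the naive arithmetic mean, temps.mean()
```

The three clustered stations each receive the same small weight, while the isolated station's weight is much larger, mirroring how a lone Southern Hemisphere station in figure 2 stands in for a wide region.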
Kevin answered questions on the topic of the societal impacts of misunderstanding complex statistical concepts in climatology. During his talk, Kevin offered some wisdom that is relevant to any communicator: “Correcting misconceptions is much harder than teaching something new”. Hopefully, we can all do the job right the first time!
Detection and attribution of climate change trends
Our second speaker of the day was Donald Cummins, a PhD researcher from the University of Exeter. He offered a back-to-basics appraisal of existing methods and assertions about detection and attribution of climate trends.
Common methods for attribution involve a linear regression of historical climate observations on simulated output from numerical climate models. Unfortunately, regressions of non-stationary variables are in general inconsistent and liable to produce spurious results. The key to overcoming this problem is to parameterise models in terms of planetary energy balance and ocean heat storage; in other words, to use our understanding of causal mechanisms to inform model design. This helps make any associative estimates from statistical models physically interpretable. You can check out more of Donald’s previous work on inverting climate models to estimate radiative forcing.
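The spurious-regression problem is easy to demonstrate. The sketch below (an illustration of the general statistical pitfall, not of Donald's method) regresses one random walk on another, completely independent random walk; because both series are non-stationary, ordinary least squares often reports a deceptively strong fit where no real relationship exists.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Two independent random walks: non-stationary stand-ins for, say,
# "observations" and "model output" with no true relationship between them.
x = np.cumsum(rng.standard_normal(n))
y = np.cumsum(rng.standard_normal(n))

# Ordinary least squares fit of y on x.
slope, intercept = np.polyfit(x, y, 1)
resid = y - (slope * x + intercept)
r2 = 1.0 - resid.var() / y.var()

print(f"R^2 = {r2:.2f}")  # frequently well above zero despite independence
```

Differencing the series, or building the physical mechanism into the model as Donald advocates, removes this artefact.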
Analysing and predicting African rainfall
Our third speaker for the day was Professor Doug Parker from the School of Earth and Environment & School of Mathematics, University of Leeds. He presented ideas spanning a wide range of prediction timescales, from “nowcasting” to patterns with large-scale climatic drivers.
Doug's talk focused on West Africa, where most of his work has been done. We were all taken aback by the size of West African rainstorms, which can be as large as whole countries (figure 3). Doug summed up the difficulty facing West Africans as follows: rainfall is vital to economies, and predictions can save lives and livelihoods, but numerical prediction systems perform poorly on timescales of both days and decades. To make matters worse, climate change is happening faster than we can reduce the errors in our tropical rainfall predictions.
Fortunately, Doug is using event analysis to infer that mesoscale climatic systems arise because they extract moisture at a greater rate than smaller systems do. This is independent of pre-event moisture, which is why large storms can occur in places where water isn’t widely available. In other words, it is the rate at which moisture is taken into a storm that makes it intense, rather than how much moisture is taken in. Future climate models for West Africa might be better off focusing on the regional availability of moisture to make storm predictions.
Recordings of some of the talks are available on the Leeds-Bradford local group playlist within the RSS YouTube channel.
Ciarán McInerney, PhD, is secretary of the Leeds-Bradford local group. He is a research fellow in the School of Computing at the University of Leeds and the NIHR Yorkshire & Humber Patient Safety Translational Research Centre, where he studies the design and evaluation of digital innovation for patient safety.