Are seasonal forecasts for Europe skilful?
Computer-generated weather forecasts exist for seasons ahead. There are several sources for these forecasts, with those from the European Centre for Medium-Range Weather Forecasts (ECMWF) generally considered the most skilful from a single model. Skill varies across the globe and between parameters. Europe is not a highly skilful region, as shown in figure 1 below; indeed, for parts of Europe one is better off using the seasonal normal rather than the forecast.
Much of the discussion of skill concerns 3-month periods. However, decisions based on individual monthly forecasts are fairly common (e.g. https://www.irishtimes.com/news/environment/scientists-warn-that-drought-could-get-worse-as-little-rain-forecast-1.3590223). So how does the forecast for, say, May vary as we get closer to May, and does the skill improve?
During a meeting at ECMWF earlier this year, there was an opportunity to provide feedback on the forecasts. Lake Street Consulting spoke of the ‘jumps’ we had seen in the seasonal forecasts from one month to the next – e.g. shifts in the forecast for May between the run initialised in April and the run initialised in May. However, because our terminology differed from that used within ECMWF, it was a challenge to get our point across. So we went away and produced the following plots, thereby arguing our point in a common language. Below we share the plots.
- ECMWF seasonal (SEAS5) 50-member ensemble forecasts
- Forecasts initialised from May 2017 onwards, valid for November 2017 through May 2018. (A small dataset.)
- French average temperature
- Climatology, or seasonal normal, is the average of observations for 1993-2016.
- Plot design is that of Linus Magnusson, ECMWF. Each plot is for a number of forecasts all valid for the same time. Below in figure 2 is a plot for forecasts valid for February 2018, initialised on the first of each month from August 2017 through February 2018.
- Boxes show the 25-50% and 50-75% ranges of the ensemble, and the whiskers are the ensemble maximum and minimum.
- The red dot is the verification – what actually happened.
- Each plot is for one verification date, with box & whisker plots for different initialisation dates (oldest to left), and climatology on the right.
- Below in figure 3 we show these groupings of forecasts for Nov 2017 (upper) to May 2018 (lower), offset so that the verification months line up vertically.
- All plots have a range of 14°C on the y-axis, so spread is comparable between plots.
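The box-and-whisker construction described above can be sketched in a few lines. The snippet below uses a synthetic 50-member ensemble (random numbers, not real SEAS5 data) for a series of hypothetical lead times, and computes the statistics behind each box: the 25-50% and 50-75% boxes and the min/max whiskers.

```python
import numpy as np

# Synthetic stand-in for a SEAS5-style 50-member ensemble: one array of
# members per initialisation date, all valid for the same month.
# (Illustrative numbers only -- not real forecast data.)
rng = np.random.default_rng(0)
lead_months = [7, 6, 5, 4, 3, 2, 1]  # e.g. initialised Aug 2017 .. Feb 2018, valid Feb 2018
ensembles = {lead: rng.normal(loc=5.0, scale=1.0 + 0.3 * lead, size=50)
             for lead in lead_months}

def box_stats(members):
    """Statistics behind one box-and-whisker: 25-50% and 50-75% boxes,
    whiskers at the ensemble minimum and maximum."""
    q25, q50, q75 = np.percentile(members, [25, 50, 75])
    return {"min": members.min(), "q25": q25, "median": q50,
            "q75": q75, "max": members.max()}

for lead in lead_months:
    s = box_stats(ensembles[lead])
    print(f"lead {lead} months: whiskers [{s['min']:.1f}, {s['max']:.1f}], "
          f"boxes [{s['q25']:.1f}-{s['median']:.1f}] and "
          f"[{s['median']:.1f}-{s['q75']:.1f}]")
```

Plotting these statistics side by side, oldest initialisation on the left and climatology on the right, reproduces the layout of figures 2 and 3.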
What’s surprising about these plots?
Ideally, both box and whisker spreads would vary according to the skill in the ensemble forecast, with the overall trend being to lower spread/higher skill as the lead time of the forecast decreased. Varying sensitivity of the atmosphere in different weather patterns suggests that the decrease in spread will not be smooth.
What we notice is that quite often the forecast spread narrows significantly between lead times of 2 months and 1 month (so from the forecast initialised in Jan 2018 to the forecast initialised in Feb 2018, both valid for Feb 2018, as in figure 2), and the mean of the distribution also shows a significant ‘jump’.
The forecast series valid for Nov 2017, Feb 2018 and May 2018 all have a ‘jump’ between the 2-month and 1-month lead forecasts so large that their 25-75% ranges do not overlap. On a positive note, this ‘jump’ seems to be in the direction of the actual verification. Skill seems to appear in the seasonal forecast for the front month, but is lacking at lead times of 2 months or more. To confirm this, we would need to calculate skill scores over a larger data set – a potential student project.
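One way such a study could frame the question is a mean-squared-error skill score against climatology, MSESS = 1 − MSE_forecast / MSE_climatology, computed per lead time: positive values mean the forecast beats the seasonal normal. The sketch below uses entirely synthetic placeholder data (not SEAS5 output, and only 40 cases where a real study would need many more), with the 1-month lead forecast constructed to track the truth and the 2-month lead forecast to barely differ from climatology.

```python
import numpy as np

rng = np.random.default_rng(1)

n_cases = 40                           # verification months (placeholder sample size)
truth = rng.normal(11.0, 3.0, n_cases)        # "observed" French mean temperature
climatology = np.full(n_cases, 11.0)          # 1993-2016 normal (placeholder)

def msess(forecast_mean, truth, climatology):
    """Mean-squared-error skill score relative to climatology.
    1 = perfect, 0 = no better than the seasonal normal, <0 = worse."""
    mse_f = np.mean((forecast_mean - truth) ** 2)
    mse_c = np.mean((climatology - truth) ** 2)
    return 1.0 - mse_f / mse_c

# Synthetic forecasts mimicking the behaviour described in the text:
fcst_1m = truth + rng.normal(0.0, 1.0, n_cases)        # 1-month lead: tracks truth
fcst_2m = climatology + rng.normal(0.0, 3.0, n_cases)  # 2-month lead: near climatology

print(f"MSESS at 1-month lead: {msess(fcst_1m, truth, climatology):.2f}")
print(f"MSESS at 2-month lead: {msess(fcst_2m, truth, climatology):.2f}")
```

With real forecasts and verifications substituted in, a front-month MSESS well above zero alongside a near-zero (or negative) 2-month score would confirm the pattern seen in the plots.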
The results here are for French average temperature. Given this type of insight, there are a number of ways in which we adapt the forecasts we provide to our clients.