Alejandro Hermoso

and 1 more

Regional weather variability and extremes over Europe are strongly linked to variations in the North Atlantic jet stream, especially during the winter season. Projections of the evolution of the North Atlantic jet are essential for estimating the regional impacts of climate change. Separating forced trends in the North Atlantic jet from its natural variability is therefore a highly relevant task. Here, a deep-learning-based method, the Latent Linear Adjustment Autoencoder (LLAE), is used for this purpose on an ensemble of fully coupled climate simulations. The LLAE is based on an autoencoder and an additional linear component. The model predicts the wind component attributable to natural variability using detrended temperature and geopotential as inputs; the residual between this prediction and the original wind field is interpreted as the forced component of the jet. The method is first tested on the geostrophic wind, for which the forced trend can be obtained analytically as the difference between the geostrophic wind computed from the full geopotential and that computed from the detrended geopotential. Despite the large variability of the original trends, the LLAE is shown to be effective in extracting the forced component of the trend for each individual ensemble member, in both the geostrophic and the full wind fields. The LLAE-derived forced trend shows an increase in upper-level zonal wind speed along a southwest-to-northeast oriented band over the ocean and an extension of the jet towards Europe. These characteristics are common across different periods and show some similarities to the upper-level zonal wind speed trend obtained from the ERA5 reanalysis.
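The geostrophic test case admits a closed form because the geostrophic wind is linear in the geopotential. Using the standard geostrophic relation on pressure levels (a textbook formula, not spelled out in the abstract), the analytic forced component described above can be sketched as:

$$
u_g = -\frac{1}{f}\,\frac{\partial \Phi}{\partial y}, \qquad
v_g = \frac{1}{f}\,\frac{\partial \Phi}{\partial x},
$$

so that, with $\Phi$ the full geopotential and $\Phi_{\mathrm{det}}$ its detrended counterpart,

$$
u_g^{\mathrm{forced}} = u_g(\Phi) - u_g(\Phi_{\mathrm{det}}) = u_g\!\left(\Phi - \Phi_{\mathrm{det}}\right).
$$

By linearity, the difference of the two geostrophic winds equals the geostrophic wind of the geopotential trend itself, which is what makes this case a useful analytic benchmark for the LLAE residual.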

Guillaume Bertoli

and 3 more

As climate modellers prepare their code for kilometre-scale global simulations, the computationally demanding radiative transfer parameterization is a prime candidate for machine learning (ML) emulation. Because of the computational demands, many weather centres use a reduced spatial grid and reduced temporal frequency for radiative transfer calculations in their forecast models. This strategy is known to affect forecast quality, which further motivates the use of ML-based radiative transfer parameterizations. This paper contributes to the discussion on how to incorporate physical constraints into an ML-based radiative parameterization, and how different neural network (NN) designs and output normalisation affect prediction performance. A random forest (RF) is used as a baseline method, with ecRad, the European Centre for Medium-Range Weather Forecasts (ECMWF) radiation scheme that is operational in the Icosahedral Nonhydrostatic Weather and Climate Model (ICON), used for training. Surprisingly, the RF is not affected by the top-of-atmosphere (TOA) bias found in all NNs tested (e.g., MLP, CNN, UNet, RNN) in this and previously published studies. At lower atmospheric levels, the RF can compete with all NNs tested, but its memory requirements quickly become prohibitive. For a fixed memory size, most NNs outperform the RF except at TOA. The most accurate emulator is a recurrent neural network architecture that closely imitates the physical process it emulates. The shortwave and longwave fluxes are normalised to reduce their dependence on the solar angle and surface temperature, respectively. The models are, furthermore, trained with an additional heating-rate penalty in the loss function.
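The heating-rate penalty mentioned above can be illustrated with a minimal sketch. The abstract does not give the exact loss, so the function names, the weight `lam`, and the proportionality constant below are illustrative assumptions; the only physics used is that the heating rate at a level is proportional to the vertical divergence of the net flux across that layer.

```python
import numpy as np

def heating_rate(net_flux, dp, c=1.0):
    """Illustrative heating-rate proxy: proportional to the vertical
    divergence of the net flux, HR_k ~ c * (F_{k+1} - F_k) / dp_k.
    The sign and constant c depend on the convention used."""
    return c * np.diff(net_flux, axis=-1) / dp

def combined_loss(pred_flux, true_flux, dp, lam=0.5):
    """Hypothetical combined loss: flux MSE plus a weighted MSE on the
    heating rates derived from the same flux profiles."""
    flux_mse = np.mean((pred_flux - true_flux) ** 2)
    hr_mse = np.mean(
        (heating_rate(pred_flux, dp) - heating_rate(true_flux, dp)) ** 2
    )
    return flux_mse + lam * hr_mse
```

A flux prediction that is merely offset by a constant has the correct heating rates, so only the flux term penalises it; an error that varies with height is penalised by both terms. This is the intended effect of such a penalty: it discourages errors in the vertical structure of the fluxes, which is what the heating rates depend on.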
