Modelling uncertainties
Feigl et al. (2022) present a novel method for analyzing the errors of
process-based models by attributing the model error at each time step to
specific input variables and model settings. This approach helps to
identify where model processes might need improvement, where model input
data might be of low quality, or where model processes might be missing.
The presented approach is a novel combination of (a) Machine Learning
(using a data-driven model that learns to predict the model errors), (b)
SHapley Additive exPlanations (SHAP) and Principal Component Analysis
(attributing errors to model inputs and variables), and (c) clustering
(deriving groups of time steps that show similar error-generation
characteristics). The methodology is applied to the water temperature
model HFLUX for a 3.45 km² Canadian catchment. The
results show that the errors can be clustered into three groups related
to specific processes, indicating where model adjustments can lead to
improved performance.
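To make the three stages concrete, the following Python sketch runs the pipeline on synthetic placeholder data. The data, hyperparameters, and the choice of gradient boosting and k-means are illustrative assumptions, not the exact setup of Feigl et al. (2022); the shap and scikit-learn packages are assumed available.

    # Minimal sketch of the error-attribution pipeline on placeholder data.
    import numpy as np
    import shap
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 5))      # inputs/settings per time step (placeholder)
    errors = 0.5 * X[:, 0] + rng.normal(scale=0.1, size=1000)  # simulated minus observed (placeholder)

    # (a) a data-driven model learns to predict the process model's errors
    error_model = GradientBoostingRegressor().fit(X, errors)

    # (b) SHAP attributes each time step's predicted error to the inputs;
    #     PCA compresses the attribution vectors before clustering
    shap_values = shap.TreeExplainer(error_model).shap_values(X)
    components = PCA(n_components=2).fit_transform(shap_values)

    # (c) clustering groups time steps with similar error-generation behaviour
    labels = KMeans(n_clusters=3, n_init=10).fit_predict(components)
    print(np.bincount(labels))          # size of each error cluster

In the actual application, X would hold the HFLUX inputs and settings at each time step, and errors the corresponding water temperature residuals.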
Moraga et al. (2022) present a new framework to quantify and partition
the uncertainty in hydrological projections originating from climate
models and from natural climate variability. The approach is tested in
the 478 km² Kleine Emme and the 1730 km² Thur mountainous catchments in
Switzerland. The study uses one emission
scenario and nine climate models. The outputs of the climate models are
stochastically downscaled using a two-dimensional weather generator,
producing a 90-member ensemble covering the period 2010–2089, and the
hydrology is simulated using the spatially distributed TOPKAPI-ETH
model. The results show that the uncertainty of the annual streamflow
projections is dominated by stochastic uncertainty due to the large
natural variability of precipitation. The same applies to extreme high
flows. In contrast, snowmelt and liquid precipitation exhibit robust
climate signals, illustrating that streamflow uncertainty during warm
seasons and at high altitudes is dominated by climate model uncertainty.
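As a worked illustration of how such a partition can be computed, the sketch below applies a simple ANOVA-style variance decomposition to a placeholder 9 × 10 ensemble (nine climate models, ten stochastic realizations each). The numbers are synthetic and the estimator is a generic sketch, not necessarily the exact framework of Moraga et al. (2022).

    # Toy partition of ensemble variance into a climate-model component and
    # a stochastic (natural-variability) component, on placeholder data.
    import numpy as np

    rng = np.random.default_rng(1)
    n_models, n_runs = 9, 10            # 9 climate models x 10 stochastic runs = 90 members
    # annual streamflow per ensemble member, shape (models, runs) (placeholder values)
    q = rng.normal(loc=rng.normal(size=(n_models, 1)), scale=2.0,
                   size=(n_models, n_runs))

    model_means = q.mean(axis=1)
    var_climate = model_means.var(ddof=1)          # spread between climate models
    var_stochastic = q.var(axis=1, ddof=1).mean()  # average spread within a model
    total = var_climate + var_stochastic

    print(f"climate-model fraction: {var_climate / total:.2f}")
    print(f"stochastic fraction:    {var_stochastic / total:.2f}")

With these placeholder values the stochastic fraction dominates, mirroring the paper's finding for annual streamflow; repeating the partition by season or elevation band would reveal where the climate-model term takes over.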