Quantifying and Interpreting Uncertainty in Time Series Forecasting
Kaleb Phipps, Institut für Automation und angewandte Informatik (IAI), Karlsruher Institut für Technologie (KIT)
Abstract (English):
For a wide variety of sectors, including energy, retail, and mobility, time series data is increasingly gaining importance. Within these sectors, critical applications include dispatch management in energy systems, warehouse storage optimisation in the retail sector, and traffic congestion management within the mobility sector. However, for such applications to be successful, they require reliable and trustworthy forecasts of the relevant time series. Unfortunately, any forecast of the future contains an inherent component of uncertainty. Therefore, to ensure these forecasts are trustworthy, they should quantify this uncertainty, i.e., they should be probabilistic forecasts. However, quantifying this uncertainty through a probabilistic forecast alone may not sufficiently increase trust in the forecast. The quantified uncertainty should also be interpreted in a manner that is useful for the considered application.
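A probabilistic forecast is commonly represented as a set of quantiles, and a standard way to score such a forecast is the pinball (quantile) loss. The following is a minimal, self-contained sketch of this metric; the function name and data are illustrative and not taken from the dissertation:

```python
import numpy as np

def pinball_loss(y_true, y_quantile, tau):
    """Pinball (quantile) loss for quantile level tau in (0, 1).

    Under-prediction is penalised with weight tau and over-prediction
    with weight (1 - tau), so minimising this loss drives the forecast
    towards the tau-quantile of the target distribution.
    """
    diff = y_true - y_quantile
    return np.mean(np.maximum(tau * diff, (tau - 1) * diff))

# Illustrative use: score a 90% quantile forecast against observations.
y_true = np.array([10.0, 12.0, 11.0, 13.0])
q90 = np.array([12.5, 13.0, 12.0, 14.0])
loss = pinball_loss(y_true, q90, tau=0.9)
```

Averaging this loss over many quantile levels yields an approximation of the continuous ranked probability score, another common evaluation metric for probabilistic forecasts.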
Therefore, the present dissertation takes a holistic approach by considering both quantifying and interpreting uncertainty in time series forecasts. To quantify uncertainty, we first investigate whether the meteorological uncertainty affecting many time series can be linked to the uncertainty in the forecast time series. We show that this link can be established, but that post-processing is required to generate calibrated probabilistic forecasts. Second, we consider whether the unknown data distribution of a time series can be used to include uncertainty in a forecast. To this end, we present a novel approach for generating probabilistic forecasts from arbitrary point forecasts using a conditional invertible neural network and show that our approach outperforms benchmark probabilistic forecasts on common evaluation metrics. Third, we extend this approach with automated hyperparameter optimisation to generate probabilistic forecasts whose properties can be customised depending on the considered loss metric. This customisation occurs without retraining the underlying forecasting model and can further increase trust in the forecast by providing probabilistic forecasts tailored to specific requirements.
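The general idea of turning an arbitrary point forecast into a probabilistic one can be illustrated with a much simpler stand-in for the conditional invertible neural network described above: bootstrapping historical forecast residuals around the point forecast. This sketch is only an assumption-laden illustration of the concept, not the dissertation's method:

```python
import numpy as np

def probabilistic_from_point(point_forecast, residuals, n_samples=1000, seed=0):
    """Turn a point forecast into a sample-based probabilistic forecast.

    Simple stand-in for a learned conditional model: the conditional
    distribution around the point forecast is approximated by
    resampling historical residuals (observation minus point forecast).
    Returns an array of shape (n_samples, horizon).
    """
    rng = np.random.default_rng(seed)
    sampled = rng.choice(residuals, size=(n_samples, len(point_forecast)))
    return point_forecast + sampled

# Illustrative residuals from past forecasts and a new point forecast.
residuals = np.array([-1.0, -0.5, 0.0, 0.4, 1.1])
point = np.array([20.0, 21.5, 19.0])
samples = probabilistic_from_point(point, residuals)
q10, q90 = np.quantile(samples, [0.1, 0.9], axis=0)  # 80% interval
```

Because the probabilistic layer is separate from the forecasting model, the underlying point forecaster never needs to be retrained, mirroring the separation exploited by the approach described above.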
To interpret the uncertainty, we first introduce an approach that explains the origins of uncertainty in a probabilistic forecast using existing methods from explainable artificial intelligence. Our method is applicable to a wide range of probabilistic forecasting models, and we show that the resulting explanations deliver valuable insights. Second, we investigate regions of uncertainty that are particularly critical for mobility applications. We further propose various representations of the quantified uncertainty that highlight these critical regions and are particularly useful for the considered mobility application.
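One generic way to attribute forecast uncertainty to input features, in the spirit of the explainable-AI methods mentioned above, is permutation importance applied to the width of a prediction interval rather than to point accuracy. The sketch below is a hypothetical, model-agnostic illustration of that idea, not the dissertation's specific technique:

```python
import numpy as np

def uncertainty_importance(predict_interval, X, n_repeats=10, seed=0):
    """Permutation importance of each feature for forecast uncertainty.

    predict_interval(X) must return (lower, upper) quantile forecasts.
    Each feature column is shuffled in turn; the mean absolute change
    in the per-sample interval width measures how strongly that feature
    drives the quantified uncertainty.
    """
    rng = np.random.default_rng(seed)
    lower, upper = predict_interval(X)
    base_widths = upper - lower  # per-sample uncertainty
    scores = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])
            lo, up = predict_interval(Xp)
            scores[j] += np.mean(np.abs((up - lo) - base_widths))
    return scores / n_repeats

# Toy interval model whose width depends only on feature 0.
def toy_interval(X):
    centre = X[:, 0] + X[:, 1]
    width = np.abs(X[:, 0])
    return centre - width, centre + width

X = np.random.default_rng(1).normal(size=(200, 2))
scores = uncertainty_importance(toy_interval, X)
```

In this toy setting, feature 0 receives a nonzero score while feature 1, which shifts the forecast but not its uncertainty, scores zero, which is exactly the kind of distinction an explanation of uncertainty origins needs to make.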
Overall, by considering multiple approaches to quantifying and interpreting uncertainty, this dissertation introduces several contributions that can be applied to increase trust in time series forecasts.