Open Access · Journal Article · DOI

Hydrological concept formation inside long short-term memory (LSTM) networks

TL;DR: In this article, a simple regression approach was used to map the LSTM state vector to the target stores (soil moisture and snow) of interest, and good correlations between the probe outputs and the target variables of interest provided evidence that LSTMs contain information that reflects known hydrological processes comparable with the concept of variable-capacity soil moisture stores.
Abstract
Neural networks have been shown to be extremely effective rainfall-runoff models, where the river discharge is predicted from meteorological inputs. However, the question remains: what have these models learned? Is it possible to extract information about the learned relationships that map inputs to outputs, and do these mappings represent known hydrological concepts? Small-scale experiments have demonstrated that the internal states of long short-term memory networks (LSTMs), a particular neural network architecture predisposed to hydrological modelling, can be interpreted. By extracting the tensors which represent the learned translation from inputs (precipitation, temperature, and potential evapotranspiration) to outputs (discharge), this research seeks to understand what information the LSTM captures about the hydrological system. We assess the hypothesis that the LSTM replicates real-world processes and that we can extract information about these processes from the internal states of the LSTM. We examine the cell-state vector, which represents the memory of the LSTM, and explore the ways in which the LSTM learns to reproduce stores of water, such as soil moisture and snow cover. We use a simple regression approach to map the LSTM state vector to our target stores (soil moisture and snow). Good correlations (R² > 0.8) between the probe outputs and the target variables of interest provide evidence that the LSTM contains information that reflects known hydrological processes comparable with the concept of variable-capacity soil moisture stores. The implications of this study are threefold: (1) LSTMs reproduce known hydrological processes. (2) While conceptual models have theoretical assumptions embedded in the model a priori, the LSTM derives these from the data. These learned representations are interpretable by scientists. (3) LSTMs can be used to gain an estimate of intermediate stores of water such as soil moisture.
While machine learning interpretability is still a nascent field and our approach reflects a simple technique for exploring what the model has learned, the results are robust to different initial conditions and to a variety of benchmarking experiments. We therefore argue that deep learning approaches can be used to advance our scientific goals as well as our predictive goals.
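The probing approach described in the abstract can be sketched in a few lines: fit an ordinary-least-squares regression from the LSTM cell-state time series to a target store, then score the held-out fit with the coefficient of determination R². The snippet below is a minimal, self-contained illustration using synthetic data; the array names (`cell_states`, `soil_moisture`), dimensions, and the linearly generated target are assumptions standing in for the paper's actual LSTM states and CAMELS-derived store variables.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: in the paper, the probe inputs would be the
# LSTM cell-state time series and the target a soil-moisture or snow series.
T, H = 500, 64                                  # time steps, cell-state dimension
true_w = rng.normal(size=H)                     # synthetic linear signal
cell_states = rng.normal(size=(T, H))
soil_moisture = cell_states @ true_w + 0.1 * rng.normal(size=T)

# Split chronologically, as is usual for hydrological time series.
split = int(0.8 * T)
X_train, X_test = cell_states[:split], cell_states[split:]
y_train, y_test = soil_moisture[:split], soil_moisture[split:]

# Linear probe: least-squares map from cell state (plus intercept) to store.
X_aug = np.hstack([X_train, np.ones((split, 1))])
coef, *_ = np.linalg.lstsq(X_aug, y_train, rcond=None)

# Score the probe on held-out data with R^2.
pred = np.hstack([X_test, np.ones((T - split, 1))]) @ coef
ss_res = np.sum((y_test - pred) ** 2)
ss_tot = np.sum((y_test - y_test.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
print(f"probe R^2 = {r2:.3f}")
```

A high R² on held-out time steps, as in the paper's R² > 0.8 results, indicates that the cell state linearly encodes the target store; the regression itself adds no new information, it only reads out what the LSTM already represents.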


Citations
Journal Article

The Great Lakes Runoff Intercomparison Project Phase 4: the Great Lakes (GRIP-GL)

TL;DR: In this article, the authors performed a model intercomparison study to test and compare the simulated outputs of various model setups over the same study domain, bringing together a wide range of researchers setting up their models of choice in a highly standardized experimental setup using the same geophysical datasets, forcings, common routing product, and locations of performance evaluation across the 1×10⁶ km² study domain.
Journal Article

Improving hydrologic models for predictions and process understanding using neural ODEs

TL;DR: Hydrologic neural ordinary differential equation (ODE) models are introduced that perform as well as state-of-the-art deep learning methods in streamflow prediction while maintaining the ease of interpretability of conceptual hydrologic models.
Journal Article

On strictly enforced mass conservation constraints for modelling the Rainfall‐Runoff process

TL;DR: In this paper, the authors analyzed the role of strictly enforced mass conservation in matching the long-term mass balance between precipitation input and streamflow output using physics-informed machine learning. They found that enforcing closure in the rainfall-runoff mass balance does appear to harm the overall skill of hydrological models; however, this "closure" effect accounts for only a small fraction of the difference in predictive skill between deep learning and conceptual models.
Journal Article

Continuous streamflow prediction in ungauged basins: long short-term memory neural networks clearly outperform traditional hydrological models

TL;DR: In this paper, the authors investigated the ability of LSTM neural networks to perform streamflow prediction at ungauged basins, compared against a set of state-of-the-art, hydrological-model-dependent regionalization methods.
Journal Article

Using Deep Learning Algorithms for Intermittent Streamflow Prediction in the Headwaters of the Colorado River, Texas

Farhang Forghanparast, +1 more
- 22 Sep 2022 - 
TL;DR: Three deep learning algorithms were compared against a baseline Extreme Learning Machine (ELM) model for monthly streamflow prediction in the headwaters of the Texas Colorado River; the deep learning models, especially the LSTM-based ones, outperformed the ELM on all evaluation metrics, offering overall higher accuracy and better stability (more robustness against overfitting).
References
Proceedings Article

"Why Should I Trust You?": Explaining the Predictions of Any Classifier

TL;DR: In this article, the authors propose LIME, a method to explain models by presenting representative individual predictions and their explanations in a non-redundant way, framing the task as a submodular optimization problem.
Journal Article

Deep learning and process understanding for data-driven Earth system science

TL;DR: It is argued that contextual cues should be used as part of deep learning to gain further process understanding of Earth system science problems, improving the predictive ability of seasonal forecasting and modelling of long-range spatial connections across multiple timescales.
Journal Article

The Mythos of Model Interpretability: In machine learning, the concept of interpretability is both important and slippery.

TL;DR: In this article, the authors ask whether a supervised machine learning model will work in deployment, and what else it can tell us about the world beyond its predictive capabilities.
Journal Article

Getting the right answers for the right reasons: Linking measurements, analyses, and models to advance the science of hydrology

TL;DR: In this article, the authors argue that scientific progress will mostly be achieved through the collision of theory and data, rather than through increasingly elaborate and parameter-rich models that may succeed as mathematical marionettes, dancing to match the calibration data even if their underlying premises are unrealistic.