Journal ArticleDOI

Explore a deep learning multi-output neural network for regional multi-step-ahead air quality forecasts

TLDR
Results demonstrated that the proposed DM-LSTM model, incorporating three deep learning algorithms, could significantly improve the spatio-temporal stability and accuracy of regional multi-step-ahead air quality forecasts.
About
This article is published in Journal of Cleaner Production. The article was published on 2019-02-01 and has received 178 citations to date. The article focuses on the topics: Air quality index & Overfitting.


Citations
Journal ArticleDOI

PM2.5 Prediction Based on Random Forest, XGBoost, and Deep Learning Using Multisource Remote Sensing Data

TL;DR: In this paper, the authors used 23 features, including satellite and meteorological data, ground-measured PM2.5, and geographical data, in the modeling of air pollution in Tehran's urban area.
Journal ArticleDOI

Exploring a Long Short-Term Memory based Encoder-Decoder framework for multi-step-ahead flood forecasting

TL;DR: It is concluded that the proposed LSTM-ED, which translates the rainfall sequence into the runoff sequence, can improve the reliability of flood forecasting and increase the interpretability of model internals.
Journal ArticleDOI

Daily urban air quality index forecasting based on variational mode decomposition, sample entropy and LSTM neural network

TL;DR: This study proposes a hybrid AQI forecasting model that achieves a high rate of correct AQI class forecasts, which existing single models cannot achieve, whereas other hybrid models can only reflect AQI series trends with limited prediction accuracy.
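The hybrid model above uses sample entropy to gauge the complexity of each decomposed AQI sub-series. A minimal NumPy sketch of sample entropy, SampEn(m, r) = -ln(A/B), where B counts template matches of length m and A of length m+1 under a Chebyshev-distance tolerance (the function name and defaults are illustrative, not taken from the paper):

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """SampEn(m, r): -ln(A/B), with tolerance r times the series std.

    B counts pairs of length-m templates within tolerance (Chebyshev
    distance), A the same for length m+1; self-matches are excluded.
    """
    x = np.asarray(x, float)
    tol = r * x.std()

    def count(mm):
        templ = np.array([x[i:i + mm] for i in range(len(x) - mm + 1)])
        hits = 0
        for i in range(len(templ)):
            d = np.abs(templ - templ[i]).max(axis=1)
            hits += int((d <= tol).sum()) - 1  # exclude the self-match
        return hits

    B, A = count(m), count(m + 1)
    return -np.log(A / B)

# A strictly periodic series is highly regular, so its SampEn is near zero.
periodic = np.tile([1.0, 2.0, 3.0, 4.0], 50)
se = sample_entropy(periodic)
```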
Journal ArticleDOI

Real-time probabilistic forecasting of river water quality under data missing situation: Deep learning plus post-processing techniques

TL;DR: This study introduced a novel methodology for probabilistic water quality forecasting conditional on point forecasts. A Multivariate Bayesian Uncertainty Processor probabilistically models the relationship between the point forecasts made by a deep learning artificial neural network and the corresponding observed water quality.
Journal ArticleDOI

Artificial intelligence in business: state of the art and future research agenda

TL;DR: The article traces the evolution of research on AI in business over time, highlights seminal works and leading publication venues in the field, and proposes a research agenda to guide future AI research in business toward the identified trends and challenges.
References
Proceedings Article

ImageNet Classification with Deep Convolutional Neural Networks

TL;DR: The authors achieved state-of-the-art ImageNet classification performance with a deep convolutional neural network consisting of five convolutional layers, some followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax.
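The final 1000-way softmax mentioned above converts the last layer's logits into class probabilities. A minimal, numerically stable NumPy sketch (the function name is illustrative):

```python
import numpy as np

def softmax(logits):
    """Map logits to probabilities; subtracting the max avoids overflow."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Probabilities sum to 1 and preserve the ranking of the logits.
p = softmax(np.array([2.0, 1.0, 0.1]))
```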
Journal ArticleDOI

Long short-term memory

TL;DR: A novel, efficient, gradient based method called long short-term memory (LSTM) is introduced, which can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units.
Journal Article

Dropout: a simple way to prevent neural networks from overfitting

TL;DR: It is shown that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets.
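Dropout as described above randomly zeroes units during training. A minimal NumPy sketch using the common "inverted" variant, which rescales survivors so activations keep the same expected value at test time (the function name is illustrative):

```python
import numpy as np

def dropout(x, p_drop, rng, training=True):
    """Inverted dropout: zero each unit with probability p_drop, rescale the rest."""
    if not training or p_drop == 0.0:
        return x                                  # identity at inference time
    mask = rng.random(x.shape) >= p_drop          # keep-mask drawn per unit
    return x * mask / (1.0 - p_drop)              # rescale to preserve E[x]

# With p_drop=0.5, surviving ones become 2.0 and dropped ones become 0.0.
rng = np.random.default_rng(0)
x = np.ones((4, 8))
y = dropout(x, 0.5, rng)
```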
Journal ArticleDOI

Learning representations by back-propagating errors

TL;DR: Back-propagation repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector, which helps to represent important features of the task domain.
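The repeated weight adjustment described above can be sketched end to end on a toy problem. A minimal NumPy two-layer network trained by back-propagation on XOR, minimizing mean squared error (the architecture, learning rate, and seed are illustrative choices):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)   # inputs
T = np.array([[0], [1], [1], [0]], float)               # XOR targets
W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)
W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)

def mse():
    return float(((sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) - T) ** 2).mean())

loss_before = mse()
lr = 1.0
for _ in range(2000):
    H = sigmoid(X @ W1 + b1)           # forward pass
    Y = sigmoid(H @ W2 + b2)
    dY = (Y - T) * Y * (1 - Y)         # error signal at the output layer
    dH = (dY @ W2.T) * H * (1 - H)     # error back-propagated to the hidden layer
    W2 -= lr * H.T @ dY; b2 -= lr * dY.sum(0)
    W1 -= lr * X.T @ dH; b1 -= lr * dH.sum(0)
loss_after = mse()
```

Each iteration nudges every weight opposite its error gradient, so the output vector moves toward the desired one, exactly the repeated adjustment the summary describes.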
Journal ArticleDOI

A fast learning algorithm for deep belief nets

TL;DR: A fast, greedy algorithm is derived that can learn deep, directed belief networks one layer at a time, provided the top two layers form an undirected associative memory.