Journal ArticleDOI

Ensemble Prediction Approach Based on Learning to Statistical Model for Efficient Building Energy Consumption Management

02 Mar 2021-Symmetry (Multidisciplinary Digital Publishing Institute)-Vol. 13, Iss: 3, pp 405
TL;DR: In this paper, an ensemble approach coupling a learning model with a statistical model was proposed to predict the short-term energy consumption of a multifamily residential building; it utilizes Long Short-Term Memory (LSTM) and a Kalman Filter (KF) to build an ensemble prediction model.
Abstract: With the development of modern power systems (smart grid), energy consumption prediction becomes an essential aspect of resource planning and operations. In the last few decades, industrial and commercial buildings have been thoroughly investigated for consumption patterns. However, due to the unavailability of data, residential buildings could not get much attention. During the last few years, many solutions have been devised for predicting electric consumption; however, it remains a challenging task due to the dynamic nature of residential consumption patterns. Therefore, a more robust solution is required to improve model performance and achieve better prediction accuracy. This paper presents an ensemble approach that couples a learning model with a statistical model to predict the short-term energy consumption of a multifamily residential building. The proposed approach combines Long Short-Term Memory (LSTM) and a Kalman Filter (KF) into an ensemble model that predicts the short-term energy demand of multifamily residential buildings. The approach uses real energy data acquired from a multifamily residential building in South Korea. Different statistical measures, such as mean absolute error (MAE), root mean square error (RMSE), mean absolute percentage error (MAPE), and the R2 score, are used to evaluate the performance of the proposed approach and compare it with existing models. The experimental results reveal that the proposed approach predicts accurately and outperforms the existing models. Furthermore, a comparative analysis against conventional machine learning models confirms the effectiveness and significance of the proposed approach compared to existing energy prediction models. The proposed approach will support energy management in effectively planning and managing the energy supply and demand of multifamily residential buildings.
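The abstract names the two ingredients (an LSTM learner and a Kalman Filter) and four evaluation metrics, but not the exact fusion scheme. A minimal illustrative sketch, assuming a scalar-state filter smoothing a 1-D forecast series, might look like the following; all parameter values and function names here are invented, not the paper's implementation:

```python
import math

def kalman_smooth(forecasts, q=0.01, r=0.5):
    """Smooth a 1-D forecast series with a scalar Kalman filter.
    q (process-noise variance) and r (measurement-noise variance) are
    assumed values chosen for illustration only."""
    x, p = forecasts[0], 1.0          # initial state estimate and variance
    smoothed = [x]
    for z in forecasts[1:]:
        p += q                        # predict: variance grows by process noise
        k = p / (p + r)               # Kalman gain
        x += k * (z - x)              # update state toward the new value
        p *= (1 - k)                  # shrink variance after the update
        smoothed.append(x)
    return smoothed

def mae(y, yhat):
    """Mean absolute error."""
    return sum(abs(a - b) for a, b in zip(y, yhat)) / len(y)

def rmse(y, yhat):
    """Root mean square error."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(y, yhat)) / len(y))

def mape(y, yhat):
    """Mean absolute percentage error (actuals must be nonzero)."""
    return 100 * sum(abs((a - b) / a) for a, b in zip(y, yhat)) / len(y)

def r2(y, yhat):
    """Coefficient of determination (R2 score)."""
    ybar = sum(y) / len(y)
    ss_res = sum((a - b) ** 2 for a, b in zip(y, yhat))
    ss_tot = sum((a - ybar) ** 2 for a in y)
    return 1 - ss_res / ss_tot
```

In a setup like this, the learner produces the raw forecast series and the filter supplies the statistical correction; the four metric functions then score the fused output against the measured consumption.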
Citations
Journal ArticleDOI
TL;DR: A topical survey of the application and impact of software-defined networking on Internet of Things networks, carried out from the different perspectives of software-based Internet of Things networks, including wide-area networks, edge networks, and access networks.
Abstract: In recent years, rapid development has been made in Internet of Things communication technologies, infrastructure, and physical resource management. These developments and research trends address challenges such as heterogeneous communication, quality-of-service requirements, unpredictable network conditions, and a massive influx of data. One major contribution to the research world is in the form of software-defined networking applications, which aim to deploy rule-based management to control and add intelligence to the network using high-level policies, giving integral control of the network without dealing with low-level configuration issues. Machine learning techniques coupled with software-defined networking can make networking decisions more intelligent and robust. Internet of Things applications have recently adopted virtualization of resources and network control with software-defined networking policies to make the traffic more controlled and maintainable. However, the requirements of software-defined networking and the Internet of Things must be aligned to make these adaptations possible. This paper discusses possible ways to build software-defined-networking-enabled Internet of Things applications and the challenges the Internet of Things can address by leveraging software-defined networking. We provide a topical survey of the application and impact of software-defined networking on Internet of Things networks. We also study the impact of machine learning techniques applied to software-defined networking and its application perspective. The study is carried out from the different perspectives of software-based Internet of Things networks, including wide-area networks, edge networks, and access networks. Machine learning techniques are presented from the perspective of network resource management, security, traffic classification, quality of experience, and quality-of-service prediction.
Finally, we discuss challenges and issues in adopting machine learning and software-defined networking for the Internet of Things applications.

39 citations

Journal ArticleDOI
Hyeun Sung Kim1
TL;DR: In this article, the authors proposed an IoT task management mechanism based on predictive optimization for energy consumption minimization in smart residential buildings, which combines a prediction module with an optimization module that solves the energy-consumption optimization problem.

29 citations

Journal ArticleDOI
23 May 2021-Energies
TL;DR: A spatial and temporal ensemble forecasting model for short-term electric consumption forecasting that has efficiently captured the dynamic electric consumption characteristics to exploit ensemble model diversities and achieved lower forecasting error is presented.
Abstract: Due to the availability of smart metering infrastructure, high-resolution electric consumption data is readily available for studying the dynamics of residential electric consumption at finely resolved spatial and temporal scales. Analyzing the electric consumption data enables policymakers and building owners to understand consumers’ demand and consumption behaviors. Furthermore, analysis and accurate forecasting of electric consumption are substantial for consumer involvement in time-of-use tariffs, critical peak pricing, and consumer-specific demand response initiatives. Alongside its vast economic and sustainability implications, such as energy wastage and decarbonization of the energy sector, accurate consumption forecasting facilitates power system planning and stable grid operations. Energy consumption forecasting is an active research area; despite the abundance of devised models, electric consumption forecasting in residential buildings remains challenging due to high variability in occupant energy use behavior. Hence, the search for an appropriate model for accurate electric consumption forecasting continues. To this aim, this paper presents a spatial and temporal ensemble forecasting model for short-term electric consumption forecasting. The proposed work explores electric consumption profiles at the apartment level through cluster analysis based on the k-means algorithm. The ensemble forecasting model consists of two deep learning models: Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU). First, the apartment-level historical electric consumption data is clustered. Later, the clusters are aggregated based on the consumption profiles of consumers. At the building and floor level, the ensemble models are trained using aggregated electric consumption data.
The proposed ensemble model forecasts electric consumption at three spatial scales (apartment, floor, and building level) over hourly, daily, and weekly forecasting horizons. Furthermore, the impact of spatial-temporal granularity and cluster analysis on prediction accuracy is analyzed. The dataset used in this study comprises high-resolution electric consumption data acquired through smart meters on an hourly basis over a period of one year. The consumption data belongs to four multifamily residential buildings situated in an urban area of South Korea. To prove the effectiveness of the proposed forecasting model, it is compared with widely known machine learning models and deep learning variants. The results achieved by the proposed ensemble scheme verify that the model has learned the sequential behavior of electric consumption, producing superior performance with the lowest MAPE of 4.182 and 4.54 at building- and floor-level prediction, respectively. The experimental findings suggest that the model has efficiently captured the dynamic electric consumption characteristics to exploit ensemble model diversities and achieve lower forecasting error. The proposed ensemble forecasting scheme is well suited for predictive modeling and short-term load forecasting.
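The clustering step described above can be sketched with a minimal k-means. This pure-Python version takes explicit seed centroids so a run is deterministic; it is an illustration of the algorithm only, not the paper's implementation (the paper clusters real apartment-level consumption profiles):

```python
def kmeans(profiles, centroids, iters=10):
    """Minimal k-means: assign each profile to its nearest centroid,
    then recompute each centroid as the mean of its members.
    `centroids` are explicit initial seeds, so runs are deterministic."""
    def dist(a, b):
        # squared Euclidean distance between two equal-length profiles
        return sum((x - y) ** 2 for x, y in zip(a, b))

    clusters = [[] for _ in centroids]
    for _ in range(iters):
        clusters = [[] for _ in centroids]
        for p in profiles:
            j = min(range(len(centroids)), key=lambda j: dist(p, centroids[j]))
            clusters[j].append(p)
        new_centroids = []
        for j, members in enumerate(clusters):
            if members:
                new_centroids.append(
                    [sum(col) / len(members) for col in zip(*members)])
            else:
                new_centroids.append(centroids[j])  # keep an empty cluster's seed
        centroids = new_centroids
    return centroids, clusters
```

Each profile here would be an apartment's consumption vector (e.g. hourly readings); the resulting cluster members are what the described pipeline aggregates before training the building- and floor-level ensemble models.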

26 citations

Journal ArticleDOI
TL;DR: A comprehensive survey of emerging IoT technologies, machine learning, and blockchain for healthcare applications is presented, and the future directions identified in this domain can significantly help the scholarly community determine research gaps to address.
Abstract: Internet of Things (IoT) communication technologies have brought immense revolutions in various domains, especially in health monitoring systems. Machine learning techniques coupled with advanced artificial intelligence techniques detect patterns associated with diseases and health conditions. Presently, the scientific community is focused on enhancing IoT-enabled applications by integrating blockchain technology with machine learning models to benefit medical report management, drug traceability, tracking infectious diseases, etc. To date, contemporary state-of-the-art techniques have presented various efforts on the adaptability of blockchain and machine learning in IoT applications; however, various essential aspects must also be incorporated to achieve more robust performance. This study presents a comprehensive survey of emerging IoT technologies, machine learning, and blockchain for healthcare applications. The reviewed articles comprise a plethora of research articles published in the Web of Science. The analysis is focused on research articles related to keywords such as ‘machine learning’, ‘blockchain’, and ‘Internet of Things or IoT’, and these keywords conjoined with ‘healthcare’ and ‘health application’, in six famous publisher databases, namely IEEE Xplore, Nature, ScienceDirect, MDPI, SpringerLink, and Google Scholar. We selected and reviewed 263 articles in total. The topical survey of contemporary IoT-based models in healthcare domains is presented in three steps. Firstly, a detailed analysis of healthcare applications of IoT, blockchain, and machine learning demonstrates the importance of the discussed fields. Secondly, the adaptation mechanisms of machine learning and blockchain in IoT for healthcare applications are discussed to delineate the scope of the mentioned techniques in IoT domains. Finally, the challenges and issues of healthcare applications based on machine learning, blockchain, and IoT are discussed.
The presented future directions in this domain can significantly help the scholarly community determine research gaps to address.

18 citations

Journal ArticleDOI
TL;DR: In this article, a comprehensive survey on using AI-big data analytics in building automation and management systems (BAMSs) is presented, focusing on energy anomaly detection in residential and office buildings and energy and performance optimization in sports facilities.
Abstract: In theory, building automation and management systems (BAMSs) can provide all the components and functionalities required for analyzing and operating buildings. However, in reality, these systems can only ensure the control of heating, ventilation, and air conditioning (HVAC) systems. Therefore, many other tasks are left to the operator, e.g. evaluating buildings’ performance, detecting abnormal energy consumption, identifying the changes needed to improve efficiency, ensuring the security and privacy of end-users, etc. To that end, there has been a movement toward developing artificial intelligence (AI) big data analytic tools, as they offer various new and tailor-made solutions that are incredibly appropriate for practical buildings’ management. Typically, they can help the operator in (i) analyzing the tons of connected-equipment data and (ii) making intelligent, efficient, and on-time decisions to improve the buildings’ performance. This paper presents a comprehensive systematic survey on using AI-big data analytics in BAMSs. It covers various AI-based tasks, e.g. load forecasting, water management, indoor environmental quality monitoring, occupancy detection, etc. The first part of this paper adopts a well-designed taxonomy to overview existing frameworks. A comprehensive review is conducted of different aspects, including the learning process, building environment, computing platforms, and application scenarios. Moving on, a critical discussion is performed to identify current challenges. The second part aims at providing the reader with insights into the real-world application of AI-big data analytics. Thus, three case studies that demonstrate the use of AI-big data analytics in BAMSs are presented, focusing on energy anomaly detection in residential and office buildings and on energy and performance optimization in sports facilities.
Lastly, future directions and valuable recommendations are identified to improve the performance and reliability of BAMSs in intelligent buildings.
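One of the case-study tasks named above, detecting abnormal energy consumption, can be illustrated with a deliberately simple z-score detector. This is a generic baseline for the task, not a method taken from the survey:

```python
def zscore_anomalies(readings, threshold=3.0):
    """Flag indices of readings lying more than `threshold` standard
    deviations from the mean of the series. A simple stand-in for the
    energy anomaly detection task; real BAMS pipelines are more elaborate."""
    n = len(readings)
    mean = sum(readings) / n
    var = sum((x - mean) ** 2 for x in readings) / n
    std = var ** 0.5
    if std == 0:
        return []                      # constant series: nothing to flag
    return [i for i, x in enumerate(readings)
            if abs(x - mean) / std > threshold]
```

For example, a single 100 kWh spike in a series of steady 10 kWh hourly readings is flagged, while the steady readings are not.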

18 citations

References
Journal ArticleDOI
TL;DR: A novel, efficient, gradient-based method called long short-term memory (LSTM) is introduced, which can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units.
Abstract: Learning to store information over extended time intervals by recurrent backpropagation takes a very long time, mostly because of insufficient, decaying error backflow. We briefly review Hochreiter's (1991) analysis of this problem, then address it by introducing a novel, efficient, gradient-based method called long short-term memory (LSTM). Truncating the gradient where this does not do harm, LSTM can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units. Multiplicative gate units learn to open and close access to the constant error flow. LSTM is local in space and time; its computational complexity per time step and weight is O(1). Our experiments with artificial data involve local, distributed, real-valued, and noisy pattern representations. In comparisons with real-time recurrent learning, backpropagation through time, recurrent cascade correlation, Elman nets, and neural sequence chunking, LSTM leads to many more successful runs, and learns much faster. LSTM also solves complex, artificial long-time-lag tasks that have never been solved by previous recurrent network algorithms.
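The constant error carousel and multiplicative gates described above can be written out for a single-unit cell. This scalar sketch follows the standard LSTM cell formulation; the weight layout (a dict of input-weight, recurrent-weight, and bias triples) is a notational convenience for illustration:

```python
import math

def lstm_step(x, h_prev, c_prev, w):
    """One step of a single-unit LSTM cell with scalar weights.
    w maps each gate name to an (input-weight, recurrent-weight, bias) triple."""
    def gate(name, squash):
        wx, wh, b = w[name]
        return squash(wx * x + wh * h_prev + b)

    sigm = lambda z: 1.0 / (1.0 + math.exp(-z))
    i = gate("i", sigm)            # input gate: admit new information
    f = gate("f", sigm)            # forget gate: decay the carousel
    o = gate("o", sigm)            # output gate: expose the cell state
    g = gate("g", math.tanh)       # candidate cell update
    c = f * c_prev + i * g         # constant error carousel update
    h = o * math.tanh(c)           # hidden state passed to the next step
    return h, c
```

With the forget gate saturated open and the input gate closed, the cell state `c` is carried forward essentially unchanged, which is exactly the constant-error-flow mechanism the abstract describes.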

72,897 citations

Journal ArticleDOI
TL;DR: A fast, greedy algorithm is derived that can learn deep, directed belief networks one layer at a time, provided the top two layers form an undirected associative memory.
Abstract: We show how to use "complementary priors" to eliminate the explaining-away effects that make inference difficult in densely connected belief nets that have many hidden layers. Using complementary priors, we derive a fast, greedy algorithm that can learn deep, directed belief networks one layer at a time, provided the top two layers form an undirected associative memory. The fast, greedy algorithm is used to initialize a slower learning procedure that fine-tunes the weights using a contrastive version of the wake-sleep algorithm. After fine-tuning, a network with three hidden layers forms a very good generative model of the joint distribution of handwritten digit images and their labels. This generative model gives better digit classification than the best discriminative learning algorithms. The low-dimensional manifolds on which the digits lie are modeled by long ravines in the free-energy landscape of the top-level associative memory, and it is easy to explore these ravines by using the directed connections to display what the associative memory has in mind.

15,055 citations

Proceedings Article
08 Dec 2014
TL;DR: The authors used a multilayered Long Short-Term Memory (LSTM) to map the input sequence to a vector of a fixed dimensionality, and then another deep LSTM to decode the target sequence from the vector.
Abstract: Deep Neural Networks (DNNs) are powerful models that have achieved excellent performance on difficult learning tasks. Although DNNs work well whenever large labeled training sets are available, they cannot be used to map sequences to sequences. In this paper, we present a general end-to-end approach to sequence learning that makes minimal assumptions on the sequence structure. Our method uses a multilayered Long Short-Term Memory (LSTM) to map the input sequence to a vector of a fixed dimensionality, and then another deep LSTM to decode the target sequence from the vector. Our main result is that on an English to French translation task from the WMT-14 dataset, the translations produced by the LSTM achieve a BLEU score of 34.8 on the entire test set, where the LSTM's BLEU score was penalized on out-of-vocabulary words. Additionally, the LSTM did not have difficulty on long sentences. For comparison, a phrase-based SMT system achieves a BLEU score of 33.3 on the same dataset. When we used the LSTM to rerank the 1000 hypotheses produced by the aforementioned SMT system, its BLEU score increased to 36.5, which is close to the previous state of the art. The LSTM also learned sensible phrase and sentence representations that are sensitive to word order and are relatively invariant to the active and the passive voice. Finally, we found that reversing the order of the words in all source sentences (but not target sentences) improved the LSTM's performance markedly, because doing so introduced many short-term dependencies between the source and the target sentence which made the optimization problem easier.
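The source-reversal trick from the last finding is a one-line preprocessing step over (source, target) token pairs. A sketch, with illustrative token lists:

```python
def reverse_source(pairs):
    """Reverse the token order of each source sentence while leaving the
    target intact, so the first source word ends up adjacent to the first
    target word, creating the short-term dependencies described above."""
    return [(list(reversed(src)), tgt) for src, tgt in pairs]
```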

12,299 citations

Journal ArticleDOI
TL;DR: Recent work in the area of unsupervised feature learning and deep learning is reviewed, covering advances in probabilistic models, autoencoders, manifold learning, and deep networks.
Abstract: The success of machine learning algorithms generally depends on data representation, and we hypothesize that this is because different representations can entangle and hide more or less the different explanatory factors of variation behind the data. Although specific domain knowledge can be used to help design representations, learning with generic priors can also be used, and the quest for AI is motivating the design of more powerful representation-learning algorithms implementing such priors. This paper reviews recent work in the area of unsupervised feature learning and deep learning, covering advances in probabilistic models, autoencoders, manifold learning, and deep networks. This motivates longer term unanswered questions about the appropriate objectives for learning good representations, for computing representations (i.e., inference), and the geometrical connections between representation learning, density estimation, and manifold learning.

11,201 citations

Journal ArticleDOI
TL;DR: It is concluded that split-sample validation is inefficient, and bootstrapping is recommended for estimation of internal validity of a predictive logistic regression model.
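The bootstrap internal validation this reference recommends is commonly implemented as an optimism estimate: fit on a bootstrap resample, measure how much better the model scores on that resample than on the original data, and subtract the averaged gap from the apparent performance. This generic sketch is not necessarily the cited paper's exact procedure, and `fit`/`score` are caller-supplied placeholders:

```python
import random

def bootstrap_optimism(data, fit, score, n_boot=50, seed=0):
    """Optimism-corrected performance via bootstrap resampling.
    `fit(data)` returns a model; `score(model, data)` returns a
    performance value (higher is better)."""
    rng = random.Random(seed)                    # seeded for reproducibility
    apparent = score(fit(data), data)            # performance on training data
    optimism = 0.0
    for _ in range(n_boot):
        boot = [rng.choice(data) for _ in data]  # resample with replacement
        m = fit(boot)
        # apparent score on the resample minus score on the original data
        optimism += score(m, boot) - score(m, data)
    return apparent - optimism / n_boot          # corrected estimate
```

Unlike split-sample validation, every observation contributes to both fitting and testing, which is the efficiency advantage the reference argues for.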

2,155 citations