Author

Antonio Sánchez-Esguevillas

Bio: Antonio Sánchez-Esguevillas is an academic researcher at the University of Valladolid. The author has contributed to research on topics including smart grids and the Internet. He has an h-index of 21 and has co-authored 52 publications receiving 1,968 citations. Previous affiliations include Telefónica and Washington State University.


Papers
Journal ArticleDOI
TL;DR: A new NTC technique, based on a combination of deep learning models and applicable to IoT traffic, provides better detection results than alternative algorithms without requiring the feature engineering that is usual when applying other models.
Abstract: A network traffic classifier (NTC) is an important part of current network monitoring systems, its task being to infer the network service that is currently used by a communication flow (e.g., HTTP and SIP). The detection is based on a number of features associated with the communication flow, for example, source and destination ports and bytes transmitted per packet. NTC is important because much information about a current network flow can be learned and anticipated just by knowing its network service (required latency, traffic volume, and possible duration). This is of particular interest for the management and monitoring of Internet of Things (IoT) networks, where NTC will help to segregate the traffic and behavior of heterogeneous devices and services. In this paper, we present a new technique for NTC, based on a combination of deep learning models, that can be used for IoT traffic. We show that a recurrent neural network (RNN) combined with a convolutional neural network (CNN) provides the best detection results. The natural domain for a CNN, which is image processing, has been extended to NTC in an easy and natural way. We show that the proposed method provides better detection results than alternative algorithms without requiring any feature engineering, which is usual when applying other models. A complete study is presented on several architectures that integrate a CNN and an RNN, including the impact of the features chosen and the length of the network flows used for training.

469 citations
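The abstract above describes the architecture only at a high level: a CNN applied over the packet sequence of a flow, feeding an RNN, with no hand-crafted feature engineering. As a rough, hypothetical sketch of that kind of model, not the authors' exact configuration, the following PyTorch snippet assumes 6 per-packet features, flows truncated to 20 packets, and 15 target services; all layer sizes are placeholders.

```python
# Hypothetical sketch of a CNN + RNN flow classifier (PyTorch).
# Input: a flow represented as a sequence of per-packet feature vectors.
import torch
import torch.nn as nn

class CnnRnnClassifier(nn.Module):
    def __init__(self, n_features=6, n_services=15):
        super().__init__()
        # 1-D convolution over the packet sequence extracts local patterns,
        # analogous to treating the flow as a one-dimensional "image".
        self.conv = nn.Sequential(
            nn.Conv1d(n_features, 32, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        # The GRU summarises the temporal ordering of the convolved sequence.
        self.rnn = nn.GRU(input_size=32, hidden_size=64, batch_first=True)
        self.head = nn.Linear(64, n_services)

    def forward(self, x):                  # x: (batch, flow_len, n_features)
        x = self.conv(x.transpose(1, 2))   # -> (batch, 32, flow_len)
        _, h = self.rnn(x.transpose(1, 2)) # -> (1, batch, 64)
        return self.head(h.squeeze(0))     # logits over network services

# Example: classify a batch of 8 flows of 20 packets with 6 features each.
model = CnnRnnClassifier()
logits = model(torch.randn(8, 20, 6))
```

A classifier of this kind would be trained with an ordinary cross-entropy loss over labeled flows; the point the paper stresses is that the per-packet features are used directly, with no manual feature engineering.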

Journal ArticleDOI
TL;DR: This review discusses the most relevant studies on electric demand prediction over the last 40 years, and presents the different models used as well as the future trends, and analyzes the latest studies on demand forecasting in the future environments that emerge from the usage of smart grids.
Abstract: Recently there has been a significant proliferation in the use of forecasting techniques, mainly due to the increased availability and power of computation systems and, in particular, to the usage of personal computers. This is also true for power network systems, where energy demand forecasting has been an important field in order to allow generation planning and adaptation. Apart from this quantitative progression, there has also been a change in the type of models proposed and used. In the 1970s, the usage of non-linear techniques was generally not popular among scientists and engineers. However, in the last two decades they have become very important techniques for solving complex problems that would be very difficult to tackle otherwise. With the recent emergence of smart grids, new environments have appeared that are capable of integrating demand, generation, and storage. These employ intelligent and adaptive elements that require more advanced techniques for accurate and precise demand and generation forecasting in order to work optimally. This review discusses the most relevant studies on electric demand prediction over the last 40 years, and presents the different models used as well as the future trends. Additionally, it analyzes the latest studies on demand forecasting in the future environments that emerge from the usage of smart grids.

398 citations

Journal ArticleDOI
26 Aug 2017 - Sensors
TL;DR: This work is unique in the network intrusion detection field, presenting the first application of a conditional variational autoencoder and providing the first algorithm to perform feature recovery.
Abstract: The purpose of a Network Intrusion Detection System is to detect intrusive, malicious activities or policy violations in a host or a host's network. In current networks, such systems are becoming more important as the number and variety of attacks increase along with the volume and sensitivity of the information exchanged. This is of particular interest to Internet of Things networks, where an intrusion detection system will be critical as the economic importance of IoT continues to grow, making it a focus of future intrusion attacks. In this work, we propose a new network intrusion detection method that is appropriate for an Internet of Things network. The proposed method is based on a conditional variational autoencoder with a specific architecture that integrates the intrusion labels inside the decoder layers. The proposed method is less complex than other unsupervised methods based on a variational autoencoder, and it provides better classification results than other familiar classifiers. More importantly, the method can perform feature reconstruction, that is, it is able to recover missing features from incomplete training datasets. We demonstrate that the reconstruction accuracy is very high, even for categorical features with a high number of distinct values. This work is unique in the network intrusion detection field, presenting the first application of a conditional variational autoencoder and providing the first algorithm to perform feature recovery.

193 citations
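The distinctive architectural idea stated in the abstract is that the intrusion labels are injected into the decoder rather than the encoder. A minimal, hypothetical PyTorch sketch of a conditional VAE arranged that way could look as follows; the feature, label, and latent dimensions are illustrative placeholders, not the authors' configuration.

```python
# Hypothetical conditional VAE where labels enter only the decoder (PyTorch).
import torch
import torch.nn as nn

class ConditionalVAE(nn.Module):
    def __init__(self, n_features=100, n_labels=5, latent=20):
        super().__init__()
        # Encoder sees only the network features (no label).
        self.enc = nn.Sequential(nn.Linear(n_features, 64), nn.ReLU())
        self.mu = nn.Linear(64, latent)
        self.logvar = nn.Linear(64, latent)
        # Decoder receives the latent code concatenated with the intrusion
        # label, i.e. the "labels inside the decoder" idea from the abstract.
        self.dec = nn.Sequential(
            nn.Linear(latent + n_labels, 64), nn.ReLU(),
            nn.Linear(64, n_features),
        )

    def forward(self, x, y_onehot):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterisation
        x_rec = self.dec(torch.cat([z, y_onehot], dim=1))
        return x_rec, mu, logvar

def vae_loss(x, x_rec, mu, logvar):
    # Reconstruction term plus KL divergence to the standard normal prior.
    rec = nn.functional.mse_loss(x_rec, x, reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kld
```

Classification and feature recovery could then be approached, along the lines the abstract suggests, by searching over the label (or over a missing feature value) for the choice that yields the lowest reconstruction error.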

Journal ArticleDOI
TL;DR: This work presents a novel application of several deep reinforcement learning (DRL) algorithms to intrusion detection using a labeled dataset, and shows that DRL can improve the results of intrusion detection in comparison with current machine learning techniques.
Abstract: The application of new techniques to increase the performance of intrusion detection systems is crucial in modern data networks with a growing threat of cyber-attacks. These attacks impose a greater risk on network services that are increasingly important from a social and economic point of view. In this work we present a novel application of several deep reinforcement learning (DRL) algorithms to intrusion detection using a labeled dataset. We show how to perform supervised learning based on a DRL framework. The implementation of a reward function aligned with the detection of intrusions is extremely difficult for Intrusion Detection Systems (IDS), since there is no automatic way to identify intrusions. Usually, the identification is performed manually and stored in datasets of network features associated with intrusion events. These datasets are used to train supervised machine learning algorithms for classifying intrusion events. In this paper we apply DRL using two of these datasets: the NSL-KDD and AWID datasets. As a novel approach, we have made a conceptual modification of the classic DRL paradigm (based on interaction with a live environment), replacing the environment with a sampling function over recorded training intrusions. This new pseudo-environment, in addition to sampling the training dataset, generates rewards based on the detection errors found during training. We present the results of applying our technique to four of the most relevant DRL models: Deep Q-Network (DQN), Double Deep Q-Network (DDQN), Policy Gradient (PG) and Actor-Critic (AC). The best results are obtained with the DDQN algorithm. We show that DRL, with our model and some parameter adjustments, can improve the results of intrusion detection in comparison with current machine learning techniques. Moreover, the classifier obtained with DRL is faster than alternative models. A comprehensive comparison of the results obtained with other machine learning models is provided for the AWID and NSL-KDD datasets, together with the lessons learned from the application of several design alternatives to the four DRL models.

176 citations
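The conceptual change described in the abstract, replacing a live environment with a sampling function over a labeled dataset that emits rewards derived from detection errors, can be illustrated with a short pseudo-environment. Everything below (the class name, the +1/-1 reward, the single-step episodes, the commented agent loop) is an illustrative assumption rather than the authors' implementation.

```python
# Hypothetical "pseudo-environment" for supervised intrusion detection as DRL.
import numpy as np

class DatasetEnv:
    """Samples labeled flows instead of interacting with a live network."""
    def __init__(self, X, y):
        self.X, self.y = X, y

    def reset(self):
        self.i = np.random.randint(len(self.X))   # sample a recorded flow
        return self.X[self.i]

    def step(self, action):
        # Reward is derived from the detection error on the sampled flow:
        # +1 for a correct intrusion/normal decision, -1 otherwise.
        reward = 1.0 if action == self.y[self.i] else -1.0
        next_state = self.reset()                 # episodes of length one
        return next_state, reward, True

# A DQN/DDQN-style agent would then be trained against DatasetEnv exactly as
# against any other environment, e.g.:
#   state = env.reset()
#   action = agent.act(state)                     # epsilon-greedy over Q-values
#   next_state, reward, done = env.step(action)
#   agent.remember(state, action, reward, next_state, done); agent.learn()
```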

Journal ArticleDOI
TL;DR: This article presents a multi-agent system model for virtual power plants, a new power plant concept in which generation no longer occurs in big installations, but is the result of the cooperation of smaller and more intelligent elements.
Abstract: Recent technological advances in the power generation and information technologies areas are helping to change the modern electricity supply system in order to comply with higher energy efficiency and sustainability standards. Smart grids are an emerging trend that introduce intelligence in the power grid to optimize resource usage. In order for this intelligence to be effective, it is necessary to retrieve enough information about the grid operation together with other context data such as environmental variables, and intelligently modify the behavior of the network elements accordingly. This article presents a multi-agent system model for virtual power plants, a new power plant concept in which generation no longer occurs in big installations, but is the result of the cooperation of smaller and more intelligent elements. The proposed model is not only focused on the management of the different elements, but includes a set of agents embedded with artificial neural networks for collaborative forecasting of disaggregated energy demand of domestic end users, the results of which are also shown in this article.

173 citations
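As a toy illustration of the architecture outlined in the abstract (agents embedding artificial neural networks that forecast disaggregated household demand, aggregated at the virtual power plant level), one might sketch the following; the scikit-learn MLPRegressor, the 24-hour lag window, and the simple summing aggregation are assumptions for illustration only.

```python
# Hypothetical sketch: each household agent holds its own ANN demand forecaster,
# and the virtual power plant aggregates the agents' disaggregated forecasts.
import numpy as np
from sklearn.neural_network import MLPRegressor

class HouseholdAgent:
    def __init__(self, lags=24):
        self.lags = lags
        self.model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=500)

    def train(self, demand):                       # demand: 1-D hourly series
        X = np.array([demand[i:i + self.lags]
                      for i in range(len(demand) - self.lags)])
        y = demand[self.lags:]
        self.model.fit(X, y)

    def forecast_next_hour(self, recent):          # last `lags` hourly readings
        return float(self.model.predict([recent])[0])

class VirtualPowerPlant:
    def __init__(self, agents):
        self.agents = agents

    def aggregate_forecast(self, recent_by_agent):
        # Total expected demand is the sum of the agents' local forecasts.
        return sum(a.forecast_next_hour(r)
                   for a, r in zip(self.agents, recent_by_agent))
```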


Cited by
01 Jan 1990
TL;DR: An overview of the self-organizing map algorithm, on which the papers in this issue are based, is presented in this article.
Abstract: An overview of the self-organizing map algorithm, on which the papers in this issue are based, is presented in this article.

2,933 citations
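For readers unfamiliar with the algorithm, the core of a self-organizing map fits in a few lines: find the best-matching unit for each input and pull that unit and its grid neighbours toward the input. The numpy sketch below uses an illustrative 10x10 grid, learning rate, and neighbourhood radius; it is a minimal sketch of the standard update rule, not code from the cited overview.

```python
# Minimal self-organizing map update step (illustrative parameters).
import numpy as np

rng = np.random.default_rng(0)
grid_w, grid_h, dim = 10, 10, 3          # 10x10 map of 3-dimensional weights
weights = rng.random((grid_w, grid_h, dim))
coords = np.stack(np.meshgrid(np.arange(grid_w), np.arange(grid_h),
                              indexing="ij"), axis=-1)

def som_step(x, lr=0.1, radius=2.0):
    """Move the best-matching unit and its neighbours toward the input x."""
    dists = np.linalg.norm(weights - x, axis=-1)
    bmu = np.unravel_index(np.argmin(dists), dists.shape)   # best-matching unit
    grid_d2 = np.sum((coords - np.array(bmu)) ** 2, axis=-1)
    h = np.exp(-grid_d2 / (2 * radius ** 2))                # neighbourhood function
    weights[...] = weights + lr * h[..., None] * (x - weights)

for x in rng.random((1000, dim)):        # train on random 3-D samples
    som_step(x)
```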

Journal ArticleDOI
TL;DR: This paper bridges the gap between deep learning and mobile and wireless networking research by presenting a comprehensive survey of the crossovers between the two areas, and provides an encyclopedic review of mobile and wireless networking research based on deep learning, categorized by domain.
Abstract: The rapid uptake of mobile devices and the rising popularity of mobile applications and services pose unprecedented demands on mobile and wireless networking infrastructure. Upcoming 5G systems are evolving to support exploding mobile traffic volumes, real-time extraction of fine-grained analytics, and agile management of network resources, so as to maximize user experience. Fulfilling these tasks is challenging, as mobile environments are increasingly complex, heterogeneous, and evolving. One potential solution is to resort to advanced machine learning techniques, in order to help manage the rise in data volumes and algorithm-driven applications. The recent success of deep learning underpins new and powerful tools that tackle problems in this space. In this paper, we bridge the gap between deep learning and mobile and wireless networking research, by presenting a comprehensive survey of the crossovers between the two areas. We first briefly introduce essential background and state-of-the-art in deep learning techniques with potential applications to networking. We then discuss several techniques and platforms that facilitate the efficient deployment of deep learning onto mobile systems. Subsequently, we provide an encyclopedic review of mobile and wireless networking research based on deep learning, which we categorize by different domains. Drawing from our experience, we discuss how to tailor deep learning to mobile environments. We complete this survey by pinpointing current challenges and open future directions for research.

975 citations

Journal ArticleDOI
TL;DR: In this article, the authors provide a thorough overview on using a class of advanced machine learning techniques, namely deep learning (DL), to facilitate the analytics and learning in the IoT domain.
Abstract: In the era of the Internet of Things (IoT), an enormous number of sensing devices collect and/or generate various sensory data over time for a wide range of fields and applications. Based on the nature of the application, these devices will result in big or fast/real-time data streams. Applying analytics over such data streams to discover new information, predict future insights, and make control decisions is a crucial process that makes IoT a worthy paradigm for businesses and a quality-of-life-improving technology. In this paper, we provide a thorough overview of using a class of advanced machine learning techniques, namely deep learning (DL), to facilitate analytics and learning in the IoT domain. We start by articulating IoT data characteristics and identifying two major treatments for IoT data from a machine learning perspective, namely IoT big data analytics and IoT streaming data analytics. We also discuss why DL is a promising approach to achieve the desired analytics in these types of data and applications. The potential of using emerging DL techniques for IoT data analytics is then discussed, and its promises and challenges are introduced. We present a comprehensive background on different DL architectures and algorithms. We also analyze and summarize major reported research attempts that leveraged DL in the IoT domain. The smart IoT devices that have incorporated DL in their intelligence background are also discussed. DL implementation approaches on the fog and cloud centers in support of IoT applications are also surveyed. Finally, we shed light on some challenges and potential directions for future research. At the end of each section, we highlight the lessons learned based on our experiments and review of the recent literature.

903 citations

Journal ArticleDOI
TL;DR: This paper provides a comprehensive review of various DR schemes and programs, based on the motivations offered to the consumers to participate in the program, and presents various optimization models for the optimal control of the DR strategies that have been proposed so far.
Abstract: The smart grid concept continues to evolve and various methods have been developed to enhance the energy efficiency of the electricity infrastructure. Demand Response (DR) is considered as the most cost-effective and reliable solution for the smoothing of the demand curve, when the system is under stress. DR refers to a procedure that is applied to motivate changes in the customers' power consumption habits, in response to incentives regarding the electricity prices. In this paper, we provide a comprehensive review of various DR schemes and programs, based on the motivations offered to the consumers to participate in the program. We classify the proposed DR schemes according to their control mechanism, to the motivations offered to reduce the power consumption and to the DR decision variable. We also present various optimization models for the optimal control of the DR strategies that have been proposed so far. These models are also categorized, based on the target of the optimization procedure. The key aspects that should be considered in the optimization problem are the system's constraints and the computational complexity of the applied optimization algorithm.

854 citations
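The review itself is a survey, but a self-contained toy example helps make the notion of a DR optimization model concrete: the linear program below schedules a fixed amount of flexible load into the cheapest hours subject to a per-hour limit. The prices, the hourly cap and the total flexible energy are invented for illustration and do not come from the paper.

```python
# Toy demand-response optimisation: schedule flexible load into cheap hours.
import numpy as np
from scipy.optimize import linprog

prices = np.array([0.10, 0.08, 0.25, 0.30, 0.12, 0.09])  # EUR/kWh per hour (illustrative)
total_flexible_kwh = 10.0                                  # energy that must be served
max_per_hour_kwh = 4.0                                     # comfort/appliance limit

# Minimise total cost: sum_h price[h] * load[h]
# subject to  sum_h load[h] = total_flexible_kwh,  0 <= load[h] <= max_per_hour_kwh
res = linprog(
    c=prices,
    A_eq=np.ones((1, len(prices))),
    b_eq=[total_flexible_kwh],
    bounds=[(0, max_per_hour_kwh)] * len(prices),
)
print("hourly schedule (kWh):", np.round(res.x, 2))
print("total cost (EUR):", round(res.fun, 2))
```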

Journal ArticleDOI
TL;DR: In this article, the authors provide a comprehensive and systematic literature review of Artificial Intelligence based short-term load forecasting techniques; the major objective of the study is to review, identify, evaluate and analyze the performance of AI-based load forecast models and to identify research gaps.
Abstract: Electrical load forecasting plays a vital role in achieving next-generation power system concepts such as the smart grid, efficient energy management and better power system planning. As a result, high forecast accuracy is required for multiple time horizons that are associated with regulation, dispatching, scheduling and unit commitment of the power grid. Artificial Intelligence (AI) based techniques are being developed and deployed worldwide in a variety of applications because of their superior capability to handle complex input-output relationships. This paper provides a comprehensive and systematic literature review of Artificial Intelligence based short-term load forecasting techniques. The major objective of this study is to review, identify, evaluate and analyze the performance of AI-based load forecast models and to identify research gaps. The accuracy of an ANN-based forecast model is found to depend on a number of factors, such as the forecast model architecture, the input combination, the activation functions and the training algorithm of the network, as well as other exogenous variables affecting the forecast model inputs. The published literature presented in this paper shows the potential of AI techniques for effective load forecasting in order to realize the concepts of smart grids and buildings.

673 citations
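The abstract notes that the accuracy of an ANN forecaster depends on the model architecture, the input combination, the activation function and the training algorithm. The minimal, hypothetical scikit-learn sketch below makes those choices explicit as hyperparameters of a short-term load forecaster; the lag window, the calendar/temperature inputs and all hyperparameter values are illustrative assumptions, not taken from the review.

```python
# Hypothetical short-term load forecaster highlighting the design choices the
# review discusses: architecture, inputs, activation function, training algorithm.
import numpy as np
from sklearn.neural_network import MLPRegressor

def build_features(load, temperature, lags=24):
    """Input combination: lagged load, hour of day, and temperature."""
    rows = []
    for t in range(lags, len(load)):
        rows.append(np.concatenate([load[t - lags:t],      # lagged demand
                                     [t % 24],              # hour-of-day proxy
                                     [temperature[t]]]))    # exogenous input
    return np.array(rows), load[lags:]

# Architecture, activation function and training algorithm as explicit choices.
model = MLPRegressor(hidden_layer_sizes=(64, 32),  # architecture
                     activation="relu",            # activation function
                     solver="adam",                # training algorithm
                     max_iter=1000)

# Illustrative synthetic data: one month of hourly load and temperature.
rng = np.random.default_rng(0)
hours = np.arange(24 * 30)
load = 100 + 20 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 2, len(hours))
temp = 15 + 10 * np.sin(2 * np.pi * hours / (24 * 365)) + rng.normal(0, 1, len(hours))

X, y = build_features(load, temp)
model.fit(X[:-24], y[:-24])                        # hold out the last day
print("next-day MAE:", np.mean(np.abs(model.predict(X[-24:]) - y[-24:])))
```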