Author

Charles Tripp

Bio: Charles Tripp is an academic researcher at the National Renewable Energy Laboratory. The author has contributed to research in topics including reinforcement learning and modular design. The author has an h-index of 1 and has co-authored 4 publications receiving 3 citations.

Papers
Proceedings ArticleDOI
17 Nov 2020
TL;DR: This study proposes a cost-effective approach for training RL control policies for homes at scale, which ultimately reduces the controller's implementation costs, increases the adoption rate of RL controllers, and makes more homes grid-interactive.
Abstract: To harness the large amount of untapped resources on the demand side, smart home technology plays a vital role in solving the "last mile" problem in the smart grid. Reinforcement learning (RL), which has demonstrated outstanding performance in solving many sequential decision-making problems, is a strong candidate for smart home control. For instance, many studies have investigated the appliance scheduling problem under dynamic pricing schemes. Building on these, this study aims to provide an affordable solution that encourages a higher smart home adoption rate. Specifically, we investigate combining transfer learning (TL) with RL to reduce the cost of training an optimal RL control policy. Given an optimal policy for a benchmark home, TL can jump-start the RL training of a policy for a new home with different appliances and user preferences. Simulation results show that by leveraging TL, RL training converges faster and requires much less computing time for new homes that are similar to the benchmark home. In all, this study proposes a cost-effective approach for training RL control policies for homes at scale, which ultimately reduces the controller's implementation costs, increases the adoption rate of RL controllers, and makes more homes grid-interactive.
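
A minimal sketch of the jump-start idea, assuming both homes share the same policy-network architecture; the network shape, dimensions, and training details below are illustrative assumptions, not the paper's implementation:

```python
import torch
import torch.nn as nn

class PolicyNet(nn.Module):
    """Small MLP mapping a home's state (e.g. price, appliance status) to action logits."""
    def __init__(self, state_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, state):
        return self.net(state)

# Policy assumed already trained to convergence on the benchmark home.
benchmark_policy = PolicyNet(state_dim=12, n_actions=8)

# New home with the same state/action layout: copy the benchmark weights
# instead of starting from a random initialization (the TL "jump-start"),
# then continue ordinary RL training in the new home's environment.
new_home_policy = PolicyNet(state_dim=12, n_actions=8)
new_home_policy.load_state_dict(benchmark_policy.state_dict())

optimizer = torch.optim.Adam(new_home_policy.parameters(), lr=1e-4)
# ...standard policy-gradient updates follow; when the homes are similar,
# convergence typically needs far fewer episodes than training from scratch.
```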

13 citations

Posted ContentDOI
26 Jul 2021
TL;DR: It is argued that the evolution strategies class of derivative-free optimization methods is well-suited to the parameterized hybrid layout problem, and it is demonstrated how hard layout constraints can be transformed into soft constraints that are amenable to optimization using evolution strategies.
Abstract: Wind plant layout optimization is a difficult, complex problem with a large number of variables and many local minima. Layout optimization only becomes more difficult with the addition of solar generation. In this paper, we propose a parameterized approach to wind and solar hybrid power plant layout optimization that greatly reduces problem dimensionality while guaranteeing that the generated layouts have a desirable regular structure. We argue that the evolution strategies class of derivative-free optimization methods is well-suited to the parameterized hybrid layout problem, and we demonstrate how hard layout constraints (e.g. placement restrictions) can be transformed into soft constraints that are amenable to optimization using evolution strategies. Next, we present experimental results on four test sites, demonstrating the viability, reliability, and effectiveness of the parameterized ES approach for generating optimized hybrid plant layouts. Completing the tool kit for parameterized ES layout generation, we include a brief tutorial describing how the parameterized ES approach can be inspected, understood, and debugged when applied to hybrid plant layouts.
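
A toy sketch of the hard-to-soft constraint transformation paired with a simple evolution strategy; the objective, penalty, and layout parameters here are hypothetical stand-ins for the paper's parameterization:

```python
import numpy as np

rng = np.random.default_rng(0)

def layout_energy(params: np.ndarray) -> float:
    """Placeholder for the annual energy yield of a parameterized layout."""
    return -np.sum((params - 0.5) ** 2)  # toy surrogate with one optimum

def exclusion_penalty(params: np.ndarray) -> float:
    """Hard placement restriction 0 <= p <= 1 relaxed into a smooth penalty:
    zero when satisfied, growing quadratically with violation distance."""
    violation = np.maximum(0.0, params - 1.0) + np.maximum(0.0, -params)
    return np.sum(violation ** 2)

def fitness(params: np.ndarray, weight: float = 100.0) -> float:
    # Soft-constrained objective: energy minus weighted constraint violation.
    return layout_energy(params) - weight * exclusion_penalty(params)

# Simple (1, lambda) evolution strategy over the layout parameters
# (e.g. row spacing, row angle, offsets in the paper's parameterization).
x = rng.uniform(0, 1, size=6)
sigma = 0.2
for gen in range(200):
    children = x + sigma * rng.standard_normal((32, x.size))
    scores = np.array([fitness(c) for c in children])
    x = children[np.argmax(scores)]  # comma selection: best child survives

print("best parameters:", x, "fitness:", fitness(x))
```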

4 citations

Proceedings ArticleDOI
19 Sep 2021
TL;DR: In this article, three deep RL methods, proximal policy optimization (PPO), Ape-X deep Q-network (DQN), and asynchronous advantage actor-critic (A3C), were explored for ramp meter signal control to maximize vehicle speed and traffic throughput, as well as to minimize energy consumption and emissions at freeway on-ramp merging areas in a connected environment.
Abstract: Freeway bottlenecks such as on-ramp merging areas account for about 40% of recurring freeway congestion. It is generally agreed that building more roads and adding more lanes to existing infrastructure does not solve the congestion problem, so dynamic traffic control measures offer a more cost-effective alternative. Ramp meters, traffic signal devices that regulate traffic flow entering freeways, are among the most effective measures for mitigating congestion at on-ramp merging areas. The confluence of deep reinforcement learning (RL) and connectivity provides a possible path to advance ramp meter signal control. Deep RL is a family of machine-learning methods that enable an agent to learn from its environment and improve its performance. In this study, three deep RL methods, proximal policy optimization (PPO), Ape-X deep Q-network (DQN), and asynchronous advantage actor-critic (A3C), are explored for ramp meter signal control to maximize vehicle speed and traffic throughput, as well as to minimize energy consumption and emissions at freeway on-ramp merging areas in a connected environment. The low computational requirements and scalability of deep RL make it a powerful optimization tool for time-sensitive applications such as ramp meter signal control. The results of this study show that the deep RL methods yield superior performance to both a fixed-time controller and ALINEA, a state-of-the-art feedback controller.
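
A hedged sketch of the kind of multi-objective reward such an agent could optimize; the measurement fields and weights below are illustrative assumptions, not the study's calibrated values:

```python
from dataclasses import dataclass

@dataclass
class MergeAreaMeasurements:
    mean_speed_mps: float   # average speed in the merging area this step
    throughput_veh: float   # vehicles discharged this control step
    energy_kwh: float       # fleet energy consumed this step
    emissions_g: float      # estimated emissions this step

def ramp_meter_reward(m: MergeAreaMeasurements,
                      w_speed: float = 1.0, w_flow: float = 1.0,
                      w_energy: float = 0.1, w_emis: float = 0.01) -> float:
    """Reward maximizing speed and throughput while penalizing energy use
    and emissions; an RL agent (PPO, Ape-X DQN, A3C) would receive this
    after each red/green metering decision."""
    return (w_speed * m.mean_speed_mps
            + w_flow * m.throughput_veh
            - w_energy * m.energy_kwh
            - w_emis * m.emissions_g)

# Example control step with made-up measurements:
r = ramp_meter_reward(MergeAreaMeasurements(25.0, 40.0, 55.0, 900.0))
print("reward:", r)
```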

2 citations

Posted Content
27 May 2021
TL;DR: In this article, a modular framework for fleet rebalancing based on model-free reinforcement learning (RL) is proposed, which can leverage an existing dispatch method to minimize system cost.
Abstract: Mobility on demand (MoD) systems show great promise in realizing flexible and efficient urban transportation. However, significant technical challenges arise from the operational decision making associated with MoD vehicle dispatch and fleet rebalancing. For this reason, operators tend to employ simplified algorithms that have been demonstrated to work well in a particular setting. To help bridge the gap between novel and existing methods, we propose a modular framework for fleet rebalancing based on model-free reinforcement learning (RL) that can leverage an existing dispatch method to minimize system cost. In particular, by treating dispatch as part of the environment dynamics, a centralized agent can learn to intermittently direct the dispatcher to reposition free vehicles and mitigate fleet imbalance. We formulate the RL state and action spaces as distributions over a grid partitioning of the operating area, making the framework scalable and avoiding the complexities associated with multiagent RL. Numerical experiments using real-world trip and network data demonstrate that this approach has several distinct advantages over baseline methods, including improved system cost, a high degree of adaptability to the selected dispatch method, and the ability to perform scale-invariant transfer learning between problem instances with similar vehicle and request distributions.
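
A small sketch of the grid-distribution representation: positions are reduced to normalized histograms over cells, so the state is independent of fleet size, which is what enables the scale-invariant transfer the abstract mentions. Grid resolution and the random data below are illustrative assumptions:

```python
import numpy as np

def to_grid_distribution(xy: np.ndarray, bounds, n: int = 8) -> np.ndarray:
    """Map (x, y) positions to a normalized n x n occupancy histogram."""
    (xmin, xmax), (ymin, ymax) = bounds
    hist, _, _ = np.histogram2d(
        xy[:, 0], xy[:, 1], bins=n,
        range=[[xmin, xmax], [ymin, ymax]])
    total = hist.sum()
    return hist / total if total > 0 else hist

# State: where the free vehicles and open requests currently are.
vehicles = np.random.rand(120, 2)   # 120 idle vehicles in a unit-square area
requests = np.random.rand(45, 2)    # 45 open requests
state = np.stack([
    to_grid_distribution(vehicles, ((0, 1), (0, 1))),
    to_grid_distribution(requests, ((0, 1), (0, 1))),
])

# Action: a target distribution over the same grid; the existing dispatcher,
# treated as part of the environment, repositions free vehicles toward it.
action = np.full((8, 8), 1 / 64)    # e.g. "spread the fleet uniformly"
```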

Cited by
Journal ArticleDOI
TL;DR: A comprehensive review of DRL for SBEM from the perspective of system scale is provided and the existing unresolved issues are identified and possible future research directions are pointed out.
Abstract: Global buildings account for about 30% of total energy consumption and carbon emissions, raising severe energy and environmental concerns. Therefore, it is significant and urgent to develop novel smart building energy management (SBEM) technologies to advance energy-efficient and green buildings. However, this is a nontrivial task due to the following challenges. First, it is generally difficult to develop an explicit building thermal dynamics model that is both accurate and efficient enough for building control. Second, there are many uncertain system parameters (e.g., renewable generation output, outdoor temperature, and the number of occupants). Third, there are many spatially and temporally coupled operational constraints. Fourth, building energy optimization problems cannot be solved in real time by traditional methods when they have extremely large solution spaces. Fifth, traditional building energy management methods have their own applicable premises, meaning they have low versatility when confronted with varying building environments. With the rapid development of Internet of Things technology and computational capability, artificial intelligence technology has demonstrated significant competence in control and optimization. As a general artificial intelligence technique, deep reinforcement learning (DRL) is promising for addressing the above challenges. Notably, recent years have seen a surge of DRL for SBEM. However, a systematic overview of the different DRL methods for SBEM is lacking. To fill this gap, this article provides a comprehensive review of DRL for SBEM from the perspective of system scale. In particular, we identify existing unresolved issues and point out possible future research directions.

99 citations

Journal ArticleDOI
TL;DR: In this paper, a comprehensive overview of transfer learning applications in smart buildings is presented, classifying and analyzing 77 papers according to their applications, algorithms, and adopted metrics, and highlighting the role of deep learning in transfer learning for smart building applications.
Abstract:
• Review of applications of transfer learning (TL) for smart buildings.
• Identification of the main application areas of TL in smart buildings.
• Insights on the most effective TL techniques for each application area.
• Discussion of current research gaps and future opportunities.

Smart buildings play a crucial role in decarbonizing society, as buildings globally emit about one-third of greenhouse gases. In the last few years, machine learning has achieved notable momentum that, if properly harnessed, may unleash its potential for advanced analytics and control of smart buildings, enabling the technique to scale up to support the decarbonization of the building sector. In this context, transfer learning aims to improve the performance of a target learner by exploiting knowledge from related environments. The present work provides a comprehensive overview of transfer learning applications in smart buildings, classifying and analyzing 77 papers according to their applications, algorithms, and adopted metrics. The study identified four main application areas of transfer learning: (1) building load prediction, (2) occupancy detection and activity recognition, (3) building dynamics modeling, and (4) energy systems control. Furthermore, the review highlights the role of deep learning, which has been used in more than half of the analyzed transfer learning studies. The paper also discusses how to integrate transfer learning into a smart building's ecosystem, identifying, for each application area, the research gaps and guidelines for future research directions.

80 citations

Journal ArticleDOI
01 Aug 2021
TL;DR: This paper investigates applying transfer learning to deep reinforcement learning-based heat pump control to improve energy efficiency in a microgrid, and proposes an algorithm for domestic hot water temperature control and PV self-consumption optimisation.
Abstract: Domestic hot water accounts for approximately 15% of total residential energy consumption in Europe, and most of this usage happens during specific periods of the day, resulting in undesirable peak loads. The increase in energy production from renewables adds further complexity to energy balancing. Machine learning techniques for heat pump control have demonstrated efficacy in this regard. However, reducing the amount of time and data required to train effective policies can be challenging. This paper investigates the application of transfer learning to deep reinforcement learning-based heat pump control to improve energy efficiency in a microgrid. First, we propose an algorithm for domestic hot water temperature control and PV self-consumption optimisation. Second, we perform transfer learning to speed up convergence. The experiments were deployed in a simulated environment using real data from two residential demand response projects. The results show that the proposed algorithm achieved up to 10% savings after transfer learning was applied, while also contributing to load shifting. Moreover, the time needed to train near-optimal control policies was reduced by more than a factor of 5.
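
A minimal sketch of one common way to realize such a transfer: copy a Q-network trained on the source household and fine-tune only its output head on the target household. All names, shapes, and hyperparameters below are illustrative; the paper's exact transfer scheme may differ:

```python
import torch
import torch.nn as nn

def make_q_net(state_dim: int = 8, n_actions: int = 3) -> nn.Sequential:
    return nn.Sequential(
        nn.Linear(state_dim, 64), nn.ReLU(),   # shared feature layers
        nn.Linear(64, 64), nn.ReLU(),
        nn.Linear(64, n_actions),              # head: heat pump power level
    )

source_q = make_q_net()   # assume trained on the source home's DR data
target_q = make_q_net()
target_q.load_state_dict(source_q.state_dict())   # transfer all weights

# Freeze the transferred feature layers; only the head adapts to the new
# home's hot-water demand and PV profile, cutting training time sharply.
for layer in list(target_q.children())[:-1]:
    for p in layer.parameters():
        p.requires_grad = False

optimizer = torch.optim.Adam(
    (p for p in target_q.parameters() if p.requires_grad), lr=1e-3)
# ...standard DQN-style updates then run in the target home's environment.
```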

13 citations

Journal ArticleDOI
TL;DR: In this article, an intelligent grid-interactive building controller is proposed to optimize building operation during both normal hours and demand response (DR) events, making real-time decisions based on a near-optimal control policy.
Abstract: This paper develops an intelligent grid-interactive building controller, which optimizes building operation during both normal hours and demand response (DR) events. To avoid costly on-demand computation and to adapt to nonlinear building models, the controller utilizes reinforcement learning (RL) and makes real-time decisions based on a near-optimal control policy. Learning such a policy typically amounts to solving a hard non-convex optimization problem. We propose to address this problem with a novel global-local policy search method. In the first stage, an RL algorithm based on zero-order gradient estimation is leveraged to search for the optimal policy globally, owing to its scalability and its potential to escape some poorly performing local optima. The obtained policy is then fine-tuned locally to bring the first-stage solution closer to that of the original unsmoothed problem. Experiments on a simulated five-zone commercial building demonstrate the advantages of the proposed method over existing learning approaches. They also show that the learned control policy outperforms a pragmatic linear model predictive controller (MPC) and approaches the performance of an oracle MPC in testing scenarios. Using a state-of-the-art advanced computing system, we demonstrate that the controller can be learned and deployed within hours of training.
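
A toy sketch of the zero-order (Gaussian-smoothed) gradient estimate behind the global stage; the objective J is a placeholder for the building-control return, and the step sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def J(theta: np.ndarray) -> float:
    """Placeholder for the non-convex control objective, e.g. negative
    energy cost plus comfort penalties accumulated over an episode."""
    return -np.sum(np.sin(3 * theta) + theta ** 2)

def zero_order_grad(theta: np.ndarray, sigma: float = 0.1,
                    n_samples: int = 64) -> np.ndarray:
    """Gaussian-smoothed gradient estimate E[J(theta + sigma*u) * u] / sigma,
    computed from objective evaluations only (no derivatives needed)."""
    u = rng.standard_normal((n_samples, theta.size))
    returns = np.array([J(theta + sigma * ui) for ui in u])
    returns -= returns.mean()                  # baseline reduces variance
    return (returns[:, None] * u).mean(axis=0) / sigma

theta = rng.standard_normal(10)
for step in range(300):                        # global, smoothed search
    theta += 0.05 * zero_order_grad(theta)

# A second, local stage would then fine-tune theta on the unsmoothed
# objective, e.g. with a smaller sigma or a standard first-order method.
print("objective after global stage:", J(theta))
```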

7 citations

Book ChapterDOI
01 Nov 2019
TL;DR: In this chapter, wind turbine optimization has been demonstrated as an effective approach to exploring complex design decisions, however, it is not a push-button solution and effective use requires a multidisciplinary design team and expertise in optimization algorithms.
Abstract: In this chapter, wind turbine optimization has been demonstrated as an effective approach to exploring complex design decisions. However, it is not a push-button solution, and effective use requires a multidisciplinary design team and expertise in optimization algorithms. Common techniques include using reduced-order models for computationally intensive portions of the analysis, taking advantage of gradients to solve large problems, and, more recently, including uncertainty quantification (UQ) to allow for robust design decisions. Current efforts continue to push for increased model fidelity, more comprehensive disciplinary coverage, and the incorporation of uncertainty.

4 citations