
Madeleine Gibescu

Researcher at Utrecht University

Publications - 205
Citations - 5436

Madeleine Gibescu is an academic researcher from Utrecht University. The author has contributed to research on topics including Electric power system and Wind power. The author has an h-index of 29 and has co-authored 201 publications receiving 4,119 citations. Previous affiliations of Madeleine Gibescu include Alstom and Eindhoven University of Technology.

Papers
Journal ArticleDOI

Impacts of Wind Power on Thermal Generation Unit Commitment and Dispatch

TL;DR: In this paper, the impacts of large-scale wind power on system operations from cost, reliability, and environmental perspectives are assessed using a time series of observed and predicted 15-min average wind speeds at foreseen onshore and offshore wind farm locations.
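
A minimal Python sketch of the kind of pre-processing such a study relies on is shown below: converting a 15-min average wind speed series into per-unit wind power via a generic piecewise power curve. The cut-in, rated, and cut-out speeds and the random speed series are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Illustrative only: generic piecewise power curve used to map a 15-min
# average wind speed series to per-unit wind power output.
# Cut-in, rated, and cut-out speeds are assumed values, not from the paper.
CUT_IN, RATED, CUT_OUT = 3.0, 12.0, 25.0  # m/s

def wind_power_pu(speed_mps: np.ndarray) -> np.ndarray:
    """Map wind speeds (m/s) to per-unit power with a cubic ramp region."""
    p = np.zeros_like(speed_mps, dtype=float)
    ramp = (speed_mps >= CUT_IN) & (speed_mps < RATED)
    full = (speed_mps >= RATED) & (speed_mps < CUT_OUT)
    p[ramp] = ((speed_mps[ramp] - CUT_IN) / (RATED - CUT_IN)) ** 3
    p[full] = 1.0
    return p

# Example: one day of 15-min average speeds (96 samples, synthetic data).
speeds = np.clip(np.random.default_rng(0).normal(9.0, 3.0, 96), 0, None)
power_series = wind_power_pu(speeds)  # could then enter unit commitment as negative load
```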
Journal ArticleDOI

Deep learning for estimating building energy consumption

TL;DR: In this article, the authors investigate two newly developed stochastic models for time series prediction of energy consumption, namely the Conditional Restricted Boltzmann Machine (CRBM) and the Factored Conditional Restricted Boltzmann Machine (FCRBM).
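
As a rough illustration of the CRBM idea, the sketch below implements a tiny conditional RBM with Gaussian visible units trained by one-step contrastive divergence (CD-1). The class name `TinyCRBM`, the layer sizes, and the learning rate are illustrative assumptions; the factored (FCRBM) variant and the training details from the paper are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class TinyCRBM:
    """Minimal Conditional RBM sketch: binary hidden units, Gaussian visible
    units, conditioned on a window of past observations (illustrative only)."""
    def __init__(self, n_vis, n_hid, n_cond, lr=1e-3):
        self.W = rng.normal(0, 0.01, (n_vis, n_hid))   # visible-hidden weights
        self.A = rng.normal(0, 0.01, (n_cond, n_vis))   # history -> visible (autoregressive)
        self.B = rng.normal(0, 0.01, (n_cond, n_hid))   # history -> hidden
        self.b_v = np.zeros(n_vis)
        self.b_h = np.zeros(n_hid)
        self.lr = lr

    def hidden_probs(self, v, u):
        return sigmoid(v @ self.W + u @ self.B + self.b_h)

    def visible_mean(self, h, u):
        return h @ self.W.T + u @ self.A + self.b_v

    def cd1_step(self, v, u):
        """One contrastive-divergence (CD-1) update for a batch (v, u)."""
        h0 = self.hidden_probs(v, u)
        h0_s = (rng.random(h0.shape) < h0).astype(float)   # sample hidden states
        v1 = self.visible_mean(h0_s, u)                    # mean-field reconstruction
        h1 = self.hidden_probs(v1, u)
        n = len(v)
        self.W += self.lr * (v.T @ h0 - v1.T @ h1) / n
        self.A += self.lr * (u.T @ (v - v1)) / n
        self.B += self.lr * (u.T @ (h0 - h1)) / n
        self.b_v += self.lr * (v - v1).mean(0)
        self.b_h += self.lr * (h0 - h1).mean(0)

    def predict(self, u):
        """Crude one-step-ahead prediction from the conditioning history alone."""
        h = self.hidden_probs(np.zeros((len(u), self.W.shape[0])), u)
        return self.visible_mean(h, u)
```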
Journal ArticleDOI

On-Line Building Energy Optimization Using Deep Reinforcement Learning

TL;DR: In this article, the benefits of using deep reinforcement learning (RL) to perform on-line optimization of schedules for building energy management systems are explored; however, the authors do not consider the impact of different types of data.
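
To illustrate the deep RL ingredient, here is a simplified DQN-style agent (no target network) for scheduling a single flexible load. The state and action definitions, network sizes, and reward handling are assumptions for illustration and do not reproduce the formulation used in the paper.

```python
import random
import torch
import torch.nn as nn

# Illustrative state/action spaces; not the paper's formulation.
STATE_DIM = 4      # e.g. hour-of-day encoding, indoor temperature, price, demand
N_ACTIONS = 3      # e.g. defer load, run at half power, run at full power

q_net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                      nn.Linear(64, 64), nn.ReLU(),
                      nn.Linear(64, N_ACTIONS))
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
gamma, eps = 0.99, 0.1
replay = []   # list of (state, action, reward, next_state) tuples

def act(state):
    """Epsilon-greedy action selection over the Q-network."""
    if random.random() < eps:
        return random.randrange(N_ACTIONS)
    with torch.no_grad():
        return int(q_net(torch.tensor(state, dtype=torch.float32)).argmax())

def learn(batch_size=32):
    """One temporal-difference update on a random replay mini-batch."""
    if len(replay) < batch_size:
        return
    batch = random.sample(replay, batch_size)
    s, a, r, s2 = map(lambda x: torch.tensor(x, dtype=torch.float32), zip(*batch))
    q = q_net(s).gather(1, a.long().unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = r + gamma * q_net(s2).max(1).values
    loss = nn.functional.mse_loss(q, target)
    optimizer.zero_grad(); loss.backward(); optimizer.step()

# After each control step: replay.append((state, action, reward, next_state)); learn()
```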
Journal ArticleDOI

Scalable training of artificial neural networks with adaptive sparse connectivity inspired by network science

TL;DR: In this article, the authors propose sparse evolutionary training of artificial neural networks, an algorithm which evolves an initial sparse topology (an Erdős–Rényi random graph) of two consecutive layers of neurons into a scale-free topology during learning.
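
A minimal sketch of the prune-and-regrow step behind this idea is given below for a single weight matrix: connections start from an Erdős–Rényi mask, and after each training epoch the weakest fraction of active weights is removed and the same number of connections is regrown at random positions. The density, the fraction `zeta`, and the layer sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def erdos_renyi_mask(n_in, n_out, density=0.1):
    """Random sparse connectivity mask between two layers."""
    return rng.random((n_in, n_out)) < density

def evolve_connectivity(weights, mask, zeta=0.3):
    """Prune the zeta fraction of weakest active weights and regrow the same
    number of new random connections initialised to small values."""
    active = np.flatnonzero(mask)
    k = int(zeta * len(active))
    if k == 0:
        return weights, mask
    magnitudes = np.abs(weights.flat[active])
    drop = active[np.argsort(magnitudes)[:k]]            # weakest connections
    mask.flat[drop] = False
    weights.flat[drop] = 0.0
    inactive = np.flatnonzero(~mask)
    grow = rng.choice(inactive, size=k, replace=False)   # new random links
    mask.flat[grow] = True
    weights.flat[grow] = rng.normal(0, 0.01, size=k)
    return weights, mask

# Usage: train the masked layer (W * M) for one epoch, then evolve and repeat.
n_in, n_out = 784, 256
M = erdos_renyi_mask(n_in, n_out)
W = rng.normal(0, 0.01, (n_in, n_out)) * M
W, M = evolve_connectivity(W, M)
```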
Journal ArticleDOI

Scalable Training of Artificial Neural Networks with Adaptive Sparse Connectivity inspired by Network Science

TL;DR: A method is proposed to design neural networks as sparse scale-free networks, leading to a reduction in the computational time required for training and inference and potentially enabling artificial neural networks to scale up beyond what is currently possible.