Eugene Vinitsky

Researcher at University of California, Berkeley

Publications: 34
Citations: 1067

Eugene Vinitsky is an academic researcher at the University of California, Berkeley. The author has contributed to research on topics including reinforcement learning and computer science, has an h-index of 13, and has co-authored 30 publications receiving 625 citations. Previous affiliations of Eugene Vinitsky include the University of California and the University of Delaware.

Papers
Posted Content

Flow: Architecture and Benchmarking for Reinforcement Learning in Traffic Control.

TL;DR: This work uses Flow, a computational framework for deep reinforcement learning in traffic microsimulation, to develop reliable controllers for complex problems such as controlling mixed-autonomy traffic (involving both autonomous and human-driven vehicles) on a ring road, and shows that even simple neural network policies can solve the stabilization task across density settings and generalize to out-of-distribution settings.
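To make the ring-road stabilization task concrete, below is a minimal, self-contained Python sketch of such an environment: one RL-controlled vehicle among simplified car-following vehicles on a ring, observed through its own speed, headway, and leader speed, and rewarded with the mean network speed. This is an illustrative assumption of how the task is typically set up, not Flow's actual API; the class name, dynamics constants, and observation layout are invented for the sketch.

import numpy as np

class ToyRingEnv:
    """Toy ring-road environment: one RL-controlled vehicle (index 0) among
    simplified car-following vehicles on a single-lane ring.

    Observation: [AV speed, AV headway, leader speed], all normalized.
    Reward: mean speed of every vehicle, which favors damping stop-and-go waves.
    Standalone illustration; does not use Flow's actual API.
    """

    def __init__(self, n_vehicles=22, ring_length=230.0, dt=0.1, seed=0):
        self.n, self.L, self.dt = n_vehicles, ring_length, dt
        self.rng = np.random.default_rng(seed)
        self.reset()

    def reset(self):
        # Evenly spaced vehicles with small random speed perturbations.
        self.pos = np.linspace(0.0, self.L, self.n, endpoint=False)
        self.vel = 4.0 + self.rng.uniform(-0.5, 0.5, self.n)
        return self._obs()

    def _headways(self):
        # Gap to the vehicle ahead (index i + 1 leads index i on the ring).
        return (np.roll(self.pos, -1) - self.pos) % self.L

    def _obs(self):
        h = self._headways()
        return np.array([self.vel[0] / 30.0, h[0] / self.L, self.vel[1] / 30.0])

    def step(self, accel):
        h = np.maximum(self._headways(), 0.1)
        # Simplified IDM-style car-following acceleration for the "human" vehicles.
        desired_gap = 2.0 + 1.0 * self.vel
        a = 1.3 * (1.0 - (self.vel / 30.0) ** 4 - (desired_gap / h) ** 2)
        a[0] = float(np.clip(accel, -3.0, 3.0))  # RL action overrides vehicle 0
        self.vel = np.clip(self.vel + a * self.dt, 0.0, None)
        self.pos = (self.pos + self.vel * self.dt) % self.L
        reward = float(self.vel.mean())
        return self._obs(), reward, False, {}

Damping stop-and-go waves shows up directly in this reward, since smoother flow raises the mean network speed.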

Benchmarks for reinforcement learning in mixed-autonomy traffic

TL;DR: This work releases new benchmarks for using deep reinforcement learning to create controllers for mixed-autonomy traffic, in which connected and autonomous vehicles (CAVs) interact with human drivers and infrastructure.
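Benchmarks of this kind are typically scored by rolling a controller out in a fixed scenario and averaging a traffic-level reward. The loop below sketches that style of evaluation, reusing the ToyRingEnv from the previous sketch as a stand-in for a real benchmark scenario; the baseline policy, horizon, and episode count are illustrative assumptions rather than values from the paper.

def evaluate(env, policy, horizon=3000, episodes=5):
    """Average per-step reward of a policy over several rollouts
    (a stand-in for the speed/throughput metrics a benchmark would report)."""
    scores = []
    for _ in range(episodes):
        obs, total = env.reset(), 0.0
        for _ in range(horizon):
            obs, reward, done, _ = env.step(policy(obs))
            total += reward
            if done:
                break
        scores.append(total / horizon)
    return sum(scores) / len(scores)

if __name__ == "__main__":
    env = ToyRingEnv()
    # Illustrative baseline: weak proportional control toward a normalized target speed.
    baseline = lambda obs: 5.0 * (0.15 - obs[0])  # obs[0] is the AV's normalized speed
    print("baseline mean reward:", evaluate(env, baseline))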

Emergent Behaviors in Mixed-Autonomy Traffic

TL;DR: This article formulates and approaches the mixed-autonomy traffic control problem using the framework of deep reinforcement learning (RL), providing insight into the potential for automating traffic through mixed fleets of automated and human-driven vehicles.
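As a sketch of what the deep-RL formulation looks like in code, the function below runs vanilla REINFORCE with a linear Gaussian policy on the ToyRingEnv from the first sketch. The policy class (linear rather than a neural network, for brevity) and all hyperparameters are illustrative assumptions, not the algorithm or settings used in the paper.

import numpy as np

def reinforce_toy_ring(episodes=200, horizon=600, lr=1e-3, seed=0):
    """Vanilla REINFORCE with a linear Gaussian policy on ToyRingEnv."""
    rng = np.random.default_rng(seed)
    env = ToyRingEnv(seed=seed)
    w = np.zeros(3)            # policy mean: accel = w . obs
    std = 0.5                  # fixed exploration noise
    for _ in range(episodes):
        obs_hist, act_hist, rew_hist = [], [], []
        obs = env.reset()
        for _ in range(horizon):
            action = float(w @ obs) + std * rng.standard_normal()
            obs_hist.append(obs)
            act_hist.append(action)
            obs, reward, done, _ = env.step(action)
            rew_hist.append(reward)
            if done:
                break
        # Normalized reward-to-go returns, then one policy-gradient step.
        returns = np.flip(np.cumsum(np.flip(np.array(rew_hist)))).copy()
        returns = (returns - returns.mean()) / (returns.std() + 1e-8)
        for o, a, g in zip(obs_hist, act_hist, returns):
            grad_log_pi = (a - float(w @ o)) / std ** 2 * np.asarray(o)
            w += lr * g * grad_log_pi
    return w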
Posted Content

Emergent Complexity and Zero-shot Transfer via Unsupervised Environment Design

TL;DR: This work proposes Unsupervised Environment Design (UED) as an alternative paradigm, where developers provide environments with unknown parameters, and these parameters are used to automatically produce a distribution over valid, solvable environments.
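The core UED idea can be illustrated with a toy regret game: an adversary proposes an environment parameter, a protagonist and an antagonist both train on the proposed environment, and the adversary adjusts the parameter to maximize the antagonist-minus-protagonist regret, which steers proposals toward environments that are solvable yet still challenging. The 1-D goal-reaching environment and update rules below are illustrative assumptions, not the paper's PAIRED implementation.

import numpy as np

def toy_ued(steps=2000, lr=0.05):
    """Toy regret game illustrating UED. Environments are 1-D goal-reaching tasks
    parameterized by a goal g; an agent's 'policy' is a single number x with
    return -(x - g)^2. The adversary adjusts g to maximize regret."""
    protagonist, antagonist, g = -1.0, 0.8, 0.5
    for _ in range(steps):
        # Both agents take one gradient step on the currently proposed environment.
        protagonist += lr * 2.0 * (g - protagonist)   # gradient of -(x - g)^2 w.r.t. x
        antagonist += lr * 2.0 * (g - antagonist)
        # Adversary ascends a finite-difference estimate of d(regret)/dg,
        # where regret = return(antagonist) - return(protagonist).
        regret = lambda gg: -(antagonist - gg) ** 2 + (protagonist - gg) ** 2
        eps = 1e-3
        g += lr * (regret(g + eps) - regret(g - eps)) / (2.0 * eps)
        g = float(np.clip(g, -1.0, 1.0))
    return protagonist, antagonist, g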
Proceedings ArticleDOI

Lagrangian Control through Deep-RL: Applications to Bottleneck Decongestion

TL;DR: Using deep reinforcement learning, novel control policies for autonomous vehicles are derived to improve the throughput of a bottleneck modeled after the San Francisco-Oakland Bay Bridge; the AV controller is shown to provide performance comparable to ramp metering without the need to build new ramp-metering infrastructure.
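For context on the baseline, ramp metering is commonly implemented with the ALINEA feedback law, which adjusts the metering rate in proportion to the gap between a target and the measured downstream occupancy. One ALINEA update is sketched below; the gain, bounds, and units (occupancy in percent, rate in vehicles per hour) are illustrative defaults, not values from the paper.

def alinea_step(rate, occupancy, target_occupancy=18.0, gain=70.0,
                min_rate=200.0, max_rate=1800.0):
    """One update of the classic ALINEA ramp-metering feedback law:
        r(k+1) = r(k) + K_R * (o_target - o(k))
    with the metering rate r in veh/h and downstream occupancy o in percent."""
    new_rate = rate + gain * (target_occupancy - occupancy)
    return min(max_rate, max(min_rate, new_rate))

# Example: occupancy above target, so the metering rate is cut back.
print(alinea_step(rate=1200.0, occupancy=25.0))  # 1200 + 70 * (18 - 25) = 710.0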