Joel Lehman
Researcher at Uber
Publications - 102
Citations - 7252
Joel Lehman is an academic researcher at Uber. His work contributes to research topics including reinforcement learning and artificial neural networks. He has an h-index of 33 and has co-authored 98 publications receiving 5588 citations. Previous affiliations of Joel Lehman include the IT University of Copenhagen and OpenAI.
Papers
Journal ArticleDOI
Abandoning objectives: Evolution through the search for novelty alone
Joel Lehman, Kenneth O. Stanley +1 more
TL;DR: In the maze navigation and biped walking tasks studied in this paper, novelty search significantly outperforms objective-based search, suggesting the counterintuitive conclusion that some problems are best solved by methods that ignore the objective.
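The core scoring rule of novelty search rewards how far an individual's behavior lands from previously seen behaviors, not how close it is to the goal. A minimal sketch of that rule (mean distance to the k nearest neighbors in a behavior archive; the archive-management and evolutionary-loop details of the paper are omitted):

```python
import numpy as np

def novelty(behavior, archive, k=2):
    """Novelty score: mean Euclidean distance from `behavior` to its
    k nearest neighbors in the archive of past behaviors. Sketch of
    the scoring rule only, not the full novelty-search algorithm."""
    dists = np.sort([np.linalg.norm(np.asarray(behavior) - np.asarray(b))
                     for b in archive])
    return float(dists[:k].mean())

# A behavior far from everything in the archive scores high, even if
# it is no closer to the objective.
archive = [[1, 0], [0, 1], [2, 0], [0, 3]]
print(novelty([0, 0], archive))  # 1.0 (two nearest neighbors at distance 1)
```

In the maze domain of the paper, the behavior characterization is simply the robot's final position, so deceptive dead ends near the goal stop being attractors.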
Posted Content
Deep Neuroevolution: Genetic Algorithms Are a Competitive Alternative for Training Deep Neural Networks for Reinforcement Learning
Felipe Petroski Such, Vashisht Madhavan, Edoardo Conti, Joel Lehman, Kenneth O. Stanley, Jeff Clune +5 more
TL;DR: It is shown that combining DNNs with novelty search, which was designed to encourage exploration on tasks with deceptive or sparse reward functions, can solve a high-dimensional problem on which reward-maximizing algorithms fail, expanding the sense of the scale at which GAs can operate.
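The GA in this line of work is gradient-free: network weights are treated as a flat parameter vector evolved by truncation selection plus Gaussian mutation. A toy sketch of that loop, assuming a placeholder fitness function standing in for an RL episode return (the paper's compact seed-based encoding and distributed evaluation are not shown):

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(theta):
    # Placeholder objective to maximize; a real run would instead
    # return the episode reward of a network parameterized by theta.
    return -float(np.sum(theta ** 2))

# Population of flat "weight vectors" (dimension 10 for illustration).
pop = [rng.normal(size=10) for _ in range(20)]
init_best = max(fitness(t) for t in pop)

for gen in range(100):
    pop.sort(key=fitness, reverse=True)
    elites = pop[:5]                                   # truncation selection
    children = [elites[rng.integers(len(elites))]
                + 0.02 * rng.normal(size=10)           # Gaussian mutation
                for _ in range(15)]
    pop = elites + children                            # elitism: best survive

final_best = max(fitness(t) for t in pop)
print(final_best >= init_best)  # True: elitism makes best fitness monotone
```

Because elites are copied forward unchanged, the best fitness never decreases; the paper's contribution is showing this mutation-only scheme remains competitive when theta has millions of parameters.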
Journal Article
Exploiting Open-Endedness to Solve Problems Through the Search for Novelty
Joel Lehman, Kenneth O. Stanley +1 more
TL;DR: By decoupling open-ended search from artificial-life worlds alone, the raw search for novelty can be applied to real-world problems, and it significantly outperforms objective-based search in the deceptive maze navigation task.
Journal ArticleDOI
Designing neural networks through neuroevolution
TL;DR: This Review looks at several key aspects of modern neuroevolution, including large-scale computing, the benefits of novelty and diversity, the power of indirect encoding, and the field’s contributions to meta-learning and architecture search.
Proceedings Article
An intriguing failing of convolutional neural networks and the CoordConv solution
Rosanne Liu, Joel Lehman, Piero Molino, Felipe Petroski Such, Eric Frank, Alex Sergeev, Jason Yosinski +6 more
TL;DR: CoordConv gives convolution access to its own input coordinates through extra coordinate channels, allowing networks to learn either complete translation invariance or varying degrees of translation dependence, as required by the end task.
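The mechanism is simple: before a convolution, two extra channels holding each pixel's normalized row and column coordinates are concatenated onto the input. A minimal numpy sketch of that preprocessing step (the helper name is illustrative, not the authors' API; a real CoordConv layer would follow it with an ordinary convolution):

```python
import numpy as np

def add_coord_channels(batch):
    """Append two coordinate channels, each normalized to [-1, 1],
    to an NCHW image batch: one for the row index, one for the
    column index. Sketch of the CoordConv input augmentation."""
    n, c, h, w = batch.shape
    ys = np.linspace(-1.0, 1.0, h)                    # row coordinate
    xs = np.linspace(-1.0, 1.0, w)                    # column coordinate
    yy, xx = np.meshgrid(ys, xs, indexing="ij")       # (h, w) grids
    coords = np.stack([yy, xx])                       # (2, h, w)
    coords = np.broadcast_to(coords, (n, 2, h, w))    # repeat per image
    return np.concatenate([batch, coords], axis=1)    # (n, c + 2, h, w)

x = np.zeros((4, 3, 8, 8))
out = add_coord_channels(x)
print(out.shape)  # (4, 5, 8, 8)
```

With the coordinate channels present, a subsequent convolution can condition on position (e.g. for coordinate-regression tasks) or learn to ignore them, recovering standard translation invariance.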