Devendra Singh Chaplot

Researcher at Carnegie Mellon University

Publications: 48
Citations: 2501

Devendra Singh Chaplot is an academic researcher from Carnegie Mellon University. The author has contributed to research in topics including Reinforcement learning & Computer science. The author has an h-index of 20 and has co-authored 47 publications receiving 1705 citations. Previous affiliations of Devendra Singh Chaplot include Facebook & Samsung.

Papers
Posted Content

On Evaluation of Embodied Navigation Agents

TL;DR: This document summarizes the consensus recommendations of a working group on empirical methodology in navigation research; it discusses different problem statements and the role of generalization, presents evaluation measures, and provides standard scenarios that can be used for benchmarking.
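
A concrete example of the kind of evaluation measure this report presents is Success weighted by Path Length (SPL). The sketch below is an illustrative Python implementation under the usual definition of SPL, not code from the report itself.

```python
from typing import Sequence

def spl(successes: Sequence[bool],
        shortest_path_lengths: Sequence[float],
        agent_path_lengths: Sequence[float]) -> float:
    """Success weighted by Path Length (SPL), averaged over episodes.

    Episode i contributes S_i * l_i / max(p_i, l_i), where S_i is the
    binary success indicator, l_i the shortest-path length from start
    to goal, and p_i the length of the path the agent actually took.
    """
    total = 0.0
    for s, l, p in zip(successes, shortest_path_lengths, agent_path_lengths):
        total += float(s) * l / max(p, l)
    return total / len(successes)

# Example: two episodes, one success along a slightly suboptimal path.
print(spl([True, False], [10.0, 8.0], [12.0, 20.0]))  # ~0.417
```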
Proceedings Article

Learning to Explore using Active Neural SLAM

TL;DR: This work presents a modular and hierarchical approach to learning policies for exploring 3D environments, called `Active Neural SLAM', which leverages the strengths of both classical and learning-based methods by combining analytical path planners with a learned SLAM module and learned global and local policies.
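
As a rough illustration of the modular decomposition described above (not the authors' code; all class and function names here are hypothetical), the control loop can be read as: a learned SLAM module maintains a map and pose estimate, a global policy picks a long-term goal on that map, an analytical planner turns it into a short-term waypoint, and a local policy outputs low-level actions.

```python
class NeuralSLAM:
    """Placeholder for the learned mapping and pose-estimation module."""
    def update(self, rgb_obs, last_action, map_state):
        return map_state  # would return an updated (map, pose) estimate

class GlobalPolicy:
    """Placeholder for a learned policy that picks where to explore."""
    def long_term_goal(self, map_state):
        return (120, 80)  # hypothetical map cell to explore next

class LocalPolicy:
    """Placeholder for a learned controller that reaches a waypoint."""
    def act(self, rgb_obs, waypoint):
        return "move_forward"

def analytical_planner(map_state, goal):
    """Classical path planning (e.g. a grid shortest-path) on the map."""
    return goal  # next waypoint toward the long-term goal

def exploration_step(slam, global_pi, local_pi, rgb_obs, last_action, map_state):
    map_state = slam.update(rgb_obs, last_action, map_state)
    goal = global_pi.long_term_goal(map_state)
    waypoint = analytical_planner(map_state, goal)
    action = local_pi.act(rgb_obs, waypoint)
    return action, map_state
```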
Posted Content

Playing FPS Games with Deep Reinforcement Learning

TL;DR: This paper presents the first architecture to tackle 3D environments in first-person shooter games, which involve partially observable states, and shows that it substantially outperforms the game's built-in AI agents as well as humans in deathmatch scenarios.
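
To make the partial-observability point concrete, the standard way to carry state across frames is a recurrent Q-network; the PyTorch sketch below is a generic illustration of that idea with arbitrary layer sizes, not the paper's architecture.

```python
import torch
import torch.nn as nn

class RecurrentQNetwork(nn.Module):
    """Generic DRQN-style network: CNN features -> LSTM -> Q-values."""
    def __init__(self, num_actions: int):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        # 84x84 input -> 20x20 -> 9x9 feature maps with 64 channels.
        self.lstm = nn.LSTM(input_size=64 * 9 * 9, hidden_size=256,
                            batch_first=True)
        self.q_head = nn.Linear(256, num_actions)

    def forward(self, frames, hidden=None):
        # frames: (batch, time, 3, 84, 84)
        b, t = frames.shape[:2]
        feats = self.conv(frames.reshape(b * t, *frames.shape[2:]))
        out, hidden = self.lstm(feats.view(b, t, -1), hidden)
        return self.q_head(out), hidden

q_net = RecurrentQNetwork(num_actions=8)
q_values, _ = q_net(torch.zeros(1, 4, 3, 84, 84))
print(q_values.shape)  # torch.Size([1, 4, 8])
```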
Posted Content

Object Goal Navigation using Goal-Oriented Semantic Exploration

TL;DR: A modular system called `Goal-Oriented Semantic Exploration' builds an episodic semantic map and uses it to explore the environment efficiently based on the goal object category; it outperforms a wide range of baselines, including end-to-end learning-based methods as well as modular map-based methods.
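
A minimal sketch of how an episodic semantic map can drive goal selection, with hypothetical shapes and names (not the authors' implementation): the map is a stack of per-category channels, and the agent heads for an observed cell of the goal category if one exists, otherwise toward a long-term exploration goal.

```python
import numpy as np

def select_goal(semantic_map: np.ndarray, goal_category: int,
                exploration_goal: tuple) -> tuple:
    """Pick a navigation goal from an episodic semantic map.

    semantic_map: (num_categories, M, M) array in which channel c marks
    map cells where objects of category c have been observed.  Returns a
    cell containing the goal object if one has been seen, otherwise an
    exploration goal (e.g. one chosen by a learned global policy).
    """
    candidates = np.argwhere(semantic_map[goal_category] > 0)
    if len(candidates) > 0:
        return tuple(int(i) for i in candidates[0])  # go to a seen goal object
    return exploration_goal                          # otherwise keep exploring

# Example: a 16-category, 240x240 map where category 3 (say, "chair")
# has been observed at a single cell.
smap = np.zeros((16, 240, 240))
smap[3, 100, 150] = 1.0
print(select_goal(smap, goal_category=3, exploration_goal=(200, 200)))  # (100, 150)
```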
Posted Content

Gated-Attention Architectures for Task-Oriented Language Grounding

TL;DR: This paper proposes an end-to-end trainable neural architecture for task-oriented language grounding in 3D environments that assumes no prior linguistic or perceptual knowledge and requires only raw pixels from the environment and a natural language instruction as input.
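
The fusion mechanism that gives gated-attention architectures their name multiplies image feature maps elementwise by a gating vector derived from the instruction embedding. The PyTorch sketch below illustrates that idea with arbitrary dimensions and should not be read as the paper's exact architecture.

```python
import torch
import torch.nn as nn

class GatedAttention(nn.Module):
    """Fuse visual feature maps with a language embedding.

    The instruction embedding is projected to one scalar per visual
    feature map, squashed with a sigmoid, and multiplied elementwise
    (broadcast over the spatial dimensions) with the visual features.
    """
    def __init__(self, text_dim: int, num_feature_maps: int):
        super().__init__()
        self.gate = nn.Linear(text_dim, num_feature_maps)

    def forward(self, visual_feats, text_embedding):
        # visual_feats: (batch, C, H, W); text_embedding: (batch, text_dim)
        attention = torch.sigmoid(self.gate(text_embedding))  # (batch, C)
        attention = attention.unsqueeze(-1).unsqueeze(-1)      # (batch, C, 1, 1)
        return visual_feats * attention

# Example with hypothetical sizes: 64 feature maps, 256-d instruction embedding.
ga = GatedAttention(text_dim=256, num_feature_maps=64)
fused = ga(torch.randn(2, 64, 8, 8), torch.randn(2, 256))
print(fused.shape)  # torch.Size([2, 64, 8, 8])
```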