Deepak Pathak
Researcher at Carnegie Mellon University
Publications: 110
Citations: 14,332
Deepak Pathak is an academic researcher at Carnegie Mellon University. His research focuses on computer science and reinforcement learning. He has an h-index of 25 and has co-authored 77 publications that have received 10,120 citations. His previous affiliations include the Indian Institute of Technology Kanpur and the University of California, Berkeley.
Papers
Proceedings ArticleDOI
Context Encoders: Feature Learning by Inpainting
TL;DR: The authors find that a context encoder learns a representation capturing not just the appearance but also the semantics of visual structures, and that it can be used for semantic inpainting, either stand-alone or as initialization for non-parametric methods.
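The core training signal behind context encoders can be sketched as a masked-reconstruction loss: the model sees only the unmasked context and is penalized for its error on the hidden region. The sketch below is a toy version with a hypothetical `predict_region` callable standing in for the paper's convolutional encoder-decoder:

```python
import numpy as np

def inpainting_loss(image, mask, predict_region):
    """L2 reconstruction loss on the masked region only.

    image:          (H, W) ground-truth array
    mask:           (H, W) boolean array, True where pixels are hidden
    predict_region: callable mapping the visible context to a full-size prediction
                    (hypothetical stand-in for the encoder-decoder network)
    """
    context = image * (~mask)                 # the model only sees unmasked pixels
    prediction = predict_region(context)
    # penalize errors only where the region was hidden
    return float(np.mean((prediction[mask] - image[mask]) ** 2))

# toy usage: a "model" that fills the hole with the mean of the visible context
rng = np.random.default_rng(0)
img = rng.random((8, 8))
m = np.zeros((8, 8), dtype=bool)
m[2:6, 2:6] = True
loss = inpainting_loss(img, m, lambda ctx: np.full_like(ctx, ctx[~m].mean()))
```

The actual paper combines this reconstruction loss with an adversarial loss to produce sharper completions; the sketch keeps only the reconstruction term.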
Proceedings Article
Toward Multimodal Image-to-Image Translation
Jun-Yan Zhu, Richard Zhang, Deepak Pathak, Trevor Darrell, Alexei A. Efros, Oliver Wang, Eli Shechtman +6 more
TL;DR: In this article, the goal is to model a distribution of possible outputs in a conditional generative modeling setting; the ambiguity of the mapping is distilled into a low-dimensional latent vector, which can be randomly sampled at test time.
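The key idea, that one input can map to many outputs, can be illustrated with a toy conditional generator whose output depends on both the input and a randomly sampled latent vector. This is only an illustrative sketch with made-up linear weights, not the paper's network:

```python
import numpy as np

def multimodal_translate(x, z, W_x, W_z):
    """Toy conditional generator: the output depends on the input x AND a
    latent code z, so different z samples give different plausible outputs."""
    return np.tanh(W_x @ x + W_z @ z)

rng = np.random.default_rng(1)
W_x = rng.standard_normal((4, 3))   # toy input weights (hypothetical)
W_z = rng.standard_normal((4, 2))   # toy latent weights (hypothetical)
x = rng.standard_normal(3)          # one fixed input

# same input, two different latent samples -> two different outputs
out_a = multimodal_translate(x, rng.standard_normal(2), W_x, W_z)
out_b = multimodal_translate(x, rng.standard_normal(2), W_x, W_z)
```

The paper's contribution is in training such a generator so that the latent code is actually used (rather than ignored), which this sketch does not attempt to show.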
Proceedings ArticleDOI
Curiosity-Driven Exploration by Self-Supervised Prediction
TL;DR: In this article, the authors formulate curiosity as the error in an agent's ability to predict the consequence of its own actions in a visual feature space learned by a self-supervised inverse dynamics model.
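The intrinsic reward described here, prediction error of a learned forward model in feature space, reduces to a short formula. The sketch below assumes a hypothetical `forward_model` callable; in the paper this is a learned network operating on self-supervised features:

```python
import numpy as np

def curiosity_reward(phi_s, action, phi_s_next, forward_model, eta=0.5):
    """Intrinsic reward as forward-model prediction error in feature space:
    r = (eta / 2) * || phi_hat(s') - phi(s') ||^2.

    phi_s, phi_s_next: feature embeddings of consecutive states
    forward_model:     callable (phi_s, action) -> predicted next features
                       (hypothetical stand-in for the learned network)
    """
    phi_hat = forward_model(phi_s, action)
    return eta / 2.0 * float(np.sum((phi_hat - phi_s_next) ** 2))

# toy usage: a forward model that predicts "nothing changes"
phi_s = np.array([0.0, 1.0])
r_same = curiosity_reward(phi_s, 0, np.array([0.0, 1.0]), lambda p, a: p)
r_surprise = curiosity_reward(phi_s, 0, np.array([2.0, 1.0]), lambda p, a: p)
```

Transitions the model predicts well yield little reward, while surprising transitions yield more, which is what drives exploration toward novel states.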
Proceedings ArticleDOI
Constrained Convolutional Neural Networks for Weakly Supervised Segmentation
TL;DR: This work proposes Constrained CNN (CCNN), a method which uses a novel loss function to optimize for any set of linear constraints on the output space of a CNN, and demonstrates the generality of this new learning framework.
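A minimal way to see what "linear constraints on the output space" means is a hinge penalty that is zero exactly when every constraint A q >= b holds on the (flattened) output distribution. This toy sketch is an illustration of the constraint form only, not the paper's actual optimization, which projects onto the constraint set via a KL objective:

```python
import numpy as np

def constraint_penalty(q, A, b):
    """Squared hinge penalty for linear constraints A @ q >= b on an
    output distribution q; zero iff all constraints are satisfied."""
    violation = b - A @ q
    return float(np.sum(np.maximum(violation, 0.0) ** 2))

# toy example: require at least 30% foreground mass (one linear constraint)
q = np.array([0.2, 0.8])       # [foreground, background] predicted mass
A = np.array([[1.0, 0.0]])     # picks out the foreground mass
b = np.array([0.3])
p = constraint_penalty(q, A, b)  # constraint violated by 0.1
```

Weak supervision such as image-level labels or object-size bounds can be expressed as rows of A and entries of b in exactly this form.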