scispace - formally typeset

Nikhil Mishra

Researcher at University of California, Berkeley

Publications: 16
Citations: 1823

Nikhil Mishra is an academic researcher at the University of California, Berkeley. The author has contributed to research in the topics of reinforcement learning and recurrent neural networks, has an h-index of 8, and has co-authored 14 publications receiving 1,504 citations. Previous affiliations of Nikhil Mishra include the University of California, Irvine and Northwood University.

Papers
Posted Content

A Simple Neural Attentive Meta-Learner

TL;DR: This work proposes a class of simple and generic meta-learner architectures that use a novel combination of temporal convolutions and soft attention; the former to aggregate information from past experience and the latter to pinpoint specific pieces of information.
Proceedings Article

A Simple Neural Attentive Meta-Learner

TL;DR: The authors propose a class of simple and generic meta-learner architectures that use a novel combination of temporal convolutions and soft attention; the former to aggregate information from past experience and the latter to pinpoint specific pieces of information.
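The two primitives named in the TL;DR above can be illustrated concretely: a causal temporal convolution aggregates information from past timesteps, and causally masked soft attention lets each timestep pinpoint specific earlier entries. The NumPy sketch below is my own minimal illustration of these two building blocks, not the authors' implementation; all function names and shapes are assumptions for the example.

```python
import numpy as np

def causal_conv1d(x, w):
    """Causal 1-D convolution: output at time t depends only on inputs <= t.
    x: (T, C_in); w: (K, C_in, C_out) filter over the last K timesteps."""
    K = w.shape[0]
    T = x.shape[0]
    # Left-pad with zeros so the receptive field never reaches the future.
    xp = np.concatenate([np.zeros((K - 1, x.shape[1])), x], axis=0)
    return np.stack([np.einsum('kc,kcd->d', xp[t:t + K], w) for t in range(T)])

def causal_soft_attention(x, wq, wk, wv):
    """Soft attention where each timestep attends only to itself and the past.
    x: (T, C); wq, wk: (C, Dk); wv: (C, Dv)."""
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(k.shape[1])        # (T, T) similarity scores
    future = np.triu(np.ones_like(scores), k=1)   # 1 above the diagonal
    scores = np.where(future == 1, -1e9, scores)  # mask out future positions
    p = np.exp(scores - scores.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)             # softmax over visible past
    return p @ v
```

Both functions share the causality property the papers rely on: perturbing the input at a later timestep leaves all earlier outputs unchanged, so past experience can be aggregated (convolution) and queried (attention) without leaking information from the future.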
Posted Content

Meta-Learning with Temporal Convolutions.

TL;DR: This work proposes a class of simple and generic meta-learner architectures, based on temporal convolutions, that is domain-agnostic, has no particular strategy or algorithm encoded into it, and outperforms state-of-the-art methods that are less general and more complex.
Proceedings Article

PixelSNAIL: An Improved Autoregressive Generative Model

TL;DR: In this paper, a new generative model architecture that combines causal convolutions with self-attention is proposed, which achieves state-of-the-art results on CIFAR-10 (2.85 bits per dim) and ImageNet (3.80 bits per dim).
Posted Content

PixelSNAIL: An Improved Autoregressive Generative Model

TL;DR: This work introduces a new generative model architecture that combines causal convolutions with self-attention and presents state-of-the-art log-likelihood results on CIFAR-10 and ImageNet.
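The PixelSNAIL entries report log-likelihood in "bits per dim", the standard metric for autoregressive image models: total negative log-likelihood in nats, converted to base 2 and divided by the number of sub-pixel dimensions. A minimal sketch of that conversion (my own illustration; the helper name is an assumption, not from the paper):

```python
import numpy as np

def bits_per_dim(nll_nats, num_dims):
    """Convert a total negative log-likelihood (in nats) over an image
    into bits per dimension, where num_dims counts sub-pixels
    (e.g. 3 * 32 * 32 for a CIFAR-10 image)."""
    return nll_nats / (num_dims * np.log(2.0))
```

As a sanity check, a model that assigns a uniform distribution over the 256 possible values of each 8-bit sub-pixel has a negative log-likelihood of ln(256) nats per dimension, which converts to exactly 8 bits per dim; a result like 2.85 bits per dim therefore means the model compresses CIFAR-10 pixels well below their raw 8-bit encoding.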