scispace - formally typeset

Arun Mallya

Researcher at Nvidia

Publications: 47
Citations: 4824

Arun Mallya is an academic researcher at Nvidia. He has contributed to research on topics including artificial neural networks and rendering (computer graphics). The author has an h-index of 21 and has co-authored 44 publications receiving 2592 citations. Previous affiliations of Arun Mallya include the University of Illinois at Urbana–Champaign.

Papers
Proceedings ArticleDOI

PackNet: Adding Multiple Tasks to a Single Network by Iterative Pruning

TL;DR: In this article, a method for adding multiple tasks to a single deep neural network while avoiding catastrophic forgetting is presented, which exploits redundancies in large deep networks to free up parameters that can then be employed to learn new tasks.
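The idea above — prune low-magnitude weights after training a task, then reuse the freed parameters for the next task — can be sketched as follows. This is an illustrative toy version with hypothetical helper names, not the authors' code; in the actual PackNet procedure the freed weights are retrained on the new task before being pruned again, which is only simulated here.

```python
import numpy as np

def prune_for_new_task(weights, task_masks, prune_ratio=0.5):
    """Claim the largest-magnitude unassigned weights for the current task.

    weights    -- flat array of layer weights (modified in place)
    task_masks -- boolean masks of weights already claimed by earlier tasks
    """
    # Weights claimed by earlier tasks are frozen and never pruned.
    claimed = np.zeros(weights.shape, dtype=bool)
    for m in task_masks:
        claimed |= m
    free = ~claimed
    free_vals = np.abs(weights[free])
    if free_vals.size == 0:
        return np.zeros_like(claimed)
    # Keep the largest (1 - prune_ratio) fraction of the free weights for
    # this task; zero the rest, freeing them for future tasks.
    thresh = np.quantile(free_vals, prune_ratio)
    new_mask = free & (np.abs(weights) >= thresh)
    weights[free & ~new_mask] = 0.0
    return new_mask

rng = np.random.default_rng(0)
w = rng.normal(size=1000)
mask1 = prune_for_new_task(w, [], prune_ratio=0.5)
# PackNet would retrain the freed weights on task 2 here; we simulate
# that retraining with fresh random values for the sketch.
w[~mask1] = rng.normal(size=int((~mask1).sum()))
mask2 = prune_for_new_task(w, [mask1], prune_ratio=0.5)
```

Because each new task only claims weights outside all earlier masks, the per-task masks never overlap, which is what prevents later tasks from disturbing earlier ones.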
Proceedings ArticleDOI

Importance Estimation for Neural Network Pruning

TL;DR: A novel method is described that estimates the contribution of a neuron (filter) to the final loss and iteratively removes those with smaller scores; two variations of the method, using first- and second-order Taylor expansions to approximate a filter's contribution, are presented.
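The first-order variant of this idea reduces to a very compact score: the change in loss from removing a parameter group is approximated by the dot product of its gradient and its weights. A minimal sketch under that assumption (illustrative names, not the paper's implementation):

```python
import numpy as np

def taylor_importance(weights, grads):
    """First-order Taylor importance score per filter.

    weights, grads -- arrays of shape (num_filters, filter_size)
    """
    # |g . w| approximates the loss change from zeroing out a filter.
    return np.abs(np.sum(grads * weights, axis=1))

def filters_to_prune(weights, grads, num_prune):
    scores = taylor_importance(weights, grads)
    # Prune the filters with the smallest estimated contribution.
    return np.argsort(scores)[:num_prune]

rng = np.random.default_rng(1)
w = rng.normal(size=(8, 16))
g = rng.normal(size=(8, 16)) * 0.01
g[3] = 0.0  # a filter the loss is insensitive to scores ~zero
pruned = filters_to_prune(w, g, num_prune=2)
```

In practice the gradients would come from backpropagation over a batch, and scores are typically averaged over several batches before each pruning step.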
Proceedings ArticleDOI

Few-Shot Unsupervised Image-to-Image Translation

TL;DR: In this article, a few-shot, unsupervised image-to-image translation algorithm is proposed that works on previously unseen target classes that are specified, at test time, only by a few example images.
Posted Content

Few-Shot Unsupervised Image-to-Image Translation.

TL;DR: The model achieves this few-shot generation capability by coupling an adversarial training scheme with a novel network design; its effectiveness is verified through extensive experimental validation and comparisons to several baseline methods on benchmark datasets.
Book ChapterDOI

Piggyback: Adapting a Single Network to Multiple Tasks by Learning to Mask Weights

TL;DR: In this paper, a method for adapting a single, fixed deep neural network to multiple tasks without affecting performance on already learned tasks is presented, where binary masks are learned in an end-to-end differentiable fashion.
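The masking scheme can be sketched in a few lines: a real-valued mask is thresholded to a binary mask in the forward pass, and gradients reach the real-valued mask via a straight-through estimator that treats the hard threshold as the identity. This is a hypothetical toy version (names and shapes are illustrative, not the paper's code):

```python
import numpy as np

def forward(x, frozen_w, real_mask, threshold=0.0):
    """Apply frozen weights gated by a learned binary mask."""
    binary = (real_mask > threshold).astype(frozen_w.dtype)
    return x @ (frozen_w * binary)

def mask_grad(x, frozen_w, grad_out):
    # Straight-through estimator: gradient w.r.t. the real-valued mask
    # is the gradient w.r.t. the masked weights times the frozen weights,
    # ignoring the non-differentiable threshold.
    return (x.T @ grad_out) * frozen_w

rng = np.random.default_rng(2)
W = rng.normal(size=(4, 3))   # frozen backbone weights, shared by all tasks
m = rng.normal(size=(4, 3))   # real-valued mask, learned per task
x = rng.normal(size=(2, 4))
y = forward(x, W, m)
```

Only the mask is stored per task, so each additional task costs roughly one bit per backbone weight while the backbone itself stays untouched.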