Jason Yosinski
Researcher at Uber
Publications - 72
Citations - 21503
Jason Yosinski is an academic researcher at Uber. He has contributed to research on topics including artificial neural networks and convolutional neural networks. He has an h-index of 37 and has co-authored 70 publications receiving 17,256 citations. His previous affiliations include Cornell University and Columbia University.
Papers
Posted Content
How transferable are features in deep neural networks?
TL;DR: This paper quantifies the generality versus specificity of neurons in each layer of a deep convolutional neural network and reports a few surprising results, including that initializing a network with transferred features from almost any number of layers can produce a boost to generalization that lingers even after fine-tuning to the target dataset.
Proceedings Article
How transferable are features in deep neural networks?
TL;DR: In this paper, the authors quantify the transferability of features from the first layer to the last layer of a deep neural network and show that transferability is negatively affected by two distinct issues: (1) the specialization of higher layer neurons to their original task at the expense of performance on the target task and (2) optimization difficulties related to splitting networks between co-adapted neurons.
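The transfer procedure studied here — copy the first n layers from a base network, freeze them, and train only the remaining layers on the target task — can be sketched as follows. This is a minimal NumPy illustration, not the authors' code: the tiny two-layer regression network, its dimensions, and the toy data are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "base" network trained on task A; its first-layer weights stand in
# for the general features learned on the source task.
W1_base = rng.normal(size=(8, 4))

# Invented target-task (task B) data for the illustration.
X = rng.normal(size=(64, 8))
y = np.tanh(X @ W1_base) @ rng.normal(size=(4, 1))

# Transfer: copy the first layer from the base network and freeze it.
W1 = W1_base.copy()                 # transferred, frozen layer
W2 = rng.normal(size=(4, 1)) * 0.1  # new top layer, trained from scratch

lr = 0.1
for _ in range(500):
    H = np.tanh(X @ W1)                    # features from the frozen layer
    pred = H @ W2
    grad_W2 = H.T @ (pred - y) / len(X)    # gradient for the top layer only
    W2 -= lr * grad_W2                     # W1 is never updated: "frozen"

mse = float(np.mean((np.tanh(X @ W1) @ W2 - y) ** 2))
```

The "fine-tuning" variant the paper also studies would simply allow gradients to flow into `W1` as well after initialization.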
Proceedings ArticleDOI
Deep neural networks are easily fooled: High confidence predictions for unrecognizable images
TL;DR: In this article, the authors show that it is possible to produce images that are completely unrecognizable to humans, but that state-of-the-art DNNs believe to be recognizable objects with 99.99% confidence.
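One of the paper's generation methods — gradient ascent on a class score with respect to the input image — can be sketched in miniature. The random linear "classifier" below is an invented stand-in (the paper attacks trained ImageNet-scale DNNs, and also uses evolutionary algorithms); the point is only that ascent on the input drives the model's confidence arbitrarily high on an image that is noise to a human.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical toy "classifier": fixed random linear layer + softmax, 10 classes.
W = rng.normal(size=(10, 64))

def softmax(z):
    z = z - z.max()            # stabilize before exponentiating
    e = np.exp(z)
    return e / e.sum()

target = 3
x = rng.normal(size=64) * 0.01  # start from a near-blank "image"

# Gradient ascent on the target-class log-probability w.r.t. the input pixels:
# d log p[target] / dx = W[target] - sum_k p[k] * W[k]
for _ in range(200):
    p = softmax(W @ x)
    grad = W[target] - p @ W
    x += 0.1 * grad

confidence = float(softmax(W @ x)[target])  # ends up near 1.0
```

The resulting `x` has no structure a human would recognize, yet the model assigns it near-certain confidence — the phenomenon the paper demonstrates at scale.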
Posted Content
Understanding Neural Networks Through Deep Visualization
TL;DR: This work introduces several new regularization methods that combine to produce qualitatively clearer, more interpretable visualizations of convolutional neural networks.
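The visualization technique being regularized here is activation maximization: gradient ascent on an input to maximize a neuron's activation, with regularizers (the paper combines L2 decay, Gaussian blur, and clipping of small pixels/gradients) keeping the result interpretable. A minimal sketch with only the L2-decay regularizer, using an invented random linear unit in place of a real conv-net neuron:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy stand-in for a conv-net neuron: a fixed random linear unit.
w = rng.normal(size=256)

x = rng.normal(size=256) * 0.01
decay = 0.05  # L2 decay: one of the paper's regularizers; the others
              # (blurring, clipping small values) are omitted in this sketch

for _ in range(300):
    grad = w                             # d(w @ x)/dx for the linear stand-in
    x = (1 - decay) * (x + 0.1 * grad)   # ascent step, then shrink toward 0

# The decay bounds the optimized input: x converges to the fixed point
# 0.1 * (1 - decay) / decay * w instead of growing without limit.
```

Without the decay term, the ascent step alone would grow `x` unboundedly; the regularizer is what keeps the visualization in a meaningful range.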
Proceedings ArticleDOI
Plug & Play Generative Networks: Conditional Iterative Generation of Images in Latent Space
TL;DR: This paper introduces an additional prior on the latent code that improves both sample quality and sample diversity. The result is a state-of-the-art generative model that produces high-quality images at higher resolutions than previous generative models, and does so for all 1000 ImageNet categories.