
Jessica Yung

Researcher at Google

Publications - 10
Citations - 1285

Jessica Yung is an academic researcher from Google. The author has contributed to research in the topics Convolutional neural network and Transfer (computing). The author has an h-index of 6 and has co-authored 9 publications that have received 613 citations.

Papers
Posted Content

Big Transfer (BiT): General Visual Representation Learning

TL;DR: By combining a few carefully selected components, and transferring using a simple heuristic, Big Transfer achieves strong performance on over 20 datasets and performs well across a surprisingly wide range of data regimes -- from 1 example per class to 1M total examples.
Book Chapter DOI

Big Transfer (BiT): General Visual Representation Learning

TL;DR: Big Transfer (BiT) as discussed by the authors uses pre-trained representations to improve sample efficiency and simplifies hyperparameter tuning when training deep neural networks for vision, achieving state-of-the-art performance on 20 datasets.
Posted Content

MLP-Mixer: An all-MLP Architecture for Vision

TL;DR: MLP-Mixer as discussed by the authors is an architecture based exclusively on multi-layer perceptrons (MLPs), which contains two types of layers: one with MLPs applied independently to image patches (i.e. "mixing" the per-location features), and one with MLPs applied across patches (i.e. "mixing" spatial information). It achieves competitive scores on image classification benchmarks, with pre-training and inference cost comparable to state-of-the-art models.
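
A Mixer block alternates the two MLP types described in the TL;DR above. A minimal sketch of that idea in PyTorch, with illustrative dimensions (this is not the authors' reference implementation):

import torch
import torch.nn as nn

class MlpBlock(nn.Module):
    def __init__(self, dim, hidden_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden_dim), nn.GELU(), nn.Linear(hidden_dim, dim))
    def forward(self, x):
        return self.net(x)

class MixerBlock(nn.Module):
    def __init__(self, num_patches, channels, tokens_hidden, channels_hidden):
        super().__init__()
        self.norm1 = nn.LayerNorm(channels)
        self.token_mlp = MlpBlock(num_patches, tokens_hidden)     # mixes information across patches
        self.norm2 = nn.LayerNorm(channels)
        self.channel_mlp = MlpBlock(channels, channels_hidden)    # mixes per-patch (per-location) features
    def forward(self, x):                    # x: (batch, patches, channels)
        y = self.norm1(x).transpose(1, 2)    # swap to (batch, channels, patches) for token mixing
        x = x + self.token_mlp(y).transpose(1, 2)
        return x + self.channel_mlp(self.norm2(x))

x = torch.randn(2, 196, 512)                 # e.g. 14x14 patches, 512 channels
out = MixerBlock(196, 512, 256, 2048)(x)     # output keeps the input shape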
Posted Content

On Robustness and Transferability of Convolutional Neural Networks

TL;DR: It is found that increasing both the training set and model sizes significantly improves distributional shift robustness, and it is shown that, perhaps surprisingly, simple changes in preprocessing can significantly mitigate robustness issues in some cases.
Posted Content

Large Scale Learning of General Visual Representations for Transfer.

TL;DR: The paradigm of pre-training on large supervised datasets and fine-tuning the weights on the target task is revisited, and a simple recipe called Big Transfer (BiT) is created that achieves strong performance on over 20 datasets.
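
The BiT entries above all describe the same recipe: pre-train a large network on a large supervised dataset, then fine-tune the whole network on the target task. A hedged sketch of that fine-tuning step in PyTorch, using a generic torchvision checkpoint as a stand-in rather than the released BiT weights (names and hyperparameters are illustrative):

import torch
import torch.nn as nn
from torchvision import models

num_classes = 10                                           # assumed target-task label count
model = models.resnet50(weights="IMAGENET1K_V2")           # pre-trained representation
model.fc = nn.Linear(model.fc.in_features, num_classes)    # replace the head for the new task

optimizer = torch.optim.SGD(model.parameters(), lr=3e-3, momentum=0.9)
loss_fn = nn.CrossEntropyLoss()

def fine_tune_step(images, labels):
    # one gradient step on a batch from the target dataset
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()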