
Neil Alldrin

Researcher at Google

Publications -  24
Citations -  2113

Neil Alldrin is an academic researcher at Google. He has contributed to research on image processing and feature detection (computer vision), has an h-index of 12, and has co-authored 22 publications receiving 1917 citations. Previous affiliations of Neil Alldrin include Vision-Sciences, Inc. and University of California, Berkeley.

Papers
Journal ArticleDOI

The Open Images Dataset V4: Unified image classification, object detection, and visual relationship detection at scale

TL;DR: Open Images V4 is a dataset of 9.2M images, collected from Flickr without a predefined list of class names or tags, with unified annotations for image classification, object detection, and visual relationship detection.
Proceedings ArticleDOI

Learning from Noisy Large-Scale Datasets with Minimal Supervision

TL;DR: In this article, a multi-task network is proposed that jointly learns from clean and noisy data to improve image classification. Rather than the standard approach of fine-tuning on the clean subset alone, which does not fully leverage the information it contains, the clean annotations are used to reduce the noise in the large dataset before fine-tuning the network.
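The label-cleaning idea can be sketched in a toy form: estimate how noisy labels relate to verified ones on the small clean subset, then use that estimate to clean the large noisy set before fine-tuning. This is a minimal illustration of the intuition only, not the paper's multi-task network; all data and the noise model below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: 3 classes; a small "clean" subset where both the noisy and
# the verified label are known; a larger noisy-only set.
n_classes = 3
true_clean = rng.integers(0, n_classes, size=200)
# Hypothetical noise process: 30% of labels are resampled uniformly.
flip = rng.random(200) < 0.3
noisy_clean = np.where(flip, rng.integers(0, n_classes, size=200), true_clean)

# Step 1: estimate P(clean | noisy) from the verified subset.
conf = np.zeros((n_classes, n_classes))
for noisy, clean in zip(noisy_clean, true_clean):
    conf[noisy, clean] += 1
conf /= conf.sum(axis=1, keepdims=True)  # each row sums to 1

# Step 2: "clean" the large noisy set by mapping each noisy label to its
# most probable clean class, then fine-tune on the cleaned labels.
noisy_large = rng.integers(0, n_classes, size=10)
cleaned_large = conf[noisy_large].argmax(axis=1)
```

The paper learns this cleaning as a network jointly with the classifier; the confusion-matrix version above is just the simplest instance of "use a small clean set to denoise a big one".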
Proceedings ArticleDOI

Photometric stereo with non-parametric and spatially-varying reflectance

TL;DR: This work presents a method for simultaneously recovering the shape and spatially-varying reflectance of a surface from photometric stereo images, employing novel bivariate approximations of isotropic reflectance functions.
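For context, the classic Lambertian photometric-stereo solve that this work generalizes (to non-parametric, spatially-varying reflectance) fits in a few lines: with known light directions, each pixel's intensities give a linear system whose solution encodes the surface normal and albedo. The light directions and test normal below are made up for illustration.

```python
import numpy as np

# Known light directions, one per image (rows, normalized to unit length).
L = np.array([[0.0, 0.0, 1.0],
              [0.7, 0.0, 0.714],
              [0.0, 0.7, 0.714]])
L /= np.linalg.norm(L, axis=1, keepdims=True)

# Ground truth for one pixel: unit normal and albedo (synthetic).
true_normal = np.array([0.2, -0.1, 0.97])
true_normal /= np.linalg.norm(true_normal)
albedo = 0.8

# Lambertian image formation: intensity = albedo * max(0, L . n).
I = albedo * np.clip(L @ true_normal, 0.0, None)

# Recover g = albedo * n by least squares, then split magnitude/direction.
g, *_ = np.linalg.lstsq(L, I, rcond=None)
recovered_albedo = np.linalg.norm(g)
recovered_normal = g / recovered_albedo
```

The paper's contribution is replacing the Lambertian model above with a bivariate, non-parametric reflectance function that can vary across the surface, which this linear solve cannot capture.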
Posted Content

Learning From Noisy Large-Scale Datasets With Minimal Supervision

TL;DR: An approach to effectively use millions of images with noisy annotations, in conjunction with a small subset of cleanly annotated images, to learn powerful image representations; it is particularly effective for large numbers of classes with a wide range of annotation noise.