Alexander C. Berg

Researcher at University of North Carolina at Chapel Hill

Publications: 111
Citations: 92,856

Alexander C. Berg is an academic researcher at the University of North Carolina at Chapel Hill. The author has contributed to research in the topics of object detection and natural language, has an h-index of 57, and has co-authored 109 publications receiving 67,829 citations. Previous affiliations of Alexander C. Berg include Facebook and Stanford University.

Papers
Proceedings Article

Detecting Avocados to Zucchinis: What Have We Done, and Where Are We Going?

TL;DR: A large-scale study on the ImageNet Large Scale Visual Recognition Challenge data, inspired by the recent work of Hoiem et al., shows that this dataset provides many of the same detection challenges as PASCAL VOC.
Proceedings Article

An Evaluation of the NVIDIA TX1 for Supporting Real-Time Computer-Vision Workloads

TL;DR: The Jetson TX1 board is evaluated via benchmarking efforts, black-box evaluations of GPU behavior, and case-study evaluations involving computer-vision workloads inspired by autonomous-driving use cases.
Proceedings Article

Synthesizing Training Data for Object Detection in Indoor Scenes

TL;DR: In this article, the authors explore the use of synthetically generated composite images for training state-of-the-art object detectors, especially for object instance detection, by superimposing 2D images of textured object models onto images of real environments at a variety of locations and scales.
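
The compositing idea in this summary can be illustrated with a minimal, hypothetical Python sketch (not the authors' pipeline; file names and scale parameters are illustrative): paste an alpha-masked object cut-out onto a real background image at a random location and scale, and record the resulting bounding box as a detection label.

```python
# A minimal sketch of the compositing idea, assuming alpha-masked object images.
import random
from PIL import Image

def composite(background_path, object_path, min_scale=0.2, max_scale=0.6):
    bg = Image.open(background_path).convert("RGB")
    obj = Image.open(object_path).convert("RGBA")  # alpha channel masks out the object

    # Pick a random object size relative to the smaller background dimension,
    # so the resized object always fits inside the background.
    scale = random.uniform(min_scale, max_scale)
    target = int(min(bg.width, bg.height) * scale)
    ratio = target / max(obj.width, obj.height)
    w = max(1, int(obj.width * ratio))
    h = max(1, int(obj.height * ratio))
    obj = obj.resize((w, h))

    # Place the object at a random position fully inside the background.
    x = random.randint(0, bg.width - w)
    y = random.randint(0, bg.height - h)
    bg.paste(obj, (x, y), mask=obj)  # use the alpha channel as the paste mask

    # Return the composite image and its bounding-box annotation (x1, y1, x2, y2).
    return bg, (x, y, x + w, y + h)
```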
Proceedings Article

Leveraging Long-Range Temporal Relationships Between Proposals for Video Object Detection

TL;DR: A novel temporal relation module, operating on object proposals, learns the similarities between proposals from different frames and selects proposals from the past and/or future to support the current ones, improving the accuracy of a single-frame detector significantly with negligible compute overhead.
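
A rough sketch of the general idea behind such a relation module, not the paper's exact architecture, is to compute pairwise similarities between current-frame proposal features and proposals pooled from other frames, then aggregate supporting features with the resulting weights:

```python
# A minimal sketch, assuming per-proposal feature vectors are already extracted.
import torch
import torch.nn.functional as F

def relate_proposals(current_feats, support_feats):
    """current_feats: (N, D) proposals from the current frame.
    support_feats: (M, D) proposals pooled from past/future frames."""
    # Cosine similarity between every current proposal and every support proposal.
    sim = F.normalize(current_feats, dim=1) @ F.normalize(support_feats, dim=1).T  # (N, M)
    weights = F.softmax(sim, dim=1)        # contribution of each support proposal
    aggregated = weights @ support_feats   # (N, D) temporal context per current proposal
    return current_feats + aggregated      # enhance current proposals with that context
```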
Proceedings Article

Generalizing Image Captions for Image-Text Parallel Corpus

TL;DR: This work introduces the new task of image caption generalization, formulated as visually-guided sentence compression, presents an efficient algorithm based on dynamic beam search with dependency-based constraints, and releases a new large-scale corpus of 1 million image-caption pairs that achieves tighter content alignment between images and text.
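
The flavor of beam search for sentence compression can be sketched as below. This is a hypothetical stand-in, not the paper's algorithm: the scoring function and the constraint check are placeholders where the paper would use visual guidance and dependency-parse constraints.

```python
# A minimal beam-search sketch for word-deletion sentence compression.
import heapq

def compress(words, score_word, allowed, beam_width=5, max_len=10):
    """words: tokenized caption; score_word(word) -> float relevance score;
    allowed(kept_indices, i) -> bool, a dependency-style constraint on keeping word i."""
    beams = [(0.0, [])]  # (cumulative score, indices of kept words)
    for i, w in enumerate(words):
        candidates = []
        for score, kept in beams:
            candidates.append((score, kept))  # option 1: drop word i
            if len(kept) < max_len and allowed(kept, i):
                candidates.append((score + score_word(w), kept + [i]))  # option 2: keep word i
        # Retain only the highest-scoring partial compressions.
        beams = heapq.nlargest(beam_width, candidates, key=lambda c: c[0])
    best_score, best_kept = max(beams, key=lambda c: c[0])
    return [words[i] for i in best_kept]
```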