
Cheng-Yang Fu

Researcher at University of North Carolina at Chapel Hill

Publications: 20
Citations: 34,195

Cheng-Yang Fu is an academic researcher at the University of North Carolina at Chapel Hill. He has contributed to research on topics including synchronization (computer science) and multi-core processors. He has an h-index of 11 and has co-authored 20 publications receiving 23,695 citations. His previous affiliations include National Tsing Hua University.

Papers
Book Chapter

SSD: Single Shot MultiBox Detector

TL;DR: SSD discretizes the output space of bounding boxes into a set of default boxes over different aspect ratios and scales per feature map location, and combines predictions from multiple feature maps of different resolutions to naturally handle objects of various sizes. This design makes SSD easy to train and straightforward to integrate into systems that require a detection component.
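The default-box idea above can be sketched as follows. This is a hedged, minimal illustration of tiling anchor boxes over one feature map; the grid size, scale, and aspect ratios are illustrative values, not the paper's exact configuration.

```python
# Sketch: SSD-style default (anchor) boxes tiled over one feature map.
# Each feature-map location gets one box per aspect ratio at a given scale.
import itertools
import math

def default_boxes(fmap_size, scale, aspect_ratios):
    """Return (cx, cy, w, h) default boxes, normalized to [0, 1]."""
    boxes = []
    for i, j in itertools.product(range(fmap_size), repeat=2):
        cx = (j + 0.5) / fmap_size   # center of this grid cell
        cy = (i + 0.5) / fmap_size
        for ar in aspect_ratios:
            # Width/height trade off around the scale via the aspect ratio.
            boxes.append((cx, cy, scale * math.sqrt(ar), scale / math.sqrt(ar)))
    return boxes

# Illustrative configuration: an 8x8 map with three aspect ratios per cell.
boxes = default_boxes(fmap_size=8, scale=0.2, aspect_ratios=[1.0, 2.0, 0.5])
print(len(boxes))  # 8 * 8 * 3 = 192
```

In the full detector this tiling is repeated over several feature maps at different resolutions, with larger scales on coarser maps, so small and large objects are each matched at an appropriate level.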
Posted Content

DSSD: Deconvolutional Single Shot Detector

TL;DR: This paper combines a state-of-the-art classifier (Residual-101) with a fast detection framework (SSD) and augments the resulting model with deconvolution layers that introduce additional large-scale context into object detection, improving accuracy, especially for small objects.
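The core DSSD idea, upsampling a coarse but semantically strong feature map and fusing it with a finer one from earlier in the network, can be sketched in plain Python. This is an assumption-laden toy: DSSD uses learned deconvolution layers and an element-wise product for fusion, while here a fixed nearest-neighbor upsample stands in for the learned layer.

```python
# Sketch: fuse a coarse (deep) feature map with a fine (shallow) one,
# as in DSSD's deconvolution modules. Nearest-neighbor upsampling is a
# stand-in for the learned deconvolution layer.

def upsample2x(fmap):
    """Nearest-neighbor 2x upsampling of a 2D feature map (list of lists)."""
    out = []
    for row in fmap:
        wide = [v for v in row for _ in range(2)]  # duplicate each column
        out.append(wide)
        out.append(list(wide))                     # duplicate each row
    return out

def fuse(coarse, fine):
    """Element-wise product of the upsampled coarse map with the fine map."""
    up = upsample2x(coarse)
    return [[a * b for a, b in zip(ur, fr)] for ur, fr in zip(up, fine)]

coarse = [[1.0, 2.0],
          [3.0, 4.0]]                  # 2x2 deep, low-resolution map
fine = [[1.0] * 4 for _ in range(4)]   # 4x4 shallower, high-resolution map
fused = fuse(coarse, fine)
print(fused[0])  # [1.0, 1.0, 2.0, 2.0]
```

The fused map keeps the fine map's resolution while carrying the coarse map's larger-scale context, which is what helps with small objects.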
Posted Content

RetinaMask: Learning to predict masks improves state-of-the-art single-shot detection for free

TL;DR: This paper improves training for the state-of-the-art single-shot detector, RetinaNet, in three ways: integrating instance mask prediction for the first time, making the loss function adaptive and more stable, and including additional hard examples in training.
Proceedings Article

Fast Single Shot Detection and Pose Estimation

TL;DR: In this paper, the authors combine detection and pose estimation at the same level using a deep learning approach, where scores for the presence of an object category, the offset for its location, and the approximate pose are all estimated on a regular grid of locations in the image.
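The grid-based joint prediction described above can be sketched as a simple decoder: each grid cell predicts an objectness score, a center offset, and a score per discretized pose bin, and the best-scoring cell yields a detection with an approximate pose. The cell layout, the 4-bin pose discretization, and all names here are illustrative assumptions, not the paper's actual parameterization.

```python
# Sketch: decode joint detection + pose predictions from a regular grid.
# preds[i][j] = (objectness score, dx, dy, [pose-bin scores]).

GRID = 4          # 4x4 grid of image locations (illustrative)
POSE_BINS = 4     # pose discretized into 4 viewpoint bins: 0/90/180/270 deg

def decode(preds):
    """Return (score, cx, cy, angle_deg) for the highest-scoring cell."""
    best = None
    for i in range(GRID):
        for j in range(GRID):
            score, dx, dy, pose_scores = preds[i][j]
            if best is None or score > best[0]:
                cx = (j + dx) / GRID   # offset refines the cell's location
                cy = (i + dy) / GRID
                k = max(range(POSE_BINS), key=lambda b: pose_scores[b])
                best = (score, cx, cy, 360.0 * k / POSE_BINS)
    return best

# One confident cell among otherwise low-scoring background cells.
preds = [[(0.0, 0.5, 0.5, [1, 0, 0, 0])] * GRID for _ in range(GRID)]
preds[2][1] = (0.9, 0.4, 0.6, [0.1, 0.7, 0.1, 0.1])
score, cx, cy, angle = decode(preds)
print(round(cx, 3), round(cy, 3), angle)  # 0.35 0.65 90.0
```

Estimating category, location offset, and pose from the same grid of features is what lets detection and pose share one forward pass instead of running a separate pose estimator on crops.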