
Yinan Yu

Researcher at Baidu

Publications: 25
Citations: 2,867

Yinan Yu is an academic researcher from Baidu. The author has contributed to research in the topics of object detection and feature extraction, has an h-index of 16, and has co-authored 25 publications receiving 2,218 citations. Previous affiliations of Yinan Yu include the Chinese Academy of Sciences.

Papers
Posted Content

DenseBox: Unifying Landmark Localization with End to End Object Detection

TL;DR: DenseBox is introduced, a unified end-to-end FCN framework that directly predicts bounding boxes and object class confidences at all locations and scales of an image; the paper also shows that incorporating landmark localization via multi-task learning further improves object detection accuracy.
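The core idea is that every location of the fully convolutional output map predicts a confidence score plus distances to the four box edges. A minimal sketch of this per-pixel decoding step (hypothetical function and map layout, not the authors' code):

```python
# Hypothetical sketch of DenseBox-style per-pixel box decoding.
# score_map[y][x] is a confidence; dist_map[y][x] holds the distances
# (top, bottom, left, right) from that location to the box edges.

def decode_densebox(score_map, dist_map, threshold=0.7, stride=4):
    """Return (x1, y1, x2, y2, score) boxes in image coordinates."""
    boxes = []
    for y, row in enumerate(score_map):
        for x, score in enumerate(row):
            if score < threshold:
                continue
            dt, db, dl, dr = dist_map[y][x]
            # Map the output-map location back to image coordinates.
            cx, cy = x * stride, y * stride
            boxes.append((cx - dl, cy - dt, cx + dr, cy + db, score))
    return boxes
```

In practice the decoded boxes would then be filtered with non-maximum suppression; the threshold and stride values above are illustrative.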
Proceedings ArticleDOI

Look and Think Twice: Capturing Top-Down Visual Attention with Feedback Convolutional Neural Networks

TL;DR: The background of feedback in the human visual cortex is introduced, motivating the development of a computational feedback mechanism in deep neural networks: a feedback loop is introduced to infer the activation status of hidden-layer neurons according to the "goal" of the network.
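A loose toy illustration of the gating idea (not the paper's actual optimization, which is solved over the whole network): each hidden unit gets a binary gate, kept open only when the unit's contribution toward the target "goal" class is positive.

```python
# Toy sketch of goal-driven feedback gating (illustrative only).
# hidden: activations of one hidden layer; weights_to_goal: each unit's
# weight toward the target class score.

def feedback_gates(hidden, weights_to_goal):
    """Open the gate (1) for units contributing positively to the goal."""
    return [1 if h * w > 0 else 0 for h, w in zip(hidden, weights_to_goal)]
```

Applying such gates top-down suppresses activations irrelevant to the target class, which is what lets the network "look twice" at goal-relevant regions.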
Proceedings ArticleDOI

Deep multiple instance learning for image classification and auto-annotation

TL;DR: This paper models deep learning in a weakly supervised (multiple instance learning) framework, in which each image follows a dual multi-instance assumption: its object proposals and its possible text annotations can each be regarded as an instance set.
Proceedings ArticleDOI

High-Level Semantic Feature Detection: A New Perspective for Pedestrian Detection

TL;DR: Pedestrian detection is simplified into a straightforward center and scale prediction task carried out through convolutions; the proposed method enjoys an anchor-free setting and presents competitive accuracy and good speed on challenging pedestrian detection benchmarks.
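In this anchor-free formulation, one output map scores pedestrian centers and another predicts the scale (height), from which full boxes are recovered. A minimal decoding sketch, assuming the scale map stores log-height and a fixed pedestrian aspect ratio of 0.41 (a common convention; details here are illustrative, not the authors' implementation):

```python
import math

# Hypothetical center-and-scale decoding: center_map holds center
# confidences, log_height_map holds log of pedestrian height in pixels.

def decode_center_scale(center_map, log_height_map, threshold=0.5,
                        stride=4, aspect=0.41):
    boxes = []
    for y, row in enumerate(center_map):
        for x, score in enumerate(row):
            if score < threshold:
                continue
            h = math.exp(log_height_map[y][x])  # undo the log parametrization
            w = aspect * h                       # fixed width/height ratio
            cx, cy = x * stride, y * stride      # center in image coordinates
            boxes.append((cx - w / 2, cy - h / 2,
                          cx + w / 2, cy + h / 2, score))
    return boxes
```

Because no anchor boxes are enumerated, the only hand-set shape prior is the aspect ratio, which is what makes the formulation "straightforward".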
Proceedings ArticleDOI

A Deep Visual Correspondence Embedding Model for Stereo Matching Costs

TL;DR: A novel deep visual correspondence embedding model is trained via a convolutional neural network on a large set of stereo images with ground-truth disparities, and the resulting measure of pixel dissimilarity is shown to outperform traditional matching costs.
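Once patches are embedded, the matching cost at each candidate disparity reduces to a similarity between feature vectors; the disparity with the lowest cost is the best match. A sketch using cosine similarity on toy vectors (the actual network and cost definition in the paper may differ):

```python
import math

# Hypothetical embedding-based stereo matching cost: the dissimilarity of a
# left-image patch and each candidate right-image patch is the negative
# inner product of their unit-normalized feature vectors.

def cosine_cost(left_feat, right_feats):
    """Return one cost per candidate disparity (lower is better)."""
    def unit(v):
        n = math.sqrt(sum(x * x for x in v))
        return [x / n for x in v]
    lf = unit(left_feat)
    return [-sum(a * b for a, b in zip(lf, unit(rf))) for rf in right_feats]
```

Replacing hand-crafted costs (e.g. absolute intensity differences) with such a learned embedding distance is what the paper's comparison against traditional matching costs measures.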