scispace - formally typeset

Wenbo Li

Researcher at Samsung

Publications: 41
Citations: 2,806

Wenbo Li is an academic researcher at Samsung. His research focuses on video tracking and computer science. He has an h-index of 13 and has co-authored 38 publications that have received 2,195 citations. His previous affiliations include SenseTime and the University at Albany, SUNY.

Papers
Book Chapter

The Visual Object Tracking VOT2016 Challenge Results

Matej Kristan, +140 more
TL;DR: The Visual Object Tracking challenge VOT2016 goes beyond its predecessors by introducing a new semi-automatic ground truth bounding box annotation methodology and extending the evaluation system with the no-reset experiment.
Proceedings Article

The Visual Object Tracking VOT2017 Challenge Results

Matej Kristan, +104 more
TL;DR: The Visual Object Tracking challenge VOT2017 is the fifth annual tracker benchmarking activity organized by the VOT initiative. Results of 51 trackers are presented, many of them state-of-the-art methods published at major computer vision conferences or journals in recent years.
Book Chapter

POI: Multiple Object Tracking with High Performance Detection and Appearance Feature

TL;DR: This paper explores high-performance detection and deep-learning-based appearance features, and shows that they lead to significantly better MOT results in both online and offline settings.
Posted Content

POI: Multiple Object Tracking with High Performance Detection and Appearance Feature

TL;DR: This paper explores high-performance detection and deep-learning-based appearance features, and shows that they lead to significantly better MOT results in both online and offline settings.
Proceedings Article

Object-Driven Text-To-Image Synthesis via Adversarial Training

TL;DR: A thorough comparison between classic grid attention and the new object-driven attention is provided by analyzing their mechanisms and visualizing their attention layers, offering insight into how the proposed model generates complex scenes in high quality.