
Georg Nebehay

Researcher at Austrian Institute of Technology

Publications -  19
Citations -  2506

Georg Nebehay is an academic researcher at the Austrian Institute of Technology. His research focuses on video tracking and smart cameras. He has an h-index of 11 and has co-authored 19 publications receiving 2,286 citations. Previous affiliations of Georg Nebehay include Graz University of Technology and Vienna University of Technology.

Papers
Proceedings ArticleDOI

The Visual Object Tracking VOT2015 Challenge Results

TL;DR: The Visual Object Tracking challenge 2015 (VOT2015) compares short-term single-object visual trackers that do not apply pre-learned models of object appearance. It presents a new dataset twice as large as that of VOT2014, with targets fully annotated by rotated bounding boxes and per-frame attributes.
Journal ArticleDOI

A Novel Performance Evaluation Methodology for Single-Target Trackers

TL;DR: These requirements form the basis of a new evaluation methodology that aims at a simple and easily interpretable comparison of trackers, accompanied by a fully annotated dataset with several per-frame visual attributes, the largest benchmark to date.
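The VOT-style methodology summarized above measures tracking accuracy as the region overlap between predicted and ground-truth bounding boxes. A minimal sketch of that overlap (intersection-over-union) for axis-aligned boxes, assuming an illustrative `(x, y, w, h)` box format (the function name and format are assumptions, not taken from the paper):

```python
def overlap(pred, gt):
    """Intersection-over-union of two axis-aligned boxes given as (x, y, w, h)."""
    px, py, pw, ph = pred
    gx, gy, gw, gh = gt
    # widths/heights of the intersection rectangle (clamped at zero)
    ix = max(0.0, min(px + pw, gx + gw) - max(px, gx))
    iy = max(0.0, min(py + ph, gy + gh) - max(py, gy))
    inter = ix * iy
    union = pw * ph + gw * gh - inter
    return inter / union if union > 0 else 0.0

# identical boxes overlap fully; half-shifted boxes overlap partially
print(overlap((0, 0, 10, 10), (0, 0, 10, 10)))  # 1.0
print(overlap((0, 0, 10, 10), (5, 5, 10, 10)))  # 25 / 175 ≈ 0.143
```

Per-frame overlaps like this are typically averaged over a sequence to obtain the accuracy score, while robustness is counted separately from tracking failures.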
Book ChapterDOI

The Visual Object Tracking VOT2014 challenge results

TL;DR: The evaluation protocol of the VOT2013 challenge and the results of a comparison of 27 trackers on the benchmark dataset are presented, offering a more systematic comparison of the trackers.
Proceedings ArticleDOI

The Visual Object Tracking VOT2013 Challenge Results

TL;DR: The evaluation protocol of the VOT2013 challenge and the results of a comparison of 27 trackers on the benchmark dataset are presented, offering a more systematic comparison of the trackers.
Proceedings ArticleDOI

Clustering of static-adaptive correspondences for deformable object tracking

TL;DR: This work proposes a novel method for establishing correspondences on deformable objects for single-target object tracking. It clusters static-adaptive correspondences to build a keypoint-based tracker that outputs rotated bounding boxes, and outperforms the state of the art on a dataset of 77 sequences.
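The consensus idea behind such keypoint-based tracking can be sketched as follows: each matched keypoint casts a vote for the object's center, and clustering the votes separates inlier keypoints on the object from outlier matches on the background. This is an illustrative reconstruction only; the function name, the greedy single-link clustering, and the distance threshold are assumptions, not the paper's actual implementation:

```python
import numpy as np

def cluster_center_votes(votes, threshold=20.0):
    """Greedy single-link clustering of 2-D object-center votes.

    Returns a boolean mask marking votes in the largest cluster
    (the consensus); the remaining votes are treated as outliers.
    """
    votes = np.asarray(votes, dtype=float)
    n = len(votes)
    labels = -np.ones(n, dtype=int)  # -1 means "not yet assigned"
    current = 0
    for i in range(n):
        if labels[i] != -1:
            continue
        labels[i] = current
        changed = True
        while changed:  # grow the cluster until no vote is close enough
            changed = False
            members = votes[labels == current]
            for j in range(n):
                if labels[j] == -1 and np.min(
                        np.linalg.norm(members - votes[j], axis=1)) < threshold:
                    labels[j] = current
                    changed = True
        current += 1
    sizes = np.bincount(labels)
    return labels == int(np.argmax(sizes))  # keep only the largest cluster

# three votes agree on a center near (100, 100); one stray vote is rejected
votes = [(100, 100), (103, 98), (99, 104), (250, 40)]
inliers = cluster_center_votes(votes)
```

The surviving inlier keypoints can then be used to estimate the object's translation, scale, and rotation, which is what makes a rotated bounding box output possible.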