Thomas Brox
Researcher at University of Freiburg
Publications - 353
Citations - 127470
Thomas Brox is an academic researcher at the University of Freiburg. He has contributed to research on segmentation and optical flow, has an h-index of 99, and has co-authored 329 publications receiving 94,431 citations. Previous affiliations of Thomas Brox include Dresden University of Technology and the University of California, Berkeley.
Papers
Posted Content
Hybrid Learning of Optical Flow and Next Frame Prediction to Boost Optical Flow in the Wild
TL;DR: This paper boosts CNN-based optical flow estimation in real scenes with the help of the freely available self-supervised task of next-frame prediction, and experiments with predicting the "next flow" instead of estimating the current flow, which is intuitively closer to the task of next-frame prediction and yields favorable results.
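The self-supervised coupling between flow and next-frame prediction rests on a warping step: a flow field is used to synthesize the next frame, and the photometric error against the real next frame provides a training signal without labels. A minimal numpy sketch with nearest-neighbour sampling (the actual networks use differentiable bilinear warping; `warp_with_flow` and `photometric_loss` are illustrative names, not the paper's API):

```python
import numpy as np

def warp_with_flow(frame, flow):
    """Warp a grayscale frame with a dense flow field (nearest-neighbour
    backward warping; a toy stand-in for the bilinear warping used in
    flow networks). flow[..., 0] is horizontal, flow[..., 1] vertical."""
    H, W = frame.shape
    ys, xs = np.mgrid[0:H, 0:W]
    src_y = np.clip(np.round(ys - flow[..., 1]).astype(int), 0, H - 1)
    src_x = np.clip(np.round(xs - flow[..., 0]).astype(int), 0, W - 1)
    return frame[src_y, src_x]

def photometric_loss(pred_next, actual_next):
    """Self-supervised signal: how well the flow explains the next frame."""
    return float(np.mean(np.abs(pred_next - actual_next)))
```

For a scene translating one pixel to the right, a constant flow of (1, 0) warps the current frame into the next one exactly, so the photometric loss is zero.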
A Silhouette Based Human Motion Tracking System
TL;DR: A human model generation system is presented that uses a set of input images to automatically generate a free-form surface model of a human upper torso, together with a correspondence module that relates image data to model data and a pose estimation module.
Journal ArticleDOI
Ranking Info Noise Contrastive Estimation: Boosting Contrastive Learning via Ranked Positives
TL;DR: RINCE is introduced, a new member in the family of InfoNCE losses that preserves a ranked ordering of positive samples. It yields higher classification accuracy and retrieval rates and performs better on out-of-distribution detection than the standard InfoNCE loss.
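The core idea of ranking positives can be illustrated with a simplified loss (a hedged sketch, not the exact RINCE formulation from the paper): the rank-i positive is contrasted against all lower-ranked positives plus the negatives, so a model that breaks the intended ordering pays a higher loss.

```python
import numpy as np

def ranked_infonce(sim_pos_ranked, sim_neg, tau=0.1):
    """Illustrative InfoNCE-style loss over ranked positives.
    sim_pos_ranked: similarities to positives, best rank first.
    sim_neg: similarities to negatives.
    Each rank-i positive competes against lower-ranked positives
    and all negatives."""
    pos = np.asarray(sim_pos_ranked, dtype=float)
    neg = np.asarray(sim_neg, dtype=float)
    loss = 0.0
    for i, s in enumerate(pos):
        # competitors: this positive, lower-ranked positives, negatives
        logits = np.concatenate(([s], pos[i + 1:], neg)) / tau
        # negative log-softmax of the rank-i positive
        loss += -(s / tau - np.log(np.exp(logits).sum()))
    return loss / len(pos)
```

Swapping the similarities of a well-ranked pair increases this loss, which is the ordering-preserving behaviour the TL;DR describes.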
Book ChapterDOI
Learning for multi-view 3d tracking in the context of particle filters
TL;DR: In this paper, an approach to using prior knowledge in the particle filter framework for 3D tracking, i.e. estimating state parameters such as the joint angles of a 3D object, is presented.
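The particle filter framework referenced above follows a predict-weight-resample cycle. A minimal sketch on a 1-D toy state (the paper's state is a vector of 3D pose and joint-angle parameters; the function name and the Gaussian motion/observation models here are illustrative assumptions):

```python
import numpy as np

def particle_filter_step(particles, weights, observation, rng,
                         motion_std=0.1, obs_std=0.2):
    """One predict-weight-resample cycle of a generic particle filter
    on a scalar state."""
    # Predict: propagate particles through a random-walk motion model
    particles = particles + rng.normal(0.0, motion_std, size=particles.shape)
    # Weight: Gaussian likelihood of the observation given each particle
    weights = weights * np.exp(-0.5 * ((observation - particles) / obs_std) ** 2)
    weights = weights / weights.sum()
    # Resample: draw a new particle set in proportion to the weights
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))
```

After a few iterations the particle cloud concentrates around the observed state, which is the mechanism the tracking approach builds on.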
Posted Content
TD or not TD: Analyzing the Role of Temporal Differencing in Deep Reinforcement Learning
TL;DR: In this article, the authors re-examine the role of TD in modern deep RL, using specially designed environments that control for specific factors that affect performance, such as reward sparsity, reward delay, and perceptual complexity of the task.
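The temporal differencing being analyzed can be made concrete with the one-step TD update (a textbook TD(0) sketch for tabular values, not the paper's deep-RL setup): the value target bootstraps from the current estimate of the next state rather than waiting for the full Monte Carlo return.

```python
def td0_update(V, s, r, s_next, terminal, alpha=0.1, gamma=0.99):
    """TD(0) value update: target = r + gamma * V[s_next], i.e. the
    current estimate of the next state stands in for the rest of the
    return. A terminal transition uses the reward alone."""
    target = r if terminal else r + gamma * V[s_next]
    V[s] = V[s] + alpha * (target - V[s])
    return V
```

On a two-state chain where state 0 leads to state 1 and state 1 terminates with reward 1, repeated updates drive V[1] toward 1 and V[0] toward gamma * V[1].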