Gregory D. Hager
Researcher at Johns Hopkins University
Publications - 587
Citations - 30440
Gregory D. Hager is an academic researcher from Johns Hopkins University. The author has contributed to research on topics including Robot and Task (project management). The author has an h-index of 78, and has co-authored 565 publications receiving 26682 citations. Previous affiliations of Gregory D. Hager include the University of California, Irvine and the University of Pennsylvania.
Papers
Journal Article
A tutorial on visual servo control
TL;DR: This article provides a tutorial introduction to visual servo control of robotic manipulators by reviewing the prerequisite topics from robotics and computer vision, including a brief review of coordinate transformations, velocity representation, and a description of the geometric aspects of the image formation process.
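The tutorial's central object is the interaction matrix (image Jacobian) that links feature motion to camera motion. A minimal sketch of an image-based servoing update in that spirit, using the standard point-feature interaction matrix with unit focal length (function names and the numbers in the example are illustrative, not from the paper):

```python
import numpy as np

# Sketch of the classic image-based visual servo (IBVS) control law:
# camera velocity v = -lambda * pinv(L) * e, where L stacks per-feature
# interaction matrices and e is the feature error. Illustrative only.

def point_interaction_matrix(u, v, Z):
    """2x6 interaction matrix for an image point (u, v) at depth Z."""
    return np.array([
        [-1.0 / Z, 0.0, u / Z, u * v, -(1.0 + u**2), v],
        [0.0, -1.0 / Z, v / Z, 1.0 + v**2, -u * v, -u],
    ])

def ibvs_velocity(features, goals, depths, gain=0.5):
    """Stack per-point Jacobians and return a 6-DOF camera velocity."""
    L = np.vstack([point_interaction_matrix(u, v, Z)
                   for (u, v), Z in zip(features, depths)])
    e = (np.asarray(features) - np.asarray(goals)).ravel()
    return -gain * np.linalg.pinv(L) @ e

# One feature at (0.1, 0.0), goal at the image center, depth 1.0.
v = ibvs_velocity([(0.1, 0.0)], [(0.0, 0.0)], [1.0])
```

With a single point the system is under-constrained, so the pseudoinverse returns the minimum-norm camera motion that reduces the feature error.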
Journal Article
Advances in computational stereo
TL;DR: This work reviews recent advances in computational stereo, focusing primarily on three important topics: correspondence methods, methods for handling occlusion, and real-time implementations.
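The correspondence methods surveyed include local, winner-take-all block matching. A minimal sketch of that baseline, using a sum-of-squared-differences (SSD) cost over a horizontal disparity range on rectified images (window size and disparity range here are illustrative):

```python
import numpy as np

# Winner-take-all block-matching stereo: for each disparity candidate d,
# compute per-pixel SSD between the left image and the right image shifted
# by d, aggregate over a square window, and take the argmin. Illustrative.

def ssd_disparity(left, right, max_disp=16, win=3):
    """Per-pixel disparity by minimizing windowed SSD cost over shifts."""
    h, w = left.shape
    half = win // 2
    costs = np.empty((max_disp, h, w))
    for d in range(max_disp):
        diff = np.full((h, w), np.inf)  # columns < d have no valid match
        diff[:, d:] = (left[:, d:].astype(float)
                       - right[:, :w - d].astype(float)) ** 2
        # Aggregate squared differences over a (win x win) window.
        pad = np.pad(diff, half, mode="edge")
        agg = np.zeros((h, w))
        for dy in range(win):
            for dx in range(win):
                agg += pad[dy:dy + h, dx:dx + w]
        costs[d] = agg
    return np.argmin(costs, axis=0)
```

This is the simplest local method; the review's other two topics (occlusion handling and real-time implementations) address exactly the failure modes and cost of this kind of search.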
Journal Article
Efficient region tracking with parametric models of geometry and illumination
TL;DR: This work develops a computationally efficient method for handling the geometric distortions produced by changes in pose, and combines geometry and illumination into an algorithm that tracks large image regions using no more computation than would be required to track without accommodating illumination changes.
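The illumination side of such a tracker models the region's appearance as the template plus a linear combination of illumination basis images, which can be fit in closed form before (or jointly with) the geometric update. A minimal sketch of that fit (the helper name and the data are illustrative, not the paper's implementation):

```python
import numpy as np

# Fit the best linear illumination correction by least squares and return
# the residual, which the geometric tracking step would then minimize.

def illumination_residual(region, template, basis):
    """Remove the best linear illumination fit; return the residual vector."""
    B = basis.reshape(basis.shape[0], -1).T   # (pixels, n_basis)
    r = (region - template).ravel()
    coeffs, *_ = np.linalg.lstsq(B, r, rcond=None)
    return r - B @ coeffs
```

Because the illumination fit is a small linear solve, folding it in adds essentially no cost per frame, which is the paper's efficiency claim.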
Journal Article
Fast and globally convergent pose estimation from video images
TL;DR: It is shown that the pose estimation problem can be formulated as minimizing an error metric based on collinearity in object (as opposed to image) space, and an iterative algorithm is derived which directly computes orthogonal rotation matrices and is globally convergent.
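The phrase "directly computes orthogonal rotation matrices" refers to solving for the rotation in closed form at each iteration rather than parameterizing and drifting off the rotation group. A minimal sketch of that inner step, as an orthogonal Procrustes solution via SVD (this is the standard closed-form sub-step, not the authors' full algorithm):

```python
import numpy as np

# Best rotation aligning centered model points to centered camera-frame
# points: SVD-based orthogonal Procrustes, with a determinant guard so the
# result is a proper rotation, never a reflection. Illustrative sketch.

def best_rotation(model_pts, camera_pts):
    """Return the rotation R minimizing ||camera - model @ R.T|| after centering."""
    p = model_pts - model_pts.mean(axis=0)
    q = camera_pts - camera_pts.mean(axis=0)
    u, _, vt = np.linalg.svd(q.T @ p)
    d = np.sign(np.linalg.det(u @ vt))        # guard against reflections
    return u @ np.diag([1.0, 1.0, d]) @ vt
```

Because every iterate produced this way is exactly orthogonal, the outer iteration never needs a re-orthogonalization step.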
Proceedings Article
Temporal Convolutional Networks for Action Segmentation and Detection
TL;DR: A class of temporal models that use a hierarchy of temporal convolutions to perform fine-grained action segmentation or detection, which are capable of capturing action compositions, segment durations, and long-range dependencies, and are over an order of magnitude faster to train than competing LSTM-based recurrent neural networks.
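The core mechanism is a stack of 1-D convolutions over per-frame features, where increasing dilation lets deeper layers see exponentially longer temporal context. A minimal numpy sketch of a causal dilated stack in that spirit (layer sizes, the dilation schedule, and the random weights are illustrative, not the authors' trained model):

```python
import numpy as np

# Causal dilated 1-D convolution over a (T, C_in) feature sequence,
# followed by ReLU; stacking layers with dilations 1, 2, 4, ... grows the
# receptive field exponentially, capturing long-range dependencies.

def dilated_conv1d(x, w, dilation):
    """x: (T, C_in); w: (K, C_in, C_out); left-padded so output is causal."""
    T = x.shape[0]
    K, _, c_out = w.shape
    pad = (K - 1) * dilation
    xp = np.vstack([np.zeros((pad, x.shape[1])), x])  # left-pad in time
    out = np.zeros((T, c_out))
    for k in range(K):                # k = K-1 taps the current frame
        out += xp[k * dilation:k * dilation + T] @ w[k]
    return np.maximum(out, 0.0)       # ReLU

def tcn_forward(x, layers):
    """Apply a stack of (weights, dilation) pairs to per-frame features."""
    for w, d in layers:
        x = dilated_conv1d(x, w, d)
    return x
```

Because all frames in a layer are computed by one convolution rather than a sequential recurrence, training parallelizes over time, which is where the speedup over LSTMs comes from.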