
Guillermo Gallego

Researcher at University of Zurich

Publications -  75
Citations -  4728

Guillermo Gallego is an academic researcher at the University of Zurich. He has contributed to research topics including Event (computing) and Motion blur, and has an h-index of 28, having co-authored 73 publications that received 2834 citations. His previous affiliations include the Georgia Institute of Technology and the Technical University of Madrid.

Papers
Journal ArticleDOI

Event-based Vision: A Survey

TL;DR: This paper provides a comprehensive overview of the emerging field of event-based vision, with a focus on the applications and the algorithms developed to unlock the outstanding properties of event cameras.
Journal ArticleDOI

The event-camera dataset and simulator: Event-based data for pose estimation, visual odometry, and SLAM

TL;DR: In this article, the authors present data from the Dynamic and Active-pixel Vision Sensor (DAVIS), which incorporates a conventional global-shutter camera and an event-based sensor in the same pixel array.
Proceedings ArticleDOI

Event-Based Vision Meets Deep Learning on Steering Prediction for Self-Driving Cars

TL;DR: A deep neural network approach is presented that unlocks the potential of event cameras on a challenging motion-estimation task: prediction of a vehicle's steering angle, and outperforms state-of-the-art algorithms based on standard cameras.
Journal ArticleDOI

EVO: A Geometric Approach to Event-Based 6-DOF Parallel Tracking and Mapping in Real Time

TL;DR: EVO, an event-based visual odometry algorithm, leverages the outstanding properties of event cameras to track fast camera motions while recovering a semi-dense three-dimensional map of the environment, making significant progress in simultaneous localization and mapping.
Proceedings ArticleDOI

A Unifying Contrast Maximization Framework for Event Cameras, with Applications to Motion, Depth, and Optical Flow Estimation

TL;DR: In this paper, a unifying framework is presented to solve several computer vision problems with event cameras, including motion, depth, and optical flow estimation, by finding the point trajectories on the image plane that best align with the event data, i.e., by maximizing an objective function of the warped events.