Open Access Journal Article (DOI)

Asynchronous Event-Based Visual Shape Tracking for Stable Haptic Feedback in Microrobotics

TLDR
This paper introduces an event-based iterative closest point algorithm that tracks a microgripper's position at a frequency of 4 kHz, using an asynchronous address event representation silicon retina together with a conventional frame-based camera.
Abstract
Micromanipulation systems have recently been receiving increased attention. Teleoperated or automated micromanipulation is a challenging task due to the need for high-frequency position or force feedback to guarantee stability. In addition, the integration of sensors within micromanipulation platforms is complex. Vision is a commonly used solution for sensing; unfortunately, the update rate of the frame-based acquisition process of currently available cameras cannot ensure, at reasonable cost, stable automated or teleoperated control at the microscale, where low inertia produces dynamics too fast for frame-based acquisition to capture. This paper presents a novel vision-based microrobotic system combining an asynchronous address event representation silicon retina and a conventional frame-based camera. Unlike frame-based cameras, these artificial retinas transmit their output as a continuous stream of asynchronous temporal events, in a manner similar to the output cells of a biological retina, which enables very high update rates. This paper introduces an event-based iterative closest point algorithm that tracks a microgripper's position at a frequency of 4 kHz. The temporal precision of the asynchronous silicon retina is used to provide haptic feedback that assists users during manipulation tasks, while the frame-based camera is used to retrieve the position of the object to be manipulated. The paper reports an experiment in which a sphere roughly 50 μm in diameter is teleoperated with a piezoelectric gripper in a pick-and-place task.
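The key idea behind the 4 kHz tracking rate is that each retina event, on its own, is matched against the gripper's shape model and used to nudge the pose estimate, rather than waiting for a full frame. The sketch below illustrates that event-driven ICP principle for a 2-D rigid contour model; it is only an illustration under those assumptions, not the paper's implementation, and the names (`event_based_icp_step`, `closest_point`) and the gain value are invented for the example.

```python
import numpy as np

def closest_point(model_pts, p):
    """Return the model point closest to the event location p (brute force)."""
    d = np.linalg.norm(model_pts - p, axis=1)
    return model_pts[np.argmin(d)]

def event_based_icp_step(model_pts, pose, event, gain=0.05):
    """Nudge a 2-D rigid pose (tx, ty, theta) toward a single event.

    model_pts : (N, 2) array, gripper contour in model coordinates
    pose      : (tx, ty, theta) current pose estimate
    event     : (x, y) pixel address of the incoming retina event
    """
    tx, ty, theta = pose
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    world_pts = model_pts @ R.T + np.array([tx, ty])

    p = np.asarray(event, dtype=float)
    q = closest_point(world_pts, p)       # match the event to the shape model
    err = p - q                           # residual between event and model

    # small proportional correction of the translation, plus a gradient-like
    # rotation update from the cross product of the lever arm and the residual
    tx += gain * err[0]
    ty += gain * err[1]
    r = q - np.array([tx, ty])
    theta += gain * (r[0] * err[1] - r[1] * err[0]) / (r @ r + 1e-9)
    return (tx, ty, theta)
```

Because each update touches only a single event, thousands of such corrections per second remain cheap, which is what makes a multi-kilohertz pose output plausible on modest hardware.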


Citations
Journal Article (DOI)

Event-based Vision: A Survey

TL;DR: This paper provides a comprehensive overview of the emerging field of event-based vision, with a focus on the applications and the algorithms developed to unlock the outstanding properties of event cameras.
Journal Article (DOI)

Finding a roadmap to achieve large neuromorphic hardware systems.

TL;DR: The authors provide a glimpse of what the technology evolution roadmap looks like for neuromorphic systems, so that neuromorphic engineers may gain the same benefit of anticipation and foresight that IC designers gained from Moore's law many years ago.
Posted Content

Simultaneous Localization And Mapping: Present, Future, and the Robust-Perception Age.

TL;DR: This paper presents what is now the de facto standard formulation for SLAM and reviews a broad set of topics, including robustness and scalability in long-term mapping, metric and semantic representations for mapping, theoretical performance guarantees, active SLAM and exploration, and other new frontiers.
Journal Article (DOI)

Event-Based Vision: A Survey

TL;DR: Event cameras are bio-inspired sensors that differ from conventional frame cameras: instead of capturing images at a fixed rate, they asynchronously measure per-pixel brightness changes and output a stream of events that encode the time, location, and sign of those changes.
Proceedings Article (DOI)

Event-based, 6-DOF Pose Tracking for High-Speed Maneuvers

TL;DR: This paper presents the first onboard perception system for 6-DOF localization during high-speed maneuvers using a Dynamic Vision Sensor (DVS), and provides a versatile method to capture ground-truth data using a DVS.
References
Journal Article (DOI)

A method for registration of 3-D shapes

TL;DR: In this paper, the authors describe a general-purpose, representation-independent method for the accurate and computationally efficient registration of 3-D shapes, including free-form curves and surfaces. The method is based on the iterative closest point (ICP) algorithm, which requires only a procedure to find the closest point on a geometric entity to a given point.
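For readers unfamiliar with the Besl-McKay scheme summarized above, the sketch below spells out its two alternating steps: nearest-point matching, then a closed-form rigid alignment via SVD. It is a generic, textbook-style version written for clarity (brute-force matching, no acceleration structures), not code from the cited paper; the name `icp_align` is illustrative.

```python
import numpy as np

def icp_align(source, target, iters=20):
    """Point-to-point ICP: repeatedly match each source point to its nearest
    target point, then solve the best rigid transform in closed form (SVD)."""
    src = source.astype(float).copy()
    dim = source.shape[1]
    R_total, t_total = np.eye(dim), np.zeros(dim)
    for _ in range(iters):
        # nearest-neighbour correspondences (brute force for clarity)
        d = np.linalg.norm(src[:, None, :] - target[None, :, :], axis=2)
        matched = target[np.argmin(d, axis=1)]
        # closed-form rigid alignment of the matched pairs (Kabsch / SVD)
        mu_s, mu_m = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_m)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:          # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_m - R @ mu_s
        src = src @ R.T + t               # apply the incremental transform
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```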
Book

Multiple view geometry in computer vision

TL;DR: In this book, the authors provide comprehensive background material and explain how to apply the methods and implement the algorithms directly in a unified framework, covering geometric principles and how to represent objects algebraically so that they can be computed and applied.

Proceedings Article (DOI)

Efficient variants of the ICP algorithm

TL;DR: An implementation is demonstrated that is able to align two range images in a few tens of milliseconds, assuming a good initial guess, and has potential application to real-time 3D model acquisition and model-based tracking.
Journal Article (DOI)

A 128 × 128 120 dB 15 μs Latency Asynchronous Temporal Contrast Vision Sensor

TL;DR: This silicon retina provides an attractive combination of characteristics for low-latency dynamic vision under uncontrolled illumination with low post-processing requirements: high pixel bandwidth, wide dynamic range, and precisely timed, sparse digital output.
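For context on what the event-based tracker in the main paper consumes, each output of a temporal-contrast sensor like this one is an address event: a pixel location, the polarity (sign) of the brightness change, and a microsecond-resolution timestamp. The sketch below shows one plausible in-memory representation and a helper that keeps only the events inside a short temporal window, roughly the 250 μs between 4 kHz tracking updates; the field and function names are illustrative and not taken from any particular DVS driver API.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class AddressEvent:
    """One asynchronous temporal-contrast event from a DVS-style silicon retina."""
    x: int         # column address, 0..127 for a 128 x 128 pixel array
    y: int         # row address
    polarity: int  # +1 for an ON (brightening) event, -1 for OFF (darkening)
    t_us: int      # timestamp in microseconds

def recent_events(events: List[AddressEvent], now_us: int,
                  window_us: int = 250) -> List[AddressEvent]:
    """Keep the events from the last `window_us` microseconds, e.g. the
    events feeding one tracking update of a 4 kHz loop."""
    return [e for e in events if now_us - e.t_us <= window_us]
```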