Journal Article

HOTS: A Hierarchy of Event-Based Time-Surfaces for Pattern Recognition

TL;DR
The central concept is to use the rich temporal information provided by events to create contexts in the form of time-surfaces, which represent the recent temporal activity within a local spatial neighborhood; the paper demonstrates that this concept can be used robustly at all stages of an event-based hierarchical model.
Abstract
This paper describes novel event-based spatio-temporal features called time-surfaces and how they can be used to create a hierarchical event-based pattern recognition architecture. Unlike existing hierarchical architectures for pattern recognition, the presented model relies on a time-oriented approach to extract spatio-temporal features from the asynchronously acquired dynamics of a visual scene. These dynamics are acquired using biologically inspired, frameless, asynchronous event-driven vision sensors. Similarly to cortical structures, subsequent layers in our hierarchy extract increasingly abstract features using increasingly large spatio-temporal windows. The central concept is to use the rich temporal information provided by events to create contexts in the form of time-surfaces, which represent the recent temporal activity within a local spatial neighborhood. We demonstrate that this concept can be used robustly at all stages of an event-based hierarchical model. First-layer feature units operate on groups of pixels, while subsequent-layer feature units operate on the output of lower-level feature units. We report results on a previously published 36-class character recognition task and a four-class canonical dynamic card pip task, achieving near 100 percent accuracy on each. We also introduce a new seven-class moving face recognition task, achieving 79 percent accuracy.
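As a rough illustration of the time-surface idea described above, the sketch below keeps, per pixel, the timestamp of its most recent event and applies an exponential decay over a small spatial neighborhood around each incoming event. The decay constant `tau`, the neighborhood `radius`, and the 128x128 sensor size are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def update_time_surface(last_times, event, tau=50e3, radius=2):
    """Return the time-surface patch around a single event.

    last_times : 2-D array holding, per pixel, the timestamp of the most
                 recent event seen at that pixel (one array per polarity).
    event      : (x, y, t) tuple for the incoming event.
    tau        : exponential decay constant (microseconds, assumed value).
    radius     : half-width of the spatial neighborhood.
    """
    x, y, t = event
    last_times[y, x] = t  # record the newest event at its pixel
    patch = last_times[y - radius:y + radius + 1,
                       x - radius:x + radius + 1]
    # Exponential decay of the time elapsed since each neighbor's last event;
    # pixels that have never fired (timestamp -inf) decay to zero.
    return np.exp(-(t - patch) / tau)

# Minimal usage: a 128x128 sensor, one polarity map, a few synthetic events.
H = W = 128
last_times = np.full((H, W), -np.inf)
for ev in [(64, 64, 1000.0), (65, 64, 1200.0), (64, 65, 1500.0)]:
    surface = update_time_surface(last_times, ev)
print(surface.round(3))
```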


Citations

Feature Representation and Compression Methods for Event-Based Data

TL;DR: Wang et al. propose two event-based data compression methods derived from an analysis of the statistical features of the event characteristic parameters, which reduce the redundancy of delimiters between modules and improve the compression ratio.
Posted Content

MEFNet: Multi-scale Event Fusion Network for Motion Deblurring

TL;DR: The Multi-Scale Event Fusion Network (MEFNet) introduces an event-mask gated connection between the two stages of the network to avoid information loss, and achieves state-of-the-art performance on the HQBlur dataset.
Proceedings Article

Optical Flow Estimation through Fusion Network based on Self-supervised Deep Learning

TL;DR: Zhang et al. propose an unsupervised optical flow estimation method that takes both event data and grayscale image frames as input, directly fusing synthesized event frames with grayscale frames and adopting an adaptive local squeeze-extraction weighting mechanism.
Proceedings Article

Encoding Event-Based Data With a Hybrid SNN Guided Variational Auto-encoder in Neuromorphic Hardware

TL;DR: In this paper, a hybrid guided variational autoencoder (VAE) is proposed to encode event-based data sensed by a DVS into a latent-space representation using an SNN.
Book Chapter

Event Camera Visualization

TL;DR: In this article, the authors present a detailed comparison and analysis of the three most common existing methods for event visualization in terms of their principles, and suggest directions for improving existing visualization methods so as to facilitate further research and application of event cameras.
References
Journal Article

Distinctive Image Features from Scale-Invariant Keypoints

TL;DR: This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene and can robustly identify objects among clutter and occlusion while achieving near real-time performance.
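As a hedged illustration of how such keypoint features are commonly used in practice, the snippet below runs OpenCV's SIFT implementation on two views and keeps matches that pass a ratio test. The image file names are placeholders, and the 0.75 ratio threshold is a conventional choice rather than a value taken from this listing.

```python
import cv2

# Detect SIFT keypoints/descriptors in two views and match them with a ratio
# test, as in the classic keypoint-matching pipeline. Image paths are placeholders.
img1 = cv2.imread("view1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view2.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = matcher.knnMatch(des1, des2, k=2)

# The ratio test keeps only matches whose best distance is clearly smaller
# than the second-best, which suppresses ambiguous correspondences.
good = [m for m, n in matches if m.distance < 0.75 * n.distance]
print(f"{len(good)} reliable matches")
```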
Journal Article

Gradient-based learning applied to document recognition

TL;DR: In this article, a graph transformer network (GTN) is proposed that can synthesize a complex decision surface capable of classifying high-dimensional patterns such as handwritten characters.
Journal Article

Emergence of simple-cell receptive field properties by learning a sparse code for natural images

TL;DR: It is shown that a learning algorithm that attempts to find sparse linear codes for natural scenes will develop a complete family of localized, oriented, bandpass receptive fields, similar to those found in the primary visual cortex.
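A minimal sketch of that idea, assuming scikit-learn is available: learn a sparse dictionary over mean-centered image patches. The patch size, number of atoms, and sparsity penalty below are illustrative choices, and the random array merely stands in for a real natural image.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d

# `natural_image` stands in for any grayscale natural scene (placeholder data).
rng = np.random.default_rng(0)
natural_image = rng.random((256, 256))  # replace with a real image

# Extract small patches and remove their mean, as is typical before sparse coding.
patches = extract_patches_2d(natural_image, (8, 8), max_patches=5000,
                             random_state=0)
X = patches.reshape(len(patches), -1)
X -= X.mean(axis=1, keepdims=True)

# Learn an overcomplete dictionary with an L1 sparsity penalty; on real
# natural images the learned atoms tend toward localized, oriented,
# bandpass filters, as the paper reports.
dico = MiniBatchDictionaryLearning(n_components=100, alpha=1.0,
                                   batch_size=200, random_state=0)
dico.fit(X)
print(dico.components_.shape)  # (100, 64): one 8x8 filter per row
```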
Proceedings Article

Large Scale Distributed Deep Networks

TL;DR: This paper considers the problem of training a deep network with billions of parameters using tens of thousands of CPU cores and develops two algorithms for large-scale distributed training, Downpour SGD and Sandblaster L-BFGS, which increase the scale and speed of deep network training.
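The asynchronous flavor of Downpour SGD can be sketched with a toy parameter-server pattern: several workers read a shared parameter vector (possibly stale), compute gradients on their own data shard, and push updates without waiting for one another. The snippet below is a single-machine toy with threads and a linear model, not the paper's distributed implementation; all sizes and learning rates are illustrative.

```python
import threading
import numpy as np

# Toy asynchronous SGD: workers update a shared parameter vector from their
# own data shards without synchronizing their reads with one another.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -3.0, 0.5])
X = rng.normal(size=(3000, 3))
y = X @ true_w + 0.01 * rng.normal(size=3000)

shared_w = np.zeros(3)     # stands in for the parameter-server state
lock = threading.Lock()    # guards only the in-place update
lr = 0.05

def worker(shard_X, shard_y, seed, steps=200, batch=32):
    local_rng = np.random.default_rng(seed)
    for _ in range(steps):
        idx = local_rng.integers(0, len(shard_X), size=batch)
        xb, yb = shard_X[idx], shard_y[idx]
        grad = 2.0 * xb.T @ (xb @ shared_w - yb) / batch  # read may be stale
        with lock:
            shared_w[:] -= lr * grad                       # push the update

threads = [threading.Thread(target=worker, args=(X[i::4], y[i::4], i))
           for i in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(shared_w.round(2))  # close to [2.0, -3.0, 0.5]
```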