Conference

International Conference on Event-based Control, Communication, and Signal Processing 

About: The International Conference on Event-based Control, Communication, and Signal Processing is an academic conference. It publishes mainly in the areas of control systems and event-based computing. Over its lifetime, the conference has published 228 papers, which have received 1271 citations.

Papers published on a yearly basis (see the year-by-year counts under Performance Metrics below).

Papers
Proceedings ArticleDOI
13 Jun 2016
TL;DR: This work presents the first algorithm to detect and track visual features using both the frames and the event data provided by the DAVIS, a novel vision sensor which combines a standard camera and an asynchronous event-based sensor in the same pixel array.
Abstract: Because standard cameras sample the scene at constant time intervals, they do not provide any information in the blind time between subsequent frames. However, for many high-speed robotic and vision applications, it is crucial to provide high-frequency measurement updates also during this blind time. This can be achieved using a novel vision sensor, called DAVIS, which combines a standard camera and an asynchronous event-based sensor in the same pixel array. The DAVIS encodes the visual content between two subsequent frames by an asynchronous stream of events that convey pixel-level brightness changes at microsecond resolution. We present the first algorithm to detect and track visual features using both the frames and the event data provided by the DAVIS. Features are first detected in the grayscale frames and then tracked asynchronously in the blind time between frames using the stream of events. To best take into account the hybrid characteristics of the DAVIS, features are built based on large, spatial contrast variations (i.e., visual edges), which are the source of most of the events generated by the sensor. An event-based algorithm is further presented to track the features using an iterative, geometric registration approach. The performance of the proposed method is evaluated on real data acquired by the DAVIS.

94 citations
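
The event-based tracking step summarized above is an iterative geometric registration: a batch of recent events is aligned to an edge template extracted from the last grayscale frame. A minimal, translation-only sketch of that idea in Python (function and variable names are illustrative, not taken from the authors' code):

    import numpy as np

    def track_feature(template_pts, event_pts, n_iters=10):
        """Estimate the 2D translation aligning a feature's edge template
        (template_pts, shape (N, 2)) to recent events near the feature
        (event_pts, shape (M, 2)), ICP-style."""
        t = np.zeros(2)
        for _ in range(n_iters):
            moved = template_pts + t
            # Match each event to its nearest template point (brute force).
            d = np.linalg.norm(event_pts[:, None, :] - moved[None, :, :], axis=2)
            nearest = moved[np.argmin(d, axis=1)]
            # Shift the template by the mean residual of the matched pairs.
            t += (event_pts - nearest).mean(axis=0)
        return t

This shows only the core registration loop; it is a sketch of the idea, not the paper's implementation.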

Proceedings ArticleDOI
13 Jun 2016
TL;DR: Although the proposed approach discards the precise DAVIS event timing, it offers the significant advantage of compatibility with conventional deep learning technology without giving up the advantage of data-driven computing.
Abstract: This paper describes the application of a Convolutional Neural Network (CNN) in a predator/prey scenario. The CNN is trained and run on data from a Dynamic and Active Pixel Sensor (DAVIS) mounted on a Summit XL robot (the predator), which follows another one (the prey). The CNN is driven by both conventional image frames and dynamic vision sensor "frames" that consist of a constant number of DAVIS ON and OFF events. The network is thus "data driven" at a sample rate proportional to scene activity, so the effective sample rate varies from 15 Hz to 240 Hz depending on the robot speeds. The network generates four outputs: steer right, left, center, and non-visible. After off-line training on labeled data, the network is deployed on board the Summit XL, which runs jAER and receives steering directions in real time. Successful results in closed-loop trials, with accuracies up to 87% or 92% (depending on the evaluation criterion), are reported. Although the proposed approach discards the precise DAVIS event timing, it offers the significant advantage of compatibility with conventional deep learning technology without giving up the advantage of data-driven computing.

92 citations
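
The constant-event-count "frames" described above can be illustrated with a simple accumulation step: a fixed number of ON and OFF events is histogrammed into a two-channel count image, so the input rate to the CNN scales with scene activity. A hedged sketch, assuming DAVIS240-like 240x180 resolution and integer event arrays (names are illustrative):

    import numpy as np

    def events_to_frame(x, y, polarity, n_events=2000, height=180, width=240):
        """Accumulate the most recent n_events into a 2-channel count image
        (channel 0: OFF events, channel 1: ON events)."""
        frame = np.zeros((2, height, width), dtype=np.float32)
        for xi, yi, pi in zip(x[-n_events:], y[-n_events:], polarity[-n_events:]):
            frame[int(pi), int(yi), int(xi)] += 1.0
        return frame

With a fixed n_events per frame, fast motion produces frames more often and slow motion less often, which matches the 15 Hz to 240 Hz effective sample rate reported above.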

Proceedings ArticleDOI
17 Jun 2015
TL;DR: Results show that digital image processing is useful and reliable for analyzing IR images to identify possible defects during PV module inspection.
Abstract: PV modules, as the main components of a PV system, may be subjected to various internal and external stresses, so monitoring and maintenance are crucial to ensuring module lifetime and energy performance. This experimental work proposes a practical digital image processing approach for PV module inspection by thermographic assessment. An algorithm was designed to analyze IR images and detect defects and failures caused by particular events on the PV systems. The investigation was carried out with an IR thermo-camera (FLIR A35) mounted on a light Unmanned Aerial System (UAS) at the Solar Tech Laboratory. The captured IR images were processed by the proposed algorithm to determine the specific defect and the degradation percentage. The results show that digital image processing is useful and reliable for IR-image analysis to identify possible defects during the PV module inspection procedure.

50 citations
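
As a minimal illustration of the kind of IR-image analysis the abstract describes, one can flag pixels that are substantially warmer than the module's baseline temperature and report their share of the image as a degradation percentage. The threshold and names below are assumptions for illustration, not the paper's algorithm:

    import numpy as np

    def defect_percentage(temp_img, delta_t=10.0):
        """temp_img: 2D array of per-pixel temperatures (deg C) over a module.
        Flags pixels more than delta_t above the median as hot spots."""
        baseline = np.median(temp_img)
        hot = temp_img > baseline + delta_t
        return 100.0 * hot.sum() / temp_img.size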

Proceedings ArticleDOI
17 Jun 2015
TL;DR: Analysis of the information pattern underlying the triggering decision reveals a fundamental advantage for triggers that use the real-time measurement in their decision (such as MBT and RS) over those that do not (such as VBT); numerical simulation studies support this finding and provide a quantitative evaluation of the triggers' average estimation versus communication performance.
Abstract: In event-based state estimation, the event trigger decides whether or not a measurement is used for updating the state estimate. In a remote estimation scenario, this allows for trading off estimation performance for communication, and thus saving resources. In this paper, popular event triggers for estimation, such as send-on-delta (SoD), measurement-based triggering (MBT), variance-based triggering (VBT), and relevant sampling (RS), are compared for the scenario of a scalar linear process with Gaussian noise. First, the analysis of the information pattern underlying the triggering decision reveals a fundamental advantage of triggers employing the real-time measurement in their decision (such as MBT, RS) over those that do not (VBT). Second, numerical simulation studies support this finding and, moreover, provide a quantitative evaluation of the triggers in terms of their average estimation versus communication performance.

47 citations
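
Of the triggers compared above, send-on-delta (SoD) is the simplest to state: a measurement is transmitted only when it deviates from the last transmitted value by more than a threshold. A hedged sketch for the scalar case (parameter names are illustrative):

    import numpy as np

    def send_on_delta(measurements, delta=0.5):
        """Yield (index, value) pairs for measurements that fire the trigger."""
        last_sent = None
        for k, y in enumerate(measurements):
            if last_sent is None or abs(y - last_sent) > delta:
                last_sent = y
                yield k, y

    # Example: a noisy scalar random walk. Raising delta saves communication
    # at the cost of estimation performance, the trade-off studied above.
    rng = np.random.default_rng(0)
    y = np.cumsum(0.1 * rng.normal(size=200))
    sent = list(send_on_delta(y, delta=0.5))
    print(f"{len(sent)} of {len(y)} samples transmitted")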

Proceedings ArticleDOI
17 Jun 2015
TL;DR: A dynamic vision sensor (DVS), which provides event-based information about contrast changes over time at each pixel location, is used for obstacle avoidance; optic flow is extracted from the event stream with a plane-fitting algorithm that estimates the relative velocity within a small spatio-temporal cuboid.
Abstract: Any mobile agent, whether biological or robotic, needs to avoid collisions with obstacles. Insects, such as bees and flies, use optic flow to estimate the relative nearness to obstacles. Optic flow induced by ego-motion is composed of a translational and a rotational component. The segregation of both components is computationally and thus energetically expensive. Flies and bees actively separate the rotational and translational optic flow components via behaviour, i.e. by employing a saccadic strategy of flight and gaze control. Although robotic systems are able to mimic this gaze-strategy, the calculation of optic-flow fields from standard camera images remains time and energy consuming. To overcome this problem, we use a dynamic vision sensor (DVS), which provides event-based information about changes in contrast over time at each pixel location. To extract optic flow from this information, a plane-fitting algorithm estimating the relative velocity in a small spatio-temporal cuboid is used. The depth-structure is derived from the translational optic flow by using local properties of the retina. A collision avoidance direction is then computed from the event-based depth-structure of the environment. The system has successfully been tested on a robotic platform in open loop.

31 citations
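
The plane-fitting step mentioned above follows a common event-based optic-flow formulation: within a small spatio-temporal cuboid, event timestamps are fit as a plane over pixel coordinates, and the plane's gradient yields the local (normal) flow velocity. A sketch under that assumption (names are illustrative):

    import numpy as np

    def plane_fit_flow(x, y, t):
        """x, y: event pixel coordinates; t: timestamps in seconds.
        Fits t ~ a*x + b*y + c and returns the flow velocity (vx, vy) in px/s."""
        A = np.column_stack([x, y, np.ones(len(x))])
        (a, b, _), *_ = np.linalg.lstsq(A, t, rcond=None)
        g2 = a * a + b * b            # squared timestamp-gradient magnitude
        if g2 < 1e-12:
            return 0.0, 0.0           # no measurable motion in this cuboid
        return a / g2, b / g2         # velocity along the gradient direction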

Performance Metrics
No. of papers from the Conference in previous years
Year    Papers
2022    15
2021    17
2020    21
2019    18
2018    3
2017    30