Steering a predator robot using a mixed frame/event-driven convolutional neural network
Citations
697 citations
Cites methods from "Steering a predator robot using a mixed frame/event-driven convolutional neural network"
...of popular traditional computer vision datasets, such as MNIST and Caltech101, have been obtained by using saccade-like motions [219], [252]. These datasets have been used in [16], [17], [18], [106], [124], [125], among others, to benchmark event-based recognition algorithms. The DVS emulator in [83] and the simulator in [205] are based on the operation principle of an ideal DVS pixel (2). Given a virt...
[...]
... it to decay exponentially down to 0 over time [17], [18]. Image reconstruction methods (Section 4.6) may also be used. Some recognition approaches rely on converting spikes to frames during inference [124], [212], while others convert the trained artificial neural network to a spiking neural network (SNN) which can operate directly on the event data [106]. Similar ideas can be applied for tasks other th...
[...]
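The exponential decay mentioned in the excerpt above is often realized as a per-pixel "time surface": each event stamps its pixel, and intensity decays toward 0 with a time constant. The sketch below is an illustration only; the sensor size, event format, and `TAU` value are assumptions, not taken from the cited works:

```python
import numpy as np

# Illustrative exponentially decaying event image ("time surface").
# Each event stamps its pixel with its timestamp; querying the image
# at time t_now yields exp(-age / TAU), decaying toward 0.
H, W, TAU = 4, 4, 0.030           # assumed resolution and decay constant (s)
last_t = np.full((H, W), -np.inf) # timestamp of last event per pixel

def on_event(x, y, t):
    """Record a DVS event at pixel (x, y) with timestamp t (seconds)."""
    last_t[y, x] = t

def decayed_image(t_now):
    """Intensity = exp(-(t_now - t_last) / TAU); 0 where no event yet."""
    age = t_now - last_t
    img = np.exp(-age / TAU)
    img[np.isinf(age)] = 0.0      # pixels that never fired stay at 0
    return img

on_event(1, 2, t=0.000)
on_event(3, 0, t=0.010)
img = decayed_image(t_now=0.010)  # fresh event -> 1.0, older event decayed
```

Newer events thus dominate the image while stale activity fades smoothly, which is what makes such representations usable as CNN input frames.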
...sors for further analysis. Model-free (deep learning): So-called model-free methods operating on groups of events typically consist of a deep neural network. Sample applications include classification [124], [125], steering-angle prediction [126], [127], and estimation of optical flow [33], [128], [129], depth [128], or ego-motion [129]. These methods differentiate themselves mainly in the representation ...
[...]
...xtraction, optical flow, de-rotation using IMU, CNN and RNN inference, etc. Several non-mobile robots [8], [10], [72], [247] and even one mobile DVS robot [124] have been built in jAER, although Java is not ideal for mobile robots. It provides a desktop GUI-based interface for easily recording and playing data that also exposes the complex internal con... 11. https://www.speck.ai/ 12. https://jaerproject.org
[...]
344 citations
Cites background from "Steering a predator robot using a mixed frame/event-driven convolutional neural network"
...The capability of event cameras to provide rich data for solving pattern recognition problems was initially shown in [16, 17, 18, 19, 10]....
[...]
...However, the goal of this work is not to develop a framework to actually control an autonomous car or robot, as already proposed in [10]....
[...]
...This is the case, for example, of the predator-prey robots in [10], where a network trained on the combined input of events and grayscale frames from a Dynamic and Active-pixel Vision Sensor (DAVIS) [20] produced one of four outputs: the prey is on the left, center, or right of the predator’s field of view (FOV), or it is not visible in the FOV....
[...]
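The four-way decision described in the excerpt above (prey on the left, center, or right of the FOV, or not visible) can be pictured as a classifier head followed by a steering rule. The sketch below is an assumption-laden illustration; the class ordering, score values, and turn commands are not from the cited paper:

```python
# Illustrative mapping from a 4-way CNN output to a steering command.
# Class order and turn magnitudes are assumptions for illustration.
CLASSES = ("left", "center", "right", "not_visible")
TURN = {"left": -1.0, "center": 0.0, "right": 1.0, "not_visible": 0.0}

def steer(scores):
    """Pick the highest-scoring class and its corresponding turn command."""
    label = CLASSES[max(range(len(scores)), key=scores.__getitem__)]
    return label, TURN[label]

label, turn = steer([0.1, 0.2, 0.6, 0.1])  # prey detected on the right
```

Collapsing localization into a handful of coarse classes keeps the control loop simple: the robot only ever needs to turn toward the winning region or search when the prey is not visible.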
References
111,197 citations
"Steering a predator robot using a m..." refers background in this paper
...…the global electronic shutter and the DVS event generation mechanism cause a burst of DVS events on each frame [2] and create events correlated with the sample rate of the APS, filling up the 5,000 events allowed in the DVS histogram, sometimes covering up the prey robot (especially if far away)....
[...]
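The fixed event budget mentioned in the excerpt above explains why a shutter-correlated burst can "cover up" the prey: once the cap is reached, no further events enter the histogram. A minimal sketch of such a capped event histogram, with the resolution and event format assumed for illustration:

```python
import numpy as np

# Illustrative fixed-capacity DVS event histogram: events accumulate
# into a 2D count image until the cap (5,000 per the excerpt above)
# is reached, so a correlated burst can consume the whole budget.
H, W, CAP = 8, 8, 5000  # resolution is an assumption; cap from the excerpt

def fill_histogram(events):
    """Accumulate at most CAP (x, y) events into an HxW count image."""
    hist = np.zeros((H, W), dtype=np.int32)
    for x, y in events[:CAP]:
        hist[y, x] += 1
    return hist

# A 6,000-event burst at one pixel exhausts the budget, leaving no
# room in the histogram for events from elsewhere in the scene:
burst = [(3, 3)] * 6000
hist = fill_histogram(burst)
```

This is why the excerpt singles out APS-synchronized event bursts as a failure mode: the informative events from a distant prey arrive after the budget is spent.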
3,601 citations
"Steering a predator robot using a m..." refers background in this paper
...The rest of the ambiguous images are the ones where the prey robot is very close to the predator and more than one LCRN region is covered by it....
[...]
1,927 citations
"Steering a predator robot using a m..." refers background in this paper
...1B shows the overall system architecture of the predator robot as described in later sections....
[...]