An Address-Event Fall Detector for Assisted Living Applications
References
A smart sensor to detect the falls of the elderly
A Smart and Passive Floor-Vibration Based Fall Detector for Elderly
A 128 × 128 120 dB 30 mW asynchronous vision sensor that responds to relative intensity change
SPEEDY: a fall detector in a wrist watch
Smart home care network using sensor fusion and distributed vision-based reasoning
Related Papers (5)
CAVIAR: A 45k Neuron, 5M Synapse, 12G Connects/s AER Hardware Sensory–Processing–Learning–Actuating System for High-Speed Visual Object Recognition and Tracking
Frequently Asked Questions (15)
Q2. What is the key feature of the ATC vision sensor?
A temporal contrast vision sensor extracts changing pixels (motion events) from the background [13] and reports temporal contrast, which is equivalent to image reflectance change when lighting is constant.
Q3. Why is a 16-bit microcontroller sufficient for the centroid computation and thresholding?
Because the algorithm has low computational complexity, a commercially available low-power, low-cost 16-bit microcontroller [31] is sufficient for the centroid computation and thresholding.
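To illustrate why the workload is light enough for such a microcontroller, here is a minimal sketch of the two operations in Python. The function names are illustrative only (this is not the authors' firmware); the actual device would use fixed-point integer arithmetic, as the centroid here does.

```python
# Hypothetical sketch: centroid of recent (x, y) event addresses using only
# integer sums and divisions, the kind of arithmetic a 16-bit MCU handles well.
def centroid(events):
    """Return the integer centroid of a list of (x, y) event addresses."""
    n = len(events)
    sx = sum(x for x, _ in events)
    sy = sum(y for _, y in events)
    return sx // n, sy // n

def fall_alarm(metric, threshold):
    """Thresholding step: raise an alarm when the fall metric exceeds it."""
    return metric > threshold
```

Both steps are a handful of additions, one division, and one comparison per decision, which is what keeps the power and cost budget low.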
Q4. What is the advantage of the ATC vision sensor?
A major advantage of this ATC image sensor is that it pushes information to the receiver only when a predefined condition is satisfied.
Q5. How much power is needed to run the detector?
The power budget of the detector is approximately 31 mW, including 30 mW for the image sensor [19] and 1 mW for the 16-bit microcontroller.
Q6. What are the fall types that the authors tested in this work?
The fall scenarios the authors tested in this work included a variety of fall types, such as fall forward, fall backward, and fall sideways.
Q7. What is the reason for using the same frame twice in two subsequent differences?
Using the same frame twice in two subsequent differences is unnecessary: it does not increase the event resolution, but it does increase the overall number of events, at the cost of more computation after readout.
Q8. What is the key feature of the ATC vision sensor used in the fall detection experiment?
In the ATC vision sensor used here, every pixel reports a change in illumination above a certain threshold with an asynchronous event, i.e., pixels are not scanned with a regular frame rate but every pixel is self-timed.
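The per-pixel behavior can be sketched as an illustrative software model of a self-timed temporal-contrast pixel. This is not the sensor's actual circuit, and the threshold value is an assumption: each pixel remembers the log-illumination at its last event and fires an ON or OFF event whenever the new value departs from that reference by more than the threshold, with no frame clock involved.

```python
import math

class ATCPixel:
    """Illustrative model of one self-timed temporal-contrast pixel."""

    def __init__(self, threshold=0.1):
        self.threshold = threshold
        self.ref = None  # log-illumination at the last emitted event

    def update(self, illuminance):
        logi = math.log(illuminance)
        if self.ref is None:          # first sample: just latch a reference
            self.ref = logi
            return None
        delta = logi - self.ref
        if abs(delta) > self.threshold:
            self.ref = logi           # re-latch at the event
            return "ON" if delta > 0 else "OFF"
        return None                   # change below threshold: no event
```

Working in the log domain is what makes the pixel respond to relative (contrast) rather than absolute intensity change.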
Q9. What are the advantages of ATC vision sensors?
ATC vision sensors have two main advantages when compared to frame-based image sensors: first, the ATC vision sensor has a higher temporal resolution, which benefits high-speed tracking applications; second, it pushes information to the receiver only when a predefined condition is satisfied, which reduces the amount of data to transmit and process.
Q10. How is the vertical velocity invariant to distance?
In order to be invariant to distance, the vertical velocity is divided by the height of the subject in pixels, as shown in (3).
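Since (3) is not reproduced here, the following is only a hedged sketch of the normalization idea: dividing the centroid's vertical velocity (in pixels per second) by the subject's height in pixels yields a dimensionless quantity, so the same fall produces a comparable value near or far from the camera. Function and parameter names are assumptions.

```python
def normalized_vertical_velocity(y_prev, y_now, dt, height_px):
    """Vertical centroid velocity divided by subject height in pixels.
    The result is dimensionless (1/s scaled by body heights), so it is
    approximately invariant to the subject's distance from the camera."""
    return (y_now - y_prev) / (dt * height_px)
```

For example, a subject who appears 40 px tall and drops 20 px in 0.5 s gives the same value as the same subject twice as far away, appearing 20 px tall and dropping 10 px in the same time.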
Q11. Why did the authors choose not to use the ATC frame for comparison purposes?
The noise events can be filtered out by the fact that they are spatio-temporally uncorrelated [27] but the authors chose not to do so to keep the computational model closely matched to cheap embedded architectures.
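A minimal sketch of the spatio-temporal correlation filter the authors mention but deliberately omit: an event is kept only if another event occurred nearby in space within a short recent time window, which drops isolated (uncorrelated) noise events. The window length and the one-pixel neighborhood are illustrative assumptions.

```python
def filter_events(events, window=0.01):
    """events: list of (t, x, y) sorted by time.
    Keep events with a spatio-temporal neighbor; drop isolated ones."""
    kept = []
    for i, (t, x, y) in enumerate(events):
        # scan backwards through earlier events until outside the time window
        for (tp, xp, yp) in reversed(events[:i]):
            if t - tp > window:
                break
            if abs(x - xp) <= 1 and abs(y - yp) <= 1:
                kept.append((t, x, y))   # correlated: keep
                break
    return kept
```

The extra per-event search is exactly the kind of overhead the authors avoided to stay matched to cheap embedded architectures.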
Q12. What happens to the buffer when a new event comes in?
As a new event comes in, a computation cycle starts by removing the expired events and appending the incoming event to the buffer.
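The computation cycle described above can be sketched as follows; the 30 ms event lifetime and the class and method names are illustrative assumptions, with the centroid recomputed over the events that remain in the buffer.

```python
from collections import deque

class EventBuffer:
    """Sliding time window of events with per-event centroid update."""

    def __init__(self, lifetime=0.030):
        self.lifetime = lifetime
        self.buf = deque()  # (t, x, y), oldest first

    def push(self, t, x, y):
        # 1) remove expired events from the front of the buffer
        while self.buf and t - self.buf[0][0] > self.lifetime:
            self.buf.popleft()
        # 2) append the incoming event
        self.buf.append((t, x, y))
        # 3) recompute the centroid over the surviving events
        n = len(self.buf)
        cx = sum(e[1] for e in self.buf) / n
        cy = sum(e[2] for e in self.buf) / n
        return cx, cy
```

Because expiry only touches the oldest entries and the sums are short, the per-event cost stays small.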
Q13. How can the authors generate an ATC frame for comparison purposes?
Notice that the authors can generate an ATC frame for comparison purposes by collecting events for 30 ms and then generating a histogram frame.
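A sketch of how such a histogram frame could be built: every event falling in a 30 ms slot increments the bin at its pixel address. The 128 × 128 array size follows the sensor resolution cited in the references; everything else here is an assumption.

```python
def histogram_frame(events, t0, slot=0.030, size=128):
    """Accumulate (t, x, y) events from [t0, t0 + slot) into a 2-D histogram,
    one bin per pixel address."""
    frame = [[0] * size for _ in range(size)]
    for t, x, y in events:
        if t0 <= t < t0 + slot:
            frame[y][x] += 1
    return frame
```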
Q14. How many subtractions and thresholding operations are performed by a PC?
For this manipulation, 8,192 subtractions and an identical number of thresholding operations are performed by a PC.
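The per-pixel subtraction-and-thresholding emulation can be sketched as follows: one subtraction and one comparison per pixel, so an N-pixel frame costs N of each. Mapping the 8,192 figure above to an 8,192-pixel frame is an assumption, and the function is illustrative, not the authors' code.

```python
def frame_difference_events(prev, curr, thr):
    """Count pixels whose absolute frame-to-frame difference exceeds thr.
    Performs one subtraction and one thresholding per pixel."""
    events = 0
    for p, c in zip(prev, curr):
        if abs(c - p) > thr:   # threshold the per-pixel subtraction
            events += 1
    return events
```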
Q15. How many meters away from the camera is the human centroid?
In the experiment, when both subjects are 2 m away from the camera, the vertical address of the human centroid fluctuates around 30 to 40, while that of a pet stays at approximately 10.