Author

K. G. Gunale

Bio: K. G. Gunale is an academic researcher. The author has contributed to research on the topic of background subtraction, has an h-index of 1, and has co-authored 1 publication receiving 12 citations.

Papers
Proceedings ArticleDOI
01 Aug 2016
TL;DR: This work presents an automatic, video-based approach for detecting and recognizing falls of elderly people in home environments, with a focus on protection of and assistance to the elderly.

Abstract: A fall is an unusual activity and a serious problem among the elderly. In the proposed system, we present an automatic approach for detecting and recognizing falls of elderly people in home environments using video-based technology. The focus is on the protection of and assistance to elderly people. A fall poses a very high risk to an elderly person's life and may even cause death. The fall incident automatically extracted from the video data provides unique information that can be used to alert emergency services or to decide whether the fall is confirmed. The main motivation of this work is to provide a system that automatically detects a fall and notifies the responsible authority. The proposed method uses background subtraction to detect the moving object and marks detected objects with rectangular and elliptical bounding boxes, followed by extraction of features such as aspect ratio, fall angle, and silhouette height. In the proposed system, an AdaBoost classifier is used to classify normal and fall events. The system is implemented using the OpenCV libraries and Python. The accuracy of the proposed system on the Le2i dataset is 79.31%.
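
The paper does not reproduce its code, but a minimal sketch of the kind of pipeline it describes, using OpenCV background subtraction, silhouette features (aspect ratio, the fitted-ellipse angle as a stand-in for the fall angle, and silhouette height), and a scikit-learn AdaBoost classifier, might look as follows. All thresholds and parameter values here are illustrative assumptions, not values taken from the paper.

```python
import cv2
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

bg_subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25)

def frame_features(frame):
    """Aspect ratio, ellipse angle, and silhouette height of the largest foreground blob."""
    fg = bg_subtractor.apply(frame)
    fg = cv2.morphologyEx(fg, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))  # remove speckle noise
    contours, _ = cv2.findContours(fg, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    blob = max(contours, key=cv2.contourArea)
    if cv2.contourArea(blob) < 500 or len(blob) < 5:   # too small, or too few points for an ellipse
        return None
    x, y, w, h = cv2.boundingRect(blob)                # rectangular bounding box
    (_, _), (_, _), angle = cv2.fitEllipse(blob)       # elliptical bounding box; angle ~ "fall angle"
    aspect_ratio = w / float(h)                        # grows above 1 for a lying posture
    return [aspect_ratio, angle, h]                    # h serves as the silhouette height

def train_classifier(X, y):
    """X: per-frame feature vectors, y: 0 = normal, 1 = fall."""
    return AdaBoostClassifier(n_estimators=50).fit(X, y)
```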

21 citations


Cited by
Journal ArticleDOI
TL;DR: Using the fall motion vector, this work efficiently identifies fall events in a variety of scenarios, such as a narrow-angle camera (Le2i dataset), a wide-angle camera (URFall dataset), and multiple cameras (Montreal dataset).

Abstract: Representation of the spatio-temporal properties of the human body silhouette and the human-to-ground relationship contributes significantly to the fall detection process. We therefore propose an approach to efficiently model spatio-temporal features using a fall motion vector. First, we construct a Gaussian mixture model (GMM) called the fall motion mixture model (FMMM) using histogram of optical flow and motion boundary histogram features to implicitly capture motion attributes in both fall and non-fall videos. The FMMM contains both fall and non-fall attributes, resulting in a high-dimensional representation. In order to extract only the attributes relevant to a particular fall or non-fall video, we perform factor analysis on the FMMM to obtain a low-dimensional representation known as the fall motion vector. Using the fall motion vector, we are able to efficiently identify fall events in a variety of scenarios, such as a narrow-angle camera (Le2i dataset), a wide-angle camera (URFall dataset), and multiple cameras (Montreal dataset). In all these scenarios, we show that the proposed fall motion vector achieves better performance than existing methods.
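
A rough sketch of how the fall-motion-vector idea could be prototyped, assuming frame-level HOF/MBH descriptors are already computed: fit a GMM (the fall motion mixture model) on the pooled descriptors, summarize each video with posterior-weighted statistics, and reduce them with factor analysis. The pooling scheme, component counts, and the final linear classifier are assumptions, not the paper's exact formulation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.decomposition import FactorAnalysis
from sklearn.svm import LinearSVC

def video_supervector(gmm, descriptors):
    """Posterior-weighted mean statistics of one video's motion descriptors."""
    resp = gmm.predict_proba(descriptors)                # (frames, components)
    stats = resp.T @ descriptors                         # (components, descriptor_dim)
    stats /= resp.sum(axis=0, keepdims=True).T + 1e-8    # normalize by soft counts
    return stats.ravel()

# all_descriptors: stacked HOF/MBH features from both fall and non-fall training videos
# per_video_descriptors: list of per-video descriptor matrices; labels: fall / non-fall
# fmmm = GaussianMixture(n_components=64, covariance_type='diag').fit(all_descriptors)
# supervectors = np.stack([video_supervector(fmmm, d) for d in per_video_descriptors])
# fa = FactorAnalysis(n_components=100).fit(supervectors)
# fall_motion_vectors = fa.transform(supervectors)       # low-dimensional representation
# clf = LinearSVC().fit(fall_motion_vectors, labels)
```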

67 citations

Journal ArticleDOI
Bo-Hua Wang, Jie Yu, Kuo Wang, Xuan-Yu Bao, Ke-Ming Mao
TL;DR: This paper presents a novel vision-based fall detection approach using Dual-Channel Feature Integration that divides the fall event into two parts, falling-state and fallen-state, describing fall events from dynamic and static perspectives.

Abstract: Falls cause great harm to the elderly living alone at home. This paper presents a novel vision-based fall detection approach using Dual-Channel Feature Integration. The proposed approach divides the fall event into two parts, falling-state and fallen-state, which describe fall events from dynamic and static perspectives. First, the object detection model (YOLO) and the human posture detection model (OpenPose) are used for preprocessing to obtain key points and the position information of the human body. Then, a dual-channel sliding window model is designed to extract dynamic features of the human body (centroid speed, upper limb velocity) and static features (human external ellipse). After that, an MLP (Multilayer Perceptron) and a Random Forest are applied to classify the dynamic and static feature data separately. Finally, the classification results are combined for fall detection. Experimental results show that the proposed approach achieves accuracies of 97.33% and 96.91% when tested on the UR Fall Detection Dataset and the Le2i Fall Detection Dataset, respectively.
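
A hedged sketch of the dual-channel classification and fusion step, assuming the dynamic (sliding-window velocity) and static (external-ellipse) feature arrays have already been extracted from the YOLO/OpenPose preprocessing described above. The AND-style fusion rule and all hyperparameters are illustrative choices, not the paper's.

```python
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import RandomForestClassifier

# X_dyn: per-window dynamic features (e.g. centroid speed, upper-limb velocity)
# X_sta: per-window static features (e.g. external-ellipse axes and orientation)
# y:     1 = fall, 0 = non-fall  (all three are assumed to be prepared elsewhere)
def train_dual_channel(X_dyn, X_sta, y):
    dyn_clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500).fit(X_dyn, y)
    sta_clf = RandomForestClassifier(n_estimators=100).fit(X_sta, y)
    return dyn_clf, sta_clf

def predict_fall(dyn_clf, sta_clf, x_dyn, x_sta):
    """Flag a fall only when the falling-state and fallen-state channels agree."""
    p_falling = dyn_clf.predict_proba([x_dyn])[0, 1]   # dynamic channel
    p_fallen = sta_clf.predict_proba([x_sta])[0, 1]    # static channel
    return p_falling > 0.5 and p_fallen > 0.5
```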

25 citations

Journal ArticleDOI
TL;DR: A spatiotemporal method to detect falls from videos filmed by surveillance cameras is presented, and SVM is found to be the best classifier for the method.

Abstract: In the area of health care, falls are a dangerous problem for aged persons and are sometimes a serious cause of death. In addition, the number of aged persons will increase in the future. Therefore, it is necessary to develop an accurate system to detect falls. In this paper, we present a spatiotemporal method to detect falls from videos filmed by surveillance cameras. First, we computed the key points of the human skeleton. We then calculated distances and angles between key points of each pair of sequential frames. After that, we applied Principal Component Analysis (PCA) to unify the dimension of the features. Finally, we utilized Support Vector Machine (SVM), Decision Tree, Random Forest, and K Nearest Neighbors (KNN) to classify the features. We found that SVM is the best classifier for our method. The results of our algorithm are as follows: accuracy is 98.5%, sensitivity is 97%, and specificity is 100%.
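
An illustrative sketch of this pipeline, assuming skeleton key points are already available per frame from some pose estimator: compute displacement distances and angles between corresponding key points of consecutive frames, reduce with PCA, and classify with an SVM. The feature layout (fixed-length clips) and component counts are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC

def pairwise_frame_features(kpts_prev, kpts_curr):
    """Distances and angles of each key point's displacement between two frames."""
    disp = kpts_curr - kpts_prev                    # (num_keypoints, 2)
    dist = np.linalg.norm(disp, axis=1)             # displacement magnitude
    ang = np.arctan2(disp[:, 1], disp[:, 0])        # displacement direction
    return np.concatenate([dist, ang])

def clip_features(keypoint_sequence):
    """Stack features from every consecutive pair of frames in one fixed-length clip."""
    return np.concatenate([
        pairwise_frame_features(a, b)
        for a, b in zip(keypoint_sequence[:-1], keypoint_sequence[1:])
    ])

# X: one feature vector per clip, y: fall / no-fall labels
# X_reduced = PCA(n_components=50).fit_transform(X)
# clf = SVC(kernel='rbf').fit(X_reduced, y)
```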

18 citations

Proceedings ArticleDOI
01 Oct 2017
TL;DR: The proposed method extracts the human from a video camera using a mixture-of-Gaussians model combined with an average filter model and recognizes six postures of physical human movement: lying, sitting, standing, getting up, walking, and falling.

Abstract: Fall accidents, whose rates increase exponentially, are the major risk for the elderly, especially those living alone. A fall accident detection system that detects the fall and calls for emergency help is essential for the elderly. This paper proposes to extract the human from a video camera using a mixture-of-Gaussians model combined with an average filter model. The proposed method recognizes six postures of physical human movement: lying, sitting, standing, getting up, walking, and falling. Unique features such as inter-frame information, shape description from the silhouette aspect ratio, and the orientation of the principal component are obtained. The method can automatically raise an alarm when a fall is detected. The experimental results show a detection rate of up to 86.21% on the 58 videos from the Le2i dataset.
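
A sketch of the two-model foreground extraction and silhouette features described above, combining an OpenCV mixture-of-Gaussians subtractor with a running-average background model and then computing the aspect ratio and principal-axis orientation of the silhouette. The thresholds and the way the two masks are combined are guesses rather than the paper's exact settings.

```python
import cv2
import numpy as np

mog = cv2.createBackgroundSubtractorMOG2()
avg_bg = None  # running-average background, updated every frame

def silhouette_features(frame, alpha=0.02, diff_thresh=30):
    """Aspect ratio and principal-axis orientation of the extracted silhouette."""
    global avg_bg
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
    if avg_bg is None:
        avg_bg = gray.copy()
    cv2.accumulateWeighted(gray, avg_bg, alpha)                         # average filter model
    avg_mask = (cv2.absdiff(gray, avg_bg) > diff_thresh).astype(np.uint8) * 255
    mog_mask = mog.apply(frame)                                         # mixture-of-Gaussians model
    _, mog_mask = cv2.threshold(mog_mask, 200, 255, cv2.THRESH_BINARY)  # drop shadow pixels
    fg = cv2.bitwise_and(mog_mask, avg_mask)                            # combine the two models
    ys, xs = np.nonzero(fg)
    if len(xs) < 50:
        return None
    aspect_ratio = (xs.max() - xs.min() + 1) / float(ys.max() - ys.min() + 1)
    pts = np.stack([xs, ys], axis=1).astype(np.float32)
    _, _, vt = np.linalg.svd(pts - pts.mean(axis=0), full_matrices=False)
    orientation = np.degrees(np.arctan2(vt[0, 1], vt[0, 0]))            # principal-component angle
    return aspect_ratio, orientation
```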

18 citations

Proceedings ArticleDOI
01 Jan 2018
TL;DR: This paper proposes an improvement of fall detection using consecutive-frame voting, with background subtraction based on a mixture of Gaussian models (MoG) combined with an average filter model; results show improved accuracy over the authors' previous work.

Abstract: The Centers for Disease Control and Prevention (CDC) reported older-adult statistics: every second an older adult falls, 25% of the elderly reported a fall in 2014, and falls are the leading cause of hip fracture in the USA. A fall accident detection system that can automatically detect the fall and call for help is essential for the elderly. This paper proposes an improvement of fall detection using consecutive-frame voting. The first step is human detection, for which we propose background subtraction using a mixture of Gaussian models (MoG) combined with an average filter model to refine the subtraction results. In the feature extraction step, the orientation, aspect ratio, and area ratio are calculated from Principal Component Analysis (PCA) of the human silhouette. In the human centroid tracking step, the moving object is classified from the human centroid distance. Each posture is then classified in the event classification step. Finally, majority voting over the results from consecutive frames is performed. The experimental results show that the proposed method improves accuracy over our previous work when tested on the Le2i dataset.
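
A small sketch of the consecutive-frame voting step, assuming a per-frame posture classifier already exists (classify_posture below is hypothetical); the window length is an illustrative choice, not the paper's.

```python
from collections import Counter, deque

class ConsecutiveFrameVoter:
    """Smooth per-frame posture labels with majority voting over a sliding window."""
    def __init__(self, window=15):
        self.buffer = deque(maxlen=window)

    def vote(self, frame_label):
        self.buffer.append(frame_label)
        label, _ = Counter(self.buffer).most_common(1)[0]
        return label

# voter = ConsecutiveFrameVoter(window=15)
# for frame in video_frames:
#     posture = classify_posture(frame)   # hypothetical per-frame classifier
#     if voter.vote(posture) == "fall":
#         trigger_alarm()
```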

16 citations