Bio: Arwa Darwish Alzughaibi is an academic researcher. The author has contributed to research in topics: Foreground detection & Background subtraction. The author has an h-index of 1 and has co-authored 1 publication receiving 16 citations.
TL;DR: This paper provides a review of human motion detection methods, focusing on the background subtraction technique, and concludes that current methods for detecting objects in motion within videos from static cameras are inadequate.
Abstract: For the majority of computer vision applications, the ability to identify and detect objects in motion has become a crucial necessity. Background subtraction, also referred to as foreground detection, is a technique used in the image processing and computer vision fields to detect objects in motion within videos from static cameras. This is done by subtracting the current frame from a background image or background model. Comprehensive research has been done in this field in an effort to precisely obtain the region of interest for further processing (e.g. object recognition). This paper provides a review of human motion detection methods, focusing on the background subtraction technique.
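The subtraction step described above can be sketched in a few lines. This is a minimal illustration, not the paper's method: the threshold value, array shapes, and toy "object" are all assumptions made for the example.

```python
import numpy as np

def background_subtract(frame, background, threshold=30):
    """Mark pixels as foreground where the absolute difference between
    the current frame and the background model exceeds a threshold.
    The threshold of 30 is an illustrative choice."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return diff > threshold  # boolean foreground mask

# Toy example: a uniform static background with a 2x2 "moving object".
background = np.full((4, 4), 100, dtype=np.uint8)
frame = background.copy()
frame[1:3, 1:3] = 200  # object appears in the current frame
mask = background_subtract(frame, background)
```

Real systems replace the single static background image with an adaptive background model that is updated over time, which is exactly the design space this review surveys.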
TL;DR: The proposed forest fire detection algorithm applies background subtraction to detect regions containing movement, and employs temporal variation to differentiate between fire and fire-colored objects.
Abstract: Forest fires represent a real threat to human lives, ecological systems, and infrastructure. Many commercial fire detection sensor systems exist, but all of them are difficult to apply in large open spaces like forests because of their response delay, required maintenance, high cost, and other problems. In this paper a forest fire detection algorithm is proposed, consisting of the following stages. Firstly, background subtraction is applied to detect regions containing movement. Secondly, the segmented moving regions are converted from RGB to YCbCr color space and five fire detection rules are applied to separate candidate fire pixels. Finally, temporal variation is employed to differentiate between fire and fire-colored objects. The proposed method is tested on a data set consisting of 6 videos collected from the Internet. The final results show that the proposed method achieves up to 96.63% true detection rate. These results indicate that the proposed method is accurate and can be used in automatic forest fire-alarm systems.
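The abstract does not state the five fire detection rules, but the color-space step can be illustrated with two rules commonly used in the YCbCr fire-detection literature (fire pixels tend to be bright and strongly red). The rules below are assumptions for illustration, not the paper's actual rule set.

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Standard ITU-R BT.601 full-range RGB-to-YCbCr conversion."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def candidate_fire_pixels(rgb):
    """Two illustrative rules (NOT the paper's five): a fire pixel is
    brighter than its blue chroma (Y > Cb) and redder than its blue
    chroma (Cr > Cb)."""
    y, cb, cr = rgb_to_ycbcr(rgb.astype(np.float64))
    return (y > cb) & (cr > cb)

# A flame-orange pixel passes the rules; a pure blue pixel does not.
img = np.zeros((1, 2, 3), dtype=np.uint8)
img[0, 0] = [255, 80, 0]   # flame-like orange
img[0, 1] = [0, 0, 255]    # blue
m = candidate_fire_pixels(img)
```

Working in YCbCr rather than RGB separates luminance from chrominance, which makes this kind of thresholding more robust to illumination changes.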
TL;DR: This is the first study based on a novel combination of 3D-convolutional neural networks fed by optical flow and long short-term memory networks (LSTM) fed by auxiliary information over video frames for the purpose of human activity recognition.
Abstract: Human activity recognition is a challenging problem with many applications, including visual surveillance, human-computer interaction, autonomous driving and entertainment. In this study, we propose a hybrid deep model to understand and interpret videos, focusing on human activity recognition. The proposed architecture combines a dense optical flow approach with auxiliary movement information in video datasets using deep learning methodologies. To the best of our knowledge, this is the first study based on a novel combination of 3D-convolutional neural networks (3D-CNNs) fed by optical flow and long short-term memory networks (LSTMs) fed by auxiliary information over video frames for the purpose of human activity recognition. The contributions of this paper are sixfold. First, a 3D-CNN, also called multiple frames, is employed to determine the motion vectors. Second, with the same purpose, the 3D-CNN is used for dense optical flow, which is the distribution of apparent velocities of movement in captured imagery data in video frames. Third, an LSTM is employed on auxiliary information in the video to recognize hand tracking and objects. Fourth, the support vector machine algorithm is utilized for the task of video classification. Fifth, a wide range of comparative experiments is conducted on two newly generated chess datasets, namely the magnetic wall chess board video dataset (MCDS) and the standard chess board video dataset (CDS), to demonstrate the contributions of the proposed study. Finally, the experimental results reveal that the proposed hybrid deep model exhibits remarkable performance compared to state-of-the-art studies.
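As a toy illustration of the motion-vector idea underlying optical flow (not the paper's 3D-CNN or a dense per-pixel flow field), a single global motion vector between two frames can be estimated by brute-force block matching. All names and the search radius here are illustrative assumptions.

```python
import numpy as np

def block_match(prev, curr, max_disp=2):
    """Toy motion-vector estimate: find the (dy, dx) shift of the whole
    previous frame that minimizes the sum of absolute differences (SAD)
    against the current frame. Dense optical flow does this per pixel
    with smoothness constraints; this is the crudest global version."""
    best, best_err = (0, 0), np.inf
    for dy in range(-max_disp, max_disp + 1):
        for dx in range(-max_disp, max_disp + 1):
            shifted = np.roll(np.roll(prev, dy, axis=0), dx, axis=1)
            err = np.abs(curr.astype(int) - shifted.astype(int)).sum()
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

# A single bright pixel moves one row down between frames.
prev = np.zeros((6, 6), dtype=np.uint8); prev[2, 2] = 255
curr = np.zeros((6, 6), dtype=np.uint8); curr[3, 2] = 255
motion = block_match(prev, curr)
```

Note the wrap-around behavior of `np.roll` is acceptable only for this toy; real flow estimators handle borders explicitly.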
01 Sep 2018
TL;DR: Thanks to this work, students will not have to search for a place to work when the library is crowded and will not disturb other working students; it is believed that this project will serve all students.
Abstract: In this study, a real-time system that counts the number of people with the help of a camera is demonstrated. The system can send the number of people to a mobile application via the Internet of Things (IoT) and monitor it simultaneously. This work was carried out in the main library of Inonu University. A background subtraction method was used to recognize moving humans in the visual field of the camera. Based on the motion information of each person, a counter was updated by determining whether they were going inside or outside, thereby counting the number of people in the hall. The counter informs users about what percentage of the hall is empty. A combination of Matlab and ThingSpeak is used to send the counter information to the Internet. A mobile application was used to track the counter information from Android and iOS smartphones. The results were presented in the Matlab environment and the mobile application simultaneously. Thanks to this work, students will not have to look for a place to work when the library is crowded and will not disturb other working students. It is believed that this project will serve all students.
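The entering/leaving decision can be sketched as a virtual-line crossing test on a tracked person's position. The abstract gives no implementation details, so the function name, the virtual line, and the sign convention below are all hypothetical.

```python
def update_counter(prev_y, curr_y, line_y, count):
    """Hypothetical entry/exit counter: a tracked person's centroid
    crossing a virtual line downward (toward the hall) counts as
    entering; crossing upward counts as leaving."""
    if prev_y < line_y <= curr_y:
        return count + 1  # crossed the line going inward
    if curr_y < line_y <= prev_y:
        return count - 1  # crossed the line going outward
    return count          # no crossing this frame

count = 0
count = update_counter(10, 30, 20, count)  # person enters
count = update_counter(5, 15, 20, count)   # moves but does not cross
occupancy = count
```

The occupancy percentage reported to users is then just `count` divided by the hall's seating capacity.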
01 Aug 2018
TL;DR: The experimental results show that the combination of the three-frame difference method and the background difference method can effectively remove noise and ghosting; it avoids the inaccurate object extraction caused by the background difference method and the incomplete moving objects of the inter-frame difference method.
Abstract: Aiming at the ghosting detected by the two-frame difference method and the defects of independent detection by the inter-frame difference method and the background difference method, a three-frame difference method is proposed in this paper. That is, the difference results between consecutive frames are calculated, and the outline of the moving target is roughly marked to solve the problem of ghosting. At the same time, we combine the three-frame difference method and the background difference method to obtain the algorithm of this paper. The algorithm uses a mixture-of-Gaussians method to establish a background model, modifies the variance update so that the background model fits the real background, and performs morphological processing to extract the moving target. The experimental results show that the combination of the three-frame difference method and the background difference method can effectively remove noise and ghosting; it avoids the inaccurate object extraction caused by the background difference method and the incomplete moving objects of the inter-frame difference method.
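The three-frame difference itself can be sketched as the logical AND of two consecutive thresholded frame differences: only pixels that change in both intervals (the object's current position) survive, which is what suppresses the ghost left at the object's old position by two-frame differencing. The threshold and toy frames below are illustrative.

```python
import numpy as np

def three_frame_difference(f_prev, f_curr, f_next, threshold=25):
    """AND of the thresholded differences |f_curr - f_prev| and
    |f_next - f_curr|: keeps only pixels changing in both intervals."""
    d1 = np.abs(f_curr.astype(np.int16) - f_prev.astype(np.int16)) > threshold
    d2 = np.abs(f_next.astype(np.int16) - f_curr.astype(np.int16)) > threshold
    return d1 & d2

# A bright pixel moves one step right per frame: positions 0 -> 1 -> 2.
f_prev = np.zeros((1, 4), dtype=np.uint8); f_prev[0, 0] = 200
f_curr = np.zeros((1, 4), dtype=np.uint8); f_curr[0, 1] = 200
f_next = np.zeros((1, 4), dtype=np.uint8); f_next[0, 2] = 200
mask = three_frame_difference(f_prev, f_curr, f_next)
```

A plain two-frame difference of `f_prev` and `f_curr` would flag both position 0 (the ghost) and position 1; the AND keeps only position 1, the object's current location.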
01 Sep 2020
TL;DR: A robust approach is proposed, combining an adaptive distressed-human detection algorithm that runs every N input image frames with a much faster human tracking algorithm; real-time performance can be achieved using a single, low-cost day/night NIR camera.
Abstract: This paper presents the study and evaluation of GPS/GNSS techniques combined with advanced image processing algorithms for the precise detection, positioning and tracking of distressed humans. In particular, the issue of human detection in both terrestrial and marine environments is addressed, as the human silhouette in a marine environment may differ substantially from one on land. A robust approach is proposed, combining an adaptive distressed-human detection algorithm that runs every N input image frames with a much faster human tracking algorithm. Real-time or near-real-time distressed human detection rates, under several illumination and background conditions, can be achieved using a single, low-cost day/night NIR camera mounted onboard a fully autonomous UAV for Search and Rescue (SAR) missions. Moreover, the collection of a novel dataset suitable for training the computer vision algorithms is also presented. Details about both the hardware and software configuration, as well as an assessment of the proposed approach's performance, are discussed. Lastly, a comparison of the proposed approach to other human detection methods used in the literature is presented.
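The detect-every-N-frames scheme combined with a fast tracker can be sketched as a simple scheduling loop. `detect` and `track` below are stand-ins for the paper's (unspecified) detector and tracker, and the frame interval is an illustrative parameter.

```python
def process_stream(frames, detect, track, n=10):
    """Sketch of the scheduling scheme: run the expensive detector only
    on every n-th frame and bridge the gaps with a cheap tracker that
    updates the last known target state."""
    state, outputs = None, []
    for i, frame in enumerate(frames):
        if i % n == 0 or state is None:
            state = detect(frame)        # slow, accurate detection pass
        else:
            state = track(frame, state)  # fast tracking update
        outputs.append(state)
    return outputs

# Count how often each stage runs over a 10-frame stream with n=5.
calls = {"detect": 0, "track": 0}
def toy_detect(frame):
    calls["detect"] += 1
    return frame
def toy_track(frame, state):
    calls["track"] += 1
    return state
process_stream(list(range(10)), toy_detect, toy_track, n=5)
```

With a detector that is, say, 20x slower than the tracker, this schedule keeps average per-frame cost close to the tracker's, which is what makes near-real-time operation on a UAV feasible.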