Author

Abdulwahab Alawi

Bio: Abdulwahab Alawi is an academic researcher. The author has contributed to research in the topics of background subtraction and probabilistic logic. The author has an h-index of 1 and has co-authored 1 publication receiving 7 citations.

Papers
01 Jan 2013
TL;DR: A performance comparison of different background subtraction algorithms is carried out, drawing on the literature as well as on implementations, and shows that simple techniques such as the approximation median filter can produce good results with much lower computational complexity.
Abstract: Background subtraction is one of the crucial steps in detecting moving objects. Many techniques have been proposed for detecting moving objects; however, few comparative studies have been carried out to verify their performance. In this paper a performance comparison of different background subtraction algorithms is carried out, drawing on the literature as well as on our own implementations. We investigate techniques ranging from simple ones, such as frame differencing and the approximation median filter, to more complicated probabilistic modeling techniques. Our results show that simple techniques such as the approximation median filter can produce good results with much lower computational complexity.
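
The approximation median filter referred to above maintains a per-pixel background estimate that is nudged towards each new frame by a small step, so that it converges to an approximate running median; thresholding the difference between frame and background then yields the foreground mask. The sketch below is a generic illustration of that technique, with the step size and threshold chosen as assumptions rather than values from the paper.

    import numpy as np

    def approx_median_subtraction(frames, step=1, threshold=30):
        """Approximate-median background subtraction over a grayscale video.

        frames: iterable of uint8 arrays of shape (H, W).
        Yields a boolean foreground mask for each frame after the first.
        """
        frames = iter(frames)
        background = next(frames).astype(np.int16)
        for frame in frames:
            frame = frame.astype(np.int16)
            # Nudge the background towards the current frame by `step` per pixel;
            # over time this converges to an approximate per-pixel median.
            background += step * np.sign(frame - background)
            yield np.abs(frame - background) > threshold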

9 citations


Cited by
Journal ArticleDOI
TL;DR: A comparative analytical framework can be beneficial for every researcher in this field by simplifying accurate selection and development of human motion recognition methods in future works.
Abstract: With the rapid spread of multimedia data and online observation by users, research on machine vision and on the analysis and automatic understanding of video data content is becoming progressively more important. Human motion recognition in video data is a crucial research subject in machine vision with many applications, for instance video surveillance, video indexing, robotics, human-computer interfaces and multimedia retrieval. Despite the large number of studies conducted on this topic, there is a need for a more in-depth understanding, complete classification, and evaluation of the existing human motion recognition stages. The novelty of this paper is a comparative analytical framework with three major parts. Firstly, three stages of human motion recognition are introduced, consisting of background subtraction, feature extraction, and machine learning classification. Secondly, five essential criteria are defined for evaluating the proposed human motion recognition methods. Finally, our comparative analysis of the human motion recognition stages comprises two models. The analysis of background subtraction methods is based on applying the criteria in a qualitative comparison. Next, the feature extraction and machine learning classification methods are examined by specifying their main ideas, benefits and challenges. Our comparative analytical framework can be beneficial for every researcher in this field by simplifying accurate selection and development of human motion recognition methods in future works.
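
As a rough illustration of the three-stage decomposition described above (not a method evaluated in the paper), the following sketch chains generic stand-ins for each stage: OpenCV's MOG2 background subtractor, Hu-moment shape features computed on the foreground mask, and an SVM classifier.

    import cv2
    import numpy as np
    from sklearn.svm import SVC

    # Stage 1: background subtraction (OpenCV's MOG2 used here as a stand-in).
    bg_subtractor = cv2.createBackgroundSubtractorMOG2()

    def motion_features(frame):
        """Stage 2: extract a simple shape descriptor from the foreground mask."""
        mask = bg_subtractor.apply(frame)
        hu = cv2.HuMoments(cv2.moments(mask)).flatten()
        # Log-scale the Hu moments so their magnitudes are comparable.
        return -np.sign(hu) * np.log10(np.abs(hu) + 1e-12)

    # Stage 3: machine-learning classification of the extracted features.
    # `train_frames` and `train_labels` stand for a labeled motion dataset.
    def train_classifier(train_frames, train_labels):
        X = np.array([motion_features(f) for f in train_frames])
        return SVC(kernel="rbf").fit(X, train_labels)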

16 citations

Journal Article
TL;DR: This paper presents a machine-learning-enhanced longitudinal scanline method to extract vehicle trajectories from high-angle traffic cameras and fundamentally addressed many quality issues found in NGSIM trajectory data.
Abstract: This paper presents a machine-learning-enhanced longitudinal scanline method to extract vehicle trajectories from high-angle traffic cameras. The Dynamic Mode Decomposition (DMD) method is applied to extract vehicle strands by decomposing the Spatial-Temporal Map (STMap) into a sparse foreground and a low-rank background. A deep neural network named Res-UNet+ is designed for the semantic segmentation task by adapting two prevalent deep learning architectures. The Res-UNet+ network significantly improves the performance of STMap-based vehicle detection, and the DMD model provides many interesting insights for understanding the evolution of the underlying spatial-temporal structures preserved by the STMap. The model outputs were compared with a previous image processing model and with mainstream semantic segmentation deep neural networks. After a thorough evaluation, the model proves to be accurate and robust against many challenging factors. Last but not least, this paper fundamentally addresses many quality issues found in NGSIM trajectory data. The cleaned high-quality trajectory data are published to support future theoretical and modeling research on traffic flow and microscopic vehicle control. This method is a reliable solution for video-based trajectory extraction and has wide applicability.
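
The DMD-based split above hinges on background pixels being captured by modes whose temporal frequency is close to zero, while moving vehicles appear in the remaining, rapidly varying modes. The sketch below is a generic exact-DMD separation of vectorized frames into a low-rank background and a sparse foreground; the truncation rank and the frequency threshold are illustrative assumptions, and the paper's STMap construction and Res-UNet+ segmentation stage are not shown.

    import numpy as np

    def dmd_background_foreground(frames, rank=10, eps=1e-2):
        """Split a video into low-rank background and sparse foreground via exact DMD.

        frames: array of shape (n_pixels, n_frames), columns are vectorized frames.
        Returns (background, foreground) with the same shape as `frames`.
        """
        X1, X2 = frames[:, :-1], frames[:, 1:]

        # Truncated SVD of the first snapshot matrix.
        U, S, Vh = np.linalg.svd(X1, full_matrices=False)
        U, S, Vh = U[:, :rank], S[:rank], Vh[:rank, :]

        # Reduced linear operator and its eigendecomposition.
        A_tilde = U.conj().T @ X2 @ Vh.conj().T @ np.diag(1.0 / S)
        eigvals, W = np.linalg.eig(A_tilde)

        # DMD modes and continuous-time frequencies (frame interval dt = 1).
        Phi = X2 @ Vh.conj().T @ np.diag(1.0 / S) @ W
        omega = np.log(eigvals.astype(complex))

        # Mode amplitudes from the first frame, then per-mode time dynamics.
        b = np.linalg.lstsq(Phi, frames[:, 0], rcond=None)[0]
        t = np.arange(frames.shape[1])
        dynamics = b[:, None] * np.exp(np.outer(omega, t))

        # Modes with |omega| ~ 0 barely change over time -> background.
        bg_idx = np.abs(omega) < eps
        background = (Phi[:, bg_idx] @ dynamics[bg_idx]).real
        foreground = frames - background
        return background, foreground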

3 citations

Journal ArticleDOI
TL;DR: A median-based background updating algorithm is proposed that determines the median of a buffer containing highly correlated values, making it feasible to implement on devices with limited computational resources.
Abstract: Image processing techniques for object tracking, identification and classification have become common today as a result of the improved quality of cameras and camera prices becoming cheaper day by day. The use of cameras also makes it possible for humans to analyse video streams or images where it is difficult for robots, algorithms or machines to deal with the images effectively. However, the use of cameras for basic tracking and analysis does not come without challenges, such as sudden changes in illumination, shadows, occlusion, noise, and the high computational time and space complexity of algorithms. A typical image processing task may involve several subtasks, such as capturing and pre-processing, which demand high computational resources to complete. One of the main pre-processing tasks used in image processing is image segmentation, which enables images to be divided into sections of interest so that analysis can be performed on them. Background subtraction is commonly used to segment images into background and foreground for further processing. Algorithms producing highly accurate results during this segmentation task normally demand high computation time or memory space, while algorithms that use less memory and complete the task faster may suffer from limitations that lead to undesired results at some point in time. Poor outputs from algorithms will eventually lead to system failure, which must be avoided as much as possible. This paper proposes a median-based background updating algorithm which determines the median of a buffer containing values that are highly correlated. The algorithm achieves this by deleting an extreme value from the buffer whenever data is to be added to it. Experiments show that the method produces good results with less computational time, making it possible to implement on devices that do not have much computational resource.
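
A per-pixel sketch of such a buffered-median update is shown below. The abstract only states that an extreme value is deleted from the buffer whenever new data is added, so the specific rule used here, dropping the extreme on the far side of the current median from the incoming sample, is an assumption made for illustration.

    import numpy as np

    def update_background(buffer, new_frame):
        """One update step of a buffered-median background model.

        buffer: array (buffer_len, H, W) of recent background samples per pixel.
        new_frame: array (H, W), the incoming grayscale frame.
        Returns the updated buffer and the current background estimate.
        """
        median = np.median(buffer, axis=0)

        # Assumed deletion rule: remove the extreme value on the opposite side
        # of the median from the new sample, keeping the buffer tightly clustered.
        drop_max = new_frame <= median           # new value is low -> drop the max
        idx_max = np.argmax(buffer, axis=0)
        idx_min = np.argmin(buffer, axis=0)
        drop_idx = np.where(drop_max, idx_max, idx_min)

        rows, cols = np.indices(new_frame.shape)
        buffer[drop_idx, rows, cols] = new_frame  # overwrite the dropped extreme
        return buffer, np.median(buffer, axis=0)

A foreground mask can then be obtained, for example, as np.abs(new_frame - background) > threshold.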

3 citations

Book ChapterDOI
19 Nov 2019
TL;DR: In this article, an effective method for object detection based on active contour with camera motion compensation is proposed, where the principle of active contours is to evolve an initial curve towards the object of interest that corresponds to the boundaries of the moving objects.
Abstract: Background subtraction is a widely used approach for Detecting Moving Objects (DMO) with a static camera using a simple algorithm; however, it is very sensitive to gradual local changes of illumination, shadows, non-rigid moving objects and partial or full target occlusion. In order to overcome these issues, and to bring more potential to the solution, we propose an effective method for object detection based on active contours with camera motion compensation. The principle of active contours is to evolve an initial curve towards the object of interest until it matches the boundaries of the moving object. Once the object has been detected, it can be tracked by the Kalman filter. The latter requires many prior assumptions about models and noise characteristics. As an alternative, a new method based on the Smooth Variable Structure Filter (SVSF) is implemented. The SVSF was introduced in an effort to provide a more robust estimation strategy. Detection and tracking algorithms require stable images to recognize the true position of a moving target, yet images captured from cameras placed on a moving platform exhibit undesired jitter, shake and blur. Video stabilization therefore becomes an indispensable technique; it removes unwanted camera vibration from image sequences using a homography matrix estimated from features extracted with the FAST corner detector and the FREAK feature descriptor. The proposed algorithm is validated on real-world data and the obtained results confirm the efficiency and robustness of our approach.
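
The stabilization stage described above (FAST keypoints, FREAK descriptors, and a homography) can be sketched roughly as follows; the matcher settings, the RANSAC threshold, and the warping step are illustrative assumptions, and FREAK requires the opencv-contrib package.

    import cv2
    import numpy as np

    def stabilize_frame(prev_gray, curr_gray, curr_frame):
        """Warp the current frame onto the previous one to cancel camera motion.

        A sketch of FAST + FREAK feature matching followed by a RANSAC homography.
        """
        fast = cv2.FastFeatureDetector_create()
        freak = cv2.xfeatures2d.FREAK_create()    # requires opencv-contrib-python

        kp1 = fast.detect(prev_gray, None)
        kp2 = fast.detect(curr_gray, None)
        kp1, des1 = freak.compute(prev_gray, kp1)
        kp2, des2 = freak.compute(curr_gray, kp2)

        # Binary descriptors, so match with Hamming distance; keep the best matches.
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:200]

        src = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)

        # Homography mapping the current frame back onto the previous frame.
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        h, w = prev_gray.shape
        return cv2.warpPerspective(curr_frame, H, (w, h))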

2 citations

Dissertation
01 Jan 2017
TL;DR: This work aims to demonstrate efforts towards the in-situ applicability of EMMARM, so as to provide real-time information about concrete mechanical properties such as E-modulus and compressive strength, and to provide a chronology of the evolution of these properties.

1 citation