Author

Behzad Mirmahboub

Bio: Behzad Mirmahboub is an academic researcher from the University of Southern Brittany. The author has contributed to research in topics: Tree (data structure) & Region growing. The author has an h-index of 7 and has co-authored 17 publications receiving 221 citations. Previous affiliations of Behzad Mirmahboub include Istituto Italiano di Tecnologia & Isfahan University of Technology.

Papers
Journal ArticleDOI
TL;DR: This paper proposes using variations in silhouette area obtained from a single camera, with the silhouette extracted by a simple background separation method, and shows that the proposed feature is view invariant.
Abstract: The elderly population is growing in most countries, and many of these seniors live alone at home. Falls are among the most dangerous events that frequently occur and may require immediate medical care. Automatic fall detection systems could help older people and patients live independently. Vision-based systems have an advantage over wearable devices. These visual systems extract features from video sequences and classify fall and normal activities. However, these features usually depend on the camera's view direction, and using several cameras to solve this problem increases the complexity of the final system. In this paper, we propose to use variations in silhouette area obtained from only one camera. We use a simple background separation method to find the silhouette, and we show that the proposed feature is view invariant. The extracted feature is fed into a support vector machine for classification. Simulation of the proposed method on a publicly available dataset shows promising results.
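As a rough illustration of the kind of pipeline this abstract describes, the sketch below extracts per-frame silhouette areas with a generic background subtractor and summarizes their variation for an SVM; the subtractor choice, window length, and feature definition are assumptions for illustration, not the authors' exact implementation.

```python
# Minimal sketch of a silhouette-area fall detector, assuming OpenCV and
# scikit-learn are available. Window length, subtractor settings and the
# feature definition are illustrative, not the authors' exact method.
import cv2
import numpy as np
from sklearn.svm import SVC

def silhouette_areas(video_path):
    """Return the silhouette area (foreground pixel count) of each frame."""
    cap = cv2.VideoCapture(video_path)
    subtractor = cv2.createBackgroundSubtractorMOG2()  # simple background separation
    areas = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)
        mask = cv2.medianBlur(mask, 5)          # suppress isolated noise pixels
        areas.append(int(np.count_nonzero(mask)))
    cap.release()
    return np.array(areas, dtype=float)

def area_variation_feature(areas, window=30):
    """Describe how fast the silhouette area changes inside sliding windows."""
    diffs = np.abs(np.diff(areas))
    # Normalise by the mean area so the feature is less sensitive to subject distance.
    return [diffs[i:i + window].sum() / (areas[i:i + window].mean() + 1e-6)
            for i in range(0, len(diffs) - window, window)]

# Training: X holds one variation feature vector per clip, y holds fall / no-fall labels.
# clf = SVC(kernel='rbf').fit(X, y)
```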

157 citations

Journal ArticleDOI
TL;DR: A novel energy function that combines information from the saliency, depth, and gradient maps is proposed; it reduces shape deformations and visual artifacts in salient regions of images and produces better-quality output images.
Abstract: Retargeting algorithms are used to transfer and display images on devices with various sizes and resolutions. All of these algorithms try to preserve the important parts of the image against distortion while producing a retargeted image whose visual quality is comparable to the original. The main challenge in these algorithms is to find a suitable energy function that properly estimates the importance of each pixel in the image; hence the energy map needs to be improved. In this paper we propose a novel energy function that combines information from the saliency map, depth map, and gradient map. We also present an algorithm that adaptively assigns proper weights to these three importance maps for each input image. We then calculate a switching threshold based on the energy map that determines when to apply seam carving or scaling. The idea is to use a combination of seam carving and scaling to preserve the structure of the important parts of the image against distortion when the image size decreases beyond a certain point. This method reduces shape deformations and visual artifacts in salient regions and produces better-quality output images. The results of the proposed method show superior visual quality in the produced images in comparison to the state of the art.
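The sketch below illustrates one way such a fused energy map and a carving-versus-scaling switch could look, assuming the three maps are already computed and normalized to [0, 1]; the fixed weights and the simple column-energy rule are placeholders rather than the paper's adaptive weighting and threshold.

```python
# Sketch of an energy map fusing saliency, depth and gradient maps, assuming
# all three arrive as arrays normalised to [0, 1]. The weights and the
# switching rule below are placeholders, not the paper's adaptive scheme.
import numpy as np

def fused_energy(saliency, depth, gradient, w=(0.4, 0.3, 0.3)):
    """Weighted combination of the three importance maps."""
    ws, wd, wg = w
    return ws * saliency + wd * depth + wg * gradient

def choose_operator(energy, target_width, width, energy_budget=0.25):
    """Return 'seam_carving' while removed columns stay cheap, else 'scaling'."""
    col_energy = energy.sum(axis=0)
    cheapest = np.sort(col_energy)[: width - target_width].sum()
    # If the cheapest columns to remove already carry a large share of the total
    # energy, further carving would cut through salient content, so scale instead.
    return "scaling" if cheapest > energy_budget * col_energy.sum() else "seam_carving"
```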

23 citations

Proceedings ArticleDOI
01 Jul 2019
TL;DR: This paper proposes innovative pre-processing and an adaptive 3D region growing method with subject-specific conditions, together with an effective contrast enhancement algorithm to obtain strong edges and high contrast.
Abstract: Automatic liver segmentation plays a vital role in computer-aided diagnosis and treatment. Manual segmentation of organs is a tedious, challenging task that is prone to human error. In this paper, we propose innovative pre-processing and an adaptive 3D region growing method with subject-specific conditions. To obtain strong edges and high contrast, we propose an effective contrast enhancement algorithm; we then use the atlas intensity distribution of the most probable voxels in probability maps, along with their locations, to design the conditions for our 3D region growing method. We also incorporate the organ boundary to restrict the region growing. We compare our method with the label fusion of 13 organs based on the state-of-the-art Deeds registration method and achieve a Dice score of 92.56%.
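A toy version of the core region-growing step is sketched below; a plain intensity window around a seed voxel stands in for the paper's subject-specific, atlas-derived conditions and boundary restriction, so it is a minimal sketch rather than the proposed method.

```python
# Toy 3D region growing on a volume, assuming `volume` is a numpy array and
# `seed` is a (z, y, x) tuple inside the organ. A fixed intensity window
# replaces the subject-specific, atlas-derived conditions used in the paper.
from collections import deque
import numpy as np

def region_grow_3d(volume, seed, low, high):
    """Grow a region of 6-connected voxels whose intensity lies in [low, high]."""
    mask = np.zeros(volume.shape, dtype=bool)
    queue = deque([seed])
    mask[seed] = True
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in offsets:
            nz, ny, nx = z + dz, y + dy, x + dx
            if (0 <= nz < volume.shape[0] and 0 <= ny < volume.shape[1]
                    and 0 <= nx < volume.shape[2] and not mask[nz, ny, nx]
                    and low <= volume[nz, ny, nx] <= high):
                mask[nz, ny, nx] = True          # voxel accepted, keep growing from it
                queue.append((nz, ny, nx))
    return mask
```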

11 citations

Proceedings ArticleDOI
10 Dec 2015
TL;DR: A new bone segmentation method in which an image goes through preprocessing steps such as noise cancellation and edge detection; analysis of intensity fluctuations in all rows of the image then yields more accurate segmentation of bone regions.
Abstract: Segmentation of X-ray bone images is of concern in many medical applications such as the detection of osteoporosis and bone fractures. Segmentation of such images is a challenging process: varying brightness throughout the image makes it difficult to separate bones from background and soft tissue. Custom-made as well as standard segmentation methods, such as active contours and region growing, have been applied to bone X-ray images. Although each method can perform well on some images, due to the variety of bone structures and lighting conditions none of these methods can be considered complete. In this paper we present a new bone segmentation method in which an image goes through preprocessing steps such as noise cancellation and edge detection. Analysis of intensity fluctuations in all rows of the image then results in more accurate segmentation of bone regions. Visual evaluation shows that the proposed algorithm segments bones better than conventional and some recent bone segmentation approaches.
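The sketch below conveys the row-wise idea in minimal form, assuming a grayscale X-ray as a numpy array; the smoothing size and threshold factor are illustrative values, not the paper's.

```python
# Sketch of the row-wise intensity analysis idea: in each row of a denoised
# X-ray, bone pixels tend to form bright plateaus whose borders show up as
# strong fluctuations. The smoothing width and threshold factor are
# illustrative values, not the paper's.
import numpy as np
from scipy.ndimage import median_filter

def bone_mask_by_rows(image, smooth=5, k=1.5):
    """Mark pixels whose row profile rises well above that row's typical level."""
    denoised = median_filter(image.astype(float), size=smooth)   # noise cancellation
    mask = np.zeros(image.shape, dtype=bool)
    for r, row in enumerate(denoised):
        fluctuation = np.abs(np.gradient(row))                   # edge-like changes
        bright = row > row.mean() + k * row.std()
        edgy = fluctuation > fluctuation.mean() + k * fluctuation.std()
        mask[r] = bright | edgy
    return mask
```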

11 citations

Proceedings ArticleDOI
01 Oct 2014
TL;DR: The visual artifacts that cause shape deformation in salient objects and deteriorate the geometrical consistency of the scene are considerably reduced by the proposed algorithm.
Abstract: Retargeting algorithms are needed to transfer an image from one device to another with a different size and resolution. The goal is to preserve the best visual quality for the important objects of the original image. In order to reduce image size, pixels should be removed from the less important parts of the image; we therefore need an energy function that selects less important pixels for seam carving. Various energy functions have been proposed in previous works to minimize the distortion in salient objects. In this paper we combine three different importance maps to form a new energy map: we use both the gradient and depth maps to highlight the values in the saliency map, which eventually generates the final energy map. Experimental results using the proposed energy map show better visual appearance in comparison to previous algorithms, even at high resizing percentages. The visual artifacts that cause shape deformation in salient objects and deteriorate the geometrical consistency of the scene are considerably reduced by our proposed algorithm.
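A minimal sketch of the highlighting idea follows, assuming normalized saliency, depth, and gradient maps; the multiplicative boosting formula is an assumption for illustration, not the combination actually used in the paper.

```python
# Minimal sketch of the highlighting idea: gradient and depth maps boost the
# saliency values before seam carving, so seams avoid salient, near, textured
# regions. The boosting formula below is an assumption, not the paper's.
import numpy as np

def highlighted_energy(saliency, depth, gradient):
    """Saliency modulated by depth (nearer = more important) and gradient strength."""
    return saliency * (1.0 + depth) * (1.0 + gradient)
```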

11 citations


Cited by
Journal ArticleDOI
TL;DR: This is one of the first surveys to provide such breadth of coverage across different wearable sensor systems for activity classification; it finds that single sensing modalities laid the foundation for hybrid works that tackle a mix of global and local interaction-type activities.
Abstract: Activity detection and classification are very important for autonomous monitoring of humans for applications, including assistive living, rehabilitation, and surveillance. Wearable sensors have found wide-spread use in recent years due to their ever-decreasing cost, ease of deployment and use, and ability to provide continuous monitoring as opposed to sensors installed at fixed locations. Since many smart phones are now equipped with a variety of sensors, such as accelerometer, gyroscope, and camera, it has become more feasible to develop activity monitoring algorithms employing one or more of these sensors with increased accessibility. We provide a complete and comprehensive survey on activity classification with wearable sensors, covering a variety of sensing modalities, including accelerometer, gyroscope, pressure sensors, and camera- and depth-based systems. We discuss differences in activity types tackled by this breadth of sensing modalities. For example, accelerometer, gyroscope, and magnetometer systems have a history of addressing whole body motion or global type activities, whereas camera systems provide the context necessary to classify local interactions, or interactions of individuals with objects. We also found that these single sensing modalities laid the foundation for hybrid works that tackle a mix of global and local interaction-type activities. In addition to the type of sensors and type of activities classified, we provide details on each wearable system that include on-body sensor location, employed learning approach, and extent of experimental setup. We further discuss where the processing is performed, i.e., local versus remote processing, for different systems. This is one of the first surveys to provide such breadth of coverage across different wearable sensor systems for activity classification.

320 citations

Journal ArticleDOI
Xin Ma1, Haibo Wang1, Bingxia Xue1, Mingang Zhou1, Bing Ji1, Yibin Li1 
TL;DR: An automated fall detection approach that requires only a low-cost depth camera is presented, along with a variable-length particle swarm optimization algorithm that optimizes the number of hidden neurons, corresponding input weights, and biases of the ELM.
Abstract: Falls are one of the major causes of injury among elderly people. Using wearable devices for fall detection has a high cost and may cause inconvenience in the daily lives of the elderly. In this paper, we present an automated fall detection approach that requires only a low-cost depth camera. Our approach combines two computer vision techniques: shape-based fall characterization and a learning-based classifier to distinguish falls from other daily actions. Given a fall video clip, we extract curvature scale space (CSS) features of human silhouettes at each frame and represent the action by a bag of CSS words (BoCSS). Then, we utilize the extreme learning machine (ELM) classifier to identify the BoCSS representation of a fall from those of other actions. In order to eliminate the sensitivity of ELM to its hyperparameters, we present a variable-length particle swarm optimization algorithm to optimize the number of hidden neurons, corresponding input weights, and biases of ELM. Using a low-cost Kinect depth camera, we build an action dataset that consists of six types of actions (falling, bending, sitting, squatting, walking, and lying) from ten subjects. Experiments on this dataset show that our approach can achieve up to 91.15% sensitivity, 77.14% specificity, and 86.83% accuracy. On a public dataset, our approach performs comparably to state-of-the-art fall detection methods that need multiple cameras.
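The sketch below shows a basic extreme learning machine classifier of the kind described here, operating on feature vectors such as bag-of-CSS-words histograms; the fixed hidden-layer size and random weights stand in for the paper's variable-length PSO tuning, which is omitted.

```python
# Numpy sketch of an extreme learning machine (ELM) classifier. The hidden
# layer uses fixed random weights; the paper instead tunes the hidden size,
# input weights and biases with a variable-length PSO, which is omitted here.
import numpy as np

class ELMClassifier:
    def __init__(self, n_hidden=200, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        n_classes = int(y.max()) + 1
        T = np.eye(n_classes)[y]                       # one-hot targets
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)               # random hidden layer
        self.beta = np.linalg.pinv(H) @ T              # closed-form output weights
        return self

    def predict(self, X):
        H = np.tanh(X @ self.W + self.b)
        return np.argmax(H @ self.beta, axis=1)
```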

239 citations

Journal ArticleDOI
TL;DR: A three-dimensional convolutional neural network (3-D CNN) based method for fall detection is developed that uses only video kinematic data to train an automatic feature extractor and circumvents the large fall dataset typically required by deep learning solutions.
Abstract: Fall detection is an important public healthcare problem. Timely detection could enable instant delivery of medical service to the injured. A popular nonintrusive solution for fall detection is based on videos obtained through an ambient camera; the corresponding methods usually require a large dataset to train a classifier and tend to be influenced by image quality. However, it is hard to collect fall data, so simulated falls are recorded instead to construct the training dataset, which is restricted in quantity. To address these problems, a three-dimensional convolutional neural network (3-D CNN) based method for fall detection is developed that uses only video kinematic data to train an automatic feature extractor and circumvents the large fall dataset required by deep learning solutions. A 2-D CNN can only encode spatial information, whereas the employed 3-D convolution extracts motion features from the temporal sequence, which is important for fall detection. To further locate the region of interest in each frame, a long short-term memory (LSTM) based spatial visual attention scheme is incorporated. The sports dataset Sports-1M, which contains no fall examples, is employed to train the 3-D CNN, which is then combined with the LSTM to train a classifier on a fall dataset. Experiments have verified the proposed scheme on a fall detection benchmark with accuracy as high as 100%. Superior performance has also been obtained on other activity databases.
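As a rough illustration of the 3-D convolution idea, the PyTorch sketch below classifies short clips with two Conv3d layers; the layer sizes are arbitrary, and the paper's LSTM-based spatial attention branch and Sports-1M pretraining are not reproduced.

```python
# PyTorch sketch of a small 3-D CNN: 3-D convolutions see whole short clips,
# so motion is encoded along the temporal axis. Layer sizes are illustrative;
# the LSTM attention branch from the paper is not shown.
import torch
import torch.nn as nn

class Tiny3DCNN(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),                      # halves frames, height and width
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),              # one value per channel
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, clips):                     # clips: (batch, 3, T, H, W)
        x = self.features(clips).flatten(1)
        return self.classifier(x)

# logits = Tiny3DCNN()(torch.randn(4, 3, 16, 112, 112))  # 4 clips of 16 frames
```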

222 citations

Journal ArticleDOI
TL;DR: In general, older adults appear to be interested in using fall-detection devices although they express concerns over privacy and understanding exactly what the device is doing at specific times.
Abstract: BACKGROUND: Falls represent a significant threat to the health and independence of adults aged 65 years and older. As a wide variety and large number of passive monitoring systems are currently and increasingly available to detect when individuals have fallen, there is a need to analyze and synthesize the evidence regarding their ability to accurately detect falls to determine which systems are most effective. OBJECTIVES: The purpose of this literature review is to systematically assess the current state of design and implementation of fall-detection devices. This review also examines to what extent these devices have been tested in the real world as well as the acceptability of these devices to older adults. DATA SOURCES: A systematic literature review was conducted in PubMed, CINAHL, EMBASE, and PsycINFO from their respective inception dates to June 25, 2013. STUDY ELIGIBILITY CRITERIA AND INTERVENTIONS: Articles were included if they discussed a project or multiple projects involving a system with the purpose of detecting a fall in adults. It was not a requirement for inclusion in this review that the system target persons older than 65 years. Articles were excluded if they were not written in English or if they looked at fall risk, fall detection in children, fall prevention, or a personal emergency response device. STUDY APPRAISAL AND SYNTHESIS METHODS: Studies were initially divided into those using sensitivity, specificity, or accuracy in their evaluation methods and those using other methods to evaluate their devices. Studies were further classified into wearable devices and nonwearable devices. Studies were appraised for inclusion of older adults in the sample and whether evaluation included real-world settings. RESULTS: This review identified 57 projects that used wearable systems and 35 projects using nonwearable systems, regardless of evaluation technique. Nonwearable systems included cameras, motion sensors, microphones, and floor sensors. Of the projects examining wearable systems, only 7.1% reported monitoring older adults in a real-world setting. There were no studies of nonwearable devices that used older adults as subjects in either a laboratory or a real-world setting. In general, older adults appear to be interested in using such devices, although they express concerns over privacy and understanding exactly what the device is doing at specific times. LIMITATIONS: This systematic review was limited to articles written in English and did not include gray literature. Manual paper screening and review processes may have been subject to interpretive bias. CONCLUSIONS AND IMPLICATIONS OF KEY FINDINGS: There exists a large body of work describing various fall-detection devices. The challenge in this area is to create highly accurate unobtrusive devices. From this review it appears that the technology is becoming more able to accomplish such a task. There is a need now for more real-world tests as well as standardization of the evaluation of these devices.

176 citations

Journal ArticleDOI
TL;DR: A distinguished fall accident detection accuracy of up to 92% sensitivity and 99.75% specificity is obtained when a set of 450 test actions across nine different kinds of activities is evaluated by the proposed cascaded classifier, which justifies the superiority of the proposed algorithm.
Abstract: We propose in this paper a novel algorithm and architecture for fall accident detection and a corresponding wide-area rescue system based on a smart phone and third-generation (3G) networks. To realize the fall detection algorithm, the angles acquired by the electronic compass (e-compass) and the waveform sequence of the triaxial accelerometer on the smart phone are used as the system inputs. The acquired signals are used to generate an ordered feature sequence, which is then examined in a sequential manner by the proposed cascaded classifier for recognition. Once the corresponding feature is verified by the classifier at the current state, the system can proceed to the next state; otherwise, it resets to the initial state and waits for the appearance of another feature sequence. Once a fall accident event is detected, the user's position can be acquired by the global positioning system (GPS) or the assisted GPS and sent to the rescue center via the 3G communication network so that the user can receive medical help immediately. With the proposed cascaded classification architecture, the computational burden and power consumption on the smart phone can be alleviated. Moreover, as shown in the experiments, a distinguished fall accident detection accuracy of up to 92% sensitivity and 99.75% specificity can be obtained when a set of 450 test actions across nine different kinds of activities is evaluated by the proposed cascaded classifier, which justifies the superiority of the proposed algorithm.
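The sketch below mimics the sequential, state-machine behavior of such a cascaded classifier; the stage list and thresholds are hypothetical values chosen for illustration, not those of the paper.

```python
# Sketch of a cascaded, state-machine style classifier: each stage checks one
# feature of the ordered sequence (a free-fall dip in acceleration magnitude,
# an impact spike, then a posture change from the compass angles). Stage
# definitions and thresholds are hypothetical, not the paper's.
FALL_STAGES = [
    ("free_fall", lambda f: f["acc_magnitude"] < 0.5),   # in g, near free fall
    ("impact",    lambda f: f["acc_magnitude"] > 2.5),   # sharp spike on landing
    ("lying",     lambda f: abs(f["tilt_deg"]) > 60.0),  # orientation change
]

def detect_fall(feature_sequence):
    """Advance one stage per matching feature; reset to the start on a mismatch."""
    stage = 0
    for features in feature_sequence:
        name, condition = FALL_STAGES[stage]
        if condition(features):
            stage += 1
            if stage == len(FALL_STAGES):
                return True     # all stages verified in order: report a fall
        else:
            stage = 0           # unexpected feature: wait for a new sequence
    return False
```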

166 citations