Author

Aditi Jahagirdar

Bio: Aditi Jahagirdar is an academic researcher from Massachusetts Institute of Technology. The author has contributed to research in topics: Haar-like features & Artificial intelligence. The author has an h-index of 3 and has co-authored 11 publications receiving 47 citations.

Papers
Proceedings Article
03 Oct 2012
TL;DR: In this paper, an efficient method for skin color segmentation on color photos is implemented and can be used as a preprocessing step to find regions that potentially have human faces and limbs in images.
Abstract: Skin detection is the process of finding skin-colored pixels and regions in an image or a video. This process is typically used as a preprocessing step to find regions that potentially contain human faces and limbs. Several computer vision approaches have been developed for skin detection. Skin detectors typically transform a given pixel into an appropriate color space and then use a skin classifier to label the pixel as skin or non-skin. In this paper, an efficient method for skin color segmentation of color photos is implemented.

24 citations
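The color-space-plus-classifier pipeline described in the abstract can be illustrated with a minimal NumPy sketch. The BT.601 YCbCr conversion and the Cb/Cr box thresholds below are common textbook values, not necessarily the ones used in the paper:

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Convert an (..., 3) RGB array to YCbCr (ITU-R BT.601, full range)."""
    rgb = rgb.astype(np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)

def skin_mask(rgb):
    """Label each pixel skin/non-skin with a simple Cb/Cr box classifier."""
    ycbcr = rgb_to_ycbcr(rgb)
    cb, cr = ycbcr[..., 1], ycbcr[..., 2]
    return (cb >= 77) & (cb <= 127) & (cr >= 133) & (cr <= 173)

image = np.zeros((2, 2, 3), dtype=np.uint8)
image[0, 0] = (200, 150, 120)   # a typical skin tone
image[1, 1] = (0, 255, 0)       # saturated green, clearly non-skin
mask = skin_mask(image)
```

The box classifier is the cheapest possible skin model; the paper's "efficient method" may use a more refined decision rule in the same chroma plane.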

Proceedings ArticleDOI
03 Dec 2020
TL;DR: In this article, the authors analyzed the symptoms of lung cancer in different age groups and used various machine learning techniques such as KNN, SVM, Decision Trees and Random Forest to predict the presence of cancer from the symptoms shown by patients.
Abstract: The integration of machine learning techniques in healthcare can be of huge benefit, aimed at curing the illnesses of millions of people. A lot of effort has been made by researchers to detect cancer and provide early-stage insights into its diagnosis. In the machine learning research community, various algorithms such as KNN, SVM, Decision Trees and Random Forest have been applied to predict the presence of cancer from the symptoms shown by patients. This paper aims to analyze the symptoms of three age groups: Youth, Working Class and Elderly. Tree-based algorithms like Decision Trees, Random Forest and XGBoost have been used to identify the underlying data patterns in order to calculate relative feature importances. It has been concluded that Coughing of Blood, Clubbing of Finger Nails, Genetic Risk, Passive Smoking and Snoring are the factors responsible for lung cancer across all age groups in most cases.

8 citations
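The relative-feature-importance analysis described above can be sketched with scikit-learn's tree ensembles. The synthetic data and the informative column are illustrative stand-ins, not the paper's dataset:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 200
# Hypothetical symptom matrix: column 0 (think "Coughing of Blood") drives
# the label, the remaining columns are pure noise.
y = rng.integers(0, 2, size=n)
X = rng.random((n, 5))
X[:, 0] = y + 0.1 * rng.random(n)   # strongly informative feature

forest = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
importances = forest.feature_importances_   # normalized, sums to 1
top_feature = int(np.argmax(importances))
```

Ranking `feature_importances_` across models fitted per age group is one straightforward way to reach per-group conclusions like those the abstract reports.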

Proceedings ArticleDOI
31 Dec 2012
TL;DR: A novel face detection approach that adopts a 'reference white' for light compensation, detects candidate face regions using a YCbCr skin color model, and applies the well-known opening and closing morphological operations to exclude the impact of noise on color segmentation.
Abstract: In this paper we propose a novel face detection approach. In general, lighting in images can have a negative impact on the performance of face detection systems. Many current face detection systems are reliable only under controlled conditions, such as indoors under stable lighting. There is therefore a need to devise a solution to the illumination problems faced by face detection systems. The proposed approach first adopts a 'reference white' to realize light compensation; it then detects candidate face regions using a YCbCr skin color model. Third, the well-known opening and closing morphological operations are used to exclude the impact of noise on color segmentation. Finally, an improved AdaBoost algorithm is used for accurate face detection. The system is implemented in Java and verified on a self-designed photo library; it is particularly suitable for illumination variations and complex backgrounds.

8 citations
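The noise-removal step via opening and closing can be sketched with SciPy's binary morphology. The mask below is synthetic; in the paper these operations are applied to the skin-color segmentation output:

```python
import numpy as np
from scipy.ndimage import binary_opening, binary_closing

# Synthetic skin mask: one solid 3x3 "face-like" blob plus an isolated
# noise pixel of the kind color segmentation tends to produce.
mask = np.zeros((10, 10), dtype=bool)
mask[2:5, 2:5] = True    # genuine skin region
mask[8, 8] = True        # speckle noise

# Opening (erosion then dilation) removes specks smaller than the
# structuring element; closing (dilation then erosion) fills small holes.
cleaned = binary_closing(binary_opening(mask))
```

With the default cross-shaped structuring element, the lone pixel disappears while the blob survives, which is exactly the effect the abstract relies on before handing candidates to the detector.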

Book ChapterDOI
01 Jan 2018
TL;DR: The method integrates HOG and PCA features into a single descriptor that is used to train a KNN classifier; results are comparable with existing methods.
Abstract: Human action recognition has become a vital aspect of video analytics. This study explores methods for the classification of human actions by extracting the object silhouette and then applying feature extraction. The method integrates HOG and PCA features into a single descriptor that is used to train a KNN classifier. HOG gives local shape-oriented variations of the object, while PCA gives global information about frequently moving parts of the human body. Experiments conducted on the Weizmann and KTH datasets show results comparable with existing methods.

5 citations
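The descriptor-fusion idea above (local HOG features concatenated with a PCA-derived global component, feeding a KNN classifier) can be sketched as follows; random vectors stand in for real HOG descriptors extracted from silhouettes:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(42)
n_samples, hog_dim = 60, 32
hog_features = rng.random((n_samples, hog_dim))   # stand-in for per-frame HOG vectors
labels = rng.integers(0, 3, size=n_samples)       # e.g. walk / run / wave

# Global component: project the HOG vectors onto their top principal directions.
pca = PCA(n_components=8, random_state=0)
pca_features = pca.fit_transform(hog_features)

# Fused descriptor = [local HOG | global PCA], then a KNN classifier.
descriptor = np.hstack([hog_features, pca_features])
knn = KNeighborsClassifier(n_neighbors=1).fit(descriptor, labels)
train_acc = knn.score(descriptor, labels)
```

In the actual method the PCA component would come from the motion of the silhouette over frames rather than from the HOG vectors themselves; the concatenate-then-classify structure is the point of the sketch.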

Journal ArticleDOI
TL;DR: This work proposes a systematic approach to detect faces using cascade classifiers, recognize the hand gesture and annotate the video using an affordable webcam, and shows that the system works under varying background lighting and illumination conditions.
Abstract: In today’s expeditiously changing world, physical classrooms are quickly being replaced by virtual classrooms. In such an environment, it is important to capture the raising-of-hand gesture of the participants and convey it to the lecturer. Gesture recognition is one of the optimal techniques to capture such an event. Existing systems make the user wear colored gloves, which is not a user-friendly approach; some require costly depth-sensing cameras. We propose a systematic approach to detect the face using cascade classifiers, recognize the hand gesture and annotate the video. This is made possible using a webcam available at an affordable price. The results obtained show that the system can work under varying degrees of background lighting conditions as well as illumination. The location and orientation of the hand relative to the detected face help us achieve greater efficiency. Keywords: Raising-Hand, Colored Gloves, User-Friendly, Cascade Classifiers, Hand-Gesture, Background Lighting, Illumination.

5 citations
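The final decision step — using the hand's location relative to the detected face — can be sketched with simple bounding-box geometry. The boxes and the "above the face top" rule here are illustrative assumptions, not the paper's exact criterion:

```python
def is_hand_raised(face_box, hand_centroid, margin=0):
    """face_box = (x, y, w, h) in image coordinates (y grows downward).
    Returns True when the hand centroid lies above the top of the face box."""
    x, y, w, h = face_box
    hx, hy = hand_centroid
    return hy < y - margin

face = (100, 120, 80, 80)                    # hypothetical cascade-detector output
raised = is_hand_raised(face, (60, 50))      # hand above the face
not_raised = is_hand_raised(face, (60, 200)) # hand below the face
```

In the real system the face box would come from the cascade classifier and the hand centroid from the gesture-recognition stage; only the geometric test is shown here.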


Cited by
01 Jan 2018
TL;DR: In this paper, the authors survey major constraints on vision-based gesture recognition occurring in detection and pre-processing, representation and feature extraction, and recognition, and explore the current challenges in detail.
Abstract: The ability of computers to recognise hand gestures visually is essential for progress in human-computer interaction. Gesture recognition has applications ranging from sign language to medical assistance to virtual reality. However, gesture recognition is extremely challenging not only because of its diverse contexts, multiple interpretations, and spatio-temporal variations but also because of the complex non-rigid properties of the hand. This study surveys major constraints on vision-based gesture recognition occurring in detection and pre-processing, representation and feature extraction, and recognition. Current challenges are explored in detail.

63 citations

Proceedings ArticleDOI
23 May 2014
TL;DR: A robust system is proposed in which facial expressions, head tilting and lane departure are detected collectively to identify driver fatigue; the results show good accuracy and reliable performance for avoiding road accidents.
Abstract: Driver fatigue causes serious damage among all other causes of road accidents; around 20% of fatal road accidents involve driver fatigue. This paper describes a modern approach that detects driver fatigue by considering most of the fatigue symptoms. Eye closure, yawning and head tilting are the major symptoms of fatigue behavior; inattentive vehicle movement on the road under fatigue conditions is also accountable. The goal of this paper is to detect these symptoms for better driving conditions on the road. The symptoms are monitored by two cameras. This paper proposes a robust system in which facial expressions, head tilting and lane departure are detected collectively. Experimental results of the proposed method are compared with a previous method; the results show good accuracy and reliable performance for avoiding road accidents. The proposed system is very simple and avoids any complexity.

24 citations
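Eye closure, one of the fatigue symptoms listed above, is often quantified with an eye aspect ratio (EAR) over six eye landmarks. This is a common metric and a sketch of the idea, not necessarily the paper's exact measure:

```python
import numpy as np

def eye_aspect_ratio(landmarks):
    """landmarks: six (x, y) eye points p1..p6, with p1/p4 the horizontal corners.
    EAR = (|p2-p6| + |p3-p5|) / (2 |p1-p4|); it drops toward 0 as the eye closes."""
    p = np.asarray(landmarks, dtype=float)
    v1 = np.linalg.norm(p[1] - p[5])   # first vertical distance
    v2 = np.linalg.norm(p[2] - p[4])   # second vertical distance
    h = np.linalg.norm(p[0] - p[3])    # horizontal eye width
    return (v1 + v2) / (2.0 * h)

open_eye   = [(0, 0), (1, 1), (3, 1), (4, 0), (3, -1), (1, -1)]
closed_eye = [(0, 0), (1, 0.1), (3, 0.1), (4, 0), (3, -0.1), (1, -0.1)]
ear_open, ear_closed = eye_aspect_ratio(open_eye), eye_aspect_ratio(closed_eye)
```

A fatigue monitor would track this ratio per frame and flag eye closure when it stays below a calibrated threshold for several consecutive frames.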

Journal ArticleDOI
TL;DR: Experimental results show that the proposed skin detection method can more accurately segment out skin-coloured regions and minimizes the overall detection error.

23 citations

Journal ArticleDOI
TL;DR: In this paper, an efficient driver drowsiness detection system is designed using yawning detection together with eye and mouth detection, so that road accidents can be successfully avoided.
Abstract: Driver fatigue is the main reason for fatal road accidents around the world. In this paper, an efficient driver drowsiness detection system is designed using yawning detection. Here, we consider eye detection and mouth detection so that road accidents can be successfully avoided. Mouth feature points are identified using the redness property. First, the driver's face is detected using the YCbCr method; face tracking is then performed using a Canny edge detector. After that, the eye and mouth positions are located using Haar features. Lastly, yawning detection is performed using geometric features of the mouth. The method is tested on images from videos. The proposed system should then alert the driver in case of inattention.

17 citations
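The yawning decision from geometric mouth features can be sketched as a mouth-opening ratio over lip points such as those located via the redness property. The four-point ratio and the threshold below are illustrative assumptions, not the paper's exact rule:

```python
import numpy as np

def mouth_opening_ratio(left, right, top, bottom):
    """Ratio of vertical mouth opening to mouth width from four lip points."""
    left, right = np.asarray(left, float), np.asarray(right, float)
    top, bottom = np.asarray(top, float), np.asarray(bottom, float)
    return np.linalg.norm(top - bottom) / np.linalg.norm(right - left)

def is_yawning(left, right, top, bottom, threshold=0.6):
    """Flag a yawn when the mouth opens past `threshold` of its width."""
    return mouth_opening_ratio(left, right, top, bottom) > threshold

closed = is_yawning((0, 0), (6, 0), (3, 1), (3, -1))    # opening/width = 2/6
yawn   = is_yawning((0, 0), (6, 0), (3, 3), (3, -2))    # opening/width = 5/6
```

As with eye closure, a practical system would require the ratio to stay high for a sustained run of frames before alerting, to avoid triggering on speech.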

Journal ArticleDOI
01 Jan 2021
TL;DR: In this article, a detailed overview of the primary methodologies in vision-based hand gesture recognition for human-computer interaction (HCI) is presented, including different types of gestures, gesture acquisition systems, major problems of gesture recognition systems, and steps in gesture recognition such as acquisition, detection and pre-processing, representation and feature extraction, and recognition.
Abstract: Hand gesture recognition is viewed as a significant field of exploration in computer vision, with assorted applications in the human-computer interaction (HCI) community. Significant uses of gesture recognition cover domains like sign language, medical assistance and virtual/augmented reality. The initial task of a hand gesture-based HCI framework is to acquire raw data, which can be accomplished by two methodologies: sensor based and vision based. The sensor-based methodology requires instruments or sensors to be physically attached to the arm or hand of the user to extract information, while vision-based schemes acquire images or recordings of the hand gestures through a still or video camera. Here, we primarily discuss vision-based hand gesture recognition, with a brief introduction to sensor-based data acquisition strategies. This paper surveys the primary methodologies in vision-based hand gesture recognition for HCI. Major topics include different types of gestures, gesture acquisition systems, major problems of gesture recognition systems, and the steps in gesture recognition: acquisition, detection and pre-processing, representation and feature extraction, and recognition. We provide an elaborated list of databases and discuss the recent advances and applications of hand gesture-based systems. A detailed discussion is given on feature extraction and the major classifiers in current use, including deep learning techniques. Special attention is paid to classifying the schemes and approaches at the various stages of the gesture recognition system, for a better understanding of the topic and to facilitate further research in this area.

16 citations