Proceedings ArticleDOI

Camera Based Driver Distraction System Using Image Processing

TL;DR: The system described in this paper can reduce road accidents caused by driver distraction, one of their main causes. It evaluates whether the driver's mind is active by accurately detecting yawning, performing face and eye detection, and determining, through a sensor, whether the driver has consumed alcohol.
Abstract: Distraction is a lack of attention to the activities necessary for safe driving. Inattention can be either an intended or an unintended diversion of the driver's concentration. Driver distraction can be defined as anything that diverts the attention needed to safely sustain lateral and longitudinal control of the vehicle; an event or a person, inside or outside the vehicle, may force or tend to shift the driver's attention away from the fundamental driving task. Such competing activities reduce driving performance, which results in road traffic crashes. The system defined in this paper addresses this problem and can reduce road accidents caused by driver distraction. It evaluates whether the driver's mind is active by accurately detecting yawning, performing face and eye detection, and determining, through a sensor, whether the driver has consumed alcohol. This is done for the safety of people travelling by car. The Haar cascade algorithm is used for object detection in real time.
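The abstract names the Haar cascade algorithm but does not show its internals. As a minimal sketch of the core idea only (not the paper's implementation), the snippet below computes a two-rectangle Haar-like feature via an integral image, the trick that lets cascade detectors evaluate thousands of such features in real time; the image values are toy data:

```python
import numpy as np

def integral_image(img):
    """Cumulative sum over rows and columns; lets any rectangle sum
    be read off in at most four lookups."""
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, top, left, h, w):
    """Sum of pixels in img[top:top+h, left:left+w] using the integral image."""
    total = ii[top + h - 1, left + w - 1]
    if top > 0:
        total -= ii[top - 1, left + w - 1]
    if left > 0:
        total -= ii[top + h - 1, left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1, left - 1]
    return total

def two_rect_feature(ii, top, left, h, w):
    """Haar-like edge feature: left half minus right half of an (h, 2w) window."""
    return rect_sum(ii, top, left, h, w) - rect_sum(ii, top, left + w, h, w)

# Synthetic "image": bright left half, dark right half -> strong edge response.
img = np.zeros((6, 8))
img[:, :4] = 10.0
ii = integral_image(img)
response = two_rect_feature(ii, 0, 0, 6, 4)  # 6x8 window split into two 6x4 halves
```

A real cascade (e.g. OpenCV's `CascadeClassifier`) chains many such features, each thresholded, so that most image windows are rejected after only a few lookups.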
Citations
Proceedings ArticleDOI
04 May 2020
TL;DR: This paper formulates the task of estimating driver drowsiness from car acceleration sensor signals as a weakly supervised learning problem, and derives a scalable stochastic optimization method to implement the resulting algorithm.
Abstract: This paper addresses the learning task of estimating driver drowsiness from the signals of car acceleration sensors. Since even drivers themselves cannot perceive their own drowsiness in a timely manner unless they use burdensome invasive sensors, obtaining labeled training data for each timestamp is not a realistic goal. To deal with this difficulty, we formulate the task as a weakly supervised learning problem. We only need to add labels for each complete trip, not for every timestamp independently. By assuming that some aspects of driver drowsiness increase over time due to tiredness, we formulate an algorithm that can learn from such weakly labeled data. We derive a scalable stochastic optimization method as a way of implementing the algorithm. Numerical experiments on real driving datasets demonstrate the advantages of our algorithm against baseline methods.
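The trip-level weak labeling described above can be sketched as a multiple-instance-style learner: per-timestamp scores are max-pooled into one trip score, and only the trip label is supervised. Everything below (the variance-like feature, the max-pooling, learning rate, and initialization) is an illustrative assumption, not the authors' algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

def trip_score(w, X):
    """Per-timestamp linear scores, max-pooled to a single trip-level score,
    reflecting the assumption that drowsiness peaks somewhere in a positive trip."""
    return float(np.max(X @ w))

def train(trips, labels, lr=0.1, epochs=300):
    """Stochastic (per-trip) subgradient descent on a logistic loss over the
    max-pooled score; only trip-level labels are needed."""
    w = np.zeros(trips[0].shape[1])
    w[0] = 0.1                            # break symmetry toward the feature
    for _ in range(epochs):
        for X, y in zip(trips, labels):
            i = int(np.argmax(X @ w))     # timestamp attaining the max
            s = X[i] @ w
            p = 1.0 / (1.0 + np.exp(-s))
            w -= lr * (p - y) * X[i]      # gradient flows only through argmax row
    return w

# Synthetic trips: hypothetical smoothed lateral-acceleration feature + bias.
# Positive (drowsy) trips contain a late high-variance segment.
trips, labels = [], []
for t in range(20):
    feat = rng.normal(0.0, 0.3, size=50)
    y = t % 2
    if y == 1:
        feat[40:] += 2.0                  # drowsiness grows toward trip end
    trips.append(np.column_stack([feat, np.ones(50)]))
    labels.append(y)

w = train(trips, labels)
preds = [int(trip_score(w, X) > 0.0) for X in trips]
accuracy = float(np.mean(np.array(preds) == np.array(labels)))
```

The max-pooling is one simple way to encode "drowsy at some point during the trip"; the paper's monotonicity assumption would refine this further.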

5 citations


Cites background from "Camera Based Driver Distraction Sys..."

  • ...Most studies taking the first approach used invasive and/or intrusive equipment to capture their data, including physiological sensors such as EEG and electrooculogram (EOG) [1, 2, 8], image sensors [2, 9], and/or managed simulated experiments [8]....

    [...]

Journal Article
TL;DR: The paper describes the automated detection, at night, of pedestrians and objects that could cause an accident, using an automotive night-vision system and a thermal camera, together with a biometric authentication system for drowsy and alcoholic drivers using MTCC.
Abstract: The paper describes the automated detection, at night, of pedestrians and objects that could cause an accident, from a vehicle, using an automotive night-vision system and a thermal camera. According to accident surveys in India, most accidents are caused by drivers' low vision, which leads to a far more dangerous and higher number of accidents at night than during the day. To avoid accidents at night, an automotive night-vision system is employed. This system includes an IR camera that detects objects with the help of an IR diode and photodiode pair; the camera can detect objects up to 100 m away. Besides the hardware design, a software part is provided for the automated detection of pedestrians in case of a driver distracted by alcohol or yawning. The software for object detection and classification uses modern digital signal processing algorithms such as connected component labeling (CCL), histogram of oriented gradients (HOG), and support vector machine (SVM). Moreover, besides the presented night-vision system, our system incorporates a biometric authentication system for drowsy and alcoholic drivers using MTCC, as well as an RGB filter algorithm to detect the red or green signal lights of vehicles and traffic signals. For the limited field of view, it uses CMOS image sensing.
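Of the algorithms this abstract lists, connected component labeling is the simplest to illustrate. A minimal 4-connectivity CCL over a thresholded binary mask might look like the following (toy data, not the paper's pipeline):

```python
from collections import deque

import numpy as np

def label_components(mask):
    """4-connectivity connected-component labeling of a binary mask,
    as used to group foreground pixels into object candidates.
    Returns a label image and the number of components found."""
    h, w = mask.shape
    labels = np.zeros((h, w), dtype=int)
    current = 0
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] and labels[sy, sx] == 0:
                current += 1                      # start a new component
                labels[sy, sx] = current
                queue = deque([(sy, sx)])
                while queue:                      # BFS flood fill
                    y, x = queue.popleft()
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and mask[ny, nx] and labels[ny, nx] == 0:
                            labels[ny, nx] = current
                            queue.append((ny, nx))
    return labels, current

# Two separate blobs in a thresholded "thermal" frame.
mask = np.array([[1, 1, 0, 0],
                 [1, 0, 0, 1],
                 [0, 0, 1, 1]], dtype=bool)
labels, n = label_components(mask)
```

In a pipeline like the one described, each labeled blob would then be cropped and passed to the HOG+SVM classifier to decide whether it is a pedestrian.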
References
01 Jan 2015

1,187 citations


"Camera Based Driver Distraction Sys..." refers methods in this paper

  • ...Shewata Maralappanavar et al. [2] used the Viola–Jones algorithm [4] for face detection, after which the eye regions are identified....

    [...]

Journal ArticleDOI
TL;DR: Challenges in achieving effective modeling, detection, and assessment of driver distraction using both UTDrive instrumented vehicle data and naturalistic driving data are highlighted.
Abstract: Vehicle technologies have advanced significantly over the past 20 years, especially with respect to novel in-vehicle systems for route navigation, information access, infotainment, and connected vehicle advancements for vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) connectivity and communications. While there is great interest in migrating to fully automated, self-driving vehicles, factors such as technology performance, cost barriers, public safety, insurance issues, legal implications, and government regulations suggest it is more likely that the first step in the progression will be multifunctional vehicles. Today, embedded controllers as well as a variety of sensors and high-performance computing in present-day cars allow for a smooth transition from complete human control toward semisupervised or assisted control, then to fully automated vehicles. Next-generation vehicles will need to be more active in assessing driver awareness, vehicle capabilities, and traffic and environmental settings, plus how these factors come together to determine a collaborative safe and effective driver-vehicle engagement for vehicle operation. This article reviews a range of issues pertaining to driver modeling for the detection and assessment of distraction. Examples from the UTDrive project are used whenever possible, along with a comparison to existing research programs. The areas addressed include 1) understanding driver behavior and distraction, 2) maneuver recognition and distraction analysis, 3) glance behavior and visual tracking, and 4) mobile platform advancements for in-vehicle data collection and human-machine interface. This article highlights challenges in achieving effective modeling, detection, and assessment of driver distraction using both UTDrive instrumented vehicle data and naturalistic driving data.

52 citations


"Camera Based Driver Distraction Sys..." refers background in this paper

  • ...Machine learning classification algorithms [1] such as hidden Markov models have proven valuable for predicting driver actions....

    [...]

Journal ArticleDOI
TL;DR: Results suggest that the probability of yellow-light running increases with driving speed at the onset of yellow, and that both young and middle-aged drivers show a reduced propensity for yellow-light running while distracted across the entire speed range, exhibiting possible risk compensation during this critical driving situation.

48 citations

Journal ArticleDOI
TL;DR: This work proposes an efficient method for estimating the eye gaze point: the eye region is located by modifying the characteristics of the Active Appearance Model, and a Support Vector Machine classifies the gaze into five directions.
Abstract: In recent years, research on human-computer interaction has become popular, much of it using body movements, gestures, or eye gaze direction. Gaze estimation remains an active research domain. We propose an efficient method to solve the eye gaze point problem. We first locate the eye region by modifying the characteristics of the Active Appearance Model (AAM). Then, employing the Support Vector Machine (SVM), we estimate five gazing directions through classification. The original 68 facial feature points in AAM are reduced to 36 eye feature points. From the two-dimensional coordinates of these feature points, we classify the different directions of eye gaze. The 36 modified feature points describe the contour of the eyes, iris size, iris location, and the position of the pupils. In addition, camera resolution does not affect our method's ability to determine the direction of the line of sight accurately. The final results show independent classifications, fewer classification errors, and more accurate estimation of the gazing directions.
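The AAM feature extraction above is involved, but the SVM classification stage can be sketched in isolation. The snippet below trains a minimal linear SVM via the Pegasos subgradient method, on a hypothetical pupil-offset feature and for binary left/right gaze rather than the paper's five directions; feature and data are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def pegasos(X, y, lam=0.01, epochs=100):
    """Minimal linear SVM trained with the Pegasos subgradient method.
    Labels y must be in {-1, +1}."""
    n, d = X.shape
    w = np.zeros(d)
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)
            if y[i] * (X[i] @ w) < 1.0:            # margin violated: step + shrink
                w = (1.0 - eta * lam) * w + eta * y[i] * X[i]
            else:                                   # margin satisfied: shrink only
                w = (1.0 - eta * lam) * w
    return w

# Hypothetical feature: mean horizontal pupil offset inside the eye contour,
# plus a bias column.  Looking left shifts it negative, right positive.
n = 40
offsets = np.concatenate([rng.normal(-0.3, 0.05, n), rng.normal(0.3, 0.05, n)])
X = np.column_stack([offsets, np.ones(2 * n)])
y = np.concatenate([-np.ones(n), np.ones(n)])

w = pegasos(X, y)
preds = np.sign(X @ w)
accuracy = float(np.mean(preds == y))
```

A five-direction classifier, as in the paper, would typically train one such hyperplane per direction (one-vs-rest) over all 36 landmark coordinates.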

47 citations

01 Jan 2013
TL;DR: In this article, a new method for head posture and gaze direction estimation is proposed, in which three models of head position are established and postures are judged from the attributes of the triangle formed by the eyes and mouth.
Abstract: A new method for head posture and gaze direction estimation is proposed. First, three models of head position are established, and postures are judged from the attributes of the triangle formed by the eyes and mouth. The pupil is then located using the Hough transform in the eye area. Using horizontal and vertical projection together with prior knowledge of the eye, the normal eye outline is fitted. Finally, gaze direction is estimated from the position of the pupil relative to the normal eye state and the head posture. The experimental results demonstrate that the proposed method can accurately detect head posture and gaze direction. Because it takes head posture into account, the method is more accurate in gaze estimation.
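The Hough-transform pupil localization step can be sketched as fixed-radius circle voting: every edge point votes for all centers lying one radius away, and the accumulator peak is the pupil center. The synthetic rim points and known radius below are illustrative assumptions:

```python
import numpy as np

def hough_circle_center(edge_points, radius, shape):
    """Fixed-radius circular Hough transform: each edge point casts votes
    for candidate centers at distance `radius`; the accumulator maximum
    is the most supported center."""
    acc = np.zeros(shape, dtype=int)
    thetas = np.linspace(0.0, 2.0 * np.pi, 180, endpoint=False)
    for y, x in edge_points:
        cy = np.round(y - radius * np.sin(thetas)).astype(int)
        cx = np.round(x - radius * np.cos(thetas)).astype(int)
        ok = (cy >= 0) & (cy < shape[0]) & (cx >= 0) & (cx < shape[1])
        np.add.at(acc, (cy[ok], cx[ok]), 1)       # accumulate votes
    return np.unravel_index(np.argmax(acc), shape)

# Synthetic pupil rim: 60 edge points on a circle of radius 5 centered at (20, 30).
angles = np.linspace(0.0, 2.0 * np.pi, 60, endpoint=False)
edges = [(20 + 5 * np.sin(a), 30 + 5 * np.cos(a)) for a in angles]
center = hough_circle_center(edges, 5, (40, 60))
```

In practice the radius is unknown, so implementations such as OpenCV's `HoughCircles` add a third accumulator dimension (or a gradient-based shortcut) to search over radii as well.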

16 citations


"Camera Based Driver Distraction Sys..." refers background in this paper

  • ...In horizontal integral projection [3], peaks are the vertical coordinates of the eyelids, while troughs are the vertical coordinates that give the inner and outer corners of the eye....

    [...]
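Horizontal integral projection itself is just a row-wise sum of the grayscale patch. On a toy eye patch (illustrative values, not the paper's data), dark structures show up as troughs of the projection:

```python
import numpy as np

def horizontal_projection(gray):
    """Sum each row of a grayscale patch.  Dark horizontal structures
    (iris, lash line) appear as troughs; bright sclera/skin as peaks."""
    return gray.sum(axis=1)

# Synthetic eye patch: bright rows with one dark band at rows 3-4.
patch = np.full((8, 10), 200.0)
patch[3:5, :] = 40.0

proj = horizontal_projection(patch)
dark_row = int(np.argmin(proj))   # vertical coordinate of the dark band
```

The vertical (column-wise) projection is computed the same way with `axis=0`, and together the two projections bound the eye region as described in the excerpt.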