
Showing papers by Patrick Lucey published in 2009


Proceedings ArticleDOI
13 Nov 2009
TL;DR: The Automated Facial Expression Recognition System (AFERS) automates the manual practice of FACS, leveraging the research and technology behind the CMU/PITT Automated Facial Image Analysis (AFA) system, and detects the seven universal expressions of emotion.
Abstract: Heightened concerns about the treatment of individuals during interviews and interrogations have stimulated efforts to develop “non-intrusive” technologies for rapidly assessing the credibility of statements by individuals in a variety of sensitive environments. Methods or processes that can precisely focus investigative resources will advance operational excellence and improve investigative capabilities. Facial expressions have the ability to communicate emotion and regulate interpersonal behavior. Over the past 30 years, scientists have developed human-observer-based methods that can be used to classify and correlate facial expressions with human emotion. However, these methods have proven to be labor intensive, qualitative, and difficult to standardize. The Facial Action Coding System (FACS) developed by Paul Ekman and Wallace V. Friesen is the most widely used and validated method for measuring and describing facial behaviors. The Automated Facial Expression Recognition System (AFERS) automates the manual practice of FACS, leveraging the research and technology behind the CMU/PITT Automated Facial Image Analysis (AFA) system developed by Dr. Jeffrey Cohn and his colleagues at the Robotics Institute of Carnegie Mellon University. This portable, near real-time system will detect the seven universal expressions of emotion (figure 1), providing investigators with indicators of the presence of deception during the interview process. In addition, the system will include features such as full video support, snapshot generation, and case management utilities, enabling users to re-evaluate interviews in detail at a later date.

98 citations
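As a rough illustration of the frame-level, seven-way classification AFERS performs, the sketch below trains a classifier over per-frame facial features and labels each frame with one of the seven universal expressions. The feature dimensions, synthetic data, and SVM choice are placeholder assumptions, not the AFERS pipeline itself:

```python
# Minimal sketch: frame-level classification into the seven universal
# expressions. Features and labels are synthetic stand-ins; the SVM is
# an assumption, not the AFERS implementation.
import numpy as np
from sklearn.svm import SVC

EMOTIONS = ["anger", "contempt", "disgust", "fear",
            "happiness", "sadness", "surprise"]

rng = np.random.default_rng(0)
X_train = rng.normal(size=(700, 64))             # stand-in facial features
y_train = rng.integers(0, len(EMOTIONS), 700)    # synthetic labels

clf = SVC(probability=True).fit(X_train, y_train)

frame_features = rng.normal(size=(1, 64))        # features for one video frame
probs = clf.predict_proba(frame_features)[0]     # per-expression probabilities
print(EMOTIONS[int(np.argmax(probs))])
```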


Proceedings ArticleDOI
08 Dec 2009
TL;DR: Using image data from patients with rotator-cuff injuries, this paper describes an AAM-based automatic system which can detect pain on a frame-by-frame level through the fusion of individual AU detectors.
Abstract: Pain is generally measured by patient self-report, normally via verbal communication. However, if the patient is a child or has limited ability to communicate (i.e. patients who are mute, mentally impaired, or on assisted breathing), self-report may not be a viable measurement. In addition, these self-report measures relate only to the maximum pain level experienced during a sequence, so a frame-by-frame measure is currently not obtainable. In this paper, using image data from patients with rotator-cuff injuries, we describe an AAM-based automatic system which can detect pain on a frame-by-frame level. We do this in two ways: directly (straight from the facial features); and indirectly (through the fusion of individual AU detectors). Our results show that the latter method achieves the best performance, as the most discriminant features from each AU detector (i.e. shape or appearance) are used.

98 citations
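The “indirect” route described above, fusing the outputs of individual AU detectors into a frame-level pain decision, can be sketched as a two-stage classifier. Everything below is a synthetic stand-in for the paper's detectors and data; the detector count and the logistic-regression fusion stage are assumptions for illustration:

```python
# Sketch of per-frame AU-score fusion for pain detection. AU detector
# outputs are simulated; a logistic regression stands in for the
# second-stage fusion classifier (an assumption, not the paper's model).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_frames, n_aus = 1000, 8                        # assumed: 8 AU detectors
au_scores = rng.normal(size=(n_frames, n_aus))   # stand-in detector outputs
# Synthetic ground truth: pain driven by a few AUs, for demonstration only.
pain = (au_scores[:, :3].sum(axis=1) > 0.5).astype(int)

fusion = LogisticRegression().fit(au_scores, pain)
frame_pain_prob = fusion.predict_proba(au_scores)[:, 1]  # per-frame pain score
```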


Proceedings ArticleDOI
20 Jun 2009
TL;DR: Using image data from patients with rotator-cuff injuries, this work describes an AAM-based automatic system that decouples shape and appearance to detect AUs on a frame-by-frame basis and finds specific relationships between action units and types of facial features.
Abstract: Recent psychological research suggests that facial movements are a reliable measure of pain. Automatic detection of facial movements associated with pain would contribute to patient care but is technically challenging. Facial movements may be subtle and accompanied by abrupt changes in head orientation. Active appearance models (AAM) have proven robust to naturally occurring facial behavior, yet AAM-based efforts to automatically detect action units (AUs) are few. Using image data from patients with rotator-cuff injuries, we describe an AAM-based automatic system that decouples shape and appearance to detect AUs on a frame-by-frame basis. Most current approaches to AU detection use only appearance features. We explored the relative efficacy of shape and appearance for AU detection. Consistent with the experience of human observers, we found specific relationships between action units and types of facial features. Several AUs (e.g. AU4, 12, and 43) were more discriminable by shape than by appearance, whilst the opposite pattern was found for others (e.g. AU6, 7, and 10). AU-specific feature sets may yield optimal results.

30 citations
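The shape-versus-appearance comparison described above can be sketched as training the same per-AU classifier on each decoupled feature set and comparing accuracies. The feature dimensions, synthetic labels, and linear SVM are illustrative assumptions, not the paper's exact setup:

```python
# Sketch: compare AAM shape features against AAM appearance features for
# one AU detector. Data is synthetic; a linear SVM stands in for the
# paper's classifier (an assumption).
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 500
shape = rng.normal(size=(n, 136))        # e.g. 68 landmarks x (x, y)
appearance = rng.normal(size=(n, 400))   # e.g. warped grayscale patch
labels = rng.integers(0, 2, size=n)      # AU present / absent (synthetic)

for name, feats in [("shape", shape), ("appearance", appearance)]:
    acc = cross_val_score(LinearSVC(), feats, labels, cv=5).mean()
    print(f"AU detection via {name} features: {acc:.2f}")
```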


01 Jan 2009
TL;DR: In this article, the Viola-Jones algorithm is used to locate and track the driver’s lips in real time despite visual variability in illumination and head pose.
Abstract: Acoustically, vehicles are extremely noisy environments, and as a consequence audio-only in-car voice recognition systems perform very poorly. Since the visual modality is immune to acoustic noise, using the visual lip information from the driver is a viable strategy for circumventing this problem. However, implementing such an approach requires a system that can accurately locate and track the driver’s face and facial features in real time. In this paper we present such an approach using the Viola-Jones algorithm. Our results show that the Viola-Jones approach is a suitable method for locating and tracking the driver’s lips despite the visual variability of illumination and head pose.
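A minimal sketch of Viola-Jones-style face detection with a heuristic lip region follows, using OpenCV's cascade classifier, which is a standard implementation of the Viola-Jones algorithm. The camera source, cascade file, and lower-third mouth heuristic are assumptions for illustration, not the authors' system:

```python
# Sketch: Viola-Jones face detection via OpenCV, with the lip region
# approximated as the lower third of the detected face box (a heuristic
# assumption, not the paper's tracker).
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)  # assumed: a dash-mounted camera feed
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1,
                                          minNeighbors=5)
    for (x, y, w, h) in faces:
        # Draw the face box, then the heuristic lip region below it.
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        ly = y + 2 * h // 3
        cv2.rectangle(frame, (x, ly), (x + w, y + h), (0, 0, 255), 2)
    cv2.imshow("lip tracking sketch", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```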