Proceedings ArticleDOI

Parallel hidden Markov models for American sign language recognition

TLDR
A novel approach to ASL recognition that aims to solve its scalability problems, based on parallel HMMs (PaHMMs), which model the parallel processes independently, can therefore be trained independently, and do not require consideration of the different phoneme combinations at training time.
Abstract
The major challenge facing American Sign Language (ASL) recognition now is to develop methods that will scale well with increasing vocabulary size. Unlike in spoken languages, phonemes can occur simultaneously in ASL. The number of possible combinations of phonemes after enforcing linguistic constraints is approximately 5.5 × 10^8. Gesture recognition, which is less constrained than ASL recognition, suffers from the same problem. Thus, it is not feasible to train conventional hidden Markov models (HMMs) for large-scale ASL applications. Factorial HMMs and coupled HMMs are two extensions to HMMs that explicitly attempt to model several processes occurring in parallel. Unfortunately, they still require consideration of the combinations at training time. In this paper we present a novel approach to ASL recognition that aspires to be a solution to the scalability problems. It is based on parallel HMMs (PaHMMs), which model the parallel processes independently. Thus, they can also be trained independently, and do not require consideration of the different combinations at training time. We develop the recognition algorithm for PaHMMs and show that it runs in time polynomial in the number of states, and in time linear in the number of parallel processes. We run several experiments with a 22-sign vocabulary and demonstrate that PaHMMs can improve the robustness of HMM-based recognition even on a small scale. Thus, PaHMMs are a very promising general recognition scheme with applications in both gesture and ASL recognition.
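The independence assumption at the heart of PaHMMs is easy to state in code. The sketch below is a minimal illustration, not the authors' implementation: it scores an isolated sign by running an ordinary Viterbi pass over each parallel channel (e.g., left and right hand) with that channel's own HMM and summing the per-channel log-probabilities, which is exactly what modeling the channels as independent licenses. All parameters and the two-sign vocabulary are made-up placeholders.

import numpy as np

def viterbi_log_likelihood(obs, log_pi, log_A, log_B):
    # Best-path (Viterbi) log-probability of a discrete observation
    # sequence under an HMM with initial distribution pi, transition
    # matrix A, and emission matrix B, all given in log space.
    delta = log_pi + log_B[:, obs[0]]
    for o in obs[1:]:
        delta = np.max(delta[:, None] + log_A, axis=0) + log_B[:, o]
    return np.max(delta)

def pahmm_score(channels, channel_models):
    # Because PaHMMs model the parallel processes independently, the
    # joint log-probability of a sign is simply the sum of the
    # per-channel scores; no combined state space is ever built.
    return sum(viterbi_log_likelihood(obs, *hmm)
               for obs, hmm in zip(channels, channel_models))

def recognize(channels, vocabulary):
    # Pick the sign whose PaHMM assigns the highest combined score.
    return max(vocabulary, key=lambda s: pahmm_score(channels, vocabulary[s]))

# Toy demo with random parameters: two channels, two hypothetical signs.
rng = np.random.default_rng(0)

def random_log_hmm(n_states=3, n_symbols=4):
    pi = rng.dirichlet(np.ones(n_states))
    A = rng.dirichlet(np.ones(n_states), size=n_states)
    B = rng.dirichlet(np.ones(n_symbols), size=n_states)
    return np.log(pi), np.log(A), np.log(B)

vocabulary = {"SIGN_A": [random_log_hmm(), random_log_hmm()],
              "SIGN_B": [random_log_hmm(), random_log_hmm()]}
channels = [rng.integers(0, 4, size=10), rng.integers(0, 4, size=10)]
print(recognize(channels, vocabulary))

Note how the cost of scoring a sign grows linearly in the number of channels and polynomially in the number of per-channel states, mirroring the complexity result stated above; the paper's actual recognition algorithm is more involved, and this sketch only shows the core independence idea.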


Citations
Journal ArticleDOI

Hand and Mind: What Gestures Reveal about Thought

TL;DR: McNeill discusses what gestures reveal about thought. Chicago and London: University of Chicago Press, 1992. 416 pp.
Journal ArticleDOI

A Unified Framework for Gesture Recognition and Spatiotemporal Gesture Segmentation

TL;DR: A unified framework for simultaneously performing spatial segmentation, temporal segmentation, and recognition is introduced and can be applied to continuous image streams where gestures are performed in front of moving, cluttered backgrounds.
Journal ArticleDOI

A survey on activity recognition and behavior understanding in video surveillance

TL;DR: This paper provides an overview of benchmark databases for activity recognition, the market analysis of video surveillance, and future directions to work on for this application.
Journal ArticleDOI

A review of hand gesture and sign language recognition techniques

TL;DR: A thorough review of state-of-the-art techniques used in recent hand gesture and sign language recognition research, suitably categorized into different stages: data acquisition, pre-processing, segmentation, feature extraction and classification.
Journal ArticleDOI

A Framework for Recognizing the Simultaneous Aspects of American Sign Language

TL;DR: This paper presents a novel framework for ASL recognition that aims to solve the scalability problems, based on breaking the signs down into their phonemes and modeling them with parallel hidden Markov models.
References
Journal ArticleDOI

A tutorial on hidden Markov models and selected applications in speech recognition

TL;DR: In this paper, the author provides an overview of the basic theory of hidden Markov models (HMMs) as originated by L.E. Baum and T. Petrie (1966) and gives practical details on methods of implementing the theory, along with a description of selected applications of HMMs to distinct problems in speech recognition.
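For a flavor of the machinery this tutorial covers, here is a minimal sketch of the forward algorithm for a discrete HMM, which computes the total likelihood of an observation sequence in O(T·N²) time by summing over all state paths; the parameter values are illustrative, not taken from the tutorial.

import numpy as np

def forward_likelihood(obs, pi, A, B):
    # alpha[i] = P(observations so far, current state = i); one
    # vector-matrix product per time step sums over all state paths.
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return alpha.sum()

# Tiny two-state, two-symbol example: P(obs | model).
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3],
              [0.4, 0.6]])
B = np.array([[0.5, 0.5],
              [0.1, 0.9]])
print(forward_likelihood([0, 1, 1], pi, A, B))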
Book

Hand and Mind: What Gestures Reveal about Thought

TL;DR: McNeill argues that gestures do not simply form part of what is said and meant but have an impact on thought itself, and that gestures are global, synthetic, idiosyncratic, and imagistic.
Journal ArticleDOI

Factorial Hidden Markov Models

TL;DR: A generalization of HMMs in which the hidden state is factored into multiple state variables and is therefore represented in a distributed manner, together with a structured approximation in which the state variables are decoupled, yielding a tractable algorithm for learning the parameters of the model.
Proceedings ArticleDOI

Coupled hidden Markov models for complex action recognition

TL;DR: Algorithms for coupling and training hidden Markov models (HMMs) to model interacting processes are presented, and their superiority to conventional HMMs is demonstrated on a vision task classifying two-handed actions.