Patent

A hand gesture recognition system and method

20 Dec 1996
TL;DR: In this article, the rotational vectors calculated using a real-valued centroid are used to segment the hand region independently of pixel quantization, and color segmentation is used to identify hand-color regions, followed by region labeling to filter out noise regions based on region size.
Abstract: Noise problems in processing small images or large-granularity images are reduced by representing hand images as rotational vectors calculated using a real-valued centroid. The hand region is therefore sectored independently of pixel quantization. Color segmentation is used to identify hand-color regions, followed by region labeling to filter out noise regions based on region size. Principal component analysis is used to plot gesture models.
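The abstract outlines a concrete pipeline: color segmentation, region labeling with a size filter, a real-valued (sub-pixel) centroid, and a rotational-vector representation. Below is a minimal Python sketch of that pipeline; the sector count, size threshold, and boolean-mask input format are illustrative assumptions, not values from the patent.

```python
import numpy as np
from scipy import ndimage

def rotational_vector(mask, n_sectors=32, min_region=50):
    """Sketch: rotational-vector signature of a color-segmented hand mask.

    mask: 2-D boolean array (True = hand-colored pixel). Sector count and
    minimum region size are assumed values for illustration.
    """
    # Region labeling: drop noise regions smaller than min_region pixels.
    labels, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, labels, index=list(range(1, n + 1)))
    keep = np.isin(labels, 1 + np.flatnonzero(sizes >= min_region))

    # Real-valued centroid: independent of pixel quantization.
    ys, xs = np.nonzero(keep)
    cy, cx = ys.mean(), xs.mean()

    # Bin each remaining hand pixel by its angle about the centroid.
    angles = np.arctan2(ys - cy, xs - cx)                  # range [-pi, pi]
    sectors = ((angles + np.pi) / (2 * np.pi) * n_sectors).astype(int)
    vec = np.bincount(sectors % n_sectors, minlength=n_sectors).astype(float)
    return vec / vec.sum()   # normalized rotational vector
```

Gesture models could then be built by running principal component analysis over a collection of such vectors, one per training image.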
Citations
Patent • 09 May 2008
TL;DR: In this article, the authors describe a system for processing touch inputs with respect to a multipoint sensing device and identifying at least one multipoint gesture based on the data from the multipoint sensing device.
Abstract: Methods and systems for processing touch inputs are disclosed. The invention in one respect includes reading data from a multipoint sensing device such as a multipoint touch screen where the data pertains to touch input with respect to the multipoint sensing device, and identifying at least one multipoint gesture based on the data from the multipoint sensing device.
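As a rough illustration of "identifying at least one multipoint gesture based on the data," the sketch below classifies a two-contact pinch versus a drag from two successive frames of touch points; the data format and threshold are assumptions, not anything specified by the patent.

```python
import math

def classify_two_finger_gesture(prev, curr, scale_eps=0.1):
    """Sketch: distinguish pinch/zoom from a two-finger drag.

    prev, curr: lists of two (x, y) positions for the same tracked
    contacts in consecutive frames (an assumed data format).
    """
    def spread(pts):
        (x0, y0), (x1, y1) = pts
        return math.hypot(x1 - x0, y1 - y0)

    ratio = spread(curr) / spread(prev)
    if ratio > 1 + scale_eps:
        return "zoom-in"        # contacts moving apart
    if ratio < 1 - scale_eps:
        return "zoom-out"       # contacts moving together
    return "two-finger-drag"    # spread roughly constant
```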

2,584 citations

Patent • 25 Jan 1999
TL;DR: In this paper, a simple proximity transduction circuit is placed under each electrode to maximize the signal-to-noise ratio and to reduce wiring complexity; segmentation processing of each proximity image constructs a group of electrodes corresponding to each distinguishable contact and extracts shape, position, and surface proximity features for each group.
Abstract: Apparatus and methods are disclosed for simultaneously tracking multiple finger (202-204) and palm (206, 207) contacts as hands approach, touch, and slide across a proximity-sensing, compliant, and flexible multi-touch surface (2). The surface consists of compressible cushion (32), dielectric electrode (33), and circuitry layers. A simple proximity transduction circuit is placed under each electrode to maximize the signal-to-noise ratio and to reduce wiring complexity. Scanning and signal offset removal on the electrode array produce low-noise proximity images. Segmentation processing of each proximity image constructs a group of electrodes corresponding to each distinguishable contact and extracts shape, position, and surface proximity features for each group. Groups in successive images which correspond to the same hand contact are linked by a persistent path tracker (245) which also detects individual contact touchdown and liftoff. Classification of intuitive hand configurations and motions enables unprecedented integration of typing, resting, pointing, scrolling, 3D manipulation, and handwriting into a versatile, ergonomic computer input device.
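A simplified version of the segmentation and path-tracking steps can be sketched with connected-component labeling plus greedy nearest-centroid linking; the noise floor, distance bound, and feature set below are assumptions for illustration, not the patent's actual algorithm.

```python
import numpy as np
from scipy import ndimage

def extract_contacts(prox, floor=0.1):
    """Sketch: group electrodes into distinguishable contacts.

    prox: 2-D proximity image after scanning and offset removal.
    (Threshold and feature set are assumed, not the patent's.)
    """
    labels, n = ndimage.label(prox > floor)
    idx = list(range(1, n + 1))
    centers = ndimage.center_of_mass(prox, labels, idx)   # position feature
    totals = ndimage.sum(prox, labels, idx)               # proximity feature
    return [{"pos": c, "proximity": t} for c, t in zip(centers, totals)]

def link_paths(prev, curr, max_dist=2.0):
    """Greedy persistent-path linking between successive frames.
    Unmatched previous contacts are liftoffs; unmatched current
    contacts are touchdowns."""
    links, used = [], set()
    for i, p in enumerate(prev):
        best, best_d = None, max_dist
        for j, c in enumerate(curr):
            d = np.hypot(c["pos"][0] - p["pos"][0],
                         c["pos"][1] - p["pos"][1])
            if j not in used and d < best_d:
                best, best_d = j, d
        if best is not None:
            used.add(best)
        links.append((i, best))   # best is None on liftoff
    touchdowns = [j for j in range(len(curr)) if j not in used]
    return links, touchdowns
```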

2,576 citations

Patent • 19 Jul 2005
TL;DR: In this article, a user interface method for detecting a touch and then determining user interface mode when a touch is detected is presented. And the method further includes activating one or more GUI elements based on the user interface modes and in response to the detected touch.
Abstract: A user interface method is disclosed. The method includes detecting a touch and then determining a user interface mode when a touch is detected. The method further includes activating one or more GUI elements based on the user interface mode and in response to the detected touch.
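The claimed flow (detect touch, determine mode, activate GUI elements) is easy to picture in code; the mode names and element sets in this sketch are purely illustrative assumptions.

```python
def on_touch_detected(app_state):
    """Sketch: choose a user interface mode from application state and
    return the GUI elements to activate (all names are hypothetical)."""
    if app_state.get("media_playing"):
        return "scrolling", ["virtual_scroll_wheel"]
    if app_state.get("text_field_focused"):
        return "typing", ["virtual_keyboard"]
    return "navigation", ["home_button", "back_button"]
```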

1,390 citations

Patent • 30 Sep 2005
TL;DR: Proximity-based systems and methods that are implemented on an electronic device are disclosed in this article, where the method includes sensing an object spaced away from and in close proximity to the electronic device.
Abstract: Proximity-based systems and methods that are implemented on an electronic device are disclosed. The method includes sensing an object spaced away from and in close proximity to the electronic device. The method also includes performing an action in the electronic device when an object is sensed.
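One way to realize "performing an action when an object is sensed" is a thresholded trigger with hysteresis, so the action does not chatter as readings hover near the threshold. The thresholds and event names below are assumptions, not the patent's.

```python
class ProximityTrigger:
    """Sketch: fire enter/leave events from a normalized proximity reading.
    Hysteresis (enter > exit) prevents chattering near the threshold."""

    def __init__(self, enter=0.6, exit_=0.4):
        self.enter, self.exit_ = enter, exit_
        self.near = False

    def update(self, reading):
        """reading: proximity value in [0, 1]; returns a transition or None."""
        if not self.near and reading >= self.enter:
            self.near = True
            return "object-near"    # e.g. enlarge touch targets for hover
        if self.near and reading <= self.exit_:
            self.near = False
            return "object-away"
        return None
```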

1,337 citations

Patent • 08 Jun 2007
TL;DR: In this paper, liquid-crystal display (LCD) touch screens that integrate the touch sensing elements with the display circuitry are discussed; the integration may take a variety of forms.
Abstract: Disclosed herein are liquid-crystal display (LCD) touch screens that integrate the touch sensing elements with the display circuitry. The integration may take a variety of forms. Touch sensing elements can be completely implemented within the LCD stackup but not between the color filter plate and the array plate. Alternatively, some touch sensing elements can be between the color filter and array plates with other touch sensing elements not between the plates. In another alternative, all touch sensing elements can be between the color filter and array plates. The latter alternative can include both conventional and in-plane-switching (IPS) LCDs. In some forms, one or more display structures can also have a touch sensing function. Techniques for manufacturing and operating such displays, as well as various devices embodying such displays, are also disclosed.

1,083 citations

References
Patent • DOI
TL;DR: A system for the control from a distance of machines having displays includes hand gesture detection in which the hand gesture causes movement of an on-screen hand icon over an on-screen machine control icon, with the hand icon moving the machine control icon in accordance with sensed hand movements to effectuate machine control.
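The described control loop amounts to mapping a sensed hand position into screen coordinates and hit-testing the resulting hand icon against control-icon rectangles. A minimal sketch follows; the coordinate conventions and the controls dictionary are illustrative assumptions.

```python
def update_hand_icon(hand_xy, cam_size, screen_size, controls):
    """Sketch: map a sensed hand position from camera coordinates to
    screen coordinates and hit-test against machine control icons.

    controls: {name: (x, y, w, h)} rectangles in screen coordinates
    (a hypothetical format for illustration).
    """
    sx = hand_xy[0] * screen_size[0] / cam_size[0]
    sy = hand_xy[1] * screen_size[1] / cam_size[1]
    over = None
    for name, (x, y, w, h) in controls.items():
        if x <= sx < x + w and y <= sy < y + h:
            over = name          # hand icon is over this control icon
    return (sx, sy), over
```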

838 citations

Patent • 30 Jul 1993
TL;DR: In this paper, a low-level model-free dynamic and static hand gesture recognition system utilizes either a 1-D histogram of frequency of occurrence vs. spatial orientation angle for static gestures or a 2-D space-time orientation histogram for dynamic gestures.
Abstract: A low-level model-free dynamic and static hand gesture recognition system utilizes either a 1-D histogram of frequency of occurrence vs. spatial orientation angle for static gestures or a 2-D histogram of frequency of occurrence vs. space-time orientation for dynamic gestures. In each case the histogram constitutes the signature of the gesture which is used for gesture recognition. For moving gesture detection, a 3-D space-time orientation map is merged or converted into the 2-D space-time orientation histogram which graphs frequency of occurrence vs. both orientation and movement. It is against this representation or template that an incoming moving gesture is matched.
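A static-gesture signature of the kind described (a 1-D histogram of frequency of occurrence vs. orientation) can be sketched from image gradients, with nearest-template matching as the recognition step. The bin count, magnitude floor, and matching rule here are assumptions, not the patent's parameters.

```python
import numpy as np

def orientation_histogram(img, n_bins=36, mag_floor=0.1):
    """Sketch: 1-D orientation histogram as a static gesture signature.
    Gradient orientations are histogrammed wherever the gradient
    magnitude clears a floor, so flat regions contribute nothing."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx)                               # [-pi, pi]
    bins = ((ang + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
    hist = np.bincount(bins[mag > mag_floor], minlength=n_bins).astype(float)
    return hist / (hist.sum() + 1e-9)

def match_gesture(hist, templates):
    """Nearest template by Euclidean distance between histograms.
    templates: {gesture_name: histogram} (assumed format)."""
    return min(templates, key=lambda name: np.linalg.norm(hist - templates[name]))
```

Dynamic gestures, per the abstract, would extend this to a 2-D histogram over both orientation and motion.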

605 citations

Patent • 24 Sep 1993
TL;DR: In this article, a one- or two-player virtual reality game efficiently detects and tracks a distinctively colored glove; a player equipped with the glove is matched up against a virtual opponent and must put a virtual ball into a virtual basketball hoop before the virtual opponent steals the ball.
Abstract: A one- or two-player virtual reality game efficiently detects and tracks a distinctively colored glove. According to the preferred basketball embodiment, a single player equipped with the distinctively colored glove is matched up against a virtual opponent. The object of the game is for the real player to put a virtual basketball into a virtual basketball hoop before his/her virtual opponent steals the ball. Initially, the background site is scanned, and then the operator with the glove is scanned. A table of colors that are unique to the glove is then established. A player is then scanned against the background to identify which color glove will have the least conflict with colors worn by the player. During play, the player is scanned at 30 frames per second and the information is stored in a frame buffer. A prediction is made of the location of the glove in subsequent frames based upon its previously known location and velocity. Therefore, a search for the glove can be made of only a limited portion of the full frame, thereby increasing the speed of acquisition. Gestures made by the player such as a flick shot, a dribble, or a roundhouse shot can be distinguished so as to automatically cause the basketball to be released from the player's hand. If the velocity and direction of the ball are substantially in the direction of the virtual basketball hoop, then the player will be credited with a score.
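The predict-then-search step is the performance trick here: at 30 frames per second, a constant-velocity prediction confines the glove search to a small sub-rectangle of the next frame. A minimal sketch follows; the margin and frame size are assumed values, not the patent's.

```python
def predict_search_window(pos, vel, dt=1/30, margin=20, frame=(640, 480)):
    """Sketch: constant-velocity prediction of the glove location, so
    only a limited portion of the full frame needs to be searched.

    pos, vel: (x, y) position and velocity in pixels and pixels/second.
    """
    px = pos[0] + vel[0] * dt            # predicted x in the next frame
    py = pos[1] + vel[1] * dt            # predicted y in the next frame
    x0 = max(0, int(px - margin)); x1 = min(frame[0], int(px + margin))
    y0 = max(0, int(py - margin)); y1 = min(frame[1], int(py + margin))
    return (x0, y0, x1, y1)              # search only this sub-rectangle
```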

420 citations

Patent • Michael J. Black, Yaser Yacoob • 15 Dec 1995
TL;DR: In this article, a planar model is used to recover motion parameters that estimate motion between the segmented face region in the first image and a second image in the sequence of images.
Abstract: A system tracks human head and facial features over time by analyzing a sequence of images. The system provides descriptions of motion of both head and facial features between two image frames. These descriptions of motion are further analyzed by the system to recognize facial movement and expression. The system analyzes motion between two images using parameterized models of image motion. Initially, a first image in a sequence of images is segmented into a face region and a plurality of facial feature regions. A planar model is used to recover motion parameters that estimate motion between the segmented face region in the first image and a second image in the sequence of images. The second image is warped or shifted back towards the first image using the estimated motion parameters of the planar model, in order to model the facial features relative to the first image. An affine model and an affine model with curvature are used to recover motion parameters that estimate the image motion between the segmented facial feature regions and the warped second image. The recovered motion parameters of the facial feature regions represent the relative motions of the facial features between the first image and the warped image. The face region in the second image is tracked using the recovered motion parameters of the face region. The facial feature regions in the second image are tracked using both the recovered motion parameters for the face region and the motion parameters for the facial feature regions. The parameters describing the motion of the face and facial features are filtered to derive mid-level predicates that define facial gestures occurring between the two images. These mid-level predicates are evaluated over time to determine facial expression and gestures occurring in the image sequence.
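The parameterized motion models referenced here are low-dimensional flow fields. Below is a sketch of a six-parameter affine model and the warp-back step, with coordinates measured from the region center; the parameter-estimation stage itself is omitted, and the helper names are hypothetical.

```python
import numpy as np
from scipy import ndimage

def affine_flow(params, shape):
    """Sketch: dense flow from a six-parameter affine motion model,
        u = a0 + a1*x + a2*y,   v = a3 + a4*x + a5*y,
    with (x, y) measured from the region center."""
    a0, a1, a2, a3, a4, a5 = params
    ys, xs = np.mgrid[:shape[0], :shape[1]]
    x = xs - shape[1] / 2.0
    y = ys - shape[0] / 2.0
    u = a0 + a1 * x + a2 * y
    v = a3 + a4 * x + a5 * y
    return u, v

def warp_back(img2, u, v):
    """Warp the second image toward the first using the estimated flow,
    so residual facial-feature motion can be modeled relative to it."""
    ys, xs = np.mgrid[:img2.shape[0], :img2.shape[1]]
    return ndimage.map_coordinates(img2, [ys + v, xs + u], order=1)
```

A planar model adds two quadratic terms to the affine form to capture out-of-plane rotation of the head; the warp-back step is the same.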

373 citations

Patent • 08 Sep 1993
TL;DR: In this article, a 3D human interface apparatus is described that uses motion recognition based on dynamic image processing, in which the motion of an operator-operated object serving as an imaging target can be recognized accurately and stably.
Abstract: A 3D human interface apparatus uses motion recognition based on dynamic image processing so that the motion of an operator-operated object serving as an imaging target can be recognized accurately and stably. The apparatus includes: an image input unit for entering a plurality of time series images of an object operated by the operator in a motion representing a command; a feature point extraction unit for extracting at least four feature points, including at least three reference feature points and one fiducial feature point, on the object from each of the images; a motion recognition unit for recognizing the motion of the object by calculating motion parameters, according to an affine transformation determined from changes of positions of the reference feature points on the images, and a virtual parallax for the fiducial feature point expressing a difference between an actual position change on the images and a virtual position change according to the affine transformation; and a command input unit for inputting the command indicated by the motion of the object recognized by the motion recognition unit.
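With three reference feature points, the 2-D affine transformation between frames is exactly determined, and the fiducial point's deviation from that prediction is the virtual parallax. The sketch below assumes 2-D image coordinates and hypothetical helper names.

```python
import numpy as np

def affine_from_points(src, dst):
    """Solve the 2-D affine map (A, b) with dst_i = A @ src_i + b from
    the three reference feature points (exactly determined for 3 points)."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    M = np.hstack([src, np.ones((3, 1))])      # rows: [x, y, 1]
    px = np.linalg.solve(M, dst[:, 0])         # [a11, a12, bx]
    py = np.linalg.solve(M, dst[:, 1])         # [a21, a22, by]
    A = np.array([[px[0], px[1]], [py[0], py[1]]])
    b = np.array([px[2], py[2]])
    return A, b

def virtual_parallax(fid_src, fid_dst, A, b):
    """Difference between the fiducial point's actual position change and
    the change predicted by the affine transformation."""
    predicted = A @ np.asarray(fid_src, float) + b
    return np.asarray(fid_dst, float) - predicted
```

A nonzero virtual parallax indicates motion (such as depth change) that the planar affine model cannot explain, which is what makes the recognition stable.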

365 citations