
Showing papers by "Matthew Turk" published in 2000


Patent
Nebojsa Jojic, Matthew Turk
03 Feb 2000
TL;DR: The patent presents a system and method for recognizing mutual occlusions of body parts and filling in data for the occluded parts while tracking a human body; body parts are tracked from frame to frame in image sequences as an articulated structure whose parts connect at the joints, rather than as individual objects that move and change shape and orientation freely.
Abstract: The present invention is embodied in a system and method for digitally tracking objects in real time. The present invention visually tracks three-dimensional (3-D) objects in dense disparity maps in real time. Tracking of the human body is achieved by digitally segmenting and modeling different body parts using statistical models defined by multiple size parameters, position and orientation. In addition, the present invention is embodied in a system and method for recognizing mutual occlusions of body parts and filling in data for the occluded parts while tracking a human body. The body parts are preferably tracked from frame to frame in image sequences as an articulated structure in which the body parts are connected at the joints instead of as individual objects moving and changing shape and orientation freely.
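A minimal sketch of the underlying idea, not the patented method itself: each body part is modeled as a statistical blob fit to its 3-D points (as recovered from a dense disparity map), and adjacent parts are tied together at a shared joint instead of being tracked as free-floating objects. The function names, the random example data, and the simple endpoint-averaging joint constraint are assumptions made for this illustration.

```python
# Illustrative sketch only -- not the patented algorithm. Assumes each body
# part has already been segmented into a set of 3-D points from the disparity map.
import numpy as np

def fit_blob(points_3d):
    """Fit a statistical blob (mean and covariance) to one body part's 3-D points."""
    mean = points_3d.mean(axis=0)
    cov = np.cov(points_3d, rowvar=False)
    return mean, cov

def major_axis_endpoints(mean, cov, n_sigma=2.0):
    """Approximate the blob's two ends along its principal (largest-variance) axis."""
    vals, vecs = np.linalg.eigh(cov)
    axis = vecs[:, np.argmax(vals)]
    half_len = n_sigma * np.sqrt(vals.max())
    return mean - half_len * axis, mean + half_len * axis

def closest_end(endpoints, target):
    """Pick the blob endpoint nearest to a neighbouring part's center."""
    return min(endpoints, key=lambda e: np.linalg.norm(e - target))

def shared_joint(end_a, end_b):
    """Simplified articulation constraint: force adjacent parts to meet at one
    joint by averaging their nearby endpoints instead of letting them drift apart."""
    return 0.5 * (end_a + end_b)

# Example: tie an "upper arm" blob and a "forearm" blob together at the elbow.
upper_arm = np.random.randn(200, 3) * [0.05, 0.05, 0.30] + [0.0, 0.0, 0.30]
forearm   = np.random.randn(200, 3) * [0.05, 0.05, 0.25] + [0.0, 0.0, -0.25]
ua_mean, ua_cov = fit_blob(upper_arm)
fa_mean, fa_cov = fit_blob(forearm)
elbow = shared_joint(closest_end(major_axis_endpoints(ua_mean, ua_cov), fa_mean),
                     closest_end(major_axis_endpoints(fa_mean, fa_cov), ua_mean))
print("estimated joint position:", elbow)
```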

306 citations


Proceedings ArticleDOI
26 Mar 2000
TL;DR: A state-based approach to gesture learning and recognition is proposed, using spatial clustering and temporal alignment to build a finite state machine (FSM) recognizer.
Abstract: We propose a state-based approach to gesture learning and recognition. Using spatial clustering and temporal alignment, each gesture is defined to be an ordered sequence of states in spatial-temporal space. The 2D image positions of the centers of the head and both hands of the user are used as features; these are located by a color-based tracking method. From training data of a given gesture, we first learn the spatial information and then group the data into segments that are automatically aligned temporally. The temporal information is further integrated to build a finite state machine (FSM) recognizer. Each gesture has a FSM corresponding to it. The computational efficiency of the FSM recognizers allows us to achieve real-time on-line performance. We apply this technique to build an experimental system that plays a game of "Simon Says" with the user.
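A minimal sketch of the gesture-as-state-sequence idea under simplifying assumptions: each state is reduced here to a 2-D center plus an acceptance radius for a single tracked point, whereas the described system uses the head and both hands and learns its states by spatial clustering and temporal alignment. The class name and the example "wave" gesture are illustrative only.

```python
# Illustrative sketch: a gesture as an ordered sequence of spatial states.
import math

class GestureFSM:
    def __init__(self, states):
        # states: list of ((x, y), radius) that must be visited in order
        self.states = states
        self.current = 0

    def reset(self):
        self.current = 0

    def update(self, point):
        """Feed one tracked (x, y) position; return True when the gesture completes."""
        (cx, cy), r = self.states[self.current]
        if math.hypot(point[0] - cx, point[1] - cy) <= r:
            self.current += 1
            if self.current == len(self.states):
                self.reset()
                return True
        return False

# Example: a left-to-right hand sweep defined by three spatial states.
wave = GestureFSM([((100, 200), 30), ((200, 200), 30), ((300, 200), 30)])
for p in [(98, 205), (150, 201), (202, 198), (301, 199)]:
    if wave.update(p):
        print("gesture recognized")
```

Because each incoming feature point only has to be tested against the current state, one such FSM per gesture keeps the per-frame cost low, which is what makes real-time online recognition feasible.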

259 citations


Journal Article
Matthew Turk
TL;DR: This chapter describes the emerging field of perceptual user interfaces (PUIs), which aim to make human-computer interaction more like the way people interact with each other and with the world, and reports on three PUI-motivated projects that use computer vision-based techniques to visually perceive relevant information about the user.
Abstract: For some time, graphical user interfaces (GUIs) have been the dominant platform for human-computer interaction. The GUI-based style of interaction has made computers simpler and easier to use, especially for office productivity applications where computers are used as tools to accomplish specific tasks. However, as the way we use computers changes and computing becomes more pervasive and ubiquitous, GUIs will not easily support the range of interactions necessary to meet users' needs. In order to accommodate a wider range of scenarios, tasks, users and preferences, we need to move toward interfaces that are natural, intuitive, adaptive and unobtrusive. The aim of a new focus in HCI, called Perceptual User Interfaces (PUIs), is to make human-computer interaction more like how people interact with each other and with the world. This chapter describes the emerging PUI field and then reports on three PUI-motivated projects that use computer vision-based techniques to visually perceive relevant information about the user.

64 citations


Proceedings ArticleDOI
01 Sep 2000
TL;DR: An approach to 2D gesture recognition that models each gesture as a finite state machine (FSM) in the spatial-temporal space is proposed and the computational efficiency of the FSM recognizers allows real-time online performance to be achieved.
Abstract: Proposes an approach to 2D gesture recognition that models each gesture as a finite state machine (FSM) in the spatial-temporal space. The model construction works in a semi-automatic way. The structure of the model is first manually decided based on the observation of the spatial topology of the data. The model is then refined iteratively between two stages: data segmentation and model training. We incorporate a modified Knuth-Morris-Pratt algorithm into the recognition procedure to speed up recognition. The computational efficiency of the FSM recognizers allows real-time online performance to be achieved.
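To illustrate how a Knuth-Morris-Pratt-style matcher can speed up recognition over a stream of quantized state labels, here is a minimal sketch; the paper's specific modification of KMP is not reproduced, and the state labels and example sequences are illustrative assumptions.

```python
# Illustrative sketch: standard KMP matching of a gesture's state-label sequence
# against a stream of observed state labels (one label per quantized observation).

def prefix_function(pattern):
    """KMP failure table for the gesture's state-label sequence."""
    pi = [0] * len(pattern)
    k = 0
    for i in range(1, len(pattern)):
        while k > 0 and pattern[i] != pattern[k]:
            k = pi[k - 1]
        if pattern[i] == pattern[k]:
            k += 1
        pi[i] = k
    return pi

def kmp_stream_match(stream, pattern):
    """Yield indices in the label stream where the full gesture sequence ends."""
    pi = prefix_function(pattern)
    k = 0
    for i, label in enumerate(stream):
        while k > 0 and label != pattern[k]:
            k = pi[k - 1]
        if label == pattern[k]:
            k += 1
        if k == len(pattern):
            yield i
            k = pi[k - 1]

# Example: a gesture defined by states A -> B -> C, found in a noisy label stream.
print(list(kmp_stream_match("AABXABCABC", "ABC")))   # -> [6, 9]
```

The failure table means the matcher never re-scans earlier labels after a mismatch, so recognition cost stays linear in the length of the observation stream.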

58 citations