Matthew Turk

Researcher at Toyota Technological Institute at Chicago

Publications - 209
Citations - 33736

Matthew Turk is an academic researcher at the Toyota Technological Institute at Chicago. He has contributed to research topics including augmented reality and facial recognition systems. He has an h-index of 55 and has co-authored 198 publications receiving 30972 citations. His previous affiliations include the Massachusetts Institute of Technology and the University of California.

Papers
Proceedings Article

Automatic Hot Spot Detection and Segmentation in Whole Body FDG-PET Images

TL;DR: A novel body-section labeling module based on a spatial hidden Markov model (HMM) allows different processing policies to be applied in different body sections, and the method works robustly despite the large variations in clinical PET images.
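
The core of such a labeling module is a decoder that assigns each axial slice to a body section while respecting spatial order. Below is a minimal sketch, not the paper's code: a Viterbi decoder over per-slice observation likelihoods, where the state set, the monotonic transition matrix, and the likelihood inputs are all illustrative assumptions.

```python
# Hedged sketch: spatial-HMM labeling of axial PET slices via Viterbi decoding.
# States, transition probabilities, and observation likelihoods are assumptions
# for illustration, not values from the paper.
import numpy as np

STATES = ["head", "neck", "torso", "legs"]

# Monotonic top-to-bottom anatomy: a section can persist or advance, never reverse.
trans = np.array([
    [0.95, 0.05, 0.00, 0.00],
    [0.00, 0.90, 0.10, 0.00],
    [0.00, 0.00, 0.97, 0.03],
    [0.00, 0.00, 0.00, 1.00],
])
start = np.array([1.0, 0.0, 0.0, 0.0])  # scans assumed to begin at the head

def viterbi(obs_lik: np.ndarray) -> list[str]:
    """obs_lik[t, s] = likelihood of slice t's features under section s."""
    T, S = obs_lik.shape
    with np.errstate(divide="ignore"):
        log_trans, log_start, log_obs = np.log(trans), np.log(start), np.log(obs_lik)
    log_delta = np.full((T, S), -np.inf)
    backptr = np.zeros((T, S), dtype=int)
    log_delta[0] = log_start + log_obs[0]
    for t in range(1, T):
        scores = log_delta[t - 1][:, None] + log_trans  # scores[i, j]: i -> j
        backptr[t] = scores.argmax(axis=0)
        log_delta[t] = scores.max(axis=0) + log_obs[t]
    path = [int(log_delta[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(backptr[t, path[-1]]))
    return [STATES[s] for s in reversed(path)]
```

Decoding all slices jointly, rather than classifying each slice independently, is what lets section-specific processing policies be applied consistently even when individual slices are ambiguous.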
Proceedings Article

The isometric self-organizing map for 3D hand pose estimation

TL;DR: An isometric self-organizing map (ISO-SOM) method for nonlinear dimensionality reduction is proposed, integrating a self-organizing map model with the ISOMAP dimensionality reduction algorithm to organize high-dimensional data on a low-dimensional lattice structure.
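
One plausible way to combine the two components, sketched under the assumption that the geodesic (ISOMAP) embedding is computed first and a lattice is then trained on it; the paper's exact coupling of the two models may differ, and the grid size, schedule, and stand-in data are illustrative.

```python
# Hedged sketch in the spirit of ISO-SOM: embed data by geodesic distances,
# then organize the embedded points on a 2-D self-organizing lattice.
import numpy as np
from sklearn.manifold import Isomap

def train_som(data, rows=8, cols=8, iters=2000, lr0=0.5, sigma0=3.0, seed=0):
    rng = np.random.default_rng(seed)
    weights = rng.standard_normal((rows, cols, data.shape[1]))
    grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                indexing="ij"), axis=-1)
    for t in range(iters):
        frac = t / iters
        lr, sigma = lr0 * (1 - frac), sigma0 * (1 - frac) + 1e-3
        x = data[rng.integers(len(data))]
        # Best-matching unit: lattice node whose weight vector is closest to x.
        bmu = np.unravel_index(np.argmin(((weights - x) ** 2).sum(-1)),
                               (rows, cols))
        # Gaussian neighborhood update pulls nearby nodes toward x.
        d2 = ((grid - np.array(bmu)) ** 2).sum(-1)
        h = np.exp(-d2 / (2 * sigma ** 2))[..., None]
        weights += lr * h * (x - weights)
    return weights

# Stand-in for high-dimensional hand-pose features (illustrative random data).
X = np.random.default_rng(1).standard_normal((500, 20))
Z = Isomap(n_neighbors=10, n_components=3).fit_transform(X)
lattice = train_som(Z)
```

Training the lattice on the ISOMAP embedding means Euclidean distances during SOM training approximate geodesic distances in the original space, which is what lets the lattice respect the manifold structure of the data.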
Dissertation

Interactive-time vision: face recognition as a visual behavior

TL;DR: A near-real-time computer system that locates and tracks a subject's head and then recognizes the person by comparing characteristics of the face to those of known individuals, with the ability to learn and later recognize new faces in an unsupervised manner.
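
The recognition step here compares faces in a low-dimensional subspace, the "eigenface" idea associated with this work. Below is a minimal PCA-based sketch of that idea, not the dissertation's code: detection and tracking are omitted, and the array shapes, component count, and rejection threshold are illustrative assumptions.

```python
# Hedged sketch: recognition in a PCA "face space" with nearest-neighbor
# matching and a distance threshold for unknown faces.
import numpy as np
from sklearn.decomposition import PCA

def fit_face_space(train_imgs: np.ndarray, n_components: int = 20) -> PCA:
    """train_imgs: (n_faces, height*width) flattened grayscale face images."""
    return PCA(n_components=n_components).fit(train_imgs)

def recognize(pca: PCA, gallery: np.ndarray, labels: list, probe: np.ndarray,
              reject_dist: float = 50.0):
    g = pca.transform(gallery)            # known faces projected to face space
    p = pca.transform(probe[None, :])[0]  # probe face projected to face space
    dists = np.linalg.norm(g - p, axis=1)
    i = int(dists.argmin())
    # A face too far from every known identity could be enrolled as a new
    # person, mirroring the unsupervised learning described in the abstract.
    return labels[i] if dists[i] < reject_dist else "unknown"
```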
Proceedings Article

Visual interaction with lifelike characters

TL;DR: This paper explores the use of fast, simple computer vision techniques to add compelling visual capabilities to social user interfaces and presents a set of "interactive-time" vision routines that begin to support the user's expectations of a seeing character.
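
As a modern stand-in (the paper predates today's vision libraries), a per-frame face-detection loop fast enough to drive a character's gaze illustrates what "interactive-time" means in practice. OpenCV, its bundled Haar cascade, and camera index 0 are assumptions here, not the paper's routines.

```python
# Hedged sketch: an interactive-time vision loop a lifelike character could
# use to "see" the user. Requires opencv-python; press 'q' to quit.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        # A character would steer its gaze toward this rectangle each frame.
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("seeing character", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()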
Patent

Systems and methods for augmented reality-based remote collaboration

TL;DR: The authors present systems, methods, devices, and software for an augmented shared visual space enabling live mobile remote collaboration on physical tasks: remote participants can explore the scene at the local site independently of the local participants' current camera position, and all participants can communicate via spatial annotations that are immediately visible to everyone else in augmented reality.
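
A sketch of the central data idea, under the assumption that annotations are anchored in a shared world coordinate frame so each participant's view can re-project them regardless of camera pose. All names and fields below are hypothetical illustrations, not the patent's claims.

```python
# Hedged sketch: spatial annotations in a shared world frame for AR-based
# remote collaboration. Field names and the in-memory "broadcast" are
# illustrative assumptions.
from dataclasses import dataclass, field
import time

@dataclass
class SpatialAnnotation:
    author_id: str
    position: tuple[float, float, float]  # anchor in shared world coordinates
    normal: tuple[float, float, float]    # surface orientation at the anchor
    payload: str                          # e.g. a drawn stroke id or text note
    timestamp: float = field(default_factory=time.time)

# The shared space is the set of annotations visible to all participants;
# each client re-projects them into its own camera view every frame.
shared_space: list[SpatialAnnotation] = []

def annotate(author: str, hit_point, hit_normal, note: str) -> SpatialAnnotation:
    a = SpatialAnnotation(author, tuple(hit_point), tuple(hit_normal), note)
    shared_space.append(a)  # in practice: publish over the network to all peers
    return a
```

Anchoring annotations to world coordinates rather than to screen positions is the design choice that lets a remote participant's marks stay pinned to the physical object even as every camera moves.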