
David Kim

Researcher at Microsoft

Publications: 55
Citations: 12,500

David Kim is an academic researcher from Microsoft. The author has contributed to research on topics including augmented reality and depth maps. The author has an h-index of 36, having co-authored 55 publications that have received 11,020 citations. Previous affiliations of David Kim include Newcastle University and the Ludwig Maximilian University of Munich.
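
For reference, the h-index cited above has a simple definition: an h-index of 36 means 36 of the author's papers have each been cited at least 36 times. A short sketch of the computation (the function name `h_index` is ours for illustration):

```python
def h_index(citations):
    """h-index: the largest h such that h papers each have >= h citations."""
    ranked = sorted(citations, reverse=True)
    # With citations sorted descending, count ranks where the paper at
    # that rank still has at least that many citations.
    return sum(1 for rank, cites in enumerate(ranked, start=1) if cites >= rank)

print(h_index([10, 8, 5, 4, 3]))  # -> 4
```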

Papers
Proceedings Article

KinectFusion: Real-time dense surface mapping and tracking

TL;DR: A system for accurate real-time mapping of complex and arbitrary indoor scenes in variable lighting conditions, using only a moving low-cost depth camera and commodity graphics hardware. The system fuses all of the depth data streamed from a Kinect sensor into a single global implicit surface model of the observed scene in real time.
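
The "single global implicit surface model" here is a truncated signed distance function (TSDF) volume, updated with a per-voxel weighted running average as each depth frame arrives. Below is a minimal CPU-side NumPy sketch of that fusion step; the function name, parameter layout, and the assumption that the volume origin sits at the world origin are illustrative, and the actual system performs this per voxel on the GPU.

```python
import numpy as np

def integrate_tsdf(tsdf, weights, depth, K, cam_pose, voxel_size, trunc=0.03):
    """Fuse one depth frame into a TSDF volume by weighted running average.

    tsdf, weights : (X, Y, Z) float arrays, the global volume
    depth         : (H, W) depth image in metres
    K             : 3x3 pinhole intrinsics of the depth camera
    cam_pose      : 4x4 camera-to-world transform for this frame
    """
    X, Y, Z = tsdf.shape
    # World coordinates of every voxel centre (volume origin at world origin)
    ix, iy, iz = np.meshgrid(np.arange(X), np.arange(Y), np.arange(Z),
                             indexing="ij")
    pts = np.stack([ix, iy, iz], axis=-1).reshape(-1, 3) * voxel_size

    # Transform voxel centres into the camera frame and project them
    world_to_cam = np.linalg.inv(cam_pose)
    pts_cam = pts @ world_to_cam[:3, :3].T + world_to_cam[:3, 3]
    z = pts_cam[:, 2]
    z_safe = np.where(z > 1e-6, z, 1.0)  # avoid divide-by-zero
    u = np.round(K[0, 0] * pts_cam[:, 0] / z_safe + K[0, 2]).astype(int)
    v = np.round(K[1, 1] * pts_cam[:, 1] / z_safe + K[1, 2]).astype(int)

    # Keep voxels that project inside the depth image with a valid reading
    H, W = depth.shape
    valid = (z > 1e-6) & (u >= 0) & (u < W) & (v >= 0) & (v < H)
    d = np.zeros_like(z)
    d[valid] = depth[v[valid], u[valid]]

    # Signed distance along the ray: positive in front of the surface,
    # truncated to [-trunc, trunc] and normalised
    sdf = np.clip(d - z, -trunc, trunc) / trunc
    valid &= (d > 0) & ((d - z) >= -trunc)  # skip voxels far behind surface

    # Weighted running average: the core of the fusion step
    idx = np.flatnonzero(valid)
    w_old = weights.flat[idx]
    tsdf.flat[idx] = (tsdf.flat[idx] * w_old + sdf[idx]) / (w_old + 1.0)
    weights.flat[idx] = w_old + 1.0
```

Averaging truncated distances rather than raw depth is what lets noisy per-frame measurements converge to a smooth surface; the zero-crossing of the fused volume is then extracted for rendering and for tracking the next frame.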
Proceedings Article

KinectFusion: real-time 3D reconstruction and interaction using a moving depth camera

TL;DR: Novel extensions to the core GPU pipeline demonstrate object segmentation and user interaction directly in front of the sensor, without degrading camera tracking or reconstruction, enabling real-time multi-touch interactions anywhere.
Proceedings Article

Holoportation: Virtual 3D Teleportation in Real-time

TL;DR: This paper demonstrates high-quality, real-time 3D reconstructions of an entire space, including people, furniture and objects, using a set of new depth cameras, and allows users wearing virtual or augmented reality displays to see, hear and interact with remote participants in 3D, almost as if they were present in the same physical space.
Proceedings Article

Digits: freehand 3D interactions anywhere using a wrist-worn gloveless sensor

TL;DR: Digits is a wrist-worn sensor that recovers the full 3D pose of the user's hand, enabling a variety of freehand interactions on the move. It is specifically designed to be low-power and easily reproducible using only off-the-shelf hardware.
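
The "full 3D pose" Digits recovers is expressed through a kinematic hand model driven by sensed joint angles. The sketch below shows forward kinematics for a single planar finger chain to illustrate the idea; the function name, the planar simplification, and the example bone lengths and angles are assumptions for illustration, not the paper's actual model.

```python
import numpy as np

def finger_forward_kinematics(base, lengths, angles):
    """Forward kinematics for one planar finger chain.

    Given per-joint flexion angles (as a sensor might infer), return the
    position of each joint from the knuckle out to the fingertip.
    """
    pts = [np.asarray(base, dtype=float)]
    heading = 0.0
    for length, angle in zip(lengths, angles):
        heading += angle  # flexion accumulates along the chain
        step = length * np.array([np.cos(heading), np.sin(heading)])
        pts.append(pts[-1] + step)
    return np.stack(pts)

# Example: three index-finger joints flexed 20/30/15 degrees
joints = finger_forward_kinematics(
    base=[0.0, 0.0],
    lengths=[0.045, 0.025, 0.018],          # bone lengths in metres
    angles=np.radians([20.0, 30.0, 15.0]),  # flexion per joint
)
print(joints[-1])  # fingertip position
```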
Journal Article

Fusion4D: real-time performance capture of challenging scenes

TL;DR: This work contributes a new pipeline for live multi-view performance capture that generates temporally coherent, high-quality reconstructions in real time. The method is highly robust to both large frame-to-frame motion and topology changes, allowing reconstruction of extremely challenging scenes.