Kevin Karsch

Researcher at University of Illinois at Urbana–Champaign

Publications: 47
Citations: 2012

Kevin Karsch is an academic researcher from the University of Illinois at Urbana–Champaign. He has contributed to research on the topics of Rendering (computer graphics) and Augmented reality. He has an h-index of 19 and has co-authored 47 publications receiving 1766 citations. Previous affiliations of Kevin Karsch include the United States Naval Research Laboratory and Adobe Systems.

Papers
Journal Article

Depth Transfer: Depth Extraction from Video Using Non-Parametric Sampling

TL;DR: The technique can be used to automatically convert a monoscopic video into stereo for 3D visualization, and is demonstrated through a variety of visually pleasing results for indoor and outdoor scenes, including results from the feature film Charade.
Book Chapter

Depth extraction from video using non-parametric sampling

TL;DR: The technique can be used to automatically convert a monoscopic video into stereo for 3D visualization, and is demonstrated through a variety of visually pleasing results for indoor and outdoor scenes, including results from the feature film Charade.
Proceedings Article

Rendering synthetic objects into legacy photographs

TL;DR: This article proposes a method to realistically insert synthetic objects into existing photographs without requiring access to the scene or any additional scene measurements; the approach can be used for home decorating and user content creation.
Journal Article

Automatic Scene Inference for 3D Object Compositing

TL;DR: A user-friendly image editing system that supports drag-and-drop object insertion, post-process illumination editing, and depth-of-field manipulation, achieving the same level of realism as techniques that require significant user interaction.
Posted Content

Automatic Scene Inference for 3D Object Compositing

TL;DR: This article presents a user-friendly image editing system that supports drag-and-drop object insertion (the user merely drags objects into the image, and the system automatically places them in 3D and relights them appropriately), post-process illumination editing, and depth-of-field manipulation.