
Showing papers by "Brian Curless published in 2009"


Proceedings ArticleDOI
01 Sep 2009
TL;DR: This paper proposes a fully automated 3D reconstruction and visualization system for architectural scenes (interiors and exteriors) and demonstrates results on several challenging datasets, including the first result of this kind from an automated computer vision system.
Abstract: This paper proposes a fully automated 3D reconstruction and visualization system for architectural scenes (interiors and exteriors). The reconstruction of indoor environments from photographs is particularly challenging due to texture-poor planar surfaces such as uniformly-painted walls. Our system first uses structure-from-motion, multi-view stereo, and a stereo algorithm specifically designed for Manhattan-world scenes (scenes consisting predominantly of piece-wise planar surfaces with dominant directions) to calibrate the cameras and to recover initial 3D geometry in the form of oriented points and depth maps. Next, the initial geometry is fused into a 3D model with a novel depth-map integration algorithm that, again, makes use of Manhattan-world assumptions and produces simplified 3D models. Finally, the system enables the exploration of reconstructed environments with an interactive, image-based 3D viewer. We demonstrate results on several challenging datasets, including a 3D reconstruction and image-based walk-through of an entire floor of a house, the first result of this kind from an automated computer vision system.
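A minimal sketch of the Manhattan-world idea used in the reconstruction stage: snap oriented points to axis-aligned plane hypotheses by assigning each normal to a dominant direction and voting for plane offsets. The function names, thresholds, and the assumption that the scene axes are world-aligned are illustrative, not taken from the paper.

```python
import numpy as np

def dominant_axis(normals):
    """Assign each unit normal to the nearest of the three Manhattan axes."""
    scores = np.abs(normals @ np.eye(3))   # |cos| against each axis; assumes world-aligned axes
    return np.argmax(scores, axis=1)

def plane_hypotheses(points, normals, bin_size=0.05):
    """Vote for plane offsets along each dominant axis (coarse 1D quantization)."""
    labels = dominant_axis(normals)
    hypotheses = []
    for axis in range(3):
        offsets = points[labels == axis, axis]
        if offsets.size == 0:
            continue
        quantized = np.round(offsets / bin_size) * bin_size
        values, counts = np.unique(quantized, return_counts=True)
        keep = values[counts > 0.1 * counts.max()]   # keep well-supported offsets only
        hypotheses.extend((axis, float(o)) for o in keep)
    return hypotheses

# Example with synthetic oriented points on a wall (x = 0) and a floor (z = 0)
rng = np.random.default_rng(0)
pts = np.vstack([np.column_stack([np.zeros(100), rng.random(100), rng.random(100)]),
                 np.column_stack([rng.random(100), rng.random(100), np.zeros(100)])])
nrm = np.vstack([np.tile([1.0, 0.0, 0.0], (100, 1)), np.tile([0.0, 0.0, 1.0], (100, 1))])
print(plane_hypotheses(pts, nrm))   # -> [(0, 0.0), (2, 0.0)]
```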

385 citations


Proceedings ArticleDOI
20 Jun 2009
TL;DR: This paper presents a novel MVS approach that overcomes the limitations of standard stereo on texture-poor surfaces for Manhattan-world scenes, i.e., scenes that consist of piece-wise planar surfaces with dominant directions, and demonstrates results that outperform the current state of the art for such scenes.
Abstract: Multi-view stereo (MVS) algorithms now produce reconstructions that rival laser range scanner accuracy. However, stereo algorithms require textured surfaces, and therefore work poorly for many architectural scenes (e.g., building interiors with textureless, painted walls). This paper presents a novel MVS approach to overcome these limitations for Manhattan-world scenes, i.e., scenes that consist of piece-wise planar surfaces with dominant directions. Given a set of calibrated photographs, we first reconstruct textured regions using an existing MVS algorithm, then extract dominant plane directions, generate plane hypotheses, and recover per-view depth maps using Markov random fields. We have tested our algorithm on several datasets ranging from office interiors to outdoor buildings, and demonstrate results that outperform the current state of the art for such texture-poor scenes.
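The paper formulates per-view depth recovery as labelling each pixel with one of the plane hypotheses under a Markov random field. The toy sketch below uses a few sweeps of iterated conditional modes with a Potts smoothness term as a stand-in solver; the random data costs are placeholders for photoconsistency scores, and none of this matches the paper's actual energy or optimizer.

```python
import numpy as np

def icm_plane_labels(data_cost, smoothness=0.5, sweeps=5):
    """data_cost: (H, W, K) cost of assigning each pixel one of K plane hypotheses."""
    h, w, k = data_cost.shape
    labels = np.argmin(data_cost, axis=2)          # winner-take-all initialization
    for _ in range(sweeps):
        for y in range(h):
            for x in range(w):
                cost = data_cost[y, x].copy()
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if 0 <= ny < h and 0 <= nx < w:
                        # Potts penalty: disagreeing with a neighbour costs extra
                        cost += smoothness * (np.arange(k) != labels[ny, nx])
                labels[y, x] = np.argmin(cost)
    return labels

# Example: 3 plane hypotheses on a small grid with noisy data costs
rng = np.random.default_rng(1)
costs = rng.random((20, 20, 3))
costs[:, :10, 0] -= 0.5      # left half prefers plane 0
costs[:, 10:, 1] -= 0.5      # right half prefers plane 1
print(icm_plane_labels(costs))
```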

373 citations



Proceedings Article
25 May 2009
TL;DR: A GPU-accelerated, temporally coherent rendering algorithm is described that allows users to create more complex camera moves interactively, while experimenting with effects such as focal length, depth of field, and selective, depth-based desaturation or brightening.
Abstract: We present an approach to convert a small portion of a light field with extracted depth information into a cinematic effect with simulated, smooth camera motion that exhibits a sense of 3D parallax. We develop a taxonomy of the cinematic conventions of these effects, distilled from observations of documentary film footage and organized by the number of subjects of interest in the scene. We present an automatic, content-aware approach to apply these cinematic conventions to an input light field. A face detector identifies subjects of interest. We then optimize for a camera path that conforms to a cinematic convention, maximizes apparent parallax, and avoids missing information in the input. We describe a GPU-accelerated, temporally coherent rendering algorithm that allows users to create more complex camera moves interactively, while experimenting with effects such as focal length, depth of field, and selective, depth-based desaturation or brightening. We evaluate and demonstrate our approach on a wide variety of scenes and present a user study that compares our 3D cinematic effects to their 2D counterparts.
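To illustrate the camera-path optimization step, here is a hypothetical scoring sketch: candidate linear moves inside the light-field aperture are rewarded for baseline (a proxy for apparent parallax) and penalized when their endpoints leave the region covered by the input views. The objective, weights, and 2D camera-plane parameterization are assumptions for illustration, not the paper's formulation.

```python
import numpy as np

def path_score(start, end, aperture_radius=1.0, parallax_weight=1.0, oob_weight=10.0):
    """Score a linear camera move from start to end (2D positions in the camera plane)."""
    baseline = np.linalg.norm(end - start)          # longer moves give more apparent parallax
    overrun = max(0.0, np.linalg.norm(start) - aperture_radius) + \
              max(0.0, np.linalg.norm(end) - aperture_radius)
    return parallax_weight * baseline - oob_weight * overrun   # penalize leaving the aperture

def best_linear_path(candidates, **kwargs):
    """Pick the (start, end) pair with the highest score."""
    return max(candidates, key=lambda p: path_score(*p, **kwargs))

# Example: a few candidate dolly moves in the camera plane (metres)
candidates = [(np.array([-0.4, 0.0]), np.array([0.4, 0.0])),
              (np.array([-0.9, 0.0]), np.array([1.3, 0.0])),   # partly outside the aperture
              (np.array([0.0, -0.3]), np.array([0.0, 0.3]))]
start, end = best_linear_path(candidates)
print(start, end)
```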

74 citations


Proceedings ArticleDOI
16 Apr 2009
TL;DR: The framework enables a variety of applications that were previously unavailable to the amateur user, such as the ability to automatically create videos with high spatiotemporal resolution, and shift a high-resolution still to nearby points in time to better capture a missed event.
Abstract: We present solutions for enhancing the spatial and/or temporal resolution of videos. Our algorithm targets the emerging consumer-level hybrid cameras that can simultaneously capture video and high-resolution stills. Our technique produces a high spacetime resolution video using the high-resolution stills for rendering and the low-resolution video to guide the reconstruction and the rendering process. Our framework integrates and extends two existing algorithms, namely a high-quality optical flow algorithm and a high-quality image-based-rendering algorithm. The framework enables a variety of applications that were previously unavailable to the amateur user, such as the ability to (1) automatically create videos with high spatiotemporal resolution, and (2) shift a high-resolution still to nearby points in time to better capture a missed event.
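A much-simplified stand-in for the core idea: estimate motion on the low-resolution video, upsample the flow field, and warp a nearby high-resolution still to the target frame's time. The paper integrates a high-quality optical flow algorithm with image-based rendering; Farneback flow and a single backward warp here are crude placeholders, and the helper name and parameters are assumptions.

```python
import cv2
import numpy as np

def upsample_frame(lowres_still_frame, lowres_target_frame, highres_still, scale):
    """Warp the high-res still toward the target frame's time.

    lowres_* are single-channel (grayscale) video frames; the still was captured
    at the same instant as lowres_still_frame and is `scale` times larger.
    """
    # Flow from the target frame back to the still's frame, estimated at low resolution
    flow = cv2.calcOpticalFlowFarneback(lowres_target_frame, lowres_still_frame,
                                        None, 0.5, 3, 15, 3, 5, 1.2, 0)
    # Upsample the flow field and convert displacements to high-res pixel units
    flow_hi = cv2.resize(flow, None, fx=scale, fy=scale,
                         interpolation=cv2.INTER_LINEAR) * scale
    h, w = flow_hi.shape[:2]
    grid_x, grid_y = np.meshgrid(np.arange(w, dtype=np.float32),
                                 np.arange(h, dtype=np.float32))
    # For each target pixel, sample the still at the corresponding source position
    map_x = grid_x + flow_hi[..., 0]
    map_y = grid_y + flow_hi[..., 1]
    return cv2.remap(highres_still, map_x, map_y, cv2.INTER_LINEAR)
```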

60 citations


01 Jan 2009
TL;DR: This paper first computes dense, view-dependent depth maps using consistent segmentation, and then proposes two rendering algorithms, one real-time and one off-line, that render novel views using the segmentation.
Abstract: This paper presents an approach to render novel views from input photographs, a task commonly referred to as image-based rendering. We first compute dense, view-dependent depth maps using consistent segmentation. This method jointly computes multi-view stereo and segments the input photographs while accounting for mixed pixels (matting). Taking the images with depth as input, we then propose two rendering algorithms, one real-time and one off-line, that render novel views using the segmentation. We demonstrate the results of our approach on a wide variety of scenes.
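A minimal sketch of the rendering half of the idea: forward-project one view's pixels, using its depth map, into a novel camera and resolve overlaps with a z-buffer. The segment-aware compositing and matting from the paper are omitted; the pinhole camera model and function signature below are assumptions for illustration.

```python
import numpy as np

def render_novel_view(image, depth, K, R, t, out_shape):
    """Splat (image, depth) from a source camera at the origin into the view (K, R, t)."""
    h, w = depth.shape
    colors = image.reshape(h * w, -1)
    ys, xs = np.mgrid[0:h, 0:w]
    # Back-project source pixels to 3D (source camera shares intrinsics K)
    rays = np.linalg.inv(K) @ np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    points = rays * depth.ravel()
    # Transform into the novel camera and project
    cam = R @ points + t[:, None]
    proj = K @ cam
    u = np.round(proj[0] / proj[2]).astype(int)
    v = np.round(proj[1] / proj[2]).astype(int)
    z = proj[2]
    out = np.zeros((out_shape[0], out_shape[1], colors.shape[1]), dtype=image.dtype)
    zbuf = np.full(out_shape, np.inf)
    valid = (u >= 0) & (u < out_shape[1]) & (v >= 0) & (v < out_shape[0]) & (z > 0)
    for i in np.flatnonzero(valid):
        if z[i] < zbuf[v[i], u[i]]:                # keep the nearest surface
            zbuf[v[i], u[i]] = z[i]
            out[v[i], u[i]] = colors[i]
    return out
```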

18 citations