
Showing papers by "Jian Sun" published in 2004


Journal ArticleDOI
01 Aug 2004
TL;DR: Usability studies indicate that Lazy Snapping provides a better user experience and produces better segmentation results than the state-of-the-art interactive image cutout tool, Magnetic Lasso in Adobe Photoshop.
Abstract: In this paper, we present Lazy Snapping, an interactive image cutout tool. Lazy Snapping separates coarse and fine scale processing, making object specification and detailed adjustment easy. Moreover, Lazy Snapping provides instant visual feedback, snapping the cutout contour to the true object boundary efficiently despite the presence of ambiguous or low contrast edges. Instant feedback is made possible by a novel image segmentation algorithm which combines graph cut with pre-computed over-segmentation. A set of intuitive user interface (UI) tools is designed and implemented to provide flexible control and editing for the users. Usability studies indicate that Lazy Snapping provides a better user experience and produces better segmentation results than the state-of-the-art interactive image cutout tool, Magnetic Lasso in Adobe Photoshop.
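The following is a hedged sketch of the idea behind Lazy Snapping's instant feedback, not the authors' implementation: the graph cut runs on a pre-computed over-segmentation rather than on pixels, so each superpixel becomes a graph node, user strokes fix the seed nodes, color distances give the data costs, and a min-cut yields the foreground labeling. The mean colors, adjacency list, and seed sets below are assumed inputs.

# Hedged sketch: graph cut over a pre-computed over-segmentation (illustrative,
# not the paper's code). Nodes are superpixels; 'S' and 'T' are the terminals.
import networkx as nx
import numpy as np

def segment_superpixels(mean_colors, adjacency, fg_seeds, bg_seeds, lam=10.0):
    """mean_colors: dict node_id -> mean RGB (np.ndarray); adjacency: iterable of
    (i, j) superpixel pairs; fg_seeds / bg_seeds: node ids under the user's strokes."""
    fg_mean = np.mean([mean_colors[i] for i in fg_seeds], axis=0)
    bg_mean = np.mean([mean_colors[i] for i in bg_seeds], axis=0)
    INF = 1e9

    G = nx.DiGraph()
    for i, c in mean_colors.items():
        d_f = np.linalg.norm(c - fg_mean)      # distance to foreground colors
        d_b = np.linalg.norm(c - bg_mean)      # distance to background colors
        cost_fg = d_f / (d_f + d_b + 1e-6)     # penalty for labeling i foreground
        cost_bg = d_b / (d_f + d_b + 1e-6)     # penalty for labeling i background
        if i in fg_seeds:
            cost_fg, cost_bg = 0.0, INF        # hard constraints from the strokes
        elif i in bg_seeds:
            cost_fg, cost_bg = INF, 0.0
        G.add_edge('S', i, capacity=cost_bg)   # severed if i lands on the background side
        G.add_edge(i, 'T', capacity=cost_fg)   # severed if i lands on the foreground side
    for i, j in adjacency:
        # Smoothness term: cutting between similar-colored neighbors is expensive.
        w = lam / (1.0 + np.linalg.norm(mean_colors[i] - mean_colors[j]))
        G.add_edge(i, j, capacity=w)
        G.add_edge(j, i, capacity=w)

    _, (source_side, _) = nx.minimum_cut(G, 'S', 'T')
    return source_side - {'S'}                 # superpixels labeled foreground

Because the graph has only a few thousand superpixel nodes instead of millions of pixels, the cut can be recomputed after every stroke, which is what makes the feedback feel instant.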

1,170 citations


Journal ArticleDOI
01 Aug 2004
TL;DR: Experiments on many complex natural images demonstrate that Poisson matting can generate good matting results that are not possible using existing matting techniques.
Abstract: In this paper, we formulate the problem of natural image matting as one of solving Poisson equations with the matte gradient field. Our approach, which we call Poisson matting, has the following advantages. First, the matte is directly reconstructed from a continuous matte gradient field by solving Poisson equations using boundary information from a user-supplied trimap. Second, by interactively manipulating the matte gradient field using a number of filtering tools, the user can further improve Poisson matting results locally until he or she is satisfied. The modified local result is seamlessly integrated into the final result. Experiments on many complex natural images demonstrate that Poisson matting can generate good matting results that are not possible using existing matting techniques.
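A minimal sketch of the global variant of this formulation is given below (assuming grayscale input and given rough foreground/background estimates F and B; not the authors' code): the matte gradient is approximated by grad(I) / (F - B) and the resulting Poisson equation is solved iteratively, with Dirichlet boundary values taken from the trimap.

# Hedged sketch of global Poisson matting on a grayscale image; F and B are
# smoothed foreground/background intensity estimates assumed to be supplied.
import numpy as np

def poisson_matte(gray, trimap, F, B, iters=500):
    """gray: HxW image in [0,1]; trimap: 1 = foreground, 0 = background, 0.5 = unknown."""
    alpha = np.where(trimap == 1, 1.0, np.where(trimap == 0, 0.0, 0.5))
    unknown = trimap == 0.5

    # Approximate matte gradient field and its divergence (the Poisson right-hand side).
    denom = np.clip(F - B, 1e-3, None)
    gy, gx = np.gradient(gray / denom)
    div = np.gradient(gy, axis=0) + np.gradient(gx, axis=1)

    # Jacobi iterations of the discrete Poisson equation lap(alpha) = div,
    # updating only the unknown region; known pixels act as boundary conditions.
    for _ in range(iters):
        neighbors = (np.roll(alpha, 1, 0) + np.roll(alpha, -1, 0) +
                     np.roll(alpha, 1, 1) + np.roll(alpha, -1, 1))
        alpha[unknown] = (neighbors - div)[unknown] / 4.0
    return np.clip(alpha, 0.0, 1.0)

The local filtering tools described in the abstract would act on the gradient field (the div term) before the solve, after which the same Poisson step re-integrates the edited region.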

646 citations


Journal ArticleDOI
TL;DR: An image-based modeling and rendering system that models a sparse light field using a set of coherent layers, and introduces a Bayesian approach, coherence matting, to estimate alpha matting around segmented layer boundaries by incorporating a coherence prior in order to maintain coherence across images.
Abstract: In this article, we present an image-based modeling and rendering system, which we call pop-up light field, that models a sparse light field using a set of coherent layers. In our system, the user specifies how many coherent layers should be modeled or popped up according to the scene complexity. A coherent layer is defined as a collection of corresponding planar regions in the light field images. A coherent layer can be rendered free of aliasing all by itself, or against other background layers. To construct coherent layers, we introduce a Bayesian approach, coherence matting, to estimate alpha matting around segmented layer boundaries by incorporating a coherence prior in order to maintain coherence across images.We have developed an intuitive and easy-to-use user interface (UI) to facilitate pop-up light field construction. The key to our UI is the concept of human-in-the-loop where the user specifies where aliasing occurs in the rendered image. The user input is reflected in the input light field images where pop-up layers can be modified. The user feedback is instant through a hardware-accelerated real-time pop-up light field renderer. Experimental results demonstrate that our system is capable of rendering anti-aliased novel views from a sparse light field.
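To make the coherence-matting idea concrete, here is a hedged per-pixel sketch: the compositing data term I = alpha*F + (1 - alpha)*B is combined with a Gaussian prior centered at alpha0, where alpha0 stands in for a feathering profile of distance to the layer boundary that is shared across views. The foreground/background estimates and noise parameters are assumed inputs, and the sketch works on grayscale values; it is an illustration of the flavor of the estimator, not the paper's exact formulation.

# Hedged sketch of a per-pixel MAP alpha estimate with a coherence-style prior.
import numpy as np

def map_alpha(I, F, B, alpha0, sigma_d=0.05, sigma_p=0.1):
    """Minimize (I - a*F - (1-a)*B)^2 / sigma_d^2 + (a - alpha0)^2 / sigma_p^2."""
    d = F - B
    # Setting the derivative to zero gives a closed-form per-pixel solution.
    num = (I - B) * d / sigma_d**2 + alpha0 / sigma_p**2
    den = d * d / sigma_d**2 + 1.0 / sigma_p**2
    return np.clip(num / den, 0.0, 1.0)

Because alpha0 comes from the boundary geometry rather than from each image independently, the recovered mattes stay consistent from view to view, which is the coherence property the rendering relies on.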

200 citations


Patent
01 Apr 2004
TL;DR: In this article, a scene is split into one or more coherent layers, the boundaries of the coherent layers are propagated across a plurality of frames corresponding to the scene, and the splitting may be further refined to present a virtual view of the scene.
Abstract: Techniques are disclosed to produce virtual views of a complex scene. The virtual views are substantially free from aliasing even when using a relatively sparse set of images of the scene. In a described implementation, a scene is split into one or more coherent layers. The boundaries of the coherent layers are propagated across a plurality of frames corresponding to the scene. The splitting may be further refined (e.g., in accordance with user feedback) to present a virtual view of the scene.

157 citations


Journal Article
TL;DR: A novel approach to recover a high-quality image by exploiting the tradeoff between exposure time and motion blur, considering color statistics and spatial constraints simultaneously and using only two defective input images.
Abstract: Under dimly lit conditions, it is difficult to take a satisfactory image with a long exposure time using a hand-held camera. Despite the use of a tripod, moving objects in the scene still produce ghosting and blurring effects. In this paper, we propose a novel approach to recovering a high-quality image by exploiting the tradeoff between exposure time and motion blur, considering color statistics and spatial constraints simultaneously and using only two defective input images. A Bayesian framework is adopted to incorporate these factors and generate an optimal color mapping function. No estimation of the PSF is performed. Our new approach can be readily extended to handle high-contrast scenes and reveal fine details in saturated or highlight regions. An image acquisition system deploying off-the-shelf digital cameras and camera control software was built. We present results on a variety of defective images: global and local motion blur due to camera shake or object movement, and saturation due to high-contrast scenes.
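As a simplified stand-in for the color-statistics component (the paper learns the mapping in a Bayesian framework together with spatial constraints; the following only illustrates the flavor), the short-exposure, sharp-but-dark image can be pushed toward the color statistics of the long-exposure, blurred-but-well-exposed image by per-channel histogram matching.

# Hedged sketch: per-channel histogram matching as a crude color mapping from the
# dark, sharp exposure toward the bright, blurred exposure. Illustrative only.
import numpy as np

def match_channel(src, ref):
    """Monotone mapping that makes src's histogram resemble ref's (values in [0,1])."""
    src_vals, src_counts = np.unique(np.clip(src, 0, 1), return_counts=True)
    ref_vals, ref_counts = np.unique(np.clip(ref, 0, 1), return_counts=True)
    src_cdf = np.cumsum(src_counts) / src.size
    ref_cdf = np.cumsum(ref_counts) / ref.size
    # For each source intensity, find the reference intensity at the same CDF level.
    mapped = np.interp(src_cdf, ref_cdf, ref_vals)
    return np.interp(src, src_vals, mapped)

def color_map(dark_sharp, bright_blurred):
    """Apply the mapping channel by channel to an HxWx3 float image pair."""
    return np.stack([match_channel(dark_sharp[..., c], bright_blurred[..., c])
                     for c in range(3)], axis=-1)

The result keeps the sharp structure of the short exposure while borrowing the exposure and color of the long one, which is the tradeoff the abstract describes.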

68 citations


Patent
01 Apr 2004
TL;DR: In this article, techniques are described to improve the quality of images that may be blurred or underexposed (e.g., because of camera shake, dim lighting conditions, or high-action scenes).
Abstract: Techniques are disclosed to improve the quality of images that may be blurred or underexposed (e.g., because of camera shake, dim lighting conditions, or high-action scenes). The techniques may be implemented in a digital camera, a digital video camera, or a digital camera capable of capturing video. In one described implementation, a digital camera includes an image sensor, a storage device, and a processing unit. The image sensor captures two images of the same scene, which are stored on the storage device. The processing unit enhances the captured images with luminance correction.

67 citations


Book ChapterDOI
11 May 2004
TL;DR: In this article, a novel approach is proposed to recover a high-quality image by exploiting the tradeoff between exposure time and motion blur, considering color statistics and spatial constraints simultaneously and using only two defective input images.
Abstract: Under dimly lit conditions, it is difficult to take a satisfactory image with a long exposure time using a hand-held camera. Despite the use of a tripod, moving objects in the scene still produce ghosting and blurring effects. In this paper, we propose a novel approach to recovering a high-quality image by exploiting the tradeoff between exposure time and motion blur, considering color statistics and spatial constraints simultaneously and using only two defective input images. A Bayesian framework is adopted to incorporate these factors and generate an optimal color mapping function. No estimation of the PSF is performed. Our new approach can be readily extended to handle high-contrast scenes and reveal fine details in saturated or highlight regions. An image acquisition system deploying off-the-shelf digital cameras and camera control software was built. We present results on a variety of defective images: global and local motion blur due to camera shake or object movement, and saturation due to high-contrast scenes.

59 citations


Patent
Jian Sun, Heung-Yeung Shum, Hai Tao
01 Apr 2004
TL;DR: In this paper, a method for generating high-resolution images from a generic low-resolution image is described. The method is based on extracting, at a training phase, a plurality of primal sketch priors from training data.
Abstract: Techniques are disclosed to enable generation of a high-resolution image from any generic low-resolution image. In one described implementation, a method includes extracting, at a training phase, a plurality of primal sketch priors from training data. At a synthesis phase, the plurality of primal sketch priors are utilized to improve a low-resolution image by replacing one or more low-frequency primitives extracted from the low-resolution image with corresponding ones of the plurality of primal sketch priors.
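Below is a hedged sketch of the synthesis phase described above (the dictionary construction, primitive detection, and contrast normalization steps are assumed to have been done already, and the names are illustrative rather than taken from the patent): each low-frequency primitive patch taken from a bicubic-upsampled input is matched to its nearest entry in a trained dictionary of low-frequency patches, and the paired high-frequency patch is pasted back.

# Hedged sketch of primitive replacement for super-resolution; low_dict and
# high_dict are a trained pair of low-/high-frequency patch dictionaries.
import numpy as np

def enhance_patches(upsampled, patch_coords, low_dict, high_dict, size=9):
    """upsampled: HxW bicubic-upsampled image; patch_coords: list of (y, x)
    top-left corners of detected primitives; low_dict: NxD array of flattened
    low-frequency training patches; high_dict: NxD matching high-frequency patches."""
    out = upsampled.copy()
    for y, x in patch_coords:
        patch = upsampled[y:y+size, x:x+size]
        if patch.shape != (size, size):
            continue  # skip primitives that fall off the image border
        q = patch.ravel()
        # Nearest-neighbor lookup in the low-frequency dictionary.
        idx = np.argmin(np.sum((low_dict - q) ** 2, axis=1))
        # Paste back the learned high-frequency detail for that primitive.
        out[y:y+size, x:x+size] += high_dict[idx].reshape(size, size)
    return out

Restricting the replacement to primitives (edges, ridges, corners) keeps the hallucinated detail where the eye is most sensitive while leaving smooth regions to ordinary interpolation.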

21 citations


01 Apr 2004
TL;DR: In this paper, an image-based modeling and rendering system called pop-up light field is presented, in which the user specifies how many coherent layers should be modeled or popped up according to the scene complexity.
Abstract: In this article, we present an image-based modeling and rendering system, which we call pop-up light field, that models a sparse light field using a set of coherent layers. In our system, the user specifies how many coherent layers should be modeled or popped up according to the scene complexity. A coherent layer is defined as a collection of corresponding planar regions in the light field images. A coherent layer can be rendered free of aliasing all by itself, or against other background layers. To construct coherent layers, we introduce a Bayesian approach, coherence matting, to estimate alpha matting around segmented layer boundaries by incorporating a coherence prior in order to maintain coherence across images. We have developed an intuitive and easy-to-use user interface (UI) to facilitate pop-up light field construction. The key to our UI is the concept of human-in-the-loop where the user specifies where aliasing occurs in the rendered image. The user input is reflected in the input light field images where pop-up layers can be modified. The user feedback is instant through a hardware-accelerated real-time pop-up light field renderer. Experimental results demonstrate that our system is capable of rendering anti-aliased novel views from a sparse light field.

2 citations