scispace - formally typeset
Author

Xiangmin Li

Bio: Xiangmin Li is an academic researcher from Beijing Institute of Technology. The author has contributed to research in topics: Photography & Focus stacking. The author has an h-index of 1, and has co-authored 2 publications receiving 3 citations.

Papers
Journal ArticleDOI
TL;DR: The main relations among image defocus, sensor defocus, and scene defocus for an imaging system are introduced, along with a novel defocus map estimation method based on the defocus origin, which is essentially the reverse of depth from defocus (DFD).
Abstract: A novel method is proposed for defocus map estimation. It is based on the defocus origin that is essentially the reverse of depth from defocus (DFD). The main relations among image defocus, sensor defocus, and scene defocus for an imaging system are introduced. A defocus map is deduced from the depth map and the depth map is derived from the disparity map. The full disparity map can be reconstructed using an image-matching method and our clustering segmentation algorithm. Experimental results for an interior scene and an outdoor scene demonstrate that our method is effective in defocus measurement.
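The pipeline the abstract describes, disparity to depth to defocus, can be sketched with the standard stereo relation and the thin-lens blur-circle formula. This is an illustrative sketch, not the paper's implementation; the focal length, aperture, baseline, and focus distance below are assumed values:

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Standard stereo relation: depth = f * B / disparity."""
    return focal_px * baseline_m / np.maximum(disparity_px, 1e-6)

def blur_circle_diameter(depth_m, f=0.05, aperture=0.025, focus_dist=2.0):
    """Thin-lens blur circle: c = A * f * |s_f - d| / (d * (s_f - f)).

    Zero at the focus distance s_f; grows as the scene point moves away.
    All parameters here are illustrative (50 mm lens, f/2, focused at 2 m).
    """
    return aperture * f * np.abs(focus_dist - depth_m) / (depth_m * (focus_dist - f))

# A toy disparity map (pixels) mapped to a depth map, then a defocus map.
disparity = np.array([[20.0, 10.0], [5.0, 2.5]])
depth = depth_from_disparity(disparity, focal_px=1000.0, baseline_m=0.1)
defocus = blur_circle_diameter(depth)
```

The point of the chain is that disparity fixes depth, and depth fixes the blur-circle diameter once the lens parameters and focus distance are known, which is the "reverse of DFD" direction the paper exploits.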

2 citations

Journal ArticleDOI
TL;DR: A depth-based computational photography model is proposed for all-in-focus image capture that adopts an energy functional minimization method to acquire the sharpest image pieces separately.
Abstract: A depth-based computational photography model is proposed for all-in-focus image capture. A decomposition function, a defocus matrix, and a depth matrix are introduced to construct the photography model. The original image acquired from a camera can be decomposed into several sub-images on the basis of depth information. The defocus matrix can be deduced from the depth matrix according to the sensor defocus geometry for a thin-lens model, and the depth matrix is reconstructed using the axial binocular stereo vision algorithm. This photography model adopts an energy functional minimization method to acquire the sharpest image pieces separately. The implementation of the photography method is described in detail. Experimental results for an actual scene demonstrate that our model is effective.
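The per-region selection of the sharpest image pieces is, in spirit, a focus-stacking step. Below is a minimal per-pixel sketch using a Laplacian sharpness measure — a common focus measure substituted here for the paper's energy functional minimization, not the authors' method:

```python
import numpy as np

def laplacian_energy(img):
    """Per-pixel sharpness: magnitude of a discrete 4-neighbor Laplacian."""
    return np.abs(4 * img
                  - np.roll(img, 1, axis=0) - np.roll(img, -1, axis=0)
                  - np.roll(img, 1, axis=1) - np.roll(img, -1, axis=1))

def all_in_focus(stack):
    """Composite a stack of differently focused frames.

    For each pixel, keep the value from the frame whose local Laplacian
    energy (sharpness) is highest.
    """
    energy = np.stack([laplacian_energy(im) for im in stack])
    idx = np.argmax(energy, axis=0)                      # winning frame per pixel
    frames = np.stack(stack)
    composite = np.take_along_axis(frames, idx[None], axis=0)[0]
    return composite, idx
```

A real system would regularize the per-pixel choice (as an energy functional does) so the selection map is smooth across depth boundaries rather than decided pixel by pixel.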

1 citation


Cited by
Journal ArticleDOI
TL;DR: This method builds on the universal imaging principle: only the scene at the focus distance converges to a single sharp point on the imaging sensor, while other scene points yield blur effects that vary with their distance from the camera lens.
Abstract: We present a technique to recover and refine the depth map from a single image captured by a conventional camera. Our method builds on the universal imaging principle: only the scene at the focus distance converges to a single sharp point on the imaging sensor, while other scene points yield different blur effects that vary with their distance from the camera lens. We first estimate depth values at edge locations via spectrum contrast and then recover the full depth map using a depth matting optimization method. Because some blurred textures, such as soft shadows or blur patterns, produce ambiguous results during depth estimation, we use a total variation-based image smoothing method to smooth the original image, yielding a smoothed image with detailed texture suppressed. Taking this smoothed image as the reference image, a guided filter is used to refine the final depth map.

35 citations

Journal ArticleDOI
TL;DR: A depth-based computational photography model is proposed for all-in-focus image capture that adopts an energy functional minimization method to acquire the sharpest image pieces separately.
Abstract: A depth-based computational photography model is proposed for all-in-focus image capture. A decomposition function, a defocus matrix, and a depth matrix are introduced to construct the photography model. The original image acquired from a camera can be decomposed into several sub-images on the basis of depth information. The defocus matrix can be deduced from the depth matrix according to the sensor defocus geometry for a thin-lens model, and the depth matrix is reconstructed using the axial binocular stereo vision algorithm. This photography model adopts an energy functional minimization method to acquire the sharpest image pieces separately. The implementation of the photography method is described in detail. Experimental results for an actual scene demonstrate that our model is effective.

1 citation

Journal ArticleDOI
TL;DR: This paper proposes a multi-view three-dimensional display method based on a scanning imaging system, with the light-intensity characteristic recorded by an improved flatbed scanner; the method can present the three-dimensional effect of objects with a flat translucent multilayer structure over a wide field of view.
Abstract: This paper proposes a multi-view three-dimensional display method based on a scanning imaging system with the light-intensity characteristic recorded by an improved flatbed scanner. Within the effective scanning depth of the imaging sensor, two transmission images are acquired simultaneously by two linear CCD modules with different focal planes. Then the phase gradient information of the target can be obtained by an appropriate retrieval algorithm. Further, the multi-view three-dimensional effect is presented through dynamic angles of view. Theoretical analysis of this method is discussed, and experiments are carried out by building a scanner. Experimental results are presented for an algae specimen and transparent beads. We hope this method can be applied to present the three-dimensional effect of objects with a flat translucent multilayer structure over a wide field of view.

1 citation