
Showing papers by "Jinli Suo published in 2013"


Proceedings ArticleDOI
19 Apr 2013
TL;DR: This work presents coded focal stack photography, a computational photography paradigm that combines a focal sweep and a coded sensor readout with novel computational algorithms to facilitate high-resolution post-capture refocusing, flexible depth of field, and 3D imaging.
Abstract: We present coded focal stack photography as a computational photography paradigm that combines a focal sweep and a coded sensor readout with novel computational algorithms. We demonstrate various applications of coded focal stacks, including photography with programmable non-planar focal surfaces and multiplexed focal stack acquisition. By leveraging sparse coding techniques, coded focal stacks can also be used to recover a full-resolution depth and all-in-focus (AIF) image from a single photograph. Coded focal stack photography is a significant step towards a computational camera architecture that facilitates high-resolution post-capture refocusing, flexible depth of field, and 3D imaging.
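As a loose illustration of the multiplexed-acquisition idea (not the authors' implementation), the sketch below simulates a per-pixel coded sensor readout that interleaves the slices of a small synthetic focal stack into a single coded photograph; all array names, sizes, and the toy data are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy focal stack: 3 focal slices of a 4x4 scene (hypothetical data).
H, W, S = 4, 4, 3
stack = rng.random((S, H, W))

# Per-pixel readout code: each pixel is exposed during exactly one
# focal slice of the sweep (an interleaved spatial multiplexing code).
code = rng.integers(0, S, size=(H, W))

# Coded capture: the sensor integrates only the selected slice per pixel,
# so one photograph carries interleaved samples of the whole stack.
coded = np.take_along_axis(stack, code[None], axis=0)[0]
```

In the paper, sparse coding is then used to fill in the slices each pixel did not sample, yielding full-resolution depth and an all-in-focus image; that reconstruction step is omitted here.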

47 citations


Journal ArticleDOI
TL;DR: This survey reviews the fundamental theory of compressed sensing, matrix rank minimization, and low-rank matrix recovery, and introduces typical applications of these theories in image processing, computer vision, and computational photography.
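To ground the low-rank matrix recovery topic the survey covers, here is a minimal singular-value-thresholding sketch for matrix completion; the matrix sizes, sampling rate, and shrinkage threshold are illustrative choices, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(4)

# Low-rank matrix completion sketch: recover a rank-2 matrix from a
# random subset of its entries by iterative singular-value thresholding.
M = rng.random((10, 2)) @ rng.random((2, 10))      # ground truth, rank 2
mask = rng.random(M.shape) < 0.6                   # ~60% observed entries

X = np.zeros_like(M)
for _ in range(300):
    X[mask] = M[mask]                              # enforce observed data
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    X = U @ np.diag(np.maximum(s - 0.05, 0)) @ Vt  # shrink singular values

# Relative error on the entries that were never observed.
err = np.linalg.norm((X - M)[~mask]) / (np.linalg.norm(M[~mask]) + 1e-12)
```

The alternation between enforcing observed entries and soft-thresholding singular values is the basic mechanism behind nuclear-norm-based recovery methods the survey discusses.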

16 citations


Proceedings ArticleDOI
19 Apr 2013
TL;DR: This work introduces high-rank coded aperture projectors - a new computational display technology that combines optical designs with computational processing to overcome depth of field limitations of conventional devices.
Abstract: Projectors require large apertures to maximize light throughput. Unfortunately, this leads to shallow depths of field (DOF), hence blurry images, when projecting on non-planar surfaces, such as cultural heritage sites, curved screens, or when sharing visual information in everyday environments. We introduce high-rank coded aperture projectors - a new computational display technology that combines optical designs with computational processing to overcome depth of field limitations of conventional devices. In particular, we employ high-speed spatial light modulators (SLMs) on the image plane and in the aperture of modified projectors. The patterns displayed on these SLMs are computed with a new mathematical framework that uses high-rank light field factorizations and directly exploits the limited temporal resolution and contrast sensitivity of the human visual system. With an experimental prototype projector, we demonstrate significantly increased DOF as compared to conventional technology.
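The "high-rank light field factorization" at the core of the method can be sketched with a plain nonnegative matrix factorization: each time step pairs one aperture pattern with one image-plane pattern, and the eye integrates the frames into their sum. The sketch below uses standard multiplicative NMF updates on invented toy data, not the authors' perceptually weighted solver:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy target light field, flattened to (aperture samples x image pixels).
L = rng.random((8, 12))

# Rank-4 nonnegative factorization L ~ A @ B: each of 4 time-multiplexed
# frames shows one aperture pattern (column of A) on the aperture SLM and
# one spatial pattern (row of B) on the image-plane SLM.
K = 4
A = rng.random((8, K)) + 0.1
B = rng.random((K, 12)) + 0.1

for _ in range(200):  # multiplicative NMF updates (Lee & Seung style)
    B *= (A.T @ L) / (A.T @ A @ B + 1e-9)
    A *= (L @ B.T) / (A @ B @ B.T + 1e-9)

err = np.linalg.norm(L - A @ B) / np.linalg.norm(L)
```

Nonnegativity matters physically: SLM transmittances cannot be negative, which is why a factorization rather than an unconstrained low-rank approximation is used.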

6 citations


Proceedings ArticleDOI
23 Jun 2013
TL;DR: A prototype system is developed that rapidly computes the non-uniform convolution for images of a planar scene blurred by 3D camera rotation, and is incorporated into an iterative deblurring framework.
Abstract: Removing non-uniform blur caused by camera shake has long been troublesome due to its high computational cost. To accelerate non-uniform deblurring, this paper analyzes the efficiency bottleneck of existing non-uniform deblurring algorithms and proposes to implement the time-consuming, repeatedly required module, i.e. non-uniform convolution, by optical computing. Specifically, the non-uniform convolution is simulated by an off-the-shelf projector together with a camera mounted on a programmable motion platform. Benefiting from the high speed and parallelism of optical computation, our system can substantially accelerate most existing non-uniform camera-shake removal algorithms. We develop a prototype system that rapidly computes the non-uniform convolution for images of a planar scene blurred by 3D rotation. By incorporating it into an iterative deblurring framework, we verify the effectiveness of the proposed system.
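The operation being offloaded to optics is the non-uniform convolution: the blurred image is a weighted sum of the sharp image warped to each camera pose. The sketch below illustrates that model numerically, using integer shifts via np.roll as a stand-in for the homography warps induced by 3D rotation; the poses, weights, and image are invented toy values:

```python
import numpy as np

rng = np.random.default_rng(2)
img = rng.random((16, 16))

# Camera-shake trajectory approximated by a few poses (here plain integer
# shifts standing in for homography warps), with weights proportional to
# the dwell time at each pose; weights sum to 1.
poses = [(0, 0), (0, 1), (1, 0)]
weights = np.array([0.5, 0.3, 0.2])

# Non-uniform convolution: weighted sum of pose-warped copies of the
# sharp image -- the step the paper computes optically with a
# projector-camera system on a motion platform.
blurred = sum(w * np.roll(np.roll(img, dy, 0), dx, 1)
              for w, (dy, dx) in zip(weights, poses))
```

Because this sum must be re-evaluated at every iteration of a deblurring solver, replacing it with a single optical exposure is where the speedup comes from.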

4 citations


Journal ArticleDOI
TL;DR: An efficient optical-computation deblurring framework is proposed that implements the time-consuming, repeatedly required modules, i.e. non-uniform convolution and perspective warping, by light transport, and generalizes readily to more complex camera motions.

4 citations


Patent
02 Oct 2013
TL;DR: In this article, a spectrum time-space domain propagation method based on trilateral filtering is proposed to propagate the spectrum information of sampling points across the scene video.
Abstract: The invention discloses a spectrum time-space domain propagation method based on trilateral filtering. The method comprises the following steps: S1, a dual-channel video of a scene is collected, comprising a scene video and a spectrum video of sampling points, and the spectrum video is processed to obtain the spectrum information of the sampling points; S2, the red-green-blue integral curve of the scene-video acquisition device is used to estimate the spectrum sub-channel propagation proportion coefficients of the scene video; S3, an optical flow method is used to estimate the sampling-point position in each frame of the spectrum video; and S4, based on the propagation proportion coefficients and the per-frame sampling-point positions, trilateral filtering is used to propagate the sampling-point spectrum information across the scene video. According to the method in the embodiment of the invention, trilateral filtering propagates the sampling-point spectrum information to the high-resolution scene, thereby achieving video with high spatial and high spectral resolution.
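The propagation step (S4) can be sketched as a trilateral weighting: each pixel receives the sampled spectrum scaled by the product of a spatial-closeness term, an RGB-similarity term, and a temporal term. The sketch below is a single-frame toy with invented sizes and Gaussian bandwidths; the temporal term is fixed to 1, and the per-channel propagation proportions of step S2 are omitted:

```python
import numpy as np

rng = np.random.default_rng(3)

H, W, C = 8, 8, 3          # one RGB frame of the scene video
frame = rng.random((H, W, C))
sy, sx = 4, 4              # sampling-point position (from optical flow)
spectrum = rng.random(31)  # 31-band spectrum measured at that point

ys, xs = np.mgrid[:H, :W]
# Trilateral weights: spatial closeness, RGB similarity to the sample,
# and a temporal consistency term (1.0 in this single-frame sketch).
w_space = np.exp(-((ys - sy) ** 2 + (xs - sx) ** 2) / (2 * 2.0 ** 2))
w_range = np.exp(-np.sum((frame - frame[sy, sx]) ** 2, -1) / (2 * 0.1 ** 2))
w_time = 1.0
w = w_space * w_range * w_time

# Each pixel receives the sample's spectrum scaled by its total weight;
# at the sampling point itself the weight is exactly 1.
propagated = w[..., None] * spectrum
```

With several sampling points, each pixel's spectrum would be the weight-normalized combination over all samples, which is how sparse point measurements are spread into a dense high-spectral-resolution video.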