
Showing papers by "Wolfgang Heidrich published in 2019"


Journal Article•DOI•
TL;DR: A lens is designed to produce spatially shift-invariant point spread functions, over the full FOV, that are tailored to the proposed reconstruction architecture, and this system is compared against existing single-element designs, including an aspherical lens and a pinhole, and against a complex multielement lens.
Abstract: Typical camera optics consist of a system of individual elements that are designed to compensate for the aberrations of a single lens. Recent computational cameras shift some of this correction task from the optics to post-capture processing, reducing the imaging optics to only a few optical elements. However, these systems only achieve reasonable image quality by limiting the field of view (FOV) to a few degrees - effectively ignoring severe off-axis aberrations with blur sizes of several hundred pixels. In this paper, we propose a lens design and learned reconstruction architecture that lift this limitation and provide an order of magnitude increase in field of view using only a single thin-plate lens element. Specifically, we design a lens to produce spatially shift-invariant point spread functions, over the full FOV, that are tailored to the proposed reconstruction architecture. We achieve this with a mixture PSF, consisting of a peak and a low-pass component, which provides residual contrast instead of a small spot size as in traditional lens designs. To perform the reconstruction, we train a deep network on captured data from a display lab setup, eliminating the need for manual acquisition of training data in the field. We assess the proposed method in simulation and experimentally with a prototype camera system. We compare our system against existing single-element designs, including an aspherical lens and a pinhole, and we compare against a complex multielement lens, validating high-quality large field-of-view (i.e. 53°) imaging performance using only a single thin-plate element.
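
As a minimal illustration of the mixture-PSF idea, the sketch below simulates a capture with a peak-plus-low-pass PSF. The Gaussian components and their weights are stand-ins of our choosing, not the paper's optimized PSFs:

```python
import numpy as np
from scipy.signal import fftconvolve

def gaussian_psf(size, sigma):
    """Isotropic Gaussian kernel, normalized to unit energy."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return psf / psf.sum()

# Mixture PSF: a narrow peak that preserves high frequencies plus a wide
# low-pass term carrying residual contrast (sizes/weights are assumptions).
mixture_psf = 0.3 * gaussian_psf(63, 1.0) + 0.7 * gaussian_psf(63, 12.0)

def simulate_capture(scene, psf=mixture_psf, noise_sigma=0.01):
    """Shift-invariant blur over the full FOV plus sensor noise; a learned
    reconstruction network would then deconvolve the result."""
    blurred = fftconvolve(scene, psf, mode="same")
    return blurred + np.random.normal(0.0, noise_sigma, scene.shape)
```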

80 citations


Journal Article•DOI•
TL;DR: This work presents a compact, diffraction-based snapshot hyperspectral imaging method, using only a novel diffractive optical element (DOE) in front of a conventional, bare image sensor, and introduces a novel DOE design that generates an anisotropic shape of the spectrally-varying PSF.
Abstract: Traditional snapshot hyperspectral imaging systems include various optical elements: a dispersive optical element (prism), a coded aperture, several relay lenses, and an imaging lens, resulting in an impractically large form factor. We seek an alternative, minimal form factor of snapshot spectral imaging based on recent advances in diffractive optical technology. We therefore present a compact, diffraction-based snapshot hyperspectral imaging method, using only a novel diffractive optical element (DOE) in front of a conventional, bare image sensor. Our diffractive imaging method replaces the common optical elements in hyperspectral imaging with a single optical element. To this end, we tackle two main challenges: First, traditional diffractive lenses are not suitable for color imaging under incoherent illumination due to severe chromatic aberration, because the size of the point spread function (PSF) changes depending on the wavelength. Leveraging this wavelength-dependent property for hyperspectral imaging instead, we introduce a novel DOE design that generates an anisotropic shape of the spectrally-varying PSF. The PSF size remains virtually unchanged, but instead the PSF shape rotates as the wavelength of light changes. Second, since there is no dispersive element and no coded aperture mask, the ill-posedness of spectral reconstruction increases significantly. Thus, we propose an end-to-end network solution based on the unrolled architecture of an optimization procedure with a spatial-spectral prior, specifically designed for deconvolution-based spectral reconstruction. Finally, we demonstrate hyperspectral imaging with a fabricated DOE attached to a conventional DSLR sensor. Results show that our method compares well with other state-of-the-art hyperspectral imaging methods in terms of spectral accuracy and spatial resolution, while our compact, diffraction-based spectral imaging method uses only a single optical element on a bare image sensor.
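
The measurement model behind this design can be sketched as a sum of per-wavelength convolutions. A hedged outline, with hypothetical rotating-PSF kernels standing in for the ones produced by the fabricated DOE:

```python
from scipy.signal import fftconvolve

def snapshot_capture(cube, psfs):
    """Diffractive snapshot measurement: the bare sensor integrates every
    spectral slice of the scene, each blurred by its wavelength-dependent
    (rotated) PSF.  cube: (H, W, L) hyperspectral scene; psfs: L kernels.
    Spectral reconstruction then inverts this model, in the paper via an
    unrolled optimization with a learned spatial-spectral prior."""
    return sum(fftconvolve(cube[..., i], psfs[i], mode="same")
               for i in range(cube.shape[-1]))
```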

78 citations


Journal Article•DOI•
TL;DR: In this paper, the suitability of optical coherence tomography (OCT) in monitoring the cake layer development in-situ in activated sludge membrane bioreactors (AS-MBR) under continuous operation was evaluated.

28 citations


Journal Article•DOI•
TL;DR: In this paper, the authors used structured monochromatic volume illumination with spatially varying intensity profiles to achieve 3D intensity particle tracking velocimetry using a single video camera.
Abstract: We use structured monochromatic volume illumination with spatially varying intensity profiles to achieve 3D intensity particle tracking velocimetry using a single video camera. The video camera records the 2D motion of a 3D particle field within a fluid, which is perpendicularly illuminated with depth gradients of the illumination intensity. This allows us to encode the depth position, perpendicular to the camera, in the intensity of each particle image. The light intensity field is calibrated using a 3D laser-engraved glass cube containing a known spatial distribution of 1100 defects. This is used to correct for the distortions and divergence of the projected light. We use a sequence of changing light patterns, with numerous sub-gradients in the intensity, to achieve a resolution of 200 depth levels.
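
Conceptually, depth decoding reduces to inverting the calibrated intensity ramp for each illumination pattern. A small sketch; the function names and the tabulated calibration interface are ours, not the paper's:

```python
import numpy as np

def build_depth_lut(calib_intensities, calib_depths):
    """Calibration step: the known defect positions in the engraved glass
    cube yield (intensity, depth) pairs; sort them into a monotone LUT."""
    order = np.argsort(calib_intensities)
    return calib_intensities[order], calib_depths[order]

def depth_from_intensity(measured_intensity, lut):
    """Decode a particle's depth from its image intensity (one LUT per
    light pattern; ~200 resolvable depth levels overall)."""
    lut_intensity, lut_depth = lut
    return np.interp(measured_intensity, lut_intensity, lut_depth)
```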

27 citations


Journal Article•DOI•
TL;DR: This work demonstrates a low-cost, easy to implement microscopy setup for quantitative imaging of phase and bright field amplitude using collimated white light illumination.
Abstract: Phase imaging techniques are an invaluable tool in microscopy for quickly examining thin transparent specimens. Existing approaches are either simple and inexpensive but yield only qualitative phase information (e.g. phase contrast microscopy, DIC), or quantitative but significantly more elaborate and expensive. Here we demonstrate a low-cost, easy to implement microscopy setup for quantitative imaging of phase and bright field amplitude using collimated white light illumination.

27 citations


Journal Article•DOI•
TL;DR: This work improves the tomographic reconstruction of time-varying geometries undergoing faster, non-periodic deformations by introducing an essentially continuous time axis where consistency of the reconstructed shape with the projection images is enforced for the specific time and deformation state at which the image was captured.
Abstract: Computed tomography has emerged as the method of choice for scanning complex shapes as well as interior structures of stationary objects. Recent progress has also allowed the use of CT for analyzing deforming objects and dynamic phenomena, although the deformations have been constrained to be either slow or periodic motions. In this work we improve the tomographic reconstruction of time-varying geometries undergoing faster, non-periodic deformations. Our method uses a warp-and-project approach that allows us to introduce an essentially continuous time axis where consistency of the reconstructed shape with the projection images is enforced for the specific time and deformation state at which the image was captured. The method uses an efficient, time-adaptive solver that yields both the moving geometry as well as the deformation field. We validate our method with extensive experiments using both synthetic and real data from a range of different application scenarios.
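
The warp-and-project idea can be summarized in a few lines: instead of comparing each projection against a static volume, the current reconstruction is first warped to the deformation state at that image's capture time. A schematic sketch in which the warp and projector callables are placeholders, not the paper's actual solver:

```python
def warp_and_project_residuals(volume, warps, projectors, images):
    """Data term of a warp-and-project scheme: residuals are evaluated at
    the specific time/deformation state of each captured projection.
    warps[t]:      callable deforming the volume to capture time t
    projectors[t]: callable computing the X-ray projection for view t"""
    residuals = []
    for t, image in enumerate(images):
        warped = warps[t](volume)              # geometry at capture time t
        residuals.append(projectors[t](warped) - image)
    return residuals  # jointly driven to zero over volume and deformation
```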

24 citations


Proceedings Article•DOI•
01 Jan 2019
TL;DR: This work was supported by King Abdullah University of Science and Technology as part of VCC center baseline funding and a King Abdulaziz City for Science and Technology scholarship.
Abstract: This work was supported by King Abdullah University of Science and Technology as part of VCC center baseline funding. Masheal Alghamdi is supported by a King Abdulaziz City for Science and Technology scholarship.

11 citations


Proceedings Article•
01 Jan 2019
TL;DR: In this paper, the authors used the King Abdullah University of Science and Technology as part of VCC center baseline funding and an equipment donation from LUCID Vision Labs to support their work.
Abstract: This work was supported by King Abdullah University of Science and Technology as part of VCC center baseline funding and an equipment donation from LUCID Vision Labs. We thank Dr. Alex Tibbs for sharing data for initial tests and Nadya Suvorova for helping with dataset construction.

7 citations


Journal Article•DOI•
TL;DR: Supplementary material is provided for the paper "Hierarchical and View-invariant Light Field Segmentation by Maximizing Entropy Rate on 4D Ray Graphs" by Rui Li and Wolfgang Heidrich.
Abstract: Image segmentation is an important first step of many image processing, computer graphics, and computer vision pipelines. Unfortunately, it remains difficult to automatically and robustly segment cluttered scenes, or scenes in which multiple objects have similar color and texture. In these scenarios, light fields offer much richer cues that can be used efficiently to drastically improve the quality and robustness of segmentations. In this paper we introduce a new light field segmentation method that respects texture appearance, depth consistency, as well as occlusion, and creates well-shaped segments that are robust under view point changes. Furthermore, our segmentation is hierarchical, i.e. with a single optimization, a whole hierarchy of segmentations with different numbers of regions is available. All this is achieved with a submodular objective function that allows for efficient greedy optimization. Finally, we introduce a new tree-array type data structure, i.e. a disjoint tree, to efficiently perform submodular optimization on very large graphs. This approach is of interest beyond our specific application of light field segmentation. We demonstrate the efficacy of our method on a number of synthetic and real data sets, and show how the obtained segmentations can be used for applications in image processing and graphics.
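
A toy version of the greedy hierarchy construction, using a plain union-find in place of the paper's disjoint-tree structure and static edge gains in place of the re-evaluated entropy-rate marginals (both simplifications of ours):

```python
class DisjointSet:
    """Union-find with path halving; the paper's tree-array ('disjoint
    tree') structure plays this role on very large 4D ray graphs."""
    def __init__(self, n):
        self.parent = list(range(n))
    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x
    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False
        self.parent[rb] = ra
        return True

def greedy_merge(num_rays, scored_edges, k):
    """Greedily accept graph edges by gain until k segments remain; the
    sequence of merges is itself the segmentation hierarchy, so one run
    yields segmentations at every region count >= k.
    scored_edges: [(gain, i, j)] from appearance/depth/occlusion cues."""
    ds, segments = DisjointSet(num_rays), num_rays
    for gain, i, j in sorted(scored_edges, reverse=True):
        if segments <= k:
            break
        if ds.union(i, j):
            segments -= 1
    return [ds.find(i) for i in range(num_rays)]
```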

6 citations


Proceedings Article•
09 Sep 2019
TL;DR: A method to combine simultaneously captured images from such a two-camera stereo system to generate a high-quality, noise-reduced color image, made robust by introducing a novel artifact-robust optimization formulation.
Abstract: Recent years have seen an explosion of the number of camera modules integrated into individual consumer mobile devices, including configurations that contain multiple different types of image sensors. One popular configuration is to combine an RGB camera for color imaging with a monochrome camera that has improved performance in low-light settings, as well as some sensitivity in the infrared. In this work we introduce a method to combine simultaneously captured images from such a two-camera stereo system to generate a high-quality, noise reduced color image. To do so, pixel-to-pixel alignment has to be constructed between the two captured monochrome and color images, which however, is prone to artifacts due to parallax. The joint image reconstruction is made robust by introducing a novel artifact-robust optimization formulation. We provide extensive experimental results based on the two-camera configuration of a commercially available cell phone.
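
In generic form, such an artifact-robust fusion can be written as the following objective. The notation and weighting are entirely ours; the paper's exact formulation is not reproduced here:

```latex
% x: fused color image;  A: color sampling/degradation operator;
% y_rgb, y_mono: captured color and monochrome images;  W: mono-to-color
% stereo warp;  L: luminance extraction;  \rho: robust penalty that
% down-weights parallax misalignment artifacts;  \Gamma: image prior.
\min_{x}\;
  \lVert A x - y_{\mathrm{rgb}} \rVert_2^2
  + \mu\,\rho\!\bigl( L x - W\,y_{\mathrm{mono}} \bigr)
  + \tau\,\Gamma(x)
```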

3 citations


Proceedings Article•DOI•
29 Sep 2019
TL;DR: This work proposes a novel stochastic spatial-domain solver, in which a randomized subsampling strategy is introduced during the learning of sparse codes, and extends the proposed strategy in conjunction with online learning, scaling the CSC model up to very large sample sizes.
Abstract: This work was supported by King Abdullah University of Science and Technology as part of VCC center baseline funding.

Proceedings Article•DOI•
24 Jun 2019
TL;DR: In this article, a new formula is derived to connect between slopes wavefront sensors (e.g. Shack-Hartmann) and curvature sensors (based on Transport-of-Intensity Equation).
Abstract: A new formula is derived to connect between slopes wavefront sensors (e.g. Shack-Hartmann) and curvature sensors (based on Transport-of-Intensity Equation). Experimental results demonstrate snapshot simultaneous phase and intensity recovery on an incoherent illumination microscopy.
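
For reference, the two measurement models being connected can be written as follows (standard notation with k = 2π/λ; this is the textbook starting point, not the paper's new formula). Substituting the slope measurement s into the TIE shows how slope data converts into curvature data:

```latex
% Slope sensor (e.g. Shack-Hartmann): samples the transverse gradient.
\mathbf{s}(x, y) = \nabla_{\!\perp}\,\phi(x, y)
% Curvature sensor: governed by the Transport-of-Intensity Equation.
k\,\frac{\partial I}{\partial z}
  = -\,\nabla_{\!\perp}\!\cdot\!\bigl(I\,\nabla_{\!\perp}\phi\bigr)
  = -\,\nabla_{\!\perp}\!\cdot\!\bigl(I\,\mathbf{s}\bigr)
```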

Posted Content•
TL;DR: In this paper, the authors rethink the mixture problem of demosaicing, denoising, and super-resolution from a holistic perspective and propose a new image processing pipeline, DN → SR → DM, to address the incomplete color sampling, noise degradation, and limited resolution of modern camera systems.
Abstract: Incomplete color sampling, noise degradation, and limited resolution are the three key problems that are unavoidable in modern camera systems. Demosaicing (DM), denoising (DN), and super-resolution (SR) are core components in a digital image processing pipeline to overcome these three problems, respectively. Although each of these problems has been studied actively, the mixture problem of DM, DN, and SR, which has higher practical value, has received far less attention. Such a mixture problem is usually solved by a sequential solution (applying each method independently in a fixed order: DM → DN → SR), or is simply tackled by an end-to-end network without enough analysis of the interactions among the tasks, resulting in an undesired performance drop in the final image quality. In this paper, we rethink the mixture problem from a holistic perspective and propose a new image processing pipeline: DN → SR → DM. Extensive experiments show that simply modifying the usual sequential solution by leveraging our proposed pipeline enhances the image quality by a large margin. We further adopt the proposed pipeline into an end-to-end network and present the Trinity Enhancement Network (TENet). Quantitative and qualitative experiments demonstrate the superiority of our TENet over the state of the art. Besides, we notice the literature lacks a fully color-sampled dataset. To this end, we contribute a new high-quality, fully color-sampled real-world dataset, namely PixelShift200. Our experiments show the benefit of the proposed PixelShift200 dataset for raw image processing.
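
The core claim is an ordering change, which is easy to express in code. A schematic comparison with trivial placeholder operators; any concrete DN/SR/DM implementations can be substituted:

```python
import numpy as np

# Hypothetical stand-ins for concrete DN / SR / DM operators.
def denoise(x):  return x                                      # DN placeholder
def upscale(x):  return x.repeat(2, axis=0).repeat(2, axis=1)  # 2x SR
def demosaic(x): return np.stack([x] * 3, axis=-1)             # DM placeholder

def usual_pipeline(raw):
    return upscale(denoise(demosaic(raw)))   # DM -> DN -> SR

def proposed_pipeline(raw):
    # DN -> SR -> DM: denoise first so later stages do not amplify noise,
    # and demosaic last so its artifacts are not amplified downstream.
    return demosaic(upscale(denoise(raw)))
```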

Proceedings Article•DOI•
18 Nov 2019
TL;DR: This work hand-crafts the optical structure of SPAD arrays to enable super-resolution designs, investigating optical coding for SPAD arrays, including improving the fill factor by assembling microstructures and directly modulating light with a diffractive optical element.
Abstract: Time-of-flight depth imaging and transient imaging are two imaging modalities that have recently received a lot of interest. Despite much research, existing hardware systems are limited either in terms of temporal resolution or are prohibitively expensive. Arrays of Single Photon Avalanche Diodes (SPADs) are promising candidates to fill this gap by providing higher temporal resolution at an affordable cost. Unfortunately, state-of-the-art SPAD arrays are only available in relatively small resolutions and with low fill factor. Furthermore, the low fill factor leads to more ill-posed problems when seeking to realize super-resolution imaging with a SPAD array. In this work, we hand-craft the optical structure of the SPAD array to enable super-resolution designs. We particularly investigate optical coding for SPAD arrays, including improving the fill factor by assembling microstructures and directly modulating light using a diffractive optical element. Part of this design work has been applied in our recent advances; here we show several applications in depth and transient imaging.

Patent•
29 Aug 2019
TL;DR: In this article, a reference image is captured in response to a plane wavefront incident on the mask, and a measurement image is captured in response to a distorted wavefront incident on the mask.
Abstract: A wavefront sensor includes a mask and a sensor utilized to capture the diffraction pattern generated by light incident on the mask. A reference image is captured in response to a plane wavefront incident on the mask, and a measurement image is captured in response to a distorted wavefront incident on the mask. The distorted wavefront is reconstructed based on differences between the reference image and the measurement image.
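
A sketch of the reconstruction principle: under a geometric-optics model, local displacements between the reference and measurement diffraction patterns are proportional to the local wavefront slopes, which can then be integrated. Block-wise cross-correlation below is our illustrative stand-in for whatever matching or solver an implementation would use:

```python
import numpy as np

def tile_shift(ref, meas):
    """Integer shift of one tile via the circular cross-correlation peak."""
    c = np.fft.ifft2(np.fft.fft2(meas) * np.conj(np.fft.fft2(ref))).real
    dy, dx = np.unravel_index(np.argmax(c), c.shape)
    h, w = ref.shape
    return ((dy + h // 2) % h - h // 2, (dx + w // 2) % w - w // 2)

def slope_field(reference, measurement, tile=32):
    """Per-tile shifts between the plane-wave reference pattern and the
    measurement pattern; each shift ~ local wavefront slope.  Integrating
    the slope field (e.g. via a Poisson solve) recovers the distorted
    wavefront up to a constant offset."""
    H, W = reference.shape
    slopes = np.zeros((H // tile, W // tile, 2))
    for i in range(H // tile):
        for j in range(W // tile):
            sl = np.s_[i * tile:(i + 1) * tile, j * tile:(j + 1) * tile]
            slopes[i, j] = tile_shift(reference[sl], measurement[sl])
    return slopes
```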

Posted Content•
TL;DR: In this paper, a randomized subsampling strategy is introduced during the learning of sparse codes to exploit the sparsity of the problem as well as the small spatial support of the filters.
Abstract: State-of-the-art methods for Convolutional Sparse Coding usually employ Fourier-domain solvers in order to speed up the convolution operators. However, this approach is not without shortcomings. For example, Fourier-domain representations implicitly assume circular boundary conditions and make it hard to fully exploit the sparsity of the problem as well as the small spatial support of the filters. In this work, we propose a novel stochastic spatial-domain solver, in which a randomized subsampling strategy is introduced during the learning of sparse codes. Afterwards, we extend the proposed strategy in conjunction with online learning, scaling the CSC model up to very large sample sizes. In both cases, we show experimentally that the proposed subsampling strategy, with a reasonable selection of the subsampling rate, outperforms the state-of-the-art frequency-domain solvers in terms of execution time without losing the learning quality. Finally, we evaluate the effectiveness of the over-complete dictionary learned from large-scale datasets, which demonstrates an improved sparse representation of natural images owing to the more abundant learned image features.
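
To make the spatial-domain/subsampling idea concrete, here is a toy 1D filter update in which only a random fraction of the training signals contributes to each gradient step. The subsampling granularity, names, and step sizes are our assumptions, simplified from the paper's scheme:

```python
import numpy as np
from scipy.signal import fftconvolve, correlate

def filter_grad(y, code, filt):
    """Spatial-domain gradient of 0.5 * ||code * filt - y||^2 w.r.t. the
    filter, where * is full linear convolution (so y has length
    len(code) + len(filt) - 1).  No circular boundary assumption, and
    zeros in the sparse code contribute nothing -- the structure that a
    spatial-domain solver can exploit."""
    residual = fftconvolve(code, filt, mode="full") - y
    return correlate(residual, code, mode="valid")  # length == len(filt)

def stochastic_filter_update(signals, codes, filt, rate=0.3, lr=1e-2):
    """One stochastic step over a random subsample of the training set."""
    batch = [i for i in range(len(signals)) if np.random.rand() < rate]
    grad = sum(filter_grad(signals[i], codes[i], filt) for i in batch)
    return filt - lr * grad / max(len(batch), 1)
```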