Author

Edward R. Dowski

Other affiliations: OmniVision Technologies
Bio: Edward R. Dowski is an academic researcher from the University of Colorado Boulder. The author has contributed to research on wavefront coding and wavefronts. The author has an h-index of 29 and has co-authored 68 publications receiving 4,328 citations. Previous affiliations of Edward R. Dowski include OmniVision Technologies.


Papers
Journal ArticleDOI
TL;DR: An optical-digital system that delivers near-diffraction-limited imaging performance with a large depth of field; the system is a standard incoherent optical system modified by a phase mask, with digital processing of the resulting intermediate image.
Abstract: We designed an optical-digital system that delivers near-diffraction-limited imaging performance with a large depth of field. This system is the standard incoherent optical system modified by a phase mask with digital processing of the resulting intermediate image. The phase mask alters or codes the received incoherent wave front in such a way that the point-spread function and the optical transfer function do not change appreciably as a function of misfocus. Focus-independent digital filtering of the intermediate image is used to produce a combined optical-digital system that has a nearly diffraction-limited point-spread function. This high-resolution extended depth of field is obtained at the expense of an increased dynamic range of the incoherent system. We use both the ambiguity function and the stationary-phase method to design these phase masks.
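The misfocus invariance that the abstract describes can be illustrated numerically. The sketch below is not the authors' code; the 1-D pupil, the defocus of 3 waves, and the cubic-mask strength `alpha` are illustrative choices. It compares how much the incoherent MTF changes under defocus with and without a cubic phase mask:

```python
import numpy as np

N = 512
x = np.linspace(-1.0, 1.0, N)   # normalized pupil coordinate
alpha = 20.0                    # cubic-mask strength (illustrative value)

def mtf(defocus_waves, cubic):
    """MTF magnitude for a 1-D pupil, optionally with a cubic phase mask."""
    phase = defocus_waves * x**2 + (alpha * x**3 if cubic else 0.0)
    pupil = np.exp(2j * np.pi * phase)
    # Incoherent OTF = autocorrelation of the pupil function
    otf = np.correlate(pupil, pupil, mode="full")
    return np.abs(otf) / np.abs(otf).max()

def defocus_sensitivity(cubic):
    """Mean MTF change between best focus and 3 waves of defocus."""
    return np.mean(np.abs(mtf(3.0, cubic) - mtf(0.0, cubic)))

print("standard pupil:", defocus_sensitivity(False))
print("cubic mask    :", defocus_sensitivity(True))
```

The cubic-mask MTF is lower overall but stays nearly unchanged under defocus, which is what allows a single, focus-independent digital filter to decode the intermediate image.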

1,344 citations

Journal ArticleDOI
TL;DR: In this article, the authors describe a new paradigm for designing hybrid imaging systems, termed wave-front coding, which allows the manufacturing tolerance to be reduced, focus-related aberrations to be controlled, and imaging systems to be constructed with only one optical element plus some signal processing.
Abstract: We describe a new paradigm for designing hybrid imaging systems. These imaging systems use optics with a special aspheric surface to code the image so that the point-spread function or the modulation transfer function has specified characteristics. Signal processing then decodes the detected image. The coding can be done so that the depth of focus is extended. This allows the manufacturing tolerance to be reduced, focus-related aberrations to be controlled, and imaging systems to be constructed with only one optical element plus some signal processing. OCIS codes: 080.3620, 110.0110, 110.2990, 110.0180, 110.4850, 180.0180.

1. Introduction and Background

The new paradigm that we describe for the design of imaging systems has been termed wave-front coding. These coded optical systems are designed by treating the coding optics and the signal processing as an integrated imaging system. The results are imaging systems with previously unobtainable imaging modalities; they require a modification of the optics to code the wave in the aperture stop or an image of the aperture stop. This coding produces an intermediate image formed by the optical portion of the system that gathers the image. Signal processing is then required to decode the intermediate image and produce a final image. The coding can be designed to make the imaging system invariant to certain parameters or to optimize the imaging system's sensitivity to those parameters. One example is the use of image coding to preserve misfocus and hence range or distance information. Another example is the use of different types of codes to make the image invariant to misfocus. These new focus-invariant imaging systems can have more than an order-of-magnitude increase in the depth of field. Our emphasis in this paper is on the use of the increased depth of focus to design new types of imaging systems.
An example of the new imaging systems that can be constructed is a single-element lens that has a small F#, a wide field of view, and diffraction-limited imaging. It also can have greatly relaxed assembly tolerances because of its invariance to focus-related aberrations. Coding of signals to optimally convey particular information is not new. In radar, for example, the transmitted pulses are coded to optimally provide information concerning a target's range. The appropriate signal processing to extract the range information is designed in conjunction with the transmitted signal. The integrated design of the optical image-gathering portion along with the signal processing normally is not done in the design of imaging systems. There are exceptions, such as tomography, coded-aperture imaging, and, sometimes, interferometric imaging. In 1984 a group that was investigating the limits of resolution pointed out the potential for increasing the performance of imaging systems by jointly designing the optics and the signal processing [1].

388 citations

Patent
16 Jan 2004
TL;DR: In this paper, a special-purpose optical mask was designed to cause the optical transfer function to remain essentially constant within some range from the in-focus position, resulting in an in-focus image over an increased depth of field.
Abstract: A system for increasing the depth of field and decreasing the wavelength sensitivity and the effects of misfocus-producing aberrations of the lens of an incoherent optical system incorporates a special purpose optical mask into the incoherent system. The optical mask has been designed to cause the optical transfer function to remain essentially constant within some range from the in-focus position. Signal processing of the resulting intermediate image undoes the optical transfer modifying effects of the mask, resulting in an in-focus image over an increased depth of field. Generally the mask is placed at a principal plane or the image of a principal plane of the optical system. Preferably, the mask modifies only phase and not amplitude of light. The mask may be used to increase the useful range of passive ranging systems.
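The decoding step the abstract refers to is, in essence, inverse filtering with the known, misfocus-insensitive OTF of the mask. A minimal sketch, assuming the coded OTF is known; the Wiener noise-to-signal ratio `nsr` is an illustrative tuning value, not a value from the patent:

```python
import numpy as np

def wiener_decode(coded_image, otf, nsr=1e-3):
    """Undo the known OTF of the phase mask with a Wiener-regularized
    inverse filter. nsr (noise-to-signal ratio) is an illustrative value."""
    W = np.conj(otf) / (np.abs(otf) ** 2 + nsr)
    return np.real(np.fft.ifft2(np.fft.fft2(coded_image) * W))
```

Because the mask keeps the OTF essentially constant over the misfocus range, one such fixed filter restores the image across the whole extended depth of field.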

326 citations

Patent
06 Jun 2001
TL;DR: In this paper, a wavefront coding mask applies phase variations that code the wavefront and cause the optical transfer function to remain essentially constant within some range away from the in-focus position.
Abstract: The present invention provides extended depth of field or focus to conventional Phase Contrast imaging systems. This is accomplished by including a Wavefront Coding mask in the system to apply phase variations to the wavefront transmitted by the Phase Object being imaged. The phase variations induced by the Wavefront Coding mask code the wavefront and cause the optical transfer function to remain essentially constant within some range away from the in-focus position. This provides a coded image at the detector. Post processing decodes this coded image, resulting in an in-focus image over an increased depth of field.

191 citations

Journal ArticleDOI
TL;DR: Experimental verification of an extended depth of focus (EDF) system with near-diffraction-limited performance is reported; a number of images from various optical systems using the phase plate demonstrate the success of this EDF system.
Abstract: We report experimental verification of an extended depth of focus (EDF) system with near-diffraction-limited performance capabilities. Dowski and Cathey [Appl. Opt. 34, 1859–1866 (1995)] described the theory of this system in detail. We can create an EDF system by modifying a standard incoherent optical system with a special cubic phase plate placed at the aperture stop. We briefly review the theory and present the first optical experimental verification of this EDF system. The phase plate codes the wave front, producing a modified optical transfer function. Once the image is transformed into digital form, a signal-processing step decodes the image and produces the final in-focus image. We have produced a number of images from various optical systems using the phase plate, thus demonstrating the success of this EDF system.

168 citations


Cited by
01 Jan 2005
TL;DR: The plenoptic camera described in this paper uses a microlens array between the sensor and the main lens; each microlens measures not just the total amount of light deposited at that location, but how much light arrives along each ray.
Abstract: This paper presents a camera that samples the 4D light field on its sensor in a single photographic exposure. This is achieved by inserting a microlens array between the sensor and main lens, creating a plenoptic camera. Each microlens measures not just the total amount of light deposited at that location, but how much light arrives along each ray. By re-sorting the measured rays of light to where they would have terminated in slightly different, synthetic cameras, we can compute sharp photographs focused at different depths. We show that a linear increase in the resolution of images under each microlens results in a linear increase in the sharpness of the refocused photographs. This property allows us to extend the depth of field of the camera without reducing the aperture, enabling shorter exposures and lower image noise. Especially in the macrophotography regime, we demonstrate that we can also compute synthetic photographs from a range of different viewpoints. These capabilities argue for a different strategy in designing photographic imaging systems. To the photographer, the plenoptic camera operates exactly like an ordinary hand-held camera. We have used our prototype to take hundreds of light field photographs, and we present examples of portraits, high-speed action and macro close-ups.
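For integer pixel disparities, the ray re-sorting described above reduces to a shift-and-add over the sub-aperture views of the light field. A minimal sketch, not the paper's code; the integer `slope` parameter standing in for the synthetic focal depth is an assumption of this illustration:

```python
import numpy as np

def refocus(lightfield, slope):
    """Synthetic refocus of a 4-D light field of shape (U, V, S, T):
    shift each sub-aperture view against its disparity, then average.
    slope is the integer pixel disparity per unit of view index."""
    U, V, S, T = lightfield.shape
    out = np.zeros((S, T))
    for u in range(U):
        for v in range(V):
            du = -slope * (u - U // 2)   # undo the parallax of this view
            dv = -slope * (v - V // 2)
            out += np.roll(lightfield[u, v], shift=(du, dv), axis=(0, 1))
    return out / (U * V)
```

A scene point at the chosen synthetic focal plane lands on the same output pixel in every view and adds coherently; points at other depths spread across pixels, producing the synthetic defocus blur.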

2,252 citations

Proceedings ArticleDOI
29 Jul 2007
TL;DR: A simple modification to a conventional camera is proposed: a patterned occluder inserted within the aperture of the camera lens, creating a coded aperture; a criterion for depth discriminability is introduced and used to design the preferred aperture pattern.
Abstract: A conventional camera captures blurred versions of scene information away from the plane of focus. Camera systems have been proposed that allow for recording all-focus images, or for extracting depth, but to record both simultaneously has required more extensive hardware and reduced spatial resolution. We propose a simple modification to a conventional camera that allows for the simultaneous recovery of both (a) high resolution image information and (b) depth information adequate for semi-automatic extraction of a layered depth representation of the image. Our modification is to insert a patterned occluder within the aperture of the camera lens, creating a coded aperture. We introduce a criterion for depth discriminability which we use to design the preferred aperture pattern. Using a statistical model of images, we can recover both depth information and an all-focus image from single photographs taken with the modified camera. A layered depth map is then extracted, requiring user-drawn strokes to clarify layer assignments in some cases. The resulting sharp image and layered depth map can be combined for various photographic applications, including automatic scene segmentation, post-exposure refocusing, or re-rendering of the scene from an alternate viewpoint.

1,489 citations

Journal ArticleDOI
TL;DR: In this article, a single high-resolution image of the scattered light, captured with a standard camera, encodes sufficient information to image through visually opaque layers and around corners with diffraction-limited resolution.
Abstract: Optical imaging through and inside complex samples is a difficult challenge with important applications in many fields. The fundamental problem is that inhomogeneous samples such as biological tissue randomly scatter and diffuse light, preventing the formation of diffraction-limited images. Despite many recent advances, no current method can perform non-invasive imaging in real time using diffused light. Here, we show that, owing to the 'memory effect' for speckle correlations, a single high-resolution image of the scattered light, captured with a standard camera, encodes sufficient information to image through visually opaque layers and around corners with diffraction-limited resolution. We experimentally demonstrate single-shot imaging through scattering media and around corners using spatially incoherent light and various samples, from white paint to dynamic biological samples. Our single-shot lensless technique is simple and requires neither wavefront shaping nor time-gated or interferometric detection, and is realized here using a camera phone. It has the potential to enable imaging in currently inaccessible scenarios. Diffraction-limited imaging in a variety of complex media is realized based on analysis of speckle correlations in light captured using a camera phone.

899 citations

PatentDOI
TL;DR: In this paper, a double-helix point spread function was used to resolve molecules beyond the optical diffraction limit in three dimensions; it can be used in conjunction with a microscope to provide dual-lobed images of a molecule.
Abstract: Embodiments of the present invention can resolve molecules beyond the optical diffraction limit in three dimensions. A double-helix point spread function can be used in conjunction with a microscope to provide dual-lobed images of a molecule. Based on the rotation of the dual-lobed image, the axial position of the molecule can be estimated or determined. In some embodiments, the angular rotation of the dual-lobed image can be determined using a centroid-fit calculation or by finding the midpoints of the centers of the two lobes. Regardless of the technique, the correspondence between the rotation and the axial position can be utilized. A double-helix point spread function can also be used to determine the lateral positions of molecules and hence their three-dimensional location.
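Once the two lobe centroids are known, the rotation-to-depth mapping described above is a one-line angle computation. A sketch with a hypothetical linear calibration slope `deg_per_um` (real systems calibrate the rotation against known z-steps, and the mapping is only approximately linear over a limited range):

```python
import math

def axial_position(lobe1, lobe2, deg_per_um=15.0):
    """Estimate axial position (microns) from the orientation of the line
    joining the two PSF lobes. deg_per_um is a hypothetical calibration."""
    dx = lobe2[0] - lobe1[0]
    dy = lobe2[1] - lobe1[1]
    angle_deg = math.degrees(math.atan2(dy, dx))  # lobe-axis orientation
    return angle_deg / deg_per_um

# lobes at (10, 10) and (20, 20) lie on a 45-degree axis
```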

837 citations

Journal Article
TL;DR: Methods for learning dictionaries that are appropriate for the representation of given classes of signals and multisensor data are described, and dimensionality reduction based on dictionary representation can be extended to address specific tasks such as data analysis or classification.
Abstract: We describe methods for learning dictionaries that are appropriate for the representation of given classes of signals and multisensor data. We further show that dimensionality reduction based on dictionary representation can be extended to address specific tasks such as data analysis or classification when the learning includes a class-separability criterion in the objective function. The benefits of dictionary learning clearly show that a proper understanding of causes underlying the sensed world is key to task-specific representation of relevant information in high-dimensional data sets.
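The dictionary-based representation the abstract builds on can be illustrated with the simplest sparse-coding step, greedy matching pursuit. This is a standard textbook algorithm, not the authors' specific learning method:

```python
import numpy as np

def matching_pursuit(D, x, k):
    """Represent signal x with at most k atoms from dictionary D
    (columns of D are unit-norm atoms). Returns the coefficient vector."""
    coef = np.zeros(D.shape[1])
    residual = np.array(x, dtype=float)
    for _ in range(k):
        scores = D.T @ residual           # correlation with every atom
        j = int(np.argmax(np.abs(scores)))
        coef[j] += scores[j]              # greedily take the best atom
        residual -= scores[j] * D[:, j]   # remove its contribution
    return coef
```

With `D` the identity this just picks the k largest entries of the signal; dictionary learning replaces `D` with atoms adapted to the signal class, which is what makes the resulting low-dimensional codes useful for analysis or classification.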

705 citations