Topic: Depth of field

About: Depth of field is a research topic. Over its lifetime, 3,306 publications have been published within this topic, receiving 58,290 citations. The topic is also known as DOF.


Papers
01 Jan 2005
TL;DR: The plenoptic camera described in this paper inserts a microlens array between the sensor and the main lens; each microlens measures not just the total amount of light deposited at its location, but how much light arrives along each ray.
Abstract: This paper presents a camera that samples the 4D light field on its sensor in a single photographic exposure. This is achieved by inserting a microlens array between the sensor and main lens, creating a plenoptic camera. Each microlens measures not just the total amount of light deposited at that location, but how much light arrives along each ray. By re-sorting the measured rays of light to where they would have terminated in slightly different, synthetic cameras, we can compute sharp photographs focused at different depths. We show that a linear increase in the resolution of images under each microlens results in a linear increase in the sharpness of the refocused photographs. This property allows us to extend the depth of field of the camera without reducing the aperture, enabling shorter exposures and lower image noise. Especially in the macrophotography regime, we demonstrate that we can also compute synthetic photographs from a range of different viewpoints. These capabilities argue for a different strategy in designing photographic imaging systems. To the photographer, the plenoptic camera operates exactly like an ordinary hand-held camera. We have used our prototype to take hundreds of light field photographs, and we present examples of portraits, high-speed action and macro close-ups.

2,252 citations
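
The refocusing described above reduces to a shift-and-add over the sub-aperture views of the light field. The sketch below is illustrative rather than the authors' code: it assumes the raw sensor data has already been decoded into a 4D array light_field[u, v, y, x] of sub-aperture images, and the function name refocus and the parameter alpha (relative position of the synthetic focal plane) are ours.

import numpy as np
from scipy.ndimage import shift

def refocus(light_field, alpha):
    # Synthetic refocusing by shift-and-add: each sub-aperture view is
    # translated in proportion to its offset on the main-lens aperture,
    # then the views are averaged. alpha = 1.0 reproduces the original
    # plane of focus; other values move the synthetic focal plane.
    U, V, H, W = light_field.shape
    cu, cv = (U - 1) / 2.0, (V - 1) / 2.0
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            dy = (u - cu) * (1.0 - 1.0 / alpha)
            dx = (v - cv) * (1.0 - 1.0 / alpha)
            out += shift(light_field[u, v], (dy, dx), order=1, mode='nearest')
    return out / (U * V)

Sweeping alpha over a range of values yields an entire focal stack from the single exposure, which is what lets the camera trade aperture size against depth of field after the fact.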

Proceedings ArticleDOI
29 Jul 2007
TL;DR: A simple modification to a conventional camera is proposed: inserting a patterned occluder within the aperture of the camera lens creates a coded aperture. A criterion for depth discriminability is introduced and used to design the preferred aperture pattern.
Abstract: A conventional camera captures blurred versions of scene information away from the plane of focus. Camera systems have been proposed that allow for recording all-focus images, or for extracting depth, but to record both simultaneously has required more extensive hardware and reduced spatial resolution. We propose a simple modification to a conventional camera that allows for the simultaneous recovery of both (a) high resolution image information and (b) depth information adequate for semi-automatic extraction of a layered depth representation of the image. Our modification is to insert a patterned occluder within the aperture of the camera lens, creating a coded aperture. We introduce a criterion for depth discriminability which we use to design the preferred aperture pattern. Using a statistical model of images, we can recover both depth information and an all-focus image from single photographs taken with the modified camera. A layered depth map is then extracted, requiring user-drawn strokes to clarify layer assignments in some cases. The resulting sharp image and layered depth map can be combined for various photographic applications, including automatic scene segmentation, post-exposure refocusing, or re-rendering of the scene from an alternate viewpoint.

1,489 citations
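
The depth cue exploited here is that the coded aperture stamps a depth-dependent and, by design, highly discriminable blur pattern onto defocused regions. The following is a minimal sketch of the resulting per-patch depth test, assuming a dictionary psfs_by_depth that maps each candidate depth to the aperture pattern scaled to the blur size at that depth; the Wiener filter and the gradient-sparsity score are simplified stand-ins for the paper's sparse-prior deconvolution, and all names here (wiener_deconv, score_depth, estimate_depth) are ours.

import numpy as np

def wiener_deconv(img, psf, snr=1e-2):
    # Frequency-domain Wiener filter: a simplified stand-in for the
    # paper's sparse-prior deblurring.
    H = np.fft.fft2(psf, s=img.shape)
    G = np.fft.fft2(img)
    return np.real(np.fft.ifft2(np.conj(H) * G / (np.abs(H) ** 2 + snr)))

def score_depth(img, psf, grad_weight=0.1):
    # Deconvolve with the candidate PSF, re-blur, and score: the data
    # term checks that the PSF explains the observed blur, while the
    # gradient term penalizes the ringing a wrong-scale PSF produces.
    latent = wiener_deconv(img, psf)
    reblur = np.real(np.fft.ifft2(np.fft.fft2(latent) *
                                  np.fft.fft2(psf, s=img.shape)))
    data = np.mean((reblur - img) ** 2)
    prior = (np.abs(np.diff(latent, axis=0)).mean() +
             np.abs(np.diff(latent, axis=1)).mean())
    return data + grad_weight * prior

def estimate_depth(patch, psfs_by_depth):
    # Per-patch depth: the candidate whose scaled coded PSF best
    # explains the local blur.
    return min(psfs_by_depth, key=lambda d: score_depth(patch, psfs_by_depth[d]))

Running this test over overlapping patches gives a raw depth map, which the paper then regularizes into layers, with user-drawn strokes resolving the remaining ambiguous assignments.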

Journal ArticleDOI
TL;DR: An optical-digital system that delivers near-diffraction-limited imaging performance with a large depth of field: a standard incoherent optical system modified by a phase mask, followed by digital processing of the resulting intermediate image.
Abstract: We designed an optical-digital system that delivers near-diffraction-limited imaging performance with a large depth of field. This system is the standard incoherent optical system modified by a phase mask, with digital processing of the resulting intermediate image. The phase mask alters or codes the received incoherent wave front in such a way that the point-spread function and the optical transfer function do not change appreciably as a function of misfocus. Focus-independent digital filtering of the intermediate image is used to produce a combined optical-digital system that has a nearly diffraction-limited point-spread function. This high-resolution extended depth of field is obtained at the expense of an increased dynamic range of the incoherent system. We use both the ambiguity function and the stationary-phase method to design these phase masks.

1,344 citations
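
The abstract does not spell the mask out, but the best-known solution to emerge from this ambiguity-function analysis is the rectangularly separable cubic phase profile; in normalized pupil coordinates:

% Cubic phase mask in normalized pupil coordinates, alpha large:
\[
  P(x, y) = e^{\,j \alpha \left( x^{3} + y^{3} \right)},
  \qquad |x| \le 1, \;\; |y| \le 1 .
\]

For sufficiently large alpha the optical transfer function becomes approximately invariant to misfocus, so a single fixed digital filter, designed against the in-focus OTF, restores a near-diffraction-limited image over the whole extended depth range; the price is the increased dynamic-range demand noted in the abstract.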

Book
03 Jan 1992
TL;DR: In this work, the authors examined a novel source of depth information: focal gradients resulting from the limited depth of field inherent in most optical systems, and proved that this source of information can be used to make reliable depth maps of useful accuracy with relatively minimal computation.
Abstract: One of the major unsolved problems in designing an autonomous agent [robot] that must function in a complex, moving environment is obtaining reliable, real-time depth information, preferably without the limitations of active scanners. Stereo remains computationally intensive and prone to severe errors, the use of motion information is still quite experimental, and autofocus schemes can measure depth at only one point at a time. We examine a novel source of depth information: focal gradients resulting from the limited depth of field inherent in most optical systems. We prove that this source of information can be used to make reliable depth maps of useful accuracy with relatively minimal computation. Experiments with realistic imagery show that measurement of these optical gradients can potentially provide depth information roughly comparable to stereo disparity or motion parallax, while avoiding image-to-image matching problems. A potentially real-time version of this algorithm is described.

1,014 citations

Journal ArticleDOI
TL;DR: In this paper, the authors examined a novel source of depth information: focal gradients resulting from the limited depth of field inherent in most optical systems, which can be used to make reliable depth maps of useful accuracy with relatively minimal computation.
Abstract: This paper examines a novel source of depth information: focal gradients resulting from the limited depth of field inherent in most optical systems. Previously, autofocus schemes have used depth of field to measure depth by searching for the lens setting that gives the best focus, repeating this search separately for each image point. This search is unnecessary, for there is a smooth gradient of focus as a function of depth. By measuring the amount of defocus, therefore, we can estimate depth simultaneously at all points, using only one or two images. It is proved that this source of information can be used to make reliable depth maps of useful accuracy with relatively minimal computation. Experiments with realistic imagery show that measurement of these optical gradients can provide depth information roughly comparable to stereo disparity or motion parallax, while avoiding image-to-image matching problems.

1,008 citations
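
The geometry behind the focal-gradient cue in the two entries above is the standard thin-lens blur relation (textbook optics rather than anything specific to these papers): a point at depth u in front of a lens of focal length f and aperture diameter D, imaged on a sensor at distance v, spreads into a blur circle of diameter

\[
  c \;=\; D \, v \left| \frac{1}{f} - \frac{1}{u} - \frac{1}{v} \right| .
\]

Measuring c locally (for instance, from the spread of an edge) and inverting this relation yields u at every image point at once, up to a two-fold ambiguity between points in front of and behind the plane of focus; a second image at a different aperture or focus setting resolves the sign, which is why the abstract speaks of using only one or two images.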


Network Information
Related Topics (5)
Image processing: 229.9K papers, 3.5M citations, 82% related
Laser: 353.1K papers, 4.3M citations, 81% related
Pixel: 136.5K papers, 1.5M citations, 77% related
Optical fiber: 167K papers, 1.8M citations, 77% related
Segmentation: 63.2K papers, 1.2M citations, 76% related
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2023    50
2022    107
2021    135
2020    185
2019    205
2018    230