About: Bokeh is a research topic. Over its lifetime, 179 publications have been published within this topic, receiving 1,155 citations.
[Chart: papers published on a yearly basis]
27 Jul 2009
TL;DR: A new camera-based interaction solution in which an ordinary camera can detect small optical tags from a relatively large distance, using intelligent binary coding to estimate the relative distance and angle to the camera; the approach shows potential for applications in augmented reality and motion capture.
Abstract: We show a new camera-based interaction solution where an ordinary camera can detect small optical tags from a relatively large distance. Current optical tags, such as barcodes, must be read within a short range, and the codes occupy valuable physical space on products. We present a new low-cost optical design so that the tags can be shrunk to 3mm visible diameter, and unmodified ordinary cameras several meters away can be set up to decode the identity plus the relative distance and angle. The design exploits the bokeh effect of ordinary camera lenses, which maps rays exiting from an out-of-focus scene point into a disk-like blur on the camera sensor. This bokeh-code, or Bokode, is a barcode design with a simple lenslet over the pattern. We show that a code with 15μm features can be read using an off-the-shelf camera from distances of up to 2 meters. We use intelligent binary coding to estimate the relative distance and angle to the camera, and show potential for applications in augmented reality and motion capture. We analyze the constraints and performance of the optical system, and discuss several plausible application scenarios.
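The bokeh effect the Bokode exploits follows directly from the thin-lens model: a point outside the focal plane spreads into a disk whose diameter grows with the aperture and with the defocus. A minimal sketch of that circle-of-confusion relation (this is the textbook thin-lens formula, not the paper's full optical analysis; the function name and units are illustrative):

```python
def coc_diameter(f, N, focus_dist, obj_dist):
    """Thin-lens circle-of-confusion (bokeh disk) diameter on the sensor.

    f          focal length in meters
    N          f-number; aperture diameter A = f / N
    focus_dist distance the lens is focused at (m)
    obj_dist   distance of the (possibly out-of-focus) point (m)
    """
    A = f / N
    # image-side conjugates from the thin-lens equation 1/v = 1/f - 1/d
    v_focus = f * focus_dist / (focus_dist - f)  # sensor plane position
    v_obj = f * obj_dist / (obj_dist - f)        # where the point converges
    # similar triangles between the aperture and the defocused cone
    return A * abs(v_obj - v_focus) / v_obj
```

For example, a 50mm f/1.8 lens focused at 2m blurs a point at 0.5m into a disk of roughly 2mm on the sensor, which is why a small out-of-focus source can carry a spatially readable pattern.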
TL;DR: This letter proposes a light field refocusing method that can selectively refocus images, with the focused region superresolved and the bokeh esthetically rendered, and that enables post-adjustment of depth of field.
Abstract: Camera arrays provide spatial and angular information within a single snapshot. With refocusing methods, focal planes can be altered after exposure. In this letter, we propose a light field refocusing method to improve the imaging quality of camera arrays. In our method, the disparity is first estimated. Then, the unfocused region (bokeh) is rendered using a depth-based anisotropic filter. Finally, the refocused image is produced by a reconstruction-based superresolution approach in which the bokeh image is used as a regularization term. Our method can selectively refocus images, with the focused region superresolved and the bokeh esthetically rendered. Our method also enables post-adjustment of depth of field. We conduct experiments on both public and self-developed datasets. Our method achieves superior visual performance with acceptable computational cost compared to other state-of-the-art methods.
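The after-exposure refocusing that camera arrays make possible can be illustrated with the classic shift-and-add synthetic-aperture scheme — a far simpler baseline than the paper's superresolution reconstruction. A hedged sketch (the function name, the integer-pixel shift approximation, and the wraparound handling are ours):

```python
import numpy as np

def shift_and_add_refocus(views, positions, disparity):
    """Refocus a camera-array light field by shift-and-add.

    views      list of HxW grayscale images (np.ndarray)
    positions  list of (dx, dy) camera offsets in grid units
    disparity  pixel shift per unit of camera offset; choosing this
               value selects which depth plane comes into focus
    """
    acc = np.zeros_like(views[0], dtype=np.float64)
    for img, (dx, dy) in zip(views, positions):
        # integer-pixel shift toward the chosen focal plane;
        # np.roll wraps at borders, acceptable for a sketch
        sx, sy = int(round(dx * disparity)), int(round(dy * disparity))
        acc += np.roll(np.roll(img, sy, axis=0), sx, axis=1)
    return acc / len(views)
```

Points on the selected plane align across views and stay sharp; points off it are averaged at mismatched positions and blur out, which is the bokeh the paper then renders more carefully with its depth-based anisotropic filter.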
14 Jun 2020
TL;DR: This paper presents a large-scale bokeh dataset consisting of 5K shallow/wide depth-of-field image pairs captured using the Canon 7D DSLR with 50mm f/1.8 lenses, and proposes to learn a realistic shallow-focus technique directly from the photos produced by DSLR cameras.
Abstract: Bokeh is an important artistic effect used to highlight the main object of interest in a photo by blurring all out-of-focus areas. While DSLR and system camera lenses can render this effect naturally, mobile cameras are unable to produce shallow depth-of-field photos due to the very small aperture diameter of their optics. Unlike current solutions that simulate bokeh by applying Gaussian blur to the image background, in this paper we propose to learn a realistic shallow-focus technique directly from the photos produced by DSLR cameras. For this, we present a large-scale bokeh dataset consisting of 5K shallow/wide depth-of-field image pairs captured using the Canon 7D DSLR with 50mm f/1.8 lenses. We use these images to train a deep learning model to reproduce a natural bokeh effect based on a single narrow-aperture image. The experimental results show that the proposed approach is able to render a plausible non-uniform bokeh even in the case of complex input data with multiple objects. The dataset, pre-trained models, and code used in this paper are available on the project website: https://people.ee.ethz.ch/~ihnatova/pynet-bokeh.html.
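The "blur the background" baseline this paper improves on can be sketched in a few lines: segment pixels by distance from the focal plane, blur the rest, and composite. A hypothetical minimal version (a box blur stands in for a real defocus kernel; all names and parameters are ours):

```python
import numpy as np

def box_blur(img, radius):
    """Simple separable-free box blur, a stand-in for a defocus kernel."""
    if radius <= 0:
        return img.copy()
    k = 2 * radius + 1
    pad = np.pad(img, radius, mode="edge")
    out = np.zeros(img.shape, dtype=np.float64)
    # average over the (2r+1)x(2r+1) neighbourhood of each pixel
    for dy in range(k):
        for dx in range(k):
            out += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def naive_bokeh(image, depth, focus_depth, tol=0.5, radius=2):
    """Naive synthetic bokeh: keep pixels near the focal plane sharp,
    replace everything else with a uniformly blurred version."""
    blurred = box_blur(image, radius)
    in_focus = np.abs(depth - focus_depth) <= tol
    return np.where(in_focus, image, blurred)
```

The uniform, depth-agnostic blur is exactly why such baselines look artificial — the paper's learned model instead reproduces the spatially varying bokeh of a real 50mm f/1.8 lens.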
10 Apr 2014
TL;DR: In this paper, an aberrometer is used to measure the refractive condition of any eye; an eyewear measurement is generated by analyzing differences between the distorted captured patterns and the undistorted SLM patterns.
Abstract: In exemplary implementations of this invention, an aberrometer is used to measure the refractive condition of any eye. An artificial light source emits light that travels to a light sensor. Along the way, the light enters and then exits the eye, passes through or is reflected from one or more spatial light modulators (SLMs), and passes through an objective lens system. The SLMs modify the bokeh effect of the imaging system (which is visible only when the system is out of focus), creating a blurred version of the SLM patterns. The light sensor then captures one or more out-of-focus images. If there are refractive aberrations in the eye, these aberrations cause the SLM patterns captured in the images to be distorted. By analyzing differences between the distorted captured patterns and the undistorted SLM patterns, the refractive aberrations of the eye can be computed and an eyewear measurement generated.
TL;DR: This paper proposes a novel method for reconstructing all-in-focus images through shifted pinholes on the lens, based on 3D frequency analysis of multi-focus images using simple linear filters, which enables robust scene refocusing with arbitrary bokeh.
Abstract: Scene refocusing beyond extended depth of field, which lets users observe objects effectively, is a goal of researchers in computational photography, microscopic imaging, and related fields. Ordinary all-in-focus image reconstruction from a sequence of multi-focus images achieves extended depth of field, where the reconstructed image corresponds to capture through a pinhole at the center of the lens. In this paper, we propose a novel method for reconstructing all-in-focus images through shifted pinholes on the lens, based on 3D frequency analysis of multi-focus images. Such shifted-pinhole images are obtained by a linear combination of multi-focus images with scene-independent 2D filters in the frequency domain. The proposed method enables us to efficiently synthesize a dense 4D light field on the lens plane for image-based rendering, in particular robust scene refocusing with arbitrary bokeh. Our method, using simple linear filters, not only reconstructs all-in-focus images even for shifted pinholes more robustly than conventional methods that depend on scene/focus estimation, but also achieves scene refocusing without the resolution limitations of recent approaches that use special devices such as lens arrays in computational photography.
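For contrast with the proposed scene-independent frequency-domain filtering, the conventional all-in-focus baseline that depends on per-pixel focus estimation can be sketched as sharpest-slice selection over the multi-focus stack. A simplified illustration (the Laplacian-energy focus measure and all names are ours, not the paper's):

```python
import numpy as np

def laplacian_energy(img):
    """Local sharpness measure: squared response of a 4-neighbour Laplacian."""
    lap = (-4 * img
           + np.roll(img, 1, axis=0) + np.roll(img, -1, axis=0)
           + np.roll(img, 1, axis=1) + np.roll(img, -1, axis=1))
    return lap ** 2

def all_in_focus(stack):
    """Fuse a multi-focus stack by picking, per pixel, the slice
    with the highest local sharpness (conventional baseline)."""
    stack = np.asarray(stack, dtype=np.float64)
    sharpness = np.stack([laplacian_energy(s) for s in stack])
    best = np.argmax(sharpness, axis=0)   # HxW map of winning slice indices
    rows, cols = np.indices(best.shape)
    return stack[best, rows, cols]        # gather winning pixels
```

Because the per-pixel `argmax` is a hard, scene-dependent decision, it fails where the focus measure is ambiguous — which is the robustness gap the paper's linear, scene-independent filters are designed to close.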