
Showing papers on "Light field published in 2013"


Journal ArticleDOI
TL;DR: In this paper, state-of-the-art theory and experiment of the motion of cold and ultracold atoms coupled to the radiation field within a high-finesse optical resonator in the dispersive regime of the atom-field interaction with small internal excitation are reviewed.
Abstract: We review state-of-the-art theory and experiment of the motion of cold and ultracold atoms coupled to the radiation field within a high-finesse optical resonator in the dispersive regime of the atom-field interaction with small internal excitation. The optical dipole force on the atoms together with the back-action of atomic motion onto the light field gives rise to a complex nonlinear coupled dynamics. As the resonator constitutes an open driven and damped system, the dynamics is non-conservative and in general enables cooling and confining the motion of polarizable particles. In addition, the emitted cavity field allows for real-time monitoring of the particle's position with minimal perturbation up to sub-wavelength accuracy. For many-body systems, the resonator field mediates controllable long-range atom-atom interactions, which set the stage for collective phenomena. Besides correlated motion of distant particles, one finds critical behavior and non-equilibrium phase transitions between states of different atomic order in conjunction with superradiant light scattering. Quantum degenerate gases inside optical resonators can be used to emulate opto-mechanics as well as novel quantum phases like supersolids and spin glasses. Non-equilibrium quantum phase transitions, as predicted by e.g. the Dicke Hamiltonian, can be controlled and explored in real time via monitoring the cavity field. In combination with optical lattices, the cavity field can be utilized for non-destructive probing of Hubbard physics and tailoring long-range interactions for ultracold quantum systems.

727 citations


Proceedings ArticleDOI
23 Jun 2013
TL;DR: This work derives a novel physically based 4D intrinsic matrix relating each recorded pixel to its corresponding ray in 3D space as part of a decoding, calibration and rectification procedure for lenselet-based plenoptic cameras appropriate for a range of computer vision applications.
Abstract: Plenoptic cameras are gaining attention for their unique light gathering and post-capture processing capabilities. We describe a decoding, calibration and rectification procedure for lenselet-based plenoptic cameras appropriate for a range of computer vision applications. We derive a novel physically based 4D intrinsic matrix relating each recorded pixel to its corresponding ray in 3D space. We further propose a radial distortion model and a practical objective function based on ray reprojection. Our 15-parameter camera model is of much lower dimensionality than camera array models, and more closely represents the physics of lenselet-based cameras. Results include calibration of a commercially available camera using three calibration grid sizes over five datasets. Typical RMS ray reprojection errors are 0.0628, 0.105 and 0.363 mm for 3.61, 7.22 and 35.1 mm calibration grids, respectively. Rectification examples include calibration targets and real-world imagery.
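
For orientation, the decoding step implied by such a 4D intrinsic model can be sketched as a single homogeneous matrix multiply. The sketch below is a minimal illustration assuming a 5x5 matrix H that maps lenselet/pixel indices (i, j, k, l) to a two-plane ray parameterization (s, t, u, v); the function name and matrix are hypothetical, and the paper's full 15-parameter model additionally includes radial distortion.

import numpy as np

def pixel_to_ray(H, i, j, k, l):
    # H: assumed 5x5 homogeneous intrinsic matrix of a lenselet-based camera.
    # (i, j) index the lenselet, (k, l) the pixel beneath it; the result is a
    # ray in a two-plane (s, t, u, v) parameterization.
    n = np.array([i, j, k, l, 1.0])
    r = H @ n
    return r[:4] / r[4]

A ray-reprojection objective of the kind described in the abstract would then compare rays decoded this way against rays implied by observed calibration-grid features.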

549 citations


Journal ArticleDOI
TL;DR: An optical model for light field microscopy based on wave optics, instead of previously reported ray optics models, is presented, along with a 3-D deconvolution method that is able to reconstruct volumes at higher spatial resolution, and with better optical sectioning, than previously reported.
Abstract: Light field microscopy is a new technique for high-speed volumetric imaging of weakly scattering or fluorescent specimens. It employs an array of microlenses to trade off spatial resolution against angular resolution, thereby allowing a 4-D light field to be captured using a single photographic exposure without the need for scanning. The recorded light field can then be used to computationally reconstruct a full volume. In this paper, we present an optical model for light field microscopy based on wave optics, instead of previously reported ray optics models. We also present a 3-D deconvolution method for light field microscopy that is able to reconstruct volumes at higher spatial resolution, and with better optical sectioning, than previously reported. To accomplish this, we take advantage of the dense spatio-angular sampling provided by a microlens array at axial positions away from the native object plane. This dense sampling permits us to decode aliasing present in the light field to reconstruct high-frequency information. We formulate our method as an inverse problem for reconstructing the 3-D volume, which we solve using a GPU-accelerated iterative algorithm. Theoretical limits on the depth-dependent lateral resolution of the reconstructed volumes are derived. We show that these limits are in good agreement with experimental results on a standard USAF 1951 resolution target. Finally, we present 3-D reconstructions of pollen grains that demonstrate the improvements in fidelity made possible by our method.
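
The 3-D reconstruction described here is an inverse problem solved iteratively; as a rough, generic illustration (not the paper's algorithm), a Richardson-Lucy style multiplicative update with placeholder forward/adjoint operators looks like the following, where `forward` would encode the wave-optics light field point spread function.

import numpy as np

def iterative_deconvolution(light_field, forward, adjoint, n_iter=30, eps=1e-9):
    # forward(volume) -> simulated light field; adjoint(lf) -> back-projection.
    # Both operators are placeholders for the microscope's measurement model.
    volume = np.ones_like(adjoint(light_field))   # flat initial estimate
    for _ in range(n_iter):
        predicted = forward(volume)
        ratio = light_field / (predicted + eps)   # data-fidelity ratio
        volume *= adjoint(ratio)                  # multiplicative update
    return volume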

472 citations


Journal ArticleDOI
03 Jan 2013-Nature
TL;DR: The ultrafast reversibility of the effects implies that the physical properties of a dielectric can be controlled with the electric field of light, offering the potential for petahertz-bandwidth signal manipulation.
Abstract: The ultrafast reversibility of changes to the electronic structure and electric polarizability of a dielectric with the electric field of a laser pulse, demonstrated here, offers the potential for petahertz-bandwidth optical signal manipulation. Two studies published in this issue highlight the potential for ultrafast signal manipulation in dielectrics using optical fields. When it comes to electrical signal processing, semiconductors have become the materials of choice. However, insulators such as dielectrics could be attractive alternatives: they have a fast response in principle, but usually have extremely low conductivity at low electric fields and break down in large fields. The electronic properties of dielectrics can be controlled with few-cycle laser pulses that permit damage-free exposure of dielectrics to high electric fields. Agustin Schiffrin et al. demonstrate that strong optical laser fields with controlled few-cycle waveforms can reversibly transform a dielectric insulator into a conductor within the optical period (within one femtosecond). Martin Schultze et al. address the crucial issue of ultrafast reversibility, demonstrating that the dielectric can be repeatedly switched 'on' and 'off' with light fields, without degradation. The control of the electric and optical properties of semiconductors with microwave fields forms the basis of modern electronics, information processing and optical communications. The extension of such control to optical frequencies calls for wideband materials such as dielectrics, which require strong electric fields to alter their physical properties [1,2,3,4,5]. Few-cycle laser pulses permit damage-free exposure of dielectrics to electric fields of several volts per angstrom [6] and significant modifications in their electronic system [6,7,8,9,10,11,12,13]. Fields of such strength and temporal confinement can turn a dielectric from an insulating state to a conducting state within the optical period [14]. However, extending electric signal control and processing to light frequencies depends on the feasibility of reversing these effects approximately as fast as they can be induced. Here we study the underlying electron processes with sub-femtosecond solid-state spectroscopy, which reveals the feasibility of manipulating the electronic structure and electric polarizability of a dielectric reversibly with the electric field of light. We irradiate a dielectric (fused silica) with a waveform-controlled near-infrared few-cycle light field of several volts per angstrom and probe changes in extreme-ultraviolet absorptivity and near-infrared reflectivity on a timescale of approximately a hundred attoseconds to a few femtoseconds. The field-induced changes follow, in a highly nonlinear fashion, the turn-on and turn-off behaviour of the driving field, in agreement with the predictions of a quantum mechanical model. The ultrafast reversibility of the effects implies that the physical properties of a dielectric can be controlled with the electric field of light, offering the potential for petahertz-bandwidth signal manipulation.

459 citations


Journal ArticleDOI
21 Jul 2013
TL;DR: A compressive light field camera architecture that allows for higher-resolution light fields to be recovered than previously possible from a single image, and a variety of other applications for light field atoms and sparse coding, including 4D light field compression and denoising are demonstrated.
Abstract: Light field photography has gained a significant research interest in the last two decades; today, commercial light field cameras are widely available. Nevertheless, most existing acquisition approaches either multiplex a low-resolution light field into a single 2D sensor image or require multiple photographs to be taken for acquiring a high-resolution light field. We propose a compressive light field camera architecture that allows for higher-resolution light fields to be recovered than previously possible from a single image. The proposed architecture comprises three key components: light field atoms as a sparse representation of natural light fields, an optical design that allows for capturing optimized 2D light field projections, and robust sparse reconstruction methods to recover a 4D light field from a single coded 2D projection. In addition, we demonstrate a variety of other applications for light field atoms and sparse coding, including 4D light field compression and denoising.
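
The reconstruction stage rests on standard sparse coding: the coded 2D measurement is modeled as a projection of a light field that is sparse in a learned dictionary of atoms. Below is a minimal sketch of that generic step, with hypothetical matrices Phi (optical projection) and D (dictionary of light field atoms) and an off-the-shelf orthogonal matching pursuit solver standing in for the paper's own robust reconstruction method.

import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def recover_patch(y, Phi, D, sparsity=8):
    # y: vectorized coded 2D measurements for one patch
    # Phi: optical projection matrix (assumed calibrated); D: light field atoms
    # Solve y ~= Phi @ D @ alpha with sparse alpha, return the light field patch.
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=sparsity)
    omp.fit(Phi @ D, y)
    return D @ omp.coef_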

376 citations


Proceedings ArticleDOI
Douglas Lanman, David Luebke
21 Jul 2013
TL;DR: A light-field-based approach to near-eye display that allows for thin, lightweight head-mounted displays capable of depicting accurate accommodation, convergence, and binocular disparity depth cues, and a GPU-accelerated stereoscopic light field renderer is proposed.
Abstract: We propose a light-field-based approach to near-eye display that allows for thin, lightweight head-mounted displays capable of depicting accurate accommodation, convergence, and binocular disparity depth cues. Our near-eye light field displays depict sharp images from out-of-focus display elements by synthesizing light fields corresponding to virtual scenes located within the viewer's natural accommodation range. While sharing similarities with existing integral imaging displays and microlens-based light field cameras, we optimize performance in the context of near-eye viewing. Near-eye light field displays support continuous accommodation of the eye throughout a finite depth of field; as a result, binocular configurations provide a means to address the accommodation-convergence conflict occurring with existing stereoscopic displays. We construct a binocular prototype and a GPU-accelerated stereoscopic light field renderer.

346 citations


Journal ArticleDOI
TL;DR: In this paper, two promising adjacent approaches tackle fundamental limitations by utilizing non-optical forces which are, however, induced by optical light fields, namely dielectrophoretic and photophoretic forces.
Abstract: Optical tweezers, a simple and robust implementation of optical micromanipulation technologies, have become a standard tool in biological, medical and physics research laboratories. Recently, with the utilization of holographic beam shaping techniques, more sophisticated trapping configurations have been realized to overcome current challenges in applications. Holographically generated higher-order light modes, for example, can induce highly structured and ordered three-dimensional optical potential landscapes with promising applications in optically guided assembly, transfer of orbital angular momentum, or acceleration of particles along defined trajectories. The non-diffracting property of particular light modes enables optical manipulation in multiple planes or the creation of axially extended particle structures. Alongside these concepts, which rely on direct interaction of the light field with particles, two promising adjacent approaches tackle fundamental limitations by utilizing non-optical forces which are, however, induced by optical light fields. Optoelectronic tweezers take advantage of dielectrophoretic forces for adaptive and flexible, massively parallel trapping. Photophoretic trapping makes use of thermal forces and by this means is perfectly suited for trapping absorbing particles. Hence the possibility to tailor light fields holographically, combined with the complementary dielectrophoretic and photophoretic trapping, provides a holistic approach to the majority of optical micromanipulation scenarios.

338 citations


Patent
11 Jun 2013
TL;DR: A two-dimensional array of linear wave guides includes a plurality of 2D planar wave guide assemblies, columns, sets or layers which each produce a respective depth plane for a simulated 4D light field as discussed by the authors.
Abstract: A two-dimensional array of linear wave guides includes a plurality of 2D planar wave guide assemblies, columns, sets or layers which each produce a respective depth plane for a simulated 4D light field. Linear wave guides may have a rectangular cylindrical shape, and may be stacked in rows and columns. Each linear wave guide is at least partially internally reflective, for example via at least one opposed pair of at least partially reflective planar side walls, to propagate light along a length of the wave guide. Curved micro-reflectors may reflect some modes of light while passing others. The side walls or a face may reflect some modes of light while passing others. The curved micro-reflectors of any given wave guide each contribute to a spherical wave front at a defined radial distance, the various layers producing image planes at respective radial distances.

283 citations


Journal ArticleDOI
TL;DR: A linear relationship is observed between the rotation speed and the orbital angular momentum content of the beam, and particle dynamics in "perfect" vortex beams with integer or fractional topological charges are investigated.
Abstract: We analyze microparticle dynamics within a "perfect" vortex beam. In contrast to other vortex fields, for any given integer value of the topological charge, a "perfect" vortex beam has the same annular intensity profile with fixed radius of peak intensity. For a given topological charge, the field possesses a well-defined orbital angular momentum density at each point in space, invariant with respect to azimuthal position. We experimentally create a perfect vortex and correct the field in situ, to trap and set in motion trapped microscopic particles. For a given topological charge, a single trapped particle exhibits the same local angular velocity moving in such a field independent of its azimuthal position. We also investigate particle dynamics in "perfect" vortex beams of fractional topological charge. This light field may be applied for novel studies in optical trapping of particles, atoms, and quantum gases.
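
For context, the idealized "perfect" vortex referred to here is often written as an infinitely thin ring whose radius is independent of the topological charge (a textbook idealization rather than a formula from this paper):

$$ E(\rho,\varphi) \;\propto\; \delta(\rho-\rho_0)\,e^{i\ell\varphi}, $$

so that changing the integer (or fractional) charge $\ell$ alters only the azimuthal phase winding, and hence the orbital angular momentum density, while the annular intensity profile at radius $\rho_0$ stays fixed.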

245 citations


Journal ArticleDOI
TL;DR: In this article, a review of the evolution of the self-focusing phenomenon in light beams is presented, and the current status of this rapidly growing area of nonlinear optics and laser physics is discussed.
Abstract: 2012 marked the 50th anniversary of the first published prediction of the self-focusing phenomenon in light beams. The recent revived interest in the subject is due to advances in high-power femtosecond laser technology and due to the possibility they provided of creating extended filaments of high light field intensity in gases and condensed media. This review shows in retrospect how our understanding of the self-action of light evolved from the self-focusing of laser beams in the 1960s to the filamentation of femtosecond laser pulses at present. We also describe the current status of this rapidly growing area of nonlinear optics and laser physics. Finally, we discuss, in general terms, what the phenomena of laser beam self-focusing and laser pulse filamentation have in common and how they differ.

145 citations


Proceedings ArticleDOI
01 Dec 2013
TL;DR: This paper presents a simple but effective algorithm to first map bilinear subspaces to line constraints and then apply Constrained Delaunay Triangulation (CDT), and develops a novel line-assisted graph-cut (LAGC) algorithm that effectively encodes 3D line constraints into light field stereo matching.
Abstract: Light fields are image-based representations that use densely sampled rays as a scene description. In this paper, we explore geometric structures of 3D lines in ray space for improving light field triangulation and stereo matching. The triangulation problem aims to fill in the ray space with continuous and non-overlapping simplices anchored at sampled points (rays). Such a triangulation provides a piecewise-linear interpolant useful for light field super-resolution. We show that the light field space is largely bilinear due to 3D line segments in the scene, and direct triangulation of these bilinear subspaces leads to large errors. We instead present a simple but effective algorithm to first map bilinear subspaces to line constraints and then apply Constrained Delaunay Triangulation (CDT). Based on our analysis, we further develop a novel line-assisted graph-cut (LAGC) algorithm that effectively encodes 3D line constraints into light field stereo matching. Experiments on synthetic and real data show that both our triangulation and LAGC algorithms outperform state-of-the-art solutions in accuracy and visual quality.
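
To make the Constrained Delaunay Triangulation step concrete, here is a minimal 2D stand-in on a single epipolar plane image slice, using the Python bindings to Shewchuk's Triangle. The paper works in the full 4D ray space and derives its line constraints from the bilinear subspaces; the points and segments below are hypothetical inputs.

import numpy as np
import triangle  # pip install triangle (bindings to Shewchuk's Triangle)

def constrained_epi_triangulation(points_us, segments):
    # points_us: (N, 2) sampled rays as (u, s) coordinates in one EPI slice
    # segments:  (M, 2) vertex-index pairs encoding the line constraints
    pslg = {"vertices": np.asarray(points_us, dtype=float),
            "segments": np.asarray(segments, dtype=int)}
    return triangle.triangulate(pslg, "p")  # 'p' = constrained (PSLG) mode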

Journal ArticleDOI
TL;DR: In this article, a systematic derivation of the dynamical polarizability and the ac Stark shift of the ground and excited states of atoms interacting with a far-off-resonance light field of arbitrary polarization was presented.
Abstract: We present a systematic derivation of the dynamical polarizability and the ac Stark shift of the ground and excited states of atoms interacting with a far-off-resonance light field of arbitrary polarization. We calculate the scalar, vector, and tensor polarizabilities of atomic cesium using resonance wavelengths and reduced matrix elements for a large number of transitions. We analyze the properties of the fictitious magnetic field produced by the vector polarizability in conjunction with the ellipticity of the polarization of the light field.
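
As a reminder of the structure such a derivation yields, written here only schematically and in one common convention (the detailed prefactors and polarization factors are what the paper computes), the ac Stark shift of a hyperfine level $|F, m_F\rangle$ in a far-off-resonance field of complex amplitude $E$ separates into scalar, vector and tensor contributions:

$$ \Delta E_{F m_F} \;=\; -\frac{1}{4}|E|^2\left[\alpha^{s}(\omega) \;+\; \mathcal{C}\,\alpha^{v}(\omega)\,\frac{m_F}{2F} \;+\; \mathcal{D}\,\alpha^{T}(\omega)\,\frac{3m_F^2-F(F+1)}{F(2F-1)}\right], $$

where $\mathcal{C}$ and $\mathcal{D}$ are geometry- and polarization-dependent factors. The vector term, which vanishes for linear polarization, acts like a fictitious magnetic field tied to the ellipticity of the light, as analyzed in the paper.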

Patent
16 Sep 2013
TL;DR: In this paper, the authors proposed a method for correcting artifacts in a light field image rendered from a light field obtained by capturing a set of images from different viewpoints and initial depth estimates for pixels within the light field, using a processor configured by an image processing application.
Abstract: Systems and methods for correction of user identified artifacts in light field images are disclosed. One embodiment of the invention is a method for correcting artifacts in a light field image rendered from a light field obtained by capturing a set of images from different viewpoints and initial depth estimates for pixels within the light field using a processor configured by an image processing application, where the method includes: receiving a user input indicating the location of an artifact within said light field image; selecting a region of the light field image containing the indicated artifact; generating updated depth estimates for pixels within the selected region; and re-rendering at least a portion of the light field image using the updated depth estimates for the pixels within the selected region.

Patent
06 May 2013
TL;DR: Optical systems of light field capture devices can be optimized to yield improved quality or resolution when using cheaper processing approaches whose computational costs fit within various processing and/or resource constraints as discussed by the authors, which can result in captured light field image data (both still and video) that is cheaper and easier to process.
Abstract: According to various embodiments of the present invention, the optical systems of light field capture devices are optimized so as to improve captured light field image data. Optimizing optical systems of light field capture devices can result in captured light field image data (both still and video) that is cheaper and/or easier to process. Optical systems can be optimized to yield improved quality or resolution when using cheaper processing approaches whose computational costs fit within various processing and/or resource constraints. As such, the optical systems of light field cameras can be optimized to reduce size and/or cost and/or increase the quality of such optical systems.

Patent
30 Sep 2013
TL;DR: In this paper, a system for the synthesis of light field images from virtual viewpoints is described, where a processor and a memory are configured to store captured light field image data and an image manipulation application is used to generate an image from the perspective of the virtual viewpoint.
Abstract: Systems and methods for the synthesis of light field images from virtual viewpoints in accordance with embodiments of the invention are disclosed. In one embodiment of the invention, a system includes a processor and a memory configured to store captured light field image data and an image manipulation application, wherein the captured light field image data includes image data, pixel position data, and a depth map, and wherein the image manipulation application configures the processor to obtain captured light field image data, determine a virtual viewpoint for the captured light field image data, where the virtual viewpoint includes a virtual location and virtual depth information, compute a virtual depth map based on the captured light field image data and the virtual viewpoint, and generate an image from the perspective of the virtual viewpoint based on the captured light field image data and the virtual depth map.

Journal ArticleDOI
TL;DR: In this paper, the authors examined the motion of light fields near the bottom of a potential valley in a multi-dimensional field space and provided a geometrical criterion for model-builders to decide whether or not the single-field and/or the truncation approximation is justified, to identify its leading deviations, and to efficiently extract cosmological predictions.
Abstract: We examine the motion of light fields near the bottom of a potential valley in a multi-dimensional field space. In the case of two fields we identify three general scales, all of which must be large in order to justify an effective low-energy approximation involving only the light field, $\ell$. (Typically only one of these — the mass of the heavy field transverse to the trough — is used in the literature when justifying the truncation of heavy fields.) We explicitly compute the resulting effective field theory, which has the form of a $P(\ell, X)$ model, with $X = -\tfrac{1}{2}(\partial \ell)^2$, as a function of these scales. This gives the leading ways each scale contributes to any low-energy dynamics, including (but not restricted to) those relevant for cosmology. We check our results with the special case of a homogeneous roll near the valley floor, placing into a broader context recent cosmological calculations that show how the truncation approximation can fail. By casting our results covariantly in field space, we provide a geometrical criterion for model-builders to decide whether or not the single-field and/or the truncation approximation is justified, to identify its leading deviations, and to efficiently extract cosmological predictions.
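
In the notation of the abstract, the effective theory for the light field takes the generic single-field form shown below, written here only to fix conventions; the paper's result is the specific dependence of $P$ on the three heavy scales it identifies.

$$ S_{\rm eff} \;=\; \int d^4x\,\sqrt{-g}\;P(\ell, X), \qquad X \equiv -\tfrac{1}{2}\,g^{\mu\nu}\,\partial_\mu\ell\,\partial_\nu\ell , $$

with the naive truncation corresponding to $P = X - V(\ell)$ and the corrections to this form suppressed by those heavy scales.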

Proceedings ArticleDOI
23 Jun 2013
TL;DR: This work shows that using a light field instead of an image not only makes it possible to train classifiers that can overcome many of these problems, but also provides an optimal data structure for label optimization by implicitly providing scene geometry information.
Abstract: We present the first variational framework for multi-label segmentation on the ray space of 4D light fields. For traditional segmentation of single images, features need to be extracted from the 2D projection of a three-dimensional scene. The associated loss of geometry information can cause severe problems, for example if different objects have a very similar visual appearance. In this work, we show that using a light field instead of an image not only makes it possible to train classifiers that can overcome many of these problems, but also provides an optimal data structure for label optimization by implicitly providing scene geometry information. It is thus possible to consistently optimize label assignment over all views simultaneously. As a further contribution, we make all light fields available online with complete depth and segmentation ground truth data where available, and thus establish the first benchmark data set for light field analysis to facilitate competitive further development of algorithms.

Journal ArticleDOI
Xinxing Xia, Xu Liu, Haifeng Li, Zhenrong Zheng, Han Wang, Yifan Peng, Shen Weidong
TL;DR: Using a light field reconstruction technique, a floating 3D scene is displayed in the air that is 360-degree surrounding viewable with correct occlusion effects, and the experimental results verified the representability of this method.
Abstract: Using a light field reconstruction technique, we can display a floating 3D scene in the air that is 360-degree surrounding viewable with a correct occlusion effect. A high-frame-rate color projector and a flat light field scanning screen are used in the system to create the light field of a real 3D scene in the air above the spinning screen. The principle and display performance of this approach are investigated in this paper. The image synthesis method for all the surrounding viewpoints is analyzed, and the 3D spatial resolution and angular resolution of the common display zone are employed to evaluate display performance. The prototype is achieved and a real 3D color animation image has been presented vividly. The experimental results verified the representability of this method.

Proceedings ArticleDOI
23 Jun 2013
TL;DR: This paper uses a single-shot light field image as an input and proposes a new feature, called the light field distortion (LFD) feature, for identifying a transparent object, which is incorporated into the bag-of-features approach for recognizing transparent objects.
Abstract: Current object-recognition algorithms use local features, such as scale-invariant feature transform (SIFT) and speeded-up robust features (SURF), for visually learning to recognize objects. These approaches, though, cannot be applied to transparent objects made of glass or plastic, as such objects take on the visual features of background objects, and the appearance of such objects dramatically varies with changes in scene background. Indeed, in transmitting light, transparent objects have the unique characteristic of distorting the background by refraction. In this paper, we use a single-shot light field image as an input and model the distortion of the light field caused by the refractive property of a transparent object. We propose a new feature, called the light field distortion (LFD) feature, for identifying a transparent object. The proposal incorporates this LFD feature into the bag-of-features approach for recognizing transparent objects. We evaluated its performance in laboratory and real settings.

Journal ArticleDOI
TL;DR: In this paper, the optical force and torque applied to an electric dipole by a spinning light field were investigated. And they found that the dissipative part of the force depends on the orbital energy flow of the field only, because the latter is related to the phase gradient generalized for such a light field.
Abstract: We calculate the optical force and torque applied to an electric dipole by a spinning light field. We find that the dissipative part of the force depends on the orbital energy flow of the field only, because the latter is related to the phase gradient generalized for such a light field. As for the remaining spin energy flow, it gives rise to an optical torque. The resulting change in the optical force is detailed for different experimentally relevant configurations, and we show in particular how this change is critical when surface plasmon modes are involved.
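
For reference, the time-averaged force on a point dipole of complex polarizability $\alpha = \alpha' + i\alpha''$ in a monochromatic field of complex amplitude $\mathbf{E}(\mathbf{r})$ is conventionally split as follows (conventions for the field amplitude shift the prefactors between references):

$$ \mathbf{F} \;=\; \frac{\alpha'}{4}\,\nabla|\mathbf{E}|^2 \;+\; \frac{\alpha''}{2}\,\operatorname{Im}\!\Big[\sum_j E_j^*\,\nabla E_j\Big]. $$

The second, dissipative term follows the generalized phase gradient, i.e. the orbital part of the energy flow; the spin part of the flow, as the paper argues, shows up instead as an optical torque on the dipole.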

Journal ArticleDOI
TL;DR: In this article, the authors show that it is possible to generate a light field that contains purely transverse angular momentum, the analogue of a spinning mechanical wheel, by tight focusing of a polarization tailored light beam and measuring it using an optical nano-probing technique.
Abstract: In classical mechanics, a system may possess angular momentum which can be either transverse (e.g. in a spinning wheel) or longitudinal (e.g. for a spiraling seed falling from a tree) with respect to the direction of motion. However, for light, a typical massless wave system, the situation is less versatile. Photons are well known to exhibit intrinsic angular momentum which is longitudinal only: the spin angular momentum defining the polarization and the orbital angular momentum associated with a spiraling phase front. Here we show that it is possible to generate a novel state of the light field that contains purely transverse angular momentum, the analogue of a spinning mechanical wheel. We realize this state by tight focusing of a polarization tailored light beam and measure it using an optical nano-probing technique. Such a novel state of the light field can find applications in optical tweezers and spanners, where it allows for an additional rotational degree of freedom not achievable in single-beam configurations so far.

Journal ArticleDOI
TL;DR: This work develops a mathematical framework that generalizes multiplexed imaging to all dimensions of the plenoptic function and demonstrates many practical applications of the framework including high-quality light field reconstruction, the first comparative noise analysis of light field attenuation masks, and an analysis of aliasing in multiplexing applications.
Abstract: Photography has been striving to capture an ever increasing amount of visual information in a single image. Digital sensors, however, are limited to recording a small subset of the desired information at each pixel. A common approach to overcoming the limitations of sensing hardware is the optical multiplexing of high-dimensional data into a photograph. While this is a well-studied topic for imaging with color filter arrays, we develop a mathematical framework that generalizes multiplexed imaging to all dimensions of the plenoptic function. This framework unifies a wide variety of existing approaches to analyze and reconstruct multiplexed data in either the spatial or the frequency domain. We demonstrate many practical applications of our framework including high-quality light field reconstruction, the first comparative noise analysis of light field attenuation masks, and an analysis of aliasing in multiplexing applications.
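
At its core, the framework treats any such camera as a linear multiplexer of plenoptic samples. A bare-bones sketch of that viewpoint follows, with a hypothetical multiplexing matrix M and a Tikhonov-regularized inversion standing in for the paper's spatial- and frequency-domain reconstructions.

import numpy as np

def demultiplex(y, M, lam=1e-2):
    # y: vectorized sensor measurements; M: multiplexing matrix mapping the
    # sampled plenoptic dimensions (space, angle, color, ...) to sensor pixels.
    # Tikhonov-regularized least squares: x = argmin ||M x - y||^2 + lam ||x||^2
    A = M.T @ M + lam * np.eye(M.shape[1])
    return np.linalg.solve(A, M.T @ y)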

Journal ArticleDOI
TL;DR: In this paper, a display-adaptive light field retargeting method was proposed to provide high-quality, blur-free viewing experiences of the same content on a variety of display types, ranging from hand-held devices to movie theaters.

Proceedings ArticleDOI
23 Jun 2013
TL;DR: This work analyzes regularization of light fields in variational frameworks and shows that their variational structure is induced by disparity, which is in this context best understood as a vector field on epipolar plane image space.
Abstract: Unlike traditional images which do not offer information for different directions of incident light, a light field is defined on ray space, and implicitly encodes scene geometry data in a rich structure which becomes visible on its epipolar plane images. In this work, we analyze regularization of light fields in variational frameworks and show that their variational structure is induced by disparity, which is in this context best understood as a vector field on epipolar plane image space. We derive differential constraints on this vector field to enable consistent disparity map regularization. Furthermore, we show how the disparity field is related to the regularization of more general vector-valued functions on the 4D ray space of the light field. This way, we derive an efficient variational framework with convex priors, which can serve as a fundament for a large class of inverse problems on ray space.
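
The disparity-induced structure referred to here is easiest to state on a 2D epipolar plane image $L(u, s)$: a Lambertian scene point with disparity $d$ traces the line $u = u_0 + d\,s$, so $L$ is constant along that direction, which can be written as the constraint

$$ \partial_s L(u,s) \;+\; d(u,s)\,\partial_u L(u,s) \;=\; 0 . $$

The paper's contribution is to lift this observation to differential constraints on $d$ as a vector field over the full 4D ray space, and to build convex variational priors consistent with it.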

Proceedings ArticleDOI
TL;DR: The hyperfan preserves depth of field, making it a single-step all-in-focus denoising filter suitable for general-purpose light field rendering, and it is shown that the hyperfan’s performance scales with aperture count.
Abstract: Imaging in low light is problematic as sensor noise can dominate imagery, and increasing illumination or aperture size is not always effective or practical. Computational photography offers a promising solution in the form of the light field camera, which by capturing redundant information offers an opportunity for elegant noise rejection. We show that the light field of a Lambertian scene has a 4D hyperfan-shaped frequency-domain region of support at the intersection of a dual-fan and a hypercone. By designing and implementing a filter with appropriately shaped passband we accomplish denoising with a single all-in-focus linear filter. Drawing examples from the Stanford Light Field Archive and images captured using a commercially available lenselet-based plenoptic camera, we demonstrate that the hyperfan outperforms competing methods including synthetic focus, fan-shaped antialiasing filters, and a range of modern nonlinear image and video denoising techniques. We show the hyperfan preserves depth of field, making it a single-step all-in-focus denoising filter suitable for general-purpose light field rendering. We include results for different noise types and levels, over a variety of metrics, and in real-world scenarios. Finally, we show that the hyperfan’s performance scales with aperture count.
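
To illustrate the shape of the passband (not the paper's actual filter design, whose parameterization and rolloff differ), one can build a rough binary 4D mask that keeps only frequencies consistent with some disparity in a chosen range, intersected with the hypercone condition:

import numpy as np

def rough_hyperfan_mask(shape, d_min, d_max, cone_tol=0.05):
    # shape: (Ns, Nt, Nu, Nv) light field dimensions; frequencies via fftfreq.
    ws, wt, wu, wv = np.meshgrid(*[np.fft.fftfreq(n) for n in shape],
                                 indexing="ij")
    eps = 1e-12
    d_su = -ws / (wu + eps)            # disparity implied by the (s, u) pair
    d_tv = -wt / (wv + eps)            # disparity implied by the (t, v) pair
    fan_su = (d_su >= d_min) & (d_su <= d_max)
    fan_tv = (d_tv >= d_min) & (d_tv <= d_max)
    hypercone = np.abs(ws * wv - wt * wu) <= cone_tol
    # dual-fan intersected with the hypercone, as a crude binary passband
    return (fan_su & fan_tv & hypercone).astype(float)

A practical filter would replace this hard mask with a smooth rolloff and handle the frequency axes more carefully; the sketch only conveys the geometry of the region of support.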

Patent
Douglas Lanman, David Luebke
31 Dec 2013
TL;DR: In this article, a method for displaying a near-eye light field display (NELD) image is proposed, where a pre-filtered image corresponds to a target image.
Abstract: A method for displaying a near-eye light field display (NELD) image is disclosed. The method comprises determining a pre-filtered image to be displayed, wherein the pre-filtered image corresponds to a target image. It further comprises displaying the pre-filtered image on a display. Subsequently, it comprises producing a near-eye light field after the pre-filtered image travels through a microlens array adjacent to the display, wherein the near-eye light field is operable to simulate a light field corresponding to the target image. Finally, it comprises altering the near-eye light field using at least one converging lens, wherein the altering allows a user to focus on the target image at an increased depth of field at an increased distance from an eye of the user and wherein the altering increases spatial resolution of said target image.

Book ChapterDOI
19 Aug 2013
TL;DR: The proposed model combines the main idea of Active Wavefront Sampling (AWS) with the light field technique, i.e. so-called sub-aperture images are extracted out of the raw image of a plenoptic camera, in such a way that the virtual view points are arranged on circles around a fixed center view.
Abstract: In this paper we propose an efficient method to calculate a high-quality depth map from a single raw image captured by a light field or plenoptic camera. The proposed model combines the main idea of Active Wavefront Sampling (AWS) with the light field technique, i.e. we extract so-called sub-aperture images out of the raw image of a plenoptic camera, in such a way that the virtual view points are arranged on circles around a fixed center view. By tracking an imaged scene point over a sequence of sub-aperture images corresponding to a common circle, one can observe a virtual rotation of the scene point on the image plane. Our model is able to measure a dense field of these rotations, which are inversely related to the scene depth.
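
The depth cue itself is compact enough to sketch: given the tracked image positions of one scene point across the sub-aperture views on one circle, the radius of its apparent circular motion plays the role of a disparity magnitude and is inversely related to depth. The constants and names below are hypothetical, standing in for a calibrated version of the model described above.

import numpy as np

def depth_from_circular_trace(track_xy, baseline_radius, focal_length):
    # track_xy: (N, 2) image-plane positions of one tracked scene point across
    # sub-aperture views whose virtual viewpoints lie on a circle of radius
    # baseline_radius around the centre view.
    track_xy = np.asarray(track_xy, dtype=float)
    center = track_xy.mean(axis=0)
    r_image = np.linalg.norm(track_xy - center, axis=1).mean()  # apparent rotation radius
    return focal_length * baseline_radius / (r_image + 1e-12)   # depth ~ 1 / radius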

Journal ArticleDOI
TL;DR: In this article, the formation of light bullets during femtosecond laser pulse filamentation in the presence of anomalous group velocity dispersion has been recorded for the first time, and the minimum experimentally detected width of the light bullet autocorrelation function is 27 fs, which corresponds to a duration of about 13.5 fs.
Abstract: The formation of light bullets during femtosecond laser pulse filamentation in the presence of anomalous group velocity dispersion has been recorded for the first time. The minimum experimentally detected width of the light bullet autocorrelation function is 27 fs, which corresponds to a duration of about 13.5 fs. The duration of the light bullet at a wavelength of 1800 nm is about two periods of the light field oscillation. The numerically calculated width of the autocorrelation function for such a light bullet is 23 fs, which is in good agreement with the experimental value.

Journal ArticleDOI
TL;DR: In this article, the full control of the optical radiation pressure at fixed photon flux and incident angle by the photon spin was reported by using transparent chiral liquid crystal droplets that enable a strong coupling between the linear and angular degrees of freedom of a light field.
Abstract: We report on the full control of the optical radiation pressure at fixed photon flux and incident angle by the photon spin. This is done by using transparent chiral liquid crystal droplets that enable a strong coupling between the linear and angular degrees of freedom of a light field. From these results, we anticipate optical sorting of particles with different chirality as well as novel optical trapping and micromanipulation strategies.

Proceedings ArticleDOI
23 Jun 2013
TL;DR: This work presents a novel computational imaging solution by exploiting the light field probe (LF-Probe), which can reliably reconstruct small to medium scale gas flows and shows that the use of ray-ray correspondences can greatly improve the reconstruction.
Abstract: Transparent gas flows are difficult to reconstruct: the refractive index field (RIF) within the gas volume is uneven and rapidly evolving, and correspondence matching under distortions is challenging. We present a novel computational imaging solution by exploiting the light field probe (LF-Probe). A LF-probe resembles a view-dependent pattern where each pixel on the pattern maps to a unique ray. By observing the LF-probe through the gas flow, we acquire a dense set of ray-ray correspondences and then reconstruct their light paths. To recover the RIF, we use Fermat's Principle to correlate each light path with the RIF via a Partial Differential Equation (PDE). We then develop an iterative optimization scheme to solve for all light-path PDEs in conjunction. Specifically, we initialize the light paths by fitting Hermite splines to ray-ray correspondences, discretize their PDEs onto voxels, and solve a large, over-determined PDE system for the RIF. The RIF can then be used to refine the light paths. Finally, we alternate the RIF and light-path estimations to improve the reconstruction. Experiments on synthetic and real data show that our approach can reliably reconstruct small to medium scale gas flows. In particular, when the flow is acquired by a small number of cameras, the use of ray-ray correspondences can greatly improve the reconstruction.