scispace - formally typeset

Showing papers on "Light field published in 2015"


Journal ArticleDOI
TL;DR: This work may constitute the first example of a MoS2-enabled wave-guiding photonic device, and potentially gives some new insights into photonics based on two-dimensional layered materials.
Abstract: By coupling few-layer molybdenum disulfide (MoS2) with the fiber-taper evanescent light field, a new type of MoS2-based nonlinear optical modulating element has been successfully fabricated as a two-dimensional layered saturable absorber with strong light-matter interaction. This MoS2-taper-fiber device is not only capable of passively mode-locking an all-normal-dispersion ytterbium-doped fiber laser and enduring high-power laser excitation (up to 1 W), but also functions as a polarization-sensitive optical modulating component (that is, differently polarized light can induce different nonlinear optical responses). Thanks to the combined advantages of the strong nonlinear optical response in MoS2 and the sufficiently long-range interaction between light and MoS2, this device allows for the generation of high-power stable dissipative solitons at 1042.6 nm with a pulse duration of 656 ps and a repetition rate of 6.74 MHz at a pump power of 210 mW. Our work may also constitute the first example of a MoS2-enabled wave-guiding photonic device, and potentially gives some new insights into photonics based on two-dimensional layered materials.

428 citations


Journal ArticleDOI
20 Feb 2015
TL;DR: In this article, the Fourier ptychography was used to estimate the 3D complex transmittance function of the sample at multiple depths, without any weak or single-scattering approximations.
Abstract: Realizing high resolution across large volumes is challenging for 3D imaging techniques with high-speed acquisition. Here, we describe a new method for 3D intensity and phase recovery from 4D light field measurements, achieving enhanced resolution via Fourier ptychography. Starting from geometric optics light field refocusing, we incorporate phase retrieval and correct diffraction artifacts. Further, we incorporate dark-field images to achieve lateral resolution beyond the diffraction limit of the objective (5× larger NA) and axial resolution better than the depth of field, using a low-magnification objective with a large field of view. Our iterative reconstruction algorithm uses a multislice coherent model to estimate the 3D complex transmittance function of the sample at multiple depths, without any weak or single-scattering approximations. Data are captured by an LED array microscope with computational illumination, which enables rapid scanning of angles for fast acquisition. We demonstrate the method with thick biological samples in a modified commercial microscope, indicating the technique’s versatility for a wide range of applications.

403 citations


Journal ArticleDOI
TL;DR: In this paper, a phase-space formulation for the transport of intensity equation (TIE) is presented for analyzing phase retrieval under partially coherent illumination. However, the authors do not consider the effect of partial coherence on phase retrieval.

277 citations


Journal ArticleDOI
TL;DR: In this paper, an ultrabroadband superoscillatory lens (UBSOL) is proposed and realized by utilizing the metasurface-assisted law of refraction and reflection in arrayed nanorectangular apertures with variant orientations.
Abstract: Conventional optics is diffraction limited due to the cutoff of spatial frequency components, and evanescent waves allow subdiffraction optics at the cost of complex near-field manipulation. Recently, optical superoscillatory phenomena were employed to realize superresolution lenses in the far field, but these suffer from a very narrow working wavelength band due to the fragility of the superoscillatory light field. Here, an ultrabroadband superoscillatory lens (UBSOL) is proposed and realized by utilizing the metasurface-assisted law of refraction and reflection in arrayed nanorectangular apertures with varying orientations. The ultrabroadband feature mainly arises from the nearly dispersionless phase profile of light transmitted through the UBSOL with opposite circular polarization with respect to the incident light. It is demonstrated in experiments that the subdiffraction light focusing behavior holds well, with nearly unchanged focal patterns, for wavelengths spanning the visible and near-infrared. This method is believed to find promising applications in superresolution microscopes or telescopes, high-density optical data storage, etc.
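The nearly dispersionless phase profile mentioned above is characteristic of the geometric (Pancharatnam-Berry) phase: a nanorectangular aperture rotated by an angle θ imparts a phase of 2θ on the opposite circular polarization, independent of wavelength. A minimal numerical sketch of this relation (the orientation layout and all values here are illustrative, not taken from the paper):

```python
import numpy as np

# Geometric (Pancharatnam-Berry) phase model: a nano-rectangle rotated by
# theta imparts a phase of 2*theta on the converted (opposite) circular
# polarization component, with no wavelength dependence in the profile.
def pb_phase(orientations_rad):
    """Phase (radians) of the cross-circular component, one per aperture."""
    return np.mod(2.0 * orientations_rad, 2.0 * np.pi)

# Pick orientations that approximate a lens-like (focusing) phase profile.
x = np.linspace(-10e-6, 10e-6, 101)   # aperture positions (m), illustrative
f = 50e-6                             # design focal length (m), illustrative
wavelength = 633e-9                   # design wavelength (m), illustrative
target = np.mod(-2 * np.pi / wavelength * (np.sqrt(x**2 + f**2) - f),
                2 * np.pi)
orientations = target / 2.0           # rotate each aperture by phi/2
phase = pb_phase(orientations)        # realized phase equals the target
```

Because the realized phase depends only on the rotation angle, the same orientation pattern works across a broad wavelength band, which is the mechanism the abstract invokes.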

192 citations


Journal ArticleDOI
TL;DR: It is demonstrated that the redundant information in light field imagery allows volumetric focus, an improvement of signal quality that maintains focus over a controllable range of depths, and shows the hyperfan preserves depth of field, making it a single-step all-in-focus denoising filter suitable for general-purpose light field rendering.
Abstract: We demonstrate that the redundant information in light field imagery allows volumetric focus, an improvement of signal quality that maintains focus over a controllable range of depths. To do this, we derive the frequency-domain region of support of the light field, finding it to be the 4D hyperfan at the intersection of a dual fan and a hypercone, and design a filter with correspondingly shaped passband. Drawing examples from the Stanford Light Field Archive and images captured using a commercially available lenslet-based plenoptic camera, we demonstrate that the hyperfan outperforms competing methods including planar focus, fan-shaped antialiasing, and nonlinear image and video denoising techniques. We show the hyperfan preserves depth of field, making it a single-step all-in-focus denoising filter suitable for general-purpose light field rendering. We include results for different noise types and levels, through murky water and particulate matter, in real-world scenarios, and evaluated using a variety of metrics. We show that the hyperfan's performance scales with aperture count, and demonstrate the inclusion of aliased components for high-quality rendering.
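The 4D hyperfan is easiest to picture in a 2D epipolar slice, where a point at a given depth contributes energy along a line whose slope is set by its disparity, so a bounded depth range occupies a fan of slopes. A hedged 2D sketch of such a fan-shaped passband (the paper's actual filter is the 4D intersection of a dual fan and a hypercone; all names here are illustrative):

```python
import numpy as np

def fan_mask(n_u, n_s, slope_min, slope_max):
    """Fan-shaped frequency-domain passband for a 2D epipolar-plane image.

    A Lambertian point at one depth puts its energy on the line
    w_u = slope * w_s, so a bounded depth range maps to the fan of
    slopes [slope_min, slope_max] in the (w_s, w_u) plane.
    """
    w_u = np.fft.fftfreq(n_u)[:, None]
    w_s = np.fft.fftfreq(n_s)[None, :]
    safe = np.where(w_s != 0, w_s, 1.0)            # avoid division by zero
    slope = np.where(w_s != 0, w_u / safe, 0.0)
    mask = (slope >= slope_min) & (slope <= slope_max)
    mask[:, 0] = (w_u[:, 0] == 0)                  # on w_s = 0, keep only DC
    return mask.astype(float)

# Apply to an EPI: keep only energy consistent with the depth range.
epi = np.random.rand(64, 64)
mask = fan_mask(64, 64, -1.0, 1.0)
filtered = np.fft.ifft2(np.fft.fft2(epi) * mask).real
```

The volumetric-focus idea is that widening `[slope_min, slope_max]` widens the range of in-focus depths, while everything outside the fan (noise, occluders at other depths) is attenuated.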

143 citations


Journal ArticleDOI
TL;DR: A light transport framework for understanding the fundamental limits of light field camera resolution that can model all existing lenslet-based light field cameras and allows them to be compared in a unified way in simulation, independent of the practical differences between particular prototypes.
Abstract: Light field cameras capture full spatio-angular information of the light field, and enable many novel photographic and scientific applications. It is often stated that there is a fundamental trade-off between spatial and angular resolution, but there has been limited understanding of this trade-off theoretically or numerically. Moreover, it is very difficult to evaluate the design of a light field camera because a new design is usually reported with its prototype and rendering algorithm, both of which affect resolution. In this article, we develop a light transport framework for understanding the fundamental limits of light field camera resolution. We first derive the prefiltering model of lenslet-based light field cameras. The main novelty of our model is in considering the full space-angle sensitivity profile of the photosensor—in particular, real pixels have nonuniform angular sensitivity, responding more to light along the optical axis than at grazing angles. We show that the full sensor profile plays an important role in defining the performance of a light field camera. The proposed method can model all existing lenslet-based light field cameras and allows them to be compared in a unified way in simulation, independent of the practical differences between particular prototypes. We further extend our framework to analyze the performance of two rendering methods: the simple projection-based method and the inverse light transport process. We validate our framework with both flatland simulation and real data from the Lytro light field camera.

111 citations


Journal ArticleDOI
TL;DR: This paper proposes a novel approach for high-resolution light field microscopy imaging by using a camera array that applies a two-stage relay system for expanding the aperture plane of the microscope into the size of an imaging lens array, and utilizes a sensor array for acquiring different sub-aperture images formed by corresponding imaging lenses.
Abstract: This paper proposes a novel approach for high-resolution light field microscopy imaging by using a camera array. In this approach, we apply a two-stage relay system for expanding the aperture plane of the microscope into the size of an imaging lens array, and utilize a sensor array for acquiring different sub-aperture images formed by corresponding imaging lenses. By combining the rectified and synchronized images from 5 × 5 viewpoints with our prototype system, we successfully recovered color light field videos for various fast-moving microscopic specimens with a spatial resolution of 0.79 megapixels at 30 frames per second, corresponding to an unprecedented data throughput of 562.5 MB/s for light field microscopy. We also demonstrated the use of the reported platform for different applications, including post-capture refocusing, phase reconstruction, 3D imaging, and optical metrology.

104 citations


Journal ArticleDOI
TL;DR: This tutorial paper illustrates the concept of plenoptic function and light field from the perspective of geometric optics, and describes the imaging model and computational algorithms that can reconstruct images at different focus points, using mathematical tools from ray optics and Fourier optics.
Abstract: Photography is a cornerstone of imaging. Ever since cameras became consumer products more than a century ago, we have witnessed great technological progress in optics and recording media, with digital sensors replacing photographic films in most instances. The latest revolution is computational photography, which seeks to make image reconstruction computation an integral part of the image formation process; in this way, there can be new capabilities or better performance in the overall imaging system. A leading effort in this area is called the plenoptic camera, which aims at capturing the light field of an object; proper reconstruction algorithms can then adjust the focus after the image capture. In this tutorial paper, we first illustrate the concept of plenoptic function and light field from the perspective of geometric optics. This is followed by a discussion on early attempts and recent advances in the construction of the plenoptic camera. We will then describe the imaging model and computational algorithms that can reconstruct images at different focus points, using mathematical tools from ray optics and Fourier optics. Last, but not least, we will consider the trade-off in spatial resolution and highlight some research work to increase the spatial resolution of the resulting images.
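The post-capture refocusing described in the tutorial can be sketched with the classic shift-and-sum algorithm: each sub-aperture view is shifted in proportion to its aperture coordinate and the focus parameter, then all views are averaged. A minimal sketch assuming a 4D array L[u, v, s, t] and integer-pixel shifts (a real implementation would interpolate):

```python
import numpy as np

def refocus(L, alpha):
    """Shift-and-sum refocusing of a 4D light field L[u, v, s, t].

    Each sub-aperture image (fixed u, v) is shifted in proportion to its
    aperture coordinate and the focus parameter alpha, then all views are
    averaged. Integer shifts via np.roll keep the sketch simple.
    """
    n_u, n_v, n_s, n_t = L.shape
    cu, cv = (n_u - 1) / 2, (n_v - 1) / 2
    out = np.zeros((n_s, n_t))
    for u in range(n_u):
        for v in range(n_v):
            du = int(round((1 - 1 / alpha) * (u - cu)))
            dv = int(round((1 - 1 / alpha) * (v - cv)))
            out += np.roll(L[u, v], shift=(du, dv), axis=(0, 1))
    return out / (n_u * n_v)

# alpha = 1 applies no shift and reproduces the conventional photograph.
L = np.random.rand(5, 5, 32, 32)
photo = refocus(L, alpha=1.0)
```

Varying `alpha` moves the synthetic focal plane; the Fourier-optics view in the tutorial corresponds to taking 2D slices of the 4D spectrum instead of summing in the spatial domain.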

100 citations


Proceedings ArticleDOI
01 Nov 2015
TL;DR: The results show that relative performance is not consistent across all coding configurations, raising new research questions regarding standard coding of Lytro-Illum light fields using HEVC, and the proposed data formats greatly increase HEVC performance.
Abstract: Light fields captured by the Lytro-Illum camera are the first to appear in the consumer market, capable of providing refocused pictures at acceptable spatial resolution and quality. Since this is partially due to sampling of a huge number of light rays, efficient compression methods are required to store and exchange light field data. This paper presents a performance study of HEVC-compatible coding of Lytro-Illum light fields using different data formats for standard coding. The efficiency of 5 different light field data formats are evaluated using a data set of 12 light field images and the standard HEVC coding configurations of Still-Image Profile, All-Intra, Low Delay B and P and Random Access. Unexpectedly, the results show that relative performance is not consistent across all coding configurations, raising new research questions regarding standard coding of Lytro-Illum light fields using HEVC. Most importantly, the proposed data formats greatly increase HEVC performance.

97 citations


Journal ArticleDOI
TL;DR: It is shown how speckle light fields can be used to control the anomalous diffusion of a Brownian particle and to perform some basic optical manipulation tasks such as guiding and sorting.
Abstract: The motion of particles in random potentials occurs in several natural phenomena ranging from the mobility of organelles within a biological cell to the diffusion of stars within a galaxy. A Brownian particle moving in the random optical potential associated to a speckle pattern, i.e., a complex interference pattern generated by the scattering of coherent light by a random medium, provides an ideal model system to study such phenomena. Here, we derive a theory for the motion of a Brownian particle in a speckle field and, in particular, we identify its universal characteristic timescale. Based on this theoretical insight, we show how speckle light fields can be used to control the anomalous diffusion of a Brownian particle and to perform some basic optical manipulation tasks such as guiding and sorting. Our results might broaden the perspectives of optical manipulation for real-life applications.
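The model system described, a Brownian particle in a speckle potential, can be sketched with an overdamped Langevin integration; the speckle here is generated by band-limiting a random-phase field, and the optical force is taken proportional to the intensity gradient. All parameter values are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 1D speckle: random phases, band-limited, then take intensity.
n = 1024
phases = np.exp(1j * 2 * np.pi * rng.random(n))
lowpass = np.abs(np.fft.fftfreq(n)) < 0.02
field = np.fft.ifft(np.fft.fft(phases) * lowpass)
I = np.abs(field) ** 2
I /= I.mean()

# Overdamped Langevin dynamics in the speckle potential: the optical force
# on a small particle is taken proportional to the intensity gradient.
force = np.gradient(I)                       # arbitrary units per pixel
dt, gamma, kT, coupling = 0.01, 1.0, 0.1, 50.0
x = n / 2.0
traj = np.empty(5000)
for k in range(traj.size):
    i = int(x) % n                           # periodic boundary
    x += (coupling * force[i] / gamma) * dt \
         + np.sqrt(2 * kT * dt / gamma) * rng.normal()
    traj[k] = x
```

Raising `coupling` relative to `kT` traps the particle in speckle grains (subdiffusion), while lowering it recovers free diffusion, which is the kind of control the abstract refers to.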

97 citations


Posted Content
TL;DR: The devised iterative regularization algorithm based on adaptive thresholding provides high-quality reconstruction results for relatively large disparities between neighboring views and is suitable for all applications that require light field reconstruction.
Abstract: In this article we develop an image-based rendering technique based on light field reconstruction from a limited set of perspective views acquired by cameras. Our approach utilizes sparse representation of epipolar-plane images in a directionally sensitive transform domain, obtained by an adapted discrete shearlet transform. The iterative thresholding algorithm used provides high-quality reconstruction results for relatively large disparities between neighboring views. The generated densely sampled light field of a given 3D scene is thus suitable for all applications that require light field reconstruction. The proposed algorithm compares favorably against state-of-the-art depth-image-based rendering techniques.
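The iterative thresholding described is an instance of the ISTA family; the sketch below uses an orthogonal DCT as a stand-in for the adapted discrete shearlet transform of the paper, and a fixed rather than adaptive threshold, so it is only a structural illustration:

```python
import numpy as np
from scipy.fft import dct, idct

def soft(x, t):
    """Soft-thresholding (shrinkage) operator."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista_inpaint(y, mask, n_iter=200, thresh=0.05):
    """Recover a signal observed only where mask == 1, assuming it is
    sparse in an orthogonal transform (DCT here as a stand-in)."""
    x = y.copy()
    for _ in range(n_iter):
        x = x + mask * (y - x)                  # enforce known samples
        c = soft(dct(x, norm='ortho'), thresh)  # shrink in transform domain
        x = idct(c, norm='ortho')
    return x

# Demo: recover a smooth signal from ~30% of its samples.
n = 256
t = np.linspace(0, 1, n)
signal = np.cos(2 * np.pi * 3 * t)
mask = (np.random.default_rng(1).random(n) < 0.3).astype(float)
rec = ista_inpaint(mask * signal, mask)
```

In the paper the "signal" is an epipolar-plane image with missing views, the transform is a shearlet frame aligned with EPI line directions, and the threshold schedule is adaptive, but the alternation between data consistency and shrinkage is the same.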

Proceedings ArticleDOI
10 Dec 2015
TL;DR: A subaperture image streaming scheme to compress lenselet images, in which rotation scan mapping is adopted to further improve compression efficiency, is proposed; results show the approach can efficiently compress the redundancy in lenselet images and outperforms traditional image compression methods.
Abstract: Plenoptic cameras capture the light field in a scene with a single shot and produce lenselet images. From a lenselet image, the light field can be reconstructed, with which we can render images with different viewpoints and focal lengths. Because of the large data volume, a highly efficient image compression scheme for storage and transmission is urgently needed. Containing 4D light field information, lenselet images have much more redundant information than traditional 2D images. In this paper, we propose a subaperture image streaming scheme to compress lenselet images, in which rotation scan mapping is adopted to further improve compression efficiency. The experimental results show our approach can efficiently compress the redundancy in lenselet images and outperforms traditional image compression methods.
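The conversion from a lenselet image to sub-aperture images can be sketched with plain array indexing: each sub-aperture image collects the pixel at the same offset under every microlens. The sketch assumes an ideally aligned square microlens grid (real Lytro-style lenselet images need demosaicing, rotation correction, and hexagonal-grid resampling first):

```python
import numpy as np

def lenselet_to_subapertures(img, m):
    """Split an aligned lenselet image of shape (H*m, W*m) into an m x m
    grid of sub-aperture images, each of shape (H, W).

    Pixel (i, j) under every microlens goes to sub-aperture image (i, j),
    i.e. subs[i, j, y, x] == img[y*m + i, x*m + j].
    """
    H, W = img.shape[0] // m, img.shape[1] // m
    # (H, m, W, m) -> (m, m, H, W)
    return img.reshape(H, m, W, m).transpose(1, 3, 0, 2)

m, H, W = 5, 20, 30
lenselet = np.arange(m * H * m * W, dtype=float).reshape(m * H, m * W)
subs = lenselet_to_subapertures(lenselet, m)
```

Streaming the resulting sub-aperture images as a pseudo-video sequence lets a standard codec exploit the strong inter-view redundancy; the paper's rotation scan mapping additionally orders the views to keep adjacent frames similar.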

Patent
08 Jan 2015
TL;DR: In this article, a compressed light field imaging system is described, where the light field 3D data is analyzed to determine optimal subset of light field samples to be (acquired) rendered, while the remaining samples are generated using multi-reference depth-image based rendering.
Abstract: A compressed light field imaging system is described. The light field 3D data is analyzed to determine an optimal subset of light field samples to be (acquired) rendered, while the remaining samples are generated using multi-reference depth-image-based rendering. The light field is encoded and transmitted to the display. The 3D display directly reconstructs the light field and avoids the data expansion that usually occurs in conventional imaging systems. The present invention enables the realization of a full-parallax 3D compressed imaging system that achieves high compression performance while minimizing memory and computational requirements.

Proceedings Article
Jun Zhang1, Meng Wang1, Jun Gao1, Yi Wang1, Xudong Zhang1, Xindong Wu1 
25 Jul 2015
TL;DR: Extensive evaluations on the recently introduced Light Field Saliency Dataset (LFSD) show that the investigated light field properties are complementary with each other and lead to improvements on 2D/3D models, and the approach produces superior results in comparison with the state-of-the-art.
Abstract: Although the light field has been recently recognized as helpful in saliency detection, it has not been comprehensively explored yet. In this work, we propose a new saliency detection model with light field data. The idea behind the proposed model originates from the following observations. (1) People can distinguish regions at different depth levels by adjusting the focus of their eyes. Similarly, a light field image can generate a set of focal slices focusing at different depth levels, which suggests that a background can be weighted by selecting the corresponding slice. We show that background priors encoded by light field focusness have advantages in eliminating background distraction and enhancing the saliency by weighting the light field contrast. (2) Regions at closer depth ranges tend to be salient, while those far in the distance mostly belong to the background. We show that foreground objects can be easily separated from similar or cluttered backgrounds by exploiting their light field depth. Extensive evaluations on the recently introduced Light Field Saliency Dataset (LFSD) [Li et al., 2014] include studies of different light field cues and comparisons with Li et al.'s method (the only reported light field saliency detection approach to our knowledge) and with 2D/3D state-of-the-art approaches extended with light field depth/focusness information. These evaluations show that the investigated light field properties are complementary with each other and lead to improvements on 2D/3D models, and that our approach produces superior results in comparison with the state of the art.

Proceedings ArticleDOI
07 Dec 2015
TL;DR: A novel approach to relative pose estimation which is tailored to 4D light field cameras is presented and compares favourably to direct linear pose estimation based on aligning the 3D point clouds obtained by reconstructing depth for each individual light field.
Abstract: We present a novel approach to relative pose estimation which is tailored to 4D light field cameras. From the relationships between scene geometry and light field structure and an analysis of the light field projection in terms of Plücker ray coordinates, we deduce a set of linear constraints on ray space correspondences between a light field camera pair. These can be applied to infer relative pose of the light field cameras and thus obtain a point cloud reconstruction of the scene. While the proposed method has interesting relationships to pose estimation for generalized cameras based on ray-to-ray correspondence, our experiments demonstrate that our approach is both more accurate and computationally more efficient. It also compares favourably to direct linear pose estimation based on aligning the 3D point clouds obtained by reconstructing depth for each individual light field. To further validate the method, we employ the pose estimates to merge light fields captured with hand-held consumer light field cameras into refocusable panoramas.
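The Plücker ray coordinates used in the analysis can be written down directly: a ray through point p with unit direction d has coordinates (d, m) with moment m = p × d, and two rays are coplanar (intersecting or parallel) exactly when the reciprocal product d1·m2 + d2·m1 vanishes. A small sketch of that constraint (not the paper's full linear pose solver):

```python
import numpy as np

def plucker(p, d):
    """Pluecker coordinates (direction, moment) of the ray through p along d."""
    d = np.asarray(d, float)
    d = d / np.linalg.norm(d)
    return d, np.cross(np.asarray(p, float), d)

def reciprocal_product(r1, r2):
    """Zero iff the two rays are coplanar (intersecting or parallel)."""
    d1, m1 = r1
    d2, m2 = r2
    return float(np.dot(d1, m2) + np.dot(d2, m1))

# Two rays through the common point (1, 2, 3) must be coplanar.
r1 = plucker([1, 2, 3], [1, 0, 0])
r2 = plucker([1, 2, 3], [0, 1, 1])
```

Ray-space correspondences between two light field cameras give many such bilinear constraints on the unknown rotation and translation, which is what makes a linear pose solution possible.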

Journal ArticleDOI
14 Apr 2015-Sensors
TL;DR: This paper proposes an interaction setup combining the visualization of objects within the Field Of View (FOV) of a light field display and their selection through freehand gesture tracked by the Leap Motion Controller.
Abstract: This paper reports on the design and evaluation of direct 3D gesture interaction with a full horizontal parallax light field display. A light field display defines a visual scene using directional light beams emitted from multiple light sources as if they were emitted from scene points. Each scene point is rendered individually, resulting in more realistic and accurate 3D visualization compared to other 3D display technologies. We propose an interaction setup combining the visualization of objects within the Field Of View (FOV) of a light field display and their selection through freehand gestures tracked by the Leap Motion Controller. The accuracy and usefulness of the proposed interaction setup were also evaluated in a user study with test subjects. The results of the study revealed a high user preference for freehand interaction with the light field display, as well as the relatively low cognitive demand of this technique. Further, our results revealed some limitations of the proposed setup and adjustments to be addressed in future work.

Journal ArticleDOI
TL;DR: In this paper, the formation of light bullets in the presence of anomalous group velocity dispersion is presented within the same general scenario for condensed matter and humid air, and the temporal and spectral parameters of light bullet formation during filamentation in fused silica and humid air are obtained.
Abstract: The formation of light bullets in the presence of anomalous group velocity dispersion is presented within the same general scenario for condensed matter and humid air. The temporal and spectral parameters of light bullets during filamentation in fused silica and humid air are obtained. A light bullet (LB) is a short-lived formation in a femtosecond filament with high spatiotemporal light field localization. The sequential formation of quasi-periodic LBs is obtained numerically and is confirmed experimentally by autocorrelation measurements of the LB duration. The estimated LB duration reaches a few-cycle value. It is established that the generation of each LB is accompanied by the ejection of a supercontinuum (SC) into the visible spectrum, and an isolated anti-Stokes wing forms in the visible region of the SC as a result of destructive interference of broadband spectral components. It was found that the energy of the visible SC increases discretely according to the number of LBs in the filament. We demonstrate that the model of ionization in a solid dielectric used in the numerical simulation fundamentally affects the obtained scenario of LB formation. The possibility of LB formation under filamentation of mid-IR pulses in the atmosphere was shown by numerical simulation.

Journal ArticleDOI
TL;DR: A method to reduce the adverse effect of unreliable local estimations is introduced, which helps to get rid of errors in specular areas and edges where depth values are discontinuous.
Abstract: In this paper, we investigate how the recently emerged photography technology—the light field—can benefit depth map estimation, a challenging computer vision problem. A novel framework is proposed to reconstruct continuous depth maps from light field data. Unlike many traditional methods for the stereo matching problem, the proposed method does not need to quantize the depth range. By making use of the structure information amongst the densely sampled views in light field data, we can obtain dense and relatively reliable local estimations. Starting from initial estimations, we go on to propose an optimization method based on solving a sparse linear system iteratively with a conjugate gradient method. Two different affinity matrices for the linear system are employed to balance the efficiency and quality of the optimization. Then, a depth-assisted segmentation method is introduced so that different segments can employ different affinity matrices. Experimental results on both synthetic and real light fields demonstrate that our continuous results are more accurate, efficient, and able to preserve more details compared with discrete approaches.
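The optimization step, solving a sparse linear system iteratively with conjugate gradient, can be sketched on a 1D toy problem: sparse reliable estimates plus a smoothness (Laplacian) term, with an affinity far simpler than the two matrices used in the paper:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg

# Toy 1D analogue of propagating sparse reliable depth estimates to a
# dense map: data fidelity on reliable samples plus a smoothness term,
# posed as the sparse SPD system (D + lam * L) z = D z0 and solved by CG.
n = 200
rng = np.random.default_rng(2)
z_true = np.sin(np.linspace(0, 2 * np.pi, n))
reliable = rng.random(n) < 0.2               # sparse local estimates
z0 = np.where(reliable, z_true, 0.0)

D = diags(reliable.astype(float))            # data weights (diagonal)
L = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))  # 1D Laplacian
lam = 1.0
A = (D + lam * L).tocsr()
z, info = cg(A, D @ z0)                      # info == 0 on convergence
```

In the paper, the Laplacian is replaced by image-guided affinity matrices over the 2D depth map, and a depth-assisted segmentation decides which affinity each segment uses, but the linear-system structure is the same.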

Journal ArticleDOI
TL;DR: The 5-D spectrum of an object in an LFV is derived for the important practical case of objects moving with constant velocity and at constant depth, and it is shown that the region of support (ROS) of the 5-D spectrum is a skewed 3-D hyperfan in the 5-D frequency domain.
Abstract: Five-dimensional (5-D) light field video (LFV) (also known as plenoptic video) is a more powerful form of representing information of dynamic scenes compared to conventional three-dimensional (3-D) video. In this paper, the 5-D spectrum of an object in an LFV is derived for the important practical case of objects moving with constant velocity and at constant depth. In particular, it is shown that the region of support (ROS) of the 5-D spectrum is a skewed 3-D hyperfan in the 5-D frequency domain, with the degree of skew depending on the velocity and depth of the moving object. Based on this analysis, a 5-D depth-velocity digital filter to enhance moving objects in LFVs is proposed, described and implemented. Further, by means of the commercially available Lytro light-field camera, LFVs of real scenes are generated and used to test and confirm the performance of the 5-D depth-velocity filters for enhancing such objects.

Journal ArticleDOI
TL;DR: In this article, a plenoptic sensor was designed to retrieve phase and amplitude changes resulting from a laser beam's propagation through atmospheric turbulence, which can be used to guide adaptive optics systems in directing beam propagation through turbulence.
Abstract: We have designed a plenoptic sensor to retrieve phase and amplitude changes resulting from a laser beam's propagation through atmospheric turbulence. Compared with the commonly restricted domain of (-π,π) in phase reconstruction by interferometers, the reconstructed phase obtained by the plenoptic sensors can be continuous up to a multiple of 2π. When compared with conventional Shack-Hartmann sensors, ambiguities caused by interference or low intensity, such as branch points and branch cuts, are less likely to happen and can be adaptively avoided by our reconstruction algorithm. In the design of our plenoptic sensor, we modified the fundamental structure of a light field camera into a mini Keplerian telescope array by accurately cascading the back focal plane of its objective lens with a microlens array's front focal plane and matching the numerical aperture of both components. Unlike light field cameras designed for incoherent imaging purposes, our plenoptic sensor operates on the complex amplitude of the incident beam and distributes it into a matrix of images that are simpler and less subject to interference than a global image of the beam. Then, with the proposed reconstruction algorithms, the plenoptic sensor is able to reconstruct the wavefront and a phase screen at an appropriate depth in the field that causes the equivalent distortion on the beam. The reconstructed results can be used to guide adaptive optics systems in directing beam propagation through atmospheric turbulence. In this paper, we will show the theoretical analysis and experimental results obtained with the plenoptic sensor and its reconstruction algorithms.
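The cascading condition described, back focal plane of the objective coinciding with the front focal plane of the microlens array and matched numerical apertures, reduces to simple f-number arithmetic: confocal spacing f_obj + f_micro, and a microlens focal length chosen so its f-number equals the objective's image-side f-number. A small check with illustrative values (not taken from the paper):

```python
# Mini Keplerian telescope condition for the plenoptic sensor:
# confocal spacing = f_objective + f_microlens, and the microlens
# f-number matches the objective's so each microlens cell is filled.

f_obj = 100.0      # objective focal length (mm), illustrative
D_obj = 25.0       # objective aperture diameter (mm), illustrative
pitch = 0.5        # microlens pitch (mm), illustrative

f_number_obj = f_obj / D_obj
f_micro = pitch * f_number_obj     # microlens focal length matching the NA
separation = f_obj + f_micro       # confocal cascade spacing

print(f_micro, separation)         # 2.0 102.0
```

Matching the f-numbers ensures the light cone behind the objective exactly fills each microlens sub-image without crosstalk between neighboring cells.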


Proceedings ArticleDOI
TL;DR: In this article, a phase-only spatial light modulator (SLM) was proposed to tailor the amplitude and phase of a complex light field, as well as the transverse states of polarization.
Abstract: We present a method to tailor not only amplitude and phase of a complex light field, but also the transverse states of polarization. Starting from the implementation of spatially inhomogeneous distributions of polarization, so called Poincare beams, we realized a holographic optical technique that allows arbitrarily modulating the states of polarization by a single phase-only spatial light modulator (SLM). Moreover, the effective amplitude modulation of higher order beams performed by a phase-only SLM is shown. We will demonstrate the capabilities of our method ranging from the modulation of higher order Gaussian modes including desired polarization characteristics to the generation of polarization singularities at arbitrary points in the transverse plane of Poincare beams.

Journal ArticleDOI
Bin Liu1, Yuan Yuan1, Sai Li1, Yong Shuai1, He-Ping Tan1 
TL;DR: In this paper, a Monte Carlo method based on ray splitting and a physical model of a light-field camera with a microlens array are presented to simulate its imaging and refocusing processes.

Patent
20 Apr 2015
TL;DR: The light field display can correct for the focus and cylindrical refractive error of the subject, and can be used to perform a variety of visual field testing strategies by rendering visual stimuli to the eye as discussed by the authors.
Abstract: Systems and methods for performing visual field testing using light field displays are described. The light field display ( 201, 202; 801; 504; 601 - 604; 704; 801 ), that can correct for the focus and cylindrical refractive error of the subject, can be used to perform a variety of visual field testing strategies by rendering visual stimuli to the eye ( 203 ). The light field display may be included near the subject's eye, or reimaged by a relay optical system ( 802 ). Several embodiments of head and arm mounted systems ( 711; 704 ) including a near eye light field display ( 704 ) are presented.

Journal ArticleDOI
TL;DR: In this article, an analogue model for controllable photon generation via the dynamical Casimir effect (DCE) in a cavity containing a degenerate optical parametric amplifier (OPA), which is pumped by an amplitude-modulated field.
Abstract: We present and investigate an analogue model for controllable photon generation via the dynamical Casimir effect (DCE) in a cavity containing a degenerate optical parametric amplifier (OPA), which is pumped by an amplitude-modulated field. The time modulation of the pump field in the model OPA system is equivalent to a periodic modulation of the cavity length, which is responsible for the generation of the Casimir radiation. By taking into account the rapidly oscillating terms of the modulation frequency, the effects of the corresponding counter-rotating terms (CRTs) on the analogue Casimir radiation clearly emerge. We find that the mean number of generated photons and their quantum statistical properties exhibit oscillatory behaviors, which are controllable through the modulation frequency as an external control parameter. We also find that the time-modulated pumping may lead to the recently predicted phenomenon, the so-called “anti-DCE,” in which pair photons can be coherently annihilated. We show that the Casimir radiation exhibits quadrature squeezing, photon bunching, and super-Poissonian statistics, which are controllable by modulation frequency. We also calculate the power spectrum of the intracavity light field. We find that the appearance of sidebands in the spectrum is due to the presence of the CRTs.

Journal ArticleDOI
TL;DR: In this article, the authors examined two schemes of atomic levels and field polarizations where the guided probe field is quasilinearly polarized along the major or minor principal axis, which is parallel or perpendicular to the radial direction of the atomic position.
Abstract: We study the propagation of guided light along an array of three-level atoms in the vicinity of an optical nanofiber under the condition of electromagnetically induced transparency. We examine two schemes of atomic levels and field polarizations where the guided probe field is quasilinearly polarized along the major or minor principal axis, which is parallel or perpendicular, respectively, to the radial direction of the atomic position. Our numerical calculations indicate that 200 cesium atoms in a linear array with a length of 100 µm at a distance of 200 nm from the surface of a nanofiber with a radius of 250 nm can slow down the speed of guided probe light by a factor of about 3.5×10^6 (the corresponding group delay is about 1.17 µs). In the neighborhood of the Bragg resonance, a significant fraction of the guided probe light can be reflected back with a negative group delay. The reflectivity and the group delay of the reflected field do not depend on the propagation direction of the probe field. However, when the input guided light is quasilinearly polarized along the major principal axis, the transmittivity and the group delay of the transmitted field substantially depend on the propagation direction of the probe field. Under the Bragg resonance condition, an array of atoms prepared in an appropriate internal state can transmit guided light polarized along the major principal axis in one specific direction even in the limit of infinitely large atom numbers. The directionality of transmission of guided light through the array of atoms is a consequence of the existence of a longitudinal component of the guided light field as well as the ellipticity of both the field polarization and the atomic dipole vector.
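The quoted numbers are self-consistent: a slow-down factor of 3.5×10^6 over a 100 µm array gives a group delay of L × factor / c ≈ 1.17 µs. A quick check:

```python
# Group delay implied by the quoted slow-down factor:
# tau = L / v_g = L * slowdown / c.
c = 299_792_458.0          # speed of light in vacuum (m/s)
L = 100e-6                 # array length (m)
slowdown = 3.5e6           # group-velocity reduction factor

group_delay = L * slowdown / c
print(group_delay)         # ~1.17e-6 s
```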

Journal ArticleDOI
TL;DR: In this article, a scheme for photon trapping in an optical resonator coupled with two-level atoms was proposed and analyzed, where the output light from the cavity is suppressed while the intracavity light field is near maximum due to the excitation of the polariton state of the coupled cavity and atom system.
Abstract: We propose and analyze a scheme for photon trapping in an optical resonator coupled with two-level atoms. We show that when the cavity is excited by two identical light fields, one from each end of the cavity, the output light from the cavity is suppressed while the intracavity light field is near maximum due to the excitation of the polariton state of the coupled cavity and atom system. We also present methods for the direct probing of the trapped polariton state. The photon trapping is manifested by the destructive interference of the transmitted light and the incident light, which is conditioned on the presence of incoherent processes such as spontaneous decay of the atomic excitation of the polariton state. Such photon trapping is quite generic and should be observable experimentally in a variety of cavity quantum electrodynamics systems.

Journal ArticleDOI
Li-Yi Wei, Chia-Kai Liang, Graham B. Myhre, Colvin Pitts, Kurt Akeley 
27 Jul 2015
TL;DR: This work proposes designs in main-lens aberrations and microlens/photosensor sample patterns, and evaluates them through simulated measurements and captured results with the hardware prototype.
Abstract: Conventional camera designs usually shun sample irregularities and lens aberrations. We demonstrate that such irregularities and aberrations, when properly applied, can improve the quality and usability of light field cameras. Examples include spherical aberrations for the main lens, and misaligned sampling patterns for the microlens and photosensor elements. These observations are a natural consequence of a key difference between conventional and light field cameras: optimizing for a single captured 2D image versus a range of reprojected 2D images from a captured 4D light field. We propose designs in main-lens aberrations and microlens/photosensor sample patterns, and evaluate them through simulated measurements and captured results with our hardware prototype.

Journal ArticleDOI
TL;DR: In this paper, a broadband optical rotator is proposed that rotates the polarization plane of linearly polarized light by any desired angle over a wide range of wavelengths; it is composed of a sequence of half-wave plates whose fast polarization axes are set at specific relative angles.
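
The composite-wave-plate construction rests on a standard Jones-calculus identity: two half-wave plates with fast axes at angles θ₁ and θ₂ act together as a pure polarization rotator by 2(θ₂ − θ₁), independent of the input polarization. A minimal numerical check of that identity (the angles below are arbitrary illustrations, not the specific design angles of the paper):

```python
import numpy as np

def hwp(theta):
    """Jones matrix of a half-wave plate with fast axis at angle theta."""
    c, s = np.cos(2 * theta), np.sin(2 * theta)
    return np.array([[c, s], [s, -c]])

theta1, theta2 = np.deg2rad(10.0), np.deg2rad(32.5)
pair = hwp(theta2) @ hwp(theta1)      # two plates in sequence

phi = 2 * (theta2 - theta1)           # expected rotation angle (45 degrees here)
rot = np.array([[np.cos(phi), -np.sin(phi)],
                [np.sin(phi),  np.cos(phi)]])
print(np.allclose(pair, rot))         # the pair is exactly a rotator by phi
```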

Patent
25 Mar 2015
TL;DR: In this paper, a rapid three-dimensional reconstruction method based on light field digital refocusing was proposed, where a light field data acquisition device obtains spatial four-dimensional light field data, and a digital refocusing module applies digital refocusing to the light field data to obtain a focal-plane sequence image.
Abstract: The invention provides a rapid three-dimensional reconstruction method based on light field digital refocusing. The method includes the following steps: first, a light field data acquisition device obtains spatial four-dimensional light field data; second, a digital refocusing module applies digital refocusing to the four-dimensional light field data to obtain a focal-plane sequence image; finally, a three-dimensional reconstruction module performs three-dimensional reconstruction on the focal-plane sequence image. The invention further provides a rapid three-dimensional reconstruction system based on light field digital refocusing, comprising the light field data acquisition device, the digital refocusing module, and the three-dimensional reconstruction module. With this method and system, only a single exposure is needed, neither the camera nor the target needs to be moved, and the result can be viewed from any angle. The shooting difficulty and the complexity of the reconstruction algorithm are reduced, the image acquisition time is shortened, the applicable depth range of the depth-from-focus (DFF) algorithm is extended, and the method and system are suitable for three-dimensional reconstruction of moving targets and of scenes with large depth of field.
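
The refocusing step such a pipeline builds on is the classic shift-and-add synthesis: each angular view of the 4D light field is translated in proportion to its offset from the central view, then all views are averaged; sweeping the shift scale produces the focal-plane sequence. A minimal integer-shift sketch (`refocus` and `alpha` are illustrative names, not the patent's notation):

```python
import numpy as np

def refocus(lightfield, alpha):
    """Shift-and-add refocusing of a 4D light field L[u, v, y, x].

    Each angular sample (u, v) is shifted by alpha times its offset from
    the central view, then all views are averaged; alpha = 0 reproduces
    the captured focal plane.
    """
    nu, nv, ny, nx = lightfield.shape
    out = np.zeros((ny, nx))
    for u in range(nu):
        for v in range(nv):
            dy = int(round(alpha * (u - nu // 2)))
            dx = int(round(alpha * (v - nv // 2)))
            out += np.roll(lightfield[u, v], (dy, dx), axis=(0, 1))
    return out / (nu * nv)

# Toy 4D light field: 3x3 angular views of a 32x32 scene
rng = np.random.default_rng(0)
lf = rng.random((3, 3, 32, 32))
stack = [refocus(lf, a) for a in (-1.0, 0.0, 1.0)]  # focal-plane sequence
print(len(stack), stack[0].shape)
```

With alpha = 0 the result is simply the mean over all views, which is a convenient correctness check for the implementation.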