
Showing papers on "Light field published in 2017"


Journal ArticleDOI
19 May 2017-Science
TL;DR: The observation of up to ninth-order harmonics in graphene excited by mid-infrared laser pulses at room temperature opens up the possibility of investigating strong-field and ultrafast dynamics and nonlinear behavior of massless Dirac fermions.
Abstract: The electronic properties of graphene can give rise to a range of nonlinear optical responses. One of the most desirable nonlinear optical processes is high-harmonic generation (HHG) originating from coherent electron motion induced by an intense light field. Here, we report on the observation of up to ninth-order harmonics in graphene excited by mid-infrared laser pulses at room temperature. The HHG in graphene is enhanced by an elliptically polarized laser excitation, and the resultant harmonic radiation has a particular polarization. The observed ellipticity dependence is reproduced by a fully quantum mechanical treatment of HHG in solids. The zero-gap nature causes the unique properties of HHG in graphene, and our findings open up the possibility of investigating strong-field and ultrafast dynamics and nonlinear behavior of massless Dirac fermions.

498 citations


Journal ArticleDOI
TL;DR: A comprehensive overview and discussion of research in light field image processing, including basic light field representation and theory, acquisition, super-resolution, depth estimation, compression, editing, processing algorithms for light field display, and computer vision applications of light field data are presented.
Abstract: Light field imaging has emerged as a technology that allows capturing richer visual information from our world. As opposed to traditional photography, which captures a 2D projection of the light in the scene integrating the angular domain, light fields collect radiance from rays in all directions, demultiplexing the angular information lost in conventional photography. On the one hand, this higher dimensional representation of visual data offers powerful capabilities for scene understanding, and substantially improves the performance of traditional computer vision problems such as depth sensing, post-capture refocusing, segmentation, video stabilization, material classification, etc. On the other hand, the high-dimensionality of light fields also brings up new challenges in terms of data capture, data compression, content editing, and display. Taking these two elements together, research in light field image processing has become increasingly popular in the computer vision, computer graphics, and signal processing communities. In this paper, we present a comprehensive overview and discussion of research in this field over the past 20 years. We focus on all aspects of light field image processing, including basic light field representation and theory, acquisition, super-resolution, depth estimation, compression, editing, processing algorithms for light field display, and computer vision applications of light field data.

412 citations


Journal ArticleDOI
12 Oct 2017-Nature
TL;DR: Graphene is a promising platform with which to achieve light-field-driven control of electrons in a conducting material, because of its broadband and ultrafast optical response, weak screening and high damage threshold, and it is shown that a current induced in monolayer graphene by two-cycle laser pulses is sensitive to the electric-field waveform.
Abstract: The ability to steer electrons using the strong electromagnetic field of light has opened up the possibility of controlling electron dynamics on the sub-femtosecond (less than 10^-15 seconds) timescale. In dielectrics and semiconductors, various light-field-driven effects have been explored, including high-harmonic generation, sub-optical-cycle interband population transfer and the non-perturbative change of the transient polarizability. In contrast, much less is known about light-field-driven electron dynamics in narrow-bandgap systems or in conductors, in which screening due to free carriers or light absorption hinders the application of strong optical fields. Graphene is a promising platform with which to achieve light-field-driven control of electrons in a conducting material, because of its broadband and ultrafast optical response, weak screening and high damage threshold. Here we show that a current induced in monolayer graphene by two-cycle laser pulses is sensitive to the electric-field waveform, that is, to the exact shape of the optical carrier field of the pulse, which is controlled by the carrier-envelope phase, with a precision on the attosecond (10^-18 seconds) timescale. Such a current, dependent on the carrier-envelope phase, shows a striking reversal of the direction of the current as a function of the driving field amplitude at about two volts per nanometre. This reversal indicates a transition of light-matter interaction from the weak-field (photon-driven) regime to the strong-field (light-field-driven) regime, where the intraband dynamics influence interband transitions. We show that in this strong-field regime the electron dynamics are governed by sub-optical-cycle Landau-Zener-Stückelberg interference, composed of coherent repeated Landau-Zener transitions on the femtosecond timescale. Furthermore, the influence of this sub-optical-cycle interference can be controlled with the laser polarization state. These coherent electron dynamics in graphene take place on a hitherto unexplored timescale, faster than electron-electron scattering (tens of femtoseconds) and electron-phonon scattering (hundreds of femtoseconds). We expect these results to have direct ramifications for band-structure tomography and light-field-driven petahertz electronics.
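
For reference, a single diabatic (Landau-Zener) transition of the kind invoked above is usually estimated with the textbook two-level formula; the symbols below are generic (Δ the interband coupling, v the sweep rate of the diabatic energy difference) and are not values quoted in the paper:

\[
P_{\mathrm{LZ}} = \exp\!\left(-\frac{2\pi\,\Delta^{2}}{\hbar\, v}\right),
\qquad
v = \left|\frac{d}{dt}\bigl(E_{1}(t) - E_{2}(t)\bigr)\right|.
\]

Repeating such transitions coherently within one optical cycle is what produces the Landau-Zener-Stückelberg interference described in the abstract.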

302 citations


Proceedings ArticleDOI
25 Jun 2017
TL;DR: In this article, a chiral interaction between single quantum emitters and transversally confined photons was observed in a whispering gallery mode microresonator, where the emission direction of light into the structure is controlled by the polarization of the excitation light or by the internal quantum state of the emitter.
Abstract: Controlling the interaction of light and matter is the basis for diverse applications ranging from light technology to quantum information processing. Nowadays, many of these applications are based on nanophotonic structures. It turns out that the confinement of light in such nanostructures imposes an inherent link between its local polarization and its propagation direction, also referred to as spin-momentum locking of light [1]. Remarkably, this leads to chiral, i.e., propagation direction-dependent effects in the emission and absorption of light, and elementary processes of light-matter interaction are fundamentally altered. For example, when coupling plasmonic particles or atoms to evanescent fields, the intrinsic mirror symmetry of the particles' emission can be broken. In our group, we observed this effect in the interaction between single rubidium atoms and the evanescent part of a light field that is confined by continuous total internal reflection in a whispering-gallery-mode microresonator [2]. This subsequently allowed us to realize chiral nanophotonic interfaces in which the emission direction of light into the structure is controlled either by the polarization of the excitation light [3] or by the internal quantum state of the emitter [4]. Moreover, we employed this chiral interaction to demonstrate an integrated optical isolator [5] as well as an integrated optical circulator [6] which operate at the single-photon level and which exhibit low loss. The latter are the first two examples of a new class of nonreciprocal nanophotonic devices which exploit the chiral interaction between single quantum emitters and transversally confined photons.

238 citations


Journal ArticleDOI
Nianyi Li, Jinwei Ye, Yu Ji, Haibin Ling, Jingyi Yu
TL;DR: Experiments show that the saliency detection scheme can robustly handle challenging scenarios such as similar foreground and background, cluttered background, complex occlusions, etc., and achieve high accuracy and robustness.
Abstract: Existing saliency detection approaches use images as inputs and are sensitive to foreground/background similarities, complex background textures, and occlusions. We explore the problem of using light fields as input for saliency detection. Our technique is enabled by the availability of commercial plenoptic cameras that capture the light field of a scene in a single shot. We show that the unique refocusing capability of light fields provides useful focusness, depths, and objectness cues. We further develop a new saliency detection algorithm tailored for light fields. To validate our approach, we acquire a light field database of a range of indoor and outdoor scenes and generate the ground truth saliency map. Experiments show that our saliency detection scheme can robustly handle challenging scenarios such as similar foreground and background, cluttered background, complex occlusions, etc., and achieve high accuracy and robustness.
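
A minimal sketch of how refocusing-based cues from a light field could be fused into a saliency map, in the spirit of the abstract above. The focus measure, the slice weighting, and all parameter values are illustrative assumptions, not the authors' actual algorithm.

import numpy as np
from scipy.ndimage import laplace, gaussian_filter

def focusness(refocused):
    # Simple focus measure: smoothed magnitude of the Laplacian response.
    return gaussian_filter(np.abs(laplace(refocused)), sigma=3)

def fuse_saliency(focal_stack, depth, w_focus=0.5, w_depth=0.5):
    """Fuse per-slice focusness with a depth prior into one saliency map.
    focal_stack: list of 2D refocused slices, ordered near -> far
    depth:       2D map in [0, 1], larger = closer to the camera
    """
    n = len(focal_stack)
    # Assumption: slices refocused on nearer planes are more likely to contain
    # the salient object, so they receive larger weights.
    slice_weights = np.linspace(1.0, 0.2, n)
    focus_cue = sum(w * focusness(s) for w, s in zip(slice_weights, focal_stack)) / n
    focus_cue = focus_cue / (focus_cue.max() + 1e-8)
    sal = w_focus * focus_cue + w_depth * depth   # depth acts as an objectness-like prior
    return (sal - sal.min()) / (np.ptp(sal) + 1e-8)

# Toy usage with random stand-in data:
stack = [np.random.rand(128, 128) for _ in range(5)]
depth = np.random.rand(128, 128)
saliency = fuse_saliency(stack, depth)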

236 citations


Proceedings ArticleDOI
21 Jul 2017
TL;DR: This paper takes advantage of the clear texture structure of the epipolar plane image (EPI) in the light field data and model the problem of light field reconstruction from a sparse set of views as a CNN-based angular detail restoration on EPI.
Abstract: In this paper, we take advantage of the clear texture structure of the epipolar plane image (EPI) in the light field data and model the problem of light field reconstruction from a sparse set of views as a CNN-based angular detail restoration on EPI. We indicate that one of the main challenges in sparsely sampled light field reconstruction is the information asymmetry between the spatial and angular domain, where the detail portion in the angular domain is damaged by undersampling. To balance the spatial and angular information, the spatial high frequency components of an EPI are removed using EPI blur, before feeding to the network. Finally, a non-blind deblur operation is used to recover the spatial detail suppressed by the EPI blur. We evaluate our approach on several datasets including synthetic scenes, real-world scenes and challenging microscope light field data. We demonstrate the high performance and robustness of the proposed framework compared with the state-of-the-art algorithms. We also show a further application for depth enhancement by using the reconstructed light field.
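
A rough sketch of the three-stage structure described above (EPI blur, learned angular detail restoration, non-blind deblur). The Gaussian kernel width is an arbitrary choice, and the restoration step is a plain linear interpolation standing in for the trained CNN.

import numpy as np
from scipy.ndimage import gaussian_filter1d

def epi_blur(epi, sigma=1.5):
    # Remove spatial high frequencies (along the columns) to balance the
    # spatial/angular information before restoration.
    return gaussian_filter1d(epi, sigma=sigma, axis=1)

def restore_angular_detail(blurred_epi, target_views):
    # Stand-in for the CNN: linearly interpolate the missing angular rows.
    v = np.linspace(0, blurred_epi.shape[0] - 1, target_views)
    lo = np.floor(v).astype(int)
    hi = np.minimum(lo + 1, blurred_epi.shape[0] - 1)
    t = (v - lo)[:, None]
    return (1 - t) * blurred_epi[lo] + t * blurred_epi[hi]

def nonblind_deblur(observed, sigma=1.5, iters=15):
    # Richardson-Lucy deconvolution with the known 1D Gaussian blur,
    # standing in for the paper's non-blind deblur step.
    est = observed.copy()
    for _ in range(iters):
        ratio = observed / np.maximum(epi_blur(est, sigma), 1e-6)
        est = est * epi_blur(ratio, sigma)   # Gaussian kernel is symmetric
    return est

sparse_epi = np.random.rand(4, 256)   # 4 input views, 256 spatial samples
dense_epi = restore_angular_detail(epi_blur(sparse_epi), target_views=16)
dense_epi = nonblind_deblur(dense_epi)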

184 citations


Proceedings ArticleDOI
01 Oct 2017
TL;DR: In this paper, a convolutional neural network (CNN) is used to estimate scene geometry, a stage that renders a Lambertian light field using that geometry, and a second CNN that predicts occluded rays and non-Lambertian effects.
Abstract: We present a machine learning algorithm that takes as input a 2D RGB image and synthesizes a 4D RGBD light field (color and depth of the scene in each ray direction). For training, we introduce the largest public light field dataset, consisting of over 3300 plenoptic camera light fields of scenes containing flowers and plants. Our synthesis pipeline consists of a convolutional neural network (CNN) that estimates scene geometry, a stage that renders a Lambertian light field using that geometry, and a second CNN that predicts occluded rays and non-Lambertian effects. Our algorithm builds on recent view synthesis methods, but is unique in predicting RGBD for each light field ray and improving unsupervised single image depth estimation by enforcing consistency of ray depths that should intersect the same scene point.
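
A schematic of the pipeline described above (depth prediction, Lambertian re-projection, residual prediction). The two CNNs are replaced by trivial stand-ins; only the structure of the pipeline mirrors the abstract, and the warping convention is a simplifying assumption.

import numpy as np

def predict_depth(rgb):
    # Stand-in for the first CNN: a constant depth plane.
    return np.full(rgb.shape[:2], 1.0)

def render_lambertian_view(rgb, depth, du, dv, focal=1.0):
    # Warp the central image to sub-aperture offset (du, dv) assuming a
    # Lambertian scene: per-pixel disparity = focal / depth.
    h, w, _ = rgb.shape
    ys, xs = np.indices((h, w))
    disp = focal / depth
    src_x = np.clip(np.round(xs + du * disp).astype(int), 0, w - 1)
    src_y = np.clip(np.round(ys + dv * disp).astype(int), 0, h - 1)
    return rgb[src_y, src_x]

def predict_residual(warped, du, dv):
    # Stand-in for the second CNN, which would fill occluded rays and add
    # non-Lambertian effects; here it is a no-op.
    return np.zeros_like(warped)

def synthesize_light_field(rgb, angular=8):
    depth = predict_depth(rgb)
    views = {}
    for u in range(angular):
        for v in range(angular):
            du, dv = u - angular // 2, v - angular // 2
            lam = render_lambertian_view(rgb, depth, du, dv)
            views[(u, v)] = lam + predict_residual(lam, du, dv)
    return views, depth   # RGB per view plus the depth shared by the rays

views, depth = synthesize_light_field(np.random.rand(64, 64, 3))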

168 citations


Journal ArticleDOI
Yunsu Bok, Hae-Gon Jeon, In So Kweon
TL;DR: A novel method is presented for the geometric calibration of micro-lens-based light field cameras that directly utilizes raw images and formulates a new projection model containing fewer parameters than previous models.
Abstract: We present a novel method for the geometric calibration of micro-lens-based light field cameras. Accurate geometric calibration is the basis of various applications. Instead of using sub-aperture images, we directly utilize raw images for calibration. We select appropriate regions in raw images and extract line features from micro-lens images in those regions. For the entire process, we formulate a new projection model of a micro-lens-based light field camera, which contains a smaller number of parameters than previous models. The model is transformed into a linear form using line features. We compute the initial solution of both the intrinsic and the extrinsic parameters by a linear computation and refine them via non-linear optimization. Experimental results demonstrate the accuracy of the correspondences between rays and pixels in raw images, as estimated by the proposed method.

162 citations


Journal ArticleDOI
20 Nov 2017
TL;DR: The combination of pupil tracking and advanced near-eye display techniques opens new possibilities for future augmented reality.
Abstract: We introduce an augmented reality near-eye display dubbed "Retinal 3D." Key features of the proposed display system are as follows: Focus cues are provided by generating the pupil-tracked light field that can be directly projected onto the retina. Generated focus cues are valid over a large depth range since laser beams are shaped for a large depth of field (DOF). Pupil-tracked light field generation significantly reduces the needed information/computation load. Also, it provides a "dynamic eye-box" which can be a breakthrough that overcomes the drawbacks of retinal projection-type displays. For implementation, we utilized a holographic optical element (HOE) as an image combiner, which allowed high transparency with a thin structure. Compared with current augmented reality displays, the proposed system shows competitive performance: a large field of view (FOV), high transparency, high contrast, high resolution, as well as focus cues in a large depth range. Two prototypes are presented along with experimental results and assessments. Analyses of the DOF of light rays and the validity of focus cue generation are presented as well. The combination of pupil tracking and advanced near-eye display techniques opens new possibilities for future augmented reality.

154 citations


Posted Content
TL;DR: This work presents a machine learning algorithm that takes as input a 2D RGB image and synthesizes a 4D RGBD light field (color and depth of the scene in each ray direction), unique in predicting RGBD for each light field ray and improving unsupervised single image depth estimation by enforcing consistency of ray depths that should intersect the same scene point.
Abstract: We present a machine learning algorithm that takes as input a 2D RGB image and synthesizes a 4D RGBD light field (color and depth of the scene in each ray direction). For training, we introduce the largest public light field dataset, consisting of over 3300 plenoptic camera light fields of scenes containing flowers and plants. Our synthesis pipeline consists of a convolutional neural network (CNN) that estimates scene geometry, a stage that renders a Lambertian light field using that geometry, and a second CNN that predicts occluded rays and non-Lambertian effects. Our algorithm builds on recent view synthesis methods, but is unique in predicting RGBD for each light field ray and improving unsupervised single image depth estimation by enforcing consistency of ray depths that should intersect the same scene point. Please see our supplementary video at this https URL

150 citations


Journal ArticleDOI
Youngjin Yoon, Hae-Gon Jeon, Donggeun Yoo, Joon-Young Lee, In So Kweon
TL;DR: This letter presents a novel method to simultaneously up-sample both the spatial and angular resolutions of a light field image via a deep convolutional neural network, and trains the whole network end-to-end.
Abstract: Commercial light field cameras provide spatial and angular information, but their limited resolution becomes an important problem in practical use. In this letter, we present a novel method for light field image super-resolution (SR) to simultaneously up-sample both the spatial and angular resolutions of a light field image via a deep convolutional neural network. We first augment the spatial resolution of each subaperture image by a spatial SR network, then novel views between super-resolved subaperture images are generated by three different angular SR networks according to the novel view locations. We improve both the efficiency of training and the quality of angular SR results by using weight sharing. In addition, we provide a new light field image dataset for training and validating the network. We train our whole network end-to-end, and show state-of-the-art performance on quantitative and qualitative evaluations.
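
A minimal sketch of the two-stage flow described above: each sub-aperture image is spatially up-sampled first, and novel in-between views are then generated from the super-resolved neighbours. Cubic interpolation and simple averaging stand in for the spatial and angular SR networks.

import numpy as np
from scipy.ndimage import zoom

def spatial_sr(view, scale=2):
    # Stand-in for the spatial SR network: cubic up-sampling of one sub-aperture image.
    return zoom(view, scale, order=3)

def angular_sr(view_a, view_b):
    # Stand-in for an angular SR network: the in-between view is approximated
    # by the average of its two super-resolved neighbours.
    return 0.5 * (view_a + view_b)

# Toy 2x2 light field of 32x32 views, up-sampled to 64x64 with in-between views.
lf = np.random.rand(2, 2, 32, 32)
sr = np.array([[spatial_sr(lf[u, v]) for v in range(2)] for u in range(2)])
horizontal_mid = angular_sr(sr[0, 0], sr[0, 1])
vertical_mid = angular_sr(sr[0, 0], sr[1, 0])
diagonal_mid = angular_sr(angular_sr(sr[0, 0], sr[1, 1]), angular_sr(sr[0, 1], sr[1, 0]))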

Journal ArticleDOI
TL;DR: A scheme to coherently control the electron wave function from attosecond to zeptosecond timescales by using semi-infinite light fields is discussed.
Abstract: Light-electron interaction in empty space is the seminal ingredient for free-electron lasers and also for controlling electron beams to dynamically investigate materials and molecules. Pushing the coherent control of free electrons by light to unexplored timescales, below the attosecond, would enable unprecedented applications in light-assisted electron quantum circuits and diagnostics at extremely small timescales, such as those governing intramolecular electronic motion and nuclear phenomena. We experimentally demonstrate attosecond coherent manipulation of the electron wave function in a transmission electron microscope, and show that it can be pushed down to the zeptosecond regime with existing technology. We make a relativistic pulsed electron beam interact in free space with an appropriately synthesized semi-infinite light field generated by two femtosecond laser pulses reflected at the surface of a mirror and delayed by fractions of the optical cycle. The amplitude and phase of the resulting coherent oscillations of the electron states in energy-momentum space are mapped via momentum-resolved ultrafast electron energy-loss spectroscopy. The experimental results are in full agreement with our theoretical framework for light-electron interaction, which predicts access to the zeptosecond timescale by combining semi-infinite X-ray fields with free electrons.

Journal ArticleDOI
TL;DR: In this paper, a hybrid imaging system is proposed to generate a full light field video at 30 fps by propagating the angular information from the light field sequence to the 2D video, so that input images can be warped to the target view.
Abstract: Light field cameras have many advantages over traditional cameras, as they allow the user to change various camera settings after capture. However, capturing light fields requires a huge bandwidth to record the data: a modern light field camera can only take three images per second. This prevents current consumer light field cameras from capturing light field videos. Temporal interpolation at such extreme scale (10x, from 3 fps to 30 fps) is infeasible as too much information will be entirely missing between adjacent frames. Instead, we develop a hybrid imaging system, adding another standard video camera to capture the temporal information. Given a 3 fps light field sequence and a standard 30 fps 2D video, our system can then generate a full light field video at 30 fps. We adopt a learning-based approach, which can be decomposed into two steps: spatio-temporal flow estimation and appearance estimation. The flow estimation propagates the angular information from the light field sequence to the 2D video, so we can warp input images to the target view. The appearance estimation then combines these warped images to output the final pixels. The whole process is trained end-to-end using convolutional neural networks. Experimental results demonstrate that our algorithm outperforms current video interpolation methods, enabling consumer light field videography, and making applications such as refocusing and parallax view generation achievable on videos for the first time.
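
A highly simplified sketch of the two-step structure in the abstract: warp the available images toward the target view/time using an estimated flow, then blend the warped candidates. The global phase-correlation "flow" and the unweighted average are naive stand-ins for the trained CNNs.

import numpy as np

def estimate_flow(src, dst):
    # Stand-in for the spatio-temporal flow CNN: a single global integer shift
    # found by phase correlation (the real method estimates dense, learned flow).
    corr = np.real(np.fft.ifft2(np.fft.fft2(dst) * np.conj(np.fft.fft2(src))))
    return np.unravel_index(np.argmax(corr), src.shape)

def warp(img, shift):
    # Apply the global shift to move the source image toward the target.
    return np.roll(img, shift=shift, axis=(0, 1))

def appearance_blend(candidates):
    # Stand-in for the appearance CNN: an unweighted average of the warped candidates.
    return np.mean(candidates, axis=0)

lf_key = np.random.rand(64, 64)       # one sub-aperture view from a 3 fps light field keyframe
video_frame = np.random.rand(64, 64)  # standard 30 fps camera frame at the target time
candidates = [warp(lf_key, estimate_flow(lf_key, video_frame)), video_frame]
target_view = appearance_blend(candidates)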

Journal ArticleDOI
TL;DR: A light field compression scheme based on a novel homography-based low-rank approximation method called HLRA, which shows substantial peak signal to noise ratio gain of the compression algorithm, as well as the accuracy of the proposed parameter prediction model, especially for real light fields.
Abstract: This paper describes a light field compression scheme based on a novel homography-based low-rank approximation method called HLRA. The HLRA method jointly searches for the set of homographies best aligning the light field views and for the low-rank approximation matrices. The light field views are aligned using either one global homography or multiple homographies depending on how much the disparity across views varies from one depth plane to the other. The light field low-rank representation is then compressed using high efficiency video coding (HEVC). The best pair of rank and quantization parameters of the coding scheme, for a given target bit rate, is predicted with a model defined as a function of light field disparity and texture features. The results are compared with those obtained by directly applying HEVC on the light field views restructured as a pseudovideo sequence. The experiments using different datasets show substantial peak signal to noise ratio (PSNR)-rate gain of our compression algorithm, as well as the accuracy of the proposed parameter prediction model, especially for real light fields. A scalable extension of the coding scheme is finally proposed.
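
The core idea of aligning the views and then approximating the aligned stack with a low-rank factorization can be illustrated with a truncated SVD. The joint homography search and the HEVC coding of the factors are omitted here, and the identity "alignment" is a placeholder, so this is only a sketch of the principle, not the HLRA method itself.

import numpy as np

def align(view, homography):
    # Placeholder: HLRA jointly optimizes one or several homographies per view;
    # here every view is left unwarped (identity homography).
    return view

def low_rank_approx(views, rank):
    """Stack aligned views as columns of a matrix and keep the top singular components.
    The resulting factors are what a codec such as HEVC would then compress."""
    h, w = views[0].shape
    M = np.stack([align(v, np.eye(3)).ravel() for v in views], axis=1)   # (h*w, n_views)
    U, S, Vt = np.linalg.svd(M, full_matrices=False)
    approx = (U[:, :rank] * S[:rank]) @ Vt[:rank]
    return [approx[:, i].reshape(h, w) for i in range(len(views))]

views = [np.random.rand(48, 48) for _ in range(25)]   # a toy 5x5 light field
reconstructed = low_rank_approx(views, rank=5)
mse = np.mean([(a - b) ** 2 for a, b in zip(views, reconstructed)])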

Journal ArticleDOI
TL;DR: A light field-based CGH rendering pipeline is presented allowing for reproduction of high-definition 3D scenes with continuous depth and support of intra-pupil view-dependent occlusion and it is shown that the rendering accurately models the spherical illumination introduced by the eye piece and produces the desired 3D imagery at the designated depth.
Abstract: Holograms display a 3D image in high resolution and allow viewers to focus freely as if looking through a virtual window, yet computer generated holography (CGH) hasn't delivered the same visual quality under plane wave illumination and has suffered from heavy computational cost. Light field displays have been popular due to their capability to provide continuous focus cues. However, light field displays must trade off between spatial and angular resolution, and do not model diffraction. We present a light field-based CGH rendering pipeline allowing for reproduction of high-definition 3D scenes with continuous depth and support of intra-pupil view-dependent occlusion. Our rendering accurately accounts for diffraction and supports various types of reference illuminations for the hologram. We avoid under- and over-sampling and geometric clipping effects seen in previous work. We also demonstrate an implementation of light field rendering plus the Fresnel diffraction integral based CGH calculation which is orders of magnitude faster than the state of the art [Zhang et al. 2015], achieving interactive volumetric 3D graphics. To verify our computational results, we build a see-through, near-eye, color CGH display prototype which enables co-modulation of both amplitude and phase. We show that our rendering accurately models the spherical illumination introduced by the eye piece and produces the desired 3D imagery at the designated depth. We also analyze aliasing, theoretical resolution limits, depth of field, and other design trade-offs for near-eye CGH.

Journal ArticleDOI
TL;DR: This paper aims at the evaluation of perceived visual quality of light field images and at comparing the performance of a few state-of-the-art algorithms for light field image compression, by means of a set of objective and subjective quality assessments.
Abstract: The recent advances in light field imaging, supported among others by the introduction of commercially available cameras, e.g., Lytro or Raytrix, are changing the ways in which visual content is captured and processed. Efficient storage and delivery systems for light field images must rely on compression algorithms. Several methods to compress light field images have been proposed recently. However, in-depth evaluations of compression algorithms have rarely been reported. This paper aims at the evaluation of perceived visual quality of light field images and at comparing the performance of a few state-of-the-art algorithms for light field image compression. First, a processing chain for light field image compression and decompression is defined for two typical use cases, professional and consumer. Then, five light field compression algorithms are compared by means of a set of objective and subjective quality assessments. An interactive methodology recently introduced by the authors, as well as a passive methodology, is used to perform these evaluations. The results provide a useful benchmark for future development of compression solutions for light field images.

Journal ArticleDOI
TL;DR: In this article, the authors present a storage scheme that is insensitive to spin-exchange collisions, thus enabling long storage times at high atomic densities, achieving a record storage time of 1 second in cesium vapor, a 100-fold improvement over existing storage schemes.
Abstract: Light storage, the controlled and reversible mapping of photons onto long-lived states of matter [1], enables memory capability in optical quantum networks [2-6]. Prominent storage media are warm alkali gases due to their strong optical coupling and long-lived spin states [7,8]. In a dense gas, the random atomic collisions dominate the lifetime of the spin coherence, limiting the storage time to a few milliseconds [9,10]. Here we present and experimentally demonstrate a storage scheme that is insensitive to spin-exchange collisions, thus enabling long storage times at high atomic densities. This unique property is achieved by mapping the light field onto spin orientation within a decoherence-free subspace of spin states. We report on a record storage time of 1 second in cesium vapor, a 100-fold improvement over existing storage schemes. Furthermore, our scheme lays the foundations for hour-long quantum memories using rare-gas nuclear spins.

Proceedings ArticleDOI
01 Sep 2017
TL;DR: A novel two-dimensional weighted prediction and rate allocation scheme is proposed to adapt the HEVC compression structure to the plenoptic image properties, which outperforms all ICME-contestants, and improves on the JPEG-anchor of ICME with an average PSNR gain.
Abstract: Over the last decade, advancements in optical devices have made it possible for novel image acquisition technologies to appear. Angular information for each spatial point is acquired in addition to the spatial information of the scene, which enables 3D scene reconstruction and various post-processing effects. The current generation of plenoptic cameras spatially multiplexes the angular information, which implies an increase in image resolution to retain the level of spatial information gathered by conventional cameras. In this work, the resulting plenoptic image is interpreted as a multi-view sequence that is efficiently compressed using the multi-view extension of high efficiency video coding (MV-HEVC). A novel two-dimensional weighted prediction and rate allocation scheme is proposed to adapt the HEVC compression structure to the plenoptic image properties. The proposed coding approach is a response to ICIP 2017 Grand Challenge: Light field Image Coding. The proposed scheme outperforms all ICME-contestants, and improves on the JPEG-anchor of ICME with an average PSNR gain of 7.5 dB and the HEVC-anchor of ICIP 2017 Grand Challenge with an average PSNR gain of 2.4 dB.

Journal ArticleDOI
TL;DR: This work extends patch-based synthesis to plenoptic images captured by consumer-level lenselet-based devices for interactive, efficient light field editing; it correctly handles object boundary occlusion with semi-transparency and can generate more realistic results than previous methods.
Abstract: Patch-based image synthesis methods have been successfully applied for various editing tasks on still images, videos and stereo pairs. In this work we extend patch-based synthesis to plenoptic images captured by consumer-level lenselet-based devices for interactive, efficient light field editing. In our method the light field is represented as a set of images captured from different viewpoints. We decompose the central view into different depth layers, and present it to the user for specifying the editing goals. Given an editing task, our method performs patch-based image synthesis on all affected layers of the central view, and then propagates the edits to all other views. Interaction is done through a conventional 2D image editing user interface that is familiar to novice users. Our method correctly handles object boundary occlusion with semi-transparency, thus can generate more realistic results than previous methods. We demonstrate compelling results on a wide range of applications such as hole-filling, object reshuffling and resizing, changing object depth, light field upscaling and parallax magnification.

Patent
24 Jul 2017
TL;DR: In this paper, a wearable ophthalmic device is described, which includes an outward facing head-mounted light field camera to receive light from a user's surroundings and to generate numerical light field image data.
Abstract: A wearable ophthalmic device is disclosed. The device may include an outward facing head-mounted light field camera to receive light from a user's surroundings and to generate numerical light field image data. The device may also include a light field processor to access the numerical light field image data, to obtain an optical prescription for an eye of the user, and to computationally introduce an amount of positive or negative optical power to the numerical light field image data based on the optical prescription to generate modified numerical light field image data. The device may also include a head-mounted light field display to generate a physical light field corresponding to the modified numerical light field image data.

Proceedings ArticleDOI
01 Sep 2017
TL;DR: A new prior, called the linear approximation prior, is proposed that reveals an intrinsic property among the LF sub-views: a certain view can be approximated by a weighted sum of other views. This prior is exploited to build a powerful coding scheme.
Abstract: In recent years, the light field (LF) image as a new imaging modality has attracted much interest. While a light field camera records both the luminance and direction of the rays in a scene, the large amount of data makes storage and transmission a great challenge. Thus an adequate compression scheme is desired. In this paper, we propose a new prior, called the linear approximation prior, that reveals an intrinsic property among the LF sub-views. It indicates that we can approximate a certain view with a weighted sum of other views. By fully exploiting this prior we propose a powerful coding scheme. The experiments show the superior performance of our scheme, which achieves up to 45.51% BD-rate reduction and 37.41% on average compared with High Efficiency Video Coding (HEVC).
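
The stated prior, namely that a sub-view is well approximated by a weighted sum of other views, can be checked directly with a least-squares fit. The synthetic views and the choice of reference views below are placeholders; the paper builds a full coding scheme around this idea rather than the plain fit shown here.

import numpy as np

def fit_view_weights(target, references):
    """Find weights w minimizing || target - sum_i w_i * reference_i ||^2."""
    A = np.stack([r.ravel() for r in references], axis=1)   # (n_pixels, n_refs)
    w, *_ = np.linalg.lstsq(A, target.ravel(), rcond=None)
    return w

# Synthetic sub-views: small horizontal shifts of one base image, mimicking parallax.
base = np.random.rand(40, 40)
references = [np.roll(base, s, axis=1) for s in (-2, -1, 1, 2)]
target = base
weights = fit_view_weights(target, references)
prediction = sum(w * r for w, r in zip(weights, references))
residual_mse = np.mean((target - prediction) ** 2)   # an encoder would store only weights + residual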

Journal ArticleDOI
TL;DR: An example-based super-resolution algorithm for light fields is described, which allows the increase of the spatial resolution of the different views in a consistent manner across all subaperture images of the light field.
Abstract: Light field imaging has emerged as a very promising technology in the field of computational photography. Cameras are becoming commercially available for capturing real-world light fields. However, capturing high spatial resolution light fields remains technologically challenging, and the images rendered from real light fields have today a significantly lower spatial resolution compared to traditional two-dimensional (2-D) cameras. This paper describes an example-based super-resolution algorithm for light fields, which allows the increase of the spatial resolution of the different views in a consistent manner across all subaperture images of the light field. The algorithm learns linear projections between subspaces of reduced dimension in which reside patch-volumes extracted from the light field. The method is extended to cope with angular super-resolution, where 2-D patches of intermediate subaperture images are approximated from neighboring subaperture images using multivariate ridge regression. Experimental results show significant quality improvement when compared to state-of-the-art single-image super-resolution methods applied on each view separately, as well as when compared to a recent light field super-resolution technique based on deep learning.
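
The angular step described above approximates intermediate-view patches from neighbouring sub-aperture images with multivariate ridge regression; a generic ridge solve of that form looks like the following (the patch data here are random stand-ins, not an actual light field).

import numpy as np

def ridge_fit(X, Y, lam=1e-2):
    """Multivariate ridge regression: W = (X^T X + lam * I)^-1 X^T Y.
    X: (n_samples, d_in)  flattened patches from neighbouring sub-aperture images
    Y: (n_samples, d_out) corresponding patches of the intermediate view"""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

def ridge_predict(X, W):
    return X @ W

# Toy training data: 500 pairs of flattened 5x5 patches.
rng = np.random.default_rng(0)
X_train = rng.standard_normal((500, 25))
true_map = rng.standard_normal((25, 25))
Y_train = X_train @ true_map + 0.01 * rng.standard_normal((500, 25))
W = ridge_fit(X_train, Y_train)
Y_pred = ridge_predict(X_train, W)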

Journal Article
TL;DR: In this article, the authors derive the microscopic optomagnonic Hamiltonian for a macrospin coupled to an optical cavity and show that the optically induced dissipation coefficient can change sign on the Bloch sphere, leading to self-sustained oscillations.
Abstract: Experiments during the past 2 years have shown strong resonant photon-magnon coupling in microwave cavities, while coupling in the optical regime was demonstrated very recently for the first time. Unlike with microwaves, the coupling in optical cavities is parametric, akin to optomechanical systems. This line of research promises to evolve into a new field of optomagnonics, aimed at the coherent manipulation of elementary magnetic excitations in solid-state systems by optical means. In this work we derive the microscopic optomagnonic Hamiltonian. In the linear regime the system reduces to the well-known optomechanical case, with remarkably large coupling. Going beyond that, we study the optically induced nonlinear classical dynamics of a macrospin. In the fast-cavity regime we obtain an effective equation of motion for the spin and show that the light field induces a dissipative term reminiscent of Gilbert damping. The induced dissipation coefficient, however, can change sign on the Bloch sphere, giving rise to self-sustained oscillations. When the full dynamics of the system is considered, the system can enter a chaotic regime by successive period doubling of the oscillations.
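
For context, the damping term that the light-induced dissipation is compared against is the one in the standard Landau-Lifshitz-Gilbert equation, reproduced here in its textbook form (not taken from the paper):

\[
\frac{d\mathbf{m}}{dt} \;=\; -\gamma\,\mathbf{m}\times\mathbf{B}_{\mathrm{eff}}
\;+\; \alpha\,\mathbf{m}\times\frac{d\mathbf{m}}{dt},
\]

where m is the unit magnetization, γ the gyromagnetic ratio, B_eff the effective field, and α the Gilbert damping constant. In the optomagnonic case described above, the effective damping coefficient induced by the light field can change sign on parts of the Bloch sphere, which is what drives the self-sustained oscillations.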

Journal ArticleDOI
TL;DR: A generalized framework to model the image formation process of the existing light-field display methods is described and a systematic method to simulate and characterize the retinal image and the accommodation response rendered by a light field display is presented.
Abstract: One of the key issues in conventional stereoscopic displays is the well-known vergence-accommodation conflict problem due to the lack of the ability to render correct focus cues for 3D scenes. Recently several light field display methods have been explored to reconstruct a true 3D scene by sampling either the projections of the 3D scene at different depths or the directions of the light rays apparently emitted by the 3D scene and viewed from different eye positions. These methods are potentially capable of rendering correct or nearly correct focus cues and addressing the vergence-accommodation conflict problem. In this paper, we describe a generalized framework to model the image formation process of the existing light-field display methods and present a systematic method to simulate and characterize the retinal image and the accommodation response rendered by a light field display. We further employ this framework to investigate the trade-offs and guidelines for an optimal 3D light field display design. Our method is based on quantitatively evaluating the modulation transfer functions of the perceived retinal image of a light field display by accounting for the ocular factors of the human visual system.

Journal ArticleDOI
Qingbin Fan, D. H. Wang, Pengcheng Huo, Zijie Zhang, Yuzhang Liang, Ting Xu
TL;DR: An extremely compact design is proposed to generate a high-efficiency AFA beam at visible frequencies by using a metasurface composed of a single-layer array of amorphous titanium dioxide (TiO2) elliptical nanofins sitting on a fused-silica substrate.
Abstract: The conventional method to generate an autofocusing Airy (AFA) beam involves an optical Fourier transform (FT) system, which has a fairly long working distance due to the focal length of the FT lens, the presence of a spatial light modulator (SLM), and auxiliary total reflection mirrors. Here, we propose an extremely compact design to generate a high-efficiency AFA beam at visible frequencies by using a metasurface composed of a single-layer array of amorphous titanium dioxide (TiO2) elliptical nanofins sitting on a fused-silica substrate. Numerical simulations show that the designed structures are capable of precisely controlling the deflection of the Airy beam and tuning the focal length of the AFA beam. We further numerically demonstrate that the phase modulation of the AFA beam can be combined with the concept of a vortex light field to produce a vortical AFA beam. We anticipate that such a device can be useful in ultra-compact integrated optical systems, biomedical nanosurgery and optical trapping applications.

Journal ArticleDOI
TL;DR: It is theoretically shown that the higher harmonic gamma-ray produced by nonlinear inverse Thomson scattering of circularly polarized light is a gamma-ray vortex, which means that it possesses a helical wave front and carries orbital angular momentum.
Abstract: Inverse Thomson scattering is a well-known radiation process that produces high-energy photons both in nature and in the laboratory. Nonlinear inverse Thomson scattering occurring inside an intense light field is a process which generates higher harmonic photons. In this paper, we theoretically show that the higher harmonic gamma-ray produced by nonlinear inverse Thomson scattering of circularly polarized light is a gamma-ray vortex, which means that it possesses a helical wave front and carries orbital angular momentum. Our work explains a recent experimental result regarding nonlinear inverse Thomson scattering that clearly shows an annular intensity distribution as a remarkable feature of a vortex beam. Our work implies that gamma-ray vortices should be produced in various situations in astrophysics in which high-energy electrons and intense circularly polarized light fields coexist. Nonlinear inverse Thomson scattering is a promising radiation process for realizing a gamma-ray vortex source based on currently available laser and accelerator technologies, which would be an indispensable tool for exploring gamma-ray vortex science.

Journal ArticleDOI
TL;DR: A concept for a lens attachment that turns a standard DSLR camera and lens into a light field camera that combines patch-based and depth-based synthesis in a novel fashion and achieves substantial improvements in super-resolution for side-view images as well as the high-quality and view-coherent rendering of dense and high-resolution light fields.
Abstract: We propose a concept for a lens attachment that turns a standard DSLR camera and lens into a light field camera. The attachment consists of eight low-resolution, low-quality side cameras arranged around the central high-quality SLR lens. Unlike most existing light field camera architectures, this design provides a high-quality 2D image mode, while simultaneously enabling a new high-quality light field mode with a large camera baseline but little added weight, cost, or bulk compared with the base DSLR camera. From an algorithmic point of view, the high-quality light field mode is made possible by a new light field super-resolution method that first improves the spatial resolution and image quality of the side cameras and then interpolates additional views as needed. At the heart of this process is a super-resolution method that we call iterative Patch- And Depth-based Synthesis (iPADS), which combines patch-based and depth-based synthesis in a novel fashion. Experimental results obtained for both real captured data and synthetic data confirm that our method achieves substantial improvements in super-resolution for side-view images as well as the high-quality and view-coherent rendering of dense and high-resolution light fields.

Journal ArticleDOI
20 Nov 2017
TL;DR: A content-adaptive importance model in the 4D ray space is formulated based on psychophysical experiments and theoretical analysis of visual and display bandwidths, and verified by building a prototype light field display that renders only 16%-30% of the rays without compromising perceptual quality.
Abstract: A variety of applications such as virtual reality and immersive cinema require high image quality, low rendering latency, and consistent depth cues. 4D light field displays support focus accommodation, but are more costly to render than 2D images, resulting in higher latency. The human visual system can resolve higher spatial frequencies in the fovea than in the periphery. This property has been harnessed by recent 2D foveated rendering methods to reduce computation cost while maintaining perceptual quality. Inspired by this, we present foveated 4D light fields by investigating their effects on 3D depth perception. Based on our psychophysical experiments and theoretical analysis of visual and display bandwidths, we formulate a content-adaptive importance model in the 4D ray space. We verify our method by building a prototype light field display that can render only 16%-30% of the rays without compromising perceptual quality.
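
A toy illustration of eccentricity-dependent ray budgeting, not the paper's content-adaptive 4D importance model: rays are kept with a probability that falls off with angular distance from the tracked gaze point, using a commonly cited acuity falloff of the form e2 / (e + e2). All parameter values are assumptions.

import numpy as np

def importance(ecc_deg, e2=2.3):
    # Acuity-style falloff: full importance at the fovea, halved at eccentricity e2.
    return e2 / (ecc_deg + e2)

def ray_mask(h, w, gaze_xy, deg_per_pixel=0.05, budget_scale=1.0, seed=0):
    """Keep each ray with probability proportional to its importance, so the
    periphery receives far fewer rendered rays than the fovea."""
    ys, xs = np.indices((h, w))
    ecc = np.hypot(xs - gaze_xy[0], ys - gaze_xy[1]) * deg_per_pixel
    p = np.clip(budget_scale * importance(ecc), 0.0, 1.0)
    rng = np.random.default_rng(seed)
    return rng.random((h, w)) < p   # True where a ray is actually rendered

mask = ray_mask(512, 512, gaze_xy=(256, 256))
fraction_rendered = mask.mean()   # drops well below 1 as the field of view grows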

Proceedings ArticleDOI
01 Jul 2017
TL;DR: A novel light field compression scheme using a depth image-based view synthesis technique that significantly outperforms a similar view synthesis method which utilizes convolutional neural networks, and does not require training with a large dataset of light fields as required by deep learning techniques.
Abstract: This paper describes a novel light field compression scheme using a depth image-based view synthesis technique. A small subset of views is compressed with HEVC inter coding and then used to reconstruct the entire light field. The residual of the whole light field can then be restructured as a video sequence and encoded by HEVC inter coding. Experiments show that our scheme significantly outperforms a similar view synthesis method which utilizes convolutional neural networks, and does not require training with a large dataset of light fields as required by deep learning techniques. It also outperforms the direct encoding of all the light field views.

Journal ArticleDOI
TL;DR: In this article, the authors demonstrate strong coherent coupling between a single Rydberg superatom, consisting of thousands of atoms behaving as a single two-level system, and a propagating light pulse containing only a few photons.
Abstract: The interaction of a single photon with an individual two-level system is the textbook example of quantum electrodynamics. Achieving strong coupling in this system so far required confinement of the light field inside resonators or waveguides. Here, we demonstrate strong coherent coupling between a single Rydberg superatom, consisting of thousands of atoms behaving as a single two-level system due to the Rydberg blockade, and a propagating light pulse containing only a few photons. The strong light-matter coupling in combination with the direct access to the outgoing field allows us to observe for the first time the effect of the interactions on the driving field at the single photon level. We find that all our results are in quantitative agreement with the predictions of the theory of a single two-level system strongly coupled to a single quantized propagating light mode. The demonstrated coupling strength opens the way towards interfacing photonic and atomic qubits and preparation of propagating non-classical states of light, two crucial building blocks in future quantum networks.