
Showing papers on "Light field published in 2019"


Proceedings ArticleDOI
15 Jun 2019
TL;DR: A learning-based method using residual convolutional networks is proposed to reconstruct light fields with higher spatial resolution; it also performs well in preserving the inherent epipolar property of light field images.
Abstract: Light field cameras are considered to have many potential applications since angular and spatial information is captured simultaneously. However, the limited spatial resolution has caused many difficulties in developing related applications and has become the main bottleneck of light field cameras. In this paper, a learning-based method using residual convolutional networks is proposed to reconstruct light fields with higher spatial resolution. The view images in one light field are first grouped into different image stacks with consistent sub-pixel offsets and fed into different network branches to implicitly learn inherent correspondence relations. The residual information in different spatial directions is then calculated from each branch and further integrated to supplement high-frequency details for the view image. Finally, a flexible solution is proposed to super-resolve entire light field images with various angular resolutions. Experimental results on synthetic and real-world datasets demonstrate that the proposed method outperforms other state-of-the-art methods by a large margin in both visual and numerical evaluations. Furthermore, the proposed method performs well in preserving the inherent epipolar property in light field images.

150 citations


Journal ArticleDOI
01 Mar 2019-Nature
TL;DR: Because the approach to determining these field patterns is based solely on far-field measurements of the scattering properties of a disordered medium, it could be suitable for other applications in which waves need to be perfectly focused, routed or absorbed.
Abstract: Non-Hermitian wave engineering is a recent and fast-moving field that examines both fundamental and application-oriented phenomena1–7. One such phenomenon is coherent perfect absorption8–11—an effect commonly referred to as ‘anti-lasing’ because it corresponds to the time-reversed process of coherent emission of radiation at the lasing threshold (where all radiation losses are exactly balanced by the optical gain). Coherent perfect absorbers (CPAs) have been experimentally realized in several setups10–18, with the notable exception of a CPA in a disordered medium (a medium without engineered structure). Such a ‘random CPA’ would be the time-reverse of a ‘random laser’19,20, in which light is resonantly enhanced by multiple scattering inside a disorder. Because of the complexity of this scattering process, the light field emitted by a random laser is also spatially complex and not focused like a regular laser beam. Realizing a random CPA (or ‘random anti-laser’) is therefore challenging because it requires the equivalent of time-reversing such a light field in all its degrees of freedom to create coherent radiation that is perfectly absorbed when impinging on a disordered medium. Here we use microwave technology to build a random anti-laser and demonstrate its ability to absorb suitably engineered incoming radiation fields with near-perfect efficiency. Because our approach to determining these field patterns is based solely on far-field measurements of the scattering properties of a disordered medium, it could be suitable for other applications in which waves need to be perfectly focused, routed or absorbed. Coherent perfect absorption in a disordered medium is demonstrated experimentally in the microwave regime through the realization of a random anti-laser that absorbs engineered radiation with near-perfect efficiency.

105 citations


Journal ArticleDOI
TL;DR: A novel convolutional neural network (CNN)-based framework is developed for light field reconstruction from a sparse set of views, and the reconstruction is shown to be efficiently modeled as angular restoration on an epipolar plane image (EPI).
Abstract: In this paper, a novel convolutional neural network (CNN)-based framework is developed for light field reconstruction from a sparse set of views. We indicate that the reconstruction can be efficiently modeled as angular restoration on an epipolar plane image (EPI). The main problem in direct reconstruction on the EPI involves an information asymmetry between the spatial and angular dimensions, where the detailed portion in the angular dimensions is damaged by undersampling. Directly upsampling or super-resolving the light field in the angular dimensions causes ghosting effects. To suppress these ghosting effects, we contribute a novel “blur-restoration-deblur” framework. First, the “blur” step is applied to extract the low-frequency components of the light field in the spatial dimensions by convolving each EPI slice with a selected blur kernel. Then, the “restoration” step is implemented by a CNN, which is trained to restore the angular details of the EPI. Finally, we use a non-blind “deblur” operation to recover the spatial high frequencies suppressed by the EPI blur. We evaluate our approach on several datasets, including synthetic scenes, real-world scenes and challenging microscope light field data. We demonstrate the high performance and robustness of the proposed framework compared with state-of-the-art algorithms. We further show extended applications, including depth enhancement and interpolation for unstructured input. More importantly, a novel rendering approach is presented by combining the proposed framework and depth information to handle large disparities.
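The "blur" step above operates on epipolar-plane images (EPIs). As a minimal illustration (not the paper's implementation; the array layout and kernel below are assumptions), one can slice a horizontal EPI out of a 4D light field and low-pass it along the spatial dimension:

```python
import numpy as np

def extract_epi(lf, v0, y0):
    """Horizontal epipolar-plane image: slice of a 4D light field
    lf[v, u, y, x] at fixed angular row v0 and spatial row y0."""
    return lf[v0, :, y0, :]          # shape (U, X)

def blur_epi(epi, kernel):
    """'Blur' step: convolve each angular row of the EPI with a 1-D
    spatial kernel to suppress high spatial frequencies."""
    pad = len(kernel) // 2
    out = np.empty_like(epi, dtype=float)
    for u in range(epi.shape[0]):
        row = np.pad(epi[u].astype(float), pad, mode='edge')
        out[u] = np.convolve(row, kernel, mode='valid')
    return out

# toy light field: 3x5 views of 8x8 images
lf = np.arange(3 * 5 * 8 * 8, dtype=float).reshape(3, 5, 8, 8)
epi = extract_epi(lf, v0=1, y0=4)
blurred = blur_epi(epi, np.array([0.25, 0.5, 0.25]))
print(epi.shape, blurred.shape)
```

In the paper's pipeline a CNN then restores angular detail on the blurred EPI, and a non-blind deconvolution undoes the spatial blur.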

104 citations


Journal ArticleDOI
TL;DR: A pipeline that automatically determines the best configuration of the photo-consistency measure, leading to the most reliable depth label from the light field; the method was competitive with several state-of-the-art methods on benchmark and real-world light field datasets.
Abstract: One of the core applications of light field imaging is depth estimation. To acquire a depth map, existing approaches apply a single photo-consistency measure to an entire light field. However, this is not an optimal choice because of the non-uniform light field degradations produced by limitations in the hardware design. In this paper, we introduce a pipeline that automatically determines the best configuration for photo-consistency measure, which leads to the most reliable depth label from the light field. We analyzed the practical factors affecting degradation in lenslet light field cameras, and designed a learning based framework that can retrieve the best cost measure and optimal depth label. To enhance the reliability of our method, we augmented an existing light field benchmark to simulate realistic source dependent noise, aberrations, and vignetting artifacts. The augmented dataset was used for the training and validation of the proposed approach. Our method was competitive with several state-of-the-art methods for the benchmark and real-world light field datasets.

76 citations


Journal ArticleDOI
TL;DR: This work experimentally demonstrates the nonreciprocal transmission between two counterpropagating light fields with extremely low power by adopting the strong nonlinearity associated with a few atoms in a strongly coupled cavity QED system and an asymmetric cavity configuration.
Abstract: Optical nonreciprocity is important in photonic information processing to route the optical signal or prevent the reverse flow of noise. By adopting the strong nonlinearity associated with a few atoms in a strongly coupled cavity QED system and an asymmetric cavity configuration, we experimentally demonstrate the nonreciprocal transmission between two counterpropagating light fields with extremely low power. The transmission of 18% is achieved for the forward light field, and the maximum blocking ratio for the reverse light is 30 dB. Though the transmission of the forward light can be maximized by optimizing the impedance matching of the cavity, it is ultimately limited by the inherent loss of the scheme. This nonreciprocity can even occur on a few-photon level due to the high optical nonlinearity of the system. The working power can be flexibly tuned by changing the effective number of atoms strongly coupled to the cavity. The idea and result can be applied to optical chips as optical diodes by using fiber-based cavity QED systems. Our work opens up new perspectives for realizing optical nonreciprocity on a few-photon level based on the nonlinearities of atoms strongly coupled to an optical cavity.

64 citations


Proceedings Article
Miao Zhang1, Jingjing Li1, Ji Wei, Yongri Piao1, Huchuan Lu1 
01 Jan 2019
TL;DR: A deep-learning-based method in which a novel memory-oriented decoder is tailored for light field saliency detection; the internal correlation of focal slices is deeply explored and comprehensively exploited for accurate prediction through feature fusion and integration mechanisms.
Abstract: Light field data have been demonstrated in favor of many tasks in computer vision, but existing works about light field saliency detection still rely on hand-crafted features. In this paper, we present a deep-learning-based method where a novel memory-oriented decoder is tailored for light field saliency detection. Our goal is to deeply explore and comprehensively exploit internal correlation of focal slices for accurate prediction by designing feature fusion and integration mechanisms. The success of our method is demonstrated by achieving the state of the art on three datasets. We present this problem in a way that is accessible to members of the community and provide a large-scale light field dataset that facilitates comparisons across algorithms. The code and dataset will be made publicly available.

55 citations


Journal ArticleDOI
TL;DR: A new Light Field representation called Fourier Disparity Layers enables efficient Light Field processing and real-time rendering; direct applications such as view interpolation, extrapolation, and denoising are presented and evaluated.
Abstract: In this paper, we present a new Light Field representation for efficient Light Field processing and rendering called Fourier Disparity Layers (FDL). The proposed FDL representation samples the Light Field in the depth (or equivalently the disparity) dimension by decomposing the scene as a discrete sum of layers. The layers can be constructed from various types of Light Field inputs, including a set of sub-aperture images, a focal stack, or even a combination of both. From our derivations in the Fourier domain, the layers are simply obtained by a regularized least square regression performed independently at each spatial frequency, which is efficiently parallelized in a GPU implementation. Our model is also used to derive a gradient descent-based calibration step that estimates the input view positions and an optimal set of disparity values required for the layer construction. Once the layers are known, they can be simply shifted and filtered to produce different viewpoints of the scene while controlling the focus and simulating a camera aperture of arbitrary shape and size. Our implementation in the Fourier domain allows real-time Light Field rendering. Finally, direct applications such as view interpolation or extrapolation and denoising are presented and evaluated.
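The rendering step described above reduces to shifting each layer by its disparity times the view offset (a phase ramp in the Fourier domain) and summing the shifted layers. A minimal 1-D sketch under assumed conventions, not the authors' implementation:

```python
import numpy as np

def render_view(layers, disparities, du):
    """Render a viewpoint from disparity layers: each layer k is
    shifted by disparities[k] * du (the view's angular offset) via a
    Fourier-domain phase ramp, then the layers are summed."""
    n = layers.shape[-1]
    fx = np.fft.fftfreq(n)                        # spatial frequencies
    view = np.zeros(n)
    for layer, d in zip(layers, disparities):
        ramp = np.exp(-2j * np.pi * fx * d * du)  # shift by d*du samples
        view += np.real(np.fft.ifft(np.fft.fft(layer) * ramp))
    return view

# toy 1-D scene: two layers at disparities 0 and 2
layers = np.zeros((2, 16))
layers[0, 4] = 1.0    # background impulse, disparity 0
layers[1, 8] = 1.0    # foreground impulse, disparity 2
v = render_view(layers, disparities=[0.0, 2.0], du=1.0)
print(v.round(3))
```

Moving `du` slides the foreground impulse while the zero-disparity background stays put, which is the parallax the FDL model encodes.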

54 citations


Journal ArticleDOI
TL;DR: In this article, the authors perform basis-independent tomography of arbitrary pure vectorial light fields using spatially resolved Stokes projections, and demonstrate that concurrence, as a non-separability witness, can be extracted from global Stokes projections, without assessing the spatial degree of freedom.
Abstract: Complex vectorial light fields, non-separable in their polarization and spatial degree of freedom, are of relevance in a wide variety of research areas encompassing microscopy, metrology, communication and topological studies. Controversially, they have been suggested as analogues to quantum entanglement, raising fundamental questions on the relation between non-separability in classical systems, and entanglement in quantum systems. We perform basis-independent tomography of arbitrary pure vectorial light fields using spatially resolved Stokes projections. Moreover, we propose and demonstrate that concurrence, as a non-separability witness for any pure vectorial light field, can be extracted from global Stokes projections, without assessing the spatial degree of freedom.
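For a pure vector beam with orthogonal spatial modes, a commonly used relation (a convention assumed here, not quoted from the paper) links the concurrence to the global degree of polarization: C = sqrt(1 - P^2), where P is computed from the global Stokes parameters. A minimal sketch:

```python
import numpy as np

def concurrence_from_stokes(S0, S1, S2, S3):
    """Non-separability (concurrence) of a pure vector beam from its
    *global* Stokes parameters: C = sqrt(1 - P^2), with P the degree
    of polarization of the beam as a whole.  A fully separable
    (uniformly polarized) beam has P = 1, C = 0; a maximally
    non-separable beam has P = 0, C = 1."""
    P2 = (S1**2 + S2**2 + S3**2) / S0**2
    return np.sqrt(max(0.0, 1.0 - P2))

# uniformly polarized (separable) beam: |S| = S0
print(concurrence_from_stokes(1.0, 1.0, 0.0, 0.0))
# radially polarized (maximally non-separable): global Stokes vector vanishes
print(concurrence_from_stokes(1.0, 0.0, 0.0, 0.0))
```

This is why global Stokes projections alone suffice as a witness: the spatial degree of freedom enters only through its effect on the reduced polarization state.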

53 citations


Proceedings ArticleDOI
Yongri Piao1, Zhengkun Rong1, Miao Zhang1, Xiao Li1, Huchuan Lu1 
01 Aug 2019
TL;DR: This paper proposes a high-quality light field synthesis network to produce reliable 4D light field information, and a novel light-field-driven saliency detection network with two purposes: richer saliency features can be produced, and geometric information can be considered for integration of multi-view saliency maps in a view-wise attention fashion.
Abstract: Previous 2D saliency detection methods extract salient cues from a single view and directly predict the expected results. Both traditional and deep-learning-based 2D methods do not consider geometric information of 3D scenes. Therefore the relationship between scene understanding and salient objects cannot be effectively established. This limits the performance of 2D saliency detection in challenging scenes. In this paper, we show for the first time that saliency detection problem can be reformulated as two sub-problems: light field synthesis from a single view and light-field-driven saliency detection. We propose a high-quality light field synthesis network to produce reliable 4D light field information. Then we propose a novel light-field-driven saliency detection network with two purposes, that is, i) richer saliency features can be produced for effective saliency detection; ii) geometric information can be considered for integration of multi-view saliency maps in a view-wise attention fashion. The whole pipeline can be trained in an end-to-end fashion. For training our network, we introduce the largest light field dataset for saliency detection, containing 1580 light fields that cover a wide variety of challenging scenes. With this new formulation, our method is able to achieve state-of-the-art performance.

47 citations


Posted Content
TL;DR: In this paper, a spatial-angular interactive network (LF-InterNet) is proposed to combine spatial and angular information for image super-resolution, which can achieve high PSNR and SSIM scores with low computational cost.
Abstract: Light field (LF) cameras record both intensity and directions of light rays, and capture scenes from a number of viewpoints. Both information within each perspective (i.e., spatial information) and among different perspectives (i.e., angular information) is beneficial to image super-resolution (SR). In this paper, we propose a spatial-angular interactive network (namely, LF-InterNet) for LF image SR. Specifically, spatial and angular features are first separately extracted from input LFs, and then repetitively interacted to progressively incorporate spatial and angular information. Finally, the interacted features are fused to superresolve each sub-aperture image. Experimental results demonstrate the superiority of LF-InterNet over the state-of-the-art methods, i.e., our method can achieve high PSNR and SSIM scores with low computational cost, and recover faithful details in the reconstructed images.

45 citations


Journal ArticleDOI
TL;DR: This work analyzes the sampling patterns of the LFM, and introduces a flexible light field point spread function model (LFPSF) to cope with arbitrary LFM designs, and proposes a novel aliasing-aware deconvolution scheme to address the sampling artifacts.
Abstract: The sampling patterns of the light field microscope (LFM) are highly depth-dependent, which implies non-uniform recoverable lateral resolution across depth. Moreover, reconstructions using state-of-the-art approaches suffer from strong artifacts at axial ranges, where the LFM samples the light field at a coarse rate. In this work, we analyze the sampling patterns of the LFM, and introduce a flexible light field point spread function model (LFPSF) to cope with arbitrary LFM designs. We then propose a novel aliasing-aware deconvolution scheme to address the sampling artifacts. We demonstrate the high potential of the proposed method on real experimental data.

Journal ArticleDOI
TL;DR: A hybrid head-mounted display system that is based on a liquid crystal microlens array that can be divided into light field and two-dimensional modes to show comfortable 3D images with high resolution compensated by the 2D image.
Abstract: In recent years, head-mounted display technologies have greatly advanced. In order to overcome the accommodation-convergence conflict, light field displays reconstruct three-dimensional (3D) images with a focusing cue but sacrifice resolution. In this paper, a hybrid head-mounted display system that is based on a liquid crystal microlens array is proposed. By using a time-multiplexed method, the display signals can be divided into light field and two-dimensional (2D) modes to show comfortable 3D images with high resolution compensated by the 2D image. According to the experimental results, the prototype supports a 12.28 ppd resolution in the diagonal direction, which reaches 82% of the traditional virtual reality (VR) head-mounted display (HMD).

Journal ArticleDOI
TL;DR: Based on the Richards-Wolf formalism, a set of explicit analytical expressions is obtained that completely describes a light field with a double higher-order singularity (phase and polarization), as well as the distributions of its intensity and energy flux near the focus.
Abstract: Based on the Richards-Wolf formalism, we obtain for the first time a set of explicit analytical expressions that completely describe a light field with a double higher-order singularity (phase and polarization), as well as distributions of its intensity and energy flux near the focus. A light field with the double singularity is an optical vortex with a topological charge m and with nth-order cylindrical polarization (azimuthal or radial). From the theory developed, rather general predictions follow. 1) For any singularity orders m and n, the intensity distribution near the focus has a symmetry of order 2(n – 1), while the longitudinal component of the Poynting vector always has an axially symmetric distribution. 2) If n = m + 2, there is a reverse energy flux on the optical axis near the focus, which is comparable in magnitude with the forward flux. 3) If m ≠ 0, forward and reverse energy fluxes rotate along a spiral around the optical axis, whereas at m = 0 the energy flux is irrotational. 4) For any values of m and n, there is a toroidal energy flux in the focal area near the dark rings in the distribution of the longitudinal component of the Poynting vector.
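Before focusing, a field with such a double singularity can be written explicitly; one common convention (assumed here for illustration, not taken from the paper) for a vortex of charge m with nth-order radial-type cylindrical polarization is:

```latex
% Incident field with a double (phase + polarization) singularity:
% optical vortex of topological charge m, nth-order cylindrical
% (radial-type) polarization; \varphi is the azimuthal angle in the
% beam cross-section.
\mathbf{E}(\varphi) \propto e^{i m \varphi}
  \begin{pmatrix} \cos n\varphi \\ \sin n\varphi \end{pmatrix}
```

The azimuthal-type variant replaces the Jones vector by $(-\sin n\varphi,\ \cos n\varphi)^{\mathsf T}$; the predictions 1)–4) above follow from propagating such fields through the Richards-Wolf focusing integrals.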

Journal ArticleDOI
TL;DR: Wang et al., as mentioned in this paper, proposed a multi-projection-center (MPC) model with 6 intrinsic parameters to characterize light field cameras based on the traditional two-parallel-plane (TPP) representation.
Abstract: Light field cameras can capture both spatial and angular information of light rays, enabling 3D reconstruction by a single exposure. The geometry of 3D reconstruction is affected by intrinsic parameters of a light field camera significantly. In the paper, we propose a multi-projection-center (MPC) model with 6 intrinsic parameters to characterize light field cameras based on traditional two-parallel-plane (TPP) representation. The MPC model can generally parameterize light field in different imaging formations, including conventional and focused light field cameras. By the constraints of 4D ray and 3D geometry, a 3D projective transformation is deduced to describe the relationship between geometric structure and the MPC coordinates. Based on the MPC model and projective transformation, we propose a calibration algorithm to verify our light field camera model. Our calibration method includes a close-form solution and a non-linear optimization by minimizing re-projection errors. Experimental results on both simulated and real scene data have verified the performance of our algorithm.
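The two-parallel-plane parameterization underlying the MPC model maps each 4D sample index to a ray through two parallel planes. The sketch below is a hypothetical placeholder to show the structure of such a mapping; the 6 intrinsic parameters here are illustrative, not the paper's actual MPC parameterization:

```python
import numpy as np

def tpp_ray(i, j, k, l, intrinsics):
    """Map 4D light-field sample indices (i, j, k, l) to a ray through
    two parallel planes.  The 6-tuple of intrinsics (scales, offsets,
    plane separation, spare slot) is a hypothetical placeholder."""
    ku, u0, kx, x0, d, _ = intrinsics
    u, v = ku * i + u0, ku * j + u0        # point on the view plane
    x, y = kx * k + x0, kx * l + x0        # point on the image plane
    direction = np.array([x - u, y - v, d])
    return np.array([u, v, 0.0]), direction / np.linalg.norm(direction)

origin, direction = tpp_ray(2, 3, 10, 20,
                            intrinsics=(0.1, 0.0, 0.01, 0.0, 1.0, 0.0))
print(origin, direction)
```

Calibration then amounts to estimating the intrinsics that minimize re-projection error of known 3D points against such rays, as the abstract describes.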

Journal ArticleDOI
20 Aug 2019
TL;DR: In this paper, it was shown that a dipolar chiral nanostructure is capable of distinguishing the sign of the phase vortex of the incoming light beam, carried by a linearly polarized Laguerre-Gaussian beam, upon tight focusing.
Abstract: The capability to distinguish the handedness of circularly polarized light is a well-known intrinsic property of a chiral nanostructure. It is a long-standing controversial debate, however, whether a chiral object can also sense the vorticity, or the orbital angular momentum (OAM), of a light field. Since OAM is a spatial property, it seems rather counterintuitive that a point-like chiral object could be able to distinguish the sense of the wave-front of light carrying OAM. Here, we show that a dipolar chiral nanostructure is indeed capable of distinguishing the sign of the phase vortex of the incoming light beam. To this end, we take advantage of the conversion of the sign of OAM, carried by a linearly polarized Laguerre–Gaussian beam, into the sign of optical chirality upon tight focusing. Our study provides for a deeper insight into the discussion of chiral light–matter interactions and the respective role of OAM.

Journal ArticleDOI
TL;DR: In this article, the authors show that light's orbital angular momentum can be efficiently transferred to an elementary excitation in solids with extended electronic states; the results obey selection rules governed by the conservation of total angular momentum, as confirmed numerically by electromagnetic field analysis.
Abstract: The nature of light-matter interaction is governed by the spatial-temporal structures of a light field and material wavefunctions. The emergence of the light beam with transverse phase vortex, or equivalently orbital angular momentum (OAM) has been providing intriguing possibilities to induce unconventional optical transitions beyond the framework of the electric dipole interaction. The uniqueness stems from the OAM transfer from light to material, as demonstrated using the bound electron of a single trapped ion. However, many aspects of the vortex light-matter interaction are still unexplored especially in solids with extended electronic states. Here, we unambiguously visualized dipole-forbidden multipolar excitations in a solid-state electron system; spoof localized surface plasmon, selectively induced by the terahertz vortex beam. The results obey the selection rules governed by the conservation of the total angular momentum, which is numerically confirmed by the electromagnetic field analysis. Our results show light's OAM can be efficiently transferred to an elementary excitation in solids.

Proceedings ArticleDOI
15 Jun 2019
TL;DR: LiFF is scale invariant and utilizes the full 4D light field to detect features that are robust to changes in perspective, which is particularly useful for structure from motion (SfM) and other tasks that match features across viewpoints of a scene.
Abstract: Feature detectors and descriptors are key low-level vision tools that many higher-level tasks build on. Unfortunately these fail in the presence of challenging light transport effects including partial occlusion, low contrast, and reflective or refractive surfaces. Building on spatio-angular imaging modalities offered by emerging light field cameras, we introduce a new and computationally efficient 4D light field feature detector and descriptor: LiFF. LiFF is scale invariant and utilizes the full 4D light field to detect features that are robust to changes in perspective. This is particularly useful for structure from motion (SfM) and other tasks that match features across viewpoints of a scene. We demonstrate significantly improved 3D reconstructions via SfM when using LiFF instead of the leading 2D or 4D features, and show that LiFF runs an order of magnitude faster than the leading 4D approach. Finally, LiFF inherently estimates depth for each feature, opening a path for future research in light field-based SfM.

Journal ArticleDOI
TL;DR: Experimental results verify that the proposed light field display can present realistic 3D images of historical relics over a 120-degree viewing angle.
Abstract: The light field display can provide vivid and natural 3D performance and can find many applications, such as relics research and exhibition. However, current light field displays are constrained by a narrow viewing angle, which falls short of expectations. With three groups of directional backlights and a fast-switching LCD panel, a time-multiplexed light field display with a 120-degree wide viewing angle is demonstrated. Up to 192 views are constructed within the viewing range to ensure the right geometric occlusion and smooth parallax motion. Clear 3D images can be perceived across the entire viewing range. Additionally, the designed holographic functional screen is used to recompose the light distribution, and the compound aspheric lens array is optimized to balance the aberrations and improve the 3D display quality. Experimental results verify that the proposed light field display has the capability to present realistic 3D images of historical relics over a 120-degree viewing angle.


Journal ArticleDOI
TL;DR: In this paper, the second-harmonic generation (SHG) induced by vectorial laser modes was studied and the spin-orbit-coupling (SOC) property of the pump field was analyzed.
Abstract: Vectorial nonlinear optics refers to the investigation of optical processes whose nonlinear polarization (NP) undergoes spin-orbit-coupling (SOC) interactions where, in general, the driving light field or the new field generated by the interaction contains the SOC property. To contribute to fundamental knowledge in this domain, we examine the type-II second-harmonic generation (SHG) induced by vectorial laser modes. First, we provide a general theory to analyze the vectorial SHG process. Second, by using two typical vector modes as examples, we show how the SOC of the pump field dictates nonlinear interaction. Finally, we corroborate our theoretical predictions through experiments to confirm the crucial role of the SOC in nonlinear interactions. These results enhance our fundamental understanding of SOC-mediated nonlinear optics and lay the foundation for further fundamental studies as well as possible applications.

Journal ArticleDOI
TL;DR: In this article, a simple sufficient condition for the interactions to be Hamiltonian is derived: the light field needs to interact twice with the systems and the second interaction has to be the time reversal of the first.
Abstract: We address a fundamental question of quantum optics: Can a beam of light mediate coherent Hamiltonian interactions between two distant quantum systems? This is an intriguing question whose answer is not a priori clear, since the light carries away information about the systems and might be subject to losses, giving rise to intrinsic decoherence channels associated with the coupling. Our answer is affirmative and we derive a particularly simple sufficient condition for the interactions to be Hamiltonian: The light field needs to interact twice with the systems and the second interaction has to be the time reversal of the first. We demonstrate that, even in the presence of significant optical loss, coherent interactions can be realized and generate substantial amounts of entanglement between the systems. Our method is directly applicable for building hybrid quantum systems, with relevant applications in the fields of optomechanics and atomic ensembles.

Journal ArticleDOI
TL;DR: This paper systematically models and analyzes the ray position sampling issue in the reconstruction of the light field and characterizes its effect on the quality of the rendered retinal image and on the accommodative response when viewing a 3D light field display.
Abstract: A 3D light field display typically reconstructs a 3D scene by sampling either the projections of the 3D scene at different depths or the directions of the light rays apparently emitted by the 3D scene and viewed from different eye positions. These light field display methods are potentially capable of rendering correct or nearly correct focus cues and therefore addressing the well-known vergence-accommodation conflict problem plaguing the conventional stereoscopic displays. However, very limited efforts have been made to investigate the effects of light ray sampling on the quality of the rendered focus cues and thus the visual responses of a viewer in light field displays. In this paper, by accounting for both the specifications of a light field display system and the ocular factors of the human visual system, we systematically model and analyze the ray position sampling issue in the reconstruction of the light field and characterize its effect on the quality of the rendered retinal image and on the accommodative response in viewing a 3D light field display. Using a recently developed 3D light field display prototype, we further experimentally validated the effects of ray position sampling on the resolution and accommodative response of a light field display, of which the result matches with theoretical characterization.

Journal ArticleDOI
TL;DR: In this article, a mode-selective excitation of complex amplitudes is performed with only one phase-only spatial light modulator, and the light field propagating through the fiber is measured holographically and is analyzed by a rapid decomposition method.
Abstract: Multimode fibers (MMF) are promising candidates to increase the data rate while reducing the space required for optical fiber networks. However, their use is hampered by mode mixing and other effects, leading to speckled output patterns. This can be overcome by measuring the transmission matrix (TM) of a multimode fiber. In this contribution, a mode-selective excitation of complex amplitudes is performed with only one phase-only spatial light modulator. The light field propagating through the fiber is measured holographically and is analyzed by a rapid decomposition method. This technique requires a small amount of measurements N, which corresponds to the degree of freedom of the fiber. The TM determines the amplitude and phase relationships of the modes, which allows us to understand the mode scrambling processes in the MMF and can be used for mode division multiplexing.
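The "rapid decomposition" step described above amounts to projecting the measured output field onto the fiber's mode profiles via overlap integrals. A toy 1-D sketch with assumed Gaussian-like stand-in modes, not the paper's measurement procedure:

```python
import numpy as np

def mode_overlap(field, mode):
    """Complex coefficient of `mode` in `field` via the discrete
    overlap integral <mode|field> (np.vdot conjugates `mode`)."""
    return np.vdot(mode, field) / np.vdot(mode, mode)

# two orthonormal toy 'fiber modes' on a 1-D grid
x = np.linspace(-3, 3, 256)
m0 = np.exp(-x**2)                 # fundamental-like profile
m1 = x * np.exp(-x**2)             # first higher-order-like profile
m0, m1 = m0 / np.linalg.norm(m0), m1 / np.linalg.norm(m1)

field = 0.8 * m0 + 0.6j * m1       # toy output superposition
c0, c1 = mode_overlap(field, m0), mode_overlap(field, m1)
print(np.round(c0, 6), np.round(c1, 6))
```

Stacking such coefficient vectors for each excited input mode column by column yields the transmission matrix, whose amplitude and phase relations describe the mode scrambling in the fiber.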

Journal ArticleDOI
TL;DR: For the first time, light field 3D image reproduction with a maximum spatial resolution of approximately 330,000 pixels, which is near standard-definition television resolution and three times that of a conventional light field display using a lens array, is achieved.
Abstract: Natural three-dimensional (3D) images, perceived as real objects in front of the viewer, can be displayed by faithfully reproducing light ray information. However, 3D images with sufficient characteristics for practical use cannot be displayed using conventional technologies because highly accurate reproduction of numerous light rays is required. We propose a novel full-parallax light field 3D display method named 'Aktina Vision', which includes a special top-hat diffusing screen with a narrow diffusion angle and an optical system for reproducing high-density light rays. Our prototype system reproduces over 100,000,000 light rays at angle intervals of less than 1° and optimally diffuses light rays with the top-hat diffusing screen. Thus, for the first time, light field 3D image reproduction with a maximum spatial resolution of approximately 330,000 pixels, which is near standard-definition television resolution and three times that of conventional light field display using a lens array, is achieved.

Proceedings ArticleDOI
22 Sep 2019
TL;DR: Different scanning orders of the light field views are investigated and their respective compression efficiencies analysed; the layer model is iteratively refined to predict the next views with increased accuracy.
Abstract: In this paper, we present a compression method for light fields based on the Fourier Disparity Layer representation. This light field representation consists of a set of layers that can be efficiently constructed in the Fourier domain from a sparse set of views, and then used to reconstruct intermediate viewpoints without requiring a disparity map. In the proposed compression scheme, a subset of light field views is encoded first and used to construct a Fourier Disparity Layer model from which a second subset of views is predicted. After encoding and decoding the residual of those predicted views, a larger set of decoded views is available, allowing us to refine the layer model in order to predict the next views with increased accuracy. The procedure is repeated until the complete set of light field views is encoded. Following this principle, we investigate different scanning orders of the light field views and analyse their respective compression efficiencies.
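The core of the layer representation, shifting each layer by its disparity via a phase ramp in the Fourier domain and summing, can be sketched in 1-D as follows. The toy layers and disparity values are hypothetical; the actual scheme operates on 2-D views and wraps this synthesis in the prediction/residual coding loop described above.

```python
import numpy as np

# 1-D sketch of Fourier Disparity Layer view synthesis: each layer k sits at
# disparity d_k, and the view at angular position u is the sum of the layers
# shifted by d_k * u -- a pure phase ramp in the Fourier domain.
n = 256
x = np.arange(n)
layers = [np.exp(-0.5 * ((x - 80) / 6) ** 2),      # toy layer at disparity +2
          np.exp(-0.5 * ((x - 170) / 10) ** 2)]    # toy layer at disparity -1
disparities = [2.0, -1.0]   # hypothetical per-layer disparities (pixels/unit u)
freqs = np.fft.fftfreq(n)

def render_view(u):
    """Synthesize the view at aperture position u from the layer model."""
    view = np.zeros(n, dtype=complex)
    for layer, d in zip(layers, disparities):
        # Shift theorem: multiplying by exp(-2*pi*i*f*d*u) shifts by d*u pixels
        view += np.fft.ifft(np.fft.fft(layer) * np.exp(-2j * np.pi * freqs * d * u))
    return view.real

central = render_view(0.0)   # no shift: just the sum of the layers
shifted = render_view(1.0)   # each layer displaced by its own disparity
```

Since the shifts are applied per layer, a single small layer set can predict any intermediate viewpoint, which is what makes the representation attractive for coding.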

Journal ArticleDOI
TL;DR: The proposed method synthesizes holograms without a hogel configuration by applying the complex field recovery technique from the Wigner distribution function to the light field data, generating a converging parabolic wave with a continuous wavefront for each object point.
Abstract: We propose a novel method that synthesizes computer-generated holograms from light field data. The light field, or ray space, is the spatio-angular distribution of light rays emanating from a three-dimensional scene, and it can also be represented by a large number of views from different observation directions. The proposed method synthesizes a hologram by applying the complex field recovery technique from the Wigner distribution function to the light field data. Unlike conventional approaches, the proposed method synthesizes holograms without a hogel configuration, generating a converging parabolic wave with a continuous wavefront for each object point. The proposed method does not trade spatial resolution for angular resolution as conventional hogel-based approaches do. Moreover, the proposed method works not only for random-phase light fields, as conventional approaches do, but also for arbitrary phase distributions with corresponding carrier waves. Therefore, the proposed method is useful in synthesizing holographic contents for a wide range of applications. The proposed method is verified by simulations and optical experiments, showing successful reconstruction of three-dimensional objects.
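This is not the paper's Wigner-based recovery, but the underlying idea of assigning each object point a converging/diverging paraxial parabolic wavefront can be sketched as a minimal 1-D point-cloud hologram; the wavelength, pixel pitch, and point positions below are arbitrary illustrative values.

```python
import numpy as np

wl = 633e-9                 # wavelength (m), hypothetical
pitch = 8e-6                # hologram pixel pitch (m), hypothetical
n = 1024
x = (np.arange(n) - n / 2) * pitch

# Hypothetical object points as (lateral position, depth) pairs in metres
points = [(-0.5e-3, 0.05), (0.8e-3, 0.08)]

# Superpose one paraxial parabolic (spherical) wavefront per object point;
# each term has a smooth, continuous phase across the whole hologram plane,
# in contrast to the piecewise hogel-based synthesis mentioned in the abstract.
H = np.zeros(n, dtype=complex)
for xp, zp in points:
    H += np.exp(1j * np.pi * (x - xp) ** 2 / (wl * zp))
```

Each term's phase is stationary at its point's lateral position, so on reconstruction the light converges back to the corresponding scene point.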

Journal ArticleDOI
TL;DR: In this article, the authors show that the orbital angular momentum (OAM) content of a perfect vortex can be measured quantitatively using optical modal decomposition, an already widely utilized technique for decomposing an arbitrary light field into a set of basis functions.
Abstract: Perfect (optical) vortices (PVs) have the mooted ability to encode orbital angular momentum (OAM) onto the field within a well-defined annular ring. Although this makes the near-field radial profile independent of OAM, the far-field radial profile nevertheless scales with OAM, forming a Bessel structure. As yet, the quantitative measurement of the OAM of PVs has been elusive, with current detection protocols opting for more qualitative procedures using interference or mode sorters. Here, we show that the OAM content of a PV can be measured quantitatively using optical modal decomposition: an already widely utilized technique for decomposing an arbitrary light field into a set of basis functions. We outline the theory and confirm it by experiment with holograms written to spatial light modulators, highlighting the care required for accurate decomposition of the OAM content. Our work will be of interest to the large community who seek to use such structured light fields in various applications, including optical trapping and tweezing, and optical communications.
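On the annular ring of a perfect vortex, modal decomposition of the OAM content reduces to projecting the field onto the azimuthal harmonics exp(ilφ). A minimal sketch with a single-mode toy field follows; the experimental version instead performs these projections with holograms written to a spatial light modulator.

```python
import numpy as np

# Sample the field on the annular ring as a function of azimuthal angle phi
nphi = 512
phi = np.linspace(0, 2 * np.pi, nphi, endpoint=False)

# Toy "perfect vortex" field on its ring: a single OAM mode with l = 3
field = np.exp(1j * 3 * phi)

# Modal decomposition: c_l = (1/2pi) * integral of field * exp(-i*l*phi) dphi,
# evaluated here as a discrete sum over the uniform azimuthal grid.
ells = np.arange(-10, 11)
c = np.array([np.sum(field * np.exp(-1j * l * phi)) for l in ells]) / nphi
spectrum = np.abs(c) ** 2   # OAM power spectrum |c_l|^2
```

For the toy field, all the power lands in the l = 3 bin, which is the quantitative OAM measurement the abstract describes.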

Posted Content
TL;DR: LiFF as mentioned in this paper is a scale invariant feature detector and descriptor that utilizes the full 4D light field to detect features that are robust to changes in perspective, which is particularly useful for structure from motion (SfM) and other tasks that match features across viewpoints of a scene.
Abstract: Feature detectors and descriptors are key low-level vision tools that many higher-level tasks build on. Unfortunately these fail in the presence of challenging light transport effects including partial occlusion, low contrast, and reflective or refractive surfaces. Building on spatio-angular imaging modalities offered by emerging light field cameras, we introduce a new and computationally efficient 4D light field feature detector and descriptor: LiFF. LiFF is scale invariant and utilizes the full 4D light field to detect features that are robust to changes in perspective. This is particularly useful for structure from motion (SfM) and other tasks that match features across viewpoints of a scene. We demonstrate significantly improved 3D reconstructions via SfM when using LiFF instead of the leading 2D or 4D features, and show that LiFF runs an order of magnitude faster than the leading 4D approach. Finally, LiFF inherently estimates depth for each feature, opening a path for future research in light field-based SfM.

Proceedings ArticleDOI
01 Jul 2019
TL;DR: This dataset contains 250 light fields, captured with a focused plenoptic camera and classified into eight clinical categories, according to the type of lesion, which has high potential for advancing medical imaging research and development of new classification algorithms based on light fields.
Abstract: Light field imaging technology has been attracting increasing interest because it enables capturing enriched visual information and expands the processing capabilities of traditional 2D imaging systems. Dense multiview, accurate depth maps and multiple focus planes are examples of the different types of visual information enabled by light fields. This technology is also emerging in medical imaging research, like dermatology, allowing to find new features and improve classification algorithms, namely those based on machine learning approaches. This paper presents a contribution to the research community, in the form of a publicly available light field image dataset of skin lesions (named SKINL2 v1.0). This dataset contains 250 light fields, captured with a focused plenoptic camera and classified into eight clinical categories, according to the type of lesion. Each light field comprises 81 different views of the same lesion. The database also includes the dermatoscopic image of each lesion. A representative subset of 17 central view images of the light fields is further characterised in terms of spatial information (SI), colourfulness (CF) and compressibility. This dataset has high potential for advancing medical imaging research and development of new classification algorithms based on light fields, as well as in clinically-oriented dermatology studies.
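The spatial information and colourfulness measures used to characterise the central views can be sketched as follows, with SI per ITU-T P.910 (standard deviation of the Sobel-filtered luminance) and CF per the Hasler-Süsstrunk metric; any preprocessing used in the paper, such as the exact luminance conversion, is not reproduced here.

```python
import numpy as np

def spatial_information(luma):
    """SI (ITU-T P.910): std. dev. of the Sobel gradient magnitude of luminance."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T

    def conv2(img, k):
        # Small 3x3 correlation with edge padding, pure NumPy
        out = np.zeros(img.shape, dtype=float)
        p = np.pad(img.astype(float), 1, mode="edge")
        for i in range(3):
            for j in range(3):
                out += k[i, j] * p[i:i + img.shape[0], j:j + img.shape[1]]
        return out

    grad = np.hypot(conv2(luma, kx), conv2(luma, ky))
    return grad.std()

def colourfulness(rgb):
    """Hasler & Suesstrunk colourfulness metric on an RGB image in [0, 255]."""
    r, g, b = (rgb[..., c].astype(float) for c in range(3))
    rg, yb = r - g, 0.5 * (r + g) - b
    return (np.hypot(rg.std(), yb.std())
            + 0.3 * np.hypot(rg.mean(), yb.mean()))
```

A flat image yields SI = 0 and a pure grayscale image yields CF = 0, which makes the two metrics useful sanity checks when characterising a new dataset.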

Journal ArticleDOI
TL;DR: In this article, a mechanism for light-induced Floquet engineering of the Fermi surface was introduced to dynamically tip the balance between competing instabilities in correlated condensed matter systems in the vicinity of a Van Hove singularity.
Abstract: We introduce a mechanism for light-induced Floquet engineering of the Fermi surface to dynamically tip the balance between competing instabilities in correlated condensed matter systems in the vicinity of a Van Hove singularity. We first calculate how the Fermi surface is deformed by an off-resonant, high-frequency light field and then determine the impact of this deformation on the ordering tendencies using an unbiased functional renormalization group approach. As a testbed, we investigate Floquet engineering in cuprates driven by light. We find that the $d$-wave superconducting ordering tendency in this system can be strongly enhanced over the Mott insulating one. This gives rise to extended regions of induced $d$-wave superconductivity in the effective phase diagram in the presence of a light field.
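As a purely illustrative sketch of the high-frequency Floquet limit (not the paper's functional renormalization group calculation), a linearly polarized off-resonant drive of dimensionless amplitude A renormalizes the hopping along its polarization axis by a Bessel factor J0(A), which deforms a square-lattice Fermi surface and moves the Van Hove singularity:

```python
import numpy as np

def bessel_j0(x, m=4096):
    """J0(x) via its integral representation, midpoint rule on [0, pi]."""
    th = (np.arange(m) + 0.5) * np.pi / m
    return np.mean(np.cos(x * np.sin(th)))

def dispersion(kx, ky, t=1.0, A=0.0):
    """Nearest-neighbour square-lattice band; drive polarized along x
    suppresses the x-hopping by J0(A) in the high-frequency limit."""
    return -2 * t * (bessel_j0(A) * np.cos(kx) + np.cos(ky))

k = np.linspace(-np.pi, np.pi, 201)
KX, KY = np.meshgrid(k, k)
static = dispersion(KX, KY, A=0.0)   # isotropic band, bandwidth 8t
driven = dispersion(KX, KY, A=1.5)   # anisotropic band: x-hopping reduced
```

Even this crude picture shows how a light field reshapes the band structure at fixed filling, which is the kind of Fermi-surface deformation whose effect on the competing orders the paper then analyses with an unbiased renormalization group approach.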