
Showing papers on "Light field" published in 2012


Journal ArticleDOI
01 Jul 2012
TL;DR: A unified optimization framework, based on nonnegative tensor factorization (NTF), encompassing all tensor display architectures is introduced, which is the first to allow joint multilayer, multiframe light field decompositions and is also the first optimization method for designs combining multiple layers with directional backlighting.
Abstract: We introduce tensor displays: a family of compressive light field displays comprising all architectures employing a stack of time-multiplexed, light-attenuating layers illuminated by uniform or directional backlighting (i.e., any low-resolution light field emitter). We show that the light field emitted by an N-layer, M-frame tensor display can be represented by an Nth-order, rank-M tensor. Using this representation we introduce a unified optimization framework, based on nonnegative tensor factorization (NTF), encompassing all tensor display architectures. This framework is the first to allow joint multilayer, multiframe light field decompositions, significantly reducing artifacts observed with prior multilayer-only and multiframe-only decompositions; it is also the first optimization method for designs combining multiple layers with directional backlighting. We verify the benefits and limitations of tensor displays by constructing a prototype using modified LCD panels and a custom integral imaging backlight. Our efficient, GPU-based NTF implementation enables interactive applications. Through simulations and experiments we show that tensor displays reveal practical architectures with greater depths of field, wider fields of view, and thinner form factors, compared to prior automultiscopic displays.
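As a rough, hedged illustration of the decomposition machinery (not the authors' implementation), the two-layer, M-frame special case reduces to a nonnegative matrix factorization solvable with multiplicative updates; the names below are hypothetical and the clipping to valid transmittances is a simplification of the paper's constrained NTF updates.

```python
# Minimal sketch, assuming the two-layer, M-frame special case: the target
# light field is flattened to a matrix L indexed by (front-layer pixel,
# rear-layer pixel) and factored as L ~ (1/M) * A @ B with nonnegative,
# multiplicative (Lee-Seung style) updates. Not the paper's full NTF solver.
import numpy as np

def two_layer_decomposition(L, M=3, iters=200, eps=1e-8):
    n1, n2 = L.shape
    rng = np.random.default_rng(0)
    A = rng.random((n1, M))        # front-layer transmittances, one column per frame
    B = rng.random((M, n2))        # rear-layer transmittances, one row per frame
    T = M * L                      # scale the target so the time average matches
    for _ in range(iters):
        A *= (T @ B.T) / (A @ (B @ B.T) + eps)
        A = np.clip(A, 0.0, 1.0)   # physical layers can only attenuate
        B *= (A.T @ T) / ((A.T @ A) @ B + eps)
        B = np.clip(B, 0.0, 1.0)
    return A, B                    # displayed field is the time average (1/M) * A @ B
```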

429 citations


Proceedings ArticleDOI
16 Jun 2012
TL;DR: A novel paradigm for depth reconstruction from 4D light fields in a variational framework is presented; taking into account the special structure of light field data, the problem of stereo matching is reformulated as a constrained labeling problem on epipolar plane images.
Abstract: We present a novel paradigm to deal with depth reconstruction from 4D light fields in a variational framework. Taking into account the special structure of light field data, we reformulate the problem of stereo matching as a constrained labeling problem on epipolar plane images, which can be thought of as vertical and horizontal 2D cuts through the field. This alternative formulation allows us to estimate accurate depth values even for specular surfaces, while simultaneously taking into account global visibility constraints in order to obtain consistent depth maps for all views. The resulting optimization problems are solved with state-of-the-art convex relaxation techniques. We test our algorithm on a number of synthetic and real-world examples captured with a light field gantry and a plenoptic camera, and compare to ground truth where available. All data sets as well as source code are provided online for additional evaluation.
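For orientation, the local depth cue that such EPI-based methods build on can be sketched as follows: on an epipolar plane image, a scene point traces a line whose slope equals its disparity, and that slope can be estimated from a smoothed structure tensor. This is only the assumed local-estimation step (the paper's contribution is the global constrained labeling and convex relaxation built on top of such cues); numpy/scipy are assumed.

```python
# Minimal sketch (assumed local step, not the authors' variational method):
# estimate per-pixel disparity on an epipolar plane image (EPI) from the
# orientation of its iso-intensity lines via the 2x2 structure tensor.
import numpy as np
from scipy.ndimage import gaussian_filter

def epi_disparity(epi, sigma=1.5):
    """epi: 2D array, axis 0 = view index s, axis 1 = image column x."""
    gs, gx = np.gradient(epi.astype(float))     # derivatives along s and x
    Jss = gaussian_filter(gs * gs, sigma)
    Jxx = gaussian_filter(gx * gx, sigma)
    Jsx = gaussian_filter(gs * gx, sigma)
    # Eigenvector of [[Jss, Jsx], [Jsx, Jxx]] for the smaller eigenvalue points
    # along the EPI line; its slope dx/ds is the local disparity estimate.
    lam_min = 0.5 * (Jss + Jxx) - np.sqrt(0.25 * (Jss - Jxx) ** 2 + Jsx ** 2)
    vs, vx = Jsx, lam_min - Jss                 # unnormalized eigenvector
    return vx / np.where(np.abs(vs) > 1e-12, vs, 1e-12)   # pixels of shift per view step
```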

385 citations


Patent
20 Mar 2012
TL;DR: In this article, a radiance camera is described in which the microlenses in a microlens array are focused on the image plane of the main lens instead of on the main lens itself, as in conventional plenoptic cameras.
Abstract: Method and apparatus for full-resolution light-field capture and rendering. A radiance camera is described in which the microlenses in a microlens array are focused on the image plane of the main lens instead of on the main lens, as in conventional plenoptic cameras. The microlens array may be located at distances greater than f from the photosensor, where f is the focal length of the microlenses. Radiance cameras in which the distance of the microlens array from the photosensor is adjustable, and in which other characteristics of the camera are adjustable, are described. Digital and film embodiments of the radiance camera are described. A full-resolution light-field rendering method may be applied to light-fields captured by a radiance camera to render higher-resolution output images than are possible with conventional plenoptic cameras and rendering methods.

282 citations


Journal ArticleDOI
TL;DR: In this article, the authors compare different iterative ghost imaging algorithms, adapt the weighting factor used in the traditional ghost imaging algorithm to account for changes in the efficiency of the generated light field, and show that their normalized weighting algorithm can match the performance of differential ghost imaging.
Abstract: We present an experimental comparison between different iterative ghost imaging algorithms. Our experimental setup utilizes a spatial light modulator for generating known random light fields to illuminate a partially-transmissive object. We adapt the weighting factor used in the traditional ghost imaging algorithm to account for changes in the efficiency of the generated light field. We show that our normalized weighting algorithm can match the performance of differential ghost imaging.
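A minimal sketch of the two reconstruction rules being compared, assuming a stack of known illumination patterns and bucket (single-pixel) readings; the variable names are hypothetical and the exact weighting used in the paper may differ.

```python
# Minimal sketch of iterative ghost imaging estimators, assuming patterns
# I[i] (shape N x H x W) and bucket values B[i] (shape N). Not the authors' code.
import numpy as np

def ghost_image_traditional(I, B):
    # G(x, y) = < (B_i - <B>) * I_i(x, y) >
    return np.tensordot(B - B.mean(), I, axes=1) / len(B)

def ghost_image_normalized(I, B):
    # Weight each bucket value by the total energy R_i of its pattern, which
    # compensates fluctuations in the efficiency of the generated light field.
    R = I.sum(axis=(1, 2))
    w = B / R
    return np.tensordot(w - w.mean(), I, axes=1) / len(B)
```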

279 citations


Proceedings ArticleDOI
16 Jun 2012
TL;DR: This work proposes a patch-based approach, showing that light field patches with the same disparity value lie on a low-dimensional subspace and that the dimensionality of such subspaces varies quadratically with the disparity value.
Abstract: With the recent availability of commercial light field cameras, we can foresee a future in which light field signals will be as commonplace as images. Hence, there is an imminent need to address the problem of light field processing. We provide a common framework for addressing many of the light field processing tasks, such as denoising, angular and spatial superresolution, etc. (in essence, all processing tasks whose observation models are linear). We propose a patch-based approach, where we model the light field patches using a Gaussian mixture model (GMM). We use the "disparity pattern" of the light field data to design the patch prior. We show that the light field patches with the same disparity value (i.e., at the same depth from the focal plane) lie on a low-dimensional subspace and that the dimensionality of such subspaces varies quadratically with the disparity value. We then model the patches as Gaussian random variables conditioned on their disparity value, thus effectively leading to a GMM model. During inference, we first find the disparity value of a patch by a fast subspace projection technique and then reconstruct it using the LMMSE algorithm. With this prior and inference algorithm, we show that we can perform many different processing tasks under a common framework.
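The reconstruction step rests on the standard linear minimum mean-square-error (Wiener) estimate for a Gaussian patch prior under a linear observation model; the sketch below is the generic formula, not the paper's exact implementation, and all names are hypothetical.

```python
# Minimal sketch: LMMSE reconstruction of a vectorized light field patch x
# from y = A @ x + noise, given the Gaussian component (mu, Sigma) selected
# by the patch's estimated disparity.
import numpy as np

def lmmse_reconstruct(y, A, mu, Sigma, noise_var):
    S = A @ Sigma @ A.T + noise_var * np.eye(A.shape[0])   # observation covariance
    return mu + Sigma @ A.T @ np.linalg.solve(S, y - A @ mu)
```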

237 citations


Journal ArticleDOI
TL;DR: A new rendering algorithm is presented that is tailored to the unstructured yet dense data the authors capture and can achieve piecewise‐bicubic reconstruction using a triangulation of the captured viewpoints and subdivision rules applied to reconstruction weights.
Abstract: We present a system for interactively acquiring and rendering light fields using a hand-held commodity camera. The main challenge we address is assisting a user in achieving good coverage of the 4D domain despite the challenges of hand-held acquisition. We define coverage by bounding reprojection error between viewpoints, which accounts for all 4 dimensions of the light field. We use this criterion together with a recent Simultaneous Localization and Mapping technique to compute a coverage map on the space of viewpoints. We provide users with real-time feedback and direct them toward under-sampled parts of the light field. Our system is lightweight and has allowed us to capture hundreds of light fields. We further present a new rendering algorithm that is tailored to the unstructured yet dense data we capture. Our method can achieve piecewise-bicubic reconstruction using a triangulation of the captured viewpoints and subdivision rules applied to reconstruction weights. © 2012 Wiley Periodicals, Inc.

208 citations


Patent
04 Sep 2012
TL;DR: A light field data acquisition device includes optics and a light field sensor to acquire light field image data of a scene, which can subsequently be used to generate a plurality of images of the scene using different virtual focus depths as discussed by the authors.
Abstract: A light field data acquisition device includes optics and a light field sensor to acquire light field image data of a scene. In at least one embodiment, the light field sensor is located at a substantially fixed, predetermined distance relative to the focal point of the optics. In response to user input, the light field sensor acquires the light field image data of the scene, and a storage device stores the acquired data. Such acquired data can subsequently be used to generate a plurality of images of the scene using different virtual focus depths.

160 citations


Book ChapterDOI
07 Oct 2012
TL;DR: A variational framework to generate super-resolved novel views from 4D light field data sampled at low resolution, for example by a plenoptic camera is presented.
Abstract: We present a variational framework to generate super-resolved novel views from 4D light field data sampled at low resolution, for example by a plenoptic camera. In contrast to previous work, we formulate the problem of view synthesis as a continuous inverse problem, which allows us to correctly take into account foreshortening effects caused by scene geometry transformations. High-accuracy depth maps for the input views are locally estimated using epipolar plane image analysis, which yields floating point depth precision without the need for expensive matching cost minimization. The disparity maps are further improved by increasing angular resolution with synthesized intermediate views. Minimization of the super-resolution model energy is performed with state-of-the-art convex optimization algorithms within seconds.

126 citations


Journal ArticleDOI
TL;DR: The technological advances that have recently permitted the synthesis of light transients confinable to less than a single oscillation of their carrier wave and the precise attosecond tailoring of their fields are detailed.
Abstract: Ultimate control over light entails the capability of crafting its field waveform. Here, we detail the technological advances that have recently permitted the synthesis of light transients confinable to less than a single oscillation of their carrier wave and the precise attosecond tailoring of their fields. Our work opens the door to light-field-based control of electrons on the atomic, molecular, and mesoscopic scales.

115 citations


Patent
05 Dec 2012
TL;DR: Spatio-temporal light field cameras, as described in this paper, can capture the light field within a spatio-temporally extended angular extent and digitally record the intensity and color of multiple directional views within a wide angle.
Abstract: Spatio-temporal light field cameras that can be used to capture the light field within a spatio-temporally extended angular extent. Such cameras can be used to record 3D images, 2D images that can be computationally focused, or wide-angle panoramic 2D images with relatively high spatial and directional resolutions. The light field cameras can also be used as 2D/3D switchable cameras with extended angular extent. The spatio-temporal aspects of the novel light field cameras allow them to capture and digitally record the intensity and color from multiple directional views within a wide angle. The inherent volumetric compactness of the light field cameras makes it possible to embed them in small mobile devices to capture either 3D images or computationally focusable 2D images. The inherent versatility of these light field cameras makes them suitable for multiple-perspective light field capture for 3D movies and video recording applications.

112 citations


Journal ArticleDOI
TL;DR: In this paper, a systematic derivation of the dynamical polarizability and the ac Stark shift of the ground and excited states of atoms interacting with a far-off-resonance light field of arbitrary polarization was presented.
Abstract: We present a systematic derivation of the dynamical polarizability and the ac Stark shift of the ground and excited states of atoms interacting with a far-off-resonance light field of arbitrary polarization. We calculate the scalar, vector, and tensor polarizabilities of atomic cesium using resonance wavelengths and reduced matrix elements for a large number of transitions. We analyze the properties of the fictitious magnetic field produced by the vector polarizability in conjunction with the ellipticity of the polarization of the light field.
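In one common convention (prefactors and signs vary between references, so this is only a schematic restatement, not necessarily the paper's notation), the ac Stark shift of a hyperfine level $|F, m_F\rangle$ separates into scalar, vector, and tensor contributions:

$$
\Delta E_{F m_F} \;=\; -\frac{|\mathcal E|^{2}}{4}\left[\alpha^{s}(\omega)
\;+\; \mathcal{C}\,\alpha^{v}(\omega)\,\frac{m_F}{2F}
\;+\; \frac{3\cos^{2}\theta_p - 1}{2}\,\alpha^{t}(\omega)\,\frac{3m_F^{2}-F(F+1)}{F(2F-1)}\right],
$$

where $\mathcal C$ is the degree of circular polarization and $\theta_p$ the angle between the polarization and quantization axes. The vector term acts like a fictitious magnetic field proportional to $\mathrm{Im}(\boldsymbol{\mathcal E}^{*}\times\boldsymbol{\mathcal E})$, which vanishes for linear polarization and is maximal for circular polarization, consistent with the ellipticity dependence analyzed in the paper.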

Journal ArticleDOI
TL;DR: A two-beam interference technique is proposed that produces an appreciable level of spin flow in moderately focused beams and enables detection of the orbital motion of probe particles within a field where the transverse energy circulation is associated exclusively with the spin flow.
Abstract: The internal energy flow in a light beam can be divided into the “orbital” and “spin” parts, associated with the spatial and polarization degrees of freedom of light. In contrast to the orbital one, experimental observation of the spin flow seems problematic because it is converted into an orbital flow upon tight focusing of the beam, which is usually applied for energy flow detection by means of the mechanical action upon probe particles. We propose a two-beam interference technique that produces an appreciable level of spin flow in moderately focused beams and enables detection of the orbital motion of probe particles within a field where the transverse energy circulation is associated exclusively with the spin flow. This result can be treated as the first demonstration of the mechanical action of the spin flow of a light field.
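For reference, the decomposition referred to here can be written, up to convention-dependent prefactors, as

$$
\mathbf p \;=\; \mathbf p_{O} + \mathbf p_{S},\qquad
\mathbf p_{O} \;\propto\; \mathrm{Im}\!\left[\mathbf E^{*}\!\cdot(\nabla)\mathbf E\right],\qquad
\mathbf p_{S} \;\propto\; \nabla\times\mathrm{Im}\!\left[\mathbf E^{*}\times\mathbf E\right],
$$

where $\mathbf p_O$ is the orbital (canonical) momentum density and $\mathbf p_S$, the spin flow, is the curl of the spin angular momentum density. Being a pure curl, the spin part carries no net momentum through a transverse plane for a localized beam, which is one way to see why its mechanical action on probe particles is so difficult to isolate.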

Journal ArticleDOI
TL;DR: The proposed acquisition and recovery method provides light field images with high spatial resolution and signal-to-noise ratio, and therefore is not affected by limitations common to existing light field camera designs.
Abstract: We propose a novel design for light field image acquisition based on compressive sensing principles. By placing a randomly coded mask at the aperture of a camera, incoherent measurements of the light passing through different parts of the lens are encoded in the captured images. Each captured image is a random linear combination of different angular views of a scene. The encoded images are then used to recover the original light field image via a novel Bayesian reconstruction algorithm. Using the principles of compressive sensing, we show that light field images with a large number of angular views can be recovered from only a few acquisitions. Moreover, the proposed acquisition and recovery method provides light field images with high spatial resolution and signal-to-noise ratio, and therefore is not affected by limitations common to existing light field camera designs. We present a prototype camera design based on the proposed framework by modifying a regular digital camera. Finally, we demonstrate the effectiveness of the proposed system using experimental results with both synthetic and real images.
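A hedged sketch of the measurement model follows (hypothetical names; the paper's Bayesian reconstruction is replaced here by a simple regularized least-squares stand-in).

```python
# Minimal sketch: each coded-aperture exposure sums the angular views with
# random mask weights; several exposures per pixel give y = Phi @ x, which a
# sparse/Bayesian solver inverts. Here a ridge inverse stands in for that solver.
import numpy as np

def simulate_captures(views, codes):
    """views: (K, H, W) angular views; codes: (M, K) mask weights per exposure."""
    return np.tensordot(codes, views, axes=1)              # (M, H, W) coded images

def recover_views(captures, codes, reg=1e-2):
    M, H, W = captures.shape
    K = codes.shape[1]
    Y = captures.reshape(M, -1)                            # (M, H*W)
    G = codes.T @ codes + reg * np.eye(K)
    X = np.linalg.solve(G, codes.T @ Y)                    # (K, H*W)
    return X.reshape(K, H, W)
```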

Journal ArticleDOI
24 Feb 2012-ACS Nano
TL;DR: The influence of noise on mode competition and the onset and magnitude of the relaxation oscillations is elucidated, and the dynamics and spectra of the emitted light indicate that coherent amplification and lasing are maintained even in the presence of noise and amplified spontaneous emission.
Abstract: Nanoplasmonic metamaterials are an exciting new class of engineered media that promise a range of important applications, such as subwavelength focusing, cloaking, and slowing/stopping of light. At optical frequencies, using gain to overcome potentially not insignificant losses has recently emerged as a viable solution to ultra-low-loss operation that may lead to next-generation active metamaterials. Maxwell-Bloch models for active nanoplasmonic metamaterials are able to describe the coherent spatiotemporal and nonlinear gain-plasmon dynamics. Here, we extend the Maxwell-Bloch theory to a Maxwell-Bloch Langevin approach, a spatially resolved model that describes the light field and noise dynamics in gain-enhanced nanoplasmonic structures. Using the example of an optically pumped nanofishnet metamaterial with an embedded laser dye (four-level) medium exhibiting a negative refractive index, we demonstrate the transition from loss-compensation to amplification and to nanolasing. We observe ultrafast relaxation oscillations of the bright negative-index mode with frequencies just below the THz regime. The influence of noise on mode competition and the onset and magnitude of the relaxation oscillations is elucidated, and the dynamics and spectra of the emitted light indicate that coherent amplification and lasing are maintained even in the presence of noise and amplified spontaneous emission.

Journal ArticleDOI
Kai Lou, Sheng-Xia Qian, Xi-Lin Wang, Yongnan Li, Bing Gu, Chenghou Tu, Hui-Tian Wang
TL;DR: The designable spatial structure of polarization of the femtosecond vector light field can be used to manipulate the fabricated microstructure, and it is shown that the ripples are always perpendicular to the direction of the locally linear polarization.
Abstract: We have fabricated complicated two-dimensional subwavelength microstructures on silicon induced by femtosecond vector light fields. The fabricated microstructures have a ripple spacing of around 670-690 nm and a groove depth of about 300 nm when the pulse fluence of 0.26 J/cm2 is slightly above the ablation threshold of 0.2 J/cm2 for silicon under irradiation with 100 pulses. The ripples are always perpendicular to the direction of the locally linear polarization. The designable spatial structure of polarization of the femtosecond vector light field can be used to manipulate the fabricated microstructure.

01 Feb 2012
TL;DR: In this article, the authors used Rayleigh-Sommerfeld backpropagation to reconstruct the three-dimensional light field responsible for the recorded intensity in an in-line hologram.
Abstract: Rayleigh-Sommerfeld back-propagation can be used to reconstruct the three-dimensional light field responsible for the recorded intensity in an in-line hologram. Deconvolving the volumetric reconstruction with an optimal kernel derived from the Rayleigh-Sommerfeld propagator itself emphasizes the objects responsible for the scattering pattern while suppressing both the propagating light and also such artifacts as the twin image. Bright features in the deconvolved volume may be identified with such objects as colloidal spheres and nanorods. Tracking their thermally-driven Brownian motion through multiple holographic video images provides estimates of the tracking resolution, which approaches 1 nm in all three dimensions.
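A minimal sketch of the back-propagation step in its angular-spectrum form follows (a standard Rayleigh-Sommerfeld/angular-spectrum kernel with assumed parameters; the deconvolution with the optimal kernel described in the paper is omitted).

```python
# Minimal sketch: numerically refocus a normalized in-line hologram to a plane
# at distance z using the angular-spectrum form of the Rayleigh-Sommerfeld
# propagator.
import numpy as np

def rs_backpropagate(hologram, z, wavelength, pixel_size):
    ny, nx = hologram.shape
    fx = np.fft.fftfreq(nx, d=pixel_size)
    fy = np.fft.fftfreq(ny, d=pixel_size)
    FX, FY = np.meshgrid(fx, fy)
    k = 2.0 * np.pi / wavelength
    kz = np.sqrt((k**2 - (2*np.pi*FX)**2 - (2*np.pi*FY)**2).astype(complex))
    H = np.exp(-1j * kz * z)    # conjugate kernel: propagate back toward the source
    return np.fft.ifft2(np.fft.fft2(hologram) * H)
```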

Proceedings ArticleDOI
TL;DR: The focused plenoptic camera is based on the Lippmann sensor, an array of microlenses focused on the pixels of a conventional image sensor; real-world images demonstrate the extended capabilities, and limitations are discussed.
Abstract: The focused plenoptic camera is based on the Lippmann sensor: an array of microlenses focused on the pixels of a conventional image sensor. This device samples the radiance, or plenoptic function, as an array of cameras with large depth of field, focused at a certain plane in front of the microlenses. For the purpose of digital refocusing (which is one of the important applications) the depth of field needs to be large, but there are fundamental optical limitations to this. The solution to the above problem is to use an array of interleaved microlenses of different focal lengths, focused at two or more different planes. In this way a focused image can be constructed at any depth of focus, and a much wider range of digital refocusing can be achieved. This paper presents our theory and the results of implementing such a camera. Real-world images demonstrate the extended capabilities, and limitations are discussed.

Journal ArticleDOI
01 Jul 2012
TL;DR: A general reconstruction technique that exploits anisotropy in the light field and permits efficient reuse of input samples between pixels or world-space locations, multiplying the effective sampling rate by a large factor is described.
Abstract: Stochastic techniques for rendering indirect illumination suffer from noise due to the variance in the integrand. In this paper, we describe a general reconstruction technique that exploits anisotropy in the light field and permits efficient reuse of input samples between pixels or world-space locations, multiplying the effective sampling rate by a large factor. Our technique introduces visibility-aware anisotropic reconstruction to indirect illumination, ambient occlusion and glossy reflections. It operates on point samples without knowledge of the scene, and can thus be seen as an advanced image filter. Our results show dramatic improvement in image quality while using very sparse input samplings.

Journal ArticleDOI
TL;DR: Compressive displays aim to create flexible optical systems that can synthesize a compressed target light field through compression and tailored optical designs, where fewer display pixels are necessary to emit a given light field than a direct optical solution would require.
Abstract: Light fields are the multiview extension of stereo image pairs: a collection of images showing a 3D scene from slightly different perspectives. Depicting high-resolution light fields usually requires an excessively large display bandwidth; compressive light field displays are enabled by the codesign of optical elements and computational-processing algorithms. Rather than pursuing a direct “optical” solution (for example, adding one more pixel to support the emission of one additional light ray), compressive displays aim to create flexible optical systems that can synthesize a compressed target light field. In effect, each pixel emits a superposition of light rays. Through compression and tailored optical designs, fewer display pixels are necessary to emit a given light field than a direct optical solution would require.

Patent
10 Jan 2012
TL;DR: In this paper, a 3D display consisting of a light source, a beam scanner and a beam deflector array is presented to reproduce a light field by changing a direction of light rays scanned by the beam scanner.
Abstract: Provided is a 3-dimensional (3D) display apparatus including a light source, a beam scanner, and a beam deflector array. The beam scanner scans light emitted by the light source, and the beam deflector array includes a plurality of beam deflectors arranged in an array to reproduce a light field by changing a direction of light rays scanned by the beam scanner.

Journal ArticleDOI
TL;DR: In this article, the first and second-order statistics of the scattered fields for an arbitrary intensity coherent-state light field interacting with a two-level system in a waveguide geometry were derived.
Abstract: We show how to calculate the first- and second-order statistics of the scattered fields for an arbitrary intensity coherent-state light field interacting with a two-level system in a waveguide geometry. Specifically, we calculate the resonance fluorescence from the qubit, using input-output formalism. We derive the transmission and reflection coefficients, and illustrate the bunching and antibunching of light that is scattered in the forward and backward directions, respectively. Our results agree with previous calculations on one- and two-photon scattering as well as those that are based on the master equation approach.
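In the weak-drive (single-photon) and lossless limit, the standard waveguide-QED result, which the general coherent-state treatment should reduce to (quoted here in a common convention, not necessarily the paper's), is

$$
t(\delta)=\frac{\delta}{\delta+i\,\Gamma_{\mathrm{1D}}/2},\qquad
r(\delta)=\frac{-\,i\,\Gamma_{\mathrm{1D}}/2}{\delta+i\,\Gamma_{\mathrm{1D}}/2},\qquad
|t|^{2}+|r|^{2}=1,
$$

where $\delta$ is the detuning from the qubit resonance and $\Gamma_{\mathrm{1D}}$ the waveguide-coupled decay rate; on resonance a weak field is fully reflected. The bunching and antibunching discussed in the abstract appear only in the second-order correlations, which require the full input-output calculation.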

Patent
20 Dec 2012
TL;DR: In this paper, a 3D scene is modeled as a set of layers representing different depths of the scene, and as masks representing occlusions of layers by other layers, and the layers are represented as linear combinations of atoms selected from an overcomplete dictionary.
Abstract: A three-dimensional scene is modeled as a set of layers representing different depths of the scene, and as masks representing occlusions of layers by other layers. The layers are represented as linear combinations of atoms selected from an overcomplete dictionary. An iterative approach is used to alternately estimate the atom coefficients for layers from a light field image of the scene, assuming values for the masks, and to estimate the masks given the estimated layers. In one approach, the atoms in the dictionary are ridgelets oriented at different angles, where there is a correspondence between depth and angle.

Patent
31 Jan 2012
TL;DR: In this paper, improved downsampling techniques are employed, which can be applied to light field images and which preserve the ability to refocus (and otherwise manipulate) such images.
Abstract: According to various embodiments of the invention, improved downsampling techniques are employed, which can be applied to light field images and which preserve the ability to refocus (and otherwise manipulate) such images. Groups of pixels, rather than individual pixels, are downsampled; such groups of pixels can be defined, for example, as disks of pixels. Such downsampling is accomplished, for example, by aggregating values for pixels having similar relative positions within adjacent disks (or other defined regions or pixel groups) of the image. When applied to light field images, the downsampling techniques of the present invention reduce spatial resolution without sacrificing angular resolution. This ensures that the refocusing capability of the resulting light field image is not reduced and/or adversely impacted.
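A hedged sketch of the idea as read from the abstract (hypothetical layout and names; the patent covers more general groupings): average pixels that sit at the same relative position within each microlens disk across a small block of adjacent disks, so spatial resolution drops while the per-disk angular sampling is untouched.

```python
# Minimal sketch, assuming square "disks" of size disk x disk tiling the sensor:
# aggregate same-relative-position pixels across block x block adjacent disks.
import numpy as np

def downsample_disks(lf, disk=8, block=2):
    """lf: 2D sensor image; returns array of shape (gy//block, gx//block, disk, disk)."""
    H, W = lf.shape
    gy, gx = H // disk, W // disk                                    # number of disks
    disks = lf[: gy * disk, : gx * disk].reshape(gy, disk, gx, disk)
    disks = disks.transpose(0, 2, 1, 3)                              # (gy, gx, u, v)
    disks = disks[: gy - gy % block, : gx - gx % block]
    gy2, gx2 = disks.shape[0] // block, disks.shape[1] // block
    blocks = disks.reshape(gy2, block, gx2, block, disk, disk)
    return blocks.mean(axis=(1, 3))                                  # angular (u, v) kept intact
```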

Journal ArticleDOI
TL;DR: In this paper, the superradiant emission properties of an atomic ensemble with a cascade level configuration are numerically simulated; the correlated spontaneous emissions (signal then idler fields) are purely stochastic processes initiated by quantum fluctuations.
Abstract: The superradiant emission properties of an atomic ensemble with a cascade level configuration are numerically simulated. The correlated spontaneous emissions (signal then idler fields) are purely stochastic processes which are initiated by quantum fluctuations. We utilize the positive-$P$ phase-space method to investigate the dynamics of the atoms and counterpropagating emissions. The light field intensities are calculated, and the signal-idler correlation function is studied for different optical depths of the atomic ensemble. A shorter correlation time scale for a denser atomic ensemble implies a broader spectral window needed to store or retrieve the idler pulse.

Proceedings ArticleDOI
25 Mar 2012
TL;DR: A new method for light field compression is described that exploits inter-view correlation, uses homography and 2D warping to predict views, and does not require additional camera parameters or a 3D geometry model.
Abstract: This paper describes a new method for light field compression that exploits inter-view correlation. The proposed method uses homography and 2D warping to predict views and does not require additional camera parameters or a 3D geometry model. The method utilizes the angular shift between views, which is neglected in conventional motion compensation methods. Results indicate improved coding efficiency of the proposed method over traditional motion compensation schemes. A full light field coder based on video compression demonstrates a 1–1.5 dB additional improvement in PSNR, or equivalently a 4–12% additional reduction in bitrate, when the new method is introduced as a prediction mode.
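A hedged sketch of the prediction step only (using OpenCV as a stand-in; the paper's coder, rate allocation, and exact warping model are not reproduced): estimate a homography between a reference view and the view to be coded, warp the reference, and code only the residual.

```python
# Minimal sketch: homography-based inter-view prediction for two grayscale
# light field views, with OpenCV standing in for the paper's pipeline.
import cv2
import numpy as np

def predict_view(reference, target):
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(reference, None)
    k2, d2 = orb.detectAndCompute(target, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    h, w = reference.shape
    prediction = cv2.warpPerspective(reference, H, (w, h))
    residual = target.astype(np.int16) - prediction.astype(np.int16)
    return prediction, residual   # only the residual (plus H) would be entropy coded
```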

Journal ArticleDOI
TL;DR: In this article, ground-state depletion for sub-diffraction-limited spatial resolution in coherent anti-Stokes Raman scattering (CARS) microscopy is investigated; depletion is achieved via a control laser light field incident prior to the CARS excitation light fields.
Abstract: We theoretically investigate ground-state depletion for subdiffraction-limited spatial resolution in coherent anti-Stokes Raman scattering (CARS) microscopy. We propose a scheme based on ground-state depopulation, which is achieved via a control laser light field incident prior to the CARS excitation light fields. This ground-state depopulation results in reduced CARS signal generation. With an appropriate choice of spatial beam profiles, the scheme can be used to increase the spatial resolution. Based on the density matrix formalism, we calculate the CARS signal generation and find a CARS signal suppression of 75% due to ground-state depletion with a single control light field; by using two control light fields, the suppression can be enhanced to 94%. Additional control light fields will enhance the CARS suppression even further. For the case of a single control light field, we calculate the resulting CARS images using a computer-generated test image including quantum and detector noise, and show that the background from the limited CARS suppression can be removed by calculating difference images, yielding subdiffraction-limited resolution where the achievable resolution depends only on the intensity used.

Journal ArticleDOI
TL;DR: It is demonstrated that the evanescent light field, confined near the surface of a waveguide, can be used to direct light into cyanobacteria and successfully drive photosynthesis.
Abstract: The conversion of solar energy to chemical energy useful for maintaining cellular function in photosynthetic algae and cyanobacteria relies critically on light delivery to the microorganisms. Conventional direct irradiation of a bulk suspension leads to non-uniform light distribution within a strongly absorbing culture, and related inefficiencies. The study of small colonies of cells in controlled microenvironments would benefit from control over wavelength, intensity, and location of light energy on the scale of the microorganism. Here we demonstrate that the evanescent light field, confined near the surface of a waveguide, can be used to direct light into cyanobacteria and successfully drive photosynthesis. The method is enabled by the synergy between the penetration depth of the evanescent field and the size of the photosynthetic bacterium, both on the order of micrometres. Wild-type Synechococcus elongatus (ATCC 33912) cells are exposed to evanescent light generated through total internal reflection of red (λ = 633 nm) light on a prism surface. Growth onset is consistently observed at intensity levels of 79 ± 10 W m⁻², as measured 1 μm from the surface, and 60 ± 8 W m⁻² as measured by a 5 μm depthwise average. These threshold values agree well with control experiments and literature values based on direct irradiation with daylight. In contrast, negligible growth is observed with evanescent light penetration depths less than the minor dimension of the rod-like bacterium (achieved at larger light incident angles). Collectively these results indicate that evanescent light waves can be used to tailor and direct light into cyanobacteria, driving photosynthesis.

Proceedings ArticleDOI
25 Mar 2012
TL;DR: A redundant dictionary is designed to exploit cross-camera correlated structures to sparsely represent camera images, and an efficient compressive encoding scheme based on the random convolution framework is proposed.
Abstract: This paper presents a novel approach to capturing light fields with camera arrays based on the compressive sensing framework. Light fields are captured by a linear array of cameras with overlapping fields of view. In this work, we design a redundant dictionary to exploit cross-camera correlated structures to sparsely represent camera images. Our main contributions are threefold. First, we exploit the correlations between the set of views by making use of a specially designed redundant dictionary. We show experimentally that the projection of complex scenes onto this dictionary yields very sparse coefficients. Second, we propose an efficient compressive encoding scheme based on the random convolution framework [1]. Finally, we develop a joint sparse recovery algorithm for decoding the compressed measurements and show a marked improvement over independent decoding of CS measurements.

Journal ArticleDOI
TL;DR: In this article, the authors investigated femtosecond laser pulse filamentation in fused silica by varying the wavelength in the range from 800 to 2300 nm and showed that in the case of anomalous group-velocity dispersion, a sequence of light bullets with a high spatial and temporal localisation of the light field is formed along the filament.
Abstract: We report the results of investigation of femtosecond laser pulse filamentation in fused silica by varying the wavelength in the range from 800 to 2300 nm. It is shown that in the case of the anomalous group-velocity dispersion, a sequence of 'light bullets' with a high spatial and temporal localisation of the light field is formed along the filament. The relation of the formation and propagation of light bullets with the formation of an isolated anti-Stokes wing of the supercontinuum spectrum is established.