
Showing papers on "Light field published in 2004"


Journal ArticleDOI
27 Aug 2004-Science
TL;DR: The apparatus allows complete characterization of few-cycle waves of visible, ultraviolet, and/or infrared light, thereby providing the possibility for controlled and reproducible synthesis of ultrabroadband light waveforms.
Abstract: The electromagnetic field of visible light performs approximately 10^15 oscillations per second. Although many instruments are sensitive to the amplitude and frequency (or wavelength) of these oscillations, they cannot access the light field itself. We directly observed how the field built up and disappeared in a short, few-cycle pulse of visible laser light by probing the variation of the field strength with a 250-attosecond electron burst. Our apparatus allows complete characterization of few-cycle waves of visible, ultraviolet, and/or infrared light, thereby providing the possibility for controlled and reproducible synthesis of ultrabroadband light waveforms.

604 citations


Proceedings ArticleDOI
19 Jul 2004
TL;DR: A simple procedure to calibrate camera arrays used to capture light fields using a plane + parallax framework is described; it is shown how to estimate camera positions up to an affine ambiguity, and how to reproject light field images onto a family of planes using only knowledge of planar parallax for one point in the scene.
Abstract: A light field consists of images of a scene taken from different viewpoints. Light fields are used in computer graphics for image-based rendering and synthetic aperture photography, and in vision for recovering shape. In this paper, we describe a simple procedure to calibrate camera arrays used to capture light fields using a plane + parallax framework. Specifically, for the case when the cameras lie on a plane, we show (i) how to estimate camera positions up to an affine ambiguity, and (ii) how to reproject light field images onto a family of planes using only knowledge of planar parallax for one point in the scene. While planar parallax does not completely describe the geometry of the light field, it is adequate for the first two applications which, it turns out, do not depend on having a metric calibration of the light field. Experiments on acquired light fields indicate that our method yields better results than full metric calibration.

379 citations


Journal ArticleDOI
TL;DR: In this article, the authors suggest using a two-color evanescent light field around a subwavelength-diameter fiber to trap and guide atoms; depending on the polarization of the input fields, atoms can be confined to a cylindrical shell around the fiber or to two straight lines parallel to the fiber axis.
Abstract: We suggest using a two-color evanescent light field around a subwavelength-diameter fiber to trap and guide atoms. The optical fiber carries a red-detuned light and a blue-detuned light, with both modes far from resonance. When both input light fields are circularly polarized, a set of trapping minima of the total potential in the transverse plane is formed as a ring around the fiber. This design allows confinement of atoms to a cylindrical shell around the fiber. When one or both of the input light fields are linearly polarized, the total potential has two local minimum points in the transverse plane. This design allows confinement of atoms to two straight lines parallel to the fiber axis. Due to the small thickness of the fiber, we can use far-off-resonance fields with substantially differing evanescent decay lengths to produce a net potential with a large depth, a large coherence time, and a large trap lifetime. For example, a 0.2-μm-radius silica fiber carrying 30 mW of 1.06-μm-wavelength light and 29 mW of 700-nm-wavelength light, both fields circularly polarized at the input, gives for cesium atoms a trap depth of 2.9 mK, a coherence time of 32 ms, and a recoil-heating-limited trap lifetime of 541 s.

251 citations


Journal ArticleDOI
TL;DR: An image-based modeling and rendering system that models a sparse light field using a set of coherent layers, and introduces a Bayesian approach, coherence matting, to estimate alpha matting around segmented layer boundaries by incorporating a coherence prior in order to maintain coherence across images.
Abstract: In this article, we present an image-based modeling and rendering system, which we call pop-up light field, that models a sparse light field using a set of coherent layers. In our system, the user specifies how many coherent layers should be modeled or popped up according to the scene complexity. A coherent layer is defined as a collection of corresponding planar regions in the light field images. A coherent layer can be rendered free of aliasing all by itself, or against other background layers. To construct coherent layers, we introduce a Bayesian approach, coherence matting, to estimate alpha matting around segmented layer boundaries by incorporating a coherence prior in order to maintain coherence across images. We have developed an intuitive and easy-to-use user interface (UI) to facilitate pop-up light field construction. The key to our UI is the concept of human-in-the-loop where the user specifies where aliasing occurs in the rendered image. The user input is reflected in the input light field images where pop-up layers can be modified. The user feedback is instant through a hardware-accelerated real-time pop-up light field renderer. Experimental results demonstrate that our system is capable of rendering anti-aliased novel views from a sparse light field.

200 citations


Journal ArticleDOI
TL;DR: The key observation is that anti-aliased light field rendering is equivalent to eliminating the “double image” artifacts caused by view interpolation, and a closed-form solution of the minimum sampling rate is presented.
Abstract: Recently, many image-based modeling and rendering techniques have been successfully designed to render photo-realistic images without the need for explicit 3D geometry. However, these techniques (e.g., light field rendering (Levoy, M. and Hanrahan, P., 1996. In SIGGRAPH 1996 Conference Proceedings, Annual Conference Series, Aug. 1996, pp. 31–42) and Lumigraph (Gortler, S.J., Grzeszczuk, R., Szeliski, R., and Cohen, M.F., 1996. In SIGGRAPH 1996 Conference Proceedings, Annual Conference Series, Aug. 1996, pp. 43–54)) may require a substantial number of images. In this paper, we adopt a geometric approach to investigate the minimum sampling problem for light field rendering, with and without geometry information of the scene. Our key observation is that anti-aliased light field rendering is equivalent to eliminating the “double image” artifacts caused by view interpolation. Specifically, we present a closed-form solution of the minimum sampling rate for light field rendering. The minimum sampling rate is determined by the resolution of the camera and the depth variation of the scene. This rate is ensured if the optimal constant depth for rendering is chosen as the harmonic mean of the maximum and minimum depths of the scene. Moreover, we construct the minimum sampling curve in the joint geometry and image space, with the consideration of depth discontinuity. The minimum sampling curve quantitatively indicates how reduced geometry information can be compensated by increasing the number of images, and vice versa. Experimental results demonstrate the effectiveness of our theoretical analysis.

92 citations
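The paper's closed-form result (the optimal constant rendering depth is the harmonic mean of the scene's minimum and maximum depths) is simple to compute; a minimal Python sketch, with the depth values purely illustrative:

```python
def optimal_rendering_depth(z_min, z_max):
    """Optimal constant depth for anti-aliased light field rendering:
    the harmonic mean of the minimum and maximum scene depths
    (closed-form result of the minimum-sampling analysis)."""
    return 2.0 * z_min * z_max / (z_min + z_max)

# Illustrative scene spanning 2 m to 6 m of depth
print(optimal_rendering_depth(2.0, 6.0))  # 3.0, not the arithmetic mean 4.0
```

Note that the harmonic mean weights the near depth more heavily than the arithmetic mean would, which is what equalizes the disparity error on either side of the rendering plane.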


Journal ArticleDOI
TL;DR: In these experiments, an N-photon-absorption recording medium is simulated by Nth harmonic generation followed by a CCD camera, suggesting that the improved resolution achieved through use of "quantum lithography" results primarily from the nonlinear response of the recording medium and not from quantum features of the light field.
Abstract: A nonlinear optical, interferometric method for improving the resolution of a lithographic system by an arbitrarily large factor with high visibility is described. The technique is implemented experimentally for both two-fold and three-fold enhancement of the resolution with respect to the traditional Rayleigh limit. In these experiments, an N-photon-absorption recording medium is simulated by Nth harmonic generation followed by a CCD camera. This technique does not exploit quantum features of light; this fact suggests that the improved resolution achieved through use of “quantum lithography” results primarily from the nonlinear response of the recording medium and not from quantum features of the light field.

86 citations


Journal ArticleDOI
TL;DR: It is predicted that polarization sensitivity will be most useful for short-range visual tasks in water and less so for detecting objects, signals, or structures from far away.
Abstract: Partially linearly polarized light is abundant in the oceans. The natural light field is partially polarized throughout the photic range, and some objects and animals produce a polarization pattern of their own. Many polarization-sensitive marine animals take advantage of the polarization information, using it for tasks ranging from navigation and finding food to communication. In such tasks, the distance to which the polarization information propagates is of great importance. Using newly designed polarization sensors, we measured the changes in linear polarization underwater as a function of distance from a standard target. In the relatively clear waters surrounding coral reefs, the partial polarization (in percent) decreased exponentially as a function of distance from the target, resulting in a 50% reduction of partial polarization at a distance of 1.25-3 m, depending on water quality. Based on these measurements, we predict that polarization sensitivity will be most useful for short-range (on the order of meters) visual tasks in water and less so for detecting objects, signals, or structures from far away. Navigation and body orientation based on the celestial polarization pattern are predicted to be limited to shallow waters as well, while navigation based on the solar position is possible through a deeper range.

77 citations
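The exponential falloff reported above implies a simple attenuation law; a hedged sketch (the function name and the sample numbers are illustrative, with the half-distance taken from the paper's 1.25-3 m range):

```python
import math

def partial_polarization(p0, distance_m, half_distance_m):
    """Partial polarization remaining after `distance_m` of underwater
    propagation from the target, given the distance at which it halves
    (measured as 1.25-3 m near coral reefs, depending on water quality)."""
    return p0 * math.exp(-math.log(2.0) * distance_m / half_distance_m)

# With a 2 m half-distance, 60% partial polarization drops to 15% after 4 m
print(partial_polarization(0.60, 4.0, 2.0))
```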


Proceedings ArticleDOI
23 May 2004
TL;DR: It is shown that an infinitesimally small surface element of a Lambertian scene exists as a plane of constant value in a 4D light field, where the orientation of the plane is determined by the depth of the element in the scene.
Abstract: It is shown that an infinitesimally small surface element of a Lambertian scene exists as a plane of constant value in a 4D light field, where the orientation of the plane is determined by the depth of the element in the scene. By applying 2D gradient operators to appropriate subsets of the light field, the orientations of these constant-valued planes, and thus the depths of the corresponding elements of the scene, may be estimated. The redundancy associated with using three color channels, and having two depth estimates based on orthogonal 2D gradient estimates, is resolved using a weighted sum based on the confidence of each estimate.

59 citations
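The gradient-based estimation described above can be illustrated on a single 2D slice of the light field (an epipolar-plane image): along a constant-value line of slope m, the directional derivative vanishes, L_u + m·L_s = 0, so m follows from a gradient-weighted least-squares fit. A sketch under that assumption (the mapping from slope to metric depth, which depends on the camera geometry, is omitted, and the function name is illustrative):

```python
import numpy as np

def epi_slope(epi):
    """Estimate the slope m = ds/du of the constant-value lines in a 2D
    (u, s) slice of the light field.  Along such a line L_u + m * L_s = 0,
    giving the least-squares estimate m = -sum(L_u * L_s) / sum(L_s^2).
    The slope encodes the depth of the corresponding Lambertian element."""
    gs, gu = np.gradient(epi.astype(float))  # axis 0 = s, axis 1 = u
    return -np.sum(gu * gs) / np.sum(gs * gs)

# Synthetic Lambertian slice: constant along lines u + 0.5*s = const,
# i.e. slope ds/du = -2
u, s = np.meshgrid(np.arange(64), np.arange(64))
epi = np.sin(0.1 * (u + 0.5 * s))
print(epi_slope(epi))  # close to -2.0
```

The paper's weighted-sum fusion across color channels and across the two orthogonal gradient pairs would combine several such estimates by their confidence; here sum(L_s^2) already plays that role within one slice.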


Patent
09 Mar 2004
TL;DR: In this paper, a planar light-emitting device has light input windows 2a-2d for inputting light from point light sources 5a-5d disposed outside; and a light guiding member 3 having a scattering area 4 scattering the input light.
Abstract: PROBLEM TO BE SOLVED: To provide a planar light-emitting device that outputs uniform planar light on the basis of light from a plurality of point light sources. SOLUTION: A planar light-emitting device 1 has light input windows 2a-2d for receiving light from point light sources 5a-5d disposed outside, and a light-guiding member 3 with a scattering area 4 that scatters the input light. Each window 2a-2d adjusts the travelling direction of the input light from the point light sources 5a-5d so that, where required, the light intensity distribution pattern in free space has C4v symmetry, and light-scattering members are arranged on the scattering area 4 so as to share that C4v symmetry.

30 citations


Proceedings ArticleDOI
19 Jul 2004
TL;DR: In this article, the difference sphere is used to estimate near point light sources including their radiance, which has been difficult to achieve in previous efforts where only distant directional light sources were assumed.
Abstract: We present a novel approach for estimating lighting sources from a single image of a scene that is illuminated by near point light sources, directional light sources and ambient light. We propose to employ a pair of reference spheres as light probes and introduce the difference sphere that we acquire by differencing the intensities of two image regions of the reference spheres. Since the effect by directional light sources and ambient light is eliminated by differencing, the key advantage of considering the difference sphere is that it enables us to estimate near point light sources including their radiance, which has been difficult to achieve in previous efforts where only distant directional light sources were assumed. We also show that analysis of gray level contours on spherical surfaces facilitates separate identification of multiple combined light sources and is well suited to the difference sphere. Once we estimate point light sources with the difference sphere, we update the input image by eliminating their influence and then estimate other remaining light sources, that is, directional light sources and ambient light. We demonstrate the effectiveness of the entire algorithm with experimental results.

22 citations


Journal ArticleDOI
TL;DR: In this paper, a beam of metastable helium atoms is transversely collimated and guided through an intense standing-wave light field, which increases the confinement of the atoms in the standing wave considerably, and makes the alignment of the experimental setup less critical.
Abstract: We have created periodic nanoscale structures in a gold substrate with a lithography process using metastable triplet helium atoms that damage a hydrophobic resist layer on top of the substrate. A beam of metastable helium atoms is transversely collimated and guided through an intense standing-wave light field. Compared to commonly used low-power optical masks, a high-power light field (saturation parameter of 107) increases the confinement of the atoms in the standing wave considerably, and makes the alignment of the experimental setup less critical. Due to the high internal energy of the metastable helium atoms (20 eV), a dose of only one atom per resist molecule is required. With an exposure time of only eight minutes, parallel lines with a separation of 542 nm and a width of 100 nm (one-eleventh of the wavelength used for the optical mask) are created.

Journal ArticleDOI
TL;DR: In this article, a directional coupler consisting of two waveguides induced by two mutually incoherent white-light photovoltaic dark spatial solitons propagating in parallel in close proximity was proposed.
Abstract: We have observed experimentally, for the first time to our knowledge, one-dimensional photovoltaic dark spatial solitons in LiNbO3:Fe crystal by using incoherent white light. We have also fabricated a directional coupler consisting of two waveguides induced by two mutually incoherent white-light photovoltaic dark spatial solitons propagating in parallel in close proximity. It was found that the light field of a probe laser beam launched into one of the two proximate waveguides can be efficiently coupled into the other waveguide because of the presence of evanescent waves. We also studied the dependence of the coupling efficiency on the distance between the two proximate soliton-induced waveguides.

Proceedings ArticleDOI
08 Aug 2004
TL;DR: A Fresnel lens helps the GRIN lens-array to pick up a wider range of a scene by controlling the depth of field, and the PC synthesizes free-viewpoint images from the elemental images captured by the IEEE1394 XGA camera.
Abstract: LIFLET has a Fresnel lens, a GRIN lens-array, an IEEE1394 XGA camera, and a PC. Figure 2 illustrates a diagram of the system. The Fresnel lens helps the GRIN lens-array to pick up a wider range of a scene by controlling the depth of field. The GRIN lens-array produces thousands of elemental images (for instance, 50×40 images of 20×20 pixels). The PC synthesizes free-viewpoint images from the elemental images captured by the IEEE1394 XGA camera.

Journal ArticleDOI
TL;DR: An efficient rendering algorithm is presented that combines ray samples from scams with those from the light field, and the resulting image reconstructions are noticeably improved over those of a pure light field.
Abstract: In this article we present a new variant of the light field representation that supports improved image reconstruction by accommodating sparse correspondence information. This places our representation somewhere between a pure, two-plane parameterized, light field and a lumigraph representation, with its continuous geometric proxy. Our approach factors the rays of a light field into one of two separate classes. All rays consistent with a given correspondence are implicitly represented using a new auxiliary data structure, which we call a surface camera, or scam. The remaining rays of the light field are represented using a standard two-plane parameterized light field. We present an efficient rendering algorithm that combines ray samples from scams with those from the light field. The resulting image reconstructions are noticeably improved over those of a pure light field.

Proceedings ArticleDOI
16 Jun 2004
TL;DR: An algorithm that provides real-time walkthrough for globally illuminated scenes that contain mixtures of ideal diffuse and specular surfaces is described, offering a global illumination solution for real-time walkthrough even on a single processor.
Abstract: This paper describes an algorithm that provides real-time walkthrough for globally illuminated scenes that contain mixtures of ideal diffuse and specular surfaces. A type of light field data structure is used for propagating radiance outward from light emitters through the scene, accounting for any kind of L(S|D)* light path. The light field employed is constructed by choosing a regular point subdivision over a hemisphere, to give a set of directions, and then corresponding to each direction there is a rectangular grid of parallel rays. Each rectangular grid of rays is further subdivided into rectangular tiles, such that each tile references a sequence of 2D images containing colour values corresponding to the outgoing radiances of surfaces intersected by the rays in that tile. This structure is then used for final image rendering. Propagation times can be very long and the memory requirements very high. This algorithm, however, offers a global illumination solution for real-time walkthrough even on a single processor.

Journal ArticleDOI
TL;DR: A hybrid method is presented by which Monte Carlo techniques are combined with an iterative relaxation algorithm to solve the radiative transfer equation in arbitrary one-, two-, or three-dimensional optical environments; the model is capable of providing estimates of the underwater light field needed to expedite inspection of ship hulls and port facilities.
Abstract: A hybrid method is presented by which Monte Carlo (MC) techniques are combined with an iterative relaxation algorithm to solve the radiative transfer equation in arbitrary one-, two-, or three-dimensional optical environments. The optical environments are first divided into contiguous subregions, or elements. MC techniques are employed to determine the optical response function of each type of element. The elements are combined, and relaxation techniques are used to determine simultaneously the radiance field on the boundary and throughout the interior of the modeled environment. One-dimensional results compare well with a standard radiative transfer model. The light field beneath and adjacent to a long barge is modeled in two dimensions and displayed. Ramifications for underwater video imaging are discussed. The hybrid model is currently capable of providing estimates of the underwater light field needed to expedite inspection of ship hulls and port facilities.
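As an illustration of the relaxation stage described above, here is a deliberately reduced 1D two-stream sketch: each element's optical response is collapsed to a scalar per-pass transmittance and reflectance (in the paper these response functions are computed by Monte Carlo), and the interface radiances are swept until consistent. All names and numbers are illustrative, not the paper's implementation:

```python
import numpy as np

def relax_radiance(n_layers, t, r, bottom_albedo, n_iter=200):
    """Toy relaxation solve: a stack of layers, each with per-pass
    transmittance t and reflectance r, unit downwelling light at the
    surface, and a reflecting bottom.  Sweep until the downward (D)
    and upward (U) radiances at the interfaces are mutually consistent."""
    D = np.zeros(n_layers + 1)  # downward radiance at each interface
    U = np.zeros(n_layers + 1)  # upward radiance at each interface
    D[0] = 1.0                  # boundary condition: unit surface input
    for _ in range(n_iter):
        for i in range(n_layers):
            D[i + 1] = t * D[i] + r * U[i + 1]
        U[n_layers] = bottom_albedo * D[n_layers]
        for i in reversed(range(n_layers)):
            U[i] = t * U[i + 1] + r * D[i]
    return D, U

D, U = relax_radiance(4, t=0.8, r=0.05, bottom_albedo=0.5)
print(D)  # monotonically decreasing with depth
```

The same sweep-until-consistent idea extends to 2D and 3D element grids, where each element's response couples many incoming and outgoing directions instead of two.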

Journal ArticleDOI
TL;DR: In this article, the 3D shaping of light fields based on the superposition of coherent non-diffracting modes is proposed and examined; the transverse amplitude profile of the field is composed of non-diffracting spots whose positions can be predetermined and whose spatial shape and size can be controlled.
Abstract: The 3D shaping of light fields based on the superposition of coherent non-diffracting modes is proposed and examined. The transverse amplitude profile of the field is composed of non-diffracting spots whose positions can be predetermined and their spatial shape and size controlled. Due to the self-imaging effect the formed transverse amplitude profile appears at the planes placed periodically along the propagation direction. The transverse and longitudinal localization of the light field is improved if the number of superposed non-diffracting modes is increased. The light then can be confined in the volume element whose transverse and longitudinal dimensions are comparable with the wavelength.

Book ChapterDOI
11 May 2004
TL;DR: A novel 3D appearance model using image-based rendering techniques, which can represent complex lighting conditions, structures, and surfaces and overcomes the limitations of polygon-based appearance models and uses light fields that are acquired in real-time.
Abstract: Statistical shape and texture appearance models are powerful image representations, but previously had been restricted to 2D or 3D shapes with smooth surfaces and Lambertian reflectance. In this paper we present a novel 3D appearance model using image-based rendering techniques, which can represent complex lighting conditions, structures, and surfaces. We construct a light field manifold capturing the multi-view appearance of an object class and extend the direct search algorithm of Cootes and Taylor to match new light fields or 2D images of an object to a point on this manifold. When matching to a 2D image the reconstructed light field can be used to render unseen views of the object. Our technique differs from previous view-based active appearance models in that model coefficients between views are explicitly linked, and that we do not model any pose variation within the shape model at a single view. It overcomes the limitations of polygon-based appearance models and uses light fields that are acquired in real-time.

Patent
25 Aug 2004
TL;DR: In this article, a hybrid method is presented by which Monte Carlo techniques are combined with iterative relaxation techniques to solve the Radiative Transfer Equation in arbitrary one-, two-or three-dimensional optical environments.
Abstract: A hybrid method is presented by which Monte Carlo techniques are combined with iterative relaxation techniques to solve the Radiative Transfer Equation in arbitrary one-, two- or three-dimensional optical environments. The optical environments are first divided into contiguous regions, or elements, with Monte Carlo techniques then being employed to determine the optical response function of each type of element. The elements are combined, and the iterative relaxation techniques are used to determine simultaneously the radiance field on the boundary and throughout the interior of the modeled environment. This hybrid model is capable of providing estimates of the underwater light field needed to expedite inspection of ship hulls and port facilities. It is also capable of providing estimates of the subaerial light field for structured, absorbing or non-absorbing environments such as shadows of mountain ranges within and without absorption spectral bands such as water vapor or CO2 bands.

Patent
08 Nov 2004
TL;DR: In this paper, an aperture is placed within a light beam path, with a specifically designed three-dimensional profile, so as to shape the light beam in a specific manner, and an optical mask is also employed, with varying light attenuation to impart a varying intensity to the light path.
Abstract: Projection of a light field on a semiconductor wafer, the light field having uniform intensity and a predefined area. An aperture is placed within a light beam path, with a specifically designed three-dimensional profile, so as to shape the light beam in a specific manner. When this light beam is transmitted through the appropriate optics, its shape is altered so as to be projected onto the wafer as a circle (or any other desired shape). An optical mask is also employed, with a varying light attenuation to impart a varying intensity to the light path. The aperture shapes the light path, and the optical mask selectively attenuates it, so that the end result is a uniformly intense light field that illuminates only a specific predefined area of the wafer. Wafers can thus be illuminated while avoiding undesirable areas such as wafer edges, thus preventing over- or under-illumination.

Journal ArticleDOI
TL;DR: In this article, a two-level atom interacts with a pair of Laguerre-Gaussian beams with opposite helicity, which leads to an efficient exchange of angular momentum between the light field and the center-of-mass motion of the atom.
Abstract: When a single two-level atom interacts with a pair of Laguerre-Gaussian beams with opposite helicity, this leads to an efficient exchange of angular momentum between the light field and the center-of-mass motion of the atom. When the radial motion is trapped by an additional potential, the wave function of a single localized atom can be split into components that rotate in opposite direction. This suggests a scheme for atom interferometry without mirror pulses.

Journal ArticleDOI
TL;DR: In this paper, the kinetics of atoms with degenerate energy levels in the field produced by elliptically polarized waves are considered in the semiclassical approximation and analytical expressions for the force acting on an atom and for the diffusion coefficient in the momentum space are derived for the optical transition Jg=1/2→Je = 1/2 in the slow atom approximation.
Abstract: The kinetics of atoms with degenerate energy levels in the field produced by elliptically polarized waves is considered in the semiclassical approximation. Analytic expressions for the force acting on an atom and for the diffusion coefficient in the momentum space are derived for the optical transition Jg = 1/2 → Je = 1/2 in the slow atom approximation. These expressions are valid for an arbitrary one-dimensional configuration of the light field and for an arbitrary intensity. The peculiarities of the atomic kinetics are investigated in detail; these peculiarities are associated with ellipticity of light waves and are absent in particular configurations formed by circularly or linearly polarized waves, which were considered earlier.

Proceedings ArticleDOI
24 Oct 2004
TL;DR: A view-dependent rate-distortion measure is presented that allows random access and compression efficiency to be considered simultaneously; theoretical results from the model are compared with experimental results from a DCT-based coder, and the two qualitatively agree.
Abstract: Image-based rendering data sets, such as light fields, require efficient compression due to their large data size, but also easy random access when rendering from the data set. Efficient compression usually depends upon prediction between images, which creates dependencies between them, conflicting with the requirement of having easy random access. Existing light field coders concentrate either on compression efficiency, or use ad hoc methods to design prediction that balances random access and compression efficiency requirements. In this paper, we study this joint problem of compression efficiency and random access. We propose a model for light field image generation, light field image coding and rendering novel views from these light field images. We present a view-dependent rate-distortion measure that allows us to consider random access and compression efficiency simultaneously. We compare the theoretical results from the model with the experimental results from our DCT-based coder and show that they qualitatively give similar results. Finally, we suggest how, with this model, we can better optimize the prediction dependency structure in our coder for random access and compression efficiency performance.

Proceedings Article
01 Jan 2004
TL;DR: A method for recovering gaps in light fields of scenes that contain significant occluders is described; a naive technique fills in parts of the gaps from the information available in the multiple images, serving as the initial estimate for completion.
Abstract: A light field is a 4D function representing radiance as a function of ray position and direction in 3D space. In this paper we describe a method for recovering gaps in light fields of scenes that contain significant occluders. In these situations, although a large fraction of the scene may be blocked in any one view, most scene points are visible in at least some views. As a consequence, although too much information is missing to employ 2D completion methods that operate within a single view, it may be possible to recover the lost information by completion in 4D, the full dimensionality of the light field. The proposed light field completion method has three main steps: registration, initial estimation, and high dimensional texture synthesis and/or inpainting. At the registration stage, the set of images are shifted and re-projected so that the corresponding pixels from different images are aligned in the reconstructed light field. Following this, the estimation step uses a naive technique to fill in parts of the gaps using the available information from the multiple images. This serves as the initial condition for the next and last step, where the missing information is recovered via high dimensional texture synthesis and/or inpainting. These two steps of initial condition and completion are iterated. The algorithm is illustrated with real examples.
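The initial-estimation step described above (fill parts of the gaps using information from the other views) can be sketched as a cross-view average over registered views. The function name and array layout are illustrative, and the registration and synthesis/inpainting stages are omitted:

```python
import numpy as np

def naive_fill(views, masks):
    """Fill gap pixels in each registered view with the average of the
    same pixel in the views where it is visible.  views: (N, H, W)
    stack of aligned images; masks: True where a pixel is valid."""
    total = np.where(masks, views, 0.0).sum(axis=0)
    count = masks.sum(axis=0)
    avg = total / np.maximum(count, 1)          # cross-view average
    filled = np.where(masks, views, avg[None])  # keep valid pixels as-is
    return filled, count > 0                    # False where no view saw it

# Two aligned 2x2 views: view 0 is mostly occluded, view 1 is complete
views = np.array([[[5., 0.], [0., 0.]],
                  [[3., 3.], [3., 3.]]])
masks = np.array([[[True, False], [False, False]],
                  [[True, True], [True, True]]])
filled, known = naive_fill(views, masks)
print(filled[0])  # gaps in view 0 filled from view 1
```

Pixels invisible in every view remain flagged False in `known`; those are the ones left to the 4D texture-synthesis/inpainting stage.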

01 Jan 2004
TL;DR: Experimental results show that rate-distortion streaming performance with multiple representations is superior to that obtained using independent encoding of images; prediction-based encoding, by contrast, restricts random access and can increase the rate needed to transmit the required set of images.
Abstract: Light field rendering has been proposed as a way of enabling interactive photorealistic viewing of objects and scenes without the complexity of traditional computer graphics rendering techniques. Light field rendering, however, relies on a large amount of image data to achieve photorealistic quality and freedom in viewing directions and positions. In order to reduce the size of the data set, efficient compression is used. We focus on one class of compression algorithms that uses closed-loop prediction of the light field images. For remote viewing over a network, compressed light field data sets can be streamed to an interacting user. A rate-distortion optimized packet scheduling framework has been proposed for interactive streaming of compressed light fields. Experiments show that for several streaming scenarios and data sets, better streaming performance can be obtained by using independent encoding of the images instead of prediction. One reason for this is that using prediction restricts random access and can possibly increase the rate for transmitting the required set of images. Recently, we have proposed a light field coding scheme that uses multiple representations to provide similar random access capabilities as independent encoding, but with better compression efficiency. In this paper, we extend the rate-distortion optimized streaming framework designed for conventionally encoded light fields to this new multiple representations encoding scheme. Experimental results show that the rate-distortion streaming performance with multiple representations is superior to that using independent encoding of images.

Book ChapterDOI
26 Oct 2004
TL;DR: A simple method is described for estimating the surface radiance function from single images of smooth surfaces made of materials whose reflectance function is isotropic and monotonic, using the Gauss map as an implicit mapping between the surface and a unit sphere.
Abstract: This paper describes a simple method for estimating the surface radiance function from single images of smooth surfaces made of materials whose reflectance function is isotropic and monotonic. The method makes use of the Gauss map, an implicit mapping between the surface and a unit sphere. By assuming the material brightness is monotonic with respect to the angle between the illuminant direction and the surface normal, we show how the radiance function can be represented by a polar function on the unit sphere. Under conditions in which the light source direction and the viewer direction are identical, we show how recovery of the radiance function may be posed as estimating a tabular representation of this polar function. A simple differential-geometry analysis shows how the tabular representation of the radiance function can be obtained from the cumulative distribution of image gradients. We illustrate the utility of the tabular representation of the radiance function for material classification.
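The tabulation idea can be sketched loosely as follows. This is a sketch under assumed details, not the paper's differential-geometry construction: if brightness is monotonically decreasing in the angle between the illuminant and the surface normal, the cumulative distribution of image brightness can be inverted to pair each sampled angle with a brightness value, yielding a tabular radiance curve. The uniform angle sampling is an assumption made purely for illustration.

```python
import numpy as np

def tabular_radiance(image, n_bins=64):
    """Build a tabular brightness-versus-angle curve from one image.

    Sorts the pixel brightnesses, forms their cumulative distribution,
    and inverts it at uniformly sampled angle fractions.
    """
    values = np.sort(image.ravel())                     # ascending brightness
    cdf = np.arange(1, values.size + 1) / values.size   # cumulative fraction
    fractions = np.linspace(0.0, 1.0, n_bins)
    table = np.interp(fractions, cdf, values)[::-1]     # brightest at angle 0
    angles = np.linspace(0.0, np.pi / 2, n_bins)
    return angles, table

img = np.random.default_rng(1).random((32, 32))
angles, table = tabular_radiance(img)
```

By construction the resulting table is non-increasing in angle, matching the monotonicity assumption; the paper's own estimator works from the cumulative distribution of image gradients rather than raw brightness.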

Journal ArticleDOI
TL;DR: In this article, experimental results on pattern selection in a nonlinear optical system based on a single-mirror feedback scheme are reported; the pattern selection depends crucially on the polarization ellipticity of the input beam.
Abstract: The paper reports experimental results on pattern selection in a nonlinear optical system based on a single-mirror feedback scheme. Zeeman pumping in sodium vapor is utilized as the optical nonlinearity. Above a certain power threshold, the unstructured state with defined polarization becomes simultaneously unstable against a pattern-forming and a polarization instability. In the resulting patterns, the right- and left-hand circular polarization components of the light field tend to separate in space. The pattern selection depends crucially on the polarization ellipticity of the input beam. Transitions between positive and negative hexagons via stripes or squares are observed. They are determined by the symmetry of the interaction between the spin of the light field and the atomic spin, and are considered experimental demonstrations of general principles of pattern formation.

Patent
02 Dec 2004
TL;DR: In this article, an electronic sensor is positioned in the field of projection of an X-ray source, and the electronic sensor measures the deviation between a visible light field and an Xray field.
Abstract: Systems, methods and apparatus are provided through which, in some embodiments, an electronic sensor is positioned in the field of projection of an X-ray source, and the electronic sensor measures the deviation between a visible light field and an X-ray field. In some embodiments, the deviation is scaled in reference to the position of the electronic sensor between an X-ray receptor and the X-ray source.

Journal ArticleDOI
TL;DR: In this paper, a novel experimental technique combining near-field optics and femtosecond pump-probe spectroscopy is demonstrated to analyse the coherent nonlinear optical response of single quantum dots on ultrafast time scales.
Abstract: A novel experimental technique, combining near-field optics and femtosecond pump–probe spectroscopy, is demonstrated to analyse the coherent nonlinear optical response of single quantum dots on ultrafast time scales. The technique is used to study the effects of strong non-resonant light fields on the optical spectra of single excitons in interface quantum dots. Transient reflectivity spectra show dispersive line shapes reflecting the light-induced shift of the quantum dot resonance. The nonlinear spectra are governed by the phase shift of the coherent quantum dot polarization acquired during the interaction with the light field. The phase shift is measured and ultrafast control of the quantum dot polarization is demonstrated.