
Showing papers by "Bahram Javidi published in 2008"


Journal ArticleDOI
TL;DR: This paper presents three-dimensional object reconstruction using photon-counted elemental images acquired by a passive 3D Integral Imaging (II) system; a maximum likelihood (ML) estimator is derived to reconstruct the irradiance of the 3D scene pixels.
Abstract: In this paper, we present three dimensional (3D) object reconstruction using photon-counted elemental images acquired by a passive 3D Integral Imaging (II) system. The maximum likelihood (ML) estimator is derived to reconstruct the irradiance of the 3D scene pixels, and the reliability of the estimator is described by confidence intervals. For applications in photon-scarce environments, our proposed technique provides 3D reconstruction for better visualization as well as a significant reduction in the computational burden and the bandwidth required for transmission of integral images. The performance of the reconstruction is illustrated qualitatively and compared quantitatively with the Peak Signal-to-Noise Ratio (PSNR) criterion.
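The following minimal sketch, offered for illustration only, shows the kind of computation the abstract describes: elemental-image pixels are modeled as Poisson photon counts whose mean is proportional to the normalized scene irradiance, the irradiance is estimated by maximum likelihood (the scaled average of the counts), and the result is scored with PSNR. The function names, image sizes, and photon budget are assumptions, not values from the paper.

    import numpy as np

    rng = np.random.default_rng(0)

    def photon_count_images(irradiance, n_images, mean_photons):
        """Simulate photon-counted elemental images by Poisson sampling."""
        p = irradiance / irradiance.sum()      # normalized irradiance
        lam = mean_photons * p                 # Poisson rate per pixel
        return rng.poisson(lam, size=(n_images,) + irradiance.shape)

    def ml_irradiance(counts, mean_photons):
        """ML estimate of the normalized irradiance: mean count / expected photons."""
        return counts.mean(axis=0) / mean_photons

    def psnr(reference, estimate):
        mse = np.mean((reference - estimate) ** 2)
        return 10 * np.log10(reference.max() ** 2 / mse)

    scene = rng.random((64, 64))               # stand-in for one 3D scene slice
    counts = photon_count_images(scene, n_images=100, mean_photons=5e4)
    estimate = ml_irradiance(counts, mean_photons=5e4)
    print("PSNR (dB):", psnr(scene / scene.sum(), estimate))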

127 citations


Journal ArticleDOI
TL;DR: An inverse Fresnel transformation applied to the synthetic-aperture hologram provides an enhanced-resolution reconstruction of single-exposure on-line (SEOL) digital holograms, and it is shown that recognition capacity for high-frequency details is increased.
Abstract: We present a system for reconstructing single-exposure on-line (SEOL) digital holograms with improved resolution using a synthetic aperture. Several recordings are made in order to compose the synthetic aperture, shifting the camera within the hologram plane. After processing the synthetic hologram, an inverse Fresnel transformation provides an enhanced resolution reconstruction. We show that recognition capacity for high frequency details is increased. Experimental results with a test target and with a microscopic biological sample are presented. Both visualization and correlation results are reported.
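As a hedged illustration of the reconstruction step only, the sketch below assembles a synthetic-aperture hologram by tiling sub-holograms recorded at shifted camera positions and applies a single-FFT inverse Fresnel transform. The wavelength, pixel pitch, propagation distance, and tile contents are arbitrary placeholders rather than the paper's experimental values.

    import numpy as np

    def fresnel_reconstruct(hologram, wavelength, pixel_pitch, distance):
        """Single-FFT Fresnel back-propagation of a complex hologram."""
        ny, nx = hologram.shape
        y = (np.arange(ny) - ny / 2) * pixel_pitch
        x = (np.arange(nx) - nx / 2) * pixel_pitch
        X, Y = np.meshgrid(x, y)
        k = 2 * np.pi / wavelength
        chirp = np.exp(-1j * k / (2 * distance) * (X ** 2 + Y ** 2))
        return np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(hologram * chirp)))

    # Assemble a synthetic aperture from a 2x2 grid of (placeholder) sub-holograms
    # recorded while shifting the camera within the hologram plane.
    tiles = [[np.ones((256, 256), complex) for _ in range(2)] for _ in range(2)]
    synthetic = np.block(tiles)
    field = fresnel_reconstruct(synthetic, wavelength=633e-9,
                                pixel_pitch=9e-6, distance=0.2)
    print(np.abs(field).shape)   # (512, 512) reconstruction over the larger aperture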

108 citations


Journal ArticleDOI
TL;DR: This is the first report on using PTILI computational holographic microscopy for identification of biological microorganisms; experiments indicate that the proposed system can be useful for this purpose.
Abstract: We present a new method for three-dimensional (3-D) visualization and identification of biological microorganisms using partially temporal incoherent light in-line (PTILI) computational holographic imaging and multivariate statistical methods. For 3-D data acquisition of biological microorganisms, band-pass filtered white light is used to illuminate a biological sample. The transversely and longitudinally diffracted pattern of the biological sample is magnified by a microscope objective (MO) and is optically recorded with an image sensor array interfaced with a computer. Three-dimensional reconstruction of the biological sample from the diffraction pattern is accomplished by using the computational Fresnel propagation method. Principal components analysis and nonparametric inference algorithms are applied to the 3-D complex-amplitude data of the biological sample for identification purposes. Experiments indicate that the proposed system can be useful for identifying biological microorganisms. To the best of our knowledge, this is the first report on using PTILI computational holographic microscopy for identification of biological microorganisms.
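The short sketch below illustrates only the dimensionality-reduction step named in the abstract: principal components analysis applied to feature vectors built from reconstructed complex-amplitude data. The synthetic random fields stand in for the Fresnel-propagated reconstructions of microorganisms; sizes and the choice of magnitude/phase features are assumptions.

    import numpy as np

    rng = np.random.default_rng(1)

    def pca_project(samples, n_components):
        """Project (n_samples, n_features) data onto its leading principal axes."""
        centered = samples - samples.mean(axis=0)
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        return centered @ vt[:n_components].T

    # Stand-in for complex reconstructions: use magnitude and phase as features.
    fields = rng.standard_normal((20, 32 * 32)) + 1j * rng.standard_normal((20, 32 * 32))
    features = np.hstack([np.abs(fields), np.angle(fields)])
    scores = pca_project(features, n_components=3)
    print(scores.shape)   # (20, 3) low-dimensional descriptors for later inference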

57 citations


Journal ArticleDOI
TL;DR: This paper presents a generalized framework for 3D II with arbitrary pickup surface geometry and randomly distributed sensor configuration, and is the first report on 3D imaging using randomly distributed sensors.
Abstract: As a promising three dimensional passive imaging modality, Integral Imaging (II) has been investigated widely within the research community. In virtually all such investigations, there is an implicit assumption that the collection of elemental images lies on a simple geometric surface (e.g., flat or concave), also known as the pickup surface. In this paper, we present a generalized framework for 3D II with arbitrary pickup surface geometry and a randomly distributed sensor configuration. In particular, we study the case of Synthetic Aperture Integral Imaging (SAII) with random camera locations in space, where all cameras have parallel optical axes but different distances from the 3D scene. We assume that the sensors are randomly distributed in the 3D volume of the pickup space. For 3D reconstruction, a finite number of sensors with known coordinates are randomly selected from within this volume. The mathematical framework for 3D scene reconstruction is developed based on an affine transform representation of imaging under the geometrical optics regime. We demonstrate the feasibility of the proposed methods with experimental results. To the best of our knowledge, this is the first report on 3D imaging using randomly distributed sensors.
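A simplified, hedged sketch of the computational reconstruction is given below, restricted to cameras lying in one pickup plane: each elemental image is shifted by the parallax predicted for the chosen depth and the shifted images are averaged. The paper's affine framework additionally handles arbitrary (random) camera positions and distances, which this toy version does not; the focal length, pixel pitch, and camera grid are illustrative.

    import numpy as np

    def reconstruct_plane(images, cam_xy, z, focal, pitch):
        """Shift-and-average back-projection onto the plane at depth z.

        images : (N, H, W) elemental images
        cam_xy : (N, 2) camera positions in the pickup plane [m]
        z      : reconstruction depth [m];  focal : lens focal length [m]
        pitch  : sensor pixel pitch [m]
        """
        n = images.shape[0]
        acc = np.zeros(images.shape[1:])
        for img, (cx, cy) in zip(images, cam_xy):
            # parallax (in pixels) of a point at depth z seen from camera offset (cx, cy)
            dx = int(round(cx * focal / (z * pitch)))
            dy = int(round(cy * focal / (z * pitch)))
            acc += np.roll(img, (-dy, -dx), axis=(0, 1))
        return acc / n

    # Example: 9 synthetic elemental images from a 3x3 synthetic-aperture grid.
    rng = np.random.default_rng(2)
    imgs = rng.random((9, 128, 128))
    xy = np.array([(i * 1e-3, j * 1e-3) for i in range(3) for j in range(3)])
    plane = reconstruct_plane(imgs, xy, z=0.5, focal=50e-3, pitch=10e-6)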

52 citations


Journal ArticleDOI
TL;DR: This work proposes a method to three-dimensionally visualize objects in a scattering medium using integral imaging, based on a particular use of the interference between the ballistic photons that pass through the scattering medium and the photons scattered by the medium.
Abstract: In this paper, we propose a method to three-dimensionally visualize objects in a scattering medium using integral imaging. Our approach is based on a particular use of the interference phenomenon between the ballistic photons getting through the scattering medium and the scattered photons being scattered by the medium. For three-dimensional (3D) sensing of the scattered objects, the synthetic aperture integral imaging system under coherent illumination records the scattered elemental images of the objects. Then, the computational geometrical ray propagation algorithm is applied to the scattered elemental images in order to eliminate the interference patterns between scattered and object beams. The original 3D information of the scattered objects is recovered by multiple imaging channels, each with a unique perspective of the object. We present both simulation and experimental results with virtual and real objects to demonstrate the proposed concepts.

50 citations


Journal ArticleDOI
TL;DR: This report analyzes the extension of depth of field using both amplitude and phase modulation of the pupil function to establish the range of applicability of each method based on the range of spatial frequencies of interest in the imaging system.
Abstract: We analyze the extension of depth of field using both amplitude and phase modulation of the pupil function. In particular, we discuss the advantages and disadvantages of each approach and establish the range of applicability of each method based on the range of spatial frequencies of interest in the imaging system. To the best of our knowledge, this is the first such report on the range of applicability of amplitude and phase modulation to extend the depth of field.
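To make the comparison concrete, the hedged numerical sketch below evaluates the defocused MTF of a clear pupil modified either by an annular amplitude mask or by a cubic phase mask, the two families of pupil modulation the paper analyzes. Pupil sampling, defocus strength, the obscuration ratio, and the cubic-phase coefficient are arbitrary illustrative choices.

    import numpy as np

    N = 256
    u = np.linspace(-1, 1, N)
    X, Y = np.meshgrid(u, u)
    R2 = X ** 2 + Y ** 2
    aperture = (R2 <= 1.0).astype(float)

    def defocused_mtf(pupil, w20_waves):
        """Normalized MTF for a pupil carrying w20 waves of defocus."""
        defocus = np.exp(1j * 2 * np.pi * w20_waves * R2)
        psf = np.abs(np.fft.fftshift(np.fft.fft2(pupil * defocus * aperture))) ** 2
        otf = np.fft.fft2(psf)
        return np.abs(otf) / np.abs(otf).max()

    amplitude_mask = ((R2 <= 1.0) & (R2 >= 0.7)).astype(float)   # annular amplitude mask
    cubic_phase = np.exp(1j * 20 * (X ** 3 + Y ** 3))            # cubic phase mask

    for w20 in (0.0, 2.0):
        m_amp = defocused_mtf(amplitude_mask, w20)
        m_phs = defocused_mtf(cubic_phase, w20)
        print(f"defocus {w20} waves: mid-frequency MTF "
              f"amplitude={m_amp[0, N // 8]:.3f}, phase={m_phs[0, N // 8]:.3f}")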

50 citations


Proceedings ArticleDOI
28 May 2008
TL;DR: An overview of recent advances in three-dimensional (3D) sensing, imaging, and display is presented, discussing both passive sensing integral imaging and active sensing computational holographic imaging for 3D visualization, display, and image recognition.
Abstract: This keynote address will present an overview of recent advances in Three-dimensional (3D) sensing, imaging and display. We shall discuss both passive sensing integral imaging and active sensing computational holographic imaging for 3D visualization, display, and image recognition. Mathematical analysis, computer simulations, and optical experimental results will be presented. There are numerous applications of these technologies including medical 3D imaging, 3D visualization, 3D identification and inspection, 3D television, 3D video, 3D multimedia, interactive communication, education, entertainment, and commerce.

37 citations


Journal ArticleDOI
TL;DR: This work investigates the necessary condition on the object size and spatial bandwidth for complete 3D microscopic imaging with phase-shifting digital holography with various common arrangements.
Abstract: Microscopy by holographic means is attractive because it permits true three-dimensional (3D) visualization and 3D display of the objects. We investigate the necessary condition on the object size and spatial bandwidth for complete 3D microscopic imaging with phase-shifting digital holography with various common arrangements. The cases for which a Fresnel holographic arrangement is sufficient and those for which object magnification is necessary are defined. Limitations set by digital sensors are analyzed in the Wigner domain. The trade-offs between the various holographic arrangements in terms of conditions on the object size and bandwidth, recording conditions required for complete representation, and complexity are discussed.
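As a minimal, self-contained illustration of one ingredient of the arrangements the paper compares, the sketch below simulates four-step phase-shifting holography: a synthetic object field is interfered with a unit-amplitude reference shifted by 0, pi/2, pi, and 3pi/2, and the complex field is recovered from the four intensity frames. All values are simulated assumptions, not the paper's recordings.

    import numpy as np

    rng = np.random.default_rng(3)
    obj = rng.random((128, 128)) * np.exp(1j * 2 * np.pi * rng.random((128, 128)))
    ref = np.ones_like(obj)                        # unit-amplitude reference beam

    shifts = [0, np.pi / 2, np.pi, 3 * np.pi / 2]
    frames = [np.abs(obj + ref * np.exp(1j * d)) ** 2 for d in shifts]

    # Standard four-step demodulation; recovers obj exactly when ref = 1.
    recovered = ((frames[0] - frames[2]) + 1j * (frames[1] - frames[3])) / 4
    print("max recovery error:", np.max(np.abs(recovered - obj)))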

37 citations


Journal ArticleDOI
TL;DR: This work uses a summation of absolute difference (SAD) algorithm between pixels of consecutive frames of a moving object for 3D tracking of occluded objects using three-dimensional (3D) integral imaging (II).
Abstract: We present experiments to illustrate tracking of occluded objects using three-dimensional (3D) integral imaging (II). Tracking of heavily occluded objects by conventional two-dimensional (2D) image processing may be difficult owing to the superposition of occlusion noise and object details. The effects of occlusion are remedied by 3D computational II reconstruction. We use a summation of absolute difference (SAD) algorithm between pixels of consecutive frames of a moving object for 3D tracking. The SAD algorithm is not robust for 2D images of occluded objects; 3D computational reconstruction of the scene allows implementation of the SAD algorithm by reducing occlusion effects. Experimental results demonstrate 3D tracking of occluded objects. To the best of our knowledge, this is the first report on 3D tracking of objects using II.
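The core matching step is easy to state in code. The hedged sketch below slides a template taken from the previous (reconstructed) frame over a search window in the current frame and keeps the displacement with the smallest sum of absolute differences; the window size, search range, and synthetic frames are illustrative assumptions.

    import numpy as np

    def sad_track(prev_frame, cur_frame, top, left, h, w, search=8):
        """Return the (row, col) of the best SAD match in cur_frame for the
        template prev_frame[top:top+h, left:left+w]."""
        template = prev_frame[top:top + h, left:left + w]
        best, best_pos = np.inf, (top, left)
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                r, c = top + dy, left + dx
                if r < 0 or c < 0 or r + h > cur_frame.shape[0] or c + w > cur_frame.shape[1]:
                    continue
                sad = np.abs(cur_frame[r:r + h, c:c + w] - template).sum()
                if sad < best:
                    best, best_pos = sad, (r, c)
        return best_pos

    rng = np.random.default_rng(4)
    frame0 = rng.random((100, 100))
    frame1 = np.roll(frame0, (3, -2), axis=(0, 1))          # object moved by (3, -2)
    print(sad_track(frame0, frame1, top=40, left=40, h=16, w=16))   # -> (43, 38)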

36 citations


Journal ArticleDOI
TL;DR: The viewing zone-forming geometry of multiview imaging systems and the lens image-forming principle are applied to prove that the depth sensing mechanism of integral photography relies on both parallaxes.
Abstract: In this paper, the viewing zone-forming geometry of multiview imaging systems and the lens image-forming principle are applied to prove that the depth sensing mechanism of integral photography (IP) relies on both parallaxes. The proof is based on the fact that making the cell size of each elemental image the same as the microlens pitch does not change the geometry, because there is no change in the field-of-view of each microlens. The total number of different view images perceived in the viewing zone of IP, the compositions of these images, and the conditions for increasing their number are also identified.

33 citations


Journal ArticleDOI
TL;DR: The effects of beam propagation in shallow water on the traditional SAII system are studied and computational reconstructions of 3D scenes are presented, resulting in the first report on underwater multi-view 3D imaging.
Abstract: In this paper, we propose the use of multi-view three-dimensional (3D) imaging techniques, such as synthetic aperture integral imaging (SAII), for underwater applications. We analyze SAII systems for reconstructing 3D objects submerged in water. The effects of beam propagation in shallow water on the traditional SAII system are studied, and computational reconstructions of 3D scenes are presented. We present experiments with SAII to reconstruct underwater 3D objects placed behind heavy occlusion. These systems could benefit deployment in unmanned underwater vehicles. To the best of our knowledge, this is the first report on underwater multi-view 3D imaging.

Patent
10 Dec 2008
TL;DR: A method for three-dimensional reconstruction of a 3D scene and target object recognition may include acquiring a plurality of elemental images through a microlens array, generating a reconstructed display plane based on the plurality of elemental images using 3D volumetric computational integral imaging, and recognizing the target object in the reconstructed display plane by using an image recognition or classification algorithm.
Abstract: A method for three-dimensional reconstruction of a three-dimensional scene and target object recognition may include acquiring a plurality of elemental images of a three-dimensional scene through a microlens array; generating a reconstructed display plane based on the plurality of elemental images using three-dimensional volumetric computational integral imaging; and recognizing the target object in the reconstructed display plane by using an image recognition or classification algorithm.

Patent
15 Oct 2008
TL;DR: A three-dimensional imaging apparatus for imaging a three-dimensional object includes a microlens array, a sensor device, and a telecentric relay system positioned between the microlens array and the sensor device.
Abstract: A three-dimensional imaging apparatus for imaging a three-dimensional object may include a microlens array, a sensor device, and a telecentric relay system positioned between the microlens array and the sensor device. A telecentric relay system may include a field lens and a macro objective that may include a macro lens and an aperture stop. A method of imaging a three-dimensional object may include providing a three-dimensional imaging apparatus including a microlens array, a sensor device, and a telecentric relay system positioned between the microlens array and the sensor device; and generating a plurality of elemental images on the sensor device, wherein each of the plurality of elemental images has a different perspective of the three-dimensional object.

Proceedings ArticleDOI
24 Mar 2008
TL;DR: An optical implementation of compressed sensing is presented; with this method, a compressed version of an object's image is captured directly, and the compression is accomplished by optical means with a single exposure.
Abstract: The common approach in digital imaging today is to capture as many pixels as possible and later to compress the captured image by digital means. The recently introduced theory of compressed sensing provides the mathematical foundation necessary to change the order of these operations, that is, to compress the information before it is captured. In this paper we present an optical implementation of compressed sensing. With this method a compressed version of an object's image is captured directly. The compression is accomplished by optical means with a single exposure. One implication of this imaging approach is that the effective space-bandwidth-product of the imaging system is larger than that of conventional imaging systems. This implies, for example, that more object pixels may be reconstructed and visualized than the number of pixels of the image sensor.
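The small numerical model below, offered only as a sketch, mirrors the idea in the abstract: the measurement y = Phi x would be taken optically in a single exposure, while here both the sensing and a basic iterative soft-thresholding (ISTA) recovery are simulated digitally on a synthetic sparse object. The signal length, measurement count, sparsity, and regularization weight are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(5)
    n, m, k = 256, 96, 8                   # object pixels, measurements, sparsity
    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

    phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
    y = phi @ x_true                                 # "single-exposure" measurement

    def ista(y, phi, lam=0.01, iters=500):
        """Iterative soft-thresholding for min ||y - phi x||^2 / 2 + lam ||x||_1."""
        x = np.zeros(phi.shape[1])
        step = 1.0 / np.linalg.norm(phi, 2) ** 2     # 1 / Lipschitz constant of the gradient
        for _ in range(iters):
            grad = phi.T @ (phi @ x - y)
            x = x - step * grad
            x = np.sign(x) * np.maximum(np.abs(x) - lam * step, 0.0)
        return x

    x_hat = ista(y, phi)
    print("reconstruction error:", np.linalg.norm(x_hat - x_true))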

Proceedings ArticleDOI
03 Apr 2008
TL;DR: An overview of 3D sensing approaches based on passive sensing using commercially available detector technology is presented; it is likely that 3D passive imaging will be preferable to active 3D imaging for small, inexpensive UAVs.
Abstract: Three dimensional imaging is a powerful tool for object detection, identification, and classification. 3D imaging allows removal of partial obscurations in front of the imaged object. Traditional 3D image sensing has been Laser Radar (LADAR) based. Active imaging has benefits; however, its disadvantages are costs, detector array complexity, power, weight, and size. In this keynote address paper, we present an overview of 3D sensing approaches based on passive sensing using commercially available detector technology. 3D passive sensing will provide many benefits, including advantages at shorter ranges. For small, inexpensive UAVs, it is likely that 3D passive imaging will be preferable to active 3D imaging.

Journal ArticleDOI
TL;DR: This paper is the first to report using ICA in three-dimensional imaging technology, using a kurtosis maximization-based algorithm to reduce data dimensionality and ICA to recognize the three-dimensional objects.
Abstract: We present computational holographic three-dimensional imaging and automated object recognition based on independent component analysis (ICA). Three-dimensional sensing of the scene is performed by...
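Since the abstract is truncated here, the sketch below only illustrates the named statistical tool in isolation: ICA via a kurtosis-maximizing, FastICA-style fixed-point update with the cubic nonlinearity, applied to synthetic linear mixtures rather than to holographic features from the paper. The data, mixing matrix, and iteration counts are assumptions.

    import numpy as np

    rng = np.random.default_rng(6)

    def whiten(x):
        x = x - x.mean(axis=1, keepdims=True)
        d, e = np.linalg.eigh(np.cov(x))
        return e @ np.diag(1.0 / np.sqrt(d)) @ e.T @ x

    def fastica_kurtosis(x, n_components, iters=200):
        """Deflationary one-unit FastICA using the kurtosis (u**3) contrast."""
        z = whiten(x)
        ws = []
        for _ in range(n_components):
            w = rng.standard_normal(z.shape[0])
            for _ in range(iters):
                w_new = (z * (w @ z) ** 3).mean(axis=1) - 3 * w
                for wj in ws:                      # decorrelate from components already found
                    w_new -= (w_new @ wj) * wj
                w = w_new / np.linalg.norm(w_new)
            ws.append(w)
        return np.array(ws) @ z                    # estimated independent components

    # Two super-Gaussian sources, mixed linearly, then separated.
    s = rng.laplace(size=(2, 2000))
    mixed = np.array([[0.7, 0.3], [0.4, 0.6]]) @ s
    recovered = fastica_kurtosis(mixed, n_components=2)
    print(recovered.shape)   # (2, 2000)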

Proceedings ArticleDOI
03 Apr 2008
TL;DR: In this article, the amplitude and phase modulation of the pupil function are analyzed for extension of the depth of field; the advantages and disadvantages of each approach are discussed, and the range of applicability of each method is established based on the spatial frequencies of interest in the imaging system.
Abstract: We analyze the extension of depth of field using both amplitude and phase modulation of the pupil function. In particular, we discuss the advantages and disadvantages of each approach and establish the range of applicability of each method based on the range of spatial frequencies of interest in the imaging system. Our result serves as a starting point for choosing the right form of modulation for extension of depth of field.

Journal ArticleDOI
TL;DR: It is shown that a low number of photons is sufficient to classify occluded objects with the proposed photon-counting linear discriminant analysis combined with computational integral imaging.
Abstract: This paper discusses a photon-counting linear discriminant analysis (LDA) with computational integral imaging (II). The computational II method reconstructs three-dimensional (3D) objects on reconstruction planes located at arbitrary depth levels. Maximum likelihood estimation (MLE) can be used to estimate the Poisson parameters of photon counts in the reconstruction space. The photon-counting LDA combined with the computational II method is developed in order to classify partially occluded objects with photon-limited images. Unknown targets are classified with the estimated Poisson parameters, while reconstructed irradiance images are used for training. It is shown that a low number of photons is sufficient to classify occluded objects with the proposed method.
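As a hedged, simplified stand-in for the classification step (not the paper's exact discriminant), the sketch below estimates Poisson rate parameters by maximum likelihood from training irradiance images (rates proportional to the normalized irradiance) and assigns a photon-limited test image to the class with the highest Poisson log-likelihood. The class images, photon budget, and sizes are synthetic assumptions.

    import numpy as np
    from scipy.special import gammaln

    rng = np.random.default_rng(7)

    def poisson_rates(irradiance, n_photons):
        """ML Poisson rates for a normalized irradiance image."""
        return n_photons * irradiance / irradiance.sum()

    def log_likelihood(counts, rates):
        rates = np.clip(rates, 1e-12, None)
        return np.sum(counts * np.log(rates) - rates - gammaln(counts + 1))

    n_photons = 2000
    classes = {"target": rng.random((32, 32)), "clutter": rng.random((32, 32))}
    rates = {name: poisson_rates(img, n_photons) for name, img in classes.items()}

    test_counts = rng.poisson(rates["target"])       # photon-limited observation
    scores = {name: log_likelihood(test_counts, lam) for name, lam in rates.items()}
    print(max(scores, key=scores.get))               # expected: "target"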

Proceedings ArticleDOI
16 Mar 2008
TL;DR: The potential of optical techniques in security tasks is reviewed, and it is proposed to combine some of them in the design of new optical ID tags for automatic vehicle identification and authentication.
Abstract: We review the potential of optical techniques in security tasks and propose to combine some of them in the design of new optical ID tags for automatic vehicle identification and authentication. More specifically, we propose to combine visible and near infrared imaging, optical decryption, distortion-invariant ID tags, optoelectronic devices, a coherent image processor, optical correlation, and multiple authenticators. A variety of images and signatures, including biometric and random sequences, can be combined in an optical ID tag for multifactor identification. Encryption of the information codified in the ID tag increases security and deters unauthorized usage of optical tags. A novel NIR ID tag is designed and built using commonly available materials. The ID tag content cannot be visually perceived with the naked eye; it cannot be copied, scanned, or captured by any conventional device. The identification process encompasses several steps, such as detection, information decoding, and verification, which are all detailed in this work. The design of rotation- and scale-invariant ID tags is taken into account to achieve correct authentication even if the ID tag is captured in different positions.

Proceedings ArticleDOI
15 Apr 2008
TL;DR: In this paper, the authors overview several important in-line digital holography arrangements used for microscopy and describe their performance in terms of lateral resolution and field-of-view.
Abstract: There is an increasing effort in using digital holography for microscopy in various fields of life and material science because of its unique advantage of capturing three-dimensional information about the object simultaneously. In this paper we overview several important in-line digital holography arrangements used for microscopy and describe their performance in terms of lateral resolution and field-of-view.

Proceedings ArticleDOI
16 Apr 2008
TL;DR: In this article, the first and second order statistical properties of the nonlinear matched filtering can improve the recognition performance compared to the linear matched filtering for photon counting recognition with 3D passive sensing.
Abstract: In this paper we overview the nonlinear matched filtering for photon counting recognition with 3D passive sensing. The first and second order statistical properties of the nonlinear matched filtering can improve the recognition performance compared to the linear matched filtering. Automatic target reconstruction and recognition are addressed for partially occluded objects. The recognition performance is shown to be improved significantly in the reconstruction space. The discrimination capability is analyzed in terms of Fisher ratio (FR) and receiver operating characteristic (ROC) curves.
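One common form of nonlinear matched filtering is the kth-law filter, sketched below for illustration (k = 1 reduces to the classical linear matched filter, while smaller k emphasizes high frequencies and sharpens the correlation peak); this is a generic example, not necessarily the paper's exact filter, and the scene, reference, and choice of k are assumptions.

    import numpy as np

    def nonlinear_correlation(scene, reference, k=0.3):
        """kth-law nonlinear correlation of a scene with a reference image."""
        S = np.fft.fft2(scene)
        R = np.fft.fft2(reference, s=scene.shape)
        spectrum = np.abs(S * np.conj(R)) ** k * np.exp(1j * (np.angle(S) - np.angle(R)))
        return np.abs(np.fft.ifft2(spectrum))

    rng = np.random.default_rng(8)
    ref = rng.random((16, 16))
    scene = 0.1 * rng.random((128, 128))
    scene[40:56, 70:86] += ref                       # embed the target at (40, 70)
    corr = nonlinear_correlation(scene, ref, k=0.3)
    print(np.unravel_index(corr.argmax(), corr.shape))   # correlation peak near (40, 70)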

Proceedings ArticleDOI
24 Apr 2008
TL;DR: Spatial multiplexing of the encrypted signature allows us to build a distortion-invariant ID tag, so that remote authentication can be achieved even if the tag is captured under rotation or at different distances.
Abstract: We propose to combine information from the visible (VIS) and near infrared (NIR) spectral bands to increase the robustness of security systems and deter unauthorized use of optical tags that permit the identification of a given person or object. The signature that identifies the element under surveillance is obtained only by the appropriate combination of the visible content and the NIR data. The fully-phase encryption technique is applied to prevent easy recognition of the resultant signature with the naked eye and easy reproduction using conventional devices for imaging or scanning. The obtained complex-amplitude encrypted distribution is encoded on an identity (ID) tag. Spatial multiplexing of the encrypted signature allows us to build a distortion-invariant ID tag, so that remote authentication can be achieved even if the tag is captured under rotation or at different distances. We explore the possibility of using partial information of the encrypted distribution. Simulation results are provided and discussed.
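For concreteness, the sketch below shows fully-phase double-random-phase encryption of a signature, the kind of optical encryption the abstract refers to: the signature is encoded as a phase image, encrypted with random phase masks in the input and Fourier planes, and decrypted by reversing the operations with the same keys. The mask values, image size, and function names are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(9)

    def encrypt(signature, key_in, key_fourier):
        phase_image = np.exp(1j * np.pi * signature)             # fully-phase encoding
        field = np.fft.fft2(phase_image * np.exp(2j * np.pi * key_in))
        return np.fft.ifft2(field * np.exp(2j * np.pi * key_fourier))

    def decrypt(cipher, key_in, key_fourier):
        field = np.fft.fft2(cipher) * np.exp(-2j * np.pi * key_fourier)
        phase_image = np.fft.ifft2(field) * np.exp(-2j * np.pi * key_in)
        return np.angle(phase_image) / np.pi                     # recover the signature

    signature = 0.9 * rng.random((64, 64))    # placeholder signature, scaled to avoid phase wrap
    key_in, key_f = rng.random((64, 64)), rng.random((64, 64))
    cipher = encrypt(signature, key_in, key_f)
    print(np.allclose(decrypt(cipher, key_in, key_f), signature))   # True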


Proceedings ArticleDOI
03 Apr 2008
TL;DR: In this paper, near-infrared (NIR) 3D sensing and reconstruction of occluded objects using synthetic aperture integral imaging (SAII) is presented.
Abstract: We present near-infrared (NIR) 3D sensing and reconstruction of occluded objects using synthetic aperture integral imaging (SAII). We present experiments with the NIR 3D imaging system using a radiant object. The occluded object is not observed in the visible spectrum due to front obstruction. However, with 3D computational reconstruction, the NIR image of the object shows substantially reduced front obstruction.

Proceedings ArticleDOI
15 Apr 2008
TL;DR: The 3D Imaging and Display Group at the University of Valencia, with the essential collaboration of Prof. Javidi, has centered its efforts on 3D InI with optical processing, as mentioned in this paper.
Abstract: Integral imaging (InI) systems are imaging devices that provide auto-stereoscopic images of 3D intensity objects. Since the birth of this technology, InI systems have satisfactorily overcome many of their initial drawbacks. Basically, two kinds of procedures have been used: digital and optical procedures. The "3D Imaging and Display Group" at the University of Valencia, with the essential collaboration of Prof. Javidi, has centered its efforts on 3D InI with optical processing. Among other achievements, our Group has proposed annular amplitude modulation for enlargement of the depth of field, dynamic focusing for reduction of the facet-braiding effect, and the TRES and MATRES devices to enlarge the viewing angle.