scispace - formally typeset
Author

Chenliang Chang

Bio: Chenliang Chang is an academic researcher from the University of Illinois at Urbana–Champaign. The author has contributed to research in topics: Holography & Holographic display. The author has an h-index of 13, has co-authored 51 publications, and has received 573 citations. Previous affiliations of Chenliang Chang include Nanjing Normal University & University of California, Los Angeles.

Papers published on a yearly basis

Papers
Journal ArticleDOI
20 Nov 2020
TL;DR: Compared with other 3D displays, the holographic display has unique advantages in providing natural depth cues and correcting eye aberrations and holds great promise to be the enabling technology for next-generation VR/AR devices.
Abstract: Wearable near-eye displays for virtual and augmented reality (VR/AR) have seen enormous growth in recent years. While researchers are exploiting a plethora of techniques to create life-like three-dimensional (3D) objects, there is a lack of awareness of the role of human perception in guiding the hardware development. An ultimate VR/AR headset must integrate the display, sensors, and processors in a compact enclosure that people can comfortably wear for a long time while allowing a superior immersion experience and user-friendly human–computer interaction. Compared with other 3D displays, the holographic display has unique advantages in providing natural depth cues and correcting eye aberrations. Therefore, it holds great promise to be the enabling technology for next-generation VR/AR devices. In this review, we survey the recent progress in holographic near-eye displays from the human-centric perspective.

175 citations

Journal ArticleDOI
Chenliang Chang, Jun Xia, Lei Yang, Wei Lei, Zhiming Yang, Jianhong Chen
TL;DR: This paper proposes a method to suppress speckle noise by simultaneously reconstructing the desired amplitude and phase distributions of a phase-only computer-generated hologram calculated with a double-constraint Gerchberg-Saxton algorithm.
Abstract: The Gerchberg-Saxton (GS) algorithm is widely used to calculate the phase-only computer-generated hologram (CGH) for holographic three-dimensional (3D) display. However, speckle noise exists in the reconstruction of the CGH due to the uncontrolled phase distribution. In this paper, we propose a method to suppress the speckle noise by simultaneously reconstructing the desired amplitude and phase distribution. The phase-only CGH is calculated by using a double-constraint GS algorithm, in which both the desired amplitude and phase information are constrained in the image plane in each iteration. The calculated phase-only CGH can reconstruct the 3D object on multiple planes with a desired amplitude distribution and uniform phase distribution. Thus the speckle noise caused by the phase fluctuation between adjacent pixels is suppressed. Both simulations and experiments are presented to demonstrate the effective speckle noise suppression by our algorithm.
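The double-constraint idea in the abstract can be sketched numerically. The following is a minimal, hypothetical Python/NumPy illustration, assuming a Fourier-hologram geometry (an FFT pair links the hologram and image planes) and a mixing weight `alpha` that blends the target phase with the reconstructed phase; the paper's exact constraint scheme and propagation model may differ.

```python
import numpy as np

def double_constraint_gs(target_amp, target_phase, iterations=50, alpha=0.7):
    """Illustrative double-constraint GS loop: at the image plane both the
    desired amplitude AND phase are enforced, instead of amplitude alone as
    in the classic GS algorithm. Fourier-hologram geometry is assumed."""
    field = target_amp * np.exp(1j * target_phase)
    for _ in range(iterations):
        # Back-transform to the hologram plane and keep only the phase
        holo = np.fft.ifft2(field)
        holo_phase = np.angle(holo)
        # Forward-transform the phase-only hologram to the image plane
        recon = np.fft.fft2(np.exp(1j * holo_phase))
        # Double constraint: enforce the target amplitude exactly and pull
        # the phase toward the target (alpha is an illustrative assumption)
        mixed_phase = alpha * target_phase + (1 - alpha) * np.angle(recon)
        field = target_amp * np.exp(1j * mixed_phase)
    return holo_phase
```

Constraining the phase as well as the amplitude is what suppresses the pixel-to-pixel phase fluctuation that causes speckle, at the cost of some amplitude accuracy compared with amplitude-only GS.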

116 citations

Journal ArticleDOI
TL;DR: The results indicate that the method effectively reduces speckle in the reconstruction of a 3-D holographic display; moreover, it is iteration-free, which improves image quality and calculation speed at the same time.
Abstract: The purpose of this study is to implement speckle-reduced three-dimensional (3-D) holographic display with a single phase-only spatial light modulator (SLM). The complex amplitude of the hologram is transformed to pure phase values based on the double-phase method. To suppress noise and higher-order diffractions, we introduced a 4-f system with a filter at the frequency plane. A blazed grating is proposed to separate the complex amplitude on the frequency plane. Due to the complex modulation, the speckle noise is reduced. Both computer simulation and optical experiments have been conducted to verify the effectiveness of the method. The results indicate that this method can effectively reduce the speckle in the reconstruction of the 3-D holographic display. Furthermore, the method is free of iteration, which allows improving the image quality and the calculation speed at the same time.
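The double-phase decomposition underlying this method can be stated compactly: any normalized complex value A·exp(iφ) with A ≤ 1 equals 0.5·exp(i(φ+arccos A)) + 0.5·exp(i(φ−arccos A)). Below is a minimal sketch; the checkerboard interleaving and global normalization are illustrative assumptions, and the paper's 4-f filtering system and blazed grating are omitted.

```python
import numpy as np

def double_phase_encode(complex_field):
    """Encode a complex field into a single phase-only pattern via the
    double-phase method: A*exp(i*phi) = 0.5*exp(i*theta1) + 0.5*exp(i*theta2)
    with theta1,2 = phi +/- arccos(A), interleaved in a checkerboard."""
    amp = np.abs(complex_field)
    amp = amp / amp.max()                 # normalize so arccos is defined
    phi = np.angle(complex_field)
    delta = np.arccos(np.clip(amp, 0.0, 1.0))
    theta1 = phi + delta
    theta2 = phi - delta
    yy, xx = np.indices(complex_field.shape)
    checker = (xx + yy) % 2 == 0          # spatial interleaving assumption
    phase_only = np.where(checker, theta1, theta2)
    return np.mod(phase_only, 2 * np.pi)
```

After the SLM, a low-pass filter in a 4-f system averages each pair of neighboring phase pixels, so their coherent sum reconstructs the complex value without iterative optimization.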

114 citations

Journal ArticleDOI
TL;DR: Simulation and experimental results demonstrate the performance of the proposed method for generating a complex structured vortex array, which is significant for potential applications including multiple trapping of micro-sized particles.
Abstract: We propose an approach for creating an optical vortex array (OVA) arranged along an arbitrary curvilinear path, based on the coaxial interference of two width-controllable component curves calculated by a modified holographic beam shaping technique. The two component curve beams have different radial dimensions as well as phase gradients along each beam, such that the number of phase singularities in the curvilinearly arranged optical vortex array (CA-OVA) is freely tunable on demand. A hybrid CA-OVA that comprises multiple OVA structures along different respective curves is also discussed and demonstrated. Furthermore, we study the conversion of a CA-OVA into a vector mode comprising a polarization vortex array with a varied polarization-state distribution. Both simulation and experimental results verify the performance of the proposed method for generating complex structured vortex arrays, which is significant for potential applications including multiple trapping of micro-sized particles.
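For orientation, the phase mask of a single optical vortex of topological charge ℓ is simply exp(iℓθ) around the central singularity. The helper below is a hypothetical minimal sketch of that building block; the paper's curvilinear arrays are constructed from far more elaborate interference-based beam-shaping holograms.

```python
import numpy as np

def vortex_phase(n, charge):
    """Phase mask (n x n, radians in [0, 2*pi)) of one optical vortex:
    charge * azimuthal angle around the center pixel singularity."""
    y, x = np.indices((n, n)) - (n - 1) / 2   # coordinates about the center
    theta = np.arctan2(y, x)                  # azimuthal angle
    return np.mod(charge * theta, 2 * np.pi)
```

Displaying such a mask on a phase-only SLM produces the characteristic hollow (doughnut) intensity profile, since the phase is undefined at the singularity.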

59 citations

Journal ArticleDOI
TL;DR: The speckle noise is reduced due to the reconstruction of the complex amplitude of the image via a lensless optical filtering system, and the size of the projected image can reach the maximum diffraction bandwidth of the spatial light modulator (SLM) at a given distance.
Abstract: This paper presents a method for implementing speckle-reduced lensless holographic projection based on a phase-only computer-generated hologram (CGH). The CGH is calculated from the image by double-step Fresnel diffraction. A virtual convergence light is imposed on the image to ensure the focusing of its wavefront to the virtual plane, which is established between the image and the hologram plane. The speckle noise is reduced due to the reconstruction of the complex amplitude of the image via a lensless optical filtering system. Both simulation and optical experiments are carried out to confirm the feasibility of the proposed method. Furthermore, the size of the projected image can reach the maximum diffraction bandwidth of the spatial light modulator (SLM) at a given distance. The method is effective for improving the image quality as well as the image size at the same time in a compact lensless holographic projection system.
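The double-step idea, propagating from the image to an intermediate virtual plane and then on to the hologram plane, can be sketched with the transfer-function (convolution) form of Fresnel propagation. This is an illustrative sketch only: the function names and sampling parameters are assumptions, and the paper's virtual convergence-lens phase and filtering step are omitted.

```python
import numpy as np

def fresnel_transfer(field, wavelength, z, dx):
    """Fresnel propagation over distance z by the transfer-function method
    (constant phase factor omitted); dx is the sampling pitch in meters."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=dx)
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    H = np.exp(-1j * np.pi * wavelength * z * (FX**2 + FY**2))
    return np.fft.ifft2(np.fft.fft2(field) * H)

def double_step_cgh(image, wavelength, z1, z2, dx):
    """Hedged sketch of double-step Fresnel CGH: image -> virtual plane
    (distance z1) -> hologram plane (distance z2), keeping only the phase."""
    virtual = fresnel_transfer(image.astype(complex), wavelength, z1, dx)
    holo = fresnel_transfer(virtual, wavelength, z2, dx)
    return np.angle(holo)
```

Because the transfer function has unit modulus, each step conserves energy and is exactly invertible by propagating over −z, which is convenient for checking a simulation.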

57 citations


Cited by
Journal ArticleDOI
TL;DR: The authors survey the steady refinement of techniques used to create optical vortices, and explore their applications, which include sophisticated optical computing processes, novel microscopy and imaging techniques, the creation of ‘optical tweezers’ to trap particles of matter, and optical machining using light to pattern structures on the nanoscale.
Abstract: Thirty years ago, Coullet et al. proposed that a special optical field exists in laser cavities bearing some analogy with the superfluid vortex. Since then, optical vortices have been widely studied, inspired by the hydrodynamics sharing similar mathematics. Akin to a fluid vortex with a central flow singularity, an optical vortex beam has a phase singularity with a certain topological charge, giving rise to a hollow intensity distribution. Such a beam with helical phase fronts and orbital angular momentum reveals a subtle connection between macroscopic physical optics and microscopic quantum optics. These amazing properties provide a new understanding of a wide range of optical and physical phenomena, including twisting photons, spin-orbital interactions, Bose-Einstein condensates, etc., while the associated technologies for manipulating optical vortices have become increasingly tunable and flexible. Hitherto, owing to these salient properties and optical manipulation technologies, tunable vortex beams have engendered tremendous advanced applications such as optical tweezers, high-order quantum entanglement, and nonlinear optics. This article reviews the recent progress in tunable vortex technologies along with their advanced applications.

1,016 citations

Journal ArticleDOI
TL;DR: A unified focus, aberration correction, and vision correction model, along with a user calibration process, accounts for any optical defects between the light source and retina to enable truly compact, eyeglasses-like displays with wide fields of view that would be inaccessible through conventional means.
Abstract: We present novel designs for virtual and augmented reality near-eye displays based on phase-only holographic projection. Our approach is built on the principles of Fresnel holography and double phase amplitude encoding with additional hardware, phase correction factors, and spatial light modulator encodings to achieve full color, high contrast and low noise holograms with high resolution and true per-pixel focal control. We provide a GPU-accelerated implementation of all holographic computation that integrates with the standard graphics pipeline and enables real-time (≥90 Hz) calculation directly or through eye tracked approximations. A unified focus, aberration correction, and vision correction model, along with a user calibration process, accounts for any optical defects between the light source and retina. We use this optical correction ability not only to fix minor aberrations but to enable truly compact, eyeglasses-like displays with wide fields of view (80°) that would be inaccessible through conventional means. All functionality is evaluated across a series of hardware prototypes; we discuss remaining challenges to incorporate all features into a single device.

510 citations

Dissertation
01 Jan 2014
TL;DR: This thesis explores flat displays, deformable displays, flexible materials, and static and mobile projection displays in dynamic environments, and serves as an example of merging the digital and the physical through flexible materials, embodied computation, and actuation.
Abstract: In 1965 Ivan E. Sutherland envisioned the Ultimate Display, a room in which a computer can directly control the existence of matter. This type of display would merge the digital and the physical world, dramatically changing how people interact with computers. This thesis explores flat displays, deformable displays, flexible materials, and static and mobile projection displays in dynamic environments. Two aspects of the dynamic environment are considered. One is mobile human nature -- a person moving through or inside an environment. The other is the change or movement of the environment itself. The initial study consisted of a mixed reality application, based on recent motor learning research. It tested whether a performer's attentional focus on markers external to the body improves the accuracy and duration of acquiring a motor skill, as compared with the performer focusing on their own body accompanied by verbal instructions. This experiment showed the need for displays that resemble physical reality. Deformable displays and Organic User Interfaces (OUIs) leverage shape, material, and the inherent properties of matter in order to create natural, intuitive forms of interaction. We suggested designing OUIs employing depth sensors as 3D input, deformable displays as 3D output, and identifying attributes that couple matter to human perception and motor skills. Flexible materials were explored by developing a soft gripper able to hold everyday objects of various shapes and sizes. It did not use complex hardware or control algorithms, but rather combined sheets of flexible plastic materials and a single servo motor. The gripper showed how a simple design with a minimal control mechanism can solve a complex problem in a dynamic environment.
It serves as an example application for merging the digital and the physical through flexible materials, embodied computation, and actuation. The next two experiments merge digital information with the physical dynamic environment by using mobile and static projectors. The mobile projector experiment consisted of GPS navigation using a bike-mounted projector, displaying a map on the pavement in front of the bike. We found that, compared with a bike-mounted smartphone, the mobile projector yields a lower cognitive load for the map navigation task. A dynamic space emerges from the navigation task requirements, and the projected display becomes a part of the physical environment. In the final experiment, a person interacts with a changing, growing environment, on which digital information is projected from above using a static projector. The interactive space consists of cardboard building blocks, the arrangement of which is limited by the area of projection. The user adds cardboard blocks to the cluster based upon feedback projected from above. Concepts from artificial intelligence and architecture were applied to understand the interaction between the environment, the user, the morphology, and the material of the physical building system.

319 citations

Journal ArticleDOI
TL;DR: This article describes the basic structures of AR and VR headsets and the operation principles of various holographic optical elements (HOEs) and lithography-enabled devices, with detailed description and analysis of state-of-the-art architectures.
Abstract: With rapid advances in high-speed communication and computation, augmented reality (AR) and virtual reality (VR) are emerging as next-generation display platforms for deeper human-digital interactions. Nonetheless, simultaneously matching the exceptional performance of human vision while keeping the near-eye display module compact and lightweight imposes unprecedented challenges on optical engineering. Fortunately, recent progress in holographic optical elements (HOEs) and lithography-enabled devices provides innovative ways to tackle these obstacles in AR and VR that are otherwise difficult with traditional optics. In this review, we begin by introducing the basic structures of AR and VR headsets, and then describe the operation principles of various HOEs and lithography-enabled devices. Their properties are analyzed in detail, including the strong wavelength and angular selectivity and multiplexing ability of volume HOEs, the polarization dependency and active switching of liquid-crystal HOEs, the fabrication and properties of micro-LEDs (light-emitting diodes), and the large design freedom of metasurfaces. Afterwards, we discuss how these devices help enhance AR and VR performance, with detailed description and analysis of some state-of-the-art architectures. Finally, we cast a perspective on potential developments and research directions of these photonic devices for future AR and VR displays.

219 citations

Journal ArticleDOI
10 Mar 2021-Nature
TL;DR: In this article, a deep-learning-based approach using a convolutional neural network, termed tensor holography, is used to synthesize photorealistic colour 3D holograms from a single RGB-depth image in real time.
Abstract: The ability to present three-dimensional (3D) scenes with continuous depth sensation has a profound impact on virtual and augmented reality, human–computer interaction, education and training. Computer-generated holography (CGH) enables high-spatio-angular-resolution 3D projection via numerical simulation of diffraction and interference1. Yet, existing physically based methods fail to produce holograms with both per-pixel focal control and accurate occlusion2,3. The computationally taxing Fresnel diffraction simulation further places an explicit trade-off between image quality and runtime, making dynamic holography impractical4. Here we demonstrate a deep-learning-based CGH pipeline capable of synthesizing a photorealistic colour 3D hologram from a single RGB-depth image in real time. Our convolutional neural network (CNN) is extremely memory efficient (below 620 kilobytes) and runs at 60 hertz for a resolution of 1,920 × 1,080 pixels on a single consumer-grade graphics processing unit. Leveraging low-power on-device artificial intelligence acceleration chips, our CNN also runs interactively on mobile (iPhone 11 Pro at 1.1 hertz) and edge (Google Edge TPU at 2.0 hertz) devices, promising real-time performance in future-generation virtual and augmented-reality mobile headsets. We enable this pipeline by introducing a large-scale CGH dataset (MIT-CGH-4K) with 4,000 pairs of RGB-depth images and corresponding 3D holograms. Our CNN is trained with differentiable wave-based loss functions5 and physically approximates Fresnel diffraction. With an anti-aliasing phase-only encoding method, we experimentally demonstrate speckle-free, natural-looking, high-resolution 3D holograms. 
Our learning-based approach and the Fresnel hologram dataset will help to unlock the full potential of holography and enable applications in metasurface design6,7, optical and acoustic tweezer-based microscopic manipulation8–10, holographic microscopy11 and single-exposure volumetric 3D printing12,13.

180 citations