
Light field head-mounted display with correct focus cue using micro structure array

01 Jan 2014 - Iss. 6, pp. 39-42
TL;DR: In this paper, a new type of light field display was proposed using a head-mounted display (HMD) and a micro structure array (MSA, a lens array or pinhole array).
Abstract: A new type of light field display is proposed using a head-mounted display (HMD) and a micro structure array (MSA, a lens array or pinhole array). Each rendering point emits abundant rays in different directions into the viewer's pupil, so that a dense light field is generated at once inside the exit pupil of the HMD through the eyepiece. The proposed method therefore not only solves the accommodation-convergence conflict of a traditional HMD, but also drastically reduces the huge data volume required for real three-dimensional (3D) display. To demonstrate the method, a prototype is developed that gives the observer a real perception of depth.
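As a rough illustration of the rendering geometry described above (not the authors' implementation), the sketch below uses a 1-D pinhole-array model to find which display pixel behind each pinhole must be lit so that the rays through all pinholes pass through one reconstructed 3D point, which is how several rays of different directions from the same point reach the pupil. The gap, pitch, and pixel size are arbitrary assumed values.

```python
# Minimal 1-D sketch (assumed geometry, not the paper's implementation) of
# rendering one 3D point with a pinhole-array light field HMD: for every
# pinhole, light the display pixel whose ray through that pinhole also
# passes through the point, so the eye receives multiple directional rays.

import numpy as np

gap = 3.0      # mm, display-to-pinhole-array gap (assumed)
pitch = 1.0    # mm, pinhole spacing (assumed)
pixel = 0.01   # mm, display pixel size (assumed)
pinholes = np.arange(-10, 11) * pitch   # pinhole x-positions (mm)

def elemental_pixels(point_x, point_z):
    """Display x-coordinates (mm) to light so that the ray through each
    pinhole converges on the 3D point at (point_x, point_z), where point_z
    is measured from the pinhole array toward the eye (point_z > 0)."""
    # The line through pinhole p and point (X, Z) meets the display plane
    # (z = -gap) at x = p - gap * (X - p) / Z  (similar triangles).
    return pinholes - gap * (point_x - pinholes) / point_z

xs = elemental_pixels(point_x=2.0, point_z=250.0)   # a point ~250 mm away
cols = np.round(xs / pixel).astype(int)             # nearest pixel columns
print(cols)
```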
Citations
Journal Article
TL;DR: This Roadmap article on three-dimensional integral imaging provides an overview of some of the research activities in the field of integral imaging including sensing of 3D scenes, processing of captured information, and 3D display and visualization of information.
Abstract: This Roadmap article on three-dimensional integral imaging provides an overview of some of the research activities in the field of integral imaging. The article discusses various aspects of the field including sensing of 3D scenes, processing of captured information, and 3D display and visualization of information. The paper consists of a series of 15 sections from the experts presenting various aspects of the field on sensing, processing, displays, augmented reality, microscopy, object recognition, and other applications. Each section represents the vision of its author to describe the progress, potential, vision, and challenging issues in this field.

79 citations

Journal Article
TL;DR: A compact design of a light field head-mounted-display is demonstrated that offers a true 3D display view of 30° by 18°, maintains a spatial resolution of 3 arc minutes across a depth range of over 3 diopters, and provides a see-through field of view of 65° by 40°.
Abstract: A new integral-imaging-based light field augmented-reality display is proposed and implemented for the first time, to our best knowledge, to achieve a wide see-through view and high image quality over a large depth range. By using custom-designed freeform optics and incorporating a tunable lens and an aperture array, we demonstrated a compact design of a light field head-mounted-display that offers a true 3D display view of 30° by 18°, maintains a spatial resolution of 3 arc minutes across a depth range of over 3 diopters, and provides a see-through field of view of 65° by 40°.

79 citations
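As a quick sanity check on the specifications quoted above, the snippet below counts the resolvable samples implied by a 30° by 18° true-3D field of view at 3 arc minutes of resolution, assuming that figure is the angular size of one resolvable sample.

```python
# Back-of-envelope sample count for the quoted 30 x 18 deg field of view,
# assuming "3 arc minutes" is the angular size of one resolvable sample.

fov_h_deg, fov_v_deg = 30.0, 18.0
sample_arcmin = 3.0

samples_h = fov_h_deg * 60 / sample_arcmin   # 1800 arcmin / 3 arcmin
samples_v = fov_v_deg * 60 / sample_arcmin   # 1080 arcmin / 3 arcmin
print(samples_h, samples_v)                  # -> 600.0 360.0
```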

Journal Article
TL;DR: A generalized framework to model the image formation process of the existing light-field display methods is described and a systematic method to simulate and characterize the retinal image and the accommodation response rendered by a light field display is presented.
Abstract: One of the key issues in conventional stereoscopic displays is the well-known vergence-accommodation conflict problem due to the lack of the ability to render correct focus cues for 3D scenes. Recently several light field display methods have been explored to reconstruct a true 3D scene by sampling either the projections of the 3D scene at different depths or the directions of the light rays apparently emitted by the 3D scene and viewed from different eye positions. These methods are potentially capable of rendering correct or nearly correct focus cues and addressing the vergence-accommodation conflict problem. In this paper, we describe a generalized framework to model the image formation process of the existing light-field display methods and present a systematic method to simulate and characterize the retinal image and the accommodation response rendered by a light field display. We further employ this framework to investigate the trade-offs and guidelines for an optimal 3D light field display design. Our method is based on quantitatively evaluating the modulation transfer functions of the perceived retinal image of a light field display by accounting for the ocular factors of the human visual system.

71 citations
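The key quantitative step named in the abstract, evaluating the modulation transfer function of the perceived retinal image, can be sketched generically as below. The Gaussian line-spread function, its width, and the sampling are placeholder assumptions for illustration only, not the paper's ocular model or values.

```python
# Illustrative sketch: convert a (placeholder Gaussian) retinal line-spread
# function into an MTF via a Fourier transform. Blur width and sampling are
# assumed values, not taken from the paper's framework.

import numpy as np

n, dx = 2048, 0.1                 # samples and spacing in arcmin (assumed)
x = (np.arange(n) - n // 2) * dx
sigma = 1.5                       # arcmin, placeholder retinal blur width
lsf = np.exp(-x**2 / (2 * sigma**2))
lsf /= lsf.sum()

mtf = np.abs(np.fft.rfft(lsf))
mtf /= mtf[0]                               # normalize DC to 1
freqs = np.fft.rfftfreq(n, d=dx) * 60       # cycles per degree
print(freqs[mtf > 0.5][-1])                 # highest frequency with MTF > 0.5
```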

Journal Article
TL;DR: An efficient algorithm for optimal decompositions is presented, incorporating insights from vision science, and eye tracking can be used for adequate plane alignment with efficient image-based deformations, adjusting for both eye rotation and head movement relative to the display.
Abstract: As head-mounted displays (HMDs) commonly present a single, fixed-focus display plane, a conflict can be created between the vergence and accommodation responses of the viewer. Multifocal HMDs have long been investigated as a potential solution in which multiple image planes span the viewer's accommodation range. Such displays require a scene decomposition algorithm to distribute the depiction of objects across image planes, and previous work has shown that simple decompositions can be achieved in real-time. However, recent optimal decompositions further improve image quality, particularly with complex content. Such decompositions are more computationally involved and likely require better alignment of the image planes with the viewer's eyes, which are potential barriers to practical applications.Our goal is to enable interactive optimal decomposition algorithms capable of driving a vergence- and accommodation-tracked multifocal testbed. Ultimately, such a testbed is necessary to establish the requirements for the practical use of multifocal displays, in terms of computational demand and hardware accuracy. To this end, we present an efficient algorithm for optimal decompositions, incorporating insights from vision science. Our method is amenable to GPU implementations and achieves a three-orders-of-magnitude speedup over previous work. We further show that eye tracking can be used for adequate plane alignment with efficient image-based deformations, adjusting for both eye rotation and head movement relative to the display. We also build the first binocular multifocal testbed with integrated eye tracking and accommodation measurement, paving the way to establish practical eye tracking and rendering requirements for this promising class of display. Finally, we report preliminary results from a pilot user study utilizing our testbed, investigating the accommodation response of users to dynamic stimuli presented under optimal decomposition.

54 citations
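For contrast with the optimal decomposition studied in that paper, the sketch below implements one common simple baseline of the kind the abstract mentions: linear depth blending, which splits each pixel between the two focal planes (in diopters) that bracket its depth. The plane placement and toy inputs are assumptions, and this is not the authors' algorithm.

```python
# Simple linear depth blending across focal planes (a common baseline
# decomposition, not the paper's optimal method). Plane positions and the
# toy image/depth map are assumed for illustration.

import numpy as np

planes_D = np.array([0.0, 1.0, 2.0, 3.0])   # focal-plane positions, diopters (assumed)

def linear_blend(image, depth_D):
    """Split `image` (H x W) across the focal planes by linearly weighting
    each pixel between the two planes that bracket its depth (diopters)."""
    layers = np.zeros((len(planes_D),) + image.shape)
    d = np.clip(depth_D, planes_D[0], planes_D[-1])
    for k in range(len(planes_D) - 1):
        lo, hi = planes_D[k], planes_D[k + 1]
        # include the far edge only on the last span to avoid double counting
        in_span = (d >= lo) & ((d <= hi) if k == len(planes_D) - 2 else (d < hi))
        w = np.where(in_span, (d - lo) / (hi - lo), 0.0)   # weight toward far plane
        layers[k]     += np.where(in_span, image * (1 - w), 0.0)
        layers[k + 1] += np.where(in_span, image * w, 0.0)
    return layers

img = np.ones((4, 4))
depth = np.linspace(0.0, 3.0, 16).reshape(4, 4)
print(linear_blend(img, depth).sum(axis=0))   # each pixel's weights sum to 1
```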

Journal Article
TL;DR: In this article, an ultrathin, polarization-insensitive, focus-tunable liquid crystal (LC) diffractive lens with a large aperture, low weight, and low operating voltage is proposed to produce two foci: one for a real object and the other for a virtual object with addressable focal planes.

28 citations
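The specific LC device is not reproduced here, but the underlying diffractive-lens relation is standard: a thin lens of focal length f imposes a paraxial phase φ(r) = -πr²/(λf), which a diffractive or LC element realizes wrapped modulo 2π, and re-addressing the wrapped profile retunes the focus. The sketch below evaluates that wrapped phase for two assumed focal settings; the wavelength, aperture, and focal lengths are illustrative values, not the authors'.

```python
# Generic wrapped phase of an ideal thin diffractive lens (not the authors'
# LC design): phi(r) = -pi * r^2 / (lambda * f), wrapped to [0, 2*pi).
# Wavelength, aperture, and focal settings are assumed values.

import numpy as np

wavelength = 550e-9        # m, green light (assumed)
aperture_radius = 10e-3    # m, 20 mm aperture (assumed)

def wrapped_phase(r, focal_length):
    """Wrapped (0..2*pi) paraxial lens phase at radius r (meters)."""
    phi = -np.pi * r**2 / (wavelength * focal_length)
    return np.mod(phi, 2 * np.pi)

r = np.linspace(0, aperture_radius, 5)
for f in (0.5, 1.0):       # two focal settings: 2 D and 1 D (assumed)
    print(f, wrapped_phase(r, f))
```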

References

Journal Article
TL;DR: In this study, a tiled HMD using two compact rotationally symmetrical eyepieces was designed and developed and demonstrates an FOV of 66°(H)×32°(V) with a 7.5 mm exit pupil diameter and a 15.7 mm eye relief.
Abstract: It has always been a challenge to break the resolution/field-of-view (FOV) invariant to design a large FOV and high-resolution optical system, especially for a head-mounted display (HMD) system. In this study, a tiled HMD using two compact rotationally symmetrical eyepieces was designed and developed. Some issues on exit pupil and eye relief were analyzed in detail and taken into consideration during the design procedure. The overall optical system is compact with high performance. The system volume is smaller than 30 mm×35 mm×30 mm. Based on two 0.61 in. microdisplay devices, the overall tiled system demonstrates an FOV of 66°(H)×32°(V) with a 7.5 mm exit pupil diameter and a 15.7 mm eye relief.

17 citations
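As a rough, assumption-laden estimate from the quoted numbers (a 4:3 panel aspect is assumed, each eyepiece is taken to cover roughly half of the 66° horizontal field, and tiling overlap and distortion are ignored), the paraxial relation f ≈ (w/2)/tan(FOV/2) implies an eyepiece focal length of about 21 mm:

```python
# Rough eyepiece focal-length estimate from the quoted specifications.
# Assumptions: 0.61-inch microdisplay with 4:3 aspect, ~half of the 66 deg
# horizontal FOV per eyepiece, overlap and distortion ignored.

import math

diag_mm = 0.61 * 25.4                    # display diagonal in mm
width_mm = diag_mm * 4 / 5               # 4:3 panel: width = 0.8 * diagonal (assumed)
half_fov = math.radians(66.0 / 2 / 2)    # ~16.5 deg half-FOV per eyepiece (assumed)

focal_mm = (width_mm / 2) / math.tan(half_fov)
print(round(width_mm, 1), round(focal_mm, 1))   # ~12.4 mm wide panel, f ~ 20.9 mm
```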