Author

Qiong-Hua Wang

Bio: Qiong-Hua Wang is an academic researcher from Beihang University. The author has contributed to research in topics: Integral imaging & Holography. The author has an h-index of 27, having co-authored 468 publications receiving 3651 citations. Previous affiliations of Qiong-Hua Wang include Fuzhou University & Sichuan University.


Papers
Journal ArticleDOI
TL;DR: A metalens array that can reproduce a 3D optical image with achromatic integral imaging for white light is developed, opening the door for new applications in microlithography, sensing, and 3D imaging.
Abstract: We realize a polarization-insensitive silicon-nitride metalens array in the visible frequency spectrum, which comprises a set of broadband achromatic metalenses. Achromatic focusing and achromatic integral imaging are demonstrated for white light.

150 citations
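
For context, the phase profile that each lens in such an array must impose to focus light of wavelength \lambda at focal length f is the textbook hyperbolic metalens profile; this is general optics background, not a formula quoted from the paper, and achromatic operation means the meta-atoms realize it with the same f across the whole visible band:

\[ \varphi(r, \lambda) = -\frac{2\pi}{\lambda}\left(\sqrt{r^{2} + f^{2}} - f\right) \]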

Journal ArticleDOI
TL;DR: The proposed autostereoscopic display has less crosstalk, a wider viewing angle, and higher light-utilization efficiency; the luminance distribution of the prototypes along the horizontal direction is measured.
Abstract: An autostereoscopic display based on two-layer lenticular lenses is proposed. The two layers comprise one layer of conventional lenticular lenses and an additional layer of light-concentrating lenticular lenses. Two prototypes are developed, one of the proposed display and one of a conventional autostereoscopic display. At the optimum three-dimensional viewing distance, the luminance distribution of the prototypes along the horizontal direction is measured. From the luminance distribution, the crosstalk of the prototypes is obtained. Compared with the conventional autostereoscopic display, the proposed display has less crosstalk, a wider viewing angle, and higher light-utilization efficiency.

110 citations
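
The crosstalk measurement described above reduces to a simple ratio once the luminance distribution is in hand. A minimal sketch in Python, assuming the common leakage-over-signal definition of crosstalk (the paper may use a different formula, and the luminance values below are invented for illustration):

def crosstalk_percent(l_signal, l_leakage, l_black):
    # Crosstalk (%) = (unwanted-view luminance - black level)
    #               / (intended-view luminance - black level) * 100
    return 100.0 * (l_leakage - l_black) / (l_signal - l_black)

# Example: the left-eye position sees 180 cd/m^2 from the left image,
# 9 cd/m^2 leaking from the right image, with a 1 cd/m^2 black level.
print(crosstalk_percent(180.0, 9.0, 1.0))  # ~4.5 %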

Journal ArticleDOI
Di Wang, Chao Liu, Chuan Shen, Yan Xing, Qiong-Hua Wang
01 Dec 2020-PhotoniX
TL;DR: With the proposed system, holographic zoom capture and color reproduction of real objects can be achieved with a simple structure; the system is expected to be applied to micro-projection and three-dimensional display technology.
Abstract: In this paper, we propose a holographic capture and projection system for real objects based on tunable zoom lenses. Different from the traditional holographic system, a liquid lens-based zoom camera and a digital conical lens are used as the key parts to realize the functions of holographic capture and projection, respectively. The zoom camera is produced by combining liquid lenses and solid lenses, which gives it the advantages of fast response and light weight. By electrically controlling the curvature of the liquid-liquid surface, the focal length of the zoom camera can be changed easily. As another tunable zoom lens, the digital conical lens has a large focal depth, and this optical property is well suited to adaptive projection in the holographic system, especially for multilayer imaging. By loading the phase of the conical lens on the spatial light modulator, the reconstructed image can be projected with a large depth. With the proposed system, holographic zoom capture and color reproduction of real objects can be achieved with a simple structure. Experimental results verify the feasibility of the proposed system. The proposed system is expected to be applied to micro-projection and three-dimensional display technology.

93 citations
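
As a rough illustration of the "digital conical lens" idea, here is a minimal Python sketch that builds an axicon-style phase pattern of the kind that can be loaded on an SLM. The linear radial phase phi(r) = -2*pi*r/r0 is the textbook axicon profile; the SLM resolution, pixel pitch, and radial period below are assumptions for illustration, not values from the paper:

import numpy as np

H, W = 1080, 1920            # SLM resolution (assumed)
pitch = 8.0e-6               # pixel pitch in metres (assumed)
r0 = 5.0e-4                  # radial period of the conical phase (assumed)

y, x = np.indices((H, W))
r = np.hypot((x - W / 2) * pitch, (y - H / 2) * pitch)
phase = (-2 * np.pi * r / r0) % (2 * np.pi)   # wrap phase to [0, 2*pi)
gray = np.uint8(phase / (2 * np.pi) * 255)    # 8-bit pattern for the SLM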

Journal ArticleDOI
TL;DR: A prototype of the dual-view integral imaging (II) 3D display, consisting of a display panel, two orthogonal polarizer arrays, a polarization switcher, and a micro-lens array, shows good performance.
Abstract: In this paper, a dual-view integral imaging three-dimensional (3D) display consisting of a display panel, two orthogonal polarizer arrays, a polarization switcher, and a micro-lens array is proposed. Two elemental image arrays for two different 3D images are presented by the display panel alternately, and the polarization switcher controls the polarization direction of the light rays synchronously. The two elemental image arrays are modulated by their corresponding and neighboring micro-lenses of the micro-lens array, and they reconstruct two different 3D images in viewing zones 1 and 2, respectively. A prototype of the dual-view II 3D display is developed, and it shows good performance.

81 citations
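
The time-multiplexing logic in the abstract can be sketched in a few lines. A hypothetical Python outline, assuming the display panel and polarization switcher are exposed as objects with show/set_angle/wait_vsync methods (illustrative names; the paper gives no implementation):

def run_dual_view(display, switcher, eia_1, eia_2, frames=120):
    # Alternate the two elemental image arrays (EIAs) frame by frame,
    # flipping the polarization switcher in sync so each EIA is routed
    # only to its own viewing zone.
    for t in range(frames):
        if t % 2 == 0:
            display.show(eia_1)      # frame for 3D image 1
            switcher.set_angle(0)    # polarization state for zone 1
        else:
            display.show(eia_2)      # frame for 3D image 2
            switcher.set_angle(90)   # orthogonal state for zone 2
        display.wait_vsync()         # keep panel and switcher in step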

Journal ArticleDOI
TL;DR: This Roadmap article on three-dimensional integral imaging provides an overview of some of the research activities in the field of integral imaging, including sensing of 3D scenes, processing of captured information, and 3D display and visualization of information.
Abstract: This Roadmap article on three-dimensional integral imaging provides an overview of some of the research activities in the field of integral imaging. The article discusses various aspects of the field, including sensing of 3D scenes, processing of captured information, and 3D display and visualization of information. The paper consists of a series of 15 sections in which experts present various aspects of the field: sensing, processing, displays, augmented reality, microscopy, object recognition, and other applications. Each section represents its author's vision of the progress, potential, and challenging issues in the field.

79 citations


Cited by
Journal ArticleDOI


08 Dec 2001-BMJ
TL;DR: There is, I think, something ethereal about i, the square root of minus one: it seemed an odd beast at first, an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

Proceedings ArticleDOI
07 Dec 2015
TL;DR: The learned features, namely C3D (Convolutional 3D), with a simple linear classifier outperform state-of-the-art methods on 4 different benchmarks and are comparable with current best methods on the other 2 benchmarks.
Abstract: We propose a simple, yet effective approach for spatiotemporal feature learning using deep 3-dimensional convolutional networks (3D ConvNets) trained on a large-scale supervised video dataset. Our findings are three-fold: 1) 3D ConvNets are more suitable for spatiotemporal feature learning than 2D ConvNets; 2) a homogeneous architecture with small 3 × 3 × 3 convolution kernels in all layers is among the best-performing architectures for 3D ConvNets; and 3) our learned features, namely C3D (Convolutional 3D), with a simple linear classifier outperform state-of-the-art methods on 4 different benchmarks and are comparable with the current best methods on the other 2 benchmarks. In addition, the features are compact, achieving 52.8% accuracy on the UCF101 dataset with only 10 dimensions, and are very efficient to compute due to the fast inference of ConvNets. Finally, they are conceptually very simple and easy to train and use.

7,091 citations
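
A minimal sketch of the homogeneous 3 × 3 × 3 design the abstract singles out, assuming PyTorch (the layer sizes are illustrative, not the exact C3D architecture):

import torch
import torch.nn as nn

block = nn.Sequential(
    nn.Conv3d(3, 64, kernel_size=3, padding=1),  # 3x3x3 over (T, H, W)
    nn.ReLU(inplace=True),
    nn.MaxPool3d(kernel_size=(1, 2, 2)),         # pool space only, keep time
)

clip = torch.randn(1, 3, 16, 112, 112)   # (batch, channels, frames, H, W)
print(block(clip).shape)                 # torch.Size([1, 64, 16, 56, 56])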

Posted Content
TL;DR: In this article, the authors propose a simple and effective approach for spatio-temporal feature learning using deep 3D convolutional networks (3D ConvNets) trained on a large-scale supervised video dataset.
Abstract: We propose a simple, yet effective approach for spatiotemporal feature learning using deep 3-dimensional convolutional networks (3D ConvNets) trained on a large-scale supervised video dataset. Our findings are three-fold: 1) 3D ConvNets are more suitable for spatiotemporal feature learning than 2D ConvNets; 2) a homogeneous architecture with small 3 × 3 × 3 convolution kernels in all layers is among the best-performing architectures for 3D ConvNets; and 3) our learned features, namely C3D (Convolutional 3D), with a simple linear classifier outperform state-of-the-art methods on 4 different benchmarks and are comparable with the current best methods on the other 2 benchmarks. In addition, the features are compact, achieving 52.8% accuracy on the UCF101 dataset with only 10 dimensions, and are very efficient to compute due to the fast inference of ConvNets. Finally, they are conceptually very simple and easy to train and use.

3,786 citations

Journal Article
TL;DR: In this article, a fast-Fourier-transform method of topography and interferometry is proposed that discriminates between elevation and depression of the object or wave-front form, which has not been possible with fringe-contour-generation techniques.
Abstract: A fast-Fourier-transform method of topography and interferometry is proposed. By computer processing of a noncontour type of fringe pattern, automatic discrimination is achieved between elevation and depression of the object or wave-front form, which has not been possible by the fringe-contour-generation techniques. The method has advantages over moire topography and conventional fringe-contour interferometry in both accuracy and sensitivity. Unlike fringe-scanning techniques, the method is easy to apply because it uses no moving components.

3,742 citations
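
The core of this Fourier-transform fringe-analysis method is easy to reproduce numerically: transform the carrier-frequency fringe pattern, isolate one sideband, shift it to the origin, and read the phase off the inverse transform. A 1D sketch in Python assuming NumPy; the carrier frequency, sideband width, and test phase below are invented for illustration:

import numpy as np

N, f0 = 1024, 64                     # samples, carrier frequency (cycles)
x = np.arange(N) / N
phi = 3.0 * np.sin(2 * np.pi * x)    # "object" phase to be recovered
g = 1.0 + np.cos(2 * np.pi * f0 * x + phi)   # non-contour fringe pattern

G = np.fft.fft(g)
side = np.zeros_like(G)
side[f0 - 32 : f0 + 32] = G[f0 - 32 : f0 + 32]   # keep one sideband only
side = np.roll(side, -f0)            # shift the carrier to zero frequency
phi_rec = np.unwrap(np.angle(np.fft.ifft(side)))  # extract the phase
print(np.allclose(phi_rec, phi, atol=0.1))        # True: phase recovered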

Proceedings ArticleDOI
01 Oct 2017
TL;DR: This paper devises multiple variants of bottleneck building blocks in a residual learning framework by simulating 3 × 3 × 3 convolutions with 1 × 3 × 3 convolutional filters in the spatial domain (equivalent to a 2D CNN) plus 3 × 1 × 1 convolutions that construct temporal connections on adjacent feature maps in time.
Abstract: Convolutional Neural Networks (CNNs) have been regarded as a powerful class of models for image recognition problems. Nevertheless, it is not trivial to utilize a CNN for learning spatio-temporal video representations. A few studies have shown that performing 3D convolutions is a rewarding approach to capture both spatial and temporal dimensions in videos. However, the development of a very deep 3D CNN from scratch results in expensive computational cost and memory demand. A valid question is why not recycle off-the-shelf 2D networks for a 3D CNN. In this paper, we devise multiple variants of bottleneck building blocks in a residual learning framework by simulating 3 × 3 × 3 convolutions with 1 × 3 × 3 convolutional filters in the spatial domain (equivalent to a 2D CNN) plus 3 × 1 × 1 convolutions that construct temporal connections on adjacent feature maps in time. Furthermore, we propose a new architecture, named Pseudo-3D Residual Net (P3D ResNet), that exploits all the variants of blocks, composing each at different placements in the ResNet, following the philosophy that enhancing structural diversity while going deep can improve the power of neural networks. Our P3D ResNet achieves clear improvements on the Sports-1M video classification dataset over 3D CNN and frame-based 2D CNN by 5.3% and 1.8%, respectively. We further examine the generalization performance of the video representation produced by our pre-trained P3D ResNet on five different benchmarks and three different tasks, demonstrating superior performance over several state-of-the-art techniques.

1,192 citations
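
A minimal sketch of the pseudo-3D factorization described above, assuming PyTorch: a 1 × 3 × 3 spatial convolution followed by a 3 × 1 × 1 temporal convolution stands in for a full 3 × 3 × 3 kernel. This mirrors the serial block variant; the channel sizes are illustrative, not the paper's exact configuration:

import torch
import torch.nn as nn

class Pseudo3DBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # 1x3x3: spatial filtering only (equivalent to a 2D convolution)
        self.spatial = nn.Conv3d(channels, channels,
                                 kernel_size=(1, 3, 3), padding=(0, 1, 1))
        # 3x1x1: temporal connections across adjacent feature maps
        self.temporal = nn.Conv3d(channels, channels,
                                  kernel_size=(3, 1, 1), padding=(1, 0, 0))
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.temporal(self.relu(self.spatial(x))))

clip = torch.randn(1, 64, 16, 56, 56)   # (batch, channels, frames, H, W)
print(Pseudo3DBlock(64)(clip).shape)    # torch.Size([1, 64, 16, 56, 56])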