Author

Guohai Situ

Bio: Guohai Situ is an academic researcher from the Chinese Academy of Sciences. The author has contributed to research in topics including Encryption and Holography, has an h-index of 34, and has co-authored 124 publications receiving 4403 citations. Previous affiliations of Guohai Situ include University College Dublin and the University of Stuttgart.


Papers
Journal ArticleDOI
TL;DR: A lensless optical security system based on double random-phase encoding in the Fresnel domain is proposed, which can encrypt a primary image into random noise by use of two statistically independent random-phase masks in the input and transform planes, respectively.
Abstract: A lensless optical security system based on double random-phase encoding in the Fresnel domain is proposed. This technique can encrypt a primary image into random noise by use of two statistically independent random-phase masks in the input and transform planes, respectively. In this system the positions of the significant planes and the operating wavelength, as well as the phase codes, are used as keys to encrypt and recover the primary image. Therefore higher security is achieved. The sensitivity of the decrypted image to shifts along the propagation direction and to the wavelength is also investigated.
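A minimal numerical sketch of the encryption step described above, assuming a paraxial single-FFT Fresnel propagator; the helper names, grid parameters, and toy image are illustrative, not the authors' implementation. Decryption would back-propagate the ciphertext with the conjugate keys, and only succeed with the correct distances and wavelength.

```python
import numpy as np

def fresnel_propagate(u, wavelength, z, dx):
    """Paraxial single-FFT Fresnel propagation of field u over distance z."""
    n = u.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    H = np.exp(-1j * np.pi * wavelength * z * (FX**2 + FY**2))  # transfer function
    return np.fft.ifft2(np.fft.fft2(u) * H)

def drpe_fresnel_encrypt(img, wavelength, z1, z2, dx, rng):
    """Lensless DRPE: phase mask, Fresnel propagate, second mask, propagate again."""
    m1 = np.exp(2j * np.pi * rng.random(img.shape))  # input-plane key
    m2 = np.exp(2j * np.pi * rng.random(img.shape))  # transform-plane key
    u = fresnel_propagate(img * m1, wavelength, z1, dx)
    return fresnel_propagate(u * m2, wavelength, z2, dx)

rng = np.random.default_rng(0)
img = np.zeros((256, 256))
img[96:160, 96:160] = 1.0                            # toy primary image
cipher = drpe_fresnel_encrypt(img, 632.8e-9, 0.05, 0.05, 10e-6, rng)
print(abs(cipher).std())                             # noise-like complex field
```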

859 citations

Journal ArticleDOI
20 Aug 2019
TL;DR: This paper relates the deep-learning-inspired solutions to the original computational imaging formulation, uses the relationship to derive design insights, principles, and caveats of more general applicability, and explores how the machine learning process is aided by the physics of imaging when ill-posedness and uncertainties become particularly severe.
Abstract: Since their inception in the 1930–1960s, the research disciplines of computational imaging and machine learning have followed parallel tracks and, during the last two decades, experienced explosive growth drawing on similar progress in mathematical optimization and computing hardware. While these developments have always been to the benefit of image interpretation and machine vision, only recently has it become evident that machine learning architectures, and deep neural networks in particular, can be effective for computational image formation, aside from interpretation. The deep learning approach has proven to be especially attractive when the measurement is noisy and the measurement operator ill-posed or uncertain. Examples reviewed here are: super-resolution; lensless retrieval of phase and complex amplitude from intensity; photon-limited scenes, including ghost imaging; and imaging through scatter. In this paper, we cast these works in a common framework. We relate the deep-learning-inspired solutions to the original computational imaging formulation and use the relationship to derive design insights, principles, and caveats of more general applicability. We also explore how the machine learning process is aided by the physics of imaging when ill-posedness and uncertainties become particularly severe. It is hoped that the present unifying exposition will stimulate further progress in this promising field of research.
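The common framework alluded to here can be summarized in generic inverse-problem notation; the symbols below are the standard ones from the computational imaging literature, not necessarily the paper's exact notation.

```latex
% Classical computational imaging: regularized inversion of a forward operator H
\hat{f} \;=\; \arg\min_{f}\ \underbrace{\lVert Hf - g \rVert^{2}}_{\text{data fidelity}}
\;+\; \alpha\,\underbrace{\Phi(f)}_{\text{regularizer / prior}}

% Learned image formation: a network R_\theta absorbs the prior (and possibly part
% of the inversion), with weights fitted on example pairs (g_k, f_k)
\hat{\theta} \;=\; \arg\min_{\theta} \sum_{k} \mathcal{L}\bigl(R_{\theta}(g_k),\, f_k\bigr),
\qquad \hat{f} \;=\; R_{\hat{\theta}}(g)
```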

473 citations

Journal ArticleDOI
TL;DR: This work introduces the technique of wavelength multiplexing into a double random-phase encoding system to achieve multiple-image encryption, and analyzes the minimum separation between two adjacent multiplexing wavelengths through cross talk and the multiplexing capacity through the correlation coefficient.
Abstract: We introduce the technique of wavelength multiplexing into a double random-phase encoding system to achieve multiple-image encryption. Each primary image is first encrypted by the double phase encoding method and then superposed to yield the final enciphered image. We analyze the minimum separation between two adjacent multiplexing wavelengths through cross talk and the multiplexing capacity through the correlation coefficient. Computer simulations are performed to demonstrate the concept. This technique can be used for hiding multiple images as well.
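A minimal sketch of the multiplexing idea under simplifying assumptions: a classic Fourier-domain DRPE stands in for the wavelength-dependent encoding, so each "wavelength channel" is modeled only by its own independent key pair; all names and toy images are illustrative. Decrypting one channel from the superposed ciphertext leaves the other channels as cross talk, which the correlation coefficient quantifies.

```python
import numpy as np

def drpe_encrypt(img, m1, m2):
    """Classic Fourier-domain double random-phase encoding."""
    return np.fft.ifft2(np.fft.fft2(img * m1) * m2)

def drpe_decrypt(cipher, m1, m2):
    """Invert one channel with conjugate keys; other channels remain as cross talk."""
    return np.abs(np.fft.ifft2(np.fft.fft2(cipher) * np.conj(m2)) * np.conj(m1))

rng = np.random.default_rng(1)
shape = (128, 128)
imgs = [rng.random(shape) for _ in range(3)]         # toy primary images
keys = [tuple(np.exp(2j * np.pi * rng.random(shape)) for _ in range(2))
        for _ in imgs]                               # independent key pair per channel
cipher = sum(drpe_encrypt(im, m1, m2) for im, (m1, m2) in zip(imgs, keys))

rec = drpe_decrypt(cipher, *keys[0])                 # recover channel 0 only
cc = np.corrcoef(imgs[0].ravel(), rec.ravel())[0, 1] # quality metric from the paper
print(f"correlation coefficient: {cc:.3f}")
```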

367 citations

Journal ArticleDOI
TL;DR: An overview of the potential, recent advances, and challenges of optical security and encryption using free space optics is presented, highlighting the need for more specialized hardware and image processing algorithms.
Abstract: Information security and authentication are important challenges facing society. Recent attacks by hackers on the databases of large commercial and financial companies have demonstrated that more research and development of advanced approaches are necessary to deny unauthorized access to critical data. Free space optical technology has been investigated by many researchers in information security, encryption, and authentication. The main motivation for using optics and photonics for information security is that optical waveforms possess many complex degrees of freedom, such as amplitude, phase, polarization, large bandwidth, nonlinear transformations, quantum properties of photons, and multiplexing, that can be combined in many ways to make information encryption more secure and more difficult to attack. This roadmap article presents an overview of the potential, recent advances, and challenges of optical security and encryption using free space optics. The roadmap on optical security comprises six categories that together include 16 short sections written by authors who have made relevant contributions in this field. The first category of this roadmap describes novel encryption approaches, including secure optical sensing, which summarizes double random phase encryption applications and flaws [Yamaguchi]; the digital holographic encryption in free space optical technique, which describes encryption using multidimensional digital holography [Nomura]; simultaneous encryption of multiple signals [Perez-Cabre]; asymmetric methods based on information truncation [Nishchal]; and dynamic encryption of video sequences [Torroba]. Asymmetric and one-way cryptosystems are analyzed by Peng. The second category is on compression for encryption. In their respective contributions, Alfalou and Stern propose similar goals involving compressed data and compressive sensing encryption. The very important area of cryptanalysis is the topic of the third category, with two sections: Sheridan reviews phase retrieval algorithms used to perform different attacks, whereas Situ discusses nonlinear optical encryption techniques and the development of a rigorous optical information security theory. The fourth category, with two contributions, reports how encryption could be implemented at the nano- or micro-scale: Naruse discusses the use of nanostructures in security applications, and Carnicer proposes encoding information in a tightly focused beam. In the fifth category, encryption based on ghost imaging using single-pixel detectors is considered; in particular, the authors [Chen, Tajahuerce] emphasize the need for more specialized hardware and image processing algorithms. Finally, in the sixth category, Mosk and Javidi analyze in their corresponding papers how quantum imaging can benefit optical encryption systems: sources that use few photons make encryption systems much more difficult to attack, providing a secure method for authentication.

317 citations

Journal ArticleDOI
Meng Lyu, Wei Wang, Hao Wang, Haichao Wang, Guowei Li, Ni Chen, Guohai Situ
TL;DR: Detailed comparisons between images reconstructed using deep learning and using compressive sensing show that the proposed GIDL performs much better at extremely low sampling rates.
Abstract: In this manuscript, we propose a novel framework for computational ghost imaging, i.e., ghost imaging using deep learning (GIDL). With a set of images reconstructed using traditional GI and their corresponding ground-truth counterparts, a deep neural network was trained so that it can learn the sensing model and increase the quality of image reconstruction. Moreover, detailed comparisons between images reconstructed using deep learning and using compressive sensing show that the proposed GIDL performs much better at extremely low sampling rates. Numerical simulations and optical experiments were carried out to demonstrate the proposed GIDL.
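A minimal sketch of the traditional correlation-GI reconstruction that GIDL takes as its network input; the speckle patterns, toy object, and the `train_denoising_net` stand-in for the supervised learning step are all hypothetical illustrations, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(2)
n, K = 64, 500                                     # image size, number of patterns
obj = np.zeros((n, n))
obj[20:44, 28:36] = 1.0                            # toy object

patterns = rng.random((K, n, n))                   # illumination speckle patterns S_k
bucket = patterns.reshape(K, -1) @ obj.ravel()     # single-pixel (bucket) signals I_k

# Traditional correlation GI: G = <I S> - <I><S>; noisy at low sampling rates
gi = (bucket[:, None, None] * patterns).mean(0) - bucket.mean() * patterns.mean(0)

# GIDL then trains a deep network on pairs (gi_k, ground_truth_k) so that net(gi)
# approximates the object far better than `gi` itself at the same K;
# `train_denoising_net` is a hypothetical stand-in for that supervised step:
# net = train_denoising_net(training_pairs); restored = net(gi)
print(f"GI estimate correlation with object: "
      f"{np.corrcoef(gi.ravel(), obj.ravel())[0, 1]:.3f}")
```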

249 citations


Cited by
Journal Article
TL;DR: In this article, a fast-Fourier-transform method of topography and interferometry is proposed that discriminates between elevation and depression of the object or wave-front form, which has not been possible with fringe-contour-generation techniques.
Abstract: A fast-Fourier-transform method of topography and interferometry is proposed. By computer processing of a noncontour type of fringe pattern, automatic discrimination is achieved between elevation and depression of the object or wave-front form, which has not been possible by the fringe-contour-generation techniques. The method has advantages over moire topography and conventional fringe-contour interferometry in both accuracy and sensitivity. Unlike fringe-scanning techniques, the method is easy to apply because it uses no moving components.
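A minimal sketch of the Fourier-transform fringe-analysis idea, assuming a known carrier frequency and a simple rectangular band-pass window; the window width and the synthetic fringe pattern are illustrative choices, not the paper's exact procedure.

```python
import numpy as np

def takeda_phase(fringes, carrier_bin):
    """Fourier-transform fringe analysis: isolate the +carrier sideband, shift it
    to DC to remove the carrier, and take the angle of the inverse FFT."""
    F = np.fft.fft(fringes, axis=1)
    half = carrier_bin // 2                          # band-pass half-width (tuning choice)
    side = np.zeros_like(F)
    side[:, carrier_bin - half:carrier_bin + half + 1] = \
        F[:, carrier_bin - half:carrier_bin + half + 1]
    side = np.roll(side, -carrier_bin, axis=1)       # shift sideband to the origin
    return np.angle(np.fft.ifft(side, axis=1))       # wrapped phase; unwrap separately

# Synthetic noncontour fringe pattern: smooth phase object on a linear carrier
n, f0 = 256, 32                                      # samples per row, carrier cycles/row
x = np.arange(n)
phi = 3.0 * np.exp(-((x - n / 2) / 40.0) ** 2)       # "object" phase profile
fringes = (1.0 + np.cos(2 * np.pi * f0 * x / n + phi))[None, :].repeat(8, axis=0)
wrapped = takeda_phase(fringes, f0)
print(np.unwrap(wrapped, axis=1)[0, 125:131])        # ~3.0 near the Gaussian peak
```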

3,742 citations

Journal Article
J. Walkup
TL;DR: Development of this more comprehensive model of the behavior of light draws upon the use of tools traditionally available to the electrical engineer, such as linear system theory and the theory of stochastic processes.
Abstract: Course description: This is an advanced course in which we explore the field of Statistical Optics. Topics covered include the statistical properties of natural (thermal) and laser light, spatial and temporal coherence, the effects of partial coherence on optical imaging instruments, effects on imaging due to randomly inhomogeneous media, and a statistical treatment of the detection of light. The development of this more comprehensive model of the behavior of light draws upon tools traditionally available to the electrical engineer, such as linear system theory and the theory of stochastic processes.

1,364 citations

Journal Article
TL;DR: In this article, a self-scanned 1024 element photodiode array and a minicomputer are used to measure the phase (wavefront) in the interference pattern of an interferometer to lambda/100.
Abstract: A self-scanned 1024-element photodiode array and a minicomputer are used to measure the phase (wavefront) in the interference pattern of an interferometer to lambda/100. The photodiode array samples intensities over a 32 x 32 matrix in the interference pattern as the length of the reference arm is varied piezoelectrically. Using these data, the minicomputer synchronously detects the phase at each of the 1024 points by a Fourier-series method and displays the wavefront in contour and perspective plot on a storage oscilloscope in less than 1 min (Bruning et al., Paper WE16, OSA Annual Meeting, Oct. 1972). The array of intensities is sampled and averaged many times in a random fashion so that the effects of air turbulence, vibrations, and thermal drifts are minimized. Very significantly, wavefront errors in the interferometer are easily determined and may be automatically subtracted from current or subsequent wavefronts. Various programs supporting the measurement system include software for determining the aperture boundary, the sum and difference of wavefronts, removal or insertion of tilt and focus errors, and routines for spatial manipulation of wavefronts. FFT programs transform wavefront data into the point spread function and the modulus and phase of the optical transfer function of lenses. Display programs plot these functions in contour and perspective. The system has been designed to optimize the collection of data to give higher than usual accuracy in measuring the individual elements and final performance of assembled diffraction-limited optical systems; furthermore, the short loop time of a few minutes makes the system an attractive alternative to the constraints imposed by test glasses in the optical shop.
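A minimal sketch of the synchronous-detection step described above, assuming N intensity frames taken at uniform reference-phase steps of 2*pi/N; the grid size, step count, and synthetic tilted wavefront are illustrative.

```python
import numpy as np

def synchronous_phase(frames):
    """Synchronous detection: with N frames at uniform reference-phase steps
    2*pi*k/N, the first Fourier coefficient of the intensity sequence at each
    pixel gives the wavefront phase."""
    N = frames.shape[0]
    steps = 2 * np.pi * np.arange(N) / N
    s = np.tensordot(np.sin(steps), frames, axes=1)  # sum_k I_k sin(step_k)
    c = np.tensordot(np.cos(steps), frames, axes=1)  # sum_k I_k cos(step_k)
    return np.arctan2(-s, c)                         # wrapped phase per pixel

# Synthetic test: a tilted wavefront sampled over a 32 x 32 grid, N = 8 steps
N, n = 8, 32
_, xx = np.mgrid[0:n, 0:n]
phase = 0.2 * xx                                     # "true" wavefront phase
steps = 2 * np.pi * np.arange(N) / N
frames = 1.0 + np.cos(phase[None] + steps[:, None, None])  # I_k = a + b*cos(phi + step_k)
est = synchronous_phase(frames)
print(np.allclose(np.angle(np.exp(1j * (est - phase))), 0, atol=1e-6))  # True
```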

1,300 citations

Journal ArticleDOI
TL;DR: This paper provides a comprehensive survey of the most important aspects of DL, including enhancements recently added to the field, along with the challenges and suggested solutions that help researchers understand the existing research gaps.
Abstract: In the last few years, the deep learning (DL) computing paradigm has been deemed the gold standard in the machine learning (ML) community. Moreover, it has gradually become the most widely used computational approach in the field of ML, achieving outstanding results on several complex cognitive tasks, matching or even beating human performance. One of the benefits of DL is the ability to learn from massive amounts of data. The DL field has grown rapidly in the last few years and has been used successfully to address a wide range of traditional applications. More importantly, DL has outperformed well-known ML techniques in many domains, e.g., cybersecurity, natural language processing, bioinformatics, robotics and control, and medical information processing, among many others. Although several works have reviewed the state of the art in DL, each of them tackled only one aspect of it, which leads to an overall lack of knowledge about the field. Therefore, in this contribution, we propose a more holistic approach in order to provide a more suitable starting point from which to develop a full understanding of DL. Specifically, this review attempts to provide a more comprehensive survey of the most important aspects of DL, including those enhancements recently added to the field. In particular, this paper outlines the importance of DL and presents the types of DL techniques and networks. It then presents convolutional neural networks (CNNs), the most utilized DL network type, and describes the development of CNN architectures together with their main features, starting with the AlexNet network and closing with the High-Resolution network (HR.Net). Finally, we present the challenges and suggested solutions to help researchers understand the existing research gaps, followed by a list of the major DL applications. Computational tools, including FPGAs, GPUs, and CPUs, are summarized along with a description of their influence on DL. The paper ends with the evolution matrix, benchmark datasets, and a summary and conclusion.

1,084 citations