Author

Jianbo Shao

Bio: Jianbo Shao is an academic researcher from the University of Arizona. The author has contributed to research in topics: Image restoration & Fiber bundle. The author has an h-index of 5 and has co-authored 9 publications receiving 108 citations.

Papers
Journal ArticleDOI
TL;DR: A new approach is proposed in which the task of phase unwrapping is recast as a multi-class classification problem and an efficient segmentation network is introduced to identify the classes; a noise-to-noise denoising network is integrated to preprocess the noisy wrapped phase.
Abstract: The interferometry technique is commonly used to obtain the phase information of an object in optical metrology. The obtained wrapped phase is subject to a 2π ambiguity. To remove the ambiguity and obtain the correct phase, phase unwrapping is essential. Conventional phase unwrapping approaches are time-consuming and noise sensitive. To address these issues, we propose a new approach in which we recast the task of phase unwrapping as a multi-class classification problem and introduce an efficient segmentation network to identify the classes. Moreover, a noise-to-noise denoising network is integrated to preprocess the noisy wrapped phase. We have demonstrated the proposed method with simulated data and in a real interferometric system.
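
The classification view described above can be summarized in a short sketch: a segmentation network predicts an integer wrap count for every pixel, and the continuous phase is recovered by adding the corresponding multiple of 2π back to the wrapped phase. The helper below is a minimal illustration of that recovery step, not the authors' implementation; the rounded counts merely stand in for the network output.

```python
# Minimal sketch of recovering the unwrapped phase once per-pixel wrap counts
# have been classified: psi = phi_wrapped + 2*pi*k. Names are illustrative.
import numpy as np

def unwrap_from_classes(phi_wrapped: np.ndarray, wrap_counts: np.ndarray) -> np.ndarray:
    """Add the predicted integer multiples of 2*pi back to the wrapped phase."""
    return phi_wrapped + 2.0 * np.pi * wrap_counts

# Toy check on a smooth ramp: wrap it, recover the counts, and undo the wrapping.
true_phase = np.linspace(0.0, 6.0 * np.pi, 256).reshape(16, 16)
wrapped = np.angle(np.exp(1j * true_phase))                # wrapped into (-pi, pi]
counts = np.round((true_phase - wrapped) / (2.0 * np.pi))  # stand-in for network output
assert np.allclose(unwrap_from_classes(wrapped, counts), true_phase)
```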

78 citations

Journal ArticleDOI
TL;DR: Experimental results show that the proposed convolutional neural network for image demosaicing outperforms other state-of-the-art methods by a large margin in terms of quantitative measures and visual quality.
Abstract: We propose a polarization demosaicing convolutional neural network to address image demosaicing, the last unsolved problem in microgrid polarimeters. This network learns an end-to-end mapping between the mosaic images and full-resolution ones. Skip connections and a customized loss function are used to boost the performance. Experimental results show that our proposed network outperforms other state-of-the-art methods by a large margin in terms of quantitative measures and visual quality.
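
As a rough illustration of the kind of end-to-end mapping with skip connections described above, the sketch below maps a single-channel microgrid mosaic to four full-resolution polarization channels. It is an assumed toy architecture, not the network from the paper; layer widths and depth are illustrative.

```python
# Toy demosaicing network with a skip connection (assumed layout): one-channel
# mosaic in, four full-resolution polarization channels (0/45/90/135 deg) out.
import torch
import torch.nn as nn

class DemosaicNet(nn.Module):
    def __init__(self, width: int = 32):
        super().__init__()
        self.head = nn.Conv2d(1, width, kernel_size=3, padding=1)
        self.body = nn.Sequential(
            nn.Conv2d(width, width, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        )
        self.tail = nn.Conv2d(width, 4, kernel_size=3, padding=1)

    def forward(self, mosaic: torch.Tensor) -> torch.Tensor:
        feat = self.head(mosaic)
        feat = feat + self.body(feat)   # skip connection around the conv block
        return self.tail(feat)

net = DemosaicNet()
full_res = net(torch.randn(1, 1, 64, 64))   # -> shape (1, 4, 64, 64)
```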

48 citations

Journal ArticleDOI
TL;DR: Experimental results show that the proposed unsupervised deep network outperforms other state-of-the-art methods in terms of visual quality and quantitative measurement.
Abstract: Image fusion is the key step to improve the performance of object detection in polarization images. We propose an unsupervised deep network to address the polarization image fusion issue. The network learns an end-to-end mapping from intensity and degree-of-linear-polarization images to fused images, without ground-truth fused images. A customized architecture and loss function are designed to boost performance. Experimental results show that our proposed network outperforms other state-of-the-art methods in terms of visual quality and quantitative measurement.
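
Because no ground-truth fused image exists, the training signal has to come from the inputs themselves. The function below sketches one common form of unsupervised fusion loss, combining intensity fidelity to both sources with preservation of their strongest gradients; it is an assumed stand-in, not the customized loss from the paper.

```python
# Hedged sketch of an unsupervised fusion objective: stay close to both inputs
# and preserve the stronger of the two source gradients at every pixel.
import torch
import torch.nn.functional as F

def fusion_loss(fused, intensity, dolp, w_int=1.0, w_grad=1.0):
    # Intensity fidelity to both the intensity (S0) and DoLP inputs.
    l_int = F.mse_loss(fused, intensity) + F.mse_loss(fused, dolp)

    def grads(x):  # forward differences along width and height
        return x[..., :, 1:] - x[..., :, :-1], x[..., 1:, :] - x[..., :-1, :]

    fgx, fgy = grads(fused)
    igx, igy = grads(intensity)
    dgx, dgy = grads(dolp)
    # Match the fused gradient magnitude to the stronger source gradient.
    l_grad = F.l1_loss(fgx.abs(), torch.maximum(igx.abs(), dgx.abs())) + \
             F.l1_loss(fgy.abs(), torch.maximum(igy.abs(), dgy.abs()))
    return w_int * l_int + w_grad * l_grad

loss = fusion_loss(torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64))
```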

37 citations

Journal ArticleDOI
TL;DR: A deep learning-based restoration method to remove honeycomb patterns and improve resolution for fiber bundle (FB) images is proposed and evaluated with data obtained from lens tissues and human histological specimens using both objective and subjective measures.
Abstract: We propose a deep learning-based restoration method to remove honeycomb patterns and improve resolution for fiber bundle (FB) images. By building and calibrating a dual-sensor imaging system, we capture FB images and corresponding ground truth data to train the network. Images without fiber bundle fixed patterns are restored from raw FB images as direct inputs, and spatial resolution is significantly enhanced for the trained sample type. We also construct the brightness mapping between the two image types for the effective use of all data, providing the ability to output images of the expected brightness. We evaluate our framework with data obtained from lens tissues and human histological specimens using both objective and subjective measures.
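
One detail mentioned above is the brightness mapping between the two image types. The abstract does not spell out its form, so the snippet below assumes the simplest version, a global gain and offset fitted by least squares between co-registered image pairs; the actual mapping in the paper may be more elaborate.

```python
# Assumed minimal brightness mapping: a global gain and offset fitted by
# least squares between co-registered fiber-bundle and reference images.
import numpy as np

def fit_brightness_mapping(src: np.ndarray, dst: np.ndarray):
    """Least-squares gain/offset so that gain * src + offset approximates dst."""
    A = np.stack([src.ravel(), np.ones(src.size)], axis=1)
    gain, offset = np.linalg.lstsq(A, dst.ravel(), rcond=None)[0]
    return gain, offset

fb_image = np.random.rand(128, 128)           # stand-in fiber-bundle capture
reference = 1.7 * fb_image + 0.05             # stand-in second-sensor brightness
gain, offset = fit_brightness_mapping(fb_image, reference)   # ~1.7, ~0.05
```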

35 citations

Journal ArticleDOI
TL;DR: A new framework is proposed to jointly improve spatial resolution and remove fixed structural patterns for coherent fiber bundle imaging systems, based on inverting a principled forward model that uses a point spread function and a smoothing prior.
Abstract: We propose a new framework to jointly improve spatial resolution and remove fixed structural patterns for coherent fiber bundle imaging systems, based on inverting a principled forward model. The forward model maps a high-resolution representation to multiple images modeling random probe motions. We then apply a point spread function to simulate low-resolution fiber bundle image capture. Our forward model also uses a smoothing prior. We compute a maximum a posteriori (MAP) estimate of the high-resolution image from one or more low-resolution images using conjugate gradient descent. Unique aspects of our approach include (1) supporting a variety of applicable transformations; (2) applying principled forward modeling and MAP estimation to this domain. We test our method on data synthesized from the USAF target, data captured from a transmissive USAF target, and data from lens tissue. In the case of the USAF target and 16 low-resolution captures, spatial resolution is enhanced by a factor of 2.8.
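
A stripped-down sketch of the MAP formulation described above follows, under simplifying assumptions: a single low-resolution capture, a Gaussian blur standing in for the point spread function, and a quadratic smoothness prior, minimized with SciPy's nonlinear conjugate gradient. The paper's full model, with multiple captures and estimated probe motions, is considerably richer.

```python
# Simplified MAP restoration: one observation, Gaussian blur as the PSF,
# quadratic smoothness prior, minimized with nonlinear conjugate gradient.
# All parameter values are illustrative assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.optimize import minimize

def neg_log_posterior(x_flat, y, shape, sigma=1.0, lam=0.1):
    x = x_flat.reshape(shape)
    data = np.sum((gaussian_filter(x, sigma) - y) ** 2)                 # ||Hx - y||^2
    smooth = np.sum(np.diff(x, axis=0) ** 2) + np.sum(np.diff(x, axis=1) ** 2)
    return data + lam * smooth

truth = np.zeros((16, 16))
truth[4:12, 4:12] = 1.0                                                 # toy scene
observed = gaussian_filter(truth, 1.0) + 0.01 * np.random.randn(16, 16)
result = minimize(neg_log_posterior, observed.ravel(),
                  args=(observed, truth.shape), method="CG",
                  options={"maxiter": 30})
x_map = result.x.reshape(truth.shape)                                   # MAP estimate
```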

20 citations


Cited by
Journal ArticleDOI
TL;DR: Deep-learning-enabled optical metrology is a data-driven approach that has already provided numerous alternative solutions to many challenging problems in this field with better performance, as discussed by the authors.
Abstract: With the advances in scientific foundations and technological implementations, optical metrology has become a versatile problem-solving backbone in manufacturing, fundamental research, and engineering applications, such as quality control, nondestructive testing, experimental mechanics, and biomedicine. In recent years, deep learning, a subfield of machine learning, has emerged as a powerful tool to address problems by learning from data, largely driven by the availability of massive datasets, enhanced computational power, fast data storage, and novel training algorithms for deep neural networks. It is currently attracting extensive attention for its use in the field of optical metrology. Unlike the traditional “physics-based” approach, deep-learning-enabled optical metrology is a “data-driven” approach, which has already provided numerous alternative solutions to many challenging problems in this field with better performance. In this review, we present an overview of the current status and the latest progress of deep-learning technologies in the field of optical metrology. We first briefly introduce both traditional image-processing algorithms in optical metrology and the basic concepts of deep learning, followed by a comprehensive review of its applications in various optical metrology tasks, such as fringe denoising, phase retrieval, phase unwrapping, subset correlation, and error compensation. The open challenges faced by the current deep-learning approach in optical metrology are then discussed. Finally, the directions for future research are outlined.

165 citations

Journal ArticleDOI
TL;DR: The proposed novel deep learning framework for unwrapping the phase does not require post-processing, is highly robust to noise, accurately unwraps the phase even at the severe noise level of −5 dB, and can unwrap the phase maps even at relatively high dynamic ranges.
Abstract: Phase unwrapping is an ill-posed classical problem in many practical applications of significance, such as 3D profiling through fringe projection, synthetic aperture radar, and magnetic resonance imaging. Conventional phase unwrapping techniques estimate the phase either by integrating along a confined path (referred to as path-following methods) or by minimizing the energy function between the wrapped phase and the approximated true phase (referred to as minimum-norm approaches). However, these conventional methods face critical challenges such as error accumulation and high computational time, and often fail under low-SNR conditions. To address these problems, this paper proposes a novel deep learning framework for unwrapping the phase, referred to as “PhaseNet 2.0”. The phase unwrapping problem is formulated as a dense classification problem, and a fully convolutional DenseNet-based neural network is trained to predict the wrap count at each pixel from the wrapped phase maps. To train this network, we simulate arbitrary shapes and propose a new loss function that integrates the residues by minimizing the difference of gradients and also uses an $L_{1}$ loss to overcome the class imbalance problem. The proposed method, unlike our previous approach PhaseNet, does not require post-processing, is highly robust to noise, accurately unwraps the phase even at the severe noise level of −5 dB, and can unwrap phase maps even at relatively high dynamic ranges. Simulation results from the proposed framework are compared with different classes of existing phase unwrapping methods for varying SNR values and discontinuity, and these evaluations demonstrate the advantages of the proposed framework. We also demonstrate the generality of the proposed method on 3D reconstruction of synthetic CAD models that have diverse structures and finer geometric variations. Finally, the proposed method is applied to real data for 3D profiling of objects using the fringe projection technique and digital holographic interferometry. The proposed framework achieves significant improvements over existing methods while being highly efficient, with interactive frame rates on modern GPUs.
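
The loss described above combines a residue-suppressing gradient term with an $L_{1}$ term. The sketch below captures that combination on continuous-valued wrap-count maps; it is a hedged approximation for illustration, not the exact PhaseNet 2.0 loss, which is defined for dense classification outputs.

```python
# Hedged approximation of a residue-aware unwrapping loss: an L1 term on the
# wrap-count map plus an L1 penalty on the difference of spatial gradients.
import torch
import torch.nn.functional as F

def unwrap_loss(pred_counts, true_counts, w_grad=1.0):
    l1 = F.l1_loss(pred_counts, true_counts)
    gx = F.l1_loss(pred_counts[..., :, 1:] - pred_counts[..., :, :-1],
                   true_counts[..., :, 1:] - true_counts[..., :, :-1])
    gy = F.l1_loss(pred_counts[..., 1:, :] - pred_counts[..., :-1, :],
                   true_counts[..., 1:, :] - true_counts[..., :-1, :])
    return l1 + w_grad * (gx + gy)

loss = unwrap_loss(torch.rand(1, 1, 32, 32), torch.randint(0, 4, (1, 1, 32, 32)).float())
```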

85 citations

Journal ArticleDOI
TL;DR: Experiments demonstrate that the proposed interpolation method outperforms state-of-the-art techniques both quantitatively and visually while reducing nonconformities caused by high-frequency energy.
Abstract: Demand for division-of-focal-plane (DoFP) polarization imaging technology is growing rapidly as nanofabrication technologies mature. For real-time polarization imaging, a DoFP polarimeter often trades off its spatial resolution, which may cause instantaneous field-of-view (IFoV) errors. To deal with such problems, interpolation methods are often used to fill in the missing polarization information. This paper presents an interpolation technique using Newton's polynomial for DoFP polarimeter demosaicking. The interpolation is performed in the polarization difference domain with the interpolation error taken into consideration. The proposed method uses an edge classifier based on polarization difference and a fusion scheme to recover more accurate boundary features. Experiments using both synthetic and real DoFP images in the visible and long-wave infrared spectra demonstrate that the proposed interpolation method outperforms state-of-the-art techniques quantitatively as well as visually while reducing nonconformities caused by high-frequency energy.
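
The core numerical building block named above is Newton's interpolating polynomial. The one-dimensional routine below evaluates it from divided differences to estimate a missing sample between same-channel microgrid pixels; the difference-domain processing, edge classification, and fusion scheme from the paper are not reproduced here.

```python
# One-dimensional Newton divided-difference interpolation, used here to
# estimate a missing sample between same-polarization microgrid pixels.
import numpy as np

def newton_interpolate(xs, ys, x):
    """Evaluate the Newton-form polynomial through (xs, ys) at x."""
    xs = np.asarray(xs, dtype=float)
    coef = np.array(ys, dtype=float)
    for j in range(1, len(xs)):                       # divided differences, in place
        coef[j:] = (coef[j:] - coef[j - 1:-1]) / (xs[j:] - xs[:-j])
    value = coef[-1]
    for k in range(len(xs) - 2, -1, -1):              # Horner-style evaluation
        value = value * (x - xs[k]) + coef[k]
    return value

# Same-polarization samples sit two pixels apart on a microgrid; estimate the gap.
estimate = newton_interpolate([0, 2, 4, 6], [10.0, 12.0, 11.0, 9.0], 3)
```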

56 citations

Journal ArticleDOI
TL;DR: A label-enhanced and patch-based deep learning phase retrieval approach is proposed that can achieve fast and accurate phase retrieval using only several fringe patterns as the training dataset.
Abstract: We propose a label-enhanced and patch-based deep learning phase retrieval approach that can achieve fast and accurate phase retrieval using only several fringe patterns as the training dataset. To the best of our knowledge, it is the first time that the advantages of the label enhancement and patch strategy for deep learning-based phase retrieval have been demonstrated in fringe projection. In the proposed method, the enhanced labeled data in the training dataset are designed so that the deep neural network (DNN) learns the mapping between the input fringe pattern and the output enhanced fringe part. Moreover, the training data are cropped into small overlapping patches to expand the training samples for the DNN. The performance of the proposed approach is verified on experimental projection fringe patterns, with applications in dynamic fringe projection 3D measurement.
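
The patch strategy mentioned above turns a handful of fringe images into many training samples. The snippet below is a minimal illustration of that cropping step with overlap; patch size and stride are assumed values, not the paper's.

```python
# Minimal illustration of expanding a few fringe images into many training
# samples by cropping overlapping patches.
import numpy as np

def extract_patches(image: np.ndarray, patch: int = 64, stride: int = 32):
    """Return a list of overlapping square patches covering the image."""
    h, w = image.shape
    patches = []
    for top in range(0, h - patch + 1, stride):
        for left in range(0, w - patch + 1, stride):
            patches.append(image[top:top + patch, left:left + patch])
    return patches

fringe = np.random.rand(256, 256)             # stand-in fringe pattern
samples = extract_patches(fringe)             # 7 x 7 = 49 patches from one image
```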

49 citations