Author

Jun Zhang

Other affiliations: Chinese Academy of Sciences
Bio: Jun Zhang is an academic researcher from Beijing Institute of Technology. The author has contributed to research in the topics of computer science and phase retrieval. The author has an h-index of 7 and has co-authored 29 publications receiving 504 citations. Previous affiliations of Jun Zhang include the Chinese Academy of Sciences.

Papers
Journal ArticleDOI
TL;DR: This paper analyzes the ME structure in HEVC and proposes a parallel framework to decouple ME for different partitions on many-core processors and achieves more than 30 and 40 times speedup for 1920 × 1080 and 2560 × 1600 video sequences, respectively.
Abstract: High Efficiency Video Coding (HEVC) provides superior coding efficiency to previous video coding standards at the cost of increased encoding complexity. The complexity increase of the motion estimation (ME) procedure is rather significant, especially when considering the complicated partitioning structure of HEVC. Fully exploiting the coding efficiency brought by HEVC requires a huge amount of computation. In this paper, we analyze the ME structure in HEVC and propose a parallel framework to decouple ME for different partitions on many-core processors. Based on the local parallel method (LPM), we first use a directed acyclic graph (DAG)-based order to parallelize coding tree units (CTUs) and adopt an improved LPM (ILPM) within each CTU (DAGILPM), which exploits CTU-level and prediction unit (PU)-level parallelism. Then, we find that there exist completely independent PUs (CIPUs) and partially independent PUs (PIPUs). When the degree of parallelism (DP) is smaller than the maximum DP of DAGILPM, we process the CIPUs and PIPUs, which further increases the DP. The data dependencies and coding efficiency stay the same as with LPM. Experiments show that on a 64-core system, compared with serial execution, our proposed scheme achieves more than 30 and 40 times speedup for 1920 × 1080 and 2560 × 1600 video sequences, respectively.
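To make the DAG-based scheduling idea concrete, the sketch below (illustrative only, not the paper's implementation) processes coding units in "waves": a unit is dispatched to a worker pool once all of its prerequisite units have finished, which is the essence of ordering CTUs by a dependency DAG on a many-core system. The `parallel_dag_schedule` helper, the toy 2x2 CTU grid, and the wavefront-style left/top dependencies are assumptions chosen for illustration.

```python
# Minimal sketch of DAG-ordered parallel processing (not the paper's code):
# units whose prerequisites are done run concurrently, wave by wave.
from concurrent.futures import ThreadPoolExecutor

def parallel_dag_schedule(nodes, deps, work, max_workers=64):
    """nodes: unit ids; deps: dict id -> set of prerequisite ids;
    work: callable(id) standing in for per-unit motion estimation."""
    remaining = {n: set(deps.get(n, ())) for n in nodes}
    done = set()
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        while remaining:
            # Every unit whose prerequisites are all finished can run now.
            ready = [n for n, d in remaining.items() if d <= done]
            if not ready:
                raise ValueError("cyclic dependency among coding units")
            list(pool.map(work, ready))   # run this wave in parallel
            for n in ready:
                done.add(n)
                del remaining[n]

# Toy usage: a 2x2 grid of CTUs, each depending on its left and top neighbours
# (a wavefront-style dependency pattern).
grid = [(r, c) for r in range(2) for c in range(2)]
deps = {(r, c): {p for p in [(r, c - 1), (r - 1, c)] if min(p) >= 0} for (r, c) in grid}
parallel_dag_schedule(grid, deps, work=lambda ctu: print("ME on CTU", ctu))
```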

366 citations

Proceedings ArticleDOI
10 Dec 2015
TL;DR: A subaperture image streaming scheme is proposed to compress lenselet images, in which rotation scan mapping is adopted to further improve compression efficiency; results show the approach can efficiently compress the redundancy in lenselet images and outperforms traditional image compression methods.
Abstract: Plenoptic cameras capture the light field in a scene with a single shot and produce lenselet images. From a lenselet image, the light field can be reconstructed, with which we can render images with different viewpoints and focal lengths. Because of the large data volume, an efficient image compression scheme for storage and transmission is urgently needed. Containing 4D light field information, lenselet images have much more redundant information than traditional 2D images. In this paper, we propose a subaperture image streaming scheme to compress lenselet images, in which rotation scan mapping is adopted to further improve compression efficiency. The experimental results show our approach can efficiently compress the redundancy in lenselet images and outperforms traditional image compression methods.
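As a rough illustration of the pipeline described above, the sketch below extracts subaperture views from a lenselet image and orders them centre-out before streaming. Interpreting the "rotation scan" as a centre-out sweep over the angular grid is an assumption, and the array sizes are toy values.

```python
# Minimal sketch: each microlens covers a u_res x v_res pixel block, and
# subaperture view (u, v) collects pixel (u, v) from under every microlens.
# Views are then ordered centre-out so neighbouring frames in the stream are
# similar, which is what lets a codec exploit the 4D redundancy.
import numpy as np

def extract_subapertures(lenselet, u_res, v_res):
    """lenselet: 2-D array whose height/width are multiples of u_res/v_res.
    Returns an array of shape (u_res, v_res, H // u_res, W // v_res)."""
    H, W = lenselet.shape
    views = lenselet.reshape(H // u_res, u_res, W // v_res, v_res)
    return views.transpose(1, 3, 0, 2)   # (u, v, rows, cols)

def rotation_scan_order(u_res, v_res):
    """Order angular coordinates by distance from the central view, then by
    angle, approximating a centre-out rotational scan (an assumption here)."""
    cu, cv = (u_res - 1) / 2, (v_res - 1) / 2
    coords = [(u, v) for u in range(u_res) for v in range(v_res)]
    return sorted(coords, key=lambda p: (np.hypot(p[0] - cu, p[1] - cv),
                                         np.arctan2(p[0] - cu, p[1] - cv)))

# Toy usage: a 30x30 lenselet image with 3x3 pixels behind each microlens.
lenselet = np.arange(30 * 30, dtype=np.float32).reshape(30, 30)
views = extract_subapertures(lenselet, 3, 3)
stream = [views[u, v] for u, v in rotation_scan_order(3, 3)]  # pseudo-video frames
```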

85 citations

Journal ArticleDOI
TL;DR: A new hyperspectrometer can "see" more of the electromagnetic spectrum, and with greater detail, than the human eye, according to a joint team led by Prof. Jun Zhang at Beijing Institute of Technology.
Abstract: The quantum dot spectrometer, fabricated by integrating different quantum dots with an image sensor to reconstruct the target spectrum from spectral-coupled measurements, is an emerging and promising hyperspectrometry technology with high resolution and a compact size. The spectral resolution and spectral range of quantum dot spectrometers have been limited by the spectral variety of the available quantum dots and the robustness of algorithmic reconstruction. Moreover, the spectrometer integration of quantum dots also suffers from inherent photoluminescence emission and poor batch-to-batch repeatability. In this work, we developed nonemissive in situ fabricated MA3Bi2X9 and Cs2SnX6 (MA = CH3NH3; X = Cl, Br, I) perovskite-quantum-dot-embedded films (PQDFs) with precisely tunable transmittance spectra for quantum dot spectrometer applications. The resulting PQDFs contain in situ fabricated perovskite nanocrystals with homogeneous dispersion in a polymeric matrix, giving them advantageous features such as high transmittance efficiency and good batch-to-batch repeatability. By integrating a filter array of 361 kinds of PQDFs with a silicon-based photodetector array, we successfully demonstrated the construction of a perovskite quantum dot spectrometer combined with a compressive-sensing-based total-variation optimization algorithm. A spectral resolution of ~1.6 nm was achieved across the broad 250-1000 nm band. The performance of the perovskite quantum dot spectrometer is well beyond that of human eyes in terms of both the spectral range and spectral resolution. This advancement will not only pave the way for using quantum dot spectrometers for practical applications but also significantly impact the development of artificial intelligence products, clinical treatment equipment, scientific instruments, etc.
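The compressive-sensing-based reconstruction mentioned above can be made concrete with a small sketch: each filter/detector pair measures one inner product of the unknown spectrum with that filter's transmittance curve, and a smoothness (TV-style) penalty stabilises the underdetermined inversion. The solver below is a generic smoothed-TV gradient descent, not the authors' algorithm, and the random filter matrix, wavelength grid, and step-size rule are assumptions for illustration.

```python
# Generic sketch of TV-regularised spectral reconstruction (not the authors'
# algorithm). K filters measure y = A x, with row A_k the k-th transmittance
# spectrum and x the unknown spectrum on an N-point wavelength grid (K < N).
import numpy as np

def reconstruct_spectrum(A, y, lam=1e-2, eps=1e-6, steps=2000):
    """Minimise ||A x - y||^2 + lam * sum_i sqrt((x[i+1] - x[i])^2 + eps)."""
    x = np.zeros(A.shape[1])
    lr = 1.0 / (2 * np.linalg.norm(A, 2) ** 2)   # step below the Lipschitz bound
    for _ in range(steps):
        grad = 2 * A.T @ (A @ x - y)             # data-fidelity gradient
        d = np.diff(x)
        w = d / np.sqrt(d * d + eps)             # gradient of the smoothed |d|
        tv_grad = np.zeros_like(x)
        tv_grad[:-1] -= w
        tv_grad[1:] += w
        x -= lr * (grad + lam * tv_grad)
    return np.clip(x, 0, None)                   # spectra are non-negative

# Toy usage: 361 random filters (matching the filter count quoted above)
# sensing a smooth spectrum on a 1 nm grid from 250 to 1000 nm.
rng = np.random.default_rng(0)
wavelengths = np.arange(250, 1001)
true_x = np.exp(-0.5 * ((wavelengths - 600) / 40.0) ** 2)
A = rng.uniform(0.0, 1.0, size=(361, wavelengths.size))
x_hat = reconstruct_spectrum(A, A @ true_x)
```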

77 citations

Journal ArticleDOI
01 Dec 2021
TL;DR: In this article, an efficient large-scale phase retrieval technique termed LPR was proposed, which extends the plug-and-play generalized-alternating-projection framework from real space to nonlinear complex space.
Abstract: High-throughput computational imaging requires efficient processing algorithms to retrieve multi-dimensional and multi-scale information. In computational phase imaging, phase retrieval (PR) is required to reconstruct both amplitude and phase in complex space from intensity-only measurements. The existing PR algorithms suffer from a tradeoff among low computational complexity, robustness to measurement noise, and strong generalization across different modalities. In this work, we report an efficient large-scale phase retrieval technique termed LPR. It extends the plug-and-play generalized-alternating-projection framework from real space to nonlinear complex space. The alternating projection solver and the enhancing neural network are respectively derived to tackle the measurement formation and statistical prior regularization. This framework compensates for the shortcomings of each operator, so as to realize high-fidelity phase retrieval with low computational complexity and strong generalization. We applied the technique to a series of computational phase imaging modalities, including coherent diffraction imaging, coded diffraction pattern imaging, and Fourier ptychographic microscopy. Extensive simulations and experiments validate that the technique outperforms the existing PR algorithms, with as much as 17 dB enhancement in signal-to-noise ratio and more than an order-of-magnitude increase in running efficiency. Moreover, we demonstrate, for the first time, ultra-large-scale phase retrieval at the 8K level (7680 × 4320 pixels) in minute-level time.
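The alternating structure described above can be illustrated with a stripped-down plug-and-play loop: a measurement projection enforces the recorded Fourier magnitude while keeping the current phase estimate, and a prior step is applied in image space. In LPR the prior step is an enhancing neural network; the Gaussian smoothing below is only a stand-in, and the plain Fourier-magnitude forward model is an illustrative simplification of the coded and ptychographic modalities treated in the paper.

```python
# Minimal sketch of plug-and-play alternating projection (not the LPR code):
# alternate a Fourier-magnitude projection with an image-space prior step.
import numpy as np
from scipy.ndimage import gaussian_filter

def pnp_alternating_projection(measured_mag, n_iter=200, sigma=0.5):
    """measured_mag: |FFT(x)| of the unknown (real, non-negative) image."""
    rng = np.random.default_rng(0)
    x = rng.uniform(size=measured_mag.shape)            # random initial guess
    for _ in range(n_iter):
        X = np.fft.fft2(x)
        X = measured_mag * np.exp(1j * np.angle(X))     # magnitude projection
        x = np.real(np.fft.ifft2(X))
        x = gaussian_filter(np.clip(x, 0, None), sigma) # plug-in "denoiser" prior
    return x

# Toy usage on a synthetic 64x64 image.
truth = np.zeros((64, 64))
truth[20:44, 24:40] = 1.0
estimate = pnp_alternating_projection(np.abs(np.fft.fft2(truth)))
```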

40 citations


Cited by
01 Jan 2004
TL;DR: Comprehensive and up-to-date, this book includes essential topics that either reflect practical significance or are of theoretical importance and describes numerous important application areas such as image based rendering and digital libraries.
Abstract: From the Publisher: The accessible presentation of this book gives both a general view of the entire computer vision enterprise and sufficient detail to build useful applications. Readers learn techniques that have proven useful through first-hand experience, along with a wide range of mathematical methods. A CD-ROM included with every copy of the text contains source code for programming practice, color images, and illustrative movies. Comprehensive and up-to-date, the book covers essential topics of practical significance or theoretical importance, discussed in substantial and increasing depth. Application surveys describe numerous important application areas such as image-based rendering and digital libraries. Many important algorithms are broken down and illustrated in pseudocode. The book is appropriate for use by engineers as a comprehensive reference to the computer vision enterprise.

3,627 citations

01 Jan 2010
TL;DR: Spatial light interference microscopy reveals the intrinsic contrast of cell structures and renders quantitative optical path-length maps across the sample, which may prove instrumental in impacting the light microscopy field at a large scale.
Abstract: We present SLIM, a new optical method measuring optical pathlength changes of 0.3 nm spatially and 0.03 nm temporally. SLIM combines two classic ideas in light imaging: Zernike's phase contrast microscopy and Gabor's holography.
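For scale, the path-length figures quoted above relate to phase through OPL = φ·λ/(2π); the snippet below evaluates this at an assumed 550 nm wavelength (not stated in the abstract) to show that 0.3 nm corresponds to a phase shift of only a few milliradians.

```python
# Phase-to-path-length conversion; the 550 nm wavelength is an assumption
# used only to illustrate the scale of the quoted 0.3 nm sensitivity.
import math

def pathlength_nm(phi_rad, lam_nm=550.0):
    return phi_rad * lam_nm / (2 * math.pi)

print(round(pathlength_nm(0.0034), 2))  # ~0.3 nm from a ~3.4 mrad phase shift
```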

445 citations

Journal ArticleDOI
TL;DR: A comprehensive overview and discussion of research in light field image processing, including basic light field representation and theory, acquisition, super-resolution, depth estimation, compression, editing, processing algorithms for light field display, and computer vision applications of light field data are presented.
Abstract: Light field imaging has emerged as a technology that allows capturing richer visual information from our world. As opposed to traditional photography, which captures a 2D projection of the light in the scene by integrating over the angular domain, light fields collect radiance from rays in all directions, demultiplexing the angular information lost in conventional photography. On the one hand, this higher dimensional representation of visual data offers powerful capabilities for scene understanding, and substantially improves the performance of traditional computer vision problems such as depth sensing, post-capture refocusing, segmentation, video stabilization, material classification, etc. On the other hand, the high dimensionality of light fields also brings up new challenges in terms of data capture, data compression, content editing, and display. Taking these two elements together, research in light field image processing has become increasingly popular in the computer vision, computer graphics, and signal processing communities. In this paper, we present a comprehensive overview and discussion of research in this field over the past 20 years. We focus on all aspects of light field image processing, including basic light field representation and theory, acquisition, super-resolution, depth estimation, compression, editing, processing algorithms for light field display, and computer vision applications of light field data.
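One capability the survey discusses, post-capture refocusing, reduces to a shift-and-add over the angular views of the 4D light field L(u, v, s, t); the sketch below shows that operation on a toy array, with the integer-pixel shift model and all sizes chosen purely for illustration.

```python
# Minimal sketch of shift-and-add refocusing from a 4D light field.
import numpy as np

def refocus(light_field, alpha):
    """light_field: array of shape (U, V, H, W) of angular views;
    alpha: focus parameter, where alpha = 0 gives the plain average."""
    U, V, H, W = light_field.shape
    cu, cv = (U - 1) / 2, (V - 1) / 2
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            # Shift each view proportionally to its angular offset, then sum.
            du = int(round(alpha * (u - cu)))
            dv = int(round(alpha * (v - cv)))
            out += np.roll(light_field[u, v], shift=(du, dv), axis=(0, 1))
    return out / (U * V)

# Toy usage: refocus a random 5x5-view light field of 32x32-pixel images.
lf = np.random.default_rng(1).uniform(size=(5, 5, 32, 32))
refocused = refocus(lf, alpha=1.5)
```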

412 citations

Journal ArticleDOI
TL;DR: A deep HSI sharpening method is presented for the fusion of an LR-HSI with an HR-MSI, which directly learns the image priors via deep convolutional neural network-based residual learning.
Abstract: Hyperspectral image (HSI) sharpening, which aims at fusing an observable low spatial resolution (LR) HSI (LR-HSI) with a high spatial resolution (HR) multispectral image (HR-MSI) of the same scene to acquire an HR-HSI, has recently attracted much attention. Most of the recent HSI sharpening approaches are based on image prior modeling, which is usually sensitive to parameter selection and time-consuming. This paper presents a deep HSI sharpening method (named DHSIS) for the fusion of an LR-HSI with an HR-MSI, which directly learns the image priors via deep convolutional neural network-based residual learning. The DHSIS method incorporates the learned deep priors into the LR-HSI and HR-MSI fusion framework. Specifically, we first initialize the HR-HSI from the fusion framework by solving a Sylvester equation. Then, we map the initialized HR-HSI to the reference HR-HSI via deep residual learning to learn the image priors. Finally, the learned image priors are returned to the fusion framework to reconstruct the final HR-HSI. Experimental results demonstrate the superiority of the DHSIS approach over existing state-of-the-art HSI sharpening approaches in terms of reconstruction accuracy and running time.
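The closed-form initialisation mentioned above relies on the fact that the fusion objective reduces to a Sylvester equation C1·X + X·C2 = C3, which has a direct solver. The snippet below only demonstrates that solve step on random stand-in matrices; the actual C1, C2, C3 in the paper are built from the spectral response, blur/downsampling operators, and the learned prior, which are not reproduced here.

```python
# Demonstration of the Sylvester-equation solve used for initialisation;
# the matrices are random stand-ins, not the paper's operators.
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(0)
bands, pixels = 31, 64                       # illustrative HSI sizes
C1 = rng.standard_normal((bands, bands)) + bands * np.eye(bands)     # well-posed
C2 = rng.standard_normal((pixels, pixels)) + pixels * np.eye(pixels)
C3 = rng.standard_normal((bands, pixels))

X0 = solve_sylvester(C1, C2, C3)             # initial HR-HSI estimate (matrix form)
print(np.allclose(C1 @ X0 + X0 @ C2, C3))    # True: verifies the solve
```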

302 citations

Proceedings Article
01 Jan 2012
TL;DR: Quantitative phase imaging, i.e., measuring the map of pathlength shifts due to the specimen of interest, has been developing rapidly over the past decade; its main methods and exciting applications to biomedicine are reviewed.
Abstract: Quantitative phase imaging, i.e., measuring the map of pathlength shifts due to the specimen of interest, has been developing rapidly over the past decade. The main methods and exciting applications to biomedicine will be reviewed.

297 citations