SciSpace
Author

Shaohai Hu

Bio: Shaohai Hu is an academic researcher from Beijing Jiaotong University. The author has contributed to research in topics: Sparse approximation & Image fusion. The author has an h-index of 1, having co-authored 1 publication receiving 1 citation.

Papers
Journal ArticleDOI
TL;DR: A fusion method based on multi-scale sparse representation for registered multi-focus images (MIF-MsSR) that not only preserves the integrity of the information in the source images, but also outperforms other state-of-the-art methods on both subjective and objective fusion-quality indicators.

4 citations
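The TL;DR gives no algorithmic detail, but a common fusion rule in sparse-representation methods is to keep, patch by patch, the coefficient vector with the larger activity level (L1 norm). The pure-Python sketch below illustrates that generic rule only; the patch decomposition, dictionary learning, and sparse coding of MIF-MsSR itself are assumed to happen elsewhere and are not reproduced here:

```python
def l1_norm(coeffs):
    """Activity level of a sparse coefficient vector: sum of |c|."""
    return sum(abs(c) for c in coeffs)

def fuse_max_l1(coeffs_a, coeffs_b):
    """Per-patch max-L1 fusion rule: for each patch, keep whichever
    source's sparse coefficient vector is more 'active' (larger L1
    norm), on the assumption that higher activity means better focus."""
    return [a if l1_norm(a) >= l1_norm(b) else b
            for a, b in zip(coeffs_a, coeffs_b)]

# Toy example: two patches per source, each already sparse-coded as a
# short coefficient vector (hypothetical values for illustration).
src_a = [[0.9, 0.0, -0.4], [0.1, 0.0, 0.0]]   # patch 1 sharp, patch 2 blurry
src_b = [[0.2, 0.1,  0.0], [0.0, 0.8, 0.3]]   # patch 1 blurry, patch 2 sharp
fused = fuse_max_l1(src_a, src_b)
print(fused)  # keeps src_a's patch 1 and src_b's patch 2
```

In a full pipeline the fused coefficients would then be multiplied back through the learned dictionary and the reconstructed patches re-assembled into the fused image.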


Cited by
26 Jun 2009
TL;DR: Only the funding acknowledgment of this work is available in the record: it was supported by the Navigation Science Foundation of P. R. China (05F07001) and the National Natural Science Foundation of P. R. China (60472081).
Abstract: Supported by the Navigation Science Foundation of P. R. China (05F07001) and the National Natural Science Foundation of P. R. China (60472081).

5 citations

Journal ArticleDOI
TL;DR: In this paper, Zhang et al. found that image dissimilarities are unavoidable because different image sensors have different spectral coverage, and that image fusion should integrate these dissimilarities when they represent spatial improvement.
Abstract: Image fusion has extended from multi-sensor and multi-modal fusion to multi-focus fusion, and increasingly advanced techniques such as deep learning have been integrated into fusion algorithms. However, fusion quality assessment has received far less attention. This paper reflects on the indices commonly used for quantitative assessment and investigates how well they represent fusion quality in terms of spectral preservation and spatial improvement. We found that image dissimilarities are unavoidable due to the differing spectral coverage of image sensors, and that image fusion should integrate these dissimilarities when they represent spatial improvement. Such integration naturally changes pixel values; yet because the indices used to assess spectral preservation measure image dissimilarity, integrating spatial information leads to a low assessed fusion quality. For evaluating spatial improvement, the indices work only when spatial detail has been lost; when spatial detail is gained, they do not register the gain as an improvement. Moreover, this paper draws attention to the image processing procedures involved in fusion, including geo-registration, clipping, and resampling, which change image statistics and thereby influence the assessment whenever statistical indices are used.

1 citation
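The paper's central observation — that similarity-based "spectral preservation" indices penalize any pixel change, including changes that add genuine spatial detail — can be illustrated with a toy example. Pearson correlation is one such commonly used statistical index; the 1-D "band" values below are an illustrative assumption, not data from the paper:

```python
import math

def correlation(x, y):
    """Pearson correlation, a common 'spectral preservation' index:
    it measures similarity, so any pixel change counts against it."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((a - my) ** 2 for a in y))
    return cov / (sx * sy)

# Toy 1-D 'band': the fused signal gains fine spatial texture the
# reference lacks. The similarity index drops below 1 even though the
# change is an improvement -- the paper's point about such indices.
reference = [10, 10, 10, 20, 20, 20]
fused     = [10, 12, 10, 20, 18, 20]   # fine texture added by fusion
print(correlation(reference, fused) < 1.0)  # True: gain scored as 'loss'
```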

Journal ArticleDOI
TL;DR: In this paper, Zhang et al. proposed GIPC-GAN, a gradient-intensity joint proportional constraint generative adversarial network for multi-focus image fusion.
Abstract: To address boundary blurring and information loss in multi-focus image fusion methods based on generated decision maps, this paper proposes a new gradient-intensity joint proportional constraint generative adversarial network for multi-focus image fusion, named GIPC-GAN. First, a labeled multi-focus image dataset is constructed by applying the deep region competition algorithm to a public dataset; this allows the network to be trained and to generate fused images end-to-end, avoiding the boundary errors caused by artificially constructed decision maps. Second, the most meaningful information in the multi-focus fusion task is defined as target intensity and detail gradient, and a jointly constrained loss function based on proportional maintenance of intensity and gradient is proposed; this loss forces the generated image to retain the target intensity, global texture, and local texture of the source images as far as possible while maintaining structural consistency between the fused image and the sources. Third, a GAN is introduced and an adversarial game established between the generator and the discriminator, so that the intensity structure and texture gradient retained in the fused image stay balanced and its detail is further enhanced. Last but not least, experiments are conducted on two public multi-focus datasets and a multi-source multi-focus image sequence dataset, comparing GIPC-GAN with seven other state-of-the-art algorithms. The results show that images fused by the GIPC-GAN model surpass the comparison algorithms in both subjective appearance and objective measures, and largely meet real-time fusion requirements in running efficiency and model parameter count.
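The exact GIPC-GAN loss is not given in the abstract; the sketch below only illustrates the general shape of a jointly weighted intensity-plus-gradient constraint of the kind it describes. The 1-D signals, the forward-difference gradient (the real network would use 2-D image gradients), and the weights w_int / w_grad are all assumptions for illustration:

```python
def grad(x):
    """Forward-difference gradient of a 1-D signal (an assumption;
    a 2-D image version would use e.g. Sobel filters)."""
    return [b - a for a, b in zip(x, x[1:])]

def l1(x, y):
    """Elementwise L1 distance between two equal-length signals."""
    return sum(abs(a - b) for a, b in zip(x, y))

def joint_loss(fused, source, w_int=1.0, w_grad=1.0):
    """Sketch of a gradient-intensity joint constraint: penalize the
    fused signal for drifting from the source in both raw intensity
    and gradient (texture) terms. Weights are illustrative only."""
    return w_int * l1(fused, source) + w_grad * l1(grad(fused), grad(source))

source  = [0, 1, 4, 9, 16]
perfect = list(source)            # identical reconstruction
blurry  = [0, 2, 4, 8, 16]        # intensity and texture both drift
print(joint_loss(perfect, source))      # 0.0
print(joint_loss(blurry, source) > 0)   # True: both terms penalized
```

In the actual network this kind of loss would constrain the generator, while the discriminator's adversarial term pushes the balance between retained intensity structure and texture gradient.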