Journal ArticleDOI

A fractal dimension based framework for night vision fusion

TL;DR: A novel fusion framework is proposed for night-vision applications such as pedestrian recognition, vehicle navigation and surveillance that is consistently superior to the conventional image fusion methods in terms of visual and quantitative evaluations.
Abstract: In this paper, a novel fusion framework is proposed for night-vision applications such as pedestrian recognition, vehicle navigation and surveillance. The underlying concept is to combine low-light visible and infrared imagery into a single output to enhance visual perception. The proposed framework is computationally simple since it is realized entirely in the spatial domain. The core idea is to obtain an initial fused image by averaging all the source images. The initial fused image is then enhanced by selecting the most salient features, guided by the root mean square error (RMSE) and fractal dimension of the visible and infrared images, to obtain the final fused image. Extensive experiments on different scene imagery demonstrate that the framework is consistently superior to conventional image fusion methods in terms of visual and quantitative evaluations.
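The core pipeline described above (average the sources, then refine toward the locally more salient source) can be sketched as follows. This is a minimal NumPy illustration under a loud assumption: local standard deviation stands in for the paper's RMSE/fractal-dimension saliency measure, so this is not the authors' exact algorithm.

```python
import numpy as np

def local_saliency(img, k=3):
    # Stand-in saliency: local standard deviation over a k x k window.
    # The paper derives saliency from RMSE and fractal dimension; this
    # simpler proxy is used here purely for illustration.
    pad = k // 2
    p = np.pad(img.astype(float), pad, mode="reflect")
    windows = np.lib.stride_tricks.sliding_window_view(p, (k, k))
    return windows.std(axis=(-2, -1))

def fuse(visible, infrared):
    # Step 1: initial fused image is the pixel-wise average of the sources.
    initial = (visible.astype(float) + infrared.astype(float)) / 2.0
    # Step 2: refine by blending toward whichever source is locally
    # more salient at each pixel.
    sv, si = local_saliency(visible), local_saliency(infrared)
    w = sv / (sv + si + 1e-12)          # weight of the visible image
    refined = w * visible + (1.0 - w) * infrared
    # Mix the initial average with the saliency-refined image.
    return 0.5 * (initial + refined)
```

The equal mixing of the initial and refined images in the last line is likewise an illustrative choice, not a weight taken from the paper.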
Citations
Journal ArticleDOI
06 Oct 2021
TL;DR: In this paper, a new 64-layer architecture named 4-BSMAB, derived from deep AlexNet, is proposed for pedestrian gender classification; it achieved the highest accuracy of 85.4% and 92% AUC on the MIT dataset.
Abstract: Pedestrian gender classification is one of the key tasks of pedestrian analysis, with practical applications in content-based image retrieval, population statistics, human–computer interaction, health care, multimedia retrieval systems, demographic collection, and visual surveillance. In this research work, gender classification was carried out using a deep learning approach. A new 64-layer architecture named 4-BSMAB, derived from deep AlexNet, is proposed. The model was trained on the CIFAR-100 dataset using a SoftMax classifier, and features were then extracted from the applied datasets with this pre-trained model. The extracted feature set was optimized with the ant colony system (ACS) optimization technique, and various SVM and KNN classifiers performed gender classification on the optimized features. Comprehensive experiments on gender classification datasets show that the proposed model produces better results than existing methods: it attained the highest accuracy (85.4%) and 92% AUC on the MIT dataset, and the best classification results (93% accuracy, 96% AUC) on the PKU-Reid dataset. These outcomes demonstrate that the proposed framework outperforms existing pedestrian gender classification methods and is robust.
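The final classification stage of the pipeline above (optimized feature vectors fed to a KNN classifier) can be illustrated with a minimal sketch. Everything here is hypothetical: the feature vectors and labels are toy data, and the deep feature extraction and ACS selection steps are assumed to have already produced them.

```python
import numpy as np

def knn_predict(train_feats, train_labels, query, k=3):
    # Euclidean distances from the query to every training feature vector.
    d = np.linalg.norm(train_feats - query, axis=1)
    # Indices of the k nearest training samples.
    nearest = np.argsort(d)[:k]
    # Majority vote among the k nearest neighbours decides the class.
    votes = train_labels[nearest]
    return np.bincount(votes).argmax()
```

An SVM would replace the vote with a learned decision boundary; the paper evaluates both families of classifiers on the same optimized features.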

6 citations

Journal ArticleDOI
TL;DR: Infrared and visible image fusion (IVIF) technologies extract complementary information from source images and generate a single fused result, which is widely applied in various high-level visual tasks such as segmentation and object detection.
Abstract: Dear editor, Infrared and visible image fusion (IVIF) technologies extract complementary information from source images and generate a single fused result [1], which is widely applied in various high-level visual tasks such as segmentation and object detection [2].

5 citations

Journal ArticleDOI
01 Mar 2022-Optik
TL;DR: In this paper, a new fusion framework based on the Quaternion Non-Subsampled Contourlet Transform (QNSCT) and guided-filter detail enhancement is designed to address the problems of inconspicuous infrared targets and poor background texture in infrared and visible image fusion.
Abstract: Image fusion is the process of fusing multiple images of the same scene to obtain a single, more informative image for human visual perception. In this paper, a new fusion framework based on the Quaternion Non-Subsampled Contourlet Transform (QNSCT) and guided-filter detail enhancement is designed to address the problems of inconspicuous infrared targets and poor background texture in infrared and visible image fusion. The proposed method uses the quaternion wavelet transform for the first time in place of the traditional Non-Subsampled Pyramid Filter Bank structure in the Non-Subsampled Contourlet Transform (NSCT). The flexible multi-resolution of the quaternion wavelet and the multi-directionality of the NSCT are fully utilized to refine the multi-scale decomposition scheme. The coefficient matrices obtained from the QNSCT are then fused using a weight refinement algorithm based on the guided filter. The fusion scheme is divided into four steps; first, the infrared and visible images are decomposed into multi-directional and multi-scale coefficient matrices using QNSCT. Experimental results show that the proposed algorithm not only extracts important visual information from the source images but also better preserves the texture information in the scene, and the scheme outperforms state-of-the-art methods in both subjective and objective evaluations.

4 citations

Journal ArticleDOI
TL;DR: The detailed analysis and comparative results show that the proposed Cloudlet Federation for Resource Optimization (CFRO), a federated cloudlet model for resource optimization, offers improved performance and more resource elasticity as compared to the conventional cloudlets.
Abstract: The cloud computing paradigm augments the limited resources of mobile devices, but the distance between a remote cloud and the devices brings limited Internet bandwidth and seamless-connectivity challenges. Cloudlet computing based solutions are widely used to address these challenges by bringing the computational facility closer to the user. The ever-growing number of mobile devices, Internet of Things (IoT) sensors, and Information Communication Technology (ICT) infrastructure used for smart cities demands more resources. Existing cloudlet based solutions are unable to manage the ever-increasing demand for power, storage, and computational resources, and therefore forward resource-intensive tasks to a remote cloud, limiting the benefits of cloudlet computing. We present the Cloudlet Federation for Resource Optimization (CFRO), a federated cloudlet model for resource optimization that addresses these resource-scarcity challenges. The proposed model offers scalability, resource collaboration, and robustness. The underlying resource optimization scheme is modeled as a Nested Multi Objective Resource Optimization Problem (NMOROP), and a novel algorithm is proposed to solve it. Detailed analysis and comparative results show that the proposed model offers improved performance and more resource elasticity than the conventional cloudlet model.

4 citations

Book ChapterDOI
01 Jan 2020
TL;DR: This method focuses on the quantitative combination of Canny, LoG, and Sobel (CLS) edge detection operators to detect the edges of gray scale and color fractal images.
Abstract: Research in the field of fractal image processing (FIP) has increased in the current era. Edge detection of fractal images is an important research domain within FIP, and detecting edges accurately in different fractal images is a challenging problem. Several methods have been introduced by different researchers to detect the edges of images; however, no method works suitably under all conditions. In this chapter, an edge detection method is proposed to detect the edges of gray-scale and color fractal images. The method focuses on the quantitative combination of the Canny, LoG, and Sobel (CLS) edge detection operators. The output of the proposed method is produced using MATLAB R2015b and compared with edge detection operators such as Sobel, Prewitt, Roberts, LoG, Canny, and a mathematical morphological operator. The experimental outputs show that the proposed method performs better than the other traditional methods.
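The quantitative-combination idea can be sketched in NumPy as below, under stated simplifications: a plain Laplacian stands in for LoG, Canny is omitted (a library such as OpenCV would normally supply it), and the equal-weight average of normalized edge maps is an assumption, not the chapter's exact weighting.

```python
import numpy as np

def conv2(img, k):
    # 'Same'-size 2-D convolution with reflect padding (no SciPy needed).
    kh, kw = k.shape
    p = np.pad(img, ((kh // 2,), (kw // 2,)), mode="reflect")
    win = np.lib.stride_tricks.sliding_window_view(p, k.shape)
    return np.einsum("ijkl,kl->ij", win, k[::-1, ::-1])

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
LAPLACIAN = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], float)

def cls_edges(img):
    img = img.astype(float)
    # Sobel gradient magnitude from horizontal and vertical responses.
    gx, gy = conv2(img, SOBEL_X), conv2(img, SOBEL_X.T)
    sobel = np.hypot(gx, gy)
    # Plain Laplacian response as a simplified stand-in for LoG.
    log = np.abs(conv2(img, LAPLACIAN))
    def norm(m):
        return m / m.max() if m.max() > 0 else m
    # Quantitative combination: equal-weight average of normalized maps.
    return 0.5 * (norm(sobel) + norm(log))
```

On a vertical step edge both operators respond only at the boundary columns, so the combined map is zero in flat regions and peaks at the edge.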

3 citations

References
Book
01 Jan 1982
TL;DR: This book is a blend of erudition, popularization, and exposition, and the illustrations include many superb examples of computer graphics that are works of art in their own right.
Abstract: "...a blend of erudition (fascinating and sometimes obscure historical minutiae abound), popularization (mathematical rigor is relegated to appendices) and exposition (the reader need have little knowledge of the fields involved) ...and the illustrations include many superb examples of computer graphics that are works of art in their own right." Nature

24,199 citations

Journal ArticleDOI
01 Jul 1984
TL;DR: A blend of erudition (fascinating and sometimes obscure historical minutiae abound), popularization (mathematical rigor is relegated to appendices) and exposition (the reader need have little knowledge of the fields involved) is presented in this article.
Abstract: "...a blend of erudition (fascinating and sometimes obscure historical minutiae abound), popularization (mathematical rigor is relegated to appendices) and exposition (the reader need have little knowledge of the fields involved) ...and the illustrations include many superb examples of computer graphics that are works of art in their own right." Nature

7,560 citations


"A fractal dimension based framework..." refers background in this paper

  • ...In reference to the images, it provides variations in the features resulting from changes in the scale and hence acts as the texture masking function [16], [17]....


Journal ArticleDOI
TL;DR: In this article, an image fusion scheme based on the wavelet transform is presented: the wavelet transforms of the input images are appropriately combined, and the new image is obtained by taking the inverse wavelet transform of the fused wavelet coefficients.
Abstract: The goal of image fusion is to integrate complementary information from multisensor data such that the new images are more suitable for the purpose of human visual perception and computer-processing tasks such as segmentation, feature extraction, and object recognition. This paper presents an image fusion scheme which is based on the wavelet transform. The wavelet transforms of the input images are appropriately combined, and the new image is obtained by taking the inverse wavelet transform of the fused wavelet coefficients. An area-based maximum selection rule and a consistency verification step are used for feature selection. The proposed scheme performs better than the Laplacian pyramid-based methods due to the compactness, directional selectivity, and orthogonality of the wavelet transform. A performance measure using specially generated test images is suggested and is used in the evaluation of different fusion methods, and in comparing the merits of different wavelet transform kernels. Extensive experimental results including the fusion of multifocus images, Landsat and Spot images, Landsat and Seasat SAR images, IR and visible images, and MRI and PET images are presented in the paper.
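The transform-combine-invert scheme described above can be sketched with a one-level Haar wavelet in NumPy. The pixel-wise maximum-magnitude rule below is a simplified stand-in for the paper's area-based maximum selection and consistency verification, and single-level Haar is chosen only for brevity.

```python
import numpy as np

def haar_dwt2(x):
    # One-level 2-D Haar transform (image sides must be even).
    a = (x[0::2] + x[1::2]) / 2.0          # row averages
    d = (x[0::2] - x[1::2]) / 2.0          # row details
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0   # approximation band
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0   # horizontal detail
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0   # vertical detail
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0   # diagonal detail
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    # Exact inverse of haar_dwt2.
    a = np.empty((ll.shape[0], ll.shape[1] * 2))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    x = np.empty((a.shape[0] * 2, a.shape[1]))
    x[0::2], x[1::2] = a + d, a - d
    return x

def wavelet_fuse(img1, img2):
    c1, c2 = haar_dwt2(img1.astype(float)), haar_dwt2(img2.astype(float))
    # Approximation band: average the two sources.
    fused = [(c1[0] + c2[0]) / 2.0]
    # Detail bands: keep the coefficient with the larger magnitude.
    for b1, b2 in zip(c1[1:], c2[1:]):
        fused.append(np.where(np.abs(b1) >= np.abs(b2), b1, b2))
    return haar_idwt2(*fused)
```

Because the forward and inverse transforms are exact inverses, fusing an image with itself returns the image unchanged.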

1,532 citations

Journal ArticleDOI
TL;DR: Experimental results clearly indicate that this metric reflects the quality of visual information obtained from the fusion of input images and can be used to compare the performance of different image fusion algorithms.
Abstract: A measure for objectively assessing the pixel level fusion performance is defined. The proposed metric reflects the quality of visual information obtained from the fusion of input images and can be used to compare the performance of different image fusion algorithms. Experimental results clearly indicate that this metric is perceptually meaningful.
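A much-simplified sketch of the idea behind such an edge-preservation metric: measure how much of each source's gradient energy survives in the fused image. The plain horizontal-difference gradient and the min/sum ratio are illustrative assumptions; the actual metric uses Sobel gradient strength and orientation with perceptual weighting.

```python
import numpy as np

def gradient_preservation(src, fused):
    # Horizontal absolute differences as a crude gradient magnitude.
    gs = np.abs(np.diff(src.astype(float), axis=1))
    gf = np.abs(np.diff(fused.astype(float), axis=1))
    if gs.sum() == 0:
        return 1.0  # a flat source is trivially "preserved"
    # Fraction of the source's gradient energy present in the fusion.
    return float(np.minimum(gs, gf).sum() / gs.sum())
```

A score of 1.0 means every source edge reappears at full strength; 0.0 means the fusion flattened them all.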

1,446 citations

Journal ArticleDOI
TL;DR: The results show that the measure represents how much information is obtained from the input images and is meaningful and explicit.
Abstract: Mutual information is proposed as an information measure for evaluating image fusion performance. The proposed measure represents how much information is obtained from the input images. No assumption is made regarding the nature of the relation between the intensities in both input modalities. The results show that the measure is meaningful and explicit.
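The mutual-information measure can be computed directly from intensity histograms, as sketched below; the bin count is an arbitrary choice for illustration, not a value taken from the paper.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    # Joint histogram of corresponding pixel intensities.
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    # Marginal distributions of each image's intensities.
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    # MI = sum over nonzero cells of p(x,y) * log(p(x,y) / (p(x) p(y))).
    return float((pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz])).sum())

def fusion_mi(fused, src1, src2):
    # Fusion performance: information the fused image shares with each source.
    return mutual_information(fused, src1) + mutual_information(fused, src2)
```

An image shares maximal mutual information with itself and none with a constant image, which matches the intuition that the score counts information carried over from the inputs.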

1,059 citations


"A fractal dimension based framework..." refers background in this paper

  • ...Based on the above definition, the quality of fused image can be expressed as [20], [21]...
