Author

Iman Roosta

Bio: Iman Roosta is an academic researcher from Isfahan University of Technology. The author has contributed to research in topics: Image fusion & Digital image processing. The author has an h-index of 2 and has co-authored 3 publications receiving 39 citations.

Papers
Journal ArticleDOI
TL;DR: A novel focus measure based on the surface area of regions surrounded by intersection points of input source images is proposed, which has the potential to distinguish focused regions from the blurred ones.
Abstract: Nowadays, image processing and machine vision have become important research fields due to their numerous applications in almost every branch of science. Performance in these fields depends critically on the quality of the input images. Most imaging devices use optical lenses to capture images of a particular scene, but because of the limited depth of field of optical lenses, objects at different distances from the focal point are captured with different sharpness and detail. Important details of the scene may therefore be lost in some regions. Multi-focus image fusion is an effective technique for coping with this problem, and its main challenge is the selection of an appropriate focus measure. In this paper, we propose a novel focus measure based on the surface area of the regions bounded by the intersection points of the input source images, and we show that this measure can distinguish focused regions from blurred ones. In our fusion algorithm, the intersection points of the input images are calculated and the input images are segmented using these points. The surface area of each segment is then used as a measure to determine the focused regions. From this measure we obtain an initial selection map, which is refined by morphological operations. To demonstrate the performance of the proposed method, we compare its results with several competing methods; the results show the effectiveness of our approach.
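
A minimal sketch of this pipeline in Python follows. The per-pixel surface-area approximation sqrt(1 + gx^2 + gy^2) and the use of zero crossings of the image difference as segment boundaries are our illustrative assumptions, not details confirmed by the paper:

```python
import numpy as np
from scipy import ndimage

def surface_area_map(img):
    # Treat intensity as a height field; the per-pixel area element
    # sqrt(1 + gx^2 + gy^2) shrinks toward 1 in blurred (flat) regions.
    gy, gx = np.gradient(img.astype(np.float64))
    return np.sqrt(1.0 + gx ** 2 + gy ** 2)

def fuse_by_surface_area(a, b):
    a = a.astype(np.float64)
    b = b.astype(np.float64)
    diff = a - b
    # Segment the image plane into regions bounded by intersection points
    # (zero crossings of diff): label the positive and non-positive
    # regions separately, then merge the label ids.
    pos, npos = ndimage.label(diff > 0)
    neg, nneg = ndimage.label(diff <= 0)
    labels = np.where(diff > 0, pos, neg + npos)
    # Per-segment surface area of each source image.
    idx = np.arange(1, npos + nneg + 1)
    area_a = ndimage.sum(surface_area_map(a), labels, index=idx)
    area_b = ndimage.sum(surface_area_map(b), labels, index=idx)
    # Initial selection map: within each segment, take the source whose
    # intensity surface has the larger area (i.e., the sharper one).
    choose_a = (area_a >= area_b)[labels - 1]
    # Morphological refinement of the selection map, as in the paper.
    choose_a = ndimage.binary_opening(choose_a, iterations=2)
    choose_a = ndimage.binary_closing(choose_a, iterations=2)
    return np.where(choose_a, a, b)
```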

42 citations

Proceedings ArticleDOI
01 Sep 2015
TL;DR: A new criterion for determining the focused pixels in an image is proposed: the points that have the same intensities in the input images are found, the images are segmented based on those points, and the surface area of the pixels inside every segment is calculated from intensity variations over rows and columns.
Abstract: Multifocus image fusion is an important research area in image processing and machine vision applications. Because optical lenses are used, captured images are usually not focused everywhere: objects near the focal range show clear detail while other objects appear blurry. A multifocus image fusion algorithm takes several images with different focal ranges and combines them to produce an image that is focused everywhere. To identify the focused regions in each input image, spatial-domain and transform-domain methods are generally used; these methods often suffer from artifacts such as blockiness or ringing. In this paper we propose a new criterion for determining the focused pixels in an image. We find the points that have the same intensities in the input images and segment the input images based on them. Subsequently, we calculate the surface area of the pixels inside every segment based on intensity variations over rows and columns, and for each segment we select the input image with the larger surface area as the focused one. Experimental results reveal the superiority of our method over competing algorithms.
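
The "surface area ... based on intensity variations over rows and columns" could be discretized as in the short sketch below; the exact discretization is an assumption on our part, and `segment_surface_area` is a hypothetical helper name:

```python
import numpy as np

def segment_surface_area(img, mask):
    # Accumulate intensity variations along rows and columns, restricted
    # to pixel pairs that both lie inside one segment (boolean mask).
    img = img.astype(np.float64)
    dr = np.abs(np.diff(img, axis=0))  # variation down the columns
    dc = np.abs(np.diff(img, axis=1))  # variation along the rows
    m_r = mask[1:, :] & mask[:-1, :]
    m_c = mask[:, 1:] & mask[:, :-1]
    return dr[m_r].sum() + dc[m_c].sum()
```

For each segment, the source image yielding the larger value would then be selected as the focused one.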

3 citations

Proceedings ArticleDOI
20 May 2014
TL;DR: An energy term is defined and a region's energy is categorized into low, medium, and high levels; based on its energy level, each pixel is classified as either focused or not, and experimental results reveal the superiority of the method over comparable algorithms.
Abstract: Multifocus image fusion plays an important role in image processing and machine vision applications. Captured images are frequently not in focus throughout the image because the optical lenses commonly used to produce them have a limited depth of field. Therefore, only the objects near the focal range of the camera are clear while other parts are blurred. One solution is to capture several images with different focal ranges and combine them to produce an image that is focused everywhere. To identify the focused regions, current implementations of this solution use spatial-domain or transform-domain methods, which often suffer from artifacts such as blockiness or ringing. In this paper we define an energy term and categorize a region's energy into low, medium, and high levels. Based on its energy level, each pixel is then classified as either focused or not, and the output fused image is constructed from the focused pixels of the two source images. Experimental results reveal the superiority of our method over comparable algorithms.
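
The paper defines its own energy term; the sketch below substitutes a common stand-in (a windowed sum of squared gradients) and illustrative tertile thresholds for the low/medium/high categories:

```python
import numpy as np
from scipy import ndimage

def local_energy(img, win=7):
    # Stand-in energy term (an assumption, not the paper's definition):
    # windowed sum of the squared gradient magnitude.
    gy, gx = np.gradient(img.astype(np.float64))
    return ndimage.uniform_filter(gx ** 2 + gy ** 2, size=win)

def fuse_by_energy(a, b, win=7):
    ea, eb = local_energy(a, win), local_energy(b, win)
    # Categorize each pixel's energy into low / medium / high using the
    # tertiles of the pooled energies (thresholds are illustrative).
    t_low, t_high = np.quantile(np.r_[ea.ravel(), eb.ravel()], [1 / 3, 2 / 3])
    level_a = np.digitize(ea, [t_low, t_high])  # 0=low, 1=medium, 2=high
    level_b = np.digitize(eb, [t_low, t_high])
    # Take each pixel from the source judged focused there: the higher
    # energy level wins, with raw energy breaking ties.
    take_a = (level_a > level_b) | ((level_a == level_b) & (ea >= eb))
    return np.where(take_a, a, b)
```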

3 citations


Cited by
Journal ArticleDOI
TL;DR: A first-hand classification of region-based fusion methods is carried out, and a comprehensive list of objective fusion evaluation metrics is compiled to compare the existing methods.
Abstract: Image fusion has been emerging as an important area of research, with applications in surveillance, photography, medical diagnosis, and other fields. Image fusion techniques are developed at three levels: pixel, feature, and decision. Region-based image fusion is a feature-level method with certain advantages: it is less sensitive to noise, more robust, and avoids misregistration. This paper presents a review of region-based fusion approaches. A first-hand classification of region-based fusion methods is carried out, and a comprehensive list of objective fusion evaluation metrics is compiled to compare the existing methods. A detailed analysis is carried out and the results are presented in tabular form. This may encourage researchers to explore this direction further.
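
As one example of the objective fusion evaluation metrics such reviews compile, the sketch below computes the widely used mutual-information metric Q_MI = MI(A;F) + MI(B;F); the choice of this particular metric is ours, for illustration:

```python
import numpy as np

def mutual_information(x, y, bins=256):
    # Mutual information from the joint gray-level histogram.
    joint, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
    p = joint / joint.sum()
    px, py = p.sum(axis=1), p.sum(axis=0)
    nz = p > 0  # avoid log(0)
    return float((p[nz] * np.log2(p[nz] / (px[:, None] * py[None, :])[nz])).sum())

def fusion_mi(src_a, src_b, fused):
    # Q_MI: information the fused image retains from each source.
    return mutual_information(src_a, fused) + mutual_information(src_b, fused)
```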

153 citations

Journal ArticleDOI
TL;DR: The experimental results indicate that the proposed CNN-based network is more accurate, and produces a better decision map without any post-processing, than existing state-of-the-art multi-focus fusion methods that rely on many post-processing algorithms.
Abstract: Convolutional neural network (CNN) based multi-focus image fusion methods have recently attracted enormous attention. They greatly improve the constructed decision map compared with previous state-of-the-art methods operating in the spatial and transform domains. Nevertheless, these methods do not produce a satisfactory initial decision map and must undergo extensive post-processing to achieve one. In this paper, a novel CNN-based method built on ensemble learning is proposed. It is reasonable to use several models and datasets rather than just one: ensemble learning increases diversity among the models and datasets to reduce overfitting on the training data, and an ensemble of CNNs generally outperforms a single CNN. The proposed method also introduces a simple new type of multi-focus image dataset, obtained by rearranging the patches of existing multi-focus datasets, which is very useful for improving accuracy. With this arrangement, three datasets are generated from the COCO dataset: the original patches and their gradients in the vertical and horizontal directions. The proposed network then combines three CNN models, each trained on one of these datasets, to construct the initial segmented decision map. These ideas greatly improve the initial segmented decision map, making it similar to, or even better than, the final decision maps that other CNN-based methods obtain only after applying many post-processing algorithms. Many real multi-focus test images are used in our experiments, and the results are compared using quantitative and qualitative criteria. The experimental results indicate that the proposed CNN-based network is more accurate, and produces a better decision map without post-processing, than existing state-of-the-art multi-focus fusion methods that rely on many post-processing algorithms.
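
The ensemble step could look like the following sketch; the toy gradient helpers, the 2-channel model input, and the sigmoid decision heads are all illustrative assumptions rather than the paper's exact design:

```python
import torch
import torch.nn.functional as F

def gradient_h(x):
    # Horizontal intensity gradient, zero-padded to keep the spatial size.
    return F.pad(x[..., :, 1:] - x[..., :, :-1], (0, 1))

def gradient_v(x):
    # Vertical intensity gradient, zero-padded to keep the spatial size.
    return F.pad(x[..., 1:, :] - x[..., :-1, :], (0, 0, 0, 1))

def ensemble_decision_map(models, a, b):
    # `models` holds three CNNs trained on the original patches and on
    # their horizontal / vertical gradient versions, respectively. Each
    # scores which source is focused at every pixel; averaging the three
    # per-pixel probabilities is the ensemble step.
    views = [(a, b),
             (gradient_h(a), gradient_h(b)),
             (gradient_v(a), gradient_v(b))]
    probs = [torch.sigmoid(m(torch.cat(v, dim=1)))
             for m, v in zip(models, views)]
    return torch.stack(probs).mean(dim=0)  # initial segmented decision map
```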

144 citations

Journal ArticleDOI
TL;DR: A comprehensive overview of existing multi-focus image fusion methods is presented, and a new taxonomy is introduced to classify existing methods into four main categories: transform domain methods, spatial domain methods, methods combining transform domain and spatial domain, and deep learning methods.
Abstract: Multi-focus image fusion is an effective technique for extending the depth of field of optical lenses by creating an all-in-focus image from a set of partially focused images of the same scene. In the last few years, great progress has been achieved in this field along with the rapid development of image representation theories and approaches such as multi-scale geometric analysis, sparse representation, and deep learning. This survey first presents a comprehensive overview of existing multi-focus image fusion methods. To keep up with the latest developments in this field, a new taxonomy is introduced to classify existing methods into four main categories: transform domain methods, spatial domain methods, methods combining transform domain and spatial domain, and deep learning methods. For each category, representative fusion methods are introduced and summarized. Then, a comparative study of 18 representative fusion methods is conducted based on 30 pairs of commonly used multi-focus images and 8 popular objective fusion metrics. All the relevant resources, including source images, objective metrics, and fusion results, are released online to provide a benchmark for future study of multi-focus image fusion. Finally, several major challenges remaining in current research are discussed and some future prospects are put forward.

143 citations

Journal ArticleDOI
TL;DR: A novel focus-region detection method is presented that uses a guided filter to refine the rough focus maps obtained with a mean filter and a difference operator; the resulting decision map is then optimized with a second guided filtering pass to generate the final decision map.
Abstract: Being an efficient method of information fusion, multi-focus image fusion has attracted increasing interest in image processing and computer vision. This paper proposes a multi-focus image fusion method based on focus-region detection using a mean filter and a guided filter. First, a novel focus-region detection method is presented, which uses a guided filter to refine the rough focus maps obtained with a mean filter and a difference operator. Then, an initial decision map is obtained via the pixel-wise maximum rule and optimized with a second guided filtering pass to generate the final decision map. Finally, the fused image is obtained by the pixel-wise weighted-averaging rule with the final decision map. Experimental results demonstrate that the novel focus-region detection method is more robust to different types of noise and more computationally efficient than other focus measures. Furthermore, the proposed fusion method runs efficiently and outperforms several state-of-the-art approaches in both visual effect and objective evaluation.
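
A minimal sketch of this pipeline using OpenCV's guided filter (cv2.ximgproc.guidedFilter, shipped with opencv-contrib-python); grayscale inputs are assumed, and the kernel size, radius, and eps values are illustrative:

```python
import cv2
import numpy as np

def fuse_guided(a, b, ksize=7, radius=8, eps=0.01):
    a32 = a.astype(np.float32) / 255.0
    b32 = b.astype(np.float32) / 255.0
    # Rough focus maps: difference between each source and its
    # mean-filtered version; large values indicate in-focus detail.
    rough_a = np.abs(a32 - cv2.blur(a32, (ksize, ksize)))
    rough_b = np.abs(b32 - cv2.blur(b32, (ksize, ksize)))
    # First guided filtering pass: refine the rough focus maps,
    # guided by the corresponding source images.
    fa = cv2.ximgproc.guidedFilter(a32, rough_a, radius, eps)
    fb = cv2.ximgproc.guidedFilter(b32, rough_b, radius, eps)
    # Initial decision map via the pixel-wise maximum rule, then a
    # second guided filtering pass yields the final (soft) decision map.
    initial = (fa >= fb).astype(np.float32)
    w = np.clip(cv2.ximgproc.guidedFilter(a32, initial, radius, eps), 0, 1)
    # Pixel-wise weighted averaging with the final decision map.
    return (w * a32 + (1.0 - w) * b32) * 255.0
```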

109 citations

Journal ArticleDOI
TL;DR: A novel end-to-end multi-focus image fusion with natural enhancement method based on a deep convolutional neural network (CNN) that delivers fusion and enhancement performance superior to state-of-the-art methods in the presence of multi-focus images with common non-focused areas, anisotropic blur, and misregistration.
Abstract: Common non-focused areas are often present in multi-focus images due to the limited number of focused images, and this factor severely degrades fusion quality. To address this problem, we propose a novel end-to-end multi-focus image fusion with natural enhancement method based on a deep convolutional neural network (CNN). Several end-to-end CNN architectures specifically adapted to this task are first designed and investigated. Based on the observation that low-level feature extraction captures low-frequency content, whereas high-level feature extraction effectively captures high-frequency details, we further combine multi-level outputs so that the most visually distinctive features can be extracted, fused, and enhanced. In addition, the multi-level outputs are supervised simultaneously during training to boost both fusion and enhancement performance. Extensive experiments show that the proposed method delivers fusion and enhancement performance superior to state-of-the-art methods in the presence of multi-focus images with common non-focused areas, anisotropic blur, and misregistration.
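
The multi-level, deeply supervised design could be sketched as follows; the layer sizes, two-level split, and single-channel inputs are illustrative assumptions, not the paper's exact architecture:

```python
import torch
import torch.nn as nn

class MultiLevelFusionNet(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        # Low-level stage: captures low-frequency content.
        self.low = nn.Sequential(
            nn.Conv2d(2, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU())
        # High-level stage: captures high-frequency detail.
        self.high = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU())
        self.head_low = nn.Conv2d(ch, 1, 3, padding=1)   # supervised side output
        self.head_high = nn.Conv2d(ch, 1, 3, padding=1)  # supervised side output
        self.head_final = nn.Conv2d(2 * ch, 1, 3, padding=1)

    def forward(self, a, b):
        f_low = self.low(torch.cat([a, b], dim=1))
        f_high = self.high(f_low)
        # Combine multi-level outputs for the final fused image; all three
        # outputs are returned so a training loop can supervise each one
        # simultaneously (the multi-level supervision described above).
        fused = self.head_final(torch.cat([f_low, f_high], dim=1))
        return fused, self.head_low(f_low), self.head_high(f_high)
```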

93 citations