Author

Shadrokh Samavi

Bio: Shadrokh Samavi is an academic researcher from Isfahan University of Technology. The author has contributed to research in topics including digital watermarking and convolutional neural networks. The author has an h-index of 25 and has co-authored 279 publications receiving 2,764 citations. Previous affiliations of Shadrokh Samavi include Jackson State University and the University of Michigan.


Papers
Journal Article
TL;DR: This paper presents a novel multi-focus image fusion method in the spatial domain that utilizes a dictionary learned from local patches of the source images and outperforms existing state-of-the-art methods in terms of visual and quantitative evaluations.
Abstract: Multi-focus image fusion has emerged as a major topic in image processing for generating all-focus images with increased depth of field from multi-focus photographs. Different approaches have been used in the spatial or transform domain for this purpose, but most of them suffer from one or more fusion quality degradations such as blocking artifacts, ringing effects, artificial edges, halo artifacts, contrast decrease, sharpness reduction, and misalignment of the decision map with object boundaries. In this paper, we present a novel multi-focus image fusion method in the spatial domain that utilizes a dictionary learned from local patches of the source images. Sparse representations of a relative sharpness measure over this trained dictionary are pooled together to obtain the corresponding pooled features. Correlating the pooled features with the sparse representations of the input images produces a pixel-level score for the fusion decision map. The final regularized decision map is obtained using Markov random field (MRF) optimization. We also gathered a new color multi-focus image dataset that has more variety than traditional multi-focus image sets. Experimental results demonstrate that our proposed method outperforms existing state-of-the-art methods in terms of visual and quantitative evaluations.
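
As a rough Python illustration of the pipeline described above (not the authors' code), the sketch below learns a patch dictionary with scikit-learn, scores each patch by the energy of its sparse code as a stand-in for the pooled sharpness features, and smooths the decision map with a median filter in place of the MRF optimization; all parameter values and the energy-based scoring are illustrative assumptions.

import numpy as np
from scipy.ndimage import median_filter
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d

def fuse_multifocus(img_a, img_b, patch=8, n_atoms=64):
    # img_a, img_b: grayscale float arrays of identical shape.
    # Learn a dictionary from local patches of both source images.
    patches = np.vstack([
        extract_patches_2d(im, (patch, patch), max_patches=2000).reshape(-1, patch * patch)
        for im in (img_a, img_b)
    ]).astype(float)
    patches -= patches.mean(axis=1, keepdims=True)           # remove per-patch DC
    dico = MiniBatchDictionaryLearning(n_components=n_atoms,
                                       transform_algorithm='omp',
                                       transform_n_nonzero_coefs=4).fit(patches)

    def activity(im):
        # Sparse-code every patch; the l1 energy of the code acts as a sharpness score.
        p = extract_patches_2d(im, (patch, patch)).reshape(-1, patch * patch).astype(float)
        codes = dico.transform(p - p.mean(axis=1, keepdims=True))
        h, w = im.shape[0] - patch + 1, im.shape[1] - patch + 1
        return np.abs(codes).sum(axis=1).reshape(h, w)

    decision = (activity(img_a) > activity(img_b)).astype(np.uint8)
    decision = median_filter(decision, size=9)                # crude stand-in for MRF smoothing
    full = np.pad(decision, patch // 2, mode='edge')[:img_a.shape[0], :img_a.shape[1]]
    return np.where(full.astype(bool), img_a, img_b)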

343 citations

Proceedings Article
01 Aug 2016
TL;DR: Experimental results show that the proposed method for detection of melanoma lesions is superior in terms of diagnostic accuracy in comparison with state-of-the-art methods.
Abstract: Melanoma, the most threatening type of skin cancer, is on the rise. In this paper, an implementation of a deep-learning system on a computer server equipped with a graphics processing unit (GPU) is proposed for the detection of melanoma lesions. Clinical (non-dermoscopic) images are used in the proposed system, which could assist a dermatologist in the early diagnosis of this type of skin cancer. In the proposed system, input clinical images, which may contain illumination and noise effects, are preprocessed to reduce such artifacts. Afterward, the enhanced images are fed to a pre-trained convolutional neural network (CNN), a member of the family of deep learning models. The CNN classifier, trained on a large number of samples, distinguishes between melanoma and benign cases. Experimental results show that the proposed method is superior in terms of diagnostic accuracy in comparison with state-of-the-art methods.
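
A minimal PyTorch sketch of this kind of pipeline is given below; the specific backbone (ResNet-18), the Gaussian blur standing in for the illumination/noise reduction step, and all parameter values are illustrative assumptions rather than the paper's actual configuration.

import torch
import torch.nn as nn
from torchvision import models, transforms

# Preprocessing: a light Gaussian blur stands in for the paper's illumination
# and noise reduction step (illustrative choice), followed by ImageNet scaling.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.GaussianBlur(kernel_size=3),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# Pretrained CNN with its classifier head replaced for melanoma vs. benign.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 2)

def predict_melanoma_probability(pil_image):
    model.eval()
    with torch.no_grad():
        x = preprocess(pil_image).unsqueeze(0)        # add batch dimension
        return model(x).softmax(dim=1)[0, 1].item()   # P(melanoma)

In practice the replaced head would first be fine-tuned on labelled clinical images before prediction.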

221 citations

Journal Article
TL;DR: This paper proposes to use variations in silhouette area, obtained from only one camera with a simple background separation method, and shows that the proposed feature is view invariant.
Abstract: The elderly population is growing in most countries, and many of these seniors live alone at home. Falls are among the most dangerous events that happen to them and may require immediate medical care. Automatic fall detection systems could help older people and patients live independently. Vision-based systems have advantages over wearable devices: these visual systems extract features from video sequences and classify fall and normal activities. However, such features usually depend on the camera's view direction, and using several cameras to solve this problem increases the complexity of the final system. In this paper, we propose to use variations in silhouette area obtained from only one camera. We use a simple background separation method to find the silhouette, and we show that the proposed feature is view invariant. The extracted feature is fed into a support vector machine for classification. Simulation of the proposed method on a publicly available dataset shows promising results.
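
A minimal sketch of this idea, assuming OpenCV's MOG2 background subtractor as the simple background separation step and the frame-to-frame changes of normalized silhouette area over a sliding window as the feature (window length, kernel sizes, and function names are illustrative):

import cv2
import numpy as np
from sklearn.svm import SVC

bg = cv2.createBackgroundSubtractorMOG2(detectShadows=False)

def silhouette_areas(video_path):
    # Total foreground (silhouette) area in pixels, one value per frame.
    cap = cv2.VideoCapture(video_path)
    areas = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = bg.apply(frame)
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
        areas.append(int(np.count_nonzero(mask)))
    cap.release()
    return np.asarray(areas, dtype=float)

def area_variation_features(areas, window=30):
    # View-invariant cue: how strongly the normalized area changes inside each window.
    a = areas / (areas.max() + 1e-6)
    diffs = np.abs(np.diff(a))
    return np.array([diffs[i:i + window] for i in range(len(diffs) - window + 1)])

# X: stacked feature windows from labelled clips; y: 1 for fall, 0 for normal activity.
clf = SVC(kernel='rbf')
# clf.fit(X_train, y_train); clf.predict(X_test)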

157 citations

Proceedings Article
01 Jul 2018
TL;DR: A polyp segmentation method based on a convolutional neural network is proposed, employing a novel image patch selection method in the training phase and effective post-processing of the probability map produced by the network.
Abstract: Colorectal cancer is one of the leading causes of cancer-related death, especially in men. Polyps are one of the main causes of colorectal cancer, and early diagnosis of polyps by colonoscopy can lead to successful treatment. Diagnosis of polyps in colonoscopy videos is a challenging task due to variations in the size and shape of polyps. In this paper, we propose a polyp segmentation method based on a convolutional neural network. Two strategies enhance the performance of the method. First, we perform a novel image patch selection method in the training phase of the network. Second, in the test phase, we perform effective post-processing on the probability map produced by the network. Evaluation of the proposed method using the CVC-ColonDB database shows that it achieves more accurate results than previous colonoscopy video segmentation methods.
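
The post-processing stage alone can be illustrated as follows (the network itself is not reproduced here); the threshold, the morphological opening, and the rule of keeping the most confident connected component are illustrative assumptions, not the paper's exact procedure.

import numpy as np
from scipy import ndimage

def postprocess_probability_map(prob, thresh=0.5, min_area=100):
    # prob: 2-D array of per-pixel polyp probabilities produced by the network.
    mask = prob > thresh
    mask = ndimage.binary_opening(mask, structure=np.ones((5, 5)))  # drop speckle
    labels, n = ndimage.label(mask)
    if n == 0:
        return mask
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    means = ndimage.mean(prob, labels, index=range(1, n + 1))
    means[sizes < min_area] = -1.0        # ignore tiny spurious regions
    if means.max() < 0:
        return np.zeros_like(mask)
    best = int(np.argmax(means)) + 1      # most confident remaining region
    return ndimage.binary_fill_holes(labels == best)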

144 citations

Proceedings Article
01 Dec 2016
TL;DR: The experimental results show that the proposed method for accurate extraction of the lesion region can outperform existing state-of-the-art algorithms in terms of segmentation accuracy.
Abstract: Melanoma is the most aggressive form of skin cancer and is on the rise. There is a growing research trend toward computerized analysis of suspicious skin lesions for malignancy using images captured by digital cameras. Analysis of these images is usually challenging due to disturbing factors such as illumination variations and light reflections from the skin surface. One important stage in the diagnosis of melanoma is segmentation of the lesion region from normal skin. In this paper, a method for accurate extraction of the lesion region is proposed that is based on deep learning approaches. The input image, after being preprocessed to reduce noisy artifacts, is applied to a deep convolutional neural network (CNN). The CNN combines local and global contextual information and outputs a label for each pixel, producing a segmentation mask that shows the lesion region. This mask is further refined by post-processing operations. The experimental results show that our proposed method can outperform existing state-of-the-art algorithms in terms of segmentation accuracy.
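
One common way to realize the combination of local and global context in a per-pixel classifier is a two-stream network such as the hypothetical PyTorch sketch below, which concatenates features from a small local patch and from a downsampled view of a larger neighborhood around the same pixel; this is an illustrative reconstruction, not the authors' architecture.

import torch
import torch.nn as nn

class TwoStreamPixelNet(nn.Module):
    # Per-pixel lesion/skin classifier fusing local and global context (illustrative).
    def __init__(self):
        super().__init__()
        def stream():
            # Small conv stack applied to a 32x32 local patch, or to a 32x32
            # downsampled view of a larger neighborhood (global context).
            return nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Flatten(),
            )
        self.local, self.context = stream(), stream()
        self.head = nn.Sequential(nn.Linear(2 * 32 * 8 * 8, 128), nn.ReLU(),
                                  nn.Linear(128, 2))    # lesion vs. normal skin

    def forward(self, local_patch, context_patch):
        f = torch.cat([self.local(local_patch), self.context(context_patch)], dim=1)
        return self.head(f)

# Sliding the classifier over the patches around every pixel yields the label map,
# which the post-processing step then refines into the final segmentation mask.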

140 citations


Cited by

Journal Article
TL;DR: It is found that deep learning has yet to revolutionize biomedicine or definitively resolve any of the most pressing challenges in the field, but promising advances have been made on the prior state of the art.
Abstract: Deep learning describes a class of machine learning algorithms that are capable of combining raw inputs into layers of intermediate features. These algorithms have recently shown impressive results across a variety of domains. Biology and medicine are data-rich disciplines, but the data are complex and often ill-understood. Hence, deep learning techniques may be particularly well suited to solving problems in these fields. We examine applications of deep learning to a variety of biomedical problems (patient classification, fundamental biological processes, and treatment of patients) and discuss whether deep learning will be able to transform these tasks or if the biomedical sphere poses unique challenges. Following from an extensive literature review, we find that deep learning has yet to revolutionize biomedicine or definitively resolve any of the most pressing challenges in the field, but promising advances have been made on the prior state of the art. Even though improvements over previous baselines have been modest in general, the recent progress indicates that deep learning methods will provide valuable means for speeding up or aiding human investigation. Though progress has been made in linking a specific neural network's prediction to input features, understanding how users should interpret these models to make testable hypotheses about the system under study remains an open challenge. Furthermore, the limited amount of labelled data for training presents problems in some domains, as do legal and privacy constraints on work with sensitive health records. Nonetheless, we foresee deep learning enabling changes at both bench and bedside, with the potential to transform several areas of biology and medicine.

1,491 citations

Journal Article
TL;DR: It is concluded that although various image fusion methods have been proposed, several future directions still exist in different image fusion applications, and research in the image fusion field is expected to continue growing significantly in the coming years.
Abstract: This review provides a survey of various pixel-level image fusion methods according to the adopted transform strategy, summarizes the existing fusion performance evaluation methods and the unresolved problems, and analyzes the major challenges met in different image fusion applications. Pixel-level image fusion is designed to combine multiple input images into a fused image, which is expected to be more informative for human or machine perception than any of the input images. Due to this advantage, pixel-level image fusion has shown notable achievements in remote sensing, medical imaging, and night vision applications. In this paper, we first provide a comprehensive survey of state-of-the-art pixel-level image fusion methods. Then, the existing fusion quality measures are summarized. Next, four major applications, i.e., remote sensing, medical diagnosis, surveillance, and photography, as well as the challenges in pixel-level image fusion applications, are analyzed. Finally, this review concludes that although various image fusion methods have been proposed, several future directions still exist in different image fusion applications. Therefore, research in the image fusion field is still expected to grow significantly in the coming years.

871 citations