Signal, Image and Video Processing
Springer Science+Business Media
About: Signal, Image and Video Processing is an academic journal published by Springer Science+Business Media. The journal publishes mainly in the areas of computer science and artificial intelligence. Its ISSN is 1863-1703. Over its lifetime, the journal has published 2614 papers, which have received 27060 citations.
Topics: Computer science, Artificial intelligence, Pattern recognition (psychology), Convolutional neural network, Segmentation
Papers published on a yearly basis
TL;DR: The proposed method fuses source images by a weighted average, with weights computed from detail images that are extracted from the source images using the cross bilateral filter (CBF); it performs well in most cases, and the visual quality of its fused images is superior to that of other methods.
Abstract: Like the bilateral filter (BF), the cross bilateral filter (CBF) considers both the gray-level similarities and the geometric closeness of neighboring pixels without smoothing edges, but it uses one image to compute the kernel and the other to filter, and vice versa. In this paper, it is proposed to fuse source images by a weighted average using weights computed from detail images that are extracted from the source images using the CBF. The performance of the proposed method has been verified on several pairs of multisensor and multifocus images and compared with existing methods, both visually and quantitatively. It is found that none of the methods shows consistent performance across all the performance metrics, but, compared with the others, the proposed method performs well in most cases. Further, the visual quality of the image fused by the proposed method is superior to that of the other methods.
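The fusion pipeline described in this abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the windowed Gaussian CBF and the detail-magnitude weights are simplified stand-ins, since the abstract does not specify the exact weight computation.

```python
import numpy as np

def cross_bilateral_filter(guide, src, radius=2, sigma_s=1.8, sigma_r=25.0):
    """Cross bilateral filter: range kernel taken from `guide`, applied to `src`."""
    h, w = src.shape
    pad_g = np.pad(guide, radius, mode="reflect")
    pad_s = np.pad(src, radius, mode="reflect")
    out = np.zeros((h, w), dtype=float)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys**2 + xs**2) / (2 * sigma_s**2))  # geometric closeness
    for i in range(h):
        for j in range(w):
            g_win = pad_g[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            s_win = pad_s[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            rng = np.exp(-(g_win - guide[i, j])**2 / (2 * sigma_r**2))  # gray-level similarity
            wgt = spatial * rng
            out[i, j] = (wgt * s_win).sum() / wgt.sum()
    return out

def fuse(a, b):
    # Detail image: source minus its cross-bilaterally filtered version,
    # each image filtered with the other as the kernel (guide) image.
    det_a = a - cross_bilateral_filter(b, a)
    det_b = b - cross_bilateral_filter(a, b)
    # Pixel-wise weights from detail strength (a simple stand-in for the
    # paper's weight computation); epsilon avoids division by zero.
    wa, wb = np.abs(det_a) + 1e-6, np.abs(det_b) + 1e-6
    return (wa * a + wb * b) / (wa + wb)
```

On flat regions both detail images vanish and the fusion degenerates to a plain average; near edges, the image with the stronger local detail dominates.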
TL;DR: This paper provides a comprehensive review of SR image and video reconstruction methods developed in the literature and highlights the future research challenges.
Abstract: The key objective of super-resolution (SR) imaging is to reconstruct a higher-resolution image based on a set of images, acquired from the same scene and denoted as ‘low-resolution’ images, to overcome the limitation and/or ill-posed conditions of the image acquisition process for facilitating better content visualization and scene recognition. In this paper, we provide a comprehensive review of SR image and video reconstruction methods developed in the literature and highlight the future research challenges. The SR image approaches reconstruct a single higher-resolution image from a set of given lower-resolution images, and the SR video approaches reconstruct an image sequence with a higher-resolution from a group of adjacent lower-resolution image frames. Furthermore, several SR applications are discussed to contribute some insightful comments on future SR research directions. Specifically, the SR computations for multi-view images and the SR video computation in the temporal domain are discussed.
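One classical instance of the multi-image SR reconstruction this review surveys is iterative back-projection, sketched below under toy assumptions (pre-aligned low-resolution frames and an average-pooling acquisition model); it is an illustrative example, not a method proposed in the review.

```python
import numpy as np

def downsample(img, f):
    """Average-pool downsampling by factor f (toy acquisition model)."""
    h, w = img.shape
    return img[:h - h % f, :w - w % f].reshape(h // f, f, w // f, f).mean(axis=(1, 3))

def upsample(img, f):
    """Nearest-neighbor upsampling by pixel replication."""
    return np.kron(img, np.ones((f, f)))

def iterative_backprojection(lr_frames, f=2, iters=20, step=0.5):
    """Estimate a higher-resolution image from aligned low-resolution frames
    by repeatedly back-projecting the residual between each observed LR frame
    and the LR frame simulated from the current HR estimate."""
    hr = upsample(np.mean(lr_frames, axis=0), f)  # initial HR guess
    for _ in range(iters):
        for lr in lr_frames:
            err = lr - downsample(hr, f)     # residual in LR space
            hr = hr + step * upsample(err, f)  # back-project into HR space
    return hr
```

With this acquisition model the residual shrinks geometrically, so the reconstruction quickly becomes consistent with the observed low-resolution data.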
TL;DR: The best proposal, named DeepBIQ, estimates the image quality by average-pooling the scores predicted on multiple subregions of the original image, having a linear correlation coefficient with human subjective scores of almost 0.91.
Abstract: In this work, we investigate the use of deep learning for distortion-generic blind image quality assessment. We report on different design choices, ranging from the use of features extracted from pre-trained convolutional neural networks (CNNs) as a generic image description, to the use of features extracted from a CNN fine-tuned for the image quality task. Our best proposal, named DeepBIQ, estimates the image quality by average-pooling the scores predicted on multiple subregions of the original image. Experimental results on the LIVE In the Wild Image Quality Challenge Database show that DeepBIQ outperforms the state-of-the-art methods compared, having a linear correlation coefficient with human subjective scores of almost 0.91. These results are further confirmed on four benchmark databases of synthetically distorted images: LIVE, CSIQ, TID2008, and TID2013.
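The average-pooling step described above can be sketched as follows. The per-patch scorer is passed in as a callable stand-in for DeepBIQ's fine-tuned CNN and regressor, which the abstract does not detail; patch size and stride are illustrative assumptions.

```python
import numpy as np

def predict_quality(image, score_patch, patch=224, stride=224):
    """DeepBIQ-style pooling: score multiple subregions of the image with a
    per-patch quality predictor, then average-pool the patch scores.

    `score_patch` is a stand-in for the fine-tuned CNN + quality regressor.
    """
    h, w = image.shape[:2]
    scores = []
    for y in range(0, max(h - patch, 0) + 1, stride):
        for x in range(0, max(w - patch, 0) + 1, stride):
            scores.append(score_patch(image[y:y + patch, x:x + patch]))
    return float(np.mean(scores))  # average-pooled image-level score
```

Averaging over subregions makes the image-level score robust to spatially localized distortions that a single center crop would miss.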
TL;DR: A discrete cosine harmonic wavelet (DCHWT)-based image fusion method is proposed; its performance is found to be similar to that of convolution-based wavelets and similar or superior to that of lifting-based wavelets.
Abstract: The energy compaction and multiresolution properties of wavelets have made the image fusion successful in combining important features such as edges and textures from source images without introducing any artifacts for context enhancement and situational awareness. The wavelet transform is visualized as a convolution of wavelet filter coefficients with the image under consideration and is computationally intensive. The advent of lifting-based wavelets has reduced the computations but at the cost of visual quality and performance of the fused image. To retain the visual quality and performance of the fused image with reduced computations, a discrete cosine harmonic wavelet (DCHWT)-based image fusion is proposed. The performance of DCHWT is compared with both convolution and lifting-based image fusion approaches. It is found that the performance of DCHWT is similar to convolution-based wavelets and superior/similar to lifting-based wavelets. Also, the computational complexity (in terms of additions and multiplications) of the proposed method scores over convolution-based wavelets and is competitive to lifting-based wavelets.
TL;DR: A feed-forward back-propagation neural network is used for classification, with a training algorithm that updates the weight and bias values according to the Levenberg–Marquardt optimization technique, detecting seizures with 100 % classification accuracy.
Abstract: There are numerous neurological disorders such as dementia, headache, traumatic brain injuries, stroke, and epilepsy. Of these, epilepsy is the most prevalent neurological disorder in humans after stroke. The electroencephalogram (EEG) contains valuable information related to the different physiological states of the brain. A scheme is presented for detecting epileptic seizures from EEG data recorded from normal subjects and epileptic patients. The scheme is based on discrete wavelet transform (DWT) analysis and the approximate entropy (ApEn) of EEG signals. Seizure detection is performed in two stages. In the first stage, EEG signals are decomposed by the DWT to calculate approximation and detail coefficients. In the second stage, ApEn values of the approximation and detail coefficients are calculated. Significant differences have been found between the ApEn values of epileptic and normal EEG, allowing seizures to be detected with 100 % classification accuracy using an artificial neural network. The analysis results show that during seizure activity, the EEG has lower ApEn values than normal EEG. This suggests that epileptic EEG is more predictable, or less complex, than normal EEG. In this study, a feed-forward back-propagation neural network has been used for classification, with a training algorithm that updates the weight and bias values according to the Levenberg–Marquardt optimization technique.
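The two feature-extraction stages of this scheme can be sketched as below. A single-level Haar DWT stands in for whatever wavelet and decomposition depth the paper uses, and the ApEn parameters (m = 2, r = 0.2 × std) are common defaults, not values taken from the paper; the neural-network classifier is omitted.

```python
import numpy as np

def haar_dwt(x):
    """Stage 1: one-level Haar DWT giving approximation and detail coefficients."""
    x = np.asarray(x, dtype=float)
    if len(x) % 2:
        x = x[:-1]
    a = (x[0::2] + x[1::2]) / np.sqrt(2)  # approximation (low-pass)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)  # detail (high-pass)
    return a, d

def approx_entropy(u, m=2, r_factor=0.2):
    """Stage 2: approximate entropy ApEn(m, r) with r = r_factor * std(u).
    Lower values indicate a more regular (more predictable) signal."""
    u = np.asarray(u, dtype=float)
    r = r_factor * u.std()
    def phi(m):
        n = len(u) - m + 1
        emb = np.array([u[i:i + m] for i in range(n)])  # m-length templates
        # Chebyshev distance between every pair of templates
        d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
        c = (d <= r).sum(axis=1) / n  # fraction of templates within r (incl. self)
        return np.log(c).mean()
    return phi(m) - phi(m + 1)
```

Consistent with the abstract's finding, a regular signal (like seizure EEG) yields a lower ApEn than an irregular one: the feature vector fed to the classifier would be the ApEn values of the approximation and detail coefficients.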