Journal ArticleDOI

A Comparative Analysis on Image Quality Assessment for Real Time Satellite Images

S. Rajkumar1, G. Malathi1
23 Sep 2016-Indian journal of science and technology (The Indian Society of Education and Environment)-Vol. 9, Iss: 34, pp 1-11
TL;DR: The characteristics of different quality metrics are summarized, and the quality metrics appropriate to various distortions are identified.
Abstract: Objectives: To analyze different image quality metrics by testing and comparing them on sets of satellite images subjected to different distortions. Methods/Statistical Analysis: We propose methods for analyzing the quality of real-time images corrupted by different distortions. Several quality metrics are applied, and the best metrics are identified for each type of degradation. Metrics based on a single image and metrics based on two images have been tested on real-time satellite images from NASA data sets. Findings: This framework helps identify the metrics needed to validate the proposed filtering schemes applied to corrupted images. Based on the results, we characterize the different quality metrics and identify the quality metric appropriate to each distortion. Application/Improvements: The proposed quality-metric analysis can be used to estimate the performance of any filtering scheme employed to enhance the quality of real-time images, such as those in remote sensing.
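A minimal sketch of the two-image (full-reference) style of metric discussed above, assuming 8-bit grayscale images held as NumPy arrays (the paper's concrete metric set is not reproduced here):

```python
import numpy as np

def mse(ref, dist):
    """Mean squared error between a reference and a distorted image."""
    ref = np.asarray(ref, dtype=np.float64)
    dist = np.asarray(dist, dtype=np.float64)
    return float(np.mean((ref - dist) ** 2))

def psnr(ref, dist, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means less distortion."""
    err = mse(ref, dist)
    return float("inf") if err == 0 else 10.0 * np.log10(peak ** 2 / err)
```

A single-image (no-reference) metric would instead estimate quality from the distorted image alone, without a `ref` argument.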
Citations
Book ChapterDOI
S. Kavitha1, P. Sripriya1
01 Jan 2020
TL;DR: A Total Variation (TV) homomorphic filter reduces noise and enhances edges in MRI data for brain tumor detection, producing better results than conventional methods such as median filtering and anisotropic filtering.
Abstract: A brain tumor is a serious medical condition that, if not detected early, reduces life expectancy. Magnetic resonance imaging (MRI) is now a common method for abnormality detection and classification, but manual detection is less accurate and requires processing a large amount of data, which leads to errors in tumor segmentation. The data obtained from MRI images is also subject to noise introduced by the MRI machine components, and detecting and removing this noise plays a vital part in the accuracy of tumor detection. We therefore propose a Total Variation (TV) homomorphic filter to reduce the noise and enhance the edges of MRI data. An SVM classifier is employed for learning and classification. The method produces better results than conventional methods such as median filtering and anisotropic filtering.
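The conventional median filtering used as a baseline above can be sketched as follows (a minimal pure-NumPy version assuming a grayscale array; production code would use an optimized library routine):

```python
import numpy as np

def median_filter(img, k=3):
    """k x k median filter: replaces each pixel with the median of its
    neighborhood. Borders are handled by reflecting the image edges."""
    img = np.asarray(img, dtype=np.float64)
    pad = k // 2
    padded = np.pad(img, pad, mode="reflect")
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out
```

Because the median ignores extreme neighborhood values, a single impulse (salt or pepper) pixel is removed entirely, which is why median filtering is the usual baseline for impulse noise.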
Journal Article
TL;DR: Image noise reduction filters are evaluated against image quality standards at different noise levels, with the choice of filter guided by the specific results obtained under each noise condition.
Abstract: Noise reduction is often necessary and is the first step to be taken before analyzing image data. A main goal of digital image processing is to remove the various kinds of noise found in different images, so knowledge of the noise present in an image is needed to choose an appropriate noise reduction algorithm. Images taken from satellites are particularly affected by noise, which reduces image quality. This noise takes many forms, such as salt-and-pepper, speckle, and Gaussian noise. Because each noise type is very specific, many removal methods are designed for a single noise type only; there are also general denoising methods that have been slightly modified to remove speckle. In this paper, the filters used to reduce speckle are the Frost filter, the Gaussian smoothing filter, and the mean (average) filter, selected according to the specific results obtained under different noise conditions. This paper explains image noise reduction filtering techniques based on image quality standards at different noise levels.
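The noise models and the mean filter compared above can be illustrated with a short sketch (Gaussian and salt-and-pepper noise plus a k x k averaging filter; the Frost filter is omitted because its parameterization is not given here, and the noise levels below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def add_gaussian_noise(img, sigma=10.0):
    """Additive Gaussian noise (a common sensor/thermal noise model)."""
    noisy = np.asarray(img, dtype=np.float64) + rng.normal(0.0, sigma, np.shape(img))
    return np.clip(noisy, 0, 255)

def add_salt_pepper(img, p=0.05):
    """Salt-and-pepper (impulse) noise: a fraction p of pixels forced to 0 or 255."""
    noisy = np.asarray(img, dtype=np.float64).copy()
    mask = rng.random(noisy.shape)
    noisy[mask < p / 2] = 0.0
    noisy[mask > 1 - p / 2] = 255.0
    return noisy

def mean_filter(img, k=3):
    """k x k averaging (mean) filter, one of the smoothing filters compared above."""
    img = np.asarray(img, dtype=np.float64)
    pad = k // 2
    padded = np.pad(img, pad, mode="reflect")
    out = np.empty(img.shape)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out
```

Averaging suppresses zero-mean Gaussian noise well (the 3 x 3 mean reduces its standard deviation roughly threefold) but smears impulse noise, which is the kind of behavior a quality-metric comparison is meant to expose.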
Book ChapterDOI
03 Jan 2020
TL;DR: In this paper, the authors propose a contrast-optimal visual cryptography scheme for half-tone images, which shares a secret half-tone image among n members; all n members can reveal the secret by superimposing the n shares together, while any n − 1 members cannot.
Abstract: In this paper, we propose a new contrast-optimal visual cryptography scheme for half-tone images. This scheme shares a secret half-tone image among n members; all n members can reveal the secret by superimposing the n shares together, while any n − 1 members cannot. The proposed method generates noise-like shares, but the share size and the image recovered after superimposition are the same size as the original secret image. It reduces pixel expansion via the designed codebook and matrix transposition. It also reduces transmission time and computational complexity, addresses security issues, and restores the secret image without any distortion. The proposed scheme can share black-and-white and gray-level images, produces more accurate results, and is easy to implement.
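For illustration, here is the classic (2, 2) visual cryptography construction with 1x2 pixel expansion; note this is the textbook Naor-Shamir scheme, not the paper's expansion-free codebook:

```python
import numpy as np

rng = np.random.default_rng(1)

def make_shares(secret):
    """secret: 2-D array of 0 (white) / 1 (black) pixels. Returns two
    noise-like shares, each 1x2-expanded horizontally."""
    h, w = secret.shape
    s1 = np.zeros((h, 2 * w), dtype=int)
    s2 = np.zeros((h, 2 * w), dtype=int)
    patterns = (np.array([1, 0]), np.array([0, 1]))
    for i in range(h):
        for j in range(w):
            p = patterns[rng.integers(2)]  # random subpixel pattern
            s1[i, 2 * j: 2 * j + 2] = p
            # white pixel: same pattern in both shares (stack -> 1 black subpixel)
            # black pixel: complementary pattern (stack -> 2 black subpixels)
            s2[i, 2 * j: 2 * j + 2] = p if secret[i, j] == 0 else 1 - p
    return s1, s2

def superimpose(s1, s2):
    """Stacking transparencies is a pixel-wise OR of the black subpixels."""
    return np.maximum(s1, s2)
```

Each share in isolation has exactly one black subpixel per secret pixel regardless of the secret's value, so a single share leaks nothing; the contrast appears only when the shares are stacked.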
Journal ArticleDOI
TL;DR: A comparative analysis of these segmentation techniques based on several performance parameters is performed; K-means clustering shows the most appropriate outcome for segregating Modi numerals efficiently.
Abstract: Image segmentation is the most significant phase in effectively identifying boundary information in an image. Various degradations can exist in an image, such as distorted pixel values, blurring, and poor luminance, all of which affect its visual representation. Using segmentation techniques, we attempt to improve the content of an image and make it clearer to interpret. Various edge-based and clustering-based segmentation techniques, such as Prewitt, Roberts, Canny, Sobel, and K-means clustering, help to a great extent in segregating distortion information from a Modi character image. A comparative analysis of these segmentation techniques, based on several performance parameters, is performed to segment Modi character components. As a result, K-means clustering shows the most appropriate outcome for segregating Modi numerals efficiently.
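The K-means clustering that performed best above can be sketched as follows (a minimal version clustering 1-D pixel intensities with deterministic center initialization; a practical system would also use spatial features):

```python
import numpy as np

def kmeans_segment(img, k=2, iters=20):
    """Segment a grayscale image by k-means clustering of pixel intensities.
    Returns a label map with the same shape as the image."""
    img = np.asarray(img, dtype=np.float64)
    pixels = img.ravel()
    # spread the initial centers evenly over the intensity range
    centers = np.linspace(pixels.min(), pixels.max(), k)
    for _ in range(iters):
        # assign each pixel to its nearest center
        labels = np.argmin(np.abs(pixels[:, None] - centers[None, :]), axis=1)
        # move each center to the mean of its assigned pixels
        for c in range(k):
            if np.any(labels == c):
                centers[c] = pixels[labels == c].mean()
    return labels.reshape(img.shape)
```

For a bimodal character image (dark strokes on a light background), k = 2 separates foreground from background, which is the setting the comparison above evaluates.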

Cites background from "A Comparative Analysis on Image Qua..."

  • ...It reveals that greater value of SC indicates the deprived quality of image [15]....


References
Journal ArticleDOI
TL;DR: In this article, a structural similarity index is proposed for image quality assessment based on the degradation of structural information, and it is compared against subjective ratings and state-of-the-art objective methods on a database of images compressed with JPEG and JPEG2000.
Abstract: Objective methods for assessing perceptual image quality traditionally attempted to quantify the visibility of errors (differences) between a distorted image and a reference image using a variety of known properties of the human visual system. Under the assumption that human visual perception is highly adapted for extracting structural information from a scene, we introduce an alternative complementary framework for quality assessment based on the degradation of structural information. As a specific example of this concept, we develop a structural similarity index and demonstrate its promise through a set of intuitive examples, as well as comparison to both subjective ratings and state-of-the-art objective methods on a database of images compressed with JPEG and JPEG2000. A MATLAB implementation of the proposed algorithm is available online at http://www.cns.nyu.edu/~lcv/ssim/.

40,609 citations
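A hedged sketch of the structural similarity idea described above: the published SSIM averages a local statistic over small sliding windows, while the minimal version below computes that statistic once over the whole image (a simplification, not the authors' exact implementation):

```python
import numpy as np

def ssim_global(x, y, peak=255.0):
    """Single-window SSIM: luminance, contrast, and structure comparison
    computed globally. C1 and C2 are the standard stabilizing constants."""
    x = np.asarray(x, dtype=np.float64)
    y = np.asarray(y, dtype=np.float64)
    C1, C2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return (((2 * mx * my + C1) * (2 * cov + C2))
            / ((mx ** 2 + my ** 2 + C1) * (vx + vy + C2)))
```

The index equals 1 only for identical images and falls as luminance, contrast, or structure diverges, which is what lets it track perceived quality better than plain error summation.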

Journal ArticleDOI
TL;DR: Although the new index is mathematically defined and no human visual system model is explicitly employed, experiments on various image distortion types indicate that it performs significantly better than the widely used distortion metric mean squared error.
Abstract: We propose a new universal objective image quality index, which is easy to calculate and applicable to various image processing applications. Instead of using traditional error summation methods, the proposed index is designed by modeling any image distortion as a combination of three factors: loss of correlation, luminance distortion, and contrast distortion. Although the new index is mathematically defined and no human visual system model is explicitly employed, our experiments on various image distortion types indicate that it performs significantly better than the widely used distortion metric mean squared error. Demonstrative images and an efficient MATLAB implementation of the algorithm are available online at http://anchovy.ece.utexas.edu/~zwang/research/quality_index/demo.html.

5,285 citations
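The three-factor decomposition described above can be sketched directly (a global version for brevity; the paper applies it over a sliding window, and this sketch assumes non-constant images so the standard deviations are nonzero):

```python
import numpy as np

def uqi(x, y):
    """Universal quality index: the product of correlation loss,
    luminance distortion, and contrast distortion terms."""
    x = np.asarray(x, dtype=np.float64)
    y = np.asarray(y, dtype=np.float64)
    mx, my = x.mean(), y.mean()
    sx, sy = x.std(), y.std()
    sxy = ((x - mx) * (y - my)).mean()
    corr = sxy / (sx * sy)                   # loss of correlation
    lum = 2 * mx * my / (mx ** 2 + my ** 2)  # luminance distortion
    con = 2 * sx * sy / (sx ** 2 + sy ** 2)  # contrast distortion
    return corr * lum * con
```

Doubling an image's intensities, for example, keeps correlation at 1 but penalizes both the luminance and contrast terms, so the index drops below 1 even though the structure is unchanged.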

Journal ArticleDOI
TL;DR: A definition of local band-limited contrast in images is proposed that assigns a contrast value to every point in the image as a function of the spatial frequency band and is helpful in understanding the effects of image-processing algorithms on the perceived contrast.
Abstract: The physical contrast of simple images such as sinusoidal gratings or a single patch of light on a uniform background is well defined and agrees with the perceived contrast, but this is not so for complex images. Most definitions assign a single contrast value to the whole image, but perceived contrast may vary greatly across the image. Human contrast sensitivity is a function of spatial frequency; therefore the spatial frequency content of an image should be considered in the definition of contrast. In this paper a definition of local band-limited contrast in images is proposed that assigns a contrast value to every point in the image as a function of the spatial frequency band. For each frequency band, the contrast is defined as the ratio of the bandpass-filtered image at the frequency to the low-pass image filtered to an octave below the same frequency (local luminance mean). This definition raises important implications regarding the perception of contrast in complex images and is helpful in understanding the effects of image-processing algorithms on the perceived contrast. A pyramidal image-contrast structure based on this definition is useful in simulating nonlinear, threshold characteristics of spatial vision in both normal observers and the visually impaired.

1,370 citations
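The local band-limited contrast defined above can be approximated with a short sketch; simple box filters stand in for the Gaussian octave-band filters of the paper, so the numbers are illustrative only:

```python
import numpy as np

def box_blur(img, k):
    """k x k box low-pass filter with reflected borders (a crude stand-in
    for the band/octave low-pass filters used in the paper)."""
    img = np.asarray(img, dtype=np.float64)
    pad = k // 2
    padded = np.pad(img, pad, mode="reflect")
    out = np.empty(img.shape)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

def local_band_contrast(img, k_band=3, k_low=7):
    """Local contrast: band-pass response divided by the local luminance
    mean (the low-pass image below the band)."""
    low_fine = box_blur(img, k_band)
    low_coarse = box_blur(img, k_low)
    band = low_fine - low_coarse   # band-pass component
    eps = 1e-6                     # avoid division by zero in dark regions
    return band / (low_coarse + eps)
```

On a uniform image the band-pass response vanishes, so the contrast map is zero everywhere; near edges the ratio becomes large, matching the point-wise, frequency-dependent notion of contrast the paper argues for.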

Journal ArticleDOI
TL;DR: This work proposes a novel BIQA model that utilizes the joint statistics of two types of commonly used local contrast features: 1) the gradient magnitude (GM) map and 2) the Laplacian of Gaussian response.
Abstract: Blind image quality assessment (BIQA) aims to evaluate the perceptual quality of a distorted image without information regarding its reference image. Existing BIQA models usually predict the image quality by analyzing the image statistics in some transformed domain, e.g., in the discrete cosine transform domain or wavelet domain. Though great progress has been made in recent years, BIQA is still a very challenging task due to the lack of a reference image. Considering that image local contrast features convey important structural information that is closely related to image perceptual quality, we propose a novel BIQA model that utilizes the joint statistics of two types of commonly used local contrast features: 1) the gradient magnitude (GM) map and 2) the Laplacian of Gaussian (LOG) response. We employ an adaptive procedure to jointly normalize the GM and LOG features, and show that the joint statistics of normalized GM and LOG features have desirable properties for the BIQA task. The proposed model is extensively evaluated on three large-scale benchmark databases, and shown to deliver highly competitive performance with state-of-the-art BIQA models, as well as with some well-known full reference image quality assessment models.

535 citations
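The two local contrast feature maps named above can be sketched as follows; this shows only the raw GM and Laplacian responses, not the joint normalization and statistics that make up the actual BIQA model:

```python
import numpy as np

def filter2_same(img, kernel):
    """Naive 'same'-size 2-D correlation (filtering) with reflected borders."""
    img = np.asarray(img, dtype=np.float64)
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)), mode="reflect")
    out = np.empty(img.shape)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float64)
LAPLACIAN = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=np.float64)

def gm_map(img):
    """Gradient magnitude map from horizontal/vertical Sobel responses."""
    gx = filter2_same(img, SOBEL_X)
    gy = filter2_same(img, SOBEL_X.T)
    return np.sqrt(gx ** 2 + gy ** 2)

def log_map(img):
    """Discrete Laplacian response (the LOG of the paper also smooths
    with a Gaussian first, omitted here for brevity)."""
    return filter2_same(img, LAPLACIAN)
```

Both maps are zero on uniform regions and respond at edges and fine detail, which is why their joint statistics carry the structural information the BIQA model exploits.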

Journal ArticleDOI
TL;DR: A novel software-based fake detection method that can be used in multiple biometric systems to detect different types of fraudulent access attempts and the experimental results show that the proposed method is highly competitive compared with other state-of-the-art approaches.
Abstract: To ensure the actual presence of a real legitimate trait in contrast to a fake self-manufactured synthetic or reconstructed sample is a significant problem in biometric authentication, which requires the development of new and efficient protection measures. In this paper, we present a novel software-based fake detection method that can be used in multiple biometric systems to detect different types of fraudulent access attempts. The objective of the proposed system is to enhance the security of biometric recognition frameworks, by adding liveness assessment in a fast, user-friendly, and non-intrusive manner, through the use of image quality assessment. The proposed approach presents a very low degree of complexity, which makes it suitable for real-time applications, using 25 general image quality features extracted from one image (i.e., the same acquired for authentication purposes) to distinguish between legitimate and impostor samples. The experimental results, obtained on publicly available data sets of fingerprint, iris, and 2D face, show that the proposed method is highly competitive compared with other state-of-the-art approaches and that the analysis of the general image quality of real biometric samples reveals highly valuable information that may be very efficiently used to discriminate them from fake traits.

444 citations
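As a toy illustration of image-quality features of this general kind (not one of the paper's 25 features), a Laplacian-variance sharpness cue that a liveness detector could feed to a classifier:

```python
import numpy as np

def sharpness_feature(img):
    """Variance of a discrete Laplacian response over the image interior:
    a crude blur/sharpness cue (higher means sharper detail)."""
    img = np.asarray(img, dtype=np.float64)
    lap = (img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:]
           - 4 * img[1:-1, 1:-1])
    return float(lap.var())
```

A reconstructed or re-photographed biometric sample is typically blurrier than a live capture, so a feature like this separates the two populations; the full method combines many such cues in a trained classifier.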