Author

Karen Panetta

Bio: Karen Panetta is an academic researcher from Tufts University. The author has contributed to research in the topics of Image processing and the Human visual system model, has an h-index of 28, and has co-authored 197 publications receiving 3,860 citations. Previous affiliations of Karen Panetta include the University of Niš and the Seoul National University of Science and Technology.


Papers
Journal Article
TL;DR: A new nonreference underwater image quality measure (UIQM) is presented, comprising three underwater image attribute measures, each selected to evaluate one aspect of underwater image degradation and each inspired by properties of the human visual system (HVS).
Abstract: Underwater images suffer from blurring, low contrast, and grayed-out colors due to absorption and scattering under water. Many image enhancement algorithms have been developed to improve the visual quality of underwater images. Unfortunately, no well-accepted objective measure exists that can evaluate the quality of underwater images in a way that matches human perception. Predominant underwater image processing algorithms use either a subjective evaluation, which is time consuming and biased, or a generic image quality measure, which fails to consider the properties of underwater images. To address this problem, a new nonreference underwater image quality measure (UIQM) is presented in this paper. The UIQM comprises three underwater image attribute measures: the underwater image colorfulness measure (UICM), the underwater image sharpness measure (UISM), and the underwater image contrast measure (UIConM). Each attribute is selected to evaluate one aspect of underwater image degradation, and each presented attribute measure is inspired by the properties of the human visual system (HVS). The experimental results demonstrate that the measures effectively evaluate underwater image quality in accordance with human perception. These measures are also applied to the AirAsia 8501 wreckage images to show their importance in practical applications.
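The structure of the UIQM lends itself to a compact sketch: a weighted sum of a colorfulness, a sharpness, and a contrast term. The Python fragment below is a minimal illustration under that assumption, not the authors' implementation; the three component functions are simplified stand-ins for UICM, UISM, and UIConM, and the weights are placeholders.

```python
# Minimal sketch of the UIQM structure: a weighted sum of colorfulness,
# sharpness, and contrast terms. The component functions are simplified
# stand-ins, not the paper's exact UICM/UISM/UIConM definitions, and the
# weights c1..c3 are illustrative placeholders.
import numpy as np

def colorfulness(rgb):
    # Proxy for UICM: statistics of the RG and YB opponent-color channels.
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    rg, yb = r - g, 0.5 * (r + g) - b
    return np.hypot(rg.std(), yb.std()) + 0.3 * np.hypot(rg.mean(), yb.mean())

def sharpness(rgb):
    # Proxy for UISM: mean gradient magnitude of the luminance channel.
    gray = rgb.mean(axis=-1)
    gy, gx = np.gradient(gray)
    return np.hypot(gx, gy).mean()

def contrast(rgb, block=8):
    # Proxy for UIConM: average Michelson contrast over non-overlapping blocks.
    gray = rgb.mean(axis=-1)
    h = gray.shape[0] // block * block
    w = gray.shape[1] // block * block
    blocks = gray[:h, :w].reshape(h // block, block, w // block, block)
    bmax, bmin = blocks.max(axis=(1, 3)), blocks.min(axis=(1, 3))
    return np.mean((bmax - bmin) / (bmax + bmin + 1e-6))

def uiqm(rgb, c1=0.3, c2=0.3, c3=0.4):  # placeholder weights, not the paper's
    return c1 * colorfulness(rgb) + c2 * sharpness(rgb) + c3 * contrast(rgb)

score = uiqm(np.random.rand(256, 256, 3))  # no-reference: needs only the image
```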

671 citations

Journal Article
TL;DR: The presented algorithms exploit the fact that the relationship between stimulus and perception is logarithmic and combine enhancement quality with computational efficiency, while a quantitative contrast measure helps choose the best parameters and transform for each enhancement.
Abstract: Many applications of histograms to image processing are well known. However, applying this process in the transform domain, by way of a transform coefficient histogram, has not yet been fully explored. This paper proposes three methods of image enhancement: a) logarithmic transform histogram matching, b) logarithmic transform histogram shifting, and c) logarithmic transform histogram shaping using Gaussian distributions. They are based on the properties of the logarithmic transform domain histogram and histogram equalization. The presented algorithms exploit the fact that the relationship between stimulus and perception is logarithmic, and they combine enhancement quality with computational efficiency. A human visual system-based quantitative measure of image contrast improvement is also defined, which helps choose the best parameters and transform for each enhancement. A number of experimental results are presented to illustrate the performance of the proposed algorithms.
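As a concrete illustration of operating on a transform-coefficient histogram, here is a hedged sketch of histogram shifting in the logarithmic transform domain. It is not the authors' algorithm: the choice of transform (2-D DCT), the log1p mapping, and the shift parameter beta are all assumptions made for illustration.

```python
# Sketch: shift the histogram of log-magnitude DCT coefficients by beta,
# then map back to linear magnitudes and invert the transform. All
# specifics here are illustrative assumptions.
import numpy as np
from scipy.fft import dctn, idctn

def log_histogram_shift(image, beta=0.1):
    coeffs = dctn(image, norm="ortho")     # transform domain
    signs = np.sign(coeffs)
    log_mag = np.log1p(np.abs(coeffs))     # logarithmic coefficient magnitudes
    shifted = log_mag + beta               # the histogram shift itself
    enhanced = signs * np.expm1(shifted)   # back to linear magnitudes
    enhanced[0, 0] = coeffs[0, 0]          # keep DC: preserve mean brightness
    return idctn(enhanced, norm="ortho")
```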

527 citations

Journal Article
TL;DR: A new class of "frequency domain"-based signal/image enhancement algorithms is presented, including magnitude reduction, log-magnitude reduction, iterative magnitude reduction, and a log-reduction zonal magnitude technique, built on sequency-ordered orthogonal transforms, which include the well-known Fourier, Hartley, cosine, and Hadamard transforms.
Abstract: This paper presents a new class of "frequency domain"-based signal/image enhancement algorithms, including magnitude reduction, log-magnitude reduction, iterative magnitude reduction, and a log-reduction zonal magnitude technique. These algorithms are described and applied for detection and visualization of objects within an image. The new technique is based on the so-called sequency-ordered orthogonal transforms, which include the well-known Fourier, Hartley, cosine, and Hadamard transforms, as well as new enhancement parametric operators. A wide range of image characteristics can be obtained from a single transform by varying the parameters of the operators. We also introduce a quantifying method to measure signal/image enhancement, called EME, which helps choose the best parameters and transform for each enhancement. A number of experimental results are presented to illustrate the performance of the proposed algorithms.
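The EME measure mentioned above is block based; a common form averages the log-ratio of each block's maximum to its minimum. The sketch below uses 20*log10 and an epsilon guard, which are assumptions: the exact constants and log base may differ from the paper's definition.

```python
# Hedged sketch of an EME-style measure: split the image into k1 x k2 blocks
# and average 20*log10(Imax/Imin) per block. Constants and log base are
# assumptions; higher values indicate stronger local contrast.
import numpy as np

def eme(gray, k1=8, k2=8, eps=1e-6):
    bh, bw = gray.shape[0] // k1, gray.shape[1] // k2
    total = 0.0
    for i in range(k1):
        for j in range(k2):
            block = gray[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            total += 20.0 * np.log10((block.max() + eps) / (block.min() + eps))
    return total / (k1 * k2)
```

As the abstract notes, such a scalar score makes it possible to pick the best transform and operator parameters automatically rather than by inspection.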

373 citations

Journal Article
01 Feb 2008
TL;DR: Two novel image enhancement algorithms are introduced: edge-preserving contrast enhancement, which better preserves edge details while enhancing contrast in images with varying illumination, and a novel multihistogram equalization method that uses the human visual system to segment the image, allowing fast and efficient correction of nonuniform illumination.
Abstract: Varying scene illumination poses many challenging problems for machine vision systems. One such issue is developing global enhancement methods that work effectively across varying illumination. In this paper, we introduce two novel image enhancement algorithms: edge-preserving contrast enhancement, which is able to better preserve edge details while enhancing contrast in images with varying illumination, and a novel multihistogram equalization method which utilizes the human visual system (HVS) to segment the image, allowing a fast and efficient correction of nonuniform illumination. We then extend this HVS-based multihistogram equalization approach to create a general enhancement method that can utilize any combination of enhancement algorithms for improved performance. Additionally, we propose new quantitative measures of image enhancement, called the logarithmic Michelson contrast measure (AME) and the logarithmic AME by entropy. Many image enhancement methods require selection of operating parameters, which are typically chosen using subjective methods; these new measures allow for automated selection. We present experimental results for these methods and compare them against other leading algorithms.
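To make measure-driven parameter selection concrete, here is a hedged sketch: a simplified log-Michelson block contrast (the paper's AME uses logarithmic, PLIP-style arithmetic, which is not reproduced here) and a loop that keeps the enhancement parameter maximizing it. The gamma sweep is a hypothetical stand-in for any parameterized enhancement.

```python
# Simplified log-Michelson block contrast plus automated parameter selection.
# The measure only echoes the spirit of the AME; the gamma candidates and the
# gamma enhancement itself are hypothetical illustrations.
import numpy as np

def log_michelson(gray, k=8, eps=1e-6):
    bh, bw = gray.shape[0] // k, gray.shape[1] // k
    vals = []
    for i in range(k):
        for j in range(k):
            b = gray[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            vals.append(np.log1p((b.max() - b.min()) / (b.max() + b.min() + eps)))
    return float(np.mean(vals))

def pick_gamma(gray, candidates=(0.5, 0.7, 1.0, 1.4, 2.0)):
    # Automated selection: keep the parameter that maximizes the measure,
    # replacing the subjective tuning the abstract criticizes.
    return max(candidates, key=lambda g: log_michelson(gray ** g))

gamma = pick_gamma(np.random.rand(256, 256))  # gray values assumed in [0, 1]
```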

270 citations

Journal Article
01 Nov 2011
TL;DR: The comparison and evaluation of enhancement performance demonstrate that the NLUM can improve disease diagnosis by enhancing the fine details in mammograms with no a priori knowledge of the image contents.
Abstract: This paper introduces a new unsharp masking (UM) scheme, called nonlinear UM (NLUM), for mammogram enhancement. The NLUM offers users the flexibility 1) to embed different types of filters into the nonlinear filtering operator; 2) to choose different linear or nonlinear operations for the fusion process that combines the enhanced filtered portion of the mammogram with the original mammogram; and 3) to perform the NLUM parameter selection manually or by using a quantitative enhancement measure to obtain the optimal enhancement parameters. We also introduce a new enhancement measure approach, called the second-derivative-like measure of enhancement, which is shown to have better performance than other measures in evaluating the visual quality of image enhancement. The comparison and evaluation of enhancement performance demonstrate that the NLUM can improve disease diagnosis by enhancing the fine details in mammograms with no a priori knowledge of the image contents. Human-visual-system-based image decomposition is used for analysis and visualization of mammogram enhancement.
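The generalized unsharp-masking pipeline the abstract describes (filter, then fuse the filtered detail with the original) can be sketched as follows. The Gaussian filter, additive fusion, and gain value are illustrative choices, not the paper's operators.

```python
# Sketch of the generalized unsharp-masking structure: both the filtering
# operator and the fusion step are pluggable. The defaults here (Gaussian
# low-pass, additive fusion, lam=1.5) are illustrative assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter

def nlum(image, filt=lambda x: gaussian_filter(x, sigma=2.0),
         fuse=lambda orig, detail, lam: orig + lam * detail, lam=1.5):
    detail = image - filt(image)     # high-frequency portion of the image
    return np.clip(fuse(image, detail, lam), 0.0, 1.0)

# lam can be fixed manually or chosen by maximizing a quantitative
# enhancement measure, mirroring the parameter selection described above.
```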

195 citations


Cited by
Journal Article

08 Dec 2001 - BMJ
TL;DR: There is, I think, something ethereal about i, the square root of minus one: an odd beast, an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i, the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time: an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

01 Apr 1997
TL;DR: The objective of this paper is to give a comprehensive introduction to applied cryptography with an engineer or computer scientist in mind, emphasizing the knowledge needed to create practical systems that support integrity, confidentiality, or authenticity.
Abstract: The objective of this paper is to give a comprehensive introduction to applied cryptography with an engineer or computer scientist in mind. The emphasis is on the knowledge needed to create practical systems that support integrity, confidentiality, or authenticity. Topics covered include an introduction to the concepts in cryptography, attacks against cryptographic systems, key use and handling, random bit generation, encryption modes, and message authentication codes. Recommendations on algorithms and further reading are given at the end of the paper. This paper should enable the reader to build, understand, and evaluate system descriptions and designs based on the cryptographic components described in the paper.

2,188 citations

Proceedings Article
01 Jan 1994
TL;DR: The main focus in MUCKE is on cleaning large-scale Web image corpora and on proposing image representations that are closer to the human interpretation of images.
Abstract: MUCKE aims to mine a large volume of images, to structure them conceptually, and to use this conceptual structuring to improve large-scale image retrieval. The last decade witnessed important progress concerning low-level image representations. However, a number of problems need to be solved in order to unleash the full potential of image mining in applications. The central problem with low-level representations is the mismatch between them and the human interpretation of image content. This problem can be instantiated, for instance, by the inability of existing descriptors to capture spatial relationships between the concepts represented, or by their inability to convey an explanation of why two images are similar in a content-based image retrieval framework. We start by assessing existing local descriptors for image classification and by proposing to use co-occurrence matrices to better capture spatial relationships in images. The main focus in MUCKE is on cleaning large-scale Web image corpora and on proposing image representations that are closer to the human interpretation of images. Consequently, we introduce methods which tackle these two problems and compare results to state-of-the-art methods. Note: some aspects of this deliverable are withheld at this time as they are pending review. Please contact the authors for a preview.
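Since the abstract singles out co-occurrence matrices as its tool for capturing spatial relationships, here is a minimal sketch of a gray-level co-occurrence matrix; the quantization to 8 levels and the (0, 1) offset are arbitrary choices for illustration, not the project's configuration.

```python
# Minimal gray-level co-occurrence matrix: count how often quantized gray
# level a sits at offset (dy, dx) from gray level b, then normalize.
import numpy as np

def glcm(gray_u8, levels=8, dy=0, dx=1):
    q = (gray_u8.astype(np.int32) * levels) // 256   # quantize to `levels` bins
    m = np.zeros((levels, levels), dtype=np.int64)
    src = q[:q.shape[0] - dy, :q.shape[1] - dx]      # reference pixels
    dst = q[dy:, dx:]                                # neighbors at the offset
    np.add.at(m, (src.ravel(), dst.ravel()), 1)      # accumulate pair counts
    return m / m.sum()
```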

2,134 citations

Journal ArticleDOI
TL;DR: Experimental results demonstrate that the proposed enhancement algorithm can not only enhance the details but also preserve the naturalness of non-uniform illumination images.
Abstract: Image enhancement plays an important role in image processing and analysis. Among various enhancement algorithms, Retinex-based algorithms can efficiently enhance details and have been widely adopted. However, because Retinex-based algorithms regard illumination removal as a default preference and fail to limit the range of the reflectance, they cannot effectively preserve the naturalness of non-uniform illumination images. Naturalness is essential for image enhancement to achieve pleasing perceptual quality. In order to preserve naturalness while enhancing details, we propose an enhancement algorithm for non-uniform illumination images. This paper makes three major contributions. First, a lightness-order-error measure is proposed to assess naturalness preservation objectively. Second, a bright-pass filter is proposed to decompose an image into reflectance and illumination, which determine the details and the naturalness of the image, respectively. Third, we propose a bi-log transformation, which is used to map the illumination so as to strike a balance between details and naturalness. Experimental results demonstrate that the proposed algorithm can not only enhance the details but also preserve the naturalness of non-uniform illumination images.
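The decompose-remap-recombine pipeline described here can be sketched simply. In this hedged illustration the bright-pass filter is approximated by a Gaussian estimate of illumination and the bi-log transformation by a single log compression, so only the structure of the method survives, not its specifics.

```python
# Structure sketch: estimate illumination, divide it out to get reflectance
# (details), compress the illumination's dynamic range, and recombine.
# The Gaussian illumination estimate and log1p remap are stand-ins for the
# paper's bright-pass filter and bi-log transformation.
import numpy as np
from scipy.ndimage import gaussian_filter

def enhance_nonuniform(gray, sigma=25.0, eps=1e-6):
    illumination = gaussian_filter(gray, sigma)               # smooth base layer
    reflectance = gray / (illumination + eps)                 # detail layer
    remapped = np.log1p(8.0 * illumination) / np.log1p(8.0)   # range compression
    return np.clip(reflectance * remapped, 0.0, 1.0)          # gray in [0, 1]
```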

918 citations

Journal Article
TL;DR: A general framework based on histogram equalization for image contrast enhancement is presented, along with a low-complexity contrast enhancement algorithm whose performance is demonstrated against a recently proposed method.
Abstract: A general framework based on histogram equalization for image contrast enhancement is presented. In this framework, contrast enhancement is posed as an optimization problem that minimizes a cost function. Histogram equalization is an effective technique for contrast enhancement. However, conventional histogram equalization (HE) usually results in excessive contrast enhancement, which in turn gives the processed image an unnatural look and creates visual artifacts. By introducing specifically designed penalty terms, the level of contrast enhancement can be adjusted; noise robustness, white/black stretching, and mean-brightness preservation may easily be incorporated into the optimization. Analytic solutions for some of the important criteria are presented. Finally, a low-complexity algorithm for contrast enhancement is presented, and its performance is demonstrated against a recently proposed method.
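One way to see how penalty terms temper plain histogram equalization is the blended-histogram sketch below: equalize against a mixture of the image histogram and a uniform histogram, with the blend weight playing the role of a penalty. This is a hedged illustration of the framework's flavor, not the paper's cost function or its analytic solutions.

```python
# Blend the image histogram with a uniform one, then equalize against the
# blend. lam = 0 gives plain HE; large lam approaches the identity mapping,
# suppressing the over-enhancement the abstract describes.
import numpy as np

def penalized_he(gray_u8, lam=1.0):
    hist, _ = np.histogram(gray_u8, bins=256, range=(0, 256))
    hist = hist / hist.sum()
    uniform = np.full(256, 1.0 / 256.0)
    blended = (hist + lam * uniform) / (1.0 + lam)   # penalty pulls toward uniform
    lut = np.round(255.0 * np.cumsum(blended)).astype(np.uint8)
    return lut[gray_u8]
```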

794 citations