Other affiliations: Massachusetts Institute of Technology
Bio: Shahan Nercessian is an academic researcher from Tufts University. The author has contributed to research in topics including image processing and edge detection. The author has an h-index of 10 and has co-authored 25 publications receiving 356 citations. Previous affiliations of Shahan Nercessian include the Massachusetts Institute of Technology.
TL;DR: A multi-scale image enhancement algorithm based on a new parametric contrast measure that incorporates not only the luminance masking characteristic, but also the contrast masking characteristic of the human visual system is presented.
Abstract: Image enhancement is a crucial pre-processing step for various image processing applications and vision systems. Many enhancement algorithms have been proposed based on different sets of criteria. However, a direct multi-scale image enhancement algorithm capable of independently and/or simultaneously providing adequate contrast enhancement, tonal rendition, dynamic range compression, and accurate edge preservation in a controlled manner has yet to be produced. In this paper, a multi-scale image enhancement algorithm based on a new parametric contrast measure is presented. The parametric contrast measure incorporates not only the luminance masking characteristic, but also the contrast masking characteristic of the human visual system. The formulation of the contrast measure can be adapted for any multi-resolution decomposition scheme in order to yield new human visual system-inspired multi-scale transforms. In this paper, it is exemplified using the Laplacian pyramid, discrete wavelet transform, stationary wavelet transform, and dual-tree complex wavelet transform. Consequently, the proposed enhancement procedure is developed. The advantages of the proposed method include: 1) the integration of both the luminance and contrast masking phenomena; 2) the extension of non-linear mapping schemes to human visual system-inspired multi-scale contrast coefficients; 3) the extension of human visual system-based image enhancement approaches to the stationary and dual-tree complex wavelet transforms; 4) a direct means of adjusting overall brightness; and 5) a direct means of achieving dynamic range compression within the multi-scale enhancement framework. Experimental results demonstrate the ability of the proposed algorithm to achieve simultaneous local and global enhancements.
TL;DR: New pixel- and region-based multiresolution image fusion algorithms are introduced in this paper using the Parameterized Logarithmic Image Processing (PLIP) model, a framework more suitable for processing images.
Abstract: New pixel- and region-based multiresolution image fusion algorithms are introduced in this paper using the Parameterized Logarithmic Image Processing (PLIP) model, a framework more suitable for processing images. A mathematical analysis shows that the Logarithmic Image Processing (LIP) model and standard mathematical operators are extreme cases of the PLIP model operators. Moreover, the PLIP model operators also have the ability to take on cases in between LIP and standard operators based on the visual requirements of the input images. PLIP-based multiresolution decomposition schemes are developed and thoroughly applied for image fusion as analysis and synthesis methods. The new decomposition schemes and fusion rules yield novel image fusion algorithms which are able to provide visually more pleasing fusion results. LIP-based multiresolution image fusion approaches are consequently formulated due to the generalized nature of the PLIP model. Computer simulations illustrate that the proposed image fusion algorithms using the Parameterized Logarithmic Laplacian Pyramid, Parameterized Logarithmic Discrete Wavelet Transform, and Parameterized Logarithmic Stationary Wavelet Transform outperform their respective traditional approaches by both qualitative and quantitative means. The algorithms were tested over a range of different image classes, including out-of-focus, medical, surveillance, and remote sensing images.
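The PLIP operators mentioned above have simple closed forms on gray-tone values. The sketch below shows the commonly cited addition, subtraction, and scalar multiplication, with the parameter `gamma` controlling the transition between LIP-like and standard arithmetic; the exact parameterization in the paper may differ.

```python
# PLIP operators on gray-tone values g in [0, gamma). As gamma -> infinity
# they reduce to ordinary +, -, and scalar *; for gamma equal to the maximum
# intensity M they coincide with the classical LIP model.

def plip_add(g1, g2, gamma=256.0):
    return g1 + g2 - g1 * g2 / gamma

def plip_sub(g1, g2, gamma=256.0):
    return gamma * (g1 - g2) / (gamma - g2)

def plip_scalar_mul(c, g, gamma=256.0):
    return gamma - gamma * (1.0 - g / gamma) ** c
```

Note that PLIP addition is closed in [0, gamma), so fused coefficients cannot overflow the intensity range the way standard addition can.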
12 May 2008
TL;DR: The experimental results show that the system can effectively detect the handguns in X-ray luggage scan images with minimal amounts of false positives and is suitable for real-time applications.
Abstract: The detection of threat objects using X-ray luggage scan images has become an important means of aviation security. Most airport screening is still based on the manual detection of potential threat objects by human experts. This paper presents a system for the automatic detection of potential threat objects in X-ray luggage scan images. Segmentation and edge-based feature vectors form the basis of the automatic detection system. The system is illustrated using handguns as the threat objects in question. The experimental results show that the system can effectively detect the handguns in X-ray luggage scan images with minimal amounts of false positives. Also, apart from the initial setup of the classification database, the algorithm is suitable for real-time applications.
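The abstract does not specify the edge-based feature vectors, but a simple example of this class of feature is a gradient-orientation histogram. The function below is a hypothetical illustration of such a feature, not the authors' detector.

```python
import numpy as np

def edge_orientation_histogram(gray, bins=8):
    """Toy edge-based feature vector (an assumed stand-in for the paper's
    features): histogram of gradient orientations weighted by magnitude."""
    gy, gx = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx) % np.pi   # undirected orientation in [0, pi)
    hist, _ = np.histogram(ang, bins=bins, range=(0.0, np.pi), weights=mag)
    total = hist.sum()
    return hist / total if total > 0 else hist
```

Feature vectors of this kind, computed on segmented object candidates, can then be matched against a classification database.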
01 Apr 2010
TL;DR: It is shown that Boolean function derivatives are useful for the application of identifying the location of edge pixels in binary images and the development of a new edge detection algorithm for grayscale images, which yields competitive results, compared with those of traditional methods.
Abstract: This paper introduces a new concept of Boolean derivatives as a fusion of partial derivatives of Boolean functions (PDBFs). Three efficient algorithms for the calculation of PDBFs are presented. It is shown that Boolean function derivatives are useful for the application of identifying the location of edge pixels in binary images. The same concept is extended to the development of a new edge detection algorithm for grayscale images, which yields competitive results, compared with those of traditional methods. Furthermore, a new measure is introduced to automatically determine the parameter values used in the thresholding portion of the binarization procedure. Through computer simulations, demonstrations of Boolean derivatives and the effectiveness of the presented edge detection algorithm, compared with traditional edge detection algorithms, are shown using several synthetic and natural test images. In order to make quantitative comparisons, two quantitative measures are used: one based on the recovery of the original image from the output edge map, and Pratt's figure of merit.
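The partial derivative of a Boolean function with respect to one variable is the XOR of the function's two restrictions, so for a binary image it marks exactly the pixels whose value differs from a neighbor. A minimal sketch, assuming simple one-pixel shifts (which may differ from the paper's exact PDBF formulation):

```python
import numpy as np

def binary_edges(img):
    """Edge map from Boolean partial derivatives: a pixel is an edge if the
    XOR of the image with a one-pixel shift is 1 in either direction."""
    img = img.astype(bool)
    dx = np.zeros_like(img)
    dy = np.zeros_like(img)
    dx[:, 1:] = img[:, 1:] ^ img[:, :-1]   # derivative along columns
    dy[1:, :] = img[1:, :] ^ img[:-1, :]   # derivative along rows
    return (dx | dy).astype(np.uint8)
```

The grayscale extension in the paper applies this idea after a thresholding/binarization step whose parameters are chosen automatically.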
TL;DR: A new measure is proposed which enhances the gradient information used for quality assessment; an analysis of the proposed image similarity measure using the LIVE database of distorted images and their corresponding subjective evaluations of visual quality illustrates the improved performance of the proposed measure.
Abstract: Image similarity measures are crucial for image processing applications which require comparisons to ideal reference images in order to assess performance. The Structural Similarity (SSIM), Gradient Structural Similarity (GSSIM), 4-component SSIM (4-SSIM) and 4-component GSSIM (4-GSSIM) indexes are motivated by the fact that the human visual system is adapted to extract local structural information. In this paper, we propose a new measure which enhances the gradient information used for quality assessment. An analysis of the proposed image similarity measure using the LIVE database of distorted images and their corresponding subjective evaluations of visual quality illustrates the improved performance of the proposed metric.
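A gradient-enhanced similarity index can be sketched by applying an SSIM-style comparison to gradient-magnitude maps. The whole-image version below is a simplified stand-in for the windowed GSSIM-family indexes, not the paper's exact measure.

```python
import numpy as np

def gradients(img):
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def gradient_similarity(ref, dist, c=1e-3):
    """Global SSIM-style comparison of gradient-magnitude maps
    (a whole-image simplification of the windowed GSSIM index)."""
    g1, g2 = gradients(ref), gradients(dist)
    num = 2.0 * (g1 * g2).mean() + c
    den = (g1 ** 2).mean() + (g2 ** 2).mean() + c
    return num / den
```

Identical images score 1.0, and the score decreases as edge structure is lost; the windowed variants simply pool this comparison over local neighborhoods.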
01 Jan 1994
TL;DR: The main focus in MUCKE is on cleaning large scale Web image corpora and on proposing image representations which are closer to the human interpretation of images.
Abstract: MUCKE aims to mine a large volume of images, to structure them conceptually and to use this conceptual structuring in order to improve large-scale image retrieval. The last decade witnessed important progress concerning low-level image representations. However, there are a number of problems which need to be solved in order to unleash the full potential of image mining in applications. The central problem with low-level representations is the mismatch between them and the human interpretation of image content. This problem can be instantiated, for instance, by the inability of existing descriptors to capture spatial relationships between the concepts represented or by their inability to convey an explanation of why two images are similar in a content-based image retrieval framework. We start by assessing existing local descriptors for image classification and by proposing to use co-occurrence matrices to better capture spatial relationships in images. The main focus in MUCKE is on cleaning large scale Web image corpora and on proposing image representations which are closer to the human interpretation of images. Consequently, we introduce methods which tackle these two problems and compare results to state of the art methods. Note: some aspects of this deliverable are withheld at this time as they are pending review. Please contact the authors for a preview.
TL;DR: A new no-reference underwater image quality measure (UIQM) is presented, which comprises three underwater image attribute measures, each selected to evaluate one aspect of underwater image degradation and inspired by properties of the human visual system (HVS).
Abstract: Underwater images suffer from blurring effects, low contrast, and grayed out colors due to the absorption and scattering effects under the water. Many image enhancement algorithms for improving the visual quality of underwater images have been developed. Unfortunately, no well-accepted objective measure exists that can evaluate the quality of underwater images similar to human perception. Predominant underwater image processing algorithms use either a subjective evaluation, which is time consuming and biased, or a generic image quality measure, which fails to consider the properties of underwater images. To address this problem, a new no-reference underwater image quality measure (UIQM) is presented in this paper. The UIQM comprises three underwater image attribute measures: the underwater image colorfulness measure (UICM), the underwater image sharpness measure (UISM), and the underwater image contrast measure (UIConM). Each attribute is selected for evaluating one aspect of underwater image degradation, and each presented attribute measure is inspired by the properties of the human visual system (HVS). The experimental results demonstrate that the measures effectively evaluate underwater image quality in accordance with human perception. These measures are also used on the AirAsia 8501 wreckage images to show their importance in practical applications.
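The overall score is a linear combination of the three attribute measures. In the sketch below the component formulas are simplified stand-ins (the published UICM/UISM/UIConM definitions are more involved), while the default weights are the commonly quoted UIQM coefficients.

```python
import numpy as np

# Illustrative stand-ins for the three attribute measures; the simplified
# component formulas below are assumptions, not the published definitions.

def colorfulness(rgb):                       # stand-in for UICM
    rg = rgb[..., 0] - rgb[..., 1]
    yb = 0.5 * (rgb[..., 0] + rgb[..., 1]) - rgb[..., 2]
    return np.hypot(rg.std(), yb.std())

def sharpness(gray):                         # stand-in for UISM
    gy, gx = np.gradient(gray)
    return np.hypot(gx, gy).mean()

def contrast(gray, eps=1e-6):                # stand-in for UIConM
    return (gray.max() - gray.min()) / (gray.max() + gray.min() + eps)

def uiqm_style_score(rgb, c1=0.0282, c2=0.2953, c3=3.5753):
    """Weighted linear combination of colorfulness, sharpness, and contrast."""
    gray = rgb.mean(axis=-1)
    return c1 * colorfulness(rgb) + c2 * sharpness(gray) + c3 * contrast(gray)
```

Being no-reference, the score needs only the degraded image itself, which is what makes it usable on real survey imagery such as the wreckage photographs mentioned above.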
TL;DR: A new template-based methodology for segmenting the OD from digital retinal images using morphological and edge detection techniques followed by the Circular Hough Transform to obtain a circular OD boundary approximation is presented.
Abstract: Optic disc (OD) detection is an important step in developing systems for automated diagnosis of various serious ophthalmic pathologies. This paper presents a new template-based methodology for segmenting the OD from digital retinal images. This methodology uses morphological and edge detection techniques followed by the Circular Hough Transform to obtain a circular OD boundary approximation. It requires a pixel located within the OD as initial information. For this purpose, a location methodology based on a voting-type algorithm is also proposed. The algorithms were evaluated on the 1200 images of the publicly available MESSIDOR database. The location procedure succeeded in 99% of cases, taking an average computational time of 1.67 s with a standard deviation of 0.14 s. On the other hand, the segmentation algorithm yielded an average overlap between automated segmentations and true OD regions of 86%. The average computational time was 5.69 s with a standard deviation of 0.54 s. Moreover, a discussion on the advantages and disadvantages of the models most commonly used for OD segmentation is also presented in this paper.
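In the Circular Hough Transform step, each edge pixel votes for all candidate centers lying one radius away, and the accumulator peak approximates the circle's center. A minimal single-radius sketch (the full method additionally applies morphology and edge detection beforehand and searches over radii):

```python
import numpy as np

def hough_circle(edge, radius):
    """Single-radius circular Hough accumulator over a binary edge map."""
    h, w = edge.shape
    acc = np.zeros((h, w))
    thetas = np.linspace(0.0, 2.0 * np.pi, 100, endpoint=False)
    for y, x in zip(*np.nonzero(edge)):
        # Candidate centers one radius away from this edge pixel.
        cy = np.round(y - radius * np.sin(thetas)).astype(int)
        cx = np.round(x - radius * np.cos(thetas)).astype(int)
        ok = (cy >= 0) & (cy < h) & (cx >= 0) & (cx < w)
        np.add.at(acc, (cy[ok], cx[ok]), 1)  # unbuffered vote accumulation
    return acc
```

The voting-type location algorithm in the paper plays a similar role at an earlier stage, supplying the seed pixel inside the OD.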
TL;DR: A fusion-based method for enhancing various weakly illuminated images that requires only one input to obtain the enhanced image and represents a trade-off among detail enhancement, local contrast improvement and preserving the natural feel of the image.
Abstract: We propose a straightforward and efficient fusion-based method for enhancing weakly illuminated images that uses several mature image processing techniques. First, we employ an illumination estimating algorithm based on morphological closing to decompose an observed image into a reflectance image and an illumination image. We then derive two inputs that represent luminance-improved and contrast-enhanced versions of the first decomposed illumination using the sigmoid function and adaptive histogram equalization. Designing two weights based on these inputs, we produce an adjusted illumination by fusing the derived inputs with the corresponding weights in a multi-scale fashion. Through a proper weighting and fusion strategy, we blend the advantages of different techniques to produce the adjusted illumination. The final enhanced image is obtained by compensating the adjusted illumination back to the reflectance. Through this synthesis, the enhanced image represents a trade-off among detail enhancement, local contrast improvement and preserving the natural feel of the image. In the proposed fusion-based framework, images under different weak illumination conditions such as backlighting, non-uniform illumination and nighttime can be enhanced. Highlights: A fusion-based method for enhancing various weakly illuminated images is proposed. The proposed method requires only one input to obtain the enhanced image. Different mature image processing techniques can be blended in our framework. Our method has an efficient computation time for practical applications.
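The fusion step itself reduces to a per-pixel weighted average of the derived inputs with weights normalized to sum to one; the paper performs this in a multi-scale fashion, but a single-scale sketch conveys the idea.

```python
import numpy as np

def fuse(inputs, weights, eps=1e-6):
    """Blend derived inputs with per-pixel weights normalized to sum to 1
    (a single-scale simplification of the paper's multi-scale fusion)."""
    inputs = np.stack(inputs)
    weights = np.stack(weights).astype(float)
    weights /= weights.sum(axis=0) + eps   # normalize across inputs
    return (weights * inputs).sum(axis=0)
```

In the full method the two inputs are the sigmoid-adjusted and histogram-equalized illuminations, and the fused result is recombined with the reflectance to form the enhanced image.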
TL;DR: A new NR grayscale image contrast measure, Root Mean Enhancement (RME); an NR color RME contrast measure (CRME), which explores the three-dimensional contrast relationships of the RGB color channels; and an NR color quality measure, Color Quality Enhancement (CQE), based on the linear combination of colorfulness, sharpness and contrast.
Abstract: No-reference (NR) image quality assessment is essential in evaluating the performance of image enhancement and retrieval algorithms. Much effort has been made in recent years to develop objective NR grayscale and color image quality metrics that correlate with perceived quality evaluations. Unfortunately, only limited success has been achieved and most existing NR quality assessment is feasible only when prior knowledge about the types of image distortion is available. This paper presents: a) a new NR grayscale image contrast measure, Root Mean Enhancement (RME); b) an NR color RME contrast measure, CRME, which explores the three-dimensional contrast relationships of the RGB color channels; c) an NR color quality measure, Color Quality Enhancement (CQE), which is based on the linear combination of colorfulness, sharpness and contrast. Computer simulations demonstrate that each measure has its own advantages: the CRME measure is fast and suitable for real time processing of low contrast images; the CQE measure can be used for a wider variety of distorted images. The effectiveness of the presented measures is demonstrated by using the TID2008 database. Experimental results also show strong correlations between the presented measures and the Mean Opinion Score (MOS).
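RME-style measures aggregate a block-wise contrast over the image. The sketch below takes the root of the mean squared Michelson contrast over non-overlapping blocks as an illustration; the published RME formula may differ in its exact contrast term.

```python
import numpy as np

def rme(gray, block=4, eps=1e-6):
    """Block-based root-mean contrast in the spirit of RME: the root of the
    mean squared Michelson contrast over non-overlapping blocks."""
    h, w = gray.shape
    vals = []
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            b = gray[i:i + block, j:j + block].astype(float)
            vals.append(((b.max() - b.min()) / (b.max() + b.min() + eps)) ** 2)
    return float(np.sqrt(np.mean(vals)))
```

Because it needs no reference image, a measure of this form can score enhancement outputs directly, which is what makes it usable for real-time processing of low-contrast images.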