
Showing papers on "Image processing published in 2005"


Journal ArticleDOI
TL;DR: A new method was developed to acquire images automatically at a series of specimen tilts, as required for tomographic reconstruction, using changes in specimen position at previous tilt angles to predict the position at the current tilt angle.
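To make the idea concrete, here is a minimal sketch (not the paper's actual prediction model) of extrapolating the stage position at the next tilt from positions measured at earlier tilts; the polynomial order and the example numbers are arbitrary assumptions.

```python
import numpy as np

def predict_position(tilt_angles, positions, next_angle, order=2):
    """Illustrative only: extrapolate the specimen/stage position at the next
    tilt angle from positions measured at previous angles via a low-order
    polynomial fit.  The published method uses its own prediction model; this
    is a stand-in to show the principle."""
    deg = min(order, len(tilt_angles) - 1)
    coeffs = np.polyfit(tilt_angles, positions, deg=deg)
    return np.polyval(coeffs, next_angle)

# Hypothetical usage: x positions (nm) measured at the last few tilt angles
angles = np.array([-60.0, -58.0, -56.0, -54.0])
x_pos = np.array([120.0, 131.0, 140.0, 148.0])
print(predict_position(angles, x_pos, next_angle=-52.0))
```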

3,995 citations


Journal ArticleDOI
TL;DR: A new iterative regularization procedure for inverse problems based on the use of Bregman distances is introduced, with particular focus on problems arising in image processing.
Abstract: We introduce a new iterative regularization procedure for inverse problems based on the use of Bregman distances, with particular focus on problems arising in image processing. We are motivated by the problem of restoring noisy and blurry images via variational methods by using total variation regularization. We obtain rigorous convergence results and effective stopping criteria for the general procedure. The numerical results for denoising appear to give significant improvement over standard models, and preliminary results for deblurring/denoising are very encouraging.
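As a rough illustration of the iterative regularization loop, the sketch below wraps an off-the-shelf TV denoiser (scikit-image's denoise_tv_chambolle, standing in for the paper's own variational solver) in a Bregman-style update that adds the accumulated residual back to the data before each re-solve; the weight and iteration count are placeholder choices.

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle

def bregman_tv_denoise(f, weight=0.1, n_iter=5):
    """Sketch of the iterative (Bregman) regularization loop for TV denoising:
    at each step the accumulated residual is added back to the noisy data
    before the ROF problem is solved again.  denoise_tv_chambolle stands in
    for the paper's own solver (an assumption of this sketch)."""
    f = f.astype(float)
    v = np.zeros_like(f)
    u = np.zeros_like(f)
    for _ in range(n_iter):
        u = denoise_tv_chambolle(f + v, weight=weight)
        v = v + f - u          # Bregman update: accumulate the residual
    return u
```

The paper derives effective stopping criteria for this loop; the fixed iteration count above is only a stand-in.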

1,858 citations


Journal ArticleDOI
TL;DR: A computationally efficient, two-dimensional, feature point tracking algorithm for the automated detection and quantitative analysis of particle trajectories as recorded by video imaging in cell biology.

1,397 citations


Journal ArticleDOI
TL;DR: This paper proposes a novel information fidelity criterion that is based on natural scene statistics and derives a novel QA algorithm that provides clear advantages over the traditional approaches and outperforms current methods in testing.
Abstract: Measurement of visual quality is of fundamental importance to numerous image and video processing applications. The goal of quality assessment (QA) research is to design algorithms that can automatically assess the quality of images or videos in a perceptually consistent manner. Traditionally, image QA algorithms interpret image quality as fidelity or similarity with a "reference" or "perfect" image in some perceptual space. Such "full-reference" QA methods attempt to achieve consistency in quality prediction by modeling salient physiological and psychovisual features of the human visual system (HVS), or by arbitrary signal fidelity criteria. In this paper, we approach the problem of image QA by proposing a novel information fidelity criterion that is based on natural scene statistics. QA systems are invariably involved with judging the visual quality of images and videos that are meant for "human consumption". Researchers have developed sophisticated models to capture the statistics of natural signals, that is, pictures and videos of the visual environment. Using these statistical models in an information-theoretic setting, we derive a novel QA algorithm that provides clear advantages over the traditional approaches. In particular, it is parameterless and outperforms current methods in our testing. We validate the performance of our algorithm with an extensive subjective study involving 779 images. We also show that, although our approach distinctly departs from traditional HVS-based methods, it is functionally similar to them under certain conditions, yet it outperforms them due to improved modeling. The code and the data from the subjective study are available at [1].
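Schematically (with notation simplified relative to the paper), reference wavelet coefficients are modelled as a Gaussian scale mixture, C = sU with U ~ N(0, \sigma_U^2), and the distorted coefficients through a gain-plus-additive-noise channel, D = gC + V with V ~ N(0, \sigma_V^2). The per-coefficient information fidelity is then of the form

I(C; D \mid s) = \tfrac{1}{2}\log_2\!\left(1 + \frac{g^2 s^2 \sigma_U^2}{\sigma_V^2}\right),

and the criterion accumulates this quantity over coefficients and subbands, which is what makes it free of trained parameters. This is a hedged paraphrase of the construction rather than the paper's exact formulation.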

1,334 citations



Journal ArticleDOI
TL;DR: The proposed texture representation is evaluated in retrieval and classification tasks using the entire Brodatz database and a publicly available collection of 1,000 photographs of textured surfaces taken from different viewpoints.
Abstract: This paper introduces a texture representation suitable for recognizing images of textured surfaces under a wide range of transformations, including viewpoint changes and nonrigid deformations. At the feature extraction stage, a sparse set of affine Harris and Laplacian regions is found in the image. Each of these regions can be thought of as a texture element having a characteristic elliptic shape and a distinctive appearance pattern. This pattern is captured in an affine-invariant fashion via a process of shape normalization followed by the computation of two novel descriptors, the spin image and the RIFT descriptor. When affine invariance is not required, the original elliptical shape serves as an additional discriminative feature for texture recognition. The proposed approach is evaluated in retrieval and classification tasks using the entire Brodatz database and a publicly available collection of 1,000 photographs of textured surfaces taken from different viewpoints.
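A minimal sketch of one of the two descriptors, the intensity-domain spin image, is given below; the bin counts and normalization are illustrative assumptions, and the soft binning used in the paper is omitted.

```python
import numpy as np

def spin_image(patch, n_dist=10, n_int=10):
    """Sketch of an intensity-domain spin image: a 2D histogram over
    (distance from the patch centre, normalized intensity), which is
    rotation invariant by construction."""
    h, w = patch.shape
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    dist = np.hypot(yy - cy, xx - cx).ravel()
    dist = dist / dist.max()                               # distances scaled to [0, 1]
    inten = patch.astype(float).ravel()
    inten = (inten - inten.min()) / (np.ptp(inten) + 1e-8)  # intensities scaled to [0, 1]
    hist, _, _ = np.histogram2d(dist, inten, bins=(n_dist, n_int),
                                range=[[0, 1], [0, 1]])
    return hist / hist.sum()                               # normalized descriptor
```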

1,185 citations


MonographDOI
01 Sep 2005
TL;DR: A monograph on mathematical image processing, covering modern image analysis tools, image modeling and representation, denoising, deblurring, inpainting, and segmentation.
Abstract: Preface 1. Introduction 2. Some modern image analysis tools 3. Image modeling and representation 4. Image denoising 5. Image deblurring 6. Image inpainting 7. Image processing: segmentation Bibliography Index.

1,025 citations


Journal ArticleDOI
TL;DR: A new class of bases are introduced, called bandelet bases, which decompose the image along multiscale vectors that are elongated in the direction of a geometric flow, which leads to optimal approximation rates for geometrically regular images.
Abstract: This paper introduces a new class of bases, called bandelet bases, which decompose the image along multiscale vectors that are elongated in the direction of a geometric flow. This geometric flow indicates directions in which the image gray levels have regular variations. The image decomposition in a bandelet basis is implemented with a fast subband-filtering algorithm. Bandelet bases lead to optimal approximation rates for geometrically regular images. For image compression and noise removal applications, the geometric flow is optimized with fast algorithms so that the resulting bandelet basis produces minimum distortion. Comparisons are made with wavelet image compression and noise-removal algorithms.

922 citations


Patent
07 Jul 2005
TL;DR: In this article, a surgical imaging device includes at least one light source for illuminating an object, at least two image sensors configured to generate image data corresponding to the object in the form of an image frame, and a video processor configured to receive from each image sensor the image data corresponding to the image frames and to process the data so as to generate a composite image.
Abstract: A surgical imaging device includes at least one light source for illuminating an object, at least two image sensors configured to generate image data corresponding to the object in the form of an image frame, and a video processor configured to receive from each image sensor the image data corresponding to the image frames and to process the image data so as to generate a composite image. The video processor may be configured to normalize, stabilize, orient and/or stitch the image data received from each image sensor so as to generate the composite image. Preferably, the video processor stitches the image data received from each image sensor by processing a portion of image data received from one image sensor that overlaps with a portion of image data received from another image sensor. Alternatively, the surgical device may be, e.g., a circular stapler, that includes a first part, e.g., a DLU portion, having an image sensor, and a second part, e.g., an anvil portion, that is moveable relative to the first part. The second part includes an arrangement, e.g., a bore extending therethrough, for conveying the image to the image sensor. The arrangement enables the image to be received by the image sensor without removing the surgical device from the surgical site.

893 citations


Journal ArticleDOI
John C. Fiala1
TL;DR: Reconstruct is a free editor designed to facilitate montaging, alignment, analysis and visualization of serial sections, which can reduce the time and resources expended for serial section studies and allows a larger tissue volume to be analysed more quickly.
Abstract: Many microscopy studies require reconstruction from serial sections, a method of analysis that is sometimes difficult and time-consuming. When each section is cut, mounted and imaged separately, section images must be montaged and realigned to accurately analyse and visualize the three-dimensional (3D) structure. Reconstruct is a free editor designed to facilitate montaging, alignment, analysis and visualization of serial sections. The methods used by Reconstruct for organizing, transforming and displaying data enable the analysis of series with large numbers of sections and images over a large range of magnifications by making efficient use of computer memory. Alignments can correct for some types of non-linear deformations, including cracks and folds, as often encountered in serial electron microscopy. A large number of different structures can be easily traced and placed together in a single 3D scene that can be animated or saved. As a flexible editor, Reconstruct can reduce the time and resources expended for serial section studies and allows a larger tissue volume to be analysed more quickly.

854 citations


Journal ArticleDOI
TL;DR: This PDF file contains the editorial “Image Processing: Principles and Applications” for JEI Vol. 15, Issue 03.
Abstract: This PDF file contains the editorial “Image Processing: Principles and Applications” for JEI Vol. 15 Issue 03

Journal ArticleDOI
TL;DR: This work describes how resampling introduces specific statistical correlations, and describes how these correlations can be automatically detected in any portion of an image, and expects this technique to be among the first of many tools that will be needed to expose digital forgeries.
Abstract: The unique stature of photographs as a definitive recording of events is being diminished due, in part, to the ease with which digital images can be manipulated and altered. Although good forgeries may leave no visual clues of having been tampered with, they may, nevertheless, alter the underlying statistics of an image. For example, we describe how resampling (e.g., scaling or rotating) introduces specific statistical correlations, and describe how these correlations can be automatically detected in any portion of an image. This technique works in the absence of any digital watermark or signature. We show the efficacy of this approach on uncompressed TIFF images, and JPEG and GIF images with minimal compression. We expect this technique to be among the first of many tools that will be needed to expose digital forgeries.
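The following sketch illustrates the underlying idea rather than the paper's algorithm: resampled pixels are close to linear combinations of their neighbours, so a prediction residual acquires periodic structure that shows up as peaks in its Fourier spectrum. The paper estimates the predictor and a per-pixel probability map with an EM algorithm; the fixed predictor and the crude probability map below are assumptions of this sketch.

```python
import numpy as np
from scipy.ndimage import convolve

def resampling_spectrum(gray):
    """Simplified sketch of resampling detection: compute a linear-prediction
    residual with a fixed kernel, turn it into a rough 'probability' map and
    inspect its Fourier spectrum.  Periodic peaks away from the centre suggest
    that the region has been resampled (scaled/rotated)."""
    pred = np.array([[0.00, 0.25, 0.00],
                     [0.25, 0.00, 0.25],
                     [0.00, 0.25, 0.00]])
    g = gray.astype(float)
    residual = g - convolve(g, pred)
    pmap = np.exp(-residual**2 / (2 * residual.var() + 1e-8))  # crude probability map
    return np.abs(np.fft.fftshift(np.fft.fft2(pmap)))
```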

Journal ArticleDOI
TL;DR: This paper presents a comprehensive framework, the general image fusion (GIF) method, which makes it possible to categorize, compare, and evaluate the existing image fusion methods.
Abstract: There are many image fusion methods that can be used to produce high-resolution multispectral images from a high-resolution panchromatic image and low-resolution multispectral images. Starting from the physical principle of image formation, this paper presents a comprehensive framework, the general image fusion (GIF) method, which makes it possible to categorize, compare, and evaluate the existing image fusion methods. Using the GIF method, it is shown that the pixel values of the high-resolution multispectral images are determined by the corresponding pixel values of the low-resolution panchromatic image, the approximation of the high-resolution panchromatic image at the low-resolution level. Many of the existing image fusion methods, including, but not limited to, intensity-hue-saturation, Brovey transform, principal component analysis, high-pass filtering, high-pass modulation, the à trous algorithm-based wavelet transform, and multiresolution analysis-based intensity modulation (MRAIM), are evaluated and found to be particular cases of the GIF method. The performance of each image fusion method is theoretically analyzed based on how the corresponding low-resolution panchromatic image is computed and how the modulation coefficients are set. An experiment based on IKONOS images shows that there is consistency between the theoretical analysis and the experimental results and that the MRAIM method synthesizes the images closest to those the corresponding multisensors would observe at the high-resolution level.
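As a concrete example of one special case named in the abstract, high-pass modulation, the sketch below modulates each upsampled multispectral band by the ratio of the panchromatic image to its low-resolution approximation; the box-filter approximation used here is an assumption, and the choice of that approximation is precisely what distinguishes the methods analysed in the paper.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def hpm_fusion(ms_band_up, pan, ratio=4):
    """Sketch of high-pass modulation fusion, one particular case of the GIF
    framework: each multispectral band (already upsampled to the panchromatic
    grid) is modulated by PAN divided by its low-resolution approximation.
    A box filter serves as that approximation here (an assumption)."""
    pan = pan.astype(float)
    pan_low = uniform_filter(pan, size=ratio)   # approximation of PAN at the low-resolution level
    return ms_band_up.astype(float) * pan / (pan_low + 1e-8)
```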

Journal ArticleDOI
TL;DR: The theoretical behaviour of the FSC is discussed in conjunction with the various factors which influence it: the number of "voxels" in a given Fourier shell, the symmetry of the structure, and the size of the structure within the reconstruction volume.
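For reference, the FSC between two (half-set) reconstructions with Fourier transforms F_1 and F_2, evaluated over the shell S_r of spatial frequencies with radius r, is

FSC(r) = \frac{\sum_{k \in S_r} F_1(k)\, F_2^{*}(k)}{\sqrt{\sum_{k \in S_r} |F_1(k)|^2 \; \sum_{k \in S_r} |F_2(k)|^2}},

and the factors listed above enter mainly through the effective number of independent terms in these sums.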

Journal ArticleDOI
TL;DR: The authors present a technique which takes into account the physical electromagnetic spectrum responses of sensors during the fusion process, which produces images closer to the image obtained by the ideal sensor than those obtained by usual wavelet-based image fusion methods.
Abstract: Usual image fusion methods inject features from a high spatial resolution panchromatic sensor into every low spatial resolution multispectral band trying to preserve spectral signatures and improve spatial resolution to that of the panchromatic sensor. The objective is to obtain the image that would be observed by a sensor with the same spectral response (i.e., spectral sensitivity and quantum efficiency) as the multispectral sensors and the spatial resolution of the panchromatic sensor. But in these methods, features from electromagnetic spectrum regions not covered by multispectral sensors are injected into them, and physical spectral responses of the sensors are not considered during this process. This produces some undesirable effects, such as resolution overinjection images and slightly modified spectral signatures in some features. The authors present a technique which takes into account the physical electromagnetic spectrum responses of sensors during the fusion process, which produces images closer to the image obtained by the ideal sensor than those obtained by usual wavelet-based image fusion methods. This technique is used to define a new wavelet-based image fusion method.

Journal ArticleDOI
TL;DR: An alternative respiratory correlated CBCT procedure is developed that reduces respiration induced geometrical uncertainties, enabling safe delivery of 4D radiotherapy such as gated radiotherapy with small margins.
Abstract: A cone beam computed tomography (CBCT) scanner integrated with a linear accelerator is a powerful tool for image guided radiotherapy. Respiratory motion, however, induces artifacts in CBCT, while the respiratory correlated procedures, developed to reduce motion artifacts in axial and helical CT, are not suitable for such CBCT scanners. We have developed an alternative respiratory correlated procedure for CBCT and evaluated its performance. This respiratory correlated CBCT procedure consists of retrospective sorting in projection space, yielding subsets of projections that each corresponds to a certain breathing phase. Subsequently, these subsets are reconstructed into a four-dimensional (4D) CBCT dataset. The breathing signal, required for respiratory correlation, was directly extracted from the 2D projection data, removing the need for an additional respiratory monitor system. Due to the reduced number of projections per phase, the contrast-to-noise ratio in a 4D scan was reduced by a factor of 2.6-3.7 compared to a 3D scan based on all projections. Projection data of a spherical phantom moving with a 3 and 5 s period with and without simulated breathing irregularities were acquired and reconstructed into 3D and 4D CBCT datasets. The positional deviations of the phantom's center of gravity between 4D CBCT and fluoroscopy were small: 0.13 +/- 0.09 mm for the regular motion and 0.39 +/- 0.24 mm for the irregular motion. Motion artifacts, clearly present in the 3D CBCT datasets, were substantially reduced in the 4D datasets, even in the presence of breathing irregularities, such that the shape of the moving structures could be identified more accurately. Moreover, the 4D CBCT dataset provided information on the 3D trajectory of the moving structures, absent in the 3D data. Considerable breathing irregularities, however, substantially reduce the image quality. Data presented for three different lung cancer patients were in line with the results obtained from the phantom study. In conclusion, we have successfully implemented a respiratory correlated CBCT procedure yielding a 4D dataset. With respiratory correlated CBCT on a linear accelerator, the mean position, trajectory, and shape of a moving tumor can be verified just prior to treatment. Such verification reduces respiration induced geometrical uncertainties, enabling safe delivery of 4D radiotherapy such as gated radiotherapy with small margins.
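A minimal sketch of the sorting step is given below; a crude local-maximum peak detector stands in for the paper's extraction of the breathing signal from the projections, and reconstructing each phase subset into the 4D volume is not shown.

```python
import numpy as np

def sort_projections_by_phase(breathing_signal, n_bins=10):
    """Sketch of respiratory-correlated sorting: assign each projection a
    breathing phase in [0, 1) between successive end-inhale peaks and group
    projection indices into phase bins.  Each bin would then be reconstructed
    separately into one phase of the 4D CBCT dataset."""
    s = np.asarray(breathing_signal, dtype=float)
    peaks = np.flatnonzero((s[1:-1] > s[:-2]) & (s[1:-1] >= s[2:])) + 1
    subsets = [[] for _ in range(n_bins)]
    for a, b in zip(peaks[:-1], peaks[1:]):
        for idx in range(a, b):                   # one full breathing cycle
            phase = (idx - a) / float(b - a)      # phase in [0, 1)
            subsets[int(phase * n_bins)].append(idx)
    return [np.array(sub) for sub in subsets]
```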

Journal ArticleDOI
TL;DR: By reducing the signal strength using higher image resolution, the ratio of physiologic to image noise could be reduced to a regime where increased sensitivity afforded by higher field strength still translated to improved SNR in the fMRI time-series.
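The argument can be made explicit with the commonly used fMRI noise model in which physiological noise scales with the signal strength S (a hedged sketch, not necessarily the paper's exact formulation): with thermal noise \sigma_0 and physiological noise \lambda S,

tSNR = \frac{S}{\sqrt{\sigma_0^2 + \lambda^2 S^2}},

which saturates at 1/\lambda when S \gg \sigma_0. Smaller voxels reduce S relative to \sigma_0, pushing acquisitions toward the thermal-noise-dominated regime in which the sensitivity gain of higher field strength still translates into higher time-series SNR.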

Book
01 Jan 2005
TL;DR: Three stages: image restoration (correcting errors and distortion), image enhancement (emphasizing the information that is of interest), and image analysis (extracting specific information for subsequent analyses).
Abstract: Three stages:
1. Image Restoration (correcting errors and distortion): warping and correcting systematic distortion related to viewing geometry; correcting "drop outs", striping and other instrument noise; applying corrections to compensate for atmospheric absorption and radiance.
2. Image Enhancement (emphasizing the information that is of interest): spatial filtering to enhance or suppress features on the basis of size; contrast enhancement to emphasize subtle tonal variations.
3. Image Analysis (extracting specific information for subsequent analyses): principal component transformations; multispectral classification; temporal variation.
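A minimal sketch of the enhancement stage (contrast stretching and simple spatial filtering) is shown below; the percentile cut-offs and filter size are arbitrary illustrative choices.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def contrast_stretch(band, low_pct=2, high_pct=98):
    """Percentile-based linear contrast stretch to emphasize subtle tonal
    variations; the cut-off percentiles are illustrative choices."""
    lo, hi = np.percentile(band, [low_pct, high_pct])
    return np.clip((band - lo) / (hi - lo + 1e-8), 0, 1)

def smooth(band, size=3):
    """Spatial filtering example: a mean filter suppresses small features and
    noise; subtracting its output from the band would emphasize them instead."""
    return uniform_filter(band.astype(float), size=size)
```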

Journal ArticleDOI
TL;DR: This work quantifies the specific correlations introduced by CFA interpolation, and describes how these correlations can be automatically detected in any portion of an image and shows the efficacy of this approach in revealing traces of digital tampering in lossless and lossy compressed color images interpolated with several different CFA algorithms.
Abstract: With the advent of low-cost and high-resolution digital cameras, and sophisticated photo editing software, digital images can be easily manipulated and altered. Although good forgeries may leave no visual clues of having been tampered with, they may, nevertheless, alter the underlying statistics of an image. Most digital cameras, for example, employ a single sensor in conjunction with a color filter array (CFA), and then interpolate the missing color samples to obtain a three channel color image. This interpolation introduces specific correlations which are likely to be destroyed when tampering with an image. We quantify the specific correlations introduced by CFA interpolation, and describe how these correlations, or lack thereof, can be automatically detected in any portion of an image. We show the efficacy of this approach in revealing traces of digital tampering in lossless and lossy compressed color images interpolated with several different CFA algorithms.
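The sketch below illustrates the underlying idea rather than the paper's detector: after bilinear demosaicking, green samples at interpolated Bayer positions are nearly exact averages of their neighbours, so the prediction residual is locked to the 2x2 Bayer lattice. The fixed bilinear kernel and the per-phase summary are assumptions of this sketch; the paper instead estimates the interpolation coefficients and a per-pixel probability map with EM.

```python
import numpy as np
from scipy.ndimage import convolve

def cfa_phase_residuals(green):
    """Mean absolute prediction residual at each of the four Bayer phases of
    the green channel.  An intact CFA interpolation gives near-zero values at
    the interpolated phases; locally uniform values across all phases suggest
    the correlations have been destroyed (possible tampering)."""
    kernel = np.array([[0, 0.25, 0],
                       [0.25, 0, 0.25],
                       [0, 0.25, 0]])
    g = green.astype(float)
    residual = np.abs(g - convolve(g, kernel))
    return {(i, j): residual[i::2, j::2].mean() for i in (0, 1) for j in (0, 1)}
```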

Journal ArticleDOI
TL;DR: It is claimed that natural scenes contain nonlinear dependencies that are disturbed by the compression process, and that this disturbance can be quantified and related to human perceptions of quality.
Abstract: Measurement of image or video quality is crucial for many image-processing algorithms, such as acquisition, compression, restoration, enhancement, and reproduction. Traditionally, image quality assessment (QA) algorithms interpret image quality as similarity with a "reference" or "perfect" image. The obvious limitation of this approach is that the reference image or video may not be available to the QA algorithm. The field of blind, or no-reference, QA, in which image quality is predicted without the reference image or video, has been largely unexplored, with algorithms focussing mostly on measuring the blocking artifacts. Emerging image and video compression technologies can avoid the dreaded blocking artifact by using various mechanisms, but they introduce other types of distortions, specifically blurring and ringing. In this paper, we propose to use natural scene statistics (NSS) to blindly measure the quality of images compressed by JPEG2000 (or any other wavelet based) image coder. We claim that natural scenes contain nonlinear dependencies that are disturbed by the compression process, and that this disturbance can be quantified and related to human perceptions of quality. We train and test our algorithm with data from human subjects, and show that reasonably comprehensive NSS models can help us in making blind, but accurate, predictions of quality. Our algorithm performs close to the limit imposed on useful prediction by the variability between human subjects.
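As a very rough illustration of the NSS idea (not the paper's model), the sketch below extracts crude dependency features between neighbouring wavelet coefficients per subband; JPEG2000 quantization tends to weaken such dependencies. Mapping features of this kind to a quality score requires the calibrated statistical model and the human-subject data described in the paper, none of which is reproduced here.

```python
import numpy as np
import pywt

def subband_neighbor_features(gray, wavelet="db4", level=3):
    """Crude illustrative features only: per-subband correlation between the
    log-magnitudes of horizontally adjacent wavelet coefficients, a simple
    proxy for the nonlinear dependencies that compression disturbs."""
    coeffs = pywt.wavedec2(gray.astype(float), wavelet, level=level)
    feats = []
    for detail in coeffs[1:]:
        for band in detail:                       # (cH, cV, cD) at each scale
            m = np.log(np.abs(band) + 1e-6)
            feats.append(np.corrcoef(m[:, :-1].ravel(), m[:, 1:].ravel())[0, 1])
    return np.array(feats)
```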

BookDOI
01 Nov 2005
TL;DR: This comprehensive volume provides a detailed discourse on the mathematical models used in computational vision from leading educators and active research experts in this field and serves as a complete reference work for professionals.
Abstract: This comprehensive volume is an essential reference tool for professional and academic researchers in the field of computer vision, image processing, and applied mathematics. Continuing rapid advances in image processing have been enhanced by the theoretical efforts of mathematicians and engineers. This marriage of mathematics and computer vision - computational vision - has resulted in a discrete approach to image processing that is more reliable when leveraged in practical tasks. The volume provides a detailed discourse on the mathematical models used in computational vision from leading educators and active research experts in this field. Topical areas include: image reconstruction, segmentation and object extraction, shape modeling and registration, motion analysis and tracking, and 3D from images, geometry and reconstruction. The book also includes a study of applications in medical image analysis. Handbook of Mathematical Models in Computer Vision provides a graduate-level treatment of this subject as well as serving as a complete reference work for professionals.

Journal ArticleDOI
TL;DR: A new approach termed “controlled aliasing in parallel imaging results in higher acceleration” (CAIPIRINHA) is presented, which modifies the appearance of aliasing artifacts during the acquisition to improve the subsequent parallel image reconstruction procedure.
Abstract: In all current parallel imaging techniques, aliasing artifacts resulting from an undersampled acquisition are removed by means of a specialized image reconstruction algorithm. In this study a new approach termed "controlled aliasing in parallel imaging results in higher acceleration" (CAIPIRINHA) is presented. This technique modifies the appearance of aliasing artifacts during the acquisition to improve the subsequent parallel image reconstruction procedure. This new parallel multi-slice technique is more efficient compared to other multi-slice parallel imaging concepts that use only a pure postprocessing approach. In this new approach, multiple slices of arbitrary thickness and distance are excited simultaneously with the use of multi-band radiofrequency (RF) pulses similar to Hadamard pulses. These data are then undersampled, yielding superimposed slices that appear shifted with respect to each other. The shift of the aliased slices is controlled by modulating the phase of the individual slices in the multi-band excitation pulse from echo to echo. We show that the reconstruction quality of the aliased slices is better using this shift. This may potentially allow one to use higher acceleration factors than are used in techniques without this excitation scheme. Additionally, slices that have essentially the same coil sensitivity profiles can be separated with this technique.
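A toy numerical illustration of the principle: alternating the phase of the second slice by pi from one phase-encode line to the next shifts that slice by FOV/2 in the collapsed image, so the two simultaneously excited slices overlap less. The simulation below works purely in k-space; RF pulses, coil sensitivities and the actual parallel reconstruction are not modelled.

```python
import numpy as np

def caipirinha_shift_demo(slice1, slice2):
    """Toy illustration of the CAIPIRINHA principle: modulating the phase of
    slice 2 by 0, pi, 0, pi, ... across phase-encode lines shifts it by FOV/2
    along the phase-encode axis in the collapsed (aliased) image."""
    k1 = np.fft.fft2(slice1)
    k2 = np.fft.fft2(slice2)
    ny = k1.shape[0]
    phase = np.exp(1j * np.pi * np.arange(ny))[:, None]   # alternating phase per ky line
    collapsed = np.fft.ifft2(k1 + phase * k2)             # slice 2 appears shifted by FOV/2
    return np.abs(collapsed)
```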

Reference BookDOI
01 Aug 2005
TL;DR: This reference book covers the technology of digital still cameras, from optics and CCD/CMOS image sensors to color theory, image-processing algorithms and engines, and image-quality evaluation, together with current and future designs of these devices and their applications.
Abstract: Preface.
DIGITAL STILL CAMERAS AT A GLANCE (Kenji Toyoda): What Is a Digital Still Camera? History of Digital Still Cameras. Variations of Digital Still Cameras. Basic Structure of Digital Still Cameras. Applications of Digital Still Cameras.
OPTICS IN DIGITAL STILL CAMERAS (Takeshi Koyama): Optical System Fundamentals and Standards for Evaluating Optical Performance. Characteristics of DSC Imaging Optics. Important Aspects of Imaging Optics Design for DSCs. DSC Imaging Lens Zoom Types and Their Applications. Conclusion. References.
BASICS OF IMAGE SENSORS (Junichi Nakamura): Functions of an Image Sensor. Photodetector in a Pixel. Noise. Photoconversion Characteristics. Array Performance. Optical Format and Pixel Size. CCD Image Sensor vs. CMOS Image Sensor. References.
CCD IMAGE SENSORS (Tetsuo Yamada): Basics of CCDs. Structures and Characteristics of CCD Image Sensor. DSC Applications. Future Prospects. References.
CMOS IMAGE SENSORS (Isao Takayanagi): Introduction to CMOS Image Sensors. CMOS Active Pixel Technology. Signal Processing and Noise Behavior. CMOS Image Sensors for DSC Applications. Future Prospects of CMOS Image Sensors for DSC Applications. References.
EVALUATION OF IMAGE SENSORS (Toyokazu Mizoguchi): What is Evaluation of Image Sensors? Evaluation Environment. Evaluation Methods.
COLOR THEORY AND ITS APPLICATION TO DIGITAL STILL CAMERAS (Po-Chieh Hung): Color Theory. Camera Spectral Sensitivity. Characterization of a Camera. White Balance. Conversion for Display (Color Management). Summary. References.
IMAGE-PROCESSING ALGORITHMS (Kazuhiro Sato): Basic Image-Processing Algorithms. Camera Control Algorithm. Advanced Image Processing: How to Obtain Improved Image Quality. References.
IMAGE-PROCESSING ENGINES (Seiichiro Watanabe): Key Characteristics of an Image-Processing Engine. Imaging Engine Architecture Comparison. Analog Front End (AFE). Digital Back End (DBE). Future Design Engines. References.
EVALUATION OF IMAGE QUALITY (Hideaki Yoshida): What is Image Quality? General Items or Parameters. Detailed Items or Factors. Standards Relating to Image Quality.
SOME THOUGHTS ON FUTURE DIGITAL STILL CAMERAS (Eric R. Fossum): The Future of DSC Image Sensors. Some Future Digital Cameras. References.

Journal ArticleDOI
TL;DR: In this article, the authors evaluate the hazard of landslides at Penang, Malaysia, using a Geographical Information System (GIS) and remote sensing, and identify landslide locations from interpretation of aerial photographs and from field surveys.
Abstract: The aim of this study is to evaluate the hazard of landslides at Penang, Malaysia, using a Geographical Information System (GIS) and remote sensing. Landslide locations were identified in the study area from interpretation of aerial photographs and from field surveys. Topographical and geological data and satellite images were collected, processed and constructed into a spatial database using GIS and image processing. The factors chosen that influence landslide occurrence were: topographic slope, topographic aspect, topographic curvature and distance from drainage, all from the topographic database; lithology and distance from lineament, taken from the geologic database; land use from Thematic Mapper (TM) satellite images; and the vegetation index value from Systeme Probatoire de l'Observation de la Terre (SPOT) satellite images. Landslide hazardous areas were analysed and mapped using the landslide-occurrence factors by a logistic regression model. The results of the analysis were verified using the landslide location data.
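A minimal sketch of the mapping step under the assumption that the factors are available as co-registered per-pixel rasters; sampling strategy, encoding of categorical factors (e.g. lithology, land use) and validation are omitted.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def landslide_susceptibility(factor_stack, landslide_mask):
    """Sketch of landslide hazard mapping by logistic regression: fit the
    occurrence of mapped landslides against per-pixel factor values (slope,
    aspect, curvature, distances, NDVI, ...) and predict a susceptibility
    probability for every pixel."""
    n_factors, h, w = factor_stack.shape
    X = factor_stack.reshape(n_factors, -1).T           # pixels x factors
    y = landslide_mask.ravel().astype(int)              # 1 = mapped landslide location
    model = LogisticRegression(max_iter=1000).fit(X, y)
    return model.predict_proba(X)[:, 1].reshape(h, w)   # susceptibility map
```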

Journal ArticleDOI
TL;DR: Improvements to the nonlocal means image denoising method introduced by Buades et al. are presented and filters that eliminate unrelated neighborhoods from the weighted average are introduced.
Abstract: In this letter, improvements to the nonlocal means image denoising method introduced by Buades et al. are presented. The original nonlocal means method replaces a noisy pixel by the weighted average of pixels with related surrounding neighborhoods. While producing state-of-the-art denoising results, this method is computationally impractical. In order to accelerate the algorithm, we introduce filters that eliminate unrelated neighborhoods from the weighted average. These filters are based on local average gray values and gradients, preclassifying neighborhoods and thereby reducing the original quadratic complexity to a linear one and reducing the influence of less-related areas in the denoising of a given pixel. We present the underlying framework and experimental results for gray level and color images as well as for video.
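The sketch below shows the structure of the accelerated algorithm: a candidate neighbourhood is admitted to the weighted average only if cheap summary statistics match those of the reference neighbourhood. Mean and variance are used here as a stand-in for the paper's average-gray-value and gradient tests, and all thresholds are illustrative.

```python
import numpy as np

def nlm_preselect(img, t=7, f=3, h=10.0, mu=0.95, var_tol=0.5):
    """Sketch of nonlocal means with neighbourhood preselection: patches whose
    mean or variance differ too much from the reference patch are skipped
    before the (expensive) patch-distance computation."""
    img = img.astype(float)
    pad = t + f
    padded = np.pad(img, pad, mode="reflect")
    out = np.zeros_like(img)
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            pi, pj = i + pad, j + pad
            ref = padded[pi - f:pi + f + 1, pj - f:pj + f + 1]
            rmean, rvar = ref.mean(), ref.var() + 1e-8
            wsum, acc = 0.0, 0.0
            for di in range(-t, t + 1):
                for dj in range(-t, t + 1):
                    qi, qj = pi + di, pj + dj
                    cand = padded[qi - f:qi + f + 1, qj - f:qj + f + 1]
                    # preselection: skip clearly unrelated neighbourhoods
                    if not (mu < cand.mean() / (rmean + 1e-8) < 1.0 / mu):
                        continue
                    if not (var_tol < (cand.var() + 1e-8) / rvar < 1.0 / var_tol):
                        continue
                    d2 = np.mean((ref - cand) ** 2)
                    w = np.exp(-d2 / (h * h))
                    wsum += w
                    acc += w * padded[qi, qj]
            out[i, j] = acc / wsum if wsum > 0 else img[i, j]
    return out
```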

Journal ArticleDOI
TL;DR: This paper investigates high-capacity lossless data-embedding methods that allow one to embed large amounts of data into digital images in such a way that the original image can be reconstructed from the watermarked image.
Abstract: The proliferation of digital information in our society has enticed a lot of research into data-embedding techniques that add information to digital content, like images, audio, and video. In this paper, we investigate high-capacity lossless data-embedding methods that allow one to embed large amounts of data into digital images (or video) in such a way that the original image can be reconstructed from the watermarked image. We present two new techniques: one based on least significant bit prediction and Sweldens' lifting scheme and another that is an improvement of Tian's technique of difference expansion. The new techniques are then compared with various existing embedding methods by looking at capacity-distortion behavior and capacity control.
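As a worked example of the difference-expansion idea the paper builds on (Tian's technique), the sketch below embeds one bit into a pixel pair by expanding their difference while preserving their average; overflow checks and the location map required by a complete scheme are omitted.

```python
def de_embed(x, y, bit):
    """Difference expansion on one pixel pair: the integer average is
    preserved and the expanded difference carries one payload bit."""
    l = (x + y) // 2
    h = x - y
    h2 = 2 * h + bit                        # expanded difference carries the bit
    return l + (h2 + 1) // 2, l - h2 // 2   # watermarked pair (x', y')

def de_extract(xw, yw):
    """Inverse operation: recover the bit and the original pair exactly."""
    l = (xw + yw) // 2
    h2 = xw - yw
    bit, h = h2 & 1, h2 // 2
    return l + (h + 1) // 2, l - h // 2, bit

# Round trip: the original pixels are reconstructed exactly (lossless embedding)
assert de_extract(*de_embed(10, 7, 1)) == (10, 7, 1)
```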

Proceedings ArticleDOI
Andrew D. Wilson1
23 Oct 2005
TL;DR: PlayAnywhere is introduced, a front-projected computer vision-based interactive table system which uses a new commercially available projection technology to obtain a compact, self-contained form factor and makes a number of contributions related to image processing techniques for front- Projection-vision table systems.
Abstract: We introduce PlayAnywhere, a front-projected computer vision-based interactive table system which uses a new commercially available projection technology to obtain a compact, self-contained form factor. PlayAnywhere's configuration addresses installation, calibration, and portability issues that are typical of most vision-based table systems, and thereby is particularly motivated in consumer applications. PlayAnywhere also makes a number of contributions related to image processing techniques for front-projected vision-based table systems, including a shadow-based touch detection algorithm, a fast, simple visual bar code scheme tailored to projection-vision table systems, the ability to continuously track sheets of paper, and an optical flow-based algorithm for the manipulation of onscreen objects that does not rely on fragile tracking algorithms.

Journal ArticleDOI
TL;DR: The TOM software toolbox integrates established algorithms and new concepts tailored to the special needs of low-dose electron tomography, providing a user-friendly unified platform for all processing steps: acquisition, alignment, reconstruction, and analysis.

Journal ArticleDOI
TL;DR: Results on images returned by Google's Image Search reveal the potential of applying CLUE to real-world image data and integrating CLUE as a part of the interface for keyword-based image retrieval systems.
Abstract: In a typical content-based image retrieval (CBIR) system, target images (images in the database) are sorted by feature similarities with respect to the query. Similarities among target images are usually ignored. This paper introduces a new technique, cluster-based retrieval of images by unsupervised learning (CLUE), for improving user interaction with image retrieval systems by fully exploiting the similarity information. CLUE retrieves image clusters by applying a graph-theoretic clustering algorithm to a collection of images in the vicinity of the query. Clustering in CLUE is dynamic. In particular, clusters formed depend on which images are retrieved in response to the query. CLUE can be combined with any real-valued symmetric similarity measure (metric or nonmetric). Thus, it may be embedded in many current CBIR systems, including relevance feedback systems. The performance of an experimental image retrieval system using CLUE is evaluated on a database of around 60,000 images from COREL. Empirical results demonstrate improved performance compared with a CBIR system using the same image similarity measure. In addition, results on images returned by Google's Image Search reveal the potential of applying CLUE to real-world image data and integrating CLUE as a part of the interface for keyword-based image retrieval systems.
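A minimal sketch of the retrieval flow, with spectral clustering on a Gaussian affinity standing in for the paper's graph-theoretic clustering step; the neighbourhood size and number of clusters are illustrative choices.

```python
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.metrics.pairwise import euclidean_distances

def clue_style_retrieval(query_feat, db_feats, k=60, n_clusters=5):
    """Sketch of the CLUE idea: instead of a flat ranked list, cluster the
    images in the vicinity of the query with a graph-based algorithm and
    return the clusters."""
    d_to_query = euclidean_distances(query_feat[None, :], db_feats).ravel()
    neighbours = np.argsort(d_to_query)[:k]              # images in the query's vicinity
    d = euclidean_distances(db_feats[neighbours])
    affinity = np.exp(-d**2 / (2 * np.median(d)**2 + 1e-8))
    labels = SpectralClustering(n_clusters=n_clusters,
                                affinity="precomputed").fit_predict(affinity)
    return {c: neighbours[labels == c] for c in range(n_clusters)}
```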

Book ChapterDOI
TL;DR: The feasibility of template-protecting biometric authentication systems is demonstrated on fingerprint data; the proposed reliable-components scheme achieves an EER of approximately 4.2% with a secret length of 40 bits in experiments.
Abstract: In this paper we show the feasibility of template protecting biometric authentication systems. In particular, we apply template protection schemes to fingerprint data. Therefore we first make a fixed length representation of the fingerprint data by applying Gabor filtering. Next we introduce the reliable components scheme. In order to make a binary representation of the fingerprint images we extract and then quantize during the enrollment phase the reliable components with the highest signal to noise ratio. Finally, error correction coding is applied to the binary representation. It is shown that the scheme achieves an EER of approximately 4.2% with a secret length of 40 bits in experiments.
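A minimal sketch of the reliable-components selection under the assumption that several enrollment feature vectors (e.g. Gabor responses) and a population mean are available; error-correction coding and the handling of helper data are omitted.

```python
import numpy as np

def reliable_components(enroll_feats, population_mean, n_select=40):
    """Sketch of the reliable-components idea: keep the components whose
    enrollment mean lies furthest, in units of its own standard deviation
    (i.e. highest signal-to-noise ratio), from the quantization threshold,
    then binarize by thresholding against the population mean."""
    mu = enroll_feats.mean(axis=0)
    sd = enroll_feats.std(axis=0) + 1e-8
    snr = np.abs(mu - population_mean) / sd
    idx = np.argsort(snr)[::-1][:n_select]             # indices of the reliable components
    bits = (mu[idx] > population_mean[idx]).astype(int)
    return idx, bits                                   # helper data (indices) and secret bits
```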