
Showing papers on "Standard test image" published in 2008


Journal ArticleDOI
TL;DR: Through analysis and extensive experimental studies, this paper demonstrates the effectiveness of the proposed framework for nonintrusive digital image forensics; the absence of camera-imposed fingerprints from a test image indicates that the image is not a camera output and was possibly generated by another image production process.
Abstract: Digital imaging has experienced tremendous growth in recent decades, and digital camera images have been used in a growing number of applications. With such increasing popularity and the availability of low-cost image editing software, the integrity of digital image content can no longer be taken for granted. This paper introduces a new methodology for the forensic analysis of digital camera images. The proposed method is based on the observation that many processing operations, both inside and outside acquisition devices, leave distinct intrinsic traces on digital images, and these intrinsic fingerprints can be identified and employed to verify the integrity of digital data. The intrinsic fingerprints of the various in-camera processing operations can be estimated through a detailed imaging model and its component analysis. Further processing applied to the camera captured image is modelled as a manipulation filter, for which a blind deconvolution technique is applied to obtain a linear time-invariant approximation and to estimate the intrinsic fingerprints associated with these postcamera operations. The absence of camera-imposed fingerprints from a test image indicates that the test image is not a camera output and is possibly generated by other image production processes. Any change or inconsistencies among the estimated camera-imposed fingerprints, or the presence of new types of fingerprints suggest that the image has undergone some kind of processing after the initial capture, such as tampering or steganographic embedding. Through analysis and extensive experimental studies, this paper demonstrates the effectiveness of the proposed framework for nonintrusive digital image forensics.

281 citations


Patent
30 Oct 2008
TL;DR: In this article, a group of pixels corresponding to an image of a face within the digital image is identified, and values of one or more parameters are adjusted within the digitally-detected image based upon an analysis of the digital image, including the image of the face and the default values.
Abstract: A method of processing a digital image using face detection within the image achieves one or more desired image processing parameters. A group of pixels is identified that correspond to an image of a face within the digital image. Default values are determined of one or more parameters of at least some portion of the digital image. Values are adjusted of the one or more parameters within the digitally-detected image based upon an analysis of the digital image including the image of the face and the default values.

272 citations


Patent
20 Jun 2008
TL;DR: In this paper, a digital image processing technique detects and corrects visual imperfections using a reference image, where the device corrects the defect based on the information, image data and/or meta data to create an enhanced version of the main image.
Abstract: A digital image processing technique detects and corrects visual imperfections using a reference image. A main image and one or more reference images having a temporal and/or spatial overlap and/or proximity with the original image are captured. Device information, image data and/or meta data are analyzed of the one or more reference images relating to a defect in the main image. The device corrects the defect based on the information, image data and/or meta-data to create an enhanced version of the main image.

206 citations


Proceedings ArticleDOI
05 Nov 2008
TL;DR: A new image database for testing full-reference image quality assessment metrics is presented, based on 1700 test images; it can be used for evaluating the performance of visual quality metrics as well as for comparison and for the design of new metrics.
Abstract: In this contribution, a new image database for testing full-reference image quality assessment metrics is presented. It is based on 1700 test images (25 reference images, 17 types of distortions for each reference image, 4 levels for each type of distortion). Using this image database, 654 observers from three different countries (Finland, Italy, and Ukraine) have carried out about 400000 individual human quality judgments (more than 200 judgments for each distorted image). The obtained mean opinion scores for the considered images can be used for evaluating the performances of visual quality metrics as well as for comparison and for the design of new metrics. The database, with testing results, is freely available.
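The stated use of such a database — evaluating how well a full-reference metric tracks mean opinion scores — typically comes down to a rank correlation between metric outputs and MOS. A minimal sketch; the metric values and MOS below are made-up illustrations, not data from the paper:

```python
import numpy as np

def spearman_rho(a, b):
    """Spearman rank correlation, computed from scratch (assumes no ties)."""
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    ra -= ra.mean()
    rb -= rb.mean()
    return float((ra * rb).sum() / np.sqrt((ra**2).sum() * (rb**2).sum()))

# Hypothetical data: metric scores and mean opinion scores for 5 distorted images.
metric_scores = np.array([34.1, 28.5, 31.2, 22.0, 39.8])  # e.g. PSNR in dB
mos           = np.array([ 4.2,  3.1,  3.8,  1.9,  4.7])  # subjective quality

rho = spearman_rho(metric_scores, mos)  # closer to 1.0 = better monotone agreement
```

In practice `scipy.stats.spearmanr` handles ties and p-values; the hand-rolled version above just shows the computation.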

198 citations


Patent
29 Apr 2008
TL;DR: In this article, a light-field preprocessing module reshapes the angular data in a captured light field image into shapes compatible with the blocking scheme of the compression technique so that blocking artifacts of block-based compression are not introduced in the final compressed image.
Abstract: A method and apparatus for the block-based compression of light-field images. Light-field images may be preprocessed by a preprocessing module into a format that is compatible with the blocking scheme of a block-based compression technique, for example JPEG. The compression technique is then used to compress the preprocessed light-field images. The light-field preprocessing module reshapes the angular data in a captured light-field image into shapes compatible with the blocking scheme of the compression technique so that blocking artifacts of block-based compression are not introduced in the final compressed image. Embodiments may produce compressed 2D images for which no specific light-field image viewer is needed to preview the full light-field image. Full light-field information is contained in one compressed 2D image.

181 citations


Journal ArticleDOI
TL;DR: The proposed method is multiresolution and gray scale invariant and can be used for defect detection in patterned and unpatterned fabrics and because of its simplicity, online implementation is possible as well.
Abstract: Local binary patterns (LBPs) are one of the features which have been used for texture classification. In this paper, a method based on using these features is proposed for fabric defect detection. In the training stage, at first step, LBP operator is applied to an image of defect free fabric, pixel by pixel, and the reference feature vector is computed. Then this image is divided into windows and LBP operator is applied to each of these windows. Based on comparison with the reference feature vector, a suitable threshold for defect free windows is found. In the detection stage, a test image is divided into windows and using the threshold, defective windows can be detected. The proposed method is multiresolution and gray scale invariant and can be used for defect detection in patterned and unpatterned fabrics. Because of its simplicity, online implementation is possible as well.
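The training/detection pipeline above can be sketched as follows. This is a simplified reading (basic 3x3 LBP, L1 histogram distance, non-overlapping windows); the paper's exact operator, distance measure, and threshold-selection procedure may differ:

```python
import numpy as np

def lbp_image(img):
    """Basic 3x3 LBP: one 8-bit code per interior pixel (P=8, R=1, no interpolation)."""
    c = img[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.uint8) << bit
    return code

def lbp_hist(img):
    """Normalized 256-bin histogram of LBP codes — the feature vector."""
    h = np.bincount(lbp_image(img).ravel(), minlength=256).astype(float)
    return h / h.sum()

def window_distances(test, ref_hist, win=16):
    """L1 distance between each window's LBP histogram and the reference vector;
    windows whose distance exceeds a trained threshold would be flagged defective."""
    H, W = test.shape
    d = {}
    for y in range(0, H - win + 1, win):
        for x in range(0, W - win + 1, win):
            d[(y, x)] = np.abs(lbp_hist(test[y:y + win, x:x + win]) - ref_hist).sum()
    return d
```

Training amounts to computing `ref_hist` on a defect-free image and picking a threshold above the distances its own windows produce.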

133 citations


Proceedings ArticleDOI
14 Oct 2008
TL;DR: It is shown that the proposed algorithm is able to consistently identify 80% of the corners on omnidirectional images of as low as VGA resolution and approaches 100% correct corner extraction at higher resolutions, outperforming the existing implementation significantly.
Abstract: Most of the existing camera calibration toolboxes require the observation of a checkerboard shown by the user at different positions and orientations. This paper presents an algorithm for the automatic detection of checkerboards, described by the position and the arrangement of their corners, in blurred and heavily distorted images. The method can be applied to both perspective and omnidirectional cameras. An existing corner detection method is evaluated and its strengths and shortcomings in detecting corners on blurred and distorted test image sets are analyzed. Starting from the results of this analysis, several improvements are proposed, implemented, and tested. We show that the proposed algorithm is able to consistently identify 80% of the corners on omnidirectional images of as low as VGA resolution and approaches 100% correct corner extraction at higher resolutions, outperforming the existing implementation significantly. The performance of the proposed method is demonstrated on several test image sets of various resolution, distortion, and blur, which are exemplary for different kinds of camera-mirror setups in use.

99 citations


Proceedings ArticleDOI
01 Sep 2008
TL;DR: A key point in the formulation is to base this reconstruction solely on the visible data in the training and testing sets, which allows partial occlusions in both the training and testing samples, while previous methods only dealt with occlusions in the testing set.
Abstract: Partial occlusions in face images pose a great problem for most face recognition algorithms. Several solutions to this problem have been proposed over the years - ranging from dividing the face image into a set of local regions to sophisticated statistical methods. In the present paper, we pose the problem as a reconstruction one. In this approach, each test image is described as a linear combination of the training samples in each class. The class samples providing the best reconstruction determine the class label. Here, "best reconstruction" means the reconstruction providing the smallest matching error when using an appropriate metric to compare the reconstructed and test images. A key point in our formulation is to base this reconstruction solely on the visible data in the training and testing sets. This allows partial occlusions in both the training and testing samples, while previous methods only dealt with occlusions in the testing set. We show extensive experimental results using a large variety of comparative studies, demonstrating the superiority of the proposed approach over the state of the art.
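The reconstruction rule described above — representing a test image as a linear combination of each class's training samples, fit only on the visible pixels — can be sketched as a per-class least-squares problem. The data layout (one column per training sample) is an assumption for illustration:

```python
import numpy as np

def classify_with_occlusion(test_vec, visible, class_samples):
    """Assign the label whose training samples best reconstruct the VISIBLE
    entries of the test vector (least squares restricted to visible data)."""
    best_label, best_err = None, np.inf
    v = np.flatnonzero(visible)
    for label, X in class_samples.items():        # X: (n_pixels, n_samples)
        A, y = X[v], test_vec[v]                  # keep only visible rows
        coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
        err = np.linalg.norm(A @ coeffs - y)      # matching error on visible data
        if err < best_err:
            best_label, best_err = label, err
    return best_label, best_err
```

Occluded pixels simply never enter the fit, which is why occlusion in either the training or the test data does not corrupt the reconstruction.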

86 citations


Proceedings Article
01 Jan 2008
TL;DR: A novel discriminative feature description that encodes underlying shape well and is insensitive to illumination and other common variations in facial appearance, such as skin colour etc., is proposed and a pose similarity feature space (PSFS) is generated that turns the multi-class problem into two-class by using inter-pose and intra-pose similarities.
Abstract: We present a robust front-end pose classification/estimation procedure to be used in face recognition scenarios. A novel discriminative feature description that encodes underlying shape well and is insensitive to illumination and other common variations in facial appearance, such as skin colour etc., is proposed. Using such features we generate a pose similarity feature space (PSFS) that turns the multi-class problem into two-class by using inter-pose and intra-pose similarities. A new classification procedure is laid down which models this feature space and copes well with discriminating between nearest poses. For a test image it outputs a measure of confidence or so called posterior probability for all poses without explicitly estimating underlying densities. The pose estimation system is evaluated using CMU Pose, Illumination and Expression (PIE) database.

82 citations


Patent
16 Jun 2008
TL;DR: In this paper, a digital image processing technique gathers visual metadata using reference images: a main image and one or more reference images are captured on a hand-held or otherwise portable or spatial or temporal performance-based image capture device, and the reference images are analyzed against the main image.
Abstract: A digital image processing technique gathers visual meta data using a reference image. A main image and one or more reference images are captured on a hand-held or otherwise portable or spatial or temporal performance-based image capture device. The reference images are analyzed based on predefined criteria in comparison to the main image. Based on said analyzing, supplemental meta data are created and added to the main image at a digital data storage location.

76 citations


Proceedings Article
26 Sep 2008
TL;DR: A novel approach to objective non-reference image fusion performance assessment that takes into account local measurements to estimate how well the important information in the source images is represented by the fused image.
Abstract: We present a novel approach to objective non-reference image fusion performance assessment. The Global-Local Image Quality Analysis (GLIQA) approach takes into account local measurements to estimate how well the important information in the source images is represented by the fused image. The metric is an extended version of the Universal Image Quality Index (UIQI) and uses the similarity between blocks of pixels in the input images and the fused image as the weighting factors. When the difference between an image pixel in the input images and its correspondence in the fused image is larger than a threshold, making the fusion quality difficult to assess locally, global measurements are applied to assist the judgment. The global measurement metric considers a set of properties of human Gestalt visual perception, such as image structure, texture, and spectral signature, for image quality assessment. Preliminary study results confirm that the performance scores of the proposed metrics correlate well with the subjective quality of the fused images.
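GLIQA builds on the Universal Image Quality Index, which has a closed form per block: Q = 4·σxy·x̄·ȳ / ((σx² + σy²)(x̄² + ȳ²)). A minimal per-block implementation; the GLIQA-specific weighting and Gestalt-based global measurements are not specified here in enough detail to reproduce:

```python
import numpy as np

def uiqi(x, y):
    """Universal Image Quality Index (Wang & Bovik) over one block.
    Combines correlation, luminance similarity, and contrast similarity.
    Undefined (division by zero) when both blocks are constant with zero mean."""
    x = x.astype(float).ravel()
    y = y.astype(float).ravel()
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cxy = ((x - mx) * (y - my)).mean()
    return (4 * cxy * mx * my) / ((vx + vy) * (mx**2 + my**2))
```

A fusion metric in this family evaluates `uiqi` between each source block and the corresponding fused block, then combines the block scores with weights; identical blocks score exactly 1.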

Journal ArticleDOI
01 Jan 2008
TL;DR: An interpretation-based quality (IBQ) estimation approach, which combines qualitative and quantitative methodology, is used; it enables simultaneous examination of psychometric results and the subjective meanings related to the perceived image-quality changes.
Abstract: Test image contents affect subjective image-quality evaluations. Psychometric methods might show that contents have an influence on image quality, but they do not tell what this influence is like, i.e., how the contents influence image quality. To obtain a holistic description of subjective image quality, we have used an interpretation-based quality (IBQ) estimation approach, which combines qualitative and quantitative methodology. The method enables simultaneous examination of psychometric results and the subjective meanings related to the perceived image-quality changes. In this way, the relationship between subjective feature detection, subjective preferences, and interpretations are revealed. We report a study that shows that different impressions are conveyed in five test image contents after similar sharpness variations. Thirty naive observers classified and freely described the images after which magnitude estimation was used to verify that they distinguished the changes in the images. The data suggest that in the case of high image quality, the test image selection is crucial. If subjective evaluation is limited only to technical defects in test images, important subjective information of image-quality experience is lost. The approach described here can be used to examine image quality and it will help image scientists to evaluate their test images.

Journal ArticleDOI
TL;DR: An improved CSS corner detector using the affine-length parameterization, which is relatively invariant to affine transformations, is presented, along with an improved corner matching technique as a solution to stage two.
Abstract: There are many applications, such as image copyright protection, where transformed images of a given test image need to be identified. The solution to this identification problem consists of two main stages. In stage one, certain representative features, such as corners, are detected in all images. In stage two, the representative features of the test image and the stored images are compared to identify the transformed images for the test image. Curvature scale-space (CSS) corner detectors look for curvature maxima or inflection points on planar curves. However, the arc-length used to parameterize the planar curves by the existing CSS detectors is not invariant to geometric transformations such as scaling. As a solution to stage one, this paper presents an improved CSS corner detector using the affine-length parameterization, which is relatively invariant to affine transformations. We then present an improved corner matching technique as a solution to stage two. Finally, we apply the proposed corner detection and matching techniques to identify the transformed images for a given image and report promising results.

Patent
12 May 2008
TL;DR: In this paper, an anomaly detection method is proposed to acquire image data corresponding to nondestructive testing (NDT) of a scanned object, where the NDT image data comprises at least one inspection test image of the scanned object and multiple reference images for the object.
Abstract: An anomaly detection method includes acquiring image data corresponding to nondestructive testing (NDT) of a scanned object. The NDT image data comprises at least one inspection test image of the scanned object and multiple reference images for the scanned object. The anomaly detection method further includes generating an anomaly detection model based on a statistical analysis of one or more image features in the reference images for the scanned object and identifying one or more defects in the inspection test image, based on the anomaly detection model.

Patent
Billy Chen, Eyal Ofek
07 Jun 2008
TL;DR: In this paper, a panoramic image is generated by using image context data (e.g., three-dimensional model data, two-dimensional image data, and/or 360° image data from another source) rendered based upon the same position or a nearby position relative to the image data.
Abstract: Methods and systems are provided for augmenting image data (e.g., still image data or video image data) utilizing image context data to generate panoramic images. In accordance with embodiments hereof, the position and orientation of received image data are utilized to identify image context data (e.g., three-dimensional model data, two-dimensional image data, and/or 360° image data from another source) rendered based upon the same position or a nearby position relative to the image data, and the image data is augmented utilizing the identified context data to create a panoramic image. The panoramic image may then be displayed (e.g., shown on an LCD/CRT screen or projected) to create a user experience that is more immersive than the original image data could create.

Proceedings Article
01 Aug 2008
TL;DR: This article presents a synthetic image set for validation of cell image analysis algorithms, proposes to use the simulated images for benchmarking along with manually labeled images, and presents case studies of tuning and testing a cell image analysis algorithm based on simulated images.
Abstract: This article presents a synthetic image set for validation of cell image analysis algorithms. To address the problem of validation, we have previously developed a simulation framework for cell population images. Here, we apply the simulation for generating a benchmark set of cell images with varying characteristics. The value of simulation is in the ground-truth information known for the generated images. Traditionally, the ground truth has been obtained through tedious and error-prone manual segmentation of the images. While such an approach cannot be fully replaced, we propose to use the simulated images for benchmarking along with manually labeled images, and present case studies of tuning and testing a cell image analysis algorithm based on simulated images.

Journal ArticleDOI
TL;DR: A novel approach utilizing Shannon entropy, rather than derivatives of the image, for detecting edges in gray-level images has been proposed, and it has been observed that the proposed edge detector works effectively for different gray-scale digital images.
Abstract: Most of the classical mathematical methods for edge detection based on the derivative of the pixels of the original image are Gradient operators, Laplacian and Laplacian of Gaussian operators. Gradient-based edge detection methods, such as Roberts, Sobel and Prewitt, have used two 2-D linear filters to process vertical edges and horizontal edges separately to approximate the first-order derivative of pixel values of the image. The Laplacian edge detection method has used a 2-D linear filter to approximate the second-order derivative of pixel values of the image. A major drawback of the second-order derivative approach is that the response at and around an isolated pixel is much stronger. In this research study, a novel approach utilizing Shannon entropy, rather than the evaluation of derivatives of the image, for detecting edges in gray-level images has been proposed. The proposed approach solves this problem to some extent. In the proposed method, we have used a suitable threshold value to segment the image and achieve the binary image. After this, the proposed edge detector is introduced to detect and locate the edges in the image. A standard test image is used to compare the results of the proposed edge detector with the Laplacian of Gaussian edge detector operator. In order to validate the results, seven different kinds of test images are considered to examine the versatility of the proposed edge detector. It has been observed that the proposed edge detector works effectively for different gray-scale digital images. The results of this study were quite promising.
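One plausible reading of the procedure above (threshold → binary image → entropy-based edge localization) is to mark pixels whose local window mixes both binary classes, i.e. has high Shannon entropy. The window size and entropy threshold below are illustrative assumptions, not the paper's values:

```python
import numpy as np

def shannon_entropy(p):
    """Shannon entropy in bits of a discrete distribution (zeros ignored)."""
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def entropy_edges(gray, seg_thresh=128, win=3, ent_thresh=0.5):
    """Binarize the image, then mark pixels whose local window mixes both
    classes strongly (high Shannon entropy) as edge pixels."""
    binary = (gray >= seg_thresh).astype(np.uint8)
    H, W = binary.shape
    r = win // 2
    edges = np.zeros_like(binary)
    for y in range(r, H - r):
        for x in range(r, W - r):
            p1 = binary[y - r:y + r + 1, x - r:x + r + 1].mean()  # fraction of 1s
            if shannon_entropy(np.array([p1, 1.0 - p1])) > ent_thresh:
                edges[y, x] = 1
    return edges
```

On a flat region the window is all 0s or all 1s (entropy 0); near a boundary the mix pushes the entropy toward 1 bit, flagging the pixel, without ever differentiating the image.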

Journal ArticleDOI
08 Dec 2008
TL;DR: This paper develops a probabilistic method (LOOPS) that can learn a shape and appearance model for a particular object class, and be used to consistently localize constituent elements of the object’s outline in test images.
Abstract: Discriminative tasks, including object categorization and detection, are central components of high-level computer vision. However, sometimes we are interested in a finer-grained characterization of the object's properties, such as its pose or articulation. In this paper we develop a probabilistic method (LOOPS) that can learn a shape and appearance model for a particular object class, and be used to consistently localize constituent elements (landmarks) of the object's outline in test images. This localization effectively projects the test image into an alternative representational space that makes it particularly easy to perform various descriptive tasks. We apply our method to a range of object classes in cluttered images and demonstrate its effectiveness in localizing objects and performing descriptive classification, descriptive ranking, and descriptive clustering.

Proceedings ArticleDOI
05 Sep 2008
TL;DR: It is shown that the proposed approach outperforms the traditional pixel-based SVM classification method for land cover classification with PolSAR data, and the integration of SRM and SVM makes the proposed algorithm an attractive and alternative method for polarimetric SAR classification.
Abstract: This paper presents a new object-oriented classification method based on statistical region merging (SRM) for segmentation and a support vector machine (SVM) for classification, where polarimetric synthetic aperture radar (PolSAR) data are used. The proposed approach makes use of the polarimetric information of PolSAR data and takes advantage of SRM and SVM. The SRM segmentation method not only considers spectral, shape, and scale information, but also has the ability to cope with significant noise corruption and handle occlusions. The SVM used for classification has the advantages of solving sparse-sampling, non-linear, high-dimensional, and global-optimum problems compared with other classifiers. It is thus expected that the input vectors of the SVM will include fully polarimetric information for image classification. A test image, acquired by the Jet Propulsion Laboratory Airborne SAR (AIRSAR) system, is used to demonstrate the advantages of the proposed method. It is shown that the proposed approach outperforms the traditional pixel-based SVM classification method for land cover classification with PolSAR data, and the integration of SRM and SVM makes the proposed algorithm an attractive and alternative method for polarimetric SAR classification.

Patent
25 Feb 2008
TL;DR: In this article, a recognition-by-parts authentication system is presented for determining if a physical test target represented in test image(s) obtained using an imaging device matches a physical training target represented in training image(s).
Abstract: A recognition-by-parts authentication system for determining if a physical test target represented in test image(s) obtained using an imaging device matches a physical training target represented in training image(s). The system includes a multitude of adaptive and robust correlation filters. Each of the adaptive and robust correlation filters is configured to generate correlation-peak-strength and distance-from-origin data using a multitude of related images. Each of the multitude of related images represents a similar part of a larger image. The related images originate from the test image(s) and training image(s).

Book ChapterDOI
12 Oct 2008
TL;DR: By comparing images of lines rather than of gray levels, this approach avoids the computationally intensive, and sometimes impossible, tasks of estimating 3D surfaces and their associated BRDFs in the model-building stage.
Abstract: In this paper, we propose a new approach to change detection that is based on the appearance or disappearance of 3D lines, which may be short, as seen in a new image. These 3D lines are estimated automatically and quickly from a set of previously-taken learning-images from arbitrary view points and under arbitrary lighting conditions. 3D change detection traditionally involves unsupervised estimation of scene geometry and the associated BRDF at each observable voxel in the scene, and the comparison of a new image with its prediction. If a significant number of pixels differ in the two aligned images, a change in the 3D scene is assumed to have occurred. The importance of our approach is that by comparing images of lines rather than of gray levels, we avoid the computationally intensive, and sometimes impossible, tasks of estimating 3D surfaces and their associated BRDFs in the model-building stage. We estimate 3D lines instead where the lines are due to 3D ridges or BRDF ridges which are computationally much less costly and are more reliably detected. Our method is widely applicable as man-made structures consisting of 3D line segments are the main focus of most applications. The contributions of this paper are: change detection based on appropriate interpretation of line appearance and disappearance in a new image; unsupervised estimation of "short" 3D lines from multiple images such that the required computation is manageable and the estimation accuracy is high.

Patent
31 Dec 2008
TL;DR: In this article, a method of processing and a device configured to process digital images to enhance image quality and correct motion blur are presented, where a number of images of a scene are captured with an exposure time T, an order of sharpness of the images is determined, and the sharpest image is used as a reference image for generating an output image.
Abstract: A method of processing and a device configured to process digital images to enhance image quality and correct motion blur. A number N of images of a scene are captured with an exposure time T. An order of sharpness of the images is determined and the sharpest image is used as a reference image for generating an output image.

Proceedings ArticleDOI
08 Dec 2008
TL;DR: An integrated decision support system for automated melanoma recognition of dermoscopic images based on multiple expert fusion, using Bayes' theorem to support decision making by predicting image categories by combining outputs from different classifiers.
Abstract: This paper presents an integrated decision support system for automated melanoma recognition of dermoscopic images based on multiple expert fusion. In this context, the ultimate aim is to support decision making by predicting image categories (e.g., melanoma, benign and dysplastic nevi) by combining outputs from different classifiers. A fast and automatic segmentation method to detect the lesion from the background healthy skin is proposed, and lesion-specific local color and texture-related features are extracted. For the classification, combining experts, which are classifiers with different structures, is examined as an alternative to an individual classifier. In this approach, probabilistic outputs of the experts are combined based on combination rules derived from Bayes' theorem. The category label with the highest confidence score is considered to be the class of a test image. Experimental results on a collection of 358 dermoscopic images demonstrate the effectiveness of the proposed expert fusion-based approach.

Journal ArticleDOI
TL;DR: In this paper, a cross-convolution image subtraction algorithm is proposed for undersampled stellar images that no longer requires high-quality reference images for comparison; its computational efficiency is comparable with similar procedures currently in use.
Abstract: In recent years, there has been a proliferation of wide-field sky surveys to search for a variety of transient objects. Using relatively short focal lengths, the optics of these systems produce undersampled stellar images often marred by a variety of aberrations. As participants in such activities, we have developed a new algorithm for image subtraction that no longer requires high-quality reference images for comparison. The computational efficiency is comparable with similar procedures currently in use. The general technique is cross-convolution: two convolution kernels are generated to make a test image and a reference image separately transform to match as closely as possible. In analogy to the optimization technique for generating smoothing splines, the inclusion of an rms width penalty term constrains the diffusion of stellar images. In addition, by evaluating the convolution kernels on uniformly spaced subimages across the total area, these routines can accommodate point-spread functions that vary considerably across the focal plane.

Proceedings ArticleDOI
08 Dec 2008
TL;DR: Experimental results show the advantages of the proposed approach in terms of a wider working range and more precise prediction consistency in noisy conditions, compared with orthogonal moments-based sharpness metrics.
Abstract: This paper proposes a novel statistical approach to formulating an image sharpness metric using eigenvalues. Statistical information of the image content is represented effectively using a set of eigenvalues computed via singular value decomposition (SVD). The approach starts by normalizing the test image by its energy to minimize the effects of image contrast. The covariance matrix computed from the normalized image is then diagonalized using SVD to obtain its eigenvalues. The sharpness score of the test image is determined by taking the trace of the first six largest eigenvalues. The performance of the proposed approach is gauged by comparing it with orthogonal moments-based sharpness metrics. Experimental results show the advantages of the proposed approach in terms of providing a wider working range and more precise prediction consistency in noisy conditions.
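The recipe above is concrete enough to sketch directly: energy-normalize the image, form a covariance matrix, diagonalize it via SVD, and sum the six largest eigenvalues. Details such as how the covariance matrix is formed (here, across image columns) are assumptions:

```python
import numpy as np

def svd_sharpness(img, k=6):
    """Sharpness score in the spirit of the abstract: energy-normalize,
    form a covariance matrix, take its eigenvalues via SVD, and sum the
    k largest (the abstract uses k = 6)."""
    x = img.astype(float)
    x = x / np.linalg.norm(x)                 # energy normalization
    cov = np.cov(x, rowvar=False)             # covariance across columns (assumption)
    s = np.linalg.svd(cov, compute_uv=False)  # eigenvalues of the symmetric PSD matrix
    return float(s[:k].sum())
```

A perfectly flat image has zero covariance and scores 0, while high-frequency content inflates the leading eigenvalues, which is the intuition behind using them as a sharpness proxy.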

Book ChapterDOI
12 Oct 2008
TL;DR: A novel and efficient method for generic arbitrary-view object class detection and localization that can automatically determine the locations and outlines of multiple objects in the test image with occlusion handling and can accurately estimate both the intrinsic and extrinsic camera parameters in an optimized way.
Abstract: We propose a novel and efficient method for generic arbitrary-view object class detection and localization. In contrast to existing single-view and multi-view methods, which use complicated mechanisms to relate the structural information in different parts of the objects or in different viewpoints, we aim to represent the structural information at its true 3D locations. Uncalibrated multi-view images from a hand-held camera are used to reconstruct the 3D visual word models in the training stage. In the testing stage, beyond bounding boxes, our method can automatically determine the locations and outlines of multiple objects in the test image, with occlusion handling, and can accurately estimate both the intrinsic and extrinsic camera parameters in an optimized way. With exemplar models, our method can also handle shape deformation for intra-class variation. To handle the large data sets these models produce, we propose several speedup techniques that make prediction efficient. Experimental results on standard data sets demonstrate the effectiveness of the proposed approach.
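Any visual-word model rests on a quantization step that maps local feature descriptors to vocabulary indices. A minimal sketch of that step is below; the descriptors and vocabulary here are hypothetical stand-ins, since the paper builds its vocabulary from features back-projected to 3D locations.

```python
import numpy as np

def assign_visual_words(descriptors, vocabulary):
    """Nearest-centroid quantization: map each descriptor (row) to the
    index of the closest vocabulary centre (visual word)."""
    # squared Euclidean distance from every descriptor to every word centre
    d2 = ((descriptors[:, None, :] - vocabulary[None, :, :]) ** 2).sum(axis=2)
    return d2.argmin(axis=1)
```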

Proceedings Article
01 Jan 2008
TL;DR: Decision-level fusion, combining matching scores of principal lines and Locality Preserving Projections features, is used for final identification within a small training sub-database.
Abstract: In this paper, we propose two palmprint identification schemes that use a fusion strategy. In the first scheme, the principal lines of the test image are first extracted and matched against those of all training images. The training images with the largest matching scores are then selected to construct a small training sub-database. Finally, decision-level fusion, combining the matching scores of the principal lines and of Locality Preserving Projections features, is performed for final identification within this sub-database. Viewed another way, the fusion is restricted by the preceding principal-line matching results, so we call it restricted fusion. The second scheme is similar to the first; only the fusion order is changed. Experiments conducted on the PolyU palmprint database show that the proposed schemes achieve a 100% recognition rate.
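The two-stage scheme can be sketched as follows: shortlist training images by principal-line score, then fuse scores only inside the shortlist. The weighted sum with min-max normalization is an assumed fusion rule; the abstract only states that decision-level fusion of the two scores is used.

```python
import numpy as np

def restricted_fusion_identify(line_scores, lpp_scores, top_n=3, w=0.5):
    """Sketch of the first 'restricted fusion' scheme. Both inputs are
    matching scores (higher = better) against every training image."""
    line_scores = np.asarray(line_scores, dtype=float)
    lpp_scores = np.asarray(lpp_scores, dtype=float)
    shortlist = np.argsort(line_scores)[-top_n:]   # stage 1: sub-database

    def minmax(s):
        s = s[shortlist]
        span = s.max() - s.min()
        return (s - s.min()) / span if span > 0 else np.zeros_like(s)

    # stage 2: fuse normalized scores inside the shortlist only
    fused = w * minmax(line_scores) + (1 - w) * minmax(lpp_scores)
    return int(shortlist[fused.argmax()])          # identified training index
```

The second scheme in the paper would simply swap which score drives the shortlist and which joins at the fusion stage.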

Proceedings ArticleDOI
06 Mar 2008
TL;DR: This work introduces One Registration, Multiple Segmentations (ORMS), a procedure to obtain multiple segmentations with a single online registration, together with a combination strategy that weights the segmentations by the mutual information between the test image and each atlas image after registration.
Abstract: Atlas-based segmentation has proven effective in multiple applications. Usually, several reference images are combined to create a representative average atlas image. Alternatively, a number of independent atlas images can be used, from which multiple segmentations of the image of interest are derived and later combined. One of the major drawbacks of this approach is its large computational burden, caused by the high number of required registrations. To address this problem, we introduce One Registration, Multiple Segmentations (ORMS), a procedure to obtain multiple segmentations with a single online registration. This can be achieved by pre-computing intermediate transformations from the initial atlas images to an average image. We show that, compared to the usual approach, our method reduces time considerably with little or no loss in accuracy. On the other hand, the optimal combination of these segmentations remains an unresolved problem. Different approaches have been adopted, but they all fall far short of the upper bound of any combination strategy. This upper bound is given by the Combination Oracle, which classifies a voxel correctly if any individual segmentation coincides with the ground truth. We present here a novel combination approach, based on weighting the different segmentations according to the mutual information between the test image and the atlas image after registration. We compare this method with other existing combination strategies using microscopic MR images of mouse brains, achieving statistically significant improvement in segmentation accuracy.
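The MI-weighted combination can be sketched with a standard histogram-based mutual information estimator and weighted voting over binary segmentations. The specific MI estimator, the bin count, and the 0.5 voting threshold are assumptions; the abstract fixes none of them.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Histogram-based mutual information between two same-shape images."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = joint / joint.sum()
    px = p.sum(axis=1, keepdims=True)   # marginal of a
    py = p.sum(axis=0, keepdims=True)   # marginal of b
    nz = p > 0
    return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())

def mi_weighted_vote(test_img, atlas_imgs, atlas_segs):
    """Combine binary atlas segmentations by weighted voting, each atlas
    weighted by its post-registration MI with the test image."""
    w = np.array([mutual_information(test_img, a) for a in atlas_imgs])
    w = w / w.sum()
    fused = sum(wi * seg.astype(float) for wi, seg in zip(w, atlas_segs))
    return fused >= 0.5
```

An atlas that registered well (high MI with the test image) thus dominates the vote, while a poorly registered atlas contributes little.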

Proceedings ArticleDOI
26 Nov 2008
TL;DR: This paper proposes a new approach to medical image registration using particle swarm optimization (PSO), a stochastic, population-based evolutionary algorithm, and demonstrates its effectiveness for both rigid and non-rigid registration.
Abstract: In image guided surgery, the registration of pre- and intra-operative image data is an important issue. In registration, we seek an estimate of the transformation that registers the reference image and the test image by optimizing a metric function (similarity measure). To date, local optimization techniques, such as the gradient descent method, have frequently been used for medical image registration. However, these methods need good initial estimates in order to avoid local minima. In this paper, we propose a new approach using particle swarm optimization (PSO) for medical image registration. Particle swarm optimization is a stochastic, population-based evolutionary algorithm. The effectiveness of PSO has been demonstrated for both rigid and non-rigid medical image registration.
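A minimal PSO registration can be sketched over 2-D integer translations with a sum-of-squared-differences metric. Real registrations optimize richer transforms and metrics such as mutual information; the transform class, metric, and swarm parameters below are illustrative choices, not the paper's.

```python
import numpy as np

def ssd(a, b):
    """Sum of squared differences: the (assumed) similarity metric."""
    return float(((a - b) ** 2).sum())

def pso_register(ref, test, n_particles=30, iters=80, seed=0):
    """PSO search for the translation that best aligns test to ref."""
    rng = np.random.default_rng(seed)

    def fitness(p):
        return ssd(ref, np.roll(test, (int(p[0]), int(p[1])), axis=(0, 1)))

    pos = rng.uniform(-8, 8, size=(n_particles, 2))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()
    w, c1, c2 = 0.7, 1.5, 1.5                 # inertia, cognitive, social
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, 1))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        vals = np.array([fitness(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return np.round(gbest).astype(int)
```

Unlike gradient descent, the swarm needs no initial estimate near the optimum: the randomly initialized particles explore the search range and are pulled toward the best positions found so far.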

Book
Siwei Lyu1, Hany Farid1
07 Feb 2008
TL;DR: A set of natural image statistics is described, built upon two multi-scale image decompositions, the quadrature mirror filter pyramid decomposition and the local angular harmonic decomposition, that captures certain statistical regularities of natural images.
Abstract: We describe a set of natural image statistics built upon two multi-scale image decompositions, the quadrature mirror filter pyramid decomposition and the local angular harmonic decomposition. These image statistics consist of first- and higher-order statistics that capture certain statistical regularities of natural images. We propose to apply these image statistics, together with classification techniques, to three problems in digital image forensics: (1) differentiating photographic images from computer-generated photorealistic images, (2) generic steganalysis, and (3) rebroadcast image detection. We also apply these image statistics to art authentication, for forgery detection and for identifying the artist of a work. For each application we show the effectiveness of these image statistics and analyze their sensitivity and robustness.
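The first-order part of such a feature set can be sketched with a simple Haar pyramid as a stand-in for the quadrature mirror filter decomposition: decompose the image into high-pass subbands at several scales, then record a few marginal statistics per subband. The higher-order (cross-subband prediction-error) statistics and the angular harmonic decomposition are omitted here.

```python
import numpy as np

def haar_subbands(img, levels=3):
    """Recursive one-level Haar split (a simple stand-in for the QMF
    pyramid); returns the high-pass subbands at each scale."""
    subs, low = [], img.astype(float)
    for _ in range(levels):
        a, b = low[0::2, 0::2], low[0::2, 1::2]
        c, d = low[1::2, 0::2], low[1::2, 1::2]
        subs += [a - b + c - d, a + b - c - d, a - b - c + d]  # LH, HL, HH
        low = (a + b + c + d) / 4
    return subs

def first_order_stats(img, levels=3):
    """Mean, variance, skewness and kurtosis of every high-pass subband,
    concatenated into one feature vector for a downstream classifier."""
    feats = []
    for s in haar_subbands(img, levels):
        x = s.ravel()
        m, v = x.mean(), x.var()
        sd = np.sqrt(v) + 1e-12
        feats += [m, v, ((x - m) ** 3).mean() / sd ** 3,
                  ((x - m) ** 4).mean() / sd ** 4]
    return np.array(feats)
```

A classifier (e.g. an SVM) trained on such vectors is what separates photographic from photorealistic images, or clean from stego images, in the applications listed above.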