Topic
Standard test image
About: Standard test image is a research topic. Over its lifetime, 5217 publications have been published on this topic, receiving 98486 citations.
Papers published on a yearly basis
Papers
•
23 May 2014
TL;DR: In this article, a method and system for automatic algorithm selection in image processing is presented, which automatically selects the appropriate algorithm(s) for the varying processing requirements of an image.
Abstract: Disclosed is a method and system for automatic algorithm selection for image processing. The invention discloses a method and system for automatically selecting the correct algorithm(s) for the varying processing requirements of an image. The selection of the algorithm is completely automatic and guided by a plurality of machine learning approaches. The system is configured to pre-process a plurality of images to create training data. Next, the test image is extracted, pre-processed, and matched to assess the best possible algorithm for processing.
20 citations
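The patent does not disclose its features or learning model, but the match-based selection it describes can be sketched as a nearest-neighbour lookup in a feature space: training images are pre-processed into feature vectors labelled with the algorithm that suited them, and the test image is matched against those vectors. The features and algorithm labels below are purely illustrative assumptions, not the patent's method:

```python
import numpy as np

def extract_features(image):
    """Toy feature vector: mean intensity, std, and edge density (assumed features)."""
    image = np.asarray(image, dtype=float)
    gy, gx = np.gradient(image)
    edge_density = float(np.mean(np.hypot(gx, gy) > 10.0))
    return np.array([image.mean(), image.std(), edge_density])

def select_algorithm(test_image, training_features, training_labels):
    """Pick the algorithm whose training exemplar is nearest in feature space."""
    f = extract_features(test_image)
    dists = [np.linalg.norm(f - tf) for tf in training_features]
    return training_labels[int(np.argmin(dists))]

# Training data: pre-processed images, each labelled with the algorithm
# that worked best for it (labels are hypothetical).
train_imgs = [np.zeros((8, 8)), np.tile([0.0, 255.0], (8, 4))]
labels = ["denoise", "sharpen"]
feats = [extract_features(im) for im in train_imgs]

print(select_algorithm(np.ones((8, 8)), feats, labels))  # flat image -> denoise
```

A production system would replace the hand-picked features with a learned representation and the single nearest neighbour with an ensemble of classifiers, but the match-then-select structure is the same.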
•
28 Jan 2013
TL;DR: In this paper, a method for automatic image registration of image data from a current medical MR image study and at least one reference study is presented, in which corresponding image pairs of the current study and the reference study are formed automatically with an association machine, without needing to analyze the respective image data or pixel data.
Abstract: In a method, device, and storage medium encoded with programming instructions for automatic image registration of image data from a current medical MR image study and at least one reference study, corresponding image pairs of the current study and the reference study are formed automatically with an association machine, without needing to analyze the respective image data or pixel data. The pair determination takes place exclusively on the basis of the DICOM header data. A synchronized image processing and/or presentation of the generated image pairs takes place at a monitor.
20 citations
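Header-only pair formation of this kind can be sketched as matching slices on metadata fields alone. The dictionary keys below are simplified stand-ins for DICOM tags such as SeriesDescription and ImagePositionPatient; the actual tags and tolerance used by the patented association machine are not specified here:

```python
def pair_by_headers(current, reference, tol=1.0):
    """Pair current-study and reference-study slices using header metadata
    only -- no pixel data is read. 'series' and 'slice_pos' are illustrative
    stand-ins for DICOM header fields."""
    pairs = []
    for cur in current:
        # Restrict candidates to the same series description.
        candidates = [ref for ref in reference if ref["series"] == cur["series"]]
        if not candidates:
            continue
        # Take the reference slice closest in position, within tolerance.
        best = min(candidates, key=lambda r: abs(r["slice_pos"] - cur["slice_pos"]))
        if abs(best["slice_pos"] - cur["slice_pos"]) <= tol:
            pairs.append((cur["uid"], best["uid"]))
    return pairs

current = [{"uid": "c1", "series": "T1 ax", "slice_pos": 10.0},
           {"uid": "c2", "series": "T1 ax", "slice_pos": 15.0}]
reference = [{"uid": "r1", "series": "T1 ax", "slice_pos": 10.4},
             {"uid": "r2", "series": "T1 ax", "slice_pos": 14.8}]
print(pair_by_headers(current, reference))  # [('c1', 'r1'), ('c2', 'r2')]
```

The resulting UID pairs could then drive a synchronized viewer without any image-content analysis, which is the point of the header-only approach.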
••
TL;DR: A multilevel reconstruction-based multitask joint sparse representation method, which can not only restrain the background clutter and noise but also augment the data set, is proposed in this paper.
Abstract: Template-matching-based approaches have been developed for many years in the field of synthetic aperture radar (SAR) automatic target recognition (ATR). However, the performance of template-matching-based approaches is strongly affected by two factors: background clutter and noise, and the size of the data set. To solve these problems, a multilevel reconstruction-based multitask joint sparse representation method is proposed in this paper. According to the theory of the attributed scattering center (ASC) model, a SAR image exhibits strong point-scatter-like behavior, which can be modeled by scattering centers on the target. As a result, the ASCs can be extracted from SAR images based on the ASC model. Then, the ASCs extracted from SAR images are used to reconstruct the SAR target at multiple levels based on the energy ratio (ER). The multilevel reconstruction is a process of data augmentation, which can not only restrain the background clutter and noise but also augment the data set. Several subdictionaries are designed after multilevel reconstruction according to the labels of the training samples. Meanwhile, a test image chip is reconstructed into multiple test images. The random projection coefficients associated with the multiple reconstructed test images are fed into a multitask joint sparse representation classification framework. The final decision is made in terms of accumulated reconstruction error. Experiments on the moving and stationary target acquisition and recognition (MSTAR) data set demonstrated the effectiveness of our method.
20 citations
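The decision rule in the abstract, accumulating reconstruction error over multiple reconstructed views against per-class subdictionaries, can be sketched as follows. This is a minimal sketch: least squares stands in for the sparse coding step, the dictionaries are random for illustration, and none of the ASC extraction or random projection stages are modeled:

```python
import numpy as np

def src_decision(test_views, subdicts):
    """Classify by accumulated reconstruction error: for each class, code
    every reconstructed test view against that class's subdictionary and
    sum the residuals; the class with the smallest total wins."""
    errors = {}
    for label, D in subdicts.items():
        total = 0.0
        for y in test_views:
            # Least-squares stand-in for the sparse coding step.
            x, *_ = np.linalg.lstsq(D, y, rcond=None)
            total += float(np.linalg.norm(y - D @ x))
        errors[label] = total
    return min(errors, key=errors.get)

rng = np.random.default_rng(0)
D_a = rng.normal(size=(16, 4))   # subdictionary for class "a" (illustrative)
D_b = rng.normal(size=(16, 4))   # subdictionary for class "b" (illustrative)
views = [D_a @ rng.normal(size=4) for _ in range(3)]  # views in the span of D_a
print(src_decision(views, {"a": D_a, "b": D_b}))
```

Because the synthetic views lie in the span of class "a"'s subdictionary, their accumulated residual against it is near zero, so the decision falls to "a"; a true sparse solver (e.g. orthogonal matching pursuit) would replace the least-squares step in practice.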
••
21 Apr 1997
TL;DR: A novel method is used that deduces the offset parameters for the local fractal transform from the basis functions alone, by inferring the dominant edge position, so that no offset information is required.
Abstract: The performance of any block based image coder can be improved by applying fractal terms to selected blocks. Two novel methods are used to achieve this. Firstly the coder determines whether a local fractal term will improve each image block by examining its rate-distortion contribution, so that only beneficial fractal terms are used. Secondly, the decoder deduces the offset parameters for the local fractal transform from the basis functions alone, by inferring the dominant edge position, so that no offset information is required. To illustrate the method, we use a quadtree decomposed image with a truncated DCT basis. Using a standard test image, the proportion of the picture area enhanced by fractals decreases from 16.1% at 0.6 bpp to 8.1% at a high compression ratio of 80:1 (0.1 bpp). The fractal terms contribute less than 5% of the compressed code in all cases. The PSNR is improved slightly, and edge detail is visually enhanced.
20 citations
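The first of the two methods, including a fractal term only when its rate-distortion contribution is beneficial, amounts to a standard Lagrangian cost comparison. The sketch below shows the decision rule with invented numbers; the paper's actual distortion measure, bit counts, and Lagrange multiplier are not given here:

```python
def use_fractal_term(dist_base, bits_base, dist_fractal, bits_fractal, lam):
    """Lagrangian rate-distortion test: add the fractal term to a block only
    if it lowers the combined cost D + lambda * R."""
    return dist_fractal + lam * bits_fractal < dist_base + lam * bits_base

# A block where the fractal term halves distortion for a few extra bits:
print(use_fractal_term(dist_base=100.0, bits_base=20,
                       dist_fractal=50.0, bits_fractal=24, lam=2.0))  # True

# A block where it barely helps but costs many bits:
print(use_fractal_term(dist_base=100.0, bits_base=20,
                       dist_fractal=99.0, bits_fractal=60, lam=2.0))  # False
```

This per-block test is consistent with the paper's observation that only a minority of blocks (8-16% of the picture area) end up carrying fractal terms.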
•
29 Mar 2002
TL;DR: In this article, a method and apparatus are disclosed for testing the accuracy of digital test images generated by a computer graphics program executed on a computer graphics system, in which a test program is used to compare a test image with a reference image.
Abstract: A method and apparatus are disclosed for testing the accuracy of digital test images generated by a computer graphics program executed on a computer graphics system. A test program is utilized to compare a test image with a reference image. Regional image quantification verification aims at an image comparison test that accepts minor color value differences and spatial shifts in rendered pixel values. The test image and reference image are divided into corresponding sub-regions. The average color value for each sub-region of the test image is compared to the average color value of the corresponding reference image sub-region and also to other nearby reference image sub-regions. A test image is unacceptably different from a reference image if, for any sub-region of the test image, no reference image sub-region is found with an average color value difference and spatial shift less than the specified maximums.
20 citations
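The sub-region comparison described in the abstract can be sketched directly: average each block, then accept a test block if any reference block within a small shift window matches its average closely enough. The block size, difference threshold, and shift radius below are illustrative choices, not values from the patent:

```python
import numpy as np

def region_means(img, bs):
    """Average value of each bs-by-bs sub-region (grayscale for simplicity)."""
    h, w = img.shape
    return np.array([[img[i:i + bs, j:j + bs].mean()
                      for j in range(0, w, bs)]
                     for i in range(0, h, bs)])

def images_match(test, ref, bs=4, max_diff=5.0, max_shift=1):
    """Each test sub-region must find some nearby reference sub-region
    (within max_shift blocks) whose average differs by at most max_diff."""
    tm, rm = region_means(test, bs), region_means(ref, bs)
    rows, cols = tm.shape
    for i in range(rows):
        for j in range(cols):
            ok = False
            for di in range(-max_shift, max_shift + 1):
                for dj in range(-max_shift, max_shift + 1):
                    ri, rj = i + di, j + dj
                    if 0 <= ri < rows and 0 <= rj < cols and \
                       abs(tm[i, j] - rm[ri, rj]) <= max_diff:
                        ok = True
            if not ok:
                return False  # some test sub-region has no acceptable match
    return True

ref = np.zeros((8, 8))
test = ref + 2.0             # small uniform brightness change
print(images_match(test, ref))   # True: within the allowed difference
```

Averaging per block is what makes the test tolerant of single-pixel rendering differences, while the shift window tolerates small spatial offsets between renderers.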