
Showing papers on "Standard test image published in 2000"


Book
01 Feb 2000
TL;DR: This book provides readers with a complete library of algorithms for digital image processing, coding, and analysis, supplemented with an ftp site containing detailed lab exercises, PDF transparencies, and source code.
Abstract: From the Publisher: Digital data acquired by scanners, radar systems, and digital cameras are typically computer processed to produce images through digital image processing. Through various techniques employing image processing algorithms, digital images can be enhanced for viewing and human interpretation. This book provides readers with a complete library of algorithms for digital image processing, coding, and analysis. Reviewing all facets of the technology, it is supplemented with an ftp site containing detailed lab exercises, PDF transparencies, and source code (all algorithms are presented in C-code).

507 citations


Book
15 May 2000
TL;DR: 1. Introduction 2. Imaging 3. Digital Images 4. Images in Java 5. Basic Image Manipulation 6. Grey level and colour enhancement 7. Neighbourhood Operations 8. The Frequency Domain 9. Geometric operations 10. Segmentation 11. Morphological Image Processing 12. Image Compression
Abstract: 1 Introduction 2 Imaging 3 Digital Images 4 Images in Java 5 Basic Image Manipulation 6 Grey level and colour enhancement 7 Neighbourhood Operations 8 The Frequency Domain 9 Geometric operations 10 Segmentation 11 Morphological Image Processing 12 Image Compression Appendix A Glossary of Image Processing Terms

206 citations


Proceedings Article
01 Jan 2000
TL;DR: A neurally-inspired, unsupervised learning algorithm that builds a non-linear generative model for pairs of face images from the same individual and individuals are then recognized by finding the highest relative probability pair among all pairs that consist of a test image and an image whose identity is known.
Abstract: We describe a neurally-inspired, unsupervised learning algorithm that builds a non-linear generative model for pairs of face images from the same individual. Individuals are then recognized by finding the highest relative probability pair among all pairs that consist of a test image and an image whose identity is known. Our method compares favorably with other methods in the literature. The generative model consists of a single layer of rate-coded, non-linear feature detectors and it has the property that, given a data vector, the true posterior probability distribution over the feature detector activities can be inferred rapidly without iteration or approximation. The weights of the feature detectors are learned by comparing the correlations of pixel intensities and feature activations in two phases: When the network is observing real data and when it is observing reconstructions of real data generated from the feature activations.

157 citations


Proceedings ArticleDOI
05 Jun 2000
TL;DR: The criteria for monochrome compressed image quality from 1974 to 1999 are reviewed; attempts to improve quality measurement include incorporation of simple models of the human visual system (HVS) and multi-dimensional tool design.
Abstract: While lossy image compression techniques are vital in reducing bandwidth and storage requirements, they result in distortions in compressed images. A reliable quality measure is a much needed tool for determining the type and amount of image distortion. The traditional subjective criteria, which involve human observers, are inconvenient, time-consuming, and influenced by environmental conditions. Widely used pixel-wise measures such as the mean square error (MSE) cannot capture artifacts like blurriness or blockiness, and do not correlate well with visual error perception. Attempts to improve quality measurement include incorporation of simple models of the human visual system (HVS) and multi-dimensional tool design. We review the criteria for monochrome compressed image quality from 1974 to 1999.

127 citations
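The pixel-wise measures the survey critiques are simple to state; below is a minimal sketch of MSE and the derived PSNR, assuming 8-bit grayscale arrays. The synthetic images and noise level are illustrative only, and this is not one of the HVS-based metrics the paper reviews.

```python
import numpy as np

def mse(reference: np.ndarray, test: np.ndarray) -> float:
    """Pixel-wise mean squared error between two equally sized images."""
    ref = reference.astype(np.float64)
    tst = test.astype(np.float64)
    return float(np.mean((ref - tst) ** 2))

def psnr(reference: np.ndarray, test: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB, derived directly from the MSE."""
    err = mse(reference, test)
    return float("inf") if err == 0 else 10.0 * np.log10(peak ** 2 / err)

rng = np.random.default_rng(0)
original = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
noisy = np.clip(original + rng.normal(0, 5, original.shape), 0, 255).astype(np.uint8)
print(f"MSE = {mse(original, noisy):.2f}, PSNR = {psnr(original, noisy):.2f} dB")
```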


Patent
10 Aug 2000
TL;DR: In this paper, a shop checkout is provided with a system for recognising objects, particularly objects that do not carry a bar code or UPC code, and the checkout operator may interact with the system via a touchscreen which displays results of the object recognition process and accepts input such as menu selections from the operator.
Abstract: A shop checkout is provided with a system for recognising objects, particularly objects that do not carry a bar code or UPC code. The object (11) is placed on a viewplate (10) which is lit from behind and a digital camera (13) captures a backlit silhouette image of the object. A data processing system analyses the image to extract characterising features of the image and the object is identified from a list of known object features in a database. The checkout operator may interact with the system via a touchscreen (23) which displays results of the object recognition process and accepts input such as menu selections from the operator.

102 citations


Patent
02 Aug 2000
TL;DR: In this paper, a method for processing a digital image in a distributed manner to produce a final modified digital image includes providing a source digital image at a first computer, providing the source digital image at a second computer, and modifying the source digital image at the second computer to form a first modified digital image.
Abstract: A method for processing a digital image in a distributed manner to produce a final modified digital image includes providing a source digital image at a first computer, providing the source digital image at a second computer, and modifying the source digital image at the second computer to form a first modified digital image. The method further includes determining a difference digital image representing the difference between the source digital image and the first modified digital image, transferring the difference digital image to the first computer, and combining the difference digital image with the source digital image at the first computer to form the final modified digital image.

90 citations
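The claim reduces to arithmetic on pixel values: the difference image is the signed difference between the first modified image and the source, and the first computer recovers the modified image by adding it back. A small sketch under that reading (the brightening edit and 8-bit range are illustrative, not from the patent):

```python
import numpy as np

def difference_image(source: np.ndarray, modified: np.ndarray) -> np.ndarray:
    """Signed difference between the first modified image and the source image."""
    return modified.astype(np.int16) - source.astype(np.int16)

def combine(source: np.ndarray, difference: np.ndarray) -> np.ndarray:
    """Reconstruct the final modified image from the source and the transferred difference."""
    return np.clip(source.astype(np.int16) + difference, 0, 255).astype(np.uint8)

source = np.full((4, 4), 100, dtype=np.uint8)                               # held at both computers
modified = np.clip(source.astype(np.int16) + 20, 0, 255).astype(np.uint8)   # edit made at the second computer
diff = difference_image(source, modified)                                   # only this is transferred back
final = combine(source, diff)                                               # final modified image at the first computer
assert np.array_equal(final, modified)
```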


Journal ArticleDOI
TL;DR: At ×2 magnification, images compressed with either JPEG or WTCQ algorithms were indistinguishable from unaltered original images for most observers at compression ratios between 8:1 and 16:1, indicating that 10:1 compression is acceptable for primary image interpretation.
Abstract: PURPOSE: To determine the degree of irreversible image compression detectable in conservative viewing conditions. MATERIALS AND METHODS: An image-comparison workstation, which alternately displayed two registered and magnified versions of an image, was used to study observer detection of image degradation introduced by irreversible compression. Five observers evaluated 20 16-bit posteroanterior digital chest radiographs compressed with Joint Photographic Experts Group (JPEG) or wavelet-based trellis-coded quantization (WTCQ) algorithms at compression ratios of 8:1–128:1 and ×2 magnification by using (a) traditional two-alternative forced choice; (b) original-revealed two-alternative forced choice, in which the noncompressed image is identified to the observer; and (c) a resolution-metric method of matching test images to degraded reference images. RESULTS: The visually lossless threshold was between 8:1 and 16:1 for four observers. JPEG compression resulted in performance as good as that with WTCQ compres...

86 citations


Proceedings ArticleDOI
03 Sep 2000
TL;DR: A range image segmentation contest was organized in conjunction with ICPR'2000, and the goal is to continue the effort of experimentally evaluating range image segmentation algorithms initiated by Hoover et al. (1996) and Powell et al. (1998).
Abstract: A range image segmentation contest was organized in conjunction with ICPR'2000. The goal is to continue the effort of experimentally evaluating range image segmentation algorithms initiated by Hoover et al. (1996) and Powell et al. (1998). This paper summarizes the results of the contest.

74 citations


Journal Article
TL;DR: For situations where digital image transmission time and costs should be minimized, Wavelet image compression to 15 KB is recommended, although there is a slight cost of computational time.
Abstract: PURPOSE. To investigate image compression of digital retinal images and the effect of various levels of compression on the quality of the images. METHODS. JPEG (Joint Photographic Experts Group) and Wavelet image compression techniques were applied in five different levels to 11 eyes with subtle retinal abnormalities and to 4 normal eyes. Image quality was assessed by four different methods: calculation of the root mean square (RMS) error between the original and compressed image, determining the level of arteriole branching, identification of retinal abnormalities by experienced observers, and a subjective assessment of overall image quality. To verify the techniques used and findings, a second set of retinal images was assessed by calculation of RMS error and overall image quality. RESULTS. Plots and tabulations of the data as a function of the final image size showed that when the original image size of 1.5 MB was reduced to 29 KB using JPEG compression, there was no serious degradation in quality. The smallest Wavelet compressed images in this study (15 KB) were generally still of acceptable quality. CONCLUSIONS. For situations where digital image transmission time and costs should be minimized, Wavelet image compression to 15 KB is recommended, although there is a slight cost of computational time. Where computational time should be minimized, and to remain compatible with other imaging systems, the use of JPEG compression to 29 KB is an excellent alternative.

59 citations
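The trade-off the study measures, file size against RMS error, is easy to reproduce in outline with Pillow's JPEG encoder. The sketch below compresses a synthetic image at several quality settings (quality values rather than target kilobyte sizes, and a random image rather than a retinal photograph, so the numbers are only illustrative).

```python
import io
import numpy as np
from PIL import Image

def jpeg_roundtrip(img: Image.Image, quality: int):
    """Encode to JPEG in memory; return the decoded image and the compressed size in bytes."""
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    return Image.open(io.BytesIO(buf.getvalue())), buf.tell()

def rms_error(a: Image.Image, b: Image.Image) -> float:
    """Root mean square error between two images of the same size."""
    x = np.asarray(a, dtype=np.float64)
    y = np.asarray(b, dtype=np.float64)
    return float(np.sqrt(np.mean((x - y) ** 2)))

rng = np.random.default_rng(1)
original = Image.fromarray(rng.integers(0, 256, (256, 256, 3), dtype=np.uint8))
for quality in (95, 75, 50, 25, 10):
    decoded, nbytes = jpeg_roundtrip(original, quality)
    print(f"quality={quality:3d}  size={nbytes / 1024:6.1f} KB  RMS={rms_error(original, decoded):6.2f}")
```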


Patent
24 Nov 2000
TL;DR: In this paper, a landmark-based piecewise linear mapping of one volumetric image into another volumetric image is proposed, along with a new set of brain landmarks and algorithms for their automatic identification in three orientations: axial, coronal, and sagittal.
Abstract: A system for analysing a brain image compares the image with a brain atlas, labels the image accordingly, and annotates the regions of interest and/or other structures. These atlas-enhanced data are written to one or more files in the DICOM format or any web-enabled format such as SGML or XML. The image used may be produced by any medical imaging modality. A fast algorithm is proposed for a landmark-based piecewise linear mapping of one volumetric image into another volumetric image. Furthermore, a new set of brain landmarks is proposed, and algorithms for the automatic identification of these landmarks are formulated for three orientations: axial, coronal, and sagittal.

50 citations
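A landmark-based piecewise linear mapping can be sketched per axis with linear interpolation between landmark positions. The axis-separable version below, with made-up landmark coordinates, only illustrates the idea and is not the patent's algorithm or landmark set.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def piecewise_linear_warp(volume, landmarks_src, landmarks_dst):
    """Warp a volume so landmark planes along each axis move to target positions.

    landmarks_src / landmarks_dst map axis index -> sorted landmark coordinates
    (same count per axis); between landmarks the mapping is linear.
    """
    coords = []
    for axis, n in enumerate(volume.shape):
        out = np.arange(n, dtype=np.float64)
        # for each output position, where to sample in the source volume
        coords.append(np.interp(out, landmarks_dst[axis], landmarks_src[axis]))
    grid = np.meshgrid(*coords, indexing="ij")
    return map_coordinates(volume, grid, order=1, mode="nearest")

vol = np.random.default_rng(0).random((32, 32, 32))
src = {0: [0, 10, 31], 1: [0, 31], 2: [0, 31]}   # landmark planes in the source volume
dst = {0: [0, 16, 31], 1: [0, 31], 2: [0, 31]}   # where they should land in the atlas
warped = piecewise_linear_warp(vol, src, dst)
```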


Proceedings ArticleDOI
30 Jul 2000
TL;DR: A new method for image indexing and retrieval that is based on pixel statistics from varying spatial scales, using isotropic structuring elements to determine the frequency distribution of pixels locally in the image and to detect local groups of pixels with uniform color or texture attributes.
Abstract: We present a new method for image indexing and retrieval that is based on pixel statistics from varying spatial scales. The proposed method employs a structuring element to determine the frequency distribution of pixels locally in the image and to detect local groups of pixels with uniform color or texture attributes. The frequency distribution and relative sizes of such groups are summarized into a table termed as a blob histogram. By embedding spatial information, color blob histograms are able to distinguish images that have the same color pixel distribution but contain objects with different sizes or shapes, without the need for segmentation. Using isotropic structuring elements, blob histograms are invariant to rotations and translations of the objects in an image. Experimental results of using blob histograms in image retrieval are given in the paper.
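A rough sketch of the blob-histogram idea: quantize colours, test at several window sizes whether a pixel's whole neighbourhood falls into one colour bin, and tabulate the counts per (colour, scale) cell. Square windows stand in for the paper's isotropic structuring elements, and the bin and scale choices are arbitrary.

```python
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter

def blob_histogram(image: np.ndarray, n_bins: int = 8, scales=(3, 7, 11)) -> np.ndarray:
    """Counts of pixels whose whole neighbourhood shares one colour bin, per bin and scale."""
    bins = np.minimum(image.astype(np.int32) * n_bins // 256, n_bins - 1)  # quantize grey levels
    hist = np.zeros((n_bins, len(scales)), dtype=np.int64)
    for s_idx, size in enumerate(scales):
        uniform = minimum_filter(bins, size=size) == maximum_filter(bins, size=size)
        for b in range(n_bins):
            hist[b, s_idx] = int(np.count_nonzero(uniform & (bins == b)))
    return hist

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (128, 128), dtype=np.uint8)
img[20:80, 20:80] = 200          # a large uniform blob survives at every scale
print(blob_histogram(img))
```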

Patent
12 Dec 2000
TL;DR: A technique for high-speed, high-accuracy visual inspection suited to a board production line: image data of a substrate 1 are acquired with a scanning head 16 of a test unit 14, which scans the substrate with its line sensor 34, and are read into an image memory 44 of a main unit 12 as a test image 41.
Abstract: PROBLEM TO BE SOLVED: To provide a technology for high-speed, high-accuracy visual inspection that can readily be introduced into a board production line. SOLUTION: Image data 54 of a substrate 1 are acquired with a scanning head 16 of a test unit 14, which scans the substrate 1 with its line sensor 34, and are read into an image memory 44 of a main unit 12 as a test image 41. The image memory 44 stores a previously tested image of a good product as a reference image 43. An analysis unit 46 compares the test image 41 with the reference image 43, checks whether the features in the two images coincide, and judges the substrate 1 defective if a discrepancy is found and good otherwise. When the substrate 1 is judged good, the stored reference image 43 is replaced by the test image 41 of that good product. This avoids the complicated setting of test items and parameters and the long test times of conventional inspection.
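Read as image arithmetic, the comparison step is a thresholded difference against the stored good-product reference, followed by a pass/fail decision and, on a pass, replacement of the reference. A minimal sketch assuming grayscale images and made-up thresholds:

```python
import numpy as np

def inspect(test_image: np.ndarray, reference_image: np.ndarray,
            diff_threshold: int = 30, max_defect_pixels: int = 50) -> bool:
    """Judge a board good (True) or defective (False) by comparing it with the reference image."""
    diff = np.abs(test_image.astype(np.int16) - reference_image.astype(np.int16))
    return int(np.count_nonzero(diff > diff_threshold)) < max_defect_pixels

reference = np.full((100, 100), 128, dtype=np.uint8)   # image of a known good product
test = reference.copy()                                 # latest scanned test image
if inspect(test, reference):
    reference = test.copy()   # on a pass, the latest good image becomes the new reference
```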

Journal Article
TL;DR: In this paper, the automatic registration of multi-source remote sensing images is discussed, the presentation being based on global image matching.
Abstract: Image fusion is important in the synthetic application of multi-source remotely sensed images, and image registration is the basis of image fusion. The traditional method of manual registration is a laborious, tedious, and complex task. To improve the efficiency of image fusion, automatic methods of image registration must be used. In this paper, automatic registration based on the theory and methods of image matching is presented. The main contents include the extraction of feature points, single-point matching, a reliability strategy based on a hierarchical pyramid image structure, and global image matching based on hierarchical probabilistic relaxation. In addition, some preprocessing may have to be applied, such as the calculation of the transformation parameters of image shift and rotation, the determination of the scale factor, and the use of digital image processing functions. Different images, such as those from SPOT and Landsat TM, have been tested to check the principle of automatic registration. After the feature points have been extracted, an initial matching based on the similarity of grey levels is conducted to find candidates for the homologous points in the slave image. The automatic registration of multi-source remote sensing images is then discussed, the presentation being based on global image matching. Finally, a successful result of the automated registration and fusion of TM and SPOT images is presented.

Patent
Kevin M. Ferguson
01 Nov 2000
TL;DR: In this paper, a method of real-time human vision system modeling to produce a measure of impairment of a test image signal derived from a reference image signal processes the two signals in respective channels.
Abstract: A method of real-time human vision system modeling to produce a measure of impairment of a test image signal derived from a reference image signal processes the two signals in respective channels. The signals are converted to luminance image signals and low-pass filtered in two dimensions. The processed image signals are then segmented, and block mean values are obtained and subtracted from the pixels in the corresponding processed image signals. Noise is injected into the segmented processed image signals, and a variance is calculated for the reference segmented processed image signal and also for the difference between the segmented processed image signals. The variance of the difference segmented processed image signal is normalized by the variance of the reference segmented processed image signal, and the Nth root of the result is taken as the measure of visible impairment of the test image signal. The measure of visible impairment may be converted into appropriate units, such as JND, MOS, etc.
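The processing chain in the abstract (luminance, 2-D low-pass, block-mean removal, noise injection, variance ratio, Nth root) can be strung together roughly as below. The filter size, block size, noise level, and root are guesses, since the abstract does not give them.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def impairment_measure(reference: np.ndarray, test: np.ndarray,
                       block: int = 16, noise_std: float = 0.5,
                       n: float = 4.0, seed: int = 0) -> float:
    """Variance-ratio impairment measure of a test luminance image against a reference."""
    rng = np.random.default_rng(seed)

    def preprocess(img):
        img = uniform_filter(img.astype(np.float64), size=3)        # 2-D low-pass filter
        h = img.shape[0] // block * block
        w = img.shape[1] // block * block
        blocks = img[:h, :w].reshape(h // block, block, w // block, block)
        blocks = blocks - blocks.mean(axis=(1, 3), keepdims=True)   # subtract block means
        return blocks.reshape(h, w) + rng.normal(0.0, noise_std, (h, w))  # inject noise

    ref_seg = preprocess(reference)
    tst_seg = preprocess(test)
    var_ratio = np.var(tst_seg - ref_seg) / np.var(ref_seg)         # normalize by reference variance
    return float(var_ratio ** (1.0 / n))                            # Nth root as the impairment score

rng = np.random.default_rng(1)
ref = rng.normal(128, 30, (128, 128))
print(impairment_measure(ref, ref + rng.normal(0, 5, ref.shape)))
```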

Journal ArticleDOI
TL;DR: A new active contour model is presented: a neural network based on self-organization. It exploits the principles of spatial isomorphism and self-organization to create flexible contours that characterize shapes in images.

Patent
25 Jan 2000
TL;DR: In this article, a method is proposed to infer a scene from a test image using a set of images and corresponding scenes, where each of the images and scenes are partitioned respectively into a plurality of image patches and scene patches, and probabilities of the compatibility matrices are propagated in the network until convergence.
Abstract: A method infers a scene from a test image. During a training phase, a plurality of images and corresponding scenes are acquired. Each of the images and corresponding scenes are partitioned respectively into a plurality of image patches and scene patches. Each image patch is represented as an image vector, and each scene patch is represented as a scene vector. The image vectors and scene vectors are modeled as a network. During an inference phase, the test image is acquired. The test image is partitioned into a plurality of test image patches. Each test image patch is represented as a test image vector. Candidate scene vectors corresponding to the test image vectors are located in the network. Compatibility matrices for the candidate scene vectors are determined, and probabilities of the compatibility matrices are propagated in the network until convergence to infer the scene from the test image.
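The training and lookup stages are straightforward to sketch: the fragment below partitions images into patch vectors and retrieves candidate scene vectors by nearest-neighbour search over the training image vectors. The compatibility matrices and belief-propagation step are omitted, and the toy "scene" is just a scaled copy of the image.

```python
import numpy as np

def to_patches(arr: np.ndarray, p: int) -> np.ndarray:
    """Partition a 2-D array into non-overlapping p x p patches, one row vector per patch."""
    h, w = arr.shape[0] // p * p, arr.shape[1] // p * p
    patches = arr[:h, :w].reshape(h // p, p, w // p, p).swapaxes(1, 2)
    return patches.reshape(-1, p * p).astype(np.float64)

def candidate_scene_vectors(test_vec, image_vectors, scene_vectors, k: int = 5):
    """The k training scene vectors whose image vectors lie closest to the test image vector."""
    dist = np.linalg.norm(image_vectors - test_vec, axis=1)
    return scene_vectors[np.argsort(dist)[:k]]

rng = np.random.default_rng(0)
train_image = rng.random((64, 64))
train_scene = 0.5 * train_image              # stand-in for the true underlying scene
img_vecs, scn_vecs = to_patches(train_image, 8), to_patches(train_scene, 8)

test_vecs = to_patches(rng.random((64, 64)), 8)
candidates = [candidate_scene_vectors(v, img_vecs, scn_vecs) for v in test_vecs]
```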

Patent
James N. Wiley, Jun Ye, Shauh-Teh Juang, David S. Alles, Yen-Wen Lu, Yu Cao
28 Apr 2000
TL;DR: In this paper, a method of inspecting a reticle defining a circuit layer pattern that is used within a corresponding semiconductor process to generate corresponding patterns on a semiconductor wafer is presented.
Abstract: Disclosed is a method of inspecting a reticle defining a circuit layer pattern that is used within a corresponding semiconductor process to generate corresponding patterns on a semiconductor wafer. A test image of the reticle is provided, and the test image has a plurality of test characteristic values. A baseline image containing an expected pattern of the test image is also provided. The baseline image has a plurality of baseline characteristic values that correspond to the test characteristic values. The test characteristic values are compared to the baseline characteristic values such that a plurality of difference values are calculated for each pair of test and baseline characteristic values. Statistical information is also collected.

Proceedings ArticleDOI
28 May 2000
TL;DR: The proposed method is robust to high-quality lossy image compression and provides the user not only with a measure for the authenticity of the test image but also with an image map that highlights the unaltered image regions when selective tampering has been made.
Abstract: A novel method for image authentication is proposed. A watermark signal is embedded in a grayscale or a color host image. The watermark key controls a set of parameters of a chaotic system used for the watermark generation. The use of chaotic mixing increases the security of the proposed method and provides the additional feature of imperceptible encryption of the image owner logo in the host image. The method succeeds in detecting any alteration made in a watermarked image. The proposed method is robust to high-quality lossy image compression. It provides the user not only with a measure for the authenticity of the test image but also with an image map that highlights the unaltered image regions when selective tampering has been made.
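A much-simplified sketch of the key-driven, chaos-generated watermark idea: a logistic map seeded by the key produces a binary watermark, which here is written into pixel LSBs, and verification compares the extracted bits block by block to build a map of altered regions. The LSB embedding is fragile (unlike the paper's method, which claims robustness to high-quality lossy compression), and none of the constants come from the paper.

```python
import numpy as np

def chaotic_bits(key: float, n: int, burn_in: int = 100) -> np.ndarray:
    """Binary sequence from the logistic map x <- 4x(1-x); the watermark key is the initial value."""
    x, bits = key, np.empty(n, dtype=np.uint8)
    for _ in range(burn_in):
        x = 4.0 * x * (1.0 - x)
    for i in range(n):
        x = 4.0 * x * (1.0 - x)
        bits[i] = 1 if x > 0.5 else 0
    return bits

def embed(host: np.ndarray, key: float) -> np.ndarray:
    """Write the key-dependent watermark into the least significant bit of each pixel."""
    watermark = chaotic_bits(key, host.size).reshape(host.shape)
    return (host & 0xFE) | watermark

def tamper_map(image: np.ndarray, key: float, block: int = 8) -> np.ndarray:
    """Boolean map of blocks whose extracted bits no longer match the key's watermark."""
    watermark = chaotic_bits(key, image.size).reshape(image.shape)
    mismatch = (image & 1) != watermark
    h, w = image.shape[0] // block * block, image.shape[1] // block * block
    blocks = mismatch[:h, :w].reshape(h // block, block, w // block, block)
    return blocks.any(axis=(1, 3))

rng = np.random.default_rng(0)
marked = embed(rng.integers(0, 256, (64, 64), dtype=np.uint8), key=0.3141)
marked[10:14, 10:14] = 255                            # simulate selective tampering
print(np.argwhere(tamper_map(marked, key=0.3141)))    # flags the altered block(s)
```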

Proceedings ArticleDOI
01 Sep 2000
TL;DR: Experimental results show the advantages of using an FVQ/HMM recognizer engine instead of conventional discrete HMMs.
Abstract: An unconstrained Farsi handwritten word recognition system based on fuzzy vector quantization (FVQ) and a hidden Markov model (HMM) for reading city names in postal addresses is presented. Preprocessing techniques including binarization, noise removal, slope correction and baseline estimation are described. Each word image is represented by its contour information. The histograms of chain code slopes of the image strips (frames), scanned from right to left by a sliding window, are used as feature vectors. Fuzzy c-means (FCM) clustering is used for generating a fuzzy code book. A separate HMM is trained by a modified Baum-Welch algorithm for each city name. A test image is recognized by finding the best match (likelihood) between the image and all of the HMM word models using a forward algorithm. Experimental results show the advantages of using an FVQ/HMM recognizer engine instead of conventional discrete HMMs.
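Only the final scoring step is sketched here: a scaled forward algorithm over discrete codebook symbols and a recognizer that picks the word model with the highest likelihood. The chain-code features, fuzzy c-means codebook, and FVQ soft assignments are omitted, and the two toy "city name" models are invented.

```python
import numpy as np

def log_forward(obs, start, trans, emit) -> float:
    """Scaled forward algorithm: log-likelihood of a discrete symbol sequence under one HMM."""
    alpha = start * emit[:, obs[0]]
    log_lik = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ trans) * emit[:, o]
        scale = alpha.sum()
        log_lik += np.log(scale)
        alpha = alpha / scale
    return float(log_lik)

def recognize(obs, models) -> str:
    """Pick the word model giving the test sequence the highest forward likelihood."""
    return max(models, key=lambda name: log_forward(obs, *models[name]))

# Toy 2-state, 3-symbol word models: (initial, transition, emission) probabilities.
models = {
    "tehran":  (np.array([0.9, 0.1]),
                np.array([[0.8, 0.2], [0.2, 0.8]]),
                np.array([[0.7, 0.2, 0.1], [0.1, 0.2, 0.7]])),
    "esfahan": (np.array([0.5, 0.5]),
                np.array([[0.5, 0.5], [0.5, 0.5]]),
                np.array([[0.1, 0.8, 0.1], [0.3, 0.4, 0.3]])),
}
print(recognize(np.array([0, 0, 2, 2, 2]), models))
```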

Patent
04 Feb 2000
TL;DR: In this paper, a similarity measurement method for the classification of medical images into predetermined categories is proposed, and a small set of pre-classified images is required to employ the method.
Abstract: A similarity measurement method for the classification of medical images into predetermined categories. A small set of pre-classified images is required to employ the method. The images can be real-world images acquired using a camera, computed tomography, etc., or schematic drawings representing samples of different classes. The use of schematic drawings as a source of images allows a quick test of the method for a particular classification problem. The eigenvectors for each category are mathematically derived, and each image in each category is represented as a weighted linear combination of the eigenvectors. A test image is provided and projected onto the eigenvectors of each of the categories so as to reconstruct the test image with the eigenvectors. The RMS (root-mean-square) distance between the test image and each of the categories is measured. The smallest similarity measurement distance is selected, and the test image is classified in accordance with the selected smallest similarity measurement distance.
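The classification rule reduces to projecting the test image onto each category's eigenvectors, reconstructing it, and taking the category with the smallest RMS reconstruction distance. A small sketch with synthetic "categories" (the number of eigenvectors and the toy data are made up):

```python
import numpy as np

class CategoryModel:
    """Eigenvector model for one pre-classified image category."""
    def __init__(self, images: np.ndarray, n_eigenvectors: int = 5):
        data = images.reshape(len(images), -1).astype(np.float64)
        self.mean = data.mean(axis=0)
        # rows of vt are the eigenvectors of the category's covariance matrix
        _, _, vt = np.linalg.svd(data - self.mean, full_matrices=False)
        self.basis = vt[:n_eigenvectors]

    def rms_distance(self, image: np.ndarray) -> float:
        """RMS distance between the test image and its reconstruction from this category's basis."""
        x = image.reshape(-1).astype(np.float64) - self.mean
        reconstruction = self.basis.T @ (self.basis @ x)
        return float(np.sqrt(np.mean((x - reconstruction) ** 2)))

def classify(image: np.ndarray, models: dict) -> str:
    """Assign the test image to the category with the smallest similarity-measurement distance."""
    return min(models, key=lambda name: models[name].rms_distance(image))

rng = np.random.default_rng(0)
models = {
    "normal":   CategoryModel(rng.normal(100, 5, (20, 16, 16))),
    "abnormal": CategoryModel(rng.normal(160, 5, (20, 16, 16))),
}
print(classify(rng.normal(158, 5, (16, 16)), models))   # expected: "abnormal"
```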

Journal ArticleDOI
TL;DR: Fundus images can be digitized and stored with significant compression while preserving stereopsis and image quality suitable for quantitative image analysis and semiquantitative grading.
Abstract: Purpose To investigate the effects of image digitization and compression on the ability to identify and quantify features in color fundus photographs. Methods Color fundus photographs were digitized as tagged image file format (TIFF) and high-compression (80:1) and low-compression (30:1) joint photographic experts group (JPEG) images. Rerendered images were subjected to standard grading protocols developed for a clinical trial, and digitized images were subjected to image analysis software for drusen identification and quantitation. Re-created stereoscopic images were compared subjectively with originals. Results Original, TIFF, and low-compression (30:1) JPEG images were virtually indistinguishable when subjected to close scrutiny with magnification. The overall quality of high-compression (80:1) JPEG images and images digitized at 500 dots per inch was markedly reduced. Protocol grading of original and digitized images was highly concordant within the repeatability of multiple grading of original images. The area subtended by drusen differed by less than 1.0% for all uncompressed and compressed image pairs quantified. Stereoscopic information was accurately preserved when compared with originals for TIFF and low-compression JPEG images. Conclusions Fundus images can be digitized and stored with significant compression while preserving stereopsis and image quality suitable for quantitative image analysis and semiquantitative grading. Low-compression (30:1) JPEG images may be suitable for archiving and telemedical applications.

Journal ArticleDOI
01 Feb 2000
TL;DR: An edge-preserving image compression model is presented, based on subband coding and iterative constrained least square regularisation, which could significantly improve both the objective and subjective quality of the reconstructed image by preserving more edge details.
Abstract: An edge-preserving image compression model is presented, based on subband coding and iterative constrained least square regularisation. The idea is to incorporate the technique of image restoration into the current lossy image compression schemes. The model utilises the edge information extracted from the source image as a priori knowledge for the subsequent reconstruction. Generally, the extracted edge information has a limited range of magnitudes and it can be lossily conveyed. Subband coding, one of the outstanding lossy image compression schemes, is incorporated to compress the source image. Vector quantisation, a block-based lossy compression technique, is employed to compromise the bit rate incurred by the additional edge information and the target bit rate. Experiments show that the approach could significantly improve both the objective and subjective quality of the reconstructed image by preserving more edge details. Specifically, the model incorporated with SPIHT (set partitioning in hierarchical trees) outperformed the original SPIHT with the "Baboon" continuous-tone test image. In general, the model may be applied to any lossy image compression systems.

Proceedings ArticleDOI
08 Oct 2000
TL;DR: An algorithm for evaluating the quality of JPEG compressed images, called the psychovisually-based image quality evaluator (PIQE), which measures the severity of artifacts produced by JPEG compression, shows that the PIQE model is most accurate in the compression range for which JPEG is most effective.
Abstract: We propose an algorithm for evaluating the quality of JPEG compressed images, called the psychovisually-based image quality evaluator (PIQE), which measures the severity of artifacts produced by JPEG compression. The PIQE evaluates the image quality using two psychovisually-based fidelity criteria: blockiness and similarity. The blockiness is an index that measures the patterned square artifact created as a by-product of the lossy DCT-based compression technique used by JPEG and MPEG. The similarity measures the perceivable detail remaining after compression. The blockiness and similarity are combined into a single PIQE index used to assess quality. The PIQE model is tuned by using subjective assessment results of five subjects on six sets of images. The results show that the PIQE model is most accurate in the compression range for which JPEG is most effective.
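A generic blockiness indicator, not the paper's PIQE formula, can be formed by comparing intensity jumps across 8×8 block boundaries with jumps inside blocks; a value near 1 means little visible block structure, while larger values suggest the DCT block artifact.

```python
import numpy as np

def blockiness(image: np.ndarray, block: int = 8) -> float:
    """Ratio of horizontal intensity jumps across block boundaries to jumps inside blocks."""
    img = image.astype(np.float64)
    col_diff = np.abs(np.diff(img, axis=1))                    # differences between neighbouring columns
    boundary = np.arange(col_diff.shape[1]) % block == block - 1
    return float(col_diff[:, boundary].mean() / col_diff[:, ~boundary].mean())

rng = np.random.default_rng(0)
smooth = rng.normal(128, 2, (64, 64))
blocky = smooth + np.kron(rng.normal(0, 10, (8, 8)), np.ones((8, 8)))  # add constant 8x8 block offsets
print(blockiness(smooth), blockiness(blocky))   # the second value is clearly larger
```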

Proceedings ArticleDOI
10 Sep 2000
TL;DR: A new method for measuring the difference between a distorted image and its original is presented, which provides a quantitative measure that more closely corresponds to a subjective assessment.
Abstract: This paper presents a new method for measuring the difference between a distorted image and its original. Many areas of image processing require the ability to compare such images in order to evaluate the performance of a given algorithm. These areas include image restoration and image compression. The standard method currently used is the mean squared error (MSE). It is a simple value to calculate, but in many instances it provides an inaccurate representation of the image's quality. The new metrics described here provide a quantitative measure that more closely corresponds to a subjective assessment.

Patent
29 Sep 2000
TL;DR: An image processing system in which two images are picked up in succession by an image pick-up device and are then synthesized by an image processing device into a single image of higher quality and effectiveness.
Abstract: PROBLEM TO BE SOLVED: To improve the operating efficiency of image processing when two successively captured images are synthesized into an image of high quality and effectiveness. SOLUTION: An image processing system picks up two images in succession with an image pick-up device and synthesizes the two picked-up images into an image of high quality and effectiveness with an image processing device. For each picked-up image, the image pick-up device generates information (adjustment of fuzziness) indicating the content of the synthesis processing; this information is recorded in the tag region of an image file, and the picked-up image data are recorded in its image region. When a scene is designated for reproduction (#81), the image processing device automatically starts application software that executes the prescribed image synthesis processing on the basis of the information recorded in the tag region, which indicates the content of the synthesis processing (#83 to #95).

Patent
04 May 2000
TL;DR: In this paper, a method and apparatus for providing images in 2D scanners are disclosed, where a series of images are successively generated from a sensing module in a scanner as the scanner is moving across a scanning document.
Abstract: A method and apparatus for producing images in 2-D scanners are disclosed. According to one aspect of the system, a series of images is successively generated by a sensing module in a scanner as the scanner moves across a document. A first image is initially kept in memory. When a second image becomes available, the overlap between the first image and the second image is located. From the overlap, it is determined whether the stored first image is precisely registered with the second image. If the two images are registered, a signal-to-noise-enhanced image is obtained by averaging them; if they are not registered, a combined image is obtained by combining them. Either the signal-to-noise-enhanced image or the combined image is then stored in memory and used with the next image. The process is repeated until all the images are processed. As a result, a signal-to-noise-enhanced image or a combined image representing the entire document is produced.
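One way to read the registration test is a brute-force search for the offset that minimizes the error in the overlap, averaging the frames when they already line up. The sketch below handles only the aligned case explicitly; the search window, threshold, and fall-back behaviour are illustrative guesses.

```python
import numpy as np

def best_offset(first: np.ndarray, second: np.ndarray, max_shift: int = 5):
    """Brute-force (dy, dx) shift of `second` that best matches `first`, with its overlap MSE."""
    h, w = first.shape
    best = (0, 0, np.inf)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            a = first[max(dy, 0):h + min(dy, 0), max(dx, 0):w + min(dx, 0)].astype(np.float64)
            b = second[max(-dy, 0):h + min(-dy, 0), max(-dx, 0):w + min(-dx, 0)].astype(np.float64)
            err = float(np.mean((a - b) ** 2))
            if err < best[2]:
                best = (dy, dx, err)
    return best

def merge(first: np.ndarray, second: np.ndarray, register_threshold: float = 10.0) -> np.ndarray:
    """Average the frames when registered (SNR enhancement); otherwise keep the newer frame."""
    dy, dx, err = best_offset(first, second)
    if err <= register_threshold and (dy, dx) == (0, 0):
        return ((first.astype(np.uint16) + second) // 2).astype(np.uint8)
    return second   # a real implementation would shift and stitch here instead

rng = np.random.default_rng(0)
scene = rng.integers(0, 256, (40, 40)).astype(np.float64)
frame1 = np.clip(scene + rng.normal(0, 1, scene.shape), 0, 255).astype(np.uint8)
frame2 = np.clip(scene + rng.normal(0, 1, scene.shape), 0, 255).astype(np.uint8)
print(best_offset(frame1, frame2)[:2], merge(frame1, frame2).dtype)
```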

Proceedings Article
01 Sep 2000
TL;DR: In this paper, a binary watermark is embedded in a grayscale or a color host image and the method succeeds in detecting alterations made in a watermarked image, which provides the user not only with a measure for the authenticity of the test image but also with an image map that highlights the unaltered regions of the image when selective tampering has been made.
Abstract: A novel method for image authentication and tamper proofing is proposed. A binary watermark is embedded in a grayscale or a color host image. The method succeeds in detecting alterations made in a watermarked image. The proposed method is robust against high quality lossy image compression. It provides the user not only with a measure for the authenticity of the test image but also with an image map that highlights the unaltered regions of the image when selective tampering has been made. Mathematical morphology techniques are also developed for the accurate detection of alterations in fine image details.

Journal ArticleDOI
TL;DR: A new scheme is proposed for fusing multisensor images in which one image is regarded as the main image and the other as the complementary image, based on the evaluation of certain characteristics in the images.
Abstract: A new scheme is proposed for fusing multisensor images in which one image is regarded as the main image and the other as the complementary image, based on the evaluation of certain characteristics in the images. In effect, the scheme is used to fuse an image pair in which one image is superior to the other for interpretation in terms of higher resolution, better image quality, or having more recognizable features. Feature information is based on local statistical characteristics, which are extracted using the analysis of variance (ANOVA) method, in the framework of experimental designs. In effect, feature information from one image is used to influence the corresponding pixel values of the other image. The fused image leads to a better human and/or machine interpretation of the area of interest in the images.
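The idea of letting local feature information from one image influence the other can be approximated with plain local variance in place of the paper's ANOVA machinery; the weighting and window size below are arbitrary.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_variance(img: np.ndarray, size: int = 7) -> np.ndarray:
    """Local variance in a size x size window (E[x^2] - E[x]^2)."""
    img = img.astype(np.float64)
    return uniform_filter(img ** 2, size) - uniform_filter(img, size) ** 2

def fuse(main: np.ndarray, complementary: np.ndarray, gain: float = 0.5) -> np.ndarray:
    """Blend the complementary image into the main image where it carries more local detail."""
    v_main, v_comp = local_variance(main), local_variance(complementary)
    weight = v_comp / (v_main + v_comp + 1e-9)          # 0..1, larger where the complement is busier
    fused = (1.0 - gain * weight) * main + gain * weight * complementary
    return np.clip(fused, 0, 255).astype(np.uint8)

rng = np.random.default_rng(0)
main = rng.normal(120, 10, (64, 64))            # e.g. the higher-quality image
complementary = rng.normal(120, 40, (64, 64))   # e.g. a coarser but feature-rich image
print(fuse(main, complementary).shape)
```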

Patent
18 Apr 2000
TL;DR: In this article, a system for locating features in image data is presented, which includes a first component system that compares first component data, which can be pixel data of a first user-selected component of the features, to first test image data.
Abstract: A system for locating features in image data is provided. The system includes a first component system. The first component system compares first component data, which can be pixel data of a first user-selected component of the features, to first test image data (124), which can be selected by scanning image data of a device (102), such as a die cut from a silicon wafer. The system also includes a second component system that is connected to the first component system, such as through data memory locations of a processor. The second component system compares second component data to second test image data if the first component system finds a match between the first component data and the first test image data (118). The second test image data is selected based upon the first test image data, such as by using a known coordinate relationship between pixels of the first component data and the second component data.

Patent
11 Feb 2000
TL;DR: On the basis of the image data of an input image, an image discrimination unit discriminates whether the input image is a color image or an image having index data, and the image is then corrected by an image correction unit based upon the results of the discrimination, as mentioned in this paper.
Abstract: On the basis of the image data of an input image, an image discrimination unit discriminates whether the input image is a color image or an image having index data. The input image is corrected by an image correction unit based upon the results of the discrimination.