
Showing papers on "Standard test image" published in 1993


Book
01 May 1993
TL;DR: A textbook covering digital image processing fundamentals, digital image transform algorithms, digital image filtering, digital image compression, edge detection algorithms, image segmentation algorithms, and shape description.
Abstract: Digital image processing fundamentals; digital image transform algorithms; digital image filtering; digital image compression; edge detection algorithms; image segmentation algorithms; shape description.

391 citations


Journal ArticleDOI
TL;DR: A method is proposed whereby a color image is treated as a vector field and the edge information carried directly by the vectors is exploited and the efficiency of the detector is demonstrated.
Abstract: A method is proposed whereby a color image is treated as a vector field and the edge information carried directly by the vectors is exploited. A class of color edge detectors is defined as the minimum over the magnitudes of linear combinations of the sorted vector samples. From this class, a specific edge detector is obtained and its performance characteristics studied. Results of a quantitative evaluation and comparison to other color edge detectors, using Pratt's (1991) figure of merit and an artificially generated test image, are presented. Edge detection results obtained for real color images demonstrate the efficiency of the detector.
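
The class described above operates on order statistics of the colour vectors in a local window. Below is a minimal sketch of the simplest member of that class, the vector-range detector; the window size and the plain L2 aggregate-distance ordering are assumptions, and the paper's actual operator takes a minimum over several linear combinations of the sorted samples rather than a single range.

```python
import numpy as np

def vector_range_edge(image, win=3):
    """Vector-range edge detector on an RGB image (H x W x 3 array).

    Pixels in each window are R-ordered by their aggregate distance to
    the other pixels in the window; the edge magnitude is the distance
    between the highest- and lowest-ranked vectors.
    """
    h, w, _ = image.shape
    r = win // 2
    out = np.zeros((h, w))
    for y in range(r, h - r):
        for x in range(r, w - r):
            block = image[y - r:y + r + 1, x - r:x + r + 1].reshape(-1, 3).astype(float)
            # aggregate L2 distance of each vector to all others (R-ordering)
            d = np.linalg.norm(block[:, None, :] - block[None, :, :], axis=2).sum(axis=1)
            order = np.argsort(d)
            # vector range: distance between the extreme ranked samples
            out[y, x] = np.linalg.norm(block[order[-1]] - block[order[0]])
    return out
```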

168 citations


Patent
29 Oct 1993
TL;DR: In this paper, a method of calibrating the response of the printer to an image described in terms of colorimetric values is proposed, including the steps of: a) setting printer parameters; b) deriving a printer response characteristic by printing a calibration test from device dependent printer signals stored in a device memory, the calibration image including a plurality of color patches, some of which may be repeated at a plurality of locations on the test image at spatially disparate locations selected to keep local printer non-uniformities from affecting both locations; c) measuring printer response characteristics in device independent terms; and d) generating a memory mapping of device independent colors to printer responses.
Abstract: A method of calibrating the response of the printer to an image described in terms of colorimetric values, including the steps of: a) setting printer parameters; b) deriving a printer response characteristic by printing a calibration test from device dependent printer signals stored in a device memory, the calibration image including a plurality of color patches, some of which may be repeated at a plurality of locations on the test image at spatially disparate locations selected to keep local printer non-uniformities from affecting both locations; c) measuring printer response characteristics in device independent terms; d) generating a memory mapping of device independent colors to printer responses for subsequent use in printing images defined in device independent terms. The calibration target includes a large number of patches generated from combinations of printer colorants, and may repeat some of those patches either on the same sheet or on a plurality of sheets at positions on the sheet which are spatially distinct.

129 citations


Patent
Michael Keith
13 May 1993
TL;DR: In this article, a method for performing motion estimation in a system having a test image and a plurality of candidate images is provided for determining a best match unless a time out occurs first.
Abstract: A method is provided for performing motion estimation in a system having a test image and a plurality of candidate images. A candidate image is selected and the difference between the test image and the selected candidate image is determined. The motion of an image is estimated according to this differencing, and a determination is made of the duration of the motion estimation process in the system of the present invention. The candidate image selection, the differencing and the motion estimation are then repeated according to the duration determination. The duration determination may be a determination of a time duration or a determination of a number of machine cycles. The system is adapted to iteratively decrease a measurement of the error between the test image and selected candidate images as these actions are repeated. When the error stops decreasing and begins increasing, the assumption is made in the system of the present invention that a best match has been determined. Thus, a best match is iteratively determined unless a time out occurs first.
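
As a rough illustration of the patented idea (iterate until the matching error stops improving, or the time budget runs out), here is a hedged sketch. The diminishing-step search pattern, the SAD error measure, and the wall-clock budget are assumptions; the patent leaves these open and also allows counting machine cycles instead of time.

```python
import time
import numpy as np

def estimate_motion(test_blk, ref, start, budget_s=0.01, step=8):
    """Iterative block-matching motion estimate with a time-out."""
    h, w = test_blk.shape
    by, bx = start
    best = np.abs(ref[by:by + h, bx:bx + w].astype(int) - test_blk).sum()
    t0 = time.monotonic()
    while step >= 1:
        improved = False
        for dy, dx in ((-step, 0), (step, 0), (0, -step), (0, step)):
            if time.monotonic() - t0 > budget_s:      # time-out: accept current best
                return (by, bx), best
            y, x = by + dy, bx + dx
            if 0 <= y <= ref.shape[0] - h and 0 <= x <= ref.shape[1] - w:
                err = np.abs(ref[y:y + h, x:x + w].astype(int) - test_blk).sum()
                if err < best:                        # error still decreasing
                    best, by, bx, improved = err, y, x, True
        if not improved:                              # error stopped decreasing
            step //= 2                                # refine the search
    return (by, bx), best
```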

83 citations


Patent
Walter F. Wafler
18 Jun 1993
TL;DR: In this paper, a scanner subsystem is first calibrated by scanning a known original and electronically comparing the scanned digital image with a stored digital image of the original, and a hard copy of a known test image is then printed by a printer subsystem and the calibrated scanner subsystem scans the hard copy.
Abstract: A digital copier includes an automatic copy quality correction and calibration method that corrects a first component of the copier using a known test original before attempting to correct other components that may be affected by the first component. Preferably, a scanner subsystem is first calibrated by scanning a known original and electronically comparing the scanned digital image with a stored digital image of the original. A hard copy of a known test image is then printed by a printer subsystem and the calibrated scanner subsystem scans the hard copy. The scanned digital image is electronically compared with the test image and the printer subsystem is calibrated based on the comparison.

74 citations


Proceedings ArticleDOI
01 Apr 1993
TL;DR: In this paper, the authors survey and give a classification of the criteria for the evaluation of monochrome image quality, including the widely used mean square error (MSE).
Abstract: Although a variety of techniques are available today for gray-scale image compression, a complete evaluation of these techniques cannot be made as there is no single reliable objective criterion for measuring the error in compressed images. The traditional subjective criteria are burdensome, and usually inaccurate or inconsistent. On the other hand, being the most common objective criterion, the mean square error (MSE) does not have a good correlation with the viewer's response. It is now understood that in order to have a reliable quality measure, a representative model of the complex human visual system is required. In this paper, we survey and give a classification of the criteria for the evaluation of monochrome image quality.
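
For reference, the mean square error criterion that the paper critiques, and the PSNR commonly derived from it, can be computed as follows (a standard formulation, not the paper's contribution):

```python
import numpy as np

def mse(original, compressed):
    """Mean square error between two equal-size grayscale images."""
    diff = original.astype(float) - compressed.astype(float)
    return np.mean(diff ** 2)

def psnr(original, compressed, peak=255.0):
    """Peak signal-to-noise ratio in dB, derived from the MSE."""
    e = mse(original, compressed)
    return float('inf') if e == 0 else 10.0 * np.log10(peak ** 2 / e)
```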

66 citations


Proceedings ArticleDOI
27 Apr 1993
TL;DR: A generalization of fractal coding of images is presented in which image blocks are represented by mappings derived from least squares approximations using fractal functions, which is called the Bath fractal transform (BFT).
Abstract: A generalization of fractal coding of images is presented in which image blocks are represented by mappings derived from least squares approximations using fractal functions. Previously known matching techniques used in fractal transforms are subsets of this generalized method, which is called the Bath fractal transform (BFT). By introducing searching for the best image region for application of the BFT, a hybrid of known methods is achieved. Their fidelity is evaluated by a root-mean-square error measure for a number of polynomial instances of the BFT, over a range of searching levels, using a standard test image. It is shown that the fidelity of the fractal transform increases with both search level and order of the polynomial approximation. The method readily extends to data of higher or lower dimensions, including time, as in image sequences.
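
The core of block-based fractal coding, which the BFT generalises, is a least-squares fit of a decimated domain block onto each range block; the sketch below shows that building block plus an exhaustive search for the best domain region. The block size, 2x decimation, and the affine (first-order) map are assumptions standing in for the paper's more general fractal functions.

```python
import numpy as np

def fit_block(domain, rng):
    """Least-squares affine fit: min over (s, o) of ||s*domain + o - rng||^2."""
    d, r = domain.astype(float).ravel(), rng.astype(float).ravel()
    n = d.size
    denom = n * (d @ d) - d.sum() ** 2
    s = 0.0 if denom == 0 else (n * (d @ r) - d.sum() * r.sum()) / denom
    o = (r.sum() - s * d.sum()) / n
    err = np.sum((s * d + o - r) ** 2)           # squared fit error
    return s, o, err

def best_domain(rng, image, bs=8):
    """Search the image for the domain block that best covers one range block."""
    best = (None, None, None, np.inf)
    for y in range(0, image.shape[0] - 2 * bs + 1, bs):
        for x in range(0, image.shape[1] - 2 * bs + 1, bs):
            # 2x-decimate a (2bs x 2bs) domain block down to range-block size
            dom = image[y:y + 2 * bs, x:x + 2 * bs].reshape(bs, 2, bs, 2).mean(axis=(1, 3))
            s, o, err = fit_block(dom, rng)
            if err < best[3]:
                best = ((y, x), s, o, err)
    return best                                   # position, scale, offset, error
```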

59 citations


Patent
20 Apr 1993
TL;DR: In this paper, a plurality of descriptors, called reference keys and reference series, are generated for both the reference images and the test image, and the reference library is screened for likely matches by comparing the descriptors for the test image to the descriptors of the reference images in the library until a match is found.
Abstract: An image recognition system includes a method and apparatus in which images are characterized and compared on the basis of internal structure, which is independent of image size and image orientation. A library of reference images is first generated and stored; then each input image, or test image, is compared to the images stored in the library until a match is found. The image is represented in memory as nodes, lines, and curves. A plurality of descriptors, called reference keys and reference series, are generated for both the reference images and the test image. The reference library is screened for likely matches by comparing the descriptors for the test image to the descriptors of the reference images in the library. Inclusionary and exclusionary tests are performed. After screening, each of the candidate reference images is searched by comparing the pathway through the reference image with the pathway through the test image, and by the degree of correlation between the reference and test images. In addition, the link ratio, a measure of the portion of the test image actually matched to the reference image, is computed. Searching criteria, like the screening criteria, are based on internal image structure, so that the recognition process is independent of image size and image orientation.

44 citations


Proceedings ArticleDOI
07 Mar 1993
TL;DR: This study covers data compression algorithms, file format schemes, and fractal image compression, examining in depth how an iterative approach to image compression is implemented.
Abstract: Data compression as it is applicable to image processing is addressed. The relative effectiveness of several image compression strategies is analyzed. This study covers data compression algorithms, file format schemes, and fractal image compression. An overview of the popular LZW compression algorithm and its subsequent variations is also given. Several common image file formats are surveyed, highlighting the differing approaches to image compression. Fractal compression is examined in depth to reveal how an iterative approach to image compression is implemented. The performance of these techniques is compared for a variety of landscape images, considering such parameters as data reduction ratios and information loss.
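
Since the paper surveys LZW, a minimal encoder sketch may help fix ideas. It emits raw dictionary codes and omits the code-width growth and dictionary-reset details that formats such as GIF and TIFF layer on top.

```python
def lzw_encode(data: bytes):
    """Minimal LZW encoder: returns a list of dictionary codes.

    The dictionary starts with all 256 single-byte strings and grows
    by one entry (current phrase + next byte) per emitted code.
    """
    table = {bytes([i]): i for i in range(256)}
    phrase, out = b"", []
    for byte in data:
        cand = phrase + bytes([byte])
        if cand in table:
            phrase = cand                 # keep extending the match
        else:
            out.append(table[phrase])     # emit code for longest known phrase
            table[cand] = len(table)      # register the new phrase
            phrase = bytes([byte])
    if phrase:
        out.append(table[phrase])
    return out
```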

41 citations


Proceedings ArticleDOI
17 Oct 1993
TL;DR: An image compression scheme based on variable resolution (VR) sensing is outlined, performance comparisons are made with other compression methods, and a prototype teleconferencing system based on VR is introduced.
Abstract: The requirements of systems using digitized computer images demand the use of image compression schemes. An image compression scheme based on variable resolution (VR) sensing is outlined. Performance comparisons are made with other compression methods, and a prototype teleconferencing system based on VR is introduced.

41 citations


Patent
23 Dec 1993
TL;DR: In this paper, a generator for generating an image signal representing a predetermined test image having a plurality of tone levels, an image forming device for forming the test image on a recording medium based on the signal, and a transferring device for transferring the image formed on the recording medium to a recording sheet are described, wherein a controller controls the generator, the image forming device, and the transferring device to correct the conversion data table based on density levels measured by a second measuring device.
Abstract: An image forming system includes: a generator for generating an image signal representing a predetermined test image having a plurality of tone levels; an image forming device for forming the predetermined test image on a recording medium based on the signal; a first measuring device for measuring density levels of the formed predetermined test image, corresponding to each of the plurality of tone levels of the image signal; a controller for determining characteristics of the change of density levels in the predetermined test image with respect to the change of tone levels in the image signal, based on the plurality of density levels measured by the first measuring device, and for making a conversion data table for converting tone levels of an input image signal in accordance with the characteristics; a designator for generating instructions designating a correction of the conversion data table; a transferring device for transferring the image formed on the recording medium to a recording sheet; and a second measuring device for measuring density levels of the image on the recording sheet. The controller controls the generator, the image forming device, and the transferring device to correct the conversion data table based on the density levels measured by the second measuring device.
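
A hedged sketch of the table-building step follows: given the tone levels of the test patches and the densities measured from the printed test image, it derives a 256-entry conversion table. The interpolation approach and the assumption that density varies monotonically with tone are mine, not the patent's.

```python
import numpy as np

def build_tone_lut(tone_levels, measured_density, target_density):
    """Build a 256-entry tone-conversion table from test-image readings.

    tone_levels / measured_density come from the printed test patches
    (tone_levels assumed increasing and spanning 0..255);
    target_density[i] is the density the system should produce for
    input tone i.
    """
    # density actually produced for every possible input tone (interpolated)
    produced = np.interp(np.arange(256), tone_levels, measured_density)
    # for each desired output density, find the input tone that yields it
    # (assumes produced density increases monotonically with tone)
    lut = np.interp(target_density, produced, np.arange(256))
    return np.clip(np.round(lut), 0, 255).astype(np.uint8)
```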

Proceedings ArticleDOI
15 Jun 1993
TL;DR: The authors describe how this contour can be used as an input to a recognition system that classifies the vehicles into five generic categories and the results are promising.
Abstract: A new approach to the extraction of the contour of a moving object is presented. The method is based on the integration of a motion segmentation technique using image subtraction and a color segmentation technique based on the split-and-merge algorithm. The advantages of this method are: it can detect large moving objects and extract their boundaries; the background can be arbitrarily complicated and contain many non-moving objects occluded by the moving object; and it requires only three image frames that need not be consecutive, provided that the object is entirely contained in each of the three frames. The method is applied to a large number of color images of vehicles moving on a road and a highway ramp. The results are promising. The moving object boundaries are correctly extracted in 66 out of 73 test image sequences. The authors describe how this contour can be used as an input to a recognition system that classifies the vehicles into five generic categories. Of the 73 vehicles, 67 are correctly classified.
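
The motion-segmentation half of the method rests on differencing three frames; a minimal sketch is below. The threshold value and the AND-combination of the two difference masks are assumptions; the paper integrates this coarse mask with split-and-merge colour segmentation before extracting the final contour.

```python
import numpy as np

def motion_mask(f1, f2, f3, thresh=15):
    """Coarse moving-object mask from three (not necessarily consecutive)
    grayscale frames via image subtraction."""
    d12 = np.abs(f2.astype(int) - f1.astype(int)) > thresh
    d23 = np.abs(f3.astype(int) - f2.astype(int)) > thresh
    # pixels that changed in both intervals belong to the object in f2
    return d12 & d23
```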

Patent
16 Aug 1993
TL;DR: In this article, a shuffling/deshuffling technique was proposed to equalize the information content of the data prior to compression, which divides the video image into a multitude of image representing blocks, and selects a predetermined number of the image blocks from different spatial locations in the image, to form a succession of data sets representative of the video information.
Abstract: In a data compression process such as employed to compress video or other data, it is preferable not to compress the image data representative of the video image in a sequential format, or to take the data from the same area of the image. To equalize the information content of the data prior to compression, the present shuffling/deshuffling technique divides the video image into a multitude of image representing blocks, and selects a predetermined number of the image blocks from different spatial locations in the image, to form a succession of data sets representative of the video image information. That is, the selection of the image representing blocks is such that the information content (complexity) in each data set is similar to the information content in each other data set and further similar to the average information content of the entire video image. Thus, the subsequent quantizing factor used in the compression process will tend to be similar for successive data sets, thereby reducing any distortion introduced by the compression process. The image representing blocks may be formed of sequentially scanned blocks of the video image, or of transform coefficients representing similar blocks of the video image. The shuffled data is deshuffled by the inverse process.
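
A toy version of the shuffling step might look like the following. The pseudo-random permutation is an assumption (the patent leaves the selection pattern open, requiring only that each data set draw blocks from spatially disparate locations), and deshuffling applies the inverse permutation.

```python
import numpy as np

def shuffle_blocks(image, bs=8, sets=5, seed=0):
    """Group image blocks into data sets drawn from disparate locations,
    so each set's complexity approximates the image-wide average.
    Image dimensions are assumed to be multiples of the block size."""
    h, w = image.shape
    blocks = [image[y:y + bs, x:x + bs]
              for y in range(0, h, bs) for x in range(0, w, bs)]
    order = np.random.default_rng(seed).permutation(len(blocks))
    # deal the shuffled blocks round-robin into the data sets
    return [[blocks[i] for i in order[s::sets]] for s in range(sets)]
```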

Journal ArticleDOI
TL;DR: A general approach to fractal coding, called the Bath fractal transform (BFT), is presented; by searching the image a hybrid of known methods is achieved, and the methods are evaluated in the polynomial case with a standard test image.
Abstract: A general approach to the fractal coding of images is presented, in which image blocks are represented by least squares approximations by fractal functions. Previously known fractal transforms are subsets of this method, which is called the Bath fractal transform (BFT). By searching the image, a hybrid of known methods is achieved, and they are evaluated in the polynomial case with a standard test image.

Patent
08 Sep 1993
TL;DR: In this article, a test image of each needle is acquired with the aid of an image sensor when the needle is moved out of its rest position for forming a stitch, and the analog image signals obtained on reading out the image sensor are converted to digital pixel signals, which are processed for recovering information on the state of the particular needle being imaged.
Abstract: For controlling the quality of the needles of a knitting machine a test image of each needle is acquired with the aid of an image sensor when the needle is moved out of its rest position for forming a stitch. The analog image signals obtained on reading out the image sensor are converted to digital pixel signals, and the digital pixel signals of the test image or of individual test zones of the test image are processed for recovering information on the state of the particular needle being imaged. By using a coarse resolution image sensor it is possible to detect major faults during the normal operation of the knitting machine at full working speed. Needles exhibiting wear and minor faults can be detected by using a high resolution image sensor during special inspection times in which the knitting machine is operated at reduced speed.

Patent
11 Feb 1993
TL;DR: In this paper, a fast off-line image processing method for radiographic images is disclosed wherein an image is decomposed into detail images at multiple resolution levels and a residual image, the detail images are modified up to a preset resolution level, and a processed image is reconstructed by means of the modified detail images and the residual image.
Abstract: A fast off-line image processing method for radiographic images is disclosed wherein an image is decomposed into detail images at multiple resolution levels and a residual image, detail images are modified up to a preset resolution level, and a processed image is reconstructed by means of the modified detail images and the residual image. Interactive processing is performed with different parameter settings.
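
A hedged sketch of the decompose/modify/reconstruct pipeline is shown below, using a plain Laplacian-pyramid variant with 2x2 block averaging; the actual decomposition filter is not specified in the abstract, and image sides are assumed divisible by 2^levels.

```python
import numpy as np

def decompose(image, levels=4):
    """Split an image into per-level detail images plus a residual."""
    details, cur = [], image.astype(float)
    for _ in range(levels):
        small = cur.reshape(cur.shape[0] // 2, 2, cur.shape[1] // 2, 2).mean(axis=(1, 3))
        up = small.repeat(2, axis=0).repeat(2, axis=1)
        details.append(cur - up)          # detail image at this level
        cur = small
    return details, cur                   # cur is the residual image

def reconstruct(details, residual, gains):
    """Re-assemble the image after boosting/attenuating chosen levels;
    gains[i] scales details[i] (gain 1.0 leaves a level unmodified)."""
    cur = residual
    for det, g in zip(reversed(details), reversed(gains)):
        cur = cur.repeat(2, axis=0).repeat(2, axis=1) + g * det
    return cur
```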

01 Feb 1993
TL;DR: Methods for fusing two computer vision methods are discussed and several example algorithms are presented to illustrate the variational method of fusing algorithms.
Abstract: Methods for fusing two computer vision methods are discussed and several example algorithms are presented to illustrate the variational method of fusing algorithms. The example algorithms seek to determine planet topography given two images taken from two different locations with two different lighting conditions. The algorithms each employ a single cost function that combines the computer vision methods of shape-from-shading and stereo in different ways. The algorithms are closely coupled and take into account all the constraints of the photo-topography problem. The algorithms are run on four synthetic test image sets of varying difficulty.

Proceedings ArticleDOI
T. Tada, Kohei Cho, Haruhisa Shimoda, Toshibumi Sakata, Shinichi Sobue
18 Aug 1993
TL;DR: It was determined that all the test satellite images could be compressed to at least 1/10 of the original data volume preserving high visual image quality.
Abstract: Image compression is a key technology to realize on-line satellite image transmission economically and quickly. Among various image compression algorithms, the JPEG algorithm is the international standard for still color image compression. In this study, various kinds of satellite images were compressed with the JPEG algorithm. The relation between compression ratio and image quality was evaluated. For the image quality evaluation, both subjective evaluation and objective evaluation were performed. It was determined that all the test satellite images could be compressed to at least 1/10 of the original data volume while preserving high visual image quality. The degradation of the spatial distribution quality of the compressed images was evaluated using the power spectra of the original and compressed images.
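
The compression-ratio versus quality measurement described here is easy to reproduce for any 8-bit test image with a standard JPEG codec; a sketch using Pillow follows (the quality settings and the PSNR objective measure are illustrative choices, not the study's exact protocol).

```python
import io
import numpy as np
from PIL import Image

def jpeg_ratio_vs_psnr(image, qualities=(90, 75, 50, 25, 10)):
    """Trade-off table of JPEG compression ratio against PSNR for one
    grayscale test image (a NumPy uint8 array)."""
    raw = image.size                      # bytes for an 8-bit image
    rows = []
    for q in qualities:
        buf = io.BytesIO()
        Image.fromarray(image).save(buf, format="JPEG", quality=q)
        size = buf.tell()
        buf.seek(0)
        dec = np.asarray(Image.open(buf), dtype=float)
        err = np.mean((image.astype(float) - dec) ** 2)
        psnr = 10 * np.log10(255 ** 2 / err) if err else float("inf")
        rows.append((q, raw / size, psnr))
    return rows                           # (quality, compression ratio, PSNR dB)
```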

Book ChapterDOI
01 Jan 1993
TL;DR: In this chapter, spatio-temporal subsampling is discussed as a data reduction technique; the standardized HDTV transmission systems MUSE and HD-MAC are completely based on this technique.
Abstract: A digital image sequence consists of a set of pixels, each describing the scene intensity at a specific location at a specific instant in time. In a natural scene these pixels are spatially and temporally correlated with each other. An image coding scheme utilizes these correlations in order to represent the image sequence more efficiently, in this way allowing for more cost-effective storage or transmission. In this chapter spatio-temporal subsampling will be discussed as a data reduction technique. Subsampling is in use as a data reduction method for the standardized HDTV transmission systems MUSE [1] and HD-MAC [2], which are completely based on this technique. Recent proposals use this technique in combination with transform coding [3][4].

Patent
01 Mar 1993
TL;DR: In this paper, a sample of tone values from the image is used to estimate the exposure or central tendency of the recorded image, which is then used to select a tone correction function used to process the image prior to printing, transmission, or CRT display.
Abstract: In a digital image scanning system and method, a sample of tone values from the image is used to estimate the exposure or central tendency of the recorded image. The estimate is then used to select a tone correction function used to process the image prior to printing, transmission, or CRT display.

Proceedings ArticleDOI
23 May 1993
TL;DR: Subjective assessment test results indicate that the mean opinion score (MOS) linearly decreases with an increasing number of consecutive errored scanning lines in log-scale, and that the subjective quality of an English test image is less affected by the same scanning line errors than a Japanese image.
Abstract: Methods for measuring facsimile image quality are needed to evaluate facsimile quality of service. Three subjective evaluation measures, i.e., mean opinion score (MOS), readable rate and retransmission request rate, are examined. These measures assess high-resolution facsimile images degraded by scanning line errors as a function of image characteristics (the language and character size of the text). Subjective assessment test results indicate that the MOS linearly decreases with an increasing number of consecutive errored scanning lines in log-scale, that the MOS for images including three or more consecutive errored scanning lines tends to deteriorate considerably with an increase in error events per sheet, and that the subjective quality of an English test image is less affected by the same scanning line errors than that of a Japanese image. The relationships among test results are examined, and the application of the evaluation methods is discussed.

Proceedings ArticleDOI
08 Sep 1993
TL;DR: In this paper, a debinarization technique is used to approximate the original continuous-tone image, and then the color components of the reconstructed image are then compressed using standard lossy compression techniques.
Abstract: Many digital display systems economize by rendering color images with the use of a limited palette. Palettized images differ from continuous-tone images in two important ways: they are less continuous due to their use of lookup table indices instead of physical intensity values, and pixel values may be dithered for better color rendition. These image characteristics reduce the spatial continuity of the image, leading to high bit rates and low image quality when compressing these images using a conventional lossy coder. We present an algorithm that uses a debinarization technique to approximate the original continuous-tone image, before palettization. The color components of the reconstructed image are then compressed using standard lossy compression techniques. The decoded images must be color quantized to obtain a palettized image. We compare our results with a second algorithm that applies a combination of lossy and lossless compression directly to the color quantized image in order to avoid color quantization after decoding.

01 Dec 1993
TL;DR: In this paper, the authors investigated various parameters (metrics) that can be automatically extracted from a digital image and tested how well they correlated with image quality, including Fourier spectral and histogram shape parameters.
Abstract: Digital image reconstruction tasks currently require human intervention for a subjective evaluation of image quality. A method for unsupervised measurement of digital image quality is desired. This research investigated various parameters (metrics) that can be automatically extracted from a digital image and tested how well they correlated with image quality. Specifically, images of orbiting satellites captured by a partially compensated adaptive optics telescope were dealt with. Two different types of quantities were investigated: (1) Fourier spectral parameters, based on the spatial-frequency sensitivities of the HVS; and (2) histogram shape parameters (i.e., image statistical moments) giving quantitative insight into the structural content, information content, and brightness distribution of an image. An atmospheric imaging simulator was used to generate a test database of images. The use of simulated imagery allowed precise control of the imaging parameters directly relating to image quality: (1) root mean square error; (2) seeing conditions (Fried parameter, r0); and (3) target magnitude. This in turn allowed quantitative testing of candidate image quality metrics. Metrics could also be tested against the user-defined parameters of the reconstruction process, as a proof-of-concept for totally unsupervised image reconstruction. Finally, based on this testing, two successful image quality metrics are recommended.
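
The histogram-shape family of metrics is straightforward to compute; a sketch of the first four statistical moments of the gray-level distribution is given below (which specific moments the study used beyond these is not stated in the abstract).

```python
import numpy as np

def histogram_shape_metrics(image):
    """First statistical moments of an image's gray-level distribution,
    usable as simple histogram-shape quality metrics."""
    g = image.astype(float).ravel()
    mu, sigma = g.mean(), g.std()
    z = (g - mu) / sigma
    return {
        "mean": mu,                   # overall brightness
        "variance": sigma ** 2,       # global contrast
        "skewness": np.mean(z ** 3),  # asymmetry of the distribution
        "kurtosis": np.mean(z ** 4),  # tail weight / peakedness
    }
```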

Patent
20 Oct 1993
TL;DR: In this paper, the gray balance of a color copying apparatus is adjusted by generating three sets of test signals respectively representing red, green and blue components of a test image, and the test image is modified by operating on each set of signals with a respective characteristic line.
Abstract: The gray balance of a color copying apparatus is adjusted by generating three sets of test signals respectively representing red, green and blue components of a test image. The test image is modified by operating on each set of signals with a respective characteristic line. The characteristic lines are stored in look-up tables constituting part of the apparatus. Each characteristic line has a first point corresponding to a first location of the modified test image and a second point corresponding to a second location of the modified test image. A first copy of the test image is made and the red, green and blue densities of the copy at the first location are determined. If any of these densities deviates from a reference value, the corresponding characteristic line is adjusted so that the respective density assumes the reference value. A second copy of the test image is then made and the red, green and blue densities of the copy at the second location are determined. In the event that one or more of the densities at the second location deviates from a reference value, the associated characteristic line or lines are further adjusted in such a manner that the first point of each characteristic line remains unchanged.

Patent
30 Aug 1993
TL;DR: In this paper, a plurality of successive pixels of a template and corresponding pixels of an input image inputted into a general purpose image processing hardware are calculated by the hardware between the corresponding pixels, and written into a work memory.
Abstract: A plurality of successive pixels of a template and corresponding pixels of an input image inputted into a general purpose image processing hardware. A summation of differences is calculated by the hardware between the corresponding pixels of template and input image, and written into a work memory. An image verification is performed by integrating successive results of the summation for the total area to be verified in high speed utilizing a general purpose image processing hardware, without using a special purpose image processing hardware.
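
A direct software analogue of the patent's summation-of-differences verification is sketched below; exhaustive search over all offsets and the absolute-difference measure are assumptions consistent with, but not guaranteed by, the abstract.

```python
import numpy as np

def sad_match(image, template):
    """Exhaustive template verification by summed absolute differences (SAD).

    At each offset the per-pixel |template - image| differences are
    summed (the accumulate-in-work-memory step); the lowest-SAD
    position is returned as the verified match.
    """
    ih, iw = image.shape
    th, tw = template.shape
    t = template.astype(int)
    best, best_pos = None, None
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            sad = np.abs(image[y:y + th, x:x + tw].astype(int) - t).sum()
            if best is None or sad < best:
                best, best_pos = sad, (y, x)
    return best_pos, best
```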

Journal ArticleDOI
TL;DR: Two statistical methods called the "runs test" and the "join-count statistic" are used to measure the noise level in a digital image, whose gray-level dynamic range is limited by the noise it contains.
Abstract: The dynamic range of the gray level of a digital image is limited by the noise it contains. Two statistical methods called the "runs test" and the "join-count statistic" are used to measure the noise level in a digital image. A residual image is formed by subtracting an original image from its smoothed version. Theoretically, the noise level in the residual image should be identical to that in the original image. The noise level is determined by examining each bit plane of the residual image individually, starting from the least significant bit up to the bit plane whose statistic does not show a random pattern. Images from three digital modalities (computerized tomography, magnetic resonance, and computed radiography) are used to evaluate the gray-level dynamic range. Both methods are easy to implement and fast to perform.
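
A hedged sketch of the runs-test half of the method follows: form the residual image, then scan bit planes from the LSB upward until one fails a randomness test. The five-point smoother, the wrap into 8-bit planes, and the z-score threshold are assumptions; the paper's join-count statistic is not shown here.

```python
import numpy as np

def runs_z_score(bits):
    """Wald-Wolfowitz runs test on a binary sequence: z near 0 means the
    sequence is consistent with randomness (i.e., the bit plane is noise)."""
    n1, n0 = int(bits.sum()), int((1 - bits).sum())
    if n1 == 0 or n0 == 0:
        return 0.0
    runs = 1 + int(np.count_nonzero(np.diff(bits)))
    mu = 2 * n1 * n0 / (n1 + n0) + 1
    var = (mu - 1) * (mu - 2) / (n1 + n0 - 1)
    return 0.0 if var <= 0 else (runs - mu) / np.sqrt(var)

def noisy_bit_planes(image, z_cut=3.0):
    """Count noise bit planes of the residual (original minus smoothed)
    image, scanning from the least significant bit upward."""
    img = image.astype(int)
    # crude 5-point box smoother standing in for the paper's smoothing
    sm = (img + np.roll(img, 1, 0) + np.roll(img, -1, 0)
              + np.roll(img, 1, 1) + np.roll(img, -1, 1)) // 5
    resid = (img - sm) & 0xFF             # residual wrapped into 8-bit planes
    planes = 0
    for b in range(8):
        bits = ((resid >> b) & 1).ravel()
        if abs(runs_z_score(bits)) > z_cut:   # structured pattern, not noise
            break
        planes += 1
    return planes
```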

Proceedings ArticleDOI
17 May 1993
TL;DR: Kohonen's self-organizing feature map (SOFM) has been used to compress several still monochrome images by a compression ratio of 16:1, at a rate of 0.5 bit per pixel, while maintaining a peak signal-to-noise ratio (PSNR) of about 30 dB.
Abstract: The authors present a study and implementation of still image compression using learned vector quantization (LVQ). Kohonen's self-organizing feature map (SOFM) has been used to compress several still monochrome images by a compression ratio of 16:1, at a rate of 0.5 bit per pixel, while maintaining a peak signal-to-noise ratio (PSNR) of about 30 dB. C programs were written to implement learning, compressing, decompressing, error analysis, and other functions of the VQ method. These programs were run on the SUN SPARCstation 2. Methods for optimizing learning are presented. Given an image that is subjectively similar to the training image, and if the histogram of the test image is a subset of the histogram of the training image, then quantization of the test image will produce results comparable with the PSNR achieved by the training image.
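
A compact stand-in for the SOFM codebook training and block encoding is sketched below; the learning-rate and neighbourhood schedules are assumptions. With 4x4 blocks of an 8-bit image and a 256-entry codebook, each 128-bit block is replaced by one 8-bit index, giving the 16:1 ratio (0.5 bit per pixel) quoted above.

```python
import numpy as np

def train_sofm_codebook(train_img, bs=4, codes=256, epochs=4, seed=0):
    """Train a 1-D self-organising feature map as a VQ codebook over
    bs x bs image blocks."""
    h, w = train_img.shape
    vecs = np.array([train_img[y:y + bs, x:x + bs].ravel()
                     for y in range(0, h - bs + 1, bs)
                     for x in range(0, w - bs + 1, bs)], dtype=float)
    rng = np.random.default_rng(seed)
    book = vecs[rng.choice(len(vecs), codes)]      # initialise from the data
    for e in range(epochs):
        lr = 0.5 * (1 - e / epochs)                # decaying learning rate
        radius = max(1, codes // (2 ** (e + 2)))   # shrinking neighbourhood
        for v in vecs[rng.permutation(len(vecs))]:
            win = np.argmin(((book - v) ** 2).sum(axis=1))
            lo, hi = max(0, win - radius), min(codes, win + radius + 1)
            book[lo:hi] += lr * (v - book[lo:hi])  # pull winner and neighbours
    return book

def vq_encode(img, book, bs=4):
    """Compress: replace each block by its nearest codebook index."""
    h, w = img.shape
    return np.array([[np.argmin(((book - img[y:y + bs, x:x + bs].ravel()) ** 2).sum(axis=1))
                      for x in range(0, w - bs + 1, bs)]
                     for y in range(0, h - bs + 1, bs)], dtype=np.uint8)
```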

Patent
Kia Silverbrook
28 Apr 1993
TL;DR: In this article, an image processing apparatus for creating a color image by combining object image data with scanned image data includes a scanner for inputting image data and an object image input device to input image data.
Abstract: An image processing apparatus for creating a color image by combining object image data with scanned image data includes a scanner for inputting image data and object image input device for inputting object image data. Host processor selects object image data from the input object image data and specifies parameters for editing the selected object image data to create edited object image data. Real-time processor edits the selected object image data according to the parameters specified by the host processor and combines the edited object imaged data with the scanned image data to create combined image data. An output device outputs the combined imaged data to form an image representing a combination of the scanned image data and the object image data.

Journal ArticleDOI
01 Apr 1993
TL;DR: This multiprocessor system performs convolution operations such as spatial filtering, contrast enhancement, and binarization for gray-level images; thinning, thickening, pattern matching, etc. for binary images; and image quality improvement for moving images such as TV images.
Abstract: This paper describes an image processing system using Image Signal Multiprocessors (ISMPs) adapted to gray-level image preprocessing for image analysis and image enhancement. It is composed of four ISMPs, five 1H delay lines, two 512×512×8-bit frame memories, a video timing controller (VTC), two 256-word×8-bit×8-table look-up tables (LUTs), and 80 ns/sampling A/D and D/A converters. This multiprocessor system performs convolution operations such as spatial filtering, contrast enhancement, and binarization for gray-level images; thinning, thickening, pattern matching, etc. for binary images; and image quality improvement for moving images such as TV images. It also performs feature extraction operations such as area calculations, fillet coordinates, and moment calculations for objective image data. Moreover, the system is capable of color image processing by using a multiboard system.

Patent
16 Jul 1993
TL;DR: In this article, a process for producing a synthesised reference image from image construction data stored in a database is presented; it consists in acquiring a real image of an object to be modelled and converting (51) the image construction data into a binary image.
Abstract: A process for producing a synthesised reference image from image construction data stored in a database, with means for processing these image construction data. The process consists in acquiring a real image of the object to be modelled and converting (51) the image construction data into a binary image. It furthermore consists in: processing (53) the binary image to produce a morphological image; processing (55) the morphological image to obtain a level image, in which pixels of like luminous intensity in the binary image are grouped into level curves; matching (57) the pixels of the level image with pixels of the real image of the object to be modelled; and sampling (59) the level image so as to obtain the synthesised reference image.