
Showing papers on "Standard test image published in 1989"


Patent
24 Apr 1989
TL;DR: In this article, a system for matching images is presented in which characteristic points of an image to be tested for a match, such as a fingerprint, are compared with characteristic points of a master image by attempting to match the distances between pairs of master characteristic points with the distances between pairs of live characteristic points, so that the test image need not be aligned with the coordinate system of the master image; the matching system can be implemented in an identification mode, in which the live image is matched against each of a number of master images.
Abstract: A system for matching images in which characteristic points of an image to be tested for a match, such as a fingerprint, are compared with characteristic points of a master image by attempting to match the distances between pairs of master characteristic points with distances between pairs of live characteristic points, whereby the coordinate system of the test image is not required to be aligned with the coordinate system of the master image. The matching system can be implemented in an identification mode in which the live image is attempted to be matched with each of a number of master images, or a verification mode in which the live image is attempted to be matched with a master image that is purported to be the same as the live image.
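The pairwise-distance idea is easy to sketch. Below is an illustrative toy version (function names, point sets, and the tolerance are mine, not the patent's): because distances between point pairs are invariant to translation and rotation, two point sets can be scored without aligning their coordinate systems.

```python
import itertools
import math

def pair_distances(points):
    """Distances between all pairs of characteristic points."""
    return [math.dist(p, q) for p, q in itertools.combinations(points, 2)]

def match_score(master, live, tol=0.5):
    """Fraction of master pair-distances matched by some live pair-distance.
    Invariant to translation and rotation, so no coordinate alignment is needed."""
    live_d = pair_distances(live)
    dists = pair_distances(master)
    hits = sum(1 for d in dists if any(abs(d - ld) <= tol for ld in live_d))
    return hits / len(dists) if dists else 0.0

master = [(0, 0), (3, 0), (0, 4)]       # a 3-4-5 triangle of characteristic points
live = [(10, 10), (10, 13), (6, 10)]    # the same triangle, translated and rotated
print(match_score(master, live))        # → 1.0
```

In the patent's identification mode one would compute this score against every master image and keep the best; in verification mode, against the single master the live image purports to be.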

132 citations


Journal ArticleDOI
TL;DR: A geometrical approach based on mathematical morphology is proposed to remove speckle noise in coherent imagery, and two simple sequences based on the general theory of Alternating Sequential Filters are presented.

39 citations


Patent
14 Sep 1989
TL;DR: In this paper, a method and apparatus for generating a pattern for making rugs corresponding to a predefined image is presented, where a video camera is used to capture one frame of the video image and the captured image is isolated and compressed by a CPU into a compressed image and having a limited number of subareas, each subarea having a color.
Abstract: A method and apparatus for generating a pattern for making rugs corresponding to a predefined image. The predefined image is scanned by a video camera to generate a video image corresponding to the predefined image. A frame grabber captures one frame of the video image. The captured image is isolated and compressed by a CPU into a compressed image corresponding to the video image and having a limited number of subareas, each subarea having a color. The CPU compares the color of each subarea to the colors in a look-up table to define a 48-color image. A printer creates from the 48-color image the pattern corresponding to the predefined image and an inventory identifying the numbers and colors of yarns needed to make a rug from the pattern.

35 citations


Patent
03 Nov 1989
TL;DR: In this article, a digital color copying machine comprising a test mode is disclosed, in which image data corresponding to a partial area of an original document indicated is stored in a RAM, and thereafter, the image data stored in the RAM is read out repeatedly, and plural test images for which the color correction is made with different color balances respectively are formed as mosaic monitor images on a recording medium.
Abstract: A digital color copying machine comprising a test mode is disclosed. In the test mode, image data corresponding to an indicated partial area of an original document is stored in a RAM; thereafter, the image data stored in the RAM is read out repeatedly, and plural test images, color-corrected with different color balances respectively, are formed as mosaic monitor images on a recording medium. Then one of the plural test images is selected, and a copy of the document having the color balance of the selected test image is produced. The state of the selected color balance is displayed on a display section.

34 citations


Journal ArticleDOI
TL;DR: The supposed drawback of the two-pass algorithm can be nullified by near-perfect interpolation, at least in the case of rotation, while a major bonus is the greater ease with which interpolation by the FFT may be implemented, in theTwo-pass case, leading to the possibility of highly faithful geometric transformation in practice, aided by the increasing availability of fast DSP and FFT microcircuits.
Abstract: Two-pass image geometric transformation algorithms, in which an image is resampled first in one dimension, forming an intermediate image, then in the resulting orthogonal dimension, have many computational advantages over traditional, one-pass algorithms. For example, interpolation and anti-aliasing are easier to implement, being 1-dimensional operations; computer memory requirements are greatly reduced, with access to image data in external memory regularized; while pipelined parallel computation is greatly simplified. An apparent drawback of the two-pass algorithm which has tended to limit its universal adoption is a reported corruption at high spatial frequencies due to apparent undersampling, in certain cases, in the necessary intermediate image. This experimental study set out to resolve the question of possible corruption by computing the mean-square error when a sinusoidal grating test image is rotated, either by an efficient two-pass algorithm or by a traditional one-pass algorithm. It was found that the method used for interpolation has a major effect on the accuracy of the result, poorer methods accentuating differences between the two algorithms. A totally unexpected and fortuitous result is that, by using near-perfect interpolation (e.g., by the FFT), the two-pass algorithm is almost as accurate as one pleases, for rotations up to 45°, to very close to the Nyquist limit (as also is the one-pass algorithm, with near-perfect interpolation). For rotations of φ > 45°, the two-pass algorithm breaks down before the Nyquist limit, but these can be replaced by rotations of 90° - φ and transposition. 
Thus, the supposed drawback of the two-pass algorithm can be nullified by near-perfect interpolation, at least in the case of rotation. A major bonus is the greater ease with which interpolation by the FFT may be implemented in the two-pass case, leading to the possibility of highly faithful geometric transformation in practice, aided by the increasing availability of fast DSP and FFT microcircuits.
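One concrete two-pass factorization of a rotation is the Catmull–Smith decomposition (the abstract does not commit to a specific decomposition, so this is a sketch of the general idea): a horizontal 1-D pass, then a vertical 1-D pass over the intermediate image, composing to the full 2-D rotation.

```python
import math

def two_pass(x, y, a):
    """Catmull–Smith two-pass rotation of a point, as two 1-D maps."""
    xp = x * math.cos(a) - y * math.sin(a)   # pass 1: along rows (y fixed)
    yp = xp * math.tan(a) + y / math.cos(a)  # pass 2: along columns (x' fixed)
    return xp, yp

def one_pass(x, y, a):
    """Direct one-pass 2-D rotation, for comparison."""
    return (x * math.cos(a) - y * math.sin(a),
            x * math.sin(a) + y * math.cos(a))

a = math.radians(30)
for x, y in [(1.0, 0.0), (0.0, 1.0), (2.5, -1.5)]:
    assert all(abs(u - v) < 1e-12
               for u, v in zip(two_pass(x, y, a), one_pass(x, y, a)))
```

The 1/cos φ factor in the second pass also makes the intermediate image's compression visible: it grows without bound past 45°, consistent with the reported breakdown and the 90° − φ plus transposition workaround.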

27 citations


Patent
Hiroyuki Ichikawa
14 Apr 1989
TL;DR: In this article, an image processing apparatus is described, consisting of an image scanner such as a CCD to digitally read out the image data from a document, a discriminating unit to discriminate whether the input image data is halftone image data such as a photograph or single-density image data, and a magnification change unit to change the magnification of the image data.
Abstract: There is an image processing apparatus such as a copying apparatus for processing an input image data. This apparatus comprises: an image scanner such as a CCD to digitally read out the image data from a document; a discriminating unit to discriminate whether the input image data is the halftone image data such as a photograph or single density image data such as characters or symbols; a magnification change unit to change the magnification of the image data; and a smoothing unit to smooth the image data in the case where the discriminating unit decides that the input image data is the halftone image data when the magnification change unit performs the magnification changing process. The discriminating unit executes the above discrimination on the basis of the density levels of or density difference between a target pixel and its peripheral pixels in the input image data. With this apparatus, the magnification of the halftone image can be smoothly changed.

23 citations


Barry G. Haskell
01 Feb 1989
TL;DR: Integrated Services Digital Network (ISDN); coding for color TV, video conferencing, videoconferencing/telephone, and still color images; ISO color image coding standard; and ISO still picture standard are briefly discussed.
Abstract: Integrated Services Digital Network (ISDN); coding for color TV, video conferencing, video conferencing/telephone, and still color images; ISO color image coding standard; and ISO still picture standard are briefly discussed. This presentation is represented by viewgraphs only.

16 citations


Patent
07 Feb 1989
TL;DR: In this paper, a system for comparing a subject image against a reference image for determining the closeness of match, or against a plurality of reference images, was proposed, where the reference image which achieves a preselected minimum or maximum of transmitted light for each such comparison is selected as the image which most closely matches the subject image.
Abstract: A system for comparing a subject image against a reference image for determining the closeness of match, or against a plurality of reference images for determining the one of the reference images which corresponds to the closest match. The closest match is determined in response to the extrema of light transmitted through complementary versions of the images. More specifically, the present system compares the true subject image against the complement of each reference image, and the complement of the subject image against each true reference image. Alternatively, the true subject image is compared against each true reference image, and the complement of the subject image against each complement reference image. The particular reference image which achieves a preselected minimum or maximum of transmitted light for each such comparison is selected as the image which most closely matches the subject image. The comparison scheme is useful in comparing very large images by comparing only preselected image portions at one time. The comparisons can be achieved using transparencies of the subject and reference images, and their complements. Alternatively, the comparisons can be performed on a pixel-by-pixel basis, using arbitrarily small pixels, in a video embodiment. Any combination of the subject and the reference images can be generated as video images, either from a recording or in real time, using the outputs of video cameras.
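A digital analogue of the complementary-transmission comparison can be sketched in a few lines (names and images are illustrative, not from the patent): light passes only where one image is bright and the other's complement is clear, so the summed transmission is zero exactly when subject and reference agree.

```python
import numpy as np

def mismatch(subject, reference):
    """Light through (true subject x complement reference) plus
    (complement subject x true reference); zero iff the images agree."""
    s = subject.astype(float)
    r = reference.astype(float)
    return float((s * (1 - r)).sum() + ((1 - s) * r).sum())

subject = np.array([[1, 0], [0, 1]])
refs = {"match": subject.copy(), "other": np.array([[1, 1], [0, 0]])}
best = min(refs, key=lambda k: mismatch(subject, refs[k]))
print(best)  # → match
```

Picking the reference with the extremal transmitted light, as in the patent, reduces here to taking the minimum of this mismatch over the reference set.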

16 citations


Patent
03 Mar 1989
TL;DR: An apparatus for correcting deviations in image linewidth in an image projection system (10) having a laser (12) and liquid crystal cell (16), wherein images are made on the liquid crystal cell by impinging the laser (12) on the cell, as discussed by the authors.
Abstract: An apparatus for correcting deviations in image linewidth in an image projection system (10) having a laser (12) and liquid crystal cell (16) wherein images are made on the liquid crystal cell (16) by impinging the laser (12) on the cell (16). The apparatus also has means for creating a test image having a plurality of lines (21) of specific width on a liquid crystal cell and calculating the difference between the linewidth of the plurality of lines of said test image as created and as expected. Furthermore, apparatus (34, 36, 38, 40) are provided for modifying the period of the laser light (42) based on said difference in linewidth between the plurality of lines (as created and as expected) to correct deviations in image linewidth, whereby deviations in image linewidth are substantially eliminated.

14 citations


Proceedings ArticleDOI
15 Aug 1989
TL;DR: A region-growing-based segmentation technique that incorporates human visual system properties is presented, and the use of this technique in image compression is described, and some experimental results are presented.
Abstract: Many image compression techniques involve segmentation of a gray level image. With such techniques, information is extracted that describes the regions in the segmented image, and this information is then used to form a coded version of the image. In this paper we present a region-growing-based segmentation technique that incorporates human visual system properties, and describe the use of this technique in image compression. We also discuss the effect of requantizing a segmented image. Requantization of a segmented image is useful because it can lead to a reduction in the number of bits required to code the description of the regions in the segmented image. This results in a lower data rate. We show that the number of gray levels in a segmented image can be reduced by a factor of at least twelve, without noticeable degradation in the quality of the segmented image. This result is attributable to human visual system properties having to do with contrast sensitivity, and to the fact that requantization of a segmented image does not usually reduce significantly the number of distinct segments in the image. In addition, in this paper we explore the relationship between the number of segments in an image, and the extent of requantization possible before noticeable degradation occurs in the image. Finally, we discuss the impact of the above results on image compression algorithms, and present some experimental results.
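A minimal region-growing pass can be sketched as follows, with a simple gray-level tolerance standing in for the paper's visual-system-based criterion (so this shows only the structure of the technique, not the paper's actual homogeneity test):

```python
from collections import deque

import numpy as np

def grow_region(img, seed, tol=10):
    """Grow a region from `seed`, absorbing 4-neighbours whose gray level
    is within `tol` of the seed value. Returns a boolean region mask."""
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    q = deque([seed])
    seed_val = int(img[seed])
    while q:
        y, x = q.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w and not mask[ny, nx]
                    and abs(int(img[ny, nx]) - seed_val) <= tol):
                mask[ny, nx] = True
                q.append((ny, nx))
    return mask

img = np.array([[10, 12, 200],
                [11, 13, 210],
                [10, 220, 230]], dtype=np.uint8)
region = grow_region(img, (0, 0))
print(region.sum())  # → 5 pixels in the dark region
```

The requantization step the paper discusses would then coarsen each region's representative gray level, e.g. to one of 256/12 ≈ 21 levels, without changing the region map itself.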

13 citations


Patent
07 Nov 1989
TL;DR: In this article, an image processing and display system employs a high speed image LAN (31) for transferring image data separate from control data, where images are initially provided to the system from an image archive in a compressed manner.
Abstract: An image processing and display system employs a high speed image LAN (31) for transferring image data separate from control data. A general purpose LAN (21) carries graphics data and control data of the system. One more image display controller (25) than display monitor (23) is used to provide off-line processing of images prior to display of the images. An analog cross-bar switch (45) provides the necessary connections between image display controllers (25) and the display monitors (23) to provide the desired views of selected images. Images are initially provided to the system from an image archive in a compressed manner. The images are decompressed and held in a local cache (35). With respect to each word of image data, different bytes of the word are compressed by different compression schemes to optimize transfer time of image data. A method is employed for transparently sharing a high performance image archive processor among various workstations of display monitors (23).

Proceedings ArticleDOI
08 May 1989
TL;DR: In this paper, a simple method has been developed to set and track the brightness of the cathode ray tubes used to display radiological images in a hospital environment, using a computer-generated test image (SMPTE).
Abstract: The AT&T CommView® image management and communication system (IMACS) at Georgetown University supports multiple workstations for diagnosis, review, and research. Consistency of the perceived brightness of displayed images throughout the network in the hospital is a critical issue. A simple method has been developed to set and track the brightness of the cathode ray tubes used to display radiological images. A computer-generated test image (SMPTE) is used. This consists of 11 brightness patches equally spaced over the dynamic range of CRT intensity driving levels. This test image is displayed and viewed in the same manner as patient images. An inexpensive hand-held photometer calibrated in ft-lamberts is used to measure the brightness patches of the test images.
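The gray-step portion of such a test image is simple to generate; a sketch (the 32-pixel patch size and 8-bit driving range are assumptions, not taken from the paper):

```python
import numpy as np

def brightness_patches(levels=11, max_level=255, patch=32):
    """A strip of `levels` square patches equally spaced over the CRT
    driving-level range, mimicking the gray steps of the SMPTE pattern."""
    values = np.linspace(0, max_level, levels).round().astype(np.uint8)
    return np.hstack([np.full((patch, patch), v, np.uint8) for v in values])

strip = brightness_patches()
print(strip.shape)  # → (32, 352), i.e. 11 patches of 32x32
```

Displaying this strip and reading each patch with a photometer, as the paper does, gives one luminance sample per driving level, from which the CRT's brightness setting can be tracked over time.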

01 Feb 1989
TL;DR: In this article, an image segmentation based compression technique is applied to LANDSAT Thematic Mapper (TM) and Nimbus-7 Coastal Zone Color Scanner (CZCS) data.
Abstract: A case study is presented where an image segmentation based compression technique is applied to LANDSAT Thematic Mapper (TM) and Nimbus-7 Coastal Zone Color Scanner (CZCS) data. The compression technique, called Spatially Constrained Clustering (SCC), can be regarded as an adaptive vector quantization approach. The SCC can be applied to either single or multiple spectral bands of image data. The segmented image resulting from SCC is encoded in small rectangular blocks, with the codebook varying from block to block. The lossless compression potential (LCP) of sample TM and CZCS images is evaluated. For the TM test image, the LCP is 2.79. For the CZCS test image the LCP is 1.89, though when only a cloud-free section of the image is considered the LCP increases to 3.48. Examples of compressed images are shown at several compression ratios ranging from 4 to 15. In the case of TM data, the compressed data are classified using the Bayes' classifier. The results show an improvement in the similarity between the classification results and ground truth when compressed data are used, thus showing that compression is, in fact, a useful first step in the analysis.

Journal ArticleDOI
TL;DR: A technique is presented for coding images that are bilevel in nature but have been captured in continuous-tone format, and compressed to about 0.1 to 0.2 b/pixel.
Abstract: A technique is presented for coding images that are bilevel in nature but have been captured in continuous-tone format. Following various stages of image processing, a three-level image is generated, and compressed to about 0.1 to 0.2 b/pixel. The technique has been implemented in the IBM freeze-frame videoconferencing system.

Journal Article
TL;DR: Quality control in computed tomography should be limited to a few simple but essential checks for image quality and radiation burden, which can only be fulfilled in part, because of the complexity of CT systems.
Abstract: In comparison to the constancy checks in conventional roentgenography, quality control in computed tomography (CT) should be limited to a few simple but essential checks for image quality and radiation burden. These requirements, however, can only be fulfilled in part, because of the complexity of CT systems. Possible parameters are: water and air values, pixel noise/contrast resolution, spatial resolution, artifacts, homogeneity, contrast scale/tube voltage, slice thickness, positioning accuracy, image quality of the topogram, radiation dose and film imaging. With a simple test program, comprising 4 CT-scans and a camera test image that is presently being tested, most of the mentioned quantities can be checked.

Proceedings ArticleDOI
23 May 1989
TL;DR: The key performance parameters achieved in this design, in addition to a high packing density of sensing elements with a unique hexagonal shape, include high signal uniformity, low dark current, good light sensitivity, high blooming overload protection, and no image smear.
Abstract: A new device architecture was developed for building high-performance and high-resolution image sensors suitable for consumer TV camera applications. The sensor elements employed in this architecture are junction field-effect transistors that are organized into an array with their gates floating and capacitively coupled to common horizontal address lines. The photogenerated signal is sampled one line at a time, processed to remove the element-to-element nonuniformities, and stored in a buffer for subsequent readout. The described concept, which includes an intrinsic exposure control, is demonstrated on a test image sensor that has an 8-mm sensing area diagonal and 580(H) x 488(V) picture sensing elements. The key performance parameters achieved in this design, in addition to a high packing density of sensing elements with a unique hexagonal shape, include high signal uniformity, low dark current, good light sensitivity, high blooming overload protection, and no image smear.



Proceedings ArticleDOI
23 May 1989
TL;DR: A method of improving the image quality of video hard copies by performing bivariate quadratic spline interpolation on a digital image and using the impulse response of the interpolation to design a fast digital filter for its implementation.
Abstract: A method of improving the image quality of video hard copies is proposed. This method performs bivariate quadratic spline interpolation on a digital image. The impulse response of the interpolation is derived and used to design a fast digital filter for its implementation. Simulation for an input image with a resolution of 200 TV lines shows that the method produces an output that is psychovisually equivalent to one with a resolution of 300 TV lines. Thus the image impression can be improved to about 1.5 times that of the input image. It is noted that this method is as effective for image processing tasks such as enlargement in medical image diagnosis.

Proceedings ArticleDOI
15 Aug 1989
TL;DR: It is shown that when mean-square error is used to determine the performance of image compression algorithms, in particular vector quantization algorithms, the mean-square error measurement is dependent upon the data type of the digitized images.
Abstract: We show that when mean-square error is used to determine the performance of image compression algorithms, in particular vector quantization algorithms, the mean-square error measurement is dependent upon the data type of the digitized images. When using vector quantization, the possibility exists for encoding images of one type with code books of another type; we show that this cross-encoding has an adverse effect on performance. Thus, when making comparative evaluations of different vector quantization compression techniques, one must be careful to document the data type used in both the code book and the test image data. We also show that when mean-square error measurements are made in the perceptual space of a human visual model, the distortion measurements correlate more with subjective image evaluation than when the distortions are calculated in other spaces. We use a monochrome visual model to improve the quality of vector quantized images, but our preliminary results indicate that in general, the performance of the model is dependent upon the type of data and the coding method used.
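The cross-encoding effect can be illustrated with a toy vector quantizer (the blocks and both codebooks are fabricated stand-ins for two "data types"; real codebooks would be trained, e.g. by LBG):

```python
import numpy as np

def vq_encode(blocks, codebook):
    """Replace each block by its nearest codeword (Euclidean distance)."""
    d = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return codebook[d.argmin(axis=1)]

def mse(a, b):
    return float(((a - b) ** 2).mean())

# Toy 1x4 "blocks" of one data type, and codebooks of two different types.
blocks = np.array([[100, 101,  99, 100],
                   [102, 100, 100,  98],
                   [ 97, 100, 103, 100]], float)
cb_same  = np.array([[100, 100, 100, 100], [ 98, 102,  98, 102]], float)
cb_cross = np.array([[  0, 255,   0, 255], [255,   0, 255,   0]], float)

# Encoding with the wrong type's codebook inflates mean-square error.
assert mse(blocks, vq_encode(blocks, cb_same)) < mse(blocks, vq_encode(blocks, cb_cross))
```

This is the comparison the paper warns about: the measured mean-square error reflects the codebook/data pairing as much as the quantizer itself.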

Proceedings ArticleDOI
27 Mar 1989
TL;DR: Preliminary results suggest that compression ratios of 10-15 to 1 were readily achievable for projection radiographs with no subjective loss of image quality and in some pixels it was found advantageous to code by retaining only the sign bit.
Abstract: The Hartley and Cas-Cas transforms, which are discrete and real-to-real, are compared in terms of times taken, compression ratios and quality of the reconstructed image, for the purpose of comparing their suitability for transform domain compression applications. A thresholding method for quantization is introduced where threshold values are adaptively selected according to the correlation factor of each block of equally divided blocks in the image. Additionally, in some pixels it was found advantageous to code by retaining only the sign bit. Preliminary results suggest that compression ratios of 10-15 to 1 were readily achievable for projection radiographs with no subjective loss of image quality. The fast Cas-Cas transform ran about 25% faster than the Hartley transform with equal output quality.
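For reference, the 1-D discrete Hartley transform underlying both methods uses the cas kernel, cas θ = cos θ + sin θ (a "cas-cas" transform applies cas kernels along both image dimensions). A direct, unoptimized sketch:

```python
import numpy as np

def cas(theta):
    """The Hartley kernel: cas(t) = cos(t) + sin(t)."""
    return np.cos(theta) + np.sin(theta)

def dht(x):
    """Discrete Hartley transform: H[k] = sum_n x[n] * cas(2*pi*k*n/N).
    Real input, real output."""
    n = len(x)
    k = np.arange(n)
    return (x * cas(2 * np.pi * np.outer(k, k) / n)).sum(axis=1)

x = np.array([1.0, 2.0, 3.0, 4.0])
H = dht(x)
# Up to a factor of N, the DHT is its own inverse.
assert np.allclose(dht(H) / len(x), x)
```

Being real-to-real, it avoids the complex arithmetic of the FFT while retaining a fast O(N log N) algorithm, which is what makes it attractive for the compression comparison above.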

Journal ArticleDOI
A. Gillies
TL;DR: In this article, three types of criteria which may be used to measure the quality of an image, and thereby the performance of image processing operators, are examined. The results from a single test image are necessarily limited, but the work highlights the need to assess precisely what is being measured in such performance evaluations.
Abstract: The criteria which may be used to measure the quality of an image, and therefore the performance of image processing operators, are examined. Three types of criteria are considered: quantitative, qualitative and a hybrid technique. Results are compared from a wider range of image processing operators than in a previous study. The quantitative criteria are compared with qualitative visual judgements and the correlation between the results discussed. In addition, the relative merits of visual analysis, such as Modestino and Fries,1 model boundary analysis, such as Peli and Malah,4 and the current test image approach are examined. The results from a single test image are necessarily limited, but the work has highlighted the need to assess precisely what is being measured in the performance of image processing operators.

Proceedings ArticleDOI
05 Apr 1989
TL;DR: An adaptive discrete cosine transform (DCT) technique was selected in January 1988 for further refinement and enhancement and a draft standard is expected to be available during 1989.
Abstract: Members of the International Standards Organization ISO/IEC JTC1/SC2 Working Group 8 (Coded Representation of Picture and Audio Information) and the International Telegraph and Telephone Consultative Committee (CCITT) Study Group VIII Special Rapporteur Group on New Forms of Image Communication have been working together during 1987 and 1988 in a Joint Photographic Experts Group (JPEG) for the purpose of developing an international standard for the compression and decompression of natural color images. The technique selected is required to allow for both progressive and sequential image buildup during decompression. Decompression is to be feasible in real time in the ISDN environment (64 Kbits compressed data per second). The final standard is expected to produce a recognizable image at under 0.25 bits/pixel, an excellent image around 0.75 bits/pixel, and an image visually indistinguishable from the original around 3 bits/pixel for original images of 16 bits/pixel. Exact (lossless) coding is also required. An adaptive discrete cosine transform (DCT) technique was selected in January 1988 for further refinement and enhancement. The definition of the inverse DCT, how to improve the low-bit-rate image quality, the choice of entropy coding technique, and the method of achieving graceful progression are being studied as part of the refinement and enhancement process before final selection. A draft standard is expected to be available during 1989. The status of the refinement process will be reviewed.
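The selected transform is the block DCT; a compact sketch of an orthonormal 2-D DCT-II on one block (the 8×8 size matches the eventual JPEG baseline; the particular normalization shown is the common orthonormal one, an assumption here):

```python
import numpy as np

def dct2(block):
    """Orthonormal 2-D DCT-II of a square block via matrix form C B C^T."""
    n = block.shape[0]
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C *= np.sqrt(2.0 / n)
    C[0] = np.sqrt(1.0 / n)   # DC row gets the 1/sqrt(N) normalization
    return C @ block @ C.T

flat = np.full((8, 8), 128.0)
coeffs = dct2(flat)
# A flat block compacts all of its energy into the single DC coefficient,
# which is the energy-compaction property that makes the DCT attractive here.
assert abs(coeffs[0, 0] - 1024.0) < 1e-9
assert np.allclose(coeffs.ravel()[1:], 0.0)
```

Quantizing and entropy-coding these coefficients, coarsely for the AC terms, is what yields the bit rates quoted in the abstract.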

01 Jan 1989
TL;DR: An outline of the FOCAS software and algorithms is presented, followed by a summary of the results and a description of the archive containing the detailed analysis, which may be used to verify current and future distributions of FOCAS and to compare against other image analysis systems which produce similar measurements.
Abstract: A set of standard test images has been analyzed using the Faint Object Classification and Analysis System (FOCAS). This paper presents an outline of the FOCAS software and algorithms followed by a summary of the results and a description of the archive containing the detailed analysis. The archive is available on magnetic tape. The detailed results may be used to verify current and future distributions of FOCAS and to compare against other image analysis systems which produce similar measurements.

Proceedings ArticleDOI
21 Mar 1989
TL;DR: The image transformation shows that area correlation calculations can be performed using line integration in the transformed image, and implications with regard to rapid location and classification of image objects are discussed.
Abstract: An approach to the correlation of images with reference image templates is discussed. The approach is based upon the use of image false contours generated via digital quantization coupled with special-purpose digital processing. It is shown that false contours within the image, as well as appropriate image transformation and reference template preprocessing, can significantly speed up the digital correlation process. The image transformation, discussed from the viewpoint of Green's theorem, shows that area correlation calculations can be performed using line integration in the transformed image. A timing analysis for the processing approach is presented using a general-purpose 32-bit microprocessor common to computer workstations. Implications with regard to rapid location and classification of image objects are discussed.
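The area-to-boundary trade has a familiar discrete analogue (illustrative only; the paper's contour-based transformation differs): with an integral image, any block sum, and hence each term of an area correlation, needs only values at the block's boundary corners.

```python
import numpy as np

def integral_image(img):
    """Cumulative 2-D sums: ii[y, x] = sum of img[:y+1, :x+1]."""
    return img.cumsum(axis=0).cumsum(axis=1)

def block_sum(ii, y0, x0, y1, x1):
    """Sum over img[y0:y1, x0:x1] from boundary values of the integral
    image alone -- the discrete cousin of trading an area integral for a
    line integral via Green's theorem."""
    total = ii[y1 - 1, x1 - 1]
    if y0 > 0:
        total -= ii[y0 - 1, x1 - 1]
    if x0 > 0:
        total -= ii[y1 - 1, x0 - 1]
    if y0 > 0 and x0 > 0:
        total += ii[y0 - 1, x0 - 1]
    return total

img = np.arange(16).reshape(4, 4)
ii = integral_image(img)
print(block_sum(ii, 1, 1, 3, 3))  # → 30  (5 + 6 + 9 + 10)
```

The speed-up has the same flavor as in the paper: an O(area) sum becomes an O(1) boundary evaluation after one preprocessing pass.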

Patent
15 Mar 1989
TL;DR: In this paper, the authors proposed a method to adjust a video signal based on the comparison result of the video signal obtained by picking up a test image displayed with the aid of the output of a reference signal generation means and a reference color signal and adjusting it.
Abstract: PURPOSE: To easily execute color adjustment by correcting a video signal based on the result of comparing the video signal, obtained by picking up a test image displayed with the aid of the output of a reference signal generation means, with a reference color signal, and adjusting it. CONSTITUTION: The reference color signal is displayed as the test image on a display unit 1 by the reference signal generation means 8. Then the video signal obtained by picking up the displayed test image and the reference color signal are compared by a signal comparison means 12, and the video signal is corrected and adjusted based on that comparison result by a signal adjustment means 7. Thus, since the trouble of adjustment and the need for apparatus, time and skill are eliminated, and a correction value based on the comparison result can be obtained, the video signal can easily be restored to the most recently adjusted state, without re-adjusting every time, simply by holding this correction value.
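The correct-and-hold idea reduces to computing correction values from the comparison and reapplying them later; a deliberately simplified per-channel sketch (real color adjustment is not a pure per-channel gain, and all values here are made up):

```python
def correction_gains(measured_rgb, reference_rgb):
    """Per-channel gains from comparing the captured test image with the
    reference color signal; holding these gains lets the adjusted state
    be restored later without re-running the adjustment."""
    return tuple(r / m for m, r in zip(measured_rgb, reference_rgb))

measured = (200.0, 100.0, 50.0)    # camera's reading of the test image
reference = (100.0, 50.0, 100.0)   # reference color signal
gains = correction_gains(measured, reference)
corrected = tuple(m * g for m, g in zip(measured, gains))
print(corrected)  # → (100.0, 50.0, 100.0)
```

Storing `gains` plays the role of the patent's held correction value: applying it to a later capture restores the adjusted state without repeating the comparison.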

Proceedings ArticleDOI
11 Sep 1989
TL;DR: The results indicate that data compression is possible by shifting one image horizontally and subtracting it from the corresponding area of the other by utilizing the correlation between two images, and makes possible low cost, more realistic 3D visual communications.
Abstract: Statistical characteristics of stereoscopic images and the possibility of stereoscopic image data compression utilizing the mutual correlation between right and left images are presented. First, the mutual (cross) correlation between right and left images and the autocorrelation of the left images are measured. Next, one image is divided into blocks of fixed size. Each block is shifted consecutively and then subtracted from the corresponding area of the other image to form a residual block at each displacement position. The block with the least residual among all translated blocks is determined. These least residual blocks are then assembled to form a least residual image. Finally, the statistical properties of residual images and block translation values are investigated. The results indicate that data compression is possible by shifting one image horizontally and subtracting it from the corresponding area of the other. This research allows the efficient image coding for stereoscopic images by utilizing the correlation between two images, and makes possible low cost, more realistic 3D visual communications.
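The shift-and-subtract search for the least-residual block can be sketched directly (block size, search range, and the absolute-difference residual are illustrative choices):

```python
import numpy as np

def best_shift(left_block, right_rows, max_shift):
    """Slide a block from one image across the corresponding rows of the
    other, returning the shift with the smallest residual."""
    best = (None, np.inf)
    for s in range(max_shift + 1):
        candidate = right_rows[:, s:s + left_block.shape[1]]
        residual = np.abs(left_block - candidate).sum()
        if residual < best[1]:
            best = (s, residual)
    return best

right = np.arange(16.0).reshape(2, 8)
left = right[:, 3:6]            # the same patch, displaced 3 pixels
shift, residual = best_shift(left, right, max_shift=5)
print(shift, residual)  # → 3 0.0
```

Coding only the shift and the (small) residual block, instead of the second image itself, is the compression opportunity the measurements above point to.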

Proceedings ArticleDOI
Yamaji, Yoshino, Ishitobi, Araki, Ikeda, Saitoh 
07 Jun 1989
TL;DR: Introduces a high performance color image scanner which has a variety of digital image processing capabilities such as smooth image reduction and color processing.
Abstract: Introduces a high performance color image scanner which has a variety of digital image processing capabilities such as smooth image reduction and color processing.

Proceedings ArticleDOI
01 Jan 1989
TL;DR: The use of an image sequencer and an image-processing card in algorithm testing is discussed, and the use of critical test signals for human vision in algorithm design is described.
Abstract: The use of an image sequencer and an image-processing card in algorithm testing is discussed. The use of critical test signals for human vision in algorithm design is described. The basic approach for image sequence processing is to consider both still images and moving objects separately and test both cases in the sequencer. For still images this is necessary when the scan format is being changed. In both cases the result of processing one field or frame is first studied and then the frames are shown in sequence to see the overall performance. As an example, development of algorithms for scan rate conversion is presented. >