
Showing papers on "Standard test image published in 1992"


Proceedings ArticleDOI
23 Mar 1992
TL;DR: The fidelity of the fractal method shows promise and its greater speed and simplicity compared to other fractal transforms suggest immediate applications such as interactive browsing of remote image archives or image representation in multimedia systems.
Abstract: A method for block coding of images based on a least squares fractal approximation by a self-affine system (SAS) is presented. The computational cost of the approximation is linear in the number of pixels in the image. The approximation to a rectangularly tiled block involves evaluating various low-order moments over the block, and solving a system of four linear equations for each tile. The method is applied to a standard test image and the effects of various optimizations are shown. A quantitative comparison with the adaptive discrete cosine transform at 8:1 compression is made. The fidelity of the fractal method shows promise and its greater speed and simplicity compared to other fractal transforms suggest immediate applications such as interactive browsing of remote image archives or image representation in multimedia systems.
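
A minimal sketch of the per-tile least-squares idea, assuming a scalar contrast/offset fit of a decimated domain block against a range tile; the paper's self-affine system solves four equations per tile, so the moment-based closed form below is only an illustration.

```python
import numpy as np

def affine_fit(domain_block, range_block):
    """Least-squares fit range ~ s * domain + o using first/second moments.

    A simplified stand-in for the per-tile linear system in fractal block
    coding; the paper's self-affine system solves four equations per tile.
    """
    d = domain_block.astype(np.float64).ravel()
    r = range_block.astype(np.float64).ravel()
    n = d.size
    sd, sr = d.sum(), r.sum()
    sdd, sdr = (d * d).sum(), (d * r).sum()
    denom = n * sdd - sd * sd
    s = (n * sdr - sd * sr) / denom if denom else 0.0
    o = (sr - s * sd) / n
    err = np.mean((s * d + o - r) ** 2)
    return s, o, err

def decimate(block):
    """Average 2x2 neighbourhoods so a domain block matches the tile size."""
    return 0.25 * (block[0::2, 0::2] + block[1::2, 0::2]
                   + block[0::2, 1::2] + block[1::2, 1::2])

# toy usage: fit an 8x8 tile against a decimated 16x16 block of the same image
img = np.random.default_rng(0).integers(0, 256, (32, 32))
s, o, err = affine_fit(decimate(img[:16, :16]), img[:8, :8])
print(s, o, err)
```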

104 citations


Journal ArticleDOI
TL;DR: Higher-order neural networks (HONNs) need to be trained on just one view of each object, not numerous distorted views, reducing the training time significantly, and 100% accuracy can be guaranteed for noise-free test images characterized by the built-in distortions.

51 citations


Journal ArticleDOI
TL;DR: A two-dimensional method which uses a full-plane image model to generate a more accurate filtered estimate of an image that has been corrupted by additive noise and full-plane blur is presented.
Abstract: A two-dimensional method which uses a full-plane image model to generate a more accurate filtered estimate of an image that has been corrupted by additive noise and full-plane blur is presented. Causality is maintained within the filtering process by using multiple concurrent block estimators. In addition, true state dynamics are preserved, resulting in an accurate Kalman gain matrix. Simulation results on a test image corrupted by additive white Gaussian noise are presented for various image models and compared to those of the previous block Kalman filtering methods.

35 citations


Patent
17 Sep 1992
TL;DR: In this paper, a method and apparatus for detecting a defective printed matter in a printing press is presented, where each pixel data of a printed matter serving as a reference is stored as reference image data.
Abstract: According to a method and apparatus for detecting a defective printed matter in a printing press, each pixel data of a printed matter serving as a reference is stored as reference image data. Each pixel data of a printed matter serving as a test object is input as test image data. Corresponding pixel data of the reference image data and the test image data are compared with each other to detect a defective printed matter. A change in tone is recognized by accumulating and comparing the reference image data and the test image data in units of predetermined areas. The reference image data is automatically updated and stored on the basis of new test image data upon recognition of the change in tone.
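
A rough sketch of the comparison logic described above, assuming grayscale frames; pixel_tol, area and tone_tol are hypothetical thresholds, not values from the patent.

```python
import numpy as np

def detect_defects(reference, test, pixel_tol=30, area=(32, 32), tone_tol=5.0):
    """Flag defective pixels, then check per-area means for a tone change.

    Hypothetical thresholds; the patent accumulates and compares the two
    images in units of predetermined areas to separate tone drift from defects.
    """
    ref = reference.astype(np.int16)
    tst = test.astype(np.int16)
    defects = np.abs(tst - ref) > pixel_tol

    h, w = ref.shape
    ah, aw = area
    tone_changed = False
    for y in range(0, h - ah + 1, ah):
        for x in range(0, w - aw + 1, aw):
            if abs(tst[y:y+ah, x:x+aw].mean() - ref[y:y+ah, x:x+aw].mean()) > tone_tol:
                tone_changed = True
    # on a recognised tone change the patent updates the stored reference
    new_reference = test if tone_changed else reference
    return defects, tone_changed, new_reference
```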

35 citations


Patent
22 Oct 1992
TL;DR: In this paper, a method of quantitatively measuring fidelity of a reproduced image reconstructed from a compressed data representation of an original image is disclosed, which comprises, responsive to user selection, establishing either a global or a local assessment mode.
Abstract: A method of quantitatively measuring fidelity of a reproduced image reconstructed from a compressed data representation of an original image is disclosed. The method comprises, responsive to user selection, establishing either a global assessment mode or a local assessment mode. In the global assessment mode, changes in luminance of the reproduced image from the original image and changes in color in first and second color difference values are used to score fidelity. Changes in luminance are measured using a dynamic-range, nonlinear transform equation. In the local assessment mode, and responsive to user selection, the reproduced image and the original image are segmented and corresponding pairs of segments from the two images are identified. Fidelity of the reproduced image to the original is then scored by comparing corresponding pairs of segments in color, luminance, shape, displacement and texture.
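
A hedged sketch of the global assessment mode's structure, assuming 8-bit RGB inputs; the weights and the log term merely stand in for the patent's dynamic-range, nonlinear transform.

```python
import numpy as np

def global_fidelity(original_rgb, reproduced_rgb, w_luma=0.6, w_chroma=0.2):
    """Score fidelity from luminance and two colour-difference channels.

    The weights and the log compression only illustrate the structure of the
    global mode; they are not the patent's transform.
    """
    def to_luma_chroma(rgb):
        rgb = rgb.astype(np.float64)
        y  = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
        cb = rgb[..., 2] - y          # first colour-difference value
        cr = rgb[..., 0] - y          # second colour-difference value
        return y, cb, cr

    y0, cb0, cr0 = to_luma_chroma(original_rgb)
    y1, cb1, cr1 = to_luma_chroma(reproduced_rgb)
    # log compression of the luminance error mimics a dynamic-range transform
    dy  = np.mean(np.log1p(np.abs(y1 - y0)))
    dcb = np.mean(np.abs(cb1 - cb0))
    dcr = np.mean(np.abs(cr1 - cr0))
    return w_luma * dy + w_chroma * (dcb + dcr)
```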

32 citations


Patent
Norman Dennis Richards
14 Jul 1992
TL;DR: In this paper, the display apparatus includes circuitry for generating video signals for the display of still or motion pictures on a CRT or other display device, and an arrangement for generating a test signal to display a test image to facilitate adjustment of the brightness (black level) setting of the CRT.
Abstract: The display apparatus includes circuitry for generating video signals for the display of still or motion pictures on a CRT or other display device. The apparatus further includes an arrangement for generating a test signal to display a test image to facilitate adjustment of the brightness (black level) setting of the CRT. The test image includes adjoining regions (100,102) of black and below-black levels, forming a first image feature which is invisible at the correct brightness setting. If the brightness setting is too high, the first image feature is visible and forms a symbolic instruction indicating the necessary corrective adjustment. The test signal may be stored with picture information on a storage device.
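
A small sketch of such a test frame, assuming nominal BT.601 studio levels (black = 16, below-black = 8); the patent's symbolic instruction feature is reduced to a plain rectangle here.

```python
import numpy as np

def brightness_test_pattern(height=480, width=640, black=16, below_black=8):
    """Build a test frame with adjoining black and below-black regions.

    Uses nominal ITU-R BT.601 studio levels (black = 16); the symbolic
    corrective-adjustment glyph of the patent is replaced by a rectangle.
    """
    frame = np.full((height, width), black, dtype=np.uint8)
    # below-black rectangle: invisible at the correct black level,
    # visible against the surround when brightness is set too high
    frame[height // 3: 2 * height // 3, width // 3: 2 * width // 3] = below_black
    return frame
```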

31 citations


Journal ArticleDOI
TL;DR: A new method for computing realistic 3D images of buildings or of complex objects from a set of real images and from the 3D model of the corresponding real scene is presented, and a general scheme in which these images are used to test Image Processing algorithms is proposed.

29 citations


01 Jan 1992
TL;DR: In this article, the authors introduce a data redistribution algorithm which aims at dynamically balancing the workload of image processing algorithms on distributed memory processors, and demonstrate the usefulness of their redistribution strategy by comparing the efficiency obtained with and without the elastic algorithm for a thinning algorithm for extracting the skeleton of a binary image.
Abstract: In this paper, we introduce a data redistribution algorithm which aims at dynamically balancing the workload of image processing algorithms on distributed memory processors. First we briefly review state-of-the-art techniques for load balancing application-specific algorithms. Then we describe the data redistribution technique, which we term “elastic load balancing” in a general framework. We demonstrate the usefulness of our redistribution strategy by comparing the efficiency obtained with and without the elastic algorithm for a thinning algorithm which aims at extracting the skeleton of a binary image. We report experimental results obtained with a Supernode machine, based upon reconfigurable networks of 32 Transputers [Nic]. We obtain a speedup of up to 28 over the sequential algorithm, using a Mandelbrot set as a test image. Note that the speedup with a static allocation of the picture was limited to 17 with the same test image, due to the load imbalance among the processors.
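
A greedy one-dimensional sketch of the repartitioning step, assuming per-row costs (for instance, the number of still-active foreground pixels in a thinning pass) are already known; it illustrates the partitioning idea, not the paper's elastic algorithm.

```python
def rebalance_rows(row_costs, n_workers):
    """Split image rows into contiguous chunks with roughly equal total cost.

    A greedy stand-in for dynamic redistribution: row_costs could be, e.g.,
    the count of foreground pixels still active in a thinning pass.
    """
    total = sum(row_costs)
    target = total / n_workers
    chunks, start, acc = [], 0, 0.0
    for i, c in enumerate(row_costs):
        acc += c
        if acc >= target and len(chunks) < n_workers - 1:
            chunks.append((start, i + 1))
            start, acc = i + 1, 0.0
    chunks.append((start, len(row_costs)))
    return chunks

# example: heavy work concentrated in the middle rows
print(rebalance_rows([1] * 100 + [20] * 100 + [1] * 100, 4))
```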

28 citations


Journal ArticleDOI
TL;DR: A quantitative evaluation of some noise reduction methods indicates that some new local noise reduction techniques perform better than the standard existing techniques under certain criteria.

26 citations


Patent
28 Aug 1992
TL;DR: In this article, an image processing apparatus and method is provided which makes it possible to reproduce all the information of digital image data having a wide dynamic range; a thermal head is driven on the basis of the generated luminance image data and the chrominance image data to print an image.
Abstract: An image processing apparatus and method is provided which makes it possible to reproduce all the information of digital image data having a wide dynamic range. A high frequency component of luminance image data is removed by a digital filter. The luminance image data whose high frequency component has been removed is divided into highlight image data and shadow image data. Level conversion processing is performed so that the luminance level of the blackest point of the highlight image and the luminance level of the whitest point of the shadow image coincide with each other. Luminance image data is generated by synthesizing the highlight image data, the shadow image data, and the high frequency component. A thermal head is driven on the basis of the generated luminance image data and the chrominance image data to print an image.

25 citations


Patent
12 Feb 1992
TL;DR: In this paper, an image data compression method is described in which a sheet of an image is divided into a plurality of small partial images; in reconstructing the image, the fidelity to the original is varied according to the utility of the information contained in each partial image.
Abstract: An image data compression method is disclosed in which a sheet of an image is divided into a plurality of small partial images. In reconstructing the image, the fidelity to the original image is varied in accordance with the utility of the information contained in each partial image, and the image data is compressed in accordance with that fidelity.
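
A short sketch of tile-wise compression with fidelity tied to a utility measure; tile standard deviation stands in for the patent's notion of information utility, and the quantization rule is purely illustrative.

```python
import numpy as np
from scipy.fft import dctn, idctn

def compress_tile(tile, q_step):
    """DCT-quantize one tile; a larger q_step means lower fidelity and fewer bits."""
    c = dctn(tile.astype(float), norm='ortho')
    cq = np.round(c / q_step)
    return cq, idctn(cq * q_step, norm='ortho')

def compress_by_utility(image, tile=8, base_step=4.0):
    """Quantize each tile with a step derived from a simple 'utility' measure.

    Busier tiles are reproduced closer to the original, flat tiles are
    compressed harder. Edge tiles smaller than the tile size are left as-is.
    """
    out = image.astype(float).copy()
    h, w = image.shape
    for y in range(0, h - h % tile, tile):
        for x in range(0, w - w % tile, tile):
            blk = image[y:y + tile, x:x + tile]
            utility = blk.std()
            _, rec = compress_tile(blk, q_step=base_step * 16.0 / (1.0 + utility))
            out[y:y + tile, x:x + tile] = rec
    return out
```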

Patent
19 May 1992
TL;DR: In this article, a color test image is generated for a selected jet by successively varying the width of the pulse modulation from the minimum to the maximum, and this information is stored in a look-up table for use during the generation of a reproduced image.
Abstract: An automatic system for the spray painting of color images in which accurate color reproduction is achieved by a separate look-up table generated for each jet and stored in the computer memory. During the reproduction of an image, the computer uses the look-up tables to determine the pulse width of an air modulation stream to reproduce the precise color density called for by the input data. Because slight variations in the characteristics of two jets spraying the same color would produce objectionable lining in the image, a separate look-up table is provided for each jet. A color test image is generated for a selected jet by successively varying the width of the pulse modulation from the minimum to the maximum. For example, while the jet is making a single scan across the medium, the pulse width modulation of the air stream is varied in 256 separate steps to produce a test image that varies from the lightest to the darkest density of the particular color being sprayed. This pattern is scanned by a densitometer that delivers its output to the computer. The position of the densitometer along the color test image is correlated with the modulation pulse width that produced that particular color density and this information is stored in a look-up table for use during the generation of a reproduced image. A dark current is measured by the densitometer and subtracted from the calibration readings so that the calibration is independent of ambient illumination.
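
A compact sketch of building one jet's table, assuming arrays of pulse widths and corresponding densitometer readings from a single calibration scan; the dark-current subtraction follows the description above, while the function and variable names are illustrative.

```python
import numpy as np

def build_jet_lut(pulse_widths, densitometer_readings, dark_current, n_levels=256):
    """Build a per-jet look-up table mapping a desired density level to a pulse width.

    pulse_widths and densitometer_readings come from one calibration scan of
    the colour test image (256 steps in the patent); names are illustrative.
    """
    density = np.asarray(densitometer_readings, dtype=float) - dark_current
    order = np.argsort(density)                  # np.interp needs increasing x
    wanted = np.linspace(density.min(), density.max(), n_levels)
    return np.interp(wanted, density[order], np.asarray(pulse_widths, float)[order])

# usage sketch: look up the pulse width for input level 128 of this jet
# lut = build_jet_lut(widths, readings, dark)
# pulse = lut[128]
```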

Patent
Hiroshi Arai
18 Nov 1992
TL;DR: An image combining device as discussed by the authors scans an image on a large document and prints it, which is achieved quickly and efficiently by adjusting a magnification ratio applied to scanned image data, so as to allow the data to fit within the memory.
Abstract: An image combining device scans an image on a large document and prints it. The device includes a memory for storing image signals. In accordance with a first aspect of the device, a scanner scans areas of the document which are individually smaller than the area of the entire document, but combines the areas so as to provide an eye-readable representation of the image which was on the large document. This is achieved quickly and efficiently by adjusting a magnification ratio applied to the scanned image data, so as to allow the data to fit within the memory. In accordance with another aspect of the device, the combination of the images and the printing of the output are achieved by compressing the image data using a binarization processor.

Proceedings ArticleDOI
23 Mar 1992
TL;DR: A design procedure for a vector quantizer which simultaneously performs halftoning and compression of sampled monochrome images is presented, based on the generalized Lloyd algorithm, which results in a quantizer that is locally optimal under a given weighted-squared-error distortion measure.
Abstract: A design procedure for a vector quantizer which simultaneously performs halftoning and compression of sampled monochrome images is presented. The design method is based on the generalized Lloyd algorithm which results in a quantizer that is locally optimal under a given weighted-squared-error distortion measure. The optimal system is approximated by means of a computationally efficient vector halftoning algorithm. A test image encoded by this method at 0.094 b/pixel is compared with images of the same rate produced by independent compression and halftoning steps.
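
A plain generalized Lloyd (LBG) sketch under unweighted squared error; the paper's design uses a weighted-squared-error distortion suited to halftoning, which is omitted here.

```python
import numpy as np

def lbg_codebook(vectors, k, iters=20, seed=0):
    """Generalized Lloyd (LBG) codebook training under plain squared error.

    The paper uses a weighted-squared-error measure tied to halftone
    perception; the weighting is omitted to keep the sketch short.
    """
    rng = np.random.default_rng(seed)
    codebook = vectors[rng.choice(len(vectors), k, replace=False)].astype(float)
    labels = np.zeros(len(vectors), dtype=int)
    for _ in range(iters):
        # nearest-codeword assignment
        d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        # centroid update (empty cells keep their previous codeword)
        for j in range(k):
            members = vectors[labels == j]
            if len(members):
                codebook[j] = members.mean(0)
    return codebook, labels

# toy usage on random 4x4 image blocks flattened to 16-vectors
blocks = np.random.default_rng(1).random((500, 16))
cb, lab = lbg_codebook(blocks, k=8)
```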

Proceedings ArticleDOI
01 Oct 1992
TL;DR: A Monte Carlo simulation of 3-D image registration in two different modalities using external markers is presented; the nine parameters of the transformation are obtained by optimizing the two matched sets of fiducial markers of the two modalities using non-linear minimization methods.
Abstract: A Monte Carlo simulation of 3-D image registration from two different modalities using external markers is presented. The head is assumed to be a sphere with a 10 cm radius, and the transformation used to correlate the two volume images consists of a rotation (Euler angles), scaling and a translation. The nine parameters of the transformation are obtained by optimizing the two matched sets of fiducial markers of the two modalities using non-linear minimization methods. A test image is generated in each modality to study quantitatively the precision of the registration.
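
A sketch of the nine-parameter fit (three Euler angles, three scales, three translations) from matched fiducial markers, assuming SciPy is available; it illustrates the non-linear minimization step, not the paper's Monte Carlo study.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def register_markers(src, dst):
    """Fit rotation (Euler angles), per-axis scale and translation: 9 parameters.

    src and dst are matched (N, 3) fiducial marker sets from the two
    modalities; a sketch of the non-linear minimization, not the paper's code.
    """
    def residuals(p):
        angles, scale, t = p[:3], p[3:6], p[6:9]
        R = Rotation.from_euler('xyz', angles).as_matrix()
        mapped = (src * scale) @ R.T + t
        return (mapped - dst).ravel()

    p0 = np.concatenate([np.zeros(3), np.ones(3), np.zeros(3)])
    return least_squares(residuals, p0).x

# toy check: recover a known transform applied to random marker positions
rng = np.random.default_rng(2)
src = rng.normal(size=(8, 3))
R = Rotation.from_euler('xyz', [0.1, -0.2, 0.3]).as_matrix()
dst = (src * [1.1, 0.9, 1.0]) @ R.T + [5.0, -2.0, 1.0]
print(register_markers(src, dst))
```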

Patent
30 Oct 1992
TL;DR: In this paper, the paster portion of a web is recognized on the basis of the generation pattern of defective pixels, which are detected by comparing reference image data with test image data in units of pixels.
Abstract: According to a method and apparatus for recognizing a paster portion of a web, each pixel data of a printed matter serving as a reference is stored as reference image data. Each pixel data of a printed matter serving as a test target printed on the web is stored as test image data. The reference image data is compared with the test image data in units of pixels to detect defective pixels. The paster portion of the web is recognized on the basis of a generation pattern of the defective pixels.

Proceedings ArticleDOI
Armando Manduca
01 May 1992
TL;DR: A software module has been created in the biomedical image analysis and display package ANALYZE which enables experimentation with 2-D and 3-D image compression based on wavelet transforms.
Abstract: A software module has been created in the biomedical image analysis and display package ANALYZE which enables experimentation with 2-D and 3-D image compression based on wavelet transforms. In particular, this module allows the user to interactively determine the desired amount of compression (by viewing the results) and to interactively define specific subregions of the image that are to be preserved with full fidelity during the compression (again, viewing the results). Examples of the tool's operation and results on a variety of medical images will be presented.
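
A minimal sketch of wavelet compression with a preserved subregion, using PyWavelets rather than the ANALYZE module; pasting the original pixels back into the region of interest is a simplification of the approach described above.

```python
import numpy as np
import pywt  # PyWavelets; stands in for the ANALYZE module described above

def compress_with_roi(image, roi_mask, wavelet='db4', level=3, keep=0.05):
    """Threshold wavelet coefficients, then restore the ROI at full fidelity.

    'keep' is the fraction of the largest coefficients retained; the wavelet,
    level and threshold rule are illustrative, not taken from the paper.
    """
    coeffs = pywt.wavedec2(image.astype(float), wavelet, level=level)
    arr, slices = pywt.coeffs_to_array(coeffs)
    thresh = np.quantile(np.abs(arr), 1.0 - keep)
    arr[np.abs(arr) < thresh] = 0.0
    lossy = pywt.waverec2(
        pywt.array_to_coeffs(arr, slices, output_format='wavedec2'), wavelet
    )[:image.shape[0], :image.shape[1]]
    # paste the full-fidelity region of interest back in
    out = lossy.copy()
    out[roi_mask] = image[roi_mask]
    return out
```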

Patent
18 Sep 1992
TL;DR: In this paper, rough image data files of reduced resolution are generated from the standard image data, and either the standard file or a rough file is selected for display according to the operator's instruction and the requested magnification, allowing images to be displayed at high speed.
Abstract: PURPOSE: To display images at high speed by producing rough image data of reduced resolution from the standard image data and then selecting and displaying the standard or rough image data in response to an operator's instruction. CONSTITUTION: In display preprocessing carried out by an image display processing part 50, rough image data files 55 and 56 and a standard image data file 57 are produced on a work file 51 from a multistage compression file of the image data to be displayed. When the operator requests display of a rough overall image 52, the display data are produced from file 55, which has the coarsest resolution, and shown on the screen. When the operator requests, through a windowing operation, an image 53 at a medium magnification slightly larger than the standard level, the display data are produced from file 56, whose resolution corresponds to the displayed magnification.
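
A toy sketch of the selection rule implied above, assuming each stored file is tagged with its scale relative to the standard image data; file names and scale values are illustrative.

```python
def pick_resolution_file(files_by_scale, display_magnification):
    """Pick the coarsest stored resolution that still covers the requested magnification.

    files_by_scale maps a stored scale factor (e.g. 1.0, 0.5, 0.25) to a file
    name; names are illustrative of the patent's rough/standard data files.
    """
    usable = [s for s in files_by_scale if s >= display_magnification]
    scale = min(usable) if usable else max(files_by_scale)
    return files_by_scale[scale]

# whole-image overview -> roughest file; near-1:1 window -> standard file
files = {1.0: 'standard.img', 0.5: 'rough_half.img', 0.25: 'rough_quarter.img'}
print(pick_resolution_file(files, 0.2), pick_resolution_file(files, 0.9))
```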

Proceedings ArticleDOI
30 Apr 1992
TL;DR: The 031 chip set employs an algorithm for high quality compression of continuous-tone color or monochrome images, similar to the algorithm specified in the Joint Photographic Experts Group standard, and is targeted at cost-sensitive business and consumer applications.
Abstract: Image compression is used to handle the large volume of digitized image data in order to minimize the time and cost required to store and transfer the digitized data. Image compression is one of the key components in emerging applications such as digital still video cameras, multimedia, color printers, video fax machines, and desktop publishing. This paper describes the Zoran 031 image compression chip set. The chip set is comprised of the ZR36020 Discrete Cosine Transform (DCT) Processor and the ZR36031 Image Compression Coder/Decoder, which work together to perform image compression and expansion. The chip set employs an algorithm for high quality compression of continuous-tone color or monochrome images, similar to the algorithm specified in the Joint Photographic Experts Group standard. The 031 chip set is targeted at cost-sensitive business and consumer applications such as digital still video cameras, color printers, color fax machines, and scanners. The architecture and the coding/decoding algorithm of the chip set, as well as the add-in image compression PC board in which it is utilized, are discussed.

Patent
10 Dec 1992
TL;DR: In this paper, an image data analysis section 7 performs a histogram analysis of the image data to determine which image processing operations should be applied by an image data processing section 6 and directs that section to execute them; the resulting images are displayed simultaneously, or sequentially at equal magnification, on the screen of a monitor 8.
Abstract: PURPOSE: To automate the adjustment of image processing conditions by automatically processing an original image under a plurality of image processing conditions and displaying the resulting images simultaneously or sequentially on a monitor screen. CONSTITUTION: An image data analysis section 7 performs a histogram analysis of the image data to determine which image processing operations should be carried out by an image data processing section 6 and directs that section to execute them. The image data processing section 6 applies the selected processing operations to the input image data, and the resulting images are displayed simultaneously, or sequentially at equal magnification, on the screen of a monitor 8. When an image meeting the requirement is selected from among them, it is displayed at equal magnification on the screen of the monitor 8, and the corresponding image data are written to a hard disk 10 and an external memory 9 while being output to an imager 11.
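
A toy stand-in for the analysis step, assuming an 8-bit grayscale image; the thresholds and the particular candidate processings (stretch, gamma, equalization) are illustrative, not the patent's.

```python
import numpy as np

def propose_variants(image):
    """Suggest candidate processings from a histogram analysis of the image.

    Returns several processed versions that an operator could compare on
    screen; the decision rules below are illustrative only.
    """
    img = image.astype(float)
    hist, _ = np.histogram(img, bins=256, range=(0, 255))
    variants = {'original': image}
    lo, hi = np.percentile(img, [1, 99])
    if hi - lo < 128:                       # low contrast -> contrast stretch
        variants['stretched'] = np.clip((img - lo) * 255.0 / max(hi - lo, 1), 0, 255)
    if img.mean() < 96:                     # dark image -> gamma brighten
        variants['brightened'] = 255.0 * (img / 255.0) ** 0.6
    cdf = hist.cumsum() / hist.sum()        # equalization as a further option
    variants['equalized'] = np.interp(img, np.arange(256), 255.0 * cdf)
    return variants
```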

Tirso Alonso
01 Jan 1992

Patent
23 Dec 1992
TL;DR: In this article, a test image is acquired for each type of illumination, and one or more displacement images are produced for each test image by shifting the uniform fine structure by about one grid constant, or by an integral multiple thereof, in the direction corresponding to the grid constant.
Abstract: The object is illuminated from different directions and/or at different spatial angles with either time or spectral separation. A test image is acquired for each type of illumination. One or more displacement images are produced for each test image by shifting the uniform fine structure by about one grid constant, or by an integral multiple thereof, in the direction corresponding to the grid constant. The displacement images are compared with the preceding test images.
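
A short sketch of forming displacement images with NumPy, assuming the grid constant is known in pixels; np.roll wraps at the border, which a real inspection system would handle by cropping or padding.

```python
import numpy as np

def displacement_compare(test_image, grid_constant, axis=1, multiples=(1, 2)):
    """Shift the test image by whole grid constants and difference it with itself.

    grid_constant is in pixels along the axis of the regular fine structure;
    large residuals mark departures from that structure.
    """
    diffs = []
    for m in multiples:
        shifted = np.roll(test_image, m * grid_constant, axis=axis)
        diffs.append(np.abs(test_image.astype(int) - shifted.astype(int)))
    return diffs
```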

Proceedings ArticleDOI
11 Oct 1992
TL;DR: In this paper, the authors explored alternative techniques to be used instead of the usual side-by-side comparison, where the information contained in both images is perceived in a single image, preserving the context between compression errors and image structures.
Abstract: To evaluate the visual influence of irreversible compression on medical images, changes of the images have to be visualized. The authors have explored alternative techniques to be used instead of the usual side-by-side comparison, where the information contained in both images is perceived in a single image, preserving the context between compression errors and image structures. Thus fast and easy comparison can be done. These techniques make use of the human ability to perceive information also in the dimensions of color, space, and time. A study was performed with JPEG-compressed coronary angiographic images. Changes in the resulting images for six compression factors from 7 to 30 were scored by an observer. The results show that medically relevant changes, using the JPEG algorithm, appear between compression ratios of 7:1 and 10:1.
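
One simple way to fuse both images into a single view along these lines: carry the original in all channels and add the amplified absolute error to the red channel (the gain of 8 is arbitrary).

```python
import numpy as np

def error_overlay(original, compressed, gain=8):
    """Fuse the original and the compression error into one false-colour image.

    The grey base carries the original; the red excess carries the amplified
    absolute error, so artefacts are seen in the context of image structures.
    """
    g = original.astype(float)
    err = np.clip(gain * np.abs(g - compressed.astype(float)), 0, 255)
    return np.stack([np.clip(g + err, 0, 255), g, g], axis=-1).astype(np.uint8)
```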

07 Apr 1992
TL;DR: A new compression technique is presented which can be used to considerably reduce the amount of data that are required to digitally transmit or store a sequence of images.
Abstract: A new compression technique is presented which can be used to considerably reduce the amount of data that are required to digitally transmit or store a sequence of images. The method used to compress the data is based on image resampling, and describes the new frame as a set of geometric transformations of parts of the previous frame. A modular image processing system is also described which can be used to implement the encoder and decoder for this compression technique.

Proceedings ArticleDOI
01 Oct 1992
TL;DR: In this paper, three noniterative and two iterative filters for the restoration of gated nuclear cardiac images are evaluated and compared; the lowest root mean square error for a test image set is obtained with the constrained least squares filter and the Wiener filter.
Abstract: Restoration of gated nuclear cardiac images by three noniterative and two iterative filters is evaluated and compared. Lowest root mean square error for a test image set is obtained with the constrained least squares filter and the Wiener filter. A method of estimating the regularization parameter by minimizing the cross validatory objective function is also shown to be effective.
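
A frequency-domain Wiener restoration sketch with a constant noise-to-signal ratio; the paper estimates the regularization parameter by cross-validation, which is not reproduced here.

```python
import numpy as np

def wiener_restore(blurred, psf, nsr=0.01):
    """Frequency-domain Wiener deconvolution with a constant noise-to-signal ratio.

    nsr would normally be estimated from the data; 0.01 is a placeholder.
    """
    H = np.fft.fft2(psf, s=blurred.shape)
    G = np.fft.fft2(blurred)
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(W * G))
```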


Proceedings ArticleDOI
Kazuhiro Suzuki, Yutaka Koshi, Syunichi Kimura, Setsu Kunitake, Koh Kamizawa
01 Nov 1992
TL;DR: The feasibility study shows that the segmented image formatting criterion provides a well-designed quantization-table set for the DCT coding scheme, which gives better SNR and perceptually better image quality than the conventional scheme.
Abstract: Distributed office systems employ image exchange facilities such as image input/output, softcopy handling, image filing services, and so on. This paper presents a study of a segmented image exchange approach well suited to block-based image coding and image processing schemes. The feasibility study shows that the segmented image formatting criterion provides a well-designed quantization-table set for a DCT coding scheme. A 400 spi full-tone image coding simulation of the segmented adaptive quantization for DCT coding shows better SNR and perceptually better image quality than the conventional scheme. To select the quantization table in the decoder, the scheme requires a 3-bit header for every coded block; however, it enables additional image processing features in the reconstructed image.

Journal Article
TL;DR: In this article, the authors present a method for computing realistic 3D images of buildings or complex objects from a set of real images and from the 3D model of the corresponding real scene.
Abstract: This paper illustrates the cooperation between Image Processing and Computer Graphics. We present a new method for computing realistic 3D images of buildings or of complex objects from a set of real images and from the 3D model of the corresponding real scene. We also show how to remove the real shadows from these images and how to simulate new lighting. Our system can be used to generate synthetic images, with total control over the position of the camera, over the features of the optical system and over the solar lighting. We propose several methods for avoiding most of the artifacts that could be produced by a direct application of our approach. Finally, we propose a general scheme in which these images are used to test Image Processing algorithms long before the first physical prototype is built.

Proceedings ArticleDOI
14 Jun 1992
TL;DR: In this article, a novel image data compression technique which can be used to transmit or store a sequence of moving digital images is presented, which uses second-order geometric transformations to describe changes that occur in the images from one frame to the next.
Abstract: A novel image data compression technique which can be used to transmit or store a sequence of moving digital images is presented. This technique uses second-order geometric transformations to describe changes that occur in the images from one frame to the next. It is also shown how a modular image processing system can be used to implement these operations as both the encoder and decoder in a system. The hardware implementation is described. >