
Showing papers on "Image scaling published in 1989"


Patent
13 Nov 1989
TL;DR: In this paper, an interpolation filter is used in television standards conversion to decimate an input sequence of higher definition signals into an output sequence of lower definition signals, and the filter is partitioned into a plurality of computational stages.
Abstract: An interpolation filter is used in television standards conversion to decimate an input sequence of higher definition signals into an output sequence of lower definition signals. The filter is partitioned into a plurality of computational stages. Within each stage, the decimation coefficients are stored in a random access coefficient memory and applied to a multiplier to generate the product of a digital input signal and a stored coefficient. The RAM is operable in two modes: a first mode in which new sets of coefficients are serially input to the RAM during the field blanking period and a second mode in which different stored coefficients are output to the multiplier for consecutive digital signals to effect a non-integer decimation ratio.
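The coefficient-cycling idea described above can be sketched in a few lines. This is a toy sketch, not the patent's design: two-tap linear-interpolation coefficients stand in for the stored low-pass filter sets, and all names and parameters are illustrative.

```python
import numpy as np

def noninteger_decimate(x, ratio, n_phases=8):
    """Resample a 1-D signal at a non-integer step by cycling through a
    small bank of precomputed coefficient sets, one per subsample phase."""
    # One two-tap coefficient set per phase, loaded up front ("first mode").
    phases = [np.array([1.0 - f, f])
              for f in np.linspace(0.0, 1.0, n_phases, endpoint=False)]
    out, pos = [], 0.0
    while pos < len(x) - 1:
        i = int(pos)
        frac = pos - i
        coeffs = phases[int(frac * n_phases)]  # select stored set ("second mode")
        out.append(coeffs @ x[i:i + 2])        # multiply-accumulate stage
        pos += ratio                           # non-integer decimation step
    return np.array(out)
```

Cycling through a small bank of precomputed phases is what lets a fixed multiply-accumulate stage realize a non-integer decimation ratio.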

51 citations


Journal ArticleDOI
TL;DR: The supposed drawback of the two-pass algorithm can be nullified by near-perfect interpolation, at least in the case of rotation, while a major bonus is the greater ease with which interpolation by the FFT may be implemented in the two-pass case, leading to the possibility of highly faithful geometric transformation in practice, aided by the increasing availability of fast DSP and FFT microcircuits.
Abstract: Two-pass image geometric transformation algorithms, in which an image is resampled first in one dimension, forming an intermediate image, then in the resulting orthogonal dimension, have many computational advantages over traditional, one-pass algorithms. For example, interpolation and anti-aliasing are easier to implement, being 1-dimensional operations; computer memory requirements are greatly reduced, with access to image data in external memory regularized; while pipelined parallel computation is greatly simplified. An apparent drawback of the two-pass algorithm which has tended to limit its universal adoption is a reported corruption at high spatial frequencies due to apparent undersampling, in certain cases, in the necessary intermediate image. This experimental study set out to resolve the question of possible corruption by computing the mean-square error when a sinusoidal grating test image is rotated, either by an efficient two-pass algorithm or by a traditional one-pass algorithm. It was found that the method used for interpolation has a major effect on the accuracy of the result, poorer methods accentuating differences between the two algorithms. A totally unexpected and fortuitous result is that, by using near-perfect interpolation (e.g., by the FFT), the two-pass algorithm is almost as accurate as one pleases, for rotations up to 45°, to very close to the Nyquist limit (as also is the one-pass algorithm, with near-perfect interpolation). For rotations of φ > 45°, the two-pass algorithm breaks down before the Nyquist limit, but these can be replaced by rotations of 90° - φ and transposition. 
Thus, the supposed drawback of the two-pass algorithm can be nullified by near-perfect interpolation, at least in the case of rotation, while a major bonus is the greater ease with which interpolation by the FFT may be implemented, in the two-pass case, leading to the possibility of highly faithful geometric transformation in practice, aided by the increasing availability of fast DSP and FFT microcircuits.
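The two-pass decomposition for rotation can be sketched as follows. This is a minimal illustration (Catmull-Smith style, with simple linear interpolation rather than the near-perfect FFT interpolation the paper recommends); the function names and the rotation-about-the-origin convention are assumptions.

```python
import numpy as np

def resample_1d(line, coords):
    """Linear interpolation of `line` at fractional coords; points that
    fall outside the line are set to zero."""
    n = len(line)
    i = np.clip(np.floor(coords).astype(int), 0, n - 2)
    f = coords - i
    inside = (coords >= 0) & (coords <= n - 1)
    return np.where(inside, (1 - f) * line[i] + f * line[i + 1], 0.0)

def rotate_two_pass(img, phi):
    """Rotate img by phi radians about the origin using two 1-D passes:
    a horizontal resampling of every row, then a vertical resampling of
    every column of the intermediate image. Assumes |phi| < 45 degrees;
    larger angles would use a (90 degrees - phi) rotation plus a
    transposition, as the paper notes."""
    h, w = img.shape
    c, s = np.cos(phi), np.sin(phi)
    # Pass 1: intermediate(u, y) = img(x, y) with u = x*c - y*s,
    # so each row is sampled at x = (u + y*s) / c.
    inter = np.zeros(img.shape)
    u = np.arange(w, dtype=float)
    for y in range(h):
        inter[y] = resample_1d(img[y], (u + y * s) / c)
    # Pass 2: out(u, v) = intermediate(u, y) with v = u*tan(phi) + y/c,
    # so each column is sampled at y = (v - u*s/c) * c.
    out = np.zeros(img.shape)
    v = np.arange(h, dtype=float)
    for uu in range(w):
        out[:, uu] = resample_1d(inter[:, uu], (v - uu * s / c) * c)
    return out
```

Each pass is a purely one-dimensional resampling, which is where the paper's advantages in interpolation, anti-aliasing, and memory access come from.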

27 citations


Patent
Jorge Gonzalez-Lopez
06 Oct 1989
TL;DR: In this paper, an image interpolator implements an interpolation function providing real-time, continuous zoom capability to an image display system, where the coefficients required for the interpolation are generated in real time avoiding the need for time consuming table lookup.
Abstract: An image interpolator implements an interpolation function providing real-time, continuous zoom capability to an image display system. Output image pixels are obtained by interpolating the colour or intensity values of the 2x2 matrix of input-image pixels surrounding each output sample point. The disclosed arrangement employs a bi-linear interpolation algorithm implemented in the form of cascaded one-dimensional interpolation circuits. Magnification control is established so that a unit increment of the zoom controller, such as a cursor on a tablet, results in a constant increase in the degree of magnification. The coefficients required for the interpolation are generated in real time, avoiding the need for time-consuming table look-ups.
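The 2x2 bilinear scheme can be sketched as two cascaded one-dimensional interpolations, mirroring the structure described above. This is a software sketch of the general technique, not the patent's hardware circuits.

```python
import numpy as np

def bilinear_zoom(img, zoom):
    """Enlarge a 2-D image by `zoom` using bilinear interpolation over
    the 2x2 neighbourhood of each output sample point."""
    h, w = img.shape
    oh, ow = int(round(h * zoom)), int(round(w * zoom))
    ys = np.linspace(0.0, h - 1, oh)
    xs = np.linspace(0.0, w - 1, ow)
    y0 = np.clip(ys.astype(int), 0, h - 2)
    x0 = np.clip(xs.astype(int), 0, w - 2)
    fy = (ys - y0)[:, None]  # vertical interpolation coefficients
    fx = (xs - x0)[None, :]  # horizontal interpolation coefficients
    # First 1-D stage: interpolate horizontally along the two bracketing rows.
    top = (1 - fx) * img[y0][:, x0] + fx * img[y0][:, x0 + 1]
    bot = (1 - fx) * img[y0 + 1][:, x0] + fx * img[y0 + 1][:, x0 + 1]
    # Second 1-D stage: interpolate vertically between the two results.
    return (1 - fy) * top + fy * bot
```

The coefficients (fy, fx) are computed on the fly from the sample position, which is the table-free approach the abstract describes.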

24 citations


Proceedings ArticleDOI
05 Apr 1989
TL;DR: In this paper, a comparison of common kernels is made to determine the interpolation function that gives the most visually appealing images, and an analysis of the errors introduced with this resampling method is presented.
Abstract: Resampling an image will be described as a low-pass filtering operation followed by sampling to a new coordinate system. To determine the interpolation function that gives the most visually appealing images, a comparison of common kernels is made. Linear, cubic, and windowed sinc functions are compared in terms of frequency response and with prints of images resized using separable extensions of these functions. While the windowed sinc gives the best approximation to an ideal low-pass filter, using this kernel results in objectionable ringing and jaggedness around edges in the image. Cubic interpolation is shown to provide the best compromise between image sharpness and these edge artifacts. For image rotation, and resizing by an arbitrary factor, the filter coefficients (samples of the interpolation function) need to be computed for each pixel of the new image. Alternatively, significant computation can be saved by dividing the distance between pixels of the original image into a number of intervals and precomputing a set of coefficients for each interval. Each new pixel is then computed by finding the interval in which it falls and using the corresponding set of coefficients. An analysis of the errors introduced with this resampling method is presented. This analysis shows the number of intervals required to produce high-quality resampled images.
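The precomputed-interval idea can be sketched as follows, using the Keys cubic convolution kernel as the interpolation function. The function names and the choice a = -0.5 are illustrative assumptions, not details from the paper.

```python
import numpy as np

def cubic_kernel(t, a=-0.5):
    """Keys cubic convolution kernel; a = -0.5 is a common choice."""
    t = abs(t)
    if t < 1:
        return (a + 2) * t**3 - (a + 3) * t**2 + 1
    if t < 2:
        return a * t**3 - 5 * a * t**2 + 8 * a * t - 4 * a
    return 0.0

def precompute_table(n_intervals):
    """One 4-tap coefficient set per subpixel interval: the distance
    between pixels is divided into n_intervals, and the kernel is
    sampled once per interval instead of once per output pixel."""
    table = np.empty((n_intervals, 4))
    for k in range(n_intervals):
        f = k / n_intervals  # subpixel offset at the start of interval k
        table[k] = [cubic_kernel(f + 1), cubic_kernel(f),
                    cubic_kernel(f - 1), cubic_kernel(f - 2)]
    return table

def resample_point(signal, x, table):
    """Evaluate signal at fractional x using the interval's stored taps."""
    i = int(x)
    k = int((x - i) * len(table))  # which precomputed interval x falls in
    return float(table[k] @ signal[i - 1:i + 3])
```

Increasing n_intervals shrinks the phase-quantization error at the cost of table memory, which is the trade-off the paper's error analysis addresses.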

18 citations


Patent
10 Oct 1989
TL;DR: An image data processor including various subprocessors arranged in a pipeline for carrying out functions such as pixel normalization, background suppression, spot and void removal, image scaling, and size detection as discussed by the authors.
Abstract: An image data processor including various subprocessors arranged in a pipeline for carrying out functions such as pixel normalization, background suppression, spot and void removal, image scaling, and size detection.

16 citations


Proceedings ArticleDOI
27 Nov 1989
TL;DR: A simple image coding algorithm using VQ (vector quantization) and clustering interpolation, which avoids an explicit interpolation operation by employing different but related codebooks at the encoder and the decoder of a vector quantizer, is proposed.
Abstract: The authors propose a simple image coding algorithm using VQ (vector quantization) and clustering interpolation, which avoids an explicit interpolation operation by employing different but related codebooks at the encoder and the decoder of a vector quantizer. Computer simulations indicated that images reconstructed with clustering interpolation had smaller mean square errors than those obtained with linear or contour-based interpolation. In order to reduce coding artifacts and to improve reconstruction quality at low bit rates, an adaptive corrector stage with block classification is also included. Computer simulations demonstrate the effectiveness of the algorithm at about 0.3 b/pixel.

9 citations


Proceedings Article
18 Jul 1989
TL;DR: A new method for expanding digital images is explored, which involves the identification of crack edges in a digital image, which are then merged into connected sequences called strokes that are used to control the interpolation of grey levels to fill in the gaps in the expanded image.
Abstract: A new method for expanding digital images is explored. It involves the identification of crack edges in a digital image, which are then merged into connected sequences called strokes. A heuristic method is applied to compute these strokes and to map their expansion onto a larger image. Finally, the modified strokes are used to control the interpolation of grey levels to fill in the gaps in the expanded image.

8 citations


Proceedings ArticleDOI
23 May 1989
TL;DR: Two implementation techniques for building a high-performance image-resampler VLSI chip are considered, including a modified two-pass resampling scheme that can provide a throughput of one pixel in a clock period smaller than that for an adder.
Abstract: The authors consider two implementation techniques for building a high-performance image-resampler VLSI chip. First, a two-level pipelined systolic array is designed for image resampling to give high parallelism in computation and high feasibility for VLSI implementation. Second, a modified two-pass resampling scheme is used to decrease the amount of required storage and increase the concurrency between two resampling passes. With the two techniques, the system can provide a throughput of one pixel in a clock period smaller than that for an adder.

6 citations


Journal ArticleDOI
TL;DR: The local bandwidth estimation is derived from the Bernstein inequality and is used to achieve an adaptive image resampling and the results are compared with those of more usual coding processes.
Abstract: We propose a spatial frequency bandwidth estimation applied to perform an adaptive picture coding. The local bandwidth estimation is derived from the Bernstein inequality and is used to achieve an adaptive image resampling. The picture is then reconstructed through interpolation and the results compared with those of more usual coding processes.
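The Bernstein inequality states that a signal bandlimited to [-B, B] rad/s satisfies max|f'| <= B * max|f|, so the ratio of the two maxima gives a usable bandwidth estimate. A global sketch of this principle (the paper applies it locally, per image region, to drive the adaptive resampling):

```python
import numpy as np

def bernstein_bandwidth(signal, dt):
    """Bandwidth estimate via the Bernstein inequality: the ratio
    max|f'| / max|f| lower-bounds the bandwidth B of the signal."""
    deriv = np.gradient(signal, dt)  # finite-difference derivative
    return np.max(np.abs(deriv)) / np.max(np.abs(signal))
```

For a pure sinusoid sin(w*t) the estimate recovers w; a low local estimate licenses a coarser local sampling rate, which is the basis of the adaptive resampling described above.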

5 citations


Proceedings ArticleDOI
23 May 1989
TL;DR: A method of improving the image quality of video hard copies by performing bivariate quadratic spline interpolation on a digital image and using the impulse response of the interpolation to design a fast digital filter for its implementation.
Abstract: A method of improving the image quality of video hard copies is proposed. This method performs bivariate quadratic spline interpolation on a digital image. The impulse response of the interpolation is derived and used to design a fast digital filter for its implementation. Simulation for an input image with a resolution of 200 TV lines shows that the method produces an output that is psychovisually equivalent to one with a resolution of 300 TV lines. Thus the perceived image quality can be improved to about 1.5 times that of the input image. The method is also effective for image-processing tasks such as enlargement in medical image diagnosis.

3 citations


Proceedings ArticleDOI
11 Jun 1989
TL;DR: A novel image-coding scheme, adaptive interleaved vector quantization (AIVQ), has been devised for still-image transmission which provides an efficient coding scheme which delivers constant image quality with variable transmission rate, as needed in a packet-switched network.
Abstract: A novel image-coding scheme, adaptive interleaved vector quantization (AIVQ), has been devised for still-image transmission. Interleaved vector quantization begins by decomposing the image into several interleaved subimages. These subimages are then coded separately using vector quantization. Since strong correlation exists between them, adaptation can be implemented by coding only one or two subimages in the low-detail areas. Since the remaining packets contain only noncontiguous pixels, their values can be reconstructed well with linear interpolation. AIVQ is a variable-bit-rate coding scheme which allocates bits according to the local detail level in the various regions of the image. In other words, this algorithm assigns more bits to the high-detail than to the low-detail areas of the image. Therefore, AIVQ provides an efficient coding scheme which delivers constant image quality with variable transmission rate, as needed in a packet-switched network. Experimental results demonstrate that excellent reconstructed images are obtained at bit rates in the range of 0.3-0.5 b/pixel.
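The interleaved decomposition step can be sketched directly with array slicing; factor = 2 gives the four subimages of a 2x2 interleave. The VQ coding of the subimages is omitted, and the function names are illustrative.

```python
import numpy as np

def interleave_decompose(img, factor=2):
    """Split an image into factor*factor interleaved subimages; each is
    a coarsely subsampled copy of the whole scene."""
    return [img[i::factor, j::factor]
            for i in range(factor) for j in range(factor)]

def reassemble(subs, factor=2):
    """Inverse of the decomposition: re-interleave the subimages."""
    h, w = subs[0].shape
    out = np.empty((h * factor, w * factor), dtype=subs[0].dtype)
    for idx, s in enumerate(subs):
        i, j = divmod(idx, factor)
        out[i::factor, j::factor] = s
    return out
```

Because every subimage covers the full scene, the correlation between them is strong, which is what makes dropping subimages in low-detail areas and filling them back in by interpolation viable.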

Patent
25 Jul 1989
TL;DR: In this paper, the authors proposed to obtain a smoothly enlarged image where sharpness is preserved by obtaining interpolation data based on an addition rate corresponding to the edge quantity of an original image.
Abstract: PURPOSE: To obtain a smoothly enlarged image in which sharpness is preserved, by computing interpolation data with an addition rate that corresponds to the edge quantity of the original image. CONSTITUTION: An output signal from a memory part 100 is input to a density difference detection part 101 and an auxiliary scan direction interpolation processing part 102; the density difference between corresponding picture elements of the two lines of data input to the density difference detection part 101 is detected. The output image signal of the interpolation processing part 102 is then input to a density detection part 103 and a main scan direction interpolation processing part 104, and the density difference with the picture element data one picture element earlier in the same line is detected in the density detection part 103. Interpolation data is then obtained with the addition rate corresponding to the edge quantity of the original image, yielding a smoothly enlarged image in which sharpness is preserved.
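A minimal 1-D sketch of the edge-adaptive blending idea: inserted samples blend a smooth average with a sharpness-preserving neighbour value, weighted by the local density difference. The weight formula and the constant k are illustrative assumptions, not the patent's circuit.

```python
import numpy as np

def enlarge_line(line, k=4.0):
    """Double a scan line: kept samples pass through; each inserted
    sample blends a smooth average with the sharpness-preserving left
    neighbour, weighted by the local density difference."""
    line = np.asarray(line, dtype=float)
    out = np.empty(2 * len(line) - 1)
    out[0::2] = line
    diff = np.abs(np.diff(line))                          # edge quantity
    rate = np.clip(k * diff / (diff.max() + 1e-9), 0, 1)  # addition rate
    smooth = 0.5 * (line[:-1] + line[1:])                 # smooth interpolation
    sharp = line[:-1]                                     # edge-preserving value
    out[1::2] = (1 - rate) * smooth + rate * sharp
    return out
```

In flat regions the addition rate is near zero and the enlargement is smooth; across a strong edge the rate saturates and the edge is carried over without blurring.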

Proceedings Article
18 Jul 1989
TL;DR: In this paper, a new method for digital image interpolation is discussed, which takes account of the spatial characteristic of the image sensor and can be used to generate Laplacian-of-Gaussian and directional derivatives of Gaussian images.
Abstract: A new method for digital image interpolation is discussed. This interpolation method takes account of the spatial characteristic of the image sensor. The general application of the algorithm has been described by Oakley and Cunningham (1989). Here, it is shown how the method can be used to generate Laplacian-of-Gaussian and directional derivatives of Gaussian images.

Proceedings ArticleDOI
09 Nov 1989
TL;DR: In this paper, the chirp-Z transform is used to generate transformed projection data in a concentric square grid to eliminate part of the one-dimensional interpolation currently needed for the processing.
Abstract: The use of digital filtering is explored as a means of minimizing artifacts in a general class of image reconstruction approaches commonly referred to as direct Fourier reconstruction methods. In the polar-to-Cartesian conversion, the application of windowing operations to the two-dimensional interpolation-type methods is seen to reduce artifacts according to a number of error measures. For the one-dimensional case, the use of a high-resolution spline interpolation gives the lowest error measures. The use of the chirp-Z transform is proposed as a way to generate transformed projection data in a concentric square grid to eliminate part of the one-dimensional interpolation currently needed for the processing.