
Showing papers on "Image scaling published in 1988"


Patent
02 Mar 1988
TL;DR: In this paper, a scan-line interface (40) and an alignment circuit (45) are used to assemble sets of the original pixel values into submatrices and apply them to a convolution engine (46), which computes new pixel values from them.
Abstract: A device for re-scaling an image to change the resolution with which discrete pixel values represent it includes a scan-line interface (40) and an alignment circuit (45) that assemble sets of the original pixel values into submatrices and apply them to a convolution engine (46), which computes new pixel values from them. X and Y scaling engines (58 and 60) control the re-scaling without having to generate original-pixel addresses at the rate at which new pixels are generated. The X scaling engine (58) simply indicates whether an element of the currently supplied input data should be used to generate an output intensity value and, if so, whether it should also be retained for generation of the subsequent intensity value. Similarly, the Y scaling engine (60) indicates whether the next scan line to be received should be used for generation of a scan line of output intensity values and, if so, whether it should be retained for generation of a subsequent scan line of output intensity values. In generating the new pixel values, the convolution engine (46) converts from the binary, black-and-white levels of the original pixels to gray-scale values for the new pixels to reduce the jaggedness that scale changes can cause.
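
The X-engine decision ("use this input pixel now, and possibly retain it for the next output") can be driven by a simple error accumulator rather than per-output address arithmetic. The sketch below is a hypothetical 1-D nearest-neighbour analogue of that idea, not the patented circuit:

```python
def rescale_line(line, n_out):
    """Resample a 1-D pixel line to n_out samples using a DDA-style
    accumulator: the inner 'while' is the use/advance decision, so no
    per-output multiply or divide of pixel addresses is needed."""
    n_in = len(line)
    out, src, acc = [], 0, 0
    for _ in range(n_out):
        out.append(line[min(src, n_in - 1)])
        acc += n_in
        while acc >= n_out:   # decide whether to advance to the next input pixel
            acc -= n_out
            src += 1
    return out
```

The same accumulator works unchanged for enlargement (a source pixel is "retained" across several outputs) and reduction (several source pixels are skipped per output).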

46 citations


Patent
Jorge Gonzalez-Lopez1
31 Oct 1988
TL;DR: In this paper, a bi-linear interpolation algorithm implemented in the form of cascaded one-dimensional interpolation circuits provides real-time, continuous zoom capability to an image display system.
Abstract: An image interpolator implements an interpolation function providing real-time, continuous zoom capability to an image display system. Output image pixels are obtained by interpolating the values of the color or intensity of the 2×2 matrix of pixels surrounding the point on the input image. The preferred embodiment employs a bi-linear interpolation algorithm implemented in the form of cascaded one-dimensional interpolation circuits. Magnification control is established so that a unit increment of the zoom controller, such as a cursor on a tablet, results in a constant increase in the degree of magnification. The coefficients required for the interpolation are generated in real time, avoiding the need for time-consuming table look-ups.
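
The cascaded structure can be seen in a few lines: two 1-D interpolations along the rows of the 2×2 neighbourhood, followed by one 1-D interpolation between the results. A minimal sketch (function name and argument layout are assumptions, not taken from the patent):

```python
def bilinear(p, x, y):
    """Bilinear interpolation of a 2x2 pixel matrix p at fractional
    offsets (x, y) in [0, 1), built from cascaded 1-D linear interpolations."""
    top = p[0][0] + x * (p[0][1] - p[0][0])  # 1-D interpolation along top row
    bot = p[1][0] + x * (p[1][1] - p[1][0])  # 1-D interpolation along bottom row
    return top + y * (bot - top)             # 1-D interpolation between rows
```

The cascade matters for hardware: each stage is the same small multiply-add circuit, so the 2-D operation needs no 2-D coefficient table.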

38 citations


Proceedings ArticleDOI
28 Nov 1988
TL;DR: A directional technique that follows edge contours to interpolate pixels along and near contours more accurately, yielding fairly good reconstructed images at about 0.27 bit/pixel with a relatively simple vector coder structure.
Abstract: In order to obtain perceptually enhanced interpolation, the authors propose a directional technique that follows edge contours to interpolate pixels along and near contours more accurately. This technique, referred to as variable rate contour based interpolative vector quantization (VQ), uses the directional information inherent in local gray level transitions and consists of an orientation estimator followed by directional interpolators. By combining a powerful VQ technique with decimation and directional interpolation, a very efficient image representation is obtained. An adaptive corrector stage using variable-rate VQ is added as a final stage of coding to improve the quality of the reconstructed image. Some relatively large amplitude errors tend to persist along image edges, and the adaptive scheme helps to correct the remaining errors in those regions selectively using an edge detector. Contour-based interpolative VQ yields fairly good reconstructed images at about 0.27 bit/pixel with a relatively simple vector coder structure.
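
The core idea, estimating the local edge orientation and interpolating along (rather than across) the gray-level transition, can be illustrated with a deliberately simplified diagonal chooser; the paper's orientation estimator and variable-rate VQ stages are far more elaborate than this:

```python
def directional_interp(nw, ne, sw, se):
    """Fill a missing pixel from its four diagonal neighbours by averaging
    along the diagonal with the smaller gray-level transition, i.e.
    interpolating along the edge contour instead of across it."""
    if abs(nw - se) <= abs(ne - sw):  # '\' diagonal is smoother
        return (nw + se) / 2
    return (ne + sw) / 2              # '/' diagonal is smoother
```

On a diagonal edge this keeps the contour sharp, where a plain four-neighbour average would blur it to a mid-gray.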

16 citations


Proceedings ArticleDOI
20 Mar 1988
TL;DR: A generalized approach based on parallel row and column operations on image data is presented and analysed and Ramifications of the decomposition, particularly with respect to interpolation, along with simulation results are presented.
Abstract: The decomposition and consequent implementation of an image geometric transformation algorithm are considered. In particular, a generalized approach based on parallel row and column operations on image data is presented and analysed. Ramifications of the decomposition, particularly with respect to interpolation, along with simulation results are presented. Further research directions are indicated.
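
A row/column decomposition realises a 2-D resampling as two independent 1-D passes, which parallelise naturally across rows and then across columns. A sketch for the separable scaling case (the paper treats more general geometric transformations):

```python
def resample_1d(v, n_out):
    """Linear 1-D resampling of sequence v to n_out samples."""
    n_in = len(v)
    out = []
    for i in range(n_out):
        x = i * (n_in - 1) / (n_out - 1) if n_out > 1 else 0.0
        j = min(int(x), n_in - 2) if n_in > 1 else 0
        t = x - j
        out.append(v[j] * (1 - t) + v[min(j + 1, n_in - 1)] * t)
    return out

def scale_image(img, h_out, w_out):
    """Separable scaling: a row pass, then a column pass on the result.
    Each pass's loop body is independent, hence parallelisable."""
    rows = [resample_1d(r, w_out) for r in img]               # row operations
    cols = [resample_1d(list(c), h_out) for c in zip(*rows)]  # column operations
    return [list(r) for r in zip(*cols)]
```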

4 citations


Book ChapterDOI
24 Mar 1988
TL;DR: In this paper, a look-ahead algorithm is proposed that exploits coherence over time to avoid repeatedly tracing the same ray for several frames: if a ray remains constant over l frames, the algorithm needs no more than O(log l) queries into the scene, compared to l queries for the frame-by-frame approach.
Abstract: Raytracing is a powerful but relatively time-consuming technique for realistic image synthesis in computer graphics. Two algorithms are presented that accelerate raytracing of animations by considering coherence over time. The look-ahead algorithm avoids repeatedly tracing the same ray for several frames. More precisely, if a ray remains constant over l frames, the look-ahead algorithm does not need more than O(log l) queries into the scene, compared to l queries for the frame-by-frame approach. This type of coherence particularly occurs for a fixed camera and scenes only partly changing over time. The preprocessing time is increased by a factor of 2 compared to the frame-by-frame approach, while space requirements grow by a factor of log f, where f is the number of frames to be calculated. The frame interpolation algorithm is based on image interpolation using knowledge about the scene. The central algorithmic task is the comparison of two transformed pixel grids, which is solved by a modified version of the plane-sweep algorithm for line segment intersection reporting. The frame interpolation algorithm is not restricted to fixed cameras, but may lose information compared with the frame-by-frame approach.
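
An O(log l) query count can be realised with a galloping search followed by a binary search over frames, under the assumption that the ray's intersection stays constant over one contiguous run of frames. A sketch (the `ray_hit` query function is a stand-in for the paper's scene data structure, not its actual interface):

```python
def constant_span(ray_hit, f0, n_frames):
    """Return the number of consecutive frames, starting at f0, over which
    ray_hit(frame) is unchanged, using O(log l) queries: gallop to bracket
    the change, then binary-search for it. Assumes the hit changes at most
    once in the probed range."""
    ref = ray_hit(f0)
    step = 1
    while f0 + step < n_frames and ray_hit(f0 + step) == ref:
        step *= 2                        # gallop: probe offsets 1, 2, 4, ...
    lo = step // 2                       # last offset known to be equal
    hi = min(step, n_frames - 1 - f0)    # first offset that may differ
    while lo < hi:                       # binary search for the boundary
        mid = (lo + hi + 1) // 2
        if ray_hit(f0 + mid) == ref:
            lo = mid
        else:
            hi = mid - 1
    return lo + 1
```

For a span of l constant frames this issues roughly 2 log2(l) queries instead of l, which is the saving the paper's bound describes.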

3 citations


Proceedings ArticleDOI
11 Apr 1988
TL;DR: This paper proposes image transformation algorithms using spline function interpolation for image restoration, geometrical transformation, and spatial distortion processing, and proposes an edge detection method based on the same functions.
Abstract: In many image processing applications, the image function values located between successive points need to be known with sufficient precision. Indeed, in applications such as robotics, biomedical imagery, and process control, images need to be transformed before being processed. Such transformations bring original images closer to reference images. In this paper we propose image transformation algorithms using spline function interpolation; their main application domains are image restoration, geometrical transformation, and spatial distortion processing. Comparisons are made among different spline functions to determine which of them is the best solution considering computational complexity and processing efficiency. We also propose an edge detection method using this kind of function.
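
As an illustration of spline-based resampling, the Catmull-Rom cubic below interpolates between the two middle samples of a four-sample neighbourhood; it is one common member of the spline family such comparisons cover, not necessarily the variant this paper selects:

```python
def catmull_rom(p0, p1, p2, p3, t):
    """Catmull-Rom cubic spline: interpolate between p1 and p2 at
    fraction t in [0, 1], with p0 and p3 shaping the curve so that
    the result passes through the samples with C1 continuity."""
    return 0.5 * (2 * p1
                  + (-p0 + p2) * t
                  + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t * t
                  + (-p0 + 3 * p1 - 3 * p2 + p3) * t ** 3)
```

The interpolant reproduces the sample values exactly at t = 0 and t = 1, which is what distinguishes it from smoothing B-splines in this kind of comparison.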

1 citation


Patent
24 Nov 1988
TL;DR: In this paper, a system for displaying three-dimensional surface structures according to computer graphics methods extracts a surface definition from a tomographic array of data using interpolation of the data for smooth, high resolution images.
Abstract: A system for displaying three-dimensional surface structures according to computer graphics methods extracts a surface definition from a tomographic array of data using interpolation of the data for smooth, high resolution images. Interpolation can be performed to a degree where artifact-free images are produced for all viewing orientations. Data-processing capacity and time requirements can be reduced with less interpolation while image quality is maintained for all viewing orientations by inspecting the viewing orientation and appropriately scaling the image.
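
Interpolating the tomographic array between slices and within slices amounts, in the simplest case, to trilinear interpolation of the volume data; a sketch (the indexing convention `vol[z][y][x]` is an assumption for illustration):

```python
def lerp(a, b, t):
    return a + t * (b - a)

def trilinear(vol, x, y, z):
    """Trilinear interpolation in a volume vol[z][y][x] at a point whose
    fractional coordinates lie strictly inside the grid: three cascaded
    linear interpolations, along x, then y, then z."""
    xi, yi, zi = int(x), int(y), int(z)
    tx, ty, tz = x - xi, y - yi, z - zi
    c = [[lerp(vol[zi + dz][yi + dy][xi], vol[zi + dz][yi + dy][xi + 1], tx)
          for dy in (0, 1)] for dz in (0, 1)]
    return lerp(lerp(c[0][0], c[0][1], ty), lerp(c[1][0], c[1][1], ty), tz)
```

The degree of interpolation (how finely such intermediate values are sampled) is the knob the abstract describes trading against processing capacity and time.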

1 citation


Journal ArticleDOI
01 Jul 1988
TL;DR: A hardware solution capable of processing many millions of pixels per second was developed for Reos, and the mesh offset algorithm, which combines image resampling with convolution, was selected for its performance and ease of mapping onto silicon.
Abstract: Reos is a document image processing system which stores images of paper documents as a bit map on optical discs. These images are displayed on a CRT at a lower resolution than the stored data. The conversion and display of the stored information must be performed quickly to achieve an acceptable system response time; since even highly optimised software could not process more than a few thousand pixels per second, a hardware solution capable of processing many millions of pixels per second was developed. The mesh offset algorithm, which combines image resampling with convolution, was selected for its performance and ease of mapping onto silicon. The hardware implementing this algorithm is also suitable for more general-purpose image processing tasks such as filtering to reduce noise, or contrast enhancement. A high level description of the system has been mapped onto silicon using a number of design options to provide the best balance between speed of design, performance and die area. The resolution converter in 2.5 μm CMOS double layer metal technology supports scan path test and signature analysis for production test purposes.

1 citation


Journal ArticleDOI
TL;DR: This paper presents a direct interpolation method (Fast Direct Interpolation) that attains sufficient precision with fixed-point arithmetic and is suitable for the high-speed processing demands of computer graphics.
Abstract: Interpolation by the widely used spline function method requires excessive computation time because floating point arithmetic is used whenever a polynomial coefficient is determined. This paper presents a direct interpolation method (Fast Direct Interpolation) which attains sufficient precision with fixed point arithmetic. The processing speed and accuracy of the FDI method are also investigated. Since the FDI method can easily be implemented in hardware and adapted to parallel processing, it is suitable for the high-speed processing demands of computer graphics. An optimum FDI interpolation parameter is determined by considering various practical applications. Finally, continuity is proved for results obtained from fast direct interpolation.
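
The flavour of fixed-point interpolation is easy to show: represent the fractional position as an integer scaled by 2^k, and interpolate with integer multiplies and shifts only. The snippet below is a generic illustration of that arithmetic (the FRAC_BITS constant and function are assumptions, not the FDI algorithm itself):

```python
FRAC_BITS = 8  # hypothetical Q8 fixed-point fraction width

def lerp_fixed(a, b, t_fixed):
    """Linear interpolation using only integer arithmetic: t_fixed is the
    fraction scaled by 2**FRAC_BITS, so one integer multiply and one shift
    replace the floating-point coefficient evaluation."""
    return a + ((b - a) * t_fixed >> FRAC_BITS)
```

Because every step is an integer multiply, add, or shift, the same datapath maps directly onto hardware and parallel processing, which is the suitability the abstract claims for FDI.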

Proceedings ArticleDOI
M. S. Krishnan1
19 Feb 1988
TL;DR: This paper presents a versatile chip set that can realize signal/image processing algorithms used in several important image processing applications, including template-processing, spatial filtering and image scaling, which is superior in versatility, programmability and modularity to several schemes proposed in the literature.
Abstract: This paper presents a versatile chip set that can realize signal/image processing algorithms used in several important image processing applications, including template-processing, spatial filtering and image scaling. This chip set architecture is superior in versatility, programmability and modularity to several schemes proposed in the literature. The first chip, called the Template Processor, can perform a variety of template functions on a pixel stream using a set of threshold matrices that can be modified or switched in real-time as a function of the image being processed. This chip can also be used to perform data scaling and image biasing. The second chip, called the Filter/Scaler chip, can perform two major functions. The first is a transversal filter function where the number of sample points is modularly extendable and the coefficients are programmable. The second major function performed by this chip is the interpolation function. Linear or cubic B-spline interpolation algorithms can be implemented by programming the coefficients appropriately. The essential features of these two basic building block processors and their significance in template-based computations, filtering, data-scaling and half-tone applications are discussed. Structured, testable implementations of these processors in VLSI technology and extensions to higher performance systems are presented.
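
The transversal filter function the Filter/Scaler chip performs is, in software terms, an FIR filter: each output sample is a weighted sum of the most recent inputs, one weight per tap. A plain-Python sketch of that behaviour (not the chip's architecture):

```python
def transversal_filter(samples, coeffs):
    """FIR (transversal) filtering: out[i] = sum over k of
    coeffs[k] * samples[i - k], treating samples before the start as zero.
    The number of taps (len(coeffs)) corresponds to the chip's modularly
    extendable sample points; the coefficient values are the programmable part."""
    out = []
    for i in range(len(samples)):
        acc = 0
        for k, c in enumerate(coeffs):
            if i - k >= 0:
                acc += c * samples[i - k]
        out.append(acc)
    return out
```

Programming the coefficients to a linear or cubic B-spline kernel turns the same structure into the chip's second major function, interpolation.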