
Showing papers on "Image scaling published in 1993"


Journal ArticleDOI
TL;DR: Two methods for shape-based interpolation that offer an improvement to linear interpolation are presented and tests with 3-D images of the coronary arterial tree demonstrate the efficacy of the methods.
Abstract: Many three-dimensional (3-D) medical images have lower resolution in the z direction than in the x or y directions. Before extracting and displaying objects in such images, an interpolated 3-D gray-scale image is usually generated via a technique such as linear interpolation to fill in the missing slices. Unfortunately, when objects are extracted and displayed from the interpolated image, they often exhibit a blocky and generally unsatisfactory appearance, a problem that is particularly acute for thin treelike structures such as the coronary arteries. Two methods for shape-based interpolation that offer an improvement to linear interpolation are presented. In shape-based interpolation, the object of interest is first segmented (extracted) from the initial 3-D image to produce a low-z-resolution binary-valued image, and the segmented image is interpolated to produce a high-resolution binary-valued 3-D image. The first method incorporates geometrical constraints and takes as input a segmented version of the original 3-D image. The second method builds on the first in that it also uses the original gray-scale image as a second input. Tests with 3-D images of the coronary arterial tree demonstrate the efficacy of the methods.
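The shape-based idea can be illustrated with the classic signed-distance-transform formulation: convert each binary slice to a signed distance map, interpolate the maps, and re-threshold. This is a minimal sketch of that general approach, not a reproduction of the paper's two specific methods.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance(mask):
    """Signed distance map: positive inside the object, negative outside."""
    inside = distance_transform_edt(mask)
    outside = distance_transform_edt(~mask)
    return inside - outside

def interpolate_slice(mask_a, mask_b, t):
    """Interpolate a binary slice at fraction t between slices a and b."""
    d = (1 - t) * signed_distance(mask_a) + t * signed_distance(mask_b)
    return d > 0  # re-threshold the blended distance map

a = np.zeros((32, 32), bool); a[8:24, 8:24] = True    # large square
b = np.zeros((32, 32), bool); b[12:20, 12:20] = True  # small square
mid = interpolate_slice(a, b, 0.5)  # intermediate-sized region
```

Unlike gray-scale linear interpolation, the intermediate slice is a shape of intermediate size rather than a blurry blend of two shapes.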

91 citations


Proceedings ArticleDOI
Seong-Won Lee1, Joonki Paik1
27 Apr 1993
TL;DR: The proposed adaptive version of a B-spline interpolation algorithm exhibits significant improvements in image quality compared with the conventional B-spline algorithm, especially at high magnification ratios, such as four times or more.
Abstract: An adaptive version of a B-spline interpolation algorithm is proposed. Adaptivity is used in two different phases: (1) adaptive zero order interpolation is realized by considering directional edge information, and (2) adaptive length of the moving average filter in four directions is obtained by computing the local image statistics. The proposed algorithm exhibits significant improvements in image quality compared with the conventional B-spline algorithm, especially at high magnification ratios, such as four times or more. Another advantage of the proposed algorithm is its simplicity in both computation and implementation.

75 citations


Patent
David L. Sprague1
30 Jun 1993
TL;DR: In this article, a method and pixel interpolation system for non-linear interpolation of images having a plurality of input pixels and pixel positions is presented, where a sequence of interpolation weights is applied to the one-dimensional interpolator.
Abstract: A method and pixel interpolation system for non-linear interpolation of images having a plurality of input pixels and pixel positions. According to a preferred embodiment of the invention, a plurality of pairs of input pixels and a sequence of corresponding interpolation weights are received with a one-dimensional interpolator. A plurality of sequential weighted sums of the pairs of input pixels are provided at a plurality of the pixel positions in accordance with the interpolation weights. The sequence of interpolation weights is constructed so that the differences between successive pairs of weights vary. The sequence of interpolation weights is applied to the one-dimensional interpolator. The present invention may be utilized, for example, for performing the operations of shifting two-dimensional video images where the shifting operations are performed with non-uniform scaling in at least one of the dimensions, and may also be used when shifting an image with non-uniform scaling by a fractional pixel distance.
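A hypothetical sketch of such a one-dimensional interpolator: each output pixel is a weighted sum of one pair of input pixels, and the differences between successive weights vary, which is what produces the non-uniform scaling. The function and data names are illustrative, not taken from the patent.

```python
def interpolate_1d(pixels, weights, indices):
    """weights[k] blends the pair pixels[indices[k]] and pixels[indices[k] + 1]."""
    out = []
    for w, i in zip(weights, indices):
        out.append((1 - w) * pixels[i] + w * pixels[i + 1])
    return out

row = [10, 20, 30, 40]
# Non-uniform weight sequence: successive differences (0.3, 0.5) differ,
# so output sample positions are not evenly spaced over the input.
weights = [0.0, 0.3, 0.8]
indices = [0, 1, 2]
print(interpolate_1d(row, weights, indices))  # [10.0, 23.0, 38.0]
```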

74 citations


Patent
22 Feb 1993
TL;DR: In this article, a digital image processing apparatus for interpolating a digital input image into an interpolated output image, in one embodiment of the invention, comprises an input buffer (12) for accommodating pixel data of the input image and a coefficient buffer (14) for storing precalculated interpolation weighting coefficients prior to real time image interpolation.
Abstract: Digital image processing apparatus for interpolating a digital input image into an interpolated output image, in one embodiment of the invention, comprises an input buffer (12) for accommodating pixel data of the input image and a coefficient buffer (14) for storing precalculated interpolation weighting coefficients prior to real time image interpolation. The coefficient buffer (14) comprises a first memory segment (22) for containing a set of precalculated sharp interpolating weighting coefficients obtained by using a sharp interpolating algorithm, a second memory segment (24) for containing a set of precalculated soft weighting coefficients obtained by using a soft interpolating algorithm and a third memory segment (26) for containing a set of precalculated weighting coefficients representative of a predetermined characteristic of the input image, such as contrast or density. The processing apparatus further comprises interpolation logic (16) for calculating a sharp interpolated output image pixel, a soft interpolated output image pixel, and a value for the image characteristic, using the input pixel data and the precalculated weighting coefficients, and an algorithm implementation controller (18) for calculating a resultant output pixel by combining the sharp and soft interpolated pixel values as a function of the image characteristic.
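The core blending step can be sketched as follows: compute both a sharp and a soft interpolated value, then mix them as a function of a local image characteristic such as contrast. The coefficient buffers and the specific mixing function are elided; the threshold and linear ramp below are assumptions for illustration.

```python
def blend_pixel(sharp_val, soft_val, contrast, threshold=0.5):
    """High-contrast regions favor the sharp result, flat regions the soft one."""
    alpha = min(1.0, contrast / threshold)  # 0 = fully soft, 1 = fully sharp
    return alpha * sharp_val + (1 - alpha) * soft_val

print(blend_pixel(120.0, 100.0, contrast=0.9))  # high contrast -> sharp value
print(blend_pixel(120.0, 100.0, contrast=0.1))  # low contrast  -> mostly soft
```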

48 citations


Patent
02 Jul 1993
TL;DR: In this paper, scattered data interpolation is employed to provide fluent animation of still images, and discontinuities such as cuts and holes are established within an image, to limit the range over which interpolation can be carried out for a given deformation of a feature in the image, and thereby refine the control exerted by animated features.
Abstract: An animation system employs scattered data interpolation to provide fluent animation of still images. Discontinuities, such as cuts and holes, can be established within an image, to limit the range over which interpolation is carried out for a given deformation of a feature in the image, and thereby refine the control exerted by animated features. The amount of computational time and effort required to interpolate the change from one frame of an animation to the next is reduced by concentrating computation of the deformed image on areas of change. Computational requirements are further reduced by taking advantage of frame-to-frame coherence.
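As a generic stand-in for the scattered-data step, here is a minimal inverse-distance-weighting interpolator; the animation system's cuts and holes would additionally mask which control points are allowed to influence a given query point, which is not reproduced here.

```python
def idw(points, values, qx, qy, power=2):
    """Interpolate a value at (qx, qy) from scattered (x, y) control points."""
    num = den = 0.0
    for (x, y), v in zip(points, values):
        d2 = (x - qx) ** 2 + (y - qy) ** 2
        if d2 == 0:
            return v  # query coincides with a control point
        w = 1.0 / d2 ** (power / 2)
        num += w * v
        den += w
    return num / den

pts = [(0, 0), (2, 0)]
vals = [0.0, 10.0]
print(idw(pts, vals, 1, 0))  # equidistant from both points: 5.0
```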

39 citations


Book ChapterDOI
01 Jan 1993
TL;DR: An alteration of the frame rate is required in a wide number of applications, such as for standard conversion (NTSC to PAL and vice versa), conversion from interlaced to progressive image format and slow play of sequences for more precise event understanding.
Abstract: In many video coding algorithms a subsampling of the input image sequence is considered. In fact, to obtain low output data rates with respect to the desired image quality, it is necessary to skip images at the transmitter, images that must be reconstructed at the receiver end. For example, this situation can occur when video services are provided over an ISDN network at rates under 384 Kbit/s, in the video-coding algorithm for interactive applications with digital storage media like the CD-ROM, proposed by the Moving Picture Experts Group (MPEG) [1], and for image sequence transmission over ATM networks [2]. Moreover, an alteration of the frame rate is required in a wide number of applications, such as for standard conversion (NTSC to PAL and vice versa), conversion from interlaced to progressive image format, and slow play of sequences for more precise event understanding.

30 citations


Patent
24 Jun 1993
TL;DR: The motion compensated temporal interpolator as discussed by the authors is capable of determining when a background scene is covered or uncovered by a foreground object by projecting motion vectors determined for the output field onto temporally adjacent input fields (i/p1, i/p2) and detecting the number of times each input pixel is used as a source for output pixel.
Abstract: The motion compensated temporal interpolator is capable of determining when a background scene is covered or uncovered by a foreground object by projecting motion vectors determined for the output field onto temporally adjacent input fields (i/p1, i/p2) and detecting the number of times each input pixel is used as a source for the output pixel - covering corresponds to multiple pixel use in the following field while uncovering corresponds to multiple pixel use in the preceding field. The motion vectors for output pixels in a covered area can then be corrected by forward projecting vectors from the preceding frame pair whereas output pixels in an uncovered area are corrected by backward projecting vectors from the following pair.
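The covered/uncovered test can be sketched by projecting each output pixel's motion vector onto an input field and counting how often each input pixel is hit: a count greater than one in the following field signals covering, and in the preceding field, uncovering. This toy version uses integer vectors and clamping; the real interpolator works on fields with sub-pixel vectors.

```python
from collections import Counter

def source_use_counts(output_pixels, vectors, width, height):
    """Map each output pixel to its input source pixel and count uses."""
    counts = Counter()
    for (x, y), (dx, dy) in zip(output_pixels, vectors):
        sx = min(max(x + dx, 0), width - 1)
        sy = min(max(y + dy, 0), height - 1)
        counts[(sx, sy)] += 1
    return counts

# Two output pixels landing on the same input pixel -> occlusion detected.
outputs = [(5, 5), (6, 5), (7, 5)]
vectors = [(0, 0), (-1, 0), (0, 0)]
counts = source_use_counts(outputs, vectors, 16, 16)
multiply_used = [p for p, c in counts.items() if c > 1]
print(multiply_used)  # [(5, 5)]
```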

25 citations


Journal ArticleDOI
TL;DR: The proposed truncated overlap-add with compensation (TOAC) technique provides a cost-effective solution to resampling problems and can be implemented on a single VLSI chip together with inverse block transform operations.
Abstract: A new efficient method for interpolation and decimation of images by arbitrary ratio using block transform coefficients such as the discrete cosine transform (DCT) is obtained. Due to multiple standards in image/video coding schemes, it is expected that decoders as well as display or recording devices need to convert the received signal from one format to another. It is essential that high quality resampling be done with the lowest hardware complexity since such processing normally requires a large amount of computations. In this paper, a block based non-integer ratio resampling algorithm is developed which can be implemented very efficiently without significant increase of system complexity. For the implementation of the proposed approach, an inverse block transform (for example, inverse DCT) and the resampling process are combined into one process so that no additional processing stage is required. The proposed approach, called the truncated overlap-add with compensation (TOAC) technique, provides a cost-effective solution to resampling problems. It can be implemented on a single VLSI chip together with inverse block transform operations.
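A simple example of transform-domain resampling in general (not the TOAC technique itself, which adds overlap handling and compensation): upsampling a block by zero-padding its DCT coefficients, so the interpolation happens before, and can be fused with, the inverse transform.

```python
import numpy as np
from scipy.fft import dct, idct

def dct_upsample(block, factor):
    """Upsample a 1D block by zero-padding its DCT-II coefficients."""
    n = len(block)
    m = n * factor
    coeffs = dct(block, norm='ortho')
    padded = np.zeros(m)
    padded[:n] = coeffs           # high-frequency bins stay zero
    # Rescale so energy is preserved under the orthonormal inverse.
    return idct(padded, norm='ortho') * np.sqrt(m / n)

x = np.array([4.0, 4.0, 4.0, 4.0])
y = dct_upsample(x, 2)
print(np.round(y, 6))  # a constant block stays constant: all 4.0
```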

12 citations


Journal ArticleDOI
TL;DR: This work proposes a coding scheme based entirely on the processing of overlapping, windowed data blocks, thus eliminating blocking effects, and defines the modified fast lapped transform (MFLT), a modified form of the LOT that entirely eliminates blocking artifacts in the reconstructed data.
Abstract: Many conventional video coding schemes, such as the CCITT H.261 recommendation, are based on the independent processing of nonoverlapping image blocks. An important disadvantage with this approach is that blocking artifacts may be visible in the decoded frames. We propose a coding scheme based entirely on the processing of overlapping, windowed data blocks, thus eliminating blocking effects. Motion estimation and, in part, compensation are performed in the frequency domain using a complex lapped transform (CLT), which can be viewed as a complex extension of the lapped orthogonal transform (LOT). The motion compensation algorithm is equivalent to overlapped compensation in the spatial domain, but also allows image interpolation for subpixel displacements and sophisticated loop filters to be conveniently applied in the frequency domain. For inter- and intraframe coding, we define the modified fast lapped transform (MFLT). This is a modified form of the LOT that entirely eliminates blocking artifacts in the reconstructed data. The transform is applied in a hierarchical structure, and performs better than the discrete cosine transform (DCT) for both coding modes. The proposed coder is compared with the H.261 scheme and is found to have significantly improved performance.

10 citations


Journal ArticleDOI
01 Apr 1993
TL;DR: It is shown that near linear speedup is achieved for such iterative image processing algorithms when the processing array is relatively small and the performance is evaluated in terms of the absolute processing time.
Abstract: Many low-level image processing algorithms which are posed as variational problems can be numerically solved using local and iterative relaxation algorithms. Because of the structure of these algorithms, processing time will decrease nearly linearly with the addition of processing nodes working in parallel on the problem. In this article, we discuss the implementation of a particular application from this class of algorithms on the 8×8 processing array of the AT&T Pixel system. In particular, a case study for an image interpolation algorithm is presented. The performance of the implementation is evaluated in terms of the absolute processing time. We show that near linear speedup is achieved for such iterative image processing algorithms when the processing array is relatively small.

9 citations


Proceedings ArticleDOI
22 Oct 1993
TL;DR: A novel method for image interpolation which adapts to the local characteristics of the image in order to facilitate perfectly smooth edges and yields more pleasing images than comparable algorithms.
Abstract: We propose a novel method for image interpolation which adapts to the local characteristics of the image in order to facilitate perfectly smooth edges. Features are classified into three categories (constant, oriented, and irregular). For each class we use a different zooming method that interpolates this feature in a visually optimized manner. Furthermore, we employ a nonlinear image enhancement which extracts perceptually important details from the original image and uses these in order to improve the visual impression of the zoomed images. Our results compare favorably to standard lowpass interpolation algorithms like bilinear, diamond-filter, or B-spline interpolation. Edges and details are much sharper and aliasing effects are eliminated. In the frequency domain we can clearly see that our adaptive algorithm not only suppresses the undesired spectral components that are folded down in the upsampling process. It is also capable of replacing them with new estimates, which accounts for the increased image sharpness. One application of this interpolation method is spatial interlaced-to-progressive conversion. Here again, it yields more pleasing images than comparable algorithms.

Book ChapterDOI
01 Jan 1993
TL;DR: A fast algorithm for scaling digital images by decomposing the overall scale transformation into a cascade of smaller scale operations that accelerates convolution and greatly extends the range of filters that may be feasibly applied for image scaling.
Abstract: This paper describes a fast algorithm for scaling digital images. Large performance gains are realized by reducing the number of convolution operations, and optimizing the evaluation of those that remain. We achieve this by decomposing the overall scale transformation into a cascade of smaller scale operations. As an image is progressively scaled towards the desired resolution, a multi-stage filter with kernels of varying size is applied. We show that this results in a significant reduction in the number of convolution operations. Furthermore, by constraining the manner in which the transformation is decomposed, we are able to derive optimal kernels and implement efficient convolvers. The convolvers are optimized in the sense that they require no multiplication; only lookup table and addition operations are necessary. This accelerates convolution and greatly extends the range of filters that may be feasibly applied for image scaling. The algorithm readily lends itself to efficient software and hardware implementation.
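The cascade idea can be sketched in one dimension: rather than one large-kernel reduction, the scale change is decomposed into repeated small-kernel stages, each multiplication-free (add plus shift), in the spirit of the paper's optimized convolvers. The power-of-two restriction below is a simplification for illustration.

```python
def halve(signal):
    """One cascade stage: 2-tap box filter + decimation, integer-only (add + shift)."""
    return [(signal[i] + signal[i + 1]) >> 1 for i in range(0, len(signal) - 1, 2)]

def scale_down(signal, factor):
    """Decompose scaling by a power of two into a cascade of halvings."""
    while factor > 1:
        signal = halve(signal)
        factor //= 2
    return signal

data = [0, 8, 16, 24, 32, 40, 48, 56]
print(scale_down(data, 4))  # two cascaded stages: [12, 44]
```

Each stage touches far fewer samples than a single 4-tap reduction applied at full resolution, which is where the performance gain comes from.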

Proceedings ArticleDOI
22 Oct 1993
TL;DR: This paper introduces a representation scheme for images and video sequences using nonuniform samples embedded in a mesh structure that retains the salient merit of the original model as a feature tracker based on local and collective information, while facilitating more accurate image interpolation and prediction.
Abstract: This paper introduces a representation scheme for images and video sequences using nonuniform samples embedded in a mesh structure. It describes a video sequence by the nodal positions and colors in a starting frame, followed by the nodal displacements in the following frames. The nodal points are more densely distributed in regions containing interesting features such as edges and corners, and are dynamically updated to follow the same features in successive frames. They are determined automatically by maximizing feature (e.g., gradient) magnitudes at nodal points, while minimizing interpolation errors within individual elements, and matching errors between corresponding elements. In order to avoid the mesh elements becoming overly deformed, a penalty term is also incorporated which measures the irregularity of the mesh structure. The notions of shape functions and master elements commonly used in the finite element method have been employed to simplify the numerical calculation of the energy functions and their gradients. The proposed representation is motivated by the active contour or snake model proposed by Kass, Witkin, and Terzopoulos. The current representation retains the salient merit of the original model as a feature tracker based on local and collective information, while facilitating more accurate image interpolation and prediction.

Proceedings ArticleDOI
14 Sep 1993
TL;DR: The authors conclude that appropriate resampling and enhancement of densely sampled freehand US image data provide high quality 3D US images which can be used in a medical diagnostic setting.
Abstract: The objective of the current research was to determine whether a freehand, or six degree-of-freedom (6 DOF), scanning method of ultrasonic image acquisition with real-time transducer localization would provide three-dimensionally reconstructed ultrasonic (US) images comparable to those reported using mechanical or electronic scanning systems. A standard ultrasound machine and a 6 DOF multi-channel tracking system were used to localize the position and orientation of the US transducer. All data were digitized and stored in real-time for image processing, reconstruction, and volume visualization. Image resampling algorithms were developed along with image preprocessing and postprocessing methods. From the results of this research, the authors conclude that appropriate resampling and enhancement of densely sampled freehand US image data provide high quality 3D US images which can be used in a medical diagnostic setting.

Proceedings ArticleDOI
19 May 1993
TL;DR: In this article, a predictive interpolation method which can predict the intensity of a center unknown pixel solely from the intensities of its adjacent known pixels is proposed to obtain a smooth and reasonable high resolution image from a low resolution image and consider the appropriateness of the application of vector quantization to prediction.
Abstract: The authors propose a predictive interpolation method which can predict the intensity of a center unknown pixel solely from the intensities of its adjacent known pixels. They obtain a smooth and reasonable high-resolution image from a low-resolution image and consider the appropriateness of applying vector quantization to prediction in this study.
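A minimal stand-in for the prediction step: estimate an unknown center pixel from its four known neighbors. The paper additionally applies vector quantization to the neighborhood to drive the prediction, which is omitted here.

```python
def predict_center(up, down, left, right):
    """Predict an unknown center pixel as the mean of its known neighbors."""
    return (up + down + left + right) // 4

print(predict_center(100, 108, 96, 104))  # 102
```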

Patent
30 Mar 1993
TL;DR: In this paper, a temporal interpolation of two pixels at the same position in space and temporally adjacent is first carried out with the aid of a median filter for each pixel to be determined.
Abstract: In image processing, there are various methods for converting from interlace to progressive, for example line doubling and motion-adaptive interpolation. In the first case, the hardware outlay is relatively low but the resultant image quality is not optimum for certain image contents. In the second case, the image quality is indeed higher, but so is the hardware outlay. According to the invention, a temporal interpolation of two pixels at the same position in space and temporally adjacent is first carried out with the aid of a median filter for each pixel to be determined. The result is corrected by further median filtering of the vertically adjacent pixels. Advantageously, no motion detection is required for this double median (DM) interpolation. In addition, the DM interpolation is noise-insensitive.
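A sketch of the double-median (DM) idea: a temporal median over co-sited pixels from the adjacent fields, followed by a corrective median over the vertically adjacent pixels. The exact filter inputs (in particular, using the vertical average as the third temporal input) are an assumption for illustration, not taken from the patent.

```python
def median3(a, b, c):
    """Median of three values."""
    return sorted((a, b, c))[1]

def dm_pixel(prev_field, next_field, above, below, x, y):
    """Interpolate the missing pixel at (x, y) of the current field."""
    # Stage 1: temporal median (third input assumed to be the vertical average).
    temporal = median3(prev_field[y][x], next_field[y][x], (above + below) // 2)
    # Stage 2: correct with a median over the vertically adjacent pixels.
    return median3(above, temporal, below)

prev_f = [[100]]
next_f = [[110]]
print(dm_pixel(prev_f, next_f, above=90, below=96, x=0, y=0))  # 96
```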

Proceedings ArticleDOI
27 Apr 1993
TL;DR: Simulations show that in order to obtain an effective semantic segmentation it is necessary to use a very accurate model of the camera lens system, and the estimation of the global motion parameters is shown to be very accurate.
Abstract: An algorithm for image interpolation that takes into account both the motion of the camera and the motion of the imaged objects is described. First the estimated global motion parameters are used to compensate for the camera motion. The missing images are then interpolated by using the local motion field. Finally, these images are reconstructed in their correct dimension, considering the global motion information. Experiments have been carried out to extend the concept of object-background segmentation, developed for fixed camera sequences, to the case of camera zoom or pan. Simulations show that in order to obtain an effective semantic segmentation it is necessary to use a very accurate model of the camera lens system. The estimation of the global motion parameters is shown to be very accurate. For the interpolated images, an average gain of 4-5 dB with respect to the case of standard motion-compensated interpolation has been obtained.


Patent
05 Mar 1993
TL;DR: In this article, the authors proposed a method to enable the input of a highly precise still picture without providing a special scanning mechanism and a very small displacement mechanism by performing an image interpolation according to the detection result of the motion of the still picture signal recorded in an input memory and an output memory.
Abstract: PURPOSE: To enable the input of a highly precise still picture, without providing a special scanning mechanism or a very small displacement mechanism, by performing an image interpolation according to the detected motion of the still-picture signals recorded in an input memory and an output memory. CONSTITUTION: An A/D conversion part 5 performs an A/D conversion of the analog image signal output from a CCD 2 and outputs it as a digital image signal. In an input image memory part 6, the digital image signal corresponding to one frame output from the conversion part 5 is expanded to N times the number of picture elements, interpolated by an image processing part 10, and recorded. In an output image memory part 7, a more highly precise still picture is recorded, for which picture-element interpolation is performed by the processing part 10 according to the detection result of a motion detection part 9, using the one-frame still picture recorded in the memory part 6 and the preceding frame's still picture already recorded in the memory part 7. A finder 8 displays the still picture recorded in the memory part 7, and the photographer looks at the image displayed on the finder 8 to confirm the image to be fetched.

Proceedings ArticleDOI
01 Jun 1993
TL;DR: The technique devised decomposes a standard cubic interpolation into two cheaper bilinear interpolations using the two windows surrounding any pixel in the interpolated image to compute a local contrast by measuring the decrease or increase of the luminance in a given neighbourhood.
Abstract: This paper describes the development of a transputer-based system for interpolating 2D images using a modified cubic interpolation technique. The technique devised decomposes a standard cubic interpolation into two cheaper bilinear interpolations using the two windows surrounding any pixel in the interpolated image. The two bilinear interpolation results are then used to compute a local contrast by measuring the decrease or increase of the luminance in a given neighbourhood. A local contrast enhancement is then used to compute and enhance the value of the interpolated pixel. Implementation on a network of transputers using 'image parallelism' has been carried out and a significant speedup factor obtained.
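The decomposition can be sketched as follows: two bilinear interpolations from the two 2×2 windows around the target location, whose difference acts as a local contrast that sharpens the final value. The exact window placement and enhancement weighting are assumptions for illustration.

```python
def bilinear(w, fx, fy):
    """Bilinear interpolation inside one 2x2 window w at fraction (fx, fy)."""
    top = (1 - fx) * w[0][0] + fx * w[0][1]
    bot = (1 - fx) * w[1][0] + fx * w[1][1]
    return (1 - fy) * top + fy * bot

def enhanced_pixel(win_a, win_b, fx, fy, gain=0.5):
    a = bilinear(win_a, fx, fy)
    b = bilinear(win_b, fx, fy)
    contrast = a - b                      # local luminance change
    return (a + b) / 2 + gain * contrast  # enhance along the change

inner = [[50, 60], [70, 80]]
outer = [[40, 50], [60, 70]]
print(enhanced_pixel(inner, outer, 0.5, 0.5))  # 65.0
```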

Journal ArticleDOI
TL;DR: In this article, the effective enlargement scale of digital satellite images for extracting detailed land cover information by image interpretation was examined, which can be defined mainly from the human eye resolution and the sensor IFOV.
Abstract: High resolution satellite images, such as Landsat TM or SPOT HRV, allow us to extract detailed land cover information. However, if we enlarge those images too much, each pixel appears like a tile and disturbs image interpretation. In this paper, the authors have examined the effective enlargement scale of digital satellite images for extracting detailed land cover information by image interpretation. The effective scale can be defined mainly from the human eye resolution and the sensor IFOV. The authors derived an equation which suggests an appropriate scale for a particular IFOV image. For example, if the IFOV of a satellite image is 10 m, we recommend a 1/50,000 scale for enlargement. If the image is enlarged beyond this scale, the image quality suddenly decreases. The authors also examined the effect of image interpolation for image enlargement. When one wants to enlarge a satellite image beyond the above scale, interpolation such as the bilinear or nearest neighbor methods is useful for recovering the image quality reduction due to the enlargement.
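The paper's worked example is consistent with dividing the eye's resolving size at viewing distance into the ground IFOV; the 0.2 mm eye-resolution figure below is an assumed value chosen to reproduce the stated 10 m → 1/50,000 example, not a number quoted from the paper.

```python
def max_enlargement_scale(ifov_m, eye_resolution_m=0.0002):
    """Return the scale denominator S, so 1/S is the recommended map scale."""
    return ifov_m / eye_resolution_m

print(round(max_enlargement_scale(10)))  # 10 m IFOV -> 50000, i.e. 1/50,000
```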

Proceedings ArticleDOI
14 Sep 1993
TL;DR: In this article, a pre-transform is introduced into the 2D interpolation scheme through a DFT to take advantage of the periodicity of the DFT, which is superior to the window technique.
Abstract: In many digital signal processing systems, it is required to change the sampling rate of a digital signal. The interpolation of an image provides an approach to first sample an image at a low rate for transmission or storage, and then increase the sampling rate later. In this paper, a pre-transform is introduced into the 2D interpolation scheme through a DFT. This pre-transform takes advantage of the periodicity of the DFT. Example images show this technique to be superior to the window technique, and the improvement is remarkable.
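For context, the baseline DFT-domain interpolation works by zero-padding the spectrum, shown here in 1D; the paper's contribution, the pre-transform that exploits DFT periodicity to reduce edge artifacts, is not reproduced in this sketch.

```python
import numpy as np

def fft_upsample(x, factor):
    """Band-limited interpolation of a real signal via zero-padded FFT."""
    n = len(x)
    X = np.fft.rfft(x)
    padded = np.zeros(n * factor // 2 + 1, dtype=complex)
    padded[:len(X)] = X                      # keep the original spectrum
    return np.fft.irfft(padded, n * factor) * factor  # rescale amplitudes

x = np.array([0.0, 1.0, 0.0, -1.0])  # one cycle of a sine, 4 samples
y = fft_upsample(x, 2)
print(np.round(y[1], 6))  # recovers sin(pi/4) at the new sample point
```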