
Showing papers on "Image scaling published in 1997"


Journal ArticleDOI
TL;DR: A family of quadratic functions is derived, and the interpolating member of this family has visual quality close to that of the Catmull-Rom cubic, yet requires only 60% of the computation time.
Abstract: Nearest-neighbor, linear, and various cubic interpolation functions are frequently used in image resampling. Quadratic functions have been disregarded largely because they have been thought to introduce phase distortions. This is shown not to be the case, and a family of quadratic functions is derived. The interpolating member of this family has visual quality close to that of the Catmull-Rom cubic, yet requires only 60% of the computation time.
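The interpolating member of the quadratic family is usually quoted in the piecewise form below. A minimal sketch under that assumption (the kernel coefficients are the commonly cited ones; the resampling loop is my own illustration, not code from the paper). Note the support is |s| <= 1.5, versus 2.0 for the Catmull-Rom cubic, which is where the roughly 60% computation figure comes from:

```python
import math

def quadratic_kernel(s):
    """Interpolating quadratic kernel: k(0)=1, k(+/-1)=0, support |s| <= 1.5."""
    s = abs(s)
    if s <= 0.5:
        return -2.0 * s * s + 1.0
    if s <= 1.5:
        return s * s - 2.5 * s + 1.5
    return 0.0

def resample(samples, x):
    """Evaluate the signal at fractional position x with the quadratic kernel."""
    total = 0.0
    lo = math.floor(x - 1.5)
    for i in range(lo, lo + 4):          # 4 taps always cover the support
        if 0 <= i < len(samples):
            total += samples[i] * quadratic_kernel(x - i)
    return total
```

At exact sample positions only the center tap is nonzero, so the kernel interpolates rather than smooths.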

227 citations


Proceedings ArticleDOI
26 Oct 1997
TL;DR: A set of basic assumptions to be satisfied by the interpolation algorithms which lead to a set of models in terms of possibly degenerate elliptic partial differential equations are proposed.
Abstract: We discuss possible algorithms for interpolating data given in a set of curves and/or points in the plane. We propose a set of basic assumptions to be satisfied by the interpolation algorithms which lead to a set of models in terms of possibly degenerate elliptic partial differential equations. The absolute minimal Lipschitz extension model (AMLE) is singled out and studied in more detail. We show experiments suggesting a possible application, the restoration of images with poor dynamic range.
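A standard discrete scheme for the infinity-Laplace equation behind the AMLE model replaces each unknown sample by the mean of the maximum and minimum of its neighbours, iterated to a fixed point. A minimal 1-D sketch of that scheme (my own illustration of the discrete PDE, not the authors' code):

```python
def amle_interpolate_1d(values, known, iters=2000):
    """Fill unknown samples by iterating u <- (max_nbr + min_nbr) / 2,
    a discrete fixed-point scheme for the infinity-Laplace equation."""
    u = list(values)
    n = len(u)
    for _ in range(iters):
        for i in range(n):
            if not known[i]:
                nbrs = []
                if i > 0:
                    nbrs.append(u[i - 1])
                if i < n - 1:
                    nbrs.append(u[i + 1])
                u[i] = (max(nbrs) + min(nbrs)) / 2.0
    return u
```

In 1-D with two known endpoints the fixed point is the linear ramp, which is exactly the absolutely minimal Lipschitz extension of the boundary data.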

106 citations


Journal ArticleDOI
01 Aug 1997
TL;DR: The authors present an adaptive resampling algorithm for zooming up images that is based on analysing the local structure of the image and applying a near optimal and least time-consuming resampling function that will preserve edge locations and their contrast.
Abstract: Applying an interpolation function indiscriminately to an image, to resample it, will generally result in aliasing, edge blurring and other artefacts. The authors present an adaptive resampling algorithm for zooming up images. The algorithm is based on analysing the local structure of the image and applying a near optimal and least time-consuming resampling function that will preserve edge locations and their contrast. This is done by segmenting the image dynamically into homogeneous areas, as it is scanned or received. Based on the location of the pixel to be computed (whether it is within a homogeneous area, is on its edge or is an isolated feature), interpolation, extrapolation or pixel replication is chosen. Algorithm performance, from both a quality and a computational complexity aspect, is compared with different methods and functions previously reported in the literature. The advantage of the algorithm is quite apparent at edges and for large zooming factors.
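The decision logic can be sketched in 1-D: interpolate inside homogeneous runs, replicate across edges so their contrast is preserved. The fixed threshold below is an illustrative stand-in for the paper's dynamic segmentation, and the scheme is a simplification, not the published algorithm:

```python
def adaptive_zoom_1d(x, threshold=32):
    """2x zoom: linear interpolation inside homogeneous runs,
    pixel replication across edges so edge contrast is preserved."""
    out = []
    for i in range(len(x) - 1):
        out.append(x[i])
        if abs(x[i + 1] - x[i]) > threshold:   # edge: replicate nearest pixel
            out.append(x[i])
        else:                                   # homogeneous: interpolate
            out.append((x[i] + x[i + 1]) / 2.0)
    out.append(x[-1])
    return out
```

A plain linear zoom would place a half-height value in the middle of every step edge; the replication branch avoids exactly that blurring.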

47 citations


Proceedings ArticleDOI
26 Oct 1997
TL;DR: A wavelet-based interpolation method that imposes no continuity constraints is introduced and produces noticeably sharper edges than traditional techniques and exhibits an average PSNR improvement of 2.5 dB over bilinear and bicubic techniques.
Abstract: Common image interpolation methods assume that the underlying signal is continuous and may require that it possess one or more continuous derivatives. These assumptions are not generally true of natural images, most of which have instantaneous luminance transitions at the boundaries between objects. Continuity requirements on the interpolating function produce interpolated images with oversmoothed edges. To avoid this effect, a wavelet-based interpolation method that imposes no continuity constraints is introduced. The algorithm estimates the regularity of edges by measuring the decay of wavelet transform coefficients across scales and attempts to preserve the underlying regularity by extrapolating a new subband to be used in image resynthesis. The algorithm produces noticeably sharper edges than traditional techniques and exhibits an average PSNR improvement of 2.5 dB over bilinear and bicubic techniques.

41 citations


Patent
13 Nov 1997
TL;DR: In this article, an adaptive interpolation circuit calculates correlation degrees in the vertical, horizontal and diagonal directions of an R-G image and interpolates along the direction having the greatest correlation degree, so that no resolution deterioration is caused in the direction which orthogonally intersects it.
Abstract: The present invention makes it possible to obtain a high-resolution image at reasonable cost by using an adaptive interpolation circuit which is supplied with image data R, G, and B that have been subjected to a white balance adjustment in a DSP; an R-G image is combined in an internal memory. The adaptive interpolation circuit calculates correlation degrees in a vertical direction, a horizontal direction, and diagonal directions. If interpolation is executed according to the R-G image in the direction having the greatest correlation degree, no LPF processing is executed, i.e., no resolution deterioration is caused in the direction which orthogonally intersects the aforementioned direction. That is, the adaptive interpolation circuit enhances resolution by executing interpolation according to the correlation of the image data around the portion to be interpolated.

39 citations


Journal ArticleDOI
TL;DR: Simulations show that the proposed motion estimation tool provides higher PSNR than the classical block matching algorithm and may achieve the optimal sharing between motion and error information encoding.
Abstract: This paper introduces a motion estimation tool based on triangular active mesh. This tool can be used to model the deformation of various kinds of objects, especially frames and arbitrarily shaped regions known as video object planes (VOPs) in the MPEG-4 context. In the latter case, a polygon approximation of the region is performed in order to define border nodes and to triangulate the whole considered domain. Object motion is represented by a piecewise affine transformation whose coefficients are estimated by means of motion estimation of triangle vertices. Within the context of very low bit-rate coding, this tool appears to be useful for image prediction, temporal interpolation and may achieve the optimal sharing between motion and error information encoding. Simulations show that the proposed motion estimation tool provides higher PSNR than the classical block matching algorithm.
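Each mesh triangle defines one affine map, and its six coefficients follow directly from the motion of the three vertices: two independent 3-unknown linear systems, one for x' and one for y'. A sketch using Cramer's rule (my own illustration of the standard construction, not the paper's code):

```python
def affine_from_triangle(src, dst):
    """Solve x' = a*x + b*y + c, y' = d*x + e*y + f from the three
    vertex correspondences of one mesh triangle (src -> dst)."""
    (x0, y0), (x1, y1), (x2, y2) = src
    det = (x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0)
    if det == 0:
        raise ValueError("degenerate triangle")

    def solve(v0, v1, v2):
        # Fit v = a*x + b*y + c through the three vertices (Cramer's rule).
        a = ((v1 - v0) * (y2 - y0) - (v2 - v0) * (y1 - y0)) / det
        b = ((x1 - x0) * (v2 - v0) - (x2 - x0) * (v1 - v0)) / det
        c = v0 - a * x0 - b * y0
        return a, b, c

    a, b, c = solve(dst[0][0], dst[1][0], dst[2][0])
    d, e, f = solve(dst[0][1], dst[1][1], dst[2][1])
    return a, b, c, d, e, f
```

Estimating the motion of the vertices and applying this map to every pixel inside the triangle is what makes the model piecewise affine.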

37 citations


Journal ArticleDOI
TL;DR: In this paper, high-resolution MR imaging is shown to improve magnetic resonance angiography (MRA) and the visibility of small structures, with the Fourier transform shift theorem used for fractional-voxel interpolation and a visibility expression used to determine the optimal resolution to see small vessels.
Abstract: The role of high-resolution imaging has generally been limited because of the associated loss of signal-to-noise ratio (SNR) as voxel size decreases and imaging time increases. Despite these truths, we show that high-resolution imaging methods can be used to perform better magnetic resonance angiography (MRA), enhance visibility of small structures, and allow better image interpolation. Specifically, we show that very small vessels can be seen with conventional MRA methods, and small lesions on the order of a few cubic millimeters can be seen with a single dose of gadolinium diethyltriaminepentaacetic acid, and structures such as the hippocampal formation are best depicted when a high-resolution three-dimensional (3D) imaging method is used. We also show that image interpolation for the 3D visualization of structures with complicated geometry is best accomplished with a fractional voxel evaluation using the Fourier transform shift theorem on high-resolution images. We demonstrate that the expression for visibility, CNR √p, can be used to establish the optimal resolution to see a given structure. CNR refers to the contrast-to-noise ratio and p is the number of voxels occupied by the object in the image. The optimal resolution is determined from theoretical curves of visibility as a function of voxel size relative to object size. We also demonstrate the enhancement of small vessel visibility on individual images and maximum-intensity projection images with voxel sizes as small as 0.29 mm using 1024 sampled points in the readout direction. Using 3D visibility arguments, it is predicted that under the right conditions, objects of interest much smaller than the voxel size can be seen on conventional MR images. © 1997 John Wiley & Sons, Inc. Int J Imaging Syst Technol, 8, 529–543, 1997
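The fractional-voxel evaluation rests on the Fourier shift theorem: multiplying the spectrum by a linear phase shifts the sampled signal by an arbitrary, possibly fractional, amount. A 1-D sketch with a naive DFT (the paper applies this in 3-D; this is an illustration of the theorem, not the authors' implementation):

```python
import cmath

def dft(x, inverse=False):
    """Naive O(N^2) discrete Fourier transform (illustration only)."""
    n = len(x)
    sign = 2j if inverse else -2j
    out = [sum(x[t] * cmath.exp(sign * cmath.pi * k * t / n) for t in range(n))
           for k in range(n)]
    return [v / n for v in out] if inverse else out

def fourier_shift(x, d):
    """Shift a sampled signal by d samples (d may be fractional)
    via the shift theorem: X(k) -> X(k) * exp(-2*pi*i*k*d/N)."""
    n = len(x)
    X = dft(x)
    # Use signed frequencies so a real input stays (nearly) real.
    def freq(k):
        return k if k <= n // 2 else k - n
    Xs = [X[k] * cmath.exp(-2j * cmath.pi * freq(k) * d / n) for k in range(n)]
    return [v.real for v in dft(Xs, inverse=True)]
```

For integer d this reduces exactly to a circular roll of the samples; the interesting case is fractional d, which produces the band-limited interpolant at sub-voxel positions.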

35 citations


Patent
14 Mar 1997
TL;DR: In this paper, a judgment is made as to whether an interpolation point in an original image is located at an image edge portion. The judgment uses a threshold value set so that, as the desired level of sharpness of the interpolation image obtained from interpolation processing becomes lower, the threshold value becomes larger.
Abstract: A judgment is made as to whether or not an interpolation point in an original image is located at an image edge portion. The judgment uses a threshold value set so that, as the desired level of sharpness of the interpolation image obtained from interpolation processing becomes lower, the threshold value becomes larger. When an interpolation point is judged to be located at an image edge portion, the interpolated image signal component corresponding to that point is calculated with interpolation processing capable of keeping the image edge portion sharp. When an interpolation point is judged not to be located at an image edge portion, the interpolated image signal component is calculated by combining interpolation processing that yields a comparatively sharp interpolation image with interpolation processing that yields a low-sharpness one.

26 citations


Patent
Reiko Ide1, Hiroshi Ota1
05 Jun 1997
TL;DR: In this article, an image editing apparatus is described in which the motion of the image is accelerated at the starting key frame and braked at the ending key frame, without adding manual key frames, based on a key frame shape decision.
Abstract: An image editing apparatus having an input for key frame information and a key frame storage means. A first interpolation control point calculation means receives information from the key frame storage and calculates control points for curve interpolation. A key frame shape decision means receives the information from the storage means, decides the shape of a key frame, and outputs the result. A second interpolation control point calculation means calculates control points by a different method from the first. A curve interpolation means receives the output of both calculation means, replaces the first control points with the second, and performs the curve interpolation using the replacement control points. A storage means receives the timing information and outputs an image to a display unit. Thus, the motion of the image is accelerated at the starting key frame and braked at the ending key frame without adding manual key frames.

25 citations


Proceedings ArticleDOI
26 Oct 1997
TL;DR: This paper proposes an algorithm for locating the incorrect pixels of an image, assuming only partial knowledge of its Fourier transform, and shows that the positions can be evaluated in O(n^2) or even O(n log n) flops by solving a set of n linear equations and computing an FFT.
Abstract: Most image interpolation or extrapolation algorithms assume that the locations of the unknown pixels are known. In this paper we attempt to remove this restriction. More precisely, we propose an algorithm for locating the incorrect pixels of an image, assuming only partial knowledge of its Fourier transform. Note that this is a nonlinear problem: the unknown quantities are the positions and values of the (say) n erroneous pixels. We show that the positions can be evaluated in O(n^2) or even O(n log n) flops by solving a set of n linear equations and computing an FFT. The determination of n is part of the algorithm, whose stability is also discussed. The values of the n incorrect pixels can then be estimated using any of the interpolation methods known.

24 citations


Journal ArticleDOI
TL;DR: A noniterative method for computing the six parameters of egomotion from this visual input, which is initially tested in a ray-traced environment to show proof of concept and to explore factors that influence its performance.
Abstract: The motion of an imaging device relative to the environment can, theoretically, be determined from the spatiotemporal intensity changes induced on the image plane of the device. We present a noniterative method for computing the six parameters of egomotion (three translatory and three rotational) from this visual input. The scheme is initially tested in a ray-traced environment to show proof of concept and to explore factors that influence its performance. We then demonstrate its performance on a multilobed camera, which is moved by arbitrary amounts in space. We also discuss and describe some practical implementations.

Patent
11 Sep 1997
TL;DR: In this article, an image scaling system using N-point forward and inverse one-dimensional scaled discrete cosine transforms (DCTs) is presented, which enlarges images during JPEG encoding and reduces images during JPEG decoding.
Abstract: An image scaling system that enlarges images during JPEG encoding (400) and reduces images during JPEG decoding (500). The image scaling system uses N-point forward and inverse one-dimensional scaled discrete cosine transforms (DCTs), where N is selected from among 1, 2, 3, 4, and 6. When encoding a source image (110) to an enlarged JPEG image (422), the system partitions the source image into N×N blocks. Each N×N block is transformed using the N-point scaled DCT (410). The system modifies quantization tables (418) to account for the scale of the transform and the increase in size of the image and quantizes (412) the blocks of N×N scaled cosine coefficients using the modified quantization tables. The resulting N×N blocks are enlarged (414) to 8×8 blocks by padding each block with coefficients having values of zero. When decoding a JPEG image (510) to a reduced output image (518), the system retrieves 8×8 blocks of quantized cosine coefficients from the JPEG image. The system reduces (512) each block to an N×N block of quantized cosine coefficients. The system modifies the quantization tables (520) retrieved from the JPEG image to account for the scale factor and the decrease in size of the image, then dequantizes (514) the N×N blocks to produce N×N blocks of scaled cosine coefficients. The system performs an N-point scaled inverse DCT (516) on each block of scaled cosine coefficients. The results of the inverse DCT form the reduced output image (518). The forward and inverse DCTs are performed using efficient processes that require relatively few calculations to achieve the desired result.
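The core DCT-domain trick can be sketched in 1-D: keep the N lowest-frequency coefficients of an orthonormal 8-point DCT, inverse-transform at length N, and rescale by sqrt(N/8). This is an illustrative reconstruction of the idea, not the patent's efficient fixed-point implementation:

```python
import math

def dct(x):
    """Orthonormal DCT-II."""
    n = len(x)
    out = []
    for k in range(n):
        s = sum(x[t] * math.cos(math.pi * (t + 0.5) * k / n) for t in range(n))
        scale = math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
        out.append(scale * s)
    return out

def idct(X):
    """Orthonormal DCT-III, the inverse of dct above."""
    n = len(X)
    out = []
    for t in range(n):
        s = X[0] / math.sqrt(n)
        s += sum(math.sqrt(2.0 / n) * X[k] * math.cos(math.pi * (t + 0.5) * k / n)
                 for k in range(1, n))
        out.append(s)
    return out

def dct_downscale(block8, n):
    """Reduce an 8-sample block to n samples in the DCT domain:
    keep the n lowest-frequency coefficients, inverse-transform at
    length n, and rescale by sqrt(n/8)."""
    X = dct(block8)
    return [v * math.sqrt(n / 8.0) for v in idct(X[:n])]
```

Doing the reduction before the inverse transform is what saves work: the decoder only ever runs an N-point inverse DCT instead of an 8-point one followed by spatial resampling.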

Book ChapterDOI
17 Sep 1997
TL;DR: The idea underlying this work is to estimate missing frequencies from the original low resolution image and to synthesize them using sub-pixel edge estimation and a polynomial interpolation step.
Abstract: The main limitation of current magnifying techniques is that they do not introduce any new information into the original image. This lack of information is responsible for the perceived degradation of the enlarged image. The idea underlying this work is to estimate missing frequencies from the original low resolution image and to synthesize them. Sub-pixel edge estimation and a polynomial interpolation step are the key techniques of the proposed method. Furthermore, a new extension to color images is presented. Results are encouraging even if they suggest that further effort should be spent in improving edge localization accuracy.

Patent
Dae-Sung Cho1, Jae-Seob Shin1
23 Dec 1997
TL;DR: In this article, the ambiguity between the interpolation value and the threshold value is removed by using the context (state value of the reference pixels around the interpolated pixel), thereby reducing the blocking and smoothing phenomena in the restored binary image.
Abstract: An improved interpolation method in which a threshold value used for determining the value of a pixel generated by interpolation is selected according to a context (the state value of adjacent pixels). In the interpolation method, the ambiguity between the interpolation value and the threshold value is removed by using the context (the state value of the reference pixels around the interpolated pixel), thereby reducing the blocking and smoothing phenomena in the restored binary image.
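The ambiguity-removal idea can be sketched as a per-context threshold table with half-integer entries, so the integer neighbor sum can never equal the threshold. The function names and table values below are hypothetical illustrations, not the patent's actual tables:

```python
def context_of(neighbors):
    """Pack the binary reference pixels into a context index."""
    ctx = 0
    for b in neighbors:
        ctx = (ctx << 1) | b
    return ctx

def interpolate_pixel(neighbors, thresholds):
    """Decide an interpolated binary pixel: compare the neighbor sum
    (an integer) against a half-integer threshold chosen per context,
    so the comparison is never ambiguous."""
    ctx = context_of(neighbors)
    return 1 if sum(neighbors) > thresholds[ctx] else 0
```

Tuning the threshold per context is what lets the method keep thin strokes (low threshold in stroke-like contexts) while suppressing isolated noise (high threshold in flat contexts).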

Journal ArticleDOI
TL;DR: An improved version of the Hierarchical INTerpolation (HINT) algorithm is proposed for multi-resolution reversible compression of still images by splitting the non-separable interpolation process into two cascaded directional steps interleaved with encoding.

Journal ArticleDOI
TL;DR: This paper presents a method based in mathematical morphology to enlarge images that preserves both slow variations and sharp edges and can be applied to other image processing problems, such as edge enhancement or motion vector estimation, where there is an image and confidence information about each pixel.

Proceedings ArticleDOI
22 Dec 1997
TL;DR: In this article, a unified treatment of despeckling and image segmentation is proposed within the framework of probabilistic inference, based on the statistical properties of the speckle noise and the SAR image formation equations.
Abstract: SAR systems, like any coherent imaging system, are subject to (I) speckling effects, which considerably reduce the useful detail within the acquired scenes and (II), strong geometric distortions. Furthermore, the resolution of SAR systems is comparable to the size of many of the objects of interest in the scene. Our paper proposes a unified treatment of these problems within the framework of probabilistic inference. Despeckling and segmentation are the main objectives only in the first case. In the second case, due to the strong geometric aberrations introduced by the SAR image formation system, the emphasis is on image resampling, with speckle reduction and image segmentation as collateral, but strongly related issues. In both cases, the model is built upon the statistical properties of the speckle noise and the SAR image formation equations.

Proceedings ArticleDOI
26 Oct 1997
TL;DR: This method is based on the observation that the sampling process of a CCD is not a point sampling process, but instead can be modelled as an averaging filter followed by sampling.
Abstract: We describe a method for scaling images which have been acquired with a charge coupled device (CCD) sensor. This method produces enlarged images which are sharper than those produced by bilinear interpolation, for comparable complexity. It is based on the observation that the sampling process of a CCD is not a point sampling process, but instead can be modelled as an averaging filter followed by sampling.
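The box-sampling model suggests a consistency condition: averaging each adjacent pair of output samples should reproduce the input sample they came from. A simple 1-D 2x scheme with that property (my own illustration of the modelling idea, not the paper's filter):

```python
def box_consistent_zoom2(x):
    """2x upscaling consistent with a box-sampling (averaging) CCD model:
    each adjacent output pair averages back to its input sample, while a
    small gradient term keeps edges sharper than plain replication."""
    n = len(x)
    out = []
    for i in range(n):
        left = x[i - 1] if i > 0 else x[i]
        right = x[i + 1] if i < n - 1 else x[i]
        delta = (right - left) / 8.0
        out.append(x[i] - delta)   # first half of the pixel area
        out.append(x[i] + delta)   # second half of the pixel area
    return out
```

Bilinear interpolation does not satisfy this constraint: re-averaging its output blurs the original samples, which is one way to see why it looks softer.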

Journal ArticleDOI
TL;DR: This work proposes a method based on view interpolation that displays a sequence of images from a subset of key views based on the principles of the ray-casting method to interpolate images for specular as well as diffuse environments without approximating the camera motion.
Abstract: Computing a sequence of images to display views of a 3D model has remained time consuming, especially when the images are computed frame by frame. We propose a method based on view interpolation that displays a sequence of images from a subset of key views. These key views are computed with a ray caster, and the interpolation method is based on the principles of the ray-casting method. Our method is an image-space interpolation, and some additional 3D information has to be stored while the key views are being computed. This allows us to interpolate images for specular as well as diffuse environments without approximating the camera motion.

Patent
15 Jan 1997
TL;DR: In this paper, interpolated picture element data for a target picture element are computed by a variable scaling section of the image processing apparatus according to an equation into which the densities of a plurality of adjacent picture elements in the vicinity of the target picture element are inputted, and the weight of each adjacent picture element's density in the equation is adjusted based upon the result detected by the region segmentation means.
Abstract: In an image processing method, an image is read by an image processing apparatus such as a digital copying machine and the read image is divided into blocks composed of a plurality of picture elements. Thereafter, interpolation is performed on a target picture element so that the image is scaled. In this image processing method, region segmentation data, which represent the likelihoods that the target picture element of the image belongs to characters, photographs or mesh dots, are detected in a region segmentation section of the image processing apparatus, and the interpolated picture element data of the target picture element are computed by a variable scaling section according to an equation into which the densities of a plurality of adjacent picture elements in the vicinity of the target picture element are inputted. At this time, the weight of the density of each adjacent picture element in the equation is adjusted based upon the result detected by the region segmentation means. As a result, even if characters, photographs and mesh dots coexist in an image read by a scanner, the image is scaled according to the characters, photographs and mesh dots, thereby preventing deterioration in image quality.

Proceedings ArticleDOI
01 Jan 1997
TL;DR: A polyphase implementation of the digital filter for the video scaler is presented and some factors determining the choice of the digital filter are discussed.
Abstract: Image scaling is important in the conversion between different formats such as NTSC, PAL, HDTV and between the CCIR 601 video resolution and the various sizes included in MPEG coding. The change of resolution proves especially beneficial for improving the coding efficiency. The scaling operation can be generalized as decimation by a factor of M followed by filtering and then interpolation by a factor of L, where M and L are integers. The choice of the digital filter depends on the values of L and M. For certain resolution changes, M and L can be rather large integers. Conventional implementation of the filter may result in huge memory and computational requirements. To reduce this factor for practical applications, a polyphase implementation of the digital filter for the video scaler is presented. Some factors determining the choice of the digital filter are discussed. Finally, examples are shown for different resolution scaling of an image.
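The polyphase identity lets each output sample be computed without ever touching the zero-stuffed samples: for output index n, only the filter branch p = nM mod L contributes. A sketch of both forms, assuming a generic FIR filter h (the equivalence, not any particular filter from the paper, is the point):

```python
def polyphase_resample(x, h, L, M, n_out):
    """Rate change by L/M evaluated in polyphase form: for each output n,
    use only filter taps h[p], h[p+L], h[p+2L], ... with p = (n*M) % L."""
    y = []
    for n in range(n_out):
        p = (n * M) % L              # polyphase branch for this output
        base = (n * M - p) // L      # newest input sample that contributes
        acc = 0.0
        k = 0
        while p + k * L < len(h):
            i = base - k
            if 0 <= i < len(x):
                acc += h[p + k * L] * x[i]
            k += 1
        y.append(acc)
    return y

def naive_resample(x, h, L, M, n_out):
    """Reference: literal upsample-by-L, FIR-filter, downsample-by-M."""
    up = [0.0] * (len(x) * L)
    for i, v in enumerate(x):
        up[i * L] = v
    return [sum(h[j] * up[n * M - j]
                for j in range(len(h)) if 0 <= n * M - j < len(up))
            for n in range(n_out)]
```

The polyphase form does len(h)/L multiplies per output instead of len(h), which is exactly the memory/computation saving the paper targets when L and M are large.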

Patent
30 Sep 1997
TL;DR: In this paper, the arithmetic mean of pixels neighboring the interpolation target pixel on both sides thereof is calculated and corrective data Δ3 is added to the data representing the arithmetic mean.
Abstract: A method of preventing the occurrence of false colors produced at the edge of an image of interest when the image is displayed upon being read by a scanner. When image data representing an interpolation target pixel at which image data is missing is produced, the arithmetic mean of the pixels (e.g. R1 and R5) neighboring the interpolation target pixel on both sides thereof is calculated [(R1+R5)/2] and corrective data Δ3 is added to the data representing the arithmetic mean. This makes it possible to prevent the occurrence of false colors at the edge of the image.

Proceedings ArticleDOI
10 Jan 1997
TL;DR: In this article, the authors present a combination of there steps to code a disparity map for 3D teleconferencing applications, which has a very low inherent redundancy, and an algorithm for image interpolation in absence of occlusion information is presented.
Abstract: In this paper we present a combination of three steps to code a disparity map for 3D teleconferencing applications. First we introduce a new disparity map format, the chain map, which has a very low inherent redundancy. Additional advantages of this map are: one single bidirectional map instead of the usual two unidirectional vector fields, explicit indication of occlusions, no upper or lower bound on disparity values, no disparity offset, easy generation by disparity estimators and easy interpretation by image interpolators. In a second step, we apply data reduction on the chain map. The reduction is a factor of two, thereby losing explicit information about the position of occlusion areas. An algorithm for image interpolation in the absence of occlusion information is presented. The third step involves entropy coding, both lossless and lossy. A scheme specially suited for the chain map has been developed. Although the codec is based on a simple prediction process without motion compensation, compression ratios of 20 to 80 can be achieved with typical teleconferencing images. These results are comparable to those obtained by complex schemes based on 2D/3D motion compensation using disparity vector fields. © 1997 SPIE, The International Society for Optical Engineering.

Journal ArticleDOI
TL;DR: This method provides more meaningful correspondence relationships by warping regions in images into similar shapes before resampling to account for significant shape differences and to improve the efficiency for calculating the image warp in the field morphing process.
Abstract: An interpolation method using contours of organs as the control parameters is proposed to recover the intensity information in the physical gaps of serial cross‐sectional images. In our method, contour models are used to generate the control lines required for the warping algorithm. Contour information derived from this contour model‐based segmentation process is processed and used as the control parameters to warp the corresponding regions in both input images into compatible shapes. In this way, the reliability of establishing the correspondence among different segments of the same organs is improved and the intensity information for the interpolated intermediate slices can be derived more faithfully. To improve the efficiency for calculating the image warp in the field morphing process, a hierarchic decomposition process is proposed to localize the influence of each control line segment. In comparison with the existing intensity interpolation algorithms that only search for corresponding points in a small physical neighborhood, this method provides more meaningful correspondence relationships by warping regions in images into similar shapes before resampling to account for significant shape differences. Several sets of experimental result are presented to show that this method generates more realistic and less blurred interpolated images, especially when the shape difference of corresponding contours is significant. © 1997 John Wiley & Sons, Inc. Int J Imaging Syst Technol, 8, 480–490, 1997

Patent
06 Mar 1997
TL;DR: In this paper, a decoder produces a signal for interpolation based on the input of an interpolation coefficient and the absolute value of the interpolation coefficient which is output from an absolute value detector.
Abstract: A video display monitor employs interpolation which maintains sharpness in images that have large differences in the signal levels of picture elements, while assuring smoothness in images such as ramps, which have small differences in signal levels. The differences in the signal levels of respective adjacent picture elements are detected by delay and subtraction. A decoder produces a signal for interpolation based on the input of an interpolation coefficient and the absolute value of the interpolation coefficient which is output from an absolute value detector.

Journal ArticleDOI
TL;DR: This paper presents a disparity estimation algorithm called hierarchical block correlation, which is a hybrid of the hierarchical estimation approach with the correlation matching criterion and can give better image quality for applications such as low-bit-rate coding and image interpolation.
Abstract: Disparity compensation is widely used to remove the spatial redundancies in stereoscopic image coding. In this paper, we present a disparity estimation algorithm called hierarchical block correlation, which is a hybrid of the hierarchical estimation approach with the correlation matching criterion. To assess the performance of the proposed algorithm, a coding error-based measure is used. The proposed algorithm is compared with two conventional block matching algorithms. The simulation results indicate that the proposed algorithm reduces disparity vector entropy by 10% to 30% and gives comparable or smaller coding errors. Furthermore, the proposed algorithm is superior with regard to the reliability of disparity vectors, and thus can give better image quality for applications such as low-bit-rate coding and image interpolation.

Proceedings ArticleDOI
21 Jul 1997
TL;DR: In this article, a series of non-linear filters is developed based on the concepts of Volterra series and these are applied to image interpolation problems, more explicitly the aim is to interpolate one field of a frame of a television picture to form an estimate of the second field.
Abstract: Linear filter theory based on Wiener filtering is well understood and used widely in many fields of image and signal processing. However, the use of linear filters is generally associated with implicit approximations. Therefore, in this work a series of non-linear filters is developed based on the concepts of Volterra series and these are applied to image interpolation problems. More explicitly the aim is to interpolate one field of a frame of a television picture to form an estimate of the second field. This is known as de-interlacing and is useful in many areas of video processing, for example standards conversion. Conventional de-interlacing systems use a fixed linear combination of the pixels in the aperture. In this paper we consider the extension of these methods to allow estimators based on non-linear combinations of pixel values.

Patent
11 Jul 1997
TL;DR: In this paper, the motion vector information included in the MPEG2 video data is received from a motion vector decoder, and the still and moving image interpolation scanning lines are mixed together in a variable ratio in response to the movement of an image to obtain the interpolation output of the line sequential scan.
Abstract: PROBLEM TO BE SOLVED: To suppress interlacing disturbance and obtain a high-quality image by performing interlacing-to-line-sequential conversion processing with reference to MPEG2 motion vector information. SOLUTION: An MPEG2 video decoder 5 decodes the video data separated by a demultiplexer 4 and produces interlaced video signals equivalent to a standard TV broadcast. A scan conversion means 11 produces interpolation scanning lines from the interlaced video signals and then produces line-sequential video signals, outputting them to a video output terminal 8. For a still image, the signal from one field earlier is used as the still-image interpolation scanning line. For a moving image, the intra-field average of the upper and lower scanning lines is used as the moving-image interpolation scanning line. The motion vector information included in the MPEG2 video data is received from a motion vector decoder 10, and the still and moving image interpolation scanning lines are mixed in a variable ratio according to the motion of the image to obtain the line-sequential interpolation output.
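The mixing step can be sketched per missing pixel: blend the temporal interpolant (the co-sited sample from the previous field) and the spatial one (the average of the lines above and below) by a motion measure. Here `motion` is a generic nonnegative activity value and `k` a hypothetical gain, standing in for the patent's use of decoded MPEG2 motion vectors:

```python
def deinterlace_pixel(above, below, prev_same_line, motion, k=0.1):
    """Reconstruct one sample of a missing interlace line.
    still (motion=0)   -> co-sited sample from the previous field
    moving (large motion) -> intra-field average of the adjacent lines
    In between, the two interpolants are mixed in a variable ratio."""
    alpha = min(1.0, k * motion)          # 0 = fully still, 1 = fully moving
    spatial = (above + below) / 2.0
    temporal = prev_same_line
    return (1.0 - alpha) * temporal + alpha * spatial
```

The temporal branch keeps full vertical resolution on static content; the spatial branch avoids the combing artifacts that temporal interpolation would cause on moving content.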

Patent
25 Mar 1997
TL;DR: In this paper, a moving image interpolation device is provided with a transmitter that consists of a frame interleave section 11, which conducts frame interleaving matched to the transmission rate and the frame interpolation capability of a receiver, a motion detection section 12, which obtains a motion vector from the interleaved image data, and a coder 13, which applies compression coding to the moving image information.
Abstract: PROBLEM TO BE SOLVED: To realize a moving image interpolation device at the receiver side by which a missing frame is accurately recovered. SOLUTION: A moving image interpolation device is provided with a transmitter 1 that consists of a frame interleave section 11 conducting frame interleaving matched to the transmission rate and the frame interpolation capability of a receiver, a motion detection section 12 obtaining a motion vector from interleaved image data, a coder 13 applying compression coding to moving image information, and a buffer 14 storing image data and motion vector data; and with a receiver 2 that consists of a buffer 21, a decoder 22 decoding compressed image data, and a frame interpolation section 23 interpolating an interpolation frame with a received motion vector. The frame interpolation section 23 then reconstructs a proper interpolation frame between adjacent transmission frames from the motion vector to conduct interpolation processing on the frame.

Proceedings ArticleDOI
09 Nov 1997
TL;DR: An efficient architecture of ultrasonic scan conversion for implementing cubic convolution interpolation, in which some parameters needed in the interpolation process can be precomputed and stored in ROMs (read-only memory) to improve computational efficiency.
Abstract: The nearest neighbor and the bilinear interpolation methods are widely adopted to perform ultrasonic scan conversion in commercial ultrasound systems, because they are easy to implement and can satisfy real-time requirements. As hardware technology has advanced in the recent decade, more complex yet more capable interpolation methods can be realized. However, few articles in the literature concern the details of reconstructing ultrasonic images. In this paper the authors propose an efficient architecture of ultrasonic scan conversion for implementing cubic convolution interpolation. Some parameters needed in the interpolation process can be precomputed and stored in ROMs (read-only memory) to improve computational efficiency. System requirement analysis and computer simulations have shown that the architecture is feasible in real-time practice and offers reconstruction quality superior to conventional interpolation methods.
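The ROM idea corresponds to precomputing, for each Cartesian output pixel, the polar source indices and interpolation weights once, then reusing them every frame. The sketch below uses bilinear weights and an assumed sector geometry (transducer at the top center, symmetric sector) for brevity; the paper's cubic convolution would store 4x4 weight sets per pixel the same way:

```python
import math

def build_scan_tables(width, height, n_ranges, n_angles, sector_angle):
    """Precompute bilinear source indices and weights in (range, angle)
    space for every Cartesian pixel: the software analogue of the ROM
    tables. Geometry here is an illustrative assumption."""
    table = {}
    cx = (width - 1) / 2.0
    max_r = math.hypot(cx, height - 1)
    for y in range(height):
        for x in range(width):
            r = math.hypot(x - cx, y)
            a = math.atan2(x - cx, y) if (x != cx or y != 0) else 0.0
            ri = r / max_r * (n_ranges - 1)
            ai = (a / sector_angle + 0.5) * (n_angles - 1)
            i0, j0 = math.floor(ri), math.floor(ai)
            if 0 <= i0 < n_ranges - 1 and 0 <= j0 < n_angles - 1:
                fr, fa = ri - i0, ai - j0
                table[(x, y)] = (i0, j0,
                                 ((1 - fr) * (1 - fa), (1 - fr) * fa,
                                  fr * (1 - fa), fr * fa))
    return table

def scan_convert(polar, table, width, height):
    """Per frame: one multiply-accumulate pass over the precomputed table."""
    img = [[0.0] * width for _ in range(height)]
    for (x, y), (i, j, (w00, w01, w10, w11)) in table.items():
        img[y][x] = (w00 * polar[i][j]     + w01 * polar[i][j + 1] +
                     w10 * polar[i + 1][j] + w11 * polar[i + 1][j + 1])
    return img
```

All trigonometry and divisions happen once at build time; the per-frame loop is pure multiply-accumulate, which is what makes a hardware (ROM plus MAC) realization attractive.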