scispace - formally typeset

Showing papers on "Bicubic interpolation published in 2001"


Journal ArticleDOI
TL;DR: Simulation results demonstrate that the new interpolation algorithm substantially improves the subjective quality of the interpolated images over conventional linear interpolation.
Abstract: This paper proposes an edge-directed interpolation algorithm for natural images. The basic idea is to first estimate local covariance coefficients from a low-resolution image and then use these covariance estimates to adapt the interpolation at a higher resolution based on the geometric duality between the low-resolution covariance and the high-resolution covariance. The edge-directed property of covariance-based adaptation is attributed to its capability of tuning the interpolation coefficients to match an arbitrarily oriented step edge. A hybrid approach of switching between bilinear interpolation and covariance-based adaptive interpolation is proposed to reduce the overall computational complexity. Two important applications of the new interpolation algorithm are studied: resolution enhancement of grayscale images and reconstruction of color images from CCD samples. Simulation results demonstrate that our new interpolation algorithm substantially improves the subjective quality of the interpolated images over conventional linear interpolation.

1,933 citations
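The covariance-adaptation step above can be sketched as follows. This is a simplified illustration of the idea, not the authors' exact algorithm: for each high-resolution "diagonal" pixel it fits a 4-tap weight vector by least squares over a local low-resolution window, falling back to plain averaging (bilinear) in smooth areas, mirroring the hybrid scheme the abstract describes. The window size and variance threshold are arbitrary choices for illustration.

```python
import numpy as np

def nedi_diagonal(lo, win=4, var_thresh=4.0):
    """Estimate the 'diagonal' high-resolution pixels that sit between
    2x2 blocks of low-resolution pixels, using local-covariance weights
    (a simplified sketch of edge-directed interpolation)."""
    h, w = lo.shape
    out = np.zeros((h - 1, w - 1))
    for i in range(h - 1):
        for j in range(w - 1):
            # low-resolution training window around the target pixel
            i0, i1 = max(i - win, 1), min(i + win, h - 1)
            j0, j1 = max(j - win, 1), min(j + win, w - 1)
            ys, Cs = [], []
            for p in range(i0, i1):
                for q in range(j0, j1):
                    ys.append(lo[p, q])
                    # each LR pixel and its 4 diagonal LR neighbours mimic
                    # the HR pixel and its 4 surrounding LR pixels
                    Cs.append([lo[p - 1, q - 1], lo[p - 1, q + 1],
                               lo[p + 1, q - 1], lo[p + 1, q + 1]])
            y, C = np.asarray(ys), np.asarray(Cs)
            nbrs = np.array([lo[i, j], lo[i, j + 1],
                             lo[i + 1, j], lo[i + 1, j + 1]])
            if y.var() < var_thresh:
                out[i, j] = nbrs.mean()       # smooth area: plain averaging
            else:
                a, *_ = np.linalg.lstsq(C, y, rcond=None)
                out[i, j] = a @ nbrs          # covariance-adapted weights
    return out
```

The variance test is what keeps the cost down: the least-squares solve runs only near edges, which is the point of the hybrid scheme.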


Journal ArticleDOI
TL;DR: An evaluation of convolution-based interpolation methods and rigid transformations for the specific task of applying geometrical transformations to medical images shows that spline interpolation is to be preferred over all other methods, both for its accuracy and its relatively low computational cost.

298 citations


Journal ArticleDOI
TL;DR: It is shown that high-degree B-spline interpolation has superior Fourier properties, smallest interpolation error, and reasonable computing times, and therefore, high-degree B-splines are preferable interpolators for numerous applications in medical image processing, particularly if high precision is required.
Abstract: Analyzes B-spline interpolation techniques of degree 2, 4, and 5 with respect to all criteria that have been applied to evaluate various interpolation schemes in a recently published survey on image interpolation in medical imaging (Lehmann et al., 1999). It is shown that high-degree B-spline interpolation has superior Fourier properties, smallest interpolation error, and reasonable computing times. Therefore, high-degree B-splines are preferable interpolators for numerous applications in medical image processing, particularly if high precision is required. If no aliasing occurs, this result neither depends on the geometric transform applied for the tests nor the actual content of images.

157 citations


Journal ArticleDOI
TL;DR: The quality of polynomial interpolation approximations over the sphere S^{r-1} ⊂ R^r in the uniform norm is explored, principally for r=3, and empirical evidence suggests that for points obtained by maximizing λ_min, the growth in ||Λ_n|| is approximately n+1 for n<30.
Abstract: This paper explores the quality of polynomial interpolation approximations over the sphere S^{r-1} ⊂ R^r in the uniform norm, principally for r=3. Reimer [17] has shown there exist fundamental systems for which the norm ||Λ_n|| of the interpolation operator Λ_n, considered as a map from C(S^{r-1}) to C(S^{r-1}), is bounded by d_n, where d_n is the dimension of the space of all spherical polynomials of degree at most n. Another bound is d_n^{1/2}(λ_avg/λ_min)^{1/2}, where λ_avg and λ_min are the average and minimum eigenvalues of a matrix G determined by the fundamental system of interpolation points. For r=3 these bounds are (n+1)^2 and (n+1)(λ_avg/λ_min)^{1/2}, respectively. In a different direction, recent work by Sloan and Womersley [24] has shown that for r=3 and under a mild regularity assumption, the norm of the hyperinterpolation operator (which needs more point values than interpolation) is bounded by O(n^{1/2}), which is optimal among all linear projections. How much can the gap between interpolation and hyperinterpolation be closed? For interpolation the quality of the polynomial interpolant is critically dependent on the choice of interpolation points. Empirical evidence in this paper suggests that for points obtained by maximizing λ_min, the growth in ||Λ_n|| is approximately n+1 for n<30. This choice of points also has the effect of reducing the condition number of the linear system to be solved for the interpolation weights. Choosing the points to minimize the norm directly produces fundamental systems for which the norm grows approximately as 0.7n+1.8 for n<30. On the other hand, 'minimum energy points', obtained by minimizing the potential energy of a set of (n+1)^2 points on S^2, turn out empirically to be very bad as interpolation points.
This paper also presents numerical results on uniform errors for approximating a set of test functions, by both interpolation and hyperinterpolation, as well as by non-polynomial interpolation with certain global basis functions.

122 citations


Proceedings ArticleDOI
07 May 2001
TL;DR: The interpolation algorithm was found to produce noticeably sharper images with PSNR values which outperform many other interpolation techniques on a variety of images.
Abstract: Hidden Markov trees in the wavelet domain are capable of accurately modeling the statistical behavior of real world signals by exploiting relationships between coefficients in different scales. The model is used to interpolate images by predicting coefficients at finer scales. Various optimizations and post-processing steps are also investigated to determine their effect on the performance of the interpolation. The interpolation algorithm was found to produce noticeably sharper images with PSNR values which outperform many other interpolation techniques on a variety of images.

121 citations


Journal ArticleDOI
TL;DR: In this article, the cubic spline interpolator is presented: cubic polynomials are fit to adjacent pairs of points, and the values of the two remaining parameters associated with each polynomial are chosen such that the polynomials covering adjacent intervals agree with one another in both slope and curvature at their common endpoint.
Abstract: The need to interpolate is widespread, and the approaches to interpolation are just as widely varied. For example, sampling a signal via a sample-and-hold circuit at uniform, T-second intervals produces an output signal that is a piecewise-constant (or zero-order) interpolation of the signal samples. Similarly, a digital-to-analog (D/A) converter that incorporates no further post-filtering produces an output signal that is (ideally) piecewise-constant. One very effective, well-behaved, computationally efficient interpolator is the cubic spline. The approach is to fit cubic polynomials to adjacent pairs of points and choose the values of the two remaining parameters associated with each polynomial such that the polynomials covering adjacent intervals agree with one another in both slope and curvature at their common endpoint. The piecewise-cubic interpolating function g(x) that results is twice continuously differentiable. We develop the basic algorithm for cubic-spline interpolation.

98 citations
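The basic algorithm described above can be sketched in a few lines: build and solve the tridiagonal system for the second derivatives at the knots (Thomas algorithm), then evaluate the piecewise cubic. Natural end conditions (zero curvature at the endpoints) are assumed here; the article's development may use different end conditions.

```python
def natural_cubic_spline(xs, ys):
    """Return a function g(x) that cubic-spline-interpolates (xs, ys),
    with natural end conditions M_0 = M_{n-1} = 0 for the second
    derivatives M_i (tridiagonal Thomas solve, O(n))."""
    n = len(xs)
    h = [xs[i + 1] - xs[i] for i in range(n - 1)]
    # tridiagonal system a*M[i-1] + b*M[i] + c*M[i+1] = d for interior i
    a, b, c, d = [0.0] * n, [1.0] * n, [0.0] * n, [0.0] * n
    for i in range(1, n - 1):
        a[i], b[i], c[i] = h[i - 1], 2.0 * (h[i - 1] + h[i]), h[i]
        d[i] = 6.0 * ((ys[i + 1] - ys[i]) / h[i] - (ys[i] - ys[i - 1]) / h[i - 1])
    for i in range(1, n):              # forward elimination
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    M = [0.0] * n
    M[n - 1] = d[n - 1] / b[n - 1]
    for i in range(n - 2, -1, -1):     # back substitution
        M[i] = (d[i] - c[i] * M[i + 1]) / b[i]

    def g(x):
        # locate the interval [xs[i], xs[i+1]] containing x
        i = n - 2
        for k in range(n - 1):
            if x <= xs[k + 1]:
                i = k
                break
        t = x - xs[i]
        return (ys[i]
                + t * ((ys[i + 1] - ys[i]) / h[i] - h[i] * (2 * M[i] + M[i + 1]) / 6.0)
                + t * t * M[i] / 2.0
                + t * t * t * (M[i + 1] - M[i]) / (6.0 * h[i]))
    return g
```

Matching slope and curvature at the knots is exactly what the tridiagonal system encodes; solving it once gives a g(x) that is twice continuously differentiable, as the abstract states.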


Journal ArticleDOI
TL;DR: A generalized interpolation scheme for image expansion and generation of super-resolution images is presented and shown to be useful in perceptually based high-resolution representation of images where interpolation is done on individual groups as per the perceptual necessity.

71 citations


Journal ArticleDOI
TL;DR: The pycnophylactic interpolation method computes a continuous surface from polygon-based data and simultaneously enforces volume preservation in the polygons, which is extended to surface representations based on an irregular triangular network (TIN).
Abstract: The interpolation of continuous surfaces from discrete points is supported by most GIS software packages. Some packages provide additional options for the interpolation from 3D line objects, for example surface-specific lines, or contour lines digitized from topographic maps. Demographic, social and economic data can also be used to construct and display smooth surfaces. The variables are usually published as sums for polygonal units, such as the number of inhabitants in communities or counties. In the case of point and line objects the geometric properties have to be maintained in the interpolated surface. For polygon-based data the geometric properties of the polygon boundary and the volume should be preserved, avoiding redistribution of parts of the volume to neighboring units during interpolation. The pycnophylactic interpolation method computes a continuous surface from polygon-based data and simultaneously enforces volume preservation in the polygons. The original procedure using a regular grid is extended to surface representations based on an irregular triangular network (TIN).

49 citations
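The core of the method can be sketched on a regular grid: alternate a smoothing pass with a per-zone correction that restores each polygon's total, which is the pycnophylactic (volume-preserving) constraint. This is a bare-bones illustration of Tobler's original grid procedure, not the TIN extension described in the paper; the iteration count and the simple additive correction are arbitrary choices for illustration.

```python
import numpy as np

def pycnophylactic(zones, totals, iters=100):
    """Tobler-style pycnophylactic interpolation on a regular grid (sketch).

    zones:  2-D int array of zone labels
    totals: dict zone -> total value (the 'volume' to preserve)
    Returns a smooth 2-D surface whose sum over each zone equals its total."""
    z = np.zeros(zones.shape, float)
    for k, t in totals.items():
        mask = zones == k
        z[mask] = t / mask.sum()          # start from uniform density per zone
    for _ in range(iters):
        # 4-neighbour smoothing with edge replication
        p = np.pad(z, 1, mode='edge')
        z = (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]) / 4.0
        # restore each zone's total: the pycnophylactic constraint
        for k, t in totals.items():
            mask = zones == k
            z[mask] += (t - z[mask].sum()) / mask.sum()
    return z
```

Because the correction runs after every smoothing pass, no volume ever leaks to neighboring units, while the smoothing removes the steps at zone boundaries.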


Patent
Yong In Han1, Hwe-ihn Chung1
01 May 2001
TL;DR: In this paper, a 2D non-linear interpolation method based on edge information was proposed, which includes an edge detector, an edge direction modifier, a near-edge coefficient generator, a filter coefficient generator and a nonlinear interpolative unit, where the edge detector detects edge information among pixels from a video signal applied through an input terminal.
Abstract: A 2-dimensional non-linear interpolation system and method based on edge information includes an edge detector, an edge direction modifier, a near-edge coefficient generator, a filter coefficient generator and a non-linear interpolation unit. The edge detector detects edge information among pixels from a video signal applied through an input terminal. The edge direction modifier converts the edge information detected by the edge detector on the basis of a center point among peripheral pixels of an interpolation position and outputs modified edge information. The near-edge coefficient generator converts the coordinates of the interpolation position based on the modified edge information to generate a converted interpolation position, generates edge patterns corresponding to the converted interpolation position, and generates a plurality of 2-dimensional interpolation coefficients in response to predetermined one-dimensional non-linear interpolation filter coefficients. The filter coefficient generator generates the one-dimensional non-linear interpolation filter coefficients in response to the coordinates of the converted interpolation position, the edge patterns and predetermined one-dimensional filter coefficients. The non-linear interpolation unit multiplies data values associated with the peripheral pixels by the plurality of 2-dimensional non-linear interpolation coefficients to perform non-linear interpolation. Accordingly, even when a video image is magnified using non-linear interpolation, the resolution of a text or graphic image can be maintained without distortion of edges and aliasing.

42 citations


Journal ArticleDOI
TL;DR: Extensions to interpolation of regularly spaced and scattered bi- and multivariate data by cubic and higher-degree surfaces/hypersurfaces on regular and irregular rectangular/quadrilateral/hexahedral and triangular/tetrahedral grids are outlined.

37 citations


Proceedings ArticleDOI
07 Oct 2001
TL;DR: This work considers the problem of image interpolation from an adaptive optimal recovery point of view, as well as introducing a broader, more general and systematic approach to image interpolation using adaptive optimal recovery.
Abstract: We consider the problem of image interpolation from an adaptive optimal recovery point of view. Many different standard interpolation approaches may be viewed through the prism of optimal recovery. We review some standard image interpolation methods and how they relate to optimal recovery as well as introduce a broader, more general and systematic approach to image interpolation using adaptive optimal recovery.

Journal ArticleDOI
01 Sep 2001
TL;DR: This paper considers the interpolation of fuzzy data by fuzzy-valued complete splines and gives numerical solutions for illustrative examples.
Abstract: In this paper, we will consider the interpolation of fuzzy data by fuzzy-valued complete splines. Finally, we will give the numerical solutions of the illustrative examples.

Journal ArticleDOI
TL;DR: In terms of the average PSNR (peak signal-to-noise ratio) in dB and a subjective measure of the quality of the interpolated images, the interpolation results of the proposed approach are better than those of three existing interpolation approaches used for comparison.

Patent
09 Aug 2001
TL;DR: In this article, a method and system for image resampling by spatial interpolation is proposed, which allows more than simple angle interpolation by allowing spatial interpolation to be performed on small-angle edges.
Abstract: A method and system for image resampling by spatial interpolation. The method and system allow more than simple angle interpolation by allowing spatial interpolation to be performed on small-angle edges. Multiple interpolation directions are established. Once an interpolation direction is selected, verifications are performed on the selected interpolation direction in order to rule out erroneous selection. If the selected interpolation direction passes all verifications, then spatial interpolation is performed along the selected interpolation direction. Otherwise, a default interpolation direction is used.

Proceedings Article
01 Jan 2001
TL;DR: In this article, Lagrange interpolation schemes are constructed based on C^1 cubic splines on certain triangulations obtained from checkerboard quadrangulations.
Abstract: Lagrange interpolation schemes are constructed based on C^1 cubic splines on certain triangulations obtained from checkerboard quadrangulations.

Proceedings ArticleDOI
03 Jul 2001
TL;DR: It is concluded that the proposed method can be used to produce a desirable illumination-normalized image, from which region segmentation can be made easier and more accurate.
Abstract: Blood vessels in retinal images are often spread widely across the image surface. By using this feature, this paper presents a novel approach for illumination normalization of retinal images. With the assumption that the reflectance of the vessels (including both major and small vessels) is a constant, it was found in our study that the illumination distribution of a retinal image can be estimated based on the locations of the vessel pixels and their intensity values. The procedure for estimating the illumination consists of two steps: (1) obtain the vessel map of the retinal image, and (2) estimate the illumination function (IF) of the image by interpolating the intensity values (luminance) of non-vessel pixels using a bicubic model function based on the locations of the vessel pixels and their intensity values. The illumination-normalized image can then be obtained by subtracting the original image from the estimated IF. Twenty non-uniformly illuminated sample retinal images were tested using the proposed method. The results showed that the overall standard deviation of the illumination for the image background was reduced by 56.8%, from 19.82 to 8.56, and the signal-to-noise ratio of the normalized images was greatly improved in the application of global thresholding for image/region segmentation. Furthermore, when measured by the local luminosity histograms, the contrast of regions with low illumination containing features that are normally difficult to detect (such as small lesions and vessels) was also enhanced significantly. Therefore, it is concluded that the proposed method can be used to produce a desirable illumination-normalized image, from which region segmentation can be made easier and more accurate. © (2001) COPYRIGHT SPIE, The International Society for Optical Engineering.
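Step (2), fitting a smooth illumination surface to a subset of pixels, can be sketched with a global bicubic polynomial fit by least squares. The terms x^i y^j with i, j ≤ 3 are an assumption here; the paper's exact "bicubic model function" and its use of vessel locations may differ.

```python
import numpy as np

def fit_bicubic_surface(img, mask):
    """Least-squares fit of a bicubic polynomial surface
    sum_{i,j<=3} c_ij x^i y^j to the pixels where mask is True
    (e.g. non-vessel pixels); returns the fitted surface over the
    whole image.  A sketch; the paper's model may differ."""
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # normalize coordinates to [0, 1] for numerical conditioning
    x = xx / max(w - 1, 1)
    y = yy / max(h - 1, 1)
    terms = [(x ** i) * (y ** j) for i in range(4) for j in range(4)]
    A = np.stack([t[mask] for t in terms], axis=1)
    coeffs, *_ = np.linalg.lstsq(A, img[mask], rcond=None)
    return sum(c * t for c, t in zip(coeffs, terms))
```

Subtracting (or dividing out) the fitted surface then yields the illumination-normalized image in the same spirit as the abstract.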

Proceedings ArticleDOI
07 May 2001
TL;DR: This work adaptively estimates the local quadratic signal class of the image pixels and uses optimal recovery to estimate the missing local samples based on this quadratic signal class.
Abstract: We consider the problem of image interpolation using adaptive optimal recovery. We adaptively estimate the local quadratic signal class of our image pixels. We then use optimal recovery to estimate the missing local samples based on this quadratic signal class. This approach tends to preserve edges, interpolating along edges and not across them.

Journal ArticleDOI
TL;DR: In this article, the authors present an automatic algorithm for the construction of local shape-preserving interpolating splines in R 3, which satisfy the convexity and torsion criteria relative to the polygonal line connecting interpolation points.
Abstract: In this paper we present an automatic algorithm for the construction of local shape-preserving interpolating splines in R 3 . These splines satisfy the convexity and torsion criteria relative to the polygonal line connecting interpolation points. The resulting curve has continuous curvature and continuous torsion and is obtained via piecewise polynomials of degree 6.

Journal ArticleDOI
Jong-Ki Han1, Hyung-Myung Kim2
TL;DR: An adaptive version of cubic convolution interpolation for the enlargement or reduction of digital images by arbitrary scaling factors that exhibits significant improvement in the minimization of information loss when compared with the conventional interpolation algorithms.
Abstract: The authors derive an adaptive version of cubic convolution interpolation for the enlargement or reduction of digital images by arbitrary scaling factors. The adaptation is performed in each subblock (typically L×L rectangular) of an image. It consists of three phases: two scaling procedures (i.e., forward and backward interpolation) and an optimization of the interpolation kernel. In the forward interpolation phase, from the sampled data with the original resolution, we generate scaled data with different (higher or lower) resolution. The backward interpolation produces new discrete data by applying another interpolation to the scaled one. The phases are based on a cubic convolution interpolation whose kernel is modified to adapt to local properties of the data. During the optimization phase, we modify the parameter values to decrease the disparity between the original data and those resulting from another interpolation on the different-resolution output of the forward interpolating phase. The overall process is repeated iteratively. We show experimental results that demonstrate the effectiveness of the proposed interpolation method. The algorithm exhibits significant improvement in the minimization of information loss when compared with the conventional interpolation algorithms. © 2001 Society of Photo-Optical Instrumentation Engineers.
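For reference, the kernel underlying such schemes is Keys' piecewise-cubic convolution kernel with a free parameter a (a = -1/2 is the classical choice); tuning a per subblock is the kind of kernel adaptation the abstract describes. The sketch below shows the standard kernel and 1-D resampling with it, not the authors' adaptive procedure.

```python
import math

def cubic_conv_kernel(s, a=-0.5):
    """Keys' cubic convolution kernel; the free parameter a is what an
    adaptive scheme would tune per subblock."""
    s = abs(s)
    if s < 1:
        return (a + 2) * s ** 3 - (a + 3) * s ** 2 + 1
    if s < 2:
        return a * s ** 3 - 5 * a * s ** 2 + 8 * a * s - 4 * a
    return 0.0

def cubic_conv_interp(samples, x, a=-0.5):
    """Interpolate uniformly spaced samples at real-valued position x
    using the 4-tap cubic convolution kernel."""
    i = math.floor(x)
    val = 0.0
    for k in range(i - 1, i + 3):
        if 0 <= k < len(samples):
            val += samples[k] * cubic_conv_kernel(x - k, a)
    return val
```

The kernel is 1 at the origin and 0 at the other integers, so the original samples are always reproduced exactly; with a = -1/2 the scheme also reproduces linear ramps between samples.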

Proceedings ArticleDOI
TL;DR: A new algorithm for the interpolation of temporal intermediate images using polyphase weighted median filters which are able to achieve a correct positioning of moving edges in the interpolated image, even if the estimated vector differs from the true motion vector up to a certain degree.
Abstract: A new algorithm for the interpolation of temporal intermediate images using polyphase weighted median filters is proposed in this paper. To achieve a good interpolation quality not only in still but also in moving areas of the image, vector based interpolation techniques have to be used. However, motion estimation on natural image scenes always suffers from errors in the estimated motion vector field. Therefore it is of great importance that the interpolation algorithm possesses a sufficient robustness against vector errors. Depending on the input and output frame repetition rate, different cyclically repeated interpolation phases can be distinguished. The new interpolation algorithm uses dedicated weighted median filters for each interpolation phase (polyphase weighted median filters) which are (due to their shift property) able to achieve a correct positioning of moving edges in the interpolated image, even if the estimated vector differs from the true motion vector up to a certain degree. A new design method for these dedicated error tolerant weighted median filters is presented in the paper. Other aspects like e.g. the preservation of fine image details can also be regarded in the design process. The results of the new algorithm are compared to other existing interpolation algorithms.
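The building block of the algorithm, the weighted median, can be stated compactly: it is the sample at which the cumulative weight first reaches half the total weight (with integer weights this is the ordinary median of the weight-replicated samples). The sketch below shows the operator itself, not the paper's polyphase filter design.

```python
def weighted_median(values, weights):
    """Weighted median: the value at which the cumulative weight first
    reaches half the total weight (integer weights are equivalent to
    replicating each sample that many times)."""
    pairs = sorted(zip(values, weights))
    half = sum(weights) / 2.0
    acc = 0.0
    for v, w in pairs:
        acc += w
        if acc >= half:
            return v
```

Unlike a weighted average, the output is always one of the input samples, which is why a median-based interpolator can keep a moving edge sharp and tolerate some error in the motion vector.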

Proceedings ArticleDOI
07 May 2001
TL;DR: An edge-preserving method for image resizing (decimation and interpolation) is proposed; treating the strongest edges as step edges, a segmentation procedure preceding the decimation leads to resized images with clearly outlined borders.
Abstract: An edge-preserving method for image resizing (decimation and interpolation) is proposed. The decimation is considered as an orthogonal projection with respect to the chosen interpolation basis. The latter one is formed in a spline-like manner as a linear combination of B-splines of different degrees. This combination is optimized in such a way that the small image details are preserved. Considering the strongest edges as step edges, a segmentation procedure preceding the decimation is proposed. It leads to resized images with clearly outlined borders.

01 Jan 2001
TL;DR: In this article, the authors outline the mathematics necessary to understand the smooth interpolation of zero curves, and describe two useful methods: cubic-spline interpolation and smoothest forward-rate interpolation.
Abstract: Smoothness is a desirable characteristic of interpolated zero curves; not only is it intuitively appealing, but there is some evidence that it provides more accurate pricing of securities. This paper outlines the mathematics necessary to understand the smooth interpolation of zero curves, and describes two useful methods: cubic-spline interpolation—which guarantees the smoothest interpolation of continuously compounded zero rates—and smoothest forward-rate interpolation—which guarantees the smoothest interpolation of the continuously compounded instantaneous forward rates. Since the theory of spline interpolation is explained in many textbooks on numerical methods, this paper focuses on a careful explanation of smoothest forward-rate interpolation.

Book ChapterDOI
TL;DR: In this article, a numerical approximation of the continuous convolution integral that can be calculated as a discrete convolution sum is obtained, based on the interpolation technique, which is more accurate for small scales, especially for Gaussian derivative convolutions.
Abstract: Gaussian convolutions are perhaps the most often used image operators in low-level computer vision tasks. Surprisingly though, there are precious few articles that describe efficient and accurate implementations of these operators. In this paper we describe numerical approximations of Gaussian convolutions based on interpolation. We start with the continuous convolution integral and use an interpolation technique to approximate the continuous image f from its sampled version F. Based on the interpolation, a numerical approximation of the continuous convolution integral that can be calculated as a discrete convolution sum is obtained. The discrete convolution kernel is not equal to the sampled version of the continuous convolution kernel. Instead, the convolution of the continuous kernel and the interpolation kernel has to be sampled to serve as the discrete convolution kernel. Some preliminary experiments are shown based on zero-order (nearest neighbor) interpolation, first-order (linear) interpolation, third-order (cubic) interpolation and sinc interpolation. These experiments show that the proposed algorithm is more accurate for small scales, especially for Gaussian derivative convolutions, when compared to the classical way of discretizing the Gaussian convolution.
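For first-order (linear) interpolation the construction reduces to sampling the convolution of the continuous Gaussian with the hat function, rather than sampling the Gaussian directly. A numerical sketch, with trapezoidal quadrature and an arbitrary step count:

```python
import math

def gauss(x, sigma):
    return math.exp(-x * x / (2 * sigma * sigma)) / (sigma * math.sqrt(2 * math.pi))

def tri(x):
    """Linear-interpolation (hat) kernel."""
    return max(0.0, 1.0 - abs(x))

def gauss_kernel_lin(sigma, radius, steps=200):
    """Discrete Gaussian kernel obtained by sampling the convolution of
    the continuous Gaussian with the linear interpolation kernel, i.e.
    k[n] = integral of g(x) * tri(n - x) dx (trapezoidal quadrature)."""
    kernel = []
    for n in range(-radius, radius + 1):
        a, b = n - 1.0, n + 1.0           # tri(n - x) is supported on [n-1, n+1]
        hstep = (b - a) / steps
        s = 0.5 * (gauss(a, sigma) * tri(n - a) + gauss(b, sigma) * tri(n - b))
        for i in range(1, steps):
            x = a + i * hstep
            s += gauss(x, sigma) * tri(n - x)
        kernel.append(s * hstep)
    return kernel
```

Because the hat functions form a partition of unity, the taps of this kernel sum to the mass of the truncated Gaussian, so the filter preserves the mean without an explicit renormalization step.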

Patent
Jun Hoshii1, Yoshihiro Nakami1
24 Apr 2001
TL;DR: In this paper, a blending ratio between the pixels interpolated separately in two modes of interpolation can be determined, according to the image attribute, with high percentages assigned to more suitable interpolation processing.
Abstract: Generally, it is not easy to judge automatically and correctly whether the attribute of an image is the one belonging to logos and illustrations or the one belonging to natural pictures. Due to incorrect judgment, sometimes, unsuitable interpolation execution has occurred. Pixels interpolated by the first interpolation processing and pixels interpolated by the second interpolation processing are blended, based on a predetermined evaluation function, and placed on a source image. Because the evaluation function depends on the attribute of the image, a blending ratio between the pixels interpolated separately in two modes of interpolation can be determined, according to the image attribute, with high percentages assigned to more suitable interpolation processing. The merit of each mode of interpolation processing becomes more noticeable, whereas the demerit of each becomes mild. Consequently, the invention can prevent an error in selecting an interpolation method, based on the appraised attribute of the image for which interpolation is executed.

Journal ArticleDOI
TL;DR: An algorithm for computing the cubic spline interpolation coefficients without solving the matrix equation involved is presented, which requires only O(n) multiplication or division operations for computing the inverse of the matrix.
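The paper's specific algorithm is not reproduced here, but a well-known way to obtain cubic (B-)spline interpolation coefficients in O(n) without an explicit matrix solve is Unser's pair of first-order recursive filters with pole sqrt(3) - 2; the truncated initialization below assumes mirror boundaries.

```python
import math

def cubic_bspline_coeffs(s):
    """Cubic B-spline interpolation coefficients in O(n), via one causal
    and one anticausal first-order recursion (Unser-style prefiltering)
    instead of an explicit tridiagonal solve.  Mirror boundaries."""
    z = math.sqrt(3.0) - 2.0              # pole of the inverse (1,4,1)/6 filter
    n = len(s)
    c = [6.0 * v for v in s]              # overall gain of 6
    # causal initialization: truncated sum, exact for mirror extension
    c[0] = sum(c[k] * z ** k for k in range(min(n, 30)))
    for k in range(1, n):                 # causal pass
        c[k] += z * c[k - 1]
    # anticausal initialization (standard mirror-boundary formula)
    c[n - 1] = (z / (z * z - 1.0)) * (c[n - 1] + z * c[n - 2])
    for k in range(n - 2, -1, -1):        # anticausal pass
        c[k] = z * (c[k + 1] - c[k])
    return c
```

The returned coefficients satisfy (c[k-1] + 4 c[k] + c[k+1]) / 6 = s[k] at interior samples, which is exactly the banded system a direct method would solve; here it costs a constant number of operations per sample.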


Journal ArticleDOI
TL;DR: A new method for the generation of a surface interpolating a mesh of control points based on a generalisation of Kochanek–Bartels splines, which provides the surface designer with a set of flexible user handles for shaping the surface locally.

Patent
Walter Demmer1
04 Dec 2001
TL;DR: In this article, a low-order polyphase interpolation filter is proposed to facilitate sample rate conversion from a first rate to a second rate, which can be greater or less than the sample rate of the input signal.
Abstract: A low-order polyphase interpolation filter, such as for decoding video and image signals, employs interpolation to facilitate sample rate conversion from a first rate to a second rate, which can be greater or less than the sample rate of the input signal. The interpolation applies interpolation coefficients, which are non-linear with respect to an associated positioning vector, to a set of input samples to provide desired scaling and/or conversion of the input sample into the desired output sample.

01 Jan 2001
TL;DR: In this paper, the autocorrelation function of the image at sub-pixel distances or the form of the spectrum near the Nyquist frequency is used to derive a linear interpolant with minimum variance which takes account of the effects of aliasing in the sampled image.
Abstract: The need for interpolation between pixels arises in many contexts. Kriging provides a general theory for optimal linear interpolation, which can be implemented and interpreted in either spatial or frequency domains. Of critical importance is knowledge of the autocorrelation function of the image at sub-pixel distances or, equivalently, the form of the spectrum near the Nyquist frequency. Although neither of these will typically be known, in many applications the point spread function of the imaging sensor is either known or can be estimated. We show how this knowledge can be combined with an assumption that the true scene is a Matern process, to derive a linear interpolant with minimum variance which takes account of the effects of aliasing in the sampled image. We apply the new method to both simulated and X-ray computed tomography images, and show it to be superior to bicubic and sinc interpolation for images that are not band-limited at the Nyquist frequency.

Patent
Jong-Ki Han1
29 Mar 2001
TL;DR: In this paper, a cubic convolution interpolator is proposed to minimize the quantity of information loss in a scaled or resampled image signal by optimizing a parameter which determines the interpolation coefficients according to the local property of an image signal.
Abstract: A cubic convolution interpolating apparatus and method for performing interpolation by optimizing a parameter which determines the interpolation coefficients according to the local property of an image signal, which can minimize the quantity of information loss in a scaled or resampled image signal. The cubic convolution interpolating apparatus includes an image signal divider dividing an image signal into a plurality of subblocks, and a block generating parameters which determine cubic convolution interpolation coefficients in units of subblocks, and perform cubic convolution interpolation. The cubic convolution interpolating block includes a forward scaling processor sampling a cubic convolution interpolated continuous function of original image data transmitted from the divider using a first scaling factor and scaling the original image data, a backward scaling processor sampling a backward cubic convolution interpolated continuous function of the scaled data output from the forward scaling processor using a second scaling factor and restoring the scaled data into the original image data, and a parameter optimizer optimizing the parameter using the original image data and the data restored into the original image data output from the backward scaling processor, and transferring the optimized parameter to the forward scaling processor and the backward scaling processor, respectively. Therefore, even if an image includes various spatial frequency components, the quantity of lost information due to a change in the local property of the spatial frequencies can be minimized.