
Showing papers on "Bicubic interpolation published in 1998"


Journal ArticleDOI
TL;DR: The authors proposed an approach for the interpolation of grey data of arbitrary dimensionality that generalized the shape-based method from binary to grey data, and showed preliminary evidence that it produced more accurate results than conventional grey-level interpolation methods.
Abstract: To aid in the display, manipulation, and analysis of biomedical image data, they usually need to be converted to data of isotropic discretization through the process of interpolation. Traditional techniques consist of direct interpolation of the grey values. When user interaction is called for in image segmentation, as a consequence of these interpolation methods, the user needs to segment a much greater (typically 4-10×) amount of data. To mitigate this problem, a method called shape-based interpolation of binary data was developed. Besides significantly reducing user time, this method has been shown to provide more accurate results than grey-level interpolation. The authors proposed an approach for the interpolation of grey data of arbitrary dimensionality that generalized the shape-based method from binary to grey data. This method has characteristics similar to those of the binary shape-based method. In particular, the authors showed preliminary evidence that it produced more accurate results than conventional grey-level interpolation methods. In this paper, concentrating on the three-dimensional (3-D) interpolation problem, the authors compare statistically the accuracy of 8 different methods: nearest-neighbor, linear grey-level, grey-level cubic spline, grey-level modified cubic spline, Goshtasby et al. (1992), and 3 methods from the grey-level shape-based class. A population of patient magnetic resonance and computed tomography images, corresponding to different parts of the human anatomy and coming from different 3-D imaging applications, is utilized for comparison. Each slice in these data sets is estimated by each interpolation method and compared to the original slice at the same location using 3 measures: mean-squared difference, number of sites of disagreement, and largest difference. The methods are statistically compared pairwise based on these measures. The shape-based methods statistically significantly outperformed all other methods in all measures in all applications considered here with a statistical relevance ranging from 10% to 32% (mean=15%) for mean-squared difference.
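
As a worked illustration of the three comparison measures named above (mean-squared difference, number of sites of disagreement, and largest difference), the following sketch scores an estimated slice against the true slice at the same location. The function name, the disagreement tolerance, and the toy neighbour-averaging estimate are our own assumptions, not taken from the paper.

import numpy as np

def slice_comparison_measures(estimated, original, tol=0.0):
    # Mean-squared difference, number of sites of disagreement (|diff| > tol),
    # and largest absolute difference between two slices of equal shape.
    diff = np.asarray(estimated, dtype=float) - np.asarray(original, dtype=float)
    return (float(np.mean(diff ** 2)),
            int(np.count_nonzero(np.abs(diff) > tol)),
            float(np.max(np.abs(diff))))

# Toy example: estimate a middle slice as the average of its two neighbours
# (a stand-in for linear grey-level interpolation along the slice axis).
rng = np.random.default_rng(0)
volume = rng.normal(size=(3, 64, 64))        # three adjacent slices
estimate = 0.5 * (volume[0] + volume[2])
print(slice_comparison_measures(estimate, volume[1]))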

199 citations


Journal ArticleDOI
TL;DR: In this paper, a technique for interpolation and gridding in one, two, and three dimensions using Green's functions for splines in tension is presented, which is superior to conventional finite-difference methods because both data values and directional gradients can be used to constrain the model surface, and noise can be suppressed easily by seeking a least-squares fit rather than exact interpolation.
Abstract: Interpolation and gridding of data are common procedures in the physical sciences and are accomplished typically using an averaging or finite difference scheme on an equidistant grid. Cubic splines are popular because of their smooth appearance; however, these functions can have undesirable oscillations between data points. Adding tension to the spline overcomes this deficiency. Here, we derive a technique for interpolation and gridding in one, two, and three dimensions using Green's functions for splines in tension and examine some of the properties of these functions. For moderate amounts of data, the Green's function technique is superior to conventional finite-difference methods because (1) both data values and directional gradients can be used to constrain the model surface, (2) noise can be suppressed easily by seeking a least-squares fit rather than exact interpolation, and (3) the model can be evaluated at arbitrary locations rather than only on a rectangular grid. We also show that the inclusion of tension greatly improves the stability of the method relative to gridding without tension. Moreover, the one-dimensional situation can be extended easily to handle parametric curve fitting in the plane and in space. Finally, we demonstrate the new method on both synthetic and real data and discuss the merits and drawbacks of the Green's function technique.
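
The core of the gridding machinery described above is a linear solve for the strengths of Green's functions centred on the data points. The sketch below shows that machinery in one dimension using the tension-free cubic-spline Green's function |r|³; the tension-dependent kernels derived in the paper are not reproduced here, and the least-squares solve echoes the paper's point (2) about handling noisy data.

import numpy as np

def greens_function_interpolate(x_data, y_data, x_eval):
    # Kernel: |r|**3, the 1-D Green's function of the (tension-free) cubic spline.
    kernel = lambda r: np.abs(r) ** 3
    G = kernel(x_data[:, None] - x_data[None, :])           # data-to-data matrix
    # A least-squares solve also covers the noisy-data case mentioned in the abstract.
    weights, *_ = np.linalg.lstsq(G, y_data, rcond=None)
    return kernel(x_eval[:, None] - x_data[None, :]) @ weights

x = np.array([0.0, 1.0, 2.5, 4.0, 6.0])
y = np.sin(x)
xi = np.linspace(0.0, 6.0, 13)
print(greens_function_interpolate(x, y, xi))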

122 citations


Proceedings ArticleDOI
04 Oct 1998
TL;DR: This paper presents a method for geometry-based interpolation that smoothly fits the isophote (intensity level curve) contours at all points in the image rather than just at selected contours by using level set methods for curve evolution.
Abstract: Standard methods for image interpolation are based on smoothly fitting the image intensity surface. Previous edge-directed interpolation methods add limited geometric information (edge maps) to build more accurate and visually appealing interpolations at key contours in the image. This paper presents a method for geometry-based interpolation that smoothly fits the isophote (intensity level curve) contours at all points in the image rather than just at selected contours. By using level set methods for curve evolution, no explicit extraction or representation of these contours is required (unlike earlier edge-directed methods). The method uses existing interpolation techniques as an initial approximation and then iteratively reconstructs the isophotes using constrained smoothing. Experiments show that the technique produces results that are more visually realistic than standard function-fitting methods.
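
The overall structure of the method (start from a conventional interpolation, then iteratively refine it while honouring the original samples) can be sketched as below. The refinement step here is plain Laplacian smoothing with the known low-resolution samples pinned, which is only a stand-in for the paper's level-set isophote reconstruction; function names and iteration counts are ours.

import numpy as np

def constrained_smooth(initial, known_mask, known_values, n_iter=200):
    # Each iteration replaces every pixel by the average of its 4-neighbours,
    # then restores the original low-resolution samples (the constraint).
    img = initial.astype(float).copy()
    for _ in range(n_iter):
        p = np.pad(img, 1, mode="edge")
        img = 0.25 * (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:])
        img[known_mask] = known_values
    return img

rng = np.random.default_rng(1)
low = rng.random((8, 8))
initial = np.repeat(np.repeat(low, 2, axis=0), 2, axis=1)   # crude initial interpolation
mask = np.zeros_like(initial, dtype=bool)
mask[::2, ::2] = True                                        # original sample positions
result = constrained_smooth(initial, mask, low.ravel())
print(float(result.mean()))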

98 citations


Journal ArticleDOI
TL;DR: In this article, a new method for expressing a molecular potential energy surface (PES) as an interpolation of local Taylor expansions is presented, which avoids redundancy problems associated with the use of internal coordinates.
Abstract: We present a new method for expressing a molecular potential energy surface (PES) as an interpolation of local Taylor expansions. By using only Cartesian coordinates for the atomic positions, this method avoids redundancy problems associated with the use of internal coordinates. The correct translation, rotation, inversion, and permutation invariance are incorporated in the PES via the interpolation method itself. The method is most readily employed for bound molecules or clusters and is demonstrated by application to the vibrational motion of acetylene.
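
A common way to turn a set of local Taylor expansions into a global surface is a Shepard-style weighted average, and the one-dimensional sketch below illustrates that idea. It is only an illustration of the interpolation-of-Taylor-expansions concept: the weight exponent, the 1-D setting, and the test potential are our assumptions, and the paper's Cartesian-coordinate, symmetry-adapted weighting is not reproduced.

import numpy as np

def interpolated_pes(x, data, p=4):
    # data: list of (x0, V, V', V'') tuples at reference geometries.
    # The PES at x is a normalised inverse-distance-weighted sum of the
    # local second-order Taylor expansions about each reference point.
    x = np.atleast_1d(np.asarray(x, dtype=float))
    d = np.array([np.abs(x - x0) for (x0, _f, _g, _h) in data])
    w = 1.0 / np.maximum(d, 1e-12) ** p
    w /= w.sum(axis=0)
    patches = np.array([f + g * (x - x0) + 0.5 * h * (x - x0) ** 2
                        for (x0, f, g, h) in data])
    return (w * patches).sum(axis=0)

# Reference data taken from V(x) = x**4 with analytic first and second derivatives.
pts = [(-1.0, 1.0, -4.0, 12.0), (0.0, 0.0, 0.0, 0.0), (1.0, 1.0, 4.0, 12.0)]
print(interpolated_pes([-0.5, 0.2, 0.9], pts))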

86 citations


Patent
08 Oct 1998
TL;DR: In this article, an interpolation direction is derived for each additional pixel from a weighted combination of a vertical direction and a best-choice diagonal direction, if a potential interpolation condition cannot be determined with a high level of confidence.
Abstract: A method or device for increasing the resolution of an image generates additional pixels in the image using interpolation. Various tests are performed for each additional pixel to determine whether conditions render the interpolation direction ambiguous or uncertain. In a television image, for example, an interpolation direction is derived for each additional pixel from a weighted combination of a vertical direction and a best-choice diagonal direction. If a potential interpolation condition cannot be determined with a high level of confidence, the weighted combination favors the vertical direction.
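
The decision logic described in the abstract (estimate along the vertical and along the best diagonal, then blend, favouring vertical when the evidence is ambiguous) can be sketched for a single missing pixel as follows. The confidence measure, threshold, and candidate set of diagonals are illustrative assumptions, not the patent's exact rules.

import numpy as np

def interpolate_missing_pixel(above, below, c, ambiguity_threshold=10.0):
    # above/below: the scan lines bracketing the missing line; c: column index.
    v_diff = abs(above[c] - below[c])
    diag_diffs = {+1: abs(above[c - 1] - below[c + 1]),
                  -1: abs(above[c + 1] - below[c - 1])}
    d, d_diff = min(diag_diffs.items(), key=lambda kv: kv[1])   # best-choice diagonal
    vertical = 0.5 * (above[c] + below[c])
    diagonal = 0.5 * (above[c - d] + below[c + d])
    # Weight shifts toward the diagonal only when it is clearly better than vertical;
    # otherwise the combination favours the vertical direction.
    w_diag = float(np.clip((v_diff - d_diff) / ambiguity_threshold, 0.0, 1.0))
    return w_diag * diagonal + (1.0 - w_diag) * vertical

above = np.array([10.0, 10.0, 10.0, 200.0, 200.0])
below = np.array([10.0, 200.0, 200.0, 200.0, 200.0])
print(interpolate_missing_pixel(above, below, c=2))   # follows the diagonal edge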

67 citations


Book ChapterDOI
01 Jan 1998
TL;DR: This paper constructs a subdivision algorithm with some negative weights producing G²-surfaces, which are piecewise bicubic and are flat at their extraordinary points.
Abstract: In this paper we present a method to optimize the smoothness order of subdivision algorithms generating surfaces of arbitrary topology. In particular we construct a subdivision algorithm with some negative weights producing G²-surfaces. These surfaces are piecewise bicubic and are flat at their extraordinary points. The underlying ideas can also be used to improve the smoothness order of subdivision algorithms for surfaces of higher degree or triangular nets.

56 citations


Patent
Andrew Gordon Neil Walter
07 Nov 1998
TL;DR: In this article, a method for reducing the computational overhead of bicubic interpolation while still providing a similar level of accuracy was proposed, taking into account the fact that sampled points surrounding a point whose value is to be determined have respective first, second and third order effects on the calculated value.
Abstract: The invention relates to a method for reducing the computational overhead of bicubic interpolation while still providing a similar level of accuracy. The invention takes into account the fact that sampled points surrounding a point whose value is to be determined have respective first, second and third order effects on the calculated value. The invention combines linear interpolation, ignoring points having a third order effect, with cubic interpolation of points having a first and second order effect to derive the value.

50 citations


OtherDOI
01 Jul 1998
Abstract: Image-space simplifications have been used to accelerate the calculation of computer graphic images since the dawn of visual simulation. Texture mapping has been used to provide a means by which images may themselves be used as display primitives. The work reported by this paper endeavors to carry this concept to its logical extreme by using interpolated images to portray three-dimensional scenes. The special-effects technique of morphing, which combines interpolation of texture maps and their shape, is applied to computing arbitrary intermediate frames from an array of prestored images. If the images are a structured set of views of a 3D object or scene, intermediate frames derived by morphing can be used to approximate intermediate 3D transformations of the object or scene. Using the view interpolation approach to synthesize 3D scenes has two main advantages. First, the 3D representation of the scene may be replaced with images. Second, the image synthesis time is independent of the scene complexity. The correspondence between images, required for the morphing method, can be predetermined automatically using the range data associated with the images. The method is further accelerated by a quadtree decomposition and a view-independent visible priority. Our experiments have shown that the morphing can be performed at interactive rates on today’s high-end personal computers. Potential applications of the method include virtual holograms, a walkthrough in a virtual environment, image-based primitives and incremental rendering. The method also can be used to greatly accelerate the computation of motion blur and soft shadows cast by area light sources. CR Categories and Subject Descriptors: I.3.3 [Computer Graphics]: Picture/Image Generation; I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism. Additional Keywords: image morphing, interpolation, virtual reality, motion blur, shadow, incremental rendering, real-time display, virtual holography, motion compensation.

43 citations


Journal ArticleDOI
01 Aug 1998
TL;DR: In this paper, a regularized iterative image interpolation algorithm was proposed to restore high frequency details in the original high resolution image, where the regularization approach was applied to the interpolation procedure.
Abstract: This paper presents a regularized iterative image interpolation algorithm, which can restore high frequency details in the original high resolution image. In order to apply the regularization approach to the interpolation procedure, we first present a two-dimensional separable image degradation model for a low resolution imaging system. According to the model, we propose a regularization based spatial image sequence interpolation algorithm and apply the proposed algorithm to spatially scalable coding.
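
The flavour of a regularized iterative interpolation can be conveyed with a one-dimensional sketch: model the low-resolution signal as blur-plus-decimation of the high-resolution one, then run gradient steps on a regularized least-squares cost. The degradation operator, roughness penalty, and all parameter values below are our assumptions, not the paper's separable model.

import numpy as np

def downsample(x, factor=2):
    # Imaging model H: 2-tap average (anti-alias blur) followed by decimation.
    return (0.5 * (x + np.roll(x, -1)))[::factor]

def downsample_adjoint(y, n, factor=2):
    # Adjoint H^T, needed for the gradient of the data-fit term.
    z = np.zeros(n)
    z[::factor] = y
    return 0.5 * (z + np.roll(z, 1))

def regularized_interpolate(y, n, lam=0.1, step=0.5, n_iter=300):
    # Minimise ||y - Hx||^2 + lam * ||Dx||^2 by gradient descent,
    # where D is a first-difference roughness penalty.
    x = np.repeat(y, 2)[:n]                       # simple initial interpolation
    for _ in range(n_iter):
        r = y - downsample(x)
        dx = x - np.roll(x, 1)
        grad = -downsample_adjoint(r, n) + lam * (dx - np.roll(dx, -1))
        x = x - step * grad
    return x

t = np.linspace(0, 2 * np.pi, 64, endpoint=False)
x_true = np.sin(t) + 0.3 * np.sin(5 * t)
x_hat = regularized_interpolate(downsample(x_true), n=64)
print(float(np.mean((x_hat - x_true) ** 2)))       # small reconstruction error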

38 citations


Journal ArticleDOI
01 May 1998
TL;DR: This paper considers the interpolation of fuzzy data by fuzzy-valued natural splines and gives numerical solutions for illustrative examples.
Abstract: In this paper, we will consider the interpolation of fuzzy data by fuzzy-valued natural splines. Finally, we will give the numerical solutions of the illustrative examples.

36 citations


01 Jan 1998
TL;DR: This thesis introduces two new approaches to optimal image interpolation which are based on the idea that image data falls into different categories or classes, such as edges of different orientation and smoother gradients, and demonstrates that RS can be trained for high-quality interpolation of images which are free of artifacts.
Abstract: Atkins, C. Brian. Ph.D., Purdue University, December 1998. Classification-Based Methods in Optimal Image Interpolation. Major Professors: Charles A. Bouman and Jan P. Allebach. In this thesis, we introduce two new approaches to optimal image interpolation which are based on the idea that image data falls into different categories or classes, such as edges of different orientation and smoother gradients. Both these methods work by first classifying the image data in a window around the pixel being interpolated, and then using an interpolation filter designed for the selected class. The first method, which we call Resolution Synthesis (RS), performs the classification by computing probabilities of class membership in a Gaussian mixture model. The second method, which we call Tree-based Resolution Synthesis (TRS), uses a regression tree. Both of these methods are based on stochastic models for image data whose parameters must have been estimated beforehand, by training on sample images. We demonstrate that under some assumptions, both of these methods are actually optimal in the sense that they yield minimum mean-squared error (MMSE) estimates of the target-resolution image, given the source image. We also introduce Enhanced Tree-based RS, which consists of TRS interpolation followed by an enhancement stage. During the enhancement stage, we recursively add adjustments to the pixels in the interpolated image. This has the dual effect of reducing interpolation artifacts while imparting additional sharpening. We present results of the above methods for interpolating images which are free of artifacts. In addition, we present results which demonstrate that RS can be trained for high-quality interpolation of images which
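
A toy version of the classification-then-filter structure described in the thesis can be written down directly: soft-classify the local window under a Gaussian mixture, then blend per-class linear filters by the posterior probabilities. The parameters below are random placeholders; in Resolution Synthesis they are trained on sample images, and the actual RS window features and filter design are not reproduced here.

import numpy as np

rng = np.random.default_rng(0)

K, D = 4, 9                                   # classes, 3x3-window dimensionality
means = rng.normal(size=(K, D))               # placeholder "trained" mixture means
variances = np.ones((K, D))                   # diagonal covariances, for simplicity
priors = np.full(K, 1.0 / K)
filters = rng.normal(scale=0.1, size=(K, D))  # placeholder per-class interpolation filters
filters[:, 4] += 1.0                          # bias each filter toward the centre pixel

def class_posteriors(window):
    # Posterior class probabilities of a 3x3 window under the Gaussian mixture.
    diff = window.ravel()[None, :] - means
    log_lik = -0.5 * np.sum(diff ** 2 / variances + np.log(2 * np.pi * variances), axis=1)
    log_post = np.log(priors) + log_lik
    post = np.exp(log_post - log_post.max())
    return post / post.sum()

def interpolate_pixel(window):
    # Posterior-weighted mixture of the per-class linear filter outputs.
    post = class_posteriors(window)
    return float(post @ (filters @ window.ravel()))

print(interpolate_pixel(rng.random((3, 3))))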

Patent
Benzler Ulrich, Werner Oliver
11 Jul 1998
TL;DR: In this paper, an exact pixel determination of a motion vector is initially carried out, followed by a two-step interpolation filtering at exact subpixel resolution, with the interpolation coefficients chosen to reduce aliasing.
Abstract: In order to generate an improved image signal in motion estimation, an exact pixel determination of a motion vector is initially carried out, followed by a two-step interpolation filtering at exact subpixel resolution. The interpolation coefficients are chosen with the purpose of reducing aliasing. A larger number of adjacent pixels are used in comparison with conventional interpolation methods. The quality of the prediction signal for moving images can thus be improved, thereby enhancing coding efficiency.

Journal ArticleDOI
TL;DR: In this article, two geometrical and two advection-equivalent spatial interpolation schemes were tested in providing lateral boundary conditions to a nested grid domain, where the test problem involves an initially cone-shaped distribution of a scalar advected from a coarse to a fine grid.
Abstract: Two geometrical and two advection-equivalent spatial interpolation schemes were tested in providing lateral boundary conditions to a nested grid domain. Geometric interpolation schemes used in this study are a zeroth-order and a quadratic scheme, while the two advection-equivalent interpolation schemes were based on upwind and Bott’s advection schemes. The test problem involves an initially cone-shaped distribution of a scalar advected from a coarse to a fine grid. Simulation results were compared to the exact solution to study magnitude and phase characteristics of each scheme. Results indicated that Bott’s advection-equivalent interpolation scheme provided better interface conditions and, consequently, a more accurate transition of the signal from a coarse to a fine grid.

Journal ArticleDOI
TL;DR: In this paper, necessary and sufficient conditions for mean convergence of Lagrange interpolation at zeros of orthogonal polynomials for weights on (−1, 1) were obtained.
Abstract: We obtain necessary and sufficient conditions for mean convergence of Lagrange interpolation at zeros of orthogonal polynomials for weights on $(-1,1)$, such as $w(x) = \exp\left(-(1-x^2)^{-\alpha}\right)$, $\alpha > 0$, or $w(x) = \exp\left(-\exp_k\left((1-x^2)^{-\alpha}\right)\right)$, $k \ge 1$, $\alpha > 0$.

PatentDOI
TL;DR: In this article, a comb filter is used within the fractional interpolation branch to attenuate imaging tones within the baseband of interest, and an interpolation rate change switch used by the comb filter moves the imaging tones further away from the baseband so that minimum imaging noise is introduced within the baseband.
Abstract: A voice or data transmission system and specifically an interpolation filter used within the transmission system is provided for producing either fractional or integer interpolation ratios. The digital signal resulting from the interpolation filter has a relatively high signal to noise ratio whenever fractional interpolation is needed. The interpolation filter includes multiple stages coupled in series, and an integer interpolation branch switched in parallel with a fractional interpolation branch. A controller determines whether the integer or fractional interpolation ratio is needed based on maintaining a fixed oversampling data rate from the interpolation filter given a changing incoming sampling rate. If the incoming sampling rate should require fractional interpolation, then a branch implementing fractional interpolation ratio is used in lieu of the integer interpolation ratio. A comb filter is preferably introduced within the fractional interpolation branch to attenuate imaging tones within the baseband of interest. An interpolation rate change switch used by the comb filter beneficially moves the imaging tones further away from the baseband so that minimum imaging noise is introduced within the baseband by the fractional oversampling ratio.
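
The integer-versus-fractional branch selection described above can be illustrated with a polyphase resampler: keep a fixed output rate and derive either an integer ratio or a reduced L/M fractional ratio from the incoming sampling rate. This sketch uses SciPy's resample_poly and omits the patent's comb-filter imaging suppression; the fixed 48 kHz output rate is an assumption.

import numpy as np
from math import gcd
from scipy.signal import resample_poly

def interpolate_to_fixed_rate(x, fs_in, fs_out=48000):
    # Integer branch when fs_out is a multiple of fs_in, fractional branch otherwise.
    if fs_out % fs_in == 0:
        up, down = fs_out // fs_in, 1
    else:
        g = gcd(fs_out, fs_in)
        up, down = fs_out // g, fs_in // g
    return resample_poly(x, up, down)

tone_44k = np.sin(2 * np.pi * 1000 * np.arange(441) / 44100)   # 10 ms at 44.1 kHz
tone_8k = np.sin(2 * np.pi * 1000 * np.arange(80) / 8000)      # 10 ms at 8 kHz
print(len(interpolate_to_fixed_rate(tone_44k, 44100)),          # fractional ratio 160/147
      len(interpolate_to_fixed_rate(tone_8k, 8000)))            # integer ratio 6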


Journal ArticleDOI
TL;DR: An abstract version of the Kowalewski–Ciarlet–Wagschal multipoint Taylor formula for representing the pointwise error in multivariate Lagrange interpolation is presented, and several applications are given in the paper.
Abstract: The main result of this paper is an abstract version of the Kowalewski–Ciarlet–Wagschal multipoint Taylor formula for representing the pointwise error in multivariate Lagrange interpolation. Several applications of this result are given in the paper. The most important of these is the construction of a multipoint Taylor error formula for a general finite element, together with the corresponding $L_p$-error bounds. Another application is the construction of a family of error formulae for linear interpolation (indexed by real measures of unit mass) which includes some recently obtained formulae. It is also shown how the problem of constructing an error formula for Lagrange interpolation from a D-invariant space of polynomials with the property that it involves only derivatives which annihilate the interpolating space can be reduced to the problem of finding such a formula for a ‘simpler’ one-point interpolation map.

Journal ArticleDOI
TL;DR: In this paper, results from barotropic transport simulations on the sphere are presented, using either bicubic Lagrangian or bicubic spline SL discretization, and the spline-based scheme is shown to generate excessively noisy fields in these simulations.
Abstract: The accuracy of interpolating semi-Lagrangian (SL) discretization methods depends on the choice of the interpolating function. Results from barotropic transport simulations on the sphere are presented, using either bicubic Lagrangian or bicubic spline SL discretization. The spline-based scheme is shown to generate excessively noisy fields in these simulations. The two methods are then tested in a one-dimensional advection problem. The damping and dispersion relations for the schemes are examined. The analysis and numerical experiments suggest that the excessive noise found in the spline-based simulations is a consequence of insufficient damping of the small scales for small and near-integer values of the Courant number. Inspection of the local Courant number for the two-dimensional spline-based simulation confirms this hypothesis. This noise can be controlled by adding a scale-selective diffusion term to the spline-based scheme, while retaining its excellent dispersion characteristics.
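
A minimal version of the one-dimensional advection test mentioned above is easy to reproduce: constant-wind semi-Lagrangian stepping with cubic Lagrange interpolation at the departure points on a periodic grid. The grid size, wind, and time step below are arbitrary choices; the bicubic spline variant and the scale-selective diffusion term are not included.

import numpy as np

def cubic_lagrange_periodic(field, xd, dx):
    # 4-point (cubic) Lagrange interpolation of a periodic field at points xd.
    n = field.size
    j = np.floor(xd / dx).astype(int)          # grid index just left of each point
    a = xd / dx - j                            # fractional position in [0, 1)
    v = lambda k: field[(j + k) % n]
    w_m1 = -a * (a - 1.0) * (a - 2.0) / 6.0
    w_0 = (a + 1.0) * (a - 1.0) * (a - 2.0) / 2.0
    w_p1 = -(a + 1.0) * a * (a - 2.0) / 2.0
    w_p2 = (a + 1.0) * a * (a - 1.0) / 6.0
    return w_m1 * v(-1) + w_0 * v(0) + w_p1 * v(1) + w_p2 * v(2)

n, L = 128, 1.0
dx = L / n
x = np.arange(n) * dx
u, dt, nsteps = 0.3, 0.013, 200                # Courant number roughly 0.5
field = np.exp(-200.0 * (x - 0.5) ** 2)        # initial Gaussian pulse
for _ in range(nsteps):
    xd = (x - u * dt) % L                      # departure points
    field = cubic_lagrange_periodic(field, xd, dx)
print(float(field.max()))                      # peak after 200 steps (slight damping)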

Proceedings ArticleDOI
24 Jun 1998
TL;DR: This work establishes that zero-padding interpolation of periodic functions that are sampled in accordance with the Nyquist criterion--precisely the sort of function encountered in the angular dimension of the polar grid--is exact and equivalent to circular sampling theorem interpolation.
Abstract: The speed and accuracy of Direct Fourier image reconstruction methods have long been hampered by the need to interpolate between the polar grid of Fourier data that is obtained from the measured projection data and the Cartesian grid of Fourier data that is needed to recover an image using the 2D FFT. Fast but crude interpolation schemes such as bilinear interpolation often lead to unacceptable image artifacts, while more sophisticated but computationally intense techniques such as circular sampling theorem (CST) interpolation negate the speed advantages afforded by the use of the 2D FFT. One technique that has been found to yield high-quality images without much computational penalty is a hybrid one in which zero-padding interpolation is first used to increase the density of samples on the polar grid, after which bilinear interpolation onto the Cartesian grid is performed. In this work, we attempt to account for the success of this approach relative to the CST approach in three ways. First and most importantly, we establish that zero-padding interpolation of periodic functions that are sampled in accordance with the Nyquist criterion--precisely the sort of function encountered in the angular dimension of the polar grid--is exact and equivalent to circular sampling theorem interpolation. Second, we point out that both approaches make comparable approximations in interpolating in the radial direction. Finally, we indicate that the error introduced by the bilinear interpolation step in the zero-padding approach can be minimized by choosing sufficiently large zero-padding factors.
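
The first point of the abstract is easy to verify numerically: for a periodic function sampled above its Nyquist rate, zero-padding the DFT (trigonometric interpolation) reproduces the fine-grid samples exactly, up to round-off. The helper below is our own illustration, not code from the paper.

import numpy as np

def zero_pad_interpolate(samples, factor):
    # Upsample periodic samples by zero-padding their (centred) DFT.
    n = samples.size
    spec = np.fft.fftshift(np.fft.fft(samples))
    pad_left = (n * factor - n) // 2
    pad_right = n * factor - n - pad_left
    padded = np.pad(spec, (pad_left, pad_right))
    return np.fft.ifft(np.fft.ifftshift(padded)).real * factor

n, factor = 16, 8
t_coarse = np.arange(n) / n
t_fine = np.arange(n * factor) / (n * factor)
signal = lambda t: np.cos(2 * np.pi * 3 * t) + 0.5 * np.sin(2 * np.pi * 5 * t)
interp = zero_pad_interpolate(signal(t_coarse), factor)
print(float(np.max(np.abs(interp - signal(t_fine)))))   # ~1e-15, i.e. exact to round-off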

Journal ArticleDOI
TL;DR: A family of iterative interpolation algorithms that use splines and preserve certain polynomials is introduced, and a comparison with cubic convolution, cubic spline, Daubechies' wavelet, and FFT-based interpolation is made.

Proceedings ArticleDOI
Larry Seller
21 Jul 1998
TL;DR: This sketch describes a simple way to set up coefficients to implement quadratic shading in rendering hardware; quadratic shading interpolates a second-order shading function to provide quality comparable to per-pixel Phong shading at a fraction of the complexity and with no restrictions on the number of light sources or the type of lighting.
Abstract: This sketch describes a simple way to set up coefficients to implement quadratic shading in rendering hardware. Quadratic shading interpolates a second-order shading function to provide quality comparable to per-pixel Phong shading [1], at a fraction of the complexity and with no restrictions on the number of light sources or the type of lighting. Simple adaptive algorithms allow software and hardware to fall back to Gouraud shading.

Triangle Preparation: Fig. 1 shows a triangle with lighting computed at six positions: vertex colors L, M, and N, and edge midpoint colors Q, R, and S. Any lighting algorithm may be used to produce these colors. Edge midpoint colors need not be computed or transmitted to the rendering hardware if they are similar to the values produced by Gouraud shading. This may be determined by existing adaptive algorithms for subdividing Gouraud shaded triangles [2]. Fig. 1 also defines the M and N vertex coordinates (x1, y1) and (x2, y2), as well as the terms i = x2/x1, j = y1/y2, and k = 1 - |ij|. The vertex with color L is translated to the origin and must be at a corner of the triangle's bounding box. As in the figure, assign the vertices such that i and j are no greater than 2, which is possible except for co-incident vertices. This reduces the numerical precision required for quadratic shading.

Quadratic Function Coefficients: We want to define a quadratic shading function I(x, y) = ax² + bxy + cy² + dx + ey + f that evaluates to the specified color values at the six sample points. Equations (1) define five intermediate color values. Equations (2) define the six coefficients. T/2, U/2, and (T + U - 2V)/2 measure the worst-case difference between Gouraud shading and quadratic shading on the three edges, which occurs at edge midpoints. Rendering hardware may compare these to a threshold to select whether to fall back to Gouraud shading.

(2)  a = 2(T + Uj² - 2Vj) / (k²x1²)
     b = 4(2V - Ti - Uj - Vk) / (k²x1y2)
     c = 2(U + Ti² - 2Vi) / (k²y2²)
     d = (G - Hj) / (kx1)
     e = (H - Gi) / (ky2)
     f = L

The quadratic shading function may be easily computed using forward differencing. A scanline rendering algorithm requires two additions to compute each intensity. It can be defined …
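
Because the intermediate values from Equations (1) are not reproduced above, a closed-form re-derivation of the coefficients is not attempted here. Instead, the sketch below fits the same quadratic I(x, y) = ax² + bxy + cy² + dx + ey + f by solving the 6×6 system through the three vertex colors and three edge-midpoint colors directly, which yields the same interpolant, though it is far less hardware-friendly than the sketch's coefficient setup. The triangle and colour values are made up for illustration.

import numpy as np

def quadratic_shading_coeffs(points, colors):
    # Fit I(x, y) = a x^2 + b x y + c y^2 + d x + e y + f through six samples:
    # three vertices and three edge midpoints of a (non-degenerate) triangle.
    x, y = points[:, 0], points[:, 1]
    A = np.column_stack([x ** 2, x * y, y ** 2, x, y, np.ones_like(x)])
    return np.linalg.solve(A, colors)

def eval_quadratic(coeffs, x, y):
    a, b, c, d, e, f = coeffs
    return a * x ** 2 + b * x * y + c * y ** 2 + d * x + e * y + f

verts = np.array([[0.0, 0.0], [4.0, 1.0], [1.0, 3.0]])     # L at the origin, then M, N
mids = 0.5 * (verts + np.roll(verts, -1, axis=0))           # edge midpoints
pts = np.vstack([verts, mids])
cols = np.array([0.2, 0.9, 0.5, 0.55, 0.75, 0.3])           # lighting at the six samples
coeffs = quadratic_shading_coeffs(pts, cols)
print(float(eval_quadratic(coeffs, *verts[1])))              # reproduces M's colour, 0.9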

Patent
23 Jun 1998
TL;DR: In this article, a path through a multi-dimensional hypercube from a base vertex to an opposite corner or vertex of the hypercube is selected by ranking the fractional components of the point or value to be interpolated.
Abstract: The invention involves selecting a path through a multi-dimensional hyper-cube from a base vertex to an opposite corner or vertex of the hyper-cube. The path is selected by ranking the fractional components of the point or value to be interpolated. An N-dimensional interpolation is performed according to this sequence. During an interpolation a base vertex for an input color value is determined. The output value for the base vertex is accumulated. The fractional values of the input color value are sorted and ranked according to magnitude to produce an interpolation sequence. The interpolations are performed for each axis of the N-dimensions by selecting an axis for interpolation based on the order, performing an interpolation corresponding to the selected axis producing an interpolation result, and accumulating the interpolation result.
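
For N = 3 this is essentially the well-known tetrahedral interpolation used for colour look-up tables, and the sketch below shows the general pattern: find the base vertex, rank the fractional components, and accumulate contributions while stepping one axis at a time toward the opposite corner of the hypercube. Function and variable names are ours.

import numpy as np

def sorted_path_interpolate(lut, point):
    # lut: N-dimensional table of output values on an integer grid.
    # point: N input coordinates in table units.
    point = np.asarray(point, dtype=float)
    base = np.floor(point).astype(int)          # base vertex of the enclosing hypercube
    frac = point - base
    order = np.argsort(-frac)                   # interpolation sequence: largest fraction first
    bounds = np.concatenate([[1.0], frac[order], [0.0]])
    weights = bounds[:-1] - bounds[1:]          # 1-f(1), f(1)-f(2), ..., f(N)
    vertex = base.copy()
    result = weights[0] * lut[tuple(vertex)]    # contribution of the base vertex
    for step, axis in enumerate(order, start=1):
        vertex[axis] += 1                       # move one axis toward the far corner
        result += weights[step] * lut[tuple(vertex)]
    return result

# Tiny 3-D table storing the sum of its grid coordinates, so the exact answer
# is simply the sum of the inputs; the path interpolation reproduces it.
g = np.arange(4, dtype=float)
lut = g[:, None, None] + g[None, :, None] + g[None, None, :]
print(sorted_path_interpolate(lut, [1.2, 2.7, 0.4]))   # -> 4.3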

Journal ArticleDOI
TL;DR: Nested spaces of multivariate periodic functions forming a non-stationary multiresolution analysis are investigated and the approach based on Boolean sums leads to sample and wavelet spaces of significantly lower dimension and good approximation order.
Abstract: Nested spaces of multivariate periodic functions forming a non-stationary multiresolution analysis are investigated. The scaling functions of these spaces are fundamental polynomials of Lagrange interpolation on a sparse grid. The approach based on Boolean sums leads to sample and wavelet spaces of significantly lower dimension and good approximation order. The algorithms for complete decomposition and reconstruction are of simple structure and low complexity.

Patent
05 Nov 1998
TL;DR: An edge processing circuit limits each interpolated pixel value on the basis of the surrounding pixel data, so that noise is not generated near edges, and an image generation means generates an image on the basis of the interpolated pixel data.
Abstract: PROBLEM TO BE SOLVED: To prevent noise from being generated near an edge even when interpolated pixel data are produced by interpolation. SOLUTION: The processor is equipped with: a horizontal-direction interpolation circuit 15a and a vertical-direction interpolation circuit 15b, which interpolate from two directions and generate interpolated pixel data for each direction on the basis of the pixel data at and around the position in question, the pixel data being generated from the image-pickup signal of a solid-state image-pickup element that receives image-pickup light through a color filter; an edge processing circuit 15c for calculating a limiting value for each of the interpolated pixel data on the basis of the pixel data surrounding the interpolated pixel data in each direction; a correlation detection part 16 for detecting correlation values indicating the degree of correlation of the interpolated pixel data in each direction; a weighting and adding circuit 22 for generating the final interpolated pixel data by weighting the interpolated pixel data of each direction on the basis of the correlation values; and an image generation means for generating an image on the basis of the interpolated pixel data. The horizontal interpolation circuit 15a and the vertical interpolation circuit 15b generate the interpolated pixel data on the basis of the limiting value. COPYRIGHT: (C)1999,JPO

Journal ArticleDOI
TL;DR: The RKHS-based optimal image interpolation method, presented by Chen and de Figueiredo (1993), is applied to scattered potential field measurements and is compared to bicubic spline interpolation, and is found to yield vastly superior results.
Abstract: The RKHS-based optimal image interpolation method, presented by Chen and de Figueiredo (1993), is applied to scattered potential field measurements. The RKHS which admits only interpolants consistent with Laplace's equation is defined and its kernel, derived. The algorithm is compared to bicubic spline interpolation, and is found to yield vastly superior results.

Patent
07 Aug 1998
TL;DR: In this article, an aliasing-reducing interpolation filtration with sub-pel precision is used in the motion-compensated prediction of moving image sequences, but this process requires less memory bandwidth in accessing reference images.
Abstract: For the determination of motion vectors, an aliasing-reducing interpolation filtration with sub-pel precision is used in the motion-compensated prediction of moving image sequences. More adjacent pixels are accessed in this interpolation filtration than in known bilinear interpolation. Asymmetrical filtration or reflection of pixels inside a reference image block is used to ensure that at most a block of the reference image containing (M+1)×(M+1) pixels is accessed for interpolation filtration. However, this process requires less memory bandwidth in accessing reference images.

Proceedings ArticleDOI
01 Nov 1998
TL;DR: In this article, a method for constructing fractal interpolation surfaces and volumes through points sampled on rectangular lattices is presented, which uses rectangular rather than triangular tilings, halving the number of required parameters.
Abstract: We present a method for constructing fractal interpolation surfaces and volumes through points sampled on rectangular lattices. Unlike other surface constructions, ours uses rectangular rather than triangular tilings, halving the number of required parameters. This method is no more complex than previous constructions and yet does not suffer from their limitations. Additionally, our construction extends easily to volumetric interpolation, for which there were no previous (continuous) constructions. In addition to an example with synthetic data, a real image is interpolated using a fractal surface. Limitations and possible improvements are mentioned.

Proceedings ArticleDOI
04 Oct 1998
TL;DR: In this paper, a maximum likelihood estimation based on the covariance properties (Kriging) was proposed to compute an interpolated dense displacement map, which is more expedient than Gaussian interpolation or Tikhonov (1977) regularizations.
Abstract: Given a sparse set of feature matches, we want to compute an interpolated dense displacement map. The application may be stereo disparity computation, flow computation, or non-rigid medical registration. Estimation of missing image data may also be phrased in this framework. Since the features are often very sparse, the interpolation model becomes crucial. We show that maximum likelihood estimation based on the covariance properties (kriging) has properties more expedient than methods such as Gaussian interpolation or Tikhonov (1977) regularization, including scale selection. The computational complexities are identical. We apply the maximum likelihood interpolation to growth analysis of the mandibular bone. Here, the features used are the crest-lines of the object surface.
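
A bare-bones version of the kriging interpolation described above, for sparse one-dimensional displacement samples, is sketched below. The Gaussian covariance model, its length scale, and the nugget term are our assumptions; the paper's covariance estimation from the data and its scale selection are not reproduced.

import numpy as np

def kriging_interpolate(x_data, y_data, x_eval, length_scale=2.0, nugget=1e-8):
    # Simple (zero-mean) kriging predictor with a Gaussian covariance model.
    cov = lambda a, b: np.exp(-((a[:, None] - b[None, :]) / length_scale) ** 2)
    C = cov(x_data, x_data) + nugget * np.eye(x_data.size)   # nugget for numerical stability
    weights = np.linalg.solve(C, y_data)
    return cov(x_eval, x_data) @ weights

# Sparse displacement samples along one image row, densified by kriging.
x_sparse = np.array([0.0, 2.0, 5.0, 9.0, 10.0])
d_sparse = np.array([0.0, 0.8, 1.5, 0.4, 0.0])
x_dense = np.linspace(0.0, 10.0, 21)
print(kriging_interpolate(x_sparse, d_sparse, x_dense))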

Proceedings ArticleDOI
01 Nov 1998
TL;DR: An optimal (MMSE) prefilter for image interpolation is derived based upon a model of the sensor used to capture the image and results indicate that prefiltering generally improves the quality of the interpolated images.
Abstract: In this paper we derive an optimal (MMSE) prefilter for image interpolation. This derivation is based upon a model of the sensor used to capture the image. To employ this model, we restate the interpolation problem in an intuitive, reconstruction-like fashion. Using a simple CCD sensor model, an example prefilter is derived. Simulations with this prefilter are performed using linear and cubic interpolation as well as an ad hoc, directional interpolation scheme. Quantitative and subjective results indicate that prefiltering generally improves the quality of the interpolated images.