
Showing papers on "Interpolation published in 1988"


Journal Article
TL;DR: The relationship between 'learning' in adaptive layered networks and the fitting of data with high-dimensional surfaces is discussed, leading naturally to a picture of 'generalization' in terms of interpolation between known data points and suggesting a rational approach to the theory of such networks.
Abstract: The relationship between 'learning' in adaptive layered networks and the fitting of data with high-dimensional surfaces is discussed. This leads naturally to a picture of 'generalization' in terms of interpolation between known data points and suggests a rational approach to the theory of such networks. A class of adaptive networks is identified which makes the interpolation scheme explicit. This class has the property that learning is equivalent to the solution of a set of linear equations, so these networks represent nonlinear relationships while having a guaranteed learning rule.

3,538 citations
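The abstract's central claim, that 'learning' in such a network amounts to solving a set of linear equations, is easy to make concrete. Below is a minimal sketch of radial-basis-function interpolation in that spirit; the Gaussian basis, its width, and all names are illustrative assumptions, not details from the paper.

```python
# Minimal sketch: an RBF network whose "learning" step is one linear solve.
import numpy as np

def rbf_fit(X, y, width=0.5):
    """Fit an interpolating RBF expansion by solving G w = y."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    G = np.exp(-d2 / (2.0 * width ** 2))   # Gram matrix of Gaussian basis functions
    return np.linalg.solve(G, y)           # "learning" = solving linear equations

def rbf_predict(X_train, w, X_new, width=0.5):
    d2 = ((X_new[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * width ** 2)) @ w

# The fitted surface passes through every training point; values elsewhere
# are "generalization" by interpolation between known data points.
X = np.random.rand(20, 2)
y = np.sin(X[:, 0]) + X[:, 1]
w = rbf_fit(X, y)
print(np.abs(rbf_predict(X, w, X) - y).max())   # ~0: exact interpolation
```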


Journal ArticleDOI
TL;DR: A well-posed variational formulation results from the use of a controlled-continuity surface model, and finite-element shape primitives yield a local discretization of the variational principle, giving an efficient algorithm for visible-surface reconstruction.
Abstract: A computational theory of visible-surface representations is developed. The visible-surface reconstruction process that computes these quantitative representations unifies formal solutions to the key problems of: (1) integrating multiscale constraints on surface depth and orientation from multiple visual sources; (2) interpolating dense, piecewise-smooth surfaces from these constraints; (3) detecting surface depth and orientation discontinuities to apply boundary conditions on interpolation; and (4) structuring large-scale, distributed-surface representations to achieve computational efficiency. Visible-surface reconstruction is an inverse problem. A well-posed variational formulation results from the use of a controlled-continuity surface model. Discontinuity detection amounts to the identification of this generic model's distributed parameters from the data. Finite-element shape primitives yield a local discretization of the variational principle. The result is an efficient algorithm for visible-surface reconstruction.

520 citations


Proceedings ArticleDOI
25 Oct 1988
TL;DR: A hierarchical block-matching algorithm for the estimation of displacement vector fields in digital television sequences is presented; it yields reliable and homogeneous displacement vector fields that are close to the true displacements.
Abstract: A hierarchical block-matching algorithm for the estimation of displacement vector fields in digital television sequences is presented. Known block-matching techniques fail frequently as a result of using a fixed measurement window size. By using distinct measurement window sizes at different levels of a hierarchy, the presented block-matching technique yields reliable and homogeneous displacement vector fields that are close to the true displacements, rather than merely a match in the sense of a minimum mean absolute luminance difference. In the environment of a low bit-rate hybrid coder for image sequences, the hierarchical block-matching algorithm is well suited for both motion-compensating prediction and motion-compensating interpolation. Compared to other highly sophisticated displacement estimation techniques, the computational effort is decreased drastically. Owing to its regularity and the very small number of necessary operations, the presented hierarchical block-matching algorithm can be implemented in hardware very easily.

460 citations
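As a rough illustration of the hierarchy described, the sketch below refines a displacement estimate with progressively smaller measurement windows and search ranges, scoring candidates by mean absolute luminance difference. The window and search sizes are invented for illustration and are not the paper's parameters.

```python
# Much-simplified hierarchical block matching: large windows give a coarse,
# reliable displacement, which smaller windows then refine.
import numpy as np

def mad(a, b):
    return np.mean(np.abs(a.astype(float) - b.astype(float)))

def match_block(prev, curr, y, x, win, search, dy0=0, dx0=0):
    """Refine displacement (dy0, dx0) of the window centred at (y, x)."""
    h = win // 2
    ref = curr[y - h:y + h + 1, x - h:x + h + 1]
    best, best_d = np.inf, (dy0, dx0)
    for dy in range(dy0 - search, dy0 + search + 1):
        for dx in range(dx0 - search, dx0 + search + 1):
            cand = prev[y - h + dy:y + h + 1 + dy, x - h + dx:x + h + 1 + dx]
            if cand.shape != ref.shape:
                continue  # candidate window falls outside the frame
            err = mad(ref, cand)
            if err < best:
                best, best_d = err, (dy, dx)
    return best_d

def hierarchical_match(prev, curr, y, x, levels=((21, 4), (9, 2), (5, 1))):
    dy, dx = 0, 0
    for win, search in levels:   # distinct measurement window sizes per level
        dy, dx = match_block(prev, curr, y, x, win, search, dy, dx)
    return dy, dx
```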


Proceedings ArticleDOI
01 Jan 1988
TL;DR: An efficient deterministic polynomial-time algorithm is developed for the sparse polynomial interpolation problem; it needs very few evaluations and has a simple NC implementation.
Abstract: An efficient deterministic polynomial time algorithm is developed for the sparse polynomial interpolation problem. The number of evaluations needed by this algorithm is very small. The algorithm also has a simple NC implementation.

370 citations


Journal ArticleDOI
TL;DR: A modified Shepard's method for fitting a surface to data values at scattered points in the plane is described; it has accuracy comparable to other local methods, and its computational efficiency is improved by using a cell method for nearest-neighbor searching.
Abstract: This paper presents a method of constructing a smooth function of two or more variables that interpolates data values at arbitrarily distributed points. Shepard's method for fitting a surface to data values at scattered points in the plane has the advantages of a small storage requirement and an easy generalization to more than two independent variables, but suffers from low accuracy and a high computational cost relative to some alternative methods. Localizations of this method have reasonably low computational costs, but remain relatively inaccurate. We describe a modified Shepard's method that, without sacrificing the advantages, has accuracy comparable to other local methods. Computational efficiency is also improved by using a cell method for nearest-neighbor searching. Test results for two and three independent variables are presented.

362 citations
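For orientation, the sketch below implements the original global Shepard scheme that this paper modifies: an inverse-distance-weighted average of the data values. The modified method's local nodal functions and cell-based neighbor search are not reproduced here.

```python
# Classic (global) Shepard interpolation of scattered data.
import numpy as np

def shepard(X, f, q, p=2.0, eps=1e-12):
    """Interpolate scattered data (X, f) at query points q."""
    d = np.sqrt(((q[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    w = 1.0 / np.maximum(d, eps) ** p          # inverse-distance weights
    out = (w * f).sum(-1) / w.sum(-1)
    rows, cols = np.nonzero(d < eps)           # query coincides with a data point
    out[rows] = f[cols]                        # reproduce data values exactly
    return out
```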


Journal ArticleDOI
TL;DR: In this paper, the role of underrelaxation in momentum interpolation for the calculation of flow with nonstaggered grids is discussed.
Abstract: (1988). Role of Underrelaxation in Momentum Interpolation for Calculation of Flow with Nonstaggered Grids. Numerical Heat Transfer: Vol. 13, No. 1, pp. 125-132.

351 citations


Journal ArticleDOI
TL;DR: In this paper, a particle tracking algorithm is developed to extract accurate Lagrangian statistics, such as velocity autocorrelations, structure functions, and frequency spectra, from numerically calculated velocity fields.

341 citations


Journal ArticleDOI
E. Maeland
TL;DR: A study of different cubic interpolation kernels in the frequency domain is presented that reveals novel aspects of both cubic spline and cubic convolution interpolation.
Abstract: A study of different cubic interpolation kernels in the frequency domain is presented that reveals novel aspects of both cubic spline and cubic convolution interpolation. The kernel used in cubic convolution is of finite support and depends on a parameter to be chosen at will. At the Nyquist frequency, the spectrum attains a value that is independent of this parameter. Exactly the same value is found at the Nyquist frequency in the cubic spline interpolation. If a strictly positive interpolation kernel is of importance in applications, cubic convolution with the parameter value zero is recommended.

267 citations
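The one-parameter cubic convolution kernel under discussion has a standard closed form (the one popularized by Keys); a sketch follows. With the parameter a = 0 the kernel is nonnegative everywhere, which matches the recommendation in the abstract; the interior-only evaluation below is an illustrative simplification.

```python
# The one-parameter cubic convolution kernel and 1-D interpolation with it.
import numpy as np

def cubic_convolution_kernel(s, a=-0.5):
    s = np.abs(s)
    out = np.zeros_like(s, dtype=float)
    core = s <= 1.0
    tail = (s > 1.0) & (s < 2.0)
    out[core] = (a + 2.0) * s[core] ** 3 - (a + 3.0) * s[core] ** 2 + 1.0
    out[tail] = a * (s[tail] ** 3 - 5.0 * s[tail] ** 2 + 8.0 * s[tail] - 4.0)
    return out   # h(0) = 1, h(±1) = h(±2) = 0: an interpolating kernel

def interp1(samples, x, a=-0.5):
    """Evaluate at fractional position x; assumes 1 <= x <= len(samples) - 3."""
    i = int(np.floor(x))
    idx = np.arange(i - 1, i + 3)                  # the four nearest samples
    return float(cubic_convolution_kernel(x - idx, a) @ samples[idx])
```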


Journal ArticleDOI
TL;DR: A technique for registration of images with geometric distortions is described, which uses two surface splines to represent the X-component and the Y-component of a mapping function.
Abstract: A technique for registration of images with geometric distortions is described. This technique uses two surface splines to represent the X-component and the Y-component of a mapping function. The mapping function is constructed so that it maps corresponding control points in the images exactly on top of each other and maps other points by interpolation, using information about local geometric distortion between the images.

261 citations
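A sketch of the registration idea: fit one spline for the X-component and one for the Y-component of the mapping so that control points map exactly onto their counterparts. The r^2 log(r^2) basis used below is the classical surface-spline (thin-plate) kernel; the paper's exact formulation may differ, and the sample points are synthetic.

```python
# Two surface splines (X and Y components) define an image-to-image mapping.
import numpy as np

def _phi(r2, eps=1e-20):
    return np.where(r2 < eps, 0.0, r2 * np.log(r2 + eps))

def fit_surface_spline(p, v):
    """Coefficients of f(x,y) = a0 + a1*x + a2*y + sum_i w_i * phi(|p - p_i|^2)."""
    n = len(p)
    K = _phi(((p[:, None, :] - p[None, :, :]) ** 2).sum(-1))
    P = np.hstack([np.ones((n, 1)), p])
    A = np.zeros((n + 3, n + 3))
    A[:n, :n], A[:n, n:], A[n:, :n] = K, P, P.T    # standard spline system
    return np.linalg.solve(A, np.concatenate([v, np.zeros(3)]))

def eval_surface_spline(coef, p, q):
    r2 = ((q[:, None, :] - p[None, :, :]) ** 2).sum(-1)
    n = len(p)
    return _phi(r2) @ coef[:n] + coef[n] + q @ coef[n + 1:]

# Hypothetical corresponding control points in the two images:
src = np.random.rand(12, 2)
dst = src + 0.05 * np.random.randn(12, 2)
cx = fit_surface_spline(src, dst[:, 0])            # X-component spline
cy = fit_surface_spline(src, dst[:, 1])            # Y-component spline
mapped = np.stack([eval_surface_spline(cx, src, src),
                   eval_surface_spline(cy, src, src)], axis=1)
print(np.abs(mapped - dst).max())                  # ~0: control points map exactly
```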


Journal ArticleDOI
TL;DR: In this paper, a short characteristic method based on parabolic approximation of the source function is developed and applied to the solution of the two-dimensional radiative transfer problem on Cartesian meshes.
Abstract: A short characteristic method based on a parabolic approximation of the source function is developed and applied to the solution of the two-dimensional radiative transfer problem on Cartesian meshes. The method is significantly faster for the evaluation of multidimensional radiation fields than methods currently in use. Convergence as a function of grid resolution is discussed, and linear and parabolic upwind interpolation are compared.

220 citations


Journal ArticleDOI
TL;DR: In this paper, a detailed study of the pressure-weighted interpolation method (PWIM) using a non-staggered grid proposed by Rhie and Chow [7] was conducted.
Abstract: A detailed study of the pressure-weighted interpolation method (PWIM) using a non-staggered grid, proposed by Rhie and Chow [7], was conducted. Its implementation in the SIMPLEC algorithm in order to obtain results independent of the relaxation factor is described. A comparison of predicted results for two test cases, one a flow in a shear-driven cavity and the other a laminar contraction flow, was made using both staggered and nonstaggered grids. Both hybrid and QUICK differencing schemes were used. QUICK differencing with a nonstaggered grid yielded results in closest agreement with experimental and numerical data. It was also found that in regions of very rapidly varying pressure gradients, PWIM can predict physically unrealistic convective velocities.

Book ChapterDOI
04 Jul 1988
TL;DR: This work considers the problem of interpolating sparse multivariate polynomials from their values and presents efficient algorithms for finding the rank of certain special Toeplitz systems arising in the Ben-Or and Tiwari algorithm and for solving transposed Vandermonde systems of equations.
Abstract: We consider the problem of interpolating sparse multivariate polynomials from their values. We discuss two algorithms for sparse interpolation, one due to Ben-Or and Tiwari (1988) and the other due to Zippel (1988). We present efficient algorithms for finding the rank of certain special Toeplitz systems arising in the Ben-Or and Tiwari algorithm and for solving transposed Vandermonde systems of equations, the use of which greatly improves the time complexities of the two interpolation algorithms.
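The transposed Vandermonde structure mentioned above is easy to exhibit. Once the values m_j of the unknown monomials at the evaluation point are known, the t coefficients c_j satisfy sum_j m_j^i * c_j = a_i for i = 0, ..., t-1, where a_i is the polynomial's value at the i-th power point. The dense solve below only shows the structure; the point of the paper is to solve such systems much faster.

```python
# Recovering sparse-polynomial coefficients from a transposed Vandermonde system.
import numpy as np

def coeffs_from_evaluations(m, a):
    """Solve V^T c = a, where V is the Vandermonde matrix of monomial values m."""
    V = np.vander(m, N=len(m), increasing=True)   # V[i, j] = m[i]**j
    return np.linalg.solve(V.T, a)                 # dense O(t^3); paper does better

# Hypothetical example: f = 3*x^2*y + 5*y^3, evaluated at powers of (2, 3).
# Monomial values at (2, 3): m1 = 2**2 * 3 = 12, m2 = 3**3 = 27.
m = np.array([12.0, 27.0])
a = np.array([3.0 + 5.0,                # f at (1, 1), the 0th power point
              3.0 * 12 + 5.0 * 27])     # f at (2, 3), the 1st power point
print(coeffs_from_evaluations(m, a))    # -> [3. 5.]
```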

Journal ArticleDOI
TL;DR: Rational functions are presented which guarantee well-conditioned interpolation on a real interval or a circle and cannot have any poles there; they can be evaluated at least as efficiently as the corresponding interpolation polynomials, and the accuracy of their approximation to a given function often compares favorably with that of spline interpolants.
Abstract: Polynomial interpolation is known to be ill-conditioned if the interpolating points are not chosen in special ways; classical rational interpolation can give better results, but does not work in all cases and the corresponding functions can show poles in the interval of interpolation. We present here rational functions which guarantee well-conditioned interpolation on a real interval or a circle and cannot have any poles there. They can be evaluated at least as efficiently as the corresponding interpolation polynomials and the accuracy of their approximation to a given function often compares favorably with that of spline interpolants.
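A sketch of the simplest pole-free rational interpolant of this family, written in barycentric form with purely alternating weights; the specific weight choice is an assumption made here for illustration, not a quotation from the paper.

```python
# Barycentric rational interpolation with alternating weights (pole-free on
# the real line), evaluated on a Runge-type test function.
import numpy as np

def barycentric_rational(x_nodes, f_nodes, x):
    w = (-1.0) ** np.arange(len(x_nodes))       # alternating weights
    diff = x[:, None] - x_nodes[None, :]
    exact = np.isclose(diff, 0.0)
    diff[exact] = 1.0                            # avoid division by zero for now
    t = w / diff
    out = (t * f_nodes).sum(-1) / t.sum(-1)
    rows, cols = np.nonzero(exact)
    out[rows] = f_nodes[cols]                    # interpolation at the nodes
    return out

x_nodes = np.linspace(0.0, 1.0, 11)              # equispaced nodes are fine here
f_nodes = 1.0 / (1.0 + 25.0 * (2.0 * x_nodes - 1.0) ** 2)
x = np.linspace(0.0, 1.0, 201)
y = barycentric_rational(x_nodes, f_nodes, x)    # no poles, no wild oscillation
```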

Journal ArticleDOI
TL;DR: In this paper, a criterion for the positivity of a cubic polynomial on a given interval is derived, and a necessary and sufficient condition is given under which cubic C1-spline interpolants are nonnegative.
Abstract: A criterion for the positivity of a cubic polynomial on a given interval is derived. By means of this result a necessary and sufficient condition is given under which cubic C1-spline interpolants are nonnegative. Further, since such interpolants are not uniquely determined, the geometric curvature is minimized in order to select one of them. The arising optimization problem is solved numerically via dualization.
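The paper derives a closed-form criterion; the sketch below checks the same property directly instead: a cubic is nonnegative on [a, b] exactly when it is nonnegative at both endpoints and at every real critical point inside the interval.

```python
# Direct positivity check for a cubic on an interval (not the paper's
# closed-form criterion, but an equivalent test of the same property).
import numpy as np

def cubic_nonnegative_on(coeffs, a, b, tol=1e-12):
    """coeffs = (c3, c2, c1, c0) for c3*t^3 + c2*t^2 + c1*t + c0."""
    p = np.poly1d(coeffs)
    candidates = [a, b]
    for r in p.deriv().roots:                    # at most two critical points
        if abs(r.imag) < tol and a < r.real < b:
            candidates.append(r.real)
    return all(p(t) >= -tol for t in candidates)

print(cubic_nonnegative_on((1.0, 0.0, 0.0, 0.0), 0.0, 1.0))   # t^3 on [0,1]: True
print(cubic_nonnegative_on((1.0, 0.0, -1.0, 0.0), 0.0, 1.0))  # t^3 - t: False
```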

Journal ArticleDOI
TL;DR: In this paper, the application of the maximum entropy method to select a realistic size distribution is discussed; model results show that the method is very useful for linear inverse problems of this type and that the resulting solution is the most uniform one consistent with the data.
Abstract: The extraction of particle size distributions from small-angle neutron scattering data is an example of a practical linear inverse problem. Additional assumptions are necessary to obtain a unique solution. The application of the maximum entropy method to select a realistic size distribution is discussed. Principal features of the method include a proper treatment of experimental errors, no interpolation or smoothing of data, no fitting to empirical models, and guaranteed positivity of the solution everywhere in spite of statistical noise in the data. The resulting solution is the most uniform consistent with the data. Model data results are presented to show that the maximum entropy criterion proves very useful in problems of this type.

Proceedings ArticleDOI
14 Nov 1988
TL;DR: A method is presented for finding a threshold surface which draws on the ideas used in other methods while attempting to overcome some of their disadvantages; applied to a few images, it gives better results than two versions of the Chow and Kaneko algorithm, matching human performance quite well.
Abstract: A method is presented for finding a threshold surface which involves the ideas used in other methods but attempts to overcome some of their disadvantages. The method uses the gradient map of the image to point at well-defined portions of object boundaries in it. Both the locations and the gray levels at these boundary points make them good choices for local thresholds. These point values are then interpolated, yielding the threshold surface. A method for fitting a surface to this set of points, which are scattered in a manner not known in advance, becomes necessary. Several possible approaches are discussed, and the implementation of one of them is described in detail. Two versions of the C.K. Chow and T. Kaneko algorithm (1972) and the present algorithm are applied to a few images, and the latter is shown to give better results, matching human performance quite well.
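A sketch of the pipeline described: take high-gradient pixels as local threshold samples, interpolate them into a full threshold surface, and binarize. The gradient criterion, the keep fraction, and the interpolant (SciPy's griddata) are stand-ins for the paper's specific choices.

```python
# Adaptive thresholding via an interpolated threshold surface.
import numpy as np
from scipy.interpolate import griddata
from scipy.ndimage import sobel

def adaptive_threshold(img, keep_fraction=0.02):
    g = img.astype(float)
    mag = np.hypot(sobel(g, 0), sobel(g, 1))       # gradient map of the image
    k = max(int(keep_fraction * img.size), 4)
    flat = np.argpartition(mag.ravel(), -k)[-k:]   # strongest-edge pixels
    pts = np.column_stack(np.unravel_index(flat, img.shape))
    vals = g.ravel()[flat]                         # gray levels at boundary points
    yy, xx = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    surface = griddata(pts, vals, (yy, xx), method='linear')
    fill = griddata(pts, vals, (yy, xx), method='nearest')
    surface = np.where(np.isnan(surface), fill, surface)  # outside the convex hull
    return g > surface                             # thresholded (binary) image
```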

Journal ArticleDOI
TL;DR: An 8-bit 100-MHz full-Nyquist analog-to-digital (A/D) converter using a folding and interpolation architecture is presented, together with a high-level model describing distortion caused by timing errors.
Abstract: An 8-bit 100-MHz full-Nyquist analog-to-digital (A/D) converter using a folding and interpolation architecture is presented. In a folding system a multiple use of comparator stages is implemented. A reduction in the number of comparators, equal to the number of times the signal is folded, is obtained. However, every quantization level requires a folding stage, thus no reduction in input circuitry is found. Interpolation between the outputs of the folding stages generates additional folding signals without the need for input stages. A reduction in input circuitry equal to the number of interpolations is obtained. The converter is implemented in an oxide-isolated bipolar process, requiring 800 mW from a single 5.2-V supply. A high-level model describing distortion caused by timing errors is presented. Considering clock timing accuracies needed to obtain the speed requirement, this distortion is thought to be the main speed limitation.

Journal ArticleDOI
01 Jun 1988
TL;DR: This work shows how smooth and natural-looking interpolations can be obtained by minimizing a combination of the control energy and the roughness of the trajectory of the objects in 3D space.
Abstract: Motion interpolation, which arises in many situations such as keyframe animation, is the synthesis of a sequence of images portraying continuous motion by interpolating between a set of keyframes. If the keyframes are specified by parameters of moving objects at several instants of time (e.g., position, orientation, velocity), then the goal is to find their values at the intermediate instants of time. Previous approaches to this problem have constructed these intermediate, or in-between, frames by interpolating each of the motion parameters independently. This often produces unnatural motion, since the physics of the problem is not considered and each parameter is obtained independently. Our approach models the motion of objects and their environment by differential equations obtained from classical mechanics. In order to satisfy the constraints imposed by the keyframes we apply external control. We show how smooth and natural-looking interpolations can be obtained by minimizing a combination of the control energy and the roughness of the trajectory of the objects in 3D space. A general formulation is presented which allows several trade-offs between various parameters that control motion. Although optimal parameter values resulting in the best subjectively looking motion are not yet known, our simulations have produced smooth and natural motion that is subjectively better than that produced by other interpolation methods, such as cubic splines.

Journal ArticleDOI
TL;DR: A polynomial displacement basis for the three-node plate bending element (Zienkiewicz-triangle) is developed from a relaxed C1-continuity requirement called the interpolation test, which provides a general convergence criterion for non-conforming shape functions and a practical guideline to select a proper displacement basis.
Abstract: A polynomial displacement basis for the three-node plate bending element (Zienkiewicz-triangle) is developed from a relaxed C1-continuity requirement called the interpolation test. The test provides a general convergence criterion for non-conforming shape functions and a practical guideline to select a proper displacement basis. The resulting simple displacement type element passes the patch test. Several reduced numerical integration schemes are discussed and numerical testing provides a comparison with the standard element formulated by Zienkiewicz.

Journal ArticleDOI
TL;DR: An interpolation method is proposed for generating the intermediate contours between a start contour and a goal contour, which provides a powerful tool for reconstructing the 3D object from serial cross sections.
Abstract: An interpolation method is proposed for generating the intermediate contours between a start contour and a goal contour. Coupled with the display method for voxel-based objects, it provides a powerful tool for reconstructing a 3D object from serial cross sections. The method tries to fill in the lost information between two slices, assuming that there is smooth change between them. This is a reasonable assumption provided that the sampling rate is at least twice the Nyquist rate, in which case the result of the interpolation is expected to be very close to reality. One of the major advantages of this approach is its ability to handle the branching problem. Another major advantage is that after each intermediate contour is generated and sent to the display device, there is no need to keep it in memory unless the solid model will be used for further processing. Thus, the space complexity of this algorithm is relatively low.

Journal ArticleDOI
TL;DR: In this paper, a unified approach to the construction of confidence bands in nonparametric density estimation and regression is described, based on interpolation formulae from numerical differentiation; the arguments generate a variety of bands depending on the assumptions one is prepared to make about derivatives of the unknown function.

Journal ArticleDOI
01 Aug 1988-Tellus A
TL;DR: In this paper, a new analytical technique is proposed to interpolate data between the lowest model level and the earth's surface in numerical weather prediction systems, which can be applied either to the verification of forecasts or to the determination of “observation minus guess” increments in data assimilation.
Abstract: A new analytical technique is proposed to interpolate data between the lowest model level and the earth's surface in numerical weather prediction systems. The data can be interpolated to any height, especially to that of routine measurements. The method can therefore be applied either to the verification of forecasts or to the determination of “observation minus guess” increments in data assimilation. The procedure assumes that a Monin-Obukhov-type flux calculation has been previously performed when integrating the model.
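In the simplest (neutral) case the surface-layer interpolation involved reduces to the logarithmic wind profile, as sketched below; the stability corrections implied by a full Monin-Obukhov flux calculation are omitted, and the numbers are invented.

```python
# Neutral log-law interpolation between the lowest model level and a
# measurement height (stability psi-functions omitted for brevity).
import numpy as np

def wind_at_height(u_lowest, z_lowest, z_target, z0):
    """Wind speed at z_target from the value at the lowest model level."""
    return u_lowest * np.log(z_target / z0) / np.log(z_lowest / z0)

# e.g. lowest model level at 30 m, observation height 10 m, roughness 0.1 m:
print(wind_at_height(8.0, 30.0, 10.0, 0.1))   # interpolated 10-m wind speed
```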

Journal ArticleDOI
TL;DR: A new algorithm for scattered data interpolation is described that achieves cubic precision and C2 continuity at very little additional cost, and is among the most accurate methods available.
Abstract: We describe a new algorithm for scattered data interpolation. The method is similar to that of Algorithm 660 but achieves cubic precision and C2 continuity at very little additional cost. An accompanying article presents test results that show the method to be among the most accurate available.

Journal ArticleDOI
TL;DR: In this article, the authors derive upper bounds for the McMillan degree of all H∞-optimal controllers associated with design problems which may be embedded in a certain generalized regular configuration.

Journal ArticleDOI
Timothy H. Keho, Wafik B. Beydoun
TL;DR: In this paper, a rapid nonrecursive prestack Kirchhoff migration is implemented by computing the Green's functions (both traveltimes and amplitudes) in variable velocity media with the paraxial ray method.
Abstract: A rapid nonrecursive prestack Kirchhoff migration is implemented (for 2-D or 2.5-D media) by computing the Green’s functions (both traveltimes and amplitudes) in variable velocity media with the paraxial ray method. Since the paraxial ray method allows the Green’s functions to be determined at points which do not lie on the ray, two‐point ray tracing is not required. The Green’s functions between a source or receiver location and a dense grid of thousands of image points can be estimated to a desired accuracy by shooting a sufficiently dense fan of rays. For a given grid of image points, the paraxial ray method reduces computation time by one order of magnitude compared with interpolation schemes. The method is illustrated using synthetic data generated by acoustic ray tracing. Application to VSP data collected in a borehole adjacent to a reef in Michigan produces an image that clearly shows the location of the reef.

Patent
20 May 1988
TL;DR: A motion-compensated interpolator for digital television images comprises a three-dimensional variable separable finite impulse response interpolation filter operating in the horizontal, vertical, and temporal domains.
Abstract: A motion compensated interpolator for digital television images comprises a three-dimensional variable separable finite impulse response interpolation filter operating in the horizontal, vertical and temporal domains.

Proceedings ArticleDOI
T.P. Bronez
11 Apr 1988
TL;DR: The utility of the method is demonstrated through a design example and simulation using a circular array and an eigenvector-based bearing estimator for linear, uniformly-sampled arrays.
Abstract: Bearing estimation is a fundamental array processing task for which attractive algorithms exist when the array is linear and uniformly-sampled. Since circumstances often require the use of an irregular two-dimensional array, the problem of interpolating a synthetic linear, uniformly sampled array from the real array is considered. Accurate interpolation is achieved by interpolating several synthetic arrays, each of which represents the real array over a limited sector of bearing angles. The utility of the method is demonstrated through a design example and simulation using a circular array and an eigenvector-based bearing estimator for linear, uniformly-sampled arrays.
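A sketch of sector-wise array interpolation in the spirit described: over a limited sector of bearings, compute a matrix that maps the real (here circular) array's steering vectors onto those of a synthetic uniform linear array in the least-squares sense, then map snapshots with the same matrix. Geometries, sector, and sizes are illustrative assumptions.

```python
# Least-squares interpolation matrix from a circular array to a synthetic ULA.
import numpy as np

def steering(positions, bearings_rad, wavelength):
    """positions: (m, 2) sensor coordinates; returns an (m, n_bearings) matrix."""
    k = 2.0 * np.pi / wavelength
    d = np.stack([np.cos(bearings_rad), np.sin(bearings_rad)])  # unit vectors
    return np.exp(1j * k * positions @ d)

m, wavelength = 8, 1.0
ang = 2.0 * np.pi * np.arange(m) / m
real_pos = 0.6 * np.column_stack([np.cos(ang), np.sin(ang)])    # circular array
ula_pos = np.column_stack([0.5 * np.arange(m), np.zeros(m)])    # synthetic ULA

sector = np.deg2rad(np.linspace(30.0, 60.0, 31))                # one bearing sector
A_real = steering(real_pos, sector, wavelength)
A_ula = steering(ula_pos, sector, wavelength)
B = A_ula @ np.linalg.pinv(A_real)     # minimizes ||B A_real - A_ula|| over the sector

# Any snapshot x from the real array maps to the synthetic ULA as B @ x, after
# which standard ULA bearing estimators can be applied within this sector.
```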

01 Sep 1988
TL;DR: In this paper, an explicit conservative control-volume formulation makes use of a universal limiter for transient interpolation modeling of the advective transport equations, applied to unsteady, one-dimensional scalar pure advection at constant velocity.
Abstract: A fresh approach is taken to the embarrassingly difficult problem of adequately modeling simple pure advection. An explicit conservative control-volume formulation makes use of a universal limiter for transient interpolation modeling of the advective transport equations. This ULTIMATE conservative difference scheme is applied to unsteady, one-dimensional scalar pure advection at constant velocity, using three critical test profiles: an isolated sine-squared wave, a discontinuous step, and a semi-ellipse. The goal, of course, is to devise a single robust scheme which achieves sharp monotonic resolution of the step without corrupting the other profiles. The semi-ellipse is particularly challenging because of its combination of sudden and gradual changes in gradient. The ULTIMATE strategy can be applied to explicit conservation schemes of any order of accuracy. Second-order schemes are unsatisfactory, showing the steepening and clipping typical of currently popular so-called high-resolution shock-capturing or TVD schemes. The ULTIMATE third-order upwind scheme is highly satisfactory for most flows of practical importance. Higher-order methods give predictably better step resolution, although even-order schemes generate a (monotonic) waviness in the difficult semi-ellipse simulation. Little is to be gained above ULTIMATE fifth-order upwinding, which gives results close to the ultimate for which one might hope.
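The sketch below applies a universal-limiter constraint of the kind described to QUICK (third-order upwind) face values for 1-D constant-velocity advection of a step: in locally monotone regions the normalized face value is confined between the normalized centre value and min(1, centre/Courant), otherwise the scheme falls back to first-order upwind. This is an illustrative reading of the strategy, not the paper's canonical implementation.

```python
# Universal-limited QUICK advection of a step on a periodic 1-D grid.
import numpy as np

def limited_faces(phi, c):
    """Limited face values phi_{i+1/2} for uniform flow in the +x direction."""
    U, C, D = np.roll(phi, 1), phi, np.roll(phi, -1)  # far-upwind, centre, downwind
    face = (6.0 * C + 3.0 * D - U) / 8.0              # raw QUICK face value
    safe = np.abs(D - U) > 1e-12
    den = np.where(safe, D - U, 1.0)
    ct = (C - U) / den                                # normalized centre value
    ft = (face - U) / den                             # normalized face value
    ft = np.clip(ft, ct, np.minimum(1.0, ct / c))     # universal-limiter bounds
    mono = safe & (ct >= 0.0) & (ct <= 1.0)           # locally monotone data
    return np.where(mono, U + ft * den, C)            # else first-order upwind

def advect(phi, c, steps):
    """Explicit conservative update: phi_i -= c * (f_{i+1/2} - f_{i-1/2})."""
    phi = phi.astype(float)
    for _ in range(steps):
        f = limited_faces(phi, c)
        phi = phi - c * (f - np.roll(f, 1))
    return phi

phi0 = np.where(np.arange(200) < 50, 1.0, 0.0)        # discontinuous step profile
phi = advect(phi0, c=0.4, steps=100)                  # step stays monotone and sharp
```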

Patent
Tatsuro Juri, Minoru Etoh
29 Feb 1988
TL;DR: In this article, the interpolation information is superimposed on the least significant bit of the value of the non-thinned-out pixel, with the view of effecting the superimposition without increasing the amount of transmission data.
Abstract: A sub-Nyquist sampling encoder and decoder has a plurality of interpolators provided in an encoder and corresponding plural interpolators provided in a decoder. Information indicating which interpolator can be used to minimize the interpolation error is superimposed on the value of a non-thinned-out pixel and transmitted from the encoder to the decoder, and the decoder receiving the information is permitted to always perform optimum interpolation. The interpolation information is superimposed on the least significant bit of the value of the non-thinned-out pixel, with the view of effecting the superimposition without increasing the amount of transmission data and without degrading the quality of the video signal.
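A toy numeric illustration of the mechanism claimed: the encoder tries two candidate interpolators for a thinned-out pixel and signals the better one in the least significant bit of a transmitted neighbour; the decoder reads the bit back. Both interpolators here are arbitrary stand-ins, not the patent's.

```python
# Signalling the best interpolator in the LSB of a transmitted pixel value.
import numpy as np

def encode_pixel(kept, left, right, true_mid):
    candidates = [(left + right) // 2, right]      # two candidate interpolators
    best = int(np.argmin([abs(c - true_mid) for c in candidates]))
    return (kept & ~1) | best                      # flag in the LSB of the kept pixel

def decode_mid(kept, left, right):
    candidates = [(left + right) // 2, right]
    return candidates[kept & 1]                    # LSB selects the interpolator

kept, left, right, true_mid = 200, 100, 120, 118
tx = encode_pixel(kept, left, right, true_mid)
print(decode_mid(tx, left, right))                 # -> 120 (closer to 118 than 110)
```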

Journal ArticleDOI
TL;DR: These algorithms extend existing methods for regular grids to grids consisting of non-equidistant convex four-point meshes, in order to obtain short CPU times and possible vectorization.