
Showing papers on "Interpolation published in 1981"


Journal ArticleDOI
TL;DR: It can be shown that the order of accuracy of the cubic convolution method is between that of linear interpolation and that of cubic splines.
Abstract: Cubic convolution interpolation is a new technique for resampling discrete data. It has a number of desirable features which make it useful for image processing. The technique can be performed efficiently on a digital computer. The cubic convolution interpolation function converges uniformly to the function being interpolated as the sampling increment approaches zero. With the appropriate boundary conditions and constraints on the interpolation kernel, it can be shown that the order of accuracy of the cubic convolution method is between that of linear interpolation and that of cubic splines. A one-dimensional interpolation function is derived in this paper. A separable extension of this algorithm to two dimensions is applied to image data.
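The one-dimensional kernel behind cubic convolution can be sketched in a few lines. This is a minimal sketch assuming the common kernel parameter a = -1/2 and ignoring the boundary conditions the abstract mentions; the function names are ours:

```python
def keys_kernel(x, a=-0.5):
    """Piecewise-cubic convolution kernel; a = -0.5 gives third-order accuracy."""
    x = abs(x)
    if x < 1:
        return (a + 2) * x**3 - (a + 3) * x**2 + 1
    if x < 2:
        return a * x**3 - 5 * a * x**2 + 8 * a * x - 4 * a
    return 0.0

def interp(samples, x):
    """Resample unit-spaced data at a non-integer position x.
    Interior points only; boundary handling is omitted in this sketch."""
    i = int(x)
    return sum(samples[i + k] * keys_kernel(x - (i + k)) for k in (-1, 0, 1, 2))
```

Because the kernel reproduces low-degree polynomials, resampling exact samples of a linear function returns the linear function itself.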

3,280 citations


Journal ArticleDOI
E. Hogenauer1
TL;DR: A class of digital linear-phase finite impulse response (FIR) filters for decimation and interpolation is presented; they require no multipliers and use limited storage, making them an economical alternative to conventional implementations for certain applications.
Abstract: A class of digital linear phase finite impulse response (FIR) filters for decimation (sampling rate decrease) and interpolation (sampling rate increase) is presented. They require no multipliers and use limited storage, making them an economical alternative to conventional implementations for certain applications. A digital filter in this class consists of cascaded ideal integrator stages operating at a high sampling rate and an equal number of comb stages operating at a low sampling rate. Together, a single integrator-comb pair produces a uniform FIR. The number of cascaded integrator-comb pairs is chosen to meet design requirements for aliasing or imaging error. Design procedures and examples are given for both decimation and interpolation filters with the emphasis on frequency response and register width.
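The integrator-comb decimator described above (now usually called a CIC filter) can be sketched as follows; the differential delay M = 1 and the parameter names are assumptions, not taken from the paper:

```python
def cic_decimate(x, R=4, N=2):
    """Hogenauer-style decimator: N integrators at the input rate, N combs
    at the output rate, no multipliers (additions and subtractions only).
    Differential delay M = 1 assumed; the DC gain is (R * M) ** N."""
    integ = [0] * N
    comb_prev = [0] * N
    out = []
    for i, s in enumerate(x):
        # integrator cascade at the high sampling rate
        v = s
        for j in range(N):
            integ[j] += v
            v = integ[j]
        # every R-th sample, run the comb cascade at the low rate
        if (i + 1) % R == 0:
            c = integ[-1]
            for j in range(N):
                c, comb_prev[j] = c - comb_prev[j], c
            out.append(c)
    return out
```

A constant input of 1 settles to the DC gain (R*M)^N = 16 for R = 4, N = 2, after a short transient.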

1,372 citations


Journal ArticleDOI
01 Mar 1981
TL;DR: This paper presents a tutorial overview of multirate digital signal processing as applied to systems for decimation and interpolation and discusses a theoretical model for such systems (based on the sampling theorem), and shows how various structures can be derived to provide efficient implementations of these systems.
Abstract: The concepts of digital signal processing are playing an increasingly important role in the area of multirate signal processing, i.e. signal processing algorithms that involve more than one sampling rate. In this paper we present a tutorial overview of multirate digital signal processing as applied to systems for decimation and interpolation. We first discuss a theoretical model for such systems (based on the sampling theorem) and then show how various structures can be derived to provide efficient implementations of these systems. Design techniques for the linear-time-invariant components of these systems (the digital filter) are discussed, and finally the ideas behind multistage implementations for increased efficiency are presented.
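As a minimal illustration of interpolation (sampling rate increase), here is zero-stuffing by L = 2 followed by a short lowpass FIR. The triangular kernel is an assumed choice that reduces to linear interpolation; it is far simpler than the filter designs the paper surveys:

```python
def upsample_by_2(x):
    """Interpolate by L = 2: insert a zero between samples, then apply a
    short lowpass FIR. The triangular kernel [0.5, 1, 0.5] makes this
    equivalent to linear interpolation."""
    L = 2
    zero_stuffed = []
    for s in x:
        zero_stuffed.append(s)
        zero_stuffed.extend([0.0] * (L - 1))
    h = [0.5, 1.0, 0.5]
    # full convolution, then drop the filter's leading transient
    y = [sum(zero_stuffed[n - k] * h[k]
             for k in range(len(h)) if 0 <= n - k < len(zero_stuffed))
         for n in range(len(zero_stuffed) + len(h) - 1)]
    return y[len(h) // 2 : len(h) // 2 + L * len(x)]
```

Upsampling the ramp [0, 1, 2] yields the linearly interpolated values 0, 0.5, 1, 1.5, 2 (the final sample decays through the filter tail).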

584 citations


Journal ArticleDOI
TL;DR: In this article, a three-dimensional statistical interpolation method, multivariate in geopotential height, thickness and wind, is described, which has been implemented in the ECMWF operational global data-assimilation scheme, used for routine forecasting and for producing FGGE level III-b analyses.
Abstract: A three-dimensional statistical interpolation method, multivariate in geopotential height, thickness and wind, is described. The method has been implemented in the ECMWF operational global data-assimilation scheme, used for routine forecasting and for producing FGGE level III-b analyses. It is distinguished by the large number of data used simultaneously, enabling full exploitation of the potential of the three-dimensional multivariate technique, and by the incorporation of a statistical quality control check on each datum using the analysis itself. Some simple examples illustrating the properties of the method are presented, and a detailed study is made of the effect of various analysis parameters on one practical example.
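The statistical-interpolation update can be illustrated in its simplest scalar form, one grid value and one observation; the multivariate, three-dimensional machinery of the ECMWF scheme is not attempted here, and all names are ours:

```python
def oi_update(background, obs, var_b, var_o):
    """Scalar statistical (optimal) interpolation update: weight the
    innovation (obs - background) by the ratio of background error
    variance to total error variance."""
    w = var_b / (var_b + var_o)
    analysis = background + w * (obs - background)
    analysis_var = (1.0 - w) * var_b  # analysis error variance is reduced
    return analysis, analysis_var
```

With equal background and observation error variances, the analysis falls halfway between the two values and halves the error variance.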

573 citations


Journal ArticleDOI
TL;DR: In this article, a sampling strategy is presented for soil surveys in which an individual soil property is of interest and can be measured. It depends on first determining the semi-variogram for the property accurately, which must be done in a prior reconnaissance stage of the survey.
Abstract: A sampling strategy is presented for soil survey in which an individual soil property is of interest and can be measured. It depends on first determining accurately the semi-variogram for the property, and this must be done in a prior reconnaissance stage of a survey. Then, from the semi-variogram, estimation variances can be found for any combination of block size and sampling density by the methods of kriging. Alternatively, for a given block size, the sampling density needed to achieve a predetermined precision (maximum estimation variance) can be determined. The strategy is optimal in the sense that the sampling effort is the least possible to achieve the precision desired. An equilateral triangular configuration of sampling points is best where variation is isotropic, but a square grid at the same density is very nearly as good, and will usually be preferred for convenience. Where there is simple anisotropic variation, optimal sampling is achieved by choosing a rectangular grid with sides in the same proportion to one another as the slopes of the semi-variogram in the directions of maximum and minimum variation.
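The reconnaissance step, estimating the semi-variogram, can be sketched for a one-dimensional transect of equally spaced samples using the classical estimator (the function name is ours):

```python
def semivariogram(z, max_lag):
    """Experimental semi-variogram of a regularly spaced 1-D transect:
    gamma(h) = half the mean squared increment at lag h."""
    gamma = {}
    for h in range(1, max_lag + 1):
        diffs = [(z[i + h] - z[i]) ** 2 for i in range(len(z) - h)]
        gamma[h] = 0.5 * sum(diffs) / len(diffs)
    return gamma
```

For a pure linear trend z = x the increments at lag h are all h, so gamma(h) = h^2 / 2.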

225 citations


Journal ArticleDOI
TL;DR: It is concluded that sophisticated mechanisms are not required to account for the main properties of vernier acuity with moving targets, and it is furthermore suggested that the spatiotemporal channels of human vision may be the interpolation filters themselves.
Abstract: Stroboscopic presentation of a moving object can be interpolated by our visual system into the perception of continuous motion. The precision of this interpolation process has been explored by measuring the vernier discrimination threshold for targets displayed stroboscopically at a sequence of stations. The vernier targets, moving at constant velocity, were presented either with a spatial offset or with a temporal offset or with both. The main results are: (1) vernier acuity for spatial offset is rather invariant over a wide range of velocities and separations between the stations (see Westheimer & McKee 1975); (2) vernier acuity for temporal offset depends on spatial separation and velocity. At each separation there is an optimal velocity such that the strobe interval is roughly constant at about 30 ms; optimal acuity decreases with increasing separation; (3) blur of the vernier pattern decreases acuity for spatial offsets, but improves acuity for temporal offsets (at high velocities and large separations); (4) a temporal offset exactly compensates the equivalent (at the given velocity) spatial offset only for a small separation and optimal velocity; otherwise the spatial offset dominates. A theoretical analysis of the interpolation problem suggests a computational scheme based on the assumption of constant-velocity motion. This assumption reflects a constraint satisfied in normal vision over the short times and small distances normally relevant for the interpolation process. A reasonable implementation of this scheme only requires a set of independent, direction-selective spatiotemporal channels, that is, receptive fields with the different sizes and temporal properties revealed by psychophysical experiments. It is concluded that sophisticated mechanisms are not required to account for the main properties of vernier acuity with moving targets.
It is furthermore suggested that the spatiotemporal channels of human vision may be the interpolation filters themselves. Possible neurophysiological implications are briefly discussed.

213 citations


Journal ArticleDOI
TL;DR: An exact interpolation scheme is proposed which, in practice, can be approached with arbitrary accuracy using well-conditioned algorithms; the experimental results demonstrate the feasibility of direct FT reconstruction of CT data.
Abstract: Direct Fourier transform (FT) reconstruction of images in computerized tomography (CT) is not widely used because of the difficulty of precisely interpolating from polar to Cartesian samples. In this paper, an exact interpolation scheme is proposed which, in practice, can be approached with arbitrary accuracy using well-conditioned algorithms. Several features of the direct FT method are discussed. A method that allows angular band limiting of the data before processing -to avoid angular aliasing artifacts in the reconstructed image-is discussed and experimentally verified. The experimental results demonstrate the feasibility of direct FT reconstruction of CT data.

200 citations


Journal ArticleDOI
P. Delsarte1, Y. Genin1, Y. Kamp1
TL;DR: In this paper, the authors apply the standard Nevanlinna–Pick problem to various technical domains: interpolation by reflectance functions, polynomial stability checking, cascade synthesis of passive one-ports, and model reduction with a Hankel norm criterion.
Abstract: The paper is concerned with applications of the standard Nevanlinna–Pick problem in various technical domains, namely the following: interpolation by reflectance functions, polynomial stability checking, cascade synthesis of passive one-ports, and model reduction with a Hankel norm criterion. Some fundamental results on the Nevanlinna–Pick problem are shown to be of a definite interest in each of these subjects.
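The solvability criterion for the Nevanlinna–Pick problem is positive semidefiniteness of the Pick matrix. A sketch for real interpolation data (avoiding complex conjugates), with a 2x2 positive-semidefiniteness check; the function names are ours:

```python
def pick_matrix(zs, ws):
    """Pick matrix for the data f(z_i) = w_i with |z_i| < 1. A Schur-class
    interpolant (analytic and bounded by one on the unit disk) exists iff
    this matrix is positive semidefinite. Real data assumed here, so no
    complex conjugates appear."""
    n = len(zs)
    return [[(1 - ws[i] * ws[j]) / (1 - zs[i] * zs[j]) for j in range(n)]
            for i in range(n)]

def is_psd_2x2(p):
    # for a symmetric 2x2 matrix: PSD iff diagonal entries and determinant >= 0
    return (p[0][0] >= 0 and p[1][1] >= 0
            and p[0][0] * p[1][1] - p[0][1] * p[1][0] >= 0)
```

The data f(0) = 0, f(0.5) = 0.99 violates the Schwarz lemma (|f(0.5)| must not exceed 0.5), and the Pick test correctly rejects it.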

179 citations


Journal ArticleDOI
TL;DR: In this paper, a new numerical scheme is proposed for the dispersion-convection equation which combines the utility of a fixed grid in Eulerian coordinates with the computational power of the Lagrangian method.

172 citations


Journal ArticleDOI
TL;DR: A class of numerical methods for the treatment of delay differential equations is developed in this paper, based on the well-known Runge–Kutta–Fehlberg methods.
Abstract: A class of numerical methods for the treatment of delay differential equations is developed. These methods are based on the well-known Runge–Kutta–Fehlberg methods. The retarded argument is approximated by an appropriate multipoint Hermite interpolation. The inherent jump discontinuities in the various derivatives of the solution are considered automatically. Problems with piecewise continuous right-hand side and initial function are treated too. Real-life problems are used for numerical tests and for comparison with other methods published in the literature.
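The idea of handling the retarded argument by interpolating the stored solution can be sketched with forward Euler and linear (rather than Hermite) interpolation; the test equation y'(t) = -y(t - 1) with unit history is our choice, not one of the paper's examples:

```python
def solve_dde(t_end, tau=1.0, h=1e-3):
    """Forward-Euler integration of y'(t) = -y(t - tau) with y(t) = 1 for
    t <= 0. The retarded argument is evaluated by linear interpolation on
    the stored solution grid (the paper uses RKF steps with multipoint
    Hermite interpolation; this is a deliberately crude stand-in)."""
    ys = [1.0]                      # ys[k] approximates y(k * h)
    for k in range(int(round(t_end / h))):
        t_del = k * h - tau
        if t_del <= 0.0:
            y_del = 1.0             # constant initial function
        else:
            j = int(t_del / h)      # linear interpolation of the history
            frac = t_del / h - j
            y_del = ys[j] + frac * (ys[j + 1] - ys[j])
        ys.append(ys[-1] - h * y_del)
    return ys[-1]
```

Integrating by hand over the first two intervals gives y(1) = 0 and y(2) = -1/2, which the sketch reproduces to Euler accuracy.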

144 citations


Journal ArticleDOI
TL;DR: In this article, the effect of sample size on the precision of kriging estimators has been investigated, and it was shown that for samples of size less than approximately 50, kriging offered no clear advantage over least squares in a Bayesian sense.
Abstract: Kriging, a technique for interpolating nonstationary spatial phenomena, has recently been applied to such diverse hydrologic problems as interpolation of piezometric heads and transmissivities estimated from hydrogeologic surveys and estimation of mean areal precipitation accumulations. An important concern for users of this technique is the effect of sample size on the precision of estimates obtained. Comparisons made between conventional least squares and kriging estimators indicate that for samples of size less than approximately 50, kriging offered no clear advantage over least squares in a Bayesian sense, although kriging may be preferable from the minimax viewpoint. A network design algorithm was also developed; tests performed using the algorithm indicated that the information content of identified networks was relatively insensitive to the size of the pilot network. These results suggest that within the range of sample sizes typically of hydrologic interest, kriging may hold more potential for network design than for data analysis.

Journal ArticleDOI
Svante Janson1
TL;DR: In this paper, it was shown that several interpolation functors, including real and complex methods, are minimal or maximal extensions from a single couple of Banach spaces, and various consequences are drawn from this property.

Journal ArticleDOI
TL;DR: In this paper, an exact interpolation formula was proposed for reconstructing computerized tomographic (CT) imagery by direct Fourier methods, which is shown to yield superior results compared with other interpolation methods.
Abstract: In this paper an exact interpolation formula forms the basis for reconstructing computerized tomographic (CT) imagery by direct Fourier methods. Practical variations of exact interpolation are compared with other interpolation methods (i.e., nearest neighbor, etc.) and are shown to yield superior imagery. Images produced by the direct Fourier approach using near-exact interpolation are shown to be equal in quality with those produced by filtered convolution backprojection (FCBP). Moreover, the direct Fourier approach computes an image in O(N² log N) time versus O(N³) for the FCBP method.

Patent
13 Apr 1981
TL;DR: In this article, information defining elements of a picture is estimated by interpolation using information from related locations in preceding and succeeding versions of the picture, and the related locations are determined by forming an estimate of the displacement of objects in the picture.
Abstract: Information defining elements of a picture is estimated by interpolation using information from related locations in preceding and succeeding versions of the picture. The related locations are determined by forming an estimate of the displacement of objects in the picture. Displacement estimates are advantageously formed recursively, with updates being formed only in moving areas of the picture. If desired, an adaptive technique can be used to permit motion compensated interpolation or fixed position interpolation, depending upon which produces better results.
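A one-dimensional sketch of the patent's idea: estimate the displacement between the preceding and succeeding frames, then interpolate the middle frame along the motion trajectory. Block matching over the whole signal stands in for the recursive estimator the patent describes, and an even displacement is assumed so the midpoint falls on the sample grid:

```python
def estimate_shift(prev, nxt, max_d=5):
    """Integer displacement d minimizing the mean squared difference
    between nxt and prev shifted by d (a simple block-matching stand-in
    for the recursive displacement estimator)."""
    best_d, best_err = 0, float('inf')
    n = len(prev)
    for d in range(-max_d, max_d + 1):
        pairs = [(nxt[i], prev[i - d]) for i in range(n) if 0 <= i - d < n]
        err = sum((a - b) ** 2 for a, b in pairs) / len(pairs)
        if err < best_err:
            best_d, best_err = d, err
    return best_d

def motion_compensated_midframe(prev, nxt):
    # average along the motion trajectory; samples falling outside the
    # frame are treated as zero in this sketch
    d = estimate_shift(prev, nxt)
    h = d // 2
    n = len(prev)
    return [0.5 * ((prev[i - h] if 0 <= i - h < n else 0.0)
                   + (nxt[i + (d - h)] if 0 <= i + (d - h) < n else 0.0))
            for i in range(n)]
```

A unit pulse moving from position 4 to position 8 is interpolated to position 6, whereas fixed-position averaging would smear it across both locations.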

Journal ArticleDOI
TL;DR: An optimally regularized (filtered) Fourier series can be used most effectively for estimating higher-order derivatives of noisy data sequences, such as occur in biomechanical investigations.

Journal ArticleDOI
TL;DR: In this article, numerical computations have been performed of various two-dimensional, elliptic flows at high Reynolds number with a view to assessing the relative merits of the widely used hybrid (i.e., upwind/central) interpolation and the recently proposed quadratic-upstream interpolation of Leonard, known as QUICK.

Book ChapterDOI
01 Jan 1981
TL;DR: In this article, the authors present two time-series outlier models, point out their ordinary regression analogues and the corresponding outlier patterns, and present robust alternatives to the least-squares method of fitting autoregressive-moving-average models.
Abstract: Outliers in time series can wreak havoc with conventional least-squares procedures, just as in the case of ordinary regression. This paper presents two time-series outlier models, points out their ordinary regression analogues and the corresponding outlier patterns, and presents robust alternatives to the least-squares method of fitting autoregressive-moving-average models. The main emphasis is on robust estimation in the presence of additive outliers. This results in the problem having an errors-in-variables aspect. While several methods of robust estimation for this problem are presented, the most attractive approach is an approximate non-Gaussian maximum-likelihood type method which involves the use of a robust non-linear filter/one-sided interpolator with data-dependent scaling. Robust smoothing/two-sided outlier interpolation, forecasting, model selection, and spectral analysis are briefly mentioned, as are the problems of estimating location and dealing with trends, seasonality, and missing data. Some examples of applying the methodology are given.

Journal ArticleDOI
P.R. Smith1
TL;DR: The application of the method of three-point bilinear interpolation is shown to generate a smoothly interpolated image, free from erroneous substructure generated by the interpolation scheme itself.

Journal ArticleDOI
TL;DR: In this paper, the Rietveld profile-analysis refinement procedure has been applied to X-ray powder diffractometer data collected from tin(II) oxide with Cu Kα radiation.

Journal ArticleDOI
TL;DR: A number of Reference-Pulse interpolation methods are described, all of which function in an online iterative mode controlled by an external interrupt source and are compared in terms of accuracy, maximum radius, uniformity of feedrate along the circular path, and maximum feedrate attainable.
Abstract: Interpolation techniques for CNC manufacturing systems are either of the Reference-Pulse or Reference-Word type. Reference-Pulse interpolators emit a sequence of pulses as reference signals to the control loops, whereas Reference-Word interpolators provide binary words as references to Sampled-Data control loops. In the present paper, a number of Reference-Pulse interpolation methods are described, all of which function in an online iterative mode controlled by an external interrupt source. These methods are compared in terms of accuracy, maximum radius, uniformity of feedrate along the circular path, and maximum feedrate attainable. Selection of the appropriate interpolation method is found to depend on the specific application. Reference-Word interpolators will be covered in a subsequent paper.
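A minimal Reference-Pulse circular interpolator in the spirit described: each iteration emits one axis pulse, chosen greedily to keep the tool position near the circle. The greedy rule is our assumption for illustration, not one of the paper's methods:

```python
def quarter_circle_pulses(R):
    """Emit unit pulses ('-x' or '+y') tracing the circle x^2 + y^2 = R^2
    counterclockwise from (R, 0) to (0, R). At each step the pulse that
    minimizes the radial error |x^2 + y^2 - R^2| is chosen."""
    x, y = R, 0
    pulses, max_err = [], 0
    while (x, y) != (0, R):
        err_x = abs((x - 1) ** 2 + y * y - R * R)   # candidate: pulse on -x
        err_y = abs(x * x + (y + 1) ** 2 - R * R)   # candidate: pulse on +y
        if err_y <= err_x and y < R:
            y += 1
            pulses.append('+y')
        else:
            x -= 1
            pulses.append('-x')
        max_err = max(max_err, abs(x * x + y * y - R * R))
    return pulses, max_err
```

The quarter circle of radius R takes exactly 2R pulses (R on each axis), and the greedy rule keeps the squared-radius error bounded by roughly 2R.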

Journal ArticleDOI
TL;DR: It is found that in the models considered, small-scale noise can be ascribed to resonance anomalies associated with the method of spatial discretization, and a formal proof is given of the time-stepping stability of a general, discretized form of Laplace's tidal equations.

Journal ArticleDOI
TL;DR: In this paper, two explicit representations of a C1 quintic interpolant over triangles are derived by generalization of Coons' methods and Bernstein-Bezier methods, respectively.
Abstract: Two explicit representations of a C1 quintic interpolant over triangles are given. These representations are derived by generalization of Coons' methods and Bernstein–Bézier methods, respectively.

Journal ArticleDOI
TL;DR: In this article, a two-variable approach to model reduction with Hankel norm criterion is discussed, and the problem is proved to be reducible to obtaining a 2-variable all-pass rational function, interpolating a set of parametric values at specified points inside the unit circle.
Abstract: A two-variable approach to the model reduction problem with Hankel norm criterion is discussed. The problem is proved to be reducible to obtaining a two-variable all-pass rational function, interpolating a set of parametric values at specified points inside the unit circle. A polynomial formulation and the properties of the optimal Hankel norm approximations are then shown to result directly from the general form of the solution of the interpolation problem considered.

Journal ArticleDOI
TL;DR: In this paper, an accurate and efficient numerical method based on the rigorous Sommerfeld theory is described for modeling antennas near an interface such as the ground, which can be used for modeling an antenna within 10-6 wavelengths of the ground for about two to four times the computation time for the same antenna in free space.
Abstract: An accurate and efficient numerical method based on the rigorous Sommerfeld theory is described for modeling antennas near an interface such as the ground. The Sommerfeld integrals are evaluated by numerical integration along contours in the complex plane and two-dimensional interpolation is used subsequently to obtain the many Sommerfeld integral values needed for the moment-method solution of an integral equation. These methods permit modeling an antenna within 10⁻⁶ wavelengths of the ground for about two to four times the computation time for the same antenna in free space. Results showing currents and radiation patterns are included.

Journal ArticleDOI
R. Wiley1
TL;DR: This paper deals with approximate methods for demodulating FM signals using their zero crossings; a first-order (linear) interpolation method is devised, and the results are compared with those of lower-order methods.
Abstract: This paper deals with approximate methods for demodulating FM signals using their zero crossings. A first-order (linear) interpolation method is devised and analyzed. Computer programs were then prepared to compare this first-order interpolator to the more usual zero-order interpolator. Higher order interpolation is briefly discussed, and the results of using a second-order interpolator are compared to those using the lower order methods.
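The first-order (linear) interpolation of zero-crossing times can be sketched as follows; the unmodulated sinusoidal test signal and the sampling rate are our choices for a sanity check, since successive zero crossings of a sinusoid are half a period apart:

```python
import math

def zero_crossing_times(samples, fs):
    """Zero-crossing instants refined by first-order (linear)
    interpolation between the two straddling samples."""
    times = []
    for i in range(len(samples) - 1):
        a, b = samples[i], samples[i + 1]
        if a == 0.0 or a * b < 0.0:
            frac = 0.0 if a == b else a / (a - b)   # sub-sample offset
            times.append((i + frac) / fs)
    return times

# sanity check: recover the frequency of an unmodulated 5 Hz carrier
fs, f0 = 1000.0, 5.0
sig = [math.sin(2 * math.pi * f0 * n / fs) for n in range(int(fs))]
zc = zero_crossing_times(sig, fs)
est_freq = 1.0 / (2.0 * (zc[1] - zc[0]))   # crossings are half a period apart
```

For an FM signal the same spacing, evaluated crossing by crossing, tracks the instantaneous frequency.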

Journal ArticleDOI
TL;DR: This paper presents a new algorithm for the solution of linear equations with a Vandermonde coefficient matrix. The algorithm uses a block decomposition of the matrix, which also leads to an efficient solution of the interpolation problem with complex-conjugate interpolation points where the coefficients of the interpolating polynomial are real.
Abstract: This paper presents a new algorithm for the solution of linear equations with a Vandermonde coefficient matrix. The algorithm can also be used to solve the dual problem. Since the algorithm uses a block decomposition of the matrix, it is especially suitable for parallel computation. A variation of the block decomposition leads to the efficient solution of the interpolation problem with complex-conjugate interpolation points where the coefficients of the interpolating polynomial are real. In addition the algorithm can be used to solve some kinds of confluent Vandermonde systems.
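For comparison, the classical (non-block) route to the dual problem, recovering the interpolating polynomial's coefficients, uses Newton divided differences; the paper's block decomposition is a different, parallelizable route and is not reproduced here:

```python
def vandermonde_solve_dual(xs, ys):
    """Coefficients (lowest order first) of the polynomial interpolating
    the points (xs[i], ys[i]), i.e. the solution of the dual Vandermonde
    system, via classical Newton divided differences in O(n^2)."""
    n = len(xs)
    dd = [float(v) for v in ys]
    for j in range(1, n):                     # divided-difference table
        for i in range(n - 1, j - 1, -1):
            dd[i] = (dd[i] - dd[i - 1]) / (xs[i] - xs[i - j])
    poly = [dd[-1]]                           # expand the Newton form
    for k in range(n - 2, -1, -1):
        new = [0.0] * (len(poly) + 1)
        for i, c in enumerate(poly):
            new[i + 1] += c                   # multiply by x ...
            new[i] -= c * xs[k]               # ... minus xs[k]
        new[0] += dd[k]
        poly = new
    return poly
```

Interpolating samples of x² + 1 at x = 1, 2, 3 recovers the coefficients [1, 0, 1].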


Journal ArticleDOI
TL;DR: In this article, the Inverse Fourier Transform is avoided by direct frequency domain fitting: either interpolation (exact for selected points) or weighted least squares approximation, and the method of recursive convolutions is generalized for complex exponentials.
Abstract: Recursive convolutions are believed to be the basic approach for digital calculation of electromagnetic transients on transmission systems. They require step responses expressed by means of exponential functions. This paper presents the theory for obtaining an arbitrary number of exponential components - with real or complex exponents directly from the frequency domain transfer function. The Inverse Fourier Transform is avoided by direct frequency domain fitting: either interpolation (exact for selected points) or weighted least squares approximation. Finally the method of recursive convolutions is generalized for complex exponentials.
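Recursive convolution with a single real exponential can be sketched as follows; a zero-order-hold assumption on the input (our simplification) makes the one-step recursion exact for piecewise-constant inputs:

```python
import math

def exp_recursive_convolution(x, a, dt):
    """One-pass evaluation of y(t_n) = integral over (0, t_n) of
    exp(-a * (t_n - tau)) * x(tau) d tau, assuming x is constant over
    each step (zero-order hold). Only one multiply-add per step is
    needed, which is the appeal of recursive convolution."""
    alpha = math.exp(-a * dt)
    beta = (1.0 - alpha) / a
    y = [0.0]
    for n in range(1, len(x)):
        y.append(alpha * y[-1] + beta * x[n])
    return y
```

For a unit step input the recursion reproduces the analytical response (1 - e^{-a t}) / a to machine precision.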

Journal ArticleDOI
TL;DR: In this paper, a method based on forecasting techniques is proposed to estimate missing observations in time series; in terms of mean square error, it is compared with the minimum mean square error estimate.
Abstract: A method based on forecasting techniques is proposed to estimate missing observations in time series. In terms of mean square error, this method is compared with the minimum mean square error estimate.
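For a concrete special case (an AR(1) series, our choice), a forecast-based estimate and the minimum mean square error interpolator can be compared directly:

```python
def interpolate_missing_ar1(x_prev, x_next, phi):
    """Two estimates of a missing value x_t in an AR(1) series
    x_t = phi * x_{t-1} + e_t:
    - a naive average of the one-step forecast and backcast,
    - the minimum-MSE interpolator phi / (1 + phi^2) * (x_prev + x_next),
      which is E[x_t | x_{t-1}, x_{t+1}] for a Gaussian AR(1)."""
    forecast = phi * x_prev
    backcast = phi * x_next
    naive = 0.5 * (forecast + backcast)
    optimal = phi / (1.0 + phi * phi) * (x_prev + x_next)
    return naive, optimal
```

The two estimates differ by the factor 2 phi² / (1 + phi²) applied to the naive average, so they coincide only as phi approaches 1.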

Journal ArticleDOI
TL;DR: In this article, the authors compared three interpolation techniques, optimum interpolation, eigenvector interpolation and distance-density interpolation, for SO₂, NO, NO₂ and O₃, and concluded that the interpolation errors and the associated persistence in space and time, as given by mutual correlations, should be specified with respect to pure space- or space-time variability.