
Showing papers on "Interpolation published in 1976"


Journal ArticleDOI
TL;DR: In this article, a technique for the objective analysis of oceanic data is developed and tested on simulated data. It rests on a standard statistical result, the Gauss-Markov theorem, which gives the least-square-error linear estimate of a physical variable from measurements at a limited number of data points, the statistics of the field being estimated (in the form of space-time spectra), and the measurement errors.
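
The Gauss-Markov estimate underlying such objective-analysis schemes is easy to sketch. Below is a minimal illustration, assuming a known covariance function and uncorrelated measurement noise; the function names and parameter values are illustrative, not from the paper:

```python
import numpy as np

def gauss_markov_estimate(x_obs, y_obs, x_grid, cov, noise_var):
    """Least-square-error linear (Gauss-Markov) estimate of a field on
    x_grid from noisy point observations, given a covariance cov(a, b)."""
    # Covariance among observation points, plus measurement-error variance
    C = cov(x_obs[:, None], x_obs[None, :]) + noise_var * np.eye(len(x_obs))
    # Covariance between grid points and observation points
    c = cov(x_grid[:, None], x_obs[None, :])
    # Optimal linear weights and estimate: y_hat = c C^{-1} y
    return c @ np.linalg.solve(C, y_obs)

# Example: Gaussian covariance with unit variance and length scale L (assumed)
L = 1.0
cov = lambda a, b: np.exp(-((a - b) ** 2) / (2 * L**2))
x_obs = np.array([0.0, 1.0, 2.5])
y_obs = np.array([0.3, 0.9, -0.2])
x_grid = np.linspace(0, 3, 7)
print(gauss_markov_estimate(x_obs, y_obs, x_grid, cov, noise_var=0.01))
```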

1,039 citations


Book ChapterDOI
01 Jan 1976
TL;DR: In this chapter, the authors introduce basic notation and definitions of interpolation spaces and discuss a few general results, the most important being the Aronszajn-Gagliardo theorem.
Abstract: In this chapter we introduce some basic notation and definitions. We discuss a few general results on interpolation spaces. The most important one is the Aronszajn-Gagliardo theorem.

540 citations



Journal ArticleDOI
TL;DR: A method is described for smooth interpolation between random data points in two or more dimensions; it gives a smooth surface passing exactly through the given data points and is suitable for graphical applications.
Abstract: A method is described for smooth interpolation between random data points in two or more dimensions. The method gives a smooth surface passing exactly through the given data points and is suitable for graphical applications. It has practical advantages over other published algorithms, including the similar one recently described by Maude (1973): it is both easier to implement and faster in computer operation.

204 citations


Journal ArticleDOI
TL;DR: An efficient algorithm for obtaining solutions is given and shown to be closely related to a well-known algorithm of Levinson and the Jury stability test, which suggests that they are fundamental in the numerical analysis of stable discrete-time linear systems.
Abstract: It is common practice to partially characterize a filter with a finite portion of its impulse response, with the objective of generating a recursive approximation. This paper discusses the use of mixed first- and second-order information, in the form of finite portions of the impulse response and autocorrelation sequences. The discussion encompasses a number of techniques and algorithms for this purpose. Two approximation problems are studied: an interpolation problem and a least squares problem. These are shown to be closely related. The linear systems which form the solutions to these problems are shown to be stable. An efficient algorithm for obtaining solutions is given and shown to be closely related to a well-known algorithm of Levinson and the Jury stability test. The close connection between these algorithms suggests that they are fundamental in the numerical analysis of stable discrete-time linear systems.
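
Levinson's algorithm, to which the paper's solution method is related, solves the Toeplitz normal equations arising from an autocorrelation sequence in O(n²) operations. A minimal sketch in its generic textbook form, not the paper's mixed-information variant:

```python
import numpy as np

def levinson_durbin(r, order):
    """Levinson's recursion: from autocorrelations r[0..order], compute the
    prediction-error filter a (a[0] = 1) and the residual error power."""
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for m in range(1, order + 1):
        # Reflection coefficient k_m from the current prediction residual
        acc = r[m] + np.dot(a[1:m], r[m - 1:0:-1])
        k = -acc / err
        # Order update: a_new[i] = a[i] + k * a[m - i], i = 1..m
        a[1:m + 1] = a[1:m + 1] + k * a[m - 1::-1]
        err *= (1.0 - k * k)
    return a, err

r = np.array([1.0, 0.5, 0.25, 0.125])   # autocorrelation lags 0..3 (made up)
a, err = levinson_durbin(r, 3)
print(a, err)
```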

196 citations


Journal ArticleDOI
TL;DR: In this paper, a grid of values approximating the smoothest surface passing through the control points is generated; this grid can then be contoured automatically by standard routines, producing acceptable results.

146 citations


Journal ArticleDOI
G. P. Nevai
TL;DR: In this paper, it was shown that if a continuous function satisfies some growth conditions, then the corresponding Lagrange interpolation process converges in every Lp (1 < p < ∞) provided that the weight function is chosen in a suitable way.

105 citations


Journal ArticleDOI
TL;DR: A method of interpolation that improves the amplitude resolution of an analog-to-digital converter; appropriate weights in the accumulation provide finer resolution, less spectral distortion, and white quantization noise, and a practical method overcomes a thresholding action that distorts low-amplitude input signals.
Abstract: We present and analyze a method of interpolation that improves the amplitude resolution of an analog-to-digital converter. The technique requires feedback around a quantizer that operates at high speed and digital accumulation of its quantized values to provide a PCM output. We show that use of appropriate weights in the accumulation has important advantages for providing finer resolution, less spectral distortion, and white quantization noise. The theoretical discussion is supplemented by the report of a practical converter designed especially to show up the strengths and weaknesses of the technique. This converter comprises a sigma-delta modulator operating at 8 MHz and an accumulation of the 1-bit code with triangularly distributed weights. 13-bit resolution at 8 kwords/s is realized by periodically dumping the accumulation to the output. We present a practical method for overcoming a thresholding action that distorts low-amplitude input signals.
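
As a rough illustration of the idea, here is a first-order sigma-delta loop followed by triangularly weighted accumulation; the loop order, rates, and weights are simplified stand-ins for the hardware described, not the paper's circuit:

```python
import numpy as np

def sigma_delta(x):
    """First-order sigma-delta modulator: a 1-bit stream at the input rate."""
    s, bits = 0.0, np.empty(len(x))
    for i, v in enumerate(x):
        y = 1.0 if s >= 0.0 else -1.0   # 1-bit quantizer
        bits[i] = y
        s += v - y                       # integrate the quantization error
    return bits

def triangular_decimate(bits, n):
    """Accumulate the 1-bit code with triangular weights (equivalent to two
    cascaded boxcar averages), then decimate to the output word rate."""
    w = np.convolve(np.ones(n), np.ones(n)) / n**2  # triangle, length 2n - 1
    return np.convolve(bits, w, mode='valid')[::n]

# The paper's 8 MHz -> 8 kwords/s would use n = 1000; n = 32 here for brevity
t = np.arange(4096) / 4096
x = 0.5 * np.sin(2 * np.pi * 8 * t)
print(triangular_decimate(sigma_delta(x), 32)[:5])
```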

95 citations


Journal ArticleDOI
TL;DR: In this paper, it was shown that there is a positive lower bound,c, to the uniform error in any scheme designed to recover all functions of a certain smoothness from their values at a fixed finite set of points.
Abstract: It is shown that there is a positive lower bound, c, to the uniform error in any scheme designed to recover all functions of a certain smoothness from their values at a fixed finite set of points. This lower bound is essentially attained by interpolation at the points by splines with canonical knots. Estimates of c are also given.
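
In symbols, and with notation assumed here rather than taken from the paper (F a class of functions of prescribed smoothness, x_1, …, x_n the fixed data points, R any recovery scheme using only the sampled values), the result says

$$\inf_{R}\,\sup_{f\in F}\,\bigl\|f - R\bigl(f(x_1),\dots,f(x_n)\bigr)\bigr\|_{\infty} \;\ge\; c \;>\; 0,$$

with spline interpolation at canonical knots essentially attaining the bound.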

89 citations


Journal ArticleDOI
TL;DR: The SIGMA1 (σ1) kernel broadening method is presented to Doppler broaden, to any required accuracy, a cross section described by a table of values and linear-linear interpolation in energy-cr…
Abstract: The SIGMA1 (σ1) kernel broadening method is presented to Doppler broaden to any required accuracy a cross section that is described by a table of values and linear-linear interpolation in energy-cr…

88 citations


Journal ArticleDOI
TL;DR: In this article, spline interpolation with a cubic space is investigated as a way of integrating the advective equation, and the integration scheme used is second-order accurate in time, and can easily he combined with can-leapfrog approximations as a practical way of exploiting the advantages of both types of approximation for general problems.
Abstract: Upstream interpolation with a cubic space is investigated as a way of integrating the advective equation. In advection tests with a cone this is found to give much better results than realized with second-order conservative centered differencing on a double resolution mesh, and used one-third the computation time and one eighth of the memory space. The phase errors are less than those of the fourth-order Arakawa scheme at double the resolution. The integration scheme used is second-order accurate in time, and can easily he combined with can “leapfrog” approximations as a practical way of exploiting the advantages of both types of approximation for general problems. The spline interpolation representation of advection should he of use where boundary conditions are not periodic and where the exact advection of a conservation law is not as important as good phase and amplitude fidelity.
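
A minimal sketch of the approach in one dimension: semi-Lagrangian (upstream) advection at constant speed, with a periodic cubic spline evaluated at the departure points. The grid sizes and the use of SciPy's CubicSpline are illustrative assumptions, not the paper's scheme:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def advect_spline(q, u, dt, dx, steps):
    """Advect q(x) at constant speed u on a periodic grid: each step
    interpolates q at the upstream departure points with a cubic spline."""
    n = len(q)
    x = np.arange(n) * dx
    for _ in range(steps):
        spline = CubicSpline(np.append(x, n * dx), np.append(q, q[0]),
                             bc_type='periodic')
        q = spline((x - u * dt) % (n * dx))  # evaluate at departure points
    return q

# Advect a cone once around a periodic domain and check the error
n, dx, u, dt = 128, 1.0, 1.0, 0.5
x = np.arange(n) * dx
q0 = np.maximum(0.0, 1.0 - np.abs(x - 32) / 8)
q = advect_spline(q0, u, dt, dx, steps=int(n * dx / (u * dt)))
print(np.max(np.abs(q - q0)))  # phase/amplitude error after one revolution
```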

Journal ArticleDOI
TL;DR: In this paper, the authors adapted an "old" technique of numerical analysis, Hermite interpolation, to the problem of estimating the Gini index and showed that it usually works in theory and in practice.
Abstract: Economists often summarize the income distribution by the Lorenz curve and Gini index. A variety of parametric methods (e.g., [1 and 8]) have been developed to estimate these measures from the grouped income data governments provide (e.g., [3 and 12]). Previously, one of the authors developed a distribution-free approach [5] which yielded accurate bounds on the Gini index. While analogous bounds on the Lorenz curve can be obtained [5 and 10], the resulting curve is not smooth, so a method of interpolation is needed. The purpose of this paper is to adapt an "old" technique of numerical analysis, Hermite interpolation [7 and 13], to our problem and to show that it usually works in theory and in practice. Our paper was motivated by the work of Brittain [2], who also used numerical methods. Unfortunately, his procedure often resulted in estimates of the Gini index which were inconsistent with the above-mentioned bounds. Although the piecewise Hermite interpolation yielded accurate estimates of the Gini index, it is not always convex, as the Lorenz curve must be. Section 5 states conditions for the interpolated curve to be convex or at least increasing over an interval. While these conditions are usually satisfied by real data, a theoretical example illustrates how an error may arise.
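
The core computation is compact. A sketch with made-up grouped data, using SciPy's piecewise cubic Hermite: the slope of the Lorenz curve at each abscissa is the group's mean income relative to the overall mean, which is what makes Hermite interpolation natural here. All values below are illustrative assumptions:

```python
import numpy as np
from scipy.interpolate import CubicHermiteSpline

# p = cumulative population share, Lz = cumulative income share,
# slopes = Lorenz-curve derivative at each point (group mean / overall mean)
p = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])
Lz = np.array([0.0, 0.05, 0.15, 0.30, 0.53, 1.0])
slopes = np.array([0.0, 0.4, 0.65, 0.95, 1.6, 3.0])  # hypothetical values

lorenz = CubicHermiteSpline(p, Lz, slopes)
# Gini index G = 1 - 2 * integral_0^1 L(p) dp
gini = 1.0 - 2.0 * lorenz.integrate(0.0, 1.0)
print(round(gini, 4))
```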

Journal ArticleDOI
TL;DR: In this paper, the authors give a necessary condition for an interpolation pair to have its interpolation spaces characterized by K-monotonicity, which is the strongest form of monotonicity which holds in such generality.
Abstract: For any interpolation pair $(A_0, A_1)$, Peetre's K-functional is defined by: $$K(t, a; A_0, A_1) = \inf_{a = a_0 + a_1} \left( \left\| a_0 \right\|_{A_0} + t \left\| a_1 \right\|_{A_1} \right).$$ It is known that for several important interpolation pairs $(A_0, A_1)$, all the interpolation spaces $A$ of the pair can be characterized by the property of K-monotonicity, that is, if $a \in A$ and $K(t, b; A_0, A_1) \le K(t, a; A_0, A_1)$ for all positive $t$, then $b \in A$ also. We give a necessary condition for an interpolation pair to have its interpolation spaces characterized by K-monotonicity. We describe a weaker form of K-monotonicity which holds for all the interpolation spaces of any interpolation pair and show that, in a certain sense, it is the strongest form of monotonicity which holds in such generality. On the other hand, there exist pairs whose interpolation spaces exhibit properties lying somewhere between K-monotonicity and weak K-monotonicity. Finally, we give an alternative proof of a result of Gunnar Sparr, that all the interpolation spaces for $(L^p_v, L^q_w)$ are K-monotone.
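
A standard worked example of the K-functional (a classical fact, not a result of this paper): for the pair $(L^1, L^\infty)$,

$$K(t, f; L^1, L^\infty) = \int_0^t f^{*}(s)\, ds,$$

where $f^{*}$ is the decreasing rearrangement of $|f|$; K-monotonicity for this pair thus reduces to a comparison of rearrangements.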



Journal ArticleDOI
01 Dec 1976
TL;DR: This paper reviews the methods of height interpolation for digital terrain models that have been published in photogrammetric and related journals, arranging them in six groups.
Abstract: This paper reviews the methods of height interpolation for digital terrain models that have been published in photogrammetric and related journals. These methods are arranged in six groups, accordi…

Journal ArticleDOI
TL;DR: The software interpolator and the feed-rate control are contained in the numerical control program of a computer numerical control (CNC) system and enable contouring control of the machine tool at any required feed rate.
Abstract: A software interpolator which comprises linear and circular interpolations is compared with its hardware counterpart and with other circular interpolation methods. The software interpolator and the feed-rate control are contained in the numerical control (NC) program of a computer numerical control (CNC) system and enable contouring control of the machine tool at any required feed rate.
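
A sketch of software circular interpolation: advancing a fixed angle per tick keeps the chordal feed rate approximately constant. The routine below is a generic illustration, not the paper's algorithm:

```python
import math

def circular_interpolate(cx, cy, x0, y0, x1, y1, step):
    """Emit points along a CCW arc from (x0, y0) to (x1, y1) about the
    center (cx, cy), using chords of roughly `step` length per tick."""
    r = math.hypot(x0 - cx, y0 - cy)
    a0 = math.atan2(y0 - cy, x0 - cx)
    a1 = math.atan2(y1 - cy, x1 - cx)
    sweep = (a1 - a0) % (2 * math.pi)            # CCW sweep angle
    n = max(1, int(math.ceil(sweep * r / step))) # number of ticks
    da = sweep / n                               # angle per interpolation tick
    return [(cx + r * math.cos(a0 + k * da),
             cy + r * math.sin(a0 + k * da)) for k in range(n + 1)]

# Quarter circle of radius 10 about the origin, ~0.5-unit chords
for x, y in circular_interpolate(0, 0, 10, 0, 0, 10, 0.5)[:3]:
    print(f"{x:.3f} {y:.3f}")
```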

Journal ArticleDOI
TL;DR: This correspondence presents results concerning the display of interpolated images for cosmetically pleasing effects; a degree-of-freedom analysis demonstrates that apparent image improvement does not necessarily increase the inherent quantitative information within an image.
Abstract: This correspondence presents results concerning the display of interpolated images for cosmetically pleasing effects. Replication, bilinear interpolation, and various cubic spline function interpolators are investigated as to numeric computational difficulty and psychovisually pleasing results. A degree of freedom analysis is provided to demonstrate the fact that apparent image improvement does not necessarily increase the inherent quantitative information within an image.
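
Of the interpolators compared, bilinear is the simplest to sketch. The routine below (an illustration, not the authors' code) also makes the degree-of-freedom point concrete: the output has more pixels but is a fixed linear function of the original samples, so no new information is created:

```python
import numpy as np

def bilinear_upsample(img, factor):
    """Bilinear interpolation for display: each output pixel is a weighted
    average of its four nearest input pixels."""
    h, w = img.shape
    ys = np.linspace(0, h - 1, h * factor)
    xs = np.linspace(0, w - 1, w * factor)
    y0 = np.clip(ys.astype(int), 0, h - 2)       # top-left source pixel
    x0 = np.clip(xs.astype(int), 0, w - 2)
    fy = (ys - y0)[:, None]                      # fractional offsets
    fx = (xs - x0)[None, :]
    i00 = img[y0][:, x0]
    i01 = img[y0][:, x0 + 1]
    i10 = img[y0 + 1][:, x0]
    i11 = img[y0 + 1][:, x0 + 1]
    return ((1 - fy) * ((1 - fx) * i00 + fx * i01)
            + fy * ((1 - fx) * i10 + fx * i11))

img = np.arange(16, dtype=float).reshape(4, 4)
print(bilinear_upsample(img, 2).shape)  # (8, 8)
```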

Journal ArticleDOI
TL;DR: In this paper, the authors developed the mathematical properties of a new technique for performing this extrapolation, which is based on maximum-likelihood estimation of parameters in dose-response relations whose form is derived from a general multistage carcinogenesis model.
Abstract: Use of data from animal experiments to estimate the human cancer risk due to long-term exposure to very low doses of chemicals in the environment poses a number of biological and statistical problems. One of the statistical problems is to extrapolate the animal dose-response relation from the high dose range where test data are available to low doses, which humans might encounter. We develop the mathematical properties of a new technique for performing this extrapolation. (Strictly speaking, it is an interpolation, since some animals are always tested at the background dose as a control.) The technique is based on maximum-likelihood estimation of parameters in dose-response relations whose form is derived from a general multistage carcinogenesis model. Since the number of stages is itself an unknown, the number of parameters to be estimated is infinite, although only finitely many will be nonzero for any given set of data. Existence and uniqueness properties of the estimates are proved using results of Karlin and Studden and of Krein and Rehtman from the theory of interpolation of polynomials with nonnegative coefficients. The algorithm for calculating the estimates is based on reducing an infinite set of Kuhn-Tucker conditions to an equivalent finite set of conditions, by exploiting properties of lexicographic ordering. This paper deals with the mathematical properties of the estimation technique. Statistical properties will be discussed in a subsequent paper.
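
The dose-response family being fitted is the general multistage form (standard in this literature; the notation here is generic rather than the paper's):

$$P(d) = 1 - \exp\Bigl(-\sum_{i=0}^{k} q_i\, d^{\,i}\Bigr), \qquad q_i \ge 0,$$

where $P(d)$ is the lifetime tumor probability at dose $d$ and the $q_i$ are estimated by maximum likelihood; $k$ is effectively unbounded, but only finitely many $q_i$ are nonzero for any given data set.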

Journal ArticleDOI
TL;DR: In this paper, the eigenvalues of an unsymmetrical double minimum potential were obtained for the E, F 1 ε+g state of H 2 using three different numerical techniques.
Abstract: Disagreement between calculated vibrational eigenvalues for the E, F 1Σ+g state of H2 obtained by three different numerical techniques has been interpreted in the literature in terms of inherent inadequacies of these techniques. We have tested these three numerical procedures by using each to obtain the eigenvalues of an unsymmetrical double minimum potential. An analytical potential was used to eliminate uncertainties introduced by interpolation. All three techniques are shown to give accurate results when properly applied.

Journal ArticleDOI
TL;DR: In this article, the authors describe a technique based on local procedures (in contrast to interpolators such as spline or Fourier that are based on global properties of the sample data) that has been very useful for the interpolation of hand-digitized seismograms.
Abstract: There are many applications for which a continuous interpolation is desired for a curve that has been sampled at discrete points. In this note I will describe a technique based on local procedures (in contrast to interpolators such as spline or Fourier that are based on global properties of the sample data) that has been very useful for the interpolation of hand-digitized seismograms. The definition of an "optimal" interpolator depends on the type of data being interpolated and on the expectations of the person who is digitizing. For this reason, each investigator might design a different optimal interpolator. My own optimal interpolator was designed to meet the requirements that (a) its behavior be intuitively obvious to the digitizer operator and (b) it require the minimum number of samples needed to define the curve. In particular, for hand-digitizing seismograms I wished to sample the peaks and troughs and occasionally an inflection point when necessary. The interpolator should place the peaks and troughs of the interpolated curves at the sample points and should never introduce spurious peaks, particularly in the vicinity of broad flat peaks or troughs. The interpolator described here fits these criteria very well. Akima (1970) describes another interpolator based on local properties. His desiderata were very different from my own and hence his interpolation procedure is also different. Consider a single-valued function sampled at the points $(x_i, y_i)$, $i = 1, \dots, n$. All of the interpolation techniques presented here are based on fitting piecewise continuous cubic polynomials (PCCP) to the data. A different polynomial is found between each pair of points such that the interpolated curve and its slope are continuous at the knots $x_i$. Cubic splines interpolate curves for which both the first and second derivatives are continuous at the knots. Such a criterion is sufficient to define a unique interpolation that selects a particular slope at each knot. In a local procedure, the second derivative is not required to be continuous and a criterion must be found that will select the slope $s_i$ at each point $x_i$. Figures 1 and 2 illustrate the behavior of a number of methods for choosing the slopes $s_i$. Each of the methods is described below. Most of the methods are based upon finding $s_i$ as a function of the slopes $m_i = (y_i - y_{i-1})/(x_i - x_{i-1})$ of straight lines connecting the sample points. As Akima (1970) points out, schemes based on slopes are invariant to linear scale changes in x or y. Weighted average slopes: the slope $s_i$ at each knot is given by a weighted average of the adjacent linear slopes $m_i$ and $m_{i+1}$.
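
A minimal sketch of the "weighted average slopes" idea, with one simple weighting; the note compares several, and this is not necessarily the author's final choice:

```python
import numpy as np
from scipy.interpolate import CubicHermiteSpline

def local_pccp(x, y):
    """Local piecewise-cubic interpolation: pick the slope at each knot as
    a weighted average of the adjacent chord slopes m_i, then fit a cubic
    Hermite polynomial between each pair of points."""
    m = np.diff(y) / np.diff(x)      # chord slopes m_i
    h = np.diff(x)                   # interval lengths
    s = np.empty_like(y)
    # Interior knots: weight each chord slope by the opposite interval
    s[1:-1] = (h[1:] * m[:-1] + h[:-1] * m[1:]) / (h[:-1] + h[1:])
    s[0], s[-1] = m[0], m[-1]        # one-sided slopes at the ends
    return CubicHermiteSpline(x, y, s)

# Peaks and troughs sampled directly, as the note recommends
x = np.array([0.0, 1.0, 2.0, 3.5, 5.0])
y = np.array([0.0, 1.0, 0.0, -1.0, 0.0])
f = local_pccp(x, y)
print(f(np.array([0.5, 2.75])))
```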



Book ChapterDOI
01 Jan 1976
TL;DR: Blending function methods permit the exact interpolation of data given along curves and/or surfaces; appropriate discretisations yield finite-dimensional schemes.
Abstract: Blending function methods permit the exact interpolation of data given along curves and/or surfaces. Appropriate discretisations yield finite dimensional schemes. These methods are useful for Finite Element Analysis and for Computer Aided Geometric Design.
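
The prototype blending-function scheme is the bilinearly blended Coons patch (a standard example rather than anything specific to this chapter), which interpolates four boundary curves $P(0,v)$, $P(1,v)$, $P(u,0)$, $P(u,1)$ exactly:

$$S(u,v) = (1-u)\,P(0,v) + u\,P(1,v) + (1-v)\,P(u,0) + v\,P(u,1) - \begin{pmatrix} 1-u & u \end{pmatrix} \begin{pmatrix} P(0,0) & P(0,1) \\ P(1,0) & P(1,1) \end{pmatrix} \begin{pmatrix} 1-v \\ v \end{pmatrix}, \qquad u, v \in [0,1].$$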

Patent
Henry H. J. Liao
03 Jun 1976
TL;DR: In this article, an interpolation process is used to predict a gray scale value for each element of output data from the quantized levels of an m × n matrix of input data elements.
Abstract: Expanded gray scale information is recovered from quantized video input data in a raster scanned imaging system by utilizing an interpolation process to predict a gray scale value for each element of output data from the quantized levels of an m × n matrix of input data elements. The prediction matrix for each output data element includes the spatially corresponding input data element, together with vertically and horizontally adjacent input data elements.


Journal ArticleDOI
TL;DR: This paper proves the validity of numerical methods, designed to retrieve the phase of the wave function in some plane in a light or electron microscope, that use the intensity distributions in the image plane at two different settings of the defocus.
Abstract: In this paper we prove the validity of numerical methods, designed to retrieve the phase of the wave function in some plane in a light or electron microscope, that use the intensity distributions in the image plane at two different settings of the defocus. The main condition we have to impose on the solutions is that they are differentiable almost everywhere. This condition is usually satisfied by numerical solutions, as they are an interpolation of a finite number of function values. The method we use to show the uniqueness is also well suited to studying the influence of all kinds of errors. A rough estimate of their influence will be given. In the next papers of this series a fast computation method to perform the phase retrieval will be introduced and demonstrated.

Patent
Henry H. J. Liao
03 Jun 1976
TL;DR: In this paper, a resolution converter for interfacing raster input and output scanners having different, predetermined resolution characteristics relies on a maximum likelihood interpolation process, whereby the conversion is carried out with minimum statistical error.
Abstract: A resolution converter for interfacing raster input and output scanners having different, predetermined resolution characteristics relies on a maximum likelihood interpolation process, whereby the conversion is carried out with minimum statistical error.

Journal ArticleDOI
TL;DR: In this article, an analog of the well-known Jackson-Bernstein-Zygmund theory on best approximation by trigonometric polynomials is developed for approximation methods which use piecewise polynomial functions.
Abstract: An analog of the well-known Jackson-Bernstein-Zygmund theory on best approximation by trigonometric polynomials is developed for approximation methods which use piecewise polynomial functions. Interpolation and best approximation by polynomial splines, Hermite and finite element functions are examples of such methods. A direct theorem is proven for methods which are stable, quasi-linear and optimally accurate for sufficiently smooth functions. These assumptions are known to be satisfied in many cases of practical interest. Under a certain additional assumption, on the family of meshes, an inverse theorem is proven which shows that the direct theorem is sharp.
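
Schematically, and with hypothesis details suppressed, the pair of theorems takes the familiar Jackson-Bernstein shape: for piecewise-polynomial families $S_h$ of order $k$ on quasi-uniform meshes of size $h$,

$$\operatorname{dist}(f, S_h)_{\infty} \le C\, h^{\alpha} \quad\Longleftrightarrow\quad f \in \operatorname{Lip} \alpha, \qquad 0 < \alpha < k,$$

where the direct implication requires stability, quasi-linearity, and optimal accuracy, and the inverse implication requires the additional assumption on the family of meshes.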

Book ChapterDOI
01 Jan 1976
TL;DR: Three examples of studies that required special kriging developments: realizations of conditional simulations (i.e., variations of the studied phenomenon); use of gradient data; and consideration of the uncertainty in the location of the measurement points.
Abstract: In the field of automatic contouring and representation of phenomena defined in a two-dimensional space, kriging is not solely a method of interpolation amongst others: its sound theoretical basis allows its adaptation to various non-classical problems. We give here three examples of studies that required special kriging developments: realizations of conditional simulations, i.e., variations of the studied phenomenon; use of gradient data; and consideration of the uncertainty in the location of the measurement points.
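
For contrast with the specialized developments described, here is what plain ordinary kriging looks like; the spherical variogram and all parameter values are illustrative assumptions:

```python
import numpy as np

def ordinary_kriging(xy, z, xy0, variogram):
    """Ordinary kriging estimate at xy0: solve the kriging system with a
    Lagrange multiplier enforcing that the weights sum to 1, given a
    variogram gamma(h)."""
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = variogram(d)        # semivariances between data points
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = variogram(np.linalg.norm(xy - xy0, axis=-1))
    w = np.linalg.solve(A, b)       # weights plus Lagrange multiplier
    return w[:n] @ z

# Spherical variogram with sill 1 and range 2 (hypothetical parameters)
variogram = lambda h: np.where(h < 2, 1.5 * (h / 2) - 0.5 * (h / 2) ** 3, 1.0)
xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.5, 1.5]])
z = np.array([1.0, 2.0, 1.5, 3.0])
print(ordinary_kriging(xy, z, np.array([0.5, 0.5]), variogram))
```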