
Showing papers on "Bicubic interpolation published in 1994"


Journal ArticleDOI
TL;DR: A bicubic rectangular patch complex surrounding an n-sided hole is considered, and the problem of filling the hole with n bicubic rectangular patches is studied.

63 citations


Journal ArticleDOI
TL;DR: A novel 8-bit linear interpolation algorithm was implemented as a CMOS VLSI circuit using a readily available high-level synthesis tool; the results produced were virtually identical to IEEE-format, single-precision, floating-point results.

43 citations
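The entry above describes an 8-bit linear interpolation circuit. As a rough illustration of the integer arithmetic such a datapath performs (a sketch, not the paper's synthesized design; the function name, the 8-bit fraction format, and the rounding choice are assumptions):

```python
def lerp8(a, b, frac):
    """Fixed-point linear interpolation between two 8-bit samples.

    `frac` is an 8-bit fraction (0..255) standing for t in [0, 1);
    the arithmetic mimics a small hardware datapath: one multiply,
    one add, and a shift, all in integers.
    """
    assert 0 <= a <= 255 and 0 <= b <= 255 and 0 <= frac <= 255
    # a + (b - a) * t, with rounding before the downshift
    return a + (((b - a) * frac + 128) >> 8)
```

With `frac = 128` (t = 0.5) this returns the rounded midpoint, which is the case a hardware verification suite would check first.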


Journal ArticleDOI
TL;DR: The more accurate of the two new algorithms is about eight times faster than nearest neighbor interpolation, and its subjective image quality lies between that of nearest neighbor and bilinear interpolation.

40 citations
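Bilinear interpolation, the quality reference in the entry above, blends the four pixels surrounding a continuous coordinate. A minimal sketch (the image is assumed to be a list of rows of gray values):

```python
def bilinear(img, x, y):
    """Bilinear interpolation of a grayscale image given as a list of rows.

    (x, y) are continuous pixel coordinates; the four surrounding pixels
    are blended with weights given by the fractional offsets.
    """
    x0, y0 = int(x), int(y)
    x1 = min(x0 + 1, len(img[0]) - 1)  # clamp at the right/bottom border
    y1 = min(y0 + 1, len(img) - 1)
    fx, fy = x - x0, y - y0
    top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
    bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
    return top * (1 - fy) + bot * fy
```

Nearest neighbor simply returns `img[round(y)][round(x)]`, which is why it is faster but blockier.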


Patent
19 May 1994
TL;DR: In this paper, the nonlinear deflection waveform used to improve registration in a three picture tube projection television system is produced by interpolation of stored data setting points: a reduced number of high-order interpolation calculations is first performed using the setting points, followed by low-order interpolation calculations either between two calculated high-order interpolated data points or between one such point and one of the setting points.
Abstract: The nonlinear, deflection waveform used to improve registration in a three picture tube projection television system is produced using interpolation of stored data setting points by first performing a reduced number of high-order interpolation calculations using the setting points and then performing low-order interpolation calculations either between two calculated high-order interpolated data points or between one of the calculated high-order interpolated data points and one of the setting points. This reduces the workload on the central processing unit in the registration system. In addition, a reduced bit-size requirement for the interpolation portion of the registration is obtained by storing registration data of a first bit size and then adding bits below the original LSB for the interpolation calculation prior to performing the digital-to-analog conversion.

27 citations
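The patent's idea is to spend the expensive high-order calculations on only a few points and fill in between them cheaply. A minimal sketch of that two-stage structure, with Catmull-Rom standing in for the unspecified high-order interpolation (an assumption; the patent does not name the polynomial):

```python
def catmull_rom(p0, p1, p2, p3, t):
    """High-order (cubic Catmull-Rom) interpolation between p1 and p2."""
    return 0.5 * (2 * p1
                  + (p2 - p0) * t
                  + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t ** 2
                  + (-p0 + 3 * p1 - 3 * p2 + p3) * t ** 3)

def two_stage(p0, p1, p2, p3, n):
    """Return n+1 samples between setting points p1 and p2: one cubic
    evaluation at the midpoint, then cheap linear interpolation between
    a setting point and the calculated high-order point on each half."""
    mid = catmull_rom(p0, p1, p2, p3, 0.5)
    out = []
    for i in range(n + 1):
        t = i / n
        if t <= 0.5:
            out.append(p1 + (mid - p1) * (t / 0.5))
        else:
            out.append(mid + (p2 - mid) * ((t - 0.5) / 0.5))
    return out
```

Only one cubic is evaluated per interval; all remaining samples cost a multiply and an add each, which is the CPU saving the patent claims.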


Book ChapterDOI
02 May 1994
TL;DR: A direct method of surface reconstruction based on regularized uniform bicubic B-spline surface patches is proposed; it states the problem of regularization directly on the 3D surface.
Abstract: This paper considers the problem of 3D surface reconstruction using image sequences. We propose a direct method of surface reconstruction which is based on regularized uniform bicubic B-spline surface patches. The reconstruction is achieved by observing the motion of occluding contours [1, 4, 10], i.e., where the view lines graze the surface. It has been shown that reconstruction of such a 3D surface is possible when the camera motion is known, with the exception of those fully concave parts where no view line can be tangential to the surface. This approach differs from previous work in that it states the problem of regularization directly on the 3D surface. Experimental results are presented for real data.

25 citations
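A uniform bicubic B-spline patch is the tensor product of the uniform cubic B-spline basis in two parameter directions. A minimal evaluation sketch over a 4x4 grid of control values (illustrative; the paper's regularized fitting of these control values is not reproduced here):

```python
def bspline_point(c0, c1, c2, c3, t):
    """Point on a uniform cubic B-spline segment with control values
    c0..c3, using the standard basis (the four weights sum to 1)."""
    b0 = (1 - t) ** 3 / 6
    b1 = (3 * t ** 3 - 6 * t ** 2 + 4) / 6
    b2 = (-3 * t ** 3 + 3 * t ** 2 + 3 * t + 1) / 6
    b3 = t ** 3 / 6
    return c0 * b0 + c1 * b1 + c2 * b2 + c3 * b3

def bicubic_bspline_patch(g, u, v):
    """Tensor-product evaluation of a bicubic B-spline patch:
    apply the 1D basis along each row of the 4x4 control grid g,
    then once more across the four row results."""
    rows = [bspline_point(*g[i], u) for i in range(4)]
    return bspline_point(*rows, v)
```

Because the basis functions sum to one, a constant control grid reproduces that constant everywhere on the patch, a quick sanity check for any implementation.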


Proceedings ArticleDOI
17 Aug 1994
TL;DR: The paper introduces airborne laser scanner architecture and its data postprocessing: a contiguously measured digital terrain model (DTM) is defined by filtering neighboring strips and attaching them to each other by means of spline approximations and datum transforms.
Abstract: The paper introduces airborne laser scanner architecture and its data postprocessing. One main problem is the definition of a contiguously measured digital terrain model (DTM) by filtering neighboring strips and attaching them to each other. These problems are solved by spline approximations and datum transforms. The spline approximation starts with a bicubic polynomial, which can be reparameterized in terms of its function values and first derivatives as new unknown parameters. Filtering is carried out in a two-dimensional rectangle bordered by the nodes of the spline. The next step of the data postprocessing is the datum transform. Using a similarity transform, the seven datum parameters have to be determined by a data-snooping procedure with non-parametric hypothesis tests. The reason for using non-standard test statistics is that systematic effects produced by the sensor system itself, as well as man-made and natural 3-D phenomena, cannot be eliminated perfectly a priori. Therefore, the datum transform should give hints as to which observations are blunders and which are not.

24 citations
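The reparameterization mentioned above, a cubic expressed through its endpoint function values and first derivatives, is the Hermite form; the bicubic case is its tensor product. A 1D sketch of that form (illustrative, not the paper's strip-adjustment code):

```python
def hermite(f0, f1, d0, d1, t):
    """Cubic segment on [0, 1] parameterized by endpoint values
    (f0, f1) and endpoint first derivatives (d0, d1) -- the
    reparameterization of a cubic in terms of values and derivatives."""
    h00 = 2 * t ** 3 - 3 * t ** 2 + 1
    h10 = t ** 3 - 2 * t ** 2 + t
    h01 = -2 * t ** 3 + 3 * t ** 2
    h11 = t ** 3 - t ** 2
    return f0 * h00 + d0 * h10 + f1 * h01 + d1 * h11
```

The appeal for adjustment problems is that the unknowns (values and slopes at the nodes) are directly observable quantities rather than abstract polynomial coefficients.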


Journal ArticleDOI
TL;DR: In this paper, the scattered Hermite interpolation problem was studied, along with several classes of radial basis functions, including the multiquadrics, which may be implemented for this interpolation scheme.

24 citations


Patent
27 Sep 1994
TL;DR: In a computed tomography apparatus in which an examination subject is irradiated by an x-ray beam from a number of different angular positions, a measured data set is obtained in a first geometry, and the tomographic image is reconstructed by interpolation of the measured data in a second geometry.
Abstract: In a computed tomography apparatus wherein an examination subject is irradiated by an x-ray beam from a number of different angular positions, a measured data set is obtained in a first geometry, and the reconstruction of the tomographic image ensues by interpolation of the measured data in a second geometry. The interpolation results in an unwanted smoothing of the data. In order to compensate for this smoothing, the data in the frequency space are divided by the average Fourier transform of the interpolation function, or alternatively, a discrete convolution of the data in the spatial domain is undertaken.

19 citations
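The compensation described above, dividing the data spectrum by the transform of the interpolation function, is inverse filtering of a known smoothing kernel. A small self-contained sketch using a discrete Fourier transform (illustrative; the kernel here is an arbitrary smoothing filter with no spectral zeros, not the apparatus's actual interpolation function):

```python
import cmath
import math

def dft(x):
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * math.pi * j * k / n)
                for k in range(n)) for j in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[j] * cmath.exp(2j * math.pi * j * k / n)
                for j in range(n)).real / n for k in range(n)]

def circ_conv(x, h):
    """Circular convolution: the smoothing the interpolation applies."""
    n = len(x)
    return [sum(x[(k - m) % n] * h[m] for m in range(n)) for k in range(n)]

def deconvolve(y, h):
    """Undo the smoothing by dividing in the frequency domain, as the
    abstract describes for the interpolation function's transform."""
    Y, H = dft(y), dft(h)
    return idft([yj / hj for yj, hj in zip(Y, H)])
```

The alternative route mentioned in the abstract, a discrete convolution in the spatial domain, is just the inverse DFT of `1/H` applied as a filter.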


Proceedings Article
01 Sep 1994

18 citations


Proceedings ArticleDOI
20 Jun 1994
TL;DR: In this article, the authors considered the problem of the radial interpolation in order to determine the maximum spacing between rings in a plane-polar scanning system and proposed an optimal sampling interpolation algorithm of central type, which is accurate, fast and stable with respect to random errors affecting the data.
Abstract: Among the near-field-far-field (NF-FF) measurement techniques, the one using a plane-polar scanning has attracted considerable attention due to its particular characteristics. The main drawback of the earliest approach, i.e., the large computer time required to reconstruct the antenna far field, can be overcome by applying the bivariate Lagrange interpolation (BCI) to recover the plane-rectangular data from the plane-polar ones. This step enables the use of the fast Fourier transform (FFT) also in the plane-polar scanning. However, the very simple BCI technique is not tailored to interpolating electromagnetic fields and, accordingly, requires close spacings to reduce the interpolation error. In previous work (see IEEE Trans. Antennas Propag., vol. 39, p. 48, 1991), by exploiting the quasi-bandlimitation properties of radiated or scattered fields, an optimal sampling interpolation algorithm of central type was developed, which is accurate, fast and stable with respect to random errors affecting the data. The maximum allowable azimuthal spacing was derived, and it was demonstrated that the radial step can be significantly larger than the usually adopted one. It must be stressed that, at variance with the classical approach, the number of samples for each ring stays bounded even if the ring radius approaches infinity. In this way, the problem of data redundancy in the plane-polar scanning, i.e., the determination of the minimum number of required samples, has been partially resolved. However, when the radius of the scanning zone approaches infinity, the number of required rings again becomes unbounded. The aim of this paper is to reconsider thoroughly the problem of the radial interpolation in order to determine the maximum spacing between rings. In this way, the problem of data redundancy will be fully resolved.

16 citations


Journal ArticleDOI
TL;DR: In this paper, the solution of the time-varying analogue of the two-sided Nudelman interpolation problem is presented; as a first step, the related Lagrange-Sylvester interpolation problem is solved.
Abstract: This paper introduces and presents the solution of the time-varying analogue of the two-sided Nudelman interpolation problem. As a first step, the related Lagrange-Sylvester interpolation problem is solved.

Patent
Seong-Won Lee1, Joonki Paik1
14 Jan 1994
TL;DR: In this article, a method of interpolating digital image data includes steps for calculating the absolute value of the difference between each two neighboring pixels among four neighboring pixels of image data, determining the maximum and second-largest values among the calculated absolute values, and interpolating the image data using the results.
Abstract: A method of interpolating digital image data includes steps for calculating the absolute value of the difference between two neighboring pixels of image data, among four neighboring pixels of image data, determining the maximum and second-largest values among the thus-calculated absolute values, and interpolating the image data using the results. An interpolation circuit for performing the above-described method includes an edge detector, a pre-filter, a zero-order interpolation area controller, a zero-order interpolator, a movement-averaging device, a magnification-factor controller and a post-filter.
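The idea of steering the interpolation by neighbor differences can be illustrated with a tiny edge-directed rule: interpolate along the diagonal whose endpoints differ least, i.e. along the likely edge rather than across it. This is a generic sketch of difference-driven interpolation, not the patent's actual circuit or decision logic:

```python
def edge_directed(nw, ne, sw, se):
    """Interpolate the center of a 2x2 neighborhood by averaging along
    the diagonal with the smaller absolute difference, so edges are
    preserved instead of blurred."""
    if abs(nw - se) <= abs(ne - sw):
        return (nw + se) / 2
    return (ne + sw) / 2
```

On a hard diagonal edge this keeps the edge value intact, where a plain four-pixel average would produce a gray halo.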

Proceedings ArticleDOI
19 Apr 1994
TL;DR: The main conclusions of this study are: the energy of the analysis frames should be used in the interpolation process and it is beneficial to give analysis frames an overlap of p samples, where p is the order of the model.
Abstract: Spectral interpolation improves the performance of low-bit-rate speech coders without increasing the bit rate. We have investigated the problem of spectral interpolation by means of autoregressive theory. Our analysis is supported by experiments on stationary and non-stationary data and by experiments on real speech data. The main conclusions of our study are: the energy of the analysis frames should be used in the interpolation process; it is beneficial to give analysis frames an overlap of p samples, where p is the order of the model; and the reflection coefficients, log area ratios and the arcsine of reflection coefficients are less suitable for interpolation.
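The first conclusion, that frame energy should enter the interpolation, can be pictured as an energy-weighted blend of two frames' spectral descriptions. This is only a sketch of that idea using normalized autocorrelation sequences; the helper name and the exact weighting are assumptions, not the authors' procedure:

```python
def interp_autocorr(r_a, r_b, e_a, e_b, alpha):
    """Blend two frames' normalized autocorrelation sequences, weighting
    each frame by its energy (e_a, e_b) as well as by the interpolation
    position alpha in [0, 1]."""
    w_a, w_b = (1 - alpha) * e_a, alpha * e_b
    return [(w_a * ra + w_b * rb) / (w_a + w_b) for ra, rb in zip(r_a, r_b)]
```

At alpha = 0 the blend reduces exactly to the first frame, and a high-energy frame dominates the intermediate spectra, which is the qualitative behavior the conclusion calls for.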

Proceedings ArticleDOI
13 Nov 1994
TL;DR: A new FIR image interpolation filter known as a perceptually weighted least square (PWLS) filter which is designed using both sampling theory and properties of human vision to minimize the ripple response around edges of the interpolated images and to best satisfy frequency response constraints.
Abstract: Image interpolation is an important image operation. It is commonly used in image enlargement to obtain a close-up view of the detail of an image. From sampling theory, an ideal low-pass filter can be used for image interpolation. However, ripples appear around image edges which are annoying to a human viewer. The authors introduce a new FIR image interpolation filter known as a perceptually weighted least square (PWLS) filter which is designed using both sampling theory and properties of human vision. The goal of this design approach is to minimize the ripple response around edges of the interpolated images and to best satisfy frequency response constraints. The interpolation results using the proposed approach are substantially better than those resulting from replication or bilinear interpolation, and are at least as good as and possibly better than that of cubic convolution interpolation.
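FIR interpolation filters of the kind discussed above all share the same mechanics: insert zeros between samples, then convolve with the interpolation kernel. A 1D sketch of that pipeline, using the simple linear (triangle) kernel where the paper would use its PWLS design:

```python
def upsample2(x, h):
    """2x FIR interpolation: insert zeros between samples, then convolve
    with a centered odd-length kernel h.  With an ideal low-pass h this
    is sinc interpolation; shorter kernels trade ripple for blur."""
    z = []
    for s in x:
        z.extend([s, 0.0])            # zero-insertion doubles the rate
    half = len(h) // 2
    out = []
    for n in range(len(z)):
        acc = 0.0
        for k, hk in enumerate(h):
            i = n + half - k          # centered convolution index
            if 0 <= i < len(z):       # zero-pad at the borders
                acc += hk * z[i]
        out.append(acc)
    return out
```

With `h = [0.5, 1.0, 0.5]` the output reproduces the input samples and places averages between them; a longer, sharper kernel would raise edge ripple, which is exactly the effect the PWLS design minimizes.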


Journal ArticleDOI
TL;DR: The ATHLET code was developed at GRS for the thermohydraulic analysis of the behavior of PWRs and BWRs under postulated transient and accident conditions.

Journal ArticleDOI
Abstract: In this work, uniform bicubic B-spline functions are used to represent the surface geometry and interpolation functions in the formulation of the boundary-element method (BEM) for three-dimensional problems. This is done as a natural generalization of the cubic B-spline curves introduced by Cabral et al. for two-dimensional problems. Three-dimensional scalar problems, with particular applications to the Laplace and Helmholtz equations, are considered.

Journal ArticleDOI
TL;DR: A new interpolation algorithm for 2D data is presented that is based on the least-squares optimization of a spline-like scheme and integrated into a two-source decomposition scheme for image-data compression.

Journal ArticleDOI
TL;DR: In this paper, a discussion and an algorithm for combined interpolation and approximation by convexity-preserving rational splines are given.
Abstract: A discussion and algorithm for combined interpolation and approximation by convexity-preserving rational splines is given.

Proceedings ArticleDOI
James M. Kasson1
15 Apr 1994
TL;DR: This paper analyzes trilinear interpolation and several tetrahedral interpolation schemes that extract data from a cubical packing of space, including a five-tetrahedron scheme proposed by Kanamori and Kotera, a six-tetrahedron method due to Clark, and three variations on the Clark arrangement.
Abstract: Three-dimensional interpolation can minimize calculations when converting images from one device-independent color space to another or converting information between device-dependent and device-independent color spaces; this makes 3D interpolation a suitable way to implement many kinds of color space transformations. This paper analyzes trilinear interpolation and several tetrahedral interpolation schemes that extract data from a cubical packing of space, including a five-tetrahedron scheme proposed by Kanamori and Kotera, a six-tetrahedron method due to Clark, and three variations on the Clark arrangement. Also analyzed are two versions of the disphenoid extraction from the body-centered-cubic packing proposed by Kasson, Plouffe, and Nin, and the PRISM method reported by Kanamori, et al. The test for interpolation algorithm performance of the earlier paper is applied to a large set of color space conversions and lattice granularities, allowing meaningful conclusions about average and worst-case performance.© (1994) COPYRIGHT SPIE--The International Society for Optical Engineering. Downloading of the abstract is permitted for personal use only.
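Trilinear interpolation, the baseline scheme analyzed above, blends the eight corners of a lattice cube with products of the fractional coordinates. A minimal sketch (illustrative; real color-LUT code would first locate the cube in the lattice):

```python
def trilinear(corner, fx, fy, fz):
    """Trilinear interpolation inside a unit cube.  corner[i][j][k]
    holds the lattice value at cube corner (i, j, k); (fx, fy, fz) are
    the fractional coordinates in [0, 1]."""
    val = 0.0
    for i in (0, 1):
        for j in (0, 1):
            for k in (0, 1):
                w = ((fx if i else 1 - fx)
                     * (fy if j else 1 - fy)
                     * (fz if k else 1 - fz))  # weights sum to 1
                val += w * corner[i][j][k]
    return val
```

Tetrahedral schemes split the same cube into 5 or 6 tetrahedra and blend only 4 corners per lookup, trading a point-location test for fewer multiplies, which is why the paper compares their worst-case accuracy.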

Journal ArticleDOI
TL;DR: In this article, a method for obtaining parametric equations of a smooth curve that passes through a sequence of points is discussed and compared to cubic splines and to interpolation by quadratic polynomials using the Lagrange formula.
Abstract: We shall discuss a method for obtaining parametric equations of a smooth curve that passes through a sequence of points, and compare it to cubic splines and to the method of interpolation by quadratic polynomials using the Lagrange formula. A Mathematica program automating this method of curve fitting is also provided.
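One of the comparison methods named above, quadratic interpolation by the Lagrange formula, is short enough to state directly (a generic sketch, independent of the article's own parametric method):

```python
def lagrange_quadratic(xs, ys, x):
    """Quadratic Lagrange interpolation through three points (xs, ys):
    each basis polynomial is 1 at its own node and 0 at the other two."""
    (x0, x1, x2), (y0, y1, y2) = xs, ys
    l0 = (x - x1) * (x - x2) / ((x0 - x1) * (x0 - x2))
    l1 = (x - x0) * (x - x2) / ((x1 - x0) * (x1 - x2))
    l2 = (x - x0) * (x - x1) / ((x2 - x0) * (x2 - x1))
    return y0 * l0 + y1 * l1 + y2 * l2
```

It reproduces any quadratic exactly, but unlike a cubic spline it gives no slope continuity where consecutive three-point pieces meet, which is the weakness such comparisons highlight.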

Proceedings ArticleDOI
13 Nov 1994
TL;DR: This work introduces a new approach, called sequential linear interpolation (SLI), for interpolating multidimensional nonlinear functions, and applies the technique to the color printer calibration problem, where highly nonlinear functions must be implemented efficiently.
Abstract: Introduces a new approach which we call sequential linear interpolation (SLI) for interpolating multidimensional nonlinear functions. SLI grid points can be nonuniformly placed. By applying asymptotic analysis, we obtain optimal conditions for placing the interpolation grid points in the SLI grid structure to minimize the interpolation error, and thus use the grid points more efficiently. We apply this technique to the color printer calibration problem, where highly nonlinear functions must be implemented efficiently.
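The basic operation underlying SLI is linear interpolation on a nonuniformly spaced grid, so that grid points can be concentrated where the function bends most. A 1D sketch of that primitive (the SLI structure itself, and its optimal grid placement, are not reproduced here):

```python
from bisect import bisect_right

def piecewise_linear(grid, vals, x):
    """Piecewise-linear interpolation on a nonuniform, sorted grid.
    bisect_right finds the bracketing interval in O(log n)."""
    i = bisect_right(grid, x) - 1
    i = max(0, min(i, len(grid) - 2))      # clamp to the valid segments
    t = (x - grid[i]) / (grid[i + 1] - grid[i])
    return vals[i] + t * (vals[i + 1] - vals[i])
```

Nonuniform placement means a steep region of a printer calibration curve can get many short segments while flat regions get few, which is exactly the efficiency the abstract claims.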


Proceedings ArticleDOI
13 Nov 1994
TL;DR: Pixels, segments and triangles whose end-points are matched by spatial linear interpolation are the structures most commonly used in the modelling, and the method is extended to non-linear interpolation.
Abstract: Defines special algorithms working directly in screen space for synthetic animation. These operate on synthetic animation based on spatio-temporal interpolation, a 4D interpolation with three dimensions in space and one in time. In order to avoid heavy memory storage and calculation, 3D interpolation (two dimensions in space plus one in time) between two key frames resulting from the projection of two 3D scenes is discussed. Extension to 4D interpolation is discussed for some special algorithms. Pixels, segments and triangles (with end-points stored during projection) matched by spatial linear interpolation, or combinations thereof applied to the triangulation, are the structures most commonly used in the modelling. The method is extended to non-linear interpolation. This reduces visual artifacts and computation time, thereby alleviating one of the most important problems in synthetic animation and reconciling detailed graphical quality with real time.

Proceedings ArticleDOI
09 Sep 1994
TL;DR: A new interpolation algorithm, directed distance morphing, is introduced to address the nonoverlapping problem; results of SBIG are superior both visually and quantitatively to L or CS.
Abstract: Shape-based interpolation (SBI) is used for interpolation between binary serial slice images. Although SBI approximates the interslice geometry more accurately than traditional techniques such as linear (L) or cubic spline (CS) interpolation, SBI produces only a binary result. This paper extends SBI to interpolation of grayscale images (SBIG) using simulated 3D distance maps to produce a grayscale image volume. Results of SBIG are superior visually (sharper detail, no artificial intensities) and quantitatively to L or CS. This is particularly evident in sagittal and coronal reconstructions. Clipping artifacts due to nonoverlapping structures or rapid changes in image brightness are minimized using simulated 3D maps. However, when objects between slices do not overlap, shape-based interpolation results in compressed or nonexistent geometry in some or all of the interpolated slices. The nonoverlapping problem is described and quantified. A new interpolation algorithm, directed distance morphing, is introduced and used to address the nonoverlapping problem.© (1994) COPYRIGHT SPIE--The International Society for Optical Engineering. Downloading of the abstract is permitted for personal use only.

Patent
23 May 1994
TL;DR: In this article, the nonlinear deflection waveform used to improve registration in a three picture tube projection television system is produced by interpolation of stored data setting points: a reduced number of high-order interpolation calculations is first performed using the setting points, followed by low-order interpolation calculations either between two calculated high-order interpolated data points or between one such point and one of the setting points.
Abstract: The nonlinear, deflection waveform used to improve registration in a three picture tube projection television system is produced using interpolation of stored data setting points by first performing a reduced number of high-order interpolation calculations using the setting points and then performing low-order interpolation calculations either between two calculated high-order interpolated data points or between one of the calculated high-order interpolated data points and one of the setting points. This reduces the workload on the central processing unit in the registration system. In addition, a reduced bit-size requirement for the interpolation portion of the registration is obtained by storing registration data of a first bit size and then adding bits below the original LSB for the interpolation calculation prior to performing the digital-to-analog conversion.

Proceedings ArticleDOI
01 May 1994
TL;DR: The paper develops a time- as well as order-update recursion for linear least-squares lattice (LSL) interpolation filters, and reveals that although interpolation needs more computing power than prediction, it can generate much smaller error power and thus reduces much more temporal redundancy than prediction does.
Abstract: The paper develops a time- as well as order-update recursion for linear least-squares lattice (LSL) interpolation filters. The LSL interpolation filter has a nice stage-to-stage modularity which allows its length to be increased or decreased "two-sidedly" (i.e., toward both past and future) without affecting the already computed parameters. The LSL interpolation filter is also efficient in computation, flexible in implementation and fast in convergence. The computer simulation results reveal that although interpolation needs more computing power than prediction, it can generate much smaller error power and thus reduces much more temporal redundancy than prediction does.

Journal ArticleDOI
TL;DR: In this article, the exact error of approximation of a convolution class by interpolation cardinal splines is determined, and the exact values of average n-Kolmogorov widths are obtained for the convolution classes.
Abstract: This paper discusses some problems on cardinal spline interpolation corresponding to infinite-order differential operators. Remainder formulas and a dual theorem are established for some convolution classes whose kernels are PF densities. Moreover, the exact error of approximation of a convolution class by interpolation cardinal splines is determined. The exact values of average n-Kolmogorov widths are obtained for the convolution class.

Proceedings ArticleDOI
19 Apr 1994
TL;DR: The computer simulation results shown in this paper reveal that although interpolation needs more computing power than prediction, it can generate much smaller error power and thus reduces much more temporal redundancy than prediction does.
Abstract: This paper develops a time as well as order update recursion for linear least-squares lattice (LSL) interpolation filters. The LSL interpolation filter has the nice stage-to-stage modularity which allows its length to be increased or decreased "two-sidedly" (i.e., both past and future) without affecting the already computed parameters. The LSL interpolation filter is also efficient in computation, flexible in implementation and fast in convergence. The computer simulation results shown in this paper reveal that although interpolation needs more computing power than prediction does, interpolation can generate much smaller error power and thus reduces much more temporal redundancy than prediction does. >