
Showing papers on "Interpolation" published in 2011


Book
18 Nov 2011
TL;DR: This book develops the theory of interpolation spaces, from the classical Riesz-Thorin and Marcinkiewicz theorems through the real (K- and J-method) and complex interpolation methods, to applications to Lp, Lorentz, Sobolev, and Besov spaces and to approximation theory.
Abstract (table of contents):
1. Some Classical Theorems: 1.1. The Riesz-Thorin Theorem. 1.2. Applications of the Riesz-Thorin Theorem. 1.3. The Marcinkiewicz Theorem. 1.4. An Application of the Marcinkiewicz Theorem. 1.5. Two Classical Approximation Results. 1.6. Exercises. 1.7. Notes and Comment.
2. General Properties of Interpolation Spaces: 2.1. Categories and Functors. 2.2. Normed Vector Spaces. 2.3. Couples of Spaces. 2.4. Definition of Interpolation Spaces. 2.5. The Aronszajn-Gagliardo Theorem. 2.6. A Necessary Condition for Interpolation. 2.7. A Duality Theorem. 2.8. Exercises. 2.9. Notes and Comment.
3. The Real Interpolation Method: 3.1. The K-Method. 3.2. The J-Method. 3.3. The Equivalence Theorem. 3.4. Simple Properties of Āθ,q. 3.5. The Reiteration Theorem. 3.6. A Formula for the K-Functional. 3.7. The Duality Theorem. 3.8. A Compactness Theorem. 3.9. An Extremal Property of the Real Method. 3.10. Quasi-Normed Abelian Groups. 3.11. The Real Interpolation Method for Quasi-Normed Abelian Groups. 3.12. Some Other Equivalent Real Interpolation Methods. 3.13. Exercises. 3.14. Notes and Comment.
4. The Complex Interpolation Method: 4.1. Definition of the Complex Method. 4.2. Simple Properties of A[θ]. 4.3. The Equivalence Theorem. 4.4. Multilinear Interpolation. 4.5. The Duality Theorem. 4.6. The Reiteration Theorem. 4.7. On the Connection with the Real Method. 4.8. Exercises. 4.9. Notes and Comment.
5. Interpolation of Lp-Spaces: 5.1. Interpolation of Lp-Spaces: the Complex Method. 5.2. Interpolation of Lp-Spaces: the Real Method. 5.3. Interpolation of Lorentz Spaces. 5.4. Interpolation of Lp-Spaces with Change of Measure: p0 = p1. 5.5. Interpolation of Lp-Spaces with Change of Measure: p0 ≠ p1. 5.6. Interpolation of Lp-Spaces of Vector-Valued Sequences. 5.7. Exercises. 5.8. Notes and Comment.
6. Interpolation of Sobolev and Besov Spaces: 6.1. Fourier Multipliers. 6.2. Definition of the Sobolev and Besov Spaces. 6.3. The Homogeneous Sobolev and Besov Spaces. 6.4. Interpolation of Sobolev and Besov Spaces. 6.5. An Embedding Theorem. 6.6. A Trace Theorem. 6.7. Interpolation of Semi-Groups of Operators. 6.8. Exercises. 6.9. Notes and Comment.
7. Applications to Approximation Theory: 7.1. Approximation Spaces. 7.2. Approximation of Functions. 7.3. Approximation of Operators. 7.4. Approximation by Difference Operators. 7.5. Exercises. 7.6. Notes and Comment.
References. List of Symbols.

4,025 citations


Posted Content
TL;DR: In this article, the authors deal with the fractional Sobolev spaces W^{s,p} and analyze the relations among some of their possible definitions and their role in the trace theory.
Abstract: This paper deals with the fractional Sobolev spaces W^{s,p}. We analyze the relations among some of their possible definitions and their role in the trace theory. We prove continuous and compact embeddings, investigating the problem of the extension domains and other regularity results. Most of the results we present here are probably well known to the experts, but we believe that our proofs are original and we do not make use of any interpolation techniques nor pass through the theory of Besov spaces. We also present some counterexamples in non-Lipschitz domains.

707 citations


Journal ArticleDOI
TL;DR: The quantitative and visual results show the superiority of the proposed technique over conventional and state-of-the-art image resolution enhancement techniques.
Abstract: In this correspondence, the authors propose an image resolution enhancement technique based on interpolation of the high-frequency subband images obtained by discrete wavelet transform (DWT) and the input image. The edges are enhanced by introducing an intermediate stage using stationary wavelet transform (SWT). DWT is applied in order to decompose an input image into different subbands. Then the high-frequency subbands as well as the input image are interpolated. The estimated high-frequency subbands are modified using the high-frequency subbands obtained through SWT. Then all these subbands are combined to generate a new high-resolution image by using inverse DWT (IDWT). The quantitative and visual results show the superiority of the proposed technique over conventional and state-of-the-art image resolution enhancement techniques.
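To make the pipeline concrete, here is a minimal sketch of a 2x enlargement along the lines described above, using PyWavelets and SciPy. The 'db1' wavelet, the bicubic zoom, and the use of the input image as the low-frequency subband are illustrative assumptions, not the authors' exact settings.

```python
import numpy as np
import pywt
from scipy.ndimage import zoom

def dwt_swt_upscale2x(img, wavelet="db1"):
    """Sketch of DWT/SWT-based 2x resolution enhancement.

    Assumes an even-sized grayscale image; the wavelet choice and the
    bicubic zoom (order=3) are assumptions, not the paper's parameters.
    """
    img = np.asarray(img, dtype=float)
    # DWT: high-frequency subbands at half the input size.
    _, (LH, HL, HH) = pywt.dwt2(img, wavelet)
    # SWT (undecimated): subbands at the full input size.
    [(_, (sLH, sHL, sHH))] = pywt.swt2(img, wavelet, level=1)
    # Interpolate the DWT high-frequency subbands to the input size and
    # correct them with the SWT subbands (the intermediate stage).
    est = tuple(zoom(b, 2.0, order=3) + s
                for b, s in zip((LH, HL, HH), (sLH, sHL, sHH)))
    # Use the input image itself as the low-frequency subband and
    # recombine with the inverse DWT, yielding a 2x larger image.
    return pywt.idwt2((img, est), wavelet)
```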

357 citations


Journal ArticleDOI
TL;DR: Interpolation methods provided high prediction accuracy for the mean concentration of soil heavy metals; however, the classic method, based on percentages of polluted samples, gave a pollution area 23.54-41.92% larger than that estimated by the interpolation methods.

354 citations


Journal ArticleDOI
TL;DR: In this article, the authors developed different algorithms of spatial interpolation for daily rainfall on 1 km2 regular grids in the catchment area and compared the results of geostatistical and deterministic approaches.
Abstract: Spatial interpolation of precipitation data is of great importance for hydrological modelling. Geostatistical methods (kriging) are widely applied in spatial interpolation from point measurements to continuous surfaces. The first step in kriging computation is semi-variogram modelling, which usually relies on a single variogram model for all time steps. The objective of this paper was to develop different algorithms of spatial interpolation for daily rainfall on 1 km2 regular grids in the catchment area and to compare the results of geostatistical and deterministic approaches. This study relied on 30 yr of daily rainfall data from 70 raingages in the hilly landscape of the Ourthe and Ambleve catchments in Belgium (2908 km2). This area lies between 35 and 693 m in elevation and consists of river networks that are tributaries of the Meuse River. For the geostatistical algorithms, seven semi-variogram models (logarithmic, power, exponential, Gaussian, rational quadratic, spherical and penta-spherical) were fitted to the sample semi-variogram on a daily basis. These seven variogram models were also adopted to avoid negative interpolated rainfall. The elevation, extracted from a digital elevation model, was incorporated into multivariate geostatistics. Seven validation raingages and cross validation were used to compare the interpolation performance of these algorithms applied to different densities of raingages. Among the seven variogram models used, the Gaussian model most frequently gave the best fit. Using seven variogram models can avoid negative daily rainfall in ordinary kriging. Negative kriging estimates were observed more for convective than for stratiform rain. The performance of the different methods varied only slightly with the density of raingages, particularly between 8 and 70 raingages, but differed markedly for interpolation using only 4 raingages. Spatial interpolation with the geostatistical and Inverse Distance Weighting (IDW) algorithms considerably outperformed interpolation with Thiessen polygons, commonly used in various hydrological models. Integrating elevation into Kriging with an External Drift (KED) and Ordinary Cokriging (OCK) did not improve the interpolation accuracy for daily rainfall. Ordinary Kriging (ORK) and IDW were considered the best methods, as they provided the smallest RMSE values in nearly all cases. Care should be taken in applying universal kriging (UNK) and KED when interpolating daily rainfall with very few neighbourhood sample points. These recommendations complement the results reported in the literature. ORK, UNK and KED using only the spherical model offered slightly better results, whereas OCK achieved better results using all seven variogram models.
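Since IDW comes out as one of the best performers here, a minimal NumPy sketch of gauge-to-grid IDW may help; the power of 2 is a common default, not necessarily the study's setting.

```python
import numpy as np

def idw(xy_obs, z_obs, xy_grid, power=2.0, eps=1e-12):
    """Inverse Distance Weighting from rain-gauge points to grid nodes.

    xy_obs: (n, 2) gauge coordinates; z_obs: (n,) daily rainfall;
    xy_grid: (m, 2) grid-node coordinates.
    """
    d = np.linalg.norm(xy_grid[:, None, :] - xy_obs[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, eps) ** power   # eps guards exact coincidence
    return (w @ z_obs) / w.sum(axis=1)

# Example: 5 gauges interpolated onto 3 nodes of a 1 km grid (toy data).
rng = np.random.default_rng(0)
gauges = rng.uniform(0, 50, size=(5, 2))      # km
rain = rng.gamma(2.0, 3.0, size=5)            # mm/day
nodes = np.array([[10.0, 10.0], [25.0, 25.0], [40.0, 5.0]])
print(idw(gauges, rain, nodes))
```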

299 citations


Journal ArticleDOI
TL;DR: In this paper, the performance of two fundamentally different approaches to achieve sub-pixel precision of normalised cross-correlation when measuring surface displacements on mass movements from repeat optical images was evaluated.

297 citations


Journal ArticleDOI
TL;DR: This paper presents an algorithm for the local implementation of Galerkin projection of discrete fields between meshes, which extends naturally to three dimensions and is very efficient.

294 citations


Journal ArticleDOI
TL;DR: In this paper, a new algorithm is developed to improve the accuracy and efficiency of the material point method for problems involving extremely large tensile deformations and rotations, and a novel set of grid basis functions is proposed for efficiently calculating nodal force and consistent mass integrals on the grid.
Abstract: A new algorithm is developed to improve the accuracy and efficiency of the material point method for problems involving extremely large tensile deformations and rotations. In the proposed procedure, particle domains are convected with the material motion more accurately than in the generalized interpolation material point method. This feature is crucial to eliminate instability in extension, which is a common shortcoming of most particle methods. Also, a novel alternative set of grid basis functions is proposed for efficiently calculating nodal force and consistent mass integrals on the grid. Specifically, by taking advantage of initially parallelogram-shaped particle domains, and treating the deformation gradient as constant over the particle domain, the convected particle domain is a reshaped parallelogram in the deformed configuration. Accordingly, an alternative grid basis function over the particle domain is constructed by a standard 4-node finite element interpolation on the parallelogram. Effectiveness of the proposed modifications is demonstrated using several large deformation solid mechanics problems.
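The "standard 4-node finite element interpolation on the parallelogram" is ordinary bilinear interpolation on a reference square; the sketch below evaluates it (illustrative only, not the paper's CPDI code).

```python
import numpy as np

def bilinear_shape(xi, eta):
    """Standard 4-node shape functions on the reference square [-1, 1]^2,
    the interpolation applied over each parallelogram-shaped convected
    particle domain; corners ordered counter-clockwise."""
    return 0.25 * np.array([(1 - xi) * (1 - eta),
                            (1 + xi) * (1 - eta),
                            (1 + xi) * (1 + eta),
                            (1 - xi) * (1 + eta)])

def interp_on_parallelogram(corner_vals, xi, eta):
    """Interpolate grid-basis values given at the 4 corners of the
    convected (parallelogram) particle domain."""
    return bilinear_shape(xi, eta) @ np.asarray(corner_vals, dtype=float)

# At the particle centre (xi = eta = 0) this reproduces the corner average.
print(interp_on_parallelogram([0.1, 0.4, 0.6, 0.3], 0.0, 0.0))  # 0.35
```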

277 citations


Proceedings ArticleDOI
30 Oct 2011
TL;DR: A simplified and faster implementation of Bradley's procedure is presented, along with successful and unsuccessful attempts to improve it; the authors' experience suggests that the algorithm is stronger than interpolation on industrial problems.
Abstract: Last spring, in March 2010, Aaron Bradley published the first truly new bit-level symbolic model checking algorithm since Ken McMillan's interpolation-based model checking procedure introduced in 2003. Our experience with the algorithm suggests that it is stronger than interpolation on industrial problems, and that it is an important algorithm to study further. In this paper, we present a simplified and faster implementation of Bradley's procedure, and discuss our successful and unsuccessful attempts to improve it.

270 citations


Journal ArticleDOI
TL;DR: The authors compare the linear interpolation technique with different approaches for analyzing the correlation functions and persistence of irregularly sampled time series, such as Lomb-Scargle Fourier transformation and kernel-based methods.
Abstract: Geoscientific measurements often provide time series with irregular time sampling, requiring either data reconstruction (interpolation) or sophisticated methods to handle irregular sampling. We compare the linear interpolation technique with different approaches for analyzing the correlation functions and persistence of irregularly sampled time series, such as Lomb-Scargle Fourier transformation and kernel-based methods. In a thorough benchmark test we investigate the performance of these techniques. All methods have comparable root mean square errors (RMSEs) for low skewness of the inter-observation time distribution. For high skewness, i.e. very irregular data, interpolation bias and RMSE increase strongly. We find a 40 % lower RMSE for the lag-1 autocorrelation function (ACF) for the Gaussian kernel method vs. the linear interpolation scheme, in the analysis of highly irregular time series. For the cross correlation function (CCF) the RMSE is then lower by 60 %. The application of the Lomb-Scargle technique gave results comparable to the kernel methods in the univariate case, but poorer results in the bivariate case. Especially the high-frequency components of the signal, where classical methods show a strong bias in ACF and CCF magnitude, are preserved when using the kernel methods. We illustrate the performance of interpolation vs. the Gaussian kernel method by applying both to paleo-data from four locations, reflecting late Holocene Asian monsoon variability as derived from speleothem δ18O measurements. Cross correlation results are similar for both methods, which we attribute to the long time scales of the common variability. The persistence time (memory) is strongly overestimated when using the standard, interpolation-based, approach. Hence, the Gaussian kernel is a reliable and more robust estimator with significant advantages compared to other techniques and is suitable for large-scale application to paleo-data.
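The kernel idea is simple enough to sketch: sample pairs are weighted by how close their time difference is to the requested lag. The default bandwidth below (a quarter of the mean sampling interval) is an assumption, not necessarily the paper's bandwidth rule.

```python
import numpy as np

def kernel_acf(t, x, lags, sigma=None):
    """Gaussian-kernel ACF estimate for an irregularly sampled series."""
    x = (x - x.mean()) / x.std()
    dt = t[None, :] - t[:, None]         # all pairwise time differences
    xx = x[:, None] * x[None, :]         # all pairwise products
    if sigma is None:
        sigma = 0.25 * np.mean(np.diff(np.sort(t)))   # assumed bandwidth
    acf = []
    for h in lags:
        w = np.exp(-0.5 * ((dt - h) / sigma) ** 2)
        acf.append(float((w * xx).sum() / w.sum()))
    return np.array(acf)

# Example: a noisy oscillation observed at irregular (exponential) times.
rng = np.random.default_rng(1)
t = np.cumsum(rng.exponential(1.0, 500))
x = np.sin(0.2 * t) + 0.3 * rng.normal(size=500)
print(kernel_acf(t, x, lags=[0.0, 1.0, 2.0]))
```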

Journal ArticleDOI
TL;DR: This work develops two implementations of CBP for a one-dimensional translation-invariant source, one using a first-order Taylor approximation, and another using a form of trigonometric spline, and examines the tradeoff between sparsity and signal reconstruction accuracy in these methods.
Abstract: We consider the problem of decomposing a signal into a linear combination of features, each a continuously translated version of one of a small set of elementary features. Although these constituents are drawn from a continuous family, most current signal decomposition methods rely on a finite dictionary of discrete examples selected from this family (e.g., shifted copies of a set of basic waveforms), and apply sparse optimization methods to select and solve for the relevant coefficients. Here, we generate a dictionary that includes auxiliary interpolation functions that approximate translates of features via adjustment of their coefficients. We formulate a constrained convex optimization problem, in which the full set of dictionary coefficients represents a linear approximation of the signal, the auxiliary coefficients are constrained so as to only represent translated features, and sparsity is imposed on the primary coefficients using an L1 penalty. The basis pursuit denoising (BP) method may be seen as a special case, in which the auxiliary interpolation functions are omitted, and we thus refer to our methodology as continuous basis pursuit (CBP). We develop two implementations of CBP for a one-dimensional translation-invariant source, one using a first-order Taylor approximation, and another using a form of trigonometric spline. We examine the tradeoff between sparsity and signal reconstruction accuracy in these methods, demonstrating empirically that trigonometric CBP substantially outperforms Taylor CBP, which, in turn, offers substantial gains over ordinary BP. In addition, the CBP bases can generally achieve equally good or better approximations with much coarser sampling than BP, leading to a reduction in dictionary dimensionality.
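A rough sketch of the first-order Taylor variant follows, using cvxpy for the constrained L1 problem. The variable names, the constraint |b_k| <= (Δ/2) a_k, and the penalty weight are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np
import cvxpy as cp

def taylor_cbp(signal, F, Fp, delta, lam=0.1):
    """First-order Taylor continuous basis pursuit (sketch).

    F, Fp: (n_samples, K) matrices whose k-th columns are the waveform
    and its time derivative placed at the k-th grid shift; delta is the
    grid spacing. Uses f(t - tau) ~ f(t) - tau*f'(t), so a feature with
    amplitude a_k and shift tau_k contributes a_k*F[:, k] + b_k*Fp[:, k]
    with b_k = -a_k*tau_k; the constraint keeps |tau_k| <= delta/2.
    """
    K = F.shape[1]
    a = cp.Variable(K, nonneg=True)   # primary (amplitude) coefficients
    b = cp.Variable(K)                # auxiliary (shift) coefficients
    resid = signal - F @ a - Fp @ b
    prob = cp.Problem(cp.Minimize(cp.sum_squares(resid) + lam * cp.norm1(a)),
                      [cp.abs(b) <= (delta / 2) * a])
    prob.solve()
    tau = np.where(a.value > 1e-6,
                   -b.value / np.maximum(a.value, 1e-12), 0.0)
    return a.value, tau
```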

Proceedings ArticleDOI
12 Dec 2011
TL;DR: The use of displacement interpolation is developed for this class of problem, which provides a generic method for interpolating between distributions or functions based on advection instead of blending.
Abstract: Interpolation between pairs of values, typically vectors, is a fundamental operation in many computer graphics applications. In some cases simple linear interpolation yields meaningful results without requiring domain knowledge. However, interpolation between pairs of distributions or pairs of functions often demands more care because features may exhibit translational motion between exemplars. This property is not captured by linear interpolation. This paper develops the use of displacement interpolation for this class of problem, which provides a generic method for interpolating between distributions or functions based on advection instead of blending. The functions can be non-uniformly sampled, high-dimensional, and defined on non-Euclidean manifolds, e.g., spheres and tori. Our method decomposes distributions or functions into sums of radial basis functions (RBFs). We solve a mass transport problem to pair the RBFs and apply partial transport to obtain the interpolated function. We describe practical methods for computing the RBF decomposition and solving the transport problem. We demonstrate the interpolation approach on synthetic examples, BRDFs, color distributions, environment maps, stipple patterns, and value functions.
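In one dimension, displacement interpolation has a classical closed form via quantile (inverse-CDF) interpolation; the sketch below illustrates the advection-versus-blending distinction, though the paper's RBF-decomposition-plus-partial-transport algorithm is far more general.

```python
import numpy as np

def displacement_interp_1d(p, q, x, alpha):
    """Displacement interpolation between two 1-D densities p and q on
    grid x, by interpolating their quantile functions."""
    n = 4096
    u = (np.arange(n) + 0.5) / n
    def quantile(d):
        cdf = np.cumsum(d + 1e-12)       # eps keeps the CDF increasing
        return np.interp(u, cdf / cdf[-1], x)
    # Interpolate quantiles: mass is advected, not cross-faded.
    samples = (1 - alpha) * quantile(p) + alpha * quantile(q)
    hist, _ = np.histogram(samples, bins=len(x), range=(x[0], x[-1]))
    return hist / hist.sum()

# Halfway between two Gaussian bumps, the bump sits at the midpoint
# (advection) rather than becoming a bimodal blend.
x = np.linspace(-10, 10, 400)
g = lambda m: np.exp(-0.5 * (x - m) ** 2)
mid = displacement_interp_1d(g(-5) / g(-5).sum(), g(5) / g(5).sum(), x, 0.5)
print(x[np.argmax(mid)])   # ~0
```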

Journal ArticleDOI
TL;DR: A reliability-guided displacement scanning strategy is employed to avoid time-consuming integer-pixel displacement searching for each calculation point, and a pre-computed global interpolation coefficient look-up table is utilized to entirely eliminate repetitive interpolation calculation at sub-pixel locations.

Journal ArticleDOI
TL;DR: A new upscaling method (iterative curvature-based interpolation) based on a two-step grid filling and an iterative correction of the interpolated pixels obtained by minimizing an objective function depending on the second-order directional derivatives of the image intensity is described.
Abstract: The problem of creating artifact-free upscaled images appearing sharp and natural to the human observer is probably more interesting and less trivial than it may appear. The solution to the problem, often referred to also as "single-image super-resolution," is related both to the statistical relationship between low-resolution and high-resolution image sampling and to the human perception of image quality. In many practical applications, simple linear or cubic interpolation algorithms are applied for this task, but the results obtained are not really satisfactory, being affected by noticeable artifacts such as blurring and jaggies. Several methods have been proposed to obtain better results, involving simple heuristics, edge modeling, or statistical learning. The most powerful ones, however, present a high computational complexity and are not suitable for real-time applications, while fast methods, even if edge adaptive, are not able to provide artifact-free images. In this paper, we describe a new upscaling method (iterative curvature-based interpolation, ICBI) based on a two-step grid filling and an iterative correction of the interpolated pixels, obtained by minimizing an objective function depending on the second-order directional derivatives of the image intensity. We show that the constraints used to derive the function are related to those applied in another well-known interpolation method that provides good results but is computationally heavy, i.e., new edge-directed interpolation (NEDI). The high quality of the images enlarged with the new method is demonstrated with objective and subjective tests, while the computation time is reduced by one to two orders of magnitude with respect to NEDI, so that we were able, using a graphics processing unit implementation based on the nVidia Compute Unified Device Architecture technology, to obtain real-time performance.
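The first grid-filling step can be sketched as below. This uses a simplified directional choice (pick the diagonal pair with the smaller intensity difference), not the paper's exact second-derivative test or its iterative refinement.

```python
import numpy as np

def grid_fill_step1(img):
    """First grid-filling step of curvature-guided 2x upscaling (sketch).

    Original pixels go to the even coordinates of the enlarged grid;
    each new 'diagonal' pixel is interpolated along the diagonal with
    the smaller intensity jump. The full ICBI method then fills the
    remaining pixels and iteratively corrects all interpolated ones.
    """
    h, w = img.shape
    out = np.zeros((2 * h - 1, 2 * w - 1), dtype=float)
    out[::2, ::2] = img
    a, b = img[:-1, :-1], img[1:, 1:]     # one diagonal pair
    c, d = img[1:, :-1], img[:-1, 1:]     # the other diagonal pair
    use_ab = np.abs(a - b) <= np.abs(c - d)
    out[1::2, 1::2] = np.where(use_ab, (a + b) / 2, (c + d) / 2)
    return out
```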

Journal ArticleDOI
TL;DR: This paper uses higher-order piecewise interpolation polynomials to approximate the fractional integral and fractional derivatives, and uses the Simpson method to design a higher-order algorithm for fractional differential equations.
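For reference, the piecewise-linear (second-order) version of this idea is the classical fractional trapezoidal rule for the Riemann-Liouville integral; a sketch with the standard weights follows. The paper's Simpson-type scheme is higher order than this.

```python
import math
import numpy as np

def rl_fractional_integral(f, t_end, alpha, n):
    """Riemann-Liouville integral I^alpha f(t_end) using piecewise-linear
    interpolation of f on a uniform grid (fractional trapezoidal rule)."""
    h = t_end / n
    tj = np.linspace(0.0, t_end, n + 1)
    j = np.arange(1, n)
    a = np.empty(n + 1)
    a[0] = (n - 1) ** (alpha + 1) - (n - alpha - 1) * n ** alpha
    a[1:n] = ((n - j + 1) ** (alpha + 1) - 2 * (n - j) ** (alpha + 1)
              + (n - j - 1) ** (alpha + 1))
    a[n] = 1.0
    return h ** alpha / math.gamma(alpha + 2) * np.dot(a, f(tj))

# Sanity check: I^1 of f(t) = t on [0, 1] is t^2/2, i.e. 0.5 at t = 1.
print(rl_fractional_integral(lambda t: t, 1.0, 1.0, 64))  # ~0.5
```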

Journal ArticleDOI
TL;DR: Different spatial interpolation algorithms have been evaluated to produce a reasonably good continuous dataset bridging the gaps in the historical series of precipitation records in Sicily; validation results indicate that the univariate methods, which neglect elevation information, are characterized by the largest errors.

Journal ArticleDOI
TL;DR: In this article, a new type of interpolation function is introduced that has a zero slope at the equilibrium values of the nonconserved field variables representing the different phases and allows for a thermodynamically consistent interpolation of the free energies.
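The classic example of an interpolation function with such zero-slope endpoints is the quintic h(φ) = φ³(10 − 15φ + 6φ²); the snippet below checks the defining properties. This is the standard textbook choice, not necessarily the new function introduced in the paper.

```python
import numpy as np

def h(phi):
    """h(0) = 0, h(1) = 1, and h'(0) = h'(1) = 0, so interpolating the
    bulk free energies does not shift the equilibrium phase values."""
    return phi**3 * (10.0 - 15.0 * phi + 6.0 * phi**2)

def dh(phi):
    """Derivative h'(phi) = 30 phi^2 (1 - phi)^2, zero at 0 and 1."""
    return 30.0 * phi**2 * (1.0 - phi) ** 2

phi = np.array([0.0, 0.5, 1.0])
print(h(phi))    # [0.  0.5 1. ]
print(dh(phi))   # [0.    1.875 0.   ]
```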

Journal ArticleDOI
TL;DR: This paper presents bilinear and bicubic interpolation methods tailored for division-of-focal-plane polarization imaging sensors, targeting a 1-megapixel polarization imaging sensor operating in the visible spectrum.
Abstract: This paper presents bilinear and bicubic interpolation methods tailored for the division-of-focal-plane polarization imaging sensor. The interpolation methods are targeted for a 1-megapixel polarization imaging sensor operating in the visible spectrum. The five interpolation methods considered in this paper are: bilinear, weighted bilinear, bicubic spline, an approximated bicubic spline, and a bicubic interpolation method. Modulation transfer function analysis is applied to the different interpolation methods, and test images as well as numerical error analyses are also presented. Based on the comparison results, the full-frame bicubic spline interpolation achieves the best performance for polarization images.

Proceedings ArticleDOI
25 Jul 2011
TL;DR: This work proposes an example-based approach for simulating complex elastic material behavior that promotes an art-directed approach to solid simulation, which the authors exemplify on a set of practical examples.
Abstract: We propose an example-based approach for simulating complex elastic material behavior. Supplied with a few poses that characterize a given object, our system starts by constructing a space of preferred deformations by means of interpolation. During simulation, this example manifold then acts as an additional elastic attractor that guides the object towards its space of preferred shapes. Added on top of existing solid simulation codes, this example potential effectively allows us to implement inhomogeneous and anisotropic materials in a direct and intuitive way. Due to its example-based interface, our method promotes an art-directed approach to solid simulation, which we exemplify on a set of practical examples.

Journal ArticleDOI
TL;DR: Two multi-material interpolation schemes are presented as direct generalizations of the well-known SIMP and RAMP material interpolation schemes originally developed for isotropic mixtures of two isotropic material phases.
Abstract: This paper presents two multi-material interpolation schemes as direct generalizations of the well-known SIMP and RAMP material interpolation schemes originally developed for isotropic mixtures of two isotropic material phases. The new interpolation schemes provide generally applicable interpolation schemes between an arbitrary number of pre-defined materials with given (anisotropic) properties. The method relies on a large number of sparse linear constraints to enforce the selection of at most one material in each design subdomain. Topology and multi-material optimization is formulated within a unified parametrization.
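For reference, the original two-phase SIMP and RAMP interpolations that the paper generalizes are simple scalar functions of the design density; the paper extends these to an arbitrary number of pre-defined (possibly anisotropic) phases with sparse selection constraints.

```python
import numpy as np

def simp(rho, p=3.0, e0=1.0, emin=1e-9):
    """Two-phase SIMP: E(rho) = Emin + rho^p (E0 - Emin); the customary
    penalization p = 3 is an assumption, not the paper's setting."""
    return emin + rho**p * (e0 - emin)

def ramp(rho, q=8.0, e0=1.0, emin=1e-9):
    """Two-phase RAMP: E(rho) = Emin + rho / (1 + q (1 - rho)) (E0 - Emin),
    which, unlike SIMP, has a nonzero slope at rho = 0."""
    return emin + rho / (1.0 + q * (1.0 - rho)) * (e0 - emin)

rho = np.linspace(0.0, 1.0, 5)
print(simp(rho))   # both penalize intermediate densities
print(ramp(rho))
```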

Patent
27 Sep 2011
TL;DR: In this paper, the authors present a method of operating a centralized healthcare management system that includes a data translation map database and a central interpolation server computer interconnected to a computer network.
Abstract: A method of operating a centralized healthcare management system that includes a data translation map database and a central interpolation server computer interconnected to a computer network The central server references a data translation map database for a desired translation map that enables the central interpolation server to translate data records from a source format to a destination format

Journal ArticleDOI
TL;DR: In this article, a thorough analysis of the generalized delayed-signal cancellation (DSC) operator in both synchronous and stationary reference frames is first conducted, and the discretization error during digital implementation due to nonideal system sampling frequency and/or grid-frequency variation is quantified with the proposed concept of relative harmonic gain error.
Abstract: A phase-locked loop (PLL) is usually required to detect the grid phase angle in grid-tied converters. Conventional PLL schemes have to compromise between steady-state accuracy and transient dynamics when the grid voltage is polluted by unbalance and harmonics. To overcome this challenge, a generalized delayed-signal-cancellation (DSC) operator was recently proposed to form a cascaded DSC (CDSC) operator that eliminates arbitrary harmonics. With the CDSC operator, the conditioned voltage can be used in a PLL loop with very high bandwidth for fast tracking. However, in digital implementation, the CDSC operator may be subject to delay-time errors, which subsequently lead to residual distortions in the conditioned voltage. In this paper, a thorough analysis of the CDSC operator in both synchronous and stationary reference frames is first conducted. The discretization error during digital implementation, due to nonideal system sampling frequency and/or grid-frequency variation, is quantified with the proposed concept of relative harmonic gain error. An effective improvement method is then developed that is based on linear interpolation and is effective for all delay-based PLL schemes. Finally, experimental results are obtained to verify the harmonic elimination ability of CDSC in various scenarios and the effectiveness of the interpolation-based digital implementation scheme.
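A minimal sketch of a single stationary-frame DSC operator with the fractional delay realized by linear interpolation, which is the kind of correction the paper analyzes; the nominal 50 Hz grid frequency and the start-up handling are assumptions.

```python
import numpy as np

def dsc(v, n, fs, f1=50.0):
    """Apply DSC_n(v)(t) = 0.5*(v(t) + exp(j*2*pi/n)*v(t - T/n)) to a
    complex alpha-beta signal sampled at fs (Hz), T = 1/f1. The delay
    T/n is in general a non-integer number of samples and is realized
    by linear interpolation between the two adjacent past samples."""
    d = fs / (f1 * n)                 # delay in samples (T/n)
    k = int(np.floor(d))
    mu = d - k                        # fractional part of the delay
    vd = np.empty_like(v)
    vd[:k + 1] = v[0]                 # crude start-up transient handling
    i = np.arange(k + 1, len(v))
    vd[i] = (1 - mu) * v[i - k] + mu * v[i - k - 1]
    return 0.5 * (v + np.exp(2j * np.pi / n) * vd)

# Example: DSC_4 suppresses the negative-sequence component.
fs, f1 = 10240.0, 50.0                # fractional delay: 51.2 samples
t = np.arange(4096) / fs
v = np.exp(2j * np.pi * f1 * t) + 0.3 * np.exp(-2j * np.pi * f1 * t)
out = dsc(v, 4, fs)
print(np.abs(out[1000:]).min(), np.abs(out[1000:]).max())  # both ~1.0
```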

Journal ArticleDOI
Thierry Coupez
TL;DR: This paper proposes to build a metric field directly at the nodes of the mesh, for direct use in meshing tools, by using the statistical concept of length distribution tensors.

Journal ArticleDOI
TL;DR: A numerical scheme for computing quantities involving gradients of shape functions is introduced for the material point method, so that the quantities remain continuous as material points move across cell boundaries; the scheme is proved to conserve mass and momentum exactly.

Journal ArticleDOI
TL;DR: This paper describes the implementation of an immersed boundary method using the direct-forcing concept to investigate complex shock-obstacle interactions; an interpolation algorithm is developed for more stable boundary conditions with an easier implementation procedure.

Journal ArticleDOI
TL;DR: This method is well suited to topology optimization problems whose design domain contains higher-order or non-quadrilateral elements, and it can yield mesh-independent solutions if the radius of the influence domain is reasonably specified.

Journal ArticleDOI
TL;DR: Two new algorithms to improve greedy sampling of high-dimensional parametrized functions are proposed, based on a saturation assumption of the error in the greedy algorithm and an algorithm in which the train set for the greedy approach is adaptively sparsified and enriched.
Abstract: We propose two new algorithms to improve greedy sampling of high-dimensional functions. While the techniques have a substantial degree of generality, we frame the discussion in the context of methods for empirical interpolation and the development of reduced basis techniques for high-dimensional parametrized functions. The first algorithm, based on a saturation assumption of the error in the greedy algorithm, is shown to result in a significant reduction of the workload over the standard greedy algorithm. In a further improved approach, this is combined with an algorithm in which the train set for the greedy approach is adaptively sparsified and enriched. A safety check step is added at the end of the algorithm to certify the quality of the sampling. Both these techniques are applicable to high-dimensional problems and we shall demonstrate their performance on a number of numerical examples.
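For context, the standard greedy empirical interpolation loop that both improvements build on can be sketched as follows; the saturation-based work reduction and the adaptive train-set handling proposed in the paper are deliberately omitted.

```python
import numpy as np

def eim(snapshots, n_basis, tol=1e-10):
    """Greedy empirical interpolation method (standard EIM, a sketch).

    snapshots: (n_points, n_train) array, one column per train-set
    parameter. Returns the basis Q (n_points, m) and the interpolation
    point indices."""
    Q, idx = [], []
    for _ in range(n_basis):
        if Q:
            B = np.array(Q).T                      # current basis
            coef = np.linalg.solve(B[idx, :], snapshots[idx, :])
            resid = snapshots - B @ coef           # interpolation error
        else:
            resid = snapshots.copy()
        j = np.argmax(np.abs(resid).max(axis=0))   # worst train function
        r = resid[:, j]
        i = int(np.argmax(np.abs(r)))              # new interpolation point
        if abs(r[i]) < tol:
            break
        Q.append(r / r[i])                         # normalize to 1 at point i
        idx.append(i)
    return np.array(Q).T, np.array(idx)

# Tiny demo: u(x; mu) = 1 / (1 + mu x^2) over a parameter train set.
x = np.linspace(-1.0, 1.0, 200)
mus = np.linspace(0.1, 10.0, 50)
U = 1.0 / (1.0 + np.outer(x**2, mus))
Q, pts = eim(U, n_basis=8)
print(Q.shape, pts)
```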

Journal ArticleDOI
TL;DR: In this article, a detailed analysis of natural frequencies of laminated composite plates using the meshfree moving Kriging interpolation method is presented, and the convergence of the method on the natural frequency is also given.
Abstract: A detailed analysis of natural frequencies of laminated composite plates using the meshfree moving Kriging interpolation method is presented. The present formulation is based on the classical plate theory, while the moving Kriging interpolation, which satisfies the delta property, is employed to construct the shape functions. Owing to this property of the interpolation functions, the method is more convenient and no special techniques are needed to enforce the essential boundary conditions. Numerical examples with different plate shapes are presented and the results are compared with reference solutions available in the literature. Several relevant parameters of the model (fiber orientations, lay-up number, aspect ratios, stiffness ratios, etc.) that affect the frequency are analyzed numerically in detail. The convergence of the method on the natural frequency is also given. As a consequence, the applicability and the effectiveness of the present method for accurately computing natural frequencies of generally shaped laminates are demonstrated.
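A 1-D sketch of moving Kriging shape functions with a Gaussian correlation model shows the delta property the abstract relies on; the paper's 2-D plate formulation follows the same construction. The correlation parameter theta below is illustrative.

```python
import numpy as np

def moving_kriging_shapes(x_eval, nodes, theta=50.0):
    """1-D moving Kriging shape functions, Gaussian correlation model,
    linear polynomial basis: phi(x) = p(x)^T A + r(x)^T B with
    A = (P^T R^-1 P)^-1 P^T R^-1 and B = R^-1 (I - P A)."""
    n = len(nodes)
    R = np.exp(-theta * (nodes[:, None] - nodes[None, :]) ** 2)
    P = np.column_stack([np.ones(n), nodes])          # linear basis
    Rinv = np.linalg.inv(R)
    A = np.linalg.solve(P.T @ Rinv @ P, P.T @ Rinv)
    B = Rinv @ (np.eye(n) - P @ A)
    p = np.array([1.0, x_eval])
    r = np.exp(-theta * (x_eval - nodes) ** 2)
    return p @ A + r @ B                              # (n,) shape values

nodes = np.linspace(0.0, 1.0, 5)
phi = moving_kriging_shapes(nodes[2], nodes)
print(np.round(phi, 6))   # delta property: 1 at node 2, 0 elsewhere
```

Because the shape functions interpolate nodal values exactly, essential boundary conditions can be imposed directly, unlike with moving least-squares approximations.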

Proceedings ArticleDOI
16 Jul 2011
TL;DR: The main results are model-theoretic characterizations of uniform interpolants and their existence in terms of bisimulations, tight complexity bounds for deciding the existence of Uniform interpolants, an approach to computing interpolants when they exist, and tight bounds on their size.
Abstract: We study uniform interpolation and forgetting in the description logic ALC. Our main results are model-theoretic characterizations of uniform interpolants and their existence in terms of bisimulations, tight complexity bounds for deciding the existence of uniform interpolants, an approach to computing interpolants when they exist, and tight bounds on their size. We use a mix of model-theoretic and automata-theoretic methods that, as a by-product, also provides characterizations of and decision procedures for conservative extensions.