
Showing papers on "Interpolation published in 2004"


Book
13 Dec 2004
TL;DR: In this paper, the authors propose a set of auxiliary tools from analysis and measure theory for radial basis function interpolation on spheres and other manifolds, including native spaces, conditionally positive definite functions, and compactly supported functions.
Abstract: 1. Applications and motivations 2. Haar spaces and multivariate polynomials 3. Local polynomial reproduction 4. Moving least squares 5. Auxiliary tools from analysis and measure theory 6. Positive definite functions 7. Completely monotone functions 8. Conditionally positive definite functions 9. Compactly supported functions 10. Native spaces 11. Error estimates for radial basis function interpolation 12. Stability 13. Optimal recovery 14. Data structures 15. Numerical methods 16. Generalised interpolation 17. Interpolation on spheres and other manifolds.

1,821 citations


Journal ArticleDOI
01 Oct 2004-Ecology
TL;DR: In this paper, a binomial mixture model is proposed for the species accumulation function based on presence-absence (incidence) of species in a sample of quadrats or other sampling units, which covers interpolation between zero and the observed number of samples, as well as extrapolation beyond the observed sample set.
Abstract: A general binomial mixture model is proposed for the species accumulation function based on presence-absence (incidence) of species in a sample of quadrats or other sampling units. The model covers interpolation between zero and the observed number of samples, as well as extrapolation beyond the observed sample set. For interpolation (sample-based rarefaction), easily calculated, closed-form expressions for both expected richness and its confidence limits are developed (using the method of moments) that completely eliminate the need for resampling methods and permit direct statistical comparison of richness between sample sets. An incidence-based form of the Coleman (random-placement) model is developed and compared with the moment-based interpolation method. For extrapolation beyond the empirical sample set (and simultaneously, as an alternative method of interpolation), a likelihood-based estimator with a bootstrap confidence interval is described that relies on a sequential, AIC-guided algorithm to fit the mixture model parameters. Both the moment-based and likelihood-based estimators are illustrated with data sets for temperate birds and tropical seeds, ants, and trees. The moment-based estimator is confidently recommended for interpolation (sample-based rarefaction). For extrapolation, the likelihood-based estimator performs well for doubling or tripling the number of empirical samples, but it is not reliable for estimating the richness asymptote. The sensitivity of individual-based and sample-based rarefaction to spatial (or temporal) patchiness is discussed.
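The moment-based interpolation above has a standard closed form: a species with incidence Y_i (present in Y_i of T sampling units) is missed by a random subset of t units with probability C(T − Y_i, t)/C(T, t). A minimal Python sketch of this expectation (an illustration of the type of expression the paper develops, not the authors' code):

```python
from math import comb

def rarefy(incidence, T, t):
    # Expected species richness in t of T sampling units, given each
    # species' incidence Y_i (number of units it occurs in). A species
    # is missed with probability C(T - Y_i, t) / C(T, t).
    return sum(1 - comb(T - y, t) / comb(T, t) for y in incidence)
```

For example, two species seen in 2 and 1 of T = 2 samples give an expected richness of 1.5 at t = 1 and 2.0 at t = 2.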

1,669 citations


Journal ArticleDOI
TL;DR: Barrault et al. as discussed by the authors presented an efficient reduced-basis discretization procedure for partial differential equations with non-affine parameter dependence, replacing non-affine coefficient functions with a collateral reduced-basis expansion, which then permits an affine offline-online computational decomposition.

1,265 citations


Journal ArticleDOI
TL;DR: Barycentric interpolation is a variant of Lagrange polynomial interpolation that is fast and stable and deserves to be known as the standard method of polynomial interpolation.
Abstract: Barycentric interpolation is a variant of Lagrange polynomial interpolation that is fast and stable. It deserves to be known as the standard method of polynomial interpolation.
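The second (true) barycentric formula evaluates the Lagrange interpolant in O(n) per point once the weights w_j = 1/∏_{k≠j}(x_j − x_k) are known. A minimal sketch (not code from the paper):

```python
def bary_weights(xs):
    # Barycentric weights: w_j = 1 / prod_{k != j} (x_j - x_k).
    w = []
    for j, xj in enumerate(xs):
        p = 1.0
        for k, xk in enumerate(xs):
            if k != j:
                p *= xj - xk
        w.append(1.0 / p)
    return w

def bary_eval(x, xs, fs, w):
    # Second ("true") barycentric formula; exact at the nodes.
    num = den = 0.0
    for xj, fj, wj in zip(xs, fs, w):
        if x == xj:
            return fj
        t = wj / (x - xj)
        num += t * fj
        den += t
    return num / den
```

With nodes 0, 1, 2 and data from f(x) = x², the interpolant reproduces 2.25 at x = 1.5 exactly, since a quadratic is determined by three points.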

1,177 citations


Journal ArticleDOI
TL;DR: This paper observes that one of the standard interpolation or "gridding" schemes, based on Gaussians, can be accelerated by a significant factor without precomputation and storage of the interpolation weights, of particular value in two- and three- dimensional settings.
Abstract: The nonequispaced Fourier transform arises in a variety of application areas, from medical imaging to radio astronomy to the numerical solution of partial differential equations. In a typical problem, one is given an irregular sampling of N data in the frequency domain and one is interested in reconstructing the corresponding function in the physical domain. When the sampling is uniform, the fast Fourier transform (FFT) allows this calculation to be computed in O(N log N) operations rather than O(N^2) operations. Unfortunately, when the sampling is nonuniform, the FFT does not apply. Over the last few years, a number of algorithms have been developed to overcome this limitation and are often referred to as nonuniform FFTs (NUFFTs). These rely on a mixture of interpolation and the judicious use of the FFT on an oversampled grid (A. Dutt and V. Rokhlin, SIAM J. Sci. Comput., 14 (1993), pp. 1368-1383). In this paper, we observe that one of the standard interpolation or "gridding" schemes, based on Gaussians, can be accelerated by a significant factor without precomputation and storage of the interpolation weights. This is of particular value in two- and three-dimensional settings, saving either 10^d N in storage in d dimensions or a factor of about 5-10 in CPU time (independent of dimension).
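For reference, the quantity a type-1 NUFFT approximates is the direct nonuniform discrete Fourier sum, which costs O(N·M) when evaluated naively; gridding plus an oversampled FFT brings this to roughly O(M log M). A sketch of the direct sum only (not the paper's accelerated Gaussian gridding):

```python
import cmath, math

def nudft(xs, cs, M):
    # Direct evaluation of f_k = sum_j c_j * exp(i k x_j) for
    # k = -M/2 .. M/2 - 1, given nonuniform nodes xs in [0, 2*pi).
    # This O(N*M) sum is the reference computation that NUFFT
    # algorithms approximate via gridding + FFT.
    return [sum(c * cmath.exp(1j * k * x) for x, c in zip(xs, cs))
            for k in range(-M // 2, M // 2)]
```

For equispaced nodes and coefficients c_j = e^{−i x_j}/N this recovers a unit spike at k = 1, which is a convenient sanity check against an FFT.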

714 citations


Journal ArticleDOI
TL;DR: In this paper, the Material Point Method (MPM) is generalized using a variational form and a Petrov-Galerkin discretization scheme, resulting in a family of methods named the Generalized Interpolation Material Point (GIMP) methods.
Abstract: The Material Point Method (MPM) discrete solution procedure for computational solid mechanics is generalized using a variational form and a Petrov-Galerkin discretization scheme, resulting in a family of methods named the Generalized Interpolation Material Point (GIMP) methods. The generalization permits identification with aspects of other point or node based discrete solution techniques which do not use a body-fixed grid, i.e. the "meshless methods". Similarities are noted and some practical advantages relative to some of these methods are identified. Examples are used to demonstrate and explain numerical artifact noise which can be expected in MPM calculations. This noise results in non-physical local variations at the material points, where constitutive response is evaluated. It is shown to destroy the explicit solution in one case, and seriously degrade it in another. History dependent, inelastic constitutive laws can be expected to evolve erroneously and report inaccurate stress states because of noisy input. The noise is due to the lack of smoothness of the interpolation functions, and occurs due to material points crossing computational grid boundaries. The next degree of smoothness available in the GIMP methods is shown to be capable of eliminating cell crossing noise. Keywords: MPM, PIC, meshless methods, Petrov-Galerkin discretization.

550 citations


Journal ArticleDOI
TL;DR: This work presents results of 3D numerical simulations using a finite difference code featuring fixed mesh refinement (FMR), in which a subset of the computational domain is refined in space and time.
Abstract: We present results of 3D numerical simulations using a finite difference code featuring fixed mesh refinement (FMR), in which a subset of the computational domain is refined in space and time. We apply this code to a series of test cases including a robust stability test, a nonlinear gauge wave and an excised Schwarzschild black hole in an evolving gauge. We find that the mesh refinement results are comparable in accuracy, stability and convergence to unigrid simulations with the same effective resolution. At the same time, the use of FMR reduces the computational resources needed to obtain a given accuracy. Particular care must be taken at the interfaces between coarse and fine grids to avoid a loss of convergence at higher resolutions, and we introduce the use of 'buffer zones' as one resolution of this issue. We also introduce a new method for initial data generation, which enables higher order interpolation in time even from the initial time slice. This FMR system, 'Carpet', is a driver module in the freely available Cactus computational infrastructure, and is able to endow generic existing Cactus simulation modules ('thorns') with FMR with little or no extra effort.

525 citations


Journal ArticleDOI
TL;DR: In this paper, it is shown that not all parameters in the Matérn class can be estimated consistently if data are observed in an increasing density in a fixed domain, regardless of the estimation methods used.
Abstract: It is shown that in model-based geostatistics, not all parameters in the Matérn class can be estimated consistently if data are observed in an increasing density in a fixed domain, regardless of the estimation methods used. Nevertheless, one quantity can be estimated consistently by the maximum likelihood method, and this quantity is more important to spatial interpolation. The results are established by using the properties of equivalence and orthogonality of probability measures. Some sufficient conditions are provided for both Gaussian and non-Gaussian equivalent measures, and necessary conditions are provided for Gaussian equivalent measures. Two simulation studies are presented that show that the fixed-domain asymptotic properties can explain some finite-sample behavior of both interpolation and estimation when the sample size is moderately large.

517 citations


Book ChapterDOI
David Levin1
01 Jan 2004
TL;DR: In this article, a basic mesh-independent projection strategy for general surface interpolation is proposed, based upon the moving-least-squares (MLS) approach, and the resulting surface is C∞ smooth.
Abstract: Smooth interpolation of unstructured surface data is usually achieved by joining local patches, where each patch is an approximation (usually parametric) defined on a local reference domain. A basic mesh-independent projection strategy for general surface interpolation is proposed here. The projection is based upon the 'Moving-Least-Squares' (MLS) approach, and the resulting surface is C∞ smooth. The projection involves a first stage of defining a local reference domain and a second stage of constructing an MLS approximation with respect to the reference domain. The approach is presented for the general problem of approximating a (d − 1)-dimensional manifold in ℝ^d, d ≥ 2. The approach is applicable for interpolating or smoothing curve and surface data, as demonstrated here by some graphical examples.
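The paper's second stage builds an MLS approximation over the local reference domain. The underlying MLS idea, shown here in its simplest 1-D function form (a sketch with an assumed Gaussian weight and linear local basis, not the paper's surface projection), is a weighted least-squares fit recomputed at every evaluation point:

```python
import math

def mls_eval(xs, fs, x, h=0.5):
    # 1-D moving least squares: at each x, solve the weighted local fit
    #   min_{a,b}  sum_i w_i(x) * (a + b*(x_i - x) - f_i)^2
    # with Gaussian weights w_i(x); the fitted value at x is a.
    w = [math.exp(-((xi - x) / h) ** 2) for xi in xs]
    s0 = sum(w)
    s1 = sum(wi * (xi - x) for wi, xi in zip(w, xs))
    s2 = sum(wi * (xi - x) ** 2 for wi, xi in zip(w, xs))
    t0 = sum(wi * fi for wi, fi in zip(w, fs))
    t1 = sum(wi * (xi - x) * fi for wi, xi, fi in zip(w, xs, fs))
    det = s0 * s2 - s1 * s1  # 2x2 normal equations for (a, b)
    return (t0 * s2 - t1 * s1) / det
```

Because the local basis contains linear functions, data sampled from a line is reproduced exactly, whatever the weights; smoothness of the weight function is what makes the result C∞.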

503 citations


Proceedings ArticleDOI
17 May 2004
TL;DR: A new interpolation technique for demosaicing of color images produced by single-CCD digital cameras is introduced; the proposed simple linear filter leads to an improvement in PSNR over bilinear demosaicing and an improvement in R and B interpolation when compared to a recently introduced linear interpolator.
Abstract: This paper introduces a new interpolation technique for demosaicing of color images produced by single-CCD digital cameras. We show that the proposed simple linear filter can lead to an improvement in PSNR of over 5.5 dB when compared to bilinear demosaicing, and about 0.7 dB improvement in R and B interpolation when compared to a recently introduced linear interpolator. The proposed filter also outperforms most nonlinear demosaicing algorithms, without the artifacts due to nonlinear processing, and at a much reduced computational complexity.
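The general structure of such a linear demosaicing filter, bilinear estimate plus a gradient (Laplacian) correction from the co-located channel, can be sketched as follows; the function name and the gain alpha = 0.5 are illustrative assumptions, not values taken from the paper:

```python
def green_at_red(raw, r, c, alpha=0.5):
    # Estimate the missing green sample at a red Bayer site (r, c):
    # bilinear average of the four green neighbors, plus a Laplacian
    # correction computed from the red channel itself.
    # NOTE: alpha = 0.5 is an assumed illustrative gain.
    bilinear = (raw[r-1][c] + raw[r+1][c]
                + raw[r][c-1] + raw[r][c+1]) / 4.0
    laplacian = raw[r][c] - (raw[r-2][c] + raw[r+2][c]
                             + raw[r][c-2] + raw[r][c+2]) / 4.0
    return bilinear + alpha * laplacian
```

On flat or linearly varying image regions the correction term vanishes and the filter reduces to plain bilinear interpolation; the correction only activates across edges, which is where bilinear demosaicing loses PSNR.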

452 citations


Journal ArticleDOI
TL;DR: The theoretical optimal shift that maximizes the quality of the authors' shifted linear interpolation is nonzero and close to 1/5, and the resulting quality is similar to that of the computationally more costly "high-quality" cubic convolution.
Abstract: We present a simple, original method to improve piecewise-linear interpolation with uniform knots: we shift the sampling knots by a fixed amount, while enforcing the interpolation property. We determine the theoretical optimal shift that maximizes the quality of our shifted linear interpolation. Surprisingly enough, this optimal value is nonzero and close to 1/5. We confirm our theoretical findings by performing several experiments: a cumulative rotation experiment and a zoom experiment. Both show a significant increase of the quality of the shifted method with respect to the standard one. We also observe that, in these results, we get a quality that is similar to that of the computationally more costly "high-quality" cubic convolution.

Journal ArticleDOI
TL;DR: The geodesic atlas creation algorithm is quantitatively compared to the Euclidean anatomical average to elucidate the need for optimized atlases and generate improved average representations of highly variable anatomy from distinct populations.

Journal ArticleDOI
TL;DR: In this article, a wavefield reconstruction scheme for spatially band-limited signals is proposed, in which a wavenumber-domain regularization term is included to constrain the solution to be spatially band-limited and to impose a prior spectral shape.
Abstract: In seismic data processing, we often need to interpolate and extrapolate data at missing spatial locations. The reconstruction problem can be posed as an inverse problem where, from inadequate and incomplete data, we attempt to reconstruct the seismic wavefield at locations where measurements were not acquired. We propose a wavefield reconstruction scheme for spatially band-limited signals. The method entails solving an inverse problem where a wavenumber-domain regularization term is included. The regularization term constrains the solution to be spatially band-limited and imposes a prior spectral shape. The numerical algorithm is quite efficient since the method of conjugate gradients in conjunction with fast matrix-vector multiplications, implemented via the fast Fourier transform (FFT), is adopted. The algorithm can be used to perform multidimensional reconstruction in any spatial domain.
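The paper solves a regularized inverse problem with conjugate gradients and FFTs. As a simplified stand-in for that solver, the following sketch alternates two projections (band-limit the spectrum, then re-impose the known traces), which illustrates the same band-limited reconstruction principle on a toy 1-D signal; the naive DFT here is only for self-containment:

```python
import cmath, math

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N)
                for n in range(N)) for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * n / N)
                for k in range(N)) / N for n in range(N)]

def reconstruct(known, N, band, iters=100):
    # known: {sample index: value}; band: set of wavenumbers allowed
    # to be nonzero. Alternate projection onto the band-limit with
    # re-imposition of the data (a POCS-style stand-in for the
    # paper's regularized conjugate-gradient solver).
    x = [known.get(n, 0.0) for n in range(N)]
    for _ in range(iters):
        X = dft(x)
        X = [Xk if k in band else 0.0 for k, Xk in enumerate(X)]
        x = [v.real for v in idft(X)]
        for n, v in known.items():
            x[n] = v
    return x
```

When the observed samples are consistent with a signal supported on the given wavenumber band, the iteration converges to it; missing traces are filled with the band-limited values.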

Journal ArticleDOI
TL;DR: In this article, a neural network scheme for the construction of a continuous potential energy surface (PES) is presented, and the sticking probability of H2/K(2×2)/Pd(100) is determined by molecular dynamics simulations on the neural network PES and compared to results using an independent analytical interpolation.

Journal ArticleDOI
TL;DR: In this article, the spatial prediction of point values from areal data of the same attribute is addressed within the general geostatistical framework of change of support; the term support refers to the domain informed by each datum or unknown value.
Abstract: The spatial prediction of point values from areal data of the same attribute is addressed within the general geostatistical framework of change of support; the term support refers to the domain informed by each datum or unknown value. It is demonstrated that the proposed geostatistical framework can explicitly and consistently account for the support differences between the available areal data and the sought-after point predictions. In particular, it is proved that appropriate modeling of all area-to-area and area-to-point covariances required by the geostatistical framework yields coherent (mass-preserving or pycnophylactic) predictions. In other words, the areal average (or areal total) of point predictions within any arbitrary area informed by an areal-average (or areal-total) datum is equal to that particular datum. In addition, the proposed geostatistical framework offers the unique advantage of providing a measure of the reliability (standard error) of each point prediction. It is also demonstrated that several existing approaches for area-to-point interpolation can be viewed within this geostatistical framework. More precisely, it is shown that (i) the choropleth map case corresponds to the geostatistical solution under the assumption of spatial independence at the point support level; (ii) several forms of kernel smoothing can be regarded as alternative (albeit sometimes incoherent) implementations of the geostatistical approach; and (iii) Tobler’s smooth pycnophylactic interpolation, on a quasi-infinite domain without non-negativity constraints, corresponds to the geostatistical solution when the semivariogram model adopted at the point support level is identified to the free-space Green’s functions (linear in 1-D or logarithmic in 2-D) of Poisson’s partial differential equation. In lieu of a formal case study, several 1-D examples are given to illustrate pertinent concepts.

Proceedings ArticleDOI
06 Jul 2004
TL;DR: This paper examines some of the implementation issues in rigid body path planning and presents techniques that have been found to be effective experimentally.
Abstract: Important implementation issues in rigid body path planning are often overlooked. In particular, sampling-based motion planning algorithms typically require a distance metric defined on the configuration space, a sampling function, and a method for interpolating sampled points. The configuration space of a 3D rigid body is identified with the Lie group SE(3). Defining proper metrics, sampling, and interpolation techniques for SE(3) is not obvious, and can become a hidden source of failure for many planning algorithm implementations. This paper examines some of these issues and presents techniques which have been found to be effective experimentally for rigid body path planning.
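For the rotation part of SE(3), a common interpolation choice (one option among those such a paper might compare, not necessarily the authors' recommendation) is spherical linear interpolation of unit quaternions, which moves at constant angular velocity along the shortest great-circle arc:

```python
import math

def slerp(q0, q1, t):
    # Spherical linear interpolation between unit quaternions (w, x, y, z).
    dot = sum(a * b for a, b in zip(q0, q1))
    if dot < 0.0:            # q and -q are the same rotation: go the short way
        q1, dot = [-b for b in q1], -dot
    if dot > 0.9995:         # nearly parallel: lerp + renormalize is stabler
        q = [a + t * (b - a) for a, b in zip(q0, q1)]
        n = math.sqrt(sum(v * v for v in q))
        return [v / n for v in q]
    theta = math.acos(dot)
    s0 = math.sin((1 - t) * theta) / math.sin(theta)
    s1 = math.sin(t * theta) / math.sin(theta)
    return [s0 * a + s1 * b for a, b in zip(q0, q1)]
```

Naive per-component interpolation of rotation matrices or Euler angles does not stay on SO(3), which is exactly the kind of hidden failure source the paper warns about.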

Journal ArticleDOI
TL;DR: The quality of DEMs derived from the interpolation of photogrammetrically derived elevation points in Alberta, Canada, is tested and it is revealed that the optimum grid cell size is between 5 and 20 m, depending on terrain complexity and terrain derivative.
Abstract: It is well known that the grid cell size of a raster digital elevation model has significant effects on derived terrain variables such as slope, aspect, plan and profile curvature or the wetness index. In this paper the quality of DEMs derived from the interpolation of photogrammetrically derived elevation points in Alberta, Canada, is tested. DEMs with grid cell sizes ranging from 100 to 5 m were interpolated from 100 m regularly spaced elevation points and numerous surface-specific point elevations using the ANUDEM interpolation method. In order to identify the grid resolution that matches the information content of the source data, three approaches were applied: density analysis of point elevations, an analysis of cumulative frequency distributions using the Kolmogorov-Smirnov test and the root mean square slope measure. Results reveal that the optimum grid cell size is between 5 and 20 m, depending on terrain complexity and terrain derivative. Terrain variables based on 100 m regularly sampled elevation points are compared to an independent high-resolution DEM used as a benchmark. Subsequent correlation analysis reveals that only elevation and local slope have a strong positive relationship while all other terrain derivatives are not represented realistically when derived from a coarse DEM. Calculations of root mean square errors and relative root mean square errors further quantify the quality of terrain derivatives.
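The resolution dependence of slope is easy to see from how slope is computed on a raster: a generic central-difference estimate at an interior cell (a standard textbook formula, not the paper's ANUDEM-specific workflow) divides elevation differences by distances proportional to the cell size, so coarser grids average away local relief:

```python
import math

def slope_degrees(dem, r, c, cell):
    # Central-difference slope (in degrees) at interior cell (r, c)
    # of a square-grid DEM with spacing `cell` in elevation units.
    dzdx = (dem[r][c + 1] - dem[r][c - 1]) / (2.0 * cell)
    dzdy = (dem[r + 1][c] - dem[r - 1][c]) / (2.0 * cell)
    return math.degrees(math.atan(math.hypot(dzdx, dzdy)))
```

A plane rising one elevation unit per unit distance yields 45° regardless of cell size, but on real terrain the finite differences smooth over any relief shorter than about two cells, which is why derived slope weakens as the grid coarsens.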

Book
26 Apr 2004
TL;DR: This book presents a brief review of the CIE system of colorimetry and a short introduction to MATLAB, as well as implementations and examples ranging from device-independent representation to techniques for multispectral imaging.
Abstract: Acknowledgements.
1. Introduction. 1.1 Who this book is for. 1.2 Why base this book on MATLAB? 1.3 A brief review of the CIE system of colorimetry.
2. Linear Algebra for Beginners. 2.1 Some basic definitions. 2.2 Solving systems of simultaneous equations. 2.3 Transposes and inverses. 2.4 Linear and non-linear transforms.
3. A Short Introduction to MATLAB. 3.1 Matrix operations. 3.2 Computing the transpose and inverse of matrices. 3.3 M-files. 3.4 Using functions in MATLAB.
4. Computing CIE Tristimulus Values. 4.1 Introduction. 4.2 Standard colour-matching functions. 4.3 Interpolation methods. 4.4 Extrapolation methods. 4.5 Tables of weights. 4.6 Correction for spectral bandpass. 4.7 Chromaticity diagrams. 4.8 Implementation and examples: 4.8.1 Spectral bandpass correction. 4.8.2 Reflectance interpolation. 4.8.3 Computing tristimulus values. 4.8.4 Plotting the spectral locus.
5. Computing Colour Difference. 5.1 Introduction. 5.2 CIELAB and CIELUV colour space. 5.3 CIELAB colour difference. 5.4 Optimized colour-difference formulae: 5.4.1 CMC(l:c). 5.4.2 CIE94. 5.4.3 CIEDE2000. 5.5 Implementations and examples: 5.5.1 Computing CIELAB and CIELUV coordinates. 5.5.2 Computing colour difference.
6. Chromatic-adaptation Transforms and Colour Appearance. 6.1 Introduction. 6.2 CATs: 6.2.1 CIECAT94. 6.2.2 CMCCAT97. 6.2.3 CMCCAT2000. 6.3 CAMs: 6.3.1 CIECAM97s. 6.3.2 CMCCAM2000. 6.4 Implementations and examples: 6.4.1 CATs. 6.4.2 Computing colour appearance.
7. Characterization of Computer Displays. 7.1 Introduction. 7.2 Gamma. 7.3 The GOG model. 7.4 Device-independent transformation. 7.5 Typical characterization procedure. 7.6 Implementations and examples.
8. Characterization of Cameras. 8.1 Introduction. 8.2 Correction for non-linearity. 8.3 Device-independent representation. 8.4 Implementations and examples.
9. Characterization of Printers. 9.1 Introduction. 9.2 Physical models. 9.3 Neural networks. 9.4 Characterization of half-tone printers: 9.4.1 Correction for non-linearity. 9.4.2 Device-independent representation. 9.4.3 The Kubelka-Munk model. 9.5 Implementations and examples: 9.5.1 Half-tone printer. 9.5.2 Continuous-tone printer.
10. Multispectral Imaging. 10.1 Introduction. 10.2 Computational colour constancy and linear models. 10.3 Surface and illuminant estimation algorithms. 10.4 Techniques for multispectral imaging: 10.4.1 The Hardeberg method. 10.4.2 The Imai and Berns method. 10.4.3 Methods based on maximum smoothness. 10.5 Implementations and examples: 10.5.1 Deriving a set of basis functions. 10.5.2 Representation of reflectance spectra in a linear model. 10.5.3 Estimation of reflectance spectra from tristimulus values. 10.5.4 Estimation of reflectance spectra from camera responses. 10.5.5 Fourier operations on reflectance spectra.
11. Colour Toolbox. 11.1 cband.m (Box 1). 11.2 pinterp.m (Box 2). 11.3 r2xyz.m (Box 3). 11.4 plocus.m (Box 4). 11.5 xyz2lab.m (Box 5). 11.6 lab2xyz.m (Box 6). 11.7 xyz2luv.m (Box 7). 11.8 car2pol.m (Box 8). 11.9 pol2car (Box 9). 11.10 cielabde.m (Box 10). 11.11 dhpolarity (Box 11). 11.12 cmcde.m (Box 12). 11.13 cie94de.m (Box 13). 11.14 cie00de.m (Box 14). 11.15 cmccat97.m (Box 15). 11.16 cmccat00.m (Box 16). 11.17 ciecam97s.m (Box 17). 11.18 gogtest.m (Box 18). 11.19 compgog.m (Box 19). 11.20 rgb2xyz.m (Box 20). 11.21 xyz2rgb.m (Box 21). 11.22 compigog (Box 22). 11.23 getlincam.m (Box 23). 11.24 lincam (Box 24). 11.25 gettrc (Box 25). 11.26 r2xyz (Box 26).
References. Index.

Proceedings Article
01 Dec 2004
TL;DR: An unsupervised algorithm for registering 3D surface scans of an object undergoing significant deformations that can be used for compelling computer graphics tasks such as interpolation between two scans of a non-rigid object and automatic recovery of articulated object models.
Abstract: We present an unsupervised algorithm for registering 3D surface scans of an object undergoing significant deformations. Our algorithm does not need markers, nor does it assume prior knowledge about object shape, the dynamics of its deformation, or scan alignment. The algorithm registers two meshes by optimizing a joint probabilistic model over all point-to-point correspondences between them. This model enforces preservation of local mesh geometry, as well as more global constraints that capture the preservation of geodesic distance between corresponding point pairs. The algorithm applies even when one of the meshes is an incomplete range scan; thus, it can be used to automatically fill in the remaining surfaces for this partial scan, even if those surfaces were previously only seen in a different configuration. We evaluate the algorithm on several real-world datasets, where we demonstrate good results in the presence of significant movement of articulated parts and non-rigid surface deformation. Finally, we show that the output of the algorithm can be used for compelling computer graphics tasks such as interpolation between two scans of a non-rigid object and automatic recovery of articulated object models.

Journal ArticleDOI
TL;DR: About eighty MATLAB functions from plot and sum to svd and cond have been overloaded so that one can work with "chebfun" objects using almost exactly the usual MATLAB syntax.
Abstract: An object-oriented MATLAB system is described for performing numerical linear algebra on continuous functions and operators rather than the usual discrete vectors and matrices. About eighty MATLAB functions from plot and sum to svd and cond have been overloaded so that one can work with our "chebfun" objects using almost exactly the usual MATLAB syntax. All functions live on [-1,1] and are represented by values at sufficiently many Chebyshev points for the polynomial interpolant to be accurate to close to machine precision. Each of our overloaded operations raises questions about the proper generalization of familiar notions to the continuous context and about appropriate methods of interpolation, differentiation, integration, zerofinding, or transforms. Applications in approximation theory and numerical analysis are explored, and possible extensions for more substantial problems of scientific computing are mentioned.
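The representation behind such a system can be sketched in a few lines: sample on Chebyshev points of the second kind and evaluate the interpolant with the barycentric formula, whose weights at these points have a simple closed form, (−1)^k, halved at the two endpoints. A minimal sketch (not the chebfun code itself):

```python
import math

def chebpts(n):
    # Chebyshev points of the second kind on [-1, 1], ascending.
    return [math.cos(math.pi * k / (n - 1)) for k in range(n - 1, -1, -1)]

def cheb_eval(f, n, x):
    # Barycentric evaluation of the degree n-1 interpolant of f on
    # n Chebyshev points, using the closed-form weights.
    xs = chebpts(n)
    w = [(-1.0) ** k * (0.5 if k in (0, n - 1) else 1.0) for k in range(n)]
    num = den = 0.0
    for xk, wk in zip(xs, w):
        if x == xk:
            return f(xk)
        t = wk / (x - xk)
        num += t * f(xk)
        den += t
    return num / den
```

For smooth f the interpolant converges geometrically in n, which is what lets a continuous function be stored as "values at sufficiently many Chebyshev points" to near machine precision.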

Book
01 Jan 2004
TL;DR: Carleson's interpolation theorem Interpolating sequences and the Pick property Interpolation and sampling in Bergman spaces Interpolation in the Bloch space Interpolation, sampling, and Toeplitz operators Interpolation and sampling in Paley-Wiener spaces Bibliography Index.
Abstract: Carleson's interpolation theorem Interpolating sequences and the Pick property Interpolation and sampling in Bergman spaces Interpolation in the Bloch space Interpolation, sampling, and Toeplitz operators Interpolation and sampling in Paley-Wiener spaces Bibliography Index.

Journal ArticleDOI
Jung Woo Hwang1, Hwang Soo Lee1
TL;DR: This letter presents two adaptive interpolation methods based on applying an inverse gradient to conventional bilinear and bicubic interpolation that can be used irrespective of the magnification factor and easily implemented due to their simple structure.
Abstract: This letter presents two adaptive interpolation methods based on applying an inverse gradient to conventional bilinear and bicubic interpolation. In simulations, the proposed methods exhibited a better performance than conventional bilinear and bicubic methods, particularly in the edge regions. In addition, the proposed methods can be used irrespective of the magnification factor (MF) and easily implemented due to their simple structure.

Journal ArticleDOI
TL;DR: This paper proposes an interpolation methodology, whose key idea is based on the interpolation of relations instead of interpolating α-cut distances, and which offers a way to derive a family of interpolation methods capable of eliminating some typical deficiencies of fuzzy rule interpolation techniques.
Abstract: The concept of fuzzy rule interpolation in sparse rule bases was introduced in 1993. It has become a widely researched topic in recent years because of its unique merits in the topic of fuzzy rule base complexity reduction. The first implemented technique of fuzzy rule interpolation was termed α-cut distance based fuzzy rule base interpolation. Despite its advantageous properties in various approximation aspects and in complexity reduction, it was shown that it has some essential deficiencies; for instance, it does not always result in immediately interpretable fuzzy membership functions. This fact inspired researchers to develop various kinds of fuzzy rule interpolation techniques in order to alleviate these deficiencies. This paper is an attempt in this direction. It proposes an interpolation methodology, whose key idea is based on the interpolation of relations instead of interpolating α-cut distances, and which offers a way to derive a family of interpolation methods capable of eliminating some typical deficiencies of fuzzy rule interpolation techniques. The proposed concept of interpolating relations is elaborated here using fuzzy- and semantic-relations. This paper presents numerical examples, in comparison with former approaches, to show the effectiveness of the proposed interpolation methodology.
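To fix ideas, the basic setting is two rules A1→B1, A2→B2 whose antecedents bracket an observation, with the conclusion slid proportionally between B1 and B2. The toy sketch below does this once for triangular fuzzy numbers using only the peaks; the actual α-cut based and relation-based methods operate level-by-level (per α-cut) and avoid exactly the interpretability problems this naive version can exhibit:

```python
def interpolate_rule(A1, B1, A2, B2, A_obs):
    # Triangular fuzzy sets as (left, peak, right) triples.
    # Slide the conclusion linearly according to where the observation's
    # peak sits between the two antecedent peaks (a toy illustration of
    # rule interpolation in a sparse rule base, not the paper's method).
    lam = (A_obs[1] - A1[1]) / (A2[1] - A1[1])
    return tuple((1 - lam) * b1 + lam * b2 for b1, b2 in zip(B1, B2))
```

With antecedent peaks at 0 and 10 and an observation peaked at 5, the conclusion lands halfway between B1 and B2.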

Journal ArticleDOI
01 Aug 2004
TL;DR: An anisotropic kernel mean shift technique is developed to segment the video data into contiguous volumes that provide a simple cartoon style, but more importantly provide the capability to semi-automatically rotoscope semantically meaningful regions.
Abstract: We describe a system for transforming an input video into a highly abstracted, spatio-temporally coherent cartoon animation with a range of styles. To achieve this, we treat video as a space-time volume of image data. We have developed an anisotropic kernel mean shift technique to segment the video data into contiguous volumes. These provide a simple cartoon style in themselves, but more importantly provide the capability to semi-automatically rotoscope semantically meaningful regions. In our system, the user simply outlines objects on keyframes. A mean shift guided interpolation algorithm is then employed to create three dimensional semantic regions by interpolation between the keyframes, while maintaining smooth trajectories along the time dimension. These regions provide the basis for creating smooth two dimensional edge sheets and stroke sheets embedded within the spatio-temporal video volume. The regions, edge sheets, and stroke sheets are rendered by slicing them at particular times. A variety of styles of rendering are shown. The temporal coherence provided by the smoothed semantic regions and sheets results in a temporally consistent non-photorealistic appearance.

Journal ArticleDOI
TL;DR: In this article, a simple methodology to design isotropic triangular shell finite elements based on the Mixed Interpolation of Tensorial Components (MITC) approach is presented; well-established numerical tests are performed to show the performance of the new elements.

Journal ArticleDOI
TL;DR: It is shown that this new algorithm can take advantage of the redundancy provided by multiple microphone sensors to improve TDE against both reverberation and noise and can be treated as a natural generalization of the generalized cross correlation (GCC) TDE method to the multichannel case.
Abstract: Time-delay estimation (TDE), which aims at measuring the relative time difference of arrival (TDOA) between different channels, is a fundamental approach for identifying, localizing, and tracking radiating sources. Recently, there has been a growing interest in the use of TDE based locators for applications such as automatic camera steering in a room conferencing environment, where microphone sensors receive not only the direct-path signal, but also attenuated and delayed replicas of the source signal due to reflections from boundaries and objects in the room. This multipath propagation effect introduces echoes and spectral distortions into the observation signal, termed as reverberation, which severely deteriorates a TDE algorithm in its performance. This paper deals with the TDE problem with emphasis on combating reverberation using multiple microphone sensors. The multichannel cross correlation coefficient (MCCC) is rederived here, in a new way, to connect it to the well-known linear interpolation technique. Some interesting properties and bounds of the MCCC are discussed, and a recursive algorithm is introduced so that the MCCC can be estimated and updated efficiently when new data snapshots are available. We then apply the MCCC to the TDE problem. The resulting new algorithm can be treated as a natural generalization of the generalized cross correlation (GCC) TDE method to the multichannel case. It is shown that this new algorithm can take advantage of the redundancy provided by multiple microphone sensors to improve TDE against both reverberation and noise. Experiments confirm that the relative time-delay estimation accuracy increases with the number of sensors.
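In the two-channel special case without spectral weighting, the GCC idea that the MCCC generalizes reduces to picking the lag that maximizes the cross-correlation between the two microphone signals. A minimal time-domain sketch (a hypothetical helper, not the authors' MCCC algorithm):

```python
def estimate_delay(x, y, max_lag):
    # Return the integer lag (in samples) maximizing the cross-correlation
    # of channels x and y -- the core of cross-correlation-based TDE.
    def xcorr(lag):
        return sum(x[n] * y[n + lag] for n in range(len(x))
                   if 0 <= n + lag < len(y))
    return max(range(-max_lag, max_lag + 1), key=xcorr)
```

Reverberation adds spurious correlation peaks from the delayed replicas, which is why a single pair often fails in rooms and why pooling many sensor pairs, as the MCCC does, improves robustness.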

Proceedings ArticleDOI
05 Dec 2004
TL;DR: This paper presents novel, customized (application-driven) sequential designs based on cross-validation and bootstrapping; Kriging provides 'exact' interpolation of the underlying simulation models and often gives better global predictions than regression analysis.
Abstract: Many simulation experiments require much computer time, so they necessitate interpolation for sensitivity analysis and optimization. The interpolating functions are 'metamodels' (or 'response surfaces') of the underlying simulation models. Classic methods combine low-order polynomial regression analysis with fractional factorial designs. Modern Kriging provides 'exact' interpolation, i.e., predicted output values at inputs already observed equal the simulated output values. Such interpolation is attractive in deterministic simulation, and is often applied in computer aided engineering. In discrete-event simulation, however, Kriging has just started. Methodologically, a Kriging metamodel covers the whole experimental area; i.e., it is global (not local). Kriging often gives better global predictions than regression analysis. Technically, Kriging gives more weight to 'neighboring' observations. To estimate the Kriging metamodel, space filling designs are used; for example, latin hypercube sampling (LHS). This paper also presents novel, customized (application driven) sequential designs based on cross-validation and bootstrapping.
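A space-filling design of the kind mentioned above, Latin hypercube sampling, is simple to generate: split each dimension into n equal-probability strata, draw one point per stratum, and pair the strata across dimensions with random permutations. A minimal sketch (not the paper's customized sequential designs):

```python
import random

def latin_hypercube(n, d, seed=0):
    # n points in [0, 1)^d: each dimension is divided into n equal
    # strata with exactly one sample per stratum; strata are paired
    # across dimensions via independent random permutations.
    rng = random.Random(seed)
    cols = []
    for _ in range(d):
        perm = list(range(n))
        rng.shuffle(perm)
        cols.append([(p + rng.random()) / n for p in perm])
    return [list(point) for point in zip(*cols)]
```

The one-point-per-stratum property guarantees good marginal coverage in every dimension, which is what makes LHS a common default for fitting Kriging metamodels.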

Journal ArticleDOI
TL;DR: In this paper, the norms of the small and grand Lebesgue spaces depend only on the non-decreasing rearrangement of the underlying measure space, and they assume that the original measure space has measure 1.
Abstract: We give the following, equivalent, explicit expressions for the norms of the small and grand Lebesgue spaces, which depend only on the non-decreasing rearrangement (we assume here that the underlying measure space has measure 1):

Journal ArticleDOI
TL;DR: In this paper, a zonal grid algorithm for direct numerical simulation (DNS) of incompressible turbulent flows within a Finite-Volume framework is presented, which uses fully coupled embedded grids and a conservative treatment of the grid-interface variables.

Journal ArticleDOI
TL;DR: In this paper, the authors study the Cauchy problem for the damped wave equation, which has a diffusive structure as t → ∞, and give precise L^p-L^q estimates, which they apply to the semilinear damped wave equation.
Abstract: In this paper we study the Cauchy problem for the linear damped wave equation u_tt − Δu + 2a·u_t = 0 in (0,∞) × ℝ^n (n ≥ 2). It has been asserted that the above equation has a diffusive structure as t → ∞. We give the precise interpolation of the diffusive structure, which is shown by L^p-L^q estimates. We apply these L^p-L^q estimates to the Cauchy problem for the semilinear damped wave equation u_tt − Δu + 2a·u_t = |u|^σ·u in (0,∞) × ℝ^n (2 ≤ n ≤ 5). If the power σ is larger than the critical exponent 2/n (the Fujita critical exponent) and satisfies σ ≤ 2/(n−2) when n ≥ 3, then the global-in-time existence of small solutions is proved, and decay estimates of several norms of the solution are derived.