
Showing papers on "Interpolation published in 2006"


Journal ArticleDOI
TL;DR: A program for calculating the semi-classical transport coefficients is described, based on a smoothed Fourier interpolation of the bands, which in principle should be exact within Boltzmann theory.

3,909 citations


Journal ArticleDOI
TL;DR: This paper describes two digital implementations of a new mathematical transform, namely, the second generation curvelet transform in two and three dimensions: the first is based on unequally spaced fast Fourier transforms, while the second is based on the wrapping of specially selected Fourier samples.
Abstract: This paper describes two digital implementations of a new mathematical transform, namely, the second generation curvelet transform in two and three dimensions. The first digital transformation is based on unequally spaced fast Fourier transforms, while the second is based on the wrapping of specially selected Fourier samples. The two implementations essentially differ by the choice of spatial grid used to translate curvelets at each scale and angle. Both digital transformations return a table of digital curvelet coefficients indexed by a scale parameter, an orientation parameter, and a spatial location parameter. And both implementations are fast in the sense that they run in O(n^2 log n) flops for n by n Cartesian arrays; in addition, they are also invertible, with rapid inversion algorithms of about the same complexity. Our digital transformations improve upon earlier implementations—based upon the first generation of curvelets—in the sense that they are conceptually simpler, faster, and far less redundant. The software CurveLab, which implements both transforms presented in this paper, is available at http://www.curvelet.org.

2,603 citations


Journal ArticleDOI
TL;DR: A new edge-guided nonlinear interpolation technique is proposed through directional filtering and data fusion that can preserve edge sharpness and reduce ringing artifacts in image interpolation algorithms.
Abstract: Preserving edge structures is a challenge to image interpolation algorithms that reconstruct a high-resolution image from a low-resolution counterpart. We propose a new edge-guided nonlinear interpolation technique through directional filtering and data fusion. For a pixel to be interpolated, two observation sets are defined in two orthogonal directions, and each set produces an estimate of the pixel value. These directional estimates, modeled as different noisy measurements of the missing pixel, are fused by the linear minimum mean square-error estimation (LMMSE) technique into a more robust estimate, using the statistics of the two observation sets. We also present a simplified version of the LMMSE-based interpolation algorithm to reduce computational cost without sacrificing much interpolation performance. Experiments show that the new interpolation techniques can preserve edge sharpness and reduce ringing artifacts.

971 citations
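
The LMMSE fusion step is easy to illustrate. Below is a minimal numpy sketch (not the paper's implementation) that fuses two directional estimates of a single missing pixel, with crude variance proxies computed from the two observation sets; all data are hypothetical.

```python
import numpy as np

def lmmse_fuse(est_h, est_v, var_h, var_v):
    """Fuse two directional estimates of a missing pixel by LMMSE.

    Each estimate is treated as a noisy measurement of the true pixel value;
    the fused estimate weights each measurement inversely to its variance.
    """
    w_h = var_v / (var_h + var_v)   # weight on the horizontal estimate
    w_v = var_h / (var_h + var_v)   # weight on the vertical estimate
    return w_h * est_h + w_v * est_v

# Toy example: interpolate the centre pixel of a 3x3 neighbourhood.
patch = np.array([[10., 12., 11.],
                  [50., np.nan, 52.],
                  [ 9., 13., 10.]])
est_h = 0.5 * (patch[1, 0] + patch[1, 2])     # estimate from the row
est_v = 0.5 * (patch[0, 1] + patch[2, 1])     # estimate from the column
var_h = np.var([patch[1, 0], patch[1, 2]])    # crude variance proxies
var_v = np.var([patch[0, 1], patch[2, 1]])
print(lmmse_fuse(est_h, est_v, var_h, var_v))  # pulled toward the lower-variance vertical estimate
```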


Journal ArticleDOI
TL;DR: It is shown that tapering the correct covariance matrix with an appropriate compactly supported positive definite function reduces the computational burden significantly and still leads to an asymptotically optimal mean squared error.
Abstract: Interpolation of a spatially correlated random process is used in many scientific areas. The best unbiased linear predictor, often called a kriging predictor in geostatistical science, requires the solution of a (possibly large) linear system based on the covariance matrix of the observations. In this article, we show that tapering the correct covariance matrix with an appropriate compactly supported positive definite function reduces the computational burden significantly and still leads to an asymptotically optimal mean squared error. The effect of tapering is to create a sparse approximate linear system that can then be solved using sparse matrix algorithms. Monte Carlo simulations support the theoretical results. An application to a large climatological precipitation dataset is presented as a concrete and practical illustration.

757 citations
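
A sketch of the tapering idea, with an assumed exponential covariance, a Wendland-type taper, and synthetic data; the prediction ignores mean and nugget details, so treat it as an illustration of the sparse system rather than the authors' estimator.

```python
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import spsolve
from scipy.spatial.distance import cdist

def wendland_taper(d, theta):
    """Compactly supported Wendland correlation: zero beyond range theta."""
    r = d / theta
    return np.where(r < 1.0, (1.0 - r) ** 4 * (4.0 * r + 1.0), 0.0)

rng = np.random.default_rng(0)
obs_xy = rng.uniform(0, 10, size=(500, 2))             # observation locations
z = np.sin(obs_xy[:, 0]) + 0.1 * rng.standard_normal(500)

d = cdist(obs_xy, obs_xy)
cov = np.exp(-d / 2.0)                                    # assumed exponential covariance
tapered = csc_matrix(cov * wendland_taper(d, theta=1.5))  # sparse after tapering

# Tapered simple-kriging-style prediction at one point x0.
x0 = np.array([[5.0, 5.0]])
d0 = cdist(obs_xy, x0).ravel()
c0 = np.exp(-d0 / 2.0) * wendland_taper(d0, theta=1.5)
weights = spsolve(tapered, c0)                          # sparse solve replaces dense solve
print(weights @ z)
```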


Journal ArticleDOI
TL;DR: A frequency domain technique precisely registers a set of aliased images based on their low-frequency, aliasing-free part; a high-resolution image is then reconstructed using cubic interpolation.
Abstract: Super-resolution algorithms reconstruct a high-resolution image from a set of low-resolution images of a scene. Precise alignment of the input images is an essential part of such algorithms. If the low-resolution images are undersampled and have aliasing artifacts, the performance of standard registration algorithms decreases. We propose a frequency domain technique to precisely register a set of aliased images, based on their low-frequency, aliasing-free part. A high-resolution image is then reconstructed using cubic interpolation. Our algorithm is compared to other algorithms in simulations and practical experiments using real aliased images. Both show very good visual results and demonstrate the attractiveness of our approach for aliased input images. A possible application is to digital cameras where a set of rapidly acquired images can be used to recover a higher-resolution final image.

520 citations
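
The registration step can be sketched with plain FFTs. The paper estimates subpixel shifts from the low-frequency, aliasing-free part of the spectrum; the toy below only recovers integer shifts on a synthetic image and then resamples with cubic interpolation via scipy.ndimage, so it is a simplified stand-in rather than the published algorithm.

```python
import numpy as np
from scipy.ndimage import shift as resample_shift

def phase_correlate(ref, img):
    """Estimate the integer translation of img relative to ref via phase correlation."""
    F = np.fft.fft2(img) * np.conj(np.fft.fft2(ref))
    corr = np.fft.ifft2(F / (np.abs(F) + 1e-12)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Peak indices beyond N/2 correspond to negative shifts.
    return [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]

rng = np.random.default_rng(1)
ref = rng.standard_normal((64, 64))
moved = np.roll(ref, (3, -5), axis=(0, 1))                  # synthetic shifted copy
dy, dx = phase_correlate(ref, moved)
print(dy, dx)                                               # -> 3 -5
aligned = resample_shift(moved, shift=(-dy, -dx), order=3)  # cubic resampling
```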


Journal ArticleDOI
TL;DR: In this article, a new method is developed for handling scattering, using interpolation in the areas affected by first- and second-order Rayleigh and Raman scatter in such a way that the interfering signal is, at best, removed.
Abstract: Fluorescence excitation-emission matrix (EEM) measurements are useful in fields such as food science, analytical chemistry, biochemistry and environmental science. EEMs contain information which can be modeled using the parallel factor analysis (PARAFAC) model but the data analysis is often complicated due to both Rayleigh and Raman scattering. There are several established ways to deal with scattering effects. However, all of these methods have associated problems. This paper develops a new method for handling scattering using interpolation in the areas affected by first- and second-order Rayleigh and Raman scatter in such a way that the interfering signal is, at best, removed. The suggested method is fast and requires no additional input other than specifying the scattering region. The results of the proposed method were compared with those obtained from common alternative approaches used for preprocessing fluorescence data before analysis with PARAFAC and were shown to be equally good for various types of EEM data. The main advantage of the interpolation method is in its lack of additional metaparameters, its algorithmic speed and subsequent speed-up of PARAFAC modeling. It also allows for using EEM data in software not able to handle missing data. Copyright © 2007 John Wiley & Sons, Ltd.

435 citations
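
To give the flavour of the approach, the sketch below removes a first-order Rayleigh ridge from a synthetic EEM by masking emission channels near each excitation wavelength and refilling them by one-dimensional interpolation along the emission axis. The paper also treats second-order Rayleigh and Raman scatter; the width and data here are hypothetical.

```python
import numpy as np

def interpolate_scatter(eem, em_axis, ex_axis, width=10.0):
    """Replace first-order Rayleigh scatter (emission ~= excitation) by interpolation.

    eem is indexed [excitation, emission]; emission channels within `width` nm
    of the scatter line are removed and refilled along the emission axis.
    """
    out = eem.astype(float).copy()
    for i, ex in enumerate(ex_axis):
        mask = np.abs(em_axis - ex) < width          # channels hit by scatter
        if mask.any() and (~mask).sum() >= 2:
            out[i, mask] = np.interp(em_axis[mask], em_axis[~mask], out[i, ~mask])
    return out

# Hypothetical 5-excitation x 50-emission EEM with random fluorescence.
ex_axis = np.linspace(250, 330, 5)
em_axis = np.linspace(250, 450, 50)
eem = np.random.default_rng(2).uniform(0, 1, (5, 50))
clean = interpolate_scatter(eem, em_axis, ex_axis)
```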


Journal ArticleDOI
TL;DR: In this paper, a method for estimating daily rainfall on a 0.05° latitude/longitude grid covering all of New Zealand for the period 1960-2004 using a second order derivative trivariate thin plate smoothing spline spatial interpolation model was presented.
Abstract: This study presents a method for estimating daily rainfall on a 0.05° latitude/longitude grid covering all of New Zealand for the period 1960–2004 using a second order derivative trivariate thin plate smoothing spline spatial interpolation model. Use of a hand-drawn (and subsequently digitised) mean annual rainfall surface as an independent variable in the interpolation is shown to reduce the interpolation error compared with using an elevation surface. This result is confirmed when long-term average annual rainfall data, derived from the daily interpolations, are validated using long-term river flow data. Copyright © 2006 Royal Meteorological Society

372 citations
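
A rough stand-in for the trivariate spline can be built with SciPy's RBFInterpolator, which offers a thin-plate-spline kernel with smoothing; longitude, latitude and a mean-annual-rainfall covariate serve as the three independent variables. All numbers are synthetic and this is not the spline software the study used.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(3)
lonlat = rng.uniform(0, 1, (200, 2))                  # hypothetical gauge locations
mean_annual = 1000 + 2000 * lonlat[:, 1]              # stand-in for the digitised surface
predictors = np.column_stack([lonlat, mean_annual / 1000.0])  # rescale third variable
daily_rain = rng.gamma(2.0, 2.0, 200)                 # synthetic daily totals (mm)

tps = RBFInterpolator(predictors, daily_rain,
                      kernel='thin_plate_spline', smoothing=1.0)

# Predict at a grid point, supplying its mean-annual covariate as well.
print(tps(np.array([[0.5, 0.5, 1.5]])))
```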


Journal ArticleDOI
TL;DR: An interpolation‐based planning and replanning algorithm for generating low‐cost paths through uniform and nonuniform resolution grids that addresses two of the most significant shortcomings of grid‐based path planning: the quality of the paths produced and the memory and computational requirements of planning over grids.
Abstract: We present an interpolation-based planning and replanning algorithm for generating low-cost paths through uniform and nonuniform resolution grids. Most grid-based path planners use discrete state transitions that artificially constrain an agent's motion to a small set of possible headings (e.g., 0, π/4, π/2, etc.). As a result, even “optimal” grid-based planners produce unnatural, suboptimal paths. Our approach uses linear interpolation during planning to calculate accurate path cost estimates for arbitrary positions within each grid cell and produce paths with a range of continuous headings. Consequently, it is particularly well suited to planning low-cost trajectories for mobile robots. In this paper, we introduce a version of the algorithm for uniform resolution grids and a version for nonuniform resolution grids. Together, these approaches address two of the most significant shortcomings of grid-based path planning: the quality of the paths produced and the memory and computational requirements of planning over grids. We demonstrate our approaches on a number of example planning problems, compare them to related algorithms, and present several implementations on real robotic systems.

366 citations
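
The central interpolation step can be shown in isolation: for a unit cell edge between two neighbouring nodes with path costs g1 and g2 and traversal cost c, the planner lets the path exit anywhere along the edge and linearly interpolates the exit cost. The sketch below minimises this by brute force with hypothetical numbers; the published algorithm uses a closed-form solution inside a D*-style search.

```python
import numpy as np

def interpolated_cost(g1, g2, c, samples=1001):
    """Field D*-style interpolated cost across a unit cell edge.

    The path exits at fraction y along the edge between two nodes with path
    costs g1 and g2; the exit cost is linearly interpolated and the distance
    travelled is sqrt(1 + y^2) at traversal cost c.
    """
    y = np.linspace(0.0, 1.0, samples)
    total = c * np.sqrt(1.0 + y ** 2) + (1.0 - y) * g1 + y * g2
    i = np.argmin(total)
    return total[i], y[i]

cost, y_star = interpolated_cost(g1=5.0, g2=4.5, c=1.0)
print(cost, y_star)   # ~5.87 at y~0.58, cheaper than the axis move (6.0) or corner move (~5.91)
```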


Book
01 Jan 2006
TL;DR: 1D PROBLEMS 1D Model Elliptic Problem A Two-Point Boundary Value Problem Algebraic Structure of the Variational Formulation Equivalence with a Minimization Problem Sobolev Space H1(0, l) Well Posedness of the Variational BVP Examples from Mechanics and Physics
Abstract: 1D PROBLEMS: 1D Model Elliptic Problem A Two-Point Boundary Value Problem Algebraic Structure of the Variational Formulation Equivalence with a Minimization Problem Sobolev Space H1(0, l) Well Posedness of the Variational BVP Examples from Mechanics and Physics The Case with "Pure Neumann" BCs Exercises Galerkin Method Finite Dimensional Approximation of the VBVP Elementary Convergence Analysis Comments Exercises 1D hp Finite Element Method 1D hp Discretization Assembling Element Matrices into Global Matrices Computing the Element Matrices Accounting for the Dirichlet BC Summary Assignment 1: A Dry Run Exercises 1D hp Code Setting up the 1D hp Code Fundamentals Graphics Element Routine Assignment 2: Writing Your Own Processor Exercises Mesh Refinements in 1D The h-Extension Operator. Constrained Approximation Coefficients Projection-Based Interpolation in 1D Supporting Mesh Refinements Data-Structure-Supporting Routines Programming Bells and Whistles Interpolation Error Estimates Convergence Assignment 3: Studying Convergence Definition of a Finite Element Exercises Automatic hp Adaptivity in 1D The hp Algorithm Supporting the Optimal Mesh Selection Exponential Convergence. Comparing with h Adaptivity Discussion of the hp Algorithm Algebraic Complexity and Reliability of the Algorithm Exercises Wave Propagation Problems Convergence Analysis for Noncoercive Problems Wave Propagation Problems Asymptotic Optimality of the Galerkin Method Dispersion Error Analysis Exercises
2D ELLIPTIC PROBLEMS: 2D Elliptic Boundary-Value Problem Classical Formulation Variational (Weak) Formulation Algebraic Structure of the Variational Formulation Equivalence with a Minimization Problem Examples from Mechanics and Physics Exercises Sobolev Spaces Sobolev Space H1(O) Sobolev Spaces of an Arbitrary Order Density and Embedding Theorems Trace Theorem Well Posedness of the Variational BVP Exercises 2D hp Finite Element Method on Regular Meshes Quadrilateral Master Element Triangular Master Element Parametric Element Finite Element Space. Construction of Basis Functions Calculation of Element Matrices Modified Element. Imposing Dirichlet Boundary Conditions Postprocessing- Local Access to Element d.o.f Projection-Based Interpolation Exercises 2D hp Code Getting Started Data Structure in FORTRAN 90 Fundamentals The Element Routine Modified Element. Imposing Dirichlet Boundary Conditions Assignment 4: Assembly of Global Matrices The Case with "Pure Neumann" Boundary Conditions Geometric Modeling and Mesh Generation Manifold Representation Construction of Compatible Parametrizations Implicit Parametrization of a Rectangle Input File Preparation Initial Mesh Generation The hp Finite Element Method on h-Refined Meshes Introduction. The h Refinements 1-Irregular Mesh Refinement Algorithm Data Structure in Fortran 90 (Continued) Constrained Approximation for C0 Discretizations Reconstructing Element Nodal Connectivities Determining Neighbors for Midedge Nodes Additional Comments Automatic hp Adaptivity in 2D The Main Idea The 2D hp Algorithm Example: L-Shape Domain Problem Example: 2D "Shock" Problem Additional Remarks Examples of Applications A "Battery Problem" Linear Elasticity An Axisymmetric Maxwell Problem Exercises Exterior Boundary-Value Problems Variational Formulation. Infinite Element Discretization Selection of IE Radial Shape Functions Implementation Calculation of Echo Area Numerical Experiments Comments Exercises
2D MAXWELL PROBLEMS: 2D Maxwell Equations Introduction to Maxwell's Equation Variational Formulation Exercises Edge Elements and the de Rham Diagram Exact Sequences Projection-Based Interpolation De Rham Diagram Shape Functions Exercises 2D Maxwell Code Directories. Data Structure The Element Routine Constrained Approximation. Modified Element Setting up a Maxwell Problem Exercises hp Adaptivity for Maxwell Equations Projection-Based Interpolation Revisited The hp Mesh Optimization Algorithm Example: The Screen Problem Exterior Maxwell Boundary-Value Problems Variational Formulation Infinite Element Discretization in 3D Infinite Element Discretization in 2D Stability Implementation Numerical Experiments Exercises A Quick Summary and Outlook Appendix Bibliography Index

359 citations
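
The book's opening chapters (1D model elliptic problem, Galerkin method) reduce to a few lines in the lowest-order case. The sketch below assembles linear elements for -u'' = 1 on (0, 1) with homogeneous Dirichlet conditions; it is a minimal illustration, far from the hp-adaptive code the text develops.

```python
import numpy as np

n = 16                                # number of linear elements
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)
f = np.ones(n + 1)                    # load f(x) = 1

K = np.zeros((n + 1, n + 1))          # global stiffness matrix
F = np.zeros(n + 1)                   # global load vector
for e in range(n):                    # assemble element contributions
    K[e:e+2, e:e+2] += np.array([[1.0, -1.0], [-1.0, 1.0]]) / h
    F[e:e+2] += h * f[e:e+2] / 2.0    # trapezoidal element load

K[0, :] = K[-1, :] = 0.0              # impose u(0) = u(1) = 0
K[0, 0] = K[-1, -1] = 1.0
F[0] = F[-1] = 0.0

u = np.linalg.solve(K, F)
print(np.max(np.abs(u - 0.5 * x * (1.0 - x))))  # nodal values are exact for this problem
```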


Book
25 Oct 2006
TL;DR: Local Models and Methods: Local models and methods What is local? Spatial Dependence Spatial Scale Stationarity Spatial Data Models Data Sets Used for Illustrative Purposes A Note on Notation Overview Local Modeling Approaches to Local Adaptation Stratification or Segmentation of spatial data Moving Window/Kernel Methods Locally Varying Model Parameters Transforming and Detrending Spatial data as discussed by the authors.
Abstract: Introduction Remit of This Book Local Models and Methods What Is Local? Spatial Dependence Spatial Scale Stationarity Spatial Data Models Data Sets Used for Illustrative Purposes A Note on Notation Overview
Local Modeling Approaches to Local Adaptation Stratification or Segmentation of Spatial Data Moving Window/Kernel Methods Locally Varying Model Parameters Transforming and Detrending Spatial Data Overview
Grid Data Exploring Spatial Variation in Single Variables Global Univariate Statistics Local Univariate Statistics Analysis of Grid Data Moving Windows for Grid Analysis Wavelets Segmentation Analysis of Digital Elevation Models Overview
Spatial Relations Spatial Autocorrelation: Global Measures Spatial Autocorrelation: Local Measures Global Regression Local Regression Regression and Spatial Data Spatial Autoregressive Models Multilevel Modeling Allowing for Local Variation in Model Parameters Moving Window Regression (MWR) Geographically Weighted Regression (GWR) Spatially Weighted Classification Overview
Spatial Prediction 1: Deterministic Methods Point Interpolation Global Methods Local Methods Areal Interpolation General Approaches: Overlay Local Models and Local Data Limitations: Point and Areal Interpolation Overview
Spatial Prediction 2: Geostatistics Random Function Models Stationarity Global Models Exploring Spatial Variation Kriging Equivalence of Splines and Kriging Conditional Simulation The Change of Support Problem Other Approaches Local Approaches: Nonstationary Models Nonstationary Mean Nonstationary Models for Prediction Nonstationary Variogram Variograms in Texture Analysis Summary
Point Patterns Point Patterns Visual Examination of Point Patterns Density and Distance Methods Statistical Tests of Point Patterns Global Methods Distance Methods Other Issues Local Methods Density Methods Accounting for the Population at Risk The Local K Function Point Patterns and Detection of Clusters Overview
Summary: Local Models for Spatial Analysis Review Key Issues Software Future Developments Summary References Index

354 citations
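
Of the deterministic local point-interpolation methods the book covers, inverse distance weighting with a moving neighbourhood is the simplest to sketch; the toy below uses synthetic observations and the k nearest neighbours at each prediction point.

```python
import numpy as np

def idw(obs_xy, obs_z, query_xy, power=2.0, k=8):
    """Inverse distance weighting restricted to a local neighbourhood of k points."""
    preds = []
    for q in np.atleast_2d(query_xy):
        d = np.hypot(*(obs_xy - q).T)
        idx = np.argsort(d)[:k]                  # moving-window neighbourhood
        if d[idx[0]] == 0.0:                     # exact hit on an observation
            preds.append(obs_z[idx[0]])
            continue
        w = 1.0 / d[idx] ** power
        preds.append(np.sum(w * obs_z[idx]) / np.sum(w))
    return np.array(preds)

rng = np.random.default_rng(4)
obs_xy = rng.uniform(0, 10, (100, 2))
obs_z = np.sin(obs_xy[:, 0]) + np.cos(obs_xy[:, 1])
print(idw(obs_xy, obs_z, np.array([[5.0, 5.0]])))
```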


Journal ArticleDOI
Ray Abma, Nurul Kabir
TL;DR: A projection onto convex sets (POCS) algorithm, extending the Gerchberg-Saxton image restoration method, interpolates irregularly populated grids of seismic data with a simple iterative method that produces high-quality results.
Abstract: Seismic surveys generally have irregular areas where data cannot be acquired. These data should often be interpolated. A projection onto convex sets (POCS) algorithm using Fourier transforms allows interpolation of irregularly populated grids of seismic data with a simple iterative method that produces high-quality results. The original 2D image restoration method, the Gerchberg-Saxton algorithm, is extended easily to higher dimensions, and the 3D version of the process used here produces much better interpolations than typical 2D methods. The only parameter that makes a substantial difference in the results is the number of iterations used, and this number can be overestimated without degrading the quality of the results. This simplicity is a significant advantage because it relieves the user of extensive parameter testing. Although the cost of the algorithm is several times the cost of typical 2D methods, the method is easily parallelized and still completely practical.
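
The iteration is simple enough to sketch directly: transform, keep the strong Fourier coefficients, transform back, and reinsert the known samples, with a threshold that decays over the iterations. The 2D toy below uses a synthetic plane wave with randomly discarded samples; details such as the linear threshold schedule are assumptions here, and production use would work trace-wise and in 3D as the paper describes.

```python
import numpy as np

def pocs_interpolate(data, known_mask, n_iter=50):
    """POCS-style Fourier interpolation of irregularly sampled 2D data."""
    est = np.where(known_mask, data, 0.0)
    for it in range(n_iter):
        spec = np.fft.fft2(est)
        thresh = np.abs(spec).max() * (1.0 - (it + 1) / n_iter)  # decaying threshold
        spec[np.abs(spec) < thresh] = 0.0                        # keep strong coefficients
        est = np.fft.ifft2(spec).real
        est[known_mask] = data[known_mask]                       # project onto known data
    return est

# Synthetic section: a plane wave with roughly 40% of samples discarded.
x, t = np.meshgrid(np.arange(64), np.arange(64), indexing='ij')
section = np.sin(0.2 * t + 0.3 * x)
known = np.random.default_rng(5).uniform(size=section.shape) > 0.4
filled = pocs_interpolate(section, known)
print(np.abs(filled - section).mean())
```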

Journal ArticleDOI
TL;DR: The general applicability of random walk particle tracking (RWPT) in comparison to the standard transport models is discussed, and it is concluded that for advection-dominated problems using a high spatial discretization or requiring many model runs, RWPT represents a good alternative for modelling contaminant transport.

Journal ArticleDOI
TL;DR: A high-precision CMOS time-to-digital converter IC has been designed based on a counter and two-level interpolation realized with stabilized delay lines that reduces the number of delay elements and registers and lowers the power consumption.
Abstract: A high-precision CMOS time-to-digital converter IC has been designed. Time interval measurement is based on a counter and two-level interpolation realized with stabilized delay lines. Reference recycling in the delay line improves the integral nonlinearity of the interpolator and enables the use of a low frequency reference clock. Multi-level interpolation reduces the number of delay elements and registers and lowers the power consumption. The load capacitor scaled parallel structure in the delay line permits very high resolution. An INL look-up table reduces the effect of the remaining nonlinearity. The digitizer measures time intervals from 0 to 204 μs with 8.1 ps rms single-shot precision. The resolution of 12.2 ps is derived from a 5-MHz external reference clock by means of only 20 delay elements.

Journal ArticleDOI
TL;DR: In this paper, the sensitivity of hedonic models of house prices to the spatial interpolation of measures of air quality was investigated, using a sample of 115,732 individual house sales for 1999 in the South Coast Air Quality Management District of Southern California.
Abstract: This paper investigates the sensitivity of hedonic models of house prices to the spatial interpolation of measures of air quality. We consider three aspects of this question: the interpolation technique used, the inclusion of air quality as a continuous vs discrete variable in the model, and the estimation method. Using a sample of 115,732 individual house sales for 1999 in the South Coast Air Quality Management District of Southern California, we compare Thiessen polygons, inverse distance weighting, Kriging and splines to carry out spatial interpolation of point measures of ozone obtained at 27 air quality monitoring stations to the locations of the houses. We take a spatial econometric perspective and employ both maximum-likelihood and general method of moments techniques in the estimation of the hedonic. A high degree of residual spatial autocorrelation warrants the inclusion of a spatially lagged dependent variable in the regression model. We find significant differences across interpolators...

Journal ArticleDOI
TL;DR: This contribution explains why and how kernels are applied in these disciplines and uncovers the links between them, in so far as they are related to kernel techniques.
Abstract: Kernels are valuable tools in various fields of Numerical Analysis, including approximation, interpolation, meshless methods for solving partial differential equations, neural networks, and Machine Learning. This contribution explains why and how kernels are applied in these disciplines. It uncovers the links between them, as far as they are related to kernel techniques. It addresses non-expert readers and focuses on practical guidelines for using kernels in applications.
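
The basic recipe the tutorial builds on: solve the linear system K c = f on the data sites, then evaluate s(x) = sum_j c_j k(x, x_j) anywhere. A one-dimensional Gaussian-kernel sketch with hypothetical data and no regularisation:

```python
import numpy as np

def gaussian_kernel(x, y, sigma=0.5):
    """Gaussian kernel k(x, y) = exp(-(x - y)^2 / (2 sigma^2))."""
    return np.exp(-np.subtract.outer(x, y) ** 2 / (2.0 * sigma ** 2))

x_data = np.linspace(-1.0, 1.0, 11)
f_data = np.abs(x_data)                           # target function |x|
c = np.linalg.solve(gaussian_kernel(x_data, x_data), f_data)

x_eval = np.linspace(-1.0, 1.0, 201)
s = gaussian_kernel(x_eval, x_data) @ c           # kernel interpolant
print(np.max(np.abs(s[::20] - f_data)))           # ~0 at the data sites
```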

Journal ArticleDOI
TL;DR: If group findings are the primary objective, as is typical for cognitive ERP research, low-resolution CSD topographies may be as efficient, given the effective spatial smoothing when averaging across subjects and/or conditions.

Journal ArticleDOI
TL;DR: In this article, an anisotropic plane stress yield function based on interpolation by second order Bezier curves is proposed, which can be used to describe, e.g., the yield stress and R-value as a function of the loading direction more accurately than with other common analytical yield functions.

Journal ArticleDOI
01 Jul 2006
TL;DR: A new paradigm is proposed that allows one to incorporate physical jump conditions in data "on the fly," which is significantly more efficient for multiple regions especially at triple points or near boundaries with solids.
Abstract: The particle level set method has proven successful for the simulation of two separate regions (such as water and air, or fuel and products). In this paper, we propose a novel approach to extend this method to the simulation of as many regions as desired. The various regions can be liquids (or gases) of any type with differing viscosities, densities, viscoelastic properties, etc. We also propose techniques for simulating interactions between materials, whether it be simple surface tension forces or more complex chemical reactions with one material converting to another or two materials combining to form a third. We use a separate particle level set method for each region, and propose a novel projection algorithm that decodes the resulting vector of level set values providing a "dictionary" that translates between them and the standard single-valued level set representation. An additional difficulty occurs since discretization stencils (for interpolation, tracing semi-Lagrangian rays, etc.) cross region boundaries, naively combining non-smooth or even discontinuous data. This has recently been addressed via ghost values, e.g. for fire or bubbles. We instead propose a new paradigm that allows one to incorporate physical jump conditions in data "on the fly," which is significantly more efficient for multiple regions especially at triple points or near boundaries with solids.

Journal ArticleDOI
TL;DR: In this article, the authors consider the problem of p-adically interpolating the systems of eigenvalues attached to automorphic Hecke eigenforms (as well as corresponding Galois representations, in situations where these appear in the étale cohomology of Shimura varieties).
Abstract: The goal of this paper is to illustrate how the techniques of locally analytic p-adic representation theory (as developed in [28, 29, 30, 31] and [13, 14, 17]; see also [16] for a short summary of some of these results) may be applied to study arithmetic properties of automorphic representations. More specifically, we consider the problem of p-adically interpolating the systems of eigenvalues attached to automorphic Hecke eigenforms (as well as the corresponding Galois representations, in situations where these appear in the étale cohomology of Shimura varieties). We can summarize our approach to the problem as follows: rather than attempting to directly interpolate the systems of eigenvalues attached to eigenforms, we instead attempt to interpolate the automorphic representations that these eigenforms give rise to. To be more precise, we fix a connected reductive linear algebraic group G defined over a number field F , and a finite prime p of F . We let Fp denote the completion of F at p, let E be a finite extension of Fp over which the group G splits, let A denote the ring of adèles of F , and let Af denote the ring of finite adèles of F . The representations that we construct are admissible locally analytic representations of the group G(Af ) on certain locally convex topological E-vector spaces. These representations are typically not irreducible; rather, they contain as closed subrepresentations many locally algebraic representations of G(Af ) which are closely related to automorphic representations of G(A) of cohomological type. (It is for this reason that we regard the representations that we construct as forming an “interpolation” of those automorphic representations.) Once we have our locally analytic representations of G(Af ) in hand, we may apply to them the Jacquet module functors of [14]. In this way we obtain p-adic analytic families of systems of Hecke eigenvalues, which (under a suitable hypothesis, for which see the statement of Theorem 0.7 below) p-adically interpolate (in the

Journal ArticleDOI
TL;DR: In this paper, a sample of Sun-like stars with accurate effective temperatures, metallicities and colours (from the ultraviolet to the near-infrared) was compiled, and they fit the colours as a function of effective temperature and metallicity, and derive colour estimates for the Sun in the Johnson-Cousins, Tycho, Stromgren, 2MASS and SDSS photometric systems.
Abstract: We compile a sample of Sun-like stars with accurate effective temperatures, metallicities and colours (from the ultraviolet to the near-infrared). A crucial improvement is that the effective temperature scale of the stars has recently been established as both accurate and precise through direct measurement of angular diameters obtained with stellar interferometers. We fit the colours as a function of effective temperature and metallicity, and derive colour estimates for the Sun in the Johnson-Cousins, Tycho, Stromgren, 2MASS and SDSS photometric systems. For (B-V)_⊙, we favour the ‘red’ colour 0.64 versus the ‘blue’ colour 0.62 of other recent papers, but both values are consistent within the errors; we ascribe the difference to the selection of Sun-like stars versus interpolation of wider colour-T_eff-metallicity relations.

Journal ArticleDOI
TL;DR: A fast high accuracy Polar FFT is developed based on the pseudo-Polar domain, an FFT where the evaluation frequencies lie in an oversampled set of nonangularly equispaced points, including fast forward and inverse transforms.

Journal ArticleDOI
TL;DR: In this paper, a wide selection of the interpolation algorithms that are in use in financial markets for construction of curves such as forward curves, basis curves, and most importantly, yield curves is surveyed.
Abstract: This paper surveys a wide selection of the interpolation algorithms that are in use in financial markets for construction of curves such as forward curves, basis curves, and most importantly, yield curves. In the case of yield curves the issue of bootstrapping is reviewed and how the interpolation algorithm should be intimately connected to the bootstrap itself is discussed. The criterion for inclusion in this survey is that the method has been implemented by a software vendor (or indeed an inhouse developer) as a viable option for yield curve interpolation. As will be seen, many of these methods suffer from problems: they posit unreasonable expectations, or are not even necessarily arbitrage free. Moreover, many methods lead one to derive hedging strategies that are not intuitively reasonable. In the last sections, two new interpolation methods (the monotone convex method and the minimal method) are introduced, which it is believed overcome many of the problems highlighted with the other methods discussed ...
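
For context, the simplest scheme that avoids the arbitrage problems the survey highlights is "raw" interpolation: linear in the logarithm of the discount factors, which is equivalent to piecewise-constant forward rates. A sketch with hypothetical curve data (the monotone convex method itself is considerably more involved):

```python
import numpy as np

def log_linear_discount(t_nodes, df_nodes, t):
    """'Raw' interpolation: linear in log discount factors,
    i.e. piecewise-constant instantaneous forward rates."""
    return np.exp(np.interp(t, t_nodes, np.log(df_nodes)))

t_nodes = np.array([0.25, 0.5, 1.0, 2.0, 5.0, 10.0])        # hypothetical curve
zero_rates = np.array([0.045, 0.047, 0.050, 0.052, 0.055, 0.057])
df_nodes = np.exp(-zero_rates * t_nodes)

t = np.linspace(0.25, 10.0, 40)
df = log_linear_discount(t_nodes, df_nodes, t)
fwd = -np.diff(np.log(df)) / np.diff(t)                      # constant between nodes
print(df[0], fwd[:3])
```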

Journal ArticleDOI
TL;DR: This work interpolated tracking data from albatrosses, penguins, boobies, sea lions, fur seals and elephant seals using six mathematical algorithms, choosing Bézier, Hermite and cubic splines, in addition to a commonly used linear algorithm to interpolate data.
Abstract: Interpolation of geolocation or Argos tracking data is a necessity for habitat use analyses of marine vertebrates. In a fluid marine environment, characterized by curvilinear structures, linearly interpolated track data are not realistic. Based on these two facts, we interpolated tracking data from albatrosses, penguins, boobies, sea lions, fur seals and elephant seals using six mathematical algorithms. Given their popularity in mathematical computing, we chose Bézier, Hermite and cubic splines, in addition to a commonly used linear algorithm to interpolate data. Performance of interpolation methods was compared with different temporal resolutions representative of the less-precise geolocation and the more-precise Argos tracking techniques. Parameters from interpolated sub-sampled tracks were compared with those obtained from intact tracks. Average accuracy of the interpolated location was not affected by the interpolation method and was always within the precision of the tracking technique used. However, depending on the species tested, some curvilinear interpolation algorithms produced greater occurrences of more accurate locations, compared with the linear interpolation method. Total track lengths were consistently underestimated but were always more accurate using curvilinear interpolation than linear interpolation. Curvilinear algorithms are safe to use because accuracy, shape and length of the tracks are either not different or are slightly enhanced and because analyses always remain conservative. The choice of the curvilinear algorithm does not affect the resulting track dramatically so it should not preclude their use. We thus recommend using curvilinear interpolation techniques because of the more realistic fluid movements of animals. We also provide some guidelines for choosing an algorithm that is most likely to maximize track quality for different types of marine vertebrates.
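
A minimal version of the spline approach: fit one cubic spline per coordinate against time and resample the track at regular intervals. The fixes below are hypothetical, and great-circle distances are replaced by a flat-earth approximation for brevity.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical Argos-style fixes: time (h) and lon/lat positions (deg).
t_fix = np.array([0.0, 1.2, 2.1, 3.5, 4.0, 5.3])
lon = np.array([10.0, 10.4, 11.1, 11.5, 11.6, 12.3])
lat = np.array([-40.0, -40.2, -40.1, -39.8, -39.7, -39.9])

track = CubicSpline(t_fix, np.column_stack([lon, lat]))  # one spline per coordinate
t_regular = np.linspace(t_fix[0], t_fix[-1], 50)
positions = track(t_regular)                             # shape (50, 2)

def path_len(p):                                         # flat-earth track length
    return np.sum(np.hypot(*np.diff(p, axis=0).T))

# Curvilinear tracks are slightly longer than straight-line segments.
print(path_len(positions), path_len(np.column_stack([lon, lat])))
```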

Journal ArticleDOI
TL;DR: In this article, the analysis and improvement of an immersed boundary method (IBM) for simulating turbulent flows over complex geometries are presented; direct forcing is employed, interpolating boundary conditions from the solid body to the Cartesian mesh on which the computation is performed.
Abstract: The analysis and improvement of an immersed boundary method (IBM) for simulating turbulent flows over complex geometries are presented. Direct forcing is employed. It consists in interpolating boundary conditions from the solid body to the Cartesian mesh on which the computation is performed. Lagrange and least squares high-order interpolations are considered. The direct forcing IBM is implemented in an incompressible finite volume Navier–Stokes solver for direct numerical simulations (DNS) and large eddy simulations (LES) on staggered grids. An algorithm to identify the body and construct the interpolation schemes for arbitrarily complex geometries consisting of triangular elements is presented. A matrix stability analysis of both interpolation schemes demonstrates the superiority of least squares interpolation over Lagrange interpolation in terms of stability. Preservation of time and space accuracy of the original solver is proven with the laminar two-dimensional Taylor–Couette flow. Finally, practicability of the method for simulating complex flows is demonstrated with the computation of the fully turbulent three-dimensional flow in an air-conditioning exhaust pipe. Copyright © 2006 John Wiley & Sons, Ltd.
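
The least-squares reconstruction that the stability analysis favours can be illustrated with a linear polynomial basis: fit {1, x, y} to nearby fluid-node velocities, then evaluate at the forcing point. The stencil and data below are hypothetical, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(6)
fluid_xy = rng.uniform(0, 1, (6, 2))               # surrounding fluid nodes
u_fluid = 2.0 * fluid_xy[:, 0] - fluid_xy[:, 1] + 0.3   # sampled linear velocity field

A = np.column_stack([np.ones(len(fluid_xy)), fluid_xy])  # basis {1, x, y}
coef, *_ = np.linalg.lstsq(A, u_fluid, rcond=None)       # least-squares fit

xf = np.array([0.5, 0.5])                          # forcing point location
print(coef @ np.array([1.0, *xf]))                 # recovers 2*0.5 - 0.5 + 0.3 = 0.8
```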

Journal ArticleDOI
TL;DR: A high-order particle-in-cell (PIC) algorithm for the simulation of kinetic plasmas dynamics combining high- order accuracy with geometric flexibility and algorithms in the Lagrangian framework that preserve the favorable properties of the field solver in the PIC solver are introduced.

Journal ArticleDOI
Richard Szeliski
01 Jul 2006
TL;DR: This approach removes the need to heuristically adjust the optimal number of preconditioning levels, significantly outperforms previously proposed approaches, and also maps cleanly onto data-parallel architectures such as modern GPUs.
Abstract: This paper develops locally adapted hierarchical basis functions for effectively preconditioning large optimization problems that arise in computer graphics applications such as tone mapping, gradient-domain blending, colorization, and scattered data interpolation. By looking at the local structure of the coefficient matrix and performing a recursive set of variable eliminations, combined with a simplification of the resulting coarse level problems, we obtain bases better suited for problems with inhomogeneous (spatially varying) data, smoothness, and boundary constraints. Our approach removes the need to heuristically adjust the optimal number of preconditioning levels, significantly outperforms previously proposed approaches, and also maps cleanly onto data-parallel architectures such as modern GPUs.

Proceedings ArticleDOI
01 Oct 2006
TL;DR: An interpolation-based planning and replanning algorithm that is able to produce direct, low-cost paths through three-dimensional environments and presents a number of results demonstrating its advantages and real-time capabilities.
Abstract: We present an interpolation-based planning and replanning algorithm that is able to produce direct, low-cost paths through three-dimensional environments. Our algorithm builds upon recent advances in 2D grid-based path planning and extends these techniques to 3D grids. It is often the case for robots navigating in full three-dimensional environments that moving in some directions is significantly more difficult than others (e.g. moving upwards is more expensive for most aerial vehicles). Thus, we also provide a facility to incorporate such characteristics into the planning process. Along with the derivation of the 3D interpolation function used by our planner, we present a number of results demonstrating its advantages and real-time capabilities.

Journal ArticleDOI
TL;DR: Two novel error concealment techniques are proposed for video transmission over noisy channels: a spatial method to compensate for a lost macroblock in intra-coded frames, in which no useful temporal information is available, and a dynamic mode-weighted error concealment method for replenishing missing pixels in a lost macroblock of inter-coded frames.
Abstract: Two novel error concealment techniques are proposed for video transmission over noisy channels in this work. First, we present a spatial error concealment method to compensate for a lost macroblock in intra-coded frames, in which no useful temporal information is available. Based on selective directional interpolation, our method can recover both smooth and edge areas efficiently. Second, we examine a dynamic mode-weighted error concealment method for replenishing missing pixels in a lost macroblock of inter-coded frames. Our method adopts a decoder-based error tracking model and combines several concealment modes adaptively to minimize the mean square error of each pixel. The method is capable of concealing lost packets as well as reducing the error propagation effect. Extensive simulations have been performed to demonstrate the performance of the proposed methods in error-prone environments.
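
For the intra-frame case, the common baseline that selective directional interpolation improves on is distance-weighted spatial interpolation from the four received block boundaries; the sketch below conceals a synthetic lost 8x8 block and is a hypothetical illustration, not the proposed method.

```python
import numpy as np

def conceal_block(frame, r0, c0, size=8):
    """Fill a lost block: each pixel is the inverse-distance-weighted average
    of the four nearest correctly received boundary pixels."""
    top, bottom = frame[r0 - 1, c0:c0 + size], frame[r0 + size, c0:c0 + size]
    left, right = frame[r0:r0 + size, c0 - 1], frame[r0:r0 + size, c0 + size]
    out = frame.copy()
    for i in range(size):
        for j in range(size):
            dist = np.array([i + 1, size - i, j + 1, size - j], dtype=float)
            vals = np.array([top[j], bottom[j], left[i], right[i]])
            out[r0 + i, c0 + j] = np.sum(vals / dist) / np.sum(1.0 / dist)
    return out

truth = np.tile(np.linspace(0, 255, 32), (32, 1))  # smooth synthetic frame
frame = truth.copy()
frame[12:20, 12:20] = 0                            # simulate a lost macroblock
restored = conceal_block(frame, 12, 12)
print(np.abs(restored[12:20, 12:20] - truth[12:20, 12:20]).max())
```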

Journal ArticleDOI
TL;DR: Analysis is used to identify two sources of kinetic energy conservation error in the collocated-mesh scheme: errors arising from the interpolations used to estimate the velocity on the cell faces, and errors associated with the slightly inconsistent pressure field used to ensure mass conservation for the cell face volume fluxes.

Journal ArticleDOI
TL;DR: This work proposes two algorithms for the problem of obtaining a single high-resolution image from multiple noisy, blurred, and undersampled images: one based on a Bayesian formulation implemented via the expectation maximization algorithm, and one based on a maximum a posteriori formulation.
Abstract: Using a stochastic framework, we propose two algorithms for the problem of obtaining a single high-resolution image from multiple noisy, blurred, and undersampled images. The first is based on a Bayesian formulation that is implemented via the expectation maximization algorithm. The second is based on a maximum a posteriori formulation. In both of our formulations, the registration, noise, and image statistics are treated as unknown parameters. These unknown parameters and the high-resolution image are estimated jointly based on the available observations. We present an efficient implementation of these algorithms in the frequency domain that allows their application to large images. Simulations are presented that test and compare the proposed algorithms.