
Showing papers on "Computation" published in 2006


Journal ArticleDOI
TL;DR: A versatile resource program was developed for diffusion tensor image (DTI) computation and fiber tracking, based on the Fiber Assignment by Continuous Tracking (FACT) algorithm and a brute-force reconstruction approach.
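
The released tool itself is not shown here; as a rough illustration of the FACT idea (follow the principal diffusion direction from a seed until anisotropy or turning-angle thresholds are violated), here is a schematic Python sketch. The array layout, thresholds, and the fixed-step integration are our assumptions, not the paper's implementation.

```python
import numpy as np

def track_fiber(V, FA, seed, step=0.5, fa_thresh=0.2, angle_thresh=70.0,
                max_steps=2000):
    """Schematic FACT-style streamline tracker (illustrative only).

    V  : (X, Y, Z, 3) array of principal eigenvectors (unit length)
    FA : (X, Y, Z) array of fractional anisotropy values
    seed : starting position in voxel coordinates
    """
    cos_limit = np.cos(np.radians(angle_thresh))
    path = [np.asarray(seed, dtype=float)]
    prev_dir = None
    for _ in range(max_steps):
        ijk = tuple(np.round(path[-1]).astype(int))
        if min(ijk) < 0 or any(i >= s for i, s in zip(ijk, FA.shape)):
            break                      # left the volume
        if FA[ijk] < fa_thresh:
            break                      # anisotropy too low to trust the direction
        d = V[ijk].astype(float)
        if prev_dir is not None:
            if np.dot(d, prev_dir) < 0:
                d = -d                 # eigenvectors have arbitrary sign
            if np.dot(d, prev_dir) < cos_limit:
                break                  # turning angle exceeded
        path.append(path[-1] + step * d)
        prev_dir = d
    return np.array(path)
```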

987 citations


Journal ArticleDOI
TL;DR: In this paper, a triple-grid inversion technique based on unstructured tetrahedral meshes and finite-element forward calculation is presented for the determination of resistivity structures associated with arbitrary surface topography.
Abstract: SUMMARY We present a novel technique for the determination of resistivity structures associated with arbitrary surface topography. The approach represents a triple-grid inversion technique that is based on unstructured tetrahedral meshes and finite-element forward calculation. The three grids are characterized as follows: A relatively coarse parameter grid defines the elements whose resistivities are to be determined. On the secondary field grid the forward calculations in each inversion step are carried out using a secondary potential (SP) approach. The primary fields are provided by a one-time simulation on the highly refined primary field grid at the beginning of the inversion process. We use a Gauss‐Newton method with inexact line search to fit the data within error bounds. A global regularization scheme using special smoothness constraints is applied. The regularization parameter compromising data misfit and model roughness is determined by an L-curve method and finally evaluated by the discrepancy principle. To solve the inverse subproblem efficiently, a least-squares solver is presented. We apply our technique to synthetic data from a burial mound to demonstrate its effectiveness. A resolution-dependent parametrization helps to keep the inverse problem small to cope with memory limitations of today’s standard PCs. Furthermore, the SP calculation reduces the computation time significantly. This is a crucial issue since the forward calculation is generally very time consuming. Thus, the approach can be applied to large-scale 3-D problems as encountered in practice, which is finally proved on field data. As a by-product of the primary potential calculation we obtain a quantification of the topography effect and the corresponding geometric factors. The latter are used for calculation of apparent resistivities to prevent the reconstruction process from topography induced artefacts.
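
For orientation, the regularized Gauss-Newton iteration described above can be written out explicitly; the notation below is ours, not the paper's (f is the finite-element forward operator, D weights the data by their errors, C encodes the smoothness constraints, and λ is the regularization parameter chosen by the L-curve method):

```latex
% Tikhonov-style objective minimized in the inversion (assumed notation):
\Phi(\mathbf{m}) = \left\| \mathbf{D}\left(\mathbf{d} - f(\mathbf{m})\right) \right\|_2^2
                 + \lambda \left\| \mathbf{C}\,\mathbf{m} \right\|_2^2
% Gauss--Newton update with Jacobian J of f at the current model m_k:
\left( \mathbf{J}^{\top}\mathbf{D}^{\top}\mathbf{D}\,\mathbf{J}
     + \lambda\,\mathbf{C}^{\top}\mathbf{C} \right) \Delta\mathbf{m}
 = \mathbf{J}^{\top}\mathbf{D}^{\top}\mathbf{D}\left(\mathbf{d} - f(\mathbf{m}_k)\right)
 - \lambda\,\mathbf{C}^{\top}\mathbf{C}\,\mathbf{m}_k
```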

571 citations


Proceedings ArticleDOI
25 Jun 2006
TL;DR: This result proves efficient reinforcement learning is possible without learning a model of the MDP from experience, and Delayed Q-learning's per-experience computation cost is much less than that of previous PAC algorithms.
Abstract: For a Markov Decision Process with finite state (size S) and action spaces (size A per state), we propose a new algorithm---Delayed Q-Learning. We prove it is PAC, achieving near optimal performance except for O(SA) timesteps using O(SA) space, improving on the O(S²A) bounds of the best previous algorithms. This result proves efficient reinforcement learning is possible without learning a model of the MDP from experience. Learning takes place from a single continuous thread of experience---no resets or parallel sampling are used. Beyond its smaller storage and experience requirements, Delayed Q-learning's per-experience computation cost is much less than that of previous PAC algorithms.
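
A minimal sketch of the delayed update rule follows. The optimistic initialization, the batch size m, and the success test match the paper's description, but the LEARN-flag bookkeeping that controls when attempted updates may be retried is deliberately omitted, so this is an illustration rather than the full PAC algorithm.

```python
import numpy as np
from collections import defaultdict

class DelayedQLearner:
    """Simplified sketch of Delayed Q-learning (omits LEARN-flag bookkeeping)."""

    def __init__(self, n_states, n_actions, gamma=0.95, m=10, eps1=0.01):
        self.gamma, self.m, self.eps1 = gamma, m, eps1
        # Optimistic initialization to 1/(1 - gamma), an upper bound on value
        self.Q = np.full((n_states, n_actions), 1.0 / (1.0 - gamma))
        self.acc = defaultdict(float)   # accumulated update targets per (s, a)
        self.count = defaultdict(int)   # samples gathered since last attempt

    def act(self, s):
        return int(np.argmax(self.Q[s]))

    def update(self, s, a, r, s_next):
        # Accumulate the usual Q-learning target instead of applying it at once
        self.acc[(s, a)] += r + self.gamma * self.Q[s_next].max()
        self.count[(s, a)] += 1
        if self.count[(s, a)] == self.m:
            target = self.acc[(s, a)] / self.m + self.eps1
            # An attempted update succeeds only if it lowers Q(s,a) noticeably;
            # this is what keeps the number of updates (and the PAC bound) small
            if self.Q[s, a] - target >= 2 * self.eps1:
                self.Q[s, a] = target
            self.acc[(s, a)] = 0.0
            self.count[(s, a)] = 0
```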

474 citations


Journal ArticleDOI
TL;DR: Prior specification for such multiple tests, computation of key posterior quantities, and useful ways to display these quantities are studied.

364 citations


Journal ArticleDOI
TL;DR: The space–time finite element techniques developed for the computation of fluid–structure interaction (FSI) problems are described, and it is demonstrated that these techniques have increased the scope and accuracy of FSI computations.

299 citations


Book ChapterDOI
18 Sep 2006
TL;DR: A novel sampling-based approximation technique for classical multidimensional scaling yields an extremely fast layout algorithm suitable even for very large graphs, and is among the fastest methods available.
Abstract: We present a novel sampling-based approximation technique for classical multidimensional scaling that yields an extremely fast layout algorithm suitable even for very large graphs. It produces layouts that compare favorably with other methods for drawing large graphs, and it is among the fastest methods available. In addition, our approach allows for progressive computation, i.e. a rough approximation of the layout can be produced even faster, and then be refined until satisfaction.
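
As a sketch of how such a sampling-based approximation to classical MDS can work: distances from all nodes to a small set of sampled "pivot" nodes are double-centered and projected via an SVD. The function name and the exact centering below follow common pivot-MDS formulations and are our assumptions, not necessarily the paper's exact method.

```python
import numpy as np

def pivot_mds(D, dim=2):
    """Sampling-based approximation to classical MDS (illustrative sketch).

    D : (n, k) matrix of graph-theoretic distances from all n nodes
        to k sampled pivot nodes (k << n, k >= dim).
    Returns an (n, dim) layout."""
    C = -0.5 * (D ** 2)
    # Double-center the rectangular matrix, as in classical scaling
    C = C - C.mean(axis=0, keepdims=True) \
          - C.mean(axis=1, keepdims=True) + C.mean()
    # Left singular vectors of C approximate the classical MDS eigenvectors
    U, S, _ = np.linalg.svd(C, full_matrices=False)
    return U[:, :dim] * S[:dim]
```

The n-by-k distance matrix can be gathered with one BFS per pivot, which is what makes the layout fast for very large graphs.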

241 citations


Journal ArticleDOI
TL;DR: Methods to determine the permanent and transient deformation induced by earthquakes or similar sources are introduced, including an optional link to Okada's analytical solutions in the special case of a homogeneous half-space.

226 citations


BookDOI
01 Jan 2006
TL;DR: In this article, a taxonomy of techniques for designing parameterized algorithms is presented, along with an exact algorithm for the Minimum Dominating Clique Problem and enumeration-based exact algorithms for Edge Dominating Set.
Abstract: Table of contents:
- Applying Modular Decomposition to Parameterized Bicluster Editing
- The Cluster Editing Problem: Implementations and Experiments
- The Parameterized Complexity of Maximality and Minimality Problems
- Parameterizing MAX SNP Problems Above Guaranteed Values
- Randomized Approximations of Parameterized Counting Problems
- Fixed-Parameter Complexity of Minimum Profile Problems
- On the OBDD Size for Graphs of Bounded Tree- and Clique-Width
- Greedy Localization and Color-Coding: Improved Matching and Packing Algorithms
- Fixed-Parameter Approximation: Conceptual Framework and Approximability Results
- On Parameterized Approximability
- Parameterized Approximation Problems
- An Exact Algorithm for the Minimum Dominating Clique Problem
- Edge Dominating Set: Efficient Enumeration-Based Exact Algorithms
- Parameterized Complexity of Independence and Domination on Geometric Graphs
- Fixed Parameter Tractability of Independent Set in Segment Intersection Graphs
- On the Parameterized Complexity of d-Dimensional Point Set Pattern Matching
- Finding a Minimum Feedback Vertex Set in Time O(1.7548^n)
- The Undirected Feedback Vertex Set Problem Has a Poly(k) Kernel
- Fixed-Parameter Tractability Results for Full-Degree Spanning Tree and Its Dual
- On the Effective Enumerability of NP Problems
- The Parameterized Complexity of Enumerating Frequent Itemsets
- Random Separation: A New Method for Solving Fixed-Cardinality Optimization Problems
- Towards a Taxonomy of Techniques for Designing Parameterized Algorithms
- Kernels: Annotated, Proper and Induced
- The Lost Continent of Polynomial Time: Preprocessing and Kernelization
- FPT at Work: Using Fixed Parameter Tractability to Solve Larger Instances of Hard Problems

201 citations


Journal ArticleDOI
Wei Yu
TL;DR: A numerical algorithm based on a Lagrangian dual decomposition technique, using a modified iterative water-filling approach for the Gaussian multiple-access channel, converges to the sum capacity globally and efficiently.
Abstract: A numerical algorithm for the computation of sum capacity for the Gaussian vector broadcast channel is proposed. The sum capacity computation relies on a duality between the Gaussian vector broadcast channel and the sum-power constrained Gaussian multiple-access channel. The numerical algorithm is based on a Lagrangian dual decomposition technique and it uses a modified iterative water-filling approach for the Gaussian multiple-access channel. The algorithm converges to the sum capacity globally and efficiently.
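
Iterative water-filling repeatedly solves classical single-user water-filling subproblems. A minimal sketch of that subroutine is below; it is not the paper's full dual decomposition, and the names and bisection tolerance are ours.

```python
import numpy as np

def water_filling(gains, total_power, tol=1e-9):
    """Allocate total_power over parallel channels with gains g_i,
    maximizing sum log(1 + g_i p_i): p_i = max(0, mu - 1/g_i).
    The water level mu is found by bisection on the power constraint."""
    g = np.asarray(gains, dtype=float)
    lo, hi = 0.0, total_power + (1.0 / g).max()   # hi guarantees excess power
    while hi - lo > tol:
        mu = 0.5 * (lo + hi)
        p = np.maximum(0.0, mu - 1.0 / g)
        if p.sum() > total_power:
            hi = mu
        else:
            lo = mu
    return np.maximum(0.0, 0.5 * (lo + hi) - 1.0 / g)
```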

165 citations


Journal ArticleDOI
TL;DR: The performance of the two algorithms, correlation optimized warping and semi-parametric time warping, is equally good, judging by the improvement in the precision of peak retention times and in the correlation coefficients between the chromatograms after alignment.

165 citations


Journal ArticleDOI
TL;DR: In this paper, displacement coefficients and profiles are presented as promising kernel condition and damage indices along with real-life examples, and the level of variation and the uncertainty that may be expected when displacement coefficients are extracted from real civil infrastructure systems are also presented.
Abstract: Displacement coefficients and profiles are presented as promising kernel condition and damage indices along with real-life examples. It is shown that dynamic tests, which do not require stationary reference measurement locations, can also be used to generate data for the computation of modal flexibility. Modal flexibility can then be employed to obtain the displacement profiles. It is also shown that the modal flexibility can be obtained from the frequency response function measurements of the structures. Problems such as environmental effects on measured data and limitations such as incomplete dynamic measurements and spatial and temporal truncation effects are commonly faced in damage detection and condition assessment of real structures. Possible approaches to mitigate these obstacles are discussed. The level of variation and the uncertainty that may be expected when displacement coefficients are extracted from real civil infrastructure systems are also presented. The methods are demonstrated on two real-life bridges and the findings are validated by independent test results.
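
The modal flexibility referred to above is conventionally assembled from identified modal parameters; a standard textbook form is sketched below in our notation, not necessarily the paper's exact estimator. The 1/ω² weighting is why a handful of measured low-frequency modes usually suffices.

```latex
% Modal flexibility approximated from the first m identified modes
% (mass-normalized mode shapes phi_i, natural frequencies omega_i):
\mathbf{F} \;\approx\; \sum_{i=1}^{m}
    \frac{\boldsymbol{\phi}_i\,\boldsymbol{\phi}_i^{\top}}{\omega_i^{2}}
% Displacement profile under a chosen load pattern p:
\mathbf{u} \;=\; \mathbf{F}\,\mathbf{p}
```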

Journal ArticleDOI
TL;DR: In this paper, the authors developed several numerical algorithms for the computation of invariant manifolds in quasi-periodically forced systems, such as invariant tori and asymptotic invariant manifold (whiskers).
Abstract: In this paper we develop several numerical algorithms for the computation of invariant manifolds in quasi-periodically forced systems. The invariant manifolds we consider are invariant tori and the asymptotic invariant manifolds (whiskers) to these tori. The algorithms are based on the parameterization method described in [36], where some rigorous results are proved. In this paper, we concentrate on the numerical issues of the algorithms. Examples of implementations appear in the companion paper [34]. The algorithms for invariant tori are based essentially on a Newton method, but take advantage of dynamical properties of the torus, such as hyperbolicity or reducibility, as well as geometric properties. The algorithms for whiskers are based on power-matching expansions of the parameterizations. Whiskers include as particular cases the usual (strong) stable and (strong) unstable manifolds, and also, in some cases, the slow manifolds which dominate the asymptotic behavior of solutions converging to the torus.


Journal ArticleDOI
TL;DR: In this paper, the authors investigated the problem of nonlocal computation, where separated parties must compute a function with nonlocally encoded inputs and outputs, such that each party individually learns nothing, yet together they compute the correct function output.
Abstract: We investigate the problem of "nonlocal" computation, in which separated parties must compute a function with nonlocally encoded inputs and output, such that each party individually learns nothing, yet together they compute the correct function output. We show that the best that can be done classically is a trivial linear approximation. Surprisingly, we also show that quantum entanglement provides no advantage over the classical case. On the other hand, generalized (i.e. super-quantum) nonlocal correlations allow perfect nonlocal computation. This gives new insights into the nature of quantum nonlocality and its relationship to generalized nonlocal correlations.

Journal ArticleDOI
TL;DR: An online, probabilistic model is introduced to provide an efficient, self‐supervised learning method that accurately predicts traversal costs over large areas from overhead data and can significantly improve the versatility of many unmanned ground vehicles by allowing them to traverse highly varied terrains with increased performance.
Abstract: In mobile robotics, there are often features that, while potentially powerful for improving navigation, prove difficult to profit from as they generalize poorly to novel situations. Overhead imagery data, for instance, have the potential to greatly enhance autonomous robot navigation in complex outdoor environments. In practice, reliable and effective automated interpretation of imagery from diverse terrain, environmental conditions, and sensor varieties proves challenging. Similarly, fixed techniques that successfully interpret on-board sensor data across many environments begin to fail past short ranges, as the density and accuracy necessary for such computation quickly degrade and the features that can be computed from distant data are very domain specific. We introduce an online, probabilistic model to effectively learn to use these scope-limited features by leveraging other features that, while perhaps otherwise more limited, generalize reliably. We apply our approach to provide an efficient, self-supervised learning method that accurately predicts traversal costs over large areas from overhead data. We present results from field testing on-board a robot operating over large distances in various off-road environments. Additionally, we show how our algorithm can be used offline with overhead data to produce a priori traversal cost maps and detect misalignments between overhead data and estimated vehicle positions. This approach can significantly improve the versatility of many unmanned ground vehicles by allowing them to traverse highly varied terrains with increased performance.

Journal ArticleDOI
TL;DR: In this paper, the ground state energy of the nonrelativistic two-electron atom is reported and the most rapid convergence is found with a combination of negative powers and a logarithm of the coordinate s = r1 + r2.
Abstract: Extensive variational computations are reported for the ground state energy of the nonrelativistic two-electron atom. Several different sets of basis functions were systematically explored, starting with the original scheme of Hylleraas. The most rapid convergence is found with a combination of negative powers and a logarithm of the coordinate s = r1 + r2. At N = 3091 terms we pass the previous best calculation (Korobov's 25 decimal accuracy with N = 5200 terms) and we stop at N = 10257 with E = -2.90372 43770 34119 59831 11592 45194 40444… Previous mathematical analysis sought to link the convergence rate of such calculations to specific analytic properties of the functions involved. The application of that theory to this new experimental data leaves a rather frustrating situation, where we seem able to do little more than invoke vague concepts, such as "flexibility." We conclude that theoretical understanding here lags well behind the power of available computing machinery.
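
For context, the Hylleraas scheme that this search starts from expands the wavefunction in the coordinates below; the logarithmic extension mentioned in the abstract multiplies such terms by powers of ln s and admits negative powers of s. This is a sketch of the standard form, not the paper's exact basis.

```latex
% Hylleraas coordinates for the two-electron atom:
s = r_1 + r_2, \qquad t = r_1 - r_2, \qquad u = r_{12}
% Hylleraas-type variational expansion (k is a nonlinear scale parameter):
\Psi(s, t, u) \;=\; e^{-k s} \sum_{l,m,n} c_{lmn}\, s^{l}\, t^{m}\, u^{n}
```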

Journal ArticleDOI
TL;DR: In this paper, a coarse-fine search method based on an affine transform, together with a new fine-search technique called the "nested fine search method", is proposed for image correlation analysis.

Journal ArticleDOI
TL;DR: In this article, a new implementation of the Desroziers and Ivanov algorithm, including a new computation scheme for the required traces, is proposed and compared to Girard's method in two respects: its use in the implementation of the tuning algorithm, and the computation of a quantification of the observation impacts on the analysis known as Degrees of Freedom for Signal.
Abstract: Desroziers and Ivanov proposed a method to tune error variances used for data assimilation. The implementation of this algorithm implies the computation of the trace of certain matrices which are not explicitly known. A method proposed by Girard, allowing an approximate estimation of the traces without explicit knowledge of the matrices, was then used. This paper proposes a new implementation of the Desroziers and Ivanov algorithm, including a new computation scheme for the required traces. This method is compared to Girard's in two aspects: its use in the implementation of the tuning algorithm, and the computation of a quantification of the observation impacts on the analysis known as Degrees of Freedom for Signal. These results are illustrated by studies utilizing the French data assimilation/numerical weather-prediction system ARPEGE. The impact of a first quasi-operational tuning of variances on forecasts is shown and discussed.
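
The trace estimation referred to above is of the randomized, matrix-free kind: the matrix is never formed, only applied to probe vectors. A minimal Hutchinson-style sketch is shown below (Girard's original estimator used Gaussian probes; names and sample count are ours).

```python
import numpy as np

def estimated_trace(matvec, n, n_samples=50, rng=None):
    """Estimate tr(A) for an implicitly defined n x n matrix A,
    available only through matvec(v) = A @ v:
    tr(A) ~= mean of z^T A z over random +-1 probe vectors z."""
    rng = rng or np.random.default_rng()
    total = 0.0
    for _ in range(n_samples):
        z = rng.choice([-1.0, 1.0], size=n)   # Rademacher probe vector
        total += z @ matvec(z)
    return total / n_samples

# Usage with an explicit matrix, for checking: estimated_trace(lambda v: A @ v, n)
```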

Proceedings ArticleDOI
Shumeet Baluja, Michele Covell
01 Jan 2006
TL;DR: Waveprint uses a combination of computer-vision techniques and large-scale-data-stream processing algorithms to create compact fingerprints of audio data that can be efficiently matched, and explicitly measures the tradeoffs between performance, memory usage, and computation.
Abstract: In this paper, we introduce Waveprint, a novel method for audio identification. Waveprint uses a combination of computer-vision techniques and large-scale-data-stream processing algorithms to create compact fingerprints of audio data that can be efficiently matched. The resulting system has excellent identification capabilities for small snippets of audio that have been degraded in a variety of manners, including competing noise, poor recording quality, and cell-phone playback. We explicitly measure the tradeoffs between performance, memory usage, and computation through extensive experimentation.

Journal ArticleDOI
TL;DR: The Uintah Computational Framework, a set of software components and libraries that facilitate the simulation of partial differential equations on structured adaptive mesh refinement grids using hundreds to thousands of processors, is described.

Journal ArticleDOI
TL;DR: A novel approach called BORDER is described, which employs a state-of-the-art database technique, the Gorder kNN join, and makes use of a special property of the reverse k nearest neighbor (RkNN) to detect boundary points in multidimensional data sets.
Abstract: This work addresses the problem of finding boundary points in multidimensional data sets. Boundary points are data points that are located at the margin of densely distributed data such as a cluster. We describe a novel approach called BORDER (a BOundaRy points DEtectoR) to detect such points. BORDER employs a state-of-the-art database technique, the Gorder kNN join, and makes use of a special property of the reverse k nearest neighbor (RkNN). Experimental studies on data sets with varying characteristics indicate that BORDER is able to detect the boundary points effectively and efficiently.
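
The RkNN property being exploited is that points on the margin of a dense region appear in few other points' kNN lists. The sketch below illustrates that idea with brute-force kNN standing in for the Gorder kNN join; the parameters and cutoff are our assumptions.

```python
import numpy as np

def boundary_points(X, k=10, n_boundary=50):
    """BORDER-style sketch: flag the points with the fewest reverse
    k-nearest-neighbors as boundary points. X : (n, d) array."""
    n = len(X)
    # Pairwise squared distances (fine for small n; the paper scales this up)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)
    knn = np.argsort(d2, axis=1)[:, :k]            # each row: indices of k NNs
    rknn_count = np.bincount(knn.ravel(), minlength=n)
    return np.argsort(rknn_count)[:n_boundary]     # lowest RkNN counts
```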

Proceedings Article
13 Jul 2006
TL;DR: This paper elucidates the effects of actions from causal assumptions represented as a directed graph and statistical knowledge given as a probability distribution, and provides a necessary and sufficient graphical condition for the cases where post-action distributions can be uniquely computed from the available information, as well as an algorithm that performs this computation whenever the condition holds.
Abstract: The subject of this paper is the elucidation of effects of actions from causal assumptions represented as a directed graph, and statistical knowledge given as a probability distribution. In particular, we are interested in predicting distributions on post-action outcomes given a set of measurements. We provide a necessary and sufficient graphical condition for the cases where such distributions can be uniquely computed from the available information, as well as an algorithm which performs this computation whenever the condition holds. Furthermore, we use our results to prove completeness of do-calculus [Pearl, 1995] for the same identification problem, and show applications to sequential decision making.
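
For intuition, a classical identifiable special case that the paper's graphical condition and algorithm generalize is Pearl's back-door adjustment:

```latex
% If a set Z of observed variables satisfies the back-door criterion
% relative to (X, Y), the post-action distribution is identified as
P(y \mid do(x)) \;=\; \sum_{z} P(y \mid x, z)\, P(z)
```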

Journal ArticleDOI
TL;DR: In this paper, the authors evaluate the stabilization and shock-capturing parameters introduced recently for the Streamline-Upwind/Petrov-Galerkin (SUPG) formulation of compressible flows based on conservation variables.
Abstract: Numerical experiments with inviscid supersonic flows around cylinders and spheres are carried out to evaluate the stabilization and shock-capturing parameters introduced recently for the Streamline–Upwind/Petrov–Galerkin (SUPG) formulation of compressible flows based on conservation variables. The tests with the cylinders are carried out for both structured and unstructured meshes. The new shock-capturing parameters, which we call "YZβ Shock-Capturing", are compared to earlier SUPG parameters derived based on the entropy variables. In addition to being much simpler, the new shock-capturing parameters yield better shock quality in the test computations, with more substantial improvements seen for unstructured meshes with triangular and tetrahedral elements. Furthermore, the results obtained with YZβ Shock-Capturing compare very favorably to those obtained with the well-established OVERFLOW code.

Journal ArticleDOI
TL;DR: In this article, the authors investigate two imaging methods to detect buried scatterers from electromagnetic measurements at a fixed frequency, one is the classical linear sampling method that requires the computation of Green's tensor for the background medium.
Abstract: We investigate two imaging methods to detect buried scatterers from electromagnetic measurements at a fixed frequency. The first one is the classical linear sampling method, which requires the computation of Green's tensor for the background medium. This computation can be numerically very costly for complex background geometries. The second one is an alternative approach based on the reciprocity gap functional that avoids the computation of Green's tensor but requires knowledge of both the electric and magnetic fields. Numerical examples are given showing the performance of both methods.

01 Jan 2006
TL;DR: The Prefix-sum algorithm is one of the most important building blocks for data-parallel computation; its applications include parallel implementations of deleting marked elements from an array, radix sort, solving recurrence equations, solving tridiagonal linear systems, and quicksort.
Abstract: The Prefix-sum algorithm [Hillis and Steele Jr 1986] is one of the most important building blocks for data-parallel computation. Its applications include parallel implementations of deleting marked elements from an array (stream compaction), radix sort, solving recurrence equations, solving tridiagonal linear systems, and quicksort. In addition to being a useful building block, the prefix-sum algorithm is a good example of a computation that seems inherently sequential, but for which there are efficient data-parallel algorithms.
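
Since the Hillis and Steele scan is the cited building block, here it is expressed in numpy, with each pass standing in for one parallel step on data-parallel hardware (a sketch for exposition, not production GPU code):

```python
import numpy as np

def inclusive_scan(a):
    """Hillis-Steele inclusive prefix sum: ceil(log2 n) passes,
    O(n log n) total work; each pass is fully parallel."""
    a = np.asarray(a).copy()
    d = 1
    while d < len(a):
        a[d:] = a[d:] + a[:-d]   # every element adds its neighbor d to the left
        d *= 2
    return a

# inclusive_scan([3, 1, 7, 0, 4]) -> [3, 4, 11, 11, 15]
```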

01 Jan 2006
TL;DR: Recursive filters have traditionally been viewed as problematic for GPUs, but using the well-established method of cyclic reduction of tridiagonal systems, this work is able to vectorize the computation and achieve interactive frame rates.
Michael Kass, Aaron Lefohn, John D. Owens
Abstract: Accurate computation of depth-of-field effects in computer graphics rendering is generally very time consuming, creating a problematic workflow for film authoring. The computation is particularly challenging because it depends on large-scale spatially-varying filtering that must accurately respect complex boundaries. A variety of real-time algorithms have been proposed for games, but the compromises required to achieve the necessary frame rates have made them unsuitable for film. Here we introduce an approximate depth-of-field computation that is good enough for film preview, yet can be computed interactively on a GPU. The computation creates depth-of-field blurs by simulating the heat equation for a nonuniform medium. Our alternating direction implicit solution gives rise to separable spatially varying recursive filters that can compute large-kernel convolutions in constant time per pixel while respecting the boundaries between in-focus and out-of-focus objects. Recursive filters have traditionally been viewed as problematic for GPUs, but using the well-established method of cyclic reduction of tridiagonal systems, we are able to vectorize the computation and achieve interactive frame rates.
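
The tridiagonal solves at the heart of such an ADI scheme can be illustrated with the sequential Thomas algorithm below; the paper instead vectorizes the same solve with cyclic reduction to suit the GPU. Variable naming is ours.

```python
import numpy as np

def solve_tridiagonal(a, b, c, d):
    """Thomas algorithm for a tridiagonal system (sequential O(n)):
    a = sub-diagonal (len n, a[0] unused), b = diagonal (len n),
    c = super-diagonal (len n, c[-1] unused), d = right-hand side."""
    n = len(b)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                      # forward elimination
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```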

Journal ArticleDOI
TL;DR: A new T-characteristic vector ζ is proposed to compute strict minimal siphons (SMS) for S3PR (systems of simple sequential processes with resources) in an algebraic fashion and it is discovered that elementary siphons can be constructed from elementary circuits where all places are resources.
Abstract: When designing liveness-enforcing Petri net supervisors, unlike other techniques, Li et al. added control places and arcs to a plant net model for its elementary siphons only, greatly reducing the structural complexity of the controlled system. Their method, however, suffers from the expensive computation of siphons. We propose a new T-characteristic vector ζ to compute strict minimal siphons (SMS) for S3PR (systems of simple sequential processes with resources) in an algebraic fashion. For a special subclass of S3PR, called S4PR (simple S3PR), we discover that elementary siphons can be constructed from elementary circuits where all places are resources. Thus, the set of elementary siphons can be computed without the knowledge of all SMS. We also propose to construct characteristic T-vectors η by building a graph to find dependent siphons without computing them.

Patent
30 Jun 2006
TL;DR: A meeting locator system as mentioned in this paper enables users to receive locative data from one another by displaying the location of each user upon a visual map on a mobile phone of at least one user.
Abstract: A meeting locator system enables users, each having a mobile phone equipped with locative sensing capabilities, to receive locative data from one another. The meeting locator system displays the location of each user upon a visual map on a mobile phone of at least one user. The visual map is automatically scaled to simultaneously display the location of each user. The meeting locator system computes midpoint location (geometric or geographic) between the users and displays the midpoint over the visual map as an approximate location where the users are likely to meet. The midpoint location can be updated and further adjusted based upon an estimated travel time for each user to reach the midpoint. The estimated travel time is computed based upon a current speed of each user, a recent average speed of each user, a computation of path lengths between each user, and/or other travel conditions and is displayed.
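
As a hedged illustration of the midpoint computation described above (the patent covers both geometric and geographic midpoints; this sketch shows one standard geographic version, with names ours):

```python
import numpy as np

def geographic_midpoint(latlon_deg):
    """Geographic midpoint of several users' positions: average the
    positions as 3-D unit vectors on the sphere, then convert back.
    latlon_deg : iterable of (lat, lon) pairs in degrees."""
    lat, lon = np.radians(np.asarray(latlon_deg)).T
    xyz = np.stack([np.cos(lat) * np.cos(lon),
                    np.cos(lat) * np.sin(lon),
                    np.sin(lat)], axis=1).mean(axis=0)
    x, y, z = xyz / np.linalg.norm(xyz)
    lat_mid = np.degrees(np.arctan2(z, np.hypot(x, y)))
    lon_mid = np.degrees(np.arctan2(y, x))
    return lat_mid, lon_mid
```

The patent's further adjustment, shifting this point so that estimated travel times are balanced across users, is not shown here.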

Book
11 Dec 2006
TL;DR: In this paper, the authors describe the theory and use of random processes to describe the properties of atomic, polymeric and colloidal systems in terms of the dynamics of the particles in the system.
Abstract: The book is concerned with the description of aspects of the theory and use of so-called random processes to describe the properties of atomic, polymeric and colloidal systems in terms of the dynamics of the particles in the system. It provides derivations of the basic equations, the development of numerical schemes to solve them on computers and gives illustrations of application to typical systems. Extensive appendices are given to enable the reader to carry out computations to illustrate many of the points made in the main body of the book.
- Starts from fundamental equations
- Gives up-to-date illustration of the application of these techniques to typical systems of interest
- Contains extensive appendices including derivations, equations to be used in practice and elementary computer codes
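
In the spirit of the elementary computer codes such appendices typically provide, here is a minimal Brownian-dynamics (overdamped Langevin) integration step; the parameters and the harmonic-trap example are our assumptions, not taken from the book.

```python
import numpy as np

def brownian_step(x, force, D, kT, dt, rng):
    """One Euler-Maruyama step of overdamped Langevin (Brownian) dynamics:
    dx = (D/kT) F(x) dt + sqrt(2 D dt) * N(0, 1).
    x : (N, 3) particle positions; force : callable returning (N, 3)."""
    drift = (D / kT) * force(x) * dt
    noise = np.sqrt(2.0 * D * dt) * rng.standard_normal(x.shape)
    return x + drift + noise

# Example: 100 particles in a harmonic trap, F = -k x with k = 1
rng = np.random.default_rng(0)
x = rng.standard_normal((100, 3))
for _ in range(1000):
    x = brownian_step(x, lambda q: -1.0 * q, D=1.0, kT=1.0, dt=1e-3, rng=rng)
```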

Proceedings ArticleDOI
21 Oct 2006
TL;DR: A first definition of general secure computation is put forward that, without any trusted set-up, handles an arbitrary number of concurrent executions and is implementable based on standard complexity assumptions.
Abstract: We put forward a first definition of general secure computation that, without any trusted set-up, (1) handles an arbitrary number of concurrent executions, and (2) is implementable based on standard complexity assumptions. In contrast to previous definitions of secure computation, ours is not simulation-based.