
Showing papers on "Computation published in 2005"


Journal ArticleDOI
03 Mar 2005-Nature
TL;DR: This work reports a simple architecture for fault-tolerant quantum computing, providing evidence that accurate quantum computing is possible for error probabilities per gate (EPGs) as high as three per cent, and shows that non-trivial quantum computations at EPGs as high as one per cent could be implemented.
Abstract: In theory, quantum computers offer a means of solving problems that would be intractable on conventional computers. Assuming that a quantum computer could be constructed, it would in practice be required to function with noisy devices called 'gates'. These gates cause decoherence of the fragile quantum states that are central to the computer's operation. The goal of so-called 'fault-tolerant quantum computing' is therefore to compute accurately even when the error probability per gate (EPG) is high. Here we report a simple architecture for fault-tolerant quantum computing, providing evidence that accurate quantum computing is possible for EPGs as high as three per cent. Such EPGs have been experimentally demonstrated, but to avoid excessive resource overheads required by the necessary architecture, lower EPGs are needed. Assuming the availability of quantum resources comparable to the digital resources available in today's computers, we show that non-trivial quantum computations at EPGs of as high as one per cent could be implemented.

1,030 citations



Journal ArticleDOI
TL;DR: Building on the polynomial-profile approximations of Subramanian et al., the authors introduce a two-parameter model of solid-phase diffusion in Li-ion cells; their approach can be used to increase the model's accuracy by adding parameters instead of empirical terms.
Abstract: …of thermal behavior of Li-ion cells. However, they did not enumerate when their models fail. The two-parameter model introduced in this paper yields results similar to the parabolic profile model described by Wang and co-workers. It is also noted that the approach developed in this paper can be used to increase the accuracy of the model by adding parameters, instead of using empirical terms. The performance of the new model is found to be valid even at short times. In this paper, efficient approximations are developed for the microscale diffusion, which reduce the microscale diffusion PDE to two or three differential algebraic equations. These approximations are developed by assuming that the solid-state concentration inside the spherical particle can be expressed as a polynomial in the spatial direction. Subramanian et al. developed approximate solutions for solid-phase diffusion based on polynomial profile approximations for constant pore-wall flux at the surface of the particle. However, these models cannot be used for battery modeling directly because the pore-wall flux at the surface of the particle changes both as a function of time and of distance across the porous electrode. In this paper, approximations are developed for the microscale diffusion for time-dependent pore-wall flux. These approximations are then tested against the exact numerical solution of particle diffusion for various defined functions in time for the pore-wall flux. Next, these approximations are used with the macroscale model to predict the electrochemical behavior of a Li-ion cell sandwich. The approximations developed reduce the computation time for simulation without compromising accuracy.

314 citations
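To make the reduction described above concrete, the sketch below integrates the classic two-parameter polynomial (parabolic-profile) approximation of solid-phase diffusion in a spherical particle under a time-varying pore-wall flux. The ODE/algebraic pair is the standard textbook form of that approximation; the particle radius, diffusivity, initial concentration, and flux profile are arbitrary illustrative values, not parameters from the paper.

```python
# Two-parameter polynomial approximation of spherical solid-phase diffusion:
# the microscale PDE  dc/dt = (D/r^2) d/dr (r^2 dc/dr)  with pore-wall flux j(t)
# is replaced by one ODE for the volume-averaged concentration plus one
# algebraic relation for the surface concentration.
import numpy as np
from scipy.integrate import solve_ivp

R, D, c0 = 5e-6, 1e-14, 25_000.0         # particle radius [m], diffusivity [m^2/s], initial conc. [mol/m^3]
j = lambda t: 1e-6 * (1 + 0.5 * np.sin(2 * np.pi * t / 200))   # hypothetical time-dependent flux [mol/m^2/s]

sol = solve_ivp(lambda t, c: [-3.0 * j(t) / R], (0.0, 1000.0), [c0], dense_output=True)

t = np.linspace(0.0, 1000.0, 201)
c_avg = sol.sol(t)[0]                     # volume-averaged concentration
c_surf = c_avg - j(t) * R / (5.0 * D)     # surface concentration fed to the macroscale cell model
print(c_surf[0], c_surf[-1])
```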


01 Dec 2005
TL;DR: In this paper, space-time finite element techniques were developed for computation of fluid-structure interaction (FSI) problems, including the deforming-spatial-domain/stabilized space-time (DSD/SST) formulation and mesh update methods such as the solid-extension mesh moving technique (SEMMT).
Abstract: We describe the space–time finite element techniques we developed for computation of fluid–structure interaction (FSI) problems. Among these techniques are the deforming-spatial-domain/stabilized space–time (DSD/SST) formulation and its special version, and the mesh update methods, including the solid-extension mesh moving technique (SEMMT). Also among these techniques are the block-iterative, quasi-direct and direct coupling methods for the solution of the fully discretized, coupled fluid and structural mechanics equations. We present some test computations for the mesh moving techniques described. We also present numerical examples where the fluid is governed by the Navier–Stokes equations of incompressible flows and the structure is governed by the membrane and cable equations. Overall, we demonstrate that the techniques we have developed have increased the scope and accuracy of the methods used in computation of FSI problems.

297 citations


Journal ArticleDOI
TL;DR: In this paper, the authors discuss the implementation, development and performance of methods of stochastic computation in Gaussian graphical models, with a particular interest in the scalability with dimension of Markov chain Monte Carlo (MCMC).
Abstract: We discuss the implementation, development and performance of methods of stochastic computation in Gaussian graphical models. We view these methods from the perspective of high-dimensional model search, with a particular interest in the scalability with dimension of Markov chain Monte Carlo (MCMC) and other stochastic search methods. After reviewing the structure and context of undirected Gaussian graphical models and model uncertainty (covariance selection), we discuss prior specifications, including new priors over models, and then explore a number of examples using various methods of stochastic computation. Traditional MCMC methods are the point of departure for this experimentation; we then develop alternative stochastic search ideas and contrast this new approach with MCMC. Our examples range from low (12–20) to moderate (150) dimension, and combine simple synthetic examples with data analysis from gene expression studies. We conclude with comments about the need and potential for new computational methods in far higher dimensions, including constructive approaches to Gaussian graphical modeling and computation.

285 citations


Journal ArticleDOI
TL;DR: In this paper, a solution strategy for achieving cooperative timing among teams of vehicles is presented; based on the notion of coordination variables and coordination functions, the strategy facilitates cooperative timing by making efficient use of communication and computation resources.
Abstract: A solution strategy for achieving cooperative timing among teams of vehicles is presented. Based on the notion of coordination variables and coordination functions, the strategy facilitates cooperative timing by making efficient use of communication and computation resources. The application of the coordination variable/function approach to trajectory-planning problems for teams of unmanned air vehicles with timing constraints is described. Three types of timing constraints are considered: simultaneous arrival, tight sequencing, and loose sequencing. Simulation results demonstrating the viability of the approach are presented.

252 citations
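As a toy illustration of the coordination-variable/coordination-function idea for the simultaneous-arrival constraint described above, the sketch below treats the common arrival time as the coordination variable and each vehicle's cost of committing to it as its coordination function. All arrival windows, nominal ETAs, and cost functions are made up for illustration, not taken from the paper.

```python
import numpy as np

# Hypothetical per-vehicle data: feasible arrival-time windows and nominal (fuel-optimal) ETAs.
windows = [(90, 160), (110, 180), (100, 150)]      # [s] earliest/latest feasible arrival per UAV
nominal = [120.0, 140.0, 115.0]                    # [s] each vehicle's preferred ETA

# Coordination functions: cost each vehicle pays for committing to team arrival time t.
coord_fn = [lambda t, n=n: (t - n) ** 2 for n in nominal]

t_lo = max(w[0] for w in windows)                  # the team arrival time must lie in every window
t_hi = min(w[1] for w in windows)
ts = np.linspace(t_lo, t_hi, 1001)
team_cost = sum(f(ts) for f in coord_fn)
t_star = ts[np.argmin(team_cost)]                  # coordination variable: the common arrival time
print(f"simultaneous arrival time: {t_star:.1f} s")
```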


Journal ArticleDOI
TL;DR: This paper presents an efficient protocol for securely determining the size of set intersection, and shows how this can be used to generate association rules where multiple parties have different (and private) information about the same set of individuals.
Abstract: There has been concern over the apparent conflict between privacy and data mining. There is no inherent conflict, as most types of data mining produce summary results that do not reveal information about individuals. The process of data mining may use private data, leading to the potential for privacy breaches. Secure Multiparty Computation shows that results can be produced without revealing the data used to generate them. The problem is that general techniques for secure multiparty computation do not scale to data-mining size computations. This paper presents an efficient protocol for securely determining the size of set intersection, and shows how this can be used to generate association rules where multiple parties have different (and private) information about the same set of individuals.

237 citations
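A common building block for protocols of this kind is a commutative cipher: each party encrypts hashed items under its own key, the other party re-encrypts them, and equal items collide after double encryption, so only the intersection cardinality is revealed. The sketch below illustrates that idea with modular exponentiation; it is a toy under assumed parameters, not the protocol or the security analysis from the paper.

```python
import hashlib, math, secrets

P = 2**127 - 1                                    # prime modulus for the toy commutative cipher

def h(item):                                      # hash an item into the multiplicative group mod P
    v = int.from_bytes(hashlib.sha256(item.encode()).digest(), "big") % P
    return v or 1

def keygen():                                     # secret exponent coprime to P-1, so x -> x^k is a bijection
    while True:
        k = secrets.randbelow(P - 3) + 2
        if math.gcd(k, P - 1) == 1:
            return k

def enc(vals, k):
    return [pow(v, k, P) for v in vals]

alice_items = {"ann", "bob", "carol", "dave"}
bella_items = {"bob", "dave", "erin"}
ka, kb = keygen(), keygen()

# Each party encrypts its own hashed set, then the other re-encrypts it;
# (h^ka)^kb == (h^kb)^ka, so equal items collide after double encryption.
alice_double = set(enc(enc([h(x) for x in alice_items], ka), kb))
bella_double = set(enc(enc([h(x) for x in bella_items], kb), ka))
print("intersection size:", len(alice_double & bella_double))   # -> 2
```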


Journal Article
TL;DR: Automatic differentiation, by which the derivatives of a function can be evaluated both exactly and economically, is applied extensively in scientific and engineering computation.
Abstract: Evaluation of the partial derivatives of multivariable functions is often required in scientific computation, usually by means of symbolic differentiation or divided differences. For medium- and large-scale problems, however, the cost of symbolic differentiation is very high. When a directional derivative is evaluated, the cost of divided differences can be reduced, but the result is only an approximation, and it is difficult to choose the divided-difference interval correctly. Automatic differentiation, by which the derivatives of a function can be evaluated both exactly and economically, is applied extensively in the field of scientific and engineering computation.

231 citations
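The contrast with divided differences is easy to see in code: forward-mode automatic differentiation carries an exact derivative alongside each value, so no step size has to be chosen. Below is a minimal dual-number sketch, illustrative only and not from the article.

```python
import math

class Dual:
    """A value together with its derivative; arithmetic propagates both exactly."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
    __rmul__ = __mul__
    def sin(self):
        return Dual(math.sin(self.val), math.cos(self.val) * self.der)

def f(x, y):                                 # example multivariable function: x*y + sin(x^2)
    return x * y + (x * x).sin()

x, y = 1.5, -2.0
dfdx = f(Dual(x, 1.0), Dual(y, 0.0)).der     # partial derivative w.r.t. x
dfdy = f(Dual(x, 0.0), Dual(y, 1.0)).der     # partial derivative w.r.t. y
print(dfdx, dfdy)                            # exact to machine precision, no step size needed
```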


Journal ArticleDOI
TL;DR: It is demonstrated that rats can segment the streams using the frequency of co-occurrence (not transitional probabilities, as human infants do) among items, showing that some basic statistical learning mechanism generalizes over nonprimate species.
Abstract: Statistical learning is one of the key mechanisms available to human infants and adults when they face the problems of segmenting a speech stream (Saffran, Aslin, & Newport, 1996) and extracting long-distance regularities (Gomez, 2002; Pena, Bonatti, Nespor, & Mehler, 2002). In the present study, we explore statistical learning abilities in rats in the context of speech segmentation experiments. In a series of five experiments, we address whether rats can compute the necessary statistics to be able to segment synthesized speech streams and detect regularities associated with grammatical structures. Our results demonstrate that rats can segment the streams using the frequency of co-occurrence (not transitional probabilities, as human infants do) among items, showing that some basic statistical learning mechanism generalizes over nonprimate species. Nevertheless, rats did not differentiate among test items when the stream was organized over more complex regularities that involved nonadjacent elements and abstract grammar-like rules.

199 citations


Journal ArticleDOI
TL;DR: The algorithm constructed here has an advantage over the Fraser-Swinney algorithm in providing an explicit calculation of the probability of the null hypothesis that X and Y are independent, although the Fraser-Swinney algorithm is marginally the more accurate of the two when large data sets are used.
Abstract: Given two time series X and Y, their mutual information, I(X,Y) = I(Y,X), is the average number of bits of X that can be predicted by measuring Y and vice versa. In the analysis of observational data, calculation of mutual information occurs in three contexts: identification of nonlinear correlation, determination of an optimal sampling interval, particularly when embedding data, and the investigation of causal relationships with directed mutual information. In this contribution a minimum description length argument is used to determine the optimal number of elements to use when characterizing the distributions of X and Y. However, even when using partitions of the X and Y axes indicated by minimum description length, mutual information calculations performed with a uniform partition of the XY plane can give misleading results. This motivated the construction of an algorithm for calculating mutual information that uses an adaptive partition. This algorithm also incorporates an explicit test of the statistical independence of X and Y in a calculation that returns an assessment of the corresponding null hypothesis. The previously published Fraser-Swinney algorithm for calculating mutual information includes a sophisticated procedure for local adaptive control of the partitioning process. When the Fraser-Swinney algorithm and the algorithm constructed here are compared, they give very similar numerical results (less than 4% difference in a typical application). Detailed comparisons are possible when X and Y are correlated jointly Gaussian distributed, because an analytic expression for I(X,Y) can be derived for that case. Based on these tests, three conclusions can be drawn. First, the algorithm constructed here has an advantage over the Fraser-Swinney algorithm in providing an explicit calculation of the probability of the null hypothesis that X and Y are independent. Second, the Fraser-Swinney algorithm is marginally the more accurate of the two algorithms when large data sets are used. With smaller data sets, however, the Fraser-Swinney algorithm reports structures that disappear when more data are available. Third, the algorithm constructed here requires about 0.5% of the computation time required by the Fraser-Swinney algorithm.

194 citations
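The jointly Gaussian test case mentioned in the abstract is easy to reproduce: for correlation coefficient rho, the analytic value is I(X,Y) = -0.5*log2(1 - rho^2) bits. The sketch below compares it with a naive plug-in estimate on a uniform partition of the XY plane, which is exactly the kind of estimator the paper warns can mislead; it is shown only as a baseline, not as the adaptive-partition algorithm of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
rho, n = 0.6, 200_000
# Jointly Gaussian X, Y with correlation rho: analytic MI is -0.5*log2(1 - rho^2) bits.
x = rng.standard_normal(n)
y = rho * x + np.sqrt(1 - rho**2) * rng.standard_normal(n)

def mi_plugin(x, y, bins=32):
    """Plug-in MI estimate (bits) from a uniform 2-D histogram."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

print("estimate :", mi_plugin(x, y))
print("analytic :", -0.5 * np.log2(1 - rho**2))
```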


Posted Content
TL;DR: In this article, the authors proposed a distributed randomized algorithm for computing separable functions, which can be written as linear combinations of functions of individual variables, using a randomized gossip mechanism for minimum computation as the subroutine.
Abstract: The problem of computing functions of values at the nodes in a network in a totally distributed manner, where nodes do not have unique identities and make decisions based only on local information, has applications in sensor, peer-to-peer, and ad-hoc networks. The task of computing separable functions, which can be written as linear combinations of functions of individual variables, is studied in this context. Known iterative algorithms for averaging can be used to compute the normalized values of such functions, but these algorithms do not extend in general to the computation of the actual values of separable functions. The main contribution of this paper is the design of a distributed randomized algorithm for computing separable functions. The running time of the algorithm is shown to depend on the running time of a minimum computation algorithm used as a subroutine. Using a randomized gossip mechanism for minimum computation as the subroutine yields a complete totally distributed algorithm for computing separable functions. For a class of graphs with small spectral gap, such as grid graphs, the time used by the algorithm to compute averages is of a smaller order than the time required by a known iterative averaging scheme.
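One well-known way to reduce a sum to a minimum computation, in the spirit of the approach described above, uses exponential random variables: the coordinate-wise minimum of exponentials with rates x_i is itself exponential with rate sum(x_i). The sketch below takes the minima centrally instead of by gossip, so it only illustrates the estimation step; the node values and the number of repetitions W are arbitrary choices, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0.5, 2.0, size=100)     # node values x_i > 0; goal: estimate sum(x)
W = 2000                                # number of independent repetitions

# Node i draws W exponentials with rate x_i; the network would compute the
# coordinate-wise minimum by gossip (here it is taken directly).
samples = rng.exponential(1.0 / x[:, None], size=(x.size, W))
mins = samples.min(axis=0)              # each minimum is Exp(sum(x)) distributed

estimate = (W - 1) / mins.sum()         # unbiased estimator of the rate, i.e. of sum(x)
print(estimate, x.sum())
```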

Journal ArticleDOI
TL;DR: In this article, the authors compute all dynamical spin-spin correlation functions for the spin-1/2 XXZ anisotropic Heisenberg model in the gapless antiferromagnetic regime at zero temperature, using numerical sums of exact determinant representations for form factors of spin operators on the lattice.
Abstract: We compute all dynamical spin–spin correlation functions for the spin-1/2 XXZ anisotropic Heisenberg model in the gapless antiferromagnetic regime at zero temperature, using numerical sums of exact determinant representations for form factors of spin operators on the lattice. Contributions from intermediate states containing many particles and string (bound) states are included. We present modified determinant representations for the form factors valid in the general case with string solutions to the Bethe equations. Our results are such that the available sum rules are saturated to high precision. We Fourier transform our results back to real space, allowing us in particular to make a comparison with known exact formulae for equal-time correlation functions for small separations in zero field, and with predictions for the zero-field asymptotics from conformal field theory.

Journal ArticleDOI
TL;DR: An accurate atomic-scale finite element method (AFEM) is developed that has exactly the same formal structure as continuum finite element methods, and therefore can seamlessly be combined with them in multiscale computations.
Abstract: We have developed an accurate atomic-scale finite element method (AFEM) that has exactly the same formal structure as continuum finite element methods, and can therefore be combined with them seamlessly in multiscale computations. The AFEM uses both the first and second derivatives of the system energy in the energy minimization computation. It is faster than the standard conjugate gradient method, which uses only the first derivative of the system energy, and can thus significantly reduce computation time, especially in studying large-scale problems. Woven nanostructures of carbon nanotubes are proposed and studied via this new method, and strong defect insensitivity in such nanostructures is revealed. The AFEM is also readily applicable to solving many physics-related optimization problems.

Book
01 Jan 2005
TL;DR: This work presents a new practically oriented perspective on the theory of algorithms, computation, and automata as a whole, and demonstrates how these algorithms are more appropriate as mathematical models for modern computers and how they provide a better framework for computing methods.
Abstract: * The first exposition on super-recursive algorithms, systematizing all main classes and providing an accessible, focused examination of the theory and its ramifications * Demonstrates how these algorithms are more appropriate as mathematical models for modern computers and how they present a better framework for computing methods * Develops a new practically oriented perspective on the theory of algorithms, computation, and automata, as a whole

Journal ArticleDOI
TL;DR: This article describes implementations of the GMRES algorithm for real and complex, single and double precision arithmetic, suitable for serial, shared memory and distributed memory computers; the implemented stopping criterion is based on a normwise backward error.
Abstract: In this article we describe our implementations of the GMRES algorithm for both real and complex, single and double precision arithmetic, suitable for serial, shared memory and distributed memory computers. For the sake of portability, simplicity, flexibility and efficiency, the GMRES solvers have been implemented in Fortran 77 using the reverse communication mechanism for the matrix-vector product, the preconditioning and the dot product computations. For distributed memory computation, several orthogonalization procedures have been implemented to reduce the cost of the dot product calculation, which is a well-known efficiency bottleneck for Krylov methods. Either implicit or explicit calculation of the residual at restart is possible, depending on the actual cost of the matrix-vector product. Finally, the implemented stopping criterion is based on a normwise backward error.
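The reverse-communication pattern is independent of Fortran: the solver never touches the matrix itself, it only hands vectors back to the caller and asks for matrix-vector products. Below is a bare, restart-free numpy sketch of GMRES written in that style, with a plain relative-residual stopping test rather than the normwise backward error of the package; it is an illustration, not the implementation described in the article.

```python
import numpy as np

def gmres_simple(matvec, b, m=100, tol=1e-10):
    """Restart-free GMRES; the caller supplies A only through the matvec callback."""
    n = b.size
    beta = np.linalg.norm(b)
    Q = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    Q[:, 0] = b / beta                                  # initial guess x0 = 0 is assumed
    for k in range(m):
        w = matvec(Q[:, k])                             # the only place the matrix is needed
        for j in range(k + 1):                          # modified Gram-Schmidt orthogonalization
            H[j, k] = Q[:, j] @ w
            w = w - H[j, k] * Q[:, j]
        H[k + 1, k] = np.linalg.norm(w)
        if H[k + 1, k] > 0:
            Q[:, k + 1] = w / H[k + 1, k]
        e1 = np.zeros(k + 2)
        e1[0] = beta
        y = np.linalg.lstsq(H[:k + 2, :k + 1], e1, rcond=None)[0]
        if np.linalg.norm(H[:k + 2, :k + 1] @ y - e1) < tol * beta:
            break
    return Q[:, :k + 1] @ y

rng = np.random.default_rng(0)
A = np.diag(np.arange(1.0, 101.0)) + 0.1 * rng.standard_normal((100, 100))
b = np.ones(100)
x = gmres_simple(lambda v: A @ v, b)
print(np.linalg.norm(A @ x - b))                        # small residual after the Arnoldi sweep
```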

Journal ArticleDOI
TL;DR: In this article, the authors give an elementary review of black holes in string theory and discuss BPS holes, the microscopic computation of entropy and the ''fuzzball'' picture of the black hole interior suggested by microstates of the 2-charge system.
Abstract: We give an elementary review of black holes in string theory. We discuss BPS holes, the microscopic computation of entropy and the 'fuzzball' picture of the black hole interior suggested by microstates of the 2-charge system.

Book
01 Jan 2005
TL;DR: DNA: The Molecule of Life, Theoretical Computer Science: A Primer, and Models of Molecular Computation.
Abstract: Contents: DNA: The Molecule of Life; Theoretical Computer Science: A Primer; Models of Molecular Computation; Complexity Issues; Physical Implementations; Cellular Computing.

Journal ArticleDOI
TL;DR: It is found that the most accurate algorithm depends on the class and that for some classes, none of the available algorithms is particularly good.
Abstract: We investigate current vertex normal computation algorithms and evaluate their effectiveness at approximating analytically computable (and thus comparable) normals for a variety of classes of model. We find that the most accurate algorithm depends on the class and that for some classes, none of the available algorithms is particularly good. We also compare the relative speeds of all algorithms.
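The evaluation methodology generalizes to any surface with an analytically computable normal. The sketch below is not the paper's test suite: the surface, grid size, and the two weighting rules compared are arbitrary choices. It meshes a height field, computes vertex normals by unweighted and by area-weighted averaging of incident face normals, and reports the angular error against the analytic normal at an interior vertex.

```python
import numpy as np

# Test surface z = f(x, y) with known gradient, so the analytic normal is (-fx, -fy, 1)/|.|.
f  = lambda x, y: np.sin(x) * np.cos(y)
fx = lambda x, y: np.cos(x) * np.cos(y)
fy = lambda x, y: -np.sin(x) * np.sin(y)

n = 30
xs, ys = np.meshgrid(np.linspace(0, 2, n), np.linspace(0, 2, n), indexing="ij")
verts = np.stack([xs, ys, f(xs, ys)], axis=-1).reshape(-1, 3)
idx = lambda i, j: i * n + j

tris = []                                                # split each grid quad into two triangles
for i in range(n - 1):
    for j in range(n - 1):
        tris += [(idx(i, j), idx(i + 1, j), idx(i + 1, j + 1)),
                 (idx(i, j), idx(i + 1, j + 1), idx(i, j + 1))]
tris = np.array(tris)

# Per-face normals as un-normalized cross products (their length is twice the face area).
e1 = verts[tris[:, 1]] - verts[tris[:, 0]]
e2 = verts[tris[:, 2]] - verts[tris[:, 0]]
face_n = np.cross(e1, e2)

def vertex_normals(area_weighted):
    acc = np.zeros_like(verts)
    fn = face_n if area_weighted else face_n / np.linalg.norm(face_n, axis=1, keepdims=True)
    for t, nrm in zip(tris, fn):
        acc[t] += nrm                                    # accumulate over each face's three vertices
    return acc / np.linalg.norm(acc, axis=1, keepdims=True)

v = idx(n // 2, n // 2)                                  # an interior vertex
true_n = np.array([-fx(xs, ys), -fy(xs, ys), np.ones_like(xs)]).reshape(3, -1).T
true_n /= np.linalg.norm(true_n, axis=1, keepdims=True)
for name, w in [("unweighted mean", False), ("area-weighted", True)]:
    err = np.degrees(np.arccos(np.clip(vertex_normals(w)[v] @ true_n[v], -1, 1)))
    print(f"{name:16s}: {err:.4f} deg error")
```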

Journal ArticleDOI
TL;DR: In this paper, a tractable approximation method for estimating bid values and constructing bids is presented, which provides a way for carriers to discover their true costs and construct optimal or near-optimal bids by solving a single NP-hard problem.
Abstract: Trucking companies (carriers) are increasingly facing combinatorial auctions conducted by shippers seeking contracts for their transportation needs. The bid valuation and construction problem for carriers facing these combinatorial auctions is very difficult and involves the computation of a number of NP-hard subproblems. In this paper we examine computationally tractable approximation methods for estimating these values and constructing bids. The benefit of our approximation method is that it provides a way for carriers to discover their true costs and construct optimal or near-optimal bids by solving a single NP-hard problem. This represents a significant improvement in computational efficiency. We examine our method both analytically and empirically using a simulation-based analysis.

Patent
12 May 2005
TL;DR: In this paper, the authors present a method for creating a graphical program that uses multiple models of computation (MoC) in response to first input, where the assembled first plurality of graphical program elements have a first MoC.
Abstract: System and method for creating a graphical program that uses multiple models of computation (MoC). A first plurality of graphical program elements is assembled in a graphical program in response to first input, where the assembled first plurality of graphical program elements have a first MoC. A structure is displayed in the graphical program indicating use of a second MoC for graphical program elements comprised within the interior of the structure. A second plurality of graphical program elements is assembled within the structure in response to second input, where the assembled second plurality of graphical program elements have the second MoC. The graphical program is executable to perform a function, for example, by executing the assembled first plurality of graphical program elements in accordance with the first model of computation, and executing the assembled second plurality of graphical program elements in accordance with the second model of computation.

Journal ArticleDOI
TL;DR: This article discusses the high-performance parallel implementation of the computation and updating of QR factorizations of dense matrices, including problems large enough to require out-of-core computation, where the matrix is stored on disk.
Abstract: This article discusses the high-performance parallel implementation of the computation and updating of QR factorizations of dense matrices, including problems large enough to require out-of-core computation, where the matrix is stored on disk. The algorithms presented here are scalable both in problem size and as the number of processors increases. Implementation using the Parallel Linear Algebra Package (PLAPACK) and the Parallel Out-of-Core Linear Algebra Package (POOCLAPACK) is discussed. The methods are shown to attain excellent performance, in some cases attaining roughly 80% of the “realizable” peak of the architectures on which the experiments were performed.
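The updating idea at the heart of out-of-core QR is simple to state: keep only the small triangular factor in memory and fold in each new block of rows as it is read from disk. The serial numpy sketch below illustrates that update only; it has no parallelism, no PLAPACK/POOCLAPACK, and the matrix and block sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((10_000, 50))        # stand-in for a matrix too large to hold in memory

R = np.zeros((0, A.shape[1]))
for start in range(0, A.shape[0], 1_000):    # pretend each slice is one block read from disk
    block = A[start:start + 1_000]
    R = np.linalg.qr(np.vstack([R, block]), mode="r")   # update only the triangular factor

# R agrees (up to row signs) with the in-core factorization of the whole matrix.
print(np.allclose(np.abs(R), np.abs(np.linalg.qr(A, mode="r"))))
```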

Journal ArticleDOI
TL;DR: In this paper, the PageRank computation in the original random surfer model is transformed into the problem of computing the solution of a sparse linear system, and the sparsity of the obtained linear system makes it possible to exploit the effectiveness of Markov chain index reordering.
Abstract: Recently, the research community has devoted increased attention to reducing the computational time needed by web ranking algorithms. In particular, many techniques have been proposed to speed up the well-known PageRank algorithm used by Google. This interest is motivated by two dominant factors: (1) the web graph has huge dimensions and is subject to dramatic updates in terms of nodes and links, therefore the PageRank assignment tends to become obsolete very soon; (2) many PageRank vectors need to be computed according to different choices of the personalization vectors or when adopting strategies of collusion detection. In this paper, we show how the PageRank computation in the original random surfer model can be transformed into the problem of computing the solution of a sparse linear system. The sparsity of the obtained linear system makes it possible to exploit the effectiveness of the Markov chain index reordering to speed up the PageRank computation. In particular, we rearrange the system matrix acco...
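In its simplest form the reformulation reads: with damping factor alpha, personalization vector v, and row-stochastic link matrix P, the PageRank vector solves the sparse linear system (I - alpha P^T) x = (1 - alpha) v. The sketch below builds a small random link graph with a crude dangling-node fix and checks a direct sparse solve against power iteration; it does not include the index reordering that is the paper's contribution, and all sizes are arbitrary.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 2_000
A = sp.random(n, n, density=0.002, format="csr", random_state=0)   # random link structure
A.data[:] = 1.0
out_deg = np.asarray(A.sum(axis=1)).ravel()
out_deg[out_deg == 0] = 1.0                      # crude fix: dangling nodes keep a zero row
P = sp.diags(1.0 / out_deg) @ A                  # row-(sub)stochastic transition matrix

alpha = 0.85
v = np.full(n, 1.0 / n)                          # uniform personalization vector

# PageRank as a sparse linear system: (I - alpha * P^T) x = (1 - alpha) * v
x = spla.spsolve(sp.eye(n, format="csc") - alpha * P.T, (1 - alpha) * v)

# Cross-check against the classic power-iteration formulation of the same model.
y = v.copy()
for _ in range(200):
    y = alpha * (P.T @ y) + (1 - alpha) * v
print(np.abs(x - y).max())
```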

Journal ArticleDOI
TL;DR: This work shows how high-level functional programs can be mapped compositionally into a simple kind of automata which are immediately seen to be reversible.

Journal ArticleDOI
01 Dec 2005
TL;DR: A new way to perform the elementarity tests required during the computation of elementary modes is proposed, which empirically improves the computation time significantly in large networks, together with a promising approach for computing EMs in a completely distributed manner by decomposing the full problem into arbitrarily many sub-tasks.
Abstract: The concept of elementary (flux) modes provides a rigorous description of pathways in metabolic networks and has proved valuable in a number of applications. However, the computation of elementary modes is a hard computational task that has given rise to several variants of algorithms in recent years. This work brings substantial progress to this issue. The authors start with a brief review of results obtained in previous work regarding (a) a unified framework for elementary-mode computation, (b) network compression and redundancy removal, and (c) the binary approach by which elementary modes are determined as binary patterns, reducing the memory demand drastically without loss of speed. The authors then address further issues. First, a new way to perform the elementarity tests required during the computation of elementary modes is proposed, which empirically improves the computation time significantly in large networks. Second, a method to compute only those elementary modes in which certain reactions are involved is derived. Relying on this method, a promising approach for computing EMs in a completely distributed manner, by decomposing the full problem into arbitrarily many sub-tasks, is presented. The new methods have been implemented in the freely available software tools FluxAnalyzer and Metatool, and benchmark tests in realistic networks emphasise the potential of the proposed algorithms.


Journal ArticleDOI
TL;DR: The emphasis here is on developing practical methods that are illustrated to be numerically reliable, robust to choice of initialization point, and numerically efficient in terms of how computation and memory requirements scale relative to problem size.
Abstract: This paper addresses the problem of estimating the parameters in a multivariable bilinear model on the basis of observed input-output data. The main contribution is to develop, analyze, and empirically study new techniques for computing a maximum-likelihood based solution. In particular, the emphasis here is on developing practical methods that are illustrated to be numerically reliable, robust to choice of initialization point, and numerically efficient in terms of how computation and memory requirements scale relative to problem size. This results in new methods that can be reliably deployed on systems of nontrivial state, input and output dimension. Underlying these developments is a new approach (in this context) of employing the expectation-maximization method as a means for robust and gradient free computation of the maximum-likelihood solution.

Journal ArticleDOI
TL;DR: This paper develops algorithms for distributed computation of averages of the node data over networks with arbitrary but fixed connectivity that are linear dynamical systems that generate sequences of improving approximations to the desired computation at each node via iterative processing and broadcasting.
Abstract: In this paper, we develop algorithms for distributed computation of averages of the node data over networks with arbitrary but fixed connectivity. The algorithms we develop are linear dynamical systems that generate sequences of improving approximations to the desired computation at each node, via iterative processing and broadcasting. The algorithms are locally constructed at each node by exploiting only locally available and macroscopic information about the network topology. We present methods for optimizing the convergence rates of these algorithms to the desired computation, and evaluate their performance characteristics in the context of a problem of signal estimation from multinode noisy observations. By conducting simulations based on simple power-loss propagation models, we perform a preliminary comparison of the algorithms we develop against other types of distributed algorithms for computing averages, and identify transmit-power optimized algorithmic implementations as a function of the size and density of the sensor network.
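A minimal instance of such a linear dynamical system is consensus iteration with Metropolis weights, which each node can compute from its own degree and its neighbours' degrees only. The sketch below, using a random geometric graph plus a ring to guarantee connectivity and otherwise arbitrary parameters, shows the iterates converging to the average of the initial node data; it illustrates the general idea, not the paper's optimized weight design.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
pos = rng.random((n, 2))                                         # sensor positions in the unit square
adj = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1) < 0.3
ring = np.zeros_like(adj)
k = np.arange(n)
ring[k, (k + 1) % n] = True                                      # ring backbone keeps the graph connected
adj |= ring | ring.T
np.fill_diagonal(adj, False)
deg = adj.sum(axis=1)

W = np.zeros((n, n))                                             # Metropolis weight matrix
for i in range(n):
    for j in np.flatnonzero(adj[i]):
        W[i, j] = 1.0 / (1 + max(deg[i], deg[j]))
    W[i, i] = 1.0 - W[i].sum()                                   # self-weight: rows and columns sum to 1

x = rng.normal(size=n)                                           # noisy local measurements
target = x.mean()
for _ in range(1_000):
    x = W @ x                                                    # each node mixes only with its neighbours
print(np.abs(x - target).max())                                  # -> close to 0
```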

Proceedings ArticleDOI
12 Dec 2005
TL;DR: In this paper, a new technique for the computation of ellipsoidal invariant sets for continuous-time linear systems controlled by a saturating linear control law is presented, where the proposed sufficient condition is expressed in the form of linear matrix inequality constraints.
Abstract: In this work, a new technique for the computation of ellipsoidal invariant sets for continuous-time linear systems controlled by a saturating linear control law is presented. New sufficient conditions to guarantee that an ellipsoid is a contractive invariant set for the closed-loop system are presented. The contractive nature of the invariant set ensures asymptotic stability of the controlled system. The main contributions of the paper are the following: the proposed sufficient condition is expressed in the form of linear matrix inequality constraints; the presented method includes (and consequently improves on) previous results on this topic; and the computational complexity of the proposed approach is analyzed. Illustrative examples are given.
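Setting the LMI machinery aside, the basic object is easy to compute for a toy system. The sketch below uses an assumed second-order plant and gain, not an example from the paper: it finds a quadratic Lyapunov function for the unsaturated closed loop and then shrinks the ellipsoid until the control stays within its bound inside it, so the saturation never activates there and the set is contractive for the saturated loop as well. This is the classical baseline that such LMI conditions improve on, not the paper's condition.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[0.0, 1.0], [1.0, -0.5]])
B = np.array([[0.0], [1.0]])
K = np.array([[-3.0, -2.0]])               # a stabilizing state feedback (assumed given)
u_max = 1.0                                # saturation level of the actuator

Acl = A + B @ K
P = solve_continuous_lyapunov(Acl.T, -np.eye(2))   # Acl' P + P Acl = -I, so V = x'Px is decreasing

# Largest level set {x : x'Px <= c} contained in the linear region {|Kx| <= u_max}:
c = u_max**2 / (K @ np.linalg.inv(P) @ K.T).item()
print("P =\n", P, "\nlevel c =", c)
```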

Proceedings ArticleDOI
04 Apr 2005
TL;DR: A novel approach called homogeneous redundancy (HR) is presented, in which the redundant instances of a computation are dispatched to numerically identical computers, allowing strict equality comparison of the results.
Abstract: Distributed computing using PCs volunteered by the public can provide high computing capacity at low cost. However, computational results from volunteered PCs have a non-negligible error rate, so result validation is needed to ensure overall correctness. A generally applicable technique is "redundant computing", in which each computation is done on several separate computers, and results are accepted only if there is a consensus. Variations in numerical processing between computers (due to a variety of hardware and software factors) can lead to different results for the same task. In some cases, this can be addressed by doing a "fuzzy comparison" of results, so that two results are considered equivalent if they agree within given tolerances. However, this approach is not applicable to applications that are "divergent", that is, for which small numerical differences can produce large differences in the results. In this paper we examine the problem of validating results of divergent applications. We present a novel approach called homogeneous redundancy (HR), in which the redundant instances of a computation are dispatched to numerically identical computers, allowing strict equality comparison of the results. HR has been deployed in Predictor@home, a world-wide community effort to predict protein structure from sequence.
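In scheduler terms the idea is simply "replicate within one numerical class, then compare bit for bit". The sketch below is a toy illustration with made-up host classes and helper names, not the Predictor@home/BOINC implementation.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass(frozen=True)
class Host:
    name: str
    numerical_class: str          # e.g. CPU/OS/compiler combination expected to be bit-compatible

def dispatch(task_id, hosts, replicas=3):
    """Group hosts by numerical class and send all replicas to one class."""
    by_class = defaultdict(list)
    for h in hosts:
        by_class[h.numerical_class].append(h)
    for members in by_class.values():
        if len(members) >= replicas:
            return [(task_id, h) for h in members[:replicas]]
    raise RuntimeError("no homogeneous class has enough hosts for strict-equality validation")

def validate(results):
    """Strict equality consensus: accept only if all replica results are bit-identical."""
    return results[0] if len(set(results)) == 1 else None

hosts = [Host("a", "x86_64/linux/gcc"), Host("b", "x86_64/linux/gcc"),
         Host("c", "x86_64/linux/gcc"), Host("d", "ppc/darwin/xlc")]
print(dispatch("wu_0017", hosts))
print(validate(["0.4375", "0.4375", "0.4375"]))
```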

Journal ArticleDOI
TL;DR: In this article, the authors examine noisy radio (broadcast) networks in which every bit transmitted has a certain probability of being flipped, and show a protocol to compute any threshold function using only a linear number of transmissions.
Abstract: In this paper, we examine noisy radio (broadcast) networks in which every bit transmitted has a certain probability of being flipped. Each processor has some initial input bit, and the goal is to compute a function of these input bits. In this model, we show a protocol to compute any threshold function using only a linear number of transmissions.