
Showing papers on "Computation" published in 2008


Journal ArticleDOI
TL;DR: Wannier90 is a program for calculating maximally-localised Wannier functions (MLWF) from a set of Bloch energy bands that may or may not be attached to or mixed with other bands, and is able to output MLWF for visualisation and other post-processing purposes.

2,599 citations



Journal ArticleDOI
TL;DR: This work shows a design of a chemical computer that achieves fast and reliable Turing-universal computation using molecular counts, and demonstrates that molecular counts can be a useful form of information for small molecular systems such as those operating within cellular environments.
Abstract: A highly desired part of the synthetic biology toolbox is an embedded chemical microcontroller, capable of autonomously following a logic program specified by a set of instructions, and interacting with its cellular environment. Strategies for incorporating logic in aqueous chemistry have focused primarily on implementing components, such as logic gates, that are composed into larger circuits, with each logic gate in the circuit corresponding to one or more molecular species. With this paradigm, designing and producing new molecular species is necessary to perform larger computations. An alternative approach begins by noticing that chemical systems on the small scale are fundamentally discrete and stochastic. In particular, the exact molecular counts of the molecular species present are an intrinsically available form of information. This might appear to be a very weak form of information, perhaps quite difficult for computations to utilize. Indeed, it has been shown that error-free Turing-universal computation is impossible in this setting. Nevertheless, we show a design of a chemical computer that achieves fast and reliable Turing-universal computation using molecular counts. Our scheme uses only a small number of different molecular species to do computation of arbitrary complexity. The total probability of error of the computation can be made arbitrarily small (but not zero) by adjusting the initial molecular counts of certain species. While physical implementations would be difficult, these results demonstrate that molecular counts can be a useful form of information for small molecular systems such as those operating within cellular environments.
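The construction above treats exact molecular counts as the information carrier. As a toy illustration of that idea only (not the paper's Turing-universal scheme), a minimal Gillespie-style simulation of the single reaction X → Y shows a count being acted on and preserved stochastically:

```python
import random

def gillespie_convert(x_count, rate=1.0, seed=0):
    """Toy stochastic simulation of the single reaction X -> Y (mass-action).

    Illustration only: the exact molecular count of X is information the
    system acts on, and the final count of Y equals the initial count of X.
    """
    rng = random.Random(seed)
    t, x, y = 0.0, x_count, 0
    while x > 0:
        propensity = rate * x                # mass-action propensity of X -> Y
        t += rng.expovariate(propensity)     # exponential waiting time to next firing
        x, y = x - 1, y + 1                  # fire the reaction once
    return t, y

print(gillespie_convert(42))                 # (elapsed time, 42)
```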

287 citations


Journal ArticleDOI
TL;DR: In this paper, a modification of the standard model is constructed that stabilizes the Higgs mass against quadratically divergent radiative corrections, using ideas originally discussed by Lee and Wick in the context of a finite theory of quantum electrodynamics.
Abstract: We construct a modification of the standard model which stabilizes the Higgs mass against quadratically divergent radiative corrections, using ideas originally discussed by Lee and Wick in the context of a finite theory of quantum electrodynamics. The Lagrangian includes new higher derivative operators. We show that the higher derivative terms can be eliminated by introducing a set of auxiliary fields; this allows for convenient computation and makes the physical interpretation more transparent. The theory is thought to be unitary, but nevertheless, it does not satisfy the usual analyticity conditions.
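For orientation, the auxiliary-field manipulation mentioned above can be sketched for a single higher-derivative scalar; this is the standard form of the trick in our notation, not an excerpt from the paper:

```latex
% Higher-derivative scalar with a Lee-Wick mass scale M (schematic):
\mathcal{L}_{\mathrm{hd}} = \tfrac{1}{2}\,\partial_\mu\hat\phi\,\partial^\mu\hat\phi
  - \tfrac{1}{2M^2}\,(\partial^2\hat\phi)^2 - V(\hat\phi)

% An auxiliary field \tilde\phi removes the quartic-derivative term
% (integrating \tilde\phi back out reproduces \mathcal{L}_{\mathrm{hd}}):
\mathcal{L} = \tfrac{1}{2}\,\partial_\mu\hat\phi\,\partial^\mu\hat\phi
  - \tilde\phi\,\partial^2\hat\phi + \tfrac{1}{2}M^2\tilde\phi^2 - V(\hat\phi)

% Shifting \phi = \hat\phi + \tilde\phi diagonalizes the kinetic terms and
% exposes a wrong-sign (Lee-Wick) partner of mass M:
\mathcal{L} = \tfrac{1}{2}\,\partial_\mu\phi\,\partial^\mu\phi
  - \tfrac{1}{2}\,\partial_\mu\tilde\phi\,\partial^\mu\tilde\phi
  + \tfrac{1}{2}M^2\tilde\phi^2 - V(\phi - \tilde\phi)
```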

227 citations


Journal ArticleDOI
TL;DR: This work presents a simple decentralized algorithm for computing the top k eigenvectors of a symmetric weighted adjacency matrix, and a proof that it converges essentially in O(τ_mix log² n) rounds of communication and computation, where τ_mix is the mixing time of a random walk on the network.
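A rough global-view analogue of what such a decentralized algorithm computes is plain orthogonal iteration; the sketch below is our own, with the in-network multiply and orthonormalization replaced by centralized operations, and is only meant to show the two steps each round must emulate:

```python
import numpy as np

def orthogonal_iteration(A, k, iters=200, seed=0):
    """Top-k eigenvectors of a symmetric matrix A by orthogonal iteration.

    Global-view analogue only: in the decentralized setting each node holds
    one row of A and one row of V, the multiply uses neighbor exchanges, and
    the orthonormalization is itself carried out approximately in-network.
    """
    rng = np.random.default_rng(seed)
    V = rng.standard_normal((A.shape[0], k))
    for _ in range(iters):
        V = A @ V                    # each node updates its row from its neighbors
        V, _ = np.linalg.qr(V)       # re-orthonormalize the k columns
    return V

A = np.array([[0.0, 1.0, 1.0],
              [1.0, 0.0, 2.0],
              [1.0, 2.0, 0.0]])      # small symmetric weighted adjacency matrix
print(orthogonal_iteration(A, k=2))
```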

218 citations


Journal ArticleDOI
TL;DR: This paper describes Divide and Conquer SLAM, which is an EKF SLAM algorithm in which the computational complexity per step is reduced from O(n²) to O(n), and the total cost of SLAM is reduced from O(n³) to O(n²).
Abstract: In this paper, we show that all processes associated with the move-sense-update cycle of extended Kalman filter (EKF) Simultaneous Localization and Mapping (SLAM) can be carried out in time linear with the number of map features. We describe Divide and Conquer SLAM, which is an EKF SLAM algorithm in which the computational complexity per step is reduced from O(n²) to O(n), and the total cost of SLAM is reduced from O(n³) to O(n²). Unlike many current large-scale EKF SLAM techniques, this algorithm computes a solution without relying on approximations or simplifications (other than linearizations) to reduce computational complexity. Also, estimates and covariances are available when needed by data association without any further computation. Furthermore, as the method works most of the time in local maps, where angular errors remain small, the effect of linearization errors is limited. The resulting vehicle and map estimates are more precise than those obtained with standard EKF SLAM. The errors with respect to the true value are smaller, and the computed state covariance is consistent with the real error in the estimation. Both simulated experiments and the Victoria Park dataset are used to provide evidence of the advantages of this algorithm.
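As a simplified, offline view of the divide-and-conquer idea (the paper interleaves mapping and joining so the amortized per-step cost stays linear), local maps can be fused pairwise in a balanced binary tree; `join` below is a placeholder for EKF map joining, not the paper's implementation:

```python
def divide_and_conquer_join(local_maps, join):
    """Fuse a sequence of local maps pairwise, as in a balanced binary tree.

    `join` is a placeholder for EKF map joining (it fuses two local maps into
    one); this offline sketch only shows why pairwise joining keeps the total
    cost near O(n^2) instead of the O(n^3) of one ever-growing global map.
    """
    maps = list(local_maps)
    while len(maps) > 1:
        nxt = [join(maps[i], maps[i + 1]) for i in range(0, len(maps) - 1, 2)]
        if len(maps) % 2:            # odd map out is promoted to the next level
            nxt.append(maps[-1])
        maps = nxt
    return maps[0]
```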

206 citations


Journal ArticleDOI
TL;DR: A new algorithm and easily extensible framework is described for computing MS complexes for large-scale data of any dimension where scalar values are given at the vertices of a closure-finite and weak topology (CW) complex, thereby enabling computation on a wide variety of meshes such as regular grids, simplicial meshes, and adaptive multiresolution (AMR) meshes.
Abstract: The Morse-Smale (MS) complex has proven to be a useful tool in extracting and visualizing features from scalar-valued data. However, efficient computation of the MS complex for large scale data remains a challenging problem. We describe a new algorithm and easily extensible framework for computing MS complexes for large scale data of any dimension where scalar values are given at the vertices of a closure-finite and weak topology (CW) complex, therefore enabling computation on a wide variety of meshes such as regular grids, simplicial meshes, and adaptive multiresolution (AMR) meshes. A new divide-and-conquer strategy allows for memory-efficient computation of the MS complex and simplification on-the-fly to control the size of the output. In addition to being able to handle various data formats, the framework supports implementation-specific optimizations, for example, for regular data. We present the complete characterization of critical point cancellations in all dimensions. This technique enables the topology-based analysis of large data on off-the-shelf computers. In particular, we demonstrate the first full computation of the MS complex for a one-billion-node (1024³) grid on a laptop computer with 2 GB of memory.

201 citations


Journal ArticleDOI
TL;DR: This work presents a novel, simple, and efficient method for accurate and stable computation of the RMF of a curve in 3D, which uses two reflections to compute each frame from its preceding one, yielding a sequence of frames that approximates an exact RMF.
Abstract: Due to its minimal twist, the rotation minimizing frame (RMF) is widely used in computer graphics, including sweep or blending surface modeling, motion design and control in computer animation and robotics, streamline visualization, and tool path planning in CAD/CAM. We present a novel, simple, and efficient method for accurate and stable computation of the RMF of a curve in 3D. This method, called the double reflection method, uses two reflections to compute each frame from its preceding one, yielding a sequence of frames that approximates an exact RMF. The double reflection method has fourth-order global approximation error, thus it is much more accurate than the two currently prevailing methods with second-order approximation error (the projection method by Klok and the rotation method by Bloomenthal), while all these methods have nearly the same per-frame computational cost. Furthermore, the double reflection method is much simpler and faster than using the standard fourth-order Runge-Kutta method to integrate the defining ODE of the RMF, though they have the same accuracy. We also investigate further properties and extensions of the double reflection method, and discuss the variational principles in designing moving frames with boundary conditions, based on RMF.
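A compact sketch of the per-step update, following the two-reflections description above (variable names are ours; degenerate cases such as a vanishing second reflection vector are not handled):

```python
import numpy as np

def double_reflection_rmf(points, tangents, r0):
    """Propagate a rotation minimizing frame along sampled curve points.

    points, tangents: (n, 3) arrays of samples x_i and unit tangents t_i;
    r0: initial reference vector, unit length and orthogonal to tangents[0].
    Each step reflects the previous frame in the bisecting plane of x_i and
    x_{i+1}, then reflects again so the tangent maps onto t_{i+1}.
    """
    points = np.asarray(points, dtype=float)
    tangents = np.asarray(tangents, dtype=float)
    frames = [np.asarray(r0, dtype=float)]
    for i in range(len(points) - 1):
        ri, ti = frames[-1], tangents[i]
        v1 = points[i + 1] - points[i]
        c1 = v1 @ v1
        rL = ri - (2.0 / c1) * (v1 @ ri) * v1            # first reflection
        tL = ti - (2.0 / c1) * (v1 @ ti) * v1
        v2 = tangents[i + 1] - tL
        c2 = v2 @ v2
        frames.append(rL - (2.0 / c2) * (v2 @ rL) * v2)  # second reflection
    return np.array(frames)
```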

179 citations


Journal ArticleDOI
TL;DR: The main contribution of this paper is the design of a distributed randomized algorithm for computing separable functions, whose running time is shown to depend on the running time of a minimum computation algorithm used as a subroutine.
Abstract: The problem of computing functions of values at the nodes in a network in a fully distributed manner, where nodes do not have unique identities and make decisions based only on local information, has applications in sensor, peer-to-peer, and ad hoc networks. The task of computing separable functions, which can be written as linear combinations of functions of individual variables, is studied in this context. Known iterative algorithms for averaging can be used to compute the normalized values of such functions, but these algorithms do not extend, in general, to the computation of the actual values of separable functions. The main contribution of this paper is the design of a distributed randomized algorithm for computing separable functions. The running time of the algorithm is shown to depend on the running time of a minimum computation algorithm used as a subroutine. Using a randomized gossip mechanism for minimum computation as the subroutine yields a complete fully distributed algorithm for computing separable functions. For a class of graphs with small spectral gap, such as grid graphs, the time used by the algorithm to compute averages is of a smaller order than the time required by a known iterative averaging scheme.
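One standard way to reduce a sum to a minimum computation, in the spirit of the reduction described above (our reconstruction, with the gossip subroutine replaced by an exact minimum for clarity):

```python
import random

def estimate_sum(values, r=500, seed=0):
    """Estimate sum(values) using only MIN computations.

    Node i draws r independent Exp(values[i]) samples; the coordinate-wise
    minimum over nodes is Exp(sum(values)), so r divided by the sum of the r
    minima is a consistent estimator of the sum.  The exact minimum here
    stands in for the randomized gossip subroutine.
    """
    rng = random.Random(seed)
    minima = [min(rng.expovariate(v) for v in values) for _ in range(r)]
    return r / sum(minima)

print(estimate_sum([3.0, 1.0, 4.0, 1.5]))   # close to 9.5 for large r
```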

165 citations


Journal ArticleDOI
TL;DR: A mathematical model of triangle-mesh-modeled three-dimensional (3D) surface objects for digital holography is developed, and reconstruction of computer-generated holograms synthesized using the developed model is demonstrated experimentally.
Abstract: We develop a mathematical model of triangle-mesh-modeled three-dimensional (3D) surface objects for digital holography. The proposed mathematical model includes the analytic angular spectrum representation of image light fields emitted from 3D surface objects with occlusion and the computation method for the developed light field representation. Reconstruction of computer-generated holograms synthesized by using the developed model is demonstrated experimentally.
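For context, the plane-wave (angular spectrum) decomposition referred to above is, in its generic scalar form, a pair of FFTs and a transfer function; the sketch below is that generic form, not the paper's triangle-mesh-specific model:

```python
import numpy as np

def angular_spectrum_propagate(u0, wavelength, dx, z):
    """Propagate a sampled scalar field u0 (N x N, pitch dx) a distance z with
    the angular spectrum method: FFT to plane waves, multiply by a transfer
    function, inverse FFT.  Generic scalar form, for illustration only.
    """
    n = u0.shape[0]
    k = 2.0 * np.pi / wavelength
    fx = np.fft.fftfreq(n, d=dx)                       # spatial frequencies (cycles/length)
    fxx, fyy = np.meshgrid(fx, fx, indexing="ij")
    kz_sq = k**2 - (2 * np.pi * fxx) ** 2 - (2 * np.pi * fyy) ** 2
    kz = np.sqrt(np.maximum(kz_sq, 0.0))
    transfer = np.exp(1j * kz * z) * (kz_sq > 0)       # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(u0) * transfer)
```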

159 citations



Journal ArticleDOI
14 Oct 2008-Chaos
TL;DR: In this article, the authors use complexity-entropy diagrams to analyze intrinsic computation in a broad array of deterministic nonlinear and linear stochastic processes, including maps of the interval, cellular automata, and Ising spin systems in one and two dimensions.
Abstract: Intrinsic computation refers to how dynamical systems store, structure, and transform historical and spatial information. By graphing a measure of structural complexity against a measure of randomness, complexity-entropy diagrams display the different kinds of intrinsic computation across an entire class of systems. Here, we use complexity-entropy diagrams to analyze intrinsic computation in a broad array of deterministic nonlinear and linear stochastic processes, including maps of the interval, cellular automata, and Ising spin systems in one and two dimensions, Markov chains, and probabilistic minimal finite-state machines. Since complexity-entropy diagrams are a function only of observed configurations, they can be used to compare systems without reference to system coordinates or parameters. It has been known for some time that in special cases complexity-entropy diagrams reveal that high degrees of information processing are associated with phase transitions in the underlying process space, the so-called “edge of chaos.” Generally, though, complexity-entropy diagrams differ substantially in character, demonstrating a genuine diversity of distinct kinds of intrinsic computation.
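For a stationary Markov chain whose states are taken as the causal states, the two axes of such a diagram reduce to familiar closed forms (stated for orientation, in our notation):

```latex
% Randomness axis: entropy rate of a stationary Markov chain with transition
% matrix T and stationary distribution \pi.
h_\mu = -\sum_i \pi_i \sum_j T_{ij} \log_2 T_{ij}

% Structure axis: statistical complexity, which for a chain whose states are
% its causal states is the Shannon entropy of the stationary distribution.
C_\mu = -\sum_i \pi_i \log_2 \pi_i
```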

Journal ArticleDOI
TL;DR: This paper describes a new algorithm with neuron-by-neuron computation methods for the gradient vector and the Jacobian matrix; it can handle networks with arbitrarily connected neurons, which can be more efficient than commonly used multilayer perceptron networks.
Abstract: This paper describes a new algorithm with neuron-by-neuron computation methods for the gradient vector and the Jacobian matrix. The algorithm can handle networks with arbitrarily connected neurons. The training speed is comparable with the Levenberg-Marquardt algorithm, which is currently considered by many as the fastest algorithm for neural network training. More importantly, it is shown that the computation of the Jacobian, which is required for second-order algorithms, has a computational complexity similar to that of the gradient for first-order learning methods. This new algorithm is implemented in the newly developed software, Neural Network Trainer, which has unique capabilities of handling arbitrarily connected networks. These networks with connections across layers can be more efficient than commonly used multilayer perceptron networks.

Journal ArticleDOI
TL;DR: It is proved that the Euclidean traveling salesman problem lies in the counting hierarchy, and it is conjectured that using transcendental constants provides no additional power, beyond nonuniform reductions to PosSLP, and some preliminary results supporting this conjecture are presented.
Abstract: We study two quite different approaches to understanding the complexity of fundamental problems in numerical analysis: (a) the Blum-Shub-Smale model of computation over the reals; and (b) a problem we call the “generic task of numerical computation,” which captures an aspect of doing numerical computation in floating point, similar to the “long exponent model” that has been studied in the numerical computing community. We show that both of these approaches hinge on the question of understanding the complexity of the following problem, which we call PosSLP: Given a division-free straight-line program producing an integer $N$, decide whether $N>0$. In the Blum-Shub-Smale model, polynomial-time computation over the reals (on discrete inputs) is polynomial-time equivalent to PosSLP when there are only algebraic constants. We conjecture that using transcendental constants provides no additional power, beyond nonuniform reductions to PosSLP, and we present some preliminary results supporting this conjecture. The generic task of numerical computation is also polynomial-time equivalent to PosSLP. We prove that PosSLP lies in the counting hierarchy. Combining this with work of Tiwari, we obtain that the Euclidean traveling salesman problem lies in the counting hierarchy—the previous best upper bound for this important problem (in terms of classical complexity classes) being PSPACE. In the course of developing the context for our results on arithmetic circuits, we present some new observations on the complexity of the arithmetic circuit identity testing (ACIT) problem. In particular, we show that if $n!$ is not ultimately easy, then ACIT has subexponential complexity.
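A division-free straight-line program is just a sequence of +, -, × instructions over previously computed registers starting from the constant 1, and PosSLP asks whether the final value is positive. A toy evaluator for illustration (the point of the paper is that the value can have exponentially many bits in the program length, so direct evaluation is infeasible in general):

```python
def eval_slp(program):
    """Evaluate a division-free straight-line program over the integers.

    An instruction is the constant "1" or a tuple (op, i, j) combining earlier
    registers with op in {+, -, *}.  PosSLP asks whether the last register is
    positive.
    """
    ops = {"+": lambda a, b: a + b, "-": lambda a, b: a - b, "*": lambda a, b: a * b}
    regs = []
    for inst in program:
        regs.append(1 if inst == "1" else ops[inst[0]](regs[inst[1]], regs[inst[2]]))
    return regs[-1]

# x0 = 1, x1 = x0 + x0, then repeated squaring: the final value is 2**(2**5).
prog = ["1", ("+", 0, 0)] + [("*", i, i) for i in range(1, 6)]
n = eval_slp(prog)
print(n > 0, n.bit_length())   # the PosSLP question for this program, and 33
```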

Journal ArticleDOI
TL;DR: In this paper, the hourly temperature computation can be seen as a convolution in the time domain that is most efficiently evaluated by the fast Fourier transform (FFT), and an additional substantial reduction in computing time is obtained by subsampling the analytical function at a few selected times chosen according to a geometric sequence and then using a good-quality interpolant such as the cubic spline.
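A minimal sketch of the FFT-based evaluation this refers to, assuming hourly forcing samples and a problem-specific response kernel (the kernel below is a hypothetical stand-in, and the geometric-subsampling refinement is not reproduced):

```python
import numpy as np

def hourly_response(forcing, kernel):
    """Linear convolution of an hourly forcing series with a response kernel,
    evaluated via the FFT with zero padding.
    """
    n = len(forcing) + len(kernel) - 1
    nfft = 1 << (n - 1).bit_length()            # next power of two
    spectrum = np.fft.rfft(forcing, nfft) * np.fft.rfft(kernel, nfft)
    return np.fft.irfft(spectrum, nfft)[:n]

forcing = np.random.default_rng(0).standard_normal(8760)   # one year of hourly data
kernel = np.exp(-np.arange(1000) / 100.0)                   # hypothetical decaying response
print(hourly_response(forcing, kernel).shape)               # (9759,)
```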

Patent
Eric Williamson1
28 Aug 2008
TL;DR: In this paper, the authors propose a promotion engine to identify a set of computation resources located in a cloud or other network and transmit the data request and subject data to the set of resources, which afford greater computation speed than the local machine hosting the requesting application.
Abstract: Embodiments relate to systems and methods for the promotion of calculations to cloud-based computation resources. One or more applications, such as spreadsheet applications, can prepare the calculation of a relatively large-scale computation, such as running statistical reports on large (e.g., greater than 1000 by 1000 cell) spreadsheets or other data objects. If the pending calculation is determined to exceed a computation threshold, for instance in computation intensity or data size, a computation request can be sent to a promotion engine. The promotion engine can identify a set of computation resources located in a cloud or other network and transmit the data request and subject data to the set of computation resources, which afford greater computation speed than the local machine hosting the requesting application. A set of results is returned from the cloud to the requesting application, providing greater bandwidth and faster calculation times for the user.

Journal ArticleDOI
TL;DR: The results include unprecedented Direct Numerical Simulations of the onset and the evolution of multiple wavelength instabilities induced by ambient noise in aircraft vortex wakes at Re = 6000.

Patent
16 Sep 2008
TL;DR: In this article, a unit operator cell is constituted of a plurality of SOI transistors, write data are stored in the body areas SNA, SNB of at least two of the SOI transistors, and the storage SOI transistors NQ1, NQ2 are coupled in series or independently to a reading port RPRTB or RPRTA.
Abstract: PROBLEM TO BE SOLVED: To provide a semiconductor signal processor capable of quickly carrying out logic computation processing and arithmetic computation processing at low electric power consumption in a small occupied area. SOLUTION: A unit operator cell is constituted of a plurality of SOI transistors, write data are stored in the body areas SNA, SNB of at least two of the SOI transistors, and the storage SOI transistors NQ1, NQ2 are coupled in series or independently to a reading port RPRTB or RPRTA. An AND computation result or a NOT computation result of the stored data in the unit operator cell can be obtained in this manner, and the computation processing can be carried out only by writing and reading the data.

Journal ArticleDOI
TL;DR: This paper proposes the kd-tree data structure, coupled with the mailbox technique, to accelerate ray tracing; the kd-tree is highly effective in handling the irregular distribution of patches of the target, while the repeated intersection tests between the ray and the patch that arise when using space-division acceleration structures can be eliminated through the mailbox technique.
Abstract: Ray tracing is of great use for computational electromagnetics, such as the well-known shooting and bouncing ray (SBR) method. In this paper, the kd-tree data structure, coupled with the mailbox technique, is proposed to accelerate the ray tracing in the SBR. The kd-tree is highly effective in handling the irregular distribution of patches of the target, while the repeated intersection tests between the ray and the patch that arise when using space-division acceleration structures can be eliminated through the mailbox technique. Numerical results show excellent agreement with the measured data and the exact solution, and demonstrate that the kd-tree as well as the mailbox technique can greatly reduce the computation time.
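The mailbox idea itself is simple: a patch that straddles several kd-tree leaves is tagged with the id of the last ray that tested it, so the same ray never repeats the test. A generic sketch (not the paper's SBR implementation):

```python
def intersect_leaf_with_mailbox(ray_id, leaf_patches, mailbox, intersect):
    """Test one ray against the patches of a kd-tree leaf, skipping patches
    this ray has already been tested against in another leaf.

    `mailbox` maps patch id -> last ray id tested; `intersect` is the actual
    ray/patch intersection test and returns None on a miss.
    """
    hits = []
    for patch in leaf_patches:
        if mailbox.get(id(patch)) == ray_id:
            continue                          # already tested in a previous leaf
        mailbox[id(patch)] = ray_id
        hit = intersect(patch)
        if hit is not None:
            hits.append(hit)
    return hits
```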

Journal ArticleDOI
TL;DR: This paper develops a characterization of approximate simulation relations which can be used for hybrid systems approximation, and which leads to effective algorithms for the computation of approximate simulation relations.
Abstract: Approximate simulation relations have recently been introduced as a powerful tool for the approximation of discrete and continuous systems. In this paper, we extend this abstraction framework to hybrid systems. Using the notion of simulation functions, we develop a characterization of approximate simulation relations which can be used for hybrid systems approximation. For several classes of hybrid systems, this characterization leads to effective algorithms for the computation of approximate simulation relations. An application in the context of reachability analysis is shown.

01 Oct 2008
TL;DR: The IS-IS protocol is extended by specifying new information that an Intermediate System (router) can place in Link State Protocol Data Units (LSP) to support Traffic Engineering (TE).
Abstract: This document describes extensions to the Intermediate System to Intermediate System (IS-IS) protocol to support Traffic Engineering (TE). This document extends the IS-IS protocol by specifying new information that an Intermediate System (router) can place in Link State Protocol Data Units (LSP). This information describes additional details regarding the state of the network that are useful for traffic engineering computations. [STANDARDS-TRACK]

Journal ArticleDOI
TL;DR: The method is applied to the case of adiabatic Grover search and it is shown that performance better than classical is possible with a super-Ohmic environment, with no a priori knowledge of the energy spectrum.
Abstract: We study the effect of a thermal environment on adiabatic quantum computation using the Bloch-Redfield formalism. We show that in certain cases the environment can enhance the performance in two different ways: (i) by introducing a time scale for thermal mixing near the anticrossing that is smaller than the adiabatic time scale, and (ii) by relaxation after the anticrossing. The former can enhance the scaling of computation when the environment is super-Ohmic, while the latter can only provide a prefactor enhancement. We apply our method to the case of adiabatic Grover search and show that performance better than classical is possible with a super-Ohmic environment, with no a priori knowledge of the energy spectrum.

01 Jan 2008
TL;DR: A recent extension of exact computation, the so-called “soft exact approach,” has been proposed to ensure robustness in this setting, and general methods for treating degenerate inputs are described.
Abstract: Nonrobustness refers to qualitative or catastrophic failures in geometric algorithms arising from numerical errors. Section 45.1 provides background on these problems. Although nonrobustness is already an issue in “purely numerical” computation, the problem is compounded in “geometric computation.” In Section 45.2 we characterize such computations. Researchers trying to create robust geometric software have tried two approaches: making fixed-precision computation robust (Section 45.3), and making the exact approach viable (Section 45.4). Another source of nonrobustness is the phenomenon of degenerate inputs. General methods for treating degenerate inputs are described in Section 45.5. For some problems the exact approach may be expensive or infeasible. To ensure robustness in this setting, a recent extension of exact computation, the so-called “soft exact approach,” has been proposed. This is described in Section 45.6.

Journal ArticleDOI
TL;DR: A steady Darcy–Forchheimer flow in a bounded region is solved by means of piecewise constant velocities and nonconforming piecewise $\mathbb{P}_1$ pressures; the nonlinearity is handled by an alternating-directions algorithm, and a priori error estimates of the scheme and convergence of the alternating-directions algorithm are proved.
Abstract: We solve a steady Darcy–Forchheimer flow in a bounded region by means of piecewise constant velocities and nonconforming piecewise $\mathbb{P}_1$ pressures. For the computation, we solve the nonlinearity by an alternating-directions algorithm, and we decouple the computation of the velocity from that of the pressure by a gradient algorithm. We prove a priori error estimates of the scheme and convergence of the alternating-directions algorithm.
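For orientation, the Darcy–Forchheimer model commonly takes the following form (coefficient normalizations and boundary data vary in the literature; this is not quoted from the paper). The |u|u term is the nonlinearity handled by the alternating-directions algorithm:

```latex
% A common statement of the Darcy-Forchheimer problem in a bounded region
% \Omega, together with a condition on the normal velocity on the boundary:
\mu\,K^{-1}\mathbf{u} + \beta\,|\mathbf{u}|\,\mathbf{u} + \nabla p = \mathbf{f}
  \quad \text{in } \Omega,
\qquad
\nabla\cdot\mathbf{u} = b \quad \text{in } \Omega
```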

Journal ArticleDOI
TL;DR: Although no model is yet firmly established, evidence suggests that computing pattern velocity from local-velocity estimates involves simple operations in the spatiotemporal frequency domain.
Abstract: Computational neuroscience combines theory and experiment to shed light on the principles and mechanisms of neural computation. This approach has been highly fruitful in the ongoing effort to understand velocity computation by the primate visual system. This Review describes the success of spatiotemporal-energy models in representing local-velocity detection. It shows why local-velocity measurements tend to differ from the velocity of the object as a whole. Certain cells in the middle temporal area are thought to solve this problem by combining local-velocity estimates to compute the overall pattern velocity. The Review discusses different models for how this might occur and experiments that test these models. Although no model is yet firmly established, evidence suggests that computing pattern velocity from local-velocity estimates involves simple operations in the spatiotemporal frequency domain.

Posted Content
TL;DR: It is shown how each Clifford circuit can be reduced to an equivalent, manifestly simulatable circuit (normal form), which provides a simple proof of the Gottesman-Knill theorem without resorting to stabilizer techniques.
Abstract: We study classical simulation of quantum computation, taking the Gottesman-Knill theorem as a starting point. We show how each Clifford circuit can be reduced to an equivalent, manifestly simulatable circuit (normal form). This provides a simple proof of the Gottesman-Knill theorem without resorting to stabilizer techniques. The normal form highlights why Clifford circuits have such limited computational power in spite of their high entangling power. At the same time, the normal form shows how the classical simulation of Clifford circuits fits into the standard way of embedding classical computation into the quantum circuit model. This leads to simple extensions of Clifford circuits which are classically simulatable. These circuits can be efficiently simulated by classical sampling ('weak simulation') even though the problem of exactly computing the outcomes of measurements for these circuits ('strong simulation') is proved to be #P-complete--thus showing that there is a separation between weak and strong classical simulation of quantum computation.

Journal ArticleDOI
TL;DR: It is envisioned that molecular computers that operate in a biological environment can be the basis of “smart drugs”, which are potent drugs that activate only if certain environmental conditions hold, and the research direction that set this vision and attempts to realize it are reviewed.

Journal ArticleDOI
TL;DR: This work describes an algorithm for surface integration that approximates a series of time lines using iterative refinement and computes a skeleton of the integral surface, which allows a highly accurate treatment of very large time-varying vector fields in an efficient, streaming fashion.
Abstract: We present a novel approach for the direct computation of integral surfaces in time-dependent vector fields. As opposed to previous work, which we analyze in detail, our approach is based on a separation of integral surface computation into two stages: surface approximation and generation of a graphical representation. This allows us to overcome several limitations of existing techniques. We first describe an algorithm for surface integration that approximates a series of time lines using iterative refinement and computes a skeleton of the integral surface. In a second step, we generate a well-conditioned triangulation. Our approach allows a highly accurate treatment of very large time-varying vector fields in an efficient, streaming fashion. We examine the properties of the presented methods on several example datasets and perform a numerical study of its correctness and accuracy. Finally, we investigate some visualization aspects of integral surfaces.

Journal ArticleDOI
TL;DR: A scalable parallel algorithm to perform multimillion-atom molecular dynamics simulations, in which first-principles-based reactive force fields (ReaxFF) describe chemical reactions, is implemented on parallel computers based on a spatial decomposition scheme combined with distributed n-tuple data structures.