
Showing papers on "Computation" published in 2008



Journal ArticleDOI
TL;DR: This work shows a design of a chemical computer that achieves fast and reliable Turing-universal computation using molecular counts, and demonstrates that molecular counts can be a useful form of information for small molecular systems such as those operating within cellular environments.
Abstract: A highly desired part of the synthetic biology toolbox is an embedded chemical microcontroller, capable of autonomously following a logic program specified by a set of instructions, and interacting with its cellular environment. Strategies for incorporating logic in aqueous chemistry have focused primarily on implementing components, such as logic gates, that are composed into larger circuits, with each logic gate in the circuit corresponding to one or more molecular species. With this paradigm, designing and producing new molecular species is necessary to perform larger computations. An alternative approach begins by noticing that chemical systems on the small scale are fundamentally discrete and stochastic. In particular, the exact molecular counts of each molecular species present are an intrinsically available form of information. This might appear to be a very weak form of information, perhaps quite difficult for computations to utilize. Indeed, it has been shown that error-free Turing-universal computation is impossible in this setting. Nevertheless, we show a design of a chemical computer that achieves fast and reliable Turing-universal computation using molecular counts. Our scheme uses only a small number of different molecular species to do computation of arbitrary complexity. The total probability of error of the computation can be made arbitrarily small (but not zero) by adjusting the initial molecular counts of certain species. While physical implementations would be difficult, these results demonstrate that molecular counts can be a useful form of information for small molecular systems such as those operating within cellular environments.

287 citations
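
To make the notion of computing with discrete molecular counts concrete, here is a minimal stochastic-simulation sketch, assuming a toy two-reaction network with invented species and rate constants rather than the paper's actual construction; it uses Gillespie's direct method, the standard way to simulate exact molecular counts.

```python
import random

# Toy stochastic chemical reaction network, simulated with Gillespie's
# direct method.  Species and rates are illustrative, not from the paper.
# Reactions: X + Y -> 2Y (rate k1), Y -> X (rate k2)
def propensities(state, k1=0.005, k2=0.1):
    x, y = state
    return [k1 * x * y, k2 * y]

def gillespie(state, t_end=100.0):
    t = 0.0
    trajectory = [(t, tuple(state))]
    while t < t_end:
        a = propensities(state)
        a_total = sum(a)
        if a_total == 0:
            break                          # no reaction can fire
        t += random.expovariate(a_total)   # time to next reaction
        r = random.uniform(0, a_total)     # pick which reaction fires
        if r < a[0]:
            state[0] -= 1; state[1] += 1   # X + Y -> 2Y
        else:
            state[0] += 1; state[1] -= 1   # Y -> X
        trajectory.append((t, tuple(state)))
    return trajectory

print(gillespie([200, 10])[-1])            # final time and molecular counts
```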


Journal ArticleDOI
TL;DR: This work presents a simple decentralized algorithm for computing the top k eigenvectors of a symmetric weighted adjacency matrix, and a proof that it converges essentially in $O(\tau_{\mathrm{mix}} \log^2 n)$ rounds of communication and computation, where $\tau_{\mathrm{mix}}$ is the mixing time of a random walk on the network.

218 citations
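
The TL;DR gives no further algorithmic detail, so the sketch below is only an assumed illustration of the underlying principle, not the authors' method: when each node repeatedly replaces its value by a weighted sum over its neighbors and the vector is renormalized, the node values converge to the top eigenvector of the adjacency matrix; extending this to the top k eigenvectors requires a decentralized orthogonalization step, as in the paper.

```python
import numpy as np

def local_power_iteration(A, rounds=200, seed=0):
    """Top eigenvector of a symmetric weighted adjacency matrix A.

    Each entry v[i] is updated using only the values held by node i's
    neighbors (one row of A), mimicking one round of communication.
    This is the k = 1 special case of decentralized spectral computation.
    """
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    v = rng.standard_normal(n)
    for _ in range(rounds):
        v = np.array([A[i] @ v for i in range(n)])  # neighbor sums only
        v /= np.linalg.norm(v)                      # normalization
    return v

# Tiny symmetric weighted graph, purely illustrative.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)
print(local_power_iteration(A))
```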


Journal ArticleDOI
TL;DR: A new algorithm and easily extensible framework for computing MS complexes for large scale data of any dimension where scalar values are given at the vertices of a closure-finite and weak topology (CW) complex, therefore enabling computation on a wide variety of meshes such as regular grids, simplicial meshes, and adaptive multiresolution (AMR) meshes is described.
Abstract: The Morse-Smale (MS) complex has proven to be a useful tool in extracting and visualizing features from scalar-valued data. However, efficient computation of the MS complex for large scale data remains a challenging problem. We describe a new algorithm and easily extensible framework for computing MS complexes for large scale data of any dimension where scalar values are given at the vertices of a closure-finite and weak topology (CW) complex, therefore enabling computation on a wide variety of meshes such as regular grids, simplicial meshes, and adaptive multiresolution (AMR) meshes. A new divide-and-conquer strategy allows for memory-efficient computation of the MS complex and simplification on-the-fly to control the size of the output. In addition to being able to handle various data formats, the framework supports implementation-specific optimizations, for example, for regular data. We present the complete characterization of critical point cancellations in all dimensions. This technique enables the topology based analysis of large data on off-the-shelf computers. In particular we demonstrate the first full computation of the MS complex for a one billion ($1024^3$) node grid on a laptop computer with 2 GB of memory.

201 citations
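
As a hedged, much-simplified illustration of the kind of local test such frameworks start from (not the paper's CW-complex algorithm), the snippet below classifies critical points of a scalar field on a 2D regular grid by counting sign changes of neighbor differences around each interior vertex; the test field and all names are invented for the example.

```python
import numpy as np

def classify_critical_points(f):
    """Classify interior vertices of a 2D scalar grid f.

    Counts sign changes of f[neighbor] - f[center] around the 8-neighbor
    ring: 0 changes -> extremum (min or max), 2 -> regular, >=4 -> saddle.
    A simplified illustration only; ties (equal values) are not handled.
    """
    ring = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]      # neighbors in cyclic order
    labels = {}
    for i in range(1, f.shape[0] - 1):
        for j in range(1, f.shape[1] - 1):
            signs = [np.sign(f[i + di, j + dj] - f[i, j]) for di, dj in ring]
            changes = sum(signs[k] != signs[(k + 1) % 8] for k in range(8))
            if changes == 0:
                labels[(i, j)] = "max" if signs[0] < 0 else "min"
            elif changes >= 4:
                labels[(i, j)] = "saddle"
    return labels

x, y = np.meshgrid(np.linspace(-2, 2, 30), np.linspace(-2, 2, 30))
print(sorted(set(classify_critical_points(np.sin(3 * x) * np.cos(3 * y)).values())))
```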


Journal ArticleDOI
TL;DR: This work presents a novel simple and efficient method for accurate and stable computation of RMF of a curve in 3D, which uses two reflections to compute each frame from its preceding one to yield a sequence of frames to approximate an exact RMF.
Abstract: Due to its minimal twist, the rotation minimizing frame (RMF) is widely used in computer graphics, including sweep or blending surface modeling, motion design and control in computer animation and robotics, streamline visualization, and tool path planning in CAD/CAM. We present a novel, simple, and efficient method for accurate and stable computation of the RMF of a curve in 3D. This method, called the double reflection method, uses two reflections to compute each frame from its preceding one, yielding a sequence of frames that approximates an exact RMF. The double reflection method has fourth-order global approximation error, so it is much more accurate than the two currently prevailing methods, which have second-order approximation error: the projection method of Klok and the rotation method of Bloomenthal. All three methods have nearly the same per-frame computational cost. Furthermore, the double reflection method is much simpler and faster than using the standard fourth-order Runge-Kutta method to integrate the defining ODE of the RMF, though they have the same accuracy. We also investigate further properties and extensions of the double reflection method, and discuss variational principles in the design of moving frames with boundary conditions, based on the RMF.

179 citations
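
Because the abstract states the two-reflection update explicitly, a compact sketch is possible; the sampled helix, the variable names, and the omission of degenerate cases (e.g., consecutive tangents reflecting onto each other) are my own choices for this illustration.

```python
import numpy as np

def double_reflection_rmf(points, tangents, r0):
    """Propagate a rotation minimizing frame along sampled curve points.

    points[i], tangents[i] : 3-vectors (tangents assumed unit length)
    r0 : initial reference (normal) vector, orthogonal to tangents[0]
    Returns the reference vectors r_i; s_i = t_i x r_i completes each frame.
    """
    r = [np.asarray(r0, dtype=float)]
    for i in range(len(points) - 1):
        v1 = points[i + 1] - points[i]
        c1 = v1 @ v1
        rL = r[i] - (2.0 / c1) * (v1 @ r[i]) * v1            # first reflection
        tL = tangents[i] - (2.0 / c1) * (v1 @ tangents[i]) * v1
        v2 = tangents[i + 1] - tL
        c2 = v2 @ v2
        r.append(rL - (2.0 / c2) * (v2 @ rL) * v2)           # second reflection
    return r

# Illustrative helix sample
s = np.linspace(0, 4 * np.pi, 100)
pts = np.stack([np.cos(s), np.sin(s), 0.3 * s], axis=1)
tans = np.stack([-np.sin(s), np.cos(s), 0.3 * np.ones_like(s)], axis=1)
tans /= np.linalg.norm(tans, axis=1, keepdims=True)
frames = double_reflection_rmf(pts, tans, np.array([1.0, 0.0, 0.0]))
print(np.dot(frames[-1], tans[-1]))   # stays ~0: frame remains orthogonal to the tangent
```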


Journal ArticleDOI
TL;DR: The main contribution of this paper is the design of a distributed randomized algorithm for computing separable functions that is shown to depend on the running time of a minimum computation algorithm used as a subroutine.
Abstract: The problem of computing functions of values at the nodes in a network in a fully distributed manner, where nodes do not have unique identities and make decisions based only on local information, has applications in sensor, peer-to-peer, and ad hoc networks. The task of computing separable functions, which can be written as linear combinations of functions of individual variables, is studied in this context. Known iterative algorithms for averaging can be used to compute the normalized values of such functions, but these algorithms do not extend, in general, to the computation of the actual values of separable functions. The main contribution of this paper is the design of a distributed randomized algorithm for computing separable functions. The running time of the algorithm is shown to depend on the running time of a minimum computation algorithm used as a subroutine. Using a randomized gossip mechanism for minimum computation as the subroutine yields a complete fully distributed algorithm for computing separable functions. For a class of graphs with small spectral gap, such as grid graphs, the time used by the algorithm to compute averages is of a smaller order than the time required by a known iterative averaging scheme.

165 citations
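
A hedged sketch of the core idea behind separable-function (here, sum) computation via minimum computation: each node draws exponential samples with rate equal to its value, element-wise minima over nodes are exponential with the sum as rate, and the sum is recovered from those minima. The gossip mechanism for spreading the minima is replaced below by a direct reduction, and all constants and names are illustrative.

```python
import random

def estimate_sum(values, r=2000, seed=1):
    """Estimate sum(values) from element-wise minima of exponential draws.

    Node i draws r samples from Exp(values[i]); the minimum over nodes of
    the l-th sample is Exp(sum(values)), so (r - 1) / sum_of_minima is an
    approximately unbiased estimate of the sum.  In the paper the minima
    are computed by randomized gossip; here we simply reduce directly.
    """
    random.seed(seed)
    draws = [[random.expovariate(v) for _ in range(r)] for v in values]
    minima = [min(col) for col in zip(*draws)]
    return (r - 1) / sum(minima)

values = [3.0, 1.5, 2.5, 4.0, 1.0]        # one positive value per node
print(estimate_sum(values), sum(values))  # estimate vs. true sum 12.0
```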


Journal ArticleDOI
TL;DR: A mathematical model of triangle-mesh-modeled three-dimensional (3D) surface objects for digital holography is developed and Reconstruction of computer-generated holograms synthesized by using the developed model is demonstrated experimentally.
Abstract: We develop a mathematical model of triangle-mesh-modeled three-dimensional (3D) surface objects for digital holography. The proposed mathematical model includes the analytic angular spectrum representation of image light fields emitted from 3D surface objects with occlusion and the computation method for the developed light field representation. Reconstruction of computer-generated holograms synthesized by using the developed model is demonstrated experimentally.

159 citations


Journal ArticleDOI
14 Oct 2008-Chaos
TL;DR: In this article, the authors use complexity-entropy diagrams to analyze intrinsic computation in a broad array of deterministic nonlinear and linear stochastic processes, including maps of the interval, cellular automata, and Ising spin systems in one and two dimensions.
Abstract: Intrinsic computation refers to how dynamical systems store, structure, and transform historical and spatial information. By graphing a measure of structural complexity against a measure of randomness, complexity-entropy diagrams display the different kinds of intrinsic computation across an entire class of systems. Here, we use complexity-entropy diagrams to analyze intrinsic computation in a broad array of deterministic nonlinear and linear stochastic processes, including maps of the interval, cellular automata, and Ising spin systems in one and two dimensions, Markov chains, and probabilistic minimal finite-state machines. Since complexity-entropy diagrams are a function only of observed configurations, they can be used to compare systems without reference to system coordinates or parameters. It has been known for some time that in special cases complexity-entropy diagrams reveal that high degrees of information processing are associated with phase transitions in the underlying process space, the so-called “edge of chaos.” Generally, though, complexity-entropy diagrams differ substantially in character, demonstrating a genuine diversity of distinct kinds of intrinsic computation.

150 citations
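
As a rough, hedged illustration of the two axes of such a diagram (using block-entropy estimates of entropy rate and excess entropy, not the complexity measure used in the paper), the snippet below contrasts an i.i.d. coin-flip sequence with a period-two sequence; block length and sequence length are arbitrary choices.

```python
from collections import Counter
from math import log2
import random

def block_entropy(seq, L):
    """Shannon entropy (bits) of length-L blocks of a symbol sequence."""
    counts = Counter(tuple(seq[i:i + L]) for i in range(len(seq) - L + 1))
    total = sum(counts.values())
    return -sum(c / total * log2(c / total) for c in counts.values())

def entropy_rate_and_complexity(seq, L=8):
    """Crude estimates: h ~ H(L) - H(L-1); E ~ H(L) - L*h (excess entropy)."""
    HL, HL1 = block_entropy(seq, L), block_entropy(seq, L - 1)
    h = HL - HL1
    return h, HL - L * h

random.seed(0)
coin = [random.randint(0, 1) for _ in range(200000)]   # maximal randomness
period2 = [i % 2 for i in range(200000)]                # zero randomness
print(entropy_rate_and_complexity(coin))     # ~ (1.0, ~0): random, structureless
print(entropy_rate_and_complexity(period2))  # ~ (0.0, ~1): ordered, 1 bit of memory
```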


Journal ArticleDOI
TL;DR: A new algorithm with neuron-by-neuron computation methods for the gradient vector and the Jacobian matrix that can handle networks with arbitrarily connected neurons, which can be more efficient than commonly used multilayer perceptron networks.
Abstract: This paper describes a new algorithm with neuron-by-neuron computation methods for the gradient vector and the Jacobian matrix. The algorithm can handle networks with arbitrarily connected neurons. The training speed is comparable with the Levenberg-Marquardt algorithm, which is currently considered by many to be the fastest algorithm for neural network training. More importantly, it is shown that the computation of the Jacobian, which is required for second-order algorithms, has a computational complexity similar to that of the gradient computation used by first-order learning methods. This new algorithm is implemented in the newly developed software, Neural Network Trainer, which has the unique capability of handling arbitrarily connected networks. These networks with connections across layers can be more efficient than commonly used multilayer perceptron networks.

148 citations


Journal ArticleDOI
TL;DR: It is proved that the Euclidean traveling salesman problem lies in the counting hierarchy, and it is conjecture that using transcendental constants provides no additional power, beyond nonuniform reductions to PosSLP, and some preliminary results supporting this conjecture are presented.
Abstract: We study two quite different approaches to understanding the complexity of fundamental problems in numerical analysis: (a) the Blum-Shub-Smale model of computation over the reals; and (b) a problem we call the “generic task of numerical computation,” which captures an aspect of doing numerical computation in floating point, similar to the “long exponent model” that has been studied in the numerical computing community. We show that both of these approaches hinge on the question of understanding the complexity of the following problem, which we call PosSLP: Given a division-free straight-line program producing an integer $N$, decide whether $N>0$. In the Blum-Shub-Smale model, polynomial-time computation over the reals (on discrete inputs) is polynomial-time equivalent to PosSLP when there are only algebraic constants. We conjecture that using transcendental constants provides no additional power, beyond nonuniform reductions to PosSLP, and we present some preliminary results supporting this conjecture. The generic task of numerical computation is also polynomial-time equivalent to PosSLP. We prove that PosSLP lies in the counting hierarchy. Combining this with work of Tiwari, we obtain that the Euclidean traveling salesman problem lies in the counting hierarchy—the previous best upper bound for this important problem (in terms of classical complexity classes) being PSPACE. In the course of developing the context for our results on arithmetic circuits, we present some new observations on the complexity of the arithmetic circuit identity testing (ACIT) problem. In particular, we show that if $n!$ is not ultimately easy, then ACIT has subexponential complexity.

147 citations
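
PosSLP is easy to state concretely; the toy sketch below (my own encoding, using exact Python integers rather than any feasible low-complexity method) evaluates a division-free straight-line program and reports the sign of its output, which is exactly the decision problem whose complexity the paper studies.

```python
def eval_slp(program):
    """Evaluate a division-free straight-line program over the integers.

    Each instruction is (op, i, j) with op in {'+', '-', '*'} referring to
    earlier results; register 0 holds the constant 1.  Returns the final value.
    Brute-force evaluation can produce numbers with exponentially many digits,
    which is precisely why deciding N > 0 cheaply (PosSLP) is nontrivial.
    """
    regs = [1]
    for op, i, j in program:
        a, b = regs[i], regs[j]
        regs.append(a + b if op == '+' else a - b if op == '-' else a * b)
    return regs[-1]

# (1+1)=2, 2*2=4, 4*4=16, 16-2=14  -> positive
prog = [('+', 0, 0), ('*', 1, 1), ('*', 2, 2), ('-', 3, 1)]
n = eval_slp(prog)
print(n, n > 0)   # PosSLP answer for this program: True
```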


Journal ArticleDOI
TL;DR: In this paper, the hourly temperature computation can be seen as a convolution in the time domain that is most efficiently evaluated by fast Fourier transform (FFT) and an additional substantial reduction in computing time is obtained by subsampling the analytical function at a few selected times according to a geometric sequence and then using a good quality interpolant such as the cubic spline.
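
As a hedged sketch of the two ideas in the TL;DR (with an invented forcing series and response function, and SciPy assumed available): a time-domain convolution evaluated via the FFT, and a smooth response sampled at geometrically spaced times and reconstructed with a cubic spline.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Invented hourly forcing and a decaying step response (stand-ins for the
# paper's analytical ground-temperature functions).
hours = np.arange(2048)
rng = np.random.default_rng(0)
forcing = np.sin(2 * np.pi * hours / 24) + 0.1 * rng.standard_normal(hours.size)
response = np.exp(-hours / 200.0)

# Time-domain convolution evaluated by FFT (zero-padded to avoid wrap-around).
n = 2 * hours.size
temp = np.fft.irfft(np.fft.rfft(forcing, n) * np.fft.rfft(response, n), n)[:hours.size]
assert np.allclose(temp, np.convolve(forcing, response)[:hours.size])

# Subsample the smooth response at geometrically spaced times, then rebuild
# intermediate hours with a cubic spline.
t_geo = np.unique(np.geomspace(1, hours.size - 1, 40).astype(int))
spline = CubicSpline(t_geo, response[t_geo])
print(np.max(np.abs(spline(hours[1:]) - response[1:])))   # interpolation error
```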

Patent
Eric Williamson
28 Aug 2008
TL;DR: In this paper, the authors propose a promotion engine to identify a set of computation resources located in a cloud or other network and transmit the data request and subject data to the set of resources, which afford greater computation speed than the local machine hosting the requesting application.
Abstract: Embodiments relate to systems and methods for the promotion of calculations to cloud-based computation resources. One or more applications, such as spreadsheet applications, can prepare the calculation of a relatively large-scale computation, such as running statistical reports on large (e.g., greater than 1000 by 1000 cell) spreadsheets or other data objects. If the pending calculation is determined to exceed a computation threshold, for instance in computational intensity or data size, a computation request can be sent to a promotion engine. The promotion engine can identify a set of computation resources located in a cloud or other network and transmit the data request and subject data to that set of computation resources, which afford greater computation speed than the local machine hosting the requesting application. A set of results is returned from the cloud to the requesting application, thereby providing greater throughput and faster calculation times for the user.

Patent
16 Sep 2008
TL;DR: In this article, a unit operator cell is constituted of a plurality of SOI transistors; write data are stored in the body regions SNA and SNB of at least two of the SOI transistors, and the storage SOI transistors NQ1 and NQ2 are coupled, in series or independently, to a read port RPRTB or RPRTA.
Abstract: PROBLEM TO BE SOLVED: To provide a semiconductor signal processor capable of quickly carrying out logic and arithmetic computation processing with low power consumption in a small occupied area. SOLUTION: A unit operator cell is constituted of a plurality of SOI transistors; write data are stored in the body regions SNA and SNB of at least two of the SOI transistors, and the storage SOI transistors NQ1 and NQ2 are coupled, in series or independently, to a read port RPRTB or RPRTA. In this manner, an AND or NOT computation result of the stored data can be obtained from the unit operator cell, and the computation processing is carried out simply by writing and reading the data.

Journal ArticleDOI
TL;DR: The method is applied to the case of adiabatic Grover search and it is shown that performance better than classical is possible with a super-Ohmic environment, with no a priori knowledge of the energy spectrum.
Abstract: We study the effect of a thermal environment on adiabatic quantum computation using the Bloch-Redfield formalism. We show that in certain cases the environment can enhance the performance in two different ways: (i) by introducing a time scale for thermal mixing near the anticrossing that is smaller than the adiabatic time scale, and (ii) by relaxation after the anticrossing. The former can enhance the scaling of computation when the environment is super-Ohmic, while the latter can only provide a prefactor enhancement. We apply our method to the case of adiabatic Grover search and show that performance better than classical is possible with a super-Ohmic environment, with no a priori knowledge of the energy spectrum.

01 Jan 2008
TL;DR: A recent extension of exact computation, the so-called “soft exact approach,” has been proposed to ensure robustness in this setting, and general methods for treating degenerate inputs are described.
Abstract: Nonrobustness refers to qualitative or catastrophic failures in geometric algorithms arising from numerical errors. Section 45.1 provides background on these problems. Although nonrobustness is already an issue in “purely numerical” computation, the problem is compounded in “geometric computation.” In Section 45.2 we characterize such computations. Researchers trying to create robust geometric software have tried two approaches: making fixed-precision computation robust (Section 45.3), and making the exact approach viable (Section 45.4). Another source of nonrobustness is the phenomenon of degenerate inputs. General methods for treating degenerate inputs are described in Section 45.5. For some problems the exact approach may be expensive or infeasible. To ensure robustness in this setting, a recent extension of exact computation, the so-called “soft exact approach,” has been proposed. This is described in Section 45.6.
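
The kind of failure described in Section 45.1 and the exact-computation remedy of Section 45.4 can both be shown in a few lines; the input points below are contrived by hand so that the two floating-point products tie and cancel (a minimal sketch of my own, not taken from the chapter).

```python
from fractions import Fraction

def orient2d(ax, ay, bx, by, cx, cy):
    """Sign of twice the signed area of ABC: >0 left turn, <0 right turn, 0 collinear."""
    return (bx - ax) * (cy - ay) - (by - ay) * (cx - ax)

# Points contrived so that the two products in the predicate round to the
# same double (a round-to-nearest tie), although they differ exactly by 1.
a = (0.0, 0.0)
b = (67108865.0, 1.0)                      # 2**26 + 1
c = (9007199456067584.0, 134217729.0)      # (2**26 + 1)*(2**27 + 1) - 1,  2**27 + 1

float_result = orient2d(*a, *b, *c)
exact_result = orient2d(*(Fraction(v) for v in a + b + c))
print(float_result, exact_result)   # 0.0 (reported collinear) vs 1 (a genuine left turn)
```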

Journal ArticleDOI
TL;DR: A steady Darcy–Forchheimer flow in a bounded region is solved by means of piecewise constant velocities and nonconforming piecewise $\mathbb{P}_1$ pressures by an alternating-directions algorithm, and a priori error estimates of the scheme and convergence of the alternating-directions algorithm are proved.
Abstract: We solve a steady Darcy–Forchheimer flow in a bounded region by means of piecewise constant velocities and nonconforming piecewise $\mathbb{P}_1$ pressures. For the computation, we solve the nonlinearity by an alternating-directions algorithm and we decouple the computation of the velocity from that of the pressure by a gradient algorithm. We prove a priori error estimates of the scheme and convergence of the alternating-directions algorithm.
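
The finite element scheme itself is beyond a short snippet, but the nonlinearity being alternated over is just the Darcy–Forchheimer law; as a hedged, purely scalar illustration (coefficients invented), the sketch below solves a*u + b*|u|*u = g by the kind of simple fixed-point iteration that such a solver repeats cell by cell.

```python
def forchheimer_velocity(g, a=1.0, b=0.5, tol=1e-12, max_iter=200):
    """Solve the scalar Darcy-Forchheimer relation a*u + b*|u|*u = g for u.

    Simple fixed-point (Picard) iteration u <- g / (a + b*|u|); a stands in
    for mu/K and b for the Forchheimer (inertial) coefficient.  A scalar
    illustration of the nonlinearity only, not the paper's mixed scheme.
    """
    u = g / a                       # Darcy (linear) initial guess
    for _ in range(max_iter):
        u_new = g / (a + b * abs(u))
        if abs(u_new - u) < tol:
            return u_new
        u = u_new
    return u

u = forchheimer_velocity(3.0)
print(u, 1.0 * u + 0.5 * abs(u) * u)   # residual check: second value should be ~3.0
```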

Journal ArticleDOI
TL;DR: Although no model is yet firmly established, evidence suggests that computing pattern velocity from local-velocity estimates involves simple operations in the spatiotemporal frequency domain.
Abstract: Computational neuroscience combines theory and experiment to shed light on the principles and mechanisms of neural computation. This approach has been highly fruitful in the ongoing effort to understand velocity computation by the primate visual system. This Review describes the success of spatiotemporal-energy models in representing local-velocity detection. It shows why local-velocity measurements tend to differ from the velocity of the object as a whole. Certain cells in the middle temporal area are thought to solve this problem by combining local-velocity estimates to compute the overall pattern velocity. The Review discusses different models for how this might occur and experiments that test these models. Although no model is yet firmly established, evidence suggests that computing pattern velocity from local-velocity estimates involves simple operations in the spatiotemporal frequency domain.

Posted Content
TL;DR: It is shown how each Clifford circuit can be reduced to an equivalent, manifestly simulatable circuit (normal form), which provides a simple proof of the Gottesman-Knill theorem without resorting to stabilizer techniques.
Abstract: We study classical simulation of quantum computation, taking the Gottesman-Knill theorem as a starting point. We show how each Clifford circuit can be reduced to an equivalent, manifestly simulatable circuit (normal form). This provides a simple proof of the Gottesman-Knill theorem without resorting to stabilizer techniques. The normal form highlights why Clifford circuits have such limited computational power in spite of their high entangling power. At the same time, the normal form shows how the classical simulation of Clifford circuits fits into the standard way of embedding classical computation into the quantum circuit model. This leads to simple extensions of Clifford circuits which are classically simulatable. These circuits can be efficiently simulated by classical sampling ('weak simulation') even though the problem of exactly computing the outcomes of measurements for these circuits ('strong simulation') is proved to be #P-complete--thus showing that there is a separation between weak and strong classical simulation of quantum computation.

Journal ArticleDOI
TL;DR: It is envisioned that molecular computers that operate in a biological environment can be the basis of "smart drugs": potent drugs that activate only if certain environmental conditions hold. The research direction that set this vision, and attempts to realize it, are reviewed.

Journal ArticleDOI
TL;DR: This work describes an algorithm for surface integration that approximates a series of time lines using iterative refinement and computes a skeleton of the integral surface, which allows a highly accurate treatment of very large time-varying vector fields in an efficient, streaming fashion.
Abstract: We present a novel approach for the direct computation of integral surfaces in time-dependent vector fields. As opposed to previous work, which we analyze in detail, our approach is based on a separation of integral surface computation into two stages: surface approximation and generation of a graphical representation. This allows us to overcome several limitations of existing techniques. We first describe an algorithm for surface integration that approximates a series of time lines using iterative refinement and computes a skeleton of the integral surface. In a second step, we generate a well-conditioned triangulation. Our approach allows a highly accurate treatment of very large time-varying vector fields in an efficient, streaming fashion. We examine the properties of the presented methods on several example datasets and perform a numerical study of its correctness and accuracy. Finally, we investigate some visualization aspects of integral surfaces.
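
A hedged, minimal sketch of the time-line idea (not the authors' full pipeline): advect a seed line through a time-dependent vector field with RK4 and insert new particles wherever neighbors drift too far apart, which stands in for the adaptive refinement step the abstract refers to. The vector field, step sizes, and refinement rule are invented for the example.

```python
import numpy as np

def velocity(p, t):
    """Illustrative unsteady 2D vector field (a rotating gyre)."""
    x, y = p
    return np.array([-y + 0.3 * np.sin(t), x + 0.3 * np.cos(t)])

def rk4_step(p, t, dt):
    k1 = velocity(p, t)
    k2 = velocity(p + 0.5 * dt * k1, t + 0.5 * dt)
    k3 = velocity(p + 0.5 * dt * k2, t + 0.5 * dt)
    k4 = velocity(p + dt * k3, t + dt)
    return p + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def advect_time_line(seeds, t0, t1, dt=0.05, max_gap=0.1):
    """Advect a line of particles; crudely refine where neighbors separate."""
    line = [np.asarray(s, dtype=float) for s in seeds]
    t = t0
    while t < t1:
        line = [rk4_step(p, t, dt) for p in line]
        refined = [line[0]]
        for p in line[1:]:
            if np.linalg.norm(p - refined[-1]) > max_gap:
                refined.append(0.5 * (p + refined[-1]))   # insert one midpoint
            refined.append(p)
        line = refined
        t += dt
    return line

seeds = [(x, 0.0) for x in np.linspace(0.0, 1.0, 10)]
print(len(advect_time_line(seeds, 0.0, 3.0)))   # particle count after refinement
```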

Journal ArticleDOI
TL;DR: A scalable parallel algorithm to perform multimillion-atom molecular dynamics simulations, in which first principles-based reactive force fields (ReaxFF) describe chemical reactions, implemented on parallel computers based on a spatial decomposition scheme combined with distributed n-tuple data structures.

Journal ArticleDOI
TL;DR: In this article, a Monte Carlo program for the simulation of ion beam analysis data is presented, which combines four main features: (i) ion slowdown is computed separately from the main scattering/recoil event, which is directed towards the detector.
Abstract: A Monte Carlo program for the simulation of ion beam analysis data is presented. It combines four main features: (i) ion slowdown is computed separately from the main scattering/recoil event, which is directed towards the detector. (ii) A virtual detector, that is, a detector larger than the actual one, can be used, followed by trajectory correction. (iii) For each collision during ion slowdown, scattering angle components are extracted from tables. (iv) Tables of scattering angle components, stopping power and energy straggling are indexed using the binary representation of floating point numbers, which allows logarithmic distribution of these tables without the computation of logarithms to access them. Tables are sufficiently fine-grained that interpolation is not necessary. Ion slowdown computation thus avoids trigonometric, inverse and transcendental function calls and, as much as possible, divisions. All these improvements make possible the computation of $10^7$ collisions/s on current PCs. Results for transmitted ions of several masses in various substrates are well comparable to those obtained using SRIM-2006 in terms of both angular and energy distributions, as long as a sufficiently large number of collisions is considered for each ion. Examples of simulated spectra show good agreement with experimental data, although a large detector rather than the virtual detector has to be used to properly simulate background signals that are due to plural collisions. The program, written in standard C, is open-source and distributed under the terms of the GNU General Public License.
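
The abstract's indexing trick, floating-point exponent bits standing in for a logarithm, can be illustrated directly; the table granularity and names below are invented, and this is only a sketch of the idea, not the program's C implementation.

```python
import struct

MANTISSA_BITS = 3   # extra bits of granularity per octave (illustrative)

def log_index(x):
    """Index into a logarithmically spaced table using the IEEE-754 bits of x.

    The 11-bit biased exponent orders positive values by floor(log2(x));
    appending the top mantissa bits subdivides each octave, so no logarithm
    is ever computed to find the table slot.
    """
    bits = struct.unpack('<Q', struct.pack('<d', x))[0]
    exponent = bits >> 52                                   # biased exponent
    mantissa_top = (bits >> (52 - MANTISSA_BITS)) & ((1 << MANTISSA_BITS) - 1)
    return (exponent << MANTISSA_BITS) | mantissa_top

# Indices grow monotonically with x and step uniformly per factor of 2,
# i.e. the table is logarithmically distributed.
for x in (1.0, 1.5, 2.0, 3.0, 1e3, 1e6):
    print(x, log_index(x))
```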

Journal ArticleDOI
TL;DR: In this article, a reduced-basis approach is proposed to speed up the computation of a large number of cell problems without any loss of precision, which is a framework very well suited for model reduction attempts.
Abstract: We consider the computation of averaged coefficients for the homogenization of elliptic partial differential equations. In this problem, like in many multiscale problems, a large number of similar computations parameterized by the macroscopic scale is required at the microscopic scale. This is a framework very well suited for model reduction attempts. The purpose of this work is to show how the reduced-basis approach allows one to speed up the computation of a large number of cell problems without any loss of precision. The essential components of this reduced-basis approach are the a posteriori error estimation, which provides sharp error bounds for the outputs of interest, and an approximation process divided into offline and online stages, which decouples the generation of the approximation space and its use for Galerkin projections.
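
A hedged, generic offline/online reduced-basis sketch (not the homogenization cell problems from the paper): offline, solve the full problem at training parameters and orthonormalize the snapshots; online, a new parameter only requires solving a tiny projected system, assuming the operator depends affinely on the parameter. Matrices and parameter ranges are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400
# Affine parameter dependence: A(mu) = A0 + mu * A1  (illustrative SPD matrices)
M = rng.standard_normal((n, n))
A0 = M @ M.T + n * np.eye(n)
A1 = np.diag(np.linspace(1.0, 2.0, n))
b = rng.standard_normal(n)

def solve_full(mu):
    return np.linalg.solve(A0 + mu * A1, b)

# ---- offline stage: snapshots at training parameters, orthonormal basis V
training_mus = np.linspace(0.1, 10.0, 8)
snapshots = np.stack([solve_full(mu) for mu in training_mus], axis=1)
V, _ = np.linalg.qr(snapshots)                 # n x 8 reduced basis

# Pre-project the affine terms once (independent of mu)
A0_r, A1_r, b_r = V.T @ A0 @ V, V.T @ A1 @ V, V.T @ b

# ---- online stage: for a new mu, solve only an 8 x 8 system
def solve_reduced(mu):
    return V @ np.linalg.solve(A0_r + mu * A1_r, b_r)

mu_test = 3.7
err = np.linalg.norm(solve_reduced(mu_test) - solve_full(mu_test)) / np.linalg.norm(solve_full(mu_test))
print(f"relative reduced-basis error at mu={mu_test}: {err:.2e}")
```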

Journal ArticleDOI
TL;DR: The notion of computation is extended to include deciding subsets of the natural numbers, and a system that decides SubsetSum, a well-known NP-complete problem, is presented.

Proceedings ArticleDOI
TL;DR: In this paper, the authors present a protocol which allows a client to have a server carry out a quantum computation for her such that the client's inputs, outputs and computation remain perfectly private, and where she does not require any quantum computational power or memory.
Abstract: We present a protocol which allows a client to have a server carry out a quantum computation for her such that the client's inputs, outputs and computation remain perfectly private, and where she does not require any quantum computational power or memory. The client only needs to be able to prepare single qubits randomly chosen from a finite set and send them to the server, who has the balance of the required quantum computational resources. Our protocol is interactive: after the initial preparation of quantum states, the client and server use two-way classical communication which enables the client to drive the computation, giving single-qubit measurement instructions to the server, depending on previous measurement outcomes. Our protocol works for inputs and outputs that are either classical or quantum. We give an authentication protocol that allows the client to detect an interfering server; our scheme can also be made fault-tolerant. We also generalize our result to the setting of a purely classical client who communicates classically with two non-communicating entangled servers, in order to perform a blind quantum computation. By incorporating the authentication protocol, we show that any problem in BQP has an entangled two-prover interactive proof with a purely classical verifier. Our protocol is the first universal scheme which detects a cheating server, as well as the first protocol which does not require any quantum computation whatsoever on the client's side. The novelty of our approach is in using the unique features of measurement-based quantum computing which allows us to clearly distinguish between the quantum and classical aspects of a quantum computation.

Journal ArticleDOI
TL;DR: It is shown on examples that this algorithm is able to handle curves defined by high-degree polynomials with large coefficients, to identify regions of interest, and to use the resulting structure for efficient rendering of implicit curves, point localization, or Boolean operation computation.

Journal ArticleDOI
TL;DR: The sensitive pole algorithm is described, for the automatic computation of the eigenvalues (poles) most sensitive to parameter changes in large-scale system matrices, which can be used in many other fields of engineering to study the impact of parametric changes to linear system models.
Abstract: This paper describes a new algorithm, named the sensitive pole algorithm, for the automatic computation of the eigenvalues (poles) most sensitive to parameter changes in large-scale system matrices. The effectiveness and robustness of the algorithm in tracing root-locus plots are illustrated by numerical results from the small-signal stability analysis of realistic power system models. The algorithm can be used in many other fields of engineering that also study the impact of parametric changes to linear system models.
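
The abstract does not spell out the formula, but the first-order sensitivity such algorithms exploit is standard: for a simple eigenvalue $\lambda$ with right eigenvector $x$ and left eigenvector $y$, $d\lambda/dp = y^{T}(\partial A/\partial p)\,x / (y^{T}x)$. The hedged sketch below checks this on a small random matrix; the paper's contribution is finding the most sensitive poles of large systems without computing the full spectrum, which this snippet does not attempt.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6
A = rng.standard_normal((n, n))           # stand-in system matrix
dA_dp = rng.standard_normal((n, n))       # derivative of A w.r.t. the parameter

# Right and left eigenvectors for one selected pole of A.
w, Vr = np.linalg.eig(A)
wl, Vl = np.linalg.eig(A.T)               # columns of Vl: left eigenvectors of A
k = 0
m = int(np.argmin(np.abs(wl - w[k])))     # match the same eigenvalue in both lists
x, y = Vr[:, k], Vl[:, m]

# First-order pole sensitivity: dlambda/dp = y^T (dA/dp) x / (y^T x)
sensitivity = (y @ dA_dp @ x) / (y @ x)

# Finite-difference check: perturb A and follow the nearest eigenvalue.
eps = 1e-7
w_pert = np.linalg.eigvals(A + eps * dA_dp)
lam_pert = w_pert[np.argmin(np.abs(w_pert - w[k]))]
print(sensitivity, (lam_pert - w[k]) / eps)   # the two values should agree closely
```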

Proceedings ArticleDOI
24 Feb 2008
TL;DR: Effective and fast optical-path computation taking into account physical impairments and wavelength continuity is successfully verified.
Abstract: Optical PCE and its interwork with the NMS were investigated for planning and operation of optical transparent networks. We successfully verified effective and fast optical-path computation taking into account the physical impairments and wavelength continuity.

Journal ArticleDOI
TL;DR: A novel algorithm that permits the fast and accurate computation of geometric moments on gray-scale images is presented in this paper; it extends the previously introduced IBR algorithm, which was applicable only to binary images.
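
For reference, the quantity being computed is simple to state: the geometric moment of order (p, q) of an image I is $m_{pq} = \sum_x \sum_y x^p y^q I(x, y)$. The naive sketch below computes it directly on a random gray-scale array; the paper's IBR-style block decomposition, which makes this fast, is not reproduced here.

```python
import numpy as np

def geometric_moment(image, p, q):
    """m_pq = sum over pixels of x^p * y^q * I(x, y), computed naively."""
    y_idx, x_idx = np.indices(image.shape)   # rows -> y, columns -> x
    return float(np.sum((x_idx ** p) * (y_idx ** q) * image))

img = np.random.default_rng(0).random((64, 64))   # toy gray-scale image
m00 = geometric_moment(img, 0, 0)
m10 = geometric_moment(img, 1, 0)
m01 = geometric_moment(img, 0, 1)
print("centroid:", m10 / m00, m01 / m00)          # intensity centroid from moments
```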