
Showing papers on "Computation published in 2010"


01 Jan 2010
TL;DR: The work gives estimates of the discrepancy between solutions of the initial and the homogenized problems for one-dimensional second-order elliptic operators with random coefficients satisfying strong or uniform mixing conditions, and introduces graphs representing the domain of integration of the integrals in each term.
Abstract: The work gives estimates of the discrepancy between solutions of the initial and the homogenized problems for one-dimensional second-order elliptic operators with random coefficients satisfying strong or uniform mixing conditions. We obtain several sharp estimates in terms of the corresponding mixing coefficient. Abstract. In the theory of homogenisation it is of particular interest to determine the classes of problems which are stable under taking homogenisation limits. A notable situation where the limit enlarges the class of original problems is known as memory (nonlocal) effects. A number of results in that direction have been obtained for linear problems. Tartar (1990) initiated the study of the effective equation corresponding to the nonlinear equation $\partial_t u_n + a_n u_n^2 = f$. Significant progress has been hampered by the complexity of the computations required to obtain the terms in the power-series expansion. We propose a method which overcomes that difficulty by introducing graphs representing the domain of integration of the integrals in each term. The graphs are relatively simple, they are easy to calculate with, and they give a clear image of the form of each term. The method allows us to discuss the form of the effective equation and the convergence of the power-series expansions. The feasibility of our method for other types of nonlinearities is discussed as well.

550 citations


Journal ArticleDOI
TL;DR: Two tomography schemes that scale much more favourably than direct tomography with system size are presented: one of them requires unitary operations on a constant number of subsystems, whereas the other requires only local measurements together with more elaborate post-processing.
Abstract: Quantum state tomography--deducing quantum states from measured data--is the gold standard for verification and benchmarking of quantum devices. It has been realized in systems with few components, but for larger systems it becomes unfeasible because the number of measurements and the amount of computation required to process them grow exponentially in the system size. Here, we present two tomography schemes that scale much more favourably than direct tomography with system size. One of them requires unitary operations on a constant number of subsystems, whereas the other requires only local measurements together with more elaborate post-processing. Both rely only on a linear number of experimental operations and post-processing that is polynomial in the system size. These schemes can be applied to a wide range of quantum states, in particular those that are well approximated by matrix product states. The accuracy of the reconstructed states can be rigorously certified without any a priori assumptions.

550 citations



Book ChapterDOI
TL;DR: A survey of the theory of the Lyapunov Characteristic Exponents (LCEs) for dynamical systems, as well as of the numerical techniques developed for the computation of the maximal LCE, of a few of them, and of all of them, can be found in this article.
Abstract: We present a survey of the theory of the Lyapunov Characteristic Exponents (LCEs) for dynamical systems, as well as of the numerical techniques developed for the computation of the maximal LCE, of a few of them, and of all of them. After some historical notes on the first attempts at the numerical evaluation of LCEs, we discuss in detail the multiplicative ergodic theorem of Oseledec (102), which provides the theoretical basis for the computation of the LCEs. Then, we analyze the algorithm for the computation of the maximal LCE, whose value has been extensively used as an indicator of chaos, and the algorithm of the so-called standard method, developed by Benettin et al. (14), for the computation of many LCEs. We also consider different discrete and continuous methods for computing the LCEs based on the QR or the singular value decomposition techniques. Although we are mainly interested in finite-dimensional conservative systems, i.e., autonomous Hamiltonian systems and symplectic maps, we also briefly refer to the evaluation of LCEs of dissipative systems and time series. The relation of two chaos detection techniques, namely the fast Lyapunov indicator (FLI) and the generalized alignment index (GALI), to the computation of the LCEs is also discussed.
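To make the standard method concrete, here is a minimal, hedged sketch of a Benettin-style estimate of the maximal LCE for a map (the Hénon map, the perturbation size, and the iteration count are illustrative assumptions, not taken from the chapter):

```python
import numpy as np

def henon(x):
    # Hénon map, used here only as a convenient test system (an assumption)
    a, b = 1.4, 0.3
    return np.array([1.0 - a * x[0] ** 2 + x[1], b * x[0]])

def max_lce(x0, n_steps=100_000, d0=1e-8):
    """Benettin-style estimate of the maximal Lyapunov exponent of a map."""
    x = np.asarray(x0, dtype=float)
    y = x + np.array([d0, 0.0])        # nearby orbit at distance d0
    total = 0.0
    for _ in range(n_steps):
        x, y = henon(x), henon(y)
        d = np.linalg.norm(y - x)
        total += np.log(d / d0)        # accumulate the local growth rate
        y = x + (y - x) * (d0 / d)     # renormalize the deviation to length d0
    return total / n_steps

print(max_lce([0.1, 0.1]))             # roughly 0.42 for the Hénon map
```

The same renormalization idea extends to flows and, with repeated QR re-orthonormalization of a set of deviation vectors, to the computation of several or all LCEs.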

259 citations


Proceedings ArticleDOI
30 Nov 2010
TL;DR: The state-of-the-art framework providing high-level matrix computation primitives with MapReduce is explored through the case study approach, and these primitives are demonstrated with different computation engines to show the performance and scalability.
Abstract: Various scientific computations have become so complex that computation tools play an increasingly important role. In this paper, we explore the state-of-the-art framework providing high-level matrix computation primitives with MapReduce through the case study approach, and demonstrate these primitives with different computation engines to show their performance and scalability. We believe the opportunity for using MapReduce in scientific computation is even more promising than the success to date in the parallel systems literature.
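As a hedged illustration of what such a matrix primitive looks like (plain Python standing in for an actual MapReduce engine; the function names are ours, not the framework's API), sparse matrix-vector multiplication splits naturally into a map phase emitting partial products and a reduce phase summing them per row:

```python
from collections import defaultdict

# Sparse matrix as (row, col, value) triples; vector as {index: value}.
A = [(0, 0, 2.0), (0, 1, 1.0), (1, 1, 3.0)]
x = {0: 1.0, 1: 4.0}

def map_phase(triples, vec):
    # map: each entry (i, j, a_ij) emits a partial product keyed by row i
    for i, j, a in triples:
        yield i, a * vec.get(j, 0.0)

def reduce_phase(pairs):
    # reduce: sum the partial products for each row key
    out = defaultdict(float)
    for i, partial in pairs:
        out[i] += partial
    return dict(out)

print(reduce_phase(map_phase(A, x)))   # {0: 6.0, 1: 12.0}
```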

236 citations


Journal ArticleDOI
TL;DR: It is demonstrated that flexible wall modeling plays an important role in accurate prediction of patient-specific hemodynamics in vascular fluid–structure interaction modeling when compared to the rigid arterial wall assumption.
Abstract: A computational vascular fluid-structure interaction framework for the simulation of patient-specific cerebral aneurysm configurations is presented. A new approach for the computation of the blood vessel tissue prestress is also described. Simulations of four patient-specific models are carried out, and quantities of hemodynamic interest such as wall shear stress and wall tension are studied to examine the relevance of fluid-structure interaction modeling when compared to the rigid arterial wall assumption. We demonstrate that flexible wall modeling plays an important role in accurate prediction of patient-specific hemodynamics. Discussion of the clinical relevance of our methods and results is provided.

224 citations


Book ChapterDOI
25 Jan 2010
TL;DR: This paper presents a family of protocols for multiparty computation with rational numbers using fixed-point representation that offers more efficient solutions for secure computation than other usual representations.
Abstract: Secure computation is a promising approach to business problems in which several parties want to run a joint application and cannot reveal their inputs. Secure computation preserves the privacy of input data using cryptographic protocols, allowing the parties to obtain the benefits of data sharing and at the same time avoid the associated risks. These business applications need protocols that support all the primitive data types and allow secure protocol composition and efficient application development. Secure computation with rational numbers has been a challenging problem. We present in this paper a family of protocols for multiparty computation with rational numbers using fixed-point representation. This approach offers more efficient solutions for secure computation than other usual representations.
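As a hedged sketch of the fixed-point representation that such protocols build on (the scaling factor and the plain integer arithmetic shown are generic, not the paper's secure protocols, which operate on secret-shared values), rationals are mapped to integers by scaling with 2^f, and products are rescaled by truncation:

```python
F = 16                      # number of fractional bits (an assumed parameter)

def encode(x: float) -> int:
    # represent x as round(x * 2^F), the integer a secure protocol would share
    return round(x * (1 << F))

def decode(a: int) -> float:
    return a / (1 << F)

def fx_mul(a: int, b: int) -> int:
    # the product of two fixed-point values carries 2F fractional bits;
    # truncate back to F (Python's >> floors for negative values)
    return (a * b) >> F

a, b = encode(3.25), encode(-1.5)
print(decode(a + b))        # 1.75  (addition works directly on the encodings)
print(decode(fx_mul(a, b))) # -4.875
```

In the secure setting, the expensive step is exactly this truncation, which is why efficient secure truncation protocols matter.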

213 citations


Book ChapterDOI
13 Sep 2010
TL;DR: This work considers a collection of related multiparty computation protocols that provide core operations for secure integer and fixed-point computation and presents techniques and building blocks that allow the efficiency of these protocols to be improved, in order to meet the performance requirements of a broader range of applications.
Abstract: We consider a collection of related multiparty computation protocols that provide core operations for secure integer and fixed-point computation. The higher-level protocols offer integer truncation and comparison, which are typically the main performance bottlenecks in complex applications. We present techniques and building blocks that allow us to improve the efficiency of these protocols, in order to meet the performance requirements of a broader range of applications. The protocols can be constructed using different secure computation methods. We focus on solutions for multiparty computation using secret sharing.

183 citations


Proceedings ArticleDOI
05 Jun 2010
TL;DR: New ways to simulate 2-party communication protocols to get protocols with potentially smaller communication and a direct sum theorem for randomized communication complexity are described.
Abstract: We describe new ways to simulate 2-party communication protocols to get protocols with potentially smaller communication. We show that every communication protocol that communicates C bits and reveals I bits of information about the inputs to the participating parties can be simulated by a new protocol involving at most ~O(√CI) bits of communication. If the protocol reveals I bits of information about the inputs to an observer that watches the communication in the protocol, we show how to carry out the simulation with ~O(I) bits of communication. These results lead to a direct sum theorem for randomized communication complexity. Ignoring polylogarithmic factors, we show that for worst case computation, computing n copies of a function requires √n times the communication required for computing one copy of the function. For average case complexity, given any distribution μ on inputs, computing n copies of the function on n inputs sampled independently according to μ requires √n times the communication required for computing one copy. If μ is a product distribution, computing n copies on n independent inputs sampled according to μ requires n times the communication required for computing the function. We also study the complexity of computing the sum (or parity) of n evaluations of f, and obtain results analogous to those above.
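Schematically, and hiding polylogarithmic factors in the tilde notation, the claims above can be summarized as follows (the notation here is ours, introduced only to restate the abstract):

$$ \mathrm{CC}(\pi') = \tilde{O}\big(\sqrt{C\, I_{\mathrm{int}}}\big), \qquad \mathrm{CC}(\pi'') = \tilde{O}(I_{\mathrm{ext}}), $$

for simulating a protocol with communication $C$, internal information cost $I_{\mathrm{int}}$, and external information cost $I_{\mathrm{ext}}$; and, for the direct sum,

$$ \mathrm{CC}(f^{n}) \;\gtrsim\; \sqrt{n}\,\cdot \mathrm{CC}(f) \quad \text{(worst case and general } \mu\text{)}, \qquad \mathrm{CC}_{\mu^{n}}(f^{n}) \;\gtrsim\; n \cdot \mathrm{CC}_{\mu}(f) \quad \text{(product } \mu\text{)}. $$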

182 citations


Proceedings ArticleDOI
13 Jun 2010
TL;DR: This paper traces the roots of stochastic computing from the Von Neumann era into its current form and proposes communications-inspired design techniques based on estimation and detection theory.
Abstract: Stochastic computation, as presented in this paper, exploits the statistical nature of application-level performance metrics, and matches it to the statistical attributes of the underlying device and circuit fabrics. Nanoscale circuit fabrics are viewed as noisy communication channels/networks. Communications-inspired design techniques based on estimation and detection theory are proposed. Stochastic computation advocates an explicit characterization and exploitation of error statistics at the architectural and system levels. This paper traces the roots of stochastic computing from the Von Neumann era into its current form. Design and CAD challenges are described.

153 citations


Proceedings ArticleDOI
21 Jun 2010
TL;DR: Central to the method is the automatic computation of an adaptive sensitivity parameter, which significantly increases the reliability and makes the identification more robust in the presence of obtuse and acute angles.
Abstract: This paper presents a new technique for detecting sharp features on point-sampled geometry. Sharp features of different nature and possessing angles varying from obtuse to acute can be identified without any user interaction. The algorithm works directly on the point cloud; no surface reconstruction is needed. Given an unstructured point cloud, our method first computes a Gauss map clustering on local neighborhoods in order to discard all points which are unlikely to belong to a sharp feature. As usual, a global sensitivity parameter is used in this stage. In a second stage, the remaining feature candidates undergo a more precise iterative selection process. Central to our method is the automatic computation of an adaptive sensitivity parameter, which significantly increases the reliability and makes the identification more robust in the presence of obtuse and acute angles. The algorithm is fast and does not depend on the sampling resolution, since it is based on a local neighbor graph computation.

Journal ArticleDOI
TL;DR: In this paper, a new on-grid dynamic multi-timescale (MTS) method is presented to increase significantly the computation efficiency involving multi-physical and chemical processes using detailed and reduced kinetic mechanisms.

Book ChapterDOI
14 Jun 2010
TL;DR: This work proposes a chemical implementation of stack machines -- a Turing-universal model of computation similar to Turing machines -- using DNA strand displacement cascades as the underlying chemical primitive, controlled by strand displacement logic.
Abstract: Bennett's proposed chemical Turing machine is one of the most important thought experiments in the study of the thermodynamics of computation. Yet the sophistication of molecular engineering required to physically construct Bennett's hypothetical polymer substrate and enzymes has deterred experimental implementations. Here we propose a chemical implementation of stack machines -- a Turing-universal model of computation similar to Turing machines -- using DNA strand displacement cascades as the underlying chemical primitive. More specifically, the mechanism described herein is the addition and removal of monomers from the end of a DNA polymer, controlled by strand displacement logic. We capture the motivating feature of Bennett's scheme: that physical reversibility corresponds to logically reversible computation, and arbitrarily little energy per computation step is required. Further, as a method of embedding logic control into chemical and biological systems, polymer-based chemical computation is significantly more efficient than geometry-free chemical reaction networks.

Proceedings ArticleDOI
12 Apr 2010
TL;DR: This paper studies the relationship between the region geometry and reachable set accuracy and proposes a method for constructing hybridization regions using tighter interpolation error bounds and presents some experimental results on a high-dimensional biological system to demonstrate the performance improvement.
Abstract: This paper is concerned with reachable set computation for non-linear systems using hybridization. The essence of hybridization is to approximate a non-linear vector field by a simpler (such as affine) vector field. This is done by partitioning the state space into small regions within each of which a simpler vector field is defined. This approach relies on the availability of methods for function approximation and for handling the resulting dynamical systems. Concerning function approximation using interpolation, the accuracy depends on the shapes and sizes of the regions, which can also compromise the speed of the reachability computation, since poor approximations may generate spurious classes of trajectories. In this paper we study the relationship between the region geometry and reachable set accuracy and propose a method for constructing hybridization regions using tighter interpolation error bounds. In addition, our construction exploits the dynamics of the system to adapt the orientation of the regions, in order to achieve better time-efficiency. We also present some experimental results on a high-dimensional biological system, to demonstrate the performance improvement.
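For orientation, the generic affine interpolation error bound underlying such constructions (stated here in a standard textbook form; the paper derives tighter, geometry-aware bounds) is

$$ \max_{x \in \Delta} \| f(x) - \ell(x) \| \;\le\; C_{\Delta}\, \Big( \sup_{x \in \Delta} \| D^2 f(x) \| \Big)\, h^2, $$

where $\ell$ interpolates the vector field $f$ at the vertices of the simplex $\Delta$, $h$ is the diameter of $\Delta$, and $C_{\Delta}$ depends on the simplex shape. This is why both the size and the geometry (orientation, aspect ratio) of the hybridization regions affect the accuracy of the reachable set.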

Journal ArticleDOI
TL;DR: A first-principles density functional program is developed that efficiently performs large-scale calculations on massively parallel computers and obtains a self-consistent electronic structure in a few hundred hours.

Proceedings ArticleDOI
09 Jan 2010
TL;DR: A novel approach to predict the sequential computation time accurately and efficiently for large-scale parallel applications on non-existing target machines is proposed and a performance prediction framework, called PHANTOM, is implemented, which integrates the above computation-time acquisition approach with a trace-driven network simulator.
Abstract: For designers of large-scale parallel computers, it is greatly desired that performance of parallel applications can be predicted at the design phase. However, this is difficult because the execution time of parallel applications is determined by several factors, including sequential computation time in each process, communication time and their convolution. Despite previous efforts, it remains an open problem to estimate sequential computation time in each process accurately and efficiently for large-scale parallel applications on non-existing target machines. This paper proposes a novel approach to predict the sequential computation time accurately and efficiently. We assume that there is at least one node of the target platform but the whole target system need not be available. We make two main technical contributions. First, we employ deterministic replay techniques to execute any process of a parallel application on a single node at real speed. As a result, we can simply measure the real sequential computation time on a target node for each process one by one. Second, we observe that computation behavior of processes in parallel applications can be clustered into a few groups while processes in each group have similar computation behavior. This observation helps us reduce measurement time significantly because we only need to execute representative parallel processes instead of all of them. We have implemented a performance prediction framework, called PHANTOM, which integrates the above computation-time acquisition approach with a trace-driven network simulator. We validate our approach on several platforms. For ASCI Sweep3D, the error of our approach is less than 5% on 1024 processor cores. Compared to a recent regression-based prediction approach, PHANTOM presents better prediction accuracy across different platforms.
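A hedged sketch of the grouping idea (the feature choice, cluster count, and the replay stub below are illustrative assumptions; PHANTOM's actual implementation may differ): cluster per-process computation profiles, replay one representative per cluster at real speed, and let every process inherit its representative's measured time.

```python
import numpy as np
from sklearn.cluster import KMeans

def replay_and_time(rank: int) -> float:
    # stands in for deterministic replay of process `rank` on one target node;
    # faked here so the sketch runs end-to-end (hypothetical helper)
    return 1.0 + 0.001 * rank

# one feature vector per MPI process, e.g. coarse per-phase counters (illustrative data)
profiles = np.random.rand(1024, 8)

labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(profiles)

# replay only one representative process per behavior group
reps = {c: int(np.flatnonzero(labels == c)[0]) for c in set(labels)}
measured = {c: replay_and_time(r) for c, r in reps.items()}

# every process is assigned its cluster representative's measured computation time
predicted_times = np.array([measured[c] for c in labels])
```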

Journal ArticleDOI
TL;DR: A family of constructions for fault-tolerant quantum computation that are closely related to topological quantum computation, but for which the fault tolerance is implemented in software rather than coming from a physical medium is provided.

Journal ArticleDOI
28 Sep 2010-Chaos
TL;DR: The separable information is introduced, a measure which locally identifies information modification events where separate inspection of the sources to a computation is misleading about its outcome.
Abstract: Distributed computation can be described in terms of the fundamental operations of information storage, transfer, and modification. To describe the dynamics of information in computation, we need to quantify these operations on a local scale in space and time. In this paper we extend previous work regarding the local quantification of information storage and transfer, to explore how information modification can be quantified at each spatiotemporal point in a system. We introduce the separable information, a measure which locally identifies information modification events where separate inspection of the sources to a computation is misleading about its outcome. We apply this measure to cellular automata, where it is shown to be the first direct quantitative measure to provide evidence for the long-held conjecture that collisions between emergent particles therein are the dominant information modification events.
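In the notation of local information dynamics, the separable information at variable X and time step n+1 is, to our reading (a hedged paraphrase, not a quotation), the sum of the local active information storage and the local apparent transfer entropies from each other causal source Y:

$$ s_X(n+1) \;=\; a_X(n+1) \;+\; \sum_{Y \in \mathcal{V}_X \setminus \{X\}} t_{Y \to X}(n+1), $$

with negative values of $s_X$ flagging information modification events, where inspecting the sources separately is misleading about the outcome of the computation.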

Dissertation
01 Jan 2010
TL;DR: This thesis proposes a unique approach to computationally-enabled form-finding procedures, and experimentally investigates how such processes contribute to novel ways of creating, distributing and depositing material forms.
Abstract: The institutionalized separation between form, structure and material, deeply embedded in modernist design theory, paralleled by a methodological partitioning between modeling, analysis and fabrication, resulted in geometric-driven form generation. Such prioritization of form over material was carried into the development and design logic of CAD. Today, under the imperatives and growing recognition of the failures and environmental liabilities of this approach, modern design culture is experiencing a shift to material aware design. Inspired by Nature’s strategies where form generation is driven by maximal performance with minimal resources through local material property variation, the research reviews, proposes and develops models and processes for a material-based approach in computationally enabled form-generation. Material-based Design Computation is developed and proposed as a set of computational strategies supporting the integration of form, material and structure by incorporating physical form-finding strategies with digital analysis and fabrication. In this approach, material precedes shape, and it is the structuring of material properties as a function of structural and environmental performance that generates design form. The thesis proposes a unique approach to computationally-enabled form-finding procedures, and experimentally investigates how such processes contribute to novel ways of creating, distributing and depositing material forms. Variable Property Design is investigated as a theoretical and technical framework by which to model, analyze and fabricate objects with graduated properties designed to correspond to multiple and continuously varied functional constraints. The following methods were developed as the enabling mechanisms of Material Computation: Tiling Behavior & Digital Anisotropy, Finite Element Synthesis, and Material Pixels. In order to implement this approach as a fabrication process, a novel fabrication technology, termed Variable Property Rapid Prototyping has been developed, designed and patented. Among the potential contributions is the achievement of a high degree of customization through material heterogeneity as compared to conventional design of components and assemblies. Experimental designs employing suggested theoretical and technical frameworks, methods and techniques are presented, discussed and demonstrated. They support product customization, rapid augmentation and variable property fabrication. Developed as approximations of natural formation processes, these design experiments demonstrate the contribution and the potential future of a new design and research field. Thesis Supervisor: William J. Mitchell Title: Alexander Dreyfoos Professor of Architecture and Media Arts and Sciences Department of Architecture, MIT

Journal ArticleDOI
TL;DR: The computation time required for high-resolution micromagnetic simulations of the magnetization dynamics in large magnetic samples can be reduced effectively by employing GPUs.
Abstract: We have adapted our finite element micromagnetic simulation software to the massively parallel architecture of graphical processing units (GPUs) with double-precision floating point accuracy. Using the example of Standard Problem #4 with different numbers of discretization points, we demonstrate the high speed performance of a single GPU compared with an OpenMP-parallelized version of the code using eight CPUs. The adaption of both the magnetostatic field calculation and the time integration of the Landau-Lifshitz-Gilbert equation routines can lead to a speedup factor of up to four. The gain in computation performance of the GPU code increases with increasing number of discretization nodes. The computation time required for high-resolution micromagnetic simulations of the magnetization dynamics in large magnetic samples can thus be reduced effectively by employing GPUs.
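For reference, the Landau-Lifshitz-Gilbert equation whose time integration is ported to the GPU is, in its standard form (not a quotation from the paper),

$$ \frac{\partial \mathbf{M}}{\partial t} \;=\; -\gamma\, \mathbf{M} \times \mathbf{H}_{\mathrm{eff}} \;+\; \frac{\alpha}{M_s}\, \mathbf{M} \times \frac{\partial \mathbf{M}}{\partial t}, $$

with gyromagnetic ratio $\gamma$, Gilbert damping $\alpha$, saturation magnetization $M_s$, and effective field $\mathbf{H}_{\mathrm{eff}}$; the magnetostatic contribution to $\mathbf{H}_{\mathrm{eff}}$ is the other GPU-accelerated component mentioned above.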

Journal ArticleDOI
TL;DR: In this paper, the Lyapunov exponent for random matrix products of positive matrices is studied and expressed in terms of associated complex functions, which leads to new explicit formulae for the Lyapunov exponents and to an efficient method for their computation.
Abstract: In this article we study the Lyapunov exponent for random matrix products of positive matrices and express it in terms of associated complex functions. This leads to new explicit formulae for the Lyapunov exponents and to an efficient method for their computation.
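For orientation, the quantity being computed is the top Lyapunov exponent of the random product in the standard Furstenberg-Kesten sense (the paper's complex-function formulae are not reproduced here):

$$ \lambda \;=\; \lim_{n \to \infty} \frac{1}{n}\, \mathbb{E}\big[ \log \| A_n A_{n-1} \cdots A_1 \| \big], $$

where $A_1, A_2, \dots$ are i.i.d. random positive matrices and $\|\cdot\|$ is any matrix norm.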

DOI
01 Jan 2010
TL;DR: A novel boundary handling scheme for incompressible fluids based on Smoothed Particle Hydrodynamics (SPH) is presented and an adaptive time-stepping approach is proposed that automatically estimates appropriate time steps independent of the scenario.
Abstract: We present a novel boundary handling scheme for incompressible fluids based on Smoothed Particle Hydrodynamics (SPH). In combination with the predictive-corrective incompressible SPH (PCISPH) method, the boundary handling scheme allows for larger time steps compared to existing solutions. Furthermore, an adaptive time-stepping approach is proposed. The approach automatically estimates appropriate time steps independent of the scenario. Due to its adaptivity, the overall computation time of dynamic scenarios is significantly reduced compared to simulations with constant time steps.
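A hedged sketch of CFL-style adaptive time stepping of the kind described (the exact criteria and safety factors used in the paper may differ; the constants below are assumptions):

```python
import numpy as np

def adaptive_dt(vel, acc, h, c_courant=0.4, c_force=0.25, dt_max=1e-2):
    """Estimate a stable SPH time step from current velocities and accelerations.

    vel, acc : (N, 3) arrays of particle velocities and accelerations
    h        : smoothing length / particle spacing
    The safety factors are illustrative assumptions, not the paper's values.
    """
    v_max = np.max(np.linalg.norm(vel, axis=1)) if len(vel) else 0.0
    a_max = np.max(np.linalg.norm(acc, axis=1)) if len(acc) else 0.0
    dt_v = c_courant * h / v_max if v_max > 0 else dt_max         # velocity (CFL) bound
    dt_a = c_force * np.sqrt(h / a_max) if a_max > 0 else dt_max  # acceleration bound
    return min(dt_v, dt_a, dt_max)

# usage: dt = adaptive_dt(velocities, accelerations, h=0.05)
```

Re-evaluating such a bound every step is what lets the simulation take large steps in calm phases and automatically shrink them when the dynamics become violent.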

Journal ArticleDOI
TL;DR: A novel method, based on the computation of the analytic center of a polyhedron, for the selection of additive value functions that are compatible with holistic assessments of preferences is presented.

Journal ArticleDOI
TL;DR: This contribution presents real-time patient-specific computation of the deformation field within the brain for six cases of brain shift induced by craniotomy using specialised non-linear finite element procedures implemented on a graphics processing unit (GPU).
Abstract: Long computation times of non-linear (i.e. accounting for geometric and material non-linearity) biomechanical models have been regarded as one of the key factors preventing application of such models in predicting organ deformation for image-guided surgery. This contribution presents real-time patient-specific computation of the deformation field within the brain for six cases of brain shift induced by craniotomy (i.e. surgical opening of the skull) using specialised non-linear finite element procedures implemented on a graphics processing unit (GPU). In contrast to commercial finite element codes that rely on an updated Lagrangian formulation and implicit integration in time domain for steady state solutions, our procedures utilise the total Lagrangian formulation with explicit time stepping and dynamic relaxation. We used patient-specific finite element meshes consisting of hexahedral and non-locking tetrahedral elements, together with realistic material properties for the brain tissue and appropriate contact conditions at the boundaries. The loading was defined by prescribing deformations on the brain surface under the craniotomy. Application of the computed deformation fields to register (i.e. align) the preoperative and intraoperative images indicated that the models very accurately predict the intraoperative deformations within the brain. For each case, computing the brain deformation field took less than 4 s using an NVIDIA Tesla C870 GPU, which is two orders of magnitude reduction in computation time in comparison to our previous study in which the brain deformation was predicted using a commercial finite element solver executed on a personal computer.

Book ChapterDOI
TL;DR: An overview of existing and new exact and approximate methods to calculate a potential field, analytical investigations of their exactness, and tests of their computation speed are given.
Abstract: The distance from a given position toward one or more destinations, exits, and way points is an important input variable in most models of pedestrian dynamics. Except for special cases without obstacles in a convex scenario—i.e. each position is visible from any other—the calculation of these distances is a non-trivial task. This is not a big problem as long as the model only demands the distances to be stored in a Static Floor Field (or Potential Field), which never changes throughout the whole simulation. Then a pre-calculation once before the simulation starts is sufficient. But if one wants to allow changes of the geometry during a simulation run—imagine doors or the blocking of a corridor due to some hazard—in the Distance Potential Field, calculation time matters strongly. We give an overview of existing and new exact and approximate methods to calculate a potential field, analytical investigations of their exactness, and tests of their computation speed. The advantages and drawbacks of the methods are discussed.
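One simple exact method on a grid, given here only as a hedged illustration of what a Distance Potential Field computation looks like (a generic Dijkstra flood fill; the chapter compares several more refined exact and approximate methods), is:

```python
import heapq
import math

def distance_potential_field(grid, exits):
    """Dijkstra flood fill from the exit cells over a 2D grid.

    grid  : 2D list of bools, True for walkable cells, False for obstacles
    exits : list of (row, col) exit cells
    Returns a 2D list of geodesic distances (math.inf where unreachable).
    """
    rows, cols = len(grid), len(grid[0])
    dist = [[math.inf] * cols for _ in range(rows)]
    heap = []
    for r, c in exits:
        dist[r][c] = 0.0
        heapq.heappush(heap, (0.0, r, c))
    # 8-connected neighborhood; diagonal moves cost sqrt(2)
    steps = [(-1, 0, 1.0), (1, 0, 1.0), (0, -1, 1.0), (0, 1, 1.0),
             (-1, -1, 2 ** 0.5), (-1, 1, 2 ** 0.5), (1, -1, 2 ** 0.5), (1, 1, 2 ** 0.5)]
    while heap:
        d, r, c = heapq.heappop(heap)
        if d > dist[r][c]:
            continue                      # stale heap entry
        for dr, dc, w in steps:
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] and d + w < dist[nr][nc]:
                dist[nr][nc] = d + w
                heapq.heappush(heap, (d + w, nr, nc))
    return dist
```

When doors open or corridors become blocked, only the affected region needs to be recomputed, which is exactly where the speed of such methods becomes critical.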

Journal ArticleDOI
TL;DR: In this article, the relation between vortex counting in two-dimensional supersymmetric field theories and the refined BPS invariants of the dual geometries was studied. But this relation was not considered in this paper.
Abstract: To every 3-manifold M one can associate a two-dimensional N=(2,2) supersymmetric field theory by compactifying five-dimensional N=2 super-Yang-Mills theory on M. This system naturally appears in the study of half-BPS surface operators in four-dimensional N=2 gauge theories on one hand, and in the geometric approach to knot homologies, on the other. We study the relation between vortex counting in such two-dimensional N=(2,2) supersymmetric field theories and the refined BPS invariants of the dual geometries. In certain cases, this counting can also be mapped to the computation of degenerate conformal blocks in two-dimensional CFTs. Degenerate limits of vertex operators in CFT receive a simple interpretation via geometric transitions in BPS counting.

Journal ArticleDOI
TL;DR: Two independent strategies are presented for reducing the computation time of multislice simulations of scanning transmission electron microscope images: optimal probe sampling, and the use of desktop graphics processing units.


Proceedings ArticleDOI
15 Dec 2010
TL;DR: The experiments suggest that good quality relighting and transport inversion are possible from a few dozen low-dynamic range photos, even for scenes with complex shadows, caustics, and other challenging lighting effects.
Abstract: We present a general framework for analyzing the transport matrix of a real-world scene at full resolution, without capturing many photos. The key idea is to use projectors and cameras to directly acquire eigenvectors and the Krylov subspace of the unknown transport matrix. To do this, we implement Krylov subspace methods partially in optics, by treating the scene as a "black box subroutine" that enables optical computation of arbitrary matrix-vector products. We describe two methods---optical Arnoldi to acquire a low-rank approximation of the transport matrix for relighting; and optical GMRES to invert light transport. Our experiments suggest that good quality relighting and transport inversion are possible from a few dozen low-dynamic range photos, even for scenes with complex shadows, caustics, and other challenging lighting effects.
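As a hedged sketch of the numerical core (here the matrix-vector product is an ordinary Python callback; in the paper it is realized optically by projecting an illumination pattern and photographing the scene), Arnoldi iteration builds an orthonormal Krylov basis from black-box products alone:

```python
import numpy as np

def arnoldi(matvec, b, k):
    """k steps of Arnoldi iteration using only a black-box matvec(v) -> A @ v.

    Returns Q (n x (k+1)) with orthonormal columns spanning the Krylov subspace
    and the (k+1) x k upper Hessenberg matrix H satisfying A Q[:, :k] = Q @ H.
    """
    n = b.shape[0]
    Q = np.zeros((n, k + 1))
    H = np.zeros((k + 1, k))
    Q[:, 0] = b / np.linalg.norm(b)
    for j in range(k):
        w = matvec(Q[:, j])                 # the only access to the "matrix"
        for i in range(j + 1):              # modified Gram-Schmidt orthogonalization
            H[i, j] = Q[:, i] @ w
            w -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:             # invariant subspace found; stop early
            return Q[:, : j + 1], H[: j + 1, : j]
        Q[:, j + 1] = w / H[j + 1, j]
    return Q, H

# usage with an explicit matrix standing in for the optical black box:
A = np.random.rand(100, 100)
Q, H = arnoldi(lambda v: A @ v, np.random.rand(100), k=20)
```

The small Hessenberg matrix H then yields a low-rank approximation of the transport matrix for relighting, in the spirit of the optical Arnoldi method described above.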

Journal ArticleDOI
TL;DR: A hybrid coupling for planar fronts is developed: a hybrid computation in space in which individual electrons are followed in the region of high electric field and low density, while the bulk of the electrons is approximated by densities.