
Showing papers on "Computation published in 2012"


Proceedings ArticleDOI
24 Dec 2012
TL;DR: A new physics engine tailored to model-based control is described, built on the modern velocity-stepping approach that avoids the difficulties of spring-dampers; the engine can compute both forward and inverse dynamics.
Abstract: We describe a new physics engine tailored to model-based control. Multi-joint dynamics are represented in generalized coordinates and computed via recursive algorithms. Contact responses are computed via efficient new algorithms we have developed, based on the modern velocity-stepping approach which avoids the difficulties with spring-dampers. Models are specified using either a high-level C++ API or an intuitive XML file format. A built-in compiler transforms the user model into an optimized data structure used for runtime computation. The engine can compute both forward and inverse dynamics. The latter are well-defined even in the presence of contacts and equality constraints. The model can include tendon wrapping as well as actuator activation states (e.g. pneumatic cylinders or muscles). To facilitate optimal control applications and in particular sampling and finite differencing, the dynamics can be evaluated for different states and controls in parallel. Around 400,000 dynamics evaluations per second are possible on a 12-core machine, for a 3D humanoid with 18 dofs and 6 active contacts. We have already used the engine in a number of control applications. It will soon be made publicly available.
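To illustrate the parallel evaluation mentioned in the abstract, here is a minimal sketch of finite differencing a dynamics function over perturbed controls with a process pool. The dynamics function, its signature, and the pool layout are hypothetical stand-ins for illustration, not the engine's actual C++ API.

import numpy as np
from concurrent.futures import ProcessPoolExecutor

def dynamics(state, control):
    # Hypothetical placeholder for a forward-dynamics evaluation:
    # returns the next state given the current state and control.
    return state + 0.01 * np.tanh(control)

def finite_difference_controls(state, control, eps=1e-6, workers=12):
    # Approximate d(next_state)/d(control) column by column; the
    # perturbed evaluations are independent and run in parallel.
    n = len(control)
    perturbed = [control + eps * np.eye(n)[i] for i in range(n)]
    base = dynamics(state, control)
    with ProcessPoolExecutor(max_workers=workers) as pool:
        cols = list(pool.map(dynamics, [state] * n, perturbed))
    return np.stack([(c - base) / eps for c in cols], axis=1)

if __name__ == "__main__":
    J = finite_difference_controls(np.zeros(6), np.zeros(6))
    print(J.shape)  # (6, 6): one column per perturbed control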

4,018 citations


Book
01 Jan 2012
TL;DR: The changes for the second edition of this introduction to formal languages and automata were evolutionary rather than revolutionary; initially, the author felt that giving solutions to exercises was undesirable because it limited Chapter 1, Introduction to the Theory of Computation.
Abstract: Linz, Peter. An Introduction to Formal Languages and Automata / Peter Linz. 3rd ed. Changes for the second edition were evolutionary rather than revolutionary and addressed. Initially, I felt that giving solutions to exercises was undesirable because it limited the Chapter 1 Introduction to the Theory of Computation. Issuu solution manual to introduction to languages. Introduction to the Theory of Computation, 2nd edition, solution manual, Sipser. Structural theory of automata: solution manual of theory of computation. Kellison, theory of interest (PDF). Transformation, Sylvester's theorem (without proof), solution of second-order linear differential equations. Higher Engineering Mathematics by B.S. Grewal, 40th edition, Khanna Publication. 2. Introduction to Automata Theory, Languages and Computation, Hopcroft, Motwani & Ullman. UNIX System Utilities manual. 4.

1,383 citations


Proceedings ArticleDOI
16 Oct 2012
TL;DR: In this paper, the authors provide a provable-security treatment for garbling schemes, endowing them with a versatile syntax and multiple security definitions, including privacy, obliviousness, and authenticity.
Abstract: Garbled circuits, a classical idea rooted in the work of Yao, have long been understood as a cryptographic technique, not a cryptographic goal. Here we cull out a primitive corresponding to this technique. We call it a garbling scheme. We provide a provable-security treatment for garbling schemes, endowing them with a versatile syntax and multiple security definitions. The most basic of these, privacy, suffices for two-party secure function evaluation (SFE) and private function evaluation (PFE). Starting from a PRF, we provide an efficient garbling scheme achieving privacy and we analyze its concrete security. We next consider obliviousness and authenticity, properties needed for private and verifiable outsourcing of computation. We extend our scheme to achieve these ends. We provide highly efficient blockcipher-based instantiations of both schemes. Our treatment of garbling schemes presages more efficient garbling, more rigorous analyses, and more modularly designed higher-level protocols.
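As a concrete (and deliberately simplified) illustration of the garbling technique, the sketch below garbles a single AND gate in Python, using SHAKE-256 as a stand-in PRF and trial decryption via a zero tag. Real schemes, including the blockcipher-based instantiations in the paper, use point-and-permute and other optimizations omitted here; every name and parameter below is an assumption for illustration.

import os
import random
import hashlib

def prf(key, msg, nbytes=48):
    # Stand-in PRF built from SHAKE-256; the paper analyzes efficient
    # blockcipher-based instantiations instead.
    return hashlib.shake_256(key + msg).digest(nbytes)

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def garble_and_gate():
    # One random 32-byte label per (wire, truth value) pair.
    labels = {(w, b): os.urandom(32) for w in "abc" for b in (0, 1)}
    table = []
    for x in (0, 1):
        for y in (0, 1):
            row_plain = labels[("c", x & y)] + b"\x00" * 16  # zero tag
            pad = prf(labels[("a", x)], labels[("b", y)])
            table.append(xor(row_plain, pad))
    random.shuffle(table)  # hide which row encodes which input pair
    return labels, table

def evaluate(table, label_a, label_b):
    # The evaluator holds one label per input wire and learns only
    # the output label, not the wire values (privacy).
    for row in table:
        plain = xor(row, prf(label_a, label_b))
        if plain.endswith(b"\x00" * 16):
            return plain[:32]
    raise ValueError("no row decrypted")

labels, table = garble_and_gate()
out = evaluate(table, labels[("a", 1)], labels[("b", 1)])
print(out == labels[("c", 1)])  # True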

483 citations


Journal ArticleDOI
20 Jan 2012-Science
TL;DR: An experimental demonstration of blind quantum computing is presented in which the input, computation, and output all remain unknown to the computer, exploiting the conceptual framework of measurement-based quantum computation that enables a client to delegate a computation to a quantum server.
Abstract: Quantum computers, besides offering substantial computational speedups, are also expected to preserve the privacy of a computation. We present an experimental demonstration of blind quantum computing in which the input, computation, and output all remain unknown to the computer. We exploit the conceptual framework of measurement-based quantum computation that enables a client to delegate a computation to a quantum server. Various blind delegated computations, including one- and two-qubit gates and the Deutsch and Grover quantum algorithms, are demonstrated. The client only needs to be able to prepare and transmit individual photonic qubits. Our demonstration is crucial for unconditionally secure quantum cloud computing and might become a key ingredient for real-life applications, especially when considering the challenges of making powerful quantum computers widely available.

421 citations


Journal ArticleDOI
TL;DR: It is suggested that for new domains of investigation where there are no appropriate models of computation, it may be necessary to invent new formalisms to represent the systems under study.
Abstract: We recommend using the term Computation in conjunction with a well-defined model of computation whose semantics is clear and which matches the problem being investigated. Computer science already has a number of useful clearly defined models of computation whose behaviors and capabilities are well understood. We should use such models as part of any definition of the term computation. However, for new domains of investigation where there are no appropriate models it may be necessary to invent new formalisms to represent the systems under study.

366 citations


Journal ArticleDOI
TL;DR: It is shown that the basic scheme is inconsistent when moving surfaces are allowed to approach closer than twice the step size, and a remedy is developed based on excluding from the force computation all surface markers whose stencil overlaps with the stencil of a marker located on the surface of a collision partner.

338 citations


Journal ArticleDOI
TL;DR: The active information storage is introduced, which quantifies the information storage component that is directly in use in the computation of the next state of a process, and it is demonstrated that the local entropy rate is a useful spatiotemporal filter for information transfer structure.
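For intuition, here is a minimal plug-in estimator of active information storage for a discrete time series: the average (and local values) of log2 p(x_{n+1} | past_k) / p(x_{n+1}). This naive counting estimator is a sketch of the quantity's definition only, not the estimation machinery used in the paper.

from collections import Counter
from math import log2

def active_information_storage(series, k=2):
    # AIS = I(past_k ; next): average of the local values
    # log2( p(next | past_k) / p(next) ), estimated by counting.
    pasts, nexts, joints = Counter(), Counter(), Counter()
    samples = []
    for n in range(k, len(series)):
        past, nxt = tuple(series[n - k:n]), series[n]
        pasts[past] += 1
        nexts[nxt] += 1
        joints[(past, nxt)] += 1
        samples.append((past, nxt))
    N = len(samples)
    local = [log2((joints[(p, x)] / pasts[p]) / (nexts[x] / N))
             for p, x in samples]
    return sum(local) / N, local

# A period-2 process stores exactly one bit in its past: AIS -> 1.
ais, _ = active_information_storage([0, 1] * 500, k=1)
print(round(ais, 3))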

165 citations


Reference EntryDOI
Geir Storvik
31 Aug 2012
TL;DR: This chapter considers situations that are either too complicated to analyze mathematically or so large that the resulting mathematical expressions cannot be solved exactly.
Abstract: Simulation involves using a model to produce results. The growing power of computers and the evolving simulation methodology have led to the recognition of computation as a third approach for advancing the natural sciences, together with theory and traditional experimentation. Many applications of simulation are based on purely deterministic models. If the model contains a stochastic element, we have stochastic simulation, which is the subject of this article. Stochastic simulation is often called Monte Carlo sampling, especially in the engineering and physics literature.
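A minimal stochastic-simulation example in the sense described here: Monte Carlo estimation of an expectation with the usual standard-error estimate. This is an illustrative sketch, not drawn from the chapter itself.

import math
import random

def monte_carlo(f, sampler, n=100_000):
    # Estimate E[f(X)] by averaging over n independent draws,
    # with the standard Monte Carlo error estimate sqrt(var/n).
    vals = [f(sampler()) for _ in range(n)]
    mean = sum(vals) / n
    var = sum((v - mean) ** 2 for v in vals) / (n - 1)
    return mean, math.sqrt(var / n)

# Example: P(Z > 2) for a standard normal Z (exact value ~ 0.02275).
est, se = monte_carlo(lambda z: float(z > 2.0),
                      lambda: random.gauss(0.0, 1.0))
print(f"{est:.5f} +/- {se:.5f}")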

154 citations


Journal ArticleDOI
TL;DR: The theory of aberrations, techniques in optical system optimization, computation speed, precision fabrication of surfaces without symmetry, and extensions to the range of the surface slopes allowed in optical testing are described in this paper.
Abstract: A revolutionary optical surface is the result of developments in the theory of aberrations, techniques in optical system optimization, computation speed, precision fabrication of surfaces without symmetry, and extensions to the range of the surface slopes allowed in optical testing.

152 citations


Proceedings ArticleDOI
25 Feb 2012
TL;DR: The main contribution is to demonstrate that for this wide body of problems, there exist efficient internally deterministic algorithms, and moreover that these algorithms are natural to reason about and not complicated to code.
Abstract: The virtues of deterministic parallelism have been argued for decades and many forms of deterministic parallelism have been described and analyzed. Here we are concerned with one of the strongest forms, requiring that for any input there is a unique dependence graph representing a trace of the computation annotated with every operation and value. This has been referred to as internal determinism, and implies a sequential semantics, i.e., considering any sequential traversal of the dependence graph is sufficient for analyzing the correctness of the code. In addition to returning deterministic results, internal determinism has many advantages including ease of reasoning about the code, ease of verifying correctness, ease of debugging, ease of defining invariants, ease of defining good coverage for testing, and ease of formally, informally and experimentally reasoning about performance. On the other hand, one needs to consider the possible downsides of determinism, which might include making algorithms (i) more complicated, unnatural or special purpose and/or (ii) slower or less scalable. In this paper we study the effectiveness of this strong form of determinism through a broad set of benchmark problems. Our main contribution is to demonstrate that for this wide body of problems, there exist efficient internally deterministic algorithms, and moreover that these algorithms are natural to reason about and not complicated to code. We leverage an approach to determinism suggested by Steele (1990), which is to use nested parallelism with commutative operations. Our algorithms apply several diverse programming paradigms that fit within the model including (i) a strict functional style (no shared state among concurrent operations), (ii) an approach we refer to as deterministic reservations, and (iii) the use of commutative, linearizable operations on data structures. We describe algorithms for the benchmark problems that use these deterministic approaches and present performance results on a 32-core machine. Perhaps surprisingly, for all problems, our internally deterministic algorithms achieve good speedup and good performance even relative to prior nondeterministic solutions.
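The following toy sketch conveys the flavor of the deterministic-reservations approach on greedy maximal independent set: in each round, every live vertex conceptually reserves its neighborhood with its priority, and a vertex commits only if it holds the minimum priority there, so any execution order yields the same result. This is a simplified sequential rendering for illustration, not the authors' benchmark code.

def mis_deterministic_reservations(n, edges):
    # Maximal independent set via rounds of reserve/commit. With
    # vertex ids as priorities, the output equals the sequential
    # greedy MIS regardless of execution order (internal determinism).
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    live, in_mis = set(range(n)), set()
    while live:
        # Reserve phase (conceptually parallel over live vertices):
        # v wins if it has the smallest id in its live neighborhood.
        winners = {v for v in live
                   if all(v < u for u in adj[v] if u in live)}
        # Commit phase: winners join the MIS, their neighbors retire.
        in_mis |= winners
        live -= winners
        for v in winners:
            live -= adj[v]
    return in_mis

print(sorted(mis_deterministic_reservations(5, [(0, 1), (1, 2), (2, 3), (3, 4)])))
# [0, 2, 4] -- identical for every schedule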

141 citations


Journal ArticleDOI
TL;DR: In this paper, a vectorized version of the spherical harmonic transform (SHT) algorithm based on the Gauss-Legendre quadrature is proposed and implemented in the SHTns library, which includes scalar and vector transforms.
Abstract: In this paper, we report on very efficient algorithms for the spherical harmonic transform (SHT). Explicitly vectorized variations of the algorithm based on the Gauss-Legendre quadrature are discussed and implemented in the SHTns library which includes scalar and vector transforms. The main breakthrough is to achieve very efficient on-the-fly computations of the Legendre associated functions, even for very high resolutions, by taking advantage of the specific properties of the SHT and the advanced capabilities of current and future computers. This allows us to simultaneously and significantly reduce memory usage and computation time of the SHT. We measure the performance and accuracy of our algorithms. Even though the complexity of the algorithms implemented in SHTns is in $O(N^3)$ (where N is the maximum harmonic degree of the transform), they perform much better than any third party implementation, including lower complexity algorithms, even for truncations as high as N=1023. SHTns is available at this https URL as open source software.
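For reference, the sketch below performs a brute-force spherical harmonic analysis with Gauss-Legendre nodes in cos(theta) and a uniform grid in phi, using numpy and scipy. It shows only the quadrature structure that SHTns accelerates, not the vectorized on-the-fly Legendre recurrences that are the paper's contribution.

import numpy as np
from scipy.special import sph_harm

def analyze(f, lmax):
    # Coefficients c[(l, m)] of f(theta, phi) up to degree lmax, via
    # Gauss-Legendre quadrature in cos(theta) (exact for band-limited
    # integrands at this resolution) and a uniform grid in phi.
    nlat, nphi = lmax + 1, 2 * lmax + 1
    x, w = np.polynomial.legendre.leggauss(nlat)  # nodes in cos(theta)
    theta = np.arccos(x)
    phi = 2 * np.pi * np.arange(nphi) / nphi
    T, P = np.meshgrid(theta, phi, indexing="ij")
    F = f(T, P)
    coeffs = {}
    for l in range(lmax + 1):
        for m in range(-l, l + 1):
            Y = sph_harm(m, l, P, T)  # scipy argument order: m, l, azimuth, polar
            coeffs[(l, m)] = (2 * np.pi / nphi) * np.sum(w[:, None] * F * np.conj(Y))
    return coeffs

# Analyzing Re(Y_3^2) recovers weight 1/2 on modes (3, 2) and (3, -2).
c = analyze(lambda t, p: sph_harm(2, 3, p, t).real, lmax=4)
print(round(abs(c[(3, 2)]), 3))  # 0.5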

Proceedings ArticleDOI
01 Oct 2012
TL;DR: A distributed proximal-gradient method is presented for optimizing the average of convex functions, each of which is the private local objective of an agent in a network with time-varying topology; it is shown to converge at the rate 1/k, faster than the existing distributed methods for solving this problem.
Abstract: We present a distributed proximal-gradient method for optimizing the average of convex functions, each of which is the private local objective of an agent in a network with time-varying topology. The local objectives have distinct differentiable components, but they share a common nondifferentiable component, which has a favorable structure suitable for effective computation of the proximal operator. In our method, each agent iteratively updates its estimate of the global minimum by optimizing its local objective function, and exchanging estimates with others via communication in the network. Using Nesterov-type acceleration techniques and multiple communication steps per iteration, we show that this method converges at the rate 1/k (where k is the number of communication rounds between the agents), which is faster than the convergence rate of the existing distributed methods for solving this problem. The superior convergence rate of our method is also verified by numerical experiments.
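A stripped-down sketch of the setting (not the accelerated method of the paper): each agent takes a gradient step on its private smooth term, the network averages estimates through a doubly stochastic mixing matrix, and all agents apply the shared proximal operator, here soft-thresholding for an l1 term. Nesterov-type acceleration and the multiple communication steps per iteration that yield the 1/k rate are omitted; the problem data below are illustrative assumptions.

import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1, the shared nonsmooth term.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def distributed_prox_grad(A_list, b_list, W, lam=0.1, step=0.01, iters=500):
    # Agent i privately holds the smooth term (1/2)||A_i x - b_i||^2;
    # W is a doubly stochastic mixing matrix on the communication graph.
    m, d = len(A_list), A_list[0].shape[1]
    X = np.zeros((m, d))  # row i = agent i's current estimate
    for _ in range(iters):
        G = np.stack([A.T @ (A @ x - b)
                      for A, b, x in zip(A_list, b_list, X)])
        X = W @ (X - step * G)              # local step + one gossip round
        X = soft_threshold(X, step * lam)   # shared proximal step
    return X

rng = np.random.default_rng(0)
A_list = [rng.standard_normal((20, 5)) for _ in range(4)]
x_true = np.array([1.0, 0.0, -2.0, 0.0, 0.5])
b_list = [A @ x_true for A in A_list]
W = np.full((4, 4), 0.25)  # complete graph: plain averaging
X = distributed_prox_grad(A_list, b_list, W)
print(np.round(X.mean(axis=0), 2))  # all agents agree near x_true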

Journal ArticleDOI
TL;DR: The special space-time computational techniques introduced recently are applied to computation of the aerodynamics of flapping wings, specifically locust wings, where the prescribed motion and deformation of the wings are based on digital data extracted from the videos of the locust in a wind tunnel.
Abstract: We present the special space-time computational techniques we have introduced recently for computation of flow problems with moving and deforming solid surfaces. The techniques have been designed in the context of the deforming-spatial-domain/stabilized space-time formulation, which was developed by the Team for Advanced Flow Simulation and Modeling for computation of flow problems with moving boundaries and interfaces. The special space-time techniques are based on using, in the space-time flow computations, non-uniform rational B-splines (NURBS) basis functions for the temporal representation of the motion and deformation of the solid surfaces and also for the motion and deformation of the volume meshes computed. This provides a better temporal representation of the solid surfaces and a more effective way of handling the volume-mesh motion. We apply these techniques to computation of the aerodynamics of flapping wings, specifically locust wings, where the prescribed motion and deformation of the wings are based on digital data extracted from the videos of the locust in a wind tunnel. We report results from the preliminary computations.

Journal ArticleDOI
TL;DR: In this article, two moment-independent importance measures of a basic variable are proposed, defined respectively on the failure probability and on the distribution function of the output of a structure or system in reliability engineering; combining these with the highly efficient state dependent parameter (SDP) method for calculating the conditional moments of the model output, an SDP solution is established for both importance measures.

Journal ArticleDOI
TL;DR: The results of this article substantially enlarge the theoretically tractable application domain of morphological computation in robotics, and also provide new paradigms for understanding control principles of biological organisms.
Abstract: The generation of robust periodic movements of complex nonlinear robotic systems is inherently difficult, especially, if parts of the robots are compliant. It has previously been proposed that complex nonlinear features of a robot, similarly as in biological organisms, might possibly facilitate its control. This bold hypothesis, commonly referred to as morphological computation, has recently received some theoretical support by Hauser et al. (Biol Cybern 105:355–370, doi: 10.1007/s00422-012-0471-0, 2012). We show in this article that this theoretical support can be extended to cover not only the case of fading memory responses to external signals, but also the essential case of autonomous generation of adaptive periodic patterns, as, e.g., needed for locomotion. The theory predicts that feedback into the morphological computing system is necessary and sufficient for such tasks, for which a fading memory is insufficient. We demonstrate the viability of this theoretical analysis through computer simulations of complex nonlinear mass-spring systems that are trained to generate a large diversity of periodic movements by adapting the weights of a simple linear feedback device. Hence, the results of this article substantially enlarge the theoretically tractable application domain of morphological computation in robotics, and also provide new paradigms for understanding control principles of biological organisms.
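The core recipe, adapting only the weights of a simple linear feedback device, can be sketched with a generic random nonlinear system standing in for the paper's mass-spring networks: train a linear readout by ridge regression under teacher forcing, then close the feedback loop so the pattern is generated autonomously. All dimensions and scalings below are illustrative assumptions, not the paper's setup.

import numpy as np

rng = np.random.default_rng(1)
N, T, washout = 200, 2000, 200
W_res = 0.9 * rng.standard_normal((N, N)) / np.sqrt(N)  # fixed internal couplings
w_in = 0.5 * rng.standard_normal(N)                     # fixed feedback-in weights

target = np.sin(2 * np.pi * np.arange(T) / 50)          # desired periodic movement

# Teacher forcing: drive the nonlinear system with the target itself.
x, states = np.zeros(N), []
for t in range(T):
    x = np.tanh(W_res @ x + w_in * target[t])
    states.append(x.copy())
S = np.array(states[washout:])
y = target[washout + 1:]

# Only the linear feedback weights w are trained (ridge regression).
A = S[:-1]
w = np.linalg.solve(A.T @ A + 1e-4 * np.eye(N), A.T @ y)

# Closed loop: feed the readout back -- autonomous pattern generation.
x, u, out = states[-1].copy(), target[-1], []
for t in range(300):
    x = np.tanh(W_res @ x + w_in * u)
    u = x @ w
    out.append(u)
print(np.round(out[:5], 2))  # ideally continues the sine; depends on the draw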

Journal ArticleDOI
TL;DR: A new algorithm, SG2, is proposed that is faster than the other three and offers the same level of accuracy as the most accurate of them, in terms of maximum error in the solar vector over a multi-decadal time period, illustrated here with a 50-year period, 1980–2030.

Book
29 Nov 2012
TL;DR: Contents: A Little History; Chemistry and Computation; A Little Logic and Computation; A Little Photochemistry and Luminescence; Single Input-Single Output Systems; Subject Index.
Abstract: Contents: A Little History; Chemistry and Computation; A Little Logic and Computation; A Little Photochemistry and Luminescence; Single Input-Single Output Systems; Subject Index.

Book ChapterDOI
07 Oct 2012
TL;DR: This paper presents a new epipolar constraint for computing the rotation between two images independently of the translation, and shows for the first time how the constraint on the rotation has the advantage of remaining exact even in the case of translations converging to zero.
Abstract: In this paper, we present a new epipolar constraint for computing the rotation between two images independently of the translation. Against the common belief in the field of geometric vision that it is not possible to find one independently of the other, we show how this can be achieved by relatively simple two-view constraints. We use the fact that translation and rotation cause fundamentally different flow fields on the unit sphere centered around the camera. This allows us to establish independent constraints on translation and rotation, and the latter is solved using the Gröbner basis method. The rotation computation is completed by a solution to the cheirality problem that depends neither on translation nor on feature triangulations. Notably, we show for the first time how the constraint on the rotation has the advantage of remaining exact even in the case of translations converging to zero. We use this fact in order to remove the error caused by model selection via a non-linear optimization of rotation hypotheses. We show that our method operates in real-time and compare it to a standard existing approach in terms of both speed and accuracy.

Journal ArticleDOI
TL;DR: In this article, an analytical computation of the full gravity tensor from a polyhedral source of homogeneous density is presented, with emphasis on its algorithmic implementation, based on the subsequent transition of the general expressions from volume to surface and from surface to line integrals, defined along the closed polygons building each polyhedral face.
Abstract: The analytical computation of the full gravity tensor from a polyhedral source of homogeneous density is presented, with emphasis on its algorithmic implementation. The theoretical development is based on the subsequent transition of the general expressions from volume to surface and from surface to line integrals, defined along the closed polygons building each polyhedral face. However, the accurate numerical computation of the obtained transcendental expressions is linked with the relative position of the computation point and its corresponding projections on the plane of each face and on the line of each segment with respect to the polygons defining each face. Depending on this geometric setup, the application of the divergence theorem of Gauss leads to the appearance of additional correction terms, valid only for these boundary conditions and crucial for the correct numerical evaluation of the polyhedral-related gravity quantities at those locations of the computation point. A program in Fortran is su...

Posted Content
TL;DR: In this paper, a stable measure of sparsity s(x) is proposed, which is a sharp lower bound on the sparsity of the unknown signal x. The estimation procedure uses only a small number of linear measurements, does not rely on any sparsity assumptions and requires very little computation.
Abstract: In the theory of compressed sensing (CS), the sparsity $\|x\|_0$ of the unknown signal $x \in \mathbb{R}^p$ is commonly assumed to be a known parameter. However, it is typically unknown in practice. Due to the fact that many aspects of CS depend on knowing $\|x\|_0$, it is important to estimate this parameter in a data-driven way. A second practical concern is that $\|x\|_0$ is a highly unstable function of $x$. In particular, for real signals with entries not exactly equal to 0, the value $\|x\|_0 = p$ is not a useful description of the effective number of coordinates. In this paper, we propose to estimate a stable measure of sparsity $s(x) := \|x\|_1^2 / \|x\|_2^2$, which is a sharp lower bound on $\|x\|_0$. Our estimation procedure uses only a small number of linear measurements, does not rely on any sparsity assumptions, and requires very little computation. A confidence interval for $s(x)$ is provided, and its width is shown to have no dependence on the signal dimension $p$. Moreover, this result extends naturally to the matrix recovery setting, where a soft version of matrix rank can be estimated with analogous guarantees. Finally, we show that the use of randomized measurements is essential to estimating $s(x)$. This is accomplished by proving that the minimax risk for estimating $s(x)$ with deterministic measurements is large when $n <$
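A quick numeric illustration of the stable sparsity measure (the paper's estimation-from-random-measurements procedure is not reproduced here): s(x) lower-bounds the l0 norm by Cauchy-Schwarz and, unlike the l0 norm, barely moves under tiny perturbations.

import numpy as np

def numerical_sparsity(x):
    # s(x) = ||x||_1^2 / ||x||_2^2 <= ||x||_0, with equality when all
    # nonzero entries of x have equal magnitude (Cauchy-Schwarz).
    return np.linalg.norm(x, 1) ** 2 / np.linalg.norm(x, 2) ** 2

rng = np.random.default_rng(0)
p, k = 1000, 10
x = np.zeros(p)
x[:k] = rng.standard_normal(k)

print(numerical_sparsity(x) <= np.count_nonzero(x))  # True
print(round(numerical_sparsity(x), 2))               # on the order of k

# ||x||_0 jumps to p under tiny noise; s(x) is essentially unchanged.
x_noisy = x + 1e-9 * rng.standard_normal(p)
print(np.count_nonzero(x_noisy), round(numerical_sparsity(x_noisy), 2))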

Journal ArticleDOI
TL;DR: A variational approach is applied to obtain a useful analytical bound on the quantum precision in the estimation of phase shifts under phase diffusion, which shows that the estimation uncertainty cannot be smaller than a noise-dependent constant.
Abstract: The minimum achievable statistical uncertainty in the estimation of physical parameters is determined by the quantum Fisher information. Its computation for noisy systems is still a challenging problem. Using a variational approach, we present an equation for obtaining the quantum Fisher information, which has an explicit dependence on the mathematical description of the noise. This method is applied to obtain a useful analytical bound to the quantum precision in the estimation of phase-shifts under phase diffusion, which shows that the estimation uncertainty cannot be smaller than a noise-dependent constant.

Journal ArticleDOI
TL;DR: A novel trajectory computation algorithm to smooth piecewise linear collision-free trajectories computed by sample-based motion planners and a fast and reliable algorithm for collision checking between a robot and the environment along the B-spline trajectories.
Abstract: We present a novel trajectory computation algorithm to smooth piecewise linear collision-free trajectories computed by sample-based motion planners. Our approach uses cubic B-splines to generate trajectories that are $C^2$ almost everywhere, except on a few isolated points. The algorithm performs local spline refinement to compute smooth, collision-free trajectories and it works well even in environments with narrow passages. We also present a fast and reliable algorithm for collision checking between a robot and the environment along the B-spline trajectories. We highlight the performance of our algorithm on complex benchmarks, including path computation for rigid and articulated models in cluttered environments.
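The basic smoothing step (without the paper's local refinement and collision checking) can be sketched with scipy: fit a cubic B-spline, which is $C^2$ in the interior, to a piecewise linear path. The function and parameter choices here are illustrative assumptions.

import numpy as np
from scipy.interpolate import splprep, splev

def smooth_path(waypoints, smoothing=0.5, samples=200):
    # Fit a cubic (k=3) B-spline to a piecewise linear path; the
    # paper's algorithm would now collision-check the spline and
    # refine it locally (e.g., in narrow passages).
    pts = np.asarray(waypoints, dtype=float).T  # shape (dim, n_points)
    tck, _ = splprep(pts, s=smoothing, k=3)
    u = np.linspace(0.0, 1.0, samples)
    return np.array(splev(u, tck)).T            # (samples, dim)

zigzag = [(0, 0), (1, 2), (2, 0), (3, 2), (4, 0)]
smooth = smooth_path(zigzag)
print(smooth[0], smooth[-1])  # stays near the original endpoints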

Journal ArticleDOI
TL;DR: The cat swarm optimization (CSO) strategy is adopted to obtain the optimal or near optimal solution of the stego-image quality problem and the experimental results show that the proposed scheme can obtain a better solution with less computation time.

Book ChapterDOI
01 Jan 2012
TL;DR: This chapter surveys recent work on the perspective reformulation approach, which generates tight, tractable relaxations for convex mixed integer nonlinear programs (MINLPs), and discusses a variety of practical MINLPs whose relaxation can be strengthened via the perspective reformulation.
Abstract: In this paper we survey recent work on the perspective reformulation approach that generates tight, tractable relaxations for convex mixed integer nonlinear programs (MINLPs). This preprocessing technique is applicable to cases where the MINLP contains binary indicator variables that force continuous decision variables to take the value 0, or to belong to a convex set. We derive from first principles the perspective reformulation, and we discuss a variety of practical MINLPs whose relaxation can be strengthened via the perspective reformulation. The survey concludes with comments and computations comparing various algorithmic techniques for solving perspective reformulations.
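A one-variable numeric illustration of the idea (a textbook example, not taken from the survey itself): for minimizing c z + x^2 with 0 <= x <= u z and a binary indicator z, relaxing z to [0, 1] and replacing x^2 by its perspective x^2/z yields a strictly larger, hence tighter, relaxation value at fractional points.

# minimize c*z + x^2  subject to  0 <= x <= u*z,  z in {0, 1}.
# Relaxing z to [0, 1]: the standard relaxation keeps x^2, while the
# perspective reformulation uses x^2 / z, which agrees with x^2 at
# z in {0, 1} but is the tightest convex underestimator in between.

def standard_obj(x, z, c=1.0):
    return c * z + x ** 2

def perspective_obj(x, z, c=1.0):
    if z == 0.0:
        return 0.0 if x == 0.0 else float("inf")
    return c * z + x ** 2 / z

# Fractional point (feasible for, e.g., u = 2, since x <= u*z):
x, z = 0.5, 0.25
print(standard_obj(x, z), perspective_obj(x, z))  # 0.5 vs 1.25: tighter bound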

Book ChapterDOI
19 Mar 2012
TL;DR: A general compiler that transforms any cryptographic scheme into a functionally equivalent scheme which is resilient to any continual leakage, and does not make use of public key encryption, which was required in all previous works.
Abstract: Physical cryptographic devices inadvertently leak information through numerous side-channels. Such leakage is exploited by so-called side-channel attacks, which often allow for a complete security breach. A recent trend in cryptography is to propose formal models to incorporate leakage into the model and to construct schemes that are provably secure within them. We design a general compiler that transforms any cryptographic scheme, e.g., a block-cipher, into a functionally equivalent scheme which is resilient to any continual leakage provided that the following three requirements are satisfied: (i) in each observation the leakage is bounded, (ii) different parts of the computation leak independently, and (iii) the randomness that is used for certain operations comes from a simple (non-uniform) distribution. In contrast to earlier work on leakage resilient circuit compilers, which relied on computational assumptions, our results are purely information-theoretic. In particular, we do not make use of public key encryption, which was required in all previous works.

Journal ArticleDOI
TL;DR: The algorithm mixes centralized and decentralized approaches dynamically at different scales to produce a fast, robust method that is accurate and scalable, and reduces both the global communication and unnecessary repeated computation.
Abstract: This paper introduces an approach that scales assignment algorithms to large numbers of robots and tasks. It is especially suitable for dynamic task allocations since both task locality and sparsity can be effectively exploited. We observe that an assignment can be computed through coarsening and partitioning operations on the standard utility matrix via a set of mature partitioning techniques and programs. The algorithm mixes centralized and decentralized approaches dynamically at different scales to produce a fast, robust method that is accurate and scalable, and reduces both the global communication and unnecessary repeated computation. An allocation results by operating on each partition: either the steps are repeated recursively to refine the generalized assignment, or each sub-problem may be solved by an existing algorithm. The results suggest that only a minor sacrifice in solution quality is needed for significant gains in efficiency. The algorithm is validated using extensive simulation experiments and the results show advantages over the traditional optimal assignment algorithms.
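A coarse sketch of the partition-then-assign idea using scipy (a simple stand-in for the paper's graph-partitioning machinery): order robots and tasks along one coordinate, cut both into equal-size blocks, and solve each block optimally with the Hungarian method, trading a small loss in solution quality for much smaller subproblems.

import numpy as np
from scipy.optimize import linear_sum_assignment

def partitioned_assignment(robots, tasks, parts=2):
    # Exploit task locality: split robots and tasks into spatial
    # blocks of equal size, then solve each block independently.
    r_order = np.argsort(robots[:, 0])
    t_order = np.argsort(tasks[:, 0])
    pairs, total = [], 0.0
    for r_blk, t_blk in zip(np.array_split(r_order, parts),
                            np.array_split(t_order, parts)):
        cost = np.linalg.norm(robots[r_blk, None] - tasks[None, t_blk], axis=2)
        ri, ti = linear_sum_assignment(cost)
        pairs += [(r_blk[i], t_blk[j]) for i, j in zip(ri, ti)]
        total += cost[ri, ti].sum()
    return pairs, total

rng = np.random.default_rng(2)
robots, tasks = rng.random((100, 2)), rng.random((100, 2))
_, partitioned_cost = partitioned_assignment(robots, tasks, parts=2)
cost = np.linalg.norm(robots[:, None] - tasks[None, :], axis=2)
ri, ti = linear_sum_assignment(cost)
print(round(partitioned_cost, 3), round(cost[ri, ti].sum(), 3))
# partitioned total is close to the global optimum, at lower cost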

Journal ArticleDOI
TL;DR: An efficient algorithm that computes the Morse–Smale complex for 3D gray-scale images and allows for the computation of persistent homology for large data on commodity hardware is proposed.
Abstract: We propose an efficient algorithm that computes the Morse–Smale complex for 3D gray-scale images. This complex allows for an efficient computation of persistent homology since it is, in general, much smaller than the input data but still contains all necessary information. Our method improves a recently proposed algorithm to extract the Morse–Smale complex in terms of memory consumption and running time. It also allows for a parallel computation of the complex. The computational complexity of the Morse–Smale complex extraction solely depends on the topological complexity of the input data. The persistence is then computed using the Morse–Smale complex by applying an existing algorithm with a good practical running time. We demonstrate that our method allows for the computation of persistent homology for large data on commodity hardware.

Posted Content
TL;DR: In this article, it was shown that the universe can be regarded as a giant quantum computer, and that the quantum computational model of the universe explains a variety of observed phenomena not encompassed by the ordinary laws of physics.
Abstract: This article reviews the history of digital computation, and investigates just how far the concept of computation can be taken. In particular, I address the question of whether the universe itself is in fact a giant computer, and if so, just what kind of computer it is. I will show that the universe can be regarded as a giant quantum computer. The quantum computational model of the universe explains a variety of observed phenomena not encompassed by the ordinary laws of physics. In particular, the model shows that the quantum computational universe automatically gives rise to a mix of randomness and order, and to both simple and complex systems.

Journal ArticleDOI
TL;DR: An effective integration approach for voxel‐based models of linear elasticity that drastically reduces the computational effort on cell level is presented and several benchmark problems show the potential of the proposed method in particular for heterogeneous material properties as common in biomedical applications based on computer tomography scans.
Abstract: The finite cell method is a fictitious domain approach based on hierarchical Ansatz spaces of higher order. The method avoids time-consuming and often error-prone mesh generation and favorably exploits Cartesian grids to embed structures of complex geometry in a simple-shaped computational domain, thus shifting parts of the computational effort from mesh generation to the computation within the embedding finite cells of regular shape. This paper presents an effective integration approach for voxel-based models of linear elasticity that drastically reduces the computational effort on cell level. The applied strategy allows the pre-computation of an essential part of the cell matrices and vectors of higher order, representing stiffness and load, respectively. Several benchmark problems show the potential of the proposed method, in particular for heterogeneous material properties as common in biomedical applications based on computer tomography scans. The applied strategy ensures a fast computation for time-critical simulations and even allows user-interactive simulations for models of moderate size at a high level of accuracy.
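The pre-computation idea can be sketched in a drastically simplified scalar (heat-conduction) analogue, rather than the paper's higher-order elasticity cells: the element matrix of a unit voxel is computed once, and each voxel contributes that matrix scaled by its own material coefficient, so assembling a CT-style heterogeneous grid needs no per-voxel integration.

import numpy as np

# Bilinear-quad element matrix for the Laplace operator on the unit
# square (classical exact result) -- computed once, reused per voxel.
KE_UNIT = np.array([[ 4, -1, -2, -1],
                    [-1,  4, -1, -2],
                    [-2, -1,  4, -1],
                    [-1, -2, -1,  4]]) / 6.0

def assemble(kappa):
    # Global matrix for an (ny, nx) voxel image of material
    # coefficients: voxel (j, i) adds kappa[j, i] * KE_UNIT.
    ny, nx = kappa.shape
    n_nodes = (ny + 1) * (nx + 1)
    K = np.zeros((n_nodes, n_nodes))
    node = lambda j, i: j * (nx + 1) + i
    for j in range(ny):
        for i in range(nx):
            dofs = [node(j, i), node(j, i + 1),
                    node(j + 1, i + 1), node(j + 1, i)]
            K[np.ix_(dofs, dofs)] += kappa[j, i] * KE_UNIT
    return K

kappa = np.ones((4, 4))
kappa[:, 2:] = 10.0  # two-material "CT image"
K = assemble(kappa)
print(K.shape, bool(np.allclose(K, K.T)))  # (25, 25) True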

Journal ArticleDOI
Roberto Grena
TL;DR: Five algorithms for sun position computation, valid from 2010 to 2110, are proposed and discussed, covering a wide range of possible applications and allowing their use even in long-term projects.
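For context, a generic low-accuracy solar position routine is sketched below using standard Spencer/NOAA-style formulas; it is explicitly not one of the five algorithms of the paper, whose accuracy over 2010-2110 is far higher, but it shows the typical inputs and outputs such algorithms share.

import math

def sun_position(day_of_year, hour_utc, lat_deg, lon_deg):
    # Rough solar elevation/azimuth in degrees from standard
    # low-accuracy textbook formulas (longitude positive east).
    g = 2 * math.pi / 365.0 * (day_of_year - 1 + (hour_utc - 12) / 24)
    decl = (0.006918 - 0.399912 * math.cos(g) + 0.070257 * math.sin(g)
            - 0.006758 * math.cos(2 * g) + 0.000907 * math.sin(2 * g)
            - 0.002697 * math.cos(3 * g) + 0.00148 * math.sin(3 * g))
    eot = 229.18 * (0.000075 + 0.001868 * math.cos(g)
                    - 0.032077 * math.sin(g) - 0.014615 * math.cos(2 * g)
                    - 0.040849 * math.sin(2 * g))          # minutes
    tst = hour_utc * 60 + eot + 4 * lon_deg                # true solar time
    ha = math.radians(tst / 4 - 180)                       # hour angle
    lat = math.radians(lat_deg)
    sin_el = (math.sin(lat) * math.sin(decl)
              + math.cos(lat) * math.cos(decl) * math.cos(ha))
    az = math.atan2(-math.sin(ha),
                    math.tan(decl) * math.cos(lat)
                    - math.sin(lat) * math.cos(ha))        # from north, eastward
    return math.degrees(math.asin(sin_el)), math.degrees(az) % 360

print(sun_position(172, 12.0, 45.0, 0.0))  # near-solstice noon: high sun, ~south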