
Showing papers on "Computation published in 2015"


Posted Content
TL;DR: This paper casts conditional computation in deep networks as a reinforcement learning problem, applies a policy gradient algorithm to learn activation-dependent dropout policies, proposes a regularization mechanism that encourages diversification of the dropout policy, and presents encouraging empirical results showing that the approach improves the speed of computation without impacting the quality of the approximation.
Abstract: Deep learning has become the state-of-the-art tool in many applications, but the evaluation and training of deep models can be time-consuming and computationally expensive. The conditional computation approach has been proposed to tackle this problem (Bengio et al., 2013; Davis & Arel, 2013). It operates by selectively activating only parts of the network at a time. In this paper, we use reinforcement learning as a tool to optimize conditional computation policies. More specifically, we cast the problem of learning activation-dependent policies for dropping out blocks of units as a reinforcement learning problem. We propose a learning scheme motivated by computation speed, capturing the idea of wanting to have parsimonious activations while maintaining prediction accuracy. We apply a policy gradient algorithm for learning policies that optimize this loss function and propose a regularization mechanism that encourages diversification of the dropout policy. We present encouraging empirical results showing that this approach improves the speed of computation without impacting the quality of the approximation.

277 citations
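The block-dropout idea above lends itself to a compact illustration. The sketch below is a minimal NumPy rendering of an activation-dependent Bernoulli policy trained with a REINFORCE-style update, assuming a single gating layer and a hand-rolled reward that trades accuracy against the number of active blocks; it is not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_block_mask(h, W_pi):
    """Activation-dependent Bernoulli policy over blocks of units."""
    keep_prob = 1.0 / (1.0 + np.exp(-h @ W_pi))            # one keep-probability per block
    mask = (rng.random(keep_prob.shape) < keep_prob).astype(float)
    return mask, keep_prob

def reinforce_step(W_pi, h, mask, keep_prob, loss, sparsity_weight=0.1, lr=1e-3):
    """One REINFORCE update: the reward favours low loss and parsimonious activations."""
    reward = -loss - sparsity_weight * mask.sum()
    grad_log_pi = np.outer(h, mask - keep_prob)            # d log Bernoulli(mask) / d W_pi
    return W_pi + lr * reward * grad_log_pi

# Toy usage: 32 activations gating 8 blocks of units.
h = rng.standard_normal(32)
W_pi = 0.01 * rng.standard_normal((32, 8))
mask, p = sample_block_mask(h, W_pi)
W_pi = reinforce_step(W_pi, h, mask, p, loss=0.7)
```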


Journal ArticleDOI
TL;DR: Persistent homology (PH) is a method used in topological data analysis (TDA) to study qualitative features of data that persist across multiple scales; it is robust to perturbations of input data, independent of dimensions and coordinates, and provides a compact representation of the qualitative features of the input.
Abstract: Persistent homology (PH) is a method used in topological data analysis (TDA) to study qualitative features of data that persist across multiple scales. It is robust to perturbations of input data, independent of dimensions and coordinates, and provides a compact representation of the qualitative features of the input. The computation of PH is an open area with numerous important and fascinating challenges. The field of PH computation is evolving rapidly, and new algorithms and software implementations are being updated and released at a rapid pace. The purposes of our article are to (1) introduce theory and computational methods for PH to a broad range of computational scientists and (2) provide benchmarks of state-of-the-art implementations for the computation of PH. We give a friendly introduction to PH, navigate the pipeline for the computation of PH with an eye towards applications, and use a range of synthetic and real-world data sets to evaluate currently available open-source implementations for the computation of PH. Based on our benchmarking, we indicate which algorithms and implementations are best suited to different types of data sets. In an accompanying tutorial, we provide guidelines for the computation of PH. We make publicly available all scripts that we wrote for the tutorial, and we make available the processed version of the data sets used in the benchmarking.

233 citations
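To make the pipeline concrete, the snippet below computes persistence diagrams for a noisy circle with ripser.py, one open-source PH implementation of the kind benchmarked in the article; the data set and parameters are illustrative assumptions.

```python
import numpy as np
from ripser import ripser  # pip install ripser

# Noisy samples from a circle: we expect one long-lived feature in H1.
rng = np.random.default_rng(1)
theta = rng.uniform(0.0, 2.0 * np.pi, 200)
points = np.c_[np.cos(theta), np.sin(theta)] + 0.05 * rng.standard_normal((200, 2))

diagrams = ripser(points, maxdim=1)["dgms"]   # Vietoris-Rips persistence diagrams for H0 and H1
most_persistent = max(diagrams[1], key=lambda bd: bd[1] - bd[0])
print("most persistent 1-dimensional feature (birth, death):", most_persistent)
```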


Journal ArticleDOI
TL;DR: A numerical scheme geared for high-performance computation of wall-bounded turbulent flows by using a two-dimensional (pencil) domain decomposition and utilizing the favourable scaling of the CFL time-step constraint as compared to the diffusive time-step constraint.

200 citations



Journal ArticleDOI
TL;DR: The validation of the FIDVC algorithm shows that the technique provides a unique, fast and effective experimental approach for measuring non-linear 3D deformations with high spatial resolution.
Abstract: Digital volume correlation (DVC), the three-dimensional (3D) extension of digital image correlation (DIC), measures internal 3D material displacement fields by correlating intensity patterns within interrogation windows. In recent years DVC algorithms have gained increased attention in experimental mechanics, material science, and biomechanics. In particular, the application of DVC algorithms to quantify cell-induced material deformations has generated a demand for user-friendly and computationally efficient DVC approaches capable of detecting large, non-linear deformation fields. We address these challenges by presenting a fast iterative digital volume correlation method (FIDVC), which can be run on a personal computer with computation times on the order of 1–2 min. The FIDVC algorithm employs a unique deformation-warping scheme capable of capturing any general non-linear finite deformation. The validation of the FIDVC algorithm shows that our technique provides a unique, fast and effective experimental approach for measuring non-linear 3D deformations with high spatial resolution.

185 citations


Journal ArticleDOI
TL;DR: Connecting mathematical logic and computation, it ensures that some aspects of programming are absolute.
Abstract: Connecting mathematical logic and computation, it ensures that some aspects of programming are absolute.

172 citations


Book
26 Nov 2015
TL;DR: Second edition of a monograph on the stability, stabilization, and robust control of time-delay systems, covering eigenvalue-based stability analysis, robust fixed-order control, and applications ranging from congestion control and consensus problems to delay models in biosciences.
Abstract: Preface to the second edition. Preface to the first edition. List of symbols. Acronyms.
Part I. Stability Analysis of Linear Time-Delay Systems:
1. Spectral properties of linear time-delay systems
2. Computation of characteristic roots
3. Pseudospectra and robust stability analysis
4. Computation of H2 and H∞ norms
5. Computation of stability regions in parameter spaces
6. Stability regions in delay-parameter spaces
Part II. Stabilization and Robust Fixed-Order Control:
7. Stabilization using a direct eigenvalue optimization approach
8. Stabilizability with delayed feedback: a numerical case study
9. Optimization of H∞ norms
Part III. Applications:
10. Output feedback stabilization using delays as control parameters
11. Smith predictor for stable systems: delay sensitivity analysis
12. Controlling unstable systems using finite spectrum assignment
13. Congestion control algorithms in networks
14. Consensus problems with distributed delays, with traffic flow applications
15. Synchronization of delay-coupled oscillators
16. Stability analysis of delay models in biosciences
Appendix. Bibliography. Index.

169 citations


Book ChapterDOI
27 Sep 2015
TL;DR: This paper reports on recent improvements to the solver wasp and details the algorithms and design choices for addressing several reasoning tasks in ASP; an experimental analysis on publicly available benchmarks shows that the new version of wasp outperforms the previous one.
Abstract: ASP solvers address several reasoning tasks that go beyond the mere computation of answer sets. Among them are cautious reasoning, for modeling query entailment, and optimum answer set computation, for supporting numerical optimization. This paper reports on the recent improvements of the solver wasp, and details the algorithms and the design choices for addressing several reasoning tasks in ASP. An experimental analysis on publicly available benchmarks shows that the new version of wasp outperforms the previous one. Compared with the state-of-the-art solver clasp, the performance of wasp is competitive overall in terms of the number of solved instances and average execution time.

133 citations


Journal ArticleDOI
TL;DR: In this article, a gain-scheduling observer is proposed to estimate the sideslip angle from yaw rate measurements by employing the vehicle dynamics, and the observer gain can be determined by combining off-line and on-line computation.

107 citations


Posted Content
TL;DR: D-Wave’s latest quantum annealer, the D-Wave 2X system, is evaluated on an array of problem classes and it is found that it performs well on several input classes relative to state-of-the-art software solvers running single-threaded on a CPU.
Abstract: In the evaluation of quantum annealers, metrics based on ground state success rates have two major drawbacks. First, evaluation requires computation time for both quantum and classical processors that grows exponentially with problem size. This makes evaluation itself computationally prohibitive. Second, results are heavily dependent on the effects of analog noise on the quantum processors, which is an engineering issue that complicates the study of the underlying quantum annealing algorithm. We introduce a novel "time-to-target" metric which avoids these two issues by challenging software solvers to match the results obtained by a quantum annealer in a short amount of time. We evaluate D-Wave’s latest quantum annealer, the D-Wave 2X system, on an array of problem classes and find that it performs well on several input classes relative to state-of-the-art software solvers running single-threaded on a CPU.

106 citations
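One way to read the time-to-target metric: record a classical solver's anytime trace and ask when it first matches the best energy the annealer found in its short runtime. The helper below is a hedged sketch of that bookkeeping with an assumed trace format, not the paper's benchmarking code.

```python
def time_to_target(solver_trace, target_energy):
    """
    solver_trace: list of (elapsed_seconds, best_energy_so_far) pairs from a classical solver.
    target_energy: best energy the quantum annealer reached in its allotted time (lower is better).
    Returns the first time the solver matches the target, or None if it never does.
    """
    for elapsed, best_energy in solver_trace:
        if best_energy <= target_energy:
            return elapsed
    return None

# Example: the solver needs 3.2 s to reach the annealer's energy of -1204.0.
trace = [(0.5, -1180.0), (1.0, -1197.0), (3.2, -1204.0), (10.0, -1206.0)]
print(time_to_target(trace, target_energy=-1204.0))   # -> 3.2
```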


Proceedings ArticleDOI
25 May 2015
TL;DR: Experimental results demonstrate the effectiveness and efficiency of the proposed brain storm optimization algorithm in objective space.
Abstract: Brain storm optimization algorithm is a newly proposed swarm intelligence algorithm, which has two main operations, i.e., convergent operation and divergent operation. In the original brain storm optimization algorithm, a clustering algorithm is utilized to cluster individuals into clusters as the convergent operation, which is time-consuming because of the distance calculations during clustering. In this paper, a new convergent operation is proposed, implemented in the 1-dimensional objective space instead of in the solution space. As a consequence, its computation time depends only on the population size, not on the problem dimension; this yields a large saving in computation time and gives the algorithm good scalability. Experimental results demonstrate the effectiveness and efficiency of the proposed brain storm optimization algorithm in objective space.
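To picture the proposed convergent operation, individuals can be grouped by their one-dimensional fitness values instead of being clustered in the solution space, so the cost depends only on the population size. The sketch below is one plausible reading of that idea, not the paper's exact grouping rule.

```python
import numpy as np

def group_in_objective_space(fitness, n_groups):
    """Split the population into contiguous fitness bands (minimization assumed)."""
    order = np.argsort(fitness)              # one sort: cost independent of problem dimension
    bands = np.array_split(order, n_groups)  # each band plays the role of a cluster
    centers = [band[0] for band in bands]    # best individual of each band
    return bands, centers

fitness = np.random.default_rng(2).random(50)
bands, centers = group_in_objective_space(fitness, n_groups=5)
print([len(b) for b in bands], centers)
```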

Proceedings ArticleDOI
07 Dec 2015
TL;DR: In this article, a dynamic version of the successive shortest-path algorithm is introduced to solve the data association problem optimally while reusing computation, resulting in faster inference than standard solvers.
Abstract: One of the most popular approaches to multi-target tracking is tracking-by-detection. Current min-cost flow algorithms which solve the data association problem optimally have three main drawbacks: they are computationally expensive, they assume that the whole video is given as a batch, and they scale badly in memory and computation with the length of the video sequence. In this paper, we address each of these issues, resulting in a computationally and memory-bounded solution. First, we introduce a dynamic version of the successive shortest-path algorithm which solves the data association problem optimally while reusing computation, resulting in faster inference than standard solvers. Second, we address the optimal solution to the data association problem when dealing with an incoming stream of data (i.e., online setting). Finally, we present our main contribution which is an approximate online solution with bounded memory and computation which is capable of handling videos of arbitrary length while performing tracking in real time. We demonstrate the effectiveness of our algorithms on the KITTI and PETS2009 benchmarks and show state-of-the-art performance, while being significantly faster than existing solvers.
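For context, the batch min-cost flow formulation that the paper's dynamic successive shortest-path solver speeds up can be assembled in a few lines with networkx. The graph layout below (source/sink nodes, a detection reward, pairwise link costs, and a fixed number of tracks) is a simplified, assumed variant rather than the authors' exact model.

```python
import networkx as nx

def associate(detections, link_cost, num_tracks, det_reward=-10, max_link_cost=50):
    """
    detections: dict frame -> list of detection ids (unique across frames).
    link_cost(a, b): integer cost of linking detection a to detection b in the next frame.
    Returns the list of (a, b) links selected by the optimal batch data association.
    """
    G = nx.DiGraph()
    G.add_node("S", demand=-num_tracks)   # num_tracks units of flow leave the source
    G.add_node("T", demand=num_tracks)
    frames = sorted(detections)
    for f in frames:
        for d in detections[f]:
            G.add_edge("S", ("in", d), weight=0, capacity=1)                  # a track may start here
            G.add_edge(("in", d), ("out", d), weight=det_reward, capacity=1)  # reward for explaining d
            G.add_edge(("out", d), "T", weight=0, capacity=1)                 # a track may end here
    for f, g in zip(frames, frames[1:]):
        for a in detections[f]:
            for b in detections[g]:
                cost = link_cost(a, b)
                if cost <= max_link_cost:
                    G.add_edge(("out", a), ("in", b), weight=cost, capacity=1)
    flow = nx.min_cost_flow(G)
    return [(u[1], v[1]) for u, out in flow.items() if isinstance(u, tuple) and u[0] == "out"
            for v, amount in out.items() if isinstance(v, tuple) and amount > 0]
```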

Journal ArticleDOI
TL;DR: In this paper, a functional renormalization group approach is proposed for the direct computation of real-time correlation functions, also applicable at finite temperature and density, together with a general class of regulators that preserve the space-time symmetries and allow the computation of correlation functions at complex frequencies.
Abstract: We put forward a functional renormalization group approach for the direct computation of real-time correlation functions, also applicable at finite temperature and density. We construct a general class of regulators that preserve the space-time symmetries and allow the computation of correlation functions at complex frequencies. This includes both imaginary time and real time, and allows in particular the use of the plethora of imaginary-time results for the computation of real-time correlation functions. We also discuss real-time computation on the Keldysh contour with general spatial momentum regulators. Both setups give access to the general momentum and frequency dependence of correlation functions.
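For readers less familiar with the method, approaches of this kind are built on the exact flow equation for the effective average action, shown below in its standard form; the specific real-time regulators constructed in the paper are not reproduced here.

$$ \partial_t \Gamma_k[\phi] = \tfrac{1}{2}\,\mathrm{Tr}\!\left[\left(\Gamma_k^{(2)}[\phi] + R_k\right)^{-1} \partial_t R_k\right], \qquad t = \ln(k/\Lambda), $$

where $R_k$ is the regulator, $\Gamma_k^{(2)}$ is the second functional derivative of the effective average action, and the choice of $R_k$ (for instance, one preserving the space-time symmetries) determines whether the flow is evaluated at imaginary or real frequencies.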

Journal ArticleDOI
TL;DR: The paDIC method, combining an inverse compositional Gauss–Newton algorithm for sub-pixel registration with a fast Fourier transform-based cross-correlation (FFT-CC) algorithm for integer-pixel initial guess estimation, achieves superior computational efficiency over a DIC method running purely on the CPU.
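The FFT-CC integer-pixel initial guess mentioned above can be illustrated with a few lines of NumPy (on the CPU, whereas paDIC runs this step on the GPU); this is a generic sketch of FFT-based cross-correlation, not the paDIC code.

```python
import numpy as np

def fftcc_integer_shift(ref_subset, cur_subset):
    """Integer-pixel displacement of cur_subset relative to ref_subset via FFT cross-correlation."""
    corr = np.fft.ifft2(np.fft.fft2(cur_subset) * np.conj(np.fft.fft2(ref_subset))).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap peaks in the upper half of the spectrum back to negative shifts
    return tuple(p if p <= n // 2 else p - n for p, n in zip(peak, ref_subset.shape))

# Toy check: shift a random 64x64 subset by (3, -5) pixels and recover the shift.
rng = np.random.default_rng(3)
ref = rng.random((64, 64))
cur = np.roll(ref, shift=(3, -5), axis=(0, 1))
print(fftcc_integer_shift(ref, cur))   # -> (3, -5)
```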



Posted Content
TL;DR: This work presents a novel approach based on a combination of verified blind quantum computation and Bell state self-testing that has dramatically reduced overhead, with resources scaling as only $O(m^4 \ln m)$ in the number of gates.
Abstract: As progress on experimental quantum processors continues to advance, the problem of verifying the correct operation of such devices is becoming a pressing concern. The recent discovery of protocols for verifying computation performed by entangled but non-communicating quantum processors holds the promise of certifying the correctness of arbitrary quantum computations in a fully device-independent manner. Unfortunately, all known schemes have prohibitive overhead, with resources scaling as extremely high degree polynomials in the number of gates constituting the computation. Here we present a novel approach based on a combination of verified blind quantum computation and Bell state self-testing. This approach has dramatically reduced overhead, with resources scaling as only $O(m^4 \ln m)$ in the number of gates.


Book ChapterDOI
16 Aug 2015
TL;DR: In the two-party setting, constant-round protocols for concretely efficient secure computation exist that remain fast even over slow networks, whereas in the multi-party setting all concretely efficient fully-secure protocols, such as SPDZ, require many rounds of communication.
Abstract: Recently, there has been huge progress in the field of concretely efficient secure computation, even while providing security in the presence of malicious adversaries. This is especially the case in the two-party setting, where constant-round protocols exist that remain fast even over slow networks. However, in the multi-party setting, all concretely efficient fully-secure protocols, such as SPDZ, require many rounds of communication.

Journal ArticleDOI
TL;DR: This article is dedicated to the rapid computation of separable expansions for the approximation of random fields and provides an a posteriori error estimate for the pivoted Cholesky decomposition in terms of the trace.
Abstract: This article is dedicated to the rapid computation of separable expansions for the approximation of random fields. We consider approaches based on techniques from the approximation of non-local operators on the one hand and on the pivoted Cholesky decomposition on the other hand. In particular, we provide an a posteriori error estimate for the pivoted Cholesky decomposition in terms of the trace. Numerical examples are provided to validate and quantify the presented methods.
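A compact version of the pivoted Cholesky decomposition with a trace-based stopping rule, as referred to in the abstract, is sketched below for dense NumPy matrices; it follows the standard algorithm rather than the article's specific implementation.

```python
import numpy as np

def pivoted_cholesky(A, tol=1e-8, max_rank=None):
    """
    Low-rank factor L with L @ L.T approximating a symmetric positive semi-definite A.
    Stops when the trace of the residual (an a posteriori error bound) drops below tol.
    """
    n = A.shape[0]
    max_rank = n if max_rank is None else max_rank
    d = np.diag(A).astype(float).copy()        # diagonal of the current residual
    L = np.zeros((n, max_rank))
    for m in range(max_rank):
        error = d.sum()                        # trace of A - L L^T
        if error <= tol:
            return L[:, :m], error
        p = int(np.argmax(d))                  # pivot: largest remaining diagonal entry
        L[:, m] = (A[:, p] - L[:, :m] @ L[p, :m]) / np.sqrt(d[p])
        d -= L[:, m] ** 2
        d[d < 0] = 0.0                         # guard against round-off
    return L, d.sum()

# Example: a smooth covariance matrix is numerically low-rank.
x = np.linspace(0.0, 1.0, 200)
K = np.exp(-((x[:, None] - x[None, :]) ** 2) / 0.1)
L, err = pivoted_cholesky(K, tol=1e-6)
print(L.shape, err)
```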


Journal ArticleDOI
TL;DR: In this paper, an asymmetric bisection-based $M_{T2}$ calculation algorithm is described, which is shown to achieve better precision than the fastest and most popular existing bisection-based methods.
Abstract: An $M_{T2}$ calculation algorithm is described. It is shown to achieve better precision than the fastest and most popular existing bisection-based methods. Most importantly, it is also the first algorithm to be able to reliably calculate asymmetric $M_{T2}$ to machine precision, at speeds comparable to the fastest commonly used symmetric calculators.

Journal ArticleDOI
TL;DR: This work defines a circuit-based model of computation in a class of operationally-defined theories more general than quantum theory, and shows there exists a classical oracle relative to which efficient computation in any theory satisfying the causality assumption does not include $\mathsf{NP}$.
Abstract: From the general difficulty of simulating quantum systems using classical systems, and in particular the existence of an efficient quantum algorithm for factoring, it is likely that quantum computation is intrinsically more powerful than classical computation. At present, the best upper bound known for the power of quantum computation is that $\mathsf{BQP} \subseteq \mathsf{AWPP}$, where $\mathsf{AWPP}$ is a classical complexity class (known to be included in $\mathsf{PP}$, hence in $\mathsf{PSPACE}$). This work investigates limits on computational power that are imposed by simple physical, or information theoretic, principles. To this end, we define a circuit-based model of computation in a class of operationally-defined theories more general than quantum theory, and ask: what is the minimal set of physical assumptions under which the above inclusions still hold? We show that given only an assumption of tomographic locality (roughly, that multipartite states and transformations can be characterized by local measurements), efficient computations are contained in $\mathsf{AWPP}$. This inclusion still holds even without assuming a basic notion of causality (where the notion is, roughly, that probabilities for outcomes cannot depend on future measurement choices). Following Aaronson, we extend the computational model by allowing post-selection on measurement outcomes. Aaronson showed that the corresponding quantum complexity class, $\mathsf{PostBQP}$, is equal to $\mathsf{PP}$. Given only the assumption of tomographic locality, the inclusion in $\mathsf{PP}$ still holds for post-selected computation in general theories. Hence in a world with post-selection, quantum theory is optimal for computation in the space of all operational theories. We then consider whether one can obtain relativized complexity results for general theories. It is not obvious how to define a sensible notion of a computational oracle in the general framework that reduces to the standard notion in the quantum case. Nevertheless, it is possible to define computation relative to a 'classical oracle'. Then, we show there exists a classical oracle relative to which efficient computation in any theory satisfying the causality assumption does not include $\mathsf{NP}$.

Journal ArticleDOI
TL;DR: This work gives a new theoretical solution to a leading-edge experimental challenge, namely to the verification of quantum computations in the regime of high computational complexity, using a reduction to an entanglement-based protocol and showing that verification could be achieved at minimal cost compared to performing the computation.
Abstract: We give a new theoretical solution to a leading-edge experimental challenge, namely to the verification of quantum computations in the regime of high computational complexity. Our results are given in the language of quantum interactive proof systems. Specifically, we show that any language in $\mathsf{BQP}$ has a quantum interactive proof system with a polynomial-time classical verifier (who can also prepare random single-qubit pure states), and a quantum polynomial-time prover. Here, soundness is unconditional; i.e., it holds even for computationally unbounded provers. Compared to prior work achieving similar results, our technique does not require the encoding of the input or of the computation; instead, we rely on encryption of the input (together with a method to perform computations on encrypted inputs), and show that the random choice between three types of input (defining a computational run, versus two types of test runs) suffices. Because the overhead is very low for each run (it is linear in the size of the circuit), this shows that verification could be achieved at minimal cost compared to performing the computation. As a proof technique, we use a reduction to an entanglement-based protocol; to the best of our knowledge, this is the first time this technique has been used in the context of verification of quantum computations, and it enables a relatively straightforward analysis.


Journal ArticleDOI
TL;DR: This work implemented GPU-based full-waveform inversion using the wavefield reconstruction strategy, and adopted the Clayton-Enquist absorbing boundary to maintain the efficiency of the GPU computation.
Abstract: The graphics processing unit (GPU) has become a popular device for seismic imaging and inversion due to its superior speed-up performance. We implemented GPU-based full-waveform inversion using the wavefield reconstruction strategy. Because computation on the GPU was much faster than CPU-GPU data communication, in our implementation, the boundaries of the forward modeling were saved on the device to avert the issue of data transfer between the host and device. We adopted the Clayton-Enquist absorbing boundary to maintain the efficiency of the GPU computation. A hybrid nonlinear conjugate gradient algorithm combined with the parallel reduction scheme was used to do computation in GPU blocks. The numerical results confirmed the validity of our implementation.
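The search-direction update of a hybrid nonlinear conjugate gradient method can be written in a few lines; the particular hybrid rule below (combining the Polak-Ribière and Fletcher-Reeves formulas) is an assumption for illustration, since the abstract does not specify which hybrid is used, and the GPU parallel-reduction details are omitted.

```python
import numpy as np

def hybrid_cg_direction(grad, grad_prev, dir_prev):
    """
    Nonlinear CG direction with a hybrid beta: max(0, min(beta_PR, beta_FR)),
    one widely used rule that combines the PR and FR formulas.
    """
    beta_fr = (grad @ grad) / (grad_prev @ grad_prev)
    beta_pr = (grad @ (grad - grad_prev)) / (grad_prev @ grad_prev)
    beta = max(0.0, min(beta_pr, beta_fr))
    return -grad + beta * dir_prev

# In a full-waveform inversion loop, grad would be the misfit gradient after reduction over GPU blocks.
g_prev = np.array([1.0, -2.0, 0.5])
g = np.array([0.8, -1.5, 0.2])
print(hybrid_cg_direction(g, g_prev, -g_prev))
```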

Proceedings ArticleDOI
14 Jan 2015
TL;DR: A new elementary algebraic theory of quantum computation, built from unitary gates and measurement, is presented, and an equational theory for a quantum programming language is extracted from the algebraic theory.
Abstract: We develop a new framework of algebraic theories with linear parameters, and use it to analyze the equational reasoning principles of quantum computing and quantum programming languages. We use the framework as follows: we present a new elementary algebraic theory of quantum computation, built from unitary gates and measurement; we provide a completeness theorem for the elementary algebraic theory by relating it with a model from operator algebra; we extract an equational theory for a quantum programming language from the algebraic theory; we compare quantum computation with other local notions of computation by investigating variations on the algebraic theory.

Journal ArticleDOI
TL;DR: This study shows that, when used carefully, the adaptive mass scaling constitutes an efficient method to reduce the CPU computation time and should be considered for the development of future models.

Journal ArticleDOI
TL;DR: A fast approach for computing Meixner's discrete orthogonal moments is presented, using the recurrence relation with respect to the variable x instead of the order n in the computation of Meixner's discrete orthogonal polynomials, together with an image block representation for binary images and an intensity slice representation for gray-scale images.
Abstract: Recently, the discrete orthogonal moments have been introduced in image analysis and pattern recognition. They have better capabilities than the continuous orthogonal moments, but the use of these moments as feature descriptors is limited by their high computational cost. As a solution to this problem, we present in this paper an approach for the fast computation of Meixner's discrete orthogonal moments, by using the recurrence relation with respect to the variable x instead of the order n in the computation of Meixner's discrete orthogonal polynomials, and by using an image block representation for binary images and an intensity slice representation for gray-scale images. The acceleration of the computation of Meixner moments is due to an innovative image representation in which the image is described by a number of homogeneous rectangular blocks instead of individual pixels. A novel set of invariant moments based on the Meixner discrete orthogonal moments is also proposed. These invariant moments are derived algebraically from the geometric invariant moments, and their computation is accelerated using the same image representation scheme. The presented algorithms are tested on several well-known computer vision datasets with respect to computation time, image reconstruction, invariance, and classification. The performance of Meixner invariant moments used as pattern features for a pattern classification application is compared with Hu, Tchebichef, dual-Hahn, and Krawtchouk invariant moments.
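The speed-up from the block representation can be seen in a small sketch: when a binary image is stored as rectangular blocks, a moment reduces to a handful of prefix-sum lookups per block instead of a sum over every pixel. The example below uses plain geometric moments and a hypothetical block-tuple format for brevity; the same bookkeeping applies with Meixner polynomial values in place of the monomials.

```python
import numpy as np

def block_moment(blocks, p, q, size):
    """
    Geometric moment m_pq of a binary image given as rectangular blocks of ones.
    blocks: list of (x1, x2, y1, y2) inclusive pixel ranges; size: (width, height).
    """
    w, h = size
    # Prefix sums of x**p and y**q: each block then costs O(1) instead of O(block area).
    Sx = np.concatenate(([0.0], np.cumsum(np.arange(w, dtype=float) ** p)))
    Sy = np.concatenate(([0.0], np.cumsum(np.arange(h, dtype=float) ** q)))
    return sum((Sx[x2 + 1] - Sx[x1]) * (Sy[y2 + 1] - Sy[y1]) for x1, x2, y1, y2 in blocks)

# Two blocks of an 8x8 binary image: a 3x3 square and a 2x1 strip.
blocks = [(1, 3, 1, 3), (5, 6, 2, 2)]
print(block_moment(blocks, p=1, q=0, size=(8, 8)))   # -> 29.0
```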