
Showing papers on "Computation published in 2015"


Journal ArticleDOI
TL;DR: In this paper, the authors investigate the precision of numerical simulations and the accumulation of stochastic errors in solving problems of detonation and deflagration combustion of gas mixtures in rocket engines.

425 citations


Posted Content
TL;DR: This paper casts learning activation-dependent dropout policies as a reinforcement learning problem, applies a policy gradient algorithm to optimize a loss function motivated by computation speed, proposes a regularization mechanism that encourages diversification of the dropout policy, and presents encouraging empirical results showing that this approach improves the speed of computation without impacting the quality of the approximation.
Abstract: Deep learning has become the state-of-the-art tool in many applications, but the evaluation and training of deep models can be time-consuming and computationally expensive. The conditional computation approach has been proposed to tackle this problem (Bengio et al., 2013; Davis & Arel, 2013). It operates by selectively activating only parts of the network at a time. In this paper, we use reinforcement learning as a tool to optimize conditional computation policies. More specifically, we cast the problem of learning activation-dependent policies for dropping out blocks of units as a reinforcement learning problem. We propose a learning scheme motivated by computation speed, capturing the idea of wanting to have parsimonious activations while maintaining prediction accuracy. We apply a policy gradient algorithm for learning policies that optimize this loss function and propose a regularization mechanism that encourages diversification of the dropout policy. We present encouraging empirical results showing that this approach improves the speed of computation without impacting the quality of the approximation.

277 citations
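
The core mechanism of the paper — an activation-dependent dropout policy trained with a policy gradient — can be sketched compactly. The following is a minimal toy illustration in PyTorch (our own construction, not the authors' code): a per-example Bernoulli policy decides whether to evaluate a residual block, and a REINFORCE-style update rewards low prediction loss while penalizing active blocks.

```python
import torch
import torch.nn as nn

class ConditionalBlock(nn.Module):
    """A residual block gated by a learned, activation-dependent drop policy."""
    def __init__(self, dim):
        super().__init__()
        self.layer = nn.Linear(dim, dim)
        self.policy = nn.Linear(dim, 1)  # per-example keep probability

    def forward(self, x):
        keep_prob = torch.sigmoid(self.policy(x))
        dist = torch.distributions.Bernoulli(probs=keep_prob)
        mask = dist.sample()                         # 1 = evaluate, 0 = skip
        out = x + mask * torch.relu(self.layer(x))   # skipped blocks add nothing
        return out, dist.log_prob(mask).sum(dim=1), mask.mean()

def training_loss(block, head, x, y, task_loss_fn, lam=0.01):
    """Task loss plus a REINFORCE term rewarding sparse activations."""
    h, log_prob, frac_active = block(x)
    loss_task = task_loss_fn(head(h), y)
    # Reward = -(prediction loss + computation cost); no baseline, for brevity.
    reward = -(loss_task.detach() + lam * frac_active.detach())
    return loss_task - reward * log_prob.mean()

block, head = ConditionalBlock(16), nn.Linear(16, 10)
x, y = torch.randn(8, 16), torch.randint(0, 10, (8,))
loss = training_loss(block, head, x, y, nn.CrossEntropyLoss())
loss.backward()
```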


Journal ArticleDOI
TL;DR: Persistent homology (PH) is a method used in topological data analysis (TDA) to study qualitative features of data that persist across multiple scales; it is robust to perturbations of input data, independent of dimensions and coordinates, and provides a compact representation of the qualitative features of the input.
Abstract: Persistent homology (PH) is a method used in topological data analysis (TDA) to study qualitative features of data that persist across multiple scales. It is robust to perturbations of input data, independent of dimensions and coordinates, and provides a compact representation of the qualitative features of the input. The computation of PH is an open area with numerous important and fascinating challenges. The field of PH computation is evolving rapidly, and new algorithms and software implementations are being updated and released at a rapid pace. The purposes of our article are to (1) introduce theory and computational methods for PH to a broad range of computational scientists and (2) provide benchmarks of state-of-the-art implementations for the computation of PH. We give a friendly introduction to PH, navigate the pipeline for the computation of PH with an eye towards applications, and use a range of synthetic and real-world data sets to evaluate currently available open-source implementations for the computation of PH. Based on our benchmarking, we indicate which algorithms and implementations are best suited to different types of data sets. In an accompanying tutorial, we provide guidelines for the computation of PH. We make publicly available all scripts that we wrote for the tutorial, and we make available the processed version of the data sets used in the benchmarking.

233 citations
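
For readers who want to try the pipeline described above, a minimal point-cloud-to-persistence-diagram computation might look as follows, here using the open-source GUDHI library (one of several implementations of the kind the article benchmarks; the exact API may vary across versions).

```python
import numpy as np
import gudhi

# Sample points from a noisy circle; H1 should show one persistent loop.
theta = np.random.uniform(0, 2 * np.pi, 100)
points = np.c_[np.cos(theta), np.sin(theta)] + 0.05 * np.random.randn(100, 2)

# Build a Vietoris-Rips filtration and compute persistent homology.
rips = gudhi.RipsComplex(points=points, max_edge_length=2.0)
simplex_tree = rips.create_simplex_tree(max_dimension=2)
diagram = simplex_tree.persistence()  # list of (dimension, (birth, death))

# The long-lived H1 interval corresponds to the circle's loop.
h1 = [pair for dim, pair in diagram if dim == 1]
print(sorted(h1, key=lambda bd: bd[1] - bd[0], reverse=True)[:3])
```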


Journal ArticleDOI
TL;DR: A numerical scheme geared for high-performance computation of wall-bounded turbulent flows, using a two-dimensional (pencil) domain decomposition and exploiting the favourable scaling of the CFL time-step constraint as compared to the diffusive time-step constraint.

200 citations



Journal ArticleDOI
TL;DR: The main purpose of this paper is to present an overview of neural network applications in wind energy systems and to indicate the potential of ANNs as a useful tool for wind energy systems.
Abstract: Neural network approaches are becoming useful as an alternative to classical methods. As a computation and learning paradigm, they offer a different modeling approach for solving complicated problems. They have been used to solve complicated practical problems in various areas, such as engineering, medicine, business, manufacturing, and the military. They have also been applied to modeling, identification, optimization, prediction, forecasting, evaluation, classification, and control of complex systems. During the last three decades, artificial neural networks have been extensively employed in numerous fields of science and technology. They are not programmed in the conventional way; rather, they are trained using data exemplifying the behaviour of a system. This study presents various applications of neural networks in wind energy systems. These applications can be grouped into three major categories: forecasting and prediction, prediction and control, and identification and evaluation. The main purpose of this paper is to present an overview of neural network applications in wind energy systems. The published literature surveyed in this study indicates the potential of ANNs as a useful tool for wind energy systems. The author strongly believes that this survey will be very useful to researchers and engineers working in this area in finding relevant references and the current state of the field.

192 citations
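
As a purely illustrative companion to this survey (the paper itself contains no code), a minimal forecasting-style example — training a small neural network to map wind conditions to turbine power — could look like this, using scikit-learn's MLPRegressor; the data and power curve below are a made-up toy.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
speed = rng.uniform(0, 25, 2000)        # wind speed, m/s
direction = rng.uniform(0, 360, 2000)   # wind direction, degrees
# Toy power curve: cubic in speed below rated power, capped at 2000 kW.
power = np.clip(0.5 * speed**3, 0, 2000)

# Encode direction as sin/cos so 359 and 1 degrees are treated as neighbours.
X = np.c_[speed, np.sin(np.radians(direction)), np.cos(np.radians(direction))]
X_tr, X_te, y_tr, y_te = train_test_split(X, power, random_state=0)

model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
model.fit(X_tr, y_tr)
print("R^2 on held-out data:", model.score(X_te, y_te))
```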


Journal ArticleDOI
TL;DR: In this paper, a unified theory for different kinds of light beams exhibiting classical entanglement is presented and several possible extensions of the concept are discussed, clarifying and shedding new light on the physics underlying this intriguing aspect of classical optics.
Abstract: When two or more degrees of freedom become coupled in a physical system, a number of observables of the latter cannot be represented by mathematical expressions separable with respect to the different degrees of freedom. In recent years it appeared clear that these expressions may display the same mathematical structures exhibited by multiparty entangled states in quantum mechanics. In this work, we investigate the occurrence of such structures in optical beams, a phenomenon that is often referred to as ‘classical entanglement’. We present a unified theory for different kinds of light beams exhibiting classical entanglement and we indicate several possible extensions of the concept. Our results clarify and shed new light upon the physics underlying this intriguing aspect of classical optics.

190 citations
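
A standard textbook example (our notation, not necessarily the authors') makes the analogy concrete: a beam whose polarization and spatial profile do not factorize.

```latex
% A beam combining two orthogonal polarization vectors with two distinct
% spatial modes; the field is non-separable whenever psi_1 and psi_2 are
% linearly independent.
\[
  \mathbf{E}(\mathbf{r}) \;=\; \frac{1}{\sqrt{2}}
  \left( \hat{\mathbf{e}}_x\, \psi_1(\mathbf{r})
       + \hat{\mathbf{e}}_y\, \psi_2(\mathbf{r}) \right)
\]
% This mirrors the structure of the two-party entangled state
% (|0>|a> + |1>|b>)/sqrt(2), which likewise admits no product decomposition.
```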


Journal ArticleDOI
TL;DR: The validation of the FIDVC algorithm shows that the technique provides a unique, fast and effective experimental approach for measuring non-linear 3D deformations with high spatial resolution.
Abstract: Digital volume correlation (DVC), the three-dimensional (3D) extension of digital image correlation (DIC), measures internal 3D material displacement fields by correlating intensity patterns within interrogation windows. In recent years DVC algorithms have gained increased attention in experimental mechanics, material science, and biomechanics. In particular, the application of DVC algorithms to quantify cell-induced material deformations has generated a demand for user-friendly and computationally efficient DVC approaches capable of detecting large, non-linear deformation fields. We address these challenges by presenting a fast iterative digital volume correlation method (FIDVC), which can be run on a personal computer with computation times on the order of 1–2 min. The FIDVC algorithm employs a unique deformation-warping scheme capable of capturing any general non-linear finite deformation. The validation of the FIDVC algorithm shows that our technique provides a unique, fast and effective experimental approach for measuring non-linear 3D deformations with high spatial resolution.

185 citations
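
To give a feel for the correlation machinery involved (this is our toy sketch of one standard DVC building block, not the FIDVC code), the integer-voxel displacement of an interrogation window can be estimated from the peak of an FFT-based cross-correlation, which iterative schemes such as FIDVC then refine.

```python
import numpy as np

def integer_displacement(ref_window, def_window):
    """Locate the cross-correlation peak between two 3-D subvolumes."""
    f = np.fft.fftn(ref_window)
    g = np.fft.fftn(def_window)
    xcorr = np.fft.ifftn(f * np.conj(g)).real
    peak = np.unravel_index(np.argmax(xcorr), xcorr.shape)
    # Wrap indices above N/2 around to negative shifts.
    return [p if p <= n // 2 else p - n for p, n in zip(peak, xcorr.shape)]

# Synthetic check: shift a random volume by (2, -1, 3) voxels.
rng = np.random.default_rng(0)
vol = rng.random((32, 32, 32))
shifted = np.roll(vol, shift=(2, -1, 3), axis=(0, 1, 2))
print(integer_displacement(shifted, vol))  # expect [2, -1, 3]
```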


Journal ArticleDOI
TL;DR: Connecting mathematical logic and computation ensures that some aspects of programming are absolute.

172 citations


Book
26 Nov 2015
TL;DR: A book covering the stability analysis of linear time-delay systems, stabilization and robust fixed-order control, and applications ranging from congestion control in networks to delay models in biosciences.
Abstract: Preface to the second edition. Preface to the first edition. List of symbols. Acronyms.
Part I. Stability Analysis of Linear Time-Delay Systems: 1. Spectral properties of linear time-delay systems; 2. Computation of characteristic roots; 3. Pseudospectra and robust stability analysis; 4. Computation of H2 and H∞ norms; 5. Computation of stability regions in parameter spaces; 6. Stability regions in delay-parameter spaces.
Part II. Stabilization and Robust Fixed-Order Control: 7. Stabilization using a direct eigenvalue optimization approach; 8. Stabilizability with delayed feedback: a numerical case study; 9. Optimization of H∞ norms.
Part III. Applications: 10. Output feedback stabilization using delays as control parameters; 11. Smith predictor for stable systems: delay sensitivity analysis; 12. Controlling unstable systems using finite spectrum assignment; 13. Congestion control algorithms in networks; 14. Consensus problems with distributed delays, with traffic flow applications; 15. Synchronization of delay-coupled oscillators; 16. Stability analysis of delay models in biosciences.
Appendix. Bibliography. Index.

169 citations


Book ChapterDOI
27 Sep 2015
TL;DR: This paper reports on recent improvements to the solver wasp and details the algorithms and design choices for addressing several reasoning tasks in ASP; an experimental analysis on publicly available benchmarks shows that the new version of wasp outperforms the previous one.
Abstract: ASP solvers address several reasoning tasks that go beyond the mere computation of answer sets. Among them are cautious reasoning, for modeling query entailment, and optimum answer set computation, for supporting numerical optimization. This paper reports on the recent improvements of the solver wasp, and details the algorithms and the design choices for addressing several reasoning tasks in ASP. An experimental analysis on publicly available benchmarks shows that the new version of wasp outperforms the previous one. Compared with the state-of-the-art solver clasp, wasp is competitive overall in the number of solved instances and average execution time.

Journal ArticleDOI
TL;DR: The conformal bootstrap program for three dimensional conformal field theories with N=2 supersymmetry is implemented and universal constraints on the spectrum of operator dimensions in these theories are found.
Abstract: We implement the conformal bootstrap program for three dimensional conformal field theories with N=2 supersymmetry and find universal constraints on the spectrum of operator dimensions in these theories. By studying the bounds on the dimension of the first scalar appearing in the operator product expansion of a chiral and an antichiral primary, we find a kink at the expected location of the critical three dimensional N=2 Wess-Zumino model, which can be thought of as a supersymmetric analog of the critical Ising model. Focusing on this kink, we determine, to high accuracy, the low-lying spectrum of operator dimensions of the theory, as well as the stress-tensor two-point function. We find that the latter is in an excellent agreement with an exact computation.

Journal ArticleDOI
TL;DR: In this article, the performance of variational multiscale (VMS) models in the large eddy simulation (LES) of turbulent flows is studied; the results show the tremendous potential of VMS for the numerical simulation of turbulence.

Book ChapterDOI
04 Jan 2015
TL;DR: This work designs provably efficient fully-distributed algorithms for computing PageRank, since traditional matrix-vector multiplication style iterative methods may not always adapt well to the distributed setting owing to communication bandwidth restrictions and convergence rates.
Abstract: Over the last decade, PageRank has gained importance in a wide range of applications and domains, ever since it first proved to be effective in determining node importance in large graphs (and was a pioneering idea behind Google’s search engine). In distributed computing alone, PageRank vectors, or more generally random walk based quantities have been used for several different applications ranging from determining important nodes, load balancing, search, and identifying connectivity structures. Surprisingly, however, there has been little work towards designing provably efficient fully-distributed algorithms for computing PageRank. The difficulty is that traditional matrix-vector multiplication style iterative methods may not always adapt well to the distributed setting owing to communication bandwidth restrictions and convergence rates.
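
For context, the "traditional matrix-vector multiplication style iterative method" the excerpt refers to is centralized power iteration; a compact version (ours, for illustration) is below. The distributed algorithms in the chapter avoid exactly this kind of global dense computation.

```python
import numpy as np

def pagerank(adj, damping=0.85, tol=1e-10):
    """adj[i, j] = 1 if there is an edge i -> j; returns the PageRank vector."""
    n = adj.shape[0]
    out_deg = adj.sum(axis=1)
    # Row-stochastic transition matrix; dangling nodes jump uniformly.
    P = np.where(out_deg[:, None] > 0,
                 adj / np.maximum(out_deg, 1)[:, None], 1.0 / n)
    r = np.full(n, 1.0 / n)
    while True:
        r_next = damping * (r @ P) + (1 - damping) / n
        if np.abs(r_next - r).sum() < tol:
            return r_next
        r = r_next

adj = np.array([[0, 1, 1],
                [1, 0, 0],
                [0, 1, 0]], dtype=float)
print(pagerank(adj))  # sums to 1; node 1 ranks highest in this tiny graph
```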

Journal ArticleDOI
TL;DR: In this article, a gain-scheduling observer is proposed to estimate the sideslip angle from yaw rate measurements by employing the vehicle dynamics, and the observer gain can be determined with off-line and on-line computations.

Posted Content
TL;DR: D-Wave’s latest quantum annealer, the D-Wave 2X system, is evaluated on an array of problem classes and it is found that it performs well on several input classes relative to state-of-the-art software solvers running single-threaded on a CPU.
Abstract: In the evaluation of quantum annealers, metrics based on ground state success rates have two major drawbacks. First, evaluation requires computation time for both quantum and classical processors that grows exponentially with problem size. This makes evaluation itself computationally prohibitive. Second, results are heavily dependent on the effects of analog noise on the quantum processors, which is an engineering issue that complicates the study of the underlying quantum annealing algorithm. We introduce a novel “time-to-target” metric which avoids these two issues by challenging software solvers to match the results obtained by a quantum annealer in a short amount of time. We evaluate D-Wave’s latest quantum annealer, the D-Wave 2X system, on an array of problem classes and find that it performs well on several input classes relative to state-of-the-art software solvers running single-threaded on a CPU.
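
The time-to-target idea can be phrased in a few lines of code. The sketch below is our schematic rendering, not the paper's benchmark harness: the target is the best Ising energy the annealer found within its budget, and the metric is the time a software solver needs to match it (here a random-search stand-in for a real solver).

```python
import time
import random

def solve_until(problem, target_energy, max_seconds):
    """Random-search stand-in for a software solver on an Ising problem.
    problem = (num_spins, {(i, j): J_ij}); returns the time to hit the target."""
    n, couplings = problem
    start = time.monotonic()
    best = float("inf")
    while time.monotonic() - start < max_seconds:
        spins = [random.choice((-1, 1)) for _ in range(n)]
        energy = sum(J * spins[i] * spins[j] for (i, j), J in couplings.items())
        best = min(best, energy)
        if best <= target_energy:
            return time.monotonic() - start  # the time-to-target
    return float("inf")  # failed to match the annealer within the budget

# Toy instance whose ground state energy is -4.0.
problem = (8, {(0, 1): -1.0, (1, 2): 1.0, (2, 3): -1.0, (4, 5): 1.0})
print(solve_until(problem, target_energy=-4.0, max_seconds=1.0))
```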

Proceedings ArticleDOI
25 May 2015
TL;DR: Experimental results demonstrate the effectiveness and efficiency of the proposed brain storm optimization algorithm in objective space.
Abstract: Brain storm optimization is a newly proposed swarm intelligence algorithm with two main operations: a convergent operation and a divergent operation. In the original brain storm optimization algorithm, a clustering algorithm is used as the convergent operation to group individuals into clusters, which is time-consuming because of the distance calculations involved. In this paper, a new convergent operation is proposed that works in the one-dimensional objective space instead of the solution space. As a consequence, its computation time depends only on the population size, not on the problem dimension, yielding a large saving in computation time and good scalability. Experimental results demonstrate the effectiveness and efficiency of the proposed brain storm optimization algorithm in objective space.
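
Our reading of the proposed convergent operation can be sketched as follows: rather than running distance-based clustering in the n-dimensional solution space, individuals are grouped by sorting their scalar fitness values, so the cost depends only on the population size. This is an illustrative reconstruction, not the authors' code.

```python
import numpy as np

def objective_space_clusters(fitness, n_clusters):
    """Partition the population into contiguous groups of sorted fitness."""
    order = np.argsort(fitness)               # cost: O(pop_size log pop_size)
    return np.array_split(order, n_clusters)  # index arrays, one per cluster

pop = np.random.randn(50, 100)       # 50 individuals, 100-dimensional problem
fitness = (pop ** 2).sum(axis=1)     # sphere function, to be minimized
clusters = objective_space_clusters(fitness, n_clusters=5)
best_per_cluster = [c[0] for c in clusters]  # cluster "centers" = best members
```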

Proceedings ArticleDOI
07 Dec 2015
TL;DR: In this article, a dynamic version of the successive shortest-path algorithm is introduced to solve the data association problem optimally while reusing computation, resulting in faster inference than standard solvers.
Abstract: One of the most popular approaches to multi-target tracking is tracking-by-detection. Current min-cost flow algorithms which solve the data association problem optimally have three main drawbacks: they are computationally expensive, they assume that the whole video is given as a batch, and they scale badly in memory and computation with the length of the video sequence. In this paper, we address each of these issues, resulting in a computationally and memory-bounded solution. First, we introduce a dynamic version of the successive shortest-path algorithm which solves the data association problem optimally while reusing computation, resulting in faster inference than standard solvers. Second, we address the optimal solution to the data association problem when dealing with an incoming stream of data (i.e., online setting). Finally, we present our main contribution which is an approximate online solution with bounded memory and computation which is capable of handling videos of arbitrary length while performing tracking in real time. We demonstrate the effectiveness of our algorithms on the KITTI and PETS2009 benchmarks and show state-of-the-art performance, while being significantly faster than existing solvers.
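
The paper's contribution is a dynamic min-cost flow solver over whole trajectories; as a much simpler point of reference (not the paper's algorithm), frame-to-frame data association can be posed as a min-cost bipartite matching, which the flow formulation generalizes across time.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def associate(tracks_xy, detections_xy, gate=50.0):
    """Match existing tracks to new detections by minimal total distance."""
    cost = cdist(tracks_xy, detections_xy)    # pairwise distances
    rows, cols = linear_sum_assignment(cost)  # Hungarian algorithm
    # Reject matches beyond the gating distance.
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] < gate]

tracks = np.array([[10.0, 10.0], [200.0, 50.0]])
detections = np.array([[12.0, 11.0], [198.0, 52.0], [400.0, 400.0]])
print(associate(tracks, detections))  # [(0, 0), (1, 1)]; detection 2 unmatched
```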

Journal ArticleDOI
TL;DR: While ordinary conformal prediction has high computational cost for functional data, this paper uses the inductive conformal predictor, together with several novel choices of conformity scores, to simplify the computation.
Abstract: This paper applies conformal prediction techniques to compute simultaneous prediction bands and clustering trees for functional data. These tools can be used to detect outliers and clusters. Both our prediction bands and clustering trees provide prediction sets for the underlying stochastic process with a guaranteed finite sample behavior, under no distributional assumptions. The prediction sets are also informative in that they correspond to the high density region of the underlying process. While ordinary conformal prediction has high computational cost for functional data, we use the inductive conformal predictor, together with several novel choices of conformity scores, to simplify the computation. Our methods are illustrated on some real data examples.
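
The inductive (split) conformal idea the paper builds on is easy to state for scalar regression; the functional-data version replaces the absolute residual below with the paper's novel conformity scores. A minimal sketch (ours, not the authors' code):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(500, 1))
y = np.sin(X[:, 0]) + 0.2 * rng.standard_normal(500)

# Split: fit on one half, calibrate conformity scores on the other half.
X_fit, y_fit, X_cal, y_cal = X[:250], y[:250], X[250:], y[250:]
model = LinearRegression().fit(X_fit, y_fit)
scores = np.abs(y_cal - model.predict(X_cal))  # conformity scores

# Finite-sample valid 90% band via the appropriate empirical quantile.
alpha = 0.1
n = len(scores)
q = np.sort(scores)[int(np.ceil((1 - alpha) * (n + 1))) - 1]

x_new = np.array([[1.0]])
pred = model.predict(x_new)[0]
print(f"prediction interval: [{pred - q:.3f}, {pred + q:.3f}]")
```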

Journal ArticleDOI
TL;DR: In this paper, a functional renormalization group approach is proposed for the direct computation of real-time correlation functions, also applicable at finite temperature and density, and a general class of regulators that preserve the space-time symmetries, and allow the computation of correlation functions at complex frequencies.
Abstract: We put forward a functional renormalization group approach for the direct computation of real time correlation functions, also applicable at finite temperature and density. We construct a general class of regulators that preserve the space-time symmetries, and allows the computation of correlation functions at complex frequencies. This includes both imaginary time and real time, and allows in particular the use of the plethora of imaginary time results for the computation of real time correlation functions. We also discuss real time computation on the Keldysh contour with general spatial momentum regulators. Both setups give access to the general momentum and frequency dependence of correlation functions.

Journal ArticleDOI
TL;DR: In this article, a simplified approach is proposed for the computation of the critical speed of track–embankment–ground systems; its results are compared to those achieved by detailed methods, also presented in this paper.
Abstract: The dynamic amplification of the response due to a moving load on the surface of an elastic solid has been an object of research for more than a century. However, while at the beginning of the last century the problem had only theoretical interest, this is no longer true. Indeed, recent advancements in rolling stock, which can now reach speeds higher than 500 km/h, have brought this kind of problem into engineering practice, mainly in high speed railway engineering. The present paper approaches this problem focusing on railway engineering. The departing point is the theoretical formulation of the critical speed problem of a moving load on the surface of an elastic solid. The use of detailed 2.5D models made it possible to understand the influence of the embankment and track properties on the critical speed. However, to avoid complex numerical models, which are computationally very demanding, a simplified approach is proposed for the computation of the critical speed of track–embankment–ground systems. The results of the simplified approach are compared to those achieved by the detailed methods, also presented in this paper, and the proposed expedited methodology is found to be very accurate.
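
As background for the critical speed discussion (a standard result, not the paper's simplified formula): for a load moving on a homogeneous elastic half-space, the critical speed is close to the Rayleigh wave speed, commonly approximated as

```latex
\[
  c_R \;\approx\; \frac{0.87 + 1.12\,\nu}{1 + \nu}\, c_s,
  \qquad c_s = \sqrt{G/\rho},
\]
% nu: Poisson's ratio; G: shear modulus; rho: density of the ground.
```

Track and embankment stiffness shift the critical speed away from this half-space value, which is what the paper's track–embankment–ground approach accounts for.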

Journal ArticleDOI
TL;DR: The paDIC method, combining an inverse compositional Gauss–Newton algorithm for sub-pixel registration with a fast Fourier transform-based cross-correlation (FFT-CC) algorithm for integer-pixel initial guess estimation, achieves superior computational efficiency over the DIC method running purely on a CPU.
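
The FFT-CC step supplies an integer-pixel shift much like the volume-correlation example earlier; the paper then refines it with an inverse compositional Gauss–Newton (IC-GN) solver. As a far simpler stand-in for sub-pixel refinement (not IC-GN), one can fit a parabola through the correlation peak:

```python
import numpy as np

def subpixel_peak_1d(corr, k):
    """Parabolic interpolation around the integer peak index k (per axis)."""
    left, mid, right = corr[k - 1], corr[k], corr[k + 1]
    return k + 0.5 * (left - right) / (left - 2.0 * mid + right)

corr = np.array([0.1, 0.4, 0.9, 0.7, 0.2])  # a 1-D slice through the peak
k = int(np.argmax(corr))
print(subpixel_peak_1d(corr, k))  # ~2.21: true peak lies right of index 2
```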

Proceedings ArticleDOI
12 Oct 2015
TL;DR: A new technique is proposed for enforcing consistency of the inputs used by the party who garbles the circuits; it has both theoretical and practical advantages over previous methods.
Abstract: Recently, several new techniques were presented to dramatically improve key parts of secure two-party computation (2PC) protocols that use the cut-and-choose paradigm on garbled circuits for 2PC with security against malicious adversaries. These include techniques for reducing the number of garbled circuits (Lindell 13, Huang et al. 13, Lindell and Riva 14, Huang et al. 14) and techniques for reducing the overheads besides garbled circuits (Mohassel and Riva 13, Shen and Shelat 13). We design a highly optimized protocol in the offline/online setting that makes use of all state-of-the-art techniques, along with several new techniques that we introduce. A crucial part of our protocol is a new technique for enforcing consistency of the inputs used by the party who garbles the circuits. This technique has both theoretical and practical advantages over previous methods. We present a prototype implementation of our new protocol. This is the first implementation of the amortized cut-and-choose technique of Lindell and Riva (Crypto 2014). Our prototype achieves a speed of just 7 ms in the online stage and just 74 ms in the offline stage per 2PC invoked, for securely computing AES in the presence of malicious adversaries (using 9 threads on a 2.9GHz machine). We note that no prior work has gone below one second overall on average for the secure computation of AES for malicious adversaries (nor below 20 ms in the online stage). Our implementation securely evaluates SHA-256 (which is a much bigger circuit) with 33 ms online time and 206 ms offline time, per 2PC invoked.

Journal ArticleDOI
TL;DR: This study proposes a coordinated brake control (CBC) strategy for multiple vehicles to minimize the risk of rear-end collision using a model predictive control (MPC) framework; simulation results show that the CBC strategy has the best performance among the four strategies considered.
Abstract: Vehicular collisions can lead to serious casualties and traffic congestion, especially multiple-vehicle collisions. Most recent studies have focused mainly on collision warning and avoidance strategies for two consecutive vehicles, and only a few on multiple-vehicle situations. This study proposes a coordinated brake control (CBC) strategy for multiple vehicles to minimize the risk of rear-end collision using a model predictive control (MPC) framework. The objective is to minimize the total impact energy by determining the desired braking force, where the impact energy is defined as the relative kinetic energy of a consecutive pair of vehicles. Under the MPC framework, this problem is converted to a quadratic program at each time step for numerical computation. To compare performance, three other control strategies, i.e. direct brake control (DBC), driver-reaction-based brake control (DRBC), and linear quadratic regulator (LQR) control, are also considered in this paper. The simulation results, in both a typical scenario and a large number of stochastic scenarios, show that the CBC strategy has the best performance among these four strategies. The proposed CBC strategy has the potential to avoid collisions among a group of vehicles, and to mitigate the impact in cases where a collision is unavoidable.
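
The quadratic program solved at each time step can be made concrete with a toy one-step version (our formulation, not the paper's full vehicle model), using the cvxpy modeling library: choose bounded braking decelerations that minimize a relative-kinetic-energy proxy over consecutive vehicle pairs.

```python
import cvxpy as cp
import numpy as np

dt = 0.1
v = np.array([30.0, 28.0, 25.0])  # current speeds of 3 vehicles (m/s)
u = cp.Variable(3)                # braking decelerations (m/s^2)
v_next = v - u * dt               # speeds after one time step

# Relative kinetic energy proxy for each consecutive pair (unit masses).
pair_energy = cp.sum_squares(v_next[:-1] - v_next[1:])
constraints = [u >= 0, u <= 8.0]  # braking actuator limits
cp.Problem(cp.Minimize(pair_energy), constraints).solve()
print(u.value)  # the two faster vehicles brake at the limit; the slowest does not
```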



Posted Content
TL;DR: This work presents a novel approach based on a combination of verified blind quantum computation and Bell state self-testing that has dramatically reduced overhead, with resources scaling as only O(m^4 ln m) in the number of gates.
Abstract: As progress on experimental quantum processors continues to advance, the problem of verifying the correct operation of such devices is becoming a pressing concern. The recent discovery of protocols for verifying computation performed by entangled but non-communicating quantum processors holds the promise of certifying the correctness of arbitrary quantum computations in a fully device-independent manner. Unfortunately, all known schemes have prohibitive overhead, with resources scaling as extremely high degree polynomials in the number of gates constituting the computation. Here we present a novel approach based on a combination of verified blind quantum computation and Bell state self-testing. This approach has dramatically reduced overhead, with resources scaling as only O(m^4 ln m) in the number of gates.


Book ChapterDOI
16 Aug 2015
TL;DR: In the two-party setting, constant-round secure computation protocols exist that remain fast even over slow networks, whereas in the multi-party setting all concretely efficient fully-secure protocols, such as SPDZ, require many rounds of communication.
Abstract: Recently, there has been huge progress in the field of concretely efficient secure computation, even while providing security in the presence of malicious adversaries. This is especially the case in the two-party setting, where constant-round protocols exist that remain fast even over slow networks. However, in the multi-party setting, all concretely efficient fully-secure protocols, such as SPDZ, require many rounds of communication.

Journal ArticleDOI
TL;DR: It is demonstrated that simple molecular logic systems (a combination of a pH sensor, a photo acid generator, and a pH buffer spread on paper) without any organization can achieve the relatively complex computational goal of edge detection with good fidelity.
Abstract: Genetically engineered bacteria and reactive DNA networks detect edges of objects, as done in our retinas and as also found within computer vision. We now demonstrate that simple molecular logic systems (a combination of a pH sensor, a photo acid generator, and a pH buffer spread on paper) without any organization can achieve this relatively complex computational goal with good fidelity. This causes a jump in the complexity achievable by molecular logic-based computation and extends its applicability. The molecular species involved in light dose-driven “off–on–off” fluorescence is diverted in the “on” state by proton diffusion from irradiated to unirradiated regions where it escapes a strong quencher, thus visualizing the edge of a mask.