
Showing papers on "Computation" published in 2013


Journal ArticleDOI
15 Feb 2013-Science
TL;DR: A quantum boson-sampling machine (QBSM) is constructed to sample the output distribution resulting from the nonclassical interference of photons in an integrated photonic circuit, a problem thought to be exponentially hard to solve classically.
Abstract: Although universal quantum computers ideally solve problems such as factoring integers exponentially more efficiently than classical machines, the formidable challenges in building such devices motivate the demonstration of simpler, problem-specific algorithms that still promise a quantum speedup. We constructed a quantum boson-sampling machine (QBSM) to sample the output distribution resulting from the nonclassical interference of photons in an integrated photonic circuit, a problem thought to be exponentially hard to solve classically. Unlike universal quantum computation, boson sampling merely requires indistinguishable photons, linear state evolution, and detectors. We benchmarked our QBSM with three and four photons and analyzed sources of sampling inaccuracy. Scaling up to larger devices could offer the first definitive quantum-enhanced computation.

862 citations


Journal ArticleDOI
15 Feb 2013-Science
TL;DR: The central premise of boson sampling was tested, experimentally verifying that three-photon scattering amplitudes are given by the permanents of submatrices generated from a unitary describing a six-mode integrated optical circuit.
Abstract: Quantum computers are unnecessary for exponentially efficient computation or simulation if the Extended Church-Turing thesis is correct. The thesis would be strongly contradicted by physical devices that efficiently perform tasks believed to be intractable for classical computers. Such a task is boson sampling: sampling the output distributions of n bosons scattered by some passive, linear unitary process. We tested the central premise of boson sampling, experimentally verifying that three-photon scattering amplitudes are given by the permanents of submatrices generated from a unitary describing a six-mode integrated optical circuit. We find the protocol to be robust, working even with the unavoidable effects of photon loss, non-ideal sources, and imperfect detection. Scaling this to large numbers of photons should be a much simpler task than building a universal quantum computer.
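The relation verified in this experiment can be illustrated numerically. The sketch below (my own illustration, not the authors' code) computes the permanent of the submatrix of a circuit unitary selected by the photon input and output modes; for small photon numbers a naive sum over permutations is sufficient. The random 6-mode unitary and the mode choices are arbitrary placeholders.

```python
# Illustrative sketch (not the authors' code): the scattering amplitude for
# single photons entering modes `ins` and leaving in modes `outs` of a linear
# circuit with unitary U is proportional to the permanent of U[outs, :][:, ins].
import itertools
import numpy as np

def permanent(M):
    """Naive permanent via a sum over permutations (fine for 3x3 or 4x4)."""
    n = M.shape[0]
    return sum(
        np.prod([M[i, p[i]] for i in range(n)])
        for p in itertools.permutations(range(n))
    )

def scattering_amplitude(U, ins, outs):
    """Amplitude (up to normalisation) for the given input/output mode sets."""
    sub = U[np.ix_(outs, ins)]
    return permanent(sub)

# Example: a random 6-mode "circuit" (Haar-ish via QR) and three photons.
rng = np.random.default_rng(0)
A = rng.normal(size=(6, 6)) + 1j * rng.normal(size=(6, 6))
U, _ = np.linalg.qr(A)
amp = scattering_amplitude(U, ins=[0, 1, 2], outs=[1, 3, 5])
print(abs(amp) ** 2)  # proportional to the detection probability
```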

671 citations


Journal ArticleDOI
TL;DR: The basic aspects of quantum error correction and fault-tolerance are examined largely through detailed examples, which are more relevant to experimentalists today and in the near future.
Abstract: Quantum error correction (QEC) and fault-tolerant quantum computation represent one of the most vital theoretical aspects of quantum information processing. It was well known from the early developments of this exciting field that the fragility of coherent quantum systems would be a catastrophic obstacle to the development of large-scale quantum computers. The introduction of quantum error correction in 1995 showed that active techniques could be employed to mitigate this fatal problem. However, quantum error correction and fault-tolerant computation is now a much larger field and many new codes, techniques, and methodologies have been developed to implement error correction for large-scale quantum algorithms. In response, we have attempted to summarize the basic aspects of quantum error correction and fault-tolerance, not as a detailed guide, but rather as a basic introduction. The development in this area has been so pronounced that many in the field of quantum information, specifically researchers who are new to quantum information or people focused on the many other important issues in quantum computation, have found it difficult to keep up with the general formalisms and methodologies employed in this area. Rather than introducing these concepts from a rigorous mathematical and computer science framework, we instead examine error correction and fault-tolerance largely through detailed examples, which are more relevant to experimentalists today and in the near future.
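In the spirit of the review's example-driven approach, here is a toy sketch of the simplest error-correcting idea it builds on: the three-qubit bit-flip repetition code, simulated classically at the level of error patterns only (no quantum state simulation). The error model and parameters are my own illustrative choices.

```python
# Toy sketch: three-qubit bit-flip repetition code. Each physical qubit flips
# independently with probability p; majority voting fails only when two or
# more qubits flip, so the logical error rate is 3p^2 - 2p^3 < p for p < 1/2.
import numpy as np

def logical_error_rate(p, trials=200_000, seed=1):
    rng = np.random.default_rng(seed)
    flips = rng.random((trials, 3)) < p          # independent bit-flip errors
    majority_wrong = flips.sum(axis=1) >= 2      # majority-vote correction fails
    return majority_wrong.mean()

for p in (0.01, 0.05, 0.1):
    simulated = logical_error_rate(p)
    predicted = 3 * p**2 - 2 * p**3
    print(f"p={p:.2f}  simulated={simulated:.4f}  predicted={predicted:.4f}")
```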

625 citations


Proceedings ArticleDOI
01 Dec 2013
TL;DR: This paper derives differentially private versions of stochastic gradient descent and tests them empirically, showing that standard SGD experiences high variability due to differential privacy, but that a moderate increase in the batch size can improve performance significantly.
Abstract: Differential privacy is a recent framework for computation on sensitive data, which has shown considerable promise in the regime of large datasets. Stochastic gradient methods are a popular approach for learning in the data-rich regime because they are computationally tractable and scalable. In this paper, we derive differentially private versions of stochastic gradient descent, and test them empirically. Our results show that standard SGD experiences high variability due to differential privacy, but a moderate increase in the batch size can improve performance significantly.
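As a minimal sketch of one flavor of differentially private SGD (per-example gradient clipping plus Gaussian noise on the summed gradient), not necessarily the paper's exact construction: larger batches average the injected noise over more examples, which matches the observation that a moderate batch-size increase helps. All parameter values below are illustrative.

```python
# A minimal sketch (not the paper's exact construction): one noisy SGD step on
# a logistic-regression loss with per-example gradient clipping.
import numpy as np

def private_sgd_step(w, X_batch, y_batch, rng, lr=0.1, clip=1.0, noise_std=1.0):
    grads = []
    for x, y in zip(X_batch, y_batch):
        margin = y * np.dot(w, x)
        g = -y * x / (1.0 + np.exp(margin))   # per-example logistic-loss gradient
        norm = np.linalg.norm(g)
        if norm > clip:                        # clip to bound per-example sensitivity
            g = g * (clip / norm)
        grads.append(g)
    noise = rng.normal(scale=noise_std * clip, size=w.shape)
    noisy_mean = (np.sum(grads, axis=0) + noise) / len(X_batch)
    return w - lr * noisy_mean

# Toy usage: labels in {-1, +1}, a batch of 64 random examples.
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 5))
y = np.sign(X[:, 0] + 0.1 * rng.normal(size=64))
w = private_sgd_step(np.zeros(5), X, y, rng)
print(w)
```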

549 citations


Journal ArticleDOI
15 Feb 2013-Science
TL;DR: Multiparticle quantum walk is shown to be capable of universal quantum computation, and the construction based on multiple interacting quantum walkers could, in principle, be used as an architecture for building a scalable quantum computer with no need for time-dependent control.
Abstract: A quantum walk is a time-homogeneous quantum-mechanical process on a graph defined by analogy to classical random walk. The quantum walker is a particle that moves from a given vertex to adjacent vertices in quantum superposition. We consider a generalization to interacting systems with more than one walker, such as the Bose-Hubbard model and systems of fermions or distinguishable particles with nearest-neighbor interactions, and show that multiparticle quantum walk is capable of universal quantum computation. Our construction could, in principle, be used as an architecture for building a scalable quantum computer with no need for time-dependent control.

413 citations


Journal ArticleDOI
TL;DR: It is experimentally demonstrated that, even with annealing times eight orders of magnitude longer than the predicted single-qubit decoherence time, the probabilities of performing a successful computation are similar to those expected for a fully coherent system.
Abstract: Efforts to develop useful quantum computers have been blocked primarily by environmental noise. Quantum annealing is a scheme of quantum computation that is predicted to be more robust against noise, because despite the thermal environment mixing the system's state in the energy basis, the system partially retains coherence in the computational basis, and hence is able to establish well-defined eigenstates. Here we examine the environment's effect on quantum annealing using 16 qubits of a superconducting quantum processor. For a problem instance with an isolated small-gap anticrossing between the lowest two energy levels, we experimentally demonstrate that, even with annealing times eight orders of magnitude longer than the predicted single-qubit decoherence time, the probabilities of performing a successful computation are similar to those expected for a fully coherent system. Moreover, for the problem studied, we show that quantum annealing can take advantage of a thermal environment to achieve a speedup factor of up to 1,000 over a closed system.

246 citations


Journal ArticleDOI
TL;DR: A drastic redesign of the algorithm, moving all the major computation parts onto the GPU, reaches a speed of 12 s per molecular dynamics step for a 512-atom system using 256 GPU cards.

229 citations


Book
01 Jun 2013
TL;DR: Write a function in a programming language of your choice that takes a (32-bit IEEE format) float and returns a float with the property that: given zero, infinity or a positive normalised floating-point number, its result is the smallest normalised floating-point number (or infinity if this is not possible) greater than its argument.
Abstract: (a) Write a function in a programming language of your choice that takes a (32-bit IEEE format) float and returns a float with the property that: given zero, infinity or a positive normalised floating-point number then its result is the smallest normalised floating-point number (or infinity if this is not possible) greater than its argument. You may assume functions f2irep and irep2f which map between a float and the same bit pattern held in a 32-bit integer. [6 marks] (b) Briefly explain how this routine can be extended also to deal with negative floating-point values, remembering that the result should always be greater than the argument. [2 marks] (c) Define the notions of rounding error and truncation error of a floating-point computation involving a parameter h that mathematically should tend to zero. [2 marks] (d) Given a function f implementing a differentiable function that takes a floating-point argument and gives a floating-point result, a programmer implements a function f′(x) ≈ (f(x + h) − f(x − h)) / (2h) to compute its derivative. Using a Taylor expansion or otherwise, estimate how rounding and truncation errors depend on h. You may assume that all mathematical derivatives of f are within an order of magnitude of 1.0. [8 marks] (e) Suggest a good value for h given a double-precision floating-point format that represents approximately 15 significant decimal figures. [2 marks]
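A sketch of the kind of answer the question is after, written in Python (my choice; the exam leaves the language open) with the assumed f2irep/irep2f helpers emulated via the struct module, followed by a quick numerical check of the trade-off asked for in parts (d) and (e).

```python
# next_up exploits the fact that, for positive IEEE-754 floats, reinterpreting
# the bits as an unsigned integer and adding 1 gives the next representable
# value; the question restricts inputs to zero, infinity or positive
# normalised numbers, and overflow from the largest finite float lands on +inf.
import math
import struct

def f2irep(x):                        # float -> 32-bit pattern as an int
    return struct.unpack('<I', struct.pack('<f', x))[0]

def irep2f(i):                        # 32-bit pattern -> float
    return struct.unpack('<f', struct.pack('<I', i))[0]

def next_up(x):
    if x == float('inf'):
        return x                      # nothing larger than +infinity
    if x == 0.0:
        return irep2f(0x00800000)     # smallest positive normalised float
    return irep2f(f2irep(x) + 1)      # next representable value upward

# Parts (d)/(e): for f'(x) ~ (f(x+h) - f(x-h)) / (2h), truncation error grows
# like h^2 while rounding error grows like eps/h, so h ~ eps**(1/3)
# (roughly 1e-5 in double precision) balances the two.
f, x = math.sin, 1.0
for h in (1e-2, 1e-5, 1e-11):
    approx = (f(x + h) - f(x - h)) / (2 * h)
    print(f"h={h:.0e}  error={abs(approx - math.cos(x)):.2e}")
```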

229 citations


Journal ArticleDOI
TL;DR: Quokka, as discussed by the authors, is a freely available fast 3-D solar cell simulation tool that applies simplifications to the full set of charge carrier transport equations, i.e., quasi-neutrality and conductive boundaries, yielding a computationally inexpensive model.
Abstract: Details of Quokka, which is a freely available fast 3-D solar cell simulation tool, are presented. Simplifications to the full set of charge carrier transport equations, i.e., quasi-neutrality and conductive boundaries, result in a model that is computationally inexpensive without a loss of generality. Details on the freely available finite volume implementation in MATLAB are given, which shows computation times on the order of seconds to minutes for a full I-V curve sweep on a conventional personal computer. As an application example, the validity of popular analytical models of partial rear contact cells is verified under varying conditions. Consequently, it is observed that significant errors can occur if these analytical models are used to derive local recombination properties from effective lifetime measurements of test structures.

227 citations


Journal ArticleDOI
TL;DR: In this paper, it was shown that measurement-based quantum computations (MBQC) which compute a nonlinear Boolean function with a high probability are contextual under natural assumptions for qubit systems, and the class of contextual MBQC includes an example which is of practical interest and has a superpolynomial speedup over the best known classical algorithm.
Abstract: We show, under natural assumptions for qubit systems, that measurement-based quantum computations (MBQCs) which compute a nonlinear Boolean function with a high probability are contextual. The class of contextual MBQCs includes an example which is of practical interest and has a superpolynomial speedup over the best-known classical algorithm, namely, the quantum algorithm that solves the "discrete log" problem.

207 citations


Journal ArticleDOI
TL;DR: In this paper, a vectorized version of the spherical harmonic transform (SHT) algorithm based on the Gauss-Legendre quadrature has been implemented in the SHTns library, which includes scalar and vector transforms.
Abstract: In this paper, we report on very efficient algorithms for the spherical harmonic transform (SHT). Explicitly vectorized variations of the algorithm based on the Gauss-Legendre quadrature are discussed and implemented in the SHTns library, which includes scalar and vector transforms. The main breakthrough is to achieve very efficient on-the-fly computation of the associated Legendre functions, even for very high resolutions, by taking advantage of the specific properties of the SHT and the advanced capabilities of current and future computers. This allows us to simultaneously and significantly reduce memory usage and computation time of the SHT. We measure the performance and accuracy of our algorithms. Although the complexity of the algorithms implemented in SHTns is O(N^3) (where N is the maximum harmonic degree of the transform), they perform much better than any third-party implementation, including lower-complexity algorithms, even for truncations as high as N = 1023. SHTns is available at https://bitbucket.org/nschaeff/shtns as open source software.

Journal ArticleDOI
TL;DR: This paper proposes an analog computation scheme that allows for an efficient estimate of linear and nonlinear functions over the wireless multiple-access channel and analyses the estimation error for two function examples to show the potential for huge performance gains over time- and code-division multiple-access based computation schemes.
Abstract: Wireless sensor network applications often involve the computation of pre-defined functions of the measurements such as for example the arithmetic mean or maximum value. Standard approaches to this problem separate communication from computation: digitized sensor readings are transmitted interference-free to a fusion center that reconstructs each sensor reading and subsequently computes the sought function value. Such separation-based computation schemes are generally highly inefficient as a complete reconstruction of individual sensor readings at the fusion center is not necessary to compute a function of them. In particular, if the mathematical structure of the channel is suitably matched (in some sense) to the function of interest, then channel collisions induced by concurrent transmissions of different nodes can be beneficially exploited for computation purposes. This paper proposes an analog computation scheme that allows for an efficient estimate of linear and nonlinear functions over the wireless multiple-access channel. A match between the channel and the function being evaluated is thereby achieved via some pre-processing on the sensor readings and post-processing on the superimposed signals observed by the fusion center. After analyzing the estimation error for two function examples, simulations are presented to show the potential for huge performance gains over time- and code-division multiple-access based computation schemes.
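A toy numerical illustration of the pre/post-processing idea described above: each sensor applies a map to its reading and all sensors transmit simultaneously, so the fusion center observes a noisy sum over the multiple-access channel and inverts the map. The two function choices (arithmetic and geometric mean) and the noise level are my own assumptions, not necessarily the examples analyzed in the paper.

```python
# Superposition-based function computation: the channel adds the transmitted
# values, the fusion centre post-processes the sum.
import numpy as np

rng = np.random.default_rng(0)
readings = rng.uniform(1.0, 10.0, size=50)       # 50 sensor measurements
channel_noise = rng.normal(scale=0.01, size=2)   # additive receiver noise

# Arithmetic mean: identity pre-processing, divide-by-K post-processing.
superposition = readings.sum() + channel_noise[0]
arith_estimate = superposition / len(readings)

# Geometric mean: log pre-processing, exp(sum / K) post-processing.
superposition = np.log(readings).sum() + channel_noise[1]
geom_estimate = np.exp(superposition / len(readings))

print(arith_estimate, readings.mean())
print(geom_estimate, np.exp(np.log(readings).mean()))
```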

Book ChapterDOI
03 Mar 2013
TL;DR: Signatures of Correct Computation is introduced, a new model for verifying dynamic computations in cloud settings and it is shown that signatures of correct computation imply Publicly Verifiable Computation (PVC), a model recently introduced in several concurrent and independent works.
Abstract: We introduce Signatures of Correct Computation (SCC), a new model for verifying dynamic computations in cloud settings. In the SCC model, a trusted source outsources a function f to an untrusted server, along with a public key for that function (to be used during verification). The server can then produce a succinct signature σ vouching for the correctness of the computation of f, i.e., that some result v is indeed the correct outcome of the function f evaluated on some point a. There are two crucial performance properties that we want to guarantee in an SCC construction: (1) verifying the signature should take asymptotically less time than evaluating the function f; and (2) the public key should be efficiently updated whenever the function changes. We construct SCC schemes (satisfying the above two properties) supporting expressive manipulations over multivariate polynomials, such as polynomial evaluation and differentiation. Our constructions are adaptively secure in the random oracle model and achieve optimal updates, i.e., the function's public key can be updated in time proportional to the number of updated coefficients, without performing a linear-time computation (in the size of the polynomial). We also show that signatures of correct computation imply Publicly Verifiable Computation (PVC), a model recently introduced in several concurrent and independent works. Roughly speaking, in the SCC model, any client can verify the signature σ and be convinced of some computation result, whereas in the PVC model only the client that issued a query (or anyone who trusts this client) can verify that the server returned a valid signature (proof) for the answer to the query. Our techniques can be readily adapted to construct PVC schemes with adaptive security, efficient updates and without the random oracle model.

Journal ArticleDOI
TL;DR: It is shown that the noise-assisted MEMD (NA-MEMD) approach, which utilizes the dyadic filter bank property of MEMD, provides a solution to the above problems when used to calculate standard EMD.
Abstract: A noise-assisted approach in conjunction with multivariate empirical mode decomposition (MEMD) algorithm is proposed for the computation of empirical mode decomposition (EMD), in order to produce localized frequency estimates at the accuracy level of instantaneous frequency. Despite many advantages of EMD, such as its data driven nature, a compact decomposition, and its inherent ability to process nonstationary data, it only caters for signals with a sufficient number of local extrema. In addition, EMD is prone to mode-mixing and is designed for univariate data. We show that the noise-assisted MEMD (NA-MEMD) approach, which utilizes the dyadic filter bank property of MEMD, provides a solution to the above problems when used to calculate standard EMD. The method is also shown to alleviate the effects of noise interference in univariate noise-assisted EMD algorithms which directly add noise to the data. The efficacy of the proposed method, in terms of improved frequency localization and reduced mode-mixing, is demonstrated via simulations on electroencephalogram (EEG) data sets, over two paradigms in brain-computer interface (BCI).

Journal ArticleDOI
TL;DR: The conditional spectrum (CS) is a target spectrum (with conditional mean and conditional standard deviation) that links seismic hazard information with ground-motion selection for nonlinear dynamic analysis.
Abstract: The conditional spectrum (CS) is a target spectrum (with conditional mean and conditional standard deviation) that links seismic hazard information with ground-motion selection for nonlinear dynamic analysis. Probabilistic seismic hazard analysis (PSHA) estimates the ground-motion hazard by incorporating the aleatory uncertainties in all earthquake scenarios and resulting ground motions, as well as the epistemic uncertainties in ground-motion prediction models (GMPMs) and seismic source models. Typical CS calculations to date are produced for a single earthquake scenario using a single GMPM, but more precise use requires consideration of at least multiple causal earthquakes and multiple GMPMs that are often considered in a PSHA computation. This paper presents the mathematics underlying these more precise CS calculations. Despite requiring more effort to compute than approximate calculations using a single causal earthquake and GMPM, the proposed approach produces an exact output that has a theoretical basis. To demonstrate the results of this approach and compare the exact and approximate calculations, several example calculations are performed for real sites in the western United States. The results also provide some insights regarding the circumstances under which approximate results are likely to closely match more exact results. To facilitate these more precise calculations for real applications, the exact CS calculations can now be performed for real sites in the United States using new deaggregation features in the U.S. Geological Survey hazard mapping tools. Details regarding this implementation are discussed in this paper.
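A hedged sketch of the combination step at the heart of such "exact" calculations: given the conditional mean and standard deviation of log spectral acceleration from each (causal earthquake, GMPM) pair, together with deaggregation weights, the target is a weighted mixture. This is the generic mixture formula, not the paper's full formulation, and the input numbers are placeholders.

```python
# Weighted-mixture mean and standard deviation of lnSa at one period, combining
# several (scenario, GMPM) contributions with their deaggregation weights.
import numpy as np

def mixture_cs(weights, means, stds):
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                                   # normalise the weights
    means, stds = np.asarray(means), np.asarray(stds)
    mu = np.sum(w * means)
    var = np.sum(w * (stds ** 2 + (means - mu) ** 2)) # mixture variance
    return mu, np.sqrt(var)

# Three (scenario, GMPM) combinations with made-up conditional lnSa values.
mu, sigma = mixture_cs(weights=[0.5, 0.3, 0.2],
                       means=[-1.2, -1.0, -0.8],
                       stds=[0.45, 0.50, 0.40])
print(mu, sigma)
```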

Journal ArticleDOI
TL;DR: A phenomenological model and algorithm are developed and shown to generate realistic morphologies of several distinct neuronal types; the authors discuss the extent to which homotypic forces might influence real dendritic morphologies and speculate about the influence of other environmental cues on neuronal shape and circuitry.
Abstract: Dendritic morphology constrains brain activity, as it determines first which neuronal circuits are possible and second which dendritic computations can be performed over a neuron's inputs. It is known that a range of chemical cues can influence the final shape of dendrites during development. Here, we investigate the extent to which self-referential influences, cues generated by the neuron itself, might influence morphology. To this end, we developed a phenomenological model and algorithm to generate virtual morphologies, which are then compared to experimentally reconstructed morphologies. In the model, branching probability follows a Galton-Watson process, while the geometry is determined by "homotypic forces" exerting influence on the direction of random growth in a constrained space. We model three such homotypic forces, namely an inertial force based on membrane stiffness, a soma-oriented tropism, and a force of self avoidance, as directional biases in the growth algorithm. With computer simulations we explored how each bias shapes neuronal morphologies. We show that based on these principles, we can generate realistic morphologies of several distinct neuronal types. We discuss the extent to which homotypic forces might influence real dendritic morphologies, and speculate about the influence of other environmental cues on neuronal shape and circuitry.

Book ChapterDOI
03 Mar 2013
TL;DR: It is shown that the expected guarantees of synchronous computation can be achieved given functionalities exactly meant to model, respectively, bounded-delay networks and loosely synchronized clocks, and that previous similar models can all be expressed within this new framework.
Abstract: In synchronous networks, protocols can achieve security guarantees that are not possible in an asynchronous world: they can simultaneously achieve input completeness (all honest parties' inputs are included in the computation) and guaranteed termination (honest parties do not 'hang' indefinitely). In practice truly synchronous networks rarely exist, but synchrony can be emulated if channels have (known) bounded latency and parties have loosely synchronized clocks. The widely-used framework of universal composability (UC) is inherently asynchronous, but several approaches for adding synchrony to the framework have been proposed. However, we show that the existing proposals do not provide the expected guarantees. Given this, we propose a novel approach to defining synchrony in the UC framework by introducing functionalities exactly meant to model, respectively, bounded-delay networks and loosely synchronized clocks. We show that the expected guarantees of synchronous computation can be achieved given these functionalities, and that previous similar models can all be expressed within our new framework.

Book ChapterDOI
01 Jan 2013
TL;DR: Two examples of human computation systems, online social networks and Wikipedia, are used to illustrate how these can be described and compared in terms of information and computation.
Abstract: In this chapter, concepts related to information and computation are reviewed in the context of human computation. A brief introduction to information theory and different types of computation is given. Two examples of human computation systems, online social networks and Wikipedia, are used to illustrate how these can be described and compared in terms of information and computation.

Proceedings Article
05 Dec 2013
TL;DR: Experiments on both synthetic and real-world data show that the proposed algorithm can easily scale up to networks of millions of nodes while significantly improving over the previous state of the art in terms of the accuracy of the estimated influence and the quality of the selected nodes in maximizing the influence.

Abstract: If a piece of information is released from a media site, can we predict whether it may spread to one million web pages in a month? This influence estimation problem is very challenging since both the time-sensitive nature of the task and the requirement of scalability need to be addressed simultaneously. In this paper, we propose a randomized algorithm for influence estimation in continuous-time diffusion networks. Our algorithm can estimate the influence of every node in a network with |V| nodes and |E| edges to an accuracy of ε using n = O(1/ε²) randomizations and, up to logarithmic factors, O(n|E| + n|V|) computations. When used as a subroutine in a greedy influence maximization approach, our proposed algorithm is guaranteed to find a set of C nodes with influence of at least (1 − 1/e)·OPT − 2Cε, where OPT is the optimal value. Experiments on both synthetic and real-world data show that the proposed algorithm can easily scale up to networks of millions of nodes while significantly improving over the previous state of the art in terms of the accuracy of the estimated influence and the quality of the selected nodes in maximizing the influence.
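To make the quantity being estimated concrete, here is a naive Monte Carlo baseline (not the paper's faster randomized estimator): the influence of a source within a time window T in a continuous-time diffusion network is the expected number of nodes whose earliest arrival time, under random edge transmission delays, is at most T. The tiny graph, exponential delay model and rates below are made up for illustration.

```python
import heapq
import numpy as np

def influence_mc(edges, source, T, n_samples=2000, seed=0):
    """edges: dict (u, v) -> transmission rate. Returns estimated influence."""
    rng = np.random.default_rng(seed)
    total = 0
    for _ in range(n_samples):
        # Sample a delay per edge, then single-source earliest arrival times.
        delay = {e: rng.exponential(1.0 / rate) for e, rate in edges.items()}
        dist = {source: 0.0}
        heap = [(0.0, source)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, float('inf')):
                continue
            for (a, b), tau in delay.items():
                if a == u and d + tau < dist.get(b, float('inf')):
                    dist[b] = d + tau
                    heapq.heappush(heap, (dist[b], b))
        total += sum(1 for t in dist.values() if t <= T)
    return total / n_samples

edges = {('s', 'a'): 2.0, ('s', 'b'): 1.0, ('a', 'c'): 1.5, ('b', 'c'): 0.5}
print(influence_mc(edges, source='s', T=1.0))
```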

Journal ArticleDOI
TL;DR: New distributed algorithms for the computation of the k-coreness of a network, a process also known as k-core decomposition, are proposed, and an exhaustive experimental analysis on real-world data sets is provided.
Abstract: Several novel metrics have been proposed in recent literature in order to study the relative importance of nodes in complex networks. Among those, k-coreness has found a number of applications in areas as diverse as sociology, proteinomics, graph visualization, and distributed system analysis and design. This paper proposes new distributed algorithms for the computation of the k-coreness of a network, a process also known as k-core decomposition. This technique 1) allows the decomposition, over a set of connected machines, of very large graphs, when size does not allow storing and processing them on a single host, and 2) enables the runtime computation of k-cores in “live” distributed systems. Lower bounds on the algorithms' complexity are given, and an exhaustive experimental analysis on real-world data sets is provided.
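For reference, this is what the distributed algorithms compute: the coreness of a node is the largest k such that it belongs to a subgraph in which every node has degree at least k. The sketch below is the classic centralized peeling procedure, not the paper's distributed algorithm, and skips the usual bucket optimization for clarity.

```python
def core_decomposition(adj):
    """adj: dict node -> set of neighbours. Returns dict node -> coreness."""
    degree = {v: len(nbrs) for v, nbrs in adj.items()}
    remaining = set(adj)
    coreness = {}
    k = 0
    while remaining:
        v = min(remaining, key=lambda u: degree[u])   # peel a minimum-degree node
        k = max(k, degree[v])                         # coreness never decreases
        coreness[v] = k
        remaining.remove(v)
        for u in adj[v]:
            if u in remaining:
                degree[u] -= 1
    return coreness

# A triangle with a pendant node: the tail gets coreness 1, the triangle 2.
triangle_plus_tail = {
    'a': {'b', 'c'}, 'b': {'a', 'c'}, 'c': {'a', 'b', 'd'}, 'd': {'c'},
}
print(core_decomposition(triangle_plus_tail))
```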

Journal ArticleDOI
TL;DR: In this article, the KCS container ship was investigated in calm water and regular head seas by means of EFD and CFD, and the experimental study was conducted in FORCE Technology's towing tank in Denmark.
Abstract: The KCS container ship was investigated in calm water and regular head seas by means of EFD and CFD. The experimental study was conducted in FORCE Technology's towing tank in Denmark, and the CFD study was conducted using the URANS codes CFDSHIP-IOWA and Star-CCM+ plus the potential theory code AEGIR. Three speeds were covered and the wave conditions were chosen in order to study the ship's response in waves under resonance and maximum exciting conditions. In the experiment, the heave and pitch motions and the resistance were measured together with wave elevation of the incoming wave. The model test was designed and conducted in order to enable uncertainty assessment (UA) of the measured data. The results show that the ship responds strongly when the resonance and maximum exciting conditions are met. With respect to experimental uncertainty, the level for calm water is comparable to PMM uncertainties for maneuvering testing while the level is higher in waves. Concerning the CFD results, the computation shows a very complex and time-varying flow pattern. For the integral quantities, a comparison between EFD and CFD shows that the computed motions and resistance in calm water are in fair agreement with the measurement. In waves, the motions are still in fair agreement with measured data, but larger differences are observed for the resistance. The mean resistance is reasonable, but the first order amplitude of the resistance time history is underpredicted by CFD. Finally, it seems that the URANS codes are in closer agreement with the measurements compared to the potential theory.

Journal ArticleDOI
TL;DR: Stable computation of differentiation matrices and scattered node stencils based on Gaussian radial basis functions is shown.
Abstract: Stable computation of differentiation matrices and scattered node stencils based on Gaussian radial basis functions

05 Jan 2013
TL;DR: Differential computation, as discussed by the authors, extends traditional incremental computation to allow arbitrarily nested iteration; the paper explains how differential computation can be efficiently implemented in the context of a declarative data-parallel dataflow language.
Abstract: Existing computational models for processing continuously changing input data are unable to efficiently support iterative queries except in limited special cases. This makes it difficult to perform complex tasks, such as social-graph analysis on changing data at interactive timescales, which would greatly benefit those analyzing the behavior of services like Twitter. In this paper we introduce a new model called differential computation, which extends traditional incremental computation to allow arbitrarily nested iteration, and explain—with reference to a publicly available prototype system called Naiad—how differential computation can be efficiently implemented in the context of a declarative data-parallel dataflow language. The resulting system makes it easy to program previously intractable algorithms such as incrementally updated strongly connected components, and integrate them with data transformation operations to obtain practically relevant insights from real data streams.
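A toy illustration of the incremental half of the idea: collections are represented as streams of (record, delta) changes and operators propagate only the differences, rather than recomputing from scratch. The nested-iteration machinery that differential computation adds on top of this is not captured by the sketch; the class and example data are my own.

```python
# Incremental word count over (word, delta) change streams: each update applies
# the deltas and reports only the counts that actually changed.
from collections import Counter

class IncrementalWordCount:
    def __init__(self):
        self.counts = Counter()

    def update(self, changes):
        """changes: iterable of (word, delta) pairs; returns the changed counts."""
        touched = set()
        for word, delta in changes:
            self.counts[word] += delta
            touched.add(word)
        return {w: self.counts[w] for w in touched}

wc = IncrementalWordCount()
print(wc.update([("naiad", +1), ("dataflow", +2)]))   # initial batch
print(wc.update([("dataflow", -1), ("graph", +1)]))   # later corrections
```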

Journal ArticleDOI
TL;DR: It is concluded that neural computation is sui generis, which means that computational theories of cognition that rely on non-neural notions of computation ought to be replaced or reinterpreted in terms of neural computation.

Proceedings ArticleDOI
15 Apr 2013
TL;DR: The work for the server is immense in general and practical only for hand-compiled computations that can be expressed in special forms, as discussed by the authors; this paper develops a protocol that achieves the efficiency of the best manually constructed protocols yet applies to most computations.
Abstract: The area of proof-based verified computation (outsourced computation built atop probabilistically checkable proofs and cryptographic machinery) has lately seen renewed interest. Although recent work has made great strides in reducing the overhead of naive applications of the theory, these schemes still cannot be considered practical. A core issue is that the work for the server is immense, in general; it is practical only for hand-compiled computations that can be expressed in special forms. This paper addresses that problem. Provided one is willing to batch verification, we develop a protocol that achieves the efficiency of the best manually constructed protocols in the literature yet applies to most computations. We show that Quadratic Arithmetic Programs, a new formalism for representing computations efficiently, can yield a particularly efficient PCP that integrates easily into the core protocols, resulting in a server whose work is roughly linear in the running time of the computation. We implement this protocol in the context of a system, called Zaatar, that includes a compiler and a GPU implementation. Zaatar is almost usable for real problems, without special-purpose tailoring. We argue that many (but not all) of the next research questions in verified computation are questions in secure systems.

Journal ArticleDOI
TL;DR: It is shown that this new algorithm, which efficiently derives the uncertainty bounds for the estimated modes at all model orders in the stabilization diagram, is both computationally and memory efficient, reducing the computational burden by two orders of magnitude in the model order.

Journal ArticleDOI
TL;DR: A new algorithm is developed for the computation of all the eigenvalues and optionally the right and left eigenvectors of dense quadratic matrix polynomials; its MATLAB implementation, quadeig, outperforms the MATLAB function polyeig in terms of both stability and efficiency.
Abstract: We develop a new algorithm for the computation of all the eigenvalues and optionally the right and left eigenvectors of dense quadratic matrix polynomials. It incorporates scaling of the problem parameters prior to the computation of eigenvalues, a choice of linearization with favorable conditioning and backward stability properties, and a preprocessing step that reveals and deflates the zero and infinite eigenvalues contributed by singular leading and trailing matrix coefficients. The algorithm is backward-stable for quadratics that are not too heavily damped. Numerical experiments show that our MATLAB implementation of the algorithm, quadeig, outperforms the MATLAB function polyeig in terms of both stability and efficiency.
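The sketch below is not quadeig itself (which adds parameter scaling, a conditioning-aware choice of linearization, and deflation of zero and infinite eigenvalues) but the basic step such solvers build on: linearize the quadratic λ²M + λC + K into a generalized eigenproblem of twice the size and solve that. It assumes NumPy and SciPy are available; the coefficient matrices are made up.

```python
# First companion linearisation of the quadratic eigenvalue problem
# (lambda^2 M + lambda C + K) x = 0: solve A z = lambda B z with z = [x; lambda x].
import numpy as np
from scipy.linalg import eigvals

def quadratic_eigenvalues(M, C, K):
    n = M.shape[0]
    I, Z = np.eye(n), np.zeros((n, n))
    A = np.block([[Z, I], [-K, -C]])
    B = np.block([[I, Z], [Z, M]])
    return eigvals(A, B)

# Small damped-oscillator example with made-up coefficient matrices.
M = np.eye(2)
C = np.array([[0.2, 0.0], [0.0, 0.1]])
K = np.array([[2.0, -1.0], [-1.0, 2.0]])
print(quadratic_eigenvalues(M, C, K))   # four eigenvalues (two conjugate pairs)
```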

Proceedings Article
01 Jan 2013
TL;DR: In this article, the authors develop efficient solutions for computation with real numbers in floating point representation, as well as more complex operations such as square root, logarithm, and exponentiation.
Abstract: Secure computation undeniably received a lot of attention in the recent years, with the shift toward cloud computing offering a new incentive for secure computation and outsourcing. Surprisingly little attention, however, has been paid to computation with non-integer data types. To narrow this gap, in this work we develop efficient solutions for computation with real numbers in floating point representation, as well as more complex operations such as square root, logarithm, and exponentiation. Our techniques are information-theoretically secure, do not use expensive cryptographic techniques, and can be applied to a variety of settings. Our experimental results also show that the techniques exhibit rather fast performance and in some cases outperform operations on integers.

Journal ArticleDOI
TL;DR: In this paper, a 2-D inversion algorithm for field-based time-domain (TD) induced polarization (IP) surveys is proposed, which is based on a 2-D complex conductivity kernel that is computed over a range of frequencies and converted to the TD through the fast Hankel transform.
Abstract: Field-based time-domain (TD) induced polarization (IP) surveys are usually modelled by taking into account only the integral chargeability, thus disregarding spectral content. Furthermore, the effect of the transmitted waveform is commonly neglected, biasing inversion results. Given these limitations of conventional approaches, a new 2-D inversion algorithm has been developed using the full voltage decay of the IP response, together with an accurate description of the transmitter waveform and receiver transfer function. This allows reconstruction of the spectral information contained in the TD decay series. The inversion algorithm is based around a 2-D complex conductivity kernel that is computed over a range of frequencies and converted to the TD through a fast Hankel transform. Two key points in the implementation ensure that computation times are minimized. First, the speed of the Jacobian computation, time transformed from the frequency domain through the same transformation adopted for the forward response, is optimized. Secondly, the number of frequencies at which the forward response and Jacobian are calculated is reduced: cubic splines are used to interpolate the responses to the frequency sampling necessary in the fast Hankel transform. These features, together with parallel computation, ensure inversion times comparable with those of direct current algorithms. The algorithm has been developed in a laterally constrained inversion scheme, and handles both smooth and layered inversions; the latter being helpful in sedimentary environments, where quasi-layered models often represent the actual geology more accurately than smooth minimum-structure models. In the layered inversion approach, a general method to derive the thickness derivative from the complex conductivity Jacobian is also proposed. One synthetic example of layered inversion and one field example of smooth inversion show the capability of the algorithm and illustrate a complete uncertainty analysis of the model parameters. With this new algorithm, in situ TD IP measurements give access to the spectral content of the polarization processes, opening up new applications in environmental and hydrogeophysical investigations.

Book ChapterDOI
13 Jul 2013
TL;DR: This paper presents a method for solving forall-exists quantified Horn clauses extended with well-foundedness conditions, based on a counterexample-guided abstraction refinement scheme to discover witnesses for existentially quantified variables.
Abstract: Temporal verification of universal (i.e., valid for all computation paths) properties of various kinds of programs, e.g., procedural, multi-threaded, or functional, can be reduced to finding solutions for equations in form of universally quantified Horn clauses extended with well-foundedness conditions. Dealing with existential properties (e.g., whether there exists a particular computation path), however, requires solving forall-exists quantified Horn clauses, where the conclusion part of some clauses contains existentially quantified variables. For example, a deductive approach to CTL verification reduces to solving such clauses. In this paper we present a method for solving forall-exists quantified Horn clauses extended with well-foundedness conditions. Our method is based on a counterexample-guided abstraction refinement scheme to discover witnesses for existentially quantified variables. We also present an application of our solving method to automation of CTL verification of software, as well as its experimental evaluation.