
Showing papers on "Computation published in 2014"


Book ChapterDOI
01 Jan 2014
TL;DR: This chapter provides an overview of the fundamentals of algorithms and their links to self-organization, exploration, and exploitation.
Abstract: Algorithms are important tools for solving problems computationally. All computation involves algorithms, and the efficiency of an algorithm largely determines its usefulness. This chapter provides an overview of the fundamentals of algorithms and their links to self-organization, exploration, and exploitation. A brief history of recent nature-inspired algorithms for optimization is outlined in this chapter.

8,285 citations


Book
05 Aug 2014
TL;DR: This book is an excellent introduction to finite elements, iterative linear solvers and scientific computing; it contains theoretical problems and practical exercises and focuses on both theory and computation.
Abstract: The intended readership includes graduate students and researchers in engineering, numerical analysis, applied mathematics and interdisciplinary scientific computing. The publisher describes the book as follows:
* An excellent introduction to finite elements, iterative linear solvers and scientific computing
* Contains theoretical problems and practical exercises
* All methods and examples use freely available software
* Focuses on theory and computation, not theory for computation
* Describes approximation methods and numerical linear algebra

925 citations


Journal ArticleDOI
13 May 2014
TL;DR: The techniques developed in this area are now finding applications in other areas including data structures for dynamic graphs, approximation algorithms, and distributed and parallel computation.
Abstract: Over the last decade, there has been considerable interest in designing algorithms for processing massive graphs in the data stream model. The original motivation was two-fold: a) in many applications, the dynamic graphs that arise are too large to be stored in the main memory of a single machine and b) considering graph problems yields new insights into the complexity of stream computation. However, the techniques developed in this area are now finding applications in other areas including data structures for dynamic graphs, approximation algorithms, and distributed and parallel computation. We survey the state-of-the-art results; identify general techniques; and highlight some simple algorithms that illustrate basic ideas.

405 citations
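
As a flavour of the "simple algorithms that illustrate basic ideas" mentioned above, here is a hedged sketch of semi-streaming connectivity: a single pass over the edge stream with a union-find structure, keeping only O(n) state. The graph and numbers are made up; this is a generic textbook construction, not code from the survey.

```python
# Hypothetical sketch: semi-streaming connectivity with union-find.
# Edges arrive one at a time; only O(n) state (one parent pointer per
# vertex) is kept, never the full edge list.

def connected_components(num_vertices, edge_stream):
    parent = list(range(num_vertices))

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]   # path halving
            v = parent[v]
        return v

    components = num_vertices
    for u, v in edge_stream:                # single pass over the stream
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            components -= 1
    return components

# Example: a stream of 4 edges over 5 vertices leaves 2 components.
print(connected_components(5, iter([(0, 1), (1, 2), (3, 4), (0, 2)])))
```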


Journal ArticleDOI
TL;DR: In this article, a resource theory analogous to the theory of entanglement has been developed for fault-tolerant stabilizer computation and two quantitative measures for the amount of non-stabilizer resources are introduced.
Abstract: Recent results on the non-universality of fault-tolerant gate sets underline the critical role of resource states, such as magic states, to power scalable, universal quantum computation. Here we develop a resource theory, analogous to the theory of entanglement, that is relevant for fault-tolerant stabilizer computation. We introduce two quantitative measures (monotones) for the amount of non-stabilizer resource. As an application we give absolute bounds on the efficiency of magic state distillation. One of these monotones is the sum of the negative entries of the discrete Wigner representation of a quantum state, thereby resolving a long-standing open question of whether the degree of negativity in a quasi-probability representation is an operationally meaningful indicator of quantum behavior.

280 citations
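
For orientation, the negativity-based monotone described in the last sentence can be written out explicitly. The notation below is generic and the normalization of the phase-space point operators is an assumption, not necessarily the paper's convention.

```latex
% Sum of the negative entries of the discrete Wigner function of \rho
% (normalization of the phase-space point operators A_u is an assumed
% convention and may differ from the paper's).
\[
  \operatorname{sn}(\rho) \;=\; \sum_{u \,:\, W_\rho(u) < 0} \bigl| W_\rho(u) \bigr|,
  \qquad
  W_\rho(u) \;=\; \tfrac{1}{d}\,\operatorname{Tr}\!\bigl[ A_u\, \rho \bigr].
\]
% sn(\rho) vanishes exactly when W_\rho is a proper (nonnegative)
% probability distribution, and it cannot increase under stabilizer operations.
```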


Journal ArticleDOI
TL;DR: This paper introduces new stochastic computational elements (SCEs) based on finite-state machines (FSMs) for the task of digital image processing and compares the error tolerance, hardware area, and latency of stochastic implementations to those of conventional deterministic implementations using binary radix encoding.
Abstract: Maintaining the reliability of integrated circuits as transistor sizes continue to shrink to nanoscale dimensions is a significant looming challenge for the industry. Computation on stochastic bit streams, which could replace conventional deterministic computation based on a binary radix, allows similar computation to be performed more reliably and often with less hardware area. Prior work discussed a variety of specific stochastic computational elements (SCEs) for applications such as artificial neural networks and control systems. Recently, very promising new SCEs have been developed based on finite-state machines (FSMs). In this paper, we introduce new SCEs based on FSMs for the task of digital image processing. We present five digital image processing algorithms as case studies of practical applications of the technique. We compare the error tolerance, hardware area, and latency of stochastic implementations to those of conventional deterministic implementations using binary radix encoding. We also provide a rigorous analysis of a particular function, namely the stochastic linear gain function, which had only been validated experimentally in prior work.

224 citations
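
A toy illustration of the underlying encoding (generic stochastic computing, not the paper's FSM-based elements): values in [0, 1] become the probability of a 1 in a bit stream, so an AND gate multiplies and a multiplexer with a 0.5 select stream computes a scaled sum.

```python
# Toy sketch of computation on stochastic bit streams: a value p in [0, 1]
# is encoded as the probability of a 1 in a random bit stream; an AND gate
# then multiplies, and a MUX with a 0.5-select stream computes (a + b) / 2.
import random

def encode(p, n):
    return [1 if random.random() < p else 0 for _ in range(n)]

def decode(bits):
    return sum(bits) / len(bits)

n = 100_000
a, b = encode(0.3, n), encode(0.8, n)
sel = encode(0.5, n)

product = [x & y for x, y in zip(a, b)]                      # ~0.24
scaled_sum = [x if s else y for x, y, s in zip(a, b, sel)]   # ~0.55

print(decode(product), decode(scaled_sum))
```

The payoff claimed above is fault tolerance: flipping a few bits in these streams perturbs the decoded value only slightly, unlike a bit flip in a binary radix word.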


Journal ArticleDOI
TL;DR: The proposed parallel solution (P-SBAS) is based on a dual-level parallelization approach and encompasses combined parallelization strategies, which are fully discussed in this paper; an experimental analysis confirms the effectiveness of the proposed parallel computing solution.
Abstract: The aim of this paper is to design a novel parallel computing solution for the processing chain implementing the Small BAseline Subset (SBAS) Differential SAR Interferometry (DInSAR) technique. The proposed parallel solution (P-SBAS) is based on a dual-level parallelization approach and encompasses combined parallelization strategies, which are fully discussed in this paper. Moreover, the main methodological aspects of the proposed approach and their implications are also addressed. Finally, an experimental analysis, aimed at quantitatively evaluating the computational efficiency of the implemented parallel prototype with respect to appropriate metrics, has been carried out on real data; this analysis confirms the effectiveness of the proposed parallel computing solution. In the current scenario, characterized by huge SAR archives from present and future SAR missions, the P-SBAS processing chain can play a key role in effectively exploiting these big data volumes to understand the surface deformation dynamics of large areas of the Earth.

170 citations
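
Purely as a hedged illustration of a dual-level parallelization pattern (not the actual P-SBAS chain), the sketch below runs independent interferogram pairs in separate processes while keeping the per-pixel work vectorized inside each process; all array sizes and data are stand-ins.

```python
# Hedged illustration of a dual-level parallelization pattern: coarse
# grain = one process per interferometric pair, fine grain = vectorized
# per-pixel work inside each process. The phase data here is random noise.
import numpy as np
from multiprocessing import Pool

def process_pair(pair_id):
    # Fine-grained level: vectorized per-pixel phase-difference computation.
    phase_1 = np.random.rand(1024, 1024) * 2 * np.pi   # stand-in for SLC phases
    phase_2 = np.random.rand(1024, 1024) * 2 * np.pi
    interferogram = np.angle(np.exp(1j * (phase_1 - phase_2)))
    return pair_id, float(interferogram.std())

if __name__ == "__main__":
    # Coarse-grained level: a pool of workers over independent pairs.
    with Pool(processes=4) as pool:
        for pair_id, spread in pool.map(process_pair, range(8)):
            print(f"pair {pair_id}: phase spread {spread:.3f} rad")
```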


Journal ArticleDOI
TL;DR: This work proposes and demonstrates an acoustic switch based on a driven chain of spherical particles with a nonlinear contact force and realizes the OR and AND acoustic logic elements by exploiting the nonlinear dynamical effects of the granular chain.
Abstract: Electrical flow control devices are fundamental components in electrical appliances and computers; similarly, optical switches are essential in a number of communication, computation and quantum information-processing applications. An acoustic counterpart would use an acoustic (mechanical) signal to control the mechanical energy flow through a solid material. Although earlier research has demonstrated acoustic diodes or circulators, no acoustic switches with wide operational frequency ranges and controllability have been realized. Here we propose and demonstrate an acoustic switch based on a driven chain of spherical particles with a nonlinear contact force. We experimentally and numerically verify that this switching mechanism stems from a combination of nonlinearity and bandgap effects. We also realize the OR and AND acoustic logic elements by exploiting the nonlinear dynamical effects of the granular chain. We anticipate these results to enable the creation of novel acoustic devices for the control of mechanical energy flow in high-performance ultrasonic devices.

165 citations


Journal ArticleDOI
TL;DR: A novel, real-time algorithm to accurately approximate the generalized penetration depth (PDg) between two overlapping rigid or articulated models is presented, based on iterative, constrained optimization on the contact space, defined by the overlapping objects.
Abstract: We present a novel, real-time algorithm to accurately approximate the generalized penetration depth (PDg) between two overlapping rigid or articulated models. Given the high complexity of computing PDg, our algorithm approximates PDg based on iterative, constrained optimization on the contact space, defined by the overlapping objects. The main ingredient of our algorithm is a novel and general formulation of distance metric, the object norm, in a configuration space for articulated models, and a compact closed-form solution for it. Then, we perform constrained optimization, by linearizing the contact constraint, and minimizing the object norm under such a constraint. In practice, our algorithm can compute locally optimal PDg for rigid or articulated models consisting of tens of thousands of triangles in tens of milliseconds. We also suggest three applications using PDg computation: retraction-based motion planning, physically-based animation, and data-driven grasping.

148 citations
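
A minimal sketch of the general idea, under the simplifying assumption of two rigid disks and plain translation (the paper's object norm and articulated-model formulation are not reproduced): penetration depth is cast as the smallest displacement satisfying a separation constraint, solved here with scipy, which is an assumption about tooling.

```python
# Minimal sketch: approximate the translational penetration depth of two
# overlapping disks as the smallest displacement that restores contact,
# i.e. a constrained optimization over the contact space.
import numpy as np
from scipy.optimize import minimize

c_a, r_a = np.array([0.0, 0.0]), 1.0     # fixed disk
c_b, r_b = np.array([1.2, 0.0]), 0.8     # overlapping disk to be displaced

def objective(d):                         # squared displacement norm
    return float(d @ d)

def separation(d):                        # >= 0 once the disks no longer overlap
    return float(np.linalg.norm(c_b + d - c_a) - (r_a + r_b))

res = minimize(objective, x0=np.array([0.1, 0.0]), method="SLSQP",
               constraints=[{"type": "ineq", "fun": separation}])

print("estimated PD:", np.linalg.norm(res.x))                  # ~0.6
print("analytic PD :", (r_a + r_b) - np.linalg.norm(c_b - c_a))
```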


Book ChapterDOI
07 Dec 2014
TL;DR: This work defines generic constructions of the Threshold Implementation technique, proves their security against higher-order DPA, and provides 1st-, 2nd- and 3rd-order DPA-resistant implementations of the block cipher KATAN-32.
Abstract: Higher-order differential power analysis attacks are a serious threat for cryptographic hardware implementations. In particular, glitches in the circuit make it hard to protect the implementation with masking. The existing higher-order masking countermeasures that guarantee security in the presence of glitches use multi-party computation techniques and require a lot of resources in terms of circuit area and randomness. The Threshold Implementation method is also based on multi-party computation but it is more area and randomness efficient. Moreover, it typically requires fewer clock cycles since all parties can operate simultaneously. However, so far it is only provably secure against 1st-order DPA. We address this gap and extend the Threshold Implementation technique to higher orders. We define generic constructions and prove their security. To illustrate the approach, we provide 1st-, 2nd- and 3rd-order DPA-resistant implementations of the block cipher KATAN-32. Our analysis of 300 million power traces measured from an FPGA implementation supports the security proofs.

139 citations
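
The sketch below shows the classic first-order Threshold Implementation of a single AND gate with three Boolean shares, the building block that the higher-order constructions generalize; it is an illustration of the sharing idea, not the paper's KATAN-32 circuit.

```python
# Sketch of the first-order Threshold Implementation idea for one AND gate
# with three Boolean shares (the classic construction; the paper's
# higher-order generalization is not reproduced here).
import random

def share(x):
    """Split bit x into three shares that XOR back to x."""
    x1, x2 = random.randint(0, 1), random.randint(0, 1)
    return x1, x2, x ^ x1 ^ x2

def ti_and(xs, ys):
    """Each output share omits one input share index (non-completeness)."""
    x1, x2, x3 = xs
    y1, y2, y3 = ys
    z1 = (x2 & y2) ^ (x2 & y3) ^ (x3 & y2)   # independent of share 1
    z2 = (x3 & y3) ^ (x1 & y3) ^ (x3 & y1)   # independent of share 2
    z3 = (x1 & y1) ^ (x1 & y2) ^ (x2 & y1)   # independent of share 3
    return z1, z2, z3

for x in (0, 1):
    for y in (0, 1):
        z = ti_and(share(x), share(y))
        assert z[0] ^ z[1] ^ z[2] == x & y   # correctness of the sharing
print("shared AND reproduces the plain AND on all inputs")
```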


Journal Article
TL;DR: It is shown that measurement-based quantum computations which compute a nonlinear Boolean function with a high probability are contextual; this class includes an example of practical interest that has a superpolynomial speedup over the best-known classical algorithm.
Abstract: We show, under natural assumptions for qubit systems, that measurement-based quantum computations (MBQCs) which compute a nonlinear Boolean function with a high probability are contextual. The class of contextual MBQCs includes an example which is of practical interest and has a superpolynomial speedup over the best-known classical algorithm, namely, the quantum algorithm that solves the "discrete log" problem.

132 citations


Journal ArticleDOI
TL;DR: A way of finding energy representations with large classical gaps between ground and first excited states, efficient algorithms for mapping non-compatible Ising models into the hardware, and the use of decomposition methods for problems that are too large to fit in hardware are proposed.
Abstract: This paper discusses techniques for solving discrete optimization problems using quantum annealing. Practical issues likely to affect the computation include precision limitations, finite temperature, bounded energy range, sparse connectivity, and small numbers of qubits. To address these concerns we propose a way of finding energy representations with large classical gaps between ground and first excited states, efficient algorithms for mapping non-compatible Ising models into the hardware, and the use of decomposition methods for problems that are too large to fit in hardware. We validate the approach by describing experiments with D-Wave quantum hardware for low density parity check decoding with up to 1000 variables.
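
For concreteness, the kind of Ising energy such hardware minimizes can be written down and minimized classically; the sketch below uses made-up couplings and plain simulated annealing, so it only illustrates the problem form, not the quantum annealer or the paper's embedding techniques.

```python
# Hedged classical sketch: a small Ising energy of the kind an annealer
# would minimize, H(s) = sum_i h_i s_i + sum_(i<j) J_ij s_i s_j, with
# spins s_i in {-1, +1}; couplings below are arbitrary examples.
import math, random

h = {0: 0.5, 1: -0.3, 2: 0.0, 3: 0.2}
J = {(0, 1): -1.0, (1, 2): 1.0, (2, 3): -1.0, (0, 3): 0.5}

def energy(s):
    return sum(h[i] * s[i] for i in h) + sum(J[i, j] * s[i] * s[j] for i, j in J)

s = {i: random.choice((-1, 1)) for i in h}
for step in range(5000):
    T = 2.0 * (1 - step / 5000) + 1e-3          # linear cooling schedule
    i = random.choice(list(h))
    s_new = dict(s); s_new[i] = -s_new[i]       # single spin flip
    dE = energy(s_new) - energy(s)
    if dE <= 0 or random.random() < math.exp(-dE / T):
        s = s_new

print("low-energy spin configuration:", s, "energy:", energy(s))
```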

Journal ArticleDOI
TL;DR: This work proposes the first reservoir computer based on a fully passive nonlinearity, namely the saturable absorption of a semiconductor mirror, which constitutes an important step towards the development of ultrafast low-consumption analog computers.
Abstract: Reservoir computing is a new bio-inspired computation paradigm. It exploits a dynamical system driven by a time-dependent input to carry out computation. For efficient information processing, only a few parameters of the reservoir need to be tuned, which makes it a promising framework for hardware implementation. Recently, electronic, opto-electronic and all-optical experimental reservoir computers were reported. In those implementations, the nonlinear response of the reservoir is provided by active devices such as optoelectronic modulators or optical amplifiers. By contrast, we propose here the first reservoir computer based on a fully passive nonlinearity, namely the saturable absorption of a semiconductor mirror. Our experimental setup constitutes an important step towards the development of ultrafast low-consumption analog computers.
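
A software toy of the reservoir principle (an echo-state-style network, not the optical setup above): the reservoir is a fixed random dynamical system and only the few readout parameters are trained, here by ridge regression on an arbitrary delay-recall task.

```python
# Minimal echo-state-network style sketch of reservoir computing: a fixed
# random reservoir is driven by the input and only the linear readout is
# trained (ridge regression). Task and sizes are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
N, T = 200, 2000                                # reservoir size, samples
u = rng.uniform(-1, 1, T)                       # input sequence
y_target = np.roll(u, 3)                        # task: recall input 3 steps back

W_in = rng.uniform(-0.5, 0.5, N)
W = rng.normal(0, 1, (N, N))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))       # scale spectral radius below 1

x = np.zeros(N)
states = np.zeros((T, N))
for t in range(T):                              # drive the fixed reservoir
    x = np.tanh(W @ x + W_in * u[t])
    states[t] = x

# Train only the readout, ignoring a washout period.
washout, ridge = 100, 1e-6
S, Y = states[washout:], y_target[washout:]
W_out = np.linalg.solve(S.T @ S + ridge * np.eye(N), S.T @ Y)

print("readout NRMSE:", np.sqrt(np.mean((S @ W_out - Y) ** 2)) / np.std(Y))
```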

Journal ArticleDOI
TL;DR: In the computations reported here, NURBS basis functions are used for the temporal representation of the rotor motion, enabling us to represent the circular paths associated with that motion exactly and specify a constant angular velocity corresponding to the invariant speeds along those paths.
Abstract: We present the space-time variational multiscale (ST-VMS) computation of wind-turbine rotor and tower aerodynamics. The rotor geometry is that of the NREL 5MW offshore baseline wind turbine. We compute with a given wind speed and a specified rotor speed. The computation is challenging because of the large Reynolds numbers and rotating turbulent flows, and computing the correct torque requires an accurate and meticulous numerical approach. The presence of the tower increases the computational challenge because of the fast, rotational relative motion between the rotor and tower. The ST-VMS method is the residual-based VMS version of the Deforming-Spatial-Domain/Stabilized ST (DSD/SST) method, and is also called "DSD/SST-VMST" method (i.e., the version with the VMS turbulence model). In calculating the stabilization parameters embedded in the method, we are using a new element length definition for the diffusion-dominated limit. The DSD/SST method, which was introduced as a general-purpose moving-mesh method for computation of flows with moving interfaces, requires a mesh update method. Mesh update typically consists of moving the mesh for as long as possible and remeshing as needed. In the computations reported here, NURBS basis functions are used for the temporal representation of the rotor motion, enabling us to represent the circular paths associated with that motion exactly and specify a constant angular velocity corresponding to the invariant speeds along those paths. In addition, temporal NURBS basis functions are used in representation of the motion and deformation of the volume meshes computed and also in remeshing. We name this "ST/NURBS Mesh Update Method (STNMUM)." The STNMUM increases computational efficiency in terms of computer time and storage, and computational flexibility in terms of being able to change the time-step size of the computation. We use layers of thin elements near the blade surfaces, which undergo rigid-body motion with the rotor. We compare the results from computations with and without tower, and we also compare using NURBS and linear finite element basis functions in temporal representation of the mesh motion.
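
The claim that NURBS can represent circular paths exactly rests on a standard property of rational quadratic arcs; the sketch below checks it numerically for a quarter circle with the usual weights. This is a generic illustration, not the paper's mesh-update code.

```python
# Side illustration of why NURBS can represent circles exactly: a rational
# quadratic Bezier arc (the building block of a NURBS circle) with weights
# (1, sqrt(2)/2, 1) traces a quarter of the unit circle to machine precision.
import numpy as np

P = np.array([[1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])    # control points
w = np.array([1.0, np.sqrt(2) / 2, 1.0])              # weights for a 90-degree arc

def arc(t):
    B = np.array([(1 - t) ** 2, 2 * t * (1 - t), t ** 2])   # Bernstein basis
    return (B * w) @ P / (B * w).sum()

radii = [np.linalg.norm(arc(t)) for t in np.linspace(0, 1, 11)]
print("max deviation from unit radius:", max(abs(r - 1) for r in radii))
```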

Journal ArticleDOI
TL;DR: This paper introduces a formal framework that can be used to determine whether a physical system is performing a computation, and introduces the notion of a ‘computational entity’, and its critical role in defining when computing is taking place in physical systems.
Abstract: Computing is a high-level process of a physical system. Recent interest in non-standard computing systems, including quantum and biological computers, has brought this physical basis of computing to the forefront. There has been, however, no consensus on how to tell if a given physical system is acting as a computer or not, leading to confusion over novel computational devices, and even claims that every physical event is a computation. In this paper we introduce a formal framework that can be used to determine whether or not a physical system is performing a computation. We demonstrate how the abstract computational level interacts with the physical device level, drawing the comparison with the use of mathematical models to represent physical objects in experimental science. This powerful formulation allows a precise description of the similarities between experiments, computation, simulation, and technology, leading to our central conclusion: physical computing is the use of a physical system to predict the outcome of an abstract evolution. We give conditions that must be satisfied in order for computation to be occurring, and illustrate these with a range of non-standard computing scenarios. The framework also covers broader computing contexts, where there is no obvious human computer user. We define the critical notion of a ‘computational entity’, and show the role this plays in defining when computing is taking place in physical systems.

Book ChapterDOI
05 Jan 2014
TL;DR: It is demonstrated that a simple adaptation of the standard reduction algorithm leads to a variant for distributed systems that at least compensates for the overhead caused by communication between nodes, and often even speeds up the computation compared to sequential and even parallel shared memory algorithms.
Abstract: Persistent homology is a popular and powerful tool for capturing topological features of data. Advances in algorithms for computing persistent homology have reduced the computation time drastically -- as long as the algorithm does not exhaust the available memory. Following up on a recently presented parallel method for persistence computation on shared memory systems [1], we demonstrate that a simple adaptation of the standard reduction algorithm leads to a variant for distributed systems. Our algorithmic design ensures that the data is distributed over the nodes without redundancy; this permits the computation of much larger instances than on a single machine. Moreover, we observe that the parallelism at least compensates for the overhead caused by communication between nodes, and often even speeds up the computation compared to sequential and even parallel shared memory algorithms. In our experiments, we were able to compute the persistent homology of filtrations with more than a billion (10^9) elements within seconds on a cluster with 32 nodes using less than 6 GB of memory per node.
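
The "standard reduction algorithm" that the distributed variant adapts can be stated in a few lines; below is its simplest sequential Z/2 form on a tiny made-up filtration (a filled triangle), purely for illustration.

```python
# Standard persistence reduction over Z/2: columns of the boundary matrix
# are added left-to-right until every nonzero column has a unique lowest
# nonzero row ("low"); the pairs (low(j), j) are the birth-death pairs.
def low(col):
    return max(col) if col else None

def reduce_boundary(columns):
    """columns[j] is the set of row indices with a 1 in column j."""
    lookup = {}                       # low -> column index that owns it
    pairs = []
    for j, col in enumerate(columns):
        while col and low(col) in lookup:
            col ^= columns[lookup[low(col)]]   # Z/2 column addition
        if col:
            lookup[low(col)] = j
            pairs.append((low(col), j))
    return pairs

# Toy filtration: vertices 0,1,2 then edges {0,1},{1,2},{0,2}, then the triangle.
boundary = [set(), set(), set(), {0, 1}, {1, 2}, {0, 2}, {3, 4, 5}]
print(reduce_boundary(boundary))      # birth-death pairs, e.g. [(1, 3), (2, 4), (5, 6)]
```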

Journal ArticleDOI
TL;DR: In this paper, the authors present a new mean dynamic topography (MDT) for the Mediterranean Sea, SMDT-MED-2014 (Synthetic Mean Dynamic Topography of the MEDiterranean sea), which was computed using extended data sets and refined processing.
Abstract: The accurate knowledge of the ocean's mean dynamic topography (MDT) is a crucial issue for a number of oceanographic applications and, in some areas of the Mediterranean Sea, important limitations have been found, pointing to the need for an upgrade. We present a new MDT that was computed for the Mediterranean Sea. It profits from improvements made possible by the use of extended data sets and refined processing. The updated data set spans the 1993–2012 period and consists of drifter velocities, altimetry data, hydrological profiles and model data. The methodology is similar to that of the previous MDT by Rio et al. (2007). However, in Rio et al. (2007) no hydrological profiles had been taken into account. This required the development of dedicated processing. A number of sensitivity studies have been carried out to obtain as accurate an MDT as possible. The main results from these sensitivity studies are the following: moderate sensitivity to the choice of correlation scales but almost negligible sensitivity to the choice of the first guess (model solution). A systematic external validation against independent data has been made to evaluate the performance of the new MDT. Compared to previous versions, SMDT-MED-2014 (Synthetic Mean Dynamic Topography of the MEDiterranean sea) features shorter-scale structures, which results in an altimeter velocity variance closer to the observed velocity variance and, at the same time, gives better Taylor skills.

Journal ArticleDOI
TL;DR: It is concluded that synthetic biology must use analog, collective analog, probabilistic and hybrid analog–digital computational approaches; otherwise, even relatively simple synthetic computations in cells such as addition will exceed energy and molecular-count budgets.
Abstract: We analyse the pros and cons of analog versus digital computation in living cells. Our analysis is based on fundamental laws of noise in gene and protein expression, which set limits on the energy, time, space, molecular count and part-count resources needed to compute at a given level of precision. We conclude that analog computation is significantly more efficient in its use of resources than deterministic digital computation even at relatively high levels of precision in the cell. Based on this analysis, we conclude that synthetic biology must use analog, collective analog, probabilistic and hybrid analog–digital computational approaches; otherwise, even relatively simple synthetic computations in cells such as addition will exceed energy and molecular-count budgets. We present schematics for efficiently representing analog DNA–protein computation in cells. Analog electronic flow in subthreshold transistors and analog molecular flux in chemical reactions obey Boltzmann exponential laws of thermodynamics and are described by astoundingly similar logarithmic electrochemical potentials. Therefore, cytomorphic circuits can help to map circuit designs between electronic and biochemical domains. We review recent work that uses positive-feedback linearization circuits to architect wide-dynamic-range logarithmic analog computation in Escherichia coli using three transcription factors, nearly two orders of magnitude more efficient in parts than prior digital implementations.
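
The "astoundingly similar logarithmic electrochemical potentials" can be made concrete with two textbook relations; these are standard physics written in generic notation, not equations quoted from the paper.

```latex
% Subthreshold (weak-inversion) transistor current: exponential in the gate
% voltage measured in thermal voltages U_T = kT/q (kappa is the gate
% coupling coefficient; symbols are generic).
\[
  I_{DS} \;\approx\; I_0\, e^{\,\kappa V_{GS}/U_T}, \qquad U_T = \frac{kT}{q}.
\]
% Chemical potential of a dilute species: logarithmic in concentration, so
% reaction fluxes depend exponentially on \Delta\mu / kT.
\[
  \mu \;=\; \mu^{\circ} + kT \ln\!\frac{c}{c^{\circ}},
  \qquad
  \frac{J_{\mathrm{fwd}}}{J_{\mathrm{rev}}} \;=\; e^{-\Delta\mu / kT}.
\]
```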

Journal ArticleDOI
TL;DR: A modification of the fast-marching algorithm is introduced, which solves the anisotropic eikonal equation associated with an arbitrary continuous Riemannian metric on a two- or three-dimensional domain; the convergence of the algorithm is proved and its efficiency is illustrated by numerical experiments.
Abstract: We introduce a modification of the Fast Marching Algorithm, which solves the generalized eikonal equation associated with an arbitrary continuous Riemannian metric on a two- or three-dimensional box domain. The algorithm has a logarithmic complexity in the maximum anisotropy ratio of the Riemannian metric, which makes it possible to handle extreme anisotropies for a reduced numerical cost. We establish that the output of the algorithm converges towards the viscosity solution of the continuous problem as the discretization step tends to zero. The algorithm is based on the computation, at each grid point, of a reduced basis of the unit lattice with respect to the symmetric positive definite matrix encoding the desired anisotropy at this point.
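
One ingredient named in the last sentence, a lattice basis reduced with respect to an SPD matrix, can be computed by classical Lagrange/Gauss reduction; the sketch below uses an arbitrary anisotropic matrix M and is only a hedged illustration of that sub-step, not the paper's fast-marching solver.

```python
# Lagrange/Gauss reduction of the unit lattice Z^2 with respect to an SPD
# matrix M: returns short lattice vectors in the anisotropic norm
# |v|_M = sqrt(v^T M v). M below is an arbitrary strongly anisotropic example.
import numpy as np

M = np.array([[100.0, 30.0],
              [30.0, 10.0]])            # symmetric positive definite

def m_dot(u, v):
    return float(u @ M @ v)

def gauss_reduce(b1, b2):
    while True:
        if m_dot(b2, b2) < m_dot(b1, b1):
            b1, b2 = b2, b1
        m = round(m_dot(b1, b2) / m_dot(b1, b1))
        if m == 0:
            return b1, b2
        b2 = b2 - m * b1                # stays an integer lattice vector

e1, e2 = np.array([1, 0]), np.array([0, 1])
r1, r2 = gauss_reduce(e1, e2)
print(r1, r2, "M-norms:", m_dot(r1, r1) ** 0.5, m_dot(r2, r2) ** 0.5)
```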

Journal ArticleDOI
TL;DR: This work uses the tensor train (TT) format for vectors and matrices to overcome the curse of dimensionality and make storage and computational cost feasible, and approximates several low-lying eigenvectors simultaneously in the block version of the TT format.
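
For context, the generic TT-SVD factorization behind the format looks as follows; this is a plain-numpy sketch of the decomposition itself, not the paper's block eigenvalue solver, and the ranks and test tensor are arbitrary.

```python
# Generic TT-SVD sketch: a d-way array is factored into a chain of small
# 3-way cores by successive truncated SVDs, so storage grows linearly in d
# instead of exponentially.
import numpy as np

def tt_svd(tensor, max_rank):
    dims = tensor.shape
    cores, rank = [], 1
    mat = tensor.reshape(rank * dims[0], -1)
    for k in range(len(dims) - 1):
        U, S, Vt = np.linalg.svd(mat, full_matrices=False)
        r = min(max_rank, len(S))
        cores.append(U[:, :r].reshape(rank, dims[k], r))
        mat = (S[:r, None] * Vt[:r]).reshape(r * dims[k + 1], -1)
        rank = r
    cores.append(mat.reshape(rank, dims[-1], 1))
    return cores

def tt_to_full(cores):
    full = cores[0]
    for core in cores[1:]:
        full = np.tensordot(full, core, axes=1)   # contract adjacent ranks
    return full[0, ..., 0]                        # drop the boundary ranks of size 1

A = np.random.rand(4, 5, 6, 7)
cores = tt_svd(A, max_rank=30)                    # ranks large enough to be exact here
print("reconstruction error:", np.linalg.norm(tt_to_full(cores) - A))
```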

Journal ArticleDOI
TL;DR: It is shown that the PSBG enables optical computation of the spatial Laplace operator of the electromagnetic field components of the incident beam and makes possible the formation of a Laguerre-Gaussian mode.
Abstract: Diffraction of a 3D optical beam on a multilayer phase-shifted Bragg grating (PSBG) is considered. It is shown that the PSBG enables optical computation of the spatial Laplace operator of the electromagnetic field components of the incident beam. The computation of the Laplacian is performed in reflection at normal incidence. As a special case, the parameters of the PSBG transforming the incident Gaussian beam into a Laguerre-Gaussian mode of order (1,0) are obtained. The presented numerical results demonstrate the high quality of the Laplace operator computation and confirm the possibility of forming a Laguerre-Gaussian mode. We expect the proposed applications to be useful for all-optical data processing.
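
The link between a reflectance that is quadratic in the transverse wavenumbers and the spatial Laplacian is the usual angular-spectrum filtering identity; in generic notation (assumed here, not taken from the paper):

```latex
% If, near normal incidence, the reflection coefficient is approximately
% quadratic in the transverse wavenumbers,
\[
  R(k_x,k_y) \;\approx\; \alpha\,(k_x^2 + k_y^2),
\]
% then filtering the incident angular spectrum with R and using
% \mathcal{F}\{\nabla_\perp^2 E\} = -(k_x^2+k_y^2)\,\mathcal{F}\{E\} gives
\[
  E_{\mathrm{refl}}(x,y)
  \;=\; \mathcal{F}^{-1}\!\bigl[R(k_x,k_y)\,\hat{E}_{\mathrm{inc}}(k_x,k_y)\bigr]
  \;\approx\; -\,\alpha\,\nabla_\perp^2 E_{\mathrm{inc}}(x,y),
\]
% i.e. the structure applies the transverse Laplacian optically in reflection.
```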

Book ChapterDOI
01 Jan 2014
TL;DR: An updated QPmR algorithm implementation for computation and analysis of the spectrum of quasi-polynomials is presented and the algorithm is demonstrated by three examples.
Abstract: An updated QPmR algorithm implementation for computation and analysis of the spectrum of quasi-polynomials is presented. The objective is to compute all the zeros of a quasi-polynomial located in a given region of the complex plane. The root-finding task is based on mapping the quasi-polynomial in the complex plane. Consequently, utilizing spectrum distribution diagram of the quasi-polynomial, the asymptotic exponentials of the retarded chains are determined. If the quasi-polynomial is of neutral type, the spectrum of associated exponential polynomial is assessed, supplemented by determining the safe upper bound of its spectrum. Next to the outline of the computational tools involved in QPmR, its Matlab implementation is presented. Finally, the algorithm is demonstrated by three examples.
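
A crude stand-in for the mapping idea (not the QPmR implementation): evaluate a quasi-polynomial on a grid over the region of interest, keep near-zero grid points, and polish them with Newton's method. The quasi-polynomial h(s) = s + exp(-s) and the region are arbitrary choices.

```python
# Grid-mapping sketch for quasi-polynomial zeros: map |h| over a region of
# the complex plane, take coarse candidates, refine with Newton.
import numpy as np

def h(s):
    return s + np.exp(-s)

def dh(s):
    return 1 - np.exp(-s)

# Map the region [-3, 1] x [0j, 15j] on a grid.
re, im = np.meshgrid(np.linspace(-3, 1, 400), np.linspace(0, 15, 1500))
S = re + 1j * im
vals = np.abs(h(S))

roots = set()
for s0 in S[vals < 0.05]:                 # coarse candidates from the map
    s = s0
    for _ in range(30):                   # Newton refinement
        s = s - h(s) / dh(s)
    if abs(h(s)) < 1e-10:
        roots.add(complex(round(s.real, 6), round(s.imag, 6)))

for r in sorted(roots, key=lambda z: z.imag):
    print(r)
```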

Posted Content
TL;DR: This paper introduces a dynamic version of the successive shortest-path algorithm which solves the data association problem optimally while reusing computation, resulting in faster inference than standard solvers and an approximate online solution with bounded memory and computation which is capable of handling videos of arbitrary length while performing tracking in real time.
Abstract: One of the most popular approaches to multi-target tracking is tracking-by-detection. Current min-cost flow algorithms which solve the data association problem optimally have three main drawbacks: they are computationally expensive, they assume that the whole video is given as a batch, and they scale badly in memory and computation with the length of the video sequence. In this paper, we address each of these issues, resulting in a computationally and memory-bounded solution. First, we introduce a dynamic version of the successive shortest-path algorithm which solves the data association problem optimally while reusing computation, resulting in significantly faster inference than standard solvers. Second, we address the optimal solution to the data association problem when dealing with an incoming stream of data (i.e., online setting). Finally, we present our main contribution, which is an approximate online solution with bounded memory and computation that is capable of handling videos of arbitrary length while performing tracking in real time. We demonstrate the effectiveness of our algorithms on the KITTI and PETS2009 benchmarks and show state-of-the-art performance, while being significantly faster than existing solvers.
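
A much-simplified relative of the min-cost-flow formulation, restricted to two frames; detection positions, costs and the use of networkx are illustrative assumptions, not the paper's dynamic successive-shortest-path solver.

```python
# Toy two-frame data association cast as a min-cost-flow problem: one unit
# of flow per track, squared-distance edge costs (scaled to integers).
import networkx as nx

frame1 = {"a": (0, 0), "b": (5, 5)}
frame2 = {"c": (0, 1), "d": (6, 5)}

G = nx.DiGraph()
n_tracks = min(len(frame1), len(frame2))
G.add_node("S", demand=-n_tracks)          # source supplies one unit per track
G.add_node("T", demand=n_tracks)           # sink absorbs them

for i in frame1:
    G.add_edge("S", i, capacity=1, weight=0)
for j in frame2:
    G.add_edge(j, "T", capacity=1, weight=0)
for i, (x1, y1) in frame1.items():
    for j, (x2, y2) in frame2.items():
        cost = int(10 * ((x1 - x2) ** 2 + (y1 - y2) ** 2))   # integer costs
        G.add_edge(i, j, capacity=1, weight=cost)

flow = nx.min_cost_flow(G)
links = [(i, j) for i in frame1 for j in frame2 if flow[i].get(j, 0) > 0]
print("associations:", links)              # expect [('a', 'c'), ('b', 'd')]
```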

Journal ArticleDOI
TL;DR: It is concluded that DCA is a viable alternative to divided-antenna mode ATI, while the TanDEM-X results demonstrate the true potential of the ATI technique at near-optimum baselines.
Abstract: All existing examples of current measurements by spaceborne synthetic aperture radar (SAR) along-track (AT) interferometry (ATI) have suffered from short baselines and corresponding low sensitivities. Theoretically, the best data quality at X-band is expected at effective baselines on the order of 30 m, i.e., 30 times as long as the baselines of the divided-antenna modes of TerraSAR-X. In early 2012, we had a first opportunity to obtain data at near-optimum baselines from the TanDEM-X satellite formation. In this paper, we analyze two TanDEM-X interferograms acquired over the Pentland Firth (Scotland) with effective AT baselines of 25 and 40 m. For comparison, we consider a TerraSAR-X dual-receive-antenna (DRA)-mode interferogram with an effective baseline of 1.15 m, as well as velocity fields obtained by Doppler centroid analysis (DCA) of single-antenna data from the same three scenes. We show that currents derived from the TanDEM-X interferograms have a residual noise level of 0.1 m/s at an effective resolution of about 33 m × 33 m, while DRA-mode data must be averaged over 1000 m × 1000 m to reach the same level of accuracy. A comparison with reference currents from a 1-km resolution numerical tide computation system shows good agreement in all three cases. The DCA-based currents are found to be less accurate than the ATI-based ones but close to short-baseline ATI results in quality. We conclude that DCA is a viable alternative to divided-antenna mode ATI, while our TanDEM-X results demonstrate the true potential of the ATI technique at near-optimum baselines.

Book ChapterDOI
03 Sep 2014
TL;DR: In the last few years the efficiency of secure multi-party computation (MPC) has increased by several orders of magnitude; however, this alone might not be enough if we want MPC protocols to be used in practice.
Abstract: In the last few years the efficiency of secure multi-party computation (MPC) has increased by several orders of magnitude. However, this alone might not be enough if we want MPC protocols to be used in practice. A crucial property that is needed in many applications is that everyone can check that a given (secure) computation was performed correctly – even in the extreme case where all the parties involved in the computation are corrupted, and even if the party who wants to verify the result was not participating. This is especially relevant in the clients-servers setting, where many clients provide input to a secure computation performed by a few servers. An obvious example of this is electronic voting, but also in many types of auctions one may want independent verification of the result. Traditionally, this is achieved by using non-interactive zero-knowledge proofs during the computation.

Journal ArticleDOI
TL;DR: The current state of computational genetic circuits is reviewed and artificial gene circuits that perform digital and analog computation are described and new directions for engineering biological circuits capable of computation are suggested.

Book ChapterDOI
TL;DR: The nature of distributed computation has long been a topic of interest in complex systems science, physics, artificial life and bioinformatics and has been postulated to be associated with the capability to support universal computation.
Abstract: The nature of distributed computation has long been a topic of interest in complex systems science, physics, artificial life and bioinformatics. In particular, emergent complex behavior has often been described from the perspective of computation within the system (Mitchell 1998b,a) and has been postulated to be associated with the capability to support universal computation (Langton 1990; Wolfram 1984c; Casti 1991).

Posted Content
TL;DR: It is demonstrated that GraphX achieves performance comparable to specialized graph computation systems while outperforming them in end-to-end graph pipelines, achieving a balance between expressiveness, performance, and ease of use.
Abstract: From social networks to language modeling, the growing scale and importance of graph data has driven the development of numerous new graph-parallel systems (e.g., Pregel, GraphLab). By restricting the computation that can be expressed and introducing new techniques to partition and distribute the graph, these systems can efficiently execute iterative graph algorithms orders of magnitude faster than more general data-parallel systems. However, the same restrictions that enable the performance gains also make it difficult to express many of the important stages in a typical graph-analytics pipeline: constructing the graph, modifying its structure, or expressing computation that spans multiple graphs. As a consequence, existing graph analytics pipelines compose graph-parallel and data-parallel systems using external storage systems, leading to extensive data movement and a complicated programming model. To address these challenges we introduce GraphX, a distributed graph computation framework that unifies graph-parallel and data-parallel computation. GraphX provides a small, core set of graph-parallel operators expressive enough to implement the Pregel and PowerGraph abstractions, yet simple enough to be cast in relational algebra. GraphX uses a collection of query optimization techniques such as automatic join rewrites to efficiently implement these graph-parallel operators. We evaluate GraphX on real-world graphs and workloads and demonstrate that GraphX achieves performance comparable to specialized graph computation systems, while outperforming them in end-to-end graph pipelines. Moreover, GraphX achieves a balance between expressiveness, performance, and ease of use.
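
GraphX itself is a Spark/Scala system; the plain-Python sketch below only illustrates the Pregel-style "think like a vertex" abstraction it exposes, using a few PageRank supersteps on a made-up graph.

```python
# Plain-Python sketch of the Pregel-style vertex-program abstraction:
# in each superstep every vertex sends messages along its out-edges and
# then updates its own state from the combined incoming messages.
edges = [(0, 1), (0, 2), (1, 2), (2, 0), (3, 2)]
vertices = {v for e in edges for v in e}
out_deg = {v: sum(1 for s, _ in edges if s == v) for v in vertices}

rank = {v: 1.0 for v in vertices}
for superstep in range(20):
    # sendMsg: each vertex sends rank / out_degree along its out-edges ...
    messages = {v: 0.0 for v in vertices}
    for src, dst in edges:
        messages[dst] += rank[src] / out_deg[src]
    # vprog: ... and combines incoming messages to update its own rank.
    rank = {v: 0.15 + 0.85 * messages[v] for v in vertices}

print({v: round(r, 3) for v, r in sorted(rank.items())})
```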

Journal ArticleDOI
Cong Li, Haibin Duan
TL;DR: The simulated annealing mechanism is adopted in the SAPIO algorithm to maximize the value of EPF, and a series of comparative experiments with standard Genetic Algorithm, Particle Swarm Optimization, Artificial Bee Colony Optimization and PIO algorithms demonstrates the robustness and effectiveness of the algorithm.
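
Since the TL;DR names only the mechanism, here is a generic sketch of the simulated-annealing acceptance rule for a maximization problem; the SAPIO specifics and the EPF objective are not reproduced.

```python
# Generic simulated-annealing acceptance rule for maximization: always
# accept improving moves, accept worsening moves with probability
# exp(delta / T). The toy objective below is arbitrary.
import math, random

def accept(delta, temperature):
    return delta >= 0 or random.random() < math.exp(delta / temperature)

f = lambda x: -(x - 3.0) ** 2            # toy objective to maximize
x, T = 0.0, 5.0
for step in range(2000):
    candidate = x + random.uniform(-0.5, 0.5)
    if accept(f(candidate) - f(x), T):
        x = candidate
    T = max(T * 0.997, 1e-3)             # geometric cooling with a floor
print(round(x, 2))                        # close to 3
```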

Book ChapterDOI
11 May 2014
TL;DR: Secure two-party computation (2PC) has been demonstrated to be feasible in practice, but all efficient general-computation 2PC protocols require multiple rounds of interaction between the two players.
Abstract: In recent years, secure two-party computation (2PC) has been demonstrated to be feasible in practice. However, all efficient general-computation 2PC protocols require multiple rounds of interaction between the two players. This property restricts 2PC to scenarios where both players can be simultaneously online and where communication latency is not an issue.

01 May 2014
TL;DR: In this paper, the current state of computational genetic circuits is reviewed, artificial gene circuits that perform digital and analog computation are described, and new directions for engineering biological circuits capable of computation are suggested.
Abstract: Biological computation is a major area of focus in synthetic biology because it has the potential to enable a wide range of applications. Synthetic biologists have applied engineering concepts to biological systems in order to construct progressively more complex gene circuits capable of processing information in living cells. Here, we review the current state of computational genetic circuits and describe artificial gene circuits that perform digital and analog computation. We then discuss recent progress in designing gene networks that exhibit memory, and how memory and computation have been integrated to yield more complex systems that can both process and record information. Finally, we suggest new directions for engineering biological circuits capable of computation.