
Showing papers on "Binary number published in 2016"


Journal ArticleDOI
TL;DR: At its core, the PyCBC search performs a matched-filter search for binary merger signals using a bank of gravitational-wave template waveforms, and is able to measure false-alarm rates as low as one per million years, as required for confident detection of signals.
Abstract: We describe the PyCBC search for gravitational waves from compact-object binary coalescences in advanced gravitational-wave detector data. The search was used in the first Advanced LIGO observing run and unambiguously identified two black hole binary mergers, GW150914 and GW151226. At its core, the PyCBC search performs a matched-filter search for binary merger signals using a bank of gravitational-wave template waveforms. We provide a complete description of the search pipeline including the steps used to mitigate the effects of noise transients in the data, identify candidate events and measure their statistical significance. The analysis is able to measure false-alarm rates as low as one per million years, required for confident detection of signals. Using data from initial LIGO’s sixth science run, we show that the new analysis reduces the background noise in the search, giving a 30% increase in sensitive volume for binary neutron star systems over previous searches.
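The core matched-filter step can be sketched in a few lines: correlate the data against a normalized template in the frequency domain and look for peaks in the resulting signal-to-noise time series. This is only a toy illustration under a white-noise assumption (the function name and parameters are invented for the sketch); the actual PyCBC pipeline whitens by the detector noise spectrum and applies additional signal-consistency tests.

```python
import numpy as np

def matched_filter_snr(data, template):
    """FFT-based matched filter under a white-noise assumption: returns
    the signal-to-noise ratio (SNR) time series of the data against a
    normalized template."""
    n = len(data)
    d = np.fft.rfft(data)
    h = np.fft.rfft(template, n)
    corr = np.fft.irfft(d * np.conj(h), n)   # circular cross-correlation
    sigma = np.sqrt(np.sum(template ** 2))   # template normalization
    return corr / sigma

rng = np.random.default_rng(0)
template = np.sin(2 * np.pi * 30 * np.linspace(0, 0.2, 200))
data = rng.normal(0.0, 1.0, 4096)
data[1000:1200] += 5 * template              # inject a signal at sample 1000
snr = matched_filter_snr(data, template)
print(int(np.argmax(snr)))                   # SNR peaks near the injection time
```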

453 citations


Posted Content
TL;DR: The proposed Bitwise Neural Network (BNN) is especially suitable for resource-constrained environments, since it replaces either floating or fixed-point arithmetic with significantly more efficient bitwise operations.
Abstract: Based on the assumption that there exists a neural network that efficiently represents a set of Boolean functions between all binary inputs and outputs, we propose a process for developing and deploying neural networks whose weight parameters, bias terms, input, and intermediate hidden layer output signals, are all binary-valued, and require only basic bit logic for the feedforward pass. The proposed Bitwise Neural Network (BNN) is especially suitable for resource-constrained environments, since it replaces either floating or fixed-point arithmetic with significantly more efficient bitwise operations. Hence, the BNN requires less spatial complexity, less memory bandwidth, and less power consumption in hardware. In order to design such networks, we propose to add a few training schemes, such as weight compression and noisy backpropagation, which result in a bitwise network that performs almost as well as its corresponding real-valued network. We test the proposed network on the MNIST dataset, represented using binary features, and show that BNNs result in competitive performance while offering dramatic computational savings.

210 citations


Journal ArticleDOI
TL;DR: Approximate radix-8 Booth multipliers are designed using the approximate recoding adder, with and without truncation of a number of less significant bits in the partial products.
Abstract: The Booth multiplier has been widely used for high-performance signed multiplication by encoding and thereby reducing the number of partial products. A multiplier using the radix-4 (or modified Booth) algorithm is very efficient due to the ease of partial product generation, whereas the radix-8 Booth multiplier is slow due to the complexity of generating the odd multiples of the multiplicand. In this paper, this issue is alleviated by the application of approximate designs. An approximate 2-bit adder is designed for calculating the sum of 1× and 2× of a binary number. This adder requires a small area, low power, and a short critical path delay. Subsequently, the 2-bit adder is employed to implement the less significant section of a recoding adder for generating the triple multiplicand with no carry propagation. In pursuit of a trade-off between accuracy and power consumption, two signed 16×16 bit approximate radix-8 Booth multipliers are designed using the approximate recoding adder, with and without truncation of a number of less significant bits in the partial products. The proposed approximate multipliers are faster and more power-efficient than the accurate Booth multiplier. The multiplier with 15-bit truncation achieves the best overall trade-off between hardware and accuracy when compared to other approximate Booth multiplier designs. Finally, the approximate multipliers are applied to the design of a low-pass FIR filter, where they show better performance than other approximate Booth multipliers.
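The role of the "hard" 3× multiple is easy to see in a reference model of radix-8 Booth recoding. The sketch below performs exact (not approximate) recoding; the function name and structure are illustrative only, while the paper's contribution is an approximate carry-free adder for the `triple` term:

```python
def booth_radix8_multiply(a, b, n=16):
    """Exact radix-8 (Booth) multiplication sketch: the multiplier b is
    scanned in overlapping 4-bit groups, each recoded to a digit in
    {-4,...,+4}.  Digits of magnitude 3 need the odd multiple 3*a,
    formed here exactly as 1x + 2x."""
    triple = a + (a << 1)              # the odd multiple 3x = 1x + 2x
    multiples = {0: 0, 1: a, 2: a << 1, 3: triple, 4: a << 2}
    product, prev = 0, 0
    for i in range(0, n, 3):           # Python's >> sign-extends, so the
        g3 = (b >> (i + 2)) & 1        # two's-complement groups fall out
        g2 = (b >> (i + 1)) & 1
        g1 = (b >> i) & 1
        digit = -4 * g3 + 2 * g2 + g1 + prev
        prev = g3
        pp = multiples[abs(digit)]
        product += (pp if digit >= 0 else -pp) << i
    return product

print(booth_radix8_multiply(-1234, 5678))   # → -7006652
```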

162 citations


Proceedings ArticleDOI
27 Jun 2016
TL;DR: This paper formulates high-order binary code learning as a multi-label classification problem by explicitly separating learning into two interleaved stages: binary codes are inferred to serve as labels, and the original image is mapped to compact binary codes via carefully designed deep convolutional neural networks, so that hashing function fitting can be solved by training binary CNN classifiers.
Abstract: In this paper, we aim to learn a mapping (or embedding) from images to a compact binary space in which Hamming distances correspond to a ranking measure for the image retrieval task. We make use of a triplet loss because this has been shown to be most effective for ranking problems. However, training in previous works can be prohibitively expensive due to the fact that optimization is directly performed on the triplet space, where the number of possible triplets for training is cubic in the number of training examples. To address this issue, we propose to formulate high-order binary codes learning as a multi-label classification problem by explicitly separating learning into two interleaved stages. To solve the first stage, we design a large-scale high-order binary codes inference algorithm to reduce the high-order objective to a standard binary quadratic problem such that graph cuts can be used to efficiently infer the binary codes which serve as the labels of each training datum. In the second stage we propose to map the original image to compact binary codes via carefully designed deep convolutional neural networks (CNNs) and the hashing function fitting can be solved by training binary CNN classifiers. An incremental/interleaved optimization strategy is proffered to ensure that these two steps are interactive with each other during training for better accuracy. We conduct experiments on several benchmark datasets, which demonstrate both improved training time (by as much as two orders of magnitude) as well as producing state-of-the-art hashing for various retrieval tasks.
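At retrieval time, the payoff of compact binary codes is that ranking by Hamming distance reduces to XOR and popcount. A minimal sketch with hypothetical 8-bit codes (real systems use longer codes and vectorized popcounts):

```python
import numpy as np

def hamming_rank(query, codes):
    """Rank database items by Hamming distance to a query code.  Codes
    are packed into uint8 arrays, so the distance is just XOR followed
    by a popcount -- the reason compact binary embeddings are fast."""
    dist = np.unpackbits(query ^ codes, axis=1).sum(axis=1)
    return np.argsort(dist, kind="stable"), dist

query = np.array([[0b10110100]], dtype=np.uint8)
codes = np.array([[0b10110100],      # identical      -> distance 0
                  [0b10110111],      # 2 bits differ  -> distance 2
                  [0b01001011]],     # complement     -> distance 8
                 dtype=np.uint8)
order, dist = hamming_rank(query, codes)
print(order.tolist(), dist.tolist())   # → [0, 1, 2] [0, 2, 8]
```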

140 citations


Journal ArticleDOI
TL;DR: Experimental results reveal that the proposed algorithm performs competitively and in some cases is superior to the existing algorithms.

57 citations


Journal ArticleDOI
TL;DR: This work has used a novel and entirely automatic evolutionary optimization algorithm written in the Python programming language to fit the two most important interaction parameters for more than 1100 binary mixtures, resulting in a reasonable total running time for this large set of binary mixtures.
Abstract: In the highest-accuracy mixture models available today, these being the multi-fluid Helmholtz-energy-explicit formulations, there are a number of binary interaction parameters that must be obtained through correlation or estimation schemes. These binary interaction parameters are used to shape the thermodynamic surface and yield higher-fidelity predictions of various thermodynamic properties including vapor-liquid equilibria and homogeneous p-v-T data, among others. In this work, we have used a novel and entirely automatic evolutionary optimization algorithm written in the Python programming language to fit the two most important interaction parameters for more than 1100 binary mixtures. This fitting algorithm can be run on multiple processors in parallel, resulting in a reasonable total running time for this large set of binary mixtures. For more than 830 of the binary pairs, the median absolute relative error in bubble-point pressure is less than 5%. The source code for the fitter is provided as supplementary material.
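The shape of such a fitting loop can be caricatured with a toy evolution strategy. Everything below is a stand-in: `bubble_pressure_model` is an invented surrogate, not the multi-fluid Helmholtz formulation, and the elitist (mu + lambda) loop is only shaped like an evolutionary optimizer, not the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)

def bubble_pressure_model(x, betaT, gammaT):
    """Invented surrogate: bubble-point pressure of a binary mixture vs.
    liquid composition x, with two 'interaction' parameters."""
    return 1.0 + betaT * x + gammaT * x * (1.0 - x)

# Synthetic 'experimental' data generated from known parameters (1.3, 0.7).
x_data = np.linspace(0.05, 0.95, 19)
p_data = bubble_pressure_model(x_data, 1.3, 0.7)

def cost(params):
    p = bubble_pressure_model(x_data, *params)
    return float(np.median(np.abs((p - p_data) / p_data)))  # median rel. error

# Minimal elitist (mu + lambda) evolution strategy over the two parameters.
pop = rng.uniform(0.0, 2.0, size=(20, 2))
for gen in range(80):
    sigma = 0.3 * 0.92 ** gen                    # shrinking mutation step
    children = pop + rng.normal(0.0, sigma, pop.shape)
    both = np.vstack([pop, children])
    both = both[np.argsort([cost(p) for p in both])]
    pop = both[:20]                              # keep the 20 fittest
best = pop[0]
print(best, cost(best))   # parameters near (1.3, 0.7), small median error
```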

55 citations


Journal ArticleDOI
20 Jul 2016
TL;DR: This approach extends the ideas of unum arithmetic introduced two years ago by breaking completely from the IEEE float-type format, resulting in fixed bit size values, fixed execution time, no exception values or “gradual underflow” issues, no wasted bit patterns, and no redundant representations.
Abstract: If we are willing to give up compatibility with IEEE 754 floats and design a number format with goals appropriate for 2016, we can achieve several goals simultaneously: Extremely high energy efficiency and information-per-bit, no penalty for decimal operations instead of binary, rigorous bounds on answers without the overly pessimistic bounds produced by interval methods, and unprecedented high speed up to some precision. This approach extends the ideas of unum arithmetic introduced two years ago by breaking completely from the IEEE float-type format, resulting in fixed bit size values, fixed execution time, no exception values or “gradual underflow” issues, no wasted bit patterns, and no redundant representations (like “negative zero”). As an example of the power of this format, a difficult 12-dimensional nonlinear robotic kinematics problem that has defied solvers to date is quickly solvable with absolute bounds. Also unlike interval methods, it becomes possible to operate on arbitrary disconnected subsets of the real number line with the same speed as operating on a simple bound.

45 citations


Journal ArticleDOI
TL;DR: This paper encodes one computer-generated standard 8-bit sinusoidal fringe pattern into multiple binary patterns (a sequence) using designed temporal-spatial binary encoding tactics, and develops an experimental system aimed at fast and accurate 3D measurement.
Abstract: Balancing accuracy and speed in the 3D surface measurement of objects is crucial in many important applications. Binary encoding patterns that exploit the high-speed image switching rate of digital micromirror device (DMD)-based projectors are candidates for fast, even high-speed, 3D measurement, but most current schemes prioritize measurement speed alone, which limits their scope of application. In this paper, we present a binary encoding method and develop an experimental system to address this situation. Our approach encodes one computer-generated standard 8-bit sinusoidal fringe pattern into a sequence of binary patterns using designed temporal-spatial binary encoding tactics. The binary pattern sequence is then projected in focus at high speed onto the surface of the tested object and captured by temporal-integration imaging to form one sinusoidal fringe image. Combining phase-shifting with a temporal phase unwrapping algorithm then yields fast and accurate 3D measurement. A systematic accuracy better than 0.08 mm is achievable. Measurement results for a mask and a palm confirm the feasibility of the approach.
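The underlying idea, that temporal integration of binary patterns reconstructs a gray-scale fringe, can be sketched with simple multi-level thresholding (an illustration of the principle only; the paper's temporal-spatial encoding tactic is more sophisticated):

```python
import numpy as np

# Target: one 8-bit-style sinusoidal fringe, one row shown for brevity.
x = np.arange(256)
fringe = 0.5 + 0.5 * np.sin(2 * np.pi * x / 64)     # values in [0, 1]

# Threshold the fringe against N evenly spaced levels: the time-average
# of the N binary patterns (what the camera's temporal integration sees)
# approximates the sinusoid to within 1/(2N).
N = 16
levels = (np.arange(N) + 0.5) / N
binary_patterns = (fringe[None, :] > levels[:, None]).astype(np.uint8)
reconstructed = binary_patterns.mean(axis=0)

print(float(np.abs(reconstructed - fringe).max()))  # quantization error <= 1/(2N)
```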

43 citations


Patent
05 Apr 2016
TL;DR: An apparatus is described comprising a plurality of arithmetic logic units, each having an accumulator and an integer arithmetic unit that receives and performs integer arithmetic operations on integer inputs and accumulates the integer results of a series of integer arithmetic operations into the accumulator as an integer accumulated value.
Abstract: An apparatus includes a plurality of arithmetic logic units each having an accumulator and an integer arithmetic unit that receives and performs integer arithmetic operations on integer inputs and accumulates integer results of a series of the integer arithmetic operations into the accumulator as an integer accumulated value. A register is programmable with an indication of a number of fractional bits of the integer accumulated values and an indication of a number of fractional bits of integer outputs. A first bit width of the accumulator is greater than twice a second bit width of the integer outputs. A plurality of adjustment units scale and saturate the first bit width integer accumulated values to generate the second bit width integer outputs based on the indications of the number of fractional bits of the integer accumulated values and outputs programmed into the register.
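The claimed adjustment step, shift by the programmed fractional-bit difference and saturate to the output width, can be sketched as follows (the function name and the round-to-nearest choice are our assumptions; the patent text does not fix them):

```python
def scale_and_saturate(acc, acc_frac_bits, out_frac_bits, out_width):
    """Sketch of the adjustment unit: shift a wide integer accumulator
    to the output's fractional-bit position (rounding to nearest), then
    saturate to the signed output range."""
    shift = acc_frac_bits - out_frac_bits
    if shift > 0:
        val = (acc + (1 << (shift - 1))) >> shift   # round to nearest
    else:
        val = acc << -shift
    lo, hi = -(1 << (out_width - 1)), (1 << (out_width - 1)) - 1
    return max(lo, min(hi, val))                    # saturate

# Accumulator value 1.75 with 16 fractional bits -> 16-bit Q7.8 output.
print(scale_and_saturate(int(1.75 * 2**16), 16, 8, 16))   # → 448  (= 1.75 * 2**8)
print(scale_and_saturate(10**9, 16, 8, 16))               # → 32767 (saturated)
```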

40 citations


Book ChapterDOI
TL;DR: This implementation and the following discussion provide a practitioner’s view of what might be accomplished in this framework; further research on model enhancements or different model structures is needed to improve its usefulness relative to current industrial practice.
Abstract: Portfolio optimization in a quantum computing paradigm is explored. The D-Wave adiabatic quantum computation optimization system is used to determine an optimal portfolio of stocks using binary selection. The stock returns, variances and covariances are modeled in the graph-theoretic maximum independent set (MIS) and weighted maximum independent set (WMIS) structures. These structures are mapped into the Ising model representation of the underlying D-Wave optimizer. The results show different stock selections over a range of predetermined risk thresholds and underlying models. This implementation and following discussion provides a practitioner’s view of what might be accomplished in this framework. The particular models used in the implementations have restricted appeal but do link the financial engineering domain to the quantum computing optimization domain. Further research on model enhancements or different model structures needs to be undertaken to improve its usefulness in comparison to the current industrial domain.
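The mapping from a selection problem to the optimizer's native form can be illustrated with a maximum-independent-set QUBO, brute-forced classically for a toy conflict graph (the penalty weight and graph are invented for the sketch; a real run would hand the quadratic coefficients to the annealer rather than enumerate states):

```python
import itertools

# Maximum independent set as a QUBO, the form an annealer minimizes:
# E(x) = -sum_i x_i + P * sum_{(i,j) in edges} x_i * x_j
# The penalty P discourages selecting both endpoints of an edge
# (here: two hypothetical 'stocks' that are too correlated).
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]   # a 5-cycle conflict graph
P = 2.0

def energy(x):
    return -sum(x) + P * sum(x[i] * x[j] for i, j in edges)

best = min(itertools.product([0, 1], repeat=5), key=energy)
print(best, -energy(best))   # → (0, 0, 1, 0, 1) 2.0  (independent set of size 2)
```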

39 citations


Journal ArticleDOI
TL;DR: In this article, a hardware architecture of scalar multiplication based on Montgomery ladder algorithm for binary elliptic curve cryptography is presented, where the point addition and point doubling are performed in parallel by only three pipelined digit-serial finite field multipliers.

Posted Content
TL;DR: In this article, it was shown that there are infinitely many prime numbers which do not have the digit $a_0$ in their decimal expansion, and the proof is based on obtaining suitable 'Type I' and 'Type II' arithmetic information for use in Harman's sieve to control the minor arcs.
Abstract: Let $a_0\in\{0,\dots,9\}$. We show there are infinitely many prime numbers which do not have the digit $a_0$ in their decimal expansion. The proof is an application of the Hardy-Littlewood circle method to a binary problem, and rests on obtaining suitable 'Type I' and 'Type II' arithmetic information for use in Harman's sieve to control the minor arcs. This is obtained by decorrelating Diophantine conditions, which dictate when the Fourier transform of the primes is large, from digital conditions, which dictate when the Fourier transform of numbers with restricted digits is large. These estimates rely on a combination of the geometry of numbers, the large sieve, and moment estimates obtained by comparison with a Markov process.
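The theorem concerns infinitude, but the objects themselves are easy to enumerate in a finite range, for example primes avoiding the digit 7:

```python
def is_prime(n):
    """Trial division -- fine for this small demonstration range."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

# As a finite illustration of the restricted-digit primes the theorem
# is about: primes below 10**5 whose decimal expansion avoids '7'.
a0 = "7"
restricted = [p for p in range(2, 10**5) if is_prime(p) and a0 not in str(p)]
print(len(restricted), restricted[:5])
```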

Journal ArticleDOI
TL;DR: A novel approach called PRE-Bin is proposed that automatically extracts binary-type fields of binary protocols based on fine-grained bits and outperforms the existing algorithms.
Abstract: Protocol message format extraction is a principal process of automatic network protocol reverse engineering when target protocol specifications are not available. However, binary protocol reverse engineering has been a new challenge in recent years for approaches that traditionally have dealt with text-based protocols rather than binary protocols. In this study, the authors propose a novel approach called PRE-Bin that automatically extracts binary-type fields of binary protocols based on fine-grained bits. First, a silhouette coefficient is introduced into the hierarchical clustering to confirm the optimal clustering number of binary frames. Second, a modified multiple sequence alignment algorithm, in which the matching process and back-tracing rules are redesigned, is also proposed to analyse binary field features. Finally, a Bayes decision model is invoked to describe field features and determine bit-oriented field boundaries. The maximum a posteriori criterion is leveraged to complete an optimal protocol format estimation of binary field boundaries. The authors implemented a prototype system of PRE-Bin to infer the specification of binary protocols from actual traffic traces. Experimental results indicate that PRE-Bin effectively extracts binary fields and outperforms the existing algorithms.

Journal ArticleDOI
TL;DR: In this paper, a large grid of binary evolution models simulated with the SeBa code is used to estimate the probability of a binary undergoing mass transfer being interrupted by an interloping single or more often a binary, over the course of the cluster lifetime.
Abstract: Binary mass transfer is at the forefront of some of the most exciting puzzles of modern astrophysics, including Type Ia supernovae, gamma-ray bursts, and the formation of most observed exotic stellar populations. Typically, the evolution is assumed to proceed in isolation, even in dense stellar environments such as star clusters. In this paper, we test the validity of this assumption via the analysis of a large grid of binary evolution models simulated with the SeBa code. For every binary, we calculate analytically the mean time until another single or binary star comes within the mean separation of the mass-transferring binary, and compare this time-scale to the mean time for stable mass transfer to occur. We then derive the probability for each respective binary to experience a direct dynamical interruption. The resulting probability distribution can be integrated to give an estimate for the fraction of binaries undergoing mass transfer that are expected to be disrupted as a function of the host cluster properties. We find that for lower-mass clusters ($\lesssim 10^4$ M$_{\odot}$), on the order of a few to a few tens of percent of binaries undergoing mass-transfer are expected to be interrupted by an interloping single, or more often binary, star, over the course of the cluster lifetime, whereas in more massive globular clusters we expect $\ll$ 1% to be interrupted. Furthermore, using numerical scattering experiments performed with the FEWBODY code, we show that the probability of interruption increases if perturbative fly-bys are considered as well, by a factor $\sim 2$.

Journal ArticleDOI
TL;DR: An improved method for the generation of mock data is introduced and it is shown that the ability to recover the signal parameters has improved by an order of magnitude when compared to the results of the first mock data and science challenge.
Abstract: The Einstein Telescope is a conceived third-generation gravitational-wave detector that is envisioned to be an order of magnitude more sensitive than Advanced LIGO, Virgo, and KAGRA, and would be able to detect gravitational-wave signals from the coalescence of compact objects with waveforms starting as low as 1 Hz. With this level of sensitivity, we expect to detect sources at cosmological distances. In this paper we introduce an improved method for the generation of mock data and analyze it with a new low-latency compact binary search pipeline called gstlal. We present the results from this analysis with a focus on low-frequency analysis of binary neutron stars. Despite compact binary coalescence signals lasting hours in the Einstein Telescope sensitivity band when starting at 5 Hz, we show that we are able to discern various overlapping signals from one another. We also determine the detection efficiency for each of the analysis runs conducted and show a proof-of-concept method for estimating the number of signals as a function of redshift. Finally, we show that our ability to recover the signal parameters has improved by an order of magnitude when compared to the results of the first mock data and science challenge. For binary neutron stars we are able to recover the total mass and chirp mass to within 0.5% and 0.05%, respectively.

Journal ArticleDOI
TL;DR: In this paper, the presence of a stellar companion as a possible mechanism of material depletion in the inner region of transition disks was investigated, which would rule out an ongoing planetary formation process in distances comparable to the binary separation.
Abstract: Using Non-Redundant Mask interferometry (NRM), we searched for binary companions to objects previously classified as Transitional Disks (TD). These objects are thought to be an evolutionary stage between an optically thick disk and an optically thin disk. We investigate the presence of a stellar companion as a possible mechanism of material depletion in the inner region of these disks, which would rule out an ongoing planetary formation process at distances comparable to the binary separation. For our detection limits, we implement a new method of completeness correction using a combination of randomly sampled binary orbits and Bayesian inference. The selected sample of 24 TDs belongs to the nearby and young star forming regions Ophiuchus ($\sim$ 130 pc), Taurus-Auriga ($\sim$ 140 pc) and IC348 ($\sim$ 220 pc). These regions are suitable for resolving faint stellar companions with moderate to high confidence levels at distances as low as 2 au from the central star. With a total of 31 objects, including 11 known TDs and circumbinary disks from the literature, we have found that a fraction of 0.38 $\pm$ 0.09 of the SEDs of these objects are likely due to the tidal interaction between a close binary and its disk, while the remaining SEDs are likely the result of other internal processes such as photoevaporation, grain growth, or planet-disk interactions. In addition, we detected four companions orbiting outside the area of the truncation radii, and we propose that the IR excesses of these systems are due to a disk orbiting the secondary companion.

Journal ArticleDOI
TL;DR: The experimental results on three real hyperspectral images show the better performance of BCFE compared to some popular and state-of-the-art feature extraction methods, in terms of both accuracy and computation time, in a small-sample-size situation.

Journal ArticleDOI
TL;DR: An optical absolute shaft encoder coding pattern is presented that provides error-free positional information for a high-speed rotating object and twice the resolution of conventional binary or Gray code patterns.
Abstract: In this paper, an optical absolute shaft encoder coding pattern is presented. Compared with the conventional binary or Gray code pattern resolution ($2^{n}$), the proposed code provides twice the resolution ($2^{n+1}$). Moreover, representing high-density positional information of a rotating object requires far fewer tracks. The pattern sequence is based on $n$-by-2 matrices of binary codes, where two consecutive columns of each code represent the high- and low-order bits. To retrieve the encoded position from these coding patterns, an extra pulse track was added outside the coded tracks; it supplies a synchronized clock pulse to a simple photodetector-based sequential logic circuit, the decoding unit of this encoder. Relying on these synchronized pulses, the proposed encoder is able to provide error-free positional information for a high-speed rotating object. Finally, to prove the concept, a prototype was designed and tested, and satisfactory performance was achieved.
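For reference, the conventional Gray code the paper compares against is the standard reflected binary code, in which consecutive positions differ in exactly one bit, which is why optical encoders use it to avoid multi-bit read errors at track transitions:

```python
def binary_to_gray(b):
    """Reflected binary (Gray) encoding: g = b XOR (b >> 1)."""
    return b ^ (b >> 1)

def gray_to_binary(g):
    """Inverse transform: XOR-fold the Gray code down to binary."""
    b = 0
    while g:
        b ^= g
        g >>= 1
    return b

codes = [binary_to_gray(i) for i in range(8)]
print([format(c, "03b") for c in codes])
# → ['000', '001', '011', '010', '110', '111', '101', '100']
```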

Journal ArticleDOI
TL;DR: A hybrid genetic particle swarm optimization (HGPSO) algorithm, which includes self-adaptive parameters and recombination and mutation operations originating from the genetic algorithm, is proposed to design the binary phase filter.
Abstract: Binary phase filters have been used to achieve an optical needle with a small lateral size, but designing such a filter remains a scientific challenge. In this paper, a hybrid genetic particle swarm optimization (HGPSO) algorithm is proposed to design the binary phase filter. The HGPSO algorithm includes self-adaptive parameters and the recombination and mutation operations that originate from the genetic algorithm. On benchmark tests, the HGPSO algorithm achieves global optimization and fast convergence. In an easy-to-perform optimization procedure, the iteration count of HGPSO is reduced to about a quarter of that of the original particle swarm optimization process. A multi-zone binary phase filter is designed using the HGPSO. A long depth of focus and high resolution are achieved simultaneously: the depth of focus and focal-spot transverse size are 6.05λ and 0.41λ, respectively. The proposed HGPSO can therefore be applied to the optimization of filters with multiple parameters.


Proceedings ArticleDOI
TL;DR: In this article, the authors proposed a method to learn visual features online for improving loop-closure detection and place recognition, based on bag-of-words frameworks, using a pair of matched features from two consecutive frames, such that the codeword has temporally-derived perspective invariance to camera motion.
Abstract: This paper proposes a simple yet effective approach to learn visual features online for improving loop-closure detection and place recognition, based on bag-of-words frameworks. The approach learns a codeword in bag-of-words model from a pair of matched features from two consecutive frames, such that the codeword has temporally-derived perspective invariance to camera motion. The learning algorithm is efficient: the binary descriptor is generated from the mean image patch, and the mask is learned based on discriminative projection by minimizing the intra-class distances among the learned feature and the two original features. A codeword for bag-of-words models is generated by packaging the learned descriptor and mask, with a masked Hamming distance defined to measure the distance between two codewords. The geometric properties of the learned codewords are then mathematically justified. In addition, hypothesis constraints are imposed through temporal consistency in matched codewords, which improves precision. The approach, integrated in an incremental bag-of-words system, is validated on multiple benchmark data sets and compared to state-of-the-art methods. Experiments demonstrate improved precision/recall outperforming state of the art with little loss in runtime.
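The masked Hamming distance between two codewords can be sketched directly on packed descriptors (the 8-bit values and mask below are invented for illustration; in the paper the mask is learned discriminatively and the descriptors are much longer):

```python
import numpy as np

def masked_hamming(desc_a, desc_b, mask):
    """Masked Hamming distance sketch: compare two packed binary
    descriptors only on the bits the mask marks as reliable (mask
    bit = 1), ignoring bits deemed unstable for this codeword."""
    return int(np.unpackbits((desc_a ^ desc_b) & mask).sum())

a    = np.array([0b10110100], dtype=np.uint8)
b    = np.array([0b10011100], dtype=np.uint8)
mask = np.array([0b11110000], dtype=np.uint8)   # trust the top 4 bits only
print(masked_hamming(a, b, mask))   # → 1 (the unmasked distance would be 2)
```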

Journal ArticleDOI
TL;DR: This work includes the study of the effect of the number of bits on the received power, minimum bit error rate, and maximum Q-factor in all-optical logic gates.
Abstract: All-optical logic gates are designed to extend the existing design to a higher number of bits, to reuse the same gate for multiple functions, and to add new gate designs. This type of gate is based on semiconductor optical amplifier (SOA) nonlinearities, since the SOA can provide a strong change of the refractive index together with high gain. The SOA is used within a Mach–Zehnder interferometer (MZI), forming an SOA-MZI structure which performs the logic gates XOR, NOR, OR, and XNOR. Two binary input data signals are used with different numbers of bits (4, 6, 8, and 16) at 10 Gbps. This work includes a study of the effect of the number of bits on the received power, minimum bit error rate, and maximum Q-factor.

Journal ArticleDOI
TL;DR: In this article, the authors theoretically investigate the dynamics of modulation instability in two-dimensional spin-orbit coupled Bose-Einstein condensates (BECs) for equal densities of pseudo-spin components.
Abstract: We theoretically investigate the dynamics of modulation instability (MI) in two-dimensional spin-orbit coupled Bose-Einstein condensates (BECs). The analysis is performed for equal densities of the pseudo-spin components. Different combinations of the signs of the intra- and inter-component interaction strengths are considered, with a particular emphasis on repulsive interactions. We observe that the unstable modulation builds up from originally miscible condensates, depending on the combination of the signs of the intra- and inter-component interaction strengths. Repulsive intra- and inter-component interactions admit instability, and the MI immiscibility condition is no longer significant. The influence of interaction parameters such as spin-orbit and Rabi coupling on MI is also investigated. The spin-orbit coupling (SOC) inevitably contributes to instability regardless of the nature of the interaction. In the case of attractive interaction, SOC manifests itself by enhancing the MI. Thus, a comprehensive study of MI in two-dimensional spin-orbit coupled binary BECs of pseudo-spin components is presented.

Journal ArticleDOI
TL;DR: In this paper, the hierarchical superstructures assembled by binary mixed homopolymer-grafted nanoparticles are investigated by using a self-consistent field theory (SCFT).
Abstract: Hierarchical superstructures assembled from binary mixed homopolymer-grafted nanoparticles are investigated using self-consistent field theory (SCFT). Our results demonstrate that grafting mixed homopolymer brushes provides an effective way to program the spatial lattice arrangement of the nanoparticles. For polymer-grafted nanoparticles with a specific interaction parameter and total grafting density, the unusual non-close-packed simple cubic (SC) crystal lattice is obtained at small spherical core/polymer size ratios (R/(Nb) < 1); when R/(Nb) > 1, the nanoparticle arrangement transforms into a body-centered cubic (BCC) crystal lattice. Meanwhile, some unconventional microphases are formed in the polymer matrix, such as the tetragonal cylinder and simple cubic sphere phases. Furthermore, two-dimensional (2D) model calculations reveal that the binary hairy nanoparticles prefer to arrange into a lattice in which they can maintain the free energy-minimizing morphology as...

Journal ArticleDOI
TL;DR: Two successive methods, a modified iterative Fresnel algorithm for designing the binary phase pattern and intensity addition for speckle reduction, are proposed to improve the reconstructed image quality in a 3D display system using a binary phase modulator.
Abstract: To improve the reconstructed image quality in a 3D display system using a binary phase modulator, we propose two successive methods: a modified iterative Fresnel algorithm for designing the binary phase pattern, and intensity addition for speckle reduction. Numerical and experimental results show the effectiveness of the proposed methods as the number of iterations used to optimize the binary phase distribution and the number of intensity additions increase.

Journal ArticleDOI
TL;DR: A novel implementation of VQ using stochastic circuits is proposed and its performance is evaluated against conventional binary designs, and it outperforms the conventional binary design in terms of TPA for a reduced compression quality.
Abstract: Vector quantization (VQ) is a general data compression technique that has a scalable implementation complexity and potentially a high compression ratio. In this paper, a novel implementation of VQ using stochastic circuits is proposed and its performance is evaluated against conventional binary designs. The stochastic and binary designs are compared for the same compression quality, and the circuits are synthesized for an industrial 28-nm cell library. The effects of varying the sequence length of the stochastic representation are studied with respect to throughput per area (TPA) and energy per operation (EPO). The stochastic implementations are shown to have higher EPOs than the conventional binary implementations due to longer latencies. When a shorter encoding sequence with 512 bits is used to obtain a lower quality compression measured by the $L^{1}$ -norm, squared $L^{2}$ -norm, and third-law errors, the TPA ranges from 1.16 to 2.56 times that of the binary implementation with the same compression quality. Thus, although the stochastic implementation underperforms for a high compression quality, it outperforms the conventional binary design in terms of TPA for a reduced compression quality. By exploiting the progressive precision feature of a stochastic circuit, a readily scalable processing quality can be attained by halting the computation after different numbers of clock cycles.
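The progressive-precision property the paper exploits is easy to demonstrate: in unipolar stochastic computing a value is the fraction of 1s in a random bitstream, multiplication is a single AND gate, and accuracy improves with sequence length (the values and lengths below are illustrative, not the paper's circuits):

```python
import numpy as np

rng = np.random.default_rng(42)

def to_bitstream(p, length):
    """Unipolar stochastic encoding: a value p in [0, 1] becomes a
    random bitstream whose fraction of 1s is p."""
    return (rng.random(length) < p).astype(np.uint8)

# Multiplying two stochastic numbers is a single AND gate; the estimate
# of 0.75 * 0.5 = 0.375 sharpens as the sequence length grows.
for length in (512, 8192):
    a, b = to_bitstream(0.75, length), to_bitstream(0.5, length)
    est = float(np.mean(a & b))
    print(length, round(est, 3))   # estimates of 0.375
```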

Journal ArticleDOI
TL;DR: In this article, the constituent binary B-Mo system is re-modelled in order to reproduce the genuine homogeneity ranges of the molybdenum borides, and the elaborated thermodynamic description is then applied to calculate selected phase equilibria, so as to provide a comparison between calculated and experimental results.

Posted Content
TL;DR: A new characterization of the minimum interaction rate needed for achieving the maximum key rate (MIMK) is given, and the conjecture by Tyagi and Narayan regarding the MIMK for binary sources is resolved, and a new conjecture for binary symmetric sources is proposed.
Abstract: The basic two-terminal common randomness (CR) and key generation models are considered, where the communication between the terminals may be limited, and in particular may not be enough to achieve the maximum CR/key rate. We introduce general notions of $XY$-absolutely continuity and $XY$-concave function, and characterize the first order CR/key-communication tradeoff in terms of the evaluation of the $XY$-concave envelope of a functional defined on a set of distributions, which is simpler than the multi-letter characterization. Two extreme cases are given special attention. First, in the regime of very small communication rates, the CR bits per interaction bit (CRBIB) and key bits per interaction bit (KBIB) are expressed with a new "symmetrical strong data processing constant", defined as the minimum of a parameter such that a certain information-theoretic functional touches its $XY$-concave envelope at a given source distribution. We also provide a computationally friendly strong converse bound for CRBIB and a similar (but not necessarily strong) one for KBIB in terms of the supremum of the maximal correlation coefficient over a set of distributions. The proof uses hypercontractivity and properties of the R\'enyi divergence. A criterion for the tightness of the bound is given with applications to binary symmetric sources. Second, a new characterization of the minimum interaction rate needed for achieving the maximum key rate (MIMK) is given, and we resolve a conjecture by Tyagi and Narayan \cite{tyagi2013common} regarding the MIMK for binary sources. We also propose a new conjecture for binary symmetric sources.
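The converse bound above is stated via the maximal correlation coefficient. As a small, hedged illustration (the function name is mine, and this is Witsenhausen's classical SVD characterization rather than anything specific to this paper), the Hirschfeld-Gebelein-Renyi maximal correlation of a finite joint pmf is the second-largest singular value of a normalized matrix $Q$; for a doubly symmetric binary source with crossover probability $\varepsilon$ it equals $1 - 2\varepsilon$:

```python
import numpy as np

def maximal_correlation(P):
    """HGR maximal correlation of a joint pmf P, computed as the
    second-largest singular value of
    Q[x, y] = P[x, y] / sqrt(P_X[x] * P_Y[y])
    (Witsenhausen's characterization)."""
    px = P.sum(axis=1)
    py = P.sum(axis=0)
    Q = P / np.sqrt(np.outer(px, py))
    s = np.linalg.svd(Q, compute_uv=False)
    return s[1]  # s[0] == 1 always (the constant functions)

# Doubly symmetric binary source with crossover eps: rho = 1 - 2*eps.
eps = 0.1
P = np.array([[(1 - eps) / 2, eps / 2],
              [eps / 2, (1 - eps) / 2]])
print(maximal_correlation(P))  # approx 0.8 for eps = 0.1
```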

Posted Content
TL;DR: It is found that Neural GPUs that correctly generalize to arbitrarily long numbers still fail to compute the correct answer on highly-symmetric, atypical inputs: for example, a Neural GPU that achieves near-perfect generalization on decimal multiplication of up to 100-digit long numbers can fail on $000000\dots002 \times 000000\dots002$.
Abstract: The Neural GPU is a recent model that can learn algorithms such as multi-digit binary addition and binary multiplication in a way that generalizes to inputs of arbitrary length. We show that there are two simple ways of improving the performance of the Neural GPU: by carefully designing a curriculum, and by increasing model size. The latter requires a memory efficient implementation, as a naive implementation of the Neural GPU is memory intensive. We find that these techniques increase the set of algorithmic problems that can be solved by the Neural GPU: we have been able to learn to perform all the arithmetic operations (and generalize to arbitrarily long numbers) when the arguments are given in the decimal representation (which, surprisingly, has not been possible before). We have also been able to train the Neural GPU to evaluate long arithmetic expressions with multiple operands that require respecting the precedence order of the operands, although these have succeeded only in their binary representation, and not with perfect accuracy. In addition, we gain insight into the Neural GPU by investigating its failure modes. We find that Neural GPUs that correctly generalize to arbitrarily long numbers still fail to compute the correct answer on highly-symmetric, atypical inputs: for example, a Neural GPU that achieves near-perfect generalization on decimal multiplication of up to 100-digit long numbers can fail on $000000\dots002 \times 000000\dots002$ while succeeding at $2 \times 2$. These failure modes are reminiscent of adversarial examples.
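The target task here is ordinary grade-school arithmetic applied digit by digit with a carry, which is what the Neural GPU must internalize to generalize to arbitrary lengths. A plain reference implementation of multi-digit binary addition (names and conventions are mine; this is the ground-truth algorithm, not the learned model) makes the length-generalization requirement concrete:

```python
def binary_add(a_bits, b_bits):
    """Grade-school addition on little-endian bit lists: the
    carry-propagating procedure a length-generalizing model
    must effectively learn."""
    n = max(len(a_bits), len(b_bits))
    a = a_bits + [0] * (n - len(a_bits))
    b = b_bits + [0] * (n - len(b_bits))
    out, carry = [], 0
    for x, y in zip(a, b):
        s = x + y + carry
        out.append(s & 1)   # sum bit
        carry = s >> 1      # carry into the next position
    if carry:
        out.append(carry)
    return out

def to_bits(n):
    """int -> little-endian bit list."""
    return [int(c) for c in bin(n)[2:][::-1]] if n else [0]

def from_bits(bits):
    """little-endian bit list -> int."""
    return sum(b << i for i, b in enumerate(bits))

print(from_bits(binary_add(to_bits(13), to_bits(7))))  # -> 20
```

Note that the algorithm is identical for 2-bit and 200-bit operands; the failure modes described above mean the learned model does not always apply it uniformly, even when it appears to generalize.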

Journal ArticleDOI
TL;DR: In this paper, a large grid of binary evolution models simulated with the SeBa code is used to estimate the probability that a binary undergoing mass transfer is interrupted by an interloping single star or, more often, a binary, over the course of the cluster lifetime.
Abstract: Binary mass transfer is at the forefront of some of the most exciting puzzles of modern astrophysics, including Type Ia supernovae, gamma-ray bursts, and the formation of most observed exotic stellar populations. Typically, the evolution is assumed to proceed in isolation, even in dense stellar environments such as star clusters. In this paper, we test the validity of this assumption via the analysis of a large grid of binary evolution models simulated with the SeBa code. For every binary, we calculate analytically the mean time until another single or binary star comes within the mean separation of the mass-transferring binary, and compare this time-scale to the mean time for stable mass transfer to occur. We then derive the probability for each respective binary to experience a direct dynamical interruption. The resulting probability distribution can be integrated to give an estimate for the fraction of binaries undergoing mass transfer that are expected to be disrupted as a function of the host cluster properties. We find that for lower-mass clusters ($\lesssim 10^4$ M$_{\odot}$), on the order of a few to a few tens of percent of binaries undergoing mass transfer are expected to be interrupted by an interloping single star or, more often, a binary, over the course of the cluster lifetime, whereas in more massive globular clusters we expect $\ll$ 1% to be interrupted. Furthermore, using numerical scattering experiments performed with the FEWBODY code, we show that the probability of interruption increases if perturbative fly-bys are considered as well, by a factor $\sim 2$.
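The key quantity is the mean time until an interloper passes within the binary separation. An order-of-magnitude sketch of that timescale (my own units and function name, using the textbook encounter rate $1/t \approx n \Sigma v$ with a gravitationally focused cross-section, not the paper's exact analytic expressions) shows the expected scaling with cluster density and binary separation:

```python
import math

# Physical constants and unit conversions (cgs)
G = 6.674e-8       # cm^3 g^-1 s^-2
MSUN = 1.989e33    # g
AU = 1.496e13      # cm
PC = 3.086e18      # cm
YR = 3.156e7       # s

def encounter_time(n_pc3, v_kms, a_au, m_tot_msun):
    """Mean time for a passing star to come within the binary
    separation a, from the rate 1/t = n * Sigma * v with a
    gravitationally focused cross-section
    Sigma = pi a^2 (1 + 2 G M / (a v^2)).  Returns years."""
    n = n_pc3 / PC**3          # number density, cm^-3
    v = v_kms * 1e5            # velocity dispersion, cm/s
    a = a_au * AU              # closest-approach distance, cm
    m = m_tot_msun * MSUN      # total mass of binary + interloper, g
    sigma = math.pi * a**2 * (1.0 + 2.0 * G * m / (a * v**2))
    return 1.0 / (n * sigma * v) / YR

# Example: a 1 AU, ~2 Msun binary in a modest cluster
# (n = 1e3 pc^-3, v = 10 km/s) -- tens of Gyr, so interruption is
# rare per binary and becomes likely only for wide separations,
# dense environments, or long mass-transfer phases.
print(f"{encounter_time(1e3, 10.0, 1.0, 2.0):.2e} yr")
```

The scalings match the abstract's findings qualitatively: denser clusters and wider (e.g. mass-transferring giant) binaries have shorter encounter times and hence higher interruption probabilities.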