
Showing papers on "Binary number published in 2018"


Journal ArticleDOI
TL;DR: In this paper, the authors consider isolated binary evolution and explore how accurately the physical model can be constrained with such observations by applying the Fisher information matrix to the merging black hole population simulated with the rapid binary-population synthesis code COMPAS.
Abstract: The properties of the population of merging binary black holes encode some of the uncertain physics underlying the evolution of massive stars in binaries. The binary black hole merger rate and chirp-mass distribution are being measured by ground-based gravitational-wave detectors. We consider isolated binary evolution and explore how accurately the physical model can be constrained with such observations by applying the Fisher information matrix to the merging black hole population simulated with the rapid binary-population synthesis code COMPAS. We investigate variations in four COMPAS parameters: common-envelope efficiency, kick-velocity dispersion, and mass-loss rates during the luminous blue variable and Wolf–Rayet stellar-evolutionary phases. We find that ∼1000 observations would constrain these model parameters to a fractional accuracy of a few per cent. Given the empirically determined binary black hole merger rate, we can expect gravitational-wave observations alone to place strong constraints on the physics of stellar and binary evolution within a few years. Our approach can be extended to use other observational data sets; combining observations at different evolutionary stages will lead to a better understanding of stellar and binary physics.

133 citations
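The 1/√N scaling behind the "∼1000 observations" estimate rests on Fisher information adding linearly over independent events. Below is a minimal sketch of that forecasting logic; the matrix entries are invented placeholders (the real ones would come from derivatives of the COMPAS-predicted merger rate and chirp-mass distribution with respect to the four parameters).

```python
import numpy as np

# Hypothetical per-event Fisher matrix for four population parameters
# (e.g. common-envelope efficiency, kick dispersion, two mass-loss rates).
F_single = np.array([
    [4.0, 0.5, 0.2, 0.1],
    [0.5, 3.0, 0.3, 0.2],
    [0.2, 0.3, 2.5, 0.4],
    [0.1, 0.2, 0.4, 2.0],
])

def forecast_uncertainties(fisher_single, n_events):
    """Fisher information adds over independent events, so the covariance
    (its inverse) shrinks as 1/N and the one-sigma errors as 1/sqrt(N)."""
    cov = np.linalg.inv(n_events * fisher_single)
    return np.sqrt(np.diag(cov))

print(forecast_uncertainties(F_single, 1000))  # percent-level sigmas
```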


Journal ArticleDOI
TL;DR: Lamberts et al. present the first combination of a high-resolution cosmological simulation of a Milky Way-mass galaxy with a binary population synthesis model; the simulation provides a cosmologically realistic star formation history for the galaxy, its stellar halo, and satellites.
Lamberts, A; Garrison-Kimmel, S; Hopkins, PF; Quataert, E; Bullock, JS; Faucher-Giguere, CA; Wetzel, A; Keres, D; Drango, K; Sanderson, RE
Abstract: Binary black holes are the primary endpoint of massive stars. Their properties provide a unique opportunity to constrain binary evolution, which remains poorly understood. We predict the main properties of binary black holes and their merger products in and around the Milky Way. We present the first combination of a high-resolution cosmological simulation of a Milky Way-mass galaxy with a binary population synthesis model. The hydrodynamic simulation, taken from the FIRE project, provides a cosmologically realistic star formation history for the galaxy, its stellar halo, and satellites. During post-processing, we apply a metallicity-dependent evolutionary model to the star particles to produce individual binary black holes. We find that 7 × 10⁵ binary black holes have merged in the model Milky Way, and 1.2 × 10⁶ binaries are still present, with a mean mass of 28 M⊙. Because the black hole progenitors are strongly biased towards low-metallicity stars, half reside in the stellar halo and satellites, and a third were formed outside the main galaxy. The numbers and mass distribution of the merged systems are broadly compatible with the LIGO/Virgo detections. Our simplified binary evolution models predict that LISA will detect more than 20 binary black holes, but that electromagnetic observations will be challenging. Our method will allow for constraints on the evolution of massive binaries based on comparisons between observations of compact objects and the predictions of varying binary evolution models. We provide online data of our star formation model and binary black hole distribution.

93 citations


Journal ArticleDOI
TL;DR: Results prove the ability of the proposed HBBEPSO algorithm to search the feature space for optimal feature combinations.

89 citations


Journal ArticleDOI
TL;DR: The PyCBC Inference modules implement Bayesian inference for compact-object binary mergers; the posterior parameter distributions obtained using the new code agree well with the published estimates for binary black holes in the first LIGO-Virgo observing run.
Abstract: We introduce new modules in the open-source PyCBC gravitational-wave astronomy toolkit that implement Bayesian inference for compact-object binary mergers. We review the Bayesian inference methods implemented and describe the structure of the modules. We demonstrate that the PyCBC Inference modules produce unbiased estimates of the parameters of a simulated population of binary black hole mergers. We show that the posterior parameter distributions obtained using our new code agree well with the published estimates for binary black holes in the first LIGO-Virgo observing run.

65 citations
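A minimal sketch of the Metropolis-Hastings machinery at the heart of such stochastic samplers, with a Gaussian stand-in for the waveform likelihood; this is a generic toy, not the actual PyCBC Inference API.

```python
import numpy as np

def log_posterior(theta):
    # Placeholder: log prior + log likelihood of the waveform model.
    # A real analysis compares template waveforms against strain data.
    return -0.5 * np.sum((theta - 30.0) ** 2 / 5.0 ** 2)

def metropolis_hastings(log_post, theta0, n_steps=10000, step=1.0, seed=0):
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    lp = log_post(theta)
    chain = []
    for _ in range(n_steps):
        proposal = theta + step * rng.standard_normal(theta.shape)
        lp_new = log_post(proposal)
        if np.log(rng.random()) < lp_new - lp:  # accept with min(1, ratio)
            theta, lp = proposal, lp_new
        chain.append(theta.copy())
    return np.array(chain)

# e.g. sampling two component masses around 30 solar masses
samples = metropolis_hastings(log_posterior, theta0=[25.0, 35.0])
```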


Posted Content
TL;DR: The Computational Relativity (CoRe) database contains 367 waveforms from numerical simulations of binary neutron star mergers that are consistent with general relativity and employ constraint-satisfying initial data in hydrodynamical equilibrium.
Abstract: We present the Computational Relativity (CoRe) collaboration's public database of gravitational waveforms from binary neutron star mergers. The database currently contains 367 waveforms from numerical simulations that are consistent with general relativity and that employ constraint-satisfying initial data in hydrodynamical equilibrium. It spans 164 physically distinct configurations with different binary parameters (total binary mass, mass ratio, initial separation, eccentricity, and stars' spins) and simulated physics. Waveforms computed at multiple grid resolutions and extraction radii are provided for controlling numerical uncertainties. We also release an exemplary set of 18 hybrid waveforms constructed with a state-of-the-art effective-one-body model spanning the frequency band of advanced gravitational-wave detectors. We outline present and future applications of the database to gravitational-wave astronomy.

53 citations


Posted Content
TL;DR: SAFE is a self-attentive neural-network-based approach to the binary similarity problem which works directly on disassembled binary functions, does not require manual feature extraction, is computationally more efficient than existing solutions, and is more general, as it works on stripped binaries and on multiple architectures.
Abstract: The binary similarity problem consists in determining whether two functions are similar considering only their compiled form. Advanced techniques for binary similarity have recently gained momentum as they can be applied in several fields, such as copyright disputes, malware analysis, and vulnerability detection, and thus have an immediate practical impact. Current solutions compare functions by first transforming their binary code into multi-dimensional vector representations (embeddings) and then comparing the vectors through simple and efficient geometric operations. However, embeddings are usually derived from binary code using manual feature extraction, which may fail to consider important function characteristics, or may consider features that are not important for the binary similarity problem. In this paper we propose SAFE, a novel architecture for the embedding of functions based on a self-attentive neural network. SAFE works directly on disassembled binary functions, does not require manual feature extraction, is computationally more efficient than existing solutions (i.e., it does not incur the computational overhead of building or manipulating control flow graphs), and is more general, as it works on stripped binaries and on multiple architectures. We report the results of a quantitative and qualitative analysis that show how SAFE provides a noticeable performance improvement with respect to previous solutions. Furthermore, we show how clusters of our embedding vectors are closely related to the semantics of the implemented algorithms, paving the way for further interesting applications (e.g., semantic-based binary function search).

52 citations
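The comparison step described above (embed once, then compare with cheap geometry) can be sketched as follows. The embed stub below is a hypothetical bag-of-tokens stand-in for SAFE's self-attentive encoder; only the cosine-similarity comparison pattern is the point.

```python
import numpy as np

def embed(disassembled_function):
    """Stub for the encoder: SAFE maps a sequence of disassembled
    instructions to a fixed-size vector; here we just hash tokens
    into a bag-of-words vector for illustration."""
    v = np.zeros(128)
    for token in disassembled_function.split():
        v[hash(token) % 128] += 1.0
    return v

def similarity(func_a, func_b):
    """Cosine similarity between two function embeddings."""
    a, b = embed(func_a), embed(func_b)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

print(similarity("mov eax ebx ; add eax 1", "mov ecx ebx ; add ecx 1"))
```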


Journal ArticleDOI
TL;DR: A practical, compact, and more quantum-resistant variant of the BLISS Ideal Lattice Signature Scheme is developed, and it is demonstrated that arithmetic decoding from a uniform source to a target distribution is also an optimal non-uniform sampling method, in the sense that a minimal number of true random bits is required.
Abstract: We describe new arithmetic coding techniques and side-channel blinding countermeasures for lattice-based cryptography. Using these techniques, we develop a practical, compact, and more quantum-resistant variant of the BLISS Ideal Lattice Signature Scheme. We first show how the BLISS parameters and hash-based random oracle can be modified to be more secure against quantum pre-image attacks while optimizing signature size. Arithmetic coding offers an information-theoretically optimal compression for stationary and memoryless sources, such as the discrete Gaussian distributions often present in lattice-based cryptography. We show that this technique gives better signature sizes than the previously proposed advanced Huffman-based signature compressors. We further demonstrate that arithmetic decoding from a uniform source to a target distribution is also an optimal non-uniform sampling method in the sense that a minimal number of true random bits is required. Performance of this new Binary Arithmetic Coding sampler is comparable to other practical samplers. The same code, tables, or circuitry can be utilized for both tasks, eliminating the need for separate sampling and compression components. We then describe simple randomized blinding techniques that can be applied to anti-cyclic polynomial multiplication to mask timing and power-consumption side-channels in ring arithmetic. We further show that the Gaussian sampling process can also be blinded by a split-and-permute technique as an effective countermeasure against side-channel attacks.

48 citations
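The decoding-as-sampling idea can be illustrated with a toy binary arithmetic decoder: feed uniform random bits in, and the emitted symbols follow the target distribution, consuming close to its entropy in bits on average. A floating-point sketch for clarity; the paper's constant-time discrete-Gaussian sampler is far more careful.

```python
import random

def arithmetic_sample(probs):
    """Sample one symbol by arithmetic-decoding uniform random bits:
    narrow [lo, hi) one bit at a time until it fits entirely inside a
    single symbol's CDF bin. Returns (symbol index, bits consumed)."""
    cdf = [0.0]
    for p in probs:
        cdf.append(cdf[-1] + p)
    lo, hi, used = 0.0, 1.0, 0
    while True:
        for i in range(len(probs)):
            if cdf[i] <= lo and hi <= cdf[i + 1]:
                return i, used
        mid = (lo + hi) / 2.0
        if random.getrandbits(1):  # consume one true random bit
            lo = mid
        else:
            hi = mid
        used += 1

counts = [0, 0, 0]
for _ in range(10000):
    s, _ = arithmetic_sample([0.5, 0.25, 0.25])
    counts[s] += 1
print(counts)  # roughly 5000 / 2500 / 2500
```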


Journal ArticleDOI
TL;DR: The formalism of a previous paper is extended to include the effects of flybys and instantaneous perturbations such as supernovae on the long-term secular evolution of hierarchical multiple systems with an arbitrary number of bodies and hierarchy, provided that the system is composed of nested binary orbits.
Abstract: We extend the formalism of a previous paper to include the effects of flybys and instantaneous perturbations such as supernovae on the long-term secular evolution of hierarchical multiple systems with an arbitrary number of bodies and hierarchy, provided that the system is composed of nested binary orbits. To model secular encounters, we expand the Hamiltonian in terms of the ratio of the separation of the perturber with respect to the barycentre of the multiple system, to the separation of the widest orbit. Subsequently, we integrate over the perturber orbit numerically or analytically. We verify our method for secular encounters, and illustrate it with an example. Furthermore, we describe a method to compute instantaneous orbital changes to multiple systems, such as asymmetric supernovae and impulsive encounters. The secular code, with implementation of the extensions described in this paper, is publicly available within AMUSE, and we provide a number of simple example scripts to illustrate its usage for secular and impulsive encounters, and asymmetric supernovae. The extensions presented in this paper are a next step toward efficiently modeling the evolution of complex multiple systems embedded in star clusters.

45 citations


Proceedings ArticleDOI
18 Apr 2018
TL;DR: XNORBIN as mentioned in this paper is a flexible accelerator for binary CNNs with computation tightly coupled to memory for aggressive data reuse supporting even non-trivial network topologies with large feature map volumes.
Abstract: Deploying state-of-the-art CNNs requires power-hungry processors and off-chip memory. This precludes the implementation of CNNs in low-power embedded systems. Recent research shows CNNs sustain extreme quantization, binarizing their weights and intermediate feature maps, thereby saving 8–32x memory and collapsing energy-intensive sum-of-products into XNOR-and-popcount operations. We present XNORBIN, a flexible accelerator for binary CNNs with computation tightly coupled to memory for aggressive data reuse supporting even non-trivial network topologies with large feature map volumes. Implemented in UMC 65nm technology XNORBIN achieves an energy efficiency of 95 TOp/s/W and an area efficiency of 2.0TOp/s/MGE at 0.8 V.

41 citations
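The XNOR-and-popcount collapse mentioned above is easy to see at the scalar level. A minimal sketch for {-1, +1} vectors packed into machine words, under the assumed encoding bit 1 ↔ +1:

```python
def xnor_popcount_dot(a_bits, b_bits, n):
    """Dot product of two {-1,+1} vectors packed as n-bit integers
    (bit = 1 encodes +1, bit = 0 encodes -1). XNOR marks positions where
    the signs agree; the dot product is matches minus mismatches."""
    mask = (1 << n) - 1
    matches = bin(~(a_bits ^ b_bits) & mask).count("1")
    return 2 * matches - n

# (+1,-1,+1,+1) vs (+1,+1,+1,-1): agreement at positions 0 and 2 -> dot = 0
print(xnor_popcount_dot(0b1101, 0b0111, 4))
```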


Book ChapterDOI
08 Sep 2018
TL;DR: This work introduces a novel approach named Highly-economized Scalable Image Clustering (HSIC) that radically surpasses conventional image clustering methods via binary compression, intuitively unifying binary representation learning and efficient binary cluster structure learning in a joint framework.
Abstract: How to economically cluster large-scale multi-view images is a long-standing problem in computer vision. To tackle this challenge, we introduce a novel approach named Highly-economized Scalable Image Clustering (HSIC) that radically surpasses conventional image clustering methods via binary compression. We intuitively unify binary representation learning and efficient binary cluster structure learning into a joint framework. In particular, common binary representations are learned by exploiting both sharable and individual information across multiple views to capture their underlying correlations. Meanwhile, cluster assignment with robust binary centroids is also performed via effective discrete optimization under an ℓ2,1-norm constraint. By this means, heavy continuous-valued Euclidean distance computations can be successfully reduced to efficient binary XOR operations during the clustering procedure. To the best of our knowledge, HSIC is the first binary clustering work specifically designed for scalable multi-view image clustering. Extensive experimental results on four large-scale image datasets show that HSIC consistently outperforms the state-of-the-art approaches, whilst significantly reducing computational time and memory footprint.

41 citations
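The payoff of binary centroids, Euclidean distances turning into XOR/popcount Hamming distances, can be sketched as one k-means-style step. This toy uses 0/1 arrays and majority-vote centroid updates; HSIC's joint discrete optimization is considerably richer.

```python
import numpy as np

def assign_and_update(codes, centroids):
    """One k-means-style step in Hamming space: assign each binary code
    to the nearest binary centroid (XOR-style mismatch count), then
    refresh each centroid by a bitwise majority vote."""
    # codes: (n, b) 0/1 array, centroids: (k, b) 0/1 array
    dists = (codes[:, None, :] != centroids[None, :, :]).sum(axis=2)
    labels = dists.argmin(axis=1)
    new_centroids = centroids.copy()
    for j in range(len(centroids)):
        members = codes[labels == j]
        if len(members):
            new_centroids[j] = (members.mean(axis=0) >= 0.5).astype(codes.dtype)
    return labels, new_centroids
```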


Proceedings ArticleDOI
Yueqi Duan, Ziwei Wang, Jiwen Lu, Xudong Lin, Jie Zhou
18 Jun 2018
TL;DR: A deep reinforcement learning model is designed to learn the structure of the graph for bitwise interaction mining, reducing the uncertainty of binary codes by maximizing the mutual information with inputs and related bits, so that the ambiguous bits receive additional instruction from the graph for confident binarization.
Abstract: In this paper, we propose a GraphBit method to learn deep binary descriptors in a directed acyclic graph in an unsupervised manner, representing bitwise interactions as edges between the nodes of bits. Conventional binary representation learning methods enforce each element to be binarized into zero or one. However, there are elements lying on the boundary which suffer from doubtful binarization as "ambiguous bits". Ambiguous bits fail to collect effective information for confident binarization, which makes them unreliable and sensitive to noise. We argue that there are implicit inner relationships between bits in binary descriptors, where the related bits can provide extra instruction as prior knowledge for ambiguity elimination. Specifically, we design a deep reinforcement learning model to learn the structure of the graph for bitwise interaction mining, reducing the uncertainty of binary codes by maximizing the mutual information with inputs and related bits, so that the ambiguous bits receive additional instruction from the graph for confident binarization. Due to the reliability of the proposed binary codes with bitwise interaction, we obtain average improvements of 9.64%, 8.84%, and 3.22% on the CIFAR-10, Brown, and HPatches datasets respectively, compared with the state-of-the-art unsupervised binary descriptors.

Proceedings ArticleDOI
17 Jun 2018
TL;DR: This paper develops novel code constructions that are applicable to binary matrix-vector multiplication via a variant of the Four-Russians method called the Mailman algorithm, and presents a trade-off between the communication and computation cost of distributed coded matrix-vector multiplication for general, possibly non-binary, matrices.
Abstract: Recent work has developed coding theoretic approaches to add redundancy to distributed matrix-vector multiplications with the goal of speeding up the computation by mitigating the straggler effect in distributed computing. In this paper, we consider the case where the matrix comes from a small (e.g., binary) alphabet, where a variant of a popular method called the “Four-Russians method” is known to have significantly lower computational complexity as compared with the usual matrix-vector multiplication algorithm. We develop novel code constructions that are applicable to binary matrix-vector multiplication via a variant of the Four-Russians method called the Mailman algorithm. Specifically, in our constructions, the encoded matrices have a low alphabet that ensures lower computational complexity, as well as good straggler tolerance. We also present a trade-off between the communication and computation cost of distributed coded matrix-vector multiplication for general, possibly non-binary, matrices.
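A sketch of the block-lookup idea behind the Four-Russians family (the Mailman algorithm in the paper is a refinement of it): tabulate all subset sums of small column blocks, then answer each block with a single table lookup indexed by the matching bits of the vector. The tables pay off when reused across many vectors, with block width around log₂ n.

```python
import numpy as np

def blocked_binary_matvec(A, x, t=4):
    """Four-Russians-style product of a 0/1 matrix A with a 0/1 vector x:
    split A's columns into blocks of width t, tabulate the sum of every
    subset of each block's columns (built incrementally, one addition per
    table entry), then combine one lookup per block."""
    m, n = A.shape
    y = np.zeros(m, dtype=np.int64)
    for start in range(0, n, t):
        block = A[:, start:start + t].astype(np.int64)
        w = block.shape[1]
        table = np.zeros((1 << w, m), dtype=np.int64)
        for bits in range(1, 1 << w):
            lsb = bits & -bits                   # lowest set bit of the subset
            table[bits] = table[bits ^ lsb] + block[:, lsb.bit_length() - 1]
        idx = 0
        for j in range(w):
            idx |= int(x[start + j]) << j        # matching bits of x
        y += table[idx]
    return y

A = np.random.randint(0, 2, (6, 8))
x = np.random.randint(0, 2, 8)
assert np.array_equal(blocked_binary_matvec(A, x), A @ x)
```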

Journal ArticleDOI
TL;DR: A new performance parameter, the average of nonzero correlations between normalized columns, is proved to perform better than the known coherence parameter when used to estimate the performance of binary matrices with high compression ratios.
Abstract: For an m × n binary matrix with d nonzero elements per column, it is interesting to identify the minimal column degree d that corresponds to the best recovery performance. Since this problem is hard to address with currently known performance parameters, we propose a new performance parameter: the average of nonzero correlations between normalized columns. The parameter is proved to perform better than the known coherence parameter, namely the maximum correlation between normalized columns, when used to estimate the performance of binary matrices with high compression ratios n/m and low column degrees d. By optimizing the proposed parameter, we derive an ideal column degree d = ⌈√m⌉, around which the best recovery performance is expected to be obtained. This is verified by simulations. Given the ideal number d of nonzero elements in each column, we further determine their specific distribution by minimizing the coherence with a greedy method. The resulting binary matrices achieve comparable or even better recovery performance than random binary matrices.
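A sketch of the proposed parameter as we read it: the mean absolute value of the nonzero off-diagonal correlations between normalized columns (coherence would take their maximum instead).

```python
import numpy as np

def average_nonzero_correlation(A):
    """Average of the nonzero off-diagonal correlations between the
    normalized columns of a binary matrix A (assumes every column has
    at least one nonzero entry)."""
    cols = A / np.linalg.norm(A, axis=0, keepdims=True)
    G = cols.T @ cols                                  # Gram matrix
    off = G[~np.eye(G.shape[0], dtype=bool)]           # drop the diagonal
    nz = np.abs(off[off != 0])
    return nz.mean() if nz.size else 0.0

m = 64
d = int(np.ceil(np.sqrt(m)))   # the ideal column degree derived in the paper
```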

Journal ArticleDOI
TL;DR: It is demonstrated that PCA autonomously discovers order-parameter-like quantities that report on phase transitions, mitigating the need for a priori construction or identification of a suitable order parameter, thus streamlining the routine analysis of phase behavior.
Abstract: We outline how principal component analysis can be applied to particle configuration data to detect a variety of phase transitions in off-lattice systems, both in and out of equilibrium. Specifically, we discuss its application to study (1) the nonequilibrium random organization (RandOrg) model that exhibits a phase transition from quiescent to steady-state behavior as a function of density, (2) orientationally and positionally driven equilibrium phase transitions for hard ellipses, and (3) a compositionally driven demixing transition in the non-additive binary Widom-Rowlinson mixture.
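The workflow is compact: each particle configuration becomes one row of a data matrix, and the leading principal component is read as an order-parameter-like coordinate. A skeletal sketch with placeholder data:

```python
import numpy as np
from sklearn.decomposition import PCA

# Placeholder data: 500 snapshots of 128 particles in 2D, flattened.
# In a real analysis the rows would be configurations (or descriptors
# built from them) sampled across the control parameter, e.g. density.
configs = np.random.rand(500, 2 * 128)
order_param = PCA(n_components=1).fit_transform(configs)[:, 0]
# Plotting order_param against the control parameter locates the transition.
```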

Journal ArticleDOI
TL;DR: The publicly available code dart_board is described which combines rapid binary evolution codes, typically used in traditional BPS, with modern Markov chain Monte Carlo methods and can be applied to model a variety of stellar binary populations including the merging compact object binaries recently detected by gravitational wave observatories.
Abstract: By employing Monte Carlo random sampling, traditional binary population synthesis (BPS) offers a substantial improvement in efficiency over brute force, grid-based studies. Even so, BPS models typically require a large number of simulation realizations, a computationally expensive endeavor, to generate statistically robust results. Recent advances in statistical methods have led us to revisit the traditional approach to BPS. In this work we describe our publicly available code dart_board which combines rapid binary evolution codes, typically used in traditional BPS, with modern Markov chain Monte Carlo methods. dart_board takes a novel approach that treats the initial binary parameters and the supernova kick vector as model parameters. This formulation has several advantages, including the ability to model either populations of systems or individual binaries, the natural inclusion of observational uncertainties, and the flexible addition of new constraints which are problematic to include using traditional BPS. After testing our code with mock systems, we demonstrate the flexibility of dart_board by applying it to three examples: (i) a generic population of high mass X-ray binaries (HMXBs), (ii) the population of HMXBs in the Large Magellanic Cloud (LMC) in which the spatially resolved star formation history is used as a prior, and (iii) one particular HMXB in the LMC, Swift J0513.4-6547, in which we include observations of the system's component masses and orbital period. Although this work focuses on HMXBs, dart_board can be applied to a variety of stellar binaries including the recent detections by gravitational wave observatories of merging compact object binaries.

Journal ArticleDOI
Siyang Sun, Yingjie Yin, Xingang Wang, De Xu, Wenqi Wu, Qingyi Gu
TL;DR: A fast object detection algorithm based on binary deep convolutional neural networks (CNNs) that yields 62 times faster convolutional operations and 32 times memory savings in theory, and is easy to implement in embedded computing systems thanks to its binary convolution operations and low memory requirements.

Proceedings Article
03 Jul 2018
TL;DR: This work proposes two new splitting procedures that provably achieve near-optimal impurity and reports experiments that provide evidence that the proposed methods are interesting candidates to be employed in splitting nominal attributes with many values during decision tree/random forest induction.
Abstract: The problem of splitting attributes is one of the main steps in the construction of decision trees. In order to decide the best split, impurity measures such as Entropy and Gini are widely used. In practice, decision-tree inducers use heuristics for finding splits with small impurity when they consider nominal attributes with a large number of distinct values. However, there are no known guarantees for the quality of the splits obtained by these heuristics. To fill this gap, we propose two new splitting procedures that provably achieve near-optimal impurity. We also report experiments that provide evidence that the proposed methods are interesting candidates to be employed in splitting nominal attributes with many values during decision tree/random forest induction.
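For concreteness, the quantity being driven to near-optimality is the weighted impurity of a binary partition of a nominal attribute's values; enumerating all 2^k subsets is exactly what the heuristics and the proposed procedures avoid. A minimal Gini version (the names are illustrative):

```python
from collections import Counter

def gini(labels):
    """Gini impurity of a label multiset: 1 - sum_c p_c^2."""
    n = len(labels)
    if n == 0:
        return 0.0
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def split_impurity(values, labels, left_set):
    """Weighted impurity of partitioning the attribute's values into
    left_set vs. the rest."""
    left = [y for v, y in zip(values, labels) if v in left_set]
    right = [y for v, y in zip(values, labels) if v not in left_set]
    n = len(labels)
    return len(left) / n * gini(left) + len(right) / n * gini(right)

vals = ["red", "blue", "red", "green", "blue", "green"]
ys   = [1, 0, 1, 0, 0, 1]
print(split_impurity(vals, ys, {"red"}))   # 0.25
```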

Posted Content
TL;DR: A novel set of reversible modular multipliers applicable to quantum computing is presented, derived from three classical techniques: 1) traditional integer division, 2) Montgomery residue arithmetic, and 3) Barrett reduction.
Abstract: We present a novel set of reversible modular multipliers applicable to quantum computing, derived from three classical techniques: 1) traditional integer division, 2) Montgomery residue arithmetic, and 3) Barrett reduction. Each multiplier computes an exact result for all binary input values, while maintaining the asymptotic resource complexity of a single (non-modular) integer multiplier. We additionally conduct an empirical resource analysis of our designs in order to determine the total gate count and circuit depth of each fully constructed circuit, with inputs as large as 2048 bits. Our comparative analysis considers both circuit implementations which allow for arbitrary (controlled) rotation gates, as well as those restricted to a typical fault-tolerant gate set.
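Of the three classical techniques named above, Barrett reduction is the easiest to sketch: one precomputed constant replaces the division by the modulus with multiplications and shifts. A plain (non-reversible, non-quantum) Python sketch:

```python
def barrett_reduce(x, n, k, mu):
    """Barrett reduction: x mod n for 0 <= x < n**2, using the
    precomputed constant mu = floor(4**k / n), where k is the bit
    length of n. The quotient estimate is low by at most 2."""
    q = (x * mu) >> (2 * k)      # estimate of x // n
    r = x - q * n
    while r >= n:                # at most two corrective subtractions
        r -= n
    return r

n = 0xC5                         # example modulus, k = 8 bits
k = n.bit_length()
mu = (1 << (2 * k)) // n
assert all(barrett_reduce(x, n, k, mu) == x % n for x in range(n * n))
```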

Journal ArticleDOI
01 Jul 2018
TL;DR: The simulation results show that the proposed approximate dividers offer extensive savings in power dissipation, circuit complexity, and delay, while incurring only a small degradation in accuracy, making them suitable and interesting for applications and domains such as low-power/mobile computing.
Abstract: Approximate high-radix dividers (HR-AXDs) are proposed and investigated in this paper. High-radix division is reviewed, and inexact computing is introduced at different levels. Design parameters such as the number of bits (N) and the radix (r) are considered in the analysis; the replacement of exact cells with inexact cells in a binary signed-digit adder is introduced by utilizing different replacement schemes. Cell truncation and error compensation are also proposed to further extend inexact computation. Circuit-level performance and the error characteristics of the inexact high-radix dividers are analyzed for the proposed designs. The combined assessment of the normalized error distance, power dissipation, and delay is investigated, and applications of approximate high-radix dividers are treated in detail. The simulation results show that the proposed approximate dividers offer extensive savings in terms of power dissipation, circuit complexity, and delay, while incurring only a small degradation in accuracy, making them suitable and interesting for applications and domains such as low-power/mobile computing.

Journal ArticleDOI
04 Jun 2018
TL;DR: A lensless imaging scheme that employs multiple spherical-wave illuminations from a light-emitting diode array as diversity functions and a self-calibration algorithm to correct the misalignment of the binary mask is discussed.
Abstract: The use of multiple diverse measurements can make lensless phase retrieval more robust. Conventional diversity functions include aperture diversity, wavelength diversity, translational diversity, and defocus diversity. Here we discuss a lensless imaging scheme that employs multiple spherical-wave illuminations from a light-emitting diode array as diversity functions. In this scheme, we place a binary mask between the sample and the detector for imposing support constraints for the phase retrieval process. This support constraint enforces the light field to be zero at certain locations and is similar to the aperture constraint in Fourier ptychographic microscopy. We use a self-calibration algorithm to correct the misalignment of the binary mask. The efficacy of the proposed scheme is first demonstrated by simulations where we evaluate the reconstruction quality using mean square error and structural similarity index. The scheme is then experimentally tested by recovering images of a resolution target and biological samples. The proposed scheme may provide new insights for developing compact and large field-of-view lensless imaging platforms. The use of the binary mask can also be combined with other diversity functions for better constraining the phase retrieval solution space. We provide the open-source implementation code for the broad research community.

Journal ArticleDOI
TL;DR: It is demonstrated that a simple linear mean-square-error estimation method is efficient (i.e., has a variance equal to the Cramer–Rao bound), and that the Cramer–Rao bound provides a useful methodology for analyzing the performance of such an approach.
Abstract: The precision of proportion estimation with binary filtering of a Raman spectrum mixture is analyzed when the number of binary filters is equal to the number of present species and when the measurements are corrupted with Poisson photon noise. It is shown that the Cramer–Rao bound provides a useful methodology to analyze the performance of such an approach, in particular when the binary filters are orthogonal. It is demonstrated that a simple linear mean square error estimation method is efficient (i.e., has a variance equal to the Cramer–Rao bound). Evolutions of the Cramer–Rao bound are analyzed when the measuring times are optimized or when the considered proportion for binary filter synthesis is not optimized. Two strategies for the appropriate choice of this considered proportion are also analyzed for the binary filter synthesis.

Journal ArticleDOI
TL;DR: This work considers a special class of two-stage stochastic integer programming problems with binary variables appearing in both stages, and demonstrates how the second-stage dynamic programs (DPs) can be formed by use of binary decision diagrams, which then yield traditional Benders inequalities that can be strengthened based on observations regarding the structure of the underlying DPs.
Abstract: We consider a special class of two-stage stochastic integer programming problems with binary variables appearing in both stages. The class of problems we consider constrains the second-stage variables to belong to the intersection of sets corresponding to first-stage binary variables that equal one. Our approach seeks to uncover strong dual formulations to the second-stage problems by transforming them into dynamic programming (DP) problems parameterized by first-stage variables. We demonstrate how these DPs can be formed by use of binary decision diagrams, which then yield traditional Benders inequalities that can be strengthened based on observations regarding the structure of the underlying DPs. We demonstrate the efficacy of our approach on a set of stochastic traveling salesman problems.

Journal ArticleDOI
TL;DR: In this article, a technique for imaging binary stars from speckle data is presented, based upon the computation of the cross-correlation between the speckle frames and their square; this may be considered a simple, easy-to-implement computation complementary to the autocorrelation function of Labeyrie's technique for rapid determination of the position angle of binary systems.
Abstract: We present in this paper a technique for imaging binary stars from speckle data. This technique is based upon the computation of the cross-correlation between the speckle frames and their square. This may be considered as a simple, easy to implement, complementary computation to the autocorrelation function of Labeyrie's technique for a rapid determination of the position angle of binary systems. Angular separation, absolute position angle and relative photometry of binary stars can be derived from this technique. We show an application to the bright double star zeta Sge observed at the 2m Telescope Bernard Lyot.
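The computation itself is one FFT-based correlation per frame. A sketch, assuming the frames arrive as a list of 2-D intensity arrays; the asymmetry of this frame-versus-squared-frame correlation is what distinguishes it from Labeyrie's symmetric autocorrelation and pins down the position angle.

```python
import numpy as np

def frame_square_crosscorr(frames):
    """Average cross-correlation between each speckle frame I and its
    square I**2, computed via FFTs. Unlike the autocorrelation, this
    quantity is asymmetric for a binary star, lifting the 180-degree
    position-angle ambiguity."""
    acc = np.zeros_like(frames[0], dtype=float)
    for I in frames:
        F1 = np.fft.fft2(I)
        F2 = np.fft.fft2(I ** 2)
        acc += np.fft.ifft2(np.conj(F1) * F2).real
    return np.fft.fftshift(acc / len(frames))
```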

Journal ArticleDOI
TL;DR: The proposed construction is simple to implement and is shown to be accomplished in polynomial time, paving the way to settling the long-standing design problem of binary sequences with an optimal growth of the auto-correlation PSL.
Abstract: Binary sequence sets with asymptotically optimal auto/cross-correlation peak sidelobe level (PSL) growth have been known in the literature for a long time, and their construction has been studied both analytically and numerically. In contrast, it has been a long-standing problem whether we can construct a family of binary sequences whose auto-correlation PSL grows in an optimal manner. In this paper, we devise a construction method for binary sequences with asymptotically optimal PSL growth from sequence sets with good correlation properties. A key component of the design follows from the observation that if the PSL of the sequence set grows optimally or nearly optimally, then the PSL of the constructed binary sequence experiences a similar growth as a consequence. The proposed construction is simple to implement and is shown to be accomplished in polynomial time. With such a construction, we not only bridge the gap between analytical construction and computational search, but also pave the way to settling the long-standing design problem of binary sequences with an optimal growth of the auto-correlation PSL.
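The figure of merit used throughout is straightforward to compute. A sketch of the aperiodic autocorrelation PSL of a ±1 sequence, checked against the Barker-13 code:

```python
import numpy as np

def psl(seq):
    """Peak sidelobe level of a +/-1 sequence: the largest absolute
    aperiodic autocorrelation value away from the zero lag."""
    s = np.asarray(seq, dtype=float)
    full = np.correlate(s, s, mode="full")   # lags -(n-1) .. (n-1)
    n = len(s)
    sidelobes = np.delete(full, n - 1)       # drop the zero-lag peak
    return np.max(np.abs(sidelobes))

barker13 = [1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1]
print(psl(barker13))   # 1.0 -- Barker sequences have unit PSL
```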

Journal ArticleDOI
TL;DR: In this article, the authors apply a recently developed numerical algorithm to demonstrate that parts of the Majumdar-Papapetrou binary black hole shadow exhibit the Wada property.
Abstract: A key goal of the Event Horizon Telescope is to observe the shadow cast by a black hole. Recent simulations have shown that binary black holes, the progenitors of gravitational waves, present shadows with fractal structure. Here we study the binary shadow structure using techniques from nonlinear dynamics, recognising shadows as exit basins of open Hamiltonian dynamical systems. We apply a recently-developed numerical algorithm to demonstrate that parts of the Majumdar–Papapetrou binary black hole shadow exhibit the Wada property: any point of the boundary of one basin is also on the boundary of at least two additional basins. We show that the algorithm successfully distinguishes between the fractal and regular (i.e., non-fractal) parts of the binary shadow.

Book ChapterDOI
Qinghao Hu, Gang Li, Peisong Wang, Yifan Zhang, Jian Cheng
08 Sep 2018
TL;DR: A novel semi-binary decomposition method decomposes a matrix into two binary matrices and a diagonal matrix; an FPGA implementation shows that the proposed method can achieve ~9× speed-ups while significantly reducing the consumption of on-chip memory and dedicated multipliers.
Abstract: Recently, binary weight networks have attracted a lot of attention due to their high computational efficiency and small parameter size. Yet they still suffer from large accuracy drops because of their limited representation capacity. In this paper, we propose a novel semi-binary decomposition method which decomposes a matrix into two binary matrices and a diagonal matrix. Since the matrix product of binary matrices takes more numerical values than a single binary matrix, the proposed semi-binary decomposition has more representation capacity. Besides, we propose an alternating optimization method to solve the semi-binary decomposition problem while keeping the binary constraints. Extensive experiments on AlexNet, ResNet-18, and ResNet-50 demonstrate that our method outperforms state-of-the-art methods by a large margin (5% higher in top-1 accuracy). We also implement binary-weight AlexNet on an FPGA platform, which shows that our proposed method can achieve ~9× speed-ups while significantly reducing the consumption of on-chip memory and dedicated multipliers.

Journal ArticleDOI
TL;DR: New constructions of binary and ternary locally repairable codes (LRCs) using cyclic codes and their concatenation are proposed, and a similar method to the binary case is applied to construct ternary LRCs with good parameters.
Abstract: New constructions of binary and ternary locally repairable codes (LRCs) using cyclic codes and their concatenation are proposed. The proposed binary LRCs with d = 4 and some r, and with d ≥ 5 and some n, are shown to be optimal in terms of the upper bounds. In addition, a similar method to the binary case is applied to construct ternary LRCs with good parameters.

Journal ArticleDOI
TL;DR: This work proposes a new scheme that approximates both trainable weights and neural activations in deep networks by ternary values, tackles the open question of backpropagation when dealing with non-differentiable functions, and presents a key enabling technique for highly efficient DCNN inference without GPUs.
Abstract: Deep convolutional neural networks (DCNNs) are currently ubiquitous in medical imaging. While their versatility and high-quality results for common image analysis tasks, including segmentation, localisation, and prediction, are astonishing, the large representational power comes at the cost of highly demanding computational effort. This limits their practical applications for image-guided interventions and diagnostic (point-of-care) support using mobile devices without graphics processing units (GPUs). We propose a new scheme that approximates both trainable weights and neural activations in deep networks by ternary values and tackles the open question of backpropagation when dealing with non-differentiable functions. Our solution enables the removal of the expensive floating-point matrix multiplications throughout any convolutional neural network and replaces them by energy- and time-preserving binary operators and population counts. Our approach, which is demonstrated using a fully convolutional network (FCN) for CT pancreas segmentation, leads to more than 10-fold reduced memory requirements, and we provide a concept for sub-second inference without GPUs. Our ternary approximation obtains high accuracies (without any post-processing), with a Dice overlap of 71.0% that is statistically equivalent to using networks with high-precision weights and activations. We further demonstrate the significant improvements reached in comparison to binary quantisation and without our proposed ternary hyperbolic tangent continuation. We present a key enabling technique for highly efficient DCNN inference without GPUs that will help to bring the advances of deep learning to practical clinical applications. It also holds great promise for improving accuracies in large-scale medical data retrieval.
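An illustrative ternary weight approximation in the spirit of ternary weight networks: threshold small weights to zero and map the rest to ±α. The paper's trainable scheme and its backpropagation handling differ in detail; this sketch only shows the quantization step.

```python
import numpy as np

def ternarize(W, delta_scale=0.7):
    """Map weights to {-alpha, 0, +alpha}: zero out entries below a
    threshold proportional to the mean magnitude, then set alpha to the
    mean magnitude of the surviving weights."""
    delta = delta_scale * np.abs(W).mean()
    T = np.where(np.abs(W) > delta, np.sign(W), 0.0)
    alpha = np.abs(W[T != 0]).mean() if np.any(T) else 0.0
    return alpha * T

W = np.random.randn(4, 4)
print(ternarize(W))
```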

Proceedings ArticleDOI
15 Feb 2018
TL;DR: This work presents a novel method which first converts binary numbers to unary using thermometer encoders, then uses a "scaling network" followed by voting gates called "alternator logic", followed by an adder tree to convert the numbers back to the binary format.
Abstract: The binary number representation has dominated digital logic for decades due to its compact storage requirements. However, since the number system is positional, it needs to "unpack" bits, perform computations, and repack the bits back to binary (e.g., partial products in multiplication). An alternative representation is the unary number system: we use N bits, out of which the first M are 1 and the rest are 0, to represent the value M/N. We present a novel method which first converts binary numbers to unary using thermometer encoders, then uses a "scaling network" followed by voting gates that we call "alternator logic", followed by an adder tree to convert the numbers back to the binary format. For monotonically increasing functions, the scaling network is all we need, which essentially uses only the routing resources and flip-flops of the FPGA architecture. Our method is especially well-suited to FPGAs due to the abundant availability of routing and FF resources, and to the ability of FPGAs to realize high-fanout gates for highly oscillating functions. We compare our method to stochastic computing and to conventional binary implementations on a number of functions, as well as on two common image processing applications. Our method is clearly superior to the conventional binary implementation: our area×delay cost is on average only 3%, 8%, and 32% of the binary method for 8-, 10-, and 12-bit resolutions, respectively. Compared to stochastic computing, our cost is 6%, 5%, and 8% for those resolutions. The area cost includes conversions from and to the binary format. Our method outperforms the conventional binary method on an edge detection algorithm. However, it is not competitive with the binary method on the median filtering application due to the high cost of generating and saving unary representations of the input pixels.
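The entry point of the method is the thermometer encoder: a value M out of N becomes N bits with the first M set, so magnitude is carried by the bit count rather than by bit position. A minimal sketch:

```python
def thermometer_encode(value, n_bits):
    """Unary (thermometer) code: M in [0, N] -> N bits, first M set."""
    return [1] * value + [0] * (n_bits - value)

def thermometer_decode(bits):
    """Decoding is just a population count."""
    return sum(bits)

print(thermometer_encode(3, 8))   # [1, 1, 1, 0, 0, 0, 0, 0]
```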

Journal ArticleDOI
TL;DR: This paper proposes a novel strategy to design structured, sparse, binary HDS measurement matrices by promoting linear independence between rows, i.e., minimizing the number of zero singular values; an algorithm based on an optimal selection of non-zero entry positions implements this strategy.
Abstract: Recently, an important set of high-dimensional signal (HDS) applications has successfully implemented compressive sensing (CS) sensors whose efficiency depends on physical elements that perform a binary codification over the HDS. The structure of the binary codification is crucial, as it determines the HDS sensing matrices. For correct reconstruction, this class of matrices drastically differs from the dense or i.i.d. assumptions usually made in CS. Therefore, current CS matrix design algorithms are impractical. This paper proposes a novel strategy to design structured, sparse, and binary HDS measurement matrices based on promoting linear independence between rows by minimizing the number of zero singular values. The design constraints keep uniform both the number of non-zero elements per row and the number of non-zero elements per column. An algorithm based on an optimal selection of non-zero entry positions is developed to implement this strategy. Simulations show that the proposed optimization improves the quality of the reconstructed HDS by up to 8 dB of PSNR compared with non-optimized matrices.
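A greedy sketch of the balance constraint described above: place exactly d ones per column while keeping row weights as uniform as possible. The paper additionally optimizes the non-zero positions against its singular-value criterion, which this toy does not attempt.

```python
import numpy as np

def balanced_binary_matrix(m, n, d, seed=0):
    """Build an m x n binary matrix with exactly d ones per column,
    always placing them in the d currently lightest rows (random
    tie-breaking) so that row weights stay nearly uniform."""
    rng = np.random.default_rng(seed)
    A = np.zeros((m, n), dtype=np.int8)
    row_weight = np.zeros(m)
    for col in range(n):
        keys = row_weight + 1e-6 * rng.random(m)   # randomize ties
        rows = np.argsort(keys)[:d]
        A[rows, col] = 1
        row_weight[rows] += 1
    return A
```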