
Showing papers on "Binary number" published in 2020


Journal ArticleDOI
TL;DR: COSMIC as discussed by the authors is a community-developed binary population synthesis suite that is designed to simulate compact-object binary populations and their progenitors, and it can be used to both predict and inform observations of electromagnetic and gravitational wave sources.
Abstract: The formation and evolution of binary stars is a critical component of several fields in astronomy. The most numerous sources for gravitational wave observatories are inspiraling and/or merging compact binaries, while binary stars are present in nearly every electromagnetic survey regardless of the target population. Simulations of large binary populations serve to both predict and inform observations of electromagnetic and gravitational wave sources. Binary population synthesis is a tool that balances physical modeling with simulation speed to produce large binary populations on timescales of days. We present a community-developed binary population synthesis suite: COSMIC which is designed to simulate compact-object binary populations and their progenitors. As a proof of concept, we simulate the Galactic population of compact binaries and their gravitational wave signal observable by the Laser Interferometer Space Antenna (LISA). We find that $\sim10^8$ compact binaries reside in the Milky Way today, while $\sim10^4$ of them may be resolvable by LISA.

149 citations
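
For a rough feel for the kind of population statistic quoted above, the toy sketch below (not COSMIC itself; the log-uniform period distribution and the LISA band edges are simplifying assumptions of ours) draws a mock population of compact-binary orbital periods and counts how many emit gravitational waves in the LISA band, using f_GW = 2/P_orb for a circular binary.

```python
import numpy as np

# Toy estimate: fraction of a mock compact-binary population emitting in the
# LISA band. This is NOT COSMIC; the log-uniform period distribution and the
# band edges are illustrative assumptions only.
rng = np.random.default_rng(0)
n_binaries = 100_000
log_p = rng.uniform(np.log10(60.0), np.log10(3.15e7), n_binaries)  # 1 min .. ~1 yr
p_orb = 10.0 ** log_p                    # orbital period in seconds
f_gw = 2.0 / p_orb                       # dominant GW frequency of a circular binary

in_band = (f_gw > 1e-4) & (f_gw < 1e-1)  # LISA band, roughly 0.1 mHz .. 100 mHz
print(f"{in_band.mean():.1%} of the mock population falls in the LISA band")
```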


Proceedings Article
30 Apr 2020
TL;DR: This paper shows how to build a strong baseline, which already achieves state-of-the-art accuracy, by combining recently proposed advances, and carefully tuning the optimization procedure to minimize the discrepancy between the output of the binary and the corresponding real-valued convolution.
Abstract: This paper shows how to train binary networks to within a few percentage points (~3-5%) of their full-precision counterparts with a negligible increase in computational cost. In particular, we first show how to build a strong baseline, which already achieves state-of-the-art accuracy, by combining recently proposed advances and carefully tuning the optimization procedure. Secondly, we show that by attempting to minimize the discrepancy between the output of the binary and the corresponding real-valued convolution, additional significant accuracy gains can be obtained. We materialize this idea in two complementary ways: (1) with a loss function, during training, by matching the spatial attention maps computed at the output of the binary and real-valued convolutions, and (2) in a data-driven manner, by using the real-valued activations, available during inference prior to the binarization process, to re-scale the activations right after the binary convolution. Finally, we show that, when putting all of our improvements together, the resulting model reduces the gap to its real-valued counterpart to less than 3% and 5% top-1 error on CIFAR-100 and ImageNet, respectively, when using a ResNet-18 architecture.

111 citations
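
A minimal PyTorch sketch of the attention-matching idea in (1): the spatial attention map of a feature tensor is taken as the channel-wise sum of squared activations, normalized, and an L2 loss pulls the binary branch's map toward the real-valued one. The function and variable names are ours, not the paper's.

```python
import torch
import torch.nn.functional as F

def attention_map(feat: torch.Tensor) -> torch.Tensor:
    """Spatial attention of a (N, C, H, W) feature map: sum of squared
    activations over channels, flattened and L2-normalized."""
    a = feat.pow(2).sum(dim=1).flatten(1)  # (N, H*W)
    return F.normalize(a, dim=1)

def attention_matching_loss(binary_feat, real_feat):
    # Pull the binary branch's spatial attention toward the real-valued one.
    return (attention_map(binary_feat) - attention_map(real_feat).detach()).pow(2).mean()

# usage sketch with dummy feature maps
b = torch.randn(4, 64, 8, 8)  # output of a binary convolution
r = torch.randn(4, 64, 8, 8)  # output of the matching real-valued convolution
loss = attention_matching_loss(b, r)
```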


Book ChapterDOI
17 Aug 2020
TL;DR: This work introduces novel techniques to improve the translation between arithmetic and binary data types in secure multi-party computation using extended doubly-authenticated bits (edaBits), which correspond to shared integers in the arithmetic domain whose bit decomposition is shared in the binary domain.
Abstract: This work introduces novel techniques to improve the translation between arithmetic and binary data types in secure multi-party computation. We introduce a new approach to performing these conversions using what we call extended doubly-authenticated bits (edaBits), which correspond to shared integers in the arithmetic domain whose bit decomposition is shared in the binary domain. These can be used to considerably increase the efficiency of non-linear operations such as truncation, secure comparison and bit-decomposition.

85 citations
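
To make the edaBit definition concrete, here is a toy trusted-dealer sketch (real MPC protocols generate these interactively without any dealer): a random r is shared additively mod 2^k in the arithmetic domain, while each bit of r is XOR-shared in the binary domain.

```python
import secrets

def deal_edabit(k: int, n_parties: int = 2):
    """Toy trusted-dealer edaBit: additive shares of r mod 2**k plus XOR
    shares of each bit of r. Real MPC generates these without a dealer."""
    r = secrets.randbelow(2 ** k)
    # arithmetic domain: additive shares of r modulo 2**k
    arith = [secrets.randbelow(2 ** k) for _ in range(n_parties - 1)]
    arith.append((r - sum(arith)) % 2 ** k)
    # binary domain: XOR shares of each bit of r
    binary = []
    for i in range(k):
        bit = (r >> i) & 1
        shares = [secrets.randbelow(2) for _ in range(n_parties - 1)]
        shares.append(bit ^ (sum(shares) % 2))  # XOR of shares gives the bit
        binary.append(shares)
    return arith, binary

arith, binary = deal_edabit(k=8)
# both domains reconstruct the same r (two-party case)
assert sum(arith) % 256 == sum((binary[i][0] ^ binary[i][1]) << i for i in range(8))
```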


Journal ArticleDOI
TL;DR: It is proved that the worst-case complexity of the basic chirp reconstruction algorithm is $\mathcal{O}[nK(\log_2 n + K)]$, which makes the reconstruction computationally feasible—a claim supported by reporting computing times for the algorithm.
Abstract: Unsourced multiple access abstracts grantless simultaneous communication of a large number of devices (messages) each of which transmits (is transmitted) infrequently. It provides a model for machine-to-machine communication in the Internet of Things (IoT), including the special case of radio-frequency identification (RFID), as well as neighbor discovery in ad hoc wireless networks. This paper presents a fast algorithm for unsourced multiple access that scales to $2^{100}$ devices (arbitrary $100$-bit messages). The primary building block is multiuser detection of binary chirps which are simply codewords in the second order Reed Muller code. The chirp detection algorithm originally presented by Howard et al. is enhanced and integrated into a peeling decoder designed for a patching and slotting framework. In terms of both energy per bit and number of transmitted messages, the proposed algorithm is within a factor of $2$ of state-of-the-art approaches. A significant advantage of our algorithm is its computational efficiency. We prove that the worst-case complexity of the basic chirp reconstruction algorithm is $\mathcal{O}[nK(\log_2 n + K)]$, where $n$ is the codeword length and $K$ is the number of active users, and we report computing times for our algorithm. Our performance and computing time results represent a benchmark against which other practical algorithms can be measured.

83 citations
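
A binary chirp (a codeword of the second-order Reed-Muller code) can be written down directly. The sketch below uses one common parameterization from the chirp-detection literature, entries $i^{a^T P a + 2 b^T a} / \sqrt{2^m}$ with $P$ a binary symmetric matrix and $b$ a binary vector; take it as illustrative of the codebook structure rather than as the exact convention of this paper.

```python
import numpy as np
from itertools import product

def binary_chirp(P: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Binary chirp codeword: entries i**((a^T P a + 2 b^T a) mod 4) / sqrt(2^m)
    as a ranges over {0,1}^m, with P binary symmetric and b binary."""
    m = len(b)
    values = []
    for a in product((0, 1), repeat=m):
        a = np.array(a)
        exponent = int(a @ P @ a + 2 * (b @ a)) % 4
        values.append(1j ** exponent)
    return np.array(values) / np.sqrt(2 ** m)

P = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 1]])  # binary symmetric matrix
b = np.array([1, 0, 1])
print(np.round(binary_chirp(P, b), 3))  # a length-8 unit-norm codeword
```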



Proceedings Article
01 Jan 2020
TL;DR: This paper introduces a Rotated Binary Neural Network (RBNN), which considers the angle alignment between the full-precision weight vector and its binarized version, and proposes a training-aware approximation of the sign function for gradient backpropagation.
Abstract: Binary Neural Network (BNN) shows its predominance in reducing the complexity of deep neural networks. However, it suffers severe performance degradation. One of the major impediments is the large quantization error between the full-precision weight vector and its binary vector. Previous works focus on compensating for the norm gap while leaving the angular bias hardly touched. In this paper, for the first time, we explore the influence of angular bias on the quantization error and then introduce a Rotated Binary Neural Network (RBNN), which considers the angle alignment between the full-precision weight vector and its binarized version. At the beginning of each training epoch, we propose to rotate the full-precision weight vector to its binary vector to reduce the angular bias. To avoid the high complexity of learning a large rotation matrix, we further introduce a bi-rotation formulation that learns two smaller rotation matrices. In the training stage, we devise an adjustable rotated weight vector for binarization to escape the potential local optimum. Our rotation leads to around 50% weight flips which maximize the information gain. Finally, we propose a training-aware approximation of the sign function for gradient backpropagation. Experiments on CIFAR-10 and ImageNet demonstrate the superiority of RBNN over many state-of-the-art methods. Our source code, experimental settings, training logs and binary models are available at this https URL.

64 citations
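
The norm gap vs. angular bias decomposition is easy to see numerically: with the optimal scale α = mean(|w|), the residual ||w − α·sign(w)||² is governed by the angle between w and sign(w). A small numpy illustration (ours, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(1)
w = rng.standard_normal(1024)  # full-precision weight vector
s = np.sign(w)                 # its binarization (entries +-1)

alpha = np.abs(w).mean()       # optimal per-vector scale factor
cos_angle = (w @ s) / (np.linalg.norm(w) * np.sqrt(w.size))
quant_err = np.linalg.norm(w - alpha * s) ** 2

print(f"cos(angle between w and sign(w)) = {cos_angle:.4f}")
print(f"quantization error ||w - a*sign(w)||^2 = {quant_err:.2f}")
# RBNN's rotation pushes cos(angle) toward 1 before binarizing,
# which shrinks this residual error.
```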


Journal ArticleDOI
TL;DR: In this study, a fuzzy controller designed to control wind turbine blades is optimized with an improved genetic algorithm, and the results show that the optimization further improves the output power.

60 citations


Book ChapterDOI
23 Aug 2020
TL;DR: This work introduces a novel binary-oriented search space, proposes a new mechanism for controlling and stabilising the resulting searched topologies, and proposes a series of new search strategies for binary networks that lead to faster convergence and lower search times.
Abstract: This paper proposes Binary ArchitecTure Search (BATS), a framework that drastically reduces the accuracy gap between binary neural networks and their real-valued counterparts by means of Neural Architecture Search (NAS). We show that directly applying NAS to the binary domain provides very poor results. To alleviate this, we describe, to our knowledge, for the first time, the 3 key ingredients for successfully applying NAS to the binary domain. Specifically, we (1) introduce and design a novel binary-oriented search space, (2) propose a new mechanism for controlling and stabilising the resulting searched topologies, (3) propose and validate a series of new search strategies for binary networks that lead to faster convergence and lower search times. Experimental results demonstrate the effectiveness of the proposed approach and the necessity of searching in the binary space directly. Moreover, (4) we set a new state-of-the-art for binary neural networks on CIFAR10, CIFAR100 and ImageNet datasets. Code will be made available.

54 citations


Journal ArticleDOI
TL;DR: The binary and ternary implementation has similar performance to the higher precision implementation while using drastically fewer FPGA resources, and shows how to balance between latency and accuracy by retaining full precision on a selected subset of network components.
Abstract: We present the implementation of binary and ternary neural networks in the hls4ml library, designed to automatically convert deep neural network models to digital circuits with FPGA firmware. Starting from benchmark models trained with floating point precision, we investigate different strategies to reduce the network's resource consumption by reducing the numerical precision of the network parameters to binary or ternary. We discuss the trade-off between model accuracy and resource consumption. In addition, we show how to balance between latency and accuracy by retaining full precision on a selected subset of network components. As an example, we consider two multiclass classification tasks: handwritten digit recognition with the MNIST data set and jet identification with simulated proton-proton collisions at the CERN Large Hadron Collider. The binary and ternary implementation has similar performance to the higher precision implementation while using drastically fewer FPGA resources.

48 citations
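
The precision reduction at the heart of this study can be sketched in a few lines: binary quantization keeps only the sign of each weight, while ternary quantization additionally zeroes weights below a threshold. A minimal numpy sketch (the threshold rule below is one common choice from the quantization literature, not necessarily the one used in hls4ml):

```python
import numpy as np

def binarize(w):
    """Binary quantization: keep only the sign of each weight."""
    return np.where(w >= 0, 1, -1)

def ternarize(w, t=0.7):
    """Ternary quantization to {-1, 0, +1}, zeroing small weights.
    t * mean(|w|) is one common threshold choice (an assumption here)."""
    delta = t * np.abs(w).mean()
    return np.sign(w) * (np.abs(w) > delta)

w = np.random.default_rng(0).standard_normal(8)
print(binarize(w))   # [ 1 -1  1  1 -1  1  1 -1]
print(ternarize(w))  # zeros where |w| is small
```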


Posted Content
TL;DR: A full implementation of point addition in the Q# quantum programming language is provided, allowing unit tests and automatic quantum resource estimation for all components, and various trade-offs between different cost metrics, including the number of qubits, circuit depth and $T$-gate count, are presented.
Abstract: We present improved quantum circuits for elliptic curve scalar multiplication, the most costly component in Shor's algorithm to compute discrete logarithms in elliptic curve groups. We optimize low-level components such as reversible integer and modular arithmetic through windowing techniques and more adaptive placement of uncomputing steps, and improve over previous quantum circuits for modular inversion by reformulating the binary Euclidean algorithm. Overall, we obtain an affine Weierstrass point addition circuit that has lower depth and uses fewer $T$ gates than previous circuits. While previous work mostly focuses on minimizing the total number of qubits, we present various trade-offs between different cost metrics including the number of qubits, circuit depth and $T$-gate count. Finally, we provide a full implementation of point addition in the Q# quantum programming language that allows unit tests and automatic quantum resource estimation for all components.

38 citations
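
The classical routine being reformulated here, binary Euclidean inversion, avoids divisions entirely, which is what makes it attractive for reversible circuits. Below is a plain-Python version of the standard (textbook) binary inversion algorithm for odd moduli; it is the classical algorithm, not the paper's quantum circuit.

```python
def binary_mod_inverse(a: int, p: int) -> int:
    """Compute a^-1 mod p for odd p with gcd(a, p) = 1, using only
    shifts, additions and subtractions (binary extended Euclid)."""
    u, v = a % p, p
    x1, x2 = 1, 0
    while u != 1 and v != 1:
        while u % 2 == 0:  # halve u, keeping x1 * a = u (mod p) consistent
            u //= 2
            x1 = x1 // 2 if x1 % 2 == 0 else (x1 + p) // 2
        while v % 2 == 0:  # halve v, keeping x2 * a = v (mod p) consistent
            v //= 2
            x2 = x2 // 2 if x2 % 2 == 0 else (x2 + p) // 2
        if u >= v:
            u, x1 = u - v, x1 - x2
        else:
            v, x2 = v - u, x2 - x1
    return x1 % p if u == 1 else x2 % p

assert (binary_mod_inverse(3, 2**127 - 1) * 3) % (2**127 - 1) == 1
```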


Journal ArticleDOI
03 Dec 2020
TL;DR: This paper analyzes and optimizes quantum circuits for computing discrete logarithms on binary elliptic curves, including reversible circuits for fixed-base-point scalar multiplication and the full stack of relevant subroutines.
Abstract: This paper analyzes and optimizes quantum circuits for computing discrete logarithms on binary elliptic curves, including reversible circuits for fixed-base-point scalar multiplication and the full stack of relevant subroutines. The main optimization target is the size of the quantum computer, i.e., the number of logical qubits required, as this appears to be the main obstacle to implementing Shor's polynomial-time discrete-logarithm algorithm. The secondary optimization target is the number of logical Toffoli gates. For an elliptic curve over a field of $2^n$ elements, this paper reduces the number of qubits to $7n + \lfloor \log_2 n \rfloor + 9$. At the same time this paper reduces the number of Toffoli gates to $48n^3 + 8n^{\log_2 3 + 1} + 352 n^2 \log_2 n + 512 n^2 + O(n^{\log_2 3})$ with double-and-add scalar multiplication, and a logarithmic factor smaller with fixed-window scalar multiplication. The number of CNOT gates is also $O(n^3)$. Exact gate counts are given for various sizes of elliptic curves currently used for cryptography.
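
For a sense of scale, the formulas above can be evaluated directly; the snippet below plugs in n = 233 (the field size of the standard binary curves sect233k1/sect233r1, our choice of example) and drops the O(·) tail.

```python
from math import floor, log2

n = 233  # field size of the standard binary curves sect233k1 / sect233r1
qubits = 7 * n + floor(log2(n)) + 9
# Leading Toffoli terms for double-and-add; the O(n^{log2 3}) tail is dropped.
toffoli = 48 * n**3 + 8 * n**(log2(3) + 1) + 352 * n**2 * log2(n) + 512 * n**2
print(qubits)            # 1647 logical qubits
print(f"{toffoli:.2e}")  # roughly 8e8 Toffoli gates
```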

Proceedings Article
Zeping Yu, Wenxin Zheng, Jiaqi Wang, Qiyi Tang, Sen Nie, Shi Wu
01 Jan 2020
TL;DR: An end-to-end cross-modal retrieval network for binary source code matching is proposed, which achieves higher accuracy and requires less expert experience; "norm weighted sampling" is implemented for negative sampling.
Abstract: Binary source code matching, especially at the function level, has a critical role in the field of computer security. Given only binary code, finding the corresponding source code improves the accuracy and efficiency of reverse engineering. Given only source code, retrieving the related binary code contributes to confirming known vulnerabilities. However, due to the vast difference between source and binary code, few studies have investigated binary source code matching. Previously published studies focus on extracting code literals such as strings and integers, then utilize traditional matching algorithms such as the Hungarian algorithm for code matching. Nevertheless, these methods have limitations at the function level, because they ignore the potential semantic features of code, and much code lacks sufficient code literals. These methods also require expert experience for useful feature identification and feature engineering, which is time-consuming. This paper proposes an end-to-end cross-modal retrieval network for binary source code matching, which achieves higher accuracy and requires less expert experience. We adopt a Deep Pyramid Convolutional Neural Network (DPCNN) for source code feature extraction and a Graph Neural Network (GNN) for binary code feature extraction. We also exploit neural network-based models to capture code literals, including strings and integers. Furthermore, we implement "norm weighted sampling" for negative sampling. We evaluate our model on two datasets, where it outperforms other methods significantly.

Posted Content
TL;DR: In this article, the spatial attention maps computed at the output of the binary and real-valued convolutions are matched during training, and the real-valued activations are re-scaled after the binary convolution in a data-driven manner.
Abstract: This paper shows how to train binary networks to within a few percentage points ($\sim 3-5 \%$) of the full precision counterpart. We first show how to build a strong baseline, which already achieves state-of-the-art accuracy, by combining recently proposed advances and carefully adjusting the optimization procedure. Secondly, we show that by attempting to minimize the discrepancy between the output of the binary and the corresponding real-valued convolution, additional significant accuracy gains can be obtained. We materialize this idea in two complementary ways: (1) with a loss function, during training, by matching the spatial attention maps computed at the output of the binary and real-valued convolutions, and (2) in a data-driven manner, by using the real-valued activations, available during inference prior to the binarization process, for re-scaling the activations right after the binary convolution. Finally, we show that, when putting all of our improvements together, the proposed model beats the current state of the art by more than 5% top-1 accuracy on ImageNet and reduces the gap to its real-valued counterpart to less than 3% and 5% top-1 accuracy on CIFAR-100 and ImageNet respectively when using a ResNet-18 architecture. Code available at this https URL.

Proceedings ArticleDOI
01 Feb 2020
TL;DR: This coding can be used in big data technologies to reduce time delays and improve the reliability of data transmission from any information source in telecommunications systems, and to strengthen the security of information resources through the development of post-quantum cryptographic protection technologies.
Abstract: A method for developing an effective syntactic representation of binary data arrays based on binomial-polyadic encoding is described. The coding rests on a structural representation of binary data arrays in terms of the number of runs of ones in a binomial-polyadic space, using vertical and horizontal schemes for calculating coefficient weights. This coding can be used in big data technologies to reduce time delays and improve the reliability of data transmission from any information source in telecommunications systems, and to strengthen the security of information resources through the development of post-quantum cryptographic protection technologies. The maximum number of bits that can be assigned to represent the code numbers is also evaluated.

Journal ArticleDOI
TL;DR: The computational results demonstrate the efficiency and effectiveness of the proposed approach in finding a minimal feature subset that maximizes the classification accuracy.

Journal ArticleDOI
TL;DR: A binary version of the social spider algorithm (SSA), a heuristic algorithm modeled on spider behaviors for solving continuous problems, is proposed and named BinSSA, and the obtained results are compared with state-of-the-art algorithms from the literature.
Abstract: The social spider algorithm (SSA) is a heuristic algorithm modeled on spider behaviors for solving continuous problems. In this paper, a binary version of the social spider algorithm, called the binary social spider algorithm (BinSSA), is first proposed; to date, the binary version of SSA has received little attention in the literature. The central component of the binary version is the transfer function, which is responsible for mapping the continuous search space to a binary search space. In this study, eight transfer functions, divided into two families, S-shaped and V-shaped, are evaluated. BinSSA is obtained from SSA by transforming the continuous search space into a binary search space with these eight transfer functions, yielding eight variations: BinSSA1 through BinSSA8. To increase the exploration and exploitation capacity of BinSSA, a crossover operator is added, yielding BinSSA-CR. Secondly, the performance of the BinSSA variations is tested on the feature selection task, where finding the optimal subset of features is a challenging problem. The best BinSSA variation is identified according to several comparison criteria (mean, standard deviation, best and worst fitness values, accuracy, mean number of selected features, and CPU time). For the feature selection problem, the K-nearest neighbor (K-NN) and support vector machine (SVM) classifiers are used, and a detailed study is performed for the fixed parameter values used in the fitness function. BinSSA is evaluated on twenty-one well-known low-, middle- and large-scale UCI datasets, and the obtained results are compared with state-of-the-art algorithms from the literature. The results show that BinSSA and BinSSA-CR deliver superior performance and offer high-quality, stable solutions.
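
The transfer-function step is simple to state in code: map each continuous position component to a probability and sample a bit. The S-shaped and V-shaped forms below are the common ones from the binary metaheuristics literature; the exact eight variants used in BinSSA follow this pattern, but their precise formulas are not reproduced here.

```python
import numpy as np

def s_shaped(x):
    """S-shaped transfer: sigmoid mapping a continuous value to the
    probability of the bit being 1."""
    return 1.0 / (1.0 + np.exp(-x))

def v_shaped(x):
    """V-shaped transfer: |tanh(x)| gives the probability of *flipping*
    the current bit."""
    return np.abs(np.tanh(x))

def binarize_position(x, current_bits, rng, family="S"):
    r = rng.random(x.shape)
    if family == "S":
        return (r < s_shaped(x)).astype(int)  # resample bits from scratch
    return np.where(r < v_shaped(x), 1 - current_bits, current_bits)  # flip bits

rng = np.random.default_rng(0)
x = rng.standard_normal(10)        # continuous spider position
bits = rng.integers(0, 2, 10)      # current binary position
print(binarize_position(x, bits, rng, family="V"))
```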

Journal ArticleDOI
TL;DR: In this article, the uncertainty in the power spectral density estimation was incorporated into the Bayesian inference of the binary source parameters and applied it to the first 11 CBC detections reported by the LIGO-Virgo Collaboration.
Abstract: In order to perform Bayesian parameter estimation to infer the source properties of gravitational waves from compact binary coalescences (CBCs), the noise characteristics of the detector must be understood. It is typically assumed that the detector noise is stationary and Gaussian, characterized by a power spectral density (PSD) that is measured with infinite precision. We present a new method to incorporate the uncertainty in the power spectral density estimation into the Bayesian inference of the binary source parameters and apply it to the first 11 CBC detections reported by the LIGO-Virgo Collaboration. We find that incorporating the PSD uncertainty only leads to variations in the positions and widths of the binary parameter posteriors on the order of a few percent.

Proceedings ArticleDOI
01 Jan 2020
TL;DR: In this article, the authors studied the deterministic time complexity of solving a given binary labeling problem in trees, in the usual LOCAL model of distributed computing, and showed that the complexity of any such problem falls into one of the following classes: O(1), Θ(log n), Θ(n), or unsolvable.
Abstract: We present a complete classification of the deterministic distributed time complexity for a family of graph problems: binary labeling problems in trees. These are locally checkable problems that can be encoded with an alphabet of size two in the edge labeling formalism. Examples of binary labeling problems include sinkless orientation, sinkless and sourceless orientation, 2-vertex coloring, perfect matching, and the task of coloring edges red and blue such that all nodes are incident to at least one red and at least one blue edge. More generally, we can encode e.g. any cardinality constraints on indegrees and outdegrees. We study the deterministic time complexity of solving a given binary labeling problem in trees, in the usual LOCAL model of distributed computing. We show that the complexity of any such problem is in one of the following classes: O(1), Θ(log n), Θ(n), or unsolvable. In particular, a problem that can be represented in the binary labeling formalism cannot have time complexity Θ(log^* n), and hence we know that e.g. any encoding of maximal matchings has to use at least three labels (which is tight). Furthermore, given the description of any binary labeling problem, we can easily determine in which of the four classes it is and what is an asymptotically optimal algorithm for solving it. Hence the distributed time complexity of binary labeling problems is decidable, not only in principle, but also in practice: there is a simple and efficient algorithm that takes the description of a binary labeling problem and outputs its distributed time complexity.

Journal ArticleDOI
TL;DR: It is numerically shown that PA-DM combined with SR-CCDM can reduce the number of sequential processing steps by more than an order of magnitude, while having a rate loss that is comparable to conventional nonbinary CCDM with arithmetic coding.
Abstract: A distribution matcher (DM) maps a binary input sequence into a block of nonuniformly distributed symbols. To facilitate the implementation of shaped signaling, fast DM solutions with high throughput and low serialism are required. We propose a novel DM architecture with parallel amplitudes (PA-DM) for which $m-1$ component DMs, each with a different binary output alphabet, are operated in parallel in order to generate a shaped sequence with $m$ amplitudes. With negligible rate loss compared to a single nonbinary DM, PA-DM has a parallelization factor that grows linearly with $m$, and the component DMs have reduced output lengths. For such binary-output DMs, a novel constant-composition DM (CCDM) algorithm based on subset ranking (SR) is proposed. We present SR-CCDM algorithms that are serial in the minimum number of occurrences of either binary symbol for mapping, and fully parallel for demapping. For distributions that are optimized for the additive white Gaussian noise (AWGN) channel, we numerically show that PA-DM combined with SR-CCDM can reduce the number of sequential processing steps by more than an order of magnitude, while having a rate loss that is comparable to conventional nonbinary CCDM with arithmetic coding.
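
Subset ranking is the classical combinatorial trick of indexing all length-n binary sequences with exactly k ones; a constant-composition mapper then just unranks the input index. Below is a minimal sketch of lexicographic unranking, the core primitive (SR-CCDM builds its mapping and demapping around this; the function name is ours).

```python
from math import comb

def unrank(index: int, n: int, k: int) -> list:
    """Return the index-th (lexicographic) binary sequence of length n
    containing exactly k ones; requires 0 <= index < comb(n, k)."""
    bits = []
    for pos in range(n):
        zeros_first = comb(n - pos - 1, k)  # sequences with a 0 at this position
        if index < zeros_first:
            bits.append(0)
        else:
            index -= zeros_first
            bits.append(1)
            k -= 1
    return bits

# A constant-composition codebook: every codeword has the same weight (3 ones).
print([unrank(i, n=6, k=3) for i in range(3)])
# [[0, 0, 0, 1, 1, 1], [0, 0, 1, 0, 1, 1], [0, 0, 1, 1, 0, 1]]
```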

Journal ArticleDOI
TL;DR: When the binary signals are corrupted by external heavy-tailed noise, it is found that the Hopfield neural network with a large number of neurons can outperform the matched filter in the region of low input signal-to-noise ratios per bit.

Proceedings ArticleDOI
09 Jul 2020
TL;DR: A concise summary of the efforts of all of the communities studying Boolean Matrix Factorization is given, and some open questions which, in the authors' opinion, require further investigation are raised.
Abstract: The goal of Boolean Matrix Factorization (BMF) is to approximate a given binary matrix as the product of two low-rank binary factor matrices, where the product of the factor matrices is computed under the Boolean algebra. While the problem is computationally hard, it is also attractive because the binary nature of the factor matrices makes them highly interpretable. In the last decade, BMF has received a considerable amount of attention in the data mining and formal concept analysis communities and, more recently, the machine learning and the theory communities also started studying BMF. In this survey, we give a concise summary of the efforts of all of these communities and raise some open questions which in our opinion require further investigation.
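
The defining operation is the Boolean matrix product, in which 1 + 1 = 1. A short numpy sketch makes the objective concrete, measuring reconstruction error as the number of mismatched entries (the example matrices are ours):

```python
import numpy as np

def boolean_product(A: np.ndarray, B: np.ndarray) -> np.ndarray:
    """Boolean matrix product: OR of ANDs, i.e. 1 + 1 = 1."""
    return (A.astype(int) @ B.astype(int)) > 0

# A rank-2 Boolean factorization of a 3x3 binary matrix.
A = np.array([[1, 0], [1, 1], [0, 1]])
B = np.array([[1, 1, 0], [0, 1, 1]])
X = boolean_product(A, B).astype(int)
target = np.array([[1, 1, 0], [1, 1, 1], [0, 1, 1]])
print(X)
print("mismatched entries:", np.sum(X != target))  # 0: exact factorization
```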

Book ChapterDOI
23 Aug 2020
TL;DR: In this paper, a new search space of binary layer types, a new cell template, and a novel search objective are proposed to diversify early search to learn better performing binary architectures.
Abstract: Backbone architectures of most binary networks are well-known floating point (FP) architectures such as the ResNet family. Questioning that the architectures designed for FP networks might not be the best for binary networks, we propose to search architectures for binary networks (BNAS) by defining a new search space for binary architectures and a novel search objective. Specifically, based on the cell based search method, we define the new search space of binary layer types, design a new cell template, and rediscover the utility of and propose to use the Zeroise layer instead of using it as a placeholder. The novel search objective diversifies early search to learn better performing binary architectures. We show that our method searches architectures with stable training curves despite the quantization error inherent in binary networks. Quantitative analyses demonstrate that our searched architectures outperform the architectures used in state-of-the-art binary networks and outperform or perform on par with state-of-the-art binary networks that employ various techniques other than architectural changes.

Journal ArticleDOI
TL;DR: In this article, the problem of compressed sensing using binary measurement matrices and $\ell_1$-norm minimization (basis pursuit) as the recovery algorithm is studied, and new upper and lower bounds on the number of measurements required to achieve robust sparse recovery with binary matrices are derived.
Abstract: In this paper, we study the problem of compressed sensing using binary measurement matrices and $\ell_1$-norm minimization (basis pursuit) as the recovery algorithm. We derive new upper and lower bounds on the number of measurements to achieve robust sparse recovery with binary matrices. We establish sufficient conditions for a column-regular binary matrix to satisfy the robust null space property (RNSP) and show that the associated sufficient conditions for robust sparse recovery obtained using the RNSP are better by a factor of $(3 \sqrt{3})/2 \approx 2.6$ compared to the sufficient conditions obtained using the restricted isometry property (RIP). Next we derive universal lower bounds on the number of measurements that any binary matrix needs to have in order to satisfy the weaker sufficient condition based on the RNSP and show that bipartite graphs of girth six are optimal. Then we display two classes of binary matrices, namely parity check matrices of array codes and Euler squares, which have girth six and are nearly optimal in the sense of almost satisfying the lower bound. In principle, randomly generated Gaussian measurement matrices are “order-optimal.” So we compare the phase transition behavior of the basis pursuit formulation using binary array codes and Gaussian matrices and show that (i) there is essentially no difference between the phase transition boundaries in the two cases and (ii) the CPU time of basis pursuit with binary matrices is hundreds of times faster than with Gaussian matrices and the storage requirements are less. Therefore it is suggested that binary matrices are a viable alternative to Gaussian matrices for compressed sensing using basis pursuit.
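
Basis pursuit with a binary matrix can be run directly as a linear program: min ||x||_1 subject to Ax = y, with the standard split x = u - v, u, v >= 0. The sketch below uses a random binary matrix for brevity; the paper's girth-six array-code matrices are the structured alternative, and the dimensions here are arbitrary.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m_rows, n_cols, sparsity = 40, 100, 5
A = (rng.random((m_rows, n_cols)) < 0.2).astype(float)  # random binary matrix
x_true = np.zeros(n_cols)
x_true[rng.choice(n_cols, sparsity, replace=False)] = rng.standard_normal(sparsity)
y = A @ x_true

# min sum(u + v)  s.t.  A(u - v) = y,  u, v >= 0   (so ||x||_1 = sum(u + v))
c = np.ones(2 * n_cols)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * n_cols))
x_hat = res.x[:n_cols] - res.x[n_cols:]
print("recovery error:", np.linalg.norm(x_hat - x_true))
```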

Posted Content
TL;DR: This work proposes Expert Binary Convolution, which, for the first time, tailors conditional computing to binary networks by learning to select one data-specific expert binary filter at a time conditioned on input features.
Abstract: Network binarization is a promising hardware-aware direction for creating efficient deep models. Despite its memory and computational advantages, reducing the accuracy gap between binary models and their real-valued counterparts remains an unsolved challenging research problem. To this end, we make the following 3 contributions: (a) To increase model capacity, we propose Expert Binary Convolution, which, for the first time, tailors conditional computing to binary networks by learning to select one data-specific expert binary filter at a time conditioned on input features. (b) To increase representation capacity, we propose to address the inherent information bottleneck in binary networks by introducing an efficient width expansion mechanism which keeps the binary operations within the same budget. (c) To improve network design, we propose a principled binary network growth mechanism that unveils a set of network topologies of favorable properties. Overall, our method improves upon prior work, with no increase in computational cost, by $\sim 6\%$, reaching a groundbreaking $\sim 71\%$ on ImageNet classification. Code will be made available at this https URL.

Journal ArticleDOI
TL;DR: The proposed method can reduce the severe effect of quality degradation from binarizing gray-scaled holograms by optimizing the neural network to output binary amplitude holograms directly.
Abstract: Binary hologram generation based on deep learning is proposed. The proposed method reduces the severe quality degradation caused by binarizing gray-scale holograms by optimizing the neural network to output binary amplitude holograms directly. In previous work, the calculation time for generating binary holograms was long; in the proposed method, once the neural network is sufficiently trained, it generates binary holograms much faster than previous work, with comparable quality. The proposed method is particularly suitable when several binary holograms must be generated under the same conditions. The feasibility of the proposed method was confirmed experimentally.

Proceedings ArticleDOI
15 Apr 2020
TL;DR: Binary lifting is addressed with BinRec, a new approach to heuristic-free binary recompilation which lifts dynamic traces of a binary to a compiler-level intermediate representation (IR) and lowers the IR back to a "recovered" binary.
Abstract: Binary lifting and recompilation allow a wide range of install-time program transformations, such as security hardening, deobfuscation, and reoptimization. Existing binary lifting tools are based on static disassembly and thus have to rely on heuristics to disassemble binaries. In this paper, we present BinRec, a new approach to heuristic-free binary recompilation which lifts dynamic traces of a binary to a compiler-level intermediate representation (IR) and lowers the IR back to a "recovered" binary. This enables BinRec to apply rich program transformations, such as compiler-based optimization passes, on top of the recovered representation. We identify and address a number of challenges in binary lifting, including unique challenges posed by our dynamic approach. In contrast to existing frameworks, our dynamic frontend can accurately disassemble and lift binaries without heuristics, and we can successfully recover obfuscated code and all SPEC INT 2006 benchmarks including C++ applications. We evaluate BinRec in three application domains: i) binary reoptimization, ii) deobfuscation (by recovering partial program semantics from virtualization-obfuscated code), and iii) binary hardening (by applying existing compiler-level passes such as AddressSanitizer and SafeStack on binary code).

Journal ArticleDOI
TL;DR: This article proposes boundary differential privacy (BDP), which defends against extraction attacks by obfuscating prediction responses with noise, designing a perturbation algorithm called boundary randomized response for a binary model together with a generalization to a multiclass model.
Abstract: Machine learning service APIs allow model owners to monetize proprietary models by offering prediction services to third-party users. However, existing literature shows that model parameters are vulnerable to extraction attacks, which accumulate prediction queries and their responses to train a replica model. As countermeasures, researchers have proposed reducing the rich API output, such as hiding the precise confidence. Nonetheless, even with a one-bit response, an adversary can still exploit fine-tuned queries with a differential property to infer the decision boundary of the underlying model. In this paper, we propose boundary differential privacy (BDP) against such attacks, obfuscating the prediction responses with noise. BDP guarantees that an adversary cannot learn the decision boundary between any two classes to a predefined precision, no matter how many queries are issued to the prediction API. We first design a perturbation algorithm called boundary randomized response for a binary model and prove that it satisfies ε-BDP, then generalize this algorithm to a multiclass model. Finally, we generalize the hard boundary to a soft boundary and design an adaptive perturbation algorithm that still works in the latter case. The effectiveness and high utility of our solution are verified by extensive experiments on both linear and non-linear models.
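
The binary case has the flavor of classical randomized response: flip the returned label with a probability calibrated to the privacy parameter. A simplified sketch follows; the paper's boundary randomized response adapts the flip probability to the query's relation to the decision boundary, whereas the fixed-probability version below is only the plain baseline.

```python
import math
import random

def randomized_response(label: int, epsilon: float) -> int:
    """Return the true binary label with probability e^eps / (1 + e^eps),
    otherwise flip it. Smaller epsilon -> noisier, more private answers."""
    p_keep = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    return label if random.random() < p_keep else 1 - label

# usage: obfuscate the prediction API's output before returning it
true_prediction = 1
noisy = [randomized_response(true_prediction, epsilon=1.0) for _ in range(10)]
print(noisy)  # mostly 1s, with occasional flips
```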

Journal ArticleDOI
Turgay Kaya
TL;DR: This study presents the design of a ring oscillator (RO)-based PUF in a field programmable gate array and shows that the generated numbers have good statistical properties and can be used in cryptography.
Abstract: Physical unclonable function (PUF) and true random number generator structures are important components used for security in cryptographic systems. Random numbers can be generated for cryptography by using these two components together. In particular, it is desirable that these numbers be unpredictable, non-reproducible and have good statistical properties. This study presents the design of a ring oscillator (RO)-based PUF in a field programmable gate array. Random numbers—obtained from a Chua circuit that exhibits chaotic behavior in 3D and continuous time—were applied to the RO-based PUF challenge inputs. Normalization operations were performed to convert the values in floating number format—obtained by sampling the Chua circuit—into the binary number system. Because modular arithmetic was used in the normalization process, it was simple and fast to obtain the generated numbers to be applied to the challenge inputs. NIST, autocorrelation and scale index tests were used to assess the usability of the random numbers obtained by the RO-PUF for key generation. The results showed that the generated numbers have good statistical properties and can be used in cryptography.
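
The normalization step described here, converting floating-point chaotic samples into challenge bits with modular arithmetic, can be sketched as follows. This is a toy illustration of the idea only: the scale factor is an assumption of ours, and the sample values stand in for the Chua circuit output.

```python
import numpy as np

def floats_to_challenge_bits(samples: np.ndarray, bits_per_sample: int = 8) -> list:
    """Map floating-point samples to bits via scaling and modular reduction;
    simple and fast. The 1e6 scale factor is an illustrative assumption."""
    out = []
    for s in samples:
        v = int(abs(s) * 1e6) % (1 << bits_per_sample)  # modular normalization
        out.extend((v >> i) & 1 for i in range(bits_per_sample))
    return out

# stand-in for sampled Chua-circuit state values
samples = np.array([0.123456, -1.987654, 0.555555])
challenge = floats_to_challenge_bits(samples)
print(challenge[:16])
```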