
Showing papers on "Binary number published in 2013"


Journal ArticleDOI
TL;DR: Six new transfer functions, divided into two families, s-shaped and v-shaped, are introduced and evaluated; the results show that the newly introduced v-shaped family of transfer functions significantly improves the performance of the original binary PSO.
Abstract: Particle Swarm Optimization (PSO) is one of the most widely used heuristic algorithms. Its simplicity and inexpensive computational cost make the algorithm very popular and powerful in solving a wide range of problems. The binary version of this algorithm has been introduced for solving binary problems. The main part of the binary version is a transfer function, which is responsible for mapping a continuous search space to a discrete search space. Currently there appears to be insufficient focus on the transfer function in the literature despite its apparent importance. In this study, six new transfer functions divided into two families, s-shaped and v-shaped, are introduced and evaluated. Twenty-five benchmark optimization functions provided by the CEC 2005 special session are employed to evaluate these transfer functions and select the best one in terms of avoiding local minima and convergence speed. In order to validate the performance of the best transfer function, a comparative study with six recent modifications of BPSO is provided as well. The results show that the newly introduced v-shaped family of transfer functions significantly improves the performance of the original binary PSO.
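As a concrete illustration of the transfer-function idea, here is a minimal Python sketch contrasting one canonical s-shaped function (the sigmoid) with one canonical v-shaped function (|tanh|). These are representative choices, not necessarily among the six functions proposed in the paper, and the update rules follow the standard BPSO conventions.

```python
import numpy as np

def s_shaped(v):
    # S-shaped transfer (sigmoid): T(v) is the probability the bit is set to 1.
    return 1.0 / (1.0 + np.exp(-v))

def v_shaped(v):
    # V-shaped transfer: T(v) is the probability the bit flips its value.
    return np.abs(np.tanh(v))

def update_s(x, v, rng):
    # S-shaped rule: resample each bit as 1 with probability T(v).
    return (rng.random(x.shape) < s_shaped(v)).astype(int)

def update_v(x, v, rng):
    # V-shaped rule: flip each bit with probability T(v), else keep it.
    flip = rng.random(x.shape) < v_shaped(v)
    return np.where(flip, 1 - x, x)

rng = np.random.default_rng(0)
x = rng.integers(0, 2, size=10)   # binary particle position
v = rng.normal(size=10)           # continuous PSO velocity
print(update_s(x, v, rng), update_v(x, v, rng))
```

The v-shaped rule preserves the current position when the velocity is near zero, whereas the s-shaped rule resamples it; this difference is what the paper's comparison probes.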

766 citations


Journal ArticleDOI
TL;DR: This paper investigates an alternative CS approach that shifts the emphasis from the sampling rate to the number of bits per measurement, and introduces the binary iterative hard thresholding algorithm for signal reconstruction from 1-bit measurements that offers state-of-the-art performance.
Abstract: The compressive sensing (CS) framework aims to ease the burden on analog-to-digital converters (ADCs) by reducing the sampling rate required to acquire and stably recover sparse signals. Practical ADCs not only sample but also quantize each measurement to a finite number of bits; moreover, there is an inverse relationship between the achievable sampling rate and the bit depth. In this paper, we investigate an alternative CS approach that shifts the emphasis from the sampling rate to the number of bits per measurement. In particular, we explore the extreme case of 1-bit CS measurements, which capture just their sign. Our results come in two flavors. First, we consider ideal reconstruction from noiseless 1-bit measurements and provide a lower bound on the best achievable reconstruction error. We also demonstrate that i.i.d. random Gaussian matrices provide measurement mappings that, with overwhelming probability, achieve nearly optimal error decay. Next, we consider reconstruction robustness to measurement errors and noise and introduce the binary ε-stable embedding property, which characterizes the robustness of the measurement process to sign changes. We show that the same class of matrices that provide almost optimal noiseless performance also enable such a robust mapping. On the practical side, we introduce the binary iterative hard thresholding algorithm for signal reconstruction from 1-bit measurements that offers state-of-the-art performance.
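A minimal sketch of the binary iterative hard thresholding (BIHT) idea in Python: enforce sign consistency with a gradient-like step, then keep the k largest-magnitude entries. The step size, iteration count, and problem sizes below are illustrative, not taken from the paper.

```python
import numpy as np

def biht(y, A, k, iters=200, tau=1.0):
    """Binary iterative hard thresholding (sketch): push sign(A @ x)
    toward the observed 1-bit measurements y, then hard-threshold to
    a k-sparse vector. Returns a unit-norm estimate."""
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(iters):
        x = x + (tau / m) * (A.T @ (y - np.sign(A @ x)))
        x[np.argsort(np.abs(x))[:-k]] = 0.0   # keep k largest entries
    nrm = np.linalg.norm(x)
    return x / nrm if nrm > 0 else x

rng = np.random.default_rng(1)
n, m, k = 128, 512, 5
x0 = np.zeros(n)
x0[rng.choice(n, k, replace=False)] = rng.normal(size=k)
x0 /= np.linalg.norm(x0)          # 1-bit measurements lose amplitude
A = rng.normal(size=(m, n))       # i.i.d. Gaussian measurement matrix
y = np.sign(A @ x0)
print("error:", np.linalg.norm(biht(y, A, k) - x0))
```

Normalizing the final estimate reflects the fact that 1-bit measurements only determine the signal's direction, not its scale.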

645 citations


Proceedings ArticleDOI
23 Jun 2013
TL;DR: A novel framework to learn an extremely compact binary descriptor called BinBoost that is very robust to illumination and viewpoint changes; it significantly outperforms the state-of-the-art binary descriptors and performs similarly to the best floating-point descriptors at a fraction of the matching time and memory footprint.
Abstract: Binary keypoint descriptors provide an efficient alternative to their floating-point competitors as they enable faster processing while requiring less memory. In this paper, we propose a novel framework to learn an extremely compact binary descriptor we call BinBoost that is very robust to illumination and viewpoint changes. Each bit of our descriptor is computed with a boosted binary hash function, and we show how to efficiently optimize the different hash functions so that they complement each other, which is key to compactness and robustness. The hash functions rely on weak learners that are applied directly to the image patches, which frees us from any intermediate representation and lets us automatically learn the image gradient pooling configuration of the final descriptor. Our resulting descriptor significantly outperforms the state-of-the-art binary descriptors and performs similarly to the best floating-point descriptors at a fraction of the matching time and memory footprint.
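The per-bit construction can be sketched as follows: each descriptor bit is the sign of a boosted combination of weak learners evaluated on the raw patch. The weak learners below (thresholded mean gradients over hypothetical sub-regions) and their weights are made up for illustration; the paper learns both from data.

```python
import numpy as np

def hash_bit(patch, learners, alphas):
    """One descriptor bit as the sign of a boosted combination of weak
    learners (sketch). Each learner maps a patch to {-1, +1}."""
    score = sum(a * h(patch) for h, a in zip(learners, alphas))
    return 1 if score >= 0 else 0

def make_learner(r0, r1, c0, c1, thresh):
    # Hypothetical weak learner: thresholded mean horizontal gradient
    # over a sub-region of the patch.
    def h(patch):
        gx = np.diff(patch, axis=1)   # crude horizontal gradient
        return 1 if gx[r0:r1, c0:c1].mean() > thresh else -1
    return h

rng = np.random.default_rng(2)
patch = rng.random((32, 32))
learners = [make_learner(0, 16, 0, 16, 0.0), make_learner(16, 32, 8, 24, 0.01)]
alphas = [0.7, 0.3]                   # boosting weights (illustrative)
print(hash_bit(patch, learners, alphas))
```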

212 citations


Journal ArticleDOI
TL;DR: In this article, the authors present a general formalism for determining the contribution of each star of the binary to the total flux received at the top of the atmosphere of an Earth-like planet, and use the Sun's habitable zone (HZ) to calculate the locations of the inner and outer boundaries of the HZ around a binary star system.
Abstract: We have developed a comprehensive methodology for calculating the circumbinary habitable zone (HZ) in planet-hosting P-type binary star systems. We present a general formalism for determining the contribution of each star of the binary to the total flux received at the top of the atmosphere of an Earth-like planet, and use the Sun's HZ to calculate the locations of the inner and outer boundaries of the HZ around a binary star system. We apply our calculations to Kepler's currently known circumbinary planetary systems and show the combined stellar flux that determines the boundaries of their HZs. We also show that the HZ in P-type systems is dynamic: depending on the luminosity of the binary stars, their spectral types, and the binary eccentricity, its boundaries vary as the stars of the binary undergo their orbital motion. We present the details of our calculations and discuss the implications of the results.
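A heavily simplified sketch of the flux-combination step: add each star's flux at the planet via the inverse-square law and compare the total with solar-calibrated HZ flux limits. The limits and stellar parameters below are illustrative placeholders, and the paper's actual calculation additionally weights each star's flux by its spectral type.

```python
# Combined stellar flux at the planet, in units of the solar constant,
# with luminosities in L_sun and star-planet distances in AU.
def combined_flux(L1, d1, L2, d2):
    return L1 / d1**2 + L2 / d2**2

# Illustrative flux limits of the Sun's HZ (Earth's flux = 1); the paper
# weights each star's contribution by its spectral type before comparing.
S_INNER, S_OUTER = 1.1, 0.35

def in_habitable_zone(L1, d1, L2, d2):
    S = combined_flux(L1, d1, L2, d2)
    return S_OUTER <= S <= S_INNER

# Two illustrative stars; the distances change along the binary orbit,
# so HZ membership of a fixed planetary orbit is dynamic, as the paper notes.
print(in_habitable_zone(0.6, 0.9, 0.2, 1.1))   # -> True
```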

91 citations


Journal ArticleDOI
TL;DR: In this article, the authors consider the evolution of supermassive black hole binaries at the center of spherical, axisymmetric, and triaxial galaxies, using direct N-body integrations as well as analytic estimates.
Abstract: We consider the evolution of supermassive black hole binaries at the center of spherical, axisymmetric, and triaxial galaxies, using direct N-body integrations as well as analytic estimates. We find that the rates of binary hardening exhibit a significant N-dependence in all the models, at least for N in the investigated range of 10^5 ≤ N ≤ 10^6. Binary hardening rates are also substantially lower than would be expected if the binary loss cone remained full, as it would be if the orbits supplying stars to the binary were being efficiently replenished. The difference in binary hardening rates between the spherical and nonspherical models is less than a factor of two even in the simulations with the largest N. By studying the orbital populations of our models, we conclude that the rate of supply of stars to the binary via draining of centrophilic orbits is indeed expected to be much lower than the full-loss-cone rate, consistent with our simulations. We argue that the binary's evolution in the simulations is driven in roughly equal amounts by collisional and collisionless effects, even at the highest N-values currently accessible. While binary hardening rates would probably reach a limiting value for large N, our results suggest that we cannot approach that rate with currently available algorithms and computing hardware. The extrapolation of results from N-body simulations to real galaxies is therefore not straightforward, casting doubt on recent claims that triaxiality or axisymmetry alone are capable of solving the final-parsec problem in gas-free galaxies.

91 citations


Journal ArticleDOI
TL;DR: The concept of the binary numeral system (BNS) is used to reduce the number of binary and continuous variables related to the candidate transmission lines and their associated network constraints; the construction phase of the greedy randomized adaptive search procedure (GRASP-CP) and additional constraints, obtained from the power flow equilibrium of the electric power system, are employed to further reduce the search space.
Abstract: This paper proposes strategies to reduce the number of variables and the combinatorial search space of the multistage transmission expansion planning (TEP) problem. The concept of the binary numeral system (BNS) is used to reduce the number of binary and continuous variables related to the candidate transmission lines and the network constraints connected with them. The construction phase of the greedy randomized adaptive search procedure (GRASP-CP) and additional constraints, obtained from the power flow equilibrium of the electric power system, are employed to further reduce the search space. The multistage TEP problem is modeled as a mixed-binary linear programming problem and solved using a commercial solver with a low computational time. The results of one test system and two real systems are presented in order to show the efficiency of the proposed solution technique.
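The BNS reduction can be sketched in a few lines: a corridor that admits up to n_max identical candidate lines needs only ceil(log2(n_max + 1)) binary variables whose weighted sum encodes the number of lines built, instead of one variable per line. This toy sketch ignores the rest of the TEP model.

```python
import math

def bns_variables(n_max):
    """Binary variables needed to encode building 0..n_max identical
    candidate lines in one corridor (the BNS idea, sketched)."""
    return math.ceil(math.log2(n_max + 1))

def lines_built(bits):
    # The weighted sum 2^0*b0 + 2^1*b1 + ... recovers the integer decision;
    # the MILP also needs the side constraint lines_built(bits) <= n_max.
    return sum(b << i for i, b in enumerate(bits))

# A corridor allowing up to 5 new lines needs 3 variables instead of 5.
print(bns_variables(5))        # -> 3
print(lines_built([1, 0, 1]))  # bits (b0, b1, b2) -> 5 lines
```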

65 citations


Journal ArticleDOI
TL;DR: A novel method to perform inner-product computation based on distributed arithmetic principles, in which thermometer-code-encoded residues provide a simple means of performing the modular inner-product computation owing to the absence of the modulo operation encountered in the conventional binary-coded system.
Abstract: This paper presents a novel method to perform inner-product computation based on distributed arithmetic principles. The input data are represented in the residue domain and are encoded in the thermometer code format, while the output data are encoded in the one-hot code format. Compared to the conventional distributed-arithmetic-based system using the binary-coded format to represent the residues, the proposed system using thermometer-code-encoded residues provides a simple means of performing the modular inner-product computation due to the absence of the modulo operation encountered in the conventional binary-coded system. In addition, the modulo adder used in the proposed system can be implemented using a simple shifter-based circuit utilizing the one-hot code format. As there is no carry propagation involved in the addition using the one-hot code, and the modulo operation is performed automatically during the addition process, the operating speed of the one-hot-code-based modulo adder is much superior to that of the conventional binary-code-based modulo adder. As the inner product is used extensively in FIR filter design, SPICE simulation results for an FIR filter implemented using the proposed system are also presented to demonstrate the validity of the proposed scheme.
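The key property is easy to demonstrate in Python: with a one-hot encoded residue, adding a constant modulo m is a pure cyclic rotation, so no carries propagate and the modulo reduction happens automatically. This mirrors the shifter-based adder described above.

```python
def one_hot(residue, m):
    # One-hot encoding of a residue modulo m: a single 1 at position `residue`.
    v = [0] * m
    v[residue] = 1
    return v

def mod_add_one_hot(v, b, m):
    """Add b (mod m) to a one-hot encoded residue by cyclic rotation.
    No carry propagation occurs and the modulo reduction is automatic,
    which is the property the shifter-based adder exploits."""
    return [v[(i - b) % m] for i in range(m)]

m = 7
v = one_hot(5, m)
w = mod_add_one_hot(v, 4, m)     # (5 + 4) mod 7 = 2
print(w.index(1))                # -> 2
```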

49 citations



Journal ArticleDOI
TL;DR: In this paper, the restriction on the number r of specified digits in the binary expansion of a prime p < N = 2^n is relaxed to r < c(n/log n)^{4/7}.
Abstract: We present a new result on counting primes p < N = 2^n for which r (arbitrarily placed) digits in the binary expansion of p are specified. Compared with earlier work of Harman and Katai, the restriction on r is relaxed to r < c(n/log n)^{4/7}. This condition results from the estimates of Gallagher and Iwaniec on zero-free regions of L-functions with ‘powerful’ conductor.

47 citations


ReportDOI
01 Aug 2013
TL;DR: This work analyzes the error induced by flipping specific bits in the most widely used IEEE floating-point representation in an architecture-agnostic manner and demonstrates how the resilience of the Jacobi method for linear equations can be significantly improved by rescaling the associated matrix.
Abstract: In high-end computing, the collective surface area, smaller fabrication sizes, and increasing density of components have led to an increase in the number of observed bit flips. If mechanisms are not in place to detect them, such flips produce silent errors, i.e. the code returns a result that deviates from the desired solution by more than the allowed tolerance and the discrepancy cannot be distinguished from the standard numerical error associated with the algorithm. These phenomena are believed to occur more frequently in DRAM, but logic gates, arithmetic units, and other circuits are also susceptible to bit flips. Previous work has focused on algorithmic techniques for detecting and correcting bit flips in specific data structures; however, these techniques suffer from a lack of generality and often cannot be implemented in heterogeneous computing environments. Our work takes a novel approach to this problem. We focus on quantifying the impact of a single bit flip on specific floating-point operations. We analyze the error induced by flipping specific bits in the most widely used IEEE floating-point representation in an architecture-agnostic manner, i.e., without requiring proprietary information such as bit flip rates and vendor-specific circuit designs. We initially study dot products of vectors and demonstrate that not all bit flips create a large error and, more importantly, that the expected value of the relative magnitude of the error is very sensitive to the bit pattern of the binary representation of the exponent, which strongly depends on scaling. Our results are derived analytically and then verified experimentally with Monte Carlo sampling of random vectors. Furthermore, we consider the natural resilience properties of solvers based on fixed-point iteration, and we demonstrate how the resilience of the Jacobi method for linear equations can be significantly improved by rescaling the associated matrix.
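The single-bit-flip experiment is easy to reproduce at small scale. The sketch below flips chosen bits of an IEEE-754 binary64 value through its integer representation and prints the resulting relative error, showing that low mantissa flips are mild while exponent flips are catastrophic; the particular bit choices are illustrative.

```python
import struct

def flip_bit(x, bit):
    """Flip one bit (0 = LSB of the mantissa, 52-62 = exponent,
    63 = sign) of an IEEE-754 binary64 value."""
    (u,) = struct.unpack("<Q", struct.pack("<d", x))
    (y,) = struct.unpack("<d", struct.pack("<Q", u ^ (1 << bit)))
    return y

x = 0.1
for bit in (0, 20, 51, 52, 62):   # mantissa (low/mid/top) and exponent bits
    y = flip_bit(x, bit)
    print(f"bit {bit:2d}: {y!r:>25}  relative error {abs(y - x) / abs(x):.3e}")
```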

46 citations


Book ChapterDOI
20 Aug 2013
TL;DR: The λ-coordinates are presented, a new system for representing points in binary elliptic curves that improves speed records for protected/unprotected single/multi-core software implementations of random-point elliptic curve scalar multiplication at the 128-bit security level.
Abstract: In this work we present the λ-coordinates, a new system for representing points in binary elliptic curves. We also provide efficient elliptic curve operations based on the new representation and timing results of our software implementation over the field $\mathbb{F}_{2^{254}}$. As a result, we improve speed records for protected/unprotected single/multi-core software implementations of random-point elliptic curve scalar multiplication at the 128-bit security level. When implemented on a Sandy Bridge 3.4GHz Intel Xeon processor, our software is able to compute a single/multi-core unprotected scalar multiplication in 72,300 and 47,900 clock cycles, respectively; and a protected single-core scalar multiplication in 114,800 cycles. These numbers improve by around 2% on the newer Core i7 2.8GHz Ivy Bridge platform.
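For intuition, here is a sketch of the λ-coordinate map itself, λ = x + y/x, over a toy field $\mathbb{F}_{2^8}$ with the AES reduction polynomial. The paper works over $\mathbb{F}_{2^{254}}$, and its contribution is the fast curve arithmetic built on this representation, which is not reproduced here.

```python
# Toy binary field F_{2^8} with reduction polynomial x^8 + x^4 + x^3 + x + 1.
M, POLY = 8, 0x11B

def gf_mul(a, b):
    # Carry-less multiply with reduction after each shift.
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a >> M:
            a ^= POLY
        b >>= 1
    return r

def gf_inv(a):
    # a^(2^M - 2) = a^{-1} in F_{2^M}, by square-and-multiply.
    r, e = 1, (1 << M) - 2
    while e:
        if e & 1:
            r = gf_mul(r, a)
        a = gf_mul(a, a)
        e >>= 1
    return r

def to_lambda(x, y):
    # lambda-coordinates represent the affine point (x, y), x != 0,
    # as (x, lam) with lam = x + y/x (addition is XOR in characteristic 2).
    return x, x ^ gf_mul(y, gf_inv(x))

def from_lambda(x, lam):
    # Inverse map: y = x * (lam + x).
    return x, gf_mul(x, lam ^ x)

x, y = 0x53, 0xCA
assert from_lambda(*to_lambda(x, y)) == (x, y)
print(to_lambda(x, y))
```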

Journal ArticleDOI
07 Jul 2013
TL;DR: A novel low-complexity encoding algorithm for binary quasi-cyclic (QC) codes based on the Galois Fourier transform is presented, making use of the block diagonal structure of the transformed generator matrix to save a large number of Galois field multiplications.
Abstract: This paper presents a novel low-complexity encoding algorithm for binary quasi-cyclic (QC) codes based on matrix transformation. First, a message vector is encoded into a transformed codeword in the transform domain. Then, the transmitted codeword is obtained from the transformed codeword by the inverse Galois Fourier transform. Moreover, a simple and fast mapping is devised to post-process the transformed codeword such that the transmitted codeword is binary as well. The complexity of our proposed encoding algorithm is less than $ek(n-k)\log_2 e + ne(\log_2^2 e + \log_2 e) + \frac{n}{2}e\log_2^3 e$ bit operations for binary codes. This complexity is much lower than the traditional complexity $2e^2(n-k)k$. In the examples of encoding the binary (4095, 2016) and (15500, 10850) QC codes, the complexities are 12.09% and 9.49% of those of traditional encoding, respectively.

Journal ArticleDOI
01 Jul 2013 - Order
TL;DR: It is shown that triadic concepts of I, developed within formal concept analysis, provide us with optimal decompositions, and a greedy algorithm for computing suboptimal decompositions is proposed and evaluated.
Abstract: We present a new approach to factor analysis of three-way binary data, i.e. data described by a 3-dimensional binary matrix I, describing a relationship between objects, attributes, and conditions. The problem consists in finding a decomposition of I into three binary matrices, an object-factor matrix A, an attribute-factor matrix B, and a condition-factor matrix C, with the number of factors as small as possible. The scenario is similar to that of decomposition-based methods of analysis of three-way data but the difference consists in the composition operator and the constraint on A, B, and C to be binary. We show that triadic concepts of I, developed within formal concept analysis, provide us with optimal decompositions. We present an example demonstrating the usefulness of the decompositions. Since finding optimal decompositions is NP-hard, we propose a greedy algorithm for computing suboptimal decompositions and evaluate its performance.
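The composition operator at the heart of the decomposition is simple to state and check in code: I[i,j,k] holds exactly when some factor l is shared by object i, attribute j, and condition k. The tiny matrices below are arbitrary examples, not data from the paper.

```python
import numpy as np

def compose(A, B, C):
    """Boolean triadic composition: I[i,j,k] = OR_l (A[i,l] & B[j,l] & C[k,l]).
    A, B, C are binary object-, attribute-, and condition-factor matrices."""
    return np.einsum("il,jl,kl->ijk", A, B, C) > 0

# Tiny example with 2 factors: 3 objects x 2 attributes x 2 conditions.
A = np.array([[1, 0], [1, 1], [0, 1]])   # object-factor matrix
B = np.array([[1, 0], [0, 1]])           # attribute-factor matrix
C = np.array([[1, 1], [0, 1]])           # condition-factor matrix
I = compose(A, B, C)
print(I.astype(int))
```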

Proceedings ArticleDOI
25 Nov 2013
TL;DR: It is shown that positional structural-weight (PSW) coding provides two mechanisms for compensating the influence of the structural characteristics of a transformant's binary format (the average number of bits of the compressed representation per element).
Abstract: This paper investigates the peculiarities of coding the transformant bit representation, taking into account the observed regularities of binary structures, based on positional structural-weight (PSW) coding. It is shown that PSW coding provides two mechanisms for compensating the influence of the structural characteristics of a transformant's binary format (the average number of bits of the compressed representation per element). The compensation mechanisms are the formation of lengths for binary series and the construction of a system of PSW number bases for each array of binary-series lengths.

Journal ArticleDOI
TL;DR: In this paper, a moderately exponential time algorithm for the satisfiability of Boolean formulas over the full binary basis was presented, for formulas of size at most cn, which runs in time O(2^{(1-\mu_{c})n] for some constant ρ ≥ 0.
Abstract: We present a moderately exponential time algorithm for the satisfiability of Boolean formulas over the full binary basis. For formulas of size at most cn, our algorithm runs in time $${2^{(1-\mu_{c})n}}$$ for some constant μ c > 0. As a byproduct of the running time analysis of our algorithm, we obtain strong average-case hardness of affine extractors for linear-sized formulas over the full binary basis.

Journal ArticleDOI
TL;DR: A new scheme for an all-optical binary half adder is proposed, realizing a kind of all-optical computation based on spatial interference effects in a photonic crystal.
Abstract: In this paper, a new scheme for an all-optical binary half adder is proposed. The scheme realizes a kind of all-optical computation based on spatial interference effects in a photonic crystal. A line defect, obtained by reducing the radius of the host rods, is used to produce the appropriate phase difference between the input signals. The interference of the two input signals with the reference signals provides the desired logic function. Because the device uses only linear material, its operation is independent of the input power. The proposed adder is demonstrated numerically by computing the electromagnetic field distribution with the finite-difference time-domain method.

Journal ArticleDOI
TL;DR: The asymptotic merit factor of several families of binary sequences is established, thereby proving various conjectures, explaining numerical evidence presented by other authors, and bringing together within a single framework results previously appearing in scattered form.
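For reference, the merit factor itself is a standard quantity: F = n^2 / (2 Σ_{u=1}^{n-1} c_u^2), where c_u are the aperiodic autocorrelations of a ±1 sequence of length n. A short sketch, checked against the length-13 Barker sequence whose merit factor 169/12 is classical:

```python
import numpy as np

def merit_factor(seq):
    """Golay's merit factor of a +/-1 sequence: F = n^2 / (2 * sum c_u^2),
    with aperiodic autocorrelations c_u = sum_i seq[i] * seq[i+u]."""
    s = np.asarray(seq, dtype=float)
    n = len(s)
    energy = sum(np.dot(s[:n - u], s[u:]) ** 2 for u in range(1, n))
    return n * n / (2.0 * energy)

barker13 = [1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1]
print(merit_factor(barker13))    # -> 169/12 = 14.0833...
```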

Proceedings Article
05 Dec 2013
TL;DR: This paper introduces the notion of weak regret transfer bounds, where the mapping needed to transform a model from one problem to another depends on the underlying probability distribution, and shows that a good bipartite ranking model can be used to construct a good classification model (by thresholding at a suitable point).
Abstract: We investigate the relationship between three fundamental problems in machine learning: binary classification, bipartite ranking, and binary class probability estimation (CPE). It is known that a good binary CPE model can be used to obtain a good binary classification model (by thresholding at 0.5), and also to obtain a good bipartite ranking model (by using the CPE model directly as a ranking model); it is also known that a binary classification model does not necessarily yield a CPE model. However, not much is known about other directions. Formally, these relationships involve regret transfer bounds. In this paper, we introduce the notion of weak regret transfer bounds, where the mapping needed to transform a model from one problem to another depends on the underlying probability distribution (and in practice, must be estimated from data). We then show that, in this weaker sense, a good bipartite ranking model can be used to construct a good classification model (by thresholding at a suitable point), and more surprisingly, also to construct a good binary CPE model (by calibrating the scores of the ranking model).
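The "thresholding at a suitable point" step can be sketched directly: given ranking scores and labeled data, estimate the cut-off that minimizes empirical 0-1 loss. This illustrates why the mapping is distribution dependent (the threshold must be estimated from data); the estimator below is the naive empirical one, not the paper's analysis.

```python
import numpy as np

def best_threshold(scores, labels):
    """Pick the cut-off on ranking scores minimizing empirical 0-1 loss,
    the kind of distribution-dependent mapping the paper's weak regret
    transfer bounds formalize."""
    order = np.argsort(scores)
    s, y = np.asarray(scores)[order], np.asarray(labels)[order]
    # Candidate thresholds between consecutive scores, plus the extremes.
    cands = np.concatenate(([s[0] - 1], (s[:-1] + s[1:]) / 2, [s[-1] + 1]))
    errs = [np.mean((s > t).astype(int) != y) for t in cands]
    return cands[int(np.argmin(errs))]

rng = np.random.default_rng(3)
y = rng.integers(0, 2, 200)
scores = y + rng.normal(scale=0.8, size=200)   # noisy but informative scores
t = best_threshold(scores, y)
print("threshold:", t, "error:", np.mean((scores > t).astype(int) != y))
```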

Journal ArticleDOI
TL;DR: A highly parallel scheme to speed up point multiplication for high-speed hardware implementation of an ECC cryptoprocessor on Koblitz curves is proposed, and the proposed data flow of point addition has the lowest latency in comparison to the counterparts available in the literature.
Abstract: Fast and high-performance computation of finite-field arithmetic is crucial for elliptic curve cryptography (ECC) over binary extension fields. In this brief, we propose a highly parallel scheme to speed up point multiplication for high-speed hardware implementation of an ECC cryptoprocessor on Koblitz curves. We slightly modify the addition formulation in order to employ four parallel finite-field multipliers in the data flow. This reduces the latency of performing point addition and speeds up the overall point multiplication. To the best of our knowledge, the proposed data flow of point addition has the lowest latency in comparison to the counterparts available in the literature. To make the cryptoprocessor more efficient, we employ a low-complexity and efficient digit-level Gaussian normal basis multiplier to perform the lower-level finite-field multiplications. Finally, we have implemented our proposed architecture for point multiplication on an Altera Stratix II field-programmable gate array and report the timing and area results.

Book ChapterDOI
02 Sep 2013
TL;DR: A novel software multiplier for performing a polynomial multiplication of two 64-bit binary polynomials based on the VMULL instruction included in the NEON engine supported in many ARM processors is described, obtaining a fast software multiplication in the binary field \(\mathbb{F}_{2^m}\), which is up to 45% faster compared to the best known algorithm.
Abstract: Efficient algorithms for binary field operations are required in several cryptographic operations such as digital signatures over binary elliptic curves and encryption. The main performance-critical operation in these fields is the multiplication, since most processors do not support instructions to carry out a polynomial multiplication. In this paper we describe a novel software multiplier for performing a polynomial multiplication of two 64-bit binary polynomials based on the VMULL instruction included in the NEON engine supported in many ARM processors. This multiplier is then used as a building block to obtain a fast software multiplication in the binary field \(\mathbb{F}_{2^m}\), which is up to 45% faster compared to the best known algorithm. We also illustrate the performance improvement in point multiplication on binary elliptic curves using the new multiplier, improving the performance of standard NIST curves at the 128- and 256-bit levels of security. The impact on the GCM authenticated encryption scheme is also studied, with new speed records. We present timing results of our software implementation on the ARM Cortex-A8, A9 and A15 processors.
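The primitive being accelerated is carry-less (binary polynomial) multiplication. Below is a pure-Python model of the 64x64 → 128-bit operation, useful as a reference when testing a fast implementation; the paper's actual contribution, building this operation from NEON's VMULL instruction, is not reproduced here.

```python
def clmul64(a, b):
    """Carry-less (GF(2)[x]) multiplication of two 64-bit polynomials,
    returning the 128-bit product: partial products are combined with
    XOR instead of addition, so no carries cross bit positions."""
    r = 0
    while b:
        low = b & -b              # isolate the lowest set bit of b
        r ^= a * low              # a shifted left by that bit's index
        b ^= low                  # clear the bit and continue
    return r

# (x + 1) * (x + 1) = x^2 + 1 over GF(2): the middle terms cancel.
assert clmul64(0b11, 0b11) == 0b101
print(hex(clmul64(0x1234567890ABCDEF, 0xFEDCBA0987654321)))
```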

Journal ArticleDOI
TL;DR: The statistical picture of the solution space for a binary perceptron and its geometrical organization are elucidated by the entropy landscape from a reference configuration and of solution-pairs separated by a given Hamming distance in the solution space.
Abstract: The statistical picture of the solution space for a binary perceptron is studied. The binary perceptron learns a random classification of input random patterns by a set of binary synaptic weights. The learning of this network is difficult especially when the pattern (constraint) density is close to the capacity, which is supposed to be intimately related to the structure of the solution space. The geometrical organization is elucidated by the entropy landscape from a reference configuration and of solution-pairs separated by a given Hamming distance in the solution space. We evaluate the entropy at the annealed level as well as replica symmetric level and the mean field result is confirmed by the numerical simulations on single instances using the proposed message passing algorithms. From the first landscape (a random configuration as a reference), we see clearly how the solution space shrinks as more constraints are added. From the second landscape of solution-pairs, we deduce the coexistence of clustering and freezing in the solution space.

Journal ArticleDOI
TL;DR: In this paper, the design of a simple combinational optoelectronic circuit based on SiC technology, able to act either as a 4-bit binary encoder or as a binary decoder in a 4-to-16 line configuration, is presented.
Abstract: The purpose of this paper is the design of a simple combinational optoelectronic circuit based on SiC technology, able to act simultaneously as a 4-bit binary encoder or a binary decoder in a 4-to-16 line configuration. The 4-bit binary encoder takes all the data inputs, one by one, and converts them to a single encoded output. The binary decoder decodes a binary input pattern to a decimal output code. The optoelectronic circuit is realized using an a-SiC:H double pin/pin photodetector with two optical gates, front and back, activated through a steady-state violet background. Four input channels, red, green, blue, and violet, impinge on the device at different bit sequences, allowing 16 possible inputs. The device selects, through the violet background, one of the sixteen possible input logic signals and sends it to the output. Results show that the device acts as a reconfigurable active filter and allows optical switching and the development of optoelectronic logic functions. A relationship between the optical inputs and the corresponding digital output levels is established. A binary color-weighted code that takes into account the specific weights assigned to each bit position establishes the optoelectronic functions. A truth table of an encoder that performs the 16-to-1 multiplexer (MUX) function is presented.

Journal ArticleDOI
TL;DR: In this paper, a novel efficient adaptive binary arithmetic coder is proposed which is multiplication-free and requires no look-up tables; in comparison with the M-coder, it provides comparable computational complexity, a smaller memory footprint, and bitrate savings.
Abstract: In this paper we propose a novel efficient adaptive binary arithmetic coder which is multiplication-free and requires no look-up tables. To achieve this, we combine probability estimation based on a virtual sliding window with an approximation of multiplication and the use of simple operations to calculate the next approximation after the encoding of each binary symbol. We show that in comparison with the M-coder the proposed algorithm provides comparable computational complexity, a smaller memory footprint, and bitrate savings of 0.5 to 2.3% on average for the H.264/AVC standard and of 0.6 to 3.6% on average for the HEVC standard.
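The virtual-sliding-window estimator can be sketched as a shift-only update of a fixed-point probability: move the estimate toward the observed bit by 1/W of the gap, with W a power of two. The precision and window-length constants below are illustrative, not the paper's exact choices.

```python
K = 16                  # probability stored as a 16-bit fixed-point value
W_LOG = 5               # virtual sliding window of length W = 2^5 = 32

def vsw_update(s, bit):
    """One step of virtual-sliding-window probability estimation (sketch):
    s approximates P(bit = 1) scaled by 2^K, updated with shifts only."""
    target = bit << K
    return s + ((target - s) >> W_LOG)

s = 1 << (K - 1)        # start from p = 0.5
for b in [1, 1, 0, 1, 1, 1, 0, 1]:
    s = vsw_update(s, b)
print("estimated P(1) =", s / (1 << K))
```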

Journal ArticleDOI
TL;DR: A method to compute the Euler number of a binary digital image based on a codification of the contour pixels of the image's shapes is described, supported by a set of experiments that analyze several digital images to demonstrate the applicability of the procedure.
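For context, the quantity being computed can be obtained straight from its definition, Euler number = number of objects minus number of holes, as in the flood-fill sketch below; the paper's contour-codification method itself is not reproduced here.

```python
from collections import deque

def euler_number(img):
    """Euler number of a binary image as (#objects - #holes):
    8-connected foreground components are objects; 4-connected
    background components not touching the border are holes."""
    h, w = len(img), len(img[0])

    def components(value, neigh, seeds):
        seen, count = set(), 0
        for seed in seeds:
            if seed in seen or img[seed[0]][seed[1]] != value:
                continue
            count += 1
            q = deque([seed])
            seen.add(seed)
            while q:
                r, c = q.popleft()
                for dr, dc in neigh:
                    p = (r + dr, c + dc)
                    if (0 <= p[0] < h and 0 <= p[1] < w
                            and p not in seen and img[p[0]][p[1]] == value):
                        seen.add(p)
                        q.append(p)
        return count, seen

    eight = [(-1,-1),(-1,0),(-1,1),(0,-1),(0,1),(1,-1),(1,0),(1,1)]
    four = [(-1,0),(1,0),(0,-1),(0,1)]
    all_px = [(r, c) for r in range(h) for c in range(w)]
    objects, _ = components(1, eight, all_px)
    border = [(r, c) for r, c in all_px if r in (0, h-1) or c in (0, w-1)]
    _, outside = components(0, four, border)
    holes, _ = components(0, four, [p for p in all_px if p not in outside])
    return objects - holes

ring = [[0,0,0,0,0],
        [0,1,1,1,0],
        [0,1,0,1,0],
        [0,1,1,1,0],
        [0,0,0,0,0]]
print(euler_number(ring))   # one object with one hole -> 0
```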


Journal ArticleDOI
TL;DR: A method for estimating the effect on the completely standardized factor loadings is proposed, and the various steps of this threshold-free approach to investigating the structure of binary data are demonstrated.
Abstract: A major characteristic of the threshold-free approach to investigating the structure of binary data is the step from binary to continuous by computing probabilities instead of estimating thresholds. Another characteristic is the consideration of the shift from the distribution of the binary data to the normal distribution of the latent variables at the level of variances and covariances. Two ways of relating the distributions are considered: standardization and modifying the assumed model of measurement accordingly. Furthermore, there is the consideration of the change in the proportion of true variance. A method for estimating the effect on the completely standardized factor loadings is proposed. In an example the various steps of this threshold-free approach to investigating the structure of binary data are demonstrated. A major advantage of this approach is that it avoids estimating thresholds, which requires especially large samples.

Proceedings ArticleDOI
01 Nov 2013
TL;DR: It is argued that a reexamination of the decision against using ternary arithmetic might be in order, and an efficient binary encoding for balanced ternary numbers, along with the corresponding arithmetic circuits, is proposed.
Abstract: Ternary number representation and arithmetic, based on the radix-3 digit set {-1, 0, +1}, has been studied at various times in the history of digital computing. Some such studies concluded that we should abandon ternary in favor of binary computation. Others demonstrated promise and potential advantages but, for various reasons, including inertia, did not lead to widespread use. By proposing an efficient binary encoding for balanced ternary numbers, along with the corresponding arithmetic circuits, we argue that a reexamination of the decision against using ternary arithmetic might be in order.
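The underlying number system is easy to experiment with. Below is a sketch of integer conversion to and from balanced-ternary digits {-1, 0, +1}; the paper's specific binary encoding of these digits is not reproduced here.

```python
def to_balanced_ternary(n):
    """Integer -> list of balanced-ternary digits {-1, 0, +1}, LSB first.
    A remainder of 2 becomes digit -1 with a carry into the next position."""
    digits = []
    while n:
        r = n % 3
        if r == 2:
            digits.append(-1)
            n = n // 3 + 1
        else:
            digits.append(r)
            n //= 3
    return digits or [0]

def from_balanced_ternary(digits):
    return sum(d * 3**i for i, d in enumerate(digits))

for n in range(-10, 11):
    assert from_balanced_ternary(to_balanced_ternary(n)) == n
print(to_balanced_ternary(11))   # 11 = 9 + 3 - 1 -> [-1, 1, 1]
```

One attraction of the representation, visible above: negation is just digit-wise negation, so no separate sign handling is needed.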

Proceedings ArticleDOI
28 Jul 2013
TL;DR: In this article, the authors present a few numeric abstract domains to analyze C programs that exploit the binary representation of numbers in computers, for instance to perform "compute-through-overflow" on machine integers, or to directly manipulate the exponent and mantissa of floating-point numbers.
Abstract: We present a few lightweight numeric abstract domains to analyze C programs that exploit the binary representation of numbers in computers, for instance to perform "compute-through-overflow" on machine integers, or to directly manipulate the exponent and mantissa of floating-point numbers. On integers, we propose an extension of intervals with a modular component, as well as a bitfield domain. On floating-point numbers, we propose a predicate domain to match, infer, and propagate selected expression patterns. These domains are simple, efficient, and extensible. We have included them into the Astree and AstreeA static analyzers to supplement existing domains. Experimental results show that they can improve the analysis precision at a reasonable cost.
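A bitfield domain of the kind mentioned above can be sketched with two masks, bits known to be one and bits known to be zero, with exact transfer functions for bitwise operations and a join that keeps only facts common to both branches. The representation below is a common textbook encoding, not necessarily the one used in Astree.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Bitfield:
    """Abstract machine integer: `ones` are bits known to be 1 and
    `zeros` bits known to be 0; bits in neither mask are unknown."""
    ones: int
    zeros: int

    @staticmethod
    def const(c, width=8):
        mask = (1 << width) - 1
        return Bitfield(c & mask, ~c & mask)

    def __and__(self, o):
        # A result bit is known 1 iff known 1 on both sides,
        # and known 0 if known 0 on either side.
        return Bitfield(self.ones & o.ones, self.zeros | o.zeros)

    def __or__(self, o):
        return Bitfield(self.ones | o.ones, self.zeros & o.zeros)

    def join(self, o):
        # Least upper bound: keep only facts true on both branches.
        return Bitfield(self.ones & o.ones, self.zeros & o.zeros)

a, b = Bitfield.const(0b1010), Bitfield.const(0b0110)
x = a.join(b)                        # bits 2 and 3 become unknown, bit 1 stays 1
print(x & Bitfield.const(0b0010))    # masking yields the exact value 2
```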

Proceedings ArticleDOI
13 Jun 2013
TL;DR: This work proposes an alternative square root algorithm based on two approaches, digital binary input decomposition and iterative calculation, whose fixed-point digital hardware implementation is very simple, low-complexity, and resource-efficient.
Abstract: The square root is one of the basic operations in digital signal processing, computing the square root value of a given input. This operation is known to be hard to implement in digital hardware because of the complexity of its algorithm. Much research has addressed this topic, seeking the optimum design in terms of area consumption and speed. In this context, we propose an alternative square root algorithm based on two approaches: digital binary input decomposition and iterative calculation. Its fixed-point digital hardware implementation is very simple, low-complexity, and resource-efficient. It needs no correction adjustments and directly produces the accurate square root and remainder in (N/2)+1 clock cycles, where N is the wordlength of the input. The design has been synthesized for the FPGA target board Altera Cyclone II EP2C35F672C6 and produced good results in resource consumption and speed.
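A classical algorithm in the same spirit, the digit-by-digit binary square root, consumes two input bits per iteration and delivers the exact root and remainder after N/2 steps. The sketch below illustrates that structure but is not claimed to be the paper's exact algorithm.

```python
def isqrt_digit_by_digit(x, n_bits=32):
    """Classical bit-serial integer square root: bring down two bits of x
    per iteration, try to subtract the trial divisor (4*root + 1), and
    shift one result bit into the root. Returns (root, remainder) with
    x == root*root + remainder."""
    root, rem = 0, 0
    for i in range(n_bits // 2 - 1, -1, -1):
        rem = (rem << 2) | ((x >> (2 * i)) & 0b11)  # bring down two bits
        trial = (root << 2) | 1                     # candidate subtrahend
        root <<= 1
        if trial <= rem:                            # subtraction possible?
            rem -= trial
            root |= 1
    return root, rem

r, rem = isqrt_digit_by_digit(1000)
assert r * r + rem == 1000 and r == 31
print(r, rem)                                       # -> 31 39
```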