
Showing papers on "Binary number published in 2012"


Posted Content
TL;DR: It is shown how to further increase the number of representations, and a new information set decoding algorithm is proposed with running time $2^{0.0494n}$, improving on the $2^{0.0537n}$ bound of May, Meurer and Thomae.
Abstract: Decoding random linear codes is a well studied problem with many applications in complexity theory and cryptography. The security of almost all coding and LPN/LWE-based schemes relies on the assumption that it is hard to decode random linear codes. Recently, there has been progress in improving the running time of the best decoding algorithms for binary random codes. The ball collision technique of Bernstein, Lange and Peters lowered the complexity of Stern's information set decoding algorithm to $2^{0.0556n}$. Using representations this bound was improved to $2^{0.0537n}$ by May, Meurer and Thomae. We show how to further increase the number of representations and propose a new information set decoding algorithm with running time $2^{0.0494n}$.
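The representation technique can be illustrated on a toy case: a fixed weight-p error vector e has many decompositions e = e1 XOR e2 once e1 and e2 are allowed extra ones that cancel over GF(2), and the number of such representations grows rapidly with the overlap parameter. A minimal Python sketch of this counting argument (illustrative only, not the proposed decoding algorithm):

```python
from itertools import combinations

def count_representations(n, p, eps):
    """Count the ways to write a fixed weight-p binary vector e of
    length n as e = e1 XOR e2 with weight(e1) = weight(e2) = p//2 + eps.
    Ones placed outside the support of e must cancel pairwise (1+1=0)."""
    e = (1 << p) - 1                     # weight-p vector: p ones, rest zeros
    w = p // 2 + eps
    count = 0
    for ones1 in combinations(range(n), w):
        e1 = sum(1 << i for i in ones1)
        e2 = e1 ^ e
        if bin(e2).count("1") == w:
            count += 1
    return count

# More representations as eps grows; each one is a chance for two
# sub-lists to collide, which is what representation-based ISD exploits.
for eps in range(3):
    print(eps, count_representations(n=12, p=4, eps=eps))   # 6, 48, 168
```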

271 citations


Journal ArticleDOI
01 Jan 2012
TL;DR: A new version of ABC, called DisABC, is introduced, which is particularly designed for binary optimization, and uses a new differential expression, which employs a measure of dissimilarity between binary vectors in place of the vector subtraction operator typically used in the original ABC algorithm.
Abstract: The artificial bee colony (ABC) algorithm is one of the recently proposed swarm-intelligence-based algorithms for continuous optimization, so the original ABC algorithm cannot be used directly to optimize binary-structured problems. In this paper we introduce a new version of ABC, called DisABC, designed specifically for binary optimization. DisABC uses a new differential expression, which employs a measure of dissimilarity between binary vectors in place of the vector subtraction operator typically used in the original ABC algorithm. Such an expression preserves the major characteristics of the original one while respecting the structure of binary optimization problems. As in the original ABC algorithm, DisABC's differential expression works in continuous space, and its outcome is used in a two-phase heuristic to construct a complete solution in binary space. The effectiveness of DisABC is tested on the uncapacitated facility location problem (UFLP): a set of 15 benchmark instances of UFLP is taken from OR-Library and solved by the proposed algorithm. Results are compared with two other state-of-the-art binary optimization algorithms, binDE and PSO, in terms of three quality indices. The comparisons indicate that DisABC performs very well and can be regarded as a promising method for solving a wide class of binary optimization problems.
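For concreteness, a dissimilarity of the kind DisABC substitutes for vector subtraction can be sketched as follows, assuming a Jaccard-style coefficient over the binary vectors (the paper's exact definition may differ in normalization):

```python
import numpy as np

def jaccard_dissimilarity(x1, x2):
    """Dissimilarity between binary vectors: 1 - M11/(M11 + M10 + M01),
    where Mab counts positions with x1 = a and x2 = b."""
    m11 = int(np.sum((x1 == 1) & (x2 == 1)))
    m10 = int(np.sum((x1 == 1) & (x2 == 0)))
    m01 = int(np.sum((x1 == 0) & (x2 == 1)))
    if m11 + m10 + m01 == 0:                 # both vectors all-zero
        return 0.0
    return 1.0 - m11 / (m11 + m10 + m01)

a = np.array([1, 0, 1, 1, 0])
b = np.array([1, 1, 0, 1, 0])
print(jaccard_dissimilarity(a, b))   # 0.5: two shared ones, two mismatches
```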

186 citations


Proceedings Article
03 Dec 2012
TL;DR: This work introduces a novel angular quantization-based binary coding (AQBC) technique for high-dimensional non-negative data that arises in vision and text applications where counts or frequencies are used as features and proposes a method for mapping feature vectors to their smallest-angle binary vertices that scales as O(d log d).
Abstract: This paper focuses on the problem of learning binary codes for efficient retrieval of high-dimensional non-negative data that arises in vision and text applications where counts or frequencies are used as features. The similarity of such feature vectors is commonly measured using the cosine of the angle between them. In this work, we introduce a novel angular quantization-based binary coding (AQBC) technique for such data and analyze its properties. In its most basic form, AQBC works by mapping each non-negative feature vector onto the vertex of the binary hypercube with which it has the smallest angle. Even though the number of vertices (quantization landmarks) in this scheme grows exponentially with data dimensionality d, we propose a method for mapping feature vectors to their smallest-angle binary vertices that scales as O(d log d). Further, we propose a method for learning a linear transformation of the data to minimize the quantization error, and show that it results in improved binary codes. Experiments on image and text datasets show that the proposed AQBC method outperforms the state of the art.
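The smallest-angle vertex can be found without enumerating the 2^d vertices: among vertices with exactly k ones, the angle to a non-negative x is minimized by taking the k largest coordinates, so one sort plus a linear scan over k suffices. A sketch of that O(d log d) mapping (a natural realization of the step the abstract describes; details may differ from the paper's):

```python
import numpy as np

def smallest_angle_vertex(x):
    """Map a non-negative vector x to the {0,1}^d vertex minimizing the
    angle to x. For a fixed count k of ones, the best vertex takes the
    k largest coordinates, so one sort plus a scan over k suffices."""
    order = np.argsort(-x)                  # coordinates, largest first
    prefix = np.cumsum(x[order])            # dot product b.x for top-k b
    ks = np.arange(1, x.size + 1)
    cosines = prefix / np.sqrt(ks)          # ||x|| dropped: constant in k
    k = int(np.argmax(cosines)) + 1
    b = np.zeros(x.size, dtype=np.uint8)
    b[order[:k]] = 1
    return b

print(smallest_angle_vertex(np.array([0.9, 0.05, 0.7, 0.0, 0.3])))
# -> [1 0 1 0 0]: the vertex spanning the two dominant coordinates
```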

118 citations


Journal ArticleDOI
Ling Wang, Xiping Fu, Yunfei Mao, Muhammad Ilyas Menhas, Minrui Fei
TL;DR: A novel modified binary differential evolution algorithm (NMBDE) inspired by the concept of Estimation of Distribution Algorithm and DE is proposed, which can efficiently maintain diversity of population and achieve a better tradeoff between the exploration and exploitation capabilities by cooperating with the selection operator.

91 citations


Journal ArticleDOI
TL;DR: This is the first FPGA implementation of point multiplication on binary Edwards and generalized Hessian curves represented by ω-coordinates, and it is demonstrated how parallelization in higher levels can be performed by full resource utilization of computing point addition and point-doubling formulas.
Abstract: Efficient implementation of point multiplication is crucial for elliptic curve cryptographic systems. This paper presents the implementation results of an elliptic curve crypto-processor over binary fields GF($2^m$) on binary Edwards and generalized Hessian curves using Gaussian normal basis (GNB). We demonstrate how parallelization in higher levels can be performed by full resource utilization of computing point addition and point-doubling formulas for both binary Edwards and generalized Hessian curves. Then, we employ the ω-coordinate differential formulations for computing point multiplication. Using a lookup-table (LUT)-based pipelined and efficient digit-level GNB multiplier, we evaluate the LUT complexity and time-area tradeoffs of the proposed crypto-processor on an FPGA. We also compare the implementation results of point multiplication on these curves with the ones on the traditional binary generic curve. To the best of the authors' knowledge, this is the first FPGA implementation of point multiplication on binary Edwards and generalized Hessian curves represented by ω-coordinates.

79 citations


Book ChapterDOI
09 Sep 2012
TL;DR: The resulting scalar multiplier is the fastest reported implementation for generic curves over binary finite fields, and the optimized primitives lead to area requirements significantly lower than those of other high-speed implementations.
Abstract: In this paper we present an FPGA implementation of a high-speed elliptic curve scalar multiplier for binary finite fields. High speeds are achieved by boosting the operating clock frequency while at the same time reducing the number of clock cycles required to do a scalar multiplication. To increase clock frequency, the design uses optimized implementations of the underlying field primitives and a mathematically analyzed pipeline design. To reduce clock cycles, a new scheduling scheme is presented that allows overlapped processing of scalar bits. The resulting scalar multiplier is the fastest reported implementation for generic curves over binary finite fields. Additionally, the optimized primitives lead to area requirements significantly lower than those of other high-speed implementations. Detailed implementation results are furnished to support these claims.

71 citations


Proceedings Article
01 Oct 2012
TL;DR: A natural-domain SMT approach is presented that lifts the CDCL framework to operate directly over abstractions of floating-point values and significantly outperforms the state of the art on problems that check ranges on numerical variables.
Abstract: We present a bit-precise decision procedure for the theory of binary floating-point arithmetic. The core of our approach is a non-trivial generalisation of the conflict analysis algorithm used in modern SAT solvers to lattice-based abstractions. Existing complete solvers for floating-point arithmetic employ bit-vector encodings. Propositional solvers based on the Conflict Driven Clause Learning (CDCL) algorithm are then used as a backend. We present a natural-domain SMT approach that lifts the CDCL framework to operate directly over abstractions of floating-point values. We have instantiated our method inside MATHSAT5 with the floating-point interval abstraction. The result is a sound and complete procedure for floating-point arithmetic that outperforms the state-of-the-art significantly on problems that check ranges on numerical variables. Our technique is independent of the specific abstraction and can be applied to problems beyond floating-point satisfiability checking.
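For intuition, the kind of lattice element such a solver propagates can be sketched as an interval over floats with outward rounding; this is a minimal illustration, not the MathSAT5 implementation:

```python
import math

def add_intervals(lo1, hi1, lo2, hi2):
    """Sound interval addition over floats: compute the bounds, then
    step one ulp outward so every real x + y with x in [lo1, hi1] and
    y in [lo2, hi2] is still enclosed despite round-to-nearest."""
    return (math.nextafter(lo1 + lo2, -math.inf),
            math.nextafter(hi1 + hi2, math.inf))

# x in [0.1, 0.2], y in [0.3, 0.4]: the true sum set is provably inside
print(add_intervals(0.1, 0.2, 0.3, 0.4))
```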

60 citations


Proceedings ArticleDOI
05 Nov 2012
TL;DR: Experimental results show that these FSM-based implementations are more tolerant of soft errors and less costly in terms of the area-time product than conventional implementations.
Abstract: The paradigm of logical computation on stochastic bit streams has several key advantages compared to deterministic computation based on binary radix, including error-tolerance and low hardware area cost. Prior research has shown that sequential logic operating on stochastic bit streams can compute non-polynomial functions, such as the tanh function, with less energy than conventional implementations. However, the functions that can be computed in this way are quite limited. For example, high order polynomials and non-polynomial functions cannot be computed using prior approaches. This paper proposes a new finite-state machine (FSM) topology for complex arithmetic computation on stochastic bit streams. It describes a general methodology for synthesizing such FSMs. Experimental results show that these FSM-based implementations are more tolerant of soft errors and less costly in terms of the area-time product than conventional implementations.
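For background, the classic saturating-counter FSM from earlier stochastic-computing work, the kind of unit whose expressiveness this paper extends, is easy to simulate: fed a bipolar stream encoding x, its output stream approximates tanh((K/2)*x) for a K-state machine. A sketch under those assumptions:

```python
import math
import random

def stochastic_tanh(x, n_states=8, n_bits=100_000, seed=1):
    """Classic saturating-counter FSM (Brown/Card-style): given a
    bipolar stochastic stream encoding x in [-1, 1], the output stream
    approximates tanh((n_states / 2) * x)."""
    rng = random.Random(seed)
    p_one = (x + 1) / 2                      # bipolar encoding: P(bit = 1)
    state, ones = n_states // 2, 0
    for _ in range(n_bits):
        bit = rng.random() < p_one
        state = min(state + 1, n_states - 1) if bit else max(state - 1, 0)
        ones += state >= n_states // 2       # the FSM's output bit
    return 2 * ones / n_bits - 1             # decode the bipolar output

for x in (-0.5, 0.0, 0.5):
    print(x, round(stochastic_tanh(x), 3), round(math.tanh(4 * x), 3))
```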

59 citations


Patent
28 Dec 2012
TL;DR: In this article, a variable precision floating point circuit was proposed to determine the certainty of the result of a multiply-add floating point calculation in parallel with the floating-point calculation.
Abstract: Embodiments of the present invention may provide methods and circuits for energy efficient floating point multiply and/or add operations. A variable precision floating point circuit may determine the certainty of the result of a multiply-add floating point calculation in parallel with the floating-point calculation. The variable precision floating point circuit may use the certainty of the inputs in combination with information from the computation, such as, binary digits that cancel, normalization shifts, and rounding, to perform a calculation of the certainty of the result. A floating point multiplication circuit may determine whether a lowest portion of a multiplication result could affect the final result and may induce a replay of the multiplication operation when it is determined that the result could affect the final result.

48 citations


Journal ArticleDOI
TL;DR: This work develops binary mask techniques and compares them to state-of-the-art continuous gain techniques, deriving spectral magnitude minimum mean-square error binary gain estimators; the optimal binary estimators are shown to be closely related to a range of existing, heuristically developed, binary gain estimators.
Abstract: Recently, binary mask techniques have been proposed as a tool for retrieving a target speech signal from a noisy observation. A binary gain function is applied to time-frequency tiles of the noisy observation in order to suppress noise dominated and retain target dominated time-frequency regions. When implemented using discrete Fourier transform (DFT) techniques, the binary mask techniques can be seen as a special case of the broader class of DFT-based speech enhancement algorithms, for which the applied gain function is not constrained to be binary. In this context, we develop and compare binary mask techniques to state-of-the-art continuous gain techniques. We derive spectral magnitude minimum mean-square error binary gain estimators; the binary gain estimators turn out to be simple functions of the continuous gain estimators. We show that the optimal binary estimators are closely related to a range of existing, heuristically developed, binary gain estimators. The derived binary gain estimators perform better than existing binary gain estimators in simulation experiments with speech signals contaminated by several different noise sources as measured by speech quality and intelligibility measures. However, even the best binary mask method is significantly outperformed by state-of-the-art continuous gain estimators. The instrumental intelligibility results are confirmed in an intelligibility listening test.
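A minimal oracle ("ideal") binary mask baseline, thresholding the local SNR of each time-frequency tile, can be sketched as follows (non-overlapping frames for brevity; the paper's MMSE-derived binary gain estimators are more refined):

```python
import numpy as np

def ideal_binary_mask(speech, noise, frame=256, thresh_db=0.0):
    """Oracle binary gain: keep a time-frequency tile when local speech
    power exceeds local noise power by thresh_db. Non-overlapping
    Hann-windowed frames for brevity (no overlap-add)."""
    def stft(x):
        n_frames = len(x) // frame
        frames = x[:n_frames * frame].reshape(n_frames, frame)
        return np.fft.rfft(frames * np.hanning(frame), axis=1)

    S, N = stft(speech), stft(noise)
    snr_db = 10 * np.log10((np.abs(S) ** 2 + 1e-12) / (np.abs(N) ** 2 + 1e-12))
    mask = snr_db > thresh_db                    # the binary gain
    Y = stft(speech + noise)                     # noisy observation
    return np.fft.irfft(Y * mask, n=frame, axis=1).ravel()

rng = np.random.default_rng(0)
t = np.arange(16_000) / 8_000.0
clean = np.sin(2 * np.pi * 440 * t)
out = ideal_binary_mask(clean, 0.5 * rng.standard_normal(t.size))
print(out.shape)                                 # (15872,)
```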

48 citations


Journal ArticleDOI
TL;DR: This article provides the theoretical foundation for many efficient generation algorithms, as well as the first construction of fixed-weight binary de Bruijn sequences; these results will appear in subsequent articles.

Journal ArticleDOI
TL;DR: An online identification method with low storage requirements and low computational complexity is derived for the problem of parameter estimation from binary observations, and its convergence is proved provided that the input signal satisfies a strong mixing property.

Journal ArticleDOI
TL;DR: A bit allocation algorithm for biometric discretization is proposed that allocates bits dynamically to every feature element based on a Binary Reflected Gray code, drawing on a combination of bit statistics and signal-to-noise ratio to perform the feature selection and bit allocation procedures.
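The Gray-code ingredient is easy to illustrate: with a Binary Reflected Gray code, neighboring quantization intervals get labels one bit apart, so a small feature perturbation flips at most one bit, a property the dynamic bit allocation builds on. A short sketch:

```python
def gray(b):
    """Binary Reflected Gray code of integer b: adjacent integers map
    to codewords differing in exactly one bit, keeping quantization
    neighbors close in Hamming distance after discretization."""
    return b ^ (b >> 1)

print([format(gray(i), "03b") for i in range(8)])
# ['000', '001', '011', '010', '110', '111', '101', '100']
```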

Journal ArticleDOI
TL;DR: It is shown that the optimum single-cell power allocation is binary, and a simple polynomial-time algorithm for finding it is proposed, together with a reduced-complexity near-optimal heuristic; the same machinery solves power-control problems in femtocells and cognitive radio, where optimal solutions again have a binary character.
Abstract: This paper considers the optimum single cell power control maximizing the aggregate (uplink) communication rate of the cell when there are peak power constraints at mobile users, and a low-complexity data decoder (without successive decoding) at the base station. It is shown that the optimum power allocation is binary, which means that links are either “on” or “off.” By exploiting further structure of the optimum binary power allocation, a simple polynomial-time algorithm for finding the optimum transmission power allocation is proposed, together with a reduced complexity near-optimal heuristic algorithm. Sufficient conditions under which channel-state aware time division multiple access (TDMA) maximizes the aggregate communication rate are established. In a numerical study, we compare and contrast the performance achieved by the optimum binary power-control policy with other suboptimum policies and the throughput capacity achievable via successive decoding. It is observed that two dominant modes of communication arise, wideband or TDMA, and that successive decoding achieves better sum-rates only under near perfect interference cancellation efficiency. In this paper, we exploit the theory of majorization to obtain the aforementioned results. In the final part of this paper, we do so to solve power-control problems in the areas of femtocells and cognitive radio and find that, again, optimal solutions have a binary (or almost binary) character.
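Because the optimum is binary, a toy instance can be checked by enumerating on/off patterns; the brute-force sketch below treats interference as noise, matching the low-complexity decoder assumption (illustrative only, unlike the paper's polynomial-time algorithm):

```python
from itertools import product
import math

def best_binary_allocation(gains, p_max, noise=1.0):
    """Enumerate on/off patterns (each user at peak power or silent)
    and return the sum-rate-maximizing one, with interference treated
    as noise (no successive decoding)."""
    best_rate, best_on = -1.0, None
    for on in product((0, 1), repeat=len(gains)):
        rate = 0.0
        for i, g in enumerate(gains):
            if on[i]:
                interference = sum(on[j] * gains[j]
                                   for j in range(len(gains)) if j != i)
                rate += math.log2(1 + p_max * g / (noise + p_max * interference))
        if rate > best_rate:
            best_rate, best_on = rate, on
    return best_on, best_rate

# With these gains the optimum is TDMA-like: only the strongest user on.
print(best_binary_allocation(gains=[2.0, 1.5, 0.1], p_max=1.0))
```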

Proceedings ArticleDOI
01 Oct 2012
TL;DR: This paper designs a novel sampling matrix with a unique-sum property, which can be universally applied to any binary signal, and proposes a novel binary CS decoding algorithm (BCS) based on a graph and a unique-sum table, which does not need a complex optimization process.
Abstract: Model-based compressive sensing (CS) for signal-specific applications is of particular interest in sparse signal approximation. In this paper, we deal with a special class of sparse signals with binary entries. Unlike conventional CS approaches based on $\ell_1$ minimization, we model the CS process with a bipartite graph. We design a novel sampling matrix with a unique-sum property, which can be universally applied to any binary signal. Moreover, a novel binary CS decoding algorithm (BCS) based on the graph and a unique-sum table, which does not need a complex optimization process, is proposed. The proposed method is verified and compared with existing solutions through mathematical analysis and numerical simulations.

Journal ArticleDOI
TL;DR: A method is proposed that converts the set of 32 combinations ($2^5$) of the UP and DOWN positions of five fingers, read as a binary number, into decimal numbers.
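A sketch of the decoding step, under the assumption that the thumb acts as the least significant bit (the paper's bit ordering is not specified here):

```python
def fingers_to_decimal(fingers):
    """Decode five UP/DOWN finger states (1 = UP) as a 5-bit number,
    thumb as the least significant bit: 2**5 = 32 possible values."""
    assert len(fingers) == 5 and all(f in (0, 1) for f in fingers)
    return sum(bit << i for i, bit in enumerate(fingers))

print(fingers_to_decimal([1, 0, 1, 1, 0]))   # 1 + 4 + 8 = 13
```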

Book ChapterDOI
07 Oct 2012
TL;DR: In this paper, a software implementation of field and elliptic curve arithmetic in standard Koblitz curves at the 128-bit security level is presented, where the use of the Frobenius automorphism is exploited to obtain new and faster interleaved versions of the well-known τNAF scalar multiplication algorithm.
Abstract: We design a state-of-the-art software implementation of field and elliptic curve arithmetic in standard Koblitz curves at the 128-bit security level. Field arithmetic is carefully crafted by using the best formulae and implementation strategies available, and the increasingly common native support for binary field arithmetic in modern desktop computing platforms. The i-th power of the Frobenius automorphism on Koblitz curves is exploited to obtain new and faster interleaved versions of the well-known τNAF scalar multiplication algorithm. The $\tau^{\lfloor m/3 \rfloor}$ and $\tau^{\lfloor m/4 \rfloor}$ maps are employed to create analogues of the 3- and 4-dimensional GLV decompositions, and in general the $\lfloor m/s \rfloor$-th power of the Frobenius automorphism is applied as an analogue of an s-dimensional GLV decomposition. The effectiveness of these techniques is illustrated by timing the scalar multiplication operation for fixed, random and multiple points. In particular, our library is able to compute a random point scalar multiplication in just below $10^5$ clock cycles, which sets a new speed record across all curves with or without endomorphisms defined over binary or prime fields. The results of our optimized implementation suggest a trade-off between speed, compliance with the published standards and side-channel protection. Finally, we estimate the performance of curve-based cryptographic protocols instantiated using the proposed techniques and compare our results to related work.
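For reference, the baseline τNAF recoding (Solinas' algorithm) on which the interleaved versions build can be written compactly; the final line verifies a recoding numerically by substituting the complex root of tau^2 = mu*tau - 2:

```python
def tnaf(n, mu=1):
    """Solinas tau-NAF recoding of an integer scalar on a Koblitz curve
    (tau^2 = mu*tau - 2, mu = +/-1). Digits in {-1, 0, +1}, least
    significant first; the baseline the interleaved variants speed up."""
    n0, n1, digits = n, 0, []
    while n0 != 0 or n1 != 0:
        if n0 % 2:                         # odd: emit a nonzero digit
            u = 2 - ((n0 - 2 * n1) % 4)    # choose u so the next digit is 0
            n0 -= u
        else:
            u = 0
        digits.append(u)
        n0, n1 = n1 + mu * (n0 // 2), -(n0 // 2)   # divide n0 + n1*tau by tau
    return digits

d = tnaf(9)
tau = complex(1, 7 ** 0.5) / 2             # root of tau^2 = tau - 2 (mu = 1)
print(d, sum(u * tau ** i for i, u in enumerate(d)).real)   # digits, 9.0
```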

Proceedings Article
Kang Zhang, Jiyang Li, Yijing Li, Weidong Hu, Lifeng Sun, Shiqiang Yang
01 Nov 2012
TL;DR: In this paper, the cost volume is constructed through bitwise operations on a series of binary strings and then this approach is combined with traditional winner-take-all strategy, resulting in a new local stereo matching algorithm called binary stereo matching (BSM).
Abstract: In this paper, we propose a novel binary-based cost computation and aggregation approach for the stereo matching problem. The cost volume is constructed through bitwise operations on a series of binary strings. This approach is then combined with the traditional winner-take-all strategy, resulting in a new local stereo matching algorithm called binary stereo matching (BSM). Since the core algorithm of BSM is based on binary and integer computations, it has higher computational efficiency than previous methods. Experimental results on the Middlebury benchmark show that BSM has comparable performance with state-of-the-art local stereo methods in terms of both quality and speed. Furthermore, experiments on images with radiometric differences demonstrate that BSM is more robust than previous methods under such changes, which are common under real illumination.
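A sketch in the spirit of BSM, using census-transform binary strings with XOR/popcount (Hamming) matching costs and winner-take-all; the paper constructs its binary strings differently:

```python
import numpy as np

def census(img, r=2):
    """Census transform: encode each pixel's neighborhood as a binary
    string (1 = neighbor darker than center)."""
    bits = []
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy or dx:
                shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
                bits.append(shifted < img)
    return np.stack(bits, axis=-1)           # h x w x n_bits

def disparity(left, right, max_d=16):
    """Winner-take-all over Hamming (XOR + popcount) matching costs."""
    cl, cr = census(left), census(right)
    h, w, _ = cl.shape
    costs = np.full((h, w, max_d), np.inf)
    for d in range(max_d):
        costs[:, d:, d] = np.sum(cl[:, d:] != cr[:, :w - d], axis=-1)
    return np.argmin(costs, axis=-1)

rng = np.random.default_rng(0)
right = rng.random((32, 48))
left = np.roll(right, 5, axis=1)             # ground-truth disparity: 5
print(np.median(disparity(left, right)))     # ~5.0
```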

Journal ArticleDOI
TL;DR: In this article, the authors present a method to characterize statistically the parameters of a detached binary sample -binary fraction, separation distribution, and mass ratio distribution - using noisy radial-velocity data with as few as two, randomly spaced, epochs per object.
Abstract: We present a method to characterize statistically the parameters of a detached binary sample - binary fraction, separation distribution, and mass ratio distribution - using noisy radial-velocity data with as few as two, randomly spaced, epochs per object. To do this, we analyze the distribution of ΔRVmax, the maximum radial-velocity difference between any two epochs for the same object. At low values, the core of this distribution is dominated by measurement errors, but for large enough samples there is a high-velocity tail that can effectively constrain the parameters of the binary population. We discuss our approach for the case of a population of detached white-dwarf (WD) binaries with separations that are decaying via gravitational wave emission. We derive analytic expressions for the present-day distribution of separations, integrated over the star-formation history of the Galaxy, for parametrized initial WD separation distributions at the end of the common-envelope phase. We use Monte Carlo techniques to produce grids of simulated ΔRVmax distributions with specific binary population parameters, and the same sampling cadences and radial velocity errors as the observations, and we compare them to the real ΔRVmax distribution to constrain the properties of the binary population. We illustrate the sensitivity of the method to both the model and the observational parameters. In the particular case of binary white dwarfs, every model population predicts a merger rate per star, which can easily be compared to Type-Ia supernova rates. In a companion paper, we apply the method to a sample of about 4000 WDs from the Sloan Digital Sky Survey, and we find a merger rate remarkably similar to the rate of Type-Ia supernovae in Milky-Way-like galaxies.
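The per-object statistic itself is elementary: with two or more epochs, the maximum pairwise radial-velocity difference reduces to the maximum minus the minimum.

```python
import numpy as np

def delta_rv_max(epoch_rvs):
    """Maximum pairwise radial-velocity difference for one object,
    from however many (>= 2) randomly spaced epochs are available."""
    rvs = np.asarray(epoch_rvs, dtype=float)
    return rvs.max() - rvs.min()          # equals max |RV_i - RV_j|

print(delta_rv_max([12.3, -45.1, 3.0]))   # 57.4 (km/s)
```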

Journal ArticleDOI
TL;DR: In this article, the authors present a code called COCAL (Compact Object Calculator) for the computation of equilibriums and quasiequilibrium initial data sets of single or binary compact objects of all kinds.
Abstract: We present a new code, named COCAL (Compact Object CALculator), for the computation of equilibriums and quasiequilibrium initial data sets of single or binary compact objects of all kinds. In the COCAL code, those solutions are calculated on one or multiple spherical coordinate patches covering the initial hypersurface up to the asymptotic region. The numerical method used to solve field equations written in elliptic form is an adaptation of self-consistent field iterations, in which Green's integral formula is computed using multipole expansions and standard finite difference schemes. We extend the method so that it can be used on a computational domain with excised regions for a black hole and a binary companion. Green's functions are constructed for various types of boundary conditions imposed at the surface of the excised regions for black holes. The numerical methods used in COCAL are chosen to make the code simpler than any other recent initial data code, accepting second-order accuracy for the finite difference schemes. We perform convergence tests for time-symmetric single black hole data on a single coordinate patch, and binary black hole data on multiple patches. Then, we apply the code to obtain spatially conformally flat binary black hole initial data using boundary conditions, including one based on the existence of equilibrium apparent horizons.

Book ChapterDOI
01 Jan 2012
TL;DR: This chapter demonstrates the application of new instructions designed to facilitate basic, but important, parallel primitives on per-thread predicates as well as instructions for manipulating and querying bits within a word in the construction of efficient parallel algorithm primitives such as reductions, scans, and segmented scans of binary or Boolean data.
Abstract: The NVIDIA Fermi graphics processing unit (GPU) architecture introduces new instructions designed to facilitate basic, but important, parallel primitives on per-thread predicates, as well as instructions for manipulating and querying bits within a word. This chapter demonstrates the application of these instructions in the construction of efficient parallel algorithm primitives such as reductions, scans, and segmented scans of binary or Boolean data. It presents binary scan and reduction primitives that exploit new features of the Fermi architecture to accelerate important parallel algorithm building blocks. Often in applications that apply prefix sums to large sequences, bandwidth is the primary bottleneck, because data must be shared between thread blocks, requiring global memory traffic and separate kernel launches. The chapter also experiments with binary scan and reduction primitives in large binary prefix sums, stream compaction, and radix sort, with average speed-ups of 3%, 2%, and 4%, respectively. For intra-block radix sort, which is not dominated by inter-block communication through global memory, speed-ups as high as 12% were observed. For applications that are less bandwidth-bound, these primitives can improve binary prefix sums performance by up to 24%, and binary reduction performance by up to 100%. Furthermore, the intrinsics often simplify code, and their lower shared memory usage may help improve GPU occupancy in kernels that combine prefix sums with other computation.
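The warp-level binary scan these primitives enable can be modeled in a few lines: __ballot packs one predicate per thread into a 32-bit mask, and __popc of a masked prefix gives each thread its rank. A Python model of the CUDA intrinsics:

```python
def warp_binary_scan(flags):
    """Model of the Fermi-era warp primitive: __ballot packs one
    predicate per thread into a 32-bit mask; thread i then reads its
    exclusive prefix sum as __popc(ballot & ((1 << i) - 1))."""
    assert len(flags) <= 32
    ballot = sum(1 << i for i, f in enumerate(flags) if f)   # __ballot
    return [bin(ballot & ((1 << i) - 1)).count("1")          # __popc
            for i in range(len(flags))]

print(warp_binary_scan([1, 0, 1, 1, 0, 1]))   # [0, 1, 1, 2, 3, 3]
```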

Journal ArticleDOI
TL;DR: A complete classification of binary doubly even self-dual codes of length $40$ is given, and as a consequence a classification of binary extremal self-dual codes of length $38$ is obtained.
Abstract: A complete classification of binary doubly even self-dual codes of length $40$ is given. As a consequence, a classification of binary extremal self-dual codes of length $38$ is also given.

Journal ArticleDOI
TL;DR: It is shown that binary-space Voronoi diagrams have thick boundaries, meaning that there are many points that lie at the same distance from two random points, which violates the implicit assumption made by most ANN algorithms that points can be neatly assigned to clusters centered around a set of cluster centers.
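The thick-boundary phenomenon is easy to verify empirically: for two centers differing in h coordinates, the difference of Hamming distances is a sum of h independent signs, so a point is exactly equidistant with probability C(h, h/2)/2^h, roughly sqrt(2/(pi*h)). A quick check (illustrative; not the paper's experiments):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 64, 100_000
c1 = rng.integers(0, 2, d)
c2 = c1.copy()
flip = rng.choice(d, 32, replace=False)   # centers differ in h = 32 coords
c2[flip] ^= 1

pts = rng.integers(0, 2, (n, d))
d1 = np.sum(pts != c1, axis=1)            # Hamming distance to each center
d2 = np.sum(pts != c2, axis=1)
print(np.mean(d1 == d2))                  # ~0.14: a thick boundary of ties
```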

Journal ArticleDOI
TL;DR: A new absolute position measurement method is presented using a single-track binary code in which the absolute position code is encoded by changing the phase of one binary state representation, and can be decoded efficiently using a structural property of the binary code.
Abstract: We present a new absolute position measurement method using a single-track binary code in which the absolute position code is encoded by changing the phase of one binary state representation. It can be decoded efficiently using a structural property of the binary code, and sub-division is possible by detecting the relative positions of the binary state representation used for the absolute position encoding. Therefore, the absolute position encoding does not interfere with the sub-division process, and so any pseudo-random sequence can be used as the absolute position code. Because the proposed method does not require an additional sensing part for the sub-division, it can be realized with a simple configuration and efficient data processing. To verify and evaluate the proposed method, an absolute position measurement system was set up using a binary code scale, a microscopic imaging system, and a CCD camera. In comparisons with a laser interferometer, the measurement system shows a resolution of less than 50 nm and a nonlinearity error of less than ±60 nm after compensation.


Patent
06 Nov 2012
TL;DR: In this article, a video encoder is configured to encode a binary string indicating a position of a last significant coefficient within a video block, which is then decoded by a video decoder.
Abstract: A video encoder is configured to encode a binary string indicating a position of a last significant coefficient within a video block. A video decoder is configured to decode the encoded binary string. The string may be coded using context adaptive binary arithmetic coding (CABAC). Binary indices of the binary string may be assigned a context. The context may be determined according to a mapping function. A context may be assigned to one or more binary indices where each index is associated with a different block size. The last binary index of a 16x16 video block may share a context with the last binary index of a 32x32 video block.

Proceedings ArticleDOI
01 Jan 2012
TL;DR: It is shown that under certain restrictions the increase of complexity when using binary encoding can be avoided; the proofs show that binary encoding adds more expressiveness to bit-vector logics, e.g. it makes fixed-size bit-vector logic, even without uninterpreted functions or quantification, NExpTime-complete.
Abstract: Bit-precise reasoning is important for many practical applications of Satisfiability Modulo Theories (SMT). In recent years efficient approaches for solving fixed-size bit-vector formulas have been developed. From the theoretical point of view, only few results on the complexity of fixed-size bit-vector logics have been published. In this paper we show that some of these results only hold if unary encoding of the bit-width of bit-vectors is used. We then consider fixed-size bit-vector logics with binary encoded bit-width and establish new complexity results. Our proofs show that binary encoding adds more expressiveness to bit-vector logics, e.g. it makes fixed-size bit-vector logic, even without uninterpreted functions or quantification, NExpTime-complete. We also show that under certain restrictions the increase of complexity when using binary encoding can be avoided.

Journal ArticleDOI
TL;DR: The approach represents functions of discrete variables, and their products, using logarithmic numbers of binary variables, and provides insight into, improves upon, and subsumes related linearization methods for products of functions of discrete variables.
Abstract: This paper presents an approach for representing functions of discrete variables, and their products, using logarithmic numbers of binary variables. Given a univariate function whose domain consists of n distinct values, it begins by employing a base-2 expansion to express the function in terms of $\lceil \log_2 n \rceil$ binary and n continuous variables, using linear restrictions to equate the functional values with the possible binary realizations. The representation of the product of such a function with a nonnegative variable is handled via an appropriate scaling of the linear restrictions. Products of m functions are treated in an inductive manner from i = 2 to m, where each step i uses such a scaling to express the product of function i and a nonnegative variable denoting a translated version of the product of functions 1 through i-1 as a newly defined variable. The resulting representations, both in terms of one function and many, are important for reformulating general discrete variables as binary, and also for linearizing mixed-integer generalized geometric and discrete nonlinear programs, where it is desired to economize on the number of binary variables. The approach provides insight into, improves upon, and subsumes related linearization methods for products of functions of discrete variables.
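A sketch of the base-2 expansion on a five-value example, checking that each admissible binary assignment, together with the linking constraints, pins down exactly one functional value (simplified; the scaling construction for products is not shown):

```python
import math
from itertools import product

def demo_log_encoding(values):
    """Base-2 expansion: a variable over n values uses L = ceil(log2 n)
    binaries y and n continuous weights lambda >= 0 with sum(lambda) = 1,
    linked by  sum_{j : bit k of j is 1} lambda_j = y_k  for each bit k.
    For every admissible y we exhibit the unique 0/1 lambda satisfying
    the links, which selects exactly one functional value."""
    n = len(values)
    L = math.ceil(math.log2(n))
    for y in product((0, 1), repeat=L):
        j = sum(b << k for k, b in enumerate(y))   # index encoded by y
        if j >= n:
            continue                               # ruled out by extra cuts
        lam = [1.0 if i == j else 0.0 for i in range(n)]
        for k in range(L):                         # verify linking rows
            assert sum(lam[i] for i in range(n) if (i >> k) & 1) == y[k]
        print(y, "->", values[j])

demo_log_encoding([3.0, 1.5, 7.2, 4.4, 0.9])
```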

Proceedings ArticleDOI
01 Sep 2012
TL;DR: A novel method for computing the erasure probability in the bit subchannels induced by the polarization kernel is proposed and the codes obtained using the proposed method outperform those based on the Arikan kernel.
Abstract: The problem of construction of binary polar codes with high-dimensional kernels is considered. A novel method for computing the erasure probability in the bit subchannels induced by the polarization kernel is proposed. The codes obtained using the proposed method outperform those based on the Arikan kernel.
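For the 2x2 Arikan kernel on an erasure channel, the bit-subchannel erasure probabilities follow the classic recursion z -> (2z - z^2, z^2); the paper's method performs the analogous computation for larger kernels. The baseline recursion as a sketch:

```python
def bec_bit_channels(z, levels):
    """Erasure probabilities of the 2**levels bit subchannels of a
    BEC(z) under the 2x2 Arikan kernel: z- = 2z - z^2, z+ = z^2."""
    probs = [z]
    for _ in range(levels):
        probs = [p for q in probs for p in (2 * q - q * q, q * q)]
    return probs

print([round(p, 4) for p in bec_bit_channels(0.5, 3)])
# channels polarize toward erasure probability 0 (good) or 1 (bad)
```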

Journal ArticleDOI
TL;DR: The vector representations obtained can be used to efficiently process large arrays of input multidimensional vectors in applications related to searching, classification, associative memory, etc.
Abstract: Properties of randomized binary vector representations with adjustable sparseness are investigated. Such representations are formed from input vectors by projecting them using a random matrix with ternary elements {-1, 0, +1}. The accuracy of estimating measures of similarity-difference between initial vectors composed of floating-point numbers and output binary vectors is analyzed. The vector representations obtained can be used to efficiently process large arrays of input multidimensional vectors in applications related to searching, classification, associative memory, etc.
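A sketch of the construction under simple assumptions (sign binarization and a fixed density of nonzeros in the ternary matrix; the paper additionally tunes the sparseness of the output codes):

```python
import numpy as np

rng = np.random.default_rng(0)

def ternary_binary_codes(X, out_dim, density=1/3):
    """Project with a random {-1, 0, +1} matrix (P(+1) = P(-1) =
    density/2), then binarize by sign. Hamming distance between the
    output codes tracks the angle between the input float vectors."""
    R = rng.choice([-1, 0, 1], size=(X.shape[1], out_dim),
                   p=[density / 2, 1 - density, density / 2])
    return (X @ R > 0).astype(np.uint8)

X = rng.standard_normal((2, 64))
B = ternary_binary_codes(X, out_dim=512)
cos = X[0] @ X[1] / (np.linalg.norm(X[0]) * np.linalg.norm(X[1]))
ham = np.mean(B[0] != B[1])
print(f"angle/pi = {np.arccos(cos) / np.pi:.2f}, Hamming = {ham:.2f}")  # close
```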