scispace - formally typeset
Topic

Binary number

About: Binary number is a research topic. Over the lifetime, 7616 publications have been published within this topic, receiving 112299 citations. The topic is also known as: base 2.


Papers
Proceedings ArticleDOI
07 Jun 2015
TL;DR: A deep neural network is developed to seek multiple hierarchical non-linear transformations to learn compact binary codes for large scale visual search, and experiments show the superiority of the proposed approach over state-of-the-art methods.
Abstract: In this paper, we propose a new deep hashing (DH) approach to learn compact binary codes for large scale visual search. Unlike most existing binary codes learning methods which seek a single linear projection to map each sample into a binary vector, we develop a deep neural network to seek multiple hierarchical non-linear transformations to learn these binary codes, so that the nonlinear relationship of samples can be well exploited. Our model is learned under three constraints at the top layer of the deep network: 1) the loss between the original real-valued feature descriptor and the learned binary vector is minimized, 2) the binary codes distribute evenly on each bit, and 3) different bits are as independent as possible. To further improve the discriminative power of the learned binary codes, we extend DH into supervised DH (SDH) by including one discriminative term into the objective function of DH which simultaneously maximizes the inter-class variations and minimizes the intra-class variations of the learned binary codes. Experimental results show the superiority of the proposed approach over state-of-the-art methods.
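The three top-layer constraints can be made concrete with a small NumPy sketch. The random linear projection below stands in for the paper's deep network, and all names are illustrative, not from the paper:

```python
import numpy as np

# Illustrative sketch of DH's three top-layer constraints. A random
# linear projection stands in for the deep network; names are ours.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))       # 100 real-valued feature descriptors
W = rng.normal(size=(8, 4))         # hypothetical projection to 4 bits
H = np.tanh(X @ W)                  # relaxed (real-valued) codes
B = np.sign(H)                      # binary codes in {-1, +1}

quant_loss = np.mean((B - H) ** 2)            # 1) code/descriptor loss, to be minimized
balance = np.mean(np.abs(B.mean(axis=0)))     # 2) even bit distribution (near 0 is good)
corr = B.T @ B / len(B)
independence = np.linalg.norm(corr - np.eye(4))  # 3) bit independence (near 0 is good)

print(quant_loss, balance, independence)
```

A training procedure would minimize a weighted sum of these three terms over the network parameters; here they are only evaluated once to show what each constraint measures.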

569 citations

Journal ArticleDOI
TL;DR: The primary contribution of this paper is in introducing several state machine-based computational elements for performing sigmoid nonlinearity mappings, linear gain, and exponentiation functions, and describing an efficient method for the generation of, and conversion between, stochastic and deterministic binary signals.
Abstract: This paper examines a number of stochastic computational elements employed in artificial neural networks, several of which are introduced for the first time, together with an analysis of their operation. We briefly include multiplication, squaring, addition, subtraction, and division circuits in both unipolar and bipolar formats, the principles of which are well-known, at least for unipolar signals. We have introduced several modifications to improve the speed of the division operation. The primary contribution of this paper, however, is in introducing several state machine-based computational elements for performing sigmoid nonlinearity mappings, linear gain, and exponentiation functions. We also describe an efficient method for the generation of, and conversion between, stochastic and deterministic binary signals. The validity of the present approach is demonstrated in a companion paper through a sample application, the recognition of noisy optical characters using soft competitive learning. Network generalization capabilities of the stochastic network maintain a squared error within 10 percent of that of a floating-point implementation for a wide range of noise levels. While the accuracy of stochastic computation may not compare favorably with more conventional binary radix-based computation, the low circuit area, power, and speed characteristics may, in certain situations, make them attractive for VLSI implementation of artificial neural networks.
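The unipolar multiplication the abstract calls well-known can be sketched as a short simulation (not the paper's circuits): a value p in [0, 1] is encoded as a random bitstream whose fraction of 1s is p, and ANDing two independent streams yields a stream encoding the product.

```python
import numpy as np

# Unipolar stochastic multiplication: encode p and q as Bernoulli
# bitstreams, AND them, and decode the product as the fraction of 1s.
# Accuracy improves roughly as 1/sqrt(N); a single AND gate does the work.
rng = np.random.default_rng(1)
N = 100_000                   # stream length
p, q = 0.8, 0.5
sp = rng.random(N) < p        # stochastic encoding of p
sq = rng.random(N) < q        # stochastic encoding of q
prod = (sp & sq).mean()       # decode: close to p * q = 0.4
print(prod)
```

The trade-off the abstract describes is visible here: the hardware is a single gate per operation, but precision costs exponentially long bitstreams compared with binary radix arithmetic.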

497 citations

Journal ArticleDOI
John N. Mitchell
TL;DR: A method of computer multiplication and division is proposed which uses binary logarithms and an error analysis is given and a means of reducing the error for the multiply operation is shown.
Abstract: A method of computer multiplication and division is proposed which uses binary logarithms. The logarithm of a binary number may be determined approximately from the number itself by simple shifting and counting. A simple add or subtract and shift operation is all that is required to multiply or divide. Since the logarithms used are approximate there can be errors in the result. An error analysis is given and a means of reducing the error for the multiply operation is shown.
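Mitchell's shift-and-count approximation can be sketched in a few lines. For n = 2^k · (1 + f) with 0 ≤ f < 1, log2(n) is approximated by k + f, so multiplication reduces to one addition of approximate logarithms. The fixed-point width and helper names below are illustrative, not from the paper:

```python
# Mitchell's binary-logarithm approximation: the characteristic k is the
# position of the leading 1 bit, and the bits after it serve directly as
# the fraction f. Only shifts and adds are needed.

def mitchell_log2(n: int, frac_bits: int = 16) -> int:
    """Approximate log2(n) as fixed point with frac_bits fraction bits."""
    k = n.bit_length() - 1            # characteristic: position of leading 1
    f = n - (1 << k)                  # remaining bits are the fraction
    if k >= frac_bits:
        f >>= (k - frac_bits)         # scale fraction to frac_bits
    else:
        f <<= (frac_bits - k)
    return (k << frac_bits) + f

def mitchell_antilog2(x: int, frac_bits: int = 16) -> int:
    """Inverse approximation: 2**(k + f) ~= 2**k * (1 + f)."""
    k = x >> frac_bits
    f = x & ((1 << frac_bits) - 1)
    if k >= frac_bits:
        return (1 << k) + (f << (k - frac_bits))
    return (1 << k) + (f >> (frac_bits - k))

def mitchell_mul(a: int, b: int) -> int:
    """Approximate a*b by adding approximate logs (never overestimates)."""
    return mitchell_antilog2(mitchell_log2(a) + mitchell_log2(b))

exact, approx = 100 * 200, mitchell_mul(100, 200)
print(exact, approx)
```

Because log2(1 + f) ≥ f, the approximation always underestimates; the worst-case relative error for multiplication is about 11.1%, which the paper's error-reduction scheme addresses. Products of powers of two are exact, since their fractions are zero.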

488 citations

Journal ArticleDOI
TL;DR: The construction of a switching network capable of realizing any of the n! permutations of its n input terminals onto its n output terminals is described, and an algorithm is given for the setting of the binary cells in the network according to any specified permutation.
Abstract: In this paper the construction of a switching network capable of realizing any of the n! permutations of its n input terminals onto its n output terminals is described. The building blocks for this network are binary cells capable of permuting their two input terminals to their two output terminals. The number of cells used by the network is ⌈n log2 n⌉ − n + 1 = Σ_{k=1}^{n} ⌈log2 k⌉. It could be argued that for such a network this number of cells is a lower bound, by noting that binary decision trees in the network can resolve individual terminal assignments only, and not the partitioning of the permutation set itself, which requires only ⌈log2 n!⌉ = ⌈Σ_{k=1}^{n} log2 k⌉ binary decisions. An algorithm is also given for the setting of the binary cells in the network according to any specified permutation.
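The stated cell count can be checked numerically; a minimal sketch (the helper name is ours, not the paper's):

```python
import math

# Cell count of the permutation network: sum of ceil(log2 k) for
# k = 1..n, which the abstract equates with ceil(n*log2(n)) - n + 1.
def cells(n: int) -> int:
    return sum(math.ceil(math.log2(k)) for k in range(1, n + 1))

for n in range(2, 17):
    closed_form = math.ceil(n * math.log2(n)) - n + 1
    print(n, cells(n), closed_form)   # the two counts agree
```

For comparison, the information-theoretic lower bound ⌈log2 n!⌉ moves the ceiling outside the sum, so it never exceeds Σ ⌈log2 k⌉; the argument in the abstract is that the network's decision structure cannot exploit that gap.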

488 citations

Journal ArticleDOI
01 Nov 2003
TL;DR: This paper considers computational systems whose material realizations utilize electrons and energy barriers to represent and manipulate their binary representations of state.
Abstract: In this paper we consider device scaling and speed limitations on irreversible von Neumann computing that are derived from the requirement of "least energy computation." We consider computational systems whose material realizations utilize electrons and energy barriers to represent and manipulate their binary representations of state.
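As background for the "least energy computation" requirement, a quick calculation of the standard thermodynamic floor for irreversible binary switching (Landauer's k_B·T·ln 2 per bit, a textbook figure rather than the paper's own model):

```python
import math

# Landauer's bound: an irreversible bit operation dissipates at least
# k_B * T * ln(2). This sets the minimum useful energy-barrier height
# scale for the electron-based binary devices the abstract describes.
k_B = 1.380649e-23            # Boltzmann constant, J/K (exact, SI 2019)
T = 300.0                     # room temperature, K
E_min = k_B * T * math.log(2)
print(E_min)                  # about 2.87e-21 J per bit at 300 K
```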

483 citations


Network Information
Related Topics (5)
Matrix (mathematics)
105.5K papers, 1.9M citations
79% related
Cluster analysis
146.5K papers, 2.9M citations
76% related
Hydrogen
132.2K papers, 2.5M citations
75% related
Magnetic field
167.5K papers, 2.3M citations
73% related
Monte Carlo method
95.9K papers, 2.1M citations
72% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2024    1
2023    2,154
2022    4,541
2021    383
2020    313
2019    275