
Showing papers on "Binary number published in 2015"


Journal ArticleDOI
TL;DR: This work describes the LALInference software library for Bayesian parameter estimation of compact binary signals, which builds on several previous methods to provide a well-tested toolkit which has already been used for several studies.
Abstract: The Advanced LIGO and Advanced Virgo gravitational-wave (GW) detectors will begin operation in the coming years, with compact binary coalescence events a likely source for the first detections. The gravitational waveforms emitted directly encode information about the sources, including the masses and spins of the compact objects. Recovering the physical parameters of the sources from the GW observations is a key analysis task. This work describes the LALInference software library for Bayesian parameter estimation of compact binary signals, which builds on several previous methods to provide a well-tested toolkit which has already been used for several studies. We show that our implementation is able to correctly recover the parameters of compact binary signals from simulated data from the advanced GW detectors. We demonstrate this with a detailed comparison on three compact binary systems: a binary neutron star, a neutron star–black hole binary and a binary black hole, where we show a cross comparison of results obtained using three independent sampling algorithms. These systems were analyzed with nonspinning, aligned spin and generic spin configurations respectively, showing that consistent results can be obtained even with the full 15-dimensional parameter space of the generic spin configurations. We also demonstrate statistically that the Bayesian credible intervals we recover correspond to frequentist confidence intervals under correct prior assumptions by analyzing a set of 100 signals drawn from the prior. We discuss the computational cost of these algorithms, and describe the general and problem-specific sampling techniques we have used to improve the efficiency of sampling the compact binary coalescence parameter space.
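
The core task described here, recovering source parameters by sampling a posterior, can be illustrated with a toy one-parameter analogue. A minimal random-walk Metropolis sketch (not LALInference itself; the sinusoidal signal model, noise level, and proposal width are assumptions) that recovers the amplitude of a simulated signal:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 200)
data = 3.0 * np.sin(2 * np.pi * 5 * t) + rng.normal(0, 1, t.size)  # signal + unit noise

def log_posterior(a):
    # Flat prior, Gaussian likelihood with known unit noise variance.
    return -0.5 * np.sum((data - a * np.sin(2 * np.pi * 5 * t)) ** 2)

a, chain = 0.0, []
for _ in range(5000):                          # random-walk Metropolis sampler
    prop = a + rng.normal(0, 0.2)
    if np.log(rng.random()) < log_posterior(prop) - log_posterior(a):
        a = prop
    chain.append(a)

# Posterior mean should be near the true amplitude 3, with a credible width ~0.1.
print(np.mean(chain[1000:]), np.std(chain[1000:]))
```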

781 citations


Proceedings ArticleDOI
07 Jun 2015
TL;DR: A deep neural network is developed to seek multiple hierarchical non-linear transformations that learn compact binary codes for large-scale visual search, showing the superiority of the proposed approach over the state of the art.
Abstract: In this paper, we propose a new deep hashing (DH) approach to learn compact binary codes for large scale visual search. Unlike most existing binary codes learning methods which seek a single linear projection to map each sample into a binary vector, we develop a deep neural network to seek multiple hierarchical non-linear transformations to learn these binary codes, so that the nonlinear relationship of samples can be well exploited. Our model is learned under three constraints at the top layer of the deep network: 1) the loss between the original real-valued feature descriptor and the learned binary vector is minimized, 2) the binary codes distribute evenly on each bit, and 3) different bits are as independent as possible. To further improve the discriminative power of the learned binary codes, we extend DH into supervised DH (SDH) by including one discriminative term into the objective function of DH which simultaneously maximizes the inter-class variations and minimizes the intra-class variations of the learned binary codes. Experimental results show the superiority of the proposed approach over state-of-the-art methods.
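
The three top-layer constraints map directly onto simple loss terms. A minimal numpy sketch of how such terms could be evaluated on a batch of activations (the toy data and 32-bit code length are assumptions, not the authors' network):

```python
import numpy as np

rng = np.random.default_rng(0)
H = rng.normal(size=(1000, 32))      # real-valued top-layer activations (toy data)
B = np.sign(H)                       # binary codes in {-1, +1}

quant_loss = np.mean((H - B) ** 2)                       # 1) real-valued vs. binary loss
balance_loss = np.mean(np.mean(B, axis=0) ** 2)          # 2) each bit evenly distributed
corr = (B.T @ B) / B.shape[0]                            # empirical bit-correlation matrix
indep_loss = np.mean((corr - np.eye(32)) ** 2)           # 3) bits as independent as possible

print(quant_loss, balance_loss, indep_loss)
```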

569 citations


Journal ArticleDOI
TL;DR: A compact binary face descriptor (CBFD) feature learning method for face representation and recognition that reduces the modality gap of heterogeneous faces at the feature level to make the method applicable to heterogeneous face recognition.
Abstract: Binary feature descriptors such as local binary patterns (LBP) and its variations have been widely used in many face recognition systems due to their excellent robustness and strong discriminative power. However, most existing binary face descriptors are hand-crafted, which require strong prior knowledge to engineer them by hand. In this paper, we propose a compact binary face descriptor (CBFD) feature learning method for face representation and recognition. Given each face image, we first extract pixel difference vectors (PDVs) in local patches by computing the difference between each pixel and its neighboring pixels. Then, we learn a feature mapping to project these pixel difference vectors into low-dimensional binary vectors in an unsupervised manner, where 1) the variance of all binary codes in the training set is maximized, 2) the loss between the original real-valued codes and the learned binary codes is minimized, and 3) binary codes evenly distribute at each learned bin, so that the redundancy information in PDVs is removed and compact binary codes are obtained. Lastly, we cluster and pool these binary codes into a histogram feature as the final representation for each face image. Moreover, we propose a coupled CBFD (C-CBFD) method by reducing the modality gap of heterogeneous faces at the feature level to make our method applicable to heterogeneous face recognition. Extensive experimental results on five widely used face datasets show that our methods outperform state-of-the-art face descriptors.
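
As a rough illustration of the first step, pixel difference vectors can be computed by subtracting each pixel from its eight neighbours. A minimal sketch (the toy image is an assumption; the learned projection, binarization, and histogram pooling stages are omitted):

```python
import numpy as np

def pixel_difference_vectors(img):
    """For each interior pixel, the 8 differences to its neighbours form one PDV."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
    h, w = img.shape
    pdvs = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            pdvs.append([int(img[y + dy, x + dx]) - int(img[y, x]) for dy, dx in offsets])
    return np.array(pdvs)

img = np.random.default_rng(1).integers(0, 256, size=(16, 16))
print(pixel_difference_vectors(img).shape)   # (196, 8): one 8-dim PDV per interior pixel
```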

371 citations



Journal ArticleDOI
TL;DR: In this paper, the properties of primordial binary populations in Galactic globular clusters were constrained using the MOCCA Monte Carlo code for cluster evolution, and the results were compared to the observations of Milone et al. using the photometric binary populations as proxies for the true underlying distributions, in order to test the hypothesis that the data are consistent with a universal initial binary fraction near unity and the binary orbital parameter distributions of Kroupa.
Abstract: In this paper, we constrain the properties of primordial binary populations in Galactic globular clusters. Using the MOCCA Monte Carlo code for cluster evolution, our simulations cover three decades in present-day total cluster mass. Our results are compared to the observations of Milone et al. (2012) using the photometric binary populations as proxies for the true underlying distributions, in order to test the hypothesis that the data are consistent with a universal initial binary fraction near unity and the binary orbital parameter distributions of Kroupa (1995). With the exception of a few possible outliers, we find that the data are to first order consistent with the universality hypothesis. Specifically, the present-day binary fractions inside the half-mass radius can be reproduced assuming either high initial binary fractions near unity with a dominant soft binary component as in the Kroupa distribution combined with high initial densities (10^4-10^6 M⊙ pc^−3), or low initial binary fractions (~5-10%) with a dominant hard binary component combined with moderate initial densities near their present-day values (10^2-10^3 M⊙ pc^−3). This apparent degeneracy can potentially be broken using the binary fractions outside the half-mass radius: only high initial binary fractions with a significant soft component combined with high initial densities can contribute to reproducing the observed anti-correlation between the binary fractions outside the half-mass radius and the total cluster mass. We further illustrate, using the simulated present-day binary orbital parameter distributions and the technique first introduced in Leigh et al. (2012), that the relative fractions of hard and soft binaries can be used to further constrain both the initial cluster density and the initial mass-density relation. Our results favour an initial mass-density relation of

62 citations


Journal ArticleDOI
TL;DR: This paper proposes efficient and high-speed architectures to implement point multiplication on binary Edwards and generalized Hessian curves and employs a newly proposed digit-level hybrid-double Gaussian normal basis multiplier to reduce the latency of point multiplication.
Abstract: High-performance and fast implementation of point multiplication is crucial for elliptic curve cryptographic systems. Recently, considerable research has investigated the implementation of point multiplication on different curves over binary extension fields. In this paper, we propose efficient and high-speed architectures to implement point multiplication on binary Edwards and generalized Hessian curves. We perform a data-flow analysis and investigate the maximum number of parallel multipliers to be employed to reduce the latency of point multiplication on these curves. Then, we modify the addition and doubling formulations and employ a newly proposed digit-level hybrid-double Gaussian normal basis multiplier to remove the data dependencies and hence reduce the latency of point multiplication. To the best of our knowledge, this is the first time that the hybrid-double multiplication technique has been employed to reduce the computation time of point multiplication. Moreover, we have implemented our proposed architectures for point multiplication on FPGA and obtained timing and area results. Our results indicate that the proposed scheme is a step forward in improving the performance of point multiplication on binary Edwards and generalized Hessian curves.

46 citations


Proceedings ArticleDOI
14 Jun 2015
TL;DR: This paper presents a construction, based on various anticodes, for several families of optimal binary locally repairable codes (LRCs) with small locality that attain the Cadambe-Mazumdar bound.
Abstract: This paper presents a construction for several families of optimal binary locally repairable codes (LRCs) with small locality (2 and 3). This construction is based on various anticodes. It provides binary LRCs which attain the Cadambe-Mazumdar bound. Moreover, most of these codes are optimal with respect to the Griesmer bound.

45 citations


Journal ArticleDOI
TL;DR: Building on John Tromp's simple way of encoding lambda calculus terms as binary sequences, the authors derive results from the generating functions of term counts, especially that the number of terms of size n grows roughly like 1.963447954…^n.
Abstract: In a paper entitled "Binary lambda calculus and combinatory logic", John Tromp presents a simple way of encoding lambda calculus terms as binary sequences. In what follows, we study the number of binary strings of a given size that represent lambda terms and derive results from their generating functions, especially that the number of terms of size n grows roughly like 1.963447954…^n. In the second part we use this approach to generate random lambda terms using Boltzmann samplers.
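
The counting itself follows from Tromp's encoding (de Bruijn index i is written as i ones followed by a zero, abstraction adds the prefix 00, application the prefix 01), which yields a simple recurrence on string size. A short sketch that reproduces the known counts and the quoted growth rate:

```python
# Counting binary strings that encode lambda terms in Tromp's encoding:
#   de Bruijn index i (i >= 1) -> '1'*i + '0'   (size i + 1)
#   abstraction  lambda M      -> '00' + M      (size |M| + 2)
#   application  M N           -> '01' + M + N  (size |M| + |N| + 2)
def count_terms(nmax):
    T = [0] * (nmax + 1)
    for n in range(2, nmax + 1):
        # one variable of size n, plus abstractions, plus applications
        T[n] = 1 + T[n - 2] + sum(T[j] * T[n - 2 - j] for j in range(n - 1))
    return T

T = count_terms(60)
print(T[2:12])        # 1, 1, 2, 2, 4, 5, 10, 14, 27, 41
print(T[60] / T[59])  # consecutive ratio approaches ~1.9634...
```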

36 citations


Journal ArticleDOI
TL;DR: In this article, the authors obtained binary fractions for 35 globular clusters that were imaged in the F606W and F814W filters with the Advanced Camera for Surveys on the Hubble Space Telescope.
Abstract: Binary stars are predicted to have an important role in the evolution of globular clusters, so we obtained binary fractions for 35 globular clusters that were imaged in the F606W and F814W filters with the Advanced Camera for Surveys on the Hubble Space Telescope. When compared to the values of prior efforts, we find significant discrepancies, despite each group correcting for contamination effects and having performed the appropriate reliability tests. The most reliable binary fractions are obtained when restricting the binary fraction to . Our analysis indicates that the range of the binary fractions is nearly an order of magnitude for the lowest dynamical ages, suggesting that there is a broad distribution in the binary fraction at globular cluster formation. Dynamical effects also appear to decrease the core binary fractions by a factor of two over a Hubble time, but this is a weak relationship. We confirm a correlation from previous work that the binary fraction within the core radius decreases with cluster age, indicating that younger clusters formed with higher binary fractions. The strong radial gradient in the binary fraction with cluster radius appears to be a consequence of dynamical interactions. It is often not present in dynamically young clusters, but is nearly always present in dynamically old clusters.

35 citations


Journal ArticleDOI
TL;DR: The AIFV code can attain a better average compression rate than the Huffman code at the expense of a small decoding delay and a slightly larger memory size to store multiple code trees.
Abstract: We propose almost instantaneous fixed-to-variable length (AIFV) codes such that two (resp. $K-1$) code trees are used, if code symbols are binary (resp. $K$-ary for $K\geq 3$), and source symbols are assigned to incomplete internal nodes in addition to leaves. Although the AIFV codes are not instantaneous codes, they are devised such that the decoding delay is at most two bits (resp. one code symbol) in the case of a binary (resp. $K$-ary) code alphabet. The AIFV code can attain a better average compression rate than the Huffman code at the expense of a small decoding delay and a slightly larger memory size to store multiple code trees. We also show for the binary and ternary AIFV codes that the optimal AIFV code can be obtained by solving 0-1 integer programming problems.

33 citations


Patent
20 Mar 2015
TL;DR: In this article, the authors describe a system and method of a communication device that includes a port configured to receive a plurality of binary data streams having a binary header and a binary body.
Abstract: A system and method of a communication device. The device includes a port configured to receive a plurality of binary data streams having a binary header and a binary body. The device includes a memory storing a first structural description of the binary header and the binary body and a second structural description of a metadata construct of the message. The device includes a processor configured to parse a received binary data stream using the first structural description to determine the binary header and the binary body. The processor parses the binary body using the second structural description to determine the one or more groups of description values forming the metadata construct, where the processor uses a portion of the determined description values of the metadata construct to determine the one or more groups of data values of the message construct.
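
As an illustration of the two-stage parsing idea, a structural description can be expressed with Python's struct module. The header layout below (big-endian message type and body length) is invented for the example and is not taken from the patent:

```python
import struct

# Hypothetical first structural description: uint16 message type + uint32 body length.
HEADER = struct.Struct(">HI")

def parse_stream(data: bytes):
    """Split a binary stream into header fields and body using the description."""
    msg_type, body_len = HEADER.unpack_from(data, 0)
    body = data[HEADER.size:HEADER.size + body_len]
    return msg_type, body

packet = HEADER.pack(7, 5) + b"hello"
print(parse_stream(packet))    # (7, b'hello')
```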

Journal ArticleDOI
TL;DR: An efficient lossless image cryptographic algorithm to transmit pictorial data securely; parametric tests show that the proposed work is resilient and robust in the field of cryptography.
Abstract: Presently a number of techniques are used to restrict confidential image data from unauthorized access. In this paper, the authors propose an efficient lossless image cryptographic algorithm to transmit pictorial data securely. Initially we take a 64-bit key, convert each decimal pixel value into 8 binary bits, and XOR the first 8 bits of the key with the first pixel value. We then take the next 8 bits of the key and XOR them with the next pixel value, performing a circular right shift of the key when it is exhausted. Thereafter we perform a first-level Haar wavelet decomposition. Dividing the LL1 subband into four equal sections, we perform some swapping operations. Decryption follows the reverse of the encryption. Evaluation by parametric tests, including correlation analysis, NPCR, and UACI readings, shows that the proposed work is resilient and robust in the field of cryptography.
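
A minimal sketch of the keystream stage described above (8-bit key chunks, circular right shift on exhaustion); the wavelet decomposition and swapping stages are omitted, and the sample pixels and key value are assumptions:

```python
def xor_keystream(pixels, key64):
    """XOR each 8-bit pixel with successive bytes of a 64-bit key; when the key
    is exhausted, circular-right-shift it by 8 bits and start over."""
    out, k = [], key64
    for i, p in enumerate(pixels):
        shift = 56 - 8 * (i % 8)                 # consume the key one byte at a time
        out.append(p ^ ((k >> shift) & 0xFF))
        if i % 8 == 7:                           # key exhausted: rotate right by 8 bits
            k = ((k >> 8) | ((k & 0xFF) << 56)) & 0xFFFFFFFFFFFFFFFF
    return out

pixels = [200, 13, 255, 0, 97, 42, 7, 128, 66]
key = 0x0123456789ABCDEF
enc = xor_keystream(pixels, key)
print(xor_keystream(enc, key) == pixels)         # XOR is self-inverse: True
```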

Posted Content
TL;DR: A scalable Bayesian model for low-rank factorization of massive tensors with binary observations, using a zero-truncated Poisson likelihood for binary data, achieving excellent computational scalability, and demonstrating its usefulness in leveraging side-information provided in the form of mode-networks.
Abstract: We present a scalable Bayesian model for low-rank factorization of massive tensors with binary observations. The proposed model has the following key properties: (1) in contrast to the models based on the logistic or probit likelihood, using a zero-truncated Poisson likelihood for binary data allows our model to scale up in the number of ones in the tensor, which is especially appealing for massive but sparse binary tensors; (2) side-information in the form of binary pairwise relationships (e.g., an adjacency network) between objects in any tensor mode can also be leveraged, which can be especially useful in "cold-start" settings; and (3) the model admits simple Bayesian inference via batch, as well as online MCMC; the latter allows scaling up even for dense binary data (i.e., when the number of ones in the tensor/network is also massive). In addition, non-negative factor matrices in our model provide easy interpretability, and the tensor rank can be inferred from the data. We evaluate our model on several large-scale real-world binary tensors, achieving excellent computational scalability, and also demonstrate its usefulness in leveraging side-information provided in the form of mode-networks.

Journal ArticleDOI
TL;DR: A new and potentially integrable scheme for the realization of an all-optical binary full adder employing two XOR gates, two AND gates, and one OR gate based on a semiconductor optical amplifier is proposed.
Abstract: We propose a new and potentially integrable scheme for the realization of an all-optical binary full adder employing two XOR gates, two AND gates, and one OR gate. The XOR gate is realized using a Mach-Zehnder interferometer (MZI) based on a semiconductor optical amplifier (SOA). The AND and OR gates are based on the nonlinear properties of a semiconductor optical amplifier. The proposed scheme is driven by two input data streams and a carry bit from the previous less-significant bit order position. In our proposed design, we achieve extinction ratios for Sum and Carry output signals of 10 dB and 12 dB respectively. Successful operation of the system is demonstrated at 10 Gb/s with return-to-zero modulated signals.
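
The gate count corresponds to the standard Boolean decomposition of a full adder; the sketch below shows the logic the optical gates realize (pure logic only, not the MZI/SOA optics):

```python
# Full adder from two XOR gates, two AND gates, and one OR gate:
#   Sum   = (A XOR B) XOR Cin
#   Carry = (A AND B) OR ((A XOR B) AND Cin)
def full_adder(a, b, cin):
    axb = a ^ b                                   # first XOR, shared by both outputs
    return axb ^ cin, (a & b) | (axb & cin)       # (Sum, Carry)

for a in (0, 1):
    for b in (0, 1):
        for cin in (0, 1):
            print(a, b, cin, "->", full_adder(a, b, cin))
```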

Posted Content
TL;DR: In this paper, the authors focus on the binary autoencoder model, which seeks to reconstruct an image from the binary code produced by the hash function, and reformulates the optimization as alternating two easier steps: one that learns the encoder and decoder separately, and one that optimizes the code for each image.
Abstract: An attractive approach for fast search in image databases is binary hashing, where each high-dimensional, real-valued image is mapped onto a low-dimensional, binary vector and the search is done in this binary space. Finding the optimal hash function is difficult because it involves binary constraints, and most approaches approximate the optimization by relaxing the constraints and then binarizing the result. Here, we focus on the binary autoencoder model, which seeks to reconstruct an image from the binary code produced by the hash function. We show that the optimization can be simplified with the method of auxiliary coordinates. This reformulates the optimization as alternating two easier steps: one that learns the encoder and decoder separately, and one that optimizes the code for each image. Image retrieval experiments, using precision/recall and a measure of code utilization, show the resulting hash function outperforms or is competitive with state-of-the-art methods for binary hashing.
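
A minimal numpy sketch of the alternating idea under simplifying assumptions (linear decoder, greedy per-image bit flips for the code step, toy data; the paper additionally learns the encoder/hash function, omitted here):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))                          # toy "images"
b = 4                                                  # code length
Z = rng.integers(0, 2, size=(200, b)).astype(float)    # binary codes, one per image

for it in range(10):
    # Decoder step: least-squares fit of a linear decoder given the current codes.
    W = np.linalg.lstsq(Z, X, rcond=None)[0]
    # Code step: per image, greedily flip bits that lower the reconstruction error.
    for i in range(len(X)):
        for j in range(b):
            z = Z[i].copy()
            z[j] = 1 - z[j]
            if np.sum((X[i] - z @ W) ** 2) < np.sum((X[i] - Z[i] @ W) ** 2):
                Z[i] = z
    print(it, np.mean((X - Z @ W) ** 2))               # error decreases monotonically
```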

Journal ArticleDOI
TL;DR: An amplitude-aided signal reconstruction scheme is proposed for 1-bit CS over noisy WSNs subject to channel-induced bit flipping errors, by which the representation points of local binary quantizers are designed to minimize the loss of data fidelity caused by local sensing noise, quantization, and bit sign flipping, and the fusion center adopts the conventional $\ell_1$-minimization method for sparse signal recovery using the decoded and de-mapped binary data.
Abstract: One-bit compressive sensing (CS) is known to be particularly suited for resource-constrained wireless sensor networks (WSNs). In this letter, we consider 1-bit CS over noisy WSNs subject to channel-induced bit flipping errors, and propose an amplitude-aided signal reconstruction scheme, by which 1) the representation points of local binary quantizers are designed to minimize the loss of data fidelity caused by local sensing noise, quantization, and bit sign flipping, and 2) the fusion center adopts the conventional ${\ell_1}$ -minimization method for sparse signal recovery using the decoded and de-mapped binary data. The representation points of binary quantizers are designed by minimizing the mean square error (MSE) of the net data mismatch, taking into account the distributions of the nonzero signal entries, local sensing noise, quantization error, and bit flipping; a simple closed-form solution is then obtained. Numerical simulations show that our method improves the estimation accuracy when SNR is low or the number of sensors is small, as compared to state-of-the-art 1-bit CS algorithms relying solely on the sign message for signal recovery.

Journal ArticleDOI
TL;DR: It is shown that for the choice of projection parameters that provides a nearly Gaussian distribution, the experimental and analytical errors are close.
Abstract: We propose a transformation of real input vectors to output binary vectors by projection using a binary random matrix with elements {0,1} and thresholding. We investigate the rate of convergence of the distribution of vector components before binarization to the Gaussian distribution, as well as its relationship to the error of estimating the angle between the input vectors from the binarized output vectors. It is shown that for the choice of projection parameters that provides a nearly Gaussian distribution, the experimental and analytical errors are close.
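
A small numpy sketch of the transformation, with the dimensions, threshold choice, and test vectors as assumptions; it shows empirically that the Hamming distance between output codes tracks the angle between the inputs once the projected components are nearly Gaussian:

```python
import numpy as np

rng = np.random.default_rng(0)
d, m, p = 256, 4096, 0.5
R = (rng.random((m, d)) < p).astype(float)     # binary {0,1} random projection matrix

def binarize(x):
    y = R @ x
    return y > np.median(y)                    # threshold (median keeps bits balanced)

u = rng.normal(size=d)
v = 0.7 * u + 0.3 * rng.normal(size=d)         # correlated pair of inputs
hamming = np.mean(binarize(u) != binarize(v))  # fraction of differing bits
angle = np.arccos(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))
print(hamming, angle / np.pi)                  # the two quantities roughly agree
```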

Journal ArticleDOI
TL;DR: The 2-adic complexity of some binary sequences with interleaved structure and optimal autocorrelation, constructed by Tang and Ding and by Zhou et al., is investigated, and it is shown that these sequences have the maximum 2-adic complexity.


Journal ArticleDOI
TL;DR: This paper builds on the popular minutiae cylinder code (MCC) and applies the theory of Markov random field to model bit correlations in MCC, and designs a hierarchical fingerprint indexing scheme for binary hash codes.
Abstract: Compact binary codes can in general improve the speed of searches in large-scale applications. Although fingerprint retrieval was studied extensively with real-valued features, only few strategies are available for search in Hamming space. In this paper, we propose a theoretical framework for systematically learning compact binary hash codes and develop an integrative approach to hash-based fingerprint indexing. Specifically, we build on the popular minutiae cylinder code (MCC) and are inspired by observing that the MCC bit-based representation is bit-correlated. Accordingly, we apply the theory of Markov random field to model bit correlations in MCC. This enables us to learn hash bits from a generalized linear model whose maximum likelihood estimates can be conveniently obtained using established algorithms. We further design a hierarchical fingerprint indexing scheme for binary hash codes. Under the new framework, the code length can be significantly reduced from 384 to 24 bits for each minutiae representation. Statistical experiments on public fingerprint databases demonstrate that our proposed approach can significantly improve the search accuracy of the benchmark MCC-based indexing scheme. The binary hash codes can achieve a significant search speedup compared with the MCC bit-based representation.

Proceedings ArticleDOI
07 Jun 2015
TL;DR: In this article, the authors propose re-randomization techniques for stochastic computing and use the Logistic Map x → r x(1−x) as a case study for dynamical systems in general.
Abstract: Stochastic Computing (SC) is a digital computation approach that operates on random bit streams to perform complex tasks with much smaller hardware footprints compared to conventional approaches that employ binary radix. For stochastic logic to work, the input random bit streams have to be independent, which is a challenge when implementing systems with feedback: outputs that are generated based on input bit streams would be correlated to those streams and cannot be readily combined as inputs to stochastic logic for another iteration of the function. We propose re-randomization techniques for stochastic computing and use the Logistic Map x → r x(1−x) as a case study for dynamical systems in general. We show that complex behaviors such as period-doubling and chaos do indeed occur in digital logic with only a few gates operating on a few 0's and 1's. We employ a number of techniques such as random number generator sharing and using table-lookup pre-computations to significantly reduce the total energy of the computation. Compared to the conventional binary approach, our designs consume between 8% and 25% of the energy.
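
A minimal software sketch of the re-randomization idea: each iteration draws two fresh, independent streams encoding x so that an AND/NOT gate correctly computes x(1−x). The rescaling by r is done numerically here rather than with the paper's hardware techniques, and the stream length and map parameter are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
N, r, x = 4096, 3.7, 0.4                 # stream length, map parameter, initial value

for step in range(10):
    # Re-randomization: two fresh, independent unipolar streams both encoding x.
    s1 = rng.random(N) < x
    s2 = rng.random(N) < x
    y = s1 & ~s2                         # AND/NOT gate: P(y=1) = x * (1 - x)
    x = r * np.mean(y)                   # estimate the probability, rescale by r
    print(step, x)
```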

Journal ArticleDOI
TL;DR: An improved binary linear-to-log (Lin2Log) conversion algorithm that has been optimized for implementation on a field-programmable gate array that achieves 23 bits of fractional precision while using just one 18K-bit block RAM (BRAM).
Abstract: This brief describes an improved binary linear-to-log (Lin2Log) conversion algorithm that has been optimized for implementation on a field-programmable gate array. The algorithm is based on a piecewise linear (PWL) approximation of the transform curve combined with a PWL approximation of a scaled version of a normalized segment error. The architecture presented achieves 23 bits of fractional precision while using just one 18K-bit block RAM (BRAM), and synthesis results indicate operating frequencies of 93 and 110 MHz when implemented on Xilinx Spartan3 and Spartan6 devices, respectively. Memory requirements are reduced by exploiting the symmetrical properties of the normalized error curve, allowing it to be more efficiently implemented using the combinatorial logic available in the reconfigurable fabric instead of using a second BRAM inefficiently. The same principles can be also adapted to applications where higher accuracy is needed.
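
A software sketch of the base PWL scheme (exponent extraction plus a segment table over the normalized mantissa); the paper's second PWL stage, which corrects with a scaled normalized segment error, is omitted, and the segment count is an assumption:

```python
import math

SEGMENTS = 16
XS = [1.0 + i / SEGMENTS for i in range(SEGMENTS + 1)]   # breakpoints on [1, 2)
YS = [math.log2(x) for x in XS]                          # exact values at breakpoints

def pwl_log2(v):
    """log2 via exponent extraction (a leading-one detector in hardware) plus
    piecewise-linear interpolation over the normalized mantissa."""
    m, e = math.frexp(v)           # v = m * 2**e with m in [0.5, 1)
    m, e = 2.0 * m, e - 1          # renormalize so that m is in [1, 2)
    i = min(int((m - 1.0) * SEGMENTS), SEGMENTS - 1)
    t = (m - XS[i]) * SEGMENTS     # fractional position within segment i
    return e + YS[i] + t * (YS[i + 1] - YS[i])

for v in (1.5, 3.0, 1000.0):
    print(v, pwl_log2(v), math.log2(v))
```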

Journal ArticleDOI
TL;DR: In this paper, stability analysis is used to explore the effect of the mass ratio on the structure of families of periodic orbits near a large mass ratio binary star system, which is useful in a variety of applications, including the determination of potentially stable exoplanet motions near a binary star.
Abstract: With improved observational capabilities and techniques, an increasing number of exoplanets have been discovered to orbit in the vicinity of binary star systems. In this investigation, periodic motions near a large mass ratio binary are explored within the context of the circular restricted three-body problem. Specifically, stability analysis is used to explore the effect of the mass ratio on the structure of families of periodic orbits. Such analysis is useful in a variety of applications, including the determination of potentially stable exoplanet motions near a binary star.

Journal ArticleDOI
01 Feb 2015
TL;DR: It is mathematically proven that this scheme, called opposition-based learning, also does well in binary spaces, and that utilizing random numbers and their opposites is beneficial in evolutionary algorithms.
Abstract: Evolutionary algorithms start with an initial population vector, which is randomly generated when no preliminary knowledge about the solution is available. Recently, it has been claimed that in solving continuous domain optimization problems, the simultaneous consideration of randomness and opposition is more effective than pure randomness. Here it is mathematically proven that this scheme, called opposition-based learning, also does well in binary spaces. The proposed binary opposition-based scheme can be embedded inside many binary population-based algorithms. We applied it to accelerate the convergence rate of the Binary Gravitational Search Algorithm (BGSA) as an application. The experimental results and mathematical proofs confirm each other, and show that the resulting OBGSA possesses superior accuracy compared to the BGSA.
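
In a binary space the opposite of a candidate is simply its bitwise complement, so opposition-based initialization can be sketched in a few lines (the OneMax objective and population size are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(x):
    return x.sum()                               # toy OneMax objective: maximize ones

pop = rng.integers(0, 2, size=(10, 20))          # random binary population
opp = 1 - pop                                    # binary opposition: bitwise complement
both = np.vstack([pop, opp])                     # evaluate candidates and their opposites
best = both[np.argsort([-fitness(x) for x in both])[:10]]   # keep the fitter half
print(fitness(best[0]))
```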

Book ChapterDOI
06 Dec 2015
TL;DR: This paper utilizes corrected mixed point addition and doubling formulas to achieve a secure, but still fast implementation of a point multiplication on binary Edwards curves.
Abstract: Elliptic curve cryptography (ECC) is an ideal choice for low-resource applications because it provides the same level of security with smaller key sizes than other existing public key encryption schemes. For low-resource applications, designing efficient functional units for elliptic curve computations over binary fields results in an effective platform for an embedded co-processor. This paper proposes such a co-processor designed for area-constrained devices by utilizing state-of-the-art binary Edwards curve equations for mixed point addition and doubling. The binary Edwards curve offers the security advantage that it is complete and is, therefore, immune to the exceptional points attack. In conjunction with the Montgomery ladder, such a curve is naturally immune to most types of simple power and timing attacks. The recently presented formulas for mixed point addition in [1] were found to be invalid, but were corrected such that the speed and register usage were maintained. We utilize the corrected mixed point addition and doubling formulas to achieve a secure, but still fast, implementation of point multiplication on binary Edwards curves. Our synthesis results over NIST recommended fields for ECC indicate that the proposed co-processor requires about 50% fewer clock cycles for point multiplication and occupies a similar silicon area when compared to the most recent designs in the literature.
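
For illustration, the Montgomery ladder's uniform one-add-one-double pattern per scalar bit, which underlies the timing/SPA resistance mentioned above, can be sketched on a toy prime-field Weierstrass curve. This is not the paper's binary Edwards arithmetic, and the curve parameters are assumptions:

```python
# Toy curve y^2 = x^3 + 2x + 3 over GF(97); hypothetical parameters for the sketch.
P_MOD, A = 97, 2
O = None                                         # point at infinity

def add(p1, p2):
    if p1 is O: return p2
    if p2 is O: return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return O                                 # p2 is the inverse of p1
    if p1 == p2:
        lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD) % P_MOD   # doubling slope
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD          # chord slope
    x3 = (lam * lam - x1 - x2) % P_MOD
    return (x3, (lam * (x1 - x3) - y1) % P_MOD)

def ladder(k, P):
    """Montgomery ladder: one add and one double per bit, whatever its value."""
    R0, R1 = O, P
    for bit in bin(k)[2:]:
        if bit == '1':
            R0, R1 = add(R0, R1), add(R1, R1)
        else:
            R0, R1 = add(R0, R0), add(R0, R1)
    return R0

P = (3, 6)                                       # 6^2 = 3^3 + 2*3 + 3 (mod 97)
print(ladder(29, P))                             # [29]P
```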

Journal ArticleDOI
TL;DR: In this article, the fundamental limits on minimum energy dissipation are discussed by presenting two popularly adopted switching procedures, and it is shown that the zero-power limit is attainable only with the protocol that does not involve any irreversible entropy increase.

Journal ArticleDOI
07 May 2015
TL;DR: In this article, it was shown that searching for the ground state of a one-dimensional chain of quantum particles is equivalent to searching for optimal binary sequences with minimum energy levels, where the peak level of the PSL must be minimal.
Abstract: A Bernasconi model for a one-dimensional chain of quantum particles is considered. It is shown that searching for the ground state of such a quantum system is equivalent to searching for optimal binary sequences with minimum energy levels. A second criterion for the optimality of binary sequences with low levels of aperiodic autocorrelation is the minimax criterion, in which the peak sidelobe level (PSL) must be minimal. A review and new results regarding the construction of such binary sequences up to length N=82 are presented.
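
For small lengths the optimal sequences can be found by exhaustive search. The sketch below computes the Bernasconi energy (sum of squared aperiodic autocorrelation sidelobes) and the PSL of ±1 sequences, recovering the known optimum at N = 11:

```python
from itertools import product

def autocorr(seq, k):
    """Aperiodic autocorrelation of a +/-1 sequence at lag k."""
    return sum(seq[i] * seq[i + k] for i in range(len(seq) - k))

def energy(seq):
    """Bernasconi energy: sum of squared sidelobes over all nonzero lags."""
    return sum(autocorr(seq, k) ** 2 for k in range(1, len(seq)))

def psl(seq):
    """Peak sidelobe level (the minimax criterion)."""
    return max(abs(autocorr(seq, k)) for k in range(1, len(seq)))

N = 11
best = min(product((-1, 1), repeat=N), key=energy)
# The length-11 Barker sequence attains energy 5 (merit factor 12.1) and PSL 1.
print(best, energy(best), psl(best))
```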

Posted Content
TL;DR: An extension of the COCAL (Compact Object CALculator) code is presented to compute general-relativistic initial data for binary compact-star systems, in particular quasiequilibrium data for equal-mass binaries with spins that are either aligned or antialigned with the orbital angular momentum.
Abstract: We present the extension of our COCAL (Compact Object CALculator) code to compute general-relativistic initial data for binary compact-star systems. In particular, we construct quasiequilibrium initial data for equal-mass binaries with spins that are either aligned or antialigned with the orbital angular momentum. The Isenberg-Wilson-Mathews formalism is adopted and the constraint equations are solved using the representation formula with a suitable choice of a Green's function. We validate the new code with solutions for equal-mass binaries and explore its capabilities for a wide range of compactnesses, from a white dwarf binary with compactness $\sim 10^{-4}$, up to a highly relativistic neutron-star binary with compactness $\sim 0.22$. We also present a comparison with corotating and irrotational quasiequilibrium sequences from the spectral code LORENE [Taniguchi and Gourgoulhon, Phys. Rev. D 66, 104019 (2002)] at different compactnesses, showing that the results from the two codes agree to a precision of the order of $0.05\%$. Finally, we present equilibria for spinning configurations with a nuclear-physics equation of state in a piecewise polytropic representation.

Journal ArticleDOI
TL;DR: Simulation results show that the AFCS can perfectly recover all non-zero elements of the sparse binary signal with a significantly reduced number of measurements, compared to the conventional binary CS and l1-minimization approaches in a wide range of signal to noise ratios (SNRs) by using the standard message passing decoder.
Abstract: In this paper, a compressive sensing (CS) approach is proposed for sparse binary signals' compression and reconstruction based on analog fountain codes (AFCs). In the proposed scheme, referred to as the analog fountain compressive sensing (AFCS), each measurement is generated from a linear combination of $L$ randomly selected binary signal elements with real weight coefficients. The weight coefficients are chosen from a finite weight set and $L$ , called measurement degree, is obtained based on a predefined degree distribution function. We propose a simple verification based reconstruction algorithm for the AFCS in the noiseless case. The proposed verification based decoder is analyzed through the SUM-OR tree analytical approach and an optimization problem is formulated to find the optimum measurement degree to minimize the number of measurements required for the reconstruction of binary sparse signals. We show that in the AFCS, the number of required measurements is of ${\cal O}(-n\log(1-k/n))$ , where $n$ is the signal length and $k$ is the signal sparsity level. Simulation results show that the AFCS can perfectly recover all non-zero elements of the sparse binary signal with a significantly reduced number of measurements, compared to the conventional binary CS and $\ell_1$ -minimization approaches in a wide range of signal-to-noise ratios (SNRs) by using the standard message passing decoder. Finally, we show a practical application of the AFCS for the sparse event detection in wireless sensor networks (WSNs), where the sensors' readings can be treated as measurements from the CS point of view.

Journal ArticleDOI
TL;DR: A technique based on Gauss-Jordan elimination in GF(q) (Galois field), with q = 2^m, where m is the number of bits per symbol, is proposed to deal with the blind identification problem of code word length.
Abstract: In the cognitive radio context, the parameters of coding schemes are unknown at the receiver. The design of an intelligent receiver is then essential to blindly identify these parameters from the received data. The blind identification of code word length has already been extensively studied in the case of binary error-correcting codes. Here, we are interested in non-binary codes where a noisy transmission environment is considered. To deal with the blind identification problem of code word length, we propose a technique based on Gauss-Jordan elimination in GF(q) (Galois field), with q = 2^m, where m is the number of bits per symbol. This proposed technique is based on the information provided by the arithmetic mean of the number of zeros in each column of these matrices. The robustness of our technique is studied for different code parameters and over different Galois fields.
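
A sketch of the idea restricted to GF(2) (i.e., m = 1) and a noiseless toy code: reshaping the received stream with a candidate word length and running Gauss-Jordan elimination reveals rank deficiency only at multiples of the true length. The (3,2) parity code and stream length are assumptions; the paper works in GF(2^m) and handles noise via the zeros-per-column statistic:

```python
import numpy as np

def gf2_gauss_jordan(M):
    """Gauss-Jordan elimination over GF(2) (the q = 2 case of the method)."""
    M, r = M.copy() % 2, 0
    for c in range(M.shape[1]):
        piv = np.nonzero(M[r:, c])[0]
        if piv.size == 0:
            continue
        M[[r, r + piv[0]]] = M[[r + piv[0], r]]      # bring a pivot row up
        for i in range(M.shape[0]):
            if i != r and M[i, c]:
                M[i] ^= M[r]                         # clear the rest of column c
        r += 1
        if r == M.shape[0]:
            break
    return M

# Noiseless toy stream from a (3, 2) parity code: code word = (d1, d2, d1 ^ d2).
rng = np.random.default_rng(0)
d = rng.integers(0, 2, (200, 2))
stream = np.hstack([d, d[:, :1] ^ d[:, 1:]]).ravel()

for l in range(2, 10):                               # candidate code-word lengths
    M = stream[: len(stream) // l * l].reshape(-1, l)
    rank = int(np.any(gf2_gauss_jordan(M), axis=1).sum())
    print(l, l - rank)   # rank deficiency appears at multiples of the true length 3
```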