
Showing papers on "Binary number" published in 2021


Journal ArticleDOI
TL;DR: In this article, two new hetero-dihalogenated terminals (FCl-IC and FBr-IC), with a pair of fluorine/chlorine or fluorine/bromine at one terminal, and three NFAs (Y-BO-FCl, Y-BO-FBr, and Y-BO-ClBr) with three hetero-dihalogenated terminals were synthesized in a general process for OSCs.
Abstract: Although dihalogenation of terminals is an effective strategy for achieving efficient nonfullerene acceptor (NFA)-based organic solar cells (OSCs), hetero-dihalogenated terminals are quite difficult to obtain. Here, we first synthesized two new hetero-dihalogenated terminals (FCl-IC and FBr-IC), each bearing a fluorine/chlorine or fluorine/bromine pair at one terminal, and three NFAs (Y-BO-FCl, Y-BO-FBr, and Y-BO-ClBr) with the three hetero-dihalogenated terminals (FCl-IC, FBr-IC, and ClBr-IC), respectively, via a general process for OSCs. The neat Y-BO-FCl film presents a slightly lower energy level than those of Y-BO-FBr and Y-BO-ClBr. We obtained, for the first time, single crystals of hetero-dihalogenated NFAs. Going from the Y-BO-ClBr single crystal to the fluorinated acceptor single crystals, the crystal systems and the intermolecular packing motifs are significantly improved. Crystallographic and theoretical analyses indicate that Y-BO-FCl exhibits the most planar molecular geometry, the smallest intermolecular packing distance, and the largest π−π electronic coupling among these acceptors. Moreover, PM6:Y-BO-FCl blend films present more ordered face-on crystallinity, more suitable fiber-like phase separation, higher and more balanced charge mobility, and weaker charge recombination than PM6:Y-BO-FBr and PM6:Y-BO-ClBr. As a result, a remarkable PCE of up to 17.52% with an enhanced FF of ca. 78% was achieved in binary Y-BO-FCl:PM6 devices, compared with 16.47% for PM6:Y-BO-FBr and 13.61% for PM6:Y-BO-ClBr; this is the highest efficiency reported for hetero-halogenated NFA-based OSCs. Our investigations demonstrate that the fluorine/chlorine hetero-dihalogenated terminal is a new and effective synergistic strategy for inducing significant differences in single-crystal structure and achieving high-performance hetero-halogenated NFA-based OSCs.

63 citations


Journal ArticleDOI
TL;DR: This article is concerned with moving-horizon state estimation problems for a class of discrete-time linear dynamic networks, where signals are transmitted via noisy network channels and distortions can be caused by channel noise.
Abstract: This article is concerned with moving-horizon state estimation problems for a class of discrete-time linear dynamic networks. The signals are transmitted via noisy network channels, and distortions can be caused by channel noise. As such, binary encoding schemes, which take advantage of the robustness of binary data, are exploited during signal transmission. More specifically, under such schemes, the original signals are encoded into a bit string, transmitted via memoryless binary symmetric channels with certain crossover probabilities, and eventually restored by a decoder at the receiver. Novel centralized and decentralized moving-horizon estimators in the presence of the binary encoding schemes are constructed by solving the respective global and local least-squares optimization problems. Sufficient conditions are obtained through intensive stochastic analysis to guarantee the stochastic ultimate boundedness of the estimation errors. A simulation example is presented to verify the effectiveness of the proposed moving-horizon estimators.
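The abstract does not spell out the encoder and decoder, so the following is only a minimal sketch of the general idea it describes: uniformly quantize a scalar signal to a fixed-length bit string, pass it through a memoryless binary symmetric channel with a given crossover probability, and decode at the receiver. All function names, bit widths, and ranges are illustrative, not taken from the paper.

```python
import random

def encode(x, x_min, x_max, n_bits):
    """Uniformly quantize a scalar in [x_min, x_max] to an n_bits-long bit list."""
    levels = 2 ** n_bits - 1
    q = round((x - x_min) / (x_max - x_min) * levels)
    return [(q >> i) & 1 for i in range(n_bits)]

def bsc(bits, p, rng=random):
    """Memoryless binary symmetric channel: flip each bit with probability p."""
    return [b ^ (rng.random() < p) for b in bits]

def decode(bits, x_min, x_max):
    """Reconstruct the scalar from the (possibly corrupted) bit string."""
    n_bits = len(bits)
    q = sum(b << i for i, b in enumerate(bits))
    return x_min + q / (2 ** n_bits - 1) * (x_max - x_min)

# Example: transmit a measurement through a channel with crossover probability 0.05.
sent = encode(3.7, x_min=-10.0, x_max=10.0, n_bits=12)
received = bsc(sent, p=0.05)
print(decode(received, -10.0, 10.0))
```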

56 citations


Journal ArticleDOI
TL;DR: By diagonalizing the transform matrix of the map, this paper gives the explicit formulation of any iteration of the generalized Cat map and discloses its real graph (cycle) structure in any binary arithmetic domain.
Abstract: Chaotic dynamics is an important source for generating pseudorandom binary sequences (PRBS). Much effort has been devoted to obtaining the period distribution of the generalized discrete Arnold's Cat map in various domains using all kinds of theoretical methods, including Hensel's lifting approach. By diagonalizing the transform matrix of the map, this paper gives the explicit formulation of any iteration of the generalized Cat map. Then, its real graph (cycle) structure in any binary arithmetic domain is disclosed. The subtle rules on how the cycles (themselves and their distribution) change with the arithmetic precision e are elaborately investigated and proved. The regular and beautiful patterns of the Cat map demonstrated on a computer adopting fixed-point arithmetic are rigorously proved and experimentally verified. The results can serve as a benchmark for studying the dynamics of variants of the Cat map in any domain. In addition, the methodology used can be applied to evaluate the randomness of PRBS generated by iterating any other maps.
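As a concrete illustration of the cycle structure discussed above, the generalized Cat map with parameter matrix [[1, a], [b, ab+1]] can be iterated modulo $2^e$ and its cycles enumerated by brute force; since the map is a bijection modulo $2^e$, every state lies on exactly one cycle. The parameter choice and precision below are illustrative only.

```python
def cat_map_step(x, y, a, b, e):
    """One iteration of the generalized Arnold's Cat map modulo 2**e."""
    m = 1 << e
    return (x + a * y) % m, (b * x + (a * b + 1) * y) % m

def cycle_lengths(a, b, e):
    """Enumerate the cycle (period) structure of the map on all 2**e x 2**e states."""
    m = 1 << e
    seen = [[False] * m for _ in range(m)]
    lengths = {}
    for x0 in range(m):
        for y0 in range(m):
            if seen[x0][y0]:
                continue
            # The map is a permutation, so this walk traces exactly one new cycle.
            x, y, n = x0, y0, 0
            while not seen[x][y]:
                seen[x][y] = True
                x, y = cat_map_step(x, y, a, b, e)
                n += 1
            lengths[n] = lengths.get(n, 0) + 1
    return lengths  # {cycle length: number of cycles of that length}

print(cycle_lengths(a=1, b=1, e=4))  # small precision so the exhaustive scan stays cheap
```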

49 citations


Journal ArticleDOI
TL;DR: In this paper, the authors proposed a binary version of ChOA, argued that the transfer function is the most important part of binary algorithms, and investigated the efficiency of binary ChOAs (BChOA) in terms of convergence speed and local-minima avoidance.
Abstract: Chimp optimization algorithm (ChOA) is a newly proposed meta-heuristic algorithm inspired by chimps' individual intelligence and sexual motivation in their group hunting. The favorable performance of ChOA relative to other well-known meta-heuristic algorithms has already been demonstrated. However, its continuous nature makes it unsuitable for solving binary problems. Therefore, this paper proposes a novel binary version of ChOA and attempts to show that the transfer function is the most important part of binary algorithms. To this end, four S-shaped and V-shaped transfer functions, as well as a novel binary approach, have been utilized to investigate the efficiency of binary ChOAs (BChOA) in terms of convergence speed and local-minima avoidance. In this regard, forty-three unimodal, multimodal, and composite optimization functions and ten IEEE CEC06-2019 benchmark functions were utilized to evaluate the efficiency of the BChOAs. Furthermore, to validate the performance of the BChOAs, the four newly proposed binary optimization algorithms were compared with eighteen state-of-the-art algorithms. The results indicate that both the novel binary approach and the V-shaped transfer functions improve the efficiency of the BChOAs in a statistically significant way.
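The paper's specific transfer functions are not reproduced here; the sketch below shows the generic mechanism by which S-shaped and V-shaped transfer functions turn a continuous step (e.g., a velocity component) into a bit update. The sigmoid and |tanh| forms are common textbook choices and are assumptions, not necessarily the four functions used in the paper.

```python
import math
import random

def s_shaped(v):
    """Classic sigmoid (S-shaped) transfer function."""
    return 1.0 / (1.0 + math.exp(-v))

def v_shaped(v):
    """A common V-shaped transfer function: |tanh(v)|."""
    return abs(math.tanh(v))

def update_bit_s(v, rng=random):
    """S-shaped rule: the transfer value sets the new bit directly."""
    return 1 if rng.random() < s_shaped(v) else 0

def update_bit_v(bit, v, rng=random):
    """V-shaped rule: the transfer value is the probability of flipping the current bit."""
    return 1 - bit if rng.random() < v_shaped(v) else bit

# Example: binarize one candidate solution from its continuous step vector.
velocity = [0.8, -2.1, 0.05, 3.0]
position = [0, 1, 1, 0]
print([update_bit_v(b, v) for b, v in zip(position, velocity)])
```

The practical difference is that S-shaped rules overwrite each bit from scratch, while V-shaped rules only decide whether to flip it, which is one reason V-shaped variants often preserve good partial solutions better.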

43 citations


Journal ArticleDOI
TL;DR: Among the eight transfer functions, the V4 transfer function with population reduction on the binary GSK algorithm outperforms the other optimizers in terms of accuracy, fitness values, and the minimal number of features.
Abstract: In machine learning, searching for the optimal feature subset from the original datasets is a challenging and prominent task. Metaheuristic algorithms are used to find the relevant, important features that enhance classification accuracy and save resource time. Most such algorithms have shown excellent performance in solving feature selection problems. A recently developed metaheuristic algorithm, the gaining-sharing knowledge-based optimization algorithm (GSK), is considered for finding the optimal feature subset. The GSK algorithm was proposed over a continuous search space; therefore, a total of eight S-shaped and V-shaped transfer functions are employed to map the problems into a binary search space. Additionally, a population reduction scheme is employed with the transfer functions to enhance the performance of the proposed approaches. It explores the search space efficiently and deletes the worst solutions, because the population size is updated in every iteration. The proposed approaches are tested on twenty-one benchmark datasets from the UCI repository. The obtained results are compared with state-of-the-art metaheuristic algorithms including the binary differential evolution algorithm, binary particle swarm optimization, the binary bat algorithm, the binary grey wolf optimizer, the binary ant lion optimizer, the binary dragonfly algorithm, and the binary salp swarm algorithm. Among the eight transfer functions, the V4 transfer function with population reduction on the binary GSK algorithm outperforms the other optimizers in terms of accuracy, fitness values, and the minimal number of features. To investigate the results statistically, two non-parametric statistical tests are conducted, which confirm the superiority of the proposed approach.
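The exact population reduction schedule is not given in the abstract; a widely used rule (in the spirit of linear population size reduction) shrinks the population linearly over the run and discards the worst solutions, as sketched below. The schedule and helper names are illustrative assumptions.

```python
def population_size(iteration, max_iter, n_init, n_min):
    """Linear population-size reduction: shrink from n_init to n_min over the run.

    Illustrative rule only; the paper's exact schedule may differ.
    """
    frac = iteration / max_iter
    return round(n_init + (n_min - n_init) * frac)

def reduce_population(population, fitness, new_size):
    """Keep only the new_size best solutions (smaller fitness is better)."""
    ranked = sorted(zip(fitness, population), key=lambda t: t[0])
    keep = ranked[:new_size]
    return [p for _, p in keep], [f for f, _ in keep]

# Example: at iteration 300 of 500, a population of 100 has shrunk towards 20.
print(population_size(300, 500, n_init=100, n_min=20))  # -> 52
```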

40 citations


Journal ArticleDOI
TL;DR: In this paper, the authors proposed joint importance measures for the optimal component sequence of a consecutive-k-out-of-n system, which consider the impact of possible changes in the system structure during its life cycle.

35 citations


Journal ArticleDOI
TL;DR: A binary version of MPA is proposed for solving the 0–1 knapsack (KP01) problem and the performance of the proposed BMPA algorithm is tested on a set of KP01 problems and compared to a number of existing algorithms.

33 citations



Proceedings ArticleDOI
01 Jan 2021
TL;DR: The authors proposed an architectural approach called MeliusNet, which consists of alternating a DenseBlock, which increases the feature capacity, and their proposed ImprovementBlock, which increases the feature quality.
Abstract: Binary Neural Networks (BNNs) are neural networks which use binary weights and activations instead of the typical 32-bit floating point values. They have reduced model sizes and allow for efficient inference on mobile or embedded devices with limited power and computational resources. However, the binarization of weights and activations leads to feature maps of lower quality and lower capacity and thus a drop in accuracy compared to their 32-bit counterparts. Previous work has increased the number of channels or used multiple binary bases to alleviate these problems. In this paper, we instead present an architectural approach: MeliusNet. It consists of alternating a DenseBlock, which increases the feature capacity, and our proposed ImprovementBlock, which increases the feature quality. Experiments on the ImageNet dataset demonstrate the superior performance of our MeliusNet over a variety of popular binary architectures with regards to both computation savings and accuracy. Furthermore, BNN models trained with our method can match the accuracy of the popular compact network MobileNet-v1 in terms of model size and number of operations. Our code is published online: https://github.com/hpi-xnor/BMXNet-v2
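MeliusNet itself is not reproduced here (the linked BMXNet repository contains the authors' implementation); the sketch below only illustrates the core BNN operation the abstract refers to, replacing 32-bit weights and activations by their signs in a single dense layer. The layer shapes are arbitrary.

```python
import numpy as np

def binarize(x):
    """Map real values to {-1, +1} (zero is sent to +1), as in typical BNNs."""
    return np.where(x >= 0, 1.0, -1.0)

def binary_dense_layer(x, w_real):
    """Forward pass of one fully connected layer with binarized weights and activations.

    During training, gradients usually flow through a straight-through estimator;
    only the forward computation is sketched here, and accumulation stays in
    higher precision.
    """
    xb = binarize(x)
    wb = binarize(w_real)
    return xb @ wb

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 16))       # batch of 4 inputs with 16 features
w = rng.standard_normal((16, 8))       # latent real-valued weights
print(binary_dense_layer(x, w).shape)  # (4, 8)
```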

29 citations


Journal ArticleDOI
TL;DR: In this paper, the authors studied how often signal overlaps of various types will occur in an ET-CE network over the course of a year and found that a binary neutron star signal will typically have tens of overlapping binary black hole and binary neutron star signals.
Abstract: In the past few years, the detection of gravitational waves from compact binary coalescences with the Advanced LIGO and Advanced Virgo detectors has become routine. Future observatories will detect even larger numbers of gravitational-wave signals, which will also spend a longer time in the detectors' sensitive band. This will eventually lead to overlapping signals, especially in the case of Einstein Telescope (ET) and Cosmic Explorer (CE). Using realistic distributions for the merger rate as a function of redshift as well as for component masses in binary neutron star and binary black hole coalescences, we map out how often signal overlaps of various types will occur in an ET-CE network over the course of a year. We find that a binary neutron star signal will typically have tens of overlapping binary black hole and binary neutron star signals. Moreover, it will happen up to tens of thousands of times per year that two signals will have their end times within seconds of each other. In order to understand to what extent this would lead to measurement biases with current parameter estimation methodology, we perform injection studies with overlapping signals from binary black hole and/or binary neutron star coalescences. Varying the signal-to-noise ratios, the durations of overlap, and the kinds of overlapping signals, we find that in most scenarios the intrinsic parameters can be recovered with negligible bias. However, we find large offsets for a short binary black hole or a quieter binary neutron star signal overlapping with a long and louder binary neutron star event when the merger times are sufficiently close. Although based on a limited number of simulations, our studies may be an indicator of where improvements are required to ensure reliable estimation of source parameters for all detected compact binary signals as we go from second-generation to third-generation detectors.

24 citations


Proceedings ArticleDOI
24 May 2021
TL;DR: BugGraph as discussed by the authors uses a triplet loss network on the attributed control flow graph to produce a similarity ranking for source-binary code similarity detection, achieving 90% and 75% true positive rate for syntax equivalent and similar code, respectively, an improvement of 16% and 24% over state-of-the-art methods.
Abstract: Binary code similarity detection, which answers whether two pieces of binary code are similar, has been used in a number of applications, such as vulnerability detection and automatic patching. Existing approaches face two hurdles in their efforts to achieve high accuracy and coverage: (1) the problem of source-binary code similarity detection, where the target code to be analyzed is in binary format while the comparing code (with ground truth) is in source code format. Meanwhile, the source code is compiled to the comparing binary code with either a random or fixed configuration (e.g., architecture, compiler family, compiler version, and optimization level), which significantly increases the difficulty of code similarity detection; and (2) the existence of different degrees of code similarity. Less similar code is known to be more, if not equally, important in various applications such as binary vulnerability study. To address these challenges, we design BugGraph, which performs source-binary code similarity detection in two steps. First, BugGraph identifies the compilation provenance of the target binary and compiles the comparing source code to a binary with the same provenance. Second, BugGraph utilizes a new graph triplet-loss network on the attributed control flow graph to produce a similarity ranking. The experiments on four real-world datasets show that BugGraph achieves 90% and 75% true positive rate for syntax equivalent and similar code, respectively, an improvement of 16% and 24% over state-of-the-art methods. Moreover, BugGraph is able to identify 140 vulnerabilities in six commercial firmware.
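BugGraph's graph neural network is not reproduced here; the sketch below only illustrates the triplet margin loss that such a triplet-loss network minimizes, applied to generic embedding vectors standing in for attributed control flow graph embeddings. The margin and the toy vectors are illustrative.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Standard triplet margin loss on embedding vectors.

    Pulls the anchor towards the similar (positive) sample and pushes it away
    from the dissimilar (negative) one; BugGraph applies this idea to graph
    embeddings, then ranks candidates by embedding distance.
    """
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

a = np.array([0.1, 0.9, 0.0])
p = np.array([0.2, 0.8, 0.1])   # embedding of a similar function
n = np.array([0.9, 0.1, 0.7])   # embedding of an unrelated function
print(triplet_loss(a, p, n))
```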

Journal ArticleDOI
TL;DR: In this article, NARDINI, an approach for detecting non-random arrangements of residues in disordered regions inferred using numerical intermixing, is proposed to enable the discovery of potentially important, shared patterns across sequence families.

Journal ArticleDOI
TL;DR: In this article, three sets of features were collected: a) interaction features of solutes and Mg obtained from first-principles calculations, b) intrinsic physical properties of the pure elements, and c) structural features.

Journal ArticleDOI
07 Apr 2021-PLOS ONE
TL;DR: This paper conducted a simulation experiment that compared two-cluster K-median partitions of 71 binary similarity coefficients based on their pairwise correlations obtained under 15 different base-rate configurations.
Abstract: There are many psychological applications that require collapsing the information in a two-mode (e.g., respondents-by-attributes) binary matrix into a one-mode (e.g., attributes-by-attributes) similarity matrix. This process requires the selection of a measure of similarity between binary attributes. A vast number of binary similarity coefficients have been proposed in fields such as biology, geology, and ecology. Although previous studies have reported cluster analyses of binary similarity coefficients, there has been little exploration of how cluster memberships are affected by the base rates (percentage of ones) for the binary attributes. We conducted a simulation experiment that compared two-cluster K-median partitions of 71 binary similarity coefficients based on their pairwise correlations obtained under 15 different base-rate configurations. The results reveal that some subsets of coefficients consistently group together regardless of the base rates. However, there are other subsets of coefficients that group together for some base rates, but not for others.
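For reference, most of the 71 coefficients studied are functions of the 2x2 contingency counts a, b, c, d of two binary attributes. A few classic examples, Jaccard, simple matching, and Dice, are sketched below; note that simple matching uses the joint-absence count d while the other two do not, which is exactly why base rates can pull coefficients into different clusters.

```python
def contingency(x, y):
    """Counts a (1,1), b (1,0), c (0,1), d (0,0) for two binary attribute vectors."""
    a = sum(1 for xi, yi in zip(x, y) if xi == 1 and yi == 1)
    b = sum(1 for xi, yi in zip(x, y) if xi == 1 and yi == 0)
    c = sum(1 for xi, yi in zip(x, y) if xi == 0 and yi == 1)
    d = sum(1 for xi, yi in zip(x, y) if xi == 0 and yi == 0)
    return a, b, c, d

def jaccard(a, b, c, d):
    return a / (a + b + c)            # ignores joint absences

def simple_matching(a, b, c, d):
    return (a + d) / (a + b + c + d)  # counts joint absences as agreement

def dice(a, b, c, d):
    return 2 * a / (2 * a + b + c)

x = [1, 0, 1, 1, 0, 0, 1, 0]
y = [1, 0, 0, 1, 0, 1, 1, 0]
counts = contingency(x, y)
print(jaccard(*counts), simple_matching(*counts), dice(*counts))
```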

Journal ArticleDOI
TL;DR: In this article, the authors exploit the stochasticity during switching of probabilistic conductive bridging RAM (CBRAM) devices to reduce the size of Multiply and Accumulate (MAC) units by 5 orders of magnitude.
Abstract: Stochastic Computing (SC) is a computing paradigm that allows for the low-cost and low-power computation of various arithmetic operations using stochastic bit streams and digital logic. In contrast to conventional representation schemes used within the binary domain, the sequence of bit streams in the stochastic domain is inconsequential, and computation is usually non-deterministic. In this brief, we exploit the stochasticity during switching of probabilistic Conductive Bridging RAM (CBRAM) devices to efficiently generate stochastic bit streams in order to perform Deep Learning (DL) parameter optimization, reducing the size of Multiply and Accumulate (MAC) units by 5 orders of magnitude. We demonstrate that in using a 40-nm Complementary Metal Oxide Semiconductor (CMOS) process our scalable architecture occupies 1.55 mm$^2$ and consumes approximately 167 $\mu$W when optimizing parameters of a Convolutional Neural Network (CNN) while it is being trained for a character recognition task, observing no notable reduction in accuracy post-training.
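The CBRAM hardware is obviously not reproducible in software, but the stochastic-computing primitive the abstract relies on can be sketched in a few lines: a probability is represented by a random bit stream whose mean equals that probability, and multiplication of two independent streams reduces to a bitwise AND. Stream lengths and values below are illustrative.

```python
import random

def to_stream(p, length, rng=random):
    """Encode probability p as a stochastic bit stream of the given length."""
    return [1 if rng.random() < p else 0 for _ in range(length)]

def stream_value(bits):
    """Decode a stream back to a probability estimate (fraction of ones)."""
    return sum(bits) / len(bits)

def sc_multiply(s1, s2):
    """Multiplication in the stochastic domain is a bitwise AND of independent streams."""
    return [b1 & b2 for b1, b2 in zip(s1, s2)]

a, b = 0.6, 0.5
s1, s2 = to_stream(a, 4096), to_stream(b, 4096)
print(stream_value(sc_multiply(s1, s2)))  # ~0.30, i.e. approximately a * b
```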

Posted Content
TL;DR: In this paper, a mixture model framework that infers the chirp mass, mass ratio, and aligned-spin distributions of the binary black hole population was extended to also model the redshift evolution of the merger rate, and all the major one- and two-dimensional features in the binary black hole population are reported using the 69 gravitational wave signals detected with a false alarm rate below one per year in the third Gravitational Wave Transient Catalog.
Abstract: Vamana is a mixture model framework that infers the chirp mass, mass ratio, and aligned spin distributions of the binary black hole population. We extend the mixing components to also model the redshift evolution of the merger rate and report all the major one- and two-dimensional features in the binary black hole population using the 69 gravitational wave signals detected with a false alarm rate $<1\mathrm{yr}^{-1}$ in the third Gravitational Wave Transient Catalog. Endorsing our previous report and corroborating the recent report from the LIGO Scientific, Virgo, and KAGRA Collaborations, we observe that the chirp mass distribution has multiple peaks and a lack of mergers with chirp masses of $10 \mathrm{-} 12M_\odot$. In addition, we observe that aligned spins show a mass dependence, with heavier binaries exhibiting larger spins; that the mass ratio does not show a notable dependence on either the chirp mass or the aligned spin; and that the redshift evolution of the merger rate for the peaks in the mass distribution is disparate. These features possibly reflect the astrophysics associated with the binary black hole formation channels. However, additional observations are needed to improve our limited confidence in them.

Journal ArticleDOI
TL;DR: Wang et al. as mentioned in this paper proposed two binary imbalanced data classification methods based on diversity oversampling by generative models, using an extreme learning machine autoencoder and a generative adversarial network, respectively.

Posted Content
TL;DR: In this article, the performance of four quadratic unconstrained binary optimization problem solvers, namely D-Wave Hybrid Solver Service (HSS), Toshiba Simulated Bifurcation Machine (SBM), Fujitsu Digital Annealer (DA), and simulated annealing on a personal computer, was benchmarked.
Abstract: Recently, inspired by quantum annealing, many solvers specialized for unconstrained binary quadratic programming problems have been developed. For further improvement and application of these solvers, it is important to clarify the differences in their performance for various types of problems. In this study, the performance of four quadratic unconstrained binary optimization problem solvers, namely D-Wave Hybrid Solver Service (HSS), Toshiba Simulated Bifurcation Machine (SBM), Fujitsu Digital Annealer (DA), and simulated annealing on a personal computer, was benchmarked. The problems used for benchmarking were instances of real problems in MQLib, instances of the SAT-UNSAT phase transition point of random not-all-equal 3-SAT (NAE 3-SAT), and the Ising spin glass Sherrington-Kirkpatrick (SK) model. Concerning MQLib instances, the HSS performance ranked first; for NAE 3-SAT, DA performance ranked first; and regarding the SK model, SBM performance ranked first. These results may help understand the strengths and weaknesses of these solvers.
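None of the benchmarked solvers are reproduced here; the sketch below only shows what a QUBO instance looks like and solves a toy one by exhaustive enumeration, which is the reference point such solvers are compared against on small problems. The matrix values are illustrative.

```python
import itertools
import numpy as np

def qubo_energy(x, Q):
    """Energy x^T Q x of a binary vector x for QUBO matrix Q."""
    x = np.asarray(x)
    return float(x @ Q @ x)

def brute_force_qubo(Q):
    """Exhaustive minimization; only feasible for very small instances."""
    n = Q.shape[0]
    best = min(itertools.product([0, 1], repeat=n),
               key=lambda x: qubo_energy(x, Q))
    return best, qubo_energy(best, Q)

# A toy 3-variable instance (illustrative numbers only).
Q = np.array([[-1.0,  2.0,  0.0],
              [ 0.0, -1.0,  2.0],
              [ 0.0,  0.0, -1.0]])
print(brute_force_qubo(Q))  # -> ((1, 0, 1), -2.0)
```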

Posted Content
TL;DR: In this paper, the authors analyze the strengths and weaknesses of the standard Rayleigh criterion supplied with a Fisher matrix error estimation, and find that the criterion is useful, but too restrictive.
Abstract: Black hole spectroscopy is the proposal to observe multiple quasinormal modes in the ringdown of a binary black hole merger. In addition to the fundamental quadrupolar mode, overtones and higher harmonics may be present and detectable in the gravitational wave signal, allowing for tests of the no-hair theorem. We analyze in detail the strengths and weaknesses of the standard Rayleigh criterion supplied with a Fisher matrix error estimation, and we find that the criterion is useful, but too restrictive. Therefore we motivate the use of a conservative high Bayes factor threshold to obtain the black hole spectroscopy horizons of current and future detectors, i.e., the distance (averaged in sky location and binary inclination) up to which one or more additional modes can be detected and confidently distinguished from each other. We set up all of our searches for additional modes starting at $t = 10(M_1+M_2)$ after the peak amplitude in simulated signals of circular nonspinning binaries. An agnostic multimode analysis allows us to rank the subdominant modes: for nearly equal mass binaries we find $(\ell, m, n) = (2,2,1)$ and $(3,3,0)$ and, for very asymmetric binaries, $(3,3,0)$ and $(4,4,0)$, for the secondary and tertiary modes, respectively. At the current estimated rates for heavy stellar mass binary black hole mergers, with primary masses between 45 and 100 solar masses, we expect an event rate of mergers within the $(2,2,1)$ spectroscopy horizon of $0.03 - 0.10\ {\rm yr}^{-1}$ for LIGO at design sensitivity and $(0.6 - 2.4) \times 10^3\ {\rm yr}^{-1}$ for the future third generation ground-based detector Cosmic Explorer.

Journal ArticleDOI
TL;DR: In this article, a novel adaptive transfer function based on two linear functions, called the upgrade transfer function (UTF), is proposed to overcome the shortcomings of existing transfer functions; it adapts itself as the algorithm runs in order to switch from exploration to exploitation.

Journal ArticleDOI
TL;DR: A novel feature selection technique based on the Binary Harris Hawks Optimizer with a Time-Varying Scheme (BHHO-TVS) is proposed; it adopts a time-varying transfer function applied to leverage the influence of the location vector and to balance the exploration and exploitation power of the HHO.
Abstract: Data classification is a challenging problem that is very sensitive to noise and to the high dimensionality of the data. Reducing the model complexity can help to improve the accuracy of the classification model. Therefore, in this research, we propose a novel feature selection technique based on the Binary Harris Hawks Optimizer with a Time-Varying Scheme (BHHO-TVS). The proposed BHHO-TVS adopts a time-varying transfer function that is applied to leverage the influence of the location vector and to balance the exploration and exploitation power of the HHO. Eighteen well-known datasets provided by the UCI repository were utilized to show the significance of the proposed approach. The reported results show that BHHO-TVS outperforms BHHO with traditional binarization schemes as well as other binary feature selection methods such as the binary gravitational search algorithm (BGSA), binary particle swarm optimization (BPSO), the binary bat algorithm (BBA), the binary whale optimization algorithm (BWOA), and the binary salp swarm algorithm (BSSA). Compared with other similar feature selection approaches introduced in previous studies, the proposed method achieves the best accuracy rates on 67% of the datasets.
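The exact time-varying transfer function of BHHO-TVS is not reproduced here; the sketch below shows one common way such schemes are built, a V-shaped transfer function whose control parameter is annealed over the iterations so that the mapping from step size to flip probability changes as the search progresses. The functional form and the schedule are assumptions, not the paper's definition.

```python
import math

def tv_transfer(v, t, t_max, tau_max=4.0, tau_min=0.01):
    """Time-varying V-shaped transfer function (illustrative).

    The control parameter tau decreases linearly with iteration t, so the curve
    sharpens over the run and the same step size maps to a different flip
    probability early versus late; this is how such schemes shift the balance
    between exploration and exploitation. BHHO-TVS may use a different form.
    """
    tau = tau_max - (tau_max - tau_min) * t / t_max
    return abs(math.tanh(v / tau))

# The same step size yields a larger flip probability as tau shrinks.
print(tv_transfer(0.5, t=1,  t_max=100))
print(tv_transfer(0.5, t=99, t_max=100))
```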

Journal ArticleDOI
TL;DR: The authors propose $\ell_1$ norm regularized quadratic surface support vector machine models for binary classification in supervised learning and establish desired theoretical properties, including the existence and uniqueness of the optimal solution, reduction to the standard SVMs over (almost) linearly separable data sets, and detection of the true sparsity pattern over (almost) quadratically separable data sets.
Abstract: We propose $\ell_1$ norm regularized quadratic surface support vector machine models for binary classification in supervised learning. We establish some desired theoretical properties, including the existence and uniqueness of the optimal solution, reduction to the standard SVMs over (almost) linearly separable data sets, and detection of the true sparsity pattern over (almost) quadratically separable data sets if the penalty parameter on the $\ell_1$ norm is large enough. We also demonstrate their promising practical efficiency by conducting various numerical experiments on both synthetic and publicly available benchmark data sets.

Journal ArticleDOI
TL;DR: Four new transfer functions, an improved speed update scheme, and a second-stage position update method are proposed for the binary pigeon-inspired optimization (BPIO) algorithm to improve its solution quality.
Abstract: The Pigeon-Inspired Optimization (PIO) algorithm is an intelligent algorithm inspired by the behavior of pigeons returning to the nest. The binary pigeon-inspired optimization (BPIO) algorithm is a binary version of the PIO algorithm, and it can be used to optimize binary application problems. The transfer function plays a very important part in the BPIO algorithm. To improve the solution quality of the BPIO algorithm, this paper proposes four new transfer functions, an improved speed update scheme, and a second-stage position update method. The original BPIO algorithm easily falls into local optima, so a new speed update equation is proposed. In the simulation experiments, the improved BPIO is compared with binary particle swarm optimization (BPSO) and the binary grey wolf optimizer (BGWO). In addition, benchmark test functions, statistical analysis, Friedman's test, and the Wilcoxon rank-sum test are used to show that the improved algorithm is quite effective, and they also verify how the speed of dynamic movement should be set. Finally, feature selection was successfully implemented on UCI data sets, and higher classification accuracy was obtained with fewer features.

Journal ArticleDOI
TL;DR: In this article, effective-one-body waveforms with the stationary phase approximation were used to obtain frequency-domain multipolar approximants valid from any low frequency to merger.
Abstract: The inference of binary neutron star properties from gravitational-wave observations requires the generation of millions of waveforms, each one spanning about three orders of magnitude in frequency. Thus, waveform models must be efficiently generated and, at the same time, be faithful from the post-Newtonian quasi-adiabatic inspiral up to the merger regime. A simple solution to this problem is to combine effective-one-body waveforms with the stationary phase approximation to obtain frequency-domain multipolar approximants valid from any low frequency to merger. We demonstrate that effective-one-body frequency-domain waveforms generated in the post-adiabatic approximation are computationally competitive with current phenomenological and surrogate models, (virtually) arbitrarily long, and faithful up to merger for any binary parameters. The same method can also be used to efficiently generate intermediate-mass binary black hole inspiral waveforms detectable by space-based interferometers.

Journal ArticleDOI
TL;DR: Calculation results demonstrate that EBCSA is capable of searching for the optimal network configuration with a greater success rate, better solution quality, and a smaller average number of convergence iterations than binary CSA, the binary coyote optimization algorithm, the binary genetic algorithm, and binary particle swarm optimization.

Journal ArticleDOI
TL;DR: Pseudorandom binary sequences applied to dc power distribution systems are reviewed and their advantages as well as limitations are discussed; being deterministic and binary, such sequences allow fast response to system variations through, for example, adaptive controllers.
Abstract: Frequency-domain identification based on wideband techniques has become a popular method in the analysis and control of various dc power distribution systems. In the method, a single converter or a system is perturbed by an external wideband voltage or current injection, the resulting voltage or current responses are measured, and Fourier analysis is applied to extract the spectral information of the measured variables. Most often, the system or converter input and output impedances and the loop gain are the quantities of interest. One class of perturbation signals, pseudorandom binary sequences, has become widely used because most of such signals can be generated using simple shift-register circuitry. As the signals are deterministic and binary, they are well suited to perform measurements on power-converter systems in real time, allowing fast response to system variations through, for example, adaptive controllers. This article reviews the pseudorandom binary sequences applied to dc power distribution systems and discusses their advantages as well as limitations. The conventional maximum-length binary sequence, inverse-repeat binary sequence, discrete-interval binary sequence, and orthogonal binary sequences are considered. Several experimental results from various dc power systems are presented and used to demonstrate the applicability of the discussed methods.
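As an illustration of the shift-register generation mentioned above, a maximum-length binary sequence can be produced by a Fibonacci linear-feedback shift register; with a primitive feedback polynomial an n-bit register yields a period of 2^n - 1. The 4-bit example below is for illustration only; practical injections use much longer registers and typically map the 0/1 bits to plus/minus amplitude levels before perturbing the converter.

```python
def mlbs(taps, n_bits, seed=1):
    """Maximum-length binary sequence from a Fibonacci linear-feedback shift register.

    `taps` are the feedback positions of the characteristic polynomial; with a
    primitive polynomial the period is 2**n_bits - 1. The taps below (4, 3),
    corresponding to x^4 + x^3 + 1, are chosen for illustration only.
    """
    state = seed
    out = []
    for _ in range(2 ** n_bits - 1):
        out.append(state & 1)                        # output the least significant bit
        fb = 0
        for t in taps:
            fb ^= (state >> (n_bits - t)) & 1        # XOR of the tapped bits
        state = (state >> 1) | (fb << (n_bits - 1))  # shift right, feed back into the MSB
    return out

seq = mlbs(taps=(4, 3), n_bits=4)
print(len(seq), seq)  # a period-15 sequence of 0/1 bits
```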

Posted Content
TL;DR: In this paper, a Fisher matrix method including the effect of earth rotation was used to estimate the sky localization uncertainty for binary neutron star mergers observed by networks of third-generation detectors, and it was found that the 1ET2CE network can detect 7.2% more of the simulated astrophysical population than the 2ET1CE network.
Abstract: This work characterises the sky localization and early warning performance of networks of third generation gravitational wave detectors, consisting of different combinations of detectors with either the Einstein Telescope or Cosmic Explorer configuration at sites in North America, Europe, and Australia. Using a Fisher matrix method which includes the effect of earth rotation, we estimate the sky localization uncertainty for $1.4\,M_\odot$-$1.4\,M_\odot$ binary neutron star mergers at distances of $40\,\text{Mpc}$, $200\,\text{Mpc}$, $400\,\text{Mpc}$, $800\,\text{Mpc}$, and $1600\,\text{Mpc}$, and for an assumed astrophysical population up to a redshift of 2, to characterize its performance for binary neutron star observations. We find that, for binary neutron star mergers at $200\,\text{Mpc}$ and a network consisting of the Einstein Telescope, Cosmic Explorer, and an extra Einstein Telescope-like detector in Australia (2ET1CE), the upper limit of the size of the 90% credible region for the best localized 90% of signals is $0.51\,\text{deg}^2$. For the simulated astrophysical distribution, this upper limit is $183.58\,\text{deg}^2$. If the Einstein Telescope-like detector in Australia is replaced with a Cosmic Explorer-like detector (1ET2CE), for the $200\,\text{Mpc}$ case the upper limit is $0.36\,\text{deg}^2$, while for the astrophysical distribution it is $113.55\,\text{deg}^2$. We note that the 1ET2CE network can detect 7.2% more of the simulated astrophysical population than the 2ET1CE network. In terms of early warning performance, we find that the 2ET1CE and 1ET2CE networks can both provide early warnings of the order of 1 hour prior to merger with sky localization uncertainties of 30 square degrees or less. Our study concludes that the 1ET2CE network is a good compromise between binary neutron star detection rate, sky localization, and early warning capabilities.

Journal ArticleDOI
TL;DR: In this paper, the transformation invariance of binary local descriptors is ensured by projecting the original patches and their transformed counterparts into an identical high-dimensional feature space and an identical low-dimensional descriptor space simultaneously.
Abstract: Despite the great success achieved by prevailing binary local descriptors, they still suffer from two problems: 1) vulnerability to geometric transformations; 2) the lack of an effective treatment of the highly correlated bits that are generated by directly applying the scheme of image hashing. To tackle both limitations, we propose an unsupervised Transformation-invariant Binary Local Descriptor learning method (TBLD). Specifically, the transformation invariance of binary local descriptors is ensured by projecting the original patches and their transformed counterparts into an identical high-dimensional feature space and an identical low-dimensional descriptor space simultaneously. Meanwhile, it enforces dissimilar image patches to have distinctive binary local descriptors. Moreover, to reduce high correlations between bits, we propose a bottom-up learning strategy, termed the Adversarial Constraint Module, where low-coupling binary codes are introduced externally to guide the learning of binary local descriptors. With the aid of the Wasserstein loss, the framework is optimized to encourage the distribution of the generated binary local descriptors to mimic that of the introduced low-coupling binary codes, eventually making the former more low-coupling. Experimental results on three benchmark datasets demonstrate the superiority of the proposed method over state-of-the-art methods. The project page is available at https://github.com/yoqim/TBLD .
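The TBLD training pipeline is not reproduced here (see the linked project page); the sketch below only illustrates how learned binary local descriptors are typically used afterwards, nearest-neighbour matching under the Hamming distance. Descriptor lengths and values are random placeholders.

```python
import numpy as np

def hamming(d1, d2):
    """Hamming distance between two binary descriptors (arrays of 0/1)."""
    return int(np.count_nonzero(d1 != d2))

def match(descs_a, descs_b):
    """Nearest-neighbour matching of two descriptor sets by Hamming distance."""
    matches = []
    for i, da in enumerate(descs_a):
        dists = [hamming(da, db) for db in descs_b]
        j = int(np.argmin(dists))
        matches.append((i, j, dists[j]))  # (index in A, best index in B, distance)
    return matches

rng = np.random.default_rng(0)
A = rng.integers(0, 2, size=(3, 256))   # 3 descriptors, 256 bits each
B = rng.integers(0, 2, size=(5, 256))
print(match(A, B))
```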

Posted Content
Abstract: LIGO and Virgo have initiated the era of gravitational-wave (GW) astronomy; but in order to fully explore the GW frequency spectrum, we must turn our attention to innovative techniques for GW detection. One such approach is to use binary systems as dynamical GW detectors by studying the subtle perturbations to their orbits caused by impinging GWs. We present a powerful new formalism for calculating the orbital evolution of a generic binary coupled to a stochastic background of GWs, deriving from first principles a secularly-averaged Fokker-Planck equation which fully characterises the statistical evolution of all six of the binary's orbital elements. We also develop practical tools for numerically integrating this equation, and derive the necessary statistical formalism to search for GWs in observational data from binary pulsars and laser-ranging experiments.

Posted Content
TL;DR: In this paper, the authors provide a detailed analysis of the Travelling Salesman Problem with Time Windows (TSPTW) in the context of solving it on a quantum computer and introduce quadratic unconstrained binary optimization and higher order binary optimization formulations of this problem.
Abstract: Quantum computing offers a novel perspective for solving combinatorial optimization problems. To fully explore the possibilities offered by quantum computers, the problems need to be formulated as unconstrained binary models, taking into account the limitations and advantages of quantum devices. In this work, we provide a detailed analysis of the Travelling Salesman Problem with Time Windows (TSPTW) in the context of solving it on a quantum computer. We introduce quadratic unconstrained binary optimization and higher-order binary optimization formulations of this problem. We demonstrate the advantages of edge-based and node-based formulations of the TSPTW problem. Additionally, we investigate the experimental realization of the presented methods on a quantum annealing device. The provided results pave the way for utilizing quantum computers for a variety of real-world tasks which can be cast in the form of the Travelling Salesman Problem with Time Windows.
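The paper's TSPTW formulations are not reproduced here; the sketch below builds only the node-based QUBO for a plain TSP, with binary variables x[i][t] meaning city i occupies tour position t, quadratic penalties for the one-hot constraints, and a tour-length objective. The time-window terms (and the higher-order formulation) are omitted, and the penalty weight and distances are illustrative.

```python
import itertools

def tsp_node_qubo(dist, penalty):
    """Node-based QUBO for a plain TSP: x[i][t] = 1 iff city i is at tour position t.

    Only the one-hot constraints and the tour-length objective are encoded here;
    the time-window terms of TSPTW are omitted.
    """
    n = len(dist)

    def idx(i, t):
        return i * n + t

    Q = [[0.0] * (n * n) for _ in range(n * n)]

    def add(p, q, w):  # accumulate weights on the upper triangle of Q
        p, q = min(p, q), max(p, q)
        Q[p][q] += w

    # Linear terms from expanding both (sum - 1)^2 one-hot constraints.
    for i in range(n):
        for t in range(n):
            add(idx(i, t), idx(i, t), -2 * penalty)
    # Each city appears in exactly one position.
    for i in range(n):
        for t1, t2 in itertools.combinations(range(n), 2):
            add(idx(i, t1), idx(i, t2), 2 * penalty)
    # Each position holds exactly one city.
    for t in range(n):
        for i1, i2 in itertools.combinations(range(n), 2):
            add(idx(i1, t), idx(i2, t), 2 * penalty)
    # Objective: distance between consecutive positions of a cyclic tour.
    for t in range(n):
        for i in range(n):
            for j in range(n):
                if i != j:
                    add(idx(i, t), idx(j, (t + 1) % n), dist[i][j])
    return Q

dist = [[0, 2, 9], [2, 0, 6], [9, 6, 0]]
Q = tsp_node_qubo(dist, penalty=20.0)
print(len(Q))  # 9x9 QUBO matrix for a 3-city toy instance
```

For such a toy size, feeding this matrix (converted to an array) into any QUBO solver or an exhaustive search should recover an optimal tour once the penalty weight dominates the distance scale.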