
Showing papers on "Disjoint sets published in 2020"


Journal ArticleDOI
TL;DR: In this article, the authors show global uniqueness in the fractional Calderon problem with a single measurement and with data on arbitrary disjoint subsets of the exterior, and give a constructive procedure for determining an unknown potential from a single exterior measurement, based on constructive versions of the unique continuation result.

102 citations


Proceedings Article
07 Feb 2020
TL;DR: In this article, the authors propose an alternative notion of distance between datasets that is model-agnostic, does not involve training, can compare datasets even if their label sets are completely disjoint and has solid theoretical footing.
Abstract: The notion of task similarity is at the core of various machine learning paradigms, such as domain adaptation and meta-learning. Current methods to quantify it are often heuristic, make strong assumptions on the label sets across the tasks, and many are architecture-dependent, relying on task-specific optimal parameters (e.g., requiring a model to be trained on each dataset). In this work we propose an alternative notion of distance between datasets that (i) is model-agnostic, (ii) does not involve training, (iii) can compare datasets even if their label sets are completely disjoint and (iv) has solid theoretical footing. This distance relies on optimal transport, which provides it with rich geometry awareness, interpretable correspondences and well-understood properties. Our results show that this novel distance provides meaningful comparison of datasets, and correlates well with transfer learning hardness across various experimental settings and datasets.

87 citations
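
The optimal-transport core of such a dataset distance can be sketched in a few lines. This is not the paper's exact construction (which also compares label distributions); it is a minimal sketch, assuming equal-size samples with uniform weights, in which case the OT linear program reduces to an assignment problem. All names here (`ot_distance`, the toy datasets) are ours.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def ot_distance(X, Y):
    """Exact optimal-transport (Wasserstein-2) distance between two
    equal-size point clouds with uniform weights.  With uniform
    marginals and |X| == |Y|, Birkhoff's theorem reduces the OT
    linear program to an assignment problem."""
    # Pairwise squared Euclidean cost matrix.
    cost = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    rows, cols = linear_sum_assignment(cost)
    return float(np.sqrt(cost[rows, cols].mean()))

rng = np.random.default_rng(0)
A = rng.normal(0.0, 1.0, size=(64, 2))   # toy "dataset" A
B = rng.normal(3.0, 1.0, size=(64, 2))   # toy "dataset" B, shifted mean
d_same = ot_distance(A, A)               # zero: identity matching is free
d_diff = ot_distance(A, B)               # roughly the distance between means
```

Note that this requires no model and no training, which is the key property (i)-(ii) claimed in the abstract; comparing disjoint label sets additionally requires transporting label-conditional distributions, which the sketch omits.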


Journal ArticleDOI
TL;DR: In this paper, a simple two-loop radiative neutrino mass model is proposed that provides a combined explanation of several apparently disjoint phenomena (the muon anomalous magnetic moment and B-physics anomalies) within a single framework.
Abstract: Motivated by the long-standing tension in the muon anomalous magnetic moment (AMM) and persistent observations of B-physics anomalies in the $R_{D^{(*)}}$ and $R_{K^{(*)}}$ ratios, we construct a simple two-loop radiative neutrino mass model and propose a combined explanation of all these apparently disjoint phenomena within this framework. Our proposed model consists of two scalar leptoquarks (LQs), an $SU(2)_L$ singlet $S_1 \sim (\bar{3}, 1, 1/3)$ and an $SU(2)_L$ triplet $S_3 \sim (\bar{3}, 3, 1/3)$, which accommodate the $R_{D^{(*)}}$ and $R_{K^{(*)}}$ anomalies, respectively. The muon receives a chirality-enhanced contribution to its $g-2$ due to the presence of the $S_1$ LQ, which accounts for the observed deviation from the Standard Model prediction. Furthermore, we introduce an $SU(2)_L$ singlet scalar diquark $\omega \sim (\bar{6}, 1, 2/3)$, which is necessary to break lepton number and generate neutrino mass radiatively with the aid of the $S_1$ and $S_3$ LQs. We perform a detailed phenomenological analysis of this setup and demonstrate its viability by providing benchmark points where a fit to the neutrino oscillation data, together with proper explanations of the muon AMM puzzle and the flavor anomalies, is accomplished while simultaneously meeting all other flavor-violation and collider bounds.

83 citations


Journal ArticleDOI
TL;DR: A multiset-fuzzy-decision-theoretic rough set model is created, a model which considers fuzzy relations in multiset-valued information tables, and two methods are introduced that compute expected costs from loss functions.

62 citations


Journal ArticleDOI
TL;DR: A signal-dependent feature graph Laplacian regularizer (SDFGLR) is designed that assumes surface normals computed from point coordinates are piecewise smooth with respect to a signal-dependent graph Laplacian matrix.
Abstract: A point cloud is a collection of 3D coordinates that are discrete geometric samples of an object's 2D surfaces. Imperfection in the acquisition process means that point clouds are often corrupted with noise. Building on recent advances in graph signal processing, we design local algorithms for 3D point cloud denoising. Specifically, we design a signal-dependent feature graph Laplacian regularizer (SDFGLR) that assumes surface normals computed from point coordinates are piecewise smooth with respect to a signal-dependent graph Laplacian matrix. Using SDFGLR as a signal prior, we formulate an optimization problem with a general $\ell_p$-norm fidelity term that can explicitly remove only two types of additive noise: small but non-sparse noise like Gaussian (using the $\ell_2$ fidelity term) and large but sparser noise like Laplacian (using the $\ell_1$ fidelity term). To establish a linear relationship between normals and 3D point coordinates, we first perform bipartite graph approximation to divide the point cloud into two disjoint node sets (red and blue). We then optimize the red and blue nodes' coordinates alternately. For the $\ell_2$-norm fidelity term, we iteratively solve an unconstrained quadratic programming (QP) problem, efficiently computed using conjugate gradient with a bounded condition number to ensure numerical stability. For the $\ell_1$-norm fidelity term, we iteratively minimize an $\ell_1$-$\ell_2$ cost function using accelerated proximal gradient (APG), where a good step size is chosen via Lipschitz continuity analysis. Finally, we propose simple mean and median filters for flat patches of a given point cloud to estimate the noise variance given the noise type, which in turn is used to compute a weight parameter trading off the fidelity term and signal prior in the problem formulation. Extensive experiments show state-of-the-art denoising performance among local methods using our proposed algorithms.

59 citations
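
The core of graph-Laplacian-regularized denoising with an $\ell_2$ fidelity term can be illustrated on a toy 1-D signal. This is a minimal sketch, not the paper's SDFGLR (no surface normals, no bipartite split); it only shows the closed-form solve behind the quadratic step, and all names are ours.

```python
import numpy as np

def path_laplacian(n):
    """Combinatorial Laplacian L = D - A of a path graph on n nodes."""
    A = np.zeros((n, n))
    for i in range(n - 1):
        A[i, i + 1] = A[i + 1, i] = 1.0
    return np.diag(A.sum(axis=1)) - A

def glr_denoise(y, L, gamma):
    """l2-fidelity graph-Laplacian-regularized denoising:
         minimize ||x - y||^2 + gamma * x^T L x
       whose closed-form solution solves (I + gamma * L) x = y."""
    n = len(y)
    return np.linalg.solve(np.eye(n) + gamma * L, y)

rng = np.random.default_rng(1)
clean = np.sin(np.linspace(0, np.pi, 50))          # smooth ground truth
noisy = clean + rng.normal(0, 0.2, size=50)        # small Gaussian noise
L = path_laplacian(50)
denoised = glr_denoise(noisy, L, gamma=3.0)
```

The filter $(I + \gamma L)^{-1}$ attenuates high graph frequencies strongly while leaving smooth components nearly untouched, which is why the smoothness prior removes small non-sparse noise; the paper solves the analogous system with conjugate gradient instead of a dense solve.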


Posted Content
TL;DR: The lifted disjoint paths tracker achieves nearly optimal assignments with respect to input detections and leads on all three main benchmarks of the MOT challenge, improving significantly over state-of-the-art.
Abstract: We present an extension to the disjoint paths problem in which additional \emph{lifted} edges are introduced to provide path connectivity priors. We call the resulting optimization problem the lifted disjoint paths problem. We show that this problem is NP-hard by reduction from integer multicommodity flow and 3-SAT. To enable practical global optimization, we propose several classes of linear inequalities that produce a high-quality LP-relaxation. Additionally, we propose efficient cutting plane algorithms for separating the proposed linear inequalities. The lifted disjoint path problem is a natural model for multiple object tracking and allows an elegant mathematical formulation for long range temporal interactions. Lifted edges help to prevent id switches and to re-identify persons. Our lifted disjoint paths tracker achieves nearly optimal assignments with respect to input detections. As a consequence, it leads on all three main benchmarks of the MOT challenge, improving significantly over state-of-the-art.

59 citations


Journal ArticleDOI
TL;DR: In this paper, a solution to the persistent tensions in the decay observables is presented by introducing an $SU(2)_L$ doublet and an $SU(2)_L$ triplet scalar leptoquark (LQ) that reside at the TeV energy scale.
Abstract: In this work, we present a solution to the persistent tensions in the decay observables $R_{D^{(*)}}$ and $R_{K^{(*)}}$ by introducing an $SU(2)_L$ doublet and an $SU(2)_L$ triplet scalar leptoquark (LQ) that reside at the TeV energy scale. Neutrinos that remain massless in the Standard Model receive naturally small masses at one-loop level via the propagation of the same LQs inside the loop. Such a common origin of apparently disjoint phenomenological observations is appealing, and we perform a comprehensive analysis of this setup. We identify the minimal Yukawa textures required to accommodate these flavor anomalies and to successfully incorporate neutrino oscillation data while being consistent with all experimental constraints. This scenario has the potential to be tested at the experiments by future improved measurements of the lepton flavor violating processes. Furthermore, proper explanations of these flavor anomalies predict TeV scale LQs that are directly accessible at the LHC.

56 citations


Journal ArticleDOI
TL;DR: In this article, the authors consider the double-trace operator and show that the mutual information and reflected entropy diverge for disjoint intervals when the separation distance approaches a minimum, finite value that depends solely on the deformation parameter.
Abstract: We consider fine-grained probes of the entanglement structure of two dimensional conformal field theories deformed by the irrelevant double-trace operator $T\bar{T}$ and its closely related but nonetheless distinct single-trace counterpart. For holographic conformal field theories, these deformations can be interpreted as modifications of bulk physics in the ultraviolet region of anti-de Sitter space. Consequently, we can use the Ryu-Takayanagi formula and its generalizations to mixed state entanglement measures to test highly nontrivial consistency conditions. In general, the agreement between bulk and boundary quantities requires the equivalence of partition functions on manifolds of arbitrary genus. For the single-trace deformation, which is dual to an asymptotically linear dilaton geometry, we find that the mutual information and reflected entropy diverge for disjoint intervals when the separation distance approaches a minimum, finite value that depends solely on the deformation parameter. This implies that the mutual information fails to serve as a geometric regulator which is related to the breakdown of the split property at the inverse Hagedorn temperature. In contrast, for the double-trace deformation, which is dual to anti-de Sitter space with a finite radial cutoff, we find all divergences to disappear including the standard quantum field theory ultraviolet divergence that is generically seen as disjoint intervals become adjacent. We furthermore compute reflected entropy in conformal perturbation theory. While we find formally similar behavior between bulk and boundary computations, we find quantitatively distinct results. We comment on the interpretation of these disagreements and the physics that must be altered to restore consistency. We also briefly discuss the $T{\bar J}$ and $J{\bar T}$ deformations.

47 citations


Proceedings Article
12 Jul 2020
TL;DR: In this article, the authors extend the disjoint paths problem with additional lifted edges that encode path-connectivity priors; the resulting lifted disjoint paths tracker helps to prevent id switches and to re-identify persons.
Abstract: We present an extension to the disjoint paths problem in which additional \emph{lifted} edges are introduced to provide path connectivity priors. We call the resulting optimization problem the lifted disjoint paths problem. We show that this problem is NP-hard by reduction from integer multicommodity flow and 3-SAT. To enable practical global optimization, we propose several classes of linear inequalities that produce a high-quality LP-relaxation. Additionally, we propose efficient cutting plane algorithms for separating the proposed linear inequalities. The lifted disjoint path problem is a natural model for multiple object tracking and allows an elegant mathematical formulation for long range temporal interactions. Lifted edges help to prevent id switches and to re-identify persons. Our lifted disjoint paths tracker achieves nearly optimal assignments with respect to input detections. As a consequence, it leads on all three main benchmarks of the MOT challenge, improving significantly over state-of-the-art.

45 citations
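
While the lifted variant is NP-hard, the plain vertex-disjoint paths subproblem is polynomial: by Menger's theorem, the maximum number of internally vertex-disjoint s-t paths equals a max-flow after node splitting. A minimal sketch, unrelated to the paper's LP machinery (all function names are ours):

```python
from collections import defaultdict, deque

def max_flow(cap, s, t):
    """Edmonds-Karp max flow on a capacity dict {u: {v: capacity}}."""
    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph.
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in cap[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        # Augment by 1 (every path here has a unit-capacity edge arc).
        v = t
        while parent[v] is not None:
            u = parent[v]
            cap[u][v] -= 1
            cap[v][u] = cap[v].get(u, 0) + 1
            v = u
        flow += 1

def disjoint_paths(edges, s, t):
    """Max number of internally vertex-disjoint s-t paths (Menger),
    via node splitting: v -> (v_in, v_out) with unit capacity."""
    cap = defaultdict(dict)
    nodes = {s, t} | {u for e in edges for u in e}
    for v in nodes:
        cap[(v, 'in')][(v, 'out')] = 10**9 if v in (s, t) else 1
    for u, v in edges:            # undirected edges, unit capacity
        cap[(u, 'out')][(v, 'in')] = 1
        cap[(v, 'out')][(u, 'in')] = 1
    return max_flow(cap, (s, 'in'), (t, 'out'))
```

For example, the complete graph $K_4$ admits three internally vertex-disjoint paths between any two vertices; it is the lifted edges, not this base problem, that make the paper's formulation hard.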


Journal ArticleDOI
TL;DR: It is shown that Hamiltonian Cycle (and hence Hamiltonian Path) is NP-hard on graphs of linear mim-width 1; this further hints at the expressive power of the mim-width parameter.

36 citations


Journal ArticleDOI
TL;DR: A deep multimodal transfer learning (DMTL) approach to transfer the knowledge from the previously labeled categories to improve the retrieval performance on the unlabeled new categories (target domain) and employs a joint learning paradigm to transfer knowledge by assigning a pseudolabel to each target sample.
Abstract: Cross-modal retrieval (CMR) enables flexible retrieval experience across different modalities (e.g., texts versus images), which maximally benefits us from the abundance of multimedia data. Existing deep CMR approaches commonly require a large amount of labeled data for training to achieve high performance. However, it is time-consuming and expensive to annotate the multimedia data manually. Thus, how to transfer valuable knowledge from existing annotated data to new data, especially from the known categories to new categories, becomes attractive for real-world applications. To achieve this end, we propose a deep multimodal transfer learning (DMTL) approach to transfer the knowledge from the previously labeled categories (source domain) to improve the retrieval performance on the unlabeled new categories (target domain). Specifically, we employ a joint learning paradigm to transfer knowledge by assigning a pseudolabel to each target sample. During training, the pseudolabel is iteratively updated and passed through our model in a self-supervised manner. At the same time, to reduce the domain discrepancy of different modalities, we construct multiple modality-specific neural networks to learn a shared semantic space for different modalities by enforcing the compactness of homoinstance samples and the scatters of heteroinstance samples. Our method is remarkably different from most of the existing transfer learning approaches. To be specific, previous works usually assume that the source domain and the target domain have the same label set. In contrast, our method considers a more challenging multimodal learning situation where the label sets of the two domains are different or even disjoint. Experimental studies on four widely used benchmarks validate the effectiveness of the proposed method in multimodal transfer learning and demonstrate its superior performance in CMR compared with 11 state-of-the-art methods.

Journal ArticleDOI
TL;DR: It is proved that all individuals can be partitioned into two disjoint clusters within a fixed time, and the two clusters move in opposite directions with the same velocity.

Proceedings Article
11 Jun 2020
TL;DR: Pointer Graph Networks (PGNs) are introduced which augment sets or graphs with additional inferred edges for improved model expressivity and can learn parallelisable variants of pointer-based data structures, namely disjoint set unions and link/cut trees.
Abstract: Graph neural networks (GNNs) are typically applied to static graphs that are assumed to be known upfront. This static input structure is often informed purely by insight of the machine learning practitioner, and might not be optimal for the actual task the GNN is solving. In absence of reliable domain expertise, one might resort to inferring the latent graph structure, which is often difficult due to the vast search space of possible graphs. Here we introduce Pointer Graph Networks (PGNs) which augment sets or graphs with additional inferred edges for improved model generalisation ability. PGNs allow each node to dynamically point to another node, followed by message passing over these pointers. The sparsity of this adaptable graph structure makes learning tractable while still being sufficiently expressive to simulate complex algorithms. Critically, the pointing mechanism is directly supervised to model long-term sequences of operations on classical data structures, incorporating useful structural inductive biases from theoretical computer science. Qualitatively, we demonstrate that PGNs can learn parallelisable variants of pointer-based data structures, namely disjoint set unions and link/cut trees. PGNs generalise out-of-distribution to 5x larger test inputs on dynamic graph connectivity tasks, outperforming unrestricted GNNs and Deep Sets.
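
The first of those data structures, disjoint set union (union-find), is short enough to state in full. This is the classical sequential baseline that PGNs are trained to imitate, not the PGN model itself:

```python
class DisjointSetUnion:
    """Union-find with path halving and union by size; near-constant
    amortized time per operation."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n

    def find(self, x):
        # Path halving: point every other node at its grandparent.
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False          # already in the same set
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra       # attach the smaller tree to the larger
        self.parent[rb] = ra
        self.size[ra] += self.size[rb]
        return True

dsu = DisjointSetUnion(6)
dsu.union(0, 1); dsu.union(1, 2); dsu.union(3, 4)
```

A dynamic-connectivity query is then just a pair of `find` calls; the pointer rewiring done by `find` and `union` is exactly the kind of long-term pointer manipulation the PGN supervision targets.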

Journal ArticleDOI
TL;DR: A surrogate modeling scheme based on Grassmannian manifold learning to be used for cost-efficient predictions of high-dimensional stochastic systems is introduced, where the proposed surrogate is used to predict the full strain field of a material specimen under large shear strains.

Journal ArticleDOI
TL;DR: A comprehensible method for measuring the fuzzy entropy of a shadowed set, i.e., interval fuzzy entropy, is defined; instance analysis of different types of representative membership functions and many experiments show that the fuzzy entropy loss of the interval shadowed set is lower than that of the traditional shadowed set of a fuzzy set.
Abstract: Shadowed sets, proposed by Pedrycz, provide a three-way approximation scheme for transforming the universe of a fuzzy set into three disjoint areas, i.e., the elevated area, reduced area, and shadow area. To calculate a pair of decision-making thresholds, an analytic method was proposed by solving a minimization problem on the uncertainty arising from the three areas. However, some uncertainty is lost in the process of constructing the shadowed set model using Pedrycz's method. Moreover, few references exist on how to measure the uncertainty of shadowed sets. In this article, a comprehensible method for measuring the fuzzy entropy of a shadowed set, i.e., interval fuzzy entropy, is defined. Based on the interval fuzzy entropy, a new shadowed set model, namely, interval shadowed sets, is proposed. Compared with Pedrycz's model, the main difference is that the range of the shadow area in this model is $[\beta, \alpha]$ $(0 \leq \beta < \alpha \leq 1)$ rather than $[0, 1]$. By solving a fuzzy entropy loss-minimization problem, a pair of optimal thresholds, $\alpha$ and $\beta$, can be obtained. Finally, the results of the instance analysis of different types of representative membership functions and many experiments show that the fuzzy entropy loss of the interval shadowed set is lower than that of the traditional shadowed set of a fuzzy set. These results enrich shadowed set theory from a new perspective.

Journal ArticleDOI
TL;DR: In this paper, it was shown that the probability of $k$ disjoint polymers crossing a unit-order region while beginning and ending within a short distance $\epsilon$ of each other is bounded above by $\epsilon^{(k^2 - 1)/2 \, + \, o(1)}$.
Abstract: In last passage percolation models lying in the KPZ universality class, long maximizing paths have a typical deviation from the linear interpolation of their endpoints governed by the two-thirds power of the interpolating distance. This two-thirds power dictates a choice of scaled coordinates, in which these maximizers, now called polymers, cross unit distances with unit-order fluctuations. In this article, we consider Brownian last passage percolation in these scaled coordinates, and prove that the probability of the presence of $k$ disjoint polymers crossing a unit-order region while beginning and ending within a short distance $\epsilon$ of each other is bounded above by $\epsilon^{(k^2 - 1)/2 \, + \, o(1)}$. This result, which we conjecture to be sharp, yields understanding of the uniform nature of the coalescence structure of polymers, and plays a foundational role in [Ham17c] in proving comparison on unit-order scales to Brownian motion for polymer weight profiles from general initial data. The present paper also contains an on-scale articulation of the two-thirds power law for polymer geometry: polymers fluctuate by $\epsilon^{2/3}$ on short scales $\epsilon$.

Journal ArticleDOI
TL;DR: It is shown that when the intersection of the intervals imposed by the agents is nonempty, the resulting constrained consensus problem must converge to a common value inside that intersection, and it is proven that there is a unique equilibrium that is globally attractive if the constraint intervals are pairwise disjoint.
Abstract: The constrained consensus problem considered in this paper, denoted interval consensus, is characterized by the fact that each agent can impose a lower and upper bound on the achievable consensus value. Such constraints can be encoded in the consensus dynamics by saturating the values that an agent transmits to its neighboring nodes. We show in the paper that when the intersection of the intervals imposed by the agents is nonempty, the resulting constrained consensus problem must converge to a common value inside that intersection. In our algorithm, convergence happens in a fully distributed manner, and without need of sharing any information on the individual constraining intervals. When the intersection of the intervals is an empty set, the intrinsic nonlinearity of the network dynamics raises new challenges in understanding the node state evolution. Using Brouwer fixed-point theorem we prove that in that case there exists at least one equilibrium, and in fact the possible equilibria are locally stable if the constraints are satisfied or dissatisfied at the same time among all nodes. For graphs with sufficient sparsity it is further proven that there is a unique equilibrium that is globally attractive if the constraint intervals are pairwise disjoint.
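
One plausible discretization of the saturated dynamics can be simulated directly. This is our own sketch of the mechanism (clipped transmissions on a fixed graph), not the paper's exact model or proof; all names are ours.

```python
import numpy as np

def interval_consensus(x0, intervals, adj, steps=20000, eps=0.05):
    """Discretized interval consensus: each agent i broadcasts its state
    clipped to its own interval [lo_i, hi_i], and moves toward the
    clipped values it receives from its neighbors."""
    x = np.array(x0, dtype=float)
    lo = np.array([iv[0] for iv in intervals])
    hi = np.array([iv[1] for iv in intervals])
    for _ in range(steps):
        sat = np.clip(x, lo, hi)              # saturated transmitted values
        x = x + eps * (adj @ sat - adj.sum(axis=1) * x)
    return x

adj = np.ones((3, 3)) - np.eye(3)             # complete graph on 3 agents
intervals = [(0.0, 0.6), (0.4, 1.0), (0.3, 0.7)]   # intersection: [0.4, 0.6]
x = interval_consensus([-1.0, 2.0, 0.5], intervals, adj)
```

Starting well outside the constraint sets, the agents still agree on a common value, and any consensus fixed point must lie in the intersection: if all states equal $c$, each saturation must act as the identity on $c$, so $c$ belongs to every interval — matching the nonempty-intersection result quoted above.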

Journal ArticleDOI
TL;DR: In this article, a hierarchical decomposition of the approximation space obtained by splitting the independent variables of the problem into disjoint subsets is proposed, which can be conveniently visualized in terms of binary trees, yielding series expansions analogous to the classical Tensor-Train and Hierarchical Tucker tensor formats.

Journal ArticleDOI
TL;DR: Numerical simulations exhibit the dynamical behavior of the measles system under the influence of its parameters, which further suggests improving both the vaccine efficacy and its coverage rate for a substantial reduction in the measles epidemic.
Abstract: Modeling of infectious diseases is essential to comprehend the dynamic behavior of an epidemic's transmission. This research study presents a newly proposed mathematical system for the transmission dynamics of the measles epidemic. The measles system is based upon the mass action principle, wherein the human population is divided into five mutually disjoint compartments: susceptible $S(t)$, vaccinated $V(t)$, exposed $E(t)$, infectious $I(t)$, and recovered $R(t)$. Using real measles cases reported from January 2019 to October 2019 in Pakistan, the system has been validated. Two unique equilibria, called measles-free and endemic (measles-present), are shown to be locally asymptotically stable for the basic reproductive number $\mathcal{R}_0 < 1$ and $\mathcal{R}_0 > 1$, respectively. Using Lyapunov functions, the equilibria are found to be globally asymptotically stable under the former conditions on $\mathcal{R}_0$. However, backward bifurcation shows coexistence of a stable endemic equilibrium with a stable measles-free equilibrium for $\mathcal{R}_0 < 1$. A strategy for measles control based on herd immunity is presented. The forward sensitivity indices for $\mathcal{R}_0$ are also computed with respect to the estimated and fitted biological parameters. Finally, numerical simulations exhibit the dynamical behavior of the measles system under the influence of its parameters, which further suggests improvement in both the vaccine efficacy and its coverage rate for a substantial reduction in the measles epidemic.
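
The compartmental structure can be sketched with a forward-Euler integration. The parameter values below are illustrative placeholders, not the fitted values from the Pakistan data, and vaccination is assumed perfectly protective for simplicity:

```python
def simulate_measles(beta=0.9, phi=0.02, sigma=0.2, gamma=0.1,
                     days=365, dt=0.1):
    """Euler integration of a generic S-V-E-I-R measles-type model
    (mass action, closed population, illustrative parameters):
      beta  = transmission rate, phi = vaccination rate,
      sigma = progression rate E -> I, gamma = recovery rate."""
    S, V, E, I, R = 0.89, 0.10, 0.0, 0.01, 0.0
    history = []
    for _ in range(int(days / dt)):
        N = S + V + E + I + R
        new_inf = beta * S * I / N        # mass-action incidence
        dS = -new_inf - phi * S
        dV = phi * S                      # vaccination moves S into V
        dE = new_inf - sigma * E
        dI = sigma * E - gamma * I
        dR = gamma * I
        S += dt * dS; V += dt * dV; E += dt * dE
        I += dt * dI; R += dt * dR
        history.append(I)
    return S, V, E, I, R, history

S, V, E, I, R, h = simulate_measles()
```

Because every outflow from one compartment is an inflow to another, the total population is conserved exactly by the scheme, and with $\beta/\gamma \gg 1$ the infectious fraction rises to a peak and then declines as susceptibles are depleted.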

Journal ArticleDOI
03 Apr 2020
TL;DR: Graph Convolutional Networks (GCNs) are brought into multi-view learning, and a novel multi-view semi-supervised learning method, Co-GCN, is proposed that adaptively exploits the graph information from the multiple views with combined Laplacians.
Abstract: In many real-world applications, the data have several disjoint sets of features and each set is called as a view. Researchers have developed many multi-view learning methods in the past decade. In this paper, we bring Graph Convolutional Network (GCN) into multi-view learning and propose a novel multi-view semi-supervised learning method Co-GCN by adaptively exploiting the graph information from the multiple views with combined Laplacians. Experimental results on real-world data sets verify that Co-GCN can achieve better performance compared with state-of-the-art multi-view semi-supervised methods.
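
The "combined Laplacians" ingredient can be sketched as a convex combination of per-view graph Laplacians. Co-GCN learns the combination weights adaptively, whereas this sketch fixes them by hand; all names are ours.

```python
import numpy as np

def knn_laplacian(X, k=5):
    """Symmetric normalized Laplacian of a symmetrized kNN graph
    built from one view's feature matrix X (n samples x d features)."""
    n = len(X)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    A = np.zeros((n, n))
    for i in range(n):
        for j in np.argsort(d2[i])[1:k + 1]:   # skip self (distance 0)
            A[i, j] = A[j, i] = 1.0
    deg = A.sum(1)
    Dinv = np.diag(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
    return np.eye(n) - Dinv @ A @ Dinv

def combined_laplacian(views, weights):
    """Convex combination of per-view Laplacians: the single graph
    operator a Co-GCN-style model would propagate over."""
    w = np.array(weights, dtype=float) / np.sum(weights)
    return sum(wi * knn_laplacian(V) for wi, V in zip(w, views))

rng = np.random.default_rng(2)
view1 = rng.normal(size=(30, 4))   # two "views" of the same 30 samples
view2 = rng.normal(size=(30, 8))   # disjoint feature sets, shared samples
L = combined_laplacian([view1, view2], [0.7, 0.3])
```

Each normalized Laplacian is positive semidefinite, so any convex combination is too, which keeps the combined operator a valid graph Laplacian for spectral convolution.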

Journal ArticleDOI
TL;DR: A physical zero-knowledge proof protocol for Numberlink using a deck of cards is proposed, which allows a prover to convince a verifier that he/she knows a solution without revealing it.
Abstract: Numberlink is a logic puzzle with an objective to connect all pairs of cells with the same number by non-crossing paths in a rectangular grid. In this paper, we propose a physical protocol of zero-knowledge proof for Numberlink using a deck of cards, which allows a prover to convince a verifier that he/she knows a solution without revealing it. In particular, the protocol shows how to physically count the number of elements in a list that are equal to a given secret value without revealing that value, the positions of elements in the list that are equal to it, or the value of any other element in the list. Finally, we show that our protocol can be modified to verify a solution of the well-known $k$ vertex-disjoint paths problem, both the undirected and directed settings.

Journal ArticleDOI
TL;DR: In this study, a novel shadowed set model, namely, mean-entropy-based shadowed sets (MESS), is proposed: a framework of three-way approximations of fuzzy sets based on the mean of fuzzy entropy.

Journal ArticleDOI
TL;DR: In this paper, it was shown that finite unions of disjoint open Wulff shapes with equal radii are the only volume-constrained critical points of the anisotropic surface energy among all sets with finite perimeter and reduced boundary almost equal to its closure.
Abstract: Given an elliptic integrand of class $\mathscr{C}^{2,\alpha}$, we prove that finite unions of disjoint open Wulff shapes with equal radii are the only volume-constrained critical points of the anisotropic surface energy among all sets with finite perimeter and reduced boundary almost equal to its closure.

Posted Content
TL;DR: In this paper, Ringel's 1963 conjecture is proved for large $n$: the edges of the complete graph $K_{2n+1}$ can be decomposed into $2n+1$ edge-disjoint copies of any given tree with $n$ edges.
Abstract: A typical decomposition question asks whether the edges of some graph $G$ can be partitioned into disjoint copies of another graph $H$. One of the oldest and best known conjectures in this area, posed by Ringel in 1963, concerns the decomposition of complete graphs into edge-disjoint copies of a tree. It says that any tree with $n$ edges packs $2n+1$ times into the complete graph $K_{2n+1}$. In this paper, we prove this conjecture for large $n$.
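
A checkable special case of Ringel's conjecture has long been known via Rosa's work on graceful labelings: if the tree admits a graceful labeling, rotating one labelled copy around $\mathbb{Z}_{2n+1}$ yields the decomposition. Paths are graceful, which gives a small verifiable instance; this is the classical construction, not the paper's proof for general trees.

```python
def path_graceful_labels(n):
    """Zigzag graceful labeling of the path with n edges:
    0, n, 1, n-1, 2, ...; consecutive differences are n, n-1, ..., 1."""
    labels, lo, hi = [], 0, n
    for i in range(n + 1):
        if i % 2 == 0:
            labels.append(lo); lo += 1
        else:
            labels.append(hi); hi -= 1
    return labels

def ringel_decomposition(n):
    """Decompose K_{2n+1} into 2n+1 edge-disjoint copies of the path
    with n edges by rotating a gracefully labelled copy mod 2n+1."""
    m = 2 * n + 1
    base = path_graceful_labels(n)
    copies = []
    for shift in range(m):
        edges = {frozenset(((u + shift) % m, (v + shift) % m))
                 for u, v in zip(base, base[1:])}
        copies.append(edges)
    return copies

copies = ringel_decomposition(4)   # K_9 into 9 copies of the 4-edge path
```

The rotation works because a graceful labeling realizes each edge "difference" $1, \dots, n$ exactly once, and rotating a difference class $d$ through all $2n+1$ shifts enumerates every edge of that class in $K_{2n+1}$ exactly once.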

Posted Content
TL;DR: The results demonstrate that the continuous community partition method can effectively improve influence spread and the accuracy of the community partition.
Abstract: Community partition is of great importance in social networks because of the rapidly increasing network scale, data and applications. We consider the community partition problem under the LT model in social networks, which is a combinatorial optimization problem that divides the social network into $m$ disjoint communities. Our goal is to maximize the sum of influence propagation by maximizing it within each community. As the influence propagation function of the community partition problem is supermodular under the LT model, we use the Lovász extension to relax the target influence function and transfer our goal to maximizing the relaxed function over a matroid polytope. Next, we propose a continuous greedy algorithm using the properties of the relaxed function to solve our problem, which needs to be discretized in concrete implementation. Then, a random rounding technique is used to convert the fractional solution to an integer solution. We present a theoretical analysis with a $1-1/e$ approximation ratio for the proposed algorithms. Extensive experiments are conducted to evaluate the performance of the proposed continuous greedy algorithms on real-world online social network datasets, and the results demonstrate that the continuous community partition method can effectively improve influence spread and the accuracy of the community partition.

Journal ArticleDOI
TL;DR: In this article, the authors extend Fano's inequality to work with arbitrary [0, 1]-valued random variables and provide a lower bound on the regret in non-stochastic sequential learning.
Abstract: We extend Fano's inequality, which controls the average probability of (disjoint) events in terms of the average of some Kullback-Leibler divergences, to work with arbitrary [0,1]-valued random variables. Our simple two-step methodology is general enough to cover the case of an arbitrary (possibly continuously infinite) family of distributions as well as [0,1]-valued random variables not necessarily summing up to 1. Several novel applications are provided, in which the consideration of random variables is particularly handy. The most important applications deal with the problem of Bayesian posterior concentration (minimax or distribution-dependent) rates and with a lower bound on the regret in non-stochastic sequential learning. We also improve in passing some earlier fundamental results: in particular, we provide a simple and enlightening proof of the refined Pinsker's inequality of Ordentlich and Weinberger and derive a sharper Bretagnolle-Huber inequality.

Journal ArticleDOI
TL;DR: The method constructs infinite families of optimal binary linear codes and obtains much simpler criteria under various specific conditions on the maximal elements of $\Delta $ .
Abstract: A linear code is optimal if it has the highest minimum distance of any linear code with a given length and dimension. We construct infinite families of optimal binary linear codes $C_{\Delta^{c}}$ from simplicial complexes in $\mathbb{F}^{n}_{2}$, where $\Delta$ is a simplicial complex in $\mathbb{F}^{n}_{2}$ and $\Delta^{c}$ the complement of $\Delta$. We first find an explicit computable criterion for $C_{\Delta^{c}}$ to be optimal; this criterion is given in terms of the 2-adic valuation of $\sum_{i=1}^{s} 2^{|A_{i}|-1}$, where the $A_{i}$'s are the maximal elements of $\Delta$. Furthermore, we obtain much simpler criteria under various specific conditions on the maximal elements of $\Delta$. In particular, we find that $C_{\Delta^{c}}$ is a Griesmer code if and only if the maximal elements of $\Delta$ are pairwise disjoint and their sizes are all distinct. Specifically, when $\mathcal{F}$ has exactly two maximal elements, we explicitly determine the weight distribution of $C_{\Delta^{c}}$. We present many optimal linear codes constructed by our method, and we emphasize that we obtain at least 32 new optimal linear codes.
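
The Griesmer bound referenced above is easy to state and check numerically; the sketch below is a generic bound check (our names), not the paper's simplicial-complex construction:

```python
from math import ceil

def griesmer_bound(k, d, q=2):
    """Griesmer lower bound on the length n of an [n, k, d]_q linear
    code:  n >= sum_{i=0}^{k-1} ceil(d / q^i)."""
    return sum(ceil(d / q ** i) for i in range(k))

def is_griesmer_code(n, k, d, q=2):
    """A code meeting the Griesmer bound with equality is called a
    Griesmer code."""
    return n == griesmer_bound(k, d, q)
```

For instance, the binary Hamming code with parameters [7, 4, 3] meets the bound with equality ($3 + 2 + 1 + 1 = 7$), so it is a Griesmer code in the sense used above.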

Journal ArticleDOI
TL;DR: This paper proposes a hybrid genetic algorithm that adopts greedy initialization and bidirectional mutation operations, termed BMHGA, to find a number of non-disjoint cover sets that prolong the lifetime of heterogeneous WSNs while ensuring full coverage of the monitoring area during the network lifetime.
Abstract: Sleep scheduling is an effective mechanism to extend the lifetime of energy-constrained Wireless Sensor Networks (WSNs). Typically, after a large number of sensors are deployed randomly, they are divided into sets subject to some constraints, and the sensors are then scheduled to be activated successively according to the numbering of the sets. Many approaches divide the sensors into disjoint sets, which are not suitable for heterogeneous WSNs because of the waste of energy. In this paper, we propose a hybrid genetic algorithm that adopts greedy initialization and bidirectional mutation operations, termed BMHGA, to find a number of non-disjoint cover sets to prolong the lifetime of heterogeneous WSNs, while ensuring full coverage of the monitoring area during the network lifetime. BMHGA adopts a two-level structured chromosome to indicate the sensors and the energy assignment to each set. A novel greedy method uses only little time to initialize the population, avoiding the time waste of random initialization. A new bidirectional mutation is proposed to preserve diversity and global search ability. Through simulations, we show that the proposed algorithm outperforms other existing approaches, finding cover sets with longer lifetime while consuming less running time, especially in large-scale networks. The experimental study also verifies the effectiveness of the proposed genetic operations and reveals proper parameter settings for BMHGA.

Journal ArticleDOI
01 Feb 2020
TL;DR: A new nodal metric, nodal disjoint path (NDP), is developed, which measures a node's importance in terms of its diverse connectivity to other nodes; two algorithms, NDP‐global and NDP‐cluster, are proposed for determining the locations of the k controllers to increase network robustness against targeted attacks.
Abstract: Traditional IP networks are difficult to manage, owing to their rapid expansion and dynamic changes. Software‐defined networks are introduced to simplify network management by separating t...
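A nodal "disjoint path" score in the spirit of the NDP metric can be sketched via Menger's theorem: the number of internally vertex-disjoint paths between two nodes equals the max flow in a node-split graph. The code below is a didactic stand-in (the paper's exact metric and the NDP-global/NDP-cluster placement algorithms are not reproduced here):

```python
# Count internally vertex-disjoint s-t paths in an undirected graph by
# max-flow on a node-split graph (each internal node v becomes a unit-capacity
# arc v_in -> v_out). Didactic sketch; not the paper's implementation.

from collections import defaultdict


def vertex_disjoint_paths(edges, s, t):
    cap = defaultdict(int)
    adj = defaultdict(set)

    def add_edge(u, v, c):
        cap[(u, v)] += c
        adj[u].add(v)
        adj[v].add(u)  # residual arc

    nodes = {u for e in edges for u in e}
    for v in nodes:
        # Split every node; s and t get effectively unbounded capacity.
        add_edge((v, "in"), (v, "out"), 1 if v not in (s, t) else len(edges))
    for u, v in edges:
        add_edge((u, "out"), (v, "in"), 1)
        add_edge((v, "out"), (u, "in"), 1)

    src, snk = (s, "out"), (t, "in")

    def dfs(u, seen):
        if u == snk:
            return True
        seen.add(u)
        for w in adj[u]:
            if w not in seen and cap[(u, w)] > 0 and dfs(w, seen):
                cap[(u, w)] -= 1
                cap[(w, u)] += 1
                return True
        return False

    flow = 0
    while dfs(src, set()):  # unit-capacity Ford-Fulkerson: one path per pass
        flow += 1
    return flow


# A 4-cycle with a chord: two internally disjoint paths between opposite corners.
edges = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a"), ("b", "d")]
print(vertex_disjoint_paths(edges, "a", "c"))  # 2
```

Aggregating such counts over node pairs gives one plausible reading of "diverse connectivity to other nodes" as a node-importance score.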

Journal ArticleDOI
TL;DR: This work significantly extends the usefulness of matroid theory in kernelization, applying results on representative sets to derive a polynomial kernel for ALMOST 2-SAT and to make the first significant progress towards a polynomial kernel for MULTIWAY CUT.
Abstract: We continue the development of matroid-based techniques for kernelization, initiated by the present authors [47]. We significantly extend the usefulness of matroid theory in kernelization by showing applications of a result on representative sets due to Lovász [51] and Marx [53]. As a first result, we show how representative sets can be used to derive a polynomial kernel for the elusive ALMOST 2-SAT problem (where the task is to remove at most k clauses to make a 2-CNF formula satisfiable), solving a major open problem in kernelization. This result also yields a new O(√log OPT)-approximation for the problem, improving on the O(√log n)-approximation of Agarwal et al. [3] and an implicit O(log OPT)-approximation due to Even et al. [24]. We further apply the representative sets tool to the problem of finding irrelevant vertices in graph cut problems, that is, vertices that can be made undeletable without affecting the answer to the problem. This gives the first significant progress towards a polynomial kernel for the MULTIWAY CUT problem; in particular, we get a kernel of O(k^(s+1)) vertices for MULTIWAY CUT instances with at most s terminals. Both these kernelization results have significant spin-off effects, producing the first polynomial kernels for a range of related problems. More generally, the irrelevant vertex results have implications for covering min cuts in graphs. For a directed graph G=(V,E) and sets S, T ⊆ V, let r be the size of a minimum (S,T)-vertex cut (which may intersect S and T). We can find a set Z ⊆ V of size O(|S| |T| r) that contains a minimum (A,B)-vertex cut for every A ⊆ S, B ⊆ T. Similarly, for an undirected graph G=(V,E), a set of terminals X ⊆ V, and a constant s, we can find a set Z ⊆ V of size O(|X|^(s+1)) that contains a minimum multiway cut for every partition of X into at most s pairwise disjoint subsets. Both results are polynomial time. We expect this to have further applications; in particular, we get direct, reduction rule-based kernelizations for all problems above, in contrast to the indirect compression-based kernel previously given for ODD CYCLE TRANSVERSAL [47]. All our results are randomized, with failure probabilities that can be made exponentially small in n, due to needing a representation of a matroid to apply the representative sets tool.
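The cut notion used in the covering result can be made concrete on a toy instance. The brute-force sketch below finds a minimum (S,T)-vertex cut that, as in the abstract, is allowed to intersect S and T (purely didactic, with illustrative names; it is not the paper's representative-sets machinery and scales only to tiny graphs):

```python
# Brute-force minimum (S,T)-vertex cut where the cut may intersect S and T.
# Didactic only: exponential enumeration over vertex subsets of a tiny graph.

from itertools import combinations


def min_st_vertex_cut(vertices, edges, S, T):
    """Smallest Z subseteq V whose removal leaves no path from S \\ Z to T \\ Z."""

    def separated(Z):
        remaining = set(vertices) - Z
        # BFS from S \ Z restricted to the surviving vertices.
        frontier = reached = set(S) - Z
        while frontier:
            nxt = {v for u in frontier for (a, b) in edges for v in (a, b)
                   if {a, b} <= remaining and u in (a, b) and v != u}
            frontier = nxt - reached
            reached = reached | nxt
        return not (reached & (set(T) - Z))

    for k in range(len(vertices) + 1):
        for Z in combinations(vertices, k):
            if separated(set(Z)):
                return set(Z)


# Two sources funneling through one middle vertex: {'m'} is a minimum cut.
vertices = ["s1", "s2", "m", "t"]
edges = [("s1", "m"), ("s2", "m"), ("m", "t")]
print(min_st_vertex_cut(vertices, edges, {"s1", "s2"}, {"t"}))  # {'m'}
```

The abstract's result says a single polynomial-size set Z can simultaneously contain such a minimum cut for every choice of A ⊆ S and B ⊆ T, which is what makes the irrelevant-vertex arguments go through.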