
Showing papers on "Disjoint sets published in 2018"


Book
11 Feb 2018
TL;DR: In this article, the notions of k-sets and (≤k)-sets are extended to arrangements of curves and surfaces, and asymptotically tight bounds on the maximum size of the (≤k)-set are obtained in several special cases, for example Θ(k²λ_s(nk)) for x-monotone arcs, where λ_s(m) is the maximum length of (m, s)-Davenport-Schinzel sequences.
Abstract: We extend the notion of k-sets and (≤k)-sets (see [3], [12], and [19]) to arrangements of curves and surfaces. In the case of curves in the plane, we assume that each curve is simple and separates the plane. A k-point is an intersection point of a pair of the curves which is covered by exactly k interiors of (or half-planes bounded by) other curves; the k-set is the set of all k-points in such an arrangement, and the (≤k)-set is the union of all j-sets, for j ≤ k. Adapting the probabilistic analysis technique of Clarkson and Shor [13], we obtain bounds that relate the maximum size of the (≤k)-set to the maximum size of a 0-set of a sample of the curves. Using known bounds on the size of such 0-sets we obtain asymptotically tight bounds for the maximum size of the (≤k)-set in the following special cases: (i) If each pair of curves intersect at most twice, the maximum size is Θ(nkα(nk)). (ii) If the curves are unbounded arcs and each pair of them intersect at most three times, then the maximum size is Θ(nkα(n/k)). (iii) If the curves are x-monotone arcs and each pair of them intersect in at most some fixed number s of points, then the maximum size of the (≤k)-set is Θ(k²λ_s(nk)), where λ_s(m) is the maximum length of (m,s)-Davenport-Schinzel sequences. We also obtain generalizations of these results to certain classes of surfaces in three and higher dimensions. Finally, we present various applications of these results to arrangements of segments and curves, high-order Voronoi diagrams, partial stabbing of disjoint convex sets in the plane, and more. An interesting application yields an O(n log n) bound on the expected number of vertically visible features in an arrangement of n horizontal discs when they are stacked on top of each other in random order. This in turn leads to an efficient randomized preprocessing of n discs in the plane so as to allow fast stabbing queries, in which we want to report all discs containing a query point.
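The probabilistic argument referred to above relates level complexities in the standard Clarkson-Shor fashion. Stated in its usual planar form (a hedged paraphrase of the general technique, not a verbatim quote of the paper's theorem), with f_0(m) denoting the maximum size of a 0-set in an arrangement of m curves from the family:

\[
  \max \bigl| (\le k)\text{-set of } n \text{ curves} \bigr|
  \;=\; O\!\bigl( k^{2} \, f_{0}(\lceil n/k \rceil) \bigr),
\]

so plugging in a linear or Davenport-Schinzel-type bound for f_0 yields the tight bounds listed in the abstract.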

99 citations


Journal ArticleDOI
TL;DR: In this article, the authors consider the computation of the entanglement entropy of two disjoint intervals in a (1+1) dimensional conformal field theory by conformal block expansion of the 4-point correlation function of twist fields.
Abstract: We reconsider the computation of the entanglement entropy of two disjoint intervals in a (1+1) dimensional conformal field theory by conformal block expansion of the 4-point correlation function of twist fields. We show that accurate results may be obtained by taking into account several terms in the operator product expansion (OPE) of twist fields and by iterating the Zamolodchikov recursion formula for each conformal block. We perform a detailed analysis for the Ising conformal field theory and for the free compactified boson. Each term in the conformal block expansion can be easily analytically continued and so this approach also provides a good approximation for the von Neumann entropy.
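For context, the standard replica-trick relation behind this computation (taken from the general literature rather than quoted from this paper) expresses the Rényi entropies of two disjoint intervals A = [u_1, v_1] ∪ [u_2, v_2] through the twist-field 4-point function, with the von Neumann entropy recovered by analytic continuation in n:

\[
  \mathrm{Tr}\,\rho_A^{\,n} \;\propto\; \bigl\langle \mathcal{T}_n(u_1)\,\overline{\mathcal{T}}_n(v_1)\,\mathcal{T}_n(u_2)\,\overline{\mathcal{T}}_n(v_2)\bigr\rangle ,
  \qquad
  S_A \;=\; -\,\partial_n \,\mathrm{Tr}\,\rho_A^{\,n}\Big|_{n\to 1} .
\]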

68 citations


Journal ArticleDOI
TL;DR: In this article, it was shown that the ground states of the Gross-Pitaevskii (GP) equation are unique and radially symmetric at least for almost every.
Abstract: We are interested in the attractive Gross–Pitaevskii (GP) equation in , where the external potential vanishes on m disjoint bounded domains and as , that is, the union of these is the bottom of the potential well. By establishing some delicate estimates on the associated energy functional of the GP equation, we prove that when the interaction strength a approaches some critical value , the ground states concentrate and blow up at the center of the incircle of some with the largest inradius. Moreover, under some further conditions on , we show that the ground states of the GP equations are unique and radially symmetric at least for almost every .

65 citations


Posted Content
TL;DR: In this article, a message passing algorithm is proposed to optimize the acquisition function in high-dimensional Bayesian optimization of black-box functions. The authors lift the assumption that the variable subsets in the additive decomposition are disjoint and consider additive models with arbitrary overlap among the subsets.
Abstract: Bayesian optimization (BO) is a popular technique for sequential black-box function optimization, with applications including parameter tuning, robotics, environmental monitoring, and more. One of the most important challenges in BO is the development of algorithms that scale to high dimensions, which remains a key open problem despite recent progress. In this paper, we consider the approach of Kandasamy et al. (2015), in which the high-dimensional function decomposes as a sum of lower-dimensional functions on subsets of the underlying variables. In particular, we significantly generalize this approach by lifting the assumption that the subsets are disjoint, and consider additive models with arbitrary overlap among the subsets. By representing the dependencies via a graph, we deduce an efficient message passing algorithm for optimizing the acquisition function. In addition, we provide an algorithm for learning the graph from samples based on Gibbs sampling. We empirically demonstrate the effectiveness of our methods on both synthetic and real-world data.

62 citations


Journal ArticleDOI
TL;DR: This paper introduces multiple local search algorithms that can improve the total lifetime of WSNs consisting of nodes with varying initial energy and discusses the efficiency of each of the algorithms through extensive simulations.
Abstract: Limited energy of the sensors is one of the key issues towards realizing a reliable wireless sensor network (WSN), which can survive under the emerging WSN applications. A promising method for conserving the energy of these sensors can be implemented by applying a sleep-wake scheduling while distributing the data gathering and sensing tasks to a dominating set of awake sensors while the other nodes are in a sleep mode. Producing the maximum possible number of such disjoint dominating sets, called the domatic partition problem in unit disk graphs, can further prolong the network lifetime. This problem becomes challenging when the initial energy of the nodes varies from one to another. In this paper, we introduce multiple local search algorithms that can improve the total lifetime of WSNs consisting of nodes with varying initial energy. We discuss the performance of the existing dominating set algorithm and introduce three more algorithms which can be applied on multiple disjoint dominating sets with nodes having varying initial energy. We discuss the efficiency of each of the algorithms through extensive simulations.
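As a rough illustration of the underlying combinatorial task (a hedged sketch only: the greedy rule, adjacency representation, and node names are invented for the example, and the paper's local search algorithms additionally account for each node's initial energy), the following Python fragment greedily extracts pairwise disjoint dominating sets from a graph.

# Greedy extraction of pairwise disjoint dominating sets from a graph.
# A hedged sketch only; the paper's local search algorithms also handle
# nodes with varying initial energy.

def greedy_disjoint_dominating_sets(adj):
    """adj: dict mapping each node to the set of its neighbours."""
    nodes = set(adj)
    available = set(nodes)          # nodes not yet used in any dominating set
    partitions = []
    while True:
        dominated, chosen = set(), set()
        candidates = set(available)
        # Repeatedly pick the available node covering the most undominated nodes.
        while dominated != nodes and candidates:
            best = max(candidates, key=lambda v: len(({v} | adj[v]) - dominated))
            chosen.add(best)
            candidates.remove(best)
            dominated |= {best} | adj[best]
        if not chosen or dominated != nodes:   # no further disjoint dominating set
            break
        partitions.append(chosen)
        available -= chosen
    return partitions

# Tiny example (a 4-cycle): two disjoint dominating sets are expected.
adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(greedy_disjoint_dominating_sets(adj))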

60 citations


Journal ArticleDOI
TL;DR: In this paper, the Schmidt gap is introduced, which scales near the transition with an exponent that is compatible with the analytical bound, due to an insensitivity to certain finite-size fluctuations, which remain significant in other quantities at the sizes accessible to exact numerical methods.
Abstract: Many-body localization has become an important phenomenon for illuminating a potential rift between nonequilibrium quantum systems and statistical mechanics. However, the nature of the transition between ergodic and localized phases in models displaying many-body localization is not yet well understood. Assuming that this is a continuous transition, analytic results show that the length scale should diverge with a critical exponent $\nu \ge 2$ in one-dimensional systems. Interestingly, this is in stark contrast with all exact numerical studies which find $\nu \sim 1$. We introduce the Schmidt gap, new in this context, which scales near the transition with an exponent $\nu > 2$ compatible with the analytical bound. We attribute this to an insensitivity to certain finite-size fluctuations, which remain significant in other quantities at the sizes accessible to exact numerical methods. Additionally, we find that a physical manifestation of the diverging length scale is apparent in the entanglement length computed using the logarithmic negativity between disjoint blocks.

55 citations


Book
15 Aug 2018
TL;DR: In this book, topics including the Erdős-Ko-Rado theorem via shifting, Katona's circle, the Kruskal-Katona theorem, and the Kleitman theorem for no $s$ pairwise disjoint sets are discussed.
Abstract: Table of contents: Introduction; Operations on sets and set systems; Theorems on traces; The Erdős-Ko-Rado theorem via shifting; Katona's circle; The Kruskal-Katona theorem; Kleitman theorem for no $s$ pairwise disjoint sets; The Hilton-Milner theorem; The Erdős matching conjecture; The Ahlswede-Khachatrian theorem; Pushing-pulling method; Uniform measure versus product measure; Kleitman's correlation inequality; $r$-cross union families; Random walk method; $L$-systems; Exponent of $(10,\{0,1,3,6\})$-system; The Deza-Erdős-Frankl theorem; Füredi's structure theorem; Rödl's packing theorem; Upper bounds using multilinear polynomials; Application to discrete geometry; Upper bounds using inclusion matrices; Some algebraic constructions for $L$-systems; Oddtown and eventown problems; Tensor product method; The ratio bound; Measures of cross independent sets; Application of semidefinite programming; A cross intersection problem with measures; Capsets and sunflowers; Challenging open problems; Bibliography; Index.

51 citations


Book
05 Feb 2018
TL;DR: An algorithm which calculates shortest paths amidst two convex polyhedral obstacles in time O(n³ α(n)^O(α(n)⁷) log n) is given, and a new kind of Voronoi diagram, called the peeper's Voronoi diagram, is introduced and analyzed here.
Abstract: We consider the problem of computing the Euclidean shortest path between two points in three-dimensional space which must avoid the interiors of k given disjoint convex polyhedral obstacles, having altogether n faces. Although this problem is hard to solve when k is arbitrarily large, it had been efficiently solved by Mount [Mo84] (cf. also Sharir and Schorr [SS84]) for k = 1, i.e. in the presence of a single convex polyhedral obstacle, in time O(n² log n). In this paper we consider the generalization of this technique to the cases k = 2 and k > 2. In the first part of this presentation we describe an algorithm which calculates shortest paths amidst two convex polyhedral obstacles in time O(n³ α(n)^O(α(n)⁷) log n), where α(n) is the functional inverse of Ackermann's function (and is thus extremely slowly growing). This result is achieved by constructing a new kind of Voronoi diagram, called the peeper's Voronoi diagram, which is introduced and analyzed here. In the second part we show that shortest paths amidst k > 2 disjoint convex polyhedral obstacles can be calculated in time polynomial in the total number n of faces of these obstacles (but exponential in the number of obstacles). This is a consequence of the following result: Let K be a 3-D convex polyhedron having n vertices. Then the number of shortest-path edge sequences on K is polynomial in n (specifically O(n⁷)), where a shortest-path edge sequence x is a sequence of edges of K for which there exist two points X, Y on the surface S of K such that x is the sequence of edges crossed by the shortest path from X to Y along S.

50 citations


Posted Content
TL;DR: It is shown empirically that DIMNet is able to achieve better performance than other current methods, with the additional benefits of being conceptually simpler and less data-intensive.
Abstract: We propose a novel framework, called Disjoint Mapping Network (DIMNet), for cross-modal biometric matching, in particular of voices and faces. Different from the existing methods, DIMNet does not explicitly learn the joint relationship between the modalities. Instead, DIMNet learns a shared representation for different modalities by mapping them individually to their common covariates. These shared representations can then be used to find the correspondences between the modalities. We show empirically that DIMNet is able to achieve better performance than other current methods, with the additional benefits of being conceptually simpler and less data-intensive.

43 citations


Posted Content
TL;DR: In this paper, the maximum size of a family of $k$-element subsets of an $n$-element set with no $s+1$ pairwise disjoint sets is shown to be ${n\choose k}-{n-s\choose k}$, provided $n\ge \frac{5}{3}sk-\frac{2}{3}s$ and $s$ is sufficiently large.
Abstract: More than 50 years ago, Erdős asked the following question: what is the maximum size of a family $\mathcal F$ of $k$-element subsets of an $n$-element set if it has no $s+1$ pairwise disjoint sets? This question attracted a lot of attention recently, in particular, due to its connection to various combinatorial, probabilistic and theoretical computer science problems. Improving the previous best bound due to the first author, we prove that $|\mathcal F|\le {n\choose k}-{n-s\choose k}$, provided $n\ge \frac 53sk -\frac 23 s$ and $s$ is sufficiently large. We derive several corollaries concerning Dirac thresholds and deviations of sums of random variables. We also obtain several related results.

35 citations


01 Jan 2018
TL;DR: eulerr outperforms the other software tested in this thesis in fitting Euler diagrams to set configurations that might lack exact solutions provided that the authors use ellipses; eulerr's circular diagrams fit better on all accounts save for the diagError metric in the case of three-set diagrams.
Abstract: Euler diagrams are common and intuitive visualizations for data involving sets and relationships thereof. Compared to Venn diagrams, Euler diagrams do not require all set relationships to be present and may therefore be area-proportional also with subset or disjoint relationships in the input. Most Euler diagrams use circles, but circles do not always support accurate diagrams. A promising alternative for Euler diagrams is ellipses, which enable accurate diagrams for a wider range of set combinations. Ellipses, however, have not yet been implemented for more than three sets or three-set diagrams where there are disjoint or subset relationships. The aim of this thesis is to present a method and software for elliptical Euler diagrams for any number of sets. In this thesis, we provide and outline an R-based implementation called eulerr. It fits Euler diagrams using numerical optimization and exact-area algorithms through two steps: first, an initial layout is formed using the sets' pairwise relationships; second, this layout is finalized taking all the sets' intersections into account. Finally, we compare eulerr with other software implementations of Euler diagrams and show that the package is overall both more consistent and accurate as well as faster for up to seven sets compared to the other R-packages. eulerr perfectly reproduces samples of circular Euler diagrams as well as three-set diagrams with ellipses, but performs suboptimally with elliptical diagrams of more than three sets. eulerr also outperforms the other software tested in this thesis in fitting Euler diagrams to set configurations that might lack exact solutions provided that we use ellipses; eulerr's circular diagrams, meanwhile, fit better on all accounts save for the diagError metric in the case of three-set diagrams.

Journal ArticleDOI
TL;DR: In this article, nonlinear strong data processing inequalities are established for additive-noise channels; in particular, for the channel to be saturated, i.e., for $I(W;Y)$ to approach capacity, it is necessary that $I(W;X)\to\infty$, and explicit order-optimal bounds are given for the additive Gaussian noise channel with quadratic cost constraint.
Abstract: This paper quantifies the intuitive observation that adding noise reduces available information by means of nonlinear strong data processing inequalities. Consider the random variables $W\to X\to Y$ forming a Markov chain, where $Y = X + Z$ with $X$ and $Z$ real valued, independent and $X$ bounded in $L_{p}$-norm. It is shown that $I(W; Y) \le F_{I}(I(W;X))$ with $F_{I}(t) < t$ whenever $t > 0$, if and only if $Z$ has a density whose support is not disjoint from any translate of itself. A related question is to characterize for what couplings $(W, X)$ the mutual information $I(W; Y)$ is close to maximum possible. To that end we show that in order to saturate the channel, i.e., for $I(W; Y)$ to approach capacity, it is mandatory that $I(W; X)\to \infty $ (under suitable conditions on the channel). A key ingredient for this result is a deconvolution lemma which shows that postconvolution total variation distance bounds the preconvolution Kolmogorov–Smirnov distance. Explicit bounds are provided for the special case of the additive Gaussian noise channel with quadratic cost constraint. These bounds are shown to be order optimal. For this case, simplified proofs are provided leveraging Gaussian-specific tools such as the connection between information and estimation (I-MMSE) and Talagrand’s information-transportation inequality.

Journal ArticleDOI
TL;DR: In this article, the authors introduced the notion of two-colored noncrossing partitions, which are exactly the tensor categories being used in the theory of easy quantum groups.

Journal ArticleDOI
18 Dec 2018-Energies
TL;DR: The experimental results show that the proposed DGA performs better than other state-of-the-art approaches in maximizing the number of disjoint sets.
Abstract: This paper proposes a distributed genetic algorithm (DGA) to solve the energy efficient coverage (EEC) problem in wireless sensor networks (WSN). Since the EEC problem is Non-deterministic Polynomial-Complete (NPC) and time-consuming, it is sensible to tackle it with a nature-inspired meta-heuristic DGA approach. The novelties and advantages of our approach, in both the algorithm design and the modeling of the EEC problem, are the following. Firstly, in the algorithm design, we realize the DGA in a multi-processor distributed environment, where a set of processors evaluates the fitness values in parallel to reduce the computational cost. Secondly, when evaluating a chromosome, unlike the traditional model of the EEC problem in WSN that only counts the number of disjoint sets, we propose a hierarchical fitness evaluation and construct a two-level fitness function that counts both the number of disjoint sets and the coverage performance of all the disjoint sets. Therefore, we contribute not only to the algorithm but also to the model of the EEC problem in WSN. The experimental results show that our proposed DGA performs better than other state-of-the-art approaches in maximizing the number of disjoint sets.
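A hedged sketch of what such a two-level fitness might look like is given below; the sensor coverage sets, the residual-coverage tie-breaker, and the toy instance are invented for illustration, and the paper's DGA additionally distributes these evaluations over multiple processors.

# Hedged sketch of a two-level fitness for energy-efficient coverage:
# level 1 counts how many disjoint sensor subsets fully cover all targets,
# level 2 adds the partial coverage ratio of the best remaining subset,
# so solutions with equal numbers of full covers can still be compared.

def covers(sensor_set, coverage):
    """coverage[s] is the set of targets sensor s can monitor."""
    seen = set()
    for s in sensor_set:
        seen |= coverage[s]
    return seen

def two_level_fitness(partition, targets, coverage):
    full_covers = 0
    residual_ratio = 0.0
    for subset in partition:                      # disjoint subsets of sensors
        covered = covers(subset, coverage)
        if covered >= targets:                    # subset covers every target
            full_covers += 1
        else:
            residual_ratio = max(residual_ratio, len(covered) / len(targets))
    return full_covers + residual_ratio           # level 1 + level 2 tie-breaker

# Toy instance: 4 sensors, 3 targets; the partition {0,1} | {2,3} yields
# one full cover plus a partial one.
coverage = {0: {0, 1}, 1: {1, 2}, 2: {0}, 3: {1}}
targets = {0, 1, 2}
print(two_level_fitness([{0, 1}, {2, 3}], targets, coverage))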

Proceedings Article
31 Mar 2018
TL;DR: This paper significantly generalizes the approach of Kandasamy et al. (2015), in which the high-dimensional function decomposes as a sum of lower-dimensional functions on subsets of the underlying variables, by representing the dependencies via a graph and deducing an efficient message passing algorithm for optimizing the acquisition function.
Abstract: Bayesian optimization (BO) is a popular technique for sequential black-box function optimization, with applications including parameter tuning, robotics, environmental monitoring, and more. One of the most important challenges in BO is the development of algorithms that scale to high dimensions, which remains a key open problem despite recent progress. In this paper, we consider the approach of Kandasamy et al. (2015), in which the high-dimensional function decomposes as a sum of lower-dimensional functions on subsets of the underlying variables. In particular, we significantly generalize this approach by lifting the assumption that the subsets are disjoint, and consider additive models with arbitrary overlap among the subsets. By representing the dependencies via a graph, we deduce an efficient message passing algorithm for optimizing the acquisition function. In addition, we provide an algorithm for learning the graph from samples based on Gibbs sampling. We empirically demonstrate the effectiveness of our methods on both synthetic and real-world data.
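As a toy illustration of additive structure with overlapping variable groups (a hedged sketch only: the per-group functions, the discrete grid, and the brute-force maximization below are invented stand-ins for the paper's acquisition function and its message-passing maximization):

# Hedged sketch: an "acquisition" that decomposes as a sum of low-dimensional
# terms over OVERLAPPING variable groups, maximized here by brute force on a
# small grid. The paper replaces this brute force with message passing on the
# dependency graph induced by the groups.

import itertools

# Three groups over four variables; groups 0 and 1 share variable 1,
# groups 1 and 2 share variable 2 (so the groups are not disjoint).
groups = [(0, 1), (1, 2), (2, 3)]

def group_term(g, x):
    # Invented smooth per-group terms standing in for per-group acquisitions.
    return -sum((x[i] - 0.1 * g) ** 2 for i in groups[g])

def acquisition(x):
    return sum(group_term(g, x) for g in range(len(groups)))

grid = [round(-1.0 + 0.2 * i, 1) for i in range(11)]
best = max(itertools.product(grid, repeat=4), key=acquisition)
print("argmax on the grid:", best, "value:", acquisition(best))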

Posted Content
TL;DR: The result achieves both nearly linear running time and the strong expander guarantee for clusters, and is the first nearly linear time algorithm when $\phi$ is at least $1/\log^{O(1)} m$, which is the case in most practical settings and theoretical applications.

Abstract: We study the problem of graph clustering where the goal is to partition a graph into clusters, i.e. disjoint subsets of vertices, such that each cluster is well connected internally while sparsely connected to the rest of the graph. In particular, we use a natural bicriteria notion motivated by Kannan, Vempala, and Vetta which we refer to as {\em expander decomposition}. Expander decomposition has become one of the building blocks in the design of fast graph algorithms, most notably in the nearly linear time Laplacian solver by Spielman and Teng, and it also has wide applications in practice. We design an algorithm for the parametrized version of expander decomposition, where given a graph $G$ of $m$ edges and a parameter $\phi$, our algorithm finds a partition of the vertices into clusters such that each cluster induces a subgraph of conductance at least $\phi$ (i.e. a $\phi$ expander), and only a $\widetilde{O}(\phi)$ fraction of the edges in $G$ have endpoints across different clusters. Our algorithm runs in $\widetilde{O}(m/\phi)$ time, and is the first nearly linear time algorithm when $\phi$ is at least $1/\log^{O(1)} m$, which is the case in most practical settings and theoretical applications. Previous results either take $\Omega(m^{1+o(1)})$ time, or attain nearly linear time but with a weaker expansion guarantee where each output cluster is guaranteed to be contained inside some unknown $\phi$ expander. Our result achieves both nearly linear running time and the strong expander guarantee for clusters. Moreover, a main technique we develop for our result can be applied to obtain a much better \emph{expander pruning} algorithm, which is the key tool for maintaining an expander decomposition on dynamic graphs. Finally, we note that our algorithm is developed from first principles based on relatively simple and basic techniques, thus making it very likely to be practical.
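To make the bicriteria notion concrete, here is a hedged sketch of a verifier for the two expander-decomposition criteria; it follows the standard definitions of conductance and crossing-edge fraction (it is not the paper's algorithm), and the brute-force conductance computation is only meant for tiny examples.

# Hedged sketch: check the two expander-decomposition criteria for a given
# vertex partition of an undirected graph: (1) every cluster induces a
# subgraph of conductance at least phi, (2) only a small fraction of the
# edges cross between clusters. This is a verifier, not the paper's algorithm.

from itertools import combinations

def conductance_of_induced(edges, cluster):
    """Worst-case conductance over all cuts of the subgraph induced by cluster
    (exponential time; only meant for tiny examples)."""
    sub = [e for e in edges if e[0] in cluster and e[1] in cluster]
    deg = {v: sum(v in e for e in sub) for v in cluster}
    vol = lambda S: sum(deg[v] for v in S)
    best = 1.0
    nodes = sorted(cluster)
    for r in range(1, len(nodes)):
        for S in combinations(nodes, r):
            S = set(S)
            cut = sum((a in S) != (b in S) for a, b in sub)
            denom = min(vol(S), vol(cluster - S))
            if denom > 0:
                best = min(best, cut / denom)
    return best

def check_decomposition(edges, clusters, phi):
    crossing = sum(not any(a in C and b in C for C in clusters) for a, b in edges)
    ok_conductance = all(conductance_of_induced(edges, C) >= phi for C in clusters)
    return ok_conductance, crossing / len(edges)

# Two triangles joined by one edge; each triangle is a good cluster.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
print(check_decomposition(edges, [{0, 1, 2}, {3, 4, 5}], phi=0.5))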

Journal ArticleDOI
TL;DR: This work demonstrates that examining both feasible and infeasible solutions during the search is a highly effective search strategy for the considered coloring problem and could beneficially be applied to other constrained problems as well.

Posted Content
TL;DR: It turns out that for a parity-check matrix with the Vandermonde structure to produce an optimal locally recoverable code, certain subsets of $\mathbb{F}_q$ must satisfy a disjointness property, which is equivalent to a well-studied problem in extremal graph theory.
Abstract: Recently, it was discovered by several authors that a $q$-ary optimal locally recoverable code, i.e., a locally recoverable code achieving the Singleton-type bound, can have length much bigger than $q+1$. This is quite different from the classical $q$-ary MDS codes, where it is conjectured that the code length is upper bounded by $q+1$ (or $q+2$ in some special cases). This discovery inspired some recent studies on the length of an optimal locally recoverable code. It was shown in \cite{LXY} that the length of a $q$-ary optimal locally recoverable code is unbounded for $d=3,4$. Soon after, it was proved that a $q$-ary optimal locally recoverable code with distance $d$ and locality $r$ can have length $\Omega_{d,r}(q^{1 + 1/\lfloor(d-3)/2\rfloor})$. Recently, an explicit construction of $q$-ary optimal locally recoverable codes for distance $d=5,6$ was given in \cite{J18} and \cite{BCGLP}. In this paper, we further investigate the construction of optimal locally recoverable codes along the line of using parity-check matrices. Inspired by classical Reed-Solomon codes and \cite{J18}, we equip parity-check matrices with the Vandermonde structure. It turns out that for a parity-check matrix with the Vandermonde structure to produce an optimal locally recoverable code, certain subsets of $\mathbb{F}_q$ must obey a disjointness property. To our surprise, this disjointness condition is equivalent to a well-studied problem in extremal graph theory. With the help of extremal graph theory, we succeed in improving all of the best known results in \cite{GXY} for $d\geq 7$. In addition, for $d=6$, we are able to remove the constraint required in \cite{J18} that $q$ is even.
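For reference, the Singleton-type bound mentioned above is the standard bound for codes with locality (quoted from the general locally recoverable code literature, not from this paper): a code of length $n$, dimension $k$, minimum distance $d$ and locality $r$ satisfies

\[
  d \;\le\; n - k - \left\lceil \tfrac{k}{r} \right\rceil + 2,
\]

and an optimal locally recoverable code is one attaining this bound with equality.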

Journal ArticleDOI
TL;DR: In this paper, the authors studied the distribution of the nodal surplus of a metric graph with disjoint cycles and showed that it is binomial over the allowed range of values of the surplus.
Abstract: It has been suggested that the distribution of the suitably normalized number of zeros of Laplacian eigenfunctions contains information about the geometry of the underlying domain. We study this distribution (more precisely, the distribution of the “nodal surplus”) for Laplacian eigenfunctions of a metric graph. The existence of the distribution is established, along with its symmetry. One consequence of the symmetry is that the graph’s first Betti number can be recovered as twice the average nodal surplus of its eigenfunctions. Furthermore, for graphs with disjoint cycles it is proven that the distribution has a universal form—it is binomial over the allowed range of values of the surplus. To prove the latter result, we introduce the notion of a local nodal surplus and study its symmetry and dependence properties, establishing that the local nodal surpluses of disjoint cycles behave like independent Bernoulli variables.
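Spelled out (a hedged restatement consistent with the abstract's claims that the distribution is symmetric, binomial over the allowed range, and has mean equal to half the first Betti number $\beta$), the universal form for a graph with disjoint cycles reads

\[
  \Pr[\sigma = j] \;=\; 2^{-\beta}\binom{\beta}{j}, \qquad j = 0,1,\dots,\beta,
  \qquad\text{so that}\qquad \beta \;=\; 2\,\mathbb{E}[\sigma],
\]

where $\sigma$ denotes the nodal surplus.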

Proceedings ArticleDOI
01 Jan 2018
TL;DR: This paper shows how to significantly increase the expressive power of disjoint intersection types by adding support for nested subtyping and composition, which enables simple forms of family polymorphism to be expressed in the calculus.
Abstract: Calculi with disjoint intersection types support an introduction form for intersections called the merge operator, while retaining a coherent semantics. Disjoint intersection types have great potential to serve as a foundation for powerful, flexible and yet type-safe and easy-to-reason-about OO languages. This paper shows how to significantly increase the expressive power of disjoint intersection types by adding support for nested subtyping and composition, which enables simple forms of family polymorphism to be expressed in the calculus. The extension with nested subtyping and composition is challenging, for two different reasons. Firstly, the subtyping relation that supports these features is non-trivial, especially when it comes to obtaining an algorithmic version. Secondly, the syntactic method used to prove coherence for previous calculi with disjoint intersection types is too inflexible, making it hard to extend those calculi with new features (such as nested subtyping). We show how to address the first problem by adapting and extending the Barendregt, Coppo and Dezani (BCD) subtyping rules for intersections with records and coercions. A sound and complete algorithmic system is obtained by using an approach inspired by Pierce's work. To address the second problem we replace the syntactic method used to prove coherence by a semantic proof method based on logical relations. Our work has been fully formalized in Coq, and we have an implementation of our calculus.

Book ChapterDOI
07 Jan 2018
TL;DR: This work presents a randomized polynomial time algorithm based on random contractions akin to Karger's min cut algorithm that solves the more general hedge k-cut problem when the subgraph induced by every hedge has a constant number of connected components.
Abstract: In the hypergraph k-cut problem, the input is a hypergraph, and the goal is to find a smallest subset of hyperedges whose removal ensures that the remaining hypergraph has at least k connected components. This problem is known to be at least as hard as the densest k-subgraph problem when k is part of the input (Chekuri-Li, 2015). We present a randomized polynomial time algorithm to solve the hypergraph k-cut problem for constant k. Our algorithm solves the more general hedge k-cut problem when the subgraph induced by every hedge has a constant number of connected components. In the hedge k-cut problem, the input is a hedgegraph specified by a vertex set and a disjoint set of hedges, where each hedge is a subset of edges defined over the vertices. The goal is to find a smallest subset of hedges whose removal ensures that the number of connected components in the remaining underlying (multi-)graph is at least k. Our algorithm is based on random contractions akin to Karger's min cut algorithm. Our main technical contribution is a distribution over the hedges (hyperedges) so that random contraction of hedges (hyperedges) chosen from the distribution succeeds in returning an optimum solution with large probability.
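For intuition, here is a hedged sketch of plain uniform random contraction for the hypergraph k-cut objective. The paper's contribution is a carefully chosen non-uniform contraction distribution, which this toy version does not implement; the hypergraph instance and trial count are invented for illustration.

# Hedged sketch: Karger-style uniform random contraction for hypergraph k-cut.
# Repeatedly contract a random hyperedge (merging its vertices) while keeping
# at least k super-vertices; the hyperedges that still span more than one
# super-vertex form a candidate cut whose removal leaves >= k components.

import random

def one_contraction_run(vertices, hyperedges, k):
    label = {v: v for v in vertices}                 # super-vertex of each vertex
    while True:
        count = len(set(label.values()))
        if count <= k:
            break
        # hyperedges spanning >1 super-vertex whose contraction keeps >= k of them
        live = [e for e in hyperedges
                if 1 < len({label[v] for v in e}) <= count - k + 1]
        if not live:
            break
        e = random.choice(live)
        merged = {label[v] for v in e}
        target = min(merged)
        for v in vertices:                           # merge all touched super-vertices
            if label[v] in merged:
                label[v] = target
    return [e for e in hyperedges if len({label[v] for v in e}) > 1]

def random_contraction_k_cut(vertices, hyperedges, k, trials=300):
    return min((one_contraction_run(vertices, hyperedges, k) for _ in range(trials)),
               key=len)

# Toy hypergraph: two dense blobs {0,1,2} and {3,4,5} joined by a single hyperedge.
V = list(range(6))
E = [{0, 1, 2}, {0, 1}, {1, 2}, {3, 4, 5}, {3, 4}, {4, 5}, {2, 3}]
print(random_contraction_k_cut(V, E, k=2))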

Journal ArticleDOI
TL;DR: In this article, the authors proposed two new methods called a soft max-row decision making method and a multi-soft distributive max-min decision-making method employing these operations.

Journal ArticleDOI
TL;DR: For a dominant rational self-map on a smooth projective variety defined over a number field, Kawaguchi and Silverman conjectured that the (first) dynamical degree is equal to the arithmetic degree at a rational point whose forward orbit is well-defined and Zariski dense.
Abstract: For a dominant rational self-map on a smooth projective variety defined over a number field, Kawaguchi and Silverman conjectured that the (first) dynamical degree is equal to the arithmetic degree at a rational point whose forward orbit is well-defined and Zariski dense. We prove this conjecture for surjective endomorphisms on smooth projective surfaces. For surjective endomorphisms on any smooth projective varieties, we show the existence of rational points whose arithmetic degrees are equal to the dynamical degree. Moreover, if the map is an automorphism, there exists a Zariski dense set of such points with pairwise disjoint orbits.


Journal ArticleDOI
21 Aug 2018
TL;DR: In this paper, an edge irregular reflexive k-labeling is given for the disjoint union of wheel-related graphs, and the exact value of the reflexive edge strength is determined for the disjoint union of m copies of some wheel-related graphs, specifically gear graphs and prism graphs.

Abstract: In graph theory, a labeling assigns names, generally whole numbers, to the edges, the vertices, or both of a graph. Formally, given a graph G = ( V , E ) , a vertex labeling is a function from V to a set of labels; a graph with such a function defined is known as a vertex-labeled graph. Similarly, an edge labeling is a mapping of the elements of E to a set of labels, and in this case the graph is called an edge-labeled graph. We consider an edge irregular reflexive k-labeling for the disjoint union of wheel-related graphs and deduce the exact value of the reflexive edge strength for the disjoint union of m copies of some wheel-related graphs, specifically gear graphs and prism graphs.

Journal ArticleDOI
TL;DR: An efficient method is given for constructing a large set of disjoint spectra functions without linear structures, which are not equivalent to partially linear functions, and some balanced functions are designed that achieve the highest nonlinearity known.
Abstract: In this paper, we give an efficient method for constructing a large set of disjoint spectra functions without linear structures, which are not equivalent to partially linear functions. This positively answers the open problem raised by Zhang and Xiao ("how to construct a large set of disjoint spectra functions which are not (linearly equivalent to) partially linear functions"). At the same time, this significantly extends a recent result of Zhang, where a method of specifying four disjoint spectra functions was given. It is demonstrated that such sets can be utilized in the design of highly nonlinear resilient functions. In the second part, based on a generalization of the indirect sum method, we give an alternative approach for designing sets of disjoint spectra functions of even larger cardinality than already given ones, but these functions then admit linear structures. In addition, it is shown that using suitable initial functions in the generalized indirect sum method we can specify highly nonlinear resilient Boolean functions (in an odd number of input variables $n$) whose nonlinearity in many cases exceeds the current best known values. Moreover, we design some balanced functions (for odd $n$) that also achieve the highest nonlinearity known.
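As a small illustration of the underlying notion (a hedged sketch: the example functions are invented, and this only checks the disjoint-spectra property rather than reproducing the paper's constructions), two Boolean functions have disjoint spectra when their nonzero Walsh-Hadamard coefficients occur at disjoint sets of linear functionals.

# Hedged sketch: brute-force Walsh-Hadamard spectra of Boolean functions on n
# variables, and a check that two functions have disjoint spectra.

from itertools import product

def walsh_spectrum(f, n):
    """f: callable on n-bit tuples returning 0/1. Returns dict a -> W_f(a)."""
    points = list(product((0, 1), repeat=n))
    spec = {}
    for a in points:
        spec[a] = sum((-1) ** (f(x) ^ (sum(ai * xi for ai, xi in zip(a, x)) & 1))
                      for x in points)
    return spec

def disjoint_spectra(f, g, n):
    wf, wg = walsh_spectrum(f, n), walsh_spectrum(g, n)
    return all(wf[a] == 0 or wg[a] == 0 for a in wf)

# Invented 3-variable examples: f depends only on x0, x1; g only on x2,
# so their Walsh supports cannot overlap.
f = lambda x: x[0] & x[1]
g = lambda x: x[2]
print(disjoint_spectra(f, g, 3))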

Journal ArticleDOI
TL;DR: The result of Qiao and Yang is improved by showing that all n-dimensional folded hypercubes are (3n − 5)-conditional edge-fault-tolerant strongly Menger edge connected for n ≥ 5, and an example is presented to show that the result is optimal with respect to the maximum number of tolerated edge faults.

Journal ArticleDOI
Paul Horn1
TL;DR: If Kn is properly edge colored with n−1 colors, a positive fraction of the edges can be covered by edge disjoint rainbow spanning trees.
Abstract: Brualdi and Hollingsworth conjectured that, for even n, in a proper edge coloring of Kn using precisely n−1 colors, the edge set can be partitioned into n/2 spanning trees which are rainbow (and hence, precisely one edge from each color class is in each spanning tree). They proved that there always are two edge disjoint rainbow spanning trees. Krussel, Marshall, and Verrall improved this to three edge disjoint rainbow spanning trees. Recently, Carraher, Hartke and the author proved a theorem improving this to εn/log n rainbow spanning trees, even when more general edge colorings of Kn are considered. In this article, we show that if Kn is properly edge colored with n−1 colors, a positive fraction of the edges can be covered by edge disjoint rainbow spanning trees.
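For intuition only, here is a hedged sketch that greedily peels edge-disjoint rainbow spanning trees off a properly edge-colored complete graph, using a disjoint-set (union-find) structure to grow each tree. It is a heuristic illustration, not the proof technique of the paper; the coloring below is the standard 1-factorization coloring of K_n for even n, and the greedy edge order is arbitrary.

# Hedged sketch: greedily peel edge-disjoint rainbow spanning trees off a
# properly edge-colored K_n, growing each tree Kruskal-style with a
# union-find structure and a "no repeated color" rule.

def find(parent, v):
    while parent[v] != v:
        parent[v] = parent[parent[v]]
        v = parent[v]
    return v

def greedy_rainbow_trees(n, color):
    """color[(u, v)] with u < v is the color of edge uv in a proper coloring."""
    remaining = set(color)
    trees = []
    while True:
        parent = list(range(n))
        used_colors, tree = set(), []
        for (u, v) in sorted(remaining):
            c = color[(u, v)]
            if c in used_colors:
                continue
            ru, rv = find(parent, u), find(parent, v)
            if ru != rv:                       # edge joins two components
                parent[ru] = rv
                used_colors.add(c)
                tree.append((u, v))
        if len(tree) < n - 1:                  # no further rainbow spanning tree found
            return trees
        trees.append(tree)
        remaining -= set(tree)

# Proper (n-1)-edge-coloring of K_n for even n via the standard 1-factorization:
# edge uv gets color (u + v) mod (n - 1), except edges to vertex n-1, which
# get color 2u mod (n - 1).
n = 6
color = {}
for u in range(n):
    for v in range(u + 1, n):
        color[(u, v)] = (u + v) % (n - 1) if v != n - 1 else (2 * u) % (n - 1)
print(len(greedy_rainbow_trees(n, color)), "edge-disjoint rainbow spanning trees found")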

Journal ArticleDOI
TL;DR: It is shown that the linear sets of pseudoregulus type and, for t ≥ 4, the scattered linear sets found by Lunardon and Polverino are the only maximum scattered F_q-linear sets in PG.

Journal ArticleDOI
TL;DR: In this article, the authors studied the model reduction of leader-follower multi-agent networks by clustering and derived a priori upper bound for the approximate model reduction error in the case that the agent dynamics is an arbitrary multivariable input-state-output system.
Abstract: In the recent paper (Monshizadeh et al. in IEEE Trans Control Netw Syst 1(2):145–154, 2014. https://doi.org/10.1109/TCNS.2014.2311883 ), model reduction of leader–follower multi-agent networks by clustering was studied. For such multi-agent networks, a reduced order network is obtained by partitioning the set of nodes in the graph into disjoint sets, called clusters, and associating with each cluster a single, new, node in a reduced network graph. In Monshizadeh et al. (2014), this method was studied for the special case that the agents have single integrator dynamics. For a special class of graph partitions, called almost equitable partitions, an explicit formula was derived for the $\mathcal{H}_2$ model reduction error. In the present paper, we will extend and generalize the results from Monshizadeh et al. (2014) in a number of directions. Firstly, we will establish an a priori upper bound for the $\mathcal{H}_2$ model reduction error in the case that the agent dynamics is an arbitrary multivariable input–state–output system. Secondly, for the single integrator case, we will derive an explicit formula for the $\mathcal{H}_\infty$ model reduction error. Thirdly, we will prove an a priori upper bound for the $\mathcal{H}_\infty$ model reduction error in the case that the agent dynamics is a symmetric multivariable input–state–output system. Finally, we will consider the problem of obtaining a priori upper bounds if we cluster using arbitrary, possibly non almost equitable, partitions.