
Showing papers on "Disjoint sets published in 2006"


Journal ArticleDOI
TL;DR: New cost functions for spectral clustering based on measures of error between a given partition and a solution of the spectral relaxation of a minimum normalized cut problem are derived.
Abstract: Spectral clustering refers to a class of techniques which rely on the eigenstructure of a similarity matrix to partition points into disjoint clusters, with points in the same cluster having high similarity and points in different clusters having low similarity. In this paper, we derive new cost functions for spectral clustering based on measures of error between a given partition and a solution of the spectral relaxation of a minimum normalized cut problem. Minimizing these cost functions with respect to the partition leads to new spectral clustering algorithms. Minimizing with respect to the similarity matrix leads to algorithms for learning the similarity matrix from fully labelled data sets. We apply our learning algorithm to the blind one-microphone speech separation problem, casting the problem as one of segmentation of the spectrogram.
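
For orientation, a minimal sketch of the standard spectral relaxation this paper builds on (not the new cost functions it derives), assuming a symmetric nonnegative similarity matrix W with positive degrees; all names are illustrative:

    import numpy as np

    def two_way_spectral_cut(W):
        """Two-way normalized-cut relaxation: split points by the sign of the
        second eigenvector of L = I - D^{-1/2} W D^{-1/2}."""
        d = W.sum(axis=1)                                  # node degrees
        D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
        L = np.eye(len(W)) - D_inv_sqrt @ W @ D_inv_sqrt   # symmetric normalized Laplacian
        eigvals, eigvecs = np.linalg.eigh(L)               # eigenvalues in ascending order
        fiedler = eigvecs[:, 1]                            # second-smallest eigenvector
        return fiedler >= 0                                # boolean cluster labels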

313 citations


Journal ArticleDOI
TL;DR: In this paper, the free energy of the Hermitian one-matrix model is calculated to all orders of 1/N expansion in the case where the limiting eigenvalue distribution spans arbitrary (but fixed) number of disjoint intervals (curves).
Abstract: We present the diagrammatic technique for calculating the free energy of the Hermitian one-matrix model to all orders of 1/N expansion in the case where the limiting eigenvalue distribution spans arbitrary (but fixed) number of disjoint intervals (curves).

275 citations


Journal Article
TL;DR: In this article, the authors describe anytime search procedures that find disjoint subsets of recorded variables for which the members of each subset are d-separated by a single common unrecorded cause, if such exists.
Abstract: We describe anytime search procedures that (1) find disjoint subsets of recorded variables for which the members of each subset are d-separated by a single common unrecorded cause, if such exists; (2) return information about the causal relations among the latent factors so identified. We prove the procedure is point-wise consistent assuming (a) the causal relations can be represented by a directed acyclic graph (DAG) satisfying the Markov Assumption and the Faithfulness Assumption; (b) unrecorded variables are not caused by recorded variables; and (c) dependencies are linear. We compare the procedure with standard approaches over a variety of simulated structures and sample sizes, and illustrate its practical value with brief studies of social science data sets. Finally, we consider generalizations for non-linear systems.

220 citations


Journal ArticleDOI
TL;DR: In this paper, the free energy of the matrix eigenvalue model with arbitrary power β of the Vandermonde determinant was calculated to all orders of 1/N expansion in the case where the limiting eigenvalue distribution spans arbitrary (but fixed) number of disjoint intervals (curves).
Abstract: We present the diagrammatic technique for calculating the free energy of the matrix eigenvalue model (the model with arbitrary power β of the Vandermonde determinant) to all orders of 1/N expansion in the case where the limiting eigenvalue distribution spans arbitrary (but fixed) number of disjoint intervals (curves).

182 citations


Journal ArticleDOI
01 Aug 2006
TL;DR: It is shown how to extend the analysis of iterative rounding applied to EC-SNDP to yield 2-approximation algorithms both for general ELC and for the case of VC-SNDP when rij ∈ {0, 1, 2}.
Abstract: The survivable network design problem (SNDP) is the following problem: given an undirected graph and values rij for each pair of vertices i and j, find a minimum-cost subgraph such that there are at least rij disjoint paths between vertices i and j. In the edge-connected version of this problem (EC-SNDP), these paths must be edge-disjoint. In the vertex-connected version of the problem (VC-SNDP), the paths must be vertex-disjoint. The element connectivity problem (ELC-SNDP, or ELC) is a problem of intermediate difficulty. In this problem, the set of vertices is partitioned into terminals and nonterminals. The edges and nonterminals of the graph are called elements. The values rij are only specified for pairs of terminals i, j, and the paths from i to j must be element-disjoint. Thus, if rij − 1 elements fail, terminals i and j are still connected by a path in the network. These variants of SNDP are all known to be NP-hard. The best known approximation algorithm for EC-SNDP has a performance guarantee of 2 and iteratively rounds solutions to a linear programming relaxation of the problem. ELC has a primal-dual O(log k)-approximation algorithm, where k = max_{i,j} rij. Since this work first appeared as an extended abstract, it has been shown that it is hard to approximate VC-SNDP to within a factor of 2^(log^(1−ε) n). In this paper we investigate applying iterative rounding to ELC and VC-SNDP. We show that iterative rounding will not yield a constant-factor approximation algorithm for general VC-SNDP. On the other hand, we show how to extend the analysis of iterative rounding applied to EC-SNDP to yield 2-approximation algorithms for both general ELC, and for the case of VC-SNDP when rij ∈ {0, 1, 2}. The latter result improves on an existing 3-approximation algorithm. The former is the first constant-factor approximation algorithm for a general survivable network design problem that allows node failures.

129 citations


Journal ArticleDOI
TL;DR: An efficient branch-and-bound approach is presented to solve the redundancy allocation problem where the considered system is coherent, that is, the objective and constraint functions have monotonically increasing properties; the approach is based primarily on a search-space elimination of disjoint sets in the solution space and does not require any relaxation of branched subproblems.

121 citations


Journal ArticleDOI
TL;DR: A key concept is introduced, the edge multiplicity, that measures the number of triangles passing through an edge; it extends the clustering coefficient in that it involves the properties of two vertices and not just one.
Abstract: We develop a full theoretical approach to clustering in complex networks. A key concept is introduced, the edge multiplicity, that measures the number of triangles passing through an edge. This quantity extends the clustering coefficient in that it involves the properties of two vertices and not just one. The formalism is completed with the definition of a three-vertex correlation function, which is the fundamental quantity describing the properties of clustered networks. The formalism suggests different metrics that are able to thoroughly characterize transitive relations. A rigorous analysis of several real networks, which makes use of this formalism and the metrics, is also provided. It is also found that clustered networks can be classified into two main groups: the weak and the strong transitivity classes. In the first class, edge multiplicity is small, with triangles being disjoint. In the second class, edge multiplicity is high and so triangles share many edges. As we shall see in the following paper, the class a network belongs to has strong implications in its percolation properties.
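
A minimal sketch of the edge multiplicity described above, for a graph given as an adjacency dictionary (function and variable names are my own): the multiplicity of an edge (i, j) is the number of common neighbours of i and j, i.e. the number of triangles through that edge.

    def edge_multiplicity(adj):
        """adj: dict mapping each vertex to the set of its neighbours.
        Returns {(i, j): number of triangles through edge (i, j)} with i < j."""
        m = {}
        for i, nbrs in adj.items():
            for j in nbrs:
                if i < j:                               # count each edge once
                    m[(i, j)] = len(adj[i] & adj[j])    # common neighbours = triangles
        return m

    # Example: a triangle {0, 1, 2} plus a pendant edge (2, 3)
    adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
    print(edge_multiplicity(adj))   # {(0, 1): 1, (0, 2): 1, (1, 2): 1, (2, 3): 0}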

120 citations


Journal ArticleDOI
TL;DR: This work describes a graph theoretical approach to extracting co-expressed sets of genes, based on the computation of cliques, and presents several examples of multiple gene interactions that are altered by radiation exposure and thus represent potential molecular pathways that mediate the radiation response.
Abstract: Genes with common functions often exhibit correlated expression levels, which can be used to identify sets of interacting genes from microarray data. Microarrays typically measure expression across genomic space, creating a massive matrix of co-expression that must be mined to extract only the most relevant gene interactions. We describe a graph theoretical approach to extracting co-expressed sets of genes, based on the computation of cliques. Unlike the results of traditional clustering algorithms, cliques are not disjoint and allow genes to be assigned to multiple sets of interacting partners, consistent with biological reality. A graph is created by thresholding the correlation matrix to include only the correlations most likely to signify functional relationships. Cliques computed from the graph correspond to sets of genes for which significant edges are present between all members of the set, representing potential members of common or interacting pathways. Clique membership can be used to infer function about poorly annotated genes, based on the known functions of better-annotated genes with which they share clique membership (i.e., "guilt-by-association"). We illustrate our method by applying it to microarray data collected from the spleens of mice exposed to low-dose ionizing radiation. Differential analysis is used to identify sets of genes whose interactions are impacted by radiation exposure. The correlation graph is also queried independently of clique to extract edges that are impacted by radiation. We present several examples of multiple gene interactions that are altered by radiation exposure and thus represent potential molecular pathways that mediate the radiation response.
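
A minimal sketch of the graph-construction step described above, assuming numpy and networkx are available; the expression matrix, threshold, and names are placeholders rather than the paper's data or parameters. Maximal cliques, unlike clusters, may overlap, so a gene can appear in several sets.

    import numpy as np
    import networkx as nx

    def coexpression_cliques(expr, threshold=0.85):
        """expr: genes x samples expression matrix.
        Returns the maximal cliques of the thresholded |correlation| graph."""
        corr = np.corrcoef(expr)                     # gene-by-gene Pearson correlations
        G = nx.Graph()
        G.add_nodes_from(range(len(expr)))
        rows, cols = np.where(np.abs(corr) >= threshold)
        G.add_edges_from((int(i), int(j)) for i, j in zip(rows, cols) if i < j)
        return [c for c in nx.find_cliques(G) if len(c) >= 3]   # drop trivial cliques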

118 citations


Journal ArticleDOI
TL;DR: It is shown that a set of n points in the plane has at most O(10.05^n) perfect matchings with crossing-free straight-line embedding, and several related bounds are derived.
Abstract: We show that a set of $n$ points in the plane has at most $O(10.05^n)$ perfect matchings with crossing-free straight-line embedding. The expected number of perfect crossing-free matchings of a set of $n$ points drawn independently and identically distributed from an arbitrary distribution in the plane is at most $O(9.24^n)$. Several related bounds are derived: (a) The number of all (not necessarily perfect) crossing-free matchings is at most $O(10.43^n)$. (b) The number of red-blue perfect crossing-free matchings (where the points are colored red or blue and each edge of the matching must connect a red point with a blue point) is at most $O(7.61^n)$. (c) The number of left-right perfect crossing-free matchings (where the points are designated as left or right endpoints of the matching edges) is at most $O(5.38^n)$. (d) The number of perfect crossing-free matchings across a line (where all the matching edges must cross a fixed halving line of the set) is at most $4^n$. These bounds are employed to infer that a set of $n$ points in the plane has at most $O(86.81^n)$ crossing-free spanning cycles (simple polygonizations) and at most $O(12.24^n)$ crossing-free partitions (these are partitions of the point set so that the convex hulls of the individual parts are pairwise disjoint). We also derive lower bounds for some of these quantities.

109 citations


Journal ArticleDOI
TL;DR: The proposed cycle counting algorithm consists of integer matrix operations and its complexity grows as O(gn^3), where n = max(|U|, |W|).
Abstract: Let G = (U ∪ W, E) be a bipartite graph with disjoint vertex sets U and W, edge set E, and girth g. This correspondence presents an algorithm for counting the number of cycles of length g, g+2, and g+4 incident upon every vertex in U ∪ W. The proposed cycle counting algorithm consists of integer matrix operations and its complexity grows as O(gn^3), where n = max(|U|, |W|).
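
The paper counts cycles of length g, g+2 and g+4 with integer matrix operations; the sketch below (hypothetical names, not the paper's algorithm) illustrates only the simplest case, counting the 4-cycles through each vertex of U from the biadjacency matrix M.

    import numpy as np
    from math import comb

    def four_cycles_per_U_vertex(M):
        """M: |U| x |W| 0/1 biadjacency matrix of a bipartite graph G = (U ∪ W, E).
        Returns, for each u in U, the number of 4-cycles passing through u."""
        M = np.asarray(M, dtype=int)
        A = M @ M.T                     # A[u, v] = number of common neighbours of u and v
        counts = []
        for u in range(M.shape[0]):
            # a 4-cycle through u picks another U-vertex v and a pair of their common neighbours
            counts.append(sum(comb(int(A[u, v]), 2) for v in range(M.shape[0]) if v != u))
        return counts

    # Example: K_{2,2} contains exactly one 4-cycle, and it passes through both U-vertices
    print(four_cycles_per_U_vertex([[1, 1], [1, 1]]))   # [1, 1]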

96 citations


Journal ArticleDOI
TL;DR: In this article, the authors studied the structure of M and of ∂Ω, when Ω is not strictly convex, and constructed examples of such divisible convex open sets Ω.
Abstract: Divisible convex sets IV: boundary structure in dimension 3. Let Ω be an indecomposable properly convex open subset of the real projective 3-space which is divisible, i.e. for which there exists a torsion-free discrete group Γ of projective transformations preserving Ω such that the quotient M := Γ\Ω is compact. We study the structure of M and of ∂Ω when Ω is not strictly convex: the union of the properly embedded triangles in Ω projects in M onto a union of finitely many disjoint tori and Klein bottles which induces an atoroidal decomposition of M. Every non-extremal point of ∂Ω is on an edge of a unique properly embedded triangle in Ω and the set of vertices of these triangles is dense in the boundary of Ω (see Figs. 1 to 4). Moreover, we construct examples of such divisible convex open sets Ω.

Journal ArticleDOI
TL;DR: Improving a result of Erdős, Gyárfás and Pyber for large n, it is shown that for every integer r ≥ 2 there exists a constant n_0 = n_0(r) such that if n ≥ n_0 and the edges of the complete graph K_n are colored with r colors, then the vertex set of K_n can be partitioned into at most 100r log r vertex-disjoint monochromatic cycles.

Journal Article
TL;DR: Random separation as discussed by the authors partitions the vertex set of a graph randomly into two disjoint sets to separate a solution from the rest of the graph into connected components, and then selects appropriate components to form a solution.
Abstract: We develop a new randomized method, random separation, for solving fixed-cardinality optimization problems on graphs, i.e., problems concerning solutions with exactly a fixed number k of elements (e.g., k vertices V') that optimize solution values (e.g., the number of edges covered by V'). The key idea of the method is to partition the vertex set of a graph randomly into two disjoint sets to separate a solution from the rest of the graph into connected components, and then select appropriate components to form a solution. We can use universal sets to derandomize algorithms obtained from this method. This new method is versatile and powerful as it can be used to solve a wide range of fixed-cardinality optimization problems for degree-bounded graphs, graphs of bounded degeneracy (a large family of graphs that contains degree-bounded graphs, planar graphs, graphs of bounded tree-width, and nontrivial minor-closed families of graphs), and even general graphs.
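
A minimal sketch of the core random-separation step described above: colour the vertices red/green independently and collect the connected components induced inside each colour class (the problem-specific component-selection step is omitted; names are illustrative).

    import random

    def random_separation_components(adj, p=0.5, seed=None):
        """adj: dict vertex -> set of neighbours (undirected graph).
        Randomly 2-colours the vertices and returns the connected components
        induced within each colour class."""
        rng = random.Random(seed)
        colour = {v: rng.random() < p for v in adj}
        seen, components = set(), []
        for start in adj:
            if start in seen:
                continue
            comp, stack = [], [start]
            seen.add(start)
            while stack:                                  # DFS restricted to one colour class
                v = stack.pop()
                comp.append(v)
                for w in adj[v]:
                    if w not in seen and colour[w] == colour[start]:
                        seen.add(w)
                        stack.append(w)
            components.append((colour[start], comp))
        return components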

Journal ArticleDOI
TL;DR: The notion of mg*-closed sets is introduced and the unified theory for collections of subsets between closed sets and g-closed sets is obtained.
Abstract: Quite recently, by using semi-open (resp. α-open, preopen, β-open) sets in a topological space, the notions of sg*-closed (resp. αg*-closed, pg*-closed, βg*-closed) sets are introduced and investigated in [8]. These subsets lie between closed sets and g-closed sets due to Levine [5]. In this paper, we introduce the notion of mg*-closed sets and obtain the unified theory for collections of subsets between closed sets and g-closed sets.

Journal ArticleDOI
TL;DR: A fractional version of the conjecture that every graph G is 2Δ(G)-matroidally colorable is proved, and the case in which the simplicial complex is the complex of independent sets of a graph is studied in detail.
Abstract: A classical theorem of Edmonds provides a min-max formula relating the maximal size of a set in the intersection of two matroids to a "covering" parameter. We generalize this theorem, replacing one of the matroids by a general simplicial complex. One application is a solution of the case r = 3 of a matroidal version of Ryser's conjecture. Another is an upper bound on the minimal number of sets belonging to the intersection of two matroids, needed to cover their common ground set. This, in turn, is used to derive a weakened version of a conjecture of Rota. Bounds are also found on the dual parameter, the maximal number of disjoint sets, all spanning in each of two given matroids. We study in detail the case in which the complex is the complex of independent sets of a graph, and prove generalizations of known results on "independent systems of representatives" (which are the special case in which the matroid is a partition matroid). In particular, we define a notion of k-matroidal colorability of a graph, and prove a fractional version of a conjecture, that every graph G is 2Δ(G)-matroidally colorable. The methods used are mostly topological.

Proceedings Article
16 Jul 2006
TL;DR: Five new lower bound computation methods are defined: two are based on detecting inconsistencies via a unit propagation procedure that propagates unit clauses using an original ordering; the other three add an additional level of forward look-aheadbased on detecting failed literals.
Abstract: Many lower bound computation methods for branch and bound Max-SAT solvers can be explained as procedures that search for disjoint inconsistent subformulas in the Max-SAT instance under consideration. The difference among them is the technique used to detect inconsistencies. In this paper, we define five new lower bound computation methods: two of them are based on detecting inconsistencies via a unit propagation procedure that propagates unit clauses using an original ordering; the other three add an additional level of forward look-ahead based on detecting failed literals. Finally, we provide empirical evidence that the new lower bounds are of better quality than the existing lower bounds, as well as that a solver with our new lower bounds greatly outperforms some of the best performing state-of-the-art Max-SAT solvers on Max-2SAT, Max-3SAT, and Max-Cut instances.
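
A minimal sketch of the inconsistency test that underlies these lower bounds: unit propagation on a CNF subformula, with literals encoded as signed integers. If it derives the empty clause, the subformula is inconsistent and contributes one unit to the lower bound. The paper's methods additionally identify the clauses responsible, remove them, and iterate, and add failed-literal look-ahead; none of that is shown here.

    def up_inconsistent(clauses):
        """clauses: iterable of clauses, each a collection of non-zero ints
        (positive = variable, negative = its negation).
        Returns True iff unit propagation derives the empty clause."""
        clauses = [set(c) for c in clauses]
        while True:
            unit = next((c for c in clauses if len(c) == 1), None)
            if unit is None:
                return False                 # propagation stops without a conflict
            lit = next(iter(unit))
            reduced = []
            for c in clauses:
                if lit in c:
                    continue                 # clause satisfied, drop it
                c = c - {-lit}
                if not c:
                    return True              # empty clause derived: inconsistent
                reduced.append(c)
            clauses = reduced

    # Example: {x}, {not x or y}, {not y} is inconsistent
    print(up_inconsistent([[1], [-1, 2], [-2]]))   # True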

Journal ArticleDOI
TL;DR: It is proved that if the vertex set of a d-regular graph is partitioned into classes of size d + ⌊d/r⌋, then it is possible to select a transversal inducing vertex-disjoint trees on at most r vertices, and some limitations on the power of this topological method are established.
Abstract: We introduce and discuss generalizations of the problem of independent transversals. Given a graph property $\mathcal{R}$, we investigate whether any graph of maximum degree at most d with a vertex partition into classes of size at least p admits a transversal having property $\mathcal{R}$. In this paper we study this problem for the following properties $\mathcal{R}$: “acyclic”, “H-free”, and “having connected components of order at most r”. We strengthen a result of [13]. We prove that if the vertex set of a d-regular graph is partitioned into classes of size d + ⌊d/r⌋, then it is possible to select a transversal inducing vertex-disjoint trees on at most r vertices. Our approach applies appropriate triangulations of the simplex and Sperner’s Lemma. We also establish some limitations on the power of this topological method. We give constructions of vertex-partitioned graphs admitting no independent transversals, which partially settle an old question of Bollobás, Erdős and Szemerédi. An extension of this construction provides vertex-partitioned graphs with small degree such that every transversal contains a fixed graph H as a subgraph. Finally, we pose several open questions.

Proceedings ArticleDOI
20 Aug 2006
TL;DR: An efficient storytelling implementation that embeds the CARTwheels redescription mining algorithm in an A* search procedure, using the former to supply next move operators on search branches to the latter, and which exploits the structure of partitions imposed by the given vocabulary.
Abstract: We formulate a new data mining problem called storytelling as a generalization of redescription mining. In traditional redescription mining, we are given a set of objects and a collection of subsets defined over these objects. The goal is to view the set system as a vocabulary and identify two expressions in this vocabulary that induce the same set of objects. Storytelling, on the other hand, aims to explicitly relate object sets that are disjoint (and hence, maximally dissimilar) by finding a chain of (approximate) redescriptions between the sets. This problem finds applications in bioinformatics, for instance, where the biologist is trying to relate a set of genes expressed in one experiment to another set, implicated in a different pathway. We outline an efficient storytelling implementation that embeds the CARTwheels redescription mining algorithm in an A* search procedure, using the former to supply next move operators on search branches to the latter. This approach is practical and effective for mining large datasets and, at the same time, exploits the structure of partitions imposed by the given vocabulary. Three application case studies are presented: a study of word overlaps in large English dictionaries, exploring connections between genesets in a bioinformatics dataset, and relating publications in the PubMed index of abstracts.

Journal ArticleDOI
TL;DR: This paper introduces a novel concept called conflicting link set which provides insights into the so-called trap problem, and develops a divide-and-conquer strategy called COnflicting Link Exclusion (COLE), which can outperform other approaches in terms of both the optimality and running time.
Abstract: Finding a disjoint path pair is an important component in survivable networks. Since the traffic is carried on the active (working) path most of the time, it is useful to find a disjoint path pair such that the length of the shorter path (to be used as the active path) is minimized. In this paper, we first address such a Min-Min problem. We prove that this problem is NP-complete in either single link cost (e.g., dedicated backup bandwidth) or dual link cost (e.g., shared backup bandwidth) networks. In addition, it is NP-hard to obtain a K-approximation to the optimal solution for any K > 1. Our proof is extended to another open question regarding the computational complexity of a restricted version of the Min-Sum problem in an undirected network with ordered dual cost links (called the MSOD problem). To solve the Min-Min problem efficiently, we introduce a novel concept called conflicting link set which provides insights into the so-called trap problem, and develop a divide-and-conquer strategy. The result is an effective heuristic for the Min-Min problem called COnflicting Link Exclusion (COLE), which can outperform other approaches in terms of both the optimality and running time. We also apply COLE to the MSOD problem to efficiently provide shared path protection and conduct comprehensive performance evaluation as well as comparison of various schemes for shared path protection. We show that COLE not only processes connection requests much faster than existing integer linear programming (ILP)-based approaches but also achieves a good balance among the active path length, bandwidth efficiency, and recovery time.
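
For illustration, a sketch of the naive two-step heuristic whose failure mode is exactly the "trap problem" that the conflicting-link-set idea targets: route the active path first, remove its links, and hope a disjoint backup survives. It assumes networkx and an undirected weighted graph; it is not COLE itself.

    import networkx as nx

    def naive_disjoint_pair(G, s, t, weight="weight"):
        """Two-step heuristic for a link-disjoint path pair.
        Can get 'trapped': may return None even when a disjoint pair exists in G."""
        p1 = nx.shortest_path(G, s, t, weight=weight)      # active (working) path
        H = G.copy()
        H.remove_edges_from(zip(p1, p1[1:]))               # exclude the active path's links
        try:
            p2 = nx.shortest_path(H, s, t, weight=weight)  # backup path
        except nx.NetworkXNoPath:
            return None
        return p1, p2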

Journal ArticleDOI
TL;DR: The bound allows the Thomas–Wollan proof to be modified slightly to show that every $2k$-connected graph with average degree at least $12k$ is $k$-linked.
Abstract: A graph is $k$-linked if for every list of $2k$ vertices $\{s_1,{\ldots}\,s_k, t_1,{\ldots}\,t_k\}$, there exist internally disjoint paths $P_1,{\ldots}\, P_k$ such that each $P_i$ is an $s_i,t_i$-path. We consider degree conditions and connectivity conditions sufficient to force a graph to be $k$-linked.Let $D(n,k)$ be the minimum positive integer $d$ such that every $n$-vertex graph with minimum degree at least $d$ is $k$-linked and let $R(n,k)$ be the minimum positive integer $r$ such that every $n$-vertex graph in which the sum of degrees of each pair of non-adjacent vertices is at least $r$ is $k$-linked. The main result of the paper is finding the exact values of $D(n,k)$ and $R(n,k)$ for every $n$ and $k$.Thomas and Wollan [14] used the bound $D(n,k)\leq (n+3k)/2-2$ to give sufficient conditions for a graph to be $k$-linked in terms of connectivity. Our bound allows us to modify the Thomas–Wollan proof slightly to show that every $2k$-connected graph with average degree at least $12k$ is $k$-linked.

Journal ArticleDOI
TL;DR: This paper proposes a new apriori-based algorithm for mining graph data, in which the basic building blocks are relatively large, disjoint paths; the algorithm is proven to be sound and complete.
Abstract: Whereas data mining in structured data focuses on frequent data values, in semistructured and graph data mining, the issue is frequent labels and common specific topologies. The structure of the data is just as important as its content. We study the problem of discovering typical patterns of graph data, a task made difficult because of the complexity of required subtasks, especially subgraph isomorphism. In this paper, we propose a new apriori-based algorithm for mining graph data, where the basic building blocks are relatively large, disjoint paths. The algorithm is proven to be sound and complete. Empirical evidence shows practical advantages of our approach for certain categories of graphs.

Book ChapterDOI
17 Aug 2006
TL;DR: The Nelson-Oppen decidability transfer result is strengthened, by showing that it applies to theories over disjoint signatures, whose satisfiability problem, in either arbitrary or infinite models, is decidable.
Abstract: In the context of combinations of theories with disjoint signatures, we classify the component theories according to the decidability of constraint satisfiability problems in arbitrary and in infinite models, respectively. We exhibit a theory T1 such that satisfiability is decidable, but satisfiability in infinite models is undecidable. It follows that satisfiability in T1∪T2 is undecidable, whenever T2 has only infinite models, even if signatures are disjoint and satisfiability in T2 is decidable. In the second part of the paper we strengthen the Nelson-Oppen decidability transfer result, by showing that it applies to theories over disjoint signatures, whose satisfiability problem, in either arbitrary or infinite models, is decidable. We show that this result covers decision procedures based on rewriting, complementing recent work on combination of theories in the rewrite-based approach to satisfiability.

Journal ArticleDOI
TL;DR: A notion of independence for these AEC’s is constructed and it is shown that under simplicity the notion has all the usual properties of first order non-forking over complete types.

Journal ArticleDOI
TL;DR: In this paper, the authors consider link diagrams on orientable surfaces up to the relation of stable equivalence, that is up to homeomorphisms of surfaces, Reidemeister moves and the addition or subtraction of handles disjoint from the diagram.
Abstract: In this paper we consider link diagrams on orientable surfaces up to the relation of stable equivalence, that is up to homeomorphisms of surfaces, Reidemeister moves and the addition or subtraction of handles disjoint from the diagram. Stable equivalence classes of link diagrams have an equivalent formulation in terms of so called “virtual” link diagrams pioneered by Kauffman (see for example the review articles Kauffman and Manturov [8] and Fenn, Kauffman and Manturov [6] and references therein). Many constructions for link diagrams on R^2 can be reproduced for stable equivalence classes of link diagrams on surfaces; for example, one can define the Jones polynomial.

Journal ArticleDOI
TL;DR: It has been shown that any two disjoint finite point sets can be separated by using this algorithm, and an application to classification problems with some real-world data sets has been implemented.
Abstract: We consider the problem of discriminating between two finite point sets A and B in the n-dimensional space by using a special type of polyhedral function. An effective finite algorithm for finding a separating function based on iterative solutions of linear programming subproblems is suggested. At each iteration a function whose graph is a polyhedral cone with vertex at a certain point is constructed and the resulting separating function is defined as a point-wise minimum of these functions. It has been shown that any two disjoint finite point sets can be separated by using this algorithm. An illustrative example is given and an application to classification problems with some real-world data sets has been implemented.
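
A minimal sketch of a single linear-programming subproblem of the kind iterated by the method above, assuming numpy and scipy: fit one affine function w·x + b that tries to place A on the positive side and B on the negative side by minimizing the total margin violation. The paper's separator is a pointwise minimum of several polyhedral-cone pieces; only one linear piece is shown, and all names are illustrative.

    import numpy as np
    from scipy.optimize import linprog

    def linear_separator(A, B):
        """One LP piece: find w, b minimizing the total violation of
        w·a + b >= 1 for a in A and w·p + b <= -1 for p in B."""
        A, B = np.asarray(A, float), np.asarray(B, float)
        mA, mB, n = len(A), len(B), A.shape[1]
        c = np.r_[np.zeros(n + 1), np.ones(mA + mB)]          # minimize the sum of slacks
        rows_A = np.hstack([-A, -np.ones((mA, 1)), -np.eye(mA), np.zeros((mA, mB))])
        rows_B = np.hstack([ B,  np.ones((mB, 1)), np.zeros((mB, mA)), -np.eye(mB)])
        res = linprog(c,
                      A_ub=np.vstack([rows_A, rows_B]),
                      b_ub=-np.ones(mA + mB),
                      bounds=[(None, None)] * (n + 1) + [(0, None)] * (mA + mB),
                      method="highs")
        w, b = res.x[:n], res.x[n]
        return w, b, res.fun      # objective 0 means the two sets are linearly separable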

Journal ArticleDOI
17 Nov 2006
TL;DR: A general tool, called orbitopal fixing, is presented for enhancing the capabilities of branch-and-cut algorithms in solving symmetric integer programming models in which a subset of the 0/1-variables encodes a partitioning of a set of objects into disjoint subsets.
Abstract: The topic of this paper is integer programming models in which a subset of 0/1-variables encodes a partitioning of a set of objects into disjoint subsets. Such models can be surprisingly hard to solve by branch-and-cut algorithms if the order of the subsets of the partition is irrelevant, since this kind of symmetry unnecessarily blows up the search tree. We present a general tool, called orbitopal fixing, for enhancing the capabilities of branch-and-cut algorithms in solving such symmetric integer programming models. We devise a linear time algorithm that, applied at each node of the search tree, removes redundant parts of the tree produced by the above mentioned symmetry. The method relies on certain polyhedra, called orbitopes, which have been introduced by Kaibel and Pfetsch (Math. Program. A, 114 (2008), 1-36). It does, however, not explicitly add inequalities to the model. Instead, it uses certain fixing rules for variables. We demonstrate the computational power of orbitopal fixing on the example of a graph partitioning problem.

Journal ArticleDOI
Matthew Andrews, Lisa Zhang
TL;DR: It is shown that there is no log^(1/3 − ϵ) M approximation for the undirected Edge-Disjoint Paths problem unless NP ⊆ ZPTIME(n^polylog(n)), where M is the size of the graph and ϵ is any positive constant.
Abstract: We show that there is no log^(1/3 − ϵ) M approximation for the undirected Edge-Disjoint Paths problem unless NP ⊆ ZPTIME(n^polylog(n)), where M is the size of the graph and ϵ is any positive constant. This hardness result also applies to the undirected All-or-Nothing Multicommodity Flow problem and the undirected Node-Disjoint Paths problem.

Journal ArticleDOI
04 Nov 2006 - Order
TL;DR: Let m(n) be the maximum integer such that every partially ordered set P with n elements contains two disjoint subsets A and B, each with cardinality m(n), such that either every element of A is greater than every element of B or every element of A is incomparable with every element of B; it is proved that m(n) = Θ(n/log n).
Abstract: Let m(n) be the maximum integer such that every partially ordered set P with n elements contains two disjoint subsets A and B, each with cardinality m(n), such that either every element of A is greater than every element of B or every element of A is incomparable with every element of B. We prove that \(m(n)=\Theta\left(\frac{n}{\log n}\right)\). Moreover, for fixed ε ∈ (0,1) and n sufficiently large, we construct a partially ordered set P with n elements such that no element of P is comparable with \(n^{\varepsilon}\) other elements of P, and for every two disjoint subsets A and B of P each with cardinality at least \(\frac{14n}{\epsilon\log_2 n}\), there is an element of A that is comparable with an element of B.

Journal ArticleDOI
TL;DR: Aristotle’s principle is maintained, instead halving Cantor’s principle to “equinumerous collections are in 1–1 correspondence”, and the problem of finding a canonical way of attaching numerosities to all sets seems to be worth further investigation.

Journal ArticleDOI
TL;DR: In this paper, the basic combinatorial properties of a complete set of mutually unbiased bases (MUBs) of a q-dimensional Hilbert space, q = p^r, with p a prime and r a positive integer, are shown to be qualitatively mimicked by the configuration of points lying on a proper conic in a projective Hjelmslev plane defined over a Galois ring of characteristic p^2 and rank r.
Abstract: The basic combinatorial properties of a complete set of mutually unbiased bases (MUBs) of a q-dimensional Hilbert space, q = p^r, with p being a prime and r a positive integer, are shown to be qualitatively mimicked by the configuration of points lying on a proper conic in a projective Hjelmslev plane defined over a Galois ring of characteristic p^2 and rank r. The q vectors of a basis correspond to the q points of a (so-called) neighbour class and the q + 1 MUBs answer to the total number of (pairwise disjoint) neighbour classes on the conic.