
Showing papers on "Disjoint sets published in 2019"


Journal ArticleDOI
TL;DR: In this paper, logarithmic negativity, a quantum entanglement measure for mixed quantum states, is calculated in quantum error-correcting codes and found to equal the minimal cross-sectional area of the entanglement wedge in holographic codes, with a quantum correction term equal to the logarithmic negativity between the bulk degrees of freedom on either side of the wedge cross section.
Abstract: We calculate logarithmic negativity, a quantum entanglement measure for mixed quantum states, in quantum error-correcting codes and find it to equal the minimal cross sectional area of the entanglement wedge in holographic codes with a quantum correction term equal to the logarithmic negativity between the bulk degrees of freedom on either side of the entanglement wedge cross section. This leads us to conjecture a holographic dual for logarithmic negativity that is related to the area of a cosmic brane with tension in the entanglement wedge plus a quantum correction term. This is closely related to (though distinct from) the holographic proposal for entanglement of purification. We check this relation for various configurations of subregions in ${\mathrm{AdS}}_{3}/{\mathrm{CFT}}_{2}$. These are disjoint intervals at zero temperature, as well as a single interval and adjacent intervals at finite temperature. We also find this prescription to effectively characterize the thermofield double state. We discuss how a deformation of a spherical entangling region complicates calculations and speculate how to generalize to a covariant description.

174 citations


Journal ArticleDOI
TL;DR: This paper introduces a scheme, called a spatial multiplexing technique, that effectively boosts the platform's computational power by exploiting the disjoint dynamics of multiple different quantum systems driven in parallel by common input streams.
Abstract: Quantum reservoir computing provides a scheme for exploiting the natural dynamics of quantum systems as a computational resource. An NMR spin-ensemble system is a realistic candidate for implementing the framework, which is currently available in laboratories. Considering realistic experimental constraints, the authors propose a spatial multiplexing technique to effectively boost the platform's computational power. This scheme exploits disjoint dynamics of multiple, different quantum systems driven by common input streams in parallel. This allows one to prepare a huge number of qubits from individually small quantum systems, which are easy to handle in experiments.

102 citations


Proceedings ArticleDOI
06 Jan 2019
TL;DR: In this paper, the authors give a unified approach that yields better approximation algorithms for matching and vertex cover in all these models, including the streaming model, the distributed communication model, and the massively parallel computation (MPC) model.
Abstract: There is a rapidly growing need for scalable algorithms that solve classical graph problems, such as maximum matching and minimum vertex cover, on massive graphs. For massive inputs, several different computational models have been introduced, including the streaming model, the distributed communication model, and the massively parallel computation (MPC) model that is a common abstraction of MapReduce-style computation. In each model, algorithms are analyzed in terms of resources such as space used or rounds of communication needed, in addition to the more traditional approximation ratio. In this paper, we give a single unified approach that yields better approximation algorithms for matching and vertex cover in all these models. The highlights include:
• The first one-pass, significantly-better-than-2 approximation for matching in random arrival streams that uses subquadratic space, namely a (1.5 + ε)-approximation streaming algorithm that uses O(n^1.5) space for constant ε > 0.
• The first 2-round, better-than-2 approximation for matching in the MPC model that uses subquadratic space per machine, namely a (1.5 + ε)-approximation algorithm with [MATH HERE] memory per machine for constant ε > 0.
By building on our unified approach, we further develop parallel algorithms in the MPC model that give a (1 + ε)-approximation to matching and an O(1)-approximation to vertex cover in only O(log log n) MPC rounds and O(n/polylog(n)) memory per machine. These results settle multiple open questions posed by Czumaj et al. [STOC 2018]. We obtain our results by a novel combination of two previously disjoint sets of techniques, namely randomized composable coresets and edge degree constrained subgraphs (EDCS). We significantly extend the power of these techniques and prove several new structural results. For example, we show that an EDCS is a sparse certificate for large matchings and small vertex covers that is quite robust to sampling and composition.

65 citations


Journal ArticleDOI
17 Jul 2019
TL;DR: A unified approach to transfer learning is presented that addresses several source- and target-domain label-space and annotation assumptions with a single model and outperforms alternatives in both unsupervised and semi-supervised settings.
Abstract: In this paper, we present a unified approach to transfer learning that addresses several source- and target-domain label-space and annotation assumptions with a single model. It is particularly effective in handling a challenging case, where source and target label-spaces are disjoint, and outperforms alternatives in both unsupervised and semi-supervised settings. The key ingredient is a common representation termed Common Factorised Space. It is shared between source and target domains, and trained with an unsupervised factorisation loss and a graph-based loss. With a wide range of experiments, we demonstrate the flexibility, relevance and efficacy of our method, both in the challenging cases with disjoint label spaces, and in the more conventional cases such as unsupervised domain adaptation, where the source and target domains share the same label-sets.

54 citations


Journal ArticleDOI
TL;DR: The modular Hamiltonian of chiral fermions on the torus is determined, finding that, in addition to a local Unruh-like term, each point is nonlocally coupled to an infinite but discrete set of other points, even for a single interval.
Abstract: We determine the modular Hamiltonian of chiral fermions on the torus, for an arbitrary set of disjoint intervals at generic temperature. We find that, in addition to a local Unruh-like term, each point is nonlocally coupled to an infinite but discrete set of other points, even for a single interval. These accumulate near the boundaries of the intervals, where the coupling becomes increasingly redshifted. Remarkably, in the presence of a zero mode, this set of points "condenses" within the interval at low temperatures, yielding continuous nonlocality.

53 citations


Proceedings ArticleDOI
13 May 2019
TL;DR: SWeG, a fast parallel algorithm for summarizing graphs with compact representations, is proposed; it is designed for both shared-memory and MapReduce settings so that graphs too large to fit in main memory can be summarized.
Abstract: Given a terabyte-scale graph distributed across multiple machines, how can we summarize it, with much fewer nodes and edges, so that we can restore the original graph exactly or within error bounds? As large-scale graphs are ubiquitous, ranging from web graphs to online social networks, compactly representing graphs becomes important to efficiently store and process them. Given a graph, graph summarization aims to find its compact representation consisting of (a) a summary graph where the nodes are disjoint sets of nodes in the input graph, and each edge indicates the edges between all pairs of nodes in the two sets; and (b) edge corrections for restoring the input graph from the summary graph exactly or within error bounds. Although graph summarization is a widely-used graph-compression technique readily combinable with other techniques, existing algorithms for graph summarization are not satisfactory in terms of speed or compactness of outputs. More importantly, they assume that the input graph is small enough to fit in main memory. In this work, we propose SWeG, a fast parallel algorithm for summarizing graphs with compact representations. SWeG is designed for not only shared-memory but also MapReduce settings to summarize graphs that are too large to fit in main memory. We demonstrate that SWeG is (a) Fast: SWeG is up to 5400 × faster than its competitors that give similarly compact representations, (b) Scalable: SWeG scales to graphs with tens of billions of edges, and (c) Compact: combined with state-of-the-art compression methods, SWeG achieves up to 3.4 × better compression than them.
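To make the summary-plus-corrections representation described above concrete, here is a minimal Python sketch of the restoration step only; the data layout and function name are illustrative choices of ours, not SWeG's actual interface.

```python
from itertools import combinations

def restore(supernodes, superedges, corrections_add, corrections_del):
    """Restore the original edge set from a graph summary.

    supernodes: dict supernode-id -> set of original nodes (disjoint sets).
    superedges: set of frozensets of supernode ids; a superedge {A, B}
        stands for all edges between nodes of A and nodes of B, and a
        self-loop {A} stands for all edges inside A.
    corrections_add / corrections_del: sets of frozenset node pairs that
        patch the expanded summary back to the exact input graph.
    """
    edges = set()
    for se in superedges:
        if len(se) == 1:  # self-loop supernode: all internal pairs
            (a,) = se
            edges |= {frozenset(p) for p in combinations(supernodes[a], 2)}
        else:
            a, b = se
            edges |= {frozenset((u, v)) for u in supernodes[a] for v in supernodes[b]}
    return (edges | corrections_add) - corrections_del
```

For lossless summarization the corrections make the expansion exact; for the bounded-error variant they would only be applied until the error budget is met.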

44 citations


Journal ArticleDOI
TL;DR: In this paper, a new paradigm for stable phase retrieval was proposed by considering the problem of reconstructing F up to a phase factor that is not global, but that can be different for each of the subsets.
Abstract: The problem of phase retrieval is to determine a signal $$f\in \mathcal {H}$$ , with $$ \mathcal {H}$$ a Hilbert space, from intensity measurements $$|F(\omega )|$$ , where $$F(\omega ):=\langle f, \varphi _\omega \rangle $$ are measurements of f with respect to a measurement system $$(\varphi _\omega )_{\omega \in \Omega }\subset \mathcal {H}$$ . Although phase retrieval is always stable in the finite-dimensional setting whenever it is possible (i.e. injectivity implies stability for the inverse problem), the situation is drastically different if $$\mathcal {H}$$ is infinite-dimensional: in that case phase retrieval is never uniformly stable (Alaifari and Grohs in SIAM J Math Anal 49(3):1895–1911, 2017; Cahill et al. in Trans Am Math Soc Ser B 3(3):63–76, 2016); moreover, the stability deteriorates severely in the dimension of the problem (Cahill et al. 2016). On the other hand, all empirically observed instabilities are of a certain type: they occur whenever the function |F| of intensity measurements is concentrated on disjoint sets $$D_j\subset \Omega $$ , i.e. when $$F= \sum _{j=1}^k F_j$$ where each $$F_j$$ is concentrated on $$D_j$$ (and $$k \ge 2$$ ). Motivated by these considerations, we propose a new paradigm for stable phase retrieval by considering the problem of reconstructing F up to a phase factor that is not global, but that can be different for each of the subsets $$D_j$$ , i.e. recovering F up to the equivalence $$\begin{aligned} F \sim \sum _{j=1}^k e^{\mathrm {i}\alpha _j} F_j. \end{aligned}$$ We present concrete applications (for example in audio processing) where this new notion of stability is natural and meaningful and show that in this setting stable phase retrieval can actually be achieved, for instance, if the measurement system is a Gabor frame or a frame of Cauchy wavelets.

44 citations


Book ChapterDOI
13 Apr 2019
TL;DR: Graph burning studies how fast a contagion, modeled as a set of fires, spreads in a graph; the goal is to find a schedule that minimizes the number of rounds needed to burn the graph.
Abstract: Numerous approaches study the vulnerability of networks against social contagion. Graph burning studies how fast a contagion, modeled as a set of fires, spreads in a graph. The burning process takes place in synchronous, discrete rounds. In each round, a fire breaks out at a vertex, and the fire spreads to all vertices that are adjacent to a burning vertex. The selection of vertices where fires start defines a schedule that indicates the number of rounds required to burn all vertices. Given a graph, the objective of an algorithm is to find a schedule that minimizes the number of rounds to burn the graph. Finding the optimal schedule is known to be NP-hard, and the problem remains NP-hard when the graph is a tree or a set of disjoint paths. The only known algorithm is an approximation algorithm for disjoint paths, which has an approximation ratio of 1.5.
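The round structure described above is straightforward to simulate. The following Python sketch (the function name and the adjacency-dict representation are our own choices) counts the rounds a given ignition schedule needs to burn the whole graph:

```python
def burning_rounds(adj, schedule):
    """Simulate the graph-burning process for a given ignition schedule.

    adj: dict mapping each vertex to an iterable of its neighbours.
    schedule: list of vertices; schedule[i] is ignited in round i + 1.
    Returns the number of rounds needed to burn every vertex.
    """
    vertices = set(adj)
    burning = set()
    rounds = 0
    i = 0
    while burning != vertices:
        rounds += 1
        # fire spreads from every burning vertex to its neighbours
        new = {v for u in burning for v in adj[u]} - burning
        # a new fire breaks out at the next scheduled vertex
        if i < len(schedule):
            new.add(schedule[i])
            i += 1
        if not new:
            raise ValueError("schedule does not burn the whole graph")
        burning |= new
    return rounds
```

For example, on the path 0-1-2 the single-vertex schedule [1] burns the graph in 2 rounds, which is optimal for a path on three vertices.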

38 citations


Journal ArticleDOI
Eric Sharpe1
TL;DR: In this paper, the existence of a global one-form symmetry in two dimensions typically signals a violation of cluster decomposition, an issue resolved by the observation that such theories decompose into disjoint unions, a result that has been applied to Gromov-Witten theory and gauged linear sigma model phases.
Abstract: In this paper we discuss gauging one-form symmetries in two-dimensional theories. The existence of a global one-form symmetry in two dimensions typically signals a violation of cluster decomposition -- an issue resolved by the observation that such theories decompose into disjoint unions, a result that has been applied to, for example, Gromov-Witten theory and gauged linear sigma model phases. In this paper we describe how gauging one-form symmetries in two-dimensional theories can be used to select particular elements of that disjoint union, effectively undoing decomposition. We examine such gaugings explicitly in examples involving orbifolds, nonsupersymmetric pure Yang-Mills theories, and supersymmetric gauge theories in two dimensions. Along the way, we learn explicit concrete details of the topological configurations that path integrals sum over when gauging a one-form symmetry, and we also uncover `hidden' one-form symmetries.

37 citations


Journal ArticleDOI
TL;DR: A Hilton-Milner-type stability theorem is provided for the Erdős Matching Conjecture in a relatively wide range, in particular, for $n\ge (2+o(1))sk$ with $o(1)$ depending on $s$ only, which provides a far-reaching generalization of an important classical result of Kleitman.

37 citations


Journal ArticleDOI
TL;DR: This study presents the first exact mixed integer linear program (MILP) in the literature that can partition an arbitrary graph into a given arbitrary number of components.
Abstract: Task allocation for a multi-robot system is often subject to various constraints that promote efficiency and avoid unwanted consequences. This study focuses on the min-max balanced connected q-partition problem (BCPq), which seeks the earliest completion time of a multi-robot system while satisfying mission constraints. In the problem, each environmental region is modeled as a weighted node of a graph, and the graph is to be partitioned into a predefined number, q, of disjoint node sets, such that the nodes in each set are connected and the largest "sum of the set" is as small as possible. The problem is NP-hard in general, and has a brute-force search space exponential in the number of nodes. BCPq arises in many fields besides robotics. To date, existing work has mainly addressed special versions of the problem, such as small q or special types of graphs. In this study, we propose two approaches for the general case: one is exact and suitable for graphs of fewer than 200 nodes, while the other is approximate and can handle graphs with up to 3000 nodes. Specifically, this study presents the first exact mixed integer linear program (MILP) in the literature that can partition an arbitrary graph into a given arbitrary number of components. The MILP is based on a flow model and can solve graphs with 100 more nodes than the existing MILP, which can only tackle the q = 2 case. The study also presents an approximate genetic algorithm (GA) based on tree partition and tree evolution. It is the first GA that can handle BCPq (q ≥ 2) problems. On graphs of fewer than 500 nodes and 2-partition, the GA achieved all the optima in 20 runs. For q ≤ 12, the GA also achieves results close to the optima after only a few individuals are evolved, i.e., a gap smaller than 3% of the ideal average value. Moreover, the GA is scalable in q, as the consumed time is nearly stable and independent of q.
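Since the search space is exponential in the number of nodes, a brute-force reference solver is only feasible for tiny graphs, but it makes the min-max objective and the connectivity constraint concrete. This Python sketch (all names are ours) enumerates all q^|V| assignments:

```python
from itertools import product

def bcp_bruteforce(nodes, edges, weights, q):
    """Exhaustively solve min-max balanced connected q-partition on a tiny
    graph: try every assignment of nodes to q parts, keep those whose parts
    are all nonempty and connected, and minimise the largest part weight.
    Returns (best_cost, parts)."""
    adj = {u: set() for u in nodes}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)

    def connected(part):
        part = set(part)
        if not part:
            return False
        seen, stack = set(), [next(iter(part))]
        while stack:
            u = stack.pop()
            if u in seen:
                continue
            seen.add(u)
            stack.extend((adj[u] & part) - seen)
        return seen == part

    best = None
    for assign in product(range(q), repeat=len(nodes)):
        parts = [[n for n, a in zip(nodes, assign) if a == k] for k in range(q)]
        if all(connected(p) for p in parts):
            cost = max(sum(weights[n] for n in p) for p in parts)
            if best is None or cost < best[0]:
                best = (cost, parts)
    return best
```

On the unit-weight path 1-2-3-4 with q = 2, the optimum splits the path in the middle for a min-max cost of 2; the exact MILP and the GA in the paper target the same objective at far larger scales.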

Proceedings Article
06 Jul 2019
TL;DR: This paper demonstrates how to improve CBS with disjoint splitting and how to modify the low-level search of CBS to take maximal advantage of it; experiments show that disjoint splitting increases the success rates and speeds of CBS and its variants by up to 2 orders of magnitude.
Abstract: Multi-Agent Path Finding (MAPF) is the planning problem of finding collision-free paths for a team of agents. We focus on Conflict-Based Search (CBS), a two-level tree-search state-of-the-art MAPF algorithm. The standard splitting strategy used by CBS is not disjoint, i.e., when it splits a problem into two subproblems, some solutions are shared by both subproblems, which can create duplication of search effort. In this paper, we demonstrate how to improve CBS with disjoint splitting and how to modify the low-level search of CBS to take maximal advantage of it. Experiments show that disjoint splitting increases the success rates and speeds of CBS and its variants by up to 2 orders of magnitude.
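The contrast with standard splitting can be shown schematically: on a vertex conflict, disjoint splitting constrains a single agent positively in one child and negatively in the other, so the two children partition the solution space and no solution is explored twice. A minimal Python sketch (the `Constraint` type and function are illustrative, not the paper's actual interface):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Constraint:
    agent: int
    vertex: object
    time: int
    positive: bool  # True: agent must be at vertex at time; False: must not

def disjoint_split(conflict):
    """Given a vertex conflict (agent_i, agent_j, vertex, time), produce the
    two child constraint sets used by disjoint splitting: one child forces
    agent_i to be at the vertex at that time (which also implicitly forbids
    every other agent from being there), the other forbids agent_i."""
    i, j, v, t = conflict
    child_a = [Constraint(i, v, t, positive=True)]
    child_b = [Constraint(i, v, t, positive=False)]
    return child_a, child_b
```

Standard splitting would instead give one negative constraint to each of the two conflicting agents, and solutions satisfying both constraints are then duplicated across the children.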

Journal ArticleDOI
TL;DR: A new measurement of the ionization energy of para-H_{2} is used to link the energy-level structure of the two nuclear-spin isomers of this fundamental molecule and enables the derivation of an upper bound of 1.5 MHz for a hypothetical global shift of the energy-level structure of ortho-H_{2} with respect to that of para-H_{2}.
Abstract: Nuclear-spin-symmetry conservation makes the observation of transitions between quantum states of ortho- and para-H_{2} extremely challenging. Consequently, the energy-level structure of H_{2} derived from experiment consists of two disjoint sets of level energies, one for para-H_{2} and the other for ortho-H_{2}. We use a new measurement of the ionization energy of para-H_{2} [E_{I}(H_{2})/(hc)=124 417.491 098(31) cm^{-1}] to determine the energy separation [118.486 770(50) cm^{-1}] between the ground states of para- and ortho-H_{2} and thus link the energy-level structure of the two nuclear-spin isomers of this fundamental molecule. Comparison with recent theoretical results [M. Puchalski et al., Phys. Rev. Lett. 122, 103003 (2019)] enables the derivation of an upper bound of 1.5 MHz for a hypothetical global shift of the energy-level structure of ortho-H_{2} with respect to that of para-H_{2}.

Journal IssueDOI
01 Sep 2019
TL;DR: This work simplifies a recent result of Alweiss, Lovett, Wu and Zhang that gives an upper bound on the size of every family of sets of size $k$ that does not contain a sunflower and shows how to use the converse of Shannon's noiseless coding theorem to give a cleaner proof.
Abstract: Coding for sunflowers, Discrete Analysis 2020:2, 8 pp. The sunflower problem is an old problem of Erdős and Rado in extremal set theory. A _sunflower_ is defined to be a collection of sets $A_1,\dots,A_r$ such that if $B=\bigcap_{i=1}^rA_i$ is their intersection, then the sets $A_i\setminus B$ are disjoint. Equivalently, all the pairwise intersections $A_i\cap A_j$ (with $i\neq j$) are the same. Erdős asked how many sets of size $k$ there can be without a sunflower of size $r$. To see that there is some finite bound, observe that if one starts with $m$ sets, then either one can find $r$ of them that are disjoint, in which case one has a sunflower, or one can find a set $A$ that intersects at least $m/r-1$ of the other sets. By the pigeonhole principle, there is an element of $A$ that is contained in at least $k^{-1}(m/r-1)$ of the other sets. Removing that element from each of those sets gives a collection of sets of size $k-1$ and one can apply induction. A suitably careful version of this argument gives an upper bound on $m$ of $k!(r-1)^{k}$. Erdős and Rado conjectured that the correct bound is exponential in $k$ for fixed $r$. This problem is still open even for $r=3$. Until recently, the best known bound did not improve much on this factorial bound. The record when $r=3$, due to Kostochka, was $Ck!(\log\log\log k/\log\log k)^k$. To get a feel for this, note that $k!$ is approximately $(k/e)^k$, so Kostochka replaced the linear function $k/e$ by a function that was sublinear, but only because of a $\log\log$ factor. Then in 2019, Alweiss, Lovett, Wu and Zhang improved the bound dramatically, to one of the form $(\log k)^k(r\log\log k)^{O(k)}$, replacing Kostochka's almost linear function by a logarithmic one. If one imagines a spectrum of growth rates with exponential at one end and factorial at the other, this replaced a growth rate that was almost at the factorial end by one that was almost at the exponential end.
To achieve this, they obtained an upper bound for the number of sets of size $k$ one could find without a structure called a _robust sunflower_, which is a collection of sets $A_1,\dots,A_r$ with intersection $B$ such that if elements of $\bigcup_i(A_i\setminus B)$ are chosen independently at random with probability $\alpha$, then with probability at least $1-\beta$ there is some $i$ such that all the elements of $A_i\setminus B$ are chosen. It can be shown that if $\alpha=\beta=r^{-1}$, then a robust sunflower with parameters $\alpha$ and $\beta$ contains a sunflower of size $r$. Interestingly, their upper bound for set systems without robust sunflowers was close to sharp, so they reached the natural barrier for their method. A subsequent paper of Frankston, Kahn, Narayanan and Park removed the $\log\log k$ factor from the bound, giving a bound of $(C\log k\log(rk))^k$. The main purpose of this paper is to give a shorter and cleaner proof of the result of Alweiss, Lovett, Wu and Zhang, using the converse of Shannon's noiseless coding theorem. In fact, it achieves the same bound as Frankston, Kahn, Narayanan and Park, and also gives essentially sharp estimates for the size of a set system without a robust sunflower with parameters $\alpha$ and $\beta$, which have other applications in theoretical computer science.
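The definition above — equal pairwise intersections, equivalently pairwise-disjoint petals $A_i\setminus B$ — is easy to check directly for a given family. A minimal Python sketch (the function name is ours):

```python
from itertools import combinations

def is_sunflower(sets):
    """Check whether a family of sets is a sunflower: all pairwise
    intersections equal the common core B (the intersection of all sets),
    i.e. the petals A_i \\ B are pairwise disjoint."""
    sets = [set(s) for s in sets]
    if len(sets) < 2:
        return True
    core = set.intersection(*sets)
    petals = [s - core for s in sets]
    return all(not (p & q) for p, q in combinations(petals, 2))
```

For instance, {1,2}, {1,3}, {1,4} is a sunflower with core {1}, while {1,2}, {2,3}, {1,3} is not: its common core is empty, yet the pairwise intersections are nonempty.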

Journal ArticleDOI
TL;DR: In this article, the maximum possible number of copies of a fixed graph in an F-free graph on n vertices is investigated, where kF denotes k vertex-disjoint copies of the fixed graph F.

Journal ArticleDOI
TL;DR: The largest value of k such that the rate of PIR codes and batch codes approaches 1 is studied.
Abstract: In this paper, we study two families of codes with availability, namely, private information retrieval (PIR) codes and batch codes. While the former requires that every information symbol has $k$ mutually disjoint recovering sets, the latter imposes this property for each multiset request of $k$ information symbols. The main problem under this paradigm is to minimize the number of redundancy symbols. We denote this value by $r_{P}(n,k)$ and $r_{B}(n,k)$, for PIR codes and batch codes, respectively, where $n$ is the number of information symbols. Previous results showed that for any constant $k$, $r_{P}(n,k) = \Theta(\sqrt{n})$ and $r_{B}(n,k) = \mathcal{O}(\sqrt{n}\log(n))$. In this paper, we study the asymptotic behavior of these codes for non-constant $k$ and specifically for $k=\Theta(n^\epsilon)$. We also study the largest value of $k$ such that the rate of the codes approaches 1 and show that for all $\epsilon < 1$, $r_{P}(n,n^\epsilon) = o(n)$ and $r_{B}(n,n^\epsilon) = o(n)$. Furthermore, several more results are proved for the case of fixed $k$.

Journal ArticleDOI
TL;DR: It is shown that not only can pressurized water be transported across a stable bridge, but also that the dependence of G_{b} on the angle between the axes of two nonaligned nanochannels may be used to tune the flow rate between the two.
Abstract: Water channels are important to new purification systems, osmotic power harvesting in salinity gradients, hydroelectric voltage conversion, signal transmission, drug delivery, and many other applications. To be effective, water channels must have structures more complex than a single tube. One way of building such structures is through a water bridge between two disjoint channels that are not physically connected. We report on the results of extensive molecular dynamics simulation of water transport through such bridges between two carbon nanotubes separated by a nanogap. We show that not only can pressurized water be transported across a stable bridge, but also that (i) for a range of the gap's width ${l}_{g}$ the bridge's hydraulic conductance ${G}_{b}$ does not depend on ${l}_{g}$, (ii) the overall shape of the bridge is not cylindrical, and (iii) the dependence of ${G}_{b}$ on the angle between the axes of two nonaligned nanochannels may be used to tune the flow rate between the two.

Journal ArticleDOI
TL;DR: In this paper, a holographic construction for the entanglement negativity of bipartite mixed state configurations of two disjoint intervals in 3D conformal field theories was presented.
Abstract: We advance a holographic construction for the entanglement negativity of bipartite mixed state configurations of two disjoint intervals in $(1+1)$-dimensional conformal field theories ($CFT_{1+1}$) through the $AdS_3/CFT_2$ correspondence. Our construction constitutes the large central charge analysis of the entanglement negativity for the mixed states under consideration and involves a specific algebraic sum of bulk spacelike geodesics anchored on appropriate intervals in the dual $CFT_{1+1}$. The construction is utilized to compute the holographic entanglement negativity for such mixed states in $CFT_{1+1}$s dual to bulk pure $AdS_3$ geometries and BTZ black holes, respectively. Our analysis exactly reproduces the universal features of the corresponding replica technique results in the large central charge limit, which serves as a consistency check.

Journal ArticleDOI
TL;DR: In this article, it was shown that the second Neumann eigenvalue of the Laplace operator on smooth domains of R^N with prescribed measure m attains its maximum on the union of two disjoint balls of measure m/2.
Abstract: In this paper we prove that the second (non-trivial) Neumann eigenvalue of the Laplace operator on smooth domains of R^N with prescribed measure m attains its maximum on the union of two disjoint balls of measure m/2. As a consequence, the Pólya conjecture for the Neumann eigenvalues holds for the second eigenvalue and for arbitrary domains. We moreover prove that a relaxed form of the same inequality holds in the context of non-smooth domains and densities.

Journal ArticleDOI
TL;DR: This work proposes an ensemble-based approach, called EnCoD, that leverages the solutions produced by various disjoint community detection algorithms to discover the overlapping community structure and shows that it is generic enough to be applied to networks where the vertices are associated with explicit semantic features.
Abstract: While there has been a plethora of approaches for detecting disjoint communities from real-world complex networks, some methods for detecting overlapping community structures have also been recently proposed. In this work, we argue that, instead of developing separate approaches for detecting overlapping communities, a promising alternative is to infer the overlapping communities from multiple disjoint community structures. We propose an ensemble-based approach, called EnCoD, that leverages the solutions produced by various disjoint community detection algorithms to discover the overlapping community structure. Specifically, EnCoD generates a feature vector for each vertex from the results of the base algorithms and learns which features lead to detecting densely connected overlapping regions in an unsupervised way. It keeps iterating until the likelihood of each vertex belonging to its own community is maximized. Experiments on both synthetic and several real-world networks (with known ground-truth community structures) reveal that EnCoD significantly outperforms nine state-of-the-art overlapping community detection algorithms. Finally, we show that EnCoD is generic enough to be applied to networks where the vertices are associated with explicit semantic features. To the best of our knowledge, EnCoD is the second ensemble-based overlapping community detection approach after MEDOC (Chakraborty, 2016).
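The idea of turning several disjoint partitions into per-vertex features can be illustrated with a simple co-membership statistic — a stand-in for, not a reproduction of, EnCoD's actual feature construction:

```python
def membership_features(partitions, vertices):
    """Build a feature vector per vertex from several disjoint partitions.

    partitions: list of dicts vertex -> community label, one dict per base
        disjoint community detection algorithm.
    Returns feats[u][v] = fraction of base partitions that place u and v in
    the same community; vertices with intermediate scores against several
    groups are natural candidates for overlapping membership.
    """
    feats = {}
    for u in vertices:
        feats[u] = {
            v: sum(p[u] == p[v] for p in partitions) / len(partitions)
            for v in vertices if v != u
        }
    return feats
```

With two base partitions that disagree about vertex 2, its co-membership score with each neighbour is 0.5, signalling a vertex that plausibly belongs to both communities.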

Journal ArticleDOI
TL;DR: In this paper, the authors present some properties of orthogonality and relate them with disjoint supports and norm inequalities in p-Schatten ideals and locally uniformly convex spaces.
Abstract: We present some properties of orthogonality and relate them with disjoint supports and norm inequalities in p-Schatten ideals and locally uniformly convex spaces. Later on, we study the case when an operator is norm-parallel to the identity operator. Finally, we give some equivalence assertions about the norm-parallelism of compact operators. Some applications and generalizations are discussed for certain operators.

Posted Content
TL;DR: The authors extend Giroux's three-dimensional convex surface theory, showing that any closed hypersurface in a contact manifold can be approximated by a convex one, and deduce the existence of compatible open book decompositions for contact manifolds and an existence h-principle for codimension-2 contact submanifolds.
Abstract: We lay the foundations of convex hypersurface theory (CHT) in contact topology, extending the work of Giroux in dimension three. Specifically, we prove that any closed hypersurface in a contact manifold can be $C^0$-approximated by a convex one. We also prove that a $C^0$-generic family of mutually disjoint closed hypersurfaces parametrized by $t \in [0,1]$ is convex except at finitely many times $t_1, \dots, t_N$, and that crossing each $t_i$ corresponds to a bypass attachment. As applications of CHT, we prove the existence of compatible (relative) open book decompositions for contact manifolds and an existence h-principle for codimension 2 contact submanifolds.

Journal ArticleDOI
TL;DR: This work develops a fast algorithm for permanents over the ring Z_t[X], where t is a power of 2, by modifying Valiant's 1979 algorithm for the permanent over Z_t.
Abstract: Given an undirected graph and two pairs of vertices (s_i, t_i) for i ∈ {1, 2}, we show that there is a polynomial-time Monte Carlo algorithm that finds disjoint paths of smallest total length joining s_i and t_i for i ∈ {1, 2}, respectively, or concludes that there most likely are no such paths at all. Our algorithm applies to both the vertex- and edge-disjoint versions of the problem. It is algebraic and uses permanents over the polynomial ring Z_4[X] in combination with the isolation lemma of Mulmuley, Vazirani, and Vazirani to detect a solution. To this end, we develop a fast algorithm for permanents over the ring Z_t[X], where t is a power of 2, by modifying Valiant's 1979 algorithm for the permanent over Z_t.

Journal ArticleDOI
TL;DR: A sphere-packing approach is developed for upper bounding the parameter k of binary linear LRCs, yielding three explicit bounds that tend to outperform the C-M bound as n grows large, together with a construction of binary linear LRCs with d ≥ 6 attaining one of the bounds.
Abstract: For locally repairable codes (LRCs), Cadambe and Mazumdar derived the first field-dependent parameter bound, known as the C-M bound. However, the C-M bound depends on an undetermined parameter $k^{(q)}_{\mathrm{opt}}(n,d)$. In this paper, a sphere-packing approach is developed for upper bounding the parameter $k$ for $[n,k,d]$ linear LRCs with locality $r$. When restricted to the binary field, three upper bounds (i.e., Bound A, Bound B, and Bound C) are derived in an explicit form. More specifically, Bound A holds under the hypothesis that the local repair groups are disjoint and of equal size. Compared with previous bounds obtained under the same hypothesis, Bound A either covers them as special cases or has an advantage due to its explicit form. Then, the hypothesis is removed in Bound B and Bound C. As the price for explicit form, Bound B holds specifically for $d \ge 5$ and Bound C for $r = 2$. Through specific comparisons, we show that Bound B and Bound C both tend to outperform the C-M bound as $n$ goes large. Moreover, a family of binary linear LRCs with $d \ge 6$ attaining Bound B is constructed and later extended to a wider range of parameters by a shortening technique. Lastly, most of the bounds and constructions are extended to $q$-ary LRCs.

Journal ArticleDOI
24 Sep 2019
TL;DR: In this paper, a comprehensive review of the averaged Hausdorff distances that have recently been introduced as quality indicators in multi-objective optimization problems (MOPs) is presented.
Abstract: A brief but comprehensive review of the averaged Hausdorff distances that have recently been introduced as quality indicators in multi-objective optimization problems (MOPs) is presented. First, we introduce all the necessary preliminaries, definitions, and known properties of these distances in order to provide a state-of-the-art overview of their behavior from a theoretical point of view. The presentation treats separately the definitions of the $(p, q)$-distances $GD_{p,q}$, $IGD_{p,q}$, and $\Delta_{p,q}$ for finite sets and their generalization for arbitrary measurable sets, which covers as an important example the case of continuous sets. Among the presented results, we highlight the rigorous consideration of the metric properties of these definitions, including a proof of the triangle inequality for distances between disjoint subsets when $p, q \geq 1$, and the study of the behavior of the associated indicators with respect to the notion of compliance to Pareto optimality. Illustrations of these results in particular situations are also provided. Finally, we discuss a collection of examples and numerical results obtained for the discrete and continuous incarnations of these distances that allow for an evaluation of their usefulness in concrete situations, and for some interesting conclusions at the end justifying their use and further study.
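The abstract does not restate the $(p, q)$-definitions themselves. As a hedged sketch, the following implements the earlier single-parameter versions for finite point sets that the $(p, q)$-distances generalize: $GD_p$, its inverted variant $IGD_p(A, B) = GD_p(B, A)$, and the averaged Hausdorff distance $\Delta_p = \max(GD_p, IGD_p)$ of Schütze et al.:

```python
def gd_p(A, B, p=2):
    """Generational distance GD_p(A, B) between finite point sets:
    the p-mean, over points a in A, of the Euclidean distance
    from a to the nearest point of B."""
    def dist(u, v):
        return sum((x - y) ** 2 for x, y in zip(u, v)) ** 0.5
    return (sum(min(dist(a, b) for b in B) ** p for a in A) / len(A)) ** (1 / p)

def delta_p(A, B, p=2):
    """Averaged Hausdorff distance Delta_p = max(GD_p(A, B), GD_p(B, A)).
    Note IGD_p(A, B) is exactly GD_p(B, A)."""
    return max(gd_p(A, B, p), gd_p(B, A, p))
```

The second averaging parameter $q$ in the paper's $(p, q)$-distances refines the inner aggregation; it is omitted here since the abstract does not give its exact form.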

Journal ArticleDOI
TL;DR: A novel and energy-efficient target coverage algorithm is proposed that produces disjoint as well as non-disjoint cover sets and computes the energy-optimized minimum path from the sink to each sensor node and from each cover set to the sink.
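The TL;DR does not describe the algorithm itself. As an illustrative greedy sketch only (not the paper's method), disjoint cover sets can be formed by repeatedly building a set that covers every target and then retiring its sensors; a real scheme would also weigh residual energy and the sink-path cost mentioned above:

```python
def disjoint_cover_sets(coverage, targets):
    """Greedily partition sensors into disjoint cover sets, each of
    which covers every target.  `coverage` maps sensor -> set of
    targets it monitors.  Stops when no further full cover set can
    be assembled from the remaining sensors."""
    remaining = dict(coverage)
    cover_sets = []
    while True:
        uncovered = set(targets)
        chosen = []
        pool = dict(remaining)
        while uncovered:
            # Pick the sensor covering the most still-uncovered targets.
            best = max(pool, key=lambda s: len(pool[s] & uncovered), default=None)
            if best is None or not (pool[best] & uncovered):
                return cover_sets  # cannot complete another cover set
            chosen.append(best)
            uncovered -= pool.pop(best)
        for s in chosen:           # retire the sensors just used,
            del remaining[s]       # keeping the cover sets disjoint
        cover_sets.append(chosen)
```

Activating one cover set at a time while the others sleep is the standard way such partitions extend network lifetime.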

Journal ArticleDOI
TL;DR: In this paper, the relationship between material and geometry was investigated on a wide variety of periodic gratings, and it was shown that the material proportionality given above does indeed emerge in realistic structures, at least within the range of explored values of $\ensuremath{\chi}$.
Abstract: The super-Planckian features of radiative heat transfer in the near field are known to depend strongly on both material and geometric design properties. However, the relative importance and interplay of these two facets, and the degree to which they can be used to ultimately control energy flow, remain an open question. Recently derived bounds suggest that enhancements as large as $|\chi|^4 \lambda^2 / [(4\pi)^2\,\mathrm{Im}[\chi]^2 d^2]$ are possible between extended structures (compared to a blackbody), but geometries reaching this bound, or designs revealing the predicted material ($\chi$) scaling, are lacking. Here, exploiting inverse techniques, in combination with fast computational approaches enabled by the low-rank properties of elliptic operators for disjoint bodies, we investigate this relation between material and geometry on a wide variety of periodic gratings. Crucially, we find that the material proportionality given above does indeed emerge in realistic structures, at least within the range of explored values of $\chi$. In reaching this result, we also show that (in two dimensions) lossy metals such as tungsten, typically considered to be poor candidate materials for strongly enhancing heat transfer in the near infrared, can be structured to selectively realize flux rates that come within 50% of those exhibited by an ideal pair of resonant lossless metals for separations as small as 2% of a tunable design wavelength.
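The quoted enhancement bound is straightforward to evaluate numerically. A small helper is sketched below; any susceptibility value passed in (such as the one in the test) is hypothetical and serves only to exercise the formula:

```python
import math

def nearfield_bound(chi, wavelength, gap):
    """Upper bound on the near-field heat-transfer enhancement over a
    blackbody, as quoted in the abstract:
        |chi|^4 * lambda^2 / ((4*pi)^2 * Im[chi]^2 * d^2).
    `chi` is a complex susceptibility, `wavelength` and `gap` share units."""
    return (abs(chi) ** 4 * wavelength ** 2) / (
        (4 * math.pi) ** 2 * chi.imag ** 2 * gap ** 2
    )
```

The $1/\mathrm{Im}[\chi]^2$ factor makes the predicted scaling highly sensitive to material loss, which is why the tungsten result above is notable.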

Journal ArticleDOI
TL;DR: In this article, the determinant formula of a signed graphic Laplacian is reclaimed and shown to be determined by the maximal positive-circle-free elements, and spanning trees are equivalent to single-element order ideals.
Abstract: Restrictions of incidence-preserving path maps produce oriented hypergraphic All Minors Matrix-Tree Theorems for Laplacian and adjacency matrices. The images of these maps produce a locally signed graphic, incidence generalization of cycle covers and basic figures that correspond to incidence-k-forests. When restricted to bidirected graphs, the natural partial ordering of maps results in disjoint signed Boolean lattices whose minor calculations correspond to principal order ideals. As an application, (1) the determinant formula of a signed graphic Laplacian is recovered and shown to be determined by the maximal positive-circle-free elements, and (2) spanning trees are shown to be equivalent to single-element order ideals.
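The classical, unsigned baseline that these All Minors results generalize is Kirchhoff's matrix-tree theorem: the number of spanning trees of a graph equals any cofactor of its Laplacian $L = D - A$. A minimal sketch over adjacency matrices:

```python
def spanning_tree_count(adj):
    """Kirchhoff's matrix-tree theorem: the number of spanning trees
    equals any cofactor of the Laplacian L = D - A, computed here as
    the determinant of L with row 0 and column 0 deleted.  The minor
    of a connected graph's Laplacian is positive definite, so the
    fraction-free Bareiss elimination below needs no pivoting."""
    n = len(adj)
    L = [[(sum(adj[i]) if i == j else 0) - adj[i][j]
          for j in range(1, n)] for i in range(1, n)]
    m = len(L)
    prev = 1
    for k in range(m - 1):
        for i in range(k + 1, m):
            for j in range(k + 1, m):
                # Bareiss update keeps every intermediate value an integer.
                L[i][j] = (L[i][j] * L[k][k] - L[i][k] * L[k][j]) // prev
        prev = L[k][k]
    return L[m - 1][m - 1] if m else 1
```

The paper's signed/oriented hypergraphic theorems replace these integer cofactors with minors indexed by the Boolean lattices described above.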

Journal ArticleDOI
TL;DR: In this article, a restricted Dirichlet-to-Neumann map $\Lambda_{S,R}^{T}$ associated with the operator $\partial_t^2 - \Delta_g + A + q$ is considered, where $\Delta_g$ is the Laplace–Beltrami operator of a Riemannian manifold $(M, g)$, and $A$ and $q$ are a vector field and a function on $M$.

Journal ArticleDOI
TL;DR: The corona of two graphs G1 and G2 on disjoint sets of n1 and n2 vertices is the graph formed from one copy of G1 and n1 copies of G2.
Abstract: Let G1 and G2 be two graphs on disjoint sets of n1 and n2 vertices, respectively. The corona of graphs G1 and G2, denoted by G1 ∘ G2, is the graph formed from one copy of G1 and n1 copies of G2, where the ith vertex of G1 is joined to every vertex in the ith copy of G2.
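Under the standard definition (vertex i of G1 joined to every vertex of the i-th copy of G2), the corona is straightforward to construct; a minimal sketch on adjacency-list graphs:

```python
def corona(adj1, adj2):
    """Corona G1 o G2: one copy of G1 (n1 vertices) plus n1 copies of
    G2, joining vertex i of G1 to every vertex of the i-th copy of G2.
    Inputs are adjacency lists; the result is a dict mapping each of
    the n1 * (1 + n2) vertices to its neighbor set."""
    n1, n2 = len(adj1), len(adj2)
    g = {v: set(adj1[v]) for v in range(n1)}
    for i in range(n1):
        base = n1 + i * n2          # first vertex of the i-th copy of G2
        for u in range(n2):
            g[base + u] = {base + w for w in adj2[u]}
            g[i].add(base + u)      # join center vertex i ...
            g[base + u].add(i)      # ... to every vertex of its copy
    return g
```

For example, K2 ∘ K1 yields the path-like graph on 4 vertices in which each endpoint of K2 gains one pendant vertex.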