
Showing papers on "Disjoint sets published in 2012"


Journal ArticleDOI
TL;DR: A systematic method to extract the negativity in the ground state of a 1+1 dimensional relativistic quantum field theory is developed, using a path integral formalism to construct the partial transpose $\rho_A^{T_2}$ of the reduced density matrix of a subsystem, and applied to conformal field theories.
Abstract: We develop a systematic method to extract the negativity in the ground state of a $1+1$ dimensional relativistic quantum field theory, using a path integral formalism to construct the partial transpose $\rho_A^{T_2}$ of the reduced density matrix of a subsystem $A=A_1\cup A_2$, and introducing a replica approach to obtain its trace norm which gives the logarithmic negativity $\mathcal{E}=\ln \|\rho_A^{T_2}\|$. This is shown to reproduce standard results for a pure state. We then apply this method to conformal field theories, deriving the result $\mathcal{E}\sim(c/4)\ln[\ell_1\ell_2/(\ell_1+\ell_2)]$ for the case of two adjacent intervals of lengths $\ell_1$, $\ell_2$ in an infinite system, where $c$ is the central charge. For two disjoint intervals it depends only on the harmonic ratio of the four end points and so is manifestly scale invariant. We check our findings against exact numerical results in the harmonic chain.

456 citations


Journal ArticleDOI
TL;DR: For random collections of self-avoiding loops in two-dimensional domains, a simple and natural conformal restriction property is defined that is conjecturally satisfied by the scaling limits of interfaces in models from statistical physics.
Abstract: For random collections of self-avoiding loops in two-dimensional domains, we define a simple and natural conformal restriction property that is conjecturally satisfied by the scaling limits of interfaces in models from statistical physics. This property is basically the combination of conformal invariance and the locality of the interaction in the model. Unlike the Markov property that Schramm used to characterize SLE curves (which involves conditioning on partially generated interfaces up to arbitrary stopping times), this property only involves conditioning on entire loops and thus appears at first glance to be weaker. Our first main result is that there exists exactly a one-dimensional family of random loop collections with this property, one for each κ ∈ (8/3, 4], and that the loops are forms of SLE_κ. The proof proceeds in two steps. First, uniqueness is established by showing that every such loop ensemble can be generated by an "exploration" process based on SLE. Second, existence is obtained using the two-dimensional Brownian loop-soup, which is a Poissonian random collection of loops in a planar domain. When the intensity parameter c of the loop-soup is less than 1, we show that the outer boundaries of the loop clusters are disjoint simple loops (when c > 1 there is almost surely only one cluster) that satisfy the conformal restriction axioms. We prove various results about loop-soups, cluster sizes, and the c = 1 phase transition. Taken together, our results imply that these families are equivalent.

239 citations


Journal ArticleDOI
TL;DR: In this paper, a systematic approach for the calculation of the negativity in the ground state of a one-dimensional quantum field theory is presented, where the partial transpose rho_A^{T_2} of the reduced density matrix of a subsystem A=A_1 U A_2 is explicitly constructed as an imaginary-time path integral and from this the replicated traces Tr (rho_A^{T_2})^n are obtained.
Abstract: We report on a systematic approach for the calculation of the negativity in the ground state of a one-dimensional quantum field theory. The partial transpose rho_A^{T_2} of the reduced density matrix of a subsystem A=A_1 U A_2 is explicitly constructed as an imaginary-time path integral and from this the replicated traces Tr (rho_A^{T_2})^n are obtained. The logarithmic negativity E= log||rho_A^{T_2}|| is then the continuation to n->1 of the traces of the even powers. For pure states, this procedure reproduces the known results. We then apply this method to conformally invariant field theories in several different physical situations for infinite and finite systems and without or with boundaries. In particular, in the case of two adjacent intervals of lengths L1, L2 in an infinite system, we derive the result E\sim(c/4) ln(L1 L2/(L1+L2)), where c is the central charge. For the more complicated case of two disjoint intervals, we show that the negativity depends only on the harmonic ratio of the four end-points and so is manifestly scale invariant. We explicitly calculate the scale-invariant functions for the replicated traces in the case of the CFT for the free compactified boson, but we have not so far been able to obtain the n->1 continuation for the negativity even in the limit of large compactification radius. We have checked all our findings against exact numerical results for the harmonic chain which is described by a non-compactified free boson.
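The leading-order adjacent-interval formula quoted in this abstract is easy to evaluate numerically; a minimal sketch (the non-universal additive constant is omitted, and the function name is mine):

```python
from math import log

def log_negativity_adjacent(c, l1, l2):
    """Leading CFT prediction E ~ (c/4) * ln(l1*l2/(l1+l2)) for two
    adjacent intervals of lengths l1, l2 in an infinite system.
    The non-universal additive constant is dropped."""
    return (c / 4.0) * log(l1 * l2 / (l1 + l2))

# Free boson (c = 1), two equal intervals of length 100:
# E ~ (1/4) * ln(100*100/200) = (1/4) * ln(50)
print(round(log_negativity_adjacent(1.0, 100, 100), 4))
```

For equal intervals of length L this reduces to (c/4) ln(L/2), so the negativity grows only logarithmically with the interval size.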

218 citations


Journal ArticleDOI
TL;DR: The time complexity of all algorithms whose most expensive part depends on Robertson and Seymour's algorithm can be improved to O(n^2), for example the membership testing for a minor-closed class of graphs.

216 citations


Book
01 Jan 2012
TL;DR: Introduction Algorithm Analysis.
Abstract: Introduction Algorithm Analysis Lists, Stacks, and Queues Trees Hashing Priority Queues Sorting The Disjoint Set ADT Graph Algorithms Algorithm Design Techniques Amortized Analysis Advanced Data Structures and Implementation
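The "Disjoint Set ADT" chapter in this table of contents refers to the union-find structure; a minimal illustrative sketch (not the book's own code), with union by rank and path compression giving near-constant amortized cost per operation:

```python
class DisjointSet:
    """Union-find with path compression and union by rank."""

    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, x):
        # Path halving: point x at its grandparent while walking up.
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False  # already in the same set
        if self.rank[ra] < self.rank[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra  # attach shorter tree under taller one
        if self.rank[ra] == self.rank[rb]:
            self.rank[ra] += 1
        return True

ds = DisjointSet(5)
ds.union(0, 1)
ds.union(1, 2)
print(ds.find(0) == ds.find(2))  # True
print(ds.find(3) == ds.find(4))  # False
```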

200 citations


Journal ArticleDOI
TL;DR: The results extend the existing work on complex contagions in several directions by providing solutions for coupled random networks whose vertices are neither identical nor disjoint, and showing that content-dependent propagation over a multiplex network leads to a subtle relation between the giant vulnerable component of the graph and the global cascade condition that is not seen in the existing models in the literature.
Abstract: We study the diffusion of influence in random multiplex networks where links can be of $r$ different types, and, for a given content (e.g., rumor, product, or political view), each link type is associated with a content-dependent parameter ${c}_{i}$ in $[0,\ensuremath{\infty}]$ that measures the relative bias type $i$ links have in spreading this content. In this setting, we propose a linear threshold model of contagion where nodes switch state if their ``perceived'' proportion of active neighbors exceeds a threshold $\ensuremath{\tau}$. Namely a node connected to ${m}_{i}$ active neighbors and ${k}_{i}\ensuremath{-}{m}_{i}$ inactive neighbors via type $i$ links will turn active if $\ensuremath{\sum}{c}_{i}{m}_{i}/\ensuremath{\sum}{c}_{i}{k}_{i}$ exceeds its threshold $\ensuremath{\tau}$. Under this model, we obtain the condition, probability and expected size of global spreading events. Our results extend the existing work on complex contagions in several directions by (i) providing solutions for coupled random networks whose vertices are neither identical nor disjoint, (ii) highlighting the effect of content on the dynamics of complex contagions, and (iii) showing that content-dependent propagation over a multiplex network leads to a subtle relation between the giant vulnerable component of the graph and the global cascade condition that is not seen in the existing models in the literature.
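The switching rule in the abstract can be transcribed directly: a node turns active when its weighted fraction of active neighbors, Σ c_i m_i / Σ c_i k_i, exceeds the threshold τ. A small sketch (function names are my own):

```python
def perceived_active_fraction(c, m, k):
    """c[i]: content-dependent weight of link type i; m[i]: active
    neighbors reached via type-i links; k[i]: all neighbors via
    type-i links. Returns the node's 'perceived' active fraction."""
    num = sum(ci * mi for ci, mi in zip(c, m))
    den = sum(ci * ki for ci, ki in zip(c, k))
    return num / den

def turns_active(c, m, k, tau):
    """Linear threshold rule: switch state iff the perceived
    fraction strictly exceeds the threshold tau."""
    return perceived_active_fraction(c, m, k) > tau

# Two link types; type-1 links carry twice the weight for this content.
# Perceived fraction = (2*2 + 1*0) / (2*3 + 1*3) = 4/9 > 0.4
print(turns_active(c=[2.0, 1.0], m=[2, 0], k=[3, 3], tau=0.4))  # True
```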

193 citations


Journal ArticleDOI
TL;DR: A new description of structured LDPC codes whose parity-check matrices are arrays of permutation matrices that have disjoint support and to derive a simple necessary and sufficient condition for the Tanner graph of a code to be free of four cycles.
Abstract: We present a method to construct low-density parity-check (LDPC) codes with low error floors on the binary symmetric channel. Codes are constructed so that their Tanner graphs are free of certain small trapping sets. These trapping sets are selected from the trapping set ontology for the Gallager A/B decoder. They are selected based on their relative harmfulness for a given decoding algorithm. We evaluate the relative harmfulness of different trapping sets for the sum-product algorithm by using the topological relations among them and by analyzing the decoding failures on one trapping set in the presence or absence of other trapping sets. We apply this method to construct structured LDPC codes. To facilitate the discussion, we give a new description of structured LDPC codes whose parity-check matrices are arrays of permutation matrices. This description uses Latin squares to define a set of permutation matrices that have disjoint support and to derive a simple necessary and sufficient condition for the Tanner graph of a code to be free of four cycles.
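The four-cycle condition mentioned at the end of the abstract has a simple combinatorial form: a Tanner graph is free of four-cycles exactly when no two columns of the parity-check matrix H share 1s in two or more rows. A brute-force check of that condition (illustrative only, not the paper's Latin-square construction):

```python
import itertools

def free_of_four_cycles(H):
    """True iff the Tanner graph of parity-check matrix H (list of
    rows over {0,1}) contains no 4-cycle, i.e. no pair of columns
    shares a 1 in two or more rows."""
    cols = list(zip(*H))
    for c1, c2 in itertools.combinations(cols, 2):
        overlap = sum(1 for a, b in zip(c1, c2) if a == 1 and b == 1)
        if overlap >= 2:
            return False
    return True

H_good = [[1, 1, 0],
          [1, 0, 1],
          [0, 1, 1]]   # every column pair overlaps in exactly one row
H_bad  = [[1, 1, 0],
          [1, 1, 1],
          [0, 0, 1]]   # columns 1 and 2 overlap in rows 1 and 2
print(free_of_four_cycles(H_good))  # True
print(free_of_four_cycles(H_bad))   # False
```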

129 citations


Journal ArticleDOI
TL;DR: Nakanishi and Schlag as mentioned in this paper showed that the initial data set splits into nine nonempty, pairwise disjoint regions which are characterized by the distinct behaviors of the solution for large time: blowup, scattering to 0, or scattering to the family of ground states generated by the phase and scaling freedom.
Abstract: We extend the result in Nakanishi and Schlag (J Differ Equ 250:2299–2333, 2011) on the nonlinear Klein–Gordon equation to the nonlinear Schrödinger equation with the focusing cubic nonlinearity in three dimensions, for radial data of energy at most slightly above that of the ground state. We prove that the initial data set splits into nine nonempty, pairwise disjoint regions which are characterized by the distinct behaviors of the solution for large time: blow-up, scattering to 0, or scattering to the family of ground states generated by the phase and scaling freedom. Solutions of this latter type form a smooth center-stable manifold, which contains the ground states and separates the phase space locally into two connected regions exhibiting blow-up and scattering to 0, respectively. The special solutions found by Duyckaerts and Roudenko (Rev Mat Iberoam 26(1):1–56, 2010), following the seminal work on threshold solutions by Duyckaerts and Merle (Geom Funct Anal 18(6):1787–1840, 2009), appear here as the unique one-dimensional unstable/stable manifolds emanating from the ground states. In analogy with Nakanishi and Schlag (J Differ Equ 250:2299–2333, 2011), the proof combines the hyperbolic dynamics near the ground states with the variational structure away from them. The main technical ingredient in the proof is a "one-pass" theorem which precludes "almost homoclinic orbits", i.e., those solutions starting in, then moving away from, and finally returning to, a small neighborhood of the ground states. The main new difficulty compared with the Klein–Gordon case is the lack of finite propagation speed. We need the radial Sobolev inequality for the error estimate in the virial argument. Another major difference between Nakanishi and Schlag (J Differ Equ 250:2299–2333, 2011) and this paper is the need to control two modulation parameters.

120 citations


Journal ArticleDOI
TL;DR: A bird's-eye view is given of recent developments in the study of the nearness of sets, covering problems based on human perception as well as engineering and science problems.
Abstract: Near sets are disjoint sets that resemble each other. Resemblance is determined by considering set descriptions defined by feature vectors (n-dimensional vectors of numerical features that represent characteristics of objects such as digital image pixels). Near sets are useful in solving problems based on human perception [44, 76, 49, 51, 56] that arise in areas such as image analysis [52, 14, 41, 48, 17, 18], image processing [41], face recognition [13], ethology [63], as well as engineering and science problems [53, 63, 44, 19, 17, 18]. As an illustration of the degree of nearness between two sets, consider an example of the Henry color model for varying degrees of nearness between sets [17, §4.3]. The two pairs of ovals in Figures 1 and 2 contain colored segments. Each segment in the figures corresponds to an equivalence class where all pixels in the class have matching descriptions, i.e., pixels with matching colors. Thus, the ovals in Figure 1 are closer (more near) to each other in terms of their descriptions than the ovals in Figure 2. It is the purpose of this article to give a bird’s-eye view of recent developments in the study of the nearness of sets.
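The oval example above can be mimicked in a toy sketch: group elements into equivalence classes by their description (here, just a color), and call two disjoint sets near when they share a description class. All names below are mine, not the article's:

```python
def description_classes(pixels):
    """Group pixels into equivalence classes by their description
    (here simply a color label); matching descriptions land in the
    same class."""
    classes = {}
    for px, color in pixels:
        classes.setdefault(color, set()).add(px)
    return classes

def are_near(X, Y):
    """Toy nearness test: two disjoint sets resemble each other if
    they share at least one description class."""
    return bool(set(description_classes(X)) & set(description_classes(Y)))

oval1 = [((0, 0), "red"), ((0, 1), "green")]
oval2 = [((5, 0), "red"), ((5, 1), "blue")]
oval3 = [((9, 0), "yellow")]
print(are_near(oval1, oval2))  # True  (shared "red" class)
print(are_near(oval1, oval3))  # False (no matching descriptions)
```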

117 citations


Journal ArticleDOI
TL;DR: This paper verifies the conjecture that for any $t < \frac{n}{3k^2}$, every k-uniform hypergraph on n vertices without t disjoint edges has at most max$\{\binom{kt-1}{k}, \binom{n}{k}-\binom{n-t+1}{k}\}$ edges.
Abstract: More than forty years ago, Erdős conjectured that for any $t \leq \frac{n}{k}$, every k-uniform hypergraph on n vertices without t disjoint edges has at most max$\{\binom{kt-1}{k}, \binom{n}{k}-\binom{n-t+1}{k}\}$ edges. Although this appears to be a basic instance of the hypergraph Turán problem (with a t-edge matching as the excluded hypergraph), progress on this question has remained elusive. In this paper, we verify this conjecture for all $t < \frac{n}{3k^2}$. This improves upon the best previously known range $t = O\bigl(\frac{n}{k^3}\bigr)$, which dates back to the 1970s.
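The conjectured extremal value is the larger of two candidate constructions: a clique on kt−1 vertices (too few vertices for t disjoint edges) and the family of all edges meeting a fixed set of t−1 vertices. It is easy to evaluate directly; a small sketch (function name is mine):

```python
from math import comb

def erdos_matching_bound(n, k, t):
    """Conjectured maximum number of edges in a k-uniform hypergraph
    on n vertices with no t pairwise disjoint edges:
    max( C(kt-1, k),  C(n, k) - C(n-t+1, k) )."""
    return max(comb(k * t - 1, k), comb(n, k) - comb(n - t + 1, k))

# k = 2 (ordinary graphs), n = 10, t = 3 (no matching of size 3):
# max( C(5,2), C(10,2) - C(8,2) ) = max(10, 45 - 28) = 17
print(erdos_matching_bound(10, 2, 3))  # 17
```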

106 citations


Proceedings Article
26 Jun 2012
TL;DR: This work proposes a model in which objects are characterised by a latent feature vector; it achieves significantly improved predictive performance on social and biological link prediction tasks and indicates that models with a single-layer hierarchy over-simplify real networks.
Abstract: Latent variable models for network data extract a summary of the relational structure underlying an observed network. The simplest possible models subdivide nodes of the network into clusters; the probability of a link between any two nodes then depends only on their cluster assignment. Currently available models can be classified by whether clusters are disjoint or are allowed to overlap. These models can explain a "flat" clustering structure. Hierarchical Bayesian models provide a natural approach to capture more complex dependencies. We propose a model in which objects are characterised by a latent feature vector. Each feature is itself partitioned into disjoint groups (subclusters), corresponding to a second layer of hierarchy. In experimental comparisons, the model achieves significantly improved predictive performance on social and biological link prediction tasks. The results indicate that models with a single layer hierarchy over-simplify real networks.

Journal ArticleDOI
TL;DR: In this paper, the authors studied the entanglement and Renyi entropies of two disjoint intervals in minimal models of conformal field theory, and used the conformal block expansion and fusion rules to define a systematic expansion in the elliptic parameter of the trace of the nth power of the reduced density matrix.
Abstract: We study the entanglement and Renyi entropies of two disjoint intervals in minimal models of conformal field theory. We use the conformal block expansion and fusion rules of twist fields to define a systematic expansion in the elliptic parameter of the trace of the nth power of the reduced density matrix. Keeping only the first few terms we obtain an approximate expression that is easily analytically continued to , leading to an approximate formula for the entanglement entropy. These predictions are checked against some known exact results as well as against existing numerical data.

Journal ArticleDOI
TL;DR: It is proved that the two defining criteria for defining the nonclassicality of bipartite bosonic quantum systems are maximally inequivalent, and suggested that there are other quantum correlations in nature than those revealed by entanglement and quantum discord.
Abstract: We consider two celebrated criteria for defining the nonclassicality of bipartite bosonic quantum systems, the first stemming from information theoretic concepts and the second from physical constraints on the quantum phase space. Consequently, two sets of allegedly classical states are singled out: (i) the set C composed of the so-called classical-classical (CC) states—separable states that are locally distinguishable and do not possess quantum discord; (ii) the set P of states endowed with a positive P representation (P-classical states)—mixtures of Glauber coherent states that, e.g., fail to show negativity of their Wigner function. By showing that C and P are almost disjoint, we prove that the two defining criteria are maximally inequivalent. Thus, the notions of classicality that they put forward are radically different. In particular, generic CC states show quantumness in their P representation, and vice versa, almost all P-classical states have positive quantum discord and, hence, are not CC. This inequivalence is further elucidated considering different applications of P-classical and CC states. Our results suggest that there are other quantum correlations in nature than those revealed by entanglement and quantum discord.

Proceedings ArticleDOI
14 May 2012
TL;DR: This article proposes a novel registration algorithm, based on the distance between Three-Dimensional Normal Distributions Transforms, which is evaluated and shown to be more accurate and faster, compared to a state of the art implementation of the Iterative Closest Point and 3D-NDT Point-to-Distribution algorithms.
Abstract: Point set registration, the task of finding the best fitting alignment between two sets of point samples, is an important problem in mobile robotics. This article proposes a novel registration algorithm, based on the distance between Three-Dimensional Normal Distributions Transforms. 3D-NDT models, a sub-class of Gaussian Mixture Models with uniformly weighted, largely disjoint components, can be quickly computed from range point data. The proposed algorithm constructs 3D-NDT representations of the input point sets and then formulates an objective function based on the L2 distance between the considered models. Analytic first and second order derivatives of the objective function are computed and used in a standard Newton method optimization scheme, to obtain the best-fitting transformation. The proposed algorithm is evaluated and shown to be more accurate and faster, compared to a state of the art implementation of the Iterative Closest Point and 3D-NDT Point-to-Distribution algorithms.

Journal ArticleDOI
TL;DR: In this paper, it is shown that observations of the random interlacement I^u restricted to two disjoint subsets A_1 and A_2 are approximately independent after a small sprinkling, even when the mutual distance between A_1 and A_2 is much smaller than their diameters; as an application, above a critical threshold the probability of having long paths that avoid I^u is exponentially small, with logarithmic corrections for d = 3.
Abstract: In this paper we establish a decoupling feature of the random interlacement process I^u in Z^d, at level u, for d \geq 3. Roughly speaking, we show that observations of I^u restricted to two disjoint subsets A_1 and A_2 of Z^d are approximately independent, once we add a sprinkling to the process I^u by slightly increasing the parameter u. Our results differ from previous ones in that we allow the mutual distance between the sets A_1 and A_2 to be much smaller than their diameters. We then provide an important application of this decoupling for which such flexibility is crucial. More precisely, we prove that, above a certain critical threshold u**, the probability of having long paths that avoid I^u is exponentially small, with logarithmic corrections for d=3. To obtain the above decoupling, we first develop a general method for comparing the trace left by two Markov chains on the same state space. This method is based in what we call the soft local time of a chain. In another crucial step towards our main result, we also prove that any discrete set can be "smoothened" into a slightly enlarged discrete set, for which its equilibrium measure behaves in a regular way. Both these auxiliary results are interesting in themselves and are presented independently from the rest of the paper.

Journal ArticleDOI
TL;DR: The present paper is a revised version of Bachler et al. (2010) and includes the proofs of correctness and termination of the decomposition algorithm and illustrates the algorithm with further instructive examples and describes its Maple implementation.

Journal ArticleDOI
TL;DR: In this article, the existence of d-universal functions of exponential type zero for arbitrary finite tuples of pairwise distinct translation operators was shown for arbitrary separable infinite-dimensional Frechet spaces.

Journal ArticleDOI
TL;DR: In this paper, the authors explored the usage of classification approaches in order to facilitate the accurate estimation of probabilistic constraints in optimization problems under uncertainty, and the efficiency of the proposed framework is achieved with the combination of a conventional topology optimization method and a classification approach.
Abstract: This research explores the usage of classification approaches in order to facilitate the accurate estimation of probabilistic constraints in optimization problems under uncertainty. The efficiency of the proposed framework is achieved with the combination of a conventional topology optimization method and a classification approach- namely, probabilistic neural networks (PNN). Specifically, the implemented framework using PNN is useful in the case of highly nonlinear or disjoint failure domain problems. The effectiveness of the proposed framework is demonstrated with three examples. The first example deals with the estimation of the limit state function in the case of disjoint failure domains. The second example shows the efficacy of the proposed method in the design of stiffest structure through the topology optimization process with the consideration of random field inputs and disjoint failure phenomenon, such as buckling. The third example demonstrates the applicability of the proposed method in a practical engineering problem.

Proceedings ArticleDOI
19 May 2012
TL;DR: Two constructions of (very) dense graphs which are edge disjoint unions of large induced matchings are described, which disproves (in a strong form) a conjecture of Meshulam, substantially improves a result of Birk, Linial andMeshulam on communicating over a shared channel, and extends the analysis of Hastad and Wigderson of the graph test of Samorodnitsky and Trevisan for linearity.
Abstract: We describe two constructions of (very) dense graphs which are edge disjoint unions of large induced matchings. The first construction exhibits graphs on N vertices with $\binom{N}{2}-o(N^2)$ edges, which can be decomposed into pairwise disjoint induced matchings, each of size $N^{1-o(1)}$. The second construction provides a covering of all edges of the complete graph $K_N$ by two graphs, each being the edge disjoint union of at most $N^{2-\delta}$ induced matchings, where $\delta>0.076$. This disproves (in a strong form) a conjecture of Meshulam, substantially improves a result of Birk, Linial and Meshulam on communicating over a shared channel, and (slightly) extends the analysis of Hastad and Wigderson of the graph test of Samorodnitsky and Trevisan for linearity. Additionally, our constructions settle a combinatorial question of Vempala regarding a candidate rounding scheme for the directed Steiner tree problem.

Journal ArticleDOI
TL;DR: This work considers the following generalization of the problem: fix a subset S of vertices of G; it shows that again there exists a constant c such that G either contains k disjoint S-cycles, or there is a set of at most ck log k vertices intersecting every S-cycle.

Journal ArticleDOI
TL;DR: A ‘selective’ RDA is introduced, which preferentially identifies critical disjoint cut sets and link sets to calculate the probabilities of network disconnection events with a significantly reduced number of identified sets and a graph decomposition scheme based on the probabilities associated with the subgraphs.
Abstract: SUMMARY For effective hazard mitigation planning and prompt-but-prudent post-disaster responses, it is essential to evaluate the reliability of infrastructure networks accurately and efficiently. A nonsimulation-based algorithm, termed as a recursive decomposition algorithm (RDA), was recently proposed to identify disjoint cut sets and link sets and to compute the network reliability. This paper introduces a ‘selective’ RDA, which preferentially identifies critical disjoint cut sets and link sets to calculate the probabilities of network disconnection events with a significantly reduced number of identified sets. To this end, the original RDA is improved by replacing the shortest path algorithm with an algorithm that identifies the most reliable path, and by using a graph decomposition scheme based on the probabilities associated with the subgraphs. The critical sets identified by the algorithm are also used to compute conditional probability-based importance measures that quantify the relative importance of network components by their contributions to network disconnection events. This paper also introduces a risk assessment framework for lifeline networks based on the use of the selective RDA, which can consider both interevent and intraevent uncertainties of spatially correlated ground motions. The risk assessment framework and the selective RDA are demonstrated by a hypothetical network example, and the gas and water transmission networks of Shelby County in Tennessee, USA. The examples show that the proposed framework and the selective RDA greatly improve efficiency of risk assessment of complex lifeline networks, which are characterized by a large number of components, complex network topology, and statistical dependence between component failures. Copyright © 2012 John Wiley & Sons, Ltd.
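For perspective on what a (selective) RDA computes, here is the brute-force baseline such decomposition algorithms are designed to avoid: exact two-terminal reliability by enumerating every edge state, which is exponential in the number of edges and feasible only for tiny networks (illustrative sketch, not the paper's algorithm):

```python
from itertools import product

def two_terminal_reliability(nodes, edges, p, s, t):
    """Exact probability that s and t stay connected when edge e
    survives independently with probability p[e].
    edges: list of (u, v) pairs. Exponential in len(edges)."""
    total = 0.0
    for states in product([0, 1], repeat=len(edges)):
        prob = 1.0
        for e, up in enumerate(states):
            prob *= p[e] if up else (1 - p[e])
        # Connectivity check on the surviving subgraph via DFS.
        adj = {v: [] for v in nodes}
        for (u, v), up in zip(edges, states):
            if up:
                adj[u].append(v)
                adj[v].append(u)
        seen, stack = {s}, [s]
        while stack:
            for w in adj[stack.pop()]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        if t in seen:
            total += prob
    return total

# Two parallel edges from node 0 to node 1, each surviving with
# probability 0.9: reliability = 1 - 0.1 * 0.1 = 0.99
print(round(two_terminal_reliability([0, 1], [(0, 1), (0, 1)],
                                     [0.9, 0.9], 0, 1), 6))
```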

Journal ArticleDOI
TL;DR: In this paper, the authors propose a different association of classical and quantum quantities that renders classical theory a natural subset of quantum theory letting them coexist as required, and this proposal also shines light on alternative linking assignments of classical quantities that offer different perspectives on the very meaning of quantization.
Abstract: Although classical mechanics and quantum mechanics are separate disciplines, we live in a world where Planck’s constant ℏ > 0, meaning that the classical and quantum world views must actually coexist. Traditionally, canonical quantization procedures postulate a direct linking of various c-number and q-number quantities that lie in disjoint realms, along with the quite distinct interpretations given to each realm. In this paper we propose a different association of classical and quantum quantities that renders classical theory a natural subset of quantum theory letting them coexist as required. This proposal also shines light on alternative linking assignments of classical and quantum quantities that offer different perspectives on the very meaning of quantization. In this paper we focus on elaborating the general principles, while elsewhere we have published several examples of what this alternative viewpoint can achieve; these examples include removal of singularities in classical solutions to certain models, and an alternative quantization of several field theory models that are trivial when quantized by traditional methods but become well defined and nontrivial when viewed from the new viewpoint.

Journal ArticleDOI
TL;DR: This work proves a formula for the maximum number of edges in a k-uniform n-vertex hypergraph without a matching of size s and proves this conjecture for k = 3 and all s ≥ 1 and n ≥ 4s.
Abstract: In 1965 Erdős conjectured a formula for the maximum number of edges in a k-uniform n-vertex hypergraph without a matching of size s. We prove this conjecture for k = 3 and all s ≥ 1 and n ≥ 4s.

Journal ArticleDOI
TL;DR: With the use of the double bootstrap and Fisher's exact test (two-tailed), the auditing of concepts, and especially of roots of overlapping partial-areas, is shown to yield a statistically significantly higher proportion of errors.

Journal ArticleDOI
TL;DR: In this paper, it was shown that there exists a complete, proper minimal immersion f : M → D, which can be chosen so that the limit sets of distinct ends of M are disjoint connected compact sets in ∂ D.

Journal ArticleDOI
01 Jan 2012-EPL
TL;DR: In this article, the authors studied the entanglement of two disjoint blocks in spin chains obtained by merging solvable models, such as XX and quantum Ising models, focusing on the universal quantities that can be extracted from the Renyi entropies Sα.
Abstract: We study the entanglement of two disjoint blocks in spin chains obtained by merging solvable models, such as XX and quantum Ising models. We focus on the universal quantities that can be extracted from the Renyi entropies Sα. The most important information is encoded in some functions denoted by Fα. We compute F2 and we show that Fα→1 and FvN, corresponding to the von Neumann entropy, can be negative, in contrast to what has been observed in all models examined so far. An exact relation between the entanglement of disjoint subsystems in the XX model and that in a chain embodying two quantum Ising models is a by-product of our investigations.

Journal ArticleDOI
TL;DR: An algorithm for the time-dependent shortest paths problem with piece-wise linear link-cost functions is presented that runs in time O((F_d+γ)(|E|+|V| log |V|)), where F_d is the output size and γ is the input size; the variant with availability intervals is solved in O(λ(|E|+|V| log |V|)) time, where λ denotes the total number of availability intervals in the entire network.
Abstract: In this paper, we study the time-dependent shortest paths problem for two types of time-dependent FIFO networks. First, we consider networks where the availability of links, given by a set of disjoint time intervals for each link, changes over time. Here, each interval is assigned a non-negative real value which represents the travel time on the link during the corresponding interval. The resulting shortest path problem is the time-dependent shortest path problem for availability intervals ($\mathcal{TDSP}_{\mathrm{int}}$ ), which asks to compute all shortest paths to any (or all) destination node(s) d for all possible start times at a given source node s. Second, we study time-dependent networks where the cost of using a link is given by a non-decreasing piece-wise linear function of a real-valued argument. Here, each piece-wise linear function represents the travel time on the link based on the time when the link is used. The resulting shortest paths problem is the time-dependent shortest path problem for piece-wise linear functions ($\mathcal{TDSP}_{\mathrm{lin}}$ ) which asks to compute, for a given source node s and destination d, the shortest paths from s to d, for all possible starting times. We present an algorithm for the $\mathcal{TDSP}_{\mathrm{lin}}$ problem that runs in time O((F d +γ)(|E|+|V|log |V|)) where F d is the output size (i.e., number of linear pieces needed to represent the earliest arrival time function to d) and γ is the input size (i.e., number of linear pieces needed to represent the local earliest arrival time functions for all links in the network). We then solve the $\mathcal{TDSP}_{\mathrm{int}}$ problem in O(λ(|E|+|V|log |V|)) time by reducing it to an instance of the $\mathcal{TDSP}_{\mathrm{lin}}$ problem. Here, λ denotes the total number of availability intervals in the entire network. Both methods improve significantly on the previously known algorithms.
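For a single fixed departure time, earliest arrivals in a FIFO time-dependent network follow from an ordinary Dijkstra run that evaluates each link's travel time at the moment the link is entered; a minimal sketch (the paper's algorithms compute answers for all departure times at once, which this does not):

```python
import heapq

def td_dijkstra(adj, source, t0):
    """Earliest arrival times from `source` departing at time t0.
    adj[u] is a list of (v, travel) pairs, where travel(t) is the
    link's travel time when entered at time t. Assumes the FIFO
    property (t + travel(t) is non-decreasing in t)."""
    arrival = {source: t0}
    pq = [(t0, source)]
    while pq:
        t, u = heapq.heappop(pq)
        if t > arrival.get(u, float("inf")):
            continue  # stale queue entry
        for v, travel in adj.get(u, []):
            t_new = t + travel(t)
            if t_new < arrival.get(v, float("inf")):
                arrival[v] = t_new
                heapq.heappush(pq, (t_new, v))
    return arrival

# Piece-wise constant travel times: link (0,1) is slow before t = 5.
adj = {0: [(1, lambda t: 10 if t < 5 else 2)],
       1: [(2, lambda t: 1)]}
print(td_dijkstra(adj, 0, 6))  # {0: 6, 1: 8, 2: 9}
```

Departing at t = 6 uses the fast regime of link (0,1); departing at t = 0 would use the slow regime and arrive at node 2 at time 11.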

Posted Content
TL;DR: In this article, it has been shown that these two statements are actually equivalent and moreover, they both are undecidable, which is the first time in which one encounters an undecidability proposition in the recently coined theory of lineability.
Abstract: Recently it has been proved that, assuming that there is an almost disjoint family of cardinality $2^{\mathfrak c}$ in $\mathfrak c$ (which is assured, for instance, by either Martin's Axiom, or CH, or even $2^{<\mathfrak c}=\mathfrak c$), one has that the set of Sierpiński–Zygmund functions is $2^{\mathfrak c}$-strongly algebrable (and, thus, $2^{\mathfrak c}$-lineable). Here we prove that these two statements are actually equivalent and, moreover, they both are undecidable. This would be the first time in which one encounters an undecidable proposition in the recently coined theory of lineability.

Proceedings ArticleDOI
14 May 2012
TL;DR: This paper iteratively builds a constructive proof that two configurations lie in disjoint components of the free configuration space; the algorithm first generates samples that correspond to configurations for which the robot is in collision with an obstacle.
Abstract: In this paper, we address the problem determining the connectivity of a robot's free configuration space. Our method iteratively builds a constructive proof that two configurations lie in disjoint components of the free configuration space. Our algorithm first generates samples that correspond to configurations for which the robot is in collision with an obstacle. These samples are then weighted by their generalized penetration distance, and used to construct alpha shapes. The alpha shape defines a collection of simplices that are fully contained within the configuration space obstacle region. These simplices can be used to quickly solve connectivity queries, which in turn can be used to define termination conditions for sampling-based planners. Such planners, while typically either resolution complete or probabilistically complete, are not able to determine when a path does not exist, and therefore would otherwise rely on heuristics to determine when the search for a free path should be abandoned. An implementation of the algorithm is provided for the case of a 3D Euclidean configuration space, and a proof of correctness is provided.

Posted Content
TL;DR: In this article, the authors showed an O(poly log k)-approximation algorithm for EDPwC with congestion c=2, by rounding the standard multi-commodity flow relaxation of the problem.
Abstract: In the Edge-Disjoint Paths with Congestion problem (EDPwC), we are given an undirected n-vertex graph G, a collection M={(s_1,t_1),...,(s_k,t_k)} of demand pairs and an integer c. The goal is to connect the maximum possible number of the demand pairs by paths, so that the maximum edge congestion - the number of paths sharing any edge - is bounded by c. When the maximum allowed congestion is c=1, this is the classical Edge-Disjoint Paths problem (EDP). The best current approximation algorithm for EDP achieves an $O(\sqrt n)$-approximation, by rounding the standard multi-commodity flow relaxation of the problem. This matches the $\Omega(\sqrt n)$ lower bound on the integrality gap of this relaxation. We show an $O(poly log k)$-approximation algorithm for EDPwC with congestion c=2, by rounding the same multi-commodity flow relaxation. This gives the best possible congestion for a sub-polynomial approximation of EDPwC via this relaxation. Our results are also close to optimal in terms of the number of pairs routed, since EDPwC is known to be hard to approximate to within a factor of $\tilde{\Omega}((\log n)^{1/(c+1)})$ for any constant congestion c. Prior to our work, the best approximation factor for EDPwC with congestion 2 was $\tilde O(n^{3/7})$, and the best algorithm achieving a polylogarithmic approximation required congestion 14.
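The congestion of a candidate routing, the quantity bounded by c in EDPwC, is straightforward to measure; a small helper (illustrative only, not part of the paper's algorithm):

```python
from collections import Counter

def max_congestion(paths):
    """Maximum number of paths sharing any single edge. Each path is
    a sequence of nodes; edges are treated as undirected."""
    load = Counter()
    for path in paths:
        for u, v in zip(path, path[1:]):
            load[frozenset((u, v))] += 1
    return max(load.values()) if load else 0

# Edge {0,1} carries two paths, so this routing needs congestion 2.
paths = [[0, 1, 2], [3, 1, 2], [0, 1, 4]]
print(max_congestion(paths))  # 2
```

A routing is a valid EDP solution exactly when this value is 1, and a valid EDPwC solution when it is at most the allowed congestion c.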