scispace - formally typeset

Showing papers on "Disjoint sets" published in 2015


Journal ArticleDOI
TL;DR: This paper analyzes the (im)possibility of the exact distinguishability of orthogonal multipartite entangled states under restricted local operation and classical communication and proposes a new scheme for quantum secret sharing (QSS).
Abstract: In this paper, we analyze the (im)possibility of the exact distinguishability of orthogonal multipartite entangled states under restricted local operation and classical communication. Based on this local distinguishability analysis, we propose a quantum secret sharing scheme (which we call LOCC-QSS). Our LOCC-QSS scheme is quite general and cost efficient compared to other schemes. In our scheme, no joint quantum operation is needed to reconstruct the secret. We also present an interesting $(2,n)$-threshold LOCC-QSS scheme, where any two cooperating players, one from each of two disjoint groups of players, can always reconstruct the secret. This LOCC-QSS scheme is quite uncommon, as most $(k,n)$-threshold quantum secret sharing schemes have the restriction $k \ge \lceil n/2 \rceil$.

118 citations


Journal ArticleDOI
TL;DR: In this article, the existence of knotted and linked thin vortex tubes for steady solutions to the incompressible Euler equation was proved, given a finite collection of (possibly linked and knotted) disjoint thin tubes.
Abstract: We prove the existence of knotted and linked thin vortex tubes for steady solutions to the incompressible Euler equation in $\mathbb{R}^3$. More precisely, given a finite collection of (possibly linked and knotted) disjoint thin tubes in $\mathbb{R}^3$, we show that they can be transformed with a $C^m$-small diffeomorphism into a set of vortex tubes of a Beltrami field that tends to zero at infinity. The structure of the vortex lines in the tubes is extremely rich, presenting a positive-measure set of invariant tori and infinitely many periodic vortex lines. The problem of the existence of steady knotted thin vortex tubes can be traced back to Lord Kelvin.

112 citations


Journal ArticleDOI
TL;DR: In this article, the authors employed a numerical method based on rational interpolations to extrapolate the entanglement entropy of two disjoint intervals for the conformal field theories given by the free compact boson and the Ising model.
Abstract: The entanglement entropy and the logarithmic negativity can be computed in quantum field theory through a method based on the replica limit. Performing these analytic continuations in some cases is beyond our current knowledge, even for simple models. We employ a numerical method based on rational interpolations to extrapolate the entanglement entropy of two disjoint intervals for the conformal field theories given by the free compact boson and the Ising model. The case of three disjoint intervals is studied for the Ising model and the non-compact free massless boson. For the latter model, the logarithmic negativity of two disjoint intervals has also been considered. Some of our findings have been checked against existing numerical results obtained from the corresponding lattice models.

108 citations


Book ChapterDOI
21 Apr 2015
TL;DR: In this paper, a general family of facility location problems on planar graphs and on the 2-dimensional plane was studied, where a subset of k objects has to be selected, satisfying certain packing (disjointness) and covering constraints.
Abstract: We study a general family of facility location problems defined on planar graphs and on the 2-dimensional plane. In these problems, a subset of k objects has to be selected, satisfying certain packing (disjointness) and covering constraints. Our main result is showing that, for each of these problems, the \(n^{{\mathcal{O}}(k)}\) time brute force algorithm of selecting k objects can be improved to \(n^{{\mathcal{O}}(\sqrt{k})}\) time. The algorithm is based on focusing on the Voronoi diagram of a hypothetical solution of k objects; this idea was introduced recently in the design of geometric QPTASs, but was not yet used for exact algorithms and for planar graphs. As concrete consequences of our main result, we obtain \(n^{{\mathcal{O}}(\sqrt{k})}\) time algorithms for the following problems: d-Scattered Set in planar graphs (find k vertices at pairwise distance d); d-Dominating Set/(k,d)-Center in planar graphs (find k vertices such that every vertex is at distance at most d from these vertices); select k pairwise disjoint connected vertex sets from a given collection; select k pairwise disjoint disks in the plane (of possibly different radii) from a given collection; cover a set of points in the plane by selecting k disks/axis-parallel squares from a given collection. We complement these positive results with lower bounds suggesting that some similar, but slightly more general problems (such as covering points with axis-parallel rectangles) do not admit \(n^{{\mathcal{O}}(\sqrt{k})}\) time algorithms.
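As a concrete illustration of the complexity gap described above, here is a minimal Python sketch of the $n^{{\mathcal{O}}(k)}$ brute force for one of the listed problems, d-Scattered Set (the example graph, k, and d are illustrative; the paper's $n^{{\mathcal{O}}(\sqrt{k})}$ Voronoi-based algorithm is far more involved):

```python
from itertools import combinations
from collections import deque

def bfs_dist(adj, src):
    """Single-source shortest-path distances via BFS on an unweighted graph."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def d_scattered_brute_force(adj, k, d):
    """n^O(k) brute force: try every k-subset of vertices and return one
    whose members are pairwise at distance >= d, or None."""
    dist = {u: bfs_dist(adj, u) for u in adj}
    for cand in combinations(adj, k):
        if all(dist[u].get(v, float("inf")) >= d
               for u, v in combinations(cand, 2)):
            return cand
    return None

# A path on 5 vertices: 0-1-2-3-4; vertices 0 and 4 are at distance 4.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(d_scattered_brute_force(path, 2, 4))  # (0, 4)
```

The `combinations(adj, k)` loop is exactly the $n^{{\mathcal{O}}(k)}$ enumeration that the paper's main result improves to $n^{{\mathcal{O}}(\sqrt{k})}$ time.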

102 citations


Proceedings ArticleDOI
14 Jun 2015
TL;DR: A binary linear code of length [equation] is constructed which has locality r and availability t for all coordinates; its rate is higher than that of the direct product code, previously the only known construction achieving arbitrary locality and availability.
Abstract: The ith coordinate of an [n, k] code is said to have locality r and availability t if there exist t disjoint groups, each containing at most r other coordinates that can together recover the value of the ith coordinate. This property is particularly useful for codes for distributed storage systems because it permits local repair of failed nodes and parallel access of hot data. In this paper, for any positive integers r and t, we construct a binary linear code of length [equation] which has locality r and availability t for all coordinates. Although it only achieves the trivial minimum distance (i.e., t + 1), its information rate attains [equation], which is higher than that of the direct product code, the only known construction that can achieve arbitrary locality and availability.

89 citations


Journal ArticleDOI
01 Dec 2015
TL;DR: A partition-based framework is proposed: a partition scheme divides the sets into several subsets and guarantees that two sets are similar only if they share a common subset, and an adaptive grouping mechanism reduces the allocation-selection complexity to O(s log s).
Abstract: We study the exact set similarity join problem, which, given two collections of sets, finds all the similar set pairs from the collections. Existing methods generally utilize the prefix-filter-based framework. They generate a prefix for each set and prune all the pairs whose prefixes are disjoint. However, the pruning power is limited, because if two dissimilar sets share a common element in their prefixes, they cannot be pruned. To address this problem, we propose a partition-based framework. We design a partition scheme to partition the sets into several subsets and guarantee that two sets are similar only if they share a common subset. To improve the pruning power, we propose a mixture of the subsets and their 1-deletion neighborhoods (the subsets of a set obtained by eliminating one element). As there are multiple allocation strategies to generate the mixture, we evaluate different allocations and design a dynamic-programming algorithm to select the optimal one. However, the time complexity of generating the optimal one is O(s^3) for a set of size s. To speed up the allocation selection, we develop a greedy algorithm with an approximation ratio of 2. To further reduce the complexity, we design an adaptive grouping mechanism, and the two techniques together reduce the complexity to O(s log s). Experimental results on three real-world datasets show that our method achieves high performance and outperforms state-of-the-art methods by 2-5 times.
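The partition-plus-1-deletion filtering idea can be sketched in a few lines of Python; the hash-based partition and the chunk count below are illustrative stand-ins for the paper's allocation strategies, not its actual scheme:

```python
def partition(s, c=3):
    """Hash-partition a set of ints into c disjoint sub-signatures
    (a simplified stand-in for the paper's partition scheme)."""
    parts = [set() for _ in range(c)]
    for e in s:
        parts[e % c].add(e)
    return [frozenset(p) for p in parts]

def one_deletions(part):
    """The 1-deletion neighborhood: all copies of `part` minus one element."""
    return {part - {e} for e in part}

def survives_filter(sa, sb, c=3):
    """Candidate test sketch: a pair is kept only if some pair of aligned
    non-empty sub-signatures matches exactly or up to one deleted element."""
    for pa, pb in zip(partition(sa, c), partition(sb, c)):
        if pa and (pa == pb or pa in one_deletions(pb)):
            return True
        if pb and pb in one_deletions(pa):
            return True
    return False

print(survives_filter({1, 2, 3, 6}, {1, 2, 3, 9}))  # True: near-identical sets
print(survives_filter({1, 4, 7}, {2, 5, 8}))        # False: no shared signature
```

Pairs that fail `survives_filter` can be pruned without computing their exact similarity, which is the point of the framework.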

80 citations


Journal ArticleDOI
TL;DR: A hybrid memetic framework for coverage optimization (Hy-MFCO) is presented to cope with the hybrid problem using two major components: a memetic algorithm (MA)-based scheduling strategy and a heuristic recursive algorithm (HRA).
Abstract: One of the critical concerns in wireless sensor networks (WSNs) is the continuous maintenance of sensing coverage. Many particular applications, such as battlefield intrusion detection and object tracking, require full coverage at any time, which is typically resolved by adding redundant sensor nodes. With abundant energy, previous studies suggested that the network lifetime can be maximized while maintaining full coverage by organizing sensor nodes into a maximum number of disjoint sets and alternately turning them on. Since the power of sensor nodes is unevenly consumed over time, and early failure of sensor nodes leads to coverage loss, WSNs require dynamic coverage maintenance. Thus, the task of permanently sustaining full coverage is formulated as a hybrid of the disjoint set covers and dynamic-coverage-maintenance problems, both of which have been proven to be NP-complete. In this paper, a hybrid memetic framework for coverage optimization (Hy-MFCO) is presented to cope with the hybrid problem using two major components: 1) a memetic algorithm (MA)-based scheduling strategy and 2) a heuristic recursive algorithm (HRA). First, the MA-based scheduling strategy adopts a dynamic chromosome structure to create disjoint sets, and then the HRA is utilized to compensate for the loss of coverage by waking some of the hibernating nodes in local regions when a disjoint set fails to maintain full coverage. The results obtained from real-world experiments using a WSN test-bed and computer simulations indicate that the proposed Hy-MFCO is able to maximize sensing coverage while achieving energy efficiency at the same time. Moreover, the results also show that Hy-MFCO significantly outperforms the existing methods with respect to coverage preservation and energy efficiency.
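As a rough illustration of the disjoint set covers subproblem, here is a greedy Python sketch (this is not Hy-MFCO's MA-based scheduling; sensor names and coverage sets are hypothetical):

```python
def greedy_disjoint_covers(coverage, targets):
    """Greedy sketch: repeatedly extract a disjoint set of sensors that
    together covers every target; `coverage` maps sensor -> covered targets.
    Each extracted set could be switched on alone to preserve full coverage."""
    remaining = set(coverage)
    covers = []
    while True:
        chosen, covered = [], set()
        # Prefer sensors covering more targets; tie-break by name for determinism.
        for s in sorted(remaining, key=lambda s: (-len(coverage[s]), s)):
            if not coverage[s] <= covered:  # sensor contributes new targets
                chosen.append(s)
                covered |= coverage[s]
            if covered >= targets:
                break
        if covered >= targets:
            covers.append(chosen)
            remaining -= set(chosen)
        else:
            return covers

sensors = {"s1": {1, 2, 3}, "s2": {1, 2}, "s3": {3}}
print(greedy_disjoint_covers(sensors, {1, 2, 3}))  # [['s1'], ['s2', 's3']]
```

Since disjoint set covers is NP-complete, this greedy can miss the maximum number of covers; the paper's MA-based strategy searches the space of groupings instead.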

75 citations


Journal ArticleDOI
TL;DR: In this paper, it is shown that, above a certain critical threshold $u_{**}$, the probability of having long paths that avoid $\mathcal{I}^u$ is exponentially small, with logarithmic corrections for d = 3, while allowing the mutual distance between the sets $A_1$ and $A_2$ to be much smaller than their diameters.
Abstract: In this paper we establish a decoupling feature of the random interlacement process $\mathcal{I}^u \subset \mathbb{Z}^d$ at level $u$, $d \ge 3$. Roughly speaking, we show that observations of $\mathcal{I}^u$ restricted to two disjoint subsets $A_1$ and $A_2$ of $\mathbb{Z}^d$ are approximately independent, once we add a sprinkling to the process $\mathcal{I}^u$ by slightly increasing the parameter $u$. Our results differ from previous ones in that we allow the mutual distance between the sets $A_1$ and $A_2$ to be much smaller than their diameters. We then provide an important application of this decoupling for which such flexibility is crucial. More precisely, we prove that, above a certain critical threshold $u_{**}$, the probability of having long paths that avoid $\mathcal{I}^u$ is exponentially small, with logarithmic corrections for $d = 3$. To obtain the above decoupling, we first develop a general method for comparing the trace left by two Markov chains on the same state space. This method is based on what we call the soft local time of a chain. In another crucial step towards our main result, we also prove that any discrete set can be "smoothened" into a slightly enlarged discrete set, for which its equilibrium measure behaves in a regular way. Both these auxiliary results are interesting in themselves and are presented independently from the rest of the paper. This work is mainly concerned with the decoupling of the random interlacements model introduced by A.S. Sznitman in [23]. In other words, we show that the restrictions of the interlacement set $\mathcal{I}^u$ to two disjoint subsets $A_1$ and $A_2$ of $\mathbb{Z}^d$ are approximately independent in a certain sense. To this aim, we first develop a general method, based on what we call soft local times, to obtain an approximate stochastic domination between the ranges of two general Markov chains on the same state space. To apply this coupling method to the model of random interlacements, we first need to modify the sets $A_1$ and $A_2$ through a procedure we call smoothening. This consists of enclosing a discrete set $A \subset \mathbb{Z}^d$ into a slightly enlarged set $A'$, whose equilibrium distribution behaves "regularly", resembling what happens for a large discrete ball. Finally, as an application of our decoupling result, we obtain upper bounds for the connectivity function of the vacant set $\mathcal{V}^u = \mathbb{Z}^d \setminus \mathcal{I}^u$, for intensities $u$ above a critical threshold $u_{**}$. These bounds are considerably sharp, presenting a behaviour very similar to that of their corresponding lower bounds. We believe that these four results are interesting in their own right. Therefore, we structured the article in a way that they can be read independently from each other. Below we give a more detailed description of each of these results.

66 citations


Journal ArticleDOI
TL;DR: It is found that the cluster reducibility can be characterized for semistable systems based on a projected controllability Gramian, which leads to an a priori $H_2$-error bound on the state discrepancy caused by aggregation.

57 citations


Proceedings ArticleDOI
13 Apr 2015
TL;DR: An online algorithm is designed that generates the optimal PLA in terms of representation size while meeting the prescribed max-error guarantee, and can reduce the representation size of f by around 15% on average compared with the current best methods.
Abstract: Given a time series S = ((x_1, y_1), (x_2, y_2), …) and a prescribed error bound e, the piecewise linear approximation (PLA) problem with max-error guarantees is to construct a piecewise linear function f such that |f(x_i) - y_i| ≤ e for all i. In addition, we would like to have an online algorithm that takes the time series as the records arrive in a streaming fashion, and outputs the pieces of f on-the-fly. This problem has applications wherever time series data is being continuously collected, but the data collection device has limited local buffer space and communication bandwidth, so that the data has to be compressed and sent back during the collection process. Prior work addressed two versions of the problem, where either f consists of disjoint segments, or f is required to be a continuous piecewise linear function. In both cases, existing algorithms can produce a function f that has the minimum number of pieces while meeting the prescribed error bound e. However, we observe that neither minimizes the true representation size of f, i.e., the number of parameters required to represent f. In this paper, we design an online algorithm that generates the optimal PLA in terms of representation size while meeting the prescribed max-error guarantee. Our experiments on many real-world data sets show that our algorithm can reduce the representation size of f by around 15% on average compared with the current best methods, while still requiring O(1) processing time per data record and small space.
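A minimal sketch of the disjoint-segments variant helps fix ideas. The greedy below anchors each segment at its first point and maintains an interval of feasible slopes, so it meets the max-error bound but is neither piece-optimal nor the paper's representation-size-optimal algorithm:

```python
def pla_disjoint(points, eps):
    """Greedy sketch of max-error PLA with disjoint segments: extend the
    current segment while some line through its first point fits every
    later point within eps (tracked as a shrinking slope interval)."""
    segments, i = [], 0
    while i < len(points):
        x0, y0 = points[i]
        lo, hi = float("-inf"), float("inf")
        j = i + 1
        while j < len(points):
            dx = points[j][0] - x0
            new_lo = max(lo, (points[j][1] - eps - y0) / dx)
            new_hi = min(hi, (points[j][1] + eps - y0) / dx)
            if new_lo > new_hi:
                break  # no single line fits points i..j within eps
            lo, hi, j = new_lo, new_hi, j + 1
        slope = (lo + hi) / 2 if hi < float("inf") else 0.0
        segments.append((x0, points[j - 1][0], y0, slope))  # (x_start, x_end, value at x_start, slope)
        i = j
    return segments

pts = [(0, 0), (1, 2), (2, 4), (3, 6), (4, 0)]
print(pla_disjoint(pts, 0.5))  # one segment of slope ~2 over x in [0, 3], then one for (4, 0)
```

Each emitted segment needs a few parameters (endpoints, anchor value, slope); the paper's contribution is minimizing that total parameter count rather than the number of segments.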

55 citations


Journal ArticleDOI
TL;DR: In this article, the authors studied the parameterized complexity of Independent Set on 2-union graphs and on subclasses like strip graphs and showed that the problem is fixed-parameter tractable with respect to the parameter $k$.
Abstract: Numerous applications in scheduling, such as resource allocation or steel manufacturing, can be modeled using the NP-hard Independent Set problem (given an undirected graph and an integer $k$, find a set of at least $k$ pairwise non-adjacent vertices). Here, one encounters special graph classes like 2-union graphs (edge-wise unions of two interval graphs) and strip graphs (edge-wise unions of an interval graph and a cluster graph), on which Independent Set remains NP-hard but admits constant-ratio approximations in polynomial time. We study the parameterized complexity of Independent Set on 2-union graphs and on subclasses like strip graphs. Our investigations significantly benefit from a new structural "compactness" parameter of interval graphs and novel problem formulations using vertex-colored interval graphs. Our main contributions are as follows: (1) We show a complexity dichotomy: restricted to graph classes closed under induced subgraphs and disjoint unions, Independent Set is polynomial-time solvable if both input interval graphs are cluster graphs, and is NP-hard otherwise. (2) We chart the possibilities and limits of effective polynomial-time preprocessing (also known as kernelization). (3) We extend the fixed-parameter algorithm of Halldorsson and Karlsson (2006) for Independent Set on strip graphs parameterized by the structural parameter "maximum number of live jobs" to show that the problem (also known as Job Interval Selection) is fixed-parameter tractable with respect to the parameter $k$, and generalize their algorithm from strip graphs to 2-union graphs. Preliminary experiments with random data indicate that Job Interval Selection with up to 15 jobs and $5\cdot 10^5$ intervals can be solved optimally in less than 5 min.
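For intuition, the tractable base case (Independent Set on a single interval graph) is solved by the classic earliest-right-endpoint greedy; the intervals below are illustrative. The hardness in the abstract comes from taking edge-wise unions of two such graphs:

```python
def max_independent_intervals(intervals):
    """Classic greedy for a maximum independent set in an interval graph:
    sort by right endpoint and take every interval that starts after the
    last chosen one ends (closed intervals, so shared endpoints conflict)."""
    chosen, last_end = [], float("-inf")
    for l, r in sorted(intervals, key=lambda iv: iv[1]):
        if l > last_end:
            chosen.append((l, r))
            last_end = r
    return chosen

jobs = [(1, 3), (2, 5), (4, 7), (6, 8)]
print(max_independent_intervals(jobs))  # [(1, 3), (4, 7)]
```

On a 2-union graph a vertex must be non-adjacent in both interval graphs at once, which is exactly where this single-pass greedy breaks down.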

Journal ArticleDOI
TL;DR: In this article, the trajectories of a solution to an Ito stochastic differential equation were studied as the process passes between two disjoint open sets, and the probability law of these transition paths was described in terms of a transition path process.
Abstract: We study the trajectories of a solution \(X_t\) to an Ito stochastic differential equation in \(\mathbb{R}^d\), as the process passes between two disjoint open sets, \(A\) and \(B\). These segments of the trajectory are called transition paths or reactive trajectories, and they are of interest in the study of chemical reactions and thermally activated processes. In that context, the sets \(A\) and \(B\) represent reactant and product states. Our main results describe the probability law of these transition paths in terms of a transition path process \(Y_t\), which is a strong solution to an auxiliary SDE having a singular drift term. We also show that statistics of the transition path process may be recovered by empirical sampling of the original process \(X_t\). As an application of these ideas, we prove various representation formulas for statistics of the transition paths. We also identify the density and current of transition paths. Our results fit into the framework of the transition path theory of Weinan E and Vanden-Eijnden.

Journal ArticleDOI
TL;DR: This work computes the leading contribution to the mutual information (MI) of two disjoint spheres in the large-distance regime for arbitrary conformal field theories (CFT) in any dimension by refining the operator product expansion method introduced by Cardy.
Abstract: We compute the leading contribution to the mutual information (MI) of two disjoint spheres in the large distance regime for arbitrary conformal field theories (CFT) in any dimension. This is achieved by refining the operator product expansion method introduced by Cardy [Cardy:2013nua]. For CFTs with holographic duals the leading contribution to the MI at long distances comes from bulk quantum corrections to the Ryu-Takayanagi area formula. According to the FLM proposal [Faulkner:2013ana], this equals the bulk MI between the two disjoint regions spanned by the boundary spheres and their corresponding minimal area surfaces. We compute this quantum correction and provide in this way a non-trivial check of the FLM proposal.

Journal ArticleDOI
TL;DR: This article introduces a novel RDF indexing technique that supports efficient SPARQL solution in compressed space and enhances this model with two compact indexes listing the predicates related to each different subject and object in the dataset, in order to address the specific weaknesses of vertically partitioned representations.
Abstract: The Web of Data has been gaining momentum in recent years. This leads to publishing more and more semi-structured datasets following, in many cases, the RDF (Resource Description Framework) data model based on atomic triple units of subject, predicate, and object. Although it is a very simple model, specific compression methods become necessary because datasets are increasingly larger and various scalability issues arise around their organization and storage. This requirement is even more restrictive in RDF stores because efficient SPARQL solution on the compressed RDF datasets is also required. This article introduces a novel RDF indexing technique that supports efficient SPARQL solution in compressed space. Our technique, called $k^2$-triples, uses the predicate to vertically partition the dataset into disjoint subsets of pairs (subject, object), one per predicate. These subsets are represented as binary matrices of subjects × objects in which 1-bits mean that the corresponding triple exists in the dataset. This model results in very sparse matrices, which are efficiently compressed using $k^2$-trees. We enhance this model with two compact indexes listing the predicates related to each different subject and object in the dataset, in order to address the specific weaknesses of vertically partitioned representations. The resulting technique not only achieves by far the most compressed representations, but also the best overall performance for RDF retrieval in our experimental setup. Our approach uses up to 10 times less space than a state-of-the-art baseline and outperforms its time performance by several orders of magnitude on the most basic query patterns. In addition, we optimize traditional join algorithms on $k^2$-triples and define a novel one leveraging its specific features.
Our experimental results show that our technique also overcomes traditional vertical partitioning for join solution, reporting the best numbers for joins in which the non-joined nodes are provided, and being competitive in most of the cases.
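The vertical partitioning step behind $k^2$-triples can be sketched as follows (toy data; the real technique compresses each binary matrix with a $k^2$-tree rather than storing it explicitly):

```python
from collections import defaultdict

def vertical_partition(triples):
    """Partition an RDF triple set by predicate into disjoint
    (subject, object) pair sets, one per predicate."""
    parts = defaultdict(set)
    for s, p, o in triples:
        parts[p].add((s, o))
    return dict(parts)

def to_matrix(pairs, subjects, objects):
    """Render one predicate's pair set as a binary subjects x objects
    matrix; a 1 means the corresponding triple exists."""
    return [[1 if (s, o) in pairs else 0 for o in objects]
            for s in subjects]

triples = [("alice", "knows", "bob"), ("bob", "knows", "carol"),
           ("alice", "age", "30")]
parts = vertical_partition(triples)
print(sorted(parts))  # ['age', 'knows']
print(to_matrix(parts["knows"], ["alice", "bob"], ["bob", "carol"]))
# [[1, 0], [0, 1]]
```

A triple pattern with a bound predicate then touches a single sparse matrix, which is why these per-predicate matrices compress so well with $k^2$-trees.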

Journal ArticleDOI
TL;DR: In this paper, the moments of the partial transpose of the reduced density matrix of two intervals for the free massless Dirac fermion were studied in terms of the Riemann theta function.
Abstract: We study the moments of the partial transpose of the reduced density matrix of two intervals for the free massless Dirac fermion. By means of a direct calculation based on the coherent state path integral, we find an analytic form for these moments in terms of the Riemann theta function. We show that the moments of arbitrary order are equal to the same quantities for the compactified boson at the self-dual point. These equalities imply the non-trivial result that the negativities of the free fermion and the self-dual boson are also equal.

Journal ArticleDOI
TL;DR: In this paper, the partial transpose of the spin reduced density matrix of two disjoint blocks in spin chains admitting a representation in terms of free fermions, such as XY chains, is considered.
Abstract: We consider the partial transpose of the spin reduced density matrix of two disjoint blocks in spin chains admitting a representation in terms of free fermions, such as XY chains. We exploit the solution of the model in terms of Majorana fermions and show that such a partial transpose in the spin variables is a linear combination of four Gaussian fermionic operators. This representation allows us to explicitly construct and evaluate the integer moments of the partial transpose. We numerically study critical XX and Ising chains and show that the asymptotic results for large blocks agree with conformal field theory predictions if corrections to the scaling are properly taken into account.

Journal ArticleDOI
TL;DR: In this paper, the moments of the reduced density matrix of two disjoint intervals and of its partial transposition with respect to one interval for critical free fermionic lattice models were investigated.
Abstract: We reconsider the moments of the reduced density matrix of two disjoint intervals and of its partial transpose with respect to one interval for critical free fermionic lattice models. It is known that these matrices are sums of either two or four Gaussian matrices and hence their moments can be reconstructed as computable sums of products of Gaussian operators. We find that, in the scaling limit, each term in these sums is in one-to-one correspondence with the partition function of the corresponding conformal field theory on the underlying Riemann surface with a given spin structure. The analytical findings have been checked against numerical results for the Ising chain and for the XX spin chain at the critical point.

Journal ArticleDOI
15 Jul 2015 - Filomat
TL;DR: In this article, a new type of boundary value problem is investigated, consisting of the equation $-y''(x) + (\mathcal{B}y)(x) = \lambda y(x)$ on the two disjoint intervals $(-1, 0)$ and $(0, 1)$ together with transmission conditions at the point of interaction $x = 0$ and eigenparameter-dependent boundary conditions.
Abstract: The aim of this study is to investigate a new type of boundary value problem consisting of the equation $-y''(x) + (\mathcal{B}y)(x) = \lambda y(x)$ on the two disjoint intervals $(-1, 0)$ and $(0, 1)$, together with transmission conditions at the point of interaction $x = 0$ and eigenparameter-dependent boundary conditions, where $\mathcal{B}$ is an abstract linear operator. Using our own approach, we introduce a modified Hilbert space and a linear operator in it in such a way that the considered problem can be interpreted as an eigenvalue problem of this operator. We establish properties such as isomorphism, coerciveness with respect to the spectral parameter, maximal decrease of the resolvent operator, and discreteness of the spectrum. Further, we examine the asymptotic behaviour of the eigenvalues.

Journal ArticleDOI
TL;DR: In this article, the authors studied the holographic Rényi entropy of a large interval on a circle at high temperature for the two-dimensional conformal field theory (CFT) dual to pure gravity.
Abstract: In this paper, we study the holographic Rényi entropy of a large interval on a circle at high temperature for the two-dimensional conformal field theory (CFT) dual to pure $\mathrm{AdS}_3$ gravity. In the field theory, the Rényi entropy is encoded in the CFT partition function on an $n$-sheeted torus whose sheets are connected by a large branch cut. As proposed by Chen and Wu [Large interval limit of Rényi entropy at high temperature, arXiv:1412.0763], the effective way to read the entropy in the large interval limit is to insert a complete set of state bases of the twist sector at the branch cut. Then the calculation transforms into an expansion of four-point functions in the twist sector with respect to $e^{-2\pi TR/n}$. By using the operator product expansion of the twist operators at the branch points, we read the first few terms of the Rényi entropy, including the leading and next-to-leading contributions in the large central charge limit. Moreover, we show that the leading contribution is actually captured by the twist vacuum module. In this case, by the Ward identity, the four-point functions can be derived from the correlation function of four twist operators, which is related to the double interval entanglement entropy. Holographically, we apply the recipe in [T. Faulkner, The entanglement Rényi entropies of disjoint intervals in AdS/CFT, arXiv:1303.7221] and [T. Barrella et al., Holographic entanglement beyond classical gravity, J. High Energy Phys. 09 (2013) 109] to compute the classical Rényi entropy and its one-loop quantum correction, after imposing a new set of monodromy conditions. The holographic classical result matches exactly with the leading contribution in the field theory up to $e^{-4\pi TR}$ and $l^6$, while the holographic one-loop contribution is in exact agreement with the next-to-leading results in field theory up to $e^{-6\pi TR/n}$ and $l^4$.

Posted Content
TL;DR: It is shown that under suitable restrictions on the dimensions, a well-known Deleted Product Criterion is not only necessary but also sufficient for the existence of maps without r-Tverberg points, which is a higher-multiplicity version of the classical Whitney trick.
Abstract: Motivated by topological Tverberg-type problems and by classical results about embeddings (maps without double points), we study the question whether a finite simplicial complex K can be mapped into R^d without triple, quadruple, or, more generally, r-fold points. Specifically, we are interested in maps f from K to R^d that have no r-Tverberg points, i.e., no r-fold points with preimages in r pairwise disjoint simplices of K, and we seek necessary and sufficient conditions for the existence of such maps. We present a higher-multiplicity analogue of the completeness of the Van Kampen obstruction for embeddability in twice the dimension. Specifically, we show that under suitable restrictions on the dimensions, a well-known Deleted Product Criterion (DPC) is not only necessary but also sufficient for the existence of maps without r-Tverberg points. Our main technical tool is a higher-multiplicity version of the classical Whitney trick. An important guiding idea for our work was that sufficiency of the DPC, together with an old result of Ozaydin on the existence of equivariant maps, might yield an approach to disproving the remaining open cases of the long-standing topological Tverberg conjecture. Unfortunately, our proof of the sufficiency of the DPC requires a "codimension 3" proviso, which is not satisfied when K is the N-simplex. Recently, Frick found an extremely elegant way to overcome this last "codimension 3" obstacle and to construct counterexamples to the topological Tverberg conjecture for d at least 3r+1 (r not a prime power). Here, we present a different construction that yields counterexamples for d at least 3r (r not a prime power).

Proceedings Article
07 Dec 2015
TL;DR: This paper gives constant-factor approximation algorithms for maximizing monotone k-submodular functions subject to several size constraints and experimentally demonstrates that these algorithms outperform baseline algorithms in terms of the solution quality.
Abstract: A k-submodular function is a generalization of a submodular function, where the input consists of k disjoint subsets, instead of a single subset, of the domain. Many machine learning problems, including influence maximization with k kinds of topics and sensor placement with k kinds of sensors, can be naturally modeled as the problem of maximizing monotone k-submodular functions. In this paper, we give constant-factor approximation algorithms for maximizing monotone k-submodular functions subject to several size constraints. The running time of our algorithms is almost linear in the domain size. We experimentally demonstrate that our algorithms outperform baseline algorithms in terms of the solution quality.
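A generic greedy for this setting can be sketched as follows; both the toy coverage objective and the greedy itself are illustrative, not necessarily the paper's algorithms or guarantees:

```python
def greedy_k_submodular(f, ground, k, budget):
    """Generic greedy sketch for maximizing a monotone k-submodular
    function f (taking k disjoint subsets): each step assigns the
    element/slot pair with the largest value, up to `budget` elements."""
    slots = [set() for _ in range(k)]
    used = set()
    for _ in range(budget):
        best = None
        for e in sorted(ground - used):
            for i in range(k):
                slots[i].add(e)
                value = f(slots)
                slots[i].remove(e)
                if best is None or value > best[0]:
                    best = (value, e, i)
        if best is None:
            break
        _, e, i = best
        slots[i].add(e)
        used.add(e)
    return slots

# Toy monotone 2-submodular objective: total area covered by two kinds of
# sensors (per-element coverage regions are hypothetical).
cover = {0: {"a"}, 1: {"a", "b"}, 2: {"c"}}

def f(slots):
    covered = set()
    for kind in slots:
        for e in kind:
            covered |= cover[e]
    return len(covered)

print(greedy_k_submodular(f, {0, 1, 2}, k=2, budget=2))  # e.g. [{1, 2}, set()]
```

The disjointness of the k slots is what distinguishes this from plain submodular maximization: each element may serve at most one of the k roles.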

Journal ArticleDOI
TL;DR: In this paper, a new variant of variable neighborhood search is proposed for the maximally diverse grouping problem, which asks for a partition of a given set of elements into a fixed number of mutually disjoint subsets.

01 May 2015
TL;DR: The extensive computational results show that the new heuristic significantly outperforms the current state of the art for solving the maximally diverse grouping problem.
Abstract: The maximally diverse grouping problem requires finding a partition of a given set of elements into a fixed number of mutually disjoint subsets (or groups) in order to maximize the overall diversity between elements of the same group. In this paper we develop a new variant of variable neighborhood search for solving the problem. The extensive computational results show that our new heuristic significantly outperforms the current state of the art. Moreover, the best known solutions have been improved on 531 out of 540 test instances from the literature.
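The objective described above, total within-group diversity, together with a single swap neighbourhood, can be sketched as follows. This is a toy stand-in for one neighbourhood move of the kind a variable neighborhood search would explore, not the paper's heuristic; all names and data are illustrative.

```python
import random

def diversity(groups, dist):
    """Sum of pairwise distances between elements of the same group."""
    total = 0.0
    for g in groups:
        for i in range(len(g)):
            for j in range(i + 1, len(g)):
                total += dist[g[i]][g[j]]
    return total

def swap_local_search(groups, dist, iters=1000, seed=0):
    """Randomly try swapping two elements between groups; keep improving swaps.

    Group sizes are preserved, so the partition stays feasible.
    """
    rng = random.Random(seed)
    groups = [list(g) for g in groups]
    best = diversity(groups, dist)
    for _ in range(iters):
        a, b = rng.sample(range(len(groups)), 2)
        i = rng.randrange(len(groups[a]))
        j = rng.randrange(len(groups[b]))
        groups[a][i], groups[b][j] = groups[b][j], groups[a][i]
        cand = diversity(groups, dist)
        if cand > best:
            best = cand
        else:  # undo the non-improving swap
            groups[a][i], groups[b][j] = groups[b][j], groups[a][i]
    return groups, best
```

A full variable neighborhood search would cycle through several such neighbourhoods (swaps, relocations, larger exchanges) and perturb the incumbent solution when all of them are exhausted.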

Journal ArticleDOI
TL;DR: In this article, an integer linear programming model and a satisfiability test model were proposed for the packing coloring problem; computing the packing chromatic number is known to be NP-hard.

Posted Content
TL;DR: In this article, the relation between certain random tiling models and interacting particle systems belonging to the anisotropic KPZ (Kardar-Parisi-Zhang) universality class in 2+1-dimensions is explained.
Abstract: We explain the relation between certain random tiling models and interacting particle systems belonging to the anisotropic KPZ (Kardar-Parisi-Zhang) universality class in 2+1-dimensions. The link between these two a priori disjoint sets of models is a consequence of the shuffling algorithms that generate the random tilings under consideration. To see the precise connection, we represent both a random tiling and the corresponding particle system through a set of non-intersecting lines, whose dynamics are induced by the shuffling algorithm or the particle dynamics. The resulting class of measures on line ensembles also fits into the framework of the Schur processes.

01 Jan 2015
TL;DR: In this article, a new class of sets known as δS-closed sets in ideal topological spaces is introduced, which lies between -I-closed (19) sets and g-closed sets; its unique feature is that it forms a topology and is independent of open sets.
Abstract: In this paper we introduce a new class of sets known as δS-closed sets in ideal topological spaces and study some of its basic properties and characterizations. This new class of sets lies between -I-closed (19) sets and g-closed sets; its unique feature is that it forms a topology and is independent of open sets.

Journal ArticleDOI
TL;DR: A polynomial-time algorithm is provided to solve a rooted version of the k edge-disjoint directed paths problem (for fixed k) in digraphs with independence number bounded by a fixed integer α.

Proceedings ArticleDOI
Damien Pous1
14 Jan 2015
TL;DR: In this article, the authors use symbolic automata, where the transition function is compactly represented using (multi-terminal) binary decision diagrams (BDD), to check language equivalence of finite automata over a large alphabet.
Abstract: We propose algorithms for checking language equivalence of finite automata over a large alphabet. We use symbolic automata, where the transition function is compactly represented using (multi-terminal) binary decision diagrams (BDD). The key idea is to compute a bisimulation by exploring reachable pairs symbolically, so as to avoid redundancies. This idea can be combined with already existing optimisations, and we show in particular a nice integration with the disjoint sets forest data structure from Hopcroft and Karp's standard algorithm. Then we consider Kleene algebra with tests (KAT), an algebraic theory that can be used for verification in various domains ranging from compiler optimisation to network programming analysis. This theory is decidable by reduction to language equivalence of automata on guarded strings, a particular kind of automata that have exponentially large alphabets. We propose several methods for constructing symbolic automata out of KAT expressions, based either on Brzozowski's derivatives or on standard automata constructions. All in all, this results in efficient algorithms for deciding equivalence of KAT expressions.
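The disjoint sets forest mentioned above is the classic union-find structure. The following minimal Python sketch shows how Hopcroft and Karp's algorithm uses it to check equivalence of two DFA states by merging states conjectured equivalent and exploring successor pairs; the symbolic/BDD layer of the paper is omitted, and all names here are illustrative.

```python
class DisjointSets:
    """Disjoint sets forest with path halving and union by rank."""

    def __init__(self):
        self.parent = {}
        self.rank = {}

    def find(self, x):
        if x not in self.parent:
            self.parent[x] = x
            self.rank[x] = 0
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, x, y):
        """Merge the classes of x and y; return True if they were distinct."""
        rx, ry = self.find(x), self.find(y)
        if rx == ry:
            return False
        if self.rank[rx] < self.rank[ry]:
            rx, ry = ry, rx
        self.parent[ry] = rx
        if self.rank[rx] == self.rank[ry]:
            self.rank[rx] += 1
        return True

def hk_equivalent(delta, accept, s, t, alphabet):
    """Hopcroft-Karp style check that DFA states s and t accept the same language.

    delta: dict mapping (state, letter) -> state; accept: set of accepting states.
    """
    ds = DisjointSets()
    ds.union(s, t)
    todo = [(s, t)]
    while todo:
        p, q = todo.pop()
        if (p in accept) != (q in accept):
            return False  # distinguishing word found
        for a in alphabet:
            p2, q2 = delta[(p, a)], delta[(q, a)]
            if ds.union(p2, q2):  # successors not yet identified: explore them
                todo.append((p2, q2))
    return True
```

The union-find structure is what makes the pair exploration terminate quickly: each successful union reduces the number of equivalence classes, so at most n - 1 pairs are ever pushed for an n-state automaton.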

Journal ArticleDOI
TL;DR: A method based on information theory and a kernel-based clustering algorithm is proposed to detect efficiently the set of land-cover classes which are common to both domains as well as the additional or missing classes in the target domain image.
Abstract: This paper addresses the problem of land-cover classification of remotely sensed image pairs in the context of domain adaptation. The primary assumption of the proposed method is that the training data are available only for one of the images (source domain), whereas for the other image (target domain), no labeled data are available. No assumption is made here on the number and the statistical properties of the land-cover classes, which, in turn, may vary from one domain to the other. The only constraint is that at least one land-cover class is shared by the two domains. Under these assumptions, a novel graph-theoretic cross-domain cluster mapping algorithm is proposed to efficiently detect the set of land-cover classes which are common to both domains as well as the additional or missing classes in the target domain image. An interdomain graph is introduced, which contains all of the class information of both images, and subsequently, an efficient subgraph-matching algorithm is proposed to highlight the changes between them. The proposed cluster mapping algorithm initially clusters the target domain data into an optimal number of groups given the available source domain training samples. To this end, a method based on information theory and a kernel-based clustering algorithm is proposed. Considering the fact that the spectral signatures of land-cover classes may overlap significantly, a postprocessing step is applied to refine the classification map produced by the clustering algorithm. Two multispectral data sets with medium and very high geometrical resolution and one hyperspectral data set are considered to evaluate the robustness of the proposed technique. Two of the data sets consist of multitemporal image pairs, while the remaining one contains images of spatially disjoint geographical areas. The experiments confirm the effectiveness of the proposed framework in different complex scenarios.