
Showing papers on "Disjoint sets published in 2007"


BookDOI
01 Jan 2007
TL;DR: An edited volume, introduced by A. Kuba and G.T. Herman, collecting work on discrete tomography: its foundations (including discrete point X-rays), reconstruction algorithms, and applications.
Abstract: ANHA Series Preface Preface List of Contributors Introduction / A. Kuba and G.T. Herman Part I. Foundations of Discrete Tomography An Introduction to Discrete Point X-Rays / P. Dulio, R.J. Gardner, and C. Peri Reconstruction of Q-Convex Lattice Sets / S. Brunetti and A. Daurat Algebraic Discrete Tomography / L. Hajdu and R. Tijdeman Uniqueness and Additivity for n-Dimensional Binary Matrices with Respect to Their 1-Marginals / E. Vallejo Constructing (0, 1)-Matrices with Given Line Sums and Certain Fixed Zeros / R.A. Brualdi and G. Dahl Reconstruction of Binary Matrices under Adjacency Constraints / S. Brunetti, M.C. Costa, A. Frosini, F. Jarray, and C. Picouleau Part II. Discrete Tomography Reconstruction Algorithms Decomposition Algorithms for Reconstructing Discrete Sets with Disjoint Components / P. Balazs Network Flow Algorithms for Discrete Tomography / K.J. Batenburg A Convex Programming Algorithm for Noisy Discrete Tomography / T.D. Capricelli and P.L. Combettes Variational Reconstruction with DC-Programming / C. Schnoerr, T. Schule, and S. Weber Part III. Applications of Discrete Tomography Direct Image Reconstruction-Segmentation, as Motivated by Electron Microscopy / Hstau Y. Liao and Gabor T. Herman Discrete Tomography for Generating Grain Maps of Polycrystals / A. Alpers, L. Rodek, H.F. Poulsen, E. Knudsen, G.T. Herman Discrete Tomography Methods for Nondestructive Testing / J. Baumann, Z. Kiss, S. Krimmel, A. Kuba, A. Nagy, L. Rodek, B. Schillinger, and J. Stephan Emission Discrete Tomography / E. Barcucci, A. Frosini, A. Kuba, A. Nagy, S. Rinaldi, M. Samal, and S. Zopf Application of a Discrete Tomography Approach to Computerized Tomography / Y. Gerard and F. Feschet Index

256 citations


Journal ArticleDOI
31 May 2007
TL;DR: An illumination-tolerant appearance representation is proposed, which is capable of coping with the typical illumination changes occurring in surveillance scenarios and is based on an online k-means colour clustering algorithm, a data-adaptive intensity transformation and the incremental use of frames.
Abstract: Tracking single individuals as they move across disjoint camera views is a challenging task since their appearance may vary significantly between views. Major changes in appearance are due to different and varying illumination conditions and the deformable geometry of people. These effects are hard to estimate and take into account in real-life applications. Thus, in this paper we propose an illumination-tolerant appearance representation, which is capable of coping with the typical illumination changes occurring in surveillance scenarios. The appearance representation is based on an online k-means colour clustering algorithm, a data-adaptive intensity transformation and the incremental use of frames. A similarity measurement is also introduced to compare the appearance representations of any two arbitrary individuals. Post-matching integration of the matching decision along the individuals' tracks is performed in order to improve reliability and robustness of matching. Once matching is provided for any two views of a single individual, its tracking across disjoint cameras derives straightforwardly. Experimental results presented in this paper from a real surveillance camera network show the effectiveness of the proposed method.

129 citations


Proceedings ArticleDOI
07 Jan 2007
TL;DR: The first fully dynamic subquadratic algorithms are presented for: computing maximum matching size, computing maximum bipartite matching weight, computing the maximum number of vertex-disjoint paths, and testing directed vertex k-connectivity of the graph.
Abstract: We present the first fully dynamic subquadratic algorithms for: computing maximum matching size, computing maximum bipartite matching weight, computing the maximum number of vertex-disjoint s, t paths, and testing directed vertex k-connectivity of the graph. The presented algorithms are randomized. The algorithms for maximum matching size and disjoint paths support operations in O(n^1.495) time. The algorithm for computing the maximum bipartite matching weight maintains the graph with integer edge weights from the set {1, ..., W} in O(W^2.495 n^1.495) time. The algorithm for testing directed vertex k-connectivity supports updates in O(n^1.575 + nk^2) time. For all of these problems the presented dynamic algorithms break the input size barrier of O(n^2). As a side result we obtain a dynamic algorithm for maintaining the rank of a matrix that supports updates in O(n^1.495) time.

124 citations


Journal ArticleDOI
TL;DR: It is shown that taking into account the fact that an element can belong to some degree to several "soft similarity classes" at the same time may lead to new and interesting definitions of lower and upper approximations.
Abstract: Traditional rough set theory uses equivalence relations to compute lower and upper approximations of sets. The corresponding equivalence classes either coincide or are disjoint. This behaviour is lost when moving on to a fuzzy T-equivalence relation. However, none of the existing studies on fuzzy rough set theory tries to exploit the fact that an element can belong to some degree to several "soft similarity classes" at the same time. In this paper we show that taking this truly fuzzy characteristic into account may lead to new and interesting definitions of lower and upper approximations. We explore two of them in detail and we investigate under which conditions they differ from the commonly used definitions. Finally, we show the possible practical relevance of the newly introduced approximations for query refinement.
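For contrast with the fuzzy generalization studied here, the classical crisp approximations are easy to state in code. A minimal sketch (the function name and the toy partition are ours, not the paper's):

```python
def lower_upper(classes, target):
    """Pawlak's lower and upper approximations of `target` with respect
    to a partition of the universe into disjoint equivalence classes."""
    target = set(target)
    lower, upper = set(), set()
    for c in classes:
        c = set(c)
        if c <= target:   # class lies entirely inside the target set
            lower |= c
        if c & target:    # class meets the target set
            upper |= c
    return lower, upper

# Toy universe {1..6} partitioned into three equivalence classes.
classes = [{1, 2}, {3, 4}, {5, 6}]
lower, upper = lower_upper(classes, {1, 2, 3})  # → {1, 2} and {1, 2, 3, 4}
```

The target set {1, 2, 3} is neither a union of classes nor disjoint from them, which is exactly the situation where lower and upper approximations differ.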

122 citations


Journal ArticleDOI
TL;DR: The notion of disjointness is introduced for finitely many hypercyclic operators acting on a common space, in analogy with Furstenberg's notion of disjointness in topological dynamics.

107 citations


Book ChapterDOI
16 Sep 2007
TL;DR: The concept of comprehensive triangular decomposition (CTD) for a parametric polynomial system F with coefficients in a field is introduced and an algorithm for computing the CTD of F is proposed, based on the RegularChains library in MAPLE.
Abstract: We introduce the concept of comprehensive triangular decomposition (CTD) for a parametric polynomial system F with coefficients in a field. In broad terms, this is a finite partition of the parameter space into regions, so that within each region the "geometry" (number of irreducible components together with their dimensions and degrees) of the algebraic variety of the specialized system F(u) is the same for all values u of the parameters. We propose an algorithm for computing the CTD of F. It relies on a procedure for solving the following set-theoretic instance of the coprime factorization problem: given a family of constructible sets A1,..., As, compute a family B1,..., Bt of pairwise disjoint constructible sets, such that for all 1 ≤ i ≤ s the set Ai can be written as a union of some of the B1,..., Bt. We report on an implementation of our algorithm computing CTDs, based on the RegularChains library in MAPLE. We provide comparative benchmarks with MAPLE implementations of related methods for solving parametric polynomial systems. Our results illustrate the good performance of our CTD code.
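The set-theoretic coprime-factorization step has a transparent finite-set analogue: the pairwise disjoint sets B1,...,Bt can be taken to be the "atoms" of the membership signatures. A sketch for finite sets (the constructible-set case handled in the paper is of course harder; all names below are ours):

```python
def disjoint_refinement(families):
    """Given finite sets A1,...,As, return pairwise disjoint nonempty
    sets B1,...,Bt such that every Ai is a union of some of the Bj.
    Each Bj collects the points sharing one membership signature."""
    atoms = {}
    for x in set().union(*families):
        sig = tuple(x in A for A in families)   # which Ai contain x
        atoms.setdefault(sig, set()).add(x)
    return list(atoms.values())

A1, A2 = {1, 2, 3}, {2, 3, 4}
blocks = disjoint_refinement([A1, A2])
# Each Ai is exactly the union of the blocks contained in it.
assert A1 == set().union(*[b for b in blocks if b <= A1])
```

Here the two overlapping sets split into the three atoms {1}, {2, 3}, {4}.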

103 citations


Journal ArticleDOI
TL;DR: In the model of a common random string it is proved that O(k) communication bits are sufficient, regardless of n, and in the model of private random coins O(k + log log n) bits suffice.
Abstract: We study the communication complexity of the disjointness function, in which each of two players holds a k-subset of a universe of size n and the goal is to determine whether the sets are disjoint. In the model of a common random string we prove that O(k) communication bits are sufficient, regardless of n. In the model of private random coins O(k + log log n) bits suffice. Both results are asymptotically tight.
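To make the common-random-string model concrete, here is a toy one-round protocol: both players fingerprint their elements using the shared randomness and exchange the fingerprints, which is O(k) bits for constant-size fingerprints, with one-sided error. This is only an illustration of the model, not the paper's protocol; all names and parameters are ours:

```python
import random

def disjoint(A, B):
    """Reference answer: are the two sets disjoint?"""
    return not (set(A) & set(B))

def fingerprint_protocol(A, B, shared_seed, bits=8):
    """If A and B share an element, that element yields identical
    fingerprints on both sides, so the protocol always answers
    'not disjoint'. If the sets are disjoint, a spurious collision may
    still occur: one-sided error, reducible by repetition."""
    rng = random.Random(shared_seed)          # the common random string
    salt = rng.getrandbits(64)
    fp = lambda x: hash((salt, x)) % (1 << bits)
    return not ({fp(a) for a in A} & {fp(b) for b in B})

# Intersecting inputs are always detected, regardless of the seed.
assert fingerprint_protocol({1, 2, 3}, {3, 8, 9}, shared_seed=0) is False
```

Note the fingerprints are short regardless of the universe size n, which is the point of the O(k) bound.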

98 citations



Journal ArticleDOI
TL;DR: Cases where these combinatorial optimization problems are polynomial are identified, for example when the edges of a given color form a connected subgraph; otherwise, hardness and non-approximability results are given.
Abstract: This article investigates complexity and approximability properties of combinatorial optimization problems yielded by the notion of Shared Risk Resource Group (SRRG). SRRG has been introduced in order to capture network survivability issues where a failure may break a whole set of resources, and has been formalized as colored graphs, where a set of resources is represented by a set of edges with the same color. We consider here the analogues of classical problems such as determining paths or cuts with the minimum number of colors, or color-disjoint paths. These optimization problems are much more difficult than their counterparts in classical graph theory. In particular, standard relationships such as the Max-Flow Min-Cut equality no longer hold. In this article we identify cases where these problems are polynomial, for example when the edges of a given color form a connected subgraph, and otherwise give hardness and non-approximability results for these problems.

92 citations


Journal ArticleDOI
TL;DR: In this article, it was shown that there exist geometric coding trees of preimages of points from B with all branches convergent to points from C. This implies that the Riemann map onto B has radial limits everywhere and that the Julia set of f consists of disjoint curves tending to infinity, homeomorphic to a half-line, composed of points with a given symbolic itinerary and attached to the unique point accessible from B (endpoint of the hair).
Abstract: Let f be an entire transcendental map of finite order, such that all the singularities of f −1 are contained in a compact subset of the immediate basin B of an attracting fixed point. It is proved that there exist geometric coding trees of preimages of points from B with all branches convergent to points from $${\hat {\mathbb C}}$$ . This implies that the Riemann map onto B has radial limits everywhere. Moreover, the Julia set of f consists of disjoint curves (hairs) tending to infinity, homeomorphic to a half-line, composed of points with a given symbolic itinerary and attached to the unique point accessible from B (endpoint of the hair). These facts generalize the corresponding results for exponential maps.

88 citations


Journal ArticleDOI
TL;DR: A new parameter-free graph-morphology-based segmentation algorithm is proposed to address the problem of partitioning a 3D triangular mesh into disjoint submeshes that correspond to the physical parts of the underlying object.
Abstract: A new parameter-free graph-morphology-based segmentation algorithm is proposed to address the problem of partitioning a 3D triangular mesh into disjoint submeshes that correspond to the physical parts of the underlying object. Curvedness, which is a rotation and translation invariant shape descriptor, is computed at every vertex in the input triangulation. Iterative graph dilation and morphological filtering of the outlier curvedness values result in multiple disjoint maximally connected submeshes such that each submesh contains a set of vertices with similar curvedness values, and vertices in disjoint submeshes have significantly different curvedness values. Experimental evaluations using the triangulations of a number of complex objects demonstrate the robustness and the efficiency of the proposed algorithm and the results prove that it compares well with a number of state-of-the-art mesh segmentation algorithms.

Proceedings ArticleDOI
07 Jan 2007
TL;DR: The first polynomial-time approximation scheme (PTAS) for TSPN on arbitrary disjoint fat regions in the plane is presented, based on a novel extension of the m-guillotine method.
Abstract: The Euclidean TSP with neighborhoods (TSPN) problem seeks a shortest tour that visits a given collection of n regions (neighborhoods). We present the first polynomial-time approximation scheme for TSPN for a set of regions given by arbitrary disjoint fat regions in the plane. This improves substantially upon the known approximation algorithms, and is the first PTAS for TSPN on regions of non-comparable sizes. Our result is based on a novel extension of the m-guillotine method. The result applies to regions that are "fat" in a very weak sense: each region Pi contains a disk of radius Ω(diam(Pi)), but is otherwise arbitrary. Further, the result applies even if the regions intersect arbitrarily, provided that there exists a packing of disjoint disks, of radii Ω(diam(Pi)), contained within their respective regions. Finally, the PTAS result applies also to the case in which the regions are sets of points or polygons, each lying within one of a given set of disjoint fat regions.

Patent
29 Mar 2007
TL;DR: In this article, divide phase processing is performed to partition the data set into two or more partitions forming a hierarchy of the objects, and merge phase processing may be performed using the hierarchy to determine one or more disjoint clusters of objects of the dataset.
Abstract: Described are techniques for clustering a data set of objects. Divide phase processing is performed to partition the data set into two or more partitions forming a hierarchy of the objects. Merge phase processing may be performed using the hierarchy to determine one or more disjoint clusters of objects of the data set. Optional preprocessing may be performed to determine weights for one or more features of an object.

Posted Content
TL;DR: It is found that the basin entropy scales with system size only in critical regimes, suggesting that the informationally optimal partition of the state space is achieved when the system is operating at the critical boundary between the ordered and disordered phases.
Abstract: The information processing capacity of a complex dynamical system is reflected in the partitioning of its state space into disjoint basins of attraction, with state trajectories in each basin flowing towards their corresponding attractor. We introduce a novel network parameter, the basin entropy, as a measure of the complexity of information that such a system is capable of storing. By studying ensembles of random Boolean networks, we find that the basin entropy scales with system size only in critical regimes, suggesting that the informationally optimal partition of the state space is achieved when the system is operating at the critical boundary between the ordered and disordered phases.
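The basin entropy is straightforward to compute by brute force for small networks: iterate every state to its attractor, group states into basins, and take the Shannon entropy of the basin-size distribution. A sketch (function names are ours; for real ensembles of random Boolean networks one would sample states rather than enumerate them):

```python
import math
from itertools import product

def basin_entropy(update, n):
    """H = -sum_b p_b log2 p_b over the basins b of a deterministic map
    `update` on the state space {0,1}^n, where p_b = |b| / 2^n."""
    def attractor(s):
        seen = {}
        while s not in seen:
            seen[s] = len(seen)
            s = update(s)
        first = seen[s]                     # index where the cycle starts
        return frozenset(t for t, i in seen.items() if i >= first)

    sizes = {}
    for s in product((0, 1), repeat=n):
        a = attractor(s)                    # basins keyed by attractor
        sizes[a] = sizes.get(a, 0) + 1
    total = 2 ** n
    return -sum(m / total * math.log2(m / total) for m in sizes.values())

# Two-node toy network where each node copies the other's value:
# two fixed points plus one 2-cycle, basin sizes 1, 1, 2 → H = 1.5 bits.
swap = lambda s: (s[1], s[0])
```

A single fixed point attracting everything gives H = 0; more, and more evenly sized, basins give higher entropy, which is the sense in which H measures storable information.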

Journal ArticleDOI
TL;DR: An algebraic method is presented that extends these formulas to the case of multideterminant wave functions and any number of disjoint volumes to compute the probabilities within the atomic domains derived from the space partitioning based on the quantum theory of atoms in molecules.
Abstract: Efficient formulas for computing the probability of finding exactly an integer number of electrons in an arbitrarily chosen volume are only known for single-determinant wave functions [E. Cances et al., Theor. Chem. Acc. 111, 373 (2004)]. In this article, an algebraic method is presented that extends these formulas to the case of multideterminant wave functions and any number of disjoint volumes. The derived expressions are applied to compute the probabilities within the atomic domains derived from the space partitioning based on the quantum theory of atoms in molecules. Results for a series of test molecules are presented, paying particular attention to the effects of electron correlation and of some numerical approximations on the computed probabilities.

Journal ArticleDOI
TL;DR: The introduction of a nearness relation makes it possible to extend Pawlak's model for an approximation space and to consider the extension of generalized approximation spaces.
Abstract: The problem considered in this paper is the extension of an approximation space to include a nearness relation. Approximation spaces were introduced by Zdzisław Pawlak during the early 1980s as frameworks for classifying objects by means of attributes. Pawlak introduced approximations as a means of approximating one set of objects with another set of objects using an indiscernibility relation that is based on a comparison between the feature values of objects. Until now, the focus has been on the overlap between sets. It is possible to introduce a nearness relation that can be used to determine the "nearness" of sets of objects that are possibly disjoint and, yet, qualitatively near to each other. Several members of a family of nearness relations are introduced in this article. The contribution of this article is the introduction of a nearness relation that makes it possible to extend Pawlak's model for an approximation space and to consider the extension of generalized approximation spaces.

Journal ArticleDOI
TL;DR: It is obtained as a corollary that graphs without k disjoint circuits of length l or more have tree-width O(lk^2), thereby sharpening a result of C. Thomassen.
Abstract: We show that, for every l, the family $$\mathcal{F}_{l}$$ of circuits of length at least l satisfies the Erdős–Pósa property, with f(k)=13l(k−1)(k−2)+(2l+3)(k−1), thereby sharpening a result of C. Thomassen. We obtain as a corollary that graphs without k disjoint circuits of length l or more have tree-width O(lk^2).
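For readers unfamiliar with the terminology, the Erdős–Pósa property asserted above can be spelled out as follows (a standard paraphrase, using the paper's function f):

```latex
\text{For every graph } G \text{ and every } k \ge 1: \text{ either } G
\text{ contains } k \text{ vertex-disjoint circuits of length at least } l,
\text{ or there is a set } X \subseteq V(G) \text{ with } |X| \le f(k)
\text{ such that } G - X \text{ has no circuit of length at least } l,
\qquad f(k) = 13l(k-1)(k-2) + (2l+3)(k-1).
\]
The tree-width corollary follows because a graph with no $k$ disjoint long
circuits admits such a hitting set $X$ of size $f(k) = O(lk^2)$.
\[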

Journal ArticleDOI
TL;DR: A new combinatorial construction for q-ary constant-weight codes is introduced which yields several families of optimal codes and asymptotically optimal codes.
Abstract: This paper introduces a new combinatorial construction for q-ary constant-weight codes which yields several families of optimal codes and asymptotically optimal codes. The construction reveals an intimate connection between q-ary constant-weight codes and sets of pairwise disjoint combinatorial designs of various types.

Journal ArticleDOI
TL;DR: The nine-intersection model is extended by capturing metric details for line-line relations through splitting ratios and closeness measures, which are integrated into compact representations of topological relations, thereby addressing topological and metric properties of arbitrarily complex line-line relations.
Abstract: Many real and artificial entities in geographic space, such as transportation networks and trajectories of movement, are typically modelled as lines in geographic information systems. In a similar fashion, people also perceive such objects as lines and communicate about them accordingly as evidence from research on sketching habits suggests. To facilitate new modalities like sketching that rely on the similarity between qualitative representations, oftentimes multi-resolution models are needed to allow comparisons between sketches and database scenes through successively increasing levels of detail. Within such a setting, topology alone is sufficient only for a coarse estimate of the spatial similarity between two scenes, whereas metric refinements may help extract finer details about the relative positioning and geometry between the objects. The nine-intersection is a topological model that distinguishes 33 relations between two lines based on the content invariant (empty-non-empty intersections) among boundaries, interiors, and exteriors of the lines. This paper extends the nine-intersection model by capturing metric details for line-line relations through splitting ratios and closeness measures. Splitting ratios, which apply to the nine-intersection's non-empty values, are normalized values of lengths and areas of intersections. Closeness measures, which apply to the nine-intersection's empty values, are normalized distances between disjoint object parts. Both groups of measures are integrated into compact representations of topological relations, thereby addressing topological and metric properties of arbitrarily complex line-line relations.

Journal ArticleDOI
TL;DR: In this paper, a simple derivation of the entanglement entropy for a region made up of a union of disjoint intervals in 1+1 dimensional quantum field theories using holographic techniques is presented.
Abstract: We present a simple derivation of the entanglement entropy for a region made up of a union of disjoint intervals in 1+1 dimensional quantum field theories using holographic techniques. This generalizes the results for 1+1 dimensional conformal field theories derived previously by exploiting the uniformization map. We further comment on the generalization of our result to higher dimensional field theories.

Journal ArticleDOI
TL;DR: This paper formulates the problem of colored-trees construction as an integer linear program (ILP) and develops the first distributed algorithm to construct the colored trees using only local information; its effectiveness is demonstrated by evaluating it on grid and random topologies and comparing to the optimal obtained by solving the ILP.

Journal ArticleDOI
TL;DR: A maximal number of node-disjoint paths is constructed between every two distinct nodes of the hierarchical hypercube network, which can facilitate VLSI design and fabrication.

Journal ArticleDOI
TL;DR: This paper describes an algorithm, termed the rough-fuzzy c-medoids (RFCMdd) algorithm, to select the most informative bio-bases; it comprises a judicious integration of the principles of rough sets, fuzzy sets, the c-medoids algorithm, and the amino acid mutation matrix.
Abstract: In most pattern recognition algorithms, amino acids cannot be used directly as inputs since they are nonnumerical variables. They, therefore, need encoding prior to input. In this regard, the bio-basis function maps a nonnumerical sequence space to a numerical feature space. It is designed using an amino acid mutation matrix. One of the important issues for the bio-basis function is how to select the minimum set of bio-bases with maximum information. In this paper, we describe an algorithm, termed the rough-fuzzy c-medoids (RFCMdd) algorithm, to select the most informative bio-bases. It comprises a judicious integration of the principles of rough sets, fuzzy sets, the c-medoids algorithm, and the amino acid mutation matrix. While the membership function of fuzzy sets enables efficient handling of overlapping partitions, the concept of lower and upper bounds of rough sets deals with uncertainty, vagueness, and incompleteness in class definition. The concept of a crisp lower bound and fuzzy boundary of a class, introduced in RFCMdd, enables efficient selection of the minimum set of the most informative bio-bases. Some new indices are introduced for evaluating quantitatively the quality of selected bio-bases. The effectiveness of the proposed algorithm, along with a comparison with other algorithms, has been demonstrated on different types of protein data sets.

Book ChapterDOI
25 Jun 2007
TL;DR: A general tool, called orbitopal fixing, is presented for enhancing the capabilities of branch-and-cut algorithms in solving symmetric integer programming models in which a subset of the 0/1-variables encodes a partitioning of a set of objects into disjoint subsets.
Abstract: The topic of this paper is integer programming models in which a subset of 0/1-variables encode a partitioning of a set of objects into disjoint subsets. Such models can be surprisingly hard to solve by branch-and-cut algorithms if the order of the subsets of the partition is irrelevant. This kind of symmetry unnecessarily blows up the branch-and-cut tree. We present a general tool, called orbitopal fixing, for enhancing the capabilities of branch-and-cut algorithms in solving such symmetric integer programming models. We devise a linear time algorithm that, applied at each node of the branch-and-cut tree, removes redundant parts of the tree produced by the above mentioned symmetry. The method relies on certain polyhedra, called orbitopes, which have been investigated in [11]. It does, however, not add inequalities to the model, and thus it does not increase the difficulty of solving the linear programming relaxations. We demonstrate the computational power of orbitopal fixing on the example of a graph partitioning problem motivated by frequency planning in mobile telecommunication networks.

DOI
01 Nov 2007
TL;DR: It is proved that finding an optimal auto-partition is NP-hard; an exact algorithm for finding optimal rectilinear r-partitions, whose running time is polynomial when r is a constant, is proposed, along with a faster 2-approximation algorithm.
Abstract: Spatial data structures form a core ingredient of many geometric algorithms, both in theory and in practice. Many of these data structures, especially the ones used in practice, are based on partitioning the underlying space (examples are binary space partitions and decompositions of polygons) or partitioning the set of objects (examples are bounding-volume hierarchies). The efficiency of such data structures---and, hence, of the algorithms that use them---depends on certain characteristics of the partitioning. For example, the performance of many algorithms that use binary space partitions (BSPs) depends on the size of the BSPs. Similarly, the performance of answering range queries using bounding-volume hierarchies (BVHs) depends on the so-called crossing number that can be associated with the partitioning on which the BVH is based. Much research has been done on the problem of computing partitionings whose characteristics are good in the worst case. In this thesis, we studied the problem from a different point of view, namely instance-optimality. In particular, we considered the following question: given a class of geometric partitioning structures---like BSPs, simplicial partitions, polygon triangulations, …---and a cost function---like size or crossing number---can we design an algorithm that computes a structure whose cost is optimal or close to optimal for any input instance (rather than only worst-case optimal)? We studied the problem of finding optimal data structures for some of the most important spatial data structures. As an example, given a set of n points and an input parameter r, it has been proved that there are input sets for which any simplicial partition has crossing number Ω(√r). It has also been shown that for any set of n input points and the parameter r one can make a simplicial partition with stabbing number O(√r). However, there are input point sets for which one can make a simplicial partition with lower stabbing number.
As an example, when the points are on a diagonal, one can always make a simplicial partition with stabbing number 1. We started our research by studying BSPs for line segments in the plane, where the cost function is the size of the BSPs. A popular type of BSPs for line segments are the so-called auto-partitions. We proved that finding an optimal auto-partition is NP-hard. In fact, finding out if a set of input segments admits an auto-partition without any cuts is already NP-hard. We also studied the relation between two other types of BSPs, called free and restricted BSPs, and showed that the number of cuts of an optimal restricted BSP for a set of segments in R^2 is at most twice the number of cuts of an optimal free BSP for that set. The details are presented in Chapter 1 of the thesis. Then we turned our attention to so-called rectilinear r-partitions for planar point sets, with the crossing number as cost function. A rectilinear r-partition of a point set P is a partitioning of P into r subsets, each having roughly |P|/r points. The crossing number of the partition is defined using the bounding boxes of the subsets; in particular, it is the maximum number of bounding boxes that can be intersected by any horizontal or vertical line. We performed some theoretical as well as experimental studies on rectilinear r-partitions. On the theoretical side, we proved that computing a rectilinear r-partition with optimal stabbing number for a given set of points and parameter r is NP-hard. We also proposed an exact algorithm for finding optimal rectilinear r-partitions whose running time is polynomial when r is a constant, and a faster 2-approximation algorithm. Our last theoretical result showed that considering only partitions whose bounding boxes are disjoint is not sufficient for finding optimal rectilinear r-partitions. On the experimental side, we performed a comparison between four different heuristics for constructing rectilinear r-partitions.
The so-called windmill KD-tree gave the best results. Chapter 2 of the thesis describes all the details of our research on rectilinear r-partitions. We studied another spatial data structure in Chapter 3 of the thesis. Decomposition of the interior of polygons is one of the fundamental problems in computational geometry. In the case of a simple polygon one usually wants to make a Steiner triangulation of it, and when we have a rectilinear polygon at hand, one typically wants to make a rectilinear decomposition of it. For this reason there are algorithms which make Steiner triangulations and rectangular decompositions with low stabbing number. These algorithms are worst-case optimal. However, similar to the two previous data structures, there are polygons for which one can make decompositions with lower stabbing numbers. In Chapter 3 we proposed a 3-approximation for finding an optimal rectangular decomposition of a rectilinear polygon. We also proposed an O(1)-approximation for finding an optimal Steiner triangulation of a simple polygon. Finally, in Chapter 4 of the thesis, we considered another optimization problem, namely how to approximate a piecewise-linear function F: R → R with another piecewise-linear function with fewer pieces. Here one can distinguish two versions of the problem. The first one is called the min-k problem; the goal is then to approximate the function within a given error ε such that the resulting function has the minimum number of links. The second one is called the min-ε problem; here the goal is to find an approximation with at most k links (for a given k) such that the error is minimized. These problems have already been studied before. Our contribution is to consider the problem for so-called uncertain functions, where the value of the input function F at its vertices is given as a discrete set of different values, each with an associated probability. We show how to compute an approximation that minimizes the expected error.
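The crossing-number objective for rectilinear r-partitions is easy to evaluate for a given partition, because the count of stabbed bounding boxes only changes at box edges, so checking lines through those edges suffices. A brute-force sketch (names are ours, not the thesis's):

```python
def crossing_number(partition):
    """Crossing number of a rectilinear r-partition: the maximum number
    of subset bounding boxes intersected by any horizontal or vertical
    line. `partition` is a list of lists of (x, y) points."""
    boxes = []
    for pts in partition:
        xs = [p[0] for p in pts]
        ys = [p[1] for p in pts]
        boxes.append((min(xs), min(ys), max(xs), max(ys)))
    best = 0
    for x0, y0, x1, y1 in boxes:
        for v in (x0, x1):   # candidate vertical line x = v
            best = max(best, sum(bx0 <= v <= bx1 for bx0, _, bx1, _ in boxes))
        for h in (y0, y1):   # candidate horizontal line y = h
            best = max(best, sum(by0 <= h <= by1 for _, by0, _, by1 in boxes))
    return best

# Diagonal points split into two groups: disjoint boxes, crossing number 1.
assert crossing_number([[(0, 0), (1, 1)], [(2, 2), (3, 3)]]) == 1
```

This matches the diagonal example above: points on a diagonal admit a partition with crossing (stabbing) number 1, while overlapping bounding boxes push the number up.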

Journal ArticleDOI
TL;DR: For these maps the Julia set is all of C and the dynamics is described in detail for every point using symbolic dynamics; the strongest possible version (in the plane) of the dimension paradox is obtained.
Abstract: We discuss in detail the dynamics of maps z ↦ ae^z + be^(−z) for which both critical orbits are strictly preperiodic. The points that converge to ∞ under iteration contain a set R consisting of uncountably many curves called rays, each connecting ∞ to a well-defined “landing point” in C, so that every point in C is either on a unique ray or the landing point of several rays. The key features of this article are the following: (1) this is the first example of a transcendental dynamical system, where the Julia set is all of C and the dynamics is described in detail for every point using symbolic dynamics; (2) we get the strongest possible version (in the plane) of the “dimension paradox”: the set R of rays has Hausdorff dimension 1, and each point in C\R is connected to ∞ by one or more disjoint rays in R. As the complement of a 1-dimensional set, C\R of course has Hausdorff dimension 2 and full Lebesgue measure.

Proceedings ArticleDOI
12 Nov 2007
TL;DR: An efficient algorithm to generate all feasible disjoint patterns starting with the set of feasible connected patterns is described, achieving orders of magnitude speedup while generating the identical set of candidate disjoint patterns.
Abstract: Extensible processors allow addition of application-specific custom instructions to the core instruction set architecture. These custom instructions are selected through an analysis of the program's dataflow graphs. The characteristics of certain applications and the modern compiler optimization techniques (e.g., loop unrolling, region formation, etc.) have led to substantially larger dataflow graphs. Hence, it is computationally expensive to automatically select the optimal set of custom instructions. Heuristic techniques are often employed to quickly search the design space. In order to leverage the full potential of custom instructions, our previous work proposed an efficient algorithm for exact enumeration of all possible candidate instructions (or patterns) given the dataflow graphs. But the algorithm was restricted to connected computation patterns. In this paper, we describe an efficient algorithm to generate all feasible disjoint patterns starting with the set of feasible connected patterns. Compared to the state-of-the-art technique, our algorithm achieves orders of magnitude speedup while generating the identical set of candidate disjoint patterns.

Journal ArticleDOI
TL;DR: It is proved that there exist five or six non-isotopic families of such semifields, the families F_i, i = 0,...,5 (F_3 might be empty), according to the different configurations of the associated linear sets of PG(3,q^3).

Proceedings ArticleDOI
TL;DR: This paper details the implementation of a persistent union-find data structure as efficient as its imperative counterpart and is a significant example of a data structure whose side effects are safely hidden behind a persistent interface.
Abstract: The problem of disjoint sets, also known as union-find, consists in maintaining a partition of a finite set within a data structure. This structure provides two operations: a function find returning the class of an element, and a function union merging two classes. An optimal imperative solution has been known since 1975. However, the imperative nature of this data structure may be a drawback when it is used in a backtracking algorithm. This paper details the implementation of a persistent union-find data structure as efficient as its imperative counterpart. To achieve this result, our solution makes heavy use of imperative features, and thus it is a significant example of a data structure whose side effects are safely hidden behind a persistent interface. To strengthen this last claim, we also detail a formalization using the Coq proof assistant which shows both the correctness of our solution and its observational persistence.
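The imperative baseline the paper starts from is the classical union-find with path compression and union by rank. A sketch of that baseline (the paper's actual contribution, a persistent version with a Coq formalization, is not reproduced here):

```python
class UnionFind:
    """Classical imperative union-find: near-constant amortized time
    per operation via union by rank and path compression (halving)."""

    def __init__(self, n):
        self.parent = list(range(n))   # each element starts as its own class
        self.rank = [0] * n

    def find(self, x):
        """Return the representative of x's class, compressing the path."""
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]   # path halving
            x = self.parent[x]
        return x

    def union(self, x, y):
        """Merge the classes containing x and y."""
        rx, ry = self.find(x), self.find(y)
        if rx == ry:
            return
        if self.rank[rx] < self.rank[ry]:
            rx, ry = ry, rx
        self.parent[ry] = rx               # attach the shallower tree
        if self.rank[rx] == self.rank[ry]:
            self.rank[rx] += 1

uf = UnionFind(5)
uf.union(0, 1)
uf.union(1, 2)
assert uf.find(0) == uf.find(2) and uf.find(3) != uf.find(4)
```

Note that even find mutates the structure (path compression rewrites parent pointers), which is precisely why backtracking over this structure is awkward and why hiding the side effects behind a persistent interface is nontrivial.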

Book ChapterDOI
09 Jul 2007
TL;DR: The directed vertex disjoint cycle problem is hard for the parameterized complexity class W[1], and to the best of the authors' knowledge the algorithm given is the first fpt approximation algorithm for a natural W[1]-hard problem.
Abstract: We give an fpt approximation algorithm for the directed vertex disjoint cycle problem. Given a directed graph G with n vertices and a positive integer k, the algorithm constructs a family of at least k/ρ(k) disjoint cycles of G if the graph G has a family of at least k disjoint cycles (and otherwise may still produce a solution, or just report failure). Here ρ is a computable function such that k/ρ(k) is nondecreasing and unbounded. The running time of our algorithm is polynomial. The directed vertex disjoint cycle problem is hard for the parameterized complexity class W[1], and to the best of our knowledge our algorithm is the first fpt approximation algorithm for a natural W[1]-hard problem.