
Showing papers on "Adjacency list" published in 2006


Journal ArticleDOI
TL;DR: This paper presents a new method for visualizing compound graphs based on visually bundling the adjacency edges, i.e., non-hierarchical edges, together and discusses the results based on an informal evaluation provided by potential users of such visualizations.
Abstract: A compound graph is a frequently encountered type of data set. Relations are given between items, and a hierarchy is defined on the items as well. We present a new method for visualizing such compound graphs. Our approach is based on visually bundling the adjacency edges, i.e., non-hierarchical edges, together. We realize this as follows. We assume that the hierarchy is shown via a standard tree visualization method. Next, we bend each adjacency edge, modeled as a B-spline curve, toward the polyline defined by the path via the inclusion edges from one node to another. This hierarchical bundling reduces visual clutter and also visualizes implicit adjacency edges between parent nodes that are the result of explicit adjacency edges between their respective child nodes. Furthermore, hierarchical edge bundling is a generic method which can be used in conjunction with existing tree visualization techniques. We illustrate our technique by providing example visualizations and discuss the results based on an informal evaluation provided by potential users of such visualizations.

1,057 citations
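The bending step above admits a compact sketch. The following is a minimal illustration (assuming numpy; the function name bundle_control_points and the exact interpolation form are ours, not necessarily the paper's scheme): spline control points sampled along the hierarchy path are pulled toward the straight line between the edge's endpoints, with a bundling-strength parameter beta in [0, 1].

import numpy as np

def bundle_control_points(path, beta=0.85):
    """Pull B-spline control points toward the straight start-end line.

    path : (N, 2) array of points along the inclusion-edge polyline
           between the two endpoints of an adjacency edge.
    beta : bundling strength; 1 keeps the full hierarchical detour,
           0 yields a straight adjacency edge.
    """
    path = np.asarray(path, dtype=float)
    t = np.linspace(0.0, 1.0, len(path))[:, None]
    straight = path[0] + t * (path[-1] - path[0])  # straight-line layout
    return beta * path + (1.0 - beta) * straight   # interpolated controls

The returned points can then be fed to any B-spline evaluator to draw the bundled edge.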


Journal ArticleDOI
TL;DR: In this paper, a numerical optimization algorithm was proposed to obtain the maximum synchronizability and fast random walk spreading for a particular type of extremely homogeneous regular networks, with long loops and poor modular structure, called entangled networks.
Abstract: We report on some recent developments in the search for optimal network topologies. First we review some basic concepts of spectral graph theory, including adjacency and Laplacian matrices, paying special attention to the topological implications of having large spectral gaps. We also introduce related concepts such as 'expanders', Ramanujan, and Cage graphs. Afterwards, we discuss two different dynamical features of networks, synchronizability and flow of random walkers, so that they are optimized if the corresponding Laplacian matrix has a large spectral gap. From this, we show, by developing a numerical optimization algorithm, that maximum synchronizability and fast random walk spreading are obtained for a particular type of extremely homogeneous regular networks, with long loops and poor modular structure, that we call entangled networks. These turn out to be related to Ramanujan and Cage graphs. We argue also that these graphs are very good finite-size approximations to Bethe lattices, and provide optimal or almost optimal solutions to many other problems, for instance searchability in the presence of congestion or performance of neural networks. We then study how these results are modified for dynamical processes controlled by a normalized (weighted and directed) dynamics; much more heterogeneous graphs are optimal in this case. Finally, a critical discussion of the limitations and possible extensions of this work is presented.

155 citations
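The abstract's central quantity, the spectral gap of the Laplacian, is easy to compute for a given adjacency matrix. A minimal sketch (assuming numpy and a connected undirected graph):

import numpy as np

def laplacian_spectral_gap(A):
    """Spectral gap of the graph Laplacian L = D - A. For a connected
    graph the smallest eigenvalue is 0, so the gap is the second-
    smallest eigenvalue (the algebraic connectivity), the quantity the
    paper links to synchronizability and random-walk spreading."""
    A = np.asarray(A, dtype=float)
    L = np.diag(A.sum(axis=1)) - A
    eigvals = np.linalg.eigvalsh(L)  # ascending order
    return eigvals[1] - eigvals[0]

# Example: the 4-cycle has Laplacian spectrum {0, 2, 2, 4}, so gap = 2.
C4 = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]])
print(laplacian_spectral_gap(C4))  # 2.0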


Journal ArticleDOI
TL;DR: Almost tight lower and upper bounds for the bounded error quantum query complexity of Connectivity, Strong Connectivity, Minimum Spanning Tree, and Single Source Shortest Paths are given.
Abstract: Quantum algorithms for graph problems are considered, both in the adjacency matrix model and in an adjacency list-like array model. We give almost tight lower and upper bounds for the bounded error quantum query complexity of Connectivity, Strong Connectivity, Minimum Spanning Tree, and Single Source Shortest Paths. For example, we show that the query complexity of Minimum Spanning Tree is in $\Theta(n^{3/2})$ in the matrix model and in $\Theta(\sqrt{nm})$ in the array model, while the complexity of Connectivity is also in $\Theta(n^{3/2})$ in the matrix model but in $\Theta(n)$ in the array model. The upper bounds utilize search procedures for finding minima of functions under various conditions.

143 citations


Proceedings ArticleDOI
15 May 2006
TL;DR: An improved algorithm for the multi-robot complete coverage problem builds on a single-robot coverage algorithm, Boustrophedon decomposition, and experiments demonstrate the viability of employing the algorithm to perform distributed coverage of a given unknown area with multiple robots.
Abstract: In this paper, we propose an improved algorithm for the multi-robot complete coverage problem. Real-world applications such as lawn mowing, chemical spill clean-up, and humanitarian de-mining can be automated by the employment of a team of autonomous mobile robots. Our approach builds on a single-robot coverage algorithm, Boustrophedon decomposition. The robots are initially distributed through space and each robot is allocated a virtually bounded area to cover. The area is decomposed into cells where each cell width is fixed. The decomposed area is represented using an adjacency graph, which is incrementally constructed and shared among all the robots. Communication between the robots is available without any restrictions. Experiments on both simulated and physical hardware demonstrated the viability of employing the algorithm to perform distributed coverage of a given unknown area with multiple robots.

104 citations
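The shared adjacency graph of fixed-width cells can be sketched as follows (a toy illustration under our own assumptions, with grid-indexed cells in a plain dictionary, not the paper's actual data structure):

adjacency = {}  # cell id -> set of neighbouring cell ids, shared by all robots

def add_cell(cell, discovered_neighbours):
    """Incrementally grow the adjacency graph as a robot finishes
    decomposing a newly explored cell; only cells already known to the
    team are linked."""
    adjacency.setdefault(cell, set())
    for nb in discovered_neighbours:
        if nb in adjacency:
            adjacency[cell].add(nb)
            adjacency[nb].add(cell)

add_cell((0, 0), [])
add_cell((1, 0), [(0, 0)])  # a second robot reports a cell adjacent to the first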


Proceedings ArticleDOI
23 Apr 2006
TL;DR: This scheme uses no geographic information, makes few assumptions on the network model, and achieves better load balancing and structured data processing and aggregation even for sensor fields with complex geometric shapes and non-trivial topology.
Abstract: For a wide variety of sensor network environments, location information is unavailable or expensive to obtain. We propose a location-free, lightweight, distributed, and data-centric storage/retrieval scheme for information producers and information consumers in sensor networks. Our scheme is built upon the Gradient Landmark-Based Distributed Routing protocol (GLIDER) [8], a two-level routing scheme where sensor nodes are partitioned into tiles by their graph distances to a small set of local landmarks so that localized and efficient routing can be achieved inside and across tiles. Our information storage and retrieval scheme uses two ideas on top of the GLIDER hierarchy — a distributed hash table on the combinatorial tile adjacency graph and a double-ruling scheme within each tile. Queries follow a path that will provably reach the data replicated by the producer(s). We show that this scheme compares favorably with previously proposed schemes, such as Geographic Hash Tables (GHT), providing comparable data storage performance and better locality-aware data retrieval performance. More importantly, this scheme uses no geographic information, makes few assumptions on the network model, and achieves better load balancing and structured data processing and aggregation even for sensor fields with complex geometric shapes and non-trivial topology.

100 citations


Journal ArticleDOI
TL;DR: This paper shows how a Scale–Space technique can extract features that are invariant with respect to the global structure of the model as well as to the small perturbations introduced by the 3D laser scanning process, and introduces a new distance function defined on triangles instead of points.
Abstract: A primary shortcoming of existing techniques for three-dimensional (3D) model matching is the reliance on global information about the model’s structure. Models are matched in their entirety, depending on overall topology and geometry information. A currently open challenge is how to perform partial matching. Partial matching is important for finding similarities across part models with different global shape properties and for the segmentation and matching of data acquired from 3D scanners. This paper presents a Scale–Space feature extraction technique based on recursive decomposition of polyhedral surfaces into surface patches. The experimental results presented in this paper suggest that this technique can potentially be used to perform matching based on local model structure. In our previous work, Scale–Space decomposition has been used successfully to extract features from mechanical artifacts. Scale–Space techniques can be parameterized to generate decompositions that correspond to manufacturing, assembly or surface features relevant to mechanical design. One application of this technique is to support matching and content-based retrieval of solid models. This paper shows how a Scale–Space technique can extract features that are invariant with respect to the global structure of the model as well as to the small perturbations that the 3D laser scanning process introduces. In order to accomplish this, we introduce a new distance function defined on triangles instead of points. We believe this technique offers a new way to control the feature decomposition process, which results in the extraction of features that are more meaningful from an engineering viewpoint. The new technique is computationally practical for use in indexing large models. Examples are provided that demonstrate effective feature extraction on 3D laser scanned models. In addition, a simple sub-graph isomorphism algorithm was used to show that the feature adjacency graphs, obtained through feature extraction, are meaningful descriptors of 3D CAD objects. All of the data used in the experiments for this work is freely available at: http://www.designrepository.org/datasets/.

89 citations
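The abstract does not spell out its triangle-level distance, so the following sketch is purely illustrative of the general shape such a function can take (a geometric term plus an orientation term; the weight w_normal is a made-up parameter, and this is not the paper's actual definition):

import numpy as np

def triangle_distance(t1, t2, w_normal=0.5):
    """Illustrative distance between two triangles, each a 3x3 array of
    vertices: centroid separation plus the angle between the (unsigned)
    face normals."""
    t1, t2 = np.asarray(t1, float), np.asarray(t2, float)
    c1, c2 = t1.mean(axis=0), t2.mean(axis=0)
    n1 = np.cross(t1[1] - t1[0], t1[2] - t1[0])
    n2 = np.cross(t2[1] - t2[0], t2[2] - t2[0])
    n1, n2 = n1 / np.linalg.norm(n1), n2 / np.linalg.norm(n2)
    angle = np.arccos(np.clip(abs(n1 @ n2), 0.0, 1.0))
    return np.linalg.norm(c1 - c2) + w_normal * angle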


Proceedings Article
01 Jan 2006
TL;DR: Properties of point sets are investigated to derive criteria for automatic hole detection; a final boundary loop extraction step uses the integrated boundary probability and exploits additional coherence properties of the boundary to yield a robust and automatic hole detection algorithm.
Abstract: Models of non-trivial objects resulting from a 3d data acquisition process (e.g. Laser Range Scanning) often contain holes due to occlusion, reflectance or transparency. As point set surfaces are unstructured surface representations with no adjacency or connectivity information, defining and detecting holes is a non-trivial task. In this paper we investigate properties of point sets to derive criteria for automatic hole detection. For each point, we combine several criteria into an integrated boundary probability. A final boundary loop extraction step uses this probability and exploits additional coherence properties of the boundary to derive a robust and automatic hole detection algorithm.

73 citations
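One widely used boundary criterion of the kind the abstract combines is the angular-gap test: in the local tangent plane, interior points are surrounded by neighbours, while boundary points leave a wide empty sector. A minimal numpy sketch of that single criterion (the paper integrates several criteria; this is not its exact formulation):

import numpy as np

def boundary_probability(points, k=12):
    """Per-point boundary score in (0, 1]: the largest angular gap
    between the k nearest neighbours, measured in the PCA tangent
    plane and normalized by 2*pi."""
    pts = np.asarray(points, float)
    scores = np.empty(len(pts))
    for i, p in enumerate(pts):
        d = np.linalg.norm(pts - p, axis=1)
        nb = pts[np.argsort(d)[1:k + 1]] - p        # k nearest neighbours
        _, _, vt = np.linalg.svd(nb - nb.mean(axis=0))
        x, y = nb @ vt[0], nb @ vt[1]               # tangent-plane coords
        ang = np.sort(np.arctan2(y, x))
        gaps = np.diff(np.concatenate([ang, [ang[0] + 2 * np.pi]]))
        scores[i] = gaps.max() / (2 * np.pi)
    return scores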


Journal ArticleDOI
TL;DR: A few algorithms for obtaining good quality hierarchical graph decompositions are described and the parallel implementation of the factorization procedure is discussed.
Abstract: PHIDAL (parallel hierarchical interface decomposition algorithm) is a parallel incomplete factorization method which exploits a hierarchical interface decomposition of the adjacency graph of the coefficient matrix. The idea of the decomposition is similar to that of the well-known wirebasket techniques used in domain decomposition. However, the method is devised for general, irregularly structured, sparse linear systems. This paper describes a few algorithms for obtaining good quality hierarchical graph decompositions and discusses the parallel implementation of the factorization procedure. Numerical experiments are reported to illustrate the scalability of the algorithm and its effectiveness as a general purpose parallel linear system solver.

65 citations


Journal ArticleDOI
TL;DR: In this paper, the adjacency and Laplacian matrices of four graph products are diagonalized and efficient methods are obtained for calculating their eigenvalues and eigenvectors.
Abstract: Eigenvalues and eigenvectors of graphs have many applications in structural mechanics and combinatorial optimization. For a regular space structure, the visualization of its graph model as the product of two simple graphs results in a substantial simplification in the solution of the corresponding eigenproblems. In this paper, the adjacency and Laplacian matrices of four graph products, namely, Cartesian, strong Cartesian, direct and lexicographic products are diagonalized and efficient methods are obtained for calculating their eigenvalues and eigenvectors. An exceptionally efficient method is developed for the eigensolution of the Laplacian matrices of strong Cartesian and direct products. Special attention is paid to the lexicographic product, which has not been studied as extensively as the other three graph products. Examples are provided to illustrate some applications of the methods in structural mechanics. Copyright © 2006 John Wiley & Sons, Ltd.

64 citations
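For the Cartesian product the diagonalization rests on a classical identity that is easy to check numerically: A(G □ H) = A(G) ⊗ I + I ⊗ A(H), so the product's adjacency eigenvalues are all sums λ_i + μ_j of the factor eigenvalues. A short numpy verification (our own illustration of the identity the paper exploits):

import numpy as np

def cartesian_product_adjacency(A, B):
    """Adjacency matrix of the Cartesian graph product via the
    Kronecker-sum identity."""
    n, m = len(A), len(B)
    return np.kron(A, np.eye(m)) + np.kron(np.eye(n), B)

P2 = np.array([[0, 1], [1, 0]])                     # single edge on 2 vertices
C3 = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]])    # triangle
prod = np.linalg.eigvalsh(cartesian_product_adjacency(P2, C3))
sums = np.sort([a + b for a in np.linalg.eigvalsh(P2)
                      for b in np.linalg.eigvalsh(C3)])
assert np.allclose(prod, sums)  # eigenvalues are exactly the pairwise sums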


Proceedings Article
04 Dec 2006
TL;DR: A notion of weighted boundary volume is introduced, which measures the length of the class/cluster boundary weighted by the density of the underlying probability distribution, and it is shown that sizes of the cuts of certain commonly used data adjacency graphs converge to this continuous weighted volume of the boundary.
Abstract: One of the intuitions underlying many graph-based methods for clustering and semi-supervised learning, is that class or cluster boundaries pass through areas of low probability density. In this paper we provide some formal analysis of that notion for a probability distribution. We introduce a notion of weighted boundary volume, which measures the length of the class/cluster boundary weighted by the density of the underlying probability distribution. We show that sizes of the cuts of certain commonly used data adjacency graphs converge to this continuous weighted volume of the boundary.

60 citations
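The discrete quantity whose limit the paper studies is simple to compute. A sketch of the cut size of a directed k-NN adjacency graph under a binary labelling (assuming numpy; the rescaling constants needed for the convergence statement are omitted):

import numpy as np

def knn_cut_size(points, labels, k=8):
    """Number of directed k-NN edges (i -> j) whose endpoints carry
    different labels; this is the graph-cut quantity whose rescaled
    value converges to the density-weighted boundary volume."""
    pts = np.asarray(points, float)
    cut = 0
    for i, p in enumerate(pts):
        d = np.linalg.norm(pts - p, axis=1)
        neighbours = np.argsort(d)[1:k + 1]   # skip the point itself
        cut += sum(labels[i] != labels[j] for j in neighbours)
    return cut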


Journal ArticleDOI
TL;DR: The authors repeated similar field measurements and found that the adjacency effect usually has a negligible influence at short distances, decreasing with wavelength in agreement with theory, but can have a small influence in high-reflectance contrast environments.
Abstract: It is well known that the adjacency effect has to be taken into account during the retrieval of surface reflectance from high spatial resolution satellite imagery. The effect results from atmospheric scattering, depends on the reflectance contrast between a target pixel and its large-scale neighborhood, and decreases with wavelength. Recently, ground reflectance field measurements were published, claiming a substantial influence of the adjacency effect at short distance measurements (< 2 m), and an increase of the effect with wavelength. The authors repeated similar field measurements and found that the adjacency effect usually has a negligible influence at short distances, decreasing with wavelength in agreement with theory, but can have a small influence in high-reflectance contrast environments. Radiative transfer calculations were performed to quantify the influence at short and long distances for cases of practical interest (vegetation and soil in a low-reflectance background). For situations with large reflectance contrasts, the atmospheric backscatter component of the adjacency effect can influence ground measurements over small-area targets, and should therefore be taken into account. However, it is not possible to draw a general conclusion, since some of the considered surfaces are known for exhibiting strong directional effects.

Journal Article
TL;DR: B-matching, as discussed by the authors, is a generalization of traditional maximum weight matching that is solvable in polynomial time and produces a binary matrix with rows and columns summing to a positive integer b.
Abstract: We propose preprocessing spectral clustering with b-matching to remove spurious edges in the adjacency graph prior to clustering. B-matching is a generalization of traditional maximum weight matching and is solvable in polynomial time. Instead of a permutation matrix, it produces a binary matrix with rows and columns summing to a positive integer b. The b-matching procedure prunes graph edges such that the in-degree and out-degree of each node is b, producing a more balanced variant of k-nearest-neighbor. The combinatorial algorithm optimally solves for the maximum weight subgraph and makes subsequent spectral clustering more stable and accurate. Experiments on standard datasets, visualizations, and video data support the use of b-matching to prune graphs prior to spectral clustering.
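The paper solves b-matching exactly with a combinatorial algorithm; the degree-balancing idea, though, can be conveyed by a simple greedy approximation (our sketch, not the paper's method):

import numpy as np

def greedy_b_matching(W, b):
    """Greedy approximation to maximum-weight b-matching on a symmetric
    weight matrix W: keep the heaviest edges first, never letting any
    node exceed degree b. Returns a symmetric 0/1 matrix with row and
    column sums at most b (the exact solver achieves exactly b)."""
    n = len(W)
    P = np.zeros((n, n), dtype=int)
    degree = np.zeros(n, dtype=int)
    edges = sorted(((W[i, j], i, j) for i in range(n)
                    for j in range(i + 1, n)), reverse=True)
    for w, i, j in edges:
        if degree[i] < b and degree[j] < b:
            P[i, j] = P[j, i] = 1
            degree[i] += 1
            degree[j] += 1
    return P

The pruned graph P then replaces the k-NN affinity graph fed to spectral clustering.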

Journal ArticleDOI
TL;DR: The performance of simulated annealing was essentially independent of the starting point, giving it an important advantage over random hill climbing, while the genetic algorithm was not well suited to the strict adjacency problem.

Journal IssueDOI
01 Sep 2006
TL;DR: In this article, it was shown that the proper homogeneous pair decomposition is in fact unnecessary, as a consequence of a general decomposition theorem that extends the decomposition theorem for Berge graphs to Berge trigraphs.
Abstract: A graph is Berge if no induced subgraph of it is an odd cycle of length at least five or the complement of one. In joint work with Robertson, Seymour, and Thomas we recently proved the Strong Perfect Graph Theorem, which was a conjecture about the chromatic number of Berge graphs. The proof consisted of showing that every Berge graph either belongs to one of a few basic classes, or admits one of a few kinds of decompositions. We used three kinds of decompositions: skew-partitions, 2-joins, and proper homogeneous pairs. At that time we were not sure whether all three decompositions were necessary. In this article we show that the proper homogeneous pair decomposition is in fact unnecessary. This is a consequence of a general decomposition theorem for “Berge trigraphs.” A trigraph T is a generalization of a graph, where the adjacency of some vertex pairs is “undecided.” A trigraph is Berge if, however we decide the undecided pairs, the resulting graph is Berge. We show that the decomposition result of [2] for Berge graphs extends (with slight modifications) to Berge trigraphs; that is, for a Berge trigraph T, either T belongs to one of a few basic classes or T admits one of a few decompositions. Moreover, the decompositions are such that, however we decide the undecided pairs of T, the resulting graph admits the same decomposition. This last property is crucial for the application. The full proof of this result is over 200 pages long and was the author's PhD thesis. In this article we present the parts that differ significantly from the proof of the decomposition theorem for Berge graphs, and only in the case needed for the application. © 2006 Wiley Periodicals, Inc. J Graph Theory 53: 1–55, 2006. This research was partially conducted during the period the author served as a Clay Mathematics Institute Research Fellow.

Book ChapterDOI
25 Oct 2006
TL;DR: A method for efficient simultaneous calculation of the intrinsic volumes of sets observed in binary images is developed, and the concepts of discretization w.r.t. an adjacency system and complementarity of adjacency systems are introduced.
Abstract: The intrinsic volumes – in 3d up to constants volume, surface area, integral of mean curvature, and Euler number – are a very useful set of geometric characteristics. Combining integral and digital geometry, we develop a method for efficient simultaneous calculation of the intrinsic volumes of sets observed in binary images. In order to achieve consistency in the derived intrinsic volumes for both foreground and background, suitable pairs of discrete connectivities have to be used. To make this rigorous, the concepts of discretization w.r.t. an adjacency system and complementarity of adjacency systems are introduced.
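In 2D the analogous computation is short enough to show in full: with 4-adjacency for the foreground, the Euler number is χ = V − E + F, counting foreground pixels, 4-adjacent foreground pairs, and fully set 2×2 blocks. A numpy sketch of this 2D analogue (the paper itself treats 3D and the complementarity of adjacency systems):

import numpy as np

def euler_number_2d(img):
    """Euler number of a binary image under 4-adjacency for the
    foreground: vertices minus edges plus faces of the associated
    cubical complex."""
    img = np.asarray(img, dtype=bool)
    V = img.sum()
    E = (img[:, :-1] & img[:, 1:]).sum() + (img[:-1, :] & img[1:, :]).sum()
    F = (img[:-1, :-1] & img[:-1, 1:] & img[1:, :-1] & img[1:, 1:]).sum()
    return int(V) - int(E) + int(F)

print(euler_number_2d(np.ones((3, 3), dtype=bool)))  # filled square: 1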

Journal ArticleDOI
TL;DR: The proposed scheme generates heterogeneous object models with higher data consistencies and lower redundancies; naturally avoids unnecessary/repetitive computations and thus improves computation efficiencies; and represents versatile material variations/distributions using different heterogeneous feature tree (HFT) structures.
Abstract: This paper presents a new approach to model complex heterogeneous objects with simultaneous geometry intricacies as well as complex material distributions. Different from most of the existing approaches, which utilize manifold B-Rep and the assembly representations, the proposed scheme takes advantage of non-manifold cellular representations to model the geometries of the heterogeneous objects. With the aid of the cell adjacency information and attribute-based reasoning, complex, smooth and versatile material distributions can be defined upon the intricate geometries. Compared with other similar approaches, the proposed scheme (1) generates heterogeneous object models with higher data consistencies and lower redundancies; (2) naturally avoids unnecessary/repetitive computations and thus improves computation efficiency; (3) represents versatile material variations/distributions using different heterogeneous feature tree (HFT) structures. The detailed representation, associated algorithms and a prototype software package are presented. Example heterogeneous objects modeled with the proposed approach are provided.

Posted Content
TL;DR: A new index is introduced that satisfies two main properties: it can be applied to both binary and weighted graphs, and once suitably standardized, it distributes as a standard normal over all possible adjacency/weights matrices.
Abstract: This paper proposes a simple procedure to decide whether the empirically observed adjacency or weights matrix, which characterizes the graph underlying a socio-economic network, is sufficiently symmetric (respectively, asymmetric) to justify an undirected (respectively, directed) network analysis. We introduce a new index that satisfies two main properties. First, it can be applied to both binary and weighted graphs. Second, once suitably standardized, it distributes as a standard normal over all possible adjacency/weights matrices. To test the index in practice, we present an application that employs a set of well-known empirically observed social and economic networks.
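The abstract does not give the index's formula, so the sketch below only illustrates the kind of quantity involved (a normalized skew-symmetry measure of our own choosing, without the paper's standardization to a standard normal):

import numpy as np

def asymmetry_index(W):
    """Illustrative asymmetry measure for a weights matrix: the share
    of off-diagonal 'mass' carried by the skew-symmetric part. Equals 0
    for symmetric W and 1 for fully skew-symmetric W."""
    W = np.asarray(W, dtype=float)
    off = ~np.eye(len(W), dtype=bool)
    skew = ((W - W.T)[off] ** 2).sum()
    total = skew + ((W + W.T)[off] ** 2).sum()
    return float(np.sqrt(skew / total)) if total else 0.0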

Proceedings ArticleDOI
24 Jan 2006
TL;DR: A technique that locates the task next to the borders of the free area for as many cycles as possible, trying to minimize the area fragmentation is proposed and combined with a look-ahead heuristic that allows delaying the scheduling of a task to the next event, increasing the solution search space.
Abstract: To get efficient HW management in 2D reconfigurable systems, heuristics are needed to select the best place to locate each arriving task. We propose a technique that locates the task next to the borders of the free area for as many cycles as possible, trying to minimize the area fragmentation. Moreover, we combine it with a look-ahead heuristic that allows delaying the scheduling of a task to the next event, increasing the solution search space.

Book ChapterDOI
Marco Terzer1, Jörg Stelling1
11 Sep 2006
TL;DR: This work introduces new concepts for the enumeration of adjacent rays, one of the critical and stubborn facets of the algorithms, as a step towards EFM analysis at the whole-cell level.
Abstract: Elementary flux modes (EFMs)—formalized metabolic pathways—are central and comprehensive tools for metabolic network analysis under steady state conditions. They act as a generating basis for all possible flux distributions and, thus, are a minimal (constructive) description of the solution space. Algorithms to compute EFMs descend from computational geometry; they are mostly synonymous to the enumeration of extreme rays of polyhedral cones. This problem is combinatorially complex, and algorithms do not scale well. Here, we introduce new concepts for the enumeration of adjacent rays, which is one of the critical and stubborn facets of the algorithms. They rely on variants of k-d-trees to store and analyze bit sets representing (intermediary) extreme rays. Bit set trees allow for speed-up of computations primarily for low-dimensional problems. Extensions to pattern trees to narrow candidate pairs for adjacency tests scale with problem size, yielding speed-ups on the order of one magnitude relative to current algorithms. Additionally, fast algebraic tests can easily be used in the framework. This constitutes one step towards EFM analysis at the whole-cell level.
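The adjacency test that the trees accelerate is the standard combinatorial criterion from the double description method: rays r and s are adjacent iff no third extreme ray's zero set contains Z(r) ∩ Z(s). With zero sets stored as bit sets, the naive version is a few lines (our sketch of the bare test, without the paper's bit set tree / pattern tree speed-ups):

def combinatorially_adjacent(zr, zs, all_zero_sets):
    """Naive combinatorial adjacency test for extreme rays. Zero sets
    are Python ints used as bit sets (bit i set <=> constraint i is
    tight on the ray)."""
    common = zr & zs
    for zt in all_zero_sets:
        if zt == zr or zt == zs:
            continue
        if zt & common == common:  # Z(t) is a superset of Z(r) & Z(s)
            return False
    return True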

Book ChapterDOI
TL;DR: This paper defines the linear complexity of a graph to be the linear complexity of any one of its associated adjacency matrices, and computes or gives upper bounds for the linear complexity of several classes of graphs.
Abstract: The linear complexity of a matrix is a measure of the number of additions, subtractions, and scalar multiplications required to multiply that matrix and an arbitrary vector. In this paper, we define the linear complexity of a graph to be the linear complexity of any one of its associated adjacency matrices. We then compute or give upper bounds for the linear complexity of several classes of graphs.

Journal ArticleDOI
TL;DR: This paper shows how to construct a linear deformable model for graph structure by performing principal components analysis (PCA) on the vectorised adjacency matrix, and illustrates the utility of the resulting method for shape analysis.
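A minimal sketch of that construction (assuming numpy, graphs of a common size with a consistent vertex ordering, and our own function name):

import numpy as np

def adjacency_pca(adj_matrices, n_components=2):
    """Linear deformable model over graphs: vectorise each adjacency
    matrix, centre the sample, and take the leading principal
    components as deformation modes."""
    X = np.stack([np.asarray(A, float).ravel() for A in adj_matrices])
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    modes = vt[:n_components]        # principal deformation modes
    coords = (X - mean) @ modes.T    # shape-space coordinates per graph
    return mean, modes, coords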

Proceedings ArticleDOI
14 Aug 2006
TL;DR: This paper develops a fully automated framework for generation of robot plans from robot abstract task specifications given in terms of linear temporal logic (LTL) formulas over regions of interest.
Abstract: We approach the general problem of planning and controlling groups of robots from logical and temporal specifications over regions of interest in 2D or 3D environments. The focus of this paper is on planning, and, enabled by our previous results, we assume that the environment is partitioned and described in the form of a graph whose nodes label the partition regions and whose edges capture adjacency relations among these regions. We also assume that the robots can synchronize when passing from one region to another. We develop a fully automated framework for generation of robot plans from robot abstract task specifications given in terms of linear temporal logic (LTL) formulas over regions of interest. Inter-robot collision avoidance is guaranteed, and the assignment of plans to specific robots is automatic. The main tools underlying our framework are model checking and bisimilarity equivalence relations.

Journal ArticleDOI
TL;DR: The Grassmann space of k-subspaces of a polar space is defined and its geometry is examined in this article; in particular, its cliques, subspaces and automorphisms are characterized.
Abstract: The Grassmann space of k-subspaces of a polar space is defined and its geometry is examined. In particular, its cliques, subspaces and automorphisms are characterized. An analogue of Chow's theorem for the Grassmann space of k-subspaces of a polar space is proved.

Posted Content
TL;DR: In this paper, it was shown that $\widehat{y}$ is a rational number, and that a pair of $\widehat{y}$-cospectral graphs exists for every rational $\widehat{y}$.
Abstract: Let J be the all-ones matrix, and let A denote the adjacency matrix of a graph. An old result of Johnson and Newman states that if two graphs are cospectral with respect to $yJ - A$ for two distinct values of y, then they are cospectral for all y. Here we will focus on graphs cospectral with respect to $yJ - A$ for exactly one value $\widehat{y}$ of y. We call such graphs $\widehat{y}$-cospectral. It follows that $\widehat{y}$ is a rational number, and we prove existence of a pair of $\widehat{y}$-cospectral graphs for every rational $\widehat{y}$. In addition, we generate by computer all $\widehat{y}$-cospectral pairs on at most nine vertices. Recently, Chesnokov and the second author constructed pairs of $\widehat{y}$-cospectral graphs for all rational $\widehat{y} \in (0,1)$, where one graph is regular and the other one is not. This phenomenon is only possible for the mentioned values of $\widehat{y}$, and by computer we find all such pairs of $\widehat{y}$-cospectral graphs on at most eleven vertices.
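Checking whether a given pair of graphs is cospectral with respect to $yJ - A$ at a particular y is straightforward; a numpy sketch:

import numpy as np

def cospectral_wrt(y, A1, A2, tol=1e-9):
    """True if y*J - A1 and y*J - A2 have the same spectrum."""
    A1, A2 = np.asarray(A1, float), np.asarray(A2, float)
    J = np.ones_like(A1)
    s1 = np.linalg.eigvalsh(y * J - A1)  # ascending eigenvalues
    s2 = np.linalg.eigvalsh(y * J - A2)
    return bool(np.allclose(s1, s2, atol=tol))

By the Johnson-Newman result quoted above, agreement at two distinct values of y already forces agreement at every y, so scanning a handful of rationals suffices to separate $\widehat{y}$-cospectral pairs from fully cospectral ones.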

Journal Article
TL;DR: A relational algorithm for computing a feedback vertex set of minimum size is developed; in combination with a BDD implementation of relations, it allows this NP-hard problem to be solved exactly for medium-sized graphs.
Abstract: A feedback vertex set of a graph is a subset of vertices containing at least one vertex from every cycle of the graph. Given a directed graph by its adjacency relation, we develop a relational algorithm for computing a feedback vertex set of minimum size. In combination with a BDD implementation of relations, it allows this NP-hard problem to be solved exactly for medium-sized graphs.
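For intuition, the exact problem can be stated in a few lines of brute force (exponential, and in no way the paper's relational/BDD algorithm, whose point is to do better):

from itertools import combinations

def has_cycle(adj, removed):
    """DFS check: does the digraph (dict vertex -> successor list)
    minus the removed vertices still contain a directed cycle?"""
    WHITE, GREY, BLACK = 0, 1, 2
    colour = {v: WHITE for v in adj if v not in removed}

    def visit(v):
        colour[v] = GREY
        for w in adj[v]:
            if w in removed:
                continue
            if colour[w] == GREY or (colour[w] == WHITE and visit(w)):
                return True
        colour[v] = BLACK
        return False

    return any(colour[v] == WHITE and visit(v) for v in list(colour))

def minimum_fvs(adj):
    """Smallest vertex set meeting every directed cycle, by exhaustive
    search over subsets of increasing size."""
    vertices = list(adj)
    for k in range(len(vertices) + 1):
        for cand in combinations(vertices, k):
            if not has_cycle(adj, set(cand)):
                return set(cand)

print(minimum_fvs({1: [2], 2: [3], 3: [1, 4], 4: []}))  # e.g. {1}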

Proceedings ArticleDOI
01 Oct 2006
TL;DR: Experimental results on real SAR images show that the proposed method can reduce the computational complexity greatly and provide precise segmentation results.
Abstract: A fast approach to obtain segmentation of SAR images is suggested here based on local statistical characteristics, using a Markov Random Field (MRF) model on a region adjacency graph (RAG). First, an initially over-segmented image derived from the watershed segmentation algorithm, as well as the original SAR image, is taken as the input of the proposed method. Secondly, an MRF is defined on the RAG of the initial over-segmented regions, with a novel multilevel logistic (MLL) model for the region class labels and a Gamma distribution for the marginal distribution of each class in the SAR images. The criterion used for getting the optimal segmentation is the maximization of the posterior marginal (MPM), which minimizes the expected number of misclassified regions in the over-segmented image. In the implementation, the expectation maximization (EM) algorithm is used to estimate the parameters of the Gamma distribution, and the parameters of the MLL model are derived from the RAG. Experimental results on real SAR images show that the proposed method can reduce the computational complexity greatly and provide precise segmentation results.
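The RAG construction underneath the model is simple; a self-contained sketch (assuming numpy and a label image from any over-segmentation such as watershed; the MLL/Gamma modelling and MPM optimization are not shown):

import numpy as np

def region_adjacency_graph(labels):
    """Nodes are region labels of an over-segmented image; edges join
    regions that share a pixel border under 4-adjacency."""
    labels = np.asarray(labels)
    adj = {int(l): set() for l in np.unique(labels)}
    for a, b in ((labels[:, :-1], labels[:, 1:]),    # horizontal pairs
                 (labels[:-1, :], labels[1:, :])):   # vertical pairs
        mask = a != b
        for u, v in zip(a[mask], b[mask]):
            adj[int(u)].add(int(v))
            adj[int(v)].add(int(u))
    return adj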

Book ChapterDOI
25 Oct 2006
TL;DR: A general definition of discrete hyperspheres is given, and the k-minimal ones are characterized through an arithmetic definition based on a non-constant thickness function that links adjacency and separatingness with norms.
Abstract: In the framework of the arithmetic discrete geometry, a discrete object is provided with its own analytical definition corresponding to a discretization scheme. It can thus be considered as the equivalent, in a discrete space, of a Euclidean object. Linear objects, namely lines and hyperplanes, have been widely studied under this assumption and are now deeply understood. This is not the case for discrete circles and hyperspheres, for which no satisfactory definition exists. In the present paper, we try to fill this gap. Our main results are a general definition of discrete hyperspheres and the characterization of the k-minimal ones thanks to an arithmetic definition based on a non-constant thickness function. To reach such topological properties, we link adjacency and separatingness with norms.

Proceedings ArticleDOI
20 Aug 2006
TL;DR: An innovative approach to automatically generate adjacency grammars describing graphical symbols by inferring the grammar productions consisting of the rule set most likely to occur from a set of symbol instances sketched by a user using a digital pen.
Abstract: In this paper we present an innovative approach to automatically generate adjacency grammars describing graphical symbols. A grammar production is formulated in terms of rule sets of geometrical constraints among symbol primitives. Given a set of symbol instances sketched by a user using a digital pen, our approach infers the grammar productions consisting of the rule set most likely to occur. The performance of our work is evaluated using a comprehensive benchmarking database of on-line symbols.

01 Jan 2006
TL;DR: This paper presents an automatic method for individual tree crown segmentation in aerial forestry images and builds a cluster adjacency graph where clusters belonging to the same crown are merged.
Abstract: Individual tree crown segmentation is frequently required in forest inventory, biomass measurement, change detection, tree species recognition, etc. Manual segmentation of a huge forest is practically impossible. In this paper, we present an automatic method for individual tree crown segmentation in aerial forestry images. We first extract treetops using the method in (1). Next we apply mean shift clustering to group pixels into clusters having homogeneous properties. Then we build a cluster adjacency graph where clusters belonging to the same crown are merged. We tested our method on some forestry images and obtained good results.

Journal ArticleDOI
TL;DR: It is proved that a Smale space can be reconstructed from the adjacency semigroup of its Markov partition, using the notion of the limit solenoid of a contracting self-similar semigroup.
Abstract: Self-similar inverse semigroups are defined using automata theory. Adjacency semigroups of s-resolved Markov partitions of Smale spaces are introduced. It is proved that a Smale space can be reconstructed from the adjacency semigroup of its Markov partition, using the notion of the limit solenoid of a contracting self-similar semigroup. The notions of the limit solenoid and of a contracting semigroup are described.