
Showing papers on "Adjacency list published in 2008"


Proceedings ArticleDOI
11 Feb 2008
TL;DR: This work presents a compression scheme for the web graph specifically designed to accommodate community queries and other random access algorithms on link servers, and uses a frequent pattern mining approach to extract meaningful connectivity formations.
Abstract: A link server is a system designed to support efficient implementations of graph computations on the web graph. In this work, we present a compression scheme for the web graph specifically designed to accommodate community queries and other random access algorithms on link servers. We use a frequent pattern mining approach to extract meaningful connectivity formations. Our Virtual Node Miner achieves graph compression without sacrificing random access by generating virtual nodes from frequent itemsets in vertex adjacency lists. The mining phase guarantees scalability by bounding the pattern mining complexity to O(E log E). We facilitate global mining, relaxing the requirement for the graph to be sorted by URL, enabling discovery of both inter-domain and intra-domain patterns. As a consequence, the approach allows incremental graph updates. Further, it not only facilitates but can also expedite graph computations such as PageRank and local random walks by implementing them directly on the compressed graph. We demonstrate the effectiveness of the proposed approach on several publicly available large web graph data sets. Experimental results indicate that the proposed algorithm achieves a 10- to 15-fold compression on most real-world web graph data sets.
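
The core idea, shown here only as a hedged sketch (the pattern-mining and scalability machinery of Virtual Node Miner is not reproduced): when several vertices share a frequent set of out-neighbours, that shared set can be factored out into a single virtual node, reducing the number of stored edges while preserving reachability. The function and example graph below are illustrative inventions, not the paper's code.

```python
# Minimal sketch of virtual-node compression (illustrative only, not the
# paper's Virtual Node Miner): a set of out-neighbours shared by several
# vertices is factored out into one virtual node, shrinking the edge count.

def compress_with_virtual_node(adj, pattern, members):
    """adj: dict vertex -> set of out-neighbours.
    pattern: frozenset of neighbours shared by every vertex in `members`."""
    vnode = ("virtual", len(adj))          # fresh virtual-node id
    new_adj = {u: set(vs) for u, vs in adj.items()}
    new_adj[vnode] = set(pattern)          # virtual node points to the pattern
    for u in members:
        assert pattern <= adj[u], "pattern must occur in every member list"
        new_adj[u] = (new_adj[u] - pattern) | {vnode}
    return new_adj

adj = {1: {4, 5, 6, 7}, 2: {4, 5, 6, 8}, 3: {4, 5, 6}}
small = compress_with_virtual_node(adj, frozenset({4, 5, 6}), [1, 2, 3])
print(sum(len(v) for v in adj.values()),      # 11 edges before
      sum(len(v) for v in small.values()))    # 8 edges after
```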

258 citations


Proceedings ArticleDOI
12 Jul 2008
TL;DR: This work characterizes combinatorial fitness landscapes as networks by adapting the notion of inherent networks proposed for energy surfaces, exhaustively extracting such networks for the well-known family of $NK$ landscapes as an example.
Abstract: We propose a network characterization of combinatorial fitness landscapes by adapting the notion of inherent networks proposed for energy surfaces (Doye, 2002). We use the well-known family of $NK$ landscapes as an example. In our case the inherent network is the graph where the vertices are all the local maxima and edges mean basin adjacency between two maxima. We exhaustively extract such networks on representative small NK landscape instances, and show that they are 'small-worlds'. However, the maxima graphs are not random, since their clustering coefficients are much larger than those of corresponding random graphs. Furthermore, the degree distributions are close to exponential instead of Poissonian. We also describe the nature of the basins of attraction and their relationship with the local maxima network.
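
As a rough illustration of the construction (a hedged sketch with hypothetical parameters N=8, K=2, not the instances studied in the paper): enumerate all configurations of a small random NK landscape, hill-climb each one to its local maximum, and connect two maxima whenever their basins contain Hamming-adjacent configurations.

```python
# Hedged sketch: build the local-maxima network of a small random NK
# landscape.  Vertices are local maxima; an edge means the basins of
# attraction of two maxima contain Hamming-adjacent configurations.
import random
from itertools import product

N, K = 8, 2
random.seed(0)
# contribution tables: f[i][bits of position i and its K right neighbours]
tables = [{p: random.random() for p in product((0, 1), repeat=K + 1)} for _ in range(N)]

def fitness(x):
    return sum(tables[i][tuple(x[(i + j) % N] for j in range(K + 1))] for i in range(N)) / N

def hill_climb(x):
    """Best-improvement climb; returns the local maximum reached from x."""
    while True:
        best, best_f = x, fitness(x)
        for i in range(N):
            y = x[:i] + (1 - x[i],) + x[i + 1:]
            fy = fitness(y)
            if fy > best_f:
                best, best_f = y, fy
        if best == x:
            return x
        x = best

basin = {x: hill_climb(x) for x in product((0, 1), repeat=N)}
maxima = set(basin.values())
edges = set()
for x, mx in basin.items():
    for i in range(N):
        y = x[:i] + (1 - x[i],) + x[i + 1:]
        if basin[y] != mx:
            edges.add(frozenset((mx, basin[y])))
print(len(maxima), "local maxima,", len(edges), "basin-adjacency edges")
```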

167 citations


Proceedings ArticleDOI
20 Nov 2008
TL;DR: It is shown that the adjacency graph of permutations is a subgraph of a multi-dimensional array of a special size, a property that enables code designs based on Lee-metric codes.
Abstract: We investigate error-correcting codes for a novel storage technology for flash memories, the rank-modulation scheme. In this scheme, a set of n cells stores information in the permutation induced by the different charge levels of the individual cells. The resulting scheme eliminates the need for discrete cell levels, overcomes overshoot errors when programming cells (a serious problem that reduces the writing speed), and mitigates the problem of asymmetric errors. In this paper, we study the properties of error correction in rank modulation codes. We show that the adjacency graph of permutations is a subgraph of a multi-dimensional array of a special size, a property that enables code designs based on Lee-metric codes. We present a one-error-correcting code whose size is at least half of the optimal size. We also present additional error-correcting codes and some related bounds.
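
One standard way to make the array embedding concrete, offered here as an assumption-laden sketch rather than necessarily the paper's exact mapping, is the inversion table of a permutation: an adjacent transposition, the basic error event in rank modulation, changes exactly one coordinate by 1, which is the Lee-metric structure such codes exploit.

```python
# Hedged illustration (not the paper's exact construction): map each
# permutation to its inversion table.  An adjacent transposition -- the basic
# error event in rank modulation -- changes exactly one coordinate by +/-1,
# which is why the permutation adjacency graph embeds in a multi-dimensional
# array and Lee-metric code designs become applicable.
from itertools import permutations

def inversion_table(perm):
    """v[j-1] = how many larger elements precede value j in perm."""
    n = len(perm)
    return tuple(sum(1 for x in perm[:perm.index(j)] if x > j) for j in range(1, n + 1))

n = 4
for p in permutations(range(1, n + 1)):
    for i in range(n - 1):
        q = list(p); q[i], q[i + 1] = q[i + 1], q[i]
        vp, vq = inversion_table(p), inversion_table(tuple(q))
        diffs = [abs(a - b) for a, b in zip(vp, vq)]
        assert sum(diffs) == 1 and max(diffs) == 1   # exactly one coordinate moves by 1
print("every adjacent transposition is a unit step in the inversion-table array")
```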

129 citations


Journal ArticleDOI
TL;DR: In this paper, it was shown that no two non-isomorphic lollipop graphs are cospectral with respect to the adjacency matrix, and that for p odd the lollipop graphs are determined by their Laplacian spectrum.

90 citations


Book
08 Sep 2008
TL;DR: Algorithmic Aspects of Graph Connectivity as mentioned in this paper is the first comprehensive book on this central notion in graph and network theory, emphasizing its algorithmic aspects; it can be used as a textbook in graduate courses in mathematical sciences, such as discrete mathematics, combinatorics, and operations research.
Abstract: Algorithmic Aspects of Graph Connectivity is the first comprehensive book on this central notion in graph and network theory, emphasizing its algorithmic aspects. Because of its wide applications in the fields of communication, transportation, and production, graph connectivity has made tremendous algorithmic progress under the influence of the theory of complexity and algorithms in modern computer science. The book contains various definitions of connectivity, including edge-connectivity and vertex-connectivity, and their ramifications, as well as related topics such as flows and cuts. The authors comprehensively discuss new concepts and algorithms that allow for quicker and more efficient computing, such as maximum adjacency ordering of vertices. Covering both basic definitions and advanced topics, this book can be used as a textbook in graduate courses in mathematical sciences, such as discrete mathematics, combinatorics, and operations research, and as a reference book for specialists in discrete mathematics and its applications.
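
For concreteness, here is a generic maximum adjacency (MA) ordering sketch of the kind used in minimum-cut algorithms (e.g. Nagamochi-Ibaraki, Stoer-Wagner); it is offered only as background for the concept the book mentions, not as the book's own presentation, and the example graph is made up.

```python
# Generic maximum adjacency (MA) ordering sketch: repeatedly pick the vertex
# most strongly attached (by total edge weight) to the already-ordered set.
def ma_ordering(graph, start):
    """graph: dict u -> dict v -> weight (undirected). Returns a vertex order."""
    order = [start]
    attach = {v: 0.0 for v in graph if v != start}   # total weight to chosen set
    for v, w in graph[start].items():
        if v in attach:
            attach[v] += w
    while attach:
        u = max(attach, key=attach.get)              # most strongly attached vertex
        del attach[u]
        order.append(u)
        for v, w in graph[u].items():
            if v in attach:
                attach[v] += w
    return order

g = {'a': {'b': 2, 'c': 3}, 'b': {'a': 2, 'c': 1, 'd': 4},
     'c': {'a': 3, 'b': 1}, 'd': {'b': 4}}
print(ma_ordering(g, 'a'))   # ['a', 'c', 'b', 'd']
```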

84 citations


Proceedings ArticleDOI
05 Jul 2008
TL;DR: The central issue in representing graph-structured data instances in learning algorithms is designing features which are invariant to permuting the numbering of the vertices, and this work presents a new system of invariant graph features which it calls the skew spectrum of graphs.
Abstract: The central issue in representing graph-structured data instances in learning algorithms is designing features which are invariant to permuting the numbering of the vertices. We present a new system of invariant graph features which we call the skew spectrum of graphs. The skew spectrum is based on mapping the adjacency matrix of any (weighted, directed, unlabeled) graph to a function on the symmetric group and computing bispectral invariants. The reduced form of the skew spectrum is computable in O(n^3) time, and experiments show that on several benchmark datasets it can outperform state-of-the-art graph kernels.
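
The skew spectrum itself is not reproduced here; the short sketch below only makes the invariance requirement concrete: a feature map f must satisfy f(P A P^T) = f(A) for every permutation matrix P. The sorted eigenvalue spectrum is one familiar feature that passes this test, while the raw adjacency matrix does not.

```python
# Sketch of the invariance requirement (not the skew spectrum itself):
# a graph feature f must give f(P A P^T) == f(A) for every permutation
# matrix P.  Sorted eigenvalues pass this test; the raw matrix does not.
import numpy as np

def spectrum(A):
    return np.sort(np.linalg.eigvalsh(A))

n = 6
A = np.zeros((n, n), dtype=int)
for u, v in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]:      # a 6-vertex path
    A[u, v] = A[v, u] = 1

perm = [1, 0, 2, 3, 4, 5]                                   # relabel: swap vertices 0 and 1
P = np.eye(n, dtype=int)[perm]                              # permutation matrix
B = P @ A @ P.T                                             # same graph, different numbering

print(np.allclose(spectrum(A), spectrum(B)))                # True: spectrum is invariant
print(np.array_equal(A, B))                                 # False: raw matrix is not
```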

72 citations


Proceedings ArticleDOI
23 Jun 2008
TL;DR: A new object representation, called connected segmentation tree (CST), is proposed, which captures canonical characteristics of the object in terms of the photometric, geometric, and spatial adjacency and containment properties of its constituent image regions.
Abstract: This paper proposes a new object representation, called connected segmentation tree (CST), which captures canonical characteristics of the object in terms of the photometric, geometric, and spatial adjacency and containment properties of its constituent image regions. CST is obtained by augmenting the object's segmentation tree (ST) with inter-region neighbor links, in addition to their recursive embedding structure already present in ST. This makes CST a hierarchy of region adjacency graphs. A region's neighbors are computed using an extension to regions of the Voronoi diagram for point patterns. Unsupervised learning of the CST model of a category is formulated as matching the CST graph representations of unlabeled training images, and fusing their maximally matching subgraphs. A new learning algorithm is proposed that optimizes the model structure by simultaneously searching for both the most salient nodes (regions) and the most salient edges (containment and neighbor relationships of regions) across the image graphs. Matching of the category model to the CST of a new image results in simultaneous detection, segmentation and recognition of all occurrences of the category, and a semantic explanation of these results.
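
As background for the "hierarchy of region adjacency graphs" above, here is a minimal, hedged sketch of building a flat RAG from a segmentation label image using 4-adjacency; it is not the CST construction and does not reproduce the Voronoi-based neighbour computation of the paper.

```python
# Minimal region adjacency graph (RAG) construction from a segmentation label
# image using 4-adjacency -- only the basic structure the CST builds on.
import numpy as np

def region_adjacency_graph(labels):
    """labels: 2-D integer array of region ids. Returns set of adjacent pairs."""
    edges = set()
    # compare each pixel with its right and bottom neighbour
    for a, b in ((labels[:, :-1], labels[:, 1:]), (labels[:-1, :], labels[1:, :])):
        diff = a != b
        edges.update(zip(a[diff].tolist(), b[diff].tolist()))
    return {tuple(sorted(e)) for e in edges}

labels = np.array([[1, 1, 2, 2],
                   [1, 3, 3, 2],
                   [4, 4, 3, 2]])
print(sorted(region_adjacency_graph(labels)))
# [(1, 2), (1, 3), (1, 4), (2, 3), (3, 4)]
```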

61 citations


Proceedings Article
01 Oct 2008
TL;DR: Noise tolerance of maximal quasi-bicliques is improved by allowing every vertex to tolerate up to the same number, or the same percentage, of missing edges, leading to a more natural interaction between the two vertex sets: a balanced most-versus-most adjacency.
Abstract: The rigid all-versus-all adjacency required by a maximal biclique for its two vertex sets is extremely vulnerable to missing data. In the past, several types of quasi-bicliques have been proposed to tackle this problem; however, their noise tolerance is usually unbalanced and can be very skewed. In this paper, we improve the noise tolerance of maximal quasi-bicliques by allowing every vertex to tolerate up to the same number, or the same percentage, of missing edges. This idea leads to a more natural interaction between the two vertex sets: a balanced most-versus-most adjacency. This generalization is also non-trivial, as many large-size maximal quasi-biclique subgraphs do not contain any maximal bicliques. This observation implies that direct expansion from maximal bicliques may not guarantee a complete enumeration of all maximal quasi-bicliques. We present important properties of maximal quasi-bicliques, such as a bounded closure property and a fixed point property, to design efficient algorithms. Maximal quasi-bicliques are closely related to co-clustering problems such as documents and words co-clustering, images and features co-clustering, stocks and financial ratios co-clustering, etc. Here, we demonstrate the usefulness of our concepts using a new application, a bioinformatics example, where prediction of true protein interactions is investigated.
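
A minimal sketch of the balanced tolerance condition described above, under the assumption that the tolerance is an absolute per-vertex count (a percentage variant would be analogous); the function name and parameters are illustrative, not the paper's notation.

```python
# Sketch of the balanced tolerance test: (X, Y) is accepted as a
# quasi-biclique if every vertex on either side misses at most `eps` edges
# to the opposite side.
def is_quasi_biclique(edges, X, Y, eps):
    """edges: set of (x, y) pairs of a bipartite graph."""
    for x in X:
        if sum((x, y) not in edges for y in Y) > eps:
            return False
    for y in Y:
        if sum((x, y) not in edges for x in X) > eps:
            return False
    return True

E = {(1, 'a'), (1, 'b'), (2, 'a'), (2, 'c'), (3, 'b'), (3, 'c')}
print(is_quasi_biclique(E, {1, 2, 3}, {'a', 'b', 'c'}, eps=1))  # True: each vertex misses 1
print(is_quasi_biclique(E, {1, 2, 3}, {'a', 'b', 'c'}, eps=0))  # False: not a full biclique
```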

52 citations


Journal ArticleDOI
TL;DR: This paper proposes the first optimal representations for 3-connected planar graphs and triangulations, the most standard classes of graphs underlying meshes with spherical topology; the proposed representations asymptotically match the respective entropies of the two classes.

46 citations


Journal ArticleDOI
01 Dec 2008
TL;DR: This work presents a scheme for efficient traversal of mesh edges that builds on the adjacency primitives and programmable geometry shaders introduced in recent graphics hardware, and aims to minimize the number of primitives while maximizing SIMD parallelism.
Abstract: Processing of mesh edges lies at the core of many advanced realtime rendering techniques, ranging from shadow and silhouette computations, to motion blur and fur rendering. We present a scheme for efficient traversal of mesh edges that builds on the adjacency primitives and programmable geometry shaders introduced in recent graphics hardware. Our scheme aims to minimize the number of primitives while maximizing SIMD parallelism. These objectives reduce to a set of discrete optimization problems on the dual graph of the mesh, and we develop practical solutions to these graph problems. In addition, we extend two existing vertex cache optimization algorithms to produce cache-efficient traversal orderings for adjacency primitives. We demonstrate significant runtime speedups for several practical real-time rendering algorithms.
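
As context for the "adjacency primitives" mentioned above (e.g. the GL_TRIANGLES_ADJACENCY layout consumed by geometry shaders), here is a hedged CPU-side sketch that assembles six indices per triangle from a shared-edge lookup; it illustrates only the input format, not the paper's primitive-count or SIMD optimizations.

```python
# CPU-side sketch of building a triangles-with-adjacency index list (6 indices
# per triangle: v0, adj01, v1, adj12, v2, adj20), the primitive layout that
# geometry shaders consume.  Boundary edges fall back to the triangle's own vertex.
def triangles_adjacency(tris):
    # map each directed edge (a, b) to the vertex opposite it in its triangle
    opposite = {}
    for a, b, c in tris:
        opposite[(a, b)] = c
        opposite[(b, c)] = a
        opposite[(c, a)] = b
    out = []
    for a, b, c in tris:
        out += [a, opposite.get((b, a), a),    # neighbour across edge a-b
                b, opposite.get((c, b), b),    # neighbour across edge b-c
                c, opposite.get((a, c), c)]    # neighbour across edge c-a
    return out

tris = [(0, 1, 2), (2, 1, 3)]                  # two triangles sharing edge 1-2
print(triangles_adjacency(tris))
# [0, 0, 1, 3, 2, 2, 2, 0, 1, 1, 3, 3]  (the shared edge contributes opposite vertices 3 and 0)
```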

43 citations


Book ChapterDOI
15 Sep 2008
TL;DR: A lower bound is proved in the cell probe model showing that it is impossible to achieve the information-theoretic lower bound to within lower order terms unless the graph is too sparse or too dense.
Abstract: We consider the problem of encoding a graph with n vertices and m edges compactly supporting adjacency, neighborhood and degree queries in constant time in the log n-bit word RAM model. The adjacency query asks whether there is an edge between two vertices, the neighborhood query reports the neighbors of a given vertex in constant time per neighbor, and the degree query reports the number of incident edges to a given vertex. We study the problem in the context of succinctness, where the goal is to achieve the optimal space requirement as a function of n and m, to within lower order terms. We prove a lower bound in the cell probe model showing that it is impossible to achieve the information-theoretic lower bound to within lower order terms unless the graph is too sparse (namely m = o(n^δ) for any constant δ > 0) or too dense (namely m = ω(n^(2-δ)) for any constant δ > 0). Furthermore, we present a succinct encoding for graphs for all values of n, m supporting queries in constant time. The space requirement of the representation is always within a multiplicative 1 + ε factor of the information-theoretic lower bound for any arbitrarily small constant ε > 0. This is the best achievable space bound according to our lower bound where it applies. The space requirement of the representation achieves the information-theoretic lower bound tightly to within lower order terms when the graph is sparse (m = o(n^δ) for any constant δ > 0).
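
To make the three query types concrete, here is a plain (non-succinct) compressed-sparse-row baseline, offered only as a hedged reference point: degree in O(1), neighborhood in O(1) per neighbor, and adjacency by binary search in the sorted neighbor list rather than in constant time as the paper's encoding achieves.

```python
# Plain adjacency-list baseline (CSR: one offset array plus one sorted
# neighbour array).  Not the paper's succinct encoding; it only makes the
# query interface concrete.
import bisect

class CSRGraph:
    def __init__(self, n, edges):                      # undirected edge list
        adj = [[] for _ in range(n)]
        for u, v in edges:
            adj[u].append(v); adj[v].append(u)
        self.off = [0]
        self.nbr = []
        for u in range(n):
            self.nbr.extend(sorted(adj[u]))
            self.off.append(len(self.nbr))

    def degree(self, u):                                # O(1)
        return self.off[u + 1] - self.off[u]

    def neighborhood(self, u):                          # O(1) per neighbour
        return self.nbr[self.off[u]:self.off[u + 1]]

    def adjacency(self, u, v):                          # binary search in u's list
        i = bisect.bisect_left(self.nbr, v, self.off[u], self.off[u + 1])
        return i < self.off[u + 1] and self.nbr[i] == v

g = CSRGraph(5, [(0, 1), (0, 2), (1, 2), (3, 4)])
print(g.degree(0), g.neighborhood(0), g.adjacency(1, 2), g.adjacency(0, 3))
# 2 [1, 2] True False
```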

Book ChapterDOI
27 Jun 2008
TL;DR: This paper proposes a new method to compress a Web graph that is more efficient than Boldi and Vigna's method with respect to the size of the compressed data.
Abstract: Several methods have been proposed for compressing the linkage data of a Web graph. Among them, the method proposed by Boldi and Vigna is known as the most efficient one. In the paper, we propose a new method to compress a Web graph. Our method is more efficient than theirs with respect to the size of the compressed data. For example, our method needs only 1.99 bits per link to compress a Web graph containing 3,216,152 links connecting 325,557 pages, while the method of Boldi and Vigna needs 2.84 bits per link to compress the same Web graph.
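
For a feel of where "bits per link" numbers come from, the hedged sketch below measures a generic baseline: sort each adjacency list, take gaps between consecutive targets, and charge each gap its Elias-gamma code length. It is neither the paper's method nor Boldi and Vigna's, and the toy graph is invented.

```python
# Generic baseline for "bits per link": sort each adjacency list, take gaps,
# and count the bits an Elias-gamma code would spend on them.
from math import floor, log2

def gamma_bits(x):                     # Elias gamma: 2*floor(log2 x) + 1 bits, x >= 1
    return 2 * floor(log2(x)) + 1

def bits_per_link(adjacency):
    """adjacency: dict page_id -> list of distinct out-neighbour ids."""
    total_bits = total_links = 0
    for src, targets in adjacency.items():
        prev = -1                                      # sentinel so every gap >= 1
        for t in sorted(targets):
            total_bits += gamma_bits(t - prev)
            prev = t
            total_links += 1
    return total_bits / total_links if total_links else 0.0

web = {0: [1, 2, 3, 4], 1: [0, 2], 2: [0, 1, 3], 3: [4], 4: [0]}
print(round(bits_per_link(web), 2), "bits/link with plain gap + gamma coding")   # 1.91
```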

Proceedings ArticleDOI
16 Jul 2008
TL;DR: An improved quadtree method (IQM) for split-merge segmentation, called the neighbour naming based image segmentation method (NNBISM) in Kelkar, D. and Grupta, S. (2008), in which the top-down and bottom-up approaches of region-based segmentation techniques are chained.
Abstract: Image segmentation is one of the important steps in image processing. This paper introduces an improved quadtree method (IQM) for split-merge, called the neighbour naming based image segmentation method (NNBISM) in Kelkar, D. and Grupta, S. (2008), in which the top-down and bottom-up approaches of region-based segmentation techniques are chained. IQM is mainly composed of splitting the image, initializing the neighbour list, and then merging the split regions. The first step uses a quadtree to represent the split image. In the second step, the neighbour list of every quadtree node is populated using the neighbour naming method (NNM). NNM works at the region level and leads to fast initialisation of adjacency information, thus improving the performance of IQM for split-merge image segmentation. This populated list is the basis for the third step, which is decomposed into two phases, in-house merge and final merge. This decomposition reduces the problems involved in handling lengthy neighbour lists during the merging process.

Journal ArticleDOI
TL;DR: This paper analyzes the relation between the growth rate of the knowledge stock of the agents from R&D collaborations and the properties of the adjacency matrix associated with the network of collaborations.
Abstract: We investigate some of the properties and extensions of a dynamic innovation network model recently introduced in [36]. In the model, the set of efficient graphs ranges, depending on the cost for maintaining a link, from the complete graph to the (quasi-) star, varying within a well defined class of graphs. However, the interplay between dynamics on the nodes and topology of the network leads to equilibrium networks which are typically not efficient and are characterized, as observed in empirical studies of R&D networks, by sparseness, presence of clusters and heterogeneity of degree. In this paper, we analyze the relation between the growth rate of the knowledge stock of the agents from R&D collaborations and the properties of the adjacency matrix associated with the network of collaborations. By means of computer simulations we further investigate how the equilibrium network is

Journal ArticleDOI
TL;DR: In this paper, the authors obtained the optimal structural result for adjacency preserving maps on hermitian matrices over the complex field, removing the bijectivity assumption and requiring only that adjacency be preserved in one direction.
Abstract: Hua’s fundamental theorem of the geometry of hermitian matrices characterizes bijective maps on the space of all hermitian matrices preserving adjacency in both directions. The problem of possible improvements has been open for a while. There are three natural problems here. Do we need the bijectivity assumption? Can we replace the assumption of preserving adjacency in both directions by the weaker assumption of preserving adjacency in one direction only? Can we obtain such a characterization for maps acting between the spaces of hermitian matrices of different sizes? We answer all three questions for the complex hermitian matrices, thus obtaining the optimal structural result for adjacency preserving maps on hermitian matrices over the complex field.

Proceedings ArticleDOI
23 Jun 2008
TL;DR: This paper assumes that the high-contrast, dominant contours of an object are fairly repeatable and uses them to compute a partial matching cost (PMC) between regions; PMC is integrated into a many-to-one label assignment framework for matching RAGs, which is solved using belief propagation.
Abstract: Region based features are getting popular due to their higher descriptive power relative to other features. However, real world images exhibit changes in image segments capturing the same scene part taken at different time, under different lighting conditions, from different viewpoints, etc. Segmentation algorithms reflect these changes, and thus segmentations exhibit poor repeatability. In this paper we address the problem of matching regions of similar objects under unstable segmentations. Merging and splitting of regions makes it difficult to find such correspondences using one-to-one matching algorithms. We present partial region matching as a solution to this problem. We assume that the high contrast, dominant contours of an object are fairly repeatable, and use them to compute partial matching cost (PMC) between regions. Region correspondences are obtained under region adjacency constraints encoded by region adjacency graph (RAG). We integrate PMC in a many-to-one label assignment framework for matching RAGs, and solve it using belief propagation. We show that our algorithm can match images of similar objects across unstable image segmentations. We also compare the performance of our algorithm with that of the standard one-to-one matching algorithm on three motion sequences. We conclude that our partial region matching approach is robust under segmentation irrepeatabilities.

Proceedings ArticleDOI
25 Oct 2008
TL;DR: This work considers the problem of online sublinear expander reconstruction and its relation to random walks in "noisy" expanders, and shows that a random walk from almost any vertex in the expander part will have fast mixing properties, in the general setting of irreducible finite Markov chains.
Abstract: We consider the problem of online sublinear expander reconstruction and its relation to random walks in "noisy" expanders. Given access to an adjacency list representation of a bounded-degree graph G, we want to convert this graph into a bounded-degree expander G' changing G as little as possible. The graph G' will be output by a distributed filter: this is a sublinear time procedure that, given a query vertex, outputs all its neighbors in G', and can do so even in a distributed manner, ensuring consistency in all the answers. One of the main tools in our analysis is a result on the behavior of random walks in graphs that are almost expanders: graphs that are formed by arbitrarily connecting a small unknown graph (the noise) to a large expander. We show that a random walk from almost any vertex in the expander part will have fast mixing properties, in the general setting of irreducible finite Markov chains. We also design sublinear time procedures to distinguish vertices of the expander part from those in the noise part, and use this procedure in the reconstruction algorithm.

Proceedings ArticleDOI
01 Jan 2008
TL;DR: A new algorithm is proposed that uses the maximum spanning tree of a graph defining potential connectivity and adjacency in recorded stripes to solve the problem of relating projected and recorded stripes.
Abstract: Structured light is a well-known technique for capturing 3D surface measurements but has yet to achieve satisfactory results for applications demanding high resolution models at frame rate. For these requirements a dense set of uniform uncoded white stripes seems attractive. But the problem of relating projected and recorded stripes, here called the Indexing Problem, has proved to be difficult to overcome reliably for uncoded patterns. We propose a new algorithm that uses the maximum spanning tree of a graph defining potential connectivity and adjacency in recorded stripes. Results are significantly more accurate and reliable than previous attempts. We do however also identify an important limitation of uncoded patterns and claim that, in general, additional stripe coding is necessary. Our algorithm adapts easily to accommodate a minimal coding scheme that increases neither sample size nor acquisition time.
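
As a small supporting sketch (generic graph machinery only, not the stripe-connectivity graph or the coding scheme of the paper), a maximum spanning tree can be computed with Kruskal's algorithm by processing edges in decreasing weight; the weights and vertex ids below are invented.

```python
# Generic maximum spanning tree via Kruskal's algorithm (heaviest edges first).
def maximum_spanning_tree(n, edges):
    """edges: list of (weight, u, v) with vertices 0..n-1."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]              # path halving
            x = parent[x]
        return x
    tree = []
    for w, u, v in sorted(edges, reverse=True):        # descending weight
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            tree.append((w, u, v))
    return tree

edges = [(0.9, 0, 1), (0.2, 0, 2), (0.8, 1, 2), (0.5, 2, 3)]
print(maximum_spanning_tree(4, edges))
# [(0.9, 0, 1), (0.8, 1, 2), (0.5, 2, 3)]
```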

Journal ArticleDOI
TL;DR: In this paper, the authors present the twin objectives of testing for isomorphism and compactness using the Hamming matrices and moment matrices in planetary gear trains (PGTs).
Abstract: New planetary gear trains (PGTs) are generated using graph theory. A geared kinematic chain is converted to a graph and a graph in turn is algebraically represented by a vertex-vertex adjacency matrix. Checking for isomorphism needs to be an integral part of the enumeration process of PGTs. Hamming matrix is written from the adjacency matrix, using a set of rules, which is adequate to detect isomorphism in PGTs. The present work presents the twin objectives of testing for isomorphism and compactness using the Hamming matrices and moment matrices.
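
For illustration only, the sketch below covers just the first step above, building a vertex-vertex adjacency matrix, together with a simple necessary-condition check for isomorphism based on sorted spectra; it does not reproduce the paper's Hamming-matrix or moment-matrix tests, and the example graphs are arbitrary.

```python
# Sketch: vertex-vertex adjacency matrix of a small graph, plus a quick
# necessary-condition check for isomorphism by comparing sorted spectra.
import numpy as np

def adjacency_matrix(n, edges):
    A = np.zeros((n, n), dtype=int)
    for u, v in edges:
        A[u, v] = A[v, u] = 1
    return A

def maybe_isomorphic(A, B):
    """Necessary condition only: equal sorted eigenvalue spectra."""
    return np.allclose(np.sort(np.linalg.eigvalsh(A)), np.sort(np.linalg.eigvalsh(B)))

A = adjacency_matrix(4, [(0, 1), (1, 2), (2, 3), (3, 0)])      # 4-cycle
B = adjacency_matrix(4, [(0, 2), (2, 1), (1, 3), (3, 0)])      # relabelled 4-cycle
C = adjacency_matrix(4, [(0, 1), (1, 2), (2, 3)])              # path
print(maybe_isomorphic(A, B), maybe_isomorphic(A, C))          # True False
```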

Journal ArticleDOI
TL;DR: In this article, a method of determining column layouts for orthogonal buildings is developed using a sweep line algorithm coupled with an adjacency graph; the method is tested on various examples and is shown to work well unless the building shape results in a large number of partitions.

Journal ArticleDOI
TL;DR: This paper considers a segmentation as a set of connected regions separated by a frontier, and defines four classes of graphs for which, thanks to the notion of cleft, some of the difficulties of defining merging procedures are avoided; the main result is that one of these classes is the class of graphs in which any cleft is thin.
Abstract: Region merging methods consist of improving an initial segmentation by merging some pairs of neighboring regions. In this paper, we consider a segmentation as a set of connected regions, separated by a frontier. If the frontier set cannot be reduced without merging some regions then we call it a cleft, or binary watershed. In a general graph framework, merging two regions is not straightforward. We define four classes of graphs for which we prove, thanks to the notion of cleft, that some of the difficulties for defining merging procedures are avoided. Our main result is that one of these classes is the class of graphs in which any cleft is thin. None of the usual adjacency relations on ℤ^2 and ℤ^3 allows a satisfying definition of merging. We introduce the perfect fusion grid on ℤ^n, a regular graph in which merging two neighboring regions can always be performed by removing from the frontier set all the points adjacent to both regions.

Proceedings Article
01 Jan 2008
TL;DR: Experiments and evaluations on DUC04 data show that this cluster-adjacency based method for ordering sentences in multi-document summarization achieves better performance than other existing sentence ordering methods.
Abstract: In this paper, we propose a cluster-adjacency based method to order sentences for multi-document summarization tasks. Given a group of sentences to be organized into a summary, each sentence is mapped to a theme in the source documents by a semi-supervised classification method, and the adjacency of pairs of sentences is learned from the source documents based on the adjacency of the clusters they belong to. The ordering of the summary sentences can then be derived once the first sentence is determined. Experiments and evaluations on DUC04 data show that this method achieves better performance than other existing sentence ordering methods.
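
A toy, hedged rendering of the idea (the semi-supervised theme classifier is replaced here by a given sentence-to-cluster mapping, and all names are illustrative): count how often one cluster immediately follows another in the source documents, then greedily order the summary sentences from a fixed first sentence.

```python
# Toy version of cluster-adjacency ordering (illustrative only).
from collections import defaultdict

def cluster_adjacency(source_docs):
    """source_docs: list of documents, each a list of cluster labels."""
    follow = defaultdict(lambda: defaultdict(int))
    for doc in source_docs:
        for a, b in zip(doc, doc[1:]):
            follow[a][b] += 1
    return follow

def order_sentences(summary, first, follow):
    """summary: dict sentence -> cluster label; first: the fixed first sentence."""
    ordered, remaining = [first], set(summary) - {first}
    while remaining:
        cur = summary[ordered[-1]]
        nxt = max(remaining, key=lambda s: follow[cur][summary[s]])
        ordered.append(nxt)
        remaining.remove(nxt)
    return ordered

docs = [["intro", "method", "result"], ["intro", "method", "method", "result"]]
summary = {"s1": "intro", "s2": "result", "s3": "method"}
print(order_sentences(summary, "s1", cluster_adjacency(docs)))   # ['s1', 's3', 's2']
```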

Journal ArticleDOI
TL;DR: In this article, real nonsingular cubic hypersurfaces X ⊂ P^5 up to deformation equivalence combined with projective equivalence are classified by the conjugacy classes of involutions induced by the complex conjugation in H^4(X).
Abstract: We study real nonsingular cubic hypersurfaces X ⊂ P^5 up to deformation equivalence combined with projective equivalence and prove that they are classified by the conjugacy classes of involutions induced by the complex conjugation in H^4(X). Moreover, we provide a graph K4 whose vertices represent the equivalence classes of such cubics and edges represent their adjacency. It turns out that the graph K4 essentially coincides with the graph K3 characterizing a certain adjacency of real non-polarized K3-surfaces.

Book
01 Jan 2008
TL;DR: Quantum Angular Momentum; Composite Systems; Graphs and Adjacency Diagrams; Generating Functions; The D-Polynomials: Form; Operator Actions in Hilbert Space; The General Linear and Unitary Groups; Tensor Operator Theory; Compendium A: Basic Algebraic Objects; Compendium B: Combinatorial Objects.
Abstract: Quantum Angular Momentum; Composite Systems; Graphs and Adjacency Diagrams; Generating Functions; The D-Polynomials: Form; Operator Actions in Hilbert Space; The D-Polynomials: Structure; The General Linear and Unitary Groups; Tensor Operator Theory; Compendium A: Basic Algebraic Objects; Compendium B: Combinatorial Objects.

Book ChapterDOI
07 Jul 2008
TL;DR: A property tester is given that, given a graph with degree bound d, an expansion bound α, and a parameter ε > 0, accepts the graph with high probability if its expansion is more than α and rejects it with high probability if it is ε-far from any graph with expansion α' with degree bound d.
Abstract: We consider the problem of testing graph expansion (either vertex or edge) in the bounded degree model [10]. We give a property tester that, given a graph with degree bound d, an expansion bound α, and a parameter ε > 0, accepts the graph with high probability if its expansion is more than α, and rejects it with high probability if it is ε-far from any graph with expansion α' with degree bound d, where α' < α.

Journal ArticleDOI
TL;DR: An efficient approach to reduce the number of elementary tests for continuous collision detection between rigid and deformable models by exploiting connectivity information and using the adjacency relationships between triangles to perform hierarchical culling is presented.
Abstract: We present an efficient approach to reduce the number of elementary tests for continuous collision detection between rigid and deformable models. Our algorithm exploits connectivity information and uses the adjacency relationships between triangles to perform hierarchical culling. This can be combined with table-based lookups to eliminate duplicate elementary tests. In practice, our approach can reduce the number of elementary tests by two orders of magnitude. We demonstrate the performance of our algorithm on various challenging rigid body and deformable simulations.

Book ChapterDOI
TL;DR: This chapter discusses the well-known edge graph and its dual, the adjacency graph, recently introduced by Bergeron et al. Step-by-step procedures are given for constructing and manipulating these graphs.
Abstract: The Double Cut and Join is an operation acting locally at four chromosomal positions without regard to chromosomal context. This chapter discusses its application and the resulting menu of operations for genomes consisting of arbitrary numbers of circular chromosomes, as well as for a general mix of linear and circular chromosomes. In the general case the menu includes: inversion, translocation, transposition, formation and absorption of circular intermediates, conversion between linear and circular chromosomes, block interchange, fission, and fusion. This chapter discusses the well-known edge graph and its dual, the adjacency graph, recently introduced by Bergeron et al. Step-by-step procedures are given for constructing and manipulating these graphs. Simple algorithms are given in the adjacency graph for computing the minimal DCJ distance between two genomes and finding a minimal sorting; and use of an online tool (Mauve) to generate synteny blocks and apply DCJ is described.
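
A minimal, hedged sketch of the adjacency-graph distance computation for the special case of genomes consisting only of circular chromosomes with the same gene content, where the DCJ distance reduces to N - C (N genes, C cycles in the adjacency graph), following Bergeron et al.'s formulation; linear chromosomes, telomeres and odd paths from the chapter's general treatment are deliberately omitted, and the helper names are invented.

```python
# DCJ distance for purely circular genomes: distance = N - C, with N the
# number of genes and C the number of cycles in the adjacency graph.
# Assumes both genomes contain exactly the same genes.
def adjacencies(genome):
    """genome: list of circular chromosomes, each a list of signed gene ids."""
    adj = {}
    for chrom in genome:
        for g, h in zip(chrom, chrom[1:] + chrom[:1]):           # circular wrap
            right = (abs(g), 'h') if g > 0 else (abs(g), 't')     # right extremity of g
            left = (abs(h), 't') if h > 0 else (abs(h), 'h')      # left extremity of h
            adj[right], adj[left] = left, right
    return adj

def dcj_distance(genome_a, genome_b):
    A, B = adjacencies(genome_a), adjacencies(genome_b)
    n_genes = sum(len(c) for c in genome_a)
    seen, cycles = set(), 0
    for start in A:
        if start in seen:
            continue
        cycles += 1
        x = start
        while x not in seen:                     # alternate A- and B-adjacencies
            seen.add(x); y = A[x]
            seen.add(y); x = B[y]
    return n_genes - cycles

a = [[1, 2, 3, 4]]                               # one circular chromosome
b = [[1, -3, -2, 4]]                             # same genes, one segment inverted
print(dcj_distance(a, b))                        # 1: a single DCJ (inversion) suffices
```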

Journal ArticleDOI
TL;DR: In this paper, the authors proposed the nearest-nodes finite element method (NN-FEM), in which finite elements are used only for numerical integration, while shape functions are constructed in a similar way to meshless methods, i.e., from a set of nodes that are the nearest to the quadrature point concerned.

Proceedings ArticleDOI
01 Sep 2008
TL;DR: It is demonstrated how redundancy-based techniques can be used to acquire containment and adjacency relations, and how fuzzy spatial reasoning can be employed to maintain the consistency of the resulting knowledge base.
Abstract: Topological relations between geographic regions are of interest in many applications. When the exact boundaries of regions are not available, such relations can be established by analysing natural language information from Web documents. In particular, we demonstrate how redundancy-based techniques can be used to acquire containment and adjacency relations, and how fuzzy spatial reasoning can be employed to maintain the consistency of the resulting knowledge base.
