
Showing papers on "Line graph" published in 2013




Book ChapterDOI
TL;DR: In this paper, a survey on extremal graph theory is presented, focusing on the case when one of the excluded graphs is bipartite, and many important results, methods, problems, and constructions are described.
Abstract: This paper is a survey on Extremal Graph Theory, primarily focusing on the case when one of the excluded graphs is bipartite. We give an introduction to this field and describe many important results, methods, problems, and constructions.

332 citations


Journal ArticleDOI
TL;DR: This work presents a local clustering algorithm, a useful primitive for handling massive graphs, such as social networks and web-graphs, that finds a good cluster---a subset of vertices whose internal connections are significantly richer than its external connections---near a given vertex.
Abstract: We study the design of local algorithms for massive graphs. A local graph algorithm is one that finds a solution containing or near a given vertex without looking at the whole graph. We present a local clustering algorithm. Our algorithm finds a good cluster---a subset of vertices whose internal connections are significantly richer than its external connections---near a given vertex. The running time of our algorithm, when it finds a nonempty local cluster, is nearly linear in the size of the cluster it outputs. The running time of our algorithm also depends polylogarithmically on the size of the graph and polynomially on the conductance of the cluster it produces. Our clustering algorithm could be a useful primitive for handling massive graphs, such as social networks and web-graphs. As an application of this clustering algorithm, we present a partitioning algorithm that finds an approximate sparsest cut with nearly optimal balance. Our algorithm takes time nearly linear in the number of edges of the graph.
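To make the "good cluster near a given vertex" primitive concrete, here is a minimal Python sketch that is not the paper's algorithm: it ranks vertices by a few steps of a lazy random walk from the seed and then sweeps prefixes of that ranking, keeping the prefix of lowest conductance.

from collections import defaultdict

def lazy_walk_scores(adj, seed, steps=5):
    # diffuse probability mass from the seed with a lazy random walk
    p = defaultdict(float)
    p[seed] = 1.0
    for _ in range(steps):
        q = defaultdict(float)
        for v, mass in p.items():
            q[v] += 0.5 * mass                      # lazy self-loop keeps half the mass
            for u in adj[v]:
                q[u] += 0.5 * mass / len(adj[v])    # spread the rest over neighbours
        p = q
    return p

def sweep_cut(adj, scores):
    # order vertices by degree-normalized score and keep the best-conductance prefix
    vol_total = sum(len(adj[v]) for v in adj)
    order = sorted(scores, key=lambda v: scores[v] / len(adj[v]), reverse=True)
    in_set, cut, vol = set(), 0, 0
    best_phi, best_prefix = float("inf"), []
    for v in order[:-1]:                            # never take the full vertex set
        vol += len(adj[v])
        cut += sum(1 if u not in in_set else -1 for u in adj[v])
        in_set.add(v)
        phi = cut / min(vol, vol_total - vol)       # conductance of the current prefix
        if phi < best_phi:
            best_phi, best_prefix = phi, sorted(in_set)
    return best_prefix, best_phi

# two triangles joined by one edge; seeding in the left triangle recovers it
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
print(sweep_cut(adj, lazy_walk_scores(adj, seed=0)))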

329 citations


01 Jan 2013
TL;DR: The third lecture, as discussed by the authors, covers applications of the Zarankiewicz problem, the Turán problem for trees, the girth problem and Moore's bound, and an application of Moore's bound to graph spanners.
Abstract: 3 Third Lecture: 3.1 Applications of the Zarankiewicz Problem; 3.2 The Turán Problem for Trees; 3.3 The Girth Problem and Moore's Bound; 3.4 Application of Moore's Bound to Graph Spanners.
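For reference, the Moore bound mentioned in Section 3.3 can be stated as follows (the odd-girth case of a standard fact, quoted here for context rather than from the notes themselves):

\[
  n \;\ge\; 1 + d \sum_{i=0}^{k-1} (d-1)^{i}
  \qquad \text{for a } d\text{-regular graph of girth } 2k+1 .
\]

In particular, graphs of high girth cannot be too dense, which is the fact exploited in the spanner application of Section 3.4.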

284 citations


Journal ArticleDOI
TL;DR: This paper relaxes the condition of orthogonality to design a biorthogonal pair of graph-wavelets that are k-hop localized with compact spectral spread and still satisfy the perfect reconstruction conditions.
Abstract: This paper extends previous results on wavelet filterbanks for data defined on graphs from the case of orthogonal transforms to more general and flexible biorthogonal transforms. As in the recent work, the construction proceeds in two steps: first we design “one-dimensional” two-channel filterbanks on bipartite graphs, and then extend them to “multi-dimensional” separable two-channel filterbanks for arbitrary graphs via a bipartite subgraph decomposition. We specifically design wavelet filters based on the spectral decomposition of the graph, and state sufficient conditions for the filterbanks to be perfect reconstruction and orthogonal. While our previous designs, referred to as graph-QMF filterbanks, are perfect reconstruction and orthogonal, they are not exactly k-hop localized, i.e., the computation at each node is not localized to a small k-hop neighborhood around the node. In this paper, we relax the condition of orthogonality to design a biorthogonal pair of graph-wavelets that are k-hop localized with compact spectral spread and still satisfy the perfect reconstruction conditions. The design is analogous to the standard Cohen-Daubechies-Feauveau's (CDF) construction of factorizing a maximally-flat Daubechies half-band filter. Preliminary results demonstrate that the proposed filterbanks can be useful for both standard signal processing applications as well as for signals defined on arbitrary graphs.
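As a toy illustration of filters defined on the spectral decomposition of a graph (this is only a generic spectral filter pair, not the paper's graphBior construction), the Python sketch below splits a signal on a small bipartite graph into complementary lowpass and highpass channels:

import numpy as np

# Generic spectral filter pair on a small bipartite graph (illustration only):
# take the symmetric normalized Laplacian, diagonalize it, and split a signal into
# a lowpass and a mirrored highpass channel whose responses sum to one.
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)   # 4-cycle, a bipartite graph
d = A.sum(axis=1)
L = np.eye(4) - A / np.sqrt(np.outer(d, d)) # normalized Laplacian, spectrum in [0, 2]
lam, U = np.linalg.eigh(L)
h_lo = np.cos(np.pi * lam / 4) ** 2         # lowpass response h(lam)
h_hi = np.sin(np.pi * lam / 4) ** 2         # highpass response equals h(2 - lam)
x = np.array([1.0, -2.0, 0.5, 3.0])         # a signal on the four nodes
x_lo = U @ (h_lo * (U.T @ x))
x_hi = U @ (h_hi * (U.T @ x))
print(np.allclose(x_lo + x_hi, x))          # responses sum to 1, so the channels add back to x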

235 citations


Journal ArticleDOI
TL;DR: In their classical paper, Nordhaus and Gaddum gave lower and upper bounds on the sum and the product of the chromatic number of a graph and its complement, in terms of the order of the graph.
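For context, the classical 1956 inequalities being referred to are (a standard statement, not quoted from the surveyed article): for a graph G of order n with complement \(\overline{G}\),

\[
  2\sqrt{n} \;\le\; \chi(G) + \chi(\overline{G}) \;\le\; n+1,
  \qquad
  n \;\le\; \chi(G)\,\chi(\overline{G}) \;\le\; \Bigl(\tfrac{n+1}{2}\Bigr)^{2}.
\]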

198 citations


Journal ArticleDOI
TL;DR: It is explained what it means for one graph to be a spectral approximation of another, and the development of algorithms for spectral sparsification is reviewed, including a faster algorithm for finding approximate maximum flows and minimum cuts in an undirected network.
Abstract: Graph sparsification is the approximation of an arbitrary graph by a sparse graph. We explain what it means for one graph to be a spectral approximation of another and review the development of algorithms for spectral sparsification. In addition to being an interesting concept, spectral sparsification has been an important tool in the design of nearly linear-time algorithms for solving systems of linear equations in symmetric, diagonally dominant matrices. The fast solution of these linear systems has already led to breakthrough results in combinatorial optimization, including a faster algorithm for finding approximate maximum flows and minimum cuts in an undirected network.
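One common way to state the notion of spectral approximation alluded to above (a standard convention, not a quotation from the article): a sparse graph H is a (1 ± ε)-spectral approximation of G if their Laplacian quadratic forms agree up to a 1 ± ε factor,

\[
  (1-\varepsilon)\, x^{\top} L_G\, x \;\le\; x^{\top} L_H\, x \;\le\; (1+\varepsilon)\, x^{\top} L_G\, x
  \qquad \text{for all } x \in \mathbb{R}^{V},
\]

where \(L_G\) and \(L_H\) are the Laplacian matrices of G and H.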

193 citations


Proceedings ArticleDOI
13 May 2013
TL;DR: This work finds that the space of subgraph frequencies is governed both by its combinatorial properties, based on extremal results that constrain all graphs, and by its empirical properties, manifested in the way that real social graphs appear to lie near a simple one-dimensional curve through this space.
Abstract: A growing set of on-line applications are generating data that can be viewed as very large collections of small, dense social graphs --- these range from sets of social groups, events, or collaboration projects to the vast collection of graph neighborhoods in large social networks. A natural question is how to usefully define a domain-independent 'coordinate system' for such a collection of graphs, so that the set of possible structures can be compactly represented and understood within a common space. In this work, we draw on the theory of graph homomorphisms to formulate and analyze such a representation, based on computing the frequencies of small induced subgraphs within each graph. We find that the space of subgraph frequencies is governed both by its combinatorial properties --- based on extremal results that constrain all graphs --- as well as by its empirical properties --- manifested in the way that real social graphs appear to lie near a simple one-dimensional curve through this space. We develop flexible frameworks for studying each of these aspects. For capturing empirical properties, we characterize a simple stochastic generative model, a single-parameter extension of Erdos-Renyi random graphs, whose stationary distribution over subgraphs closely tracks the one-dimensional concentration of the real social graph families. For the extremal properties, we develop a tractable linear program for bounding the feasible space of subgraph frequencies by harnessing a toolkit of known extremal graph theory. Together, these two complementary frameworks shed light on a fundamental question pertaining to social graphs: what properties of social graphs are 'social' properties and what properties are 'graph' properties? We conclude with a brief demonstration of how the coordinate system we examine can also be used to perform classification tasks, distinguishing between structures arising from different types of social graphs.
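A minimal Python sketch of the "coordinate system" idea, restricted to 3-node subgraphs and written for illustration only (the paper works with larger induced subgraphs): it computes the fraction of vertex triples inducing 0, 1, 2, or 3 edges.

from itertools import combinations

# Fraction of vertex triples inducing 0, 1, 2 or 3 edges: the 3-node slice of the
# subgraph-frequency coordinates.
def triad_frequencies(adj):
    counts = [0, 0, 0, 0]
    for a, b, c in combinations(list(adj), 3):
        k = (b in adj[a]) + (c in adj[a]) + (c in adj[b])
        counts[k] += 1
    total = sum(counts) or 1
    return [x / total for x in counts]

adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
print(triad_frequencies(adj))   # a triangle plus a pendant vertex -> [0, 0.25, 0.5, 0.25]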

186 citations


Proceedings ArticleDOI
07 Oct 2013
TL;DR: A very simple percolation-based graph matching algorithm is proposed and analyzed that incrementally maps every pair of nodes (i,j) with at least r neighboring mapped pairs; its simplicity makes possible a rigorous analysis that relies on recent advances in bootstrap percolation theory for the G(n,p) random graph.
Abstract: Graph matching is a generalization of the classic graph isomorphism problem. By using only their structures a graph-matching algorithm finds a map between the vertex sets of two similar graphs. This has applications in the de-anonymization of social and information networks and, more generally, in the merging of structural data from different domains. One class of graph-matching algorithms starts with a known seed set of matched node pairs. Despite the success of these algorithms in practical applications, their performance has been observed to be very sensitive to the size of the seed set. The lack of a rigorous understanding of parameters and performance makes it difficult to design systems and predict their behavior. In this paper, we propose and analyze a very simple percolation-based graph matching algorithm that incrementally maps every pair of nodes (i,j) with at least r neighboring mapped pairs. The simplicity of this algorithm makes possible a rigorous analysis that relies on recent advances in bootstrap percolation theory for the G(n,p) random graph. We prove conditions on the model parameters in which percolation graph matching succeeds, and we establish a phase transition in the size of the seed set. We also confirm through experiments that the performance of percolation graph matching is surprisingly good, both for synthetic graphs and real social-network data.
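A simplified reading of the percolation rule described in the abstract, as a Python sketch; details such as the order in which candidate pairs are examined and how ties are broken are assumptions of this sketch, not taken from the paper.

from collections import defaultdict

def percolation_match(adj1, adj2, seeds, r=2):
    # mapped: node of G1 -> node of G2; marks: candidate pair -> number of mapped neighbouring pairs
    mapped = dict(seeds)
    used = set(mapped.values())
    marks = defaultdict(int)
    frontier = list(mapped.items())
    while frontier:
        i, j = frontier.pop()
        for a in adj1[i]:
            for b in adj2[j]:
                if a in mapped or b in used:
                    continue
                marks[(a, b)] += 1
                if marks[(a, b)] >= r:      # the pair "percolates": map it and let it spread marks
                    mapped[a] = b
                    used.add(b)
                    frontier.append((a, b))
    return mapped

adj1 = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
adj2 = {'a': {'b', 'c'}, 'b': {'a', 'c'}, 'c': {'a', 'b', 'd'}, 'd': {'c'}}
print(percolation_match(adj1, adj2, seeds={1: 'a', 2: 'b'}))   # node 4 stays unmapped: only one mark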

177 citations


Journal ArticleDOI
TL;DR: This paper considers different classes of graphs, roughly differentiated by the complexity of the labels defined for both vertices and edges, aiming at explaining some significant instances of each graph matching methodology mainly considered in the technical literature.
Abstract: In this paper, we propose a survey concerning the state of the art of the graph matching problem, conceived as the most important element in the definition of inductive inference engines in graph-based pattern recognition applications. We review both methodological and algorithmic results, focusing on inexact graph matching procedures. We consider different classes of graphs that are roughly differentiated considering the complexity of the defined labels for both vertices and edges. Emphasis will be given to the understanding of the underlying methodological aspects of each identified research branch. A selection of inexact graph matching algorithms is proposed and synthetically described, aiming at explaining some significant instances of each graph matching methodology mainly considered in the technical literature.

173 citations


Posted Content
TL;DR: This survey discusses both classical text-book type properties and some advanced properties of graph sampling, and provides a taxonomy of different graph sampling objectives and graph sampling approaches.
Abstract: Graph sampling is a technique to pick a subset of vertices and/or edges from the original graph. It has a wide spectrum of applications, e.g. surveying hidden populations in sociology [54], visualizing social graphs [29], scaling down the Internet AS graph [27], graph sparsification [8], etc. In some scenarios, the whole graph is known and the purpose of sampling is to obtain a smaller graph. In other scenarios, the graph is unknown and sampling is regarded as a way to explore the graph. Commonly used techniques are Vertex Sampling, Edge Sampling and Traversal Based Sampling. We provide a taxonomy of different graph sampling objectives and graph sampling approaches. The relations between these approaches are formally argued and a general framework to bridge theoretical analysis and practical implementation is provided. Although smaller in size, sampled graphs may be similar to the original graphs in some way. We are particularly interested in what graph properties are preserved given a sampling procedure. If some properties are preserved, we can estimate them on the sampled graphs, which gives a way to construct efficient estimators. If one algorithm relies on the preserved properties, we can expect that it gives similar output on the original and sampled graphs. This leads to a systematic way to accelerate a class of graph algorithms. In this survey, we discuss both classical text-book type properties and some advanced properties. The landscape is tabulated and we see a lot of missing work in this field. Some theoretical studies are collected in this survey and simple extensions are made. Most previous numerical evaluation works come in an ad hoc fashion, i.e. they evaluate different types of graphs, different sets of properties, and different sampling algorithms. A systematic and neutral evaluation is needed to shed light on further graph sampling studies.
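A hedged Python sketch of the three technique families named above (Vertex Sampling, Edge Sampling, and a random-walk instance of Traversal Based Sampling); the parameter choices and return format are arbitrary illustrations, not the survey's definitions, and node labels are assumed sortable.

import random

def vertex_sample(adj, k):
    keep = set(random.sample(list(adj), k))
    return {v: adj[v] & keep for v in keep}            # induced subgraph on k random vertices

def edge_sample(adj, p):
    edges = {(u, v) for u in adj for v in adj[u] if u < v}
    sub = {v: set() for v in adj}
    for u, v in edges:
        if random.random() < p:                        # keep each edge independently
            sub[u].add(v)
            sub[v].add(u)
    return sub

def random_walk_sample(adj, start, steps):
    v, visited = start, {start}
    for _ in range(steps):                             # traversal-based: explore along a walk
        v = random.choice(sorted(adj[v]))
        visited.add(v)
    return {u: adj[u] & visited for u in visited}

adj = {0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1}, 3: {1}}
print(vertex_sample(adj, 2), edge_sample(adj, 0.5), random_walk_sample(adj, 0, 5))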

Posted Content
TL;DR: In this article, the authors discuss the work of many authors on various matrices used to study signed graphs, concentrating on adjacency and incidence matrices and the closely related topics of Kirchhoff (`Laplacian') matrices, line graphs, and very strong regularity.
Abstract: I discuss the work of many authors on various matrices used to study signed graphs, concentrating on adjacency and incidence matrices and the closely related topics of Kirchhoff (`Laplacian') matrices, line graphs, and very strong regularity.

Proceedings ArticleDOI
26 May 2013
TL;DR: This framework extends traditional discrete signal processing theory to structured datasets by viewing them as signals represented by graphs, so that signal coefficients are indexed by graph nodes and relations between them are represented by weighted graph edges.
Abstract: We propose a novel discrete signal processing framework for the representation and analysis of datasets with complex structure. Such datasets arise in many social, economic, biological, and physical networks. Our framework extends traditional discrete signal processing theory to structured datasets by viewing them as signals represented by graphs, so that signal coefficients are indexed by graph nodes and relations between them are represented by weighted graph edges. We discuss the notions of signals and filters on graphs, and define the concepts of the spectrum and Fourier transform for graph signals. We demonstrate their relation to the generalized eigenvector basis of the graph adjacency matrix and study their properties. As a potential application of the graph Fourier transform, we consider the efficient representation of structured data that utilizes the sparseness of graph signals in the frequency domain.
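A sketch of the adjacency-based graph Fourier transform described above, restricted to an undirected graph so that the eigenvector basis is orthonormal (the general case in the paper uses generalized eigenvectors); the graph and signal are arbitrary toy data.

import numpy as np

A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)        # adjacency matrix of a 4-cycle
freqs, V = np.linalg.eigh(A)                     # graph "frequencies" and Fourier basis
s = np.array([3.0, 1.0, 0.0, -2.0])              # a signal indexed by the graph nodes
s_hat = V.T @ s                                  # graph Fourier transform of s
print(freqs, s_hat, np.allclose(V @ s_hat, s))   # the inverse transform recovers s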


Journal ArticleDOI
TL;DR: A new set of model specifications, obtained by including bipartite graph configurations involving more than four nodes, is proposed, based on a hierarchy of dependence structures within which different dependence assumptions may be located.

Proceedings ArticleDOI
23 Jul 2013
TL;DR: In this article, an improved parallel algorithm for decomposing an undirected unweighted graph into small diameter pieces with a small fraction of the edges in between is presented, which is based on the shifted shortest path approach introduced in [Blelloch, Gupta, Koutis, Miller, Peng, Tangwongsan, SPAA 2011].
Abstract: We show an improved parallel algorithm for decomposing an undirected unweighted graph into small diameter pieces with a small fraction of the edges in between. These decompositions form critical subroutines in a number of graph algorithms. Our algorithm builds upon the shifted shortest path approach introduced in [Blelloch, Gupta, Koutis, Miller, Peng, Tangwongsan, SPAA 2011]. By combining various stages of the previous algorithm, we obtain a significantly simpler algorithm with the same asymptotic guarantees as the best sequential algorithm.
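A sequential toy version of the shifted-shortest-path idea referenced above (an illustration of the exponential-shift technique, not the parallel algorithm of the paper): every vertex draws an exponential shift and each vertex joins the cluster of the centre minimising its shifted distance; the rate parameter beta is an arbitrary choice here.

import heapq
import random

def shifted_decomposition(adj, beta=0.5, seed=0):
    # vertex v joins the centre u minimising dist(u, v) - delta_u (unit edge lengths)
    rng = random.Random(seed)
    delta = {u: rng.expovariate(beta) for u in adj}
    heap = [(-delta[u], u, u) for u in adj]     # start each vertex "in the past" by its shift
    heapq.heapify(heap)
    dist, owner = {}, {}
    while heap:
        d, v, centre = heapq.heappop(heap)
        if v in dist:
            continue
        dist[v], owner[v] = d, centre
        for w in adj[v]:
            if w not in dist:
                heapq.heappush(heap, (d + 1, w, centre))
    return owner                                # vertex -> centre of its cluster

adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
print(shifted_decomposition(adj))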

Proceedings ArticleDOI
22 Jun 2013
TL;DR: A novel, efficient threshold-based graph decomposition algorithm, with time complexity O(l × |E|), is devised to decompose a graph G at each iteration, where l usually is a small integer with l ≪ |V|.
Abstract: Efficiently computing k-edge connected components in a large graph, G = (V, E), where V is the vertex set and E is the edge set, is a long standing research problem. It is not only fundamental in graph analysis but also crucial in graph search optimization algorithms. Considering that existing techniques for computing k-edge connected components are quite time consuming and are unlikely to be scalable for large scale graphs, in this paper we firstly propose a novel graph decomposition paradigm to iteratively decompose a graph G for computing its k-edge connected components such that the number of drilling-down iterations h is bounded by the "depth" of the k-edge connected components nested together to form G, where h usually is a small integer in practice. Secondly, we devise a novel, efficient threshold-based graph decomposition algorithm, with time complexity O(l × |E|), to decompose a graph G at each iteration, where l usually is a small integer with l ≪ |V|. As a result, our algorithm for computing k-edge connected components significantly improves the time complexity of an existing state-of-the-art technique from O(|V|²|E| + |V|³ log |V|) to O(h × l × |E|). Finally, we conduct extensive performance studies on large real and synthetic graphs. The performance studies demonstrate that our techniques significantly outperform the state-of-the-art solution by several orders of magnitude.

Posted Content
TL;DR: In this article, the authors determine the asymptotic number of edges in the maximal triangle-free graph produced by the triangle-free process and bound its independence number, which gives a lower bound on the Ramsey number R(3,t) within a 4+o(1) factor of the best known upper bound.
Abstract: The triangle-free process begins with an empty graph on n vertices and iteratively adds edges chosen uniformly at random subject to the constraint that no triangle is formed. We determine the asymptotic number of edges in the maximal triangle-free graph at which the triangle-free process terminates. We also bound the independence number of this graph, which gives an improved lower bound on the Ramsey numbers R(3,t): we show R(3,t) > (1-o(1)) t^2 / (4 log t), which is within a 4+o(1) factor of the best known upper bound. Our improvement on previous analyses of this process exploits the self-correcting nature of key statistics of the process. Furthermore, we determine which bounded size subgraphs are likely to appear in the maximal triangle-free graph produced by the triangle-free process: they are precisely those triangle-free graphs with density at most 2.
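A toy Python simulation of the random greedy process described above, for small n; it scans the vertex pairs in a uniformly random order and adds each pair unless it would close a triangle (treat this as an illustration of the process, not as a tool for reproducing the paper's asymptotics).

import random
from itertools import combinations

def triangle_free_process(n, seed=0):
    rng = random.Random(seed)
    adj = {v: set() for v in range(n)}
    pairs = list(combinations(range(n), 2))
    rng.shuffle(pairs)                          # a uniformly random order of the vertex pairs
    for u, v in pairs:
        if not (adj[u] & adj[v]):               # no common neighbour, so no triangle is formed
            adj[u].add(v)
            adj[v].add(u)
    return adj

G = triangle_free_process(30)
print(sum(len(nbrs) for nbrs in G.values()) // 2, "edges in the resulting maximal triangle-free graph")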

Journal ArticleDOI
TL;DR: A revised notion of graph pattern matching is proposed, based on a notion of bounded simulation, which extends graph simulation by specifying the connectivity of nodes in a graph within a predefined number of hops; it is shown that bounded simulation is able to find sensible matches that the traditional matching notions fail to catch.
Abstract: Graph pattern matching is commonly used in a variety of emerging applications such as social network analysis. These applications highlight the need for studying the following two issues. First, graph pattern matching is traditionally defined in terms of subgraph isomorphism or graph simulation. These notions, however, often impose too strong a topological constraint on graphs to identify meaningful matches. Second, in practice a graph is typically large, and is frequently updated with small changes. It is often prohibitively expensive to recompute matches starting from scratch via batch algorithms when the graph is updated. This article studies these two issues. (1) We propose to define graph pattern matching based on a notion of bounded simulation, which extends graph simulation by specifying the connectivity of nodes in a graph within a predefined number of hops. We show that bounded simulation is able to find sensible matches that the traditional matching notions fail to catch. We also show that matching via bounded simulation is in cubic time, by giving such an algorithm. (2) We provide an account of results on incremental graph pattern matching, for matching defined with graph simulation, bounded simulation, and subgraph isomorphism. We show that the incremental matching problem is unbounded, that is, its cost is not determined alone by the size of the changes in the input and output, for all these matching notions. Nonetheless, when matching is defined in terms of simulation or bounded simulation, incremental matching is semibounded, that is, its worst-case time complexity is bounded by a polynomial in the size of the changes in the input, output, and auxiliary information that is necessarily maintained to reuse previous computation, and the size of graph patterns. We also develop incremental matching algorithms for graph simulation and bounded simulation, by minimizing unnecessary recomputation. In contrast, matching based on subgraph isomorphism is neither bounded nor semibounded. (3) We experimentally verify the effectiveness and efficiency of these algorithms, and show that: (a) the revised notion of graph pattern matching allows us to identify communities commonly found in real-life networks, and (b) the incremental algorithms substantially outperform their batch counterparts in response to small changes. These suggest a promising framework for real-life graph pattern matching.
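For orientation, here is a Python sketch of plain graph simulation, the base notion that the article's bounded simulation extends with hop-count bounds (which this toy version deliberately omits): candidates are pruned until every pattern edge is witnessed in the data graph.

def graph_simulation(pattern_adj, pattern_label, data_adj, data_label):
    # start from label-compatible candidates, then repeatedly prune candidates
    # that lack a successor matching some required pattern successor
    sim = {u: {v for v in data_adj if data_label[v] == pattern_label[u]} for u in pattern_adj}
    changed = True
    while changed:
        changed = False
        for u in pattern_adj:
            for u2 in pattern_adj[u]:                    # pattern edge u -> u2
                keep = {v for v in sim[u] if data_adj[v] & sim[u2]}
                if keep != sim[u]:
                    sim[u], changed = keep, True
    return sim

pattern_adj = {'p': {'q'}, 'q': set()}
pattern_label = {'p': 'A', 'q': 'B'}
data_adj = {1: {2}, 2: set(), 3: set()}
data_label = {1: 'A', 2: 'B', 3: 'A'}
print(graph_simulation(pattern_adj, pattern_label, data_adj, data_label))
# {'p': {1}, 'q': {2}}: node 3 is pruned because it has no 'B'-labelled successor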

Journal ArticleDOI
TL;DR: The results imply that all of these graph classes have boolean-width O(log n), which leads to polynomial-time algorithms for a large class of locally checkable vertex subset and vertex partitioning problems on all of these graph classes.

Proceedings ArticleDOI
01 Oct 2013
TL;DR: This paper casts AGM in a Bayesian framework based on a clean definition of the probability of correctly mapping two nodes, which leads to a polynomial time algorithm that does not require side information.
Abstract: Approximate graph matching (AGM) refers to the problem of mapping the vertices of two structurally similar graphs, which has applications in social networks, computer vision, chemistry, and biology. Given its computational cost, AGM has mostly been limited to either small graphs (e.g., tens or hundreds of nodes), or to large graphs in combination with side information beyond the graph structure (e.g., a seed set of pre-mapped node pairs). In this paper, we cast AGM in a Bayesian framework based on a clean definition of the probability of correctly mapping two nodes, which leads to a polynomial time algorithm that does not require side information. Node features such as degree and distances to other nodes are used as fingerprints. The algorithm proceeds in rounds, such that the most likely pairs are mapped first; these pairs subsequently generate additional features in the fingerprints of other nodes. We evaluate our method over real social networks and show that it achieves a very low matching error provided the two graphs are sufficiently similar. We also evaluate our method on random graph models to characterize its behavior under various levels of node clustering.

Proceedings ArticleDOI
26 May 2013
TL;DR: This work proposes a novel discrete signal processing framework for structured datasets that arise from social, economic, biological, and physical networks and demonstrates the application of graph filters to data classification by demonstrating that a classifier can be interpreted as an adaptive graph filter.
Abstract: We propose a novel discrete signal processing framework for structured datasets that arise from social, economic, biological, and physical networks. Our framework extends traditional discrete signal processing theory to datasets with complex structure that can be represented by graphs, so that data elements are indexed by graph nodes and relations between elements are represented by weighted graph edges. We interpret such datasets as signals on graphs, introduce the concept of graph filters for processing such signals, and discuss important properties of graph filters, including linearity, shift-invariance, and invertibility. We then demonstrate the application of graph filters to data classification by demonstrating that a classifier can be interpreted as an adaptive graph filter. Our experiments demonstrate that the proposed approach achieves high classification accuracy.
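A minimal Python illustration of a graph filter as a polynomial in the adjacency (shift) matrix, in the spirit of the framework above; the filter taps are arbitrary and nothing here reproduces the paper's classifier construction.

import numpy as np

A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)      # path graph on 3 nodes
taps = [0.5, 0.3, 0.2]                      # h(A) = 0.5*I + 0.3*A + 0.2*A^2
H = sum(h * np.linalg.matrix_power(A, k) for k, h in enumerate(taps))
s = np.array([1.0, 0.0, -1.0])
print(H @ s)                                # the filtered signal
print(np.allclose(H @ A, A @ H))            # shift-invariance: the filter commutes with the shift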

Journal ArticleDOI
TL;DR: This paper investigates whether the Jensen-Shannon divergence can be used as a means of establishing a graph kernel, and uses kernel principal component analysis (kPCA) to embed graphs into a feature space.
Abstract: Graph-based representations have been proved powerful in computer vision. The challenge that arises with large amounts of graph data is that of computationally burdensome edit distance computation. Graph kernels can be used to formulate efficient algorithms to deal with high dimensional data, and have been proved an elegant way to overcome this computational bottleneck. In this paper, we investigate whether the Jensen-Shannon divergence can be used as a means of establishing a graph kernel. The Jensen-Shannon kernel is a nonextensive information theoretic kernel, and is defined using the entropy and mutual information computed from probability distributions over the structures being compared. To establish a Jensen-Shannon graph kernel, we explore two different approaches. The first of these is based on the von Neumann entropy associated with a graph. The second approach uses the Shannon entropy associated with the probability state vector for a steady state random walk on a graph. We compare the two resulting graph kernels for the problem of graph clustering. We use kernel principal component analysis (kPCA) to embed graphs into a feature space. Experimental results reveal that the method gives good classification results on graphs extracted both from an object recognition database and from an application in bioinformatics.
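A hedged Python sketch of the two per-graph entropies mentioned above, under common conventions (density matrix L/tr(L) for the von Neumann entropy, degree-proportional stationary distribution for the random-walk Shannon entropy, no isolated vertices assumed); the kernel itself, which combines entropies of individual and composite structures, is not reproduced here.

import numpy as np

def graph_entropies(A):
    d = A.sum(axis=1)
    vol = d.sum()                                  # 2|E|
    rho = (np.diag(d) - A) / vol                   # Laplacian scaled to unit trace ("density matrix")
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > 1e-12]
    h_vn = float(-(lam * np.log(lam)).sum())       # von Neumann entropy
    pi = d / vol                                   # stationary distribution of a random walk
    h_sh = float(-(pi * np.log(pi)).sum())         # Shannon entropy of the stationary distribution
    return h_vn, h_sh

A = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)   # a triangle
print(graph_entropies(A))                                      # (log 2, log 3) for this example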

Journal ArticleDOI
TL;DR: It is proved that all RAC graphs having maximal edge density belong to the intersection of the two families, and that there is no inclusion relationship between the two families.

Journal ArticleDOI
TL;DR: In this article, the authors give a delocalization estimate for eigenfunctions of the discrete Laplacian on large (d+1)-regular graphs, showing that any subset of the graph supporting ε of the L² mass of an eigenfunction must be large.
Abstract: We give a delocalization estimate for eigenfunctions of the discrete Laplacian on large (d+1)-regular graphs, showing that any subset of the graph supporting ε of the L² mass of an eigenfunction must be large. For graphs satisfying a mild girth-like condition, this bound will be exponential in the girth of the graph.

Journal ArticleDOI
22 Mar 2013
TL;DR: In this paper, the authors proposed an algorithm to construct in- and out-degree sequences from samples of i.i.d. observations from F and G, respectively, that with high probability will be graphical, that is, from which a simple directed graph can be drawn.
Abstract: Given two distributions F and G on the nonnegative integers we propose an algorithm to construct in- and out-degree sequences from samples of i.i.d. observations from F and G, respectively, that with high probability will be graphical, that is, from which a simple directed graph can be drawn. We then analyze a directed version of the configuration model and show that, provided that F and G have finite variance, the probability of obtaining a simple graph is bounded away from zero as the number of nodes grows. We show that conditional on the resulting graph being simple, the in- and out-degree distributions are (approximately) F and G for large size graphs. Moreover, when the degree distributions have only finite mean we show that the elimination of self-loops and multiple edges does not significantly change the degree distributions in the resulting simple graph.
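A toy Python sketch of the directed configuration model analyzed above: pair out-stubs with in-stubs uniformly at random and report whether the resulting multigraph happens to be simple (the paper's construction of the degree sequences from F and G is not reproduced, and the example degree sequences are arbitrary).

import random

def directed_configuration_model(out_deg, in_deg, seed=0):
    # pair out-stubs with in-stubs uniformly at random; the result is a multigraph in general
    assert sum(out_deg) == sum(in_deg)
    rng = random.Random(seed)
    out_stubs = [v for v, d in enumerate(out_deg) for _ in range(d)]
    in_stubs = [v for v, d in enumerate(in_deg) for _ in range(d)]
    rng.shuffle(in_stubs)
    edges = list(zip(out_stubs, in_stubs))
    simple = len(set(edges)) == len(edges) and all(u != v for u, v in edges)
    return edges, simple

edges, simple = directed_configuration_model([1, 2, 0, 1], [2, 0, 1, 1])
print(edges, "simple" if simple else "has self-loops or multi-edges")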

Journal ArticleDOI
TL;DR: It is shown that there exist disk graphs on n vertices such that in every realization by integer disks at least one coordinate or radius is 2^(2^Ω(n)) and, on the other hand, every disk graph can be realized by disks with integer coordinates and radii that are at most 2^(2^O(n)); analogous results are shown for unit disk graphs and segment graphs.

Book
27 Jun 2013
TL;DR: In this paper, the authors introduce twisted duality, cycle family graphs, and embedded graph equivalence, as well as graph polynomials and their interactions with graph polygons.
Abstract: 1. Embedded Graphs. 2. Generalised Dualities. 3. Twisted duality, cycle family graphs, and embedded graph equivalence. 4. Interactions with Graph Polynomials. 5. Applications to Knot Theory. References. Index.

Journal ArticleDOI
TL;DR: It is proved that the study of hyperbolicity on graphs can be reduced to the study of the same graph without its loops and multiple edges, and it is shown how the hyperbolicity of a graph changes upon adding or deleting finitely or infinitely many edges.

Journal ArticleDOI
TL;DR: In this article, an even simpler linear-time algorithm is presented that computes a structure from which both the 2-vertex- and 2-edge-connectivity of a graph can be easily "read off".