Graph (abstract data type)

About: Graph (abstract data type) is a research topic. Over the lifetime, 69,988 publications have been published within this topic, receiving 1,218,314 citations. The topic is also known as: graph.


Papers
Journal Article (DOI)
TL;DR: Two backtracking algorithms are presented, using a branch-and-bound technique [4] to cut off branches that cannot lead to a clique; the second generates cliques in a rather unpredictable order in an attempt to minimize the number of branches to be traversed.
Abstract: Introduction. A maximal complete subgraph (clique) is a complete subgraph that is not contained in any other complete subgraph. A recent paper [1] describes a number of techniques to find maximal complete subgraphs of a given undirected graph. In this paper, we present two backtracking algorithms, using a branch-and-bound technique [4] to cut off branches that cannot lead to a clique. The first version is a straightforward implementation of the basic algorithm. It is mainly presented to illustrate the method used. This version generates cliques in alphabetic (lexicographic) order. The second version is derived from the first and generates cliques in a rather unpredictable order in an attempt to minimize the number of branches to be traversed. This version tends to produce the larger cliques first and to generate sequentially cliques having a large common intersection. The detailed algorithm for version 2 is presented here.

Description of the algorithm -- Version 1. Three sets play an important role in the algorithm. (1) The set compsub is the set to be extended by a new point or shrunk by one point on traveling along a branch of the backtracking tree. The points that are eligible to extend compsub, i.e. that are connected to all points in compsub, are collected recursively in the remaining two sets. (2) The set candidates is the set of all points that will in due time serve as an extension to the present configuration of compsub. (3) The set not is the set of all points that have at an earlier stage already served as an extension of the present configuration of compsub and are now explicitly excluded. The reason for maintaining this set not will soon be made clear.

The core of the algorithm consists of a recursively defined extension operator that will be applied to the three sets just described. It has the duty to generate all extensions of the given configuration of compsub that it can make with the given set of candidates and that do not contain any of the points in not. To put it differently: all extensions of compsub containing any point in not have already been generated. The basic mechanism now consists of the following five steps:

2,405 citations
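
The three-set mechanism sketched in this abstract (the listing of the five steps is cut off in this excerpt) is the scheme widely known as the Bron-Kerbosch algorithm. A minimal Python sketch of the idea, using the now-common names R, P, and X for compsub, candidates, and not; the graph encoding and example are illustrative choices, not the paper's ALGOL:

```python
# R plays the role of "compsub", P of "candidates", X of "not".
# `adj` maps each vertex to the set of its neighbours (undirected graph).

def bron_kerbosch(adj, R=None, P=None, X=None):
    if R is None:
        R, P, X = set(), set(adj), set()
    if not P and not X:
        yield set(R)   # nothing left to extend or exclude: R is a maximal clique
        return
    for v in list(P):
        # Extend compsub with v; only neighbours of v stay eligible.
        yield from bron_kerbosch(adj, R | {v}, P & adj[v], X & adj[v])
        # v has now served as an extension: move it from candidates to "not",
        # so no clique containing v is reported twice.
        P.remove(v)
        X.add(v)

adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2}, 4: {5}, 5: {4}}
print(list(bron_kerbosch(adj)))   # e.g. [{1, 2, 3}, {4, 5}]
```

Moving each tried candidate into the "not" set is exactly the branch-cutting device the abstract motivates: any extension containing such a point has already been generated along an earlier branch.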

Journal Article
TL;DR: This survey collects everything the author could find on graph labeling techniques, including results scattered across journals that are not widely available.
Abstract: A graph labeling is an assignment of integers to the vertices or edges, or both, subject to certain conditions. Graph labelings were first introduced in the late 1960s. In the intervening years, dozens of graph labeling techniques have been studied in over 1000 papers. Finding out what has been done for any particular kind of labeling and keeping up with new discoveries is difficult because of the sheer number of papers and because many of the papers have appeared in journals that are not widely available. In this survey I have collected everything I could find on graph labeling. For the convenience of the reader the survey includes a detailed table of contents and index.

2,367 citations
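
To make the definition concrete: a graceful labeling, one of the most-studied schemes in this literature, assigns distinct vertex labels from {0, ..., m} (m being the number of edges) so that the induced edge labels |f(u) - f(v)| are exactly {1, ..., m}. A small illustrative checker (the function and example are mine, not from the survey):

```python
# Check a *graceful* labeling: vertex labels are distinct values in {0, ..., m}
# (m = number of edges) and the induced edge labels |f(u) - f(v)| are {1, ..., m}.

def is_graceful(edges, labels):
    m = len(edges)
    values = set(labels.values())
    if len(values) != len(labels) or not values <= set(range(m + 1)):
        return False
    edge_labels = {abs(labels[u] - labels[v]) for u, v in edges}
    return edge_labels == set(range(1, m + 1))

# The path on 4 vertices (3 edges) is graceful:
# labels 0-3-1-2 induce edge labels 3, 2, 1.
path_edges = [(0, 1), (1, 2), (2, 3)]
print(is_graceful(path_edges, {0: 0, 1: 3, 2: 1, 3: 2}))  # True
```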

Journal Article (DOI)
TL;DR: LexRank is a stochastic graph-based method for computing the relative importance of textual units in Natural Language Processing (NLP), based on the concept of eigenvector centrality.
Abstract: We introduce a stochastic graph-based method for computing the relative importance of textual units for Natural Language Processing. We test the technique on the problem of Text Summarization (TS). Extractive TS relies on the concept of sentence salience to identify the most important sentences in a document or set of documents. Salience is typically defined in terms of the presence of particular important words or in terms of similarity to a centroid pseudo-sentence. We consider a new approach, LexRank, for computing sentence importance based on the concept of eigenvector centrality in a graph representation of sentences. In this model, a connectivity matrix based on intra-sentence cosine similarity is used as the adjacency matrix of the graph representation of sentences. Our system, based on LexRank, ranked in first place in more than one task in the recent DUC 2004 evaluation. In this paper, we present a detailed analysis of our approach and apply it to a larger data set including data from earlier DUC evaluations. We discuss several methods to compute centrality using the similarity graph. The results show that degree-based methods (including LexRank) outperform both centroid-based methods and other systems participating in DUC in most cases. Furthermore, LexRank with threshold outperforms the other degree-based techniques, including continuous LexRank. We also show that our approach is quite insensitive to the noise in the data that may result from an imperfect topical clustering of documents.

2,367 citations
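
The centrality computation described above can be sketched compactly: build the intra-sentence cosine-similarity matrix, threshold it into an adjacency matrix, row-normalize it into a stochastic matrix, and extract the principal eigenvector by power iteration with PageRank-style damping. The sketch below assumes pre-computed sentence vectors (e.g. tf-idf rows); the parameter values are illustrative defaults, not the paper's tuned settings:

```python
import numpy as np

def lexrank_scores(sent_vecs, threshold=0.1, damping=0.85, iters=100):
    """Sketch of threshold-based LexRank over an (n, d) array of sentence
    vectors. Returns a length-n array of centrality scores summing to 1."""
    norms = np.linalg.norm(sent_vecs, axis=1, keepdims=True)
    unit = sent_vecs / np.clip(norms, 1e-12, None)
    sim = unit @ unit.T                        # intra-sentence cosine similarity
    adj = (sim >= threshold).astype(float)     # thresholded adjacency matrix
    # Self-similarity is 1, so every row sum is at least 1 (no zero rows).
    P = adj / adj.sum(axis=1, keepdims=True)   # row-stochastic transition matrix
    n = len(adj)
    r = np.full(n, 1.0 / n)
    for _ in range(iters):                     # power iteration with damping
        r = (1 - damping) / n + damping * (P.T @ r)
    return r / r.sum()
```

Continuous LexRank, which the abstract compares against, would keep the raw cosine weights in place of the 0/1 threshold when building the transition matrix.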

Journal Article (DOI)
TL;DR: A new supervised dimensionality reduction algorithm called marginal Fisher analysis is proposed, in which the intrinsic graph characterizes intraclass compactness by connecting each data point with its neighboring points of the same class, while the penalty graph connects the marginal points and characterizes interclass separability.
Abstract: A large family of algorithms - supervised or unsupervised, stemming from statistics or geometry theory - has been designed to provide different solutions to the problem of dimensionality reduction. Despite the different motivations of these algorithms, we present in this paper a general formulation known as graph embedding to unify them within a common framework. In graph embedding, each algorithm can be considered as the direct graph embedding, or its linear/kernel/tensor extension, of a specific intrinsic graph that describes certain desired statistical or geometric properties of a data set, with constraints from scale normalization or from a penalty graph that characterizes a statistical or geometric property that should be avoided. Furthermore, the graph embedding framework can be used as a general platform for developing new dimensionality reduction algorithms. By utilizing this framework as a tool, we propose a new supervised dimensionality reduction algorithm called marginal Fisher analysis (MFA), in which the intrinsic graph characterizes the intraclass compactness and connects each data point with its neighboring points of the same class, while the penalty graph connects the marginal points and characterizes the interclass separability. We show that MFA effectively overcomes the limitations of the traditional linear discriminant analysis algorithm due to its data distribution assumptions and available projection directions. Real face recognition experiments show the superiority of our proposed MFA in comparison to LDA, also for the corresponding kernel and tensor extensions.

2,339 citations
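
In the graph-embedding formulation, the linear MFA projection comes from a generalized eigenproblem that trades the intrinsic-graph Laplacian off against the penalty-graph Laplacian. Below is a compressed numpy sketch under simplifying assumptions: a per-point variant of the paper's per-class marginal-pair penalty graph, a small ridge term for invertibility, and no kernel/tensor extension; k1, k2 and all names are illustrative, not the paper's notation:

```python
import numpy as np
from scipy.linalg import eigh
from scipy.spatial.distance import cdist

def mfa(X, y, k1=5, k2=20, dim=2):
    """Sketch of marginal Fisher analysis. X: (n, d) samples, y: (n,) labels.
    Intrinsic graph W links each point to its k1 nearest same-class points
    (intraclass compactness); penalty graph Wp links each point to its k2
    nearest other-class points (interclass separability). Returns (d, dim)."""
    n = len(X)
    D = cdist(X, X)                            # pairwise distances
    W = np.zeros((n, n))
    Wp = np.zeros((n, n))
    for i in range(n):
        same = np.where(y == y[i])[0]
        same = same[same != i]
        diff = np.where(y != y[i])[0]
        for j in same[np.argsort(D[i, same])[:k1]]:
            W[i, j] = W[j, i] = 1.0            # intrinsic: same-class kNN
        for j in diff[np.argsort(D[i, diff])[:k2]]:
            Wp[i, j] = Wp[j, i] = 1.0          # penalty: marginal pairs
    L = np.diag(W.sum(1)) - W                  # intrinsic graph Laplacian
    Lp = np.diag(Wp.sum(1)) - Wp               # penalty graph Laplacian
    A = X.T @ L @ X                            # to be minimized (compactness)
    B = X.T @ Lp @ X + 1e-6 * np.eye(X.shape[1])  # ridge keeps B positive definite
    # Minimizing w'Aw / w'Bw: generalized eigenvectors with smallest eigenvalues.
    _, vecs = eigh(A, B)
    return vecs[:, :dim]
```

The same skeleton recovers other graph-embedding instances by swapping in different W and Wp, which is the unification the paper argues for.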


Network Information
Related Topics (5)
Server: 79.5K papers, 1.4M citations, 86% related
Deep learning: 79.8K papers, 2.1M citations, 84% related
Cluster analysis: 146.5K papers, 2.9M citations, 84% related
Reinforcement learning: 46K papers, 1M citations, 84% related
Robustness (computer science): 94.7K papers, 1.6M citations, 84% related
Performance Metrics
No. of papers in the topic in previous years:
Year    Papers
2022    158
2021    7,346
2020    7,228
2019    5,990
2018    4,812
2017    4,094