
Showing papers on "Graph (abstract data type) published in 1997"


Proceedings ArticleDOI
17 Jun 1997
TL;DR: This work treats image segmentation as a graph partitioning problem and proposes a novel global criterion, the normalized cut, for segmenting the graph, which measures both the total dissimilarity between the different groups as well as the total similarity within the groups.
Abstract: We propose a novel approach for solving the perceptual grouping problem in vision. Rather than focusing on local features and their consistencies in the image data, our approach aims at extracting the global impression of an image. We treat image segmentation as a graph partitioning problem and propose a novel global criterion, the normalized cut, for segmenting the graph. The normalized cut criterion measures both the total dissimilarity between the different groups as well as the total similarity within the groups. We show that an efficient computational technique based on a generalized eigenvalue problem can be used to optimize this criterion. We have applied this approach to segmenting static images and found results very encouraging.
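
The criterion lends itself to a compact spectral sketch. Below is a minimal illustration, assuming a dense affinity matrix W and using the second-smallest generalized eigenvector of (D - W)y = lambda*D*y; median thresholding stands in for the paper's splitting-point search, and all names are illustrative, not the authors' code.

```python
import numpy as np
from scipy.linalg import eigh

def normalized_cut_bipartition(W):
    """Bipartition a graph given by affinity matrix W by thresholding the
    second-smallest generalized eigenvector of (D - W) y = lambda * D y."""
    d = W.sum(axis=1)
    D = np.diag(d)
    L = D - W                                # graph Laplacian
    vals, vecs = eigh(L, D)                  # generalized eigenproblem, ascending
    fiedler = vecs[:, 1]                     # second-smallest eigenvector
    return fiedler > np.median(fiedler)      # boolean group labels

# Toy affinity matrix: two tight groups joined by weak 0.01 links.
W = np.array([[0.00, 1.00, 1.00, 0.01, 0.00],
              [1.00, 0.00, 1.00, 0.00, 0.00],
              [1.00, 1.00, 0.00, 0.00, 0.01],
              [0.01, 0.00, 0.00, 0.00, 1.00],
              [0.00, 0.00, 0.01, 1.00, 0.00]])
print(normalized_cut_bipartition(W))         # separates {0,1,2} from {3,4}
```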

11,827 citations


BookDOI
01 Feb 1997
TL;DR: The double-pushout approach to graph transformation, which was invented in the early 1970s, is introduced in the Handbook of Graph Grammars and Computing by Graph Transformation.
Abstract: Edited by G. Rozenberg, the Handbook of Graph Grammars and Computing by Graph Transformation surveys graph transformation as a computing paradigm, including rule-based formalisms built from conditional graph transformation rules and the double-pushout approach, which was invented in the early 1970s.

1,366 citations


Proceedings ArticleDOI
04 May 1997
TL;DR: In this article, the authors presented randomized constructions of linear-time encodable and decodable codes that can transmit over lossy channels at rates extremely close to capacity.
Abstract: We present randomized constructions of linear-time encodable and decodable codes that can transmit over lossy channels at rates extremely close to capacity. The encoding and decoding algorithms for these codes have fast and simple software implementations. Partial implementations of our algorithms are faster by orders of magnitude than the best software implementations of any previous algorithm for this problem. We expect these codes will be extremely useful for applications such as real-time audio and video transmission over the Internet, where lossy channels are common and fast decoding is a requirement. Despite the simplicity of the algorithms, their design and analysis are mathematically intricate. The design requires the careful choice of a random irregular bipartite graph, where the structure of the irregular graph is extremely important. We model the progress of the decoding algorithm by a set of differential equations. The solution to these equations can then be expressed as polynomials in one variable with coefficients determined by the graph structure. Based on these polynomials, we design a graph structure that guarantees successful decoding with high probability.
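
To make the bipartite-graph decoding concrete, here is a hedged sketch of the standard "peeling" process for erasure decoding: any check node with exactly one erased neighbor recovers it by XOR. The tiny graph and code below are illustrative only, not the paper's irregular construction.

```python
def peel_decode(checks, check_vals, received):
    """checks: list of message-bit index lists, one per check node.
    check_vals: XOR of the true message bits in each check.
    received: known bits, with None marking erasures. Decodes in place."""
    progress = True
    while progress:
        progress = False
        for c, members in enumerate(checks):
            unknown = [i for i in members if received[i] is None]
            if len(unknown) == 1:
                # XOR of the check value and all known members recovers the bit.
                val = check_vals[c]
                for i in members:
                    if received[i] is not None:
                        val ^= received[i]
                received[unknown[0]] = val
                progress = True
    return received

# Toy example: 4 message bits, 3 parity checks, bits 1 and 2 erased.
msg = [1, 0, 1, 1]
checks = [[0, 1], [1, 2], [2, 3]]
check_vals = [msg[0] ^ msg[1], msg[1] ^ msg[2], msg[2] ^ msg[3]]
print(peel_decode(checks, check_vals, [1, None, None, 1]))   # -> [1, 0, 1, 1]
```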

872 citations


Journal ArticleDOI
TL;DR: An algorithm for finding the minimum cut of an undirected edge-weighted graph that has a short and compact description, is easy to implement, and has a surprisingly simple proof of correctness.
Abstract: We present an algorithm for finding the minimum cut of an undirected edge-weighted graph. It is simple in every respect. It has a short and compact description, is easy to implement, and has a surprisingly simple proof of correctness. Its runtime matches that of the fastest algorithm known. The runtime analysis is straightforward. In contrast to nearly all approaches so far, the algorithm uses no flow techniques. Roughly speaking, the algorithm consists of about |V| nearly identical phases each of which is a maximum adjacency search.
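
The phase structure is simple enough to sketch in a few lines. The version below assumes a dense symmetric weight matrix and simplified merge bookkeeping; variable names are ours.

```python
import math

def stoer_wagner_min_cut(w):
    """Minimum cut of an undirected graph given as a dense symmetric
    weight matrix (list of lists). Each phase is a maximum-adjacency
    search; the last two vertices added are then merged."""
    n = len(w)
    w = [row[:] for row in w]                # work on a copy
    vertices = list(range(n))
    best = math.inf
    while len(vertices) > 1:
        # Maximum adjacency search: repeatedly add the vertex most
        # strongly connected to the vertices added so far.
        added = [vertices[0]]
        weights = {v: w[vertices[0]][v] for v in vertices[1:]}
        while weights:
            u = max(weights, key=weights.get)
            cut_of_phase = weights.pop(u)    # connectivity of u to the rest
            for v in weights:
                weights[v] += w[u][v]
            added.append(u)
        best = min(best, cut_of_phase)       # cut separating the last vertex
        s, t = added[-2], added[-1]          # merge t into s
        for v in range(n):
            w[s][v] += w[t][v]
            w[v][s] += w[v][t]
        vertices.remove(t)
    return best

print(stoer_wagner_min_cut([[0, 2, 1], [2, 0, 3], [1, 3, 0]]))   # -> 3
```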

764 citations


Journal ArticleDOI
TL;DR: An assortment of methods for finding and counting simple cycles of a given length in directed and undirected graphs improve upon various previously known results.
Abstract: We present an assortment of methods for finding and counting simple cycles of a given length in directed and undirected graphs. Most of the bounds obtained depend solely on the number of edges in the graph in question, and not on the number of vertices. The bounds obtained improve upon various previously known results.
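
The simplest instance of such counting, for cycles of length 3, uses powers of the adjacency matrix: trace(A^3) counts each triangle six times (once per starting vertex and direction). The paper's edge-bound methods are considerably more general than this sketch.

```python
import numpy as np

def count_triangles(adj):
    """Count simple 3-cycles in an undirected graph via trace(A^3) / 6."""
    A = np.array(adj, dtype=np.int64)
    return int(np.trace(A @ A @ A) // 6)

# Toy example: a triangle plus a pendant vertex.
adj = [[0, 1, 1, 0],
       [1, 0, 1, 0],
       [1, 1, 0, 1],
       [0, 0, 1, 0]]
print(count_triangles(adj))   # -> 1
```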

657 citations


Journal ArticleDOI
TL;DR: A method for the construction of a word graph (or lattice) for large-vocabulary, continuous speech recognition is presented, and it is shown that the word graph density can be reduced to an average of about 10 word hypotheses per spoken word with virtually no loss in recognition performance.

424 citations


Journal ArticleDOI
01 Jan 1997
TL;DR: Architectures for distributed fusion, whereby each node processes the data from its own set of sensors and communicates with other nodes to improve on the estimates, are discussed, and the information graph is introduced as a way of modeling information flow in distributed fusion systems and for developing algorithms.
Abstract: Modern surveillance systems often utilize multiple physically distributed sensors of different types to provide complementary and overlapping coverage on targets. In order to generate target tracks and estimates, the sensor data need to be fused. While a centralized processing approach is theoretically optimal, there are significant advantages in distributing the fusion operations over multiple processing nodes. This paper discusses architectures for distributed fusion, whereby each node processes the data from its own set of sensors and communicates with other nodes to improve on the estimates. The information graph is introduced as a way of modeling information flow in distributed fusion systems and for developing algorithms. Fusion for target tracking involves two main operations: estimation and association. Distributed estimation algorithms based on the information graph are presented for arbitrary fusion architectures and related to linear and nonlinear distributed estimation results. The distributed data association problem is discussed in terms of track-to-track association likelihoods. Distributed versions of two popular tracking approaches (joint probabilistic data association and multiple hypothesis tracking) are then presented, and examples of applications are given.
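
One standard building block in such information-graph architectures is fusion in information form, where the common information identified along the graph is subtracted so it is not double counted. The sketch below shows this generic rule for Gaussian estimates; it is not necessarily the paper's exact algorithm, and all names are ours.

```python
import numpy as np

def fuse_information(Y1, y1, Y2, y2, Yc, yc):
    """Fuse two Gaussian estimates in information form (Y = P^-1,
    y = P^-1 x), subtracting common information so it is not counted twice."""
    Yf = Y1 + Y2 - Yc
    yf = y1 + y2 - yc
    Pf = np.linalg.inv(Yf)
    return Pf @ yf, Pf                       # fused state and covariance

# Toy 1-D example: two local estimates sharing a common prior.
P1, x1 = np.array([[0.5]]), np.array([1.0])
P2, x2 = np.array([[0.8]]), np.array([1.2])
Pc, xc = np.array([[2.0]]), np.array([1.1])  # shared prior
Y1, y1 = np.linalg.inv(P1), np.linalg.inv(P1) @ x1
Y2, y2 = np.linalg.inv(P2), np.linalg.inv(P2) @ x2
Yc, yc = np.linalg.inv(Pc), np.linalg.inv(Pc) @ xc
print(fuse_information(Y1, y1, Y2, y2, Yc, yc))
```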

384 citations


Journal ArticleDOI
TL;DR: The main goal is to analyze the ancestral selection graph and to compare it to Kingman's coalescent process; it is found that the distribution of the time to the most recent common ancestor does not depend on the selection coefficient and hence is the same as in the neutral case.

328 citations


Journal ArticleDOI
01 Feb 1997 - Genetics
TL;DR: It is found that when the allele frequencies in the population are already in equilibrium, then the genealogy does not differ much from the neutral case, and this is supported by rigorous results.
Abstract: We introduce the genealogy of a random sample of genes taken from a large haploid population that evolves according to random reproduction with selection and mutation. Without selection, the genealogy is described by Kingman's well-known coalescent process. In the selective case, the genealogy of the sample is embedded in a graph with a coalescing and branching structure. We describe this graph, called the ancestral selection graph, and point out differences and similarities with Kingman's coalescent. We present simulations for a two-allele model with symmetric mutation in which one of the alleles has a selective advantage over the other. We find that when the allele frequencies in the population are already in equilibrium, then the genealogy does not differ much from the neutral case. This is supported by rigorous results. Furthermore, we describe the ancestral selection graph for other selective models with finitely many selection classes, such as the K-allele models, infinitely-many-alleles models, DNA sequence models, and infinitely-many-sites models, and briefly discuss the diploid case.
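
The branching-and-coalescing structure can be illustrated by simulating the lineage count of the ancestral selection graph backwards in time: with k lineages, coalescence occurs at rate k(k-1)/2 and branching at rate k*sigma/2 in the usual diffusion scaling. Treat the scaling and all names below as assumptions of this sketch, not the paper's simulation code.

```python
import random

def asg_lineage_count(sample_size, sigma, rng=random.Random(0)):
    """Trace the number of lineages in the ancestral selection graph back
    to the ultimate ancestor: with k lineages, coalescence happens at
    rate k*(k-1)/2 and branching at rate k*sigma/2."""
    k, t, max_k = sample_size, 0.0, sample_size
    while k > 1:
        coal = k * (k - 1) / 2.0
        branch = k * sigma / 2.0
        t += rng.expovariate(coal + branch)          # time to next event
        if rng.random() < coal / (coal + branch):
            k -= 1                                   # two lineages coalesce
        else:
            k += 1                                   # a lineage branches
        max_k = max(max_k, k)
    return t, max_k            # time to ultimate ancestor, peak graph width

print(asg_lineage_count(5, sigma=1.0))
```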

316 citations


Book
01 Feb 1997
TL;DR: To cover a large part of the theory of hyperedge replacement, structural properties and decision problems, including the membership problem, are addressed.
Abstract: In this survey the concept of hyperedge replacement is presented as an elementary approach to graph and hypergraph generation. In particular, hyperedge replacement graph grammars are discussed as a (hyper)graph-grammatical counterpart to context-free string grammars. To cover a large part of the theory of hyperedge replacement, structural properties and decision problems, including the membership problem, are addressed.

292 citations


Patent
19 Feb 1997
TL;DR: In this article, the authors propose a method and apparatus for generating first and second tree topologies for any source node in a network which can be represented as a node or an edge redundant graph, such that any node in the graph remains connected to the source node via at least one tree.
Abstract: A method and apparatus for generating first and second tree topologies for any source node in a network which can be represented as a node or an edge redundant graph, such that any node in the graph remains connected to the source node via at least one tree even after the failure of a node or an edge. This technique provides a recovery mechanism upon detection of a failure in a network.

Proceedings Article
27 Jul 1997
TL;DR: In this article, the authors describe a method for summarizing similarities and differences in a pair of related documents using a graph representation for text, where concepts denoted by words, phrases, and proper names in the document are represented positionally as nodes in the graph along with edges corresponding to semantic relations between items.
Abstract: We describe a new method for summarizing similarities and differences in a pair of related documents using a graph representation for text. Concepts denoted by words, phrases, and proper names in the document are represented positionally as nodes in the graph along with edges corresponding to semantic relations between items. Given a perspective in terms of which the pair of documents is to be summarized, the algorithm first uses a spreading activation technique to discover, in each document, nodes semantically related to the topic. The activated graphs of each document are then matched to yield a graph corresponding to similarities and differences between the pair, which is rendered in natural language. An evaluation of these techniques has been carried out.
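
A toy sketch of the spreading-activation step follows; the decay model and names are our simplifications, not the paper's exact procedure.

```python
def spread_activation(graph, seeds, decay=0.5, rounds=3):
    """graph: dict node -> list of neighbor nodes. Seed nodes start with
    activation 1.0; each round, every node passes decay * its activation
    to each neighbor. Returns the final activation per node."""
    act = {n: 0.0 for n in graph}
    for s in seeds:
        act[s] = 1.0
    for _ in range(rounds):
        nxt = dict(act)
        for n, a in act.items():
            if a > 0:
                for m in graph[n]:
                    nxt[m] += decay * a
        act = nxt
    return act

# Toy concept graph; edges stand in for semantic relations.
g = {"budget": ["deficit", "tax"], "deficit": ["budget"],
     "tax": ["budget", "income"], "income": ["tax"]}
print(spread_activation(g, seeds=["budget"]))
```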

Journal ArticleDOI
TL;DR: A graph-constructive approach to solving systems of geometric constraints capable of efficiently handling well-constrained, overconstrained, and underconstrained configurations is presented.
Abstract: A graph-constructive approach to solving systems of geometric constraints capable of efficiently handling well-constrained, overconstrained, and underconstrained configurations is presented. The geometric constraint solver works in two phases: in the analysis phase the constraint graph is analyzed and a sequence of elementary construction steps is derived, and then in the construction phase the sequence of construction steps is actually carried out. The analysis phase of the algorithm is described in detail, its correctness is proved, and an efficient algorithm to realize it is presented. The scope of the graph analysis is then extended by utilizing semantic information in the form of angle derivations, and by extending the repertoire of the construction steps. Finally, the construction phase is briefly discussed.
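
A first-cut flavor of such analysis is 2-D degree-of-freedom counting: each point carries 2 degrees of freedom, each binary distance or angle constraint removes 1, and a rigid assembly retains 3. The sketch below applies only this counting test; the paper's analysis is constructive and also handles subgraph conditions (and geometric degeneracies) that this ignores.

```python
def classify_constraint_graph(num_points, num_constraints):
    """Coarse 2-D degree-of-freedom count: 2 DOFs per point, 1 removed
    per distance/angle constraint, 3 retained by a rigid body. A full
    structural analysis must also check over-rigid subgraphs."""
    dof = 2 * num_points - 3 - num_constraints
    if dof == 0:
        return "well-constrained (by counting)"
    return "under-constrained" if dof > 0 else "over-constrained"

print(classify_constraint_graph(3, 3))   # triangle of distances -> well-constrained
print(classify_constraint_graph(4, 4))   # one degree of freedom left -> under-constrained
```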


Journal ArticleDOI
TL;DR: This work introduces self-stabilizing protocols for synchronization that are used as building blocks by the leader-election algorithm and presents a simple, uniform, self-Stabilizing ranking protocol.
Abstract: A distributed system is self-stabilizing if it can be started in any possible global state. Once started, the system regains its consistency by itself, without any kind of outside intervention. The self-stabilization property makes the system tolerant to faults in which processors exhibit a faulty behavior for a while and then recover spontaneously in an arbitrary state. When the intermediate period in between one recovery and the next faulty period is long enough, the system stabilizes. A distributed system is uniform if all processors with the same number of neighbors are identical. A distributed system is dynamic if it can tolerate addition or deletion of processors and links without reinitialization. In this work, we study uniform dynamic self-stabilizing protocols for leader election under read/write atomicity. Our protocols use randomization to break symmetry. The leader election protocol stabilizes in O(ΔD log n) time when the number of processors is unknown, and in O(ΔD) time otherwise. Here Δ denotes the maximal degree of a node, D denotes the diameter of the graph, and n denotes the number of processors in the graph. We introduce self-stabilizing protocols for synchronization that are used as building blocks by the leader-election algorithm. We conclude this work by presenting a simple, uniform, self-stabilizing ranking protocol.

Proceedings Article
01 Jan 1997
TL;DR: In this article, the authors present new protocols for two parties to exchange documents with fairness, i.e., such that no party can gain an advantage by quitting prematurely or otherwise misbehaving.
Abstract: We present new protocols for two parties to exchange documents with fairness, i.e., such that no party can gain an advantage by quitting prematurely or otherwise misbehaving. We use a third party that is "semi-trusted", in the sense that it may misbehave on its own but will not conspire with either of the main parties. In our solutions, disruption by any one of the three parties will not allow the disrupter to gain any useful new information about the documents. Our solutions are efficient and can be based on any of several cryptographic assumptions (e.g., factoring, discrete log, graph isomorphism). We also discuss the application of our techniques to electronic commerce protocols to achieve fair payment.

Journal ArticleDOI
TL;DR: To assist human analysis of video data, a technique has been developed to perform automatic, content-based video indexing from object motion to analyse the semantic content of the video.

Journal ArticleDOI
TL;DR: A new and improved searching strategy is described that has two main advantages over the old strategy: it allows for easier integration with programs for multiple sequence alignment and database search, and it makes it possible to use branch-and-bound search, and heuristics, to speed up the search.
Abstract: Motivation: We have previously reported an algorithm for discovering patterns conserved in sets of related unaligned protein sequences. The algorithm was implemented in a program called Pratt. Pratt allows the user to define a class of patterns (e.g. the degree of ambiguity allowed and the length and number of gaps), and is then guaranteed to find the conserved patterns in this class scoring highest according to a defined fitness measure. In many cases, this version of Pratt was very efficient, but in other cases it was too time-consuming to be applied. Hence, a more efficient algorithm was needed. Results: In this paper, we describe a new and improved searching strategy that has two main advantages over the old strategy. First, it allows for easier integration with programs for multiple sequence alignment and database search. Secondly, it makes it possible to use branch-and-bound search, and heuristics, to speed up the search. The new search strategy has been implemented in a new version of the Pratt program.
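
To illustrate the branch-and-bound idea in this setting, here is a generic hedged skeleton: pattern refinements are explored depth-first, and a branch is pruned whenever an optimistic bound cannot beat the best fitness found so far. The toy scoring below is ours, not Pratt's fitness measure.

```python
def branch_and_bound(root, children, score, bound):
    """Depth-first search over pattern refinements, pruning any branch
    whose optimistic bound cannot beat the best fitness found so far."""
    best, best_score = None, float("-inf")
    stack = [root]
    while stack:
        node = stack.pop()
        if bound(node) <= best_score:
            continue                          # prune: cannot improve on best
        s = score(node)
        if s > best_score:
            best, best_score = node, s
        stack.extend(children(node))
    return best, best_score

# Toy instance: patterns are strings over {A, B}; fitness favors long
# patterns contained in many sequences.
SEQS, MAXLEN = ["ABAB", "ABBA", "BABA"], 3
matches = lambda p: sum(p in s for s in SEQS)
score = lambda p: matches(p) * len(p)
bound = lambda p: matches(p) * MAXLEN         # extensions can only lose matches
children = lambda p: [p + c for c in "AB"] if len(p) < MAXLEN else []
print(branch_and_bound("", children, score, bound))   # -> ('BA', 6)
```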

Journal ArticleDOI
Yi-Min Wang
TL;DR: This paper defines the least stringent of these models ("FDAS"), and puts it in context with other models defined in the literature, and introduces a concept called "rollback-dependency tractability" that enables this analysis to be performed efficiently for a certain class of checkpoint and communication models.
Abstract: In this paper, we consider the problem of constructing consistent global checkpoints that contain a given set of checkpoints. We address three important issues related to this problem. First, we define the maximum and minimum consistent global checkpoints containing a set S, and give algorithms to construct them. These algorithms are based on reachability analysis on a rollback-dependency graph. Second, we introduce a concept called "rollback-dependency tractability" that enables this analysis to be performed efficiently for a certain class of checkpoint and communication models. We define the least stringent of these models ("FDAS"), and put it in context with other models defined in the literature. Significant in this is a way to use FDAS to provide efficient rollback recovery for applications that do not satisfy perfect piecewise determinism. Finally, we describe several applications of the theorems and algorithms derived in this paper to demonstrate the capability of our approach to unify, generalize, and extend many previous works.
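
A simplified sketch of the reachability step: checkpoints reachable from the target set S in the rollback-dependency graph cannot join a consistent global checkpoint containing S, so each process keeps its latest unreachable checkpoint. The graph encoding and selection rule below are our simplification of the paper's construction, not its full algorithm.

```python
from collections import deque

def reachable(graph, sources):
    """Checkpoints reachable from the given set in a rollback-dependency
    graph, encoded as a dict: checkpoint -> list of successor checkpoints."""
    seen, q = set(sources), deque(sources)
    while q:
        u = q.popleft()
        for v in graph.get(u, []):
            if v not in seen:
                seen.add(v)
                q.append(v)
    return seen

def max_consistent_checkpoint(graph, checkpoints_by_proc, S):
    """Per process, keep the latest checkpoint not reachable from S
    (members of S themselves stay in); checkpoints_by_proc maps each
    process to its checkpoints in increasing order."""
    bad = reachable(graph, S) - set(S)
    return {p: next((c for c in reversed(cps) if c not in bad), None)
            for p, cps in checkpoints_by_proc.items()}

# Toy run: an edge means "rolling back the source forces rolling back
# the target".
g = {"p0c1": ["p1c2"], "p1c1": []}
procs = {"p0": ["p0c0", "p0c1"], "p1": ["p1c0", "p1c1", "p1c2"]}
print(max_consistent_checkpoint(g, procs, S={"p0c1"}))
# -> {'p0': 'p0c1', 'p1': 'p1c1'}
```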

Patent
24 Feb 1997
TL;DR: In this paper, a performance monitor represents execution of a data flow graph by changing performance information along different parts of a representation of that graph, and the monitor can provide 2D or 3D views in which the user can change focus, zoom and viewpoint.
Abstract: A performance monitor represents execution of a data flow graph by changing performance information along different parts of a representation of that graph. If the graph is executed in parallel, the monitor can show parallel operator instances, associated datalinks, and performance information relevant to each. The individual parallel processes executing the graph send performance messages to the performance monitor, and the performance monitor can instruct such processes to vary the information they send. The monitor can provide 2D or 3D views in which the user can change focus, zoom and viewpoint. In 3D views, parallel instances of the same operator are grouped in a 2D array. The data rate of a datalink can be represented by both the density and velocity of line segments along the line which represents it. The line can be colored as a function of the datalink's source or destination, its data rate, or the integral thereof. Alternatively, a histogram can be displayed along each datalink's line, displaying information about the rate of, total of, or value of a field in the data sent at successive intervals. The user can click on objects to obtain additional information, such as bar charts of statistics, detailed performance listings, or invocation of a debugger. The user can selectively collapse representations of graph objects into composite representations; highlight objects which are out of records or which have flow blockages; label operators; turn off the display of objects; and record and play back the performance information.

Journal ArticleDOI
TL;DR: Dunn's index and the Davies-Bouldin index are generalized for cluster validation using graph structures such as the Gabriel graph (GG), the relative neighborhood graph (RNG), and the minimum spanning tree (MST), and their superiority over some existing cluster validity indices is established.
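
One concrete instance of such a graph-based generalization: take a cluster's diameter to be the largest edge of its minimum spanning tree, and define the Dunn index as the minimum between-cluster distance over the maximum diameter. A hedged sketch follows; the exact definitions in the paper may differ.

```python
import math
from itertools import combinations

def euclid(a, b):
    return math.dist(a, b)

def mst_diameter(points):
    """Largest edge weight in the cluster's minimum spanning tree
    (Prim's algorithm), used as a graph-based cluster diameter."""
    if len(points) < 2:
        return 0.0
    in_tree, best = {0}, 0.0
    dist = [euclid(points[0], p) for p in points]
    while len(in_tree) < len(points):
        j = min((i for i in range(len(points)) if i not in in_tree),
                key=lambda i: dist[i])
        best = max(best, dist[j])
        in_tree.add(j)
        for i in range(len(points)):
            if i not in in_tree:
                dist[i] = min(dist[i], euclid(points[j], points[i]))
    return best

def dunn_index(clusters):
    sep = min(euclid(a, b) for c1, c2 in combinations(clusters, 2)
              for a in c1 for b in c2)
    diam = max(mst_diameter(c) for c in clusters)
    return sep / diam

clusters = [[(0, 0), (0, 1), (1, 0)], [(5, 5), (5, 6), (6, 5)]]
print(dunn_index(clusters))   # well-separated clusters give a large index
```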

Proceedings ArticleDOI
03 Aug 1997
TL;DR: The Visibility Skeleton is a new powerful utility which can efficiently and accurately answer visibility queries for the entire scene; on-demand or lazy construction is presented, with an implementation showing encouraging first results.
Abstract: Many problems in computer graphics and computer vision require accurate global visibility information. Previous approaches have typically been complicated to implement and numerically unstable, and often too expensive in storage or computation. The Visibility Skeleton is a new powerful utility which can efficiently and accurately answer visibility queries for the entire scene. The Visibility Skeleton is a multi-purpose tool, which can solve numerous different problems. A simple construction algorithm is presented which only requires the use of well-known computer graphics algorithmic components such as ray-casting and line/plane intersections. We provide an exhaustive catalogue of visual events which completely encode all possible visibility changes of a polygonal scene into a graph structure. The nodes of the graph are extremal stabbing lines, and the arcs are critical line swaths. Our implementation demonstrates the construction of the Visibility Skeleton for scenes of over a thousand polygons. We also show its use to compute exact visible boundaries of a vertex with respect to any polygon in the scene, the computation of global or on-the-fly discontinuity meshes by considering any scene polygon as a source, as well as the extraction of the exact blocker list between any polygon pair. The algorithm is shown to be manageable for the scenes tested, both in storage and in computation time. To address the potential complexity problems for large scenes, on-demand or lazy construction is presented, with an implementation showing encouraging first results.

Journal ArticleDOI
TL;DR: Ambivalent data structures are presented for several problems on undirected graphs and used to dynamically maintain 2-edge-connectivity information and are extended to find the smallest spanning trees in an embedded planar graph in time.
Abstract: Ambivalent data structures are presented for several problems on undirected graphs. These data structures are used in finding the $k$ smallest spanning trees of a weighted undirected graph in $O(m \log \beta (m,n) + \min \{ k^{3/2}, km^{1/2} \} )$ time, where $m$ is the number of edges and $n$ the number of vertices in the graph. The techniques are extended to find the $k$ smallest spanning trees in an embedded planar graph in $O(n + k (\log n)^3 )$ time. Ambivalent data structures are also used to dynamically maintain 2-edge-connectivity information. Edges and vertices can be inserted or deleted in $O(m^{1/2})$ time, and a query as to whether two vertices are in the same 2-edge-connected component can be answered in $O(\log n)$ time, where $m$ and $n$ are understood to be the current number of edges and vertices, respectively.

Journal ArticleDOI
TL;DR: In this paper, the structural properties of discrete land cover parcels are analyzed and interpreted using a graph-based, structural pattern recognition system, known as XRAG (eXtended Relational Attribute Graph) that might be used to infer broad categories of urban land-use from very fine spatial resolution, remotely-sensed images.

Patent
25 Mar 1997
TL;DR: In this article, the authors present a data exploration tool that employs directed graphs to provide histories of the data exploration operations, such as query, segmentation, aggregation, and data view operations.
Abstract: A data exploration tool which has a graphical user interface that employs directed graphs to provide histories of the data exploration operations. Nodes in the directed graphs represent operations on data; the edges represent relationships between the operations. One type of the directed graphs is the derivation graph, in which the root of the graph is a node representing a data set and an edge leading from a first node to a second node indicates that the operation represented by the second node is performed on the result of the operation represented by the first node. Operations include query, segmentation, aggregation, and data view operations. A user may edit the derivation graph and may select a node for execution. When that is done, all of the operations represented by the nodes between the root node and the selected node are performed as indicated in the graph. The operations are performed using techniques of lazy evaluation and encachement of results with the nodes. Another type of the directed graphs is the subsumption graph, in which an edge leading from a first node to a second node indicates that the second node stands in a subsumption relationship to the first node. If a result of the operation represented by the first node has been computed, the result is available to calculate the result of the operation represented by the second node.
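
The lazy-evaluation-with-caching idea behind derivation graphs can be sketched in a few lines; the class below is illustrative, not the patent's implementation.

```python
class Node:
    """A derivation-graph node: an operation applied to the result of its
    parent. Results are computed lazily and cached on the node."""
    def __init__(self, op, parent=None):
        self.op, self.parent, self._cache = op, parent, None

    def execute(self):
        if self._cache is None:              # lazy evaluation with caching
            upstream = self.parent.execute() if self.parent else None
            self._cache = self.op(upstream)
        return self._cache

# Root holds a data set; children are query and aggregation operations.
root = Node(lambda _: [3, 1, 4, 1, 5, 9, 2, 6])
query = Node(lambda rows: [r for r in rows if r > 2], parent=root)
agg = Node(lambda rows: sum(rows) / len(rows), parent=query)
print(agg.execute())   # runs root -> query -> agg, caching each result
```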

Proceedings ArticleDOI
04 May 1997
TL;DR: The first subexponential algorithm for this exploration problem is given, achieving an upper bound of $d^{O(\log d)} m$, together with lower bounds of $2^{\Omega(d)} m$ and $d^{\Omega(\log d)} m$ for various other natural exploration algorithms.
Abstract: We consider exploration problems where a robot has to construct a complete map of an unknown environment. We assume that the environment is modeled by a directed, strongly connected graph. The robot's task is to visit all nodes and edges of the graph using the minimum number $R$ of edge traversals. Deng and Papadimitriou (Proceedings of the 31st Symposium on the Foundations of Computer Science, 1990, pp. 356-361) showed an upper bound for $R$ of $d^{O(d)} m$, and Koutsoupias (reported by Deng and Papadimitriou) gave a lower bound of $\Omega(d^2 m)$, where $m$ is the number of edges in the graph and $d$ is the minimum number of edges that have to be added to make the graph Eulerian. We give the first subexponential algorithm for this exploration problem, which achieves an upper bound of $d^{O(\log d)} m$. We also show a matching lower bound of $d^{\Omega(\log d)} m$ for our algorithm. Additionally, we give lower bounds of $2^{\Omega(d)} m$ and $d^{\Omega(\log d)} m$, respectively, for various other natural exploration algorithms.
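
For intuition, the sketch below simulates the natural greedy strategy that such lower bounds target: take an untraversed out-edge when one exists, otherwise walk a shortest known path to the nearest node that still has one. The simulation details (edge ordering, path choice) are our assumptions, not the paper's model in full.

```python
from collections import deque

def greedy_explore(graph, start):
    """Simulate greedy exploration of a strongly connected directed graph
    (dict: node -> successor list), counting edge traversals. The robot
    knows only the edges it has already traversed."""
    out_seen = {v: 0 for v in graph}          # out-edges traversed per node
    cur, traversals = start, 0
    while True:
        if out_seen[cur] < len(graph[cur]):   # fresh out-edge: take it
            nxt = graph[cur][out_seen[cur]]
            out_seen[cur] += 1
            cur, traversals = nxt, traversals + 1
            continue
        # BFS through known (traversed) edges to the nearest node that
        # still has an untraversed out-edge.
        parent, q, target = {cur: None}, deque([cur]), None
        while q:
            u = q.popleft()
            if out_seen[u] < len(graph[u]):
                target = u
                break
            for v in graph[u][:out_seen[u]]:
                if v not in parent:
                    parent[v] = u
                    q.append(v)
        if target is None:
            return traversals                 # every edge has been traversed
        path = []                             # walk back to relocate the robot
        while target is not None:
            path.append(target)
            target = parent[target]
        traversals += len(path) - 1
        cur = path[0]

print(greedy_explore({0: [1], 1: [2, 0], 2: [0]}, start=0))   # -> 5
```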

Proceedings ArticleDOI
18 Oct 1997
TL;DR: The SHriMP visualization technique for seamlessly exploring software structure and browsing source code, with a focus on effectively assisting hybrid program comprehension strategies, is described.
Abstract: This paper describes the SHriMP visualization technique for seamlessly exploring software structure and browsing source code, with a focus on effectively assisting hybrid program comprehension strategies. The technique integrates both pan+zoom and fisheye-view visualization approaches for exploring a nested graph view of software structure. The fisheye-view approach handles multiple focal points, which are necessary when examining several subsystems and their mutual interconnections. Source code is presented by embedding code fragments within the nodes of the nested graph. Finer connections among these fragments are represented by a network that is navigated using a hypertext link-following metaphor. SHriMP combines this hypertext metaphor with animated panning and zooming motions over the nested graph to provide continuous orientation and contextual cues for the user. The SHriMP tool is being evaluated in several user studies. Observations of users performing program understanding tasks with the tool are discussed.

Journal ArticleDOI
Anshul Gupta
TL;DR: This paper describes heuristics that improve the state-of-the-art practical algorithms used in graph-partitioning software in terms of both partitioning speed and quality, and their implementation is more parallelizable than minimum-degree-based ordering algorithms and it renders the ordered matrix more amenable to parallel factorization.
Abstract: Graph partitioning is a fundamental problem in several scientific and engineering applications. In this paper, we describe heuristics that improve the state-of-the-art practical algorithms used in graph-partitioning software in terms of both partitioning speed and quality. An important use of graph partitioning is in ordering sparse matrices for obtaining direct solutions to sparse systems of linear equations arising in engineering and optimization applications. The experiments reported in this paper show that the use of these heuristics results in a considerable improvement in the quality of sparse-matrix orderings over conventional ordering methods, especially for sparse matrices arising in linear programming problems. In addition, our graph-partitioning-based ordering algorithm is more parallelizable than minimum-degree-based ordering algorithms, and it renders the ordered matrix more amenable to parallel factorization.

Patent
29 Oct 1997
TL;DR: In this article, the authors present an apparatus and method for reproducing master and variable information, including variable graphics information, on a display device, such as a computer network or a demand printer.
Abstract: The present invention comprises an apparatus and method for reproducing master and variable information, including variable graphics information, on a display device, such as a computer network or a demand printer. Variable graphics information is stored in a database. A user is prompted to specify graph parameters (i.e. graph type, size, labels, etc.) or default values are used. Template page files containing fixed information and placeholders for variable information are generated. Image boxes are used as placeholders for variable graphics information and an executable graph file is placed in the image boxes. A text box containing the specified graph parameters and variable graphics information from the database is layered over the image box and “tagged” to specify that it contains variable graphics information. During interpretation of the page file, an interpreter (RIP) determines if a text box is “tagged” and, if so, executes the graph file to generate a graph using the specified graph parameters and variable graphics information from the database.

Proceedings ArticleDOI
01 May 1997
TL;DR: Initial efforts toward a systematic approach for assessing the similarity of solid models based on how they will be manufactured are presented, along with ways to measure similarity among manufacturing-feature graph structures, so that given the graphs corresponding to two different designs, one can tell how similar or different they are.
Abstract: This paper presents our initial efforts to develop a systematic approach for assessing the similarity of solid models based on how they will be manufactured. The goal of this work is to develop methods that, given a solid model representing the design of a new product, query a product information database (of solid models, associated manufacturing plans, and related attributes) and identify existing designs with manufacturing plans similar to some reasonable plan for the new design, or useful as a starting point for creation of a new plan for the new design. Our approach is based on the automatic generation (from CAD models) of graph structures that contain manufacturing information (in the form of manufacturing features). We are developing ways to measure similarity among these graph structures, so that given the graph structures corresponding to two different designs, we can tell how similar or different they are. The similarity measure will be used as a basis for indexing and retrieving similar designs from databases. An implementation of our approach is discussed. We believe our work is a first step in producing computer-generatable and computer-interpretable similarity assessment techniques that will be useful for applications such as variant and hybrid variant/generative process planning systems, indexing schemes for large part inventories, access methods for "smart catalogs," and for performing component searches through product catalogs and on the Internet.
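
As a toy stand-in for such similarity measures, one can compare labeled feature graphs by the Jaccard overlap of their node- and edge-label multisets. The paper develops far richer, manufacturing-aware measures than this sketch, and the feature names below are purely illustrative.

```python
from collections import Counter

def jaccard(c1, c2):
    inter = sum((c1 & c2).values())           # multiset intersection size
    union = sum((c1 | c2).values())           # multiset union size
    return inter / union if union else 1.0

def feature_graph_similarity(g1, g2):
    """g = (node_labels, edge_label_list). Compares multisets of
    manufacturing-feature labels and of labeled feature-to-feature
    relations; 1.0 means identical label content."""
    nodes1, edges1 = Counter(g1[0]), Counter(g1[1])
    nodes2, edges2 = Counter(g2[0]), Counter(g2[1])
    return 0.5 * (jaccard(nodes1, nodes2) + jaccard(edges1, edges2))

# Toy feature graphs: hole/pocket features with an access-direction relation.
a = (["hole", "hole", "pocket"], [("hole", "same_axis", "hole")])
b = (["hole", "pocket", "slot"], [("hole", "same_axis", "hole")])
print(feature_graph_similarity(a, b))   # -> 0.75
```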