
Showing papers on "Graph (abstract data type) published in 1989"



Proceedings ArticleDOI
Ron Cytron, Jeanne Ferrante, Barry K. Rosen, Mark N. Wegman, F. K. Zadeck
03 Jan 1989
TL;DR: This paper presents strong evidence that static single assignment form and the control dependence graph can be of practical use in optimization, and presents a new algorithm that efficiently computes these data structures for arbitrary control flow graphs.
Abstract: In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point where advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present a new algorithm that efficiently computes these data structures for arbitrary control flow graphs. We also give analytical and experimental evidence that they are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
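As a rough illustration of the machinery involved, the sketch below computes dominance frontiers, the sets that drive phi-function placement in SSA construction, for a toy control flow graph. It assumes immediate dominators are already known and follows the generic textbook formulation rather than the authors' specific algorithm.

```python
# Sketch of the dominance-frontier computation that underlies SSA construction
# (illustrative only; assumes immediate dominators are already given).

def dominance_frontiers(succs, idom):
    """succs: block -> list of successor blocks; idom: block -> immediate dominator."""
    df = {b: set() for b in succs}
    for b in succs:
        preds = [p for p in succs if b in succs[p]]
        if len(preds) >= 2:                 # only join nodes contribute to frontiers
            for p in preds:
                runner = p
                while runner != idom[b]:    # walk up the dominator tree
                    df[runner].add(b)
                    runner = idom[runner]
    return df

def phi_placement(defs, df):
    """Blocks needing a phi-function for a variable defined in 'defs' (iterated frontier)."""
    work, phis = list(defs), set()
    while work:
        d = work.pop()
        for b in df[d]:
            if b not in phis:
                phis.add(b)
                if b not in defs:
                    work.append(b)
    return phis

# Example: a diamond CFG; a variable defined in both branches needs a phi at the join.
succs = {"entry": ["then", "else"], "then": ["join"], "else": ["join"], "join": []}
idom = {"entry": "entry", "then": "entry", "else": "entry", "join": "entry"}
print(phi_placement({"then", "else"}, dominance_frontiers(succs, idom)))  # {'join'}
```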

493 citations



Journal ArticleDOI
TL;DR: The author shows that unless P=NP, there can be no polynomial-time epsilon-approximate algorithm for the module allocation problem, nor can there exist a local search algorithm that requires polynomial time per iteration and yields an optimum assignment.
Abstract: The author studies the complexity of the problem of allocating modules to processes in a distributed system to minimize total communication and execution costs. He shows that unless P=NP, there can be no polynomial-time epsilon-approximate algorithm for the problem, nor can there exist a local search algorithm that requires polynomial time per iteration and yields an optimum assignment. Both results hold even if the communication graph is planar and bipartite. On the positive side, it is shown that if the communication graph is a partial k-tree or an almost-tree with parameter k, the module allocation problem can be solved in polynomial time.

435 citations


Proceedings Article
20 Aug 1989
TL;DR: In this paper, a graph representation of the domain model is interactively created by using instances of the basic network components, nodes and arcs, as building blocks; this structure, together with the quantitative relations between nodes and their immediate causes expressed as conditional probabilities, is automatically transformed into a tree structure.
Abstract: Causal probabilistic networks have proved to be a useful knowledge representation tool for modelling domains where causal relations in a broad sense are a natural way of relating domain objects and where uncertainty is inherent in these relations. This paper outlines an implementation - the HUGIN shell - for handling a domain model expressed by a causal probabilistic network. The only topological restriction imposed on the network is that it must not contain any directed loops. The approach is illustrated step by step by solving a genetic breeding problem. A graph representation of the domain model is interactively created by using instances of the basic network components - nodes and arcs - as building blocks. This structure, together with the quantitative relations between nodes and their immediate causes expressed as conditional probabilities, is automatically transformed into a tree structure, a junction tree. Here a computationally efficient and conceptually simple algebra of Bayesian belief universes supports incorporation of new evidence, propagation of information, and calculation of revised beliefs in the states of the nodes in the network. Finally, as an example of a real world application, MUNIN, an expert system for electromyography, is discussed.
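As a hint of what the compilation into a junction tree involves, the sketch below performs the standard moralization step (marrying co-parents and dropping edge directions) on a toy pedigree-style network; it illustrates the general technique only, not the HUGIN implementation.

```python
# Illustrative sketch of the moralization step used when compiling a causal
# probabilistic network into a junction tree (not the HUGIN code itself).
from itertools import combinations

def moralize(parents):
    """parents: node -> list of parent nodes (a DAG). Returns an undirected edge set."""
    edges = set()
    for child, pars in parents.items():
        for p in pars:
            edges.add(frozenset((p, child)))          # drop edge directions
        for p, q in combinations(pars, 2):
            edges.add(frozenset((p, q)))              # "marry" co-parents
    return edges

# Tiny genetics-flavoured example: an offspring's genotype depends on both parents.
dag = {"sire": [], "dam": [], "offspring": ["sire", "dam"]}
print(sorted(tuple(sorted(e)) for e in moralize(dag)))
# [('dam', 'offspring'), ('dam', 'sire'), ('offspring', 'sire')]
```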

398 citations


Journal ArticleDOI
TL;DR: The authors propose a method for solving the stereo correspondence problem by extracting local image structures and matching similar such structures between two images using a benefit function.
Abstract: The authors propose a method for solving the stereo correspondence problem. The method consists of extracting local image structures and matching similar such structures between two images. Linear edge segments are extracted from both the left and right images. Each segment is characterized by its position and orientation in the image as well as its relationships with the nearby segments. A relational graph is thus built from each image. For each segment in one image, a set of potential assignments is represented as a set of nodes in a correspondence graph. Arcs in the graph represent compatible assignments established on the basis of segment relationships. Stereo matching becomes equivalent to searching for sets of mutually compatible nodes in this graph. Sets are found by looking for maximal cliques. The maximal clique best suited to represent a stereo correspondence is selected using a benefit function. Numerous results obtained with this method are shown.
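The clique-search formulation is easy to sketch: candidate left-right segment assignments become nodes, compatible assignments are joined by edges, and the maximal clique with the highest total benefit is kept. The compatibility rule and scores below are hypothetical stand-ins for the paper's benefit function.

```python
# Sketch of clique-based stereo matching: nodes are candidate (left, right)
# segment assignments, edges join mutually compatible assignments, and the
# best maximal clique is kept.  The benefit function here is a toy stand-in.
import networkx as nx

def best_matching(assignments, compatible, benefit):
    g = nx.Graph()
    g.add_nodes_from(assignments)
    g.add_edges_from((a, b) for a in assignments for b in assignments
                     if a < b and compatible(a, b))
    return max(nx.find_cliques(g), key=lambda clique: sum(benefit(n) for n in clique))

# Candidate assignments (left_segment, right_segment) with a similarity score.
scores = {("L1", "R1"): 0.9, ("L2", "R2"): 0.8, ("L2", "R3"): 0.4, ("L3", "R3"): 0.7}
compatible = lambda a, b: a[0] != b[0] and a[1] != b[1]   # one-to-one matching
print(best_matching(sorted(scores), compatible, scores.get))
# e.g. [('L1', 'R1'), ('L2', 'R2'), ('L3', 'R3')] (node order within the clique may vary)
```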

370 citations


Journal ArticleDOI
TL;DR: In this paper, a framework is developed for computer-assisted analysis of event sequences like those obtained through sociological field work or historical research; the analytic procedures produce a qualitative model, including a graph displaying logical relations among events, that accounts for the input data.
Abstract: A framework is developed for computer‐assisted analysis of event sequences like those obtained through sociological field work or historical research. The analytic procedures produce a qualitative model — including a graph displaying logical relations among events — which accounts for the input data. The model can be tested and refined through analysis of additional data.

220 citations


Proceedings ArticleDOI
Vivek Sarkar
21 Jun 1989
TL;DR: This paper presents a general framework for determining average program execution times and their variance, based on the program's interval structure and control dependence graph, using frequency information from an optimized counter-based execution profile of the program.
Abstract: This paper presents a general framework for determining average program execution times and their variance, based on the program's interval structure and control dependence graph. Average execution times and variance values are computed using frequency information from an optimized counter-based execution profile of the program.
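A minimal sketch of the underlying arithmetic, with hypothetical block names, counts, and costs: the average execution time is the frequency-weighted sum of per-block costs taken from the counter-based profile (the paper's framework also derives variances from the interval structure and control dependence graph).

```python
# Illustrative sketch: average execution time as the frequency-weighted sum of
# basic-block costs read from a counter-based execution profile (all numbers
# hypothetical).
profile = {"loop_body": 1000, "if_then": 640, "if_else": 360, "exit": 1}   # execution counts
cost = {"loop_body": 12, "if_then": 5, "if_else": 9, "exit": 2}            # cycles per execution

runs = profile["exit"]                               # the exit block runs once per program run
avg_time = sum(profile[b] * cost[b] for b in profile) / runs
print(f"average execution time: {avg_time:.1f} cycles per run")           # 18442.0
```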

188 citations


Journal ArticleDOI
TL;DR: Stein's method of obtaining rates of convergence to the normal distribution is illustrated in the context of random graph theory and results are obtained for the number of copies of a given graph G in K.

173 citations


Proceedings ArticleDOI
01 Nov 1989
TL;DR: From a practical point of view, examples of GraphLog queries applied to several different hypertext systems are presented, providing evidence for the expressive power of the language, as well as for the convenience and naturalness of its graphical representation.
Abstract: GraphLog is a visual query language in which queries are formulated by drawing graph patterns. The hyperdocument graph is searched for all occurrences of these patterns. The language is powerful enough to allow the specification and manipulation of arbitrary subsets of the network and supports the computation of aggregate functions on subgraphs of the hyperdocument. It can support dynamically defined structures as well as inference capabilities, going beyond current static and passive hypertext systems. The expressive power of the language is a fundamental issue: too little power limits the applications of the language, while too much makes efficient implementation difficult and probably affects ease of use. The complexity and expressive power of GraphLog can be characterized precisely by using notions from deductive database theory and descriptive complexity. In this paper, from a practical point of view, we present examples of GraphLog queries applied to several different hypertext systems, providing evidence for the expressive power of the language, as well as for the convenience and naturalness of its graphical representation. We also describe an ongoing implementation of the language.

144 citations



Journal ArticleDOI
TL;DR: An algorithm for extracting certain classes of form features from a relational boundary model of an object, called the generalized edge-face graph (GEFG), is described, which provides a face-based topological description of the object boundary and encodes the minimum number of relations needed in the recognition process.
Abstract: An algorithm for extracting certain classes of form features from a relational boundary model of an object, called the generalized edge-face graph (GEFG), is described. The GEFG provides a face-based topological description of the object boundary and encodes the minimum number of relations needed in the recognition process. The feature identification and classification are based on the analysis of the connectivity properties of the edge-face graph associated with the GEFG and on some geometric considerations. The result is a hierarchical graph decomposition of the object boundary into components representing form features.


Journal ArticleDOI
TL;DR: An algorithm for partitioning the nodes of a graph into supernodes is presented, which improves the performance of the multifrontal method for the factorization of large, sparse matrices on vector computers, and factorizes the extremely sparse electric power matrices faster than the general sparse algorithm.
Abstract: In this paper we present an algorithm for partitioning the nodes of a graph into supernodes, which improves the performance of the multifrontal method for the factorization of large, sparse matrices on vector computers. This new algorithm first partitions the graph into fundamental supernodes. Next, using a specified relaxation parameter, the supernodes are coalesced in a careful manner to create a coarser supernode partition. Using this coarser partition in the factorization generally introduces logically zero entries into the factor. This is accompanied by a decrease in the amount of sparse vector computations and data movement and an increase in the number of dense vector operations. The amount of storage required for the factor is generally increased by a small amount. On a collection of moderately sized 3-D structures matrices, speedups of 3 to 20 percent on the Cray X-MP are observed over the fundamental supernode partition, which allows no logically zero entries in the factor. Using this relaxed supernode partition, the multifrontal method now factorizes the extremely sparse electric power matrices faster than the general sparse algorithm. In addition, there is potential for considerably reducing the communication requirements for an implementation of the multifrontal method on a local memory multiprocessor.
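The coalescing idea can be outlined abstractly: walk the supernode elimination tree and merge a child into its parent whenever the logically zero entries this introduces stay within the relaxation parameter. In the sketch below the zeros_introduced function is left to the caller, since its exact form depends on the supernodes' row structures; this is an outline of the idea, not the paper's algorithm.

```python
# Abstract sketch of relaxed supernode amalgamation: merge a child supernode
# into its parent whenever doing so introduces at most 'relax_param' logically
# zero entries.  The zeros_introduced function is supplied by the caller; its
# exact form (based on the supernodes' row structures) is not reproduced here.

def relax_supernodes(children, root, zeros_introduced, relax_param):
    """children: supernode -> child supernodes in the supernode elimination tree."""
    merged = []                                        # (child, parent) pairs to coalesce
    def visit(s):
        for c in children.get(s, []):
            visit(c)
            if zeros_introduced(c, s) <= relax_param:  # cheap enough to absorb
                merged.append((c, s))
    visit(root)
    return merged

# Toy usage with a precomputed table of extra zeros per (child, parent) pair.
children = {"s3": ["s1", "s2"], "s1": [], "s2": []}
extra = {("s1", "s3"): 4, ("s2", "s3"): 40}
print(relax_supernodes(children, "s3", lambda c, p: extra[(c, p)], relax_param=10))
# [('s1', 's3')]  -- s2 would introduce too many zeros and stays a separate supernode
```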

Proceedings ArticleDOI
01 Nov 1989
TL;DR: This paper articulates a number of navigational strategies that people use in physical (geographical) navigation and correlates these with various graph topologies, showing how and why appropriately restricting the connectivity of a hyperbase can improve the ability of users to navigate.
Abstract: One of the major problems confronting users of large hypermedia systems is that of navigation: knowing where one is, where one wants to go, and how to get there from here. This paper contributes to a solution of this problem in three steps. First, it articulates a number of navigational strategies that people use in physical (geographical) navigation. Second, it correlates these with various graph topologies, showing how and why appropriately restricting the connectivity of a hyperbase can improve the ability of users to navigate. Third, it analyzes some common hypermedia navigational mechanisms in terms of navigational strategies and graph topology.

Proceedings ArticleDOI
05 Nov 1989
TL;DR: The authors address the problem of incorporating timing constraints into the physical design of integrated circuits by describing algorithms resulting in placements of improved performance in comparison to placements whose objective is to minimize the summation of wire lengths on the chip.
Abstract: The authors address the problem of incorporating timing constraints into the physical design of integrated circuits. First they formulate the problem and discuss graph models suitable for its analysis. Next, they describe algorithms resulting in placements of improved performance in comparison to placements whose objective is to minimize the summation of wire lengths on the chip. Finally, the authors show preliminary results of their placement programs for sea-of-gates designs.

Journal ArticleDOI
TL;DR: A graph theoretic representation for the tolerance chart is introduced and a special path tracing algorithm is used to identify tolerance chains from this graph.
Abstract: A tolerance chart is a graphical representation of a process plan and a manual procedure for controlling tolerance stackup when the machining of a component involves interdependent tolerance chains. This heuristic, experience-based method of allocating tolerances to individual cuts of a process plan can be embodied in a computer-based module. This paper introduces a graph theoretic representation for the tolerance chart. A special path tracing algorithm is used to identify tolerance chains from this graph. Optimal tolerance allocation among individual cuts is achieved using a linear goal programming model instead of existing heuristic methods. A more comprehensive mixed integer programming model is developed to incorporate linear tolerance cost functions and alternative process selection.
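A toy sketch of the chain-identification step: trace a path between two surfaces in the tolerance-chart graph and sum the tolerances of the cuts along it to get the worst-case stackup. The surfaces and values are hypothetical, and neither the paper's special path-tracing algorithm nor the goal-programming allocation is reproduced here.

```python
# Sketch of identifying a tolerance chain between two surfaces by tracing a
# path in the tolerance-chart graph and accumulating the worst-case stackup.

def find_chain(adj, start, goal, visited=None):
    """adj: surface -> list of (neighbour, tolerance of the linking cut). Returns a chain or None."""
    visited = visited or {start}
    if start == goal:
        return []
    for nxt, tol in adj.get(start, []):
        if nxt not in visited:
            rest = find_chain(adj, nxt, goal, visited | {nxt})
            if rest is not None:
                return [(start, nxt, tol)] + rest
    return None

# Datum A machined to B, B to C; the A-to-C dimension stacks both cuts (tolerances in micrometres).
adj = {"A": [("B", 50)], "B": [("C", 20)]}
chain = find_chain(adj, "A", "C")
print(chain, "stackup =", sum(t for _, _, t in chain))
# [('A', 'B', 50), ('B', 'C', 20)] stackup = 70
```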

Journal ArticleDOI
TL;DR: It is shown how to extend the solution for the approximate distribution problem to an optimal probabilistic algorithm for the exact distribution problem on a similar class of expander graphs.
Abstract: A solution to the following fundamental communication problem is presented. Suppose that n tokens are arbitrarily distributed among n processors with no processor having more than K tokens. The problem is to specify a bounded-degree network topology and an algorithm that can distribute the tokens uniformly among the processors. The first result is a tight $\Theta(K + \log n)$ bound on the complexity of this problem. It is also shown that an approximate version of this problem can be solved deterministically in $O(K + \log n)$ on any expander graph with sufficiently large expansion factor. In the second part of this work, it is shown how to extend the solution for the approximate distribution problem to an optimal probabilistic algorithm for the exact distribution problem on a similar class of expander graphs. Note that communication through an expander graph is a necessary condition for an $O(K + \log n)$ solution of the problem. These results have direct applications to the efficient implementation of many...

Proceedings ArticleDOI
01 May 1989
TL;DR: The design of a user interface which permits gradual enlargement or refinement of the user's query by browsing through a graph of term and document subsets obtained from a lattice automatically generated from the usual document-term relation is described.
Abstract: In conventional Boolean retrieval systems, users have difficulty controlling the amount of output obtained from a given query. This paper describes the design of a user interface which permits gradual enlargement or refinement of the user's query by browsing through a graph of term and document subsets. This graph is obtained from a lattice automatically generated from the usual document-term relation. The major design features of the proposed interface are the integration of menu, fill-in-the-blank and direct manipulation modes of interaction within the “fisheye view” [Furnas, 1986] paradigm. A prototype user interface incorporating some of these ideas has been implemented on a microcomputer. The resulting interface is well adapted to various kinds of users and needs. More experienced users with a particular subject in mind can directly specify a query which results in a jump to a particular vertex in the graph. From there, the user can refine his initial query by browsing through the graph from that point on. On the other hand, casual users without any prior knowledge of the contents of the system or users without any particular subject in mind can freely navigate through the graph without ever specifying any query.
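The browsing structure rests on the Galois (concept) lattice of the document-term relation, whose two derivation operators are compact enough to sketch. The documents and terms below are toy data, not the paper's interface.

```python
# Sketch of the derivation operators behind the document-term lattice: a set of
# terms maps to the documents containing all of them, and back to the terms
# shared by those documents; such closed pairs are the vertices browsed by the
# interface.  (Toy data; not the paper's implementation.)

def docs_for(terms, relation):
    return {d for d, ts in relation.items() if terms <= ts}

def terms_for(docs, relation):
    return set.intersection(*(relation[d] for d in docs)) if docs else set()

relation = {                                   # document -> index terms
    "d1": {"graph", "query", "language"},
    "d2": {"graph", "drawing"},
    "d3": {"graph", "query"},
}
query = {"query"}
docs = docs_for(query, relation)               # documents indexed by 'query': d1 and d3
print(docs, terms_for(docs, relation))         # their common terms: 'graph' and 'query'
```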

Journal ArticleDOI
TL;DR: Interrelationships are shown among the complexities of computing the permanent and determinant of a matrix (despite their similar-looking formulae), checking whether a directed graph contains an even-length cycle, and counting the number of perfect matchings in a graph using Pfaffian orientations.

Journal ArticleDOI
TL;DR: Within the flexible structure of the graph model, solution concepts including Nash stability, the various metagame techniques, the sequential stability method of Fraser and Hipel, and Stackelberg equilibrium are defined and characterized.

Journal ArticleDOI
TL;DR: A new fast algorithm that implements the parallel ordering step by exploiting the clique tree representation of a chordal graph is presented, which has time and space complexity linear in the number of compressed subscripts for L.
Abstract: Jess and Kees [IEEE Trans. Comput., C-31 (1982), pp. 231–239] introduced a method for ordering a sparse symmetric matrix A for efficient parallel factorization. The parallel ordering is computed in two steps. First, the matrix A is ordered by some fill-reducing ordering. Second, a parallel ordering of A is computed from the filled graph that results from symbolically factoring A using the initial fill-reducing ordering. Among all orderings whose fill lies in the filled graph, this parallel ordering achieves the minimum number of parallel steps in the factorization of A. Jess and Kees did not specify the implementation details of an algorithm for either step of this scheme. Liu and Mirzaian [SIAM J. Discrete Math., 2 (1989), pp. 100–107] designed an algorithm implementing the second step, but it has time and space requirements higher than the cost of computing common fill-reducing orderings. A new fast algorithm that implements the parallel ordering step by exploiting the clique tree representation of a chordal graph is presented. The cost of the parallel ordering step is reduced well below that of the fill-reducing step. This algorithm has time and space complexity linear in the number of compressed subscripts for L, i.e., the sum of the sizes of the maximal cliques of the filled graph. Running times nearly identical to Liu's heuristic composite rotations algorithm, which approximates the minimum number of parallel steps, are demonstrated empirically.

Journal ArticleDOI
TL;DR: In this paper, a directed graph with identified nodes is defined to represent a set of conditional independence (c.i.) statements, and the results of Howard and Matheson are rigorised and generalized.
Abstract: A directed graph with identified nodes is defined to represent a set of conditional independence (c.i.) statements. It is shown how new c.i. statements can be read from the graph of an influence diagram and results of Howard and Matheson are rigorised and generalized. A new decomposition theorem, analogous to Kiiveri, Speed and Carlin and requiring no positivity condition, is proved. Connections between influence diagrams and Markov field networks are made explicit. Because all results depend on only three properties of c.i., the theorems proved here can be restated as theorems about other structures like second order processes.
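One standard way to read such conditional independence statements off a directed graph is the moral-ancestral-graph criterion: X is independent of Y given Z exactly when Z separates X from Y in the moralized subgraph induced by the ancestors of X, Y and Z. A minimal sketch of that check, not tied to the paper's notation, is given below.

```python
# Sketch of reading a conditional independence statement off a directed graph
# via the moral-ancestral-graph criterion.
from itertools import combinations

def ancestors(parents, nodes):
    seen, stack = set(nodes), list(nodes)
    while stack:
        for p in parents.get(stack.pop(), []):
            if p not in seen:
                seen.add(p); stack.append(p)
    return seen

def conditionally_independent(parents, xs, ys, zs):
    anc = ancestors(parents, set(xs) | set(ys) | set(zs))
    adj = {v: set() for v in anc}                      # moralize the ancestral subgraph
    for child in anc:
        pars = [p for p in parents.get(child, []) if p in anc]
        for p in pars:
            adj[p].add(child); adj[child].add(p)
        for p, q in combinations(pars, 2):
            adj[p].add(q); adj[q].add(p)
    blocked, stack, reached = set(zs), [v for v in xs if v not in zs], set(xs)
    while stack:                                       # does zs separate xs from ys?
        for w in adj[stack.pop()]:
            if w not in reached and w not in blocked:
                reached.add(w); stack.append(w)
    return not (reached & set(ys))

# Example: a -> b -> c; a and c are independent given b, but not marginally.
parents = {"a": [], "b": ["a"], "c": ["b"]}
print(conditionally_independent(parents, {"a"}, {"c"}, {"b"}))  # True
print(conditionally_independent(parents, {"a"}, {"c"}, set()))  # False
```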

Journal ArticleDOI
TL;DR: A framework is presented that uses the same strategy to solve both the learned navigation and terrain model acquisition problems, and it is shown that any abstract graph structure that satisfies a set of four properties suffices as the underlying structure.
Abstract: A framework is presented that uses the same strategy to solve both the learned navigation and terrain model acquisition problems. It is shown that any abstract graph structure that satisfies a set of four properties suffices as the underlying structure. It is also shown that any graph exploration algorithm can serve as the searching strategy. The methods provide paths that keep the robot as far from the obstacles as possible. In some cases, these methods are preferable to visibility graph methods that require the robot to navigate arbitrarily close to the obstacles, which is hard to implement if the robot motions are not precise.

Book ChapterDOI
01 Jan 1989
TL;DR: For an n-by-m array with some entries specified and the remainder free to be chosen from a given field, the authors studied the possible ranks occurring among all completions.
Abstract: For an n-by-m array with some entries specified and the remainder free to be chosen from a given field, we study the possible ranks occurring among all completions. For any such partial matrix the maximum rank may be nicely characterized and all possible ranks between the minimum and maximum are attained. The minimum is more delicate and is not in general determined just by the ranks of fully specified submatrices. This focusses attention upon the patterns of specified entries for which the minimum is so determined. It is shown that it is necessary that the graph of the pattern be (bipartite) chordal, and some evidence is given for the conjecture that this is also sufficient.

Proceedings Article
01 Jul 1989
TL;DR: It is shown that the problem is in general intractable, but an algorithm that runs in polynomial time in the size of the graph when the regular expression and the graph are free of conflicts is presented.
Abstract: We consider the following problem: given a labelled directed graph G and a regular expression R, find all pairs of nodes connected by a simple path such that the concatenation of the labels along the path satisfies R. The problem is motivated by the observation that many recursive queries can be expressed in this form, and by the implementation of a query language, G+, based on this observation. We show that the problem is in general intractable, but present an algorithm that runs in polynomial time in the size of the graph when the regular expression and the graph are free of conflicts. We also present a class of languages whose expressions can always be evaluated in time polynomial in the size of both the database and the expression, and characterize syntactically the expressions for such languages.
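The brute-force version of the query is easy to sketch: search the product of the labelled graph and a finite automaton for the expression, pruning revisited graph nodes to keep paths simple. The automaton and edge labels below are toy data, and this worst-case exponential search is not the conflict-free polynomial-time algorithm of the paper.

```python
# Sketch of evaluating a regular simple path query by searching the product of
# the labelled graph and a finite automaton for the expression, while keeping
# the graph path simple.  (Illustrative only; the general problem is NP-hard.)

def simple_path_pairs(edges, start_state, accept, delta):
    """edges: list of (u, label, v); delta: (state, label) -> next state or missing."""
    adj = {}
    for u, a, v in edges:
        adj.setdefault(u, []).append((a, v))
    nodes = {u for u, _, _ in edges} | {v for _, _, v in edges}
    answers = set()

    def dfs(node, state, visited, source):
        if state in accept:
            answers.add((source, node))
        for a, v in adj.get(node, []):
            nxt = delta.get((state, a))
            if nxt is not None and v not in visited:      # simple-path constraint
                dfs(v, nxt, visited | {v}, source)

    for s in nodes:
        dfs(s, start_state, {s}, s)
    return answers - {(s, s) for s in nodes}              # keep non-trivial pairs

# Automaton for the expression a b* : label 'a' followed by zero or more 'b'.
delta = {(0, "a"): 1, (1, "b"): 1}
edges = [("x", "a", "y"), ("y", "b", "z"), ("z", "b", "y")]
print(sorted(simple_path_pairs(edges, 0, {1}, delta)))
# [('x', 'y'), ('x', 'z')]
```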

Proceedings ArticleDOI
04 Oct 1989
TL;DR: The paper defines three aesthetic criteria for drawings of directed graphs, and discusses a general method for obtaining drawings according to these criteria.
Abstract: Several recent tools for visualizing software and information engineering problems have used directed graphs as a basic model. Thus considerable interest has arisen in algorithms for drawing directed graphs so that they are easy to understand and remember. The paper defines three aesthetic criteria for drawings of directed graphs, and discusses a general method for obtaining drawings according to these criteria. Several recent algorithms to draw directed graphs are instances of this general method. The aesthetic criteria can be viewed as goals of optimization problems. Each step of the general method aims to achieve one of the criteria by solving these optimization problems. The authors discuss the current state of knowledge of each of these problems.
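As a small illustration of one such optimization step, the sketch below performs longest-path layering, a simple way of assigning the nodes of an acyclic graph to layers before crossing reduction and coordinate assignment; it is a generic example rather than any of the specific algorithms surveyed.

```python
# Sketch of longest-path layering: assign the nodes of a DAG to horizontal
# layers, a common first step in layered drawings of directed graphs.
from functools import lru_cache

def layer_assignment(succs):
    """succs: node -> list of successors in an acyclic directed graph."""
    @lru_cache(maxsize=None)
    def layer(v):
        preds = [u for u in succs if v in succs[u]]
        return 0 if not preds else 1 + max(layer(u) for u in preds)
    return {v: layer(v) for v in succs}

# A small dependency graph: 'main' calls 'parse' and 'draw'; both use 'util'.
succs = {"main": ["parse", "draw"], "parse": ["util"], "draw": ["util"], "util": []}
print(layer_assignment(succs))   # {'main': 0, 'parse': 1, 'draw': 1, 'util': 2}
```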

Journal ArticleDOI
TL;DR: An index encoding the structural attribute known as flexibility has been devised based upon the chemical graph; the flexibility definition is based upon the role of molecular size, branching, cycles and heteroatom content.
Abstract: An index encoding the structural attribute known as flexibility has been devised based upon the chemical graph. The flexibility definition is based upon the role of molecular size, branching, cycles and heteroatom content. The calculation of the index, phi, uses information from the kappa indexes to encode these contributions. The flexibility index is made equal to the product of kappa-one and kappa-two, normalized to the number of atoms in the graph. Examples reveal the structure-activity potential for the index.

Proceedings ArticleDOI
04 Jun 1989
TL;DR: A Markov random field (MRF) model-based approach to automated image interpretation is described and demonstrated as a region-based scheme and provides a systematic method for organizing and representing domain knowledge through the clique functions of the probability density function underlying MRF.
Abstract: A Markov random field (MRF) model-based approach to automated image interpretation is described and demonstrated as a region-based scheme. In this approach, an image is first segmented into a collection of disjoint regions which form the nodes of an adjacency graph. Image interpretation is then achieved through assigning object labels, or interpretations, to the segmented regions, or nodes, using domain knowledge, extracted feature measurements, and spatial relationships between the various regions. The interpretation labels are modeled as a MRF on the corresponding adjacency graph, and the image interpretation problem is formulated as a maximum a posteriori estimation rule. Simulated annealing is used to find the best realization, or optimal interpretation. Through the MRF model, this approach also provides a systematic method for organizing and representing domain knowledge through the clique functions of the probability density function underlying MRF. Results of image interpretation experiments performed on synthetic and real-world images using this approach are described.
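A compressed sketch of the estimation loop: regions on the adjacency graph are relabelled by simulated annealing under a toy energy with a data term and a pairwise smoothness term. All region names, scores, and the cooling schedule are hypothetical, and the clique functions are far simpler than those used to encode real domain knowledge.

```python
# Sketch of MAP labelling of segmented regions by simulated annealing over an
# adjacency graph, with a toy energy (data term + pairwise clique term).
import math, random

def anneal_labels(regions, adjacent, labels, data_cost, pair_cost,
                  temp=5.0, cooling=0.99, steps=5000, seed=0):
    rng = random.Random(seed)
    assign = {r: rng.choice(labels) for r in regions}

    def local_energy(r, lab):
        return data_cost(r, lab) + sum(pair_cost(lab, assign[s]) for s in adjacent[r])

    for _ in range(steps):
        r = rng.choice(regions)
        new = rng.choice(labels)
        delta = local_energy(r, new) - local_energy(r, assign[r])
        if delta <= 0 or rng.random() < math.exp(-delta / temp):
            assign[r] = new                       # accept move (Metropolis rule)
        temp *= cooling
    return assign

# Two bright regions next to one dark region; smoothness favours equal labels.
regions = ["r1", "r2", "r3"]
adjacent = {"r1": ["r2"], "r2": ["r1", "r3"], "r3": ["r2"]}
brightness = {"r1": 0.9, "r2": 0.8, "r3": 0.2}
data_cost = lambda r, lab: abs(brightness[r] - (1.0 if lab == "sky" else 0.0))
pair_cost = lambda a, b: 0.0 if a == b else 0.5
print(anneal_labels(regions, adjacent, ["sky", "ground"], data_cost, pair_cost))
# typically settles on 'sky' for r1 and r2 and 'ground' for r3
```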

Proceedings ArticleDOI
14 May 1989
TL;DR: The author describes a method for planning fine motion strategies by reasoning on an explicit representation of the contact space that reduces the algorithmic complexity inherent in the path planning problem by separating the computation of potential reachable positions and valid movements from the determination of those that are really executable by the robot.
Abstract: The author describes a method for planning fine motion strategies by reasoning on an explicit representation of the contact space. This method reduces the algorithmic complexity inherent in the path planning problem by separating the computation of potential reachable positions and valid movements from the determination of those that are really executable by the robot. The algorithm developed operates in two phases: the construction of a state graph representing a set of potential solutions; and the searching of this graph in order to find a good reverse path defining a feasible fine motion program. The fine motion planner has been implemented in LUCID-LISP on a SUN 260. Simulations have given rise to real executions using a six-degree-of-freedom SCEMI robot equipped with a force sensor. >