
Showing papers on "Graph (abstract data type) published in 1993"


Journal ArticleDOI
01 Apr 1993
TL;DR: An approach to requirements acquisition is presented which is driven by higher-level concepts that are currently not supported by existing formal specification languages, such as goals to be achieved, agents to be assigned, alternatives to be negotiated, etc.
Abstract: Requirements analysis includes a preliminary acquisition step where a global model for the specification of the system and its environment is elaborated. This model, called the requirements model, involves concepts that are currently not supported by existing formal specification languages, such as goals to be achieved, agents to be assigned, alternatives to be negotiated, etc. The paper presents an approach to requirements acquisition which is driven by such higher-level concepts. Requirements models are acquired as instances of a conceptual meta-model. The latter can be represented as a graph where each node captures an abstraction such as, e.g., goal, action, agent, entity, or event, and where the edges capture semantic links between such abstractions. Well-formedness properties on nodes and links constrain their instances, that is, elements of requirements models. Requirements acquisition processes then correspond to particular ways of traversing the meta-model graph to acquire appropriate instances of the various nodes and links according to such constraints. Acquisition processes are governed by strategies telling which way to follow systematically in that graph; at each node, specific tactics can be used to acquire the corresponding instances. The paper describes a significant portion of the meta-model related to system goals, and one particular acquisition strategy where the meta-model is traversed backwards from such goals. The meta-model and the strategy are illustrated by excerpts of a university library system.

2,092 citations


Journal ArticleDOI
TL;DR: The success of 16 methods of phylogenetic inference was examined using consistency and simulation analysis.
Abstract: The success of 16 methods of phylogenetic inference was examined using consistency and simulation analysis. Success, the frequency with which a tree-making method correctly identified the true phylogeny, was examined for an unrooted four-taxon tree. In this study, tree-making methods were examined under a large number of branch-length conditions and under three models of sequence evolution. The results are plotted to facilitate comparisons among the methods. The consistency analysis indicated which methods converge on the correct tree given infinite sample size. General parsimony, transversion parsimony, and weighted parsimony are inconsistent over portions of the graph space examined, although the area of inconsistency varied. Lake's method of invariants consistently estimated phylogeny over all of the graph space when the model of sequence evolution matched the assumptions of the invariants method. However, when one of the assumptions of the invariants method was violated, Lake's method of invariants became inconsistent over a large portion of the graph space. In general, the distance methods (neighbor joining, weighted least squares, and unweighted least squares) consistently estimated phylogeny over all of the graph space examined when the assumptions of the distance correction matched the model of evolution used to generate the model trees. When the assumptions of the distance methods were violated, the methods became inconsistent over portions of the graph space. UPGMA was inconsistent over a large area of the graph space, no matter which distance was used. The simulation analysis showed how tree-making methods perform given limited numbers of character data. In some instances, the simulation results differed quantitatively from the consistency analysis.
The consistency analysis indicated that Lake's method of invariants was consistent over all of the graph space under some conditions, whereas the simulation analysis showed that Lake's method of invariants performs poorly over most of the graph space for up to 500 variable characters. Parsimony, neighbor joining, and the least-squares methods performed well under conditions of limited amounts of character change and branch-length variation. By weighting the more slowly evolving characters or using distances that correct for multiple substitution events, the area in which tree-making methods are misleading can be reduced. Good performance at high rates of change was obtained only by giving increased weight to slowly evolving characters (e.g., transversion parsimony, weighted parsimony). UPGMA performed well only when branch lengths were close in length. (Phylogeny estimation; simulation; parsimony; Lake's invariants; UPGMA; neighbor joining; weighted least squares; unweighted least squares; tree space.)

752 citations


Book
23 Sep 1993
TL;DR: Topics include representing conflicts using the graph model, solution concepts and their interrelationships, resolving an environmental conflict, and the application of the graph model to an international trade conflict.
Abstract: Interactive Decision Making Representing Conflicts Using the Graph Model Solution Concepts for the Graph Model Extensive Games and the Graph Model for Conflicts The Interrelationships of Solution Concepts Resolving an Environmental Conflict Using the Graph Model Application of the Graph Model to an International Trade Conflict The Graph Model for Preferences Appendices References Index.

423 citations


Journal ArticleDOI
01 Jan 1993
TL;DR: It is shown how top-down decompositions of a subject system can be (re)constructed via bottom-up subsystem composition, which involves identifying groups of building blocks using composition operations based on software engineering principles such as low coupling and high cohesion.
Abstract: Reverse-engineering is the process of extracting system abstractions and design information out of existing software systems. This process involves the identification of software artefacts in a particular subject system, the exploration of how these artefacts interact with one another, and their aggregation to form more abstract system representations that facilitate program understanding. This paper describes our approach to creating higher-level abstract representations of a subject system, which involves the identification of related components and dependencies, the construction of layered subsystem structures, and the computation of exact interfaces among subsystems. We show how top-down decompositions of a subject system can be (re)constructed via bottom-up subsystem composition. This process involves identifying groups of building blocks (e.g., variables, procedures, modules, and subsystems) using composition operations based on software engineering principles such as low coupling and high cohesion. The result is an architecture of layered subsystem structures. The structures are manipulated and recorded using the Rigi system, which consists of a distributed graph editor and a parsing system with a central repository. The editor provides graph filters and clustering operations to build and explore subsystem hierarchies interactively. The paper concludes with a detailed, step-by-step analysis of a 30-module software system using Rigi.
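The coupling-based composition step described above can be sketched as a tiny clustering routine. Everything below is hypothetical (module names, dependency data, threshold); a union-find merge stands in for Rigi's interactive clustering operations.

```python
from collections import defaultdict

def compose_subsystems(deps, threshold=2):
    """Bottom-up composition sketch: merge building blocks whose mutual
    dependency count (coupling) meets a threshold, yielding candidate
    subsystems with high internal cohesion."""
    parent = {}

    def find(x):                          # union-find with path halving
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    weight = defaultdict(int)             # coupling between each pair
    for a, b in deps:
        weight[frozenset((a, b))] += 1
    for pair, w in weight.items():
        if w >= threshold and len(pair) == 2:
            a, b = tuple(pair)
            parent[find(a)] = find(b)     # strongly coupled: same subsystem
    groups = defaultdict(set)
    for x in {n for d in deps for n in d}:
        groups[find(x)].add(x)
    return sorted(map(sorted, groups.values()))

# Hypothetical data: parser and lexer call each other often; the logger
# is used only once by each, so it stays in its own group.
deps = [("parser", "lexer"), ("lexer", "parser"), ("parser", "lexer"),
        ("parser", "logger"), ("lexer", "logger")]
clusters = compose_subsystems(deps)
```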

367 citations


Proceedings ArticleDOI
01 Jul 1993
TL;DR: A spectral approach to multiway ratio-cut partitioning is developed which provides a generalization of the ratio- cut cost metric to k-way partitioning and a lower bound on this cost metric.
Abstract: Recent research on partitioning has focussed on the ratio-cut cost metric which maintains a balance between the sizes of the edges cut and the sizes of the partitions without fixing the size of the partitions a priori. Iterative approaches and spectral approaches to two-way ratio-cut partitioning have yielded higher quality partitioning results. In this paper we develop a spectral approach to multiway ratio-cut partitioning which provides a generalization of the ratio-cut cost metric to k-way partitioning and a lower bound on this cost metric. Our approach involves finding the k smallest eigenvalue/eigenvector pairs of the Laplacian of the graph. The eigenvectors provide an embedding of the graph's n vertices into a k-dimensional subspace. We devise a time and space efficient clustering heuristic to coerce the points in the embedding into k partitions. Advancement over the current work is evidenced by the results of experiments on the standard benchmarks.
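The pipeline the abstract describes (the k smallest Laplacian eigenvectors as a k-dimensional embedding, then a clustering pass over the embedded points) can be sketched as follows. The graph and the Lloyd-style clustering are illustrative stand-ins, not the paper's own heuristic.

```python
import numpy as np

def spectral_embedding(adj, k):
    """Embed the n vertices into R^k via the k smallest
    eigenvalue/eigenvector pairs of the Laplacian L = D - A."""
    laplacian = np.diag(adj.sum(axis=1)) - adj
    eigvals, eigvecs = np.linalg.eigh(laplacian)  # eigenvalues ascending
    return eigvecs[:, :k]                         # one point in R^k per vertex

def cluster(points, k, iters=20):
    """Toy Lloyd iteration with greedy farthest-point seeding, standing in
    for the paper's time- and space-efficient clustering heuristic."""
    centers = [points[0]]
    for _ in range(1, k):
        d = np.min([((points - c) ** 2).sum(-1) for c in centers], axis=0)
        centers.append(points[int(np.argmax(d))])  # farthest remaining point
    centers = np.array(centers)
    for _ in range(iters):
        labels = np.argmin(((points[:, None] - centers[None]) ** 2).sum(-1), 1)
        centers = np.array([points[labels == j].mean(0) for j in range(k)])
    return labels

# Hypothetical graph: two triangles joined by a single bridge edge.
A = np.zeros((6, 6))
for u, v in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[u, v] = A[v, u] = 1.0
labels = cluster(spectral_embedding(A, 2), 2)
```

The second eigenvector (the Fiedler vector) already separates the two triangles; the clustering step merely rounds the embedding to discrete partitions.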

361 citations


Journal ArticleDOI
TL;DR: It turns out that SWN's allow the representation of any color function in a structured form, so that any unconstrained high-level net can be transformed into a well-formed net.
Abstract: The class of stochastic well-formed colored nets (SWN's) was defined as a syntactic restriction of stochastic high-level nets. The interest of the introduction of restrictions in the model definition is the possibility of exploiting the symbolic reachability graph (SRG) to reduce the complexity of Markovian performance evaluation with respect to classical Petri net techniques. It turns out that SWN's allow the representation of any color function in a structured form, so that any unconstrained high-level net can be transformed into a well-formed net. Moreover, most constructs useful for the modeling of distributed computer systems and architectures directly match the "well-formed" restriction, without any need of transformation. A nontrivial example of the usefulness of the technique in the performance modeling and evaluation of multiprocessor architectures is included.

340 citations


Journal ArticleDOI
TL;DR: The whole theory for double-pushout transformations including sequential composition, parallel composition, and amalgamation can be reformulated and generalized in the new framework.

322 citations


Book ChapterDOI
01 Jun 1993
TL;DR: This paper describes in detail how the new implementation works and gives realistic examples to illustrate its power, and discusses a number of directions for future research.
Abstract: Temporal logic model checking is an automatic technique for verifying finite-state concurrent systems. Specifications are expressed in a propositional temporal logic, and the concurrent system is modeled as a state-transition graph. An efficient search procedure is used to determine whether or not the state-transition graph satisfies the specification. When the technique was first developed ten years ago, it was only possible to handle concurrent systems with a few thousand states. In the last few years, however, the size of the concurrent systems that can be handled has increased dramatically. By representing transition relations and sets of states implicitly using binary decision diagrams, it is now possible to check concurrent systems with more than 10^120 states. In this paper we describe in detail how the new implementation works and give realistic examples to illustrate its power. We also discuss a number of directions for future research. The necessary background information on binary decision diagrams, temporal logic, and model checking has been included in order to make the exposition as self-contained as possible.
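The core fixed-point iteration behind such model checkers can be shown with explicit state sets; in the implementation the abstract describes, the same iteration operates on BDD-encoded sets instead. The toy transition system below is hypothetical.

```python
def check_EF(transitions, target, states):
    """Backward fixed point for the temporal property EF(target): grow the
    set of states from which some path reaches a target state until nothing
    changes. Symbolic model checkers run this same iteration on sets encoded
    as binary decision diagrams rather than explicit Python sets."""
    reach = set(target)
    changed = True
    while changed:
        changed = False
        for s in states:
            if s not in reach and any(t in reach for t in transitions.get(s, ())):
                reach.add(s)
                changed = True
    return reach

# Hypothetical transition system: a traffic-light cycle plus an error path.
T = {"g": ["y"], "y": ["r"], "r": ["g"], "bad": ["err"]}
states = ["g", "y", "r", "bad", "err"]
sat = check_EF(T, ["err"], states)   # states satisfying EF(err)
```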

305 citations


Patent
30 Apr 1993
TL;DR: In this paper, a method of using a computer system 20 to implement a graphical interface is described, which displays a graph 160 of data from a database system 11, and permits a user to change the data by changing the appearance of the graph 160.
Abstract: A method of using a computer system 20 to implement a graphical interface 10. The method displays a graph 160 of data from a database system 11, and permits a user to change the data by changing the appearance of the graph 160. The graph 160 is generated from a stored graphics engine 12, which contains rules for generating graphical objects comprising the graph and the objects' attributes. The graphical objects and attributes are matched to data delivered from the database system 11. If the user manipulates a graphical object, the graphical interface 10 associates the change to a new data value, and updates both the graph 160 and the data in the database system 11.

298 citations


Journal ArticleDOI
TL;DR: Any SRN containing one or more of the nearly-independent structures, commonly encountered in practice, can be analyzed using the decomposition approach presented, and this technique is applied to the analysis of a flexible manufacturing system.

280 citations


Journal ArticleDOI
TL;DR: A different approach is considered, which deduces shared-interest relationships between people based on the history of email communication, using a set of heuristic graph algorithms that are powerful and can threaten privacy.
Abstract: Ongoing increases in wide-area network connectivity promise vastly increased opportunities for collaboration and resource sharing. A fundamental problem confronting users of such networks is how to discover the existence of resources of interest, such as files, retail products, network services, or people. In this article we focus on the problem of discovering people who have particular interests or expertise. For an overview of the larger research project into which this work fits, the reader is referred to [16]. The typical approach to locating people is to build a directory from explicitly registered data. This approach is taken, for example, by the X.500 directory service standard [3]. While this approach provides good support for locating particular users (the "white-pages" problem), it does not easily support finding users who have particular interests or expertise (the "yellow-pages" problem). One could create special interest group lists, but doing so requires a significant amount of effort. For each group someone has to build and maintain a membership list. Moreover, building such lists assumes one knows which lists should be compiled and who should be included in each list. In a large network, the set of possible interest groups can be quite large and rapidly evolving. It is difficult to track the interests of such a community using explicitly registered data. We consider a different approach, which deduces shared-interest relationships between people based on the history of email communication. Using this approach, a user could search for people by requesting a list of people whose interests are similar to several people known to have the interest in question. This technique can support a fine-grained, dynamic means of locating people with related interests.
The set of possible interests can be arbitrarily specialized, and the people located will be appropriate at the time of the search, rather than at some earlier time when a list was compiled. One might attempt to discern shared interests by analyzing subject lines and message bodies in electronic mail messages. Beyond the obvious privacy problems, doing this would pose difficult natural-language recognition problems. Instead, we approached the problem by analyzing the structure of the graph formed from "From:/To:" email logs, using a set of heuristic graph algorithms. We demonstrate the algorithms by applying them to email logs we collected from 15 sites around the world between December 1, 1988 and January 31, 1989. The graph generated from these logs contained approximately 50,000 people in 3,700 different sites worldwide. Using these algorithms, we were able to deduce shared-interest lists for people far beyond the data collection sites. Because the algorithms we present can deduce shared-interest relationships from any communication graph, they are powerful and can threaten privacy. We propose recommendations that we believe should underlie the ethical use of these algorithms and discuss several possible applications that we believe do not threaten privacy.
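One simple way to realize the idea of deducing shared interests from "From:/To:" logs is a neighbor-overlap score over the communication graph. This is a sketch in the spirit of the approach, not the authors' actual heuristics, and all names and logs are made up.

```python
from collections import defaultdict

def build_graph(logs):
    """Undirected communication graph from (sender, receiver) log pairs."""
    g = defaultdict(set)
    for sender, receiver in logs:
        g[sender].add(receiver)
        g[receiver].add(sender)
    return g

def similar_people(g, person, k=3):
    """Rank other people by Jaccard overlap of their correspondents:
    two people who write to the same circle likely share interests,
    even if they never write to each other directly."""
    scores = {}
    for other in g:
        if other == person:
            continue
        common = g[person] & g[other]
        union = g[person] | g[other]
        scores[other] = len(common) / len(union) if union else 0.0
    return sorted(scores, key=scores.get, reverse=True)[:k]

# Hypothetical logs: ann and dee share correspondents (bob, carl) but
# never write to each other; eve and frank form a separate pair.
logs = [("ann", "bob"), ("ann", "carl"), ("dee", "bob"), ("dee", "carl"),
        ("eve", "frank")]
g = build_graph(logs)
```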

Book ChapterDOI
05 Jul 1993
TL;DR: It is shown that the problem to find the maximum number of nodes inducing a subgraph that satisfies a desired property π on directed or undirected graphs that is nontrivial and hereditary on induced subgraphs is hard to approximate.
Abstract: We consider the following class of problems: given a graph, find the maximum number of nodes inducing a subgraph that satisfies a desired property π, such as planar, acyclic, bipartite, etc. We show that this problem is hard to approximate for any property π on directed or undirected graphs that is nontrivial and hereditary on induced subgraphs.

Journal ArticleDOI
TL;DR: This paper develops a formal definition of the concept of one program being a self-stabilizing extension of another and a characterization of what properties may hold in such extensions.
Abstract: A self-stabilizing program eventually resumes normal behavior even if execution begins in an abnormal initial state. In this paper, we explore the possibility of extending an arbitrary program into a self-stabilizing one. Our contributions are: (1) a formal definition of the concept of one program being a self-stabilizing extension of another; (2) a characterization of what properties may hold in such extensions; (3) a demonstration of the possibility of mechanically creating such extensions. The computational model used is that of an asynchronous distributed message-passing system whose communication topology is an arbitrary graph. We contrast the difficulties of self-stabilization in this model with those of the more common shared-memory models.

Book
02 Jan 1993
TL;DR: Partial table of contents: How to Get Confluence for Explicit Substitutions (T. Hardin) Graph Rewriting Systems for Efficient Compilation (Z. Ariola & Arvind)
Abstract: Partial table of contents: How to Get Confluence for Explicit Substitutions (T. Hardin) Graph Rewriting Systems for Efficient Compilation (Z. Ariola & Arvind) Abstract Reduction: Towards a Theory via Abstract Interpretation (M. van Eekelen, et al.) The Adequacy of Term Graph Rewriting for Simulating Term Rewriting (J. Kennaway, et al.) Hypergraph Rewriting: Critical Pairs and Undecidability of Confluence (D. Plump) MONSTR: Term Graph Rewriting for Parallel Machines (R. Banach) Parallel Execution of Concurrent Clean on ZAPP (R. Goldsmith, et al.) Implementing Logical Variables and Disjunctions in Graph Rewrite Systems (P. McBrien) Index.

Journal ArticleDOI
TL;DR: In this paper, the geodetic number g(G) of a connected graph G is defined as the minimum number of nodes in a set S whose geodesic closure is all of V. The determination of g(G) is an NP-hard problem, and its decision problem is NP-complete.
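A brute-force computation makes the definition concrete; its exponential cost is consistent with the NP-hardness noted above. The example graph is illustrative.

```python
from collections import deque
from itertools import combinations

def geodesic_vertices(adj, u, v):
    """All vertices lying on at least one shortest u-v path (two BFS passes;
    assumes a connected graph, as in the definition above)."""
    def dist_from(s):
        d, q = {s: 0}, deque([s])
        while q:
            x = q.popleft()
            for y in adj[x]:
                if y not in d:
                    d[y] = d[x] + 1
                    q.append(y)
        return d
    du, dv = dist_from(u), dist_from(v)
    # x is on a shortest u-v path iff d(u,x) + d(x,v) = d(u,v)
    return {x for x in adj if du[x] + dv[x] == du[v]}

def geodetic_number(adj):
    """Brute-force g(G): smallest |S| whose geodesic closure is all of V."""
    vertices = list(adj)
    for size in range(1, len(vertices) + 1):
        for S in combinations(vertices, size):
            closure = set(S)
            for u, v in combinations(S, 2):
                closure |= geodesic_vertices(adj, u, v)
            if closure == set(vertices):
                return size
    return len(vertices)

# Path a-b-c: the endpoints {a, c} already cover b, so g(G) = 2.
P3 = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
g3 = geodetic_number(P3)
```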

Proceedings ArticleDOI
01 Mar 1993
TL;DR: In this article, the authors propose a notion of graph types, which allow common shapes, such as doubly-linked lists or threaded trees, to be expressed concisely and efficiently, and define regular languages of routing expressions to specify relative addresses of extra pointers in a canonical spanning tree.
Abstract: Recursive data structures are abstractions of simple records and pointers. They impose a shape invariant, which is verified at compile-time and exploited to automatically generate code for building, copying, comparing, and traversing values without loss of efficiency. However, such values are always tree shaped, which is a major obstacle to practical use. We propose a notion of graph types, which allow common shapes, such as doubly-linked lists or threaded trees, to be expressed concisely and efficiently. We define regular languages of routing expressions to specify relative addresses of extra pointers in a canonical spanning tree. An efficient algorithm for computing such addresses is developed. We employ a second-order monadic logic to decide well-formedness of graph type specifications. This logic can also be used for automated reasoning about pointer structures.

Journal ArticleDOI
TL;DR: An axiomatic basis is developed for the relationship between conditional independence and graphical models in statistical analysis and unconditional independence relative to normal models can be axiomatized with a finite set of axioms.
Abstract: This article develops an axiomatic basis for the relationship between conditional independence and graphical models in statistical analysis. In particular, the following relationships are established: (1) every axiom for conditional independence is an axiom for graph separation, (2) every graph represents a consistent set of independence and dependence constraints, (3) all binary factorizations of strictly positive probability models can be encoded and determined in polynomial time using their correspondence to graph separation, (4) binary factorizations of non-strictly positive probability models can also be derived in polynomial time albeit less efficiently and (5) unconditional independence relative to normal models can be axiomatized with a finite set of axioms.

BookDOI
01 Jan 1993
TL;DR: The articles in this volume are based on recent research on sparse matrix computations and examine graph theory as it connects to linear algebra, parallel computing, data structures, geometry and both numerical and discrete algorithms.
Abstract: When reality is modelled by computation, matrices are often the connection between the continuous physical world and the finite algorithmic one. Usually, the more detailed the model, the bigger the matrix; however, efficiency demands that every possible advantage be exploited. The articles in this volume are based on recent research on sparse matrix computations. They examine graph theory as it connects to linear algebra, parallel computing, data structures, geometry and both numerical and discrete algorithms. The articles are grouped into three general categories: graph models of symmetric matrices and factorizations; graph models of algorithms on nonsymmetric matrices; and parallel sparse matrix algorithms.

Proceedings ArticleDOI
03 Nov 1993
TL;DR: This paper presents a method for converting an approximation algorithm for an unweighted graph problem (from a specific class of maximization problems) into one for the corresponding weighted problem, and apply it to the densest subgraph problem.
Abstract: This paper concerns the problem of computing the densest k-vertex subgraph of a given graph, namely, the subgraph with the most edges, or with the highest edges-to-vertices ratio. A sequence of approximation algorithms is developed for the problem, with each step yielding a better ratio at the cost of a more complicated solution. The approximation ratio of our final algorithm is Õ(n^0.3885). We also present a method for converting an approximation algorithm for an unweighted graph problem (from a specific class of maximization problems) into one for the corresponding weighted problem, and apply it to the densest subgraph problem.
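For contrast with the paper's specialized algorithm, a classical greedy peeling heuristic for the related unconstrained densest-subgraph objective (maximize the edges-to-vertices ratio, with no fixed k) looks like this. It is shown only to make the objective concrete; it is not the paper's method.

```python
def densest_subgraph_peel(adj):
    """Greedy peeling: repeatedly delete a minimum-degree vertex and
    remember the densest intermediate subgraph seen. A classical heuristic
    for the unconstrained objective, not the paper's Õ(n^0.3885)
    approximation for the k-vertex variant."""
    g = {u: set(vs) for u, vs in adj.items()}
    best_density, best_set = 0.0, set(g)
    while g:
        density = sum(len(vs) for vs in g.values()) / 2 / len(g)
        if density > best_density:
            best_density, best_set = density, set(g)
        u = min(g, key=lambda x: len(g[x]))   # peel a minimum-degree vertex
        for v in g[u]:
            g[v].discard(u)
        del g[u]
    return best_set, best_density

# Hypothetical graph: a 4-clique {a,b,c,d} with a pendant vertex e.
# Peeling e first exposes the clique, whose density 6/4 = 1.5 is best.
G = {"a": {"b", "c", "d"}, "b": {"a", "c", "d"},
     "c": {"a", "b", "d"}, "d": {"a", "b", "c", "e"}, "e": {"d"}}
best_set, best_density = densest_subgraph_peel(G)
```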

Patent
Florin Oprescu1
16 Dec 1993
TL;DR: In this paper, a node identification system is described for use in a computer system in which the various components of the system are interconnected via nodes on a communications bus, and each node may be assigned a non-predetermined unique address.
Abstract: A node identification system is described for use in a computer system in which the various components of the system are interconnected via nodes on a communications bus. Once the topology of the nodes has been resolved into an acyclic directed graph, each node may be assigned a non-predetermined unique address. Each node having a plurality of ports has an a priori assigned priority for port selection. Each child node connected to a parent is allowed to respond in the predetermined sequence depending upon the port through which it is connected to its parent. Each node in the graph will announce its presence according to its location in the graph. Each receives an address incremented from the previous addresses assigned, thereby ensuring uniqueness. The same mechanism may be implemented to allow each node in turn to broadcast information on the bus concerning the parameters of its local host. Likewise, additional information may be conveyed from each node concerning connections to other nodes, thereby allowing a host system to generate a map of the resolved topology including any information about disabled links which may be used for redundancy purposes.
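The depth-first address-assignment idea (children respond first in port-priority order, each node announces after its subtree) can be sketched as a post-order traversal; the topology below is illustrative only, and none of the bus signaling is modeled.

```python
def assign_addresses(children, root):
    """Post-order (depth-first) address assignment over a resolved acyclic
    topology: each node lets its children respond first, in port-priority
    order, then announces itself and takes the next address in sequence.
    Leaves therefore get the lowest addresses and the root the highest,
    and incrementing a single counter guarantees uniqueness."""
    addresses, counter = {}, [0]

    def visit(node):
        for child in children.get(node, []):   # a priori port order
            visit(child)
        addresses[node] = counter[0]           # announce after the subtree
        counter[0] += 1

    visit(root)
    return addresses

# Hypothetical topology: root has two ports; the second leads to a subtree.
topology = {"root": ["n1", "n2"], "n2": ["n3"]}
addrs = assign_addresses(topology, "root")
```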

Proceedings ArticleDOI
01 Jun 1993
TL;DR: This paper shows how the dependence flow graph (DFG) for a program can be constructed in O(EV) time and how forward and backward dataflow analyses can be performed efficiently on the DFG, using constant propagation and elimination of partial redundancies as examples.
Abstract: Program analysis and optimization can be speeded up through the use of the dependence flow graph (DFG), a representation of program dependences which generalizes def-use chains and static single assignment (SSA) form. In this paper, we give a simple graph-theoretic description of the DFG and show how the DFG for a program can be constructed in O(EV) time. We then show how forward and backward dataflow analyses can be performed efficiently on the DFG, using constant propagation and elimination of partial redundancies as examples. These analyses can be framed as solutions of dataflow equations in the DFG. Our construction algorithm is of independent interest because it can be used to construct a program's control dependence graph in O(E) time and its SSA representation in O(EV) time, which are improvements over existing algorithms.
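A sparse forward analysis running along def-use edges, of the kind a dependence graph enables, can be sketched with constant propagation as the example. The three-address instruction format here is invented for illustration; it is not the paper's DFG representation.

```python
from collections import defaultdict

def sparse_const_prop(instrs):
    """Constant propagation run sparsely along def-use edges: when a value
    becomes known, only the instructions that use it are revisited, instead
    of re-sweeping the whole control-flow graph. Each instruction is
    (dest, op, args) with op in {'const', 'add', 'mul'}."""
    value = {}                        # dest -> known constant value
    uses = defaultdict(list)          # name -> instructions reading it
    for i, (dest, op, args) in enumerate(instrs):
        for a in args:
            uses[a].append(i)
    worklist = list(range(len(instrs)))
    while worklist:
        i = worklist.pop()
        dest, op, args = instrs[i]
        if op == "const":
            v = args[0]
        elif all(a in value for a in args):
            v = (sum(value[a] for a in args) if op == "add"
                 else value[args[0]] * value[args[1]])
        else:
            continue                  # operands not all known yet
        if value.get(dest) != v:
            value[dest] = v           # propagate along def-use edges only
            worklist.extend(uses[dest])
    return value

prog = [("x", "const", (2,)),
        ("y", "const", (3,)),
        ("z", "add", ("x", "y")),
        ("w", "mul", ("z", "x"))]
vals = sparse_const_prop(prog)
```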

Book ChapterDOI
21 Jun 1993
TL;DR: Based on this class of timed nets with timing of arcs directing from places to transitions, the corresponding state graph, called dynamic graph, and a method to compute the state graph are defined.
Abstract: The paper presents an analysis method for Place/Transition nets with timing of arcs directing from places to transitions. Based on this class of timed nets, the corresponding state graph, called dynamic graph, and a method to compute the state graph are defined.

Book ChapterDOI
01 Nov 1993
TL;DR: This paper proposes a similarity measure for structured representations that is based on graph edit operations and shows how this similarity measure can be computed by means of state space search and considers subgraph isomorphism as a special case of graph similarity.
Abstract: A key concept in case-based reasoning is similarity. In this paper, we first propose a similarity measure for structured representations that is based on graph edit operations. Then we show how this similarity measure can be computed by means of state space search. Subsequently, subgraph isomorphism is considered as a special case of graph similarity and a new efficient algorithm for its detection is proposed. The new algorithm is particularly suitable if there is a large number of library cases being tested against an input graph. Finally, we present experimental results showing the computational efficiency of the proposed approach.
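For very small graphs, the graph-edit-distance idea reduces to a search over node mappings. The exhaustive sketch below conveys the cost model (label substitutions plus edge insertions and deletions) for two equal-size graphs; the paper's state-space search prunes this space rather than enumerating it, and the example data is made up.

```python
from itertools import permutations

def graph_edit_distance(nodes1, edges1, nodes2, edges2):
    """Edit distance between two small labeled graphs with equally many
    nodes: try every node mapping and count label substitutions plus the
    edge insertions/deletions the mapping forces."""
    n = len(nodes1)
    best = float("inf")
    target = {frozenset(e) for e in edges2}
    for perm in permutations(range(n)):
        # cost of relabeling nodes under this mapping
        cost = sum(nodes1[i] != nodes2[perm[i]] for i in range(n))
        # edges of graph 1 carried through the mapping
        mapped = {frozenset((perm[u], perm[v])) for u, v in edges1}
        cost += len(mapped ^ target)   # deletions plus insertions
        best = min(best, cost)
    return best

# A triangle versus a path on the same labels: one edge deletion suffices.
d = graph_edit_distance(["A", "B", "C"], [(0, 1), (1, 2), (0, 2)],
                        ["A", "B", "C"], [(0, 1), (1, 2)])
```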

Journal Article
TL;DR: It is shown that there is no constant ε > 0 for which this problem can be approximated within a factor of n^(1−ε) in polynomial time, unless P = NP; this is the strongest lower bound known for polynomial-time approximation of an unweighted NP-complete graph problem.

Abstract: We consider the problem of approximating the size of a minimum non-extendible independent set of a graph, also known as the minimum dominating independence number. We strengthen a result of Irving to show that there is no constant ε > 0 for which this problem can be approximated within a factor of n^(1−ε) in polynomial time, unless P = NP. This is the strongest lower bound we are aware of for polynomial-time approximation of an unweighted NP-complete graph problem.

Journal ArticleDOI
TL;DR: A generalized mapping strategy that uses a combination of graph theory, mathematical programming, and heuristics to guide the mapping of a parallel algorithm and the architecture is proposed.
Abstract: A generalized mapping strategy that uses a combination of graph theory, mathematical programming, and heuristics is proposed. The authors use the knowledge from the given algorithm and the architecture to guide the mapping. The approach begins with a graphical representation of the parallel algorithm (problem graph) and the parallel computer (host graph). Using these representations, the authors generate a new graphical representation (extended host graph) on which the problem graph is mapped. An accurate characterization of the communication overhead is used in the objective functions to evaluate the optimality of the mapping. An efficient mapping scheme is developed which uses two levels of optimization procedures. The objective functions include minimizing the communication overhead and minimizing the total execution time which includes both computation and communication times. The mapping scheme is tested by simulation and further confirmed by mapping a real world application onto actual distributed environments.

Proceedings ArticleDOI
Juan A. Garay1, Shay Kutten1, David Peleg
03 Nov 1993
TL;DR: This paper proposes that a more sensitive parameter is the network's diameter Diam, and provides a distributed minimum-weight spanning tree algorithm whose time complexity is sub-linear in n, but linear in Diam (specifically, O(Diam + n^0.614)).
Abstract: This paper considers the question of identifying the parameters governing the behavior of fundamental global network problems. Many papers on distributed network algorithms consider the task of optimizing the running time successful when an O(n) bound is achieved on an n-vertex network. We propose that a more sensitive parameter is the network's diameter Diam. This is demonstrated in the paper by providing a distributed minimum-weight spanning tree algorithm whose time complexity is sub-linear in n, but linear in Diam (specifically, O(Diam + n^0.614)). Our result is achieved through the application of graph decomposition and edge elimination techniques that may be of independent interest.

Journal ArticleDOI
TL;DR: An improved version of the “marching cubes” algorithm for the generation of isosurfaces from 3D data fields is presented and applied to molecular surfaces, and the advantage of a logarithmic interpolation procedure for data fields typically occurring in molecular science is demonstrated.
Abstract: An improved version of the "marching cubes" algorithm [W. Lorensen and H. Cline, Comp. Graph. 21, (1987)] for the generation of isosurfaces from 3D data fields is presented and applied to molecular surfaces. The new algorithm avoids inconsistent pattern definitions of the original one, which lead to artificial gaps. The advantage of a logarithmic interpolation procedure, in particular for data fields typically occurring in molecular science, is demonstrated. An example is the generation of molecular surfaces based upon electron density data.
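The benefit of logarithmic over linear interpolation for exponentially decaying fields such as electron densities can be checked directly; the one-dimensional edge below is a made-up example, not data from the paper.

```python
import math

def linear_crossing(v1, v2, iso):
    """Linear interpolation of the isosurface crossing along a cube edge:
    assumes the field varies linearly between the two corner samples."""
    return (iso - v1) / (v2 - v1)

def log_crossing(v1, v2, iso):
    """Logarithmic interpolation: exact when the field decays exponentially
    along the edge, as electron densities roughly do."""
    return math.log(iso / v1) / math.log(v2 / v1)

# A field falling off as exp(-5t) along one edge: the iso = exp(-2.5)
# surface truly crosses at t = 0.5.
v1, v2 = 1.0, math.exp(-5.0)
iso = math.exp(-2.5)
t_log = log_crossing(v1, v2, iso)   # recovers the true crossing
t_lin = linear_crossing(v1, v2, iso)  # badly biased toward the small end
```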

Patent
TL;DR: In this paper, an electronic design automation tool embodiment uses a single slack graph structure throughout a process to provide communication between a placer (performing placement) and a timing constraint generator (performing slack distribution).
Abstract: An electronic design automation tool embodiment uses a single slack graph structure throughout a process to provide communication between a placer (performing placement) and a timing constraint generator (performing slack distribution). The tool includes a slack graph generator, a timing calculator, a timing analyzer, a timing constraint generator and a net bounding box generator. A list of net constraints and a list of complete path constraints are fed to the slack graph generator during operation. Timing calculations from the delay calculator and zero net RC delays from a clustering process in a placer also provide input to the slack graph generator. The list of net constraints, a list of pin-to-pin constraints and a set of specifications for system clocking are input to the timing analyzer. The timing constraint generator receives a composite slack graph from the timing calculator, slack graph generator and timing analyzer. A refined slack graph is output to the net timing constraint generator for mincut placement and placement on an iterative basis. The net timing constraints can be presented in many formats, such as a limit on the net bounding box.

Journal ArticleDOI
TL;DR: The results indicate that all graph-based algorithms significantly outperform other types of algorithms such as Seminaive and Warren; to the extent possible, the algorithms are also adapted to perform path computations.

Abstract: Several graph-based algorithms have been proposed in the literature to compute the transitive closure of a directed graph. We develop two new algorithms (Basic_TC and Global_DFTC) and compare the performance of their implementations in a disk-based environment with a well-known graph-based algorithm proposed by Schmitz. Our algorithms use depth-first search to traverse a graph and a technique called marking to avoid processing some of the arcs in the graph. They compute the closure by processing nodes in reverse topological order, building descendent sets by adding the descendent sets of children. While the details of these algorithms differ considerably, one important difference among them is the time at which descendent set additions are performed; performing additions late, as in Basic_TC, results in superior performance. The first reason is that early additions result in larger descendent set sizes on the average over the duration of the execution, thereby causing more I/O; very often this turns out to more than offset the gains of not having to fetch certain sets again to add them. The second reason is that information collected in the first pass can be used to apply several optimizations in the second pass. To the extent possible, we also adapt these algorithms to perform path computations. Again, our performance comparison confirms the trends seen in reachability queries. Taken in conjunction with another performance study, our results indicate that all graph-based algorithms significantly outperform other types of algorithms such as Seminaive and Warren.
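The shared core of these graph-based algorithms (descendent sets built by depth-first search in reverse topological order) can be sketched for the acyclic, in-memory case; the disk-based marking and I/O scheduling that differentiate the actual algorithms are omitted.

```python
def transitive_closure(graph):
    """Descendent sets built by depth-first search in reverse topological
    order: a node's set is the union, over its children c, of {c} plus
    desc(c). Acyclic input is assumed for simplicity; the example graph
    is hypothetical."""
    desc, visited = {}, set()

    def dfs(u):
        visited.add(u)
        s = set()
        for v in graph.get(u, ()):
            if v not in visited:
                dfs(v)               # children finished first, so desc[v]
            s |= {v} | desc[v]       # is complete when we add it here
        desc[u] = s

    for u in graph:
        if u not in visited:
            dfs(u)
    return desc

G = {"a": ["b"], "b": ["c"], "c": [], "d": ["c"]}
closure = transitive_closure(G)
```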

Journal ArticleDOI
TL;DR: The Genetic Algorithm techniques are shown to be very effective search procedures for the class of network optimization problem investigated.
Abstract: Two alternative Genetic Algorithm methods for the optimal selection of the layout and connectivity of a dendritic pipe network are presented and compared. Both methods assume that the layout is selected from a directed base graph defining all feasible arcs. The first method uses a conventional binary string to represent the network layout, with the second method using a more efficient integer representation. Comparison with an exact Dynamic Programming formulation is made. The Genetic Algorithm techniques are shown to be very effective search procedures for the class of network optimization problem investigated.
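The integer representation described above can be sketched as a small GA in which gene i-1 encodes the upstream node that node i connects to, so every chromosome decodes to a valid dendritic (tree) layout rooted at node 0. All parameters, operators, and data below are illustrative, not the paper's.

```python
import random

def evolve_layout(coords, pop_size=30, gens=60, seed=1):
    """Toy GA over integer chromosomes for a dendritic pipe network.
    Fitness is total pipe length; node i may only connect to an
    earlier-numbered node, which rules out cycles by construction."""
    rng = random.Random(seed)
    n = len(coords)

    def length(chrom):
        return sum(((coords[i][0] - coords[p][0]) ** 2 +
                    (coords[i][1] - coords[p][1]) ** 2) ** 0.5
                   for i, p in enumerate(chrom, start=1))

    pop = [[rng.randrange(i) for i in range(1, n)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=length)                   # elitist: keep best half
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n - 1)      # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.3:             # mutation: rewire one node
                i = rng.randrange(1, n)
                child[i - 1] = rng.randrange(i)
            children.append(child)
        pop = survivors + children
    best = min(pop, key=length)
    return best, length(best)

# Four hypothetical nodes on a line; a chain layout is shortest.
coords = [(0, 0), (1, 0), (2, 0), (3, 0)]
best, cost = evolve_layout(coords)
```

Because crossover and mutation both respect the "connect only to an earlier node" rule, every individual in every generation is a feasible layout, which is the main appeal of this encoding over a raw arc bit-string.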