
Showing papers on "Graph (abstract data type)" published in 1992


Journal ArticleDOI
TL;DR: The main purpose of this paper is to advocate the use of the graph associated with Tikhonov regularization in the numerical treatment of discrete ill-posed problems, and to demonstrate several important relations between regularized solutions and the graph.
Abstract: When discrete ill-posed problems are analyzed and solved by various numerical regularization techniques, a very convenient way to display information about the regularized solution is to plot the norm or seminorm of the solution versus the norm of the residual vector. In particular, the graph associated with Tikhonov regularization plays a central role. The main purpose of this paper is to advocate the use of this graph in the numerical treatment of discrete ill-posed problems. The graph is characterized quantitatively, and several important relations between regularized solutions and the graph are derived. It is also demonstrated that several methods for choosing the regularization parameter are related to locating a characteristic L-shaped “corner” of the graph.
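
As a rough illustration of the graph this paper advocates, the sketch below (an assumed synthetic test problem, not code from the paper) solves a small Tikhonov-regularized least-squares problem for a sweep of regularization parameters and prints the solution norm against the residual norm; plotting these values on log-log axes produces the L-shaped curve whose corner the parameter-choice methods try to locate.

```python
import numpy as np

def tikhonov_solve(A, b, lam):
    """Minimize ||A x - b||^2 + lam^2 ||x||^2 via an augmented least-squares system."""
    n = A.shape[1]
    A_aug = np.vstack([A, lam * np.eye(n)])
    b_aug = np.concatenate([b, np.zeros(n)])
    return np.linalg.lstsq(A_aug, b_aug, rcond=None)[0]

# A small synthetic ill-conditioned problem (assumed test data).
rng = np.random.default_rng(0)
U, _ = np.linalg.qr(rng.standard_normal((50, 20)))
V, _ = np.linalg.qr(rng.standard_normal((20, 20)))
s = np.logspace(0, -6, 20)                  # rapidly decaying singular values
A = U @ np.diag(s) @ V.T
b = A @ rng.standard_normal(20) + 1e-4 * rng.standard_normal(50)

# Sweep the regularization parameter; plotting ||x|| against ||Ax - b|| on
# log-log axes yields the L-shaped curve discussed above.
for lam in np.logspace(-6, 0, 7):
    x = tikhonov_solve(A, b, lam)
    print(f"lam={lam:.1e}  ||x||={np.linalg.norm(x):.3e}  ||Ax-b||={np.linalg.norm(A @ x - b):.3e}")
```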

3,585 citations




01 Jan 1992
TL;DR: In this paper, the basic notions of Sowa's conceptual graph model [Sowa 84] are defined precisely and their properties are studied.
Abstract: We give precise definitions of the basic notions of Sowa's conceptual graph model [Sowa 84] and study their properties. Our results mainly concern the structure of the specialization relation, the correspondence between graph operations and logical operations, and the algorithmic complexity of implementing the model.

289 citations


Journal ArticleDOI
TL;DR: This paper describes a new way to organize network software that differs from conventional architectures in all three of these properties; the protocol graph is complex, individual protocols encapsulate a single function, and the topology of the graph is dynamic.
Abstract: Network software is a critical component of any distributed system. Because of its complexity, network software is commonly layered into a hierarchy of protocols, or more generally, into a protocol graph. Typical protocol graphs—including those standardized in the ISO and TCP/IP network architectures—share three important properties: the protocol graph is simple, the nodes of the graph (protocols) encapsulate complex functionality, and the topology of the graph is relatively static. This paper describes a new way to organize network software that differs from conventional architectures in all three of these properties. In our approach, the protocol graph is complex, individual protocols encapsulate a single function, and the topology of the graph is dynamic. The main contribution of this paper is to describe the ideas behind our new architecture, illustrate the advantages of using the architecture, and demonstrate that the architecture results in efficient network software.
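
To make the contrast concrete, here is a minimal sketch of the "one function per protocol" idea (hypothetical classes, not the x-kernel interface described in the paper): each protocol object performs a single framing step, and protocols compose into a graph that is traversed downward on send and upward on receive.

```python
class Protocol:
    def __init__(self, name, lower=None):
        self.name = name
        self.lower = lower                   # next protocol in the graph

    def send(self, payload):
        framed = f"{self.name}|{payload}"    # the single function: framing
        return self.lower.send(framed) if self.lower else framed

    def receive(self, message):
        header, _, rest = message.partition("|")
        assert header == self.name, "unexpected header"
        return rest

# Compose a (linear) protocol graph: RPC over fragmentation over the "wire".
wire = Protocol("WIRE")
frag = Protocol("FRAG", lower=wire)
rpc  = Protocol("RPC",  lower=frag)

on_wire = rpc.send("hello")
print(on_wire)                               # WIRE|FRAG|RPC|hello
msg = wire.receive(on_wire)                  # headers stripped one protocol at a time
msg = frag.receive(msg)
print(rpc.receive(msg))                      # hello
```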

272 citations


Proceedings Article
01 Jan 1992
TL;DR: All algorithms are based on a new technique, called sparsification, that transforms an algorithm for sparse graphs into one that will work on any graph; further results speed up the insertion times to match the bounds of known partially dynamic algorithms.
Abstract: We provide data structures that maintain a graph as edges are inserted and deleted, and keep track of the following properties with the following times: minimum spanning forests, graph connectivity, graph 2-edge connectivity, and bipartiteness in time O(n^{1/2}) per change; 3-edge connectivity in time O(n^{2/3}) per change; 4-edge connectivity in time O(nα(n)) per change; k-edge connectivity for constant k in time O(n log n) per change; 2-vertex connectivity and 3-vertex connectivity in time O(n) per change; and 4-vertex connectivity in time O(nα(n)) per change. Further results speed up the insertion times to match the bounds of known partially dynamic algorithms. All our algorithms are based on a new technique that transforms an algorithm for sparse graphs into one that will work on any graph, which we call sparsification.
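
The sparsification transformation itself is not reproduced here; the toy below (an assumption for illustration only) just shows the kind of query such data structures answer, namely graph connectivity maintained under edge insertions with a union-find structure. The paper's contribution is supporting deletions as well, within the per-change bounds stated above.

```python
class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]   # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[ra] = rb

uf = UnionFind(6)
for u, v in [(0, 1), (1, 2), (3, 4)]:        # insert edges
    uf.union(u, v)
print(uf.find(0) == uf.find(2))              # True: 0 and 2 are connected
print(uf.find(0) == uf.find(4))              # False: different components
```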

267 citations


Journal ArticleDOI
01 May 1992-Networks
TL;DR: Algorithms are derived for the case where, for any time t, the number of jobs that can be executed at that time is bounded, and for the special cases where all of the processing times are 0, all of the release times r_i are 0, and all of the deadlines d_i are infinite.
Abstract: Consider a complete directed graph in which each arc has a given length. There is a set of jobs, each job i located at some node of the graph, with an associated processing time h_i, and whose execution has to start within a prespecified time window [r_i, d_i]. We have a single server that can move on the arcs of the graph, at unit speed, and that has to execute all of the jobs within their respective time windows. We consider the following two problems: (a) minimize the time by which all jobs are executed (traveling salesman problem) and (b) minimize the sum of the waiting times of the jobs (traveling repairman problem). We focus on the following two special cases: (a) the jobs are located on a line and (b) the number of nodes of the graph is bounded by some integer constant B. Furthermore, we consider in detail the special cases where (a) all of the processing times are 0, (b) all of the release times r_i are 0, and (c) all of the deadlines d_i are infinite. For many of the resulting problem combinations, we settle their complexity either by establishing NP-completeness or by presenting polynomial (or pseudopolynomial) time algorithms. Finally, we derive algorithms for the case where, for any time t, the number of jobs that can be executed at that time is bounded.
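
The brute-force toy below (an assumed three-job instance, not from the paper) makes the two objectives concrete: for every service order it simulates a unit-speed server that respects release times and deadlines, then reports the best completion time (traveling salesman objective) and the best total waiting time (traveling repairman objective).

```python
from itertools import permutations

# (location on the line, release time r_i, deadline d_i, processing time)
jobs = [(0.0, 0.0, 10.0, 1.0), (3.0, 2.0, 9.0, 1.0), (5.0, 0.0, 12.0, 2.0)]

def evaluate(order, start=0.0):
    """Return (completion time, total waiting time) for a service order, or None if infeasible."""
    t, pos, total_wait = 0.0, start, 0.0
    for i in order:
        loc, release, deadline, proc = jobs[i]
        t += abs(loc - pos)                  # travel at unit speed
        t = max(t, release)                  # wait for the release time
        if t > deadline:                     # execution must start within [r_i, d_i]
            return None
        total_wait += t                      # job i waits until its service starts
        t += proc
        pos = loc
    return t, total_wait

feasible = [r for r in (evaluate(o) for o in permutations(range(len(jobs)))) if r]
print("min completion time (traveling salesman):", min(c for c, _ in feasible))
print("min total waiting time (traveling repairman):", min(w for _, w in feasible))
```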

261 citations



Journal ArticleDOI
TL;DR: The construction of the HTG at a given hierarchy level, the derivation of the execution conditions of tasks which maximizes task-level parallelism, and the optimization of these conditions which results in reducing synchronization overhead imposed by data and control dependences are emphasized.
Abstract: Presents the hierarchical task graph (HTG) as an intermediate parallel program representation which encapsulates minimal data and control dependences, and which can be used for the extraction and exploitation of functional, or task-level parallelism. The hierarchical nature of the HTG facilitates efficient task-granularity control during code generation, and thus applicability to a variety of parallel architectures. The construction of the HTG at a given hierarchy level, the derivation of the execution conditions of tasks which maximizes task-level parallelism, and the optimization of these conditions which results in reducing synchronization overhead imposed by data and control dependences are emphasized. Algorithms for the formation of tasks and their execution conditions based on data and control dependence constraints are presented. The issue of optimization of such conditions is discussed, and optimization algorithms are proposed. The HTG is used as the intermediate representation of parallel Fortran and C programs for generating parallel source as well as parallel machine code.

220 citations


Journal ArticleDOI
TL;DR: Stemming from the Symposium on Graph Theory in Chemistry, this paper surveys topological indexes: elementary graph-theoretical concepts, definitions of selected topological indices, and the design of QSPR models.
Abstract: Symposium on Graph Theory in Chemistry. Topological indexes: elementary graph-theoretical concepts, definitions of selected topological indices, and the design of QSPR models.
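
As one concrete instance of a topological index of the kind defined in such surveys, the sketch below (an assumed example, not from the paper) computes the Wiener index of a hydrogen-suppressed molecular graph as the sum of shortest-path distances over all vertex pairs.

```python
from collections import deque

def wiener_index(adjacency):
    """adjacency: dict vertex -> list of neighbouring vertices."""
    total = 0
    for source in adjacency:
        # BFS distances from `source` (unit bond lengths)
        dist = {source: 0}
        queue = deque([source])
        while queue:
            u = queue.popleft()
            for v in adjacency[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(dist.values())
    return total // 2                        # each pair was counted twice

# n-butane carbon skeleton: a path on 4 vertices, Wiener index 10
butane = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(wiener_index(butane))                  # 10
```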

207 citations


Proceedings Article
Elke A. Rundensteiner
23 Aug 1992
TL;DR: The fundamental concept of view independence is introduced, MultiView is shown to be view independent, and implementation techniques for realizing MultiView with existing OODB technology are outlined.
Abstract: It has been widely recognized that object-oriented database (OODB) technology needs to be extended to provide a mechanism similar to views in relational database systems. We define an object-oriented view to be an arbitrarily complex virtual schema graph with possibly restructured generalization and decomposition hierarchies - rather than just one virtual class as has been proposed in the literature. In this paper, we propose a methodology, called MultiView, for supporting multiple such view schemata. MultiView breaks the schema design task into the following independent and well-defined subtasks: (1) the customization of type descriptions and object sets of existing classes by deriving virtual classes, (2) the integration of all derived classes into one consistent global schema graph, and (3) the definition of arbitrarily complex view schemata on this augmented global schema. For the first task of MultiView, we define a set of object algebra operators that can be used by the view definer for class customization. For the second task of MultiView, we propose an algorithm that automatically integrates these newly derived virtual classes into the global schema. We solve the third task of MultiView by first letting the view definer explicitly select the desired view classes from the global schema using a view definition language and then by automatically generating a view class hierarchy for these selected classes. In addition, we present algorithms that verify the closure property of a view and, if found to be incomplete, transform it into a closed, yet minimal, view. In this paper, we introduce the fundamental concept of view independence and show MultiView to be view independent. We also outline implementation techniques for realizing MultiView with existing OODB technology.

193 citations


Proceedings ArticleDOI
David Lee, Mihalis Yannakakis
01 Jul 1992
TL;DR: An algorithm for this problem is presented that applies to general systems, provided appropriate primitive operations for manipulating blocks of states are available and termination can be determined.
Abstract: We are given a transition system implicitly through a compact representation and wish to perform simultaneously reachability analysis and minimization without constructing first the whole system graph. We present an algorithm for this problem that applies to general systems, provided we have appropriate primitive operations for manipulating blocks of states and we can determine termination; the number of operations needed to construct the minimal reachable graph is quadratic in the size of this graph. We specialize the method to obtain efficient algorithms for extended finite state machines that apply separable affine transformations on the variables.
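
The paper's point is to interleave reachability analysis with minimization on an implicit representation; the toy below shows only the classical explicit-graph step it builds on, refining a partition of states until every block is stable (bisimulation minimization). The example transition system is an assumption for illustration.

```python
transitions = {0: {"a": 1}, 1: {"b": 3}, 2: {"b": 3}, 3: {}, 4: {"a": 2}}
states = list(transitions)

def signature(state, block_of):
    """On which labels does `state` move into which blocks?"""
    return frozenset((lab, block_of[succ]) for lab, succ in transitions[state].items())

def refine(partition):
    block_of = {s: i for i, block in enumerate(partition) for s in block}
    groups = {}
    for s in states:
        groups.setdefault((block_of[s], signature(s, block_of)), set()).add(s)
    return {frozenset(g) for g in groups.values()}

partition = {frozenset(states)}              # start from the trivial partition
while True:
    refined = refine(partition)
    if refined == partition:                 # stable: blocks are bisimulation classes
        break
    partition = refined

print(partition)                             # three blocks; states 1 and 2 are bisimilar
```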

01 Jan 1992
TL;DR: The non-directional blocking graph is introduced, a succinct characterization of the blocking relationships between parts in an assembly, and efficient algorithms to identify removable subassemblies by constructing and analyzing the NDBG are described.
Abstract: This dissertation addresses the problem of generating feasible assembly sequences for a mechanical product from a geometric model of the product. An operation specifies a motion to bring two subassemblies together to make a larger subassembly. An assembly sequence is a sequence of operations that construct the product from the individual parts. I introduce the non-directional blocking graph (NDBG), a succinct characterization of the blocking relationships between parts in an assembly. I describe efficient algorithms to identify removable subassemblies by constructing and analyzing the NDBG. For an assembly A of n parts and m part-part contacts equivalent to k contact points, a subassembly that can translate a small distance from the rest of A can be identified in $O(mk^2)$ time. When rotations are allowed as well, the time bound is $O(mk^5)$. Both algorithms are extended to find connected subassemblies in the same time bounds. All free subassemblies can be identified in output-dependent polynomial time. Another algorithm based on the NDBG identifies subassemblies that can be completely removed by a single translation. For a polyhedral assembly with v vertices, the algorithm finds a removable subassembly and direction in $O(n^2v^4)$ time. When applied to find the set of translations separating two parts, the algorithm is optimal. A final method accelerates the generation of linear assembly sequences, in which each operation mates a single part with a subassembly. The results of geometric calculations are stored in logical expressions and later retrieved to answer similar geometric queries. Several types of expressions with increasing descriptive power are given. An assembly sequencing testbed called GRASP was implemented using the above methods. From a standard three-dimensional model of a product, GRASP finds part contacts and motion constraints, and constructs an AND/OR graph representing a set of geometrically feasible assembly sequences for the product. Experimental results are shown for several complex products.
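
A hedged toy of the underlying blocking idea (assumed data, not the dissertation's algorithms or time bounds): fix one translation direction, record which parts block which, and note that a proper subset of parts can translate away together exactly when no blocking arc leaves the subset.

```python
from itertools import combinations

parts = ["A", "B", "C", "D"]
blocked_by = {                    # directional blocking graph for direction +x
    "A": set(),                   # A is free to move in +x
    "B": {"A"},                   # B collides with A when moved in +x
    "C": {"B", "D"},
    "D": set(),
}

def removable(subset):
    s = set(subset)
    return all(blocked_by[p] <= s for p in s)   # no blocking arc leaves the subset

for k in range(1, len(parts)):
    for subset in combinations(parts, k):
        if removable(subset):
            print("removable in +x:", subset)
```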

Journal ArticleDOI
TL;DR: This paper reviews different ways of describing expert system reasoning, emphasizing the use of simple logic, set, and graph notations for making dimensional analyses of modeling languages and inference methods.

Book
31 May 1992
TL;DR: The authors present a high-level synthesis system organized around a sequencing graph and resource model, covering behavioral transformations, design space exploration, relative scheduling, resource conflict resolution, and relative control generation and optimization.
Abstract: Contents: 1. Introduction; 2. System Overview; 3. Behavioral Transformations; 4. Sequencing Graph and Resource Model; 5. Design Space Exploration; 6. Relative Scheduling; 7. Resource Conflict Resolution; 8. Relative Control Generation; 9. Relative Control Optimization; 10. System Implementation; 11. Experimental Results; 12. Conclusions and Future Work; References; Index.

Journal ArticleDOI
TL;DR: A graph-based technology-mapping package for delay optimization in lookup-table-based field programmable gate array (FPGA) designs is presented and results show that, on average, DAG-Map reduces both network delay and the number of look-up tables.
Abstract: A graph-based technology-mapping package for delay optimization in lookup-table-based field programmable gate array (FPGA) designs is presented. The algorithm, DAG-Map, carries out technology mapping and delay optimization on the entire Boolean network, instead of decomposing it into fan-out-free trees. As a preprocessing phase of DAG-Map, a general algorithm called DMIG, which transforms an arbitrary n-node network into a two-input network with only an O(1) factor increase in network depth, is introduced. A matching-based technique that minimizes area without increasing network delay, used in the postprocessing phase of DAG-Map, is also discussed. DAG-Map is compared with previous FPGA mapping algorithms on a set of logic synthesis benchmarks. The experimental results show that, on average, DAG-Map reduces both network delay and the number of lookup tables.

Journal ArticleDOI
TL;DR: This approach provides a systematic method for organizing and representing domain knowledge through appropriate design of the clique functions describing the Gibbs distribution representing the pdf of the underlying MRF.
Abstract: An image is segmented into a collection of disjoint regions that form the nodes of an adjacency graph, and image interpretation is achieved through assigning object labels (or interpretations) to the segmented regions (or nodes) using domain knowledge, extracted feature measurements, and spatial relationships between the various regions. The interpretation labels are modeled as a Markov random field (MRF) on the corresponding adjacency graph, and the image interpretation problem is then formulated as a maximum a posteriori (MAP) estimation rule, given domain knowledge and region-based measurements. Simulated annealing is used to find this best realization or optimal MAP interpretation. This approach also provides a systematic method for organizing and representing domain knowledge through appropriate design of the clique functions describing the Gibbs distribution representing the pdf of the underlying MRF. A general methodology is provided for the design of the clique functions. Results of image interpretation experiments on synthetic and real-world images are described.
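
A minimal sketch of the labelling machinery (assumed unary and pairwise energies standing in for the paper's clique functions): simulated annealing over label assignments on a small region adjacency graph, accepting uphill moves with the Metropolis rule while the temperature cools.

```python
import math, random

random.seed(0)
labels = ["sky", "grass"]
regions = [0, 1, 2, 3]
edges = [(0, 1), (1, 2), (2, 3)]             # region adjacency graph
# Unary cost: how poorly each label fits each region's feature measurements.
unary = {0: {"sky": 0.1, "grass": 0.9}, 1: {"sky": 0.4, "grass": 0.6},
         2: {"sky": 0.8, "grass": 0.2}, 3: {"sky": 0.9, "grass": 0.1}}
beta = 0.5                                   # pairwise penalty when neighbours disagree

def energy(assign):
    data = sum(unary[r][assign[r]] for r in regions)
    smooth = sum(beta for u, v in edges if assign[u] != assign[v])
    return data + smooth

assign = {r: random.choice(labels) for r in regions}
temperature = 2.0
for _ in range(3000):
    r = random.choice(regions)
    proposal = dict(assign)
    proposal[r] = random.choice(labels)
    delta = energy(proposal) - energy(assign)
    if delta <= 0 or random.random() < math.exp(-delta / temperature):
        assign = proposal                    # Metropolis acceptance rule
    temperature *= 0.995                     # slow cooling schedule
print(assign)                                # typically {0: 'sky', 1: 'sky', 2: 'grass', 3: 'grass'}
```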

Journal ArticleDOI
TL;DR: A tangent graph for path planning of mobile robots among obstacles with a general boundary is proposed, which has the same data structure as the visibility graph but its nodes represent common tangent points on obstacle boundaries, and its edges correspond to collision-free common tangents between the boundaries.
Abstract: This article proposes a tangent graph for path planning of mobile robots among obstacles with a general boundary. The tangent graph is defined on the basis of the locally shortest path. It has the ...

Journal ArticleDOI
TL;DR: In this paper, a hierarchical aspect representation based on projected surfaces of the primitives is introduced, and a set of conditional probabilities captures the ambiguity of mappings between the levels of the hierarchy.
Abstract: We present an approach to the recovery and recognition of 3-D objects from a single 2-D image. The approach is motivated by the need for more powerful indexing primitives, and shifts the burden of recognition from the model-based verification of simple image features to the bottom-up recovery of complex volumetric primitives. Given a recognition domain consisting of a database of objects, we first select a set of object-centered 3-D volumetric modeling primitives that can be used to construct the objects. Next, using a CAD system, we generate the set of aspects of the primitives. Unlike typical aspect-based recognition systems that use aspects to model entire objects, we use aspects to model the parts from which the objects are constructed. Consequently, the number of aspects is fixed and independent of the size of the object database. To accommodate the matching of partial aspects due to primitive occlusion, we introduce a hierarchical aspect representation based on the projected surfaces of the primitives; a set of conditional probabilities captures the ambiguity of mappings between the levels of the hierarchy. From a region segmentation of the input image, we present a novel formulation of the primitive recovery problem based on grouping the regions into aspects. No domain dependent heuristics are used; we exploit only the probabilities inherent in the aspect hierarchy. Once the aspects are recovered, we use the aspect hierarchy to infer a set of volumetric primitives and their connectivity relations. Subgraphs of the resulting graph, in which nodes represent 3-D primitives and arcs represent primitive connections, are used as indices to the object database. The verification of object hypotheses consists of a topological verification of the recovered graph, rather than a geometrical verification of image features. A system has been built to demonstrate the approach, and it has been successfully applied to both synthetic and real imagery.

Proceedings ArticleDOI
01 Jun 1992
TL;DR: This report proposes that graphical visualization techniques can help engineers understand and solve a class of these problems, illustrated by simplifying dependencies among components of a system and designing an efficient code overlay structure.
Abstract: Software engineering problems often involve large sets of objects and complex relationships among them. This report proposes that graphical visualization techniques can help engineers understand and solve a class of these problems. To illustrate this, two problems are analyzed and recast using the graphical language GraphLog. The first problem is that of simplifying dependencies among components of a system, which translates into removing cycles from a graph. The second problem is that of designing an efficient code overlay structure, which is facilitated in several ways through graphical techniques.
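
For the first problem, removing cycles from a dependency graph, the sketch below (assumed dependency data; GraphLog itself is not used) finds the back edges of a depth-first search of a component graph; deleting those edges leaves the dependencies acyclic.

```python
deps = {"ui": ["core"], "core": ["db", "ui"], "db": [], "log": ["core"]}

WHITE, GREY, BLACK = 0, 1, 2
colour = {n: WHITE for n in deps}
back_edges = []

def dfs(u):
    colour[u] = GREY
    for v in deps[u]:
        if colour[v] == GREY:                # edge back into the current DFS stack
            back_edges.append((u, v))
        elif colour[v] == WHITE:
            dfs(v)
    colour[u] = BLACK

for node in deps:
    if colour[node] == WHITE:
        dfs(node)

print(back_edges)    # [('core', 'ui')]: deleting it makes the dependency graph acyclic
```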

Proceedings ArticleDOI
06 Jun 1992
TL;DR: Genetic algorithms (GAs) are used to generate neural networks that implement Boolean functions by using chromosomes that encode an algorithmic description based upon a cell rewriting grammar, giving modular and interpretable architectures with a powerful scalability property.
Abstract: Genetic algorithms (GAs) are used to generate neural networks that implement Boolean functions. Neural networks involve both an architecture, which is a graph of connections, and a set of weights. The algorithm that is put forward yields both the architecture and the weights by using chromosomes that encode an algorithmic description based upon a cell rewriting grammar. The developmental process interprets the grammar for l cycles and develops a neural net parametrized by l. The encoding along with the developmental process have been designed in order to improve on the existing approaches. They implement the following key properties. The representation on the chromosome is abstract and compact. Any chromosome develops a valid phenotype. The developmental process gives modular and interpretable architectures with a powerful scalability property. The GA finds a neural net for the 50-input parity function, and for the 40-input symmetry function.


Journal ArticleDOI
TL;DR: This work addresses the problem of generating a state graph from a program that is minimal with respect to bisimulation, without building the whole state graph; an algorithm is derived and illustrated.

Book ChapterDOI
29 Jun 1992
TL;DR: The implementation of a type inference algorithm for untyped object-oriented programs with inheritance, assignments, and late binding is presented, which significantly improves the previous one, presented at OOPSLA'91, since it can handle collection classes, such as List, in a useful way.
Abstract: We present the implementation of a type inference algorithm for untyped object-oriented programs with inheritance, assignments, and late binding. The algorithm significantly improves our previous one, presented at OOPSLA'91, since it can handle collection classes, such as List, in a useful way. Also, the complexity has been dramatically improved, from exponential time to low polynomial time. The implementation uses the techniques of incremental graph construction and constraint template instantiation to avoid representing intermediate results, doing superfluous work, and recomputing type information. Experiments indicate that the implementation type checks as much as 100 lines per second. This results in a mature product, on which a number of tools can be based, for example a safety tool, an image compression tool, a code optimization tool, and an annotation tool. This may make type inference for object-oriented languages practical.

Journal ArticleDOI
TL;DR: This paper introduces a new data structure called a P4-tree, and uses the data structure as part of an algorithm to find the substitution decomposition of a graph in O(mα(m,n)) time.

Book ChapterDOI
24 Aug 1992
TL;DR: The major developments in understanding the complexity of the graph connectivity problem in several computational models are surveyed, and some challenging open problems are highlighted.
Abstract: In this paper we survey the major developments in understanding the complexity of the graph connectivity problem in several computational models, and highlight some challenging open problems.

Journal ArticleDOI
TL;DR: A general method is proposed whereby the compiler can make the uses of each procedure implementation more uniform, enabling a greater degree of specialization, together with a general-purpose minimal-function-graph semantics that extends and improves prior constructions of this sort.
Abstract: Many optimizing compilers use interprocedural analysis to determine how the source program uses each of its procedures. Customarily, the compiler gives each procedure a single implementation, which is specialized according to restrictions met by all uses of the procedure. We propose a general method whereby the compiler can make the uses of each procedure implementation more uniform, enabling a greater degree of specialization. The method creates several implementations of each procedure, each specialized for a different class of use; it avoids run-time overhead by determining at compile time the appropriate procedure implementation for each call in the expanded program. The implementation suited to each call is determined by embedding in the program a deterministic finite automaton that, during execution, scans the current call path, i.e., the sequence of calls entered but not yet exited. Each automaton state has an associated class of procedure uses that includes the use made by the last call in each call path that, on input, leaves the automaton in the given state. The compiler creates one implementation for each state, using the associated class of use to specialize the implementation and the transition function to determine which implementation to invoke for each call in the expanded program. With standard automata-theory techniques, it is straightforward to merge several automaton states, in case several classes of use lead to specializations that are the same or whose differences are not substantial enough to warrant separate implementations. Thus, our method allows the compiler to perform multiple specialization where it is useful, while avoiding excessive enlargement of the generated code. We formalize the foundation of our method by constructing a general-purpose minimal-function graph semantics that extends and improves prior constructions of this sort.
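
A hedged sketch of the dispatch idea only (an assumed two-state automaton and call sites, not the paper's compile-time construction): an automaton over call sites whose current state names the specialized implementation each call should use. The paper resolves this mapping at compile time; the runtime simulation below merely illustrates it.

```python
# States: "generic" and "const_arg" (calls reached via call site c1 are assumed
# to pass a constant argument, so a specialized body can be used).
transition = {
    ("generic", "c1"): "const_arg",
    ("generic", "c2"): "generic",
    ("const_arg", "c1"): "const_arg",
    ("const_arg", "c2"): "generic",
}
implementation = {"generic": "f_generic", "const_arg": "f_specialized_for_42"}

def implementations_along(call_path, start="generic"):
    """Which compiled body each call on the path would dispatch to."""
    state, chosen = start, []
    for call_site in call_path:
        state = transition[(state, call_site)]
        chosen.append((call_site, implementation[state]))
    return chosen

print(implementations_along(["c1", "c1", "c2"]))
# [('c1', 'f_specialized_for_42'), ('c1', 'f_specialized_for_42'), ('c2', 'f_generic')]
```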

Journal ArticleDOI
TL;DR: This article introduces patterns to identify what is interesting in data and gives some examples of patterns for difference‐, change‐, and trend‐detection, and summarizes what must be specified to define a pattern.
Abstract: In this article we describe some goals and problems of KDD. Approaches are presented which have been implemented in the Statistics Interpreter Explora, a prototype assistant system for discovering interesting findings in recurrent datasets. We introduce patterns to identify what is interesting in data and give some examples of patterns for difference-, change-, and trend-detection. Then we summarize what must be specified to define a pattern. Besides some descriptive parts, this includes a procedural verification method. Object-oriented programming techniques can simplify the specializations of general patterns. We identify search as a constituent principle of discovery and introduce object structures as a basis to induce a graph structure on the search space. We mention several strategies for graph search and describe approaches for dealing with the aggregation, redundancy, and overlapping problems. Then we address the presentation of findings in natural language and graphical form, focusing on the methods to design good graphical presentations by knowledge-based techniques. Finally, we discuss the paradigm of an adaptive discovery assistant, including the problem of how to reuse the discovered knowledge for further discovery. © 1992 John Wiley & Sons, Inc.

Journal ArticleDOI
01 Jan 1992
TL;DR: The result of applying the model to the recognition of handprinted Chinese characters is presented, and a measure for matching two FAGs is suggested that has its interpretation in fuzzy logic.
Abstract: To include fuzzy properties in solving some types of problems, the attributed graph is extended to provide a fuzzy-attribute graph (FAG). With such an extension, equality of attributes can no longer be used when matching of FAGs is considered, as equality of two fuzzy sets is too stringent a condition. A measure for matching two FAGs is suggested. The measure has its interpretation in fuzzy logic. The result of applying the model to the recognition of handprinted Chinese characters is presented.
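
One common way to relax attribute equality, sketched below with assumed data (the paper's exact measure may differ): compare two fuzzy attribute values by their degree of overlap and score a node match by its weakest attribute agreement.

```python
def compatibility(mu_a, mu_b):
    """Degree of overlap of two fuzzy sets: max over the domain of min membership."""
    domain = set(mu_a) | set(mu_b)
    return max(min(mu_a.get(x, 0.0), mu_b.get(x, 0.0)) for x in domain)

# Fuzzy attribute values over a small linguistic domain for stroke "slant".
node_in_input = {"slant": {"steep": 0.8, "flat": 0.2}}
node_in_model = {"slant": {"steep": 0.6, "flat": 0.5}}

score = min(compatibility(node_in_input[a], node_in_model[a])
            for a in node_in_input)
print(score)          # 0.6: the two slant descriptions overlap to degree 0.6
```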

Proceedings Article
30 Aug 1992
TL;DR: The Conceptual Clustering system KBG structures information from a set of observations and a domain theory into a directed graph of concepts, generated by an iterative use of clustering and generalization operators, both guided by similarity measures.
Abstract: We present the Conceptual Clustering system KBG. The knowledge representation language used, both for input and output, is based on first order logic with some extensions to handle quantitative and procedural knowledge. From a set of observations and a domain theory, KBG structures this information into a directed graph of concepts. This graph is generated by an iterative use of clustering and generalization operators, both guided by similarity measures.

Journal ArticleDOI
TL;DR: It is demonstrated that bounds for optimal multiple alignment of k sequences can be derived from a solution of the maximum weighted matching problem in a k-vertex graph.
Abstract: Multiple sequence alignment is an important problem in computational molecular biology. Dynamic programming for optimal multiple alignment requires too much time to be practical. Although many algorithms for suboptimal alignment have been suggested, no “performance guarantees” algorithms have been known until recently. A computationally efficient approximation multiple alignment algorithm with guaranteed error bounds equal to the normalized communication cost of a corresponding graph is given in this paper. Recently, Altschul and Lipman [SIAM J. Appl. Math., 49 (1989), pp. 197–209] used suboptimal alignments for reducing the computational complexity of the optimal alignment algorithm. This paper develops the Altschul–Lipman approach and demonstrates that bounds for optimal multiple alignment of k sequences can be derived from a solution of the maximum weighted matching problem in a k-vertex graph. Fast maximum matching algorithms allow efficient implementation of dynamic bounds for the multiple alignment ...
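
A sketch of the two ingredients only, on assumed toy sequences (the paper's bound derivation is not reproduced): pairwise alignment costs computed by dynamic programming, and a maximum-weight matching over the k sequences found by brute force since k is small.

```python
def edit_distance(a, b):
    """Classical DP alignment cost with unit mismatch/indel costs."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (ca != cb))
    return dp[-1]

seqs = ["GATTACA", "GACTATA", "ATTACCA", "GATTAGA"]   # k = 4 toy sequences
k = len(seqs)
dist = {(i, j): edit_distance(seqs[i], seqs[j]) for i in range(k) for j in range(i + 1, k)}

def perfect_matchings(items):
    """Enumerate all ways of pairing up an even number of items."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for idx, partner in enumerate(rest):
        for m in perfect_matchings(rest[:idx] + rest[idx + 1:]):
            yield [(first, partner)] + m

best = max(perfect_matchings(list(range(k))),
           key=lambda m: sum(dist[min(p), max(p)] for p in m))
print("pairwise distances:", dist)
print("maximum-weight matching:", best)
```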