
Showing papers on "Adjacency list published in 1994"


Journal ArticleDOI
TL;DR: A new algorithm for detecting self‐collisions on highly discretized moving polygonal surfaces is presented, based on geometrical shape regularity properties that permit avoiding many useless collision tests and an improved hierarchical representation of the surface is used.
Abstract: We present a new algorithm for detecting self-collisions on highly discretized moving polygonal surfaces. It is based on geometrical shape regularity properties that allow many useless collision tests to be avoided. We use an improved hierarchical representation of the surface that, beyond the optimizations inherent in hierarchisation, lets us exploit adjacency information to apply our geometrical optimizations efficiently. We reduce the computation time between frames by building the hierarchical structure automatically, once, as a preprocessing task. We describe the main principles of our algorithm, followed by some performance tests.
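The adjacency-based pruning described above can be sketched in miniature: skip collision tests between topologically adjacent triangles, and test cheap bounding boxes before any exact intersection. This is an illustration of the general idea only; the paper's hierarchy and regularity criteria are more elaborate, and all names here are assumptions.

```python
# Sketch: prune self-collision candidates on a triangle mesh.
# Adjacent triangles (sharing a vertex) are skipped; remaining pairs
# are filtered by axis-aligned bounding-box overlap.

def aabb(tri):
    """Axis-aligned bounding box of a triangle given as 3 (x, y, z) points."""
    xs, ys, zs = zip(*tri)
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))

def boxes_overlap(a, b):
    (alo, ahi), (blo, bhi) = a, b
    return all(alo[i] <= bhi[i] and blo[i] <= ahi[i] for i in range(3))

def candidate_pairs(tris, faces):
    """tris: triangle coordinates; faces: vertex-index triples.
    Returns triangle index pairs that still need an exact test."""
    boxes = [aabb(t) for t in tris]
    out = []
    for i in range(len(tris)):
        for j in range(i + 1, len(tris)):
            if set(faces[i]) & set(faces[j]):   # adjacent: skip useless test
                continue
            if boxes_overlap(boxes[i], boxes[j]):
                out.append((i, j))
    return out
```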

203 citations


Journal ArticleDOI
01 Jun 1994-Language
TL;DR: Locality Theory as mentioned in this paper is a theory of phonological adjacency requirements, defined by a universal Locality Condition, which requires elements to be local within a plane, the Adjacency Parameter, which in turn allows rules to impose further constraints on the maximal distance between interacting segments, and by Transplanar Locality, which bans certain types of relations across featural planes.
Abstract: One motivation for nonlinear phonology is the potential for eliminating the devices of linear phonology needed to allow rules to apply to nonadjacent segments. Imposing hierarchical structure on the organization of features within a segment and allowing segments to be unspecified for certain features make it possible to view apparently long-distance rules as rules operating between segments which are adjacent at a specified level, even though the segments are not adjacent at all levels of representation. This paper presents a theory of phonological adjacency requirements. Locality Theory is defined by a universal Locality Condition, which requires elements to be local within a plane, the Adjacency Parameter, which in turn allows rules to impose further constraints on the maximal distance between interacting segments, and by Transplanar Locality, which bans certain types of relations across featural planes. A survey of phonological processes demonstrates the generality of this theory across feature tiers.*

166 citations


Journal ArticleDOI
TL;DR: This article addresses the question of how to define boundaries in multidimensional digital spaces so that they are "closed" and connected, and so that they partition the digital space into an interior set that is connected and an exterior set that is connected.

92 citations



Journal ArticleDOI
01 May 1994
TL;DR: It is found that the increase in parallelism afforded by the coloring-based orderings more than offsets any increase in the number of iterations required for the convergence of the conjugate gradient algorithm.
Abstract: The efficiency of a parallel implementation of the conjugate gradient method preconditioned by an incomplete Cholesky factorization can vary dramatically depending on the column ordering chosen. One method to minimize the number of major parallel steps is to choose an ordering based on a coloring of the symmetric graph representing the nonzero adjacency structure of the matrix. In this paper, we compare the performance of the preconditioned conjugate gradient method using these coloring orderings with a number of standard orderings on matrices arising from finite element models. Because optimal colorings for these systems may not be known a priori, we employ a graph coloring heuristic to obtain consistent colorings. Based on lower bounds obtained from the local structure of these systems, we find that the colorings determined by the heuristic are nearly optimal. For these problems, we find that the increase in parallelism afforded by the coloring-based orderings more than offsets any increase in the number of iterations required for the convergence of the conjugate gradient algorithm. We also demonstrate that the performance of this parallel preconditioner is scalable. We give results from the Intel iPSC/860 to support our claims.
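The coloring step can be illustrated with a standard greedy heuristic (a sketch of the general technique, not necessarily the paper's exact heuristic): vertices of the same color share no edge, so the unknowns in one color class can be updated in parallel.

```python
# Greedy graph coloring over the nonzero adjacency structure.
# Each color class is an independent set -> one parallel update step.

def greedy_coloring(adj):
    """adj: dict vertex -> set of neighbors. Returns vertex -> color index."""
    color = {}
    for v in sorted(adj):                       # fixed order for reproducibility
        used = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in used:                        # smallest color unused by neighbors
            c += 1
        color[v] = c
    return color
```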

81 citations


Journal ArticleDOI
TL;DR: This paper investigates the extraction of machining features from boundary descriptions of iso-oriented (having no inclined faces) polyhedrons and proves that manufacturing the features proposed by the feature extractor results exactly in the desired part.
Abstract: This paper investigates the extraction of machining features from boundary descriptions of iso-oriented (having no inclined faces) polyhedrons. We prove that manufacturing the features proposed by our feature extractor results exactly in the desired part; in this respect, the approach is both sound and complete. Our method uses the adjacency information between faces to derive the features. This keeps the determination of isolated features in a part straightforward. However, interaction of features creates difficulties since the adjacency information between some faces is lost. We derive this lost information by considering faces that when extended intersect other faces to form concave edges. The derived face adjacencies are termed virtual links. Augmenting the cavity graph of the object with the virtual links leads to its feature graph, and subgraph matching of primitive graphs in this graph results in feature hypotheses. A feature hypothesis is considered valid if the volume corresponding to it is not shared with the part in question; therefore, we verify the feature hypotheses by checking the regularized intersection of the feature volume and the part. Thus, feature verification employs a constructive solid geometry approach. We have implemented a prototype of the system in the Smalltalk-80 environment.

67 citations


Journal ArticleDOI
TL;DR: Evaluation for several adjacency maps shows that the conventional algorithm has the largest number of constraints, with a low degree of effort in derivation of adjacency constraints and a small computational task to find a final solution.
Abstract: A mathematical programming formulation of the area-based forest planning problem can result in a large number of adjacency constraints with much potential for redundancy. Two heuristic algorithms have been proposed for reducing redundant adjacency constraints generated by the conventional algorithm. In this paper another analytical algorithm is proposed, and its efficiency and that of the conventional algorithm and the two heuristics are evaluated and compared. Comparison is based on the number of constraints, and on the computational effort needed both to derive the adjacency constraints and to solve the associated planning problem. Evaluation for several adjacency maps shows that the conventional algorithm has the largest number of constraints, with a low degree of effort in derivation of adjacency constraints and a small computational task to find a final solution. The first heuristic algorithm has the smallest number of constraints but involves a high degree of effort and a large computational task. T...
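The conventional formulation can be sketched as follows. The constraint form assumed here, one constraint x_i + x_j <= 1 per adjacent pair of harvest units, is the standard pairwise formulation; the paper's exact algorithms are not reproduced.

```python
# Generate pairwise adjacency constraints from an adjacency map.
# Duplicate pairs (i, j) / (j, i) collapse to one constraint.

def pairwise_adjacency_constraints(adjacent):
    """adjacent: iterable of (i, j) unit pairs. Returns constraint strings."""
    seen = set()
    constraints = []
    for i, j in adjacent:
        key = (min(i, j), max(i, j))            # canonical unordered pair
        if key not in seen:
            seen.add(key)
            constraints.append(f"x{key[0]} + x{key[1]} <= 1")
    return constraints
```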

58 citations


Proceedings ArticleDOI
13 Nov 1994
TL;DR: A new method for gray-scale, color and multispectral image segmentation is proposed, based on a morphological split-and-merge fast watershed algorithm and region adjacency graph processing.
Abstract: A new method for gray-scale, color and multispectral image segmentation is proposed. The method is based on a morphological split-and-merge fast watershed algorithm and region adjacency graph processing. It features partly iterative processing with relatively low computational load. Two system alternatives are presented and their application to color image segmentation is discussed. Initial computer simulations confirm correct operation and encourage further research.
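The region-adjacency-graph merge stage might be sketched like this. This is a simplified illustration, not the authors' algorithm: the similarity criterion is an assumption, and region means are not recomputed after a merge.

```python
# Greedy merge over a region adjacency graph: fuse adjacent regions
# whose mean intensities differ by less than a threshold, tracked
# with a tiny union-find.

def merge_rag(means, edges, threshold):
    """means: region id -> mean intensity; edges: (a, b) adjacency pairs.
    Returns region id -> representative region id after merging."""
    parent = {r: r for r in means}

    def find(r):
        while parent[r] != r:
            parent[r] = parent[parent[r]]       # path halving
            r = parent[r]
        return r

    edges = {tuple(sorted(e)) for e in edges}
    changed = True
    while changed:                              # repeat until stable
        changed = False
        for a, b in sorted(edges):
            ra, rb = find(a), find(b)
            if ra != rb and abs(means[ra] - means[rb]) < threshold:
                parent[rb] = ra                 # merge similar regions
                changed = True
    return {r: find(r) for r in means}
```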

57 citations


Journal Article
TL;DR: In this paper, the authors extended the method of image segmentation by pyramid relinking to the formalism of hierarchies of region adjacency graphs, which has a number of advantages: resulting regions are connected; the method is adaptive, and therefore artifacts caused by a regular grid are avoided; and information on regions and boundaries between regions can be combined to guide the segmentation procedure.
Abstract: The method of image segmentation by pyramid relinking is extended to the formalism of hierarchies of region adjacency graphs. This approach has a number of advantages: (1) resulting regions are connected; (2) the method is adaptive, and therefore artifacts caused by a regular grid are avoided; and (3) information on regions and boundaries between regions can be combined to guide the segmentation procedure. The method is evaluated by the segmentation of a number of synthetic and natural images. Note: This research was supported by the Foundation for Computer Science in the Netherlands (SION) with financial support from the Netherlands Organization for Scientific Research (NWO). This research was part of a project in which the TNO Institute for Human Factors, CWI and the University of Amsterdam cooperate. Most of this work was performed while the author was a guest at the Technical University of Vienna, Austria, in the framework of the Erasmus exchange program of the European Community.

47 citations


Proceedings ArticleDOI
09 Oct 1994
TL;DR: This method can be applied to all problems in which a 2D discrete space can be represented by a planar graph, and overcomes the problem of unbounded vertex degree proper to the existing approach to building irregular pyramids.
Abstract: A new algorithm for building irregular pyramids is presented. The algorithm is based on only two basic operations on graphs, contraction and removal of edges. By making use of the concept of dual graphs, the algorithm overcomes the problem of unbounded vertex degree proper to the existing approach to building irregular pyramids. This boundedness extends the scope of parallel, degree preserving graph contraction also to irregular structures. Our method can be applied to all problems in which a 2D discrete space can be represented by a planar graph. Four-connected square grids, region adjacency graphs, Voronoi diagrams and Delaunay triangulations are examples of such plane representations.
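Edge contraction, one of the two primitive operations named above, can be sketched on a plain adjacency-set graph. The dual-graph bookkeeping that actually bounds vertex degree in the paper is omitted here.

```python
# Contract edge (u, v): merge v into u, redirecting v's incident
# edges to u and removing v from the graph.

def contract_edge(adj, u, v):
    """adj: dict vertex -> set of neighbors (mutated in place)."""
    assert v in adj[u], "can only contract an existing edge"
    for w in adj[v]:
        if w != u:
            adj[w].discard(v)                   # redirect w's edge to u
            adj[w].add(u)
            adj[u].add(w)
    adj[u].discard(v)
    del adj[v]
    return adj
```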

32 citations


Book ChapterDOI
19 Sep 1994
TL;DR: A formal theory of undirected (labeled) graphs in higher-order logic developed using the mechanical theoremproving system HOL formalizes and proves theorems about such notions as the empty graph, single-node graphs, finite graphs, subgraphs, adjacency relations, walks, paths, cycles, bridges, reachability, connectedness, acyclicity, trees.
Abstract: This paper describes a formal theory of undirected (labeled) graphs in higher-order logic developed using the mechanical theoremproving system HOL. It formalizes and proves theorems about such notions as the empty graph, single-node graphs, finite graphs, subgraphs, adjacency relations, walks, paths, cycles, bridges, reachability, connectedness, acyclicity, trees, trees oriented with respect to roots, oriented trees viewed as family trees, top-down and bottom-up inductions in a family tree, distributing associative and commutative operations with identities recursively over subtrees of a family tree, and merging disjoint subgraphs of a graph. The main contribution of this work lies in the precise formalization of these graph-theoretic notions and the rigorous derivation of their properties in higher-order logic. This is significant because there is little tradition of formalization in graph theory due to the concreteness of graphs. A companion paper [2] describes the application of this formal graph theory to the mechanical verification of distributed algorithms.

Proceedings ArticleDOI
06 Nov 1994
TL;DR: A new partial consistency suited to these problems, the semi-geometric arc-consistency, is defined; the method for achieving it is based on an inference that propagates rectangle labels and interval labels.
Abstract: We describe a knowledge-based system that generates all possible floor plans satisfying a set of geometric constraints on the rooms (non-overlap, adjacency, minimal/maximal area, minimal/maximal dimension, etc.). Our approach is based on the extension of constraint techniques; in particular, we define a new partial consistency suited to these problems: the semi-geometric arc-consistency. The method for achieving it is based on an inference that propagates rectangle labels and interval labels. After solving some realistic problems, we conclude by discussing the relevance of a constraint-based approach for solving these problems.

Journal ArticleDOI
TL;DR: A method similar to assumption-based truth maintenance systems for the collating and reasoning processes required in the labelling of input images for the recognition of objects from medical images is proposed.

Journal ArticleDOI
TL;DR: A method which can recognize form features and reconstruct 3D part from 2D CAD data automatically is proposed, and a new structure of form feature adjacency graph (FFAG) is devised to record the related attibutes of each form feature.

Journal ArticleDOI
TL;DR: In this paper, a ternary algebra formed by three-dimensional arrays with entries from an arbitrary field is investigated, and the notion of identity pair and inverse pair is defined and methods for finding inverse pairs are developed.

Proceedings ArticleDOI
02 Oct 1994
TL;DR: It is suggested that loops can be avoided and fault coverage increased by carefully choosing the initial state, and an approach based on binary decision diagrams and symbolic techniques to solve the problem is presented.
Abstract: The paper assesses the effectiveness of the circular self-test path BIST technique from an experimental point of view and proposes an algorithm to overcome the low fault coverage that often arises when real circuits are examined. Several fault simulation experiments have been performed on the ISCAS89 benchmark set, as well as on a set of industrial circuits: in contrast to the theoretical analysis proposed in [PKKa92], a very high fault coverage is attained with a limited number of clock cycles, but this happens only when the circuit does not enter a loop. This danger cannot be avoided even if clever strategies for flip-flop ordering, aimed at reducing the functional adjacency, are adopted. Instead, we suggest that loops can be avoided and fault coverage increased by carefully choosing the initial state, and we present an approach based on binary decision diagrams and symbolic techniques to solve the problem.

Journal ArticleDOI
TL;DR: The analytic method presented here identifies optimal combinations of relations for building boundary data structures, which are normally built on the adjacency relations between faces, edges, and vertices.
Abstract: Boundary representations, or B-reps, are a solid modeling technique with widespread applications in computer-aided design, computer-aided manufacturing, and robotics. B-reps have evolved considerably in recent years, and advances in theoretic studies, particularly in topological models and associated operators, make it possible now to model nonorientable, nonmanifold objects, as well as orientable, manifold objects. Boundary data structures are normally built on the adjacency relations between faces, edges, and vertices. The analytic method presented here identifies optimal combinations of relations for building these structures.

Dissertation
01 Jan 1994
TL;DR: It is found that a hierarchical graph is an attractive framework for image analysis, because it can easily encode and handle different structures, and because structures and their relations are encoded in the same representation.
Abstract: This thesis is about image analysis methods based on hierarchical graph representations. A hierarchical graph representation of an image is an ordered set of graphs that represent the image on different levels of abstraction. The vertices of the graph represent image structures (lines, areas). Its edges represent the relations between those structures (adjacency, collinearity). Graphs on higher levels of the hierarchy give a more global and abstract representation of the image. A number of image analysis methods based on hierarchical graph representations were developed. These methods were applied to image segmentation, detection of linear structures and edge detection. It is found that a hierarchical graph is an attractive framework for image analysis, because it can easily encode and handle different structures, and because structures and their relations are encoded in the same representation. The only restriction of the method is its 'bottom-up' character; however, it is suggested how this can be remedied by a 'top-down' analysis in a later stage of the process. The second part of this study is about multiresolution morphology. Discs defined by weighted metrics were used as structuring elements. Weighted metrics can approximate the Euclidean metric to within a few percent. Algorithms were developed to perform the elementary morphological operations (erosion, dilation, opening, closing), and some advanced operations such as the medial axis transform, the opening transform, and the pattern spectrum transform. The computational cost of these methods is comparable to that of conventional morphological methods using square structuring elements.

ReportDOI
10 Jun 1994
TL;DR: A partitioning algorithm with time and space complexity to partition the vertices of a chordal graph into the fewest transitively closed subgraphs over all perfect elimination orderings while satisfying a certain precedence relationship is described.
Abstract: A recent approach for solving sparse triangular systems of equations on massively parallel computers employs a factorization of the triangular coefficient matrix to obtain a representation of its inverse in product form. The number of general communication steps required by this approach is proportional to the number of factors in the factorization. The triangular matrix can be symmetrically permuted to minimize the number of factors over suitable classes of permutations, and thereby the complexity of the parallel algorithm can be minimized. Algorithms for minimizing the number of factors over several classes of permutations have been considered in earlier work. Let F = L + L^T denote the symmetric filled matrix corresponding to a Cholesky factor L, and let G_F denote the adjacency graph of F. In this paper we consider the problem of minimizing the number of factors over all permutations which preserve the structure of G_F. The graph model of this problem is to partition the vertices of G_F into the fewest transitively closed subgraphs over all perfect elimination orderings while satisfying a certain precedence relationship. The solution to this chordal graph partitioning problem can be described by a greedy scheme which eliminates a largest permissible subgraph at each step. Further, the subgraph eliminated at each step can be characterized in terms of lengths of chordless paths in the current elimination graph. This solution relies on several results concerning transitive perfect elimination orderings introduced in this paper. We describe a partitioning algorithm with O(|V|+|E|) time and space complexity.

Journal Article
TL;DR: The hyper-irregular pyramid segmentation scheme is extended to three dimensional (3D) digital space and it is shown how a 3D texture image is split into partitions recursively based on octree structure.
Abstract: We extend the irregular pyramid segmentation scheme to three-dimensional (3D) digital space and call it hyper-irregular pyramid segmentation. Based on this segmentation scheme, a 3D texture image is split into partitions recursively based on an octree structure. The texture features are calculated based on 3D gray-level spatial dependency measurement. The octree is subsequently converted into a 3D region adjacency graph (RAG). Each vertex of the graph consists of a texture feature vector of the corresponding partition and each edge represents the neighborhood relationship between two partitions. The hyper-irregular pyramid process is then applied to the graph and finally a segmentation of the three

Book ChapterDOI
25 Aug 1994
TL;DR: This work presents two graph compression schemes for solving problems on dense graphs and complement graphs that compress a graph or its complement graph into two kinds of succinct representations based on adjacencies intervals and adjacency integers.
Abstract: We present two graph compression schemes for solving problems on dense graphs and complement graphs. They compress a graph or its complement graph into two kinds of succinct representations based on adjacency intervals and adjacency integers, respectively. These two schemes complement each other for different ranges of density. Using these schemes, we develop optimal or near-optimal algorithms for fundamental graph problems. In contrast to previous graph compression schemes, ours are simple and efficient for practical applications.
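The adjacency-interval idea can be sketched as follows: a sorted adjacency list collapses into maximal runs of consecutive vertex ids, which is compact exactly when the graph (or its complement) is dense. This illustrates the general technique, not the authors' exact encoding.

```python
# Encode/decode an adjacency list as maximal intervals of
# consecutive vertex ids.

def to_intervals(neighbors):
    """[1, 2, 3, 7, 8] -> [(1, 3), (7, 8)]"""
    out = []
    for v in sorted(neighbors):
        if out and v == out[-1][1] + 1:
            out[-1] = (out[-1][0], v)           # extend the current run
        else:
            out.append((v, v))                  # start a new run
    return out

def from_intervals(intervals):
    """Inverse of to_intervals."""
    return [v for lo, hi in intervals for v in range(lo, hi + 1)]
```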

Proceedings ArticleDOI
17 Aug 1994
TL;DR: The scope of this research is to predict the quality, especially probabilities of components of the relational description from a few measures depending on noise, scale and local properties of the image content.
Abstract: A concept for analyzing the quality of relational descriptions of digital images is presented. The investigations are based on the relational description automatically derived by a new coherent procedure for feature extraction providing a feature adjacency graph containing points, edges and segments and their relations. A new notion of scale (integration scale) is introduced, relating to a nonlinear function of the image, providing new stable descriptions. Based on the feature extraction we analyzed the quality of the relational descriptions in dependency on the signal-to-noise ratio and on the control parameters of the feature extraction process, i.e., the significance level, the smoothing scale, and the integration scale. First results on the quality of the features, focussing on their existence, distinct attributes and relations are given. The scope of this research is to predict the quality, especially probabilities of components of the relational description from a few measures depending on noise, scale and local properties of the image content.

Journal ArticleDOI
TL;DR: The results show that the iterative process that creates a new level of the hierarchy from its preceding one does not heavily depend on the size of the graph, as its expected time is O(log N) for a random graph, where N is the total number of vertices in the input graph.

Proceedings ArticleDOI
29 Jun 1994
TL;DR: This paper transforms an IA expression that describes a given image procedure into the dataflow graph of the corresponding MEA-conformable (Multiple Execution Array) architecture, and deduce the components and connectivity of a corresponding optical processor.
Abstract: In this paper, we extend our earlier work in image algebra (IA)-based optical processing to the optical computation of high-level image operations (HLIOs) such as connected component labelling, determining component adjacency graphs, finding corresponding points in stereo pair images, and computing the Euler number. In particular, we transform an IA expression that describes a given image procedure into the dataflow graph of the corresponding MEA-conformable (Multiple Execution Array) architecture. From the MEA dataflow graph, we deduce the components and connectivity of a corresponding optical processor. Analyses emphasize computational cost inclusive of propagation time, as well as information loss expected from optical devices such as Spatial Light Modulators.

Journal ArticleDOI
TL;DR: For a graph L whose nodes are the lattice points in the plane under the relation of row or column adjacency, it is shown that a pebbling of L is convex iff the set of unpebbled nodes is connected and orthoconvex.

Journal ArticleDOI
TL;DR: The method with the best runtime uses an initial search to identify each contour and connectivity labelling to separate and fill each solid component, and relies on union-finding techniques coupled with tree traversals in an octree environment.
Abstract: The reconstruction of a solid from its contour definition is, basically, a conversion from boundary to volume representation, and it has been addressed in the literature in several instances. The simultaneous reconstruction of several solids, in any number and in any nesting order, from a set of contours is a more complex problem, for which a solution is presented. The latter consists of three parts: the separation of connected contours, the filling of those contours which indeed contain a volume, and the leaving of the other components as dangling elements. The general case is considered in which all the contours are given as one set of voxels; this means that contiguous contour elements may not be associated with the same object. Together with the reconstruction of all multishell solids, their inner structure is identified in the sense that it is determined whether a solid is inside another, and, if so, at which level of nesting. This information is condensed in a graph structure called the region-containment tree. The methodology used relies on union-finding techniques coupled with tree traversals in an octree environment. Various approaches to tree traversals and the propagation of adjacency information are explored. The method with the best runtime uses an initial search to identify each contour. This is followed by connectivity labelling (in the active border version) to separate and fill each solid component. Testing has been carried out in seven cases. In five of these, an independent solid modeller is used as a reference, and in two, a different surface-to-solid conversion is used.
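The union-finding backbone of such a method can be sketched independently of the octree machinery. This is illustrative Python assuming 6-adjacency on voxels; the paper's active-border traversal is not reproduced.

```python
# Connectivity labelling of voxels with union-find: voxels in the
# same 6-connected component receive the same representative label.

class UnionFind:
    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

def label_components(voxels):
    """voxels: set of (x, y, z) tuples. Returns voxel -> component label."""
    uf = UnionFind()
    for x, y, z in voxels:
        uf.find((x, y, z))
        # each 6-adjacent pair is checked once, from the lower voxel
        for dx, dy, dz in ((1, 0, 0), (0, 1, 0), (0, 0, 1)):
            n = (x + dx, y + dy, z + dz)
            if n in voxels:
                uf.union((x, y, z), n)
    return {v: uf.find(v) for v in voxels}
```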

Journal ArticleDOI
C. Ursescu1
TL;DR: In this paper, the authors define regularity for general adjacency and tangency concepts and study it in some particular cases, and show that most of the convex but sophisticated approximations used in optimization theory can be disconvexified using the regularity notion and their adjacent or tangent structure can be perceived looking at them under a regularity angle.
Abstract: In this paper we define regularity for general adjacency and tangency concepts and we study it in some particular cases. Most of the convex but sophisticated approximations used in optimization theory can be disconvexified using the regularity notion, and their adjacent or tangent structure can be perceived by looking at them from the angle of regularity.

Journal ArticleDOI
01 Jan 1994-Robotica
TL;DR: In this article, a sensor-based robot is shown to be able to incrementally build the entire terrain model; the model will be described in terms of visibility graph and visibility window.
Abstract: The problem of incremental terrain acquisition is addressed in this paper. Through a systematic planning of movements in an unknown terrain filled with polygonal obstacles, a sensor-based robot is shown to be able to incrementally build the entire terrain model; the model will be described in terms of visibility graph and visibility window. The terrain model is built area by area without any overlapping between explored areas. As a consequence, the terrain is obtained as a tessellation of disjoint star polygons. And the adjacency relations between star polygons are represented by a star polygon adjacency graph (SPAG graph). The incremental exploration process consists of two basic tasks: local exploration and exploration merging. Useful lemmas are derived for these two tasks and, then, the algorithms for the tasks are given. Examples are used to illustrate the algorithms. Two strategies for planning robot movements in the unknown terrain environment are suggested and compared. They are the depth-first search and the breadth-first search applied to the SPAG graph. Finally, the performance evaluation of the method and comparison with some existing methods are presented.
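The two suggested exploration strategies amount to breadth-first versus depth-first traversal of the SPAG graph; over a plain adjacency structure the difference is a single line. The graph encoding below is an assumption for illustration.

```python
# BFS vs DFS visit order over an adjacency dict, switched by one flag.

from collections import deque

def visit_order(adj, start, breadth_first=True):
    """adj: dict vertex -> list of neighbors. Returns visit order."""
    frontier = deque([start])
    seen, order = {start}, []
    while frontier:
        # FIFO gives breadth-first, LIFO gives depth-first
        v = frontier.popleft() if breadth_first else frontier.pop()
        order.append(v)
        for w in adj[v]:
            if w not in seen:
                seen.add(w)
                frontier.append(w)
    return order
```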

Book ChapterDOI
TL;DR: In this article, a Bayesian network is constructed over a universe of variables, each having a finite set of states, and the links in the graph model impact from one variable to another.
Abstract: Publisher Summary This chapter discusses qualitative recognition using Bayesian reasoning and presents the use of qualitative geometry, such as geometric ions (GEONS), where graph matching is critical to success. At an intermediate stage in the processing, a topology graph is created, which encodes the set of faces and their adjacency relations. A Bayesian model encodes probabilities about the visibility of different configurations of faces and their appearance. In the partitioning, faces are introduced into clusters according to their discriminatory value. Once faces are introduced into a cluster, the corresponding face label is entered in the Bayesian network as evidence and the model is updated. In consequence, each cluster becomes optimal. The partitioning based on Bayesian reasoning is superior to the original method. The language of Bayesian networks is used for modelling domains with inherent uncertainty in their impact structure. A Bayesian network is constructed over a universe of variables, each having a finite set of states. The universe is organized as a directed acyclic graph. The links in the graph model impact from one variable to another. The strength of the impacts is modeled through conditional probabilities.

Proceedings ArticleDOI
15 Jun 1994
TL;DR: The problem of analyzing the fault tolerance of multiple-bus systems is transformed into the simpler problem of finding the node connectivity of component adjacency graphs; several graph-theoretic models are proposed for this purpose.
Abstract: We study multiple-bus computer systems that are fault tolerant in the sense that processors remain connected in the presence of component faults such as faulty processors and buses, and faulty links between processors and buses, which may represent partial bus failures. We propose several graph-theoretic models for this purpose. A processor-bus-link (PBL) graph is introduced to represent a multiple-bus system; component adjacency graphs derived from the PBL graph exhibit the connectivity of the system's components. We then transform the problem of analyzing fault tolerance of multiple-bus systems into the simpler problem of finding the node connectivity of component adjacency graphs. Minimum critical fault sets, each of which is a minimum set of faulty components whose removal disconnects processors, are also characterized.
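For small component adjacency graphs the quantity being analyzed, node connectivity, can be computed by brute force: the smallest number of components whose removal disconnects the rest. This sketches the quantity, not the paper's method.

```python
# Brute-force node connectivity: try all vertex cuts of increasing size.

from itertools import combinations

def connected(adj, nodes):
    """Is the subgraph induced by `nodes` connected?"""
    nodes = set(nodes)
    if not nodes:
        return True
    stack = [next(iter(nodes))]
    seen = set(stack)
    while stack:
        v = stack.pop()
        for w in adj[v]:
            if w in nodes and w not in seen:
                seen.add(w)
                stack.append(w)
    return seen == nodes

def node_connectivity(adj):
    """adj: dict vertex -> set of neighbors of a connected graph."""
    nodes = list(adj)
    for k in range(len(nodes) - 1):
        for cut in combinations(nodes, k):
            rest = [v for v in nodes if v not in cut]
            if len(rest) > 1 and not connected(adj, rest):
                return k                        # smallest disconnecting cut
    return len(nodes) - 1                       # complete graph case
```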