
Showing papers on "Adjacency list" published in 1993


Journal ArticleDOI
TL;DR: A novel graph theoretic approach for data clustering is presented and its application to the image segmentation problem is demonstrated; the method yields an optimal solution equivalent to that obtained by partitioning the complete equivalent tree and can handle very large graphs with several hundred thousand vertices.
Abstract: A novel graph theoretic approach for data clustering is presented and its application to the image segmentation problem is demonstrated. The data to be clustered are represented by an undirected adjacency graph G with arc capacities assigned to reflect the similarity between the linked vertices. Clustering is achieved by removing arcs of G to form mutually exclusive subgraphs such that the largest inter-subgraph maximum flow is minimized. For graphs of moderate size (approximately 2000 vertices), the optimal solution is obtained through partitioning a flow and cut equivalent tree of G, which can be efficiently constructed using the Gomory-Hu algorithm (1961). However, for larger graphs this approach is impractical. New theorems for subgraph condensation are derived and are then used to develop a fast algorithm which hierarchically constructs and partitions a partially equivalent tree of much reduced size. This algorithm results in an optimal solution equivalent to that obtained by partitioning the complete equivalent tree and is able to handle very large graphs with several hundred thousand vertices. The new clustering algorithm is applied to the image segmentation problem. The segmentation is achieved by effectively searching for closed contours of edge elements (equivalent to minimum cuts in G), which consist mostly of strong edges, while rejecting contours containing isolated strong edges. This method is able to accurately locate region boundaries and at the same time guarantees the formation of closed edge contours.
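For readers who want to experiment with the Gomory-Hu route described above, here is a minimal sketch (toy data, not the paper's hierarchical condensation algorithm) of two-way clustering by cutting the lightest edge of a flow-equivalent tree, using networkx:

```python
# Cut the lightest edge of a Gomory-Hu flow-equivalent tree of G.
import networkx as nx

def two_way_cluster(G, capacity="capacity"):
    T = nx.gomory_hu_tree(G, capacity=capacity)
    # The lightest tree edge corresponds to the smallest maximum flow,
    # i.e. the weakest separation between any two vertices of G.
    u, v = min(T.edges, key=lambda e: T.edges[e]["weight"])
    T.remove_edge(u, v)
    return list(nx.connected_components(T))

G = nx.Graph()
G.add_edge("a", "b", capacity=5)   # capacity = similarity of linked vertices
G.add_edge("b", "c", capacity=1)   # weak link: the likely cut
G.add_edge("c", "d", capacity=4)
print(two_way_cluster(G))          # e.g. [{'a', 'b'}, {'c', 'd'}]
```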

1,223 citations


Book ChapterDOI
Jarek Rossignac1, Paul Borrel1
01 Jan 1993
TL;DR: This work presents a simple, effective, and efficient technique for approximating arbitrary polyhedra based on triangulation and vertex-clustering, and produces a series of 3D approximations that resemble the original object from all viewpoints, but contain an increasingly smaller number of faces and vertices.
Abstract: We present a simple, effective, and efficient technique for approximating arbitrary polyhedra. It is based on triangulation and vertex-clustering, and produces a series of 3D approximations (also called “levels of detail”) that resemble the original object from all viewpoints, but contain an increasingly smaller number of faces and vertices. The simplification is more efficient than competing techniques because it does not require building and maintaining a topological adjacency graph. Furthermore, it is better suited for mechanical CAD models which often exhibit patterns of small features, because it automatically groups and simplifies features that are geometrically close, but need not be topologically close or even part of a single connected component. Using a lower level of detail when displaying small, distant, or background objects improves graphic performance without a significant loss of perceptual information, and thus enables real-time inspection of complex scenes or a convenient environment for animation or walkthrough preview.
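The core of the technique is easy to prototype. Below is a minimal sketch of grid-based vertex clustering under assumed simplifications: a uniform grid, centroid cluster representatives, and none of the paper's vertex grading or weighting.

```python
import numpy as np

def cluster_simplify(vertices, triangles, cell=0.1):
    # Snap each vertex to a grid cell; all vertices in one cell merge.
    verts = np.asarray(vertices, dtype=float)
    keys = np.floor(verts / cell).astype(np.int64)
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    inverse = inverse.ravel()
    k = inverse.max() + 1
    # Representative vertex of each cluster: the centroid.
    counts = np.bincount(inverse, minlength=k).astype(float)
    reps = np.stack([np.bincount(inverse, weights=verts[:, d], minlength=k)
                     / counts for d in range(3)], axis=1)
    # Remap triangles; drop those collapsed to an edge or a point.
    tris = inverse[np.asarray(triangles)]
    keep = ((tris[:, 0] != tris[:, 1]) & (tris[:, 1] != tris[:, 2])
            & (tris[:, 0] != tris[:, 2]))
    return reps, tris[keep]

verts = np.array([[0, 0, 0], [0.01, 0, 0], [1, 0, 0], [0, 1, 0]])
faces = np.array([[0, 1, 2], [0, 2, 3]])
reps, tris = cluster_simplify(verts, faces, cell=0.05)
print(len(reps), len(tris))   # 3 1: one face became degenerate and was dropped
```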

863 citations


Journal ArticleDOI
TL;DR: Simulated annealing is a stochastic approach to solving large combinatorial problems and was used to model a harvest scheduling problem having block size constraints, a 20-year adjacency delay, and objectives to meet harvest volume targets on the minimum area possible.
Abstract: Simulated annealing is a stochastic approach to solving large combinatorial problems. This approach was used to model a harvest scheduling problem having block size constraints (no limit, 100–200, and 200–400 ha), a 20-year adjacency delay, and objectives to meet harvest volume targets on the minimum area possible. Spatially explicit harvest schedules complying with the constraints were successfully generated on test data sets of 6148 and 27 548 forest stands.
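A toy sketch of the annealing loop may make the setup concrete. The data, move set, and penalty weights below are hypothetical; the block-size constraints and volume targets are collapsed into one penalized objective, with the 20-year adjacency delay approximated as "adjacent stands may not be cut within two 10-year periods of each other".

```python
import math, random

def anneal(stands, adj, periods, volume, target, steps=50_000):
    sched = {s: random.randrange(periods) for s in stands}

    def cost(sch):
        clashes = sum(1 for a in stands for b in adj[a]
                      if a < b and abs(sch[a] - sch[b]) < 2)
        deviation = sum(abs(sum(volume[s] for s in stands if sch[s] == p) - target)
                        for p in range(periods))
        return deviation + 1_000 * clashes     # heavy penalty on adjacency

    c, T = cost(sched), 1_000.0
    for _ in range(steps):
        s = random.choice(stands)
        old = sched[s]
        sched[s] = random.randrange(periods)   # random move
        c2 = cost(sched)
        if c2 <= c or random.random() < math.exp((c - c2) / T):
            c = c2                             # accept (possibly uphill)
        else:
            sched[s] = old                     # reject
        T *= 0.9999                            # geometric cooling
    return sched, c

stands = list(range(6))
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
volume = {s: 10 + s for s in stands}
print(anneal(stands, adj, periods=3, volume=volume, target=25))
```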

245 citations


Journal ArticleDOI
TL;DR: In this paper, a linear programming model is introduced to generate a layout from a graphical representation of any design skeleton; it can be used to enhance most design-skeleton-based layout approaches.
Abstract: In the past, researchers have proposed several types of design skeletons from which a human designer can generate good facilities layouts. Examples are flow graphs, SLP space relationships, bubble diagrams, planar adjacency graphs, matching based adjacency graphs, centroid locations, and cut trees. In this paper, we introduce a linear programming model which efficiently generates a layout from a graphical representation of any design skeleton. We demonstrate how the model can be used to enhance most design-skeleton-based layout approaches.
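Since the paper's LP model is not reproduced in the abstract, the sketch below is only a hedged illustration of how such a layout LP can look once a design skeleton fixes the left-to-right order of departments (which makes non-overlap constraints linear). The 1-D simplification and all data are assumptions.

```python
from scipy.optimize import linprog
import numpy as np

widths = [4.0, 2.0, 3.0]            # departments in skeleton-fixed order 0,1,2
adjacent = [(0, 2)]                 # skeleton wants 0 and 2 close together

n, m = len(widths), len(adjacent)
# Variables: centers x_0..x_{n-1}, then one distance variable per pair.
c = np.concatenate([np.zeros(n), np.ones(m)])
A_ub, b_ub = [], []
for i in range(n - 1):              # non-overlap: x_{i+1} - x_i >= half widths
    row = np.zeros(n + m)
    row[i], row[i + 1] = 1, -1
    A_ub.append(row); b_ub.append(-(widths[i] + widths[i + 1]) / 2)
for k, (i, j) in enumerate(adjacent):   # d_k >= |x_i - x_j|
    for sign in (1, -1):
        row = np.zeros(n + m)
        row[i], row[j], row[n + k] = sign, -sign, -1
        A_ub.append(row); b_ub.append(0)
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + m))
print(res.x[:n])                    # department centers, e.g. [0. 3. 5.5]
```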

65 citations


Journal ArticleDOI
TL;DR: In a case study, three different ways to find a solution were examined: a random search algorithm, a simulated annealing algorithm and the prebiased random search method found in the SNAP II program, which was the fastest.
Abstract: Regulations defining the maximum opening size in the sub-alpine region of Sweden introduce new planning issues. The combinatorial problems that arise in harvest planning become very complex, but can be solved by different methods. In a case study, three different ways to find a solution were examined: a random search algorithm, a simulated annealing algorithm and the prebiased random search method found in the SNAP II program. Two different alternatives were studied, one with no road in the area and one with a road constructed. All three methods were found to give feasible solutions. Simulated annealing produced the best solutions, in terms of present net value, while the SNAP II program was the fastest. SNAP II did not give solutions as good as the others in the case with a road, probably due to the lack of a distinct gradient in the structure.

64 citations


Proceedings ArticleDOI
20 Oct 1993
TL;DR: A new approach to layout analysis, called nested segmentation, is introduced, along with an ordered labeled tree structure (L-S-Tree) that represents the segmented document for document classification.
Abstract: Office information systems (OISs) are employed to support office workers in their management of information and to assist them in their daily work. In OISs, document classification is one of the major functional capabilities. Classifying a document can be facilitated through layout analysis of the document. A new approach to layout analysis, called nested segmentation, is introduced. The layout relationships of components of a document are defined in terms of the adjacency of blocks. Given the adjacency of blocks, an adjacent block graph is introduced, in which the nested segmentation problem is transformed into a classic minimum cut problem on the graph. Also, an ordered labeled tree structure (L-S-Tree) is introduced to represent the segmented document for document classification.
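The reduction to a minimum-cut problem can be illustrated directly. The sketch below applies a global minimum cut (Stoer-Wagner, via networkx) to a toy adjacent-block graph whose edge weights are hypothetical layout affinities; the paper's actual formulation and weights may differ.

```python
import networkx as nx

G = nx.Graph()                                 # blocks of a toy page
G.add_edge("title", "author", weight=9)
G.add_edge("author", "abstract", weight=2)     # weak affinity: likely cut
G.add_edge("abstract", "body", weight=8)

cut_value, (part1, part2) = nx.stoer_wagner(G)
print(cut_value, part1, part2)   # 2, e.g. ['title', 'author'] vs ['abstract', 'body']
```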

30 citations


Journal ArticleDOI
Il Y. Kim1, Hyun S. Yang1
TL;DR: A Markov Random Field model-based approach is proposed as a systematic way of modeling, encoding and applying scene knowledge to the image understanding problem, and is used to interpret color scenes.

29 citations


Journal ArticleDOI
01 Mar 1993
TL;DR: This paper resolves a problem posed by Gargano, Körner, and Vaccaro by giving the first example (the two orientations of the triangle) of a graph where the Sperner capacity depends on the orientation of the edges, and gives a simple proof that linear codes do not achieve Sperner capacity for the cyclic triangle.
Abstract: Shannon introduced the concept of zero-error capacity of a discrete memoryless channel. The channel determines an undirected graph on the symbol alphabet, where adjacency means that symbols cannot be confused at the receiver. The zero-error or Shannon capacity is an invariant of this graph. Gargano, Körner, and Vaccaro have recently extended the concept of Shannon capacity to directed graphs. Their generalization of Shannon capacity is called Sperner capacity. We resolve a problem posed by these authors by giving the first example (the two orientations of the triangle) of a graph where the Sperner capacity depends on the orientations of the edges. Sperner capacity seems to be achieved by nonlinear codes, whereas Shannon capacity seems to be attainable by linear codes. In particular, linear codes do not achieve Sperner capacity for the cyclic triangle. We use Fourier analysis or linear programming to obtain the best upper bounds for linear codes. The bounds for unrestricted codes are obtained from rank arguments, eigenvalue interlacing inequalities and polynomial algebra. The statement of the cyclic q-gon problem is very simple: what is the maximum size N_q(n) of a subset S_n of {0, 1, …, q−1}^n with the property that for every pair of distinct vectors x = (x_i), y = (y_i) ∈ S_n, we have x_j − y_j ≡ 1 (mod q) for some j? For q = 3 (the cyclic triangle), we show N_3(n) ≤ 2^n. If, however, S_n is a subgroup, then we give a simple proof that |S_n| ≤ (√3)^n.
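The q-gon condition can be checked by brute force for tiny n. The sketch below reads the condition as holding for both ordered pairs (the reading consistent with the 2^n upper bound), builds the compatibility graph, and finds a maximum clique with networkx; N_3(n) is then the maximum clique size.

```python
from itertools import product
import networkx as nx

def n3(n):
    vecs = list(product(range(3), repeat=n))
    G = nx.Graph()
    G.add_nodes_from(vecs)
    for x in vecs:
        for y in vecs:
            if x < y:
                fwd = any((a - b) % 3 == 1 for a, b in zip(x, y))
                bwd = any((b - a) % 3 == 1 for a, b in zip(x, y))
                if fwd and bwd:               # both ordered pairs satisfied
                    G.add_edge(x, y)
    clique, size = nx.max_weight_clique(G, weight=None)
    return size

print([n3(n) for n in (1, 2, 3)])   # stays at or below 2**n
```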

29 citations


Proceedings ArticleDOI
19 Apr 1993
TL;DR: This paper looks at a new fault model used for shorts between nets on a PCB, the pin-adjacency fault model, and the implementation of two algorithms for detecting and diagnosing bridging faults.
Abstract: This paper looks at a new fault model used for shorts between nets on a PCB, the pin-adjacency fault model, and the implementation of two algorithms, the pin-adjacency detection and diagnosis algorithms, for detecting and diagnosing these bridging faults. The authors represent the nets and their likelihood to short as a graph, and in conjunction with the new algorithms are able to generate reduced test sets. This represents a huge saving over existing algorithms, which assume that any two nets are likely to short, as opposed to the new, more realistic pin-adjacency fault model.
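A small sketch of why the adjacency restriction shrinks the test set (this is the generic coloring-based argument, not necessarily the paper's two algorithms): color the "may short" graph, assign each color class a distinct binary code, and each code bit becomes one parallel test vector on which any two potentially shorted nets are driven to opposite values.

```python
import math
import networkx as nx

shorts = nx.Graph()   # edge = pins close enough to plausibly short
shorts.add_edges_from([("n1", "n2"), ("n2", "n3"), ("n3", "n4"), ("n4", "n1")])

colors = nx.greedy_color(shorts, strategy="largest_first")
k = max(colors.values()) + 1
bits = max(1, math.ceil(math.log2(k)))        # test vectors needed
codes = {net: format(c, f"0{bits}b") for net, c in colors.items()}
full_model = math.ceil(math.log2(shorts.number_of_nodes()))
print(bits, "vector(s) instead of", full_model)
print(codes)   # bit i of each code is the value driven in vector i
```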

27 citations


Journal ArticleDOI
TL;DR: In this article, the authors propose a novel technique for constructing a floorplan from an adjacency requirement represented by a graph G. The algorithm finds a geometric dual of G involving both rectangular and L-shaped modules.
Abstract: We propose a novel technique for constructing a floorplan from an adjacency requirement -- represented by a graph G. The algorithm finds a geometric dual of G involving both rectangular and L-shaped modules. This is the first dualization technique which permits L-shaped modules. We can test in O(n^{3/2}) time if G admits an L-shaped dual and construct one, if it exists, in O(n^2) time, where n is the number of modules.

26 citations


Journal ArticleDOI
G.K.-H. Yeap1, M. Sarrafzadeh
TL;DR: The authors consider a generalized optimal sizing problem on a set of slicing trees related to an adjacency graph and combine the tree enumeration and sizing procedures in a unified algorithm where floorplan trees and sizes are computed simultaneously.
Abstract: Given a sliceable floorplan and cell sizes, Otten and Stockmeyer (1983) presented an algorithm to find an optimal implementation for each cell. The authors consider a generalized optimal sizing problem on a set of slicing trees related to an adjacency graph. For computation efficiency, they combine the tree enumeration and sizing procedures in a unified algorithm where floorplan trees and sizes are computed simultaneously. The tree enumeration is based on the adjacency graph of the input cells, which ensures that the adjacency requirements of the cells are preserved. The time complexity of the algorithm is analyzed and experimental results using MCNC benchmarks are reported.
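The Otten-Stockmeyer sizing step that this work generalizes is compact enough to sketch. The code below sizes a single slicing tree by propagating lists of non-dominated (width, height) shapes; the paper's enumeration over a set of trees derived from the adjacency graph is omitted.

```python
def prune(shapes):
    """Keep only non-dominated (width, height) pairs."""
    out = []
    for w, h in sorted(set(shapes)):
        if not out or h < out[-1][1]:
            out.append((w, h))
    return out

def size(node):
    """node is a (w, h) leaf, or ('H'|'V', left, right) for a cut."""
    if isinstance(node[0], (int, float)):       # leaf cell: allow rotation
        w, h = node
        return prune([(w, h), (h, w)])
    cut, left, right = node
    L, R = size(left), size(right)
    combos = [(max(wl, wr), hl + hr) if cut == "H" else (wl + wr, max(hl, hr))
              for wl, hl in L for wr, hr in R]
    return prune(combos)

tree = ("V", (2, 4), ("H", (3, 1), (1, 3)))
print(size(tree))                                  # [(3, 6), (5, 4), (7, 2)]
print(min(size(tree), key=lambda s: s[0] * s[1]))  # minimum-area shape: (7, 2)
```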

Book ChapterDOI
14 Jun 1993
TL;DR: A method similar to assumption-based truth maintenance systems for the collating and reasoning processes required in the labelling of input images for the recognition of objects from medical images is proposed.
Abstract: We present both a high-level symbolic model of the human brain, and a method of using this model to aid in the recognition of objects from medical images. The model is stored as a frame-based semantic network consisting of three coexisting graphs (a spatial adjacency graph, a part hierarchy, and an inheritance graph). We propose a method similar to assumption-based truth maintenance systems for the collating and reasoning processes required in the labelling of input images.

Proceedings ArticleDOI
13 Jun 1993
TL;DR: This paper presents a new method of retinal image registration based on representing a segmented reference image as an adaptive adjacency graph, which consists of a network of active contours, nodes where contours are connected, regions outlined by the contours and their full adjacencies.
Abstract: This paper presents a new method of retinal image registration. The method is based on representing a segmented reference image as an adaptive adjacency graph. The graph consists of a network of active contours, nodes where contours are connected, regions outlined by the contours, and their full adjacency relationship. The contours in the graph correspond to retinal vessels or other curvilinear features. The registration is performed by placing the graph on the image to be registered and allowing it to adapt to the image data. The contours move under the combined effect of internal and external forces. The internal forces represent contour internal energy. The external forces correspond to image data and to connectivity constraints imposed on the contours. Results of registration obtained for retinal images are presented.

Journal ArticleDOI
TL;DR: A partitioning algorithm with O(|V| + |E|) time and space complexity is described, which relies on several results concerning transitive perfect elimination orderings introduced in this paper.

Proceedings ArticleDOI
07 Nov 1993
TL;DR: This work proposes a polynomial-time algorithm for transforming an arbitrary floorplan into a sliceable one, and shows that on industrial benchmarks the area increase, enforcing sliceability, is 6% on the average.
Abstract: Sliceable floorplans have nice properties. In particular, a number of NP-hard problems can be solved efficiently on sliceable floorplans. Most floorplanning algorithms/packages do not produce sliceable floorplans; besides, there are adjacency requirements that are inherently non-sliceable. Motivated by that, we propose a polynomial-time algorithm for transforming an arbitrary floorplan into a sliceable one. We operate on a given sized floorplan to transform it into a sliceable one. Experimental results show that on industrial benchmarks the area increase from enforcing sliceability is 6% on average. The percentage of changes in the input adjacency graph (i.e., the number of edge deletions and additions divided by the total number of edges) is 7% on average. The proposed algorithm can serve as a post-processor for other floorplanning algorithms. The proposed technique also provides new insights into the class of sliceable floorplans.

Journal ArticleDOI
TL;DR: The commenters point out that the algorithm of S.A. Kumar and S.H. Lee for the enumeration of all minimal s-t cutsets is defective, and cite two reasons for this: the adjacency problem and the back-vertex problem.
Abstract: The commenters point out that the algorithm of S.A. Kumar and S.H. Lee for the enumeration of all minimal s-t cutsets is defective (see ibid., vol. R-26, p. 51-5, April 1979). It is illustrated through an example that their algorithm misses several minimal cutsets. The commenters cite two reasons for this: the adjacency problem and the back-vertex problem.

Proceedings ArticleDOI
23 Jun 1993
TL;DR: A new model of an adaptive adjacency graph (AAG) for representing a 2-D image or a 2-D view of a 3-D scene is introduced, and results obtained for dynamic tracking of features in a sequence of images and for registration of retinal images are presented.
Abstract: A new model of an adaptive adjacency graph (AAG) for representing a 2-D image or a 2-D view of a 3-D scene is introduced. The model makes use of an image representation similar in form to a region adjacency graph. The adaptive adjacency graph, as opposed to a region adjacency graph, is an active representation of the image. The AAG can adapt to the image or track features and maintain the topology of the graph. Adaptability of the AAG is achieved by incorporating active contours ('snakes') in the graph. Various methods for creating AAGs are discussed. Results obtained for dynamic tracking of features in a sequence of images and for registration of retinal images are presented.

Journal ArticleDOI
TL;DR: An incremental boundary-evaluation algorithm that exploits adjacency information in B-reps to minimize the number of explicit edge classifications required is presented and performs reliably and well, although global optimization schemes could increase performance significantly.
Abstract: An incremental boundary-evaluation algorithm that exploits adjacency information in B-reps to minimize the number of explicit edge classifications required is presented. Evaluations of the implemented algorithm show that it performs reliably and well, although global optimization schemes could increase performance significantly. The steps of the algorithm, which include self-edge partitioning, cross-edge self-edge (CESE) classification, inference of self-edge classifications, and checking for split and merged shells, are discussed.

Proceedings ArticleDOI
22 Feb 1993
TL;DR: A generalized optimal sizing problem on a set of related slicing trees is considered, and the tree enumeration and sizing process are combined in a unified algorithm where floorplan trees and cell sizes are computed simultaneously.
Abstract: A generalized optimal sizing problem on a set of related slicing trees is considered. For computation efficiency, the tree enumeration and sizing process are combined in a unified algorithm where floorplan trees and cell sizes are computed simultaneously. The tree enumeration is based on a dual graph of the input cells, which ensures that the adjacency requirements of the cells are preserved. Experimental results using MCNC benchmarks are reported.

Proceedings ArticleDOI
20 Sep 1993
TL;DR: A new routing-driven partitioning approach for fitting a sequential circuit onto limited-connectivity EPLDs (electrically programmable logic devices) is presented and solutions to a number of problems unsolved by the previous fitter were found.
Abstract: A new routing-driven partitioning approach for fitting a sequential circuit onto limited-connectivity EPLDs (electrically programmable logic devices) is presented. The fitting problem is stated as a graph monomorphism problem. Global, local, and adjacency routing constraints are used to define the partitioning properties of the graph representing chip resources. This approach very effectively limits the solution space of the graph monomorphism problem in the early stages of the search. The program which uses the proposed algorithm to solve the fitting problem for the CY7C361 device, from Cypress Semiconductor, has been implemented and tested. Solutions to a number of problems unsolved by the previous fitter were found. The experimental results are presented.

Journal ArticleDOI
TL;DR: In this paper, it was shown that the problem of deciding whether a vertex is visited by a breadth-depth search originating from s is P-complete for directed and undirected graphs.
Abstract: The parallel complexity of a search strategy that combines attributes of both breadth-first search and depth-first search is studied. The search, called breadth-depth search, was defined by Horowitz and Sahni. The search technique has applications in branch-and-bound strategies. Kindervater and Lenstra posed the complexity of this type of search strategy as an open problem. We resolve their question by showing that a natural decision problem based on breadth-depth search is P-complete. Specifically, we prove that if given a graph G=(V, E), either directed or undirected, a start vertex s∈V, and two designated vertices u and v in V, then the problem of deciding whether u is visited before v by a breadth-depth search originating from s is P-complete. The search can be based either on vertex numbers or on fixed ordered adjacency lists. Our reductions differ for directed/undirected graphs and depending on whether vertex numbers/fixed ordered adjacency lists are used. These results indicate that breadth-depth search is highly sequential in nature and probably will not adapt to a fast parallel solution, unless NC equals P.
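Breadth-depth search itself is short to state in code. A minimal sketch follows (marking vertices when pushed and scanning fixed-ordered adjacency lists); the naming and details are choices, not the paper's notation. When a vertex is visited, all its unvisited neighbors are pushed at once (the breadth part), and the next vertex visited is the most recently pushed (the depth part).

```python
def breadth_depth_search(adj, s):
    order, visited, stack = [], {s}, [s]
    while stack:
        v = stack.pop()
        order.append(v)
        # push every not-yet-seen neighbor, in adjacency-list order
        for w in adj[v]:
            if w not in visited:
                visited.add(w)
                stack.append(w)
    return order

adj = {1: [2, 3], 2: [1, 4], 3: [1, 4], 4: [2, 3]}
print(breadth_depth_search(adj, 1))   # [1, 3, 4, 2]
```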

Journal ArticleDOI
TL;DR: This paper shows that the weighted complex triangle elimination problem is NP-complete, even when the input graphs are restricted to 1-level containment, and the unweighted problem is optimally solvable.
Abstract: The rectangular dual graph approach to floorplanning is based on the adjacency graph of the modules in a floorplan. If the input adjacency graph contains a cycle of length three which is not a face (a complex triangle), a rectangular floorplan does not exist. Thus, complex triangles have to be eliminated before applying any floorplanning algorithm. This paper shows that the weighted complex triangle elimination problem is NP-complete, even when the input graphs are restricted to 1-level containment. For adjacency graphs with 0-level containment, the unweighted problem is optimally solvable in O(c^1.5 + n) time, where c is the number of complex triangles and n is the number of vertices of the input graph.
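Detecting complex triangles is straightforward to prototype. The sketch below is an illustration, not the paper's elimination algorithm: it enumerates triangles of a planar adjacency graph and flags those that do not bound a face of a planar embedding (for maximal planar graphs the embedding is essentially unique, so the test is well defined).

```python
from itertools import combinations
import networkx as nx

def complex_triangles(G):
    is_planar, emb = nx.check_planarity(G)
    if not is_planar:
        raise ValueError("adjacency graph must be planar")
    faces = set()
    for u, v in G.edges:
        for a, b in ((u, v), (v, u)):           # both half-edges
            face = emb.traverse_face(a, b)
            if len(face) == 3:
                faces.add(frozenset(face))
    return [set(t) for t in combinations(G, 3)
            if all(G.has_edge(a, b) for a, b in combinations(t, 2))
            and frozenset(t) not in faces]

# K4 plus a vertex inside one face: the enclosing triangle becomes complex.
G = nx.complete_graph(4)
G.add_edges_from([(4, 0), (4, 1), (4, 2)])
print(complex_triangles(G))   # [{0, 1, 2}]
```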

Proceedings ArticleDOI
01 Jun 1993
TL;DR: In this article, a region-containment tree is proposed for the simultaneous reconstruction of all multishell solids from a set of contours, in which the inner structure is identified in the sense that it is determined whether a solid is inside another, and, if so, at which level of nesting.
Abstract: The reconstruction of a solid from its contour definition is, basically, a conversion from boundary to volume representation, and it has been addressed in the literature in several instances. The simultaneous reconstruction of several solids—in any number and in any nesting order—from a set of contours is a more complex problem, for which a solution is presented. The latter consists of three parts: the separation of connected contours, the filling of those contours which indeed contain a volume, and the leaving of the other components as dangling elements. The general case is considered in which all the contours are given as one set of voxels; this means that contiguous contour elements may not be associated with the same object. Together with the reconstruction of all multishell solids, their inner structure is identified in the sense that it is determined whether a solid is inside another, and, if so, at which level of nesting. This information is condensed in a graph structure called the region-containment tree. The methodology used relies on union-finding techniques coupled with tree traversals in an octree environment. Various approaches to tree traversals and the propagation of adjacency information are explored. The method with the best runtime uses an initial search to identify each contour. This is followed by connectivity labelling (in the active border version) to separate and fill each solid component. Testing has been carried out in seven cases. In five of these, an independent solid modeller is used as a reference, and in two, a different surface-to-solid conversion is used.
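The union-finding backbone of such a method is easy to sketch in isolation. Below is a minimal disjoint-set structure used to merge like-classed neighbouring voxels into labelled components; the octree traversal, active-border labelling, and containment analysis of the paper are omitted.

```python
class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]   # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[ra] = rb

# toy 1-D "volume": voxels 0..5; adjacency merges voxels of equal class
classes = [1, 1, 0, 0, 1, 1]
uf = UnionFind(len(classes))
for i in range(len(classes) - 1):
    if classes[i] == classes[i + 1]:
        uf.union(i, i + 1)
print([uf.find(i) for i in range(len(classes))])  # component label per voxel
```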

Journal ArticleDOI
TL;DR: An algorithm for computing the shortest path between two hypervoxels is described as an illustration of an application of the decoding formula and the neighbor identification algorithm.

Journal ArticleDOI
TL;DR: In this paper, sufficient conditions for a polytope to satisfy the strong adjacency property are given and, from this, binary b-matching polytopes, set partitioning polytops, set packing poly topes, etc. satisfy the Strong Adjacent Property.

Journal ArticleDOI
TL;DR: In this article, the problem of semiclassical quantization for physical systems with a discrete symmetry is discussed with the help of a graph and an associated adjacency matrix, and a general expression for the symmetry-reduced zeta functions is derived in terms of symmetry reduced moments.
Abstract: With the help of a graph and an associated adjacency matrix the problem of semiclassical quantization is discussed for physical systems with a discrete symmetry. A general expression for the symmetry-reduced zeta-functions is derived in terms of symmetry-reduced moments of the adjacency operator. As an application the uniform semiclassical quantization conditions of the Hecht Hamiltonian are discussed within this approach.
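As a small numerical aside on the moment/zeta connection (the paper's symmetry reduction itself is omitted): the k-th moment tr(A^k) of an adjacency matrix counts closed walks of length k, and for the symbolic dynamics defined by A a standard identity gives the dynamical zeta function as 1/det(I − zA).

```python
import numpy as np

A = np.array([[0, 1, 1],        # adjacency matrix of a triangle
              [1, 0, 1],
              [1, 1, 0]], dtype=float)

# moments tr(A^k) = number of closed walks of length k
moments = [np.trace(np.linalg.matrix_power(A, k)) for k in range(1, 5)]
print(moments)                  # [0.0, 6.0, 6.0, 18.0]

z = 0.1                         # evaluate 1/det(I - zA) at a sample point
print(1 / np.linalg.det(np.eye(3) - z * A))
```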

Proceedings Article
01 Jun 1993
TL;DR: Initial work is presented on a method for mapping between functional requirements and a description of the physical structure of discrete static systems, which consists of a set of atomic elements, a hierarchy of compound components from the domain, and the composition of a graph of adjacent atomic elements.
Abstract: One view of the design process is that design is a mapping from functional requirements to artifact description. This article presents initial work on a method for mapping between functional requirements and a description of the physical structure of discrete static systems. The representation consists of a set of atomic elements, a hierarchy of compound components from the domain, and the composition of a graph of adjacent atomic elements. Through forward or backward chaining, this method may be used in a parsing mode to discover the behavior and function of a given system, or in a generative mode to suggest instances of systems which can be used to satisfy the desired functionality. Parsing discovers the behavior of the system in terms of the compound components by matching on subgraphs within the overall adjacency graph. Generation hierarchically instantiates subgraphs which satisfy the initial functional requirements and the requirements propagated by previously instantiated components. The graph is composed from a geometric model, but the method is independent of the specific representation used by the geometric modeler. We focus on the domain of structural systems in buildings to describe this method. This work has been sponsored by EDRC, the Engineering Design Research Center at Carnegie Mellon University, an NSF-sponsored Engineering Research Center.

Proceedings ArticleDOI
16 Aug 1993
TL;DR: This paper presents an efficient synthesis procedure for asynchronous finite state machines (FSMs) that is first generated from a behavioral description of a FSM, and a race-free state assignment algorithm using bipartite graphs is applied.
Abstract: This paper presents an efficient synthesis procedure for asynchronous finite state machines (FSMs). A merged flow table is first generated from a behavioral description of an FSM. Based on the bipartite characteristics of the adjacency diagram, a race-free state assignment algorithm using bipartite graphs is applied. Several MCNC FSM benchmarks have been tested. Results show that the presented procedure can handle reasonably large asynchronous FSMs.
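The bipartite check the procedure builds on is a one-liner with networkx. The sketch below only verifies bipartiteness and two-colours the states; the actual race-free code assignment (embedding the adjacency diagram into a hypercube of state codes) is the part the paper contributes and is not reproduced here.

```python
import networkx as nx
from networkx.algorithms import bipartite

flow_adjacency = nx.Graph()   # states that must receive adjacent codes
flow_adjacency.add_edges_from([("s0", "s1"), ("s1", "s2"),
                               ("s2", "s3"), ("s3", "s0")])

if nx.is_bipartite(flow_adjacency):
    side = bipartite.color(flow_adjacency)   # 0/1 class per state
    print(side)   # e.g. {'s0': 0, 's1': 1, 's2': 0, 's3': 1}
```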

Journal Article
TL;DR: In this article, a self-organizing learning method using Kohonen's local topology conserving maps is presented for global representation and cognition of 3D objects, where a 3D object is represented by a set of sheets (subnets) and each sheet is mapped onto a part of the surface of the subject by a locally converging Kohonen map.
Abstract: A model is proposed for global representation and cognition of 3D objects, and a global self-organizing learning method using Kohonen's local topology conserving maps is presented. A 3D object is represented by a set of sheets (subnets). Each of the sheets is mapped onto a part of the surface of the object by a locally converging Kohonen map. The adjacency of the overlapped sheets is also defined over the whole surface of the objects. Thus, these adjoining subnets will eventually converge to a global atlas representation of any closed surface (as a combinatorial manifold), and the Kohonen maps on each part become the local coordinate (or chart) maps. The proposed model is able to represent the objects distributively and can easily accommodate local features.
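A tiny Kohonen-style training loop for a single sheet may make the construction concrete. The paper's contribution, stitching many sheets into a global atlas with defined adjacencies, is not attempted here; all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
grid = rng.standard_normal((10, 10, 3)) * 0.1     # one 10x10 sheet in R^3

def train(points, grid, steps=5000, lr=0.2, sigma=2.0):
    ys, xs = np.mgrid[0:grid.shape[0], 0:grid.shape[1]]
    for _ in range(steps):
        p = points[rng.integers(len(points))]
        # best-matching unit on the sheet
        d = np.linalg.norm(grid - p, axis=2)
        by, bx = np.unravel_index(np.argmin(d), d.shape)
        # neighbourhood-weighted pull toward the sample
        h = np.exp(-((ys - by) ** 2 + (xs - bx) ** 2) / (2 * sigma ** 2))
        grid += lr * h[..., None] * (p - grid)
        lr *= 0.999
        sigma = max(0.5, sigma * 0.999)
    return grid

# sample points on part of a sphere: the sheet converges to that patch
pts = rng.standard_normal((2000, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
trained = train(pts[pts[:, 2] > 0.3], grid)       # upper cap only
```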

01 Jan 1993
TL;DR: The purpose of this article is to explain exactly why the particular weighting scheme used for the edges is necessary, to elucidate why it works, and to suggest another, more intuitive way of embedding the graph in the plane, once the low-point depth-first search has been executed.
Abstract: This is an expository article on the Hopcroft-Tarjan planarity algorithm. A graph-theoretic analysis of a version of the algorithm is presented. An explicit formula is given for determining which vertex to place first in the adjacency list of any vertex. The intention is to make the Hopcroft-Tarjan algorithm more accessible and intuitive than it currently is. Let G be a simple 2-connected graph with vertex set V(G) and edge set E(G). The number of vertices of G is denoted by n. If u, v ∈ V(G), then A(u) denotes the set of all vertices adjacent to u. v → u means that v is adjacent to u, so that v ∈ A(u) (and since G is undirected, also u → v). Refer to the book by Bondy and Murty [3] for other graph-theoretic terminology. Hopcroft and Tarjan [13] gave a linear-time algorithm to determine whether G is planar, using a depth-first search. The depth-first search computes two low-point arrays for the vertices, L1(u) and L2(u), and assigns a weight to the edges of G, where the weight is computed from the low-points. The edges incident on each u ∈ V(G) are then ordered according to their weights. The algorithm then uses the revised graph in which incident edges have been ordered, and performs another depth-first search, the so-called PathFinder, to embed the graph in the plane. The purpose of this article is to explain exactly why the particular weighting scheme used for the edges is necessary, to elucidate why it works, and to suggest another, more intuitive way of embedding the graph in the plane, once the low-point depth-first search has been executed. The Hopcroft-Tarjan algorithm is complicated and subtle. For example, see the description of it in the paper of Williamson [20]. In [2], Di Battista, Eades, Tamassia, and Tollis state that "The known planarity algorithms …"
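The first pass the article analyses, the low-point DFS, is easy to sketch. The code below computes preorder numbers and L1(u) only; L2(u) and the edge weighting built from L1/L2 are omitted.

```python
def lowpoints(adj, root):
    """Preorder numbers and L1(u): the smallest preorder number reachable
    from the subtree at u by following tree edges and one back edge."""
    num, L1, counter = {}, {}, [1]

    def dfs(u, parent):
        num[u] = L1[u] = counter[0]
        counter[0] += 1
        for v in adj[u]:
            if v not in num:                  # tree edge u -> v
                dfs(v, u)
                L1[u] = min(L1[u], L1[v])
            elif v != parent:                 # back edge
                L1[u] = min(L1[u], num[v])

    dfs(root, None)
    return num, L1

adj = {1: [2, 4], 2: [1, 3], 3: [2, 4], 4: [3, 1]}  # a 4-cycle
print(lowpoints(adj, 1))   # every L1 is 1: the cycle hangs together
```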