
Showing papers on "Adjacency list published in 1997"


Journal ArticleDOI
TL;DR: An algorithm for finding the minimum cut of an undirected edge-weighted graph that has a short and compact description, is easy to implement, and has a surprisingly simple proof of correctness.
Abstract: We present an algorithm for finding the minimum cut of an undirected edge-weighted graph. It is simple in every respect. It has a short and compact description, is easy to implement, and has a surprisingly simple proof of correctness. Its runtime matches that of the fastest algorithm known. The runtime analysis is straightforward. In contrast to nearly all approaches so far, the algorithm uses no flow techniques. Roughly speaking, the algorithm consists of about |V| nearly identical phases each of which is a maximum adjacency search.

764 citations
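
The phase structure described in the abstract translates almost directly into code. Below is a minimal Python sketch of the algorithm (Stoer–Wagner), assuming the graph is given as a dict-of-dicts weight map; it uses a linear scan to pick the most strongly connected vertex, rather than the priority queue needed to match the paper's stated runtime.

```python
def minimum_cut(graph):
    """Sketch of the Stoer-Wagner minimum cut. `graph` maps each vertex
    to {neighbor: weight}; the graph is undirected and edge-weighted."""
    g = {u: dict(nbrs) for u, nbrs in graph.items()}  # working copy
    best = float("inf")
    while len(g) > 1:
        # One maximum adjacency search phase: grow a set A, always adding
        # the vertex most strongly connected to A.
        order = [next(iter(g))]
        weights = {u: g[order[0]].get(u, 0) for u in g if u != order[0]}
        while weights:
            u = max(weights, key=weights.get)
            order.append(u)
            cut_of_phase = weights.pop(u)  # weight of the cut separating u
            for v, w in g[u].items():
                if v in weights:
                    weights[v] += w
        best = min(best, cut_of_phase)
        s, t = order[-2], order[-1]        # merge the last two vertices
        for v, w in g.pop(t).items():
            g[v].pop(t, None)
            if v != s:
                g[s][v] = g[s].get(v, 0) + w
                g[v][s] = g[v].get(s, 0) + w
    return best

# Example: the lightest cut of this triangle isolates vertex "a".
print(minimum_cut({"a": {"b": 1, "c": 1},
                   "b": {"a": 1, "c": 2},
                   "c": {"a": 1, "b": 2}}))  # 2
```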


Journal ArticleDOI
TL;DR: A map between squares and disks that associates concentric squares with concentric circles is presented that preserves adjacency and fractional area, and has proven useful in many sampling applications where correspondences must be maintained between the two shapes.
Abstract: This paper presents a map between squares and disks that associates concentric squares with concentric circles. This map preserves adjacency and fractional area, and has proven useful in many sampling applications where correspondences must be maintained between the two shapes. The paper also provides code to compute the map that minimizes branching and is robust for all inputs. Finally, it extends the map to the hemisphere. Though this map has been used in publications before, details of its computation have never previously been published.

148 citations
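
For illustration, here is a hedged Python sketch of the concentric mapping in the classic branchy form usually attributed to this paper; the branch-minimizing, robust-for-all-inputs formulation the paper actually provides differs in detail.

```python
import math

def square_to_disk(x, y):
    """Map (x, y) in [-1, 1]^2 to the unit disk so that concentric squares
    go to concentric circles, preserving adjacency and fractional area.
    Classic branchy form; the paper's version minimizes branching."""
    if x == 0.0 and y == 0.0:
        return 0.0, 0.0
    if abs(x) > abs(y):
        r, phi = x, (math.pi / 4.0) * (y / x)
    else:
        r, phi = y, (math.pi / 2.0) - (math.pi / 4.0) * (x / y)
    return r * math.cos(phi), r * math.sin(phi)

# The square's corner (1, 1) lands on the unit circle at 45 degrees.
print(square_to_disk(1.0, 1.0))  # approximately (0.7071, 0.7071)
```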


Book ChapterDOI
01 Apr 1997
TL;DR: This chapter describes several methods of word pattern matching that are based on the use of automata.
Abstract: This chapter describes several methods of word pattern matching that are based on the use of automata.

128 citations
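
As a concrete example of automaton-based word matching (not the chapter's own code), the sketch below builds a KMP-style deterministic automaton for a single pattern and scans a text in one left-to-right pass, reporting overlapping occurrences.

```python
def build_dfa(pattern, alphabet):
    """Deterministic automaton for `pattern`: state j means 'the last j
    characters read form the longest prefix of the pattern seen so far'."""
    m = len(pattern)
    dfa = [dict.fromkeys(alphabet, 0) for _ in range(m + 1)]
    dfa[0][pattern[0]] = 1
    x = 0  # state reached by the pattern shifted one character left
    for j in range(1, m):
        for c in alphabet:
            dfa[j][c] = dfa[x][c]   # mismatch: fall back like the border state
        dfa[j][pattern[j]] = j + 1  # match: advance
        x = dfa[x][pattern[j]]
    dfa[m] = dict(dfa[x])           # continue after a match (overlaps allowed)
    return dfa

def occurrences(text, pattern, alphabet):
    dfa, state, m = build_dfa(pattern, alphabet), 0, len(pattern)
    hits = []
    for i, c in enumerate(text):
        state = dfa[state].get(c, 0)
        if state == m:
            hits.append(i - m + 1)
    return hits

print(occurrences("ababa", "aba", "ab"))  # [0, 2]
```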


Proceedings ArticleDOI
30 Apr 1997
TL;DR: An algorithm is described that automatically constructs consistent representations of the solid objects modeled by an arbitrary set of polygons and is effective even when the input polygons intersect, overlap, are wrongly-oriented, have T-junctions, or are unconnected.
Abstract: Consistent representations of the boundary and interior of three-dimensional solid objects are required by applications ranging from interactive visualization to finite element analysis. However, most commonly available models of solid objects contain errors and inconsistencies. We describe an algorithm that automatically constructs consistent representations of the solid objects modeled by an arbitrary set of polygons. The key feature of our algorithm is that it first partitions space into a set of polyhedral regions and then determines which regions are solid based on region adjacency relationships. From the solid polyhedral regions, we are able to output consistent boundary and solid representations in a variety of file formats. Unlike previous approaches, our solid-based approach is effective even when the input polygons intersect, overlap, are wrongly-oriented, have T-junctions, or are unconnected.

125 citations


Journal ArticleDOI
TL;DR: As difficulty increases, maps are more effective for problem-solving tasks, and the tasks are simplified using visual heuristics that keep problem-solving times and error rates from rising as quickly as they do with tables.
Abstract: Geographic Information Systems (GIS) enable decision makers to view tabular data geographically, as maps. This simple yet powerful visual format appears to facilitate problem solving, yet how it does so is not clear, nor do we know the types of problems that benefit from this representation. To begin to understand the contributions of geographic representations over tabular representations, we conducted a three-factor experiment in problem solving. The experiment contained two different representations (map and table), three different geographic relationships (proximity, adjacency, and containment), and three levels of task difficulty (low, medium, and high). We found that maps generally produced faster problem solving than tables, and that problem-solving time increased with task difficulty. Most importantly, for the proximity and adjacency geographic relationships we found that maps kept problem-solving time low, while tables tended to increase time dramatically. However, we found that the number of knowledge states for each task explains performance times quite well and is a useful tool for understanding performance differences and interaction effects. As tasks become more difficult, representing them as maps generally keeps the number of knowledge states small, while for tables, the number of knowledge states increases dramatically. Correspondingly, problem-solving times increase dramatically with tables, but not with maps. In sum, as difficulty increases, maps are more effective for problem-solving tasks. Using maps, the tasks are simplified using visual heuristics that keep problem-solving times and error rates from rising as quickly as they do with tables.

124 citations


Journal ArticleDOI
TL;DR: It is shown that a regular graph Γ with d+1 distinct eigenvalues is distance-regular if, and only if, the number of vertices at distance d from any given vertex is the value at λ0 of the highest degree member of an orthogonal system of polynomials, which depend only on the spectrum of the graph.

102 citations
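
In standard notation, the characterization compressed in the TL;DR above can be written as follows; the symbols are a reconstruction from the abstract (Γ_d(u) for the set of vertices at distance d from u, λ0 for the largest eigenvalue, p_d for the highest-degree member of the orthogonal polynomial system determined by the spectrum).

```latex
% Notation reconstructed from the abstract.
\Gamma \text{ regular with } d+1 \text{ distinct eigenvalues is distance-regular}
\iff |\Gamma_d(u)| = p_d(\lambda_0) \quad \text{for every vertex } u .
```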


Journal ArticleDOI
TL;DR: The performance of the recogniser in terms of speed is far better than that of any other rule-based system due to the Neural Network approach employed; the basic limitation is that of the heuristics used to break down compound features into simple ones which are fed to the ANN.
Abstract: This work presents a Feature Recognition system developed using a previously trained Artificial Neural Network. The part description is taken from a B-rep solid modeller's data base. This description refers only to topological information about the faces in the part in the form of an Attributed Adjacency Graph. A set of heuristics is used for breaking down this compound feature graph into subgraphs that correspond to simple features. Special representation patterns are then constructed for each of these subgraphs. These patterns are presented to a Neural Network which classifies them into feature classes: pockets, slots, passages, protrusions, steps, blind slots, corner pockets, and holes. The scope of instances/variations of these features that can be recognised is very wide. A commercially available neural network modelling tool was used for training. The user interface to the neural network recogniser has been written in Pascal. The program can handle parts with up to 200 planar or curved faces. The performance of the recogniser in terms of speed is far better than that of any other rule-based system due to the Neural Network approach employed. The basic limitation is that of the heuristics used to break down compound features into simple ones which are fed to the ANN, but this is still a step ahead compared to other approaches.

90 citations
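
The attributed adjacency graph (AAG) underlying this recogniser is easy to illustrate: nodes are part faces and each edge carries a convexity attribute. The sketch below uses hypothetical data and a deliberately simple rule (not the paper's heuristics or its trained network): it extracts the concave-edge subgraphs that typically delimit depression features such as slots.

```python
def concave_subgraphs(aag):
    """aag: {face: {neighbor_face: attribute}} with attribute 0 = concave
    edge, 1 = convex edge. Returns the connected components of the
    subgraph induced by concave edges - candidate depression features."""
    adj = {f: [n for n, a in nbrs.items() if a == 0] for f, nbrs in aag.items()}
    seen, components = set(), []
    for start in adj:
        if start in seen or not adj[start]:
            continue
        comp, stack = set(), [start]
        while stack:
            u = stack.pop()
            if u not in comp:
                comp.add(u)
                stack.extend(adj[u])
        seen |= comp
        components.append(comp)
    return components

# A rectangular through slot: its two walls and floor meet along concave edges.
aag = {
    "top":   {"wall1": 1, "wall2": 1},
    "wall1": {"top": 1, "floor": 0},
    "floor": {"wall1": 0, "wall2": 0},
    "wall2": {"floor": 0, "top": 1},
}
print(concave_subgraphs(aag))  # [{'wall1', 'floor', 'wall2'}] (one component)
```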


Journal ArticleDOI
TL;DR: A new approach toward image segmentation is proposed in which a set of slightly different segmentations is derived from the same input and the final result is based on the consensus among them, using the hierarchical, RAG pyramid technique.

85 citations



Proceedings ArticleDOI
20 Apr 1997
TL;DR: In this article, a fast version of the Gilbert-Johnson-Keerthi algorithm for computing the Euclidean distance between a pair of objects represented by convex polytopes in 3D space is presented; by exploiting vertex adjacency information, it is particularly well suited to objects that move incrementally along smooth paths.
Abstract: An algorithm is presented for computing the Euclidean distance between a pair of objects represented by convex polytopes in three dimensional space. It is a fast version of the well known algorithm by Gilbert, Johnson and Keerthi (1988) in which the polytopes are characterized by their vertices. The improvement in speed comes from the inclusion of additional structural information on the objects; for each vertex there is a list of adjacent vertices. No other information or preprocessing of data is needed. Following Cameron, the adjacency structures are used to greatly reduce the number of inner product evaluations needed to compute the polytope support functions used in the original Gilbert-Johnson-Keerthi algorithm. When the algorithm is applied to a pair of objects that move incrementally along smooth paths, it shares the advantage of the incremental distance algorithm introduced by Lin and Canny (1991). Algorithmic details and performance issues are developed fully. Computational experiments provide quantitative data on the effects of object complexity and the size of incremental object motions.

57 citations
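
The speedup comes from evaluating the polytope support function by hill climbing over the vertex adjacency lists instead of scanning every vertex. Here is a minimal sketch of that single ingredient (the array and list layouts are assumptions, not the paper's data structures):

```python
import numpy as np

def support_vertex(vertices, adjacency, direction, start=0):
    """Vertex of a convex polytope maximizing <v, direction>, found by
    walking adjacency lists. vertices: (n, 3) array; adjacency[i]: list of
    indices of vertices adjacent to vertex i; `start` is a warm start
    (e.g. the previous frame's answer for incrementally moving objects)."""
    best = start
    best_dot = float(np.dot(vertices[best], direction))
    improved = True
    while improved:
        improved = False
        for nbr in adjacency[best]:
            d = float(np.dot(vertices[nbr], direction))
            if d > best_dot + 1e-12:  # strict improvement avoids cycling
                best, best_dot, improved = nbr, d, True
                break
    return best  # a local maximum is global on a convex polytope
```

Because the support direction changes little between small incremental motions, restarting from the previous answer usually terminates after a handful of inner products, which is the effect Cameron's enhancement and this paper exploit.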


01 Jan 1997
TL;DR: An algorithm is presented for computing the Euclidean distance between a pair of objects represented by convex polytopes in three dimensional space, a fast version of the well known algorithm by Gilbert, Johnson and Keerthi (1988) in which the polytopes are characterized by their vertices.
Abstract: An algorithm is presented for computing the Euclidean distance between a pair of objects represented by convex polytopes in three dimensional space. It is a fast version of the well known algorithm by Gilbert, Johnson and Keerthi [1] in which the polytopes are characterized by their vertices. The improvement in speed comes from the inclusion of additional structural information on the objects; for each vertex there is a list of adjacent vertices. No other information or preprocessing of data is needed. Following Cameron [5], the adjacency structures are used to greatly reduce the number of inner product evaluations needed to compute the polytope support functions used in the original Gilbert, Johnson and Keerthi algorithm. When the algorithm is applied to a pair of objects that move incrementally along smooth paths, it shares the advantage of the incremental distance algorithm introduced by Lin and Canny [2]: under appropriate conditions, the computational time is very small and does not depend significantly on the total number of object vertices. Algorithmic details and performance issues are developed fully. Computational experiments provide quantitative data on the effects of object complexity and the size of incremental object motions.

Journal ArticleDOI
TL;DR: This work proposes several ways for extending adjacency to fuzzy sets, either by using α-cuts or by using a formal translation of binary equations into fuzzy ones, and privileges set representations of adjacency, particularly in the framework of fuzzy mathematical morphology.
Abstract: The notion of adjacency has a strong interest for image processing and pattern recognition, since it denotes an important relationship between objects or regions in an image, widely used as a feature in model-based pattern recognition. A crisp definition of adjacency often leads to low robustness in the presence of noise, imprecision, or segmentation errors. We propose two approaches to cope with spatial imprecision in image processing applications, both based on the framework of fuzzy sets. These approaches lead to two completely different classes of definitions of a degree of adjacency. In the first approach, we introduce imprecision as a property of the adjacency relation, and consider adjacency between two (crisp) objects to be a matter of degree. We represent adjacency by a fuzzy relation whose value depends on the distance between the objects. In the second approach, we introduce imprecision (in particular spatial imprecision) as a property of the objects, and consider objects to be fuzzy subsets of the image space. We then represent adjacency by a relation between fuzzy sets. This approach is, in our opinion, more powerful and general. We propose several ways for extending adjacency to fuzzy sets, either by using α-cuts, or by using a formal translation of binary equations into fuzzy ones. Since set equations are more easily translated into fuzzy terms, we shall privilege set representations of adjacency, particularly in the framework of fuzzy mathematical morphology. Finally, we give some hints on how to compare degrees of adjacency, typically for applications in model-based pattern recognition.
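
The paper's first approach (adjacency between crisp objects as a matter of degree, decreasing with distance) can be sketched in a few lines; the particular membership function and decay parameter below are illustrative choices, not the paper's definitions.

```python
import numpy as np

def adjacency_degree(pixels_a, pixels_b, decay=1.0):
    """Degree of adjacency of two crisp pixel sets as a fuzzy relation of
    their minimum distance: touching sets score 1, and the degree decays
    toward 0 as the gap grows. pixels_a, pixels_b: (n, 2) integer arrays;
    `decay` is a hypothetical parameter controlling the falloff."""
    diffs = pixels_a[:, None, :] - pixels_b[None, :, :]
    dmin = np.sqrt((diffs ** 2).sum(axis=-1)).min()
    return float(np.exp(-max(dmin - 1.0, 0.0) / decay))

a = np.array([[0, 0], [0, 1]])
b = np.array([[1, 1]])          # touches a
c = np.array([[5, 5]])          # far from a
print(adjacency_degree(a, b), adjacency_degree(a, c))  # 1.0, ~0.005
```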

Journal ArticleDOI
TL;DR: By applying a simulated annealing adjacency model based on net present value maximization and combined with an LP consequence computation model, it is possible to delineate optimal strategies of final felling scheduling.
Abstract: Forest management planning comprises selection among treatment alternatives in management units. A traditional linear programming (LP) approach may effectively account for a profit maximization objective combined with sustainability constraints, e.g. on the temporal distribution of harvest volume flows, cash‐flow, and net present value development, but it fails to account for spatial constraints, especially those associated with final felling. By applying a simulated annealing adjacency model based on net present value maximization and combined with an LP consequence computation model, it is possible to delineate optimal strategies of final felling scheduling. Evaluation is made of the trade‐off between (1) the incremental cost (determined by use of the LP model) of an optimal adjacency model solution, and (2) the potential damage cost resulting from adjacency characteristics such as windthrow and bark injuries. The decision support system may contribute significantly to reduce damage costs and may improve...
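
A toy version of the annealing step shows the structure of such an adjacency model: maximize net present value while penalizing adjacent units felled in the same period. All names, the cooling schedule, and the penalty form below are illustrative, not the paper's model (which additionally couples the annealing to an LP consequence computation).

```python
import math, random

def anneal_felling(units, adjacent_pairs, periods, npv, penalty, iters=20000):
    """units: list of unit ids; adjacent_pairs: list of (u, v) adjacency
    pairs; npv[u][p]: net present value of final-felling unit u in period
    p. Returns a schedule maximizing NPV minus adjacency-clash penalties."""
    schedule = {u: random.randrange(periods) for u in units}

    def score(s):
        value = sum(npv[u][s[u]] for u in units)
        clashes = sum(1 for u, v in adjacent_pairs if s[u] == s[v])
        return value - penalty * clashes

    current = score(schedule)
    for i in range(iters):
        temp = max(1e-3, 1.0 - i / iters)         # simple linear cooling
        u = random.choice(units)
        old_period = schedule[u]
        schedule[u] = random.randrange(periods)   # propose a move
        proposed = score(schedule)
        accept = (proposed >= current or
                  random.random() < math.exp((proposed - current) / temp))
        if accept:
            current = proposed
        else:
            schedule[u] = old_period              # undo the move
    return schedule, current
```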

Patent
Charu C. Aggarwal1, Philip S. Yu1
03 Jun 1997
TL;DR: In this paper, a two-step procedure was proposed to reduce the required computational effort for performing online mining of association rules by creating an adjacency lattice which pre-stores a number of large itemsets at a level of support dictated by available memory.
Abstract: A computer method of online mining of association rules by pre-processing data within the constraint of available memory. The required computational effort for performing online mining of association rules is reduced by a two-step procedure that involves first creating an adjacency lattice which pre-stores a number of large itemsets at a level of support dictated by available memory. The lattice structure is useful for both finding the itemsets quickly, by reducing the amount of disk I/O required to perform the analysis, and also using the itemsets in order to generate the rules. Once the adjacency lattice is obtained, the second (mining) step comprises two phases. The first phase involves a search algorithm used to find the corresponding itemsets at user specified levels of minimum support. The second phase involves using those itemsets to generate association rules at the user specified level of minimum confidence.
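
A simplified sketch of the two-step idea: pre-store every itemset above a memory-dictated primary support threshold (the patent's adjacency lattice additionally links each itemset to its immediate supersets), then answer online queries at any user support and confidence at or above that threshold. The enumeration below is brute force, purely for illustration.

```python
from itertools import combinations

def build_lattice(transactions, primary_support):
    """Pre-store every itemset whose support meets the threshold dictated
    by available memory. Returns {frozenset(itemset): support}."""
    n = len(transactions)
    counts = {}
    for t in transactions:
        items = sorted(set(t))
        for k in range(1, len(items) + 1):
            for s in combinations(items, k):
                key = frozenset(s)
                counts[key] = counts.get(key, 0) + 1
    return {s: c / n for s, c in counts.items() if c / n >= primary_support}

def online_rules(lattice, min_support, min_confidence):
    """Online mining phase: find itemsets at the user's support, then form
    rules lhs -> rhs at the user's confidence, touching only the lattice.
    Requires min_support >= the lattice's primary support threshold."""
    found = []
    for items, sup in lattice.items():
        if sup < min_support or len(items) < 2:
            continue
        for k in range(1, len(items)):
            for lhs in map(frozenset, combinations(sorted(items), k)):
                confidence = sup / lattice[lhs]  # subsets are always stored
                if confidence >= min_confidence:
                    found.append((set(lhs), set(items - lhs), sup, confidence))
    return found

lattice = build_lattice([["a", "b"], ["a", "b", "c"], ["a", "c"]], 0.3)
print(online_rules(lattice, 0.6, 0.9))  # e.g. {'b'} -> {'a'} with confidence 1.0
```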

Journal ArticleDOI
TL;DR: It is shown here how to construct an implicit representation for a graph such that vertex adjacency can be tested quickly by providing simple and optimal algorithms, both in a sequential and a parallel setting.

Journal ArticleDOI
TL;DR: This paper investigates two constraints for the connected operator class and is led to a new approach to the class of filters by reconstruction; the conclusions apply to flat non-binary (gray-level) operators as well.
Abstract: This paper investigates two constraints for the connected operator class. For binary images, connected operators are those that treat grains and pores of the input in an all or nothing way, and therefore they do not introduce discontinuities. The first constraint, called connected-component (c.c.) locality, constrains the part of the input that can be used for computing the output of each grain and pore. The second, called adjacency stability, establishes an adjacency constraint between connected components of the input set and those of the output set. Among increasing operators, usual morphological filters can satisfy both requirements. On the other hand, some (non-idempotent) morphological operators such as the median cannot have the adjacency stability property. When these two requirements are applied to connected and idempotent morphological operators, we are lead to a new approach to the class of filters by reconstruction. The important case of translation invariant operators and the relationships between translation invariance and connectivity are studied in detail. Concepts are developed within the binary (or set) framework; however, conclusions apply as well to flat non-binary (gray-level) operators.
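
A standard concrete instance of a binary connected operator (not one of the paper's specific constructions) is the area opening: every grain is kept or removed whole, so no new contours appear. A sketch, assuming SciPy is available:

```python
import numpy as np
from scipy import ndimage

def binary_area_opening(image, min_size):
    """Remove every foreground connected component (grain) smaller than
    `min_size` pixels; surviving grains are left exactly as they were -
    the all-or-nothing treatment that defines a connected operator."""
    labels, n = ndimage.label(image)
    sizes = ndimage.sum(image, labels, index=range(1, n + 1))
    keep = np.zeros(n + 1, dtype=bool)
    keep[1:] = sizes >= min_size
    return keep[labels]

img = np.zeros((6, 6), dtype=bool)
img[0:3, 0:3] = True   # a 9-pixel grain: kept
img[5, 5] = True       # a 1-pixel grain: removed whole
print(binary_area_opening(img, min_size=4).sum())  # 9
```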

Journal ArticleDOI
TL;DR: An incremental algorithm to maintain a DFS-forest in a directed acyclic graph under a sequence of arc insertions in O(nm) worst case total time, which compares favorably with the time required to recompute DFS from scratch by using Tarjan's Θ(n + m) algorithm any time a sequence of Ω(n) arc insertions must be handled.

Journal ArticleDOI
TL;DR: A new O(n²) algorithm is given for incremental path-consistency, which is applied after adding each algebraic constraint to the configuration of rectangles in 2D space, where the sides of the rectangles are parallel to an orthogonal coordinate system.
Abstract: The spatial synthesis problem addressed in this paper is the configuration of rectangles in 2D space, where the sides of the rectangles are parallel to an orthogonal coordinate system. Variables are the locations of the edges of the rectangles and their orientations. Algebraic constraints on these variables define a layout and constitute a constraint satisfaction problem. We give a new O(n²) algorithm for incremental path-consistency, which is applied after adding each algebraic constraint. Problem requirements are formulated as spatial relations between the rectangles, for example, adjacency, minimum distance, and nonoverlap. Spatial relations are expressed by Boolean combinations of the algebraic constraints, called disjunctive constraints. Solutions are generated by backtracking search, which selects a disjunctive constraint and instantiates its disjuncts. The selected disjuncts describe an equivalence class of configurations that is a significantly different solution. This method generates the set of significantly different solutions that satisfy all the requirements. The order of instantiating disjunctive constraints is critical for search efficiency. It is determined dynamically at each search state, using functions of heuristic measures called textures. Textures implement fail-first and prune-early strategies. Extensions to the model, that is, 3D configurations, configurations of nonrectangular shapes, constraint relaxation, optimization, and adding new rectangles during search are discussed.

Journal ArticleDOI
TL;DR: The graph theoretic facility layout problem (GTFLP) seeks the best spatial arrangement of a set of facilities, which minimizes the total transportation cost of material flow between facilities, or maximizes total facility adjacency benefits as discussed by the authors.
Abstract: The graph theoretic facility layout problem (GTFLP) seeks the best spatial arrangement of a set of facilities, which minimizes the total transportation cost of material flow between facilities, or maximizes total facility adjacency benefits. The GTFLP has applications in arranging rooms within a building floorplan, placing machines on a factory floor, controls on an instrument panel, or components on a circuit board. For this reason, the GTFLP is an important problem within industrial design. The GTFLP comprises two phases: the generation of a highly weighted maximal planar graph (MPG), whose edges specify adjacencies required in the layout, and the drawing of an orthogonal geometric dual to this MPG, called the block plan, under given area requirements for each facility. The first phase is NP-complete, and has been researched extensively; in the absence of a polynomial time algorithm, it appears there is little future for further research in this area. The second phase, however, has received far less attention...

Journal ArticleDOI
TL;DR: In this paper, the triangulation expansion method was used to solve the adjacency problem in facility layout planning, which is known to be NP-complete, so heuristics are required to solve 'large' problem instances.
Abstract: The adjacency problem is an important subproblem in facility layout planning. It is known to be NP-complete, so heuristics are required to solve 'large' problem instances. Several heuristics have been suggested for the adjacency problem, but very little reliable information is available on their relative performance. Thus, extensive numerical experiments have been carried out with a special class of heuristics called triangulation expansion methods. Two algorithms, the Wheel Expansion Heuristic by Eades et al. (1982) and a method by Leung (1992) have proven to be superior. The test procedure that has been used to evaluate the methods is described and suggested for future testing of newly developed heuristics.

Proceedings ArticleDOI
12 May 1997
TL;DR: A reliable computational procedure which takes the range image discontinuities into account is presented for computing the pixel's normal and is evaluated on 80 real images acquired by two different range sensors using the methodology proposed in Hoover et al., 1996.
Abstract: This paper presents a hybrid approach to the segmentation of range images into planar regions. The term hybrid refers to a combination of edge- and region-based considerations. A reliable computational procedure which takes the range image discontinuities into account is presented for computing the pixel's normal. The segmentation algorithm consists of two parts. In the first one, the pixels are aggregated according to local properties derived from the input data and are represented by a region adjacency graph (RAG). At this stage, the image is still over-segmented. In the second part, the segmentation is refined thanks to the construction of an irregular pyramid. The base of the pyramid is the RAG previously extracted. The over-segmented regions are merged using a surface-based description. This algorithm has been evaluated on 80 real images acquired by two different range sensors using the methodology proposed in (Hoover et al., 1996). Experimental results are presented and compared to others obtained by four research groups.

Proceedings ArticleDOI
26 Oct 1997
TL;DR: This paper presents an unsupervised segmentation method applicable to both 2D and 3D images achieved by a bottom-up hierarchical analysis to progressively agglomerate pixels/voxels in the image into non-overlapped homogeneous regions characterised by a linear signal model.
Abstract: This paper presents an unsupervised segmentation method applicable to both 2D and 3D images. The segmentation is achieved by a bottom-up hierarchical analysis to progressively agglomerate pixels/voxels in the image into non-overlapped homogeneous regions characterised by a linear signal model. A hierarchy of adjacency graphs is used to describe agglomeration results from the hierarchical analysis, and is constructed by successively performing a clustering operation which produces an optimal classification by merging each region with its nearest neighbours determined under the framework of statistical inference. The top level of the hierarchy then describes the segmentation result.

Journal ArticleDOI
TL;DR: In this paper, two new heuristics for the adjacency problem are introduced which belong to a special class of constructive methods called Triangulation Expansion Heuristics, and extensive numerical experiments have been carried out to evaluate the proposed methods in terms of computing times and solution quality.
Abstract: The adjacency problem is an important subproblem in facility layout planning. It is known to be NP-complete, so heuristics are required to solve “large” problem instances. In this paper two new heuristics for the adjacency problem are introduced which belong to a special class of constructive methods called Triangulation Expansion Heuristics. Extensive numerical experiments have been carried out in order to evaluate the proposed methods in terms of computing times and solution quality. It has been found that at least one method is clearly superior to the best methods proposed in the literature so far (Eades et al. 1982, Leung 1992).

Proceedings ArticleDOI
14 Jul 1997
TL;DR: This paper examines partitions of region adjacency graphs that make it possible to pass rapidly from an initial representation of an image that uses a large number of regions to a final segmentation that is composed of appropriately few regions.
Abstract: An important aim of image analysis is to segment an image into parts that correlate strongly with real world objects or surfaces. In this paper, we present an algorithm for color image segmentation that uses a transformation based on region adjacency graphs. We examine partitions of region adjacency graphs that make it possible to pass rapidly from an initial representation of an image that uses a large number of regions to a final segmentation that is composed of appropriately few regions. After an initial segmentation process we create an associated adjacency graph, and partition this graph in two ways: by growth on graph vertices using region growing, and by a watershed transformation of a contour graph.
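
The core data structure is easy to sketch: build a region adjacency graph from an initial over-segmentation, then merge adjacent regions that are similar. The merging criterion below (mean-color distance with a hypothetical threshold) is a stand-in for the paper's region-growing and watershed-based partitions.

```python
import numpy as np

def rag_edges(labels):
    """Edges of the region adjacency graph: pairs of distinct labels that
    touch horizontally or vertically in the label image."""
    edges = set()
    for a, b in ((labels[:, :-1], labels[:, 1:]), (labels[:-1, :], labels[1:, :])):
        mask = a != b
        for u, v in zip(a[mask].tolist(), b[mask].tolist()):
            edges.add((min(u, v), max(u, v)))
    return edges

def merge_similar(labels, image, threshold):
    """One agglomeration pass: union adjacent regions whose mean colors
    differ by less than `threshold` (means are not recomputed here)."""
    means = {l: image[labels == l].mean(axis=0) for l in np.unique(labels)}
    parent = {l: l for l in means}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    for u, v in rag_edges(labels):
        ru, rv = find(u), find(v)
        if ru != rv and np.linalg.norm(means[ru] - means[rv]) < threshold:
            parent[rv] = ru
    return np.vectorize(find)(labels)
```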

Proceedings ArticleDOI
05 Jan 1997
TL;DR: In this paper, the authors present a general method to determine the implementation and hardware specific running time constants for combinatorial algorithms, which is applicable to a wide range of programming languages.
Abstract: Algorithms are more and more made available as part of libraries or tool kits. For a user of such a library statements of asymptotic running times are almost meaningless as he has no way to estimate the constants involved. To choose the right algorithm for the targeted problem size and the available hardware, knowledge about these constants is important. Methods to determine the constants based on regression analysis or operation counting are not practicable in the general case due to inaccuracy and costs respectively. We present a new general method to determine the implementation and hardware specific running time constants for combinatorial algorithms. This method requires no changes of the implementation of the investigated algorithm and is applicable to a wide range of programming languages. Only some additional code is necessary. The determined constants are correct within a constant factor which depends only on the hardware platform. As an example the constants of an implementation of a hierarchy of algorithms and data structures are determined. The hierarchy consists of an algorithm for the maximum weighted bipartite matching problem (MWBM), Dijkstra's algorithm, a Fibonacci heap and a graph representation based on adjacency lists. The errors in the running time prediction of these algorithms using exact execution frequencies are at most 50% on the tested hardware platforms.

Journal ArticleDOI
01 Jun 1997
TL;DR: A hierarchical part representation is used as the basis for a reasoning scheme that uses topological relationships between features to restrict the search space of operation sequences in CADCAM (computer aided design and manufacture).
Abstract: This article is concerned with part representation and reasoning algorithms for automatic process planning in CADCAM (computer aided design and manufacture). Process planning involves the translation of a part description into instructions for a sequence of operations for the manufacture of the part. Part representations in CADCAM are reviewed, and a hierarchical representation is introduced which describes parts as the set-theoretic union of positive (protrusion) features, with the set-theoretic union of negative (depression) features subtracted. The model information hierarchy also incorporates topological relationships among features (adjacency, ownership and intersection), tolerances and links to a boundary representation (B-rep) geometric model. The hierarchical part representation is used as the basis for a reasoning scheme that uses topological relationships between features to restrict the search space of operation sequences. A recursive algorithm produces candidate operation sequences that...

Journal ArticleDOI
R.E. Sequeira1, F.J. Preteux
TL;DR: The use of the algorithm for constructing skeletons by influence zones (SKIZ) is demonstrated and the adjacency, or dual, graph of the VD is readily obtained.
Abstract: The Voronoi diagram (VD) is a popular tool for partitioning the support of an image. An algorithm is presented for constructing VD when the seed set, which determines the Voronoi regions, can be modified by adding and removing seeds. The number of pixels and seeds revisited for updating the diagram and the neighbor relationships among seeds is minimized. A result on cocircular seeds is presented. The adjacency, or dual, graph of the VD is readily obtained. The use of the algorithm for constructing skeletons by influence zones (SKIZ) is demonstrated.
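
A brute-force stand-in for the construction (not the paper's incremental algorithm) makes the objects concrete: label each pixel of the support with its nearest seed, read the adjacency (dual) graph off label transitions, and take the inter-region boundary pixels as a discrete SKIZ.

```python
import numpy as np

def discrete_voronoi(shape, seeds):
    """Brute-force discrete Voronoi diagram on an image support.
    seeds: list of (row, col) seed positions. Returns the label image and
    the adjacency (dual) graph as a set of label pairs."""
    rows, cols = np.indices(shape)
    pixels = np.stack([rows.ravel(), cols.ravel()], axis=1)        # (npix, 2)
    d2 = ((pixels[:, None, :] - np.asarray(seeds)[None, :, :]) ** 2).sum(-1)
    labels = d2.argmin(axis=1).reshape(shape)
    dual = set()
    for a, b in ((labels[:, :-1], labels[:, 1:]), (labels[:-1, :], labels[1:, :])):
        mask = a != b  # these boundary pixels form a discrete SKIZ
        dual |= {(min(u, v), max(u, v))
                 for u, v in zip(a[mask].tolist(), b[mask].tolist())}
    return labels, dual

labels, dual = discrete_voronoi((32, 32), [(4, 4), (4, 28), (28, 16)])
print(dual)  # {(0, 1), (0, 2), (1, 2)} - every pair of regions touches here
```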

Journal ArticleDOI
Hung-Kuei Ku1, John P. Hayes
TL;DR: A model called processor-bus-link (PBL) graphs is introduced to represent a multiple-bus system's interconnection structure, more general than previously proposed models, and has the advantages of simple representation, broad application, and the ability to model partial bus failures.
Abstract: We present an efficient approach to characterizing the fault tolerance of multiprocessor systems that employ multiple shared buses for interprocessor communication. Of concern is connective fault tolerance, which is defined as the ability to maintain communication between any two fault-free processors in the presence of faulty processors, buses, or processor-bus links. We introduce a model called processor-bus-link (PBL) graphs to represent a multiple-bus system's interconnection structure. The model is more general than previously proposed models, and has the advantages of simple representation, broad application, and the ability to model partial bus failures. The PBL graph implies a set of component adjacency graphs that highlights various connectivity features of the system. Using these graphs, we propose a method for analyzing the maximum number of faults a multiple-bus system can tolerate, and for identifying every minimum set of faulty components that disconnects the processors of the system. We also analyze the connective fault tolerance of several proposed multiple-bus systems to illustrate the application of our method.
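
The connectivity question reduces to reachability in a bipartite processor-bus graph. The sketch below checks one fault set under a simplified reading of the model (no partial bus failures, which the full PBL model does support); the data layout is an assumption.

```python
from collections import deque

def still_connected(links, bad_procs=(), bad_buses=(), bad_links=()):
    """links: {processor: set of buses it has a link to}. Returns True if
    every pair of fault-free processors can still communicate through
    fault-free buses and links."""
    procs = [p for p in links if p not in bad_procs]
    if len(procs) <= 1:
        return True
    bus_users = {}  # bus -> fault-free processors with a working link to it
    for p in procs:
        for b in links[p]:
            if b not in bad_buses and (p, b) not in bad_links:
                bus_users.setdefault(b, set()).add(p)
    seen, queue = {procs[0]}, deque([procs[0]])
    while queue:
        p = queue.popleft()
        for b in links[p]:
            if p not in bus_users.get(b, ()):  # p cannot use this bus
                continue
            for q in bus_users[b] - seen:
                seen.add(q)
                queue.append(q)
    return seen == set(procs)

# Two buses, three processors; losing bus "B0" leaves everyone on "B1".
links = {"P0": {"B0", "B1"}, "P1": {"B0", "B1"}, "P2": {"B1"}}
print(still_connected(links, bad_buses={"B0"}))  # True
```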

Journal ArticleDOI
TL;DR: In this article, a planar orthogonal layout or floorplan of a set of facilities subject to adjacency and area constraints is presented, and simple selection criteria for choosing the next facility to be inserted into the floorplan are used.
Abstract: The main problem concerned with applying graph theory to facilities layout is the conversion of the dual graph to a block layout. This paper presents a new method of producing a planar orthogonal layout or floorplan of a set of facilities subject to adjacency and area constraints. It improves upon previous approaches by accepting any maximal planar graph representing the adjacencies as input. Simple selection criteria for choosing the next facility to be inserted into the floorplan are used. Further, any sensible orthogonal shape for the facilities in the resulting floorplan can be generated.

Patent
06 Aug 1997
TL;DR: In this article, a set of articles correlated to each other is defined by a directed graph (V, A), where V means a setof articles (nodes) with A meaning a set consisting of two pairs of nodes (i, j) of nodes.
Abstract: PROBLEM TO BE SOLVED: To relate a large quantity of article data to each other with high efficiency and at a high speed by producing a similarity matrix from the similarities among plural documents under a time restriction and transforming the similarity matrix into an adjacency matrix that shows the correlations between the documents. SOLUTION: A set of articles is represented as a set ordered in time sequence. A set of mutually correlated articles is defined by a directed graph (V, A), where V is the set of articles (nodes) and A is a set of ordered pairs (i, j) of nodes (i, j ∈ V). Since the set of nodes is ordered in time, the constraint applies that, for (d_i, d_j) ∈ A and i...
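
The transformation at the heart of the claim is small enough to sketch: threshold a time-ordered similarity matrix into the adjacency matrix of a directed acyclic graph whose arcs only run forward in time. The threshold and maximum-lag parameters below are illustrative stand-ins for the patent's time restriction.

```python
import numpy as np

def similarity_to_adjacency(sim, threshold, max_lag=None):
    """sim: (n, n) similarity matrix over articles indexed in time order.
    Returns a boolean adjacency matrix with an arc i -> j whenever article
    i precedes j, sim[i, j] >= threshold, and (optionally) j - i <= max_lag;
    restricting arcs to the upper triangle makes the graph acyclic."""
    n = sim.shape[0]
    adjacency = np.zeros((n, n), dtype=bool)
    for i in range(n):
        for j in range(i + 1, n):
            if sim[i, j] >= threshold and (max_lag is None or j - i <= max_lag):
                adjacency[i, j] = True
    return adjacency

sim = np.array([[1.0, 0.8, 0.2],
                [0.8, 1.0, 0.9],
                [0.2, 0.9, 1.0]])
print(similarity_to_adjacency(sim, threshold=0.7))
# arcs 0 -> 1 and 1 -> 2 only; the time order rules out backward arcs
```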