
Showing papers on "Adjacency list published in 1996"


Journal ArticleDOI
TL;DR: A generic system is proposed for form dropout when the filled-in characters or symbols touch or cross the form frames, together with a method to separate these characters from form frames whose locations are unknown.
Abstract: Recent advances in intelligent character recognition are enabling us to address many challenging problems in document image analysis. One of them is intelligent form analysis. This paper describes a generic system for form dropout when the filled-in characters or symbols are either touching or crossing the form frames. We propose a method to separate these characters from form frames whose locations are unknown. Since some of the character strokes are either touching or crossing the form frames, we need to address the following three issues: 1) localization of form frames; 2) separation of characters and form frames; and 3) reconstruction of broken strokes introduced during separation. The form frame is automatically located by finding long straight lines based on the block adjacency graph. Form frame separation and character reconstruction are implemented by means of this graph. The proposed system includes form structure learning and form dropout. First, a form structure-based template is automatically generated from a blank form which includes form frames, preprinted data areas and skew angle. With this form template, our system can then extract both handwritten and machine-typed filled-in data. Experimental results on three different types of forms show the performance of our system. Further, the proposed method is robust to noise and skew that is introduced during scanning.

132 citations
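
As a rough illustration of the block adjacency graph idea used above, the sketch below (hypothetical names, deliberately simplified merging, assuming a binary image given as rows of 0/1 values) merges horizontal runs of foreground pixels into blocks and flags long, thin blocks as candidate form-frame lines; the paper's actual graph construction and stroke reconstruction are considerably richer.

def horizontal_runs(row):
    """Yield (start, end) spans of consecutive foreground (1) pixels in a row."""
    runs, start = [], None
    for x, v in enumerate(row):
        if v and start is None:
            start = x
        elif not v and start is not None:
            runs.append((start, x))
            start = None
    if start is not None:
        runs.append((start, len(row)))
    return runs

def find_long_lines(img, min_len=50, max_thickness=3):
    """Merge vertically overlapping runs into blocks; report long, thin
    blocks as candidate form-frame lines (a toy block adjacency pass)."""
    blocks = []  # each block: [x0, x1, y0, y1]
    prev = []    # runs of the previous row paired with their block index
    for y, row in enumerate(img):
        cur = []
        for (x0, x1) in horizontal_runs(row):
            for (px0, px1, b) in prev:
                if x0 < px1 and px0 < x1:          # runs overlap -> same block
                    blocks[b][0] = min(blocks[b][0], x0)
                    blocks[b][1] = max(blocks[b][1], x1)
                    blocks[b][3] = y
                    cur.append((x0, x1, b))
                    break
            else:                                  # no overlap -> new block
                blocks.append([x0, x1, y, y])
                cur.append((x0, x1, len(blocks) - 1))
        prev = cur
    return [b for b in blocks
            if b[1] - b[0] >= min_len and b[3] - b[2] < max_thickness]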


Journal ArticleDOI
TL;DR: The landscape contagion index as discussed by the authors measures the degree of clumping of attributes on raster maps and is computed from the frequencies by which different pairs of attributes occur as adjacent pixels on a map.
Abstract: The landscape contagion index measures the degree of clumping of attributes on raster maps. The index is computed from the frequencies by which different pairs of attributes occur as adjacent pixels on a map. Because there are subtle differences in the way the attribute adjacencies may be tabulated, the standard index formula may not always apply, and published index values may not be comparable. This paper derives formulas for the contagion index that apply for different ways of tabulating attribute adjacencies — with and without preserving the order of pixels in pairs, and by using two different ways of determining pixel adjacency. When the order of pixels in pairs is preserved, the standard formula is obtained. When the order is not preserved, a new formula is obtained because the number of possible attribute adjacency states is smaller. Estimated contagion is also smaller when each pixel pair is counted twice (instead of once) because double-counting pixel adjacencies makes the attribute adjacency matrix symmetric across the main diagonal.

114 citations
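
The tabulation choices the paper analyzes can be made concrete in a short sketch. The version below is illustrative only; the function name and the entropy normalization C = 1 - H / ln(number of states) are assumptions based on the commonly cited contagion formula, not the paper's own derivation. It tallies 4-neighbour attribute adjacencies on a raster either with or without preserving pair order.

import numpy as np

def contagion(raster, ordered=True):
    raster = np.asarray(raster)
    m = int(raster.max()) + 1                 # number of attribute classes
    counts = np.zeros((m, m))
    for a, b in [(raster[:, :-1], raster[:, 1:]),    # horizontal pairs
                 (raster[:-1, :], raster[1:, :])]:   # vertical pairs
        np.add.at(counts, (a.ravel(), b.ravel()), 1)
    if ordered:                  # each pixel pair tallied in both orders
        counts = counts + counts.T
        n_states = m * m
    else:                        # each unordered pair tallied once
        counts = np.triu(counts + counts.T) - np.diag(np.diag(counts))
        n_states = m * (m + 1) // 2
    p = counts / counts.sum()
    ent = -(p[p > 0] * np.log(p[p > 0])).sum()
    return 1 - ent / np.log(n_states)

grid = [[0, 0, 1],
        [0, 1, 1],
        [2, 2, 1]]
print(contagion(grid, ordered=True), contagion(grid, ordered=False))

With ordered pairs the number of possible states is m^2, recovering the standard formula's 2 ln m normalization; without order it shrinks to m(m+1)/2, which is why the paper obtains a different formula for that tabulation.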


Journal ArticleDOI
TL;DR: In this article, a computational paradigm called spatial aggregation has been developed to unify the description of a class of imagistic problem solvers (e.g., kam, maps, and hipair).
Abstract: Visual thinking plays an important role in scientific reasoning. Based on the research in automating diverse reasoning tasks about dynamical systems, nonlinear controllers, kinematic mechanisms, and fluid motion, we have identified a style of visual thinking, imagistic reasoning. Imagistic reasoning organizes computations around image-like, analogue representations so that perceptual and symbolic operations can be brought to bear to infer structure and behavior. Programs incorporating imagistic reasoning have been shown to perform at an expert level in domains that defy current analytic or numerical methods. We have developed a computational paradigm, spatial aggregation, to unify the description of a class of imagistic problem solvers. A program written in this paradigm has the following properties. It takes a continuous field and optional objective functions as input, and produces high-level descriptions of structure, behavior, or control actions. It computes a multi-layer of intermediate representations, called spatial aggregates, by forming equivalence classes and adjacency relations. It employs a small set of generic operators such as aggregation, classification, and localization to perform bidirectional mapping between the information-rich field and successively more abstract spatial aggregates. It uses a data structure, the neighborhood graph, as a common interface to modularize computations. To illustrate our theory, we describe the computational structure of three implemented problem solvers - kam, maps, and hipair - in terms of the spatial aggregation generic operators by mixing and matching a library of commonly used routines.

73 citations
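
A minimal sketch of the two core operators named above, neighborhood-graph construction and aggregation into connected equivalence classes, under the assumption that the field is sampled as points with scalar values (all function names are illustrative, not the paper's API):

from math import dist

def neighborhood_graph(points, radius):
    """Adjacency list: two samples are neighbors if within `radius`."""
    return {i: [j for j in range(len(points))
                if j != i and dist(points[i], points[j]) <= radius]
            for i in range(len(points))}

def aggregate(points, values, graph, equiv):
    """Group neighboring samples whose values satisfy `equiv` into
    spatial aggregates (connected equivalence classes)."""
    label, labels = 0, {}
    for i in range(len(points)):
        if i in labels:
            continue
        stack, labels[i] = [i], label
        while stack:
            u = stack.pop()
            for v in graph[u]:
                if v not in labels and equiv(values[u], values[v]):
                    labels[v] = label
                    stack.append(v)
        label += 1
    return labels

pts = [(0, 0), (1, 0), (5, 0), (6, 0)]
vals = [1.0, 1.1, 3.0, 3.2]
g = neighborhood_graph(pts, radius=1.5)
print(aggregate(pts, vals, g, equiv=lambda a, b: abs(a - b) < 0.5))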



Book ChapterDOI
TL;DR: This chapter describes algorithms for connected component labeling and region adjacency graph construction and gives several sequential algorithms for two-dimensional connected component labeling and an algorithm for three-dimensional connected component labeling.
Abstract: In machine vision, an original gray tone image is processed to produce features that can be used by higher-level processes, such as recognition and inspection procedures. Thresholding the image results in a binary image whose pixels are labeled as foreground or background. Segmenting the image results in a symbolic image whose pixels are assigned labels representing various classifications. In both cases, an important next step in the analysis of the image is an operation called connected component labeling that groups the pixels into regions, such that adjacent pixels have the same label, and pixels belonging to distinct regions have different labels. Properties of the regions and relationships among them may then be calculated. The most common relationship, spatial adjacency, can be represented by a region adjacency graph. This chapter describes algorithms for connected component labeling and region adjacency graph construction. In addition to giving several sequential algorithms for two-dimensional connected component labeling, it also discusses several parallel algorithms and an algorithm for three-dimensional connected component labeling.

69 citations
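
A compact version of the classical two-pass labeling algorithm the chapter covers, extended to collect the region adjacency graph during the second pass (4-adjacency, union-find for label equivalences); a simplified sketch rather than the chapter's exact formulation:

def label_and_rag(img):
    h, w = len(img), len(img[0])
    labels = [[0] * w for _ in range(h)]
    parent = {}                      # union-find over provisional labels

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    nxt = 1
    for y in range(h):               # first pass: provisional labels
        for x in range(w):
            up = labels[y - 1][x] if y and img[y - 1][x] == img[y][x] else 0
            left = labels[y][x - 1] if x and img[y][x - 1] == img[y][x] else 0
            if up and left:
                labels[y][x] = find(up)
                parent[find(left)] = find(up)    # record equivalence
            elif up or left:
                labels[y][x] = find(up or left)
            else:
                parent[nxt] = nxt
                labels[y][x] = nxt
                nxt += 1
    rag = set()
    for y in range(h):               # second pass: resolve + adjacencies
        for x in range(w):
            labels[y][x] = find(labels[y][x])
            for (ny, nx) in [(y - 1, x), (y, x - 1)]:
                if 0 <= ny and 0 <= nx and labels[ny][nx] != labels[y][x]:
                    rag.add(frozenset((labels[ny][nx], labels[y][x])))
    return labels, rag

img = [[1, 1, 0],
       [1, 0, 0],
       [0, 0, 1]]
labels, rag = label_and_rag(img)
print(labels, sorted(tuple(sorted(e)) for e in rag))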


Journal ArticleDOI
TL;DR: In this paper, it was shown that the number of points with pairwise different sets of neighbors in a graph is $O(2^{r/2})$, where r is the rank of the adjacency matrix.
Abstract: We show that the number of points with pairwise different sets of neighbors in a graph is $O(2^{r/2})$, where r is the rank of the adjacency matrix. We also give an example that achieves this bound. © 1996 John Wiley & Sons, Inc.

63 citations
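
A quick numerical illustration of the statement (not a proof; the exact constant in front of $2^{r/2}$ is left to the paper): count the distinct rows of an adjacency matrix, i.e. the pairwise-different neighborhoods, and compare against the rank.

import numpy as np

# Triangle plus an isolated vertex: four pairwise-different neighborhoods.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 0],
              [0, 0, 0, 0]])

r = np.linalg.matrix_rank(A)
distinct = len({tuple(row) for row in A})
print(distinct, "distinct neighborhoods; rank", r,
      "; bounded by a constant times 2^(r/2) =", 2 ** (r / 2))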


Journal ArticleDOI
TL;DR: Edge adjacency relationships in molecular graphs are used to define a new topographic index, calculated by considering molecules as weighted graphs.
Abstract: Edge adjacency relationships in molecular graphs have been used to define a new topographic index. The novel index is calculated considering molecules as weighted graphs, where the elements of edge...

63 citations
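
For readers unfamiliar with edge adjacency, the sketch below builds the unweighted edge adjacency matrix of a small molecular graph, where two edges (bonds) are adjacent when they share an atom; the paper's index is computed from a weighted variant whose weighting scheme is not reproduced here.

def edge_adjacency_matrix(edges):
    m = len(edges)
    E = [[0] * m for _ in range(m)]
    for i in range(m):
        for j in range(i + 1, m):
            if set(edges[i]) & set(edges[j]):   # edges share a vertex
                E[i][j] = E[j][i] = 1
    return E

# 2-methylbutane skeleton: C1-C2, C2-C3, C3-C4, C2-C5
bonds = [(1, 2), (2, 3), (3, 4), (2, 5)]
for row in edge_adjacency_matrix(bonds):
    print(row)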


Book ChapterDOI
18 Sep 1996
TL;DR: This paper presents a polynomial time algorithm to compute a bend-minimum orthogonal drawing under the restriction that the number of bends at each edge is at most 1.
Abstract: In a 2-visibility drawing the vertices of a given graph are represented by rectangular boxes and the adjacency relations are expressed by horizontal and vertical lines drawn between the boxes. In this paper we want to emphasize this model as a practical alternative to other representations of graphs, and to demonstrate the quality of the produced drawings. We give several approaches, heuristics as well as provably good algorithms, to represent planar graphs within this model. To this end, we present a polynomial time algorithm to compute a bend-minimum orthogonal drawing under the restriction that the number of bends at each edge is at most 1.

57 citations


Journal Article
TL;DR: In this paper, the authors consider representations of graphs as rectangle-visibility graphs and as doubly linear graphs, proving that these graphs have at most 6n−20 (resp., 6n−18) edges and providing examples with 6n−20 edges for each n ≥ 8.
Abstract: This paper considers representations of graphs as rectangle-visibility graphs and as doubly linear graphs. These are, respectively, graphs whose vertices are isothetic rectangles in the plane with adjacency determined by horizontal and vertical visibility, and graphs that can be drawn as the union of two straight-edged planar graphs. We prove that these graphs have, with n vertices, at most 6n−20 (resp., 6n−18) edges, and we provide examples of these graphs with 6n−20 edges for each n≥8.

51 citations


Journal ArticleDOI
TL;DR: In this article, the floorplanning problem for a layout problem is formulated as a global optimization problem, where the area of each building block is assumed to be fixed and its width and height are allowed to vary subject to aspect ratio constraints.
Abstract: In this paper, the floorplanning problem for a layout problem is formulated as a global optimization problem. The area of each building block is assumed to be fixed. However, its width and height are allowed to vary subject to aspect ratio constraints. Also, a block may be arbitrarily oriented in parallel to the xy orthogonal axes, subject to partition constraints with associated adjacency relationships. The objective is to minimize the rectangular area of the entire layout. By formulating the problem appropriately, it becomes a geometric programming problem. Its global minimum can then be found by using standard convex optimization techniques. The problem formulation and its conversion to a convex optimization problem are first illustrated through a simple example. The general procedure is then described and the effectiveness of the approach demonstrated through numerical examples.

50 citations
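
The conversion the paper describes can be illustrated on a toy instance. The sketch below uses the cvxpy library (an assumption made purely for illustration; the paper predates it and speaks only of standard convex optimization techniques) to place two fixed-area blocks side by side with aspect-ratio bounds, minimizing the enclosing rectangle's area as a geometric program.

import cvxpy as cp

W = cp.Variable(pos=True)            # enclosing rectangle width
H = cp.Variable(pos=True)            # enclosing rectangle height
w1, h1 = cp.Variable(pos=True), cp.Variable(pos=True)
w2, h2 = cp.Variable(pos=True), cp.Variable(pos=True)

constraints = [
    w1 * h1 == 4.0,                  # fixed block areas (monomial equalities)
    w2 * h2 == 9.0,
    h1 <= 2.0 * w1, w1 <= 2.0 * h1,  # aspect-ratio limits
    h2 <= 2.0 * w2, w2 <= 2.0 * h2,
    w1 + w2 <= W,                    # blocks placed side by side
    h1 <= H, h2 <= H,
]
prob = cp.Problem(cp.Minimize(W * H), constraints)
prob.solve(gp=True)                  # geometric program: convex in log space
print(round(prob.value, 3), round(W.value, 3), round(H.value, 3))

Note that non-overlap in general needs a fixed relative ordering of blocks (here: side by side); it is that ordering, together with the monomial/posynomial form of every constraint, that keeps the program a GP.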


Journal ArticleDOI
TL;DR: In this paper, the eigenvalues of the adjacency operators of the finite Euclidean graphs were shown to be Kloosterman sums, and the graphs were compared with finite upper half planes constructed in a similar way using an analogue of Poincare's non-Euclidean distance.

Proceedings ArticleDOI
25 Aug 1996
TL;DR: An overview of the implementation of Lemon, a complete optical music recognition system; among the techniques employed are template matching, the Hough transform, line adjacency graphs, character profiles, and graph grammars.
Abstract: This paper provides an overview of the implementation of Lemon, a complete optical music recognition system. Among the techniques employed by the implementation are: template matching, the Hough transform, line adjacency graphs, character profiles, and graph grammars. Experimental results, including comparisons with commercial systems, are provided.

Proceedings ArticleDOI
B.-H. Tran1, F. Seide, T. Steinbiss
03 Oct 1996
TL;DR: An efficient algorithm for the exhaustive search of N-best sentence hypotheses in a word graph, based on a two-pass procedure; it is also applied in speech understanding to select the most likely sentence hypothesis that satisfies some additional constraints.
Abstract: The authors introduce an efficient algorithm for the exhaustive search of N-best sentence hypotheses in a word graph. The search procedure is based on a two-pass algorithm. In the first pass, a word graph is constructed with standard time-synchronous beam search. The actual extraction of N-best word sequences from the word graph takes place during the second pass. With the implementation of a tree-organized N-best list, the search is performed directly on the resulting word graph. Therefore, the parallel bookkeeping of N hypotheses at each processing step during the search is not necessary. It is important to point out that the proposed N-best search algorithm produces an exact N-best list as defined by the word graph structure. Possible errors can only result from pruning during the construction of the word graph. In a postprocessing step, the N candidates can be rescored with a more complex language model with highly reduced computational cost. This algorithm is also applied in speech understanding to select the most likely sentence hypothesis that satisfies some additional constraints.
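
The second pass can be pictured with a toy word graph. The sketch below is a simplified stand-in for the tree-organized N-best list (graph layout, words and costs are invented): partial hypotheses live in a priority queue, so the N lowest-cost paths through the DAG emerge in score order without bookkeeping N hypotheses per state.

import heapq

def n_best(graph, start, end, n):
    """graph: {node: [(word, cost, next_node), ...]} -- an acyclic word graph.
    Returns up to n lowest-cost word sequences from start to end."""
    results = []
    heap = [(0.0, start, [])]          # (accumulated cost, node, words so far)
    while heap and len(results) < n:
        cost, node, words = heapq.heappop(heap)
        if node == end:
            results.append((cost, words))
            continue
        for word, c, nxt in graph.get(node, []):
            heapq.heappush(heap, (cost + c, nxt, words + [word]))
    return results

wg = {0: [("the", 1.0, 1), ("a", 1.2, 1)],
      1: [("cat", 0.5, 2), ("cap", 0.9, 2)],
      2: [("sat", 0.3, 3)]}
for cost, words in n_best(wg, 0, 3, 3):
    print(cost, " ".join(words))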

Journal ArticleDOI
TL;DR: In this paper, the authors present a mapping methodology which assigns to each instruction of a CDFG a time step and a HW resource for its execution, and which minimizes the cost function through a simulated annealing algorithm.

Journal ArticleDOI
01 Aug 1996-Infor
TL;DR: In this article, the authors explore the approach of identifying a minimal subset of a class of structural adjacency constraints, and develop a two-stage procedure to identify and fine-tune such a minimal subset.
Abstract: Maintaining spatial integrity is an important concern in both the tactical and operational levels of forestry planning. Spatial relationships are typically represented by adjacency constraints. The number of needed adjacency constraints for even a small number of planning units, if not kept to a minimum, may be too large to include in a mathematical programming formulation. Several approaches have been developed to “minimize” the number of adjacency constraints used. These approaches involve either constraint subset selection or constraint aggregation. We demonstrate that with constraint aggregation the theoretical minimum of necessary adjacency constraints is one. However, the range of coefficients of one aggregated adjacency constraint is impractical for actual application. As an alternative, we explore the approach of identifying a minimal subset of a class of structural adjacency constraints. As a part of this approach, we develop a two-stage procedure to identify and fine tune a minimal subse...
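
For scale intuition, the unaggregated baseline is one constraint per adjacent pair of planning units, x_i + x_j <= 1; the sketch below (hypothetical data) simply enumerates them from an adjacency structure. It is this constraint set that the subset-selection and aggregation approaches discussed above try to shrink.

def pairwise_constraints(adjacency):
    """adjacency: {unit: set(neighbors)}. One constraint per adjacency edge."""
    seen, rows = set(), []
    for i, nbrs in adjacency.items():
        for j in nbrs:
            if frozenset((i, j)) not in seen:
                seen.add(frozenset((i, j)))
                rows.append(f"x_{i} + x_{j} <= 1")
    return rows

units = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
rows = pairwise_constraints(units)
print(len(rows), "constraints:", *rows, sep="\n")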

Journal ArticleDOI
TL;DR: This work is mainly interested in three questions regarding the eigenvalues and eigenfunctions of the combinatorial Laplacian as q goes to infinity: How large is the second largest eigenvalue, in absolute value, compared with the graph's degree?
Abstract: We survey what is known about spectra of combinatorial Laplacians (or adjacency operators) of graphs on the simplest finite symmetric spaces. This work is joint with J. Angel, N. Celniker, A. Medrano, P. Myers, S. Poulos, H. Stark, C. Trimble, and E. Velasquez. For each finite field Fq with q odd, we consider graphs associated to finite Euclidean and non-Euclidean symmetric spaces over Fq. We are mainly interested in three questions regarding the eigenvalues and eigenfunctions of the combinatorial Laplacian as q goes to infinity: How large is the second largest eigenvalue, in absolute value, compared with the graph's degree? (The largest eigenvalue is the degree.) What can one say about the distribution of eigenvalues? What can one say about the “level curves” of the eigenfunctions?

Journal ArticleDOI
TL;DR: A general approach to clustering is explored which incorporates uncertainty regarding space-time locations into nearest-neighbour, distance or adjacency relationships, and which can be used with almost all existing cluster tests.
Abstract: Health professionals are investigating an increasing number of possible disease clusters, and statistical tests play an important role in cluster description and analysis. Existing cluster statistics assume precise data, when in reality health events are often imprecise (for example, place of residence is known only to the census district or zip code) and uncertain (for example, 'I first became ill sometime in 1985'). This incompatibility--precise methods used to analyse imprecise data--is largely ignored, resulting in test statistics of unknown accuracy. Most cluster statistics can be written as the cross-product of two matrices where one matrix reflects nearest-neighbour, distance or adjacency relationships and the second matrix is health related (for example, case-control identities). This paper explores a general approach to clustering, which incorporates uncertainty regarding space-time locations into these nearest neighbour, distance or adjacency relationships. Because the approach is general it can be used with almost all existing cluster tests, and, because it accounts for imprecise location data, it is suited to the 'real-world' nature of disease cluster investigations.
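
The cross-product form is easy to state in code. The sketch below (with invented data) computes Gamma = sum_ij a_ij * b_ij for a crisp 0/1 adjacency matrix and for a probability-weighted one; replacing crisp proximity with probabilities of adjacency is, in spirit, the paper's route to incorporating locational uncertainty.

import numpy as np

def cross_product_statistic(proximity, health):
    a = np.asarray(proximity, float)   # spatial proximity / adjacency matrix
    b = np.asarray(health, float)      # health relationship matrix
    return (a * b).sum()

# crisp adjacency vs. probability-weighted adjacency for three cases
crisp = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
fuzzy = [[0, .8, .1], [.8, 0, .6], [.1, .6, 0]]
cases = [[0, 1, 1], [1, 0, 0], [1, 0, 0]]   # 1 = case-case pair
print(cross_product_statistic(crisp, cases),
      cross_product_statistic(fuzzy, cases))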

Journal ArticleDOI
TL;DR: An algorithm to find a maximum clique in a proper circular arc graph represented by a sorted simple family of circular arcs or by an equivalent representation given by its adjacency lists is presented.
Abstract: We present an $O(n)$ algorithm to find a maximum clique in a proper circular arc graph. We assume that the input graph is represented by a sorted simple family of circular arcs or by an equivalent representation. In Deng, Hell, and Huang [SIAM J. Comput., 25 (1996), pp. 390--403], we gave an $O(m + n)$ algorithm to find such a representation for a proper circular arc graph given by its adjacency lists. As an application we also give an $O(n)$ algorithm for $q$-coloring proper circular arc graphs for a fixed $q$. (Such an algorithm was first given by Teng and Tucker.) Finally we indicate how our algorithm can be modified to find a maximum weight clique in a weighted graph, also in time $O(n)$.

Book ChapterDOI
01 Jan 1996
TL;DR: A morphological operator is called connected if it does not split components of the level sets but acts on the level of flat zones; a simple description of such operators is obtained by representing an image as a region adjacency graph, a graph whose vertices represent the components of the level sets and whose edges describe adjacencies.
Abstract: A morphological operator is called connected if it does not split components of the level sets, but acts on the level of flat zones. A simple description of such operators can be obtained by representing an image as a region adjacency graph, a graph whose vertices represent the components of the level sets and whose edges describe adjacency. In this graph connected operators can only change grey-values of the vertices. To obtain the adjacency graph of the transformed image, one has to merge adjacent vertices which carry the same grey-value.
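
A minimal sketch of this graph-level view, assuming flat zones are given as vertices with grey-values and adjacencies as edges (the toy operator and the merge bookkeeping are illustrative, not the paper's formalism):

def apply_connected_operator(values, edges, op):
    values = {v: op(g) for v, g in values.items()}   # change flat-zone values
    parent = {v: v for v in values}                  # union-find for merging
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for u, v in edges:              # merge adjacent zones with equal values
        if values[u] == values[v]:
            parent[find(u)] = find(v)
    merged = {find(v): g for v, g in values.items()}
    new_edges = {frozenset((find(u), find(v)))
                 for u, v in edges if find(u) != find(v)}
    return merged, new_edges

zones = {0: 10, 1: 40, 2: 80}       # grey-values of flat zones
adj = [(0, 1), (1, 2)]
print(apply_connected_operator(zones, adj, op=lambda g: max(g, 40)))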

Patent
03 Jul 1996
TL;DR: In this article, the image is represented in a compressed form by approximating regions with slowly changing brightness or color, by a background, formed by low degree polynomials.
Abstract: A process for producing a compressed representation of 2D and 3D images. The image is represented in a compressed form by approximating regions with slowly changing brightness or color, by a background, formed by low degree polynomials. Fast brightness or color changes are represented by special models, including local models and curvilinear structures. Visual adjacency relations between the models are identified, the background partition represents these adjacency relations, curvilinear structures are approximated by spline functions. The three-dimensional image is represented by producing one or several compressed images of the scene from different positions, in which a depth value is associated to each model. A view of the scene from any prescribed point is produced by a geometric processing of these compressed data.

Journal ArticleDOI
TL;DR: An algorithm for the point-location problem in 2D finite element meshes as a special case of plane straight-line graphs (PSLG) is presented, and it is shown in numerical examples that the algorithm performs equally well for meshes with extreme local refinement.
Abstract: An algorithm for the point-location problem in 2D finite element meshes as a special case of plane straight-line graphs (PSLG) is presented. The element containing a given point P is determined by combining a quadtree data structure to generate a quaternary search tree and a local search wave using adjacency information. The preprocessing construction of the search tree has a complexity of O(n·log(n)) and requires only pointer swap operations. The query time to locate a start element for local search is O(log(n)) and the final point search by ‘point-in-polygon’ tests is independent of the total number of elements in the mesh and thus determined in constant time. Although the theoretical efficiency estimates are only given for quasi-uniform meshes, it is shown in numerical examples that the algorithm performs equally well for meshes with extreme local refinement.

03 Oct 1996
TL;DR: A new iterative dynamic load balancing algorithm which uses Leiss and Reddy's and Devine and Flaherty's approach of requesting work from the most heavily loaded neighbor, but which proposes to view the load requests as a forest of trees.
Abstract: Parallel adaptive finite element methods on distributed memory computers require capabilities for mesh refinement and coarsening as well as a subsequent dynamic load balancing of processors. For their implementation, these procedures need a greater repertoire of entity adjacency and update operators on the distributed mesh and a migration procedure to facilitate arbitrary mobility of elements among processors. PMDB, the Parallel Mesh Database, is a software tool developed using message passing libraries to provide one common run-time library for finite element analysis, mesh generation, refinement, coarsening and load balancing procedures on complex geometry unstructured meshes. We develop data structures which augment the hierarchical mesh entity relationships by attaching inter-processor links to shared entities on partition boundaries. The inter-processor links are managed by doubly linked structures to provide various query routines such as processor adjacency, lists of entities on partition boundaries, and update operators such as insertion and deletion of these entities. For entity migration, we use an owner-updates rule which lets the processor owning a shared entity on a partition boundary collect and inform the updated links to the processors holding these entities. The updating process is restricted only to migrated boundary entities. In addition, global entity identification generation can also be replaced with the notion of ownership generation for new boundary entities. These approaches enable us to develop a scalable procedure for mesh migration. Finally, we develop a new iterative dynamic load balancing algorithm which uses Leiss and Reddy's (43) and Devine and Flaherty's (22) approach of requesting work from the most heavily loaded neighbor, but which proposes to view the load requests as a forest of trees. We present tree edge coloring and load balancing algorithms on trees linearized by the depth-first search links. Each of the trees is then balanced by computing load migrations using logarithmic scan operations. We establish the convergence of the algorithm. We also present a comparison of the performance of load balancing and parallel inertia recursive bisection partitioning on various meshes and on an adaptive computational fluid dynamics application.

Proceedings ArticleDOI
25 Aug 1996
TL;DR: A system for form dropout when the filled-in characters or symbols are either touching or crossing the form frames and the form model is unknown; a form structure-based template is automatically generated which includes the form model, skew angle and preprinted data areas.
Abstract: This paper describes a system for form dropout when the filled-in characters or symbols are either touching or crossing the form frames and the form model is unknown. Since some of the character strokes are either touching or crossing the form frames, we need to address the following three issues: (i) localization of form frames; (ii) separation between characters and form frames; and (iii) reconstruction of broken strokes introduced during separation. The form frame is automatically located by finding long straight lines based on a data structure, called the block adjacency graph. Form frame removal and character reconstruction are implemented in this graph. When the same process is applied to a blank form, followed by the procedure of connected component extraction and clustering, a form structure-based template is automatically generated which includes the form model, skew angle and preprinted data areas. Given the form template, our system can extract both handwritten and machine-typed filled-in data. Experimental results on three different types of forms demonstrate the performance of our system.

Journal ArticleDOI
TL;DR: A system which computes an integrated description of an object from multiple range images in the form of B-rep (boundary representation), which has not been achieved by the computer vision community is presented.
Abstract: We present a system which computes an integrated description of an object from multiple range images. The object description is in the form of B-rep (boundary representation), which has not been achieved by the computer vision community. To do so, we emphasize the inherent difficulties and ambiguities in low- to mid-level vision, and present novel techniques for resolving them. In this system, each view of the object is represented as an attributed graph, where nodes correspond to the surfaces (vertices) and links represent the relationships between surfaces. The main issue in surface extraction is contour closure, which is formulated as a dynamic network. The underlying principle for this network is weak smoothness and geometric cohesion, and is modeled as the interaction between long and short term variables. Long term variables represent the initial boundary grouping computed from the low level surface features, and short term variables represent the competing hypotheses that cooperate with the long term variables. The matching problem involves matching visible surfaces and vertices, and provides the necessary basis for volumetric reconstruction from multiple views. The matching strategy is a two-step process, where each step uses the Hopfield network. At each step, we specify a set of local, adjacency and global constraints, and define an appropriate energy function to be minimized. At the first level of this hierarchy, surface patches are matched and the rigidity transformation is computed. At the second level, the mapping is refined by matching the corresponding vertices, and the transformation is verified. The multiple-view reconstruction consists of two steps. First, we build a composite graph that contains the bounding surfaces and their corresponding attributes, and then intersect these surfaces so that the edges and vertices corresponding to the B-rep description are identified. We present results on objects with planar, as well as quadratically-curved, surfaces.

Journal ArticleDOI
TL;DR: This work presents here the current state of a volumetric feature recognition method that considers interacting features in prismoidal parts and operates in two stages: recognition of regions of interest and interpretation.
Abstract: Since it is a complex task to formalize the feature recognition problem explicitly, a large variety of systems has been developed. One of the problems these systems have to overcome is the recognition and interpretation of interacting features. A fair success has been achieved in surface based methods to recognize certain classes of interacting features. However the problem remains for general cases of interacting features. Recently much effort has been focused on the volumetric approach. We present here the current state of a volumetric feature recognition method. The system considers interacting features in prismoidal parts and it operates in two stages: (1) recognition of regions of interest: a spatial decomposition of the space bounded by a predefined circumscribing volume is performed. A ‘cell evaluated and directed adjacency graph’ is then established. This graph is traversed to identify cavity volumes. (2) interpretation: cavity volumes made up of more than one cell can be produced by different mac...

03 Oct 1996
TL;DR: This work defines relational adjacency grammars, which make it possible to develop efficient off-line parsing algorithms for a large family of visual languages for depicting diagrams, and develops novel incremental parsing algorithms which can handle partial sentences and erroneous or unrecognized items.
Abstract: Integration of pen input into interactive environments holds the promise of gains in "user-friendliness" and more natural interaction styles than present direct manipulation tools can provide, since handwritten gestures and pen strokes offer more expressive power than the keyboard or mouse. However, pen-based applications have yet to enjoy widespread acceptance, because pen data are difficult to analyze due to noise, user variations and imprecise meaning. Calligraphic interfaces is a term we have coined to designate a class of pen-based user interfaces in which hand-sketching or drawing serves as the main organizational metaphor, as opposed to the point-and-click desktop metaphor commonly employed in current graphical user interfaces (even those which support some pen-based interaction). Our study of calligraphic interfaces extends previous research in formal visual languages, syntactic pattern recognition, and user interface design. We define relational adjacency grammars, which make it possible to develop efficient off-line parsing algorithms for a large family of visual languages for depicting diagrams. Our adjacency-driven parsing approach improves on previous research, by using adjacency-constraining relations to prune the search space during intermediate analysis steps, thereby yielding gains in asymptotic complexity. Fuzzy relational adjacency grammars are then considered as a means to add the expressive power of fuzzy logic, so as to deal with imprecise and uncertain data in the calligraphic interface. We develop novel incremental parsing algorithms which can handle partial sentences and erroneous or unrecognized items. Fuzzy template matching is the core technique of our stroke recognizer, which classifies atomic gestures and symbols. Finally, this recognizer is coupled to a parser-generator, to provide an environment for developing calligraphic interfaces.

Journal ArticleDOI
TL;DR: Heuristic procedures are proposed to remove noise from character patterns by refining the procedure that condenses black LAGs into different types of nodes, where the type of a node depends on the local configuration of the pattern at that node.

Proceedings ArticleDOI
16 Sep 1996
TL;DR: The aim of the proposed technique is to define simple-shaped objects in a scene using motion information and a simple test, starting from a block-based segmentation.
Abstract: This paper describes a region merging method for joint motion estimation and segmentation of digital video sequences. The region merging criterion is based on the measure of the matching error for a region when applying a previously estimated motion to it. A region adjacency graph is used for data representation, which allows a scan independent processing and gives a high-level view. The method is simple-shape object-oriented and starts from a block-based segmentation. The aim of the proposed technique is to define simple shaped objects in a scene using motion information and a simple test.

Proceedings ArticleDOI
25 Aug 1996
TL;DR: A graph theoretic approach for extracting the skeleton of a binary line image with emphasis on the preservation of the topology of both the foreground and the background, using the underlying graph structure as an optimisation problem.
Abstract: We propose a graph theoretic approach for extracting the skeleton of a binary line image. Unlike other thinning methods, emphasis is placed on the preservation of the topology of both the foreground and the background. Such conditions guarantee a relevant resulting structure that can be used as input for pattern recognition. Using the underlying graph structure, we can readily formulate this problem as an optimisation problem. Local information such as centrality is given by a distance transform operation. Global information such as location of a branch end is given via a minimum weighted spanning tree which spans all foreground pixels. The resulting structure is then characterised as a union of central paths between end points with their adjacency inter-relationships. Other image characteristics (e.g. width and length of the branches) are also provided. Computational results applied on real images illustrate the noise insensitivity of this method.

Proceedings ArticleDOI
27 Feb 1996
TL;DR: In this article, a hybrid image segmentation algorithm is proposed which combines edge-preserving statistical noise reduction, gradient approximation, detection of watersheds on gradient magnitude image, and hierarchical region merging (HRM) in order to get semantically meaningful segmentations.
Abstract: A hybrid image segmentation algorithm is proposed which combines edge- and region-based techniques through the morphological algorithm of watersheds. The algorithm consists of the following steps: (1) edge-preserving statistical noise reduction, (2) gradient approximation, (3) detection of watersheds on gradient magnitude image, and (4) hierarchical region merging (HRM) in order to get semantically meaningful segmentations. The HRM process uses the region adjacency graph (RAG) representation of the image regions. At each step, the most similar pair of regions is determined (minimum cost RAG edge), the regions are merged and the RAG is updated. Traditionally, the above is implemented by storing all the RAG edges in a priority queue (heap). We propose a significantly faster algorithm which maintains an additional graph, the most similar neighbor graph, through which the priority queue size and processing time are drastically reduced. The final segmentation is an image partition which, through the RAG, provides information that can be used by knowledge-based high level processes, i.e. recognition. In addition, this region based representation provides one-pixel wide, closed, and accurately localized contours/surfaces. Due to the small number of free parameters, the algorithm can be quite effectively used in interactive image processing. Experimental results obtained with 2D MR images are presented.
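
A minimal sketch of the HRM loop described above, using a single priority queue with lazily skipped stale entries; the most-similar-neighbor-graph speedup and the actual region-similarity cost are not reproduced, and the mean-merge rule is a toy stand-in.

import heapq

def hrm(means, edges, stop_cost):
    """means: {region: mean grey value}; edges: iterable of (u, v) pairs."""
    alive = dict(means)
    heap = [(abs(alive[u] - alive[v]), u, v) for u, v in edges]
    heapq.heapify(heap)
    neighbors = {r: set() for r in alive}
    for u, v in edges:
        neighbors[u].add(v)
        neighbors[v].add(u)
    while heap:
        cost, u, v = heapq.heappop(heap)
        if u not in alive or v not in alive or v not in neighbors[u]:
            continue                       # stale entry: skip lazily
        if cost > stop_cost:
            break
        alive[u] = (alive[u] + alive[v]) / 2   # toy merge rule
        for w in neighbors.pop(v):             # rewire v's neighbors to u
            neighbors[w].discard(v)
            if w != u:
                neighbors[w].add(u)
                neighbors[u].add(w)
                heapq.heappush(heap, (abs(alive[u] - alive[w]), u, w))
        neighbors[u].discard(u)
        del alive[v]
    return alive

regions = {1: 10.0, 2: 12.0, 3: 90.0}
print(hrm(regions, [(1, 2), (2, 3)], stop_cost=20))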