
Showing papers on "Adjacency list published in 2003"


Journal ArticleDOI
TL;DR: For almost all graphs the answer to the question in the title is still unknown, as mentioned in this paper, which surveys the cases for which the answer is known, including results based on the Laplacian matrix.

605 citations


Journal ArticleDOI
TL;DR: This paper explores how to embed symbolic relational graphs with unweighted edges in a pattern-space using a graph-spectral approach and illustrates the utility of the embedding methods on neighbourhood graphs representing the arrangement of corner features in 2D images of 3D polyhedral objects.

287 citations


Journal ArticleDOI
TL;DR: A novel 2-D graphical representation of DNA sequences preserving information on sequential adjacency of bases and allowing numerical characterization is considered and illustrated on the coding sequence of the first exon of the human β-globin gene.

274 citations
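A representation of this kind can be sketched in a few lines. In the sketch below, the assignment of bases to unit steps is an assumed convention chosen only for illustration (not necessarily the mapping used in the paper); the essential point is that the cumulative walk preserves the sequential adjacency of the bases and yields coordinates that can be characterized numerically.

```python
# Illustrative sketch of a 2-D graphical representation of a DNA sequence.
# The base-to-direction mapping below is an assumed convention for
# demonstration purposes, not necessarily the one used in the paper.
STEPS = {"A": (1, 0), "G": (-1, 0), "C": (0, 1), "T": (0, -1)}

def dna_walk(seq):
    """Return the cumulative 2-D walk (one point per base, plus the origin)."""
    x, y, points = 0, 0, [(0, 0)]
    for base in seq.upper():
        dx, dy = STEPS[base]
        x, y = x + dx, y + dy
        points.append((x, y))
    return points

if __name__ == "__main__":
    # First few bases of an arbitrary example sequence.
    walk = dna_walk("ATGGTGCACCTGACTCCTGAG")
    print(walk[:5])
```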


Journal ArticleDOI
TL;DR: An improved genetic algorithm is proposed to derive solutions for multi-floor facility layouts that are to have inner structure walls and passages; it is applied to the multi-deck compartment layout problem of a ship, and the computational result is compared with the multi-deck compartment layout of the actual ship.

156 citations


Journal ArticleDOI
TL;DR: The survey concentrates mainly on combinatorial and algorithmic techniques, such as adjacency and distance labeling schemes and interval schemes for routing, and covers complexity results on various applications, focusing on compact localized schemes for message routing in communication networks.
Abstract: This survey concerns the role of data structures for compactly storing and representing various types of information in a localized and distributed fashion. Traditional approaches to data representation are based on global data structures, which require access to the entire structure even if the sought information involves only a small and local set of entities. In contrast, localized data representation schemes are based on breaking the information into small local pieces, or labels, selected in a way that allows one to infer information regarding a small set of entities directly from their labels, without using any additional (global) information. The survey concentrates mainly on combinatorial and algorithmic techniques, such as adjacency and distance labeling schemes and interval schemes for routing, and covers complexity results on various applications, focusing on compact localized schemes for message routing in communication networks.

154 citations
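As a minimal illustration of the adjacency labeling idea surveyed above (not one of the survey's own constructions), the sketch below labels each vertex of a rooted tree with its own identifier and its parent's identifier; adjacency can then be decided from the two labels alone, without consulting any global structure.

```python
# Minimal adjacency labeling scheme for a rooted tree (illustrative only).
# Each vertex's label is (own id, parent id); adjacency in a tree holds
# exactly when one endpoint is the parent of the other, so the test needs
# only the two labels and no global data structure.

def label_tree(parent):
    """parent: dict mapping vertex -> its parent (root maps to None)."""
    return {v: (v, p) for v, p in parent.items()}

def adjacent(label_u, label_v):
    (u, pu), (v, pv) = label_u, label_v
    return pu == v or pv == u

if __name__ == "__main__":
    parent = {0: None, 1: 0, 2: 0, 3: 1, 4: 1}
    labels = label_tree(parent)
    print(adjacent(labels[3], labels[1]))  # True: 1 is the parent of 3
    print(adjacent(labels[3], labels[2]))  # False: not adjacent in the tree
```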


Journal ArticleDOI
TL;DR: In this article, a multi-period harvest-scheduling ARM model with spatial constraints was presented, and a tabu search procedure with 2-Opt moves (exchanging at most 2 units) was developed.
Abstract: Because of environmental concerns, modern forest harvesting plans are being modified to address aesthetics and wildlife protection issues. In the literature, the most referenced are adjacency constraints with exclusion or greenup years. Also of interest are the concepts of old growth patch size and total old growth area restrictions. Typically, harvest blocks have been defined a priori. Recently, some have proposed a more complex approach in which basic cells are defined, and the decision-making process includes forming harvest blocks from these cells. This has been called ARM (Area Restriction Model). Until now, no successful exact method has been reported for large-scale ARM problems, leaving heuristics as a viable option. In this work we present a multiperiod harvest-scheduling ARM model with these spatial constraints. A tabu search procedure with 2-Opt moves (exchanging at most 2 units) was developed. For small size instances, the procedure found the optimal solution in most cases, and for a real-size problem an 8% optimality gap is provided.

68 citations
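A toy version of such a neighbourhood search can be sketched as follows. The model below is a deliberately simplified stand-in for the paper's ARM formulation: units are assigned harvest periods (or left unharvested), adjacent units may not be harvested in the same period, the objective is total harvested volume, and a "2-Opt" move changes the assignment of at most two units while a short tabu list blocks recently modified units. Unit names, volumes, and parameters are invented for illustration.

```python
import itertools, random

# Toy harvest-scheduling sketch under adjacency constraints (not the paper's
# exact ARM formulation): each unit gets a harvest period or stays unharvested
# (period 0), adjacent units may not share a harvest period, and the objective
# is total harvested volume. A "2-Opt" move reassigns at most two units.

def feasible(assign, adjacency):
    return all(assign[u] == 0 or assign[u] != assign[v] for u, v in adjacency)

def objective(assign, volume):
    return sum(volume[u] for u, p in assign.items() if p > 0)

def tabu_search(volume, adjacency, periods=2, iters=200, tabu_len=2, seed=0):
    rng = random.Random(seed)
    units = list(volume)
    assign = {u: 0 for u in units}              # start with nothing harvested
    best, best_val = dict(assign), 0
    tabu = []                                   # recently modified units
    for _ in range(iters):
        candidates = []
        for pair in itertools.combinations(units, 2):    # 2-Opt: pick two units...
            if any(u in tabu for u in pair):
                continue
            trial = dict(assign)
            for u in pair:
                trial[u] = rng.randint(0, periods)       # ...and reassign their periods
            if feasible(trial, adjacency):
                candidates.append((objective(trial, volume), pair, trial))
        if not candidates:
            break
        val, pair, assign = max(candidates, key=lambda c: c[0])
        tabu = (tabu + list(pair))[-tabu_len:]
        if val > best_val:
            best, best_val = dict(assign), val
    return best, best_val

if __name__ == "__main__":
    volume = {"A": 10, "B": 7, "C": 5, "D": 8}
    adjacency = [("A", "B"), ("B", "C"), ("C", "D")]   # units sharing a border
    print(tabu_search(volume, adjacency))
```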


Journal ArticleDOI
TL;DR: It is shown that the cocoons are organized into a hierarchy that is a sub-hierarchy of the one produced by the standard complete-linkage clustering algorithm, offering a new point of view on what the complete-linkage algorithm achieves when applied to image data.

64 citations


Proceedings Article
01 Jul 2003
TL;DR: A topological approach to fiducial recognition for real-time applications based on region adjacency trees that makes the system tolerant to severe distortion, and allows encoding of extra information.
Abstract: We report a topological approach to fiducial recognition for real-time applications. Independence from geometry makes the system tolerant to severe distortion, and allows encoding of extra information. The method is based on region adjacency trees. After describing the mathematical foundations, we present a set of simulations to evaluate the algorithm and optimise the fiducial design.

63 citations
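The reason a purely topological fiducial tolerates severe distortion is that smooth deformations do not change the region adjacency (nesting) tree. The sketch below illustrates only that core idea, not the paper's algorithm: two fiducials match when their rooted trees of nested regions are isomorphic, which a canonical string for unordered trees decides directly.

```python
# Hedged sketch of topological fiducial matching with region adjacency trees
# (the core idea, not the paper's exact algorithm). A fiducial is reduced to
# a rooted tree of nested black/white regions; geometric distortion does not
# change the tree, so two fiducials match if their trees are isomorphic.

def canonical(tree):
    """Canonical string of an unordered rooted tree given as nested lists.

    Children are canonicalized recursively and sorted, so isomorphic trees
    always produce the same string.
    """
    return "(" + "".join(sorted(canonical(child) for child in tree)) + ")"

if __name__ == "__main__":
    # Two scans of the same fiducial: same nesting structure, different order.
    fiducial_a = [[[], []], []]
    fiducial_b = [[], [[], []]]
    other = [[[]], [[]]]
    print(canonical(fiducial_a) == canonical(fiducial_b))  # True
    print(canonical(fiducial_a) == canonical(other))       # False
```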


Proceedings ArticleDOI
24 Nov 2003
TL;DR: The intrinsic complexity of graph matching is greatly reduced by coupling it with the segmentation, and an attractive feature of this approach is its ability to keep track of occluded objects.
Abstract: Attribute graphs offer a compact representation of 2D or 3D images, as each node represents a region with its attributes and the edges convey the neighborhood relations between adjacent regions. Such graphs may be used in the analysis of video sequences and the tracking of objects of interest. Each image of a sequence is segmented and represented as a region adjacency graph. Object tracking becomes a particular graph-matching problem, in which the nodes representing the same object are to be matched. The intrinsic complexity of graph matching is greatly reduced by coupling it with the segmentation. An attractive feature of our approach is its ability to keep track of occluded objects. Results on real sequences show the potential of this approach.

57 citations
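The region adjacency graph underlying this kind of tracking can be built directly from a labeled segmentation. The sketch below shows only that construction (4-connectivity, with region size as a stand-in for richer node attributes); it makes no claim about the paper's attributes or its graph-matching procedure.

```python
# Minimal construction of a region adjacency graph (RAG) from a labeled
# segmentation, using 4-connectivity. Nodes are region labels; an edge means
# two regions share a pixel border. Region size stands in for the richer
# attributes (colour, shape, ...) used in practice.
from collections import defaultdict

def region_adjacency_graph(labels):
    nodes = defaultdict(int)          # region label -> pixel count
    edges = set()                     # unordered pairs of adjacent regions
    h, w = len(labels), len(labels[0])
    for y in range(h):
        for x in range(w):
            a = labels[y][x]
            nodes[a] += 1
            for dy, dx in ((0, 1), (1, 0)):          # right and down neighbours
                ny, nx = y + dy, x + dx
                if ny < h and nx < w and labels[ny][nx] != a:
                    edges.add(frozenset((a, labels[ny][nx])))
    return dict(nodes), edges

if __name__ == "__main__":
    seg = [[1, 1, 2],
           [1, 3, 2],
           [3, 3, 2]]
    sizes, edges = region_adjacency_graph(seg)
    print(sizes)                                      # {1: 3, 2: 3, 3: 3}
    print(sorted(tuple(sorted(e)) for e in edges))    # [(1, 2), (1, 3), (2, 3)]
```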


Journal Article
TL;DR: Benefits of the indirect search method are: (1) objective function values can be higher than those computed through other heuristic algorithms, and (2) the algorithm produces good results without time-consuming experimentation with parameters of the search algorithm.
Abstract: An indirect search heuristic is described for solving harvest-scheduling problems under adjacency constraints. This method works in combination with a greedy algorithm by diversifying the search through random changes in prioritized harvest queues. The indirect search is tested on a series of tactical problems and compared with published results for tabu search, simulated annealing, integer programming and linear programming. Results for large strategic problems are compared to a simulated annealing search algorithm. Objective function values are comparable to tabu search and simulated annealing, and solution times range from 38 seconds to 40 minutes, depending on the problem size and the number of iterations. Benefits of the indirect search method are: (1) objective function values can be higher than those computed through other heuristic algorithms, and (2) the algorithm produces good results without time-consuming experimentation with parameters of the search algorithm. The method also has potential for solving more complicated, multiple objective problems.

54 citations


Journal ArticleDOI
TL;DR: A methodology to discover cluster structure in home videos, which uses video shots as the unit of organization, and is based on two concepts: the development of statistical models of visual similarity, duration, and temporal adjacency of consumer video segments and the reformulation of hierarchical clustering as a sequential binary Bayesian classification process.
Abstract: Accessing, organizing, and manipulating home videos present technical challenges due to their unrestricted content and lack of storyline. We present a methodology to discover cluster structure in home videos, which uses video shots as the unit of organization, and is based on two concepts: (1) the development of statistical models of visual similarity, duration, and temporal adjacency of consumer video segments and (2) the reformulation of hierarchical clustering as a sequential binary Bayesian classification process. A Bayesian formulation allows for the incorporation of prior knowledge of the structure of home video and offers the advantages of a principled methodology. Gaussian mixture models are used to represent the class-conditional distributions of intra- and inter-segment visual and temporal features. The models are then used in the probabilistic clustering algorithm, where the merging order is a variation of highest confidence first, and the merging criterion is maximum a posteriori. The algorithm does not need any ad-hoc parameter determination. We present extensive results on a 10-h home-video database with ground truth which thoroughly validate the performance of our methodology with respect to cluster detection, individual shot-cluster labeling, and the effect of prior selection.

Patent
22 Sep 2003
TL;DR: In this article, a method and apparatus for establishing adjacencies on a network are described, in which a node sends hello packets on the network, receives hello packets from other nodes, and establishes adjacencies on the basis of the received hello packets.
Abstract: A method and apparatus are disclosed for establishing adjacencies on a network, the method comprising, at a first node of the network, sending hello packets on the network and receiving hello packets from other nodes on the network; on the basis of the received hello packets, the node then sends a link-state packet without adjacency information and without an overload bit set. The node then interrogates a link-state adjacency table and, when only one adjacency is listed in the link-state table, sends a further link-state packet with the adjacency information and the overload bit set. On convergence of a forward cache, the node sends a further link-state packet with adjacency information and without the overload bit set.
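For readers unfamiliar with hello-based adjacency discovery, the sketch below shows the generic mechanism in the spirit of link-state routing protocols: a node records an adjacency for every neighbour from which it has recently received a hello, and lets it expire after a hold time. This is an illustrative simplification, not the patented overload-bit sequencing described above; all names and timings are invented.

```python
# Generic illustration of hello-based adjacency discovery. This is NOT the
# patented mechanism (the overload-bit sequencing is omitted); it only shows
# how received hello packets translate into an adjacency table.
import time

class Node:
    def __init__(self, name, hold_time=3.0):
        self.name = name
        self.hold_time = hold_time        # seconds before an adjacency expires
        self.adjacencies = {}             # neighbour name -> last hello time

    def receive_hello(self, neighbour, now=None):
        """Record (or refresh) an adjacency on receipt of a hello packet."""
        self.adjacencies[neighbour] = now if now is not None else time.time()

    def live_adjacencies(self, now=None):
        """Adjacencies whose hellos have not timed out."""
        now = now if now is not None else time.time()
        return [n for n, t in self.adjacencies.items()
                if now - t <= self.hold_time]

if __name__ == "__main__":
    a = Node("A")
    a.receive_hello("B", now=0.0)
    a.receive_hello("C", now=1.0)
    print(a.live_adjacencies(now=2.0))    # ['B', 'C']
    print(a.live_adjacencies(now=3.5))    # ['C'] -- B's hello has timed out
```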

Proceedings ArticleDOI
15 Jan 2003
TL;DR: It is shown that a language based on ordered types can use the property of adjacency to give an exact account of the layout of data in memory.
Abstract: Ordered type theory is an extension of linear type theory in which variables in the context may be neither dropped nor re-ordered. This restriction gives rise to a natural notion of adjacency. We show that a language based on ordered types can use this property to give an exact account of the layout of data in memory. The fuse constructor from ordered logic describes adjacency of values in memory, and the mobility modal describes pointers into the heap. We choose a particular allocation model based on a common implementation scheme for copying garbage collection and show how this permits us to separate out the allocation and initialization of memory locations in such a way as to account for optimizations such as the coalescing of multiple calls to the allocator.

Proceedings Article
09 Aug 2003
TL;DR: A sparse representation of the closed list is proposed in which only a fraction of already expanded nodes need to be stored to perform the two functions of the Closed List - preventing duplicate search effort and allowing solution extraction.
Abstract: We describe a framework for reducing the space complexity of graph search algorithms such as A* that use Open and Closed lists to keep track of the frontier and interior nodes of the search space. We propose a sparse representation of the Closed list in which only a fraction of already expanded nodes need to be stored to perform the two functions of the Closed List - preventing duplicate search effort and allowing solution extraction. Our proposal is related to earlier work on search algorithms that do not use a Closed list at all [Korf and Zhang, 2000]. However, the approach we describe has several advantages that make it effective for a wider variety of problems.
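The sparse representation proposed above is best understood against the baseline it compresses: standard A* with an Open list (frontier) and a Closed list (interior), in which the Closed list serves exactly the two roles named in the TL;DR, duplicate detection and solution extraction via parent pointers. The sketch below shows that baseline, not the paper's sparse variant.

```python
# Baseline A* with explicit Open and Closed lists, the structure the paper's
# sparse Closed-list representation sets out to shrink. The Closed list here
# plays its two usual roles: duplicate detection and solution extraction
# (via stored parent pointers).
import heapq, itertools

def astar(start, goal, neighbours, heuristic):
    counter = itertools.count()                         # tie-breaker for the heap
    open_list = [(heuristic(start), 0, next(counter), start, None)]
    closed = {}                                         # expanded node -> parent
    while open_list:
        f, g, _, node, parent = heapq.heappop(open_list)
        if node in closed:
            continue                                    # duplicate detection
        closed[node] = parent
        if node == goal:
            path = [node]
            while closed[path[-1]] is not None:         # solution extraction
                path.append(closed[path[-1]])
            return list(reversed(path))
        for nxt, cost in neighbours(node):
            if nxt not in closed:
                heapq.heappush(open_list,
                               (g + cost + heuristic(nxt), g + cost,
                                next(counter), nxt, node))
    return None

if __name__ == "__main__":
    # 1-D toy problem: move +1 or -1 along the integers from 0 to 5.
    path = astar(0, 5,
                 neighbours=lambda n: [(n + 1, 1), (n - 1, 1)],
                 heuristic=lambda n: abs(5 - n))
    print(path)   # [0, 1, 2, 3, 4, 5]
```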

Journal ArticleDOI
TL;DR: This note shows an application of MA ordering to the maximum flow problem with integral capacities, yielding a new polynomial-time algorithm.

Journal ArticleDOI
TL;DR: It is shown that representations of factor schemes over normal closed subsets of G can be viewed as representations of G itself, and necessary and sufficient conditions for an irreducible character of G to be a character of a factor scheme of G are given.
Abstract: In the present paper we investigate the relationship between the complex representations of an association scheme G and the complex representations of certain factor schemes of G. Our first result is that, similar to group representation theory, representations of factor schemes over normal closed subsets of G can be viewed as representations of G itself. We then give necessary and sufficient conditions for an irreducible character of G to be a character of a factor scheme of G. These characterizations involve the central primitive idempotents of the adjacency algebra of G and they are obtained with the help of the Frobenius reciprocity law, which we prove for complex adjacency algebras.

Book ChapterDOI
TL;DR: In this chapter, external-memory (EM) graph algorithms for a few representative problems are reviewed.
Abstract: Solving real-world optimization problems frequently boils down to processing graphs. The graphs themselves are used to represent and structure relationships of the problem’s components. In this chapter we review external-memory (EM) graph algorithms for a few representative problems:

Journal ArticleDOI
TL;DR: This paper shows that kernels producing a slow reduction rate can be combined to speed up reduction, and proposes one sequential and one parallel algorithm to compute the contracted combinatorial maps.

Journal Article
TL;DR: Several novel topological matrices, like distance-path, Cluj, layer-matrices, walk matrix, walk (triple matrix) operator, characteristic and "property" polynomials, and the corresponding topological descriptors may be calculated by the TOPOCLUJ software package.
Abstract: Several topological indices (numerical descriptors encoding topological attributes of a molecular graph) have been used both in graph discriminating analysis and correlating studies for modeling a variety of physico-chemical properties and biological activities. However, only a few software packages, viz., CODESSA, MOLCONN Z, DRAGON, TOSS MODE and POLLY, are available for calculating topological indices. These incorporate correlating analysis statistics, as well. The TOPOCLUJ software package is designed to calculate topological descriptors from topological matrices and/or polynomials. Several weighting schemes including group electronegativity, group mass and partial charges are proposed. Topological indices derived from the matrices like adjacency, connectivity, distance, detour, distance-path, detour-path, Cluj, their reciprocal matrices, walk-matrices, walk-operated matrices, layer- and shell-matrices have been successfully used in correlating studies and graph discriminating analysis during the last decade. Several novel topological matrices, like distance-path, Cluj (with its variants), layer-matrices, walk matrix, walk (triple matrix) operator, characteristic and "property" polynomials, and the corresponding topological descriptors may be calculated by the TOPOCLUJ software package.
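As an example of what "topological descriptors calculated from topological matrices" means in practice, the sketch below computes the classic Wiener index, half the sum of the entries of the graph distance matrix, for hydrogen-suppressed n-butane. The Wiener index is chosen only for brevity; it is not one of the Cluj or walk matrices specific to TOPOCLUJ.

```python
# Illustration of a topological descriptor computed from a topological matrix:
# the classic Wiener index, half the sum of all entries of the graph distance
# matrix. This is a textbook index chosen for brevity, not one of TOPOCLUJ's
# specific matrices.

def distance_matrix(adjacency):
    """All-pairs shortest-path distances (Floyd-Warshall) from an adjacency dict."""
    nodes = sorted(adjacency)
    inf = float("inf")
    dist = {u: {v: (0 if u == v else (1 if v in adjacency[u] else inf))
                for v in nodes} for u in nodes}
    for k in nodes:
        for i in nodes:
            for j in nodes:
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist

def wiener_index(adjacency):
    dist = distance_matrix(adjacency)
    return sum(dist[u][v] for u in dist for v in dist[u]) // 2

if __name__ == "__main__":
    # n-butane as a hydrogen-suppressed path graph C1-C2-C3-C4.
    butane = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
    print(wiener_index(butane))   # 10
```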

Journal ArticleDOI
TL;DR: A new method based on parsing the concavity code is used to partition a digital contour into concave and convex sections, achieving two-state partitioning by implementing simple notions of cumulative curvature, vertex adjacency, shallow curvature absorption, and residue sharing.

Journal ArticleDOI
TL;DR: The proposed algorithm is so fast that output of the structures is by far the most time-consuming part of the process; it contributes to enumeration in chemistry, a topic studied for over a century, and is useful in library making, QSAR/QSPR, and synthesis studies.
Abstract: After a short historic review, we briefly describe a new algorithm for constructive enumeration of polyhex and fusene hydrocarbons. In this process our algorithm also enumerates isomers and symmetry groups of molecules (which implies enumeration of enantiomers). Contrary to previous methods often based on the boundary code or its variants (which records orientation of edges along the boundary) or on the DAST code, which uses a rigid dualist graph (whose vertices are associated with faces and edges with adjacency between them), the proposed algorithm proceeds in two phases. First inner dual graphs are enumerated; then molecules obtained from each of them by specifying angles between adjacent edges are obtained. Favorable computational results are reported. The new algorithm is so fast that output of the structures is by far the most time-consuming part of the process. It thus contributes to enumeration in chemistry, a topic studied for over a century, and is useful in library making, QSAR/QSPR, and synthesis studies.

Journal ArticleDOI
TL;DR: Several topology-based measures that characterise proximity relationships between regions in a spatial system are introduced, derived from a relative adjacency operator that is computed from the dual graph of a spatial system.
Abstract: This paper introduces several topology-based measures that characterise proximity relationships between regions in a spatial system. These measures are derived from a relative adjacency operator that is computed from the dual graph of a spatial system. The operator is flexible as the respective importance of neighbouring and outlying regions can be parameterised. Given a reference region in a spatial system, we also show how the relative adjacency supports the analysis of the relative distribution of other regions, and how these regions are clustered with respect to that reference region. Extensions of the relative adjacency integrate additional spatial and thematic criteria. The properties of the relative adjacency are illustrated by means of reference examples and a case study.

Book ChapterDOI
TL;DR: This paper describes the construction scheme of a Combinatorial Pyramid and provides a constructive definition of the notions of reduction windows and receptive fields within the Combinator Pyramid framework.
Abstract: Irregular pyramids are made of a stack of successively reduced graphs embedded in the plane. Each vertex of a reduced graph corresponds to a connected set of vertices in the level below. One connected set of vertices reduced into a single vertex at the above level is called the reduction window of this vertex. In the same way, a connected set of vertices in the base level graph reduced to a single vertex at a given level is called the receptive field of this vertex. The graphs used in the pyramid may be region adjacency graphs, dual graphs or combinatorial maps. This last type of pyramid is called a Combinatorial Pyramid. Compared to usual graph data structures, combinatorial maps encode one graph and its dual within a same formalism and offer an explicit encoding of the orientation of edges around vertices. This paper describes the construction scheme of a Combinatorial Pyramid. We also provide a constructive definition of the notions of reduction windows and receptive fields within the Combinatorial Pyramid framework.

Journal ArticleDOI
TL;DR: It is shown that the notion of saddle points, barriers, and basins can be extended to the poset-valued case in a meaningful way and described an algorithm that efficiently extracts these features from an exhaustive enumeration of a given generalized landscape.
Abstract: Fitness landscapes have proved to be a valuable concept in evolutionary biology, combinatorial optimization, and the physics of disordered systems. Usually, a fitness landscape is considered as a mapping from a configuration space equipped with some notion of adjacency, nearness, distance, or accessibility, into the real numbers. In the context of multi-objective optimization problems this concept can be extended to poset-valued landscapes. In a geometric analysis of such a structure, local Pareto points take on the role of local minima. We show that the notion of saddle points, barriers, and basins can be extended to the poset-valued case in a meaningful way and describe an algorithm that efficiently extracts these features from an exhaustive enumeration of a given generalized landscape.
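The poset-valued analogue of a local minimum is a local Pareto point: a configuration that no neighbour (under the adjacency relation of the configuration space) dominates. The sketch below illustrates only this basic notion on a toy two-objective landscape; it is not the paper's barrier and basin extraction algorithm.

```python
# Sketch of local Pareto points in a poset-valued landscape: a configuration
# is locally Pareto optimal if no adjacent configuration dominates it (better
# or equal in every objective and strictly better in at least one).

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (assuming minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def local_pareto_points(fitness, neighbours):
    """fitness: node -> objective vector; neighbours: node -> iterable of nodes."""
    return [v for v in fitness
            if not any(dominates(fitness[u], fitness[v]) for u in neighbours[v])]

if __name__ == "__main__":
    # Four configurations on a path a - b - c - d with two objectives each.
    fitness = {"a": (1, 3), "b": (2, 2), "c": (3, 1), "d": (3, 2)}
    neighbours = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}
    print(local_pareto_points(fitness, neighbours))   # ['a', 'b', 'c']
```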

Book ChapterDOI
01 Jan 2003
TL;DR: A publicly available, simple, state-machine implementation of the Edgebreaker compression, which traverses the corner table, computes the CLERS symbols, and constructs an ordered list of vertex References.
Abstract: A triangulated surface S with V vertices is sometimes stored as a list of T independent triangles, each described by the 3 floating-point coordinates of its vertices. This representation requires about 576V bits and provides no explicit information regarding the adjacency between neighboring triangles or vertices. A variety of boundary-graph data structures may be derived from such a representation in order to make explicit the various adjacency and incidence relations between triangles, edges, and vertices. These relations are stored to accelerate algorithms that visit the surface in a systematic manner and access the neighbors of each vertex or triangle. Instead of these complex data structures, we advocate a simple Corner Table, which explicitly represents the triangle/vertex incidence and the triangle/triangle adjacency of any manifold or pseudo-manifold triangle mesh, as two tables of integers. The Corner Table requires about 12Vlog2V bits and must be accompanied by a vertex table, which requires 96V bits, if Floats are used. The Corner Table may be derived from the list of independent triangles. For meshes homeomorphic to a sphere, it may be compressed to less than 4V bits by storing the “clers” sequence of triangle-labels from the set {C,L,E,R,S}. Further compression to 3.6V bits may be guaranteed by using context-based codes for the clers symbols. Entropy codes reduce the storage for large meshes to less than 2V bits. Meshes with more complex topologies may require O(log2V) additional bits per handle or hole. We present here a publicly available, simple, state-machine implementation of the Edgebreaker compression, which traverses the corner table, computes the CLERS symbols, and constructs an ordered list of vertex References. Vertices are encoded, in the order in which they appear on the list, as corrective displacements between their predicted and actual locations. Quantizing vertex coordinates to 12 bits and predicting each vertex as a linear combination of its previously encoded neighbors leads to short displacements, for which entropy codes drop the total vertex location storage for heavily sampled typical meshes below 16V bits.
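A stripped-down Corner Table can convey why the structure is convenient: two integer tables, V (corner to vertex) and O (corner to opposite corner), encode triangle/vertex incidence and triangle/triangle adjacency, and the next and previous corners of a triangle follow from index arithmetic. The two-triangle mesh below is a toy example, and the sketch omits the Edgebreaker compression itself.

```python
# Minimal Corner Table sketch in the spirit of the chapter: V maps a corner to
# its vertex, O maps a corner to its opposite corner across the shared edge
# (None on the boundary), and next/prev corners inside a triangle come from
# simple index arithmetic.

def nxt(c):  return 3 * (c // 3) + (c + 1) % 3   # next corner in the same triangle
def prev(c): return 3 * (c // 3) + (c + 2) % 3   # previous corner in the same triangle
def tri(c):  return c // 3                       # triangle containing corner c

# Toy mesh: square 0-1-2-3 split into triangles (0, 1, 2) and (0, 2, 3),
# sharing the diagonal edge (0, 2).
V = [0, 1, 2,   0, 2, 3]                 # corner -> vertex
O = [None, 5, None,   None, None, 1]     # corner -> opposite corner

if __name__ == "__main__":
    c = 1                                    # corner at vertex 1 in triangle 0
    print(tri(c), V[nxt(c)], V[prev(c)])     # 0 2 0 : the other vertices of triangle 0
    print(tri(O[c]))                         # 1 : the triangle across the shared edge (0, 2)
```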

01 Jun 2003
TL;DR: In this article, all 2-cell embeddings of the vertex-transitive graphs on 12 vertices or less are constructed and their automorphism groups and dual maps are also constructed.
Abstract: Embeddings of graphs on the torus are studied. All 2-cell embeddings of the vertex-transitive graphs on 12 vertices or less are constructed. Their automorphism groups and dual maps are also constructed. A table of embeddings is presented. 1. Toroidal Graphs. Let G be a 2-connected graph. The vertex and edge sets of G are V(G) and E(G), respectively. E(G) is a multiset consisting of unordered pairs {u,v}, where u, v ∈ V(G), and possibly ordered pairs (v,v), as the graphs G will sometimes have multiple edges and/or loops. We write the pair {u,v} as uv, and the ordered pair (v,v) as vv, which represents a loop on vertex v. If u, v ∈ V(G) then u → v means that u is adjacent to v (and so also v → u). The reader is referred to Bondy and Murty [2], West [11], or Gross and Tucker [3] for other graph-theoretic terminology. An embedding of a graph on a surface is represented combinatorially by a rotation system [3]. This consists of a cyclic ordering of the incident edges, for each vertex v. Let v be a vertex of G, incident on edges e1, e2, ..., ek. We write v → (e1, e2, ..., ek) to indicate the cyclic ordering for v in a rotation system. If some ei is a loop vv, then this loop must appear twice in the cyclic adjacency list (e1, e2, ..., ek), because walking around the vertex v along a small circle in the torus will require that a loop vv be crossed twice. Thus, we assume that if ei is a loop vv, there is another e′ in the list corresponding to the same loop vv. Since every loop must appear twice in the rotation system, a loop contributes two to the degree of a vertex. If ei with endpoints uv is not a loop, then it will appear in the cyclic adjacency list of both vertices u and v. Given ei in the list for u, the corresponding ej in the list for v is given by the rotation system. Figure 1 shows an embedding of the complete
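A rotation system determines the faces of its embedding by face tracing: from the directed edge (u, v), the next edge of the same face is (v, w), where w follows u in the cyclic order around v. Counting faces then gives the Euler characteristic V - E + F (2 for the sphere, 0 for the torus). The sketch below applies this standard routine to a planar rotation system of K4, chosen because the answer is easy to check by hand; it is a generic illustration, not the enumeration code used in the paper.

```python
# Face tracing from a rotation system (simple graphs, no loops or multi-edges):
# the next directed edge after (u, v) is (v, w), where w is the successor of u
# in the cyclic order of neighbours around v. Faces partition the directed
# edges, so counting the cycles gives F and hence the Euler characteristic.

def trace_faces(rotation):
    """rotation: vertex -> cyclic tuple of neighbours. Returns the list of faces."""
    darts = {(u, v) for u in rotation for v in rotation[u]}   # directed edges
    faces, seen = [], set()
    for start in sorted(darts):
        if start in seen:
            continue
        face, edge = [], start
        while edge not in seen:
            seen.add(edge)
            face.append(edge)
            u, v = edge
            ring = rotation[v]
            w = ring[(ring.index(u) + 1) % len(ring)]         # successor of u around v
            edge = (v, w)
        faces.append(face)
    return faces

if __name__ == "__main__":
    rotation = {0: (3, 1, 2), 1: (2, 0, 3), 2: (3, 0, 1), 3: (1, 0, 2)}  # planar K4
    faces = trace_faces(rotation)
    V, E, F = len(rotation), sum(len(r) for r in rotation.values()) // 2, len(faces)
    print(F, V - E + F)   # 4 faces, Euler characteristic 2 (embedding in the sphere)
```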

Book ChapterDOI
30 Jun 2003
TL;DR: The utility of the embedding methods on neighbourhood graphs representing the arrangement of corner features in 2D images of 3D polyhedral objects is illustrated.
Abstract: In this paper we explore how to use spectral methods for embedding and clustering unweighted graphs. We use the leading eigenvectors of the graph adjacency matrix to define eigenmodes of the adjacency matrix. For each eigenmode, we compute vectors of spectral properties. These include the eigenmode perimeter, eigenmode volume, Cheeger number, inter-mode adjacency matrices and intermode edge-distance. We embed these vectors in a pattern-space using two contrasting approaches. The first of these involves performing principal or independent components analysis on the covariance matrix for the spectral pattern vectors. The second approach involves performing multidimensional scaling on the L2 norm for pairs of pattern vectors. We illustrate the utility of the embedding methods on neighbourhood graphs representing the arrangement of corner features in 2D images of 3D polyhedral objects.
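A minimal version of the spectral route described above: map each graph to a vector of spectral features and embed the collection of graphs by principal components analysis of those vectors. For brevity the feature vector in the sketch is just the leading adjacency eigenvalues, rather than the paper's eigenmode perimeter, volume and Cheeger features; numpy is assumed.

```python
# Minimal sketch of graph-spectral embedding: each graph becomes a vector of
# spectral features, and the set of graphs is embedded in a low-dimensional
# pattern space by PCA of those vectors. The feature vector here (leading
# adjacency eigenvalues) is a simplification of the eigenmode features used
# in the paper.
import numpy as np

def spectral_features(adjacency, k=3):
    """Top-k eigenvalues of a symmetric adjacency matrix, largest first."""
    eigvals = np.linalg.eigvalsh(np.asarray(adjacency, dtype=float))
    return eigvals[::-1][:k]

def pca_embed(feature_vectors, dim=2):
    """Project the graphs' feature vectors onto their top principal components."""
    X = np.vstack(feature_vectors)
    X = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return X @ vt[:dim].T

if __name__ == "__main__":
    triangle = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
    path3    = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
    edge     = [[0, 1, 0], [1, 0, 0], [0, 0, 0]]
    coords = pca_embed([spectral_features(g) for g in [triangle, path3, edge]])
    print(np.round(coords, 3))        # one 2-D point per graph
```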

Proceedings Article
01 Jan 2003
TL;DR: A memory efficient, fast in-core solution to the problem using cell adjacency and a complete adaptive kd-tree based on the vertices, and the data structure is tested on several large unstructured grids from computational fluid dynamics (CFD) simulations for industrial applications.
Abstract: Visualization of data defined over large unstructured grids requires an efficient solution to the point location problem, since many visualization methods need values at arbitrary positions given in global coordinates. This paper presents a memory efficient, fast in-core solution to the problem using cell adjacency and a complete adaptive kd-tree based on the vertices. Since cell adjacency information is stored rather often (for example for fast ray-casting or streamline integration), the extra memory needed is very small compared to conventional solutions like octrees. The method is especially useful for highly non-uniform point distributions and extreme edge ratios in the cells. The data structure is tested on several large unstructured grids from computational fluid dynamics (CFD) simulations for industrial applications.

Proceedings ArticleDOI
15 May 2003
TL;DR: The construction of hierarchical feature clustering is described and how to overcome general problems of region growing algorithms such as seed point selection and processing order is shown.
Abstract: In this paper we describe the construction of hierarchical feature clustering and show how to overcome general problems of region growing algorithms such as seed point selection and processing order. Access to medical knowledge inherent in medical image databases requires content-based descriptions to allow non-textual retrieval, e.g., for comparison, statistical inquiries, or education. Due to varying medical context and questions, data structures for image description must provide all visually perceivable regions and their topological relationships, which poses one of the major problems for content extraction. In medical applications main criteria for segmenting images are local features such as texture, shape, intensity extrema, or gray values. For this new approach, these features are computed pixel-based and neighboring pixels are merged if the Euclidean distance of corresponding feature vectors is below a threshold. Thus, the planar adjacency of clusters representing connected image partitions is preserved. A cluster hierarchy is obtained by iterating and recording the adjacency merging. The resulting inclusion and neighborhood relations of the regions form a hierarchical region adjacency graph. This graph represents a multiscale image decomposition and therefore an extensive content description. It is examined with respect to application in daily routine by testing invariance against transformation, run time behavior, and visual quality. For retrieval purposes, a graph can be matched with graphs of other images, where the quality of the matching describes the similarity of the images.

Proceedings ArticleDOI
11 Feb 2003
TL;DR: A concise and responsive data structure, called AIF (Adjacency and Incidence Framework), for multiresolution meshes is introduced, as well as a new simplification algorithm based on the planarity of neighboring faces.
Abstract: This paper introduces a concise and responsive data structure, called AIF (Adjacency and Incidence Framework), for multiresolution meshes, as well as a new simplification algorithm based on the planarity of neighboring faces. It is an optimal data structure for polygonal meshes, manifold and non-manifold, which means that a minimal number of direct and indirect accesses are required to retrieve adjacency and incidence information from it. These querying tools are necessary for dynamic multiresolution meshing algorithms (e.g. refinement and simplification operations). AIF is an orientable, but not oriented, data structure, i.e. an orientation can be topologically induced as needed in many computer graphics and geometric modelling applications. On the other hand, the simplification algorithm proposed in this paper is "memoryless" in the sense that only the current approximation counts to compute the next one; no information about the original shape or previous approximations is considered.