
Showing papers on "Adjacency list published in 2009"


Proceedings ArticleDOI
28 Jun 2009
TL;DR: This work proposes simple combinatorial formulations that encapsulate efficient compressibility of graphs and shows that some of the problems are NP-hard yet admit effective heuristics, some of which can exploit properties of social networks such as link reciprocity.
Abstract: Motivated by structural properties of the Web graph that support efficient data structures for in-memory adjacency queries, we study the extent to which a large network can be compressed. Boldi and Vigna (WWW 2004) showed that Web graphs can be compressed down to three bits of storage per edge; we study the compressibility of social networks, where again adjacency queries are a fundamental primitive. To this end, we propose simple combinatorial formulations that encapsulate efficient compressibility of graphs. We show that some of the problems are NP-hard yet admit effective heuristics, some of which can exploit properties of social networks such as link reciprocity. Our extensive experiments show that social networks and the Web graph exhibit vastly different compressibility characteristics.
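The locality that makes such compression possible is easiest to see on a sorted adjacency list. Below is a minimal sketch, not the method of either paper, of gap encoding: neighbors are stored as differences, which stay small (and cheap to code) when a node's neighbors have nearby identifiers.

```python
# A minimal sketch of gap-encoding an adjacency list; the variable-length bit
# coding used by real Web-graph compressors (e.g., WebGraph) is omitted here.
def gap_encode(neighbors):
    """Encode a sorted adjacency list as its first element plus successive gaps."""
    if not neighbors:
        return []
    nbrs = sorted(neighbors)
    return [nbrs[0]] + [b - a for a, b in zip(nbrs, nbrs[1:])]

def gap_decode(gaps):
    """Recover the sorted adjacency list from its gap encoding."""
    out, total = [], 0
    for g in gaps:
        total += g
        out.append(total)
    return out

assert gap_decode(gap_encode([8, 3, 5, 9])) == [3, 5, 8, 9]
```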

391 citations


Proceedings Article
01 Jan 2009
TL;DR: This paper proposes a semi-supervised learning framework based on the ℓ1 graph to utilize both labeled and unlabeled data for inference on a graph, and demonstrates the superiority of this framework over counterparts based on traditional graphs.
Abstract: In this paper, we present a novel semi-supervised learning framework based on the ℓ1 graph. The ℓ1 graph is motivated by the observation that each datum can be reconstructed by a sparse linear superposition of the training data. The sparse reconstruction coefficients, used to deduce the weights of the directed ℓ1 graph, are derived by solving an ℓ1 optimization problem for sparse representation. Different from conventional graph construction processes, which are generally divided into two independent steps, i.e., adjacency searching and weight selection, the graph adjacency structure as well as the graph weights of the ℓ1 graph are derived simultaneously and in a parameter-free manner. Illuminated by the validated discriminating power of sparse representation in [16], we propose a semi-supervised learning framework based on the ℓ1 graph to utilize both labeled and unlabeled data for inference on a graph. Extensive experiments on semi-supervised face recognition and image classification demonstrate the superiority of our proposed semi-supervised learning framework based on the ℓ1 graph over counterparts based on traditional graphs.
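As a concrete illustration of the construction described above, the sketch below builds a directed ℓ1 graph by reconstructing each sample from the others with an ℓ1-penalized regression. The use of scikit-learn's Lasso and the alpha value are illustrative assumptions, not the paper's exact solver.

```python
# Hedged sketch of an ell-1 graph: edge weights are the absolute sparse
# reconstruction coefficients of each sample over all remaining samples.
import numpy as np
from sklearn.linear_model import Lasso

def l1_graph(X, alpha=0.01):
    """X: (n_samples, n_features) data matrix. Returns an (n, n) weight matrix W."""
    n = X.shape[0]
    W = np.zeros((n, n))
    for i in range(n):
        others = np.delete(np.arange(n), i)
        # min_c ||x_i - B c||_2^2 + alpha * ||c||_1, where B's columns are the other samples
        model = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000)
        model.fit(X[others].T, X[i])
        W[i, others] = np.abs(model.coef_)
    return W
```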

282 citations


Book ChapterDOI
21 Aug 2009
TL;DR: This paper presents a Web graph representation based on a compact tree structure that takes advantage of large empty areas of the adjacency matrix of the graph to allow for extended functionality not usually considered in compressed graph representations.
Abstract: This paper presents a Web graph representation based on a compact tree structure that takes advantage of large empty areas of the adjacency matrix of the graph. Our results show that our method is competitive with the best alternatives in the literature, offering a very good compression ratio (3.3–5.3 bits per link) while permitting fast navigation on the graph to obtain direct as well as reverse neighbors (2–15 microseconds per neighbor delivered). Moreover, it allows for extended functionality not usually considered in compressed graph representations.
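The sketch below illustrates, under simplifying assumptions (a dense square matrix whose side is a power of two, branching factor 2, depth-first bit output), why such a tree is compact: a quadrant of the adjacency matrix that is entirely empty is summarized by a single 0 bit. The actual structure in the paper stores level-order bitmaps and supports direct and reverse neighbor queries; none of that is shown here.

```python
import numpy as np

def quadtree_bits(A):
    """Return a bit sequence for a quadtree-style decomposition of adjacency matrix A."""
    n = A.shape[0]
    if n == 1:
        return [int(A[0, 0])]
    h = n // 2
    bits = []
    for qi in range(2):
        for qj in range(2):
            quad = A[qi * h:(qi + 1) * h, qj * h:(qj + 1) * h]
            if quad.any():
                bits.append(1)
                bits.extend(quadtree_bits(quad))  # recurse only into non-empty quadrants
            else:
                bits.append(0)                    # a whole empty quadrant costs one bit
    return bits

A = np.zeros((4, 4), dtype=int)
A[0, 1] = A[3, 2] = 1
print(quadtree_bits(A))
```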

122 citations


Journal ArticleDOI
TL;DR: An efficient graph-based mining (GBM) algorithm for mining the frequent trajectory patterns in a spatial-temporal database that outperforms the Apriori-based and PrefixSpan-based methods by more than one order of magnitude.

97 citations


Journal ArticleDOI
TL;DR: A hybrid approach that effectively uses volume subtraction and face adjacency graph is proposed to recognize manufacturing features from 3-D model data in STEP AP-203 format to reduce the number of alternatives for evaluation and thereby the computational effort.
Abstract: For seamless automation, computer-aided design and manufacturing activities have to be linked by computer-aided process planning (CAPP). An important subtask in CAPP is setup planning, in which a setup plan must be generated ideally from a given 3-D model of the component. In this paper, a hybrid approach that effectively uses volume subtraction and face adjacency graph is proposed to recognize manufacturing features from 3-D model data in STEP AP-203 format. The proposed feature recognition is generic in nature and is capable of recognizing intersecting features also with relative ease. The manufacturing features are clustered based on preferential base for machining and a setup sequence is obtained by alternative rating and ranking. Finally, locating and clamping for each setup are determined considering intermediate shapes of the workpiece. This setup planning method reduces the number of alternatives for evaluation and thereby the computational effort.

85 citations


Journal ArticleDOI
TL;DR: In this article, a unique representation of a graph, the characteristic adjacency matrix, is derived from all the loops of the graph obtained through a new algorithm, and the canonical perimeter graph is obtained by relabelling the perimeter graph.

83 citations


Journal ArticleDOI
TL;DR: The distance spectrum and energy of the join-based compositions of regular graphs are described in terms of their adjacency spectrum, and it is shown that there exist a number of families of sets of noncospectral graphs with equal distance energy.

72 citations


Journal ArticleDOI
TL;DR: The experimental results show that sparsity exploitation via coloring yields enormous savings in runtime and makes the computation of very large Hessians feasible; the results also show that evaluating a Hessian via an indirect method is often faster than a direct evaluation.
Abstract: The computation of a sparse Hessian matrix H using automatic differentiation (AD) can be made efficient using the following four-step procedure: (1) Determine the sparsity structure of H, (2) obtain a seed matrix S that defines a column partition of H using a specialized coloring on the adjacency graph of H, (3) compute the compressed Hessian matrix B ≡ HS, and (4) recover the numerical values of the entries of H from B. The coloring variant used in the second step depends on whether the recovery in the fourth step is direct or indirect: a direct method uses star coloring and an indirect method uses acyclic coloring. In an earlier work, we had designed and implemented effective heuristic algorithms for these two NP-hard coloring problems. Recently, we integrated part of the developed software with the AD tool ADOL-C, which has recently acquired a sparsity detection capability. In this paper, we provide a detailed description and analysis of the recovery algorithms and experimentally demonstrate the efficacy of the coloring techniques in the overall process of computing the Hessian of a given function using ADOL-C as an example of an AD tool. We also present new analytical results on star and acyclic coloring of chordal graphs. The experimental results show that sparsity exploitation via coloring yields enormous savings in runtime and makes the computation of Hessians of very large size feasible. The results also show that evaluating a Hessian via an indirect method is often faster than a direct evaluation. This speedup is achieved without compromising numerical accuracy.
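The toy example below illustrates the compress-then-recover idea on a tridiagonal Hessian. It is not ADOL-C, and it uses a simple distance-2 coloring (column index mod 3) rather than the star or acyclic colorings analyzed in the paper; with this coloring the columns in each color group are structurally orthogonal, so every entry can be read directly from the compressed matrix B = HS.

```python
import numpy as np

n = 8
H = (np.diag(2.0 * np.ones(n))
     + np.diag(-1.0 * np.ones(n - 1), 1)
     + np.diag(-1.0 * np.ones(n - 1), -1))       # tridiagonal "Hessian" for illustration

colors = np.arange(n) % 3                         # column partition with 3 colors
S = np.zeros((n, 3))
S[np.arange(n), colors] = 1.0                     # seed matrix defined by the coloring

B = H @ S                                         # compressed Hessian: n x 3 instead of n x n

H_rec = np.zeros_like(H)
for i in range(n):
    for j in (i - 1, i, i + 1):                   # known sparsity pattern of row i
        if 0 <= j < n:
            H_rec[i, j] = B[i, colors[j]]         # direct recovery: one nonzero column per color in this row

assert np.allclose(H, H_rec)
```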

72 citations


Book ChapterDOI
20 May 2009
TL;DR: Several algorithms solve the single-source shortest-path problem using CUDA, are run on a database of hundreds of large graphs represented by adjacency lists and adjacency matrices, and achieve high speedups relative to a CPU implementation based on Fibonacci heaps.
Abstract: We present several algorithms that solve the single-source shortest-path problem using CUDA. We have run them on a database composed of hundreds of large graphs represented by adjacency lists and adjacency matrices, achieving high speedups relative to a CPU implementation based on Fibonacci heaps. Concerning correctness, we outline why our solutions work, and show that a previous approach [10] is incorrect.
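For reference, a compact CPU-side baseline for the same problem is sketched below; the paper's baseline uses Fibonacci heaps, whereas this illustration uses Python's binary heap over an adjacency list, and the CUDA kernels themselves are not reproduced.

```python
import heapq

def dijkstra(adj, source):
    """Single-source shortest paths; adj maps u -> [(v, weight), ...] with non-negative weights."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                              # stale queue entry
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

print(dijkstra({0: [(1, 2.0), (2, 5.0)], 1: [(2, 1.0)], 2: []}, 0))  # {0: 0.0, 1: 2.0, 2: 3.0}
```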

66 citations


Journal ArticleDOI
TL;DR: All connected cubic integral Cayley graphs are determined and some infinite families of connected integral Cayley graphs are introduced.
Abstract: Let $G$ be a non-trivial group, $S\subseteq G\setminus \{1\}$ and $S=S^{-1}:=\{s^{-1} \;|\; s\in S\}$. The Cayley graph of $G$ denoted by $\Gamma(S:G)$ is a graph with vertex set $G$ and two vertices $a$ and $b$ are adjacent if $ab^{-1}\in S$. A graph is called integral, if its adjacency eigenvalues are integers. In this paper we determine all connected cubic integral Cayley graphs. We also introduce some infinite families of connected integral Cayley graphs.
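The snippet below is a small hedged illustration of the objects studied: it builds the adjacency matrix of a Cayley graph of the toy abelian group Z_6 with a symmetric connection set of size 3 and checks that its adjacency spectrum is integral. The group and connection set are illustrative choices, not taken from the paper.

```python
import numpy as np

def cayley_adjacency_zn(n, S):
    """Adjacency matrix of the Cayley graph of Z_n with symmetric connection set S (0 not in S)."""
    A = np.zeros((n, n), dtype=int)
    for a in range(n):
        for s in S:
            A[a, (a + s) % n] = 1                 # a ~ a + s; S = -S keeps the matrix symmetric
    return A

A = cayley_adjacency_zn(6, {1, 3, 5})             # cubic, since |S| = 3
eigs = np.linalg.eigvalsh(A)
print(np.round(eigs, 6))                          # [-3, 0, 0, 0, 0, 3]: an integral spectrum
print(all(abs(e - round(e)) < 1e-8 for e in eigs))
```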

65 citations


01 Sep 2009
TL;DR: Preliminary results demonstrate the ability of this approach to automatically filter artifacts and segment pavements from 3D data.
Abstract: This paper presents an automatic method for filtering and segmenting 3D point clouds acquired from mobile LIDAR systems. Our approach exploits 3D information by using range images and several morphological operators. Firstly, artifacts are detected in order to filter the point clouds; the artifact detection is based on a Top-Hat of the hole-filling algorithm. Secondly, ground segmentation extracts the contour between pavements and roads. The method uses a quasi-flat zone algorithm and a region adjacency graph representation. Edges are evaluated with the local height difference along the corresponding boundary. Finally, edges with a value compatible with the pavement/road difference (about 14 cm) are selected. Preliminary results demonstrate the ability of this approach to automatically filter artifacts and segment pavements from 3D data.

Book ChapterDOI
23 Feb 2009
TL;DR: A new genetic algorithm for community detection is proposed that uses the fundamental measure criterion modularity Q as the fitness function and a special locus-based adjacency encoding scheme is applied to represent the community partition.
Abstract: With rapidly growing evidence that various systems in nature and society can be modeled as complex networks, community detection in networks has become a hot research topic in many fields. This paper proposes a new genetic algorithm for community detection. The algorithm uses the fundamental measure criterion, modularity Q, as the fitness function. A special locus-based adjacency encoding scheme is applied to represent the community partition. The encoding scheme is well suited to community detection because it determines the number of communities automatically and distinctly reduces the search space. In addition, the corresponding crossover and mutation operators are designed. Experiments in three respects show that the algorithm is effective, efficient and stable.
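The decoding step of the locus-based adjacency encoding mentioned above can be sketched as follows: each gene stores one neighbor of its node, and the community partition is the set of connected components induced by those (node, gene) edges, which is why the number of communities never has to be fixed in advance. The toy genome below is an illustrative assumption, not data from the paper.

```python
def decode_locus_based(genome):
    """genome[i] is a node adjacent to node i; return the implied community partition."""
    n = len(genome)
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]         # path halving
            x = parent[x]
        return x

    for i, j in enumerate(genome):
        parent[find(i)] = find(j)                 # union the endpoints of each encoded edge

    groups = {}
    for v in range(n):
        groups.setdefault(find(v), []).append(v)
    return list(groups.values())

print(decode_locus_based([1, 0, 1, 4, 3, 4]))     # [[0, 1, 2], [3, 4, 5]]
```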

Proceedings ArticleDOI
18 Sep 2009
TL;DR: This paper presents a reconstruction pipeline for recovering branching structure of trees from laser scanned data points using a variational k-means clustering algorithm to give a compact and accurate reconstruction of the branching system.
Abstract: This paper presents a reconstruction pipeline for recovering branching structure of trees from laser scanned data points. The process is made up of two main blocks: segmentation and reconstruction. Based on a variational k-means clustering algorithm, cylindrical components and ramified regions of data points are identified and located. An adjacency graph is then built from neighborhood information of components. Simple heuristics allow us to extract a skeleton structure and identify branches from the graph. Finally, a B-spline model is computed to give a compact and accurate reconstruction of the branching system.

Proceedings ArticleDOI
20 Jul 2009
TL;DR: It is shown that the interpretation of m-ary adjacency relations is the same as that of binary relations, and that they can therefore be employed consistently in social network analysis, allowing some novel results to be derived.
Abstract: Adjacency relations for social network analysis have usually been tackled in their bidimensional form, in the sense that relations are computed over pairs of objects. This paper considers the bidimensional case as restrictive and proposes an approach in which the dimension of the analysis is not limited to binary relations. With the aid of fuzzy logic and OWA operators, it is shown that the interpretation of m-ary adjacency relations is the same as that of binary relations, so they can consistently be employed in social network analysis and some novel results can be derived. Besides justifying the use of m-ary relations, the paper proposes a way to characterize them and closes with an example section.
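As a small hedged sketch of the aggregation tool involved, an OWA (ordered weighted averaging) operator sorts the membership degrees of a group of m actors and combines them with a fixed weight vector; the weights and degrees below are illustrative, not values from the paper.

```python
def owa(values, weights):
    """Ordered weighted average of fuzzy adjacency degrees (weights sum to 1)."""
    ordered = sorted(values, reverse=True)
    return sum(w * v for w, v in zip(weights, ordered))

# A ternary (m = 3) adjacency degree for a group of three actors:
print(owa([0.9, 0.4, 0.7], [0.5, 0.3, 0.2]))      # 0.5*0.9 + 0.3*0.7 + 0.2*0.4 = 0.74
```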

Journal ArticleDOI
TL;DR: In this paper, it was shown that T-shape trees are determined by their Laplacian spectra, and among them those that are determined using their adjacency spectra are characterized.

Patent
12 Nov 2009
TL;DR: An apparatus includes a plurality of network site interfaces in communication with two or more networks, each associated with a different Virtual Routing and Forwarding (VRF) instance, and a processor configured for mapping the VRF instances to an Interior Gateway Protocol (IGP) adjacency and transmitting VRF information on the IGP adjacency along with a VRF identifier indicating the network associated with the VRF information.
Abstract: In one embodiment, an apparatus includes a plurality of network site interfaces in communication with two or more networks, each of the networks associated with a different Virtual Routing and Forwarding (VRF) instance, and a processor configured for mapping the VRF instances to an Interior Gateway Protocol (IGP) adjacency and transmitting VRF information on the IGP adjacency along with a VRF identifier indicating the network associated with the VRF information. A method is also disclosed.

Proceedings ArticleDOI
01 Sep 2009
TL;DR: A motion segmentation algorithm that partitions the image plane into disjoint regions based on their parametric motion, which generalizes naturally to multi-label partitions that can handle multiple motions.
Abstract: We present a motion segmentation algorithm that partitions the image plane into disjoint regions based on their parametric motion. It relies on a finer partitioning of the image domain into regions of uniform photometric properties, with motion segments made of unions of such “superpixels.” We exploit recent advances in combinatorial graph optimization that yield computationally efficient estimates. The energy functional is built on a superpixel graph, and is iteratively minimized by computing a parametric motion model in closed-form, followed by a graph cut of the superpixel adjacency graph. It generalizes naturally to multi-label partitions that can handle multiple motions.

Book ChapterDOI
21 Sep 2009
TL;DR: The approach allows multivariate data with covariates to be accounted for, and provides the flexibility to design a wide range of spatial interaction models between the attributes, including adjacency properties or distances between and within categories.
Abstract: Finding geographical patterns by analysing the spatial configuration distribution of events, objects or their attributes has a long history in geography, ecology and epidemiology. Measuring the presence of patterns or clusters, or comparing the spatial organisation of different attributes or symbols within the same map or across different maps, is often the basis of analysis. Landscape ecology has provided a long list of interesting indicators, e.g. summaries of patch size distribution. Looking at content information, the Shannon entropy is also a measure of a distribution providing insight into the organisation of data, and has been widely used, for example, in economic geography. Unfortunately, using the Shannon entropy on the bare distribution of categories within the spatial domain does not describe the spatial organisation itself. Particularly in ecology and geography, some authors have proposed integrating spatial aspects into the entropy, using adjacency properties or distances between and within categories. This paper goes further with adjacency, emphasising the use of co-occurrences of categories at multiple orders, the adjacency being seen as a particular co-occurrence of order 2 with a null collocation distance, and proposes a spatial entropy measure framework. The approach allows multivariate data with covariates to be accounted for, and provides the flexibility to design a wide range of spatial interaction models between the attributes. Generating a multivariate multinomial distribution of collocations describing the spatial organisation allows the interaction to be assessed via an entropy formula. This spatial entropy depends on the collocation distance used, which can be seen as a scale factor in the spatial organisation to be analysed.
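A minimal sketch of the order-2 (adjacency) co-occurrence entropy described above: count the category pairs of 4-neighbour cells on a small labelled grid and take the Shannon entropy of that co-occurrence distribution. The grid, the 4-neighbourhood and the use of unordered pairs are illustrative assumptions, not the paper's full multivariate framework.

```python
import math
from collections import Counter

grid = [
    ["a", "a", "b"],
    ["a", "b", "b"],
    ["c", "c", "b"],
]

pairs = Counter()
rows, cols = len(grid), len(grid[0])
for r in range(rows):
    for c in range(cols):
        for dr, dc in ((0, 1), (1, 0)):           # right and down neighbours, so each adjacent pair counts once
            rr, cc = r + dr, c + dc
            if rr < rows and cc < cols:
                pairs[tuple(sorted((grid[r][c], grid[rr][cc])))] += 1

total = sum(pairs.values())
entropy = -sum((k / total) * math.log2(k / total) for k in pairs.values())
print(dict(pairs), round(entropy, 3))
```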

Journal ArticleDOI
TL;DR: A randomized version of the ILT model is presented that exhibits a tunable densification power-law exponent and maintains several properties of the deterministic model.
Abstract: We present a deterministic model for online social networks (OSNs) based on transitivity and local knowledge in social interactions. In the iterated local transitivity (ILT) model, at each time step and for every existing node x, a new node appears that joins to the closed neighbor set of x. The ILT model provably satisfies a number of both local and global properties that have been observed in OSNs and other real-world complex networks, such as a densification power law, decreasing average distance, and higher clustering than in random graphs with the same average degree. Experimental studies of social networks demonstrate poor expansion properties as a consequence of the existence of communities with low numbers of intercommunity edges. Bounds on the spectral gap for both the adjacency and normalized Laplacian matrices are proved for graphs arising from the ILT model, indicating such bad expansion properties. The cop and domination numbers are shown to remain the same as those of the graph from the initial time step G_0.
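The ILT generation rule is simple enough to state in a few lines; the sketch below, using plain adjacency sets purely for illustration, performs one time step in which every existing node x spawns a new node joined to the closed neighbor set of x.

```python
def ilt_step(adj):
    """adj: {node: set of neighbours}. Return the graph after one ILT time step."""
    new_adj = {u: set(nbrs) for u, nbrs in adj.items()}
    next_id = max(adj) + 1
    for x in sorted(adj):                         # only nodes that existed before this step spawn clones
        clone = next_id
        next_id += 1
        new_adj[clone] = set()
        for y in adj[x] | {x}:                    # closed neighbour set of x
            new_adj[clone].add(y)
            new_adj[y].add(clone)
    return new_adj

g = {0: {1}, 1: {0}}                              # initial graph G_0: a single edge
g = ilt_step(g)
print({u: sorted(v) for u, v in g.items()})       # {0: [1, 2, 3], 1: [0, 2, 3], 2: [0, 1], 3: [0, 1]}
```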

Journal ArticleDOI
TL;DR: A formal representation of consistency constraints dedicated to building interiors and associated with a topological model is proposed and it is explained how this model can be successfully used for lighting and radio-wave propagation simulations.
Abstract: Virtual architectural (indoor) scenes are often modeled in 3D for various types of simulation systems. For instance, some authors propose methods dedicated to lighting, heat transfer, acoustic or radio-wave propagation simulations. These methods rely in most cases on a volumetric representation of the environment, with adjacency and incidence relationships. Unfortunately, many building datasets are only given as 2D plans, and the 3D needs vary from one application to another. To address these problems, we propose a formal representation of consistency constraints dedicated to building interiors, associated with a topological model. We show that such a representation can be used for: (i) reconstructing 3D models from 2D architectural plans; (ii) automatically detecting geometrical, topological and semantic inconsistencies; (iii) designing automatic and semi-automatic operations to correct and enrich a 2D plan. All our constraints are homogeneously defined in 2D and 3D, implemented with generalized maps and used in modeling operations. We explain how this model can be successfully used for lighting and radio-wave propagation simulations.

Proceedings ArticleDOI
08 Jun 2009
TL;DR: In this article, the problem of finding an area-universal layout for a given set of adjacency requirements whenever such a layout exists has been studied and a simple necessary and sufficient condition for a rectangular layout to be area universal has been identified.
Abstract: A rectangular layout is a partition of a rectangle into a finite set of interior-disjoint rectangles. They are used as rectangular cartograms in cartography, as floorplans in building architecture and VLSI design, and as graph drawings. Often areas are associated with the rectangles of a rectangular layout and it is desirable for one rectangular layout to represent several area assignments. A layout is area-universal if any assignment of areas to rectangles can be realized by a combinatorially equivalent rectangular layout. We identify a simple necessary and sufficient condition for a rectangular layout to be area-universal: a rectangular layout is area-universal if and only if it is one-sided. We also investigate similar questions for perimeter assignments. The adjacency requirements for the rectangles of a rectangular layout can be specified in various ways, most commonly via the dual graph of the layout. We show how to find an area-universal layout for a given set of adjacency requirements whenever such a layout exists.

Journal ArticleDOI
TL;DR: Almost all finite graphs have the $n$-e.c. adjacency property, although until recently few explicit examples of such graphs were known.
Abstract: Almost all finite graphs have the $n$-e.c. adjacency property, although until recently few explicit examples of such graphs were known. We survey some recently discovered families of explicit finite $n$-e.c. graphs, and present a new construction of strongly regular $n$-e.c. graphs arising from affine planes.
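For readers unfamiliar with the property, the brute-force checker below spells out the definition used above: a graph is n-e.c. if for every pair of disjoint vertex sets A and B with |A| + |B| = n, some vertex outside A and B is joined to every vertex of A and to none of B. This is only a definition-level sketch; it is exponential in n and unrelated to the paper's constructions.

```python
from itertools import combinations

def is_n_ec(adj, n):
    """adj: {v: set of neighbours}. Return True if the graph has the n-e.c. adjacency property."""
    vertices = set(adj)
    for S in combinations(vertices, n):
        for r in range(len(S) + 1):
            for A_tuple in combinations(S, r):
                A, B = set(A_tuple), set(S) - set(A_tuple)
                if not any(A <= adj[z] and not (B & adj[z]) for z in vertices - A - B):
                    return False
    return True

c5 = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}   # the 5-cycle
print(is_n_ec(c5, 1))                                     # True: C5 is 1-e.c.
print(is_n_ec(c5, 2))                                     # False: adjacent pairs have no common non-neighbourly witness
```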

Journal ArticleDOI
TL;DR: In this paper, the authors characterize the graphs with the minimal least eigenvalue among all graphs of fixed order with given vertex connectivity or edge connectivity, where vertex connectivity is defined as a function of edge connectivity.

Journal ArticleDOI
TL;DR: In this article, the authors consider the case of weighted graphs and give an optimal condition to ensure that every self-adjoint realization of the adjacency matrix is also unbounded from below.
Abstract: Given a locally finite simple graph whose degree is not bounded, every self-adjoint realization of the adjacency matrix is unbounded from above. In this note we give an optimal condition to ensure that it is also unbounded from below. We also consider the case of weighted graphs. We discuss the question of self-adjoint extensions and prove an optimal criterion.

Posted Content
TL;DR: A randomized version of the ILT model is presented, which exhibits a tuneable densification power law exponent, and maintains several properties of the deterministic model.
Abstract: We present a deterministic model for on-line social networks (OSNs) based on transitivity and local knowledge in social interactions. In the Iterated Local Transitivity (ILT) model, at each time-step and for every existing node $x$, a new node appears which joins to the closed neighbour set of $x$. The ILT model provably satisfies a number of both local and global properties that were observed in OSNs and other real-world complex networks, such as a densification power law, decreasing average distance, and higher clustering than in random graphs with the same average degree. Experimental studies of social networks demonstrate poor expansion properties as a consequence of the existence of communities with low numbers of inter-community edges. Bounds on the spectral gap for both the adjacency and normalized Laplacian matrices are proved for graphs arising from the ILT model, indicating such bad expansion properties. The cop and domination numbers are shown to remain the same as those of the graph from the initial time-step $G_0$, and the automorphism group of $G_0$ is a subgroup of the automorphism group of graphs generated at all later time-steps. A randomized version of the ILT model is presented, which exhibits a tuneable densification power law exponent, and maintains several properties of the deterministic model.

Journal ArticleDOI
TL;DR: A probabilistic approach for automatically segmenting foreground objects from a video sequence by maximizing the conditional joint probability density function of these two elements using a Bayesian network.
Abstract: This paper presents a probabilistic approach for automatically segmenting foreground objects from a video sequence. In order to save computation time and be robust to noise effects, a region detection algorithm incorporating edge information is first proposed to identify the regions of interest, within which the spatial relationships are represented by a region adjacency graph. Next, we consider the motion of the foreground objects and, hence, utilize the temporal coherence property in the regions detected. Thus, the foreground segmentation problem is formulated as follows. Given two consecutive image frames and the segmentation result priorly obtained, we simultaneously estimate the motion vector field and the foreground segmentation mask in a mutually supporting manner by maximizing the conditional joint probability density function of these two elements. To represent the conditional joint probability density function in a compact form, a Bayesian network is adopted, which is derived to model the interdependency of these two elements. Experimental results for several video sequences are provided to demonstrate the effectiveness of the proposed approach.

Journal ArticleDOI
TL;DR: Information has been defined as a measure of the variety in a given system as discussed by the authors, and has been used as a concept of no less importance than matter or energy in many applications.
Abstract: Information is a concept of no less importance than matter or energy. Information has been described as a measure of the variety in a given system. We review several of the indices of molecular information, including those indices related to molecular composition, molecular symmetry, graph automorphism, graph coloring, and connectivity as measured by adjacency, path lengths, and cyclicity. Electronic information indices include those related to the division of electronic structures into σ and π bond spaces, core and valence spaces, and lone pair, bond pair, or molecular fragment spaces. General molecular information indices have been formed by combining two or more specific indices. Information theoretic indices have been applied to problems of isomer discrimination, classification of molecular complexity, and structure-property and structure-activity correlations. The potential for further applications is great. Introduction: The concept of information appears to be one of the most fundamental conceptions in the science of the 20th century, a concept of no less importance than that of matter or energy. This assertion follows from the very definitions of information. According to Wiener

Journal ArticleDOI
TL;DR: In this article, a generalization of the explicit central-difference time integration scheme, using a time step variable not only in time but also in space, is proposed, where the solution at each element/node is advanced in time following local rather than global stability limitations.
Abstract: This paper proposes a generalization of the explicit central-difference time integration scheme, using a time step variable not only in time but also in space. The solution at each element/node is advanced in time following local rather than global stability limitations. This allows substantial saving of computer time in realistic applications with non-uniform meshes, especially in multi-field problems like fluid–structure interactions. A binary scheme in space is used: time steps are not completely arbitrary, but stay in a constant ratio of two when passing from one partition level to the next one. This choice greatly facilitates implementation (via an integer-based logic), ensures inherent synchronization and avoids any interpolations, necessary in other partitioning schemes in the literature, but which may reduce numerical stability. The mesh partition is automatically built up and continuously updated by simple spatial adjacency considerations. The resulting algorithm deals automatically with large variations in time of stability limits. The paper introduces the core spatial partitioning technique in the Lagrangian formulation. Some academic numerical examples allow a detailed comparison with the standard, spatially uniform algorithm. A final more realistic example shows the application of partitioning in simulations with arbitrary Lagrangian Eulerian formulation and fully-coupled boundary conditions (fluid–structure interaction). Copyright © 2008 John Wiley & Sons, Ltd.

Proceedings Article
Miroslav N. Velev1, Ping Gao1
22 Oct 2009
TL;DR: Novel approaches for solving hard combinatorial problems by translation to Boolean Satisfiability (SAT), using the absolute SAT encoding of permutations, where for each of the n objects and each of its possible positions in a permutation, a predicate is defined to indicate whether the object is placed in that position.
Abstract: We study novel approaches for solving hard combinatorial problems by translation to Boolean Satisfiability (SAT). Our focus is on combinatorial problems that can be represented as a permutation of n objects, subject to additional constraints. In the case of the Hamiltonian Cycle Problem (HCP), these constraints are that two adjacent nodes in a permutation should also be neighbors in the original graph for which we search for a Hamiltonian cycle. We use the absolute SAT encoding of permutations, where for each of the n objects and each of its possible positions in a permutation, a predicate is defined to indicate whether the object is placed in that position. For implementation of this predicate, we compare the direct and logarithmic encodings that have been used previously against 16 hierarchical parameterizable encodings, of which we explore 416 instantiations. We propose the use of enumerative adjacency constraints, which enumerate the possible neighbors of a node in a permutation, instead of, or in addition to, the exclusivity adjacency constraints, which exclude impossible neighbors and have been applied previously. We study 11 heuristics for efficiently choosing the first node in the Hamiltonian cycle, as well as 8 heuristics for static CNF variable ordering. We achieve at least 4 orders of magnitude average speedup on HCP benchmarks from the phase transition region, relative to the previously used encodings for solving HCPs via SAT, and the speedup increases with the size of the graphs.
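A hedged, partial sketch of the absolute encoding plus an enumerative adjacency constraint is given below: variable x(v, p) means "node v occupies position p of the permutation", and each clause says that if v sits at position p then one of its neighbors must sit at position p + 1 (wrapping around for a cycle). The exactly-one constraints per node and per position, and everything DIMACS-specific beyond integer literals, are omitted.

```python
def var(v, p, n):
    """Positive literal for 'node v is at position p' (1-based, DIMACS-style)."""
    return v * n + p + 1

def enumerative_adjacency_clauses(adj, n):
    """Clauses (-x(v, p) OR x(u1, p+1) OR ...) for every node v and position p."""
    clauses = []
    for v in range(n):
        for p in range(n):
            q = (p + 1) % n                        # next position, wrapping for the Hamiltonian cycle
            clauses.append([-var(v, p, n)] + [var(u, q, n) for u in adj[v]])
    return clauses

adj = {0: [1, 2], 1: [0, 2], 2: [0, 1]}            # a triangle, trivially Hamiltonian
for clause in enumerative_adjacency_clauses(adj, 3)[:3]:
    print(clause)
```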

Book ChapterDOI
24 Jul 2009
TL;DR: This work shows that any list edge-coloring can be transformed into any other under the sufficient condition that the number of allowed colors for each edge is strictly larger than the degrees of both its endpoints.
Abstract: We study the problem of reconfiguring one list edge-coloring of a graph into another list edge-coloring by changing one edge color at a time, while at all times maintaining a list edge-coloring, given a list of allowed colors for each edge. First we show that this problem is PSPACE-complete, even for planar graphs of maximum degree 3 and just six colors. Then we consider the problem restricted to trees. We show that any list edge-coloring can be transformed into any other under the sufficient condition that the number of allowed colors for each edge is strictly larger than the degrees of both its endpoints. This sufficient condition is best possible in some sense. Our proof yields a polynomial-time algorithm that finds a transformation between two given list edge-colorings of a tree with n vertices using O (n 2) recolor steps. This worst-case bound is tight: we give an infinite family of instances on paths that satisfy our sufficient condition and whose reconfiguration requires ***(n 2) recolor steps.