
Showing papers on "Adjacency list" published in 2018


Journal ArticleDOI
25 Apr 2018
TL;DR: This paper surveys fundamental, historical, and recent results on how algebraic graph theory informs electrical network analysis, dynamics, and design, and reviews the algebraic and spectral properties of graph adjacency, Laplacian, incidence, and resistance matrices.
Abstract: Algebraic graph theory is a cornerstone in the study of electrical networks ranging from miniature integrated circuits to continental-scale power systems. Conversely, many fundamental results of algebraic graph theory were laid out by early electrical circuit analysts. In this paper, we survey fundamental, historical, and recent results on how algebraic graph theory informs electrical network analysis, dynamics, and design. In particular, we review the algebraic and spectral properties of graph adjacency, Laplacian, incidence, and resistance matrices and how they relate to the analysis, network reduction, and dynamics of certain classes of electrical networks. We study these relations for models of increasing complexity ranging from static resistive direct current (dc) circuits, through dynamic resistor-inductor-capacitor (RLC) circuits, to nonlinear alternating current (ac) power flow. We conclude this paper by presenting a set of fundamental open questions at the intersection of algebraic graph theory and electrical networks.
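
As a minimal illustration of the adjacency/Laplacian/resistance relations the survey reviews, the sketch below builds the weighted Laplacian of a small resistive network and computes an effective resistance via the Moore-Penrose pseudoinverse; the network and its conductances are made-up toy values, not an example from the paper.

```python
import numpy as np

# Toy resistive network: 4 nodes, edges weighted by conductance (1/R).
edges = [(0, 1, 1.0), (1, 2, 2.0), (2, 3, 1.0), (0, 3, 0.5), (1, 3, 1.0)]
n = 4

A = np.zeros((n, n))              # weighted adjacency matrix
for i, j, g in edges:
    A[i, j] = A[j, i] = g
L = np.diag(A.sum(axis=1)) - A    # weighted Laplacian L = D - A

# Effective resistance between nodes u and v via the pseudoinverse:
# R_uv = (e_u - e_v)^T L^+ (e_u - e_v).
Lp = np.linalg.pinv(L)
u, v = 0, 2
e = np.zeros(n)
e[u], e[v] = 1.0, -1.0
print("R_eff(0,2) =", e @ Lp @ e)
```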

191 citations


Journal ArticleDOI
TL;DR: A novel adjacency coefficient representation is proposed, which not only captures the category information between different samples but also reflects the continuity between similar samples and the similarity between different samples.
Abstract: This paper develops a new dimensionality reduction method, named biomimetic uncorrelated locality discriminant projection (BULDP), for face recognition. It is based on unsupervised discriminant projection and two human bionic characteristics: the principle of homology continuity and the principle of heterogeneous similarity. With these two human bionic characteristics, we propose a novel adjacency coefficient representation, which not only captures the category information between different samples but also reflects the continuity between similar samples and the similarity between different samples. By applying this new adjacency coefficient in the unsupervised discriminant projection, we can transform the original data space into an uncorrelated discriminant subspace. A detailed solution of the proposed BULDP is given based on singular value decomposition. Moreover, we also develop a nonlinear version of our BULDP using kernel functions for nonlinear dimensionality reduction. The performance of the proposed algorithms is evaluated and compared with state-of-the-art methods on four public benchmarks for face recognition. Experimental results show that the proposed BULDP method and its nonlinear version achieve highly competitive recognition performance.

100 citations


Journal ArticleDOI
TL;DR: A Laplacian echo state network (LAESN) is proposed to overcome the ill-posed problem and obtain low-dimensional output weights, and experimental results based on two real-world data sets substantiate the effectiveness and characteristics of the proposed LAESN model.
Abstract: The echo state network is a novel kind of recurrent neural network, with a trainable linear readout layer and a large, fixed, recurrently connected hidden layer, which can be used to map the rich dynamics of complex real-world data sets. It has been extensively studied in time series prediction. However, an ill-posed problem may arise when the number of real-world training samples is smaller than the size of the hidden layer. In this brief, a Laplacian echo state network (LAESN) is proposed to overcome the ill-posed problem and obtain low-dimensional output weights. First, an echo state network is used to map the multivariate time series into a large reservoir. Then, assuming that an unknown underlying manifold lies inside the reservoir, we employ Laplacian eigenmaps to estimate the manifold by constructing an adjacency graph associated with the reservoir states. Finally, the output weights are calculated from the low-dimensional manifold. In addition, some criteria of transient stability, local controllability, and local observability are given. Experimental results based on two real-world data sets substantiate the effectiveness and characteristics of the proposed LAESN model.
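
A rough sketch of the manifold-estimation step, under the assumption that the reservoir states are available as a matrix; the random "reservoir" here is a stand-in, and scikit-learn's SpectralEmbedding is used as an off-the-shelf Laplacian eigenmaps implementation rather than the authors' own.

```python
import numpy as np
from sklearn.manifold import SpectralEmbedding

# Stand-in for echo state network reservoir states: T time steps,
# each mapped into an N-dimensional reservoir (random, for illustration).
rng = np.random.default_rng(0)
T, N = 500, 200
reservoir_states = rng.standard_normal((T, N))

# Laplacian eigenmaps: build a k-NN adjacency graph over the reservoir
# states and embed them into a low-dimensional manifold, as the paper
# does before computing the output weights.
embedding = SpectralEmbedding(n_components=5, affinity="nearest_neighbors",
                              n_neighbors=10)
low_dim = embedding.fit_transform(reservoir_states)
print(low_dim.shape)  # (500, 5)
```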

90 citations


Proceedings Article
01 Jan 2018
TL;DR: A new algorithm called Coordinated Matrix Minimization (CMM) is proposed, which alternately performs nonnegative matrix factorization and least square matching in the vertex adjacency space of the hypernetwork, in order to infer a subset of candidate hyperlinks that are most suitable to fill the training hypernetwork.
Abstract: This paper addresses the hyperlink prediction problem in hypernetworks. Different from the traditional link prediction problem, where only pairwise relations are considered as links, our task here is to predict the linkage of multiple nodes, i.e., a hyperlink. Each hyperlink is a set of an arbitrary number of nodes which together form a multiway relationship. Hyperlink prediction is challenging: since the cardinality of a hyperlink is variable, existing classifiers based on a fixed number of input features become infeasible. Heuristic methods, such as the common neighbors and Katz index, do not work for hyperlink prediction, since they are restricted to pairwise similarities. In this paper, we formally define the hyperlink prediction problem, and propose a new algorithm called Coordinated Matrix Minimization (CMM), which alternately performs nonnegative matrix factorization and least square matching in the vertex adjacency space of the hypernetwork, in order to infer a subset of candidate hyperlinks that are most suitable to fill the training hypernetwork. We evaluate CMM on two novel tasks: predicting recipes of Chinese food, and finding missing reactions of metabolic networks. Experimental results demonstrate the superior performance of our method over many seemingly promising baselines.

84 citations


Journal ArticleDOI
TL;DR: A modified TLPP (MTLPP) is proposed by building an adjacency graph on a dual feature space rather than the original space, in order to preserve the intrinsic geometric structure of data and enhance the discriminative ability of features in the low-dimensional space.
Abstract: By considering the cubic nature of hyperspectral images (HSIs) to address the curse of dimensionality, we have introduced a tensor locality preserving projection (TLPP) algorithm for HSI dimensionality reduction and classification. The TLPP algorithm reveals the local structure of the original data by constructing an adjacency graph. However, hyperspectral data are often susceptible to noise, which may lead to inaccurate graph construction. To resolve this issue, we propose a modified TLPP (MTLPP) that builds an adjacency graph on a dual feature space rather than the original space. To this end, the region covariance descriptor is exploited to characterize a region of interest around each hyperspectral pixel. The resulting covariances are symmetric positive definite matrices lying on a Riemannian manifold, so the Log-Euclidean metric is utilized as the similarity measure when searching for nearest neighbors. Since the defined covariance feature is more robust against noise, the constructed graph can preserve the intrinsic geometric structure of data and enhance the discriminative ability of features in the low-dimensional space. The experimental results on two real HSI data sets validate the effectiveness of our proposed MTLPP method.
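
For concreteness, here is a small sketch of the Log-Euclidean metric the paper uses to compare region covariance descriptors; the covariance matrices below are synthetic stand-ins for real hyperspectral pixel neighborhoods.

```python
import numpy as np
from scipy.linalg import logm

def log_euclidean_distance(A, B):
    """Log-Euclidean distance between two SPD matrices:
    d(A, B) = || logm(A) - logm(B) ||_F."""
    return np.linalg.norm(logm(A) - logm(B), ord="fro")

# Toy region covariances (SPD by construction) standing in for the
# covariance descriptors of two hyperspectral pixel neighborhoods.
rng = np.random.default_rng(1)
X, Y = rng.standard_normal((50, 6)), rng.standard_normal((50, 6))
C1 = np.cov(X, rowvar=False) + 1e-6 * np.eye(6)
C2 = np.cov(Y, rowvar=False) + 1e-6 * np.eye(6)
print(log_euclidean_distance(C1, C2))
```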

62 citations


Journal ArticleDOI
TL;DR: In this paper, the authors exploit the properties of cluster adjacency for scattering amplitudes in planar $\mathcal{N}=4$ super Yang-Mills theory to construct the symbol of the four-loop NMHV heptagon amplitude.
Abstract: We exploit the recently described property of cluster adjacency for scattering amplitudes in planar $\mathcal{N}=4$ super Yang-Mills theory to construct the symbol of the four-loop NMHV heptagon amplitude. We use a manifestly cluster adjacent ansatz and describe how the parameters of this ansatz are determined using simple physical consistency requirements. We then specialise our answer for the amplitude to the multi-Regge limit, finding agreement with previously available results up to the next-to-leading logarithm, and obtaining new predictions up to (next-to)$^3$-leading-logarithmic accuracy.

61 citations


Book ChapterDOI
16 Sep 2018
TL;DR: The proposed approach significantly reduces abnormalities produced during the segmentation of brain structures, and it can be used in a semi-supervised way, opening a path to better generalization to unseen data.
Abstract: Image segmentation based on convolutional neural networks is proving to be a powerful and efficient solution for medical applications. However, the lack of annotated data, the presence of artifacts, and variability in appearance can still result in inconsistencies during inference. We choose to take advantage of the invariant nature of anatomical structures by enforcing a semantic constraint to improve the robustness of the segmentation. The proposed solution is applied to a brain structure segmentation task, where the output of the network is constrained to satisfy a known adjacency graph of the brain regions. This criterion is introduced during training through an original penalization loss named NonAdjLoss. With the help of a new metric, we show that the proposed approach significantly reduces abnormalities produced during segmentation. Additionally, we demonstrate that our framework can be used in a semi-supervised way, opening a path to better generalization to unseen data.
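
A toy, hard-label rendition of the idea behind NonAdjLoss (the actual loss is a differentiable penalty computed from the network's probability maps): count 4-neighborhood label contacts that the known adjacency graph forbids. The labels and the allowed-adjacency matrix below are hypothetical.

```python
import numpy as np

def adjacency_violation(seg, allowed):
    """Count 4-neighborhood label adjacencies that the anatomical
    adjacency graph `allowed` (a boolean matrix) forbids. A hard-label
    simplification of the differentiable NonAdjLoss penalty."""
    violations = 0
    for dr, dc in ((0, 1), (1, 0)):            # right and down neighbors
        a = seg[: seg.shape[0] - dr, : seg.shape[1] - dc]
        b = seg[dr:, dc:]
        mask = a != b                          # boundaries between regions
        violations += np.count_nonzero(~allowed[a[mask], b[mask]])
    return violations

# Hypothetical example: labels 0 (background), 1, 2; structure 1 may
# not touch structure 2.
allowed = np.array([[1, 1, 1], [1, 1, 0], [1, 0, 1]], dtype=bool)
seg = np.array([[0, 1, 1], [0, 1, 2], [0, 2, 2]])
print(adjacency_violation(seg, allowed))  # counts forbidden 1-2 contacts
```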

58 citations


Journal ArticleDOI
TL;DR: A new approach for automatically detecting and classifying urban ground elements from 3D point clouds that enables highly detailed classification by combining geometric and topological information.

57 citations


Journal ArticleDOI
TL;DR: A graph-theoretic necessary condition for controllability is given by virtue of almost equitable partitions of directed weighted signed networks, and some necessary and sufficient conditions are given for the controllability of generic linear multi-agent systems.

54 citations


Journal ArticleDOI
TL;DR: It is shown that the eccentricity matrix of trees is irreducible, and the relations between the eigenvalues of the adjacency and eccentricity matrices are investigated.
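
Under the standard definition, the eccentricity matrix keeps the distance-matrix entry d(u, v) exactly when it attains min(ecc(u), ecc(v)) and zeroes it otherwise; below is a small sketch on a path graph (a tree), using networkx.

```python
import networkx as nx
import numpy as np

# Eccentricity matrix: keep d(u, v) when it equals min(ecc(u), ecc(v)),
# zero it otherwise.
G = nx.path_graph(5)                         # a small tree
d = dict(nx.all_pairs_shortest_path_length(G))
ecc = nx.eccentricity(G)
n = G.number_of_nodes()

E = np.zeros((n, n), dtype=int)
for u in G:
    for v in G:
        if u != v and d[u][v] == min(ecc[u], ecc[v]):
            E[u, v] = d[u][v]
print(E)
```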

43 citations


Proceedings ArticleDOI
01 Nov 2018
TL;DR: Log(Graph), as proposed in this paper, is a graph representation that combines high compression ratios with very low-overhead decompression to enable cheaper and faster graph processing, and it can improve the design of various graph processing engines and libraries on single NUMA nodes.
Abstract: Today's graphs used in domains such as machine learning or social network analysis may contain hundreds of billions of edges. Yet, they are not necessarily stored efficiently, and standard graph representations such as adjacency lists waste a significant number of bits while graph compression schemes such as WebGraph often require time-consuming decompression. To address this, we propose Log(Graph): a graph representation that combines high compression ratios with very low-overhead decompression to enable cheaper and faster graph processing. The key idea is to encode a graph so that the parts of the representation approach or match the respective storage lower bounds. We call our approach "graph logarithmization" because these bounds are usually logarithmic. Our high-performance Log(Graph) implementation based on modern bitwise operations and state-of-the-art succinct data structures achieves high compression ratios as well as performance. For example, compared to the tuned Graph Algorithm Processing Benchmark Suite (GAPBS), it reduces graph sizes by 20-35% while matching GAPBS' performance or even delivering speedups due to reducing amounts of transferred data. It approaches the compression ratio of the established WebGraph compression library while enabling speedups of more than 2×. Log(Graph) can improve the design of various graph processing engines or libraries on single NUMA nodes as well as distributed-memory systems.
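
A toy rendition of the "logarithmization" idea: pack each neighbor ID into ceil(log2 n) bits instead of a full machine word. This only sketches the storage bound, not the paper's succinct structures or bitwise kernels.

```python
import math

def pack_adjacency(adj, n):
    """Pack all adjacency lists into one integer bit stream using
    ceil(log2(n)) bits per neighbor ID (a toy version of the
    logarithmic storage bound Log(Graph) targets)."""
    w = max(1, math.ceil(math.log2(n)))
    bits, offsets, pos = 0, [], 0
    for u in range(n):
        offsets.append((pos, len(adj[u])))
        for v in adj[u]:
            bits |= v << (pos * w)
            pos += 1
    return bits, offsets, w

def neighbors(bits, offsets, w, u):
    pos, deg = offsets[u]
    mask = (1 << w) - 1
    return [(bits >> ((pos + i) * w)) & mask for i in range(deg)]

adj = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1]}
bits, offsets, w = pack_adjacency(adj, 4)
print(w, neighbors(bits, offsets, w, 1))  # 2 bits/ID -> [0, 3]
```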

Proceedings ArticleDOI
15 Feb 2018
TL;DR: This paper presents a graph processing system on an FPGA-HMC platform, based on software/hardware co-design and co-optimization, and develops two algorithm optimization techniques, degree-aware adjacency list reordering and degree-aware vertex index sorting, which substantially reduce the amount of access to external memory.
Abstract: Graph traversal is a core primitive for graph analytics and a basis for many higher-level graph analysis methods. However, irregularities in the structure of scale-free graphs (e.g., social networks) limit our ability to analyze these important and growing datasets. A key challenge is the redundant graph computations caused by the presence of high-degree vertices, which not only increase the total amount of computation but also incur unnecessary random data access. In this paper, we present a graph processing system on an FPGA-HMC platform, based on software/hardware co-design and co-optimization. For the first time, we leverage an inherent graph property, i.e., vertex degree, to co-optimize the algorithm and hardware architecture. In particular, we first develop two algorithm optimization techniques: degree-aware adjacency list reordering and degree-aware vertex index sorting. The former reduces the number of redundant graph computations, while the latter creates a strong correlation between vertex index and data access frequency, which can be effectively applied to guide the hardware design. We further implement the optimized hybrid graph traversal algorithm on an FPGA-HMC platform. By leveraging the strong correlation between vertex index and data access frequency created by degree-aware vertex index sorting, we develop two platform-dependent hardware optimization techniques, namely degree-aware data placement and degree-aware adjacency list compression. These two techniques together substantially reduce the amount of access to external memory. Finally, we conduct extensive experiments on an FPGA-HMC platform to verify the effectiveness of the proposed techniques. To the best of our knowledge, our implementation achieves the highest performance (45.8 billion traversed edges per second) among existing FPGA-based graph processing systems.
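
A minimal sketch of the degree-aware vertex index sorting idea: relabel vertices in descending degree order so that small indices correlate with access frequency. The toy graph is hypothetical; the paper applies this on FPGA-HMC hardware.

```python
def degree_aware_relabel(adj):
    """Relabel vertices in descending degree order so that hot
    (high-degree) vertices get small indices, creating the strong
    correlation between index and access frequency that guides the
    hardware design."""
    order = sorted(adj, key=lambda u: len(adj[u]), reverse=True)
    new_id = {u: i for i, u in enumerate(order)}
    return {new_id[u]: sorted(new_id[v] for v in nbrs)
            for u, nbrs in adj.items()}

# Hypothetical scale-free-ish toy graph: vertex 2 is the hub.
adj = {0: [2], 1: [2], 2: [0, 1, 3, 4], 3: [2, 4], 4: [2, 3]}
print(degree_aware_relabel(adj))  # hub 2 becomes vertex 0
```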

Journal ArticleDOI
TL;DR: Experimental results on ORL, AR and CMU PIE face databases validate the superiority of CRLDP over other state-of-the-art algorithms.

Journal ArticleDOI
TL;DR: A geometric construction called the adjacency polytope is developed that accurately captures the topology of a power network and is immensely useful in computing the solution bound of load flow equations.
Abstract: Active research in power systems has focused on developing computational methods to solve load flow equations, where a key question is the maximum number of solutions. Although several upper bounds exist, recent studies have hinted that much sharper upper bounds that depend on the topology of the underlying power networks may exist. This paper provides a significant refinement of these observations. We also develop a geometric construction called the adjacency polytope that accurately captures the topology of a power network and is immensely useful in the computation of the solution bound. Finally, we highlight the significant implications of the development of such solution bounds in numerically solving load flow equations.

Journal ArticleDOI
TL;DR: In this article, it was shown that the (local) metric dimension of the corona product of a graph of order n and some non-trivial graph H equals n times the local adjacency dimension of H.

Journal ArticleDOI
17 Jan 2018
TL;DR: A new graph embedding algorithm, called bilinear regularized locality preserving (BRLP), is derived on the Riemannian graph to address the problems of high dimensionality frequently arising in BCIs.
Abstract: In off-line training of motor imagery-based brain-computer interfaces (BCIs), the local information contained in test data can be used to enhance the generalization performance of the learned classifier and thus improve motor imagery performance as well. Further considering that the covariance matrices of the electroencephalogram (EEG) signal lie on a Riemannian manifold, in this paper we construct a Riemannian graph to incorporate the information of training and test data into processing. The adjacency and weights in the Riemannian graph are determined by the geodesic distance on the Riemannian manifold. Then, a new graph embedding algorithm, called bilinear regularized locality preserving (BRLP), is derived on the Riemannian graph to address the problems of high dimensionality frequently arising in BCIs. With a proposed regularization term encoding prior information of EEG channels, BRLP obtains more robust performance. Finally, an efficient classification algorithm based on the extreme learning machine is proposed to operate on the tangent space of the learned embedding. Experimental evaluations on BCI competition and in-house data sets reveal that the proposed algorithms obtain significantly higher performance than many competing algorithms using the same filtering process.
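
The geodesic distance commonly used for EEG covariance matrices is the one induced by the affine-invariant Riemannian metric; the abstract does not spell out the exact metric, so treat this as an assumption. A small sketch, with synthetic covariances standing in for real trials:

```python
import numpy as np
from scipy.linalg import logm, fractional_matrix_power

def airm_distance(A, B):
    """Geodesic distance between SPD covariance matrices under the
    affine-invariant Riemannian metric:
    d(A, B) = || logm(A^{-1/2} B A^{-1/2}) ||_F."""
    A_inv_sqrt = fractional_matrix_power(A, -0.5)
    return np.linalg.norm(logm(A_inv_sqrt @ B @ A_inv_sqrt), ord="fro")

# Toy EEG covariance matrices (SPD stand-ins for real trials).
rng = np.random.default_rng(2)
X, Y = rng.standard_normal((100, 8)), rng.standard_normal((100, 8))
C1, C2 = np.cov(X, rowvar=False), np.cov(Y, rowvar=False)
print(airm_distance(C1, C2))
```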

Journal ArticleDOI
TL;DR: In this paper, a new method is introduced for the identification of local retail agglomerations within Great Britain, implementing a modification of the established density-based spatial clustering of applications with noise (DBSCAN) method that improves local sensitivity to variable point densities.
Abstract: This research introduces a new method for the identification of local retail agglomerations within Great Britain, implementing a modification of the established density-based spatial clustering of applications with noise (DBSCAN) method that improves local sensitivity to variable point densities. The variability of retail unit density can be related to both the type and function of retail centers, but also to characteristics such as the size and extent of urban areas, population distribution, or property values. The suggested method implements a sparse graph representation of the retail unit locations based on a distance-constrained k-nearest neighbor adjacency list that is subsequently decomposed using the Depth First Search algorithm. DBSCAN is iteratively applied to each subgraph to extract the clusters with point density closer to an overall density for each study area. This innovative approach has the advantage of adjusting the radius parameter of DBSCAN at the local scale, thus improving the clustering output. A comparison of the estimated retail clusters against a sample of existing boundaries of retail areas shows that the suggested methodology provides a simple yet accurate and flexible way to automate the process of identifying retail clusters of varying shapes and densities across large areas; and by extension, enables their automated update over time.
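
A compressed sketch of this pipeline using off-the-shelf components: a distance-constrained kNN graph, decomposition into connected subgraphs (the paper uses Depth First Search; connected components yield the same subgraphs), and DBSCAN per subgraph with a locally adapted eps. All data and thresholds are illustrative.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph
from sklearn.cluster import DBSCAN
from scipy.sparse.csgraph import connected_components

rng = np.random.default_rng(3)
points = rng.uniform(0, 100, size=(400, 2))   # stand-in retail locations

# Distance-constrained kNN adjacency: drop edges longer than max_dist.
k, max_dist = 6, 8.0
knn = kneighbors_graph(points, n_neighbors=k, mode="distance")
knn.data[knn.data > max_dist] = 0
knn.eliminate_zeros()

# Decompose into subgraphs (equivalent to the paper's DFS decomposition).
n_comp, labels = connected_components(knn, directed=False)

# Run DBSCAN per component with eps adapted to local point density.
clusters = np.full(len(points), -1)
next_id = 0
for c in range(n_comp):
    idx = np.flatnonzero(labels == c)
    if len(idx) < 5:
        continue
    sub_graph = knn[idx][:, idx]
    eps = np.median(sub_graph.data) if sub_graph.nnz else max_dist
    sub = DBSCAN(eps=eps, min_samples=5).fit_predict(points[idx])
    clusters[idx[sub >= 0]] = sub[sub >= 0] + next_id
    next_id += sub.max() + 1 if sub.max() >= 0 else 0
print("clusters found:", next_id)
```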

Journal ArticleDOI
TL;DR: A novel approach that trains a fully convolutional network (FCN) to predict text line structure in document images, showing high performance and robustness across different languages and multi-skewed text lines.
Abstract: Line detection in handwritten documents is an important problem for processing of scanned documents. While existing approaches mainly use hand-designed features or heuristic rules to estimate the location of text lines, the authors present a novel approach that trains a fully convolutional network (FCN) to predict text line structure in document images. A rough estimation of text line, or a line map, is obtained by using FCN, from which text strings that pass through characters in each text line are constructed. Finally, the touching characters should be separated and assigned to different text lines to complete the segmentation, for which line adjacency graph is used. Experimental results on ICDAR2013 Handwritten Segmentation Contest data set show high performance together with the robustness of the system with different types of languages and multi-skewed text lines.

Journal ArticleDOI
TL;DR: A new software framework is presented, named BootCMatch, which implements all the components needed to build and apply the described adaptive AMG both as a stand-alone solver and as a preconditioner in a Krylov method.
Abstract: This article has two main objectives: one is to describe some extensions of an adaptive Algebraic Multigrid (AMG) method of the form previously proposed by the first and third authors, and a second one is to present a new software framework, named BootCMatch, which implements all the components needed to build and apply the described adaptive AMG both as a stand-alone solver and as a preconditioner in a Krylov method. The adaptive AMG presented is meant to handle general symmetric and positive definite (SPD) sparse linear systems, without assuming any a priori information of the problem and its origin; the goal of adaptivity is to achieve a method with a prescribed convergence rate. The presented method exploits a general coarsening process based on aggregation of unknowns, obtained by a maximum weight matching in the adjacency graph of the system matrix. More specifically, a maximum product matching is employed to define an effective smoother subspace (complementary to the coarse space), a process referred to as compatible relaxation, at every level of the recursive two-level hierarchical AMG process. Results on a large variety of test cases and comparisons with related work demonstrate the reliability and efficiency of the method and of the software.
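
A toy version of the matching-based coarsening step: build the adjacency graph of a sparse SPD matrix and aggregate unknowns along a maximum weight matching. networkx's matcher stands in for the paper's optimized matching codes, and the 1-D Laplacian is just an example system.

```python
import numpy as np
import networkx as nx

# Toy SPD system: 1-D Laplacian (tridiagonal), a stand-in for the
# general SPD matrices BootCMatch handles.
n = 8
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

# Adjacency graph of A: vertices = unknowns, edge weights from |a_ij|.
G = nx.Graph()
for i in range(n):
    for j in range(i + 1, n):
        if A[i, j] != 0:
            G.add_edge(i, j, weight=abs(A[i, j]))

# Maximum weight matching drives the pairwise aggregation of unknowns.
matching = nx.max_weight_matching(G)
aggregates = [sorted(pair) for pair in matching]
unmatched = set(range(n)) - {u for pair in matching for u in pair}
aggregates += [[u] for u in unmatched]
print(sorted(aggregates))  # each aggregate becomes one coarse unknown
```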

Proceedings ArticleDOI
11 Nov 2018
TL;DR: A fully-dynamic graph data structure for the Graphics Processing Unit (GPU) that delivers high update rates while keeping a low memory footprint using autonomous memory management directly on the GPU, demonstrating the suitability of the framework even for memory access intensive algorithms.
Abstract: In this paper, we present a fully-dynamic graph data structure for the Graphics Processing Unit (GPU). It delivers high update rates while keeping a low memory footprint using autonomous memory management directly on the GPU. The data structure is fully-dynamic, allowing not only for edge but also vertex updates. Performing the memory management on the GPU allows for fast initialization times and efficient update procedures without additional intervention or reallocation procedures from the host. Our optimized approach performs initialization completely in parallel; up to 300x faster compared to previous work. It achieves up to 200 million edge updates per second for sorted and unsorted update batches; up to 30x faster than previous work. Furthermore, it can perform more than 300 million adjacency queries and millions of vertex updates per second. On account of efficient memory management techniques like a queuing approach, currently unused memory is reused later on by the framework, permitting the storage of tens of millions of vertices and hundreds of millions of edges in GPU memory. We evaluate algorithmic performance using a PageRank and a Static Triangle Counting (STC) implementation, demonstrating the suitability of the framework even for memory access intensive algorithms.

Posted Content
TL;DR: This paper proposes a representational model for grid cells that can learn their hexagonal firing patterns and is capable of error correction, path integral, and path planning.
Abstract: This paper proposes a representational model for grid cells. In this model, the 2D self-position of the agent is represented by a high-dimensional vector, and the 2D self-motion or displacement of the agent is represented by a matrix that transforms the vector. Each component of the vector is a unit or a cell. The model consists of the following three sub-models. (1) Vector-matrix multiplication. The movement from the current position to the next position is modeled by matrix-vector multiplication, i.e., the vector of the next position is obtained by multiplying the matrix of the motion to the vector of the current position. (2) Magnified local isometry. The angle between two nearby vectors equals the Euclidean distance between the two corresponding positions multiplied by a magnifying factor. (3) Global adjacency kernel. The inner product between two vectors measures the adjacency between the two corresponding positions, which is defined by a kernel function of the Euclidean distance between the two positions. Our representational model has explicit algebra and geometry. It can learn hexagon patterns of grid cells, and it is capable of error correction, path integral and path planning.
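
A minimal numerical rendition of sub-models (1) and (3), assuming each pair of cells forms a 2-D rotor block; the frequencies and directions below are arbitrary choices, not learned as in the paper.

```python
import numpy as np

# Illustrative frequencies and directions (assumptions, not fitted).
rng = np.random.default_rng(4)
J = 64
w = rng.uniform(0.5, 1.5, J)                          # block frequencies
theta = rng.uniform(0, 2 * np.pi, J)
u = np.stack([np.cos(theta), np.sin(theta)], axis=1)  # block directions

def v(x):
    """High-dimensional vector representing 2-D position x (J rotor blocks)."""
    phases = w * (u @ x)
    return np.concatenate([np.cos(phases), np.sin(phases)]) / np.sqrt(J)

def move(vec, dx):
    """Sub-model (1): self-motion dx acts as a block-rotation matrix."""
    c, s = np.split(vec, 2)
    dphi = w * (u @ dx)
    return np.concatenate([c * np.cos(dphi) - s * np.sin(dphi),
                           s * np.cos(dphi) + c * np.sin(dphi)])

x1, x2 = np.array([1.0, 2.0]), np.array([1.3, 2.4])
print(np.allclose(move(v(x1), x2 - x1), v(x2)))       # path integration
# Sub-model (3): the inner product is a kernel of the displacement only.
print(v(x1) @ v(x2), np.mean(np.cos(w * (u @ (x1 - x2)))))
```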

Proceedings ArticleDOI
01 Feb 2018
TL;DR: It is shown that WSIM is NP-hard as long as one of the matrices has unbounded rank or negative eigenvalues: hence, the realm of tractability is restricted to positive semi-definite matrices of bounded rank.
Abstract: The graph similarity problem, also known as approximate graph isomorphism or graph matching problem, has been extensively studied in the machine learning community, but has not received much attention in the algorithms community: Given two graphs G, H of the same order n with adjacency matrices $A_G, A_H$, a well-studied measure of similarity is the Frobenius distance $\mathrm{dist}(G,H) := \min_{\pi} \|A_G^{\pi} - A_H\|_F$, where $\pi$ ranges over all permutations of the vertex set of G, where $A_G^{\pi}$ denotes the matrix obtained from $A_G$ by permuting rows and columns according to $\pi$, and where $\|M\|_F$ is the Frobenius norm of a matrix M. The (weighted) graph similarity problem, denoted by GSim (WSim), is the problem of computing this distance for two graphs of same order. This problem is closely related to the notoriously hard quadratic assignment problem (QAP), which is known to be NP-hard even for severely restricted cases. It is known that GSim (WSim) is NP-hard; we strengthen this hardness result by showing that the problem remains NP-hard even for the class of trees. Identifying the boundary of tractability for WSim is best done in the framework of linear algebra. We show that WSim is NP-hard as long as one of the matrices has unbounded rank or negative eigenvalues: hence, the realm of tractability is restricted to positive semi-definite matrices of bounded rank. Our main result is a polynomial time algorithm for the special case where the associated (weighted) adjacency graph for one of the matrices has a bounded number of twin equivalence classes. The key parameter underlying our algorithm is the clustering number of a graph; this parameter arises in context of the spectral graph drawing machinery.
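
Since the problem is NP-hard in general, only tiny instances can be solved exactly; here is a brute-force sketch of the Frobenius distance over all n! permutations.

```python
import itertools
import numpy as np

def graph_distance(A_G, A_H):
    """Brute-force dist(G, H) = min over permutations pi of
    ||A_G^pi - A_H||_F. Exponential in n, so only for tiny graphs
    (the hardness results above explain why)."""
    n = len(A_G)
    best = np.inf
    for pi in itertools.permutations(range(n)):
        P = np.asarray(pi)
        d = np.linalg.norm(A_G[np.ix_(P, P)] - A_H, ord="fro")
        best = min(best, d)
    return best

# Tiny example: path P3 vs. triangle, as 0/1 adjacency matrices.
A_path = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])
A_tri  = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]])
print(graph_distance(A_path, A_tri))  # sqrt(2): one edge must differ
```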

Posted Content
TL;DR: In this paper, the authors consider spectral clustering algorithms for community detection under a general bipartite stochastic block model (SBM) and propose a new data-driven regularization that can restore the concentration of the adjacency matrix even for the sparse networks.
Abstract: We consider spectral clustering algorithms for community detection under a general bipartite stochastic block model (SBM). A modern spectral clustering algorithm consists of three steps: (1) regularization of an appropriate adjacency or Laplacian matrix, (2) a form of spectral truncation, and (3) a k-means type algorithm in the reduced spectral domain. We focus on the adjacency-based spectral clustering and for the first step, propose a new data-driven regularization that can restore the concentration of the adjacency matrix even for the sparse networks. This result is based on recent work on regularization of random binary matrices, but avoids using unknown population level parameters, and instead estimates the necessary quantities from the data. We also propose and study a novel variation of the spectral truncation step and show how this variation changes the nature of the misclassification rate in a general SBM. We then show how the consistency results can be extended to models beyond SBMs, such as inhomogeneous random graph models with approximate clusters, including a graphon clustering problem, as well as general sub-Gaussian biclustering. A theme of the paper is providing a better understanding of the analysis of spectral methods for community detection and establishing consistency results, under fairly general clustering models and for a wide regime of degree growths, including sparse cases where the average expected degree grows arbitrarily slowly.
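
A bare-bones version of the three-step pipeline, using the simple "add a constant matrix" regularization often seen in this literature (the paper's data-driven regularization is more refined); the two-block SBM sample is synthetic.

```python
import numpy as np
from sklearn.cluster import KMeans

def regularized_spectral_clustering(A, k, tau=0.25):
    """Adjacency-based spectral clustering sketch: (1) regularize A,
    (2) spectral truncation to the top-k eigenvectors, (3) k-means."""
    n = len(A)
    dbar = A.sum() / n                               # average degree
    A_reg = A + tau * dbar / n * np.ones((n, n))     # simple regularization
    vals, vecs = np.linalg.eigh(A_reg)
    top = vecs[:, np.argsort(np.abs(vals))[-k:]]     # spectral truncation
    return KMeans(n_clusters=k, n_init=10).fit_predict(top)

# Toy 2-block SBM sample.
rng = np.random.default_rng(5)
n, p, q = 60, 0.5, 0.05
blocks = np.repeat([0, 1], n // 2)
P = np.where(blocks[:, None] == blocks[None, :], p, q)
A = (rng.uniform(size=(n, n)) < P).astype(float)
A = np.triu(A, 1)
A = A + A.T
print(regularized_spectral_clustering(A, 2))
```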

Journal ArticleDOI
TL;DR: A novel graph-based approach for semi-supervised learning problems, which considers an adaptive adjacency of the examples throughout the unsupervised portion of training, providing an effective and scalable graph-based solution that is natural to the operational mechanism of deep neural networks.

Journal ArticleDOI
TL;DR: A new methodology for the analysis of spatial fields of object data distributed over complex domains, using a random domain decomposition whose realizations define sets of homogeneous sub-regions in which to perform simple, independent, weak local analyses (divide), eventually aggregated into a final strong one (impera).
Abstract: We propose a new methodology for the analysis of spatial fields of object data distributed over complex domains. Our approach enables us to jointly handle both data and domain complexities through a divide et impera approach. As a key element of innovation, we propose to use a random domain decomposition, whose realizations define sets of homogeneous sub-regions in which to perform simple, independent, weak local analyses (divide), eventually aggregated into a final strong one (impera). In this broad framework, the complexity of the domain (e.g., strong concavities, holes, or barriers) can be accounted for by defining its partitions on the basis of a suitable metric, which allows us to properly represent the adjacency relationships among the complex data (such as scalar, functional, or constrained data) over the domain. As an illustration of the potential of the methodology, we consider the analysis and spatial prediction (Kriging) of the probability density function of dissolved oxygen in the Chesapeake Bay.

Journal ArticleDOI
TL;DR: A new and alternative method named heuristic four-color labeling is proposed, which aims to generate more reasonable color maps with a global view of the whole image and is a good substitute for the random coloring method when the latter produces unsatisfactory, messy segmentations.

Proceedings ArticleDOI
01 Jan 2018
TL;DR: It is shown that a simple natural relaxation of the ROM model allows us to implement fundamental graph search methods like BFS and DFS more space efficiently than in ROM, and that the model is more powerful than ROM if L != P.
Abstract: The read-only memory (ROM) model is a classical model of computation for studying time-space tradeoffs of algorithms. A classical result on the ROM model is that any algorithm to sort n numbers using O(s) words of extra space requires Omega(n^2/s) comparisons for lg n <= s <= n/lg n, and the bound has also been recently matched by an algorithm. However, if we relax the model, we do have sorting algorithms (say Heapsort) that can sort using O(n lg n) comparisons and O(lg n) bits of extra space, while keeping a permutation of the given input sequence at any time during the algorithm. We address similar relaxations for graph algorithms. We show that a simple natural relaxation of the ROM model allows us to implement fundamental graph search methods like BFS and DFS more space efficiently than in ROM. By simply allowing elements in the adjacency list of a vertex to be permuted, we show that, on an undirected or directed connected graph G having n vertices and m edges, the vertices of G can be output in a DFS or BFS order using O(lg n) bits of extra space and O(n^3 lg n) time. Thus we obtain similar bounds for reachability and shortest path distance (both for undirected and directed graphs). With a little more (but still polynomial) time, we can also output vertices in the lex-DFS order. As reachability in directed graphs (even in DAGs) and shortest path distance (even in undirected graphs) are NL-complete, and lex-DFS is P-complete, our results show that our model is more powerful than ROM if L != P. En route, we also introduce and develop algorithms for another relaxation of ROM where the adjacency lists of the vertices are circular lists and we can modify only the heads of the lists. Here we first show a linear time DFS implementation using n + O(lg n) bits of extra space. Improving the extra space exponentially to only O(lg n) bits, we also obtain BFS and DFS, albeit with a slightly slower running time. Both the models we propose maintain the graph structure throughout the algorithm; only the order of vertices in the adjacency lists changes. In sharp contrast, for BFS and DFS, to the best of our knowledge, there are no algorithms in ROM that use even O(n^{1-epsilon}) bits of extra space; in fact, implementing DFS using cn bits for c < 1 has been mentioned as an open problem. Furthermore, DFS (BFS, respectively) algorithms using n + o(n) (o(n), respectively) bits of extra space use Reingold's [JACM, 2008] or Barnes et al.'s reachability algorithm [SICOMP, 1998] and hence have high runtime. Our results can be contrasted with the recent result of Buhrman et al. [STOC, 2014], which gives an algorithm for directed st-reachability on catalytic Turing machines using O(lg n) bits with catalytic space O(n^2 lg n) and time O(n^9).

Journal ArticleDOI
TL;DR: In this paper, the authors studied the spectral determination problem for signed n-cycles with respect to the adjacency spectrum and the Laplacian spectrum, and they proved that balanced odd cycles and unbalanced cycles, denoted by $C_{2n+1}^{+}$ and $C_{n}^{-}$, are uniquely determined by their Laplacian spectra.

Proceedings ArticleDOI
Xiaoting Cui, Yanxiang Jiang, Xuan Chen, Fuchun Zheng, Xiaohu You
05 Mar 2018
TL;DR: This paper formulates the clustering optimization problem with consideration of cooperative caching and local content popularity, which falls into the scope of combinatorial programming, and proposes an effective graph-based approach to solve this challenging problem.
Abstract: In this paper, the cooperative caching problem in fog radio access networks (F-RAN) is investigated. To maximize the incremental offloaded traffic, we formulate the clustering optimization problem with consideration of cooperative caching and local content popularity, which falls into the scope of combinatorial programming. We then propose an effective graph-based approach to solve this challenging problem. Firstly, a node graph is constructed with its vertex set representing the considered fog access points (F-APs) and its edge set reflecting the potential cooperations among the F-APs. Then, by exploiting the adjacency table of each vertex of the node graph, we propose to obtain the complete subgraphs by indirectly searching for the maximal complete subgraphs, for the sake of reduced search complexity. Furthermore, by using the complete subgraphs so obtained, a weighted graph is constructed. By setting the weights of the vertices of the weighted graph to be the incremental offloaded traffic of their corresponding complete subgraphs, the original clustering optimization problem can be transformed into an equivalent 0-1 integer programming problem. The max-weight independent subset of the vertex set of the weighted graph, which is equivalent to the objective cluster sets, can then be readily obtained by solving the above optimization problem through the greedy algorithm that we propose. Our proposed graph-based approach has a markedly lower complexity than the brute-force approach, which has exponential complexity. Simulation results show remarkable improvements in terms of offloading gain by using our proposed approach.
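
A sketch of the final selection step, assuming the candidate clusters and their incremental offloaded traffic have already been computed: a greedy heuristic for the max-weight independent set on a conflict graph whose edges join clusters sharing an F-AP. Names and weights are hypothetical.

```python
import networkx as nx

def greedy_max_weight_independent_set(G):
    """Greedy heuristic for the max-weight independent set used to pick
    non-overlapping cluster candidates: repeatedly take the heaviest
    remaining vertex and drop its neighbors."""
    chosen, H = [], G.copy()
    while H.number_of_nodes():
        u = max(H.nodes, key=lambda v: H.nodes[v]["weight"])
        chosen.append(u)
        H.remove_nodes_from(list(H[u]) + [u])
    return chosen

# Hypothetical weighted conflict graph: vertices are candidate clusters
# (complete subgraphs of F-APs), weights their incremental offloaded
# traffic; edges join clusters sharing an F-AP.
G = nx.Graph()
G.add_nodes_from([("c1", {"weight": 5.0}), ("c2", {"weight": 3.0}),
                  ("c3", {"weight": 4.0}), ("c4", {"weight": 2.0})])
G.add_edges_from([("c1", "c2"), ("c2", "c3"), ("c3", "c4")])
print(greedy_max_weight_independent_set(G))  # ['c1', 'c3']
```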

Journal ArticleDOI
TL;DR: A generic solution to the problem of constructing a rectangular floor plan (RFP) for given adjacency requirements is presented by enumerating a set of RFPs that topologically contain all possible RFPs.