
Showing papers on "Adjacency list" published in 2017


Journal ArticleDOI
TL;DR: The approach based on complex embeddings is arguably simple, as it only involves a Hermitian dot product, the complex counterpart of the standard dot product between real vectors, whereas other methods resort to more and more complicated composition functions to increase their expressiveness.
Abstract: In statistical relational learning, knowledge graph completion deals with automatically understanding the structure of large knowledge graphs--labeled directed graphs--and predicting missing relationships--labeled edges. State-of-the-art embedding models propose different trade-offs between modeling expressiveness, and time and space complexity. We reconcile both expressiveness and complexity through the use of complex-valued embeddings and explore the link between such complex-valued embeddings and unitary diagonalization. We corroborate our approach theoretically and show that all real square matrices--thus all possible relation/adjacency matrices--are the real part of some unitarily diagonalizable matrix. This result opens the door to many other applications of square matrix factorization. Our approach based on complex embeddings is arguably simple, as it only involves a Hermitian dot product, the complex counterpart of the standard dot product between real vectors, whereas other methods resort to more and more complicated composition functions to increase their expressiveness. The proposed complex embeddings are scalable to large data sets as they remain linear in both space and time, while consistently outperforming alternative approaches on standard link prediction benchmarks.
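
The Hermitian-dot-product scoring described above can be sketched in a few lines. The snippet below is an illustrative NumPy version, with small random complex vectors standing in for trained subject, relation, and object embeddings; it is a sketch of the scoring rule, not the paper's training procedure.

```python
import numpy as np

def complex_score(e_s, w_r, e_o):
    """Score a triple (s, r, o) as the real part of the Hermitian dot product
    <w_r, e_s, conj(e_o)> between complex-valued embeddings."""
    return np.real(np.sum(w_r * e_s * np.conj(e_o)))

# Toy rank-4 complex embeddings (illustrative only).
rng = np.random.default_rng(0)
e_s = rng.normal(size=4) + 1j * rng.normal(size=4)   # subject embedding
w_r = rng.normal(size=4) + 1j * rng.normal(size=4)   # relation embedding
e_o = rng.normal(size=4) + 1j * rng.normal(size=4)   # object embedding

print(complex_score(e_s, w_r, e_o))  # higher scores suggest the edge is present
```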

196 citations


Journal ArticleDOI
Deyuan Meng
TL;DR: A class of bipartite containment tracking problems is considered for leader-following networks associated with signed digraphs; the framework admits multiple leaders, which can be not only stationary but also dynamically changing via interactions with their neighboring leaders.

148 citations


Journal ArticleDOI
TL;DR: Le et al. study the concentration of the adjacency and Laplacian matrices in the spectral norm for inhomogeneous Erdős–Rényi random graphs, where edges form independently and possibly with different probabilities pij.
Abstract (Le, Levina, and Vershynin): This paper studies how close random graphs are typically to their expectations. We interpret this question through the concentration of the adjacency and Laplacian matrices in the spectral norm. We study inhomogeneous Erdős–Rényi random graphs on n vertices, where edges form independently and possibly with different probabilities pij. Sparse random graphs whose expected degrees are o(log n) fail to concentrate; the obstruction is caused by vertices with abnormally high and low degrees. We show that concentration can be restored if we regularize the degrees of such vertices, and one can do this in various ways. As an example, let us reweight or remove enough edges to make all degrees bounded above by O(d), where d = max n·pij. Then we show that the resulting adjacency matrix A′ concentrates with the optimal rate: ||A′ − EA|| = O(√d). Similarly, if we make all degrees bounded below by d by adding weight d/n to all edges, then the resulting Laplacian concentrates with the optimal rate: ||L(A′) − L(EA′)|| = O(1/√d). Our approach is based on Grothendieck–Pietsch factorization, using which we construct a new decomposition of random graphs. We illustrate the concentration results with an application to the community detection problem in the analysis of networks. © 2017 Wiley Periodicals, Inc. Random Struct. Alg., 51, 538–561, 2017.
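
A rough numerical sketch of the degree-regularization idea is given below. The 2d cut-off and the particular down-weighting rule are assumptions within the class of regularizations the abstract allows, not the paper's exact procedure, and the homogeneous edge probability is illustrative.

```python
import numpy as np

def sample_inhomogeneous_er(P, rng):
    """Sample a symmetric adjacency matrix with independent edges, P[i, j] = pij."""
    n = P.shape[0]
    upper = np.triu(rng.random((n, n)) < P, k=1)
    return upper.astype(float) + upper.T.astype(float)

def regularize_degrees(A, d):
    """Down-weight edges at vertices whose degree exceeds 2*d so that every
    degree stays O(d); one of several valid regularizations (assumption)."""
    deg = A.sum(axis=1)
    scale = np.where(deg > 2 * d, 2 * d / np.maximum(deg, 1), 1.0)
    return A * scale[:, None] * scale[None, :]

rng = np.random.default_rng(0)
n, p = 1000, 0.005
P = np.full((n, n), p)
A = sample_inhomogeneous_er(P, rng)
d = n * p                                   # d = max n*pij for this homogeneous example
A_reg = regularize_degrees(A, d)
print(np.linalg.norm(A_reg - P, 2), np.sqrt(d))   # ||A' - EA|| should be on the order of sqrt(d)
```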

89 citations


Journal ArticleDOI
TL;DR: In this article, a test statistic is proposed that is a kernel-based function of the estimated latent positions obtained from the adjacency spectral embedding of each graph; it converges to the test statistic obtained using the true but unknown latent positions, and hence the proposed test procedure is consistent across a broad range of alternatives.
Abstract: We consider the problem of testing whether two independent finite-dimensional random dot product graphs have generating latent positions that are drawn from the same distribution, or distributions that are related via scaling or projection. We propose a test statistic that is a kernel-based function of the estimated latent positions obtained from the adjacency spectral embedding for each graph. We show that our test statistic using the estimated latent positions converges to the test statistic obtained using the true but unknown latent positions and hence that our proposed test procedure is consistent across a broad range of alternatives. Our proof of consistency hinges upon a novel concentration inequality for the suprema of an empirical process in the estimated latent positions setting.
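
The adjacency spectral embedding that produces the estimated latent positions can be sketched as follows; the two-block stochastic block model parameters are illustrative, and the paper's kernel-based two-sample statistic, which would then be computed from two such sets of embeddings, is not reproduced here.

```python
import numpy as np

def adjacency_spectral_embedding(A, d):
    """Embed each vertex as a row of U_d * sqrt(|S_d|), using the top-d
    eigenpairs (by magnitude) of the adjacency matrix."""
    vals, vecs = np.linalg.eigh(A)
    idx = np.argsort(np.abs(vals))[::-1][:d]
    return vecs[:, idx] * np.sqrt(np.abs(vals[idx]))

# Toy two-block random dot product graph (illustrative parameters).
rng = np.random.default_rng(1)
n = 200
B = np.array([[0.5, 0.2], [0.2, 0.4]])
z = np.repeat([0, 1], n // 2)
P = B[np.ix_(z, z)]
upper = np.triu(rng.random((n, n)) < P, k=1)
A = upper.astype(float) + upper.T.astype(float)

X_hat = adjacency_spectral_embedding(A, d=2)
print(X_hat.shape)   # (200, 2): estimated latent positions, one row per vertex
```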

67 citations


Patent
07 Feb 2017
TL;DR: In this paper, the authors used the information from the interaction with the deception mechanism, the interaction information of the network, and machine information for each machine to determine a possible trajectory of an adversary.
Abstract: This disclosure is related to using network flow information of a network to determine the trajectory of an attack. In some examples, an adjacency data structure is generated for a network. The adjacency data structure can include a machine of the network that has interacted with another machine of the network. The network can further include one or more deception mechanisms. The deception mechanisms can indicate that an attack is occurring when a machine interacts with one of the deception mechanisms. When the attack is occurring, attack trajectory information can be generated by locating in the adjacency data structure the machine that interacted with the deception mechanism. The attack trajectory information can correlate the information from the interaction with the deception mechanism, the interaction information of the network, and machine information for each machine to determine a possible trajectory of an adversary.
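
A minimal sketch of an adjacency data structure built from observed interactions, plus a breadth-first trace from the machine that touched a deception mechanism toward a suspected entry point. Machine names and the BFS trajectory heuristic are illustrative assumptions, not the patented method itself.

```python
from collections import defaultdict, deque

# Hypothetical adjacency structure: machine -> set of machines it has
# interacted with, built from observed network flow records.
adjacency = defaultdict(set)

def record_interaction(src, dst):
    adjacency[src].add(dst)
    adjacency[dst].add(src)

def possible_trajectory(deception_host, suspected_entry):
    """Breadth-first walk through observed interactions, from the machine that
    interacted with the deception mechanism toward a suspected entry point."""
    parent = {deception_host: None}
    queue = deque([deception_host])
    while queue:
        node = queue.popleft()
        if node == suspected_entry:
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path
        for nbr in adjacency[node]:
            if nbr not in parent:
                parent[nbr] = node
                queue.append(nbr)
    return None

record_interaction("laptop-7", "file-server")
record_interaction("file-server", "honeypot-1")
print(possible_trajectory("honeypot-1", "laptop-7"))  # ['honeypot-1', 'file-server', 'laptop-7']
```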

62 citations


Journal ArticleDOI
TL;DR: The results provide some evidence that a smaller number of neighbours used in defining the spatial weights matrix yields a better model fit, and may provide a more accurate representation of the underlying spatial random field.
Abstract: When analysing spatial data, it is important to account for spatial autocorrelation. In Bayesian statistics, spatial autocorrelation is commonly modelled by the intrinsic conditional autoregressive prior distribution. At the heart of this model is a spatial weights matrix which controls the behaviour and degree of spatial smoothing. The purpose of this study is to review the main specifications of the spatial weights matrix found in the literature, and together with some new and less common specifications, compare the effect that they have on smoothing and model performance. The popular BYM model is described, and a simple solution for addressing the identifiability issue among the spatial random effects is provided. Seventeen different definitions of the spatial weights matrix are defined, which are classified into four classes: adjacency-based weights, and weights based on geographic distance, distance between covariate values, and a hybrid of geographic and covariate distances. These last two definitions embody the main novelty of this research. Three synthetic data sets are generated, each representing a different underlying spatial structure. These data sets together with a real spatial data set from the literature are analysed using the models. The models are evaluated using the deviance information criterion and Moran’s I statistic. The deviance information criterion indicated that the model which uses binary, first-order adjacency weights to perform spatial smoothing is generally an optimal choice for achieving a good model fit. Distance-based weights also generally perform quite well and offer similar parameter interpretations. The less commonly explored options for performing spatial smoothing generally provided a worse model fit than models with more traditional approaches to smoothing, but usually outperformed the benchmark model which did not conduct spatial smoothing. The specification of the spatial weights matrix can have a colossal impact on model fit and parameter estimation. The results provide some evidence that a smaller number of neighbours used in defining the spatial weights matrix yields a better model fit, and may provide a more accurate representation of the underlying spatial random field. The commonly used binary, first-order adjacency weights still appear to be a good choice for implementing spatial smoothing.
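
For concreteness, here is a small sketch of two of the weight-matrix classes discussed: binary first-order adjacency weights and inverse-distance weights on k nearest neighbours. The neighbour lists, centroid coordinates, and the particular inverse-distance form are illustrative assumptions, not the paper's exact specifications.

```python
import numpy as np

def binary_first_order_weights(neighbours):
    """Binary, first-order adjacency weights: w_ij = 1 if areas i and j share
    a border, 0 otherwise (the specification the study finds generally optimal)."""
    n = len(neighbours)
    W = np.zeros((n, n))
    for i, nbrs in enumerate(neighbours):
        W[i, list(nbrs)] = 1.0
    return W

def distance_based_weights(coords, k=4):
    """Alternative class: connect each area to its k nearest neighbours by
    centroid distance, weighted by inverse distance (illustrative form)."""
    coords = np.asarray(coords, dtype=float)
    D = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
    W = np.zeros_like(D)
    for i in range(len(coords)):
        nearest = np.argsort(D[i])[1:k + 1]        # skip the area itself
        W[i, nearest] = 1.0 / D[i, nearest]
    return np.maximum(W, W.T)                       # symmetrise

# Illustrative 4-area map: neighbour lists and centroids are made up.
W_adj = binary_first_order_weights([{1}, {0, 2}, {1, 3}, {2}])
W_dist = distance_based_weights([(0, 0), (1, 0), (2, 1), (3, 3)], k=2)
print(W_adj)
print(W_dist.round(2))
```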

62 citations


Proceedings Article
12 Jul 2017
TL;DR: It is demonstrated that NUMA-awareness and its attendant pre-processing costs are beneficial only on large machines and for certain algorithms, calling into question the benefits of proposed algorithmic optimizations that rely on extensive preprocessing.
Abstract: Graph processing systems are used in a wide variety of fields, ranging from biology to social networks, and a large number of such systems have been described in the recent literature. We perform a systematic comparison of various techniques proposed to speed up in-memory multicore graph processing. In addition, we take an end-to-end view of execution time, including not only algorithm execution time, but also pre-processing time and the time to load the graph input data from storage. More specifically, we study various data structures to represent the graph in memory, various approaches to pre-processing and various ways to structure the graph computation. We also investigate approaches to improve cache locality, synchronization, and NUMA-awareness. In doing so, we take our inspiration from a number of graph processing systems, and implement the techniques they propose in a single system. We then selectively enable different techniques, allowing us to assess their benefits in isolation and independent of unrelated implementation considerations. Our main observation is that the cost of pre-processing in many circumstances dominates the cost of algorithm execution, calling into question the benefits of proposed algorithmic optimizations that rely on extensive preprocessing. Equally surprising, using radix sort turns out to be the most efficient way of pre-processing the graph input data into adjacency lists, when the graph input data is already in memory or is loaded from fast storage. Furthermore, we adapt a technique developed for out-of-core graph processing, and show that it significantly improves cache locality. Finally, we demonstrate that NUMA-awareness and its attendant pre-processing costs are beneficial only on large machines and for certain algorithms.
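
The pre-processing step singled out above, turning an in-memory edge list into adjacency lists, can be sketched as a single counting pass on the source vertex; a full radix sort would repeat this pass per digit. This is an illustrative sketch, not the evaluated system's implementation.

```python
import numpy as np

def edge_list_to_csr(src, dst, n):
    """Build a CSR-style adjacency list (offsets + neighbours) from an edge list
    by counting-sorting edges on their source vertex."""
    counts = np.bincount(src, minlength=n)
    offsets = np.zeros(n + 1, dtype=np.int64)
    offsets[1:] = np.cumsum(counts)
    neighbours = np.empty(len(src), dtype=np.int64)
    cursor = offsets[:-1].copy()
    for s, d in zip(src, dst):          # scatter each edge into its source's slot
        neighbours[cursor[s]] = d
        cursor[s] += 1
    return offsets, neighbours

src = np.array([0, 2, 0, 1, 2])
dst = np.array([1, 0, 2, 2, 1])
offsets, neighbours = edge_list_to_csr(src, dst, n=3)
print(offsets, neighbours)   # neighbours of vertex v: neighbours[offsets[v]:offsets[v+1]]
```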

58 citations


Journal ArticleDOI
TL;DR: Experiments conducted on several DARPA VIVID video sequences as well as self-captured videos show that the proposed method is robust to unknown transformations, with significant improvements in overall precision and recall compared to existing works.
Abstract: Image registration has been long used as a basis for the detection of moving objects. Registration techniques attempt to discover correspondences between consecutive frame pairs based on image appearances under rigid and affine transformations. However, spatial information is often ignored, and different motions from multiple moving objects cannot be efficiently modeled. Moreover, image registration is not well suited to handle occlusion that can result in potential object misses. This paper proposes a novel approach to address these problems. First, segmented video frames from unmanned aerial vehicle captured video sequences are represented using region adjacency graphs of visual appearance and geometric properties. Correspondence matching (for visible and occluded regions) is then performed between graph sequences by using multigraph matching. After matching, region labeling is achieved by a proposed graph coloring algorithm which assigns a background or foreground label to the respective region. The intuition of the algorithm is that background scene and foreground moving objects exhibit different motion characteristics in a sequence, and hence, their spatial distances are expected to be varying with time. Experiments conducted on several DARPA VIVID video sequences as well as self-captured videos show that the proposed method is robust to unknown transformations, with significant improvements in overall precision and recall compared to existing works.
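
A minimal sketch of building a region adjacency graph from a segmented label image, the representation the pipeline above starts from. The tiny label array is illustrative, and the actual system additionally attaches visual-appearance and geometric attributes to nodes and edges, which this sketch omits.

```python
import numpy as np
from collections import defaultdict

def region_adjacency_graph(labels):
    """Return a dict mapping pairs of region labels to their shared border length;
    two regions are adjacent if any of their pixels are 4-connected neighbours."""
    edges = defaultdict(int)
    for a, b in ((labels[:, :-1], labels[:, 1:]),     # horizontal neighbours
                 (labels[:-1, :], labels[1:, :])):    # vertical neighbours
        diff = a != b
        for u, v in zip(a[diff].ravel(), b[diff].ravel()):
            key = (int(min(u, v)), int(max(u, v)))
            edges[key] += 1                           # count shared border pixels
    return dict(edges)

# Tiny segmented frame (illustrative): three regions labelled 0, 1, 2.
labels = np.array([[0, 0, 1],
                   [0, 2, 1],
                   [2, 2, 1]])
print(region_adjacency_graph(labels))
```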

52 citations


Journal ArticleDOI
TL;DR: This paper considers the distributed consensus tracking problem for a class of high-order stochastic multiagent systems with uncertain nonlinear functions under a fixed undirected graph; novel nonlinear distributed controllers are designed through a recursive method.
Abstract: This paper considers the distributed consensus tracking problem for a class of high-order stochastic multiagent systems with uncertain nonlinear functions under a fixed undirected graph. Through a recursive method, novel nonlinear distributed controllers are designed. By constructing a special form for the virtual controller in the first step of the recursive design, the state variables of every agent are separated except for the outputs of the adjacent agents. The designed controller of each agent therefore depends only on its own state variables and the outputs of the adjacent agents. With the proposed method, it is no longer required that all agents have the same order, which makes the designed controller easier to implement and the proposed method applicable to a wider class of multiagent systems. The efficiency of the design approach is illustrated by a simulation example.
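
The paper's recursive (backstepping-style) design is not reproduced here, but the locality property it relies on, each agent using only its own state and its neighbours' outputs through the adjacency matrix, can be illustrated with a much simpler first-order tracking loop; the graph, gains, and leader value below are illustrative assumptions.

```python
import numpy as np

# Fixed undirected adjacency matrix of a 4-agent ring (illustrative).
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)

x = np.array([1.0, -2.0, 0.5, 3.0])   # agent states
leader = 0.0                          # reference to be tracked
gain, dt = 1.0, 0.05

for _ in range(200):
    # Each agent only uses its own state and neighbours' outputs:
    # coupling_i = sum_j a_ij * (x_j - x_i)
    coupling = A @ x - A.sum(axis=1) * x
    x = x + dt * (gain * coupling - (x - leader))

print(x.round(3))                     # states converge towards the leader
```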

47 citations


Journal ArticleDOI
TL;DR: This paper presents a systematic method for exploring the different possible parameterizations of a planar domain by collections of quadrilateral patches and aims to find the optimal multi-patch parameterization with respect to an objective function that captures the parameterization quality.
Abstract: As a remarkable difference to the existing CAD technology, where shapes are represented by their boundaries, FEM-based isogeometric analysis typically needs a parameterization of the interior of the domain. Due to the strong influence on the accuracy of the analysis, methods for constructing a good parameterization are fundamentally important. The flexibility of single patch representations is often insufficient, especially when more complex geometric shapes have to be represented. Using a multi-patch structure may help to overcome this challenge. In this paper we present a systematic method for exploring the different possible parameterizations of a planar domain by collections of quadrilateral patches. Given a domain, which is represented by a certain number of boundary curves, our aim is to find the optimal multi-patch parameterization with respect to an objective function that captures the parameterization quality. The optimization considers both the location of the control points and the layout of the multi-patch structure. The latter information is captured by pre-computed catalogs of all available multi-patch topologies. Several numerical examples demonstrate the performance of the method.

43 citations


Journal ArticleDOI
TL;DR: A method is introduced for a graph representation based on adjacency of vertices and soft set theory, and a metric is defined to find distances between graphs represented by soft sets.
Abstract: The neighborhood of each vertex in a graph can be very useful in its representation, and soft set theory provides a new tool for such representation. In this paper, a method is introduced for representing a graph based on the adjacency of vertices and soft set theory. With this representation, applying the algebraic operations available on soft sets may reveal many new aspects of graph theory. In addition, a metric is defined to find distances between graphs represented by soft sets.
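
A sketch of the adjacency-based soft-set representation: each vertex is a parameter mapped to its neighbourhood set. The symmetric-difference distance below is an illustrative stand-in, not necessarily the metric defined in the paper.

```python
def soft_set_of_graph(edges, vertices):
    """Map every vertex to the set of vertices adjacent to it."""
    soft = {v: set() for v in vertices}
    for u, v in edges:
        soft[u].add(v)
        soft[v].add(u)
    return soft

def soft_set_distance(s1, s2):
    """Sum of symmetric differences of neighbourhoods over a common vertex set
    (an illustrative distance, assumed for this sketch)."""
    vertices = set(s1) | set(s2)
    return sum(len(s1.get(v, set()) ^ s2.get(v, set())) for v in vertices)

V = {1, 2, 3, 4}
G1 = soft_set_of_graph({(1, 2), (2, 3), (3, 4)}, V)
G2 = soft_set_of_graph({(1, 2), (2, 3), (1, 4)}, V)
print(soft_set_distance(G1, G2))   # 4: the two graphs differ in two edge slots
```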

DOI
07 Dec 2017
TL;DR: This document provides the theoretical basis behind the modules of the MAJA processor, which is applicable to time series of Sentinel-2, Landsat, Venµs, and Formosat satellites.
Abstract: This document provides the theoretical basis behind the modules of the MAJA processor. MAJA stands for MACCS-ATCOR Joint Algorithm, where MACCS was the Multi-Temporal Atmospheric Correction and Cloud Screening software developed by CNES and CESBIO, and ATCOR is the Atmospheric Correction software developed by DLR. MAJA is based on the MACCS architecture and includes modules that come from ATCOR. The MAJA processor is applicable to time series of Sentinel-2, Landsat, Venµs, and Formosat satellites. This Algorithmic Theoretical Basis Document (ATBD) provides a scientific description of the methods used within MAJA and some justification of the choices made, as well as basic validation results.

Journal ArticleDOI
TL;DR: A bi-objective model for multi-floor facility layout problem with fixed inner configuration and room adjacency constraints is proposed to minimize both the total material handling cost and total occupied room area.

Journal ArticleDOI
TL;DR: The results show that the proposed method outperforms the existing normalized cut based method for small road networks and provides impressive results for much larger networks, where other methods may face serious problems of time and space complexities.

Journal ArticleDOI
TL;DR: It is shown that there exists a graph G with O(n) nodes such that any forest of n nodes is a node-induced subgraph of G; for constant arboricity k, the result implies the existence of a graph with O(n^k) nodes that contains all n-node graphs of arboricity k as node-induced subgraphs, matching an Ω(n^k) lower bound.
Abstract: In this article, we show that there exists a graph G with O(n) nodes such that any forest of n nodes is an induced subgraph of G. Furthermore, for constant arboricity k, the result implies the existence of a graph with O(n^k) nodes that contains all n-node graphs of arboricity k as node-induced subgraphs, matching an Ω(n^k) lower bound of Alstrup and Rauhe. Our upper bounds are obtained through a log_2(n) + O(1) labeling scheme for adjacency queries in forests. We hereby solve an open problem raised repeatedly over decades by authors such as Kannan et al., Chung, and Fraigniaud and Korman.
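
As a point of comparison, here is a deliberately naive adjacency labelling for forests that uses about 2·log_2(n) bits per label (node id plus parent id); the paper's contribution is compressing this to log_2(n) + O(1) bits, which this sketch does not attempt.

```python
def label_forest(parent):
    """parent[v] is v's parent, or None for roots; each node's label is (id, parent id)."""
    return {v: (v, parent[v]) for v in parent}

def adjacent(label_u, label_v):
    """Two forest nodes are adjacent exactly when one is the parent of the other."""
    u, pu = label_u
    v, pv = label_v
    return pu == v or pv == u

parent = {0: None, 1: 0, 2: 0, 3: 1}       # a small rooted tree
labels = label_forest(parent)
print(adjacent(labels[1], labels[3]))      # True: 1 is 3's parent
print(adjacent(labels[2], labels[3]))      # False
```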

Journal ArticleDOI
TL;DR: The concept of energy of fuzzy graph is extended to the energy of a vague graph, which has many applications in physics, chemistry, computer science, and other branches of mathematics.
Abstract: The concept of vague graph was introduced by Ramakrishna (Int J Comput Cognit 7:51–58, 2009). Since vague models give more precision, flexibility, and compatibility to a system compared to classical and fuzzy models, in this paper the concept of the energy of a fuzzy graph is extended to the energy of a vague graph, which has many applications in physics, chemistry, computer science, and other branches of mathematics. We define the adjacency matrix, degree matrix, Laplacian matrix, spectrum, and energy of a vague graph in terms of its adjacency matrix. The spectrum of a vague graph appears in statistical physics problems and in combinatorial optimization problems in mathematics. Lower and upper bounds for the energy of a vague graph are also derived. Finally, we give some applications of energy in vague graphs and other sciences.
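
For reference, here is the crisp (non-vague) version of the energy being generalized: the sum of the absolute values of the adjacency eigenvalues. In the vague setting the adjacency entries become pairs of true/false membership values and the computation is applied componentwise; the snippet below handles only the ordinary case.

```python
import numpy as np

def graph_energy(A):
    """Energy of a graph: the sum of the absolute values of the eigenvalues
    of its adjacency matrix."""
    return np.abs(np.linalg.eigvalsh(A)).sum()

# 4-cycle: adjacency eigenvalues are 2, 0, 0, -2, so the energy is 4.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
print(graph_energy(A))   # 4.0
```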

Proceedings Article
01 Jan 2017
TL;DR: In this article, a provably polynomial-time algorithm for learning sparse Gaussian Bayesian networks with equal noise variance was proposed, which can recover the true directed acyclic graph (DAG) structure with high probability.
Abstract: Learning the directed acyclic graph (DAG) structure of a Bayesian network from observational data is a notoriously difficult problem for which many non-identifiability and hardness results are known. In this paper we propose a provably polynomial-time algorithm for learning sparse Gaussian Bayesian networks with equal noise variance --- a class of Bayesian networks for which the DAG structure can be uniquely identified from observational data --- under high-dimensional settings. We show that $O(k^4 \log p)$ number of samples suffices for our method to recover the true DAG structure with high probability, where $p$ is the number of variables and $k$ is the maximum Markov blanket size. We obtain our theoretical guarantees under a condition called \emph{restricted strong adjacency faithfulness} (RSAF), which is strictly weaker than strong faithfulness --- a condition that other methods based on conditional independence testing need for their success. The sample complexity of our method matches the information-theoretic limits in terms of the dependence on $p$. We validate our theoretical findings through synthetic experiments.

Journal ArticleDOI
TL;DR: A fully automatic mesh segmentation scheme using heterogeneous graph Laplacian and weighted dual mesh graph that outperforms the state-of-the-art unsupervised methodologies and is comparable to the best supervised approaches.
Abstract: A fully automatic mesh segmentation scheme using heterogeneous graphs is presented. We introduce a spectral framework where local geometry affinities are coupled with surface patch affinities. A heterogeneous graph is constructed combining two distinct graphs: a weighted graph based on adjacency of patches of an initial over-segmentation, and the weighted dual mesh graph. The partitioning relies on processing each eigenvector of the heterogeneous graph Laplacian individually, taking into account the nodal set and nodal domain theory. Experiments on standard datasets show that the proposed unsupervised approach outperforms the state-of-the-art unsupervised methodologies and is comparable to the best supervised approaches.

Journal ArticleDOI
TL;DR: In this paper, the authors investigated the effect of five meshing parameters on the analysis solving time and the analysis quality, including the adjacency ratio and minimum and maximum element size.
Abstract: This work investigates the use of hierarchical mesh decomposition strategies for topology optimisation using bi-directional evolutionary structural optimisation algorithm. The proposed method uses a dual mesh system that decouples the design variables from the finite element analysis mesh. The investigation focuses on previously unexplored areas of these techniques to investigate the effect of five meshing parameters on the analysis solving time (i.e. computational effort) and the analysis quality (i.e. solution optimality). The foreground mesh parameters, including adjacency ratio and minimum and maximum element size, were varied independently across solid and void domain regions. Within the topology optimisation, strategies for controlling the mesh parameters were investigated. The differing effects of these parameters on the efficiency and efficacy of the analysis and optimisation stages are discussed, and recommendations are made for parameter combinations. Some of the key findings were that increasing the adjacency ratio increased the efficiency only modestly – the largest effect was for the minimum and maximum element size parameters – and that the most dramatic reduction in solve time can be achieved by not setting the minimum element size too low, assuming mapping onto a background mesh with a minimum element size of 1.

Journal ArticleDOI
TL;DR: In this paper, the authors study properties of Cartesian products of digital images for which adjacencies based on the normal product adjacency are used, and show that the use of such adjACencies lets us obtain many "product properties" for which the analogous statement is either unknown or invalid if, instead, we were to use c_u-adjacencies.
Abstract: We study properties of Cartesian products of digital images for which adjacencies based on the normal product adjacency are used. We show that the use of such adjacencies lets us obtain many "product properties" for which the analogous statement is either unknown or invalid if, instead, we were to use c_u-adjacencies.
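
A small sketch of the normal-product adjacency in the simplest setting, the product of two 1-D digital images with c_1 adjacency: distinct points of the product are adjacent exactly when, in every coordinate, the components are equal or adjacent in the corresponding factor (which reproduces 8-adjacency on Z^2). The factor images and adjacency predicate are illustrative.

```python
def c1_adjacent(a, b):
    """c_1 adjacency on the integers: points differing by exactly 1."""
    return abs(a - b) == 1

def normal_product_adjacent(p, q, factor_adjacent=c1_adjacent):
    """Normal (strong) product adjacency: distinct tuples whose coordinates are,
    componentwise, either equal or adjacent in their factor."""
    if p == q:
        return False
    return all(a == b or factor_adjacent(a, b) for a, b in zip(p, q))

print(normal_product_adjacent((0, 0), (1, 1)))   # True: diagonal neighbours count
print(normal_product_adjacent((0, 0), (0, 1)))   # True
print(normal_product_adjacent((0, 0), (2, 1)))   # False
```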

Journal ArticleDOI
TL;DR: A signature-based search algorithm is proposed that encodes the shortest-path distance from a vertex to any given keyword in the graph, and can find query answers by exploring fewer paths, so that the time and communication costs are low.
Abstract: Graph keyword search has drawn many research interests, since graph models can generally represent both structured and unstructured databases and keyword searches can extract valuable information for users without the knowledge of the underlying schema and query language. In practice, data graphs can be extremely large, e.g., a Web-scale graph containing billions of vertices. The state-of-the-art approaches employ centralized algorithms to process graph keyword searches, and thus they are infeasible for such large graphs, due to the limited computational power and storage space of a centralized server. To address this problem, we investigate keyword search for Web-scale graphs deployed in a distributed environment. We first give a naive search algorithm to answer the query efficiently. However, the naive search algorithm uses a flooding search strategy that incurs large time and network overhead. To remedy this shortcoming, we then propose a signature-based search algorithm. Specifically, we design a vertex signature that encodes the shortest-path distance from a vertex to any given keyword in the graph. As a result, we can find query answers by exploring fewer paths, so that the time and communication costs are low. Moreover, we reorganize the graph data in the cluster after its initial random partitioning so that the signature-based techniques are more effective. Finally, our experimental results demonstrate the feasibility of our proposed approach in performing keyword searches over Web-scale graph data.
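
One plausible reading of the vertex signature is sketched below: a multi-source BFS per keyword records each vertex's shortest-path distance to that keyword, and the per-vertex distance vectors can then prune paths during search. The exact encoding and the distributed bookkeeping in the paper are not reproduced; the toy graph and keywords are made up.

```python
from collections import deque

def keyword_distance_signatures(adjacency, keyword_vertices):
    """For each keyword, run a BFS from every vertex containing it and record,
    for every reachable vertex, its shortest-path distance to that keyword."""
    signatures = {v: {} for v in adjacency}
    for keyword, sources in keyword_vertices.items():
        dist = {v: 0 for v in sources}
        queue = deque(sources)
        while queue:
            u = queue.popleft()
            for w in adjacency[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    queue.append(w)
        for v, d in dist.items():
            signatures[v][keyword] = d
    return signatures

adjacency = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3]}
keyword_vertices = {"database": [1], "graph": [4]}
print(keyword_distance_signatures(adjacency, keyword_vertices))
```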

Journal ArticleDOI
30 May 2017 - Cauchy
TL;DR: In this article, the authors investigated the adjacency spectrum, Laplacian spectrum, signless Laplacian spectrum, and detour spectrum of the commuting and non-commuting graphs of the dihedral group $D_{2n}$.
Abstract: The study of graph spectra has become an interesting line of work, as has the study of the commuting and non-commuting graphs of a group or a ring. However, the spectra of the commuting and non-commuting graphs of the dihedral group have not yet been studied. In this paper, we investigate the adjacency spectrum, Laplacian spectrum, signless Laplacian spectrum, and detour spectrum of the commuting and non-commuting graphs of the dihedral group $D_{2n}$.

Posted Content
TL;DR: This paper contributes a uniformization process for a general hypergraph that allows the definition of an e-adjacency tensor, viewed as a hypermatrix, reflecting the general hypergraph structure.
Abstract: Adjacency between two vertices in graphs or hypergraphs is a pairwise relationship. It is redefined in this article as 2-adjacency. In general hypergraphs, hyperedges hold an $n$-adic relationship. To keep the $n$-adic relationship, the concepts of $k$-adjacency and e-adjacency are defined. In graphs, the 2-adjacency and e-adjacency concepts match, just as $k$-adjacency and e-adjacency do for $k$-uniform hypergraphs. For general hypergraphs these concepts are different. This paper also contributes a uniformization process for a general hypergraph that allows the definition of an e-adjacency tensor, viewed as a hypermatrix, reflecting the general hypergraph structure. This symmetric e-adjacency hypermatrix captures not only the degree of the vertices and the cardinality of the hyperedges but also achieves a full separation of the different layers of a hypergraph.

Journal ArticleDOI
TL;DR: A sufficient condition is found for adjacency of vertices of the (unbounded version of the) set covering polyhedron, and it is applied to show a new infinite family of minimally nonideal matrices.

Journal ArticleDOI
TL;DR: This research introduces a new approach by integrating fuzzy AHP and gray MCDM methods to solve all decision-making problems in the case of a copper mine area.
Abstract: The accurate selection of a processing plant site can result in decreasing total mining cost. This problem can be solved by multi-criteria decision-making (MCDM) methods. This research introduces a new approach by integrating fuzzy AHP and gray MCDM methods to solve all decision-making problems. The approach is applied in the case of a copper mine area. The critical criteria are considered adjacency to the crusher, adjacency to tailing dam, adjacency to a power source, distance from blasting sources, the availability of sufficient land, and safety against floods. After studying the mine map, six feasible alternatives are prioritized using the integrated approach. Results indicated that sites A, B, and E take the first three ranks. The separate results of fuzzy AHP and gray MCDM confirm that alternatives A and B have the first two ranks. Moreover, the field investigations approved the results obtained by the approach.


Journal ArticleDOI
05 Oct 2017
TL;DR: In this article, it is proved that any connected graph cospectral with a multicone graph is determined by its adjacency spectra as well as its Laplacian spectra.
Abstract: This paper deals with graphs that are known as multicone graphs. A multicone graph is a graph obtained from the join of a clique and a regular graph. Let $w$, $l$, $m$, and $k$ be natural numbers. It is proved that any connected graph cospectral with the multicone graph $K_w\bigtriangledown mECP_{l}^{k}$ is determined by its adjacency spectra as well as its Laplacian spectra, where $ECP_{l}^{k}=K_{\underbrace{3^k,\,3^k,\,\dots,\,3^k}_{l\ \text{times}}}$. Also, we show that complements of some of these multicone graphs are determined by their adjacency spectra. Moreover, we prove that any connected graph cospectral with these multicone graphs must be perfect. Finally, we pose two problems for further research.


Proceedings ArticleDOI
01 Sep 2017
TL;DR: In this article, a graph-theoretic strategy of reconfiguration switching scheme incorporating microgrids to localize potential fraudulent area is presented, where a graph transformation from a distribution network to a spanning tree is proposed and a mathematical conversion from the graph to adjacency or incidence matrix is then formed.
Abstract: The intelligent grid has been envisioned as a next-generation framework to modernize the power grid and improve its efficiency and sustainability. Advanced metering infrastructure (AMI) has become indispensable in a smart grid to support real-time and reliable information exchange. However, computerizing the metering system also introduces numerous new vectors for energy fraud. Energy fraud is a notorious problem in electric power systems, which can cause great economic losses in business. This paper presents a graph-theoretic strategy of reconfiguration switching scheme incorporating microgrids to localize potential fraudulent areas. First, a graph transformation from a distribution network to a spanning tree is proposed. A mathematical conversion from the graph to an adjacency or incidence matrix is then formed. The switching procedures based on the matrix level are utilized to detect fraud in accordance with anomaly score and likelihood of the fraudulent events.
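
A minimal sketch of the matrix-level representation mentioned above, the adjacency and vertex-edge incidence matrices of a spanning tree of a feeder. The 5-bus topology is made up, and the switching and anomaly-scoring procedures themselves are not shown.

```python
import numpy as np

def tree_matrices(n, tree_edges):
    """Adjacency and vertex-by-edge incidence matrices of a spanning tree
    on n buses; these matrices drive the matrix-level switching logic."""
    A = np.zeros((n, n), dtype=int)
    M = np.zeros((n, len(tree_edges)), dtype=int)
    for k, (u, v) in enumerate(tree_edges):
        A[u, v] = A[v, u] = 1
        M[u, k] = M[v, k] = 1
    return A, M

# Illustrative 5-bus radial feeder: the edges form a spanning tree.
edges = [(0, 1), (1, 2), (1, 3), (3, 4)]
A, M = tree_matrices(5, edges)
print(A)
print(M)
```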

Journal ArticleDOI
TL;DR: All graphs for which the adjacency matrix has at most two eigenvalues not equal to -2 or 0 are determined, as well as which of these graphs are determined by their adjacency spectrum.
Abstract: We determine all graphs for which the adjacency matrix has at most two eigenvalues (multiplicities included) not equal to $-2$ or $0$, and determine which of these graphs are determined by their adjacency spectrum.