Journal ArticleDOI

Algorithm 447: efficient algorithms for graph manipulation

01 Jun 1973-Communications of The ACM (ACM)-Vol. 16, Iss: 6, pp 372-378
TL;DR: Efficient algorithms are presented for partitioning a graph into connected components, biconnected components, and simple paths; the simple-path algorithm is iterative, with each iteration producing a new path between two vertices already on paths.
Abstract: Efficient algorithms are presented for partitioning a graph into connected components, biconnected components and simple paths. The algorithm for partitioning a graph into simple paths is iterative, and each iteration produces a new path between two vertices already on paths. (The start vertex can be specified dynamically.) If V is the number of vertices and E is the number of edges, each algorithm requires time and space proportional to max(V, E) when executed on a random access computer.
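The connected-components partition described in the abstract can be sketched with an iterative depth-first search, which visits every vertex and edge once and so stays within the stated max(V, E) time and space bound. This is an illustrative sketch, not the paper's original ALGOL implementation; the function name and edge-list input format are assumptions.

```python
from collections import defaultdict

def connected_components(num_vertices, edges):
    """Partition an undirected graph into connected components in O(V + E)."""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)

    component = [None] * num_vertices
    label = 0
    for start in range(num_vertices):
        if component[start] is not None:
            continue
        # Iterative depth-first search from each unvisited vertex.
        stack = [start]
        component[start] = label
        while stack:
            u = stack.pop()
            for w in adj[u]:
                if component[w] is None:
                    component[w] = label
                    stack.append(w)
        label += 1
    return component

# Vertices 0-1-2 form one component, 3-4 another.
print(connected_components(5, [(0, 1), (1, 2), (3, 4)]))  # → [0, 0, 0, 1, 1]
```

An explicit stack is used instead of recursion so that large graphs do not overflow Python's recursion limit.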
Citations
Journal ArticleDOI
TL;DR: The network-based statistic (NBS) is introduced, and its power is evaluated using receiver operating characteristic (ROC) curves; its utility is demonstrated in a real case-control study of people with schizophrenia for whom resting-state functional MRI data were acquired.

2,042 citations


Cites methods from "Algorithm 447: efficient algorithms..."

  • ...This can be achieved in a runtime of O(N+L) using a breadth-first search (Hopcroft & Tarjan, 1973), where N is the number of nodes and L is the number of suprathreshold links....


Journal ArticleDOI
TL;DR: An efficient algorithm to determine whether an arbitrary graph G can be embedded in the plane is described, which uses depth-first search and has O(V) time and space bounds.
Abstract: This paper describes an efficient algorithm to determine whether an arbitrary graph G can be embedded in the plane. The algorithm may be viewed as an iterative version of a method originally proposed by Auslander and Parter and correctly formulated by Goldstein. The algorithm uses depth-first search and has O(V) time and space bounds, where V is the number of vertices in G. An ALGOL implementation of the algorithm successfully tested graphs with as many as 900 vertices in less than 12 seconds.

1,183 citations

Journal ArticleDOI
TL;DR: Proteinortho significantly reduces the required amount of memory for orthology analysis compared to existing tools, allowing such computations to be performed on off-the-shelf hardware.
Abstract: Orthology analysis is an important part of data analysis in many areas of bioinformatics such as comparative genomics and molecular phylogenetics. The ever-increasing flood of sequence data, and hence the rapidly increasing number of genomes that can be compared simultaneously, calls for efficient software tools, as brute-force approaches with quadratic memory requirements become infeasible in practice. The rapid pace at which new data become available, furthermore, makes it desirable to compute genome-wide orthology relations for a given dataset rather than relying on relations listed in databases. The program Proteinortho described here is a stand-alone tool that is geared towards large datasets and makes use of distributed computing techniques when run on multi-core hardware. It implements an extended version of the reciprocal best alignment heuristic. We apply Proteinortho to compute orthologous proteins in the complete set of all 717 eubacterial genomes available at NCBI at the beginning of 2009. We identify thirty proteins present in 99% of all bacterial proteomes. Proteinortho significantly reduces the required amount of memory for orthology analysis compared to existing tools, allowing such computations to be performed on off-the-shelf hardware.
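The core of the reciprocal best alignment heuristic mentioned in the abstract can be sketched as follows. This is a toy illustration with assumed names and a precomputed score table, not Proteinortho's actual implementation, which extends the heuristic (e.g. to near-best hits) and scales it across many genomes.

```python
def reciprocal_best_hits(scores_ab, scores_ba):
    """Toy sketch of the reciprocal best hit criterion.

    scores_ab maps each protein in genome A to {protein in B: alignment score};
    scores_ba is the reverse direction. A pair (a, b) is reported as putatively
    orthologous when each protein is the other's highest-scoring match.
    """
    best_ab = {a: max(hits, key=hits.get) for a, hits in scores_ab.items() if hits}
    best_ba = {b: max(hits, key=hits.get) for b, hits in scores_ba.items() if hits}
    # Keep only pairs where the best-hit relation holds in both directions.
    return sorted((a, b) for a, b in best_ab.items() if best_ba.get(b) == a)

pairs = reciprocal_best_hits(
    {"a1": {"b1": 90, "b2": 40}, "a2": {"b2": 75}},
    {"b1": {"a1": 88}, "b2": {"a1": 40, "a2": 70}},
)
print(pairs)  # → [('a1', 'b1'), ('a2', 'b2')]
```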

930 citations

MonographDOI
28 Apr 2014
TL;DR: Social Media Mining introduces the unique problems arising from social media data and presents fundamental concepts, emerging issues, and effective algorithms for network analysis and data mining.
Abstract: The growth of social media over the last decade has revolutionized the way individuals interact and industries conduct business. Individuals produce data at an unprecedented rate by interacting, sharing, and consuming content through social media. Understanding and processing this new type of data to glean actionable patterns presents challenges and opportunities for interdisciplinary research, novel algorithms, and tool development. Social Media Mining integrates social media, social network analysis, and data mining to provide a convenient and coherent platform for students, practitioners, researchers, and project managers to understand the basics and potentials of social media mining. It introduces the unique problems arising from social media data and presents fundamental concepts, emerging issues, and effective algorithms for network analysis and data mining. Suitable for use in advanced undergraduate and beginning graduate courses as well as professional short courses, the text contains exercises of different degrees of difficulty that improve understanding and help apply concepts, principles, and methods in various scenarios of social media mining.

550 citations


Cites background from "Algorithm 447: efficient algorithms..."

  • ...For more information on finding connected components of a graph refer to [130]....


Journal ArticleDOI
TL;DR: An improved and general approach to connected-component labeling of images is presented, and it is shown that when the algorithm is specialized to a pixel array scanned in raster order, the total processing time is linear in the number of pixels.
Abstract: An improved and general approach to connected-component labeling of images is presented. The algorithm presented in this paper processes images in predetermined order, which means that the processing order depends only on the image representation scheme and not on specific properties of the image. The algorithm handles a wide variety of image representation schemes (rasters, run lengths, quadtrees, bintrees, etc.). How to adapt the standard UNION-FIND algorithm to permit reuse of temporary labels is shown. This is done using a technique called age balancing, in which, when two labels are merged, the older label becomes the father of the younger label. This technique can be made to coexist with the more conventional rule of weight balancing, in which the label with more descendants becomes the father of the label with fewer descendants. Various image scanning orders are examined and classified. It is also shown that when the algorithm is specialized to a pixel array scanned in raster order, the total processing time is linear in the number of pixels. The linear-time processing time follows from a special property of the UNION-FIND algorithm, which may be of independent interest. This property states that under certain restrictions on the input, UNION-FIND runs in time linear in the number of FIND and UNION operations. Under these restrictions, linear-time performance can be achieved without resorting to the more complicated Gabow-Tarjan algorithm for disjoint set union.
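The age-balancing rule described in the abstract can be sketched with a small union-find structure in which labels are created in order, so a smaller index means an older label. This is an illustrative sketch under that assumption, not the paper's exact data structure; path compression is added in `find` purely for speed.

```python
class AgedUnionFind:
    """Union-find with the age-balancing rule: on a union, the older label
    (smaller creation index) becomes the father of the younger one."""

    def __init__(self):
        self.parent = []  # parent[i] == i marks a root; labels age by index

    def make_label(self):
        label = len(self.parent)
        self.parent.append(label)
        return label

    def find(self, x):
        root = x
        while self.parent[root] != root:
            root = self.parent[root]
        while self.parent[x] != root:  # path compression
            self.parent[x], x = root, self.parent[x]
        return root

    def union(self, x, y):
        rx, ry = self.find(x), self.find(y)
        if rx != ry:
            old, young = min(rx, ry), max(rx, ry)
            self.parent[young] = old  # older label becomes the father

uf = AgedUnionFind()
a, b, c = uf.make_label(), uf.make_label(), uf.make_label()
uf.union(b, c)
uf.union(c, a)
print(uf.find(c))  # → 0, the oldest label in the merged set
```

Because the oldest label always ends up at the root, younger labels in a merged set can in principle be retired and reused, which is the point of the technique in the abstract.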

518 citations

References
Book
01 Jan 1969

16,023 citations

Journal ArticleDOI
TL;DR: A computational algorithm is presented for determining whether a graph is planar, which is based on a decomposition theorem which reduces the problem of testing the planarity of an arbitrary graph G to a set of "pseudo-Hamiltonian" graphs which are systematically formed from G.
Abstract: A computational algorithm is presented for determining whether a graph is planar. All of the operations of the algorithm are expressed in terms of the incidence matrix of the graph. If the graph is nonplanar, the algorithm systematically identifies a set of edges whose deletion yields a subgraph that is planar. A simple procedure for drawing the planar subgraph is also presented. The algorithm has been programmed for a computer and is computationally efficient. The program can also be used to obtain a planar partition of a nonplanar graph. The algorithm is based on a decomposition theorem which reduces the problem of testing the planarity of an arbitrary graph G to the problem of testing the planarity of a set of "pseudo-Hamiltonian" graphs which are systematically formed from G. The necessary and sufficient conditions that a pseudo-Hamiltonian graph be planar are presented. These conditions are expressed directly in terms of the incidence matrix of the graph. The incidence matrix implementation is applied to arbitrary graphs by means of the decomposition theorem. Several techniques which are necessary to ensure the convergence and computational efficiency of the algorithm are given.

48 citations

Journal ArticleDOI
TL;DR: An efficient method is presented for finding blocks and cutnodes of an arbitrary undirected graph using a packed adjacency matrix generated by an extension of the web grammar approach.
Abstract: An efficient method is presented for finding blocks and cutnodes of an arbitrary undirected graph. The graph may be represented either (i) as an ordered list of edges or (ii) as a packed adjacency matrix. If w denotes the word length of the machine employed, the storage (in machine words) required for a graph with n nodes and m edges increases essentially as 2(m + n) in case (i), or n^2/w in case (ii). A spanning tree with labeled edges is grown, two edges finally bearing different labels if and only if they belong to different blocks. For both representations the time required to analyze a graph on n nodes increases as n^g where g depends on the type of graph, 1 ≤ g ≤ 2, and both bounds are attained. Values of g are derived for each of several suitable families of test graphs, generated by an extension of the web grammar approach. The algorithm is compared in detail with that proposed by Read for which 1 ≤ g ≤ 3.
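Cutnode (articulation point) detection can be sketched with the depth-first-search lowpoint technique of the Hopcroft-Tarjan paper this page indexes, which achieves linear time rather than the n^g bound of the spanning-tree labelling method above. This is a compact illustrative sketch, not either paper's original code; names and the edge-list input are assumptions.

```python
def cut_vertices(num_vertices, edges):
    """Find cutnodes of an undirected graph via DFS lowpoints, in O(V + E)."""
    adj = [[] for _ in range(num_vertices)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)

    disc = [None] * num_vertices  # DFS discovery times
    low = [None] * num_vertices   # lowest discovery time reachable from subtree
    cuts = set()
    timer = 0

    def dfs(u, parent):
        nonlocal timer
        disc[u] = low[u] = timer
        timer += 1
        children = 0
        for w in adj[u]:
            if disc[w] is None:
                children += 1
                dfs(w, u)
                low[u] = min(low[u], low[w])
                # A non-root vertex is a cutnode when some child's subtree
                # cannot reach any vertex discovered before u.
                if parent is not None and low[w] >= disc[u]:
                    cuts.add(u)
            elif w != parent:
                low[u] = min(low[u], disc[w])
        # The root of the DFS tree is a cutnode iff it has >= 2 children.
        if parent is None and children > 1:
            cuts.add(u)

    for v in range(num_vertices):
        if disc[v] is None:
            dfs(v, None)
    return sorted(cuts)

# Triangle 0-1-2 with a pendant path 1-3-4: removing 1 or 3 disconnects it.
print(cut_vertices(5, [(0, 1), (1, 2), (2, 0), (1, 3), (3, 4)]))  # → [1, 3]
```

Edges whose removal separates the graph into the blocks described above are exactly those incident to cutnodes on both sides; the same lowpoint values also yield the biconnected components themselves if tree edges are stacked during the search.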

35 citations

Proceedings Article
01 Feb 1971
TL;DR: An efficient algorithm is presented for determining whether or not a given graph is planar, using extensive list-processing features to speed computation.
Abstract: An efficient algorithm is presented for determining whether or not a given graph is planar. If V is the number of vertices in the graph, the algorithm requires time proportional to V log V and space proportional to V when run on a random-access computer. The algorithm constructs the facial boundaries of a planar representation without backup, using extensive list-processing features to speed computation. The theoretical time bound improves on that of previously published algorithms. Experimental evidence indicates that graphs with a few thousand edges can be tested within seconds.

18 citations