Author

Earl R. Barnes

Bio: Earl R. Barnes is an academic researcher from the Georgia Institute of Technology. The author has contributed to research in the topics of linear programming and optimal control. The author has an h-index of 15 and has co-authored 42 publications receiving 1,401 citations. Previous affiliations of Earl R. Barnes include the Massachusetts Institute of Technology and IBM.

Papers
Journal ArticleDOI
TL;DR: A heuristic algorithm is presented for partitioning the nodes of a graph into a given number of subsets of prescribed sizes so that the number of edges connecting different subsets is minimized.
Abstract: Let $G = \{ N,E \}$ be an undirected graph having nodes N and edges E. We consider the problem of partitioning N into k disjoint subsets $N_1 , \cdots ,N_k $ of given sizes $m_1 , \cdots ,m_k $, respectively, in such a way that the number of edges in E that connect different subsets is minimal. We obtain a heuristic solution from the solution of a linear programming transportation problem.

323 citations
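The quantity being minimized above is the cut size of a partition with prescribed block sizes. A minimal sketch of that objective in Python (the toy graph and partition are made up, and this is not the paper's transportation-problem heuristic):

```python
def cut_size(edges, assignment):
    """Number of edges whose endpoints lie in different blocks.

    edges      : iterable of (u, v) pairs describing an undirected graph
    assignment : dict mapping each node to its block index 0..k-1
    """
    return sum(1 for u, v in edges if assignment[u] != assignment[v])

# Toy example: two triangles joined by one bridge edge, split 3 / 3.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
assignment = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}
print(cut_size(edges, assignment))  # -> 1: only edge (2, 3) crosses the cut
```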

Journal ArticleDOI
Earl R. Barnes
TL;DR: The algorithm described here is a variation on Karmarkar’s algorithm for linear programming that applies to the standard form of a linear programming problem and produces a monotone decreasing sequence of values of the objective function.
Abstract: The algorithm described here is a variation on Karmarkar's algorithm for linear programming. It has several advantages over Karmarkar's original algorithm. In the first place, it applies to the standard form of a linear programming problem and produces a monotone decreasing sequence of values of the objective function. The minimum value of the objective function does not have to be known in advance. Secondly, in the absence of degeneracy, the algorithm converges to an optimal basic feasible solution with the nonbasic variables converging monotonically to zero. This makes it possible to identify an optimal basis before the algorithm converges.

307 citations
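The variant described above is commonly identified with the affine-scaling family of interior-point methods. The sketch below shows one plausible form of such an iteration for a standard-form problem min cᵀx subject to Ax = b, x ≥ 0, assuming a strictly feasible starting point; it illustrates the idea rather than reproducing the paper's exact procedure, and the toy LP is made up.

```python
import numpy as np

def affine_scaling(A, b, c, x, gamma=0.9, tol=1e-8, max_iter=200):
    """Sketch of an affine-scaling iteration for  min c'x  s.t.  Ax = b, x >= 0.

    x must be strictly feasible (Ax = b, x > 0); gamma < 1 keeps the iterates
    in the interior. Degenerate and unbounded problems are not handled."""
    assert np.allclose(A @ x, b) and np.all(x > 0), "x must be strictly feasible"
    for _ in range(max_iter):
        D2 = np.diag(x ** 2)                              # rescale by the current iterate
        y = np.linalg.solve(A @ D2 @ A.T, A @ D2 @ c)     # dual estimate
        r = c - A.T @ y                                   # reduced costs
        dx = -D2 @ r                                      # descent direction for c'x
        if np.linalg.norm(dx) < tol or np.all(dx >= -tol):
            break
        ratios = -x[dx < 0] / dx[dx < 0]                  # steps to the boundary
        x = x + gamma * ratios.min() * dx                 # damped step keeps x > 0
    return x

# Toy LP:  min -x1 - 2*x2  s.t.  x1 + x2 + s = 4,  all variables >= 0.
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([4.0])
c = np.array([-1.0, -2.0, 0.0])
x0 = np.array([1.0, 1.0, 2.0])          # strictly feasible interior point
print(affine_scaling(A, b, c, x0))      # approaches the optimum (0, 4, 0)
```

Each step decreases cᵀx, consistent with the monotone decrease the abstract emphasizes.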

Journal ArticleDOI
TL;DR: This work develops a detailed formal description of project portfolio management as a multistage stochastic integer program with endogenous uncertainty, and proposes an efficient solution approach, which involves the development of a formulation technique that is amenable to scenario decomposition.

137 citations

Journal ArticleDOI
TL;DR: In this article, the authors exploit properties of protein-based maximum clique problems to develop specialized preprocessing techniques and show how they can be used to solve contact map overlap instances to optimality.
Abstract: In biology, the protein structure alignment problem answers the question of how similar two proteins are. Proteins with strong physical similarities in their tertiary (folded) structure often have similar functions, so understanding physical similarity could be a key to developing protein-based medical treatments. One of the models for protein structure alignment is the maximum contact map overlap (CMO) model. The CMO model of protein structure alignment can be cast as a maximum clique problem on an appropriately defined graph. We exploit properties of these protein-based maximum clique problems to develop specialized preprocessing techniques and show how they can be used to more quickly solve contact map overlap instances to optimality.

121 citations
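A minimal sketch of the clique reduction described above: each vertex of an "alignment graph" pairs one contact of protein A with one contact of protein B, edges join pairings that are mutually consistent, and a maximum clique gives the largest common overlap. The brute-force clique search and the toy contact maps are for illustration only; they are not the paper's preprocessing techniques or solver.

```python
from itertools import combinations

def compatible(u, v):
    """Two alignment-graph vertices are compatible when the residue pairings
    they induce form a consistent, order-preserving partial matching."""
    (i1, j1), (k1, l1) = u
    (i2, j2), (k2, l2) = v
    pairs = {(i1, k1), (j1, l1), (i2, k2), (j2, l2)}   # residue of A -> residue of B
    for (a, b), (c, d) in combinations(pairs, 2):
        if (a == c) != (b == d) or (a < c) != (b < d):
            return False
    return True

def alignment_graph(contacts_a, contacts_b):
    """Vertices pair a contact of A with a contact of B (contacts stored with
    i < j); edges join mutually consistent pairings."""
    vertices = [(ca, cb) for ca in contacts_a for cb in contacts_b]
    edges = {frozenset((u, v)) for u, v in combinations(vertices, 2) if compatible(u, v)}
    return vertices, edges

def max_clique_size(vertices, edges):
    """Exhaustive maximum clique; adequate only for a tiny toy instance."""
    for r in range(len(vertices), 1, -1):
        for subset in combinations(vertices, r):
            if all(frozenset((u, v)) in edges for u, v in combinations(subset, 2)):
                return r
    return 1 if vertices else 0

# Hypothetical contact maps: A has three contacts, B shares two of them.
contacts_a = [(1, 4), (2, 5), (3, 6)]
contacts_b = [(1, 4), (2, 5)]
print(max_clique_size(*alignment_graph(contacts_a, contacts_b)))  # -> 2 shared contacts
```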

Proceedings ArticleDOI
Vojin G. Oklobdzija, Earl R. Barnes
04 Jun 1985
TL;DR: An efficient scheme for carry propagation in an ALU implemented in n-MOS technology is presented, yielding a fast ALU that, owing to its regular structure, occupies a modest amount of silicon.
Abstract: An efficient scheme for carry propagation in an ALU implemented in n-MOS technology is presented. An algorithm that determines the optimum division of the carry chain of a parallel adder for various data path sizes is developed. This yields an implementation of a fast ALU which, due to its regular structure, occupies a modest amount of silicon. The speed of the implementation described is comparable to the carry look-ahead scheme. Our method is based on the optimization of the carry path implemented in n-MOS technology, but the results can be applied to other technologies.

79 citations
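The "optimum division of the carry chain" is the kind of trade-off the abstract refers to: in a carry-skip style adder, the worst-case delay depends on how the bits are grouped into blocks. The sketch below evaluates that delay under an assumed, deliberately simple model (one ripple delay per bit, one skip delay per bypassed block); the block sizes and unit delays are illustrative, not the paper's figures.

```python
def carry_skip_delay(block_sizes, t_ripple=1.0, t_skip=1.0):
    """Worst-case carry delay for a carry-skip adder whose carry chain is
    divided into blocks of the given sizes (LSB block first).

    Critical-path model: a carry generated in block i ripples to the block
    boundary, skips the intermediate blocks, and ripples through block j."""
    worst = max(m * t_ripple for m in block_sizes)     # carry resolved inside one block
    k = len(block_sizes)
    for i in range(k):
        for j in range(i + 1, k):
            delay = (block_sizes[i] * t_ripple
                     + (j - i - 1) * t_skip
                     + block_sizes[j] * t_ripple)
            worst = max(worst, delay)
    return worst

# A 32-bit carry chain: uniform 4-bit blocks versus a tapered division with
# small blocks at the ends and larger blocks in the middle.
print(carry_skip_delay([4] * 8))                    # -> 14.0
print(carry_skip_delay([2, 3, 5, 6, 6, 5, 3, 2]))   # -> 12.0, a shorter critical path
```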


Cited by
Journal ArticleDOI
TL;DR: A thorough exposition of community structure, or clustering, is attempted, from the definition of the main elements of the problem, to the presentation of most methods developed, with a special focus on techniques designed by statistical physicists.
Abstract: The modern science of networks has brought significant advances to our understanding of complex systems. One of the most relevant features of graphs representing real systems is community structure, or clustering, i.e., the organization of vertices in clusters, with many edges joining vertices of the same cluster and comparatively few edges joining vertices of different clusters. Such clusters, or communities, can be considered as fairly independent compartments of a graph, playing a role similar to that of, e.g., the tissues or the organs in the human body. Detecting communities is of great importance in sociology, biology and computer science, disciplines where systems are often represented as graphs. This problem is very hard and not yet satisfactorily solved, despite the huge effort of a large interdisciplinary community of scientists working on it over the past few years. We will attempt a thorough exposition of the topic, from the definition of the main elements of the problem, to the presentation of most methods developed, with a special focus on techniques designed by statistical physicists, from the discussion of crucial issues like the significance of clustering and how methods should be tested and compared against each other, to the description of applications to real networks.

9,057 citations
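The intuition in the abstract above, many edges inside clusters and few between them, is often quantified by the modularity of a partition, one of the quality functions reviews of this kind discuss at length. A minimal sketch (the toy graph is made up):

```python
from collections import Counter

def modularity(edges, community):
    """Newman-Girvan modularity of a partition of an undirected simple graph:
    the fraction of edges inside communities minus the fraction expected if
    edges were placed at random while preserving node degrees."""
    m = len(edges)
    degree, intra = Counter(), Counter()
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
        if community[u] == community[v]:
            intra[community[u]] += 1
    degree_sum = Counter()                      # total degree per community
    for node, d in degree.items():
        degree_sum[community[node]] += d
    return sum(intra[c] / m - (degree_sum[c] / (2 * m)) ** 2 for c in degree_sum)

# Two 4-cliques joined by a single bridge edge; splitting at the bridge
# scores much higher than an arbitrary split would.
clique_a = [(u, v) for u in range(4) for v in range(u + 1, 4)]
clique_b = [(u + 4, v + 4) for u, v in clique_a]
edges = clique_a + clique_b + [(3, 4)]
community = {n: 'a' if n < 4 else 'b' for n in range(8)}
print(round(modularity(edges, community), 3))   # -> 0.423
```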

Journal ArticleDOI
TL;DR: A thorough exposition of the clustering problem, from the definition of its main elements to the presentation of most methods developed, with a special focus on techniques designed by statistical physicists, a discussion of crucial issues such as the significance of clustering and how methods should be tested and compared, and a description of applications to real networks.

8,432 citations

Journal ArticleDOI
TL;DR: This work presents a new coarsening heuristic (called the heavy-edge heuristic) for which the size of the partition of the coarse graph is within a small factor of the size of the final partition obtained after multilevel refinement, and presents a much faster variation of the Kernighan--Lin (KL) algorithm for refining during uncoarsening.
Abstract: Recently, a number of researchers have investigated a class of graph partitioning algorithms that reduce the size of the graph by collapsing vertices and edges, partition the smaller graph, and then uncoarsen it to construct a partition for the original graph [Bui and Jones, Proc. of the 6th SIAM Conference on Parallel Processing for Scientific Computing, 1993, 445--452; Hendrickson and Leland, A Multilevel Algorithm for Partitioning Graphs, Tech. report SAND 93-1301, Sandia National Laboratories, Albuquerque, NM, 1993]. From the early work it was clear that multilevel techniques held great promise; however, it was not known if they can be made to consistently produce high quality partitions for graphs arising in a wide range of application domains. We investigate the effectiveness of many different choices for all three phases: coarsening, partition of the coarsest graph, and refinement. In particular, we present a new coarsening heuristic (called heavy-edge heuristic) for which the size of the partition of the coarse graph is within a small factor of the size of the final partition obtained after multilevel refinement. We also present a much faster variation of the Kernighan--Lin (KL) algorithm for refining during uncoarsening. We test our scheme on a large number of graphs arising in various domains including finite element methods, linear programming, VLSI, and transportation. Our experiments show that our scheme produces partitions that are consistently better than those produced by spectral partitioning schemes in substantially smaller time. Also, when our scheme is used to compute fill-reducing orderings for sparse matrices, it produces orderings that have substantially smaller fill than the widely used multiple minimum degree algorithm.

5,629 citations
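A minimal sketch of the heavy-edge idea described above: during coarsening, each vertex is matched with an unmatched neighbour joined by the heaviest edge, and matched pairs are then collapsed into multinodes for the next level (the collapse step is omitted here). The toy weighted graph is illustrative; this is not the authors' implementation.

```python
import random

def heavy_edge_matching(adj, seed=0):
    """One pass of heavy-edge matching over an undirected weighted graph.

    adj: {vertex: {neighbour: edge_weight}}. Vertices are visited in random
    order; each unmatched vertex is matched with its unmatched neighbour of
    largest edge weight. Returns vertex -> mate (or vertex -> itself)."""
    order = list(adj)
    random.Random(seed).shuffle(order)
    mate = {}
    for u in order:
        if u in mate:
            continue
        candidates = [(w, v) for v, w in adj[u].items() if v not in mate]
        if candidates:
            _, v = max(candidates)       # heaviest incident edge still available
            mate[u], mate[v] = v, u
        else:
            mate[u] = u                  # no free neighbour; vertex survives unmatched
    return mate

# Toy graph: the weight-5 edges should be the ones collapsed.
adj = {
    0: {1: 5, 2: 1},
    1: {0: 5, 3: 1},
    2: {0: 1, 3: 5},
    3: {1: 1, 2: 5},
}
print(heavy_edge_matching(adj))   # matches {0, 1} and {2, 3}
```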

Journal ArticleDOI
TL;DR: In this paper, it is shown that lower bounds on separator sizes can be obtained in terms of the eigenvalues of the Laplacian matrix associated with a graph.
Abstract: The problem of computing a small vertex separator in a graph arises in the context of computing a good ordering for the parallel factorization of sparse, symmetric matrices. An algebraic approach for computing vertex separators is considered in this paper. It is shown that lower bounds on separator sizes can be obtained in terms of the eigenvalues of the Laplacian matrix associated with a graph. The Laplacian eigenvectors of grid graphs can be computed from Kronecker products involving the eigenvectors of path graphs, and these eigenvectors can be used to compute good separators in grid graphs. A heuristic algorithm is designed to compute a vertex separator in a general graph by first computing an edge separator in the graph from an eigenvector of the Laplacian matrix, and then using a maximum matching in a subgraph to compute the vertex separator. Results on the quality of the separators computed by the spectral algorithm are presented, and these are compared with separators obtained from other algorithms.

1,762 citations
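A minimal sketch of the Laplacian-based edge-separator step described above: form L = D − A, compute the eigenvector of the second-smallest eigenvalue (the Fiedler vector), and split the vertices at its median value; the edges crossing the split form an edge separator (the matching step that turns this into a vertex separator is omitted). The dense eigensolver, the simple median split, and the toy grid graph are simplifications for illustration.

```python
import numpy as np

def spectral_edge_separator(n, edges):
    """Split vertices 0..n-1 of an undirected graph using the Fiedler vector
    of its Laplacian L = D - A; returns the two parts and the crossing edges."""
    A = np.zeros((n, n))
    for u, v in edges:
        A[u, v] = A[v, u] = 1.0
    L = np.diag(A.sum(axis=1)) - A
    _, eigvecs = np.linalg.eigh(L)             # eigenvalues returned in ascending order
    fiedler = eigvecs[:, 1]                    # eigenvector of the 2nd-smallest eigenvalue
    part = fiedler <= np.median(fiedler)       # split the vertices at the median
    cut = [(u, v) for u, v in edges if part[u] != part[v]]
    return np.where(part)[0], np.where(~part)[0], cut

# A 2 x 4 grid graph (vertices 0-3 on top, 4-7 below): the spectral split
# separates the two 2 x 2 halves with a cut of only two edges.
cols = 4
edges = ([(i, i + 1) for i in range(cols - 1)]
         + [(i + cols, i + cols + 1) for i in range(cols - 1)]
         + [(i, i + cols) for i in range(cols)])
left, right, cut = spectral_edge_separator(2 * cols, edges)
print(left, right, cut)   # e.g. [0 1 4 5] [2 3 6 7] with 2 crossing edges
```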