Journal ISSN: 0004-5411

Journal of the ACM 

Association for Computing Machinery
About: Journal of the ACM is an academic journal published by the Association for Computing Machinery. The journal publishes mainly in the areas of time complexity and upper and lower bounds. It has the ISSN identifier 0004-5411. Over its lifetime, 2,953 publications have been published, receiving 426,316 citations. The journal is also known as: Association for Computing Machinery. Journal & JACM.


Papers
Journal Article
Jon Kleinberg
TL;DR: This work proposes and tests an algorithmic formulation of the notion of authority, based on the relationship between a set of relevant authoritative pages and the set of “hub pages” that join them together in the link structure; the formulation has connections to the eigenvectors of certain matrices associated with the link graph.
Abstract: The network structure of a hyperlinked environment can be a rich source of information about the content of the environment, provided we have effective means for understanding it. We develop a set of algorithmic tools for extracting information from the link structures of such environments, and report on experiments that demonstrate their effectiveness in a variety of contexts on the World Wide Web. The central issue we address within our framework is the distillation of broad search topics, through the discovery of “authoritative” information sources on such topics. We propose and test an algorithmic formulation of the notion of authority, based on the relationship between a set of relevant authoritative pages and the set of “hub pages” that join them together in the link structure. Our formulation has connections to the eigenvectors of certain matrices associated with the link graph; these connections in turn motivate additional heuristics for link-based analysis.
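The mutually reinforcing hub/authority iteration described in the abstract (later known as HITS) can be sketched as follows. This is a minimal illustration, not the paper's exact procedure; the dict-based graph representation and the fixed iteration count are assumptions made for brevity.

```python
# Minimal sketch of the hub/authority iteration: a page's authority score is
# the sum of the hub scores of pages linking to it, and its hub score is the
# sum of the authority scores of pages it links to. With normalization, the
# scores converge toward principal eigenvectors of A^T A and A A^T, where A
# is the adjacency matrix of the link graph.
def hits(links, iterations=50):
    pages = set(links) | {q for targets in links.values() for q in targets}
    hub = {p: 1.0 for p in pages}
    auth = {p: 1.0 for p in pages}
    for _ in range(iterations):
        # Authority: sum of hub scores of in-neighbors.
        auth = {p: sum(hub[q] for q in pages if p in links.get(q, ()))
                for p in pages}
        # Hub: sum of authority scores of out-neighbors.
        hub = {p: sum(auth[q] for q in links.get(p, ())) for p in pages}
        # Normalize both score vectors to unit length.
        for scores in (auth, hub):
            norm = sum(v * v for v in scores.values()) ** 0.5 or 1.0
            for p in scores:
                scores[p] /= norm
    return hub, auth
```

For example, in a graph where two pages both link to a third, the third page emerges as the top authority and the two linkers as the top hubs.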

8,328 citations

Journal Article
TL;DR: The problem of multiprogram scheduling on a single processor is studied from the viewpoint of the characteristics peculiar to the program functions that need guaranteed service and it is shown that an optimum fixed priority scheduler possesses an upper bound to processor utilization.
Abstract: The problem of multiprogram scheduling on a single processor is studied from the viewpoint of the characteristics peculiar to the program functions that need guaranteed service. It is shown that an optimum fixed priority scheduler possesses an upper bound to processor utilization which may be as low as 70 percent for large task sets. It is also shown that full processor utilization can be achieved by dynamically assigning priorities on the basis of their current deadlines. A combination of these two scheduling techniques is also discussed.
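The "as low as 70 percent" figure above is the Liu–Layland least upper bound n(2^{1/n} − 1), which tends to ln 2 ≈ 0.693 as the number of tasks grows. A small sketch of the bound and the resulting sufficient (but not necessary) schedulability test; the (computation time, period) task representation is an illustrative assumption.

```python
import math

def rm_utilization_bound(n):
    """Least upper bound on utilization for rate-monotonic (fixed-priority)
    scheduling of n periodic tasks: n * (2^(1/n) - 1)."""
    return n * (2 ** (1.0 / n) - 1)

def rm_schedulable(tasks):
    """Sufficient test only: a task set passing this is schedulable under
    fixed priorities, but failing it does not prove unschedulability.
    tasks: list of (computation_time, period) pairs."""
    utilization = sum(c / t for c, t in tasks)
    return utilization <= rm_utilization_bound(len(tasks))
```

By contrast, the deadline-driven (dynamic-priority) scheduler in the paper achieves full utilization: any task set with total utilization at most 1 is schedulable.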

7,067 citations

Journal Article
TL;DR: In this paper, the authors prove that under some suitable assumptions, it is possible to recover both the low-rank and the sparse components exactly by solving a very convenient convex program called Principal Component Pursuit; among all feasible decompositions, simply minimize a weighted combination of the nuclear norm and of the ℓ1 norm.
Abstract: This article is about a curious phenomenon. Suppose we have a data matrix, which is the superposition of a low-rank component and a sparse component. Can we recover each component individually? We prove that under some suitable assumptions, it is possible to recover both the low-rank and the sparse components exactly by solving a very convenient convex program called Principal Component Pursuit; among all feasible decompositions, simply minimize a weighted combination of the nuclear norm and of the ℓ1 norm. This suggests the possibility of a principled approach to robust principal component analysis since our methodology and results assert that one can recover the principal components of a data matrix even though a positive fraction of its entries are arbitrarily corrupted. This extends to the situation where a fraction of the entries are missing as well. We discuss an algorithm for solving this optimization problem, and present applications in the area of video surveillance, where our methodology allows for the detection of objects in a cluttered background, and in the area of face recognition, where it offers a principled way of removing shadows and specularities in images of faces.
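A rough sketch of Principal Component Pursuit via an augmented-Lagrangian scheme built from the two proximal operators of the objective: singular value thresholding for the nuclear norm and soft-thresholding for the ℓ1 norm. The weight λ = 1/√max(m, n) matches the paper's recommendation, but the penalty parameter mu and iteration budget here are illustrative assumptions, not the tuned algorithm from the article.

```python
import numpy as np

def shrink(X, tau):
    """Soft-thresholding: proximal operator of the l1 norm."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):
    """Singular value thresholding: proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(shrink(s, tau)) @ Vt

def pcp(M, lam=None, mu=None, iters=500, tol=1e-7):
    """Decompose M into low-rank L plus sparse S by (approximately) solving
    minimize ||L||_* + lam * ||S||_1  subject to  L + S = M."""
    m, n = M.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = mu if mu is not None else 0.25 * m * n / np.abs(M).sum()  # heuristic
    S = np.zeros_like(M)
    Y = np.zeros_like(M)  # dual variable for the constraint L + S = M
    for _ in range(iters):
        L = svt(M - S + Y / mu, 1.0 / mu)
        S = shrink(M - L + Y / mu, lam / mu)
        residual = M - L - S
        Y = Y + mu * residual  # dual ascent drives the residual to zero
        if np.linalg.norm(residual) <= tol * np.linalg.norm(M):
            break
    return L, S
```

On a matrix that is genuinely a low-rank component plus a few large sparse corruptions, the iteration separates the two; for general data, convergence speed depends heavily on the choice of mu.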

6,783 citations

Journal Article
TL;DR: In this paper, it is shown that every protocol for this problem has the possibility of nontermination, even with only one faulty process.
Abstract: The consensus problem involves an asynchronous system of processes, some of which may be unreliable. The problem is for the reliable processes to agree on a binary value. In this paper, it is shown that every protocol for this problem has the possibility of nontermination, even with only one faulty process. By way of contrast, solutions are known for the synchronous case, the “Byzantine Generals” problem.

4,389 citations

Journal Article
TL;DR: The phrase "direct search" is used to describe sequential examination of trial solutions involving comparison of each trial solution with the "best" obtained up to that time together with a strategy for determining (as a function of earlier results) what the next trial solution will be.
Abstract: In dealing with numerical problems for which classical methods of solution are unfeasible, many people have tried various procedures of searching for an answer on a computer. Our efforts in this direction have produced procedures which seem to have had (for us and for others who have used them) more success than has been achieved elsewhere, so that we have been encouraged to publish this report of our studies. We use the phrase "direct search" to describe sequential examination of trial solutions involving comparison of each trial solution with the "best" obtained up to that time, together with a strategy for determining (as a function of earlier results) what the next trial solution will be. The phrase implies our preference, based on experience, for straightforward search strategies which employ no techniques of classical analysis except where there is a demonstrable advantage in doing so. We have found it worthwhile to study direct search methods for the following reasons:
(a) They have provided solutions to some problems, of importance to us, which had been unsuccessfully attacked by classical methods. (Examples are given below.)
(b) They promise to provide faster solutions for some problems that are solvable by classical methods. (For example, a method for solving systems of linear equations, proposed in Section 5, seems to take an amount of time that is proportional only to the first power of the number of equations.)
(c) They are well adapted to use on electronic computers, since they tend to use repeated identical arithmetic operations with a simple logic. Classical methods, developed for human use, often stress minimization of arithmetic by increased sophistication of logic, a goal which may not be desirable when a computer is to be used.
(d) They provide an approximate solution, improving all the while, at all stages of the calculation. This feature can be important when a tentative solution is needed before the calculations are completed.
(e) They require (or permit) different kinds of assumptions about the functions involved in various problems, and thus suggest new classifications of functions which may repay study.
Direct search is described roughly in Section 2, and explained heuristically in Section 3. Section 4 describes a kind of strategy. Sections 5 and 6 describe
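The core loop described above — compare each trial solution with the best obtained so far, and let earlier results decide the next trial — can be sketched as a simple compass search. This is an illustration of the general idea, not the Hooke–Jeeves pattern-move algorithm from the paper; the step-halving schedule and tolerance are assumptions.

```python
def direct_search(f, x0, step=1.0, tol=1e-6):
    """Minimize f over a list of coordinates by sequential trial comparison:
    probe each coordinate direction, keep any trial that beats the best so
    far, and halve the step when no direction improves."""
    x = list(x0)
    best = f(x)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for delta in (step, -step):
                trial = list(x)
                trial[i] += delta
                value = f(trial)
                if value < best:       # keep the trial only if it beats the best
                    x, best = trial, value
                    improved = True
                    break
        if not improved:
            step /= 2.0                # no direction helped: refine the step
    return x, best
```

Note that, in the spirit of the abstract, the loop uses only function comparisons — no derivatives or other classical-analysis machinery.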

4,184 citations

Performance Metrics
No. of papers from the Journal in previous years

Year  Papers
2023  19
2022  50
2021  35
2020  38
2019  33
2018  44