Topic

Greedy algorithm

About: Greedy algorithm is a research topic. Over the lifetime, 15,347 publications have been published within this topic, receiving 393,945 citations.


Papers
Journal ArticleDOI
TL;DR: A locally competitive algorithm (LCA) is described that solves a collection of sparse coding principles minimizing a weighted combination of mean-squared error and a coefficient cost function to produce coefficients with sparsity levels comparable to the most popular centralized sparse coding algorithms while being readily suited for neural implementation.
Abstract: While evidence indicates that neural systems may be employing sparse approximations to represent sensed stimuli, the mechanisms underlying this ability are not understood. We describe a locally competitive algorithm (LCA) that solves a collection of sparse coding principles minimizing a weighted combination of mean-squared error and a coefficient cost function. LCAs are designed to be implemented in a dynamical system composed of many neuron-like elements operating in parallel. These algorithms use thresholding functions to induce local (usually one-way) inhibitory competitions between nodes to produce sparse representations. LCAs produce coefficients with sparsity levels comparable to the most popular centralized sparse coding algorithms while being readily suited for neural implementation. Additionally, LCA coefficients for video sequences demonstrate inertial properties that are both qualitatively and quantitatively more regular (i.e., smoother and more predictable) than the coefficients produced by greedy algorithms.

453 citations
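The dynamics behind the LCA can be made concrete with a short sketch. Below is a minimal NumPy sketch of the soft-threshold (L1-cost) case, in which each node's internal state is driven by the input and inhibited by its active neighbors; the dictionary `Phi`, time constant, step size, and iteration count are illustrative assumptions, not values from the paper.

```python
import numpy as np

def soft_threshold(u, lam):
    """Soft-threshold activation; zeroes sub-threshold units to induce sparsity."""
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

def lca(x, Phi, lam=0.1, tau=10.0, dt=1.0, n_steps=500):
    """Integrate the LCA dynamics  u' = (1/tau) * (b - u - (Phi^T Phi - I) a),
    a = T_lam(u), which descend a weighted MSE + L1 coefficient-cost objective."""
    b = Phi.T @ x                               # feedforward drive to each node
    G = Phi.T @ Phi - np.eye(Phi.shape[1])      # lateral inhibition weights
    u = np.zeros(Phi.shape[1])                  # internal node states
    for _ in range(n_steps):
        a = soft_threshold(u, lam)              # currently active coefficients
        u += (dt / tau) * (b - u - G @ a)       # competitive dynamics step
    return soft_threshold(u, lam)
```

Because inhibition is exerted only by above-threshold nodes, the competition is local and usually one-way, as the abstract notes.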

Journal ArticleDOI
TL;DR: The reduced basis method was introduced for the accurate online evaluation of solutions to a parameter-dependent family of elliptic PDEs by determining a “good” n-dimensional space to be used in approximating the elements of a compact set $\mathcal{F}$ in a Hilbert space $\mathcal{H}$.
Abstract: The reduced basis method was introduced for the accurate online evaluation of solutions to a parameter-dependent family of elliptic PDEs. Abstractly, it can be viewed as determining a “good” n-dimensional space to be used in approximating the elements of a compact set $\mathcal{F}$ in a Hilbert space $\mathcal{H}$.

453 citations
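The greedy construction underlying the reduced basis method is simple to sketch: at each step, add (an orthonormalized copy of) the element of $\mathcal{F}$ that the current space approximates worst. The sketch below works over a finite matrix of snapshots in the Euclidean norm; the array layout and tolerance are assumptions for illustration.

```python
import numpy as np

def greedy_reduced_basis(snapshots, n_basis, tol=1e-10):
    """Greedy basis selection: repeatedly pick the snapshot worst approximated
    by the current space V_n, then enlarge V_n with its normalized residual.
    `snapshots` is a (dim, m) array of samples from the compact set F."""
    basis = []                                  # orthonormal basis of V_n
    for _ in range(n_basis):
        R = snapshots.copy()                    # residuals after projecting onto V_n
        for q in basis:
            R -= np.outer(q, q @ snapshots)     # remove component along q
        errs = np.linalg.norm(R, axis=0)
        k = int(np.argmax(errs))                # worst-approximated element
        if errs[k] < tol:
            break                               # V_n already captures everything
        basis.append(R[:, k] / errs[k])         # Gram-Schmidt normalization
    return np.column_stack(basis)
```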

Proceedings Article
01 Jan 1984
TL;DR: In this paper, the author proposes a family of heuristic algorithms for static task assignment in distributed computing systems, i.e., given a set of k communicating tasks to be executed on a distributed system of n processors, to which processor should each task be assigned?
Abstract: The author investigates the problem of static task assignment in distributed computing systems, i.e. given a set of k communicating tasks to be executed on a distributed system of n processors, to which processor should each task be assigned? She proposes a family of heuristic algorithms for Stone's classic model of communicating tasks, whose goal is the minimization of the total execution and communication costs incurred by an assignment. In addition, she augments this model to include interference costs, which reflect the degree of incompatibility between two tasks. Whereas high communication costs serve as a force of attraction between tasks, causing them to be assigned to the same processor, interference costs serve as a force of repulsion between tasks, causing them to be distributed over many processors. The inclusion of interference costs in the model yields assignments with greater concurrency, thus overcoming the tendency of Stone's model to assign all tasks to one or a few processors. Simulation results show that the algorithms perform well and, in particular, that the highly efficient Simple Greedy Algorithm performs almost as well as more complex heuristic algorithms.

452 citations
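A Simple Greedy Algorithm of the kind the abstract credits can be sketched as follows: tasks are placed one at a time on the processor with the smallest marginal execution-plus-communication-plus-interference cost. The cost-matrix layout and fixed task ordering below are assumptions for illustration, not the paper's exact formulation.

```python
def simple_greedy_assignment(exec_cost, comm_cost, interf_cost):
    """Place each task, in turn, on the processor minimizing its marginal
    cost given the tasks already placed.
    exec_cost[t][p]  : cost of running task t on processor p
    comm_cost[t][u]  : cost paid if tasks t and u sit on DIFFERENT processors
    interf_cost[t][u]: cost paid if tasks t and u sit on the SAME processor"""
    k, n = len(exec_cost), len(exec_cost[0])
    assignment = {}
    for t in range(k):
        best_p, best_cost = 0, float("inf")
        for p in range(n):
            cost = exec_cost[t][p]
            for u, q in assignment.items():
                if q == p:
                    cost += interf_cost[t][u]   # repulsion between co-located tasks
                else:
                    cost += comm_cost[t][u]     # attraction: split pairs pay comm
            if cost < best_cost:
                best_p, best_cost = p, cost
        assignment[t] = best_p
    return assignment
```

Note how the two pairwise terms pull in opposite directions, which is exactly the attraction/repulsion balance the augmented model is designed to capture.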

Journal ArticleDOI
TL;DR: This paper studies numerical convergence, consistency and statistical rates of convergence of boosting with early stopping, when it is carried out over the linear span of a family of basis functions, and leads to a rigorous proof that for a linearly separable problem, AdaBoost becomes an L1-margin maximizer when left to run to convergence.
Abstract: Boosting is one of the most significant advances in machine learning for classification and regression. In its original and computationally flexible version, boosting seeks to minimize empirically a loss function in a greedy fashion. The resulting estimator takes an additive function form and is built iteratively by applying a base estimator (or learner) to updated samples depending on the previous iterations. An unusual regularization technique, early stopping, is employed based on CV or a test set. This paper studies numerical convergence, consistency and statistical rates of convergence of boosting with early stopping, when it is carried out over the linear span of a family of basis functions. For general loss functions, we prove the convergence of boosting's greedy optimization to the infimum of the loss function over the linear span. Using the numerical convergence result, we find early-stopping strategies under which boosting is shown to be consistent based on i.i.d. samples, and we obtain bounds on the rates of convergence for boosting estimators. Simulation studies are also presented to illustrate the relevance of our theoretical results for providing insights to practical aspects of boosting. As a side product, these results also reveal the importance of restricting the greedy search step-sizes, as known in practice through the work of Friedman and others. Moreover, our results lead to a rigorous proof that for a linearly separable problem, AdaBoost with ε → 0 step-size becomes an L1-margin maximizer when left to run to convergence.

451 citations
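The recipe the abstract analyzes, greedy stagewise fitting with a restricted step size plus early stopping, can be sketched for squared loss. In the sketch below, `learners` is a fixed family of callables whose linear span is searched; each round steps by `eps` along the basis function most correlated with the residual, and a held-out set decides when to stop. The interface, patience rule, and choice of L2 loss are my assumptions for illustration.

```python
import numpy as np

def boost_early_stop(X, y, X_val, y_val, learners, eps=0.01,
                     max_iter=5000, patience=100):
    """Greedy L2-boosting over the linear span of a fixed family of base
    learners, with restricted step size eps and validation-based stopping."""
    preds = [h(X) for h in learners]            # basis evaluated on train ...
    preds_val = [h(X_val) for h in learners]    # ... and on validation
    F, F_val = np.zeros(len(y)), np.zeros(len(y_val))
    best_val, best_iter = np.inf, 0
    for it in range(max_iter):
        r = y - F                               # residual = -gradient of L2 loss
        cors = np.array([r @ p for p in preds])
        j = int(np.argmax(np.abs(cors)))        # greedy choice of basis function
        step = eps * np.sign(cors[j])           # small, restricted step size
        F += step * preds[j]
        F_val += step * preds_val[j]
        val_loss = np.mean((y_val - F_val) ** 2)
        if val_loss < best_val:
            best_val, best_iter = val_loss, it
        if it - best_iter > patience:           # early stopping on held-out loss
            break
    return F, best_iter
```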

Proceedings ArticleDOI
04 May 1997
TL;DR: This work considers the problem of clustering dynamic point sets in a metric space and proposes a model called incremental clustering which is based on a careful analysis of the requirements of the information retrieval application, and which should also be useful in other applications.
Abstract: Motivated by applications such as document and image classification in information retrieval, we consider the problem of clustering dynamic point sets in a metric space. We propose a model called incremental clustering which is based on a careful analysis of the requirements of the information retrieval application, and which should also be useful in other applications. The goal is to efficiently maintain clusters of small diameter as new points are inserted. We analyze several natural greedy algorithms and demonstrate that they perform poorly. We propose new deterministic and randomized incremental clustering algorithms which have a provably good performance, and which we believe should also perform well in practice. We complement our positive results with lower bounds on the performance of incremental algorithms. Finally, we consider the dual clustering problem where the clusters are of fixed diameter, and the goal is to minimize the number of clusters.

449 citations
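A simplified sketch of the incremental flavor of clustering discussed here: maintain at most k centers, let each arriving point either join a nearby cluster or open a new one, and, when the budget is exceeded, double the working radius and greedily merge close centers. The initial radius, constants, and merge rule below are illustrative simplifications, not the paper's exact algorithms or guarantees.

```python
import math

def incremental_clusters(points, k, dist=math.dist):
    """Doubling-style incremental clustering sketch: keep at most k cluster
    centers of working radius r; double r and merge when k is exceeded."""
    centers, r = [], 1.0
    for p in points:
        if any(dist(p, c) <= r for c in centers):
            continue                    # p is absorbed by an existing cluster
        centers.append(p)               # otherwise p opens a new cluster
        while len(centers) > k:
            r *= 2.0                    # double the working radius ...
            merged = []                 # ... and greedily merge close centers
            for c in centers:
                if all(dist(c, m) > r for m in merged):
                    merged.append(c)
            centers = merged
    return centers, r
```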


Network Information
Related Topics (5)
1. Optimization problem: 96.4K papers, 2.1M citations (92% related)
2. Wireless network: 122.5K papers, 2.1M citations (88% related)
3. Network packet: 159.7K papers, 2.2M citations (88% related)
4. Wireless sensor network: 142K papers, 2.4M citations (87% related)
5. Node (networking): 158.3K papers, 1.7M citations (87% related)
Performance
Metrics
No. of papers in the topic in previous years:

Year    Papers
2023    350
2022    690
2021    809
2020    939
2019    1,006
2018    967