scispace - formally typeset

Greedy algorithm

About: Greedy algorithm is a research topic. Over the lifetime, 15,347 publications have been published within this topic, receiving 393,945 citations.


Papers
Journal ArticleDOI
TL;DR: Five known greedy algorithms designed for the single measurement vector setting in compressed sensing and sparse approximation are extended to the multiple measurement vector scenario, and simultaneous recovery variants of NIHT, NHTP, and CoSaMP all outperform the rank-aware algorithm.
Abstract: Five known greedy algorithms designed for the single measurement vector setting in compressed sensing and sparse approximation are extended to the multiple measurement vector scenario: Iterative Hard Thresholding (IHT), Normalized IHT (NIHT), Hard Thresholding Pursuit (HTP), Normalized HTP (NHTP), and Compressive Sampling Matching Pursuit (CoSaMP). Using the asymmetric restricted isometry property (ARIP), sufficient conditions for all five algorithms establish bounds on the discrepancy between the algorithms' output and the optimal row-sparse representation. When the initial multiple measurement vectors are jointly sparse, ARIP-based guarantees for exact recovery are also established. The algorithms are then compared via the recovery phase transition framework. The strong phase transitions describing the family of Gaussian matrices which satisfy the sufficient conditions are obtained via known bounds on the ARIP constants. The algorithms' empirical weak phase transitions are compared for various numbers of multiple measurement vectors. Finally, the performance of the algorithms is compared against a known rank-aware greedy algorithm, Rank Aware Simultaneous Orthogonal Matching Pursuit + MUSIC. Simultaneous recovery variants of NIHT, NHTP, and CoSaMP all outperform the rank-aware algorithm.
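The hard-thresholding family above shares one core iteration: a gradient step on the residual followed by projection onto the set of s-sparse vectors. A minimal single-measurement-vector IHT sketch (using an assumed conservative fixed step size; the paper's NIHT variant adapts the step instead):

```python
import numpy as np

def hard_threshold(x, s):
    """Keep the s largest-magnitude entries of x, zeroing the rest."""
    out = np.zeros_like(x)
    keep = np.argsort(np.abs(x))[-s:]
    out[keep] = x[keep]
    return out

def iht(A, y, s, n_iters=500):
    """Iterative Hard Thresholding: gradient step on ||y - Ax||^2,
    then projection onto s-sparse vectors."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # conservative fixed step size
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        x = hard_threshold(x + step * A.T @ (y - A @ x), s)
    return x
```

With an underdetermined Gaussian matrix and an exactly sparse signal, this iteration typically recovers the signal when the sparsity level is low relative to the number of measurements.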

145 citations

Journal ArticleDOI
TL;DR: In this article, a mobility-aware caching placement strategy is proposed to maximize the data offloading ratio, which is defined as the percentage of the requested data that can be delivered via D2D links rather than through base stations.
Abstract: Caching at mobile devices can facilitate device-to-device (D2D) communications, which may significantly improve spectrum efficiency and alleviate the heavy burden on backhaul links. However, most previous works ignored user mobility, thus having limited practical applications. In this paper, we take advantage of the user mobility pattern by the inter-contact times between different users, and propose a mobility-aware caching placement strategy to maximize the data offloading ratio, which is defined as the percentage of the requested data that can be delivered via D2D links rather than through base stations. Given the NP-hard caching placement problem, we first propose an optimal dynamic programming algorithm to obtain a performance benchmark with much lower complexity than exhaustive search. We then prove that the problem falls in the category of monotone submodular maximization over a matroid constraint, and propose a time-efficient greedy algorithm, which achieves an approximation ratio of $\frac{1}{2}$. Simulation results with real-life data sets validate the effectiveness of our proposed mobility-aware caching placement strategy. We observe that users moving at either a very low or very high speed should cache the most popular files, while users moving at a medium speed should cache less popular files to avoid duplication.
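The 1/2-approximation comes from the classic greedy rule for monotone submodular maximization over a matroid: repeatedly add the feasible element with the largest marginal gain. A toy sketch under assumed inputs (a binary contact relation and per-file request probabilities; the paper's actual objective is built from inter-contact time statistics):

```python
def greedy_cache(users, files, capacity, popularity, contact):
    """Greedy placement: repeatedly add the (user, file) pair with the
    largest marginal gain in offloaded requests, subject to a per-user
    cache capacity (a partition matroid constraint)."""
    placement = {u: set() for u in users}

    def offload(pl):
        # A request by user u for file f is offloaded if some user v
        # that u can contact has f cached.
        total = 0.0
        for u in users:
            for f in files:
                if any(f in pl[v] for v in users if contact[u][v]):
                    total += popularity[f]
        return total

    while True:
        base = offload(placement)
        best, best_gain = None, 0.0
        for u in users:
            if len(placement[u]) >= capacity:
                continue  # matroid constraint: cache is full
            for f in files - placement[u]:
                placement[u].add(f)
                gain = offload(placement) - base
                placement[u].remove(f)
                if gain > best_gain:
                    best, best_gain = (u, f), gain
        if best is None:
            break  # no feasible addition improves the objective
        placement[best[0]].add(best[1])
    return placement
```

Because the objective is monotone submodular and the capacity constraint is a partition matroid, this greedy rule is guaranteed to reach at least half of the optimal offloading value.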

145 citations

Proceedings Article
21 Jun 2010
TL;DR: An efficient learning framework to construct signal dictionaries for sparse representation by selecting the dictionary columns from multiple candidate bases is developed and it is shown that if the available dictionary column vectors are incoherent, the objective function satisfies approximate submodularity.
Abstract: We develop an efficient learning framework to construct signal dictionaries for sparse representation by selecting the dictionary columns from multiple candidate bases. By sparse, we mean that only a few dictionary elements, compared to the ambient signal dimension, can exactly represent or well-approximate the signals of interest. We formulate both the selection of the dictionary columns and the sparse representation of signals as a joint combinatorial optimization problem. The proposed combinatorial objective maximizes variance reduction over the set of training signals by constraining the size of the dictionary as well as the number of dictionary columns that can be used to represent each signal. We show that if the available dictionary column vectors are incoherent, our objective function satisfies approximate submodularity. We exploit this property to develop SDSOMP and SDSMA, two greedy algorithms with approximation guarantees. We also describe how our learning framework enables dictionary selection for structured sparse representations, e.g., where the sparse coefficients occur in restricted patterns. We evaluate our approach on synthetic signals and natural images for representation and inpainting problems.
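SDSOMP's greedy column selection is in the spirit of Orthogonal Matching Pursuit: pick the column most correlated with the current residual, then re-fit by least squares. A minimal sketch of the standard OMP sparse-coding step (not the paper's dictionary-selection objective):

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal Matching Pursuit: greedily select up to k dictionary
    columns, re-fitting the coefficients by least squares each step."""
    residual, support = y.astype(float).copy(), []
    coef = np.zeros(0)
    for _ in range(k):
        if np.linalg.norm(residual) < 1e-12:
            break  # signal already represented exactly
        # Column most correlated with the residual.
        j = int(np.argmax(np.abs(D.T @ residual)))
        support.append(j)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x
```

The least-squares re-fit keeps the residual orthogonal to all selected columns, so no column is picked twice.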

145 citations

Proceedings ArticleDOI
15 Nov 2004
TL;DR: It seems that "greedy" algorithms, such as SPAM, SRIDHCR, and TDS, do not perform particularly well for supervised clustering and seem to terminate prematurely too often.
Abstract: This work centers on a novel data mining technique we term supervised clustering. Unlike traditional clustering, supervised clustering assumes that the examples are classified and has the goal of identifying class-uniform clusters that have high probability densities. Four representative-based algorithms for supervised clustering are introduced: a greedy algorithm with random restart, named SRIDHCR, that seeks solutions by inserting and removing single objects from the current solution, SPAM (a variation of the clustering algorithm PAM), an evolutionary computing algorithm named SCEC, and a fast medoid-based top-down splitting algorithm, named TDS. The four algorithms were evaluated using a benchmark consisting of four UCI machine learning data sets. In general, it seems that "greedy" algorithms, such as SPAM, SRIDHCR, and TDS, do not perform particularly well for supervised clustering and seem to terminate prematurely too often. We also briefly describe the applications of supervised clustering.
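The insert/remove hill climbing behind SRIDHCR can be sketched generically. This is an illustrative reconstruction, not the paper's exact procedure: the fitness function, initialization, and restart count are all assumptions made for the example.

```python
import random

def greedy_reps(n, fitness, restarts=10, seed=0):
    """SRIDHCR-style hill climbing with random restart: start from a
    random representative set, then repeatedly apply the single insertion
    or removal of an object that most improves the fitness."""
    rng = random.Random(seed)
    best_set, best_val = None, float("-inf")
    for _ in range(restarts):
        reps = set(rng.sample(range(n), 2))
        while True:
            current = fitness(reps)
            # All single-object insertions and removals.
            candidates = [reps | {i} for i in range(n) if i not in reps]
            candidates += [reps - {i} for i in reps if len(reps) > 1]
            cand = max(candidates, key=fitness)
            if fitness(cand) <= current:
                break  # local optimum: greedy terminates
            reps = cand
        if fitness(reps) > best_val:
            best_set, best_val = set(reps), fitness(reps)
    return best_set
```

A typical fitness for supervised clustering rewards class purity of the induced clusters while penalizing the number of representatives; the random restarts partly compensate for the premature termination the paper observes in greedy methods.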

144 citations

Journal ArticleDOI
TL;DR: It is shown that this problem of cutting a subset of the edges of a polyhedral manifold surface, possibly with boundary, to obtain a single topological disk is NP-hard in general, even for manifolds without boundary and for punctured spheres.
Abstract: We consider the problem of cutting a subset of the edges of a polyhedral manifold surface, possibly with boundary, to obtain a single topological disk, minimizing either the total number of cut edges or their total length. We show that this problem is NP-hard in general, even for manifolds without boundary and for punctured spheres. We also describe an algorithm with running time n^O(g+k), where n is the combinatorial complexity, g is the genus, and k is the number of boundary components of the input surface. Finally, we describe a greedy algorithm that outputs an O(log² g)-approximation of the minimum cut graph in O(g² n log n) time.

144 citations


Network Information
Related Topics (5)
Optimization problem: 96.4K papers, 2.1M citations (92% related)
Wireless network: 122.5K papers, 2.1M citations (88% related)
Network packet: 159.7K papers, 2.2M citations (88% related)
Wireless sensor network: 142K papers, 2.4M citations (87% related)
Node (networking): 158.3K papers, 1.7M citations (87% related)
Performance Metrics
No. of papers in the topic in previous years:

Year: Papers
2023: 350
2022: 690
2021: 809
2020: 939
2019: 1,006
2018: 967