Topic: Greedy algorithm

About: Greedy algorithm is a research topic. Over its lifetime, 15,347 publications have been published within this topic, receiving 393,945 citations.


Papers
Journal ArticleDOI
TL;DR: The proposed optimization framework reveals the caching performance upper bound for general adaptive video streaming systems, while the proposed algorithm provides some design guidelines for the edge servers to select the cached representations in practice based on both the video popularity and content information.
Abstract: Caching at mobile edge servers can smooth temporal traffic variability and reduce the service load of base stations in mobile video delivery. However, the assignment of multiple video representations to distributed servers is still a challenging question in the context of adaptive streaming, since any two representations from different videos or even from the same video will compete for the limited caching storage. Therefore, it is important, yet challenging, to optimally select the cached representations for each edge server in order to effectively reduce the service load of base station while maintaining a high quality of experience (QoE) for users. To address this, we study a QoE-driven mobile edge caching placement optimization problem for dynamic adaptive video streaming that properly takes into account the different rate-distortion (R–D) characteristics of videos and the coordination among distributed edge servers. Then, by the optimal caching placement of representations for multiple videos, we maximize the aggregate average video distortion reduction of all users while minimizing the additional cost of representation downloading from the base station, subject not only to the storage capacity constraints of the edge servers, but also to the transmission and initial startup delay constraints of the users. We formulate the proposed optimization problem as an integer linear program to provide the performance upper bound, and as a submodular maximization problem with a set of knapsack constraints to develop a practically feasible cost benefit greedy algorithm. The proposed algorithm has polynomial computational complexity and a theoretical lower bound on its performance. Simulation results further show that the proposed algorithm is able to achieve a near-optimal performance with very low time complexity. Therefore, the proposed optimization framework reveals the caching performance upper bound for general adaptive video streaming systems, while the proposed algorithm provides some design guidelines for the edge servers to select the cached representations in practice based on both the video popularity and content information.

134 citations
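The abstract above does not spell out the cost-benefit greedy step. The following is a minimal sketch of a generic cost-benefit greedy for maximizing a monotone submodular gain under a single knapsack (storage) constraint; the gain and cost callables are hypothetical placeholders standing in for the paper's distortion-reduction and representation-size terms, not the authors' implementation.

```python
def cost_benefit_greedy(candidates, gain, cost, budget):
    """Generic cost-benefit greedy sketch for a monotone submodular objective
    under one knapsack (cache storage) constraint.

    candidates : iterable of items (e.g. video representations)
    gain(S, x) : marginal objective gain of adding item x to the chosen set S
    cost(x)    : storage cost of caching item x
    budget     : total cache capacity of the edge server
    """
    chosen, used = set(), 0.0
    remaining = set(candidates)
    while remaining:
        # Keep only the items that still fit in the remaining storage budget.
        feasible = [x for x in remaining if used + cost(x) <= budget]
        if not feasible:
            break
        # Pick the item with the best marginal-gain-to-cost ratio.
        best = max(feasible, key=lambda x: gain(chosen, x) / cost(x))
        if gain(chosen, best) <= 0:
            break
        chosen.add(best)
        used += cost(best)
        remaining.remove(best)
    return chosen
```

For a single knapsack, the classical refinement of also evaluating the best individual item and returning the better of the two restores a constant-factor guarantee for the ratio greedy; the paper's own performance bound for its multi-server, multi-knapsack setting is derived separately and is not reproduced here.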

Journal ArticleDOI
TL;DR: This work develops an integer-programming-based optimization algorithm capable of solving small to medium-size instances of the inventory routing problem with continuous moves, and embeds it in a local search procedure to improve solutions produced by a randomized greedy heuristic.

133 citations
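No pseudocode accompanies this TL;DR. Purely as an illustration of the overall pipeline, a GRASP-style loop that feeds solutions from a randomized greedy construction into a local-search improvement step (which here would wrap the integer-programming optimization) could be organized as below; the construct_greedy, improve, and objective callables are hypothetical placeholders, not the authors' procedures.

```python
import random

def grasp(construct_greedy, improve, objective, iterations=50, rcl_size=3, seed=0):
    """Skeleton of a randomized-greedy / local-search loop (GRASP-style).

    construct_greedy(rng, rcl_size) : builds a feasible solution, at each step
                                      picking randomly among the rcl_size best moves
    improve(solution)               : local search returning a solution at least
                                      as good as its input
    objective(solution)             : value to minimize (e.g. routing cost)
    """
    rng = random.Random(seed)
    best = None
    for _ in range(iterations):
        candidate = improve(construct_greedy(rng, rcl_size))
        if best is None or objective(candidate) < objective(best):
            best = candidate
    return best
```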

Journal ArticleDOI
Licheng Liu, Long Chen, C. L. Philip Chen, Yuan Yan Tang, Chi-Man Pun
TL;DR: This paper proposes a weighted JSR (WJSR) model to simultaneously encode a set of data samples that are drawn from the same subspace but corrupted with noise and outliers, and introduces a greedy algorithm called weighted simultaneous orthogonal matching pursuit to efficiently approximate the global optimal solution.
Abstract: Joint sparse representation (JSR) has shown great potential in various image processing and computer vision tasks. Nevertheless, the conventional JSR is fragile to outliers. In this paper, we propose a weighted JSR (WJSR) model to simultaneously encode a set of data samples that are drawn from the same subspace but corrupted with noise and outliers. Our model is designed to exploit the common information shared by these data samples while reducing the influence of outliers. To solve the WJSR model, we further introduce a greedy algorithm called weighted simultaneous orthogonal matching pursuit to efficiently approximate the global optimal solution. Then, we apply the WJSR to mixed noise removal by jointly coding the grouped nonlocal similar image patches. The denoising performance is further improved by combining this coding with the global prior and the sparse errors in a unified framework. Experimental results show that our denoising method is superior to several state-of-the-art mixed noise removal methods.

133 citations
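The paper's exact weighted simultaneous orthogonal matching pursuit is not given in this listing. The sketch below is a plain simultaneous OMP with per-sample weights folded into the atom-selection step; it conveys the idea of down-weighting outlier samples but is only an assumption about how the weighting enters, not the authors' algorithm.

```python
import numpy as np

def weighted_somp(D, Y, w, sparsity):
    """Illustrative weighted simultaneous OMP (not the authors' exact WSOMP).

    D        : (m, n) dictionary with unit-norm columns (atoms)
    Y        : (m, k) matrix whose columns are the jointly coded samples
    w        : (k,)  per-sample weights; small weights downplay suspected outliers
    sparsity : number of atoms to select for the shared support
    """
    support = []
    residual = Y.copy()
    for _ in range(sparsity):
        # Aggregate weighted correlations of every atom with every residual column.
        corr = np.abs(D.T @ residual) @ w          # shape (n,)
        corr[support] = -np.inf                    # do not reselect atoms
        support.append(int(np.argmax(corr)))
        # Joint least-squares fit of all samples on the shared support.
        X_sub, *_ = np.linalg.lstsq(D[:, support], Y, rcond=None)
        residual = Y - D[:, support] @ X_sub
    X = np.zeros((D.shape[1], Y.shape[1]))
    X[support, :] = X_sub
    return X, support
```

Because the weights only scale each sample's contribution to the selection score, heavily corrupted samples have less say in which atoms enter the shared support.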

Proceedings ArticleDOI
23 Jan 2005
TL;DR: The adaptivity gap for these problems, i.e., the maximum ratio between the expected values achieved by optimal adaptive and non-adaptive policies, is investigated, and the hardness results for deterministic PIP are improved.
Abstract: We study stochastic variants of Packing Integer Programs (PIP): the problems of finding a maximum-value 0/1 vector x satisfying Ax ≤ b, with A and b nonnegative. Many combinatorial problems belong to this broad class, including the knapsack problem, maximum clique, stable set, matching, hypergraph matching (a.k.a. set packing), b-matching, and others. PIP can also be seen as a "multidimensional" knapsack problem where we wish to pack a maximum-value collection of items with vector-valued sizes. In our stochastic setting, the vector-valued size of each item is known to us a priori only as a probability distribution, and the size of an item is instantiated once we commit to including the item in our solution. Following the framework of [3], we consider both adaptive and non-adaptive policies for solving such problems, adaptive policies having the flexibility of being able to make decisions based on the instantiated sizes of items already included in the solution. We investigate the adaptivity gap for these problems: the maximum ratio between the expected values achieved by optimal adaptive and non-adaptive policies. We show tight bounds on the adaptivity gap for set packing and b-matching, and we also show how to efficiently find non-adaptive policies approximating the adaptive optimum. For instance, we can approximate the adaptive optimum for stochastic set packing to within O(d^(1/2)), which is not only optimal with respect to the adaptivity gap, but is also the best known approximation factor in the deterministic case. It is known that there is no polynomial-time d^(1/2−ε)-approximation for set packing, unless NP = ZPP. Similarly, for b-matching, we obtain algorithmically a tight bound on the adaptivity gap of O(λ), where λ satisfies Σ_j 1/λ^(b_j+1) = 1. For general Stochastic Packing, we prove that a simple greedy algorithm provides an O(d)-approximation to the adaptive optimum. For A ∈ [0, 1]^(d×n), we provide an O(λ)-approximation where Σ_j 1/λ^(b_j) = 1 (for b = (B, B, ..., B), we get λ = d^(1/B)). We also improve the hardness results for deterministic PIP: in the general case, we prove that a polynomial-time d^(1−ε)-approximation algorithm would imply NP = ZPP. In the special case when A ∈ [0, 1]^(d×n) and b = (B, B, ..., B), we show that a d^(1/B−ε)-approximation would imply NP = ZPP. Finally, we prove that it is PSPACE-hard to find the optimal adaptive policy for Stochastic Packing in any fixed dimension d ≥ 2.

133 citations
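The abstract only states that a simple greedy algorithm achieves an O(d)-approximation; the analysed algorithm itself is not reproduced in this listing. The toy sketch below ranks items by value per unit of total expected size and packs them against the capacity vector, which is one natural non-adaptive greedy for such an instance but should not be read as the paper's algorithm or its proof of the bound.

```python
import numpy as np

def greedy_nonadaptive_packing(values, mean_sizes, b):
    """Toy non-adaptive greedy for a stochastic packing instance (illustration only).

    values     : (n,)   item values
    mean_sizes : (n, d) expected size vectors E[S_i] of the items
    b          : (d,)   capacity vector
    """
    values = np.asarray(values, dtype=float)
    mean_sizes = np.asarray(mean_sizes, dtype=float)
    b = np.asarray(b, dtype=float)
    # Rank items by value per unit of total expected size.
    density = values / np.maximum(mean_sizes.sum(axis=1), 1e-12)
    order = np.argsort(-density)
    chosen, used = [], np.zeros_like(b)
    for i in order:
        # Insert an item only if its expected size still fits in every dimension.
        if np.all(used + mean_sizes[i] <= b):
            chosen.append(int(i))
            used += mean_sizes[i]
    return chosen
```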

Journal ArticleDOI
TL;DR: This paper addresses the virtual network function (VNF) placement problem in cloud datacenters considering users’ service function chain requests (SFCRs), and designs a Two-StAge heurisTic solution (T-SAT) to solve the resulting ILP.
Abstract: Network function virtualization (NFV) brings great convenience and benefits for enterprises that outsource their network functions to the cloud datacenter. In this paper, we address the virtual network function (VNF) placement problem in cloud datacenters considering users’ service function chain requests (SFCRs). To optimize resource utilization, we take two less-considered factors into account: the time-varying workloads and the basic resource consumptions (BRCs) incurred when instantiating VNFs in physical machines (PMs). The VNF placement problem is then formulated as an integer linear programming (ILP) model with the aim of minimizing the number of used PMs. Afterwards, a Two-StAge heurisTic solution (T-SAT) is designed to solve the ILP. T-SAT consists of a correlation-based greedy algorithm for SFCR mapping (first stage) and a further adjustment algorithm for the virtual network function requests (VNFRs) in each SFCR (second stage). Finally, we evaluate T-SAT with artificial data composed using a Gaussian function and with trace data derived from Google's datacenters. The simulation results demonstrate that the number of used PMs obtained by T-SAT is close to the optimal results and much smaller than the benchmarks. In addition, T-SAT improves network resource utilization significantly.

133 citations
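T-SAT itself is only summarised above. In the spirit of its correlation-based first stage, a hypothetical sketch of a correlation-aware greedy placement of time-varying requests onto physical machines might look as follows; the per-slot capacity model and the correlation rule are assumptions made for illustration, not the authors' exact formulation.

```python
import numpy as np

def correlation_greedy_placement(requests, pm_capacity):
    """Hypothetical correlation-aware greedy placement sketch.

    requests    : list of (T,) arrays, each the time-varying demand of one SFCR
    pm_capacity : scalar capacity of every physical machine in each time slot
    Returns placement, a list with one list of request indices per opened PM.
    """
    pm_loads, placement = [], []
    for idx, demand in enumerate(requests):
        demand = np.asarray(demand, dtype=float)
        best_pm, best_corr = None, None
        for p, load in enumerate(pm_loads):
            # Only consider machines that can absorb the demand in every slot.
            if np.all(load + demand <= pm_capacity):
                # Prefer the machine whose existing load is least correlated
                # with this request, so that peak periods do not stack up.
                if np.std(load) > 0 and np.std(demand) > 0:
                    corr = np.corrcoef(load, demand)[0, 1]
                else:
                    corr = 0.0
                if best_corr is None or corr < best_corr:
                    best_pm, best_corr = p, corr
        if best_pm is None:
            # No opened machine fits: open a new PM for this request.
            pm_loads.append(demand.copy())
            placement.append([idx])
        else:
            pm_loads[best_pm] += demand
            placement[best_pm].append(idx)
    return placement
```

Packing requests whose peaks are anti-correlated is what lets time-varying workloads share a machine without violating its per-slot capacity, which is the intuition behind considering workload correlation at all.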


Network Information
Related Topics (5)
Optimization problem: 96.4K papers, 2.1M citations, 92% related
Wireless network: 122.5K papers, 2.1M citations, 88% related
Network packet: 159.7K papers, 2.2M citations, 88% related
Wireless sensor network: 142K papers, 2.4M citations, 87% related
Node (networking): 158.3K papers, 1.7M citations, 87% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    350
2022    690
2021    809
2020    939
2019    1,006
2018    967