
Showing papers on "Greedy algorithm published in 2009"


Proceedings ArticleDOI
28 Jun 2009
TL;DR: Based on the results, it is believed that fine-tuned heuristics may provide truly scalable solutions to the influence maximization problem, with satisfactory influence spread and blazingly fast running time.
Abstract: Influence maximization is the problem of finding a small subset of nodes (seed nodes) in a social network that could maximize the spread of influence. In this paper, we study efficient influence maximization from two complementary directions. One is to improve the original greedy algorithm of [5] and its improvement [7] to further reduce its running time, and the second is to propose new degree discount heuristics that improve influence spread. We evaluate our algorithms by experiments on two large academic collaboration graphs obtained from the online archival database arXiv.org. Our experimental results show that (a) our improved greedy algorithm achieves better running time compared with the improvement of [7] with matching influence spread, (b) our degree discount heuristics achieve much better influence spread than classic degree and centrality-based heuristics, and when tuned for a specific influence cascade model, they achieve almost matching influence spread with the greedy algorithm, and more importantly (c) the degree discount heuristics run only in milliseconds while even the improved greedy algorithms run in hours on our experiment graphs with a few tens of thousands of nodes. Based on our results, we believe that fine-tuned heuristics may provide truly scalable solutions to the influence maximization problem, with satisfactory influence spread and blazingly fast running time. Therefore, contrary to what is implied by the conclusion of [5] that traditional heuristics are outperformed by the greedy approximation algorithm, our results shed new light on the research of heuristic algorithms.
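As a rough illustration of a degree-discount-style heuristic under the independent cascade model with a uniform propagation probability p, here is a minimal Python sketch; the discount formula follows the paper's description, but the graph representation and all names are illustrative:

```python
def degree_discount(graph, k, p=0.01):
    """Pick k seed nodes by discounted degree (sketch).

    graph: dict mapping node -> set of neighbors
    p: assumed uniform propagation probability of the IC model
    """
    degree = {v: len(nbrs) for v, nbrs in graph.items()}
    t = {v: 0 for v in graph}        # number of already-selected neighbors
    dd = dict(degree)                # discounted degree score
    seeds = []
    for _ in range(k):
        u = max((v for v in graph if v not in seeds), key=dd.get)
        seeds.append(u)
        for v in graph[u]:
            if v not in seeds:
                t[v] += 1
                dd[v] = degree[v] - 2 * t[v] - (degree[v] - t[v]) * t[v] * p
    return seeds
```

Each selection only touches the neighbors of the chosen seed, which is why heuristics of this kind run in milliseconds where simulation-based greedy takes hours.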

2,073 citations


Posted Content
TL;DR: This paper investigates a new learning formulation called structured sparsity, which is a natural extension of the standard sparsity concept in statistical learning and compressive sensing by allowing arbitrary structures on the feature set, which generalizes the group sparsity idea.
Abstract: This paper investigates a new learning formulation called structured sparsity, which is a natural extension of the standard sparsity concept in statistical learning and compressive sensing. By allowing arbitrary structures on the feature set, this concept generalizes the group sparsity idea that has become popular in recent years. A general theory is developed for learning with structured sparsity, based on the notion of coding complexity associated with the structure. It is shown that if the coding complexity of the target signal is small, then one can achieve improved performance by using coding complexity regularization methods, which generalize the standard sparse regularization. Moreover, a structured greedy algorithm is proposed to efficiently solve the structured sparsity problem. It is shown that the greedy algorithm approximately solves the coding complexity optimization problem under appropriate conditions. Experiments are included to demonstrate the advantage of structured sparsity over standard sparsity on some real applications.
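To make the structured greedy idea concrete, here is a rough block-selection sketch (not the authors' exact algorithm), assuming disjoint feature blocks and using block size as a crude stand-in for coding complexity:

```python
import numpy as np

def structured_greedy(X, y, blocks, max_blocks):
    """Block-wise greedy selection (sketch of the structured-sparsity idea).

    blocks: list of integer index arrays, one per feature block (assumed disjoint).
    Each step picks the block with the largest residual-energy reduction
    per unit of (approximate) coding complexity.
    """
    selected, support = [], np.array([], dtype=int)
    residual, beta = y.astype(float).copy(), None
    for _ in range(max_blocks):
        best, best_score = None, 0.0
        for b, idx in enumerate(blocks):
            if b in selected:
                continue
            score = np.linalg.norm(X[:, idx].T @ residual) ** 2 / len(idx)
            if score > best_score:
                best, best_score = b, score
        if best is None:
            break
        selected.append(best)
        support = np.union1d(support, blocks[best])
        beta, *_ = np.linalg.lstsq(X[:, support], y, rcond=None)  # refit
        residual = y - X[:, support] @ beta
    return selected, support, beta
```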

457 citations


Book ChapterDOI
27 Aug 2009
TL;DR: In this paper, the problem of ranking all process models in a repository according to their similarity with respect to a given process model is investigated, and four graph matching algorithms, ranging from a greedy one to a relatively exhaustive one, are evaluated.
Abstract: We investigate the problem of ranking all process models in a repository according to their similarity with respect to a given process model. We focus specifically on the application of graph matching algorithms to this similarity search problem. Since the corresponding graph matching problem is NP-complete, we seek to find a compromise between computational complexity and quality of the computed ranking. Using a repository of 100 process models, we evaluate four graph matching algorithms, ranging from a greedy one to a relatively exhaustive one. The results show that the mean average precision obtained by a fast greedy algorithm is close to that obtained with the most exhaustive algorithm.
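A sketch of the greedy end of that spectrum, assuming a user-supplied node similarity function (everything here is illustrative rather than the paper's exact algorithm): pair the most similar nodes first, never reusing a node.

```python
def greedy_node_matching(nodes1, nodes2, sim):
    """Greedily match nodes across two process graphs (sketch).

    sim(a, b) -> similarity score in [0, 1] (assumed given, e.g. label similarity)
    """
    candidates = sorted(
        ((sim(a, b), a, b) for a in nodes1 for b in nodes2),
        key=lambda x: x[0],
        reverse=True,
    )
    matched1, matched2, mapping = set(), set(), {}
    for s, a, b in candidates:
        if s <= 0:
            break  # remaining pairs add nothing
        if a not in matched1 and b not in matched2:
            mapping[a] = b
            matched1.add(a)
            matched2.add(b)
    return mapping
```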

406 citations


Proceedings ArticleDOI
25 Oct 2009
TL;DR: In this article, the authors study the online stochastic bipartite matching problem, in a form motivated by display ad allocation on the Internet, and give a 0.67-approximation online algorithm that breaks the $1-{1\over e} \simeq 0.632$ barrier.
Abstract: We study the online stochastic bipartite matching problem, in a form motivated by display ad allocation on the Internet. In the online but adversarial case, the celebrated result of Karp, Vazirani and Vazirani gives an approximation ratio of $1-{1\over e} \simeq 0.632$, a very familiar bound that holds for many online problems; further, the bound is tight in this case. In the online, stochastic case when nodes are drawn repeatedly from a known distribution, the greedy algorithm matches this approximation ratio, but still, no algorithm is known that beats the $1 - {1\over e}$ bound. Our main result is a $0.67$-approximation online algorithm for stochastic bipartite matching, breaking this $1 - {1\over e}$ barrier. Furthermore, we show that no online algorithm can produce a $1-\epsilon$ approximation for an arbitrarily small $\epsilon$ for this problem. Our algorithms are based on computing an optimal offline solution to the expected instance and using this solution as a guideline in the process of online allocation. We employ a novel application of the idea of the power of two choices from load balancing: we compute two disjoint solutions to the expected instance, and use both of them in the online algorithm in a prescribed preference order. To identify these two disjoint solutions, we solve a max flow problem in a boosted flow graph, and then carefully decompose this maximum flow into two edge-disjoint (near-)matchings. In addition to guiding the online decision making, these two offline solutions are used to characterize an upper bound for the optimum in any scenario. This is done by identifying a cut whose value we can bound under the arrival distribution. At the end, we discuss extensions of our results to more general bipartite allocations that are important in a display ad application.
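A schematic of the online step, assuming the two disjoint offline solutions (here first_choice and second_choice, both illustrative names) have already been computed from the expected instance:

```python
def online_allocate(arrivals, first_choice, second_choice, capacity):
    """Two-choice online allocation guided by offline solutions (sketch).

    first_choice / second_choice: dict mapping arrival type -> ad node,
    from the two edge-disjoint offline (near-)matchings (assumed given).
    capacity: dict of remaining capacity per ad node (mutated in place).
    """
    assignment = []
    for t in arrivals:
        for ad in (first_choice.get(t), second_choice.get(t)):
            if ad is not None and capacity.get(ad, 0) > 0:
                capacity[ad] -= 1
                assignment.append((t, ad))
                break
        else:
            assignment.append((t, None))  # impression left unmatched
    return assignment
```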

326 citations


Journal ArticleDOI
TL;DR: The improved greedy traffic-aware routing protocol (GyTAR), which is an intersection-based geographical routing protocol that is capable of finding robust and optimal routes within urban environments, is introduced.
Abstract: Vehicular ad hoc networks (VANETs) have received considerable attention in recent times. Multihop data delivery between vehicles is an important aspect for the support of VANET-based applications. Although data dissemination and routing have extensively been addressed, many unique characteristics of VANETs, together with the diversity in promising applications, offer newer research challenges. This paper introduces the improved greedy traffic-aware routing protocol (GyTAR), which is an intersection-based geographical routing protocol that is capable of finding robust and optimal routes within urban environments. The main principle behind GyTAR is the dynamic and in-sequence selection of intersections through which data packets are forwarded to the destinations. The intersections are chosen considering parameters such as the remaining distance to the destination and the variation in vehicular traffic. Data forwarding between intersections in GyTAR adopts an improved greedy carry-and-forward mechanism. Evaluation of the proposed routing protocol shows significant performance improvement in comparison with other existing routing approaches. With the aid of extensive simulations, we also validate the optimality and sensitivity of significant GyTAR parameters.

304 citations


Journal ArticleDOI
TL;DR: To solve the CDLP for real-size networks, it is proved that the associated column generation subproblem is indeed NP-hard and a simple, greedy heuristic is proposed to overcome the complexity of an exact algorithm.
Abstract: During the past few years, there has been a trend to enrich traditional revenue management models built upon the independent demand paradigm by accounting for customer choice behavior. This extension involves both modeling and computational challenges. One way to describe choice behavior is to assume that each customer belongs to a segment, which is characterized by a consideration set, i.e., a subset of the products provided by the firm that a customer views as options. Customers choose a particular product according to a multinomial-logit criterion, a model widely used in the marketing literature. In this paper, we consider the choice-based, deterministic, linear programming model (CDLP) of Gallego et al. (2004) [Gallego, G., G. Iyengar, R. Phillips, A. Dubey. 2004. Managing flexible products on a network. Technical Report CORC TR-2004-01, Department of Industrial Engineering and Operations Research, Columbia University, New York], and the follow-up dynamic programming decomposition heuristic of van Ryzin and Liu (2008) [van Ryzin, G. J., Q. Liu. 2008. On the choice-based linear programming model for network revenue management. Manufacturing Service Oper. Management 10(2) 288--310]. We focus on the more general version of these models, where customers belong to overlapping segments. To solve the CDLP for real-size networks, we need to develop a column generation algorithm. We prove that the associated column generation subproblem is indeed NP-hard and propose a simple, greedy heuristic to overcome the complexity of an exact algorithm. Our computational results show that the heuristic is quite effective and that the overall approach leads to high-quality, practical solutions.

303 citations


Posted Content
TL;DR: An efficient and guaranteed algorithm named atomic decomposition for minimum rank approximation (ADMiRA) is proposed that extends Needell and Tropp's compressive sampling matching pursuit algorithm from the sparse vector to the low-rank matrix case and bounds both the number of iterations and the error in the approximate solution.
Abstract: We address the inverse problem that arises in compressed sensing of a low-rank matrix. Our approach is to pose the inverse problem as an approximation problem with a specified target rank of the solution. A simple search over the target rank then provides the minimum rank solution satisfying a prescribed data approximation bound. We propose an atomic decomposition that provides an analogy between parsimonious representations of a sparse vector and a low-rank matrix. Efficient greedy algorithms to solve the inverse problem for the vector case are extended to the matrix case through this atomic decomposition. In particular, we propose an efficient and guaranteed algorithm named ADMiRA that extends CoSaMP, its analogue for the vector case. The performance guarantee is given in terms of the rank-restricted isometry property and bounds both the number of iterations and the error in the approximate solution for the general case where the solution is approximately low-rank and the measurements are noisy. With a sparse measurement operator such as the one arising in the matrix completion problem, the computation in ADMiRA is linear in the number of measurements. The numerical experiments for the matrix completion problem show that, although the measurement operator in this case does not satisfy the rank-restricted isometry property, ADMiRA is a competitive algorithm for matrix completion.

257 citations


Journal ArticleDOI
TL;DR: A number of new analytic results characterizing the performance limits of greedy maximal scheduling are provided, including an equivalent characterization of the efficiency ratio of GMS through a topological property called the local-pooling factor of the network graph.
Abstract: In this paper, we characterize the performance of an important class of scheduling schemes, called greedy maximal scheduling (GMS), for multihop wireless networks. While a lower bound on the throughput performance of GMS has been well known, empirical observations suggest that it is quite loose and that the performance of GMS is often close to optimal. In this paper, we provide a number of new analytic results characterizing the performance limits of GMS. We first provide an equivalent characterization of the efficiency ratio of GMS through a topological property called the local-pooling factor of the network graph. We then develop an iterative procedure to estimate the local-pooling factor under a large class of network topologies and interference models. We use these results to study the worst-case efficiency ratio of GMS on two classes of network topologies. We show how these results can be applied to tree networks to prove that GMS achieves the full capacity region in tree networks under the K-hop interference model. Then, we show that the worst-case efficiency ratio of GMS in geometric unit-disk graphs is between 1/6 and 1/3.
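For concreteness, a minimal sketch of GMS under a conflict-graph interference model: repeatedly pick the backlogged link with the longest queue and disable every link that interferes with it.

```python
def greedy_maximal_schedule(queue_len, conflicts):
    """Greedy maximal scheduling (sketch).

    queue_len: dict link -> current backlog
    conflicts: dict link -> set of interfering links (assumed symmetric)
    """
    schedule, blocked = [], set()
    for link in sorted(queue_len, key=queue_len.get, reverse=True):
        if queue_len[link] > 0 and link not in blocked:
            schedule.append(link)
            blocked.add(link)
            blocked |= conflicts[link]
    return schedule
```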

255 citations


Journal IssueDOI
TL;DR: Two new approaches to solving the coverage path planning problem in the case of agricultural fields and agricultural machines are presented; both are greedy algorithms applicable to robots as well as human-driven machines.
Abstract: In this article, a coverage path planning problem is discussed in the case of agricultural fields and agricultural machines. Methods and algorithms to solve this problem are developed. These algorithms are applicable to both robots and human-driven machines. The necessary condition is to cover the whole field, and the goal is to find as efficient a route as possible. As yet, there is no universal algorithm or method capable of solving the problem in all cases. Two new approaches to solving the coverage path planning problem in the case of agricultural fields and agricultural machines are presented for consideration. Both of them are greedy algorithms. In the first algorithm, the view is from on top of the field, and the goal is to split a single field plot into subfields that are simple to drive or operate. This algorithm utilizes trapezoidal decomposition, and a search is developed for the best driving direction and the selection of subfields. This article also presents other practical aspects that are taken into account, such as underdrainage and laying headlands. The second algorithm is also an incremental algorithm, but the path is planned on the basis of the machine's current state, and the search is for the next swath instead of the next subfield. There are advantages and disadvantages with both algorithms, and neither of them solves the coverage path planning problem optimally. Nevertheless, the developed algorithms are remarkable steps toward solving the coverage path planning problem with non-omnidirectional vehicles while taking agricultural aspects into consideration. © 2009 Wiley Periodicals, Inc.

240 citations


Journal ArticleDOI
TL;DR: A natural Greedy heuristic for the maximum volume problem is studied, and it is shown that if the optimal solution selects k columns, then Greedy will select Ω(k/log k) columns, providing a log k approximation.
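A sketch of the natural greedy step, under the usual reading of the heuristic: each round keeps the column with the largest component orthogonal to the span of the columns already chosen (assuming NumPy and a real matrix):

```python
import numpy as np

def greedy_max_volume(A, k):
    """Select k columns of A greedily for large spanned volume (sketch)."""
    residual = np.array(A, dtype=float)
    chosen = []
    for _ in range(k):
        norms = np.linalg.norm(residual, axis=0)
        norms[chosen] = -1.0                   # never re-pick a column
        j = int(np.argmax(norms))
        chosen.append(j)
        q = residual[:, j] / norms[j]          # unit vector of the new direction
        residual -= np.outer(q, q @ residual)  # project it out of all columns
    return chosen
```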

230 citations


Journal ArticleDOI
TL;DR: This paper studies an NP-hard multi-period production-distribution problem to minimize the sum of three costs (production setups, inventories, and distribution) and confirms the interest both of integrating production and distribution decisions and of using the MA|PM (memetic algorithm with population management) template.

Journal ArticleDOI
TL;DR: This paper studies the problem of allocation of tasks onto a computational grid with the aim to simultaneously minimize the energy consumption and the makespan subject to the constraints of deadlines and tasks' architectural requirements and proposes a solution from cooperative game theory based on the concept of Nash bargaining solution.
Abstract: With the explosive growth in computers and the growing scarcity in electric supply, reduction of energy consumption in large-scale computing systems has become a research issue of paramount importance. In this paper, we study the problem of allocating tasks onto a computational grid, with the aim to simultaneously minimize the energy consumption and the makespan subject to the constraints of deadlines and tasks' architectural requirements. We propose a solution from cooperative game theory based on the concept of the Nash bargaining solution. In this cooperative game, machines collectively arrive at a decision that describes the task allocation that is collectively best for the system, ensuring that the allocations are both energy and makespan optimized. Through rigorous mathematical proofs we show that the proposed cooperative game, in mere O(nm log m) time (where n is the number of tasks and m is the number of machines in the system), produces a Nash bargaining solution that guarantees Pareto-optimality. The simulation results show that the proposed technique achieves superior performance compared to the greedy and linear relaxation (LR) heuristics, and competitive performance relative to the optimal solution implemented in LINDO for small-scale problems.

Journal ArticleDOI
TL;DR: Algorithms for resource allocation in Single Carrier Frequency Division Multiple Access (SC-FDMA) systems, which is the uplink multiple access scheme considered in the 3GPP-LTE standard, are presented and a greedy heuristic algorithm that approaches the optimal performance in cases of practical interest is presented.
Abstract: We present algorithms for resource allocation in Single Carrier Frequency Division Multiple Access (SC-FDMA) systems, which is the uplink multiple access scheme considered in the Third Generation Partnership Project-Long Term Evolution (3GPP-LTE) standard. Unlike the well-studied problem of Orthogonal Frequency Division Multiple Access (OFDMA) resource allocation, the "subchannel adjacency" restriction, whereby users can only be assigned multiple subchannels that are adjacent to each other, makes the problem much harder to solve. We present a novel reformulation of this problem as a pure binary-integer program called the set partitioning problem, which is a well studied problem in operations research. We also present a greedy heuristic algorithm that approaches the optimal performance in cases of practical interest. We present simulation results for 3GPP-LTE uplink scenarios.

Journal Article
Tong Zhang
TL;DR: It is shown that under a certain irrepresentable condition on the design matrix (but independent of the sparse target), the greedy algorithm can select features consistently when the sample size approaches infinity.
Abstract: This paper studies the feature selection problem using a greedy least squares regression algorithm. We show that under a certain irrepresentable condition on the design matrix (but independent of the sparse target), the greedy algorithm can select features consistently when the sample size approaches infinity. The condition is identical to a corresponding condition for Lasso. Moreover, under a sparse eigenvalue condition, the greedy algorithm can reliably identify features as long as each nonzero coefficient is larger than a constant times the noise level. In comparison, Lasso may require the coefficients to be larger than O(√s) times the noise level in the worst case, where s is the number of nonzero coefficients.
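A minimal sketch of the forward greedy least squares procedure that the analysis concerns (orthogonal-matching-pursuit style); names are illustrative:

```python
import numpy as np

def greedy_least_squares(X, y, n_features):
    """Forward greedy feature selection by least squares (sketch)."""
    selected = []
    residual = y.astype(float).copy()
    beta = None
    for _ in range(n_features):
        corr = np.abs(X.T @ residual)
        corr[selected] = 0.0                       # skip chosen features
        selected.append(int(np.argmax(corr)))
        # refit least squares on the selected feature set
        beta, *_ = np.linalg.lstsq(X[:, selected], y, rcond=None)
        residual = y - X[:, selected] @ beta
    return selected, beta
```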

Journal ArticleDOI
TL;DR: A low-complexity, greedy max-min algorithm is proposed to solve the resource allocation for an OFDM based cognitive radio system in which one or more spectrum holes exist between multiple primary user (PU) frequency bands.
Abstract: The problem of subcarrier, bit and power allocation for an OFDM based cognitive radio system in which one or more spectrum holes exist between multiple primary user (PU) frequency bands is studied. The cognitive radio user is able to use any portion of the frequency band as long as it does not interfere unduly with the PUs' transmissions. We formulate the resource allocation as a multidimensional knapsack problem and propose a low-complexity, greedy max-min algorithm to solve it. The proposed algorithm is simple to implement and simulation results show that its performance is very close to (within 0.3% of) the optimal solution.

Journal ArticleDOI
TL;DR: This work proposes a fast algorithm for solving the Basis Pursuit problem, $\min_u \{\|u\|_1 : Au = f\}$, and claims that in combination with a Bregman iterative method, this algorithm achieves a solution with speed and accuracy competitive with some of the leading methods for the basis pursuit problem.
Abstract: We propose a fast algorithm for solving the Basis Pursuit problem, $\min_u \{\|u\|_1 : Au = f\}$, which has application to compressed sensing. We design an efficient method for solving the related unconstrained problem $\min_u E(u) = \|u\|_1 + \lambda \|Au - f\|_2^2$ based on a greedy coordinate descent method. We claim that, in combination with a Bregman iterative method, our algorithm will achieve a solution with speed and accuracy competitive with some of the leading methods for the basis pursuit problem.
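A sketch of one way to realize the greedy coordinate descent, assuming unit-norm columns of $A$: each coordinate's exact minimizer is a soft-thresholding step, and the greedy rule updates the coordinate whose value would change the most.

```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def greedy_coordinate_descent(A, f, lam, iters=1000):
    """Greedy coordinate descent for min_u ||u||_1 + lam*||Au - f||_2^2 (sketch).

    Assumes columns of A have unit l2 norm and lam > 0.
    """
    u = np.zeros(A.shape[1])
    r = f - A @ u                          # residual
    for _ in range(iters):
        # exact coordinate-wise minimizers: soft-threshold(A_j^T r + u_j)
        target = soft_threshold(A.T @ r + u, 1.0 / (2.0 * lam))
        j = int(np.argmax(np.abs(target - u)))
        if target[j] == u[j]:
            break                          # no coordinate improves
        r -= A[:, j] * (target[j] - u[j])  # keep residual consistent
        u[j] = target[j]
    return u
```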

Proceedings ArticleDOI
01 Sep 2009
TL;DR: This paper has developed a new greedy sparse recovery algorithm, which prunes data residues in the iterative process according to both sparsity and group clustering priors rather than only sparsity as in previous methods.
Abstract: This paper investigates a new learning formulation called dynamic group sparsity. It is a natural extension of the standard sparsity concept in compressive sensing, and is motivated by the observation that in some practical sparse data the nonzero coefficients are often not random but tend to be clustered. Intuitively, better results can be achieved in these cases by reasonably utilizing both clustering and sparsity priors. Motivated by this idea, we have developed a new greedy sparse recovery algorithm, which prunes data residues in the iterative process according to both sparsity and group clustering priors rather than only sparsity as in previous methods. The proposed algorithm can stably recover sparse data with clustering trends using far fewer measurements and computations than current state-of-the-art algorithms, with provable guarantees. Moreover, our algorithm can adaptively learn the dynamic group structure and the sparsity number if they are not available in practical applications. We have applied the algorithm to sparse recovery and background subtraction in videos. Numerous experiments with improved performance over previous methods further validate our theoretical proofs and the effectiveness of the proposed algorithm.
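As a toy illustration of the clustering-aware pruning step (not the authors' exact rule), each coefficient's energy can be boosted by its neighbors' energies before keeping the k largest; tau is an illustrative mixing weight:

```python
import numpy as np

def dgs_prune(x, neighbors, k, tau=0.5):
    """Neighbor-reinforced pruning in the dynamic-group-sparsity spirit (sketch).

    x: current coefficient estimate; neighbors: list of neighbor-index lists
    (e.g. adjacent pixels); returns indices of the k kept coefficients.
    """
    energy = x ** 2
    boosted = energy + tau * np.array(
        [energy[nbrs].sum() for nbrs in neighbors]
    )
    return np.argsort(boosted)[-k:]
```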

Proceedings ArticleDOI
01 Nov 2009
TL;DR: A new greedy algorithm to perform sparse signal reconstruction from signs of signal measurements, i.e., measurements quantized to 1-bit, which demonstrates that combining the principle of consistency with a sparsity prior outperforms approaches that use only consistency or only sparsity priors.
Abstract: This paper presents Matched Sign Pursuit (MSP), a new greedy algorithm to perform sparse signal reconstruction from signs of signal measurements, i.e., measurements quantized to 1 bit. The algorithm combines the principle of consistent reconstruction with greedy sparse reconstruction. The resulting MSP algorithm has several advantages, both theoretical and practical, over previous approaches. Although the problem is not convex, the experimental performance of the algorithm is significantly better than that of reconstructing the signal by treating the quantized measurements as values. Our results demonstrate that combining the principle of consistency with a sparsity prior outperforms approaches that use only consistency or only sparsity priors.

Journal ArticleDOI
TL;DR: A constant-time distributed random access algorithm for scheduling in multi-hop wireless networks that theoretically achieves a superior efficiency factor as well as numerically achieves a significant performance improvement over the state-of-the-art.
Abstract: The scheduling problem in multi-hop wireless networks has been extensively investigated. Although throughput-optimal scheduling solutions have been developed in the literature, they are unsuitable for multi-hop wireless systems because they are usually centralized and have very high complexity. In this paper, we develop a random-access based scheduling scheme that utilizes local information. The important features of this scheme include constant-time complexity, distributed operations, and a provable performance guarantee. Analytical results show that it guarantees a larger fraction of the optimal throughput performance than the state-of-the-art. Through simulations with both single-hop and multi-hop traffic, we observe that the scheme provides high throughput, close to that of a well-known highly efficient centralized greedy solution called the greedy maximal scheduler.

Proceedings ArticleDOI
30 Mar 2009
TL;DR: The summarization model is augmented to take into account the relevance to the document cluster, and the augmented model is shown to be superior to the best-performing method of DUC'04 on ROUGE-1 without stopwords.
Abstract: We discuss text summarization in terms of the maximum coverage problem and its variant. We explore some decoding algorithms, including ones never before used in this summarization formulation, such as a greedy algorithm with a performance guarantee, a randomized algorithm, and a branch-and-bound method. On the basis of the results of comparative experiments, we also augment the summarization model so that it takes into account the relevance to the document cluster. Through experiments, we show that the augmented model is superior to the best-performing method of DUC'04 on ROUGE-1 without stopwords.
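A sketch of a benefit-per-cost greedy decoder for the maximum coverage formulation, assuming each sentence is reduced to a word-length cost and a set of concept units; the performance-guarantee variant additionally compares the greedy result against the best single sentence, which is omitted here:

```python
def greedy_summary(sentences, budget):
    """Greedy sentence selection for budgeted concept coverage (sketch).

    sentences: list of (length, concept_set) pairs; budget: max total length.
    """
    covered, chosen, used = set(), [], 0
    while True:
        best, best_gain = None, 0.0
        for i, (length, concepts) in enumerate(sentences):
            if i in chosen or used + length > budget:
                continue
            gain = len(concepts - covered) / length  # new concepts per word
            if gain > best_gain:
                best, best_gain = i, gain
        if best is None:
            break
        chosen.append(best)
        used += sentences[best][0]
        covered |= sentences[best][1]
    return chosen
```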

Proceedings ArticleDOI
19 Apr 2009
TL;DR: A simple but robust generalization of greedy distance routing called Gravity-Pressure (GP) routing is proposed, which always succeeds in finding a route to the destination provided that a path exists, even if a significant fraction of links or nodes is removed subsequent to the embedding.
Abstract: We propose an embedding and routing scheme for arbitrary network connectivity graphs, based on greedy routing and utilizing virtual node coordinates. In dynamic multihop packet-switching communication networks, routing elements can join or leave during network operation or exhibit intermittent failures. We present an algorithm for online greedy graph embedding in the hyperbolic plane that enables incremental embedding of network nodes as they join the network, without disturbing the global embedding. Even a single link or node removal may invalidate the greedy routing success guarantees in network embeddings based on an embedded spanning tree subgraph. As an alternative to frequent reembedding of temporally dynamic network graphs in order to retain the greedy embedding property, we propose a simple but robust generalization of greedy distance routing called Gravity-Pressure (GP) routing. Our routing method always succeeds in finding a route to the destination provided that a path exists, even if a significant fraction of links or nodes is removed subsequent to the embedding. GP routing does not require precomputation or maintenance of special spanning subgraphs and, as demonstrated by our numerical evaluation, is particularly suitable for operation in tandem with our proposed algorithm for online graph embedding.

Journal IssueDOI
TL;DR: This study extends an efficient density-based algorithm for pairwise coverage to generate t-way interaction test suites and shows that it guarantees a logarithmic upper bound on the size of the test suites as a function of the number of factors.
Abstract: Algorithmic construction of software interaction test suites has focussed on pairwise coverage; less is known about the efficient construction of test suites for t-way interactions with t≥3. This study extends an efficient density-based algorithm for pairwise coverage to generate t-way interaction test suites and shows that it guarantees a logarithmic upper bound on the size of the test suites as a function of the number of factors. To complement this theoretical guarantee, an implementation is outlined and some practical improvements are made. Computational comparisons with other published methods are reported. Many of the results improve upon those in the literature. However, limitations on the ability of one-test-at-a-time algorithms are also identified. Copyright © 2008 John Wiley & Sons, Ltd.

Proceedings ArticleDOI
01 Dec 2009
TL;DR: This paper shows that reformulating that step as a constrained flow optimization problem results in a convex problem that can be solved using standard linear programming techniques and yields excellent results on the PETS 2009 data set.
Abstract: Multi-object tracking can be achieved by detecting objects in individual frames and then linking detections across frames. Such an approach can be made very robust to the occasional detection failure: If an object is not detected in a frame but is in previous and following ones, a correct trajectory will nevertheless be produced. By contrast, a false-positive detection in a few frames will be ignored. However, when dealing with a multiple target problem, the linking step results in a difficult optimization problem in the space of all possible families of trajectories. This is usually dealt with by sampling or greedy search based on variants of Dynamic Programming, which can easily miss the global optimum. In this paper, we show that reformulating that step as a constrained flow optimization problem results in a convex problem that can be solved using standard Linear Programming techniques. In addition, this new approach is far simpler formally and algorithmically than existing techniques and yields excellent results on the PETS 2009 data set.

Proceedings ArticleDOI
22 Jun 2009
TL;DR: A simple greedy algorithm is designed that delivers a solution that k-covers at least half of the target points using at most M log(k|C|) sensors, where |C| is the maximum number of target points covered by a sensor and M is the minimum number of sensor required to k-cover all the given points.
Abstract: Sensor nodes may be equipped with a "directional" sensing device (such as a camera) which senses a physical phenomenon in a certain direction depending on the chosen orientation. In this article, we address the problem of selection and orientation of such directional sensors with the objective of maximizing coverage area. Prior works on sensor coverage have largely focused on coverage with sensors that are associated with a unique sensing region. In contrast, directional sensors have multiple sensing regions associated with them, and the orientation of the sensor determines the actual sensing region. Thus, the coverage problems in the context of directional sensors entail selection as well as orientation of the sensors that need to be activated in order to maximize/ensure coverage. In this article, we address the problem of selecting a minimum number of sensors and assigning orientations such that the given area (or set of target points) is k-covered (i.e., each point is covered k times). The above problem is NP-complete, and even NP-hard to approximate. Thus, we design a simple greedy algorithm that delivers a solution that k-covers at least half of the target points using at most M log(k|C|) sensors, where |C| is the maximum number of target points covered by a sensor and M is the minimum number of sensors required to k-cover all the given points. The above result holds for almost arbitrary sensing regions. We design a distributed implementation of the above algorithm and study its performance through simulations. In addition to the above problem, we also look at other related coverage problems in the context of directional sensors and design similar approximation algorithms for them.
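A sketch of the greedy selection-and-orientation loop, assuming each sensor offers a finite set of candidate orientations, each covering a known set of target points (all names illustrative):

```python
def greedy_directional_cover(sensors, targets, k):
    """Greedily pick (sensor, orientation) pairs toward k-coverage (sketch).

    sensors: dict sensor -> list of orientations, each a set of covered targets
    """
    need = {t: k for t in targets}         # remaining coverage deficit
    chosen = {}
    while any(d > 0 for d in need.values()):
        best, best_gain = None, 0
        for s, orientations in sensors.items():
            if s in chosen:
                continue
            for o, covered in enumerate(orientations):
                gain = sum(1 for t in covered if need.get(t, 0) > 0)
                if gain > best_gain:
                    best, best_gain = (s, o), gain
        if best is None:
            break                          # no remaining sensor helps
        s, o = best
        chosen[s] = o
        for t in sensors[s][o]:
            if need.get(t, 0) > 0:
                need[t] -= 1
    return chosen
```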

Proceedings ArticleDOI
11 Oct 2009
TL;DR: This paper is dedicated to the scheduling of the static segment in compliance with the automotive-specific AUTOSAR standard, and a fast greedy heuristic as well as a complete approach based on Integer Linear Programming are presented.
Abstract: The FlexRay bus is the prospective automotive standard communication system. For the sake of high flexibility, the protocol includes a static time-triggered and a dynamic event-triggered segment. This paper is dedicated to the scheduling of the static segment in compliance with the automotive-specific AUTOSAR standard. For the determination of an optimal schedule in terms of the number of used slots, a fast greedy heuristic as well as a complete approach based on Integer Linear Programming are presented. For this purpose, a scheme for the transformation of the scheduling problem into a bin packing problem is proposed. Moreover, a metric and optimization method for the extensibility of partially used slots is introduced. Finally, the provided experimental results give evidence of the benefits of the proposed methods. On a realistic case study, the proposed methods are capable of obtaining better results in a significantly smaller amount of time compared to a commercial tool. Additionally, the experimental results provide a case study on incremental scheduling, a scalability analysis, an exploration use case, and an additional test case to emphasize the robustness and flexibility of the proposed methods.
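As a rough stand-in for the greedy heuristic, reflecting the paper's transformation of static-segment scheduling into bin packing, here is a first-fit-decreasing sketch with slots as bins (sizes and capacities in abstract units):

```python
def first_fit_decreasing(signal_sizes, slot_capacity):
    """Pack signals into FlexRay static slots, first-fit decreasing (sketch).

    signal_sizes: dict signal -> size; slot_capacity: capacity of each slot.
    Returns the placement and the number of slots used.
    """
    free = []                                  # remaining space per slot
    placement = {}
    for sig, size in sorted(signal_sizes.items(), key=lambda kv: -kv[1]):
        for i, space in enumerate(free):
            if size <= space:
                free[i] -= size
                placement[sig] = i
                break
        else:
            free.append(slot_capacity - size)  # open a new slot
            placement[sig] = len(free) - 1
    return placement, len(free)
```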

Journal ArticleDOI
TL;DR: A new selection strategy (called stagewise weak selection) that effectively selects several elements in each iteration is developed based on the realization that many classical proofs for recovery of sparse signals can be trivially extended to the new setting.
Abstract: Finding sparse solutions to underdetermined inverse problems is a fundamental challenge encountered in a wide range of signal processing applications, from signal acquisition to source separation. This paper looks at greedy algorithms that are applicable to very large problems. The main contribution is the development of a new selection strategy (called stagewise weak selection) that effectively selects several elements in each iteration. The new selection strategy is based on the realization that many classical proofs for recovery of sparse signals can be trivially extended to the new setting. What is more, simulation studies show the computational benefits and good performance of the approach. This strategy can be used in several greedy algorithms, and we argue for the use within the gradient pursuit framework in which selected coefficients are updated using a conjugate update direction. For this update, we present a fast implementation and novel convergence result.
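The selection rule itself is compact. A sketch of one stagewise weak selection step, assuming a dictionary matrix Phi and the current residual; alpha is the weakness parameter:

```python
import numpy as np

def stagewise_weak_select(Phi, residual, alpha=0.7):
    """Select all atoms within a factor alpha of the best correlation (sketch)."""
    corr = np.abs(Phi.T @ residual)
    return np.flatnonzero(corr >= alpha * corr.max())
```

Within a gradient pursuit loop, the indices returned here are added to the active set, and the corresponding coefficients are then updated along a conjugate direction, as the abstract describes.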

Journal ArticleDOI
TL;DR: The proposed unusual video event detection method is based on unsupervised clustering of object trajectories, which are modeled by hidden Markov models (HMM), and includes a dynamic hierarchical process incorporated in the trajectory clustering algorithm.
Abstract: The proposed unusual video event detection method is based on unsupervised clustering of object trajectories, which are modeled by hidden Markov models (HMM). The novelty of the method includes a dynamic hierarchical process incorporated in the trajectory clustering algorithm to prevent model overfitting and a 2-depth greedy search strategy for efficient clustering.

Journal ArticleDOI
TL;DR: A discrete differential evolution algorithm with the reference local search is presented to solve the single machine total weighted tardiness problem with sequence dependent setup times and newly designed speed-up methods are presented for the greedy job insertion into a partial solution.

Journal ArticleDOI
TL;DR: This article studies a greedy lattice basis reduction algorithm for the Euclidean norm, and shows that up to dimension four, the bit-complexity of the greedy algorithm is quadratic without fast integer arithmetic, just like Euclid's gcd algorithm.
Abstract: Lattice reduction is a geometric generalization of the problem of computing greatest common divisors. Most of the interesting algorithmic problems related to lattice reduction are NP-hard as the lattice dimension increases. This article deals with the low-dimensional case. We study a greedy lattice basis reduction algorithm for the Euclidean norm, which is arguably the most natural lattice basis reduction algorithm, because it is a straightforward generalization of an old two-dimensional algorithm of Lagrange, usually known as Gauss' algorithm, and which is very similar to Euclid's gcd algorithm. Our results are twofold. From a mathematical point of view, we show that up to dimension four, the output of the greedy algorithm is optimal: the output basis reaches all the successive minima of the lattice. However, as soon as the lattice dimension is strictly higher than four, the output basis may be arbitrarily bad, as it may not even reach the first minimum. More importantly, from a computational point of view, we show that up to dimension four, the bit-complexity of the greedy algorithm is quadratic without fast integer arithmetic, just like Euclid's gcd algorithm. This was already proved by Semaev up to dimension three using rather technical means, but it was previously unknown whether or not the algorithm was still polynomial in dimension four. We propose two different analyses: a global approach based on the geometry of the current basis when the length decrease stalls, and a local approach showing directly that a significant length decrease must occur every O(1) consecutive steps. Our analyses simplify Semaev's analysis in dimensions two and three, and unify the cases of dimensions two to four. Although the global approach is much simpler, we also present the local approach because it gives further information on the behavior of the algorithm.
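For reference, the two-dimensional Lagrange reduction (usually known as Gauss' algorithm) that the greedy algorithm generalizes; a sketch for integer vectors:

```python
def lagrange_reduce(b1, b2):
    """Two-dimensional lattice basis reduction of Lagrange/Gauss (sketch).

    b1, b2: linearly independent integer 2-vectors given as tuples.
    Returns a basis reaching both successive minima of the lattice.
    """
    def dot(u, v):
        return u[0] * v[0] + u[1] * v[1]

    if dot(b1, b1) > dot(b2, b2):
        b1, b2 = b2, b1                    # ensure |b1| <= |b2|
    while True:
        # subtract the nearest-integer multiple of b1 from b2
        mu = round(dot(b1, b2) / dot(b1, b1))
        b2 = (b2[0] - mu * b1[0], b2[1] - mu * b1[1])
        if dot(b2, b2) >= dot(b1, b1):
            return b1, b2                  # b2 cannot be shortened further
        b1, b2 = b2, b1
```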

Journal ArticleDOI
TL;DR: It is shown that, with a high probability, either suboptimal algorithm can reach an optimal point if a backoff mechanism is used for contention resolution and it is demonstrated that the adoption of adaptive modulation affects the optimal sensing-order setting of the two users, compared with the case without adaptive modulation.
Abstract: This paper investigates the sensing-order problem in two-user multichannel cognitive medium access control. When adaptive modulation is not adopted, although brute-force search can be used to find the optimal sensing-order setting of the two users, it has huge computational complexity. Accordingly, we propose two suboptimal algorithms, namely, the greedy search algorithm and the incremental algorithm, which have comparable performance with that of brute-force search and have much less computational complexity. It is shown that, with a high probability, either suboptimal algorithm can reach an optimal point if a backoff mechanism is used for contention resolution. When adaptive modulation is adopted, it is observed that the traditional stopping rule does not lead to an optimal point in the two-user case. Furthermore, we demonstrate that the adoption of adaptive modulation affects the optimal sensing-order setting of the two users, compared with the case without adaptive modulation. These findings imply that the stopping rule and the sensing-order setting should be jointly designed from a systematic point of view.