
Showing papers on "Approximation algorithm published in 2010"


Journal ArticleDOI
TL;DR: In this article, the authors introduce a new class of structured compressible signals along with a new sufficient condition for robust structured compressible signal recovery that they dub the restricted amplification property, which is the natural counterpart to the restricted isometry property of conventional CS.
Abstract: Compressive sensing (CS) is an alternative to Shannon/Nyquist sampling for the acquisition of sparse or compressible signals that can be well approximated by just K ≪ N elements from an N-dimensional basis. Instead of taking periodic samples, CS measures inner products with M < N random vectors and then recovers the signal via a sparsity-seeking optimization or greedy algorithm. Standard CS dictates that robust signal recovery is possible from M = O(K log(N/K)) measurements. It is possible to substantially decrease M without sacrificing robustness by leveraging more realistic signal models that go beyond simple sparsity and compressibility by including structural dependencies between the values and locations of the signal coefficients. This paper introduces a model-based CS theory that parallels the conventional theory and provides concrete guidelines on how to create model-based recovery algorithms with provable performance guarantees. A highlight is the introduction of a new class of structured compressible signals along with a new sufficient condition for robust structured compressible signal recovery that we dub the restricted amplification property, which is the natural counterpart to the restricted isometry property of conventional CS. Two examples integrate two relevant signal models (wavelet trees and block sparsity) into two state-of-the-art CS recovery algorithms and prove that they offer robust recovery from just M = O(K) measurements. Extensive numerical simulations demonstrate the validity and applicability of our new theory and algorithms.
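
The recovery loop the abstract describes can be sketched with iterative hard thresholding (IHT) in place of the paper's model-based algorithms; the orthonormal-row sensing matrix, toy sizes, and coefficient values below are assumptions for the demo, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, K = 64, 32, 4                    # K-sparse signal, M < N measurements

# Random sensing matrix with orthonormal rows (a well-conditioned toy choice)
# and a K-sparse signal x on a random support.
Phi = np.linalg.qr(rng.standard_normal((N, M)))[0].T
x = np.zeros(N)
x[rng.choice(N, K, replace=False)] = [1.5, -2.0, 1.0, -1.2]
y = Phi @ x                            # M inner products with random vectors

# Iterative hard thresholding: gradient step, then keep the K largest entries.
# Model-based recovery replaces this K-sparse projection with a projection
# onto a structured model (e.g. wavelet trees), which is what lowers M to O(K).
xh = np.zeros(N)
for _ in range(300):
    r = xh + Phi.T @ (y - Phi @ xh)
    keep = np.argsort(np.abs(r))[-K:]
    xh = np.zeros(N)
    xh[keep] = r[keep]

print(np.linalg.norm(xh - x) < 1e-3)
```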

1,789 citations


Proceedings ArticleDOI
13 Jun 2010
TL;DR: The proposed method to learn an over-complete dictionary is based on extending the K-SVD algorithm by incorporating the classification error into the objective function, thus allowing the performance of a linear classifier and the representational power of the dictionary to be considered at the same time by the same optimization procedure.
Abstract: In a sparse-representation-based face recognition scheme, the desired dictionary should have good representational power (i.e., being able to span the subspace of all faces) while supporting optimal discrimination of the classes (i.e., different human subjects). We propose a method to learn an over-complete dictionary that attempts to simultaneously achieve the above two goals. The proposed method, discriminative K-SVD (D-KSVD), is based on extending the K-SVD algorithm by incorporating the classification error into the objective function, thus allowing the performance of a linear classifier and the representational power of the dictionary to be considered at the same time by the same optimization procedure. The D-KSVD algorithm finds the dictionary and solves for the classifier using a procedure derived from the K-SVD algorithm, which has proven efficiency and performance. This is in contrast to most existing work that relies on iteratively solving sub-problems with the hope of achieving the global optimum through iterative approximation. We evaluate the proposed method using two commonly-used face databases, the Extended YaleB database and the AR database, with detailed comparison to 3 alternative approaches, including the leading state-of-the-art in the literature. The experiments show that the proposed method outperforms these competing methods in most of the cases. Further, using the Fisher criterion and dictionary incoherence, we also show that the learned dictionary and the corresponding classifier are indeed better-posed to support sparse-representation-based recognition.

1,331 citations


Journal ArticleDOI
29 Apr 2010
TL;DR: This paper surveys the major practical algorithms for sparse approximation with specific attention to computational issues, to the circumstances in which individual methods tend to perform well, and to the theoretical guarantees available.
Abstract: The goal of the sparse approximation problem is to approximate a target signal using a linear combination of a few elementary signals drawn from a fixed collection. This paper surveys the major practical algorithms for sparse approximation. Specific attention is paid to computational issues, to the circumstances in which individual methods tend to perform well, and to the theoretical guarantees available. Many fundamental questions in electrical engineering, statistics, and applied mathematics can be posed as sparse approximation problems, making these algorithms versatile and relevant to a plethora of applications.
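
One of the greedy algorithms such surveys cover is Orthogonal Matching Pursuit; here is a self-contained sketch on a toy noiseless instance (the sizes, support, and coefficients are illustrative assumptions, not from the paper).

```python
import numpy as np

def omp(Phi, y, K):
    """Orthogonal Matching Pursuit: greedily add the column most correlated
    with the current residual, then re-fit by least squares on that support."""
    support = []
    r = y.copy()
    for _ in range(K):
        support.append(int(np.argmax(np.abs(Phi.T @ r))))
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        r = y - Phi[:, support] @ coef
    x = np.zeros(Phi.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(1)
M, N, K = 30, 50, 3
Phi = rng.standard_normal((M, N))
Phi /= np.linalg.norm(Phi, axis=0)     # unit-norm columns
x0 = np.zeros(N)
x0[[3, 17, 40]] = [1.0, -2.0, 0.5]
xh = omp(Phi, Phi @ x0, K)
print(np.allclose(xh, x0, atol=1e-6))
```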

1,003 citations


Posted Content
TL;DR: In this paper, the authors explore a range of network community detection methods in order to compare them and to understand their relative performance and the systematic biases in the clusters they identify, and examine several different classes of approximation algorithms that aim to optimize such objective functions.
Abstract: Detecting clusters or communities in large real-world graphs such as large social or information networks is a problem of considerable interest. In practice, one typically chooses an objective function that captures the intuition of a network cluster as a set of nodes with better internal connectivity than external connectivity, and then one applies approximation algorithms or heuristics to extract sets of nodes that are related to the objective function and that "look like" good communities for the application of interest. In this paper, we explore a range of network community detection methods in order to compare them and to understand their relative performance and the systematic biases in the clusters they identify. We evaluate several common objective functions that are used to formalize the notion of a network community, and we examine several different classes of approximation algorithms that aim to optimize such objective functions. In addition, rather than simply fixing an objective and asking for an approximation to the best cluster of any size, we consider a size-resolved version of the optimization problem. Considering community quality as a function of its size provides a much finer lens with which to examine community detection algorithms, since objective functions and approximation algorithms often have non-obvious size-dependent behavior.
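
A standard objective function of the kind described above is conductance: the number of edges leaving a node set divided by the smaller of the two volumes. A minimal sketch (the toy graph is an assumption, not from the paper):

```python
# Conductance of a node set S: cut edges leaving S divided by the smaller of
# the two volumes (sums of degrees). Lower conductance = a better community.
def conductance(adj, S):
    S = set(S)
    cut = sum(1 for u in S for v in adj[u] if v not in S)
    vol_S = sum(len(adj[u]) for u in S)
    vol_rest = sum(len(adj[u]) for u in adj) - vol_S
    return cut / min(vol_S, vol_rest)

# Toy graph: two triangles joined by a single bridge edge
adj = {
    0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
    3: [2, 4, 5], 4: [3, 5], 5: [3, 4],
}
print(conductance(adj, {0, 1, 2}))  # 1 cut edge / volume 7 = 0.14285714...
```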

950 citations


Proceedings ArticleDOI
26 Apr 2010
TL;DR: Considering community quality as a function of its size provides a much finer lens with which to examine community detection algorithms, since objective functions and approximation algorithms often have non-obvious size-dependent behavior.
Abstract: Detecting clusters or communities in large real-world graphs such as large social or information networks is a problem of considerable interest. In practice, one typically chooses an objective function that captures the intuition of a network cluster as a set of nodes with better internal connectivity than external connectivity, and then one applies approximation algorithms or heuristics to extract sets of nodes that are related to the objective function and that "look like" good communities for the application of interest. In this paper, we explore a range of network community detection methods in order to compare them and to understand their relative performance and the systematic biases in the clusters they identify. We evaluate several common objective functions that are used to formalize the notion of a network community, and we examine several different classes of approximation algorithms that aim to optimize such objective functions. In addition, rather than simply fixing an objective and asking for an approximation to the best cluster of any size, we consider a size-resolved version of the optimization problem. Considering community quality as a function of its size provides a much finer lens with which to examine community detection algorithms, since objective functions and approximation algorithms often have non-obvious size-dependent behavior.

854 citations


Book
08 Nov 2010
TL;DR: An introduction to exact exponential algorithms for NP-hard problems: techniques that are provably faster than exhaustive brute-force search, even though they remain exponential.
Abstract: Today most computer scientists believe that NP-hard problems cannot be solved by polynomial-time algorithms. From the polynomial-time perspective, all NP-complete problems are equivalent but their exponential-time properties vary widely. Why do some NP-hard problems appear to be easier than others? Are there algorithmic techniques for solving hard problems that are significantly faster than the exhaustive, brute-force methods? The algorithms that address these questions are known as exact exponential algorithms. The history of exact exponential algorithms for NP-hard problems dates back to the 1960s. The two classical examples are the Bellman, Held, and Karp dynamic programming algorithm for the traveling salesman problem and Ryser's inclusion-exclusion formula for the permanent of a matrix. The design and analysis of exact algorithms leads to a better understanding of hard problems and initiates interesting new combinatorial and algorithmic challenges. The last decade has witnessed a rapid development of the area, with many new algorithmic techniques discovered. This has transformed exact algorithms into a very active research field. This book provides an introduction to the area and explains the most common algorithmic techniques, and the text is supported throughout with exercises and detailed notes for further reading. The book is intended for advanced students and researchers in computer science, operations research, optimization and combinatorics.
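
The Bellman-Held-Karp dynamic program mentioned above fits in a few lines; the distance matrix below is a made-up toy instance.

```python
from itertools import combinations

def held_karp(dist):
    """Bellman-Held-Karp dynamic program: exact TSP in O(2^n n^2) time,
    far better than the (n-1)! brute force."""
    n = len(dist)
    # dp[(S, j)] = shortest path from city 0 visiting exactly set S, ending at j
    dp = {(frozenset([j]), j): dist[0][j] for j in range(1, n)}
    for size in range(2, n):
        for S in combinations(range(1, n), size):
            fS = frozenset(S)
            for j in S:
                dp[(fS, j)] = min(dp[(fS - {j}, k)] + dist[k][j]
                                  for k in S if k != j)
    full = frozenset(range(1, n))
    return min(dp[(full, j)] + dist[j][0] for j in range(1, n))

# Toy symmetric instance (made up for illustration)
dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 8],
        [10, 4, 8, 0]]
print(held_karp(dist))  # 23, the optimal tour length for this instance
```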

494 citations


Journal ArticleDOI
TL;DR: In this paper, a survey of deterministic scheduling problems with availability constraints motivated by preventive maintenance is presented; complexity results, exact algorithms, and approximation algorithms in single machine, parallel machine, flow shop, open shop, and job shop scheduling environments with different criteria are surveyed briefly.

376 citations


Journal ArticleDOI
TL;DR: It is argued that Monte Carlo methods provide a source of rational process models that connect optimal solutions to psychological processes, and it is proposed that a particle filter with a single particle provides a good description of human inferences.
Abstract: Rational models of cognition typically consider the abstract computational problems posed by the environment, assuming that people are capable of optimally solving those problems. This differs from more traditional formal models of cognition, which focus on the psychological processes responsible for behavior. A basic challenge for rational models is thus explaining how optimal solutions can be approximated by psychological processes. We outline a general strategy for answering this question, namely to explore the psychological plausibility of approximation algorithms developed in computer science and statistics. In particular, we argue that Monte Carlo methods provide a source of rational process models that connect optimal solutions to psychological processes. We support this argument through a detailed example, applying this approach to Anderson's (1990, 1991) rational model of categorization (RMC), which involves a particularly challenging computational problem. Drawing on a connection between the RMC and ideas from nonparametric Bayesian statistics, we propose 2 alternative algorithms for approximate inference in this model. The algorithms we consider include Gibbs sampling, a procedure appropriate when all stimuli are presented simultaneously, and particle filters, which sequentially approximate the posterior distribution with a small number of samples that are updated as new data become available. Applying these algorithms to several existing datasets shows that a particle filter with a single particle provides a good description of human inferences.
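
A toy illustration of the particle-filter idea, using a Beta-Bernoulli model as a stand-in for the categorization model (the model, data, and particle count are assumptions for the demo, not from the paper):

```python
import random

random.seed(0)

# Minimal sequential Monte Carlo: particles are candidate coin biases drawn
# from a uniform prior; each observation reweights them, then resampling
# concentrates the particle set on the posterior.
P = 5000
particles = [random.random() for _ in range(P)]
data = [1, 1, 0, 1, 1, 1, 0, 1]                    # observed coin flips

for obs in data:
    weights = [th if obs == 1 else 1 - th for th in particles]
    particles = random.choices(particles, weights=weights, k=P)  # resample

est = sum(particles) / P
exact = (1 + sum(data)) / (2 + len(data))          # Beta posterior mean = 0.7
print(abs(est - exact) < 0.05)
```

With a single particle (P = 1), the same loop degenerates into the sequential, one-hypothesis process the paper argues matches human inference.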

338 citations


Proceedings ArticleDOI
05 Jun 2010
TL;DR: An algorithm that for every ε > 0 approximates the Densest k-Subgraph problem within a ratio of n^(1/4+ε) in time n^O(1/ε), and an extension to this algorithm which achieves an O(n^(1/4-ε))-approximation in 2^(n^O(ε)) time.
Abstract: In the Densest k-Subgraph problem, given a graph G and a parameter k, one needs to find a subgraph of G induced on k vertices that contains the largest number of edges. There is a significant gap between the best known upper and lower bounds for this problem. It is NP-hard, and does not have a PTAS unless NP has subexponential time algorithms. On the other hand, the current best known algorithm of Feige, Kortsarz and Peleg gives an approximation ratio of n^(1/3-c) for some fixed c > 0 (later estimated at around c = 1/90). We present an algorithm that for every ε > 0 approximates the Densest k-Subgraph problem within a ratio of n^(1/4+ε) in time n^O(1/ε). If allowed to run for time n^O(log n), the algorithm achieves an approximation ratio of O(n^(1/4)). Our algorithm is inspired by studying an average-case version of the problem where the goal is to distinguish random graphs from random graphs with planted dense subgraphs; the approximation ratio we achieve for the general case matches the "distinguishing ratio" we obtain for this planted problem. At a high level, our algorithms involve cleverly counting appropriately defined trees of constant size in G, and using these counts to identify the vertices of the dense subgraph. We say that a graph G(V,E) has log-density α if its average degree is Θ(|V|^α). The algorithmic core of our result is a procedure to output a k-subgraph of 'nontrivial' density whenever the log-density of the densest k-subgraph is larger than the log-density of the host graph. We outline an extension to our approximation algorithm which achieves an O(n^(1/4-ε))-approximation in 2^(n^O(ε)) time. We also show that, for certain parameter ranges, eigenvalue and SDP based techniques can outperform our basic distinguishing algorithm for random instances (in polynomial time), though without improving upon the O(n^(1/4)) guarantee overall.

332 citations


Proceedings Article
01 Jan 2010
TL;DR: The algorithms are based on the principle of inclusion-exclusion and the zeta transform, yielding exact algorithms in 2^n·n^O(1) time for several well-studied partition problems including domatic number, chromatic number, maximum k-cut, bin packing, list coloring, and the chromatic polynomial.
Abstract: Given a set N with n elements and a family F of subsets, we show how to partition N into k such subsets in 2^n·n^O(1) time. We also consider variations of this problem where the subsets may overlap or are weighted, and we solve the decision, counting, summation, and optimization versions of these problems. Our algorithms are based on the principle of inclusion-exclusion and the zeta transform. In effect we get exact algorithms in 2^n·n^O(1) time for several well-studied partition problems including domatic number, chromatic number, maximum k-cut, bin packing, list coloring, and the chromatic polynomial. We also have applications to Bayesian learning with decision graphs and to model-based data clustering. If only polynomial space is available, our algorithms run in time 3^n·n^O(1) if membership in F can be decided in polynomial time. We solve chromatic number in O(2.2461^n) time and domatic number in O(2.8718^n) time. Finally, we present a family of polynomial space approximation algorithms that find a number between χ(G) and ⌈(1+ε)χ(G)⌉ in time O(1.2209^n + 2.2461^(e^(-ε)n)).
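
The inclusion-exclusion idea can be sketched for chromatic number: G is k-colorable exactly when the alternating sum below is positive. This is a direct 2^n-subset implementation for tiny graphs, not the paper's optimized variant.

```python
from functools import lru_cache

# G is k-colorable iff sum over subsets S of (-1)^(n-|S|) * i(S)^k > 0,
# where i(S) counts independent sets contained in S (inclusion-exclusion
# over ordered covers of V by k independent sets).
def chromatic_number(n, edges):
    nbr = [0] * n
    for u, v in edges:
        nbr[u] |= 1 << v
        nbr[v] |= 1 << u

    @lru_cache(maxsize=None)
    def i(S):                 # number of independent sets inside bitmask S
        if S == 0:
            return 1          # the empty set
        v = (S & -S).bit_length() - 1
        without_v = S & ~(1 << v)
        return i(without_v) + i(without_v & ~nbr[v])

    full = (1 << n) - 1
    for k in range(1, n + 1):
        total = sum((-1) ** (n - bin(S).count('1')) * i(S) ** k
                    for S in range(full + 1))
        if total > 0:
            return k

# 5-cycle: an odd cycle needs 3 colors
print(chromatic_number(5, [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]))  # 3
```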

325 citations


Proceedings ArticleDOI
13 Jun 2010
TL;DR: This paper provides the first rigorous foundation to state evolution, and proves that indeed it holds asymptotically in the large system limit for sensing matrices with iid gaussian entries.
Abstract: 'Approximate message passing' algorithms proved to be extremely effective in reconstructing sparse signals from a small number of incoherent linear measurements. Extensive numerical experiments further showed that their dynamics is accurately tracked by a simple one-dimensional iteration termed state evolution. In this paper we provide the first rigorous foundation to state evolution. We prove that indeed it holds asymptotically in the large system limit for sensing matrices with i.i.d. Gaussian entries. While our focus is on message passing algorithms for compressed sensing, the analysis extends beyond this setting, to a general class of algorithms on dense graphs. In this context, state evolution plays the role that density evolution has for sparse graphs.
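
A minimal AMP sketch for this setting (the toy sizes and the threshold rule are assumptions for the demo; the paper's contribution is the state-evolution analysis, not this particular implementation):

```python
import numpy as np

rng = np.random.default_rng(3)
N, M, K = 500, 250, 20
x = np.zeros(N)
x[rng.choice(N, K, replace=False)] = 3 * rng.standard_normal(K)
A = rng.standard_normal((M, N)) / np.sqrt(M)   # iid Gaussian sensing matrix
y = A @ x

soft = lambda u, t: np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

# AMP: iterative soft thresholding plus the Onsager correction term b*z on
# the residual. The empirical noise level ||z||/sqrt(M) that sets the
# threshold is the scalar quantity state evolution tracks.
xh, z = np.zeros(N), y.copy()
for _ in range(30):
    tau = 2.0 * np.linalg.norm(z) / np.sqrt(M)
    xh_new = soft(xh + A.T @ z, tau)
    b = np.count_nonzero(xh_new) / M           # Onsager coefficient
    z = y - A @ xh_new + b * z
    xh = xh_new

print(np.linalg.norm(xh - x) / np.linalg.norm(x) < 0.1)
```

Dropping the `b * z` term turns this into plain iterative soft thresholding, whose dynamics state evolution does not track.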

Journal ArticleDOI
01 Mar 2010
TL;DR: Several elementary measures are proposed for this rough-set framework, and a concept of approximation reduct is introduced to characterize the smallest attribute subset that preserves the lower approximation and upper approximation of all decision classes in this rough set model.
Abstract: The original rough-set model is primarily concerned with the approximations of sets described by a single equivalence relation on a given universe. From the granular computing point of view, the classical rough-set theory is based on a single granulation. This correspondence paper first extends the rough-set model based on a tolerance relation to an incomplete rough-set model based on multigranulations, where set approximations are defined through using multiple tolerance relations on the universe. Then, several elementary measures are proposed for this rough-set framework, and a concept of approximation reduct is introduced to characterize the smallest attribute subset that preserves the lower approximation and upper approximation of all decision classes in this rough-set model. Finally, several key algorithms are designed for finding an approximation reduct.

Journal ArticleDOI
TL;DR: Simulation results show that the diffusion LMS algorithm with the proposed adaptive combiners outperforms those with existing static combiners and the incremental LMS algorithm, and that the theoretical analysis provides a good approximation of practical performance.
Abstract: This paper presents an efficient adaptive combination strategy for the distributed estimation problem over diffusion networks in order to improve robustness against the spatial variation of signal and noise statistics over the network. The concept of minimum variance unbiased estimation is used to derive the proposed adaptive combiner in a systematic way. The mean, mean-square, and steady-state performance analyses of the diffusion least-mean squares (LMS) algorithms with adaptive combiners are included and the stability of convex combination rules is proved. Simulation results show (i) that the diffusion LMS algorithm with the proposed adaptive combiners outperforms those with existing static combiners and the incremental LMS algorithm, and (ii) that the theoretical analysis provides a good approximation of practical performance.
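
A sketch of adapt-then-combine diffusion LMS with uniform static combiners, the baseline the proposed adaptive combiners improve on (the network, step size, and noise level are toy assumptions, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(7)
w_true = np.array([1.0, -0.5, 2.0])    # parameter all nodes try to estimate
neighbors = {0: [0, 1], 1: [0, 1, 2], 2: [1, 2, 3], 3: [2, 3]}  # line network

mu, nodes = 0.05, 4
W = np.zeros((nodes, 3))
for _ in range(2000):
    psi = np.empty_like(W)
    for k in range(nodes):
        u = rng.standard_normal(3)                      # regressor at node k
        d = u @ w_true + 0.1 * rng.standard_normal()    # noisy measurement
        psi[k] = W[k] + mu * (d - u @ W[k]) * u         # local LMS adaptation
    for k in range(nodes):
        # combine step: uniform average over the neighborhood (incl. self);
        # the paper adapts these combination weights instead of fixing them
        W[k] = psi[neighbors[k]].mean(axis=0)

print(np.abs(W - w_true).max() < 0.1)
```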

Proceedings Article
Martin Jaggi1, Marek Sulovsky1
21 Jun 2010
TL;DR: A new approximation algorithm building upon the recent sparse approximate SDP solver of Hazan (2008) is proposed, which comes with strong convergence guarantees, and can be interpreted as a first theoretically justified variant of Simon-Funk-type SVD heuristics.
Abstract: Optimization problems with a nuclear norm regularization, such as, e.g., low-norm matrix factorizations, have seen many applications recently. We propose a new approximation algorithm building upon the recent sparse approximate SDP solver of (Hazan, 2008). The experimental efficiency of our method is demonstrated on large matrix completion problems such as the Netflix dataset. The algorithm comes with strong convergence guarantees, and can be interpreted as a first theoretically justified variant of Simon-Funk-type SVD heuristics. The method is free of tuning parameters, and very easy to parallelize.

Journal ArticleDOI
TL;DR: A comprehensive handbook of approximation algorithms and metaheuristics, covering basic methodologies (greedy methods, LP rounding, approximation schemes, hardness of approximation), local search and metaheuristics, multiobjective optimization, and applications ranging from bin packing, facility location, and Steiner trees to networks, computational geometry, and computational biology.
Abstract: PREFACE BASIC METHODOLOGIES Introduction, Overview, and Notation Basic Methodologies and Applications Restriction Methods Greedy Methods Recursive Greedy Methods Linear Programming LP Rounding and Extensions On Analyzing Semidefinite Programming Relaxations of Complex Quadratic Optimization Problems Polynomial-Time Approximation Schemes Rounding, Interval Partitioning, and Separation Asymptotic Polynomial-Time Approximation Schemes Randomized Approximation Techniques Distributed Approximation Algorithms via LP-Duality and Randomization Empirical Analysis of Randomized Algorithms Reductions that Preserve Approximability Differential Ratio Approximation Hardness of Approximation LOCAL SEARCH, NEURAL NETWORKS, AND METAHEURISTICS Local Search Stochastic Local Search Very Large-Scale Neighborhood Search: Theory, Algorithms, and Applications Reactive Search: Machine Learning for Memory-Based Heuristics Neural Networks Principles of Tabu Search Evolutionary Computation Simulated Annealing Ant Colony Optimization Memetic Algorithms MULTIOBJECTIVE OPTIMIZATION, SENSITIVITY ANALYSIS, AND STABILITY Approximation in Multiobjective Problems Stochastic Local Search Algorithms for Multiobjective Combinatorial Optimization: A Review Sensitivity Analysis in Combinatorial Optimization Stability of Approximation TRADITIONAL APPLICATIONS Performance Guarantees for One-Dimensional Bin Packing Variants of Classical One-Dimensional Bin Packing Variable-Sized Bin Packing and Bin Covering Multidimensional Packing Problems Practical Algorithms for Two-Dimensional Packing A Generic Primal-Dual Approximation Algorithm for an Interval Packing and Stabbing Problem Approximation Algorithms for Facility Dispersion Greedy Algorithms for Metric Facility Location Problems Prize-Collecting Traveling Salesman and Related Problems A Development and Deployment Framework for Distributed Branch and Bound Approximations for Steiner Minimum Trees Practical Approximations of Steiner Trees in Uniform Orientation Metrics Approximation Algorithms for Imprecise Computation Tasks with 0/1 Constraint Scheduling Malleable Tasks Vehicle Scheduling Problems in Graphs Approximation Algorithms and Heuristics for Classical Planning Generalized Assignment Problem Probabilistic Greedy Heuristics for Satisfiability Problems COMPUTATIONAL GEOMETRY AND GRAPH APPLICATIONS Approximation Algorithms for Some Optimal 2D and 3D Triangulations Approximation Schemes for Minimum-Cost k-Connectivity Problems in Geometric Graphs Dilation and Detours in Geometric Networks The Well-Separated Pair Decomposition and its Applications Minimum-Edge Length Rectangular Partitions Partitioning Finite d-Dimensional Integer Grids with Applications Maximum Planar Subgraph Edge-Disjoint Paths and Unsplittable Flow Approximating Minimum-Cost Connectivity Problems Optimum Communication Spanning Trees Approximation Algorithms for Multilevel Graph Partitioning Hypergraph Partitioning and Clustering Finding Most Vital Edges in a Graph Stochastic Local Search Algorithms for the Graph Coloring Problem On Solving the Maximum Disjoint Paths Problem with Ant Colony Optimization LARGE-SCALE AND EMERGING APPLICATIONS Cost-Efficient Multicast Routing in Ad Hoc and Sensor Networks Approximation Algorithm for Clustering in Ad Hoc Networks Topology Control Problems for Wireless Ad Hoc Networks Geometrical Spanner for Wireless Ad Hoc Networks Multicast Topology Inference and its Applications Multicast Congestion in Ring Networks QoS Multimedia Multicast Routing Overlay Networks for Peer-to-Peer Networks Scheduling Data Broadcasts on Wireless Channels: Exact Solutions and Heuristics Combinatorial and Algorithmic Issues for Microarray Analysis Approximation Algorithms for the Primer Selection, Planted Motif Search, and Related Problems Dynamic and Fractional Programming-Based Approximation Algorithms for Sequence Alignment with Constraints Approximation Algorithms for the Selection of Robust Tag SNPs Sphere Packing and Medical Applications Large-Scale Global Placement Multicommodity Flow Algorithms for Buffered Global Routing Algorithmic Game Theory and Scheduling Approximate Economic Equilibrium Algorithms Approximation Algorithms and Algorithm Mechanism Design Histograms, Wavelets, Streams, and Approximation Digital Reputation for Virtual Communities Color Quantization INDEX

Journal ArticleDOI
TL;DR: An approximation algorithm for finding optimal decompositions, based on the insight provided by the theorem, which significantly outperforms a greedy approximation algorithm for a set covering problem to which the matrix decomposition problem is easily shown to be reducible.

Journal ArticleDOI
TL;DR: In this paper, an atomic decomposition for minimum rank approximation (ADMiRA) algorithm is proposed; its performance guarantee, stated in terms of the rank-restricted isometry property (R-RIP), bounds both the number of iterations and the error in the approximate solution for the general case of noisy measurements.
Abstract: In this paper, we address compressed sensing of a low-rank matrix posing the inverse problem as an approximation problem with a specified target rank of the solution. A simple search over the target rank then provides the minimum rank solution satisfying a prescribed data approximation bound. We propose an atomic decomposition providing an analogy between parsimonious representations of a sparse vector and a low-rank matrix and extending efficient greedy algorithms from the vector to the matrix case. In particular, we propose an efficient and guaranteed algorithm named atomic decomposition for minimum rank approximation (ADMiRA) that extends Needell and Tropp's compressive sampling matching pursuit (CoSaMP) algorithm from the sparse vector to the low-rank matrix case. The performance guarantee is given in terms of the rank-restricted isometry property (R-RIP) and bounds both the number of iterations and the error in the approximate solution for the general case of noisy measurements and approximately low-rank solution. With a sparse measurement operator as in the matrix completion problem, the computation in ADMiRA is linear in the number of measurements. Numerical experiments for the matrix completion problem show that, although the R-RIP is not satisfied in this case, ADMiRA is a competitive algorithm for matrix completion.
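
The vector-to-matrix analogy the abstract draws can be illustrated with a rank-projected gradient sketch on a matrix completion instance. This is a simplified SVP-style demo in the spirit of ADMiRA's rank-restricted greedy iterations, not the paper's exact algorithm; sizes and sampling rate are assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
m, n, r = 20, 20, 2
M_true = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))  # rank 2
mask = rng.random((m, n)) < 0.5         # ~50% of entries observed
p = mask.mean()

def project_rank(X, r):
    """Best rank-r approximation via truncated SVD: the matrix analogue of
    keeping the K largest entries of a sparse vector."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

X = np.zeros((m, n))
for _ in range(300):
    G = mask * (M_true - X)             # (negative) gradient on observed entries
    X = project_rank(X + G / p, r)      # step 1/p compensates the sampling rate

err = np.linalg.norm(X - M_true) / np.linalg.norm(M_true)
print(err < 0.05)
```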

Proceedings ArticleDOI
14 Mar 2010
TL;DR: A new algorithm is proposed, termed a-Net, that approximates the behavior of multi-cache networks by leveraging existing approximation algorithms for isolated LRU caches; its utility is demonstrated using both per-cache and network-wide performance measures.
Abstract: Many systems employ caches to improve performance. While isolated caches have been studied in-depth, multi-cache systems are not well understood, especially in networks with arbitrary topologies. In order to gain insight into and manage these systems, a low-complexity algorithm for approximating their behavior is required. We propose a new algorithm, termed a-Net, that approximates the behavior of multi-cache networks by leveraging existing approximation algorithms for isolated LRU caches. We demonstrate the utility of a-Net using both per-cache and network-wide performance measures. We also perform factor analysis of the approximation error to identify system parameters that determine the precision of a-Net.

Journal ArticleDOI
TL;DR: This work gives the first PTAS for this problem when the geometric objects are half-spaces in ℝ³ and when they are an r-admissible set of regions in the plane (this includes pseudo-disks, as they are 2-admissible).
Abstract: We consider the problem of computing minimum geometric hitting sets in which, given a set of geometric objects and a set of points, the goal is to compute the smallest subset of points that hit all geometric objects. The problem is known to be strongly NP-hard even for simple geometric objects like unit disks in the plane. Therefore, unless P = NP, it is not possible to get Fully Polynomial Time Approximation Schemes (FPTAS) for such problems. We give the first PTAS for this problem when the geometric objects are half-spaces in ℝ³ and when they are an r-admissible set of regions in the plane (this includes pseudo-disks, as they are 2-admissible). Quite surprisingly, our algorithm is a very simple local-search algorithm which iterates over local improvements only.
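
The local-improvement idea can be sketched in one dimension, with intervals standing in for half-spaces and pseudo-disks (the instance and the 2-for-1 swap rule below are illustrative assumptions, not the paper's analysis):

```python
from itertools import combinations

def hits_all(points, intervals):
    return all(any(lo <= p <= hi for p in points) for lo, hi in intervals)

# Local-search hitting set: start from any feasible point set and keep
# applying improvements that swap 2 chosen points for 1 new one.
def local_search(candidates, intervals):
    S = set(candidates)                  # trivially feasible start
    improved = True
    while improved:
        improved = False
        for out in combinations(sorted(S), 2):
            for new in candidates:
                T = (S - set(out)) | {new}
                if hits_all(T, intervals):
                    S, improved = T, True
                    break
            if improved:
                break
    return S

intervals = [(0, 2), (1, 3), (5, 7), (6, 8)]
candidates = [0, 1, 2, 3, 5, 6, 7, 8]
S = local_search(candidates, intervals)
print(len(S))  # 2: one point hits both left intervals, one hits both right
```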

Proceedings ArticleDOI
13 Jun 2010
TL;DR: In this article, the authors present a method for calculating the low-rank approximation of a matrix which minimizes the L 1 norm in the presence of missing data and outliers.
Abstract: The calculation of a low-rank approximation of a matrix is a fundamental operation in many computer vision applications. The workhorse of this class of problems has long been the Singular Value Decomposition. However, in the presence of missing data and outliers this method is not applicable, and unfortunately, this is often the case in practice. In this paper we present a method for calculating the low-rank factorization of a matrix which minimizes the L1 norm in the presence of missing data. Our approach represents a generalization of the Wiberg algorithm, one of the more convincing methods for factorization under the L2 norm. By utilizing the differentiability of linear programs, we can extend the underlying ideas behind this approach to include this class of L1 problems as well. We show that the proposed algorithm can be efficiently implemented using existing optimization software. We also provide preliminary experiments on synthetic as well as real world data with very convincing results.

Journal ArticleDOI
TL;DR: An overview of several upper bound heuristics that have been proposed and tested for the problem of determining the treewidth of a graph and finding tree decompositions; in many cases, the heuristics give tree decompositions whose width is close to the exact treewidth of the input graphs.
Abstract: For more and more applications, it is important to be able to compute the treewidth of a given graph and to find tree decompositions of small width reasonably fast. This paper gives an overview of several upper bound heuristics that have been proposed and tested for the problem of determining the treewidth of a graph and finding tree decompositions. Each of the heuristics produces tree decompositions whose width may be larger than the optimal width. However, experiments show that in many cases, the heuristics give tree decompositions whose width is close to the exact treewidth of the input graphs.
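
One classic upper-bound heuristic of the kind surveyed here is min-degree elimination; a compact sketch (the test graphs are toy assumptions):

```python
# Min-degree elimination heuristic: repeatedly eliminate a minimum-degree
# vertex and turn its neighborhood into a clique; the largest neighborhood
# size seen at elimination time upper-bounds the treewidth.
def min_degree_width(graph):
    adj = {v: set(ns) for v, ns in graph.items()}
    width = 0
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))
        nbrs = adj.pop(v)
        width = max(width, len(nbrs))
        for a in nbrs:
            adj[a].discard(v)
            adj[a] |= nbrs - {a}
    return width

cycle = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(min_degree_width(cycle), min_degree_width(path))  # 2 1
```

On these instances the bound is tight (a cycle has treewidth 2, a path 1); in general the heuristic can overshoot the true treewidth.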

Journal ArticleDOI
Tao Qin1, Tie-Yan Liu1, Hang Li1
TL;DR: A general framework for direct optimization of IR measures, which enjoys several theoretical advantages, and experiments on benchmark datasets show that the algorithms deduced from the framework are very effective when compared to existing methods.
Abstract: Recently direct optimization of information retrieval (IR) measures has become a new trend in learning to rank. In this paper, we propose a general framework for direct optimization of IR measures, which enjoys several theoretical advantages. The general framework, which can be used to optimize most IR measures, addresses the task by approximating the IR measures and optimizing the approximated surrogate functions. Theoretical analysis shows that a high approximation accuracy can be achieved by the framework. We take average precision (AP) and normalized discounted cumulated gains (NDCG) as examples to demonstrate how to realize the proposed framework. Experiments on benchmark datasets show that the algorithms deduced from our framework are very effective when compared to existing methods. The empirical results also agree well with the theoretical results obtained in the paper.
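
For concreteness, here is how NDCG, one of the measures the framework approximates, can be computed (one standard gain/discount formulation; the relevance labels are made up). Because NDCG depends on the sorted order of documents, it changes discontinuously as scores vary, which is why a smooth surrogate is needed for direct optimization.

```python
import math

def dcg(rels, k):
    """Discounted cumulative gain of the top-k relevance labels in rank order."""
    return sum((2 ** r - 1) / math.log2(i + 2) for i, r in enumerate(rels[:k]))

def ndcg(ranked_rels, k):
    ideal = dcg(sorted(ranked_rels, reverse=True), k)
    return dcg(ranked_rels, k) / ideal if ideal > 0 else 0.0

# Relevance labels of documents in the order a (hypothetical) ranker returned them
print(ndcg([3, 2, 0, 1], k=4))   # slightly below the perfect score
print(ndcg([3, 2, 1, 0], k=4))  # 1.0
```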

Journal ArticleDOI
TL;DR: This work studies the problem of minimizing the expected loss of a linear predictor while constraining its sparsity, i.e., bounding the number of features used by the predictor, and analyze the performance of several approximation algorithms.
Abstract: We study the problem of minimizing the expected loss of a linear predictor while constraining its sparsity, i.e., bounding the number of features used by the predictor. While the resulting optimization problem is generally NP-hard, several approximation algorithms are considered. We analyze the performance of these algorithms, focusing on the characterization of the trade-off between accuracy and sparsity of the learned predictor in different scenarios.
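One natural approximation scheme for such sparsity-constrained problems is forward greedy feature selection; the sketch below (illustrative, not the paper's exact algorithms) uses a 1-D least-squares fit per step, which is exact when feature columns are orthogonal:

```python
# Forward greedy selection for squared loss under a hard sparsity budget k.

def greedy_sparse_fit(X, y, k):
    """X: list of feature columns, y: targets, k: max number of features.
    Returns {feature index: coefficient} with at most k entries."""
    residual = list(y)
    coef = {}
    for _ in range(k):
        best_j, best_gain, best_w = None, 0.0, 0.0
        for j, col in enumerate(X):
            if j in coef:
                continue
            dot = sum(c * r for c, r in zip(col, residual))
            norm = sum(c * c for c in col)
            if norm == 0:
                continue
            gain = dot * dot / norm  # loss reduction from this feature alone
            if gain > best_gain:
                best_j, best_gain, best_w = j, gain, dot / norm
        if best_j is None:           # no feature reduces the loss further
            break
        coef[best_j] = best_w
        residual = [r - best_w * c for r, c in zip(residual, X[best_j])]
    return coef
```

Each iteration trades a unit of sparsity budget for the largest single-feature loss reduction, making the accuracy/sparsity trade-off explicit.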

Journal ArticleDOI
TL;DR: This work studies consensus seeking of networked agents on directed graphs where each agent has only noisy measurements of its neighbors' states, employs stochastic approximation type algorithms, generalizes them to networks with random link failures, and proves convergence results.
Abstract: We consider consensus seeking of networked agents on directed graphs where each agent has only noisy measurements of its neighbors' states. Stochastic approximation type algorithms are employed so that the individual states converge both in mean square and almost surely to the same limit. We further generalize the algorithm to networks with random link failures and prove convergence results.
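A minimal simulation sketch of the idea (illustrative, not the paper's exact scheme): each agent moves toward the average of noisy neighbor measurements with a decreasing gain a_t satisfying sum a_t = inf and sum a_t^2 < inf, which damps the measurement noise over time:

```python
import random

def noisy_consensus(x0, neighbors, steps=2000, noise=0.1, seed=0):
    """Stochastic-approximation consensus on a digraph.
    neighbors[i] lists the agents that i observes (with additive noise)."""
    rng = random.Random(seed)
    x = list(x0)
    for t in range(steps):
        a_t = 1.0 / (t + 1)  # decreasing gain
        nxt = list(x)
        for i, nbrs in enumerate(neighbors):
            if not nbrs:
                continue
            avg = sum(x[j] + rng.gauss(0.0, noise) for j in nbrs) / len(nbrs)
            nxt[i] = x[i] + a_t * (avg - x[i])
        x = nxt
    return x
```

On a 4-agent directed ring, the states contract toward a common (random) limit despite persistent measurement noise.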

Journal ArticleDOI
TL;DR: Constrained versions of the relay node placement problem, where relay nodes can only be placed at a set of candidate locations, are studied and a framework of polynomial time O(1) -approximation algorithms with small approximation ratios is presented.
Abstract: One approach to prolong the lifetime of a wireless sensor network (WSN) is to deploy some relay nodes to communicate with the sensor nodes, other relay nodes, and the base stations. The relay node placement problem for wireless sensor networks is concerned with placing a minimum number of relay nodes into a wireless sensor network to meet certain connectivity or survivability requirements. Previous studies have concentrated on the unconstrained version of the problem in the sense that relay nodes can be placed anywhere. In practice, there may be some physical constraints on the placement of relay nodes. To address this issue, we study constrained versions of the relay node placement problem, where relay nodes can only be placed at a set of candidate locations. In the connected relay node placement problem, we want to place a minimum number of relay nodes to ensure that each sensor node is connected with a base station through a bidirectional path. In the survivable relay node placement problem, we want to place a minimum number of relay nodes to ensure that each sensor node is connected with two base stations (or the only base station in case there is only one base station) through two node-disjoint bidirectional paths. For each of the two problems, we discuss its computational complexity and present a framework of polynomial time O(1) -approximation algorithms with small approximation ratios. Extensive numerical results show that our approximation algorithms can produce solutions very close to optimal solutions.
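To make the constrained-candidate flavor concrete, here is a sketch of a deliberately simplified variant (connectivity to base stations is ignored, so this reduces to set cover and carries only the classical O(log n) guarantee, not the paper's O(1) ratios; all names are illustrative):

```python
import math

def greedy_relay_cover(sensors, candidates, radius):
    """Pick as few candidate relay locations as possible so that every
    sensor lies within `radius` of a chosen relay (greedy set cover)."""
    def covered_by(c):
        return {i for i, s in enumerate(sensors) if math.dist(c, s) <= radius}
    uncovered = set(range(len(sensors)))
    chosen = []
    while uncovered:
        best = max(candidates, key=lambda c: len(covered_by(c) & uncovered))
        gain = covered_by(best) & uncovered
        if not gain:
            raise ValueError("some sensor is not coverable from any candidate")
        chosen.append(best)
        uncovered -= gain
    return chosen
```

The constrained problems in the paper add the requirement that chosen relays form one or two node-disjoint paths to base stations, which is where the harder approximation machinery comes in.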

Journal ArticleDOI
TL;DR: This paper discusses how preference relations on sets can be formally defined, gives examples for selected user preferences, and proposes a general preference-independent hill climber for multiobjective optimization with theoretical convergence properties.
Abstract: Assuming that evolutionary multiobjective optimization (EMO) mainly deals with set problems, one can identify three core questions in this area of research: 1) how to formalize what type of Pareto set approximation is sought; 2) how to use this information within an algorithm to efficiently search for a good Pareto set approximation; and 3) how to compare the Pareto set approximations generated by different optimizers with respect to the formalized optimization goal. There is a vast amount of studies addressing these issues from different angles, but so far only a few studies can be found that consider all questions under one roof. This paper is an attempt to summarize recent developments in the EMO field within a unifying theory of set-based multiobjective search. It discusses how preference relations on sets can be formally defined, gives examples for selected user preferences, and proposes a general preference-independent hill climber for multiobjective optimization with theoretical convergence properties. Furthermore, it shows how to use set preference relations for statistical performance assessment and provides corresponding experimental results. The proposed methodology brings together preference articulation, algorithm design, and performance assessment under one framework and thereby opens up a new perspective on EMO.
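A toy instance of the set-based view (a sketch under assumed choices, not the paper's algorithm): use the hypervolume indicator as the set preference relation and run a (1+1) hill climber that mutates one member of the solution set and accepts iff the mutated set is weakly preferred:

```python
import random

def hypervolume_2d(points, ref):
    """Hypervolume of a 2-D minimization front w.r.t. `ref` (larger is
    better); dominated points contribute nothing."""
    front, best_y = [], float("inf")
    for x, y in sorted(set(points)):      # keep the nondominated staircase
        if y < best_y:
            front.append((x, y))
            best_y = y
    hv = 0.0
    for i, (x, y) in enumerate(front):
        next_x = front[i + 1][0] if i + 1 < len(front) else ref[0]
        hv += (next_x - x) * (ref[1] - y)
    return hv

def set_hill_climber(init, objective, ref, iters=500, seed=1):
    """(1+1) set-based hill climber under the hypervolume preference."""
    rng = random.Random(seed)
    cur = list(init)
    value = lambda s: hypervolume_2d([objective(v) for v in s], ref)
    for _ in range(iters):
        cand = list(cur)
        i = rng.randrange(len(cand))
        cand[i] = min(1.0, max(0.0, cand[i] + rng.gauss(0.0, 0.1)))
        if value(cand) >= value(cur):     # accept iff weakly preferred
            cur = cand
    return cur, value(cur)

def biobj(v):
    """Toy biobjective whose Pareto-optimal set is v in [0, 1]."""
    return (v * v, (1.0 - v) ** 2)
```

Because acceptance requires weak preference, the indicator value never decreases, which is the essence of the convergence argument for preference-based set search.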

Proceedings ArticleDOI
23 Oct 2010
TL;DR: A subexponential-time approximation algorithm for the Unique Games problem, running in time exponential in an arbitrarily small polynomial of the input size, whose key ingredient is a graph decomposition result: for every $\epsilon>0$ and every regular $n$-vertex graph $G$, by changing at most an $\epsilon$ fraction of $G$'s edges one can break $G$ into disjoint parts so that the stochastic adjacency matrix of the induced graph on each part has at most $n^{\epsilon}$ eigenvalues larger than $1-\eta$.
Abstract: We give a subexponential-time approximation algorithm for the \textsc{Unique Games} problem. The algorithm runs in time that is exponential in an arbitrarily small polynomial of the input size, $n^{\epsilon}$. The approximation guarantee depends on~$\epsilon$, but not on the alphabet size or the number of variables. We also obtain subexponential algorithms with improved approximations for \textsc{Small-Set Expansion} and \textsc{Multicut}. For \textsc{Max Cut}, \textsc{Sparsest Cut}, and \textsc{Vertex Cover}, we give subexponential algorithms with improved approximations on some interesting subclasses of instances. Khot's Unique Games Conjecture (UGC) states that it is NP-hard to achieve approximation guarantees such as ours for \textsc{Unique Games}. While our results stop short of refuting the UGC, they do suggest that \textsc{Unique Games} is significantly easier than NP-hard problems such as \textsc{Max 3Sat}, \textsc{Max 3Lin}, \textsc{Label Cover} and more, which are believed not to have a subexponential algorithm achieving a non-trivial approximation ratio. The main component in our algorithms is a new result on graph decomposition that may have other applications. Namely, we show that for every $\epsilon>0$ and every regular $n$-vertex graph~$G$, by changing at most an $\epsilon$ fraction of $G$'s edges, one can break~$G$ into disjoint parts so that the stochastic adjacency matrix of the induced graph on each part has at most $n^{\epsilon}$ eigenvalues larger than $1-\eta$, where $\eta$ depends polynomially on $\epsilon$.

Proceedings ArticleDOI
17 Mar 2010
TL;DR: This paper considers the reconstruction of structured-sparse signals from noisy linear observations; the support of the signal coefficients is parameterized by a hidden binary pattern, and a structured probabilistic prior is assumed on the pattern.
Abstract: This paper considers the reconstruction of structured-sparse signals from noisy linear observations. In particular, the support of the signal coefficients is parameterized by a hidden binary pattern, and a structured probabilistic prior (e.g., Markov random chain/field/tree) is assumed on the pattern. Exact inference is discussed and an approximate inference scheme, based on loopy belief propagation (BP), is proposed. The proposed scheme iterates between exploitation of the observation-structure and exploitation of the pattern-structure, and is closely related to noncoherent turbo equalization, as used in digital communication receivers. An algorithm that exploits the observation structure is then detailed based on approximate message passing ideas. The application of EXIT charts is discussed, and empirical phase transition plots are calculated for Markov-chain structured sparsity.

Proceedings ArticleDOI
01 Dec 2010
TL;DR: This work addresses the effect of leader selection on the coherence of the network, defined in terms of an H2 norm of the system, and formulates an optimization problem to select the set of leaders that results in the highest coherence.
Abstract: We consider the problem of leader-based distributed coordination in networks where agents are subject to stochastic disturbances, but where certain designated leaders are immune to those disturbances. Specifically, we address the effect of leader selection on the coherence of the network, defined in terms of an H2 norm of the system. This quantity captures the level of agreement of the nodes in the face of the external disturbances. We show that network coherence depends on the eigenvalues of a principal submatrix of the Laplacian matrix, and we formulate an optimization problem to select the set of leaders that results in the highest coherence. As this optimization problem is combinatorial in nature, we also present several greedy algorithms for leader selection that rely on more easily computable bounds of the H2 norm and the eigenvalues of the system. Finally, we illustrate the effectiveness of these algorithms using several network examples.
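A hedged sketch of the greedy idea on a tiny example (illustrative names; the paper's greedy uses more refined bounds): with noise-free leaders, coherence is governed by the grounded Laplacian L_f obtained by deleting leader rows and columns, so we can greedily minimize trace(L_f^{-1}):

```python
def laplacian(n, edges):
    L = [[0.0] * n for _ in range(n)]
    for i, j in edges:
        L[i][i] += 1.0
        L[j][j] += 1.0
        L[i][j] -= 1.0
        L[j][i] -= 1.0
    return L

def trace_inverse(M):
    """trace(M^{-1}) by Gauss-Jordan elimination (M assumed invertible)."""
    n = len(M)
    A = [row[:] + [1.0 if c == r else 0.0 for c in range(n)]
         for r, row in enumerate(M)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(A[r][c]))  # partial pivoting
        A[c], A[p] = A[p], A[c]
        piv = A[c][c]
        A[c] = [v / piv for v in A[c]]
        for r in range(n):
            if r != c and A[r][c] != 0.0:
                f = A[r][c]
                A[r] = [v - f * w for v, w in zip(A[r], A[c])]
    return sum(A[i][n + i] for i in range(n))

def grounded_trace(L, leaders):
    followers = [i for i in range(len(L)) if i not in leaders]
    sub = [[L[i][j] for j in followers] for i in followers]
    return trace_inverse(sub)

def greedy_leaders(L, k):
    """Greedily add the leader that most reduces trace of the grounded
    Laplacian inverse (a coherence surrogate)."""
    leaders = set()
    for _ in range(k):
        best = min((i for i in range(len(L)) if i not in leaders),
                   key=lambda i: grounded_trace(L, leaders | {i}))
        leaders.add(best)
    return leaders
```

On a 5-node path graph, the single best leader under this surrogate is the center node, matching the intuition that a central leader pins down the network most effectively.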

Journal ArticleDOI
Jon Lee1, Maxim Sviridenko1, Jan Vondrák1
TL;DR: In this article, it was shown that for any k ≥ 2 and any ε > 0, there is a natural local search algorithm that has an approximation guarantee of 1/(k + ε) for the problem of maximizing a monotone submodular function subject to k matroid constraints.
Abstract: Submodular function maximization is a central problem in combinatorial optimization, generalizing many important NP-hard problems including max cut in digraphs, graphs, and hypergraphs; certain constraint satisfaction problems; maximum entropy sampling; and maximum facility location problems. Our main result is that for any k ≥ 2 and any ε > 0, there is a natural local search algorithm that has approximation guarantee of 1/(k + ε) for the problem of maximizing a monotone submodular function subject to k matroid constraints. This improves upon the 1/(k + 1)-approximation of Fisher, Nemhauser, and Wolsey obtained in 1978 [Fisher, M., G. Nemhauser, L. Wolsey. 1978. An analysis of approximations for maximizing submodular set functions---II. Math. Programming Stud. 8 73--87]. Also, our analysis can be applied to the problem of maximizing a linear objective function and even a general nonmonotone submodular function subject to k matroid constraints. We show that, in these cases, the approximation guarantees of our algorithms are 1/(k-1 + ε) and 1/(k + 1 + 1/(k-1) + ε), respectively. Our analyses are based on two new exchange properties for matroids. One is a generalization of the classical Rota exchange property for matroid bases, and another is an exchange property for two matroids based on the structure of matroid intersection.
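A toy instance of the local-search paradigm the paper analyzes (a sketch for a single uniform matroid, i.e., a cardinality constraint, with max coverage as the monotone submodular objective; the paper handles general matroids):

```python
def coverage(sets, chosen):
    """Monotone submodular objective: number of ground elements covered."""
    covered = set()
    for i in chosen:
        covered |= sets[i]
    return len(covered)

def local_search_max(sets, k):
    """1-swap local search for max coverage subject to |S| <= k: starting
    from an arbitrary base, repeatedly swap one chosen set for one
    unchosen set while the objective strictly improves."""
    n = len(sets)
    sol = set(range(min(k, n)))
    while True:
        swap = None
        for out in sol:
            for inn in range(n):
                if inn in sol:
                    continue
                cand = (sol - {out}) | {inn}
                if coverage(sets, cand) > coverage(sets, sol):
                    swap = cand
                    break
            if swap is not None:
                break
        if swap is None:          # local optimum: no improving 1-swap
            return sol, coverage(sets, sol)
        sol = swap
```

At a local optimum no single exchange helps, and the exchange properties of matroids are exactly what turns that local condition into the global 1/(k + ε) guarantee.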