
Showing papers on "Time complexity published in 2006"


Journal ArticleDOI
TL;DR: A method called convex relaxation is studied, which attempts to recover the ideal sparse signal by solving a convex program; the optimization can be completed in polynomial time with standard scientific software.
Abstract: This paper studies a difficult and fundamental problem that arises throughout electrical engineering, applied mathematics, and statistics. Suppose that one forms a short linear combination of elementary signals drawn from a large, fixed collection. Given an observation of the linear combination that has been contaminated with additive noise, the goal is to identify which elementary signals participated and to approximate their coefficients. Although many algorithms have been proposed, there is little theory which guarantees that these algorithms can accurately and efficiently solve the problem. This paper studies a method called convex relaxation, which attempts to recover the ideal sparse signal by solving a convex program. This approach is powerful because the optimization can be completed in polynomial time with standard scientific software. The paper provides general conditions which ensure that convex relaxation succeeds. As evidence of the broad impact of these results, the paper describes how convex relaxation can be used for several concrete signal recovery problems. It also describes applications to channel coding, linear regression, and numerical analysis.
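For readers who want to experiment with this kind of convex relaxation, the sketch below (not taken from the paper) solves the l1-penalized least-squares form of the problem with a basic iterative soft-thresholding loop in plain NumPy; the dictionary, the sparsity pattern, and the regularization weight lam are made-up example values, and any off-the-shelf convex solver could be used instead.

    import numpy as np

    def soft_threshold(z, t):
        # Entrywise soft-thresholding: the proximal operator of t * ||.||_1.
        return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

    def lasso_ista(A, b, lam, n_iter=500):
        # Minimize 0.5 * ||A x - b||_2^2 + lam * ||x||_1 by iterative soft-thresholding.
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = squared largest singular value of A
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            x = soft_threshold(x - step * A.T @ (A @ x - b), step * lam)
        return x

    # Toy example: a short combination of 3 columns of a random dictionary plus noise.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((100, 400)) / np.sqrt(100)
    x_true = np.zeros(400)
    x_true[[5, 17, 230]] = [1.5, -2.0, 1.0]
    b = A @ x_true + 0.01 * rng.standard_normal(100)
    x_hat = lasso_ista(A, b, lam=0.05)
    print(np.flatnonzero(np.abs(x_hat) > 0.1))   # indices of the recovered large coefficients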

1,536 citations


Journal ArticleDOI
TL;DR: A rich family of control problems which are in general hard to solve in a deterministically robust sense is therefore amenable to polynomial-time solution, if robustness is intended in the proposed risk-adjusted sense.
Abstract: This paper proposes a new probabilistic solution framework for robust control analysis and synthesis problems that can be expressed in the form of minimization of a linear objective subject to convex constraints parameterized by uncertainty terms. This includes the wide class of NP-hard control problems representable by means of parameter-dependent linear matrix inequalities (LMIs). It is shown in this paper that by appropriate sampling of the constraints one obtains a standard convex optimization problem (the scenario problem) whose solution is approximately feasible for the original (usually infinite) set of constraints, i.e., the measure of the set of original constraints that are violated by the scenario solution rapidly decreases to zero as the number of samples is increased. We provide an explicit and efficient bound on the number of samples required to attain a-priori specified levels of probabilistic guarantee of robustness. A rich family of control problems which are in general hard to solve in a deterministically robust sense is therefore amenable to polynomial-time solution, if robustness is intended in the proposed risk-adjusted sense.
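A minimal illustration of the scenario idea under invented data (not the paper's bound, uncertainty model, or examples): sample N realizations of the uncertainty, impose the corresponding constraints all at once, and solve the single resulting linear program with standard software. The constraint family and the choice of N below are purely illustrative.

    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(1)
    c = np.array([-1.0, -1.0])           # minimize c^T x over x >= 0

    def sampled_constraint():
        # One random realization a(delta)^T x <= b of an uncertain linear constraint.
        delta = rng.uniform(-0.2, 0.2, size=2)
        return np.array([1.0, 2.0]) + delta, 1.0

    N = 200                              # number of sampled scenarios (illustrative, not the paper's bound)
    rows, rhs = zip(*(sampled_constraint() for _ in range(N)))
    res = linprog(c, A_ub=np.array(rows), b_ub=np.array(rhs), bounds=[(0, None), (0, None)])
    print(res.x)                         # scenario solution: feasible for every sampled constraint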

1,122 citations


Journal ArticleDOI
TL;DR: A depth-first search algorithm for generating all maximal cliques of an undirected graph, employing pruning methods as in the Bron-Kerbosch algorithm, with a proof that its worst-case time complexity is O(3^(n/3)) for an n-vertex graph.
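A compact Bron-Kerbosch-style sketch with pivoting, in the same spirit as the depth-first algorithm summarized above (the paper's exact pruning rules and output ordering differ):

    def maximal_cliques(adj):
        # Enumerate all maximal cliques of an undirected graph.
        # adj: dict mapping each vertex to the set of its neighbors.
        cliques = []

        def expand(R, P, X):
            if not P and not X:
                cliques.append(R)        # R can no longer be extended: it is maximal
                return
            # Pivot: vertex of P | X with the most neighbors in P, to prune branches.
            pivot = max(P | X, key=lambda u: len(adj[u] & P))
            for v in list(P - adj[pivot]):
                expand(R | {v}, P & adj[v], X & adj[v])
                P.remove(v)
                X.add(v)

        expand(set(), set(adj), set())
        return cliques

    # Example: a triangle {0,1,2} plus the edge 2-3 has maximal cliques {0,1,2} and {2,3}.
    adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
    print(maximal_cliques(adj))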

748 citations


Proceedings ArticleDOI
30 Oct 2006
TL;DR: This paper proposes logical attack graphs, which directly illustrate logical dependencies among attack goals and configuration information, and shows experimental evidence that the logical attack graph generation algorithm is very efficient.
Abstract: Attack graphs are important tools for analyzing security vulnerabilities in enterprise networks. Previous work on attack graphs has not provided an account of the scalability of the graph generating process, and there is often a lack of logical formalism in the representation of attack graphs, which makes the attack graphs difficult for human beings to use and understand. Pioneering work by Sheyner et al. produced the first attack-graph tool based on formal logical techniques, namely model checking. However, when applied to moderate-sized networks, Sheyner's tool encountered a significant exponential explosion problem. This paper describes a new approach to represent and generate attack graphs. We propose logical attack graphs, which directly illustrate logical dependencies among attack goals and configuration information. A logical attack graph always has size polynomial in the network being analyzed. Our attack graph generation tool builds upon MulVAL, a network security analyzer based on logic programming. We demonstrate how to produce a derivation trace in the MulVAL logic-programming engine, and how to use the trace to generate a logical attack graph in quadratic time. We show experimental evidence that our logical attack graph generation algorithm is very efficient. We have generated logical attack graphs for fully connected networks of 1000 machines using a Pentium 4 CPU with 1GB of RAM.

616 citations


Proceedings ArticleDOI
Dror Weitz1
21 May 2006
TL;DR: It is shown that on any graph of maximum degree Δ correlations decay with distance at least as fast as they do on the regular tree of the same degree, which resolves an open conjecture in statistical physics.
Abstract: Consider the problem of approximately counting weighted independent sets of a graph G with activity λ, i.e., where the weight of an independent set I is λ|I|. We present a novel analysis yielding a deterministic approximation scheme which runs in polynomial time for any graph of maximum degree Δ and λ
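For orientation (the standard statement of this result, reconstructed here rather than quoted from the truncated abstract): the approximation scheme applies for activities λ below the tree uniqueness threshold

    λ_c(T_Δ) = (Δ − 1)^(Δ − 1) / (Δ − 2)^Δ,

the critical activity for uniqueness of the Gibbs measure of the hard-core model on the infinite Δ-regular tree.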

506 citations


Journal ArticleDOI
TL;DR: A bicriteria approximation algorithm that for any constant ν > 1 runs in polynomial time and guarantees an approximation ratio of O(log^(1.5) n) (for a precise statement of the main result see Theorem 6).
Abstract: We consider the problem of partitioning a graph into k components of roughly equal size while minimizing the capacity of the edges between different components of the cut. In particular we require that for a parameter ν ≥ 1, no component contains more than ν · n/k of the graph vertices. For k = 2 and ν = 1 this problem is equivalent to the well-known Minimum Bisection problem for which an approximation algorithm with a polylogarithmic approximation guarantee has been presented in [FK]. For arbitrary k and ν ≥ 2 a bicriteria approximation ratio of O(log n) was obtained by Even et al. [ENRS1] using the spreading metrics technique. We present a bicriteria approximation algorithm that for any constant ν > 1 runs in polynomial time and guarantees an approximation ratio of O(log^(1.5) n) (for a precise statement of the main result see Theorem 6). For ν = 1 and k ≥ 3 we show that no polynomial time approximation algorithm can guarantee a finite approximation ratio unless P = NP.

474 citations


Proceedings ArticleDOI
16 Jul 2006
TL;DR: This paper presents a recursive, dimension-sweep algorithm for computing the hypervolume indicator of the quality of a set of n non-dominated points in d > 2 dimensions that improves upon the existing HSO (Hypervolume by Slicing Objectives), by pruning the recursion tree to avoid repeated dominance checks and the recalculation of partial hypervolumes.
Abstract: This paper presents a recursive, dimension-sweep algorithm for computing the hypervolume indicator of the quality of a set of n non-dominated points in d > 2 dimensions. It improves upon the existing HSO (Hypervolume by Slicing Objectives) algorithm by pruning the recursion tree to avoid repeated dominance checks and the recalculation of partial hypervolumes. Additionally, it incorporates a recent result for the three-dimensional special case. The proposed algorithm achieves O(n^(d−2) log n) time and linear space complexity in the worst case, but experimental results show that the pruning techniques used may reduce the time complexity exponent even further.
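For reference, a naive slicing computation of the hypervolume indicator (the basic HSO idea, without the pruning or the three-dimensional speed-up contributed by this paper); maximization relative to a reference point is assumed, and the point set is a toy example.

    def hypervolume(points, ref):
        # Hypervolume dominated by 'points' relative to 'ref' (objectives to be maximized).
        # Naive slicing along the last objective; every point is assumed componentwise >= ref.
        d = len(ref)
        if d == 1:
            return max(p[0] for p in points) - ref[0]
        pts = sorted(points, key=lambda p: p[-1], reverse=True)
        hv = 0.0
        for k, p in enumerate(pts):
            lower = pts[k + 1][-1] if k + 1 < len(pts) else ref[-1]
            depth = p[-1] - lower        # thickness of this slice in the last objective
            if depth > 0:
                hv += depth * hypervolume([q[:-1] for q in pts[:k + 1]], ref[:-1])
        return hv

    # Two non-dominated points in 2-D: the union of the two boxes has area 2 + 2 - 1 = 3.
    print(hypervolume([(2.0, 1.0), (1.0, 2.0)], (0.0, 0.0)))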

445 citations


Journal Article
TL;DR: A new convolution kernel, namely the Partial Tree (PT) kernel, is proposed to fully exploit dependency trees, together with an efficient algorithm for its computation which is furthermore sped up by applying the selection of tree nodes with non-null kernel.
Abstract: In this paper, we provide a study on the use of tree kernels to encode syntactic parsing information in natural language learning. In particular, we propose a new convolution kernel, namely the Partial Tree (PT) kernel, to fully exploit dependency trees. We also propose an efficient algorithm for its computation, which is furthermore sped up by applying the selection of tree nodes with non-null kernel. The experiments with Support Vector Machines on the task of semantic role labeling and question classification show that (a) the kernel running time is linear on the average case and (b) the PT kernel improves on the other tree kernels when applied to the appropriate parsing paradigm.

434 citations


Journal ArticleDOI
TL;DR: Every subproblem of the CSP is either tractable or NP-complete, and the criterion separating them is that conjectured in Bulatov et al.
Abstract: The Constraint Satisfaction Problem (CSP) provides a common framework for many combinatorial problems. The general CSP is known to be NP-complete; however, certain restrictions on the possible form of constraints may affect the complexity and lead to tractable problem classes. There is, therefore, a fundamental research direction, aiming to separate those subclasses of the CSP that are tractable from those which remain NP-complete. Schaefer gave an exhaustive solution of this problem for the CSP on a 2-element domain. In this article, we generalise this result to a classification of the complexity of the CSP on a 3-element domain. The main result states that every subproblem of the CSP is either tractable or NP-complete, and the criterion separating them is that conjectured in Bulatov et al. [2005] and Bulatov and Jeavons [2001b]. We also characterize those subproblems for which standard constraint propagation techniques provide a decision procedure. Finally, we exhibit a polynomial time algorithm which, for a given set of allowed constraints, outputs whether this set gives rise to a tractable problem class. To obtain the main result and the algorithm, we extensively use the algebraic technique for the CSP developed in Jeavons [1998b], Bulatov et al. [2005], and Bulatov and Jeavons [2001b].

411 citations


Proceedings ArticleDOI
29 Sep 2006
TL;DR: It is shown that under a setting with single-hop traffic and no rate control, the maximal scheduling policy can achieve a constant fraction of the capacity region for networks whose connectivity graph can be represented using one of the above classes of graphs.
Abstract: We consider the problem of throughput-optimal scheduling in wireless networks subject to interference constraints. We model the interference using a family of K-hop interference models. We define a K-hop interference model as one for which no two links within K hops can successfully transmit at the same time (note that IEEE 802.11 DCF corresponds to a 2-hop interference model). For a given K, a throughput-optimal scheduler needs to solve a maximum weighted matching problem subject to the K-hop interference constraints. For K=1, the resulting problem is the classical Maximum Weighted Matching problem, which can be solved in polynomial time. However, we show that for K>1, the resulting problems are NP-Hard and cannot be approximated within a factor that grows polynomially with the number of nodes. Interestingly, we show that for specific kinds of graphs, which can be used to model the underlying connectivity graph of a wide range of wireless networks, the resulting problems admit polynomial time approximation schemes. We also show that a simple greedy matching algorithm provides a constant factor approximation to the scheduling problem for all K in this case. We then show that under a setting with single-hop traffic and no rate control, the maximal scheduling policy considered in recent related works can achieve a constant fraction of the capacity region for networks whose connectivity graph can be represented using one of the above classes of graphs. These results are encouraging as they suggest that one can develop distributed algorithms to achieve near optimal throughput in case of a wide range of wireless networks.
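The greedy matching idea referred to above, sketched for the ordinary K = 1 case: repeatedly pick the heaviest remaining link whose endpoints are still free, which is the classical 1/2-approximation to maximum weight matching. A K-hop variant would replace the endpoint check with a conflict check against every link within K hops; the link weights below are made up.

    def greedy_matching(edges):
        # Greedy weighted matching: edges is a list of (weight, u, v) tuples.
        # Scan links in decreasing weight order and keep any link that conflicts
        # with none of the links already kept (here: shares no endpoint, i.e. K = 1).
        matched = set()
        schedule = []
        for w, u, v in sorted(edges, reverse=True):
            if u not in matched and v not in matched:
                schedule.append((u, v))
                matched.update((u, v))
        return schedule

    # Example link weights (e.g. queue lengths): the greedy schedule picks (1, 2) and (3, 4).
    links = [(5.0, 1, 2), (4.0, 2, 3), (3.0, 3, 4)]
    print(greedy_matching(links))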

398 citations


Posted Content
TL;DR: This paper presents a method incorporating a built-in decisional function into the protocols, and discusses the resulting efficiency of the schemes and the relevant security reductions, in the random oracle model, in the context of different pairings one can use.
Abstract: In recent years, a large number of identity-based key agreement protocols from pairings have been proposed. Some of them are elegant and practical. However, the security of this type of protocol has been surprisingly hard to prove. The main issue is that a simulator is not able to deal with reveal queries, because doing so requires solving either a computational problem or a decisional problem, both of which are generally believed to be hard (i.e., computationally infeasible). The best security proofs published so far use the gap assumption, which means assuming that the existence of a decisional oracle does not change the hardness of the corresponding computational problem. The disadvantage of using this approach to prove the security of this type of protocol is that the decisional oracles on which the security proof relies cannot be realized by any polynomial time algorithm in the real world, because of the hardness of the decisional problem. In this paper we present a method incorporating a built-in decisional function into this type of protocol. The function transfers a hard decisional problem in the proof to an easy decisional problem. We then discuss the resulting efficiency of the schemes and the relevant security reductions in the context of different pairings one can use. We pay particular attention, unlike most other papers in the area, to the issues which arise when using asymmetric pairings.

Journal ArticleDOI
TL;DR: A new, adaptive scheme to generate appropriate constraint values during the run is proposed, and it is proved that, independent of the problem or the problem size, the time complexity of the new scheme is O(k^(m−1)), where k is the number of Pareto-optimal solutions to be found and m the number of objectives.

Journal ArticleDOI
TL;DR: It is proved that the NP-hard distinguishing substring selection problem has no polynomial time approximation scheme with running time f(1/ε)·n^(o(1/ε)) for any function f unless an unlikely collapse occurs in parameterized complexity theory.

Journal ArticleDOI
TL;DR: A fast agglomerative clustering method using an approximate nearest neighbor graph to reduce the number of distance calculations; a relatively small neighborhood size is sufficient to maintain quality close to that of the full search.
Abstract: We propose a fast agglomerative clustering method using an approximate nearest neighbor graph for reducing the number of distance calculations. The time complexity of the algorithm is improved from O(τN^2) to O(τN log N) at the cost of a slight increase in distortion; here, τ denotes the number of nearest neighbor updates required at each iteration. According to the experiments, a relatively small neighborhood size is sufficient to maintain the quality close to that of the full search.

Proceedings ArticleDOI
05 Jun 2006
TL;DR: The main result of this paper is an algorithm that maintains the pairing in worst-case linear time per transposition in the ordering and uses the algorithm to compute 1-parameter families of diagrams which are applied to the study of protein folding trajectories.
Abstract: Persistent homology is the mathematical core of recent work on shape, including reconstruction, recognition, and matching. Its pertinent information is encapsulated by a pairing of the critical values of a function, visualized by points forming a diagram in the plane. The original algorithm in [10] computes the pairs from an ordering of the simplices in a triangulation and takes worst-case time cubic in the number of simplices. The main result of this paper is an algorithm that maintains the pairing in worst-case linear time per transposition in the ordering. A side-effect of the algorithm's analysis is an elementary proof of the stability of persistence diagrams [7] in the special case of piecewise-linear functions. We use the algorithm to compute 1-parameter families of diagrams which we apply to the study of protein folding trajectories.

Journal ArticleDOI
TL;DR: In this paper, a near linear time algorithm for constructing hierarchical nets in finite metric spaces with constant doubling dimension is presented. The running (preprocessing) time is near linear and the space used is linear.
Abstract: We present a near linear time algorithm for constructing hierarchical nets in finite metric spaces with constant doubling dimension. This data-structure is then applied to obtain improved algorithms for the following problems: approximate nearest neighbor search, well-separated pair decomposition, spanner construction, compact representation scheme, doubling measure, and computation of the (approximate) Lipschitz constant of a function. In all cases, the running (preprocessing) time is near linear and the space being used is linear.

Journal ArticleDOI
01 Aug 2006
TL;DR: An O(n log^3 n) time algorithm for finding shortest paths in an n-node planar graph with real weights and significantly improved data structures for reporting distances between pairs of nodes and algorithms for updating the data structures when edge weights change are presented.
Abstract: In this paper, we present an O(n log^3 n) time algorithm for finding shortest paths in an n-node planar graph with real weights. This can be compared to the best previous strongly polynomial time algorithm developed by Lipton, Rose, and Tarjan in 1978 which runs in O(n^(3/2)) time, and the best polynomial time algorithm developed by Henzinger, Klein, Subramanian, and Rao in 1994 which runs in O(n^(4/3)) time. We also present significantly improved data structures for reporting distances between pairs of nodes and algorithms for updating the data structures when edge weights change.

Journal ArticleDOI
TL;DR: This work considers the feasibility model of multi-agent scheduling on a single machine, where each agent's objective function is to minimize the total weighted number of tardy jobs, and presents a fully polynomial-time approximation scheme for the problem.

Journal ArticleDOI
TL;DR: Clearcut implements RNJ as a C program, which takes either a set of aligned sequences or a pre-computed distance matrix as input and produces a phylogenetic tree; alternatively, it can reconstruct phylogenies using an extremely fast standard NJ implementation.
Abstract: Summary: Clearcut is an open source implementation of the relaxed neighbor joining (RNJ) algorithm. While traditional neighbor joining (NJ) remains a popular method for distance-based phylogenetic tree reconstruction, it suffers from an O(N^3) time complexity, where N represents the number of taxa in the input. Due to this steep asymptotic time complexity, NJ cannot reasonably handle very large datasets. In contrast, RNJ realizes a typical-case time complexity on the order of N^2 log N without any significant qualitative difference in output. RNJ is particularly useful when inferring a very large tree or a large number of trees. In addition, RNJ retains the desirable property that it will always reconstruct the true tree given a matrix of additive pairwise distances. Clearcut implements RNJ as a C program, which takes either a set of aligned sequences or a pre-computed distance matrix as input and produces a phylogenetic tree. Alternatively, Clearcut can reconstruct phylogenies using an extremely fast standard NJ implementation. Availability: Clearcut source code is available for download at: http://bioinformatics.hungry.com/clearcut Contact: sheneman@hungry.com Supplementary information: http://bioinformatics.hungry.com/clearcut
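For readers unfamiliar with the O(N^3) baseline that RNJ relaxes, here is a minimal sketch of one classical neighbor-joining step (the Q-criterion and the distance-matrix update); it is not Clearcut's code, and branch-length bookkeeping is omitted.

    import numpy as np

    def nj_join_once(D, names):
        # One neighbor-joining step on a symmetric distance matrix D with zero diagonal.
        # Returns the reduced matrix and labels after joining the pair that minimizes
        # the Q-criterion; repeating until two labels remain yields the unrooted NJ tree.
        n = len(D)
        r = D.sum(axis=1)
        Q = (n - 2) * D - r[:, None] - r[None, :]   # Q(i, j) = (n-2) d(i,j) - r_i - r_j
        np.fill_diagonal(Q, np.inf)
        i, j = np.unravel_index(np.argmin(Q), Q.shape)
        d_u = 0.5 * (D[i] + D[j] - D[i, j])         # distances from the new node to all taxa
        keep = [k for k in range(n) if k not in (i, j)]
        D_new = np.empty((len(keep) + 1, len(keep) + 1))
        D_new[:-1, :-1] = D[np.ix_(keep, keep)]
        D_new[-1, :-1] = D_new[:-1, -1] = d_u[keep]
        D_new[-1, -1] = 0.0
        return D_new, [names[k] for k in keep] + [f"({names[i]},{names[j]})"]

    # Tiny four-taxon example: the first join pairs a and b.
    D = np.array([[0, 5, 9, 9], [5, 0, 10, 10], [9, 10, 0, 8], [9, 10, 8, 0]], float)
    D2, labels = nj_join_once(D, ["a", "b", "c", "d"])
    print(labels)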

Proceedings ArticleDOI
12 Nov 2006
TL;DR: This work constitutes the first implementation of a synthesis algorithm for full LTL by careful optimization of all intermediate automata, and uses an incremental algorithm to compute the emptiness of nondeterministic Büchi tree automata.
Abstract: We present an approach to automatic synthesis of specifications given in linear time logic. The approach is based on a translation through universal co-Büchi tree automata and alternating weak tree automata (O. Kupferman and M. Vardi, 2005). By careful optimization of all intermediate automata, we achieve a major improvement in performance. We present several optimization techniques for alternating tree automata, including a game-based approximation to language emptiness and a simulation-based optimization. Furthermore, we use an incremental algorithm to compute the emptiness of nondeterministic Büchi tree automata. All our optimizations are computed in time polynomial in the size of the automaton on which they are computed. We have applied our implementation to several examples and show a significant improvement over the straightforward implementation. Although our examples are still small, this work constitutes the first implementation of a synthesis algorithm for full LTL. We believe that the optimizations discussed here form an important step towards making LTL synthesis practical.

Journal ArticleDOI
TL;DR: A binary linear programming formulation of the graph edit distance for unweighted, undirected graphs with vertex attributes is derived and applied to a graph recognition problem, and the new metric is shown to perform quite well in comparison to existing metrics when applied to a database of chemical graphs.
Abstract: A binary linear programming formulation of the graph edit distance for unweighted, undirected graphs with vertex attributes is derived and applied to a graph recognition problem. A general formulation for editing graphs is used to derive a graph edit distance that is proven to be a metric, provided the cost function for individual edit operations is a metric. Then, a binary linear program is developed for computing this graph edit distance, and polynomial time methods for determining upper and lower bounds on the solution of the binary program are derived by applying solution methods for standard linear programming and the assignment problem. A recognition problem of comparing a sample input graph to a database of known prototype graphs in the context of a chemical information system is presented as an application of the new method. The costs associated with various edit operations are chosen by using a minimum normalized variance criterion applied to pairwise distances between nearest neighbors in the database of prototypes. The new metric is shown to perform quite well in comparison to existing metrics when applied to a database of chemical graphs.

Journal Article
TL;DR: A deterministic constant competitive online algorithm is devised and it is shown that the offline problem can be solved in polynomial time.
Abstract: We study scheduling problems in battery-operated computing devices, aiming at schedules with low total energy consumption. While most of the previous work has focused on finding feasible schedules in deadline-based settings, in this paper we are interested in schedules that guarantee good response times. More specifically, our goal is to schedule a sequence of jobs on a variable speed processor so as to minimize the total cost consisting of the power consumption and the total flow time of all the jobs. We first show that when the amount of work, for any job, may take an arbitrary value, then no online algorithm can achieve a constant competitive ratio. Therefore, most of the paper is concerned with unit-size jobs. We devise a deterministic constant competitive online algorithm and show that the offline problem can be solved in polynomial time.

Journal ArticleDOI
TL;DR: The main result shows that the class of factor graphs with bounded degree can be learned in polynomial time and from a polynomial number of training examples, assuming that the data is generated by a network in this class.
Abstract: We study the computational and sample complexity of parameter and structure learning in graphical models. Our main result shows that the class of factor graphs with bounded degree can be learned in polynomial time and from a polynomial number of training examples, assuming that the data is generated by a network in this class. This result covers both parameter estimation for a known network structure and structure learning. It implies as a corollary that we can learn factor graphs for both Bayesian networks and Markov networks of bounded degree, in polynomial time and sample complexity. Importantly, unlike standard maximum likelihood estimation algorithms, our method does not require inference in the underlying network, and so applies to networks where inference is intractable. We also show that the error of our learned model degrades gracefully when the generating distribution is not a member of the target class of networks. In addition to our main result, we show that the sample complexity of parameter learning in graphical models has an O(1) dependence on the number of variables in the model when using the KL-divergence normalized by the number of variables as the performance criterion.

Proceedings Article
16 Jul 2006
TL;DR: The proposed algorithm is a core tree-growing algorithm that can be combined with other scaling-up techniques to achieve further speedup; it is as fast as naive Bayes but outperforms naive Bayes in accuracy according to experiments.
Abstract: There is growing interest in scaling up the widely-used decision-tree learning algorithms to very large data sets. Although numerous diverse techniques have been proposed, a fast tree-growing algorithm without substantial decrease in accuracy and substantial increase in space complexity is essential. In this paper, we present a novel, fast decision-tree learning algorithm that is based on a conditional independence assumption. The new algorithm has a time complexity of O(m · n), where m is the size of the training data and n is the number of attributes. This is a significant asymptotic improvement over the time complexity O(m · n^2) of the standard decision-tree learning algorithm C4.5, with an additional space increase of only O(n). Experiments show that our algorithm performs competitively with C4.5 in accuracy on a large number of UCI benchmark data sets, and performs even better and significantly faster than C4.5 on a large number of text classification data sets. The time complexity of our algorithm is as low as naive Bayes'. Indeed, it is as fast as naive Bayes but outperforms naive Bayes in accuracy according to our experiments. Our algorithm is a core tree-growing algorithm that can be combined with other scaling-up techniques to achieve further speedup.

Journal ArticleDOI
TL;DR: This work studies the problem of detecting and eliminating redundancy in a sensor network with a view to improving energy efficiency, while preserving the network's coverage, and presents efficient distributed algorithms for computing and maintaining solutions in cases of sensor failures or insertion of new sensors.
Abstract: We study the problem of detecting and eliminating redundancy in a sensor network with a view to improving energy efficiency, while preserving the network's coverage. We also examine the impact of redundancy elimination on the related problem of coverage-boundary detection. We reduce both problems to the computation of Voronoi diagrams, prove and achieve lower bounds on the solution of these problems, and present efficient distributed algorithms for computing and maintaining solutions in cases of sensor failures or insertion of new sensors. We prove the correctness and termination properties of our distributed algorithms, and analytically characterize the time complexity and traffic generated by our algorithms. Using detailed simulations, we also quantify the impact of system parameters such as sensor density, transmission range, and failure rates on network traffic.

Proceedings ArticleDOI
22 Mar 2006
TL;DR: The results prove that there exists a single O(k log n) × n measurement matrix such that any such signal can be reconstructed from these measurements, with error at most O(1) times the worst case error for the class of such signals.
Abstract: In sparse approximation theory, the fundamental problem is to reconstruct a signal A ∈ R^n from linear measurements ⟨A, ψ_i⟩ with respect to a dictionary of ψ_i's. Recently, there has been a focus on the novel direction of Compressed Sensing where the reconstruction can be done with very few (O(k log n)) linear measurements over a modified dictionary if the signal is compressible, that is, its information is concentrated in k coefficients with the original dictionary. In particular, the results prove that there exists a single O(k log n) × n measurement matrix such that any such signal can be reconstructed from these measurements, with error at most O(1) times the worst case error for the class of such signals. Compressed sensing has generated tremendous excitement both because of the sophisticated underlying mathematics and because of its potential applications. In this paper, we address outstanding open problems in Compressed Sensing. Our main result is an explicit construction of a non-adaptive measurement matrix and the corresponding reconstruction algorithm so that with a number of measurements polynomial in k, log n, 1/ε, we can reconstruct compressible signals. This is the first known polynomial time explicit construction of any such measurement matrix. In addition, our result improves the error guarantee from O(1) to 1 + ε and improves the reconstruction time from poly(n) to poly(k log n). Our second result is a randomized construction of O(k polylog(n)) measurements that work for each signal with high probability and gives per-instance approximation guarantees rather than over the class of all signals. Previous work on compressed sensing does not provide such per-instance approximation guarantees; our result improves the best known number of measurements from prior work in other areas including learning theory, streaming algorithms and complexity theory for this case. Our approach is combinatorial. In particular, we use two parallel sets of group tests, one to filter and the other to certify and estimate; the resulting algorithms are quite simple to implement.

Proceedings ArticleDOI
13 Mar 2006
TL;DR: This work proposes a spreading sequences scheme based on random sparse signatures, and a detection algorithm based on belief propagation with linear time complexity, and proves that the information capacity of the system converges to Tanaka's formula for random 'dense' signatures, providing the first rigorous justification of this formula.
Abstract: We consider the CDMA (code-division multiple-access) multi-user detection problem for binary signals and additive white Gaussian noise. We propose a spreading sequences scheme based on random sparse signatures, and a detection algorithm based on belief propagation (BP) with linear time complexity. In the new scheme, each user conveys its power onto a finite number of chips l, in the large system limit. We analyze the performance of BP detection and prove that it coincides with that of optimal (symbol MAP) detection in the l → ∞ limit. In the same limit, we prove that the information capacity of the system converges to Tanaka's formula for random 'dense' signatures, thus providing the first rigorous justification of this formula. Apart from being computationally convenient, the new scheme allows for optimization in close analogy with irregular low density parity check code ensembles.

Proceedings ArticleDOI
01 Dec 2006
TL;DR: This work proposes three new algorithms for the distributed averaging and consensus problems: two for the fixed-graph case, and one for the dynamic-topology case, which is the first to be accompanied by a polynomial-time bound on the convergence time.
Abstract: We propose three new algorithms for the distributed averaging and consensus problems: two for the fixed-graph case, and one for the dynamic-topology case. The convergence rates of our fixed-graph algorithms compare favorably with other known methods, while our algorithm for the dynamic-topology case is the first to be accompanied by a polynomial-time bound on the convergence time.
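Not one of the algorithms proposed in the paper, but a minimal illustration of the fixed-graph averaging setting it addresses: every node repeatedly replaces its value with a weighted average of its neighbors' values (Metropolis weights here), and all values converge to the global mean. The graph and the initial values are made up.

    import numpy as np

    def metropolis_weights(adj):
        # Symmetric, doubly stochastic averaging matrix for an undirected graph
        # given as an adjacency dict {node: list of neighbors}.
        n = len(adj)
        W = np.zeros((n, n))
        deg = {v: len(nbrs) for v, nbrs in adj.items()}
        for v, nbrs in adj.items():
            for u in nbrs:
                W[v, u] = 1.0 / (1 + max(deg[v], deg[u]))
            W[v, v] = 1.0 - W[v].sum()
        return W

    # Path graph 0-1-2-3; each node holds a private value and iterates x <- W x.
    adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
    W = metropolis_weights(adj)
    x = np.array([4.0, 0.0, 2.0, 10.0])
    for _ in range(200):
        x = W @ x
    print(x, x.mean())   # every entry approaches the initial mean, 4.0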

Proceedings ArticleDOI
22 Jan 2006
TL;DR: This work uses a completely different, and elementary, approach to obtain a deterministic subexponential algorithm for the solution of parity games, and is almost as fast as the randomized algorithms mentioned above.
Abstract: The existence of polynomial time algorithms for the solution of parity games is a major open problem. The fastest known algorithms for the problem are randomized algorithms that run in subexponential time. These algorithms are all ultimately based on the randomized subexponential simplex algorithms of Kalai and of Matousek, Sharir and Welzl. Randomness seems to play an essential role in these algorithms. We use a completely different, and elementary, approach to obtain a deterministic subexponential algorithm for the solution of parity games. Our deterministic algorithm is almost as fast as the randomized algorithms mentioned above.

Journal ArticleDOI
TL;DR: The center-constrained MEB problem is introduced and the CVM algorithm is extended; the generalized CVM can now be used with any linear/nonlinear kernel and can also be applied to kernel methods such as SVR and the ranking SVM.
Abstract: Kernel methods, such as the support vector machine (SVM), are often formulated as quadratic programming (QP) problems. However, given m training patterns, a naive implementation of the QP solver takes O(m^3) training time and at least O(m^2) space. Hence, scaling up these QPs is a major stumbling block in applying kernel methods on very large data sets, and a replacement of the naive method for finding the QP solutions is highly desirable. Recently, by using approximation algorithms for the minimum enclosing ball (MEB) problem, we proposed the core vector machine (CVM) algorithm that is much faster and can handle much larger data sets than existing SVM implementations. However, the CVM can only be used with certain kernel functions and kernel methods. For example, the very popular support vector regression (SVR) cannot be used with the CVM. In this paper, we introduce the center-constrained MEB problem and subsequently extend the CVM algorithm. The generalized CVM algorithm can now be used with any linear/nonlinear kernel and can also be applied to kernel methods such as SVR and the ranking SVM. Moreover, like the original CVM, its asymptotic time complexity is again linear in m and its space complexity is independent of m. Experiments show that the generalized CVM has comparable performance with state-of-the-art SVM and SVR implementations, but is faster and produces fewer support vectors on very large data sets.
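As a rough illustration of the MEB approximation underlying CVM-style methods, the sketch below runs a simple Badoiu-Clarkson-type core-set iteration in an explicit feature space (i.e., a linear kernel); it is not the CVM implementation itself, and the tolerance eps and the data are made-up examples.

    import numpy as np

    def approx_meb(points, eps=0.01):
        # (1 + eps)-approximate minimum enclosing ball of the rows of 'points'.
        # Core-set style iteration: repeatedly nudge the center toward the farthest
        # point with a shrinking step size; roughly O(1/eps^2) iterations suffice.
        c = points.mean(axis=0)
        for t in range(1, int(np.ceil(1.0 / eps ** 2)) + 1):
            far = points[np.argmax(np.linalg.norm(points - c, axis=1))]
            c = c + (far - c) / (t + 1)
        radius = np.linalg.norm(points - c, axis=1).max()
        return c, radius

    rng = np.random.default_rng(0)
    pts = rng.standard_normal((2000, 3))
    center, r = approx_meb(pts, eps=0.02)
    print(center, r)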