
Showing papers on "Time complexity published in 2005"


Journal Article
TL;DR: This paper proposes to appropriately generalize the well-known notion of a separation margin, derives a corresponding maximum-margin formulation, and presents a cutting-plane algorithm that solves the optimization problem in polynomial time for a large class of problems.
Abstract: Learning general functional dependencies between arbitrary input and output spaces is one of the key challenges in computational intelligence. While recent progress in machine learning has mainly focused on designing flexible and powerful input representations, this paper addresses the complementary issue of designing classification algorithms that can deal with more complex outputs, such as trees, sequences, or sets. More generally, we consider problems involving multiple dependent output variables, structured output spaces, and classification problems with class attributes. In order to accomplish this, we propose to appropriately generalize the well-known notion of a separation margin and derive a corresponding maximum-margin formulation. While this leads to a quadratic program with a potentially prohibitive, i.e. exponential, number of constraints, we present a cutting plane algorithm that solves the optimization problem in polynomial time for a large class of problems. The proposed method has important applications in areas such as computational biology, natural language processing, information retrieval/extraction, and optical character recognition. Experiments from various domains involving different types of output spaces emphasize the breadth and generality of our approach.
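As a concrete illustration of the constraint-generation idea (not the authors' implementation), the following sketch trains a multiclass special case of a structured SVM with margin rescaling: each round adds the most violated label constraint per example and re-solves a small quadratic program, here naively with SciPy's SLSQP. The data, joint feature map, loss, and tolerance are illustrative assumptions.

```python
# Hedged sketch of cutting-plane training for a (multiclass) max-margin model.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 4)); y = rng.integers(0, 3, size=30)   # toy data
K, d, C, eps = 3, X.shape[1], 1.0, 1e-3

def psi(x, k):                        # joint feature map: class-indexed copy of x
    v = np.zeros(K * d); v[k*d:(k+1)*d] = x; return v

def loss(k, l):                       # 0/1 label loss
    return float(k != l)

def train(working_set):
    # variables: w (K*d entries) followed by one slack per example
    n = len(X)
    def obj(z):
        w, xi = z[:K*d], z[K*d:]
        return 0.5 * w @ w + C * xi.sum()
    cons = [{'type': 'ineq', 'fun': (lambda z, i=i, k=k:
             z[:K*d] @ (psi(X[i], y[i]) - psi(X[i], k)) - loss(y[i], k) + z[K*d+i])}
            for (i, k) in working_set]
    cons += [{'type': 'ineq', 'fun': (lambda z, i=i: z[K*d+i])} for i in range(n)]
    res = minimize(obj, np.zeros(K*d + n), constraints=cons, method='SLSQP')
    return res.x[:K*d], res.x[K*d:]

working_set = []                      # the cutting planes collected so far
w, xi = np.zeros(K*d), np.zeros(len(X))
for _ in range(20):                   # outer cutting-plane iterations
    added = 0
    for i, x in enumerate(X):
        scores = [loss(y[i], k) + w @ psi(x, k) - w @ psi(x, y[i]) for k in range(K)]
        k_star = int(np.argmax(scores))               # most violated label
        if scores[k_star] > xi[i] + eps and (i, k_star) not in working_set:
            working_set.append((i, k_star)); added += 1
    if added == 0:
        break                                         # all constraints satisfied to tolerance
    w, xi = train(working_set)
```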

2,292 citations


Journal ArticleDOI
TL;DR: Deterministic polynomial time algorithms and even faster randomized algorithms for designing linear codes for directed acyclic graphs with edges of unit capacity are given and extended to integer capacities and to codes that are tolerant to edge failures.
Abstract: The famous max-flow min-cut theorem states that a source node s can send information through a network (V, E) to a sink node t at a rate determined by the min-cut separating s and t. Recently, it has been shown that this rate can also be achieved for multicasting to several sinks provided that the intermediate nodes are allowed to re-encode the information they receive. We demonstrate examples of networks where the achievable rates obtained by coding at intermediate nodes are arbitrarily larger than if coding is not allowed. We give deterministic polynomial time algorithms and even faster randomized algorithms for designing linear codes for directed acyclic graphs with edges of unit capacity. We extend these algorithms to integer capacities and to codes that are tolerant to edge failures.

1,046 citations


Journal ArticleDOI
TL;DR: It is proved that the sign problem is nondeterministic polynomial (NP) hard, implying that a generic solution of the sign problem would also solve all problems in the complexity class NP in polynomial time.
Abstract: Quantum Monte Carlo simulations, while being efficient for bosons, suffer from the "negative sign problem" when applied to fermions, causing an exponential increase of the computing time with the number of particles. A polynomial time solution to the sign problem is highly desired since it would provide an unbiased and numerically exact method to simulate correlated quantum systems. Here we show that such a solution is almost certainly unattainable by proving that the sign problem is nondeterministic polynomial (NP) hard, implying that a generic solution of the sign problem would also solve all problems in the complexity class NP in polynomial time.
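A toy numerical illustration (not from the paper, made-up numbers) of why the sign problem is costly in practice: observables are estimated by reweighting samples with their signs, so the estimator's denominator is the average sign, and the statistical error explodes as that average shrinks toward zero.

```python
# Toy reweighting demo: <A> is estimated as <A*s> / <s> over the sign-free
# distribution, so a tiny average sign makes the estimate extremely noisy.
import numpy as np

rng = np.random.default_rng(1)
n_samples = 100_000
A = rng.normal(loc=1.0, scale=1.0, size=n_samples)     # some observable, true mean 1.0
avg_sign = 0.01                                         # tiny average sign, as in a hard regime
s = np.where(rng.random(n_samples) < (1 + avg_sign) / 2, 1.0, -1.0)

print("measured <sign>:", s.mean())                     # close to 0.01
print("reweighted <A> :", (A * s).mean() / s.mean())    # very noisy because <sign> is tiny
```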

1,025 citations


Journal ArticleDOI
TL;DR: This paper shows that many kernel methods can be equivalently formulated as minimum enclosing ball (MEB) problems in computational geometry, obtains provably approximately optimal solutions using the idea of core sets, and proposes the Core Vector Machine (CVM) algorithm, which can be used with nonlinear kernels and has a time complexity that is linear in m.
Abstract: Standard SVM training has O(m^3) time and O(m^2) space complexities, where m is the training set size. It is thus computationally infeasible on very large data sets. By observing that practical SVM implementations only approximate the optimal solution by an iterative strategy, we scale up kernel methods by exploiting such "approximateness" in this paper. We first show that many kernel methods can be equivalently formulated as minimum enclosing ball (MEB) problems in computational geometry. Then, by adopting an efficient approximate MEB algorithm, we obtain provably approximately optimal solutions with the idea of core sets. Our proposed Core Vector Machine (CVM) algorithm can be used with nonlinear kernels and has a time complexity that is linear in m and a space complexity that is independent of m. Experiments on large toy and real-world data sets demonstrate that the CVM is as accurate as existing SVM implementations, but is much faster and can handle much larger data sets than existing scale-up methods. For example, CVM with the Gaussian kernel produces superior results on the KDDCUP-99 intrusion detection data, which has about five million training patterns, in only 1.4 seconds on a 3.2 GHz Pentium 4 PC.
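The geometric primitive behind the CVM is a (1 + ε)-approximate minimum enclosing ball built from a small core set. A minimal sketch of the classical Badoiu-Clarkson iteration in plain Euclidean space follows; the paper applies the same idea in the kernel-induced feature space, and the data and ε here are illustrative.

```python
# Sketch of the Badoiu-Clarkson core-set iteration for an approximate
# minimum enclosing ball (the geometric primitive behind the CVM).
import numpy as np

def approx_meb(points, eps=0.05):
    pts = np.asarray(points, dtype=float)
    c = pts[0].copy()
    iters = int(np.ceil(1.0 / eps**2))
    for i in range(1, iters + 1):
        far = pts[np.argmax(np.linalg.norm(pts - c, axis=1))]  # farthest point = core-set candidate
        c += (far - c) / (i + 1)                               # move the center toward it
    radius = np.linalg.norm(pts - c, axis=1).max()
    return c, radius

rng = np.random.default_rng(0)
center, r = approx_meb(rng.normal(size=(1000, 5)))
print(center, r)
```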

1,017 citations


Journal ArticleDOI
TL;DR: It is found that sphere decoding can be efficient for some SNRs and problems of moderate size, even though the number of operations required by the algorithm, strictly speaking, always grows as an exponential function of the problem size.
Abstract: Sphere decoding has been suggested by a number of authors as an efficient algorithm to solve various detection problems in digital communications. In some cases, the algorithm is referred to as an algorithm of polynomial complexity without clearly specifying what assumptions are made about the problem structure. Another claim is that although worst-case complexity is exponential, the expected complexity of the algorithm is polynomial. Herein, we study the expected complexity where the problem size is defined to be the number of symbols jointly detected, and our main result is that the expected complexity is exponential for fixed signal-to-noise ratio (SNR), contrary to previous claims. The sphere radius, which is a parameter of the algorithm, must be chosen to ensure a nonvanishing probability of solving the detection problem. This causes the exponential complexity since the squared radius must grow linearly with problem size. The rate of linear increase is, however, dependent on the noise variance, and thus, the rate of the exponential function is strongly dependent on the SNR. Therefore, sphere decoding can be efficient for some SNRs and problems of moderate size, even though the number of operations required by the algorithm, strictly speaking, always grows as an exponential function of the problem size.
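For reference, a minimal depth-first sphere decoder for the integer least-squares problem min_s ||y - Hs||^2 over a finite symbol alphabet looks roughly as follows; the pruning against the best radius found so far is exactly the step whose effectiveness depends on the SNR. Alphabet, dimensions, and noise level below are illustrative assumptions, and this is a sketch rather than any specific published implementation.

```python
# Minimal depth-first sphere decoder with partial-distance pruning.
import numpy as np

def sphere_decode(H, y, alphabet):
    m, n = H.shape
    Q, R = np.linalg.qr(H)            # ||y - Hs||^2 = ||Q^T y - R s||^2
    z = Q.T @ y
    best = {'dist': np.inf, 's': None}

    def dfs(level, partial, dist):
        if dist >= best['dist']:      # prune: already outside the current sphere
            return
        if level < 0:
            best['dist'], best['s'] = dist, partial.copy()
            return
        for sym in alphabet:
            partial[level] = sym
            resid = z[level] - R[level, level:] @ partial[level:]
            dfs(level - 1, partial, dist + resid**2)

    dfs(n - 1, np.zeros(n), 0.0)
    return best['s'], best['dist']

rng = np.random.default_rng(0)
n = 6; H = rng.normal(size=(n, n)); s_true = rng.choice([-1.0, 1.0], size=n)
y = H @ s_true + 0.1 * rng.normal(size=n)
s_hat, _ = sphere_decode(H, y, alphabet=(-1.0, 1.0))
print(s_hat, s_true)
```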

779 citations


Journal ArticleDOI
08 Dec 2005
TL;DR: A novel method is presented for exactly solving general constraint satisfaction optimization with at most two variables per constraint, giving the first exponential improvement over the trivial algorithm and yielding connections between the complexity of some (polynomial-time) high-dimensional search problems and some NP-hard problems.
Abstract: We present a novel method for exactly solving (in fact, counting solutions to) general constraint satisfaction optimization with at most two variables per constraint (e.g. MAX-2-CSP and MIN-2-CSP), which gives the first exponential improvement over the trivial algorithm. More precisely, the runtime bound is a constant factor improvement in the base of the exponent: the algorithm can count the number of optima in MAX-2-SAT and MAX-CUT instances in O(m^3 2^(ωn/3)) time, where ω < 2.376 is the matrix product exponent over a ring. When the constraints have arbitrary weights, there is a (1 + ε)-approximation with roughly the same runtime, modulo polynomial factors. Our construction shows that an improvement in the runtime exponent of either k-clique solving (even when k = 3) or matrix multiplication over GF(2) would improve the runtime exponent for solving 2-CSP optimization. Our approach also yields connections between the complexity of some (polynomial-time) high-dimensional search problems and some NP-hard problems. For example, if there are sufficiently faster algorithms for computing the diameter of n points in ℓ1, then there is a (2 - ε)^n algorithm for MAX-LIN. These results may be construed as either lower bounds on the high-dimensional problems, or hope that better algorithms exist for the corresponding hard problems.

508 citations


Proceedings Article
26 Jul 2005
TL;DR: This work addresses the long standing problem of nonmyopically selecting the most informative subset of variables in a graphical model and presents the first efficient randomized algorithm providing a constant factor (1 - 1/e - ε) approximation guarantee for any ε > 0 with high confidence.
Abstract: A fundamental issue in real-world systems, such as sensor networks, is the selection of observations which most effectively reduce uncertainty. More specifically, we address the long standing problem of nonmyopically selecting the most informative subset of variables in a graphical model. We present the first efficient randomized algorithm providing a constant factor (1 - 1/e - ε) approximation guarantee for any ε > 0 with high confidence. The algorithm leverages the theory of submodular functions, in combination with a polynomial bound on sample complexity. We furthermore prove that no polynomial time algorithm can provide a constant factor approximation better than (1 - 1/e) unless P = NP. Finally, we provide extensive evidence of the effectiveness of our method on two complex real-world datasets.
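The algorithmic core is the greedy marginal-gain rule for (approximately) maximizing a submodular set function. A minimal sketch follows, using a toy coverage objective as a stand-in for the paper's mutual-information criterion; the sensor names and coverage sets are made up.

```python
# Greedy submodular selection: repeatedly add the element with the largest
# marginal gain (the rule behind constant-factor guarantees like 1 - 1/e).
def greedy_select(candidates, objective, k):
    selected = []
    for _ in range(k):
        best = max((c for c in candidates if c not in selected),
                   key=lambda c: objective(selected + [c]) - objective(selected))
        selected.append(best)
    return selected

# toy example: each sensor "covers" a set of locations; objective = coverage size
coverage = {'s1': {1, 2, 3}, 's2': {3, 4}, 's3': {4, 5, 6}, 's4': {1, 6}}
obj = lambda S: len(set().union(*(coverage[s] for s in S))) if S else 0
print(greedy_select(list(coverage), obj, k=2))   # e.g. ['s1', 's3']
```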

371 citations


Proceedings ArticleDOI
23 Jan 2005
TL;DR: It is shown, for the first time, that these networks can be localized in polynomial time; a notion called strong localizability is introduced, and it is shown that the SDP model will identify all strongly localizable subnetworks in the input network.
Abstract: We analyze the semidefinite programming (SDP) based model and method for the position estimation problem in sensor network localization and other Euclidean distance geometry applications. We use SDP duality and interior-point algorithm theories to prove that the SDP localizes any network or graph that has unique sensor positions to fit given distance measures. Therefore, we show, for the first time, that these networks can be localized in polynomial time. We also give a simple and efficient criterion for checking whether a given instance of the localization problem has a unique realization in R^2 using graph rigidity theory. Finally, we introduce a notion called strong localizability and show that the SDP model will identify all strongly localizable subnetworks in the input network.

368 citations


Journal ArticleDOI
TL;DR: This paper shows that, for a wide range of signal-to-noise ratios, rates, and numbers of antennas, the expected complexity is polynomial, and suggests that maximum-likelihood decoding, which was hitherto thought to be computationally intractable, can be implemented in real-time.
Abstract: In Part I, we found a closed-form expression for the expected complexity of the sphere-decoding algorithm, both for the infinite and finite lattice. We continue the discussion in this paper by generalizing the results to the complex version of the problem and using the expected complexity expressions to determine situations where sphere decoding is practically feasible. In particular, we consider applications of sphere decoding to detection in multiantenna systems. We show that, for a wide range of signal-to-noise ratios (SNRs), rates, and numbers of antennas, the expected complexity is polynomial, in fact, often roughly cubic. Since many communications systems operate at noise levels for which the expected complexity turns out to be polynomial, this suggests that maximum-likelihood decoding, which was hitherto thought to be computationally intractable, can, in fact, be implemented in real time, a result with many practical implications. To provide complexity information beyond the mean, we derive a closed-form expression for the variance of the complexity of the sphere-decoding algorithm in a finite lattice. Furthermore, we consider the expected complexity of sphere decoding for channels with memory, where the lattice-generating matrix has a special Toeplitz structure. Results indicate that the expected complexity in this case is also polynomial over a wide range of SNRs, rates, data blocks, and channel impulse response lengths.

324 citations


Book ChapterDOI
23 Aug 2005
TL;DR: This paper proposes the first efficient on-the-fly algorithm for solving games based on timed game automata with respect to reachability and safety properties; various optimizations of the basic symbolic algorithm are proposed, as well as methods for obtaining time-optimal winning strategies.
Abstract: In this paper, we propose the first efficient on-the-fly algorithm for solving games based on timed game automata with respect to reachability and safety properties. The algorithm we propose is a symbolic extension of the on-the-fly algorithm suggested by Liu & Smolka [15] for linear-time model-checking of finite-state systems. Being on-the-fly, the symbolic algorithm may terminate long before having explored the entire state-space. Also, the individual steps of the algorithm are carried out efficiently by the use of so-called zones as the underlying data structure. Various optimizations of the basic symbolic algorithm are proposed, as well as methods for obtaining time-optimal winning strategies (for reachability games). Extensive evaluation of an experimental implementation of the algorithm yields very encouraging performance results.

316 citations


Book ChapterDOI
22 Aug 2005
TL;DR: This paper provides a number of approximation algorithms with approximation ratios that depend on either the number of categories, the maximum number of points per category or both, and gives an experimental evaluation of the proposed algorithms using both synthetic and real datasets.
Abstract: In this paper we discuss a new type of query in Spatial Databases, called the Trip Planning Query (TPQ). Given a set of points of interest P in space, where each point belongs to a specific category, a starting point S and a destination E, TPQ retrieves the best trip that starts at S, passes through at least one point from each category, and ends at E. For example, a driver traveling from Boston to Providence might want to stop at a gas station, a bank, and a post office on the way, and the goal is to provide him with the best possible route (in terms of distance, traffic, road conditions, etc.). The difficulty of this query lies in the existence of multiple choices per category. In this paper, we study fast approximation algorithms for TPQ in a metric space. We provide a number of approximation algorithms with approximation ratios that depend on either the number of categories, the maximum number of points per category, or both. Therefore, for different instances of the problem, we can choose the algorithm with the best approximation ratio, since they all run in polynomial time. Furthermore, we use some of the proposed algorithms to derive efficient heuristics for large datasets stored in external memory. Finally, we give an experimental evaluation of the proposed algorithms using both synthetic and real datasets.
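One of the simplest heuristics in this spirit is a nearest-neighbour greedy walk: from the current position, always move to the closest point of a category not yet visited, then head to the destination. The small planar sketch below uses made-up coordinates and categories and is not the paper's exact pseudocode.

```python
# Greedy nearest-neighbour walk for a toy Trip Planning Query in the plane.
import math

def greedy_trip(start, end, points):
    """points: list of (x, y, category). Returns the ordered list of stops."""
    pos, trip = start, [start]
    remaining = set(cat for *_, cat in points)
    while remaining:
        cand = [p for p in points if p[2] in remaining]
        nxt = min(cand, key=lambda p: math.dist(pos, p[:2]))   # closest unvisited category
        trip.append(nxt[:2]); remaining.discard(nxt[2]); pos = nxt[:2]
    trip.append(end)
    return trip

stops = [(1, 1, 'gas'), (4, 0, 'gas'), (2, 3, 'bank'), (5, 5, 'post')]
print(greedy_trip((0, 0), (6, 6), stops))
```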

Proceedings ArticleDOI
06 Oct 2005
TL;DR: This paper presents a bidirectional inference algorithm for sequence labeling problems such as part-of-speech tagging, named entity recognition and text chunking that can enumerate all possible decomposition structures and find the highest probability sequence together with the corresponding decomposition structure in polynomial time.
Abstract: This paper presents a bidirectional inference algorithm for sequence labeling problems such as part-of-speech tagging, named entity recognition and text chunking. The algorithm can enumerate all possible decomposition structures and find the highest probability sequence together with the corresponding decomposition structure in polynomial time. We also present an efficient decoding algorithm based on the easiest-first strategy, which gives comparably good performance to full bidirectional inference with significantly lower computational cost. Experimental results of part-of-speech tagging and text chunking show that the proposed bidirectional inference methods consistently outperform unidirectional inference methods and bidirectional MEMMs give comparable performance to that achieved by state-of-the-art learning algorithms including kernel support vector machines.

Journal ArticleDOI
TL;DR: This paper addresses the computational complexity of the batching of orders in a parallel-aisle warehouse, develops a branch-and-price optimization algorithm for the problem, models the problem as a generalized set partitioning problem, and presents a column generation algorithm to solve its linear programming relaxation.
Abstract: Although the picking of items may make up as much as 60% of all labor activities in a warehouse and may account for as much as 65% of all operating expenses, many order picking problems are still not well understood. Indeed, usually simple rules of thumb or straightforward constructive heuristics are used in practice, even in state-of-the-art warehouse management systems; however, it might well be that more attractive algorithmic alternatives could be developed. We address one such fundamental materials handling problem: the batching of orders in a parallel-aisle warehouse so as to minimize the total traveling time needed to pick all items. Many heuristics have been proposed for this problem; however, a fundamental analysis of the problem is still lacking. In this paper, we first address the computational complexity of the problem. We prove that this problem is NP-hard in the strong sense but that it is solvable in polynomial time if no batch contains more than two orders. This result is not really surprising ...
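The polynomially solvable case with at most two orders per batch can be cast as a maximum-weight matching: pair up orders so that the total travel-time saving of the pairs is maximized. A hedged sketch using NetworkX follows; the single-order and combined-route times are made-up numbers, and in practice the saving of a pair would come from the routing model.

```python
# Two-orders-per-batch batching as maximum-weight matching on a saving graph.
import networkx as nx

single_time = {'o1': 10, 'o2': 12, 'o3': 9, 'o4': 15}       # pick each order alone
combined_time = {('o1', 'o2'): 16, ('o1', 'o3'): 14, ('o1', 'o4'): 22,
                 ('o2', 'o3'): 17, ('o2', 'o4'): 20, ('o3', 'o4'): 19}

G = nx.Graph()
for (a, b), t in combined_time.items():
    saving = single_time[a] + single_time[b] - t             # time saved by batching a with b
    if saving > 0:
        G.add_edge(a, b, weight=saving)

pairs = nx.max_weight_matching(G)                             # optimal pairing of orders
print(pairs)
```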

Journal ArticleDOI
TL;DR: A quantum algorithm for the dihedral hidden subgroup problem (DHSP) with time and query complexity $2^{O(\sqrt{\log\ N})}$.
Abstract: We present a quantum algorithm for the dihedral hidden subgroup problem (DHSP) with time and query complexity $2^{O(\sqrt{\log\ N})}$. In this problem an oracle computes a function $f$ on the dihedral group $D_N$ which is invariant under a hidden reflection in $D_N$. By contrast, the classical query complexity of DHSP is $O(\sqrt{N})$. The algorithm also applies to the hidden shift problem for an arbitrary finitely generated abelian group. The algorithm begins as usual with a quantum character transform, which in the case of $D_N$ is essentially the abelian quantum Fourier transform. This yields the name of a group representation of $D_N$, which is not by itself useful, and a state in the representation, which is a valuable but indecipherable qubit. The algorithm proceeds by repeatedly pairing two unfavorable qubits to make a new qubit in a more favorable representation of $D_N$. Once the algorithm obtains certain target representations, direct measurements reveal the hidden subgroup.

Journal ArticleDOI
TL;DR: This work presents a framework for finding point correspondences in monocular image sequences over multiple frames by using a polynomial time algorithm for a restriction of the general problem of multiframe point correspondence, which is NP-hard for three or more frames.
Abstract: This work presents a framework for finding point correspondences in monocular image sequences over multiple frames. The general problem of multiframe point correspondence is NP-hard for three or more frames. A polynomial time algorithm for a restriction of this problem is presented and is used as the basis of the proposed greedy algorithm for the general problem. The greedy nature of the proposed algorithm allows it to be used in real-time systems for tracking and surveillance, etc. In addition, the proposed algorithm deals with the problems of occlusion, missed detections, and false positives by using a single noniterative greedy optimization scheme and, hence, reduces the complexity of the overall algorithm as compared to most existing approaches where multiple heuristics are used for the same purpose. While most greedy algorithms for point tracking do not allow the entry and exit of the points from the scene, this is not a limitation for the proposed algorithm. Experiments with real and synthetic data over a wide range of scenarios and system parameters are presented to validate the claims about the performance of the proposed algorithm.
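A much-simplified two-frame version of greedy association (the paper's formulation spans multiple frames and explicitly models occlusion, missed detections, and false positives) links the closest still-unmatched pair of detections until a gating threshold is exceeded; points left unmatched model entries, exits, or misses. The coordinates and gate value below are illustrative.

```python
# Simplified two-frame greedy data association with a gating threshold.
import math

def greedy_match(frame_a, frame_b, gate=5.0):
    pairs = sorted((math.dist(p, q), i, j)
                   for i, p in enumerate(frame_a)
                   for j, q in enumerate(frame_b))
    used_a, used_b, links = set(), set(), []
    for d, i, j in pairs:
        if d > gate:
            break                        # remaining pairs are even farther apart
        if i not in used_a and j not in used_b:
            links.append((i, j)); used_a.add(i); used_b.add(j)
    return links                         # unmatched detections model entry/exit or misses

print(greedy_match([(0, 0), (4, 4), (9, 9)], [(0.5, 0.2), (4.2, 3.9)]))
```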

Book ChapterDOI
11 Aug 2005
TL;DR: An approach based on time-memory trade-offs whose goal is to improve Ohkubo, Suzuki, and Kinoshita's protocol is extended, and it is shown that in practice this approach reaches the same performance as Molnar and Wagner's method, without degrading privacy.
Abstract: Radio frequency identification systems based on low-cost computing devices are the new plaything that every company would like to adopt. Their goal can be either to improve productivity or to strengthen security. Specific identification protocols based on symmetric challenge-response have been developed in order to assure the privacy of the device bearers. Although these protocols fit the devices' constraints, they always suffer from a large time complexity. Existing protocols require O(n) cryptographic operations to identify one device among n. Molnar and Wagner suggested a method to reduce this complexity to O(log n). We show that their technique could degrade the privacy if the attacker has the possibility to tamper with at least one device. Because low-cost devices are not tamper-resistant, such an attack could be feasible. We give a detailed analysis of their protocol and evaluate the threat. Next, we extend an approach based on time-memory trade-offs whose goal is to improve Ohkubo, Suzuki, and Kinoshita's protocol. We show that in practice this approach reaches the same performance as Molnar and Wagner's method, without degrading privacy.

Journal ArticleDOI
TL;DR: This study examines the verification of linear time properties of RSMs and easily derives algorithms for linear time temporal logic model checking with the same complexity in the model.
Abstract: Recursive state machines (RSMs) enhance the power of ordinary state machines by allowing vertices to correspond either to ordinary states or to potentially recursive invocations of other state machines. RSMs can model the control flow in sequential imperative programs containing recursive procedure calls. They can be viewed as a visual notation extending Statecharts-like hierarchical state machines, where concurrency is disallowed but recursion is allowed. They are also related to various models of pushdown systems studied in the verification and program analysis communities. After introducing RSMs and comparing their expressiveness with other models, we focus on whether verification can be efficiently performed for RSMs. Our first goal is to examine the verification of linear time properties of RSMs. We begin this study by dealing with two key components for algorithmic analysis and model checking, namely, reachability (Is a target state reachable from initial states?) and cycle detection (Is there a reachable cycle containing an accepting state?). We show that both these problems can be solved in time O(nθ^2) and space O(nθ), where n is the size of the recursive machine and θ is the maximum, over all component state machines, of the minimum of the number of entries and the number of exits of each component. From this, we easily derive algorithms for linear time temporal logic model checking with the same complexity in the model. We then turn to properties in the branching time logic CTL*, and again demonstrate a bound linear in the size of the state machine, but only for the case of RSMs with a single exit node.
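For the ordinary finite-state special case (no recursion), the two analyses named above are easy to sketch: reachability is a graph search, and cycle detection asks whether some reachable accepting state lies on a cycle. The toy sketch below works on a plain directed graph and does not implement the RSM-specific bookkeeping that yields the O(nθ^2) bound; the graph and accepting set are illustrative.

```python
# Finite-state reachability and accepting-cycle detection, as a toy baseline.
from collections import deque

def reachable(graph, start):
    seen, queue = {start}, deque([start])
    while queue:
        u = queue.popleft()
        for v in graph.get(u, []):
            if v not in seen:
                seen.add(v); queue.append(v)
    return seen

def has_accepting_cycle(graph, start, accepting):
    reach = reachable(graph, start)
    # an accepting state lies on a reachable cycle iff it can reach itself
    return any(a in reach and
               any(a in reachable(graph, v) for v in graph.get(a, []))
               for a in accepting)

g = {0: [1], 1: [2], 2: [1, 3], 3: []}
print(sorted(reachable(g, 0)))                    # [0, 1, 2, 3]
print(has_accepting_cycle(g, 0, accepting={2}))   # True: 2 -> 1 -> 2
```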

Journal ArticleDOI
15 Jan 2005
TL;DR: An efficient method of protein classification using multiple protein networks is proposed, and experiments on function prediction of 3588 yeast proteins show promising results: the computation time is enormously reduced, while the accuracy is still comparable to the SDP/SVM method.
Abstract: Motivation: Support vector machines (SVMs) have been successfully used to classify proteins into functional categories. Recently, to integrate multiple data sources, a semidefinite programming (SDP) based SVM method was introduced. In SDP/SVM, multiple kernel matrices corresponding to each of the data sources are combined with weights obtained by solving an SDP. However, when trying to apply SDP/SVM to large problems, the computational cost can become prohibitive, since both converting the data to a kernel matrix for the SVM and solving the SDP are time and memory demanding. Another application-specific drawback arises when some of the data sources are protein networks. A common method of converting the network to a kernel matrix is the diffusion kernel method, which has a time complexity of O(n^3) and produces a dense matrix of size n × n. Results: We propose an efficient method of protein classification using multiple protein networks. Available protein networks, such as a physical interaction network or a metabolic network, can be directly incorporated. Vectorial data can also be incorporated after conversion into a network by means of neighbor point connection. Similar to the SDP/SVM method, the combination weights are obtained by convex optimization. Due to the sparsity of network edges, the computation time is nearly linear in the number of edges of the combined network. Additionally, the combination weights provide information useful for discarding noisy or irrelevant networks. Experiments on function prediction of 3588 yeast proteins show promising results: the computation time is enormously reduced, while the accuracy is still comparable to the SDP/SVM method. Availability: Software and data will be available on request. Contact: shin@tuebingen.mpg.de

Journal ArticleDOI
TL;DR: A new (randomized) reduction from the Closest Vector Problem (CVP) to SVP that achieves some constant factor hardness is given; the reduction is based on BCH codes and enables boosting the hardness factor to 2^((log n)^(1/2-ε)).
Abstract: Let p > 1 be any fixed real. We show that assuming NP ⊄ RP, there is no polynomial time algorithm that approximates the Shortest Vector Problem (SVP) in ℓp norm within a constant factor. Under the stronger assumption NP ⊄ RTIME(2^poly(log n)), we show that there is no polynomial-time algorithm with approximation ratio 2^((log n)^(1/2-ε)), where n is the dimension of the lattice and ε > 0 is an arbitrarily small constant. We first give a new (randomized) reduction from the Closest Vector Problem (CVP) to SVP that achieves some constant factor hardness. The reduction is based on BCH codes. Its advantage is that the SVP instances produced by the reduction behave well under the augmented tensor product, a new variant of the tensor product that we introduce. This enables us to boost the hardness factor to 2^((log n)^(1/2-ε)).

Proceedings ArticleDOI
06 Jun 2005
TL;DR: A near linear time algorithm for constructing hierarchical nets in finite metric spaces with constant doubling dimension is presented and this data-structure is applied to obtain improved algorithms for the following problems: Approximate nearest neighbor search, well-separated pair decomposition, spanner construction, compact representation scheme, doubling measure, and computation of the Lipschitz constant of a function.
Abstract: We present a near linear time algorithm for constructing hierarchical nets in finite metric spaces with constant doubling dimension. This data-structure is then applied to obtain improved algorithms for the following problems: Approximate nearest neighbor search, well-separated pair decomposition, spanner construction, compact representation scheme, doubling measure, and computation of the (approximate) Lipschitz constant of a function. In all cases, the running (preprocessing) time is near-linear and the space being used is linear.
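To make the object concrete, here is a naive quadratic-time construction of a hierarchy of r-nets: at scale r, net points are pairwise more than r apart and every point lies within r of some net point. The paper's contribution is building essentially this hierarchy in near-linear time for metrics of constant doubling dimension; the points and scales below are illustrative.

```python
# Naive hierarchy of r-nets over geometrically decreasing scales.
import numpy as np

def r_net(points, r):
    net = []
    for p in points:
        if all(np.linalg.norm(p - q) > r for q in net):   # keep p only if r-separated
            net.append(p)
    return np.array(net)

rng = np.random.default_rng(0)
pts = rng.random((500, 2))
hierarchy, r = {}, 1.0
while r > 1e-2:
    hierarchy[r] = r_net(pts, r)
    r /= 2
for scale, net in hierarchy.items():
    print(f"scale {scale:.3f}: {len(net)} net points")
```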

Journal ArticleDOI
TL;DR: Two types of periodicities are defined, and a scalable, computationally efficient algorithm is proposed for each type; the algorithms are then extended in order to discover the periodic patterns of unknown periods at the same time without affecting the time complexity.
Abstract: Periodicity mining is used for predicting trends in time series data. Discovering the rate at which the time series is periodic has always been an obstacle for fully automated periodicity mining. Existing periodicity mining algorithms assume that the periodicity rate (or simply the period) is user-specified. This assumption is a considerable limitation, especially in time series data where the period is not known a priori. In this paper, we address the problem of detecting the periodicity rate of a time series database. Two types of periodicities are defined, and a scalable, computationally efficient algorithm is proposed for each type. The algorithms perform in O(n log n) time for a time series of length n. Moreover, the proposed algorithms are extended in order to discover the periodic patterns of unknown periods at the same time without affecting the time complexity. Experimental results show that the proposed algorithms are highly accurate with respect to the discovered periodicity rates and periodic patterns. Real-data experiments demonstrate the practicality of the discovered periodic patterns.
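The O(n log n) flavour of period detection can be illustrated with FFT-based autocorrelation: correlate the series with itself via the convolution theorem and read off the strongest nonzero lag. The paper's algorithms are likewise convolution-based but operate on discretized symbol sequences and distinguish the two periodicity types; the signal below is synthetic.

```python
# O(n log n) candidate-period detection via FFT-based autocorrelation.
import numpy as np

def candidate_period(x):
    x = np.asarray(x, dtype=float) - np.mean(x)
    f = np.fft.rfft(x, n=2 * len(x))               # zero-pad to avoid wrap-around
    acf = np.fft.irfft(f * np.conj(f))[:len(x)]    # autocorrelation via convolution theorem
    return int(np.argmax(acf[1:])) + 1             # strongest nonzero lag

t = np.arange(1000)
signal = np.sin(2 * np.pi * t / 50) + 0.3 * np.random.default_rng(0).normal(size=1000)
print(candidate_period(signal))                    # expect a lag near 50
```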

Journal ArticleDOI
TL;DR: The results show that the proposed algorithm has a much better scaling capability than Libsvm, SVMlight, and SVMTorch, and that good generalization performance is also achieved on several large databases.
Abstract: Training a support vector machine on a data set of huge size with thousands of classes is a challenging problem. This paper proposes an efficient algorithm to solve this problem. The key idea is to introduce a parallel optimization step to quickly remove most of the nonsupport vectors, where block diagonal matrices are used to approximate the original kernel matrix so that the original problem can be split into hundreds of subproblems which can be solved more efficiently. In addition, some effective strategies such as kernel caching and efficient computation of the kernel matrix are integrated to speed up the training process. Our analysis of the proposed algorithm shows that its time complexity grows linearly with the number of classes and the size of the data set. In the experiments, many appealing properties of the proposed algorithm have been investigated and the results show that the proposed algorithm has a much better scaling capability than Libsvm, SVMlight, and SVMTorch. Moreover, good generalization performance has also been achieved on several large databases.
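A rough illustration of the filtering idea with scikit-learn (not the authors' solver): solve independent sub-problems, which are embarrassingly parallel, discard points that are not support vectors of any sub-problem, and train the final SVM on the survivors. The dataset, kernel, and chunk count are illustrative assumptions.

```python
# Block-wise support-vector filtering followed by a final SVM training pass.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=4000, n_features=20, random_state=0)
chunks = np.array_split(np.random.default_rng(0).permutation(len(X)), 8)

keep = []
for idx in chunks:                        # independent sub-problems (parallelisable)
    sub = SVC(kernel='rbf', C=1.0).fit(X[idx], y[idx])
    keep.extend(idx[sub.support_])        # retain only candidate support vectors

final = SVC(kernel='rbf', C=1.0).fit(X[keep], y[keep])
print(len(keep), "candidates out of", len(X))
```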

Journal ArticleDOI
08 Dec 2005
TL;DR: The result says that the problem of computing a weighted sum of homomorphisms to a weighted graph H is in polynomial time if the adjacency matrix of H has row rank 1, and #P-hard otherwise.
Abstract: We give a complexity theoretic classification of the counting versions of so-called H-colouring problems for graphs H that may have multiple edges between the same pair of vertices. More generally, we study the problem of computing a weighted sum of homomorphisms to a weighted graph H. The problem has two interesting alternative formulations: first, it is equivalent to computing the partition function of a spin system as studied in statistical physics. And second, it is equivalent to counting the solutions to a constraint satisfaction problem whose constraint language consists of two equivalence relations. In a nutshell, our result says that the problem is in polynomial time if the adjacency matrix of H has row rank 1, and #P-hard otherwise.
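The stated tractability criterion is easy to test numerically: per the dichotomy above, the counting problem for a weighted graph H is polynomial-time solvable when H's adjacency matrix has row rank 1 and #P-hard otherwise. A small check (the example matrices are illustrative):

```python
# Numerical check of the rank-1 dichotomy criterion stated in the abstract.
import numpy as np

def counting_is_tractable(adj, tol=1e-9):
    return np.linalg.matrix_rank(np.asarray(adj, dtype=float), tol=tol) <= 1

print(counting_is_tractable([[2, 4], [1, 2]]))   # rank 1 -> polynomial time
print(counting_is_tractable([[1, 1], [1, 0]]))   # rank 2 -> #P-hard
```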

Journal ArticleDOI
TL;DR: A coarse-grained algorithm, AC2001/3.1, is proposed that is worst-case optimal, preserves as much as possible the ease of its integration into a solver (no heavy data structure to be maintained during search), and is competitive with the best fine-grained algorithms such as AC-6.
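A compact sketch of the AC2001/3.1 idea on a toy binary CSP: for each (variable, value, constraint) triple the algorithm stores the position of the last support found and, when the arc is revised again, resumes the search for a support from that position instead of from scratch, which is what yields worst-case optimality. The domains and constraints below are illustrative, and the sketch omits the solver-integration engineering the paper discusses.

```python
# AC2001-style arc consistency with resumable last-support pointers.
from collections import deque

def ac2001(domains, constraints):
    """domains: {var: iterable of values}; constraints: {(x, y): predicate(vx, vy)}."""
    order = {v: list(vals) for v, vals in domains.items()}     # fixed value ordering
    alive = {v: set(vals) for v, vals in domains.items()}      # current domains
    sym = dict(constraints)                                     # make arcs symmetric
    sym.update({(y, x): (lambda vy, vx, p=p: p(vx, vy)) for (x, y), p in constraints.items()})
    last = {}                                                   # (x, vx, y) -> position in order[y]
    queue = deque(sym.keys())
    while queue:
        x, y = queue.popleft()
        revised = False
        for vx in list(alive[x]):
            i = last.get((x, vx, y), 0)
            # resume the search for a support from the stored position, never earlier
            while i < len(order[y]) and not (order[y][i] in alive[y] and sym[(x, y)](vx, order[y][i])):
                i += 1
            if i < len(order[y]):
                last[(x, vx, y)] = i
            else:
                alive[x].discard(vx); revised = True
        if revised:
            queue.extend((z, w) for (z, w) in sym if w == x and z != y)
    return {v: sorted(alive[v]) for v in alive}

doms = {'a': [1, 2, 3], 'b': [1, 2, 3], 'c': [1, 2, 3]}
cons = {('a', 'b'): lambda va, vb: va < vb, ('b', 'c'): lambda vb, vc: vb < vc}
print(ac2001(doms, cons))    # expect {'a': [1], 'b': [2], 'c': [3]}
```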

Proceedings ArticleDOI
Philip N. Klein
23 Jan 2005
TL;DR: Given an n-node planar graph with nonnegative edge-lengths, the algorithm takes O(n log n) time to construct a data structure that supports queries of the following form in O(log n) time: given a destination node t on the boundary of the infinite face and a start node s anywhere, find the s-to-t distance.
Abstract: Given an n-node planar graph with nonnegative edge-lengths, our algorithm takes O(n log n) time to construct a data structure that supports queries of the following form in O(log n) time: given a destination node t on the boundary of the infinite face, and given a start node s anywhere, find the s-to-t distance.

Journal ArticleDOI
TL;DR: It is shown that the coarse-grained and fine-grained localization problems for ad hoc sensor networks can be posed and solved as a pattern recognition problem using kernel methods from statistical learning theory, and a simple and effective localization algorithm is derived.
Abstract: We show that the coarse-grained and fine-grained localization problems for ad hoc sensor networks can be posed and solved as a pattern recognition problem using kernel methods from statistical learning theory. This stems from an observation that the kernel function, which is a similarity measure critical to the effectiveness of a kernel-based learning algorithm, can be naturally defined in terms of the matrix of signal strengths received by the sensors. Thus we work in the natural coordinate system provided by the physical devices. This not only allows us to sidestep the difficult ranging procedure required by many existing localization algorithms in the literature, but also enables us to derive a simple and effective localization algorithm. The algorithm is particularly suitable for networks with densely distributed sensors, most of whose locations are unknown. The computations are initially performed at the base sensors, and the computation cost depends only on the number of base sensors. The localization step for each sensor of unknown location is then performed locally in linear time. We present an analysis of the localization error bounds, and provide an evaluation of our algorithm on both simulated and real sensor networks.
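A toy sketch of the paper's viewpoint: the vector of signal strengths a node receives from the base sensors is its feature vector, and localization becomes supervised learning in that coordinate system. Below, plain Gaussian-kernel ridge regression maps simulated signal-strength vectors to 2-D positions; the propagation model, kernel width, and regularization are illustrative assumptions rather than the paper's exact formulation.

```python
# Kernel regression from simulated signal-strength vectors to 2-D positions.
import numpy as np

rng = np.random.default_rng(0)
bases = rng.random((10, 2))                          # known base-sensor positions
def rss(p):                                          # simplistic signal-strength model
    d = np.linalg.norm(bases - p, axis=1)
    return 1.0 / (1.0 + d**2) + 0.01 * rng.normal(size=len(bases))

train_pos = rng.random((200, 2))                     # nodes with known locations
X = np.array([rss(p) for p in train_pos])
gamma, lam = 10.0, 1e-3
K = np.exp(-gamma * ((X[:, None, :] - X[None, :, :])**2).sum(-1))
alpha = np.linalg.solve(K + lam * np.eye(len(X)), train_pos)   # kernel ridge weights

def locate(p_true):                                  # localize a node of unknown position
    k = np.exp(-gamma * ((X - rss(p_true))**2).sum(-1))
    return k @ alpha

print(locate(np.array([0.3, 0.7])))                  # expected to land roughly near (0.3, 0.7)
```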

Journal ArticleDOI
TL;DR: Two new parallel AMG coarsening schemes are proposed that are based solely on enforcing a maximum independent set property, resulting in sparser coarse grids, and the performance of the new preconditioners is examined.
Abstract: Algebraic multigrid (AMG) is a very efficient iterative solver and preconditioner for large unstructured sparse linear systems. Traditional coarsening schemes for AMG can, however, lead to computational complexity growth as problem size increases, resulting in increased memory use and execution time, and diminished scalability. Two new parallel AMG coarsening schemes are proposed that are based solely on enforcing a maximum independent set property, resulting in sparser coarse grids. The new coarsening techniques remedy memory and execution time complexity growth for various large three-dimensional (3D) problems. If used within AMG as a preconditioner for Krylov subspace methods, the resulting iterative methods tend to converge fast. This paper discusses complexity issues that can arise in AMG, describes the new coarsening schemes, and examines the performance of the new preconditioners for various large 3D problems.
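The coarsening rule the new schemes enforce can be illustrated in its simplest serial form: greedily pick a maximal independent set of the strength-of-connection graph and let the selected vertices become coarse-grid points (the paper's schemes make these decisions in parallel). The random graph below is purely illustrative.

```python
# Greedy maximal independent set as a serial stand-in for MIS-based coarsening.
import random

def greedy_mis(n, neighbours):
    state = {}                                   # vertex -> 'coarse' or 'fine'
    for v in range(n):
        if v not in state:
            state[v] = 'coarse'                  # select v as a coarse point
            for u in neighbours[v]:
                state.setdefault(u, 'fine')      # its neighbours stay fine points
    return [v for v in range(n) if state[v] == 'coarse']

random.seed(0)
n = 200
edges = {(random.randrange(n), random.randrange(n)) for _ in range(600)}
neighbours = {v: set() for v in range(n)}
for a, b in edges:
    if a != b:
        neighbours[a].add(b); neighbours[b].add(a)

coarse = greedy_mis(n, neighbours)
print(len(coarse), "coarse points out of", n)
```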

Book ChapterDOI
24 Feb 2005
TL;DR: An average-case analysis for two input distributions reveals that one RSH converges to optimality in polynomial time, and it is shown that for both RSHs, parallel runs yield a PRAS.
Abstract: In recent years, probabilistic analyses of algorithms have received increasing attention. Despite results on the average-case complexity and smoothed complexity of exact deterministic algorithms, little is known about the average-case behavior of randomized search heuristics (RSHs). In this paper, two simple RSHs are studied on a simple scheduling problem. While it turns out that in the worst case, both RSHs need exponential time to create solutions being significantly better than 4/3-approximate, an average-case analysis for two input distributions reveals that one RSH converges to optimality in polynomial time. Moreover, it is shown that for both RSHs, parallel runs yield a PRAS.
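For concreteness, here is a sketch of one of the two kinds of heuristics studied, a (1+1) evolutionary algorithm with standard bit mutation, applied to the underlying problem of scheduling n jobs on two identical machines to minimize the makespan; the job sizes and iteration budget are illustrative.

```python
# (1+1) EA on two-machine makespan scheduling (partition of jobs by a bit string).
import random

random.seed(1)
jobs = [random.randint(1, 100) for _ in range(50)]
total = sum(jobs)

def makespan(assign):                        # assign[i] in {0, 1}: machine of job i
    load0 = sum(s for s, a in zip(jobs, assign) if a == 0)
    return max(load0, total - load0)

assign = [random.randint(0, 1) for _ in jobs]
best = makespan(assign)
for _ in range(20000):                       # flip each bit independently with prob 1/n
    child = [1 - a if random.random() < 1 / len(jobs) else a for a in assign]
    if makespan(child) <= best:
        assign, best = child, makespan(child)

print("makespan:", best, " half the total load:", total / 2)
```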

Journal ArticleDOI
TL;DR: Exponential estimates and sufficient conditions for the exponential stability of linear time delay systems are given; the proofs make use of Lyapunov-Krasovskii functionals and the conditions are expressed in terms of linear matrix inequalities.
Abstract: Exponential estimates and sufficient conditions for the exponential stability of linear time delay systems are given. The proofs make use of Lyapunov-Krasovskii functionals and the conditions are expressed in terms of linear matrix inequalities.

Proceedings ArticleDOI
TL;DR: A significantly improved algorithm is given for the problem of finding a Fourier representation R of m terms for a given discrete signal A of length N, together with a quadratic-in-m algorithm for the d-dimensional problem that works for any values of the Ni's.
Abstract: We study the problem of finding a Fourier representation R of m terms for a given discrete signal A of length N. The Fast Fourier Transform (FFT) can find the optimal N-term representation in O(N log N) time, but our goal is to get sublinear time algorithms when m << N. Suppose ||A||_2 ≤ M ||A - R_opt||_2, where R_opt is the optimal output. The previously best known algorithms output R such that ||A - R||_2^2 ≤ (1 + ε) ||A - R_opt||_2^2 with probability at least 1 - δ in time poly(m, log(1/δ), log N, log M, 1/ε). Although this is sublinear in the input size, the dominating expression is the polynomial factor in m which, for published algorithms, is greater than or equal to the bottleneck at m^2 that we identify below. Our experience with these algorithms shows that this is a serious limitation in theory and in practice. Our algorithm beats this m^2 bottleneck. Our main result is a significantly improved algorithm for this problem and the d-dimensional analog. Our algorithm outputs an R with the same approximation guarantees, but it runs in time m · poly(log(1/δ), log N, log M, 1/ε). A version of the algorithm holds for all N, though the details differ slightly according to the factorization of N. For the d-dimensional problem of size N_1 × N_2 × ⋯ × N_d, the linear-in-m algorithm extends efficiently to higher dimensions for certain factorizations of the N_i's; we give a quadratic-in-m algorithm that works for any values of the N_i's. This article replaces several earlier, unpublished drafts.
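For contrast with the sublinear algorithm, the super-linear baseline mentioned above is simple: take a full FFT and keep the m largest-magnitude coefficients, which yields the optimal m-term representation R_opt in O(N log N) time. A sketch with an illustrative test signal and sizes:

```python
# Optimal m-term Fourier representation via a full FFT (the O(N log N) baseline).
import numpy as np

def best_m_term(signal, m):
    coeffs = np.fft.fft(signal) / len(signal)
    top = np.argsort(np.abs(coeffs))[-m:]            # m dominant frequency bins
    sparse = np.zeros_like(coeffs)
    sparse[top] = coeffs[top]
    return np.fft.ifft(sparse * len(signal)), top

N, m = 1024, 3
t = np.arange(N)
A = (np.cos(2 * np.pi * 5 * t / N) + 0.5 * np.cos(2 * np.pi * 37 * t / N)
     + 0.05 * np.random.default_rng(0).normal(size=N))
R, freqs = best_m_term(A, 2 * m)                     # each real cosine occupies 2 bins
print(sorted(freqs), np.linalg.norm(A - R.real) / np.linalg.norm(A))
```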