
Showing papers on "Time complexity" published in 2002


Proceedings ArticleDOI
07 Nov 2002
TL;DR: This work presents a distributed algorithm for minimum CDS that outperforms the existing algorithms, and establishes an Ω(n log n) lower bound on the message complexity of any distributed algorithm for nontrivial CDS, making the new algorithm message-optimal.
Abstract: The connected dominating set (CDS) has been proposed as the virtual backbone or spine of a wireless ad hoc network. Three distributed approximation algorithms have been proposed in the literature for minimum CDS. We first reinvestigate their performances. None of these algorithms has a constant approximation factor; thus these algorithms cannot guarantee to generate a CDS of small size. Their message complexities can be as high as O(n^2), and their time complexities may also be as large as O(n^2) and O(n^3). We then present our own distributed algorithm that outperforms the existing algorithms. This algorithm has an approximation factor of at most 8, O(n) time complexity and O(n log n) message complexity. We also establish an Ω(n log n) lower bound on the message complexity of any distributed algorithm for nontrivial CDS; our algorithm is thus message-optimal.

834 citations


Journal ArticleDOI
TL;DR: In this paper, the authors show that the number of actions required to approach the optimal return is lower bounded by the mixing time T of the optimal policy (in the undiscounted case) or by the horizon time T (in the discounted case).
Abstract: We present new algorithms for reinforcement learning and prove that they have polynomial bounds on the resources required to achieve near-optimal return in general Markov decision processes. After observing that the number of actions required to approach the optimal return is lower bounded by the mixing time T of the optimal policy (in the undiscounted case) or by the horizon time T (in the discounted case), we then give algorithms requiring a number of actions and total computation time that are only polynomial in T and the number of states and actions, for both the undiscounted and discounted cases. An interesting aspect of our algorithms is their explicit handling of the Exploration-Exploitation trade-off.

802 citations


Book ChapterDOI
19 Aug 2002
TL;DR: A new definition of distance-based outlier is proposed that assigns each point a weight, the sum of the distances to its k nearest neighbors; outliers are the points with the largest weights, and the proposed algorithm scales linearly in both the dimensionality and the size of the data set.
Abstract: In this paper we propose a new definition of distance-based outlier that considers, for each point, the sum of the distances from its k nearest neighbors, called its weight. Outliers are those points having the largest values of weight. In order to compute these weights, we find the k nearest neighbors of each point in a fast and efficient way by linearizing the search space through the Hilbert space-filling curve. The algorithm consists of two phases: the first provides an approximate solution, within a small factor, after executing at most d + 1 scans of the data set with a low time complexity cost, where d is the number of dimensions of the data set. During each scan the number of points that are candidates to belong to the solution set is substantially reduced. The second phase returns the exact solution by doing a single scan which further examines a small fraction of the data set. Experimental results show that the algorithm always finds the exact solution during the first phase, after no more than d + 1 steps, and that it scales linearly in both the dimensionality and the size of the data set.
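
The weight definition above is easy to state directly. Below is a minimal brute-force sketch of it in Python (quadratic in the number of points and purely illustrative; the paper's contribution is avoiding exactly this cost via the Hilbert space-filling curve):

```python
import numpy as np

def top_n_outliers(X, k, n_outliers):
    """Rank points by 'weight' = sum of distances to their k nearest neighbors
    and return the indices of the n_outliers largest weights. Naive O(N^2)
    illustration of the definition, not the paper's linearized-scan algorithm."""
    dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    np.fill_diagonal(dists, np.inf)           # a point is not its own neighbor
    knn = np.sort(dists, axis=1)[:, :k]       # k smallest distances per point
    weights = knn.sum(axis=1)
    return np.argsort(weights)[::-1][:n_outliers], weights
```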

751 citations


Journal ArticleDOI
TL;DR: This paper surveys three broad classes of very large-scale neighborhood search (VLSN) algorithms: (1) variable-depth methods in which large neighborhoods are searched heuristically, (2) methods in which the neighborhoods are searched using network flow techniques or dynamic programming, and (3) methods based on large neighborhoods induced by restrictions of the original problem that are solvable in polynomial time.

660 citations


Proceedings ArticleDOI
17 May 2002
TL;DR: This paper presents a new algorithm for partial program verification that runs in polynomial time and space, and shows that property simulation scales to large programs and is accurate enough to verify meaningful properties.
Abstract: In this paper, we present a new algorithm for partial program verification that runs in polynomial time and space. We are interested in checking that a program satisfies a given temporal safety property. Our insight is that by accurately modeling only those branches in a program for which the property-related behavior differs along the arms of the branch, we can design an algorithm that is accurate enough to verify the program with respect to the given property, without paying the potentially exponential cost of full path-sensitive analysis. We have implemented this "property simulation" algorithm as part of a partial verification tool called ESP. We present the results of applying ESP to the problem of verifying the file I/O behavior of a version of the GNU C compiler (gcc, 140,000 LOC). We are able to prove that all of the 646 calls to fprintf in the source code of gcc are guaranteed to print to valid, open files. Our results show that property simulation scales to large programs and is accurate enough to verify meaningful properties.

598 citations


Book ChapterDOI
28 May 2002
TL;DR: In computer vision, the incremental SVD is used to develop an efficient and unusually robust subspace-estimating flow-based tracker, and to handle occlusions/missing points in structure-from-motion factorizations.
Abstract: We introduce an incremental singular value decomposition (SVD) of incomplete data. The SVD is developed as data arrives, and can handle arbitrary missing/untrusted values, correlated uncertainty across rows or columns of the measurement matrix, and user priors. Since incomplete data does not uniquely specify an SVD, the procedure selects one having minimal rank. For a dense p × q matrix of low rank r, the incremental method has time complexity O(pqr) and space complexity O((p + q)r)--better than highly optimized batch algorithms such as MATLAB's svd(). In cases of missing data, it produces factorings of lower rank and residual than batch SVD algorithms applied to standard missing-data imputations. We show applications in computer vision and audio feature extraction. In computer vision, we use the incremental SVD to develop an efficient and unusually robust subspace-estimating flow-based tracker, and to handle occlusions/missing points in structure-from-motion factorizations.
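
Since the abstract only summarizes the method, here is a hedged sketch of one standard ingredient, a rank-truncated incremental SVD update that appends a single new data column to an existing thin SVD. It is an assumed simplification that ignores the paper's handling of missing values, correlated uncertainty, and user priors:

```python
import numpy as np

def svd_append_column(U, s, Vt, c, rank):
    """Append one new data column c to an existing thin SVD (U diag(s) Vt)
    and return a rank-truncated thin SVD of the enlarged matrix.
    Simplified incremental-SVD sketch; missing-data handling is omitted."""
    m = U.T @ c                    # projection of c onto the current subspace
    residual = c - U @ m           # component of c outside the subspace
    p = np.linalg.norm(residual)
    P = residual / p if p > 1e-10 else np.zeros_like(c)
    # Build the small (r+1) x (r+1) core matrix and diagonalize it.
    r = len(s)
    K = np.zeros((r + 1, r + 1))
    K[:r, :r] = np.diag(s)
    K[:r, r] = m
    K[r, r] = p
    Uk, sk, Vtk = np.linalg.svd(K)
    # Rotate the enlarged factors and truncate back to the requested rank.
    U_new = np.hstack([U, P[:, None]]) @ Uk
    V_old = Vt.T
    V_new = np.zeros((V_old.shape[0] + 1, r + 1))
    V_new[:-1, :r] = V_old
    V_new[-1, r] = 1.0
    V_new = V_new @ Vtk.T
    k = min(rank, r + 1)
    return U_new[:, :k], sk[:k], V_new[:, :k].T
```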

564 citations


Proceedings ArticleDOI
09 Jun 2002
TL;DR: This paper proposes the first distributed approximation algorithm to construct an MCDS for the unit-disk graph with a constant approximation ratio, and linear time and linear message complexity.
Abstract: A connected dominating set (CDS) for a graph G(V,E) is a subset V1 of V, such that each node in V − V1 is adjacent to some node in V1, and V1 induces a connected subgraph. A CDS has been proposed as a virtual backbone for routing in wireless ad hoc networks. However, it is NP-hard to find a minimum connected dominating set (MCDS). Approximation algorithms for MCDS have been proposed in the literature. Most of these algorithms suffer from a very poor approximation ratio, and from high time complexity and message complexity. Recently, new distributed heuristics for constructing a CDS were developed, with a constant approximation ratio of 8. These new heuristics are based on the construction of a spanning tree, which makes it very costly in terms of communication overhead to maintain the CDS in the case of mobility and topology changes. In this paper, we propose the first distributed approximation algorithm to construct an MCDS for the unit-disk graph with a constant approximation ratio, and linear time and linear message complexity. This algorithm is fully localized, and does not depend on a spanning tree. Thus, maintenance of the CDS after changes of topology guarantees maintenance of the same approximation ratio. In this algorithm each node requires knowledge of its single-hop neighbors, and only a constant number of two-hop and three-hop neighbors. The message length is O(log n) bits.

420 citations


Book ChapterDOI
08 Apr 2002
TL;DR: An algorithm is presented that takes a past time LTL formula and generates an efficient dynamic programming algorithm for checking it over finite execution traces; the work is part of a broader effort to construct a flexible framework for monitoring and analyzing program executions.
Abstract: The problem of testing a linear temporal logic (LTL) formula on a finite execution trace of events, generated by an executing program, occurs naturally in runtime analysis of software. An algorithm is presented which takes a past time LTL formula and generates an efficient dynamic programming algorithm. The generated algorithm tests whether the formula is satisfied by a finite trace of events given as input and runs in linear time, its constant depending on the size of the LTL formula. The memory needed is constant, also depending on the size of the formula. Further optimizations of the algorithm are suggested. Past time operators suitable for writing succinct specifications are introduced and shown definitionally equivalent to the standard operators. This work is part of the PathExplorer project, the objective of which is to construct a flexible framework for monitoring and analyzing program executions.
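
To make the dynamic-programming flavor concrete, the sketch below monitors one hypothetical past-time property (not a formula taken from the paper) in a single pass over the trace using a constant number of bits, one per subformula:

```python
def monitor_trace(trace):
    """Check the past-time property G(alarm -> P start) -- 'whenever alarm
    occurs, start must have occurred at some earlier step or now' -- over a
    finite trace of events. One pass, constant memory: one bit per subformula."""
    seen_start = False          # value of the past-time subformula 'P start'
    ok = True                   # value of the top-level formula so far
    for event in trace:
        seen_start = seen_start or (event == "start")
        if event == "alarm" and not seen_start:
            ok = False
    return ok

# Example usage
print(monitor_trace(["init", "start", "alarm"]))   # True
print(monitor_trace(["init", "alarm", "start"]))   # False
```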

381 citations


Proceedings Article
01 Jan 2002
TL;DR: A new algorithm is presented for matching discrete objects such as strings and trees in linear time, obviating dynamic programming with quadratic time complexity; this improvement over the currently available algorithms makes string kernels a viable alternative for the practitioner.
Abstract: In this paper we present a new algorithm suitable for matching discrete objects such as strings and trees in linear time, thus obviating dynamic programming with quadratic time complexity. Furthermore, prediction cost in many cases can be reduced to linear cost in the length of the sequence to be classified, regardless of the number of support vectors. This improvement on the currently available algorithms makes string kernels a viable alternative for the practitioner.

354 citations


Journal Article
TL;DR: In this paper, it was shown that derandomizing Polynomial Identity Testing is equivalent to proving arithmetic circuit lower bounds for NEXP, and that if one can test in polynomial time (or even nondeterministic subexponential time, infinitely often) whether a given arithmetic circuit over the integers computes an identically zero polynomial, then either NEXP ⊄ P/poly or the Permanent is not computable by polynomial-size arithmetic circuits.
Abstract: We show that derandomizing Polynomial Identity Testing is essentially equivalent to proving arithmetic circuit lower bounds for NEXP. More precisely, we prove that if one can test in polynomial time (or even nondeterministic subexponential time, infinitely often) whether a given arithmetic circuit over integers computes an identically zero polynomial, then either (i) NEXP ⊄ P/poly or (ii) Permanent is not computable by polynomial-size arithmetic circuits. We also prove a (partial) converse: if Permanent requires superpolynomial-size arithmetic circuits, then one can test in subexponential time whether a given arithmetic circuit of polynomially bounded degree computes an identically zero polynomial. Since Polynomial Identity Testing is a coRP problem, we obtain the following corollary: if RP = P (or even coRP ⊆ ⋂_{ε>0} NTIME(2^{n^ε}), infinitely often), then NEXP is not computable by polynomial-size arithmetic circuits. Thus establishing that RP = coRP or BPP = P would require proving superpolynomial lower bounds for Boolean or arithmetic circuits. We also show that any derandomization of RNC would yield new circuit lower bounds for a language in NEXP. We also prove unconditionally that NEXP^RP does not have polynomial-size Boolean or arithmetic circuits. Finally, we show that NEXP ⊄ P/poly if both BPP = P and low-degree testing is in P; here low-degree testing is the problem of checking whether a given Boolean circuit computes a function that is close to some low-degree polynomial over a finite field.

338 citations


Proceedings ArticleDOI
07 Jan 2002
TL;DR: This work presents a distributed algorithm for minimum CDS that outperforms the existing algorithms, and establishes an Ω(n log n) lower bound on the message complexity of any distributed algorithm for nontrivial CDS, making the new algorithm message-optimal.
Abstract: The connected dominating set (CDS) has been proposed as a virtual backbone or spine of wireless ad hoc networks. Three distributed approximation algorithms have been proposed in the literature for minimum CDS. We first reinvestigate their performances. None of these algorithms has a constant approximation factor; thus these algorithms cannot guarantee to generate a CDS of small size. Their message complexities can be as high as O(n^2), and their time complexities may also be as large as O(n^2) and O(n^3). We then present our own distributed algorithm that outperforms the existing algorithms. This algorithm has an approximation factor of at most 8, O(n) time complexity and O(n log n) message complexity. We also establish an Ω(n log n) lower bound on the message complexity of any distributed algorithm for nontrivial CDS; our algorithm is thus message-optimal.

Journal ArticleDOI
TL;DR: Computational results show that an iterated dynasearch algorithm in which descents are performed a few random moves away from previous local minima is superior to other known local search procedures for the total weighted tardiness scheduling problem.
Abstract: This paper introduces a new neighborhood search technique, called dynasearch, that uses dynamic programming to search an exponential size neighborhood in polynomial time. While traditional local search algorithms make a single move at each iteration, dynasearch allows a series of moves to be performed. The aim is for the lookahead capabilities of dynasearch to prevent the search from being attracted to poor local optima. We evaluate dynasearch by applying it to the problem of scheduling jobs on a single machine to minimize the total weighted tardiness of the jobs. Dynasearch is more effective than traditional first-improve or best-improve descent in our computational tests. Furthermore, this superiority is much greater for starting solutions close to previous local minima. Computational results also show that an iterated dynasearch algorithm in which descents are performed a few random moves away from previous local minima is superior to other known local search procedures for the total weighted tardiness scheduling problem.
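
As an illustration of the idea, the sketch below performs one dynasearch descent step for the single-machine total weighted tardiness problem: dynamic programming picks the best set of non-overlapping pairwise interchanges of the current sequence. It is an assumed reconstruction written for clarity (roughly O(n^3) per move), not the authors' optimized implementation:

```python
def segment_cost(jobs, seq, start):
    """Weighted tardiness of job indices `seq` processed back to back from `start`."""
    t, total = start, 0
    for j in seq:
        p, w, d = jobs[j]
        t += p
        total += w * max(0, t - d)
    return total

def dynasearch_descent(jobs, order):
    """One dynasearch move: choose, by dynamic programming, the best set of
    independent (non-overlapping) pairwise interchanges of `order` and apply it.
    jobs[j] = (processing time, weight, due date). Iterate until the returned
    cost stops improving to obtain a full descent."""
    n = len(order)
    prefix = [0] * (n + 1)
    for i, j in enumerate(order):
        prefix[i + 1] = prefix[i] + jobs[j][0]   # processing-time prefix sums

    INF = float("inf")
    F = [INF] * (n + 1)          # F[k]: best cost of the first k positions
    choice = [None] * (n + 1)
    F[0] = 0
    for k in range(1, n + 1):
        # keep position k unchanged
        F[k] = F[k - 1] + segment_cost(jobs, [order[k - 1]], prefix[k - 1])
        choice[k] = None
        # swap positions i and k (1-based), keeping the jobs in between fixed
        for i in range(1, k):
            seg = [order[k - 1]] + order[i:k - 1] + [order[i - 1]]
            c = F[i - 1] + segment_cost(jobs, seg, prefix[i - 1])
            if c < F[k]:
                F[k], choice[k] = c, i
    # reconstruct the new sequence from the chosen swaps
    new_order, k = [], n
    while k > 0:
        i = choice[k]
        if i is None:
            new_order.append(order[k - 1]); k -= 1
        else:
            new_order.extend(reversed([order[k - 1]] + order[i:k - 1] + [order[i - 1]]))
            k = i - 1
    new_order.reverse()
    return new_order, F[n]
```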

Journal ArticleDOI
TL;DR: Two distributed heuristics with constant performance ratios are proposed; their time and message complexities are O(n) and O(n log n), respectively, and both require only single-hop neighborhood knowledge and a message length of O(1).
Abstract: A connected dominating set (CDS) for a graph G(V, E) is a subset V' of V, such that each node in V − V' is adjacent to some node in V', and V' induces a connected subgraph. CDSs have been proposed as a virtual backbone for routing in wireless ad hoc networks. However, it is NP-hard to find a minimum connected dominating set (MCDS). An approximation algorithm for MCDS in general graphs has been proposed in the literature with a performance guarantee of 3 + ln Δ, where Δ is the maximal nodal degree [1]. This algorithm has been implemented in a distributed manner in wireless networks [2]–[4]. This distributed implementation suffers from high time and message complexity, and the performance ratio remains 3 + ln Δ. Another distributed algorithm has been developed in [5], with a performance ratio of Θ(n). Both algorithms require two-hop neighborhood knowledge and a message length of Ω(Δ). On the other hand, wireless ad hoc networks have a unique geometric nature, which can be modeled as a unit-disk graph (UDG), and thus admit heuristics with better performance guarantees. In this paper we propose two distributed heuristics with constant performance ratios. The time and message complexities of these algorithms are O(n) and O(n log n), respectively. Both algorithms require only single-hop neighborhood knowledge and a message length of O(1).

Journal ArticleDOI
TL;DR: A polynomial-time algorithm that provably recovers the signer's secret DSA key when a few consecutive bits of the random nonces k are known for a number of DSA signatures at most linear in log q, under a reasonable assumption on the hash function used in DSA.
Abstract: We present a polynomial-time algorithm that provably recovers the signer's secret DSA key when a few consecutive bits of the random nonces k (used at each signature generation) are known for a number of DSA signatures at most linear in log q (q denoting as usual the small prime of DSA), under a reasonable assumption on the hash function used in DSA. For most significant or least significant bits, the number of required bits is about log^{1/2} q, but can be decreased to log log q with a running time q^{O(1/log log q)} subexponential in log q, and even further to two in polynomial time if one assumes access to ideal lattice basis reduction, namely an oracle for the lattice closest vector problem for the infinity norm. For arbitrary consecutive bits, the attack requires twice as many bits. All previously known results were only heuristic, including those of Howgrave-Graham and Smart who recently introduced that topic. Our attack is based on a connection with the hidden number problem (HNP) introduced at Crypto '96 by Boneh and Venkatesan in order to study the bit-security of the Diffie–Hellman key exchange. The HNP consists, given a prime number q, of recovering a number α ∈ F_q such that for many known random t ∈ F_q a certain approximation of tα is known. To handle the DSA case, we extend Boneh and Venkatesan's results on the HNP to the case where t does not necessarily have a perfectly uniform distribution, and establish uniformity statements on the DSA signatures, using exponential sum techniques. The efficiency of our attack has been validated experimentally, and illustrates once again the fact that one should be very cautious with the pseudo-random generation of the nonce within DSA.

Journal ArticleDOI
TL;DR: An O(n log^2 n) upper bound on the time for deterministic distributed broadcasting in multi-hop radio networks with unknown topology is established, and an O(n^{3/2} log^2 n) algorithm for gossiping in the same network model is developed.

Journal ArticleDOI
TL;DR: Since the graph nonisomorphism problem has a bounded round Arthur-Merlin game, this provides the first strong evidence that graph nonisomorphism has subexponential size proofs; the paper also establishes hardness versus randomness trade-offs for space bounded computation.
Abstract: Traditional hardness versus randomness results focus on time-efficient randomized decision procedures. We generalize these trade-offs to a much wider class of randomized processes. We work out various applications, most notably to derandomizing Arthur-Merlin games. We show that every language with a bounded round Arthur-Merlin game has subexponential size membership proofs for infinitely many input lengths unless exponential time coincides with the third level of the polynomial-time hierarchy (and hence the polynomial-time hierarchy collapses). Since the graph nonisomorphism problem has a bounded round Arthur-Merlin game, this provides the first strong evidence that graph nonisomorphism has subexponential size proofs. We also establish hardness versus randomness trade-offs for space bounded computation.

Book ChapterDOI
Kousha Etessami1
20 Aug 2002
TL;DR: In this article, the authors define and provide algorithms for computing a hierarchy of simulation relations on the state-spaces of ordinary transition systems, finite automata, and Buchi automata.
Abstract: We define and provide algorithms for computing a natural hierarchy of simulation relations on the state-spaces of ordinary transition systems, finite automata, and Buchi automata. These simulations enrich ordinary simulation and can be used to obtain greater reduction in the size of automata by computing the automaton quotient with respect to their underlying equivalence. State reduction for Buchi automata is useful for making explicit-state model checking run faster ([EH00, SB00, EWS01]). We define k-simulations, where 1-simulation corresponds to ordinary simulation and its variants for Buchi automata ([HKR97, EWS01]), and k-simulations, for k > 1, generalize the game definition of 1-simulation by allowing the Duplicator to use k pebbles instead of 1 (to "hedge its bets") in response to the Spoiler's move of a single pebble. As k increases, k-simulations are monotonically non-decreasing relations. Indeed, when k reaches n, the number of states of the automaton, the n-simulations defined for finite automata and for labeled transition systems correspond precisely to language containment and trace containment, respectively. But for each fixed k, the maximal k-simulation relation is computable in polynomial time: n^{O(k)}. This provides a mechanism with which to trade off increased computing time for larger simulation relation size, and more potential reduction in automaton size. We provide algorithms for computing k-simulations using a natural generalization of a prior efficient algorithm based on parity games ([EWS01]) for computing various simulations. Lastly, we observe the relationship between k-simulations and a k-variable interpretation of modal logic.

Journal ArticleDOI
TL;DR: A deterministic local search algorithm for k-SAT running in time (2 − 2/(k+1))^n, up to a polynomial factor, is described, which is better than all previous bounds for deterministic k-SAT algorithms.

Journal ArticleDOI
TL;DR: Results indicate that the limited path heuristic is relatively insensitive to the number of constraints and is superior to the limited granularity heuristic in solving k-constrained QoS routing problems when k > 3.
Abstract: Multiconstrained quality-of-service (QoS) routing deals with finding routes that satisfy multiple independent QoS constraints. This problem is NP-hard. In this paper, two heuristics, the limited granularity heuristic and the limited path heuristic, are investigated. Both heuristics extend the Bellman-Ford shortest path algorithm and solve general k-constrained QoS routing problems. Analytical and simulation studies are conducted to compare the time/space requirements of the heuristics and the effectiveness of the heuristics in finding paths that satisfy the QoS constraints. The major results of this paper are the following. For a network with |N| nodes and |E| edges and k (a small constant) independent QoS constraints, the limited granularity heuristic must maintain a table of size O(|N|^{k-1}) in each node to be effective, which results in a time complexity of O(|N|^k |E|); the limited path heuristic can achieve very high performance by maintaining O(|N|^2 lg(|N|)) entries in each node. These results indicate that the limited path heuristic is relatively insensitive to the number of constraints and is superior to the limited granularity heuristic in solving k-constrained QoS routing problems when k > 3.
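
The sketch below illustrates the general idea behind the limited path heuristic as described above: an extended Bellman-Ford relaxation in which every node keeps a bounded table of non-dominated cost vectors, one component per QoS metric. Function and parameter names are illustrative, not the paper's:

```python
def limited_path_bellman_ford(n, edges, source, constraints, max_paths=10):
    """Extended Bellman-Ford sketch: each node keeps at most `max_paths`
    non-dominated cost vectors instead of a single distance. `edges` is a list
    of (u, v, cost_vector); `constraints` gives the upper bound per QoS metric.
    Returns, per node, the retained cost vectors satisfying all constraints."""
    k = len(constraints)
    paths = [[] for _ in range(n)]           # non-dominated cost vectors per node
    paths[source] = [tuple([0] * k)]

    def dominated(c, pool):
        return any(all(o[i] <= c[i] for i in range(k)) for o in pool)

    for _ in range(n - 1):                   # Bellman-Ford rounds
        updated = False
        for u, v, w in edges:
            for c in paths[u]:
                cand = tuple(c[i] + w[i] for i in range(k))
                if any(cand[i] > constraints[i] for i in range(k)):
                    continue                 # violates a QoS constraint: prune
                if dominated(cand, paths[v]):
                    continue
                # drop vectors that the candidate dominates, then insert
                paths[v] = [o for o in paths[v]
                            if not all(cand[i] <= o[i] for i in range(k))]
                paths[v].append(cand)
                paths[v] = sorted(paths[v])[:max_paths]   # cap the table size
                updated = True
        if not updated:
            break
    return paths
```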

Proceedings ArticleDOI
16 Nov 2002
TL;DR: Every subclass of the CSP defined by a set of allowed constraints is either tractable or NP-complete, and the criterion separating them is that conjectured by Bulatov et al. (2001).
Abstract: The Constraint Satisfaction Problem (CSP) provides a common framework for many combinatorial problems. The general CSP is known to be NP-complete; however, certain restrictions on the possible form of constraints may affect the complexity, and lead to tractable problem classes. There is, therefore, a fundamental research direction, aiming to separate those subclasses of the CSP which are tractable from those which remain NP-complete. In 1978 Schaefer gave an exhaustive solution of this problem for the CSP on a 2-element domain. In this paper we generalise this result to a classification of the complexity of CSPs on a 3-element domain. The main result states that every subclass of the CSP defined by a set of allowed constraints is either tractable or NP-complete, and the criterion separating them is that conjectured by Bulatov et al. (2001). We also exhibit a polynomial time algorithm which, for a given set of allowed constraints, determines whether this set gives rise to a tractable problem class. To obtain the main result and the algorithm we extensively use the algebraic technique for the CSP developed by Jeavons (1998) and Bulatov et al.

Journal Article
TL;DR: Scaling and Probabilistic Smoothing (SAPS) is an efficient SAT algorithm that is conceptually closely related to ESG; the paper also introduces a reactive version of SAPS (RSAPS) that adaptively tunes one of the algorithm's important parameters.
Abstract: In this paper, we study the approach of dynamic local search for the SAT problem. We focus on the recent and promising Exponentiated Sub-Gradient (ESG) algorithm, and examine the factors determining the time complexity of its search steps. Based on the insights gained from our analysis, we developed Scaling and Probabilistic Smoothing (SAPS), an efficient SAT algorithm that is conceptually closely related to ESG. We also introduce a reactive version of SAPS (RSAPS) that adaptively tunes one of the algorithm's important parameters. We show that for a broad range of standard benchmark problems for SAT, SAPS and RSAPS achieve significantly better performance than both ESG and the state-of-the-art WalkSAT variant, Novelty+.

Journal ArticleDOI
TL;DR: This paper eliminates the storage of the fuzzy membership matrix by combining the two updates of fuzzy c-means into a single update of the cluster centers; this significantly affects the asymptotic runtime, as the new algorithm is linear with respect to the number of clusters, while the original is quadratic.
Abstract: In this paper, we present an efficient implementation of the fuzzy c-means clustering algorithm. The original algorithm alternates between estimating centers of the clusters and the fuzzy membership of the data points. The size of the membership matrix is on the order of the original data set, a prohibitive size if this technique is to be applied to very large data sets with many clusters. Our implementation eliminates the storage of this data structure by combining the two updates into a single update of the cluster centers. This change significantly affects the asymptotic runtime as the new algorithm is linear with respect to the number of clusters, while the original is quadratic. Elimination of the membership matrix also reduces the overhead associated with repeatedly accessing a large data structure. Empirical evidence is presented to quantify the savings achieved by this new method.
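
A minimal sketch of the scheme described above, assuming the standard fuzzy c-means update formulas: memberships are recomputed per data point and folded directly into running sums for the cluster centers, so the n-by-c membership matrix is never stored:

```python
import numpy as np

def fuzzy_cmeans_step(X, centers, m=2.0, eps=1e-9):
    """One center-update pass of fuzzy c-means without storing the full
    membership matrix: memberships are computed per point and accumulated
    into weighted sums. `m` is the usual fuzzifier; names are illustrative."""
    c, dim = centers.shape
    num = np.zeros((c, dim))        # accumulated u_ij^m * x_i
    den = np.zeros(c)               # accumulated u_ij^m
    power = 2.0 / (m - 1.0)
    for x in X:
        d = np.maximum(np.linalg.norm(x - centers, axis=1), eps)
        # membership of this point in each cluster (standard FCM formula)
        u = 1.0 / np.sum((d[:, None] / d[None, :]) ** power, axis=1)
        w = u ** m
        num += w[:, None] * x
        den += w
    return num / den[:, None]

# Usage sketch: iterate the center update to convergence
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
centers = X[rng.choice(len(X), size=3, replace=False)]
for _ in range(20):
    centers = fuzzy_cmeans_step(X, centers)
```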

Book ChapterDOI
09 Sep 2002
TL;DR: Scaling and Probabilistic Smoothing (SAPS), an efficient SAT algorithm that is conceptually closely related to ESG, is developed, and a reactive version of SAPS (RSAPS) is introduced that adaptively tunes one of the algorithm's important parameters.
Abstract: In this paper, we study the approach of dynamic local search for the SAT problem. We focus on the recent and promising Exponentiated Sub-Gradient (ESG) algorithm, and examine the factors determining the time complexity of its search steps. Based on the insights gained from our analysis, we developed Scaling and Probabilistic Smoothing (SAPS), an efficient SAT algorithm that is conceptually closely related to ESG. We also introduce a reactive version of SAPS (RSAPS) that adaptively tunes one of the algorithm's important parameters. We show that for a broad range of standard benchmark problems for SAT, SAPS and RSAPS achieve significantly better performance than both ESG and the state-of-the-art WalkSAT variant, Novelty+.
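
The abstract does not spell out SAPS's steps, so the following is only a loosely reconstructed sketch of a dynamic local search in this general style, with clause-weight scaling at local minima and occasional probabilistic smoothing; the structure, parameter names, and values are assumptions, not the paper's:

```python
import random

def weighted_local_search(clauses, n_vars, max_flips=100000,
                          alpha=1.3, rho=0.8, p_smooth=0.05, wp=0.01):
    """Dynamic local search sketch (assumed reconstruction, not the SAPS code):
    clauses are lists of nonzero ints; positive literal i means variable i true."""
    assign = {v: random.random() < 0.5 for v in range(1, n_vars + 1)}
    weight = [1.0] * len(clauses)

    def sat(ci):
        return any((lit > 0) == assign[abs(lit)] for lit in clauses[ci])

    def unsat_weight():
        return sum(weight[ci] for ci in range(len(clauses)) if not sat(ci))

    for _ in range(max_flips):
        unsat = [ci for ci in range(len(clauses)) if not sat(ci)]
        if not unsat:
            return assign                       # satisfying assignment found
        base = sum(weight[ci] for ci in unsat)
        cand = {abs(lit) for ci in unsat for lit in clauses[ci]}
        best_var, best_score = None, 0.0
        for v in cand:                          # best weighted-improvement flip
            assign[v] = not assign[v]
            score = base - unsat_weight()
            assign[v] = not assign[v]
            if score > best_score:
                best_var, best_score = v, score
        if best_var is not None:
            assign[best_var] = not assign[best_var]
        elif random.random() < wp:
            v = random.choice(list(cand))       # random-walk escape
            assign[v] = not assign[v]
        else:
            for ci in unsat:                    # scale weights of unsat clauses
                weight[ci] *= alpha
            if random.random() < p_smooth:      # occasionally smooth toward mean
                mean = sum(weight) / len(weight)
                weight = [rho * w + (1 - rho) * mean for w in weight]
    return None
```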

Journal ArticleDOI
16 Nov 2002
TL;DR: This work gives the first polynomial time algorithm to learn any function of a constant number of halfspaces under the uniform distribution to within any constant error parameter.
Abstract: We give the first polynomial time algorithm to learn any function of a constant number of halfspaces under the uniform distribution to within any constant error parameter. We also give the first quasipolynomial time algorithm for learning any function of a polylog number of polynomial-weight halfspaces under any distribution. As special cases of these results we obtain algorithms for learning intersections and thresholds of halfspaces. Our uniform distribution learning algorithms involve a novel non-geometric approach to learning halfspaces; we use Fourier techniques together with a careful analysis of the noise sensitivity of functions of halfspaces. Our algorithms for learning under any distribution use techniques from real approximation theory to construct low degree polynomial threshold functions.

Proceedings ArticleDOI
08 May 2002
TL;DR: In this article, the authors investigated the stability of a time-controlled switched system consisting of several linear discrete-time subsystems and showed that the system is exponentially stable if the average dwell time is chosen sufficiently large and the total activation time ratio between Schur stable and unstable subsystems is not smaller than a specified constant.
Abstract: We investigate some qualitative properties of time-controlled switched systems consisting of several linear discrete-time subsystems. First, we study exponential stability of the switched system with commutation property, stable combination and average dwell time. When all subsystem matrices are commutative pairwise and there exists a stable combination of unstable subsystem matrices, we propose a class of stabilizing switching laws where Schur stable subsystems are activated arbitrarily while unstable ones are activated in sequence with their duration time periods satisfying a specified ratio. For more general switched systems whose subsystem matrices are not commutative pairwise, we show that the switched system is exponentially stable if the average dwell time is chosen sufficiently large and the total activation time ratio between Schur stable and unstable subsystems is not smaller than a specified constant. Secondly, we use an average dwell time approach incorporated with a piecewise Lyapunov function to study the L_2 gain of the switched system.

Journal ArticleDOI
TL;DR: It is shown that a population can have a drastic impact on an EA's average computation time, changing an exponential time to a polynomial time (in the input size) in some cases.
Abstract: Almost all analyses of time complexity of evolutionary algorithms (EAs) have been conducted for (1 + 1) EAs only. Theoretical results on the average computation time of population-based EAs are few. However, the vast majority of applications of EAs use a population size that is greater than one. The use of population has been regarded as one of the key features of EAs. It is important to understand in depth what the real utility of population is in terms of the time complexity of EAs, when EAs are applied to combinatorial optimization problems. This paper compares (1 + 1) EAs and (N + N) EAs theoretically by deriving their first hitting time on the same problems. It is shown that a population can have a drastic impact on an EA's average computation time, changing an exponential time to a polynomial time (in the input size) in some cases. It is also shown that the first hitting probability can be improved by introducing a population. However, the results presented in this paper do not imply that population-based EAs will always be better than (1 + 1) EAs for all possible problems.
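
For readers unfamiliar with the (1 + 1) EA baseline discussed above, here is a minimal sketch of it on a pseudo-Boolean maximization problem; the population-based (N + N) variants analyzed in the paper keep N parents and N offspring instead of a single individual:

```python
import random

def one_plus_one_ea(fitness, n, max_iters=100000):
    """Minimal (1+1) EA sketch: keep a single parent, flip each bit with
    probability 1/n, accept the offspring if it is at least as fit."""
    parent = [random.randint(0, 1) for _ in range(n)]
    f_parent = fitness(parent)
    for _ in range(max_iters):
        child = [b ^ (random.random() < 1.0 / n) for b in parent]
        f_child = fitness(child)
        if f_child >= f_parent:
            parent, f_parent = child, f_child
    return parent, f_parent

# Example: OneMax (count of ones) on 50 bits
best, value = one_plus_one_ea(sum, 50)
```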

Proceedings ArticleDOI
16 Nov 2002
TL;DR: This work provides the first polynomial time algorithm for the linear version of a problem defined by Irving Fisher in 1891, modeled after Kuhn's primal-dual algorithm for bipartite matching.
Abstract: Although the study of market equilibria has occupied center stage within mathematical economics for over a century, polynomial time algorithms for such questions have so far evaded researchers. We provide the first such algorithm for the linear version of a problem defined by Irving Fisher in 1891. Our algorithm is modeled after Kuhn's (1955) primal-dual algorithm for bipartite matching.

Proceedings ArticleDOI
19 May 2002
TL;DR: Two combinatorial optimization problems related to efficient self-assembly of shapes in the Tile Assembly Model of self- assembly proposed by Rothemund and Winfree are studied, and it is proved that the first problem is NP-complete in general, and polynomial time solvable on trees and squares.
Abstract: Self-assembly is the ubiquitous process by which simple objects autonomously assemble into intricate complexes. It has been suggested that intricate self-assembly processes will ultimately be used in circuit fabrication, nano-robotics, DNA computation, and amorphous computing. In this paper, we study two combinatorial optimization problems related to efficient self-assembly of shapes in the Tile Assembly Model of self-assembly proposed by Rothemund and Winfree [18]. The first is the Minimum Tile Set Problem, where the goal is to find the smallest tile system that uniquely produces a given shape. The second is the Tile Concentrations Problem, where the goal is to decide on the relative concentrations of different types of tiles so that a tile system assembles as quickly as possible. The first problem is akin to finding optimum program size, and the second to finding optimum running time for a "program" to assemble the shape. We prove that the first problem is NP-complete in general, and polynomial time solvable on trees and squares. In order to prove that the problem is in NP, we present a polynomial time algorithm to verify whether a given tile system uniquely produces a given shape. This algorithm is analogous to a program verifier for traditional computational systems, and may well be of independent interest. For the second problem, we present a polynomial time O(log n)-approximation algorithm that works for a large class of tile systems that we call partial order systems.

Journal ArticleDOI
TL;DR: Two simple randomized approximation algorithms are described, which are guaranteed to deliver feasible schedules with expected objective function value within factors of 1.7451 and 1.6853, respectively, of the optimum of two linear programming relaxations of the problem.
Abstract: We consider the scheduling problem of minimizing the average weighted completion time of n jobs with release dates on a single machine. We first study two linear programming relaxations of the problem, one based on a time-indexed formulation, the other on a completion-time formulation. We show their equivalence by proving that an O(n log n) greedy algorithm leads to optimal solutions to both relaxations. The proof relies on the notion of mean busy times of jobs, a concept which enhances our understanding of these LP relaxations. Based on the greedy solution, we describe two simple randomized approximation algorithms, which are guaranteed to deliver feasible schedules with expected objective function value within factors of 1.7451 and 1.6853, respectively, of the optimum. They are based on the concept of common and independent α-points, respectively. The analysis implies in particular that the worst-case relative error of the LP relaxations is at most 1.6853, and we provide instances showing that it is at least e/(e−1) ≈ 1.5819. Both algorithms may be derandomized; their deterministic versions run in O(n^2) time. The randomized algorithms also apply to the on-line setting, in which jobs arrive dynamically over time and one must decide which job to process without knowledge of jobs that will be released afterwards.
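
As a rough illustration of how α-point rounding can work, the sketch below turns a preemptive relaxation schedule into a non-preemptive one by ordering jobs by the time at which an α fraction of their processing has been completed. This is an assumed simplification, not the paper's exact randomized procedure:

```python
def alpha_point_schedule(jobs, fragments, alpha=0.5):
    """Hedged sketch of alpha-point list scheduling. `fragments[j]` is a list of
    (start, end) intervals in which job j is processed in a preemptive relaxation
    schedule; the alpha-point of j is the time by which an alpha fraction of its
    processing is done. Jobs are then run non-preemptively in alpha-point order,
    respecting release dates. jobs[j] = (release, processing, weight)."""
    def alpha_point(j):
        need = alpha * jobs[j][1]
        done = 0.0
        for s, e in sorted(fragments[j]):
            if done + (e - s) >= need:
                return s + (need - done)
            done += e - s
        return float("inf")   # relaxation did not process an alpha fraction

    order = sorted(range(len(jobs)), key=alpha_point)
    t, completion = 0.0, {}
    for j in order:
        release, proc, _ = jobs[j]
        t = max(t, release) + proc
        completion[j] = t
    return order, completion
```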

Journal Article
TL;DR: A novel approach to the aesthetic drawing of undirected graphs that appears to have several advantages over classical methods, including a significantly better running time, a useful inherent capability to exhibit the graph in various dimensions, and an effective means for interactive exploration of large graphs.
Abstract: We present a novel approach to the aesthetic drawing of undirected graphs. The method has two phases: first embed the graph in a very high dimension and then project it into the 2-D plane using principal components analysis. Running time is linear in the graph size, and experiments we have carried out show the ability of the method to draw graphs of 10^5 nodes in a few seconds. The new method appears to have several advantages over classical methods, including a significantly better running time, a useful inherent capability to exhibit the graph in various dimensions, and an effective means for interactive exploration of large graphs.
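
A minimal sketch of the two-phase idea described above, with assumed simplifications (random pivot selection, plain BFS distances) rather than the paper's exact construction:

```python
import numpy as np
from collections import deque

def hde_layout(adj, m=50, seed=0):
    """Embed each node in an m-dimensional space of BFS distances to m pivot
    nodes, then project to 2-D with PCA. Pivot selection and weighting are
    simplified relative to the paper."""
    n = len(adj)
    rng = np.random.default_rng(seed)
    pivots = rng.choice(n, size=min(m, n), replace=False)

    def bfs(src):
        dist = np.full(n, np.inf)
        dist[src] = 0
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if dist[v] == np.inf:
                    dist[v] = dist[u] + 1
                    q.append(v)
        return dist

    # high-dimensional coordinates: one BFS distance per pivot
    X = np.stack([bfs(p) for p in pivots], axis=1)
    X = np.nan_to_num(X, posinf=n)            # disconnected nodes, if any
    X -= X.mean(axis=0)                       # center before PCA
    # PCA via SVD: project onto the top two principal components
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:2].T                       # n-by-2 layout coordinates

# Usage: adj as a list of neighbor lists, e.g. a path graph on 5 nodes
adj = [[1], [0, 2], [1, 3], [2, 4], [3]]
coords = hde_layout(adj, m=3)
```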