
Showing papers in "Theory of Computing Systems / Mathematical Systems Theory in 2016"


Journal ArticleDOI
TL;DR: The $2^k$-barrier for Cluster Vertex Deletion is broken and an $\mathcal{O}(1.9102^{k}(n+m))$-time branching algorithm is presented; the improvement is achieved by a number of structural observations incorporated into the algorithm's branching steps.
Abstract: In the family of clustering problems we are given a set of objects (vertices of the graph), together with some observed pairwise similarities (edges). The goal is to identify clusters of similar objects by slightly modifying the graph to obtain a cluster graph (disjoint union of cliques). Hüffner et al. (Theory Comput. Syst. 47(1), 196–217, 2010) initiated the parameterized study of Cluster Vertex Deletion, where the allowed modification is vertex deletion, and presented an elegant $\mathcal{O}\left(\min(2^{k} k^{6} \log k + n^{3},\ 2^{k} k m \sqrt{n} \log n)\right)$-time fixed-parameter algorithm, parameterized by the solution size. In the last 5 years, this algorithm remained the fastest known algorithm for Cluster Vertex Deletion and, thanks to its simplicity, became one of the textbook examples of an application of the iterative compression principle. In our work we break the $2^k$-barrier for Cluster Vertex Deletion and present an $\mathcal{O}(1.9102^{k}(n+m))$-time branching algorithm. We achieve this improvement by a number of structural observations which we incorporate into the algorithm's branching steps.
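
A cluster graph is exactly a graph with no induced path on three vertices (P3), which is the structural fact that vertex-deletion branching algorithms for this problem exploit. The following minimal Python sketch, assuming an adjacency-set representation, merely checks the cluster-graph property by looking for an induced P3; it illustrates the target structure, and is not the paper's branching algorithm.

```python
from itertools import combinations

def is_cluster_graph(adj):
    """Return True iff the graph (dict: vertex -> set of neighbours) is a
    disjoint union of cliques, i.e. contains no induced P3."""
    for u in adj:
        # two non-adjacent neighbours of u form an induced P3 together with u
        for v, w in combinations(adj[u], 2):
            if w not in adj[v]:
                return False
    return True

# A triangle plus a separate edge is a cluster graph.
adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2}, 4: {5}, 5: {4}}
print(is_cluster_graph(adj))      # True
adj[3].add(4); adj[4].add(3)      # bridging the two cliques creates an induced P3
print(is_cluster_graph(adj))      # False
```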

53 citations


Journal ArticleDOI
TL;DR: This work revisits previous results and proposes several new complexity results for new fragments and extensions of earlier bit-vector logics, providing the currently most complete overview of the complexity of common bit-vector logics.
Abstract: Bit-precise reasoning is important for many practical applications of Satisfiability Modulo Theories (SMT). In recent years, efficient approaches for solving fixed-size bit-vector formulas have been developed. From the theoretical point of view, only a few results on the complexity of fixed-size bit-vector logics have been published. Some of these results only hold if unary encoding on the bit-width of bit-vectors is used. In our previous work (Kovasznai et al. 2012), we have already shown that binary encoding adds more expressiveness to various fixed-size bit-vector logics with and without quantification. In a follow-up work (Frohlich et al. 2013), we then gave additional complexity results for several fragments of the quantifier-free case. In this paper, we revisit our complexity results from (Frohlich et al. 2013; Kovasznai et al. 2012) and go into more detail when specifying the underlying logics and presenting the proofs. We give better insight into where the additional expressiveness of binary encoding comes from. In order to do this, we bring together our previous work and propose several new complexity results for new fragments and extensions of earlier bit-vector logics. We also discuss the expressiveness of various bit-vector operations in more detail. Altogether, we provide the currently most complete overview of the complexity of common bit-vector logics.

34 citations


Journal ArticleDOI
TL;DR: This work investigates the online variant of the (Multiple) Knapsack Problem: an algorithm is to pack items, of arbitrary sizes and profits, in k knapsacks (bins) without exceeding the capacity of any bin, and studies two objective functions: the sum and the maximum of profits over all bins.
Abstract: We investigate the online variant of the (Multiple) Knapsack Problem: an algorithm is to pack items, of arbitrary sizes and profits, in k knapsacks (bins) without exceeding the capacity of any bin. We study two objective functions: the sum and the maximum of profits over all bins. With either objective, our problem statement captures and generalizes previously studied problems, e.g. Dual Bin Packing [1, 6] in case of the sum and Removable Knapsack [10, 11] in case of the maximum. Following previous studies, we consider two variants, depending on whether the algorithm is allowed to remove items (forever) from its bins or not, and two special cases where the profit of an item is a function of its size, in addition to the general setting. We study both deterministic and randomized algorithms; for the latter, we consider both the oblivious and the adaptive adversary model. We classify each variant as either admitting O(1)-competitive algorithms or not. We develop simple O(1)-competitive algorithms for some cases of the max-objective variant that were believed to be intractable, as only 1-bin deterministic algorithms had been considered before.

32 citations


Journal ArticleDOI
TL;DR: Most parameterised variants of the string morphism problem are fixed-parameter intractable and, apart from some very special cases, tractable variants can only be obtained by considering a large part of the input as parameters, namely the length of w and the number of different symbols in u.
Abstract: Given a source string u and a target string w, to decide whether w can be obtained by applying a string morphism on u (i. e., uniformly replacing the symbols in u by strings) constitutes an $\mathcal{NP}$-complete problem. We present a multivariate analysis of this problem (and its many variants) from the viewpoint of parameterised complexity theory, thereby pinning down the sources of its computational hardness. Our results show that most parameterised variants of the string morphism problem are fixed-parameter intractable and, apart from some very special cases, tractable variants can only be obtained by considering a large part of the input as parameters, namely the length of w and the number of different symbols in u.
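
For intuition, the basic decision question (is there a morphism h with h(u) = w?) can be checked directly from the definition by brute force. The sketch below is a toy, exponential-time checker written only to make the problem concrete; it is not one of the parameterised algorithms discussed in the paper, and it allows empty images, an assumption of this sketch.

```python
def has_morphism(u, w):
    """Return True iff some morphism h (symbols of u -> strings) satisfies h(u) = w."""
    def match(i, j, assign):
        if i == len(u):
            return j == len(w)
        sym = u[i]
        if sym in assign:
            img = assign[sym]
            if w.startswith(img, j):
                return match(i + 1, j + len(img), assign)
            return False
        # try every possible (possibly empty) image for this fresh symbol
        for k in range(j, len(w) + 1):
            assign[sym] = w[j:k]
            if match(i + 1, k, assign):
                return True
        del assign[sym]
        return False
    return match(0, 0, {})

print(has_morphism("aba", "xxyxx"))  # True: a -> "xx", b -> "y"
print(has_morphism("aa", "xyx"))     # False: no image of a works
```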

28 citations


Journal Article
TL;DR: In this paper, the properties of polycrystalline gold nanowires for generating sound were evaluated as a function of frequency (from 5–120 kHz), angle from the plane of the nanowires, input power (from 0.30–2.5 W), and width of the wires in the array.
Abstract: We report the investigation of thermophones consisting of arrays of ultralong (mm scale) polycrystalline gold nanowires. Arrays of ∼4000 linear gold nanowires are fabricated at 5 μm pitch on glass surfaces using lithographically patterned nanowire electrodeposition (LPNE). The properties of nanowire arrays for generating sound are evaluated as a function of frequency (from 5–120 kHz), angle from the plane of the nanowires, input power (from 0.30–2.5 W), and the width of the nanowires in the array (from 270 to 500 nm). Classical theory for thermophones based on metal films accurately predicts the measured properties of these gold nanowire arrays. Angular "nodes" for the off-axis sound pressure level (SPL) versus frequency data, predicted by the directivity factor, are faithfully reproduced by these nanowire arrays. The maximum efficiency of these arrays (∼10⁻¹⁰ at 25 kHz), the power dependence, and the frequency dependence are independent of the lateral dimensions of these wires over the range from 270 to 500 nm.

25 citations


Journal ArticleDOI
TL;DR: This paper focuses on the case that the valuation function is a non-negative and monotonically non-decreasing submodular function and introduces a general algorithm for such sub modular matroid secretary problems and obtains constant competitive algorithms for the cases of laminarMatroids and transversal matroids.
Abstract: We study the matroid secretary problems with submodular valuation functions. In these problems, the elements arrive in random order. When one element arrives, we have to make an immediate and irrevocable decision on whether to accept it or not. The set of accepted elements must form an independent set in a predefined matroid. Our objective is to maximize the value of the accepted elements. In this paper, we focus on the case that the valuation function is a non-negative and monotonically non-decreasing submodular function. We introduce a general algorithm for such submodular matroid secretary problems. In particular, we obtain constant competitive algorithms for the cases of laminar matroids and transversal matroids. Our algorithms can be further applied to any independent set system defined by the intersection of a constant number of laminar matroids, while still achieving constant competitive ratios. Notice that laminar matroids generalize uniform matroids and partition matroids. On the other hand, when the underlying valuation function is linear, our algorithm achieves a competitive ratio of 9.6 for laminar matroids, which significantly improves the previous result.
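
As background, the random-arrival flavour of these problems is easiest to see in the classical single-item secretary rule: observe roughly n/e elements, then accept the first element that beats everything seen so far. The sketch below implements only that textbook rule; the paper's matroid and submodular machinery is not reproduced here.

```python
import math
import random

def classical_secretary(values):
    """Textbook 1/e rule: skip the first n/e arrivals, then accept the first
    later arrival that beats everything seen so far (or the last one if forced)."""
    n = len(values)
    cutoff = int(n / math.e)
    best_seen = max(values[:cutoff], default=float("-inf"))
    for v in values[cutoff:]:
        if v > best_seen:
            return v
    return values[-1]

random.seed(0)
arrivals = random.sample(range(1000), 50)   # distinct values in random order
print(classical_secretary(arrivals), "best was", max(arrivals))
```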

22 citations


Journal ArticleDOI
TL;DR: Under the model of nonatomic selfish routing, the topologies of k-commodity undirected and directed networks in which Braess’s paradox never occurs are characterized.
Abstract: Braess's paradox exposes a counterintuitive phenomenon that when travelers selfishly choose their routes in a network, removing links can improve the overall network performance. Under the model of nonatomic selfish routing, we characterize the topologies of k-commodity undirected and directed networks in which Braess's paradox never occurs. Our results strengthen Milchtaich's series-parallel characterization (Milchtaich, Games Econom. Behav. 57(2), 321–346, 2006) for the single-commodity undirected case.
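
For readers new to the paradox, the standard four-node example (textbook numbers, not taken from this paper) already shows removal of a link improving equilibrium cost; the short calculation below works it out.

```python
# Textbook Braess network with one unit of traffic from s to t.
# Edge latencies: l(s,A) = flow, l(A,t) = 1, l(s,B) = 1, l(B,t) = flow,
# and an optional zero-latency shortcut A -> B.

# Without the shortcut, traffic splits evenly over s-A-t and s-B-t.
flow_top = flow_bottom = 0.5
cost_without = flow_top + 1.0                 # each route costs 0.5 + 1 = 1.5
print("equilibrium cost without the shortcut:", cost_without)

# With the shortcut, every traveller prefers s-A-B-t (it is never worse for her),
# so both flow-dependent edges carry the full unit of traffic.
cost_with = 1.0 + 0.0 + 1.0                   # = 2.0, worse for everyone
print("equilibrium cost with the shortcut:", cost_with)
```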

21 citations


Journal ArticleDOI
TL;DR: Three new techniques are introduced: higher dimensional iterations in interpolation; Eigenvalue Shifted Pairs, which allow us to prove that a pair of combinatorial gadgets in combination succeed in proving #P-hardness; algebraic symmetrization, which significantly lowers the symbolic complexity of the proof for computational complexity.
Abstract: We prove a complexity dichotomy theorem for Holant problems on 3-regular graphs with an arbitrary complex-valued edge function. Three new techniques are introduced: (1) higher dimensional iterations in interpolation; (2) Eigenvalue Shifted Pairs, which allow us to prove that a pair of combinatorial gadgets in combination succeed in proving #P-hardness; and (3) algebraic symmetrization, which significantly lowers the symbolic complexity of the proof for computational complexity. Using holographic reductions the classification theorem also applies to problems beyond the basic model.

20 citations


Journal ArticleDOI
TL;DR: It is shown that the number of perfect matchings in K5-free graphs can be computed in polynomial time; the sequential algorithm is also parallelized, showing that the problem is in TC2.
Abstract: Counting the number of perfect matchings in graphs is a computationally hard problem. However, in the case of planar graphs, and even for K3,3-free graphs, the number of perfect matchings can be computed efficiently. The technique to achieve this is to compute a Pfaffian orientation of a graph. In the case of K5-free graphs, this technique will not work because some K5-free graphs do not have a Pfaffian orientation. We circumvent this problem and show that the number of perfect matchings in K5-free graphs can be computed in polynomial time. We also parallelize the sequential algorithm and show that the problem is in TC2. We remark that our results generalize to graphs without singly-crossing minor.
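
To make the counting problem concrete, a brute-force counter for small graphs follows; it is purely illustrative, and the paper's polynomial-time algorithm for K5-free graphs is far more involved.

```python
def count_perfect_matchings(n, edges):
    """Count perfect matchings of a graph on vertices 0..n-1 by recursion:
    always match the smallest unmatched vertex."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)

    def rec(unmatched):
        if not unmatched:
            return 1
        u = min(unmatched)
        return sum(rec(unmatched - {u, v}) for v in adj[u] & unmatched)

    return rec(frozenset(range(n)))

# K4 has 3 perfect matchings; a 6-cycle has 2.
print(count_perfect_matchings(4, [(0,1),(0,2),(0,3),(1,2),(1,3),(2,3)]))  # 3
print(count_perfect_matchings(6, [(0,1),(1,2),(2,3),(3,4),(4,5),(5,0)]))  # 2
```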

20 citations


Journal ArticleDOI
Ines Klimann
TL;DR: It is proved that semigroups generated by reversible two-state Mealy automata have remarkable growth properties: they are either finite or free, and an effective procedure to decide finiteness or freeness of such semigroups is given.
Abstract: We prove that semigroups generated by reversible two-state Mealy automata have remarkable growth properties: they are either finite or free. We give an effective procedure to decide finiteness or freeness of such semigroups when the generating automaton is also invertible.

19 citations


Journal ArticleDOI
TL;DR: In this article, a generalization of the stable marriage problem is proposed, where preferences on one side of the partition are given in terms of arbitrary binary relations, which need not be transitive nor acyclic.
Abstract: We propose a generalization of the classical stable marriage problem. In our model, the preferences on one side of the partition are given in terms of arbitrary binary relations, which need not be transitive nor acyclic. This generalization is practically well-motivated, and as we show, encompasses the well studied hard variant of stable marriage where preferences are allowed to have ties and to be incomplete. As a result, we prove that deciding the existence of a stable matching in our model is NP-complete. Complementing this negative result we present a polynomial-time algorithm for the above decision problem in a significant class of instances where the preferences are asymmetric. We also present a linear programming formulation whose feasibility fully characterizes the existence of stable matchings in this special case. Finally, we use our model to study a long standing open problem regarding the existence of cyclic 3D stable matchings. In particular, we prove that the problem of deciding whether a fixed 2D perfect matching can be extended to a 3D stable matching is NP-complete, showing this way that a natural attempt to resolve the existence (or not) of 3D stable matchings is bound to fail.
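
As background for the generalization, the classical stable marriage problem with strict, complete preferences is solved by the Gale-Shapley deferred-acceptance procedure; the sketch below shows that textbook baseline only, not the paper's model with arbitrary binary relations.

```python
def gale_shapley(men_prefs, women_prefs):
    """Classical deferred acceptance: men propose, women hold the best proposal.
    Preferences are dicts mapping each person to a list ordered best-first."""
    rank = {w: {m: i for i, m in enumerate(prefs)} for w, prefs in women_prefs.items()}
    free = list(men_prefs)                 # men currently without a partner
    next_choice = {m: 0 for m in men_prefs}
    engaged = {}                           # woman -> man
    while free:
        m = free.pop()
        w = men_prefs[m][next_choice[m]]
        next_choice[m] += 1
        if w not in engaged:
            engaged[w] = m
        elif rank[w][m] < rank[w][engaged[w]]:
            free.append(engaged[w])        # w trades up, her old partner is free again
            engaged[w] = m
        else:
            free.append(m)                 # w rejects m
    return {m: w for w, m in engaged.items()}

men = {"a": ["x", "y"], "b": ["x", "y"]}
women = {"x": ["a", "b"], "y": ["a", "b"]}
print(gale_shapley(men, women))            # {'a': 'x', 'b': 'y'}
```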

Journal ArticleDOI
TL;DR: This work considers the problem of managing a bounded size First-In-First-Out (FIFO) queue buffer, where each incoming unit-sized packet requires several rounds of processing before it can be transmitted out.
Abstract: We consider the problem of managing a bounded size First-In-First-Out (FIFO) queue buffer, where each incoming unit-sized packet requires several rounds of processing before it can be transmitted out. Our objective is to maximize the total number of successfully transmitted packets. We consider both push-out (when a policy is permitted to drop already admitted packets) and non-push-out cases. We provide worst-case guarantees for the throughput performance of our algorithms, proving both lower and upper bounds on their competitive ratio against the optimal algorithm, and conduct a comprehensive simulation study that experimentally validates predicted theoretical behavior.
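
To make the model concrete, the sketch below simulates one natural push-out heuristic in discrete time: when the buffer is full, the queued packet with the most remaining processing is evicted in favour of a lighter arrival. The policy, the arrival format, and the time-stepping are assumptions of this illustration, not necessarily the algorithms analysed in the paper.

```python
from collections import deque

def simulate_pushout(arrivals, buffer_size):
    """arrivals: list of (time, required_rounds), sorted by time, unit-sized packets.
    One round of head-of-line processing per time step; on overflow the packet with
    the most remaining work is pushed out if the new packet is lighter.
    Returns the number of packets fully processed and transmitted."""
    queue = deque()          # remaining processing rounds, in FIFO order
    transmitted = 0
    i = 0
    horizon = max(a[0] for a in arrivals) + sum(a[1] for a in arrivals)
    for t in range(horizon + 1):
        # admit (or push out for) packets arriving at time t
        while i < len(arrivals) and arrivals[i][0] == t:
            work = arrivals[i][1]
            if len(queue) < buffer_size:
                queue.append(work)
            else:
                heaviest = max(range(len(queue)), key=lambda k: queue[k])
                if queue[heaviest] > work:
                    queue[heaviest] = work    # new packet takes the evicted slot
            i += 1
        # one round of processing on the head-of-line packet
        if queue:
            queue[0] -= 1
            if queue[0] == 0:
                queue.popleft()
                transmitted += 1
    return transmitted

print(simulate_pushout([(0, 3), (0, 1), (1, 1)], buffer_size=2))   # 2
```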

Journal ArticleDOI
TL;DR: This paper improves the lower bound on the price of anarchy of the proportional allocation game and extends the analysis to settings with budget constraints, with respect to an effective welfare benchmark that takes budgets into account.
Abstract: According to the proportional allocation mechanism from the network optimization literature, users compete for a divisible resource --- such as bandwidth --- by submitting bids. The mechanism allocates to each user a fraction of the resource that is proportional to her bid and collects an amount equal to her bid as payment. Since users act as utility-maximizers, this naturally defines a proportional allocation game. Syrgkanis and Tardos (STOC 2013) quantified the inefficiency of equilibria in this game with respect to the social welfare and presented a lower bound of 26.8 % on the price of anarchy over coarse-correlated and Bayes-Nash equilibria in the full and incomplete information settings, respectively. In this paper, we improve this bound to 50 % over both equilibrium concepts. Our analysis is simpler and, furthermore, we argue that it cannot be improved by arguments that do not take the equilibrium structure into account. We also extend it to settings with budget constraints where we show the first constant bound (between 36 and 50 %) on the price of anarchy of the corresponding game with respect to an effective welfare benchmark that takes budgets into account.
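
The mechanism itself is one line per rule: allocations are proportional to bids and each bidder pays her bid. The snippet below, with made-up bid values, just computes these two rules.

```python
def proportional_allocation(bids):
    """Each user receives a fraction of the resource proportional to her bid
    and pays exactly her bid."""
    total = sum(bids.values())
    allocation = {user: bid / total for user, bid in bids.items()}
    payments = dict(bids)
    return allocation, payments

alloc, pay = proportional_allocation({"alice": 3.0, "bob": 1.0})
print(alloc)  # {'alice': 0.75, 'bob': 0.25}
print(pay)    # {'alice': 3.0, 'bob': 1.0}
```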

Journal ArticleDOI
TL;DR: The use of calcium oxide as a catalyst in the production of biodiesel from waste cooking oil resulted in an iodine number of 15.23 g/100 g, a density of 0.88 g/cm³, a viscosity of 6.00 cSt, and a fatty acid value of 0.56 mg KOH/g.
Abstract: Thermal decomposition of fish bones to obtain calcium oxide (CaO) was conducted at various temperatures of 400, 500, 800, 900, 1000, and 1100 °C. The calcium oxide was then characterized using an X-ray diffractometer, an FTIR spectrophotometer, and SEM analysis. The calcium oxide obtained from the decomposition at 1000 °C was then used as a catalyst in the production of biodiesel from waste cooking oil. The diffraction pattern of the calcium oxide produced from decomposition at 1000 °C showed a pattern similar to that of the calcium oxide reported by the Joint Committee on Powder Diffraction Standards (JCPDS). The diffraction 2θ values at 1000 °C were 32.2, 37.3, 53.8, 64.1, and 67.3 deg. The FTIR spectrum of calcium oxide decomposed at 1000 °C has a specific vibration at wavenumber 362 cm⁻¹, which is similar to the specific vibration of Ca-O. SEM analysis of the calcium oxide indicated that the calcium oxide's morphology shows a smaller size and a more homogeneous structure compared to those of fish bones. The use of calcium oxide as a catalyst in the production of biodiesel from waste cooking oil resulted in an iodine number of 15.23 g/100 g, a density of 0.88 g/cm³, a viscosity of 6.00 cSt, and a fatty acid value of 0.56 mg KOH/g. These characteristic values meet the National Standard of Indonesia (SNI) for biodiesel.

Journal ArticleDOI
TL;DR: The Price of Anarchy (PoA) of the induced game under complete and incomplete information is studied, and it is proved that the PoA is exactly 2 for pure equilibria in the polyhedral environment.
Abstract: We study the efficiency of the proportional allocation mechanism that is widely used to allocate divisible resources. Each agent submits a bid for each divisible resource and receives a fraction proportional to her bids. We quantify the inefficiency of Nash equilibria by studying the Price of Anarchy (PoA) of the induced game under complete and incomplete information. When agents' valuations are concave, we show that the Bayesian Nash equilibria can be arbitrarily inefficient, in contrast to the well-known 4/3 bound for pure equilibria (Johari and Tsitsiklis, Math. Oper. Res. 29(3), 407–435, 2004). Next, we upper bound the PoA over Bayesian equilibria by 2 when agents' valuations are subadditive, generalizing and strengthening previous bounds on lattice submodular valuations. Furthermore, we show that this bound is tight and cannot be improved by any simple or scale-free mechanism. Then we switch to settings with budget constraints, and we show an improved upper bound on the PoA over coarse-correlated equilibria. Finally, we prove that the PoA is exactly 2 for pure equilibria in the polyhedral environment.

Journal ArticleDOI
TL;DR: In this article, the authors present a framework for computing with input data specified by intervals, representing uncertainty in the values of the input parameters and the objective is to minimize the number of queries.
Abstract: We present a framework for computing with input data specified by intervals, representing uncertainty in the values of the input parameters. To compute a solution, the algorithm can query the input parameters that yield more refined estimates in the form of sub-intervals and the objective is to minimize the number of queries. The previous approaches address the scenario where every query returns an exact value. Our framework is more general as it can deal with a wider variety of inputs and query responses and we establish interesting relationships between them that have not been investigated previously. Although some of the approaches of the previous restricted models can be adapted to the more general model, we require more sophisticated techniques for the analysis and we also obtain improved algorithms for the previous model. We address selection problems in the generalized model and show that there exist 2-update competitive algorithms that do not depend on the lengths or distribution of the sub-intervals and hold against the worst case adversary. We also obtain similar bounds on the competitive ratio for the MST problem in graphs.

Journal ArticleDOI
TL;DR: In this paper, the authors measured the bruise area resulted by fresh fruit bunch (FFB) falling when harves ted, loading (throwing up) FFB to truck bin, and shipping using truck.
Abstract: There are losses of production due to oil palm fiel d’s material handling. Activities that may raise th e losses are harvesting and transportation, which may cause brui se and damage to fruit. This research was aimed to learn the bruise of fresh fruit bunch (FFB) phenomenon in harvesting and transportation. Method used in this research was measuring the bruise area resulted by FFB falling when harves ted, loading (throwing up) FFB to truck bin, and tr ansporting using truck. These data, coupled with weight of bruised f ruit, were calculated to get FFB bruise index. Each FFB bruise index is related to potential free fatty acid (FFA) value . FFA is one of important quality indicator of crud e palm oil. The harvesting was conducted at mineral land and peat l and, and the loading and transportation was conduct e using wooden board truck and dump (iron board) truck. The re was a difference between bruise index and FFA of FFB fall on mineral and on peat land. FFA of mineral land harve sting was 2.19% while of peat land was 1.27%. It wa s obvious that fruit quality degradation was higher when FFB posit i ned at the bottom of bin truck layer rather than at the top. FFA of truck bin bottom layer was 2.79% while of top layer was 0.64%. It was found that there was a cumulativ e bruise on FFB within material handling, start from harvesting, lo ading up to truck bin, and transporting from field to loading ramp.

Journal ArticleDOI
TL;DR: The Generalized Serial Dictatorship Mechanism with Ties (GSDT) is introduced and it is shown that GSDT can generate all POMs using different priority orderings over the applicants, but it satisfies truthfulness only for certain such orderings.
Abstract: We consider Pareto optimal matchings (POMs) in a many-to-many market of applicants and courses where applicants have preferences, which may include ties, over individual courses and lexicographic preferences over sets of courses. Since this is the most general setting examined so far in the literature, our work unifies and generalizes several known results. Specifically, we characterize POMs and introduce the Generalized Serial Dictatorship Mechanism with Ties (GSDT) that effectively handles ties via properties of network flows. We show that GSDT can generate all POMs using different priority orderings over the applicants, but it satisfies truthfulness only for certain such orderings. This shortcoming is not specific to our mechanism; we show that any mechanism generating all POMs in our setting is prone to strategic manipulation. This is in contrast to the one-to-one case (with or without ties), for which truthful mechanisms generating all POMs do exist.
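
The serial dictatorship idea behind GSDT is easiest to see in its basic one-course, strict-preference form: process applicants in a priority order and give each her most-preferred course that still has a free seat. The sketch below shows only that basic version; GSDT's handling of ties and lexicographic set preferences via network flows is not reproduced here.

```python
def serial_dictatorship(priority, prefs, capacity):
    """Basic serial dictatorship: applicants, in priority order, each take their
    most-preferred course that still has a free seat (strict preferences)."""
    seats = dict(capacity)
    assignment = {}
    for applicant in priority:
        for course in prefs[applicant]:
            if seats.get(course, 0) > 0:
                assignment[applicant] = course
                seats[course] -= 1
                break
    return assignment

prefs = {"ann": ["algo", "logic"], "bob": ["algo", "logic"], "eve": ["logic", "algo"]}
print(serial_dictatorship(["ann", "bob", "eve"], prefs, {"algo": 1, "logic": 1}))
# {'ann': 'algo', 'bob': 'logic'}  -- eve is left unassigned once both courses are full
```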

Journal ArticleDOI
TL;DR: This work investigates properties of regular languages of thin trees, using an algebra suitable for thin trees as the main tool, and shows that in various senses thin trees are not as rich as arbitrary infinite trees.
Abstract: An infinite tree is called thin if it contains only countably many infinite branches. Thin trees can be seen as intermediate structures between infinite words and infinite trees. In this work we investigate properties of regular languages of thin trees. Our main tool is an algebra suitable for thin trees. Using this framework we characterize various classes of regular languages: commutative, open in the standard topology, and definable in weak MSO logic among all trees. We also show that in various meanings thin trees are not as rich as all infinite trees. In particular we observe a collapse of the parity index to the level (1, 3) and a collapse of the topological complexity to co-analytic sets. Moreover, a gap property is shown: a regular language of thin trees is either weak MSO-definable among all trees or co-analytic-complete.

Journal Article
TL;DR: In this article, three-phase piezoelectric bulk composites were fabricated using a mix and cast method, which were comprised of lead zirconate titanate (PZT), aluminum (Al), and an epoxy matrix.
Abstract: Three-phase piezoelectric bulk composites were fabricated using a mix and cast method. The composites were comprised of lead zirconate titanate (PZT), aluminum (Al), and an epoxy matrix. The volume fraction of the PZT and Al was varied from 0.1 to 0.3 and 0.0 to 0.17, respectively. The influences of an electrically conductive filler (Al), polarization process (contact and Corona), and Al surface treatment, on piezoelectric and dielectric properties, were observed. The piezoelectric strain coefficient, d33, effective dielectric constant, εr, capacitance, C, and resistivity were measured and compared according to polarization process, the volume fraction of constituent phases, and Al surface treatment. The maximum values of d33 were ∼3.475 and ∼1.0 pC/N for corona and contact poled samples, respectively, for samples with volume fractions of 0.40 and 0.13 of PZT and Al (surface treated), respectively. Also, the maximum dielectric constant for the surface treated Al samples was ∼411 for volume fractions of 0....



Journal ArticleDOI
TL;DR: This work presents a fully-polynomial time approximation-scheme for the problem of counting the s-t paths of length at most L, and shows that, unless P=NP, there is no finite approximation to the bi-criteria version of the problem.
Abstract: Given a directed acyclic graph with non-negative edge-weights, two vertices s and t, and a threshold-weight L, we present a fully-polynomial time approximation-scheme for the problem of counting the s-t paths of length at most L. This is best possible, as we also show that the problem is #P-complete. We then show that, unless P=NP, there is no finite approximation to the bi-criteria version of the problem: count the number of s-t paths of length at most L1 in the first criterion, and of length at most L2 in the second criterion. On the positive side, we extend the approximation scheme to the relaxed version of the problem, where, given thresholds L1 and L2, we relax the requirement that the s-t paths have length at most L1, and allow the paths to have length at most $L_1^{\varepsilon} := (1+\varepsilon)L_1$, for any $\varepsilon > 0$.
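
For intuition, a brute-force enumerator over a tiny DAG makes the counting problem concrete; its exponential behaviour in general is consistent with the #P-completeness result, and the FPTAS itself is not reproduced here. The graph encoding is an assumption of this sketch.

```python
def count_bounded_paths(dag, weights, s, t, L):
    """Count s-t paths of total edge-weight at most L in a DAG.
    dag: dict vertex -> list of successors; weights: dict (u, v) -> weight."""
    def rec(v, budget):
        if budget < 0:
            return 0
        if v == t:
            return 1
        return sum(rec(w, budget - weights[(v, w)]) for w in dag.get(v, []))
    return rec(s, L)

dag = {"s": ["a", "b"], "a": ["t"], "b": ["t"]}
weights = {("s", "a"): 1, ("a", "t"): 1, ("s", "b"): 5, ("b", "t"): 5}
print(count_bounded_paths(dag, weights, "s", "t", L=3))   # 1 (only s-a-t fits)
print(count_bounded_paths(dag, weights, "s", "t", L=10))  # 2
```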


Journal ArticleDOI
TL;DR: An aggregated variant of the power-aware problem of scheduling non-preemptively a set of jobs on a single speed-scalable processor so as to minimize the maximum lateness, under a given budget of energy, is considered.
Abstract: We consider the power-aware problem of scheduling non-preemptively a set of jobs on a single speed-scalable processor so as to minimize the maximum lateness, under a given budget of energy. In the offline setting, our main contribution is a combinatorial polynomial time algorithm for the case in which the jobs have common release dates. In the presence of arbitrary release dates, we show that the problem becomes strongly $\mathcal{NP}$-hard. Moreover, we show that there is no O(1)-competitive deterministic algorithm for the online setting in which the jobs arrive over time. Then, we turn our attention to an aggregated variant of the problem, where the objective is to find a schedule minimizing a linear combination of maximum lateness and energy. As we show, our results for the budget variant can be adapted to derive a similar polynomial time algorithm and an $\mathcal{NP}$-hardness proof for the aggregated variant in the offline setting, with common and arbitrary release dates respectively. More interestingly, for the online case, we propose a 2-competitive algorithm.

Journal ArticleDOI
TL;DR: In this article, it was shown that the size minimization problem for complete OBDDs and the width minimization problem are both NP-hard, and that optimal variable orderings with respect to the OBDD size are not necessarily optimal for the complete model or the OBDD width.
Abstract: Ordered binary decision diagrams (OBDDs) are a popular data structure for Boolean functions. Some applications work with a restricted variant called complete OBDDs which is strongly related to nonuniform deterministic finite automata. One of its complexity measures is the width which has been investigated in several areas in computer science like machine learning, property testing, and the design and analysis of implicit graph algorithms. For a given function the size and the width of a (complete) OBDD is very sensitive to the choice of the variable ordering but the computation of an optimal variable ordering for the OBDD size is known to be NP-hard. Since optimal variable orderings with respect to the OBDD size are not necessarily optimal for the complete model or the OBDD width and hardly anything about the relation between optimal variable orderings with respect to the size and the width is known, this relationship is investigated. Here, using a new reduction idea it is shown that the size minimization problem for complete OBDDs and the width minimization problem are NP-hard.
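
The sensitivity to the variable ordering can be observed directly on small functions: for complete (quasi-reduced) OBDDs, the number of nodes on a level equals the number of distinct subfunctions obtained by fixing the variables above it, and the width is the maximum over levels. The naive sketch below computes these level sizes from a truth table for two orderings of a small function; it illustrates that standard characterization and is not the paper's reduction.

```python
from itertools import product

def complete_obdd_level_sizes(f, n, order):
    """Number of distinct subfunctions of f after fixing the first i variables of
    `order`, for i = 0..n; for complete (quasi-reduced) OBDDs this is the number
    of nodes per level, and its maximum is the OBDD width."""
    sizes = []
    for i in range(n + 1):
        subfunctions = set()
        for prefix in product([0, 1], repeat=i):
            # record the subfunction as its truth table over the remaining variables
            table = []
            for suffix in product([0, 1], repeat=n - i):
                x = [0] * n
                for var, val in zip(order[:i], prefix):
                    x[var] = val
                for var, val in zip(order[i:], suffix):
                    x[var] = val
                table.append(bool(f(x)))
            subfunctions.add(tuple(table))
        sizes.append(len(subfunctions))
    return sizes

# f = (x0 and x1) or (x2 and x3): the two orderings below give different widths.
f = lambda x: (x[0] and x[1]) or (x[2] and x[3])
print(complete_obdd_level_sizes(f, 4, [0, 1, 2, 3]))   # [1, 2, 2, 3, 2]
print(complete_obdd_level_sizes(f, 4, [0, 2, 1, 3]))   # [1, 2, 4, 3, 2]
```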

Journal ArticleDOI
TL;DR: The present article proves that, with respect to symbol comparisons, QuickSelect's average-case complexity remains Θ(n), and provides explicit expressions for the dominant constants, closely related to the probabilistic behaviour of the source.
Abstract: We revisit the analysis of the classical QuickSelect algorithm. Usually, the analysis deals with the mean number of key comparisons, but here we view keys as words produced by a source, and words are compared via their symbols in lexicographic order. Our probabilistic models belong to a broad category of information sources that encompasses memoryless (i.e., independent-symbols) and Markov sources, as well as many unbounded-correlation sources. The "realistic" cost of the algorithm is here the total number of symbol comparisons performed by the algorithm, and, in this context, the average-case analysis aims to provide estimates for the mean number of symbol comparisons. For the QuickSort algorithm, known average-case complexity results are $\Theta(n \log n)$ in the case of key comparisons, and $\Theta(n \log^{2} n)$ for symbol comparisons. For QuickSelect algorithms, and with respect to key comparisons, the average-case complexity is $\Theta(n)$. In the present article, we prove that, with respect to symbol comparisons, QuickSelect's average-case complexity remains $\Theta(n)$. In each case, we provide explicit expressions for the dominant constants, closely related to the probabilistic behaviour of the source. We began investigating this research topic with Philippe Flajolet, and the short version of the present paper (the ICALP 2009 paper) was written with him. As usual, Philippe played a central role, notably on the following points: introduction of the QuickVal algorithm, tameness of sources, and use of Rice's method. He also made many experiments exhibiting the asymptotic slope and plotted nice graphs, which are reproduced in this paper. Even though the extended abstract does not provide any proof of the analysis of the algorithm QuickQuant, Philippe also devised with us a precise plan for this proof, which has now been completely written. For all these reasons, we could have added (and certainly would have liked to add) Philippe as a co-author of this paper. On the other hand, Philippe was extremely exacting about how his papers were to be written and organised, and we cannot be sure that he would have liked or validated our editing choices. In the end, this is why we have decided not to include him as a co-author, but instead, to dedicate, with deference and affection, this paper to his memory. Thank you, Philippe!
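
To illustrate what counting symbol comparisons means, the toy instrumentation below runs a plain QuickSelect on strings and charges one unit per symbol pair inspected during each lexicographic comparison; it is not the QuickVal/QuickQuant analysis from the paper, and the pivot choice is an arbitrary assumption of the sketch.

```python
def quickselect_with_symbol_count(words, k):
    """Return (k-th smallest word, symbol comparisons), charging one unit per
    pair of symbols inspected while comparing two words lexicographically."""
    count = 0

    def less(a, b):
        nonlocal count
        for ca, cb in zip(a, b):
            count += 1
            if ca != cb:
                return ca < cb
        return len(a) < len(b)           # one word is a prefix of the other

    def select(items, k):
        pivot = items[len(items) // 2]   # arbitrary deterministic pivot choice
        lo = [w for w in items if less(w, pivot)]
        hi = [w for w in items if less(pivot, w)]
        equal = len(items) - len(lo) - len(hi)
        if k < len(lo):
            return select(lo, k)
        if k < len(lo) + equal:
            return pivot
        return select(hi, k - len(lo) - equal)

    return select(list(words), k), count

words = ["banana", "band", "bandana", "apple", "bandit"]
print(quickselect_with_symbol_count(words, 2))   # ('band', <number of symbol comparisons>)
```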


Journal ArticleDOI
TL;DR: It is proved that the class of deterministic Data Walking Automata is closed under all Boolean operations, and that the class of non-deterministic Data Walking Automata has decidable emptiness, universality, and containment problems.
Abstract: Data words are words with additional edges that connect pairs of positions carrying the same data value. We consider a natural model of automaton walking on data words, called Data Walking Automaton, and study its closure properties, expressiveness, and the complexity of some basic decision problems. Specifically, we show that the class of deterministic Data Walking Automata is closed under all Boolean operations, and that the class of non-deterministic Data Walking Automata has decidable emptiness, universality, and containment problems. We also prove that deterministic Data Walking Automata are strictly less expressive than non-deterministic Data Walking Automata, which in turn are captured by Class Memory Automata.

Journal ArticleDOI
TL;DR: An approximation algorithm is proposed that, for any constant k, in polynomial time delivers solutions of cost at most $\alpha_k$ times OPT, where $\alpha_k$ is an increasing function of k with $\lim_{k\to\infty} \alpha_k = 3$.
Abstract: We study the k-level uncapacitated facility location problem (k-level UFL) in which clients need to be connected with paths crossing open facilities of k types (levels). In this paper we first propose an approximation algorithm that, for any constant k, in polynomial time delivers solutions of cost at most $\alpha_k$ times OPT, where $\alpha_k$ is an increasing function of k, with $\lim_{k\to\infty} \alpha_k = 3$. Our algorithm rounds a fractional solution to an extended LP formulation of the problem. The rounding builds upon the technique of iteratively rounding fractional solutions on trees (Garg, Konjevod, and Ravi, SODA'98) originally used for the group Steiner tree problem. We improve the approximation ratio for k-level UFL for all k ≥ 3; in particular we obtain ratios equal to 2.02, 2.14, and 2.24 for k = 3, 4, and 5. Second, we give a simple interpretation of the randomization process (Li, ICALP 2011) for 1-level UFL in terms of solving an auxiliary (factor-revealing) LP. Armed with this simple viewpoint, we exercise the randomization on our algorithm for the k-level UFL. We further improve the approximation ratio for all k ≥ 3, obtaining 1.97, 2.09, and 2.19 for k = 3, 4, and 5. Third, we extend our algorithm to the k-level UFL with penalties (k-level UFLWP), in which the setting is the same as k-level UFL except that the planner has the option to pay a penalty instead of connecting chosen clients.