
Showing papers on "Time complexity published in 2022"


Journal ArticleDOI
TL;DR: In this article, a hybrid of the whale optimization algorithm (WOA) with a novel method called the local minima avoidance method (LMAM), abbreviated as HWOA, is proposed to tackle multi-threshold color image segmentation by employing the Otsu method as an objective function.
Abstract: Traditional methods to address color image segmentation work efficiently for bi-level thresholding. However, for multi-level thresholding, traditional methods suffer from time complexity that increases exponentially with the increasing number of threshold levels. To overcome this problem, in this paper, a new approach is proposed to tackle multi-threshold color image segmentation by employing the Otsu method as an objective function. This approach is based on a hybrid of the whale optimization algorithm (WOA) with a novel method called the local minima avoidance method (LMAM), abbreviated as HWOA. LMAM avoids local minima by updating the whale either within the search space of the problem or between two whales selected randomly from the population, based on a certain probability. HWOA is validated on ten color images taken from the Berkeley University Dataset by measuring the objective values, peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), features similarity index (FSIM), and CPU time, and compared with a number of well-known, robust meta-heuristic algorithms: the sine–cosine algorithm (SCA), WOA, modified salp swarm algorithm (MSSA), improved marine predators algorithm (IMPA), modified Cuckoo Search (CS) using McCulloch’s algorithm (CSMC), and equilibrium optimizer (EO). The experimental results show that HWOA is superior to all the other algorithms in terms of PSNR, FSIM, and objective values, and is competitive in terms of SSIM.
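
As context for the objective function mentioned above, the multi-level Otsu criterion (the between-class variance of the intensity classes induced by a threshold vector) can be evaluated as in the minimal sketch below. The function names and toy histogram are illustrative only; the HWOA/LMAM search itself is not shown.

```python
import numpy as np

def otsu_objective(hist, thresholds):
    """Between-class variance of a grayscale histogram `hist` (length 256)
    split by the sorted threshold vector `thresholds` (multi-level Otsu).
    Higher is better; a metaheuristic such as WOA would maximize this."""
    p = hist.astype(float) / hist.sum()          # normalized histogram
    levels = np.arange(len(hist))
    mu_total = (p * levels).sum()                # global mean intensity
    edges = [0, *sorted(int(t) for t in thresholds), len(hist)]
    variance = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):    # one term per class
        w = p[lo:hi].sum()                       # class probability
        if w > 0:
            mu = (p[lo:hi] * levels[lo:hi]).sum() / w   # class mean
            variance += w * (mu - mu_total) ** 2
    return variance

# toy usage: a random histogram with three thresholds (four classes)
rng = np.random.default_rng(0)
hist = rng.integers(0, 100, size=256)
print(otsu_objective(hist, [64, 128, 192]))
```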

22 citations


Journal ArticleDOI
TL;DR: The Probabilistic Generalization of Isolation Forest (PGIF) as discussed by the authors was proposed to detect anomalies hidden between clusters more effectively; it is based on a nonlinear dependence of the segment-cumulated probability on the segment length.

20 citations


Journal ArticleDOI
TL;DR: In this article, a DNA algorithm based on the Adleman-Lipton model is proposed to deal with FTSP, and the solution of the problem can be obtained by executing several basic biological manipulations on DNA molecules with O(n²) computing complexity (n is the number of vertices in the problem, excluding the origin).
Abstract: The Family Traveling Salesperson Problem (FTSP) is a variant of the Traveling Salesperson Problem (TSP) in which all vertices are divided into several different families, and the goal is to find a loop that visits a specified number of vertices from each family with minimal loop cost. As a Non-deterministic Polynomial Complete (NP-complete) problem, it is difficult to handle with traditional computing. In contrast, as a machine with strong parallelism, the DNA computer has incomparable advantages over digital computers when dealing with NP problems. Building on this, a DNA algorithm based on the Adleman-Lipton model is proposed for the FTSP. In the algorithm, the solution of the problem can be obtained by executing several basic biological manipulations on DNA molecules with O(N²) computing complexity (N is the number of vertices in the problem, excluding the origin). Through simulation experiments on some benchmark instances, the results show that the parallel DNA algorithm performs better than traditional computing. The effectiveness of the algorithm is verified by working through the algorithm process in detail. Furthermore, the algorithm shows that DNA computing, as one of the parallel computing methods, has the potential to solve more complex big data problems.
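
To pin down the objective described above, here is a brute-force reference sketch of the FTSP in ordinary code, assuming a depot/origin vertex and a required number of visits per family; none of the names below come from the paper. The exhaustive enumeration is exactly the exponential blow-up that the parallel DNA operations are meant to absorb.

```python
from itertools import combinations, permutations, product

def ftsp_bruteforce(dist, families, visits, origin=0):
    """Exhaustively solve a tiny Family TSP instance.
    dist     : dist[u][v], symmetric travel cost matrix
    families : list of vertex lists (disjoint, excluding the origin)
    visits   : visits[f] = how many vertices of family f must be visited
    Returns (best_cost, best_tour) for a loop origin -> ... -> origin."""
    best_cost, best_tour = float("inf"), None
    # choose which vertices of each family to visit ...
    for chosen in product(*(combinations(fam, k) for fam, k in zip(families, visits))):
        selected = [v for group in chosen for v in group]
        # ... then try every visiting order of the selected vertices
        for order in permutations(selected):
            tour = (origin, *order, origin)
            cost = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
            if cost < best_cost:
                best_cost, best_tour = cost, tour
    return best_cost, best_tour

# toy instance: origin 0, families {1,2} and {3,4}, visit one vertex of each
dist = [[0, 2, 9, 4, 7],
        [2, 0, 6, 3, 8],
        [9, 6, 0, 5, 1],
        [4, 3, 5, 0, 2],
        [7, 8, 1, 2, 0]]
print(ftsp_bruteforce(dist, families=[[1, 2], [3, 4]], visits=[1, 1]))
```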

19 citations


Journal ArticleDOI
01 Jan 2022
TL;DR: In this paper, the authors focus on reducing the physical cost of implementing quantum algorithms when using fault-tolerant quantum error correcting codes, in particular, those for which implementing the T gate consumes vastly more resources than the other gates in the gate set.
Abstract: This work focuses on reducing the physical cost of implementing quantum algorithms when using the state-of-the-art fault-tolerant quantum error correcting codes, in particular, those for which implementing the T gate consumes vastly more resources than the other gates in the gate set. More specifically, we consider the group of unitaries that can be exactly implemented by a quantum circuit consisting of the Clifford+T gate set, a universal gate set. Our primary interest is to compute a circuit for a given $n$-qubit unitary $U$, using the minimum possible number of T gates (called the T-count of $U$). We consider the problem COUNT-T, the optimization version of which aims to find the T-count of $U$. In its decision version, the goal is to decide if the T-count is at most some positive integer $m$. Given an oracle for COUNT-T, we can compute a T-optimal circuit in time polynomial in the T-count and dimension of $U$. We give a provable classical algorithm that solves COUNT-T (decision) in time $O\left(N^{2(c-1)\lceil\frac{m}{c}\rceil}\,\mathrm{poly}(m,N)\right)$ and space $O\left(N^{2\lceil\frac{m}{c}\rceil}\,\mathrm{poly}(m,N)\right)$, where $N=2^n$ and $c\geq 2$. We also introduce an asymptotically faster multiplication method that shaves a factor of $N^{0.7457}$ off of the overall complexity. Lastly, beyond our improvements to the rigorous algorithm, we give a heuristic algorithm that solves COUNT-T (optimization) with both space and time $\mathrm{poly}(m,N)$. While our heuristic method still scales exponentially with the number of qubits (though with a lower exponent), there is a large improvement by going from exponential to polynomial scaling with $m$. We implemented our heuristic algorithm on unitaries of up to 4 qubits and obtained a significant improvement in time as well as T-count.

15 citations


Proceedings ArticleDOI
09 Apr 2022
TL;DR: This paper considers the problem of min-plus convolution between two integral sequences which are monotone and bounded by O(n), and achieves a running time upper bound of Õ(n^1.5).
Abstract: In this paper, we show that the time complexity of the monotone min-plus product of two n × n matrices is Õ(n^{(3+ω)/2}) = Õ(n^{2.687}), where ω < 2.373 is the fast matrix multiplication exponent [Alman and Vassilevska Williams 2021]. That is, when A is an arbitrary integer matrix and B is either row-monotone or column-monotone with integer elements bounded by O(n), computing the min-plus product C, where C_{i,j} = min_k {A_{i,k} + B_{k,j}}, takes Õ(n^{(3+ω)/2}) time, which greatly improves the previous time bound of Õ(n^{(12+ω)/5}) = Õ(n^{2.875}) [Gu, Polak, Vassilevska Williams and Xu 2021]. By simple reductions, the case where A is arbitrary and the columns or rows of B are bounded-difference can then also be solved in Õ(n^{(3+ω)/2}) time, improving the previous time complexity of Õ(n^{2.922}) [Bringmann, Grandoni, Saha and Vassilevska Williams 2016]. Consequently, the case where both A and B are bounded-difference also admits an Õ(n^{(3+ω)/2})-time algorithm, improving the previous time complexities of Õ(n^{2.824}) [Bringmann, Grandoni, Saha and Vassilevska Williams 2016] and Õ(n^{2.779}) [Chi, Duan and Xie 2022]. Many problems reduce to these, such as language edit distance, RNA-folding, and the scored parsing problem on BD grammars [Bringmann, Grandoni, Saha and Vassilevska Williams 2016]; thus, their complexities are all improved. Finally, we also consider the problem of min-plus convolution between two integral sequences which are monotone and bounded by O(n), and achieve a running time upper bound of Õ(n^{1.5}). Previously, this task required running time Õ(n^{(9+√177)/12}) = O(n^{1.859}) [Chan and Lewenstein 2015].
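
For reference, the entry-wise definition used above, C_{i,j} = min_k {A_{i,k} + B_{k,j}}, corresponds to the cubic-time baseline sketched below; the paper's Õ(n^{(3+ω)/2}) algorithm for the monotone case is a far more involved construction and is not shown.

```python
import numpy as np

def minplus_product(A, B):
    """Naive O(n^3) (min,+)-product: C[i,j] = min_k (A[i,k] + B[k,j]).
    This is the operation whose monotone special case the paper speeds up."""
    n = A.shape[0]
    C = np.full((n, n), np.inf)
    for i in range(n):
        for j in range(n):
            C[i, j] = np.min(A[i, :] + B[:, j])
    return C

A = np.array([[0, 3], [2, 1]])
B = np.array([[1, 4], [0, 2]])   # monotone structure is not required by this baseline
print(minplus_product(A, B))
```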

13 citations


Journal ArticleDOI
TL;DR: The aim is to find a feasible solution to the single-machine scheduling problem that minimizes the makespan plus the total resource consumption cost; the focus is on the design of pseudo-polynomial-time and approximation algorithms.

11 citations


Journal ArticleDOI
TL;DR: In this article , the authors present several results on the parameterized complexity of training two-layer ReLU networks with respect to various loss functions, and provide running time lower bounds in terms of W[1]-hardness for parameter $d$ and prove that known brute-force strategies are essentially optimal.
Abstract: Understanding the computational complexity of training simple neural networks with rectified linear units (ReLUs) has recently been a subject of intensive research. Closing gaps and complementing results from the literature, we present several results on the parameterized complexity of training two-layer ReLU networks with respect to various loss functions. After a brief discussion of other parameters, we focus on analyzing the influence of the dimension $d$ of the training data on the computational complexity. We provide running time lower bounds in terms of W[1]-hardness for parameter $d$ and prove that known brute-force strategies are essentially optimal (assuming the Exponential Time Hypothesis). In comparison with previous work, our results hold for a broad(er) range of loss functions, including $\ell^p$-loss for all $p\in[0,\infty]$. In particular, we extend a known polynomial-time algorithm for constant $d$ and convex loss functions to a more general class of loss functions, matching our running time lower bounds also in these cases.

9 citations


Journal ArticleDOI
TL;DR: In this article, a single-machine scheduling problem is studied in which each job's processing time is a bounded, linearly decreasing function of the amount of resource allocated to its processing operation, and there is a single, fixed machine-unavailability interval that begins at time T. A solution is given by partitioning the jobs into two sets, corresponding to the jobs to be processed before and after the unavailability period.

9 citations


Journal ArticleDOI
TL;DR: In this paper, it was shown that polynomials of degree less than n over a finite field with q elements can be multiplied in time O(n log q log(n log q)), uniformly in q.
Abstract: Assuming a widely believed hypothesis concerning the least prime in an arithmetic progression, we show that polynomials of degree less than \(n\) over a finite field \(\mathbb{F}_q\) with \(q\) elements can be multiplied in time \(O(n \log q \log (n \log q))\), uniformly in \(q\). Under the same hypothesis, we show how to multiply two \(n\)-bit integers in time \(O(n \log n)\); this algorithm is somewhat simpler than the unconditional algorithm from the companion paper [22]. Our results hold in the Turing machine model with a finite number of tapes.
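
For contrast with the quasi-linear bound above, the trivial O(n²) schoolbook multiplication over F_q looks as follows (q is assumed prime here for simplicity); the paper's algorithm relies on entirely different techniques.

```python
def polymul_mod_q(f, g, q):
    """Schoolbook product of two polynomials over F_q (q prime assumed),
    with coefficient lists in increasing degree order. Runs in O(n^2)
    coefficient operations, the trivial baseline the quasi-linear
    algorithms improve on."""
    h = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            h[i + j] = (h[i + j] + a * b) % q
    return h

# (1 + 2x) * (3 + x + 4x^2) over F_5
print(polymul_mod_q([1, 2], [3, 1, 4], 5))   # -> [3, 2, 1, 3]
```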

8 citations


Proceedings ArticleDOI
27 May 2022
TL;DR: This paper tries to devise mathematical support for the impossibility of bringing sorting algorithms down to linear time by comparing three well-known approximations of n!, with detailed mathematical proofs associated with them.
Abstract: We have many sorting algorithms in practice in this modern informatics world. Most of them operate in quadratic time, and some have managed to get the running time down to logarithmic combined with linear (i.e., O(n log n)), but there is no sorting algorithm efficient enough to sort data in linear time alone. In this paper, we try to devise mathematical support for the impossibility of bringing sorting algorithms down to linear time by comparing three well-known approximations of n!, with detailed mathematical proofs associated with them. Further, this paper provides a detailed mathematical analysis supporting the fact that the minimum time complexity attainable by a sorting algorithm is of order O(n log n), without taking into account any extrapolated system advancement or computer architecture modification. There are various sorting algorithms, such as Merge sort, Heap sort, Interpolation sort, etc., which attain this minimum possible time complexity (worst case). But if we consider some biased cases, then better time complexity can be obtained. If the elements of the array are in arithmetic progression, then interpolation search takes less time, and there are many such biased cases where a particular algorithm gives better output; but this paper deals with the general case, i.e., no conditions are imposed on the elements of the array.
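
For orientation, the classical information-theoretic argument is the kind of bound that the factorial approximations above feed into; the sketch below is the standard comparison-sort argument, not the paper's own proofs.

```latex
% A decision tree that distinguishes all n! input orderings has at least
% n! leaves, hence depth at least \log_2(n!); Stirling-type estimates of n!
% turn this into the familiar n \log n bound:
\log_2(n!) \;=\; \sum_{k=1}^{n}\log_2 k \;\ge\; n\log_2 n - n\log_2 e \;=\; \Theta(n\log n),
\qquad \text{so any comparison sort needs } \Omega(n\log n) \text{ comparisons.}
```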

Book ChapterDOI
TL;DR: In this article, it was shown that determining the existence of a matching walk in a de Bruijn graph is NP-complete when substitutions are allowed to the graph; this contrasts with pattern-to-text matching, where approximate matching under substitutions is solvable in subquadratic Õ(n√m) time, with n the text length and m the pattern length.
Abstract: Aligning a sequence to a walk in a labeled graph is a problem of fundamental importance to Computational Biology. For finding a walk in an arbitrary graph with |E| edges that exactly matches a pattern of length m, a lower bound based on the Strong Exponential Time Hypothesis (SETH) implies an algorithm significantly faster than $$\mathcal {O}(|E|m)$$ time is unlikely [Equi et al., ICALP 2019]. However, for many special graphs, such as de Bruijn graphs, the problem can be solved in linear time [Bowe et al., WABI 2012]. For approximate matching, the picture is more complex. When edits (substitutions, insertions, and deletions) are only allowed to the pattern, or when the graph is acyclic, the problem is again solvable in $$\mathcal {O}(|E|m)$$ time. When edits are allowed to arbitrary cyclic graphs, the problem becomes NP-complete, even on binary alphabets [Jain et al., RECOMB 2019]. These results hold even when edits are restricted to only substitutions. Despite the popularity of de Bruijn graphs in Computational Biology, the complexity of approximate pattern matching on de Bruijn graphs remained open. We investigate this problem and show that the properties that make de Bruijn graphs amenable to efficient exact pattern matching do not extend to approximate matching, even when restricted to the substitutions only case with alphabet size four. Specifically, we prove that determining the existence of a matching walk in a de Bruijn graph is NP-complete when substitutions are allowed to the graph. In addition, we demonstrate that an algorithm significantly faster than $$\mathcal {O}(|E|m)$$ is unlikely for de Bruijn graphs in the case where only substitutions are allowed to the pattern. This stands in contrast to pattern-to-text matching where exact matching is solvable in linear time, like on de Bruijn graphs, but approximate matching under substitutions is solvable in subquadratic $$\tilde{O}(n\sqrt{m})$$ time, where n is the text’s length [Abrahamson, SIAM J. Computing 1987].
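
To contrast with the hardness results above, exact matching on a node-centric de Bruijn graph is straightforward: if the graph is the set of k-mers of a text, with an edge between any two stored k-mers that overlap in k−1 characters, then a pattern spells a walk exactly when all of its k-mers are present. The sketch below assumes that node-centric definition; the names are illustrative.

```python
def debruijn_nodes(text, k):
    """Node-centric de Bruijn graph: just the set of k-mers of the text."""
    return {text[i:i + k] for i in range(len(text) - k + 1)}

def exact_match_walk(pattern, nodes, k):
    """A pattern (length >= k) spells a walk iff each of its k-mers is a node:
    consecutive k-mers of the pattern overlap in k-1 symbols by construction,
    so the required edges exist in the node-centric graph. This linear-time
    check is the 'easy' case that does NOT survive once substitutions are
    allowed in the graph."""
    return all(pattern[i:i + k] in nodes for i in range(len(pattern) - k + 1))

nodes = debruijn_nodes("ACGTACGA", k=3)
print(exact_match_walk("GTACG", nodes, k=3))   # True
print(exact_match_walk("GTTCG", nodes, k=3))   # False
```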

Book ChapterDOI
TL;DR: In this paper, the authors improve existing differential-linear attacks on ChaCha, reducing the time complexity for the 256-bit version from 2^230.86 (Beierle et al., CRYPTO 2020) to 2^221.95, and give the first-ever attack on 6.5-round ChaCha128, with time complexity 2^123.04.
Abstract: In this paper, we provide several improvements over the existing differential-linear attacks on ChaCha. ChaCha is a stream cipher with 20 rounds. At CRYPTO 2020, Beierle et al. observed a differential in the 3.5-th round if the right pairs are chosen. They produced an improved attack using this, but showed that achieving a right pair requires $$2^5$$ iterations on average. In this direction, we provide a technique to find the right pairs with the help of listing. We also provide a strategic improvement in PNB construction, a modification of the complexity calculation, and an alternative attack method using two input-output pairs. Using these, we improve the time complexity, reducing it to $$2^{221.95}$$ from the $$2^{230.86}$$ reported by Beierle et al. for the 256-bit version of ChaCha. Also, after a decade, we improve the existing complexity (Shi et al. ICISC 2012) for the 6-round 128-bit version of ChaCha by more than 11 million times and produce the first-ever attack on 6.5-round ChaCha128 with time complexity $$2^{123.04}.$$
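
For readers unfamiliar with the cipher, the ChaCha quarter-round below is the public, well-known primitive that such differential-linear distinguishers target; it is shown only for context and is not part of the attack itself. The test vector is the quarter-round example from RFC 8439.

```python
MASK32 = 0xFFFFFFFF

def rotl32(x, n):
    return ((x << n) | (x >> (32 - n))) & MASK32

def quarter_round(a, b, c, d):
    """One ChaCha quarter-round on four 32-bit words (add, rotate, xor)."""
    a = (a + b) & MASK32; d = rotl32(d ^ a, 16)
    c = (c + d) & MASK32; b = rotl32(b ^ c, 12)
    a = (a + b) & MASK32; d = rotl32(d ^ a, 8)
    c = (c + d) & MASK32; b = rotl32(b ^ c, 7)
    return a, b, c, d

# RFC 8439 quarter-round test vector
print([hex(w) for w in quarter_round(0x11111111, 0x01020304, 0x9b8d6f43, 0x01234567)])
# expected: ['0xea2a92f4', '0xcb1cf8ce', '0x4581472e', '0x5881c4bb']
```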

Journal ArticleDOI
TL;DR: In this paper, the statistical and computational limits of high-order tensor clustering with planted structures were studied, and the authors identified the sharp boundaries of the signal-to-noise ratio for which CHC and ROHC detection/recovery are statistically possible.
Abstract: This paper studies the statistical and computational limits of high-order clustering with planted structures. We focus on two clustering models, constant high-order clustering (CHC) and rank-one higher-order clustering (ROHC), and study the methods and theory for testing whether a cluster exists (detection) and identifying the support of the cluster (recovery). Specifically, we identify the sharp boundaries of the signal-to-noise ratio for which CHC and ROHC detection/recovery are statistically possible. We also develop the tight computational thresholds: when the signal-to-noise ratio is below these thresholds, we prove that polynomial-time algorithms cannot solve these problems under the computational hardness conjectures of hypergraphic planted clique (HPC) detection and hypergraphic planted dense subgraph (HPDS) recovery. We also propose polynomial-time tensor algorithms that achieve reliable detection and recovery when the signal-to-noise ratio is above these thresholds. Both sparsity and tensor structures yield computational barriers in high-order tensor clustering. The interplay between them results in significant differences between high-order tensor clustering and matrix clustering in the literature, in terms of statistical and computational phase transition diagrams, algorithmic approaches, hardness conjectures, and proof techniques. To the best of our knowledge, we are the first to give a thorough characterization of the statistical and computational trade-off for such a double computational-barrier problem. Finally, we provide evidence for the computational hardness conjectures of HPC detection (via low-degree polynomial and Metropolis methods) and HPDS recovery (via the low-degree polynomial method).

Book ChapterDOI
TL;DR: In this paper , the authors proposed a simple algorithm that, guided by an optimal solution to the cut LP, first selects a DFS tree and then finds a solution to MAP by computing an optimum augmentation of this tree.
Abstract: The Matching Augmentation Problem (MAP) has recently received significant attention as an important step towards better approximation algorithms for finding cheap 2-edge connected subgraphs. This has culminated in a \(\frac{5}{3}\)-approximation algorithm. However, the algorithm and its analysis are fairly involved and do not compare against the problem’s well-known LP relaxation called the cut LP. In this paper, we propose a simple algorithm that, guided by an optimal solution to the cut LP, first selects a DFS tree and then finds a solution to MAP by computing an optimum augmentation of this tree. Using properties of extreme point solutions, we show that our algorithm always returns (in polynomial time) a better than 2-approximation when compared to the cut LP. We thereby also obtain an improved upper bound on the integrality gap of this natural relaxation.

Journal ArticleDOI
TL;DR: In this article, a mixed-integer linear model is formulated and a metaheuristic algorithm based on variable neighborhood search (VNS) is developed to solve the time-dependent orienteering problem with time windows and service-time-dependent profits.

Journal ArticleDOI
TL;DR: In this paper , the authors propose an acentralized planner with low computational cost to schedule the motions of multiple AGVs at the intersection of pre-defined polynomial curves.
Abstract: In this letter, we introduce acentralized planner with low computational cost to schedule the motions of multiple Automated Guided Vehicles (AGVs) at the intersection of pre-defined polynomial curves. In particular, we find that the collision conditions between two AGVs along polynomial paths can be formulated as a set of polynomial inequalities. Furthermore, by solving these inequalities, the continuous boundaries of potential collision areas for vehicles can be determined offline and stored in a table. During the online phase, by also taking into account the priority setup among AGVs, the planner will use efficient table lookup to determine a list of intermediate goals for each AGV to move toward based on their real-time position feedback. In this way, the multiple robots are able to navigate on polynomial guide paths safely and efficiently at the crossroad, and the performance of our approach is demonstrated in a number of simulated scenarios. Videos of the experiments are available at https://sites.google.com/view/zqzhang/agvsplanner .

Proceedings ArticleDOI
01 Feb 2022
TL;DR: In this paper , the authors gave a polynomial-time constant-factor approximation algorithm for maximum independent set for (axis-aligned) rectangles in the plane, based on a new form of recursive partitioning, in which faces that are constant-complexity and orthogonally convex are recursively partitioned into a constant number of such faces.
Abstract: We give a polynomial-time constant-factor approximation algorithm for maximum independent set for (axis-aligned) rectangles in the plane. Using a polynomial-time algorithm, the best approximation factor previously known is $O(\log\log n)$ . The results are based on a new form of recursive partitioning in the plane, in which faces that are constant-complexity and orthogonally convex are recursively partitioned into a constant number of such faces.
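
As a tiny reference for the problem statement above (maximum independent set of axis-aligned rectangles), the brute-force sketch below only pins down what "independent" means here; the paper's constant-factor approximation is of course an entirely different, polynomial-time construction.

```python
from itertools import combinations

def overlaps(r, s):
    """Axis-aligned open rectangles (x1, y1, x2, y2) intersect iff their
    projections overlap on both axes."""
    return r[0] < s[2] and s[0] < r[2] and r[1] < s[3] and s[1] < r[3]

def max_independent_rectangles(rects):
    """Exponential brute force: a largest subset of pairwise-disjoint rectangles."""
    for k in range(len(rects), 0, -1):
        for subset in combinations(rects, k):
            if all(not overlaps(a, b) for a, b in combinations(subset, 2)):
                return list(subset)
    return []

rects = [(0, 0, 2, 2), (1, 1, 3, 3), (2, 2, 4, 4), (3, 0, 5, 2)]
print(max_independent_rectangles(rects))   # three pairwise-disjoint rectangles
```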

Journal ArticleDOI
TL;DR: Wang et al. as discussed by the authors propose a real-time semantic segmentation network (LSNet) that strikes a balance between segmentation accuracy and inference speed; however, the network requires a large number of inputs and outputs.

Journal ArticleDOI
TL;DR: Wang et al. as mentioned in this paper proposed an optimal and computationally efficient cooperative driving strategy with polynomial-time complexity: by modeling the conflict relations among the vehicles, the solution space of the cooperative driving problem is completely represented by a newly designed small-size state space.
Abstract: Cooperative driving at signal-free intersections, which aims to improve driving safety and efficiency for connected and automated vehicles, has attracted increasing interest in recent years. However, existing cooperative driving strategies either suffer from high computational complexity or cannot guarantee global optimality. To fill this research gap, this paper proposes an optimal and computationally efficient cooperative driving strategy with polynomial-time complexity. By modeling the conflict relations among the vehicles, the solution space of the cooperative driving problem is completely represented by a newly designed small-size state space. Then, based on dynamic programming, the globally optimal solution can be searched inside the state space efficiently. It is proved that the proposed strategy can reduce the time complexity of computation from exponential to a small-degree polynomial. Simulation results further demonstrate that the proposed strategy can obtain the globally optimal solution within a limited computation time under various traffic demand settings.

Journal ArticleDOI
TL;DR: In this paper , an end-to-end deep reinforcement learning framework is proposed to solve this type of combinatorial optimization problems, which can be applied to different problems with only slight changes of input, masks, and decoder context vectors.

Proceedings ArticleDOI
01 Oct 2022
TL;DR: In this paper, the authors studied the problem of learning a Hamiltonian H to precision ε from copies of its Gibbs state at a known inverse temperature β, giving an algorithm with sample complexity O(log N/(βε)^2) and time complexity linear in the sample size.
Abstract: We study the problem of learning a Hamiltonian H to precision $\varepsilon$, supposing we are given copies of its Gibbs state $\rho =\exp(-\beta H)/\mathrm{Tr}(\exp(-\beta H))$ at a known inverse temperature $\beta$. Anshu, Arunachalam, Kuwahara, and Soleimanifar [AAKS21] recently studied the sample complexity (number of copies of $\rho$ needed) of this problem for geometrically local N-qubit Hamiltonians. In the high-temperature (low $\beta$) regime, their algorithm has sample complexity poly (N, 1/$\beta$, 1/$\varepsilon$) and can be implemented with polynomial, but suboptimal, time complexity. In this paper, we study the same question for a more general class of Hamiltonians. We show how to learn the coefficients of a Hamiltonian to error $\varepsilon$ with sample complexity $S=O(\log N/(\beta\varepsilon)^{2}$) and time complexity linear in the sample size, O(SN). Furthermore, we prove a matching lower bound showing that our algorithm’s sample complexity is optimal, and hence our time complexity is also optimal. In the appendix, we show that virtually the same algorithm can be used to learn H from a real-time evolution unitary $e^{-i t H}$ in a small t regime with similar sample and time complexity.
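
As a quick illustration of the object being learned, the sketch below just builds the Gibbs state ρ = exp(−βH)/Tr(exp(−βH)) defined above for a small random Hamiltonian; it is unrelated to the paper's learning algorithm, and the matrix size and parameters are arbitrary choices.

```python
import numpy as np
from scipy.linalg import expm

def gibbs_state(H, beta):
    """rho = exp(-beta H) / Tr(exp(-beta H)) for a Hermitian matrix H."""
    M = expm(-beta * H)
    return M / np.trace(M)

rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (A + A.conj().T) / 2                 # random Hermitian 2-qubit Hamiltonian
rho = gibbs_state(H, beta=0.5)
print(np.trace(rho).real)                          # ~1.0: unit trace
print(np.all(np.linalg.eigvalsh(rho) >= -1e-12))   # positive semidefinite
```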

Journal ArticleDOI
TL;DR: A new refinement of the Tsaknakis-Spirakis algorithm is proposed, resulting in a polynomial-time algorithm that computes a (1/3 + δ)-Nash equilibrium, for any constant δ > 0.
Abstract: Since the celebrated PPAD-completeness result for Nash equilibria in bimatrix games, a long line of research has focused on polynomial-time algorithms that compute ε-approximate Nash equilibria. Finding the best possible approximation guarantee that we can have in polynomial time has been a fundamental and non-trivial pursuit towards settling the complexity of approximate equilibria. Despite a significant amount of effort, the algorithm of Tsaknakis and Spirakis [38], with an approximation guarantee of (0.3393 + δ), remains the state of the art over the last 15 years. In this paper, we propose a new refinement of the Tsaknakis-Spirakis algorithm, resulting in a polynomial-time algorithm that computes a \((\frac{1}{3}+\delta)\)-Nash equilibrium, for any constant δ > 0. The main idea of our approach is to go beyond the use of convex combinations of primal and dual strategies, as defined in the optimization framework of [38], and enrich the pool of strategies from which we build the strategy profiles that we output in certain bottleneck cases of the algorithm.

Journal ArticleDOI
TL;DR: In this paper , the authors defined a new coverage problem in battery-free wireless sensor network (BF-WSN) which aims at maximizing coverage quality rather than prolonging network lifetime, and proposed three approximate algorithms to derive nearly optimal coverage when the sufficient conditions are unsatisfied.
Abstract: Battery-free wireless sensor network (BF-WSN) is a newly proposed network architecture to address the limitation of traditional wireless sensor networks (WSNs). The special features of BF-WSNs make the coverage problem quite different and even more challenging from and than that in traditional WSNs. This paper defines a new coverage problem in BF-WSNs which aims at maximizing coverage quality rather than prolonging network lifetime. The newly defined coverage problem is proved to be at least NP-Hard. Two sufficient conditions, under which the optimal solution of the problem can be derived in polynomial time, are given in this paper. Furthermore, three approximate algorithms are proposed to derive nearly optimal coverage when the sufficient conditions are unsatisfied. The time complexity and approximate ratio of the three algorithms are analyzed. Extensive simulations are carried out to examine the performance of the proposed algorithms. The simulation results show that these algorithms are efficient and effective.

Proceedings ArticleDOI
22 Jan 2022-Robotics
TL;DR: This work proposes the first low polynomial-time algorithm for MRPP achieving 1–1.5 asymptotic optimality guarantees on solution makespan for random instances under very high robot density, and develops effective, principled heuristics that further improve the computed optimality of the RTH algorithms.

Journal ArticleDOI
TL;DR: As mentioned in this paper, the best known running time for this problem is O(P · n), due to Lawler and Moore, where P is the total processing time of all n jobs in the input; two new algorithms are developed, each improving on this bound in a different scenario.
Abstract: This paper is concerned with the \(1|| \sum p_j U_j\) problem, the problem of minimizing the total processing time of tardy jobs on a single machine. This is not only a fundamental scheduling problem, but also an important problem from a theoretical point of view as it generalizes the Subset Sum problem and is closely related to the 0/1-Knapsack problem. The problem is well-known to be NP-hard, but only in a weak sense, meaning it admits pseudo-polynomial time algorithms. The best known running time follows from the famous Lawler and Moore algorithm that solves a more general weighted version in \(O(P \cdot n)\) time, where P is the total processing time of all n jobs in the input. That algorithm was developed in the late 1960s and has yet to be improved to date. In this paper, we develop two new algorithms for this problem, each improving on Lawler and Moore’s algorithm in a different scenario. Our first algorithm runs in \({\tilde{O}}(P^{7/4})\) time, and outperforms Lawler and Moore’s algorithm in instances where \(n={\tilde{\omega }}(P^{3/4})\). Our second algorithm runs in \({\tilde{O}}(\min \{P \cdot D_{\#}, P + D\})\) time, where \(D_{\#}\) is the number of different due dates in the instance, and D is the sum of all different due dates. This algorithm improves on Lawler and Moore’s algorithm when \(n={\tilde{\omega }}(D_{\#})\) or \(n={\tilde{\omega }}(D/P)\). Further, it extends the known \({\tilde{O}}(P)\) algorithm for the single due date special case of \(1||\sum p_jU_j\) in a natural way. Both algorithms rely on basic primitive operations between sets of integers and vectors of integers for the speedup in their running times. The second algorithm relies on fast polynomial multiplication as its main engine, and can be easily extended to the case of a fixed number of machines. For the first algorithm we define a new “skewed” version of \((\max ,\min )\)-Convolution which is interesting in its own right.
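
Below is a compact sketch of the classic O(P·n) dynamic program in the spirit of the Lawler and Moore algorithm referenced above, specialized to minimizing the total processing time of tardy jobs (i.e., weights w_j = p_j); this is the baseline the paper improves on, not its new Õ(P^{7/4}) or Õ(min{P·D_#, P+D}) algorithms.

```python
def min_tardy_processing_time(jobs):
    """jobs: list of (p_j, d_j). Returns the minimum total processing time of
    tardy jobs on a single machine (the 1 || sum p_j U_j objective), via the
    classic O(n * P) DP over the total processing time of on-time jobs."""
    jobs = sorted(jobs, key=lambda job: job[1])        # EDD order
    P = sum(p for p, _ in jobs)
    INF = float("inf")
    # dp[t] = min processing time of tardy jobs so far, given that the
    # on-time jobs scheduled so far occupy exactly t time units
    dp = [INF] * (P + 1)
    dp[0] = 0
    for p, d in jobs:
        new = [INF] * (P + 1)
        for t in range(P + 1):
            if dp[t] == INF:
                continue
            # option 1: job is tardy (pay p, on-time load unchanged)
            new[t] = min(new[t], dp[t] + p)
            # option 2: job is on time (only if it still meets its due date)
            if t + p <= d:
                new[t + p] = min(new[t + p], dp[t])
        dp = new
    return min(dp)

# toy instance: (processing time, due date)
print(min_tardy_processing_time([(2, 3), (3, 5), (2, 4), (4, 9)]))   # -> 2
```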

Journal ArticleDOI
TL;DR: In this paper, it was shown that the problem of deciding if an AT-free graph contains a fixed graph H as an induced topological minor admits a polynomial-time algorithm.

Journal ArticleDOI
TL;DR: In this paper, the authors show that deciding the feasibility of a PESP instance is NP-hard even when the treewidth is 2, the branchwidth is 2, or the carvingwidth is 3.
Abstract: Public transportation networks are typically operated with a periodic timetable. The periodic event scheduling problem (PESP) is the standard mathematical modeling tool for periodic timetabling. PESP is a computationally very challenging problem: For example, solving the instances of the benchmarking library PESPlib to optimality seems out of reach. Since PESP can be solved in linear time on trees, and the treewidth is a rather small graph parameter in the networks of the PESPlib, it is a natural question to ask whether there are polynomial-time algorithms for input networks of bounded treewidth, or even better, fixed-parameter tractable algorithms. We show that deciding the feasibility of a PESP instance is NP-hard even when the treewidth is 2, the branchwidth is 2, or the carvingwidth is 3. Analogous results hold for the optimization of reduced PESP instances, where the feasibility problem is trivial. Moreover, we show W[1]-hardness of the general feasibility problem with respect to treewidth, which means that we can most likely only accomplish pseudo-polynomial-time algorithms on input networks with bounded tree- or branchwidth. We present two such algorithms based on dynamic programming. We further analyze the parameterized complexity of PESP with bounded cyclomatic number, diameter, or vertex cover number. For event-activity networks with a special—but standard—structure, we give explicit and sharp bounds on the branchwidth in terms of the maximum degree and the carvingwidth of an underlying line network. Finally, we investigate several parameters on the smallest instance of the benchmarking library PESPlib.
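
For readers unfamiliar with PESP, the standard feasibility constraints from the periodic timetabling literature are shown below; this is the generic textbook formulation, not a construction specific to this paper.

```latex
% Periodic Event Scheduling Problem (PESP): given a period T, events V and
% activities a=(i,j) with bounds [\ell_a, u_a], find event times \pi and
% integer offsets p_a such that every activity is respected modulo T:
\ell_a \;\le\; \pi_j - \pi_i + T\,p_a \;\le\; u_a
\quad \text{for all activities } a=(i,j)\in A,
\qquad \pi_v \in [0,T) \text{ for all } v\in V,\; p_a \in \mathbb{Z}.
```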

Journal ArticleDOI
TL;DR: In this paper, the authors presented a modification of Zielonka's classic algorithm that brings its complexity down to $n^{O\left(\log\left(1+\frac{d}{\log n}\right)\right)}$ for parity games of size $n$ with $d$ priorities.
Abstract: Zielonka's classic recursive algorithm for solving parity games is perhaps the simplest among the many existing parity game algorithms. However, its complexity is exponential, while currently the state-of-the-art algorithms have quasipolynomial complexity. Here, we present a modification of Zielonka's classic algorithm that brings its complexity down to $n^{O\left(\log\left(1+\frac{d}{\log n}\right)\right)}$, for parity games of size $n$ with $d$ priorities, in line with previous quasipolynomial-time solutions.
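
Below is a compact sketch of Zielonka's classic recursive algorithm itself, the exponential baseline that the quasipolynomial modification above starts from. The max-parity winning convention (the player d mod 2 is favored by the top priority d), the adjacency-list layout, and the assumption that every vertex has a successor are choices of this sketch, not details taken from the paper.

```python
def attractor(V, E, owner, target, player):
    """Vertices in V from which `player` can force the play into `target`
    (edges restricted to the current subgame V)."""
    attr = set(target)
    changed = True
    while changed:
        changed = False
        for v in V - attr:
            succs = [w for w in E[v] if w in V]
            if (owner[v] == player and any(w in attr for w in succs)) or \
               (owner[v] != player and succs and all(w in attr for w in succs)):
                attr.add(v)
                changed = True
    return attr

def zielonka(V, E, owner, priority):
    """Returns (W0, W1): the winning regions of players 0 and 1.
    Assumes every vertex in V has a successor in V (total game graph)."""
    if not V:
        return set(), set()
    d = max(priority[v] for v in V)
    p = d % 2                                    # player favored by priority d
    U = {v for v in V if priority[v] == d}
    A = attractor(V, E, owner, U, p)
    W = list(zielonka(V - A, E, owner, priority))
    if not W[1 - p]:
        W[p] = set(V)                            # player p wins everywhere
        W[1 - p] = set()
        return W[0], W[1]
    B = attractor(V, E, owner, W[1 - p], 1 - p)
    W = list(zielonka(V - B, E, owner, priority))
    W[1 - p] |= B
    return W[0], W[1]

# toy game: vertex -> successors, owners, priorities
E = {0: [1], 1: [0, 2], 2: [1]}
owner = {0: 0, 1: 1, 2: 0}
priority = {0: 2, 1: 1, 2: 0}
print(zielonka({0, 1, 2}, E, owner, priority))   # player 1 wins every vertex here
```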

Journal ArticleDOI
TL;DR: In this paper , the authors study a distributionally robust parallel machines scheduling problem, minimizing the total flow time criterion, and show that the problem can be cast as a deterministic optimization problem, with the objective function composed of an expectation and a regularization term given as an ℓp norm.