
Showing papers in "Journal of the ACM in 2022"


Journal ArticleDOI
TL;DR: In this paper, it is shown (under a widely believed hypothesis on the least prime in an arithmetic progression) that polynomials of degree less than \( n \) over a finite field \( \mathbb{F}_q \) with \( q \) elements can be multiplied in time \( O(n \log q \log (n \log q)) \), uniformly in \( q \).
Abstract: Assuming a widely believed hypothesis concerning the least prime in an arithmetic progression, we show that polynomials of degree less than \( n \) over a finite field \( \mathbb {F}_q \) with \( q \) elements can be multiplied in time \( O (n \log q \log (n \log q)) \) , uniformly in \( q \) . Under the same hypothesis, we show how to multiply two \( n \) -bit integers in time \( O (n \log n) \) ; this algorithm is somewhat simpler than the unconditional algorithm from the companion paper [ 22 ]. Our results hold in the Turing machine model with a finite number of tapes.

8 citations
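
The result concerns fast multiplication of polynomials over finite fields. As a point of reference only (this is not the paper's algorithm, which additionally relies on the hypothesis about primes in arithmetic progressions), the sketch below multiplies two polynomials over a finite field in O(n log n) arithmetic operations using a number-theoretic transform; the prime 998244353 and primitive root 3 are standard illustrative choices.

```python
# Toy illustration (not the paper's algorithm): O(n log n) polynomial
# multiplication over F_p for the NTT-friendly prime
# p = 998244353 = 119 * 2^23 + 1, which has primitive root 3.

MOD = 998244353
ROOT = 3  # primitive root modulo MOD

def ntt(a, invert=False):
    """In-place iterative number-theoretic transform."""
    n = len(a)
    # bit-reversal permutation
    j = 0
    for i in range(1, n):
        bit = n >> 1
        while j & bit:
            j ^= bit
            bit >>= 1
        j |= bit
        if i < j:
            a[i], a[j] = a[j], a[i]
    length = 2
    while length <= n:
        w = pow(ROOT, (MOD - 1) // length, MOD)
        if invert:
            w = pow(w, MOD - 2, MOD)
        for start in range(0, n, length):
            wn = 1
            for k in range(start, start + length // 2):
                u, v = a[k], a[k + length // 2] * wn % MOD
                a[k], a[k + length // 2] = (u + v) % MOD, (u - v) % MOD
                wn = wn * w % MOD
        length <<= 1
    if invert:
        n_inv = pow(n, MOD - 2, MOD)
        for i in range(n):
            a[i] = a[i] * n_inv % MOD

def poly_mul(f, g):
    """Multiply two coefficient lists over F_p in O(n log n)."""
    size = 1
    while size < len(f) + len(g) - 1:
        size <<= 1
    fa, ga = f + [0] * (size - len(f)), g + [0] * (size - len(g))
    ntt(fa); ntt(ga)
    prod = [x * y % MOD for x, y in zip(fa, ga)]
    ntt(prod, invert=True)
    return prod[:len(f) + len(g) - 1]

print(poly_mul([1, 2, 3], [4, 5]))  # (1+2x+3x^2)(4+5x) = 4+13x+22x^2+15x^3
```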


Journal ArticleDOI
TL;DR: This work proves that H can be PAC learned by an (approximate) differentially private algorithm if and only if it has a finite Littlestone dimension, implying a qualitative equivalence between online learnability and private PAC learnability.
Abstract: Let H be a binary-labeled concept class. We prove that H can be PAC learned by an (approximate) differentially private algorithm if and only if it has a finite Littlestone dimension. This implies a qualitative equivalence between online learnability and private PAC learnability.

8 citations
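
The characterization is in terms of the Littlestone dimension, the combinatorial parameter that governs online learnability. For a finite concept class given explicitly, it can be computed by brute force directly from its recursive definition; the sketch below is an illustrative helper for small examples only (the computation is exponential in general), and the threshold class in the demo is an arbitrary choice.

```python
# Brute-force Littlestone dimension of a finite concept class H over a
# finite domain. Each hypothesis is a tuple of labels, one per domain
# point. Ldim(H) >= d+1 iff some point x splits H into two nonempty
# parts (label 0 at x / label 1 at x), each of Littlestone dimension >= d.

from functools import lru_cache

def littlestone_dimension(hypotheses):
    domain_size = len(next(iter(hypotheses)))

    @lru_cache(maxsize=None)
    def ldim(h_frozen):
        hs = list(h_frozen)
        if len(set(hs)) <= 1:          # at most one distinct behaviour
            return 0
        best = 0
        for x in range(domain_size):
            h0 = frozenset(h for h in hs if h[x] == 0)
            h1 = frozenset(h for h in hs if h[x] == 1)
            if h0 and h1:
                best = max(best, 1 + min(ldim(h0), ldim(h1)))
        return best

    return ldim(frozenset(hypotheses))

# Thresholds over a domain of size 4: h_t(x) = 1 iff x >= t.
thresholds = {tuple(1 if x >= t else 0 for x in range(4)) for t in range(5)}
print(littlestone_dimension(thresholds))  # 2
```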


Journal ArticleDOI
TL;DR: In this article, the notion of twin-width on graphs and on matrices is introduced, inspired by a width invariant defined on permutations by Guillemot and Marx [SODA'14].
Abstract: Inspired by a width invariant defined on permutations by Guillemot and Marx [SODA’14], we introduce the notion of twin-width on graphs and on matrices. Proper minor-closed classes, bounded rank-wid...

8 citations


Journal ArticleDOI
TL;DR: In this article, the authors introduce a combinatorial interpretation of string diagram rewriting modulo Frobenius structures in terms of double-pushout hypergraph rewriting, and show how to derive from these results a termination strategy for Interacting Bialgebras, an important rewrite theory in the study of quantum circuits and signal flow graphs.
Abstract: String diagrams are a powerful and intuitive graphical syntax, originating in theoretical physics and later formalised in the context of symmetric monoidal categories. In recent years, they have found application in the modelling of various computational structures, in fields as diverse as Computer Science, Physics, Control Theory, Linguistics, and Biology. In several of these proposals, transformations of systems are modelled as rewrite rules of diagrams. These developments require a mathematical foundation for string diagram rewriting: whereas rewrite theory for terms is well-understood, the two-dimensional nature of string diagrams poses quite a few additional challenges. This work systematises and expands a series of recent conference papers, laying down such a foundation. As a first step, we focus on the case of rewrite systems for string diagrammatic theories that feature a Frobenius algebra. This common structure provides a more permissive notion of composition than the usual one available in monoidal categories, and has found many applications in areas such as concurrency, quantum theory, and electrical circuits. Notably, this structure provides an exact correspondence between the syntactic notion of string diagrams modulo Frobenius structure and the combinatorial structure of hypergraphs. Our work introduces a combinatorial interpretation of string diagram rewriting modulo Frobenius structures in terms of double-pushout hypergraph rewriting. We prove this interpretation to be sound and complete and we also show that the approach can be generalised to rewriting modulo multiple Frobenius structures. As a proof of concept, we show how to derive from these results a termination strategy for Interacting Bialgebras, an important rewrite theory in the study of quantum circuits and signal flow graphs.

7 citations


Journal ArticleDOI
TL;DR: A polynomial-time algorithm for atomic embeddability testing, a common generalization of clustered planarity and thickenability testing, is presented in this paper, yielding the first polynomial-time algorithm for c-planarity.
Abstract: We study the atomic embeddability testing problem, which is a common generalization of clustered planarity ( c-planarity , for short) and thickenability testing, and present a polynomial-time algorithm for this problem, thereby giving the first polynomial-time algorithm for c-planarity. C-planarity was introduced in 1995 by Feng, Cohen, and Eades as a variant of graph planarity, in which the vertex set of the input graph is endowed with a hierarchical clustering and we seek an embedding (crossing free drawing) of the graph in the plane that respects the clustering in a certain natural sense. Until now, it has been an open problem whether c-planarity can be tested efficiently. The thickenability problem for simplicial complexes emerged in the topology of manifolds in the 1960s. A 2-dimensional simplicial complex is thickenable if it embeds in some orientable 3-dimensional manifold. Recently, Carmesin announced that thickenability can be tested in polynomial time. Our algorithm for atomic embeddability combines ideas from Carmesin’s work with algorithmic tools previously developed for weak embeddability testing. We express our results purely in terms of graphs on surfaces, and rely on the machinery of topological graph theory. Finally, we give a polynomial-time reduction from atomic embeddability to thickenability thereby showing that both problems are polynomially equivalent, and show that a slight generalization of atomic embeddability to the setting in which clusters are toroidal graphs is NP-complete.

5 citations


Journal ArticleDOI
TL;DR: For first-order queries of higher arities, it is shown that over any nowhere dense class of databases, the set of their solutions can be enumerated with constant delay after a pseudo-linear time preprocessing.
Abstract: We consider the evaluation of first-order queries over classes of databases that are nowhere dense. The notion of nowhere dense classes was introduced by Nešetřil and Ossona de Mendez as a formalization of classes of “sparse” graphs and generalizes many well-known classes of graphs, such as classes of bounded degree, bounded tree-width, or bounded expansion. It has recently been shown by Grohe, Kreutzer, and Siebertz that over nowhere dense classes of databases, first-order sentences can be evaluated in pseudo-linear time (pseudo-linear time means that for all \( \epsilon \) there exists an algorithm working in time \( O(n^{1+\epsilon }) \) , where \( n \) is the size of the database). For first-order queries of higher arities, we show that over any nowhere dense class of databases, the set of their solutions can be enumerated with constant delay after a pseudo-linear time preprocessing. In the same context, we also show that after a pseudo-linear time preprocessing we can, on input of a tuple, test in constant time whether it is a solution to the query.

4 citations


Journal ArticleDOI
TL;DR: Certain hardness magnification theorems are shown not only to imply strong worst-case circuit lower bounds but also to rule out the existence of natural proofs, and hence of efficient learning algorithms.
Abstract: Hardness magnification reduces major complexity separations (such as EXP ⊈ NC 1 ) to proving lower bounds for some natural problem Q against weak circuit models. Several recent works [ 11 , 13 , 14 , 40 , 42 , 43 , 46 ] have established results of this form. In the most intriguing cases, the required lower bound is known for problems that appear to be significantly easier than Q , while Q itself is susceptible to lower bounds, but these are not yet sufficient for magnification. In this work, we provide more examples of this phenomenon and investigate the prospects of proving new lower bounds using this approach. In particular, we consider the following essential questions associated with the hardness magnification program: – Does hardness magnification avoid the natural proofs barrier of Razborov and Rudich [ 51 ] ? – Can we adapt known lower-bound techniques to establish the desired lower bound for Q ? We establish that some instantiations of hardness magnification overcome the natural proofs barrier in the following sense: slightly superlinear-size circuit lower bounds for certain versions of the minimum circuit-size problem imply the non-existence of natural proofs. As the non-existence of natural proofs implies the non-existence of efficient learning algorithms, we show that certain magnification theorems not only imply strong worst-case circuit lower bounds but also rule out the existence of efficient learning algorithms. Hardness magnification might sidestep natural proofs, but we identify a source of difficulty when trying to adapt existing lower-bound techniques to prove strong lower bounds via magnification. This is captured by a locality barrier : existing magnification theorems unconditionally show that the problems Q considered above admit highly efficient circuits extended with small fan-in oracle gates, while lower-bound techniques against weak circuit models quite often easily extend to circuits containing such oracles. This explains why direct adaptations of certain lower bounds are unlikely to yield strong complexity separations via hardness magnification.

2 citations


Journal ArticleDOI
TL;DR: In this article, a connection between the adversarial robustness of streaming algorithms and the notion of differential privacy is established, which makes it possible to design new adversarially robust streaming algorithms that outperform the current state-of-the-art constructions for many interesting regimes of parameters.
Abstract: A streaming algorithm is said to be adversarially robust if its accuracy guarantees are maintained even when the data stream is chosen maliciously, by an adaptive adversary . We establish a connection between adversarial robustness of streaming algorithms and the notion of differential privacy . This connection allows us to design new adversarially robust streaming algorithms that outperform the current state-of-the-art constructions for many interesting regimes of parameters.

2 citations


Journal ArticleDOI
TL;DR: In this article, the authors consider the problem of counting the number of copies of a fixed graph H within an input graph G of bounded degeneracy, calling H easy if this can be done in linear time, and they give a complete characterization of the easy graphs H.
Abstract: We consider the problem of counting the number of copies of a fixed graph H within an input graph G . This is one of the most well-studied algorithmic graph problems, with many theoretical and practical applications. We focus on solving this problem when the input G has bounded degeneracy . This is a rich family of graphs, containing all graphs without a fixed minor (e.g., planar graphs), as well as graphs generated by various random processes (e.g., preferential attachment graphs). We say that H is easy if there is a linear-time algorithm for counting the number of copies of H in an input G of bounded degeneracy. A seminal result of Chiba and Nishizeki from ’85 states that every H on at most 4 vertices is easy. Bera, Pashanasangi, and Seshadhri recently extended this to all H on 5 vertices and further proved that for every \( k \gt 5 \) there is a k -vertex H which is not easy. They left open the natural problem of characterizing all easy graphs H . Bressan has recently introduced a framework for counting subgraphs in degenerate graphs, from which one can extract a sufficient condition for a graph H to be easy. Here, we show that this sufficient condition is also necessary, thus fully answering the Bera–Pashanasangi–Seshadhri problem. We further resolve two closely related problems; namely characterizing the graphs that are easy with respect to counting induced copies, and with respect to counting homomorphisms.

2 citations
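
The simplest illustration of an "easy" pattern is H = K3 (the triangle), already covered by Chiba and Nishizeki's result for patterns on at most 4 vertices. The sketch below counts triangles in a bounded-degeneracy graph by orienting each edge towards the later vertex in a degeneracy ordering, so the total work is O(m·d); it is an illustrative baseline under that interpretation, not the characterization developed in the paper.

```python
# Count triangles (H = K3) in a graph of bounded degeneracy.
# Each vertex keeps out-neighbours only towards vertices that come later
# in a degeneracy ordering, so every out-degree is at most the degeneracy d,
# and the total work is O(m * d).

from collections import defaultdict
import heapq

def degeneracy_ordering(adj):
    """Repeatedly remove a minimum-degree vertex (lazy-deletion heap)."""
    degree = {v: len(nbrs) for v, nbrs in adj.items()}
    heap = [(deg, v) for v, deg in degree.items()]
    heapq.heapify(heap)
    removed, order = set(), []
    while heap:
        deg, v = heapq.heappop(heap)
        if v in removed or deg != degree[v]:
            continue                      # stale heap entry
        removed.add(v)
        order.append(v)
        for u in adj[v]:
            if u not in removed:
                degree[u] -= 1
                heapq.heappush(heap, (degree[u], u))
    return order

def count_triangles(edges):
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    position = {v: i for i, v in enumerate(degeneracy_ordering(adj))}
    # orient every edge towards the later vertex in the ordering
    out = {v: {u for u in adj[v] if position[u] > position[v]} for v in adj}
    triangles = 0
    for v in adj:
        for u in out[v]:
            triangles += len(out[v] & out[u])
    return triangles

# K4 contains 4 triangles.
k4_edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
print(count_triangles(k4_edges))  # 4
```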


Journal ArticleDOI
TL;DR: In this paper, a 1-round delegation scheme is constructed for every language computable in time t = t(n), where the running time of the prover is poly(t) and the running time of the verifier is poly(n).
Abstract: We construct a 1-round delegation scheme (i.e., argument-system) for every language computable in time t = t(n), where the running time of the prover is poly(t) and the running time of the verifier...

1 citation


Journal ArticleDOI
TL;DR: This work develops several generic tools allowing one to efficiently transform a non-robust streaming algorithm into a robust one in various scenarios, and develops adversarially robust (1+ε)-approximation algorithms whose required space matches that of the best known non-robust algorithms.
Abstract: We investigate the adversarial robustness of streaming algorithms. In this context, an algorithm is considered robust if its performance guarantees hold even if the stream is chosen adaptively by an adversary that observes the outputs of the algorithm along the stream and can react in an online manner. While deterministic streaming algorithms are inherently robust, many central problems in the streaming literature do not admit sublinear-space deterministic algorithms; on the other hand, classical space-efficient randomized algorithms for these problems are generally not adversarially robust. This raises the natural question of whether there exist efficient adversarially robust (randomized) streaming algorithms for these problems. In this work, we show that the answer is positive for various important streaming problems in the insertion-only model, including distinct elements and more generally Fp-estimation, Fp-heavy hitters, entropy estimation, and others. For all of these problems, we develop adversarially robust (1+ε)-approximation algorithms whose required space matches that of the best known non-robust algorithms up to a poly(log n, 1/ε) multiplicative factor (and in some cases even up to a constant factor). Towards this end, we develop several generic tools allowing one to efficiently transform a non-robust streaming algorithm into a robust one in various scenarios.
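
One generic transformation of this kind (often referred to as sketch switching) runs several independent copies of a non-robust streaming algorithm and only reveals a new output when the current estimate has drifted by roughly a (1+ε) factor, switching to a fresh copy at that point so the adversary never adapts to the internal state of the copy currently in use. The sketch below is a schematic toy version under that interpretation: the class names are hypothetical, an exact counter stands in for the randomized estimator purely for illustration, and the framework in the paper quantifies how many copies are needed in terms of how often the answer can change by a (1+ε) factor.

```python
# Toy "sketch switching" wrapper (schematic, not the paper's exact
# construction): run k independent copies of a non-robust estimator,
# expose the active copy's answer only when it moves by a (1 + eps)
# factor, then retire that copy and switch to the next one.

import random

class ExactDistinctCount:
    """Stand-in for a (randomized, non-robust) streaming estimator."""
    def __init__(self):
        self.seen = set()
    def process(self, item):
        self.seen.add(item)
    def estimate(self):
        return len(self.seen)

class SketchSwitchingWrapper:
    def __init__(self, make_estimator, eps, num_copies):
        self.copies = [make_estimator() for _ in range(num_copies)]
        self.eps = eps
        self.active = 0            # index of the copy whose output we reveal
        self.last_output = 0.0
    def process(self, item):
        for copy in self.copies:   # every copy sees the whole stream
            copy.process(item)
        current = self.copies[self.active].estimate()
        # only change the revealed answer when it drifts by a (1+eps) factor
        if current > (1 + self.eps) * self.last_output:
            self.last_output = current
            if self.active + 1 < len(self.copies):
                self.active += 1   # switch: adversary never adapts to the
                                   # internal state of the copy now in use
        return self.last_output

robust = SketchSwitchingWrapper(ExactDistinctCount, eps=0.5, num_copies=20)
stream = [random.randint(0, 99) for _ in range(1000)]
for x in stream:
    answer = robust.process(x)
print("robust distinct-elements estimate:", answer)
```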

Journal ArticleDOI
TL;DR: This work develops classical algorithms for SVT that run in time independent of input dimension, under suitable quantum-inspired sampling assumptions, and improves the dequantization results on recommendation systems, principal component analysis, supervised clustering, support vector machines, low-rank regression, and semidefinite program solving.
Abstract: We present an algorithmic framework for quantum-inspired classical algorithms on close-to-low-rank matrices, generalizing the series of results started by Tang’s breakthrough quantum-inspired algorithm for recommendation systems [STOC’19]. Motivated by quantum linear algebra algorithms and the quantum singular value transformation (SVT) framework of Gilyén et al. [STOC’19], we develop classical algorithms for SVT that run in time independent of input dimension, under suitable quantum-inspired sampling assumptions. Our results give compelling evidence that in the corresponding QRAM data structure input model, quantum SVT does not yield exponential quantum speedups. Since the quantum SVT framework generalizes essentially all known techniques for quantum linear algebra, our results, combined with sampling lemmas from previous work, suffice to generalize all prior results about dequantizing quantum machine learning algorithms. In particular, our classical SVT framework recovers and often improves the dequantization results on recommendation systems, principal component analysis, supervised clustering, support vector machines, low-rank regression, and semidefinite program solving. We also give additional dequantization results on low-rank Hamiltonian simulation and discriminant analysis. Our improvements come from identifying the key feature of the quantum-inspired input model that is at the core of all prior quantum-inspired results: ℓ2-norm sampling can approximate matrix products in time independent of their dimension. We reduce all our main results to this fact, making our exposition concise, self-contained, and intuitive.

Journal ArticleDOI
TL;DR: In this article, it is shown that for every k ≥ 0 there is an LTL formula φ with alternation number k that cannot be verified at runtime by distributed monitors emitting verdicts from a set of cardinality smaller than k + 1.
Abstract: Runtime verification is a lightweight method for monitoring the formal specification of a system during its execution. It has recently been shown that a given state predicate can be monitored consistently by a set of crash-prone asynchronous distributed monitors observing the system, only if each monitor can emit verdicts taken from a large enough finite set. We revisit this impossibility result in the concrete context of linear-time logic ( ltl ) semantics for runtime verification, that is, when the correctness of the system is specified by an ltl formula on its execution traces. First, we show that monitors synthesized based on the 4-valued semantics of ltl ( rv-ltl ) may result in inconsistent distributed monitoring, even for some simple ltl formulas. More generally, given any ltl formula φ, we relate the number of different verdicts required by the monitors for consistently monitoring φ, with a specific structural characteristic of φ called its alternation number . Specifically, we show that, for every k ≥ 0 , there is an ltl formula φ with alternation number k that cannot be verified at runtime by distributed monitors emitting verdicts from a set of cardinality smaller than k + 1. On the positive side, we define a family of logics, called distributed ltl (abbreviated as dltl ), parameterized by k ≥ 0, which refines rv-ltl by incorporating 2k + 4 truth values. Our main contribution is to show that, for every k ≥ 0, every ltl formula φ with alternation number k can be consistently monitored by distributed monitors, each running an automaton based on a (2 ⌈ k /2 ⌉ +4)-valued logic taken from the dltl family.

Journal ArticleDOI
TL;DR: In the k-cut problem, the goal is to find the lowest-weight set of edges whose deletion breaks a given (multi)graph into k connected components, a problem that can be solved by the randomized contraction algorithms of Karger and Stein.
Abstract: In the k-cut problem, we want to find the lowest-weight set of edges whose deletion breaks a given (multi)graph into k connected components. Algorithms of Karger and Stein can solve this in roughly...
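
As background, the sketch below shows the random-contraction idea in its simplest form: repeatedly contract a uniformly random edge until only k super-vertices remain, take the edges left between them as a candidate k-cut, and repeat many times keeping the best. This is a toy Monte Carlo illustration of the Karger-style contraction approach, not the recursive Karger–Stein algorithm or the improved bounds of the paper; the number of trials is an arbitrary illustrative choice.

```python
# Toy random-contraction heuristic for k-cut (illustrative only).
# Contract random edges until k super-vertices remain; the edges still
# crossing between super-vertices form a candidate k-cut.

import random

def one_contraction_run(n, edges, k):
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    components = n
    remaining = list(edges)
    while components > k:
        u, v = random.choice(remaining)
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            components -= 1
        remaining = [(a, b) for (a, b) in remaining if find(a) != find(b)]
    # edges whose endpoints lie in different super-vertices form the cut
    return sum(1 for (a, b) in edges if find(a) != find(b))

def approx_min_k_cut(n, edges, k, trials=500):
    return min(one_contraction_run(n, edges, k) for _ in range(trials))

# Two triangles joined by a single edge: the minimum 2-cut has weight 1
# and the minimum 3-cut has weight 3.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
print(approx_min_k_cut(6, edges, k=2))  # typically 1
print(approx_min_k_cut(6, edges, k=3))  # typically 3
```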

Journal ArticleDOI
TL;DR: The existence of uncoupled no-regret learning dynamics that converge to correlated equilibria in normal-form games is a celebrated result in the theory of multi-agent systems; this article gives the first uncoupled no-regret dynamics that converge to extensive-form correlated equilibria (EFCE) in n-player general-sum extensive-form games with perfect recall.
Abstract: The existence of simple uncoupled no-regret learning dynamics that converge to correlated equilibria in normal-form games is a celebrated result in the theory of multi-agent systems. Specifically, it has been known for more than 20 years that when all players seek to minimize their internal regret in a repeated normal-form game, the empirical frequency of play converges to a normal-form correlated equilibrium. Extensive-form (that is, tree-form) games generalize normal-form games by modeling both sequential and simultaneous moves, as well as imperfect information. Because of the sequential nature and presence of private information in the game, correlation in extensive-form games possesses significantly different properties than in normal-form games, many of which are still open research directions. Extensive-form correlated equilibrium (EFCE) has been proposed as the natural extensive-form counterpart to the classical notion of correlated equilibrium in normal-form games. Compared to the latter, the constraints that define the set of EFCEs are significantly more complex, as the correlation device (a.k.a. mediator) must take into account the evolution of beliefs of each player as they make observations throughout the game. Due to that significant added complexity, the existence of uncoupled learning dynamics leading to an EFCE has remained a challenging open research question for a long time. In this article, we settle that question by giving the first uncoupled no-regret dynamics that converge to the set of EFCEs in n -player general-sum extensive-form games with perfect recall. We show that each iterate can be computed in time polynomial in the size of the game tree, and that, when all players play repeatedly according to our learning dynamics, the empirical frequency of play after T game repetitions is proven to be a \( O(1/\sqrt {T}) \) -approximate EFCE with high probability, and an EFCE almost surely in the limit.
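
The normal-form fact recalled at the start of the abstract can be seen concretely with Hart and Mas-Colell's regret matching, one classical uncoupled dynamic whose empirical frequency of joint play converges to the set of correlated equilibria. The sketch below runs it on a small two-player game; it illustrates only this normal-form baseline, not the extensive-form (EFCE) dynamics contributed by the paper, and the game matrix and constants are arbitrary illustrative choices.

```python
# Hart & Mas-Colell regret matching in a 2-player normal-form game.
# The empirical frequency of joint play converges to the set of
# correlated equilibria (the classical normal-form result).

import random
from collections import Counter

# "Chicken": action 0 = Dare, 1 = Swerve.  PAYOFF[i][a0][a1] is player i's payoff.
PAYOFF = [
    [[0, 7], [2, 6]],   # player 0
    [[0, 2], [7, 6]],   # player 1
]
ACTIONS = 2
MU = 20.0               # inertia constant, larger than any possible average regret

def regret_matching(rounds=100_000, seed=0):
    rng = random.Random(seed)
    # D[i][j][k]: cumulative gain player i would have had by playing k
    # instead of j, summed over rounds in which i actually played j.
    D = [[[0.0] * ACTIONS for _ in range(ACTIONS)] for _ in range(2)]
    last = [rng.randrange(ACTIONS) for _ in range(2)]   # arbitrary first play
    joint_counts = Counter()
    for t in range(1, rounds + 1):
        play = []
        for i in range(2):
            j = last[i]
            probs = [0.0] * ACTIONS
            for k in range(ACTIONS):
                if k != j:
                    probs[k] = max(D[i][j][k] / max(t - 1, 1), 0.0) / MU
            probs[j] = 1.0 - sum(probs)   # stay with the last action otherwise
            play.append(rng.choices(range(ACTIONS), weights=probs)[0])
        a0, a1 = play
        joint_counts[(a0, a1)] += 1
        for k in range(ACTIONS):
            D[0][a0][k] += PAYOFF[0][k][a1] - PAYOFF[0][a0][a1]
            D[1][a1][k] += PAYOFF[1][a0][k] - PAYOFF[1][a0][a1]
        last = play
    total = sum(joint_counts.values())
    return {cell: round(count / total, 3) for cell, count in joint_counts.items()}

print(regret_matching())   # empirical joint distribution ~ a correlated equilibrium
```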

Journal ArticleDOI
TL;DR: A surprising classification is given for the computational complexity of the Quantified Constraint Satisfaction Problem QCSP(Γ), where Γ is a finite constraint language over three elements that contains all constants; the classification refutes the hitherto widely believed Chen Conjecture.
Abstract: We give a surprising classification for the computational complexity of the Quantified Constraint Satisfaction Problem over a constraint language Γ, QCSP(Γ), where Γ is a finite language over three elements that contains all constants. In particular, such problems are in P, NP-complete, co-NP-complete, or PSpace-complete. Our classification refutes the hitherto widely believed Chen Conjecture. Additionally, we show that already on a 4-element domain there exists a constraint language Γ such that QCSP(Γ) is DP-complete (from Boolean Hierarchy), and on a 10-element domain there exists a constraint language giving the complexity class Θ₂P. Meanwhile, we prove the Chen Conjecture for finite conservative languages Γ. If the polymorphism clone of such Γ has the polynomially generated powers property, then QCSP(Γ) is in NP. Otherwise, the polymorphism clone of Γ has the exponentially generated powers property and QCSP(Γ) is PSpace-complete.

Journal ArticleDOI
TL;DR: This work revisits the probabilistic extension of Datalog and proposes a more principled approach towards defining its semantics based on stochastic kernels and Markov processes, which allows it to extend the semantics to continuous probability distributions.
Abstract: Arguing for the need to combine declarative and probabilistic programming, Bárány et al. (TODS 2017) recently introduced a probabilistic extension of Datalog as a “purely declarative probabilistic programming language.” We revisit this language and propose a more principled approach towards defining its semantics based on stochastic kernels and Markov processes—standard notions from probability theory. This allows us to extend the semantics to continuous probability distributions, thereby settling an open problem posed by Bárány et al. We show that our semantics is fairly robust, allowing both parallel execution and arbitrary chase orders when evaluating a program. We cast our semantics in the framework of infinite probabilistic databases (Grohe and Lindner, LMCS 2022) and show that the semantics remains meaningful even when the input of a probabilistic Datalog program is an arbitrary probabilistic database.

Journal ArticleDOI
TL;DR: In this article, the authors give an n^{O(log log n)}-time membership query algorithm for properly and agnostically learning decision trees under the uniform distribution over {±1}^n.
Abstract: We give an n^{O(log log n)}-time membership query algorithm for properly and agnostically learning decision trees under the uniform distribution over {±1}^n. Even in the realizable setting, the previous fastest runtime was n^{O(log n)}, a consequence of a classic algorithm of Ehrenfeucht and Haussler. Our algorithm shares similarities with practical heuristics for learning decision trees, which we augment with additional ideas to circumvent known lower bounds against these heuristics. To analyze our algorithm, we prove a new structural result for decision trees that strengthens a theorem of O’Donnell, Saks, Schramm, and Servedio. While the OSSS theorem says that every decision tree has an influential variable, we show how every decision tree can be “pruned” so that every variable in the resulting tree is influential.

Journal ArticleDOI
TL;DR: Improved deterministic algorithms for approximating shortest paths in the Congested Clique model of distributed computing are presented, based on a derandomization scheme for a novel variant of the hitting set problem, which might be of independent interest.
Abstract: We present improved deterministic algorithms for approximating shortest paths in the Congested Clique model of distributed computing. We obtain poly(log log n)-round algorithms for the following problems in unweighted undirected n-vertex graphs: (1) a (1 + ϵ)-approximation of multi-source shortest paths (MSSP) from O(√n) sources; (2) a (2 + ϵ)-approximation of all pairs shortest paths (APSP); (3) a (1 + ϵ, β)-approximation of APSP, where β = O(log log n/ϵ)^{log log n}. These bounds improve exponentially over the state-of-the-art poly-logarithmic bounds due to [Censor-Hillel et al., PODC19]. It also provides the first nearly-additive bounds for the APSP problem in sub-polynomial time. Our approach is based on distinguishing between short and long distances based on some distance threshold t = O(β/ϵ) where β = O(log log n/ϵ)^{log log n}. Handling the long distances is done by devising a new algorithm for computing a sparse (1 + ϵ, β) emulator with O(n log log n) edges. For the short distances, we provide distance-sensitive variants for the distance tool-kit of [Censor-Hillel et al., PODC19]. By exploiting the fact that this tool-kit should be applied only on local balls of radius t, their round complexities get improved from poly(log n) to poly(log t). Finally, our deterministic solutions for these problems are based on a derandomization scheme of a novel variant of the hitting set problem, which might be of independent interest.

Journal ArticleDOI
TL;DR: It is shown that when \( k = \Theta (\log n) \), any Cutting Planes refutation for random k-SAT requires exponential length in the regime where the number of clauses guarantees that the formula is unsatisfiable with high probability.
Abstract: The random k-SAT model is one of the most important and well-studied distributions over k-SAT instances. It is closely connected to statistical physics and is a benchmark for satisfiability algorithms. We show that when \( k = \Theta (\log n) \) , any Cutting Planes refutation for random k-SAT requires exponential length in the regime where the number of clauses guarantees that the formula is unsatisfiable with high probability.
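
For concreteness, the random k-SAT model referred to here draws each clause independently by choosing k distinct variables uniformly at random and negating each with probability 1/2; unsatisfiability is governed by the clause-to-variable ratio. The sketch below samples such an instance as DIMACS-style clause lists; the density 30 used in the demo is an arbitrary illustrative value, not a threshold from the paper.

```python
# Sample a random k-SAT instance: m clauses over n variables, each clause
# built from k distinct variables chosen uniformly at random, each negated
# with probability 1/2.  Positive literal i stands for x_i, negative -i
# for its negation (DIMACS convention).

import random

def random_ksat(n_vars, n_clauses, k, seed=None):
    rng = random.Random(seed)
    clauses = []
    for _ in range(n_clauses):
        variables = rng.sample(range(1, n_vars + 1), k)
        clauses.append([v if rng.random() < 0.5 else -v for v in variables])
    return clauses

def is_satisfied(clauses, assignment):
    """assignment[i] is the Boolean value of variable i (1-indexed)."""
    return all(any((lit > 0) == assignment[abs(lit)] for lit in clause)
               for clause in clauses)

# A dense instance: k = 3, clause/variable ratio 30 (well above the
# satisfiability threshold, so the formula is unsatisfiable w.h.p.).
instance = random_ksat(n_vars=20, n_clauses=600, k=3, seed=1)
random_assignment = {i: random.random() < 0.5 for i in range(1, 21)}
print(is_satisfied(instance, random_assignment))  # almost surely False
```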

Journal ArticleDOI
TL;DR: In this paper, the authors introduce the notion of a chain: a sequence of n points in the plane, ordered by x-coordinates, such that the edge between any two consecutive points is unavoidable as far as triangulations are concerned.
Abstract: We introduce the abstract notion of a chain, which is a sequence of n points in the plane, ordered by x-coordinates, so that the edge between any two consecutive points is unavoidable as far as triangulations are concerned. A general theory of the structural properties of chains is developed, alongside a general understanding of their number of triangulations. We also describe an intriguing new and concrete configuration, which we call the Koch chain due to its similarities to the Koch curve. A specific construction based on Koch chains is then shown to have Ω(9.08^n) triangulations. This is a significant improvement over the previous and long-standing lower bound of Ω(8.65^n) for the maximum number of triangulations of planar point sets.
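
To put the Ω(9.08^n) bound in context, the most classical configuration with an exactly known triangulation count is n points in convex position, which have exactly C_{n-2} triangulations (a Catalan number, growing like 4^n up to polynomial factors); constructions such as the one in this paper are designed to beat this base substantially. The short sketch below computes that convex-position count as a baseline comparison only, not the Koch-chain construction.

```python
# Number of triangulations of n points in convex position: the Catalan
# number C_{n-2}.  This grows like 4^n up to polynomial factors, the
# classical baseline against which richer constructions are measured.

from math import comb

def catalan(m):
    return comb(2 * m, m) // (m + 1)

def convex_position_triangulations(n):
    if n < 3:
        return 1
    return catalan(n - 2)

for n in (4, 5, 10, 30):
    count = convex_position_triangulations(n)
    # the per-point growth base slowly approaches 4 as n grows
    print(n, count, f"~ {count ** (1 / n):.2f}^n")
```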

Journal ArticleDOI
TL;DR: In this paper , it was shown that the isomorphism problem for CFI graphs over Ω2i cannot be defined in rank logic, even if the base graph is totally ordered.
Abstract: In the search for a logic capturing polynomial time the most promising candidates are Choiceless Polynomial Time (CPT) and rank logic. Rank logic extends fixed-point logic with counting by a rank operator over prime fields. We show that the isomorphism problem for CFI graphs over ℤ_{2^i} cannot be defined in rank logic, even if the base graph is totally ordered. However, CPT can define this isomorphism problem. We thereby separate rank logic from CPT and in particular from polynomial time.

Journal ArticleDOI
TL;DR: In this article, it is shown that, relative to an oracle, doubling the quantum circuit depth does make the hybrid model more powerful and this cannot be traded for classical computation: for any depth parameter d, there exists an oracle that separates quantum depth d and 2d+1 in the presence of classical computation.
Abstract: Near-term quantum computers are likely to have small depths due to short coherence time and noisy gates. A natural approach to leverage these quantum computers is interleaving them with classical computers. Understanding the capabilities and limits of this hybrid approach is an essential topic in quantum computation. Most notably, the quantum Fourier transform can be implemented by a hybrid of logarithmic-depth quantum circuits and a classical polynomial-time algorithm. Therefore, it seems possible that quantum polylogarithmic depth is as powerful as quantum polynomial depth in the presence of classical computation. Indeed, Jozsa conjectured that “Any quantum polynomial-time algorithm can be implemented with only O(log n) quantum depth interspersed with polynomial-time classical computations.” This can be formalized as asserting the equivalence of BQP and “BQNC^BPP.” However, Aaronson conjectured that “there exists an oracle separation between BQP and BPP^BQNC.” BQNC^BPP and BPP^BQNC are two natural and seemingly incomparable ways of hybrid classical-quantum computation. In this work, we manage to prove Aaronson’s conjecture and in the meantime prove that Jozsa’s conjecture, relative to an oracle, is false. In fact, we prove a stronger statement that for any depth parameter d, there exists an oracle that separates quantum depth d and 2d+1 in the presence of classical computation. Thus, our results show that relative to oracles, doubling the quantum circuit depth does make the hybrid model more powerful, and this cannot be traded by classical computation.

Journal ArticleDOI
TL;DR: In this article, it was shown that strong polarization of the underlying martingale can lead to efficient capacity-achieving codes for arbitrary symmetric memoryless channels with lengths that are only inverse polynomial in the gap to capacity.
Abstract: Arikan's exciting discovery of polar codes has provided an altogether new way to efficiently achieve Shannon capacity. Given a (constant-sized) invertible matrix $M$, a family of polar codes can be associated with this matrix and its ability to approach capacity follows from the polarization of an associated $[0,1]$-bounded martingale, namely its convergence in the limit to either $0$ or $1$. Arikan showed polarization of the martingale associated with the matrix $G_2 = \left(\begin{smallmatrix} 1 & 0 \\ 1 & 1 \end{smallmatrix}\right)$ to get capacity achieving codes. His analysis was later extended to all matrices $M$ that satisfy an obvious necessary condition for polarization. While Arikan's theorem does not guarantee that the codes achieve capacity at small blocklengths, it turns out that a "strong" analysis of the polarization of the underlying martingale would lead to such constructions. Indeed for the martingale associated with $G_2$ such a strong polarization was shown in two independent works ([Guruswami and Xia, IEEE IT '15] and [Hassani et al., IEEE IT '14]), resolving a major theoretical challenge of the efficient attainment of Shannon capacity. In this work we extend the result above to cover martingales associated with all matrices that satisfy the necessary condition for (weak) polarization. In addition to being vastly more general, our proofs of strong polarization are also simpler and modular. Specifically, our result shows strong polarization over all prime fields and leads to efficient capacity-achieving codes for arbitrary symmetric memoryless channels. We show how to use our analyses to achieve exponentially small error probabilities at lengths inverse polynomial in the gap to capacity. Indeed we show that we can essentially match any error probability with lengths that are only inverse polynomial in the gap to capacity.
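
For the binary erasure channel the polarization of the martingale associated with $G_2$ can be simulated exactly: a channel with erasure probability z splits into channels with erasure probabilities 2z - z^2 and z^2, whose average is z, so a uniformly random descent through these splits is a [0,1]-bounded martingale of the kind the abstract refers to. The sketch below tracks this for the BEC only, the simplest instance of the general analysis; the erasure probability and number of levels are illustrative choices.

```python
# Polarization for the binary erasure channel (BEC): a channel with
# erasure probability z splits under the G_2 transform into channels
# with erasure probabilities 2z - z^2 ("minus") and z^2 ("plus").
# Their average is z, so a random descent is a [0,1]-bounded martingale,
# and after many levels almost every branch is close to 0 or 1.

import random

def polarize_bec(z0=0.5, levels=12, seed=0):
    rng = random.Random(seed)
    # track the full tree of 2^levels synthesized channels
    zs = [z0]
    for _ in range(levels):
        zs = [w for z in zs for w in (2 * z - z * z, z * z)]
    good = sum(1 for z in zs if z < 1e-3)       # nearly noiseless channels
    bad = sum(1 for z in zs if z > 1 - 1e-3)    # nearly useless channels
    total = len(zs)
    print(f"{total} channels: {good / total:.2%} nearly perfect, "
          f"{bad / total:.2%} nearly useless, rest still polarizing")
    # one random root-to-leaf path = one trajectory of the martingale
    z = z0
    path = [z]
    for _ in range(levels):
        z = z * z if rng.random() < 0.5 else 2 * z - z * z
        path.append(z)
    print("sample martingale trajectory:", [round(v, 4) for v in path])

polarize_bec()
```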

Journal ArticleDOI
TL;DR: In this article, a two-round MPC protocol is constructed for the dishonest majority setting under the minimal assumption that two-round oblivious transfer (OT) exists; the protocol inherits the security of the underlying OT, being secure against semi-honest adversaries if the OT is, and against malicious adversaries if the OT is.
Abstract: We provide new two-round multiparty secure computation (MPC) protocols in the dishonest majority setting assuming the minimal assumption that two-round oblivious transfer (OT) exists. If the assumed two-round OT protocol is secure against semi-honest adversaries (in the plain model) then so is our two-round MPC protocol. Similarly, if the assumed two-round OT protocol is secure against malicious adversaries (in the common random/reference string model) then so is our two-round MPC protocol. Previously, two-round MPC protocols were only known under relatively stronger computational assumptions.

Journal ArticleDOI
TL;DR: In this article, a new notion of access pattern privacy, called (ϵ, δ)-differential obliviousness, is proposed, inspired by the notion of differential privacy.
Abstract: It is well-known that a program’s memory access pattern can leak information about its input. To thwart such leakage, most existing works adopt the technique of oblivious RAM (ORAM) simulation. Such an obliviousness notion has stimulated much debate. Although ORAM techniques have significantly improved over the past few years, the concrete overheads are arguably still undesirable for real-world systems — part of this overhead is in fact inherent due to a well-known logarithmic ORAM lower bound by Goldreich and Ostrovsky. To make matters worse, when the program’s runtime or output length depend on secret inputs, it may be necessary to perform worst-case padding to achieve full obliviousness and thus incur possibly super-linear overheads. Inspired by the elegant notion of differential privacy, we initiate the study of a new notion of access pattern privacy, which we call “(ϵ, δ)-differential obliviousness”. We separate the notion of (ϵ, δ)-differential obliviousness from classical obliviousness by considering several fundamental algorithmic abstractions including sorting small-length keys, merging two sorted lists, and range query data structures (akin to binary search trees). We show that by adopting differential obliviousness with reasonable choices of ϵ and δ, not only can one circumvent several impossibilities pertaining to full obliviousness, one can also, in several cases, obtain meaningful privacy with little overhead relative to the non-private baselines (i.e., having privacy “with little extra overhead”). On the other hand, we show that for very demanding choices of ϵ and δ, the same lower bounds for oblivious algorithms would be preserved for (ϵ, δ)-differential obliviousness.

Journal ArticleDOI
TL;DR: The Invited Article section of this issue is the article “A Framework for Adversarially Robust Streaming Algorithms” by Omri Ben-Eliezer, Rajesh Jayaram, David P. Woodruff, Eylon Yogev, and Abbas Edalat.
Abstract: The Invited Article section of this issue is the article “A Framework for Adversarially Robust Streaming Algorithms” by Omri Ben-Eliezer, Rajesh Jayaram, David P. Woodruff, Eylon Yogev, and Abbas Edalat. The article was invited from the 37th SIGMOD-SIGACT-SIGAI Symposium on Principles of Database Systems (PODS’20). We want to thank the PODS’20 Program Committee for their help in selecting this invited article and editor Dan Suciu for handling the article.

Journal ArticleDOI
TL;DR: In this article, a black-box reduction from bandits to Bandits with Knapsacks (BwK) is given and used to handle the adversarial version of the problem, in which outcomes can be chosen adversarially and regret minimization is no longer feasible.
Abstract: We consider Bandits with Knapsacks (henceforth, BwK ), a general model for multi-armed bandits under supply/budget constraints. In particular, a bandit algorithm needs to solve a well-known knapsack problem : find an optimal packing of items into a limited-size knapsack. The BwK problem is a common generalization of numerous motivating examples, which range from dynamic pricing to repeated auctions to dynamic ad allocation to network routing and scheduling. While the prior work on BwK focused on the stochastic version, we pioneer the other extreme in which the outcomes can be chosen adversarially. This is a considerably harder problem, compared to both the stochastic version and the “classic” adversarial bandits, in that regret minimization is no longer feasible. Instead, the objective is to minimize the competitive ratio : the ratio of the benchmark reward to algorithm’s reward. We design an algorithm with competitive ratio O (log T ) relative to the best fixed distribution over actions, where T is the time horizon; we also prove a matching lower bound. The key conceptual contribution is a new perspective on the stochastic version of the problem. We suggest a new algorithm for the stochastic version, which builds on the framework of regret minimization in repeated games and admits a substantially simpler analysis compared to prior work. We then analyze this algorithm for the adversarial version, and use it as a subroutine to solve the latter. Our algorithm is the first “black-box reduction” from bandits to BwK: it takes an arbitrary bandit algorithm and uses it as a subroutine. We use this reduction to derive several extensions.

Journal ArticleDOI
TL;DR: To better understand the anonymous shared memory model, new algorithms are first designed for several important problems, such as mutual exclusion, consensus, election, and renaming, and the model is shown to be useful in modeling biologically inspired distributed computing methods, especially those based on ideas from molecular biology.
Abstract: Assuming that there is an a priori agreement between processes on the names of shared memory locations, as is done in almost all the publications on concurrent shared memory algorithms, is tantamount to assuming that agreement has already been solved at a lower level. It is intriguing to figure out how coordination can be achieved without relying on such lower-level agreement. This is the setting of the anonymous shared memory model, in which processes have no a priori agreement on the names of the shared memory locations. To better understand the new model, we first design new algorithms for several important problems, such as mutual exclusion, consensus, election, and renaming. Then, we prove space lower bounds, impossibility results, and resolve two foundational long-standing open problems in the context of anonymous memory systems. Using these results, we identify fundamental differences between the standard shared memory model and the strictly weaker anonymous shared memory model. Besides enabling us to understand better the intrinsic limits for coordinating the actions of asynchronous processes, the new model has been shown to be useful in modeling biologically inspired distributed computing methods, especially those based on ideas from molecular biology.

Journal ArticleDOI
TL;DR: In this paper, an O(nm)-time algorithm for all-pairs shortest paths in directed graphs with n nodes, m arcs, and nonnegative integer arc costs is presented, matching the complexity bound attained by Thorup [31] for the all-pairs problem in undirected graphs.
Abstract: We present an O(nm) algorithm for all-pairs shortest paths computations in a directed graph with n nodes, m arcs, and nonnegative integer arc costs. This matches the complexity bound attained by Thorup [31] for the all-pairs problems in undirected graphs. The main insight is that shortest paths problems with approximately balanced directed cost functions can be solved similarly to the undirected case. The algorithm finds an approximately balanced reduced cost function in an O(m√n log n) preprocessing step. Using these reduced costs, every shortest path query can be solved in O(m) time using an adaptation of Thorup’s component hierarchy method. The balancing result can also be applied to the ℓ∞-matrix balancing problem.
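
The reduced-cost idea underlying the algorithm is the classical potential transformation: replacing each arc cost c(u,v) by c_π(u,v) = c(u,v) + π(u) - π(v) changes every u-to-v path length by the same constant π(u) - π(v), so shortest paths are preserved, and choosing π as exact shortest-path distances from a root (Johnson's classical choice) makes all reduced costs nonnegative. The sketch below illustrates only this standard transformation, not the approximate balancing step or the component-hierarchy queries of the paper; the example graph is arbitrary.

```python
# Potential-based reduced costs: c_pi(u, v) = c(u, v) + pi(u) - pi(v).
# Path lengths change only by pi(source) - pi(target), so shortest paths
# are preserved.  With pi = shortest-path distances from one root
# (and all vertices reachable from it), every reduced cost is >= 0.

import heapq

def dijkstra(graph, source):
    dist = {v: float("inf") for v in graph}
    dist[source] = 0
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue
        for v, cost in graph[u]:
            if d + cost < dist[v]:
                dist[v] = d + cost
                heapq.heappush(heap, (dist[v], v))
    return dist

def reduced_costs(graph, potential):
    return {u: [(v, cost + potential[u] - potential[v]) for v, cost in nbrs]
            for u, nbrs in graph.items()}

graph = {
    "s": [("a", 4), ("b", 1)],
    "a": [("c", 1)],
    "b": [("a", 2), ("c", 5)],
    "c": [],
}
pi = dijkstra(graph, "s")             # distances: s=0, b=1, a=3, c=4
print(reduced_costs(graph, pi))       # all reduced costs are >= 0;
                                      # arcs on shortest paths become 0
```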