
Showing papers by "Swastik Kopparty published in 2018"


Proceedings ArticleDOI
30 Nov 2018
TL;DR: In this article, the error-correcting properties of folded Reed-Solomon codes and univariate multiplicity codes were analyzed and shown to be better than previously known in the context of list decoding and local list decoding.
Abstract: In this work, we show new and improved error-correcting properties of folded Reed-Solomon codes and multiplicity codes. Both of these families of codes are based on polynomials over finite fields, and both have been the sources of recent advances in coding theory. Folded Reed-Solomon codes were the first explicit constructions of codes known to achieve list-decoding capacity; multivariate multiplicity codes were the first constructions of high-rate locally correctable codes; and univariate multiplicity codes are also known to achieve list-decoding capacity. However, previous analyses of the error-correction properties of these codes did not yield optimal results. In particular, in the list-decoding setting, the guarantees on the list-sizes were polynomial in the block length, rather than constant; and for multivariate multiplicity codes, local list-decoding algorithms could not go beyond the Johnson bound. In this paper, we show that Folded Reed-Solomon codes and multiplicity codes are in fact better than previously known in the context of list decoding and local list-decoding. More precisely, we first show that Folded RS codes achieve list-decoding capacity with constant list sizes, independent of the block length; and that high-rate univariate multiplicity codes can also be list-recovered with constant list sizes. Using our result on univariate multiplicity codes, we show that multivariate multiplicity codes are high-rate, locally list-recoverable codes. Finally, we show how to combine the above results with standard tools to obtain capacity achieving locally list decodable codes with query complexity significantly lower than was known before.
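As a rough illustration of the gap being closed here: over large alphabets, the Johnson bound only guarantees list decoding up to radius 1 − √R, while list-decoding capacity is 1 − R. The sketch below uses these standard textbook formulas (not code from the paper; the function names are ours) to compare the two radii:

```python
import math

def johnson_radius(rate: float) -> float:
    """Johnson-bound list-decoding radius for a large-alphabet code: 1 - sqrt(R)."""
    return 1 - math.sqrt(rate)

def capacity_radius(rate: float) -> float:
    """List-decoding capacity: the radius approachable at rate R is 1 - R."""
    return 1 - rate

# Capacity strictly dominates the Johnson bound at every rate in (0, 1),
# and the two radii differ by roughly a factor of two as R -> 1.
for r in (0.1, 0.3, 0.5, 0.7, 0.9):
    assert capacity_radius(r) > johnson_radius(r)
```

At rate close to 1 the capacity radius is nearly twice the Johnson radius, which is why going beyond the Johnson bound matters most in the high-rate regime the abstract emphasizes.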

27 citations


DOI
22 Jun 2018
TL;DR: The soundness analysis of the recent RS proximity testing protocol of [Ben-Sasson et al., ICALP 2018] is improved and extended to the "list-decoding" regime, bringing it closer to the Johnson bound.
Abstract: Algebraic proof systems reduce computational problems to problems about estimating the distance of a sequence of functions u1, ..., uk, given as oracles, from a linear error correcting code V. The soundness of such systems relies on methods that act "locally" on u and map it to a single function u* that is, roughly, as far from V as are u1, ..., uk. Motivated by these applications to efficient proof systems, we study a natural worst-case to average-case reduction of distance for linear spaces, and show several general cases in which the following statement holds: If some member of a linear space U = span(u1, ..., uk) is δ-far from (all elements) of V in relative Hamming distance, then nearly all elements of U are (1 − ϵ)δ-far from V; the value of ϵ depends only on the distance of the code V and approaches 0 as that distance approaches 1. Our results improve on the previous state-of-the-art which showed that nearly all elements of U are δ/2-far from V [Rothblum, Vadhan and Wigderson, STOC 2013].When V is a Reed-Solomon (RS) code, as is often the case for algebraic proof systems, we show how to boost distance via a new "local" transformation that may be useful elsewhere. Relying on the affine-invariance of V, we map a vector u to a random linear combination of affine transformations of u, and show this process amplifies distance from V. Assuming V is an RS code with sufficiently large distance, this amplification process converts a function u that is somewhat far from V to one that is (1 − ϵ)-far from V; as above, ϵ depends only on the distance of V and approaches 0 as the distance of V approaches 1.We give two concrete applications of these techniques. First, we revisit the axis-parallel low-degree test for bivariate polynomials of [Polischuk-Spielman, STOC 1994] and prove a "list-decoding" type result for it, when the degree of one axis is extremely small.
This result is similar to the recent list-decoding-regime result of [Chiesa, Manohar and Shinkar, RANDOM 2017] but is proved using different techniques, and allows the degree in one axis to be arbitrarily large. Second, we improve the soundness analysis of the recent RS proximity testing protocol of [Ben-Sasson et al., ICALP 2018] and extend it to the "list-decoding" regime, bringing it closer to the Johnson bound.
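The worst-case to average-case statement can be sanity-checked by brute force on a toy instance (our own illustration, not the paper's reduction): take V to be the Reed-Solomon code of degree ≤ 1 over F_7 and U the span of the evaluations of x² and x³.

```python
p = 7  # evaluation points: all of F_7

# V: Reed-Solomon code of degree <= 1 (evaluations of all lines a*x + b)
V = [[(a * x + b) % p for x in range(p)] for a in range(p) for b in range(p)]

def rel_dist(u):
    """Relative Hamming distance from u to the nearest codeword of V."""
    return min(sum(ui != ci for ui, ci in zip(u, c)) for c in V) / p

# U = span(u1, u2), where u1, u2 evaluate x^2 and x^3 (both far from V)
u1 = [(x * x) % p for x in range(p)]
u2 = [pow(x, 3, p) for x in range(p)]

dists = [rel_dist([(a * s + b * t) % p for s, t in zip(u1, u2)])
         for a in range(p) for b in range(p) if (a, b) != (0, 0)]

worst = max(dists)    # the farthest member of U from V
typical = min(dists)  # the closest nonzero member of U to V
```

Here the farthest member of U is 5/7-far from V, and every nonzero member is still at least 4/7-far, matching the flavor of the theorem: nearly the full distance of the worst member is shared by the entire span.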

27 citations


Journal ArticleDOI
TL;DR: The Gilbert–Varshamov bound can be achieved by codes that support local error-detection and error-correction algorithms; this appears to be the first time local testability is used in the construction of a locally correctable code.
Abstract: One of the most important open problems in the theory of error-correcting codes is to determine the tradeoff between the rate $R$ and minimum distance $\delta$ of a binary code. The best known tradeoff is the Gilbert–Varshamov bound, and says that for every $\delta \in (0, 1/2)$, there are codes with minimum distance $\delta$ and rate $R = {R_{\mathsf {GV}}}(\delta) > 0$ (for a certain simple function ${R_{\mathsf {GV}}}(\cdot)$). In this paper, we show that the Gilbert–Varshamov bound can be achieved by codes that support local error-detection and error-correction algorithms. Specifically, we show the following results. 1) Local testing: for all $\delta \in (0,1/2)$ and all $R < {R_{\mathsf {GV}}}(\delta)$, there exist codes with length $n$, rate $R$, and minimum distance $\delta$ that are locally testable with $\mathsf {quasipolylog}(n)$ query complexity. 2) Local correction: for all $\epsilon > 0$, for all $\delta < 1/2$ sufficiently large, and all $R < {R_{\mathsf {GV}}}(\delta)$, there exist codes with length $n$, rate $R$, and minimum distance $\delta$ that are locally correctable from $({\delta }/{2}) - o(1)$ fraction errors with $O(n^{\epsilon })$ query complexity. Furthermore, these codes have an efficient randomized construction, and the local testing and local correction algorithms can be made to run in time polynomial in the query complexity. Our results on locally correctable codes also immediately give locally decodable codes with the same parameters. Our local testing result is obtained by combining Thommesen's random concatenation technique and the best known locally testable codes by Kopparty et al. Our local correction result, which is significantly more involved, also uses random concatenation, along with a number of further ideas: the Guruswami–Sudan–Indyk list decoding strategy for concatenated codes, Alon–Edmonds–Luby distance amplification, and the local list-decodability, local list-recoverability, and local testability of Reed–Muller codes.
Curiously, our final local correction algorithms go via local list-decoding and local testing algorithms; this seems to be the first time local testability is used in the construction of a locally correctable code.
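For binary codes the "certain simple function" in the abstract is ${R_{\mathsf {GV}}}(\delta) = 1 - H(\delta)$, with $H$ the binary entropy function; a minimal sketch:

```python
import math

def H2(x: float) -> float:
    """Binary entropy function H(x) = -x log2 x - (1-x) log2 (1-x)."""
    return -x * math.log2(x) - (1 - x) * math.log2(1 - x)

def R_GV(delta: float) -> float:
    """Gilbert-Varshamov rate for binary codes: R_GV(delta) = 1 - H(delta)."""
    return 1 - H2(delta)

# R_GV is positive on (0, 1/2), as the abstract states, and vanishes as delta -> 1/2
assert all(R_GV(d / 100) > 0 for d in range(1, 50))
```

For instance, at distance δ ≈ 0.11 the GV rate is about 1/2, and it decays to 0 as δ approaches 1/2.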

13 citations



Posted Content
TL;DR: In this paper, the error-correcting properties of folded Reed-Solomon codes and univariate multiplicity codes were analyzed and shown to be better than previously known in the context of list-decoding and local list-decoding.
Abstract: In this work, we show new and improved error-correcting properties of folded Reed-Solomon codes and multiplicity codes. Both of these families of codes are based on polynomials over finite fields, and both have been the sources of recent advances in coding theory. Folded Reed-Solomon codes were the first explicit constructions of codes known to achieve list-decoding capacity; multivariate multiplicity codes were the first constructions of high-rate locally correctable codes; and univariate multiplicity codes are also known to achieve list-decoding capacity. However, previous analyses of the error-correction properties of these codes did not yield optimal results. In particular, in the list-decoding setting, the guarantees on the list-sizes were polynomial in the block length, rather than constant; and for multivariate multiplicity codes, local list-decoding algorithms could not go beyond the Johnson bound. In this paper, we show that Folded Reed-Solomon codes and multiplicity codes are in fact better than previously known in the context of list-decoding and local list-decoding. More precisely, we first show that Folded RS codes achieve list-decoding capacity with constant list sizes, independent of the block length; and that high-rate univariate multiplicity codes can also be list-recovered with constant list sizes. Using our result on univariate multiplicity codes, we show that multivariate multiplicity codes are high-rate, locally list-recoverable codes. Finally, we show how to combine the above results with standard tools to obtain capacity achieving locally list decodable codes with query complexity significantly lower than was known before.

6 citations


Proceedings Article
07 Jan 2018
TL;DR: In this paper, the problem of syndrome decoding Reed-Muller codes from random errors was studied, and it was shown that this problem (and an equivalent tensor decomposition problem) can be solved in polylog(n) time.
Abstract: Reed-Muller codes are some of the oldest and most widely studied error-correcting codes, of interest for both their algebraic structure as well as their many algorithmic properties. A recent beautiful result of Saptharishi, Shpilka and Volk [SSV17] showed that for binary Reed-Muller codes of length n and distance d = O(1), one can correct polylog(n) random errors in poly(n) time (which is well beyond the worst-case error tolerance of O(1)). In this paper, we consider the problem of syndrome decoding Reed-Muller codes from random errors. More specifically, given the polylog(n)-bit long syndrome vector of a codeword corrupted in polylog(n) random coordinates, we would like to compute the locations of the codeword corruptions. This problem turns out to be equivalent to a basic question about computing tensor decomposition of random low-rank tensors over finite fields. Our main result is that syndrome decoding of Reed-Muller codes (and the equivalent tensor decomposition problem) can be solved efficiently, i.e., in polylog(n) time. We give two algorithms for this problem: 1. The first algorithm is a finite field variant of a classical algorithm for tensor decomposition over real numbers due to Jennrich. This also gives an alternate proof for the main result of [SSV17]. 2. The second algorithm is obtained by implementing the steps of [SSV17]'s Berlekamp-Welch-style decoding algorithm in sublinear-time. The main new ingredient is an algorithm for solving certain kinds of systems of polynomial equations.
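Jennrich's algorithm, which the first result adapts to finite fields, can be sketched over the reals (a floating-point illustration under our own naming, not the paper's finite-field variant): contracting the third mode of T with two random vectors gives two matrices whose "ratio" is diagonalized exactly by the first factor of the decomposition.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4  # decompose a random rank-4 tensor in R^{4 x 4 x 4}

# ground-truth factors: T[i,j,k] = sum_r A[i,r] * B[j,r] * C[k,r]
A, B, C = (rng.standard_normal((n, n)) for _ in range(3))
T = np.einsum('ir,jr,kr->ijk', A, B, C)

# contract the third mode with two random vectors:
# Mx = A diag(C^T x) B^T  and  My = A diag(C^T y) B^T
x, y = rng.standard_normal(n), rng.standard_normal(n)
Mx = np.einsum('ijk,k->ij', T, x)
My = np.einsum('ijk,k->ij', T, y)

# eigenvectors of Mx My^{-1} = A diag(...) A^{-1} are the columns of A, up to scale
_, eigvecs = np.linalg.eig(Mx @ np.linalg.inv(My))
eigvecs = np.real(eigvecs)

def matches_some_eigvec(col):
    """True if col equals some eigenvector up to sign and scale (|cos| near 1)."""
    c = col / np.linalg.norm(col)
    return max(abs(c @ v) / np.linalg.norm(v) for v in eigvecs.T) > 0.99

recovered = all(matches_some_eigvec(A[:, i]) for i in range(n))
```

With generic (here, Gaussian) factors the eigendecomposition recovers all the hidden directions; the paper's contribution includes making this style of argument work over finite fields, where there is no notion of numerical genericity to lean on.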

3 citations


Posted Content
TL;DR: In this article, relations between the bias of multilinear forms and the rank of tensors over GF(2) are proved and used to show that the finite field multiplication tensor has rank at least $3.52k$, matching the best known lower bound for any explicit tensor in three dimensions.
Abstract: In this paper, we prove new relations between the bias of multilinear forms, the correlation between multilinear forms and lower degree polynomials, and the rank of tensors over $GF(2)= \{0,1\}$. We show the following results for multilinear forms and tensors. 1. Correlation bounds: We show that a random $d$-linear form has exponentially low correlation with low-degree polynomials. More precisely, for $d \ll 2^{o(k)}$, we show that a random $d$-linear form $f(X_1,X_2, \dots, X_d) : \left(GF(2)^{k}\right)^d \rightarrow GF(2)$ has correlation $2^{-k(1-o(1))}$ with any polynomial of degree at most $d/10$. This result is proved by giving near-optimal bounds on the bias of a random $d$-linear form, which is in turn proved by giving near-optimal bounds on the probability that a random rank-$t$ $d$-linear form is identically zero. 2. Tensor-rank vs Bias: We show that if a $d$-dimensional tensor has small rank, then the bias of the associated $d$-linear form is large. More precisely, given any $d$-dimensional tensor $$T :\underbrace{[k]\times \ldots \times [k]}_{\text{$d$ times}}\to GF(2)$$ of rank at most $t$, the bias of the associated $d$-linear form $$f_T(X_1,\ldots,X_d) := \sum_{(i_1,\dots,i_d) \in [k]^d} T(i_1,i_2,\ldots, i_d) X_{1,i_1}\cdot X_{2,i_2}\cdots X_{d,i_d}$$ is at least $\left(1-\frac1{2^{d-1}}\right)^t$. The above bias vs tensor-rank connection suggests a natural approach to proving nontrivial tensor-rank lower bounds for $d=3$. In particular, we use this approach to prove that the finite field multiplication tensor has tensor rank at least $3.52k$, matching the best known lower bound for any explicit tensor in three dimensions over $GF(2)$.
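The $d = 3$, rank $t = 1$ case of the bias bound can be checked exhaustively (our own toy check, not code from the paper): for a rank-one tensor the associated trilinear form is a product of three parities, and its bias is exactly $1 - 1/2^{d-1} = 3/4$.

```python
from itertools import product

k = 2  # each of the d = 3 blocks of variables lives in GF(2)^2

def bias_rank_one(u, v, w):
    """Bias of f(X,Y,Z) = (u.X)(v.Y)(w.Z) over GF(2), by full enumeration."""
    total = 0
    for X, Y, Z in product(product(range(2), repeat=k), repeat=3):
        f = ((sum(a * b for a, b in zip(u, X)) % 2)
             * (sum(a * b for a, b in zip(v, Y)) % 2)
             * (sum(a * b for a, b in zip(w, Z)) % 2))
        total += (-1) ** f
    return abs(total) / 2 ** (3 * k)

# rank-1 tensor (t = 1): bias equals the bound (1 - 1/2^{d-1})^t = 3/4 exactly,
# since f = 1 only when all three (uniform) parities are 1, i.e. with probability 1/8
b = bias_rank_one((1, 1), (1, 1), (1, 1))
```

So at rank one the theorem's inequality is tight; higher-rank tensors multiply such factors together, which is the shape of the $(1 - 1/2^{d-1})^t$ bound.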

3 citations


Book ChapterDOI
07 Jan 2018
TL;DR: This work gives a polynomial time approximation algorithm for simultaneous Max-Cut with an approximation factor of 0.8780 (for all constant k), improving on the previous 1/2 − o(1) guarantee.
Abstract: In the simultaneous Max-Cut problem, we are given k weighted graphs on the same set of n vertices, and the goal is to find a cut of the vertex set so that the minimum, over the k graphs, of the cut value is as large as possible. Previous work [BKS15] gave a polynomial time algorithm which achieved an approximation factor of 1/2 − o(1) for this problem (and an approximation factor of 1/2 + ε_k in the unweighted case, where ε_k → 0 as k → ∞). In this work, we give a polynomial time approximation algorithm for simultaneous Max-Cut with an approximation factor of 0.8780 (for all constant k). The natural SDP formulation for simultaneous Max-Cut was shown to have an integrality gap of 1/2 + ε_k in [BKS15]. In achieving the better approximation guarantee, we use a stronger Sum-of-Squares hierarchy SDP relaxation and a rounding algorithm based on Raghavendra-Tan [RT12], in addition to techniques from [BKS15].
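The objective being approximated can be pinned down with a brute-force solver on a toy instance (hypothetical weights, illustration only; this enumerates all 2^n cuts and is nothing like the paper's SDP-based algorithm):

```python
from itertools import combinations

def sim_max_cut(n, graphs):
    """Brute force: maximize over all cuts S the minimum cut weight over the graphs."""
    best = 0.0
    for size in range(n + 1):
        for S in combinations(range(n), size):
            S = set(S)
            # value of cut S in each graph: total weight of edges crossing S
            best = max(best, min(
                sum(w for (u, v, w) in g if (u in S) != (v in S))
                for g in graphs))
    return best

# two weighted triangles on 3 shared vertices (hypothetical toy weights)
g1 = [(0, 1, 1.0), (1, 2, 1.0), (0, 2, 1.0)]
g2 = [(0, 1, 2.0), (1, 2, 1.0), (0, 2, 1.0)]
value = sim_max_cut(3, [g1, g2])
```

Note the min over graphs is what distinguishes this from ordinary Max-Cut: a cut that is excellent for one graph may be poor for another, which is the source of the integrality gap discussed in the abstract.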

3 citations


Journal ArticleDOI
TL;DR: A version of the Cauchy-Davenport theorem is proved for general linear maps, giving a lower bound on the size of the image of a product set A1 × … × An in terms of the sizes of the sets A1, …, An.
Abstract: We prove a version of the Cauchy-Davenport theorem for general linear maps. For subsets A, B of the finite field $$\mathbb{F}_p$$, the classical Cauchy-Davenport theorem gives a lower bound for the size of the sumset A + B in terms of the sizes of the sets A and B. Our theorem considers a general linear map $$L:\mathbb{F}_p^n \to \mathbb{F}_p^m$$, and subsets $$A_1, \ldots, A_n \subseteq \mathbb{F}_p$$, and gives a lower bound on the size of L(A1 × A2 × … × An) in terms of the sizes of the sets A1, …, An. Our proof uses Alon’s Combinatorial Nullstellensatz and a variation of the polynomial method.
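The classical statement being generalized — |A + B| ≥ min(p, |A| + |B| − 1) — is easy to verify exhaustively for a small prime (our own check, not the paper's linear-map version):

```python
from itertools import combinations, product

def cauchy_davenport_holds(p, A, B):
    """Check |A + B| >= min(p, |A| + |B| - 1) in Z/pZ by direct enumeration."""
    sumset = {(a + b) % p for a, b in product(A, B)}
    return len(sumset) >= min(p, len(A) + len(B) - 1)

# exhaustively verify the bound for every pair of nonempty subsets of F_5
p = 5
subsets = [set(s) for r in range(1, p + 1) for s in combinations(range(p), r)]
ok = all(cauchy_davenport_holds(p, A, B) for A in subsets for B in subsets)
```

The sumset A + B is exactly the image of A × B under the linear map L(x, y) = x + y, which is the special case the paper's theorem extends to arbitrary L.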

2 citations


01 Jan 2018
TL;DR: In this paper, the error-correcting properties of folded Reed-Solomon codes and univariate multiplicity codes were analyzed and shown to be better than previously known in the context of list decoding and local list decoding.
Abstract: In this work, we show new and improved error-correcting properties of folded Reed-Solomon codes and multiplicity codes. Both of these families of codes are based on polynomials over finite fields, and both have been the sources of recent advances in coding theory. Folded Reed-Solomon codes were the first explicit constructions of codes known to achieve list-decoding capacity; multivariate multiplicity codes were the first constructions of high-rate locally correctable codes; and univariate multiplicity codes are also known to achieve list-decoding capacity. However, previous analyses of the error-correction properties of these codes did not yield optimal results. In particular, in the list-decoding setting, the guarantees on the list-sizes were polynomial in the block length, rather than constant; and for multivariate multiplicity codes, local list-decoding algorithms could not go beyond the Johnson bound. In this paper, we show that Folded Reed-Solomon codes and multiplicity codes are in fact better than previously known in the context of list decoding and local list-decoding. More precisely, we first show that Folded RS codes achieve list-decoding capacity with constant list sizes, independent of the block length; and that high-rate univariate multiplicity codes can also be list-recovered with constant list sizes. Using our result on univariate multiplicity codes, we show that multivariate multiplicity codes are high-rate, locally list-recoverable codes. Finally, we show how to combine the above results with standard tools to obtain capacity achieving locally list decodable codes with query complexity significantly lower than was known before.

1 citations


Posted Content
TL;DR: In this article, a polynomial time approximation algorithm for simultaneous Max-Cut with an approximation factor of $0.8780$ (for all constant $k$) was given, improving on the previous $1/2 - o(1)$ guarantee.
Abstract: In the simultaneous Max-Cut problem, we are given $k$ weighted graphs on the same set of $n$ vertices, and the goal is to find a cut of the vertex set so that the minimum, over the $k$ graphs, of the cut value is as large as possible. Previous work [BKS15] gave a polynomial time algorithm which achieved an approximation factor of $1/2 - o(1)$ for this problem (and an approximation factor of $1/2 + \epsilon_k$ in the unweighted case, where $\epsilon_k \rightarrow 0$ as $k \rightarrow \infty$). In this work, we give a polynomial time approximation algorithm for simultaneous Max-Cut with an approximation factor of $0.8780$ (for all constant $k$). The natural SDP formulation for simultaneous Max-Cut was shown to have an integrality gap of $1/2+\epsilon_k$ in [BKS15]. In achieving the better approximation guarantee, we use a stronger Sum-of-Squares hierarchy SDP relaxation and a rounding algorithm based on Raghavendra-Tan [RT12], in addition to techniques from [BKS15].