Author

Swastik Kopparty

Bio: Swastik Kopparty is an academic researcher from Rutgers University. The author has contributed to research in topics: List decoding & Reed–Muller code. The author has an h-index of 26 and has co-authored 118 publications receiving 2,429 citations. Previous affiliations of Swastik Kopparty include the Institute for Advanced Study and the University of California, Riverside.


Papers
Journal Article
01 Jan 2002-Scopus
TL;DR: Simulations show that incorporating TCP proxies is beneficial in terms of improving TCP performance in ad hoc networks, and the use of proxies improves the total throughput by as much as 30% in typical scenarios and reduces unfairness significantly.
Abstract: The fairness and throughput of TCP suffer when it is used in mobile ad hoc networks. This is because TCP wrongly attributes packet losses due to link failures (a consequence of mobility) to congestion. The resulting overall degradation of throughput especially affects connections with a large number of hops, where link failures are more likely; thus, short connections enjoy an unfair advantage. Furthermore, if the IEEE 802.11 MAC protocol is used, the problems are exacerbated due to the protocol-induced capture effect, leading to greater unfairness and a further throughput degradation. We develop a scheme, called split TCP, which separates the TCP functions of congestion control and reliable packet delivery. For any TCP connection, certain nodes along the route take up the role of being proxies for that connection. The proxies buffer packets upon receipt and administer rate control. The buffering enables dropped packets to be recovered from the most recent proxy. The rate control helps in controlling congestion on inter-proxy segments. Thus, we emulate shorter TCP connections and can thereby achieve better parallelism in the network. Simulations show that the use of proxies improves the total throughput by as much as 30% in typical scenarios and reduces unfairness significantly. In terms of an unfairness metric that we introduce, the unfairness decreases from 0.8 to 0.2 (1.0 being the maximum unfairness). We conclude that incorporating TCP proxies is beneficial in terms of improving TCP performance in ad hoc networks.
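For intuition, here is a minimal, self-contained Python sketch of the split-TCP idea described in this abstract: proxies along the route buffer packets, so a loss is retransmitted only over the segment where it occurred rather than end-to-end. The Proxy class, function names, and loss model are hypothetical illustrations, not the authors' simulation code.

```python
import random

class Proxy:
    """A node along the route that buffers packets (and could do local rate control)."""
    def __init__(self, name):
        self.name = name
        self.buffer = []  # packets buffered at this proxy

    def receive(self, packet):
        self.buffer.append(packet)

def crosses_segment(loss_prob, rng):
    """Model one inter-proxy hop; True if the packet got across."""
    return rng.random() >= loss_prob

def split_tcp_transfer(packets, route, loss_prob=0.2, seed=0):
    """Push packets proxy by proxy; a drop is retransmitted from the most
    recent proxy, i.e. only over the segment where the loss happened."""
    rng = random.Random(seed)
    segment_sends = 0
    for pkt in packets:
        for proxy in route:          # each hop ends at a proxy (or the destination)
            while True:
                segment_sends += 1
                if crosses_segment(loss_prob, rng):
                    proxy.receive(pkt)   # buffered here; the next hop starts here
                    break
    return segment_sends

if __name__ == "__main__":
    route = [Proxy("P1"), Proxy("P2"), Proxy("dst")]   # two proxies plus destination
    sends = split_tcp_transfer(list(range(10)), route)
    print(f"{sends} segment transmissions to deliver 10 packets over 3 segments")
```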

174 citations

Proceedings ArticleDOI
17 Nov 2002
TL;DR: In this paper, the authors develop a scheme, called split TCP, which separates the TCP functions of congestion control and reliable packet delivery, and they conclude that incorporating TCP proxies is beneficial in terms of improving TCP performance in ad hoc networks.
Abstract: The fairness and throughput of TCP suffer when it is used in mobile ad hoc networks. This is because TCP wrongly attributes packet losses due to link failures (a consequence of mobility) to congestion. The resulting overall degradation of throughput especially affects connections with a large number of hops, where link failures are more likely; thus, short connections enjoy an unfair advantage. Furthermore, if the IEEE 802.11 MAC protocol is used, the problems are exacerbated due to the protocol-induced capture effect, leading to greater unfairness and a further throughput degradation. We develop a scheme, called split TCP, which separates the TCP functions of congestion control and reliable packet delivery. For any TCP connection, certain nodes along the route take up the role of being proxies for that connection. The proxies buffer packets upon receipt and administer rate control. The buffering enables dropped packets to be recovered from the most recent proxy. The rate control helps in controlling congestion on inter-proxy segments. Thus, we emulate shorter TCP connections and can thereby achieve better parallelism in the network. Simulations show that the use of proxies improves the total throughput by as much as 30% in typical scenarios and reduces unfairness significantly. In terms of an unfairness metric that we introduce, the unfairness decreases from 0.8 to 0.2 (1.0 being the maximum unfairness). We conclude that incorporating TCP proxies is beneficial in terms of improving TCP performance in ad hoc networks.

166 citations

Journal ArticleDOI
TL;DR: The “method of multiplicities” is extended to get results, of interest in combinatorics and randomness extraction, showing that every Kakeya set in $\mathbb{F}_q^n$, the $n$-dimensional vector space over the finite field with $q$ elements, must be of size at least $q^n/2^n$.
Abstract: We extend the “method of multiplicities” to get the following results, of interest in combinatorics and randomness extraction. (i) We show that every Kakeya set in $\mathbb{F}_q^n$, the $n$-dimensional vector space over the finite field with $q$ elements, must be of size at least $q^n/2^n$. This bound is tight to within a $2+o(1)$ factor for every $n$ as $q\to\infty$. (ii) We give improved “randomness mergers”: Mergers are seeded functions that take as input $\ell$ (possibly correlated) random variables in $\{0,1\}^N$ and a short random seed and output a single random variable in $\{0,1\}^N$ that is statistically close to having entropy $(1-\delta)\cdot N$ when one of the $\ell$ input variables is distributed uniformly. The seed we require is only $(1/\delta)\cdot\log\ell$ bits long, which significantly improves upon previous constructions of mergers. (iii) We give improved randomness extractors, based on our improved mergers. Specifically, we show how to construct randomness extractors that use logarithmic ...
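The two quantitative claims (i) and (ii) above can be restated in display form as follows; the numeric instance of the Kakeya bound is ours, added only for illustration.

```latex
% Restatement of (i) and (ii) above; the numeric instance is illustrative only.
\begin{align*}
  \text{(i)}\quad  & |K| \;\ge\; \frac{q^n}{2^n}
        \quad \text{for every Kakeya set } K \subseteq \mathbb{F}_q^n, \\
                   & \text{tight to within a } (2+o(1)) \text{ factor as } q \to \infty;
        \ \text{e.g. } q = 9,\ n = 2 \text{ gives } |K| \ge 81/4 = 20.25,
        \text{ i.e. } |K| \ge 21 \text{ of the } 81 \text{ points.} \\[4pt]
  \text{(ii)}\quad & \text{a seed of } \tfrac{1}{\delta}\cdot\log\ell \text{ bits suffices to merge }
        \ell \text{ sources in } \{0,1\}^N \\
                   & \text{into one that is statistically close to entropy } (1-\delta)\cdot N .
\end{align*}
```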

103 citations

Journal ArticleDOI
TL;DR: It is shown that univariate multiplicity codes of rate $R$ over fields of prime order can be list-decoded from a $(1 - R - \varepsilon)$ fraction of errors in polynomial time (for constant $R, \varepsilon$).
Abstract: We study the list-decodability of multiplicity codes. These codes, which are based on evaluations of high-degree polynomials and their derivatives, have rate approaching 1 while simultaneously allowing for sublinear-time error correction. In this paper, we show that multiplicity codes also admit powerful list-decoding and local list-decoding algorithms that work even in the presence of a large error fraction. In other words, we give algorithms for recovering a polynomial given several evaluations of it and its derivatives, where possibly many of the given evaluations are incorrect. Our first main result shows that univariate multiplicity codes over fields of prime order can be list-decoded up to the so-called "list-decoding capacity." Specifically, we show that univariate multiplicity codes of rate $R$ over fields of prime order can be list-decoded from a $(1 - R - \varepsilon)$ fraction of errors in polynomial time (for constant $R, \varepsilon$). This resembles the behavior of the "Folded Reed-Solomon Codes" of Guruswami and Rudra (Trans. Info. Theory 2008). The list-decoding algorithm is based on constructing a differential equation of which the desired codeword is a solution; this differential equation is then solved using a power-series approach (a variation of Hensel lifting) along with other algebraic ideas. Our second main result is a list-decoding algorithm for decoding multivariate multiplicity codes up to their Johnson radius. The key ingredient of this algorithm is the construction of a special family of "algebraically repelling" curves passing through the points of $\mathbb{F}_q^m$; no moderate-degree multivariate polynomial over $\mathbb{F}_q^m$ can simultaneously vanish on all these curves.
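To make the contrast between the Johnson radius and the list-decoding capacity mentioned above concrete: for a code of rate $R$ and relative distance close to $1-R$ (as for high-order multiplicity codes), the standard expressions and one sample value are shown below; the numeric instance is ours, for illustration only.

```latex
% Decoding radii for a rate-R code of relative distance about 1 - R.
\begin{align*}
  \text{Johnson radius:}          \quad & 1 - \sqrt{1-\delta} \;\approx\; 1 - \sqrt{R}, \\
  \text{list-decoding capacity:}  \quad & 1 - R - \varepsilon, \\[4pt]
  \text{e.g. } R = \tfrac14:      \quad & 1 - \sqrt{\tfrac14} = \tfrac12
        \quad\text{versus}\quad 1 - \tfrac14 - \varepsilon = \tfrac34 - \varepsilon .
\end{align*}
```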

96 citations

Journal ArticleDOI
TL;DR: Multiplicity codes, introduced in this paper, are based on evaluating multivariate polynomials and their derivatives; they inherit the local decodability of traditional multivariate polynomial codes while achieving better tradeoffs and flexibility in the rate and minimum distance.
Abstract: Locally decodable codes are error-correcting codes that admit efficient decoding algorithms; any bit of the original message can be recovered by looking at only a small number of locations of a corrupted codeword. The tradeoff between the rate of a code and the locality/efficiency of its decoding algorithms has been well studied, and it has widely been suspected that nontrivial locality must come at the price of low rate. A particular setting of potential interest in practice is codes of constant rate. For such codes, decoding algorithms with locality $O(k^{\varepsilon})$ were known only for codes of rate $\varepsilon^{\Omega(1/\varepsilon)}$, where $k$ is the length of the message. Furthermore, for codes of rate > 1/2, no nontrivial locality had been achieved. In this article, we construct a new family of locally decodable codes that have very efficient local decoding algorithms, and at the same time have rate approaching 1. We show that for every $\varepsilon > 0$ and $\alpha > 0$, for infinitely many $k$, there exists a code $C$ which encodes messages of length $k$ with rate $1 - \alpha$, and is locally decodable from a constant fraction of errors using $O(k^{\varepsilon})$ queries and time. These codes, which we call multiplicity codes, are based on evaluating multivariate polynomials and their derivatives. Multiplicity codes extend traditional multivariate polynomial codes; they inherit the local-decodability of these codes, and at the same time achieve better tradeoffs and flexibility in the rate and minimum distance.
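As a concrete (if toy) illustration of the kind of encoding this abstract describes, the sketch below encodes a univariate multiplicity code of derivative order 2 over a small prime field: each codeword position carries the value of the polynomial and of its first derivative. The paper's construction is multivariate; the field size, degree, and function names here are illustrative assumptions only.

```python
# Toy encoder for a univariate multiplicity code of derivative order s = 2:
# each codeword position holds (f(a), f'(a)) for a point a of the prime
# field F_p.  Illustrative sketch only; parameters below are arbitrary.

def formal_derivative(coeffs, p):
    """Formal derivative of a polynomial given by coefficients c_0..c_d (mod p)."""
    return [(i * c) % p for i, c in enumerate(coeffs)][1:]

def evaluate(coeffs, x, p):
    """Horner evaluation of the polynomial at x over F_p."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % p
    return acc

def multiplicity_encode(coeffs, p):
    """Codeword: ((f(a), f'(a)) for every a in F_p)."""
    d1 = formal_derivative(coeffs, p)
    return [(evaluate(coeffs, a, p), evaluate(d1, a, p)) for a in range(p)]

if __name__ == "__main__":
    p = 13                       # small prime field, for illustration
    f = [3, 1, 4, 1, 5]          # f(x) = 3 + x + 4x^2 + x^3 + 5x^4 over F_13
    codeword = multiplicity_encode(f, p)
    # degree-4 message, 2 symbols per position, 13 positions:
    # rate = (deg + 1) / (s * p) = 5 / 26 in this toy instance.
    print(codeword)
```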

96 citations


Cited by
Journal ArticleDOI
01 Apr 1988-Nature
TL;DR: In this paper, a sedimentological core and petrographic characterisation of samples from eleven boreholes from the Lower Carboniferous of Bowland Basin (Northwest England) is presented.
Abstract: Deposits of clastic carbonate-dominated (calciclastic) sedimentary slope systems in the rock record have been identified mostly as linearly-consistent carbonate apron deposits, even though most ancient clastic carbonate slope deposits fit submarine fan systems better. Calciclastic submarine fans are consequently rarely described and are poorly understood, and very little is known especially about mud-dominated calciclastic submarine fan systems. Presented in this study are a sedimentological core and petrographic characterisation of samples from eleven boreholes from the Lower Carboniferous of the Bowland Basin (Northwest England) that reveal a >250 m thick calciturbidite complex deposited in a calciclastic submarine fan setting. Seven facies are recognised from core and thin-section characterisation and are grouped into three carbonate turbidite sequences: 1) calciturbidites, comprising mostly high- to low-density, wavy-laminated bioclast-rich facies; 2) low-density densite mudstones, characterised by planar-laminated and unlaminated mud-dominated facies; and 3) calcidebrites, which are muddy or hyper-concentrated debris-flow deposits occurring as poorly-sorted, chaotic, mud-supported floatstones.

9,929 citations

Proceedings ArticleDOI
22 Jan 2006
TL;DR: Some of the major results in random graphs and some of the more challenging open problems are reviewed, along with newer models, including those related to the WWW.
Abstract: We will review some of the major results in random graphs and some of the more challenging open problems. We will cover algorithmic and structural questions. We will touch on newer models, including those related to the WWW.

7,116 citations

Book ChapterDOI
01 Jun 2010
TL;DR: Encryption-decryption is the most ancient cryptographic activity, but its nature has deeply changed with the invention of computers, because cryptanalysis (the activity of the third party, the eavesdropper, who aims at recovering the message) can now exploit the power of computers.
Abstract: A fundamental objective of cryptography is to enable two persons to communicate over an insecure channel (a public channel such as the internet) in such a way that any other person is unable to recover their message (called the plaintext) from what is sent in its place over the channel (the ciphertext). The transformation of the plaintext into the ciphertext is called encryption, or enciphering. Encryption-decryption is the most ancient cryptographic activity (ciphers already existed four centuries B.C.), but its nature has deeply changed with the invention of computers, because cryptanalysis (the activity of the third party, the eavesdropper, who aims at recovering the message) can now exploit the power of computers. The encryption algorithm takes as input the plaintext and an encryption key K_E, and it outputs the ciphertext. If the encryption key is secret, then we speak of conventional cryptography, of private key cryptography, or of symmetric cryptography. In practice, the principle of conventional cryptography relies on the sharing of a private key between the sender of a message (often called Alice in cryptography) and its receiver (often called Bob). If, on the contrary, the encryption key is public, then we speak of public key cryptography. Public key cryptography appeared in the literature in the late 1970s.
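As a toy illustration of the shared-key (symmetric) setting sketched above, the snippet below has Alice and Bob use one shared key K_E for both encryption and decryption. It is a didactic XOR sketch of our own, not a construction from the chapter, and not secure practice unless the key is a true one-time pad.

```python
import secrets

# Toy illustration of symmetric (private-key) encryption: Alice and Bob share
# a secret key K_E, and the same key both encrypts and decrypts.  A raw XOR
# stream like this is only secure as a one-time pad; it is shown purely to
# illustrate the shared-key principle, not as a construction from the chapter.

def xor_with_key(data: bytes, key: bytes) -> bytes:
    """XOR each byte of data with the corresponding byte of the key."""
    return bytes(d ^ key[i % len(key)] for i, d in enumerate(data))

plaintext = b"meet me at noon"
K_E = secrets.token_bytes(len(plaintext))     # shared secret key (one-time pad)

ciphertext = xor_with_key(plaintext, K_E)     # Alice encrypts
recovered = xor_with_key(ciphertext, K_E)     # Bob decrypts with the same K_E

assert recovered == plaintext
print(ciphertext.hex())
```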

943 citations

Journal ArticleDOI
TL;DR: This highly successful textbook, widely regarded as the “bible of computer algebra”, gives a thorough introduction to the algorithmic basis of the mathematical engine in computer algebra systems.
Abstract: Computer algebra systems are now ubiquitous in all areas of science and engineering. This highly successful textbook, widely regarded as the “bible of computer algebra”, gives a thorough introduction to the algorithmic basis of the mathematical engine in computer algebra systems. Designed to accompany one- or two-semester courses for advanced undergraduate or graduate students in computer science or mathematics, its comprehensiveness and reliability have also made it an essential reference for professionals in the area. Special features include: detailed study of algorithms including time analysis; implementation reports on several topics; complete proofs of the mathematical underpinnings; and a wide variety of applications (among others, in chemistry, coding theory, cryptography, computational logic, and the design of calendars and musical scales). A great deal of historical information and illustration enlivens the text. In this third edition, errors have been corrected and much of the Fast Euclidean Algorithm chapter has been renovated.

937 citations

Book
12 Dec 2012
TL;DR: Laszlo Lovasz has written an admirable treatise on the exciting new theory of graph limits and graph homomorphisms, an area of great importance in the study of large networks.
Abstract: Recently, it became apparent that a large number of the most interesting structures and phenomena of the world can be described by networks. To develop a mathematical theory of very large networks is an important challenge. This book describes one recent approach to this theory, the limit theory of graphs, which has emerged over the last decade. The theory has rich connections with other approaches to the study of large networks, such as "property testing" in computer science and regularity partition in graph theory. It has several applications in extremal graph theory, including the exact formulations and partial answers to very general questions, such as which problems in extremal graph theory are decidable. It also has less obvious connections with other parts of mathematics (classical and non-classical, like probability theory, measure theory, tensor algebras, and semidefinite optimization). This book explains many of these connections, first at an informal level to emphasize the need to apply more advanced mathematical methods, and then gives an exact development of the theory of the algebraic theory of graph homomorphisms and of the analytic theory of graph limits. This is an amazing book: readable, deep, and lively. It sets out this emerging area, makes connections between old classical graph theory and graph limits, and charts the course of the future. --Persi Diaconis, Stanford University This book is a comprehensive study of the active topic of graph limits and an updated account of its present status. It is a beautiful volume written by an outstanding mathematician who is also a great expositor. --Noga Alon, Tel Aviv University, Israel Modern combinatorics is by no means an isolated subject in mathematics, but has many rich and interesting connections to almost every area of mathematics and computer science. The research presented in Lovasz's book exemplifies this phenomenon. This book presents a wonderful opportunity for a student in combinatorics to explore other fields of mathematics, or conversely for experts in other areas of mathematics to become acquainted with some aspects of graph theory. --Terence Tao, University of California, Los Angeles, CA Laszlo Lovasz has written an admirable treatise on the exciting new theory of graph limits and graph homomorphisms, an area of great importance in the study of large networks. It is an authoritative, masterful text that reflects Lovasz's position as the main architect of this rapidly developing theory. The book is a must for combinatorialists, network theorists, and theoretical computer scientists alike. --Bela Bollobas, Cambridge University, UK

896 citations