Journal ISSN: 1551-305X

Foundations and Trends in Theoretical Computer Science 

Now Publishers
About: Foundations and Trends in Theoretical Computer Science is an academic journal published by Now Publishers. The journal publishes mainly in the areas of computer science and communication complexity. Its ISSN is 1551-305X, and it is open access. Over its lifetime it has published 10 papers, which have received 2023 citations. The journal is also known as: Theoretical computer science.

Papers
Journal Article
TL;DR: Data Streams: Algorithms and Applications surveys the emerging area of algorithms for processing data streams and their applications; the underlying methods rely on metric embeddings, pseudo-random computations, sparse approximation theory, and communication complexity.
Abstract: In the data stream scenario, input arrives very rapidly and there is only limited memory to store it. Algorithms have to work with one or a few passes over the data, space less than linear in the input size, or time significantly less than the input size. In the past few years, a new theory has emerged for reasoning about algorithms that work within these constraints on space, time, and number of passes. Some of the methods rely on metric embeddings, pseudo-random computations, sparse approximation theory, and communication complexity. The applications for this scenario include IP network traffic analysis, mining text message streams, and processing massive data sets in general. Researchers in Theoretical Computer Science, Databases, IP Networking, and Computer Systems are working on the data stream challenges. This article is an overview and survey of data stream algorithmics and is an updated version of [1].
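
For a concrete feel for these constraints, the sketch below implements the Misra-Gries frequent-items summary, a standard one-pass, small-memory streaming algorithm of the kind the survey covers. It is an illustrative example chosen here, not code from the article.

```python
# Misra-Gries frequent-items summary: one pass over the stream, O(k) memory.
# A standard illustration of the streaming constraints described above;
# this sketch is not taken from the survey itself.

def misra_gries(stream, k):
    """Return candidate items whose frequency may exceed len(stream)/k."""
    counters = {}  # at most k-1 counters at any time
    for item in stream:
        if item in counters:
            counters[item] += 1
        elif len(counters) < k - 1:
            counters[item] = 1
        else:
            # Decrement every counter; drop those that reach zero.
            for key in list(counters):
                counters[key] -= 1
                if counters[key] == 0:
                    del counters[key]
    return counters

# Any item occurring more than n/k times in a stream of length n survives.
print(misra_gries("abracadabra", k=3))  # 'a' (5 of 11 occurrences) is kept
```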

1,598 citations

Journal Article
TL;DR: This set of notes gives several applications of a two-part paradigm: design a probabilistic algorithm whose analysis remains valid assuming only limited independence among its random variables, and then design a small probability space in which the variables are sufficiently independent for that analysis to go through.
Abstract: This set of notes gives several applications of the following paradigm. The paradigm consists of two complementary parts. The first part is to design a probabilistic algorithm described by a sequence of random variables so that the analysis is valid assuming limited independence between the random variables. The second part is the design of a small probability space for the random variables such that they are somewhat independent of each other. Thus, the analysis of the algorithm holds even when the random variables used by the algorithm are generated according to the small space.
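
As an illustration of the second part of the paradigm, the sketch below shows the canonical small probability space: the pairwise-independent hash family h(x) = (ax + b) mod p. This is a standard textbook construction included here for concreteness, not code taken from the notes.

```python
# A minimal sketch of the canonical small probability space with pairwise
# independence: h(x) = (a*x + b) mod p over a prime field. Sampling (a, b)
# uses only about 2*log2(p) random bits, yet the values h(0), ..., h(p-1)
# are uniform and pairwise independent. Not taken from the notes themselves.
import random

P = 101  # a prime; illustrative choice of field size

def sample_hash(p=P):
    """Draw one function from the pairwise-independent family over GF(p)."""
    a = random.randrange(p)
    b = random.randrange(p)
    return lambda x: (a * x + b) % p

# An analysis that only needs Pr[h(x)=u and h(y)=v] = 1/p^2 for x != y
# remains valid when h is drawn from this space of just p^2 functions,
# instead of from the much larger space of all p^p functions.
h = sample_hash()
print([h(x) for x in range(10)])
```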

160 citations

Journal Article
TL;DR: The authors survey the average-case complexity of problems in NP, discuss various notions of good-on-average algorithms, and present completeness results due to Impagliazzo and Levin.
Abstract: We survey the average-case complexity of problems in NP. We discuss various notions of good-on-average algorithms, and present completeness results due to Impagliazzo and Levin. Such completeness results establish that if a certain specific (but somewhat artificial) NP problem is easy-on-average with respect to the uniform distribution, then all problems in NP are easy-on-average with respect to all samplable distributions. Applying the theory to natural distributional problems remains an outstanding open question. We review some natural distributional problems whose average-case complexity is of particular interest and that do not yet fit into this theory. A major open question is whether the existence of hard-on-average problems in NP can be based on the P ≠ NP assumption or on related worst-case assumptions. We review negative results showing that certain proof techniques cannot prove such a result. While the relation between worst-case and average-case complexity for general NP problems remains open, there has been progress in understanding the relation between different "degrees" of average-case complexity. We discuss some of these "hardness amplification" results.
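
For concreteness, one standard formalization of "easy-on-average" used in this line of work is Levin's notion of average polynomial time; the statement below is a reminder of that definition, not a quotation from the survey.

```latex
% Levin's average polynomial time (one standard formalization of
% "easy-on-average"; stated here for concreteness, not quoted from the survey).
% An algorithm $A$ with running time $t_A(x)$ runs in average polynomial time
% with respect to the distribution ensemble $\{D_n\}$ if for some $\varepsilon > 0$
\[
  \mathbb{E}_{x \sim D_n}\bigl[\, t_A(x)^{\varepsilon} \bigr] = O(n).
\]
% The exponent $\varepsilon$ makes the definition robust under polynomial-time
% reductions, unlike the naive requirement that $\mathbb{E}[t_A(x)]$ be polynomial.
```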

105 citations

Journal Article
TL;DR: This monograph presents techniques to approximate real functions such as x^s, x^{-1} and e^{-x} by simpler functions, and shows how these results can be used in the design of fast algorithms.
Abstract: This monograph presents techniques to approximate real functions such as x^s, x^{-1} and e^{-x} by simpler functions and shows how these results can be used for the design of fast algorithms. The key lies in the fact that such results imply faster ways to approximate primitives such as A^s v, A^{-1} v and exp(-A) v, and to compute matrix eigenvalues and eigenvectors. Indeed, many fast algorithms reduce to the computation of such primitives, which have proved useful for speeding up several fundamental computations such as random walk simulation, graph partitioning and solving linear systems of equations.
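
To illustrate the connection between approximating e^{-x} by a polynomial and approximating the primitive exp(-A)v, the sketch below uses a plain truncated Taylor series and only matrix-vector products. The monograph develops much sharper approximations, so this is purely an illustrative baseline, not the monograph's method.

```python
# Illustration of the primitive exp(-A) v computed with only matrix-vector
# products, via a truncated Taylor series. A good polynomial approximation to
# e^{-x} immediately yields a fast approximation to exp(-A) v; the monograph's
# techniques give far better polynomials than this naive series.
import numpy as np

def expm_neg_times_vec(A, v, degree=20):
    """Approximate exp(-A) @ v using `degree` matrix-vector products."""
    term = v.astype(float)
    result = term.copy()
    for k in range(1, degree + 1):
        term = -(A @ term) / k          # next Taylor term: (-A)^k v / k!
        result += term
    return result

# Example on a small symmetric matrix; compare against a dense reference.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
A = A @ A.T / 5                          # positive semidefinite, modest norm
v = rng.standard_normal(5)
approx = expm_neg_times_vec(A, v)
# Dense reference via eigendecomposition (only feasible for tiny matrices).
w, U = np.linalg.eigh(A)
exact = U @ (np.exp(-w) * (U.T @ v))
print(np.linalg.norm(approx - exact))    # small for matrices of modest norm
```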

79 citations

Journal Article
Sergey Yekhanin
TL;DR: Locally decodable codes are codes intended to address the seeming conflict between efficient retrievability and reliability, by allowing reliable reconstruction of an arbitrary data bit from only a small number of randomly chosen codeword bits.
Abstract: Over 60 years of research in coding theory, which began with the works of Shannon and Hamming, have given us nearly optimal ways to add redundancy to messages, encoding bit strings representing messages into longer bit strings called codewords, in a way that the message can still be recovered even if a certain fraction of the codeword bits are corrupted. Classical error-correcting codes, however, do not work well when messages are modern massive datasets, because their decoding time increases (at least) linearly with the length of the message. As a result, in typical applications, large datasets are first partitioned into small blocks, each of which is then encoded separately. Such encoding allows efficient random-access retrieval of the data, but yields poor noise resilience. Locally decodable codes are codes intended to address this seeming conflict between efficient retrievability and reliability. They are codes that simultaneously provide efficient random-access retrieval and high noise resilience by allowing reliable reconstruction of an arbitrary data bit from looking at only a small number of randomly chosen codeword bits. Apart from the natural applications to data transmission and storage, such codes have important applications in cryptography and computational complexity theory. This review introduces and motivates locally decodable codes, and discusses the central results of the subject. Locally Decodable Codes assumes basic familiarity with the properties of finite fields and is otherwise self-contained. It will benefit computer scientists, electrical engineers, and mathematicians with an interest in coding theory.
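
The sketch below illustrates local decoding with the classic Hadamard code, the standard introductory example of a locally decodable code: recovering any message bit takes only two queries to the (possibly corrupted) codeword. It is included here for illustration and is not code from the review.

```python
# Minimal sketch of local decoding with the classic Hadamard code: a k-bit
# message x is encoded as the 2^k inner products <x, y> mod 2. To recover bit
# x_i from a corrupted codeword, query only two positions, y and y XOR e_i,
# for random y; their XOR equals x_i whenever neither query hit a corruption.
import random

def hadamard_encode(x_bits):
    """Codeword indexed by y in {0,1}^k: position y holds <x, y> mod 2."""
    k = len(x_bits)
    return [sum(x_bits[j] & ((y >> j) & 1) for j in range(k)) % 2
            for y in range(2 ** k)]

def local_decode_bit(codeword, k, i):
    """Recover message bit i with 2 queries; succeeds with probability
    at least 1 - 2*delta if a delta fraction of positions are corrupted."""
    y = random.randrange(2 ** k)
    return codeword[y] ^ codeword[y ^ (1 << i)]

# Example: corrupt a few positions, then read one message bit locally.
msg = [1, 0, 1, 1]
cw = hadamard_encode(msg)
for pos in (3, 7):          # flip 2 of the 16 codeword positions
    cw[pos] ^= 1
print(local_decode_bit(cw, k=4, i=2))   # equals msg[2] with probability >= 3/4
```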

54 citations

Performance Metrics

No. of papers from the Journal in previous years:

Year    Papers
2022    3
2016    1
2014    1
2013    1
2011    1
2006    2