Author

Kenneth Zeger

Bio: Kenneth Zeger is an academic researcher from the University of California, San Diego. The author has contributed to research in topics including linear network coding and linear codes. The author has an h-index of 45 and has co-authored 168 publications receiving 8,539 citations. Previous affiliations of Kenneth Zeger include the University of California, Santa Barbara and the University of California, Los Angeles.


Papers
Journal Article
TL;DR: An efficient closest point search algorithm, based on the Schnorr-Euchner (1995) variation of the Pohst (1981) method, is implemented and is shown to be substantially faster than other known methods.
Abstract: In this semitutorial paper, a comprehensive survey of closest point search methods for lattices without a regular structure is presented. The existing search strategies are described in a unified framework, and differences between them are elucidated. An efficient closest point search algorithm, based on the Schnorr-Euchner (1995) variation of the Pohst (1981) method, is implemented. Given an arbitrary point x ∈ ℝ^m and a generator matrix for a lattice Λ, the algorithm computes the point of Λ that is closest to x. The algorithm is shown to be substantially faster than other known methods, by means of a theoretical comparison with the Kannan (1983, 1987) algorithm and an experimental comparison with the Pohst (1981) algorithm and its variants, such as the Viterbo-Boutros decoder (ibid., vol. 45, pp. 1639-1642, 1999). Modifications of the algorithm are developed to solve a number of related search problems for lattices, such as finding a shortest vector, determining the kissing number, computing the Voronoi (1908)-relevant vectors, and finding a Korkine-Zolotareff (1873) reduced basis.
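To make the closest point problem concrete, here is a minimal Python sketch; it is not the paper's Schnorr-Euchner implementation, and the generator matrix and query point are made up for illustration. It compares Babai's rounding heuristic with a brute-force search over a small box of integer coefficients, which is only feasible in tiny dimensions.

```python
# Illustrative sketch of the closest-lattice-point problem (NOT the paper's
# Schnorr-Euchner/Pohst algorithm). Lattice points are u @ G for integer row
# vectors u; we compare a rounding heuristic against brute-force enumeration.
import itertools
import numpy as np

def babai_rounding(G, x):
    """Babai's rounding heuristic: round the real coefficients of x."""
    u = np.rint(x @ np.linalg.inv(G))
    return u @ G

def brute_force_closest(G, x, radius=3):
    """Exact closest point over coefficients in [-radius, radius]^n.
    Exponential in the dimension; usable only for tiny examples."""
    n = G.shape[0]
    best, best_d = None, np.inf
    for u in itertools.product(range(-radius, radius + 1), repeat=n):
        p = np.asarray(u, dtype=float) @ G
        d = float(np.sum((p - x) ** 2))
        if d < best_d:
            best, best_d = p, d
    return best

G = np.array([[1.0, 0.3], [0.2, 1.1]])   # toy generator matrix (rows = basis vectors)
x = np.array([0.9, -0.4])                # arbitrary query point
print("Babai approximation :", babai_rounding(G, x))
print("Exact closest point :", brute_force_closest(G, x))
```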

1,616 citations

Journal Article
TL;DR: It is shown that the network coding capacity of this counterexample network is strictly greater than the maximum linear coding capacity over any finite field, so the network is not even asymptotically linearly solvable.
Abstract: It is known that every solvable multicast network has a scalar linear solution over a sufficiently large finite-field alphabet. It is also known that this result does not generalize to arbitrary networks. There are several examples in the literature of solvable networks with no scalar linear solution over any finite field. However, each example has a linear solution for some vector dimension greater than one. It has been conjectured that every solvable network has a linear solution over some finite-field alphabet and some vector dimension. We provide a counterexample to this conjecture. We also show that if a network has no linear solution over any finite field, then it has no linear solution over any finite commutative ring with identity. Our counterexample network has no linear solution even in the more general algebraic context of modules, which includes as special cases all finite rings and Abelian groups. Furthermore, we show that the network coding capacity of this network is strictly greater than the maximum linear coding capacity over any finite field (exactly 10% greater), so the network is not even asymptotically linearly solvable. It follows that, even for more general versions of linearity such as convolutional coding, filter-bank coding, or linear time sharing, the network has no linear solution.
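For contrast with the counterexample, the sketch below shows what a scalar linear solution over a finite field looks like on the classic butterfly multicast network; this is not the counterexample network from the paper, just the standard GF(2) XOR coding at the bottleneck edge.

```python
# Scalar linear network coding over GF(2) on the classic butterfly network:
# two source bits a and b, one bottleneck edge, and two sinks that each need
# both bits. This is NOT the paper's counterexample network; it only shows
# what a "scalar linear solution over a finite field" means.

def butterfly_multicast(a: int, b: int):
    """Source bits a, b in {0, 1}; each sink must recover the pair (a, b)."""
    bottleneck = a ^ b                      # middle node forwards the GF(2) sum
    sink1 = (a, a ^ bottleneck)             # sink 1 sees a directly, decodes b
    sink2 = (b ^ bottleneck, b)             # sink 2 sees b directly, decodes a
    return sink1, sink2

for a in (0, 1):
    for b in (0, 1):
        s1, s2 = butterfly_multicast(a, b)
        assert s1 == (a, b) and s2 == (a, b)
print("Both sinks recover (a, b) for every input over GF(2).")
```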

422 citations

Journal Article
TL;DR: The binary switching algorithm is introduced, based on the objective of minimizing a useful upper bound on the average system distortion, which yields a significant reduction in average distortion, and converges in reasonable running times.
Abstract: A pseudo-Gray code is an assignment of n-bit binary indexes to 2^n points in a Euclidean space so that the Hamming distance between the indexes of two points corresponds closely to the Euclidean distance between the points. Pseudo-Gray coding provides a redundancy-free error protection scheme for vector quantization (VQ) of analog signals when the binary indexes are used as channel symbols on a discrete memoryless channel and the points are signal codevectors. Binary indexes are assigned to codevectors in a way that reduces the average quantization distortion introduced in the reproduced source vectors when a transmitted index is corrupted by channel noise. A globally optimal solution to this problem is generally intractable due to an inherently large computational complexity. A locally optimal solution, the binary switching algorithm, is introduced, based on the objective of minimizing a useful upper bound on the average system distortion. The algorithm yields a significant reduction in average distortion and converges in reasonable running times. The use of pseudo-Gray coding is motivated by the increasing need for low-bit-rate VQ-based encoding systems that operate on noisy channels, such as in mobile radio speech communications.
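As a rough illustration of the index assignment problem, here is a toy Python sketch in the spirit of the binary switching algorithm. The single-bit-error distortion estimate and the greedy pair-swap loop are simplifications chosen for illustration, not the paper's exact upper-bound objective or search schedule.

```python
# Toy index assignment for noisy-channel VQ, in the spirit of the binary
# switching algorithm (simplified greedy pair-swap, not the paper's method).
import itertools
import numpy as np

def expected_distortion(codevectors, assignment, p=0.01):
    """Expected squared error when the n-bit index of a codevector suffers a
    single bit flip with probability p per bit (higher-order errors ignored)."""
    n_bits = int(np.log2(len(codevectors)))
    total = 0.0
    for i, idx in enumerate(assignment):
        for b in range(n_bits):
            j = assignment.index(idx ^ (1 << b))   # codevector decoded instead
            total += p * np.sum((codevectors[i] - codevectors[j]) ** 2)
    return total / len(codevectors)

def greedy_switch(codevectors, assignment):
    """Swap the indexes of two codevectors whenever the swap lowers distortion."""
    improved = True
    while improved:
        improved = False
        for i, j in itertools.combinations(range(len(assignment)), 2):
            trial = assignment[:]
            trial[i], trial[j] = trial[j], trial[i]
            if expected_distortion(codevectors, trial) < expected_distortion(codevectors, assignment):
                assignment, improved = trial, True
    return assignment

rng = np.random.default_rng(0)
cv = rng.normal(size=(8, 2))                  # 8 toy codevectors -> 3-bit indexes
print("optimized index assignment:", greedy_switch(cv, list(range(8))))
```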

411 citations

Journal Article
TL;DR: This work defines principal curves as continuous curves of a given length which minimize the expected squared distance between the curve and points of the space randomly chosen according to a given distribution, a definition that makes it possible to theoretically analyze principal curve learning from training data and also leads to a new practical construction.
Abstract: Principal curves have been defined as "self-consistent" smooth curves which pass through the "middle" of a d-dimensional probability distribution or data cloud. They give a summary of the data and also serve as an efficient feature extraction tool. We take a new approach by defining principal curves as continuous curves of a given length which minimize the expected squared distance between the curve and points of the space randomly chosen according to a given distribution. The new definition makes it possible to theoretically analyze principal curve learning from training data and it also leads to a new practical construction. Our theoretical learning scheme chooses a curve from a class of polygonal lines with k segments and with a given total length to minimize the average squared distance over n training points drawn independently. Convergence properties of this learning scheme are analyzed and a practical version of this theoretical algorithm is implemented. In each iteration of the algorithm, a new vertex is added to the polygonal line and the positions of the vertices are updated so that they minimize a penalized squared distance criterion. Simulation results demonstrate that the new algorithm compares favorably with previous methods, both in terms of performance and computational complexity, and is more robust to varying data models.
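The criterion in this definition is easy to state in code. The sketch below only evaluates the average squared distance from sample points to a fixed polygonal line (the data and vertices are made up); it does not implement the paper's vertex-update learning algorithm.

```python
# Sketch of the objective in the paper's definition: the average squared
# distance from data points to a polygonal curve with fixed vertices.
# This evaluates the criterion only; it is not the learning algorithm.
import numpy as np

def squared_distance_to_segment(p, a, b):
    """Squared distance from point p to the segment [a, b]."""
    ab, ap = b - a, p - a
    t = np.clip(np.dot(ap, ab) / np.dot(ab, ab), 0.0, 1.0)
    proj = a + t * ab
    return float(np.sum((p - proj) ** 2))

def avg_squared_distance(points, vertices):
    """Average over points of the squared distance to the polygonal line."""
    d = 0.0
    for p in points:
        d += min(squared_distance_to_segment(p, vertices[i], vertices[i + 1])
                 for i in range(len(vertices) - 1))
    return d / len(points)

rng = np.random.default_rng(1)
t = rng.uniform(0, 2 * np.pi, size=200)
points = np.c_[np.cos(t), np.sin(t)] + 0.1 * rng.normal(size=(200, 2))  # noisy circle
vertices = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])  # crude 3-segment polygon
print("average squared distance:", avg_squared_distance(points, vertices))
```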

359 citations

Journal Article
TL;DR: Cascading an existing image coder with carefully chosen error control coding yields a progressive image compression scheme whose performance on a noisy channel is significantly better than that of previously known techniques; effectively no degradation due to channel noise can be detected.
Abstract: We cascade an existing image coder with carefully chosen error control coding, and thus produce a progressive image compression scheme whose performance on a noisy channel is significantly better than that of previously known techniques. The main idea is to trade off the available transmission rate between source coding and channel coding in an efficient manner. This coding system is easy to implement and has acceptably low complexity. Furthermore, effectively no degradation due to channel noise can be detected; instead, the penalty paid due to channel noise is a reduction in source coding resolution. Detailed numerical comparisons are given that can serve as benchmarks for comparisons with future encoding schemes. For example, for the 512×512 Lena image, at a transmission rate of 1 bit/pixel, and for binary symmetric channels with bit error probabilities 10^-3, 10^-2, and 10^-1, the proposed system outperforms previously reported results by at least 2.6, 2.8, and 8.9 dB, respectively.
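The rate tradeoff at the heart of the scheme can be illustrated with a toy model. The distortion-rate formula, the crude decoding-failure model, and all numbers below are assumptions made for illustration and are not taken from the paper.

```python
# Toy illustration of splitting a fixed transmission rate between source bits
# and channel-code redundancy on a binary symmetric channel. Assumptions (not
# from the paper): D(R) = 2**(-2R) for a unit-variance source, and a crude
# model in which a rate-r code corrects about (1 - r)/2 errors per bit.
import math

def block_failure_prob(code_rate, bit_error_prob, block_len=1000):
    """Hoeffding bound on P(#channel errors > correctable errors) per block."""
    correctable = int(block_len * (1.0 - code_rate) / 2.0)
    mean_errors = block_len * bit_error_prob
    if correctable <= mean_errors:
        return 1.0
    return math.exp(-2.0 * (correctable - mean_errors) ** 2 / block_len)

def expected_distortion(total_rate, code_rate, bit_error_prob):
    source_rate = total_rate * code_rate            # bits/sample left for the source
    d_clean = 2.0 ** (-2.0 * source_rate)           # distortion if the block decodes
    p_fail = block_failure_prob(code_rate, bit_error_prob)
    return (1.0 - p_fail) * d_clean + p_fail * 1.0  # failure -> full source variance

best = min((expected_distortion(1.0, r / 100.0, 1e-2), r / 100.0)
           for r in range(10, 100))
print(f"best channel-code rate ~ {best[1]:.2f}, expected distortion ~ {best[0]:.4f}")
```

Sweeping the channel-code rate this way mirrors the paper's main idea: spend just enough of the bit budget on error protection that decoding failures become rare, and give everything else to source resolution.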

342 citations


Cited by
Book
01 Jan 1996
TL;DR: The Bayes error and Vapnik-Chervonenkis theory are applied as a guide for empirical classifier selection, on the basis of explicit specification and explicit enforcement of the maximum likelihood principle.
Abstract: Preface * Introduction * The Bayes Error * Inequalities and alternate distance measures * Linear discrimination * Nearest neighbor rules * Consistency * Slow rates of convergence * Error estimation * The regular histogram rule * Kernel rules * Consistency of the k-nearest neighbor rule * Vapnik-Chervonenkis theory * Combinatorial aspects of Vapnik-Chervonenkis theory * Lower bounds for empirical classifier selection * The maximum likelihood principle * Parametric classification * Generalized linear discrimination * Complexity regularization * Condensed and edited nearest neighbor rules * Tree classifiers * Data-dependent partitioning * Splitting the data * The resubstitution estimate * Deleted estimates of the error probability * Automatic kernel rules * Automatic nearest neighbor rules * Hypercubes and discrete spaces * Epsilon entropy and totally bounded sets * Uniform laws of large numbers * Neural networks * Other error estimates * Feature extraction * Appendix * Notation * References * Index

3,598 citations

Journal Article
TL;DR: This work presents a distributed random linear network coding approach for transmission and compression of information in general multisource multicast networks, and shows that this approach can take advantage of redundant network capacity for improved success probability and robustness.
Abstract: We present a distributed random linear network coding approach for transmission and compression of information in general multisource multicast networks. Network nodes independently and randomly select linear mappings from inputs onto output links over some field. We show that this achieves capacity with probability exponentially approaching 1 with the code length. We also demonstrate that random linear coding performs compression when necessary in a network, generalizing error exponents for linear Slepian-Wolf coding in a natural way. Benefits of this approach are decentralized operation and robustness to network changes or link failures. We show that this approach can take advantage of redundant network capacity for improved success probability and robustness. We illustrate some potential advantages of random linear network coding over routing in two examples of practical scenarios: distributed network operation and networks with dynamically varying connections. Our derivation of these results also yields a new bound on required field size for centralized network coding on general multicast networks
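The core mechanism can be sketched in a few lines. The example below is simplified to a single hop over GF(2) (the paper works over larger fields and general multisource multicast networks) and simply estimates how often random coefficient matrices are decodable.

```python
# Simplified sketch of random linear network coding: a receiver collects random
# GF(2) linear combinations of k source packets and can decode exactly when the
# coefficient matrix has rank k. This toy version estimates that probability
# empirically; the paper's construction covers general multicast networks.
import random

def gf2_rank(vectors, k):
    """Rank over GF(2) of row vectors given as k-bit integers (XOR basis)."""
    basis = [0] * k              # basis[b] holds a vector whose leading bit is b
    rank = 0
    for v in vectors:
        while v:
            b = v.bit_length() - 1
            if basis[b] == 0:
                basis[b] = v
                rank += 1
                break
            v ^= basis[b]
    return rank

def decoding_success_prob(k, received, trials=20000):
    """Empirical probability that `received` random combinations span GF(2)^k."""
    wins = 0
    for _ in range(trials):
        coeffs = [random.getrandbits(k) for _ in range(received)]
        if gf2_rank(coeffs, k) == k:
            wins += 1
    return wins / trials

for extra in range(3):
    p = decoding_success_prob(8, 8 + extra)
    print(f"k=8 packets, {8 + extra} received combinations: success ~ {p:.3f}")
```

Over GF(2) a square random matrix is invertible only about 29% of the time, which is one way to see why the success probability in the paper improves with larger field sizes, longer codes, or redundant received packets.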

2,806 citations

Book
16 Jan 2012
TL;DR: This book gives a comprehensive treatment of network information theory and its applications, providing the first unified coverage of both classical and recent results, including successive cancellation and superposition coding, MIMO wireless communication, network coding, and cooperative relaying.
Abstract: This comprehensive treatment of network information theory and its applications provides the first unified coverage of both classical and recent results. With an approach that balances the introduction of new models and new coding techniques, readers are guided through Shannon's point-to-point information theory, single-hop networks, multihop networks, and extensions to distributed computing, secrecy, wireless communication, and networking. Elementary mathematical tools and techniques are used throughout, requiring only basic knowledge of probability, whilst unified proofs of coding theorems are based on a few simple lemmas, making the text accessible to newcomers. Key topics covered include successive cancellation and superposition coding, MIMO wireless communication, network coding, and cooperative relaying. Also covered are feedback and interactive communication, capacity approximations and scaling laws, and asynchronous and random access channels. This book is ideal for use in the classroom, for self-study, and as a reference for researchers and engineers in industry and academia.

2,442 citations

Journal Article

2,415 citations

Journal Article
16 Dec 2011 - Science
TL;DR: A measure of dependence for two-variable relationships: the maximal information coefficient (MIC), which captures a wide range of associations both functional and not, and for functional relationships provides a score that roughly equals the coefficient of determination of the data relative to the regression function.
Abstract: Identifying interesting relationships between pairs of variables in large data sets is increasingly important. Here, we present a measure of dependence for two-variable relationships: the maximal information coefficient (MIC). MIC captures a wide range of associations both functional and not, and for functional relationships provides a score that roughly equals the coefficient of determination (R2) of the data relative to the regression function. MIC belongs to a larger class of maximal information-based nonparametric exploration (MINE) statistics for identifying and classifying relationships. We apply MIC and MINE to data sets in global health, gene expression, major-league baseball, and the human gut microbiota and identify known and novel relationships.
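The grid-based idea behind MIC can be sketched simply. The version below only varies equal-width grid resolutions and normalizes mutual information by log2 of the smaller grid dimension, whereas the published MIC also optimizes grid cell boundaries and bounds the total grid size; it is an illustration of the idea, not the MINE/MIC statistic itself.

```python
# Simplified MIC-like score: mutual information of the empirical joint
# distribution on equal-width grids, normalized by log2(min(nx, ny)), maximized
# over a range of resolutions. The real MIC also optimizes grid boundaries and
# restricts the total number of cells; this is an illustrative approximation.
import numpy as np

def grid_mutual_information(x, y, nx, ny):
    joint, _, _ = np.histogram2d(x, y, bins=(nx, ny))
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

def mic_like_score(x, y, max_bins=16):
    best = 0.0
    for nx in range(2, max_bins + 1):
        for ny in range(2, max_bins + 1):
            mi = grid_mutual_information(x, y, nx, ny)
            best = max(best, mi / np.log2(min(nx, ny)))
    return best

rng = np.random.default_rng(2)
x = rng.uniform(-1, 1, 500)
print("noisy parabola :", round(mic_like_score(x, x**2 + 0.1 * rng.normal(size=500)), 3))
print("independent    :", round(mic_like_score(x, rng.normal(size=500)), 3))
```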

2,414 citations