
Showing papers in "IEEE Transactions on Information Theory in 1978"


Journal ArticleDOI
TL;DR: The proposed concept of compressibility is shown to play a role analogous to that of entropy in classical information theory where one deals with probabilistic ensembles of sequences rather than with individual sequences.
Abstract: Compressibility of individual sequences by the class of generalized finite-state information-lossless encoders is investigated. These encoders can operate in a variable-rate mode as well as a fixed-rate one, and they allow for any finite-state scheme of variable-length-to-variable-length coding. For every individual infinite sequence x a quantity \rho(x) is defined, called the compressibility of x , which is shown to be the asymptotically attainable lower bound on the compression ratio that can be achieved for x by any finite-state encoder. This is demonstrated by means of a constructive coding theorem and its converse that, apart from their asymptotic significance, also provide useful performance criteria for finite and practical data-compression tasks. The proposed concept of compressibility is also shown to play a role analogous to that of entropy in classical information theory where one deals with probabilistic ensembles of sequences rather than with individual sequences. While the definition of \rho(x) allows a different machine for each different sequence to be compressed, the constructive coding theorem leads to a universal algorithm that is asymptotically optimal for all sequences.
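The universal algorithm referred to here is the incremental parsing now known as LZ78. As a hypothetical minimal sketch (the function name is mine, and this shows only the parsing step, not the authors' full encoder), each phrase extends a previously seen phrase by one new symbol:

```python
def lz78_parse(s):
    """Incremental (LZ78-style) parse of a string: each phrase is a
    previously seen phrase plus one new symbol. Illustrative sketch."""
    dictionary = {"": 0}          # phrase -> index; index 0 is the empty phrase
    phrases = []                  # list of (prefix index, new symbol)
    w = ""
    for ch in s:
        if w + ch in dictionary:
            w += ch               # keep extending the current match
        else:
            phrases.append((dictionary[w], ch))
            dictionary[w + ch] = len(dictionary)
            w = ""
    if w:                         # leftover tail: reference an existing phrase
        phrases.append((dictionary[w], ""))
    return phrases
```

The number of phrases grows slowly for highly regular sequences, which is the mechanism behind the attainable compression-ratio bound.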

3,753 citations


Journal ArticleDOI
TL;DR: Given two discrete memoryless channels (DMC's) with a common input, a single-letter characterization is given of the achievable triples where R_{e} is the equivocation rate and the related source-channel matching problem is settled.
Abstract: Given two discrete memoryless channels (DMC's) with a common input, it is desired to transmit private messages to receiver 1 at rate R_{1} and common messages to both receivers at rate R_{o} , while keeping receiver 2 as ignorant of the private messages as possible. Measuring ignorance by equivocation, a single-letter characterization is given of the achievable triples (R_{1},R_{e},R_{o}) , where R_{e} is the equivocation rate. Based on this channel coding result, the related source-channel matching problem is also settled. These results generalize those of Wyner on the wiretap channel and of Korner-Marton on the broadcast channel.

3,570 citations


Journal ArticleDOI
TL;DR: Wyner's results for discrete memoryless wire-tap channels are extended and it is shown that the secrecy capacity C_{s} is the difference between the capacities of the main and wire-tap channels.
Abstract: Wyner's results for discrete memoryless wire-tap channels are extended to the Gaussian wire-tap channel. It is shown that the secrecy capacity C_{s} is the difference between the capacities of the main and wire-tap channels. It is further shown that R_{d}=C_{s} is the upper boundary of the achievable rate-equivocation region.

2,079 citations


Journal ArticleDOI
TL;DR: The fact that the general decoding problem for linear codes and the general problem of finding the weights of a linear code are both NP-complete is shown strongly suggests, but does not rigorously imply, that no algorithm for either of these problems which runs in polynomial time exists.
Abstract: The fact that the general decoding problem for linear codes and the general problem of finding the weights of a linear code are both NP-complete is shown. This strongly suggests, but does not rigorously imply, that no algorithm for either of these problems which runs in polynomial time exists.

1,541 citations


Journal ArticleDOI
TL;DR: An improved algorithm is derived which requires O(\log^{2} p) complexity if p-1 has only small prime factors; such values of p must be avoided in the cryptosystem.
Abstract: A cryptographic system is described which is secure if and only if computing logarithms over GF(p) is infeasible. Previously published algorithms for computing this function require O(p^{1/2}) complexity in both time and space. An improved algorithm is derived which requires O(\log^{2} p) complexity if p-1 has only small prime factors. Such values of p must be avoided in the cryptosystem. Constructive uses for the new algorithm are also described.
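The improved algorithm is the one now known as the Pohlig-Hellman method. A minimal Python sketch, under the stated assumption that the factorization of p-1 into small prime powers is supplied (function and argument names are mine):

```python
def discrete_log(g, h, p, factors):
    """Solve g^x = h (mod p) given p-1 = prod q**e over `factors` {q: e}.
    Sketch of the Pohlig-Hellman idea; practical only when every q is small."""
    n = p - 1
    residues, moduli = [], []
    for q, e in factors.items():
        qe = q ** e
        gq = pow(g, n // qe, p)            # generator of the order-q**e subgroup
        hq = pow(h, n // qe, p)
        gamma = pow(gq, q ** (e - 1), p)   # element of order q
        x = 0
        for k in range(e):                 # recover x mod q**e digit by digit
            t = pow(pow(gq, -x, p) * hq % p, q ** (e - 1 - k), p)
            d = next(d for d in range(q) if pow(gamma, d, p) == t)
            x += d * q ** k
        residues.append(x)
        moduli.append(qe)
    x, m = 0, 1                            # combine via Chinese remaindering
    for r, mi in zip(residues, moduli):
        x += m * ((r - x) * pow(m, -1, mi) % mi)
        m *= mi
    return x % m
```

With p = 31 and generator 3, discrete_log(3, pow(3, 17, 31), 31, {2: 1, 3: 1, 5: 1}) recovers 17; each subgroup step costs at most q trial exponentiations, which is the small-factor speedup described.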

1,292 citations


Journal ArticleDOI
TL;DR: Specific instances of the knapsack problem that appear very difficult to solve unless one possesses "trapdoor information" used in the design of the problem are demonstrated.
Abstract: The knapsack problem is an NP-complete combinatorial problem that is strongly believed to be computationally difficult to solve in general. Specific instances of this problem that appear very difficult to solve unless one possesses "trapdoor information" used in the design of the problem are demonstrated. Because only the designer can easily solve these problems, others can send him information hidden in the solution to the problems without fear that an eavesdropper will be able to extract the information. This approach differs from usual cryptographic systems in that a secret key is not needed. Conversely, only the designer can generate signatures for messages, but anyone can easily check their authenticity.
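The construction below is a toy version of this trapdoor-knapsack idea: a superincreasing sequence (easy to solve greedily) is disguised by modular multiplication. All names and parameter sizes are made up for illustration, and the scheme as shown is not secure.

```python
from math import gcd
import random

def keygen(n=8, seed=1):
    """Toy trapdoor-knapsack keypair: superincreasing b, modulus m > sum(b),
    multiplier w coprime to m; public weights a_i = w*b_i mod m. Toy sketch."""
    rng = random.Random(seed)
    b, total = [], 0
    for _ in range(n):
        nxt = total + rng.randint(1, 10)   # superincreasing: b_i > sum of earlier
        b.append(nxt)
        total += nxt
    m = total + rng.randint(1, 10)
    w = rng.randrange(2, m)
    while gcd(w, m) != 1:
        w = rng.randrange(2, m)
    return [(w * bi) % m for bi in b], (b, m, w)

def encrypt(bits, a):
    return sum(ai for ai, bit in zip(a, bits) if bit)

def decrypt(c, priv):
    b, m, w = priv
    s = (c * pow(w, -1, m)) % m            # undo the modular disguise
    bits = []
    for bi in reversed(b):                 # greedy solve: easy because b is
        if s >= bi:                        # superincreasing
            bits.append(1)
            s -= bi
        else:
            bits.append(0)
    return bits[::-1]
```

Anyone can encrypt with the public weights, but only the holder of (b, m, w) can invert easily, which is exactly the asymmetry the abstract describes.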

828 citations


Journal ArticleDOI
TL;DR: General bounds on the capacity region are obtained for discrete memoryless interference channels and for linear-superposition interference channels with additive white Gaussian noise.
Abstract: An interference channel is a communication medium shared by M sender-receiver pairs. Transmission of information from each sender to its corresponding receiver interferes with the communications between the other senders and their receivers. This corresponds to a frequent situation in communications, and defines an M -dimensional capacity region. In this paper, we obtain general bounds on the capacity region for discrete memoryless interference channels and for linear-superposition interference channels with additive white Gaussian noise. The capacity region is determined in special cases.

804 citations


Journal ArticleDOI
TL;DR: It is shown that soft decision maximum likelihood decoding of any (n,k) linear block code over GF(q) can be accomplished using the Viterbi algorithm applied to a trellis with no more than q^{(n-k)} states.
Abstract: It is shown that soft decision maximum likelihood decoding of any (n,k) linear block code over GF(q) can be accomplished using the Viterbi algorithm applied to a trellis with no more than q^{n-k} states. For cyclic codes, the trellis is periodic. When this technique is applied to the decoding of product codes, the number of states in the trellis can be much smaller than q^{n-k} . For a binary (n,n-1) single parity check code, the Viterbi algorithm is equivalent to the Wagner decoding algorithm.
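For the single parity check case, the Wagner rule mentioned at the end admits a one-line description: hard-decide every bit, and if overall (even) parity fails, flip the least reliable position. A sketch under the assumption of BPSK-style soft inputs whose sign carries the hard decision:

```python
def wagner_decode(received):
    """Wagner decoding of a binary (n, n-1) single parity check code.
    `received` holds soft values; sign -> hard bit, magnitude -> reliability."""
    bits = [1 if r < 0 else 0 for r in received]
    if sum(bits) % 2 != 0:  # even-parity check fails
        worst = min(range(len(received)), key=lambda i: abs(received[i]))
        bits[worst] ^= 1    # flip the least reliable decision
    return bits
```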

612 citations


Journal ArticleDOI
TL;DR: Four new results about Huffman codes are presented and a simple algorithm for adapting a Huffman code to slowly varying estimates of the source probabilities is presented.
Abstract: In honor of the twenty-fifth anniversary of Huffman coding, four new results about Huffman codes are presented. The first result shows that a binary prefix condition code is a Huffman code iff the intermediate and terminal nodes in the code tree can be listed by nonincreasing probability so that each node in the list is adjacent to its sibling. The second result upper bounds the redundancy (expected length minus entropy) of a binary Huffman code by P_{1}+ \log_{2}[2(\log_{2}e)/e]=P_{1}+0.086 , where P_{1} is the probability of the most likely source letter. The third result shows that one can always leave a codeword of length two unused and still have a redundancy of at most one. The fourth result is a simple algorithm for adapting a Huffman code to slowly varying estimates of the source probabilities. In essence, one maintains a running count of uses of each node in the code tree and lists the nodes in order of these counts. Whenever the occurrence of a message increases a node count above the count of the next node in the list, the nodes, with their attached subtrees, are interchanged.
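The second result (the P_{1}+0.086 redundancy bound) is easy to check numerically. A sketch that recovers Huffman codeword lengths by repeated merging (the example distribution is made up for illustration):

```python
import heapq
from itertools import count
from math import log2

def huffman_lengths(probs):
    """Codeword lengths of an optimal binary (Huffman) code, obtained by
    merging the two least probable subtrees until one tree remains."""
    tie = count()                    # tie-breaker so the heap never compares lists
    heap = [(p, next(tie), [i]) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    lengths = [0] * len(probs)
    while len(heap) > 1:
        p1, _, s1 = heapq.heappop(heap)
        p2, _, s2 = heapq.heappop(heap)
        for i in s1 + s2:            # each merge adds one bit to every
            lengths[i] += 1          # symbol under the new internal node
        heapq.heappush(heap, (p1 + p2, next(tie), s1 + s2))
    return lengths

probs = [0.5, 0.2, 0.15, 0.1, 0.05]
L = sum(p * l for p, l in zip(probs, huffman_lengths(probs)))
H = -sum(p * log2(p) for p in probs)
# the redundancy L - H should not exceed P_1 + 0.086
```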

593 citations


Journal ArticleDOI
TL;DR: Levin has shown that if \tilde{P}'_{M}(x) is an unnormalized form of this measure, and P(x) is any computable probability measure on strings x , then \tilde{P}'_{M}(x) \geq CP(x) , where C is a constant independent of x .
Abstract: In 1964 the author proposed as an explication of {\em a priori} probability the probability measure induced on output strings by a universal Turing machine with unidirectional output tape and a randomly coded unidirectional input tape. Levin has shown that if \tilde{P}'_{M}(x) is an unnormalized form of this measure, and P(x) is any computable probability measure on strings, x , then \tilde{P}'_{M}(x) \geq CP(x) where C is a constant independent of x . The corresponding result for the normalized form of this measure, P'_{M} , is directly derivable from Willis' probability measures on nonuniversal machines. If the conditional probabilities of P'_{M} are used to approximate those of P , then the expected value of the total squared error in these conditional probabilities is bounded by -(1/2) \ln C . With this error criterion, and when used as the basis of a universal gambling scheme, P'_{M} is superior to Cover's measure b\ast . When H\ast\equiv -\log_{2} P'_{M} is used to define the entropy of a finite sequence, the equation H\ast(x,y)= H\ast(x)+H^{\ast}_{x}(y) holds exactly, in contrast to Chaitin's entropy definition, which has a nonvanishing error term in this equation.

427 citations


Journal ArticleDOI
TL;DR: The quantum analog of the classical paraxial diffraction theory for quasimonochromatic scalar waves is developed, which describes the propagation of arbitrary quantum states as a boundary-value problem suitable for communication system analysis.
Abstract: Recent theoretical work has shown that novel quantum states, called two-photon coherent states (TCS), have significant potential for improving free-space optical communications. The first part of a three-part study of the communication theory of TCS radiation is presented. The issues of quantum-field propagation and optimum quantum-state generation are addressed. In particular, the quantum analog of the classical paraxial diffraction theory for quasimonochromatic scalar waves is developed. This result, which describes the propagation of arbitrary quantum states as a boundary-value problem suitable for communication system analysis, is used to treat a number of quantum transmitter optimization problems. It is shown that, under near-field propagation conditions, a TCS transmitter maximizes field-measurement signal-to-noise ratio among all transmitter quantum states; the performance of the TCS system exceeds that for a conventional (coherent state) transmitter by a factor of N_{s} + 1 , where N_{s} is the average number of signal photons (transmitter energy constraint). Under far-field propagation conditions, it is shown that use of a TCS local oscillator in the receiver can, in principle, attenuate field-measurement quantum noise by a factor equal to the diffraction loss of the channel, if appropriate spatial mode mixing can be achieved. These communication results are derived by assuming that field-quadrature quantum measurement is performed. In part II of this study, photoemissive reception of TCS radiation will be considered; it will be shown therein that homodyne detection of TCS fields can realize the field-quadrature signal-to-noise ratio performance of part I. In part III, the relationships between photoemissive detection and general quantum measurements will be explored. In particular, a synthesis procedure will be obtained for realizing all the measurements described by arbitrary TCS.

Journal ArticleDOI
TL;DR: Given a finite number of quantum states with {\em a priori} probabilities, the positive operator-valued measure that maximizes the Shannon mutual information is investigated and the group covariant case is examined in detail.
Abstract: Given a finite number of quantum states with {\em a priori} probabilities, the positive operator-valued measure that maximizes the Shannon mutual information is investigated. The group covariant case is examined in detail.

Journal ArticleDOI
TL;DR: A new class of complementary codes, similar to the complementary series of Golay but having multiphase elements, has been found to exist with specific complementary aperiodic complex autocorrelation functions.
Abstract: A new class of complementary codes, similar to the complementary series of Golay but having multiphase elements, has been found to exist with specific complementary aperiodic complex autocorrelation functions. These new codes, called multiphase complementary codes, form a class of generalized complementary codes, of which the Golay complementary series can be considered to be a particular biphase subclass. Unlike Golay pairs, kernels of the new codes exist for odd length and can be synthesized. These new codes, like Golay pairs, are characterized by mathematical symmetries that may not be initially obvious because of apparent disorder. Multiphase complementary codes can be recursively expanded and used to form orthogonal or complementary sets of sequences. These complementary sets are not constrained to have even length or even cardinality. In addition, certain generalized Barker codes can also be utilized to form complementary sets with unique properties.

Journal ArticleDOI
TL;DR: An outer bound to the capacity region of broadcast channels for the transmission of separate messages is derived by taking into consideration the total throughput information rate and the fact that thecapacity region depends only on the marginal transition probabilities.
Abstract: An outer bound to the capacity region of broadcast channels for the transmission of separate messages is derived. This bound improves upon Cover's outer bound by taking into consideration the total throughput information rate and the fact that the capacity region depends only on the marginal transition probabilities.

Journal ArticleDOI
TL;DR: The fact that the capacity region of the discrete memoryless physically degraded broadcast channel is not increased by feedback is established.
Abstract: The fact that the capacity region of the discrete memoryless physically degraded broadcast channel is not increased by feedback is established.

Journal ArticleDOI
TL;DR: The finite-state complexity of a sequence plays a role similar to that of entropy in classical information theory (which deals with probabilistic ensembles of sequences rather than an individual sequence).
Abstract: A quantity called the {\em finite-state} complexity is assigned to every infinite sequence of elements drawn from a finite set. This quantity characterizes the largest compression ratio that can be achieved in accurate transmission of the sequence by any finite-state encoder (and decoder). Coding theorems and converses are derived for an individual sequence without any probabilistic characterization, and universal data compression algorithms are introduced that are asymptotically optimal for all sequences over a given alphabet. The finite-state complexity of a sequence plays a role similar to that of entropy in classical information theory (which deals with probabilistic ensembles of sequences rather than an individual sequence). For a probabilistic source, the expectation of the finite-state complexity of its sequences is equal to the source's entropy. The finite-state complexity is of particular interest when the source statistics are unspecified.

Journal ArticleDOI
TL;DR: Shannon's technique of guessing the next symbol to bound the entropy of printed English is replaced by sequential betting; the resulting estimate converges to the true entropy with probability one for ergodic processes by the Shannon-McMillan-Breiman theorem.
Abstract: In his original paper on the subject, Shannon found upper and lower bounds for the entropy of printed English based on the number of trials required for a subject to guess subsequent symbols in a given text. The guessing approach precludes asymptotic consistency of either the upper or lower bounds except for degenerate ergodic processes. Shannon's technique of guessing the next symbol is altered by having the subject place sequential bets on the next symbol of text. If S_{n} denotes the subject's capital after n bets at 27 for 1 odds, and if it is assumed that the subject knows the underlying probability distribution for the process X , then the entropy estimate is \hat{H}_{n}(X)=(1-(1/n) \log_{27}S_{n}) \log_{2} 27 bits/symbol. If the subject does not know the true probability distribution for the stochastic process, then \hat{H}_{n}(X) is an asymptotic upper bound for the true entropy. If X is stationary, E\hat{H}_{n}(X) \rightarrow H(X) , H(X) being the true entropy of the process. Moreover, if X is ergodic, then by the Shannon-McMillan-Breiman theorem \hat{H}_{n}(X) \rightarrow H(X) with probability one. Preliminary indications are that English text has an entropy of approximately 1.3 bits/symbol, which agrees well with Shannon's estimate.
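The estimator itself is a one-liner; a sketch with hypothetical capital trajectories (the function name and inputs are mine):

```python
from math import log2

def entropy_estimate(capitals, alphabet=27):
    """Gambling estimate of entropy: H_hat = (1 - (1/n) log_a S_n) log2(a)
    bits/symbol, where S_n is the capital after n bets at a-for-1 odds."""
    n = len(capitals)
    s_n = capitals[-1]
    log_a_sn = log2(s_n) / log2(alphabet)   # log base `alphabet` of S_n
    return (1 - log_a_sn / n) * log2(alphabet)
```

A gambler whose capital never grows (S_n = 1) yields the maximal estimate log2(27) ≈ 4.75 bits/symbol; a perfect predictor betting everything on the correct symbol each time (S_n = 27^n) yields 0.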

Journal ArticleDOI
TL;DR: A table is given of differential entropies for various continuous probability distributions that are of use in the calculation of rate-distortion functions and in some statistical applications.
Abstract: A table is given of differential entropies for various continuous probability distributions. The formulas, some of which are new, are of use in the calculation of rate-distortion functions and in some statistical applications.
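As an example of one such table entry, the differential entropy of a Gaussian density is (1/2)\log_{2}(2\pi e\sigma^{2}) bits; a sketch checking the closed form against direct numerical integration of -\int f \log_{2} f (function names are mine):

```python
from math import pi, e, log2, sqrt, exp

def gaussian_diff_entropy(sigma):
    """Closed-form differential entropy of N(0, sigma^2), in bits."""
    return 0.5 * log2(2 * pi * e * sigma ** 2)

def numeric_diff_entropy(sigma, lo=-50.0, hi=50.0, steps=200_000):
    """Midpoint-rule evaluation of -integral f(x) log2 f(x) dx."""
    dx = (hi - lo) / steps
    total = 0.0
    for k in range(steps):
        x = lo + (k + 0.5) * dx
        f = exp(-x * x / (2 * sigma ** 2)) / (sigma * sqrt(2 * pi))
        if f > 0.0:           # far tails underflow to 0; their contribution -> 0
            total -= f * log2(f) * dx
    return total
```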


Journal ArticleDOI
TL;DR: Improved bounds are presented for A(n,d), the maximum number of codewords in a (linear or nonlinear) binary code of word length n and minimum distance d, and for A(n,d,w), the constant-weight analogue.
Abstract: Improved bounds for A(n,d) , the maximum number of codewords in a (linear or nonlinear) binary code of word length n and minimum distance d , and for A(n,d,w) , the maximum number of binary vectors of length n , distance d , and constant weight w in the range n \leq 24 and d \leq 10 are presented. Some of the new values are A

Journal ArticleDOI
TL;DR: A unified approach to weak universal block source coding is obtained by constructing a universal sequence of block codes for coding a class of ergodic sources with assumptions made on the alphabets, distortion measures, and class of sources.
Abstract: A new method of constructing a universal sequence of block codes for coding a class of ergodic sources is given. With this method, a weakly universal sequence of codes is constructed for variable-rate noiseless coding and for fixed- and variable-rate coding with respect to a fidelity criterion. In this way a unified approach to weak universal block source coding is obtained. For noiseless variable-rate coding and for fixed-rate coding with respect to a fidelity criterion, the assumptions made on the alphabets, distortion measures, and class of sources are both necessary and sufficient. For fixed-rate coding with respect to a fidelity criterion, the sample distortion of the universal code sequence converges in L^{1} norm for each source to the optimum distortion for that source. For both variable-rate noiseless coding and variable-rate coding with respect to a fidelity criterion, the sample rate of the universal code sequence converges in L^{1} norm for each source to the optimum rate for that source. Using this fact, a universal sequence of codes for fixed-rate noiseless coding is obtained. Some applications to stationary nonergodic sources are also considered. The results of Davisson, Ziv, Neuhoff, Gray, Pursley, and Mackenthun are extended.

Journal ArticleDOI
TL;DR: A new class of codes in signal space is presented, and their error and spectral properties are investigated, and power spectral density curves show that this type of coding does not increase the transmitted signal bandwidth.
Abstract: A new class of codes in signal space is presented, and their error and spectral properties are investigated. A constant-amplitude continuous-phase signal carries a coded sequence of linear-phase changes; the possible signal phases form a cylindrical trellis in phase and time. Simple codes using 4-16 phases, together with a Viterbi algorithm decoder, allow transmitter power savings of 2-4 dB over binary phase-shift keying in a narrower bandwidth. A method is given to compute the free distance, and the error rates of all the useful codes are given. A software-instrumented decoder is tested on a simulated Gaussian channel to determine multiple error patterns. The error parameter R_{o} is computed for a somewhat more general class of codes and is shown to increase rapidly when more phases are employed. Finally, power spectral density curves are presented for several codes, which show that this type of coding does not increase the transmitted signal bandwidth.

Journal ArticleDOI
Luc Devroye1
TL;DR: Under various noise conditions, it is shown that the estimates are strongly uniformly consistent and can be exploited to design a simple random search algorithm for the global minimization of the regression function.
Abstract: A class of nonparametric regression function estimates generalizing the nearest neighbor estimate of Cover [12] is presented. Under various noise conditions, it is shown that the estimates are strongly uniformly consistent. The uniform convergence of the estimates can be exploited to design a simple random search algorithm for the global minimization of the regression function.

Journal ArticleDOI
TL;DR: It is proved that for the two-class case, the I_{2} bound is sharper than many of the previously known bounds.
Abstract: The basic properties of Renyi's entropy are reviewed, and its concavity properties are characterized. New bounds (referred to as I_{\alpha} bounds) on the probability of error are derived from Renyi's entropy and are compared with known bounds. It is proved that for the two-class case, the I_{2} bound is sharper than many of the previously known bounds. The difference between the I_{2} bound and the real value of the probability of error is at most 0.09.

Journal ArticleDOI
TL;DR: An outer bound utilizing the capacity region of the corresponding broadcast channel is obtained and a region including both is introduced by using frequency division multiplexing.
Abstract: Several bounds to the capacity region of a degraded Gaussian channel are studied. An outer bound utilizing the capacity region of the corresponding broadcast channel is obtained. Two achievable regions obtained previously are compared, and a region including both is introduced by using frequency division multiplexing.

Journal ArticleDOI
TL;DR: Various criteria for a given sampling scheme to be alias-free in the new sense are developed and the relationship of the new definition to the question of estimating the spectral density function \Phi(\lambda) of the continuous-time process X from its samples is discussed.
Abstract: A new concept of alias-free sampling of continuous-time processes X = \{X(t), -\infty < t < \infty\} is introduced. The new concept is shown to be distinct from the traditional concept [1]-[3]. Various criteria for a given sampling scheme \{t_{n}\} to be alias-free in the new sense are developed. The relationship of the new definition to the question of estimating the spectral density function \Phi(\lambda) of the continuous-time process X from its samples \{X(t_{n})\} is discussed.

Journal ArticleDOI
TL;DR: It is shown that, for all computable probability distributions, the universal prefix codes associated with the conditional Chaitin complexity have expected codeword length within a constant of the Shannon entropy.
Abstract: It is known that the expected codeword length L_{UD} of the best uniquely decodable (UD) code satisfies H(X) \leq L_{UD} . Let X be a random variable which can take on n values. Then it is shown that the average codeword length L_{1:1} for the best one-to-one (not necessarily uniquely decodable) code for X is shorter than the average codeword length L_{UD} for the best uniquely decodable code by no more than (\log_{2} \log_{2} n)+3 . Let Y be a random variable taking on a finite or countable number of values and having entropy H . Then it is proved that L_{1:1} \geq H-\log_{2}(H+1)-\log_{2}\log_{2}(H+1)-\cdots-6 . Some relations are established among the Kolmogorov, Chaitin, and extension complexities. Finally it is shown that, for all computable probability distributions, the universal prefix codes associated with the conditional Chaitin complexity have expected codeword length within a constant of the Shannon entropy.
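The optimal one-to-one code is explicit: rank outcomes by probability and hand out the nonempty binary strings 0, 1, 00, 01, ... in order, so the i-th most likely value gets length \lfloor\log_{2}(i+1)\rfloor. A sketch of its average length (the function name is mine):

```python
from math import floor, log2

def one_to_one_length(probs):
    """Average codeword length of the best one-to-one (not necessarily
    uniquely decodable) binary code for the given distribution."""
    ranked = sorted(probs, reverse=True)   # most probable value first
    return sum(p * floor(log2(i + 1)) for i, p in enumerate(ranked, start=1))
```

For a uniform distribution on three values this gives (1+1+2)/3 ≈ 1.33 bits, already below the entropy log2(3) ≈ 1.58, illustrating how one-to-one codes can beat the UD lower bound H(X).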

Journal ArticleDOI
TL;DR: It is shown that the covering radius r_{m} of the first-order Reed-Muller code of length 2^{m} satisfies 2^{m-1}-2^{\lceil m/2 \rceil -1} \leq r_{m} \leq 2^{m-1}-2^{m/2-1} .
Abstract: Upper bounds on the covering radius of binary codes are studied. In particular it is shown that the covering radius r_{m} of the first-order Reed-Muller code of length 2^{m} satisfies 2^{m-1}-2^{\lceil m/2 \rceil -1} \leq r_{m} \leq 2^{m-1}-2^{m/2-1} .

Journal ArticleDOI
TL;DR: The estimates are shown to be consistent under mild smoothness conditions on the spectral density and it is shown that the periodograms of the two classes have distinct statistics.
Abstract: A class of spectral estimates of continuous-time stationary stochastic processes X(t) from a finite number of observations \{X(t_{n})\}^{N}_{n=1} taken at Poisson sampling instants \{t_{n}\} is considered. The asymptotic bias and covariance of the estimates are derived, and the influence of the spectral windows and the sampling rate on the performance of the estimates is discussed. The estimates are shown to be consistent under mild smoothness conditions on the spectral density. Comparison is made with a related class of spectral estimates suggested in [15] where the number of observations is {\em random}. It is shown that the periodograms of the two classes have distinct statistics.

Journal ArticleDOI
TL;DR: A new way of treating extraneous or nuisance parameters in applying the Cramer-Rao bound is presented and new bounds are obtained for the variance of estimates of arrival times for the Rayleigh channel.
Abstract: A new way of treating extraneous or nuisance parameters in applying the Cramer-Rao bound is presented. The modified method produces substantially tighter bounds in two important applications. In particular, new bounds are obtained for the variance of estimates of arrival times for the Rayleigh channel.