
Showing papers in "IEEE Transactions on Information Theory in 1966"


Journal Article
TL;DR: Run-length encodings are used to determine the explicit form of the Huffman code when it is applied to the geometric distribution.

Abstract: Run-length encodings: determining the explicit form of Huffman coding when applied to the geometric distribution.
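For illustration of the construction this abstract refers to, here is a minimal Python sketch of a Golomb-style run-length encoder for a geometric run-length distribution; the parameter rule p^m + p^(m+1) <= 1 and the unary-plus-truncated-binary layout are the standard conventions assumed here, not text taken from the paper.

def golomb_parameter(p):
    # smallest m with p**m + p**(m+1) <= 1: the classical optimality condition
    # for run lengths that are geometric with "continue" probability p in (0, 1)
    m = 1
    while p ** m + p ** (m + 1) > 1:
        m += 1
    return m

def golomb_encode(n, m):
    # encode a nonnegative run length n: quotient in unary, remainder in
    # truncated binary
    q, r = divmod(n, m)
    out = "1" * q + "0"                 # unary part
    if m == 1:
        return out
    k = (m - 1).bit_length()            # ceil(log2(m))
    cutoff = (1 << k) - m               # first `cutoff` remainders use k - 1 bits
    if r < cutoff:
        return out + format(r, "0{}b".format(k - 1))
    return out + format(r + cutoff, "0{}b".format(k))

if __name__ == "__main__":
    p = 0.9                             # probability that a run continues
    m = golomb_parameter(p)             # m = 7 for p = 0.9
    for n in (0, 3, 10):
        print(n, golomb_encode(n, m))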

980 citations


Journal ArticleDOI
TL;DR: To obtain the reduction, the authors use operational relations [10]; the integral with the special parameters of (11) has previously been recognized as a Q function [12]-[14], so that the reduction is essentially complete.
Abstract: explicitly evaluable functions. For example, the M-ary error probability P_{E}(M) is expressed as a quadrature in Lindsey's equation (17), rewritten here as (5), where, following Lindsey, h^{2}/2 has been replaced by L to simplify the notation. From the series form of \Phi_{3}, it is obvious that the integral gives an additional double-series numerator parameter, equation (6). A complete set of recursion relations for F_{1} when one parameter at a time changes has been given by Le Vavasseur [8]. It is a simple matter to derive the necessary change for this two-parameter case, but Le Vavasseur has included this as one of several examples, so that we have at once equations (8) and (9), which is equivalent to a result of Price [9], who has derived a number of expressions for these and related integrals. Note that the derivation above is, thus far, much simpler and more straightforward than the admirably executed tours de force of previous derivations. However, the last step, viz., recognizing the form of the result, is automatically accomplished in the other derivations, and is much the harder part in the hypergeometric case. To obtain the reduction, we use operational relations [10]; the integral with the special parameters of (11) has been previously recognized as a Q function [12]-[14], so that the reduction is essentially complete.

814 citations


Journal ArticleDOI
TL;DR: A new distance measure is introduced which permits likelihood information to be used in algebraic minimum distance decoding techniques and an efficient decoding algorithm is given, and exponential bounds on the probability of not decoding correctly are developed.
Abstract: We introduce a new distance measure which permits likelihood information to be used in algebraic minimum distance decoding techniques. We give an efficient decoding algorithm, and develop exponential bounds on the probability of not decoding correctly. In one application, this technique yields the same probability of error as maximum likelihood decoding.
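As a purely schematic illustration of how likelihood (reliability) information can enter a minimum-distance decoder, here is a hedged Python sketch of a trial-erasure strategy: the least reliable symbols are successively erased and an errors-and-erasures decoder (an assumed hook, decode_with_erasures) is re-run. The acceptance rule used here, keeping the candidate of smallest reliability-weighted distance, is a simplification and not necessarily the paper's algorithm.

def gmd_decode(received, reliabilities, decode_with_erasures, d_min):
    # received             -- list of hard-decision symbols
    # reliabilities        -- nonnegative reliability weight per symbol
    # decode_with_erasures -- callable(word with None marking erasures) -> codeword or None
    # d_min                -- minimum distance of the underlying block code
    order = sorted(range(len(received)), key=lambda i: reliabilities[i])
    best = None
    for n_erased in range(0, d_min, 2):          # try 0, 2, 4, ... erasures
        trial = list(received)
        for i in order[:n_erased]:
            trial[i] = None                      # erase the least reliable symbols
        candidate = decode_with_erasures(trial)
        if candidate is None:
            continue
        # reliability-weighted (generalized) distance to the received word
        dist = sum(reliabilities[i]
                   for i in range(len(received)) if candidate[i] != received[i])
        if best is None or dist < best[0]:
            best = (dist, candidate)
    return None if best is None else best[1]

if __name__ == "__main__":
    # toy demo: length-5 repetition code, d_min = 5, majority vote over unerased bits
    def rep_decode(word):
        seen = [b for b in word if b is not None]
        bit = int(sum(seen) * 2 >= len(seen)) if seen else 0
        return [bit] * 5
    r = [1, 0, 1, 0, 0]
    rel = [0.9, 0.1, 0.8, 0.2, 0.1]              # confident 1s, weak 0s
    print(gmd_decode(r, rel, rep_decode, d_min=5))   # -> [1, 1, 1, 1, 1]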

614 citations


Journal ArticleDOI
TL;DR: This paper presents a coding scheme that exploits the feedback to achieve considerable reductions in coding and decoding complexity and delay over what would be needed for comparable performance with the best known (simplex) codes for the one-way channel.
Abstract: In some communication problems, it is a good assumption that the channel consists of an additive white Gaussian noise forward link and an essentially noiseless feedback link. In this paper, we study channels where no bandwidth constraint is placed on the transmitted signals. Such channels arise in space communications. It is known that the availability of the feedback link cannot increase the channel capacity of the noisy forward link, but it can considerably reduce the coding effort required to achieve a given level of performance. We present a coding scheme that exploits the feedback to achieve considerable reductions in coding and decoding complexity and delay over what would be needed for comparable performance with the best known (simplex) codes for the one-way channel. Our scheme, which was motivated by the Robbins-Monro stochastic approximation technique, can also be used over channels where the additive noise is not Gaussian but is still independent from instant to instant. An extension of the scheme for channels with limited signal bandwidth is presented in a companion paper (Part II).
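A toy simulation, under assumptions of my own (unit transmit scaling, 1/k gains), of the stochastic-approximation flavor of such a feedback scheme: the receiver's running estimate is fed back noiselessly, the transmitter sends a scaled version of the remaining error, and the receiver applies Robbins-Monro-style corrections. This is a sketch of the idea only; the paper's actual gain schedule and performance analysis are not reproduced here.

import random

def feedback_transmit(theta, n_iters=50, noise_std=1.0, power=1.0):
    # theta is the message point in [0, 1] the transmitter wants to convey
    estimate = 0.5                              # receiver's initial guess
    for k in range(1, n_iters + 1):
        error = theta - estimate                # known to the transmitter via noiseless feedback
        x = power * error                       # transmitted signal
        y = x + random.gauss(0.0, noise_std)    # additive white Gaussian forward channel
        estimate += y / (power * k)             # Robbins-Monro style 1/k correction
    return estimate

if __name__ == "__main__":
    random.seed(1)
    print(feedback_transmit(0.3))               # approaches 0.3 as n_iters grows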

579 citations


Journal ArticleDOI
TL;DR: This paper extends the scheme for effectively exploiting a noiseless feedback link associated with an additive white Gaussian noise channel to a band-limited channel with signal bandwidth restricted to (- W, W) and achieves the well-known channel capacity.
Abstract: In Part I of this paper, we presented a scheme for effectively exploiting a noiseless feedback link associated with an additive white Gaussian noise channel with {\em no} signal bandwidth constraints. We now extend the scheme for this channel, which we shall call the wideband (WB) scheme, to a band-limited (BL) channel with signal bandwidth restricted to (- W, W) . Our feedback scheme achieves the well-known channel capacity, C = W \ln (1 + P_{av}/N_{0}W) , for this system and, in fact, is apparently the first deterministic procedure for doing this. We evaluate the fairly simple exact error probability for our scheme and find that it provides considerable improvements over the best-known results (which are lower bounds on the performance of sphere-packed codes) for the one-way channel. We also study the degradation in performance of our scheme when there is noise in the feedback link.
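A brief numerical illustration of the quoted capacity formula C = W ln(1 + P_av/(N0 W)), with assumed parameter values that are not from the paper:

import math

W = 3000.0          # assumed signal bandwidth, Hz
P_av = 1e-3         # assumed average signal power, W
N0 = 1e-7           # assumed one-sided noise spectral density, W/Hz
C_nats = W * math.log(1.0 + P_av / (N0 * W))
print(C_nats, "nats/s =", C_nats / math.log(2.0), "bits/s")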

262 citations


Journal ArticleDOI
TL;DR: This is a review paper discussing and comparing most of the approaches to learning without a teacher which have been suggested to date, and divided into six classes: guessing a sequence of hypotheses, modifications of this first approach, approximating probability densities by others more easily computed, estimating parameters of a known decision rule, theoretically exact methods which require approximations in implementation, and some miscellaneous approaches.
Abstract: This is a review paper discussing and comparing most of the approaches to learning without a teacher which have been suggested to date. The approaches discussed are divided into six classes: guessing a sequence of hypotheses, modifications of this first approach, approximating probability densities by others more easily computed, estimating parameters of a known decision rule, theoretically exact methods which require approximations in implementation, and some miscellaneous approaches. At present, all these approaches are known to be feasible, but few useful methods of comparing them to see which are best have been developed.

176 citations


Journal ArticleDOI
TL;DR: Receiver structures are developed for making jointly optimum decisions about L consecutive symbols on the basis of the complete message received and the decision statistics are computed by a sequential procedure, and the number of computations increases only linearly with the message length.
Abstract: This paper is concerned with m -ary communication channels (m \geq 2) having intersymbol interference between L time periods (L \geq 2) . Receiver structures are developed for making jointly optimum (minimum probability of error) decisions about L consecutive symbols on the basis of the complete message received. The decision statistics are computed by a sequential procedure, and the number of computations increases only linearly with the message length. The method can be applied to the general problem of making decisions about the states of a discrete-state Markov information source which is observable only through a channel with additive Gaussian or non-Gaussian noise.
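For illustration, a hedged Python sketch of one sequential dynamic-programming realization of jointly optimum sequence decisions for a linear intersymbol-interference channel in additive Gaussian noise; the recursion below is a generic Viterbi-style one whose work grows only linearly with message length, offered to illustrate the principle rather than as the paper's own procedure.

import itertools

def map_sequence(received, pulse, noise_var, symbols=(-1, 1)):
    # channel model assumed here: y_k = sum_j pulse[j] * a_{k-j} + Gaussian noise
    L = len(pulse)                                       # interference spans L symbol periods
    states = list(itertools.product(symbols, repeat=L - 1))   # last L - 1 decided symbols
    cost = {s: 0.0 for s in states}                      # -log path metrics, all start states allowed
    back = []
    for y in received:
        new_cost, choice = {}, {}
        for s in states:
            for a in symbols:
                mean = pulse[0] * a + sum(h * x for h, x in zip(pulse[1:], s))
                metric = cost[s] + (y - mean) ** 2 / (2 * noise_var)
                ns = ((a,) + s)[:L - 1]                  # shift the new symbol into the state
                if ns not in new_cost or metric < new_cost[ns]:
                    new_cost[ns], choice[ns] = metric, (s, a)
        cost = new_cost
        back.append(choice)
    # trace back the jointly most probable symbol sequence
    s = min(cost, key=cost.get)
    decided = []
    for choice in reversed(back):
        s, a = choice[s]
        decided.append(a)
    return decided[::-1]

if __name__ == "__main__":
    pulse = [1.0, 0.5]                                   # two-tap intersymbol interference
    sent = [1, -1, -1, 1, 1]
    rx = [pulse[0] * a + pulse[1] * b                    # noiseless samples, for clarity
          for a, b in zip(sent, [0] + sent[:-1])]
    print(map_sequence(rx, pulse, noise_var=0.1))        # recovers `sent`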

162 citations


Journal ArticleDOI
TL;DR: Using a criterion of minimum average error probability, a method is derived for specifying an optimum linear, time invariant receiving filter for a digital data transmission system; the optimum filter is found to be representable as a matched filter followed by a tapped delay line--the same form as that of the least mean square estimator of the pulse amplitude.
Abstract: Using a criterion of minimum average error probability we derive a method for specifying an optimum linear, time invariant receiving filter for a digital data transmission system. The transmitted data are binary and coded into pulses of shape \pm s(t) . The linear transmission medium introduces intersymbol interference and additive Gaussian noise. Because the intersymbol interference is not Gaussian and can be correlated with the binary digit being detected, our problem is one of deciding which of two waveforms is present in a special type of correlated, non-Gaussian noise. For signal-to-noise ratios in a range of practical interest, the optimum filter is found to be representable as a matched filter followed by a tapped delay line--the same form as that of the least mean square estimator of the pulse amplitude. The performance (error probability vs. S/N ) of the optimum filter is compared with that of a matched-filter receiver in an example.
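Since the abstract notes that the optimum filter shares its matched-filter-plus-tapped-delay-line form with the least-mean-square estimator of the pulse amplitude, here is a hedged sketch of that estimator's tap weights for a symbol-spaced discrete channel model; the construction and parameter values are my own illustration, not the paper's derivation.

import numpy as np

def mmse_equalizer_taps(h, noise_var, n_taps, delay):
    # assumed model: symbol-spaced samples r = (convolution of i.i.d. unit-power
    # +/-1 data with channel h) + white noise of variance noise_var
    h = np.asarray(h, dtype=float)
    n_sym = n_taps + len(h) - 1
    H = np.zeros((n_taps, n_sym))          # H[i, j]: gain from symbol j to sample i
    for i in range(n_taps):
        H[i, i:i + len(h)] = h
    R = H @ H.T + noise_var * np.eye(n_taps)   # correlation of the samples in the tap window
    p = H[:, delay]                            # cross-correlation with the wanted symbol
    return np.linalg.solve(R, p)               # least-mean-square tap weights

if __name__ == "__main__":
    taps = mmse_equalizer_taps(h=[1.0, 0.4, 0.2], noise_var=0.1, n_taps=7, delay=3)
    print(np.round(taps, 3))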

126 citations


Journal ArticleDOI
TL;DR: It is shown that the two-category classifier derived by least-mean-square-error adaption using an equal number of sample patterns from each category is equivalent to the optimal statistical classifier if the patterns are multivariate Gaussian random variables having the same covariance matrix for both pattern categories.
Abstract: This paper develops a relationship between two traditional statistical methods of pattern classifier design, and an adaption technique involving minimization of the mean-square error in the output of a linear threshold device. It is shown that the two-category classifier derived by least-mean-square-error adaption using an equal number of sample patterns from each category is equivalent to the optimal statistical classifier if the patterns are multivariate Gaussian random variables having the same covariance matrix for both pattern categories. It is also shown that the classifier is always equivalent to the classifier derived by R. A. Fisher. A simple modification of the least-mean-square-error adaption procedure enables the adaptive structure to converge to a nearly-optimal classifier, even though the numbers of sample patterns are not equal for the two categories. The use of minimization of mean-square error as a technique for designing classifiers has the added advantage that it leads to the optimal classifier for patterns even when the covariance matrix is singular.
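A small numerical check, on synthetic data of my own choosing, of the stated equivalence: least-mean-square-error adaption with +/-1 targets and equal class sizes yields a weight vector proportional to Fisher's discriminant direction when the two classes share a covariance matrix.

import numpy as np

rng = np.random.default_rng(0)

# two Gaussian classes with a shared covariance and equal sample sizes (assumed data)
n = 500
cov = np.array([[2.0, 0.6], [0.6, 1.0]])
xa = rng.multivariate_normal([0.0, 0.0], cov, n)
xb = rng.multivariate_normal([2.0, 1.0], cov, n)

# least-mean-square-error adaption: fit [x, 1] . w to targets +1 / -1
X = np.vstack([np.hstack([xa, np.ones((n, 1))]),
               np.hstack([xb, np.ones((n, 1))])])
t = np.hstack([np.ones(n), -np.ones(n)])
w_lms, *_ = np.linalg.lstsq(X, t, rcond=None)

# Fisher direction: pooled-covariance inverse times the difference of sample means
pooled = 0.5 * (np.cov(xa.T) + np.cov(xb.T))
w_fisher = np.linalg.solve(pooled, xa.mean(0) - xb.mean(0))

# the two weight vectors should be (nearly) proportional, so the two components
# of this elementwise ratio should be nearly equal
print(w_lms[:2] / w_fisher)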

116 citations


Journal ArticleDOI
TL;DR: The objective is to describe some of the interrelations that affect the performance of a communication system used with probabilistic coding and their implication with regard to the evaluation of modulation and demodulation systems.
Abstract: Research in coding theory has resulted in the determination of bounds, as a function of the rate of communication, on the probability of error that can be attained over a memoryless transmission facility. These results are reviewed, and their implication with regard to the evaluation of modulation and demodulation systems is discussed. The objective is to describe some of the interrelations that affect the performance of a communication system used with probabilistic coding.

97 citations


Journal ArticleDOI
TL;DR: Turin has solved the signal design problem for extreme values of the peak-to-average power constraint ratio; this paper extends those results to arbitrary values of the ratio, for both sequential and nonsequential detection.
Abstract: The design of signals for binary communication systems employing feedback has previously been considered by Turin. A delayless, infinite-bandwidth forward channel disturbed by additive, white, Gaussian noise is assumed. At each instant of time, the log likelihood ratio of the two possible signals is fed back to the transmitter via a noiseless and delayless feedback channel. The forward-channel signals are said to be optimally designed when the feedback information is so utilized that the average (for sequential detection) or fixed (for nonsequential detection) transmission time is minimized, subject to a specified probability of error. Average and peak power constraints are also placed on the signals. Turin has solved the signal design problem for extreme values (i.e., very large or equal to one) of the peak-to-average power constraint ratio. These results are extended in this paper to arbitrary values of the power constraint ratio, for both sequential and nonsequential detection.


Journal ArticleDOI
TL;DR: The output of a simple statistical categorizer is used to improve recognition performance on a homogeneous data set and a fivefold average decrease over the initial rates is obtained in both errors and rejects.
Abstract: The output of a simple statistical categorizer is used to improve recognition performance on a homogeneous data set. An array of initial weights contains a coarse description of the various classes; as the system cycles through a set of characters from the same source (a typewritten or printed page), the weights are modified to correspond more closely with the observed distributions. The true identities of the characters remain inaccessible throughout the training cycle. This experimental study of the effect of the various parameters in the algorithm is based on \sim 30 000 characters from fourteen different font styles. A fivefold average decrease over the initial rates is obtained in both errors and rejects.
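A hedged sketch of the kind of unsupervised, decision-directed adaptation loop described above: each incoming pattern is labelled by the current categorizer and only the winning class description is nudged toward it, with the true identities never used. The nearest-template rule, learning rate, and synthetic data are assumptions for illustration, not the paper's categorizer.

import numpy as np

def decision_directed_adaptation(samples, templates, rate=0.05):
    W = np.array(templates, dtype=float)
    labels = []
    for x in samples:
        k = int(np.argmin(np.linalg.norm(W - x, axis=1)))   # current decision
        W[k] += rate * (x - W[k])                           # refine only that class description
        labels.append(k)
    return W, labels

if __name__ == "__main__":
    rng = np.random.default_rng(8)
    # two synthetic "character" clusters and deliberately coarse initial templates
    data = np.vstack([rng.normal([0, 0], 0.3, (200, 2)),
                      rng.normal([2, 2], 0.3, (200, 2))])
    rng.shuffle(data)
    W, labels = decision_directed_adaptation(data, templates=[[0.5, 0.5], [1.5, 1.5]])
    print(np.round(W, 2))        # templates drift toward the true cluster centres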

Journal ArticleDOI
TL;DR: It is proved that for every graph G there exists an m and T such that G is doable, and for every value of T there exists a graph G which is not T doable.
Abstract: Given a graph G of n nodes, we wish to assign to each node i(i = 1, 2, \cdots n) a unique binary code c_{i} of length m such that, if we denote the Hamming distance between c_{i} and c_{j} as H(c_{i}, c_{j}) , then H(c_{i}, c_{j})\leq T if nodes i and j are adjacent (i.e., connected by a single branch), and H(c_{i}, c_{j}) \geq T+1 otherwise. If such a code exists, then we say that G is doable for the values of T and m associated with this code. In this paper we prove various properties relevant to these codes. In particular we prove 1) that for every graph G there exists an m and T such that G is doable, 2) for every value of T there exists a graph G which is not T doable, 3) if G is T' doable, then it is T'+ 2p doable for p = 0, 1, 2, \cdots , and is doable for all T \geq 2T' if T' is odd, and is doable for all T \geq 2T' + 1 if T' is even. In theory, the code can be synthesized by employing integer linear programming where either T and/or m can be minimized; however, this procedure is computationally infeasible for values of n and m in the range of about 10 or greater.
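Purely for illustration, an exhaustive checker of the "doable" property for very small graphs; the abstract already notes that its own integer-programming route becomes infeasible beyond roughly n, m = 10, and this brute-force sketch is far more limited still.

from itertools import product

def hamming(a, b):
    return bin(a ^ b).count("1")

def is_doable(adj, m, T):
    # does an assignment of distinct length-m binary codes exist with Hamming
    # distance <= T between adjacent nodes and >= T + 1 otherwise?
    n = len(adj)
    for codes in product(range(1 << m), repeat=n):
        if len(set(codes)) < n:
            continue                             # codes must be unique
        ok = True
        for i in range(n):
            for j in range(i + 1, n):
                d = hamming(codes[i], codes[j])
                if (adj[i][j] and d > T) or (not adj[i][j] and d <= T):
                    ok = False
                    break
            if not ok:
                break
        if ok:
            return codes
    return None

if __name__ == "__main__":
    # path graph 0 - 1 - 2
    adj = [[0, 1, 0],
           [1, 0, 1],
           [0, 1, 0]]
    print(is_doable(adj, m=3, T=1))              # e.g. (0, 1, 3)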

Journal ArticleDOI
TL;DR: This correspondence draws on the derivation of mean time to loss of track for a certain AFC system in the presence of two fluctuating targets by M. M. Buchner, Jr., and on E. J. Baghdady's Lectures on Communication System Theory.
Abstract: REFERENCES [1] E. J. Baghdady, Lectures on Communication System Theory. New York: McGraw-Hill, ch. 19, sec. 5.3. [2] M. M. Buchner, Jr., "Derivation of mean time to loss of track for a certain AFC system in the presence of two fluctuating targets," The Johns Hopkins University, Baltimore, Md., Applied Physics Lab. memo BSA-1-038, August 14, 1964 (memo classified as confidential, title unclassified). [3] S. O. Rice, "Distribution of the duration of fades in radio transmission," Bell Sys. Tech. J., vol. 37, pp. 581-635, May 1958. [4] S. O. Rice, "Mathematical analysis of random noise," Bell Sys. Tech. J., vol. 24, pp. 46-156, January 1945. [5] D. Middleton, "Spurious signals caused by noise in triggered circuits," J. Appl. Phys., vol. 19, pp. 817-830, September 1948.

Journal ArticleDOI
TL;DR: An upper bound on the insurable average error probability for block codes of length n is obtained which exponentially approaches zero for all rates less than capacity.
Abstract: A channel which is selected for each use (without knowledge of past history) to be one of a given set of discrete memoryless channels is to be used by an ignorant communicator, i.e., the transmitter and receiver are assumed to have no knowledge of the particular channels selected. For this situation an upper bound on the insurable average error probability for block codes of length n is obtained which exponentially approaches zero for all rates less than capacity. Communication design techniques for achieving these results are discussed.

Journal ArticleDOI
TL;DR: In the binary situation of testing for signals of one class against signals of another in noise, the LOBD is seen to be a linearly weighted sum of characteristics like (1), and a criterion of asymptotic relative efficiency (ARE) is developed.
Abstract: A general canonical theory is developed for the systematic approximation of optimum, or Bayes, detection procedures in the critical limiting threshold mode of operation. The approximations to Bayes detectors introduced here are called locally optimum Bayes detectors (LOBD's) and are defined by the condition that they produce the same value of average risk and its derivative for vanishingly small input signals (\theta \rightarrow 0) as do the corresponding Bayes systems. The LOBD, x , in the binary case (i.e., two-alternative cases: H_{1} , signal and noise, vs. H_{0} , noise alone) is found to be the expansion of the logarithm of the optimum or likelihood ratio (functional) form \Lambda , including a suitable bias term, e.g., x = \log \mu + B_{n}(\theta) + \theta \left( \frac{\delta \log \Lambda_{n}}{\delta \theta} \right)_{\theta = 0} \quad (1) Suitable choices of bias B can frequently be made so that x is asymptotically as efficient as x^{\ast} , the corresponding Bayes detector. The principal advantages of the LOBD are 1) its comparative simplicity vis-a-vis the Bayes system x^{\ast} ; 2) its structure can always be found, even when the structure of the Bayes system cannot be obtained explicitly; and 3) expected performance (average risk and error probabilities) can often be determined for the LOBD when such is not possible for the Bayes system. In addition, the LOBD is canonically derived, i.e., the form of x , (1), is independent of the particular noise statistics and signal structure. New results include 1) the fact that the logarithm of the likelihood ratio is the function of the received data to be approximated, 2) correlated samples (including continuous sampling on an interval), 3) structures analogous to (1) which hold for the LOBD in binary sequential detection, and 4) similar expansions of the logarithm of the likelihood ratios, etc., which are required for the LOBD in multiple alternative cases. Moreover, in the binary situation of testing for signals (S_{1}) of one class against signals (S_{2}) of another in noise, the LOBD is seen to be a linearly weighted sum of characteristics like (1). To account for the effects of correlated samples and limiting distributions that do not obey the Central Limit Theorem, i.e., are not asymptotically normal (as illustrated in several of the examples), a criterion of asymptotic relative efficiency (ARE) is developed. Sufficient, and necessary and sufficient, conditions on the bias and on \hat{\theta} itself are established which, when satisfied, ensure that the LOBD is at least asymptotically as efficient as, or equivalent to, the corresponding Bayes detector in the limit.
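A hedged sketch of the threshold-signal statistic underlying the LOBD idea, for the simplest case of a weak known signal in i.i.d. noise: the theta-derivative term of (1) reduces to a sum of the signal samples weighted by the noise score function g = -f'/f. The bias term of (1) is omitted and the score functions below are textbook examples, not the paper's general canonical forms.

import numpy as np

def lobd_statistic(x, s, score):
    # first-derivative (locally optimum) term: sum_i s_i * g(x_i), g = -f'/f
    return float(np.sum(np.asarray(s) * score(np.asarray(x))))

# score functions g(x) = -f'(x)/f(x) for two example noise densities
gauss_score = lambda x, var=1.0: x / var               # Gaussian noise
laplace_score = lambda x, b=1.0: np.sign(x) / b        # heavier-tailed Laplace noise

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    s = 0.1 * np.ones(100)                             # weak known signal
    noise = rng.laplace(scale=1.0, size=100)
    # the statistic is typically larger when the weak signal is present
    print("noise only   :", lobd_statistic(noise, s, laplace_score))
    print("signal + noise:", lobd_statistic(noise + s, s, laplace_score))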

Journal ArticleDOI
TL;DR: This work presents solutions for two classes of "non-rational" kernels--triangular kernels of the form R(t) = \max [0, 1 - |t|] and Gauss-Markov kernels of the form R(t, s) = f(t)g(s) for t \leq s and R(t, s) = f(s)g(t) for s \leq t --as well as for kernels that are combinations of the two types.
Abstract: Fredholm integral equations of the first and second kind arise in many problems in statistical communication theory. However, almost all cases in which solutions are known, for equations over finite intervals, involve covariance kernels with rational Fourier transforms. We present solutions for two classes of "non-rational" kernels--triangular kernels of the form R(t) = \max [0, 1 - |t|] and Gauss-Markov kernels of the form R(t, s) = f(t)g(s), t \leq s, R(t, s) = f(s)g(t), s \leq t . We also treat kernels that are combinations of the two types.
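The paper gives closed-form solutions; purely as a numerical companion, here is a Nystrom (quadrature) sketch for a second-kind equation with the triangular kernel R(t) = \max[0, 1 - |t|], using assumed forcing and noise-level values rather than anything from the paper.

import numpy as np

def solve_fredholm_2nd(kernel, forcing, a, b, sigma2, n=200):
    # solve  sigma2 * h(t) + integral_a^b kernel(t, s) h(s) ds = forcing(t)
    # by trapezoidal-rule discretisation of the integral operator
    t = np.linspace(a, b, n)
    w = np.full(n, (b - a) / (n - 1))
    w[0] *= 0.5
    w[-1] *= 0.5
    K = kernel(t[:, None], t[None, :])           # n x n kernel matrix
    A = sigma2 * np.eye(n) + K * w[None, :]      # discretised operator
    return t, np.linalg.solve(A, forcing(t))

if __name__ == "__main__":
    tri = lambda t, s: np.maximum(0.0, 1.0 - np.abs(t - s))   # triangular covariance kernel
    t, h = solve_fredholm_2nd(tri, forcing=lambda t: np.ones_like(t),
                              a=0.0, b=1.0, sigma2=0.1)
    print(h[:5])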

Journal ArticleDOI
TL;DR: The problem of finding the best linear discriminant function for several different performance criteria is presented, and a powerful method of finding suchlinear discriminant functions is described.
Abstract: In many systems for pattern recognition or automatic decision making, decisions are based on the value of a discriminant function, a real-valued function of several observed or measured quantities. The design of such a system requires the selection of a good discriminant function, according to some particular performance criterion. In this paper, the problem of finding the best linear discriminant function for several different performance criteria is presented, and a powerful method of finding such linear discriminant functions is described. The problems to which this method may be applicable are summarized in a theorem; the problems include several involving the performance criteria of Bayes, Fisher, Kullback, and others, and many involving multidimensional probability density functions other than the usual normal functions.

Journal ArticleDOI
TL;DR: Chernoff bounds and tilted distribution arguments are applied to obtain error probability bounds for binary signaling on the slowly-fading Rician channel with L diversity and it is found that antipodal signals should be used if a > b^{2}(1 + b) , where a is the signal-to-noise ratio of the specular components and b is that of the fading components.
Abstract: Chernoff bounds and tilted distribution arguments are applied to obtain error probability bounds for binary signaling on the slowly-fading Rician channel with L diversity. For the maximum likelihood receiver, the CB-optimum [optimum in the sense of minimizing the Chernoff (upper) bound on error probability] signal correlation is determined and plotted; it is found that antipodal signals should be used if a > b^{2}(1 + b) , where a is the signal-to-noise ratio of the specular components and b is that of the fading components. The CB-optimum number of diversity paths is then obtained. If a/b > 0.2 , antipodal signaling with unlimited diversity is CB-optimum; whereas, if a/b < 0.2 , orthogonal signaling with properly chosen diversity is very nearly CB-optimum. If restricted to orthogonal signaling, unlimited diversity is CB-optimum whenever a/b > 1.0 . Similar results are obtained for the generally nonoptimum square-law-combining receiver. In this case, orthogonal signaling with finite diversity is always CB-optimum.

Journal ArticleDOI
TL;DR: A Bayes approach to nonsupervised pattern recognition is given where n l-dimensional vector samples X_{1}, X_{2}, \cdots , X_{n} are received unclassified, i.e., any one of M pattern sources may have caused each sample X_{s}, s=1,2, \cdots , n .
Abstract: A Bayes approach to nonsupervised pattern recognition is given where n l -dimensional vector samples X_{1}, X_{2}, \cdots , X_{n} are received unclassified, i.e., any one of M pattern sources \omega_{1}, \omega_{2}, \cdots, \omega_{M} , with corresponding probabilities of occurrence Q_{1_{o}}, Q_{2_{o}} , \cdots , Q_{M_{o}} , caused each sample X_{s}, s=1,2, \cdots , n . The approach utilizes the fact that the cumulative distribution function (c.d.f.) of X_{s} is a mixture c.d.f., F(X_{s})= \sum_{i=1}^{M} F(X_{s}|\omega_{i}) Q_{i_{o}} . It is assumed that available a priori knowledge includes knowledge of M and the family \{F(X_{s}|\omega_{i})\} , where F(X_{s}|\omega_{i}) is characterized by a vector B_{i_{o}} . In general, B_{i_{o}} and Q_{i_{o}}, i = 1,2, \cdots , M are considered fixed but unknown, and conditional probability of error in deciding which source caused X_{n} is minimized. When the functional form of F(X_{s}|\omega_{i}) in terms of B_{i_{o}} is unknown, the family \{F(X_{s}|\omega_{i})\} is taken to be the family of multinomial c.d.f.'s--an application of the histogram concept to the nonsupervisory problem. Additional nonparametric a priori knowledge about the family--such as F(X_{s}|\omega_{i}) is symmetrical, and/or F(X_{s}|\omega_{i}) differs from F(X_{s}|\omega_{j}) only by a translational vector--can be utilized in the Bayes solution.
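A hedged sketch of the nonsupervised mixture setting described above, for the special case where the component densities f(x|w_i) are known and only the mixing probabilities Q_i are not: the Q_i are fitted by a simple EM-style iteration (one possible estimator, not necessarily the paper's Bayes procedure) and each sample is then assigned to the source of maximum posterior probability.

import numpy as np

def estimate_mixture_and_classify(samples, component_pdfs, n_iter=200):
    x = np.asarray(samples)
    F = np.column_stack([pdf(x) for pdf in component_pdfs])   # f(x_s | w_i)
    M = F.shape[1]
    Q = np.full(M, 1.0 / M)                                   # unknown mixing probabilities
    for _ in range(n_iter):
        post = F * Q                                          # unnormalised posteriors
        post /= post.sum(axis=1, keepdims=True)
        Q = post.mean(axis=0)                                 # re-estimate mixing weights
    return Q, post.argmax(axis=1)                             # weights and source decisions

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    data = np.concatenate([rng.normal(-2, 1, 300), rng.normal(2, 1, 700)])
    gauss = lambda m: (lambda x: np.exp(-(x - m) ** 2 / 2) / np.sqrt(2 * np.pi))
    Q, labels = estimate_mixture_and_classify(data, [gauss(-2.0), gauss(2.0)])
    print(np.round(Q, 2))        # close to the true mixing weights 0.3 / 0.7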

Journal ArticleDOI
TL;DR: A block code which corrects a single synchronization error per block is presented, and it is shown that this code has, at most, three bits more redundancy than that of an optimal code for this class of errors.
Abstract: A synchronization error is said to occur when either a bit which does not belong is detected in a channel between bits which were transmitted, or a bit which was transmitted is never detected at the output. A block code which corrects a single synchronization error per block is presented, and it is shown that this code has, at most, three bits more redundancy than that of an optimal code for this class of errors. The code has the beneficial property that it is possible to separate the information positions from the check positions, and an appropriate method of encoding is shown.

Journal ArticleDOI
TL;DR: It is shown that as M is allowed to become very large, the performance depends only on the probability distribution of received energy per bit, and that this limiting behavior is the same as for optimum detection based on full knowledge of instantaneous channel properties.
Abstract: One of M frequency translates of a common signal is sent over a fading channel and is received in additive white noise. The receiver decides which signal was sent using energy measurements only. It is shown that as M is allowed to become very large, the performance depends only on the probability distribution of received energy per bit, and that this limiting behavior is the same as for optimum detection based on full knowledge of instantaneous channel properties. If sufficiently many independent copies of the channel can be achieved (i.e., independent diversity branches), the performance can be made to approach as closely as desired that of the nonfading infinite-bandwidth channel, independently of the exact channel statistics.

Journal ArticleDOI
TL;DR: Results of computer experiments using irregular spectra are presented which indicate the superiority of the cepstrum over autocovariance.
Abstract: We consider cepstrum analysis of a model consisting of a detector receiving a Gaussian signal with a single complex echo in Gaussian noise of the form x(t) + \beta[\cos \theta x(t- \tau) + \sin \theta x_{H}(t- \tau)] + n(t), where \beta is the echo amplitude, \theta an arbitrary constant, and x_{H}(t - \tau) is the Hilbert transform of x(t - \tau) . We assume no a priori knowledge of the spectra of the signal and noise except that they are smooth. The cepstrum is obtained as follows. We compute the log power spectral density estimate Â(f) of the received echoed signal and noise. The estimate Â(f) equals the "true" log spectral density A(f) plus the sampling fluctuation of the log spectral density estimates. The echo produces a ripple of frequency \tau in the log spectral density. Â(f) is filtered to remove the slowly varying spectrum components and the result is analyzed by power spectrum estimation procedures to yield the cepstrum in which there is a peak due to the echo. To assess performance, we compare the cepstrum due to the log spectrum sampling fluctuation with the cepstrum peak due to the echo. The echo detectability for a given value of \tau can then be found by assuming the cepstrum estimates due to the log spectrum sampling fluctuations alone to be distributed as a scaled chi-square, and when an echo is present the distribution is assumed to be a scaled noncentral chi-square. We compare a special case of these results to an adaptation of a maximum likelihood procedure discussed by Whittle, for which \theta = 0 , and find that the cepstrum is only 1.8 dB worse for the echo amplitude and data length assumed. This degradation is not surprising since the cepstrum is equally effective for a complex echo in which \theta is not known a priori. Results of computer experiments using irregular spectra are presented which indicate the superiority of the cepstrum over autocovariance.
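A small end-to-end numerical sketch of the cepstrum procedure for the simple \theta = 0 (no Hilbert-transform term) case, with assumed signal, echo, and noise parameters: estimate the log power spectrum, remove its slowly varying part, and take the power spectrum of the residual ripple, whose peak locates the echo delay.

import numpy as np

rng = np.random.default_rng(4)

# synthetic data (assumed parameters, not from the paper): a Gaussian signal plus
# a scaled echo delayed by `delay` samples, observed in white Gaussian noise
n, delay, beta = 8192, 200, 0.5
sig = rng.normal(size=n)
x = sig.copy()
x[delay:] += beta * sig[:-delay]
x += 0.3 * rng.normal(size=n)

# averaged log power spectral density estimate over windowed segments
seg = 1024
frames = x[: (n // seg) * seg].reshape(-1, seg) * np.hanning(seg)
log_psd = np.log(np.mean(np.abs(np.fft.rfft(frames, axis=1)) ** 2, axis=0))

# remove the slowly varying spectrum level, then take the power spectrum of the
# log spectrum (the cepstrum); the echo ripple shows up as a peak
ripple = log_psd - np.convolve(log_psd, np.ones(31) / 31, mode="same")
cep = np.abs(np.fft.rfft(ripple)) ** 2
peak = int(np.argmax(cep[5:])) + 5            # skip the lowest "quefrency" bins
print("estimated delay:", peak * seg / len(log_psd), "samples (true value 200)")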

Journal ArticleDOI
M. Kanefsky
TL;DR: Polarity Coincidence Array detectors are considered for testing the hypothesis that a random signal is common to an array of receivers which contain noise processes that are independent representations of a given class of stochastic processes.
Abstract: Polarity Coincidence Array detectors (PCA) are considered for testing the hypothesis that a random signal is common to an array of receivers which contain noise processes that are independent representations of a given class of stochastic processes. A standard procedure is to reduce the received data by sampling and then hard limiting. Hard limiting is shown to introduce an inherent loss in input signal power of 1.96 dB when the input data is a sequence of independent samples from a stationary Gaussian process. However, when the stationary and/or Gaussian assumptions are violated, the relative efficiencies of the PCA detectors can greatly improve. When the input samples are dependent, it is necessary to assume Gaussian inputs in order to analyze the PCA detectors. However, these devices are still unaffected by a nonstationary noise level that is slowly varying relative to the inverse bandwidth of the pre-filter. Furthermore, the loss due to clipping is considerably reduced as the sample dependence (i.e., sampling rate) increases. For rapid sampling rates, the spectral shapes of the inputs must be known accurately in order to fix the false-alarm rate at some pre-assigned value.
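A minimal sketch of a polarity-coincidence-array statistic: each receiver's samples are hard limited to +/-1 and polarity agreements are counted over all receiver pairs, so that a weak common signal raises the count while independent noises do not. The array size, signal level, and threshold-free comparison below are assumptions for illustration only.

import numpy as np

def pca_statistic(data):
    # data: array of shape (receivers, samples); count sign agreements over all
    # receiver pairs, summed over time
    signs = np.sign(data)
    pair_sum = signs.sum(axis=0) ** 2 - data.shape[0]   # 2 * (sum over pairs) per sample
    return pair_sum.sum() / 2.0

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    K, n = 8, 2000
    noise = rng.normal(size=(K, n))
    signal = 0.3 * rng.normal(size=n)        # weak random signal common to the array
    print("noise only    :", pca_statistic(noise))
    print("signal present:", pca_statistic(noise + signal))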

Journal ArticleDOI
TL;DR: By allowing unequal word lengths in the code, it is demonstrated that a substantial saving in average word length and information rate can be accomplished over other recently proposed codes having synchronization capability.
Abstract: A synchronizable (SC_{s}) code has the property that the punctuation (comma or no comma, comma indicating that the next symbol is the beginning of a new code word) at a given position in a code symbol stream can always be determined by observing at most s code symbols in the neighborhood of the position in question. The construction of SC_{s} dictionaries and the mechanization of synchronizers using nonlinear shift registers are explained in detail. Necessary and sufficient conditions for the existence of SC_{s} codes with specified word lengths are derived. By allowing unequal word lengths in the code, it is demonstrated that a substantial saving in average word length and information rate can be accomplished over other recently proposed codes having synchronization capability.

Journal ArticleDOI
TL;DR: Biorthogonal codes are generalized in a natural way to a class of codes, called N -orthogonal code, where signals in different sets are uncorrelated or orthogonal, and simplified integral expressions for the probability of error are derived.
Abstract: In this paper biorthogonal codes are generalized in a natural way to a class of codes, called N -orthogonal codes. N -orthogonal codes consist of N^{M} signals divided into N^{M-1} disjoint sets of N signals where signals in different sets are uncorrelated or orthogonal. One instance of the N -orthogonal codes is realized as a class of polyphase, constant-power modulated signals, the admissible phase set being the N^{th} roots of unity. Matched filter receiver criteria for the Gaussian channel are developed and simplified integral expressions for the probability of error are derived.

Journal ArticleDOI
TL;DR: Optimal amplitude modulations for a radar signal are derived and then used to calculate the efficiencies of various sub-optimal modulations, with the criterion of optimality based on the error variances of estimates of the range motion parameters of a reflecting body.
Abstract: Optimal amplitude modulations for a radar signal are derived and then used to calculate the efficiencies of various sub-optimal modulations. The choice of modulation is constrained by the total energy transmitted and the peak power (amplitude) of the transmitted signal. The peak power constraint is handled by the use of Pontryagin's Maximum Principle, an extension of the calculus of variations recently developed in the U.S.S.R. that is enjoying wide application in optimal control theory. The criterion of optimality is based on the error variances of estimates of the range motion parameters of a reflecting body, where the errors are caused by additive, white, zero mean, Gaussian noise. Explicit results are provided for bodies with constant velocity and bodies with constant acceleration. The analysis covers: 1) incoherent processing of a sequence of many range measurements; 2) coherent processing assuming the RF phase is known, 3) certain aspects of coherent processing assuming the RF phase is unknown. The optimal modulations turn out to be of the "on-off" type, requiring either no transmission or transmission at the maximum allowable power level.

Journal ArticleDOI
TL;DR: The idea of varying the stopping rules as a function of time enables us to investigate the behavior of a modified sequential test as compared to the standard Wald test with constant stopping boundaries.
Abstract: The problem of optimally terminating the sequential recognition procedure at a finite time prespecified by the designer is considered. The application arises, in practice, when the receptor (feature extraction) part of a sequential recognition machine has only a finite number of suitable features available to the categorizer (decision) part, or the cost of taking observation is found too high as the recognition process exceeds a certain time limit. In either case, the urgency to terminate the recognition procedure becomes greater when the available measurements are to be exhausted. The problem is studied by considering time-varying stopping boundaries for the sequential procedure such that by a preassigned length of time, the acceptance and rejection regions meet and, therefore, one of the pattern classes has to be accepted as the terminal decision. The idea of varying the stopping rules as a function of time enables us to investigate the behavior of a modified sequential test as compared to the standard Wald test with constant stopping boundaries. Computer simulation of English character recognition using the modified sequential test procedure indicates very satisfactory results.
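A hedged sketch of a truncated sequential test in this spirit: the usual Wald log-likelihood-ratio recursion for two Gaussian means, but with the acceptance and rejection boundaries shrunk linearly so that they meet at a preassigned sample size, forcing a terminal decision by then. The linear shrink is one simple choice of time-varying boundary, not necessarily the paper's.

import random

def truncated_sequential_test(samples, mu0, mu1, sigma, a, b, n_max):
    # a < 0 < b are the initial Wald-style lower/upper log-likelihood thresholds
    llr = 0.0
    for k, x in enumerate(samples[:n_max], start=1):
        llr += ((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma ** 2)
        remaining = 1.0 - k / n_max              # fraction of the sample budget left
        upper, lower = b * remaining, a * remaining
        if llr >= upper:
            return "H1", k
        if llr <= lower:
            return "H0", k
    return ("H1", n_max) if llr > 0 else ("H0", n_max)

if __name__ == "__main__":
    random.seed(6)
    data = [random.gauss(1.0, 1.0) for _ in range(100)]   # data actually from H1
    print(truncated_sequential_test(data, mu0=0.0, mu1=1.0, sigma=1.0,
                                    a=-4.0, b=4.0, n_max=50))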

Journal ArticleDOI
TL;DR: The purpose of this correspondence is to prove that the condition of high background radiation is unnecessary for the derivation of the result and the same result is derived for an arbitrary input signal-to-noise ratio.
Abstract: The problem of receiving intensity modulated light, over a distance at which signal levels have deteriorated to a point where the arrival of individual photons may be counted, is treated in terms of the likelihood ratio for time-varying Poisson processes [1], [2]. Various authors [1]-[3] have shown that under an average energy constraint and conditions of high background radiation, a narrow pulse maximizes the output "signal-to-noise" ratio of a photon detector. The purpose of this correspondence is to prove that the condition of high background [2], [3] is unnecessary for the derivation. The same result is derived for an arbitrary input signal-to-noise ratio.
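For reference, a numerical sketch of the likelihood-ratio statistic for a time-varying Poisson process (background intensity lam0(t) alone versus lam0(t) + s(t) when the signal is present), using an assumed rectangular signal pulse and background rate: log LR = sum_i log(1 + s(t_i)/lam0(t_i)) - integral of s(t) over the observation interval.

import numpy as np

def poisson_log_lr(arrival_times, s, lam0, T, n_grid=1000):
    # arrival_times: photon arrival times in [0, T]; s, lam0: intensity functions
    t = np.asarray(arrival_times)
    grid = np.linspace(0.0, T, n_grid, endpoint=False)
    integral = float(np.sum(s(grid)) * T / n_grid)          # simple Riemann sum
    return float(np.sum(np.log1p(s(t) / lam0(t))) - integral)

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    T, lam_b = 1.0, 20.0                                    # observation interval, background rate
    lam0 = lambda t: np.full_like(np.asarray(t, dtype=float), lam_b)
    # a narrow rectangular signal pulse carrying fixed energy (assumed shape)
    width, energy = 0.05, 5.0
    s = lambda t: np.where((np.asarray(t) >= 0.4) & (np.asarray(t) < 0.4 + width),
                           energy / width, 0.0)
    # simulate background-only arrivals as a homogeneous Poisson process
    n_phot = rng.poisson(lam_b * T)
    arrivals = rng.uniform(0.0, T, n_phot)
    # typically negative here, since only background photons are present
    print(poisson_log_lr(arrivals, s, lam0, T))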