# Papers in "IEEE Transactions on Communications" (1981)

••

TL;DR: Motion compensation is applied to the analysis and design of a hybrid coding scheme, and the results show a factor-of-two gain at low bit rates.

Abstract: A new technique for estimating the interframe displacement of small blocks with minimum mean square error is presented. An efficient algorithm for searching the direction of displacement is described. The results of applying the technique to two sets of images are presented, showing an 8-10 dB improvement in interframe variance reduction due to motion compensation. Motion compensation is then applied to the analysis and design of a hybrid coding scheme, and the results show a factor-of-two gain at low bit rates.

1,883 citations
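
The block-matching idea in the abstract above can be sketched in a few lines. This is an illustrative exhaustive minimum-MSE search over a small window, not the paper's efficient directional algorithm; the frame layout, block size, and search range are assumptions for the example.

```python
def block_mse(prev, curr, bx, by, dx, dy, B):
    """Mean square error between the BxB block of `curr` at (bx, by)
    and the block of `prev` displaced by (dx, dy)."""
    err = 0.0
    for y in range(B):
        for x in range(B):
            d = curr[by + y][bx + x] - prev[by + y + dy][bx + x + dx]
            err += d * d
    return err / (B * B)

def estimate_displacement(prev, curr, bx, by, B, search=2):
    """Exhaustive search over a +/-search pel window; the paper's
    contribution is a directional search that avoids scanning every
    candidate displacement."""
    best = None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            mse = block_mse(prev, curr, bx, by, dx, dy, B)
            if best is None or mse < best[0]:
                best = (mse, dx, dy)
    return best  # (minimum MSE, dx, dy)
```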

••

Bell Labs

TL;DR: A frequency-dependent quadrature model is proposed whose parameters are obtainable from single-tone measurements and is shown to fit measured data very well.

Abstract: Simple two-parameter formulas are presented for the functions involved in the amplitude-phase and the quadrature nonlinear models of a TWT amplifier, and are shown to fit measured data very well. Also, a closed-form expression is derived for the output signal of a TWT amplifier excited by two phase-modulated carriers, and an expression containing a single integral is given when more than two such carriers are involved. Finally, a frequency-dependent quadrature model is proposed whose parameters are obtainable from single-tone measurements.

1,442 citations

••

Bell Labs

TL;DR: It is shown that, for the important and commonly implemented policy of complete sharing, a simple one-dimensional recursion can be developed which eliminates all difficulty in computing quantities of interest, regardless of both the size and dimensionality of the underlying model.

Abstract: In recent years, considerable effort has focused on evaluating the blocking experienced by "customers" in contending for a commonly shared "resource." The customers and resource in question have typically been messages and storage space in message storage applications or data streams and bandwidth in data multiplexing applications. The model employed in these studies, a multidimensional generalization of the classical Erlang loss model, has been limited to exponentially distributed storage (or data transmission) times, questions concerning efficient computational schemes have largely been ignored, and the class of resource sharing policies considered has been unnecessarily restricted. The contribution of this paper is threefold. We first show that the state distribution (obtained by previous authors) is valid for the large class of residency time distributions which have rational Laplace transforms. Second, we show that, for the important and commonly implemented policy of complete sharing, a simple one-dimensional recursion can be developed which eliminates all difficulty in computing quantities of interest, regardless of both the size and dimensionality of the underlying model. Third, we show that the state distribution holds for completely arbitrary resource sharing policies.

1,029 citations
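
The one-dimensional recursion for the complete-sharing policy can be sketched directly; this is the form now commonly called the Kaufman-Roberts recursion, with variable names chosen for the example (C resource units; class k offers a_k Erlangs and requests b_k units per arrival).

```python
def kaufman_roberts(C, loads):
    """C: total resource units. loads: list of (a_k, b_k) pairs,
    offered load in Erlangs and units requested per arrival.
    Returns the occupancy distribution q and per-class blocking."""
    q = [0.0] * (C + 1)
    q[0] = 1.0
    for n in range(1, C + 1):
        q[n] = sum(a * b * q[n - b] for a, b in loads if b <= n) / n
    s = sum(q)
    q = [v / s for v in q]
    # class k is blocked when fewer than b_k units remain free
    blocking = [sum(q[C - b + 1:]) for _, b in loads]
    return q, blocking
```

For a single class with b = 1 this reduces to the classical Erlang B formula, which makes a convenient sanity check.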

••

TL;DR: A self-starting, distributed algorithm is proposed and developed that establishes and maintains a reliable structure that is especially suited to the needs of the HF Intra-Task Force (ITF) communication network, which is discussed in the paper.

Abstract: In this paper we consider the problem of organizing a set of mobile, radio-equipped nodes into a connected network. We require that a reliable structure be acquired and maintained in the face of arbitrary topological changes due to node motion and/or failure. We also require that such a structure be achieved without the use of a central controller. We propose and develop a self-starting, distributed algorithm that establishes and maintains such a connected architecture. This algorithm is especially suited to the needs of the HF Intra-Task Force (ITF) communication network, which is discussed in the paper.

870 citations
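
The paper's distributed algorithm is not reproduced here, but the flavor of ID-based clusterhead election used in such mobile networks can be illustrated with a centralized sketch; the graph, the node IDs, and the highest-ID rule are assumptions of the illustration, not the paper's procedure.

```python
def elect_clusterheads(adj):
    """adj: {node_id: set of neighbor ids} (undirected). A node
    becomes clusterhead iff its id is highest in its closed
    neighborhood; other nodes affiliate with their highest-id
    clusterhead neighbor, if any (a second pass would cover nodes
    left without one)."""
    heads = {n for n in adj if all(n > m for m in adj[n])}
    affiliation = {}
    for n in adj:
        if n in heads:
            affiliation[n] = n
        else:
            cand = [m for m in adj[n] if m in heads]
            affiliation[n] = max(cand) if cand else None
    return heads, affiliation
```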

••

TL;DR: In this article, a digital modulation for future mobile radio telephone services is proposed, and its fundamental properties are clarified with the aid of machine computation, and the constitution of modulator and demodulator is discussed from the viewpoints of mobile radio applications.

Abstract: This paper is concerned with digital modulation for future mobile radio telephone services. First, the specific requirements on the digital modulation for mobile radio use are described. Then, premodulation Gaussian filtered minimum shift keying (GMSK) with coherent detection is proposed as an effective digital modulation for the present purpose, and its fundamental properties are clarified with the aid of machine computation. The constitution of modulator and demodulator is then discussed from the viewpoints of mobile radio applications. The superiority of this modulation is supported by some experimental test results.

720 citations
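
The GMSK signal construction described above (NRZ data, premodulation Gaussian filter, phase integration) can be sketched as follows; BT = 0.3, 8 samples per bit, and the filter span are illustrative choices, not values from the paper.

```python
import math

def gaussian_taps(BT=0.3, sps=8, span=3):
    """Unit-area Gaussian frequency-pulse taps (BT and span are
    illustrative choices)."""
    n = span * sps
    taps = [math.exp(-2 * (math.pi ** 2) * (BT ** 2)
                     * ((k - n) / sps) ** 2 / math.log(2))
            for k in range(2 * n + 1)]
    s = sum(taps)
    return [t / s for t in taps]

def gmsk_phase(bits, BT=0.3, sps=8, span=3):
    """NRZ bits -> Gaussian filter -> integrate to a phase trajectory.
    The transmitted signal is exp(j*phase), so the envelope is
    constant by construction; each bit shifts the phase by +/- pi/2."""
    nrz = [2 * b - 1 for b in bits for _ in range(sps)]
    g = gaussian_taps(BT, sps, span)
    freq = [sum(nrz[i - k] * g[k] for k in range(len(g))
                if 0 <= i - k < len(nrz))
            for i in range(len(nrz) + len(g) - 1)]
    phase, acc = [], 0.0
    for f in freq:
        acc += f * (math.pi / 2) / sps
        phase.append(acc)
    return phase
```

Because the Gaussian taps sum to one, a run of identical bits still accumulates exactly pi/2 of phase per bit; the filter only smooths the transitions, which is what compacts the spectrum.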

••

Lund University

TL;DR: Comparisons are made with minimum shift keying (MSK), and systems have been found which are significantly better in E_{b}/N_{0} for a large signal-to-noise ratio (SNR) without expanded bandwidth; schemes with the same bit error probability as MSK but with considerably smaller bandwidth have also been found.

Abstract: The continuous phase modulation (CPM) signaling scheme has gained interest in recent years because of its attractive spectral properties. Data symbol pulse shaping has previously been studied with regard to spectra, for binary data and modulation index 0.5. In this paper these results have been extended to the M-ary case, where the pulse shaping is over a one symbol interval, the so-called full response systems. Results are given for modulation indexes of practical interest, concerning both performance and spectrum. Comparisons are made with minimum shift keying (MSK), and systems have been found which are significantly better in E_{b}/N_{0} for a large signal-to-noise ratio (SNR) without expanded bandwidth. Schemes with the same bit error probability as MSK but with considerably smaller bandwidth have also been found. Significant improvements in both power and bandwidth are obtained by increasing the number of levels M from 2 to 4.

545 citations

••

NEC

TL;DR: This paper provides a novel digital signal processing method based on an N/2-point DFT in the O-QAM system that is more economical than the digitally implemented conventional single-channel data transmission system.

Abstract: An orthogonally multiplexed QAM (O-QAM) system is a multichannel system with a baud rate spacing between adjacent carrier frequencies; this property makes it desirable to implement the system digitally using the discrete Fourier transform (DFT). This paper provides a novel digital signal processing method based on an N/2-point DFT in the O-QAM system. A complexity comparison between a digital O-QAM system and a digital single-channel QAM system shows that the digital O-QAM system using the new method is more economical than the digitally implemented conventional single-channel data transmission system.

544 citations

••

IBM

TL;DR: The problem of optimally choosing message rates for users of a store-and-forward network is analyzed and a generalized definition of ideal tradeoff is introduced to provide more flexibility in the choice of message rates.

Abstract: The problem of optimally choosing message rates for users of a store-and-forward network is analyzed. Multiple users sharing the links of the network each attempt to adjust their message rates to achieve an ideal network operating point or an "ideal tradeoff point between high throughput and low delay." Each user has a fixed path or virtual circuit. In this environment, a basic definition of "ideal delay-throughput tradeoff" is given and motivated. This definition concentrates on a fair allocation of network resources at network bottlenecks. This "ideal policy" is implemented via a decentralized algorithm that achieves the unique set of optimal throughputs. All sharers constrained by the same bottleneck are treated fairly by being assigned equal throughputs. A generalized definition of ideal tradeoff is then introduced to provide more flexibility in the choice of message rates. With this definition, the network may accommodate users with different types of message traffic. A transformation technique reduces the problem of optimizing this performance measure to the problem of optimizing the basic measure.

483 citations
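
The "equal throughputs at a shared bottleneck" idea can be illustrated by centralized progressive filling; the paper's algorithm is decentralized, and the link names and topology below are invented for the example.

```python
def max_min_rates(links, users):
    """links: {link: capacity}; users: {user: list of links on its
    path}. Progressive filling: raise every active rate equally until
    some link saturates, freeze that link's users, and repeat."""
    rate = {u: 0.0 for u in users}
    active = set(users)
    cap = dict(links)
    while active:
        def load(l):
            return sum(1 for u in active if l in users[u])
        # link with the least headroom per active user is the bottleneck
        inc, bottleneck = min(
            (cap[l] / load(l), l) for l in cap if load(l) > 0)
        for u in active:
            rate[u] += inc
        for l in cap:
            cap[l] -= inc * load(l)
        active -= {u for u in active if bottleneck in users[u]}
    return rate
```

In the test below, u2 is capped at 0.25 by its second link, and u1 then takes the remaining 0.75 of the shared link, the classic max-min fair outcome.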

••

IBM

TL;DR: A new approach for black and white image compression is described, with which the eight CCITT test documents can be compressed in a lossless manner 20-30 percent better than with the best existing compression algorithms.

Abstract: A new approach for black and white image compression is described, with which the eight CCITT test documents can be compressed in a lossless manner 20-30 percent better than with the best existing compression algorithms. The coding and the modeling aspects are treated separately. The key to these improvements is an efficient binary arithmetic code. The code is relatively simple to implement because it avoids the multiplication operation inherent in some earlier arithmetic codes. Arithmetic coding permits the compression of binary sequences where the statistics change on a bit-to-bit basis. Model statistics are studied from stationary, stationary adaptive, and nonstationary adaptive assumptions.

424 citations
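
The interval-narrowing principle behind arithmetic coding can be shown with a toy exact-fraction coder for a fixed bit probability. The paper's coder is adaptive, integer-based, and avoids multiplication entirely; this sketch shows only the underlying idea.

```python
from fractions import Fraction

def ac_encode(bits, p0):
    """Narrow [low, low + width) by P(0) = p0 for each bit; any number
    inside the final interval identifies the whole sequence."""
    low, width, p = Fraction(0), Fraction(1), Fraction(p0)
    for b in bits:
        if b == 0:
            width *= p
        else:
            low += width * p
            width *= 1 - p
    return low, width

def ac_decode(code, n, p0):
    """Replay the same interval narrowing, choosing at each step the
    subinterval that contains `code`."""
    low, width, p = Fraction(0), Fraction(1), Fraction(p0)
    out = []
    for _ in range(n):
        if code < low + width * p:
            out.append(0)
            width *= p
        else:
            out.append(1)
            low += width * p
            width *= 1 - p
    return out
```

The final interval width is the probability of the sequence under the model, so roughly -log2(width) bits suffice to name a point inside it.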

••

IBM

TL;DR: A comparative evaluation of the performance of ring and bus systems constituting subnetworks of local-area networks, based on analytic models which describe the various topologies and access mechanisms to a sufficient level of detail.

Abstract: This paper provides a comparative evaluation of the performance of ring and bus systems constituting subnetworks of local-area networks. Performance is measured in terms of the delay-throughput characteristics. Systems investigated include token-controlled and slotted rings as well as random-access buses (CSMA with collision detection) and ordered-access buses. The investigation is based on analytic models which describe the various topologies and access mechanisms to a sufficient level of detail. The paper includes a comprehensive discussion of how the performance of the different networks is affected by system parameters like transmission rate, cable length, packet lengths, and control overhead.

412 citations

••

TL;DR: It is argued that flooding schemes have significant drawbacks for such networks, and a general class of distributed algorithms for establishing new loop-free routes to the station for any node left without a route due to changes in the network topology is proposed.

Abstract: We consider the problem of maintaining communication between the nodes of a data network and a central station in the presence of frequent topological changes as, for example, in mobile packet radio networks. We argue that flooding schemes have significant drawbacks for such networks, and propose a general class of distributed algorithms for establishing new loop-free routes to the station for any node left without a route due to changes in the network topology. By virtue of built-in redundancy, the algorithms are typically activated very infrequently and, even when they are, they do not involve any communication within the portion of the network that has not been materially affected by a topological change.

••

Lund University

TL;DR: It is concluded that partial response CPM systems have spectrum compaction properties, and that at equal or even smaller bandwidth than minimum shift keying (MSK), a considerable gain in transmitter power can be obtained.

Abstract: An analysis of constant envelope digital partial response continuous phase modulation (CPM) systems is reported. Coherent detection is assumed and the channel is Gaussian. The receiver observes the received signal over more than one symbol interval to make use of the correlative properties of the transmitted signal. The systems are M-ary, and baseband pulse shaping over several symbol intervals is considered. An optimum receiver based on the Viterbi algorithm is presented. Constant envelope digital modulation schemes with excellent spectral tail properties are given. The spectra have extremely low sidelobes. It is concluded that partial response CPM systems have spectrum compaction properties. Furthermore, at equal or even smaller bandwidth than minimum shift keying (MSK), a considerable gain in transmitter power can be obtained. This gain increases with M. Receiver and transmitter configurations are presented.

••

TL;DR: Numerical results indicate that appropriate use of multiaccess coding can provide utilization-delay characteristics superior to that of ALOHA.

Abstract: Analytical techniques for performance evaluation of synchronous random access packet switching in code division multiple access (CDMA) systems are presented. Steady-state throughput characteristics using several packet generation models are obtained. A number of example random access CDMA systems are compared in terms of their throughput versus offered traffic and utilization-delay characteristics. Numerical results indicate that appropriate use of multiaccess coding can provide utilization-delay characteristics superior to that of ALOHA. System stability is evaluated using a general finite user model, and the dynamic behavior of some example random access CDMA schemes is investigated.

••

Xerox

TL;DR: This paper is a tradeoff study of image processing algorithms that can be used to transform continuous tone and halftone pictorial image input into spatially encoded representations compatible with binary output processes.

Abstract: This paper is a tradeoff study of image processing algorithms that can be used to transform continuous tone and halftone pictorial image input into spatially encoded representations compatible with binary output processes. A large percentage of the electronic output marking processes utilize a binary mode of operation. The history and rationale for this are reviewed and thus the economic justification for the tradeoff is presented. A set of image quality and processing complexity metrics are then defined. Next, a set of algorithms including fixed and adaptive thresholding, orthographic pictorial fonts, electronic screening, ordered dither, and error diffusion are defined and evaluated relative to their ability to reproduce continuous tone input. Finally, these algorithms, along with random nucleated halftoning, the alias reducing image enhancement system (ARIES), and a new algorithm, selective halftone rescreening (SHARE), are defined and evaluated as to their ability to reproduce halftone pictorial input.
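
Of the algorithms named above, error diffusion is the easiest to sketch. Below is a minimal Floyd-Steinberg pass over a grayscale array with the usual 7/16, 3/16, 5/16, 1/16 weights; edge handling here simply drops error that would leave the frame, one of several reasonable choices.

```python
def error_diffusion(img, threshold=0.5):
    """1-bit halftoning by Floyd-Steinberg error diffusion.
    img: rows of floats in [0, 1]. Each pixel is thresholded and its
    quantization error is pushed to the unprocessed neighbors."""
    h, w = len(img), len(img[0])
    a = [row[:] for row in img]          # working copy
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            old = a[y][x]
            new = 1 if old >= threshold else 0
            out[y][x] = new
            err = old - new
            if x + 1 < w:
                a[y][x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    a[y + 1][x - 1] += err * 3 / 16
                a[y + 1][x] += err * 5 / 16
                if x + 1 < w:
                    a[y + 1][x + 1] += err * 1 / 16
    return out
```

Because the error is carried forward rather than discarded, the density of 1s in the output tracks the local gray level, which is exactly the property the tradeoff study evaluates.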

••

Bell Labs

TL;DR: Simple algebraic expressions for this modulation noise and its spectrum in terms of the input amplitude are derived and can be useful for designing oversampled analog to digital converters that use sigma-delta modulation for the primary conversion.

Abstract: When the sampling rate of a sigma-delta modulator far exceeds the frequencies of the input signal, its modulation noise is highly correlated with the amplitude of the input. We derive simple algebraic expressions for this noise and its spectrum in terms of the input amplitude. The results agree with measurements taken on a breadboard circuit. This work can be useful for designing oversampled analog to digital converters that use sigma-delta modulation for the primary conversion.
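
The input-dependence of the modulation noise is easy to see in simulation. Below is a first-order sigma-delta loop with a constant input; the paper's contribution is the closed-form noise analysis, which this sketch does not reproduce.

```python
def sigma_delta(x, n):
    """First-order sigma-delta loop with constant input x in (-1, 1):
    integrate the tracking error, emit the sign as a +/-1 bit. The
    running average of the bits tracks x, and the residual pattern
    (the modulation noise) depends on the input amplitude."""
    acc, out = 0.0, []
    for _ in range(n):
        acc += x
        bit = 1.0 if acc >= 0 else -1.0
        out.append(bit)
        acc -= bit
    return out
```

Since the accumulator stays bounded, the average of n output bits differs from x by at most O(1/n), which is why heavy oversampling plus decimation recovers the input.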

••

Bell Labs

TL;DR: Perceptual considerations indicate that the packet lengths most robust to losses are in the range 16-32 ms, whether or not interpolation is used, while tolerable P_L values can be as high as 2 to 5 percent without interpolation and 5 to 10 percent with interpolation.

Abstract: We have studied the effects of random packet losses in digital speech systems based on 12-bit PCM and 4-bit adaptive DPCM coding. The effects are a function of packet length B and probability of packet loss P_L. We have also studied the benefits of an odd-even sample-interpolation procedure that mitigates these effects (at the cost of increased decoding delay). The procedure is based on arranging a 2B-block of codewords into two B-sample packets, an odd-sample packet and an even-sample packet. If one of these packets is lost, the odd (or even) samples of the 2B-block are estimated from the even (or odd) samples by means of adaptive interpolation. Perceptual considerations indicate that the packet lengths most robust to losses are in the range 16-32 ms, whether or not interpolation is used. With these packet lengths, tolerable P_L values, which are strictly input-speech-dependent, can be as high as 2 to 5 percent without interpolation and 5 to 10 percent with interpolation. These observations are based on a computer simulation with three sentence-length speech inputs, and on informal listening tests.
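
The odd-even packetization can be sketched directly. Plain linear interpolation stands in below for the paper's adaptive interpolation.

```python
def split_packets(block):
    """Arrange a 2B-sample block as an even-index packet and an
    odd-index packet."""
    return block[0::2], block[1::2]

def reconstruct_from_even(even, n):
    """Odd packet lost: rebuild an n-sample block from the even
    packet, estimating each odd sample as the mean of its received
    neighbors (the final sample may have only one neighbor)."""
    out = [None] * n
    out[0::2] = even
    for i in range(1, n, 2):
        nbrs = [out[j] for j in (i - 1, i + 1) if j < n]
        out[i] = sum(nbrs) / len(nbrs)
    return out
```

For slowly varying speech samples the interpolated values land close to the originals, which is why a lost packet degrades gracefully instead of leaving a gap.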

••

Bell Labs

TL;DR: This paper discusses word recognition as a classical pattern-recognition problem and shows how some fundamental concepts of signal processing, information theory, and computer science can be combined to give us the capability of robust recognition of isolated words and simple connected word sequences.

Abstract: The art and science of speech recognition have been advanced to the state where it is now possible to communicate reliably with a computer by speaking to it in a disciplined manner using a vocabulary of moderate size. It is the purpose of this paper to outline two aspects of speech-recognition research. First, we discuss word recognition as a classical pattern-recognition problem and show how some fundamental concepts of signal processing, information theory, and computer science can be combined to give us the capability of robust recognition of isolated words and simple connected word sequences. We then describe methods whereby these principles, augmented by modern theories of formal language and semantic analysis, can be used to study some of the more general problems in speech recognition. It is anticipated that these methods will ultimately lead to accurate mechanical recognition of fluent speech under certain controlled conditions.
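
Isolated-word recognition of the kind described was classically done by matching an input feature sequence against stored templates with dynamic time warping. The sketch below uses scalar features and DTW as an era-appropriate illustration, not the paper's specific system.

```python
def dtw_distance(a, b):
    """Dynamic time warping distance between two feature sequences:
    minimum cumulative cost over monotone alignments."""
    inf = float("inf")
    n, m = len(a), len(b)
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1],
                                 D[i - 1][j - 1])
    return D[n][m]

def recognize(token, templates):
    """Nearest stored template under the DTW distance."""
    return min(templates, key=lambda w: dtw_distance(token, templates[w]))
```

The warping absorbs differences in speaking rate, which is exactly what makes template matching "robust" for isolated words.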

••

TL;DR: Results obtained in the encoding of a band-limited Gaussian source and a raster-scanned black and white still image reveal that an NSE/RLE or NSPC/RLE system exhibits performance superior to that of an adaptive delta modulation system.

Abstract: A nonuniform sampling approach to digital encoding of analog sources is proposed. The nonuniform sampler is basically a level crossing detector (LCD) which produces a sample whenever the input to the LCD crosses a threshold level. The information about the source signal is contained in the time intervals between level crossings and in the directions of level crossings. By assigning strings of the 2-tuple "00" to represent the time between level crossings and "01" and "10" to denote the directions of level crossings, the output binary sequence of the nonuniform sampling encoder (NSE) contains a high probability of the 0 symbol, which makes it suitable for further simple run-length encoding (RLE) to attain a "good" overall compression ratio. Introduction of prediction converts the NSE to a nonuniform sampling predictive coding (NSPC) scheme, which, depending on the source, can potentially improve the compression ratio. Results obtained in the encoding of a band-limited Gaussian source and a raster-scanned black and white still image reveal that an NSE/RLE or NSPC/RLE system exhibits performance superior to that of an adaptive delta modulation system.
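
The encoder's output alphabet can be sketched as follows: one "00" per elapsed sample period, and "10"/"01" for each upward/downward level crossing. The level spacing and the per-sample clock are assumptions of the sketch.

```python
import math

def nse_encode(samples, delta=1.0):
    """Level-crossing encoder sketch: track the last crossed level;
    each upward/downward crossing emits '10'/'01', and each sample
    period emits '00'. The output is 0-heavy, hence well suited to
    run-length encoding, as the paper exploits."""
    out, level = [], 0
    for s in samples:
        k = math.floor(s / delta)
        while k > level:
            level += 1
            out.append("10")
        while k < level:
            level -= 1
            out.append("01")
        out.append("00")
    return "".join(out)
```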

••

TL;DR: In this paper, the theory of error rates for narrow-band digital FM with limiter-discriminator detection and integrate and dump postdetection filtering is considered, and a new approach in the time domain, within certain ranges of frequency deviation ratio and time-bandwidth product, leads to a theory which gives simple closed-form solutions for the relevant system parameters.

Abstract: The theory of error rates for narrow-band digital FM with limiter-discriminator detection and integrate and dump postdetection filtering is considered. The goal is that of simplifying the existing theory, which has evolved primarily in the frequency domain, and requires numerical integration and differentiation of Fourier series expansions, numerical convolution, and multibit pattern averaging. A fresh approach in the time domain, within certain ranges of frequency deviation ratio and time-bandwidth product, leads to a theory which 1) gives simple closed-form solutions for the relevant system parameters, 2) allows for arbitrary IF filtering, 3) does not require numerical integration nor differentiation of Fourier series or computer convolutions, and 4) provides greater insight into the underlying error mechanisms. The subsequent, less involved BER calculations using the new approach compare favorably with previously published results. An unexpected bonus, resulting from the analysis, is that nonlinear IF filter phase characteristics can, in many cases, be shown to be unimportant in affecting the bit error rate.

••

TL;DR: A dynamic mathematical model of rain attenuation has been developed and is presented and the application of the model to the statistical analysis of the performance of communications systems is illustrated in this paper.

Abstract: A dynamic mathematical model of rain attenuation has been developed and is presented in this paper. This model permits the expression of analytic relationships between parameters commonly used to describe the properties of interest for communication. The dynamic model is based on the lognormal distribution of rain attenuation and utilizes a memoryless nonlinear device to transform attenuation and rain intensity into a one-dimensional Gaussian stationary Markov process. Hence, only one parameter is required to introduce the dynamic properties of rain attenuation into the model. Experimental results and the known properties of rain have been used to derive and to verify the model; comparative results are presented and demonstrate good correspondence. The application of the model to the statistical analysis of the performance of communications systems is illustrated in the paper. The use of a dynamic rain attenuation model is necessary in order to analyze radio communication systems with transmit power control to offset the effects of rain attenuation, and where the finite response time of the control system affects the performance. An advantage of the model is the simplicity with which it allows simulation of communication link performance under the influence of rain attenuation. Such simulations are of great interest for complex models of adaptive networks where several deteriorating effects, including finite response times, are present.
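
The model's construction (a one-parameter Gaussian Markov process pushed through a memoryless nonlinearity to obtain lognormal attenuation) can be sketched as follows; the exponential nonlinearity and all parameter values are illustrative assumptions for the example.

```python
import math, random

def rain_attenuation(n, mu=0.0, sigma=1.0, beta=0.5, dt=1.0, seed=1):
    """Stationary Gaussian Markov (AR(1)/Ornstein-Uhlenbeck) process x
    passed through the memoryless nonlinearity exp(), so the
    attenuation exp(mu + sigma*x) is lognormal; beta is the single
    parameter carrying the dynamics."""
    rng = random.Random(seed)
    rho = math.exp(-beta * dt)
    x, out = rng.gauss(0.0, 1.0), []
    for _ in range(n):
        out.append(math.exp(mu + sigma * x))
        x = rho * x + math.sqrt(1 - rho * rho) * rng.gauss(0.0, 1.0)
    return out
```

This is the kind of fade simulation the abstract describes as useful for exercising power-control loops with a finite response time.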

••

TL;DR: In this paper, necessary and sufficient conditions are given and proven for the use of process ordering and generalized resource ordering techniques to avoid deadlocks in arbitrary systems of interacting processes.

Abstract: This paper first surveys a number of potential deadlocks inherent in store-and-forward networks and outlines corresponding countermeasures. It then goes on to a more detailed treatment of the most important deadlock types. Finally, necessary and sufficient conditions are given and proven for the use of process ordering and generalized resource ordering techniques to avoid deadlocks in arbitrary systems of interacting processes.
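
Resource ordering is the easiest of these countermeasures to sketch: if every process acquires resources in one agreed global order, a circular wait can never form. A minimal illustration with ordinary locks (the three-resource setup is invented for the example):

```python
import threading

locks = [threading.Lock() for _ in range(3)]   # the ordered resources

def acquire_in_order(needed):
    """Deadlock-free acquisition: always lock in ascending index
    order, so no cycle of waiting processes can arise."""
    for i in sorted(needed):
        locks[i].acquire()

def release(needed):
    """Release in the reverse of the acquisition order."""
    for i in sorted(needed, reverse=True):
        locks[i].release()
```

Two threads that repeatedly grab overlapping resource sets through these helpers always run to completion; grabbing the same sets in opposite orders could deadlock.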

••

TL;DR: A memory organization is described which overcomes the need to read and rewrite long words in a Viterbi decoder; the techniques described are not novel, but neither are they widely known.

Abstract: Management of the memory contents in a Viterbi decoder is a major design problem for both hardware and software realizations. In a naive implementation, every bit in the memory must be changed (read, modified, and rewritten) for each message bit decoded, and, in addition, some double buffering is required. An especially annoying feature is the need to read and rewrite long words, forty bits in a typical case. In this note we describe a memory organization which overcomes these problems. The techniques described here are not novel, but neither are they widely known.

••

TL;DR: In this article, it was shown that when Pierce's pulse-position modulation scheme with 2L positions is used on a self-noise-limited direct-detection optical communication channel, there results a 2L-ary erasure channel that is equivalent to the parallel combination of L "completely correlated" binary erasure channels.

Abstract: It is shown that when Pierce's pulse-position modulation scheme with 2L positions is used on a self-noise-limited direct-detection optical communication channel, there results a 2L-ary erasure channel that is equivalent to the parallel combination of L "completely correlated" binary erasure channels. The capacity of the full channel is the sum of the capacities of the component channels, but the cutoff rate of the full channel is shown to be much smaller than the sum of the cutoff rates. An interpretation of the cutoff rate is given that suggests a complexity advantage in coding separately on the component channels. It is shown that if short-constraint length convolutional codes with Viterbi decoders are used on the component channels, then the performance and complexity compare favorably with the Reed-Solomon coding system proposed by McEliece for the full channel. The reasons for this unexpectedly fine performance by the convolutional code system are explored in detail, as are various facets of the channel structure.
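
The capacity/cutoff-rate comparison can be checked numerically using the standard q-ary erasure channel formulas C = (1-eps)·log2(q) and R0 = log2(q) - log2(1 + (q-1)·eps); the values of L and eps below are arbitrary.

```python
import math

def erasure_capacity(q, eps):
    """Capacity of a q-ary erasure channel, bits per channel use."""
    return (1 - eps) * math.log2(q)

def erasure_cutoff(q, eps):
    """Cutoff rate R0 of a q-ary erasure channel, bits per use."""
    return math.log2(q) - math.log2(1 + (q - 1) * eps)
```

With 2^L positions, the full channel's capacity equals the sum of the L binary-erasure capacities, while its cutoff rate falls well short of L times the binary cutoff rate, which is the asymmetry the paper exploits.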

••

IBM

TL;DR: An optimal time slot assignment algorithm for any M, N, K, and any traffic matrix is presented, where optimality means achieving the minimal possible total duration for the given traffic matrix.

Abstract: In this paper we consider an SS/TDMA system with M uplink beams, N downlink beams, and K, 1 \leq K \leq \min (M, N), transponders. An optimal time slot assignment algorithm for any M, N, K, and any traffic matrix is presented, where optimality means achieving the minimal possible total duration for the given traffic matrix. The number of switching matrices generated by the algorithm is bounded above by N^{2} - N + 1 for K = M = N and MN + K + 1 otherwise. Extensive simulations on randomly generated matrices show that the average number of switching matrices generated is substantially lower than these bounds.

••

TL;DR: The approach described here provides a rationale for combined source-channel coding which provides improved quality image reconstruction without sacrificing transmission bandwidth and is shown to result in a relatively robust design which is reasonably insensitive to channel errors and yet provides performance approaching theoretical performance limits.

Abstract: An approach is described for exploiting the tradeoffs between source and channel coding in the context of image transmission. The source encoder employs two-dimensional (2-D) block transform coding using the discrete cosine transform (DCT). This technique has proven to be an efficient and readily implementable source coding technique in the absence of channel errors. In the presence of channel errors, however, the performance degrades rapidly, requiring some form of error-control protection if high quality image reconstruction is to be achieved. This channel coding can be extremely wasteful of channel bandwidth if not applied judiciously. The approach described here provides a rationale for combined source-channel coding which provides improved quality image reconstruction without sacrificing transmission bandwidth. This approach is shown to result in a relatively robust design which is reasonably insensitive to channel errors and yet provides performance approaching theoretical performance limits. Analytical results are provided for assumed 2-D autoregressive image models, while simulation results are provided for real-world images.

••

TL;DR: This paper shows how the least squares lattice algorithms, recently introduced by Morf and Lee, can be adapted to the equalizer adjustment algorithm, which has a number of desirable features which should prove useful in many applications.

Abstract: In many applications of adaptive data equalization, rapid initial convergence of the adaptive equalizer is of paramount importance. Apparently, the fastest known equalizer adaptation algorithm is based on a recursive least squares estimation algorithm. In this paper we show how the least squares lattice algorithms, recently introduced by Morf and Lee, can be adapted to the equalizer adjustment algorithm. The resulting algorithm, although computationally more complex than certain other equalizer algorithms (including the fast Kalman algorithm), has a number of desirable features which should prove useful in many applications.

••

TL;DR: In this paper, the authors analyze a class of mixed-mode ARQ protocol models which incorporate a selective-repeat mode with finite receiver buffer and show that, for best throughput performance in practical systems, at least the first retransmission of a block following an error should be in the selective-repeat mode to obtain superior performance over Go-Back-N schemes.

Abstract: In high bit rate data transmission systems with ARQ error control, the throughput efficiency is a function of bit error rate, block or packet size, and the effect of significant round trip delays such as may be experienced in satellite communication systems. The selective-repeat ARQ scheme is capable of providing superior throughput performance independent of round trip delay, but requires excessively large receiver buffers; as a result the inferior Go-Back-N procedure is commonly adopted. This paper analyzes a class of mixed-mode ARQ protocol models which incorporate a selective-repeat mode with finite receiver buffer. The protocol models are shown to be amenable to exact throughput analysis, but do assume that the round trip delay is constant and known, blocks are of constant length, and the ACK/NAK signals are returned error free. These assumptions might create difficulties for practical implementation. However, the analytical model results highlight those aspects of ARQ protocols which affect throughput performance as round trip delays increase. The results show that it is desirable for best throughput performance in practical systems that at least the first retransmission of a block following an error should be in the selective-repeat mode to obtain superior performance over Go-Back-N schemes. Furthermore, alternative secondary retransmission modes are considered which ensure that reliable transmission can be achieved without receiver buffer overflow, even if the selective-repeat mode retransmissions fail. It is shown that the choice of secondary mode does not have a significant effect on the throughput efficiency but has a bearing on complexity.
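
The delay dependence that motivates the mixed-mode protocols can be seen from the standard idealized throughput expressions (textbook forms, not the paper's exact models):

```python
def go_back_n_throughput(p, n):
    """Idealized Go-Back-N efficiency with block error rate p and a
    round trip of n blocks: each error costs a full window."""
    return (1 - p) / (1 - p + n * p)

def selective_repeat_throughput(p):
    """Idealized selective repeat with unlimited buffering: only the
    errored block is resent, independent of round-trip delay."""
    return 1 - p
```

With p = 0.01 and a satellite round trip of n = 100 blocks, Go-Back-N falls below 50 percent efficiency while selective repeat stays near 99 percent, which is why even one selective-repeat retransmission round helps so much.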

••

IBM

TL;DR: Flow control is proposed as a means of obtaining an "optimal tradeoff" between low delay and high throughput in computer networks, and a class of algorithms which attempt to optimize network performance is investigated.

Abstract: Flow control is proposed as a means of obtaining an "optimal tradeoff" between low delay and high throughput in computer networks. Several versions of "optimal tradeoff" are defined based on network power. A class of algorithms which attempt to optimize network performance is investigated. These algorithms operate on the design principles of dynamic, distributed execution and use of local information. These design principles force the algorithms to be suboptimal, and we thus investigate the relative performance of each in different network configurations. Several properties of power as a network performance objective function are examined. In certain configurations, two variations of network power are unfair to certain users by not permitting them to send any messages. A version of network power ("product of powers") corrects this deficiency. Other properties discussed include the nonconvexity of the generalized power function.
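
Network power and its optimum are easy to illustrate for a single M/M/1 queue; this toy centralized calculation only shows the tradeoff the paper's distributed algorithms aim at, with unit service rate assumed.

```python
def mm1_power(lam, mu=1.0):
    """Power = throughput / delay for an M/M/1 queue: the delay is
    1 / (mu - lam), so power is lam * (mu - lam), which peaks at
    lam = mu / 2, i.e. 50 percent utilization."""
    return lam * (mu - lam) if lam < mu else 0.0

# a grid search confirms the peak at half the service rate
best = max((i / 100 for i in range(100)), key=mm1_power)
```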

••

IBM

TL;DR: With any strong cryptographic algorithm, such as the data encryption standard (DES), it is possible to devise protocols for authentication, which allows arbitrary, time-invariant quantities to be authenticated based upon a secret cryptographic key residing in a host processor.

Abstract: With any strong cryptographic algorithm, such as the data encryption standard (DES), it is possible to devise protocols for authentication. One technique, which allows arbitrary, time-invariant quantities (such as encrypted keys and passwords) to be authenticated, is based upon a secret cryptographic (master) key residing in a host processor. Each quantity to be authenticated has a corresponding precomputed test pattern. At any later time, the test pattern can be used together with the quantity to be authenticated to generate a nonsecret verification pattern. The verification pattern can in turn be used as the basis for accepting or rejecting the quantity to be authenticated.

••

TL;DR: The results indicate that a talkspurt "hangover" of about 200 ms will preserve appropriate talkspurt lengths without unduly affecting speech activity, and that, under these circumstances, the perceptible threshold of variable talkspurt delay can be as high as about 200 ms on average.

Abstract: This paper focuses on network delays as they apply to voice traffic. First the nature of the delay problem is discussed and this is followed by a review of enhanced circuit, packet, and hybrid switching techniques: these include fast circuit switching (FCS), virtual circuit switching (VCS), buffered speech interpolation (SI), packetized virtual circuit (PVC), cut-through switching (CTS), composite packets, and various frame management strategies for hybrid switching. In particular, the concept of introducing delay to resolve contention in SI is emphasized, and when applied to both voice talkspurts and data messages, forms a basis for a relatively new approach to network design called transparent message switching (TMS). This approach and its potential performance advantages are reviewed in terms of packet structure, multiplexing scheme, network topology, and network protocols. The paper then deals more specifically with the impact of variable delays on voice traffic. In this regard the importance of generating and preserving appropriate-length speech talkspurts in order to mitigate the effects of variable network delay is emphasized. The results indicate that a desirable talkspurt "hangover" of about 200 ms will accomplish this without unduly affecting speech activity, and that, under these circumstances, the perceptible threshold of variable talkspurt delay can be as high as about 200 ms on average. As such, the results provide a useful guideline for integrated services system designers. Finally, suggestions are made for further studies on performance analysis and subjective evaluation of advanced integrated services systems.