
Showing papers in "IEEE Transactions on Communications in 1983"


Journal ArticleDOI
TL;DR: A technique for image encoding in which local operators of many scales but identical shape serve as the basis functions, which tends to enhance salient image features and is well suited for many image analysis tasks as well as for image compression.
Abstract: We describe a technique for image encoding in which local operators of many scales but identical shape serve as the basis functions. The representation differs from established techniques in that the code elements are localized in spatial frequency as well as in space. Pixel-to-pixel correlations are first removed by subtracting a low-pass filtered copy of the image from the image itself. The result is a net data compression since the difference, or error, image has low variance and entropy, and the low-pass filtered image may be represented at reduced sample density. Further data compression is achieved by quantizing the difference image. These steps are then repeated to compress the low-pass image. Iteration of the process at appropriately expanded scales generates a pyramid data structure. The encoding process is equivalent to sampling the image with Laplacian operators of many scales. Thus, the code tends to enhance salient image features. A further advantage of the present code is that it is well suited for many image analysis tasks as well as for image compression. Fast algorithms are described for coding and decoding.
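A minimal sketch of the pyramid coding idea described in the abstract, not the authors' exact filters: a Gaussian low-pass kernel stands in for the paper's generating kernel, and the quantization step size is an arbitrary illustrative choice.

```python
# Sketch of Laplacian-pyramid style encoding/decoding (illustrative, hedged):
# subtract an upsampled low-pass copy from the image, quantize the difference,
# and recurse on the low-pass image at reduced sample density.
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def build_pyramid(image, levels=4, q_step=4.0):
    """Return quantized difference (Laplacian) images plus the final low-pass image."""
    pyramid = []
    current = image.astype(float)
    for _ in range(levels):
        low = gaussian_filter(current, sigma=1.0)            # remove pixel-to-pixel correlation
        low_small = low[::2, ::2]                            # reduced sample density
        low_up = zoom(low_small, 2, order=1)[:current.shape[0], :current.shape[1]]
        diff = current - low_up                              # low-variance, low-entropy error image
        pyramid.append(np.round(diff / q_step) * q_step)     # coarse quantization for compression
        current = low_small                                  # iterate at the next, expanded scale
    pyramid.append(current)
    return pyramid

def reconstruct(pyramid):
    """Invert the encoding by repeatedly upsampling and adding back the differences."""
    image = pyramid[-1]
    for diff in reversed(pyramid[:-1]):
        image = zoom(image, 2, order=1)[:diff.shape[0], :diff.shape[1]] + diff
    return image
```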

6,975 citations


Journal ArticleDOI
TL;DR: From a simulation of the DCT coding system it is shown that the assumption that the coefficients are Laplacian yields a higher actual output signal-to-noise ratio and a much better agreement between theory and simulation than the Gaussian assumption.
Abstract: For a two-dimensional discrete cosine transform (DCT) image coding system, there have been different assumptions concerning the distributions of the transform coefficients. This paper presents results of distribution tests that indicate that for many images the statistics of the coefficients are best approximated by a Gaussian distribution for the DC coefficient and a Laplacian distribution for the other coefficients. Furthermore, from a simulation of the DCT coding system it is shown that the assumption that the coefficients are Laplacian yields a higher actual output signal-to-noise ratio and a much better agreement between theory and simulation than the Gaussian assumption.
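A small, hedged illustration of the distribution-test idea: collect one AC coefficient from 8x8 block DCTs and compare moment-fitted Gaussian and Laplacian densities with a Kolmogorov-Smirnov statistic. The block size and test are illustrative choices, and the synthetic noise "image" below is only a placeholder; a real photograph is needed to reproduce the reported behavior.

```python
import numpy as np
from scipy.fft import dctn
from scipy.stats import kstest, norm, laplace

def block_dct_coeff(image, u, v, block=8):
    """Collect the (u, v) DCT coefficient from every non-overlapping block."""
    h, w = (np.array(image.shape) // block) * block
    coeffs = []
    for i in range(0, h, block):
        for j in range(0, w, block):
            coeffs.append(dctn(image[i:i+block, j:j+block], norm='ortho')[u, v])
    return np.array(coeffs)

rng = np.random.default_rng(0)
image = rng.normal(128, 40, size=(256, 256))   # placeholder data; substitute a real image
ac = block_dct_coeff(image, 0, 1)

# Kolmogorov-Smirnov statistics: smaller means a better fit.
print("Gaussian fit :", kstest(ac, norm(ac.mean(), ac.std()).cdf).statistic)
print("Laplacian fit:", kstest(ac, laplace(np.median(ac), np.mean(np.abs(ac - np.median(ac)))).cdf).statistic)
```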

545 citations


Journal ArticleDOI
F. Schoute1
TL;DR: Dynamic frame length ALOHA achieves a throughput (expected number of successful packets per timeslot) of 0.426 which compares favorably with the 1/e (\approx 0.368) upper bound of ordinary slotted ALOHA.
Abstract: Adding frame structure to slotted ALOHA makes it very convenient to control the ALOHA channel and eliminate instability. The frame length is adjusted dynamically according to the number of garbled, successful, and empty timeslots in the past. Each terminal that has a packet to transmit selects at random one of the n timeslots of a frame. Dynamic frame length ALOHA achieves a throughput (expected number of successful packets per timeslot) of 0.426 which compares favorably with the 1/e (\approx0.368) upper bound of ordinary slotted ALOHA.
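A simplified simulation of frame-structured slotted ALOHA with a dynamically adjusted frame length, in the spirit of the abstract. The backlog estimate is updated from the counts of empty, successful, and garbled (collided) slots; the update rule and constants below are illustrative assumptions, not Schoute's exact control law.

```python
import numpy as np

rng = np.random.default_rng(1)
arrival_rate = 0.35                         # new packets per slot (Poisson)
backlog_est, backlog = 1.0, 0
sent = slots = 0

for _ in range(2000):                       # simulate 2000 frames
    n = max(1, round(backlog_est))          # frame length follows the estimated backlog
    choices = rng.integers(0, n, size=backlog)        # each backlogged terminal picks a slot
    counts = np.bincount(choices, minlength=n)
    empty, success = np.sum(counts == 0), np.sum(counts == 1)
    collided = n - empty - success
    backlog -= success                      # successful packets leave the system
    sent += success
    slots += n
    backlog += rng.poisson(arrival_rate * n)          # new arrivals during the frame
    # crude estimator: empty slots lower the estimate, collisions raise it
    backlog_est = max(1.0, backlog_est - empty + 2.0 * collided + arrival_rate * n - success)

print("throughput (successes per slot):", sent / slots)
```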

483 citations


Journal ArticleDOI
K. Bharath-Kumar1, Jeffrey M. Jaffe1
TL;DR: A detailed, empirical study of the "average" performance of the algorithms on typical, randomly chosen networks reveals that simpler heuristics are almost as effective.
Abstract: Algorithms for effectively routing messages from a source to multiple destination nodes in a store-and-forward computer network are studied. The focus is on minimizing the network cost (NC), which is the sum of weights of the links in the routing path. Several heuristic algorithms are studied for finding the NC minimum path (which is an NP-complete problem). Among them are variations of a minimum spanning tree (MST) heuristic and heuristics for the traveling salesman problem, both of which use global network information. Another set of heuristics examined is based on using only the shortest paths to the destinations. While the MST algorithm has the best worst case performance among all algorithms, a detailed, empirical study of the "average" performance of the algorithms on typical, randomly chosen networks reveals that simpler heuristics are almost as effective. The NC cost measure is also compared to the destination cost (DC), which is the sum of weights of the shortest path distances to all destinations. A scheme of algorithms is given which trades off between NC and DC.

406 citations


Journal ArticleDOI
TL;DR: A generalization of the slotted ALOHA random access scheme is considered in which a user transmits multiple copies of the same packet and it is found that under light traffic, multiple transmission gives better delay performance.
Abstract: A generalization of the slotted ALOHA random access scheme is considered in which a user transmits multiple copies of the same packet. The multiple copies can be either transmitted simultaneously on different frequency channels (frequency diversity) or they may be transmitted on a single high-speed channel but spaced apart by random time intervals (time diversity). In frequency diversity, two schemes employing channel selections with and without replacements have been considered. In time diversity, two schemes employing a fixed number of copies or a random number of copies for each packet have been considered. Activity factor-throughput tradeoffs in frequency diversity and delay-throughput tradeoffs in time diversity have been compared for various diversity orders. It is found that under light traffic, multiple transmission gives better delay performance. If the probability that a packet fails a certain number or more times is specified not to exceed some limit (a realistic requirement for satellite systems having large round trip propagation delay), then usually multiple transmission gives higher throughput.

266 citations


Journal ArticleDOI
TL;DR: The throughput efficiency of the pure selective-repeat ARQ for any receiver buffer size can be obtained and it is shown that the modified scheme achieves the same order of reliability as a pure ARQ scheme.
Abstract: The hybrid ARQ scheme with parity retransmission for error control, recently proposed by Lin and Yu [1], [2], is quite robust. This scheme provides both high system throughput and high system reliability. In this paper, a modified Lin-Yu hybrid ARQ scheme is presented. The modified scheme provides a slightly better throughput performance than the original Lin-Yu scheme; however, it is more flexible in utilizing the error-correction power of a code. The modified scheme can be incorporated with a rate 1/2 convolutional code using Viterbi decoding. Furthermore, the pure selective-repeat ARQ is a degenerate case of the modified scheme in selective mode. Lin and Yu analyzed their scheme only for a receiver buffer of size N where N is the number of data blocks that can be transmitted in a round-trip delay interval. No analysis for other buffer sizes was given. In this paper, the throughput performance of the modified Lin-Yu scheme is analyzed for any size of receiver buffer. Consequently, the throughput efficiency of the pure selective-repeat ARQ for any receiver buffer size can be obtained. We also show that the modified scheme achieves the same order of reliability as a pure ARQ scheme.

215 citations


Journal ArticleDOI
B. Glance1, L. Greenstein1
TL;DR: The effects of frequency-selective fading in a cellular mobile radio system that uses phase-shift keying with cosine rolloff pulses, and space diversity with maximal-ratio combining, are analyzed, highlighting the importance of the ratio \tau_{0}/T, where T is the digital symbol period.
Abstract: We analyze the effects of frequency-selective fading in a cellular mobile radio system that uses 1) phase-shift keying (PSK) with cosine rolloff pulses, and 2) space diversity with maximal-ratio combining. The distorting phenomena with which we deal are multipath fading (which produces the frequency selectivity), shadow fading, and cochannel interference. The relevant quality measure is defined to be the bit error rate averaged over the multipath fading, denoted by (BER). The relevant system performance characteristic is defined to be the probability distribution for (BER), taken over the ensemble of shadow fadings and locations of the desired and interfering mobiles. To obtain numerical results, we use a combination of analysis and Monte Carlo simulation, invoke widely accepted models for the multipath and shadow fadings, and assume a cellular system with seven channel sets and centrally located base stations. The outcome is a set of performance curves that reveal the influences of various system and channel parameters. These include: the number of modulation levels (two or four), the diversity order, the shape of the multipath delay spectrum, and the standard deviation (or delay spread, \tau_{0}) of the multipath delay spectrum. Practical factors accounted for in these assessments include fading- and interference-related timing recovery errors and combiner imperfections. Our results highlight the importance of the ratio \tau_{0}/T, where T is the digital symbol period. They show that the delay spectrum shape is of no importance for \tau_{0}/T \leq 0.2, but can have a profound influence for \tau_{0}/T \geq 0.3. We also find that using 4-PSK leads to better detection performance, in certain cases, than using 2-PSK.

199 citations


Journal ArticleDOI
TL;DR: Under certain conditions it is shown that discrete-time sequences carry redundant information which then allow for the detection and correction of errors.
Abstract: The relationship between the discrete Fourier transform and error-control codes is examined. Under certain conditions we show that discrete-time sequences carry redundant information which then allow for the detection and correction of errors. An application of this technique to impulse noise cancellation for pulse amplitude modulation transmission is described.
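The abstract states the principle only; the following numpy sketch shows one simple instance of it under stated assumptions: a real sequence whose upper DFT bins are forced to zero (the redundancy) is corrupted by a single impulse, and the impulse location and value are recovered from two "syndrome" bins that should have been zero. The sequence length and bin choices are illustrative, not the paper's construction.

```python
import numpy as np

N = 32
rng = np.random.default_rng(2)

# Build a real sequence with DFT bins 10..22 forced to zero (the redundancy).
X = rng.normal(size=N) + 1j * rng.normal(size=N)
X[10:23] = 0
X[1:] = (X[1:] + np.conj(X[::-1][:N-1])) / 2     # enforce conjugate symmetry -> real signal
X[0] = X[0].real
x = np.fft.ifft(X).real

# Corrupt one sample (impulse noise).
p_true, e_true = 17, 0.8
y = x.copy()
y[p_true] += e_true

# Syndromes: bins that are zero for the clean signal equal e * exp(-j*2*pi*k*p/N).
Y = np.fft.fft(y)
s1, s2 = Y[12], Y[13]
p_hat = int(round((-np.angle(s2 / s1)) * N / (2 * np.pi))) % N   # the ratio reveals the location
e_hat = (s1 * np.exp(2j * np.pi * 12 * p_hat / N)).real          # then the amplitude follows

print("estimated position/value:", p_hat, round(e_hat, 3), " true:", p_true, e_true)
```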

185 citations


Journal ArticleDOI
TL;DR: It is shown that the baseband signal of the modulator, the P_{e} = f(E_{b}/N_{0}) performance, and the spectral characteristics of nonlinearly amplified (hard-limited or saturated) radio systems of XPSK and tamed frequency modulation (TFM) are practically the same.
Abstract: A new modulation technique, cross-correlated phase-shift keying (XPSK), is introduced. XPSK is a band-limited offset QPSK modulation technique which has an almost constant envelope. In XPSK modulators, a controlled amount of cross correlation between the in-phase (I) and quadrature (Q) channels is introduced. I and Q cross correlation reduces the envelope fluctuation of the intersymbol-interference and jitter-free OQPSK (IJF-OQPSK) modulation scheme, introduced by Feher et al. [1], [2], from 3 dB to approximately 0 dB, thus further improving the performance of IJF-OQPSK systems in nonlinear radio systems [7], [14]. It is shown that the baseband signal of the modulator, the P_{e} = f(E_{b}/N_{0}) performance, and the spectral characteristics of nonlinearly amplified (hard-limited or saturated) radio systems of XPSK and tamed frequency modems (TFM) are practically the same. The XPSK demodulator is a conventional OQPSK demodulator; the TFM demodulator requires a somewhat more complex signal processor. For this reason, the XPSK approach may lead to significant demodulator hardware cost savings, particularly in point-to-multipoint distribution systems such as broadcast systems. Simulation results for linear and nonlinear (saturated amplifier) systems operated in an adjacent-channel interference environment (in addition to thermal noise) are presented. Measurement results performed on a 128 kbit/s rate hardware-prototype modem are also reported. Experimental eye diagram and power spectrum density measurement results are in close agreement with the computer simulation results.

173 citations


Journal ArticleDOI
TL;DR: A new form of image estimator, which takes account of linear features, is derived using a signal equivalent formulation and shows that the method can improve the quality of noisy images even when the signal-to-noise ratio is very low.
Abstract: A new form of image estimator, which takes account of linear features, is derived using a signal equivalent formulation. The estimator is shown to be a nonstationary linear combination of three stationary estimators. The relation of the estimator to human visual physiology is discussed. A method for estimating the nonstationary control information is described and shown to be effective when the estimation is made from noisy data. A suboptimal approach which is computationally less demanding is presented and used in the restoration of a variety of images corrupted by additive white noise. The results show that the method can improve the quality of noisy images even when the signal-to-noise ratio is very low.

164 citations


Journal ArticleDOI
TL;DR: A design methodology based on the correspondence between performance requirements, mathematical parameters, and circuit parameters of a sigma-delta modulator is presented; it guides the designer in selecting circuit parameters from system requirements and views the sigma-delta modulator as a device that spreads the quantization noise over a band much broader than the signal bandwidth, shapes it, and allows the out-of-band noise to be filtered off.
Abstract: The paper presents a design methodology based on correspondence between performance requirements, mathematical parameters, and circuit parameters of a sigma-delta modulator. This methodology will guide a design engineer in selecting the circuit parameters based on system requirements, in translating paper design directly into LSI design, in predicting the effect of component sensitivity, and in analyzing the operations of the sigma-delta modulator. The sigma-delta modulator is viewed as a device which distributes the noise power, determined by peak SNR, over a much broader band compared to the signal bandwidth, shapes and amplifies it, and allows filtering of the out-of-band noise. The shaping and amplification are quantified by two parameters, F and P, whose product is analogous to the square of the step size of a uniform coder. These two parameters are related, on one hand, to the time constants, or the locations of zeros and poles. On the other hand, inequalities are set up between performance parameters, such as signal-to-noise ratio and dynamic range, and F and P.
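A minimal first-order sigma-delta modulator simulation, included only to illustrate the "noise spread over a broad band, shaped, and filtered out" view taken above. The oversampling ratio, input, and crude averaging decimator are illustrative assumptions; the paper's F and P design parameters are not reproduced here.

```python
import numpy as np

osr = 128                                   # oversampling ratio (illustrative)
n = 1 << 16
t = np.arange(n)
x = 0.5 * np.sin(2 * np.pi * t / (osr * 32))     # slow in-band sine input, |x| < 1

integ, y_prev = 0.0, 0.0
bits = np.empty(n)
for i in range(n):
    integ += x[i] - y_prev                  # integrate the error between input and fed-back output
    y_prev = 1.0 if integ >= 0 else -1.0    # 1-bit quantizer
    bits[i] = y_prev

# Crude decimation filter: average over osr samples; a real design would use a sharper filter.
decimated = bits[: n - n % osr].reshape(-1, osr).mean(axis=1)
target = x[: n - n % osr].reshape(-1, osr).mean(axis=1)
print("raw 1-bit quantization RMS error  :", np.sqrt(np.mean((bits - x) ** 2)))
print("in-band RMS error after decimation:", np.sqrt(np.mean((decimated - target) ** 2)))
```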

Journal ArticleDOI
TL;DR: This paper investigates the capacity of networks with a regular structure operating under the slotted ALOHA access protocol, first considering circular and linear networks and then two-dimensional networks and investigates some of the peculiarities of routing in these networks.
Abstract: In this paper we investigate the capacity of networks with a regular structure operating under the slotted ALOHA access protocol. We first consider circular (loop) and linear (bus) networks and then proceed to two-dimensional networks. For one-dimensional networks we find that the capacity is basically independent of the network average degree and is almost constant with respect to network size. For two-dimensional networks we find that the capacity grows in proportion to the square root of the number of nodes in the network provided that the average degree is kept small. Furthermore, we find that reducing the average degree (with certain connectivity restrictions) allows a higher throughput to be achieved. We also investigate some of the peculiarities of routing in these networks.

Journal ArticleDOI
TL;DR: It is shown that the multiple dwell procedure can significantly reduce the expected acquisition time from that obtained with a single dwell system, with the most significant improvement obtained by using a two-dwell system.
Abstract: The technique of multiple dwell serial search is described and analyzed. The advantage of the multiple dwell procedure is that the examination interval need not be fixed, allowing incorrect cells to be quickly discarded, which in turn results in a shorter search time than is possible with a fixed dwell time procedure. This type of search scheme is particularly useful for direct sequence code acquisition in a spread-spectrum communication system. An expression for the generating function is obtained from a flow graph representation of the multiple dwell technique. The generating function is used to develop expressions for the mean and variance of the search time in terms of the following parameters: the dwell times, the detection probability, the false alarm probability, and the false alarm penalty time. Coherent detector characteristics are then used to investigate the performance of the multiple dwell technique for direct sequence code acquisition. It is shown that the multiple dwell procedure can significantly reduce the expected acquisition time from that obtained with a single dwell system. The most significant improvement is obtained by using a two-dwell system. Additional but nominal improvement is gained when more than two dwells are employed.
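A hedged Monte Carlo sketch of two-dwell serial search: each cell is first examined with a short dwell, and only cells passing the first threshold get the longer verification dwell. The dwell times, detection/false-alarm probabilities, and false-alarm penalty below are illustrative numbers; the paper's flow-graph generating-function analysis of the mean and variance is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)
cells = 1000                 # code phase uncertainty region
tau1, tau2 = 1.0, 10.0       # short and long dwell times
penalty = 1000.0             # time lost after a full false alarm
pfa1, pfa2 = 0.1, 0.01       # false-alarm probability per dwell on an incorrect cell
pd1, pd2 = 0.95, 0.95        # detection probability per dwell on the correct cell

def acquisition_time():
    correct = rng.integers(cells)
    t, cell = 0.0, 0
    while True:
        t += tau1                                     # first (short) dwell on every cell
        if cell == correct:
            if rng.random() < pd1:
                t += tau2                             # verification dwell
                if rng.random() < pd2:
                    return t                          # acquired
        elif rng.random() < pfa1:
            t += tau2
            if rng.random() < pfa2:
                t += penalty                          # false alarm survived both dwells
        cell = (cell + 1) % cells                     # move on to the next cell

times = [acquisition_time() for _ in range(2000)]
print("mean acquisition time:", np.mean(times))
```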

Journal ArticleDOI
G.J. Foschini1, B. Gopinath
TL;DR: The structure of optimal policies for the model considered with three types of users is determined, which consists of limiting the number of waiting requests of each type, and reserving a part of the memory to each type.
Abstract: Efficient design of service facilities, such as data or computer networks that meet random demands, often leads to the sharing of resources among users. Contention for the use of a resource results in queueing. The waiting room is a part of any such service facility. The number of accepted service requests per unit of time (throughput), or the fraction of the time the servers are busy (utilization), are often used as performance measures to compare designs. Most common models in queueing theory consider the design of the waiting rooms with the assumption that, although individual requests may differ from one another, they are statistically indistinguishable. However, there are several instances where available information allows us to classify the requests for service into different types. In such cases the design of the service facility not only involves the determination of an optimum size for the waiting room but also the rules of sharing it among the different types. Even with a fixed set of resources, the rules of sharing them can influence performance. In data networks (or computer networks) the "waiting room" consists of memory of one kind or another. Messages (jobs) destined for different locations (processors) sharing common storage is an important example of shared use of memory. Recently, Kleinrock and Kamoun have modeled such use of memory and computed the performance of various policies for managing the allocation of memory to several types of users. Decisions to accept or reject a demand for service were based on the number of waiting requests of each type. However, the optimal policy was not determined even in the case where there were only two types of users. We determine the structure of optimal policies for the model considered with three types of users. The optimal policy consists of limiting the number of waiting requests of each type, and reserving a part of the memory to each type.

Journal ArticleDOI
I. Garrett1
TL;DR: This paper analyzes the receiver sensitivity of an optical PPM system over a slightly dispersive channel, i.e., where both "wrong slot" and "false alarm" errors are important.
Abstract: The best monomode optical fiber links have bandwidths orders of magnitude greater than that of the information currently transmitted over them. This excess bandwidth can be exploited using digital PPM to improve receiver sensitivity. This paper analyzes the receiver sensitivity of an optical PPM system over a slightly dispersive channel, i.e., where both "wrong slot" and "false alarm" errors are important. It is shown that receiver sensitivity of better than 100 photons per binary bit-time is theoretically possible using direct detection and uncoded PPM. Ideal heterodyne detection should reduce this to below 5 photons per binary bit-time. Timing extraction and a digital modulation method are discussed.

Journal ArticleDOI
TL;DR: A tight lower bound is obtained on the minimal expected delay as well as sets of feasible solutions for the problem of selecting a set of routes which minimizes the expected network end-to-end queueing and transmission delay.
Abstract: The problem of selecting a single route for each class of service and each pair of communicating nodes in an SNA network is considered. The nodes, links, sets of candidate routes, and traffic characteristics are given. The goal is to select a set of routes which minimizes the expected network end-to-end queueing and transmission delay. Queueing is modeled as a network of M/M/1 queues which leads to a nonlinear combinatorial optimization problem. Using Lagrangean relaxation and subgradient optimization techniques, we obtain a tight lower bound on the minimal expected delay as well as sets of feasible solutions for the problem. An experimental interactive system has been used to evaluate the procedure; very favorable results have been obtained on a variety of networks.
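As a small aid to the objective described above, the sketch below evaluates a candidate route assignment under the standard M/M/1 network delay model the abstract refers to, T = (1/gamma) * sum_l f_l / (C_l - f_l), where f_l is the total flow on link l, C_l its capacity, and gamma the total offered traffic. Only the evaluation step is shown; the Lagrangean relaxation and subgradient search over candidate route sets are not reproduced, and the function and example names are hypothetical.

```python
from collections import defaultdict

def network_delay(demands, routes, capacity):
    """demands: {key: traffic}, routes: {key: [link, ...]}, capacity: {link: C}."""
    flow = defaultdict(float)
    for key, traffic in demands.items():
        for link in routes[key]:
            flow[link] += traffic                     # accumulate flow on each link
    gamma = sum(demands.values())
    if any(flow[l] >= capacity[l] for l in flow):
        return float('inf')                           # infeasible: a link is overloaded
    return sum(f / (capacity[l] - f) for l, f in flow.items()) / gamma

# Hypothetical 3-node example: two traffic classes routed over named links.
capacity = {('A', 'B'): 10.0, ('B', 'C'): 10.0, ('A', 'C'): 6.0}
demands = {('A', 'C', 'interactive'): 3.0, ('A', 'B', 'batch'): 4.0}
routes = {('A', 'C', 'interactive'): [('A', 'C')], ('A', 'B', 'batch'): [('A', 'B')]}
print("expected delay:", network_delay(demands, routes, capacity))
```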

Journal ArticleDOI
Brent Hailpern1, Susan S. Owicki
TL;DR: This paper discusses the application of modular program verification techniques to protocols, and uses two data transfer protocols from the literature: the alternating bit protocol and a protocol proposed by Stenning.
Abstract: Programs that implement computer communications protocols can exhibit extremely complicated behavior, and neither informal reasoning nor testing is reliable enough to establish their correctness. In this paper we discuss the application of modular program verification techniques to protocols. This approach is more reliable than informal reasoning, but has an advantage over formal reasoning based on finite-state models: the complexity of the proof need not grow unmanageably as the size of the program increases. Certain tools of concurrent program verification that are especially useful for protocols are presented: history variables that record sequences of input and output values, temporal logic for expressing properties that must hold in a future system state (such as eventual receipt of a message), and module specification and composition rules. The use of these techniques is illustrated by verifying two data transfer protocols from the literature: the alternating bit protocol and a protocol proposed by Stenning.

Journal ArticleDOI
TL;DR: A recursion formula is described, useful for calculating steady-state probabilities in a lost-call-cleared service facility carrying a mixture of message traffics with different peakedness factors and/or capacity requirements.
Abstract: A recursion formula is described. It is useful for calculating steady-state probabilities in a lost-call-cleared service facility carrying a mixture of message traffics with different peakedness factors and/or capacity requirements. In conjunction with the notion of equivalent trunk groups, these probabilities can then be used to evaluate the blocking probability perceived by each traffic stream.
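The abstract does not reproduce the recursion itself. As background, the widely known recursion of this general type for Poisson streams with different capacity requirements (the Kaufman-Roberts form) is sketched below; the paper's version additionally handles traffic with arbitrary peakedness, which this sketch does not capture.

```python
def occupancy_distribution(capacity, streams):
    """Kaufman-Roberts style recursion.
    streams: list of (offered_load_in_erlangs, channels_required_per_call)."""
    q = [0.0] * (capacity + 1)
    q[0] = 1.0
    for j in range(1, capacity + 1):
        q[j] = sum(a * b * q[j - b] for a, b in streams if b <= j) / j
    total = sum(q)
    return [p / total for p in q]

def blocking(capacity, streams):
    """Blocking seen by each stream: probability of fewer than b free channels."""
    q = occupancy_distribution(capacity, streams)
    return [sum(q[capacity - b + 1:]) for _, b in streams]

# Example: 20 channels shared by single-channel calls and 4-channel wideband calls.
print(blocking(20, [(8.0, 1), (1.5, 4)]))
```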

Journal ArticleDOI
TL;DR: Over a large class of benign and hostile environments, e.g., Gaussian IF filter, partial-band noise jamming, the differential detector offers no theoretical performance advantage over the limiter-discriminator receiver with integrate-and-dump postdetection filtering.
Abstract: The error probability performance of differential detection of narrow-band FM is determined and compared with the analogous results for limiter-discriminator detection of the same modulation. It is shown that over a large class of benign and hostile environments, e.g., Gaussian IF filter, 1 \leq BT \leq 3, h \leq 1 , AWGN, partial-band noise jamming, the differential detector offers no theoretical performance advantage over the limiter-discriminator receiver with integrate-and-dump postdetection filtering.

Journal ArticleDOI
TL;DR: This work considers several classes of interfering queues that appear in packet-radio networks and calculates the average packet waiting time and queue lengths, and develops a method to approximate these quantities.
Abstract: We consider several classes of interfering queues that appear in packet-radio networks. We analyze the class of systems where one of the queues is given full priority and obtain an expression for the joint probability distribution of the queue lengths. For ALOHA-type systems with two symmetric queues we calculate the average packet waiting time and queue lengths, and for symmetric systems with an arbitrary number of subscribers we develop a method to approximate these quantities. The approximation turns out to be close to the analysis and simulation results.

Journal ArticleDOI
TL;DR: In this paper, a distributed algorithm for constructing minimum weight directed spanning trees (arborescences), each with a distinct root node, in a strongly connected directed graph is presented, and the amount of information exchanged and the time to completion are O(|N|^{2}).
Abstract: A distributed algorithm is presented for constructing minimum weight directed spanning trees (arborescences), each with a distinct root node, in a strongly connected directed graph. A processor exists at each node. Given the weights and origins of the edges incoming to their nodes, the processors follow the algorithm and exchange messages with their neighbors until all arborescences are constructed. The amount of information exchanged and the time to completion are O(|N|^{2}) .

Journal ArticleDOI
TL;DR: A comprehensive study of the stability and optimization of the infinite population, slotted, nonpersistent CSMA and CSMA/CD channels is presented, and provides a number of new results including robustness in stability and performance in the presence of channel and control parameter variations.
Abstract: A comprehensive study of the stability and optimization of the infinite population, slotted, nonpersistent CSMA and CSMA/CD channels is presented. The approach to both stability and performance optimization differs significantly from previous work, and provides a number of new results including robustness in stability and performance in the presence of channel and control parameter variations. It is first shown that both channels are unstable under the usual assumption of random retransmission delay. Pakes' lemma is then applied to study the properties of a type of distributed retransmission control which provides stable channels. Basic results are in the form of inequalities which define stability regions in the space of channel and control parameters, and further permit one to specify controls which maximize channel throughput as a function of packet length and CD time with stability guaranteed. The delay versus throughput characteristic for the stabilized channels is derived and used to demonstrate the performance achievable with these channels.

Journal ArticleDOI
David Cox1
TL;DR: In this article, the authors determined the cumulative distribution of signal-to-noise ratio (S/N) for antenna diversity using realistic orientation and multipath propagation models, and showed that two-branch selection diversity with two perpendicular antennas yields an S/N distribution with the same slope as two-branch selection diversity in the fixed-oriented mobile radio environment.
Abstract: Antenna diversity can mitigate signal impairments caused by random angular orientation and multipath radio propagation when using portable radiotelephones. Cumulative distributions of signal-to-noise ratio ( S/N ) were determined for antenna diversity using realistic orientation and multipath propagation models. In a random orientation and multipath propagation environment with -6 dB average crosspolarization coupling, two-branch selection diversity with two perpendicular antennas yields an S/N distribution with the same slope as two-branch selection diversity in the fixed-oriented mobile radio environment. The distribution for random orientation is about 4.5 dB worse, however, than the mobile radio distribution.
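A hedged Monte Carlo illustration of two-branch selection diversity over independent Rayleigh-fading branches, compared with a single branch at low-percentile (deep fade) S/N levels. The simple i.i.d. Rayleigh model below ignores the random-orientation and cross-polarization coupling effects that the paper models in detail.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200_000
# Instantaneous SNR of a Rayleigh-faded branch is exponentially distributed.
branch1 = rng.exponential(1.0, n)
branch2 = rng.exponential(1.0, n)
selected = np.maximum(branch1, branch2)        # selection diversity picks the stronger branch

for p in (0.01, 0.1):                          # compare deep-fade percentiles of the S/N CDF
    single = 10 * np.log10(np.quantile(branch1, p))
    divers = 10 * np.log10(np.quantile(selected, p))
    print(f"{p:4.0%} outage level: single {single:6.1f} dB, selection diversity {divers:6.1f} dB")
```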

Journal ArticleDOI
Werner Bux1, M. Schlatter
TL;DR: A new analytic approach for the performance evaluation of buffer insertion rings is proposed which overcomes the basic limitations of former analyses and its accuracy turns out to be high, as comparison with simulation results shows.
Abstract: A new analytic approach for the performance evaluation of buffer insertion rings is proposed which overcomes the basic limitations of former analyses. Key to our analysis is that the global performance model of a buffer insertion ring is decomposed into submodels in such a way that the basic interdependence of interarrival and transmission times of the frames transmitted is taken into account. The analysis itself is relatively straightforward and yields simple and explicit results for the mean delays of the frames. For special traffic patterns, the analysis is exact, for the general case, it is approximate but its accuracy turns out to be high, as comparison with simulation results shows.

Journal ArticleDOI
TL;DR: A digital image is approximated as a sum of outer products dxy^{T}, where d is a real number but the vectors x and y have elements +1, -1, or 0 only, and the expansion gives a least squares approximation.
Abstract: We approximate a digital image as a sum of outer products dxy^{T}, where d is a real number but the vectors x and y have elements +1, -1, or 0 only. The expansion gives a least squares approximation. Work is proportional to the number of pixels; reconstruction involves only additions.
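A hedged greedy sketch of the decomposition described above: repeatedly pick ternary vectors x and y (elements +1, -1, or 0), set d to its least-squares value for that pair, and subtract the outer product from the residual. Choosing x and y by thresholding the dominant singular vectors is an illustrative heuristic, not necessarily the authors' selection rule.

```python
import numpy as np

def ternary_outer_approx(image, terms=8, thresh=0.25):
    residual = image.astype(float).copy()
    components = []
    for _ in range(terms):
        u, s, vt = np.linalg.svd(residual, full_matrices=False)
        # Heuristic ternary vectors from the dominant singular vectors.
        x = np.sign(u[:, 0]) * (np.abs(u[:, 0]) > thresh * np.abs(u[:, 0]).max())
        y = np.sign(vt[0]) * (np.abs(vt[0]) > thresh * np.abs(vt[0]).max())
        denom = (x @ x) * (y @ y)
        if denom == 0:
            break
        d = x @ residual @ y / denom           # least-squares d for the fixed ternary x, y
        components.append((d, x, y))
        residual -= d * np.outer(x, y)         # reconstruction needs only additions and one scaling
    return components, residual

rng = np.random.default_rng(5)
img = rng.integers(0, 256, size=(32, 32)).astype(float)
comps, res = ternary_outer_approx(img)
print("relative residual energy:", np.sum(res**2) / np.sum(img**2))
```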

Journal ArticleDOI
TL;DR: It is shown that the truncated block is well approximated by wide-sense Markoff statistics; a signal having these characteristics has a high probability of belonging to the root signal set of median filters.
Abstract: In this paper we source encode the truncated block used in block truncation coding. It is shown that the truncated block is well approximated by wide-sense Markoff statistics; a signal having these characteristics has a high probability of belonging to the root signal set of median filters. Because the root signal space is much smaller than the binary space, it takes fewer bits to specify the truncated block in the root signal space, thereby obtaining rate compression. Using two-dimensional filtering we can reduce the standard BTC rate of 1.63 bits/pel to 1.31 bits/pel. Using one-dimensional filtering along with a trellis encoder, rates close to 1.1 bits/pel are obtained with this fixed-length coding method.
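For context, a minimal sketch of standard block truncation coding (BTC): each 4x4 block is reduced to its mean, standard deviation, and a binary "truncated block" indicating which pixels lie above the mean. The paper's contribution, coding that binary block in the median-filter root signal space, is not implemented here.

```python
import numpy as np

def btc_encode_block(block):
    mean, std = block.mean(), block.std()
    bits = block > mean                       # the truncated (binary) block
    return mean, std, bits

def btc_decode_block(mean, std, bits):
    q = bits.sum()                            # pixels above the mean
    m = bits.size - q                         # pixels at or below the mean
    if q == 0 or m == 0:
        return np.full(bits.shape, mean)
    low = mean - std * np.sqrt(q / m)         # levels chosen to preserve block mean and variance
    high = mean + std * np.sqrt(m / q)
    return np.where(bits, high, low)

block = np.arange(16.0).reshape(4, 4)
print(btc_decode_block(*btc_encode_block(block)))
```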

Journal ArticleDOI
TL;DR: The non-Gaussian nature of CW interference is exploited to suppress the CW via the A/D converter following chip demodulation in a direct sequence pseudonoise (DSPN) communication link and the modulation is coherent BPSK.
Abstract: The non-Gaussian nature of CW interference is exploited to suppress the CW via the A/D converter following chip demodulation in a direct sequence pseudonoise (DSPN) communication link. The modulation is coherent BPSK. The A/D converter quantizes to 2 bits, sign and magnitude. A scheme of threshold adaptation and postquantization weighting gains great advantage from making very reliable decisions on a relatively small percentage of the demodulated chips. The bit error rate performance in CW interference generally surpasses that of an ideal analog DSPN correlator. The performance in Gaussian noise is within 0.6 dB of ideal analog.

Journal ArticleDOI
J. Namiki1
TL;DR: A new predistorter control technique is introduced, and the nonlinear compensation capability of a third-order predistorter incorporating this technique is assessed.
Abstract: In digital microwave transmission, the nonlinear characteristics of a high power amplifier, such as a TWT (traveling-wave tube), inhibit efficient output use. This note introduces a new predistorter control technique, and assesses the nonlinear compensation capability of a third-order predistorter incorporating this technique. For 16-QAM (quadrature amplitude modulation), a 10 dB reduction in out-of-band emission and a C/N improvement of more than 8 dB with respect to symbol error rate can be achieved at 3 dB TWT average output power backoff.

Journal ArticleDOI
TL;DR: The performance of direct sequence QPSK spread-spectrum systems using complex adaptive filters in the presence of pulsed CW interference is analyzed and it is shown that the performance of the two-sided transversal filter is better than that of the prediction error filter.
Abstract: In this paper, the performance of direct sequence QPSK spread-spectrum systems using complex adaptive filters in the presence of pulsed CW interference is analyzed. Both adaptive prediction error filters and adaptive transversal filters with two-sided taps are considered. It is shown that the time constant of the tap weight adaptation in the interference off-interval is usually much greater than the time constant in the on-interval, and that this is beneficial for the system since it results in retaining the rejection property of the filter. Under steady-state conditions, the tap weights are calculated. Analytical expressions for the signal-to-noise ratio improvement under the least favorable interference condition are given. It is shown that the performance of the two-sided transversal filter is better than that of the prediction error filter.
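A hedged sketch of the underlying idea of narrow-band interference suppression with an adaptive prediction-error filter: the wideband spreading chips are nearly white and hence unpredictable, while a CW tone is predictable, so subtracting an LMS prediction removes mostly interference. A one-sided, real-valued predictor is used here for brevity; the paper analyzes complex prediction-error and two-sided transversal filters under pulsed interference, which this sketch does not reproduce.

```python
import numpy as np

rng = np.random.default_rng(6)
n, taps, mu = 20_000, 8, 1e-3
chips = rng.choice([-1.0, 1.0], n)                       # DS spreading sequence (desired wideband part)
tone = 3.0 * np.cos(2 * np.pi * 0.12 * np.arange(n))     # strong CW interferer (illustrative frequency)
noise = 0.1 * rng.normal(size=n)
received = chips + tone + noise

w = np.zeros(taps)
out = np.zeros(n)
for i in range(taps, n):
    past = received[i - taps:i][::-1]                    # predict the current sample from past samples
    out[i] = received[i] - w @ past                      # prediction error ~ chips plus residual
    w += mu * out[i] * past                              # LMS tap-weight update

print("interference-plus-noise power before:", np.mean((received - chips)[taps:] ** 2).round(3))
print("interference-plus-noise power after :", np.mean((out - chips)[taps:] ** 2).round(3))
```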

Journal ArticleDOI
TL;DR: A new approach to PSK signal detection over a slow nonselective Rayleigh fading channel which does not require a carrier recovery loop is considered and coherent demodulation is achieved by making use of estimates of the quadrature amplitudes of the received PSK signals.
Abstract: We consider here a new approach to PSK signal detection over a slow nonselective Rayleigh fading channel which does not require a carrier recovery loop. The receiver achieves coherent demodulation by making use of estimates of the quadrature amplitudes of the received PSK signals in its likelihood ratio test. The receiver is assumed to have a memory containing information on the past received signals which enables it to generate the estimates. The error rate of the receiver can be evaluated analytically and computer simulation results are presented to verify the predicted performance.