
Showing papers in "IEEE Journal on Selected Areas in Communications in 2000"


Journal ArticleDOI
TL;DR: In this paper, a simple but nevertheless extremely accurate analytical model to compute the 802.11 DCF throughput, under the assumption of a finite number of terminals and ideal channel conditions, is presented.
Abstract: The IEEE has standardized the 802.11 protocol for wireless local area networks. The primary medium access control (MAC) technique of 802.11 is called the distributed coordination function (DCF). The DCF is a carrier sense multiple access with collision avoidance (CSMA/CA) scheme with binary slotted exponential backoff. This paper provides a simple, but nevertheless extremely accurate, analytical model to compute the 802.11 DCF throughput, under the assumption of a finite number of terminals and ideal channel conditions. The proposed analysis applies to both the packet transmission schemes employed by DCF, namely, the basic access and the RTS/CTS access mechanisms. In addition, it also applies to a combination of the two schemes, in which packets longer than a given threshold are transmitted according to the RTS/CTS mechanism. By means of the proposed model, we provide an extensive throughput performance evaluation of both access mechanisms of the 802.11 protocol.

8,072 citations
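As a rough illustration of how such a model is evaluated, the Python sketch below solves the fixed point linking the per-station transmission probability and the conditional collision probability by bisection, and then computes a saturation throughput for the basic access case. The contention-window parameters (W = 32, m = 5) and the slot/frame durations are assumed illustrative values, not figures taken from the paper.

```python
# Sketch of the fixed-point computation behind saturation-throughput models of
# 802.11 DCF. Parameter values (W, m, durations) are assumed for illustration.

def tau_of_p(p, W, m):
    # Transmission probability of a station given conditional collision probability p.
    return 2 * (1 - 2 * p) / ((1 - 2 * p) * (W + 1) + p * W * (1 - (2 * p) ** m))

def solve(n, W=32, m=5):
    # Find tau such that tau = tau_of_p(1 - (1 - tau)**(n - 1)) by bisection.
    f = lambda tau: tau_of_p(1 - (1 - tau) ** (n - 1), W, m) - tau
    lo, hi = 1e-9, 0.999
    for _ in range(200):
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    tau = (lo + hi) / 2
    return tau, 1 - (1 - tau) ** (n - 1)

def saturation_throughput(n, payload=8184, slot=50, T_s=8982, T_c=8713):
    # payload in bits; durations in microseconds (purely illustrative values).
    tau, p = solve(n)
    P_tr = 1 - (1 - tau) ** n                    # prob. of at least one transmission in a slot
    P_s = n * tau * (1 - tau) ** (n - 1) / P_tr  # prob. that a transmission is successful
    return (P_s * P_tr * payload) / (
        (1 - P_tr) * slot + P_tr * P_s * T_s + P_tr * (1 - P_s) * T_c
    )                                            # bits per microsecond of channel time

if __name__ == "__main__":
    for n in (5, 10, 20, 50):
        print(n, round(saturation_throughput(n), 3))
```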


Journal ArticleDOI
TL;DR: The basic concept of OBS is described and a general architecture of optical core routers and electronic edge routers in the OBS network is presented and a nonperiodic time-interval burst assembly mechanism is described.
Abstract: Optical burst switching (OBS) is a promising solution for building terabit optical routers and realizing IP over WDM. In this paper, we describe the basic concept of OBS and present a general architecture of optical core routers and electronic edge routers in the OBS network. The key design issues related to the OBS are also discussed, namely, burst assembly (burstification), channel scheduling, burst offset-time management, and some dimensioning rules. A nonperiodic time-interval burst assembly mechanism is described. A class of data channel scheduling algorithms with void filling is proposed for optical routers using a fiber delay line buffer. The LAUC-VF (latest available unused channel with void filling) channel scheduling algorithm is studied in detail. Initial results on the burst traffic characteristics and on the performance of optical routers in the OBS network with self-similar traffic as inputs are reported in the paper.

961 citations
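To make the void-filling idea concrete, here is a toy Python sketch of a LAUC-VF-style scheduler: every data channel keeps its reserved bursts as non-overlapping intervals, and an arriving burst is placed on the eligible channel whose leading void (the gap between the latest available unused time and the burst's start) is smallest. The channel bookkeeping and the example bursts are illustrative assumptions, not the paper's implementation.

```python
# Toy sketch of LAUC-VF-style channel selection for optical burst switching.
# Each channel stores its scheduled bursts as sorted (start, end) tuples.
from bisect import insort

class Channel:
    def __init__(self, name):
        self.name = name
        self.bursts = []                 # sorted list of (start, end)

    def gap_before(self, start, end):
        """Leading void before `start` if the burst fits on this channel, else None."""
        prev_end = 0.0
        for s, e in self.bursts:
            if e <= start:
                prev_end = max(prev_end, e)
            elif s >= end:
                continue
            else:
                return None              # overlaps an existing reservation
        return start - prev_end          # 0 means back-to-back with the previous burst

    def reserve(self, start, end):
        insort(self.bursts, (start, end))

def lauc_vf(channels, start, end):
    """Pick the feasible channel with the smallest leading void (latest unused time)."""
    best, best_gap = None, None
    for ch in channels:
        gap = ch.gap_before(start, end)
        if gap is not None and (best_gap is None or gap < best_gap):
            best, best_gap = ch, gap
    if best is not None:
        best.reserve(start, end)
    return best                          # None means the burst is dropped (no FDL modeled)

if __name__ == "__main__":
    chs = [Channel(f"ch{i}") for i in range(2)]
    for (s, e) in [(0, 5), (0, 4), (7, 9), (5.5, 6.8)]:
        picked = lauc_vf(chs, s, e)
        print((s, e), "->", picked.name if picked else "dropped")
```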


Journal ArticleDOI
TL;DR: A transmission scheme is presented that exploits the diversity provided by two transmit antennas when neither the transmitter nor the receiver has access to channel state information; the scheme requires no channel state side information at the receiver.
Abstract: We present a transmission scheme for exploiting diversity given by two transmit antennas when neither the transmitter nor the receiver has access to channel state information. The new detection scheme can use equal energy constellations and encoding is simple. At the receiver, decoding is achieved with low decoding complexity. The transmission provides full spatial diversity and requires no channel state side information at the receiver. The scheme can be considered as the extension of differential detection schemes to two transmit antennas.

884 citations


Journal ArticleDOI
TL;DR: The main focus of this paper is to show the accuracy of the derived analytical model and its applicability to the analysis and optimization of an entire video transmission system.
Abstract: A theoretical analysis of the overall mean squared error (MSE) in hybrid video coding is presented for the case of error prone transmission. Our model covers the complete transmission system including the rate-distortion performance of the video encoder, forward error correction, interleaving, and the effect of error concealment and interframe error propagation at the video decoder. The channel model used is a 2-state Markov model describing burst errors on the symbol level. Reed-Solomon codes are used for forward error correction. Extensive simulation results using an H.263 video codec are provided for verification. Using the model, the optimal tradeoff between INTRA and INTER coding as well as the optimal channel code rate can be determined for given channel parameters by minimizing the expected MSE at the decoder. The main focus of this paper is to show the accuracy of the derived analytical model and its applicability to the analysis and optimization of an entire video transmission system.

833 citations
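The channel model mentioned in the abstract is a two-state Markov chain operating at the symbol level. The short Python sketch below generates such a burst-error pattern; the transition probabilities and per-state error rates are assumed values chosen only for illustration.

```python
# Minimal sketch of a 2-state Markov ("good"/"bad") symbol-error process,
# as used to model burst errors. All numerical parameters are assumed.
import random

def markov_errors(n_symbols, p_gb=0.01, p_bg=0.2, err_good=0.0, err_bad=0.5, seed=1):
    """Return a list of 0/1 flags marking which symbols are hit by errors."""
    rng = random.Random(seed)
    state, flags = "good", []
    for _ in range(n_symbols):
        err_prob = err_good if state == "good" else err_bad
        flags.append(1 if rng.random() < err_prob else 0)
        # State transition for the next symbol.
        if state == "good":
            state = "bad" if rng.random() < p_gb else "good"
        else:
            state = "good" if rng.random() < p_bg else "bad"
    return flags

if __name__ == "__main__":
    flags = markov_errors(10000)
    print("symbol error rate:", sum(flags) / len(flags))
```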


Journal ArticleDOI
TL;DR: The objective of this paper is to summarize the basic optical networking approaches, report on the WDM deployment strategies of two major US carriers, and outline the current research and development trends on WDM optical networks.
Abstract: While optical-transmission techniques have been researched for quite some time, optical "networking" studies have been conducted only over the past dozen years or so. The field has matured enormously over this time: many papers and Ph.D. dissertations have been produced, a number of prototypes and testbeds have been built, several books have been written, a large number of startups have been formed, and optical WDM technology is being deployed in the marketplace at a very rapid rate. The objective of this paper is to summarize the basic optical networking approaches, report on the WDM deployment strategies of two major US carriers, and outline the current research and development trends on WDM optical networks.

731 citations


Journal ArticleDOI
TL;DR: This work proposes an algorithm to optimally estimate the overall distortion of decoder frame reconstruction due to quantization, error propagation, and error concealment and recursively computes the total decoder distortion at pixel level precision to accurately account for spatial and temporal error propagation.
Abstract: Resilience to packet loss is a critical requirement in predictive video coding for transmission over packet-switched networks, since the prediction loop propagates errors and causes substantial degradation in video quality. This work proposes an algorithm to optimally estimate the overall distortion of decoder frame reconstruction due to quantization, error propagation, and error concealment. The method recursively computes the total decoder distortion at pixel level precision to accurately account for spatial and temporal error propagation. The accuracy of the estimate is demonstrated via simulation results. The estimate is integrated into a rate-distortion (RD)-based framework for optimal switching between intra-coding and inter-coding modes per macroblock. The cost in computational complexity is modest. The framework is further extended to optimally exploit feedback/acknowledgment information from the receiver/network. Simulation results both with and without a feedback channel demonstrate that precise distortion estimation enables the coder to achieve substantial and consistent gains in PSNR over known state-of-the-art RD- and non-RD-based mode switching methods.

717 citations
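The recursive estimate described above can be illustrated with a deliberately simplified sketch: for one pixel, the first and second moments of the decoder's reconstruction are propagated from frame to frame, conditioning on whether the data for that frame is lost. A single loss probability per frame, no motion compensation, and previous-frame-copy concealment are assumed here; the paper's method handles motion-compensated prediction and concealment in full generality.

```python
# Simplified sketch of recursive per-pixel distortion estimation under packet loss.
# Tracks the first and second moments of the decoder's reconstruction of one pixel.
# Assumptions (illustration only): loss probability p per frame, concealment by
# copying the co-located pixel of the previous decoded frame, no motion.

def expected_distortion(frames, p=0.1):
    """`frames` is a list of (mode, value) pairs for one pixel, frame by frame:
    mode "intra" -> value is the encoder reconstruction of the pixel,
    mode "inter" -> value is (quantized_residual, encoder_reconstruction)."""
    m1, m2 = 0.0, 0.0          # E[decoded pixel], E[decoded pixel^2]; frame -1 assumed 0
    distortions = []
    for mode, value in frames:
        if mode == "intra":
            enc_rec = value
            new_m1 = (1 - p) * enc_rec + p * m1
            new_m2 = (1 - p) * enc_rec ** 2 + p * m2
        else:  # inter: if received, decoded = residual + previously decoded pixel
            res, enc_rec = value
            new_m1 = (1 - p) * (res + m1) + p * m1
            new_m2 = (1 - p) * (res ** 2 + 2 * res * m1 + m2) + p * m2
        m1, m2 = new_m1, new_m2
        # Expected squared error against the encoder-side reconstruction,
        # i.e., the channel-induced (propagated) distortion for this frame.
        distortions.append(enc_rec ** 2 - 2 * enc_rec * m1 + m2)
    return distortions

if __name__ == "__main__":
    trace = [("intra", 100.0), ("inter", (2.0, 102.0)), ("inter", (-1.0, 101.0))]
    print([round(d, 2) for d in expected_distortion(trace, p=0.1)])
```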


Journal ArticleDOI
TL;DR: A model is proposed that employs the clustered "double Poisson" time-of-arrival model proposed by Saleh and Valenzuela (1987), and the observed angular distribution is also clustered with uniformly distributed clusters and arrivals within clusters that have a Laplacian distribution.
Abstract: Most previously proposed statistical models for the indoor multipath channel include only time of arrival characteristics. However, in order to use statistical models in simulating or analyzing the performance of systems employing spatial diversity combining, information about angle of arrival statistics is also required. Ideally, it would be desirable to characterize the full space-time nature of the channel. In this paper, a system is described that was used to collect simultaneous time and angle of arrival data at 7 GHz. Data processing methods are outlined, and results obtained from data taken in two different buildings are presented. Based on the results, a model is proposed that employs the clustered "double Poisson" time-of-arrival model proposed by Saleh and Valenzuela (1987). The observed angular distribution is also clustered with uniformly distributed clusters and arrivals within clusters that have a Laplacian distribution.

704 citations
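A quick way to see what the clustered model implies is to draw synthetic arrivals from it: cluster arrival times and ray arrival times within each cluster follow Poisson processes, cluster mean angles are uniform, and ray angles are Laplacian around the cluster mean, with exponentially decaying power. The rates, decay constants, and angular spread in the sketch below are assumed values for illustration, not the measured parameters reported in the paper.

```python
# Sketch of a clustered ("double Poisson") time-of-arrival model with
# Laplacian angles of arrival within clusters. All parameters are assumed.
import math
import random

def generate_arrivals(n_clusters=4, cluster_rate=1 / 30.0, ray_rate=1 / 5.0,
                      gamma_cluster=30.0, gamma_ray=10.0, angle_spread=25.0, seed=0):
    rng = random.Random(seed)
    arrivals = []                                    # (delay_ns, angle_deg, power)
    T = 0.0
    for _ in range(n_clusters):
        T += rng.expovariate(cluster_rate)           # cluster arrival (Poisson process)
        theta_c = rng.uniform(0.0, 360.0)            # cluster mean angle, uniform
        tau = 0.0
        for _ in range(rng.randint(3, 8)):           # a few rays per cluster
            tau += rng.expovariate(ray_rate)         # ray arrival within the cluster
            # Laplacian angle offset around the cluster mean (two-sided exponential).
            offset = rng.choice((-1, 1)) * rng.expovariate(1.0 / angle_spread)
            power = math.exp(-T / gamma_cluster) * math.exp(-tau / gamma_ray)
            arrivals.append((T + tau, (theta_c + offset) % 360.0, power))
    return sorted(arrivals)

if __name__ == "__main__":
    for delay, angle, power in generate_arrivals()[:5]:
        print(f"{delay:7.1f} ns  {angle:6.1f} deg  {power:.3f}")
```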


Journal ArticleDOI
TL;DR: A secure, scalable, deployable architecture (S-BGP) for an authorization and authentication system that addresses most of the security problems associated with BGP is described.
Abstract: The Border Gateway Protocol (BGP), which is used to distribute routing information between autonomous systems (ASes), is a critical component of the Internet's routing infrastructure. It is highly vulnerable to a variety of malicious attacks, due to the lack of a secure means of verifying the authenticity and legitimacy of BGP control traffic. This paper describes a secure, scalable, deployable architecture (S-BGP) for an authorization and authentication system that addresses most of the security problems associated with BGP. The paper discusses the vulnerabilities and security requirements associated with BGP, describes the S-BGP countermeasures, and explains how they address these vulnerabilities and requirements. In addition, this paper provides a comparison of this architecture to other approaches that have been proposed, analyzes the performance implications of the proposed countermeasures, and addresses operational issues.

595 citations


Journal ArticleDOI
TL;DR: It is shown that with limited FDLs, the offset-time-based QoS scheme can be very efficient in supporting basic QoS, and suitable for the next generation optical Internet.
Abstract: We address the issue of how to provide basic quality of service (QoS) in optical burst-switched WDM networks with limited fiber delay lines (FDLs). Unlike existing buffer-based QoS schemes, the novel offset-time-based QoS scheme we study in this paper does not mandate any buffer for traffic isolation, but nevertheless can take advantage of FDLs to improve the QoS. This makes the proposed QoS scheme suitable for the next generation optical Internet. The offset times required for class isolation when making wavelength and FDL reservations are quantified, and the upper and lower bounds on the burst loss probability are analyzed. Simulations are also conducted to evaluate the QoS performance in terms of burst loss probability and queuing delay. We show that with limited FDLs, the offset-time-based QoS scheme can be very efficient in supporting basic QoS.

588 citations


Journal ArticleDOI
TL;DR: This paper analytically studies the performance of the IEEE 802.11 protocol with a dynamically tuned backoff based on an estimate of the network status; results indicate that the capacity of the enhanced protocol approaches the theoretical limits in all the configurations analyzed.
Abstract: In WLANs, the medium access control (MAC) protocol is the main element that determines the efficiency of sharing the limited communication bandwidth of the wireless channel. The fraction of channel bandwidth used by successfully transmitted messages gives a good indication of the protocol efficiency, and its maximum value is referred to as protocol capacity. In a previous paper we have derived the theoretical limit of the IEEE 802.11 MAC protocol capacity. In addition, we showed that if a station has an exact knowledge of the network status, it is possible to tune its backoff algorithm to achieve a protocol capacity very close to its theoretical bound. Unfortunately, in a real case, a station does not have an exact knowledge of the network and load configurations (i.e., number of active stations and length of the message transmitted on the channel) but it can only estimate it. In this work we analytically study the performance of the IEEE 802.11 protocol with a dynamically tuned backoff based on the estimation of the network status. Results obtained indicate that under stationary traffic and network configurations (i.e., constant average message length and fixed number of active stations), the capacity of the enhanced protocol approaches the theoretical limits in all the configurations analyzed. In addition, by exploiting the analytical model, we investigate the protocol performance in transient conditions (i.e., when the number of active stations sharply changes).

554 citations


Journal ArticleDOI
TL;DR: In this article, the authors present a solution for the problem of certificate revocation, which represents certificate revocation lists by authenticated dictionaries that support efficient verification whether a certificate is in the list or not and efficient updates (adding/removing certificates from the list).
Abstract: We present a solution for the problem of certificate revocation. This solution represents certificate revocation lists by authenticated dictionaries that support: (1) efficient verification whether a certificate is in the list or not and (2) efficient updates (adding/removing certificates from the list). The suggested solution gains in scalability, communication costs, robustness to parameter changes, and update rate. Comparisons to the following solutions (and variants) are included: "traditional" certificate revocation lists (CRLs), Micali's (see Tech. Memo MIT/LCS/TM-542b, 1996) certificate revocation system (CRS), and Kocher's (see Financial Cryptography-FC'98 Lecture Notes in Computer Science. Berlin: Springer-Verlag, 1998, vol.1465, p.172-7) certificate revocation trees (CRT). We also consider a scenario in which certificates are not revoked, but frequently issued for short-term periods. Based on the authenticated dictionary scheme, a certificate update scheme is presented in which all certificates are updated by a common message. The suggested solutions for certificate revocation and certificate update problems are better than current solutions with respect to communication costs, update rate, and robustness to changes in parameters, and are compatible, e.g., with X.500 certificates.
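One common way to realize an authenticated dictionary is a hash tree over the sorted list of revoked serial numbers: the directory publishes a signed root, and a short path of sibling hashes lets anyone verify a response about a given serial. The Python sketch below shows membership proofs over such a tree; it is a generic illustration of the idea, not the specific data structure analyzed in the paper.

```python
# Toy Merkle-tree sketch of an authenticated dictionary over revoked serial numbers.
# A verifier holding only the (signed) root hash can check a short membership proof.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(leaves):
    """Return a list of levels: level[0] = leaf hashes, level[-1] = [root]."""
    level = [h(str(x).encode()) for x in sorted(leaves)]
    levels = [level]
    while len(level) > 1:
        if len(level) % 2:                            # duplicate last node on odd levels
            level = level + [level[-1]]
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def prove(levels, index):
    """Collect sibling hashes from leaf to root for the leaf at `index`."""
    proof = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        proof.append((level[index ^ 1], index % 2))   # (sibling hash, 1 if we are the right child)
        index //= 2
    return proof

def verify(root, serial, proof):
    node = h(str(serial).encode())
    for sibling, is_right in proof:
        node = h(sibling + node) if is_right else h(node + sibling)
    return node == root

if __name__ == "__main__":
    revoked = [1001, 1004, 1042, 1077, 1090]
    levels = build_tree(revoked)
    root = levels[-1][0]                  # in practice this root would be signed by the directory
    proof = prove(levels, sorted(revoked).index(1042))
    print(verify(root, 1042, proof))      # True: 1042 is in the revocation list
    print(verify(root, 9999, proof))      # False: proof does not match for another serial
```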

Journal ArticleDOI
TL;DR: It is found that when optimizing for an exponential packet loss model with a mean loss rate of 20% and using a total rate of 0.2 bits per pixel on the Lenna image, good image quality can be obtained even when 40% of transmitted packets are lost.
Abstract: We present the unequal loss protection (ULP) framework in which unequal amounts of forward error correction are applied to progressive data to provide graceful degradation of image quality as packet losses increase. We develop a simple algorithm that can find a good assignment within the ULP framework. We use the set partitioning in hierarchical trees coder in this work, but our algorithm can protect any progressive compression scheme. In addition, we promote the use of a PMF of expected channel conditions so that our system can work with almost any model or estimate of packet losses. We find that when optimizing for an exponential packet loss model with a mean loss rate of 20% and using a total rate of 0.2 bits per pixel on the Lenna image, good image quality can be obtained even when 40% of transmitted packets are lost.

Journal ArticleDOI
N. Asokan, Victor Shoup, Michael Waidner
TL;DR: In this paper, the authors present a protocol that allows two players to exchange digital signatures over the Internet in a fair way, so that either each player gets the other's signature, or neither player does.
Abstract: We present a new protocol that allows two players to exchange digital signatures over the Internet in a fair way, so that either each player gets the other's signature, or neither player does. The obvious application is where the signatures represent items of value, for example, an electronic check or airline ticket. The protocol can also be adapted to exchange encrypted data. It relies on a trusted third party, but is "optimistic," in that the third party is only needed in cases where one player crashes or attempts to cheat. A key feature of our protocol is that a player can always force a timely and fair termination, without the cooperation of the other player, even in a completely asynchronous network. A specialization of our protocol can be used for contract signing; this specialization is not only more efficient, but also has the important property that the third party can be held accountable for its actions: if it ever cheats, this can be detected and proven.

Journal ArticleDOI
TL;DR: It is shown that the maximum mean gain achieved through adaptive processing at both the transmitter and the receiver is less than the free space gain, and cannot be expressed as a product of separate gains.
Abstract: Two arrays with M and N elements are connected via a scattering medium giving uncorrelated antenna signals. The link array gain relative to the case of one element at each end is treated for the situation where the channels are known at the transmitter and receiver. It is shown that the maximum mean gain achieved through adaptive processing at both the transmitter and the receiver is less than the free space gain, and cannot be expressed as a product of separate gains. First, by finding the singular values of the transmission matrix, fundamental limitations concerning the maximum gain and the diversity orders are given, indicating that the gain is upper bounded by (√M + √N)² and the diversity order is MN. Next an iterative technique for reciprocal channels which maximizes power at each stage transmitting back and forth is described. The capacity or spectral efficiency of the random channel is described, and it is indicated how the capacity is upper bounded by N parallel channels of gain M (N ≤ M).
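The (√M + √N)² bound on the maximum mean gain is easy to probe numerically: draw i.i.d. complex Gaussian channel matrices (matching the uncorrelated-antenna-signal assumption), take the largest squared singular value as the gain achievable with matched weights at both ends, and average over realizations. The NumPy sketch below does exactly that; the matrix sizes and the number of trials are arbitrary choices.

```python
# Numerical check of the array-gain bound for an M x N i.i.d. Rayleigh channel:
# the maximum gain with optimal weights at both ends is the largest squared
# singular value of H, whose mean stays below (sqrt(M) + sqrt(N))**2.
import numpy as np

def mean_max_gain(M, N, trials=2000, seed=0):
    rng = np.random.default_rng(seed)
    gains = []
    for _ in range(trials):
        # Unit-variance complex Gaussian entries (uncorrelated antenna signals).
        H = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)
        gains.append(np.linalg.svd(H, compute_uv=False)[0] ** 2)
    return np.mean(gains)

if __name__ == "__main__":
    for M, N in [(2, 2), (4, 4), (8, 2)]:
        bound = (np.sqrt(M) + np.sqrt(N)) ** 2
        print(f"M={M}, N={N}: mean max gain {mean_max_gain(M, N):.2f}, bound {bound:.2f}")
```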

Journal ArticleDOI
TL;DR: A generalized RAKE receiver for interference suppression and multipath mitigation is proposed, exploiting the fact that time dispersion significantly distorts the interference spectrum from each base station in the downlink of a wideband CDMA system.
Abstract: Currently, a global third-generation cellular system based on code-division multiple-access (CDMA) is being developed with a wider bandwidth than existing second-generation systems. The wider bandwidth provides increased multipath resolution in a time-dispersive channel, leading to higher frequency-selectivity. A generalized RAKE receiver for interference suppression and multipath mitigation is proposed. The receiver exploits the fact that time dispersion significantly distorts the interference spectrum from each base station in the downlink of a wideband CDMA system. Compared to the conventional RAKE receiver, this generalized RAKE receiver may have more fingers and different combining weights. The weights are derived from a maximum likelihood formulation, modeling the intracell interference as colored Gaussian noise. This low-complexity detector is especially useful for systems with orthogonal downlink spreading codes, as orthogonality between own cell signals cannot be maintained in a frequency-selective channel. The performance of the proposed receiver is quantified via analysis and simulation for different dispersive channels, including Rayleigh fading channels. Gains on the order of 1-3.5 dB are achieved, depending on the dispersiveness of the channel, with only a modest increase in the number of fingers. For a wideband CDMA (WCDMA) system and a realistic mobile radio channel, this translates to capacity gains of the order of 100%.

Journal ArticleDOI
TL;DR: This paper studies the problem of authenticated key agreement in dynamic peer groups with the emphasis on efficient and provably secure key authentication, key confirmation, and integrity.
Abstract: Many modern computing environments involve dynamic peer groups. Distributed simulation, multiuser games, conferencing applications, and replicated servers are just a few examples. Given the openness of today's networks, communication among peers (group members) must be secure and, at the same time, efficient. This paper studies the problem of authenticated key agreement in dynamic peer groups with the emphasis on efficient and provably secure key authentication, key confirmation, and integrity. It begins by considering two-party authenticated key agreement and extends the results to group Diffie-Hellman (1976) key agreement. In the process, some new security properties (unique to groups) are encountered and discussed.
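As background for the group setting, the sketch below walks through a plain, unauthenticated three-party extension of Diffie-Hellman in which every member contributes an exponent and all end up with g^(r1·r2·r3). The modulus and generator are toy values, and the authentication, key confirmation, and membership-change handling that are the actual subject of the paper are deliberately omitted.

```python
# Toy, unauthenticated 3-party Diffie-Hellman key agreement (illustration only:
# tiny parameters, no authentication, no key confirmation).
import secrets

p = 0xFFFFFFFFFFFFFFC5          # small prime for illustration; real systems use large groups
g = 5

r1, r2, r3 = (secrets.randbelow(p - 2) + 1 for _ in range(3))

# Upflow: accumulate g^(r1*r2) towards member 3.
g_r1 = pow(g, r1, p)            # member 1 -> member 2
g_r1r2 = pow(g_r1, r2, p)       # member 2 -> member 3
k3 = pow(g_r1r2, r3, p)         # member 3's key: g^(r1*r2*r3)

# Downflow: member 3 broadcasts the partial values each member still needs.
g_r2 = pow(g, r2, p)
g_r2r3 = pow(g_r2, r3, p)       # needed by member 1
g_r1r3 = pow(g_r1, r3, p)       # needed by member 2

k1 = pow(g_r2r3, r1, p)
k2 = pow(g_r1r3, r2, p)

assert k1 == k2 == k3           # all three members share the same group key
print(hex(k1))
```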

Journal ArticleDOI
TL;DR: New algorithms are presented for dynamic routing of bandwidth guaranteed tunnels, where tunnel routing requests arrive one by one and there is no a priori knowledge regarding future requests; the underlying routing problem is shown to be NP-hard.
Abstract: This paper presents new algorithms for dynamic routing of bandwidth guaranteed tunnels, where tunnel routing requests arrive one by one and there is no a priori knowledge regarding future requests. This problem is motivated by the service provider needs for fast deployment of bandwidth guaranteed services. Offline routing algorithms cannot be used since they require a priori knowledge of all tunnel requests that are to be routed. Instead, on-line algorithms that handle requests arriving one by one and that satisfy as many potential future demands as possible are needed. The newly developed algorithms are on-line algorithms and are based on the idea that a newly routed tunnel must follow a route that does not "interfere too much" with a route that may be critical to satisfy a future demand. We show that this problem is NP-hard. We then develop path selection heuristics which are based on the idea of deferred loading of certain "critical" links. These critical links are identified by the algorithm as links that, if heavily loaded, would make it impossible to satisfy future demands between certain ingress-egress pairs. Like min-hop routing, the presented algorithm uses link-state information and some auxiliary capacity information for path selection. Unlike previous algorithms, the proposed algorithm exploits any available knowledge of the network ingress-egress points of potential future demands, even though the demands themselves are unknown. If all nodes are ingress-egress nodes, the algorithm can still be used, particularly to reduce the rejection rate of requests between a specified subset of important ingress-egress pairs. The algorithm performs well in comparison to previously proposed algorithms on several metrics like the number of rejected demands and successful rerouting of demands upon link failure.
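The deferred-loading idea can be sketched compactly: links flagged as critical for some ingress-egress pair are given inflated weights, and a shortest-path computation over the feasible links then avoids them whenever an alternative with enough residual capacity exists. In the Python sketch below the criticality values are simply assumed inputs; computing them from the ingress-egress information, as the paper does, is the part this sketch omits.

```python
# Sketch of interference-aware path selection: avoid links marked critical for
# future demands by inflating their weights before a shortest-path computation.
# Criticality values are assumed inputs here, not computed as in the paper.
import heapq

def select_path(links, src, dst, demand, criticality, alpha=10.0):
    """links: dict (u, v) -> residual capacity; criticality: dict (u, v) -> value >= 0."""
    adj = {}
    for (u, v), cap in links.items():
        if cap >= demand:                                   # only links that can carry the tunnel
            w = 1.0 + alpha * criticality.get((u, v), 0.0)  # defer loading of critical links
            adj.setdefault(u, []).append((v, w))
    dist, prev, heap = {src: 0.0}, {}, [(0.0, src)]
    while heap:                                             # Dijkstra over inflated weights
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v], prev[v] = d + w, u
                heapq.heappush(heap, (d + w, v))
    if dst not in dist:
        return None                                         # request rejected
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path))

if __name__ == "__main__":
    links = {("A", "B"): 10, ("B", "D"): 10, ("A", "C"): 10, ("C", "D"): 10}
    crit = {("A", "B"): 1.0}    # suppose A-B is critical for a future ingress-egress pair
    print(select_path(links, "A", "D", demand=3, criticality=crit))   # -> ['A', 'C', 'D']
```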

Journal ArticleDOI
TL;DR: It is shown that even though the clipping scheme causes severe loss in required signal-to-noise ratio, the use of a powerful channel coding scheme such as turbo codes significantly alleviates the bit error rate performance degradation.
Abstract: The performance of strictly band-limited OFDM systems with deliberate clipping is examined in terms of the peak-to-average power ratio (PAPR) and the resultant bit error performance. The clipping is performed on the OFDM signals sampled at the Nyquist rate, followed by the ideal low-pass filter. Since the low-pass filter considerably enlarges the PAPR, there is a severe limitation in PAPR reduction capability. Thus, in order to achieve further reduction of the PAPR, the application of the adaptive symbol selection scheme is also considered. It is shown that the significant PAPR reduction with moderate complexity can be achieved by the combination of the clipping and the adaptive symbol selection. The price to be paid for PAPR reduction by this scheme is its performance degradation. The paper theoretically analyzes the bit error rate performance of the OFDM system with the Nyquist-rate clipping combined with the adaptive symbol selection, and considers the use of the forward error correction for compensation of the degradation. It is shown that even though the clipping scheme causes severe loss in required signal-to-noise ratio, the use of a powerful channel coding scheme such as turbo codes significantly alleviates the bit error rate performance degradation.
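To make the PAPR discussion concrete, the NumPy sketch below builds a QPSK-modulated OFDM symbol, clips its envelope at the Nyquist rate at a chosen clipping ratio, and compares the PAPR before and after clipping. The subcarrier count and clipping ratio are assumed values, and the low-pass filtering and adaptive symbol selection discussed in the paper are not modeled.

```python
# Sketch of Nyquist-rate clipping of an OFDM symbol and its effect on PAPR.
# Number of subcarriers and clipping ratio are illustrative assumptions.
import numpy as np

def papr_db(x):
    return 10 * np.log10(np.max(np.abs(x) ** 2) / np.mean(np.abs(x) ** 2))

def ofdm_symbol(n_sub=256, seed=0):
    # Random QPSK subcarriers -> time-domain samples at the Nyquist rate.
    rng = np.random.default_rng(seed)
    bits = rng.integers(0, 2, size=(n_sub, 2))
    qpsk = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)
    return np.fft.ifft(qpsk) * np.sqrt(n_sub)     # scaled to unit average power

def clip(x, clip_ratio_db=3.0):
    # Limit the envelope to clip_ratio above the RMS level, keeping the phase.
    a_max = np.sqrt(np.mean(np.abs(x) ** 2)) * 10 ** (clip_ratio_db / 20)
    mag = np.maximum(np.abs(x), 1e-12)
    return np.where(mag > a_max, a_max * x / mag, x)

if __name__ == "__main__":
    x = ofdm_symbol()
    y = clip(x)
    print(f"PAPR before clipping: {papr_db(x):.2f} dB")
    print(f"PAPR after  clipping: {papr_db(y):.2f} dB")
```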

Journal ArticleDOI
TL;DR: This paper explores combining power control with RTS/CTS-based and busy-tone-based protocols to further increase channel utilization, which, together with benefits such as energy saving and reduced cochannel interference, shows a promising direction for enhancing the performance of MANETs.
Abstract: In mobile ad hoc networks (MANETs), one essential issue is how to increase channel utilization while avoiding the hidden-terminal and the exposed-terminal problems. Several MAC protocols, such as RTS/CTS-based and busy-tone-based schemes, have been proposed to alleviate these problems. In this paper, we explore the possibility of combining the concept of power control with the RTS/CTS-based and busy-tone-based protocols to further increase channel utilization. A sender will use an appropriate power level to transmit its packets so as to increase the possibility of channel reuse. The possibility of using discrete, instead of continuous, power levels is also discussed. Through analyses and simulations, we demonstrate the advantage of our new MAC protocol. This, together with the extra benefits such as saving battery energy and reducing cochannel interference, does show a promising direction to enhance the performance of MANETs.

Journal ArticleDOI
TL;DR: By using this scheme, the mean square values of the symbol timing estimation error can be decreased by several orders of magnitude compared to the common correlation methods in both the AWGN and multipath fading channels.
Abstract: Orthogonal frequency division multiplexing (OFDM) is an effective modulation technique for high-rate and high-speed transmission over frequency selective fading channels. However, OFDM systems can be extremely sensitive and vulnerable to synchronization errors. In this paper, we present a scheme for performing timing recovery that includes symbol synchronization and sampling clock synchronization in OFDM systems. The scheme is based on pilot subcarriers. In the scheme, we use a path time delay estimation method to improve the accuracy of the correlation-based symbol synchronization methods, and use a delay-locked loop (DLL) to do the sampling clock synchronization. It is shown that by using this scheme, the mean square values of the symbol timing estimation error can be decreased by several orders of magnitude compared to the common correlation methods in both the AWGN and multipath fading channels. In addition, the scheme can track the symbol timing drift caused by the sampling clock frequency offsets.

Journal ArticleDOI
TL;DR: This paper presents an optimal dynamic code assignment (DCA) scheme using orthogonal variable-spreading-factor (OVSF) codes to enhance statistical multiplexing and spectral efficiency of W-CDMA systems supporting variable user data rates.
Abstract: This paper presents an optimal dynamic code assignment (DCA) scheme using orthogonal variable-spreading-factor (OVSF) codes. The objective of dynamic code assignment is to enhance statistical multiplexing and spectral efficiency of W-CDMA systems supporting variable user data rates. Our scheme is optimal in the sense that it minimizes the number of OVSF codes that must be reassigned to support a new call. By admitting calls that would normally be blocked without code reassignments, the spectral efficiency of the system is also maximized. Simulation results are presented to show the performance gain of dynamic code assignment compared to a static assignment scheme in terms of call blocking rate and spectral efficiency. We also discuss various signaling techniques of implementing our proposed DCA scheme in third-generation wideband CDMA systems.
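The constraint that drives the assignment problem is that two OVSF codes are orthogonal only if neither lies on the root-to-leaf path of the other, so a code can be assigned only when no ancestor and no descendant is already in use. The toy sketch below models the code tree with (layer, index) pairs and shows how scattered low-rate assignments block a higher-rate code until one of them is reassigned; it illustrates the blocking phenomenon, not the paper's optimal reassignment algorithm.

```python
# Toy OVSF code tree: a code is identified by (layer, index); layer L corresponds
# to spreading factor 2**L. A code can be assigned only if no ancestor and no
# descendant is already assigned. This shows why code reassignment helps.

def ancestors(code):
    layer, idx = code
    while layer > 0:
        layer, idx = layer - 1, idx // 2
        yield (layer, idx)

def is_descendant(code, other):
    """True if `other` lies in the subtree rooted at `code`."""
    (la, ia), (lb, ib) = code, other
    return lb > la and (ib >> (lb - la)) == ia

def can_assign(code, assigned):
    return (code not in assigned
            and not any(a in assigned for a in ancestors(code))
            and not any(is_descendant(code, a) for a in assigned))

if __name__ == "__main__":
    assigned = set()
    # Two low-rate users scattered across the SF=4 layer...
    for code in [(2, 0), (2, 2)]:
        assert can_assign(code, assigned)
        assigned.add(code)
    # ...block every SF=2 code even though half of the tree capacity is free.
    print([can_assign((1, i), assigned) for i in range(2)])   # [False, False]
    # Reassigning (2, 2) -> (2, 1) frees code (1, 1) for a higher-rate call.
    assigned.discard((2, 2)); assigned.add((2, 1))
    print(can_assign((1, 1), assigned))                        # True
```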

Journal ArticleDOI
TL;DR: A general adaptive coding scheme for Nakagami multipath fading channels using a set of 2L-dimensional (2L-D) trellis codes originally designed for additive white Gaussian noise channels is introduced.
Abstract: We introduce a general adaptive coding scheme for Nakagami multipath fading channels. An instance of the coding scheme utilizes a set of 2L-dimensional (2L-D) trellis codes originally designed for additive white Gaussian noise (AWGN) channels. Any set of 2L-D trellis codes for AWGN channels can be used. Sets for which all codes can be generated by the same encoder and decoded by the same decoder are of particular interest. A feedback channel between the transmitter and receiver makes it possible to transmit at high spectral efficiencies under favorable channel conditions and respond to channel degradation through a smooth reduction of the spectral efficiency. We develop a general technique to determine the average spectral efficiency of the coding scheme for any set of 2L-D trellis codes. As an illustrative example, we calculate the average spectral efficiency of an adaptive codec utilizing eight 4-D trellis codes. The example codec is based on the International Telecommunications Union's ITU-T V.34 modem standard.

Journal ArticleDOI
Y.-P. E. Wang, T. Ottosson
TL;DR: Algorithms and results for both initial and target cell search scenarios for the wideband CDMA (W-CDMA) standard are presented and optimization of key system parameters such as the loading factors for primary synchronization channel, synchronization channels, and common pilot channel for achieving the smallest average code and time acquisition time is studied.
Abstract: In a CDMA cellular system, the process of the mobile station searching for a cell and achieving code and time synchronization to its downlink scrambling code is referred to as cell search. Cell search is performed in three scenarios: initial cell search when a mobile station is switched on, idle mode search when inactive, and active mode search during a call. The latter two are also called target cell search. This paper presents algorithms and results for both initial and target cell search scenarios for the wideband CDMA (W-CDMA) standard. In W-CDMA, the cell search itself is divided into five acquisition stages: slot synchronization, frame synchronization and scrambling code group identification, scrambling code identification, frequency acquisition, and cell identification. Initial cell search needs all five stages, while target cell search in general does not need the last two stages. A pipelined process of the first three stages that minimizes the average code and time acquisition time, while keeping the complexity at a reasonable level, is considered. The frequency error in initial cell search, which may be as large as 20 kHz, is taken care of by partial symbol despreading and noncoherent combining. Optimization of key system parameters such as the loading factors for primary synchronization channel, synchronization channel, and common pilot channel for achieving the smallest average code and time acquisition time is studied. After code and time synchronization (the first three stages), a maximum likelihood (ML)-based frequency acquisition method is used to bring down the frequency error to about 200 Hz. The gain of this method is more than 10 dB compared to an alternative scheme that obtains a frequency error estimate using differential detection.

Journal ArticleDOI
TL;DR: The design enhancements have produced a set of highly efficient schemes that achieve significant reduction in handoff blocking rates while only incurring remarkably small increases in the new call blocking rates.
Abstract: We propose and evaluate new schemes for channel reservation motivated by the rapidly evolving technology of mobile positioning. The schemes, called predictive channel reservation (PCR), work by sending reservation requests to neighboring cells based on extrapolating the motion of mobile stations (MSs). A number of design enhancements are incorporated to minimize the effect of false reservations and to improve the throughput of the cellular system. These enhancements include: (1) reservation pooling; (2) queuing of reservation requests; (3) hybrid approach for integrating guard channels (GCs); and (4) using a threshold distance (TD) to control the timing of reservation requests. The design enhancements have produced a set of highly efficient schemes that achieve significant reduction in handoff blocking rates while only incurring remarkably small increases in the new call blocking rates. The PCR approach has also been used to solve the MINBLOCK optimization problem and has given significant improvement over the fractional guard channel (FGC) protocol. Detailed performance results of the different variations of the PCR scheme and comparisons with conventional channel reservation schemes are presented. An analytical Markov model for the hybrid predictive version of the scheme is developed and its applicability and numerical results are discussed.

Journal ArticleDOI
TL;DR: An effective method for increasing error resilience of video transmission over bit error prone networks is described and rate-distortion optimized mode selection and synchronization marker insertion algorithms are introduced.
Abstract: We describe an effective method for increasing error resilience of video transmission over bit error prone networks. Rate-distortion optimized mode selection and synchronization marker insertion algorithms are introduced. The resulting video communication system takes into account the channel condition and the error concealment method used by the decoder, to optimize video coding mode selection and placement of synchronization markers in the compressed bit stream. The effects of mismatch between the parameters used by the encoder and the parameters associated with the actual channel condition and the decoder error concealment method are evaluated. Results for the binary symmetric channel and wideband code division multiple access mobile network models are presented in order to illustrate the advantages of the proposed method.

Journal ArticleDOI
TL;DR: Three applications (i.e., microcell/macrocell configuration, distance-based location update, and GPRS mobility management for data routing) are used to show how the new model can be used to investigate the performance of PCS networks.
Abstract: This paper proposes a new approach to simplify the two-dimensional random walk models capturing the movement of mobile users in personal communications services (PCS) networks. Analytical models are proposed for the new random walks. For a PCS network with hexagonal configuration, our approach reduces the states of the two-dimensional random walk from (3n²+3n-5) to n(n+1)/2, where n is the number of layers in a cluster. For a mesh configuration, our approach reduces the states from (2n²-2n+1) to (n²+2n+4)/4 if n is even and to (n²+2n+5)/4 if n is odd. Simulation experiments are conducted to validate the analytical models. The results indicate that the errors between the analytical and simulation models are within 1%. Three applications (i.e., microcell/macrocell configuration, distance-based location update, and GPRS mobility management for data routing) are used to show how our new model can be used to investigate the performance of PCS networks.
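The state-space reduction quoted in the abstract is easy to tabulate; the snippet below simply evaluates the formulas above for a few cluster sizes n.

```python
# Evaluate the state-count formulas quoted in the abstract for a few cluster sizes n.
for n in range(2, 7):
    hex_before, hex_after = 3 * n**2 + 3 * n - 5, n * (n + 1) // 2
    mesh_before = 2 * n**2 - 2 * n + 1
    mesh_after = (n**2 + 2 * n + 4) // 4 if n % 2 == 0 else (n**2 + 2 * n + 5) // 4
    print(f"n={n}: hexagonal {hex_before:>3} -> {hex_after:<3}  mesh {mesh_before:>3} -> {mesh_after}")
```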

Journal ArticleDOI
TL;DR: This work develops a framework for encoding based on embedded source codes and embedded error correcting and error detecting channel codes and shows that the unequal error/erasure protection policies that maximize the average useful source coding rate allow progressive transmission with optimal unequal protection at a number of intermediate rates.
Abstract: An embedded source code allows the decoder to reconstruct the source progressively from the prefixes of a single bit stream. It is desirable to design joint source-channel coding schemes which retain the capability of progressive reconstruction in the presence of channel noise or packet loss. Here, we address the problem of joint source-channel coding of images for progressive transmission over memoryless bit error or packet erasure channels. We develop a framework for encoding based on embedded source codes and embedded error correcting and error detecting channel codes. For a target transmission rate, we provide solutions and an algorithm for the design of optimal unequal error/erasure protection. Three performance measures are considered: the average distortion, the average peak signal-to-noise ratio, and the average useful source coding rate. Under the assumption of rate compatibility of the underlying channel codes, we provide necessary conditions for progressive transmission of joint source-channel codes. We also show that the unequal error/erasure protection policies that maximize the average useful source coding rate allow progressive transmission with optimal unequal protection at a number of intermediate rates.

Journal ArticleDOI
TL;DR: This paper presents a combined OFDM/SDMA approach that couples the capabilities of the two techniques to tackle both challenges at once, and proposes four algorithms, ranging from a low-complexity linear minimum mean squared error solution to the optimal maximum likelihood detector.
Abstract: Two major technical challenges in the design of future broadband wireless networks are the impairments of the propagation channel and the need for spectral efficiency. To mitigate the channel impairments, orthogonal frequency division multiplexing (OFDM) can be used, which transforms a frequency-selective channel into a set of frequency-flat channels. On the other hand, to achieve higher spectral efficiency, space division multiple access (SDMA) can be used, which reuses bandwidth by multiplexing signals based on their spatial signature. In this paper, we present a combined OFDM/SDMA approach that couples the capabilities of the two techniques to tackle both challenges at once. We propose four algorithms, ranging from a low-complexity linear minimum mean squared error (MMSE) solution to the optimal maximum likelihood (ML) detector. By applying per-carrier successive interference cancellation (pcSIC), initially proposed for DS-CDMA, and introducing selective state insertion (SI), we achieve a good tradeoff between performance and complexity. A case study demonstrates that, compared to the MMSE approach, our pcSIC-SI-OFDM/SDMA algorithm obtains a performance gain of 10 dB for a BER of 10^-3, while it is only three times more complex. On the other hand, it is two orders of magnitude less complex than the ML approach, for a performance penalty of only 2 dB.
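The lowest-complexity detector mentioned above, per-carrier linear MMSE, is straightforward to sketch: on each subcarrier the flat MIMO mixing of the SDMA users is inverted in the MMSE sense. The sketch below uses random per-carrier channels, QPSK users, and an arbitrary SNR; it illustrates only the linear MMSE baseline, not the pcSIC-SI or ML detectors proposed in the paper.

```python
# Sketch of per-carrier linear MMSE detection for an OFDM/SDMA uplink:
# on every subcarrier, several users share the carrier and are separated at an
# antenna array. Channels, SNR, and sizes are illustrative assumptions.
import numpy as np

def mmse_detect(H, y, noise_var):
    """H: antennas x users per-carrier channel, y: received vector on that carrier."""
    A = H.shape[1]
    W = np.linalg.solve(H.conj().T @ H + noise_var * np.eye(A), H.conj().T)
    return W @ y

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_carriers, users, antennas, snr_db = 64, 2, 4, 15
    noise_var = 10 ** (-snr_db / 10)
    errors, total = 0, 0
    for _ in range(n_carriers):
        # QPSK symbols for each user on this carrier.
        s = (rng.choice([-1, 1], users) + 1j * rng.choice([-1, 1], users)) / np.sqrt(2)
        H = (rng.standard_normal((antennas, users))
             + 1j * rng.standard_normal((antennas, users))) / np.sqrt(2)
        y = H @ s + np.sqrt(noise_var / 2) * (rng.standard_normal(antennas)
                                              + 1j * rng.standard_normal(antennas))
        s_hat = mmse_detect(H, y, noise_var)
        # Hard QPSK decisions per user.
        dec = (np.sign(s_hat.real) + 1j * np.sign(s_hat.imag)) / np.sqrt(2)
        errors += np.sum(dec != s)
        total += users
    print(f"symbol error rate at {snr_db} dB: {errors / total:.3f}")
```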

Journal ArticleDOI
TL;DR: An analytical framework to quantify the effects of the spreading bandwidth (BW) on spread spectrum systems operating in dense multipath environments in terms of the receiver performance, the receiver complexity, and the multipath channel parameters is developed.
Abstract: We develop an analytical framework to quantify the effects of the spreading bandwidth (BW) on spread spectrum systems operating in dense multipath environments in terms of the receiver performance, the receiver complexity, and the multipath channel parameters. The focus of the paper is to characterize the symbol error probability (SEP) performance of a RAKE receiver tracking the L strongest multipath components in wide-sense stationary uncorrelated scattering (WSSUS) Gaussian channels with frequency-selective fading. Analytical SEP expressions of the RAKE receiver are derived in terms of the number of combined paths, the spreading BW and the multipath spread of the channel. The proposed problem is made analytically tractable by transforming the physical RAKE paths, which are correlated and ordered, into the domain of a "virtual RAKE" receiver with independent virtual paths. This results in a simple derivation of the SEP for a given spreading BW and an arbitrary number of combined paths.

Journal ArticleDOI
TL;DR: This paper presents the UTRA TDD mode, which is based on TD-CDMA, and an overview of the system architecture and the radio interface protocols is given.
Abstract: The third-generation mobile radio system UTRA that has been specified in the Third Generation Partnership Project (3GPP) consists of an FDD and a TDD mode. This paper presents the UTRA TDD mode, which is based on TD-CDMA. Important system features are explained in detail. Moreover, an overview of the system architecture and the radio interface protocols is given. Furthermore, the physical layer of UTRA TDD is explained, and the protocol operation is described.