
Showing papers in "IEEE Transactions on Communications in 2005"


Journal ArticleDOI
TL;DR: A simple encoding algorithm is introduced that achieves near-capacity at sum rates of tens of bits/channel use and regularization is introduced to improve the condition of the inverse and maximize the signal-to-interference-plus-noise ratio at the receivers.
Abstract: Recent theoretical results describing the sum capacity when using multiple antennas to communicate with multiple users in a known rich scattering environment have not yet been followed with practical transmission schemes that achieve this capacity. We introduce a simple encoding algorithm that achieves near-capacity at sum rates of tens of bits/channel use. The algorithm is a variation on channel inversion that regularizes the inverse and uses a "sphere encoder" to perturb the data to reduce the power of the transmitted signal. This work comprises two parts. In this first part, we show that while the sum capacity grows linearly with the minimum of the number of antennas and users, the sum rate of channel inversion does not. This poor performance is due to the large spread in the singular values of the channel matrix. We introduce regularization to improve the condition of the inverse and maximize the signal-to-interference-plus-noise ratio at the receivers. Regularization enables linear growth and works especially well at low signal-to-noise ratios (SNRs), but as we show in the second part, an additional step is needed to achieve near-capacity performance at all SNRs.

1,796 citations
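The regularization step this abstract describes can be sketched in a few lines. This is our own illustrative Python, not the authors' code; the regularization constant alpha = K/SNR follows the SINR-maximizing choice the abstract mentions, and the channel, data, and normalization below are made-up usage values:

```python
import numpy as np

def regularized_inversion_precoder(H, snr):
    """Regularized channel inversion: P = H^H (H H^H + alpha I)^{-1},
    with alpha = K/snr, the choice that maximizes the receivers' SINR.
    Illustrative sketch, not the authors' code."""
    K = H.shape[0]
    alpha = K / snr
    return H.conj().T @ np.linalg.inv(H @ H.conj().T + alpha * np.eye(K))

# usage: precode QPSK data for K = 4 users from M = 4 antennas
rng = np.random.default_rng(0)
K, M = 4, 4
H = (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))) / np.sqrt(2)
u = rng.choice(np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]), size=K)
P = regularized_inversion_precoder(H, snr=10.0)
x = P @ u
x = x / np.linalg.norm(x)  # normalize to unit transmit power
```

As alpha goes to zero this reduces to plain channel inversion, which recovers the data exactly but suffers from the singular-value spread the abstract describes.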


Journal ArticleDOI
TL;DR: The unified treatment of decoding techniques for LDPC codes presented here provides flexibility in selecting the appropriate scheme from performance, latency, computational-complexity, and memory-requirement perspectives.
Abstract: Various log-likelihood-ratio-based belief-propagation (LLR-BP) decoding algorithms and their reduced-complexity derivatives for low-density parity-check (LDPC) codes are presented. Numerically accurate representations of the check-node update computation used in LLR-BP decoding are described. Furthermore, approximate representations of the decoding computations are shown to achieve a reduction in complexity by simplifying the check-node update, or symbol-node update, or both. In particular, two main approaches for simplified check-node updates are presented that are based on the so-called min-sum approximation coupled with either a normalization term or an additive offset term. Density evolution is used to analyze the performance of these decoding algorithms, to determine the optimum values of the key parameters, and to evaluate finite quantization effects. Simulation results show that these reduced-complexity decoding algorithms for LDPC codes achieve a performance very close to that of the BP algorithm. The unified treatment of decoding techniques for LDPC codes presented here provides flexibility in selecting the appropriate scheme from performance, latency, computational-complexity, and memory-requirement perspectives.

989 citations
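The normalized min-sum check-node update described above has a compact form: the outgoing LLR on each edge combines the signs and the minimum magnitude of the other incoming LLRs, scaled by a normalization factor. A minimal sketch, with alpha = 0.8 as an illustrative value (the paper selects the normalization or offset term via density evolution):

```python
import numpy as np

def check_node_update_normalized(llrs_in, alpha=0.8):
    """Normalized min-sum approximation of the LLR-BP check-node update.

    For each outgoing edge: (product of the signs of the other incoming
    LLRs) * alpha * (minimum magnitude of the other incoming LLRs).
    alpha = 0.8 is an illustrative normalization factor; the paper picks
    it (or an additive offset instead) via density evolution."""
    llrs_in = np.asarray(llrs_in, dtype=float)
    out = np.empty(llrs_in.size)
    for i in range(llrs_in.size):
        others = np.delete(llrs_in, i)
        out[i] = alpha * np.prod(np.sign(others)) * np.min(np.abs(others))
    return out

# example: three incoming messages at one check node
outgoing = check_node_update_normalized([2.0, -1.0, 3.5])
```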


Journal ArticleDOI
TL;DR: A simple encoding algorithm is introduced that achieves near-capacity at sum-rates of tens of bits/channel use and a certain perturbation of the data using a "sphere encoder" can be chosen to further reduce the energy of the transmitted signal.
Abstract: Recent theoretical results describing the sum-capacity when using multiple antennas to communicate with multiple users in a known rich scattering environment have not yet been followed with practical transmission schemes that achieve this capacity. We introduce a simple encoding algorithm that achieves near-capacity at sum-rates of tens of bits/channel use. The algorithm is a variation on channel inversion that regularizes the inverse and uses a "sphere encoder" to perturb the data to reduce the energy of the transmitted signal. The paper comprises two parts. In this second part, we show that, after the regularization of the channel inverse introduced in the first part, a certain perturbation of the data using a "sphere encoder" can be chosen to further reduce the energy of the transmitted signal. The performance difference with and without this perturbation is shown to be dramatic. With the perturbation, we achieve excellent performance at all signal-to-noise ratios. The results of both uncoded and turbo-coded simulations are presented.

972 citations
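The perturbation idea can be illustrated with a brute-force stand-in for the sphere encoder: search over a small set of complex integer offsets for the one that minimizes the transmit energy after inversion. Everything below (the modulo interval tau = 4, the search radius, the channel and data) is illustrative, and the paper's sphere encoder performs this search far more efficiently:

```python
import numpy as np
from itertools import product

def perturbed_inversion(H, u, tau=4.0, radius=1):
    """Brute-force stand-in for the sphere encoder: try complex integer
    perturbations l with real/imag parts in {-radius..radius}, and keep
    the one minimizing the transmit energy ||pinv(H) (u + tau*l)||^2.
    Illustrative sketch only; tau = 4.0 is a made-up modulo interval."""
    K = H.shape[0]
    P = np.linalg.pinv(H)
    grid = range(-radius, radius + 1)
    best_x, best_e = None, np.inf
    for re_part in product(grid, repeat=K):
        for im_part in product(grid, repeat=K):
            l = np.array(re_part) + 1j * np.array(im_part)
            x = P @ (u + tau * l)
            e = float(np.real(np.vdot(x, x)))
            if e < best_e:
                best_x, best_e = x, e
    return best_x, best_e

# usage: compare transmit energy with and without the perturbation
rng = np.random.default_rng(2)
H = (rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))) / np.sqrt(2)
u = np.array([1 + 1j, -1 - 1j])
x_pert, e_pert = perturbed_inversion(H, u)
x_plain = np.linalg.pinv(H) @ u
e_plain = float(np.real(np.vdot(x_plain, x_plain)))
```

Since the zero perturbation is always in the search set, the perturbed energy can never exceed the plain channel-inversion energy.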


Journal ArticleDOI
TL;DR: The proposed algorithms not only provide fair resource allocation among users, but also have a comparable overall system rate with the scheme maximizing the total rate without considering fairness, and have much higher rates than that of the scheme with max-min fairness.
Abstract: In this paper, a fair scheme to allocate subcarrier, rate, and power for multiuser orthogonal frequency-division multiple-access systems is proposed. The problem is to maximize the overall system rate, under each user's maximal power and minimal rate constraints, while considering the fairness among users. The approach considers a new fairness criterion, which is a generalized proportional fairness based on Nash bargaining solutions and coalitions. First, a two-user algorithm is developed to bargain subcarrier usage between two users. Then a multiuser bargaining algorithm is developed based on optimal coalition pairs among users. The simulation results show that the proposed algorithms not only provide fair resource allocation among users, but also have a comparable overall system rate with the scheme maximizing the total rate without considering fairness. They also have much higher rates than that of the scheme with max-min fairness. Moreover, the proposed iterative fast implementation has a complexity for each iteration of only O(K²N log₂N + K⁴), where N is the number of subcarriers and K is the number of users.

578 citations


Journal ArticleDOI
TL;DR: It is shown that the encoding complexity of a QC-LDPC code is linearly proportional to the number of parity bits of the code for serial encoding, and to the length of thecode for high-speed parallel encoding.
Abstract: Quasi-cyclic (QC) low-density parity-check (LDPC) codes form an important subclass of LDPC codes. These codes have an encoding advantage over other types of LDPC codes. This paper addresses the issue of efficient encoding of QC-LDPC codes. Two methods are presented to find the generator matrices of QC-LDPC codes in systematic-circulant form from their parity-check matrices given in circulant form. Based on the systematic-circulant form of the generator matrix of a QC-LDPC code, various types of encoding circuits using simple shift registers are devised. It is shown that the encoding complexity of a QC-LDPC code is linearly proportional to the number of parity bits of the code for serial encoding, and to the length of the code for high-speed parallel encoding.

559 citations
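The shift-register encoding described above boils down to multiplying message blocks by circulants defined by their first rows. A software analogue of that circuit, as a sketch (the circulant first rows and message blocks below are made up, not taken from the paper's codes):

```python
import numpy as np

def circulant_times_vector(first_row, v):
    """Multiply a binary circulant (defined by its first row; each later
    row is the previous row cyclically shifted one place right) by a
    binary vector over GF(2) -- the software analogue of the paper's
    shift-register encoders. Illustrative sketch, not the paper's codes."""
    row = np.array(first_row, dtype=int)
    v = np.asarray(v, dtype=int)
    out = np.zeros(row.size, dtype=int)
    for i in range(row.size):
        out[i] = int(np.dot(row, v)) % 2
        row = np.roll(row, 1)
    return out

# usage: one parity block as the GF(2) sum of circulant-times-message products
m_blocks = [np.array([1, 0, 1, 0]), np.array([0, 1, 1, 0])]
g_rows = [[1, 1, 0, 0], [1, 0, 1, 0]]  # made-up first rows of generator circulants
parity = np.zeros(4, dtype=int)
for g, m in zip(g_rows, m_blocks):
    parity = (parity + circulant_times_vector(g, m)) % 2
```

Serial encoding costs one shift-and-accumulate per parity bit, which is where the linear-in-parity-bits complexity claim comes from.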


Journal ArticleDOI
TL;DR: Switching between spatial multiplexing and transmit diversity is proposed as a simple way to improve the diversity performance of spatial multiplexing.
Abstract: Multiple-input multiple-output (MIMO) wireless communication systems can offer high data rates through spatial multiplexing or substantial diversity using transmit diversity. In this letter, switching between spatial multiplexing and transmit diversity is proposed as a simple way to improve the diversity performance of spatial multiplexing. In the proposed approach, for a fixed rate, either multiplexing or diversity is chosen based on the instantaneous channel state and the decision is conveyed to the transmitter via a low-rate feedback channel. The minimum Euclidean distance at the receiver is computed for spatial multiplexing and transmit diversity and is used to derive the selection criterion. Additionally, the Demmel condition number of the matrix channel is shown to provide a sufficient condition for multiplexing to outperform diversity. Monte Carlo simulations demonstrate improvement over either multiplexing or diversity individually in terms of bit error rate.

447 citations
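The Demmel condition number mentioned above is easy to compute. A minimal sketch of the resulting one-bit switching rule; the threshold value here is made up for illustration, whereas the letter derives it from the minimum-distance comparison of the two schemes at a fixed rate:

```python
import numpy as np

def demmel_condition(H):
    """Demmel condition number ||H||_F / sigma_min(H): large values flag a
    near-singular channel, for which transmit diversity is the safer choice."""
    s = np.linalg.svd(H, compute_uv=False)
    return np.linalg.norm(H, 'fro') / s[-1]

def select_mode(H, threshold=4.0):
    """One-bit feedback decision. The threshold is illustrative only."""
    return "diversity" if demmel_condition(H) > threshold else "multiplexing"
```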


Journal ArticleDOI
TL;DR: Simulations show that the new schedules of iterative decoding of low-density parity-check codes and turbo codes offer better performance/complexity tradeoffs, especially when the maximum number of iterations has to remain small.
Abstract: Shuffled versions of iterative decoding of low-density parity-check codes and turbo codes are presented. The proposed schemes have about the same computational complexity as the standard versions, and converge faster. Simulations show that the new schedules offer better performance/complexity tradeoffs, especially when the maximum number of iterations has to remain small.

366 citations


Journal ArticleDOI
TL;DR: The use of multiple laser transmitters combined with multiple photodetectors (PDs) is studied for terrestrial, line-of-sight optical communication, and the modulation format is repetition Q-ary PPM across lasers, with intensity modulation.
Abstract: The use of multiple laser transmitters combined with multiple photodetectors (PDs) is studied for terrestrial, line-of-sight optical communication. The resulting multiple-input/multiple-output channel has the potential for combatting fading effects on turbulent optical channels. In this paper, the modulation format is repetition Q-ary PPM across lasers, with intensity modulation. Ideal PDs are assumed, with and without background radiation. Both Rayleigh and log-normal fading models are treated. The focus is upon both symbol-/bit-error probability for uncoded transmission, and on constrained channel capacity.

342 citations


Journal ArticleDOI
TL;DR: The cross-layer design problem of joint multiuser detection and power control is studied, using a game-theoretic approach that focuses on energy efficiency and an admission-control scheme based on maximizing the total utility in the network is proposed.
Abstract: In this paper, the cross-layer design problem of joint multiuser detection and power control is studied, using a game-theoretic approach that focuses on energy efficiency. The uplink of a direct-sequence code-division multiple-access data network is considered, and a noncooperative game is proposed in which users in the network are allowed to choose their uplink receivers as well as their transmit powers to maximize their own utilities. The utility function measures the number of reliable bits transmitted by the user per joule of energy consumed. Focusing on linear receivers, the Nash equilibrium for the proposed game is derived. It is shown that the equilibrium is one where the powers are signal-to-interference-plus-noise ratio-balanced with the minimum mean-square error (MMSE) detector as the receiver. In addition, this framework is used to study power-control games for the matched filter, the decorrelator, and the MMSE detector; and the receivers' performance is compared in terms of the utilities achieved at equilibrium (in bits/joule). The optimal cooperative solution is also discussed and compared with the noncooperative approach. Extensions of the results to the case of multiple receive antennas are also presented. In addition, an admission-control scheme based on maximizing the total utility in the network is proposed.

333 citations


Journal ArticleDOI
TL;DR: A low-complexity blind carrier frequency offset estimator for orthogonal frequency-division multiplexing (OFDM) systems is developed using a kurtosis-type criterion, and it is shown that this approach can be applied to blind CFO estimation in multi-input multi-output and multiuser OFDM systems.
Abstract: Relying on a kurtosis-type criterion, we develop a low-complexity blind carrier frequency offset (CFO) estimator for orthogonal frequency-division multiplexing (OFDM) systems. We demonstrate analytically how identifiability and performance of this blind CFO estimator depend on the channel's frequency selectivity and the input distribution. We show that this approach can be applied to blind CFO estimation in multi-input multi-output and multiuser OFDM systems. The issues of channel nulls, multiuser interference, and effects of multiple antennas are addressed analytically, and tested via simulations.

318 citations
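The kurtosis-type criterion can be demonstrated with a toy grid search: de-rotate the received block by each trial CFO, take the FFT, and keep the candidate whose subcarrier outputs are least Gaussian. This is only an illustration of the criterion, not the paper's low-complexity estimator; the OFDM parameters and the ideal (distortion-free) channel below are our assumptions:

```python
import numpy as np

def kurtosis_metric(y):
    """Normalized fourth moment E|y|^4 / (E|y|^2)^2: equals 1 for
    constant-modulus symbols and 2 for circular complex Gaussian noise."""
    p = np.mean(np.abs(y) ** 2)
    return np.mean(np.abs(y) ** 4) / p ** 2

def blind_cfo_search(r, trial_cfos):
    """Toy grid-search version of a kurtosis-type blind CFO estimator:
    residual CFO creates intercarrier interference, which pushes the FFT
    outputs toward Gaussianity, so the metric is smallest at the true CFO
    for constant-modulus constellations. Illustrative sketch only."""
    n = np.arange(len(r))
    return min(trial_cfos, key=lambda e: kurtosis_metric(
        np.fft.fft(r * np.exp(-2j * np.pi * e * n / len(r)))))

# usage: one QPSK-OFDM symbol, ideal channel, CFO of 0.20 subcarrier spacings
rng = np.random.default_rng(1)
N = 256
S = np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(0, 4, N)))
r = np.fft.ifft(S) * np.exp(2j * np.pi * 0.20 * np.arange(N) / N)
cfo_hat = blind_cfo_search(r, np.arange(-0.5, 0.5, 0.05))
```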


Journal ArticleDOI
TL;DR: It is shown that, when optimized, modified quantized min-sum algorithms perform very close to, and in some cases even slightly outperform, the ideal belief-propagation algorithm at observed error rates.
Abstract: The effects of clipping and quantization on the performance of the min-sum algorithm for the decoding of low-density parity-check (LDPC) codes at short and intermediate block lengths are studied. It is shown that in many cases, only four quantization bits suffice to obtain close to ideal performance over a wide range of signal-to-noise ratios. Moreover, we propose modifications to the min-sum algorithm that improve the performance by a few tenths of a decibel with just a small increase in decoding complexity. A quantized version of these modified algorithms is also studied. It is shown that, when optimized, modified quantized min-sum algorithms perform very close to, and in some cases even slightly outperform, the ideal belief-propagation algorithm at observed error rates.

Journal ArticleDOI
TL;DR: An adaptive resource-allocation approach, which jointly adapts subcarrier allocation, power distribution, and bit distribution according to instantaneous channel conditions, is proposed for multiuser multiple-input multiple-output (MIMO)/orthogonal frequency-division multiplexing systems.
Abstract: Fast adaptive transmission has been recently identified as a key technology for exploiting potential system diversity and improving power-spectral efficiency in wireless communication systems. An adaptive resource-allocation approach, which jointly adapts subcarrier allocation, power distribution, and bit distribution according to instantaneous channel conditions, is proposed for multiuser multiple-input multiple-output (MIMO)/orthogonal frequency-division multiplexing systems. The resultant scheme is able to: 1) optimize the power efficiency; 2) guarantee each user's quality of service requirements, including bit-error rate and data rate; 3) ensure fairness to all the active users; and 4) be applied to systems with various types of multiuser-detection schemes at the receiver. For practical implementation, a reduced-complexity allocation algorithm is developed. This algorithm decouples the complex multiuser joint resource-allocation problem into simple single-user optimization problems by controlling the subcarrier sharing according to the users' spatial separability. Numerical results show that significant power and diversity gains are achievable, compared with nonadaptive systems. It is also demonstrated that the MIMO system is able to multiplex several users without sacrificing antenna diversity by using the proposed algorithm.

Journal ArticleDOI
TL;DR: A "double-ring" model to simulate the mobile-to-mobile local scattering environment, and sum-of-sinusoids (SoS)-based models for simulating such channels are proposed.
Abstract: Mobile-to-mobile channels find increasing applications in futuristic intelligent transport systems, ad hoc mobile wireless networks, and relay-based cellular networks. Their statistical properties are quite different from typical cellular radio channels, thereby requiring new methods for their simulation. This paper proposes a "double-ring" model to simulate the mobile-to-mobile local scattering environment, and develops sum-of-sinusoids (SoS)-based models for simulating such channels. The proposed models produce waveforms having desired statistical properties with good accuracy, and also remove some drawbacks of an existing model derived by using the discrete line spectrum simulation method.
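A basic sum-of-sinusoids realization of the double-ring idea can be sketched directly: independent scatterer rings around transmitter and receiver, each path picking up both Dopplers. The uniform angle and phase draws below are our simplification; the paper's SoS models choose these parameters so that the waveforms match the desired statistics with good accuracy:

```python
import numpy as np

def double_ring_channel(t, f1, f2, N=8, M=8, rng=None):
    """Sum-of-sinusoids sketch of a double-ring mobile-to-mobile fading
    simulator: N scatterers on a ring around the transmitter (maximum
    Doppler f1) and M around the receiver (maximum Doppler f2).
    Illustrative parameterization, not the paper's optimized models."""
    rng = rng or np.random.default_rng()
    alpha = rng.uniform(0, 2 * np.pi, N)     # angles of departure (Tx ring)
    beta = rng.uniform(0, 2 * np.pi, M)      # angles of arrival (Rx ring)
    phi = rng.uniform(0, 2 * np.pi, (N, M))  # independent path phases
    t = np.asarray(t, dtype=float)
    h = np.zeros(t.size, dtype=complex)
    for n in range(N):
        for m in range(M):
            doppler = f1 * np.cos(alpha[n]) + f2 * np.cos(beta[m])
            h += np.exp(1j * (2 * np.pi * doppler * t + phi[n, m]))
    return h / np.sqrt(N * M)

# usage: 100 ms of a channel with 100 Hz and 80 Hz maximum Dopplers
h = double_ring_channel(np.arange(0, 0.1, 1e-4), 100.0, 80.0,
                        rng=np.random.default_rng(7))
```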

Journal ArticleDOI
TL;DR: The performance of a direct-detection, avalanche photodiode-based free-space optical (FSO) communication system in terms of the overall bit-error rate is characterized in order to shed light on the impact of turbulence on the overall performance.
Abstract: In this paper, we characterize the performance of a direct-detection, avalanche photodiode-based free-space optical (FSO) communication system in terms of the overall bit-error rate. The system of interest uses pulse-position modulation (PPM) and is subjected to scintillation due to optical turbulence. Two scenarios are considered. In one case, a weak turbulence (clear-air) scenario is considered, for which the received signal intensity may be modeled as a log-normal random process. In the other case, we consider a negative exponentially distributed received signal intensity. To arrive at the desired results, it is assumed that the system uses a binary PPM (BPPM) modulation scheme. Furthermore, it is assumed that the receiver thermal noise is nonnegligible, and that the average signal intensity is large enough to justify a Gaussian approximation at the receiver. Union bound is used to assess the performance of M-ary PPM systems using the results of the BPPM scenario. Numerical results are presented for the BPPM case to shed light on the impact of turbulence on the overall performance.
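The clear-air scenario above can be mimicked with a short Monte Carlo: average the conditional Gaussian-receiver BPPM error probability over a log-normal intensity. This is a strongly simplified sketch under our own SNR definition, not the paper's APD analysis or its analytical results:

```python
import numpy as np
from math import erfc, sqrt

def bppm_ber_lognormal(snr0, sigma_x=0.25, trials=20000, rng=None):
    """Monte Carlo average BER of binary PPM under log-normal scintillation
    with a Gaussian (thermal-noise-dominated) receiver model -- a
    simplified sketch of the clear-air scenario, not the paper's APD model.

    snr0    : electrical SNR at the mean intensity (I = 1), our definition
    sigma_x : log-amplitude standard deviation of the turbulence"""
    rng = rng or np.random.default_rng()
    # log-normal intensity normalized so that E[I] = 1
    I = np.exp(rng.normal(-2 * sigma_x ** 2, 2 * sigma_x, trials))
    # conditional BPPM error probability Q(I * sqrt(snr0)), averaged over I
    return np.mean([0.5 * erfc(i * sqrt(snr0) / sqrt(2)) for i in I])

ber = bppm_ber_lognormal(snr0=4.0, sigma_x=0.25, rng=np.random.default_rng(3))
```

By Jensen's inequality the faded average BER always exceeds the turbulence-free value at the same mean intensity, which is the qualitative impact of scintillation the paper quantifies.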

Journal ArticleDOI
TL;DR: From performance simulations on a wireless dispersive fading channel, it is observed that the IBDFE outperforms existing DFEs and exhibits a reduction of the computational complexity when compared against existing schemes, both in signal processing and in filter design.
Abstract: Error-propagation phenomena and computational complexity of the filters' design are important drawbacks of existing decision-feedback equalizers (DFE) for dispersive channels. In this paper, we propose a new iterative block DFE (IBDFE) which operates iteratively on blocks of the received signal. Indeed, a suitable data-transmission format must be used to allow an efficient implementation of the equalizer in the frequency domain, by means of the discrete Fourier transform. Two design methods are considered. In the first method, hard detected data are used as input of the feedback, and filters are designed according to the correlation between detected and transmitted data. In the second method, the feedback signal is directly designed from soft detection of the equalized signal at the previous iteration. Estimators of the parameters involved in the IBDFE design are also derived. From performance simulations on a wireless dispersive fading channel, we observed that the IBDFE outperforms existing DFEs. Moreover, the IBDFE exhibits a reduction of the computational complexity when compared against existing schemes, both in signal processing and in filter design.

Journal ArticleDOI
TL;DR: In the proposed scheme, circular convolutions are employed to generate the interference after the discrete Fourier transform processing, which is then removed from the original received signal to increase the signal-to-interference power ratio (SIR).
Abstract: Recently, orthogonal frequency-division multiplexing (OFDM), with clusters of subcarriers allocated to different subscribers (often referred to as OFDMA), has gained much attention for its ability in enabling multiple-access wireless multimedia communications. In such systems, carrier frequency offsets (CFOs) can destroy the orthogonality among subcarriers. As a result, multiuser interference (MUI) along with significant performance degradation can be induced. In this paper, we present a scheme to compensate for the CFOs at the base station of an OFDMA system. In the proposed scheme, circular convolutions are employed to generate the interference after the discrete Fourier transform processing, which is then removed from the original received signal to increase the signal-to-interference power ratio (SIR). Both SIR analysis and simulation results will show that the proposed scheme can significantly improve system performance.

Journal ArticleDOI
TL;DR: A quasianalytical study is presented on the downlink performance of the OFCDM system with hybrid multi-code interference (MCI) cancellation and minimum mean square error (MMSE) detection, showing that the hybrid detection scheme performs much better than pure MMSE when good channel estimation is guaranteed.
Abstract: The broadband orthogonal frequency and code division multiplexing (OFCDM) system with two-dimensional spreading (time and frequency domain spreading) is becoming a very attractive technique for high-rate data transmission in future wireless communication systems. In this paper, a quasianalytical study is presented on the downlink performance of the OFCDM system with hybrid multi-code interference (MCI) cancellation and minimum mean square error (MMSE) detection. The weights of MMSE are derived and updated stage by stage of MCI cancellation. The effects of channel estimation errors and sub-carrier correlation are also studied. It is shown that the hybrid detection scheme performs much better than pure MMSE when good channel estimation is guaranteed. The power ratio between the pilot channel and all data channels should be set to 0.25, which is a near optimum value for the two-dimensional spreading system with time domain spreading factor (N_T) of 4 and 8. On the other hand, in a slow fading channel, a large value of the channel estimation window size γN_T, where γ is an odd integer, is expected. However, γ=3 is large enough for the system with N_T=8, while γ=5 is more desirable for the system with N_T=4. Although performance of the hybrid detection degrades in the presence of the sub-carrier correlation, the hybrid detection still works well even when the correlation coefficient is as high as 0.7. Finally, given N_T, although performance improves when the frequency domain spreading factor (N_F) increases, the frequency diversity gain is almost saturated for a large value of N_F (i.e., N_F ≥ 32).

Journal ArticleDOI
TL;DR: A new unified performance model and analysis method is proposed to study the saturation throughput and delay performance of EDCA, under the assumption of a finite number of stations and ideal channel conditions in a single-hop WLAN.
Abstract: Rapid deployment of IEEE 802.11 wireless local area networks (WLANs) and their increasing quality of service (QoS) requirements motivate extensive performance evaluations of the upcoming 802.11e QoS-aware enhanced distributed coordination function (EDCA). Most of the analytical studies up-to-date have been based on one of the three major performance models in legacy distributed coordination function analysis, requiring a large degree of complexity in solving multidimensional Markov chains. Here, we expose the common guiding principle behind these three seemingly different models. Subsequently, by abstracting, unifying, and extending this common principle, we propose a new unified performance model and analysis method to study the saturation throughput and delay performance of EDCA, under the assumption of a finite number of stations and ideal channel conditions in a single-hop WLAN. This unified model combines the strengths of all three models, and thus, is easy to understand and apply; on the other hand, it helps increase the understanding of the existing performance analysis. Despite its appealing simplicity, our unified model and analysis are validated very well by simulation results. Ultimately, by means of the proposed model, we are able to precisely evaluate the differentiation effects of EDCA parameters on WLAN performance in very broad settings, a feature which is essential for network design.

Journal ArticleDOI
TL;DR: This paper presents a reduced-complexity soft-input soft-output detection scheme, called iterative tree search detection, for multiple-input multiple-output wireless communication systems employing turbo processing at the receiver.
Abstract: This paper presents a reduced-complexity soft-input soft-output detection scheme, called iterative tree search detection, for multiple-input multiple-output wireless communication systems employing turbo processing at the receiver. In this scheme, a reduced search space is selected with the aid of the M-algorithm, and QAM signal constellations with block partitionable labels are used in order to make the detection complexity per bit almost independent of the modulation order, as well as asymptotically linear in the number of transmit antennas. Results from computer simulations are presented which demonstrate the capability of the scheme to approach optimal performance at considerably reduced complexity.
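The M-algorithm component of the scheme above keeps only the M best partial candidates per tree level. A hard-output sketch of that breadth-first search (the paper's SISO version additionally uses the surviving list to compute soft bit values; the channel and constellation below are illustrative):

```python
import numpy as np

def m_algorithm_detect(y, R, symbols, M=4):
    """Breadth-first M-algorithm tree search (hard-output sketch of the
    list stage of iterative tree search detection).

    y       : received vector after QR preprocessing, y = R s + n
    R       : upper-triangular nT x nT matrix from the QR decomposition
    symbols : constellation points
    M       : survivors kept per tree level"""
    nT = R.shape[0]
    survivors = [((), 0.0)]                  # (partial symbol tuple, path metric)
    for level in range(nT - 1, -1, -1):      # detect from the last antenna up
        depth = nT - 1 - level               # symbols already fixed below this level
        expanded = []
        for path, metric in survivors:
            for s in symbols:
                cand = (s,) + path           # cand holds s[level..nT-1]
                interference = sum(R[level, level + 1 + k] * cand[1 + k]
                                   for k in range(depth))
                e = abs(y[level] - R[level, level] * s - interference) ** 2
                expanded.append((cand, metric + e))
        expanded.sort(key=lambda pm: pm[1])  # keep the M best partial paths
        survivors = expanded[:M]
    return np.array(survivors[0][0])
```

With M equal to the full constellation size at every level this degenerates to exhaustive search; the complexity savings come from keeping M small.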

Journal ArticleDOI
TL;DR: A novel synchronization criterion is established that is termed "timing with dirty templates" (TDT), based on which timing algorithms in both data-aided (DA) and nondata-aided modes are developed and tested.
Abstract: Ultra-wideband (UWB) technology for indoor wireless communications promises high data rates with low-complexity transceivers. Rapid timing synchronization constitutes a major challenge in realizing these promises. In this paper, we establish a novel synchronization criterion that we term "timing with dirty templates" (TDT), based on which we develop and test timing algorithms in both data-aided (DA) and nondata-aided modes. For the DA mode, we design a training pattern, which turns out to not only speed up synchronization, but also enable timing in a multiuser environment. Based on simple integrate-and-dump operations over the symbol duration, our TDT algorithms remain operational in practical UWB settings. They are also readily applicable to narrowband systems when intersymbol interference is avoided. Simulations confirm performance improvement of TDT relative to existing alternatives in terms of mean square error and bit-error rate.

Journal ArticleDOI
TL;DR: A simple algorithm is proposed that allows evaluating an exact and tractable expression for the probability density function of the SNR at the output of the TB receiver, subject to Rayleigh fading, thereby avoiding the need for time-consuming numerical integrations or Monte Carlo simulations.
Abstract: Transmit-beamforming (TB) over multiple-input multiple-output (MIMO) fading channels steers the transmit power in the receiver's direction, so as to maximize the output signal-to-noise ratio (SNR) after maximal ratio combining (MRC) at the receiver. This letter proposes a simple algorithm that allows evaluating an exact and tractable expression for the probability density function of the SNR at the output of the TB receiver, subject to Rayleigh fading. The latter enables the derivation of closed-form expressions for the outage and ergodic capacity of MIMO MRC systems under Rayleigh fading, thereby avoiding the need for time-consuming numerical integrations or Monte Carlo simulations.
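The quantity whose pdf the letter characterizes is simple to sample: with transmit beamforming along the dominant right singular vector and MRC at the receiver, the output SNR is snr times the largest eigenvalue of H^H H. A Monte Carlo sketch of the kind of simulation the closed-form pdf makes unnecessary (dimensions and SNR below are illustrative):

```python
import numpy as np

def tb_mrc_snr_samples(nt, nr, snr, trials=2000, rng=None):
    """Monte Carlo samples of the TB/MRC output SNR over i.i.d. Rayleigh
    fading: output SNR = snr * lambda_max(H^H H), i.e. snr times the
    squared largest singular value of H. Illustrative cross-check only."""
    rng = rng or np.random.default_rng()
    out = np.empty(trials)
    for i in range(trials):
        H = (rng.standard_normal((nr, nt)) +
             1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
        out[i] = snr * np.linalg.svd(H, compute_uv=False)[0] ** 2
    return out

# usage: empirical ergodic capacity of a 2x2 MIMO MRC link at 0 dB
samples = tb_mrc_snr_samples(2, 2, snr=1.0, rng=np.random.default_rng(0))
ergodic_capacity = float(np.mean(np.log2(1 + samples)))
```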

Journal ArticleDOI
TL;DR: A technique that incorporates partial interference presubtraction (PIP) within convolutional decoding is developed and a trellis-shaping technique is developed that takes into account the knowledge of a noncausal interfering sequence, rather than just the instantaneous interference.
Abstract: This paper studies the combination of practical trellis and convolutional codes with Tomlinson-Harashima precoding (THP) for the presubtraction of multiuser interference that is known at the transmitter but not known at the receiver. It is well known that a straightforward application of THP suffers power, modulo, and shaping losses. This paper proposes generalizations of THP that recover some of these losses. At a high signal-to-noise ratio (SNR), the precoding loss is dominated by the shaping loss, which is about 1.53 dB. To recover shaping loss, a trellis-shaping technique is developed that takes into account the knowledge of a noncausal interfering sequence, rather than just the instantaneous interference. At rates of 2 and 3 bits per transmission, trellis shaping is shown to be able to recover almost all of the 1.53-dB shaping loss. At a low SNR, the precoding loss is dominated by power and modulo losses, which can be as large as 3-4 dB. To recover these losses, a technique that incorporates partial interference presubtraction (PIP) within convolutional decoding is developed. At rates of 0.5 and 0.25 bits per transmission, PIP is able to recover 1-1.5 dB of the power loss. For intermediate SNR channels, a combination of the two schemes is shown to recover both power and shaping losses.
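The straightforward THP baseline whose losses the paper sets out to recover fits in a few lines: presubtract the known interference, apply a centered modulo to bound the transmit signal, and apply the same modulo at the receiver. The modulo interval tau = 4 below is an illustrative choice for unit-spaced 4-PAM:

```python
import numpy as np

def thp_presubtract(data, interference, tau=4.0):
    """Straightforward Tomlinson-Harashima presubtraction with modulo
    reduction -- the baseline whose power, modulo, and shaping losses the
    paper generalizes THP to recover. tau = 4.0 is illustrative."""
    x = np.asarray(data, dtype=float) - interference
    return x - tau * np.floor(x / tau + 0.5)  # centered modulo into [-tau/2, tau/2)

# usage: the interference is known only at the transmitter
data = np.array([-1.5, -0.5, 0.5, 1.5])   # 4-PAM symbols
s = np.array([3.7, -8.2, 0.9, 5.0])       # known interference sequence
x = thp_presubtract(data, s)              # transmitted signal, bounded by tau/2
received = x + s                          # channel adds the interference back
recovered = received - 4.0 * np.floor(received / 4.0 + 0.5)  # same modulo at Rx
```

The receiver recovers the data without knowing the interference; the price is exactly the power, modulo, and shaping losses discussed in the abstract.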

Journal ArticleDOI
TL;DR: It is explained how replacing rate-1/2 binary component codes by rate-m/(m+1) binary RSC codes can lead to better global performance, and the encoding scheme can be designed so that decoding can be achieved closer to the theoretical limit.
Abstract: The original turbo codes (TCs), presented in 1993 by Berrou et al., consist of the parallel concatenation of two rate-1/2 binary recursive systematic convolutional (RSC) codes. This paper explains how replacing rate-1/2 binary component codes by rate-m/(m+1) binary RSC codes can lead to better global performance. The encoding scheme can be designed so that decoding can be achieved closer to the theoretical limit, while showing better performance in the region of low error rates. These results are illustrated with some examples based on double-binary (m=2) 8-state and 16-state TCs, easily adaptable to a large range of data block sizes and coding rates. The double-binary 8-state code has already been adopted in several telecommunication standards.

Journal ArticleDOI
TL;DR: In this paper, the authors consider a cooperative transmission scheme in which the collaborating nodes may have multiple antennas and present the performance analysis and design of space-time codes that are capable of achieving the full diversity provided by user cooperation.
Abstract: We consider a cooperative transmission scheme in which the collaborating nodes may have multiple antennas. We present the performance analysis and design of space-time codes that are capable of achieving the full diversity provided by user cooperation. Our codes use the principle of overlays in time and space, and ensure that cooperation takes place as often as possible. We show how cooperation among nodes with different numbers of antennas can be accomplished, and how the quality of the interuser link affects the cooperative performance. We illustrate that space-time cooperation can greatly reduce the error rates of all the nodes involved, even for poor interuser channel quality.

Journal ArticleDOI
TL;DR: Empirical fitting provides a threshold distance below which the spherical-wave model is required for accurate performance estimation in ray tracing, and shows that the capacity growth with element spacing diminishes significantly under the plane-wave assumption.
Abstract: The plane-wave assumption has been used extensively in array signal processing, parameter estimation, and wireless channel modeling to simplify analysis. It is suitable for single-input single-output and single-input multiple-output systems, because the rank of the channel matrix is one. However, for short-range multiple-input multiple-output (MIMO) channels with a line-of-sight (LOS) component, the plane-wave assumption affects the rank and singular value distribution of the MIMO channel matrix, and results in the underestimation of the channel capacity, especially for element spacings exceeding half a wavelength. The short-range geometry could apply to many indoor wireless local area network applications. To avoid this underestimation problem, the received signal phases must depend precisely on the distances between transmit and receive antenna elements. With this correction, the capacity of short-range LOS MIMO channels grows steadily as the element spacing exceeds half a wavelength, as confirmed by measurements at 5.8 GHz. In contrast, the capacity growth with element spacing diminishes significantly under the plane-wave assumption. Using empirical fitting, we provide a threshold distance below which the spherical-wave model is required for accurate performance estimation in ray tracing.
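The underestimation effect described above can be reproduced numerically: build the LOS channel from the exact per-element path lengths and compare its capacity with the rank-one plane-wave model. The broadside two-ULA geometry, spacing, and SNR below are toy values of ours, not the paper's 5.8-GHz measurement setup:

```python
import numpy as np

def los_channel_spherical(n, spacing, distance, wavelength):
    """LOS MIMO channel with exact per-element path lengths (spherical-wave
    model) for two parallel broadside ULAs of n elements separated by
    `distance`; all lengths in meters. Toy geometry for illustration."""
    pos = (np.arange(n) - (n - 1) / 2) * spacing
    d = np.sqrt(distance ** 2 + (pos[:, None] - pos[None, :]) ** 2)
    return np.exp(-2j * np.pi * d / wavelength)

def capacity_bits(H, snr):
    """log2 det(I + (snr/nt) H H^H) with equal power per transmit antenna."""
    nr, nt = H.shape
    return float(np.real(np.log2(np.linalg.det(
        np.eye(nr) + (snr / nt) * H @ H.conj().T))))

# usage: 4x4 arrays near 5.8 GHz, 3 m apart, element spacing of 4 wavelengths
lam = 3e8 / 5.8e9
H_sph = los_channel_spherical(4, spacing=4 * lam, distance=3.0, wavelength=lam)
H_pw = np.ones((4, 4), dtype=complex)  # plane-wave broadside model: rank one
c_sph = capacity_bits(H_sph, snr=100.0)
c_pw = capacity_bits(H_pw, snr=100.0)
```

With element spacing well beyond half a wavelength at short range, the exact-distance channel has several significant singular values, so its capacity clearly exceeds the rank-one plane-wave prediction.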

Journal ArticleDOI
TL;DR: Closed-form bit-error probability expressions for spread-spectrum systems are derived by approximating narrowband interferers as independent asynchronous tone interferers, and a new analytical framework based on perturbation theory is developed to analyze the performance of a Rake receiver in Nakagami-m channels.
Abstract: This paper evaluates the performance of wideband communication systems in the presence of narrowband interference (NBI). In particular, we derive closed-form bit-error probability expressions for spread-spectrum systems by approximating narrowband interferers as independent asynchronous tone interferers. The scenarios considered include additive white Gaussian noise channels, flat-fading channels, and frequency-selective multipath fading channels. For multipath fading channels, we develop a new analytical framework based on perturbation theory to analyze the performance of a Rake receiver in Nakagami-m channels. Simulation results for NBI such as GSM and Bluetooth are in good agreement with our analytical results, showing the approach developed is useful for investigating the coexistence of ultrawide bandwidth systems with existing wireless systems.
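The tone-interferer idea can be illustrated with a toy model (this is not the paper's full framework; the modulation, Eb/N0, and interferer amplitude are assumed). For BPSK in AWGN plus a single tone with uniformly random phase, the conditional error probability is a Gaussian tail function of the phase, and averaging it over the phase matches a direct Monte Carlo simulation:

```python
# Toy sketch (assumed parameters): BPSK bit-error probability in AWGN plus a
# narrowband interferer modelled as a random-phase tone, comparing the
# phase-averaged tone-interferer expression against Monte Carlo simulation.
import numpy as np
from math import erfc, sqrt, cos, pi

rng = np.random.default_rng(1)
ebn0 = 10 ** (8 / 10)        # Eb/N0 = 8 dB (assumed)
a = 0.3                      # interferer amplitude relative to sqrt(Eb) (assumed)

def q(x):                    # Gaussian tail function Q(x)
    return 0.5 * erfc(x / sqrt(2))

# Analysis: average the conditional BER Q(sqrt(2*Eb/N0) * (1 + a*cos(theta)))
# over a uniformly distributed interferer phase theta.
thetas = np.linspace(0, 2 * pi, 10001)
ber_tone = np.mean([q(sqrt(2 * ebn0) * (1 + a * cos(t))) for t in thetas])

# Monte Carlo: unit-energy BPSK symbol, tone projection, Gaussian noise.
n_trials = 2_000_000
theta = rng.uniform(0, 2 * pi, n_trials)
noise = rng.normal(0, sqrt(0.5 / ebn0), n_trials)
y = 1 + a * np.cos(theta) + noise
ber_sim = np.mean(y < 0)

print(f"tone-interferer analysis: {ber_tone:.2e}")
print(f"Monte Carlo simulation:   {ber_sim:.2e}")
```

Since Q(x) is convex for positive arguments, the phase-averaged BER exceeds the interference-free BER, i.e., the tone degrades performance on average even though its mean contribution to the decision statistic is zero.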

Journal ArticleDOI
TL;DR: New iterative soft-input soft-output (SISO) detection schemes for intersymbol interference (ISI) channels are proposed, and computer simulations verify that the sum-product (SP) algorithm converges to a good approximation of the exact marginal APPs of the transmitted symbols if the factor graph (FG) has girth at least 6.
Abstract: In this paper, based on the application of the sum-product (SP) algorithm to factor graphs (FGs) representing the joint a posteriori probability (APP) of the transmitted symbols, we propose new iterative soft-input soft-output (SISO) detection schemes for intersymbol interference (ISI) channels. We have verified by computer simulations that the SP algorithm converges to a good approximation of the exact marginal APPs of the transmitted symbols if the FG has girth at least 6. For ISI channels whose corresponding FG has girth 4, the application of a stretching technique allows us to obtain an equivalent girth-6 graph. For sparse ISI channels, the proposed algorithms have advantages in terms of complexity over optimal detection schemes based on the Bahl-Cocke-Jelinek-Raviv (BCJR) algorithm. They also allow a parallel implementation of the receiver and the possibility of a more efficient complexity reduction. The application to joint detection and decoding of low-density parity-check (LDPC) codes is also considered, and results are shown for some partial-response magnetic channels. In these cases as well, the proposed algorithms incur only a limited performance loss with respect to what can be obtained when the optimal "serial" BCJR algorithm is used for detection. Therefore, when a parallel implementation is desired, they represent a favorable alternative to the modified "parallel" BCJR algorithm proposed in the literature for application to magnetic channels.
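The benchmark the SP detector approximates, and the BCJR algorithm computes exactly, is the set of marginal symbol APPs. For a short block these can be obtained by brute-force enumeration, as in the sketch below (illustrative only, not the paper's algorithm; the channel taps, block length, and noise level are assumptions):

```python
# Sketch (assumed channel and noise): exact marginal a posteriori
# probabilities (APPs) of BPSK symbols over a 2-tap ISI channel, computed by
# enumerating all 2^n input sequences and marginalising the joint posterior.
import itertools
import numpy as np

rng = np.random.default_rng(0)
h = np.array([1.0, 0.5])          # ISI channel taps (assumed)
n_sym = 6                         # short block so enumeration is feasible
sigma = 0.5                       # noise standard deviation (assumed)

x_true = rng.choice([-1.0, 1.0], n_sym)
y = np.convolve(x_true, h)[:n_sym] + rng.normal(0, sigma, n_sym)

def likelihood(x):
    """Unnormalised p(y | x) for a candidate BPSK sequence x."""
    r = np.convolve(x, h)[:n_sym]
    return np.exp(-np.sum((y - r) ** 2) / (2 * sigma**2))

# Marginalise the joint posterior p(x | y) over all 2^n_sym sequences.
app = np.zeros(n_sym)             # P(x_k = +1 | y)
z = 0.0
for bits in itertools.product([-1.0, 1.0], repeat=n_sym):
    w = likelihood(np.array(bits))
    z += w
    app += w * (np.array(bits) > 0)
app /= z

hard = np.where(app > 0.5, 1.0, -1.0)
print("true symbols :", x_true)
print("APP P(x=+1|y):", np.round(app, 3))
print("hard decision:", hard)
```

Enumeration costs O(2^n) and is only viable for tiny blocks; BCJR reaches the same marginals with complexity linear in block length, and the SP detector on a girth-6 FG approximates them while admitting a parallel implementation.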

Journal ArticleDOI
TL;DR: Turbo-based coding schemes for relay systems are designed together with iterative decoding algorithms, and it is shown that a remarkable advantage can be achieved over the direct and multihop transmission alternatives.
Abstract: In this paper, we design turbo-based coding schemes for relay systems together with iterative decoding algorithms. In the proposed schemes, the source node sends coded information bits to both the relay and the destination nodes, while the relay simultaneously forwards its estimate for the previous coded block to the destination after decoding and re-encoding. The destination observes a superposition of the codewords and uses an iterative decoding algorithm to estimate the transmitted messages. Different from the block-by-block decoding techniques used in the literature, this decoding scheme operates over all the transmitted blocks jointly. Various encoding and decoding approaches are proposed for both single-input single-output and multi-input multi-output systems over several different channel models. Capacity bounds and information-rate bounds with binary inputs are also provided, and it is shown that the performance of the proposed practical scheme is typically about 1.0-1.5 dB away from the theoretical limits, and a remarkable advantage can be achieved over the direct and multihop transmission alternatives.

Journal ArticleDOI
TL;DR: It is proved that the CS algorithm is equivalent to a scheduling algorithm that regards the user rates as independent and identically distributed, and that the average throughput of a user is independent of the probability distributions of the other users.
Abstract: In this paper, we present a new wireless scheduling algorithm based on the cumulative distribution function (cdf) and a simple modification of it that limits the maximum starving time. This cdf-based scheduling (CS) algorithm selects the user for transmission based on the cdf of user rates, in such a way that the user whose rate is high enough, but least probable to become higher, is selected first. We prove that the CS algorithm is equivalent to a scheduling algorithm that regards the user rates as independent and identically distributed, and that the average throughput of a user is independent of the probability distributions of the other users. Thus, we can evaluate the exact throughput of a user knowing only that user's own rate distribution, which is a distinctive feature of the proposed algorithm. In addition, we modify the CS algorithm to limit the maximum starving time, and prove that the modification does not affect the average interservice time. This CS with starving-time limitation (CS-STL) algorithm limits the maximum starving time at the cost of a negligible throughput loss.
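The CS selection rule is short enough to simulate directly. In the sketch below (the rate distributions are illustrative assumptions), each user's observed rate is passed through that user's own cdf, and the user with the largest transformed value is served. Since each transformed value is uniform on [0, 1], every user is selected equally often even though the average rates differ by an order of magnitude:

```python
# Sketch (assumed rate distributions) of the cdf-based scheduling (CS) rule:
# serve the user k maximising F_k(r_k), where F_k is user k's own rate cdf.
import numpy as np

rng = np.random.default_rng(42)
means = np.array([1.0, 3.0, 10.0])      # very different average rates (assumed)
k = len(means)
slots = 200_000

rates = rng.exponential(means, size=(slots, k))
u = 1.0 - np.exp(-rates / means)        # F_k(r): exponential cdf per user
chosen = np.argmax(u, axis=1)           # CS rule: serve argmax_k F_k(r_k)

share = np.bincount(chosen, minlength=k) / slots
print("selection share per user:", np.round(share, 3))  # each close to 1/3
```

This is exactly the "equivalent to i.i.d. rates" property the abstract proves: after the per-user cdf transform, the comparison among users involves i.i.d. uniform variables, so a user's selection statistics depend only on its own distribution.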

Journal ArticleDOI
TL;DR: For large-scale networks with high total data rates, analysis of the inverse-log scheduling (ILS) algorithm reveals how its energy gain over traditional time-division multiple access depends on the channel and the data-length variations among different nodes.
Abstract: We consider the problem of minimizing the energy needed for data fusion in a sensor network by varying the transmission times assigned to different sensor nodes. The optimal scheduling protocol is derived, based on which we develop a low-complexity inverse-log scheduling (ILS) algorithm that achieves near-optimal energy efficiency. To eliminate the communication overhead required by centralized scheduling protocols, we further derive a distributed inverse-log protocol that is applicable to networks with a large number of nodes. Focusing on large-scale networks with high total data rates, we analyze the energy consumption of the ILS. Our analysis reveals how its energy gain over traditional time-division multiple access depends on the channel and the data-length variations among different nodes.
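The optimization underlying this scheduling problem can be illustrated with a two-node numerical sketch (this is an assumed model, not the paper's closed-form ILS protocol): with normalised bandwidth and noise, sending b bits in time t over channel power gain g costs E(t) = (t / g) * (2**(b / t) - 1), and the frame-duration constraint couples the nodes. A grid search over the time split shows that the energy-optimal allocation deviates from the equal (TDMA-style) split, favouring the node with the worse channel:

```python
# Numerical sketch (assumed energy model and parameters): minimising total
# transmission energy by splitting a frame of duration T between two sensor
# nodes with different channel gains, vs. an equal TDMA-style split.
import numpy as np

T = 1.0                       # frame duration (normalised)
g = np.array([1.0, 0.25])     # channel power gains (assumed)
b = np.array([2.0, 2.0])      # bits to deliver per node (assumed)

def energy(t):
    """Total energy for the time allocation vector t (one entry per node)."""
    return np.sum(t / g * (2.0 ** (b / t) - 1.0))

# Grid search over the share of the frame given to node 0.
t1 = np.linspace(0.01, 0.99, 9801) * T
totals = np.array([energy(np.array([x, T - x])) for x in t1])
best = t1[np.argmin(totals)]

equal = energy(np.array([0.5, 0.5]))
print(f"optimal share for node 0: {best:.3f}  (equal split: 0.500)")
print(f"energy: optimal {totals.min():.3f} vs equal split {equal:.3f}")
```

Because the per-node energy grows exponentially in the required rate b / t, giving extra time to the weak-channel node lowers its rate, and hence its energy, faster than it raises the strong node's; this is the kind of channel-dependent allocation that a scheduling protocol like ILS exploits at scale without per-frame numerical search.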