
Showing papers in "IEEE Transactions on Communications" in 2009


Journal ArticleDOI
TL;DR: New sensing methods based on the eigenvalues of the covariance matrix of signals received at the secondary users can be used for various signal detection applications without requiring the knowledge of signal, channel and noise power.
Abstract: Spectrum sensing is a fundamental component in a cognitive radio. In this paper, we propose new sensing methods based on the eigenvalues of the covariance matrix of signals received at the secondary users. In particular, two sensing algorithms are suggested: one is based on the ratio of the maximum eigenvalue to the minimum eigenvalue; the other is based on the ratio of the average eigenvalue to the minimum eigenvalue. Using recent results from random matrix theory (RMT), we quantify the distributions of these ratios and derive the probabilities of false alarm and probabilities of detection for the proposed algorithms. We also find the thresholds of the methods for a given probability of false alarm. The proposed methods overcome the noise uncertainty problem, and can even perform better than the ideal energy detection when the signals to be detected are highly correlated. The methods can be used for various signal detection applications without requiring knowledge of the signal, channel and noise power. Simulations based on randomly generated signals, wireless microphone signals and captured ATSC DTV signals are presented to verify the effectiveness of the proposed methods.

1,074 citations
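
For intuition, the eigenvalue-ratio test at the core of the first algorithm can be sketched in a few lines of NumPy. This is only an illustrative re-implementation of the idea; the smoothing factor, the sample stacking and any decision threshold are placeholders rather than the paper's RMT-derived settings.

import numpy as np

def max_min_eigen_ratio(samples, smoothing_factor):
    """Ratio of the largest to the smallest eigenvalue of the sample covariance matrix.

    samples: array of shape (num_receivers, num_samples) of received baseband samples.
    smoothing_factor: number of consecutive samples stacked per observation vector.
    """
    m, n = samples.shape
    cols = [samples[:, i:i + smoothing_factor].reshape(-1)
            for i in range(n - smoothing_factor + 1)]
    x = np.stack(cols, axis=1)                 # stacked observations, (m*L, n-L+1)
    cov = x @ x.conj().T / x.shape[1]          # sample covariance matrix
    eig = np.linalg.eigvalsh(cov)              # real eigenvalues, ascending
    return eig[-1] / eig[0]

rng = np.random.default_rng(0)
noise_only = rng.standard_normal((4, 5000))
common = rng.standard_normal((1, 5000))
correlated = np.repeat(common, 4, axis=0) + 0.5 * rng.standard_normal((4, 5000))
for name, obs in (("noise only", noise_only), ("correlated signal", correlated)):
    print(f"{name:17s}: max/min eigenvalue ratio = {max_min_eigen_ratio(obs, 8):.2f}")

A ratio close to one indicates noise only; the detector declares a signal present when the ratio exceeds a threshold chosen for the target false-alarm probability, which the paper derives analytically.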


Journal ArticleDOI
TL;DR: This paper proposes and analyzes an optimum decentralized spectrum allocation policy for two-tier networks that employ frequency division multiple access (including OFDMA), and is subjected to a sensible quality of service (QoS) requirement, which guarantees that both macrocell and femtocell users attain at least a prescribed data rate.
Abstract: Two-tier networks, comprising a conventional cellular network overlaid with shorter range hotspots (e.g. femtocells, distributed antennas, or wired relays), offer an economically viable way to improve cellular system capacity. The capacity-limiting factor in such networks is interference. The cross-tier interference between macrocells and femtocells can suffocate the capacity due to the near-far problem, so in practice hotspots should use a different frequency channel than the potentially nearby high-power macrocell users. Centralized or coordinated frequency planning, which is difficult and inefficient even in conventional cellular networks, is all but impossible in a two-tier network. This paper proposes and analyzes an optimum decentralized spectrum allocation policy for two-tier networks that employ frequency division multiple access (including OFDMA). The proposed allocation is optimal in terms of area spectral efficiency (ASE), and is subjected to a sensible quality of service (QoS) requirement, which guarantees that both macrocell and femtocell users attain at least a prescribed data rate. Results show the dependence of this allocation on the QoS requirement, hotspot density and the co-channel interference from the macrocell and femtocells. Design interpretations are provided.

572 citations


Journal ArticleDOI
TL;DR: In this paper, throughput performance analysis of the chunk-based subcarrier allocation is presented by considering the average bit-error-rate (BER) constraint over a chunk in downlink multiuser orthogonal frequency division multiplexing (OFDM) transmission.
Abstract: In this paper, throughput performance analysis of the chunk-based subcarrier allocation is presented by considering the average bit-error-rate (BER) constraint over a chunk in downlink multiuser orthogonal frequency division multiplexing (OFDM) transmission. The outage probabilities per subcarrier are compared between the average BER-constraint based chunk allocation and the average signal-to-noise-ratio (SNR) based chunk allocation. The effects of system parameters, such as the number of users, the number of subcarriers per chunk, and the coherence bandwidth, are evaluated. The numerical results show that, when the chunk bandwidth is smaller than the coherence bandwidth, the average downlink throughput of the chunk-based subcarrier allocation is very close to that of the single-subcarrier-based allocation. When the number of users is small, the average throughput increases dramatically with the number of users due to multiuser diversity, whereas when the number of users is large, the multiuser diversity gain is saturated. The effective throughput of the average BER-constraint based chunk allocation is higher than that of the average SNR based chunk allocation, especially when the number of users is large or when the ratio of the chunk bandwidth to the coherence bandwidth is large.

565 citations
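
As a toy illustration of the schemes being compared, the average-SNR-based variant simply hands each chunk of adjacent subcarriers to the user with the best chunk-average SNR. The sketch below assumes i.i.d. exponentially distributed per-subcarrier SNRs and arbitrary dimensions; the paper's average-BER-constrained variant replaces this selection metric.

import numpy as np

rng = np.random.default_rng(1)
num_users, num_subcarriers, chunk_size = 4, 64, 8
snr = rng.exponential(1.0, size=(num_users, num_subcarriers))     # per-subcarrier SNRs
chunks = snr.reshape(num_users, num_subcarriers // chunk_size, chunk_size)
chunk_avg_snr = chunks.mean(axis=2)                                # (users, chunks)
owners = chunk_avg_snr.argmax(axis=0)                              # best user per chunk
print("chunk owners            :", owners)
print("mean allocated chunk SNR:", round(chunk_avg_snr.max(axis=0).mean(), 2))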


Journal ArticleDOI
TL;DR: This work evaluates the communication limits imposed by low-precision ADC for transmission over the real discrete-time additive white Gaussian noise (AWGN) channel, under an average power constraint on the input.
Abstract: As communication systems scale up in speed and bandwidth, the cost and power consumption of high-precision (e.g., 8-12 bits) analog-to-digital conversion (ADC) becomes the limiting factor in modern transceiver architectures based on digital signal processing. In this work, we explore the impact of lowering the precision of the ADC on the performance of the communication link. Specifically, we evaluate the communication limits imposed by low-precision ADC (e.g., 1-3 bits) for transmission over the real discrete-time additive white Gaussian noise (AWGN) channel, under an average power constraint on the input. For an ADC with K quantization bins (i.e., a precision of log2 K bits), we show that the input distribution need not have any more than K+1 mass points to achieve the channel capacity. For 2-bin (1-bit) symmetric quantization, this result is tightened to show that binary antipodal signaling is optimum for any signal-to-noise ratio (SNR). For multi-bit quantization, a dual formulation of the channel capacity problem is used to obtain tight upper bounds on the capacity. The cutting-plane algorithm is employed to compute the capacity numerically, and the results obtained are used to make the following encouraging observations: (a) up to a moderately high SNR of 20 dB, 2-3 bit quantization results in only 10-20% reduction of spectral efficiency compared to unquantized observations, (b) standard equiprobable pulse amplitude modulated input with quantizer thresholds set to implement maximum likelihood hard decisions is asymptotically optimum at high SNR, and works well at low to moderate SNRs as well.

410 citations
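
The 1-bit result above has a simple numerical companion: a symmetric 1-bit quantizer with binary antipodal input turns the real AWGN channel into a binary symmetric channel with crossover probability Q(sqrt(SNR)), so its rate is 1 - H2(Q(sqrt(SNR))). The short script below compares this with the unquantized capacity 0.5*log2(1+SNR); it is a sanity check, not the paper's cutting-plane computation.

import numpy as np
from scipy.stats import norm

def binary_entropy(p):
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

snr_db = np.array([0.0, 5.0, 10.0, 20.0])
snr = 10.0 ** (snr_db / 10.0)
rate_1bit = 1.0 - binary_entropy(norm.sf(np.sqrt(snr)))   # BPSK through a 1-bit ADC
cap_unquantized = 0.5 * np.log2(1.0 + snr)                # real AWGN channel capacity
for d, r, c in zip(snr_db, rate_1bit, cap_unquantized):
    print(f"SNR {d:4.1f} dB: 1-bit rate {r:.3f} b/use, unquantized capacity {c:.3f} b/use")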


Journal ArticleDOI
TL;DR: A new type of estimator is introduced that aims at maximizing the effective receive signal-to-noise ratio (SNR) after taking into consideration the channel estimation errors, thus referred to as the linear maximum SNR (LMSNR) estimator.
Abstract: In this work, we consider the two-way relay network (TWRN) where two terminals exchange their information through a relay node in a bi-directional manner and study the training-based channel estimation under the amplify-and-forward (AF) relay scheme. We propose a two-phase training protocol for channel estimation: in the first phase, the two terminals send their training signals concurrently to the relay; and in the second phase, the relay amplifies the received signal and broadcasts it to both terminals. Each terminal then estimates the channel parameters required for data detection. First, we assume the channel parameters to be deterministic and derive the maximum-likelihood (ML)-based estimator. It is seen that the newly derived ML estimator is nonlinear and differs from the conventional least-square (LS) estimator. Due to the difficulty in obtaining a closed-form expression of the mean square error (MSE) for the ML estimator, we resort to the Cramér-Rao lower bound (CRLB) on the estimation MSE for the design of the optimal training sequence. Secondly, we consider stochastic channels and focus on the class of linear estimators. In contrast to the conventional linear minimum-mean-square-error (LMMSE)-based estimator, we introduce a new type of estimator that aims at maximizing the effective receive signal-to-noise ratio (SNR) after taking into consideration the channel estimation errors, thus referred to as the linear maximum SNR (LMSNR) estimator. Furthermore, we prove that orthogonal training design is optimal for both the CRLB- and the LMSNR-based design criteria. Finally, simulations are conducted to corroborate the proposed studies.

338 citations


Journal ArticleDOI
TL;DR: The pairwise error probabilities of single-input single-output (SISO) and multiple-input multiple-output (MIMO) FSO systems with intensity modulation and direct detection are expressed as generalized infinite power series with respect to the signal-to-noise ratio, and an upper bound for the associated approximation error is provided.
Abstract: Atmospheric turbulence induced fading is one of the main impairments affecting free-space optics (FSO) communications. In recent years, Gamma-Gamma fading has become the dominant fading model for FSO links because of its excellent agreement with measurement data for a wide range of turbulence conditions. However, in contrast to RF communications, the analysis techniques for FSO are not well developed and prior work has mostly resorted to simulations and numerical integration for performance evaluation in Gamma-Gamma fading. In this paper, we express the pairwise error probabilities of single-input single-output (SISO) and multiple-input multiple-output (MIMO) FSO systems with intensity modulation and direct detection (IM/DD) as generalized infinite power series with respect to the signal-to-noise ratio. For numerical evaluation these power series are truncated to a finite number of terms and an upper bound for the associated approximation error is provided. The resulting finite power series enables fast and accurate numerical evaluation of the bit error rate of IM/DD FSO with on-off keying and pulse position modulation in SISO and MIMO Gamma-Gamma fading channels. Furthermore, we extend the well-known RF concepts of diversity and combining gain to FSO and Gamma-Gamma fading. In particular, we provide simple closed-form expressions for the diversity gain and the combining gain of MIMO FSO with repetition coding across lasers at the transmitter and equal gain combining or maximal ratio combining at the receiver.

336 citations
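
A Gamma-Gamma irradiance sample is simply the product of two independent unit-mean Gamma variates with shape parameters alpha and beta, so Monte-Carlo evaluation is easy to set up alongside the series expressions. The conditional error probability Q(I*sqrt(SNR)) used below is an assumed normalization for OOK IM/DD, and alpha, beta are arbitrary turbulence parameters; the numbers are illustrative only.

import numpy as np
from scipy.stats import norm

def gamma_gamma_samples(alpha, beta, size, rng):
    x = rng.gamma(shape=alpha, scale=1.0 / alpha, size=size)   # large-scale eddies
    y = rng.gamma(shape=beta, scale=1.0 / beta, size=size)     # small-scale eddies
    return x * y                                               # unit-mean irradiance

rng = np.random.default_rng(2)
irradiance = gamma_gamma_samples(alpha=4.0, beta=2.0, size=200_000, rng=rng)
for snr_db in (10, 20, 30):
    snr = 10.0 ** (snr_db / 10.0)
    ber = np.mean(norm.sf(irradiance * np.sqrt(snr)))          # fading-averaged BER
    print(f"SNR {snr_db} dB: average BER ~ {ber:.2e}")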


Journal ArticleDOI
TL;DR: A generalized minimum mean-squared error (MMSE) channel inversion algorithm for users with multiple antennas is proposed to overcome the drawbacks of BD for multiuser MIMO systems, and the proposed algorithm is shown to be robust to channel estimation errors.
Abstract: Block diagonalization (BD) is a well-known precoding method in multiuser multi-input multi-output (MIMO) broadcast channels. This scheme can be considered as an extension of the zero-forcing (ZF) channel inversion to the case where each receiver is equipped with multiple antennas. One limitation of BD is that the sum rate does not grow linearly with the number of users and transmit antennas in the low and medium signal-to-noise ratio regimes, since the complete suppression of multi-user interference is achieved at the expense of noise enhancement. It also performs poorly under imperfect channel state information. In this paper, we propose a generalized minimum mean-squared error (MMSE) channel inversion algorithm for users with multiple antennas to overcome the drawbacks of the BD for multiuser MIMO systems. We first introduce a generalized ZF channel inversion algorithm as a new approach to the conventional BD. Applying this idea to the MMSE channel inversion for identifying orthonormal basis vectors of the precoder, and employing the MMSE criterion for finding its combining matrix, the proposed scheme increases the signal-to-interference-plus-noise ratio at each user's receiver. Simulation results confirm that the proposed scheme exhibits a linear growth of the sum rate, as opposed to the BD scheme. For block fading channels with four transmit antennas, the proposed scheme provides a 3 dB gain over the conventional BD scheme at 1% frame error rate. Also, we present a modified precoding method for systems with channel estimation errors and show that the proposed algorithm is robust to channel estimation errors.

259 citations
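
For intuition about why a regularized (MMSE-style) inversion helps at low and medium SNR, the snippet below contrasts plain ZF channel inversion with MMSE channel inversion in the simpler single-antenna-per-user case. This is not the paper's generalized algorithm for multi-antenna receivers, only the special case it generalizes; dimensions and SNR are arbitrary.

import numpy as np

rng = np.random.default_rng(3)
num_users, num_tx, snr = 4, 4, 10.0            # snr: per-user transmit SNR (linear)
h = (rng.standard_normal((num_users, num_tx)) +
     1j * rng.standard_normal((num_users, num_tx))) / np.sqrt(2)

w_zf = h.conj().T @ np.linalg.inv(h @ h.conj().T)
w_mmse = h.conj().T @ np.linalg.inv(h @ h.conj().T + (num_users / snr) * np.eye(num_users))

for name, w in (("ZF  ", w_zf), ("MMSE", w_mmse)):
    w = w * np.sqrt(num_users) / np.linalg.norm(w, "fro")   # fix the total transmit power
    eff = h @ w                                             # effective channel after precoding
    sig = np.abs(np.diag(eff)) ** 2
    intf = np.sum(np.abs(eff) ** 2, axis=1) - sig
    sinr = sig / (intf + 1.0 / snr)
    print(name, "per-user SINR (dB):", np.round(10 * np.log10(sinr), 1))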


Journal ArticleDOI
TL;DR: This paper describes and analyzes low-density parity-check code families that support a variety of different rates while maintaining the same fundamental decoder architecture, and proposes a design method that maintains good graphical properties and hence low error floors for all rates.
Abstract: This paper describes and analyzes low-density parity-check code families that support a variety of different rates while maintaining the same fundamental decoder architecture. Such families facilitate the decoding hardware design and implementation for applications that require communication at different rates, for example to adapt to changing channel conditions. Combining rows of the lowest-rate parity-check matrix produces the parity-check matrices for higher rates. An important advantage of this approach is that all effective code rates have the same blocklength. This approach is compatible with well-known techniques that allow low-complexity encoding and parallel decoding of these LDPC codes. This technique also allows the design of programmable analog LDPC decoders. The proposed design method maintains good graphical properties and hence low error floors for all rates.

252 citations
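
The row-combining idea is easy to picture with a toy parity-check matrix: adding rows of the lowest-rate matrix modulo 2 removes check equations and yields a higher-rate code with the same blocklength. The matrix and row pairs below are arbitrary and only show the mechanics, not a designed LDPC code.

import numpy as np

h_low = np.array([[1, 1, 0, 1, 0, 0, 0, 0],
                  [0, 1, 1, 0, 1, 0, 0, 0],
                  [0, 0, 1, 1, 0, 1, 0, 0],
                  [1, 0, 0, 0, 1, 0, 1, 0],
                  [0, 1, 0, 0, 0, 1, 0, 1],
                  [1, 0, 1, 0, 0, 0, 1, 1]], dtype=np.uint8)

def combine_rows(h, pairs):
    """Mod-2 combine each listed pair of rows; rows not listed are kept as they are."""
    used = {i for pair in pairs for i in pair}
    rows = [h[i] ^ h[j] for i, j in pairs]
    rows += [h[i] for i in range(h.shape[0]) if i not in used]
    return np.array(rows, dtype=np.uint8)

h_high = combine_rows(h_low, pairs=[(0, 3), (1, 4)])
n = h_low.shape[1]
print("low-rate H :", h_low.shape, " design rate ~", 1 - h_low.shape[0] / n)
print("high-rate H:", h_high.shape, " design rate ~", 1 - h_high.shape[0] / n)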


Journal ArticleDOI
TL;DR: This paper considers that a secondary user may access the spectrum allocated to a primary user as long as the interference power, inflicted at the primary's receiver as an effect of the transmission of the secondary user, remains below predefined power limits, average or peak.
Abstract: In this paper, we analyze the capacity gains of opportunistic spectrum-sharing channels in fading environments with imperfect channel information. In particular, we consider that a secondary user may access the spectrum allocated to a primary user as long as the interference power, inflicted at the primary's receiver as an effect of the transmission of the secondary user, remains below predefined power limits, average or peak, and investigate the capacity gains offered by this spectrum-sharing approach when only partial channel information of the link between the secondary's transmitter and primary's receiver is available to the secondary user. Considering an average received-power constraint, we derive the ergodic and outage capacities along with their optimum power allocation policies for Rayleigh flat-fading channels, and provide closed-form expressions for these capacity metrics. We further assume that the interference power inflicted on the primary's receiver should remain below a peak threshold. Introducing the concept of interference-outage, we derive lower bounds on the ergodic and outage capacities of the channel. In addition, we obtain closed-form expressions for the power expenditure required at the secondary transmitter to achieve the above-mentioned capacity metrics. Numerical simulations are conducted to corroborate our theoretical results.

215 citations


Journal ArticleDOI
TL;DR: An adaptive regret-based learning procedure, treated as a distributed stochastic approximation, is applied to track the set of correlated equilibria of the game and is shown to perform very well compared with other similar adaptive algorithms.
Abstract: We consider dynamic spectrum access among cognitive radios from an adaptive, game theoretic learning perspective. Spectrum-agile cognitive radios compete for channels temporarily vacated by licensed primary users in order to satisfy their own demands while minimizing interference. For both slowly varying primary user activity and slowly varying statistics of "fast" primary user activity, we apply an adaptive regret based learning procedure which tracks the set of correlated equilibria of the game, treated as a distributed stochastic approximation. This procedure is shown to perform very well compared with other similar adaptive algorithms. We also estimate channel contention for a simple CSMA channel sharing scheme.

198 citations
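
The learning step can be pictured with a generic regret-matching procedure of the Hart-Mas-Colell type, the family the paper's adaptive procedure belongs to: each radio keeps time-averaged regrets for having played one channel instead of another and randomizes its next choice in proportion to the positive regrets. The two-player payoff matrices below are arbitrary stand-ins for utility net of interference, so this is a sketch of the principle rather than the paper's tracking algorithm.

import numpy as np

rng = np.random.default_rng(4)
num_actions, horizon = 3, 20000
# payoff[p][own_action, other_action]: placeholder utilities for the two radios
payoff = [rng.uniform(0.0, 1.0, size=(num_actions, num_actions)) for _ in range(2)]

regret_sum = [np.zeros((num_actions, num_actions)) for _ in range(2)]
action = [0, 0]
joint_counts = np.zeros((num_actions, num_actions))

for t in range(1, horizon + 1):
    for p in range(2):
        pos = np.maximum(regret_sum[p][action[p]] / t, 0.0)   # average regrets w.r.t. last action
        probs = pos / pos.sum() if pos.sum() > 0 else np.full(num_actions, 1.0 / num_actions)
        action[p] = rng.choice(num_actions, p=probs)
    joint_counts[action[0], action[1]] += 1
    for p in range(2):
        own, other = action[p], action[1 - p]
        # regret increment for having played 'own' instead of each alternative
        regret_sum[p][own] += payoff[p][:, other] - payoff[p][own, other]

print("empirical joint channel-choice frequencies (approximate correlated equilibrium):")
print(np.round(joint_counts / horizon, 3))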


Journal ArticleDOI
TL;DR: The concept of faster-than-Nyquist (FTN) signaling is extended to pulse trains that modulate a bank of subcarriers, a method called two dimensional FTN signaling, which achieves the isolated-pulse error performance in as little as half the bandwidth of ordinary OFDM.
Abstract: We extend Mazo's concept of faster-than-Nyquist (FTN) signaling to pulse trains that modulate a bank of subcarriers, a method called two dimensional FTN signaling. The signal processing is similar to orthogonal frequency division multiplex (OFDM) transmission but the subchannels are not orthogonal. Despite nonorthogonal pulses and subcarriers, the method achieves the isolated-pulse error performance; it does so in as little as half the bandwidth of ordinary OFDM. Euclidean distance properties are investigated for schemes based on several basic pulses. The best have Gaussian shape. An efficient distance calculation is given. Concatenations of ordinary codes and FTN are introduced. The combination achieves the outer code gain in as little as half the bandwidth. Receivers must work in two dimensions, and several iterative designs are proposed for FTN with outer convolutional coding.
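
To make the "packing" concrete, the sketch below builds a one-dimensional FTN pulse train: binary symbols modulate time-shifted copies of a Gaussian base pulse whose spacing tau*T is smaller than the reference spacing T, so adjacent pulses deliberately overlap (the paper's two-dimensional scheme does the same across subcarriers as well). Pulse width, tau and the symbol pattern are arbitrary illustrative choices.

import numpy as np

def ftn_waveform(symbols, tau, samples_per_t=16, pulse_halfwidth=4.0):
    t = np.arange(-pulse_halfwidth, pulse_halfwidth, 1.0 / samples_per_t)
    pulse = np.exp(-np.pi * t ** 2)               # Gaussian base pulse, T = 1
    step = int(round(tau * samples_per_t))        # time shift between successive pulses
    s = np.zeros(step * (len(symbols) - 1) + len(pulse))
    for k, a in enumerate(symbols):
        s[k * step:k * step + len(pulse)] += a * pulse
    return s

symbols = np.array([1, -1, 1, 1, -1, 1, -1, -1])  # binary antipodal data
s_ref = ftn_waveform(symbols, tau=1.0)            # reference spacing (T)
s_ftn = ftn_waveform(symbols, tau=0.5)            # same symbols in half the time
print("duration in samples: reference", len(s_ref), " FTN", len(s_ftn))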

Journal ArticleDOI
TL;DR: A low-complexity, greedy max-min algorithm is proposed to solve the resource allocation for an OFDM based cognitive radio system in which one or more spectrum holes exist between multiple primary user (PU) frequency bands.
Abstract: The problem of subcarrier, bit and power allocation for an OFDM based cognitive radio system in which one or more spectrum holes exist between multiple primary user (PU) frequency bands is studied. The cognitive radio user is able to use any portion of the frequency band as long as it does not interfere unduly with the PUs' transmissions. We formulate the resource allocation as a multidimensional knapsack problem and propose a low-complexity, greedy max-min algorithm to solve it. The proposed algorithm is simple to implement and simulation results show that its performance is very close to (within 0.3% of) the optimal solution.
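
The knapsack flavour of the problem can be seen in a generic greedy bit-loading heuristic: keep giving one more bit to the subcarrier whose incremental power cost is smallest, subject to a total power budget and an interference cap toward the primary bands. This is a sketch in the spirit of the formulation, not the paper's exact greedy max-min rule; the SNR-gap rate model, channel gains and leakage weights are placeholders.

import numpy as np

def greedy_bit_loading(gains, intf_weight, power_budget, intf_budget, max_bits=6):
    """Greedy bit/power loading under a power budget and an interference cap."""
    n = len(gains)
    bits = np.zeros(n, dtype=int)
    power = np.zeros(n)
    used_power = used_intf = 0.0

    def incr_power(k):
        # extra power needed to carry one more bit on subcarrier k (SNR-gap style model)
        return (2.0 ** (bits[k] + 1) - 2.0 ** bits[k]) / gains[k]

    while True:
        feasible = [k for k in range(n)
                    if bits[k] < max_bits
                    and used_power + incr_power(k) <= power_budget
                    and used_intf + intf_weight[k] * incr_power(k) <= intf_budget]
        if not feasible:
            break
        k = min(feasible, key=incr_power)          # cheapest next bit
        dp = incr_power(k)
        bits[k] += 1
        power[k] += dp
        used_power += dp
        used_intf += intf_weight[k] * dp
    return bits, power

rng = np.random.default_rng(5)
gains = rng.exponential(1.0, size=8)               # per-subcarrier channel gains
weights = rng.uniform(0.1, 1.0, size=8)            # relative leakage toward the PU bands
bits, power = greedy_bit_loading(gains, weights, power_budget=20.0, intf_budget=5.0)
print("bits per subcarrier       :", bits)
print("total power / interference:", round(power.sum(), 2), "/", round((weights * power).sum(), 2))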

Journal ArticleDOI
TL;DR: A novel approach to provide unequal error protection (UEP) using rateless codes over erasure channels, named Expanding Window Fountain (EWF) codes, is developed and discussed, providing better performance of the UEP scheme, which is confirmed both theoretically and experimentally.
Abstract: A novel approach to provide unequal error protection (UEP) using rateless codes over erasure channels, named Expanding Window Fountain (EWF) codes, is developed and discussed. EWF codes use a windowing technique rather than a weighted (non-uniform) selection of input symbols to achieve the UEP property. The windowing approach introduces additional parameters in the UEP rateless code design, making it more general and flexible than the weighted approach. Furthermore, the windowing approach provides better performance of the UEP scheme, which is confirmed both theoretically and experimentally.
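
The window-selection step can be sketched as follows: the input symbols are ordered by importance, the first window covers only the most-important prefix, the last window covers everything, and each output symbol first draws a window (biased toward the small one) and then ordinary LT-style neighbors inside it. The window sizes, window probabilities and degree distribution below are arbitrary placeholders, not an optimized EWF design.

import numpy as np

rng = np.random.default_rng(6)
data = rng.integers(0, 256, size=100, dtype=np.uint8)     # 100 input symbols
window_sizes = [40, 100]           # window 1 = important prefix, window 2 = all symbols
window_prob = [0.6, 0.4]           # how often each window is selected
degree_pmf = {1: 0.05, 2: 0.5, 3: 0.25, 4: 0.2}           # toy LT degree distribution

def encode_symbol():
    w = rng.choice(len(window_sizes), p=window_prob)
    degrees, probs = zip(*degree_pmf.items())
    d = int(rng.choice(degrees, p=probs))
    neighbors = rng.choice(window_sizes[w], size=min(d, window_sizes[w]), replace=False)
    value = np.uint8(0)
    for i in neighbors:
        value ^= data[i]                                   # XOR of the chosen input symbols
    return sorted(int(i) for i in neighbors), int(value)

for _ in range(3):
    print(encode_symbol())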

Journal ArticleDOI
TL;DR: This paper obtains a closed-form expression for the joint probability density function of k consecutive ordered eigenvalues and, as a special case, the PDF of the ℓth ordered eigenvalue of Wishart matrices, and proposes a general methodology to evaluate some multiple nested integrals of interest.
Abstract: Random matrices play a crucial role in the design and analysis of multiple-input multiple-output (MIMO) systems. In particular, performance of MIMO systems depends on the statistical properties of a subclass of random matrices known as Wishart when the propagation environment is characterized by Rayleigh or Rician fading. This paper focuses on the stochastic analysis of this class of matrices and proposes a general methodology to evaluate some multiple nested integrals of interest. With this methodology we obtain a closed-form expression for the joint probability density function of k consecutive ordered eigenvalues and, as a special case, the PDF of the ℓth ordered eigenvalue of Wishart matrices. The distribution of the largest eigenvalue can be used to analyze the performance of MIMO maximal ratio combining systems. The PDF of the smallest eigenvalue can be used for MIMO antenna selection techniques. Finally, the PDF of the kth largest eigenvalue finds applications in the performance analysis of MIMO singular value decomposition systems.
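
A quick Monte-Carlo companion to this analysis: for an uncorrelated Rayleigh MIMO channel H, the ordered eigenvalues of H H^H can simply be histogrammed, with the largest one governing MIMO MRC/beamforming and the smallest one antenna selection. The snippet below is purely empirical and does not reproduce the paper's closed-form PDFs.

import numpy as np

rng = np.random.default_rng(7)
n_r, n_t, trials = 4, 4, 20000
eigs = np.empty((trials, min(n_r, n_t)))
for i in range(trials):
    h = (rng.standard_normal((n_r, n_t)) +
         1j * rng.standard_normal((n_r, n_t))) / np.sqrt(2)
    eigs[i] = np.sort(np.linalg.eigvalsh(h @ h.conj().T))    # ordered Wishart eigenvalues
print("mean ordered eigenvalues (smallest -> largest):", np.round(eigs.mean(axis=0), 2))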

Journal ArticleDOI
TL;DR: This paper considers the analysis of optimum combining systems in the presence of both co-channel interference and thermal noise and derives exact closed-form expressions for the moments of the SINR in the cases where either the desired-user or the interferers undergo Rician fading.
Abstract: This paper considers the analysis of optimum combining systems in the presence of both co-channel interference and thermal noise. We address the cases where either the desired-user or the interferers undergo Rician fading. Exact expressions are derived for the moment generating function of the SINR which apply for arbitrary numbers of antennas and interferers. Based on these, we obtain expressions for the symbol error probability with M-PSK. For the case where the desired-user undergoes Rician fading, we also derive exact closed-form expressions for the moments of the SINR. We show that these moments are directly related to the corresponding moments of a Rayleigh system via a simple scaling parameter, which is investigated in detail. Numerical results are presented to validate the analysis and to examine the impact of Rician fading on performance.
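
The object being analyzed is the SINR of the optimum (MMSE-style) combiner w = R^{-1} h, whose value is h^H R^{-1} h, with h the desired user's channel and R the interference-plus-noise covariance. The snippet evaluates that SINR for one random draw with a Rician desired user and Rayleigh interferers, purely as a numerical companion to the moment analysis; antenna counts, K-factor and noise level are arbitrary.

import numpy as np

rng = np.random.default_rng(8)
n_ant, n_intf, noise_var, k_factor = 4, 2, 0.1, 3.0

def rician_channel(n, k, rng):
    los = np.sqrt(k / (k + 1)) * np.ones(n, dtype=complex)          # deterministic LOS part
    nlos = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2 * (k + 1))
    return los + nlos

h = rician_channel(n_ant, k_factor, rng)                            # desired user (Rician)
g = (rng.standard_normal((n_ant, n_intf)) +
     1j * rng.standard_normal((n_ant, n_intf))) / np.sqrt(2)        # interferers (Rayleigh)
r = g @ g.conj().T + noise_var * np.eye(n_ant)                      # interference + noise covariance
sinr = np.real(h.conj() @ np.linalg.solve(r, h))                    # optimum-combining SINR
print("optimum-combining SINR:", round(10 * np.log10(sinr), 2), "dB")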

Journal ArticleDOI
TL;DR: It is shown that a more involved equalization algorithm makes it possible to achieve excellent bit-error-rate performance, even when error-correcting codes designed for the Gaussian-noise limited channel are employed, and thus does not require a complete redesign of the coding scheme.
Abstract: We investigate the spectral efficiency, achievable by a low-complexity symbol-by-symbol receiver, when linear modulations based on the superposition of uniformly time- and frequency-shifted replicas of a base pulse are employed. Although orthogonal signaling with Gaussian inputs achieves capacity on the additive white Gaussian noise channel, we show that, when finite-order constellations are employed, by giving up the orthogonality condition (thus accepting interference among adjacent signals) we can considerably improve the performance, even when a symbol-by-symbol receiver is used. We also optimize the spacing between adjacent signals to maximize the achievable spectral efficiency. Moreover, we propose a more involved transmission scheme, consisting of the superposition of two independent signals with suitable power allocation and a two-stage receiver, showing that it allows a further increase of the spectral efficiency. Finally, we show that a more involved equalization algorithm, based on soft interference cancellation, makes it possible to achieve excellent bit-error-rate performance, even when error-correcting codes designed for the Gaussian-noise limited channel are employed, and thus does not require a complete redesign of the coding scheme.

Journal ArticleDOI
TL;DR: A general theory for 1:N and M:1 dimension changing mappings is presented, and two examples for a Gaussian source and channel are provided where both a 2:1 bandwidth-reducing and a 1:2 bandwidth-expanding mapping are optimized.
Abstract: This paper deals with lossy joint source-channel coding for transmitting memoryless sources over AWGN channels. The scheme is based on the geometrical interpretation of communication by Kotel'nikov and Shannon where amplitude-continuous, time-discrete source samples are mapped directly onto the channel using curves or planes. The source and channel spaces can have different dimensions, thereby achieving either compression or error control, depending on whether the source bandwidth is smaller or larger than the channel bandwidth. We present a general theory for 1:N and M:1 dimension changing mappings, and provide two examples for a Gaussian source and channel where we optimize both a 2:1 bandwidth-reducing and a 1:2 bandwidth-expanding mapping. Both examples show high spectral efficiency and provide both graceful degradation and improvement for imperfect channel state information at the transmitter.

Journal ArticleDOI
TL;DR: It is shown in this paper that once the authors identify the trapping sets of an LDPC code of interest, a sum-product algorithm (SPA) decoder can be custom-designed to yield floors that are orders of magnitude lower than floors of the conventional SPA decoder.
Abstract: One of the most significant impediments to the use of LDPC codes in many communication and storage systems is the error-rate floor phenomenon associated with their iterative decoders. The error floor has been attributed to certain subgraphs of an LDPC code's Tanner graph induced by so-called trapping sets. We show in this paper that once we identify the trapping sets of an LDPC code of interest, a sum-product algorithm (SPA) decoder can be custom-designed to yield floors that are orders of magnitude lower than floors of the conventional SPA decoder. We present three classes of such decoders: (1) a bi-mode decoder, (2) a bit-pinning decoder which utilizes one or more outer algebraic codes, and (3) three generalized-LDPC decoders. We demonstrate the effectiveness of these decoders for two codes, the rate-1/2 (2640,1320) Margulis code which is notorious for its floors and a rate-0.3 (640,192) quasi-cyclic code which has been devised for this study. Although the paper focuses on these two codes, the decoder design techniques presented are fully generalizable to any LDPC code.

Journal ArticleDOI
TL;DR: An approach based on coalition games is proposed, in which the boundary nodes can use cooperative transmission to help the backbone nodes in the middle of the network, and can improve the network connectivity by about 50%, compared with pure repeated game schemes.
Abstract: In wireless packet-forwarding networks with selfish nodes, application of a repeated game can induce the nodes to forward each others' packets, so that the network performance can be improved. However, the nodes on the boundary of such networks cannot benefit from this strategy, as the other nodes do not depend on them. This problem is sometimes known as the curse of the boundary nodes. To overcome this problem, an approach based on coalition games is proposed, in which the boundary nodes can use cooperative transmission to help the backbone nodes in the middle of the network. In return, the backbone nodes are willing to forward the boundary nodes' packets. Here, the concept of core is used to study the stability of the coalitions in such games. Then three types of fairness are investigated, namely, min-max fairness using nucleolus, average fairness using the Shapley function, and a newly proposed market fairness. Based on the specific problem addressed in this paper, market fairness is a new fairness concept involving fairness between multiple backbone nodes and multiple boundary nodes. Finally, a protocol is designed using both repeated games and coalition games. Simulation results show how boundary nodes and backbone nodes form coalitions according to different fairness criteria. The proposed protocol can improve the network connectivity by about 50%, compared with pure repeated game schemes.

Journal ArticleDOI
TL;DR: This paper provides a general soft decision SMSE (SDSMSE) framework that extends the original SMSE framework to achieve synergistic CR benefits of overlay and underlay techniques and provides considerable flexibility to design overlay, underlay and hybrid overlay/underlay waveforms that are scenario dependent.
Abstract: Recent studies suggest that spectrum congestion is primarily due to inefficient spectrum usage rather than spectrum availability. Dynamic spectrum access (DSA) and cognitive radio (CR) are two techniques being considered to improve spectrum efficiency and utilization. The advent of CR has created a paradigm shift in wireless communications and instigated a change in FCC policy towards spectrum regulations. Within the hierarchical DSA model, spectrum overlay and underlay techniques are employed to enable primary and secondary users to coexist while improving overall spectrum efficiency. As employed here, spectrum overlay exploits unused (white) spectral regions while spectrum underlay exploits underused (gray) spectral regions. In general, underlay approaches use more spectrum than overlay approaches and operate below the noise floor of primary users. Spectrally modulated, spectrally encoded (SMSE) signals, to include orthogonal frequency domain multiplexing (OFDM) and multi-carrier code division multiple access (MC-CDMA), are candidate CR waveforms. The SMSE structure supports and is well suited for CR-based software defined radio (SDR) applications. This paper provides a general soft decision SMSE (SDSMSE) framework that extends the original SMSE framework to achieve synergistic CR benefits of overlay and underlay techniques. This extended framework provides considerable flexibility to design overlay, underlay and hybrid overlay/underlay waveforms that are scenario dependent. Overlay/underlay framework flexibility is demonstrated herein for a family of SMSE signals, including OFDM and MC-CDMA. Analytic derivation of CR error probability for overlay and underlay applications is presented. Simulated performance analysis of overlay, underlay and hybrid overlay/underlay waveforms is also presented and benefits discussed, to include improved spectrum efficiency and channel capacity maximization. Performance analysis of overlay/underlay CR waveform in fading channels will be discussed in Part II of the paper.

Journal ArticleDOI
TL;DR: This paper develops optimal resource allocation algorithms for the OFDMA downlink assuming the availability of only partial (imperfect) CSI, and considers both continuous and discrete ergodic weighted sum rate maximization subject to total power constraints, and average bit error rate constraints for the discrete rate case.
Abstract: Previous research efforts on OFDMA resource allocation have typically assumed the availability of perfect channel state information (CSI). Unfortunately, this is unrealistic, primarily due to channel estimation errors, and more importantly, channel feedback delay. In this paper, we develop optimal resource allocation algorithms for the OFDMA downlink assuming the availability of only partial (imperfect) CSI. We consider both continuous and discrete ergodic weighted sum rate maximization subject to total power constraints, and average bit error rate constraints for the discrete rate case. We approach these problems using a dual optimization framework, allowing us to solve these problems with O(MK) complexity per symbol for an OFDMA system with K used subcarriers and M active users, while achieving relative optimality gaps of less than 10^-5 for continuous rates and less than 10^-3 for discrete rates in simulations based on realistic parameters.
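
The dual framework mentioned above has a compact skeleton, shown here for a simplified continuous-rate, perfect-CSI case: for a fixed "power price" (dual variable) every subcarrier independently picks the best user and water-filling power level, which costs O(MK) work, and the price is then bisected until the total power meets the budget. The paper's actual algorithm handles partial CSI, discrete rates and BER constraints; the weights and channel gains below are arbitrary.

import numpy as np

def per_subcarrier_allocation(weights, gains, lam):
    """For power price lam, pick the best (user, power) on every subcarrier: O(MK) work."""
    power = np.maximum(weights[:, None] / (lam * np.log(2)) - 1.0 / gains, 0.0)   # waterfilling
    value = weights[:, None] * np.log2(1.0 + gains * power) - lam * power
    users = np.argmax(value, axis=0)
    return users, power[users, np.arange(gains.shape[1])]

def dual_power_allocation(weights, gains, p_total, iters=60):
    lo, hi = 1e-9, 1e3                      # bracket for the dual variable (power price)
    for _ in range(iters):
        lam = 0.5 * (lo + hi)
        users, p = per_subcarrier_allocation(weights, gains, lam)
        if p.sum() > p_total:
            lo = lam                        # too much power used: raise the price
        else:
            hi = lam
    return users, p

rng = np.random.default_rng(9)
num_users, num_subcarriers = 3, 16
gains = rng.exponential(1.0, size=(num_users, num_subcarriers))
users, powers = dual_power_allocation(np.array([1.0, 1.5, 0.8]), gains, p_total=16.0)
print("subcarrier owners:", users)
print("total power used :", round(powers.sum(), 2))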

Journal ArticleDOI
TL;DR: Simulation results show that hybrid RF/FSO systems with BICM outperform previously proposed hybrid systems employing a simple repetition code and selection diversity and develop code design and power assignment criteria and provide an efficient code search procedure.
Abstract: In this paper, we propose a novel architecture for hybrid radio frequency (RF)/free-space optics (FSO) wireless systems. Hybrid RF/FSO systems are attractive since the RF and FSO sub-systems are affected differently by weather and fading phenomena. For example, while 60-GHz RF systems are susceptible to rain, fog is detrimental to FSO systems. We show that a hybrid system robust to these impairments is obtained by joint bit-interleaved coded modulation (BICM) of the bit streams transmitted over the RF and FSO sub-channels. An asymptotic performance analysis reveals that a properly designed convolutional code can exploit the diversity offered by the independent sub-channels. Furthermore, we develop code design and power assignment criteria and provide an efficient code search procedure. The cut-off rate of the proposed hybrid system is also derived and compared to that of hybrid systems with perfect channel state information at the transmitter. Simulation results show that hybrid RF/FSO systems with BICM outperform previously proposed hybrid systems employing a simple repetition code and selection diversity.

Journal ArticleDOI
TL;DR: The framework relies on the Moment Generating Function (MGF)-based approach for performance analysis of communication systems over fading channels, and on some properties of the Laplace Transform, which allow the development of a single-integral relation between the MGF of a random variable and the MGF of its inverse.
Abstract: In this Letter, we propose a comprehensive framework for performance analysis of cooperative wireless systems using Amplify and Forward (AF) relay methods. The framework relies on the Moment Generating Function (MGF)-based approach for performance analysis of communication systems over fading channels, and on some properties of the Laplace Transform, which allow the development of a single-integral relation between the MGF of a random variable and the MGF of its inverse. Moreover, a simple lower bound for Outage Probability (Pout) and Outage Capacity (OC) computation is also introduced. Numerical and simulation results are provided to substantiate the accuracy of the proposed framework.

Journal ArticleDOI
TL;DR: Experimental results show that over the AWGN channel, these non-binary quasi-cyclic LDPC codes significantly outperform Reed-Solomon codes of the same lengths and rates decoded with either the algebraic hard-decision Berlekamp-Massey algorithm or the algebraic soft-decision Kotter-Vardy algorithm.
Abstract: This paper presents two algebraic methods for constructing high performance and efficiently encodable nonbinary quasi-cyclic LDPC codes based on arrays of special circulant permutation matrices and multi-fold array dispersions. Codes constructed based on these methods perform well over the AWGN and other types of channels with iterative decoding based on belief-propagation. Experimental results show that over the AWGN channel, these non-binary quasi-cyclic LDPC codes significantly outperform Reed-Solomon codes of the same lengths and rates decoded with either algebraic hard-decision Berlekamp-Massey algorithm or algebraic soft-decision Kotter-Vardy algorithm. Also presented in this paper is a class of asymptotically optimal LDPC codes for correcting bursts of erasures. Codes constructed also perform well over flat fading channels. Non-binary quasi-cyclic LDPC codes have a great potential to replace Reed-Solomon codes in some applications in communication environments and storage systems for combating mixed types of noises and interferences.
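
The building block of such constructions is the circulant permutation matrix (CPM), a cyclically shifted identity; a quasi-cyclic parity-check matrix is an array of such blocks whose shift exponents come from the algebraic construction. The helper below assembles such an array from an exponent matrix; the exponents shown are an arbitrary example, not one of the paper's field-based designs.

import numpy as np

def circulant_permutation(size, shift):
    """Identity matrix with its columns cyclically shifted by 'shift'."""
    return np.roll(np.eye(size, dtype=np.uint8), shift, axis=1)

def qc_parity_check(exponents, cpm_size):
    """Replace each exponent e >= 0 by a shifted CPM; -1 marks an all-zero block."""
    blocks = [[circulant_permutation(cpm_size, e) if e >= 0
               else np.zeros((cpm_size, cpm_size), dtype=np.uint8)
               for e in row] for row in exponents]
    return np.block(blocks)

exponents = [[0, 1, 2, 4],
             [0, 2, 4, 1],
             [0, 4, 1, 2]]
h = qc_parity_check(exponents, cpm_size=5)
print("H shape:", h.shape, " first column weights:", h.sum(axis=0)[:5])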

Journal ArticleDOI
TL;DR: Two optimization models are established and solved that determine the desirable transmitter power, transmitter wavelength, transmitter telescope gain, and receiver telescope gain; the solution feasibility relies on using the quantum cascade laser in transmitters.
Abstract: The performance of free-space optics (FSO) communication systems is affected by building sway and atmospheric interference. These adverse effects can be mitigated by appropriately adjusting the system parameters. In this paper, two optimization models are established and solved. The desirable transmitter power, transmitter wavelength, transmitter telescope gain, and receiver telescope gain can be determined accordingly. The solution feasibility relies on using the quantum cascade laser (QCL) in transmitters. Details of the numerical experiments are also reported.

Journal ArticleDOI
TL;DR: It is shown that the QoS requirements of a user translate into a "size" for the user which is an indication of the amount of network resources consumed by the user, and a closed-form expression for the utility achieved at equilibrium is obtained.
Abstract: A game-theoretic model is proposed to study the cross-layer problem of joint power and rate control with quality of service (QoS) constraints in multiple-access networks. In the proposed game, each user seeks to choose its transmit power and rate in a distributed manner in order to maximize its own utility while satisfying its QoS requirements. The user's QoS constraints are specified in terms of the average source rate and an upper bound on the average delay where the delay includes both transmission and queuing delays. The utility function considered here measures energy efficiency and is particularly suitable for wireless networks with energy constraints. The Nash equilibrium solution for the proposed non-cooperative game is derived and a closed-form expression for the utility achieved at equilibrium is obtained. It is shown that the QoS requirements of a user translate into a "size" for the user which is an indication of the amount of network resources consumed by the user. Using this competitive multiuser framework, the tradeoffs among throughput, delay, network capacity and energy efficiency are studied. In addition, analytical expressions are given for users' delay profiles and the delay performance of the users at Nash equilibrium is quantified.

Journal ArticleDOI
TL;DR: In this article, the authors describe a parallel serial decoder architecture that can be used to map any low-density parity-check (LDPC) code with such a structure to a hardware emulation platform.
Abstract: Many classes of high-performance low-density parity-check (LDPC) codes are based on parity check matrices composed of permutation submatrices. We describe the design of a parallel-serial decoder architecture that can be used to map any LDPC code with such a structure to a hardware emulation platform. High-throughput emulation allows for the exploration of the low bit-error rate (BER) region and provides statistics of the error traces, which illuminate the causes of the error floors of the (2048, 1723) Reed-Solomon based LDPC (RS-LDPC) code and the (2209, 1978) array-based LDPC code. Two classes of error events are observed: oscillatory behavior and convergence to a class of non-codewords, termed absorbing sets. The influence of absorbing sets can be exacerbated by message quantization and decoder implementation. In particular, quantization and the log-tanh function approximation in sum-product decoders strongly affect which absorbing sets dominate in the error-floor region. We show that conventional sum-product decoder implementations of the (2209, 1978) array-based LDPC code allow low-weight absorbing sets to have a strong effect, and, as a result, elevate the error floor. Dually-quantized sum-product decoders and approximate sum-product decoders alleviate the effects of low-weight absorbing sets, thereby lowering the error floor.

Journal ArticleDOI
TL;DR: A performance evaluation of single-user communication systems operating in a composite channel is conducted by deriving an analytical expression for the outage probability and deriving the moment generating function of the G-distribution, hence facilitating the calculation of average bit error probabilities.
Abstract: Composite multipath fading/shadowing environments are frequently encountered in different realistic scenarios. These channels are generally modeled as a mixture of Nakagami-m multipath fading and log-normal shadowing. The resulting composite probability density function (pdf) is not available in closed form, thereby making the performance evaluation of communication links in these channels cumbersome. In this paper, we propose to model composite channels by the G-distribution. This pdf arises when the log-normal shadowing is substituted by the inverse-Gaussian one. This substitution will prove to be very accurate for several shadowing conditions. In this paper we conduct a performance evaluation of single-user communication systems operating in a composite channel. Our study starts by deriving an analytical expression for the outage probability. Then, we derive the moment generating function of the G-distribution, hence facilitating the calculation of average bit error probabilities. We also derive analytical expressions for the channel capacity for three adaptive transmission techniques, namely, i) optimal rate adaptation with constant power, ii) optimal power and rate adaptation, and iii) channel inversion with fixed rate.
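
The composite model is straightforward to sample, which makes it easy to check analytical results empirically: the received power is the product of a Gamma-distributed multipath gain (the power of a Nakagami-m envelope) and an inverse-Gaussian shadowing gain, the substitution that yields the G-distribution. The m parameter, mean SNR and outage threshold below are arbitrary illustrative values; the paper gives these quantities in closed form.

import numpy as np
from scipy.stats import invgauss

rng = np.random.default_rng(10)
m, samples = 2.0, 200_000
multipath = rng.gamma(shape=m, scale=1.0 / m, size=samples)       # Nakagami-m power, unit mean
shadow = invgauss.rvs(mu=1.0, size=samples, random_state=11)      # inverse-Gaussian shadowing
snr = 10.0 * multipath * shadow                                    # composite SNR, mean ~ 10
threshold = 10.0 ** (5.0 / 10.0)                                   # 5 dB outage threshold
print("empirical outage probability:", round(float(np.mean(snr < threshold)), 4))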

Journal ArticleDOI
TL;DR: This work considers a memoryless system, where the signal transmitted by the relay is obtained by applying an instantaneous relay function to the previously received signal, and optimizes the relay function via functional analysis such that the average probability of error is minimized in the high signal-to-noise ratio (SNR) regime.
Abstract: We propose relaying strategies for uncoded two-way relay channels, where two terminals transmit simultaneously to each other with the help of a relay. In particular, we consider a memoryless system, where the signal transmitted by the relay is obtained by applying an instantaneous relay function to the previously received signal. For binary antipodal signaling, a class of so-called absolute (abs)-based schemes is proposed in which the processing at the relay is solely based on the absolute value of the received signal. We analyze and optimize the symbol-error performance of existing and new abs-based and non-abs-based strategies under an average power constraint, including abs-based and non-abs-based versions of amplify and forward (AF), detect and forward (DF), and estimate and forward (EF). Additionally, we optimize the relay function via functional analysis such that the average probability of error is minimized in the high signal-to-noise ratio (SNR) regime. The optimized relay function is shown to be a Lambert W function parameterized on the noise power and the transmission energy. The optimized function behaves like abs-AF at low SNR and like abs-DF at high SNR; EF behaves similarly to the optimized function over the whole SNR range. We find the conditions under which each class of strategies is preferred. Finally, we show that all these results can also be generalized to higher order constellations.

Journal ArticleDOI
TL;DR: A unified approach for constructing binary and nonbinary quasi-cyclic LDPC codes under a single framework is presented and numerical results show that the codes constructed perform well over the AWGN channel with iterative decoding.
Abstract: A unified approach for constructing binary and nonbinary quasi-cyclic LDPC codes under a single framework is presented. Six classes of binary and nonbinary quasi-cyclic LDPC codes are constructed based on primitive elements, additive subgroups, and cyclic subgroups of finite fields. Numerical results show that the codes constructed perform well over the AWGN channel with iterative decoding.