
Showing papers in "Wireless Communications and Mobile Computing in 2011"


Journal ArticleDOI
TL;DR: Two optimized relay placement strategies are proposed with the objective of federating disjoint wireless sensor network sectors with the maximum connectivity under a cost constraint on the total number of RNs to be deployed.
Abstract: Advances in sensing and wireless communication technologies have enabled a wide spectrum of Outdoor Environment Monitoring applications. In such applications, several wireless sensor network sectors tend to collaborate to achieve more sophisticated missions that require the existence of a communication backbone connecting (federating) different sectors. Federating these sectors is an intricate task because of the huge distances between them and because of the harsh operational conditions. A natural choice for overcoming these challenges is to deploy multiple relay nodes (RNs) that provide vast coverage and sustain network connectivity in harsh environments. However, these RNs are expensive; thus, the least possible number of such devices should be deployed. Furthermore, because of the harsh operational conditions in Outdoor Environment Monitoring applications, fault tolerance becomes crucial, which imposes further challenges: RNs should be deployed in a way that tolerates failures in individual links or nodes. In this paper, we propose two optimized relay placement strategies with the objective of federating disjoint wireless sensor network sectors with the maximum connectivity under a cost constraint on the total number of RNs to be deployed. The performance of the proposed approach is validated and assessed through extensive simulations and comparisons assuming practical considerations in outdoor environments. Copyright © 2011 John Wiley & Sons, Ltd.
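As a rough illustration of the cost side of this problem (not the paper's fault-tolerant placement strategies), the relay budget needed merely to connect disjoint sectors can be estimated by placing relays along the edges of a minimum spanning tree over the sector centroids. The function name, the centroid abstraction, and the communication-range parameter are assumptions of this sketch:

```python
import math

def relays_needed(centroids, comm_range):
    """Estimate the relay-node budget for bare 1-connectivity: build a
    minimum spanning tree over sector centroids (Prim's algorithm) and
    place ceil(d / comm_range) - 1 relays along each tree edge of length d."""
    n = len(centroids)
    dist = lambda a, b: math.dist(centroids[a], centroids[b])
    in_tree, total = {0}, 0
    while len(in_tree) < n:
        # cheapest edge crossing the cut between the tree and the rest
        u, v = min(((i, j) for i in in_tree for j in range(n) if j not in in_tree),
                   key=lambda e: dist(*e))
        in_tree.add(v)
        total += math.ceil(dist(u, v) / comm_range) - 1
    return total
```

For three sectors centred at (0,0), (10,0), and (0,10) with a 4-unit relay range, the tree has two edges of length 10 and the estimate is 4 relays; the paper's strategies additionally trade such a budget against connectivity and fault tolerance.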

37 citations


Journal ArticleDOI
TL;DR: A spectral-estimation-based algorithm to estimate the EC function, given channel measurements, and the effect of spectral estimation error on the accuracy of EC estimation is analyzed, indicating the excellent practicality of this algorithm.
Abstract: The next generation wireless networks call for quality of service (QoS) support. The effective capacity (EC) proposed by Wu and Negi provides a powerful tool for the design of QoS provisioning mechanisms. In their previous work, Wu and Negi derived a formula for the effective capacity of a Rayleigh fading channel with arbitrary Doppler spectrum. However, their paper did not provide simulation results to verify the accuracy of the EC formula derived in their paper. This is due to the difficulty in simulating a Rayleigh fading channel with a Doppler spectrum of continuous frequency, as required by the EC formula. To address this difficulty, we develop a verification methodology based on a new discrete-frequency EC formula; different from the EC formula developed by Wu and Negi, our new discrete-frequency EC formula can be used in practice. Through simulation, we verify that the EC formula developed by Wu and Negi is accurate. Furthermore, to facilitate the application of the EC theory to the design of practical QoS provisioning mechanisms in wireless networks, we propose a spectral-estimation-based algorithm to estimate the EC function, given channel measurements; we also analyze the effect of spectral estimation error on the accuracy of EC estimation. Simulation results show that our proposed spectral-estimation-based EC estimation algorithm is accurate, indicating the excellent practicality of our algorithm. Copyright © 2010 John Wiley & Sons, Ltd.
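The effective capacity being verified here can also be estimated directly by Monte Carlo simulation from its log-moment-generating-function definition, EC(θ) = −(1/(θT)) log E[e^{−θS(T)}], where S(T) is the cumulative service over T fading blocks. The sketch below assumes i.i.d. block Rayleigh fading rather than an arbitrary Doppler spectrum; the function name and default parameters are illustrative:

```python
import numpy as np

def effective_capacity(theta, snr=10.0, n_blocks=50, n_trials=20000, seed=0):
    """Monte Carlo estimate of EC(theta) = -log(E[exp(-theta*S)]) / (theta*T)
    for i.i.d. block Rayleigh fading; S is the cumulative service in
    bits/s/Hz over T = n_blocks fading blocks."""
    rng = np.random.default_rng(seed)
    g = rng.exponential(1.0, size=(n_trials, n_blocks))  # |h|^2 ~ Exp(1)
    s = np.log2(1.0 + snr * g).sum(axis=1)               # Shannon rate per block
    return -np.log(np.mean(np.exp(-theta * s))) / (theta * n_blocks)
```

As the QoS exponent θ grows (stricter delay guarantees), the estimate falls from the ergodic capacity toward zero, which is the qualitative behaviour the EC theory predicts.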

19 citations


Journal ArticleDOI
TL;DR: This paper considers a spatially separated wireless sensor network (SS-WSN), which consists of a number of isolated subnetworks that could be far away from each other in distance, and proposes some efficient heuristics for EM-TSP, a generalization of the classical traveling salesman problem (TSP), an NP-complete problem.
Abstract: This paper considers a spatially separated wireless sensor network (SS-WSN), which consists of a number of isolated subnetworks that may be far away from each other. We address the issue of using mobile mules to collect data from these sensor nodes. In such an environment, both data collection latency and network lifetime are critical issues. We model this problem as a bi-objective problem, called the energy-constrained mule traveling salesman problem (EM-TSP), which aims at minimizing the traversal paths of mobile mules such that at least one node in each subnetwork is visited by a mule and the maximum energy consumption among all sensor nodes does not exceed a pre-defined threshold. Interestingly, the traversal problem turns out to be a generalization of the classical traveling salesman problem (TSP), an NP-complete problem. Based on some geometrical properties of the network, we propose some efficient heuristics for EM-TSP. We then extend our heuristics to multiple mobile mules. Extensive simulations have been conducted, which show that our proposed solutions usually give much better results than most TSP-like approximations. Copyright © 2010 John Wiley & Sons, Ltd.
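The flavour of the TSP-like traversal can be illustrated with a plain nearest-neighbour tour over one representative point per subnetwork. This is only a generic baseline under simplified assumptions (a single mule, no energy constraint), not the geometric heuristics proposed in the paper:

```python
import math

def nn_tour(points, start=0):
    """Nearest-neighbour tour visiting one representative point per
    isolated subnetwork, starting (and ending) at `start`."""
    unvisited = set(range(len(points))) - {start}
    order = [start]
    while unvisited:
        cur = order[-1]
        nxt = min(unvisited, key=lambda j: math.dist(points[cur], points[j]))
        order.append(nxt)
        unvisited.remove(nxt)
    return order

def tour_length(points, order):
    """Length of the closed tour that returns to the starting point."""
    return sum(math.dist(points[order[i]], points[order[(i + 1) % len(order)]])
               for i in range(len(order)))
```

EM-TSP additionally lets the mule stop anywhere inside each subnetwork and caps per-node energy use, which is what makes it strictly harder than this plain TSP heuristic.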

16 citations


Journal ArticleDOI
TL;DR: This paper performs a stochastic analysis of broadcasting delays in VANETs under three typical scenarios: freeway, sparse traffic, and dense traffic.
Abstract: High mobility of nodes in vehicular ad hoc networks (VANETs) may lead to frequent breakdowns of established routes in conventional routing algorithms commonly used in mobile ad hoc networks. To satisfy the high reliability and low delivery-latency requirements for safety applications in VANETs, broadcasting becomes an essential operation for route establishment and repair. However, high node mobility causes constantly changing traffic and topology, which creates great challenges for broadcasting. Therefore, there is much interest in better understanding the properties of broadcasting in VANETs. In this paper we develop stochastic models for three typical scenarios, freeway, sparse traffic, and dense traffic, and use these models to analyze the broadcasting delay in each scenario. In the freeway scenario, the analytical equation of the expected delay in one connected group is given based on statistical analysis of real traffic data collected on freeways. In the sparse traffic scenario, the broadcasting delay in an n-vehicle network is calculated by a finite Markov chain. In the dense traffic scenario, the collision problem is analyzed by different radio propagation models. The correctness of these theoretical analyses is confirmed by simulations. These results are useful to provide theoretical insights into the broadcasting delays in VANETs. Copyright © 2010 John Wiley & Sons, Ltd.
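For the sparse-traffic case, where the abstract computes delay from a finite Markov chain, the standard machinery is the fundamental matrix of an absorbing chain. The sketch below applies it to a generic transient block Q; the paper's actual chain over n-vehicle states is not reproduced here:

```python
import numpy as np

def expected_absorption_steps(Q):
    """Expected number of steps to absorption from each transient state of
    an absorbing Markov chain, via the fundamental matrix N = (I - Q)^-1.
    Q is the transient-to-transient transition probability block."""
    n = Q.shape[0]
    N = np.linalg.inv(np.eye(n) - Q)  # N[i, j] = expected visits to j from i
    return N @ np.ones(n)             # row sums = expected steps to absorption
```

For example, a single transient state from which the packet is delivered (absorbed) with probability 0.5 per slot has an expected delay of 1/0.5 = 2 slots.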

15 citations


Journal ArticleDOI
TL;DR: An infrastructure for mobile agent watermarking (MAW) is introduced, a lightweight approach that can efficiently detect manipulation attacks performed by potentially malicious hosts that might seek to subvert the normal agent operation.
Abstract: Mobile agents are software entities consisting of code, data, and state that can migrate autonomously from host to host, executing their code. In such a scenario, some security issues must be considered. In particular, this paper deals with the protection of mobile agents against manipulation attacks performed by the host, which is one of the main security issues to solve in mobile agent systems. This paper introduces an infrastructure for mobile agent watermarking (MAW). MAW is a lightweight approach that can efficiently detect manipulation attacks performed by potentially malicious hosts that might seek to subvert normal agent operation. MAW is the first proposal in the literature that adapts software watermarks to verify the execution integrity of an agent. The second contribution of this paper is a technique that uses a trusted third party (TTP), called the host revocation authority (HoRA), to punish a malicious host that has performed a manipulation attack. A proof-of-concept has also been developed, and we present some performance evaluation results that demonstrate the usability of the proposed mechanisms. Copyright © 2010 John Wiley & Sons, Ltd.

9 citations


Journal ArticleDOI
TL;DR: A new method combining SPW (sub-block phase weighting) for PAPR reduction with a linearization technique is proposed to improve the power efficiency and compensate for the nonlinearity of the HPA.
Abstract: Orthogonal frequency division multiplexing (OFDM) has been widely used in many kinds of communication systems. However, the OFDM signal suffers from a serious problem of high peak-to-average-power ratio (PAPR) because of its many sub-carriers, so it has a very wide dynamic range. As a result, the bit error rate (BER) performance may be degraded by nonlinear devices such as the high power amplifier (HPA). Even if linearization and a large back-off are used to compensate for the HPA nonlinearity, the power efficiency of the HPA remains very low because the PAPR is very high. Therefore, reducing the PAPR of the OFDM signal before linearization is a more reasonable way to improve the power efficiency and the nonlinear compensation at the same time. In this paper, we propose a new method that combines SPW (sub-block phase weighting) for PAPR reduction with a linearization technique to improve the power efficiency and compensate for the nonlinearity of the HPA. An updated SPW method is proposed that uses a novel weighting-factor multiplication based on the complementary sequence characteristic and a PAPR threshold technique. The simulation results confirm that the BER performance is significantly improved and out-of-band spectrum radiation is much mitigated. The power efficiency of the HPA can be enhanced because a small IBO (input back-off) can be set thanks to the PAPR reduction. The proposed system shows performance improvements of about 3 and 1 dB over LCP (linearized constant peak-power)-OFDM and LCP-OFDM plus SPW, respectively, at BER = 10−4. Copyright © 2010 John Wiley & Sons, Ltd.
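The core of any SPW-style scheme is compact: split the frequency-domain symbols into sub-blocks, multiply each sub-block by a unit-modulus weight, and keep the combination whose time-domain signal has the lowest PAPR. The exhaustive search below is a generic baseline sketch only; the paper's updated SPW replaces it with complementary-sequence-derived weights and a PAPR threshold:

```python
import itertools
import numpy as np

def papr_db(x):
    """Peak-to-average power ratio of a time-domain signal, in dB."""
    p = np.abs(x) ** 2
    return 10.0 * np.log10(p.max() / p.mean())

def spw_reduce(symbols, n_sub=4, phases=(1, -1, 1j, -1j)):
    """Exhaustive sub-block phase weighting over contiguous sub-blocks:
    returns the phase-weighted OFDM time signal with the lowest PAPR."""
    N = len(symbols)
    blocks = symbols.reshape(n_sub, N // n_sub)
    best_x, best_papr = None, np.inf
    for w in itertools.product(phases, repeat=n_sub):
        weighted = (blocks * np.array(w)[:, None]).reshape(N)
        x = np.fft.ifft(weighted)          # time-domain OFDM symbol
        if papr_db(x) < best_papr:
            best_papr, best_x = papr_db(x), x
    return best_x, best_papr
```

As with PTS-family schemes generally, the chosen phase vector is side information the receiver must know to undo the weighting.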

9 citations


Journal ArticleDOI
TL;DR: Both analysis and extensive simulation results demonstrate that the proposed schemes guarantee an optimal amount of data transmission by increasing the number of effective hits, and outperform the popular Least Frequently Used scheme in terms of both effective hits and communication cost.
Abstract: In mobile wireless data access networks, remote data access is expensive in terms of bandwidth consumption. An efficient caching scheme can reduce the amount of data transmission and, hence, bandwidth consumption. However, an update event makes the associated cached data objects obsolete and useless for many applications. Data access frequency and update frequency play a crucial role in deciding which data objects should be cached. Intuitively, frequently accessed but infrequently updated objects should have higher preference for being preserved in the cache. Other objects should have lower preference or be evicted, or should not be cached at all, to accommodate higher-preference objects. In this paper, we propose Optimal Update-based Replacement, a replacement (eviction) scheme for cache management in wireless data networks. To facilitate the replacement scheme, we also present two enhanced cache access schemes, named Update-based Poll-Each-Read and Update-based Call-Back. The proposed cache management schemes are supported by strong theoretical analysis. Both analysis and extensive simulation results are given to demonstrate that the proposed schemes guarantee an optimal amount of data transmission by increasing the number of effective hits, and outperform the popular Least Frequently Used scheme in terms of both effective hits and communication cost. Copyright © 2011 John Wiley & Sons, Ltd.
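The preference rule described here (keep frequently accessed, infrequently updated objects) can be sketched as an eviction policy that scores each cached object by its access-to-update ratio. This illustrative heuristic is not the paper's provably optimal scheme; the names and the rate representation are assumptions:

```python
def evict_candidate(stats):
    """Pick the cached object to evict: the one with the lowest
    access-rate-to-update-rate ratio. `stats` maps object id to
    (access_rate, update_rate). A frequently read but rarely updated
    object scores high and is kept; a frequently updated one is a poor
    caching investment because updates invalidate its cached copy."""
    return min(stats, key=lambda k: stats[k][0] / max(stats[k][1], 1e-12))
```

For example, with `{"a": (10, 1), "b": (2, 5), "c": (8, 8)}` the ratios are 10, 0.4, and 1, so "b" is evicted first.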

9 citations


Journal ArticleDOI
TL;DR: This paper generalizes binary Luby transform codes to GF(q) to develop a low-complexity maximum likelihood decoder; the proposed codes have numerous advantages, including low coding overhead, low encoding and decoding complexity, and good performance over various message block lengths, making them practical for real-time applications.
Abstract: Binary fountain codes such as Luby transform codes are a class of erasure codes which have demonstrated an asymptotic performance close to the Shannon limit when decoded with the belief propagation algorithm. When these codes are generalized to GF(q) for q > 2, their performance approaches the Shannon limit much faster than the usual binary fountain codes. In this paper, we extend binary fountain codes to GF(q). In particular, we generalize binary Luby transform codes to GF(q) to develop a low complexity maximum likelihood decoder. The proposed codes have numerous advantages, including low coding overhead, low encoding and decoding complexity, and good performance over various message block lengths, making them practical for real-time applications. Copyright © 2011 John Wiley & Sons, Ltd.
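For the binary special case, the maximum-likelihood decoder being generalized amounts to Gaussian elimination over GF(2) on the received coded symbols. The sketch below represents each coded symbol as a (bit-mask over source symbols, XOR of those symbols) pair; the GF(q) version replaces XOR with GF(q) arithmetic and is not shown:

```python
def ml_decode(coded, n):
    """Maximum-likelihood (Gaussian-elimination) decoding of a binary
    fountain code. `coded` is a list of (mask, value) pairs, where bit j of
    `mask` means source symbol j was XORed into `value`. Returns the n
    source symbols, or None if the received rows do not have full rank."""
    pivots = {}                      # lowest set bit -> reduced (mask, value)
    for m, v in coded:
        for j in range(n):           # reduce the row by existing pivots
            if (m >> j) & 1 and j in pivots:
                pm, pv = pivots[j]
                m ^= pm
                v ^= pv
        for j in range(n):           # lowest remaining bit becomes a pivot
            if (m >> j) & 1:
                pivots[j] = (m, v)
                break
    if len(pivots) < n:
        return None                  # rank deficient: need more coded symbols
    out = [0] * n                    # back-substitution, highest pivot first
    for j in sorted(pivots, reverse=True):
        m, v = pivots[j]
        for k in range(j + 1, n):
            if (m >> k) & 1:
                v ^= out[k]
        out[j] = v
    return out
```

Unlike belief propagation, this decoder succeeds whenever the received generator rows have full rank, which is the source of ML decoding's lower overhead at short block lengths.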

8 citations


Journal ArticleDOI
TL;DR: In this study, optimal and suboptimal receivers are investigated for code-multiplexed transmitted-reference (CM-TR) ultra-wideband systems and the linear minimum mean-squared error (MMSE) receiver is derived for the downlink of a multi-user CM-TR system.
Abstract: In this study, optimal and suboptimal receivers are investigated for code-multiplexed transmitted-reference (CM-TR) ultra-wideband systems. First, a single-user scenario is considered, and a CM-TR system is modeled as a generalized noncoherent pulse-position modulated system. Based on that model, the optimal receiver that minimizes the bit error probability is derived. Then, it is shown that the conventional CM-TR receiver converges to the optimal receiver under certain conditions and achieves close-to-optimal performance in practical cases. Next, multi-user systems are considered, and the conventional receiver, blinking receiver, and chip discriminator are investigated. Also, the linear minimum mean-squared error (MMSE) receiver is derived for the downlink of a multi-user CM-TR system. In addition, the maximum likelihood receiver is obtained as a performance benchmark. The practicality and the computational complexity of the receivers are discussed, and their performance is evaluated via simulations. The linear MMSE receiver is observed to provide the best trade-off between performance and complexity/practicality. Copyright © 2011 John Wiley & Sons, Ltd.

6 citations


Journal ArticleDOI
TL;DR: This paper considers achieving geography-limited broadcasting by means of time-to-live (TTL) forwarding, which limits the propagation of a packet to within a specified number of hops from the source, and shows that the TTL-based approach provides a practical trade-off between geographic coverage and broadcast overhead.
Abstract: In multihop wireless networks, delivering a packet to all nodes within a specified geographic distance from the source is a packet forwarding primitive (geography-limited broadcasting), which has a wide range of applications including disaster recovery, environment monitoring, intelligent transportation, battlefield communications, and location-based services. Geography-limited broadcasting, however, relies on all nodes having continuous access to precise location information, which may not always be achievable. In this paper, we consider achieving geography-limited broadcasting by means of time-to-live (TTL) forwarding, which limits the propagation of a packet to within a specified number of hops from the source. Because TTL operation does not require location information, it can be used universally under all conditions. Our analytical results, which are validated by simulations, confirm that TTL-based forwarding can match the performance of traditional location-based geography-limited broadcasting in terms of area coverage as well as broadcasting overhead. It is shown that the TTL-based approach provides a practical trade-off between geographic coverage and broadcast overhead. By not delivering the packet to a tiny fraction of the total node population, all of which are located near the boundary of the target area, the TTL-based approach reduces the broadcast overhead significantly. This coverage-overhead trade-off is useful if the significance of packet delivery decreases with the distance from the source. Copyright © 2011 John Wiley & Sons, Ltd.
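TTL-limited flooding itself is only a few lines: each node rebroadcasts a packet whose TTL is still positive, so the packet covers exactly the nodes within the initial TTL in hop count. A breadth-first sketch over an adjacency-list topology (names illustrative):

```python
from collections import deque

def ttl_broadcast(adj, source, ttl):
    """Set of nodes covered by TTL-limited flooding from `source`:
    exactly the nodes within `ttl` hops. `adj` maps node -> neighbours."""
    hops = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        if hops[u] == ttl:           # TTL exhausted: do not forward further
            continue
        for v in adj[u]:
            if v not in hops:
                hops[v] = hops[u] + 1
                queue.append(v)
    return set(hops)
```

The paper's analysis then asks how well such a hop-count ball approximates a geographic disk around the source, which depends on the node density.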

5 citations


Journal ArticleDOI
TL;DR: Numerical results show that the proposed multiuser two-way relay processing schemes and the optimal power control policies can efficiently limit the interference caused by the secondary network to primary users, and the sum rate of SUs can also be greatly improved.
Abstract: We consider a cognitive radio system where a secondary network shares the spectrum band with a primary network. Aiming at improving the frequency efficiency of the secondary network, we place a multiantenna relay station in the secondary network to perform two-way relaying. Three linear processing schemes at the relay station, based on zero forcing, zero forcing-maximum ratio transmission, and minimum mean square error criteria, are derived to guarantee the quality of service of primary users (PUs) and to suppress the intrapair and interpair interference among secondary users (SUs). In addition, the transmit power of SUs is optimized to maximize the sum rate of SUs and to limit the interference brought to PUs. Numerical results show that the proposed multiuser two-way relay processing schemes and the optimal power control policies can efficiently limit the interference caused by the secondary network to PUs, and the sum rate of SUs can also be greatly improved. Copyright © 2011 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: This paper proposes a fragment-based retransmission (FBR) scheme with quality-of-service considerations for IEEE 802.11e-based wireless local area networks and develops an analytical model and simulation model to investigate the performance of FBR.
Abstract: In order to satisfy quality-of-service requirements for real-time multimedia applications over wireless networks, IEEE 802.11e has been proposed to enhance wireless-access functionalities. In IEEE 802.11e, collisions occur frequently as the system load becomes heavy, and the latency for successfully transmitting data is then seriously lengthened because of contention, handshaking, and backoff overheads for collision avoidance. In this paper, we propose a fragment-based retransmission (FBR) scheme with quality-of-service considerations for IEEE 802.11e-based wireless local area networks. Our FBR can be used in all proposed fragmentation-based schemes and greatly decreases redundant transmission overheads. By utilizing FBR, the retransmission delay is significantly improved to conform to the strict timing requirements of real-time multimedia applications. We develop an analytical model and a simulation model to investigate the performance of FBR. The capability of the proposed scheme is evaluated by a series of simulations, for which we have encouraging results. Copyright © 2011 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: A novel framework quantifying the system performance of PBS by making use of spatially quantized decision regions that are determined according to service properties is provided and can be used to improve their command on the services’ behavior and estimate service performance before deployment.
Abstract: Proximity-based services (PBS) are a subclass of location-based services that aim to detect the closest point of interest by comparing the relative position of a mobile user with a set of entities to be detected. Traditionally, the performance of PBS is measured on the basis of the norm of the estimation error. Although this performance criterion is suitable for location-based services that target tracking applications, it does not give enough information about the performance of PBS. This paper provides a novel framework quantifying the system performance of PBS by making use of spatially quantized decision regions that are determined according to service properties. The detection problem in PBS is modeled as an M-ary hypothesis test, and analytical expressions for correct detection, false alarm, and missed detection rates are derived. A relation between the location estimation accuracy requirements mandated by regulatory organizations and the performance metrics of PBS is given. Additionally, a flexible cost expression that can be used to design high-performance PBS is provided. A system deployment scenario is considered to demonstrate the results. By using this framework, PBS designers can improve their command of the services' behavior and estimate service performance before deployment. Copyright © 2011 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: A comprehensive study on the statistics of adaptive antenna arrays is presented, including the derivation of the probability distribution function of the cochannel interference for the uplink of mobile communications systems, unconditioned on the interferers' directions of arrival.
Abstract: A comprehensive study on the statistics of adaptive antenna arrays is presented. The main contribution is the derivation of the probability distribution function of the cochannel interference for the uplink of mobile communications systems, unconditioned on the interferers' directions of arrival. Closed-form results for the cumulative distribution function, moment generating function, and outage probability are also reported. Numerical verification is provided, utilizing both simulation and published data. Copyright © 2011 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: The results show that the proposed HQLA power control scheme is superior to the corresponding CGB-PC in both average power consumption and effective capacity.
Abstract: We consider the problem of optimal power control for quality-of-service-assured wireless communication. The quality of service (QoS) measures of our consideration are a triplet of data rate, delay, and delay bound violation probability (DBVP). Our target is to develop power control laws that can provide delay guarantees for real-time applications over wireless networks. Power control laws that aim at optimizing certain physical-layer performance measures usually adapt the transmission power based on the channel gain; we call these "channel-gain-based" (CGB) power control (PC) laws. In this paper, we show that CGB-PC laws achieve poor link-layer delay performance. To improve the performance, we propose a novel scheme called hierarchical queue-length-aware (HQLA) power control. The key idea is to combine the best features of two PC laws, i.e., a given CGB-PC law and the clear-queue (CQ) PC law; here, the CQ-PC is defined as a PC law that uses a transmission power just enough to empty the queue at the link layer. We analyze our proposed HQLA-PC scheme by the matrix-geometric method. The analysis agrees well with the simulation results. More importantly, our results show that the proposed HQLA power control scheme is superior to the corresponding CGB-PC in both average power consumption and effective capacity. Copyright © 2010 John Wiley & Sons, Ltd.
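The clear-queue law that HQLA builds on has a closed form if one assumes the link attains Shannon capacity: invert backlog = slot · bandwidth · log2(1 + P·g/N) for the transmit power P. A sketch with illustrative parameter values (the paper's slot, bandwidth, and noise figures are not given in the abstract):

```python
import math

def clear_queue_power(backlog_bits, gain, slot_s=1e-3, bw_hz=1e5, noise_w=1e-9):
    """Transmit power just sufficient to drain the whole link-layer queue
    in one slot, assuming the link achieves Shannon capacity:
    backlog = slot_s * bw_hz * log2(1 + P * gain / noise_w), solved for P."""
    rate = backlog_bits / slot_s                 # required service rate, bits/s
    return (2.0 ** (rate / bw_hz) - 1.0) * noise_w / gain
```

Because the required power grows exponentially in the backlog and inversely in the channel gain, running CQ-PC alone is power-hungry in deep fades, which is why a hierarchical scheme invokes it selectively.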

Journal ArticleDOI
TL;DR: It is proved that preprocessing equivalent to ideal interleaving is to re-encode the codewords of the independent parallel channels with a linear inner precoder from a special class of unitary precoders complying with the optimality criterion derived in the paper.
Abstract: A novel optimal two-stage coding scheme is derived for a finite set of parallel flat-fading MIMO channels with a single common information source that has a specific constant-rate requirement. The optimality of the suggested coding is defined in terms of capacity versus outage performance. The well-known optimal coding rule relies on Gaussian codewords spanned over the whole available finite set of parallel channels. We prove that preprocessing equivalent to ideal interleaving is to re-encode the codewords of the independent parallel channels with a linear inner precoder from a special class of unitary precoders complying with the optimality criterion derived in the paper. By performing such a linear mixture of codewords sharing a common Gaussian block-wise codebook, the same capacity versus outage is guaranteed without any interleaving over the parallel channels. We utilize a virtual multiple access (VMA) channel approach to derive the optimality criterion. Selected precoders with various space-time or time-only domain spans were tested against this criterion, and we provide optimality results for a variety of channel parameters. We show that temporal processing is the most important for achieving the optimality of the precoder; a full space-time precoder does not perform better than a temporal-only one. Copyright © 2010 John Wiley & Sons, Ltd.