
Showing papers in "Annales Des Télécommunications in 2011"


Journal ArticleDOI
TL;DR: There is a clear trade-off between flexibility and performance, but it is concluded that both Xen and OpenFlow are suitable platforms for network virtualization.
Abstract: Currently, there is a strong effort of the research community in rethinking the Internet architecture to cope with its current limitations and support new requirements. Many researchers conclude that there is no one-size-fits-all solution for all of the user and network provider needs and thus advocate for a pluralist network architecture, which allows the coexistence of different protocol stacks running at the same time over the same physical substrate. In this paper, we investigate the advantages and limitations of the virtualization technologies for creating a pluralist environment for the Future Internet. We analyze two types of virtualization techniques: one that provides multiple operating systems running on the same hardware, represented by Xen, and one that provides multiple network flows on the same switch, represented by OpenFlow. First, we define the functionalities needed by a Future Internet virtual network architecture and how Xen and OpenFlow provide them. We then analyze Xen and OpenFlow in terms of network programmability, processing, forwarding, control, and scalability. Finally, we carry out experiments with Xen and OpenFlow network prototypes, identifying the overhead incurred by each virtualization tool by comparing it with native Linux. Our experiments show that the OpenFlow switch forwards packets as well as native Linux does, achieving similarly high forwarding rates. On the other hand, we observe that the high complexity involved in Xen virtual machine packet forwarding limits the achievable packet rates. There is a clear trade-off between flexibility and performance, but we conclude that both Xen and OpenFlow are suitable platforms for network virtualization.

102 citations


Journal ArticleDOI
TL;DR: This letter proposes an identity-based signature scheme without bilinear pairings that saves both running time and signature size, making it more practical than previous related schemes.
Abstract: Proxy signature schemes allow proxy signers to sign messages on behalf of an original signer, a company, or an organization. Such schemes have been suggested for use in a number of applications, particularly in distributed computing, where delegation of rights is quite common. Many identity-based proxy signature schemes using bilinear pairings have been proposed. However, the relative computation cost of a pairing is approximately twenty times higher than that of a scalar multiplication over an elliptic curve group. In order to reduce the running time and the size of the signature, in this letter we propose an identity-based signature scheme without bilinear pairings. With the running time greatly reduced, our scheme is more practical than previous related schemes for real-world applications.
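
To illustrate the cost argument, the sketch below implements a plain Schnorr-style signature in which signing and verification use only group exponentiations, the discrete-log analogue of the elliptic-curve scalar multiplications the letter relies on. It is not the proposed identity-based scheme, and the group parameters (p, q, g) are insecure toy values chosen for readability.

```python
# Minimal Schnorr-style signature sketch (not the letter's ID-based scheme).
# Toy Schnorr group: q divides p - 1 and g has order q modulo p.
import hashlib
import secrets

p, q, g = 23, 11, 2   # demo-only parameters; real schemes use ~256-bit q

def H(*parts) -> int:
    """Hash arbitrary values to an exponent in [0, q)."""
    digest = hashlib.sha256("|".join(map(str, parts)).encode()).digest()
    return int.from_bytes(digest, "big") % q

def keygen():
    x = secrets.randbelow(q - 1) + 1        # private key in [1, q-1]
    return x, pow(g, x, p)                  # public key y = g^x mod p

def sign(x: int, msg: str):
    k = secrets.randbelow(q - 1) + 1        # fresh per-signature nonce
    r = pow(g, k, p)                        # one "scalar multiplication"
    e = H(r, msg)
    return e, (k + x * e) % q

def verify(y: int, msg: str, sig) -> bool:
    e, s = sig
    r = (pow(g, s, p) * pow(y, q - e, p)) % p   # g^s * y^(-e) = g^k
    return H(r, msg) == e

x, y = keygen()
assert verify(y, "delegation message", sign(x, "delegation message"))
```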

80 citations


Journal ArticleDOI
TL;DR: It is shown, by applying the Akaike information criterion, that the Weibull and Gamma distributions generally fit agglomerates of received signal amplitude data and that in various individual cases the Lognormal distribution provides a good fit.
Abstract: Comprehensive statistical characterizations of the dynamic narrowband on-body area and on-body to off-body area channels are presented. These characterizations are based on real-time measurements of the time domain channel response at carrier frequencies near the 900- and 2,400-MHz industrial, scientific, and medical bands and at a carrier frequency near the 402-MHz medical implant communications band. We consider varying amounts of body movement, numerous transmit–receive pair locations on the human body, and various bandwidths. We also consider long periods, i.e., hours of everyday activity (predominantly indoor scenarios), for on-body channel characterization. Various adult human test subjects are used. It is shown, by applying the Akaike information criterion, that the Weibull and Gamma distributions generally fit agglomerates of received signal amplitude data and that in various individual cases the Lognormal distribution provides a good fit. We also characterize fade duration and fade depth with direct matching to second-order temporal statistics. These first- and second-order characterizations have important utility in the design and evaluation of body area communications systems.
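
A minimal sketch of the model-ranking step, assuming synthetic stand-in data and SciPy's maximum-likelihood fitters: each candidate distribution is fitted to the amplitude samples and scored with the Akaike information criterion, the smaller the better.

```python
# Rank candidate amplitude distributions by AIC (synthetic stand-in data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
amps = rng.weibull(1.8, size=5000) * 0.3       # stand-in for measured amplitudes

candidates = {
    "Weibull":   stats.weibull_min,
    "Gamma":     stats.gamma,
    "Lognormal": stats.lognorm,
}

for name, dist in candidates.items():
    params = dist.fit(amps, floc=0)            # ML fit with location pinned at 0
    loglik = np.sum(dist.logpdf(amps, *params))
    k = len(params) - 1                        # free parameters (loc was fixed)
    aic = 2 * k - 2 * loglik
    print(f"{name:9s} AIC = {aic:.1f}")        # smallest AIC wins
```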

71 citations


Journal ArticleDOI
TL;DR: This work proposes a different approach based on opportunistic relaying that relies on electing some sensors to support the transmission of other ones having a worse connection and evaluates this approach from a theoretical point of view and through realistic simulations using the packet error rate outage probability as a performance criterion.
Abstract: Body area networks (BAN) offer amazing perspectives to instrument and support humans in many aspects of their life. Among all possible applications, this paper focuses on body monitoring applications in which a body is equipped with a set of sensors transmitting their measurements in real time to a common sink. In this context, at the application level, the network fits a star topology, which is quite usual in the broad scope of wireless networks. Unfortunately, the structure of the network at the physical layer is totally different. Indeed, due to the specificity of BAN radio channel features, the radio links are not stationary and all sensors suffer from link losses during independent time frames. In wireless networks, link losses are often coped with through multi-hop transmission schemes that ensure good connectivity. However, since the radio links are not stationary, multi-hop routes would have to adapt quickly to BAN changes. We instead propose in this work a different approach based on opportunistic relaying. The concept relies on electing some sensors to support the transmission of other ones having a worse connection. Instead of changing the relay from time to time, we select a relay node from a statistical perspective. We evaluate this approach from a theoretical point of view and through realistic simulations using the packet error rate outage probability as a performance criterion.
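
A toy Monte Carlo, under assumed per-frame loss probabilities, showing why an elected relay helps: the packet is lost only if both the direct link and the two-hop relay path fail in the same frame.

```python
# Compare direct transmission vs. direct + one elected relay (toy model).
import random

N = 100_000
p_direct, p_src_relay, p_relay_sink = 0.3, 0.1, 0.05   # assumed loss probs

def lost(p: float) -> bool:
    return random.random() < p

direct_fail = coop_fail = 0
for _ in range(N):
    direct_ok = not lost(p_direct)
    direct_fail += not direct_ok
    # Relay path succeeds only if both hops succeed in this frame.
    via_relay = not lost(p_src_relay) and not lost(p_relay_sink)
    coop_fail += not (direct_ok or via_relay)

print(f"PER direct ~ {direct_fail / N:.3f}, with relay ~ {coop_fail / N:.3f}")
```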

43 citations


Journal ArticleDOI
TL;DR: Uncertainty analysis of human exposure to radio waves is studied with a spectral approach based on stochastic collocation methods, which efficiently determines the statistical moments of the output variable, the specific absorption rate, with respect to uncertain input parameters.

Abstract: Uncertainty analysis of human exposure to radio waves is studied with a spectral approach based on stochastic collocation methods. This approach efficiently determines the statistical moments of the output variable, the specific absorption rate, with respect to uncertain input parameters. Polynomial chaos expansions are used for the random output, and the spectral coefficients are determined by projection or regression. These techniques are used with an electromagnetic solver based on a finite-difference time-domain scheme. The convergence of the statistical moments is analyzed for two case studies. Global sensitivity is also analyzed for the uncertain position of a cellular phone in the close vicinity of a human head model.
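
A minimal non-intrusive sketch of the regression variant, with a cheap analytic function standing in for the FDTD solver: the output is expanded on probabilists' Hermite polynomials of a standard-normal input, and the mean and variance follow from the spectral coefficients by orthogonality (E[He_k^2] = k!).

```python
# Polynomial-chaos moments by least-squares regression (toy surrogate solver).
from math import factorial

import numpy as np
from numpy.polynomial import hermite_e as He

def solver(xi):
    """Stand-in for the expensive FDTD/SAR computation."""
    return np.exp(0.3 * xi) + 0.1 * xi**2

rng = np.random.default_rng(1)
xi = rng.standard_normal(200)                 # standard-normal uncertain input
y = solver(xi)

deg = 4
V = He.hermevander(xi, deg)                   # columns He_0 .. He_deg
coeffs, *_ = np.linalg.lstsq(V, y, rcond=None)

mean = coeffs[0]                              # E[He_k] = 0 for k >= 1
var = sum(coeffs[k] ** 2 * factorial(k) for k in range(1, deg + 1))
print(f"PCE mean ~ {mean:.4f}, variance ~ {var:.4f}")
```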

32 citations


Journal ArticleDOI
TL;DR: The first results achieved in the French ANR project BANET (Body Area NEtwork and Technologies) concerning the channel characterization and modeling aspects of Body Area Networks (BANs) are presented.
Abstract: The first results achieved in the French ANR (National Research Agency) project BANET (Body Area NEtwork and Technologies) concerning the channel characterization and modeling aspects of Body Area Networks (BANs) are presented (part II). A scenario-based approach is used to determine the BAN statistical behavior, trends, and eventually models, from numerous measurement campaigns. Measurement setups are carefully described in the UWB context. The numerous sources of variability of the channel are addressed. A particular focus is put on the time-variant channel, showing notably that it is the main cause of the slow fading variance. Issues related to the data processing and the measurement uncertainties are also described.

31 citations


Journal ArticleDOI
TL;DR: The performance analysis of the 32 × 32 crosspoint-queued switch shows that round-robin-based algorithms are a better choice for implementation due to their simplicity, small hardware requirements, and avoidance of the starvation problem, which is a major drawback of the longest queue first algorithm.
Abstract: The performance analysis of the 32 × 32 crosspoint-queued switch is presented in this paper. Switches with small buffers in crosspoints were evaluated in the late 1980s, but mostly for uniform traffic. However, due to the technological limitations of that time, it was impractical to implement large buffers together with the switching fabric. The crosspoint-queued switch architecture has recently been brought back into focus since modern technology enables an easy implementation of large buffers in crosspoints. An advantage of this solution is the absence of control communication between linecards and schedulers. In this paper, the performance of four algorithms (longest queue first, round robin, exhaustive round robin, and frame-based round robin matching) is analyzed and compared. The results obtained for the crosspoint-queued switch are compared with the output-queued switch. Throughput, average cell latency, and instantaneous packet delay variance are evaluated under uniform and nonuniform traffic patterns. The results show that the longest queue first algorithm has the highest throughput in many simulated cases but also the highest average cell latency and delay variance among the observed algorithms. It is also shown that the choice of scheduling algorithm does not affect switch performance if the buffers are long enough. This suggests that round-robin-based algorithms are a better choice for implementation due to their simplicity, small hardware requirements, and avoidance of the starvation problem, which is a major drawback of the longest queue first algorithm.
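
A sketch of the two simplest scheduling decisions compared in the paper, for one output of a crosspoint-queued switch; `queues[i]` holds the occupancy of the crosspoint buffer from input i.

```python
# Two per-output scheduling policies for a crosspoint-queued switch.
from typing import List, Optional

def longest_queue_first(queues: List[int]) -> Optional[int]:
    """Serve the fullest crosspoint buffer (can starve short queues)."""
    best = max(range(len(queues)), key=lambda i: queues[i])
    return best if queues[best] > 0 else None

def round_robin(queues: List[int], last: int) -> Optional[int]:
    """Serve the next non-empty buffer after `last` (starvation-free)."""
    n = len(queues)
    for step in range(1, n + 1):
        i = (last + step) % n
        if queues[i] > 0:
            return i
    return None

queues = [3, 0, 7, 1]
print(longest_queue_first(queues))  # -> 2 (buffer with 7 cells)
print(round_robin(queues, last=2))  # -> 3 (next non-empty after input 2)
```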

26 citations


Journal ArticleDOI
Yan Zhang, Guido Dolmans
TL;DR: In this protocol, data channels are separated from control channels to support collision-free high data rate communication for CE applications and an asynchronous wakeup trigger mode is proposed as an enhancement to the priority traffic.
Abstract: The newly emerging wireless body area networks (WBANs) are intended to support both medical applications and consumer electronic (CE) applications. These two types of applications present diverse service requirements. Satisfying both medical and CE applications with a uniform medium access control (MAC) protocol is a new challenge for the WBAN. Addressing this problem, a priority-guaranteed MAC protocol is proposed in this paper. In this protocol, data channels are separated from control channels to support collision-free high-data-rate communication for CE applications. Priority-specific control channels are adopted to provide priority guarantees to life-critical medical applications. Traffic-specific data channels are deployed to improve resource efficiency and latency performance. Moreover, in order to further minimize energy consumption and access latency, an asynchronous wakeup trigger mode is proposed as an enhancement for priority traffic. Monte Carlo simulations are carried out for performance evaluation. Compared with the IEEE 802.15.4 MAC and its improved versions, the priority-guaranteed MAC demonstrates significant improvements in throughput and energy efficiency with a tolerable penalty on the latency of bursty traffic in CE applications. The customized priority-guaranteed MAC therefore satisfies the service requirements of the WBAN by making tradeoffs among the performance of different applications.

24 citations


Journal ArticleDOI
TL;DR: This paper focuses on the radio scene analysis (or fingerprinting) positioning method, and proposes a hierarchical pattern matching method during the real-time localization phase for dealing with the expanded search space after adding orientation-sensitive information.
Abstract: The emergence of innovative location-oriented services and the great advances in mobile computing and wireless networking have motivated the development of positioning systems in indoor environments. However, despite the benefits of location awareness within a building, complex indoor propagation characteristics and increased user mobility have impeded the implementation of accurate and time-efficient indoor localizers. In this paper, we consider the case of indoor positioning based on the correlation between location and signal intensity of the received Wi-Fi signals, owing to the wide availability of WLAN infrastructure and the ease of obtaining such signal strength (SS) measurements with standard 802.11 cards. With our focus on the radio scene analysis (or fingerprinting) positioning method, we study both deterministic and probabilistic schemes. We then describe techniques to improve their accuracy without considerably increasing the processing time and hardware requirements of the system. More precisely, we first propose considering orientation information and simple SS sample processing during the training of the system or the entire localization process. To deal with the expanded search space after adding orientation-sensitive information, we suggest a hierarchical pattern matching method during the real-time localization phase. Numerical results based on real experimental measurements demonstrate a noticeable performance enhancement, especially for the deterministic case, which has the additional advantage of being less complex than the probabilistic one.
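
A minimal sketch of deterministic fingerprinting with a two-level (hierarchical) search: first prune to the best-matching orientation bucket, then nearest-neighbour match within it. The radio map values are invented placeholders.

```python
# Orientation-aware deterministic fingerprinting (toy radio map).
import numpy as np

# radio_map[orientation][location] = mean SS fingerprint over three APs (dBm)
radio_map = {
    "N": {"roomA": np.array([-40.0, -70.0, -60.0]),
          "roomB": np.array([-65.0, -45.0, -70.0])},
    "E": {"roomA": np.array([-45.0, -72.0, -58.0]),
          "roomB": np.array([-65.0, -50.0, -66.0])},
}

def localize(ss: np.ndarray) -> str:
    # Level 1: keep only the orientation whose best fingerprint is closest.
    o = min(radio_map,
            key=lambda o: min(np.linalg.norm(ss - fp)
                              for fp in radio_map[o].values()))
    # Level 2: nearest neighbour in signal space within that bucket.
    return min(radio_map[o], key=lambda loc: np.linalg.norm(ss - radio_map[o][loc]))

print(localize(np.array([-42.0, -69.0, -61.0])))   # -> roomA
```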

22 citations


Journal ArticleDOI
TL;DR: This paper proposes a surrogate model to assess the distribution of the whole-body SAR in the case of exposure to multiple plane waves, and suggests that exposure to a single plane wave arriving face-on to the body, used for the guidelines, does not constitute the worst case.

Abstract: The assessment of exposure to electromagnetic waves is nowadays a key question. Concerning the relationship between exposure and incident field, most previous investigations have been performed with a single plane wave. Realistic exposure in the far field can be modeled as multiple plane waves with random directions of arrival, random amplitudes, and random phases. This paper, based on numerical investigations, studies the whole-body specific absorption rate (SAR) linked to the exposure induced by five random plane waves having uniformly distributed angles of arrival in the horizontal plane, log-normally distributed amplitudes, and uniformly distributed phases. A first result shows that this random heterogeneous exposure generates maximal variations of ±25% for the whole-body specific absorption rate. An important observation is that exposure to a single plane wave arriving face-on to the body, used for the guidelines, does not constitute the worst case. We propose a surrogate model to assess the distribution of the whole-body SAR in the case of exposure to multiple plane waves. For a sample of 30 values of whole-body SAR induced by five plane waves at 2.4 GHz, this simple approach, considering the resulting SAR as the sum of the SARs induced by each isolated plane wave, leads to an estimated distribution of whole-body SAR that follows the real distribution with a p value of 76% according to the Kolmogorov statistical test.
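
A sketch of the surrogate idea with synthetic numbers: approximate the multi-wave whole-body SAR by the sum of single-wave SARs and compare the two distributions with a two-sample Kolmogorov–Smirnov test. The lognormal per-wave SAR samples and the interaction noise are assumptions for illustration.

```python
# Surrogate check: sum of single-wave SARs vs. "true" multi-wave SAR (toy data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
K, N = 5, 30                                     # 5 plane waves, 30 exposures

single_wave_sar = rng.lognormal(mean=-3.0, sigma=0.5, size=(N, K))
surrogate = single_wave_sar.sum(axis=1)          # the paper's additive surrogate
# Stand-in for the full simulation: the sum plus small interaction noise.
true_sar = surrogate * rng.normal(1.0, 0.05, size=N)

stat, p_value = stats.ks_2samp(surrogate, true_sar)
print(f"KS p-value = {p_value:.2f}  (large p: surrogate tracks the true SAR)")
```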

18 citations


Journal ArticleDOI
TL;DR: This paper uses the mathematical framework of stochastic geometry to model both the road system and the locations of network nodes, and derives analytical formulas for distributions of connection lengths that play a major role in current problems in the analysis and planning of networks.
Abstract: The access network displays an important particularity: the locations of the network components strongly depend on geometrical features such as road systems and a city’s architecture. This paper deals with the distributions of point-to-point connection lengths, which play a major role in current problems in the analysis and planning of networks. Using the mathematical framework of stochastic geometry to model both the road system and the locations of network nodes, we derive analytical formulas for distributions of connection lengths. These formulas depend explicitly on a few parameters that can be computed easily and quickly, avoiding time-consuming reconstructions. We validate the approach by a comparison with actual network data and show its adaptability by considering several policies for node locations and examples of use.
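
For flavour, the sketch below checks by Monte Carlo the simplest such connection-length formula: for network nodes forming a planar Poisson process of intensity λ, the distance D from a user to the nearest node satisfies P(D > r) = exp(-λπr²). The paper's formulas additionally account for the road system, which this toy omits.

```python
# Monte Carlo check of P(D > r) = exp(-lambda * pi * r^2) (no road system).
import numpy as np

rng = np.random.default_rng(5)
lam, side, trials = 2.0, 40.0, 2000        # node intensity, window size

dists = []
for _ in range(trials):
    n = rng.poisson(lam * side * side)     # ~3200 nodes, so n = 0 is negligible
    pts = rng.uniform(0.0, side, size=(n, 2))
    dists.append(np.linalg.norm(pts - side / 2, axis=1).min())  # user at centre

r = 0.4
empirical = np.mean(np.array(dists) > r)
analytic = np.exp(-lam * np.pi * r**2)
print(f"P(D > {r}) empirical {empirical:.3f} vs analytic {analytic:.3f}")
```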

Journal ArticleDOI
TL;DR: The short signature scheme of Zhang et al. is proven to be secure against adaptive chosen-message attacks in the random oracle model, so the proposed protocol can withstand the possible attacks and is more secure and efficient.
Abstract: An authenticated group key agreement protocol allows a group of parties to authenticate each other and then determine a group key via an insecure network environment. In 2009, Lee et al. first adopted bilinear pairings to propose a new nonauthenticated group key agreement protocol and then extended it to an authenticated group key agreement protocol. This paper points out that the authenticated protocol of Lee et al. is vulnerable to an impersonation attack such that any adversary can masquerade as a legal node to determine a group key with the other legal nodes and the powerful node. This paper employs the short signature scheme of Zhang et al. to propose a new authenticated group key agreement protocol. The short signature scheme of Zhang et al. is proven to be secure against adaptive chosen-message attacks in the random oracle model, so the proposed protocol can withstand the possible attacks. Besides, compared with the authenticated protocol of Lee et al., the proposed protocol is more secure and efficient.

Journal ArticleDOI
TL;DR: A cooperative agent based approach for the vertical handover using a knowledge plane is presented to introduce the agents in the mobile nodes and access points to collect the necessary information from the environment and take a handover decision.
Abstract: Advances in technology have enabled a proliferation of mobile devices and a broad spectrum of novel and groundbreaking solutions for new applications and services. The increasing demand for anytime, anywhere services requires network operators to integrate different kinds of wireless and cellular networks. To enable this integration, it is important that users can roam freely across networks. As different technologies are involved in the current infrastructure, the problem of vertical handover needs to be addressed. To cope with the problem of seamless connectivity, several solutions have been presented, but most of them either lack intelligence or are not adaptable enough to reduce the packet loss and delay involved in the handover procedure. An intelligent technique is needed to ensure service continuity in a heterogeneous environment. This paper presents a cooperative agent-based approach for vertical handover using a knowledge plane. We propose to introduce agents in the mobile nodes and access points to collect the necessary information from the environment. Based on this information, agents take a handover decision. A selection function is also introduced in this work, which helps in choosing the best network among the available ones for handover. Finally, the proposed approach is validated with the help of simulations.
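
A sketch of the kind of weighted-score selection function the agents could evaluate before a handover; the criteria, weights, and candidate values below are illustrative assumptions rather than the paper's.

```python
# Weighted-score network selection for vertical handover (toy criteria).
candidates = {
    "wlan": {"bandwidth": 54.0, "rssi": 0.8, "cost": 0.2, "load": 0.5},
    "umts": {"bandwidth": 2.0,  "rssi": 0.9, "cost": 0.6, "load": 0.3},
}
weights = {"bandwidth": 0.4, "rssi": 0.3, "cost": -0.2, "load": -0.1}
max_bw = max(c["bandwidth"] for c in candidates.values())

def score(net: dict) -> float:
    norm = dict(net, bandwidth=net["bandwidth"] / max_bw)  # normalise to [0, 1]
    return sum(weights[k] * norm[k] for k in weights)      # higher is better

best = max(candidates, key=lambda n: score(candidates[n]))
print(best, {n: round(score(c), 3) for n, c in candidates.items()})
```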

Journal ArticleDOI
TL;DR: This work classifies potential misbehaving nodes into four adversary models, based on their capabilities, and provides algorithms that enable the location-unaware nodes to determine their coordinates in the presence of these adversaries.
Abstract: Geolocalization of nodes in a wireless sensor network is a process that allows location-unaware nodes to discover their spatial coordinates. This process requires the cooperation of all the nodes in the system. Ensuring the correctness of the process, especially in the presence of misbehaving nodes, is crucial for ensuring the integrity of the system. We analyze the problem of location-unaware nodes determining their location in the presence of misbehaving neighboring nodes that provide false data during the execution of the process. We classify potential misbehaving nodes into four adversary models, based on their capabilities. We provide algorithms that enable the location-unaware nodes to determine their coordinates in the presence of these adversaries. The algorithms always work for a given number of neighbors provided that the number of misbehaving nodes is below a certain threshold, which is determined for each adversary model.

Journal ArticleDOI
TL;DR: This work considers hybrid automatic repeat request (HARQ) protocols on a fading channel with Chase combining and deals with both Rayleigh and Nakagami-m fading and derives the packet loss probability and the throughput for HARQ both for a slow- varying and a fast-varying channel.
Abstract: This work considers hybrid automatic repeat request (HARQ) protocols on a fading channel with Chase combining and deals with both Rayleigh and Nakagami-m fading. We derive the packet loss probability and the throughput of HARQ for both a slow-varying and a fast-varying channel. We then consider link adaptation with complete channel state information (CSI), for which the instantaneous signal-to-noise ratio (SNR) is known, and with incomplete CSI, for which only the average SNR is known. We derive analytical formulae for the long-term throughput. These formulae are simple enough to be used for higher-level simulations. We show that the throughput is slightly higher on a slow-varying channel, but at the expense of a higher loss probability.
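
A Monte Carlo sketch of Chase combining on a fast Rayleigh channel, under assumed parameters: each retransmission contributes an independent exponentially distributed SNR (maximum-ratio combining), and decoding succeeds once the accumulated SNR clears a threshold.

```python
# HARQ with Chase combining over fast Rayleigh fading (toy parameters).
import random

AVG_SNR, THRESH, MAX_TX, N = 1.0, 2.0, 4, 200_000

delivered = attempts = lost = 0
for _ in range(N):
    acc = 0.0
    for tx in range(1, MAX_TX + 1):
        attempts += 1
        acc += random.expovariate(1 / AVG_SNR)   # Rayleigh fading -> exp. SNR
        if acc >= THRESH:                        # combined SNR clears threshold
            delivered += 1
            break
    else:
        lost += 1                                # all MAX_TX attempts failed

print(f"loss prob ~ {lost / N:.4f}, throughput ~ {delivered / attempts:.3f} pkt/slot")
```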

Journal ArticleDOI
TL;DR: It is shown that Lee et al.’s authenticated group key agreement protocol is insecure, and an improved protocol providing mutual authentication and resisting the ephemeral key compromise attack is proposed.
Abstract: In 2009, Lee et al. (Ann Telecommun 64:735–744, 2009) proposed a new authenticated group key agreement protocol for imbalanced wireless networks. Their protocol, based on bilinear pairing, was proven secure under the computational Diffie–Hellman assumption. It remedies the security weakness of Tseng’s nonauthenticated protocol, which cannot ensure the validity of the transmitted messages. In this paper, the authors show that Lee et al.’s authenticated protocol is also insecure: an adversary can impersonate any mobile user to cheat the powerful node. Furthermore, the authors propose an improvement of Lee et al.’s protocol and prove its security in Manulis et al.’s model. The new protocol provides mutual authentication and resists the ephemeral key compromise attack by binding the user’s static private key and ephemeral key.

Journal ArticleDOI
TL;DR: This work presents an in-depth simulation-based analysis of two reactive routing protocols, i.e., dynamic source routing (DSR) and ad hoc on-demand distance vector (AODV), with modified IEEE 802.11a PHY/MAC layers in modified VANET mobility models, showing that in urban/highway mobility scenarios, AODV’s performance with the forthcoming 802.11p at high bit rates would be better than DSR’s.
Abstract: Realistic mobility dynamics and the underlying PHY/MAC layer implementation affect the real deployment of routing protocols in vehicular ad hoc networks (VANETs). Currently, dedicated short range communication devices use the wireless access in vehicular environment (WAVE) mode of operation, and IEEE is now standardizing 802.11p WAVE. This work presents an in-depth simulation-based analysis of two reactive routing protocols, i.e., dynamic source routing (DSR) and ad hoc on-demand distance vector (AODV), with modified IEEE 802.11a PHY/MAC layers (comparable to 802.11p) in modified VANET mobility models (freeway, stop sign, and traffic sign), in terms of load, throughput, delay, number of hops, and retransmission attempts. Results obtained using the OPNET simulator show that in urban/highway mobility scenarios, AODV’s performance with the forthcoming 802.11p at high bit rates would be better than DSR’s, with higher throughput, lower delay, and fewer retransmission attempts. Moreover, this comprehensive evaluation will help address challenges associated with the future deployment of routing protocols on devices with the upcoming IEEE 802.11p, concerning specific macro-/micro-mobility scenarios.

Journal ArticleDOI
TL;DR: The dynamic delay spread of the channel, which determines an energy collector detecting the signal energy over a time window, is investigated and a two-state alternating Weibull renewal process model is proposed.
Abstract: In this paper, we expand the knowledge of the ultra-wideband (UWB) channel in the frequency range of 3.1–10 GHz in close proximity of a human body. The channels under dynamic conditions due to the effect of body motions are studied through the pseudo-dynamic measurement method. Firstly, the first-order statistics of the channels, namely, the amplitude distributions, are investigated. Secondly, the dynamic features of the channels are studied through their second-order statistics, namely, the good and bad channel durations as well as the level crossing rate (LCR), which are important for a cross-layer design. The three strongest peaks, which capture most of the energy of the channel, are taken into account. Finally, a two-state alternating Weibull renewal process model is proposed. The model provides good usability with low complexity and can then be used to better design communication network protocols for WBANs. In addition, for the sake of designing a non-coherent receiver, the dynamic delay spread of the channel, which determines an energy collector detecting the signal energy over a time window, is investigated.
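
A sketch of the proposed model's generative form, alternating "good" and "bad" channel durations drawn from separate Weibull distributions; the shape and scale values below are placeholders, not the fitted ones.

```python
# Two-state alternating Weibull renewal process (placeholder parameters).
import numpy as np

rng = np.random.default_rng(3)
SHAPE = {"good": 1.4, "bad": 0.9}     # Weibull shape k per state
SCALE = {"good": 0.80, "bad": 0.15}   # Weibull scale lambda per state (s)

def simulate(total_time: float):
    t, state, episodes = 0.0, "good", []
    while t < total_time:
        dur = SCALE[state] * rng.weibull(SHAPE[state])  # state sojourn time
        episodes.append((state, dur))
        t += dur
        state = "bad" if state == "good" else "good"    # alternate states
    return episodes

eps = simulate(60.0)
good = sum(d for s, d in eps if s == "good")
print(f"time in good state: {good / sum(d for _, d in eps):.2%}")
```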

Journal ArticleDOI
Hong Peng1, Jun Wang1
TL;DR: In this paper, the authors propose an optimal audio watermarking scheme using genetic optimization with a variable-length mechanism, which can automatically determine optimal embedding parameters for each audio frame of an audio signal.
Abstract: Designing an optimal audio watermarking system is an open and difficult issue since its two basic performance measures, i.e., imperceptibility and robustness, conflict with each other. An optimal audio watermarking scheme therefore needs to balance imperceptibility and robustness optimally. In order to realize such an optimal watermarking system, by treating this balance as an optimization problem, we propose an optimal audio watermarking scheme using genetic optimization with a variable-length mechanism. The presented genetic optimization procedure can automatically determine optimal embedding parameters for each audio frame of an audio signal. In particular, the employed variable-length mechanism can effectively search for the most suitable positions for watermark embedding, including suitable audio frames and their AC coefficients. Owing to the genetic optimization with the variable-length mechanism, the proposed audio watermarking scheme can not only guarantee good quality of the watermarked audio signal but also effectively improve its robustness. Experimental results show that the proposed watermarking scheme offers good imperceptibility and high resistance against common signal processing and some desynchronization attacks.
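
A toy sketch of a variable-length genetic search of the kind described, where individuals are sets of candidate embedding positions and a single fitness trades an invented robustness proxy against an invented distortion proxy; the paper's actual perceptual and robustness measures are not reproduced here.

```python
# Variable-length genetic search over embedding positions (toy objectives).
import random

N_POS = 32                                     # candidate embedding positions

def fitness(positions):
    if not positions:
        return -1.0
    robustness = len(positions) / N_POS        # proxy: more embedding, more robust
    distortion = sum(p / N_POS for p in positions) / len(positions)
    return robustness - 0.8 * distortion       # trade-off weight is invented

def mutate(ind):
    s = set(ind)
    s.symmetric_difference_update({random.randrange(N_POS)})  # add/drop a position
    return sorted(s)                           # individuals can grow or shrink

pop = [sorted(random.sample(range(N_POS), random.randint(1, 8)))
       for _ in range(20)]
for _ in range(100):
    pop.sort(key=fitness, reverse=True)        # elitist selection
    pop = pop[:10] + [mutate(random.choice(pop[:10])) for _ in range(10)]

print(len(pop[0]), round(fitness(pop[0]), 3))
```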

Journal ArticleDOI
TL;DR: The Monte Carlo implementation of joint probabilistic data-association filter (JPDAF) is applied to the well-known problem of multi-target tracking in a cluttered area and the distributed expectation maximization algorithm is exploited via the average consensus filter to diffuse the nodes’ information over the whole network.
Abstract: In this paper, a distributed multi-target tracking (MTT) algorithm suitable for implementation in wireless sensor networks is proposed. For this purpose, the Monte Carlo (MC) implementation of joint probabilistic data-association filter (JPDAF) is applied to the well-known problem of multi-target tracking in a cluttered area. Also, to make the tracking algorithm scalable and usable for sensor networks of many nodes, the distributed expectation maximization algorithm is exploited via the average consensus filter, in order to diffuse the nodes’ information over the whole network. The proposed tracking system is robust and capable of modeling any state space with nonlinear and non-Gaussian models for target dynamics and measurement likelihood, since it uses the particle-filtering methods to extract samples from the desired distributions. To encounter the data-association problem that arises due to the unlabeled measurements in the presence of clutter, the well-known JPDAF algorithm is used. Furthermore, some simplifications and modifications are made to MC–JPDAF algorithm in order to reduce the computation complexity of the tracking system and make it suitable for low-energy sensor networks. Finally, the simulations of tracking tasks for a sample network are given.
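
A minimal sketch of the average-consensus iteration used to diffuse local statistics across nodes, assuming an undirected adjacency matrix A and a step size eps below 1 over the maximum node degree.

```python
# Average-consensus filter: every node converges to the network-wide mean.
import numpy as np

A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)     # 4-node ring topology
x = np.array([4.0, 8.0, 1.0, 3.0])            # each node's local estimate
eps = 0.25                                    # < 1 / max degree (= 2) here

for _ in range(100):
    # x_i <- x_i + eps * sum_j A_ij (x_j - x_i), using only neighbour values
    x = x + eps * (A @ x - A.sum(axis=1) * x)

print(x)   # all entries converge to the average, 4.0
```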

Journal ArticleDOI
TL;DR: This study is focused on the characterization of the propagation channel between two wearable devices placed on a human body, and operating at 2.4 and 5.8 GHz.
Abstract: In this paper, a couple of path gain models for on-body communication systems are analyzed and compared. The study is focused on the characterization of the propagation channel between two wearable devices placed on a human body, and operating at 2.4 and 5.8 GHz. Wearable wireless low-cost commercial modules and low-profile annular ring slot antennas were used, and measurements were performed for different radio links on a human body. Measurement results have been compared with CST Microwave Studio simulations by resorting to simplified body models like flat, cylindrical, spherical, and ellipsoidal canonical geometries. Characteristic parameters appearing in the propagation models have been calculated for the analyzed on-body channels and summarized in a concluding table.
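
For reference, a sketch of the classic log-distance path-gain form commonly used for such on-body links, with reference gain, exponent, and reference distance as the characteristic parameters; the numbers below are invented placeholders, not the paper's tabulated values.

```python
# Log-distance path-gain model (placeholder parameter values).
import math

def path_gain_db(d: float, pg0_db: float = -35.0, n: float = 3.3,
                 d0: float = 0.1) -> float:
    """Mean path gain at distance d (metres), referenced to d0."""
    return pg0_db - 10.0 * n * math.log10(d / d0)

for d in (0.1, 0.2, 0.4, 0.8):
    print(f"d = {d:.1f} m -> PG = {path_gain_db(d):6.1f} dB")
```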

Journal ArticleDOI
TL;DR: In this paper, the authors propose an asynchronously scheduled and multiple wake-up provisioned duty cycle MAC protocol for WSNs, which employs an asynchronous rendezvous schedule selection technique to provision a maximum of n wake-ups in the operational cycle of a receiver.
Abstract: To reduce the energy cost of wireless sensor networks (WSNs), the duty cycle (i.e., periodic wake-up and sleep) concept has been used in several medium access control (MAC) protocols. Although these protocols are energy efficient, they are primarily designed for low-traffic environments and therefore sacrifice delay in order to maximize energy conservation. However, many applications having both low and high traffic demand a duty cycle MAC that achieves better energy utilization with minimum energy loss while ensuring delay optimization for timely and effective actions. In this paper, nW-MAC is proposed; this is an asynchronously scheduled, multiple-wake-up provisioned duty cycle MAC protocol for WSNs. The nW-MAC employs an asynchronous rendezvous schedule selection technique to provision a maximum of n wake-ups in the operational cycle of a receiver. The proposed MAC is suitable for both low- and high-traffic applications using a reception window-based medium access with a specific RxOp. Furthermore, the per-cycle multiple wake-up concept ensures optimal energy consumption and delay while maintaining a higher throughput, as compared to existing mechanisms. Through analysis and simulations, we have quantified the energy-delay performance and obtained results that demonstrate the effectiveness of nW-MAC.

Journal ArticleDOI
TL;DR: The wave concept iterative procedure in cylindrical coordinates is used to analyze this new antenna; using the proposed procedure, less computing time and memory are needed to calculate the electromagnetic parameters of the annular multi-slit antenna.
Abstract: In this paper, a new antenna for satellite applications is proposed. This antenna is designed to operate at any desired frequency. It consists of a circular microstrip patch antenna incorporating concentric annular slits, printed on a grounded substrate. The details of the proposed antenna design and numerical results are presented and discussed. The wave concept iterative procedure in cylindrical coordinates is used to analyze this new antenna. Using the proposed procedure, less computing time and memory are needed to calculate the electromagnetic parameters of the annular multi-slit antenna.

Journal ArticleDOI
TL;DR: In the past 10 years, the Internet has become the network capable of integrating all types of services (data, voice, video and TV) and nowadays, the IP layer appears as the convergence layer for ensuring connectivity between heterogeneous networks.
Abstract: In the past 10 years, the Internet has become the network capable of integrating all types of services (data, voice, video and TV). Nowadays, the IP layer appears as the convergence layer for ensuring connectivity between heterogeneous networks. Even though the shortcomings of the Internet in terms of quality of service, security and mobility are well understood in the technical literature, and in spite of initiatives flourishing all over the world (GENI in the USA, AKARI in Japan and various research programs in Europe), the predominance of the current Internet will certainly prevail in the near future. Nevertheless, the emergence of new technologies, especially in optics and wireless, is substantially modifying the current landscape. The availability of smart phones and mobile computers in the mass market is going to accelerate the convergence of fixed and mobile networks. The architecture of the latter will be heavily modified by the introduction of IP as the convergence layer. In addition, the ability of smart terminals to support voice, video and data services, together with the evolution of wireless technology towards even higher bit rates, is about to open the door to the transfer of massive volumes of traffic through the air interface. This will imply a deep rethinking of traffic engineering tools for both radio and mobile backhaul networks. The situation is similar for fixed access with the deployment of optical fibre. Very high bit rates (1 Gbit/s or more for the downlink and at least 10 Mbit/s for the uplink) enable the transfer of huge amounts of traffic that have a great impact on backhaul and core networks. Moreover, the evolution of optical technology towards more flexibility with Dynamic Optical Circuit Switching and Optical Burst Switching should deeply modify the architecture of core networks, where a better coordination between the IP and the physical layer will be necessary. In parallel to technological evolutions, the Internet is the place of rapid emergence of new services and usage. As already observed with peer-to-peer networks, many overlay networks are rapidly emerging on top of the Internet, e.g. social networks that can potentially give rise to large amounts of traffic through the exchange of voluminous content (pictures, videos etc.) while requiring an acceptable quality of service level. The situation is similar for Over the Top players deploying services over the Internet and requesting quality of service and flexibility from the network. To facilitate this tremendous emergence of new usage and services, Content Distribution Systems will certainly be deployed on a large scale and will significantly modify the main traffic flows in networks, as we already witness such evolutions. The above evolutions in terms of broadband applications, emergence of new technologies, and changes in usage require the continual reappraisal of traffic management.

Journal ArticleDOI
TL;DR: Using 1-min data from 13 years (1993–2005) of rainfall measurements in Penang in the northwestern part of the Malayan Peninsula, rain occurrence is estimated to be ~12% of the year.

Abstract: Using 1-min data from 13 years (1993–2005) of rainfall measurements in Penang in the northwestern part of the Malayan Peninsula, we estimate rain occurrence to be ~12% of the year. At the same location, during 0.01% of a year, the rainfall rate exceeds 126 mm/h. The 13-year average 1-min rainfall data are analyzed to study the diurnal, monthly, and annual variation of total rain accumulations. The Southern Oscillation values were low in 1993, 1995, and 1998, and the percentage exceedance of rain rates during La Niña years was higher than during El Niño years.
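
A sketch of how the 0.01%-of-year exceedance value is read off 1-min rain-rate samples, using synthetic data that only mimics ~12% rain occurrence; the 126 mm/h figure comes from the real measurements, not from this toy.

```python
# Rain rate exceeded for 0.01% of the year, from 1-min samples (synthetic).
import numpy as np

rng = np.random.default_rng(4)
minutes_per_year = 525_600
rates = np.zeros(minutes_per_year)             # mm/h, zero when not raining
wet = rng.random(minutes_per_year) < 0.12      # ~12% rain occurrence
rates[wet] = rng.lognormal(1.5, 1.2, wet.sum())  # invented wet-minute rates

p = 0.01 / 100                                 # 0.01% of the time
r_exceeded = np.quantile(rates, 1 - p)
print(f"rain rate exceeded 0.01% of the year: {r_exceeded:.0f} mm/h")
```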

Journal ArticleDOI
TL;DR: This paper presents a fair and efficient rate control mechanism, referred to as congestion-aware fair rate control (CFRC), for IEEE 802.11s-based wireless mesh networks, and investigates the performance of CFRC using simulation in ns-2, and the results demonstrate that CFRC increases the throughput with the desired fairness.
Abstract: This paper presents a fair and efficient rate control mechanism, referred to as congestion-aware fair rate control (CFRC), for IEEE 802.11s-based wireless mesh networks. Existing mechanisms usually concentrate on achieving fairness and achieve poor throughput. This mainly happens due to the synchronous rate reduction of neighboring links or nodes of a congested node without considering whether they actually share the same bottleneck. Furthermore, the achievable throughput depends on the network load, and an efficient fair rate is achievable when the network load is balanced. Therefore, existing mechanisms usually achieve a fair rate determined by the most heavily loaded network region. CFRC uses an AIMD-based rate control mechanism which enforces a rate bound on the links that use the same bottleneck. To achieve the maximum achievable rate, it balances the network load in conjunction with the routing mechanism. Furthermore, it allows intra-mesh flows to utilize the network capacity and achieve high throughput. Finally, we investigate the performance of CFRC using simulation in ns-2, and the results demonstrate that CFRC increases throughput with the desired fairness.
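
A sketch of the AIMD core of a CFRC-style controller, clipped to a bound shared by links behind the same bottleneck; the constants and the bound itself are illustrative assumptions.

```python
# AIMD rate update clipped to a shared bottleneck bound (toy constants).
def aimd_update(rate: float, congested: bool, bound: float,
                alpha: float = 0.5, beta: float = 0.5) -> float:
    """Additive increase, multiplicative decrease, clipped to the bound."""
    rate = rate * beta if congested else rate + alpha
    return min(rate, bound)

rate, bound = 1.0, 6.0
for congested in [False, False, False, True, False, False]:
    rate = aimd_update(rate, congested, bound)
    print(f"{rate:.2f}", end=" ")   # 1.50 2.00 2.50 1.25 1.75 2.25
```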

Journal ArticleDOI
TL;DR: This paper examines how ordinary kriging can be an effective tool for assessing the field level anywhere inside the volume of interest, without any additional measurements, while also quantifying the uncertainty associated with the assessed value.
Abstract: The maintenance of frequency modulation (FM) broadcasting sites is one of the activities of TDF to guarantee the availability of broadcasting services for its clients. As it is not always possible to stop the broadcasting due to the large number of listeners, TDF has to guarantee the safety of workplaces with respect to the limits of exposure to electromagnetic fields inside the mast near the antenna. Exposure limits are defined in the European Directive for workers (see [1]). Today, TDF carries out measurements with a broadband fieldmeter to identify areas above the action level in terms of the electric field limit, but for precise measurements, three-axis probing with selective measurements is carried out, frequency by frequency, inside the workplace. Hence, the magnitude of the electric field in the zone is obtained either by a broadband fieldmeter which is moved manually along a vertical axis or by selective three-axis probing with many point measurements to obtain a spatial sampling of the field in the volume of interest. Regrettably, in practice, measurements take a long time because of the awkward placement of measurement points, the closeness of metallic structures, and the lack of space inside the mast. Consequently, the number of sampling points is generally too limited to assess the exposure. The challenge is not only to assess the field level anywhere inside the volume of interest, without any additional measurements, but also to know the uncertainty associated with the assessed value. By comparing these results to the limit value, one can draw conclusions about the safety of the volume. As a simple interpolation of the measured values does not yield the associated uncertainty, we examine in this paper how ordinary kriging can be an effective tool to take up this challenge. The paper is organized as follows: in Section 2, we recall the main theoretical results concerning ordinary kriging and, particularly, the variogram. In Section 3, we explain why numerical simulation is used as a help in the implemented process, and we present a description of the FM transmitter numerical model. In Section 4, we detail the computations of bidimensional variograms that concern the distribution of the electric field in restricted planes of the spatial coordinate system. In Section 5, we detail the construction of three-dimensional variograms that are necessary for the computation of fields over volumes. Finally, some concluding remarks are given in Section 6.
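
A compact sketch of ordinary kriging with an exponential variogram, returning both the estimated field value and the kriging variance that quantifies its uncertainty; the variogram parameters and sample points are placeholders.

```python
# Ordinary kriging with an exponential variogram (placeholder parameters).
import numpy as np

def variogram(h, nugget=0.0, sill=1.0, corr_range=2.0):
    return nugget + sill * (1.0 - np.exp(-h / corr_range))

def ordinary_krige(pts, vals, target):
    n = len(pts)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    # Kriging system with a Lagrange multiplier enforcing sum(w) = 1
    K = np.ones((n + 1, n + 1))
    K[:n, :n] = variogram(d)
    K[-1, -1] = 0.0
    rhs = np.ones(n + 1)
    rhs[:n] = variogram(np.linalg.norm(pts - target, axis=-1))
    sol = np.linalg.solve(K, rhs)
    w, mu = sol[:n], sol[-1]
    return w @ vals, w @ rhs[:n] + mu        # (estimate, kriging variance)

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
vals = np.array([1.0, 2.0, 1.5, 2.5])        # measured field levels
print(ordinary_krige(pts, vals, np.array([0.5, 0.5])))
```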

Journal ArticleDOI
TL;DR: The OP mediates the communication and negotiation among AMSs, ensuring that their SLAs and policies meet the requirements needed for the provisioning of the services, and simplifies the federation of domains and the distribution of new services in virtualised network environments.
Abstract: Existing services require assurable end-to-end quality of service, security and reliability constraints. Therefore, the networks involved in the transport of the data must cooperate to satisfy those constraints. In a next generation Internet, each of those networks may be managed by different entities. Furthermore, their policies and service level agreements (SLAs) will differ, as well as the autonomic management systems controlling them. In this context, we in the Autonomic Internet (AutoI) consortium propose the Orchestration Plane (OP), which promotes the interaction among different Autonomic Management Systems (AMSs). The OP mediates the communication and negotiation among AMSs, ensuring that their SLAs and policies meet the requirements needed for the provisioning of the services. It also simplifies the federation of domains and the distribution of new services in virtualised network environments.

Journal ArticleDOI
TL;DR: A novel distributed coding scheme for broadcast over mobile ad hoc networks that combines the multipoint relay (MPR) technique with network coding, using a rateless code and a new degree distribution to decrease the delay introduced at the intermediate nodes.
Abstract: We propose a novel distributed coding scheme for broadcast over mobile ad hoc networks. In this scheme, we combine the multipoint relay (MPR) technique with network coding. Only MPR nodes perform coding, using a rateless code. Rather than waiting for a large number of encoded packets to be received before MPR nodes can decode and resend coded packets, we design a new degree distribution that enables the nodes to start decoding even when only a small number of encoded packets has been received. Thus, we decrease the delay introduced at the intermediate nodes. The main advantage of using a rateless code for encoding, instead of a random linear combination of the previously received packets, is to reduce significantly the encoding and decoding complexities. We provide a performance evaluation using simulation to demonstrate the efficiency of our code even under mobility conditions.
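
For context, a sketch of degree sampling in an LT-style rateless code using the ideal soliton distribution as a stand-in; the paper designs a different distribution precisely so that decoding can start from few received packets.

```python
# Degree sampling for an LT-style rateless code (ideal soliton stand-in).
import random

def ideal_soliton(k: int) -> list:
    # P(1) = 1/k and P(d) = 1/(d(d-1)) for d = 2..k; probabilities sum to 1.
    return [0.0, 1.0 / k] + [1.0 / (d * (d - 1)) for d in range(2, k + 1)]

def sample_degree(rho: list) -> int:
    u, acc = random.random(), 0.0
    for d, p in enumerate(rho):
        acc += p
        if u < acc:
            return d
    return len(rho) - 1

rho = ideal_soliton(k=20)
degrees = [sample_degree(rho) for _ in range(10)]
print(degrees)     # degree of each encoded packet (XOR of that many inputs)
```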

Journal ArticleDOI
TL;DR: A new channel identification parameter based on the number of significant paths within the received signal, which achieves similar or better results than other existing methods with lower complexity.
Abstract: We propose a new channel identification parameter based on the number of significant paths within the received signal. The proposed parameter achieves similar or better results than other existing methods, with lower complexity. Moreover, our results show that it is possible to use only two channel identification parameters in joint channel identification techniques, instead of the three used in conventional methods.
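
A minimal sketch of one way to count "significant paths" in a channel impulse response, taking taps within 20 dB of the strongest tap; the threshold is an assumption, not necessarily the paper's definition.

```python
# Count significant paths in a channel impulse response (assumed 20 dB rule).
import numpy as np

def significant_paths(cir: np.ndarray, rel_thresh_db: float = 20.0) -> int:
    power = np.abs(cir) ** 2
    floor = power.max() * 10 ** (-rel_thresh_db / 10)  # relative power floor
    return int(np.count_nonzero(power >= floor))

cir = np.array([0.02, 0.9, 0.4, 0.05, 0.2, 0.01])   # toy tap amplitudes
print(significant_paths(cir))    # -> 3 (taps at 0.9, 0.4, 0.2)
```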