
Showing papers in "International Journal of Wireless Information Networks in 2017"


Journal ArticleDOI
TL;DR: The experiments conducted using the specially designed module show that, depending on the settings used, the energy required to send the same amount of data may differ by up to 200-fold, which calls for efficient selection of the communication mode used by energy-restricted devices.
Abstract: A long lifetime of a wireless sensor/actuator node, low transceiver chip cost and a large coverage area are the main characteristics of low power wide area network (LPWAN) technologies. These targets correlate well with the requirements imposed by the health and wellbeing applications of the digital age. Therefore, LPWANs can find their niche among traditional short-range technologies for wireless body area networks, such as ZigBee, Bluetooth and ultra wideband. To check this hypothesis, in this work we investigate the indoor performance of one of the LPWAN technologies, namely LoRa, by means of empirical measurements. The measurements were conducted using commercially available devices in the main campus of the University of Oulu, Finland. To obtain a comprehensive picture, the experiments were executed for sensor nodes operating with various physical layer settings, i.e., different spreading factors, bandwidths and transmit powers. The obtained results indicate that with the largest spreading factor of 12 and 14 dBm transmit power, the whole campus area (570 m North to South and over 320 m East to West) can be covered by a single base station. The average measured packet delivery success ratio for this case was 96.7%, even with no acknowledgements and retransmissions used. The campus could also be covered with lower spreading factors and 2 dBm transmit power, but considerably more packets were lost; for example, with spreading factor 8, 13.1% of the transmitted packets were lost. Aside from this, we have investigated the power consumption of a LoRa-compliant transceiver with different physical layer settings. The experiments conducted using the specially designed module show that, depending on the settings used, the energy required to send the same amount of data may differ by up to 200-fold. This calls for efficient selection of the communication mode used by energy-restricted devices and emphasizes the importance of enabling adaptive data rate control.
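
As a rough illustration of why the physical-layer settings matter so much, the sketch below applies the standard Semtech SX127x time-on-air formula and an assumed transmit current (120 mA at 3.3 V, not taken from the paper) to compare the energy cost of one 20-byte packet at two spreading-factor/bandwidth combinations; it is a minimal sketch of the effect, not the paper's measurement setup.

```python
import math

def lora_time_on_air(payload_bytes, sf, bw_hz, cr=1, preamble_syms=8,
                     explicit_header=True, low_dr_opt=None):
    """Standard Semtech SX127x time-on-air formula (seconds), CRC enabled."""
    t_sym = (2 ** sf) / bw_hz
    if low_dr_opt is None:                      # usually enabled when T_sym > 16 ms
        low_dr_opt = t_sym > 0.016
    de = 1 if low_dr_opt else 0
    ih = 0 if explicit_header else 1
    num = 8 * payload_bytes - 4 * sf + 28 + 16 - 20 * ih
    payload_syms = 8 + max(math.ceil(num / (4 * (sf - 2 * de))) * (cr + 4), 0)
    return (preamble_syms + 4.25 + payload_syms) * t_sym

# Hypothetical transceiver figures (not from the paper): 120 mA TX current at 3.3 V.
I_TX_A, V_SUPPLY = 0.120, 3.3
for sf, bw in [(7, 250e3), (12, 125e3)]:
    toa = lora_time_on_air(20, sf, bw)
    energy_mj = toa * I_TX_A * V_SUPPLY * 1e3
    print(f"SF{sf}, BW {int(bw/1e3)} kHz: {toa*1e3:.1f} ms on air, ~{energy_mj:.1f} mJ per 20-byte packet")
```

Even before accounting for different transmit powers, the time-on-air alone spans well over an order of magnitude between the fastest and slowest settings.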

102 citations


Journal ArticleDOI
TL;DR: Simulation results show that by optimally modifying the BLE retransmission model, a maximum delay below 46 ms and a packet loss rate on the order of $$10^{-5}$$ can be obtained, enabling BLE to fulfill the requirements of even the most demanding cases within the considered range of applications.
Abstract: In recent years, the integration of wireless sensor networks in industrial environments has greatly increased. With this trend, new fields such as the industrial IoT have arisen, which in turn have opened the doors to new possibilities that are shaping the future of industrial automation. In contrast to regular wireless networks, however, industrial applications of WSNs are time-critical systems with highly stringent requirements that challenge all available technologies. Because of its ultra-low energy properties, compatibility with most mobile units, reduced production costs, robustness and high throughput, Bluetooth low energy (BLE) is a potential candidate for these settings. This article explores the potential of BLE to meet the real-time demands found in the domain of industrial process automation and the industrial IoT. In order to evaluate the suitability of the protocol for these scenarios, the effect of adaptations in the retransmission scheme on the reliability and timeliness performance is thoroughly studied. Three retransmission schemes are evaluated, and simulation results show that by optimally modifying the BLE retransmission model, a maximum delay below 46 ms and a packet loss rate on the order of $$10^{-5}$$ can be obtained, enabling BLE to fulfill the requirements of even the most demanding cases within the considered range of applications.
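
For intuition on how the retransmission scheme drives both metrics, here is a minimal closed-form sketch of a toy model (one attempt per connection event, independent failures); the 10% packet error rate and 7.5 ms connection interval are illustrative assumptions, not values from the article's simulations.

```python
def ble_retx_bounds(per: float, conn_interval_ms: float, max_tx: int):
    """Toy model: one attempt per connection event, independent losses.
    Returns (residual loss probability, worst-case delivery delay in ms)."""
    loss = per ** max_tx                      # packet dropped only if every attempt fails
    worst_delay = max_tx * conn_interval_ms   # last attempt happens max_tx events later
    return loss, worst_delay

for max_tx in (1, 3, 6):
    loss, delay = ble_retx_bounds(per=0.1, conn_interval_ms=7.5, max_tx=max_tx)
    print(f"{max_tx} attempt(s): loss = {loss:.0e}, delay <= {delay:.1f} ms")
```

The sketch makes the trade-off visible: allowing more retransmission attempts drives the residual loss down exponentially while the worst-case delay grows only linearly with the connection interval.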

55 citations


Journal ArticleDOI
TL;DR: Educational and literacy-related causes of the impracticality of everyday healthcare using wearable ICT devices in Japan are examined, and the importance of real-time vital-sign monitoring for schoolchildren in classroom learning and physical training, i.e., persons during exercise, is emphasized.
Abstract: Although a variety of wearable Information and Communication Technology (ICT) devices, easily connectable to smartphones, are available around us, unfortunately very few people practice everyday healthcare using them. There must be causes for this. This paper examines educational and literacy-related causes of the impracticality of everyday healthcare using wearable ICT devices, which may be inherent to Japan, and emphasizes the importance of real-time vital-sign monitoring for schoolchildren in classroom learning and physical training, i.e., persons during exercise. The paper then points out technical problems in its realization in terms of vital sensing and wireless networking, and introduces some solutions which we have been developing to date. Finally, the paper presents some future challenges towards the realization of real-time vital-sign monitoring for schoolchildren during physical training, including the possibility of wireless multi-hop networking that takes the mobility and location of vital sensor nodes into consideration.

26 citations


Journal ArticleDOI
TL;DR: A biologically inspired scheme of collaborative mobile sensing, designed so that coverage, energy efficiency and high network availability are maintained, is presented, together with models that allow mobile sinks to move in a self-organized and self-adaptive way.
Abstract: Future generations of radio-based networks promise new opportunities for collaborative low-power sensing schemes in wireless sensor networks. Due to the hostile and inaccessible environments in which sensors are deployed, collecting and transferring data in such networks is not an easy task. Data gathering can be made more effective by introducing unmanned aerial vehicles called drones, which act as mobile sinks and can autonomously fly over the network with the primary goal of collecting data from sensors. This paper presents a biologically inspired scheme of collaborative mobile sensing. The proposal is designed so that coverage, energy efficiency and high network availability are maintained. The social foraging behaviors of Escherichia coli bacteria, as modeled in bacterial foraging optimization, are used to achieve these goals, especially the chemotaxis and swarming features that govern bacterial movement. After a description, a formalization of the problem of mobile sensing is presented. Then, models that allow mobile sinks to move in a self-organized and self-adaptive way are proposed. In order to highlight the impact of mobility on energy consumption, delay, network coverage and the amount of successfully delivered data, intensive experiments have been conducted. Results demonstrate the effectiveness of the approach.
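
The chemotaxis behaviour borrowed from bacterial foraging optimization can be sketched in a few lines: tumble to a random direction, then keep swimming in that direction while the fitness improves. The sketch below is a generic toy version (2-D positions, a made-up fitness rewarding proximity to a dense sensor cluster), not the paper's full model with swarming and energy terms.

```python
import numpy as np

def chemotaxis_step(pos, fitness, step_size, n_swim=4, rng=None):
    """One BFO-style move: tumble to a random unit direction, then swim in that
    direction while the fitness keeps improving (maximization)."""
    rng = rng or np.random.default_rng()
    direction = rng.normal(size=pos.shape)
    direction /= np.linalg.norm(direction)
    best, best_fit = pos, fitness(pos)
    for _ in range(n_swim):
        cand = best + step_size * direction
        if fitness(cand) <= best_fit:        # no improvement: stop swimming
            break
        best, best_fit = cand, fitness(cand)
    return best

# Toy usage: a mobile sink drifts toward a dense sensor cluster at (40, 25).
density = lambda p: -float(np.sum((p - np.array([40.0, 25.0])) ** 2))
pos = np.array([0.0, 0.0])
for _ in range(300):
    pos = chemotaxis_step(pos, density, step_size=1.0)
print(np.round(pos, 1))                      # ends up near the cluster centre
```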

25 citations


Journal ArticleDOI
TL;DR: The results reveal that the possibility of performing only one retransmission can significantly reduce the radio resources required for data delivery compared to performing a single transmission round.
Abstract: Ultra-reliable low-latency communications (URLLC) is a new feature to be considered for fifth generation (5G) cellular systems. This feature is essential for the support of envisioned mission-critical applications, particularly in the realm of machine-type communications. These applications require that the messages, which are generally short packets, be exchanged between a source and a destination with a high level of reliability and within a short period of time. The characteristics of URLLC do not fit directly into conventional communication models. For instance, most existing communication models are developed considering moderate levels of reliability, neglecting the small effects of feedback errors. However, even such small errors cannot be ignored for URLLC. This paper proposes a communication model for URLLC considering the reliabilities of both the data and control channels. Then, optimal and sub-optimal resource allocations are derived. We show that the proposed sub-optimal resource allocations have lower computational complexity with negligible performance degradation compared to the optimal solutions. The results reveal that the possibility of performing only one retransmission can significantly reduce the radio resources required for data delivery compared to performing a single transmission round.
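
To make the retransmission trade-off concrete, the sketch below uses the well-known normal (finite-blocklength) approximation for the AWGN channel to compare the channel uses needed to deliver a short packet in one shot at the target error rate versus splitting the reliability budget over an initial transmission plus one retransmission. The SNR, packet size and target below are illustrative, and feedback-channel errors, which the paper does model, are ignored here.

```python
import math
from statistics import NormalDist

def blocklength(k_bits, snr, eps):
    """Channel uses needed to send k_bits with error prob eps over AWGN
    (normal approximation: R ~ C - sqrt(V/n) * Q^{-1}(eps))."""
    C = math.log2(1 + snr)
    V = (snr * (snr + 2) / (2 * (snr + 1) ** 2)) * math.log2(math.e) ** 2
    q = -NormalDist().inv_cdf(eps)            # Q^{-1}(eps)
    x = (q * math.sqrt(V) + math.sqrt(q * q * V + 4 * C * k_bits)) / (2 * C)
    return math.ceil(x * x)

k_bits, snr, target = 256, 1.0, 1e-5          # illustrative values (0 dB SNR)
n_single = blocklength(k_bits, snr, target)   # meet the target in one shot
eps_shot = math.sqrt(target)                  # split reliability over two rounds
n_harq = blocklength(k_bits, snr, eps_shot)
expected = n_harq * (1 + eps_shot)            # retransmission is rarely needed
print(f"one shot: {n_single} channel uses, with one retransmission: ~{expected:.0f} on average")
```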

21 citations


Journal ArticleDOI
TL;DR: This paper studies wireless biomedical capsule (WBC) tracking for magnetic sensing and actuation settings, where a permanent magnet embedded inside a passive WBC produces a magnetic field that is measured by magnetic sensors outside the human body.
Abstract: This paper studies wireless biomedical capsule (WBC) tracking for magnetic sensing and actuation settings, where a permanent magnet embedded inside a passive WBC produces a magnetic field that is measured by magnetic sensors outside the human body. First, a 2D WBC localization scheme is developed based only on magnetic sensing and adaptive recursive least squares (RLS) on-line parameter estimation with a forgetting factor. Next, we propose a hybrid localization technique for simultaneous position and orientation estimation with high accuracy. The proposed hybrid localization technique is based on data fusion of the magnetic measurements and the electromagnetic signals emitted by the WBC for image transmission and other medical information, using a similar adaptive RLS parameter estimation scheme. The proposed localization techniques are then integrated with an adaptive tracking law to construct an adaptive capsule tracking controller. The simulations demonstrate promising results for the efficiency and accuracy of the proposed adaptive localization and tracking control schemes.
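
The adaptive RLS estimator with a forgetting factor that underlies both localization schemes follows the standard textbook recursion; below is a generic sketch (linear regression form, not the paper's magnetic-dipole measurement model), with the forgetting factor and initial covariance chosen arbitrarily for illustration.

```python
import numpy as np

def rls_step(theta, P, phi, y, lam=0.98):
    """One recursive-least-squares update with forgetting factor lam.
    theta: parameter estimate, P: covariance, phi: regressor, y: scalar measurement."""
    phi = phi.reshape(-1, 1)
    k = P @ phi / (lam + phi.T @ P @ phi)            # gain vector
    theta = theta + (k * (y - phi.T @ theta)).ravel()
    P = (P - k @ phi.T @ P) / lam
    return theta, P

# Toy usage: track a 3-parameter linear model from noisy streaming data.
rng = np.random.default_rng(0)
true_theta = np.array([0.5, -1.2, 2.0])
theta, P = np.zeros(3), np.eye(3) * 1e3
for _ in range(500):
    phi = rng.normal(size=3)
    y = phi @ true_theta + 0.01 * rng.normal()
    theta, P = rls_step(theta, P, phi, y)
print(theta)        # converges close to true_theta
```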

17 citations


Journal ArticleDOI
TL;DR: Honeypot IDS is proposed for the detection and prevention of Rogue Access Points by detecting attacks performed by internal and external malicious users, in order to reduce the false-alarm rate generated by existing IDSs.
Abstract: Wireless network security is becoming a great challenge as the popularity of wireless networks soars. On account of its open medium, careless software implementation, potential hardware deficiencies, and improper configuration, a Wi-Fi network is vulnerable to Rogue Access Points (RAPs). A Rogue Access Point is an unauthorized access point that can be installed by end users without the knowledge of the security administrator. When this rogue device is connected to the Internet, it can be used by an assailant to breach the security of the network. Existing RAP detection techniques have limited capabilities and are not able to detect all variants of attacker activities. In this paper, a method named Honeypot Intrusion Detection System (Honeypot IDS) is proposed for the detection and prevention of Rogue Access Points by detecting attacks performed by internal and external malicious users. Honeypot IDS combines an intrusion detection system and a honeypot to reduce the false-alarm rate generated by existing IDSs. The proposed approach consists of three phases: filtering, intrusion detection, and honeypot. Traffic that passes the filtering and intrusion detection phases is rerouted to the honeypot for in-depth investigation. The proposed architecture improves the overall performance of the system by diminishing the false-alarm rate generated by the intrusion detection system, and it is able to sustain the overall workload of the honeypot.

17 citations


Journal ArticleDOI
Masahiko Shimizu
TL;DR: It is shown that the channel capacity of the interleaved configuration with inter-subarray coding is larger than that of the localized configuration, suggesting that the interleaved configuration is suitable for millimeter-wave beam multiplexing.
Abstract: A millimeter-wave beam multiplexing method using subarray-type interleaved-configuration hybrid beamforming with inter-subarray coding is proposed. By multiplexing adequately chosen directional beams, the proposed method can reduce inter-beam interference and create multiple beams with the theoretical maximum gain that an array antenna can generate. A performance comparison of feasible subarray-type beamforming configurations shows that the channel capacity of the interleaved configuration with inter-subarray coding is larger than that of the localized configuration. The interleaved configuration is particularly effective in user-dense environments. We therefore consider the interleaved configuration suitable for millimeter-wave beam multiplexing.

15 citations


Journal ArticleDOI
TL;DR: A vector network analyzer based channel sounding system capable of performing measurements in the range from 2 to 50 GHz is presented, and it is found that the system works as expected.
Abstract: The aim of this work is to present a vector network analyzer based channel sounding system capable of performing measurements in the range from 2 to 50 GHz. Further, this paper describes an indoor measurement campaign performed at 26–30 GHz. The sounding system is capable of receiving two channels and transmitting one. Using this feature, a channel measurement has been performed using both a directional horn antenna and a virtual uniform circular array (UCA) at the same time. This allows for comparative studies of channels measured simultaneously with two different antennas. The measurement was conducted with 42 measurement positions distributed along a 10 m long path through an indoor laboratory environment. The transmitter was positioned such that measurements were conducted in both line-of-sight and non-line-of-sight scenarios. The measurements showed good agreement between the data collected with the horn antenna and the data collected with the UCA. The propagation environment was found to be sparse in both the delay and angular domains for the given scenario. Based on the performed measurement campaign, together with validation measurements of the system stability, it is found that the system works as expected.

15 citations


Journal ArticleDOI
TL;DR: 3GPP (Third Generation Partnership Project), the body responsible for standardizing cellular systems, has specified both Narrowband IoT (NB-IoT) and enhanced Machine-Type Communications (eMTC) in LTE Release 13; both have an approximately 20 dB better link budget than LTE and a modem complexity reduced to about 10% of that of LTE.
Abstract: The Internet of Things (IoT) is expected to dramatically increase the number of connected devices. Multiple forecasts estimate that the number of IoT devices will go beyond 100 billion; the only question is when exactly this will take place. However, there is consensus that it will happen. Over roughly the last three decades, the Internet has had a significant impact on our society, and during the last decade Internet usage has been dramatically boosted by the availability of powerful smartphones and fast connectivity using Wi-Fi and cellular systems. IoT is expected to become the next big leap of the Internet, where almost anything can be connected. For upcoming 5G systems, the requirements aim to support 1,000,000 devices per square kilometer. M2M communications is seen as the nervous system of the Internet of Things (IoT). In the past, M2M communications was typically realized using wired communication in order to achieve high reliability. Additionally, the power consumption of the devices was so high that they required an external power supply. With the evolution of wireless communication technologies and the further evolution of sensor and actuator technologies, the power consumption and cost of wireless machine-type communications have been reduced significantly, and it is expected that this trend will continue during forthcoming years. Applications for M2M communications can be divided into two main categories, massive and mission-critical M2M communications, depending on their requirements. By massive M2M communications we mean services that typically span a very large number of devices that are usually equipped with sensors or actuators. The amount of data generated by these devices and sensors is normally very small, and very low latency is not required. In mission-critical M2M communications, on the other hand, very high reliability and availability as well as very low latency are required. Examples of such systems are traffic safety and control, control of critical infrastructure, and wireless connectivity for industrial processes. These systems require a different type of communications, known as Ultra-Reliable and Low-Latency Communications (URLLC). Wireless has several obvious advantages over wired: ease and reduced cost of installation, higher flexibility, and the support of mobility, to mention a few. M2M communications is often divided into local area and wide area technologies. Local area technologies provide access from a few meters up to hundreds of meters, whereas wide area technologies provide a link budget allowing connectivity distances up to tens of kilometers. The GSM system has been the most deployed wide area communication system used for M2M communications. However, many operators have recently announced plans to decommission their GSM systems. This calls for new cellular wide-area M2M connectivity solutions, which can be either standalone or fully embedded into already deployed 4G/LTE networks by means of software upgrades. 3GPP (Third Generation Partnership Project), the body responsible for standardizing cellular systems, has specified both Narrowband IoT (NB-IoT) and enhanced Machine-Type Communications (eMTC) in LTE Release 13, which both have an approximately 20 dB better link budget than LTE and a modem complexity reduced to about 10% of that of LTE. They are also considered low power technologies.

15 citations


Journal ArticleDOI
TL;DR: This paper proposes a mobile agent routing protocol, called zone-based mobile agent aggregation, which utilises a bottom-up mobile agent migration scheme in which the mobile agents start their journeys from the centre of the event regions towards the sink, aiming to reduce the mobile agent (MA) itinerary cost and delay and to increase data aggregation routing accuracy.
Abstract: Mobile agent data aggregation routing forwards mobile agents in a wireless sensor network to collect and aggregate data. The key objective of data aggregation routing is to maximise the number of collected data samples while minimising network resource consumption and data collection delay. This paper proposes a mobile agent routing protocol, called zone-based mobile agent aggregation. This protocol utilises a bottom-up mobile agent migration scheme in which the mobile agents start their journeys from the centre of the event regions towards the sink, aiming to reduce the mobile agent (MA) itinerary cost and delay and to increase data aggregation routing accuracy. In addition, the proposed protocol reduces the impact of the network architecture, the event source distribution model and/or data heterogeneity on the performance of data aggregation routing.

Journal ArticleDOI
TL;DR: The performance of hybrid localization based on radio-frequency (RF) and inertial measurement unit (IMU) measurements for a single wireless capsule endoscopy (WCE) device traveling through the gastrointestinal tract is analyzed, and a posterior Cramér–Rao bound (PCRB) is derived.
Abstract: In this paper, the performance of hybrid localization based on radio-frequency (RF) and inertial measurement unit (IMU) measurements for a single wireless capsule endoscopy (WCE) device traveling through the gastrointestinal tract is analyzed. Specifically, multiple body-mounted sensors are considered, located on the front and back of a patient's medical jacket and forming uniform rectangular arrays (URAs). With the aim of locating the WCE, two types of RF measurements, namely time-of-arrival (TOA) and direction-of-arrival (DOA), are estimated from the signals transmitted by the WCE and received at the URAs, and are integrated with the IMU acceleration measurements via a standard extended Kalman filter. A posterior Cramér–Rao bound (PCRB) of the proposed TOA/DOA and IMU-based hybrid localization is derived as a fundamental limit on the squared position error, where the accuracies of the TOA and DOA measurements are captured by means of the CRB to account for their dependency on the environmental parameters, while the accuracies of the IMU measurements are described by the standard deviation of the acceleration measurement error. Numerical results are provided, supported by simulations, which verify the millimeter-level accuracy of the TOA/DOA and IMU-based hybrid localization within the regulations of medical implant communication services and the exactness of the PCRB.
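
The fusion step can be illustrated with a heavily simplified one-dimensional Kalman filter in which the IMU acceleration drives the prediction and a TOA-derived range corrects the position; the paper's actual filter is an extended Kalman filter over 3-D position and orientation with both TOA and DOA measurements, so the snippet below is only a structural sketch with made-up noise parameters.

```python
import numpy as np

def predict(x, P, a_meas, dt, q_accel):
    """Prediction with IMU acceleration as control input (1-D constant-acceleration model)."""
    F = np.array([[1.0, dt], [0.0, 1.0]])
    B = np.array([0.5 * dt ** 2, dt])
    x = F @ x + B * a_meas
    P = F @ P @ F.T + q_accel * np.outer(B, B)
    return x, P

def update_range(x, P, r_meas, r_var):
    """Correction with a TOA-derived range measurement of the position state."""
    H = np.array([[1.0, 0.0]])
    S = H @ P @ H.T + r_var
    K = P @ H.T / S                      # Kalman gain, shape (2, 1)
    x = x + (K * (r_meas - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Toy usage: state = [position (m), velocity (m/s)], 100 Hz IMU, occasional ranging.
x, P = np.zeros(2), np.eye(2)
x, P = predict(x, P, a_meas=0.02, dt=0.01, q_accel=1e-3)
x, P = update_range(x, P, r_meas=0.001, r_var=1e-4)
```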

Journal ArticleDOI
TL;DR: Simulation results show that the proposed optimized mechanism decreases end-to-end delay and improves lifetime compared with conventional mechanisms.
Abstract: Wireless sensor networks (WSNs) are known to be highly energy-constrained, and consequently lifetime is a critical metric in their design and implementation. Range assignment, i.e., adjusting the transmission powers of nodes, creates an energy-efficient topology for such networks while preserving other network properties; however, it may affect the performance of other techniques such as network coding. This paper addresses the problem of lifetime optimization for WSNs where the network employs both range assignment and network-coding-based multicast. We formulate the problem and then reformulate it as a convex optimization, which offers numerous theoretical and conceptual advantages. The proposed formulation leads to efficient, distributed algorithms for solving the problem. Simulation results show that the proposed optimized mechanism decreases end-to-end delay and improves lifetime compared with conventional mechanisms.

Journal ArticleDOI
TL;DR: This Special Issue deals with a diverse set of topics under the theme of personalized healthcare; its first paper theoretically investigates an optical wireless activity monitoring system's performance in terms of packet failure in a specific environment.
Abstract: The use of wireless body area networks (WBANs) has been seen as a modern way to monitor a person's health-related parameters remotely and seamlessly. A WBAN utilizes energy-efficient sensor nodes distributed around the human body and a coordinator node that controls the personal network's operation. WBANs utilize short-range radio technologies to convey vital-sign information from the source to the final destination, which may be an electronic health record or just a central hub that can track and analyze the measured parameters on-site. The reasoning adopted by health service providers is to decrease the caregivers' workload while allowing freedom and mobility for the patients or other persons who need to be monitored. All of this aims to improve patients' quality of life and also to decrease healthcare costs. Currently, technologies such as the dominant Bluetooth, and especially Bluetooth Low Energy, but also ZigBee, IEEE 802.15.6, etc., are available for commercial WBAN implementations. In Europe, the European Telecommunications Standards Institute (ETSI) is also developing its own standard for smart body area networks under the Technical Committee SmartBAN. Despite the existing standards suitable for WBAN use, research is still widely ongoing, and forecasts expect even wider deployment of wearable technology in healthcare and welfare related applications than has been realized so far. However, the WBAN is only one part of the whole concept forming an effective and dependable chain to deliver health-related information. The medical information and communication technology (ICT) concept as a whole consists of backbone systems, data collection and analysis, safety- and secrecy-related issues, various kinds of detectors, energy-efficient electronics, and so on. In addition, the data transmission path is typically heterogeneous, which means that information passes through various radio and wired connections. Moreover, different medical instruments can produce incompatible information, which needs to be collected, merged and analyzed jointly with other related information. As can be seen, medical ICT is a multidisciplinary research area that brings together communications, electronics, data and signal processing, medicine and other disciplines. This Special Issue deals with a diverse set of topics under the theme of personalized healthcare. The papers included in this Special Issue are extended versions of papers originally presented at the 10th International Symposium on Medical Information and Communication Technology (ISMICT) in 2016 at Worcester Polytechnic Institute, Worcester, MA, USA. Seven papers were selected for this Issue. The first paper, "Theoretical and Experimental Approach for the Design of an Optical Wireless Physical Activity Monitoring System" by Clément Le Bas et al., theoretically investigates an optical wireless activity monitoring system's performance in terms of packet failure in a specific environment. The paper studies, in particular, the impact of the optical source directivity, the emitted optical power, the position of the motion sensor device on the body, and the number of optical receivers fixed on the room ceiling.

Journal ArticleDOI
TL;DR: IEEE 802.11ah, Bluetooth Low Energy and IEEE 802.15.4 are selected for performance comparison in a home automation scenario, and BLE is recognized as the most suitable candidate technology for the home automation domain.
Abstract: Understanding the main short-range wireless technologies operating in unlicensed bands is an important step before deploying Internet of Things (IoT) applications. In this study, we have selected IEEE 802.11ah, Bluetooth Low Energy and IEEE 802.15.4 for performance comparison in terms of delay, service ratio, traffic loss, activity factor and battery lifetime in a home automation scenario. Also, a low-overhead protocol stack suitable for IoT applications is considered. The analysis takes into account heterogeneous devices and different traffic loads. The results show advantages and disadvantages with respect to the different performance indicators, with generally satisfactory behavior for the three technologies. Among them, BLE is recognized as the most suitable candidate technology for the home automation domain.

Journal ArticleDOI
TL;DR: This paper addresses the single-point-of-failure (SPF) problem of a ZigBee-based wireless sensor network by using multiple coordinators with different personal area network identifiers (PAN IDs), proposing a solution in which members of a network switch from one coordinator to another in case of failure by changing their respective PAN ID.
Abstract: Reliability and precise timestamping of events that occur are two of the most important requirements for mission-critical wireless sensor networks. Accurate timestamping is obtained by synchronizing the nodes to each other, while reliability can be obtained by eliminating single points of failure (SPF). In this paper, we address the SPF problem of a ZigBee-based wireless sensor network by means of using multiple coordinators with different personal area network identifiers (PAN IDs). We propose a solution where members of a network switch from one coordinator to another in case of failure by changing their respective PAN ID. We verify experimentally that our solution provides gains in terms of recovery speed and, therefore, synchronization accuracy with respect to a solution proposed in the literature.

Journal ArticleDOI
TL;DR: An optical wireless communicating accelerometer-based system for physical activity level monitoring is proposed, and it is shown that there exists an optimal value of the transmitter half-power angle that ensures the optical wireless performance, whatever the coverage and the height of the device above the body.
Abstract: In this paper, we propose an optical wireless communicating accelerometer-based system for physical activity level monitoring. Optical wireless technology has the advantages of being immune to electromagnetic interference, low-cost and easy to deploy. Considering a mobile patient, we theoretically investigate the performance in terms of packet failure in a specific environment. We study in particular the impact of the optical source directivity, the emitted optical power, the position of the motion sensor device on the body, and the number of optical receivers fixed on the room ceiling. The theoretical results are compared with experimental measurements for several configurations using a custom-made wearable communicating device. This permits validating the efficiency of optical wireless technology for physical activity monitoring and showing that there exists an optimal value of the transmitter half-power angle that ensures the optical wireless performance, whatever the coverage and the height of the device above the body. Finally, we illustrate the trade-off between the emitted power and the number of active receivers.
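
The role of the transmitter half-power angle can be seen directly from the textbook Lambertian line-of-sight channel gain used in optical wireless link budgets; the sketch below evaluates it for two hypothetical half-power angles with an assumed 1 cm² photodiode, 60° field of view and a 2.5 m link, values chosen only for illustration and not taken from the paper.

```python
import math

def lambertian_los_gain(half_power_deg, detector_area_m2, d_m, phi_deg, psi_deg, fov_deg=60.0):
    """Line-of-sight DC gain of a Lambertian source; phi = irradiance angle at the
    emitter, psi = incidence angle at the detector (no concentrator/filter gain)."""
    if psi_deg > fov_deg:
        return 0.0
    m = -math.log(2) / math.log(math.cos(math.radians(half_power_deg)))  # Lambertian order
    phi, psi = math.radians(phi_deg), math.radians(psi_deg)
    return (m + 1) * detector_area_m2 / (2 * math.pi * d_m ** 2) * math.cos(phi) ** m * math.cos(psi)

# Received power for 10 mW emitted, 1 cm^2 photodiode, 2.5 m ceiling-to-body link.
for hpa in (20.0, 60.0):
    h = lambertian_los_gain(hpa, 1e-4, 2.5, phi_deg=15.0, psi_deg=15.0)
    print(f"half-power angle {hpa:.0f} deg -> received power {10e-3 * h:.2e} W")
```

A narrow half-power angle concentrates the power towards the receiver but makes the link more sensitive to misalignment, which is exactly the trade-off behind the optimal angle reported in the paper.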

Journal ArticleDOI
TL;DR: The presented results reveal that the communication settings need to be accounted for when determining the time of flight using UWB, and show that the ranging accuracy is strongly affected by the channel used, the data rate and the pulse repetition frequency.
Abstract: Ultra wideband (UWB) radio signals are known for their good time resolution, enabling the implementation of accurate localization and tracking. The recent mass-market appearance of commercial UWB transceivers has boosted interest in this technology and facilitated its use not just for research but also for business. In this paper we focus on the problem of UWB-based wireless indoor localization of machines and humans by means of IEEE 802.15.4-2015 high rate pulse repetition UWB technology, and specifically on the accuracy of such localization. Namely, we report the results of an extensive experimental study revealing the effect of various communication settings on the accuracy of indoor localization for the previously proposed asymmetric localization protocol. The conducted experiments lasted over 200 hours almost nonstop and involved the transmission of more than 30 million ranging packets. In the experiments we tested over 200 different modes and explored the effect of seven different parameters on the UWB ranging performance. The presented results reveal that the communication settings need to be accounted for when determining the time of flight using UWB. We also show that the ranging accuracy is strongly affected by the channel used, the data rate and the pulse repetition frequency. Finally, we note that the increase of the UWB transceiver's temperature due to self-heating has a strong effect on the localization results.
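
For context on how a time of flight is typically obtained from such ranging packet exchanges, the snippet below implements the standard asymmetric double-sided two-way ranging estimate, which tolerates unequal reply delays and clock drift to first order; this is a generic formula shown for illustration and is not claimed to be the exact protocol evaluated in the paper.

```python
C_AIR_M_PER_S = 299_702_547.0     # speed of light in air

def ds_twr_tof(t_round1, t_reply1, t_round2, t_reply2):
    """Asymmetric double-sided two-way ranging time-of-flight estimate (seconds).
    t_round1 = 2*ToF + t_reply1 is measured at the initiator,
    t_round2 = 2*ToF + t_reply2 at the responder."""
    return ((t_round1 * t_round2 - t_reply1 * t_reply2) /
            (t_round1 + t_round2 + t_reply1 + t_reply2))

# Toy check: true one-way flight time of 20 ns (about 6 m), unequal reply delays.
tof, r1, r2 = 20e-9, 300e-6, 450e-6
print(f"{C_AIR_M_PER_S * ds_twr_tof(2 * tof + r1, r1, 2 * tof + r2, r2):.3f} m")
```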

Journal ArticleDOI
TL;DR: A clustering algorithm based on the fuzzy comprehensive evaluation method (CAFCE) is proposed to achieve both energy efficiency and QoS performance simultaneously.
Abstract: Energy efficiency and quality of service (QoS) are both required in practical applications of wireless sensor networks. Clustering is one of the main methods to achieve performance tradeoffs. In a clustering algorithm, the selection of cluster heads is based on certain criteria, which provides the opportunity to implement performance tradeoffs. In this paper, a clustering algorithm based on the fuzzy comprehensive evaluation method (CAFCE) is proposed to achieve both energy efficiency and QoS performance. Residual energy, location, number of neighbors, queue length and channel quality are taken into account as the factors that can influence the selection of cluster heads. The fuzzy comprehensive evaluation method is adopted to select appropriate cluster heads in order to achieve energy efficiency and QoS performance simultaneously. Each node obtains its own evaluation value. This value is then mapped onto the time axis, and a time-trigger mechanism enables the node to broadcast a cluster-head advertisement in the corresponding time slice. The rule of minimum transmission power is adopted to form the clusters. Simulation results show that the CAFCE algorithm achieves a longer lifetime and better QoS performance than other algorithms.
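
A stripped-down version of the evaluation-and-timer idea can be sketched as follows: each candidate combines its normalized factors through a weighted fuzzy evaluation, and the resulting value is mapped to a back-off slot so that better candidates advertise earlier. The membership functions and weights below are invented for illustration; they are not the ones defined in the paper.

```python
import numpy as np

def candidate_score(residual_energy, dist_to_bs, n_neighbors, queue_len, channel_q,
                    weights=(0.35, 0.2, 0.2, 0.1, 0.15)):
    """Toy fuzzy-comprehensive-evaluation score in [0, 1] for a cluster-head
    candidate; all inputs are assumed normalized to [0, 1], higher score is better."""
    memberships = np.array([
        residual_energy,            # more residual energy is better
        1.0 - dist_to_bs,           # closer to the base station is better
        n_neighbors,                # normalized neighbor count
        1.0 - queue_len,            # shorter queue is better
        channel_q,                  # better channel quality is better
    ])
    return float(np.dot(weights, memberships))

def advertisement_slot(score, round_duration_s=1.0):
    """Map the evaluation value onto the time axis: better candidates announce earlier."""
    return (1.0 - score) * round_duration_s

s = candidate_score(0.9, 0.3, 0.6, 0.2, 0.8)
print(f"score = {s:.2f}, CH advertisement after {advertisement_slot(s)*1e3:.0f} ms")
```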

Journal ArticleDOI
TL;DR: The experimental results suggest that DMCC balances the load among different clusters and reduces the energy consumption, which improves the network lifetime.
Abstract: Wireless sensor networks (WSNs) need simple and effective approaches to reduce energy consumption because of their limited energy. Clustering nodes is an effective approach to making WSNs energy-efficient. In this paper we propose a distributed multi-competitive clustering approach (DMCC) for WSNs. First, the nodes with high residual energy are selected to act as cluster head candidates (CHCs). Second, cluster heads (CHs) are selected from the CHCs through a hybrid competition. If the distances to the already selected CHs are suitable, a CHC with more neighbor nodes and a smaller average distance to its neighbor nodes is more likely to become a CH. If the number of CHs selected from the CHCs is insufficient, more CHs are selected from the non-CHCs continually according to residual energy until the number of CHs is suitable. DMCC keeps the number of CHs stable and distributes the CHs evenly. Simulation experiments were performed to compare DMCC with related clustering approaches. The experimental results suggest that DMCC balances the load among different clusters and reduces the energy consumption, which improves the network lifetime.

Journal ArticleDOI
TL;DR: This paper derives and exploits two theoretical expressions of the outage probability in a UNB-based IoT network, accounting for both interference due to the spectral randomness and path loss due to the propagation, which enables the network capacity to be estimated as a function of the path-loss exponent.
Abstract: Thanks to its low energy consumption and very long range (up to 50 km in free space), ultra-narrow-band (UNB) transmission represents a promising alternative to the classical technologies used in cellular networks to serve low-throughput wireless sensor networks and the Internet of Things (IoT). In UNB, nodes access the medium by selecting their frequency in a random and continuous way. This randomness leads to new interference behavior, which has not been theoretically analyzed when considering the path loss of nodes randomly deployed around the receiver. In this paper, in order to quantify the system performance, we derive and exploit two theoretical expressions of the outage probability in a UNB-based IoT network, accounting for both interference due to the spectral randomness and path loss due to the propagation (with and without Rayleigh fading). This enables us to estimate the network capacity as a function of the path-loss exponent, by determining the maximum number of simultaneously supported nodes. We highlight that the bandwidth should be chosen based on the propagation channel properties.
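
A crude Monte Carlo version of the interference model helps to see the two ingredients at work (random continuous frequencies and random node positions with path loss); the rectangular interference-rejection mask, 192 kHz band, 100 Hz signal bandwidth and other numbers below are illustrative assumptions and do not reproduce the paper's closed-form expressions or its Rayleigh-fading case.

```python
import numpy as np

def unb_outage_montecarlo(n_nodes, band_hz, signal_bw_hz, r0, cell_radius, alpha,
                          sir_th_db=8.0, trials=20_000, seed=0):
    """Monte Carlo outage probability for a UNB uplink with random continuous
    frequency access; an interferer counts only if its carrier falls within the
    desired signal bandwidth (a crude rectangular rejection mask)."""
    rng = np.random.default_rng(seed)
    sir_th = 10 ** (sir_th_db / 10)
    outages = 0
    for _ in range(trials):
        r = cell_radius * np.sqrt(rng.random(n_nodes))          # uniform in a disk
        df = np.abs(rng.uniform(0, band_hz, n_nodes) - rng.uniform(0, band_hz))
        interference = np.sum(r[df < signal_bw_hz] ** (-alpha))  # colliding carriers only
        if (r0 ** -alpha) / max(interference, 1e-30) < sir_th:
            outages += 1
    return outages / trials

p = unb_outage_montecarlo(n_nodes=500, band_hz=192e3, signal_bw_hz=100,
                          r0=2_000, cell_radius=5_000, alpha=3.0)
print(f"outage probability ~ {p:.3f}")
```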

Journal ArticleDOI
TL;DR: Capacitive coupling with the floor is often included in transmission models of human body communication (HBC); the results show that the contribution of a ground loop through the floor is small.
Abstract: Human body communication (HBC) is a wireless transmission method that utilizes the human body as part of the transmission medium. A signal is transmitted by a weak electric current through the human body and by capacitive coupling between the transmitter, receiver, human body, and floor. Capacitive coupling with the floor is often included in transmission models of HBC; however, its contribution is not well understood. This paper evaluates the contribution of the ground loop through the floor in HBC. The received signal strength was measured for two cases: two subjects shaking hands, and a subject touching an off-body receiver placed on a stand. The subjects each wore a transmitter or a receiver on their wrist. They stood on a carpet-covered metal floor, a concrete floor, a hardwood floor, and on a wooden chair to be raised above the floor. The variation of the signal attenuation was approximately 40 dB depending on which hand the subject used to shake hands or to touch the off-body receiver, while the variation caused by the different floor types was less than 5 dB. The attenuation obtained by numerical simulation showed similar results. These results show that the contribution of a ground loop through the floor was small.

Journal ArticleDOI
TL;DR: This paper addresses QoS provisioning approaches for CRN components and provides an up-to-date comprehensive survey of recent improvements in these approaches.
Abstract: Much interest in Cognitive Radio Networks (CRNs) has been raised recently by enabling unlicensed (secondary) users to utilize the unused portions of the licensed spectrum. CRN utilization of the residual spectrum bands of Primary (licensed) Networks (PNs) must avoid harmful interference to the users of the PNs and of other overlapping CRNs. The coexistence of CRNs depends on four components: Spectrum Sensing, Spectrum Decision, Spectrum Sharing, and Spectrum Mobility. Various approaches have been proposed to improve Quality of Service (QoS) provisioning in CRNs under fluctuating spectrum availability. However, CRN implementation poses many technical challenges due to the sporadic usage of licensed spectrum bands, which will increase after CRNs are deployed. Unlike traditional surveys of CRNs, this paper addresses the QoS provisioning approaches for CRN components and provides an up-to-date comprehensive survey of recent improvements in these approaches. Major features of the open research challenges of each approach are investigated. Due to the extensive nature of the topic, this paper is the first part of the survey and investigates QoS approaches for the spectrum sensing and spectrum decision components, respectively. The approaches for the spectrum sharing and spectrum mobility components will be investigated in the next part.

Journal ArticleDOI
TL;DR: This paper proposes a fuzzy-based target tracking algorithm (CTFTT), which constructs a convoy tree around the target and dynamically moves the tree along with the target by adding new nodes into the tree and removing old nodes from it.
Abstract: One important application area of wireless sensor networks (WSNs) is tracking a mobile target. When a target enters a monitoring region and moves around in it, the deployed WSN is used to collect information about the target and send it to a nearby base station. In this paper, we propose a fuzzy-based target tracking algorithm (CTFTT). The algorithm constructs a convoy tree around the target and dynamically moves the tree along with the target by adding new nodes into the tree and removing old nodes from it. The expansion, contraction and reconfiguration of the tree are done using a fuzzy-based sensing model. Important advantages are that (1) the convoy tree provides 100% coverage, and (2) the fuzzy mechanism helps to localize events such as tree expansion, contraction and reconfiguration. This in turn helps to reduce the energy consumption in the network. Localized events also reduce communication overhead. Thus CTFTT is able to support the tracking of even fast-moving objects. Extensive simulations show that our algorithm performs better than existing tree-based algorithms in terms of coverage and energy.

Journal ArticleDOI
TL;DR: It has been proved that from UHF up to C band, a significant increase in system spectral efficiency can be reached through various techniques, such as Coordinated Multi-Point (CoMP), Massive Multiple-Input Multiple-Output (MIMO), and interference management and cancellation, but the resulting performance will not meet the full expectations of the IMT-2020 and 5G-PPP requirements.
Abstract: It has been proved that from UHF up to C band, a significant increase in system spectral efficiency can be reached through various techniques, such as Coordinated Multi-Point (CoMP), Massive Multiple-Input Multiple-Output (MIMO), and interference management and cancellation; still, the resulting performance will not meet the full expectations of the IMT-2020 and 5G-PPP requirements for 5G networks, mainly in terms of offering 10 Gbps peak data rates with connection densities of 100 k to 1 M devices/km². To overcome this limitation, the future architecture of such 5G networks is being defined to be deployed on small cells and to use higher frequency bands, such as super high frequency (SHF, 3-30 GHz) or extremely high frequency (EHF, 30-300 GHz), also referred to as the centimeter and millimeter wave bands, respectively.

Journal ArticleDOI
TL;DR: The derived theoretical bounds are effectively based on the Shannon theorem, combined with selected propagation loss models, assumed link nonidealities, as well as the given energy harvesting and storage capabilities.
Abstract: Energy-efficient, reliable and scalable machine-to-machine (M2M) communications is the key technical enabler of Internet-of-Things (IoT) networks. Furthermore, as the number of deployed devices is constantly increasing, self-sustaining or energy-autonomous IoT nodes are a promising prospect receiving increasing interest. In this paper, the feasibility and fundamental limits of energy-harvesting-based M2M communication systems are studied and presented. The derived theoretical bounds are effectively based on the Shannon theorem, combined with selected propagation loss models, assumed link nonidealities, and the given energy harvesting and storage capabilities. Fundamental limits and the available operational time of the communicating nodes are derived and analyzed, together with extensive numerical results evaluated in different practical scenarios for low-power sensor-type communication applications.
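
The flavour of such bounds can be reproduced with a few lines that chain a log-distance path-loss model into the Shannon capacity and then cap the duty cycle by the harvested power; the carrier frequency, bandwidth, noise figure, path-loss exponent, harvested power and radio power below are all assumed example values, not the scenarios evaluated in the paper.

```python
import math

def rx_power_dbm(tx_dbm, f_hz, d_m, ple=2.7, d0=1.0):
    """Log-distance path loss with a free-space reference at d0 = 1 m."""
    fspl_d0 = 20 * math.log10(4 * math.pi * d0 * f_hz / 3e8)
    return tx_dbm - fspl_d0 - 10 * ple * math.log10(d_m / d0)

def shannon_rate_bps(p_rx_dbm, bw_hz, noise_figure_db=5.0):
    """Shannon capacity bound for the given received power and bandwidth."""
    noise_dbm = -174 + 10 * math.log10(bw_hz) + noise_figure_db
    snr = 10 ** ((p_rx_dbm - noise_dbm) / 10)
    return bw_hz * math.log2(1 + snr)

# Example link: 10 dBm TX, 868 MHz, 125 kHz bandwidth, 500 m range.
rate = shannon_rate_bps(rx_power_dbm(10, 868e6, 500), 125e3)
# Energy autonomy: 100 uW harvested vs a 50 mW radio caps the duty cycle at 0.2%,
# so the sustainable long-run average throughput is at most rate * 0.002.
duty = 100e-6 / 50e-3
print(f"link bound ~ {rate/1e3:.0f} kbit/s, sustainable average ~ {rate*duty/1e3:.1f} kbit/s")
```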

Journal ArticleDOI
TL;DR: This paper evaluates the performance of nonlinear block multi-diagonalization (NL-BMD) precoding, an intermediate solution between the conventional linear precoder and the nonlinear precoder (NLP), over an analog-digital hybrid beamforming configuration.
Abstract: Massive multiple-input multiple-output (MIMO) technology is a key enabler of 5G. A configuration combining analog beamforming and digital MIMO signal processing for multi-beam multiplexing, i.e. analog-digital hybrid beamforming, is one of the promising approaches to massive MIMO. With this configuration, we can mitigate the problems of complexity and power consumption, which are serious in the higher super high frequency and extremely high frequency bands. In this paper, we evaluate the performance of nonlinear block multi-diagonalization (NL-BMD) precoding, an intermediate solution between the conventional linear precoder (LP) and the nonlinear precoder (NLP), over an analog-digital hybrid beamforming configuration. Through numerical evaluation assuming indoor hotspot scenarios, it is clarified that NL-BMD yields up to an 18.8% improvement and a 5.1 dB gain in average sum-rate spectral efficiency compared with block diagonalization (BD), one of the typical schemes of conventional LP, while reducing the complexity to half that of the conventional NLP. In addition, even under a dynamic fading condition, NL-BMD shows tolerance to channel transitions and still performs better than BD: NL-BMD suppresses the sum-rate spectral efficiency loss due to channel transitions to 18.3%, whereas BD shows over 20% loss.

Journal ArticleDOI
TL;DR: Two new methods that use sparse recovery and active learning techniques for near real-time artifact identification and removal in electroencephalography (EEG) recordings are presented; they outperform Independent Component Analysis and standard sparse recovery algorithms by preserving both the spectral and complexity properties of the denoised EEG.
Abstract: We have developed two new methods that use sparse recovery and active learning techniques for near real-time artifact identification and removal in electroencephalography (EEG) recordings. The first algorithm, called Correlated Sparse Signal Recovery, addresses the problem of structured sparse signal recovery when statistical rather than exact properties describing the structure of the signal are appropriate, as in the elimination of eye movement artifacts; such tasks cannot be done efficiently using structured models that assume a common sparsity profile over fixed groups of components. Our algorithm learns structured sparse coefficients in a Bayesian paradigm. Using it, we have successfully identified and subtracted eye movement (structured) artifacts in real EEG recordings, resulting in minimal data loss. Our method outperforms Independent Component Analysis and standard sparse recovery algorithms by preserving both the spectral and complexity properties of the denoised EEG. Our second method uses a new active selection algorithm that we call Output-based Active Selection (OAS). When applied to the task of detecting EEG epochs containing other, non-structured artifacts from an ensemble of detectors, OAS boosts the accuracy of the ensemble from 91 to 97.5% with only 10% active labels. Our methods can also be applied to real-time artifact removal in magnetoencephalography and blood pressure signals.
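
As a point of reference for the "standard sparse recovery algorithms" the authors compare against, a plain iterative shrinkage-thresholding (ISTA) solver for the l1-regularized least-squares problem looks like the sketch below; the paper's Correlated Sparse Signal Recovery algorithm is Bayesian and models statistical structure, which this baseline does not.

```python
import numpy as np

def ista(A, y, lam=0.1, n_iter=500):
    """Iterative shrinkage-thresholding for min_x 0.5*||y - A x||^2 + lam*||x||_1.
    A generic sparse-recovery baseline, not the paper's Bayesian algorithm."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - y) / L                           # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
    return x

# Toy usage: recover a sparse template mixed into a short noisy measurement vector.
rng = np.random.default_rng(3)
A = rng.normal(size=(64, 256))
x_true = np.zeros(256)
x_true[rng.choice(256, 5, replace=False)] = 3.0 * rng.normal(size=5)
y = A @ x_true + 0.05 * rng.normal(size=64)
x_hat = ista(A, y)
print("estimated support (|x| > 0.5):", np.nonzero(np.abs(x_hat) > 0.5)[0])
```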

Journal ArticleDOI
TL;DR: A method is proposed for tracking traffic peaks using performance metrics extracted from the operation and maintenance database of the network; it yields promising traffic localization precision even when considering imperfections of coverage prediction and the measurement-handling capabilities of mobile equipment.
Abstract: In recent years, there has been increasing interest in tracking traffic peaks that reflect the presence of mass events or permanent traffic hotspots. This trend is driven by dominant themes of the wireless evolution towards 5G networks, such as hotspot offloading solutions, the emergence of heterogeneous networks with small-cell deployment, and the development of the green-network concept. Tracking traffic peaks with high accuracy is of great interest for knowing how congested zones can be offloaded, where small cells should be deployed, and how they can be managed for sleep-mode operation or even controlled according to traffic mobility if they are moving. In this paper, we propose a method for tracking peaks of traffic using performance metrics extracted from the operation and maintenance database of the network. These metrics are the timing advance, the angle of arrival, the neighboring cell level, the cell load and two mean throughputs: arithmetic (AMT) and harmonic (HMT). The combined use of these performance metrics, projected over a coverage map, yields promising traffic localization precision, even when considering imperfections of coverage prediction and the measurement-handling capabilities of mobile equipment. The proposed solution can be easily implemented in the network at an appreciably low cost.

Journal ArticleDOI
TL;DR: The proposed algorithm significantly improves network performance by relieving network congestion with less power consumption, and simulation results show its efficiency.
Abstract: In this paper, we propose a cross-layer congestion optimization scheme for allocating the resources of wireless sensor networks to maximize network performance. Congestion control, routing selection, link capacity allocation, and power consumption are all taken into account to yield an optimal scheme based on Lagrangian optimization. Lagrangian multipliers are adopted to adjust power consumption, congestion rate, routing selection and link capacity allocation, so that the network performance satisfies the trade-off between efficiency and fairness of resource allocation. The proposed algorithm significantly improves network performance by relieving network congestion with less power consumption. Simulation results demonstrate the idea and show the efficiency of the proposed algorithm.
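
The Lagrangian mechanism such schemes rely on can be illustrated with the textbook network-utility-maximization example solved by dual (price) subgradient updates; the three-flow, two-link topology and log utilities below form a toy instance, and the paper's full formulation additionally optimizes routing, link capacity allocation and power.

```python
import numpy as np

# Toy NUM instance: maximize sum(log(x)) subject to R @ x <= c.
R = np.array([[1.0, 1.0, 0.0],    # link 1 carries flows 1 and 2
              [0.0, 1.0, 1.0]])   # link 2 carries flows 2 and 3
c = np.array([1.0, 2.0])          # link capacities
prices = np.ones(2)               # Lagrange multipliers (congestion prices)
step = 0.05
for _ in range(3000):
    x = 1.0 / (R.T @ prices)                                # each source: argmax log(x) - x * path price
    prices = np.maximum(prices + step * (R @ x - c), 1e-6)  # price rises on overloaded links
print("rates:", np.round(x, 3), "link loads:", np.round(R @ x, 3))
```

The prices play the same role as the Lagrangian multipliers in the abstract: each link raises its price when overloaded, and sources react by trading their own utility against the congestion cost of the paths they use.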