
Showing papers in "Annales Des Télécommunications in 2016"


Journal ArticleDOI
TL;DR: This article presents the main security threats in software-defined networking, proposes AuthFlow, an authentication and access control mechanism based on host credentials, and shows that AuthFlow denies access to hosts either without valid credentials or with revoked authorization.
Abstract: Software-defined networking (SDN) is being widely adopted by enterprise networks, whereas providing security features in these next generation networks is a challenge. In this article, we present the main security threats in software-defined networking and we propose AuthFlow, an authentication and access control mechanism based on host credentials. The main contributions of our proposal are threefold: (i) a host authentication mechanism just above the MAC layer in an OpenFlow network, which guarantees a low overhead and ensures a fine-grained access control; (ii) a credential-based authentication to perform access control according to the privilege level of each host, by mapping the host credentials to the set of flows that belong to the host; (iii) a new framework for control applications, enabling software-defined network controllers to use the host identity as a new flow field to define forwarding rules. A prototype of the proposed mechanism was implemented on top of the POX controller. The results show that AuthFlow denies access to hosts either without valid credentials or with revoked authorization. Finally, we show that our scheme allows, for each host, different levels of access to network resources according to its credential.

82 citations


Journal ArticleDOI
TL;DR: The past, present and future of on-line voting is reviewed, with a focus on the role of technology transfer, from research to practice, and the range of divergent views concerning the adoption of on-line voting for critical elections.
Abstract: Electronic voting systems are those which depend on some electronic technology for their correct functionality. Many of them depend on such technology for the communication of election data. Depending on one or more communication channels in order to run elections poses many technical challenges with respect to verifiability, dependability, security, anonymity and trust. Changing the way in which people vote has many social and political implications. The role of election administrators and (independent) observers is fundamentally different when complex communications technology is involved in the process. Electronic voting has been deployed in many different types of election throughout the world for several decades. Despite lack of agreement on whether this has been a ‘success’, there has been—in the last few years—enormous investment in remote electronic voting (primarily as a means of exploiting the internet as the underlying communication technology). This paper reviews the past, present and future of on-line voting. It reports on the role of technology transfer, from research to practice, and the range of divergent views concerning the adoption of on-line voting for critical elections.

70 citations


Journal ArticleDOI
TL;DR: Experimental results demonstrate that the tree structure greatly reduces calculation overhead while preserving differential privacy for users.
Abstract: As a special kind of application of wireless sensor networks, body sensor networks (BSNs) have broad application prospects in health care. Big data acquired from BSNs usually contain sensitive information, such as physical condition and location, which must be appropriately protected. However, previous methods overlooked the privacy protection issue, leading to privacy violation. In this paper, a differential privacy protection scheme for sensitive big data in BSNs is proposed. A tree structure is constructed to reduce errors and support long-range queries. The Haar wavelet transform is applied to convert the histogram into a complete binary tree. Finally, to verify the advantages of our scheme, several experiments are conducted. Experimental results demonstrate that the tree structure greatly reduces calculation overhead while preserving differential privacy for users.
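The wavelet step the abstract describes can be illustrated with a short sketch: a histogram is decomposed into a complete binary tree of Haar coefficients, Laplace noise is added to each coefficient, and the noisy histogram is reconstructed. This is a generic Python illustration, not the paper's implementation; the per-level privacy-budget split used for the noise scale is a simplifying assumption.

```python
import random

def haar_forward(hist):
    """Decompose a length-2^k histogram into Haar averages/differences.

    Returns a list of coefficient levels; the last level holds the
    overall average, forming a complete binary tree over the histogram.
    """
    levels, data = [], list(hist)
    while len(data) > 1:
        levels.append([(data[i] - data[i + 1]) / 2 for i in range(0, len(data), 2)])
        data = [(data[i] + data[i + 1]) / 2 for i in range(0, len(data), 2)]
    levels.append(data)
    return levels

def haar_inverse(levels):
    """Rebuild the histogram from the Haar coefficient levels."""
    data = list(levels[-1])
    for diffs in reversed(levels[:-1]):
        data = [v for a, d in zip(data, diffs) for v in (a + d, a - d)]
    return data

def laplace_noise(scale):
    # Difference of two iid exponentials is Laplace(0, scale).
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def noisy_histogram(hist, epsilon):
    """Perturb every Haar coefficient (naive per-level budget split)."""
    levels = haar_forward(hist)
    scale = len(levels) / epsilon  # assumed calibration, for illustration only
    noisy = [[c + laplace_noise(scale) for c in lvl] for lvl in levels]
    return haar_inverse(noisy)
```

Because a range query touches only O(log n) tree nodes instead of every bin, noise accumulates far more slowly than with per-bin perturbation, which is the error reduction the abstract refers to.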

39 citations


Journal ArticleDOI
TL;DR: BroFlow, an Intrusion Detection and Prevention System based on the Bro traffic analyzer and on the global network view of software-defined networks (SDN) provided by OpenFlow, is proposed.
Abstract: Internal users are the main causes of anomalous and suspicious behaviors in a communication network. Even when traditional security middleboxes are present, internal attacks may lead the network to outages or to leakage of sensitive information. In this article, we propose BroFlow, an Intrusion Detection and Prevention System based on the Bro traffic analyzer and on the global network view of software-defined networks (SDN) provided by OpenFlow. BroFlow’s main contributions are (i) dynamic and elastic resource provisioning of traffic-analyzing machines on demand; (ii) real-time detection of DoS attacks through simple algorithms implemented in a policy language for network events; (iii) immediate reaction to DoS attacks, dropping malicious flows close to their sources; and (iv) near-optimal placement of sensors through a proposed heuristic for strategically positioning sensors in the network infrastructure, which is shared by multiple tenants, with a minimum number of sensors. We developed a prototype of the proposed system and evaluated it in a virtual environment of the Future Internet Testbed with Security (FITS). An evaluation of the system under attack shows that BroFlow guarantees the forwarding of legitimate packets at the maximal link rate, reducing by up to 90 % the maximal network delay caused by the attack. BroFlow reaches a 50 % bandwidth gain compared with conventional firewall approaches, even when the attackers are legitimate tenants acting in collusion. In addition, the system reduces the number of sensors while keeping full coverage of network flows.

37 citations


Journal ArticleDOI
TL;DR: A large scale e-healthcare monitoring system is proposed that targets a crowd of individuals in a wide geographical area and efficiently integrates many emerging technologies, such as mobile computing, edge computing, wearable sensors, cloud computing, big data techniques, and decision support systems.
Abstract: Rapid development of wearable devices and mobile cloud computing technologies has led to new opportunities for large scale e-healthcare systems. In these systems, individuals’ health information is remotely collected using wearable sensors and forwarded through wireless devices to a dedicated computing system for processing and evaluation, where a set of specialists, namely hospitals, healthcare agencies, and physicians, will take care of such health information. Real-time or semi-real-time health information is used for online monitoring of patients at home. This in fact enables the doctors and specialists to provide immediate medical treatment. Large scale e-healthcare systems aim at extending the monitoring coverage from individuals to a crowd of people who live in communities, cities, or even a whole country. In this paper, we propose a large scale e-healthcare monitoring system that targets a crowd of individuals in a wide geographical area. The system efficiently integrates many emerging technologies, such as mobile computing, edge computing, wearable sensors, cloud computing, big data techniques, and decision support systems. It can offer remote monitoring of patients anytime and anywhere in a timely manner. The system also features some unique functions that are of great importance for patients’ health as well as for societies, cities, and countries. These unique features are characterized by taking long-term, proactive, and intelligent decisions for expected risks that might arise, by detecting abnormal health patterns shown after analyzing huge amounts of patients’ data. Furthermore, the system uses a set of supportive information to enhance the decision support system outcome. A rigorous set of evaluation experiments is conducted and presented to validate the efficiency of the proposed model. The obtained results show that the proposed model is scalable, handling a large number of monitored individuals with minimal overhead. Moreover, exploiting the cloud-based system reduces both the resource consumption and the delay overhead for each individual patient.

35 citations


Journal ArticleDOI
TL;DR: It is demonstrated how lightweight solutions using Linux containers are better suited to support such services than heavyweight hypervisor-based solutions, which are shown to incur substantial overhead in provisioning virtual machines on-demand.
Abstract: Many seemingly simple questions that individual users face in their daily lives may actually require a substantial number of computing resources to identify the right answers. For example, a user may want to determine the right thermostat settings for different rooms of a house based on a tolerance range such that the energy consumption and costs can be maximally reduced while still offering comfortable temperatures in the house. Such answers can be determined through simulations. However, some simulation models, as in this example, are stochastic, which requires the execution of a large number of simulation tasks and aggregation of results to ascertain if the outcomes lie within specified confidence intervals. Other simulation models, such as the study of traffic conditions using simulations, may need multiple instances to be executed for a number of different parameters. Cloud computing has opened up new avenues for individuals and organizations with limited resources to obtain answers to problems that hitherto required expensive and computationally intensive resources. This paper presents SIMaaS, a cloud-based Simulation-as-a-Service to address these challenges. We demonstrate how lightweight solutions using Linux containers (e.g., Docker) are better suited to support such services than heavyweight hypervisor-based solutions, which are shown to incur substantial overhead in provisioning virtual machines on-demand. Empirical results validating our claims are presented in the context of two case studies.
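The confidence-interval aggregation the abstract describes can be sketched as follows: replicate a stochastic simulation task until the 95 % confidence half-width of the mean falls below a tolerance. The thermostat energy model here is a made-up stand-in for a real simulation, not SIMaaS code.

```python
import math
import random

def simulate_energy(setpoint_c):
    """Hypothetical stochastic model: daily energy use (kWh) for a setpoint."""
    return max(0.0, 10.0 - 0.3 * setpoint_c + random.gauss(0.0, 1.0))

def run_until_confident(task, tol=0.1, z=1.96, min_runs=30, max_runs=100_000):
    """Replicate `task` until the 95 % CI half-width of the mean is < tol."""
    samples, half = [], float("inf")
    while len(samples) < max_runs:
        samples.append(task())
        n = len(samples)
        if n >= min_runs:
            mean = sum(samples) / n
            var = sum((x - mean) ** 2 for x in samples) / (n - 1)
            half = z * math.sqrt(var / n)  # normal-approximation half-width
            if half < tol:
                break
    n = len(samples)
    return sum(samples) / n, half, n
```

Each replication is independent, which is exactly why such workloads map well onto many short-lived containers spun up in parallel.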

33 citations


Journal ArticleDOI
TL;DR: This paper shows, using case studies from the FIWARE Future Internet Service domain, that the software framework can support non-expert developers in addressing interoperability challenges.
Abstract: Interoperability remains a significant burden to the developers of Internet of Things systems. This is because resources and APIs are dynamically composed; they are highly heterogeneous in terms of their underlying communication technologies, protocols and data formats; and interoperability tools remain limited to enforcing standards-based approaches. In this paper, we propose model-based engineering methods to reduce the development effort towards ensuring that complex software systems interoperate with one another. Lightweight interoperability models can be specified in order to monitor and test the execution of running software so that interoperability problems can be quickly identified and solutions put in place. A graphical model editor and testing tool are also presented to highlight how a visual model improves upon textual specifications. We show, using case studies from the FIWARE Future Internet Service domain, that the software framework can support non-expert developers in addressing interoperability challenges.

27 citations


Journal ArticleDOI
TL;DR: In this article, a multi-GNSS receiver design is presented in various processing stages for three different GNSS systems, namely, GPS, Galileo, and the Chinese BeiDou navigation satellite system (BDS).
Abstract: Global navigation satellite systems (GNSSs) have been experiencing rapid growth in recent years with the inclusion of the Galileo and BeiDou navigation satellite systems. The existing GPS and GLONASS systems are also being modernized to better serve current challenging applications under harsh signal conditions. Therefore, the research and development of GNSS receivers have been experiencing a new upsurge in view of multi-GNSS constellations. In this article, a multi-GNSS receiver design is presented in various processing stages for three different GNSS systems, namely GPS, Galileo, and the Chinese BeiDou navigation satellite system (BDS). The developed multi-GNSS software-defined receiver performance is analyzed with real static data and utilizing a hardware signal simulator. The performance analysis is carried out for each individual system, and it is then compared against each possible multi-GNSS combination. The true multi-GNSS benefits are also highlighted via an urban scenario test carried out with the hardware signal simulator. In open sky tests, the horizontal 50 % error is approximately 3 m for GPS only, 1.8 to 2.8 m for combinations of any two systems, and 1.4 m when using GPS, Galileo, and BDS satellites. The vertical 50 % error reduces from 4.6 m to 3.9 m when using all three systems compared to GPS only. In severe urban canyons, the position error for GPS only can be more than ten times larger, and the solution availability can be less than half of that for a multi-GNSS solution.

22 citations


Journal ArticleDOI
TL;DR: In this article, the challenges for middleware emanating from the advent of the Internet of Things and cloud computing are addressed, particularly with respect to the types of middleware needed to support key domains, such as cyber-physical systems, smart cities, smart grid, big data analytics, digital earth, and so on.
Abstract: To address the challenges for middleware emanating from the advent of the Internet of Things and cloud computing, new research is required, particularly with respect to the types of middleware needed to support key domains, such as cyber-physical systems, smart cities, the smart grid, big data analytics, digital earth, and so on. This special issue addresses these challenges by (i) considering the state-of-the-art of middleware in these areas, (ii) documenting emerging ideas and concepts that help meet key challenges, (iii) increasing awareness of a grand challenge for distributed systems, and (iv) galvanizing the community of researchers around this challenge.

20 citations


Journal ArticleDOI
TL;DR: This paper discusses the implementation of the provided CSMA/CA algorithm, points out the parts that do not respect the standard specifications, and proposes and implements a compliant version of the algorithm, demonstrating the correctness of the implementation.
Abstract: In the wireless sensor networks domain, one of the most used standards is IEEE 802.15.4. This standard has been made available on many low power operating systems such as TinyOS and Contiki OS. It is crucial for an implementation to be compliant with the specifications of the standard. In the case of Contiki OS, the provided version of the main medium access algorithm, unslotted Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA), presents many flaws. In this paper, we discuss the implementation of the provided CSMA/CA algorithm and point out the parts that do not respect the standard specifications. We also propose and implement a compliant version of this algorithm and show through simulation the correctness of the implementation.
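For reference, the unslotted CSMA/CA procedure that the standard specifies is a simple backoff loop. The sketch below is plain Python rather than Contiki code, uses the default IEEE 802.15.4 MAC constants, and abstracts the radio and timing behind callbacks.

```python
import random

# Default IEEE 802.15.4 MAC constants for unslotted CSMA/CA.
MAC_MIN_BE = 3
MAC_MAX_BE = 5
MAC_MAX_CSMA_BACKOFFS = 4
UNIT_BACKOFF_PERIOD = 20  # symbols

def unslotted_csma_ca(channel_idle, wait):
    """Return True on channel-access success, False on failure.

    channel_idle(): clear channel assessment (CCA) result.
    wait(symbols): hook that delays for the random backoff.
    """
    nb, be = 0, MAC_MIN_BE
    while True:
        # Random backoff of 0 .. 2^BE - 1 unit backoff periods.
        wait(random.randint(0, 2 ** be - 1) * UNIT_BACKOFF_PERIOD)
        if channel_idle():
            return True  # transmit may proceed
        nb += 1
        be = min(be + 1, MAC_MAX_BE)
        if nb > MAC_MAX_CSMA_BACKOFFS:
            return False  # channel access failure
```

The algorithm therefore performs at most macMaxCSMABackoffs + 1 CCA attempts before declaring a channel access failure; deviations from this loop (e.g., in backoff window growth or the failure condition) are the kind of non-compliance the paper targets.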

18 citations


Journal ArticleDOI
TL;DR: The INCOME middleware framework is presented, and experiments on a first open source prototype show that QoC-based filtering and privacy protection using attribute-based access control can be performed at a reasonable cost.
Abstract: The Internet of Things (IoT) enables producers of context data, like sensors, to interact with remote consumers of context data, like smart pervasive applications, in an entirely decoupled way. However, two important issues are faced by context data distribution, namely providing context information with a sufficient level of quality (i.e., quality of context, QoC) while preserving the privacy of context owners. This article presents the solutions provided by the INCOME middleware framework for addressing these two potentially contradictory issues while hiding the complexity of context data distribution in heterogeneous and large-scale environments. Context producers and consumers express not only their needs in context contracts but also the guarantees they are ready to fulfil. These contracts are then translated into advertisement and subscription filters to determine how to distribute context data. Our experiments on a first open source prototype show that QoC-based filtering and privacy protection using attribute-based access control can be performed at a reasonable cost.

Journal ArticleDOI
TL;DR: This paper investigates a decode-and-forward cooperative communication network (DFCCN) with energy harvesting relays, in which a selected best relay with the Nth channel gain harvests energy from received radio frequency signals and then consumes the harvested energy by forwarding the recoded signals to a destination.
Abstract: In this paper, a decode-and-forward cooperative communication network (DFCCN) with energy harvesting relays is investigated, in which a selected best relay with the Nth channel gain harvests energy from received radio frequency signals and then consumes the harvested energy by forwarding the recoded signals to a destination. Based on the studied energy harvesting receivers, two operation protocols are proposed: (1) power splitting relays in DFCCN (PSDFCCN protocol) and (2) time switching relays in DFCCN (TSDFCCN protocol). The system performances of the proposed protocols are evaluated based on the exact outage probabilities and compared to that of the direct transmission protocol. The theoretical results are confirmed by Monte Carlo simulations. It is found that (1) the system performances of the proposed PSDFCCN and TSDFCCN protocols improve when the number of energy harvesting relays increases and when the parameter N is small; (2) the proposed PSDFCCN and TSDFCCN protocols with a small N value outperform the direct transmission protocol; (3) the target signal-to-noise ratio (SNR), the location of cooperative relays, and the energy harvesting parameters, e.g., the power splitting ratio, energy harvesting time, and energy conversion efficiency, have significant impacts on the system performance; and (4) the theoretical results agree well with the simulations.

Journal ArticleDOI
TL;DR: A novel architecture for carrier-managed WLAN networks which leverages network function virtualization concepts and virtualization technology in general is presented, based on a WLAN Cloudlet which offloads MAC layer processing from access points and consolidates network functions and value-added services.
Abstract: Over the past few years, Wireless Local Area Networks (WLANs) have been extensively deployed and have significantly evolved. However, the deployment of large-scale WLANs still presents management issues. Moreover, while newer WLAN technologies and services have been emerging at a prolific rate, the architecture of WLAN networks has been quite static and has had difficulty evolving. In this paper, we present a novel architecture for carrier-managed WLAN networks which leverages Network Function Virtualization concepts and virtualization technology in general. It is based on a WLAN Cloudlet which offloads MAC layer processing from access points and consolidates network functions and value-added services. All these functions and services are based on software instances. This brings more flexibility and adaptability and allows operators to easily implement new services while reducing CAPEX/OPEX and network equipment costs (e.g., access points).

Journal ArticleDOI
TL;DR: The FRM digital filter design technique and another important technique named complex-exponential modulation (CEM) are exploited and applied to the design of a novel cascaded channelized filter bank to realize selective sub-channel with very narrow transition bandwidth.
Abstract: The two key requirements of a channelized filter bank in the design of a digital receiver are low computational complexity and reconfigurability. The modulated discrete Fourier transform (MDFT) filter bank permits sub-channels with linear phase characteristics and provides a high degree of computational efficiency. However, with sub-channels exhibiting narrow transition bandwidth in an MDFT filter bank, the length of the prototype filter becomes prohibitively long, which can reduce the computational efficiency. It is well known that frequency response masking (FRM) provides an attractive technique for the realization of digital filters with very narrow transition bandwidth. In this paper, the FRM digital filter design technique and another important technique named complex-exponential modulation (CEM) are exploited and applied to the design of a novel cascaded channelized filter bank to realize selective sub-channels with very narrow transition bandwidth. A simulation is provided to illustrate the design of the proposed CEM filter bank. It is shown that the resulting filter bank entails substantially less computational complexity and reduces multiplier resource consumption compared to the conventional MDFT filter bank.

Journal ArticleDOI
TL;DR: One statistical method is described that, if the assumptions underlying the protocol’s security proof hold, could provide convincing evidence that no attack occurred for the Norwegian Internet voting protocol (or other similar voting protocols).
Abstract: Even when using a provably secure voting protocol, an election authority cannot argue convincingly that no attack that changed the election outcome has occurred, unless the voters are able to use the voting protocol correctly. We describe one statistical method that, if the assumptions underlying the protocol’s security proof hold, could provide convincing evidence that no attack occurred for the Norwegian Internet voting protocol (or other similar voting protocols). To determine the statistical power of this method, we need to estimate the rate at which voters detect possible attacks against the voting protocol. We designed and carried out an experiment to estimate this rate. We describe the experiment and results in full. Based on the results, we estimate upper and lower bounds for the detection rate. We also discuss some limitations of the practical experiment.
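Estimating upper and lower bounds for a detection rate from experimental counts is a binomial-proportion problem. One standard way to obtain such bounds (not necessarily the statistical method the authors used) is the Wilson score interval, which behaves well even for small samples and rates near 0 or 1:

```python
import math

def wilson_interval(detections, trials, z=1.96):
    """Wilson score interval for a binomial detection rate (95 % by default)."""
    if trials == 0:
        return 0.0, 1.0
    p = detections / trials
    denom = 1 + z * z / trials
    centre = p + z * z / (2 * trials)
    margin = z * math.sqrt(p * (1 - p) / trials + z * z / (4 * trials * trials))
    return (centre - margin) / denom, (centre + margin) / denom
```

For example, 8 detections in 40 participant trials would give a 95 % interval of roughly 0.10 to 0.35 for the underlying detection rate, which is the kind of bound needed to assess the statistical power of the verification method.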

Journal ArticleDOI
TL;DR: A Quantized Cramer Rao Bound (Q-CRB) method is introduced, which adapts the use of the CRB to handle grid-based localization algorithms with certain constraints, such as localization boundaries, and derives a threshold granularity level which identifies where the CRB can be appropriately applied to this type of algorithm.
Abstract: In this paper, we introduce a Quantized Cramer Rao Bound (Q-CRB) method, which adapts the use of the CRB to handle grid-based localization algorithms with certain constraints, such as localization boundaries. In addition, we derive a threshold granularity level which identifies where the CRB can be appropriately applied to this type of algorithm. Moreover, the derived threshold value allows the users of grid-based LSE techniques to avoid unnecessary complexities associated with using high grid resolutions. To examine the feasibility of the newly proposed bound, the grid-based least square estimation (LSE) technique was implemented. The Q-CRB was used to evaluate the performance of the LSE method under extensive simulation scenarios. The results show that the Q-CRB provides a tight bound, in the sense that it can characterize the behaviour of location errors of the LSE technique under various system parameters, e.g. granularity levels, measurement accuracies, and the presence or absence of localization boundaries.
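A grid-based LSE localizer of the kind evaluated in the paper can be sketched as a brute-force search over grid points that minimises the sum of squared range residuals; the grid step (granularity) trades accuracy against cost, which is the trade-off the Q-CRB threshold addresses. The code below is an illustrative assumption, not the authors' implementation.

```python
import math

def grid_lse(anchors, ranges, size, step):
    """Grid-based least-squares position estimate.

    anchors: list of (x, y) reference points; ranges: measured distances
    to each anchor. Searches a size x size area at the given grid step
    and returns the grid point with the smallest sum of squared range
    residuals (the localization boundary is the grid itself).
    """
    best, best_cost = None, float("inf")
    n = int(size / step) + 1
    for i in range(n):
        for j in range(n):
            x, y = i * step, j * step
            cost = sum((math.hypot(x - ax, y - ay) - r) ** 2
                       for (ax, ay), r in zip(anchors, ranges))
            if cost < best_cost:
                best, best_cost = (x, y), cost
    return best
```

Halving the step quadruples the number of candidate points, which is exactly the "unnecessary complexity" a granularity threshold lets users avoid once the grid error falls below the measurement-noise floor.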

Journal ArticleDOI
TL;DR: The genericity and elasticity of CIRUS is demonstrated and evaluated with the deployment of a Ubilytics use case using a real dataset based on records originating from a practical source.
Abstract: The Internet of Things (IoT) has become a reality with the availability of chatty embedded devices. The huge amount of data generated by things must be analysed with models and technologies of “Big Data Analytics”, deployed on cloud platforms. The CIRUS project aims to deliver a generic and elastic cloud-based framework for Ubilytics (ubiquitous big data analytics). The CIRUS framework collects and analyses IoT data for Machine to Machine services using components off the shelf (COTS), such as IoT gateways, message brokers or Message-as-a-Service providers, and big data analytics platforms, deployed and reconfigured dynamically with Roboconf. In this paper, we demonstrate and evaluate the genericity and elasticity of CIRUS with the deployment of a Ubilytics use case using a real dataset based on records originating from a practical source.

Journal ArticleDOI
TL;DR: Two types of interference mitigation strategies are proposed, namely passive schemes and an active scheme; simulation results show that the proposed active scheme achieves the highest performance gains compared to the proposed practical passive schemes.
Abstract: Wireless body sensor networks (WBSNs) are expected to play a pivotal role in health-related and well-being applications. In this paper, we consider a situation in which a large number of people wearing body sensor networks are gathered in very close vicinity (as can happen in sport events or emergency hospitals). Clearly, BSNs compete with each other to gain access to the same frequency, which results in mutual (internal) interference. Therefore, we investigate this “internal interference” and its destructive impact on the overall performance gain of WBSNs using the IEEE 802.15.4 standard protocol. As the number of WBSNs in the channel increases, it becomes highly likely for the active periods of neighbouring WBSNs to overlap with each other. The increase in overlapping active periods increases the probability of packet collisions, leading to performance degradation. In this paper, two types of interference mitigation strategies are proposed, namely passive schemes and an active scheme. The terms passive and active refer to the absence and presence of the capability of communication between WBSNs to efficiently utilise the same frequency spectrum. According to the passive schemes, WBSNs are enabled to change their operating frequencies whenever required to mitigate the impacts of internal interference, whereas the active scheme offers collaborative utilisation of the channel. The simulation results show that the proposed active scheme achieves the highest performance gains compared to the proposed practical passive schemes.

Journal ArticleDOI
TL;DR: A new cryptographic voting protocol for remote electronic voting that offers three of the most challenging features of such protocols: verifiability, everlasting privacy, and receipt-freeness is presented.
Abstract: We present a new cryptographic voting protocol for remote electronic voting that offers three of the most challenging features of such protocols: verifiability, everlasting privacy, and receipt-freeness. Trusted authorities and computational assumptions are only needed during vote casting and tallying to prevent the creation of invalid ballots and to achieve receipt-freeness and fairness, but not to guarantee vote privacy. The implementation of everlasting privacy is based on perfectly hiding commitments and non-interactive zero-knowledge proofs, whereas receipt-freeness is realized with mix networks and homomorphic tallying.
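Everlasting privacy via perfectly hiding commitments can be illustrated with a Pedersen commitment, a standard perfectly hiding and computationally binding scheme; the paper's actual construction may differ, and the toy group parameters below are far too small for any real use.

```python
# Toy group parameters (NOT secure): p = 2q + 1 with q prime, so the
# quadratic residues modulo p form a subgroup of prime order q.
q = 1019
p = 2 * q + 1  # 2039, also prime
g = 4          # generator of the order-q subgroup (a quadratic residue)
h = 9          # second generator whose discrete log w.r.t. g is unknown

def commit(m, r):
    """Pedersen commitment c = g^m * h^r mod p.

    Perfectly hiding: for any message m' there exists an r' giving the
    same c, so even an unbounded attacker learns nothing about m.
    Computationally binding: opening to two messages requires log_g(h).
    """
    return (pow(g, m, p) * pow(h, r, p)) % p

def open_commitment(c, m, r):
    """Verify an opening (m, r) against a commitment c."""
    return c == commit(m, r)
```

The "everlasting" property comes from the hiding side being information-theoretic: published ballots stay private even against future adversaries, while binding only needs to hold during the election.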

Journal ArticleDOI
TL;DR: Four metrics, including classification accuracy, time, memory, and computational cost, are used to evaluate the performance of the proposed algorithm on a real-time WBAN; experimental results show that the EVFDT algorithm attains significantly high detection accuracy with a low false alarm rate.
Abstract: Securing a cloud-assisted Wireless Body Area Network (WBAN) environment by applying a security mechanism that consumes fewer resources is still a challenging task. This research makes an attempt to address this challenge. One of the most prominent attacks in cloud-assisted WBANs is the Distributed Denial of Service (DDoS) attack, which not only disrupts communication but also diminishes the network bandwidth and capacity. This work is an extension of our previous research, in which an Enhanced Very Fast Decision Tree (EVFDT) was proposed that could detect DDoS attacks successfully. However, in our previous work, the proposed algorithm was evaluated on a dataset generated by implementing the LEACH protocol in NS-2. In this paper, a real-time cloud-assisted WBAN test bed is deployed to investigate the efficiency and accuracy of the proposed EVFDT algorithm on real-time sensor network traffic. To evaluate the performance of the proposed algorithm on a real-time WBAN, four metrics are used: classification accuracy, time, memory, and computational cost. It was observed that EVFDT outperforms the existing algorithms by maintaining better results for these metrics even in the presence of extreme noise. Experimental results show that the EVFDT algorithm attains significantly high detection accuracy with a low false alarm rate.

Journal ArticleDOI
TL;DR: The results demonstrate that the proposed scheme outperforms both the conventional scheme and that employing the SDR-based scaling and stopping mechanism in terms of BER performance and average number of decoding iterations.
Abstract: In this paper, a new extrinsic information scaling and early stopping mechanism for long term evolution (LTE) turbo code is proposed. A scaling factor is obtained by computing the Pearson’s correlation coefficient between the extrinsic and a posteriori log-likelihood ratio (LLR) at every half-iteration. Additionally, two new stopping criteria are proposed. The first one uses the regression angle which is computed at each half-iteration and is applied at low Eb/N0. The second one uses Pearson’s correlation coefficient and is applicable for high Eb/N0 values. The performance of the proposed scheme was compared against an existing scaling and stopping mechanism based on the sign difference ratio (SDR) technique as well as conventional LTE turbo code. Simulations have been performed with both quadrature phase shift keying (QPSK) modulation and 16-quadrature amplitude modulation (QAM) together with code rates of 1/3 and 1/2. The results demonstrate that the proposed scheme outperforms both the conventional scheme and that employing the SDR-based scaling and stopping mechanism in terms of BER performance and average number of decoding iterations. The performance analysis using EXIT charts for each scheme shows higher initial output mutual information for input mutual information of zero. Better convergence is also demonstrated with the wider tunnel for the proposed scheme. Additionally, the computational complexity analysis demonstrates a significant gain in terms of the average number of computations per packet with the different modulation and coding schemes while still gaining in terms of error performance.
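The scaling step described above reduces, at each half-iteration, to computing a sample Pearson correlation between two LLR vectors and using it as the multiplier for the extrinsic information. A minimal sketch, detached from any actual turbo decoder, looks like this:

```python
import math

def pearson(xs, ys):
    """Sample Pearson correlation coefficient of two equal-length vectors."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def scale_extrinsic(extrinsic, a_posteriori):
    """Scale extrinsic LLRs by their correlation with the a posteriori LLRs.

    As decoding converges the two vectors become strongly correlated, so
    the factor approaches 1 and the scaling fades out; the same statistic
    can double as a stopping criterion once it exceeds a threshold.
    """
    s = pearson(extrinsic, a_posteriori)
    return [s * e for e in extrinsic]
```

Any threshold choice for early stopping, and how the factor interacts with the max-log-MAP approximation, is decoder-specific and not captured by this sketch.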

Journal ArticleDOI
TL;DR: This survey attempts to capture the NFV phenomenon in its multi-faceted historical development over the last two decades, by answering the question “What are the main goals of NFV systems?” and by highlighting the advantages and technical limits ofNFV in supporting those goals.
Abstract: The fields of networking and telecommunications are presently witnessing the transition of a number of Network Function Virtualization (NFV) principles and techniques from research into practice. This survey attempts to capture the NFV phenomenon in its multi-faceted historical development over the last two decades, by answering the question “What are the main goals of NFV systems?” and by highlighting the advantages and technical limits of NFV in supporting those goals. By focusing on the whys and hows of NFV, we propose a reasoned overview of the most significant design elements of NFV as a complementary synthesis to the analytical taxonomies of papers and standards that are usually found in survey documents.

Journal ArticleDOI
TL;DR: This work presents two studies of an electronic voting system that is tailored to the needs of complex elections and evaluates the usability of the implemented EasyVote prototype from both the voter and electoral official perspectives.
Abstract: Many studies on electronic voting evaluate their usability in the context of simple elections. Complex elections, which take place in many European countries, also merit attention. The complexity of the voting process, as well as that of the tallying and verification of the ballots, makes usability even more crucial in this context. Complex elections, both paper-based and electronic, challenge voters and electoral officials to an unusual extent. In this work, we present two studies of an electronic voting system that is tailored to the needs of complex elections. In the first study, we evaluate the effectiveness of the ballot design with respect to motivating voters to verify their ballot. Furthermore, we identify factors that motivate voters to verify, or not to verify, their ballot. The second study also addresses the effectiveness of the ballot design in terms of verification, but this time from the electoral officials’ perspective. Last, but not least, we evaluate the usability of the implemented EasyVote prototype from both the voter and electoral official perspectives. In both studies, we were able to improve effectiveness, without impacting efficiency and satisfaction. Despite these usability improvements, it became clear that voters who trusted the electronic system were unlikely to verify their ballots. Moreover, these voters failed to detect the “fraudulent” manipulations. It is clear that well-formulated interventions are required in order to encourage verification and to improve the detection of errors or fraudulent attempts.

Journal ArticleDOI
TL;DR: This paper proposes an efficient and highly reliable query-driven routing protocol for wireless sensor networks that provides the best theoretical energy-aware routes to reach any node in the network and routes request and reply packets with lightweight overhead.
Abstract: Wireless sensor networks have become very attractive to the research community, owing to their applications in diverse fields such as military tracking, civilian applications, and medical research, and more generally in systems of systems. Routing is an important issue in wireless sensor networks because sensor nodes are limited in computation and resources. Any routing protocol designed for wireless sensor networks should be energy efficient and should increase the network lifetime. In this paper, we propose an efficient and highly reliable query-driven routing protocol for wireless sensor networks. Our protocol provides the best theoretical energy-aware routes to reach any node in the network and routes request and reply packets with lightweight overhead. We evaluate our protocol through simulations, comparing it with other routing protocols. The results demonstrate its efficiency in terms of energy consumption, load balancing of routes, and network lifetime.

Journal ArticleDOI
TL;DR: This paper applies SDN flow-based routing control to inter-domain routing and proposes a fine-granularity inter-domain routing mechanism, named SDI (Software Defined Inter-domain routing), which enables inter-domain routing to support flexible routing policies by matching multiple fields of the IP packet header.
Abstract: The software-defined networking (SDN) scheme decouples the network control plane and data plane, which improves the flexibility of traffic management in networks. OpenFlow is a promising implementation of the SDN scheme and has been applied in practice to enterprise networks and data center networks. However, little effort has been made to spread SDN control over the Internet to overcome the ossification of inter-domain routing. In this paper, we bring the SDN approach to inter-domain routing, inspired by the OpenFlow protocol. We apply SDN flow-based routing control to inter-domain routing and propose a fine-granularity inter-domain routing mechanism, named SDI (Software Defined Inter-domain routing). It enables inter-domain routing to support flexible routing policies by matching multiple fields of the IP packet header. We also propose a method to reduce redundant flow entries in inter-domain settings. Finally, we implement a prototype and deploy it on a multi-domain testbed.
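As a toy illustration of the multi-field matching that distinguishes this style of routing from prefix-only BGP forwarding, the following sketch matches packets against prioritized rules over several IP header fields. All rule contents and field names here are hypothetical, not taken from the SDI prototype.

```python
# Fields a rule may match on; None in a rule acts as a wildcard.
FIELDS = ("src_ip", "dst_ip", "proto", "dst_port")

def matches(rule, packet):
    """True if every non-wildcard field of the rule equals the packet's."""
    return all(rule.get(f) is None or rule[f] == packet[f] for f in FIELDS)

def lookup(flow_table, packet):
    """Return the action of the highest-priority matching rule."""
    for rule in sorted(flow_table, key=lambda r: -r["priority"]):
        if matches(rule, packet):
            return rule["action"]
    return "default"

table = [
    {"priority": 10, "src_ip": None, "dst_ip": "10.0.0.1",
     "proto": "tcp", "dst_port": 80, "action": "via_AS2"},
    {"priority": 1, "src_ip": None, "dst_ip": "10.0.0.1",
     "proto": None, "dst_port": None, "action": "via_AS3"},
]
pkt = {"src_ip": "192.0.2.7", "dst_ip": "10.0.0.1",
       "proto": "tcp", "dst_port": 80}
print(lookup(table, pkt))  # → via_AS2
```

Here web traffic to 10.0.0.1 is steered through one neighbor AS while all other traffic to the same destination follows the lower-priority rule, a policy a destination-prefix-only table cannot express.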

Journal ArticleDOI
TL;DR: A novel trust system for wireless mesh networks that accounts for packet loss on links, based on a statistical detection method implemented on each node of the network, and that allows every WMN node to assign to each of its neighbors a trust value reflecting its real behavior.
Abstract: Most trust and reputation solutions in wireless mesh networks (WMNs) rely on the intrusion detection system (IDS) Watchdog. Nevertheless, Watchdog does not consider packet loss on wireless links and may generate false positives. Consequently, a node that suffers from packet loss on one of its links may be wrongly accused by Watchdog of misbehaving. To deal with this issue, we propose in this paper a novel trust system that accounts for packet loss on links. Our trust system is based on a statistical detection method (SDM) implemented on each node of the network. First, the SDM, via the CUSUM test, analyzes the packet-loss behavior in order to detect a dropping attack. Second, the SDM, through the Kolmogorov-Smirnov test, compares the behavior of the total packet loss with that of the control packets in order to identify the attack type. Our system allows every WMN node to assign to each of its neighbors a trust value that reflects its real behavior. We have validated the proposed SDM method via extensive simulations in ns-2 and have compared our trust system with an existing solution. The results show that our SDM solution offers better performance.
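The first stage, dropping-attack detection via a CUSUM test, can be sketched with a one-sided CUSUM over observed loss rates. The parameter values (expected link-loss mean, slack, threshold) below are illustrative assumptions, not those used in the paper.

```python
def cusum_detect(loss_rates, target_mean=0.02, slack=0.01, threshold=0.1):
    """One-sided CUSUM sketch: accumulate positive deviations of the
    observed loss rate from the expected link-loss mean and flag a
    dropping attack when the cumulative sum crosses the threshold.
    Returns the index at which the attack is flagged, or None."""
    s = 0.0
    for t, x in enumerate(loss_rates):
        # Deviations below (target_mean + slack) pull the sum back to 0,
        # so ordinary wireless loss does not trigger a false alarm.
        s = max(0.0, s + (x - target_mean - slack))
        if s > threshold:
            return t
    return None

normal = [0.01, 0.02, 0.015, 0.02, 0.01]   # plausible wireless link loss
attack = [0.01, 0.02, 0.20, 0.25, 0.22]    # a neighbor starts dropping
print(cusum_detect(normal))  # → None
print(cusum_detect(attack))  # → 2
```

This separation is exactly what lets the scheme avoid Watchdog's false positives: a lossy-but-honest link stays below the drift term, while sustained dropping accumulates quickly.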

Journal ArticleDOI
TL;DR: The generalized diversity combining of an energy-constrained multiple-antenna decode-and-forward relay network is considered; results show that system performance improves both with an increasing number of antennas and with a decreasing distance between the source and relay.
Abstract: In this paper, the generalized diversity combining of an energy-constrained multiple-antenna decode-and-forward relay network is considered. Using power splitting and time switching architectures in consort with diversity combining at the relay, six protocols are proposed: power splitting with selection combining (PSSC), power splitting with maximum ratio combining (PSMRC), power splitting with generalized selection combining (PSGSC), time switching with selection combining (TSSC), time switching with maximum ratio combining (TSMRC), and time switching with generalized selection combining (TSGSC). The outage probability and throughput performance of each protocol are analyzed by first deriving closed-form analytical expressions and then verifying them through Monte Carlo simulation. Simulation results show that system performance improves both with an increasing number of antennas and with a decreasing distance between the source and relay. The TSSC/TSMRC/TSGSC protocols yield better outage performance, whereas the PSSC/PSMRC/PSGSC protocols achieve relatively higher throughput. Finally, the effects of the power splitting ratio, energy harvesting time ratio, energy conversion efficiency, sample down-conversion noise, and target signal-to-noise ratio on system performance are analyzed and presented.
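A Monte Carlo outage estimate of the kind used to verify the closed-form expressions can be sketched as follows, here for a power-splitting relay with selection combining on the source-relay hop only. Rayleigh fading with unit-mean channel gains and all numeric parameters are assumptions for illustration; the paper's full model also covers the relay-destination hop and the other five protocols.

```python
import random

def outage_ps_sc(n_antennas, rho, snr_avg, snr_th, trials=20000, seed=1):
    """Monte Carlo sketch: power splitting with selection combining (PSSC).
    A fraction rho of the received power is harvested; the remaining
    (1 - rho) is used for decoding. SC keeps only the best antenna.
    Returns the fraction of trials whose decoding SNR falls below snr_th."""
    rng = random.Random(seed)
    outages = 0
    for _ in range(trials):
        # Rayleigh fading -> exponential(1) channel power gains per antenna
        gains = [rng.expovariate(1.0) for _ in range(n_antennas)]
        snr = (1.0 - rho) * snr_avg * max(gains)  # SC: best antenna only
        if snr < snr_th:
            outages += 1
    return outages / trials

# More receive antennas should lower the outage probability
p1 = outage_ps_sc(1, rho=0.5, snr_avg=10.0, snr_th=3.0)
p4 = outage_ps_sc(4, rho=0.5, snr_avg=10.0, snr_th=3.0)
print(p1, p4)
```

For a single antenna this estimate can be checked against the exponential tail 1 - exp(-snr_th / ((1 - rho) * snr_avg)), which is the kind of closed-form/simulation cross-check the abstract describes.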

Journal ArticleDOI
TL;DR: An experiment for evaluating the integrity of election results, and improving transparency and voter participation in electronic elections, based on distributed collection of poll tape pictures taken by voters using mobile devices, which prompted the electoral authority to announce improved support for automated verification in the next elections.
Abstract: In this work, we describe an experiment for evaluating the integrity of election results, and improving transparency and voter participation in electronic elections. The idea was based on two aspects: distributed collection of poll tape pictures, taken by voters using mobile devices; and crowdsourced comparison of these pictures with the partial electronic results published by the electoral authority. The solution allowed voters to verify if results were correctly transmitted to the central tabulator without manipulation, with granularity of individual polling places. We present results, discuss limitations of the approach and future perspectives, considering the context of the previous Brazilian presidential elections of 2014, where the proposed solution was employed for the first time. In particular, with the aid of our project, voters were able to verify 1.6 % of the total poll tapes, amounting to 4.1 % of the total votes, which prompted the electoral authority to announce improved support for automated verification in the next elections. While the case study relies on the typical workflow of a paperless DRE-based election, the approach can be improved and adapted to other types of voting technology.

Journal ArticleDOI
TL;DR: This paper analyzes the C-RAN cost structure, mathematically formulates cell-to-BBU-pool assignment taking fronthaul network expenditure into account, and develops solutions that optimize C-RAN costs subject to demand constraints.
Abstract: With the increase of mobile traffic demand and the need to reduce the expenses of handling this demand, a novel solution, known as Cloud Radio Access Network (C-RAN), has been proposed for the future radio access network. This solution virtualizes base stations and centralizes processing resources into a baseband processing unit (BBU) pool. C-RAN also helps fully deploy the cooperative schemes used in LTE and LTE-Advanced. In this paper, we analyze the C-RAN cost structure. Then, unlike previous works, we mathematically formulate the cell-to-BBU-pool assignment, taking fronthaul network expenditure into account. Two optimization models are proposed for two different architectures. We then use these formulations to develop solutions that optimize C-RAN costs subject to demand constraints. Through extensive experiments, the cost efficiency of the C-RAN architecture is discussed and the effect of different parameters is analyzed. We also derive conditions under which the C-RAN architecture can yield cost savings.
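The shape of the assignment problem can be illustrated with a brute-force toy solver. The cost and capacity figures are invented for illustration, and exhaustive enumeration only works at toy scale; the paper's optimization models are formulated for realistic instances.

```python
from itertools import product

def assign_cells(cost, capacity, demand):
    """Brute-force sketch of cell-to-BBU-pool assignment (toy scale only).
    cost[i][j]: fronthaul + processing cost of serving cell i from pool j.
    capacity[j]: maximum demand pool j can serve.
    Returns (assignment, total_cost), where assignment[i] is cell i's pool."""
    n_cells, n_pools = len(cost), len(cost[0])
    best, best_cost = None, float("inf")
    for assign in product(range(n_pools), repeat=n_cells):
        load = [0.0] * n_pools
        for i, j in enumerate(assign):
            load[j] += demand[i]
        if any(load[j] > capacity[j] for j in range(n_pools)):
            continue  # infeasible: a pool is over capacity
        c = sum(cost[i][j] for i, j in enumerate(assign))
        if c < best_cost:
            best, best_cost = assign, c
    return best, best_cost

cost = [[1.0, 3.0],   # cell 0 is cheap to serve from pool 0
        [2.0, 1.0],   # cell 1 is cheap from pool 1
        [4.0, 1.5]]   # cell 2 strongly prefers pool 1
capacity = [2.0, 2.0]
demand = [1.0, 1.0, 1.0]
print(assign_cells(cost, capacity, demand))  # → ((0, 1, 1), 3.5)
```

Even this toy version shows the trade-off the paper's models capture: fronthaul cost pulls each cell toward its nearest pool, while pool capacity forces some cells onto more expensive links.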

Journal ArticleDOI
TL;DR: In this article, exact and approximate models to compute the jitter for some non-Poisson FCFS queues with a single flow are proposed, and the approximate models are shown to be sufficiently accurate for design purposes.
Abstract: The packet delay variation, commonly called delay jitter, is an important quality-of-service parameter in IP networks, especially for real-time applications. In this paper, we propose exact and approximate models to compute the jitter for some non-Poisson FCFS queues with a single flow that are important for modern IP networks. We show that the approximate models are sufficiently accurate for design purposes. We also show that these models can be computed fast enough to be usable within iterative procedures, e.g., for dimensioning a playback buffer or for flow assignment in a network.