
Showing papers presented at the "International Conference on Computer Communications and Networks" in 2017


Proceedings ArticleDOI
06 May 2017
TL;DR: Although DNNs perform better than or on par with humans on good-quality images, DNN performance is still much lower than human performance on distorted images, and there is little correlation in errors between DNNs and human subjects.
Abstract: Deep neural networks (DNNs) achieve excellent performance on standard classification tasks. However, under image quality distortions such as blur and noise, classification accuracy becomes poor. In this work, we compare the performance of DNNs with human subjects on distorted images. We show that, although DNNs perform better than or on par with humans on good-quality images, DNN performance is still much lower than human performance on distorted images. We additionally find that there is little correlation in errors between DNNs and human subjects. This could be an indication that the internal representations of images are different between DNNs and the human visual system. These comparisons with human performance could be used to guide future development of more robust DNNs.
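The abstract does not name the models, datasets, or distortion levels used; the following is a minimal sketch of the kind of clean-vs-distorted accuracy comparison it describes, using a standard pretrained classifier and an illustrative blur transform. The dataset path is hypothetical.

```python
# Minimal sketch: compare a pretrained CNN's top-1 accuracy on clean vs. distorted
# images. Models, distortion level, and data location are illustrative only, and
# the folder layout must map class folders to the model's label indices.
import torch
import torchvision.transforms as T
from torchvision import models
from torchvision.datasets import ImageFolder
from torch.utils.data import DataLoader

def accuracy(model, loader):
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for x, y in loader:
            pred = model(x).argmax(dim=1)
            correct += (pred == y).sum().item()
            total += y.numel()
    return correct / total

base = [T.Resize(256), T.CenterCrop(224), T.ToTensor(),
        T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])]
clean = T.Compose(base)
# Gaussian blur as one example distortion; additive noise could be handled similarly.
blurred = T.Compose([T.GaussianBlur(kernel_size=9, sigma=3.0)] + base)

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
for name, tf in [("clean", clean), ("blurred", blurred)]:
    ds = ImageFolder("path/to/imagenet_val", transform=tf)  # hypothetical path
    print(name, accuracy(model, DataLoader(ds, batch_size=32)))
```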

350 citations


Proceedings ArticleDOI
01 Jul 2017
TL;DR: Results show that Hyperledger Fabric consistently outperforms Ethereum across all evaluation metrics, namely execution time, latency, and throughput, yet both platforms are still not competitive with current database systems in terms of performance under high-workload scenarios.
Abstract: This paper conducts a performance analysis of two popular private blockchain platforms, Hyperledger Fabric and Ethereum (private deployment), to assess the performance and limitations of these state-of-the-art platforms. Blockchain, a decentralized transaction and data management technology, is said to be a technology that will have an impact similar to that which the Internet had on people's lives. Many industries have become interested in adopting blockchain in their IT systems, but scalability is an often-cited concern about current blockchain technology. Therefore, the goals of this preliminary performance analysis are twofold. First, a methodology for evaluating a blockchain platform is developed. Second, the analysis results are presented to inform practitioners in making decisions regarding adoption of blockchain technology in their IT systems. The experimental results, based on a varying number of transactions, show that Hyperledger Fabric consistently outperforms Ethereum across all evaluation metrics, namely execution time, latency, and throughput. Additionally, both platforms are still not competitive with current database systems in terms of performance in high-workload scenarios.
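The exact benchmark harness is not given in the abstract; the sketch below only illustrates how execution time, latency, and throughput can be measured over a varying number of transactions. The `submit_transaction` function is a hypothetical stand-in for a Fabric or Ethereum client call, not part of either SDK.

```python
# Minimal benchmark-loop sketch for comparing platforms on execution time,
# latency, and throughput, under the assumption of a synchronous client call.
import time

def submit_transaction(payload):
    time.sleep(0.01)  # placeholder for a real blockchain client call

def run_benchmark(num_tx):
    latencies = []
    start = time.perf_counter()
    for i in range(num_tx):
        t0 = time.perf_counter()
        submit_transaction({"id": i})
        latencies.append(time.perf_counter() - t0)
    execution_time = time.perf_counter() - start
    return {
        "execution_time_s": execution_time,
        "avg_latency_s": sum(latencies) / len(latencies),
        "throughput_tps": num_tx / execution_time,
    }

for n in (100, 1000):  # vary the number of transactions, as in the evaluation
    print(n, run_benchmark(n))
```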

289 citations


Proceedings ArticleDOI
01 Jul 2017
TL;DR: It is found that blockchain technology can bring the following benefits: improvements in the quality and quantity of government services, greater transparency and accessibility of government information, and development of information-sharing across different organizations, and assistance in building an individual credit system in China.
Abstract: The purpose of this article is to discuss the application of blockchain technology in e-government, particularly in the Chinese context. Chancheng District, part of Foshan City in Guangdong Province, China, has undertaken a project called "The Comprehensive Experimental Area of Big Data in Guangdong Province" since 2016. Promoting the application of blockchain technology in e-government is an essential part of this undertaking, which is the first use of blockchain in government in China. Taking Chancheng's project as a case study, this article analyzes the framework, difficulties and challenges of applying blockchain to e-government at present, and discusses how blockchain technology can contribute to the development of e-government and public services in China. This article considers the practical realities in China and discusses the application of blockchain technology in Chinese e-government, finding that blockchain technology can bring the following benefits: (1) improvements in the quality and quantity of government services, (2) greater transparency and accessibility of government information, (3) development of information-sharing across different organizations, and (4) assistance in building an individual credit system in China. However, information security, cost and reliability are still major problems in application. Thus, establishing a general application platform of blockchain technology and developing management standards are crucial for promoting and applying blockchain in e-government. Blockchain provides an effective way of making government services more efficient, but standardizing the management system, processes and responsibility for the application is necessary for its further promotion. This article, by providing an analysis of the practice of blockchain in e-government in China, could serve as a foundation for further practical work and theoretical research in government services.

197 citations


Proceedings ArticleDOI
01 Jul 2017
TL;DR: The importance of creating public, open "smart city" data repositories for the research community is argued, and privacy-preserving techniques for the anonymous uploading of urban sensor data from vehicles are proposed.
Abstract: In the Intelligent Vehicle Grid, the car is becoming a formidable sensor platform, absorbing information from the environment, from other cars (and from the driver) and feeding it to other cars and infrastructure to assist in safe navigation, pollution control, and traffic management. The Vehicle Grid essentially becomes an Internet of Things (IOT), which we call the Internet of Vehicles (IOV), capable of making its own decisions about driving customers to their destinations. Like other important IOT examples (e.g., smart buildings), the Internet of Vehicles will not merely upload data to the Internet using V2I. It will also use V2V communications between peers to complement on-board sensor inputs and provide safe and efficient navigation. In this paper, we first describe several vehicular applications that leverage V2V and V2I. Communications with infrastructure and with other vehicles, however, can create privacy and security violations. In the second part of the paper we address these issues and more specifically focus on the need to guarantee location privacy to mobile users. We argue for the importance of creating public, open "smart city" data repositories for the research community and propose privacy-preserving techniques for the anonymous uploading of urban sensor data from vehicles.

63 citations


Proceedings ArticleDOI
01 Jul 2017
TL;DR: The NDNS (NDN DNS) protocol is designed, and several fundamental differences between application designs for the host-centric IP architecture and the data-centric NDN architecture are revealed.
Abstract: DNS provides a global-scale distributed lookup service to retrieve data of all types for a given name, be it IP addresses, service records, or cryptographic keys. This service has proven essential in today's operational Internet. Our experience with the design and development of Named Data Networking (NDN) suggests the need for a similar always-on lookup service. To fulfill this need we have designed the NDNS (NDN DNS) protocol, and learned several interesting lessons in the process. Although DNS's request-response operations seem to closely resemble NDN's Interest-Data packet exchanges, they operate at different layers in the protocol stack. Comparing DNS's implementations over the IP protocol stack with NDNS's implementation over NDN reveals several fundamental differences between application designs for the host-centric IP architecture and the data-centric NDN architecture.

57 citations


Proceedings ArticleDOI
01 Jul 2017
TL;DR: WiTraffic is the first WiFi-based traffic monitoring system; it is non-intrusive, cost-effective, and easy to deploy, and a machine learning technique is adopted to train vehicle classification models and efficiently categorize vehicles.
Abstract: The traffic monitoring system is an imperative tool for traffic analysis and transportation planning. In this paper, we present WiTraffic: the first WiFi-based traffic monitoring system. Compared with existing solutions, it is non-intrusive, cost-effective, and easy to deploy. Unique WiFi Channel State Information (CSI) patterns of passing vehicles are captured and analyzed to effectively perform vehicle classification, lane detection, and speed estimation. A machine learning technique is adopted to train vehicle classification models and efficiently categorize vehicles. An Earth Mover's Distance (EMD)-based vehicle lane detection algorithm and a vehicle speed estimation mechanism are proposed to further utilize WiFi CSI to identify the lane in which a vehicle is located and to estimate the vehicle speed. We implemented WiTraffic with off-the-shelf WiFi devices and performed real-world experiments with over a week of field data collection on both local roads and highways. The results show that the mean classification accuracy and lane detection accuracy for both local-road and highway settings are around 96% and 95%, respectively. The average root-mean-square error (RMSE) of the proposed CSI-based speed estimation method on a highway was 5 mph in our experimental settings.
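To make the EMD-based lane detection step concrete, here is a minimal sketch in which the CSI amplitude profile of a passing vehicle is compared against per-lane reference profiles and the closest lane wins. The feature representation and reference profiles are illustrative assumptions, not the authors' exact design.

```python
# EMD-style lane classification sketch using 1-D Wasserstein distance.
import numpy as np
from scipy.stats import wasserstein_distance

def detect_lane(csi_profile, lane_references):
    """csi_profile: 1-D array of CSI amplitudes for one vehicle pass.
    lane_references: dict mapping lane id -> reference 1-D amplitude array."""
    distances = {lane: wasserstein_distance(csi_profile, ref)
                 for lane, ref in lane_references.items()}
    return min(distances, key=distances.get), distances

# Toy example with synthetic profiles (the near lane perturbs CSI more strongly).
rng = np.random.default_rng(0)
refs = {"near_lane": rng.normal(5.0, 1.0, 500), "far_lane": rng.normal(2.0, 1.0, 500)}
observation = rng.normal(4.8, 1.1, 400)
print(detect_lane(observation, refs))
```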

56 citations


Proceedings ArticleDOI
01 Jul 2017
TL;DR: This paper discusses how to enable a truly portable and mobile VR experience, with lightweight VR glasses wirelessly connecting with edge/cloud computing devices that perform the rendering remotely, and investigates several possible solutions.
Abstract: Triggered by several head-mounted display (HMD) devices that have come to the market recently, such as Oculus Rift, HTC Vive, and Samsung Gear VR, significant interest has developed in virtual reality (VR) systems, experiences, and applications. However, the current HMD devices are still very heavy and large, negatively affecting user experience. Moreover, current VR approaches perform rendering locally, either on a mobile device tethered to an HMD or on a computer/console tethered to the HMD. In this paper, we discuss how to enable a truly portable and mobile VR experience, with lightweight VR glasses wirelessly connected to edge/cloud computing devices that perform the rendering remotely. We investigate the challenges of enabling this new wireless VR approach with edge/cloud computing across the different application scenarios that we implement. Specifically, we analyze the challenging bitrate and latency requirements to enable wireless VR, and investigate several possible solutions.
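A back-of-the-envelope calculation shows why the bitrate requirement is the central obstacle. All numbers below are illustrative assumptions (a Vive-class panel, 90 Hz, 24-bit color, an assumed compression ratio), not figures from the paper.

```python
# Rough bitrate estimate for remotely rendered VR under illustrative assumptions.
width, height = 2160, 1200          # combined panel resolution (HTC Vive class)
fps = 90                            # refresh rate in Hz
bits_per_pixel = 24                 # RGB, 8 bits per channel
compression_ratio = 100             # assumed H.264/H.265-style compression

raw_bps = width * height * fps * bits_per_pixel
compressed_bps = raw_bps / compression_ratio
print(f"raw: {raw_bps / 1e9:.2f} Gbps, compressed: {compressed_bps / 1e6:.1f} Mbps")
# ~5.6 Gbps uncompressed, ~56 Mbps compressed -- before adding the tight
# motion-to-photon latency budget that remote rendering must also meet.
```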

54 citations


Proceedings ArticleDOI
01 Jul 2017
TL;DR: This work created a set of commercial-grade services supporting a wide variety of business use cases, including a fully developed blockchain-based decentralized marketplace, secure data storage and transfer, and unique user aliases that link the owner to all services controlled by that alias.
Abstract: While Bitcoin (Peer-to-Peer Electronic Cash) [Nak] solved the double-spend problem and provided work with timestamps on a public ledger, it has not to date extended the functionality of a blockchain beyond a transparent and public payment system. Satoshi Nakamoto's original reference client had a decentralized marketplace service which was later taken out due to a lack of resources [Deva]. We continued with Nakamoto's vision by creating a set of commercial-grade services supporting a wide variety of business use cases, including a fully developed blockchain-based decentralized marketplace, secure data storage and transfer, and unique user aliases that link the owner to all services controlled by that alias.

54 citations


Proceedings ArticleDOI
Cheng Zhang1, Jun Bi1, Yu Zhou1, Abdul Basit Dogar1, Jianping Wu1 
01 Jul 2017
TL;DR: This work implemented HyperV, a high-performance hypervisor for virtualization of a P4-specific data plane, to provide both non-exclusive and uninterrupted features, and evaluated BMv2-target HyperV against Hyper4, a recently proposed hypervisor, and DPDK-target HyperV against PISCES and Open vSwitch.
Abstract: P4 is a domain specific language designed to define the behavior of a programmable data plane. It facilitates offloading hardware-suitable Network Functions (NFs) to a data plane. Consequently, NFs can maximally benefit from the high performance of hardware devices, while more CPU power can be reserved for user applications. However, since the programmable data plane provides an NF with an exclusive network context, different NFs cannot operate on the same data plane simultaneously. Besides, it is hardly possible to dynamically reconfigure programmable network devices without interrupting the operation of a data plane. Therefore, we propose HyperV, a high performance hypervisor for virtualization of a P4 specific data plane, to provide both non-exclusive and uninterrupted features. We implemented HyperV based on a P4-BMv2 target and a DPDK target, respectively. Then we evaluated BMv2-target HyperV by comparing with Hyper4, a recently proposed hypervisor, and evaluated DPDK-target HyperV by comparing with PISCES and Open vSwitch. Results show that BMv2-target HyperV on average outperforms Hyper4 by 2.5x in performance while reducing resource usage by 4x. DPDK-target HyperV performs comparably to Open vSwitch and PISCES, with a worst-case throughput penalty of less than 7%, while providing a powerful virtualization capability that neither of them provides.

50 citations


Proceedings ArticleDOI
01 Jul 2017
TL;DR: This work proposes a framework to group together similar contracts within the Ethereum network using only the contracts' publicly available compiled code, and reports on the use of unsupervised clustering techniques and a seed set of verified contracts.
Abstract: Smart contracts have recently attracted interest from diverse fields including law and finance. Ethereum in particular has grown rapidly to accommodate an entire ecosystem of contracts which run using its own crypto-currency. Smart contract developers can opt to verify their contracts so that any user can inspect and audit the code before executing the contract. However, the huge number of deployed smart contracts and the lack of supporting tools for the analysis of smart contracts make it very challenging to get insights into this ecosystem, where code gets executed through transactions performing value transfer of a crypto-currency. We address this problem: using unsupervised clustering techniques and a seed set of verified contracts, we propose a framework to group together similar contracts within the Ethereum network using only the contracts' publicly available compiled code. We report qualitative and quantitative results on a dataset and provide the dataset and project code to the research community.
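The abstract does not specify the features or clustering algorithm; the sketch below only illustrates the general shape of clustering contracts by compiled bytecode, using a simple byte-value histogram per contract and k-means. The hex strings and cluster count are placeholders.

```python
# Illustrative bytecode-clustering sketch (not the paper's feature engineering).
import numpy as np
from sklearn.cluster import KMeans

def bytecode_histogram(hex_code):
    """Turn a contract's hex bytecode string into a normalized 256-bin histogram."""
    raw = bytes.fromhex(hex_code.removeprefix("0x"))
    hist = np.bincount(np.frombuffer(raw, dtype=np.uint8), minlength=256)
    return hist / max(hist.sum(), 1)

# Hypothetical inputs: hex strings of deployed contract bytecode.
contracts = ["0x6080604052348015600f57600080fd5b50",
             "0x6080604052600436106049576000357c01"]
X = np.vstack([bytecode_histogram(c) for c in contracts])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)  # cluster id per contract; verified contracts can seed cluster labels
```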

42 citations


Proceedings ArticleDOI
01 Jul 2017
TL;DR: A unique study of Reddit is undertaken, involving a large sample of comments from 11 popular subreddits with different properties, introducing a large number of sentiment, relevance, and content-analysis features, including some novel features customized to Reddit.
Abstract: Increasingly, people form opinions based on information they consume on online social media. As a result, it is crucial to understand what type of content attracts people's attention on social media and drives discussions. In this paper we focus on online discussions. Can we predict which comments and what content get the highest attention in an online discussion? How does this content differ from community to community? To accomplish this, we undertake a unique study of Reddit involving a large sample of comments from 11 popular subreddits with different properties. We introduce a large number of sentiment, relevance, and content-analysis features, including some novel features customized to Reddit. Through a comparative analysis of the chosen subreddits, we show that our models are able to correctly retrieve top replies under a post with high precision. In addition, we explain our findings with a detailed analysis of what distinguishes high-scoring posts in different communities that differ along the dimensions of specificity of topic and style, audience, and level of moderation.
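A minimal sketch of the prediction setup described above: featurize each comment and rank the candidate replies under a post by a classifier's score. The three features here (length, sentiment, relevance to the post) are a simplified stand-in for the paper's much richer, Reddit-specific feature set.

```python
# Rank replies by predicted probability of being a "top reply" (illustrative data).
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training rows: [comment_length, sentiment_score, relevance_to_post],
# label = 1 if the comment became a top reply.
X_train = np.array([[120, 0.8, 0.9], [15, -0.2, 0.1], [300, 0.1, 0.7], [40, 0.5, 0.2]])
y_train = np.array([1, 0, 1, 0])

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Rank new replies under a post by predicted probability.
candidates = np.array([[200, 0.4, 0.8], [25, 0.9, 0.2]])
scores = model.predict_proba(candidates)[:, 1]
print(np.argsort(scores)[::-1], scores)  # indices of candidates, best first
```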

Proceedings ArticleDOI
01 Jul 2017
TL;DR: The design of nTorrent is presented, which provides BitTorrent-like functions natively in Named Data Networking (NDN), and simulations are used to examine how well NDN's data-centric communication model can natively support such an application.
Abstract: BitTorrent is a popular application for peer-to-peer file sharing in today's Internet. To achieve robust and efficient data dissemination as an application overlay, BitTorrent implements a data-centric paradigm on top of TCP/IP's point-to-point packet delivery, which requires each peer to obtain network-layer connectivity information (e.g., peer IP address, distance to each peer, routing policies) that is exclusively available at the network layer in order to select the best peers for data retrieval. This paper presents the design of nTorrent, which provides BitTorrent-like functions natively in Named Data Networking (NDN). We use simulations to examine how well NDN's data-centric communication model can natively support such an application. Our work exposes the differences between the IP-based BitTorrent and nTorrent, and the issues and impact of moving IP-based applications to NDN-enabled networks.

Proceedings ArticleDOI
01 Jul 2017
TL;DR: This work studies Spark as a representative dataflow system, PMLS as a parameter-server system, and TensorFlow and MXNet as examples of more advanced dataflow systems, and analyzes the communication and control bottlenecks for these approaches.
Abstract: The proliferation of big data and big computing boosted the adoption of machine learning across many application domains. Several distributed machine learning platforms emerged recently. We investigate the architectural design of these distributed machine learning platforms, as the design decisions inevitably affect the performance, scalability, and availability of those platforms. We study Spark as a representative dataflow system, PMLS as a parameter-server system, and TensorFlow and MXNet as examples of more advanced dataflow systems. We take a distributed systems perspective, and analyze the communication and control bottlenecks for these approaches. We also consider fault-tolerance and ease-of-development in these platforms. In order to provide a quantitative evaluation, we evaluate the performance of these three systems with basic machine learning tasks: logistic regression, and an image classification example on the MNIST dataset.

Proceedings ArticleDOI
Lei Xu1, Lin Chen1, Zhimin Gao1, Yang Lu1, Weidong Shi1 
01 Jul 2017
TL;DR: CoC, a novel supply chain management system based on a hybrid decentralized ledger, is proposed, along with an efficient block construction method and a security mechanism to prevent unauthorized access to data stored on the ledger.
Abstract: The modern supply chain is a complex system that plays an important role across different sectors against the background of global economic integration. Supply chain management systems have been proposed to handle the increasing complexity and improve the efficiency of the flow of goods. They are also useful for preventing potential fraud and guaranteeing trade compliance. Currently, most companies maintain their own IT systems for supply chain management. However, this approach has limitations that prevent one from obtaining most of the supply chain information. Using emerging decentralized ledger technology to build a supply chain management system is a promising direction. However, decentralized ledgers usually suffer from low performance and lack the capability to protect information stored on the ledger. To overcome these challenges, we propose CoC, a novel supply chain management system based on a hybrid decentralized ledger. We develop an efficient block construction method for this model and a security mechanism to prevent unauthorized access to data stored on the ledger.

Proceedings ArticleDOI
01 Jul 2017
TL;DR: In this paper, smart contracts are used to express and enforce inter-organizational agreements, and their basis in a common formalism may ensure effective evaluation and comparison between different intra-organizational contracts.
Abstract: The blockchain constitutes a technology-based, rather than social- or regulation-based, means to lower uncertainty about one another in order to exchange value. However, its use may very well also lead to increased complexity resulting from having to subsume work that displaced intermediary institutions had performed. We present our perspective that smart contracts may be used to mitigate this increased complexity. We further posit that smart contracts can be delineated according to complexity: smart contracts that can be verified objectively without much uncertainty belong in an inter-organizational context; those that cannot be objectively verified belong in an intra-organizational context. We state that smart contracts that implement a formal (e.g., mathematical or simulation) model are especially beneficial for both contexts: they can be used to express and enforce inter-organizational agreements, and their basis in a common formalism may ensure effective evaluation and comparison between different intra-organizational contracts. Finally, we present a case study of our perspective by describing Intellichain, which implements a formal, agent-based simulation model as a smart contract to provide epidemiological decision support.

Proceedings ArticleDOI
01 Jul 2017
TL;DR: A novel scheduling policy is proposed, named AutoPath, which can effectively reduce the overall makespan of such applications by detecting and leveraging parallel paths, and adaptively assigning computing resources based on the estimated workload demands during runtime.
Abstract: Due to the flexibility of data operations and the scalability of its in-memory cache, Spark has revealed the potential to become the standard distributed framework to replace Hadoop for data-intensive processing in both industry and academia. However, we observe that the built-in scheduling algorithms in Spark (i.e., FIFO and FAIR) are not optimized for applications with multiple parallel and independent branches in stages. Specifically, the child stage needs to wait and collect data from all its parent branches, but this wait has no guaranteed upper bound since it is tightly coupled with each branch's workload characteristics, stage order, and its corresponding allocated computing resources. To address this challenge, we investigate a solution which ensures all branches acquire suitable resources according to their workload demand, so that the finish times of the branches are as close to each other as possible. Based on this, we propose a novel scheduling policy, named AutoPath, which can effectively reduce the overall makespan of such applications by detecting and leveraging parallel paths, and adaptively assigning computing resources based on the estimated workload demands during runtime. We implemented the new scheduling scheme in Spark v1.5.0 and evaluated it with selected representative workloads. The experiments demonstrate that our new scheduler effectively reduces the makespan and improves resource utilization for these applications, compared to the current FIFO and FAIR schedulers.
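The core idea (allocating resources so that parallel branches finish at roughly the same time) reduces to a simple proportional split, sketched below. The workload estimates and rounding policy are illustrative; this is not Spark's or AutoPath's code.

```python
# Give each parallel branch a share of executor cores proportional to its
# estimated remaining work, so estimated finish times line up.
def allocate_cores(branch_workloads, total_cores):
    """branch_workloads: dict branch -> estimated remaining work (task-seconds)."""
    total_work = sum(branch_workloads.values())
    shares = {b: w / total_work * total_cores for b, w in branch_workloads.items()}
    # Round down, then hand leftover cores to the branches with largest remainders.
    alloc = {b: int(s) for b, s in shares.items()}
    leftover = total_cores - sum(alloc.values())
    for b in sorted(shares, key=lambda b: shares[b] - alloc[b], reverse=True)[:leftover]:
        alloc[b] += 1
    return alloc

workloads = {"branchA": 800.0, "branchB": 500.0, "branchC": 300.0}  # task-seconds
print(allocate_cores(workloads, total_cores=16))
# {'branchA': 8, 'branchB': 5, 'branchC': 3}: each branch finishes after roughly
# 100 s of per-core work, so the joining child stage does not wait on a straggler.
```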

Proceedings ArticleDOI
01 Jul 2017
TL;DR: This paper proposes a threat-aware system based on machine-learning for timely detection and response against network intrusion in SDN and achieves high performance and significantly reduces uncertainty in the decision process with a small number of feature sets.
Abstract: Software-Defined Networking (SDN) is an emerging network architecture that decouples the control plane and the data plane to provide unprecedented programmability, automation, and network control. The SDN controller exercises centralized control over network software, and in doing so, it can monitor and respond to malicious traffic for network protection. This paper proposes a threat-aware system based on machine learning for timely detection of and response against network intrusion in SDN. Our proposed system consists of data preprocessing for feature selection, predictive data modeling for machine learning and anomaly detection, and decision making for intrusion response in SDN. Due to the time-critical nature of SDN, we propose a practical approach utilizing machine-learning techniques to protect against network intrusion and reduce uncertainty in decision-making outcomes. The maliciousness of the most uncertain network traffic subsets is evaluated with selected significant feature sets. Our experimental results show that the proposed approach achieves high performance and significantly reduces uncertainty in the decision process with a small number of feature sets. The results help the SDN controller to properly react against known or unknown attacks that cannot be prevented by signature-based network intrusion detection systems.
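The pipeline shape described above (select a small feature set, train a classifier, flag the most uncertain flows for further evaluation) can be sketched as follows. The data, feature scorer, model, and thresholds are placeholders, not the paper's setup.

```python
# Feature selection + classification + uncertainty ranking, on synthetic flows.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))                   # 500 flows, 20 candidate features
y = (X[:, 0] + 0.5 * X[:, 3] > 0.8).astype(int)  # synthetic "malicious" label

selector = SelectKBest(f_classif, k=5).fit(X, y)       # keep a small feature set
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(selector.transform(X), y)

probs = clf.predict_proba(selector.transform(X))[:, 1]
uncertainty = 1.0 - np.abs(2.0 * probs - 1.0)    # 0 = confident, 1 = maximally uncertain
review_queue = np.argsort(uncertainty)[::-1][:10]
print("flows to re-evaluate with additional features:", review_queue)
```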

Proceedings ArticleDOI
01 Jul 2017
TL;DR: An overview of the existing access control mechanisms and their applicability in IoT systems is presented, and both the challenges in IoT access control design and the goals for future IoT access control design are identified and discussed.
Abstract: The integration of the physical world and the cyber system in IoT brings significant challenges to the design of security solutions. Access control is considered to be a critical system component for the protection of data, cyberinfrastructure, and even the physical systems in IoT; however, because of the new characteristics of IoT systems, such as resource constraints, large scale, and device heterogeneity, many traditional security solutions, including existing access control mechanisms, may not be directly applicable in the IoT environment. This paper first presents an overview of the existing access control mechanisms and analyzes their applicability in IoT systems. Then, both the challenges in IoT access control design and the goals for future IoT access control design are identified and discussed.

Proceedings ArticleDOI
01 Jul 2017
TL;DR: The results demonstrate the advantages of (i) joint compared to sequential optimization, (ii) stochastic compared to deterministic optimization, and (iii) adaptive compared to static optimization.
Abstract: Software-defined cellular networks (SDCN) have been recently introduced to enable flexible cellular network design that facilitates fulfilling 5G design requirements. Placement of controllers within the SDCN plays a crucial role in optimizing its performance. In this paper, we study the controller placement problem in SDCN, considering the uncertainty in cellular user locations. Specifically, our contributions are as follows. First, we develop C3P2, a static joint stochastic controller placement and evolved node B (eNB)-controller assignment problem. The objective of C3P2 is to minimize the number of controllers needed to control all eNBs, while ensuring that the response time to each eNB will exceed delta seconds with probability less than 1 - beta. Second, we develop CPPA, a joint stochastic controller placement and adaptive eNB-controller assignment problem. In contrast to C3P2, in CPPA the eNB-controller assignment adapts to variations in the eNB request rates, resulting from the variations in the cellular user locations. Finally, we use sample average approximation combined with various linearization techniques to solve and evaluate C3P2 and CPPA under various system parameters. Our results demonstrate the advantages of (i) joint compared to sequential optimization, (ii) stochastic compared to deterministic optimization, and (iii) adaptive compared to static optimization.

Proceedings ArticleDOI
01 Jul 2017
TL;DR: This paper investigates a probing strategy to assess unknown third-party services by offloading micro tasks and accurately predicting the performance for larger offloading tasks using regression models, and presents the first supplement approach for offloading decision support requiring no prior knowledge about these offloading systems and making no assumptions for real-world deployments.
Abstract: Mobile Cloud Computing (MCC) leverages resourceful data centers that are distant (aka the cloud) or closely located (aka edge servers) for computational offloading to overcome resource limitations of modern mobile systems like smartphones or IoT devices. Many research works investigate context-aware offloading decision algorithms aiming to find the best offloading system at runtime. However, all approaches require prior knowledge of the offloading systems or a running service profiler on the backend system. In this paper, we present a novel approach that overcomes this issue by first probing available unknown services, such as nearby cloudlets or the distant cloud, and networks in an energy-efficient way at runtime to make better offloading decisions. For that, we investigate a probing strategy to assess these unknown services by offloading micro tasks and accurately predicting the performance for larger offloading tasks using regression models. Our evaluation on three algorithms with different time complexities shows that we achieve high prediction accuracies of up to 85.5% after probing just two micro tasks that run within a few milliseconds. To the best of our knowledge, this is the first supplemental approach for offloading decision support that can handle unknown third-party services, requiring no prior knowledge about these offloading systems and making no assumptions about real-world deployments.
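The probing idea can be sketched as a simple regression: measure how an unknown backend handles a couple of tiny micro tasks, then predict its runtime for the full offloading task from a model trained offline on known systems. The data and the linear model are illustrative assumptions, not the paper's regression design.

```python
# Predict full-task runtime on an unknown service from two micro-task probes.
import numpy as np
from sklearn.linear_model import LinearRegression

# Offline training data from systems we could measure end-to-end:
# columns = runtimes of two micro probes (ms); target = full task runtime (ms).
probe_times = np.array([[2.0, 5.0], [3.5, 8.0], [1.2, 3.1], [6.0, 14.0]])
full_task_times = np.array([180.0, 320.0, 110.0, 560.0])

model = LinearRegression().fit(probe_times, full_task_times)

# At runtime: probe the unknown cloudlet/cloud with the same two micro tasks.
new_service_probes = np.array([[2.8, 6.5]])
predicted = model.predict(new_service_probes)[0]
print(f"predicted full-task runtime on unknown service: {predicted:.0f} ms")
```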

Proceedings ArticleDOI
01 Jul 2017
TL;DR: The proposed THE-Driven Anomaly Detector, evaluated using k-fold cross validation and synthetic attacks, is shown to effectively detect suspicious traffic in BAS networks with a small false alarm rate.
Abstract: Building Automation Systems (BAS) are distributed networks of hardware and software that monitor and control heating, ventilation, and air-conditioning (HVAC), as well as lighting and security of smart buildings. BACnet is a standard data communication protocol designed to operate across many types of BAS field panels and controllers. This paper studies BACnet traffic in a real-world BAS from various vantage points and develops an anomaly detector for BAS networks. Our analysis of BACnet traffic through several measures reveals that BACnet traffic is neither strictly periodic, as expected of control traffic, nor exhibits the diurnal patterns of IP network traffic. BACnet traffic is a combination of multiple flow-service streams that belong to "THE-driven" categories: Time-driven, Human-driven, and Event-driven. Time-driven traffic follows periodic patterns, regular patterns, or on/off models. Human-driven and event-driven traffic present non-periodic patterns. We construct flow-service models for time-driven traffic and develop the THE-Driven Anomaly Detector, which adopts different mechanisms for each category of traffic. We evaluate the anomaly detector using k-fold cross validation and synthetic attacks. The proposed THE-Driven Anomaly Detector is shown to effectively detect suspicious traffic in BAS networks with a small false alarm rate.
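One building block of the time-driven model can be sketched simply: learn a flow-service stream's typical inter-arrival period from benign traffic, then flag arrivals that break it. The thresholds, the handling of human- and event-driven categories, and all numbers below are illustrative only.

```python
# Periodicity-based check for a time-driven flow-service stream (synthetic data).
import numpy as np

def fit_period(train_arrivals):
    gaps = np.diff(np.sort(train_arrivals))
    return np.median(gaps), np.std(gaps)

def detect_anomalies(arrivals, period, sigma, k=4.0):
    gaps = np.diff(np.sort(arrivals))
    return np.where(np.abs(gaps - period) > k * max(sigma, 1e-6))[0]

# Training: a device reporting every ~10 s. Test: one burst of injected packets.
rng = np.random.default_rng(1)
train = np.cumsum(rng.normal(10.0, 0.1, 200))
test = np.sort(np.concatenate([np.cumsum(rng.normal(10.0, 0.1, 50)), [123.0, 123.5]]))
period, sigma = fit_period(train)
print("suspicious gap indices:", detect_anomalies(test, period, sigma))
```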

Proceedings ArticleDOI
01 Jul 2017
TL;DR: The results show how user-space relaying causes a sizable increase in latency which can be detected with a low number of samples, while relaying using kernel-space forwarding requires larger sample sets in order to be discovered.
Abstract: The Link Fabrication Attack (LFA) in Software-Defined Networking (SDN) involves an attacker forging a new link in the network, providing them with control over traffic which traverses the new malicious link. One method to perform this attack is through the relaying of topology discovery traffic, for which no comprehensive defense exists. This paper proposes to detect this attack using statistical analysis of link latencies. A novel solution has been designed requiring a new link to undergo a vetting period during which its latency is evaluated. This is subsequently compared against a baseline model for benign links. This solution is assessed against several implementations of the relay-type LFA. The trade-off between the length of the vetting period and the accuracy of the attack detection is analyzed. The results show how user-space relaying causes a sizable increase in latency which can be detected with a low number of samples, while relaying using kernel-space forwarding requires larger sample sets in order to be discovered.
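A minimal sketch of latency-based vetting: collect latency samples for the new link during its vetting period and test whether they are significantly larger than the benign-link baseline. The statistical test, sample sizes, and threshold are illustrative assumptions, not the paper's exact procedure.

```python
# Vet a newly discovered link by comparing its latency samples to a benign baseline.
import numpy as np
from scipy.stats import mannwhitneyu

def vet_link(new_link_samples, baseline_samples, alpha=0.01):
    # One-sided test: is the new link's latency distribution shifted upward?
    stat, p_value = mannwhitneyu(new_link_samples, baseline_samples, alternative="greater")
    return ("reject (possible relayed link)" if p_value < alpha else "accept"), p_value

rng = np.random.default_rng(0)
baseline = rng.normal(0.50, 0.05, 500)   # ms, latencies of known-benign links
genuine = rng.normal(0.50, 0.05, 40)     # vetting samples for a benign new link
relayed = rng.normal(1.80, 0.20, 40)     # a user-space relay adds visible delay
print(vet_link(genuine, baseline))
print(vet_link(relayed, baseline))
```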

Proceedings ArticleDOI
01 Jul 2017
TL;DR: A network partition approach, termed Clustering-based DAP Placement (CDP), is proposed to solve the DAP placement problem in a delay-sensitive smart meter NAN, and is shown to remarkably reduce the maximum propagation latency between DAPs and their associated smart meters.
Abstract: A smart meter Neighborhood Area Network (NAN) is a significant component of the smart grid. Delay-sensitive communication in a NAN, such as the interaction of power system control signals, usually requires the maximum allowed delay to be on the order of a few milliseconds. Therefore, it is crucial to investigate how to shorten the latency and guarantee real-time communications. Since the location of Data Aggregation Points (DAPs) significantly affects the propagation latency between DAPs and their associated smart meters, in this paper we aim at tackling the DAP placement problem in a delay-sensitive smart meter NAN. Specifically, the DAP placement problem is formulated first. Then, a network partition approach, termed Clustering-based DAP Placement (CDP), is proposed to solve the problem. Extensive simulations are conducted based on an actual neighborhood topology. The simulation results demonstrate that the proposed CDP is able to remarkably reduce the maximum propagation latency between DAPs and their associated smart meters.
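A minimal sketch of a clustering-style placement: group smart meters by location, place one DAP per cluster at its centroid, and treat the worst meter-to-DAP distance as a proxy for maximum propagation latency. The actual CDP formulation and topology are not given in the abstract; the coordinates and cluster counts below are illustrative.

```python
# Clustering-based DAP placement sketch on synthetic meter locations.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
meters = rng.uniform(0, 1000, size=(300, 2))      # meter coordinates in meters

def place_daps(meters, num_daps):
    km = KMeans(n_clusters=num_daps, n_init=10, random_state=0).fit(meters)
    daps = km.cluster_centers_
    dists = np.linalg.norm(meters - daps[km.labels_], axis=1)
    return daps, dists.max()                      # worst-case distance ~ worst latency

for k in (2, 4, 8):
    _, worst = place_daps(meters, k)
    print(f"{k} DAPs -> worst meter-to-DAP distance: {worst:.0f} m")
```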

Proceedings ArticleDOI
01 Jul 2017
TL;DR: New lower bounds for the maximum and average CSI staleness of any protocol are derived and a simple one-step greedy protocol is proposed for any network size and any number of CSI estimates disseminated per packet.
Abstract: This paper studies an "age of information" problem in fully-connected wireless networks with time-varying reciprocal channels and packetized transmissions. Specifically, a scenario where each node in the network wishes to maintain a table of global channel state information (CSI) is considered. Each node updates its global CSI table in two ways: (i) direct channel measurements through standard channel estimation techniques and (ii) indirect observations of channels through CSI dissemination from other nodes in the network. Information aging, i.e., CSI staleness, occurs due to timeslotting and contention for the common channel resources. This paper derives new lower bounds for the maximum and average CSI staleness of any protocol. These bounds generalize previously developed bounds by allowing for any number of CSI estimates to be disseminated in each packet. A simple one-step greedy protocol is also proposed for any network size and any number of CSI estimates disseminated per packet. Numerical results are provided to demonstrate the achieved staleness of the greedy protocol with respect to the bounds and to also quantify CSI staleness in terms of various network parameters.

Proceedings ArticleDOI
01 Jul 2017
TL;DR: Two models for UAV performance are proposed: the first considers basic UAV operations, while the second considers the UAV's physical characteristics as well as the mission specifications to predict the UAV flight time and the number of waypoints it can safely traverse.
Abstract: Unmanned Aerial Vehicles (UAVs) are increasingly being adopted for military and civilian applications. UAVs available on the market are well known to be resource constrained, especially in terms of available energy. As a result, it is very challenging to predict the critical performance characteristics of a UAV, such as flight time or the ability of a UAV to complete a mission, given the system parameters. Nevertheless, such predictions would have several benefits, such as improving the effectiveness of mission planners and optimization algorithms in general, as well as enabling researchers to perform more realistic simulations. The goal of this paper is to gain understanding of how physical, mechanical, or electrical hardware aspects of a UAV affect the UAV's performance and ultimately its capability to accomplish a mission. We propose two models for UAV performance. The first model considers basic UAV operations, while the second model considers the UAV's physical characteristics as well as the mission specifications to predict the UAV flight time and the number of waypoints it can safely traverse. We validate our models through experiments using a real test-bed based on 3DR Solo UAVs. The results show that our approach is able to reliably predict performance to within a margin of error of less than 5%.
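A back-of-the-envelope version of the basic flight-time model: battery energy divided by average electrical power draw, then a rough count of waypoints the UAV can cover given cruise speed and leg length. All numbers below are illustrative, not parameters measured on the 3DR Solo in the paper.

```python
# Crude energy-based flight-time and waypoint estimate (illustrative numbers).
battery_wh = 5.2 * 14.8            # 5200 mAh at 14.8 V -> ~77 Wh
avg_power_w = 220.0                # assumed average hover/cruise draw
safety_margin = 0.8                # reserve 20% of the battery

flight_time_min = battery_wh / avg_power_w * 60 * safety_margin
cruise_speed_mps = 8.0
leg_length_m = 150.0               # average distance between waypoints
waypoints = int(flight_time_min * 60 * cruise_speed_mps / leg_length_m)
print(f"estimated flight time: {flight_time_min:.1f} min, waypoints: {waypoints}")
# ~16.8 min of usable flight and on the order of 50 waypoints under these assumptions.
```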

Proceedings ArticleDOI
01 Jul 2017
TL;DR: This paper presents LASER, a deep learning approach for speculative execution and replication of deadline-critical jobs, and compares it with SRQuant, a speculative-resume strategy based on quantitative analysis; both outperform Hadoop without speculation.
Abstract: Meeting desired application deadlines is crucial as the nature of cloud applications is becoming increasingly mission-critical and deadline-sensitive. Empirical studies on large-scale clusters reveal that a few slow tasks, known as stragglers, could significantly stretch job execution times. A number of strategies are proposed to mitigate stragglers by launching speculative or clone (task) attempts. These strategies often rely on a model-based approach to optimize key operating parameters and are prone to inaccuracy/incompleteness in the underlying models. In this paper, we present LASER, a deep learning approach for speculative execution and replication of deadline-critical jobs. Machine learning has been successfully used to solve a large variety of classification and prediction problems. In particular, the deep neural network (DNN), consisting of multiple hidden layers of units between input and output layers, can provide more accurate regression (prediction) than traditional machine learning algorithms. We compare LASER with SRQuant, a speculative-resume strategy that is based on quantitative analysis. Both these scheduling algorithms aim to improve Probability of Completion before Deadlines (PoCD), i.e., the probability that MapReduce jobs meet their desired deadlines, and reduce the cost of speculative execution, measured by the total (virtual) machine time. We evaluate and compare the two strategies through testbed experiments. The results show that our two strategies outperform Hadoop without speculation (Hadoop-NS) and Hadoop with speculation (Hadoop-S) by up to 89% in PoCD and 13% in cost.

Proceedings ArticleDOI
01 Jul 2017
TL;DR: This paper investigates storage layer design in a heterogeneous system considering a new type of bundled jobs where the input data and associated application jobs are submitted in a bundle, and develops a sampling-based randomized algorithm to determine the placement of input data blocks.
Abstract: Big data processing frameworks such as Hadoop have been widely adopted to process large volumes of data. A lot of prior work has focused on the allocation of resources and the execution order of jobs/tasks to improve performance in a homogeneous cluster. In this paper, we investigate storage-layer design in a heterogeneous system considering a new type of bundled jobs, where the input data and associated application jobs are submitted in a bundle. Our goal is to break the barrier between resource management and the underlying storage layer, and to improve data locality, an important performance factor for resource management, from the perspective of the storage system. We develop a sampling-based randomized algorithm for the network file system to determine the placement of input data blocks. The main idea is to query a selected set of candidate nodes and estimate their workload at run time, combining centralized and per-node information. The node with the smallest workload is selected to host the data block. Our evaluation is based on a system implementation and comprehensive experiments on NSF CloudLab platforms. We have also conducted simulations for large-scale clusters. The results show significant performance improvements in terms of execution time and data locality.
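The "sample a few candidates, pick the least loaded" idea is a power-of-d-choices style placement, sketched below. The workload-estimation function is a hypothetical stand-in; the paper combines centralized and per-node information in a way the abstract does not detail.

```python
# Sampling-based block placement sketch: probe d random candidates, pick the
# one with the smallest estimated workload.
import random

def estimated_workload(node):
    # Placeholder: a real system would query the node and/or central statistics.
    return node["queued_blocks"] + 0.5 * node["cpu_util"]

def place_block(nodes, d=3, rng=random.Random(0)):
    candidates = rng.sample(nodes, k=min(d, len(nodes)))
    return min(candidates, key=estimated_workload)

cluster = [
    {"name": "n1", "queued_blocks": 12, "cpu_util": 0.9},
    {"name": "n2", "queued_blocks": 3,  "cpu_util": 0.4},
    {"name": "n3", "queued_blocks": 7,  "cpu_util": 0.2},
    {"name": "n4", "queued_blocks": 1,  "cpu_util": 0.8},
]
print(place_block(cluster)["name"])  # the sampled node with the smallest load wins
```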

Proceedings ArticleDOI
01 Jul 2017
TL;DR: This paper proposes EDOS, an edge-assisted offloading system consisting of two major components: an Edge Assistant (EA), which runs on routers/towers to manage registered remote cloud servers and local service providers, and an Offload Agent (OA), which operates on users' devices to discover services in proximity.
Abstract: Offloading resource-intensive jobs to the cloud and nearby users is a promising approach to enhance mobile devices. This paper investigates a hybrid offloading system that considers both infrastructure-based networks and ad-hoc networks. Specifically, we propose EDOS, an edge-assisted offloading system that consists of two major components, an Edge Assistant (EA) and an Offload Agent (OA). EA runs on routers/towers to manage registered remote cloud servers and local service providers, and OA operates on users' devices to discover the services in proximity. We present the system with a suite of protocols to collect the potential service providers and algorithms to allocate tasks according to user-specified constraints. To evaluate EDOS, we prototype it on commercial mobile devices and evaluate it with both experiments on a small-scale testbed and simulations. The results show that EDOS is effective and efficient for offloading jobs.

Proceedings ArticleDOI
01 Jul 2017
TL;DR: A mobile sensing technique is introduced to detect a nearby active television, the channel it is tuned to, and whether programming is received over the air or through alternate means such as cable or satellite TV.
Abstract: We introduce a mobile sensing technique to detect a nearby active television, the channel it is tuned to, and whether it is receiving this channel over the air or not. This technique can find applications in tracking TV viewership, second screen services and advertising, as well as improving the efficiency of TV white space spectrum usage. The technique uses a three-stage detection process: It first uses a Gaussian mixture model on audio recordings from mobile phones to detect likely TV sounds in the area. It then correlates the recording with known TV channel audio to identify the channel and improve detection robustness. Finally, it applies a latency analysis to determine whether programming is received over-the-air or through alternate means such as cable or satellite TV. Our system is evaluated using diverse datasets that take into account different realistic scenarios of indoor environments for several users. The results show that the system can achieve an area under the curve (AUC) of 0.9979 and a false negative rate of 0.0132.
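The three stages can be sketched with off-the-shelf components: a Gaussian mixture model scoring short audio frames, a cross-correlation against a known channel's audio, and the correlation lag read as latency. The features, thresholds, and signals below are synthetic placeholders, not the paper's audio pipeline.

```python
# Stage 1: GMM likelihood gate; stages 2-3: channel match and latency via lag.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
tv_training_frames = rng.normal(0.0, 1.0, size=(2000, 8))   # stand-in frame features
gmm = GaussianMixture(n_components=4, random_state=0).fit(tv_training_frames)

recording_frames = rng.normal(0.0, 1.0, size=(200, 8))
tv_likely = gmm.score(recording_frames) > -12.0              # likelihood threshold

reference = rng.normal(0.0, 1.0, 4000)                       # known channel audio
recording = np.concatenate([rng.normal(0.0, 1.0, 300), reference])[:4000]
corr = np.correlate(recording, reference, mode="full")
lag = int(np.argmax(corr)) - (len(reference) - 1)            # delay of the match
print(f"TV detected: {tv_likely}, best-match lag: {lag} samples")
```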

Proceedings ArticleDOI
01 Jul 2017
TL;DR: Byte Uniform Sampling (BUS) is a sampling method for estimating flow volumes; it can be combined with existing unweighted estimation algorithms to yield a weighted algorithm, enabling an asymptotic update-time improvement over existing, slower weighted algorithms.
Abstract: Monitoring flow volumes is a fundamental capability in network measurement. Sampling is often used to cope with the line speed, and the applied methods typically rely on uniform packet sampling. However, this is inaccurate when there is a large variance in packet sizes. In this work we introduce Byte Uniform Sampling (BUS), a sampling method for estimating flow volumes. We show that BUS can be combined with existing unweighted estimation algorithms and that the result is a weighted algorithm. BUS enables an asymptotic update time improvement, as existing weighted algorithms are slower. We formally analyze BUS and evaluate it on five Internet traces. Finally, we extend the DPDK version of Open vSwitch to support BUS and demonstrate similar throughput compared to uniform packet sampling.
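To illustrate why size-aware sampling helps when packet sizes vary widely, here is a sketch under one plausible reading of byte-level uniform sampling: each byte is sampled with probability p, so a packet of size s is selected with probability 1 - (1 - p)^s, and sampled packets are reweighted by the inverse of that probability. The paper's exact estimator may differ; this is only an illustration of the principle.

```python
# Size-aware packet sampling with inverse-probability (Horvitz-Thompson) reweighting.
import random
from collections import defaultdict

def estimate_volumes(packets, p=0.001, rng=random.Random(0)):
    """packets: iterable of (flow_id, size_in_bytes)."""
    est = defaultdict(float)
    for flow, size in packets:
        keep_prob = 1.0 - (1.0 - p) ** size      # packet kept if any of its bytes is sampled
        if rng.random() < keep_prob:
            est[flow] += size / keep_prob        # unbiased contribution of a sampled packet
    return dict(est)

# Toy trace: one flow of many small packets, one flow of few large packets.
trace = [("small_flow", 64)] * 20000 + [("big_flow", 1500)] * 2000
print({k: round(v) for k, v in estimate_volumes(trace).items()})
# True volumes for comparison: small_flow = 1,280,000 bytes, big_flow = 3,000,000 bytes.
```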