Showing papers presented at "International Conference on Computer Communications and Networks in 2019"


Proceedings ArticleDOI
01 Jul 2019
TL;DR: This paper provides a comprehensive analysis of potential opportunities, new requirements, and principles of designing blockchain-based SCM systems, and discusses four crucial technical challenges in terms of scalability, throughput, access control, and data retrieval.
Abstract: Supply chain management (SCM) is fundamental for gaining financial, environmental and social benefits in the supply chain industry. However, traditional SCM mechanisms usually suffer from a wide range of issues such as lack of information sharing, long delays for data retrieval, and unreliability in product tracing. Recent advances in blockchain technology show great potential to tackle these issues due to its salient features including immutability, transparency, and decentralization. Although there are some proof-of-concept studies and surveys on blockchain-based SCM from the perspective of logistics, the underlying technical challenges are not clearly identified. In this paper, we provide a comprehensive analysis of potential opportunities, new requirements, and principles of designing blockchain-based SCM systems. We summarize and discuss four crucial technical challenges in terms of scalability, throughput, access control, and data retrieval, and review promising solutions. Finally, a case study of designing a blockchain-based food traceability system is reported to provide more insight into how to tackle these technical challenges in practice.

87 citations


Proceedings ArticleDOI
Zhenjie Yang1, Yong Cui1, Baochun Li, Yadong Liu1, Yi Xu 
01 Jul 2019
TL;DR: As SD-WAN based multi-objective networking has been widely discussed to provide high-quality and complicated services, the opportunities and challenges brought by new techniques and network protocols are explored.
Abstract: Emerging applications and operational scenarios raise strict requirements for long-distance data transmission, driving network operators to design wide area networks from a new perspective. Software-defined wide area network, i.e., SD-WAN, has been regarded as a promising architecture for the next-generation wide area network. To demystify the software-defined wide area network, we revisit the status and challenges of legacy wide area networks. We briefly introduce the architecture of the software-defined wide area network. From the bottom layer to the top, we survey representative advances in each layer of the software-defined wide area network. As SD-WAN based multi-objective networking has been widely discussed as a way to provide high-quality and complex services, we explore the opportunities and challenges brought by new techniques and network protocols.

65 citations


Proceedings ArticleDOI
01 Jul 2019
TL;DR: New Data Mining approaches particularly tailored for the IoT scenario have been investigated, in particular with respect to the promising, emerging novel distributed computing paradigm of Edge Computing.
Abstract: The Internet of Things (IoT) enables the interconnection of new cyber-physical devices which generate significant traffic of distributed, heterogeneous and dynamic data at the network edge. Since several IoT applications demand short response times (e.g., industrial applications, emergency management, real-time systems) and, at the same time, rely on resource-constrained devices, the adoption of traditional Data Mining techniques is neither effective nor efficient. Therefore, conventional Data Mining techniques need to be adjusted to optimize response times, energy consumption and data traffic while still providing adequate accuracy as required by the IoT application. In this paper, new Data Mining approaches particularly tailored to the IoT scenario are investigated, in particular with respect to the promising, emerging distributed computing paradigm of Edge Computing. In detail, two approximated versions of the K-Means clustering algorithm, one centralized and one distributed, have been implemented in the EdgeCloudSim simulation framework and validated on a real system. As highlighted by the algorithm performance analysis, choosing an approximated and distributed clustering solution can provide benefits in terms of computation, communication and energy consumption, while maintaining high levels of accuracy. The management of such a trade-off obviously has to be done in light of the specific IoT application requirements.
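
As a rough illustration of the distributed, approximated clustering idea, the sketch below (in Python, not the paper's EdgeCloudSim implementation) runs K-Means on a sample of each edge node's local data and merges only the size-weighted centroids at a coordinator; all data and parameters are invented for the example.

import numpy as np
from sklearn.cluster import KMeans

def edge_summary(local_data, k, sample_frac=0.2, seed=0):
    """Cluster a random sample of one node's data; return centroids and their weights."""
    rng = np.random.default_rng(seed)
    n = max(k, int(len(local_data) * sample_frac))
    sample = local_data[rng.choice(len(local_data), size=n, replace=False)]
    km = KMeans(n_clusters=k, n_init=5, random_state=seed).fit(sample)
    weights = np.bincount(km.labels_, minlength=k)
    return km.cluster_centers_, weights

def merge_summaries(summaries, k, seed=0):
    """Weighted K-Means over all edge centroids yields the global model."""
    centers = np.vstack([c for c, _ in summaries])
    weights = np.concatenate([w for _, w in summaries])
    return KMeans(n_clusters=k, n_init=5, random_state=seed).fit(centers, sample_weight=weights)

# Example: three edge nodes with synthetic local data.
nodes = [np.random.default_rng(i).normal(i * 5, 1.0, size=(1000, 2)) for i in range(3)]
global_km = merge_summaries([edge_summary(d, k=3, seed=i) for i, d in enumerate(nodes)], k=3)
print(global_km.cluster_centers_)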

33 citations


Proceedings ArticleDOI
01 Jul 2019
TL;DR: In this paper, a vision for a blockchain-based Mobility-as-a-Service (MaaS) as an application of edge computing is presented, which has the potential to emerge as a main component of smart city transportation, offering efficiency and reducing carbon dioxide emissions.
Abstract: In this paper, we present a vision for a blockchain-based Mobility-as-a-Service (MaaS) as an application of edge computing. In current MaaS systems, a central MaaS operator plays a crucial role, serving as an intermediate layer which manages and controls the connections between transportation providers and passengers along with several other features. Since the willingness of public and private transportation providers to connect to this layer is essential to the current realization of MaaS, we propose a novel blockchain-based MaaS that eliminates this layer. The solution also improves trust and transparency for all stakeholders and eliminates the need to make commercial agreements with separate MaaS agents. From a technical perspective, computing power and resources are distributed to the different transportation providers at the edge of the network, providing trust in a decentralised way. The blockchain-based MaaS has the potential to emerge as a main component of smart city transportation, offering efficiency and reducing carbon dioxide emissions.

26 citations


Proceedings ArticleDOI
01 Jul 2019
TL;DR: This paper defines the new SSEC paradigm that is motivated by a few underlying technology trends and presents a few representative real-world case studies of SSEC applications and several key research challenges that exist in those applications.
Abstract: This paper overviews the state of the art, research challenges, and future opportunities in an emerging research direction: Social Sensing based Edge Computing (SSEC). Social sensing has emerged as a new sensing application paradigm where measurements about the physical world are collected from humans or from devices on their behalf. The advent of edge computing pushes the frontier of computation, service, and data along the cloud-to-things continuum. The merging of these two technical trends generates a set of new research challenges that need to be addressed. In this paper, we first define the new SSEC paradigm that is motivated by a few underlying technology trends. We then present a few representative real-world case studies of SSEC applications and several key research challenges that exist in those applications. Finally, we envision a few exciting research directions in future SSEC. We hope this paper will stimulate discussions of this emerging research direction in the community.

24 citations


Proceedings ArticleDOI
Tahmid Rashid1, Daniel Zhang1, Zhiyu Liu1, Hai Lin1, Dong Wang1 
01 Jul 2019
TL;DR: A novel spatiotemporal-aware drone sensing system driven by harnessing social media signals, a process known as social sensing, is developed; it significantly outperforms current drone and social sensing baselines in terms of accuracy and deadline hit rate.
Abstract: While autonomous unmanned aerial vehicles (UAVs) have attained a reputable stance in modern disaster response applications, their practical adoption is impeded by various constraints (e.g., requiring manual input, battery life). In this paper, we develop a novel spatiotemporal-aware drone sensing system that is driven by harnessing social media signals, a process known as social sensing. Social sensing has emerged as a new sensing paradigm where humans act as "sensors" to report their observations about the physical world. However, maneuvering drones with "social signals" introduces a new realm of challenges. The first challenge is to drive the drones by leveraging noisy and unreliable social media signals. The second challenge is to optimize the drone deployment by exploring the highly dynamic and latent correlations among event locations. In this paper, we present CollabDrone, which devises a new spatiotemporal correlation inference model and a game-theoretic drone dispatching mechanism to address the above challenges. The evaluation results on a real-world case study show that CollabDrone significantly outperforms current drone and social sensing baselines in terms of accuracy and deadline hit rate.

22 citations


Proceedings ArticleDOI
01 Jul 2019
TL;DR: In this paper, the allocation of deep learning at the network edge and directly in the Internet of Things (IoT) devices is considered and an online learning approach is designed to enable small data subset training and continuous model updating to ensure accuracy requirements in time-sensitive environments.
Abstract: Deep learning, as an increasingly powerful and popular data analysis tool, has the potential to improve smart grid operation. One critical issue is that the accuracy of deep learning relies heavily on the integrity of the training dataset, and the data collection process is time-consuming and complex, so applying deep learning may not satisfy the needs of time-sensitive applications. Moreover, in the smart grid, predictions must be timely and cannot wait for the initial dataset to be completely collected by the sensors. Also, the traditional centralized data analytics structure requires the entire dataset to be uploaded to the cloud datacenter for analysis, which consumes significant network resources and increases network congestion. To address these problems, in this paper we consider the allocation of deep learning at the network edge and directly in the Internet of Things (IoT) devices and design an online learning approach to enable training on small data subsets and continuous model updating to ensure accuracy requirements in time-sensitive environments. In our online learning approach, we implement the Just Another Network model, an optimized Long Short-Term Memory neural network model, to reduce the computation overhead of the deep learning training process. We evaluate our approach using a real-world smart grid dataset. Our experimental results show that our online learning approach significantly reduces the training time while satisfying the accuracy requirements.
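
A minimal sketch of the online-learning idea follows, using a standard Keras LSTM as a stand-in for the paper's Just Another Network model: the model is trained on a small initial window of readings and then updated continuously as new data arrives, instead of waiting for a complete dataset. The window size and the synthetic load signal are assumptions.

import numpy as np
import tensorflow as tf

WINDOW = 24  # hypothetical: 24 past readings predict the next one

model = tf.keras.Sequential([
    tf.keras.Input(shape=(WINDOW, 1)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

def make_batch(series, start, size):
    """Slice `size` overlapping windows (and their targets) from the stream."""
    x = np.stack([series[i:i + WINDOW] for i in range(start, start + size)])
    y = series[start + WINDOW:start + WINDOW + size]
    return x[..., None], y

stream = np.sin(np.linspace(0, 60, 2000)) + 0.1 * np.random.randn(2000)  # fake load data

# Small initial training set, then continuous updates as readings arrive.
x0, y0 = make_batch(stream, 0, 200)
model.fit(x0, y0, epochs=3, verbose=0)
for t in range(200, 1900, 50):          # each step: new readings trigger a model update
    xb, yb = make_batch(stream, t, 50)
    model.train_on_batch(xb, yb)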

19 citations


Proceedings ArticleDOI
01 Jul 2019
TL;DR: This work performs controlled and real world experiments over multiple paths with differing loss rates and round trip latencies to assess the effect of primary path selection, and the range of issues that arise from selecting the under-performing path.
Abstract: Today's smartphones are equipped with both Wi-Fi and cellular interfaces, creating usage opportunities for protocols such as Multi-path TCP (MPTCP), which enable devices to use more than one interface concurrently. One of the biggest hurdles in implementing MPTCP is the heterogeneity in performance characteristics that exists across multiple interfaces. This makes the selection of primary interface of paramount importance, as this interface is also used for DNS resolution. In this work, we explore performance and IP reachability over real world networks. Our findings indicate that widespread MPTCP deployment faces significant obstacles. In particular, we perform controlled and real world experiments over multiple paths with differing loss rates and round trip latencies to assess the effect of primary path selection, and the range of issues that arise from selecting the under-performing path. Using results from our experiments, we show how heterogeneous paths can adversely affect MPTCP performance, especially when one path is lossy.

18 citations


Proceedings ArticleDOI
01 Jul 2019
TL;DR: This paper contributes a useful French satiric dataset to the research community and provides a machine learning based satiric news detection system to automate classification, and presents preliminary results of research designed to discriminate real news from satiric stories and thus ultimately reduce the distribution of false and satiric news.
Abstract: The topic of deceptive and satiric news has drawn attention from both the public and the academic community, as such misinformation has the potential to have extremely adverse effects on individuals and society. Detecting false and satiric news automatically is a challenging problem in deception detection, and it has tremendous real-world political and social influence. In this paper, we contribute a useful French satiric dataset to the research community and provide a satiric news detection system that uses machine learning to automate classification. In addition, we present the preliminary results of our research designed to discriminate real news from satiric stories, and thus ultimately reduce false and satiric news distribution.
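
For illustration only, a bag-of-words baseline of the kind commonly used for such classification tasks is sketched below (TF-IDF features plus a linear classifier); the paper's actual features and models are not specified here, and the tiny inline French corpus is invented.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "Le gouvernement annonce une réforme du budget 2019.",          # real (label 0)
    "Le président déclare que la lune sera privatisée mardi.",      # satire (label 1)
    "La banque centrale maintient ses taux directeurs.",            # real
    "Un ministre propose de remplacer les impôts par des câlins.",  # satire
]
labels = [0, 1, 0, 1]

clf = make_pipeline(
    TfidfVectorizer(lowercase=True, ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
clf.fit(texts, labels)
print(clf.predict(["Le sénat vote une loi rendant la gravité optionnelle."]))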

17 citations


Proceedings ArticleDOI
01 Jul 2019
TL;DR: A hardware/software platform that integrates IoT technologies for deploying a smart water network in a domestic environment and preliminary results according to the data available after two months of operation are presented.
Abstract: The paper presents a hardware/software platform that integrates IoT technologies for deploying a smart water network in a domestic environment. Low-cost sensors and edge nodes have been deployed in a real pilot for monitoring water fixture use. A data collection platform, which can run in the Fog or in the Cloud, allows for processing of raw data and monitoring. It makes raw and intermediate data available to offload the computation of complex Big Data analytics. The paper presents preliminary results based on the data available after two months of operation.

16 citations


Proceedings ArticleDOI
01 Jul 2019
TL;DR: In this article, a distributed pub-sub communication framework for building management systems (BMS) over the Named Data Networking (NDN) architecture is presented, which employs a data synchronization mechanism to aggregate multiple data streams published by multiple sensing devices and achieve efficient notification of new data for the consumers.
Abstract: Publish-subscribe (pub-sub) has been recognized as a common communication pattern in IoT applications. In this paper we present ndnBMS-PS, a distributed pub-sub communication framework for building management systems (BMS), an important area of IoT, over the Named Data Networking (NDN) architecture. ndnBMS-PS utilizes distributed NDN repositories to store and republish large quantities of BMS data that can be consumed by different applications. It employs a data synchronization mechanism to aggregate multiple data streams published by multiple sensing devices and achieve efficient notification of new data for the consumers. ndnBMS-PS also provides data authentication by utilizing NDN's security building blocks. This design exercise demonstrates that the information-centric architecture enables a simple design for complex IoT systems and provides superior system efficiency and security over TCP/IP-based alternatives.

Proceedings ArticleDOI
01 Jul 2019
TL;DR: Simulation results show that this proposed game-based combinatorial double auction model for cloud resource allocation not only forms a fair incentive mechanism for all parties in the transaction, but also optimizes the social welfare.
Abstract: Cloud computing integrates a large number of resources through virtualization technology and then provides users with personalized services on an on-demand basis. In response to this service model, this paper draws on economic theory and proposes a game-based combinatorial double auction model for cloud resource allocation. Firstly, through Harsanyi transformation, the incomplete-information game for cloud resource allocation is converted into a complete but imperfect information game, and the Bayesian Nash equilibrium solution is obtained. Then, we design a resource allocation model that supports multiple infrastructure providers and service providers bidding on various combinations of resources. Considering both parties' interests, this model ensures fairness and high resource utilization. Simulation results show that this method not only forms a fair incentive mechanism for all parties in the transaction, but also optimizes social welfare.
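
As a loose illustration of double-auction matching (not the paper's combinatorial, Bayesian-game model), the sketch below clears a single-resource market by pairing the highest bids with the lowest asks and trading at the midpoint price.

def clear_double_auction(bids, asks):
    """bids: (buyer, price) offers; asks: (seller, price) offers.
    Match highest bids with lowest asks while a bid still covers an ask;
    each matched pair trades at the midpoint price."""
    bids = sorted(bids, key=lambda b: b[1], reverse=True)
    asks = sorted(asks, key=lambda a: a[1])
    trades = []
    for (buyer, bid), (seller, ask) in zip(bids, asks):
        if bid < ask:
            break
        trades.append((buyer, seller, (bid + ask) / 2))
    return trades

buyers = [("SP1", 10.0), ("SP2", 7.5), ("SP3", 4.0)]   # service providers' bids
sellers = [("IP1", 5.0), ("IP2", 6.0), ("IP3", 9.0)]   # infrastructure providers' asks
print(clear_double_auction(buyers, sellers))
# -> [('SP1', 'IP1', 7.5), ('SP2', 'IP2', 6.75)]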

Proceedings ArticleDOI
29 Jul 2019
TL;DR: The contribution of this article is to formalize the relevant multi-objective optimization problem, and develop Particle Swarm Optimization (PSO) based techniques for optimization of the individual objectives, which are then exploited in an iterative manner.
Abstract: In this work, we consider a wireless communication system consisting of multiple rotary-wing unmanned aerial vehicles (UAVs) used as aerial base stations (ABSs) in order to provide downlink connectivity to user terminals (UEs) on the ground. Towards investigating power-efficient deployment strategies for such a system, the contribution of this article is twofold: first, we formalize the relevant multi-objective optimization problem; second, we develop Particle Swarm Optimization (PSO) based techniques for optimization of the individual objectives, which are then exploited in an iterative manner. The relevant optimization objectives for reducing the total power consumed are the number of base stations (BSs) and their transmit powers. The optimization is performed while assuring minimum quality-of-service (QoS) constraints such as per-user coverage probability and per-user rate. Through system-level simulations, we show that the developed approach ensures great reductions in both the number of base stations and their individual transmit powers, thus saving initial deployment cost as well as reducing operational costs induced by energy consumption.
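
A generic PSO loop is sketched below with a toy objective standing in for the paper's power model: a fixed number of ABS positions is optimized to minimize the sum of squared UE-to-nearest-ABS distances, a crude proxy for required transmit power; QoS constraints, altitude and the iterative reduction of the BS count are omitted.

import numpy as np

rng = np.random.default_rng(0)
UES = rng.uniform(0, 100, size=(60, 2))   # hypothetical ground-user positions
K = 3                                      # number of ABSs in this toy instance

def objective(flat_pos):
    abs_pos = flat_pos.reshape(K, 2)
    d2 = ((UES[:, None, :] - abs_pos[None, :, :]) ** 2).sum(-1)  # squared distances
    return d2.min(axis=1).sum()            # each UE is served by its nearest ABS

def pso(obj, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    x = rng.uniform(0, 100, size=(n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([obj(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        vals = np.array([obj(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

best_pos, best_val = pso(objective, dim=2 * K)
print(best_pos.reshape(K, 2), best_val)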

Proceedings ArticleDOI
01 Jul 2019
TL;DR: A hybrid Online Offline system in which the Offline model retains general characteristics of network traffic while the Online model continuously learns, which achieves over 95% accuracy on known anomalies and over 60% detection rate on most of the unknown anomalies.
Abstract: With the advancement in technology, normal network traffic is becoming more heterogeneous. In this scenario, the problem of detecting anomalies is intensified. In the literature, offline methods see more data and can be optimised to achieve lower false positive rates. However, they cannot readily adapt to changing network conditions or capture concept-drift. This necessitates an incremental online learning model. On the other hand, online training is easily affected by noise. In this paper, we propose a hybrid Online Offline system in which the Offline model retains general characteristics of network traffic while the Online model continuously learns. The Offline model acts as a bias for the Online model to select new data to learn from. The Online model retains its knowledge and adapts to the changing ground truth. They are put to work together to detect anomalies. We implement this idea with an Online Support Vector Machine (SVM) which retains its support vectors and shifts its decision boundary guided by an Offline Radius Nearest Neighbor (Rad-NN). The method is evaluated on the NSL-KDD 2009 dataset. This relatively simple model achieves over 95% accuracy on known anomalies and over 60% detection rate on most of the unknown anomalies.
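
A minimal sketch of the hybrid idea using off-the-shelf components follows: an offline radius-nearest-neighbour model (trained once) screens incoming samples, and only the samples it is confident about are used to incrementally update a linear classifier trained with hinge loss, standing in for the paper's online SVM; the data are synthetic.

import numpy as np
from sklearn.linear_model import SGDClassifier
from sklearn.neighbors import RadiusNeighborsClassifier

rng = np.random.default_rng(1)
X_off = np.vstack([rng.normal(0, 1, (200, 5)), rng.normal(4, 1, (200, 5))])
y_off = np.array([0] * 200 + [1] * 200)          # 0 = normal, 1 = anomaly (synthetic)

offline = RadiusNeighborsClassifier(radius=3.5, outlier_label=-1).fit(X_off, y_off)
online = SGDClassifier(loss="hinge").fit(X_off, y_off)   # warm start on offline data

def process(stream_X):
    preds = []
    for x in stream_X:
        x = x.reshape(1, -1)
        guide = offline.predict(x)[0]              # offline model's opinion (or -1)
        preds.append(online.predict(x)[0])         # online model makes the call
        if guide != -1:                            # offline model is confident:
            online.partial_fit(x, [guide])         # let the online model learn from it
    return preds

stream = np.vstack([rng.normal(0.5, 1, (30, 5)), rng.normal(4.5, 1, (30, 5))])
print(process(stream))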

Proceedings ArticleDOI
29 Jul 2019
TL;DR: KCBP and KCBP-WC, two container image placement algorithms which aim to reduce the maximum retrieval time of container images, are presented, based on k-Center optimization.
Abstract: Edge computing promises to extend Clouds by moving computation close to data sources to facilitate short-running and low-latency applications and services. Providing fast and predictable service provisioning time presents a new and mounting challenge, as the scale of Edge-servers grows and the heterogeneity of networks between them increases. This paper is driven by a simple question: can we place container images across Edge-servers in such a way that an image can be retrieved by any Edge-server quickly and in a predictable time? To this end, we present KCBP and KCBP-WC, two container image placement algorithms which aim to reduce the maximum retrieval time of container images. KCBP and KCBP-WC are based on k-Center optimization. However, KCBP-WC tries to avoid placing large layers of a container image on the same Edge-server. Evaluations using trace-driven simulations show that KCBP and KCBP-WC can be applied to various network configurations and reduce the maximum retrieval time of container images by 1.1x to 4x compared to state-of-the-art placements (i.e., Best-Fit and Random).
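
The classic greedy (farthest-first) k-Center heuristic that such placement builds on is sketched below; KCBP's per-layer placement and KCBP-WC's spreading of large layers are not modelled, and the latency matrix is invented.

import numpy as np

def greedy_k_center(dist, k):
    """dist: symmetric (n x n) latency matrix; returns indices of k chosen servers."""
    centers = [0]                                    # arbitrary first center
    while len(centers) < k:
        d_to_nearest = dist[:, centers].min(axis=1)
        centers.append(int(d_to_nearest.argmax()))   # farthest-first traversal
    return centers

rng = np.random.default_rng(2)
pts = rng.uniform(0, 10, size=(12, 2))                       # 12 hypothetical Edge-servers
dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)  # latency proxy
centers = greedy_k_center(dist, k=3)
print("replica servers:", centers,
      "max retrieval distance:", dist[:, centers].min(axis=1).max())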

Proceedings ArticleDOI
01 Jul 2019
TL;DR: The paper examines the development of production systems from automated to data analysis-supported process control and answers the question of which requirements an IT architecture for prescriptive automation has to fulfill.
Abstract: The paper examines the development of production systems from automated to data analysis-supported process control. In current concepts, process optimization is carried out by data analysis with the help of a decision support system after the production process. Prescriptive automation envisages controlling the process beforehand and autonomously on the basis of a prescriptive analytics model. The development of an IT architecture is identified as an essential part of the overall concept. On the basis of expert interviews and current literature reviews, we answer the question of which requirements an IT architecture for prescriptive automation has to fulfill. These requirements are then matched against solution components with the goal of a modular architectural concept. On the basis of the requirements and the solution components they entail, a reference architecture is identified under assumptions about the available data processing resources. The main processing components of this architecture are a combination of edge and cloud computing.

Proceedings ArticleDOI
01 Jul 2019
TL;DR: This work explores the applicability of two well-known machine learning approaches, Artificial Neural Networks (ANN) and Support Vector Machines (SVM), to detect intrusions or anomalous behavior in the cloud environment, and observes that with a proper feature set, the SVM and ANN techniques achieve anomaly detection accuracies of 91% and 92% respectively.
Abstract: Cloud computing is gaining significant traction and virtualized data centers are becoming popular as a cost-effective infrastructure in the telecommunication industry. Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS) are being widely deployed and utilized by end users, including many private as well as public organizations. Despite its widespread acceptance, security is still the biggest threat in cloud computing environments. Users of cloud services are under constant fear of data loss, security breaches, information theft and availability issues. Recently, learning-based methods for security applications have been gaining popularity in the literature with advances in machine learning (ML) techniques. In this work, we explore the applicability of two well-known machine learning approaches, namely Artificial Neural Networks (ANN) and Support Vector Machines (SVM), to detect intrusions or anomalous behavior in the cloud environment. We have developed ML models using ANN and SVM techniques and have compared their performance. We have used the UNSW-NB-15 dataset to train and test the models. In addition, we have performed feature engineering and parameter tuning to find the optimal set of features with maximum accuracy, so as to reduce the training time and complexity of the ML models. We observe that with a proper feature set, the SVM and ANN techniques achieve anomaly detection accuracies of 91% and 92% respectively, higher than those reported in the literature, with a reduced number of features needed to train the models.
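
A minimal sketch of the comparison follows, with synthetic data standing in for UNSW-NB-15: the selected features are scaled, then an SVM and a small ANN are trained and their detection accuracy compared.

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=3000, n_features=20, n_informative=10,
                           weights=[0.7, 0.3], random_state=0)   # 1 = anomalous flow
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
ann = make_pipeline(StandardScaler(), MLPClassifier(hidden_layer_sizes=(64, 32),
                                                    max_iter=500, random_state=0))
for name, model in [("SVM", svm), ("ANN", ann)]:
    model.fit(X_tr, y_tr)
    print(name, "accuracy:", round(model.score(X_te, y_te), 3))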

Proceedings ArticleDOI
01 Jul 2019
TL;DR: This paper proposes a novel congestion-aware IFA detection and mitigation solution and performs extensive simulations; the results clearly show the effectiveness of the proposal in detecting actual IFA attacks.
Abstract: Named Data Networking (NDN) is a promising candidate for the future internet architecture. It is one of the implementations of the Information-Centric Networking (ICN) architectures, where the focus is on the data rather than the owner of the data. While data security is assured by definition, these networks are susceptible to various Denial of Service (DoS) attacks, mainly Interest Flooding Attacks (IFA). IFAs overwhelm an NDN router with a huge number of Interests (data requests). Various solutions have been proposed in the literature to mitigate IFAs; however, these solutions do not distinguish between intentional misbehavior and unintentional misbehavior caused by network congestion. In this paper, we propose a novel congestion-aware IFA detection and mitigation solution. We performed extensive simulations and the results clearly show the effectiveness of our proposal in detecting actual IFA attacks.

Proceedings ArticleDOI
01 Jul 2019
TL;DR: An engineering method is proposed, based on the exploitation and extension of social network analysis approaches combined with well-known clustering techniques and association rules, to identify similarities as well as groups of users associated with specific illegal activities such as drugs, weapons and human trafficking.
Abstract: Terrorist Networks (TNs) and Organized Crime (OC) are nowadays an increasing threat in modern society. Due to the strong adoption of IT technology, an emergent phenomenon is the exploitation of social media, such as Twitter, Facebook and YouTube, to disseminate and promote illegal activities, recruit terrorists and establish collaborations. Traditional approaches and countermeasures against cyber-crime prove inadequate in cyberspace. In this context, the paper proposes an engineering method, centered on three main phases, to support the analysis of suspicious users on social media related to OC and TNs. It is based on the exploitation and extension of social network analysis approaches combined with well-known clustering techniques and association rules. It aims to identify similarities as well as groups of users associated with specific illegal activities such as drugs, weapons and human trafficking. Moreover, it supports the identification of leaders within groups and of mediators between them. A software tool, which enables the automatic execution of the proposed method, has been developed and experimented with on the Twitter social media platform. The results show the identification of groups of users related to OC and TNs along with their intra-group activities as well as inter-group relationships through potential mediators.

Proceedings ArticleDOI
01 Jul 2019
TL;DR: This paper proposes an Affinity based simulated annealing (ABSA) heuristic approach for cost optimized delay aware placement of VNFs along with a set of greedy approaches and evaluates the implemented approaches against the optimal solution.
Abstract: Network Function Virtualization (NFV) with minimum-delay service function chains will be a key enabling technology for next generation mobile networks, such as 5G. In this paper, we present a new approach to generate problem instances for the cost optimized delay sensitive Virtualized Network Function (VNF) placement and routing problem, where we know by construction the optimal solution for the given objective function. Our approach produces problem instances, which can be used to test and benchmark heuristic algorithms against. We then implement an Affinity based simulated annealing (ABSA) heuristic approach for cost optimized delay aware placement of VNFs along with a set of greedy approaches. We evaluate the implemented approaches against the optimal solution by generating several problem instances using our proposed method.
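
A bare-bones simulated-annealing sketch for placing a single chain is shown below; the affinity heuristics of ABSA and the paper's full cost and delay model are not reproduced, and the node costs and delays are random toy values.

import math
import random

random.seed(0)
N_NODES, CHAIN_LEN = 6, 4
node_cost = [random.uniform(1, 5) for _ in range(N_NODES)]
delay = [[0 if i == j else random.uniform(1, 10) for j in range(N_NODES)]
         for i in range(N_NODES)]

def cost(placement):
    """Node price for every hosted VNF plus inter-node delay along the chain."""
    c = sum(node_cost[n] for n in placement)
    c += sum(delay[placement[i]][placement[i + 1]] for i in range(CHAIN_LEN - 1))
    return c

def anneal(iters=5000, t0=10.0, alpha=0.999):
    cur = [random.randrange(N_NODES) for _ in range(CHAIN_LEN)]
    cur_c = cost(cur)
    best, best_c, t = cur[:], cur_c, t0
    for _ in range(iters):
        cand = cur[:]
        cand[random.randrange(CHAIN_LEN)] = random.randrange(N_NODES)  # move one VNF
        d = cost(cand) - cur_c
        if d < 0 or random.random() < math.exp(-d / t):                # accept move
            cur, cur_c = cand, cur_c + d
        if cur_c < best_c:
            best, best_c = cur[:], cur_c
        t *= alpha                                                     # cool down
    return best, best_c

print(anneal())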

Proceedings ArticleDOI
01 Jul 2019
TL;DR: A dynamic programming algorithm is proposed to optimally solve for the increase ratio of the size of the dispatch region for the case where different dispatch regions do not overlap; the approach can effectively reduce the expected driver pickup distance and keep the dispatching time short.
Abstract: Ride-sharing companies such as Didi and Uber have served billions of passenger requests from all over the world. The efficiency of ride-sharing is highly dependent on the order dispatch system, which assigns passenger requests to idle drivers. However, designing such a dispatch system is challenging because of the spatial-temporal dynamics of passenger requests and the trade-off between the benefits for passengers and drivers. Existing order dispatch systems use either a system-assigning approach or a driver-grabbing approach, but either approach has its own flaws. In this paper, we propose to combine the two existing approaches and jointly consider both passengers' and drivers' interests. In our approach, a passenger request is broadcast to the drivers in a dispatch region chosen by the system. The size of the dispatch region iteratively increases until the request is accepted. We formulate an optimization problem to determine the growth speed of the dispatch region. Drivers' idle driving distances and passengers' waiting time are jointly considered. We propose a dynamic programming algorithm to optimally solve for the increase ratio of the size of the dispatch region for the case where different dispatch regions do not overlap. We further investigate the overlapped case and modify the dynamic programming algorithm correspondingly. We provide a discussion on the effect of the overlapping in a spatial case where the driver and passenger locations are uniformly distributed. Experiments are conducted based on a synthetic dataset and a real-world dataset from Didi Inc. Results show that our approach can effectively reduce the expected driver pickup distance and keep the dispatching time short, which balances both passengers' and drivers' interests.
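
The broadcast-and-grow dispatch idea can be sketched as follows; note that the paper's dynamic program chooses the growth ratio optimally, whereas this illustration assumes a fixed ratio and a fixed per-driver acceptance probability.

import random

random.seed(3)

def dispatch(passenger, drivers, r0=0.5, growth=1.5, accept_prob=0.4, max_rounds=8):
    """Broadcast the request to drivers inside a disc that grows each round until
    someone accepts; returns (accepting driver, pickup distance, rounds used)."""
    px, py = passenger
    radius = r0
    for rnd in range(1, max_rounds + 1):
        in_region = [d for d in drivers
                     if (d[0] - px) ** 2 + (d[1] - py) ** 2 <= radius ** 2]
        for d in sorted(in_region, key=lambda d: (d[0] - px) ** 2 + (d[1] - py) ** 2):
            if random.random() < accept_prob:      # a driver grabs the order
                dist = ((d[0] - px) ** 2 + (d[1] - py) ** 2) ** 0.5
                return d, dist, rnd
        radius *= growth                           # nobody accepted: enlarge the region
    return None, None, max_rounds

drivers = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(30)]
print(dispatch(passenger=(5.0, 5.0), drivers=drivers))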

Proceedings ArticleDOI
01 Jul 2019
TL;DR: The Italian Twitter community interested in diabetes, a chronic blood disease, is taken into consideration and it is shown that this is a calm community interested in several distinct topics, none of which really predominates over the others.
Abstract: Social media are nowadays used by people to talk about any kind of topic, private or not, including health problems. Concerning health, on social media one can find large communities discussing chronic diseases. Here people look for advice, information, support, and so on. In this paper we take into consideration the Italian Twitter community interested in diabetes, a chronic blood disease. Our aim is to understand which are the main conversation topics in this community. Thus, we analyzed about 9K tweets written in Italian that contained the word diabete (diabetes) and performed both a hashtag analysis and a lexicon analysis, the latter by means of ad-hoc dictionaries. Results show that this is a calm community interested in several distinct topics (e.g., disease-related information gathering, recipes for diabetes patients, fundraising campaigns), none of which really predominates over the others.
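
The hashtag-analysis step can be illustrated with a few lines of Python; the lexicon analysis with ad-hoc dictionaries is omitted and the example tweets are invented.

import re
from collections import Counter

tweets = [  # made-up examples; the study used ~9K real tweets containing "diabete"
    "Nuova ricetta senza zucchero per chi ha il #diabete #ricette",
    "Raccolta fondi per la ricerca sul #diabete di tipo 1 #ricerca",
    "Consigli utili per gestire il #diabete ogni giorno",
]

def hashtags(text):
    """Extract lowercased hashtags from one tweet."""
    return [h.lower() for h in re.findall(r"#\w+", text)]

counts = Counter(h for t in tweets for h in hashtags(t))
print(counts.most_common(5))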

Proceedings ArticleDOI
01 Jul 2019
TL;DR: The results show that channelization is effective but 60 GHz channels have non-negligible adjacent and non-adjacent channel interference, and it is possible to perform interference-aware sector selection to reduce interference but its gains can be limited in indoor environment with reflections, and sector selection should consider fairness in medium access and avoid asymmetric interference.
Abstract: Dense deployment of access points in 60 GHz WLANs can provide always-on gigabit connectivity and robustness against blockages to mobile clients. However, this dense deployment can lead to harmful interference between the links, affecting link data rates. In this paper, we attempt to better understand the interference characteristics and effectiveness of interference mitigation techniques using 802.11ad COTS devices and 60 GHz software radio based measurements. We first find that current 802.11ad COTS devices do not consider interference in sector selection, resulting in high interference and low spatial reuse. We consider three techniques of interference mitigation - channelization, sector selection and receive beamforming. First, our results show that channelization is effective but 60 GHz channels have non-negligible adjacent and non-adjacent channel interference. Second, we show that it is possible to perform interference-aware sector selection to reduce interference but its gains can be limited in indoor environment with reflections, and such sector selection should consider fairness in medium access and avoid asymmetric interference. Third, we characterize the efficacy of receive beamforming in combating interference and quantify the related overhead involved in the search for receive sector, especially in presence of blockages. We elaborate on the insights gained through the characterization and point out important outstanding problems through the study.

Proceedings ArticleDOI
01 Jul 2019
TL;DR: This paper provides a comprehensive framework for distributed and decentralized Edge Computing systems dynamically formed out of edge computing resources with the objective of harnessing the increasingly available computing at the edge.
Abstract: Over the course of the last decades there has been significant growth in smartphone penetration and capabilities. This trend presently complements the rise of IoT and of ever more complex and smarter connected devices. Together, these trends have contributed to the emergence of unprecedented computing capacity at the edge of the network, which is only expected to have a long-lasting impact with the rise of connected vehicles, robots and drones. In this context, computing will cease to be confined to certain devices located in large data centers or stationary edge devices; instead, it will be embedded in and pervasive across virtually everything. This paper provides a comprehensive framework for distributed and decentralized Edge Computing systems dynamically formed out of edge computing resources, with the objective of harnessing the increasingly available computing at the edge. In order to facilitate this, we propose an architecture for this Ad-hoc Edge Computing infrastructure. Furthermore, two critical aspects of Resource Management are evaluated in the present framework: namely, scalability and the implications of node volatility.

Proceedings ArticleDOI
01 Jul 2019
TL;DR: A deep learning based approach is developed for estimating people's socioeconomic status (SES) from their Smart Card Data (SCD), which records the temporal and spatial mobility behavior of a large population of users.
Abstract: The notion of socioeconomic status (SES) of a person or family reflects the corresponding entity's social and economic rank in society. Such information may help applications like bank loaning decisions and provide measurable inputs for related studies like social stratification, social welfare and business planning. Traditionally, estimating SES for a large population is performed by national statistical institutes through a large number of household interviews, which is highly expensive and time-consuming. Recently, researchers have tried to estimate SES from data sources like mobile phone call records and online social network platforms, which is much cheaper and faster. Instead of relying on such data about users' cyberspace behaviors, alternative data sources on users' real-world behavior, such as mobility, may offer new insights for SES estimation. In this paper, we leverage Smart Card Data (SCD) for public transport systems, which records the temporal and spatial mobility behavior of a large population of users. More specifically, we develop S2S, a deep learning based approach for estimating people's SES based on their SCD. Essentially, S2S models two types of SES-related features, namely the temporal-sequential feature and the general statistical feature, and leverages deep learning for SES estimation. We evaluate our approach on an actual dataset, Shanghai SCD, which involves millions of users. The proposed model clearly outperforms several state-of-the-art methods in terms of various evaluation metrics.
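
In the spirit of modelling a sequential feature alongside a statistical one, a two-branch network is sketched below; it is not the S2S architecture from the paper, and all dimensions and the synthetic data are assumptions.

import numpy as np
import tensorflow as tf

SEQ_LEN, SEQ_DIM, STAT_DIM, N_CLASSES = 30, 4, 8, 3   # hypothetical sizes

seq_in = tf.keras.Input(shape=(SEQ_LEN, SEQ_DIM), name="trip_sequence")
stat_in = tf.keras.Input(shape=(STAT_DIM,), name="trip_statistics")
h_seq = tf.keras.layers.LSTM(32)(seq_in)                         # temporal-sequential branch
h_stat = tf.keras.layers.Dense(16, activation="relu")(stat_in)   # statistical branch
merged = tf.keras.layers.Concatenate()([h_seq, h_stat])
out = tf.keras.layers.Dense(N_CLASSES, activation="softmax")(merged)

model = tf.keras.Model([seq_in, stat_in], out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Synthetic stand-in for smart card data.
n = 512
x_seq = np.random.rand(n, SEQ_LEN, SEQ_DIM).astype("float32")
x_stat = np.random.rand(n, STAT_DIM).astype("float32")
y = np.random.randint(0, N_CLASSES, size=n)
model.fit([x_seq, x_stat], y, epochs=2, batch_size=64, verbose=0)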

Proceedings ArticleDOI
01 Jul 2019
TL;DR: This paper presents a novel detection scheme that works on the raw ECG signal in a wearable telehealth system and benefits from the concepts of big data, sensing and pervasive computing and the emerging deep learning technology.
Abstract: The electrocardiogram (ECG) signal, as one of the most important vital signs, can provide indications of many heart-related diseases. Nonetheless, in the telehealth context, the automated analysis and accurate detection of ECG signals remain unsolved issues, because the poor quality of data collected by wearable devices and unprofessional users further increases the complexity of hand-crafted feature extraction, ultimately affecting the efficiency of feature extraction and the detection accuracy. To address this issue and improve the detection accuracy, in this paper we present a novel detection scheme that works on the raw ECG signal in a wearable telehealth system. Our system benefits from the concepts of big data, sensing and pervasive computing and the emerging deep learning technology. In particular, a Deep Heartbeat Classification (DHC) scheme is proposed to analyze the ECG signal for arrhythmia detection. Distinct from existing solutions, the detection model in DHC can be trained directly on the raw ECG signal without hand-crafted feature extraction. A cloud-based prototype system is also designed and implemented with the functions of data acquisition, wireless transmission, back-end data management, and ECG detection. The experimental results demonstrate that our prototype system is feasible and effective in real-world practice, and extensive experimentation based on the MIT-BIH database demonstrates that the proposed DHC scheme outperforms baseline schemes.
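
A minimal 1D-CNN sketch of heartbeat classification on raw ECG segments follows; the layer sizes are illustrative and do not reproduce the DHC architecture, and random segments stand in for MIT-BIH beats.

import numpy as np
import tensorflow as tf

SEG_LEN, N_CLASSES = 360, 5   # e.g., one-second segments at 360 Hz, 5 beat classes

model = tf.keras.Sequential([
    tf.keras.Input(shape=(SEG_LEN, 1)),               # raw samples, no hand-crafted features
    tf.keras.layers.Conv1D(16, kernel_size=7, activation="relu"),
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.Conv1D(32, kernel_size=5, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Synthetic segments for illustration only.
x = np.random.randn(256, SEG_LEN, 1).astype("float32")
y = np.random.randint(0, N_CLASSES, size=256)
model.fit(x, y, epochs=2, batch_size=32, verbose=0)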

Proceedings ArticleDOI
01 Jul 2019
TL;DR: A preliminary phase of data analysis is presented on a collection of Enterprise Collaboration Software (ECS) data; it introduces the concept of the employee-attitude-oriented pattern as a means to derive significant views over the overall graph and discusses Social Network Analysis (SNA) approaches that can be exploited for these purposes.
Abstract: The digital transformation of organizations is making workplace collaboration more and more powerful and work always "observable"; however, the informational and managerial potential of the generated data is still largely unutilized in Human Resource Management (HRM). Our research, conducted in collaboration with business engineers and economists, aims at exploring the relationship between digital work behaviors and employee attitudes. This paper is a work-in-progress contribution that presents a preliminary phase of data analysis we performed on a collection of Enterprise Collaboration Software (ECS) data. In the exploratory data analysis step, we analyze the data in their original table format and elaborate them according to the user who performed the action and the action performed. Then, we move to a graph representation in order to make explicit the interactions between users and the objects of their actions. Finally, we introduce the concept of the employee-attitude-oriented pattern as a means to derive significant views over the overall graph and discuss Social Network Analysis (SNA) approaches that can be exploited for our purposes.

Proceedings ArticleDOI
01 Jul 2019
TL;DR: A routing recommendation system, called Vehicle Routing Trifecta (VRT), is proposed which can jointly blend different factors with different user-entered weights while still producing well-balanced routes that conform to what users typically desire.
Abstract: In recent years, driving route recommendation has attracted growing interest from researchers and industry. However, previously proposed route recommendation systems cannot jointly consider different factors (e.g., fuel consumption, travel time, air quality) in parallel with different weights entered by users. In addition, users set the weights based on their own evaluation (e.g., a much higher weight on air quality than on fuel consumption), which may lead to a very unbalanced route (e.g., worst travel time and worst fuel consumption) that is actually not what the users desire. To handle these issues, in this paper we propose a routing recommendation system, called Vehicle Routing Trifecta (VRT), which can jointly blend the different considered factors with different user-entered weights while still producing well-balanced routes that conform to what users typically desire. VRT consists of two innovative components. First, we establish three different predictors for the air quality, travel time, and fuel consumption estimations of each road segment in the road network. Second, we design an optimal route selector, which solves a multi-criteria optimization problem based on the given user preferences over the three aspects (i.e., air quality, travel time, and fuel consumption). We conduct extensive simulation studies based on real-world, geo-tagged datasets to evaluate VRT. Comparative studies with other existing route recommendation systems show the superior performance of VRT in terms of recommending routes that meet user-entered preferences.
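
The route-selection step can be illustrated as below: per-segment estimates for travel time, fuel and air-quality exposure are blended with user weights into a single edge cost and a shortest path is computed; VRT's predictors and its handling of extreme preference settings are not reproduced, and the road segments are invented.

import networkx as nx

G = nx.DiGraph()
# (from, to, travel_time_min, fuel_l, air_exposure) -- made-up road segments
segments = [("A", "B", 5, 0.4, 2.0), ("B", "D", 7, 0.5, 1.0),
            ("A", "C", 6, 0.3, 4.0), ("C", "D", 4, 0.6, 3.5)]
for u, v, t, f, a in segments:
    G.add_edge(u, v, time=t, fuel=f, air=a)

def recommend(G, src, dst, w_time, w_fuel, w_air):
    """Blend the three per-segment estimates into one cost and find the cheapest route."""
    def cost(u, v, data):
        return w_time * data["time"] + w_fuel * data["fuel"] + w_air * data["air"]
    return nx.shortest_path(G, src, dst, weight=cost)

print(recommend(G, "A", "D", w_time=0.5, w_fuel=0.2, w_air=0.3))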

Proceedings ArticleDOI
01 Jul 2019
TL;DR: It is demonstrated that Sharma and Kalra's scheme is vulnerable to identity and password guessing, replay and session key disclosure attacks, and a secure multifactor authentication protocol is proposed that can be applied securely to a real cloud-IoT environment.
Abstract: With the development of the internet of things (IoT) and communication technology, sensors and embedded devices collect and handle a large amount of data. However, the IoT environment cannot efficiently handle such big data and is vulnerable to various attacks, because IoT is comprised of resource-limited devices and provides services over an open channel. In 2018, Sharma and Kalra proposed a lightweight multi-factor authentication protocol for the cloud-IoT environment to overcome these problems. We demonstrate that Sharma and Kalra's scheme is vulnerable to identity and password guessing, replay and session key disclosure attacks. We also propose a secure multifactor authentication protocol to resolve the security problems of Sharma and Kalra's scheme; we then analyze its security using informal analysis and compare its performance with Sharma and Kalra's scheme. The proposed scheme can be applied securely to real cloud-IoT environments.

Proceedings ArticleDOI
01 Jul 2019
TL;DR: A retrospective on Amazon's real-time spot market is provided, including its advantages and disadvantages for allocating transient servers compared to current fixed-price approaches, as well as addressing the problems that likely led to its elimination.
Abstract: Amazon introduced spot instances in December 2009, enabling "customers to bid on unused Amazon EC2 capacity and run those instances for as long as their bid exceeds the current Spot Price." Amazon's real-time computational spot market was novel in multiple respects. For example, it was the first (and to date only) large-scale public implementation of market-based resource allocation based on dynamic pricing after decades of research, and it provided users with useful information, control knobs, and options for optimizing the cost of running cloud applications. Spot instances also introduced the concept of transient cloud servers derived from variable idle capacity that cloud platforms could revoke at any time. Transient servers have since become central to efficient resource management of modern clusters and clouds. As a result, Amazon's spot market was the motivation for substantial research over the past decade. Yet, in November 2017, Amazon effectively ended its real-time spot market by announcing that users no longer needed to place bids and that spot prices would "...adjust more gradually, based on longer-term trends in supply and demand." The changes made spot instances more similar to the fixed-price transient servers offered by other cloud platforms. Unfortunately, while these changes made spot instances less complex, they eliminated many benefits to sophisticated users in optimizing their applications. This paper provides a retrospective on Amazon's real-time spot market, including its advantages and disadvantages for allocating transient servers compared to current fixed-price approaches. We also discuss some fundamental problems with Amazon's spot market, which we identified in prior work (from 2016), that predicted its eventual end. We then discuss potential options for allocating transient servers that combine the advantages of Amazon's real-time spot market, while also addressing the problems that likely led to its elimination.