Showing papers on "Scalability published in 2016"


Proceedings ArticleDOI
24 Oct 2016
TL;DR: ELASTICO is the first candidate for a secure sharding protocol in the presence of byzantine adversaries, and scalability experiments on Amazon EC2 with up to 1,600 nodes confirm ELASTICO's theoretical scaling properties.
Abstract: Cryptocurrencies, such as Bitcoin and 250 similar alt-coins, embody at their core a blockchain protocol --- a mechanism for a distributed network of computational nodes to periodically agree on a set of new transactions. Designing a secure blockchain protocol relies on an open challenge in security, that of designing a highly-scalable agreement protocol open to manipulation by byzantine or arbitrarily malicious nodes. Bitcoin's blockchain agreement protocol exhibits security, but does not scale: it processes 3--7 transactions per second at present, irrespective of the available computation capacity at hand. In this paper, we propose a new distributed agreement protocol for permission-less blockchains called ELASTICO. ELASTICO scales transaction rates almost linearly with available computation for mining: the more the computation power in the network, the higher the number of transaction blocks selected per unit time. ELASTICO is efficient in its network messages and tolerates byzantine adversaries of up to one-fourth of the total computational power. Technically, ELASTICO uniformly partitions or parallelizes the mining network (securely) into smaller committees, each of which processes a disjoint set of transactions (or "shards"). While sharding is common in non-byzantine settings, ELASTICO is the first candidate for a secure sharding protocol in the presence of byzantine adversaries. Our scalability experiments on Amazon EC2 with up to 1,600 nodes confirm ELASTICO's theoretical scaling properties.
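
To make the committee-formation idea concrete, the sketch below assigns node identities to committees by the low-order bits of a hash, in the spirit of ELASTICO's secure partitioning; the hash function, committee count, and node identities are illustrative assumptions, not the paper's exact construction.

```python
import hashlib

def committee_id(identity: bytes, s: int = 4) -> int:
    """Map a node identity to one of 2**s committees using the last s bits
    of its hash (illustrative only)."""
    digest = hashlib.sha256(identity).digest()
    return int.from_bytes(digest, "big") & ((1 << s) - 1)

# Example: partition 16 hypothetical node identities into committees.
nodes = [f"node-{i}".encode() for i in range(16)]
committees = {}
for n in nodes:
    committees.setdefault(committee_id(n), []).append(n.decode())
print(committees)
```

In the protocol itself, identities are established via proof-of-work and each committee processes its own disjoint shard of transactions.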

1,036 citations


Journal ArticleDOI
TL;DR: This approach allows the algorithmic power of deep learning to be merged with the efficiency of neuromorphic processors, bringing the promise of embedded, intelligent, brain-inspired computing one step closer.
Abstract: Deep networks are now able to achieve human-level performance on a broad spectrum of recognition tasks. Independently, neuromorphic computing has now demonstrated unprecedented energy-efficiency through a new chip architecture based on spiking neurons, low precision synapses, and a scalable communication network. Here, we demonstrate that neuromorphic computing, despite its novel architectural primitives, can implement deep convolution networks that (i) approach state-of-the-art classification accuracy across eight standard datasets encompassing vision and speech, (ii) perform inference while preserving the hardware’s underlying energy-efficiency and high throughput, running on the aforementioned datasets at between 1,200 and 2,600 frames/s and using between 25 and 275 mW (effectively >6,000 frames/s per Watt), and (iii) can be specified and trained using backpropagation with the same ease-of-use as contemporary deep learning. This approach allows the algorithmic power of deep learning to be merged with the efficiency of neuromorphic processors, bringing the promise of embedded, intelligent, brain-inspired computing one step closer.

719 citations


Journal ArticleDOI
TL;DR: A combined IoT-based system for smart city development and urban planning using Big Data analytics, consisting of various types of sensor deployment, including smart home sensors, vehicular networking, weather and water sensors, smart parking sensors, and surveillance objects is proposed.

701 citations


Journal ArticleDOI
TL;DR: A novel approach to mobile edge computing for the IoT architecture, edgeIoT, to handle the data streams at the mobile edge by proposing a hierarchical fog computing architecture in each fog node to provide flexible IoT services while maintaining user privacy.
Abstract: In order to overcome the scalability problem of the traditional Internet of Things architecture (i.e., data streams generated from distributed IoT devices are transmitted to the remote cloud via the Internet for further analysis), this article proposes a novel approach to mobile edge computing for the IoT architecture, edgeIoT, to handle the data streams at the mobile edge. Specifically, each BS is connected to a fog node, which provides computing resources locally. On top of the fog nodes, the SDN-based cellular core is designed to facilitate packet forwarding among fog nodes. Meanwhile, we propose a hierarchical fog computing architecture in each fog node to provide flexible IoT services while maintaining user privacy: each user's IoT devices are associated with a proxy VM (located in a fog node), which collects, classifies, and analyzes the devices' raw data streams, converts them into metadata, and transmits the metadata to the corresponding application VMs (which are owned by IoT service providers). Each application VM receives the corresponding metadata from different proxy VMs and provides its service to users. In addition, a novel proxy VM migration scheme is proposed to minimize the traffic in the SDN-based core.
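
The proxy-VM/application-VM split can be illustrated with the toy sketch below, where a per-user proxy aggregates raw device readings into metadata and pushes it to subscribed application callbacks; the class and field names are invented for illustration and do not come from edgeIoT.

```python
from collections import defaultdict
from statistics import mean

class ProxyVM:
    """Toy per-user proxy: aggregates raw device streams into metadata and
    forwards it to subscribed application VMs (illustrative only)."""
    def __init__(self):
        self.raw = defaultdict(list)          # device_id -> raw readings
        self.subscribers = defaultdict(list)  # device_id -> app-VM callbacks

    def ingest(self, device_id, value):
        self.raw[device_id].append(value)

    def subscribe(self, device_id, app_callback):
        self.subscribers[device_id].append(app_callback)

    def flush(self):
        # Convert raw readings into compact metadata (here: mean per device).
        for device_id, values in self.raw.items():
            meta = {"device": device_id, "mean": mean(values), "count": len(values)}
            for cb in self.subscribers[device_id]:
                cb(meta)
        self.raw.clear()

proxy = ProxyVM()
proxy.subscribe("thermostat", lambda m: print("app VM received:", m))
for v in (20.5, 21.0, 21.5):
    proxy.ingest("thermostat", v)
proxy.flush()
```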

594 citations


Journal ArticleDOI
TL;DR: A survey of integration components (Cloud platforms, Cloud infrastructures, and IoT Middleware) is presented; some integration proposals and data analytics techniques are surveyed, and different challenges and open research issues are pointed out.

574 citations


Proceedings ArticleDOI
15 Oct 2016
TL;DR: A new cloud architecture that uses reconfigurable logic to accelerate both network plane functions and applications, and is much more scalable than prior work which used secondary rack-scale networks for inter-FPGA communication.
Abstract: Hyperscale datacenter providers have struggled to balance the growing need for specialized hardware (efficiency) with the economic benefits of homogeneity (manageability). In this paper we propose a new cloud architecture that uses reconfigurable logic to accelerate both network plane functions and applications. This Configurable Cloud architecture places a layer of reconfigurable logic (FPGAs) between the network switches and the servers, enabling network flows to be programmably transformed at line rate, enabling acceleration of local applications running on the server, and enabling the FPGAs to communicate directly, at datacenter scale, to harvest remote FPGAs unused by their local servers. We deployed this design over a production server bed, and show how it can be used for both service acceleration (Web search ranking) and network acceleration (encryption of data in transit at high speeds). This architecture is much more scalable than prior work which used secondary rack-scale networks for inter-FPGA communication. By coupling to the network plane, direct FPGA-to-FPGA messages can be achieved at comparable latency to previous work, without the secondary network. Additionally, the scale of direct inter-FPGA messaging is much larger. The average round-trip latencies observed in our measurements among 24, 1,000, and 250,000 machines are under 3, 9, and 20 microseconds, respectively. The Configurable Cloud architecture has been deployed at hyperscale in Microsoft's production datacenters worldwide.

512 citations


Journal ArticleDOI
18 Jun 2016
TL;DR: The basic architecture of the Neurocube and an analysis of the logic tier synthesized in 28nm and 15nm process technologies are presented, and the performance is evaluated by mapping a Convolutional Neural Network and estimating the subsequent power and performance for both training and inference.
Abstract: This paper presents a programmable and scalable digital neuromorphic architecture based on 3D high-density memory integrated with a logic tier for efficient neural computing. The proposed architecture consists of clusters of processing engines, connected by a 2D mesh network as a processing tier, which is integrated in 3D with multiple tiers of DRAM. The PE clusters access multiple memory channels (vaults) in parallel. The operating principle, referred to as memory-centric computing, embeds specialized state machines within the vault controllers of HMC to drive data into the PE clusters. The paper presents the basic architecture of the Neurocube and an analysis of the logic tier synthesized in 28nm and 15nm process technologies. The performance of the Neurocube is evaluated and illustrated by mapping a Convolutional Neural Network and estimating the subsequent power and performance for both training and inference.
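
As a rough illustration of why a memory-centric 3D design matters, the sketch below estimates whether a convolution layer is compute- or bandwidth-bound for an assumed PE count, clock, and vault bandwidth; all hardware numbers are placeholders, not Neurocube's published parameters.

```python
def conv_layer_time_estimate(h, w, cin, cout, k,
                             num_pes=128, macs_per_pe_per_cycle=1,
                             clock_hz=300e6,
                             vaults=16, bw_per_vault_gbps=10.0):
    """Back-of-the-envelope latency for one convolution layer on a
    memory-centric accelerator: compute-bound vs. memory-bound estimate.
    All hardware parameters here are illustrative assumptions."""
    macs = h * w * cin * cout * k * k
    compute_s = macs / (num_pes * macs_per_pe_per_cycle * clock_hz)
    # 16-bit activations and weights, each read once (very rough).
    bytes_moved = 2 * (h * w * cin + cout * cin * k * k)
    memory_s = bytes_moved / (vaults * bw_per_vault_gbps * 1e9 / 8)
    return max(compute_s, memory_s)

print(f"{conv_layer_time_estimate(56, 56, 64, 64, 3) * 1e3:.2f} ms (rough estimate)")
```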

415 citations


Journal ArticleDOI
TL;DR: The proposed algorithm is applied to the distributed economic dispatch problem in power grids, to demonstrate how it can achieve the global optimum in a scalable way, even when the generation cost, or system load, or network configuration, is changing.

375 citations


Journal ArticleDOI
TL;DR: A novel adaptive filtering technique to determine the best way to combine direct trust and indirect trust dynamically to minimize convergence time and trust estimation bias in the presence of malicious nodes performing opportunistic service and collusion attacks is developed.
Abstract: A future Internet of Things (IoT) system will connect the physical world into cyberspace everywhere and everything via billions of smart objects. On the one hand, IoT devices are physically connected via communication networks. The service oriented architecture (SOA) can provide interoperability among heterogeneous IoT devices in physical networks. On the other hand, IoT devices are virtually connected via social networks. In this paper we propose adaptive and scalable trust management to support service composition applications in SOA-based IoT systems. We develop a technique based on distributed collaborative filtering to select feedback using similarity rating of friendship, social contact, and community of interest relationships as the filter. Further we develop a novel adaptive filtering technique to determine the best way to combine direct trust and indirect trust dynamically to minimize convergence time and trust estimation bias in the presence of malicious nodes performing opportunistic service and collusion attacks. For scalability, we consider a design by which a capacity-limited node only keeps trust information of a subset of nodes of interest and performs minimum computation to update trust. We demonstrate the effectiveness of our proposed trust management through service composition application scenarios with a comparative performance analysis against EigenTrust and PeerTrust.
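
The core idea of blending direct and indirect trust can be sketched as follows; the similarity-weighted average and the grid search for the mixing weight are simplified stand-ins for the paper's adaptive filtering, with all values invented for illustration.

```python
def combined_trust(direct, recommendations, alpha):
    """Blend direct trust with similarity-weighted indirect trust.
    recommendations: list of (similarity, reported_trust) pairs.
    A generic sketch of the idea, not the paper's exact estimator."""
    if recommendations:
        wsum = sum(s for s, _ in recommendations)
        indirect = sum(s * t for s, t in recommendations) / wsum if wsum else direct
    else:
        indirect = direct
    return alpha * direct + (1 - alpha) * indirect

def best_alpha(direct, recommendations, reference, grid=21):
    """Pick the weight that minimizes estimation error against an assumed
    reference value, mimicking the adaptive-filtering step in spirit."""
    candidates = [i / (grid - 1) for i in range(grid)]
    return min(candidates,
               key=lambda a: abs(combined_trust(direct, recommendations, a) - reference))

recs = [(0.9, 0.8), (0.4, 0.2), (0.7, 0.75)]   # (similarity, reported trust)
print(best_alpha(direct=0.7, recommendations=recs, reference=0.68))
```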

362 citations


Proceedings Article
18 May 2016
TL;DR: The performance metrics of a single LoRaWAN end device, namely uplink throughput and data transmission time, are derived, and a few issues which need to be taken into account when building an application using LoRa or deploying a LoRa network are pointed out.
Abstract: In this paper we discuss and analyze the recently proposed LoRa low power wide area network (LPWAN) technology when used under the European frequency regulations. First of all, we derive the performance metrics of a single LoRaWAN end device, namely uplink throughput and data transmission time. Then we analyze, for several illustrative application scenarios, the maximum number of end devices which can be served by a single LoRaWAN base station and discuss the spatial distribution of these devices. It is shown that, subject to the channel composition and application requirements, a single cell may include several millions of devices. Also, we show that the capacity of the uplink channel available to a LoRaWAN node strongly depends on the distance from the base station and does not exceed 2 kbit/s. In the concluding section we summarize and discuss the obtained results, and point out a few issues which need to be taken into account when building an application using LoRa or deploying a LoRa network.
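
The uplink figures follow from the standard LoRa time-on-air equation combined with the 1% duty-cycle limit of the EU sub-GHz bands; the sketch below uses that public formula with an assumed payload size, bandwidth, and coding rate, so its numbers only roughly indicate the trends reported in the paper.

```python
from math import ceil

def lora_airtime_s(payload_bytes, sf, bw_hz=125_000, cr=1,
                   preamble_syms=8, crc=1, implicit_header=0):
    """Standard LoRa time-on-air formula (low-data-rate optimization
    enabled for SF11/SF12 at 125 kHz); cr=1 means coding rate 4/5."""
    de = 1 if sf >= 11 and bw_hz == 125_000 else 0
    t_sym = (2 ** sf) / bw_hz
    num = 8 * payload_bytes - 4 * sf + 28 + 16 * crc - 20 * implicit_header
    n_payload = 8 + max(ceil(num / (4 * (sf - 2 * de))) * (cr + 4), 0)
    return (preamble_syms + 4.25) * t_sym + n_payload * t_sym

# Airtime, raw payload rate, and effective rate under a 1% duty cycle,
# for a 51-byte payload (an illustrative choice).
for sf in (7, 9, 12):
    t = lora_airtime_s(51, sf)
    print(f"SF{sf}: airtime {t * 1e3:.0f} ms, "
          f"payload rate {8 * 51 / t:.0f} bit/s, "
          f"1% duty cycle {8 * 51 / (t / 0.01):.1f} bit/s")
```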

359 citations


Journal ArticleDOI
TL;DR: To deal with the high computation complexity of resource management in a fog-computing-supported software-defined embedded system, a computation-efficient solution is proposed based on the formulation and validated by extensive simulation-based studies.
Abstract: Traditional standalone embedded systems are limited in their functionality, flexibility, and scalability. The fog computing platform, characterized by pushing cloud services to the network edge, is a promising solution to support and strengthen traditional embedded systems. Resource management is always a critical issue for system performance. In this paper, we consider a fog-computing-supported software-defined embedded system, where task images reside in the storage server while computations can be conducted on either the embedded device or a computation server. It is significant to design an efficient task scheduling and resource management strategy with minimized task completion time for promoting the user experience. To this end, three issues are investigated in this paper: 1) how to balance the workload on a client device and computation servers, i.e., task scheduling; 2) how to place task images on storage servers, i.e., resource management; and 3) how to balance the I/O interrupt requests among the storage servers. They are jointly considered and formulated as a mixed-integer nonlinear programming problem. To deal with its high computation complexity, a computation-efficient solution is proposed based on our formulation and validated by extensive simulation-based studies.
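
The scheduling half of the problem can be illustrated with a toy greedy baseline that places each task wherever it would finish earliest; this ignores image placement and I/O balancing and is not the computation-efficient algorithm proposed in the paper.

```python
def greedy_schedule(tasks, device_speed, server_speeds):
    """Greedy stand-in for the joint MINLP: place each task where it
    finishes earliest, given current backlogs (illustrative only).
    tasks: workloads in CPU cycles; speeds: cycles per second."""
    device_free = 0.0
    server_free = [0.0] * len(server_speeds)
    plan = []
    for w in tasks:
        options = [("device", device_free + w / device_speed)]
        options += [(f"server-{i}", server_free[i] + w / s)
                    for i, s in enumerate(server_speeds)]
        where, finish = min(options, key=lambda x: x[1])
        if where == "device":
            device_free = finish
        else:
            server_free[int(where.split("-")[1])] = finish
        plan.append((where, finish))
    return plan

print(greedy_schedule([5e9, 2e9, 8e9, 1e9], device_speed=1e9,
                      server_speeds=[4e9, 4e9]))
```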

Posted Content
TL;DR: A simulator, called iFogSim, is proposed to model IoT and Fog environments and measure the impact of resource management techniques in terms of latency, network congestion, energy consumption, and cost.
Abstract: Internet of Things (IoT) aims to bring every object (e.g. smart cameras, wearables, environmental sensors, home appliances, and vehicles) online, hence generating massive amounts of data that can overwhelm storage systems and data analytics applications. Cloud computing offers services at the infrastructure level that can scale to IoT storage and processing requirements. However, there are applications such as health monitoring and emergency response that require low latency, and the delay caused by transferring data to the cloud and then back to the application can seriously impact their performance. To overcome this limitation, the Fog computing paradigm has been proposed, where cloud services are extended to the edge of the network to decrease latency and network congestion. To realize the full potential of the Fog and IoT paradigms for real-time analytics, several challenges need to be addressed. The first and most critical problem is designing resource management techniques that determine which modules of analytics applications are pushed to each edge device to minimize the latency and maximize the throughput. To this end, we need an evaluation platform that enables the quantification of performance of resource management policies on an IoT or Fog computing infrastructure in a repeatable manner. In this paper we propose a simulator, called iFogSim, to model IoT and Fog environments and measure the impact of resource management techniques in terms of latency, network congestion, energy consumption, and cost. We describe two case studies to demonstrate modeling of an IoT environment and comparison of resource management policies. Moreover, the scalability of the simulation toolkit in terms of RAM consumption and execution time is verified under different circumstances.

Proceedings ArticleDOI
02 Nov 2016
TL;DR: Gemini is presented, a distributed graph processing system that applies multiple optimizations targeting computation performance to build scalability on top of efficiency and significantly outperforms all well-known existing distributed graph processing systems.
Abstract: Traditionally, distributed graph processing systems have largely focused on scalability through the optimizations of inter-node communication and load balance. However, they often deliver unsatisfactory overall processing efficiency compared with shared-memory graph computing frameworks. We analyze the behavior of several graph-parallel systems and find that the added overhead for achieving scalability becomes a major limiting factor for efficiency, especially with modern multi-core processors and high-speed interconnection networks. Based on our observations, we present Gemini, a distributed graph processing system that applies multiple optimizations targeting computation performance to build scalability on top of efficiency. Gemini adopts (1) a sparse-dense signal-slot abstraction to extend the hybrid push-pull computation model from shared-memory to distributed scenarios, (2) a chunk-based partitioning scheme enabling low-overhead scaling out designs and locality-preserving vertex accesses, (3) a dual representation scheme to compress accesses to vertex indices, (4) NUMA-aware sub-partitioning for efficient intra-node memory accesses, plus (5) locality-aware chunking and fine-grained work-stealing for improving both inter-node and intra-node load balance, respectively. Our evaluation on an 8-node high-performance cluster (using five widely used graph applications and five real-world graphs) shows that Gemini significantly outperforms all well-known existing distributed graph processing systems, delivering up to 39.8× (from 8.91×) improvement over the fastest among them.
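
The sparse-dense (push-pull) idea can be shown on a single machine with a BFS that switches direction based on how many edges the frontier touches; the threshold and the toy graph are assumptions, and the distributed signal-slot machinery of Gemini is omitted.

```python
def choose_mode(active_out_edges, total_edges, threshold=1 / 20):
    """Sparse (push) vs. dense (pull) selection; the threshold is an assumption."""
    return "push" if active_out_edges < threshold * total_edges else "pull"

def bfs_step(graph, reverse_graph, frontier, visited, total_edges):
    """One BFS round that either pushes from the frontier or pulls from
    unvisited vertices (single-node toy, no distribution)."""
    active_edges = sum(len(graph[v]) for v in frontier)
    nxt = set()
    if choose_mode(active_edges, total_edges) == "push":
        for v in frontier:
            for u in graph[v]:
                if u not in visited:
                    nxt.add(u)
    else:
        for u in range(len(graph)):
            if u not in visited and any(w in frontier for w in reverse_graph[u]):
                nxt.add(u)
    visited |= nxt
    return nxt

g = {0: [1, 2], 1: [3], 2: [3], 3: []}
rg = {0: [], 1: [0], 2: [0], 3: [1, 2]}
frontier, visited = {0}, {0}
while frontier:
    frontier = bfs_step(g, rg, frontier, visited, total_edges=4)
print(sorted(visited))
```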

Journal ArticleDOI
TL;DR: In this article, a big-data-enabled architecture for proactive content caching in 5G wireless networks is investigated in which a vast amount of data is harnessed for content popularity estimation, and strategic contents are cached at BSs to achieve higher user satisfaction and backhaul offloading.
Abstract: In order to cope with the relentless data tsunami in 5G wireless networks, current approaches such as acquiring new spectrum, deploying more BSs, and increasing nodes in mobile packet core networks are becoming ineffective in terms of scalability, cost, and flexibility. In this regard, context- aware 5G networks with edge/cloud computing and exploitation of big data analytics can yield significant gains for mobile operators. In this article, proactive content caching in 5G wireless networks is investigated in which a big-data-enabled architecture is proposed. In this practical architecture, a vast amount of data is harnessed for content popularity estimation, and strategic contents are cached at BSs to achieve higher user satisfaction and backhaul offloading. To validate the proposed solution, we consider a real-world case study where several hours worth of mobile data traffic is collected from a major telecom operator in Turkey, and big-data-enabled analysis is carried out, leveraging tools from machine learning. Based on the available information and storage capacity, numerical studies show that several gains are achieved in terms of both user satisfaction and backhaul offloading. For example, in the case of 16 BSs with 30 percent of content ratings and 13 GB storage size (78 percent of total library size), proactive caching yields 100 percent user satisfaction and offloads 98 percent of the backhaul.
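
A stripped-down version of the caching decision, once popularity estimates are available, is simply to fill each BS's storage with the most requested contents and measure how many requests the cache absorbs; the sketch below does exactly that with made-up popularity and size figures, leaving out the machine-learning estimation step.

```python
def proactive_cache(popularity, sizes, capacity):
    """Greedily cache the most requested contents that fit in the BS storage
    budget; returns the cached set and the fraction of requests served
    locally (a simplified stand-in for the full pipeline)."""
    order = sorted(popularity, key=popularity.get, reverse=True)
    cached, used = set(), 0.0
    for c in order:
        if used + sizes[c] <= capacity:
            cached.add(c)
            used += sizes[c]
    total = sum(popularity.values())
    offloaded = sum(popularity[c] for c in cached) / total
    return cached, offloaded

pop = {"video-a": 500, "video-b": 300, "news": 150, "update": 50}    # requests
size = {"video-a": 4.0, "video-b": 6.0, "news": 0.5, "update": 1.5}  # GB
print(proactive_cache(pop, size, capacity=8.0))
```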

Proceedings ArticleDOI
02 Nov 2016
TL;DR: FaSST eschews one-sided RDMA for fast RPCs using two-sided unreliable datagrams, which provides better performance, scalability, and simplicity, without requiring expensive reliability mechanisms in software.
Abstract: FaSST is an RDMA-based system that provides distributed in-memory transactions with serializability and durability. Existing RDMA-based transaction processing systems use one-sided RDMA primitives for their ability to bypass the remote CPU. This design choice brings several drawbacks. First, the limited flexibility of one-sided RDMA reduces performance and increases software complexity when designing distributed data stores. Second, deep-rooted technical limitations of RDMA hardware limit scalability in large clusters. FaSST eschews one-sided RDMA for fast RPCs using two-sided unreliable datagrams, which we show drop packets extremely rarely on modern RDMA networks. This approach provides better performance, scalability, and simplicity, without requiring expensive reliability mechanisms in software. In comparison with published numbers, FaSST outperforms FaRM on the TATP benchmark by almost 2x while using close to half the hardware resources, and it outperforms DrTM+R on the SmallBank benchmark by around 1.7x without making data locality assumptions.

Journal ArticleDOI
TL;DR: The scalability, adaptability, and open source software/hardware featured in the proposed platform enable the user to implement the energy management with the customized control-as-services, while minimizing the implementation cost and time-to-market.
Abstract: By introducing microgrids, energy management is required to control the power generation and consumption for residential, industrial, and commercial domains, e.g., in residential microgrids and homes. Energy management may also help us to reach zero net energy (ZNE) for the residential domain. Improvements in technology, cost, and feature size have enabled devices everywhere to be connected and interactive, in what is called the Internet of Things (IoT). The increasing complexity and data, due to the growing number of devices like sensors and actuators, require powerful computing resources, which may be provided by cloud computing. However, scalability has become a potential issue in cloud computing. In this paper, fog computing is introduced as a novel platform for energy management. The scalability, adaptability, and open source software/hardware featured in the proposed platform enable the user to implement the energy management with customized control-as-services, while minimizing the implementation cost and time-to-market. To demonstrate energy management-as-a-service over the fog computing platform in different domains, two prototypes of home energy management (HEM) and microgrid-level energy management have been implemented and evaluated experimentally.

Proceedings ArticleDOI
01 Jun 2016
TL;DR: FireCaffe is presented, which successfully scales deep neural network training across a cluster of GPUs, and finds that reduction trees are more efficient and scalable than the traditional parameter server approach.
Abstract: Long training times for high-accuracy deep neural networks (DNNs) impede research into new DNN architectures and slow the development of high-accuracy DNNs. In this paper we present FireCaffe, which successfully scales deep neural network training across a cluster of GPUs. We also present a number of best practices to aid in comparing advancements in methods for scaling and accelerating the training of deep neural networks. The speed and scalability of distributed algorithms are almost always limited by the overhead of communicating between servers; DNN training is no exception to this rule. Therefore, the key consideration here is to reduce communication overhead wherever possible, while not degrading the accuracy of the DNN models that we train. Our approach has three key pillars. First, we select network hardware that achieves high bandwidth between GPU servers – Infiniband or Cray interconnects are ideal for this. Second, we consider a number of communication algorithms, and we find that reduction trees are more efficient and scalable than the traditional parameter server approach. Third, we optionally increase the batch size to reduce the total quantity of communication during DNN training, and we identify hyperparameters that allow us to reproduce the small-batch accuracy while training with large batch sizes. When training GoogLeNet and Network-in-Network on ImageNet, we achieve a 47x and 39x speedup, respectively, when training on a cluster of 128 GPUs.
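
The reduction-tree argument can be seen from a back-of-the-envelope communication model: a single parameter server serializes traffic proportional to the number of workers, while a binary reduction tree grows only with its depth. The bandwidth and gradient size below are assumptions, and the model ignores latency and overlap, so it is not the paper's measured result.

```python
from math import log2

def comm_time_parameter_server(n_workers, grad_bytes, link_gbps=40):
    """One gradient exchange through a single parameter server: the server
    link must carry all gradients in and the updated model back out."""
    link_bps = link_gbps * 1e9 / 8
    return 2 * n_workers * grad_bytes / link_bps

def comm_time_reduction_tree(n_workers, grad_bytes, link_gbps=40):
    """Binary reduction tree followed by a broadcast: the critical path
    grows with log2(N) rather than N (simplified)."""
    link_bps = link_gbps * 1e9 / 8
    return 2 * log2(n_workers) * grad_bytes / link_bps

grad = 50e6  # ~50 MB of gradients, an illustrative model size
for n in (16, 64, 128):
    print(n, f"PS {comm_time_parameter_server(n, grad) * 1e3:.0f} ms",
          f"tree {comm_time_reduction_tree(n, grad) * 1e3:.0f} ms")
```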

Journal ArticleDOI
TL;DR: To improve the efficiency of big data feature learning, the paper proposes a privacy-preserving deep computation model that offloads the expensive operations to the cloud, using the BGV encryption scheme and employing cloud servers to perform the high-order back-propagation algorithm on the encrypted data efficiently for deep computation model training.
Abstract: To improve the efficiency of big data feature learning, the paper proposes a privacy-preserving deep computation model by offloading the expensive operations to the cloud. Privacy concerns become evident because there is a large amount of private data held by various applications in the smart city, such as sensitive data of governments or proprietary information of enterprises. To protect the private data, the proposed model uses the BGV encryption scheme to encrypt the private data and employs cloud servers to perform the high-order back-propagation algorithm on the encrypted data efficiently for deep computation model training. Furthermore, the proposed scheme approximates the Sigmoid function as a polynomial function to support the secure computation of the activation function with the BGV encryption. In our scheme, only the encryption operations and the decryption operations are performed by the client, while all the computation tasks are performed on the cloud. Experimental results show that our scheme improves training efficiency by approximately 2.5 times compared to the conventional deep computation model, without disclosing the private data, when using a cloud of ten nodes. More importantly, our scheme is highly scalable by employing more cloud servers, which makes it particularly suitable for big data.
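
The activation-function trick relies on replacing the Sigmoid with a low-degree polynomial that a homomorphic scheme such as BGV can evaluate; the degree-3 Taylor expansion below is one common choice and may differ from the exact polynomial used in the paper.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_poly(x):
    """Degree-3 Taylor approximation of the Sigmoid around 0, the kind of
    low-degree polynomial that additions and multiplications over
    ciphertexts can evaluate (illustrative choice)."""
    return 0.5 + x / 4 - x ** 3 / 48

for x in (-2.0, -0.5, 0.0, 0.5, 2.0):
    print(f"x={x:+.1f}  sigmoid={sigmoid(x):.4f}  poly={sigmoid_poly(x):.4f}")
```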

Journal ArticleDOI
TL;DR: This paper aims to shed light on SDN related issues and give insight into the challenges facing the future of this revolutionary network model, from both protocol and architecture perspectives, and present different existing solutions and mitigation techniques that address SDN scalability, elasticity, dependability, reliability, high availability, resiliency, security, and performance concerns.
Abstract: With the advent of cloud computing, many new networking concepts have been introduced to simplify network management and bring innovation through network programmability. The emergence of the software-defined networking (SDN) paradigm is one of these adopted concepts in the cloud model so as to eliminate the network infrastructure maintenance processes and guarantee easy management. In this fashion, SDN offers real-time performance and responds to high availability requirements. However, this new emerging paradigm has been facing many technological hurdles; some of them are inherent, while others are inherited from existing adopted technologies. In this paper, our purpose is to shed light on SDN related issues and give insight into the challenges facing the future of this revolutionary network model, from both protocol and architecture perspectives. Additionally, we aim to present different existing solutions and mitigation techniques that address SDN scalability, elasticity, dependability, reliability, high availability, resiliency, security, and performance concerns.

Journal ArticleDOI
TL;DR: This paper studies the cloudlet placement problem in a large-scale Wireless Metropolitan Area Network (WMAN) consisting of many wireless Access Points (APs) with the objective to minimize the average access delay between mobile users and the cloudlets serving the users.
Abstract: Mobile cloud computing is emerging as a main ubiquitous computing platform to provide rich cloud resources for various applications of mobile devices. Although most existing studies in mobile cloud computing focus on energy savings of mobile devices by offloading computing-intensive jobs from mobile devices to remote clouds, the access delays between mobile users and remote clouds usually are long and sometimes unbearable. The cloudlet, as a new technology, is capable of bridging this gap and can enhance the performance of mobile devices significantly while meeting the crisp response time requirements of mobile users. In this paper, we study the cloudlet placement problem in a large-scale Wireless Metropolitan Area Network (WMAN) consisting of many wireless Access Points (APs). We first formulate the problem as a novel capacitated cloudlet placement problem that places K cloudlets at some strategic locations in the WMAN with the objective to minimize the average access delay between mobile users and the cloudlets serving the users. We then propose an exact solution to the problem by formulating it as an Integer Linear Programming (ILP). Due to the poor scalability of the ILP, we instead propose an efficient heuristic for the problem. For a special case of the problem where all cloudlets have identical computing capacities, we devise novel approximation algorithms with guaranteed approximation ratios. We also devise an online algorithm for dynamically allocating user requests to different cloudlets, if the K cloudlets have already been placed. We finally evaluate the performance of the proposed algorithms through experimental simulations. Simulation results demonstrate that the proposed algorithms are promising and scalable.
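
A simple uncapacitated greedy heuristic conveys the flavor of the placement problem: repeatedly open the candidate AP that most reduces the average user-to-cloudlet delay. Capacities, the ILP, and the approximation guarantees from the paper are omitted, and the delay matrix is invented.

```python
def greedy_cloudlet_placement(delay, k):
    """Greedy placement sketch: open k AP locations, each time picking the
    one that most reduces total access delay (capacities ignored).
    delay[u][a]: access delay from user u to candidate AP location a."""
    n_users, n_aps = len(delay), len(delay[0])
    BIG = 10 ** 9                      # penalty for not-yet-served users
    best = [BIG] * n_users             # current best delay per user
    chosen = []
    for _ in range(k):
        a_star = max(range(n_aps),
                     key=lambda a: sum(best[u] - min(best[u], delay[u][a])
                                       for u in range(n_users)))
        chosen.append(a_star)
        best = [min(best[u], delay[u][a_star]) for u in range(n_users)]
    return chosen, sum(best) / n_users

d = [[3, 9, 7], [8, 2, 6], [5, 4, 1], [9, 8, 2]]   # toy user-to-AP delays (ms)
print(greedy_cloudlet_placement(d, k=2))
```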

Proceedings Article
09 Jul 2016
TL;DR: This paper proposes to learn a network embedding with the followership/followee-ship of each user explicitly modeled as input/output context vector representations so as to preserve the proximity of users with "similar" followers/followees in the embedded space.
Abstract: In this paper, we adopt the representation learning approach to align users across multiple social networks where the social structures of the users are exploited. In particular, we propose to learn a network embedding with the followership/followee-ship of each user explicitly modeled as input/output context vector representations so as to preserve the proximity of users with "similar" followers/followees in the embedded space. For the alignment, we add both known and potential anchor users across the networks to facilitate the transfer of context information across networks. We solve both the network embedding problem and the user alignment problem simultaneously under a unified optimization framework. The stochastic gradient descent and negative sampling algorithms are used to address scalability issues. Extensive experiments on real social network datasets demonstrate the effectiveness and efficiency of the proposed approach compared with several state-of-the-art methods.

Journal ArticleDOI
TL;DR: In this paper, the authors propose a temporal latent space model for link prediction in dynamic social networks, where the goal is to predict links over time based on a sequence of previous graph snapshots.
Abstract: We propose a temporal latent space model for link prediction in dynamic social networks, where the goal is to predict links over time based on a sequence of previous graph snapshots. The model assumes that each user lies in an unobserved latent space, and interactions are more likely to occur between similar users in the latent space representation. In addition, the model allows each user to gradually move its position in the latent space as the network structure evolves over time. We present a global optimization algorithm to effectively infer the temporal latent space. Two alternative optimization algorithms with local and incremental updates are also proposed, allowing the model to scale to larger networks without compromising prediction accuracy. Empirically, we demonstrate that our model, when evaluated on a number of real-world dynamic networks, significantly outperforms existing approaches for temporal link prediction in terms of both scalability and predictive power.
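
The model's two ingredients, similarity-based link scores and temporally smoothed latent positions, can be sketched in a few lines; the dot-product scoring and the fixed smoothing weight are illustrative simplifications of the inference procedure described in the paper.

```python
import numpy as np

def link_scores(z):
    """Pairwise link scores from latent positions: users that are close in
    the latent space get higher scores (illustrative scoring only)."""
    return z @ z.T

def smooth_update(z_prev, z_obs, lam=0.8):
    """Temporal smoothness: a user's new position is pulled toward its
    previous position, so the embedding drifts gradually across snapshots."""
    return lam * z_prev + (1 - lam) * z_obs

rng = np.random.default_rng(0)
z_t = rng.normal(size=(5, 2))                       # latent positions at time t
z_t1 = smooth_update(z_t, rng.normal(size=(5, 2)))  # positions at time t+1
print(np.round(link_scores(z_t1), 2))
```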

Journal ArticleDOI
TL;DR: The higher-order network (HON) representation is proposed, with desirable characteristics including accuracy, scalability, and direct compatibility with the existing suite of network analysis methods, and it is illustrated how HON can be applied to a broad variety of tasks, such as random walking, clustering, and ranking.
Abstract: To ensure the correctness of network analysis methods, the network (as the input) has to be a sufficiently accurate representation of the underlying data. However, when representing sequential data from complex systems, such as global shipping traffic or Web clickstream traffic as networks, conventional network representations that implicitly assume the Markov property (first-order dependency) can quickly become limiting. This assumption holds that, when movements are simulated on the network, the next movement depends only on the current node, discounting the fact that the movement may depend on several previous steps. However, we show that data derived from many complex systems can show up to fifth-order dependencies. In these cases, the oversimplifying assumption of the first-order network representation can lead to inaccurate network analysis results. To address this problem, we propose the higher-order network (HON) representation that can discover and embed variable orders of dependencies in a network representation. Through a comprehensive empirical evaluation and analysis, we establish several desirable characteristics of HON, including accuracy, scalability, and direct compatibility with the existing suite of network analysis methods. We illustrate how HON can be applied to a broad variety of tasks, such as random walking, clustering, and ranking, and we demonstrate that, by using it as input, HON yields more accurate results without any modification to these tasks.
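
The dependency-detection step behind HON can be approximated by comparing a first-order transition distribution with the distribution conditioned on one extra step of history; the support and ratio thresholds below are arbitrary illustrative choices rather than the paper's rule.

```python
from collections import Counter, defaultdict

def second_order_candidates(trajectories, min_support=5, ratio=1.5):
    """Flag (prev, curr) contexts whose next-step distribution deviates from
    the first-order distribution of curr, a simplified version of the
    dependency test that motivates higher-order nodes."""
    first = defaultdict(Counter)    # curr -> next counts
    second = defaultdict(Counter)   # (prev, curr) -> next counts
    for path in trajectories:
        for i in range(1, len(path) - 1):
            first[path[i]][path[i + 1]] += 1
            second[(path[i - 1], path[i])][path[i + 1]] += 1
    flagged = []
    for ctx, nxt in second.items():
        total = sum(nxt.values())
        if total < min_support:
            continue
        base = first[ctx[1]]
        base_total = sum(base.values())
        for target, c in nxt.items():
            p2 = c / total
            p1 = base[target] / base_total
            if p1 > 0 and p2 / p1 >= ratio:
                flagged.append((ctx, target, round(p2, 2), round(p1, 2)))
    return flagged

# Toy shipping-style trajectories: the next port depends on the previous one.
paths = [["port-A", "port-B", "port-C"]] * 6 + [["port-D", "port-B", "port-E"]] * 6
print(second_order_candidates(paths))
```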

Journal ArticleDOI
TL;DR: The fuzzy sets theory is used to express vagueness in the subjective preferences of the customers and the service selection is resolved with the distributed application of fuzzy inference or Dempster-Shafer theory of evidence.
Abstract: Cloud platforms encompass a large number of storage services that can be used to manage the needs of customers. Each of these services, offered by a different provider, is characterized by specific features, limitations and prices. In the presence of multiple options, it is crucial to select the best solution fitting the customer requirements in terms of quality of service and costs. Most of the available approaches are not able to handle uncertainty in the expression of subjective preferences from customers, and can result in wrong (or sub-optimal) service selections in the presence of rational/selfish providers exposing untrustworthy indications concerning the quality of service levels and prices associated with their offers. In addition, due to its multi-objective nature, the optimal service selection process becomes a very complex task to be managed, when possible, in a distributed way, for well-known scalability reasons. In this work, we aim at facing the above challenges by proposing three novel contributions. The fuzzy sets theory is used to express vagueness in the subjective preferences of the customers. The service selection is resolved with the distributed application of fuzzy inference or the Dempster-Shafer theory of evidence. The selection strategy is also complemented by the adoption of a game-theoretic approach for promoting truth-telling among service providers. We present empirical evidence of the proposed solution's effectiveness through properly crafted simulation experiments.

Journal ArticleDOI
TL;DR: The coupling tool preCICE is presented, which offers the complete coupling functionality required for the fast development of a multi-physics environment using existing, possibly black-box solvers; numerical examples show the high flexibility, the correctness, and the high performance and parallel scalability of coupled simulations with preCICE as the coupling unit.

Proceedings Article
18 May 2016
TL;DR: This paper proposes a novel heuristic-based admission control mechanism able to dynamically allocate network resources to different slices in order to maximize the satisfaction of the users while guaranteeing to meet the requirements of the slices they belong to.
Abstract: 5G mobile networks are expected to serve flexible requirements and hence dynamically allocate network resources according to the demands. Network slicing, where network resources are packaged and assigned in an isolated manner to sets of users according to their specific requirements, is considered a key paradigm to fulfil this diversity of requirements. There will clearly be conflicting demands in the allocation of such slices, and the effective provisioning of network slicing poses several challenges. Indeed, network slicing has a twofold impact in terms of user/traffic prioritization, as it dictates the simultaneous management of the priority among different slices (i.e., inter-slice) and the priority among the users belonging to the same slice (i.e., intra-slice). In this paper, we propose a novel heuristic-based admission control mechanism able to dynamically allocate network resources to different slices in order to maximize the satisfaction of the users while guaranteeing to meet the requirements of the slices they belong to. Through simulations, we demonstrate how our proposal provides (i) higher user experience in individual slices, (ii) increased utilization of network resources, and (iii) higher scalability when the number of users in each slice increases.
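
A toy version of the inter-slice part of the allocation is sketched below: guaranteed shares are reserved per slice and leftover capacity is handed out in priority order; the slice names, numbers, and rule are assumptions, not the heuristic proposed in the paper.

```python
def admit(slices, capacity):
    """Reserve each slice's guaranteed share, then distribute leftover
    capacity to unmet requests in priority order (simplified sketch).
    slices: dicts with 'name', 'guaranteed', 'priority', 'requests'."""
    leftover = capacity - sum(s["guaranteed"] for s in slices)
    if leftover < 0:
        raise ValueError("guarantees exceed capacity")
    allocation = {s["name"]: s["guaranteed"] for s in slices}
    for s in sorted(slices, key=lambda s: s["priority"], reverse=True):
        extra = min(leftover, max(0, s["requests"] - s["guaranteed"]))
        allocation[s["name"]] += extra
        leftover -= extra
    return allocation

slices = [
    {"name": "eMBB", "guaranteed": 40, "priority": 1, "requests": 80},
    {"name": "URLLC", "guaranteed": 20, "priority": 3, "requests": 25},
    {"name": "mMTC", "guaranteed": 10, "priority": 2, "requests": 30},
]
print(admit(slices, capacity=100))
```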

Journal ArticleDOI
TL;DR: The proposed Spatial-Temporal Weighted K-Nearest Neighbor model, named STW-KNN, is implemented on a widely adopted Hadoop distributed computing platform with the MapReduce parallel processing paradigm, to enhance the accuracy and efficiency of short-term traffic flow forecasting.

Proceedings ArticleDOI
02 Nov 2016
TL;DR: ERA is a tool for efficient reasoning about network reachability that directly reasons about the network "control plane" that generates individual network incarnations, and is used to successfully find both known and new violations of a range of common intended policies.
Abstract: To guarantee network availability and security, operators must ensure that their reachability policies (e.g., A can or cannot talk to B) are correctly implemented. This is a difficult task due to the complexity of network configuration and the constant churn in a network's environment, e.g., new route announcements arrive and links fail. Current network reachability analysis techniques are limited as they can only reason about the current "incarnation" of the network, cannot analyze all configuration features, or are too slow to enable exploration of many environments. We build ERA, a tool for efficient reasoning about network reachability. Instead of reasoning about individual incarnations of the network, ERA directly reasons about the network "control plane" that generates these incarnations. We address key expressiveness and scalability challenges by building (i) a succinct model for the network control plane (i.e., various routing protocols and their interactions), and (ii) a repertoire of techniques for scalable (taking a few seconds for a network with > 1000 routers) exploration of this model. We have used ERA to successfully find both known and new violations of a range of common intended policies.

Proceedings ArticleDOI
01 Sep 2016
TL;DR: The microservice approach has created hype in the domain of cloud and enterprise application business: one monolithic application is split up into a set of distributed services, and those are strongly decoupled to enable high maintainability and scalability.
Abstract: The microservice approach has created hype in the domain of cloud and enterprise application business. Previously, grown monolithic software was pushed to the limits of maintainability and scalability. The microservice architecture approach utilizes service-oriented architecture together with best practices and recent developments in software virtualization to overcome those issues. One monolithic application is split up into a set of distributed services. Those are strongly decoupled to enable high maintainability and scalability. In this case an application is split up in a top-down manner.

Proceedings ArticleDOI
22 Aug 2016
TL;DR: SNAP offers a simpler "centralized" stateful programming model, by allowing programmers to develop programs on top of one big switch rather than many, and relieves programmers of having to worry about how to distribute, place, and optimize access to these stateful arrays.
Abstract: Early programming languages for software-defined networking (SDN) were built on top of the simple match-action paradigm offered by OpenFlow 1.0. However, emerging hardware and software switches offer much more sophisticated support for persistent state in the data plane, without involving a central controller. Nevertheless, managing stateful, distributed systems efficiently and correctly is known to be one of the most challenging programming problems. To simplify this new SDN problem, we introduce SNAP. SNAP offers a simpler "centralized" stateful programming model, by allowing programmers to develop programs on top of one big switch rather than many. These programs may contain reads and writes to global, persistent arrays, and as a result, programmers can implement a broad range of applications, from stateful firewalls to fine-grained traffic monitoring. The SNAP compiler relieves programmers of having to worry about how to distribute, place, and optimize access to these stateful arrays by doing it all for them. More specifically, the compiler discovers read/write dependencies between arrays and translates one-big-switch programs into an efficient internal representation based on a novel variant of binary decision diagrams. This internal representation is used to construct a mixed-integer linear program, which jointly optimizes the placement of state and the routing of traffic across the underlying physical topology. We have implemented a prototype compiler and applied it to about 20 SNAP programs over various topologies to demonstrate our techniques' scalability.