
Showing papers in "IEEE ACM Transactions on Networking in 2019"


Journal ArticleDOI
TL;DR: A single-hop wireless network with a number of nodes transmitting time-sensitive information to a base station is considered and the problem of minimizing the expected weighted sum AoI of the network while simultaneously satisfying timely-throughput constraints from the nodes is addressed.
Abstract: Age of Information (AoI) is a performance metric that captures the freshness of the information from the perspective of the destination. The AoI measures the time that has elapsed since the generation of the packet that was most recently delivered to the destination. In this paper, we consider a single-hop wireless network with a number of nodes transmitting time-sensitive information to a base station and address the problem of minimizing the expected weighted sum AoI of the network while simultaneously satisfying timely-throughput constraints from the nodes. We develop four low-complexity transmission scheduling policies that attempt to minimize AoI subject to minimum throughput requirements and evaluate their performance against the optimal policy. In particular, we develop a randomized policy, a Max-Weight policy, a Drift-Plus-Penalty policy, and a Whittle’s Index policy, and show that they are guaranteed to be within factors of two, four, two, and eight, respectively, of the minimum AoI possible. The simulation results show that Max-Weight and Drift-Plus-Penalty outperform the other policies, both in terms of AoI and throughput, in every network configuration simulated, and achieve near-optimal performance.

186 citations
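As a concrete illustration of the metric in the entry above, the time-average AoI of a source can be computed by integrating the sawtooth age curve: age grows linearly and drops to (delivery time minus generation time) whenever a fresher packet is delivered. A minimal sketch (the function name and the assumption that age starts at zero are ours, not the paper's):

```python
def average_aoi(events, horizon):
    """Time-average Age of Information of one source over [0, horizon].
    events: (generation_time, delivery_time) pairs of delivered packets,
    sorted by delivery time; the age is assumed to start at 0 at t = 0."""
    area, last_t, freshest = 0.0, 0.0, 0.0
    for gen, dly in events:
        # age rises linearly between deliveries: integrate the trapezoid
        area += 0.5 * ((last_t - freshest) + (dly - freshest)) * (dly - last_t)
        last_t = dly
        freshest = max(freshest, gen)   # freshest delivered sample so far
    # tail segment from the last delivery to the horizon
    area += 0.5 * ((last_t - freshest) + (horizon - freshest)) * (horizon - last_t)
    return area / horizon
```

For the two-packet trace `[(0, 1), (1, 2)]` over a horizon of 2, the average age works out to 1.0.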


Journal ArticleDOI
TL;DR: In this paper, the authors formulate the service migration problem as a Markov decision process (MDP) and provide a mathematical framework to design optimal service migration policies in mobile edge computing.
Abstract: In mobile edge computing, local edge servers can host cloud-based services, which reduces network overhead and latency but requires service migrations as users move to new locations. It is challenging to make migration decisions optimally because of the uncertainty in such a dynamic cloud environment. In this paper, we formulate the service migration problem as a Markov decision process (MDP). Our formulation captures general cost models and provides a mathematical framework to design optimal service migration policies. In order to overcome the complexity associated with computing the optimal policy, we approximate the underlying state space by the distance between the user and service locations. We show that the resulting MDP is exact for the uniform 1-D user mobility, while it provides a close approximation for uniform 2-D mobility with a constant additive error. We also propose a new algorithm and a numerical technique for computing the optimal solution, which is significantly faster than traditional methods based on the standard value or policy iteration. We illustrate the application of our solution in practical scenarios where many theoretical assumptions are relaxed. Our evaluations based on real-world mobility traces of San Francisco taxis show the superior performance of the proposed solution compared to baseline solutions.

153 citations
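The distance-based approximation described above can be sketched with standard value iteration on a toy 1-D model. All names, costs, and the random-walk mobility below are illustrative assumptions, not the paper's exact formulation or its faster algorithm:

```python
def optimal_migration_policy(dmax=10, p=0.5, gamma=0.9, c_mig=4.0, iters=500):
    """Value iteration for a toy distance-based service-migration MDP.
    State: distance d in {0..dmax} between user and service location.
    'stay' pays a holding cost d; 'migrate' pays c_mig and resets d to 0.
    The user does a uniform 1-D random walk (+-1 w.p. p each step)."""
    V = [0.0] * (dmax + 1)

    def next_v(V, d):
        # expected cost-to-go after one random-walk step from distance d
        lo, hi = max(d - 1, 0), min(d + 1, dmax)
        return p * V[lo] + p * V[hi] + (1 - 2 * p) * V[d]

    for _ in range(iters):
        V = [min(d + gamma * next_v(V, d),         # stay: pay holding cost d
                 c_mig + gamma * next_v(V, 0))     # migrate: pay c_mig, reset
             for d in range(dmax + 1)]
    policy = ['migrate' if c_mig + gamma * next_v(V, 0) < d + gamma * next_v(V, d)
              else 'stay' for d in range(dmax + 1)]
    return V, policy
```

Because migration cost is paid once while holding cost grows with distance, the converged policy is a threshold rule: stay while the service is near, migrate beyond some distance.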


Journal ArticleDOI
TL;DR: This paper proposes a Robustness Optimization scheme with multi-population Co-evolution for scale-free wireless sensor networKS (ROCKS), and shows that ROCKS roughly doubles the robustness of initial scale- free WSNs, and outperforms two existing algorithms by about 16% when the network size is large.
Abstract: Wireless sensor networks (WSNs) have become popular targets for cyberattacks. One type of network topology for WSNs, the scale-free topology, can effectively withstand random attacks, in which the nodes in the topology are randomly selected as targets. However, it is fragile to malicious attacks, in which the nodes with high node degrees are selected as targets. Thus, how to improve the robustness of the scale-free topology against malicious attacks becomes a critical issue. To tackle this problem, this paper proposes a Robustness Optimization scheme with multi-population Co-evolution for scale-free wireless sensor networKS (ROCKS) to improve the robustness of the scale-free topology. We build initial scale-free topologies according to the characteristics of WSNs in the real-world environment. Then, we apply ROCKS, with a novel crossover operator and mutation operator, to optimize the robustness of the scale-free topologies constructed for WSNs. For a scale-free WSN topology, our proposed algorithm keeps the initial degree of each node unchanged, such that the optimized topology remains scale-free. Based on a well-known metric for robustness against malicious attacks, our experimental results show that ROCKS roughly doubles the robustness of initial scale-free WSNs, and outperforms two existing algorithms by about 16% when the network size is large.

137 citations


Journal ArticleDOI
Abstract: Information updates in multihop networks such as Internet of Things (IoT) and intelligent transportation systems have received significant recent attention. In this paper, we minimize the age of a single information flow in interference-free multihop networks. When preemption is allowed and the packet transmission times are exponentially distributed, we prove that a preemptive last-generated, first-served (LGFS) policy results in smaller age processes across all nodes in the network than any other causal policy (in a stochastic ordering sense). In addition, for the class of new-better-than-used (NBU) distributions, we show that the non-preemptive LGFS policy is within a constant age gap from the optimum average age. In contrast, our numerical result shows that the preemptive LGFS policy can be very far from the optimum for some NBU transmission time distributions. Finally, when preemption is prohibited and the packet transmission times are arbitrarily distributed, the non-preemptive LGFS policy is shown to minimize the age processes across all nodes in the network among all work-conserving policies (again in a stochastic ordering sense). Interestingly, these results hold under quite general conditions, including 1) arbitrary packet generation and arrival times, and 2) for minimizing both the age processes in stochastic ordering and any non-decreasing functional of the age processes.

109 citations


Journal ArticleDOI
TL;DR: In this article, the authors provide an analysis of a well-known model for resource sharing, the share-constrained proportional allocation mechanism, to realize network slicing, which enables tenants to reap the performance benefits of sharing, while retaining the ability to customize their own users' allocation.
Abstract: Network slicing to enable resource sharing among multiple tenants (network operators and/or services) is considered a key functionality for next generation mobile networks. This paper provides an analysis of a well-known model for resource sharing, the share-constrained proportional allocation mechanism, to realize network slicing. This mechanism enables tenants to reap the performance benefits of sharing, while retaining the ability to customize their own users’ allocation. This results in a network slicing game in which each tenant reacts to the user allocations of the other tenants so as to maximize its own utility. We show that, for elastic traffic, the game associated with such strategic behavior converges to a Nash equilibrium. At the Nash equilibrium, a tenant always achieves the same or better performance than that of a static partitioning of resources, thus providing the same level of protection as static partitioning. We further analyze the efficiency and fairness of the resulting allocations, providing tight bounds for the price of anarchy and envy-freeness. Our analysis and extensive simulation results confirm that the mechanism provides a comprehensive practical solution to realize network slicing. Our theoretical results also fill a gap in the analysis of this resource allocation model under strategic players.

96 citations


Journal ArticleDOI
TL;DR: In this paper, the authors formulate a queuing-based model, use it at the network orchestrator to optimally match the verticals’ requirements to the available system resources, and propose a solution strategy, MaxZ, that reduces the solution complexity.
Abstract: One of the main goals of 5G networks is to support the technological and business needs of various industries (the so-called verticals), which wish to offer to their customers a wide range of services characterized by diverse performance requirements. In this context, a critical challenge lies in mapping in an automated manner the requirements of verticals into decisions concerning the network infrastructure, including VNF placement, resource assignment, and traffic routing. In this paper, we seek to make such decisions jointly and efficiently, accounting for their mutual interaction. To this end, we formulate a queuing-based model and use it at the network orchestrator to optimally match the vertical’s requirements to the available system resources. We then propose a fast and efficient solution strategy, called MaxZ, which allows us to reduce the solution complexity. Our performance evaluation, carried out accounting for multiple scenarios representing real-world services, shows that MaxZ performs substantially better than the state-of-the-art alternatives and consistently close to the optimum.

92 citations


Journal ArticleDOI
TL;DR: This paper develops a decentralized algorithm for allocating the computational tasks among nearby devices and the edge cloud and shows that it provides system performance close to that of the myopic best response algorithm.
Abstract: Fog computing is identified as a key enabler for using various emerging applications by battery powered and computationally constrained devices. In this paper, we consider devices that aim at improving their performance by choosing to offload their computational tasks to nearby devices or to an edge cloud. We develop a game theoretical model of the problem and use variational inequality theory to compute an equilibrium task allocation in static mixed strategies. Based on the computed equilibrium strategy, we develop a decentralized algorithm for allocating the computational tasks among nearby devices and the edge cloud. We use extensive simulations to provide insight into the performance of the proposed algorithm and compare its performance with that of a myopic best response algorithm that requires global knowledge of the system state. Despite the fact that the proposed algorithm relies on average system parameters only, our results show that it provides system performance close to that of the myopic best response algorithm.

85 citations
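The idea of an equilibrium task allocation can be illustrated with a much simpler congestion game than the paper's variational-inequality formulation (the cost model below is a hypothetical stand-in): each device repeatedly best-responds until no device wants to switch, i.e., a pure Nash equilibrium.

```python
def best_response_offloading(n_devices, edge_speed, local_speed, rounds=100):
    """Best-response dynamics for a toy offloading congestion game.
    Each device runs its task locally (cost 1/local_speed) or offloads
    to a shared edge cloud whose cost grows with the number of
    offloaders: (k + 1)/edge_speed when k others already offload.
    Illustrative only; the paper computes equilibria via VI theory."""
    choice = ['local'] * n_devices
    for _ in range(rounds):
        changed = False
        for i in range(n_devices):
            others = sum(1 for j, c in enumerate(choice)
                         if c == 'edge' and j != i)
            edge_cost = (others + 1) / edge_speed
            local_cost = 1.0 / local_speed
            new = 'edge' if edge_cost < local_cost else 'local'
            changed = changed or new != choice[i]
            choice[i] = new
        if not changed:
            break    # no device wants to deviate: pure Nash equilibrium
    return choice
```

With 10 devices and an edge cloud four times faster than local execution, exactly three devices offload at equilibrium; a fourth would push the shared edge cost up to the local cost.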


Journal ArticleDOI
TL;DR: The results show that by relying on appropriately tuned forecasting schemes, the RL-NSB approach provides very substantial potential gains in terms of system utilization while meeting the tenants’ SLAs.
Abstract: Network slicing is considered one of the main pillars of the upcoming 5G networks. Indeed, the ability to slice a mobile network and tailor each slice to the needs of the corresponding tenant is envisioned as a key enabler for the design of future networks. However, this novel paradigm opens up new challenges, such as isolation between network slices, the allocation of resources across them, and the admission of resource requests by network slice tenants. In this paper, we address this problem by designing the following building blocks for supporting network slicing: i) traffic and user mobility analysis, ii) a learning and forecasting scheme per slice, iii) optimal admission control decisions based on spatial and traffic information, and iv) a reinforcement process to drive the system towards optimal states. In our framework, namely RL-NSB, infrastructure providers perform admission control considering the service level agreements (SLA) of the different tenants as well as their traffic usage and user distribution, and enhance the overall process by means of learning and reinforcement techniques that consider heterogeneous mobility and traffic models among diverse slices. Our results show that by relying on appropriately tuned forecasting schemes, our approach provides very substantial potential gains in terms of system utilization while meeting the tenants’ SLAs.

80 citations


Journal ArticleDOI
TL;DR: This paper proposes utility-driven caching, which associates with each content a utility that is a function of the corresponding content hit probability, and develops online algorithms that can be used by service providers to implement various caching policies based on arbitrary utility functions.
Abstract: In any caching system, the admission and eviction policies determine which contents are added and removed from a cache when a miss occurs. Usually, these policies are devised so as to mitigate staleness and increase the hit probability. Nonetheless, the utility of having a high hit probability can vary across contents. This occurs, for instance, when service level agreements must be met, or if certain contents are more difficult to obtain than others. In this paper, we propose utility-driven caching, where we associate with each content a utility, which is a function of the corresponding content hit probability. We formulate optimization problems where the objectives are to maximize the sum of utilities over all contents. These problems differ according to the stringency of the cache capacity constraint. Our framework enables us to reverse engineer classical replacement policies such as LRU and FIFO, by computing the utility functions that they maximize. We also develop online algorithms that can be used by service providers to implement various caching policies based on arbitrary utility functions.

78 citations
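To make the utility-driven formulation concrete, here is a sketch for one illustrative instance (log utilities and a linearized capacity constraint are our choices, not the paper's general setting): maximize the sum of w_i log h_i over hit probabilities h_i subject to a budget on their sum, solved by bisecting on the dual price.

```python
def optimal_hit_probs(weights, budget, tol=1e-9):
    """Maximize sum_i w_i * log(h_i) over hit probabilities h_i subject
    to sum_i h_i <= budget and h_i <= 1 (an illustrative instance of
    utility-driven caching; the paper covers arbitrary utilities).
    KKT conditions give h_i = min(1, w_i / lam); bisect on price lam."""
    lo, hi = 1e-12, max(weights) * len(weights) / budget + 1.0
    while hi - lo > tol:
        lam = 0.5 * (lo + hi)
        used = sum(min(1.0, w / lam) for w in weights)
        if used > budget:
            lo = lam          # over budget: raise the price of cache space
        else:
            hi = lam
    lam = 0.5 * (lo + hi)
    return [min(1.0, w / lam) for w in weights]
```

For weights `[4, 2, 2]` and a budget of 2, the price settles at 4 and the allocation is `[1.0, 0.5, 0.5]`: the most valuable content is always cached, the rest share what remains.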


Journal ArticleDOI
TL;DR: This paper proposes a lip reading-based user authentication system, LipPass, which extracts unique behavioral characteristics of users’ speaking mouths through acoustic sensing on smartphones, and develops a balanced binary tree-based authentication approach to accurately identify each individual.
Abstract: To prevent users’ privacy from leakage, more and more mobile devices employ biometric-based authentication approaches, such as fingerprint, face recognition, and voiceprint authentication, to enhance privacy protection. However, these approaches are vulnerable to replay attacks. Although state-of-the-art solutions utilize liveness verification to combat such attacks, existing approaches are sensitive to ambient environments, such as ambient lights and surrounding audible noises. Toward this end, we explore liveness verification for user authentication leveraging users’ mouth movements, which are robust to noisy environments. In this paper, we propose a lip reading-based user authentication system, LipPass, which extracts unique behavioral characteristics of users’ speaking mouths through acoustic sensing on smartphones for user authentication. We first investigate Doppler profiles of acoustic signals caused by users’ speaking mouths and find that there are unique mouth movement patterns for different individuals. To characterize the mouth movements, we propose a deep learning-based method to extract efficient features from Doppler profiles and employ the softmax function, support vector machines, and support vector domain description to construct a multi-class identifier, binary classifiers, and spoofer detectors for mouth state identification, user identification, and spoofer detection, respectively. Afterward, we develop a balanced binary tree-based authentication approach to accurately identify each individual leveraging these binary classifiers and spoofer detectors with respect to registered users. Through extensive experiments involving 48 volunteers in four real environments, LipPass achieves 90.2% accuracy in user identification and 93.1% accuracy in spoofer detection.

75 citations


Journal ArticleDOI
TL;DR: BLEST and STTF are compared with existing schedulers in both emulated and real-world environments and are shown to reduce web object transmission times by up to 51% and provide 45% faster communication for interactive applications, compared with MPTCP’s default scheduler.
Abstract: The demand for mobile communication is continuously increasing, and mobile devices are now the communication device of choice for many people. To guarantee connectivity and performance, mobile devices are typically equipped with multiple interfaces. To this end, exploiting multiple available interfaces is also a crucial aspect of the upcoming 5G standard for reducing costs, easing network management, and providing a good user experience. Multi-path protocols, such as multi-path TCP (MPTCP), can be used to provide performance optimization through load-balancing and resilience to coverage drops and link failures; however, they do not automatically guarantee better performance. For instance, low-latency communication has been proven hard to achieve when a device has network interfaces with asymmetric capacity and delay (e.g., LTE and WLAN). For multi-path communication, the data scheduler is vital to provide low latency, since it decides over which network interface to send individual data segments. In this paper, we focus on the MPTCP scheduler with the goal of providing a good user experience for latency-sensitive applications when interface quality is asymmetric. After an initial assessment of existing scheduling algorithms, we present two novel scheduling techniques: the block estimation (BLEST) scheduler and the shortest transmission time first (STTF) scheduler. BLEST and STTF are compared with existing schedulers in both emulated and real-world environments and are shown to reduce web object transmission times by up to 51% and provide 45% faster communication for interactive applications, compared with MPTCP’s default scheduler.

Journal ArticleDOI
TL;DR: A general model for edge-cloud computing is proposed, in which jobs are generated in arbitrary order and at arbitrary times at the mobile devices and then offloaded to servers with both upload and download delays, together with an online algorithm, OnDisc, that reduces the total weighted response time dramatically compared with heuristic algorithms.
Abstract: In edge-cloud computing, a set of servers (called edge servers) are deployed near the mobile devices to allow these devices to offload their jobs to and subsequently obtain their results from the edge servers with low latency. One fundamental problem in edge-cloud systems is how to dispatch and schedule the jobs so that the job response time (defined as the interval between the release of the job and the arrival of the computation result at the device) is minimized. In this paper, we propose a general model for this problem, where the jobs are generated in arbitrary order and at arbitrary times at the mobile devices and then offloaded to servers with both upload and download delays. Our goal is to minimize the total weighted response time of all the jobs. The weight is set based on how latency-sensitive the job is. We derive the first online job dispatching and scheduling algorithm in edge-clouds, called OnDisc , which is scalable in the speed augmentation model; that is, OnDisc is $(1 + \varepsilon )$ -speed $O(1/\varepsilon )$ -competitive for any small constant $\varepsilon >0$ . Moreover, OnDisc can be easily implemented in distributed systems. We also extend OnDisc with a fairness knob to incorporate the trade-off between the average job response time and the degree of fairness among jobs. Extensive simulations based on a real-world data-trace from Google show that OnDisc can reduce the total weighted response time dramatically compared with heuristic algorithms.

Journal ArticleDOI
TL;DR: This work proposes a novel online classification algorithm, TupleMerge (TM), derived from tuple space search (TSS), the packet classifier used by Open vSwitch (OVS); TM improves upon TSS by combining hash tables that contain rules with similar characteristics, which greatly reduces classification time while preserving similar update performance.
Abstract: Packet classification is an important part of many networking devices, such as routers and firewalls. Software-defined networking (SDN) heavily relies on online packet classification, which must efficiently process two different streams: incoming packets to classify and rules to update. This rules out many offline packet classification algorithms that do not support fast updates. We propose a novel online classification algorithm, TupleMerge (TM), derived from tuple space search (TSS), the packet classifier used by Open vSwitch (OVS). TM improves upon TSS by combining hash tables which contain rules with similar characteristics. This greatly reduces classification time while preserving similar update performance. We validate the effectiveness of TM using both simulation and deployment in a full-fledged software router, specifically within the vector packet processor (VPP). In our simulation results, which focus solely on the efficiency of the classification algorithm, we demonstrate that TM outperforms all other state-of-the-art methods, including TSS, PartitionSort (PS), and SAX-PAC. For example, TM is 34% faster at classifying packets and 30% faster at updating rules than PS. We then experimentally evaluate TM deployed within the VPP framework, comparing TM against linear search and TSS, and also against TSS within the OVS framework. This validation of deployed implementations is important as SDN frameworks have several optimizations, such as caches, that may minimize the influence of a classification algorithm. Our experimental results clearly validate the effectiveness of TM: VPP TM classifies packets nearly two orders of magnitude faster than VPP TSS and at least one order of magnitude faster than OVS TSS.
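For readers unfamiliar with tuple space search, the baseline that TupleMerge improves on can be sketched as follows: rules are grouped by their (source, destination) prefix-length tuple, each group lives in an exact-match hash table, and classification probes every table. TupleMerge's contribution, merging tables with compatible mask lengths to cut the number of probes, is only noted in a comment, not implemented.

```python
def ip(a, b, c, d):
    """Pack dotted-quad octets into a 32-bit integer."""
    return (a << 24) | (b << 16) | (c << 8) | d

class TupleSpaceClassifier:
    """Minimal tuple space search over (src, dst) IPv4 prefix rules.
    TupleMerge additionally merges tables whose mask lengths are
    compatible, reducing the number of probes; not shown here."""

    def __init__(self):
        # one exact-match hash table per (src_len, dst_len) tuple
        self.tables = {}

    @staticmethod
    def _key(addr, plen):
        return addr >> (32 - plen) if plen else 0

    def insert(self, src, slen, dst, dlen, priority, action):
        # O(1) update: exactly one hash table is touched
        tbl = self.tables.setdefault((slen, dlen), {})
        tbl[(self._key(src, slen), self._key(dst, dlen))] = (priority, action)

    def classify(self, src, dst):
        best = None
        for (slen, dlen), tbl in self.tables.items():   # probe every tuple
            hit = tbl.get((self._key(src, slen), self._key(dst, dlen)))
            if hit is not None and (best is None or hit[0] > best[0]):
                best = hit
        return best[1] if best else None
```

Updates touch a single table, which is why TSS-style classifiers handle rule churn well; classification cost grows with the number of tuples, which is exactly what TupleMerge attacks.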

Journal ArticleDOI
TL;DR: A distributed routing protocol, DGGR, is proposed, which comprehensively takes sparse and dense environments into account to make routing decisions and performs best among the compared protocols in terms of average transmission delay and packet delivery ratio under varying packet generation speeds and densities.
Abstract: Due to random delays, local maximum and data congestion in vehicular networks, the design of a routing protocol is a challenging task, especially in urban environments. In this paper, a distributed routing protocol DGGR is proposed, which comprehensively takes sparse and dense environments into account to make routing decisions. As the guidance for routing selection, a road weight evaluation (RWE) algorithm is presented to assess road segments; its novelty lies in assigning each road segment a weight based on two delay models, built by exploiting the real-time link property when connected or historical traffic information when disconnected. With the RWE algorithm, the determined routing path can greatly alleviate the risk of local maximum and data congestion. Specifically, in view of the large size of a modern city, the road map is divided into a series of Grid Zones (GZs). Based on the position of the destination, packets can be forwarded among different GZs instead of across the whole city map to reduce the computation complexity, where the best path with the lowest delay within each GZ is determined. The backbone link, consisting of a series of selected backbone nodes at intersections and within road segments, is built for data forwarding along the determined path, which can further avoid MAC contentions. Extensive simulations reveal that, compared with some classic routing protocols, DGGR performs best in terms of average transmission delay and packet delivery ratio under varying packet generation speeds and densities.

Journal ArticleDOI
TL;DR: A novel market-based resource allocation framework in which the services act as buyers and fog resources act as divisible goods in the market is proposed, and the proposed equilibrium is shown to possess salient fairness properties, including envy-freeness, sharing-incentive, and proportionality.
Abstract: Fog computing is transforming the network edge into an intelligent platform by bringing storage, computing, control, and networking functions closer to end users, things, and sensors. How to allocate multiple resource types (e.g., CPU, memory, bandwidth) of capacity-limited heterogeneous fog nodes to competing services with diverse requirements and preferences in a fair and efficient manner is a challenging task. To this end, we propose a novel market-based resource allocation framework in which the services act as buyers and fog resources act as divisible goods in the market. The proposed framework aims to compute a market equilibrium (ME) solution at which every service obtains its favorite resource bundle under the budget constraint, while the system achieves high resource utilization. This paper extends the general equilibrium literature by considering a practical case of satiated utility functions. In addition, we introduce the notions of non-wastefulness and frugality for equilibrium selection and rigorously demonstrate that all the non-wasteful and frugal ME are the optimal solutions to a convex program. Furthermore, the proposed equilibrium is shown to possess salient fairness properties, including envy-freeness, sharing-incentive, and proportionality. Another major contribution of this paper is to develop a privacy-preserving distributed algorithm, which is of independent interest, for computing an ME while allowing market participants to obfuscate their private information. Finally, extensive performance evaluation is conducted to verify our theoretical analyses.

Journal ArticleDOI
TL;DR: In this paper, a graph-based model of programmable environments is proposed, which incorporates core physical observations and efficiently separates physical and networking concerns; the evaluation takes place in a specially developed simulation tool and in a variety of environments, validating the model and yielding insights into the user capacity.
Abstract: Programmable wireless environments enable the software-defined propagation of waves within them, yielding exceptional performance. Several building-block technologies have been implemented and evaluated at the physical layer in the past. The present work contributes a network-layer solution to configure such environments for multiple users and objectives, and for any underlying physical-layer technology. Supported objectives include any combination of Quality of Service and power transfer optimization, eavesdropping, and Doppler effect mitigation, in multi-cast or uni-cast settings. In addition, a graph-based model of programmable environments is proposed, which incorporates core physical observations and efficiently separates physical and networking concerns. The evaluation takes place in a specially developed simulation tool and in a variety of environments, validating the model and yielding insights into the user capacity of programmable environments.

Journal ArticleDOI
TL;DR: A spatiotemporal model is developed to characterize and design uncoordinated multiple access (UMA) strategies for MWNs by combining stochastic geometry and queueing theory, which quantifies the scalability of UMA via the maximum spatiotemporal traffic density that can be accommodated in the network.
Abstract: Massive wireless networks (MWNs) enable surging applications for the Internet of Things and cyber physical systems. In these applications, nodes typically exhibit stringent power constraints, limited computing capabilities, and sporadic traffic patterns. This paper develops a spatiotemporal model to characterize and design uncoordinated multiple access (UMA) strategies for MWNs. By combining stochastic geometry and queueing theory, the paper quantifies the scalability of UMA via the maximum spatiotemporal traffic density that can be accommodated in the network, while satisfying the target operational constraints (e.g., stability) for a given percentile of the nodes. The developed framework is then used to design UMA strategies that stabilize the node data buffers and achieve desirable latency, buffer size, and data rate.

Journal ArticleDOI
TL;DR: The proposed algorithm, called HeavyKeeper, incurs small, constant processing overhead per packet and thus supports high line rates; it achieves 99.99% precision with a small memory size and reduces the error by around three orders of magnitude on average compared to the state-of-the-art.
Abstract: Finding top-$k$ elephant flows is a critical task in network traffic measurement, with many applications in congestion control, anomaly detection, and traffic engineering. As line rates keep increasing in today’s networks, designing accurate and fast algorithms for online identification of elephant flows becomes more and more challenging. Prior algorithms are seriously limited in achieving accuracy under the constraints of heavy traffic and small on-chip memory. We observe that the basic strategies adopted by these algorithms either require significant space overhead to measure the sizes of all flows or incur significant inaccuracy when deciding which flows to keep track of. In this paper, we adopt a new strategy, called count-with-exponential-decay, to achieve a space-accuracy balance by actively removing small flows through decaying, while minimizing the impact on large flows, so as to achieve high precision in finding top-$k$ elephant flows. Moreover, the proposed algorithm, called HeavyKeeper, incurs small, constant processing overhead per packet and thus supports high line rates. Experimental results show that the HeavyKeeper algorithm achieves 99.99% precision with a small memory size, and reduces the error by around three orders of magnitude on average compared to the state-of-the-art.
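The count-with-exponential-decay strategy can be sketched in a few lines. This condenses HeavyKeeper's multiple hashed bucket arrays and its top-k min-heap into a single bucket array, so it illustrates the decay idea rather than reproducing the full algorithm:

```python
import random

class HeavyKeeperSketch:
    """One-array sketch of HeavyKeeper's count-with-exponential-decay
    idea. The real algorithm uses several hashed bucket arrays plus a
    min-heap to report the top-k flows; this is a didactic reduction."""

    def __init__(self, width=1024, b=1.08, seed=1):
        self.width, self.b = width, b
        self.buckets = [(None, 0)] * width   # (flow_id, estimated count)
        self.rng = random.Random(seed)

    def add(self, flow):
        i = hash(flow) % self.width
        fp, cnt = self.buckets[i]
        if fp == flow:
            self.buckets[i] = (fp, cnt + 1)          # same flow: count up
        elif cnt == 0:
            self.buckets[i] = (flow, 1)              # empty bucket: claim it
        elif self.rng.random() < self.b ** (-cnt):   # exponential decay:
            cnt -= 1                                 # eviction probability
            self.buckets[i] = (flow, 1) if cnt == 0 else (fp, cnt)
        # else: packet not recorded; large incumbents stay protected

    def query(self, flow):
        fp, cnt = self.buckets[hash(flow) % self.width]
        return cnt if fp == flow else 0
```

When a packet maps to a bucket held by another flow, the incumbent's count is decremented only with probability b^(-count), so established elephants are almost never evicted while one-packet mice decay quickly.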

Journal ArticleDOI
TL;DR: The FlexShare algorithm is proposed to provide near-optimal VNF-sharing and priority assignment decisions in polynomial time and it is proved that FlexShare is within a constant factor from the optimum and, using real-world VNF graphs, it consistently outperforms baseline solutions.
Abstract: Thanks to its computational and forwarding capabilities, the mobile network infrastructure can support several third-party (“vertical”) services, each composed of a graph of virtual (network) functions (VNFs). Importantly, one or more VNFs are often common to multiple services, thus the services deployment cost could be reduced by letting the services share the same VNF instance instead of devoting a separate instance to each service. By doing that, however, it is critical that the target KPI (key performance indicators) of all services are met. To this end, we study the VNF sharing problem and make decisions on 1) when sharing VNFs among multiple services is possible, 2) how to adapt the virtual machines running the shared VNFs to the combined load of the assigned services, and 3) how to prioritize the services traffic within shared VNFs. All decisions aim to minimize the cost for the mobile operator, subject to requirements on end-to-end service performance, e.g., total delay. Notably, we show that the aforementioned priorities should be managed dynamically and vary across VNFs. We then propose the FlexShare algorithm to provide near-optimal VNF-sharing and priority assignment decisions in polynomial time. We prove that FlexShare is within a constant factor from the optimum and, using real-world VNF graphs, we show that it consistently outperforms baseline solutions.

Journal ArticleDOI
TL;DR: In this paper, a matrix fractional programming (FP) based link scheduling algorithm is proposed to coordinate the link scheduling decisions among the interfering links, along with power control and beamforming.
Abstract: Interference management is a fundamental issue in device-to-device (D2D) communications whenever the transmitter-and-receiver pairs are located in close proximity and frequencies are fully reused, so active links may severely interfere with each other. This paper devises an optimization strategy named FPLinQ to coordinate the link scheduling decisions among the interfering links, along with power control and beamforming. The key enabler is a novel optimization method called matrix fractional programming (FP) that generalizes previous scalar and vector forms of FP in allowing multiple data streams per link. From a theoretical perspective, this paper provides a deeper understanding of FP by showing a connection to the minorization-maximization (MM) algorithm. From an application perspective, this paper shows that as compared to the existing methods for coordinating scheduling in the D2D network, such as FlashLinQ, ITLinQ, and ITLinQ+, the proposed FPLinQ approach is more general in allowing multiple antennas at both the transmitters and the receivers, and further in allowing arbitrary and multiple possible associations between the devices via matching. Numerical results show that FPLinQ significantly outperforms the previous state-of-the-art in a typical D2D communication environment.
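To make the fractional-programming idea concrete, below is a hedged Python sketch of the scalar quadratic transform applied to a toy single-antenna power-control problem (maximizing the sum of SINRs by alternating closed-form updates). FPLinQ's matrix FP generalizes this to multiple data streams and also folds in link scheduling, which this sketch omits; the channel-gain matrix and power cap in the test are invented example values.

```python
import math

def fp_power_control(G, sigma, p_max, iters=50):
    """Scalar quadratic-transform FP iteration for sum-SINR maximization.
    G[i][j] = channel gain from transmitter j to receiver i."""
    n = len(G)
    p = [p_max] * n
    for _ in range(iters):
        # y-update: closed form for fixed powers, y_i = sqrt(A_i)/B_i.
        y = []
        for i in range(n):
            interf = sigma + sum(G[i][j] * p[j] for j in range(n) if j != i)
            y.append(math.sqrt(G[i][i] * p[i]) / interf)
        # p-update: the quadratic surrogate is concave and separable in p_i,
        # so each power has a closed-form maximizer (clipped to [0, p_max]).
        for i in range(n):
            denom = sum(y[j] ** 2 * G[j][i] for j in range(n) if j != i)
            if denom == 0:
                p[i] = p_max
            else:
                p[i] = min(p_max, (y[i] * math.sqrt(G[i][i]) / denom) ** 2)
    return p

def sum_sinr(G, p, sigma):
    """Objective: sum over links of SINR_i."""
    n = len(p)
    return sum(
        G[i][i] * p[i]
        / (sigma + sum(G[i][j] * p[j] for j in range(n) if j != i))
        for i in range(n)
    )
```

Because the surrogate equals the true objective at the optimal auxiliary variables and lower-bounds it otherwise, each alternating update is non-decreasing in the original objective, which is the MM connection the abstract alludes to.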

Journal ArticleDOI
TL;DR: SEAF is proposed, a secure, efficient, and accountable edge-based access control framework for ICN, in which authentication is performed at the network edge to block unauthorized requests at the very beginning; group signatures provide anonymous authentication, and a hash chain technique greatly reduces the overhead when users make continuous requests for the same file.
Abstract: Information-centric networking (ICN) has been regarded as an ideal architecture for the next-generation network to handle users’ increasing demand for content delivery with in-network caching. While ICN makes better use of network resources and provides better service delivery, an effective access control mechanism is needed because contents are widely disseminated. However, in the existing solutions, making cache-enabled routers or content providers authenticate users’ requests causes high computation overhead and unnecessary delay. Also, the straightforward use of advanced encryption algorithms makes the system vulnerable to DoS attacks. Besides, privacy protection and service accountability are rarely taken into account in this scenario. In this paper, we propose SEAF, a secure, efficient, and accountable edge-based access control framework for ICN, in which authentication is performed at the network edge to block unauthorized requests at the very beginning. We adopt group signatures to achieve anonymous authentication and use a hash chain technique to greatly reduce the overhead when users make continuous requests for the same file. At the same time, we provide an efficient revocation method to make our framework more robust. Furthermore, the content providers can affirm the service amount received from the network and extract feedback information from the signatures and hash chains. By formal security analysis and comparison with related works, we show that SEAF achieves the expected security goals and possesses more useful features. The experimental results also demonstrate that our design is efficient for routers and content providers and introduces only slight delay for users’ content retrieval.
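The hash-chain trick for amortizing authentication over repeated requests can be sketched in a few lines of Python. This is a generic illustration, not SEAF's exact construction: the names, the use of SHA-256, and the chain length are assumptions, and the group-signature step that authenticates the chain anchor is abstracted away.

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_hash_chain(seed: bytes, n: int):
    """Return [h^0(seed), h^1(seed), ..., h^n(seed)]."""
    chain = [seed]
    for _ in range(n):
        chain.append(sha256(chain[-1]))
    return chain

class EdgeVerifier:
    """Edge-router state after the first (signature-authenticated) request,
    which carried the chain anchor h^n(seed). Each later request for the
    same file reveals the next preimage and costs only one hash to check."""

    def __init__(self, anchor: bytes):
        self.current = anchor

    def verify_next(self, token: bytes) -> bool:
        if sha256(token) == self.current:
            self.current = token  # accept and advance down the chain
            return True
        return False
```

The user pays for one expensive group signature per file, then walks the chain backwards (chain[n-1], chain[n-2], ...) for subsequent requests; forging a valid token would require inverting the hash.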

Journal ArticleDOI
TL;DR: A cost (pain) vs. latency (gain) analysis of executing jobs of many tasks with replicated or erasure-coded redundancy; the tail heaviness of service time variability is found to be decisive for the pain and gain of redundancy.
Abstract: Runtime performance variability has been a major issue, hindering predictable and scalable performance in modern distributed systems. Executing requests or jobs redundantly over multiple servers has been shown to be effective for mitigating variability, both in theory and practice. Systems that employ redundancy have drawn significant attention, and numerous papers have analyzed the pain and gain of redundancy under various service models and assumptions on the runtime variability. This paper presents a cost (pain) vs. latency (gain) analysis of executing jobs of many tasks by employing replicated or erasure-coded redundancy. The tail heaviness of service time variability is decisive for the pain and gain of redundancy, and we quantify its effect by deriving expressions for cost and latency. Specifically, we try to answer four questions: 1) How do replicated and coded redundancy compare in the cost vs. latency tradeoff? 2) Can we introduce redundancy after waiting some time and expect it to reduce the cost? 3) Can relaunching the tasks that appear to be straggling after some time help to reduce cost and/or latency? 4) Is it effective to use redundancy and relaunching together? We validate the answers we found for each of these questions via simulations that use empirical distributions extracted from Google cluster data.
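A small Monte Carlo sketch in Python conveys the cost-vs-latency bookkeeping behind such analyses. The Pareto service-time model, the parameter values, and the assumption that stragglers are cancelled immediately are illustrative choices of this sketch, not the paper's exact model.

```python
import random

def sample_service(alpha=2.0, x_min=1.0):
    """Heavy-tailed Pareto service time; the tail gets heavier as alpha shrinks."""
    return x_min / (random.random() ** (1.0 / alpha))

def coded_job(k, n):
    """Launch n erasure-coded tasks; the job finishes once any k complete.
    Returns (latency, cost): latency is the k-th order statistic of the
    runtimes, cost is total server busy time with stragglers cancelled
    at job completion."""
    times = sorted(sample_service() for _ in range(n))
    latency = times[k - 1]
    cost = sum(min(t, latency) for t in times)
    return latency, cost

def replicated_job(k, c):
    """Each of k tasks runs c replicas; a task finishes with its fastest
    replica (siblings cancelled then); the job finishes when all k tasks do."""
    task_times = [min(sample_service() for _ in range(c)) for _ in range(k)]
    latency = max(task_times)
    cost = c * sum(task_times)  # every replica runs until its task completes
    return latency, cost
```

Averaging these over many trials reproduces the qualitative tradeoff: under a heavy tail, both forms of redundancy cut latency sharply, and comparing the accumulated `cost` shows the pain side of the tradeoff.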

Journal ArticleDOI
TL;DR: FlipTracer is introduced, a practical system that achieves highly reliable parallel decoding even in hostile channel conditions; it relies on a graphical model, called the one-flip graph (OFG), to capture the transition pattern of collided signals, together with a reliable approach to construct the OFG that is robust to the diversity in backscatter systems.
Abstract: With parallel decoding for backscatter communication, tags are allowed to transmit concurrently and more efficiently. Existing parallel decoding mechanisms, however, assume that signals of the tags are highly stable and, hence, may not perform optimally in the naturally dynamic backscatter systems. This paper introduces FlipTracer, a practical system that achieves highly reliable parallel decoding even in hostile channel conditions. FlipTracer is designed around a key insight: although the collided signal is time-varying and irregular, transitions between signals’ combined states follow highly stable probabilities, which offers important clues for identifying the collided signals and provides us with an opportunity to decode the collided signals without relying on stable signals. Motivated by this observation, we propose a graphical model, called the one-flip graph (OFG), to capture the transition pattern of collided signals, and design a reliable approach to construct the OFG in a manner robust to the diversity in backscatter systems. Then, FlipTracer can resolve the collided signals by tracking the OFG. We have implemented FlipTracer and evaluated its performance with extensive experiments across a wide variety of scenarios. Our experimental results have shown that FlipTracer achieves a maximum aggregated throughput that approaches 2 Mb/s, which is $6\times $ higher than the state of the art.

Journal ArticleDOI
TL;DR: This paper investigates the statistical modeling of the individual user preferences of video content and proposes a novel modeling framework by using a genre-based hierarchical structure as well as a parameterization of the framework based on an extensive real-world data set.
Abstract: Caching of video files at the wireless edge, i.e., at the base stations or on user devices, is a key method for improving wireless video delivery. While global popularity distributions of video content have been investigated in the past and used in a variety of caching algorithms, this paper investigates the statistical modeling of individual user preferences . With individual preferences represented by probabilities, we identify their critical features and parameters and propose a novel modeling framework using a genre-based hierarchical structure, as well as a parameterization of the framework based on an extensive real-world data set. In addition, we conduct a correlation analysis between the parameters and critical statistics of the framework. Building on the framework, we propose an implementation recipe for generating practical individual preference probabilities. By comparing with the underlying real data, we show that the proposed models and generation approach can effectively characterize the individual preferences of users for video content.
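To give a flavor of what a genre-based hierarchical preference generator looks like, here is a toy Python sketch: a user's probability of requesting a video factors into a per-user genre distribution times a within-genre video distribution. The Zipf form, the shuffling step, and all parameter values are illustrative assumptions of this sketch, not the fitted model from the paper's data set.

```python
import random

def zipf_pmf(n, s):
    """Normalized Zipf probabilities over n items with skew s."""
    weights = [1.0 / (i + 1) ** s for i in range(n)]
    total = sum(weights)
    return [w / total for w in weights]

def individual_preferences(n_genres, videos_per_genre,
                           genre_skew, video_skew, rng):
    """Two-level (genre -> video) preference vector for one user.
    Shuffling gives each user a different genre/video ranking while
    keeping the same skew statistics across the population."""
    genre_p = zipf_pmf(n_genres, genre_skew)
    rng.shuffle(genre_p)
    prefs = []
    for g in range(n_genres):
        video_p = zipf_pmf(videos_per_genre, video_skew)
        rng.shuffle(video_p)
        prefs.extend(genre_p[g] * v for v in video_p)
    return prefs  # request probability for each of the n_genres * videos_per_genre videos
```

Because each level is a proper distribution, the product construction yields a valid per-user probability vector over the whole catalog, which is the kind of output a caching simulator would consume.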

Journal ArticleDOI
TL;DR: This paper proposes the multi-framed hierarchical-hashing data collection (MHDC) protocol, which not only significantly improves the utilization of the RFID wireless communication channel by establishing a bijective mapping between target tags and the first $k$ slots of the time frame, but also effectively filters out the serious interference of unexpected tags.
Abstract: This paper studies the important sensory data collection problem in sensor-augmented RFID systems, which is to quickly and accurately collect sensory data from a predefined set of target tags with the coexistence of unexpected tags. The existing RFID data collection schemes suffer from either low time-efficiency due to tag collisions or serious data corruption due to interference from unexpected tags. To overcome these limitations, we propose the hierarchical-hashing data collection (HDC) protocol, which not only significantly improves the utilization of the RFID wireless communication channel by establishing a bijective mapping between the $k$ target tags and the first $k$ slots of the time frame, but also effectively filters out the serious interference of unexpected tags. Although HDC has attractive advantages, theoretical analysis reveals that its computation cost is as high as $\mathcal {O}(k2^{k})$ , where $k$ is normally large in practice. By making some modifications to the basic HDC protocol, we propose the multi-framed hierarchical-hashing data collection (MHDC) protocol to effectively reduce the involved computation complexity. Unlike HDC, which issues only a single time frame, MHDC uses multiple time frames to collaboratively collect sensory data from the $k$ target tags. Intuitively, a large computation task is broken into multiple small pieces that are shared among multiple time frames. As a result, the computation cost of MHDC is reduced to $\mathcal {O}(k2^{n})$ , where $n\ll k$ is the expected number of target tags that each time frame handles. Theoretical analysis is given to jointly consider the communication cost and computation cost, thereby maximizing the overall time-efficiency of MHDC.
Extensive simulation results reveal that the proposed MHDC protocol can correctly collect all sensory data and is consistently more than $2\times $ faster than the state-of-the-art RFID sensory data collection protocols.
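The channel-utilization trick, hashing tags one-to-one onto the first $k$ slots, and the reason its cost explodes, can be sketched in Python. The hash construction, seed search, and group size below are illustrative assumptions: a random seed yields a bijection over $k$ tags with probability $k!/k^{k}$, so the expected search cost grows exponentially in $k$, which is why a multi-frame variant keeps per-frame groups small.

```python
import hashlib

def slot(tag_id: bytes, seed: int, k: int) -> int:
    """Hash a tag into one of k slots under a given frame seed."""
    digest = hashlib.sha256(seed.to_bytes(8, "big") + tag_id).digest()
    return int.from_bytes(digest[:4], "big") % k

def find_bijective_seed(tags, max_tries=10**6):
    """Brute-force a seed mapping the tags one-to-one onto slots 0..k-1.
    Each try succeeds with probability k!/k^k, so the expected number of
    tries grows exponentially in k."""
    k = len(tags)
    for seed in range(max_tries):
        if len({slot(t, seed, k) for t in tags}) == k:
            return seed
    return None

def multi_frame_schedule(tags, group_size):
    """Toy multi-frame variant: split the tags into small groups and find
    a per-frame bijective seed for each, keeping each search cheap."""
    return [(tags[i:i + group_size],
             find_bijective_seed(tags[i:i + group_size]))
            for i in range(0, len(tags), group_size)]
```

Once the reader broadcasts the winning seed, every target tag computes its own collision-free slot locally, so the frame carries no empty or collision slots.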

Journal ArticleDOI
TL;DR: This paper studies the practically important problem of range query for sensor-augmented RFID systems, which is to classify the target tags according to the ranges specified by the user, and proposes a basic classification protocol that ensures 100% query accuracy and reduces the time cost compared with the state-of-the-art protocols.
Abstract: This paper studies the practically important problem of range query for sensor-augmented RFID systems, which is to classify the target tags according to the ranges specified by the user. The existing RFID protocols that seem to address this problem suffer from either low time-efficiency or the information corruption issue. To overcome their limitations, we first propose a basic classification protocol called Range Query (RQ) , in which each tag pseudo-randomly chooses a slot from the time frame and uses ON-OFF keying modulation to reply with its range identifier. Then, RQ employs a collaborative decoding method to extract the tag range information from singleton and even collision slots. The numerical results reveal that the number of queried ranges significantly affects the performance of RQ . To optimize the number of queried ranges, we further propose a partitioning scheme, referred to as PM; the combined RQ + PM protocol is immune to interference from unexpected tags and does not suffer from the information corruption issue. We use USRP and WISP tags to conduct a set of experiments, which demonstrate the feasibility of RQ + PM . Extensive simulation results reveal that RQ + PM can ensure 100% query accuracy and reduce the time cost by as much as 40% when compared with the state-of-the-art protocols.

Journal ArticleDOI
TL;DR: An efficient tree-based tag search (TTS) that improves on state-of-the-art efficiency through batched verification; the optimal hash code length and node degrees in TTS are derived to accommodate hash collisions, along with the optimal filtering vector size to minimize the time cost of TTS+.
Abstract: Tag search, which is to find a particular set of tags in a radio frequency identification (RFID) system, is a key service in such important Internet-of-Things applications as inventory management. When the system scale is large with a massive number of tags, deterministic search can be prohibitively expensive, and probabilistic search has been advocated, seeking a balance between reliability and time efficiency. Given a failure probability $\frac {1}{\mathcal {O}(K)}$ , where $K$ is the number of tags, state-of-the-art solutions have achieved a time cost of $\mathcal {O}(K \log K)$ through multi-round hashing and verification. Further improvement, however, faces a critical bottleneck of repetitively verifying each individual target tag in each round. In this paper, we present an efficient tree-based tag search (TTS) that approaches $\mathcal {O}(K)$ through batched verification. The key novelty of TTS is to smartly hash multiple tags into each internal tree node and adaptively control the node degrees. It conducts bottom-up search to verify tags group by group, with the number of groups decreasing rapidly. Furthermore, we design an enhanced tag search scheme, referred to as TTS+, to overcome the negative impact of asymmetric tag set sizes on the time efficiency of TTS. TTS+ first rules out partial ineligible tags with a filtering vector and feeds the shrunk tag sets into TTS. We derive the optimal hash code length and node degrees in TTS to accommodate hash collisions and the optimal filtering vector size to minimize the time cost of TTS+. The superiority of TTS and TTS+ over the state-of-the-art solution is demonstrated through both theoretical analysis and extensive simulations. Specifically, as the reliability demand scales up, the time efficiency of TTS+ reaches up to nearly twice that of TTS.

Journal ArticleDOI
TL;DR: This paper proposes a model for video streaming systems, typically composed of a centralized origin server, several CDN sites, and edge-caches located closer to the end user, and comprehensively considers different system design factors.
Abstract: Internet video traffic has been rapidly increasing and is further expected to increase with the emerging 5G applications, such as higher definition videos, the IoT, and augmented/virtual reality applications. As end users consume video in massive amounts and in an increasing number of ways, the content distribution network (CDN) should be efficiently managed to improve the system efficiency. The streaming service can include multiple caching tiers, at the distributed servers and the edge routers, and efficient content management at these locations affects the quality of experience (QoE) of the end users. In this paper, we propose a model for video streaming systems, typically composed of a centralized origin server, several CDN sites, and edge-caches located closer to the end user. We comprehensively consider different systems design factors, including the limited caching space at the CDN sites, allocation of CDN for a video request, choice of different ports (or paths) from the CDN and the central storage, bandwidth allocation, the edge-cache capacity, and the caching policy. We focus on minimizing a performance metric, stall duration tail probability (SDTP), and present a novel and efficient algorithm accounting for the multiple design flexibilities. The theoretical bounds with respect to the SDTP metric are also analyzed and presented. The implementation of a virtualized cloud system managed by Openstack demonstrates that the proposed algorithms can significantly improve the SDTP metric compared with the baseline strategies.

Journal ArticleDOI
TL;DR: The approach proposed is the first step toward a security policy aware NFV management, orchestration, and resource allocation system—a paradigm shift for the management of virtualized networks—and it requires minor changes to the current NFV architecture.
Abstract: This paper introduces an approach toward the automatic enforcement of security policies in network functions virtualization (NFV) networks and dynamic adaptation to network changes. The approach relies on a refinement model that allows the dynamic transformation of high-level security requirements into configuration settings for the network security functions (NSFs), and optimization models that allow the optimal selection of the NSFs to use. These models are built on a formalization of the NSF capabilities, which serves to unequivocally describe what NSFs are able to do for security policy enforcement purposes. The approach proposed is the first step toward a security policy aware NFV management, orchestration, and resource allocation system—a paradigm shift for the management of virtualized networks—and it requires minor changes to the current NFV architecture. We prove that our approach is feasible, as it has been implemented by extending the OpenMANO framework and validated on several network scenarios. Furthermore, we prove with performance tests that policy refinement scales well enough to support current and future virtualized networks.

Journal ArticleDOI
TL;DR: Test results of NS2 simulation and small-scale testbed experiments show that CAPS significantly reduces the average flow completion time of short flows by ~30%–70% over the state-of-the-art multipath transmission schemes and achieves the high throughput for long flows with negligible traffic overhead.
Abstract: Modern data-center applications generate a diverse mix of short and long flows with different performance requirements and weaknesses. Short flows are typically delay-sensitive but suffer from head-of-line blocking and out-of-order problems. Recent solutions prioritize the short flows to meet their latency requirements, at the expense of the throughput-sensitive long flows. To solve these problems, we design Coding-based Adaptive Packet Spraying (CAPS), which effectively mitigates the negative impact of short and long flows on each other. To exploit the availability of multiple paths and avoid head-of-line blocking, CAPS spreads the packets of short flows across all paths, while the long flows are limited to a few paths with Equal-Cost Multi-Path (ECMP) routing. Meanwhile, to resolve the out-of-order problem with low overhead, CAPS encodes the short flows using forward error correction (FEC) and adjusts the coding redundancy according to the blocking probability. Moreover, since the coding efficiency decreases when the coding unit is too small or too large, we demonstrate how to obtain the optimal size of the coding unit. The coding layer is deployed between the TCP and IP layers, without any modifications to the existing TCP/IP protocols. Test results from NS2 simulations and small-scale testbed experiments show that CAPS significantly reduces the average flow completion time of short flows by ~30%–70% over the state-of-the-art multipath transmission schemes and achieves high throughput for long flows with negligible traffic overhead.
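To illustrate the FEC flavor of the design, here is a toy Python sketch: one XOR parity packet per coding unit (so any single missing packet is recoverable), plus a redundancy-selection helper in the spirit of adjusting redundancy to the blocking probability. CAPS's actual code and parameters differ; everything here, including the independent-loss assumption, is an illustrative simplification.

```python
from functools import reduce
import math

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encode_unit(packets):
    """Append one XOR parity packet to a coding unit of equal-size packets."""
    return packets + [reduce(xor_bytes, packets)]

def recover(survivors):
    """With single-parity FEC, XOR-ing the surviving packets of a unit
    rebuilds the one missing packet (data or parity)."""
    return reduce(xor_bytes, survivors)

def redundancy_for(p_block, unit=8, target=0.99):
    """Smallest number of parity packets r such that the probability of at
    most r losses among unit + r transmissions (independent loss
    probability p_block) meets the delivery target."""
    r = 0
    while True:
        n = unit + r
        p_ok = sum(math.comb(n, i) * p_block**i * (1 - p_block)**(n - i)
                   for i in range(r + 1))
        if p_ok >= target:
            return r
        r += 1
```

A sender would call `redundancy_for` with the measured blocking probability and switch to a stronger code than single parity when more than one loss per unit must be tolerated; the receiver delivers the unit as soon as enough packets arrive, sidestepping per-packet reordering.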