Showing papers by "Jon Crowcroft published in 2019"
••
TL;DR: The framework proposed in this paper discusses the effectiveness of the polling process, hashing algorithms’ utility, block creation and sealing, data accumulation, and result declaration by using the adjustable blockchain method.
Abstract: Electronic voting has emerged over time as a replacement for paper-based voting, aiming to reduce redundancies and inconsistencies. The historical perspective of the last two decades suggests that it has not been very successful, owing to the security and privacy flaws observed over time. This paper suggests a framework that uses effective hashing techniques to ensure the security of the data. The concept of block creation and block sealing is introduced in this paper. The introduction of a block sealing concept helps in making the blockchain adjustable to meet the needs of the polling process. The use of a consortium blockchain is suggested, which ensures that the blockchain is owned by a governing body (e.g., an election commission), and that no unauthorized access can be made from outside. The framework proposed in this paper discusses the effectiveness of the polling process, the utility of hashing algorithms, block creation and sealing, data accumulation, and result declaration using the adjustable blockchain method. This paper aims to address the security and data management challenges in blockchain and provides an improved manifestation of the electronic voting process.
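To make the block creation and sealing idea concrete, here is a minimal, hypothetical Python sketch of how a polling block might be accumulated, hashed, and sealed. The field names (votes, previous_hash, seal_nonce), the SHA-256 choice, and the difficulty-based sealing rule are illustrative assumptions, not the paper's exact construction.

```python
import hashlib
import json
import time

def hash_block(block: dict) -> str:
    """Deterministically hash a block's contents with SHA-256 (illustrative choice)."""
    payload = json.dumps(block, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

def create_block(votes, previous_hash: str) -> dict:
    """Accumulate a batch of ballots into an unsealed block linked to its predecessor."""
    return {
        "timestamp": time.time(),
        "votes": list(votes),           # accumulated ballots for this block
        "previous_hash": previous_hash, # chains the block to the existing ledger
    }

def seal_block(block: dict, difficulty: int = 3) -> dict:
    """Seal the block by searching for a nonce whose hash meets a difficulty target."""
    nonce = 0
    while True:
        candidate = dict(block, seal_nonce=nonce)
        digest = hash_block(candidate)
        if digest.startswith("0" * difficulty):
            candidate["hash"] = digest
            return candidate
        nonce += 1

genesis = seal_block(create_block([], previous_hash="0" * 64))
block1 = seal_block(create_block([{"ballot_id": 1, "choice": "A"}], genesis["hash"]))
print(block1["hash"])
```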
145 citations
••
TL;DR: An open-source Python simulator for integrated modelling of 5G (pysim5G) that enables both engineering and cost metrics to be assessed in a single unified framework, allowing users to undertake integrated techno-economic assessment of 4G and 5G deployments in a single geospatial framework.
Abstract: Optimal network planning is crucial to ensure viable investments. However, engineering analysis and cost assessment frequently occur independently of each other. Whereas considerable research has been undertaken on 5G networks, there is a lack of openly accessible tools that integrate the engineering and cost aspects in a techno-economic assessment framework capable of providing geospatially-explicit network analytics. Consequently, this paper details an open-source Python simulator for integrated modelling of 5G (pysim5G) that enables both engineering and cost metrics to be assessed in a single unified framework. The tool includes statistical analysis of radio interference to assess the system-level performance of 4G and 5G frequency band coexistence (including millimeter wave), while simultaneously quantifying the costs of ultra-dense 5G networks. An example application of this framework explores the techno-economics of 5G infrastructure sharing strategies, finding that total deployment costs can be reduced by 30% using either passive site sharing or passive backhaul sharing, or by up to 50% via a multi-operator radio access network. The key contribution is a fully-tested, open-source software codebase, allowing users to undertake integrated techno-economic assessment of 5G deployments in a single geospatial framework.
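As a rough illustration of why passive sharing reduces per-operator cost, here is a toy Python calculation: the passive infrastructure cost is split among the operators that share it, while each operator still buys its own active equipment. All cost figures are made-up placeholders, not pysim5G inputs or outputs.

```python
# Toy cost model: split passive infrastructure cost among operators that share it.
# All figures are illustrative placeholders, not values from pysim5G.
site_civil_works = 50_000   # passive: mast, power, site build (per site)
backhaul_link    = 20_000   # passive: fibre/microwave backhaul (per site)
active_equipment = 40_000   # active: radio units, baseband (per operator, per site)

def per_operator_cost(sharing_operators: int, share_site: bool, share_backhaul: bool) -> float:
    """Cost borne by one operator for one site under a given passive-sharing strategy."""
    site = site_civil_works / (sharing_operators if share_site else 1)
    backhaul = backhaul_link / (sharing_operators if share_backhaul else 1)
    return site + backhaul + active_equipment

baseline = per_operator_cost(1, False, False)
site_sharing = per_operator_cost(2, True, False)
print(f"baseline: {baseline:,.0f}, passive site sharing: {site_sharing:,.0f} "
      f"({100 * (1 - site_sharing / baseline):.0f}% saving)")
```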
64 citations
••
TL;DR: A comparative analysis of the usage and impact of bots and humans on Twitter—one of the largest OSNs in the world— draws clear differences and interesting similarities between the two entities.
Abstract: Recent research has shown a substantial active presence of bots in online social networks (OSNs). In this article, we perform a comparative analysis of the usage and impact of bots and humans on Twitter—one of the largest OSNs in the world. We collect a large-scale Twitter dataset and define various metrics based on tweet metadata. Using a human annotation task, we assign “bot” and “human” ground-truth labels to the dataset and compare the annotations against an online bot detection tool for evaluation. We then ask a series of questions to discern important behavioural characteristics of bots and humans using metrics within and among four popularity groups. From the comparative analysis, we draw clear differences and interesting similarities between the two entities.
41 citations
••
TL;DR: This paper proposes to leverage state information about the network to inform service placement decisions through a fast heuristic algorithm, which is critical for reacting quickly to changing conditions, and shows that the resulting improvements contribute to higher QoE, a crucial parameter for using services from volunteer-based systems.
Abstract: Community networks (CNs) have gained momentum in the last few years with the increasing number of spontaneously deployed WiFi hotspots and home networks. These networks, owned and managed by volunteers, offer various services to their members and to the public. While Internet access is the most popular service, the provision of services of local interest within the network is enabled by the emerging technology of CN micro-clouds. By putting services closer to users, micro-clouds pursue not only a better service performance, but also a low entry barrier for the deployment of mainstream Internet services within the CN. Unfortunately, the provisioning of these services is not so simple. Due to the large and irregular topology and the high software and hardware diversity of CNs, a “careful” placement of micro-cloud services over the network is required to optimize service performance. This paper proposes to leverage state information about the network to inform service placement decisions, and to do so through a fast heuristic algorithm, which is critical to react quickly to changing conditions. To evaluate its performance, we compare our heuristic with one based on random placement in Guifi.net, the biggest CN worldwide. Our experimental results show that our heuristic consistently outperforms random placement by 2x in bandwidth gain. We quantify the benefits of our heuristic on a real live video-streaming service, and demonstrate that video chunk losses decrease significantly, attaining a 37% decrease in the packet loss rate. Further, using a popular Web 2.0 service, we demonstrate that the client response times decrease up to an order of magnitude when using our heuristic. Since these improvements translate into the QoE (Quality of Experience) perceived by the user, our results are relevant for contributing to higher QoE, a crucial parameter for using services from volunteer-based systems and for adopting CN micro-clouds as an ecosystem for service deployment.
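A minimal sketch of the kind of bandwidth-aware greedy placement such a heuristic might perform: score each candidate node by the bottleneck bandwidth of its paths to the clients it would serve, and pick the best. The topology, link bandwidths, and scoring rule below are illustrative assumptions, not the paper's exact algorithm.

```python
# Toy bandwidth-aware placement: choose the candidate node that maximises the total
# bottleneck bandwidth of its shortest paths to the clients it would serve.
import networkx as nx

def place_service(graph: nx.Graph, candidates, clients):
    """Return the candidate node with the largest summed path-bottleneck bandwidth."""
    def bottleneck(path):
        return min(graph[u][v]["bw"] for u, v in zip(path, path[1:]))

    def score(node):
        return sum(bottleneck(nx.shortest_path(graph, node, c))
                   for c in clients if c != node)

    return max(candidates, key=score)

g = nx.Graph()
g.add_edge("gw1", "r1", bw=50)
g.add_edge("gw2", "r1", bw=40)
g.add_edge("gw3", "r2", bw=20)
g.add_edge("r1", "r2", bw=30)

best = place_service(g, candidates=["r1", "r2"], clients=["gw1", "gw2", "gw3"])
print("place the micro-cloud service at:", best)   # r1 in this toy topology
```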
40 citations
••
TL;DR: In this article, the authors used the article content and metadata of four important computer networking periodicals (IEEE Communications Surveys and Tutorials (COMST), IEEE/ACM Transactions on Networking (TON), ACM Special Interest Group on Data Communications (SIGCOMM), and IEEE International Conference on Computer Communications (INFOCOM) for an 18-year period (2000-2017) to address important bibliometrics questions.
Abstract: Computer networking is a major research discipline in computer science, electrical engineering, and computer engineering. The field has been actively growing, in terms of both research and development, for the past hundred years. This study uses the article content and metadata of four important computer networking periodicals—IEEE Communications Surveys and Tutorials (COMST), IEEE/ACM Transactions on Networking (TON), ACM Special Interest Group on Data Communications (SIGCOMM), and IEEE International Conference on Computer Communications (INFOCOM)—obtained using ACM, IEEE Xplore, Scopus and CrossRef, for an 18-year period (2000–2017) to address important bibliometrics questions. All of the venues are prestigious, yet they publish quite different research. The first two of these periodicals (COMST and TON) are highly reputed journals of the fields while SIGCOMM and INFOCOM are considered top conferences of the field. SIGCOMM and INFOCOM publish new original research. TON has a similar genre and publishes new original research as well as the extended versions of different research published in the conferences such as SIGCOMM and INFOCOM, while COMST publishes surveys and reviews (which not only summarize previous works but highlight future research opportunities). In this study, we aim to track the co-evolution of trends in the COMST and TON journals and compare them to the publication trends in INFOCOM and SIGCOMM. Our analyses of the computer networking literature include: (a) metadata analysis; (b) content-based analysis; and (c) citation analysis. In addition, we identify the significant trends and the most influential authors, institutes and countries, based on the publication count as well as article citations. Through this study, we are proposing a methodology and framework for performing a comprehensive bibliometric analysis on computer networking research. To the best of our knowledge, no such study has been undertaken in computer networking until now.
37 citations
•
TL;DR: The goal is to cover the evolution of blockchain-based systems that are trying to bring about a renaissance in the existing, mostly centralized, space of network applications, and to highlight various common challenges, pitfalls, and shortcomings that can occur.
Abstract: Blockchain is challenging the status quo of the central trust infrastructure currently prevalent in the Internet towards a design principle that is underscored by decentralization, transparency, and trusted auditability. In ideal terms, blockchain advocates a decentralized, transparent, and more democratic version of the Internet. Essentially being a trusted and decentralized database, blockchain finds its applications in fields as varied as the energy sector, forestry, fisheries, mining, material recycling, air pollution monitoring, supply chain management, and their associated operations. In this paper, we present a survey of blockchain-based network applications. Our goal is to cover the evolution of blockchain-based systems that are trying to bring about a renaissance in the existing, mostly centralized, space of network applications. While re-imagining the space with blockchain, we highlight various common challenges, pitfalls, and shortcomings that can occur. Our aim is to make this work a guiding reference manual for someone interested in shifting towards a blockchain-based solution for an existing use case or building one from the ground up.
31 citations
••
TL;DR: The results show that the network densification and the cell load have a profound impact on system performance as well as on the spectral and energy efficiencies of the networks; the role of the ergodic channel capacity is also discussed.
Abstract: Ultra-dense multi-tier cellular networks have recently drawn the attention of researchers due to their potential efficiency in dealing with high-data rate demands in upcoming 5G cellular networks. These networks consist of multi-tier base stations including micro base stations with very high system capacity and short inter-site distances, overseen by central macro base stations. In this way, network densification is achieved in the same area as that of traditional mobile networks, which offers much higher system capacity and bandwidth reuse. This paper utilizes a well-known analytical tool, stochastic geometry, for modeling and analyzing interference in ultra-dense multi-tier cellular networks. Primarily, we have studied different factors affecting the system capacity including the network densification, cell load, and multi-tier interference. The role of the ergodic channel capacity is also discussed. Moreover, the effects of channel interference, system bandwidth, and the network densification on the spectral and energy efficiencies of the network are observed. Finally, the results show that the network densification and the cell load have a profound impact on system performance as well as spectral and energy efficiencies of the networks.
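For context, the standard stochastic-geometry setup referred to here models the base stations of each tier as a point process and evaluates the SINR at a typical user; a generic form is shown below. The notation is assumed for illustration and is not taken from the paper.

```latex
% Generic downlink SINR under a stochastic-geometry (e.g., Poisson point process) model.
% P_k: tier-k transmit power, h_x: fading gain, \alpha: path-loss exponent,
% \sigma^2: noise power, \Phi_k: tier-k base-station locations, x_0: serving BS.
\mathrm{SINR} = \frac{P_{k_0}\, h_{x_0}\, \lVert x_0 \rVert^{-\alpha}}
                     {\sigma^2 + \sum_{k} \sum_{x \in \Phi_k \setminus \{x_0\}} P_k\, h_x\, \lVert x \rVert^{-\alpha}},
\qquad
\text{coverage probability: } p_c(\theta) = \mathbb{P}\!\left[\mathrm{SINR} > \theta\right].
```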
27 citations
••
TL;DR: In this paper, the authors used the article content and metadata of four important computer networking periodicals (IEEE Communications Surveys and Tutorials (COMST), IEEE/ACM Transactions on Networking (TON), ACM Special Interest Group on Data Communications (SIGCOMM), and IEEE International Conference on Computer Communications (INFOCOM) for an 18-year period (2000-2017) to address important bibliometrics questions.
Abstract: This study uses the article content and metadata of four important computer networking periodicals-IEEE Communications Surveys and Tutorials (COMST), IEEE/ACM Transactions on Networking (TON), ACM Special Interest Group on Data Communications (SIGCOMM), and IEEE International Conference on Computer Communications (INFOCOM)-obtained using ACM, IEEE Xplore, Scopus and CrossRef, for an 18-year period (2000-2017) to address important bibliometrics questions. All of the venues are prestigious, yet they publish quite different research. The first two of these periodicals (COMST and TON) are highly reputed journals of the fields while SIGCOMM and INFOCOM are considered top conferences of the field. SIGCOMM and INFOCOM publish new original research. TON has a similar genre and publishes new original research as well as the extended versions of different research published in the conferences such as SIGCOMM and INFOCOM, while COMST publishes surveys and reviews (which not only summarize previous works but highlight future research opportunities). In this study, we aim to track the co-evolution of trends in the COMST and TON journals and compare them to the publication trends in INFOCOM and SIGCOMM. Our analyses of the computer networking literature include: (a) metadata analysis; (b) content-based analysis; and (c) citation analysis. In addition, we identify the significant trends and the most influential authors, institutes and countries, based on the publication count as well as article citations. Through this study, we are proposing a methodology and framework for performing a comprehensive bibliometric analysis on computer networking research. To the best of our knowledge, no such study has been undertaken in computer networking until now.
19 citations
••
TL;DR: A novel hybrid deep learning model, Social-Aware Long Short-Term Memory (SA-LSTM), for predicting the types of item/PoIs that a user will likely buy/visit next, which features stacked LSTMs for sequential modeling and an autoencoder-based deep model for social influence modeling.
Abstract: In this paper, we propose to leverage the emerging deep learning techniques for sequential modeling of user interests based on big social data, which takes into account influence of their social circles. First, we present a preliminary analysis for two popular big datasets from Yelp and Epinions. We show statistically sequential actions of all users and their friends, and discover both temporal autocorrelation and social influence on decision making, which motivates our design. Then, we present a novel hybrid deep learning model, Social-Aware Long Short-Term Memory (SA-LSTM), for predicting the types of item/PoIs that a user will likely buy/visit next, which features stacked LSTMs for sequential modeling and an autoencoder-based deep model for social influence modeling. Moreover, we show that SA-LSTM supports end-to-end training. We conducted extensive experiments for performance evaluation using the two real datasets from Yelp and Epinions. The experimental results show that (1) the proposed deep model significantly improves prediction accuracy compared to widely used baseline methods; (2) the proposed social influence model works effectively; and (3) going deep does help improve prediction accuracy but a not-so-deep deep structure leads to the best performance.
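A minimal PyTorch sketch of the general shape of such a hybrid model: stacked LSTMs over a user's own action sequence, combined with an autoencoder-style encoder of an aggregated social-influence vector, fused by concatenation before the final prediction layer. Layer sizes, the fusion rule, and all names are assumptions for illustration, not the published SA-LSTM architecture.

```python
import torch
import torch.nn as nn

class SocialAwareNextItemModel(nn.Module):
    """Illustrative hybrid model: stacked LSTM for the user's own sequence plus an
    autoencoder-style encoder for an aggregated social-influence feature vector."""
    def __init__(self, num_categories: int, embed_dim: int = 64,
                 hidden_dim: int = 128, social_dim: int = 32):
        super().__init__()
        self.item_embedding = nn.Embedding(num_categories, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, num_layers=2, batch_first=True)
        # Encoder half of an autoencoder over friends' aggregated category histogram.
        self.social_encoder = nn.Sequential(
            nn.Linear(num_categories, social_dim), nn.ReLU(),
            nn.Linear(social_dim, social_dim), nn.ReLU(),
        )
        self.classifier = nn.Linear(hidden_dim + social_dim, num_categories)

    def forward(self, item_seq, social_features):
        # item_seq: (batch, seq_len) category ids; social_features: (batch, num_categories)
        embedded = self.item_embedding(item_seq)
        _, (h_n, _) = self.lstm(embedded)
        seq_repr = h_n[-1]                       # final hidden state of the top LSTM layer
        social_repr = self.social_encoder(social_features)
        return self.classifier(torch.cat([seq_repr, social_repr], dim=-1))

model = SocialAwareNextItemModel(num_categories=100)
logits = model(torch.randint(0, 100, (4, 10)), torch.rand(4, 100))
print(logits.shape)  # (4, 100): scores for the next item/PoI category
```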
14 citations
••
01 Jan 2019
TL;DR: The idea that data could be a key differentiator is, of course, not a new one, as discussed by the authors, but policy–data interactions have been the exception rather than the norm: isolated prototypes and trials rather than an indication of real, systemic change.
Abstract: Every era faces a unique set of challenges and dilemmas, but ours can credibly lay claim to some of the most complex and vexing that humankind may have ever confronted. From climate change to growing inequality to a rising tide of refugees: we face an intricate mesh of overlapping and interdependent difficulties, one that is pushing the limits of our existing policy and governance capabilities (Data for Policy, 2015; Meyer et al., 2017). What we require today are not so much (or not only) new solutions, but new ways for arriving at solutions (Susha et al., 2017). We need a twenty-first century paradigm of governance and policy making. Data, it is increasingly clear, will be central to this paradigm (Pentland, 2013; Kirkpatrick, 2012). Along with ever increasing computer storage and analytics capabilities, massive amounts of data generated from citizens, devices, and sensors provide decision makers the opportunity to monitor and manage public infrastructure in real time and predict future patterns when used responsibly (Engin and Treleaven, 2019; Janssen and Helbig, 2018). Data have the potential to transform every part of the policy-making life cycle—agenda setting and needs identification; the search for solutions; prototyping and implementation of solutions; enforcement; and evaluation (Janssen and Helbig, 2018). These are all critical, interlinked steps in addressing our societal challenges, and each of these needs a radical rethink. The idea that data could be a key differentiator is, of course, not a new one. Its potential has been evident for some time now (Wang et al., 2018), especially in the business world (Henke et al., 2016), but also in the policy community, where efforts to harness the power of information have yielded positive results in areas as disparate as gender equality (Fatehkia et al., 2018), improving urban traffic flows (Zhao et al., 2018), and enhancing regulatory compliance (Heat Seek, n.d.; Credit Suisse, n.d.). Successful data initiatives have been deployed by governments around the world in both developing and developed countries (Verhulst and Young, 2017a). Such initiatives have led to a growing recognition that data are and should increasingly be part of any effective governance toolkit. Despite such encouraging results, it is true that the policy world has generally lagged behind business in its use of data and data methods (Hou et al., 2011). Policy–data interactions or governance initiatives that use data have been the exception rather than the norm, isolated prototypes and trials rather than an indication of real, systemic change. There are various reasons for the generally slow uptake of data in policymaking, and several factors will have to change if the situation is to improve. In particular, advocates of more data (and we include ourselves among this number) will need to overcome the following obstacles and limitations:
11 citations
•
18 Jul 2019
TL;DR: A novel federated algorithm for PCA that is able to adaptively estimate the rank r of the dataset and compute its r leading principal components when only O(dr) memory is available, and exhibits attractive horizontal scalability.
Abstract: In many online machine learning and data science tasks such as data summarisation and feature compression, d-dimensional vectors are usually distributed across a large number of clients in a decentralised network and collected in a streaming fashion. This is increasingly common in modern applications due to the sheer volume of data generated and the clients’ constrained resources. In this setting, some clients are required to compute an update to a centralised target model independently using local data while other clients aggregate these updates with a low-complexity merging algorithm. However, some clients with limited storage might not be able to store all of the data samples if d is large, nor compute procedures requiring at least Ω(d) storage-complexity such as Principal Component Analysis, Subspace Tracking, or general Feature Correlation. In this work, we present a novel federated algorithm for PCA that is able to adaptively estimate the rank r of the dataset and compute its r leading principal components when only O(dr) memory is available. This inherent adaptability implies that r does not have to be supplied as a fixed hyper-parameter which is beneficial when the underlying data distribution is not known in advance, such as in a streaming setting. Numerical simulations show that, while using limited-memory, our algorithm exhibits state-of-the-art performance that closely matches or outperforms traditional non-federated algorithms, and in the absence of communication latency, it exhibits attractive horizontal scalability.
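A minimal NumPy sketch of the flavour of federated, memory-limited PCA: each client reduces its local stream to a rank-r sketch, and an aggregator merges sketches by SVD, keeping only O(dr) state. The fixed rank r and the simple merge rule are simplifying assumptions; the paper's algorithm adapts r and differs in detail.

```python
import numpy as np

def local_sketch(X_local: np.ndarray, r: int):
    """Client-side: reduce local samples (d x n_local) to a rank-r factor U * diag(s)."""
    U, s, _ = np.linalg.svd(X_local, full_matrices=False)
    return U[:, :r] * s[:r]          # d x r, the only state the client keeps

def merge_sketches(sketch_a: np.ndarray, sketch_b: np.ndarray, r: int):
    """Aggregator-side: merge two rank-r sketches into one, using O(d*r) memory."""
    stacked = np.hstack([sketch_a, sketch_b])     # d x 2r
    U, s, _ = np.linalg.svd(stacked, full_matrices=False)
    return U[:, :r] * s[:r]

rng = np.random.default_rng(0)
d, r = 50, 3
clients = [rng.standard_normal((d, 200)) for _ in range(4)]

merged = local_sketch(clients[0], r)
for X in clients[1:]:
    merged = merge_sketches(merged, local_sketch(X, r), r)

principal_directions, _, _ = np.linalg.svd(merged, full_matrices=False)
print(principal_directions.shape)  # (50, 3): estimated r leading principal components
```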
•
TL;DR: In this paper, a federated, asynchronous, and differentially private algorithm for PCA in the memory-limited setting is presented, which incrementally computes local model updates using a streaming procedure and adaptively estimates its $r$ leading principal components when only O(dr) memory is available.
Abstract: We present a federated, asynchronous, and $(\varepsilon, \delta)$-differentially private algorithm for PCA in the memory-limited setting. Our algorithm incrementally computes local model updates using a streaming procedure and adaptively estimates its $r$ leading principal components when only $\mathcal{O}(dr)$ memory is available with $d$ being the dimensionality of the data. We guarantee differential privacy via an input-perturbation scheme in which the covariance matrix of a dataset $\mathbf{X} \in \mathbb{R}^{d \times n}$ is perturbed with a non-symmetric random Gaussian matrix with variance in $\mathcal{O}\left(\left(\frac{d}{n}\right)^2 \log d \right)$, thus improving upon the state-of-the-art. Furthermore, contrary to previous federated or distributed algorithms for PCA, our algorithm is also invariant to permutations in the incoming data, which provides robustness against straggler or failed nodes. Numerical simulations show that, while using limited-memory, our algorithm exhibits performance that closely matches or outperforms traditional non-federated algorithms, and in the absence of communication latency, it exhibits attractive horizontal scalability.
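The input-perturbation idea can be sketched as follows: add Gaussian noise to the empirical covariance before any eigendecomposition. The noise scale below only follows the O((d/n)^2 log d) order quoted above with an arbitrary constant, so it is illustrative rather than a calibrated (ε, δ) guarantee.

```python
import numpy as np

def perturbed_covariance(X: np.ndarray, noise_const: float = 1.0) -> np.ndarray:
    """Input perturbation for PCA: add non-symmetric Gaussian noise to the empirical
    covariance. The variance follows the O((d/n)^2 * log d) order from the abstract;
    the constant is an arbitrary placeholder, not a privacy-calibrated value."""
    d, n = X.shape
    cov = (X @ X.T) / n
    sigma2 = noise_const * (d / n) ** 2 * np.log(d)
    noise = np.random.normal(scale=np.sqrt(sigma2), size=(d, d))  # not symmetrized
    return cov + noise

X = np.random.standard_normal((20, 5000))        # d=20 features, n=5000 samples
eigvals = np.linalg.eigvals(perturbed_covariance(X))
print(np.sort(eigvals.real)[::-1][:3])           # top eigenvalues of the noisy covariance
```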
••
TL;DR: The idea of a regulated and radical privacy trading mechanism that preserves heterogeneous privacy preservation constraints up to certain compromise levels while satisfying the commercial requirements of agencies that collect and trade client data for the purpose of behavioral advertising.
Abstract: In the modern era of mobile apps (part of the era of surveillance capitalism, a term famously coined by Shoshana Zuboff), huge quantities of data about individuals and their activities offer a wave of opportunities for economic and societal value creation. However, the current personal data ecosystem is mostly de-regulated, fragmented, and inefficient. On one hand, end-users are often not able to control access (either technologically, by policy, or psychologically) to their personal data, which results in issues related to privacy, personal data ownership, transparency, and value distribution. On the other hand, this puts the burden of managing and protecting user data on profit-driven apps and ad-driven entities (e.g., an ad-network) at a cost of trust and regulatory accountability. Data holders (e.g., apps) may hence take commercial advantage of the individuals' inability to fully anticipate the potential uses of their private information, with detrimental effects for social welfare. As steps to improve social welfare, we comment on the existence and design of efficient consumer-data releasing ecosystems aimed at achieving a maximum social welfare state amongst competing data holders. In view of (a) the behavioral assumption that humans are 'compromising' beings, (b) privacy not being a well-boundaried good, and (c) the practical inevitability of inappropriate data leakage by data holders upstream in the supply-chain, we showcase the idea of a regulated and radical privacy trading mechanism that preserves the heterogeneous privacy preservation constraints (at an aggregate consumer, i.e., app, level) up to certain compromise levels, while at the same time satisfying the commercial requirements of agencies (e.g., advertising organizations) that collect and trade client data for the purpose of behavioral advertising. More specifically, our idea merges supply function economics, introduced by Klemperer and Meyer, with differential privacy, which together with their powerful theoretical properties leads to a stable and efficient, i.e., maximum social welfare, state, and does so in an algorithmically scalable manner. As part of future research, we also discuss interesting additional techno-economic challenges related to realizing effective privacy trading ecosystems.
•
TL;DR: SCDP is a novel, general-purpose data transport protocol for data centres that natively supports one-to-many and many-to-one data communication, which is extremely common in modern data centres, and does so without compromising on efficiency for short and long unicast flows.
Abstract: In this paper we propose SCDP, a novel, general-purpose data transport protocol for data centres that, in contrast to all other protocols proposed to date, natively supports one-to-many and many-to-one data communication, which is extremely common in modern data centres. SCDP does so without compromising on efficiency for short and long unicast flows. SCDP achieves this by integrating RaptorQ codes with receiver-driven data transport, in-network packet trimming and Multi-Level Feedback Queuing (MLFQ); (1) RaptorQ codes enable efficient one-to-many and many-to-one data transport; (2) on top of RaptorQ codes, receiver-driven flow control, in combination with in-network packet trimming, enable efficient usage of network resources as well as multi-path transport and packet spraying for all transport modes. Incast and Outcast are eliminated; (3) the systematic nature of RaptorQ codes, in combination with MLFQ, enable fast, decoding-free completion of short flows. We extensively evaluate SCDP in a wide range of simulated scenarios with realistic data centre workloads. For one-to-many and many-to-one transport sessions, SCDP performs significantly better compared to NDP. For short and long unicast flows, SCDP performs equally well or better compared to NDP.
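As a rough intuition for the rateless-coding aspect: with a fountain code such as RaptorQ, a receiver can pull encoded symbols from any sender and complete the transfer once roughly k useful symbols have arrived, without tracking which specific packets were lost. The toy below abstracts RaptorQ away entirely (it just counts received symbols), so it is an idealisation of the completion logic, not the SCDP protocol.

```python
import random

def receiver_driven_pull(k: int, senders, loss_rate: float = 0.1, seed: int = 42) -> int:
    """Idealised fountain-code receiver: keep pulling one encoded symbol per credit from
    round-robin senders until k symbols have arrived. Returns the number of pulls issued."""
    rng = random.Random(seed)
    received, pulls, next_symbol_id = set(), 0, 0
    while len(received) < k:
        sender = senders[pulls % len(senders)]   # grant the next pull credit to this sender
        pulls += 1
        symbol_id = next_symbol_id
        next_symbol_id += 1
        if rng.random() >= loss_rate:            # symbol survives (no trimming/loss)
            received.add((sender, symbol_id))    # any k symbols suffice for decoding
    return pulls

print(receiver_driven_pull(k=100, senders=["A", "B", "C"]))  # ~100/(1-0.1) pulls expected
```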
••
08 Nov 2019
TL;DR: Trends in co-authorship, country-based productivity, and knowledge flow to and from SIGCOMM venues using bibliometric techniques are explored.
Abstract: The ACM Special Interest Group on Data Communications (SIGCOMM) has been a major research forum for fifty years. This community has had a major impact on the history of the Internet, and therefore we argue its exploration may reveal fundamental insights into the evolution of networking technologies around the globe. Hence, on the 50th anniversary of SIGCOMM, we take this opportunity to reflect upon its progress and achievements, through the lens of its various publication outlets, e.g., the SIGCOMM conference, IMC, CoNEXT, HotNets. Our analysis takes several perspectives, looking at authors, countries, institutes and papers. We explore trends in co-authorship, country-based productivity, and knowledge flow to and from SIGCOMM venues using bibliometric techniques. We hope this study will serve as a valuable resource for the computer networking community.
••
TL;DR: "XORs in the Air" is placed in the context of the theoretical and practical understanding of network coding, and a view of the progress of the field of network coding is presented.
Abstract: While placing the paper "XORs in the Air" in the context of the theoretical and practical understanding of network coding, we present a view of the progress of the field of network coding. In particular, we examine the interplay of theory and practice in the field.
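The canonical example behind "XORs in the Air" (COPE-style opportunistic coding): a relay XORs two packets flowing in opposite directions and broadcasts a single coded packet, and each endpoint recovers the packet it is missing by XORing with the packet it already holds. A minimal sketch:

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    """Bitwise XOR of two equal-length packets."""
    return bytes(x ^ y for x, y in zip(a, b))

# Alice -> Bob sends p_a via the relay; Bob -> Alice sends p_b via the relay.
p_a = b"hello-from-alice"
p_b = b"hello-from-bob!!"          # padded to the same length for this toy example

coded = xor_bytes(p_a, p_b)        # relay broadcasts one coded packet instead of two

# Each endpoint XORs the broadcast with the packet it already knows.
assert xor_bytes(coded, p_a) == p_b   # Alice recovers Bob's packet
assert xor_bytes(coded, p_b) == p_a   # Bob recovers Alice's packet
print("one broadcast delivered two packets")
```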
••
26 Aug 2019
TL;DR: This paper shows how accessing host physical memory is achieved and discusses why this is not a vulnerability in some platforms, but rather a powerful tool for securing data acquisition when the host is not trusted to perform the acquisition.
Abstract: Modern malware is complex, stealthy, and employs anti-forensics techniques to evade detection. In order to detect malware, data must be collected that allows further analysis of the malware's behaviour. However, when both the malware and the detecting system run on the same domain (the CPU), it is questionable whether the data acquired by the acquisition method has not been tampered with. Hardware-based techniques, such as acquiring data out-of-band using a PCIe device, allow for data acquisition that is deemed trusted when the acquisition method does not rely on any data present in the host memory. Unfortunately, in Input-Output Memory Management Unit (IOMMU) based systems, peripheral devices' accesses to host memory go through a stage of translation by the IOMMU. The translation tables, which reside in the host's memory, are subject to malware control and hence are not trustworthy. In this paper we present a method that allows acquiring the data reliably without depending on data residing in host memory, even when an IOMMU is being used to restrict devices. We show how access to host physical memory is achieved and discuss why this is not a vulnerability on some platforms, but rather a powerful tool for securing data acquisition when the host is not trusted to perform the acquisition.
••
TL;DR: SCDP as discussed by the authors integrates RaptorQ codes with receiver-driven data transport, packet trimming and Multi-Level Feedback Queuing (MLFQ) to enable efficient one-to-many and many-to-one data transport.
Abstract: In this paper we propose SCDP, a general-purpose data transport protocol for data centres that, in contrast to all other protocols proposed to date, supports efficient one-to-many and many-to-one communication, which is extremely common in modern data centres. SCDP does so without compromising on efficiency for short and long unicast flows. SCDP achieves this by integrating RaptorQ codes with receiver-driven data transport, packet trimming and Multi-Level Feedback Queuing (MLFQ); (1) RaptorQ codes enable efficient one-to-many and many-to-one data transport; (2) on top of RaptorQ codes, receiver-driven flow control, in combination with in-network packet trimming, enable efficient usage of network resources as well as multi-path transport and packet spraying for all transport modes. Incast and Outcast are eliminated; (3) the systematic nature of RaptorQ codes, in combination with MLFQ, enable fast, decoding-free completion of short flows. We extensively evaluate SCDP in a wide range of simulated scenarios with realistic data centre workloads. For one-to-many and many-to-one transport sessions, SCDP performs significantly better compared to NDP and PIAS. For short and long unicast flows, SCDP performs equally well or better compared to NDP and PIAS.
10 Mar 2019
TL;DR: This document provides guidance for RTO settings in LPWAN, and describes an experimental dual RTO algorithm for LPWAN that addresses the challenge of buffering at network elements such as radio gateways.
Abstract: Low-Power Wide Area Network (LPWAN) technologies are characterized by very low physical layer bit and message transmission rates. Moreover, a response to a message sent by an LPWAN device may often only be received after a significant delay. As a result, Round-Trip Time (RTT) values in LPWAN are often (sometimes, significantly) greater than typical default values of Retransmission TimeOut (RTO) algorithms. Furthermore, buffering at network elements such as radio gateways may interact negatively with LPWAN technology transmission mechanisms, potentially exacerbating RTTs by up to several orders of magnitude. This document provides guidance for RTO settings in LPWAN, and describes an experimental dual RTO algorithm for LPWAN.
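For reference, below is the classic RTO estimator that such guidance adapts (RFC 6298-style smoothing), shown with an LPWAN-sized initial RTO as a hypothetical parameter; the document's dual-RTO algorithm itself is not reproduced here.

```python
class RtoEstimator:
    """RFC 6298-style RTO computation; the large initial RTO is an LPWAN-motivated
    placeholder, not a value mandated by the draft."""
    ALPHA, BETA, K = 1 / 8, 1 / 4, 4

    def __init__(self, initial_rto_s: float = 60.0, min_rto_s: float = 1.0):
        self.srtt = None
        self.rttvar = None
        self.rto = initial_rto_s
        self.min_rto = min_rto_s

    def on_rtt_sample(self, rtt_s: float) -> float:
        """Update SRTT/RTTVAR with a new RTT measurement and return the new RTO."""
        if self.srtt is None:                       # first measurement (RFC 6298, rule 2.2)
            self.srtt = rtt_s
            self.rttvar = rtt_s / 2
        else:                                       # subsequent measurements (rule 2.3)
            self.rttvar = (1 - self.BETA) * self.rttvar + self.BETA * abs(self.srtt - rtt_s)
            self.srtt = (1 - self.ALPHA) * self.srtt + self.ALPHA * rtt_s
        self.rto = max(self.min_rto, self.srtt + self.K * self.rttvar)
        return self.rto

est = RtoEstimator()
for sample in [20.0, 25.0, 90.0]:                  # RTTs in seconds, plausible for LPWAN
    print(round(est.on_rtt_sample(sample), 1))
```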
[...]
04 Nov 2019
TL;DR: This document defines the TCP ACK Pull (AKP) mechanism, which allows a sender to request that the ACK for a data segment be sent without additional delay by the receiver; the mechanism uses one of the reserved bits in the TCP header, defined in this specification as the AKP flag.
Abstract: Delayed Acknowledgments (ACKs) allow reducing protocol overhead in many scenarios. However, in some cases, Delayed ACKs may significantly degrade network and device performance in terms of link utilization, latency, memory usage and/or energy consumption. This document defines the TCP ACK Pull (AKP) mechanism, which allows a sender to request the ACK for a data segment to be sent without additional delay by the receiver. AKP makes use of one of the reserved bits in the TCP header, which is defined in this specification as the AKP flag.
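To illustrate where such a flag would live, here is a small sketch that builds a raw 20-byte TCP header and sets one of the reserved bits in byte 12 (between the data offset and the standard flags). Which reserved bit AKP would actually use is not stated here, so the bit position below is purely a hypothetical placeholder.

```python
import struct

def build_tcp_header(src_port: int, dst_port: int, seq: int, ack: int,
                     flags: int, window: int, akp_bit: int = 0) -> bytes:
    """Build a 20-byte TCP header (no options). akp_bit (0-2) picks one of the three
    reserved bits in byte 12; the actual AKP bit position is a hypothetical placeholder."""
    data_offset = 5                                   # 5 x 32-bit words = 20 bytes
    reserved = 1 << akp_bit                           # set one reserved bit (illustrative)
    offset_reserved_ns = (data_offset << 4) | (reserved << 1)  # byte 12: offset | rsvd | NS
    return struct.pack("!HHIIBBHHH",
                       src_port, dst_port, seq, ack,
                       offset_reserved_ns, flags, window,
                       0,                              # checksum (left zero in this sketch)
                       0)                              # urgent pointer

header = build_tcp_header(12345, 80, seq=0, ack=0, flags=0x18, window=65535, akp_bit=2)
print(header.hex())
```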
27 Aug 2019
TL;DR: When CoAP is used at the application layer, larger payloads can be transferred either via SCHC fragmentation or via the CoAP Block option; this document aims at illustrating the advantages and limitations of each approach.
Abstract: The SCHC adaptation layer provides header compression and fragmentation functionality between IPv6 and an underlying LPWAN technology. SCHC fragmentation has been specifically designed for the characteristics of LPWANs. However, when CoAP is used at the application layer, there exists an alternative approach for fragmentation, which is using the CoAP Block option. This document aims at illustrating the advantages and limitations of each approach for transferring larger payloads.
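A back-of-the-envelope comparison of the two approaches, where every size (L2 payload, SCHC fragment header, CoAP Block overhead) is an assumed placeholder rather than a figure from the document; real overheads depend on the LPWAN technology, the SCHC rule set, and the CoAP options in use.

```python
import math

# All sizes in bytes and purely illustrative. Note that real CoAP Block sizes must be
# powers of two (16..1024); this toy calculation ignores that constraint.
payload = 1024          # application payload to transfer
l2_payload = 51         # usable link-layer payload per frame (placeholder)
schc_frag_header = 2    # per-fragment SCHC header (placeholder)
coap_block_overhead = 8 # compressed CoAP header + Block option per block (placeholder)

schc_fragments = math.ceil(payload / (l2_payload - schc_frag_header))
coap_blocks = math.ceil(payload / (l2_payload - coap_block_overhead))

print(f"SCHC fragmentation: {schc_fragments} frames")
print(f"CoAP Block-wise:    {coap_blocks} frames")
```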
•
TL;DR: The overall design, state-of-the-art technologies adopted, and various engineering details in the INFv system are presented, which enable a scalable and incremental deployment of computation offloading framework in practical ISPs' networks.
Abstract: Motivated by the huge disparity between the limited battery capacity of user devices and the ever-growing energy demands of modern mobile apps, we propose INFv. It is the first offloading system able to cache, migrate and dynamically execute on demand functionality from mobile devices in ISP networks. It aims to bridge this gap by extending the promising NFV paradigm to mobile applications in order to exploit in-network resources. In this paper, we present the overall design, state-of-the-art technologies adopted, and various engineering details in the INFv system. We also carefully study the deployment configurations by investigating over 20K Google Play apps, as well as thorough evaluations with realistic settings. In addition to a significant improvement in battery life (up to 6.9x energy reduction) and execution time (up to 4x faster), INFv has two distinct advantages over previous systems: 1) a non-intrusive offloading mechanism transparent to existing apps; 2) an inherent framework support to effectively balance computation load and exploit the proximity of in-network resources. Both advantages together enable a scalable and incremental deployment of computation offloading framework in practical ISPs' networks.
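A classic computation-offloading decision of the kind such a system must make, comparing the energy of local execution against transmitting the input and waiting for an in-network node to compute; the formulas and all numbers are textbook-style assumptions, not INFv's actual policy.

```python
def should_offload(cycles: float, input_bytes: float,
                   local_cps: float = 1e9, local_power_w: float = 2.0,
                   edge_cps: float = 8e9, uplink_bps: float = 20e6,
                   radio_power_w: float = 1.2) -> bool:
    """Offload if the energy the device spends transmitting the input (plus idling while
    the in-network node computes) is lower than the energy of executing locally.
    Textbook model; all parameter values are illustrative placeholders."""
    t_local = cycles / local_cps
    e_local = t_local * local_power_w

    t_tx = (input_bytes * 8) / uplink_bps
    t_edge = cycles / edge_cps
    e_offload = t_tx * radio_power_w + t_edge * 0.1   # ~idle power while waiting

    return e_offload < e_local

# Heavy computation with a small input: offloading pays off in this toy setting.
print(should_offload(cycles=5e9, input_bytes=200_000))   # True
```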