
Showing papers by "Jon Crowcroft published in 2021"


Journal ArticleDOI
TL;DR: This paper proposes “e-Divert”, a distributed control framework for energy-efficient and DIstributed VEhicle navigation with chaRging sTations. It is a distributed multi-agent deep reinforcement learning (DRL) solution that uses a convolutional neural network to extract useful spatial features, which feed an actor-critic network that produces real-time actions.
Abstract: Mobile crowdsensing (MCS) represents a new sensing paradigm that utilizes smart mobile devices to collect and share data. Traditional MCS systems mainly leverage people-carried smartphones and other wearable devices, which are constrained by limited sensing capability and battery power. With the popularity of unmanned vehicles like unmanned aerial vehicles (UAVs) and driverless cars, much more reliable, accurate, and cost-efficient sensing services can be provided thanks to their more powerful onboard sensors. In this paper, we propose a distributed control framework for energy-efficient and DIstributed VEhicle navigation with chaRging sTations, called “e-Divert”. It is a distributed multi-agent deep reinforcement learning (DRL) solution, which uses a convolutional neural network (CNN) to extract useful spatial features as the input to the actor-critic network to produce a real-time action. Also, e-Divert incorporates a distributed prioritized experience replay for better exploration and exploitation, and a long short-term memory (LSTM) enabled N-step temporal sequence modeling module. The solution fully explores the spatiotemporal nature of the considered scenario for better cooperation and competition among vehicles and charging stations, so as to simultaneously maximize energy efficiency, data collection ratio, and geographic fairness while minimizing energy consumption. Through extensive simulations, we find an appropriate set of hyperparameters that achieves the best performance, i.e., 5 actors in the Ape-X architecture, priority exponent 0.5, and LSTM sequence length 3. Finally, we compare with four baselines including one state-of-the-art approach, MADDPG. Results show that our proposed e-Divert significantly improves the energy efficiency, as compared to MADDPG, by 3.62 and 2.36 times on average when varying the numbers of vehicles and charging stations, respectively.
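The architecture described above combines a CNN spatial feature extractor, an LSTM over an N-step observation sequence, and an actor-critic head. Below is a minimal PyTorch sketch of that kind of network; the observation shape, layer sizes, and action count are illustrative assumptions, not the paper's actual hyperparameters.

```python
# Minimal sketch of a CNN + LSTM actor-critic network of the kind described
# above. Observation shape, layer sizes and action dimension are illustrative
# assumptions, not the paper's actual hyperparameters.
import torch
import torch.nn as nn


class ConvLSTMActorCritic(nn.Module):
    def __init__(self, in_channels=3, grid=16, n_actions=5, hidden=128):
        super().__init__()
        # CNN extracts spatial features from a grid-shaped observation
        # (e.g., sensing targets, vehicle and charging-station positions).
        self.cnn = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Flatten(),
        )
        feat_dim = 32 * grid * grid
        # LSTM models an N-step temporal sequence of spatial features.
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        # Actor outputs an action distribution; critic outputs a state value.
        self.actor = nn.Linear(hidden, n_actions)
        self.critic = nn.Linear(hidden, 1)

    def forward(self, obs_seq):
        # obs_seq: (batch, seq_len, channels, grid, grid)
        b, t = obs_seq.shape[:2]
        feats = self.cnn(obs_seq.flatten(0, 1)).view(b, t, -1)
        out, _ = self.lstm(feats)
        last = out[:, -1]                     # use the last step of the sequence
        return torch.softmax(self.actor(last), dim=-1), self.critic(last)


if __name__ == "__main__":
    net = ConvLSTMActorCritic()
    obs = torch.randn(4, 3, 3, 16, 16)        # batch of 4, sequence length 3
    policy, value = net(obs)
    print(policy.shape, value.shape)          # torch.Size([4, 5]) torch.Size([4, 1])
```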

70 citations


DOI
02 Nov 2021
TL;DR: A comprehensive survey of the literature surrounding edge intelligence can be found in this article, where four fundamental components of edge intelligence are identified: edge caching, edge training, edge inference, and edge offloading.
Abstract: Edge intelligence refers to a set of connected systems and devices for data collection, caching, processing, and analysis in proximity to where data are captured, based on artificial intelligence. Edge intelligence aims at enhancing data processing while protecting the privacy and security of the data and users. Although it emerged only recently, spanning the period from 2011 to now, this field of research has shown explosive growth over the past five years. In this article, we present a thorough and comprehensive survey of the literature surrounding edge intelligence. We first identify four fundamental components of edge intelligence, i.e., edge caching, edge training, edge inference, and edge offloading, based on theoretical and practical results pertaining to proposed and deployed systems. We then aim for a systematic classification of the state of the solutions by examining research results and observations for each of the four components and present a taxonomy that includes practical problems, adopted techniques, and application goals. For each category, we elaborate, compare, and analyze the literature from the perspectives of adopted techniques, objectives, performance, advantages and drawbacks, and so on. This article provides a comprehensive survey of edge intelligence and its application areas. In addition, we summarize the development of the emerging research fields and the current state of the art and discuss the important open issues and possible theoretical and technical directions.

34 citations


Journal ArticleDOI
01 Jun 2021
TL;DR: In this paper, the authors summarize a wide survey of the state of the art in network science and epidemiology, present a series of perspectives on the subject, and identify where they believe fruitful areas for future research are to be found.
Abstract: On May 28th and 29th, a two-day workshop was held virtually, facilitated by the Beyond Center at ASU and Moogsoft Inc. The aim was to bring together leading scientists with an interest in network science and epidemiology to attempt to inform public policy in response to the COVID-19 pandemic. Epidemics are at their core a process that progresses dynamically upon a network, and they are a key area of study in network science. In the course of the workshop a wide survey of the state of the subject was conducted. We summarize in this paper a series of perspectives on the subject, and where the authors believe fruitful areas for future research are to be found.

19 citations


ReportDOI
01 Mar 2021
TL;DR: A number of known and deployed techniques to simplify a TCP stack as well as corresponding tradeoffs are explained to help embedded developers with decisions on which TCP features to use.
Abstract: This document provides guidance on how to implement and use the Transmission Control Protocol (TCP) in Constrained-Node Networks (CNNs), which are a characteristic of the Internet of Things (IoT). Such environments require a lightweight TCP implementation and may not make use of optional functionality. This document explains a number of known and deployed techniques to simplify a TCP stack as well as the corresponding tradeoffs. The objective is to help embedded developers with decisions on which TCP features to use.
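Several of the simplifications the document discusses (small windows, a reduced MSS, avoiding optional features) are visible even through a standard sockets API. The following sketch is only an illustration of such a trimmed-down configuration on Linux, not code from the document; option availability and defaults vary by platform and TCP stack.

```python
# Illustrative sketch (not from the document): the kind of trimmed-down TCP
# configuration a constrained node might use. Socket options are Linux-specific
# and their availability varies by platform.
import socket

MSS = 536          # a reduced maximum segment size suited to small link MTUs
RCV_BUF = 1 * MSS  # effectively a single-MSS receive window

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Keep the receive buffer tiny so the advertised window stays around one MSS.
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, RCV_BUF)
# Cap the MSS so segments fit constrained link MTUs without IP fragmentation.
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_MAXSEG, MSS)
# Disable Nagle: a sensor typically sends one small, self-contained message
# at a time and should not wait to coalesce data.
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

sock.connect(("example.com", 80))
sock.sendall(b"GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")
print(sock.recv(MSS)[:80])
sock.close()
```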

14 citations


Journal ArticleDOI
25 Feb 2021
TL;DR: In this paper, the authors provide a comprehensive state-of-the-art survey on the energy efficiency of medium access control (MAC) protocols for cellular IoT, and provide insights and suggestions that can guide practitioners and researchers in designing EE MAC protocols that extend the battery life of IoT devices.
Abstract: In the modern world, the connectivity-as-we-go model is gaining popularity. The Internet-of-Things (IoT) envisions a future in which human beings communicate with each other and with devices that have identities and virtual personalities, as well as sensing, processing, and networking capabilities, which will allow the development of smart environments that operate with little or no human intervention. In such IoT environments, which will have battery-operated sensors and devices, energy efficiency becomes a fundamental concern. Thus, energy-efficient (EE) connectivity is gaining significant attention from the industrial and academic communities. This work aims to provide a comprehensive state-of-the-art survey on the energy efficiency of medium access control (MAC) protocols for cellular IoT. We provide a detailed discussion of the sources of energy dissipation at the MAC layer and then propose solutions. In addition to reviewing the proposed MAC designs, we also provide insights and suggestions that can guide practitioners and researchers in designing EE MAC protocols that extend the battery life of IoT devices. Finally, we identify a range of challenging open problems that should be solved to provide EE MAC services for IoT devices, along with corresponding opportunities and future research ideas to address these challenges.

11 citations


Journal ArticleDOI
TL;DR: In this article, the statistical quality-of-service (QoS) analysis of a block-fading device-to-device (D2D) link in a multi-tier cellular network is presented.
Abstract: This work carries out the statistical quality-of-service (QoS) analysis of a block-fading device-to-device (D2D) link in a multi-tier cellular network that consists of a macro-BS ($BS_{MC}$) and a micro-BS ($BS_{mC}$), which both operate in full-duplex (FD) mode. For the D2D link under consideration, we first formulate the mode selection problem (whereby the D2D pair could communicate directly, through the $BS_{mC}$, or through the $BS_{MC}$) as a ternary hypothesis testing problem. Next, to compute the effective capacity (EC) for the given D2D link, we assume that the channel state information (CSI) is not available at the transmit D2D node, and hence, it transmits at a fixed rate $r$ with a fixed power. This allows us to model the D2D link as a Markov system with six states. We consider both overlay and underlay modes for the D2D link. Moreover, to improve the throughput of the D2D link, we assume that the D2D pair utilizes two special automatic repeat request (ARQ) schemes, i.e., Hybrid-ARQ (HARQ) and truncated HARQ. Furthermore, we consider two distinct queue models at the transmit D2D node, based upon how it responds to decoding failure at the receive D2D node. Eventually, we provide closed-form expressions for the EC of both the HARQ-enabled D2D link and the truncated HARQ-enabled D2D link, under both queue models. Noting that the EC appears to be a quasi-concave function of $r$, we further maximize the EC by searching for an optimal rate via the gradient-descent method. Simulation results provide the following insights: (i) EC decreases with an increase in the QoS exponent, (ii) EC of the D2D link improves when HARQ is employed, and (iii) EC increases with an increase in the quality of the self-interference cancellation techniques used at $BS_{mC}$ and $BS_{MC}$ in FD mode.
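As a rough numerical illustration of the final rate-optimization step (not the paper's six-state Markov model), the sketch below maximizes the effective capacity of a simple fixed-rate Rayleigh block-fading link by gradient ascent with a finite-difference gradient; the channel model, SNR, and QoS exponent are assumptions made for the example.

```python
# Rough illustration (not the paper's six-state model): maximize the effective
# capacity EC(r) of a fixed-rate ON/OFF Rayleigh block-fading link over the
# transmission rate r by gradient ascent. Channel model and parameters are
# assumptions made for this sketch.
import math

SNR = 10.0      # average SNR (linear)
THETA = 0.1     # QoS exponent


def effective_capacity(r, snr=SNR, theta=THETA):
    # Outage: the channel cannot support rate r in this block (Rayleigh fading).
    p_out = 1.0 - math.exp(-(2.0 ** r - 1.0) / snr)
    # Two-state model: r bits/s/Hz delivered with prob. 1 - p_out, else nothing.
    return -(1.0 / theta) * math.log(p_out + (1.0 - p_out) * math.exp(-theta * r))


def maximize_ec(r=1.0, lr=0.5, eps=1e-4, iters=200):
    for _ in range(iters):
        grad = (effective_capacity(r + eps) - effective_capacity(r - eps)) / (2 * eps)
        r = max(r + lr * grad, 1e-6)   # gradient ascent on a quasi-concave EC(r)
    return r


if __name__ == "__main__":
    r_star = maximize_ec()
    print(f"optimal rate ~ {r_star:.3f} bits/s/Hz, EC ~ {effective_capacity(r_star):.3f}")
```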

10 citations


Journal ArticleDOI
TL;DR: In this article, the authors provide a comprehensive state-of-the-art survey on the energy efficiency of medium access control (MAC) protocols for cellular IoT, and provide insights and suggestions that can guide practitioners and researchers in designing EE MAC protocols that extend the battery life of IoT devices.
Abstract: In the modern world, the connectivity-as-we-go model is gaining popularity. The Internet-of-Things (IoT) envisions a future in which human beings communicate with each other and with devices that have identities and virtual personalities, as well as sensing, processing, and networking capabilities, which will allow the development of smart environments that operate with little or no human intervention. In such IoT environments, which will have battery-operated sensors and devices, energy efficiency becomes a fundamental concern. Thus, energy-efficient (EE) connectivity is gaining significant attention from the industrial and academic communities. This work aims to provide a comprehensive state-of-the-art survey on the energy efficiency of medium access control (MAC) protocols for cellular IoT. We provide a detailed discussion of the sources of energy dissipation at the MAC layer and then propose solutions. In addition to reviewing the proposed MAC designs, we also provide insights and suggestions that can guide practitioners and researchers in designing EE MAC protocols that extend the battery life of IoT devices. Finally, we identify a range of challenging open problems that should be solved to provide EE MAC services for IoT devices, along with corresponding opportunities and future research ideas to address these challenges.

7 citations


Journal ArticleDOI
TL;DR: An analysis to detect the influence of a set of topological properties of the Bitcoin Users Graph on Bitcoin's exchange rate shows that some of the considered features significantly influence the exchange rate up to several days, and that such relationships are likely not to be spurious.
Abstract: Cryptocurrencies are notorious for the high volatility of their exchange rates, and are often tools of wild speculation rather than decentralised value exchange. This is especially true for Bitcoin, which is still, nowadays, the most popular cryptocurrency. This paper presents an analysis to detect the influence of a set of topological properties of the Bitcoin Users Graph on Bitcoin's exchange rate (in the rest of this paper we use the terms “Bitcoin price” and “Bitcoin's exchange rate” interchangeably to represent the amount of fiat currency (USD) needed to buy one Bitcoin at a given time). We consider, besides classical properties, a novel notion of Trustful Transaction Graph, introduced to describe partial Users Graphs derived from chains of 0-confirmation transactions. We present a temporal analysis of the evolution of a set of features with a single-day granularity. Afterwards, we applied autoregressive distributed-lag linear regression to assess whether, and with which strength and duration, a change in the considered features is likely to influence the exchange rate up to a prespecified number of days (fifteen) in the future. The results show that some of the considered features significantly influence the exchange rate up to several days ahead, and that such relationships are likely not to be spurious, since we found that those features contribute significantly to decreasing the error in predicting the exchange rate.
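A minimal sketch of the kind of autoregressive distributed-lag regression described is shown below; the CSV file, column names, and feature set are hypothetical placeholders, and the paper's own features and lag structure would take their place.

```python
# Minimal sketch of an autoregressive distributed-lag style regression of the
# exchange rate on lagged graph features. The CSV file and column names are
# hypothetical; the paper's actual feature set and lag depth (up to 15 days)
# come from its own data.
import numpy as np
import pandas as pd

MAX_LAG = 15

df = pd.read_csv("btc_daily.csv", parse_dates=["date"]).set_index("date")
# df columns (hypothetical): price, n_nodes, clustering, ttg_size
features = ["n_nodes", "clustering", "ttg_size"]

# Build the design matrix: lagged price terms (autoregressive part) plus
# lagged graph features (distributed-lag part).
X = pd.DataFrame(index=df.index)
for lag in range(1, MAX_LAG + 1):
    X[f"price_lag{lag}"] = df["price"].shift(lag)
    for f in features:
        X[f"{f}_lag{lag}"] = df[f].shift(lag)

data = pd.concat([df["price"].rename("y"), X], axis=1).dropna()
y = data.pop("y").to_numpy()
A = np.column_stack([np.ones(len(data)), data.to_numpy()])

# Ordinary least squares fit; coefficients on the lagged features indicate
# how strongly (and over how many days) each feature relates to the price.
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print(dict(zip(["intercept"] + list(data.columns), np.round(coef, 4))))
```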

7 citations


Journal ArticleDOI
TL;DR: KylinX is a dynamic library operating system that provides a process-like VM (pVM) abstraction, treating the hypervisor as an OS and the Unikernel appliance as a process, thereby combining the strong isolation of Unikernel appliances with the flexibility and efficiency of processes.
Abstract: Unikernel specializes a minimalistic LibOS and a target application into a standalone single-purpose virtual machine (VM) running on a hypervisor, which is referred to as (virtual) appliance. Compared to traditional VMs, Unikernel appliances have smaller memory footprint and lower overhead while guaranteeing the same level of isolation. On the downside, Unikernel strips off the process abstraction from its monolithic appliance and thus sacrifices flexibility, efficiency, and applicability. In this article, we examine whether there is a balance embracing the best of both Unikernel appliances (strong isolation) and processes (high flexibility/efficiency). We present KylinX, a dynamic library operating system for simplified and efficient cloud virtualization by providing the pVM (process-like VM) abstraction. A pVM takes the hypervisor as an OS and the Unikernel appliance as a process allowing both page-level and library-level dynamic mapping. At the page level, KylinX supports pVM fork plus a set of API for inter-pVM communication (IpC, which is compatible with conventional UNIX IPC). At the library level, KylinX supports shared libraries to be linked to a Unikernel appliance at runtime. KylinX enforces mapping restrictions against potential threats. We implement a prototype of KylinX by modifying MiniOS and Xen tools. Extensive experimental results show that KylinX achieves similar performance both in micro benchmarks (fork, IpC, library update, etc.) and in applications (Redis, web server, and DNS server) compared to conventional processes, while retaining the strong isolation benefit of VMs/Unikernels.

5 citations


Journal ArticleDOI
TL;DR: Wang et al. investigated whether acute exposure to outdoor PM2.5 concentration, P, modifies the rate of change in the daily number of COVID-19 infections (R) across 18 high-infection provincial capitals in China, including Wuhan.
Abstract: This study thoroughly investigates whether acute exposure to outdoor PM2.5 concentration, P, modifies the rate of change in the daily number of COVID-19 infections (R) across 18 high-infection provincial capitals in China, including Wuhan. A best-fit multiple linear regression model was constructed to model the relationship between P and R, from 1 January to 20 March 2020, after accounting for meteorology, net move-in mobility (NM), time trend (T), co-morbidity (CM), and the time-lag effects. Regression analysis shows that P (β = 0.4309, p < 0.001) is the most significant determinant of R. In addition, T (β = -0.3870, p < 0.001), absolute humidity (AH) (β = 0.2476, p = 0.002), P × AH (β = -0.2237, p < 0.001), and NM (β = 0.1383, p = 0.003) are more significant determinants of R than GDP per capita (β = 0.1115, p = 0.015) and CM (asthma) (β = 0.1273, p = 0.005). A matching technique was adopted to demonstrate a possible causal relationship between P and R across the 18 provincial capital cities. A 10 µg/m3 increase in P gives a 1.5% increase in R (p < 0.001). Interaction analysis also reveals that P × AH and R are negatively correlated (β = -0.2237, p < 0.001). Given that P exacerbates R, we recommend the installation of air purifiers and improved air ventilation to reduce the effect of P on R. Given the increasing evidence that COVID-19 is airborne, measures that reduce P, plus mandatory masking that reduces the risks of COVID-19 associated with viral-particulate transmission, are strongly recommended. Our study is distinguished by its focus on the rate of change instead of individual COVID-19 case counts when modelling the statistical relationship between R and P in China; by causal instead of correlational analysis via the matching analysis, while taking into account the key confounders; and by considering the individual plus the interaction effects of P and AH on R.
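A simplified sketch of such a regression with an explicit P × AH interaction term follows; the data file and column names are hypothetical, and the paper's full model additionally includes meteorology, time trend, co-morbidity, and lag terms.

```python
# Simplified sketch of a multiple linear regression of R (rate of change in
# daily infections) on PM2.5 (P), absolute humidity (AH), their interaction,
# and net move-in mobility (NM). The data file and column names are
# hypothetical; the paper's full model has more covariates and lag terms.
import numpy as np
import pandas as pd

df = pd.read_csv("city_daily.csv")          # columns: R, P, AH, NM (assumed)

# Standardize predictors so coefficients are comparable, as when reporting betas.
for col in ["P", "AH", "NM"]:
    df[col] = (df[col] - df[col].mean()) / df[col].std()
df["P_x_AH"] = df["P"] * df["AH"]           # interaction term

X = np.column_stack([np.ones(len(df)), df[["P", "AH", "P_x_AH", "NM"]].to_numpy()])
y = df["R"].to_numpy()

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
for name, b in zip(["intercept", "P", "AH", "P_x_AH", "NM"], beta):
    print(f"{name:>9s}: {b:+.4f}")
```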

4 citations


Journal ArticleDOI
25 May 2021
TL;DR: In this paper, a game-theoretic analysis was conducted to investigate whether an ecosystem comprising a set of profit-minded cyber-insurance companies, each capable of providing reinsurance services for a service-networked IT environment, is economically feasible to cover aggregate cyber-losses arising due to a cyber-attack.
Abstract: Service liability interconnections among networked IT and IoT-driven service organizations create potential channels for cascading service disruptions due to modern cybercrimes such as DDoS, APT, and ransomware attacks. These attacks are known to inflict cascading catastrophic service disruptions worth billions of dollars across organizations and critical infrastructure around the globe. Cyber-insurance is a risk management mechanism that is gaining increasing industry popularity to cover client (organization) risks after a cyber-attack. However, there is a certain likelihood that the nature of a successful attack is of such magnitude that an organizational client’s insurance provider is not able to cover the multi-party aggregate losses incurred upon itself by its clients and their descendants in the supply chain, thereby needing to re-insure itself via other cyber-insurance firms. To this end, one question worth investigating in the first place is whether it is economically feasible for an ecosystem comprising a set of profit-minded cyber-insurance companies, each capable of providing re-insurance services for a service-networked IT environment, to cover the aggregate cyber-losses arising due to a cyber-attack. Our study focuses on an empirically interesting case of extreme heavy-tailed cyber-risk distributions that might be presenting themselves to cyber-insurance firms in the modern Internet age in the form of catastrophic service disruptions, and could be a possible standard risk distribution to deal with in the near IoT age. Surprisingly, as a negative result for society in the event of such catastrophes, we prove via a game-theoretic analysis that it may not be economically incentive compatible, even under i.i.d. statistical conditions on catastrophic cyber-risk distributions, for limited liability-taking risk-averse cyber-insurance companies to offer cyber re-insurance solutions despite the existence of large enough market capacity to achieve full cyber-risk sharing. However, our analysis theoretically endorses the popular opinion that spreading i.i.d. cyber-risks that are not catastrophic is an effective practice for aggregate cyber-risk managers, a result established theoretically and empirically in the past. A failure to achieve a working re-insurance market in critically demanding situations after catastrophic cyber-risk events strongly calls for centralized government regulatory action/intervention to promote risk sharing through re-insurance activities for the benefit of service-networked societies in the IoT age.
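The core intuition, that pooling catastrophic (very heavy-tailed) i.i.d. risks may not reduce the risk borne per participant while pooling lighter-tailed risks does, can be illustrated with a small Monte Carlo experiment. The sketch below uses Pareto losses with a tail index below one as a stand-in for catastrophic cyber-risk; it illustrates the general phenomenon, not the paper's model.

```python
# Illustration (not the paper's model): pooling n i.i.d. Pareto losses with
# tail index alpha < 1 (catastrophic, infinite-mean) does not reduce the
# per-risk tail risk, whereas it does for a lighter tail (alpha > 2).
import numpy as np

rng = np.random.default_rng(0)
N_SIM, POOL, Q = 200_000, 20, 0.99


def per_risk_var(alpha):
    # Pareto(alpha) losses with x_min = 1: X = U**(-1/alpha).
    losses = rng.uniform(size=(N_SIM, POOL)) ** (-1.0 / alpha)
    var_single = np.quantile(losses[:, 0], Q)               # VaR of one risk
    var_pooled = np.quantile(losses.sum(axis=1), Q) / POOL   # per-risk VaR in a pool
    return var_single, var_pooled


for alpha in (0.8, 2.5):
    single, pooled = per_risk_var(alpha)
    print(f"alpha={alpha}: VaR_99 single={single:8.2f}  per-risk in pool={pooled:8.2f}")
# Typically: for alpha=0.8 the per-risk VaR of the pool exceeds the standalone
# VaR (diversification hurts); for alpha=2.5 pooling reduces it.
```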

Journal ArticleDOI
TL;DR: This article provides a rigorous general theory to elicit conditions on (tail-dependent) heavy-tailed cyber-risk distributions under which a risk management firm might find it (non)sustainable to provide aggregate cyber-risk coverage services for smart societies, together with a real-data-driven numerical study to validate the claims made in theory, assuming boundedly rational cyber-risk managers.
Abstract: IoT-driven smart societies are modern service-networked ecosystems, whose proper functioning is hugely based on the success of supply chain relationships. Robust security is still a big challenge in such ecosystems, catalyzed primarily by naive cyber-security practices (e.g., setting default IoT device passwords) on behalf of the ecosystem managers, i.e., users and organizations. This has recently led to some catastrophic malware-driven DDoS and ransomware attacks (e.g., the Mirai and WannaCry attacks). Consequently, markets for commercial third-party cyber-risk management (CRM) services (e.g., cyber-insurance) are steadily but sluggishly gaining traction with the rapid increase of IoT deployment in society, and provide a channel for ecosystem managers to transfer residual cyber-risk post attack events. Current empirical studies have shown that such residual cyber-risks affecting smart societies are often heavy-tailed in nature and exhibit tail dependencies. This is both a major concern for a profit-minded CRM firm that might normally need to cover multiple such dependent cyber-risks from different sectors (e.g., manufacturing and energy) in a service-networked ecosystem, and a good intuition behind the sluggish market growth of CRM products. In this article, we provide: 1) a rigorous general theory to elicit conditions on (tail-dependent) heavy-tailed cyber-risk distributions under which a risk management firm might find it (non)sustainable to provide aggregate cyber-risk coverage services for smart societies and 2) a real-data-driven numerical study to validate claims made in theory assuming boundedly rational cyber-risk managers, alongside providing ideas to boost markets that aggregate dependent cyber-risks with heavy tails. To the best of our knowledge, this is the only complete general theory to date on the feasibility of aggregate CRM.


Journal ArticleDOI
TL;DR: It is shown that at market equilibrium IP trading markets exhibiting strategic substitutes between buying firms pose lesser risks for IP in society, primarily because the ‘substitutes’ setting, in contrast to the “complements” setting, economically incentivizes appropriate consumer data distortion by the seller in addition to restricting the proportion of buyers to which it sells.
Abstract: In-app advertising is a multi-billion dollar industry that is an essential part of the current digital ecosystem, and is prone to sensitive consumer information often being sold downstream without the knowledge of consumers, and in many cases to their annoyance. While this practice, in some cases, may result in long-term benefits for the consumers, it can result in serious information privacy (IP) breaches of very significant impact (e.g., breach of genetic data) in the short term. The question we raise through this article is: does the type of information being traded downstream play a role in the degree of IP risks generated? We investigate two general (one-many) information trading market structures between a single data-aggregating seller (e.g., enterprise app) and multiple competing buyers (e.g., ad networks, retailers), distinguished by mutually exclusive and privacy-sanitized aggregated consumer data (information) types: (i) data entailing strategically complementary actions among buyers and (ii) data entailing strategically substituting actions among buyers. Our primary question of interest here is: trading which type of data might pose less information privacy risk for society? To this end, we show that at market equilibrium, IP trading markets exhibiting strategic substitutes between buying firms pose lesser risks for IP in society, primarily because the ‘substitutes’ setting, in contrast to the ‘complements’ setting, economically incentivizes appropriate consumer data distortion by the seller in addition to restricting the proportion of buyers to which it sells. Moreover, we also show that irrespective of the data type traded by the seller, the likelihood of improved IP in society is higher if there is purposeful or free-riding-based transfer/leakage of data between buying firms. This is because the seller finds itself economically incentivized to restrict the release of sanitized consumer data with respect to the span of its buyer space, as well as to improve data quality.

Journal ArticleDOI
TL;DR: In this paper, the role of the IETF in ensuring continued innovation in Internet technologies by embracing the wider research community's work on limited domain technology is discussed, leading to the key insight that limited domains are not only considered useful but a must to sustain innovation.
Abstract: Limited domains were defined conceptually in RFC 8799 to cater to requirements and behaviours that extend the dominant view of IP packet delivery in the Internet. This paper argues not only that limited domains have been with us from the very beginning of the Internet but also that they have been shaping innovation of Internet technologies ever since, and will continue to do so. In order to build limited domains that successfully interoperate with the existing Internet, we propose an architectural framework as a blueprint. We discuss the role of the IETF in ensuring continued innovation in Internet technologies by embracing the wider research community's work on limited domain technology, leading to our key insight that Limited Domains are not only considered useful but a must to sustain innovation.

Posted Content
TL;DR: In this article, the authors propose an end-to-end system architecture design scope for 6G and discuss the necessity of incorporating an independent data plane and a novel intelligent plane, with particular emphasis on end-to-end AI workflow orchestration, management, and operation.
Abstract: The mobile communication system has transformed into the fundamental infrastructure supporting digital demands from all industry sectors, and 6G is envisioned to go far beyond the communication-only purpose. A consensus is emerging that 6G will treat Artificial Intelligence (AI) as the cornerstone and will have the potential capability to provide "intelligence inclusion", which implies enabling access to AI services at any time and anywhere by anyone. Apparently, the intelligence-inclusion vision has a far-reaching influence on the corresponding network architecture design in 6G and deserves a clean-slate rethink. In this article, we propose an end-to-end system architecture design scope for 6G, and discuss the necessity of incorporating an independent data plane and a novel intelligent plane with particular emphasis on end-to-end AI workflow orchestration, management, and operation. We also highlight the advantages of provisioning converged connectivity and computing services at the network function plane. Benefiting from these approaches, we believe that 6G will turn into an "everything as a service" (XaaS) platform with significantly enhanced business merits.

Journal ArticleDOI
TL;DR: SCDP is proposed, a general-purpose data transport protocol for data centres that, in contrast to all other protocols proposed to date, supports efficient one-to-many and many-to-one communication, which is extremely common in modern data centres.
Abstract: In this paper we propose SCDP, a general-purpose data transport protocol for data centres that, in contrast to all other protocols proposed to date, supports efficient one-to-many and many-to-one communication, which is extremely common in modern data centres. SCDP does so without compromising on efficiency for short and long unicast flows. SCDP achieves this by integrating RaptorQ codes with receiver-driven data transport, packet trimming and Multi-Level Feedback Queuing (MLFQ); (1) RaptorQ codes enable efficient one-to-many and many-to-one data transport; (2) on top of RaptorQ codes, receiver-driven flow control, in combination with in-network packet trimming, enables efficient usage of network resources as well as multi-path transport and packet spraying for all transport modes. Incast and Outcast are eliminated; (3) the systematic nature of RaptorQ codes, in combination with MLFQ, enables fast, decoding-free completion of short flows. We extensively evaluate SCDP in a wide range of simulated scenarios with realistic data centre workloads. For one-to-many and many-to-one transport sessions, SCDP performs significantly better than NDP and PIAS. For short and long unicast flows, SCDP performs equally well or better than NDP and PIAS.
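The decoding-free fast path for short flows follows from the systematic property of the code: if all K source symbols arrive, no decoding is needed, and otherwise any K distinct symbols suffice (up to a small overhead in practice). The toy simulation below sketches that receiver-driven behaviour with an idealized fountain code; it is not SCDP or a real RaptorQ implementation, and the parameters are arbitrary.

```python
# Toy model of receiver-driven transport over an idealized systematic fountain
# code (any K distinct symbols complete the block; real RaptorQ needs a small
# extra overhead). This is an illustration, not SCDP or RaptorQ itself.
import random

K = 16              # source symbols in the block
LOSS = 0.2          # per-packet loss probability (e.g., due to trimming)
random.seed(1)


def send(symbol_id):
    """Sender transmits one symbol; the network may drop it."""
    return None if random.random() < LOSS else symbol_id


def transfer_block():
    received = set()
    sent = 0
    next_repair = K     # repair symbols are numbered K, K+1, ...
    # Phase 1: systematic symbols 0..K-1 (decoding-free if all of them arrive).
    pending = list(range(K))
    while len(received) < K:
        # Receiver-driven: only as many symbols as are still missing are requested.
        for sid in pending:
            sent += 1
            got = send(sid)
            if got is not None:
                received.add(got)
        # Phase 2: missing data is made up with fresh repair symbols rather
        # than retransmissions of specific lost packets.
        pending = list(range(next_repair, next_repair + (K - len(received))))
        next_repair += len(pending)
    decoding_needed = any(sid >= K for sid in received)
    return sent, decoding_needed


if __name__ == "__main__":
    packets, needed_decode = transfer_block()
    print(f"block complete after {packets} packets sent; decoding needed: {needed_decode}")
```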

Journal ArticleDOI
09 Aug 2021
TL;DR: This work provides a closed-form expression for the effective capacity (EC) of relay-assisted D2D communication, in which both the transmitter and the relay devices operate under individual QoS constraints, and proposes a novel multiple-features-based mechanism that utilizes all the features of a wireless link.
Abstract: Device‐to‐device (D2D) communication is a promising technique to enhance the spectral efficiency for 5G and beyond cellular networks. Traditionally, D2D communication was considered solely...

Journal ArticleDOI
TL;DR: In this paper, a rigorous general theory to elicit conditions on (tail-dependent) heavy-tailed cyber risk distributions under which a risk management firm might find it (non)sustainable to provide aggregate cyber-risk coverage services for smart societies is provided.
Abstract: In this paper, we provide (i) a rigorous general theory to elicit conditions on (tail-dependent) heavy-tailed cyber-risk distributions under which a risk management firm might find it (non)sustainable to provide aggregate cyber-risk coverage services for smart societies, and (ii) a real-data-driven numerical study to validate claims made in theory assuming boundedly rational cyber-risk managers, alongside providing ideas to boost markets that aggregate dependent cyber-risks with heavy tails. To the best of our knowledge, this is the only complete general theory to date on the feasibility of aggregate cyber-risk management.

Journal ArticleDOI
TL;DR: In this correction, the authors of Aggregate Cyber-Risk Management in the IoT Age: Cautionary Statistics for (Re)Insurers and Likes, published in the IEEE IoT Journal, report that they have found a few errors in the numerical evaluation setup of the works in [1] and [2] that they had borrowed for the accepted paper.
Abstract: As authors of our recently accepted article, Aggregate Cyber-Risk Management in the IoT Age: Cautionary Statistics for (Re)Insurers and Likes, published in the IEEE IoT Journal, we regret that we have found a few errors in the numerical evaluation setup of the works in [1] and [2] that we had borrowed for our accepted paper. In this correction statement, we describe the errors in detail, correct them, and present our revised results with a renewed experimental setup, which we hope will replace the existing incorrect numerical results in the accepted paper. We apologize for the inconvenience caused to the reader. We emphasize that the numerical evaluation section does not in any way hamper the theoretical contributions of this article, and was initially only meant to provide some empirical evidence for whether the theory proposed in this article generalizes to the behavioral settings introduced in [2].

Journal ArticleDOI
TL;DR: In this addendum, the authors generalize the application scope of the interdisciplinary contributions made in the original article, highlighting a variety of modern-day application families, not just the evident application pertaining to mobile-ad ecosystems.
Abstract: In the above article [1], we importantly missed out on generalizing the application scope of the interdisciplinary contributions made in the article. It is essential to educate the readers on an increasing variety of novel and highly practical modern-day application families where the contributions made in [1] are equally applicable, not just the evident application pertaining to mobile-ad ecosystems, as in [1].

Posted Content
TL;DR: In this paper, a sample of publicly collected wardriving data is compared to a predictive model for Wi-Fi Access Points, and the results demonstrate several statistical issues which future wardriving studies must account for, including selection bias, sample representativeness, and the modifiable areal unit problem.
Abstract: Knowledge of Wi-Fi networks helps to guide future engineering and spectrum policy decisions. However, due to its unlicensed nature, the deployment of Wi-Fi Access Points is undocumented, meaning researchers are left making educated guesses as to the prevalence of these assets through remotely collected or passively sensed measurements. One commonly used method is referred to as 'wardriving', where a vehicle is essentially used to collect geospatial statistical data on wireless networks to inform mobile computing and networking security research. Surprisingly, there has been very little examination of the statistical issues with wardriving data, despite the vast number of analyses being published in the literature using this approach. In this paper, a sample of publicly collected wardriving data is compared to a predictive model for Wi-Fi Access Points. The results demonstrate several statistical issues which future wardriving studies must account for, including selection bias, sample representativeness and the modifiable areal unit problem.
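The modifiable areal unit problem mentioned above can be demonstrated with a tiny synthetic experiment: the apparent agreement between observed access-point counts and a predictive surface changes purely because of the grid size chosen for aggregation. The sketch below uses synthetic points, not the wardriving dataset.

```python
# Synthetic illustration of the modifiable areal unit problem (MAUP): the
# correlation between observed AP counts and a predictive surface changes with
# the aggregation grid size alone. Synthetic data, not the paper's dataset.
import numpy as np

rng = np.random.default_rng(42)
N_APS = 5_000

# Synthetic AP locations: density falls off with distance from a "city centre"
# at (0.5, 0.5); the finite sample makes observed cell counts noisy.
pts = rng.normal(loc=0.5, scale=0.2, size=(N_APS, 2)).clip(0, 1)


def gridded_correlation(cells):
    """Aggregate observed and predicted AP counts on a cells x cells grid."""
    edges = np.linspace(0, 1, cells + 1)
    observed, _, _ = np.histogram2d(pts[:, 0], pts[:, 1], bins=[edges, edges])
    # Predicted surface: Gaussian density around the centre, scaled to N_APS.
    cx = (edges[:-1] + edges[1:]) / 2
    gx, gy = np.meshgrid(cx, cx, indexing="ij")
    predicted = np.exp(-((gx - 0.5) ** 2 + (gy - 0.5) ** 2) / (2 * 0.2 ** 2))
    predicted *= N_APS / predicted.sum()
    return np.corrcoef(observed.ravel(), predicted.ravel())[0, 1]


for cells in (50, 20, 5):
    print(f"{cells:>2d} x {cells:<2d} grid: corr(observed, predicted) = "
          f"{gridded_correlation(cells):.3f}")
# The same points and the same model give different apparent agreement
# depending solely on the chosen areal unit.
```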

Posted Content
TL;DR: In this paper, the statistical quality-of-service (QoS) analysis of a block-fading D2D link in a multi-tier cellular network that consists of a macro BS (BSMC) and a micro-BS (BSmC) which both operate in full-duplex (FD) mode is presented.
Abstract: This work carries out the statistical quality-of-service (QoS) analysis of a block-fading device-to-device (D2D) link in a multi-tier cellular network that consists of a macro-BS (BSMC) and a micro-BS (BSmC), which both operate in full-duplex (FD) mode. For the D2D link under consideration, we first formulate the mode selection problem (whereby the D2D pair could communicate directly, through the BSmC, or through the BSMC) as a ternary hypothesis testing problem. Next, to compute the effective capacity (EC) for the given D2D link, we assume that the channel state information (CSI) is not available at the transmit D2D node, and hence, it transmits at a fixed rate r with a fixed power. This allows us to model the D2D link as a Markov system with six states. We consider both overlay and underlay modes for the D2D link. Moreover, to improve the throughput of the D2D link, we assume that the D2D pair utilizes two special automatic repeat request (ARQ) schemes, i.e., Hybrid-ARQ (HARQ) and truncated HARQ. Furthermore, we consider two distinct queue models at the transmit D2D node, based upon how it responds to decoding failure at the receive D2D node. Eventually, we provide closed-form expressions for the EC of both the HARQ-enabled D2D link and the truncated HARQ-enabled D2D link, under both queue models. Noting that the EC appears to be a quasi-concave function of r, we further maximize the EC by searching for an optimal rate via the gradient-descent method. Simulation results provide the following insights: (i) EC decreases with an increase in the QoS exponent, (ii) EC of the D2D link improves when HARQ is employed, and (iii) EC increases with an increase in the quality of the self-interference cancellation techniques used at BSmC and BSMC in FD mode.