
Showing papers by "AT&T Labs" published in 2021


Journal ArticleDOI
TL;DR: In this article, the authors propose an edge controller-based architecture for cellular networks and evaluate its performance with real data from hundreds of base stations of a major U.S. operator.
Abstract: The fifth generation of cellular networks (5G) will rely on edge cloud deployments to satisfy the ultra-low latency demand of future applications. In this paper, we argue that such deployments can also be used to enable advanced data-driven and Machine Learning (ML) applications in mobile networks. We propose an edge-controller-based architecture for cellular networks and evaluate its performance with real data from hundreds of base stations of a major U.S. operator. In this regard, we will provide insights on how to dynamically cluster and associate base stations and controllers, according to the global mobility patterns of the users. Then, we will describe how the controllers can be used to run ML algorithms to predict the number of users in each base station, and a use case in which these predictions are exploited by a higher-layer application to route vehicular traffic according to network Key Performance Indicators (KPIs). We show that the prediction accuracy improves when based on machine learning algorithms that rely on the controllers’ view and, consequently, on the spatial correlation introduced by the user mobility, compared to when the prediction is based only on the local data of each individual base station.
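The paper's clustering and prediction pipelines are not reproduced here; as a hedged illustration of the two steps the abstract describes (grouping base stations by mobility/load patterns under controllers, then predicting per-station user counts from the controller-wide view), the sketch below clusters synthetic per-station load profiles with k-means and fits a ridge regression on lagged counts from the whole cluster. All data, cluster counts, and lags are assumptions for the example.

```python
# Illustrative sketch only: cluster base stations by load-profile similarity,
# then predict one station's next user count from lagged counts of its cluster.
# Synthetic data; not the paper's algorithms or datasets.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_bs, T = 60, 500
daily = np.sin(np.linspace(0, 20 * np.pi, T))                  # shared load pattern
users = np.clip(50 + 30 * np.outer(rng.uniform(0.5, 1.5, n_bs), daily)
                + rng.normal(0, 5, (n_bs, T)), 0, None)

# 1) Cluster base stations by normalized load profiles -> controller assignment.
profiles = (users - users.mean(axis=1, keepdims=True)) / users.std(axis=1, keepdims=True)
clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(profiles)

# 2) Predict one station's next-slot user count from lagged counts of its cluster.
target, lag = 0, 3
peers = np.where(clusters == clusters[target])[0]
X = np.stack([users[peers, t - lag:t].ravel() for t in range(lag, T)])
y = users[target, lag:]
split = int(0.8 * len(y))
model = Ridge(alpha=1.0).fit(X[:split], y[:split])
print("cluster sizes:", np.bincount(clusters))
print("test MAE:", round(float(np.abs(model.predict(X[split:]) - y[split:]).mean()), 2))
```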

63 citations


Journal ArticleDOI
TL;DR: The BRKGA-MP-IPR is a variant of the Biased Random-Key Genetic Algorithm that employs multiple (biased) parents to generate offspring instead of the usual two, and is hybridized with a novel, implicit path-relinking local search procedure, operating over the standard unit hypercube.
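The full BRKGA-MP-IPR (decoder, elite-set management, implicit path-relinking) is not reproduced here; the sketch below only illustrates the multi-parent, biased mating step named in the TL;DR: an offspring random-key vector is assembled gene-by-gene from several parents, with selection probabilities biased toward the better-ranked parents. Population size, number of parents, and the toy fitness function are assumptions.

```python
# Sketch of biased multi-parent mating over random-key vectors in [0,1]^n.
# Not the full BRKGA-MP-IPR (no decoder, elite set management, or path-relinking).
import numpy as np

rng = np.random.default_rng(1)

def multi_parent_offspring(parents_sorted, rng):
    """parents_sorted: (k, n) array of random-key vectors, best fitness first.
    Each gene is copied from one parent, chosen with rank-biased probability."""
    k, n = parents_sorted.shape
    weights = 1.0 / np.arange(1, k + 1)          # bias toward better-ranked parents
    probs = weights / weights.sum()
    choice = rng.choice(k, size=n, p=probs)      # which parent donates each gene
    return parents_sorted[choice, np.arange(n)]

# Toy usage: fitness = sphere function on the keys (lower is better).
pop = rng.random((10, 8))
fitness = (pop ** 2).sum(axis=1)
parents = pop[np.argsort(fitness)[:4]]           # pick 4 parents, best first
child = multi_parent_offspring(parents, rng)
print(np.round(child, 3))
```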

36 citations


Journal ArticleDOI
TL;DR: In this article, a RAN Intelligent Controller (RIC) platform is introduced that decouples the control and data planes of the radio access network, fostering network openness and empowering network intelligence with AI-enabled applications.
Abstract: With the emergence of 5G, network densification, and richer and more demanding applications, the radio access network (RAN)—a key component of the cellular network infrastructure—will become increasingly complex. To tackle this complexity, it is critical for the RAN to be able to automate the process of deploying, optimizing, and operating while leveraging novel data-driven technologies to ultimately improve the end-user quality of experience. In this article, we disaggregate the traditional monolithic control plane (CP) RAN architecture and introduce a RAN Intelligent Controller (RIC) platform that decouples the control and data planes of the RAN, driving an intelligent and continuously evolving radio network by fostering network openness and empowering network intelligence with AI-enabled applications. We provide functional and software architectures of the RIC and discuss its design challenges. We elaborate on how the RIC can enable near-real-time network optimization in 5G for the dual-connectivity use case using machine learning control loops. Finally, we provide preliminary results to evaluate the performance of our open-source RIC platform.

30 citations


Journal ArticleDOI
TL;DR: A novel resource allocation scheme that optimizes the network energy efficiency of a C-RAN is designed, together with a provably-convergent iterative method to solve the resulting Weighted Sum-Rate maximization problem.
Abstract: Cloud Radio Access Network (C-RAN) is a key architecture for 5G cellular wireless networks that aims at improving spectral and energy efficiency of the network by uniting traditional RAN with cloud computing. In this paper, a novel resource allocation scheme that optimizes the network energy efficiency of a C-RAN is designed. First, an energy consumption model that characterizes the computation energy of the BaseBand Unit (BBU) is introduced based on empirical results collected from a programmable C-RAN testbed. Then, an optimization problem is formulated to maximize the energy efficiency of the network, subject to practical constraints including Quality of Service (QoS) requirements, remote radio head transmit power, and fronthaul capacity limits. The formulated Network Energy Efficiency Maximization (NEEM) problem jointly considers the tradeoff among the network accumulated data rate, BBU power consumption, fronthaul cost, and beamforming design. To deal with the non-convexity and mixed-integer nature of the problem, we utilize successive convex approximation methods to transform the original problem into the equivalent Weighted Sum-Rate (WSR) maximization problem. We then propose a provably-convergent iterative method to solve the resulting WSR problem. Extensive simulation results coupled with real-time experiments on a small-scale C-RAN testbed show the effectiveness of our proposed resource allocation scheme and its advantages over existing approaches.
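The NEEM formulation above (beamforming, fronthaul, QoS, and the BBU energy model) is solved in the paper via successive convex approximation and a weighted sum-rate reformulation; none of that is reproduced here. Purely to make the underlying rate-versus-power tension concrete, the following is a hedged toy that applies a standard Dinkelbach iteration to a fractional "sum-rate over total power" objective on a single-cell power-allocation instance with assumed channel gains and circuit power.

```python
# Dinkelbach-style sketch for a fractional "rate / power" objective on a toy
# single-cell instance: maximize sum(log(1 + p_i * g_i)) / (P_circuit + sum(p_i)).
# Illustrative only; the paper's NEEM problem is far richer and is solved via
# successive convex approximation / weighted sum-rate instead.
import numpy as np

g = np.array([2.0, 0.8, 0.5, 0.2])   # assumed channel gains (linear scale)
P_circuit = 1.0                      # assumed static circuit/BBU power
P_max = 2.0                          # assumed per-user transmit power cap

lam = 0.1
for _ in range(50):
    # Inner problem max_p sum(log(1+p*g)) - lam*(P_circuit + sum(p)) has a
    # water-filling style closed form per user: p_i = 1/lam - 1/g_i, clipped.
    p = np.clip(1.0 / lam - 1.0 / g, 0.0, P_max)
    rate = np.log1p(p * g).sum()
    power = P_circuit + p.sum()
    if abs(rate - lam * power) < 1e-9:   # Dinkelbach optimality condition
        break
    lam = rate / power                   # update the energy-efficiency estimate

print("power allocation:", np.round(p, 3))
print("energy efficiency (nats/Joule, toy units):", round(lam, 4))
```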

20 citations


Proceedings ArticleDOI
Ajay Mahimkar, Ashiwan Sivakumar, Zihui Ge, Shomik Pathak, Karunasish Biswas
09 Aug 2021
TL;DR: In this article, the authors proposed a new data-driven recommendation approach Auric to automatically and accurately generate configuration parameters for new carriers added in cellular networks, which incorporates new algorithms based on collaborative filtering and geographical proximity to automatically determine similarity across existing carriers.
Abstract: Cellular service providers add carriers in the network in order to support the increasing demand in voice and data traffic and provide good quality of service to the users. Addition of new carriers requires the network operators to accurately configure their parameters for the desired behaviors. This is a challenging problem because of the large number of parameters related to various functions like user mobility, interference management and load balancing. Furthermore, the same parameters can have varying values across different locations to manage user and traffic behaviors as planned and respond appropriately to different signal propagation patterns and interference. Manual configuration is time-consuming, tedious and error-prone, which could result in poor quality of service. In this paper, we propose a new data-driven recommendation approach Auric to automatically and accurately generate configuration parameters for new carriers added in cellular networks. Our approach incorporates new algorithms based on collaborative filtering and geographical proximity to automatically determine similarity across existing carriers. We conduct a thorough evaluation using real-world LTE network data and observe a high accuracy (96%) across a large number of carriers and configuration parameters. We also share experiences from our deployment and use of Auric in production environments.
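Auric's actual algorithms and features are not spelled out here; as a hedged sketch of the general idea stated in the abstract (similarity across existing carriers combined with geographic proximity, then recommending parameter values from the most similar neighbors), the snippet below scores existing carriers by cosine similarity of synthetic context features, attenuates the score by distance, and recommends each parameter by a similarity-weighted vote. All carrier data, features, and weights are invented for illustration.

```python
# Hedged sketch: recommend configuration parameters for a new carrier from
# "similar" existing carriers (feature similarity x geographic proximity).
# Synthetic data; not Auric's actual algorithms, features, or parameters.
import numpy as np

rng = np.random.default_rng(2)
n_existing, n_feat, n_params = 200, 6, 4
feats = rng.normal(size=(n_existing, n_feat))              # per-carrier context features
locs = rng.uniform(0, 100, size=(n_existing, 2))           # site coordinates (km)
configs = rng.integers(0, 3, size=(n_existing, n_params))  # categorical parameter values

def recommend(new_feat, new_loc, k=10, dist_scale=20.0):
    # Cosine similarity on features (clipped to >= 0), attenuated by distance.
    sim = feats @ new_feat / (np.linalg.norm(feats, axis=1) * np.linalg.norm(new_feat))
    sim = np.clip(sim, 0.0, None)
    dist = np.linalg.norm(locs - new_loc, axis=1)
    score = sim * np.exp(-dist / dist_scale)
    nbrs = np.argsort(score)[-k:]                          # top-k most similar carriers
    rec = []
    for p in range(n_params):
        vals, idx = np.unique(configs[nbrs, p], return_inverse=True)
        rec.append(vals[np.bincount(idx, weights=score[nbrs]).argmax()])
    return np.array(rec)

print(recommend(rng.normal(size=n_feat), np.array([50.0, 50.0])))
```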

7 citations


Proceedings ArticleDOI
11 Jul 2021
TL;DR: In this article, the authors propose to learn dense representations for skills and experts based on previous collaborations and bootstrap the training process through transfer learning, which is able to outperform the state-of-the-art graph and neural methods over both ranking and quality metrics.
Abstract: Given a set of required skills, the objective of the team formation problem is to form a team of experts that cover the required skills. Most existing approaches are based on graph methods, such as minimum-cost spanning trees. These approaches, due to their limited view of the network, fail to capture complex interactions among experts and are computationally intractable. More recent approaches adopt neural architectures to learn a mapping between the skills and experts space. While they are more effective, these techniques face two main limitations: (1) they consider a fixed representation for both skills and experts, and (2) they overlook the significant amount of past collaboration network information. In this work, we learn dense representations for skills and experts based on previous collaborations and bootstrap the training process through transfer learning. We also propose to fine-tune the representation of skills and experts while learning the mapping function. Our experiments over the DBLP dataset verify that our proposed architecture is able to outperform the state-of-the-art graph and neural methods over both ranking and quality metrics.

7 citations


Proceedings ArticleDOI
19 Apr 2021
TL;DR: In this paper, the authors present novel techniques that match tables, infoboxes and lists within a page across page revisions, and evaluate their approach on a representative sample of pages and measure the number of correct matches.
Abstract: A considerable amount of useful information on the web is (semi-)structured, such as tables and lists. An extensive corpus of prior work addresses the problem of making these human-readable representations interpretable by algorithms. Most of these works focus only on the most recent snapshot of these web objects. However, their evolution over time represents valuable information that has barely been tapped, enabling various applications, including visual change exploration and trust assessment. To realize the full potential of this information, it is critical to match such objects across page revisions. In this work, we present novel techniques that match tables, infoboxes and lists within a page across page revisions. We are, thus, able to extract the evolution of structured information in various forms from a long series of web page revisions. We evaluate our approach on a representative sample of pages and measure the number of correct matches. Our approach achieves a significant improvement in object matching over baselines and over related work.
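The paper's matching techniques are not reproduced here; the sketch below only illustrates the shape of the problem: score candidate table pairs between two revisions by Jaccard similarity over their cell values and solve the resulting assignment problem with the Hungarian method, leaving low-similarity objects unmatched. The toy tables and the similarity threshold are assumptions.

```python
# Illustrative sketch: match tables between two page revisions by Jaccard
# similarity of their cell values, then solve the assignment problem.
# Not the paper's actual matching techniques.
import numpy as np
from scipy.optimize import linear_sum_assignment

rev_old = [  # each "table" reduced to the set of its cell strings
    {"city", "pop", "paris", "2.1m", "rome", "2.8m"},
    {"year", "gdp", "2019", "2.7t", "2020", "2.6t"},
]
rev_new = [
    {"year", "gdp", "2019", "2.7t", "2020", "2.6t", "2021", "2.9t"},   # edited
    {"city", "pop", "paris", "2.1m", "rome", "2.8m"},                  # moved
    {"team", "wins", "ajax", "26"},                                    # brand new
]

def jaccard(a, b):
    return len(a & b) / len(a | b)

sim = np.array([[jaccard(o, n) for n in rev_new] for o in rev_old])
rows, cols = linear_sum_assignment(-sim)        # maximize total similarity
threshold = 0.5
for r, c in zip(rows, cols):
    if sim[r, c] >= threshold:
        print(f"old table {r} -> new table {c} (similarity {sim[r, c]:.2f})")
    else:
        print(f"old table {r} has no confident match")
```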

6 citations


Journal ArticleDOI
TL;DR: An evolutionary approach for the p-next center problem and an extension for the current benchmark instances are proposed, built on the Multi-Parent Biased Random-Key Genetic Algorithm with Implicit Path-Relinking.
Abstract: The p-next center problem is an extension of the classical p-center problem, in which a backup center must be assigned to welcome users from a suddenly unavailable center. Usually, users tend to seek help in the closest facility they can find. However, during a significant event or crisis, one only realizes that the closest facility has been disrupted upon his/her arrival. Therefore, the user seeks help in the next closest center from the one that has failed to provide service. The objective of the p-next center problem is thus to minimize the maximum path traveled by any user, composed of the distance from the user's origin to its closest installed facility plus the distance from that facility to its backup. We propose an evolutionary approach for the p-next center problem and an extension for the current benchmark instances. The proposed methods are built on the Multi-Parent Biased Random-Key Genetic Algorithm with Implicit Path-Relinking. Computational experiments carried out on 416 test instances show, experimentally, the outstanding performance of the developed algorithms and their flexibility to reach a good quality-speed trade-off.
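For concreteness, here is a small worked sketch of the p-next center objective defined above (not the BRKGA-based solver): each user's cost is the distance to its closest open center plus the distance from that center to its own closest open neighbor (the backup), and the objective is the maximum cost over all users. The instance and the chosen centers are invented.

```python
# Worked sketch of the p-next center objective (not the evolutionary solver).
# cost(user) = d(user, closest open center) + d(that center, its backup center);
# objective  = max over users of cost(user).
import numpy as np

rng = np.random.default_rng(3)
users = rng.uniform(0, 10, size=(12, 2))
sites = rng.uniform(0, 10, size=(6, 2))
opened = [0, 2, 5]                       # candidate solution: indices of open sites

def pnext_objective(users, sites, opened):
    open_xy = sites[opened]
    d_us = np.linalg.norm(users[:, None, :] - open_xy[None, :, :], axis=2)
    d_cc = np.linalg.norm(open_xy[:, None, :] - open_xy[None, :, :], axis=2)
    np.fill_diagonal(d_cc, np.inf)       # a center cannot back itself up
    backup_dist = d_cc.min(axis=1)       # distance from each center to its backup
    closest = d_us.argmin(axis=1)
    return (d_us.min(axis=1) + backup_dist[closest]).max()

print("p-next center objective:", round(pnext_objective(users, sites, opened), 3))
```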

5 citations


Proceedings ArticleDOI
15 Jul 2021
TL;DR: In this paper, a framework for quality-aware adaptive bitrate (ABR) streaming under a per-session data budget constraint is proposed, with two planning-based strategies: one for the case where fine-grained perceptual quality information is known to the planning scheme, and another for the case where such information is not available.
Abstract: Over-the-top video (OTT) streaming accounts for the majority of traffic on cellular networks, and also places a heavy demand on users' limited monthly cellular data budgets. In contrast to much of traditional research that focuses on improving the quality, we explore a different direction---using data budget information to better manage the data usage of mobile video streaming, while minimizing the impact on users' quality of experience (QoE). Specifically, we propose a novel framework for quality-aware Adaptive Bitrate (ABR) streaming involving a per-session data budget constraint. Under the framework, we develop two planning based strategies, one for the case where fine-grained perceptual quality information is known to the planning scheme, and another for the case where such information is not available. Evaluations for a wide range of network conditions, using different videos covering a variety of content types and encodings, demonstrate that both these strategies use much less data compared to state-of-the-art ABR schemes, while still providing comparable QoE. Our proposed approach is designed to work in conjunction with existing ABR streaming workflows, enabling ease of adoption.
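The paper's planning strategies are not reproduced here; as a hedged sketch of the budget-constrained choice they formalize, the snippet below solves a toy version by dynamic programming: pick exactly one bitrate per chunk to maximize total quality score subject to a per-session data budget (a grouped knapsack). Chunk sizes, quality scores, and the budget are assumptions, and a real ABR planner must additionally respect buffer and throughput dynamics.

```python
# Toy sketch: choose one bitrate per chunk to maximize total quality under a
# per-session data budget (grouped knapsack solved by dynamic programming).
# Ignores the buffer/throughput dynamics a real ABR planner must handle.
sizes   = [[1, 2, 4], [1, 3, 5], [1, 2, 6], [2, 3, 5]]   # MB per (chunk, bitrate level)
quality = [[2, 5, 8], [1, 4, 9], [2, 6, 9], [3, 5, 8]]   # quality score per choice
budget  = 12                                             # MB for the whole session

NEG = float("-inf")
best = [NEG] * (budget + 1)
best[0] = 0.0
choice = [[None] * (budget + 1) for _ in sizes]          # backpointers per chunk

for i, (szs, qs) in enumerate(zip(sizes, quality)):
    nxt = [NEG] * (budget + 1)
    for b in range(budget + 1):
        if best[b] == NEG:
            continue
        for j, (s, q) in enumerate(zip(szs, qs)):
            if b + s <= budget and best[b] + q > nxt[b + s]:
                nxt[b + s] = best[b] + q
                choice[i][b + s] = (j, b)
    best = nxt

b = max(range(budget + 1), key=lambda x: best[x])        # best reachable budget
total = best[b]
plan = []
for i in reversed(range(len(sizes))):
    j, b = choice[i][b]
    plan.append(j)
plan.reverse()
print("bitrate level per chunk:", plan)
print("total quality:", total, "| data used (MB):", sum(sizes[i][j] for i, j in enumerate(plan)))
```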

5 citations


Proceedings ArticleDOI
15 Jul 2021
TL;DR: In this article, the authors propose Livelyzer, a generalized active measurement and black-box testing framework for analyzing the performance of this component in popular live streaming software and services under controlled settings.
Abstract: Over-the-top (OTT) live video traffic has grown significantly, fueled by fundamental shifts in how users consume video content (e.g., increased cord-cutting) and by improvements in camera technologies, computing power, and wireless resources. A key determining factor for the end-to-end live streaming QoE is the design of the first-mile upstream ingest path that captures and transmits the live content in real-time, from the broadcaster to the remote video server. This path often involves either a Wi-Fi or cellular component, and is likely to be bandwidth-constrained with time-varying capacity, making the task of high-quality video delivery challenging. Today, there is little understanding of the state of the art in the design of this critical path, with existing research focused mainly on the downstream distribution path, from the video server to end viewers. To shed more light on the first-mile ingest aspect of live streaming, we propose Livelyzer, a generalized active measurement and black-box testing framework for analyzing the performance of this component in popular live streaming software and services under controlled settings. We use Livelyzer to characterize the ingest behavior and performance of several live streaming platforms, identify design deficiencies that lead to poor performance, and propose best practice design recommendations to improve the same.

4 citations


Proceedings ArticleDOI
09 Aug 2021
TL;DR: CORNET, as proposed in this paper, is a new framework for change management whose key ideas are modularization of changes into building blocks, flexible composition into change workflows, change plan optimization, change impact verification, and automated translation of high-level change management intent into low-level implementations and mathematical models.
Abstract: Change management has been a long-standing challenge for network operations. The large scale and diversity of networks, their complex dependencies, and continuous evolution through technology and software updates combined with the risk of service impact create tremendous challenges to effectively manage changes. In this paper, we use data from a large service provider and experiences of their operations teams to highlight the need for quick and easy adaptation of change management capabilities and keep up with the continuous network changes. We propose a new framework CORNET (COmposition fRamework for chaNge managEmenT) with key ideas of modularization of changes into building blocks, flexible composition into change workflows, change plan optimization, change impact verification, and automated translation of high-level change management intent into low-level implementations and mathematical models. We demonstrate the effectiveness of CORNET using real-world data collected from 4G and 5G cellular networks and virtualized services such as VPN and SDWAN running in the cloud as well as experiments conducted on a testbed of virtualized network functions. We also share our operational experiences and lessons learned from successfully using CORNET within a large service provider network over the last three years.

Proceedings ArticleDOI
01 Apr 2021
TL;DR: In this article, the authors determine the optimal beam sweeping period, i.e., the frequency of the channel measurements, to align the transmitter and receiver beams to the best channel directions for maximizing the vehicle-to-infrastructure (V2I) throughput.
Abstract: Millimeter wave wireless spectrum deployments will allow vehicular communications to share high data rate vehicular sensor data in real-time. The highly directional nature of wireless links in millimeter spectral bands will require continuous channel measurements to ensure the transmitter (TX) and receiver (RX) beams are aligned to provide the best channel. Using real-world vehicular mmWave measurement data at 28 GHz, we determine the optimal beam sweeping period, i.e. the frequency of the channel measurements, to align the RX beams to the best channel directions for maximizing the vehicle-to-infrastructure (V2I) throughput. We show that in a realistic vehicular traffic environment in Austin, TX, for a vehicle traveling at an average speed of 10.5 mph, a beam sweeping period of 300 ms in future V2I communication standards would maximize the V2I throughput, using a system of four RX phased arrays that scanned the channel 360 degrees in the azimuth and 30 degrees above and below the boresight. We also investigate the impact of the number of active RX chains controlling the steerable phased arrays on V2I throughput. Reducing the number of RX chains controlling the phased arrays helps reduce the cost of the vehicular mmWave hardware while multiple RX chains, although more expensive, provide more robustness to beam direction changes at the vehicle, allowing near maximum throughput over a wide range of beam sweep periods. We show that the overhead of utilizing one RX chain instead of four leads to a 10% drop in mean V2I throughput over six non-line-of-sight runs in real traffic conditions, with each run being 10 to 20 seconds long over a distance of 40 to 90 meters.
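The 300 ms result above comes from measured 28 GHz data; purely as a hedged toy of the tradeoff being quantified (sweeping more often costs overhead, sweeping less often leaves beams stale), the sketch below evaluates effective throughput as (1 - overhead/T) times an assumed alignment-quality factor that decays with the sweep period T, and picks the maximizing T. The overhead, decay constant, and peak rate are invented, not measurement-based.

```python
# Toy tradeoff sketch: pick the beam-sweep period T that maximizes effective
# throughput = (1 - sweep_overhead / T) * peak_rate * alignment_quality(T).
# All constants are assumptions for illustration, not the paper's 28 GHz data.
import numpy as np

peak_rate_gbps = 1.2          # assumed throughput with perfectly aligned beams
sweep_overhead_ms = 25.0      # assumed time spent sweeping per period
staleness_ms = 400.0          # assumed decay constant of beam alignment quality

T = np.linspace(50, 2000, 400)                     # candidate sweep periods (ms)
alignment = np.exp(-T / (2 * staleness_ms))        # stale beams -> lower quality
eff = (1 - sweep_overhead_ms / T) * peak_rate_gbps * alignment

best = T[np.argmax(eff)]
print(f"best sweep period ~{best:.0f} ms, effective rate ~{eff.max():.2f} Gbps")
```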

Journal ArticleDOI
TL;DR: In this paper, the authors focus on the core problem of count queries and seek to design mechanisms to release data associated with a group of $n$ individuals while avoiding the pathological behaviors (gaps and spikes) that arise from raw optimization of a loss function.
Abstract: Concern about how to aggregate sensitive user data without compromising individual privacy is a major barrier to greater availability of data. Differential privacy has emerged as an accepted model to release sensitive information while giving a statistical guarantee for privacy. Many different algorithms are possible to address different target functions. We focus on the core problem of count queries, and seek to design mechanisms to release data associated with a group of $n$ individuals. Prior work has focused on designing mechanisms by raw optimization of a loss function, without regard to the consequences on the results. This can lead to mechanisms with undesirable properties, such as never reporting some outputs (gaps), and overreporting others (spikes). We tame these pathological behaviors by introducing a set of desirable properties that mechanisms can obey. Any combination of these can be satisfied by solving a linear program (LP) which minimizes a cost function, with constraints enforcing the properties. We focus on a particular cost function, and provide explicit constructions that are optimal for certain combinations of properties, and show a closed form for their cost. In the end, there are only a handful of distinct optimal mechanisms to choose between: one is the well-known (truncated) geometric mechanism; the second a novel mechanism that we introduce here, and the remainder are found as the solution to particular LPs. These all avoid the bad behaviors we identify. We demonstrate in a set of experiments on real and synthetic data which mechanism is preferable in practice, for different combinations of data distributions, constraints, and privacy parameters.
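One of the mechanisms the abstract singles out, the (truncated) geometric mechanism for count queries, is a textbook construction and easy to sketch: add two-sided geometric noise with parameter alpha = exp(-epsilon) to the true count and clamp the result to the valid range [0, n]. The sampler below is a minimal illustration of that standard mechanism only, not of the LP-derived mechanisms introduced in the paper.

```python
# Minimal sketch of the (truncated) geometric mechanism for a count query:
# output = clamp(true_count + two-sided geometric noise, 0, n).
# Textbook mechanism for illustration; not the paper's LP-derived mechanisms.
import numpy as np

def truncated_geometric(true_count, n, epsilon, rng):
    alpha = np.exp(-epsilon)
    # Two-sided geometric noise: the difference of two iid geometric variables
    # on {0,1,2,...} has P(Z = k) = (1-alpha)/(1+alpha) * alpha**abs(k).
    g1 = rng.geometric(1 - alpha) - 1
    g2 = rng.geometric(1 - alpha) - 1
    return int(np.clip(true_count + g1 - g2, 0, n))

rng = np.random.default_rng(4)
n, true_count, epsilon = 100, 37, 0.5
samples = [truncated_geometric(true_count, n, epsilon, rng) for _ in range(10000)]
print("empirical mean:", np.mean(samples), "(true count:", true_count, ")")
```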

Journal ArticleDOI
TL;DR: An epoch-greedy bandit algorithm is proposed that achieves sub-linear regret given access to a class of classifying functions over the channel-state space, and adaptive scheduling using the learned rate-region model outperforms the corresponding hand-tuned static maps in multiple settings.
Abstract: We propose an online algorithm for clustering channel-states and learning the associated achievable multiuser rates. Our motivation stems from the complexity of multiuser scheduling. For instance, MU-MIMO scheduling involves the selection of a user subset and associated rate selection each time-slot for varying channel states (the vector of quantized channel matrices for each of the users) — a complex integer optimization problem that is different for each channel state. Instead, our algorithm clusters the collection of channel states to a much lower dimension, and for each cluster provides achievable multiuser capacity trade-offs, which can be used for user and rate selection. Our algorithm uses a bandit approach, where it learns both the unknown partitions of the channel-state space (channel-state clustering) as well as the rate region for each cluster along a pre-specified set of directions, by observing the success/failure of the scheduling decisions (e.g. through packet loss). We propose an epoch-greedy learning algorithm that achieves a sub-linear regret, given access to a class of classifying functions over the channel-state space. We empirically validate our approach on a high-fidelity 5G New Radio (NR) wireless simulator developed within AT&T Labs. We show that our epoch-greedy bandit algorithm learns the channel-state clusters and the associated rate regions. Further, adaptive scheduling using this learned rate-region model (map from channel-state to the set of feasible rates) outperforms the corresponding hand-tuned static maps in multiple settings. Thus, we believe that auto-tuning cellular systems through learning-assisted scheduling algorithms can significantly improve performance in real deployments.
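The paper's joint learning of channel-state clusters and per-cluster rate regions is not reproduced here; as a much reduced, hedged sketch of the epoch-greedy pattern it builds on, the loop below alternates one uniformly random exploration decision with a growing block of exploitation decisions that pick, for the nearest-centroid cluster, the rate with the best estimated success-weighted throughput. The channel states, candidate rates, and success model are synthetic assumptions.

```python
# Hedged toy of an epoch-greedy loop: explore one random rate per epoch, then
# exploit the per-cluster rate with best estimated (rate x success probability).
# Synthetic channel model; not the paper's clustering or rate-region learning.
import numpy as np

rng = np.random.default_rng(5)
rates = np.array([5.0, 10.0, 20.0, 40.0])          # candidate rates (Mb/s)
centroids = rng.normal(size=(3, 4))                # assumed channel-state clusters

def channel_state():                               # synthetic 4-dim channel state
    k = rng.integers(3)
    return centroids[k] + 0.1 * rng.normal(size=4), k

def transmit(true_cluster, rate_idx):              # success more likely at low rates
    p_ok = np.clip(1.2 - 0.3 * rate_idx - 0.1 * true_cluster, 0.05, 0.95)
    return rng.random() < p_ok

succ = np.ones((3, len(rates)))                    # per-(cluster, rate) success counts
tries = np.full((3, len(rates)), 2.0)              # with a mild prior

goodput = 0.0
for epoch in range(1, 200):
    s, k_true = channel_state()                    # one exploration slot
    k = np.linalg.norm(centroids - s, axis=1).argmin()
    j = rng.integers(len(rates))
    succ[k, j] += transmit(k_true, j)
    tries[k, j] += 1
    for _ in range(epoch):                         # exploitation block grows per epoch
        s, k_true = channel_state()
        k = np.linalg.norm(centroids - s, axis=1).argmin()
        j = (rates * succ[k] / tries[k]).argmax()
        goodput += rates[j] * transmit(k_true, j)

print("average exploited goodput (Mb/s):", round(goodput / sum(range(1, 200)), 2))
```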

Journal ArticleDOI
TL;DR: In this paper, the authors proposed a framework, called PAveMENT, that provides safe and optimal paths for navigating UAVs such that: a) they experience reliable and high-quality communications with the underlying cellular network; b) they do not fly over no-fly zones or interrupt public/private services; and c) the UAVs have minimal impact on the ground users of the mobile network.

Journal ArticleDOI
TL;DR: In this paper, the authors proposed a distributed and contention-free scheduling scheme for VM checkpointing to provide reliability as a transparent, elastic service, quantified reliability in closed form by studying system stationary behaviours, and maximized job reliability through utility optimization.
Abstract: A datacenter that consists of hundreds or thousands of servers can provide virtualized environments to a large number of cloud applications and jobs that value the requirement of reliability very differently. Checkpointing a virtual machine (VM) is a proven technique to improve reliability. However, existing checkpoint scheduling techniques for enhancing reliability of distributed systems fail to achieve satisfactory results, either because they tend to offer the same, fixed reliability to all jobs, or because their solutions are tied up to specific applications and rely on centralized checkpoint control mechanisms. In this work, we first show that reliability can be significantly improved through contention-free scheduling of checkpoints. Then, inspired by the Carrier Sense Multiple Access (CSMA) protocol in wireless congestion control, we propose a novel framework for distributed and contention-free scheduling of VM checkpointing to provide reliability as a transparent, elastic service. We quantify reliability in closed form by studying system stationary behaviours, and maximize job reliability through utility optimization. Our design is validated via a proof-of-concept prototype that leverages readily available implementations in Xen hypervisors. The proposed checkpoint scheduling is shown to significantly reduce checkpointing interference and improve reliability by as much as one order of magnitude over contention-oblivious checkpoint schemes.
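As a hedged toy of the CSMA-inspired idea described above (sense whether another checkpoint is in progress and back off randomly rather than collide), the discrete-time sketch below compares a naive fixed checkpoint schedule against one with sensing plus random backoff and reports how many checkpoints start under contention. Periods, durations, and backoff ranges are invented; the paper's distributed protocol, closed-form reliability analysis, and utility optimization are not reproduced.

```python
# Toy sketch of CSMA-style contention-free checkpoint scheduling (discrete time).
# Each VM wants a checkpoint every `period` slots; with sensing enabled it defers
# by a random backoff whenever another VM's checkpoint is still in progress.
# Illustration only; not the paper's protocol or reliability analysis.
import random

random.seed(6)
n_vms, period, duration, backoff, horizon = 8, 50, 5, 8, 5000

def simulate(csma):
    next_start = [random.randrange(period) for _ in range(n_vms)]
    end = [0] * n_vms                      # when each VM's current checkpoint ends
    contended = completed = 0
    for t in range(horizon):
        busy = any(e > t for e in end)     # "carrier sense": is anyone checkpointing?
        for vm in range(n_vms):
            if t < next_start[vm]:
                continue
            if csma and busy:
                next_start[vm] = t + 1 + random.randrange(backoff)
                continue
            if busy:
                contended += 1             # started on top of another checkpoint
            end[vm] = t + duration
            busy = True
            completed += 1
            next_start[vm] = t + period
    return completed, contended

for mode in (False, True):
    done, clashes = simulate(mode)
    print(f"csma={mode}: {done} checkpoints, {clashes} started under contention")
```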

Journal ArticleDOI
TL;DR: In this paper, the authors proposed Fat-Proxy, which acts as a stand-alone execution engine for critical network events, executing several signalling messages in parallel while skipping unnecessary ones to reduce event execution time and signalling overhead and ensure highly available service access.
Abstract: The explosion of mobile applications and phenomenal adoption of mobile connectivity by end users make all-IP based 4G LTE an ideal choice for providing Internet access on the go. The LTE core network, which handles device control-plane and data-plane traffic, becomes susceptible to network resource constraints. To ease these constraints, Network Function Virtualization (NFV) provides high scalability and flexibility by enabling dynamic allocation of LTE core network resources. NFV achieves this by decomposing LTE Network Functions (NF) into multiple instances. However, the LTE core network architecture, which was designed around fewer NF boxes, does not fit well in this setting, as decomposed NF instances add delays to network event execution. Certain time-critical control-plane events thus hurt the data-plane traffic requirements defined by the LTE standard. This paper proposes Fat-Proxy, which acts as a stand-alone execution engine for these critical network events. Through space uncoupling, we execute several signalling messages in parallel while skipping unnecessary messages to reduce event execution time and signalling overhead while ensuring highly available service access. We build our system prototype of an open-source LTE core network over a virtualized platform. Our results show that we can reduce event execution time and signalling overhead by up to 50% and 40%, respectively.

Journal ArticleDOI
TL;DR: As discussed in this paper, the use of geo-referenced data for targeted marketing is receiving significant attention from a wide spectrum of companies and organizations, with numerous applications in many domains, including social networks, marketing, and tourism.
Abstract: The amount of publicly available geo-referenced data has seen a dramatic increase over the last years. Many user activities generate data that are annotated with location and contextual information. Moreover, it has become easier to collect and combine rich and diverse location information. In the context of geoadvertising, the use of geosocial data for targeted marketing is receiving significant attention from a wide spectrum of companies and organizations. With the advent of smartphones and online social networks, a multi-billion dollar industry that utilizes geosocial data for advertising and marketing has emerged. Geotagged social-media posts, GPS traces, data from cellular antennas and WiFi access points are used widely to directly access people for advertising, recommendations, marketing, and group purchases. Exploiting this torrent of geo-referenced data provides a tremendous potential to materially improve existing recommendation services and offer novel ones, with numerous applications in many domains, including social networks, marketing, and tourism.


DOI
02 Nov 2021
TL;DR: In this paper, the authors explore two approaches to the problem: (a) a pipeline approach, where each post is first classified, and then the location associated with the set of posts is inferred from the individual post labels; and (b) a joint approach where the individual posts are simultaneously processed to yield the desired location type.
Abstract: Location classification is used for associating type to locations, to enrich maps and support a plethora of geospatial applications that rely on location types. Classification can be performed by humans, but using machine learning is more efficient and faster to react to changes than human-based classification. Machine learning can be used in lieu of human classification or for supporting it. In this paper we study the use of machine learning for Geosocial Location Classification, where the type of a site, e.g., a building, is discovered based on social-media posts, e.g., tweets. Our goal is to correctly associate a set of tweets posted in a small radius around a given location with the corresponding location type, e.g., school, church, restaurant or museum. We explore two approaches to the problem: (a) a pipeline approach, where each post is first classified, and then the location associated with the set of posts is inferred from the individual post labels; and (b) a joint approach where the individual posts are simultaneously processed to yield the desired location type. We tested the two approaches over a data set of geotagged tweets. Our results demonstrate the superiority of the joint approach. Moreover, we show that due to the unique structure of the problem, where weakly-related messages are jointly processed to yield a single final label, linear classifiers outperform deep neural network alternatives.
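The abstract contrasts a per-post pipeline with joint processing of all posts around a location and reports that linear classifiers do well; the sketch below illustrates both framings on a tiny invented corpus with a TF-IDF plus logistic regression model from scikit-learn (pipeline: classify each post and take a majority vote; joint: classify the concatenated posts once). It shows the shape of the two approaches only, not the paper's models, features, or data.

```python
# Tiny sketch of the two framings for geosocial location classification:
# (a) pipeline: classify each post, then majority-vote per location;
# (b) joint: concatenate a location's posts and classify the bundle once.
# Invented mini-corpus; not the paper's dataset, features, or models.
from collections import Counter
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_posts = [
    ("great pasta and wine tonight", "restaurant"),
    ("the chef's tasting menu was amazing", "restaurant"),
    ("midterm exam tomorrow, studying late", "school"),
    ("proud of our students at graduation", "school"),
    ("sunday service was beautiful", "church"),
    ("choir practice before mass", "church"),
]
texts, labels = zip(*train_posts)
clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000)).fit(texts, labels)

# Posts observed near one unknown location:
location_posts = ["the pasta was amazing", "great wine list", "loved the tasting menu"]

# (a) pipeline approach: per-post labels, then a majority vote.
votes = Counter(clf.predict(location_posts))
print("pipeline vote:", votes.most_common(1)[0][0])

# (b) joint approach: classify the concatenated bundle of posts.
print("joint label: ", clf.predict([" ".join(location_posts)])[0])
```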

DOI
Paul Reeser
09 Dec 2021
TL;DR: In this article, the authors use a simple 2-tiered reference model consisting of servers and sites to illustrate an approach to topology configuration and optimization, with a focus on addressing geo-redundancy questions like how many sites, and how many servers per site, are required to meet performance and reliability requirements.
Abstract: We use a simple 2-tiered reference model consisting of servers and sites to illustrate an approach to topology configuration and optimization, with a focus on addressing geo-redundancy questions like how many sites, and how many servers per site, are required to meet performance and reliability requirements. We first develop a multi-dimensional component failure mode reference model, then reduce this model to a one-dimensional service outage mode reference model. The key contribution is the exact derivation of the outage and restoral rates from the set of ‘available’ states to the set of ‘unavailable’ states using an adaptation of the hyper-geometric “balls in urns” distribution with unequally likely combinations. We describe a topology configuration tool for optimizing resources to meet requirements and illustrate effective use of the tool for a hypothetical VoIP call setup protocol message processing application.
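The paper's contribution is an exact closed-form derivation of outage and restoral rates; purely as a hedged cross-check of the kind of question it answers (how many sites, and how many servers per site, meet an availability target), the sketch below estimates service availability by Monte Carlo, declaring an outage whenever independent site and server failures leave fewer than the required number of servers reachable. The failure probabilities and capacity requirement are assumptions.

```python
# Monte Carlo sketch of 2-tier (sites x servers) geo-redundancy availability.
# Service is "up" in a trial if, after independent site and server failures,
# at least `servers_needed` servers remain reachable. Illustrative assumptions;
# the paper instead derives outage/restoral rates exactly in closed form.
import numpy as np

rng = np.random.default_rng(7)
n_sites, servers_per_site = 3, 4
p_site_down, p_server_down = 0.001, 0.01
servers_needed = 6
trials = 500_000

site_up = rng.random((trials, n_sites)) > p_site_down
server_up = rng.random((trials, n_sites, servers_per_site)) > p_server_down
usable = (server_up & site_up[:, :, None]).sum(axis=(1, 2))
availability = (usable >= servers_needed).mean()
print(f"estimated availability: {availability:.6f} "
      f"({n_sites} sites x {servers_per_site} servers, need {servers_needed})")
```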

Proceedings ArticleDOI
29 Mar 2021
TL;DR: In this paper, an intelligent controller is proposed that requires no strong assumptions or domain knowledge about the RAN and can run 24/7 without supervision; this is challenging because the detailed mechanisms of the eNodeB configurations are usually very complicated and not disclosed, and the space of key performance indicators (KPIs) to be considered is large.
Abstract: Due to the high variability of the traffic in the radio access network (RAN), fixed network configurations are not flexible enough to achieve optimal performance. Our vendors provide several settings of the eNodeB to optimize the RAN performance, such as the media access control scheduler, load balancing, etc. However, the detailed mechanisms of the eNodeB configurations are usually very complicated and not disclosed, not to mention the large space of key performance indicators (KPIs) that needs to be considered. These factors make constructing a simulator, offline tuning, or rule-based solutions difficult. We aim to build an intelligent controller that requires no strong assumptions or domain knowledge about the RAN and can run 24/7 without supervision. To achieve this goal, we first build a closed-loop control testbed RAN in a lab environment with one eNodeB provided by one of the largest wireless vendors and four smartphones. Next, we build a double Q network agent trained with the live feedback of the key performance indicators from the RAN. Our work demonstrates the effectiveness of applying deep reinforcement learning to improve network performance in a real RAN environment.
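The paper trains a double Q network against a live eNodeB and real KPIs; as a hedged, minimal illustration of only the double Q-learning update rule it builds on, the sketch below runs the tabular variant (two estimators, each updated by selecting the greedy action with one and evaluating it with the other) on a made-up one-dimensional toy environment rather than a RAN.

```python
# Minimal tabular double Q-learning sketch: two estimators Q1, Q2; each update
# selects the greedy action with one and evaluates it with the other, reducing
# overestimation bias. Toy 1-D environment; not the paper's RAN agent or KPIs.
import numpy as np

rng = np.random.default_rng(8)
n_states, n_actions = 5, 2            # actions: 0 = left, 1 = right
Q1 = np.zeros((n_states, n_actions))
Q2 = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.95, 0.1

def step(s, a):                        # reward only for reaching the right end
    s2 = max(0, min(n_states - 1, s + (1 if a == 1 else -1)))
    return s2, (1.0 if s2 == n_states - 1 else 0.0)

s = 0
for t in range(20000):
    q = Q1[s] + Q2[s]
    a = rng.integers(n_actions) if rng.random() < eps else int(q.argmax())
    s2, r = step(s, a)
    if rng.random() < 0.5:             # update Q1, using Q2 to evaluate Q1's greedy pick
        a_star = int(Q1[s2].argmax())
        Q1[s, a] += alpha * (r + gamma * Q2[s2, a_star] - Q1[s, a])
    else:                              # ... and vice versa
        a_star = int(Q2[s2].argmax())
        Q2[s, a] += alpha * (r + gamma * Q1[s2, a_star] - Q2[s, a])
    s = 0 if s2 == n_states - 1 else s2   # reset after reaching the goal

print("greedy policy (0=left, 1=right):", (Q1 + Q2).argmax(axis=1))
```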

Journal ArticleDOI
TL;DR: In this paper, the authors propose a dual formulation of the problem and develop the DIRECT algorithm, which is significantly more efficient than the state of the art.
Abstract: Signal reconstruction problem (SRP) is an important optimization problem where the objective is to identify a solution to an underdetermined system of linear equations that is closest to a given prior. It has a substantial number of applications in diverse areas, such as network traffic engineering, medical image reconstruction, acoustics, astronomy, and many more. Unfortunately, most of the common approaches for solving SRP do not scale to large problem sizes. We propose a novel and scalable algorithm for solving this critical problem. Specifically, we make four major contributions. First, we propose a dual formulation of the problem and develop the DIRECT algorithm that is significantly more efficient than the state of the art. Second, we show how adapting database techniques developed for scalable similarity joins provides a substantial speedup over DIRECT. Third, we describe several practical techniques that allow our algorithm to scale---on a single machine---to settings that are orders of magnitude larger than previously studied. Finally, we use the database techniques of materialization and reuse to extend our result to dynamic settings where the input to the SRP changes. Extensive experiments on real-world and synthetic data confirm the efficiency, effectiveness, and scalability of our proposal.
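DIRECT and its database-style scaling techniques are not sketched here; but the basic signal reconstruction problem the abstract defines, finding the solution of an underdetermined linear system closest in Euclidean distance to a given prior, has a small closed form that makes the setup concrete: x* = prior + A^T (A A^T)^{-1} (b - A prior). The sketch below checks it on a random instance.

```python
# The basic SRP from the abstract: min ||x - prior||_2  s.t.  A x = b  (A is wide).
# Closed form: x* = prior + A^T (A A^T)^{-1} (b - A prior).
# Illustrates the problem setup only; not the paper's DIRECT algorithm or its
# similarity-join / materialization speedups.
import numpy as np

rng = np.random.default_rng(9)
m, n = 20, 200                      # underdetermined: many more unknowns than equations
A = rng.normal(size=(m, n))
b = rng.normal(size=m)
prior = rng.normal(size=n)

correction = A.T @ np.linalg.solve(A @ A.T, b - A @ prior)
x_star = prior + correction

print("constraint residual:", np.linalg.norm(A @ x_star - b))   # ~0
print("distance to prior:  ", np.linalg.norm(x_star - prior))
```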

Posted Content
TL;DR: In this paper, the authors proposed a two-tier edge computing based model that takes into account both the limited computing capability in cloudlets and the unstable network condition to the TMC.
Abstract: Traffic management systems capture tremendous video data and leverage advances in video processing to detect and monitor traffic incidents. The collected data are traditionally forwarded to the traffic management center (TMC) for in-depth analysis and may thus congest the network paths to the TMC. To alleviate such bottlenecks, we propose to utilize edge computing by equipping edge nodes that are close to cameras with computing resources (e.g. cloudlets). A cloudlet, with limited computing resources as compared to the TMC, provides limited video processing capabilities. In this paper, we focus on two common traffic monitoring tasks, congestion detection and speed detection, and propose a two-tier edge computing based model that takes into account both the limited computing capability in cloudlets and the unstable network condition to the TMC. Our solution utilizes two algorithms for each task, one implemented at the edge and the other at the TMC, which are designed with consideration of the different computing resources. While the TMC provides strong computation power, the video quality it receives depends on the underlying network conditions. On the other hand, the edge processes very high-quality video but with limited computing resources. Our model captures this trade-off. We evaluate the performance of the proposed two-tier model as well as the traffic monitoring algorithms via test-bed experiments under different weather as well as network conditions and show that our proposed hybrid edge-cloud solution outperforms both the cloud-only and edge-only solutions.

Proceedings Article
17 May 2021
TL;DR: In this paper, the authors propose and evaluate techniques to enable xApps in the RIC platform to be fault-tolerant while preserving high scalability, using state partitioning, partial replication, and fast re-route with role awareness to decrease the overhead.
Abstract: The Open Radio Access Network (O-RAN) Alliance is opening up traditionally closed RAN elements by defining a new open communication interface (E2) that allows the behavior of a RAN element to be customized and controlled in real time. The RAN Intelligent Controller (RIC for short) is a platform for implementing RAN control functions as microservices called xApps. In this work, we propose and evaluate techniques to enable xApps in the RIC platform to be fault-tolerant while preserving high scalability. The key premise of our work is that traditional replication techniques cannot sustain high throughput and low latency as required by RAN elements. We propose techniques that use state partitioning, partial replication, and fast re-route with role awareness to decrease the overhead. We implemented the fault tolerance techniques as a library, called RFT (RIC Fault Tolerance), that xApp writers can employ to easily make their xApps fault-tolerant. We present performance results which show that RFT meets latency and throughput requirements as the number of replicas increases.

Proceedings ArticleDOI
06 Jun 2021
TL;DR: The NOP (Network Operations Platform), as discussed in this paper, is an OpenROADM-based platform that interoperates with TransportPCE and other controllers, bringing together information about topology, events, and metrics.
Abstract: Key functionalities of the NOP (Network Operations Platform) are demonstrated with the latest multi-vendor OpenROADM equipment. Using open source packages, the NOP interoperates with TransportPCE and other controllers, bringing together information about topology, events, and metrics.


Proceedings Article
17 May 2021
TL;DR: In this article, the authors propose a compositional homing framework that allows service designers to easily mix and match homing requirements to create instances of the homing problem, enabling greater agility of service creation and evolution.
Abstract: Homing or placement of network elements on cloud infrastructure is a crucial step in the orchestration of network services, involving complex interactions with several cloud and network service controllers. Network Service Providers (NSPs) currently follow a traditional approach akin to existing VM and VNF placement techniques that involves hand-crafting service specific heuristics for homing network services. However, operational experience from a Tier-1 NSP shows that existing approaches do not scale well when network services evolve and their requirements change. Further, these approaches require extensive and repetitive querying of the various controllers (e.g., to check customer eligibility or capacity), placing significant burden on the resources at the controllers. We propose StepNet, a compositional homing framework that allows service designers to easily mix and match homing requirements to create instances of the homing problem, enabling greater agility of service creation and evolution. StepNet adopts an incremental approach to querying that provides near optimal homing solutions, while reducing the cumulative time spent by all of the data sources responding to queries for each homing request (query cost). Our evaluation with production traces from a Tier-1 NSP shows a reduction in query cost of 92% for over 50% of the requests.

Proceedings ArticleDOI
25 Jun 2021
TL;DR: In this paper, the authors proposed a novel approach for Cellular Ultra-light Probe-based available bandwidth estimation that seeks to operate at the cost point of Available Bandwidth techniques while correcting accuracy issues by leveraging the intrinsic aggregation properties of cellular scheduling, coupled with intelligent packet timing trains and the application of Bayesian probabilistic analysis.
Abstract: Cellular networks provide an essential connectivity foundation for a sizable number of mobile devices and applications, making it compelling to measure their performance in regard to user experience. Although cellular infrastructure provides low-level mechanisms for network-specific performance measurements, there is still a distinct gap in discerning the actual application-level or user-perceivable performance from such methods. Put simply, there is little substitute for direct sampling and testing to measure end-to-end performance. Unfortunately, most existing technologies often fall quite short. Achievable Throughput tests use bulk TCP downloads to provide an accurate but costly (time, bandwidth, energy) view of network performance. Conversely, Available Bandwidth techniques offer improved speed and low cost but are woefully inaccurate when faced with the typical dynamics of cellular networks. In this paper, we propose CUP, a novel approach for Cellular Ultra-light Probe-based available bandwidth estimation that seeks to operate at the cost point of Available Bandwidth techniques while correcting accuracy issues by leveraging the intrinsic aggregation properties of cellular scheduling, coupled with intelligent packet timing trains and the application of Bayesian probabilistic analysis. By keeping the costs low with reasonable accuracy, our approach enables scaling both with respect to time (longitudinally) and space (user device density). We construct a CUP prototype to evaluate our approach under various demanding real-world cellular environments (longitudinal, driving, multiple vendors) to demonstrate the efficacy of our approach.

Posted Content
TL;DR: In this paper, a low-complexity, closed-loop control system for Open-RAN architectures is proposed to support drone-sourced video streaming of a point of interest.
Abstract: Enabling high data-rate uplink cellular connectivity for drones is a challenging problem, since a flying drone has a higher likelihood of having line-of-sight propagation to base stations that terrestrial UEs normally do not have line-of-sight to. This may result in uplink inter-cell interference and uplink performance degradation for the neighboring ground UEs when drones transmit at high data-rates (e.g., video streaming). We address this problem from a cellular operator's standpoint to support drone-sourced video streaming of a point of interest. We propose a low-complexity, closed-loop control system for Open-RAN architectures that jointly optimizes the drone's location in space and its transmission directionality to support video streaming and minimize its uplink interference impact on the network. We prototype and experimentally evaluate the proposed control system on a dedicated outdoor multi-cell RAN testbed, which is the first measurement campaign of its kind. Furthermore, we perform a large-scale simulation assessment of the proposed control system using the actual cell deployment topologies and cell load profiles of a major US cellular carrier. The proposed Open-RAN control scheme achieves an average 19% network capacity gain over traditional BS-constrained control solutions and satisfies the application data-rate requirements of the drone (e.g., to stream an HD video).