
Showing papers by "Bell Labs" published in 2015


Posted Content
TL;DR: This paper designs a distributed computation offloading algorithm that can achieve a Nash equilibrium, derives the upper bound of the convergence time, and quantifies its efficiency ratio over the centralized optimal solutions in terms of two important performance metrics.
Abstract: Mobile-edge cloud computing is a new paradigm to provide cloud computing capabilities at the edge of pervasive radio access networks in close proximity to mobile users. In this paper, we first study the multi-user computation offloading problem for mobile-edge cloud computing in a multi-channel wireless interference environment. We show that it is NP-hard to compute a centralized optimal solution, and hence adopt a game theoretic approach for achieving efficient computation offloading in a distributed manner. We formulate the distributed computation offloading decision making problem among mobile device users as a multi-user computation offloading game. We analyze the structural property of the game and show that the game admits a Nash equilibrium and possesses the finite improvement property. We then design a distributed computation offloading algorithm that can achieve a Nash equilibrium, derive the upper bound of the convergence time, and quantify its efficiency ratio over the centralized optimal solutions in terms of two important performance metrics. We further extend our study to the scenario of multi-user computation offloading in the multi-channel wireless contention environment. Numerical results corroborate that the proposed algorithm can achieve superior computation offloading performance and scale well as the user size increases.
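
The finite improvement property the paper proves means that asynchronous best-response updates must terminate at a Nash equilibrium. Below is a minimal sketch of such best-response dynamics; the concrete cost model (a fixed local-computing cost versus an offloading cost that grows with the number of users sharing a channel) is an illustrative assumption, not the paper's exact formulation.

```python
# Hedged sketch: best-response dynamics for a toy multi-user offloading game.
import random

def offload_cost(channel_load):
    # Assumed congestion cost: more users on a channel -> more interference.
    return 2.0 + 1.5 * (channel_load - 1)

def best_response_dynamics(n_users=8, n_channels=3, local_cost=5.0, seed=0):
    rng = random.Random(seed)
    # Decision 0 = compute locally; 1..n_channels = offload via that channel.
    decision = [0] * n_users
    improved = True
    while improved:                      # finite improvement property => terminates
        improved = False
        for u in rng.sample(range(n_users), n_users):
            load = [sum(1 for d in decision if d == c) for c in range(n_channels + 1)]
            best, best_cost = 0, local_cost
            for c in range(1, n_channels + 1):
                extra = 0 if decision[u] == c else 1
                cost = offload_cost(load[c] + extra)
                if cost < best_cost:
                    best, best_cost = c, cost
            if best != decision[u]:
                decision[u] = best
                improved = True
    return decision                      # a pure Nash equilibrium of this toy game

print(best_response_dynamics())
```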

1,272 citations


Journal ArticleDOI
TL;DR: In this paper, the authors propose an efficient caching scheme in which the content placement is performed in a decentralized manner; despite this lack of coordination, the proposed scheme is nevertheless able to create coded-multicasting opportunities and achieves a rate close to that of the optimal centralized scheme.
Abstract: Replicating or caching popular content in memories distributed across the network is a technique to reduce peak network loads. Conventionally, the main performance gain of this caching was thought to result from making part of the requested data available closer to end-users. Instead, we recently showed that a much more significant gain can be achieved by using caches to create coded-multicasting opportunities, even for users with different demands, through coding across data streams. These coded-multicasting opportunities are enabled by careful content overlap at the various caches in the network, created by a central coordinating server. In many scenarios, such a central coordinating server may not be available, raising the question of whether this multicasting gain can still be achieved in a more decentralized setting. In this paper, we propose an efficient caching scheme, in which the content placement is performed in a decentralized manner. In other words, no coordination is required for the content placement. Despite this lack of coordination, the proposed scheme is nevertheless able to create coded-multicasting opportunities and achieves a rate close to that of the optimal centralized scheme.
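
A small simulation makes the mechanism concrete: each user independently caches a random fraction of every file's bits, and the server then serves each user subset with one XOR whose length is the largest constituent. The bit-level model and parameters below are toy assumptions for illustration, in the spirit of the decentralized scheme described above.

```python
# Hedged sketch of decentralized coded caching with random placement.
import itertools, random

def simulate(n_files=4, n_users=3, bits_per_file=1000, cache_frac=0.5, seed=1):
    rng = random.Random(seed)
    # Placement: user k caches each bit of each file independently w.p. cache_frac.
    cached = {(k, f): {b for b in range(bits_per_file) if rng.random() < cache_frac}
              for k in range(n_users) for f in range(n_files)}
    demand = [rng.randrange(n_files) for _ in range(n_users)]
    transmissions = 0
    # Delivery: for every user subset S, one XOR serves, for each k in S, the
    # bits of k's file cached by exactly the other users of S (and not by k).
    for size in range(n_users, 0, -1):
        for S in itertools.combinations(range(n_users), size):
            needed = []
            for k in S:
                others = set(S) - {k}
                bits = {b for b in range(bits_per_file)
                        if b not in cached[(k, demand[k])]
                        and all(b in cached[(j, demand[k])] for j in others)
                        and all(b not in cached[(j, demand[k])]
                                for j in set(range(n_users)) - set(S))}
                needed.append(len(bits))
            transmissions += max(needed)   # XOR length = longest constituent
    return transmissions / bits_per_file   # normalized delivery rate

print(simulate())
```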

752 citations


Journal ArticleDOI
TL;DR: Numerical and analytical results show that the maximal EE is achieved by a massive MIMO setup wherein hundreds of antennas are deployed to serve a relatively large number of users using ZF processing.
Abstract: Assume that a multi-user multiple-input multiple-output (MIMO) system is designed from scratch to uniformly cover a given area with maximal energy efficiency (EE). What are the optimal number of antennas, active users, and transmit power? The aim of this paper is to answer this fundamental question. We consider jointly the uplink and downlink with different processing schemes at the base station and propose a new realistic power consumption model that reveals how the above parameters affect the EE. Closed-form expressions for the EE-optimal value of each parameter, when the other two are fixed, are provided for zero-forcing (ZF) processing in single-cell scenarios. These expressions prove how the parameters interact. For example, in sharp contrast to common belief, the transmit power is found to increase (not to decrease) with the number of antennas. This implies that energy-efficient systems can operate in high signal-to-noise ratio regimes in which interference-suppressing signal processing is mandatory. Numerical and analytical results show that the maximal EE is achieved by a massive MIMO setup wherein hundreds of antennas are deployed to serve a relatively large number of users using ZF processing. The numerical results show the same behavior under imperfect channel state information and in symmetric multi-cell scenarios.
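
The qualitative trade-off is easy to reproduce numerically: the sum rate grows with the number of antennas M, but so does the consumed power, so EE peaks at a finite, large M. The power-model constants and the ZF-style SNR expression below are illustrative assumptions, not the paper's calibrated model or its closed-form optima.

```python
# Hedged numerical sketch: EE = sum rate / total consumed power, swept over M.
import math

def energy_efficiency(M, K=10, p_tx=1.0, bandwidth=20e6,
                      p_fixed=10.0, p_per_antenna=0.5, amp_eff=0.4):
    if M <= K:
        return 0.0
    # ZF on an i.i.d. Rayleigh channel: effective SNR grows like (M - K).
    snr = p_tx * (M - K) / K
    sum_rate = K * bandwidth * math.log2(1 + snr)               # bit/s
    total_power = p_fixed + M * p_per_antenna + p_tx / amp_eff  # Watt
    return sum_rate / total_power                               # bit/Joule

best_M = max(range(11, 400), key=energy_efficiency)
print(best_M, energy_efficiency(best_M) / 1e6, "Mbit/Joule")
```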

707 citations


Proceedings ArticleDOI
01 Nov 2015
TL;DR: Results indicate that on-device sensor and sensor-handling heterogeneities impair HAR performance significantly, and a novel clustering-based mitigation technique is proposed that is suitable for large-scale deployment of HAR, where heterogeneity of devices and their usage scenarios is intrinsic.
Abstract: The widespread presence of motion sensors on users' personal mobile devices has spawned a growing research interest in human activity recognition (HAR). However, when deployed at a large scale, e.g., on multiple devices, the performance of a HAR system is often significantly lower than in reported research results. This is due to variations in training and test device hardware and their operating system characteristics, among other factors. In this paper, we systematically investigate sensor-, device- and workload-specific heterogeneities using 36 smartphones and smartwatches, consisting of 13 different device models from four manufacturers. Furthermore, we conduct experiments with nine users and investigate popular feature representation and classification techniques in HAR research. Our results indicate that on-device sensor and sensor-handling heterogeneities impair HAR performance significantly. Moreover, the impairments vary significantly across devices and depend on the type of recognition technique used. We systematically evaluate the effect of mobile sensing heterogeneities on HAR and propose a novel clustering-based mitigation technique suitable for large-scale deployment of HAR, where heterogeneity of devices and their usage scenarios is intrinsic.
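
One plausible reading of the clustering-based mitigation is: group devices by simple sensor statistics, then train a separate activity classifier per cluster so heterogeneous devices do not pollute one global model. The sketch below illustrates that routing idea; the synthetic signatures, features, and labels are stand-in assumptions, not the paper's pipeline.

```python
# Hedged sketch: cluster devices by sensor statistics, one model per cluster.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Per-device "heterogeneity signature", e.g. [sampling-interval jitter, noise
# floor]; three latent device groups are simulated here, four devices each.
signatures = np.vstack([rng.normal(loc=g, scale=0.1, size=(4, 2))
                        for g in (0.0, 1.0, 2.0)])
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(signatures)

models = {}
for c in range(3):
    # One classifier per cluster, trained only on that cluster's windows
    # (stand-in features and activity labels here).
    X = rng.normal(size=(200, 6))
    y = rng.integers(0, 4, size=200)
    models[c] = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Inference: route a new device's data to the model of its nearest cluster.
c = kmeans.predict(signatures[:1])[0]
print("new device routed to per-cluster model", c)
```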

561 citations


Journal ArticleDOI
TL;DR: In this paper, the potential gains and limitations of network densification and spectral efficiency enhancement techniques in ultra-dense small cell deployments are analyzed, and the top ten challenges to be addressed to bring such deployments to reality are discussed.
Abstract: Today's heterogeneous networks, composed mostly of macrocells and indoor small cells, will not be able to meet the upcoming traffic demands. Indeed, it is forecast that at least a $100\times$ network capacity increase will be required to meet the traffic demands in 2020. As a result, vendors and operators are now looking at using every tool at hand to improve network capacity. In this epic campaign, three paradigms are noteworthy, i.e., network densification, the use of higher frequency bands, and spectral efficiency enhancement techniques. This paper aims at building further common understanding and analysing the potential gains and limitations of these three paradigms, together with the impact of idle mode capabilities at the small cells as well as the user equipment density and distribution in outdoor scenarios. Special attention is paid to network densification and its implications when transitioning to ultra-dense small cell deployments. Simulation results show that, compared to the baseline case with an average inter-site distance of 200 m and a 100 MHz bandwidth, network densification with an average inter-site distance of 35 m can increase the average UE throughput by $7.56\times$, while the use of the 10 GHz band with a 500 MHz bandwidth can further increase the network capacity up to $5\times$, resulting in an average of 1.27 Gbps per UE. The use of beamforming with up to 4 antennas per small cell BS lags behind, with average throughput gains around 30% and cell-edge throughput gains of up to $2\times$. Considering an extreme densification, an average inter-site distance of 5 m can increase the average and cell-edge UE throughput by $18\times$ and $48\times$, respectively. Our study also shows how network densification reduces multi-user diversity, and thus proportional-fair-like schedulers start losing their advantages with respect to round-robin ones. The energy efficiency of these ultra-dense small cell deployments is also analysed, indicating the benefits of energy harvesting approaches to make these deployments more energy-efficient. Finally, the top ten challenges to be addressed to bring ultra-dense small cell deployments to reality are also discussed.

515 citations


Proceedings ArticleDOI
24 Aug 2015
TL;DR: A thorough study of the NFV location problem is performed; it is shown to introduce a new type of optimization problem, and near-optimal approximation algorithms are provided, guaranteeing a placement with theoretically proven performance.
Abstract: Network Function Virtualization (NFV) is a new networking paradigm where network functions are executed on commodity servers located in small cloud nodes distributed across the network, and where software defined mechanisms are used to control the network flows. This paradigm is a major turning point in the evolution of networking, as it introduces high expectations for enhanced economical network services, as well as major technical challenges. In this paper, we address one of the main technical challenges in this domain: the actual placement of the virtual functions within the physical network. This placement has a critical impact on the performance of the network, as well as on its reliability and operation cost. We perform a thorough study of the NFV location problem, show that it introduces a new type of optimization problem, and provide near-optimal approximation algorithms guaranteeing a placement with theoretically proven performance. The performance of the solution is evaluated with respect to two measures: the distance cost between the clients and the virtual functions by which they are served, as well as the setup costs of these functions. We provide bi-criteria solutions reaching constant approximation factors with respect to the overall performance, and adhering to the capacity constraints of the networking infrastructure by a constant factor as well. Finally, using extensive simulations, we show that the proposed algorithms perform well in many realistic scenarios.
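
To make the two cost measures tangible, here is a minimal greedy heuristic that trades client distance cost against instance setup cost under per-node capacities. It is only an illustration of the problem structure; the paper's contribution is approximation algorithms with proven guarantees, which this greedy sketch does not provide.

```python
# Hedged sketch of the NFV placement trade-off: distance + setup cost, capacitated.
def greedy_placement(dist, setup_cost, capacity, clients):
    """dist[v][c]: distance from node v to client c; one function copy per node."""
    open_nodes, assignment = set(), {}
    for c in clients:                              # assign clients one by one
        best = None
        for v in range(len(dist)):
            if capacity[v] == 0:
                continue
            cost = dist[v][c] + (setup_cost[v] if v not in open_nodes else 0)
            if best is None or cost < best[0]:
                best = (cost, v)
        _, v = best
        open_nodes.add(v)                          # pay setup only once per node
        capacity[v] -= 1
        assignment[c] = v
    return open_nodes, assignment

dist = [[1, 4, 6], [5, 1, 2], [7, 3, 1]]           # 3 candidate nodes, 3 clients
print(greedy_placement(dist, setup_cost=[3, 3, 3], capacity=[2, 1, 1], clients=[0, 1, 2]))
```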

509 citations


Journal ArticleDOI
Thomas L. Marzetta1
TL;DR: Massive MIMO is a brand new technology that has yet to be reduced to practice, but its principles of operation are well understood, and surprisingly simple to elucidate.
Abstract: Demand for wireless throughput, both mobile and fixed, will always increase. One can anticipate that, in five or ten years, millions of augmented reality users in a large city will want to transmit and receive 3D personal high-definition video more or less continuously, say 100 megabits per second per user in each direction. Massive MIMO, also called Large-Scale Antenna Systems, is a promising candidate technology for meeting this demand. Fifty-fold or greater spectral efficiency improvements over fourth generation (4G) technology are frequently mentioned. A multiplicity of physically small, individually controlled antennas performs aggressive multiplexing/demultiplexing for all active users, utilizing directly measured channel characteristics. Unlike today's Point-to-Point MIMO, by leveraging time-division duplexing (TDD), Massive MIMO is scalable to any desired degree with respect to the number of service antennas. Adding more antennas is always beneficial for increased throughput, reduced radiated power, uniformly great service everywhere in the cell, and greater simplicity in signal processing. Massive MIMO is a brand new technology that has yet to be reduced to practice. Notwithstanding, its principles of operation are well understood, and surprisingly simple to elucidate.
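
The core mechanism (measure channels via TDD reciprocity, then multiplex many users with simple per-antenna processing) fits in a few lines of numpy. The sketch below uses conjugate (matched) beamforming and an i.i.d. Rayleigh channel as illustrative assumptions to show the array gain growing with the antenna count M.

```python
# Hedged numpy sketch: conjugate beamforming SINR improves as M grows.
import numpy as np

rng = np.random.default_rng(0)
K = 8                                               # active users
for M in (16, 64, 256):                             # service antennas
    H = (rng.normal(size=(M, K)) + 1j * rng.normal(size=(M, K))) / np.sqrt(2)
    W = H.conj() / np.linalg.norm(H, axis=0)        # conjugate beamforming, unit power
    G = H.T @ W                                     # effective K x K channel
    signal = np.abs(np.diag(G)) ** 2
    interference = np.sum(np.abs(G) ** 2, axis=1) - signal
    sinr = signal / (interference + 1.0)            # unit-variance noise assumed
    print(M, "antennas -> median SINR %.1f dB" % (10 * np.log10(np.median(sinr))))
```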

486 citations


Journal ArticleDOI
01 Dec 2015
TL;DR: The final result is a novel Class A classifier, general enough to thwart overfitting, lightweight thanks to its use of the less costly features, and still able to correctly classify more than 95% of the accounts of the original training set.
Abstract: Fake followers are those Twitter accounts specifically created to inflate the number of followers of a target account. Fake followers are dangerous for the social platform and beyond, since they may alter concepts like popularity and influence in the Twittersphere, hence impacting economy, politics, and society. In this paper, we contribute along different dimensions. First, we review some of the most relevant existing features and rules (proposed by Academia and Media) for anomalous Twitter accounts detection. Second, we create a baseline dataset of verified human and fake follower accounts. This baseline dataset is publicly available to the scientific community. Then, we exploit the baseline dataset to train a set of machine-learning classifiers built over the reviewed rules and features. Our results show that most of the rules proposed by Media provide unsatisfactory performance in revealing fake followers, while features proposed in the past by Academia for spam detection provide good results. Building on the most promising features, we revise the classifiers both in terms of reduction of overfitting and cost for gathering the data needed to compute the features. The final result is a novel Class A classifier, general enough to thwart overfitting, lightweight thanks to its use of the less costly features, and still able to correctly classify more than 95% of the accounts of the original training set. We ultimately perform an information fusion-based sensitivity analysis to assess the global sensitivity of each of the features employed by the classifier. The findings reported in this paper, besides being supported by a thorough experimental methodology and interesting in their own right, also pave the way for further investigation of the novel issue of fake Twitter followers.
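
The overall recipe (cheap profile features feeding a deliberately constrained classifier) is easy to sketch. The features and synthetic label distributions below are illustrative assumptions, not the paper's feature set; the shallow tree depth stands in for the paper's overfitting countermeasures.

```python
# Hedged sketch: lightweight fake-follower classifier on cheap profile features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 2000
# Cheap features: [followers/friends ratio, tweet count, account age (days),
# has bio (0/1)]; here fakes are assumed to tweet little and be young accounts.
fake = np.column_stack([rng.exponential(0.05, n), rng.poisson(3, n),
                        rng.exponential(60, n), rng.integers(0, 2, n)])
real = np.column_stack([rng.exponential(1.0, n), rng.poisson(400, n),
                        rng.exponential(900, n), rng.integers(0, 2, n)])
X = np.vstack([fake, real])
y = np.array([1] * n + [0] * n)                    # 1 = fake follower
# Shallow trees: a crude stand-in for the paper's anti-overfitting revisions.
clf = RandomForestClassifier(n_estimators=100, max_depth=4, random_state=0)
print("cv accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```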

340 citations


Proceedings ArticleDOI
07 Sep 2015
TL;DR: This paper presents DeepEar -- the first mobile audio sensing framework built from coupled Deep Neural Networks (DNNs) that simultaneously perform common audio sensing tasks -- and shows that DeepEar is feasible for smartphones by building a cloud-free, DSP-based prototype that runs continuously, using only 6% of the smartphone's battery daily.
Abstract: Microphones are remarkably powerful sensors of human behavior and context. However, audio sensing is highly susceptible to wild fluctuations in accuracy when used in diverse acoustic environments (such as bedrooms, vehicles, or cafes) that users encounter on a daily basis. Towards addressing this challenge, we turn to the field of deep learning, an area of machine learning that has radically changed related audio modeling domains like speech recognition. In this paper, we present DeepEar -- the first mobile audio sensing framework built from coupled Deep Neural Networks (DNNs) that simultaneously perform common audio sensing tasks. We train DeepEar with a large-scale dataset including unlabeled data from 168 place visits. The resulting learned model, involving 2.3M parameters, enables DeepEar to significantly increase inference robustness to background noise beyond conventional approaches present in mobile devices. Finally, we show DeepEar is feasible for smartphones by building a cloud-free DSP-based prototype that runs continuously, using only 6% of the smartphone's battery daily.
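
The "coupled DNNs" idea, several audio tasks sharing one acoustic representation with small task-specific heads, can be sketched as a multi-head network. Layer sizes, task names, and the use of PyTorch are illustrative assumptions, not the paper's exact architecture.

```python
# Hedged sketch: shared trunk + per-task heads, in the spirit of coupled DNNs.
import torch
import torch.nn as nn

class CoupledAudioDNN(nn.Module):
    def __init__(self, n_features=640,
                 tasks=(("ambient_scene", 19), ("stress", 2), ("emotion", 5))):
        super().__init__()
        self.shared = nn.Sequential(               # layers shared by all tasks
            nn.Linear(n_features, 512), nn.ReLU(),
            nn.Linear(512, 256), nn.ReLU())
        self.heads = nn.ModuleDict(                # one small head per task
            {name: nn.Linear(256, n_cls) for name, n_cls in tasks})

    def forward(self, x):
        h = self.shared(x)                         # one acoustic representation
        return {name: head(h) for name, head in self.heads.items()}

model = CoupledAudioDNN()
logits = model(torch.randn(8, 640))                # batch of 8 audio windows
print({k: v.shape for k, v in logits.items()})
```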

329 citations


Proceedings ArticleDOI
11 May 2015
TL;DR: In this article, a closed-form expression for the achievable rate is derived for the downlink of a cell-free massive MIMO system, where a very large number of distributed access points (APs) simultaneously serve a much smaller number of users.
Abstract: We consider the downlink of Cell-Free Massive MIMO systems, where a very large number of distributed access points (APs) simultaneously serve a much smaller number of users. Each AP uses local channel estimates obtained from received uplink pilots and applies conjugate beamforming to transmit data to the users. We derive a closed-form expression for the achievable rate. This expression enables us to design an optimal max-min power control scheme that gives equal quality of service to all users. We further compare the performance of the Cell-Free Massive MIMO system to that of a conventional small-cell network and show that the throughput of the Cell-Free system is much more concentrated around its median compared to that of the small-cell system. The Cell-Free Massive MIMO system can provide an almost 20-fold increase in 95%-likely per-user throughput, compared with the small-cell system. Furthermore, Cell-Free systems are more robust to shadow fading correlation than small-cell systems.
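
Max-min (egalitarian) power control of the kind described here is typically solved by bisecting over a common target SINR and testing feasibility. The sketch below does exactly that with a standard interference-function iteration; the scalar gain model is a toy stand-in for the paper's closed-form cell-free SINR expression.

```python
# Hedged sketch: max-min power control via bisection on a common SINR target.
import numpy as np

def feasible(t, G, noise=1.0, p_max=1.0, iters=200):
    K = G.shape[0]
    p = np.zeros(K)
    for _ in range(iters):                       # standard fixed-point iteration
        interf = G @ p - np.diag(G) * p          # received interference per user
        p = t * (noise + interf) / np.diag(G)
        if p.max() > p_max:                      # power budget violated
            return False, p
    return True, p

rng = np.random.default_rng(0)
K = 6
G = rng.exponential(1.0, (K, K)) + np.diag(10.0 * np.ones(K))  # strong direct gains
lo, hi = 0.0, 50.0
for _ in range(40):                              # bisection on the common SINR t
    mid = (lo + hi) / 2
    ok, _ = feasible(mid, G)
    lo, hi = (mid, hi) if ok else (lo, mid)
print("max-min SINR ~ %.2f (%.1f dB)" % (lo, 10 * np.log10(lo)))
```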

250 citations


Proceedings ArticleDOI
01 Nov 2015
TL;DR: The aim of this investigation is to begin to build knowledge of the performance characteristics, resource requirements, and execution bottlenecks of deep learning models when used to recognize categories of behavior and context.
Abstract: Detecting and reacting to user behavior and ambient context are core elements of many emerging mobile sensing and Internet-of-Things (IoT) applications. However, extracting accurate inferences from raw sensor data is challenging within the noisy and complex environments where these systems are deployed. Deep learning is one of the most promising approaches for overcoming this challenge and achieving more robust and reliable inference. Techniques developed within this rapidly evolving area of machine learning are now state-of-the-art for many inference tasks (such as audio sensing and computer vision) commonly needed by IoT and wearable applications. But deep learning algorithms are currently seldom used on mobile/IoT-class hardware because they often impose debilitating levels of system overhead (e.g., memory, computation and energy). Efforts to address this barrier to deep learning adoption are slowed by our lack of a systematic understanding of how these algorithms behave at inference time on resource constrained hardware. In this paper, we present the first -- albeit preliminary -- measurement study of common deep learning models (such as Convolutional Neural Networks and Deep Neural Networks) on representative mobile and embedded platforms. The aim of this investigation is to begin to build knowledge of the performance characteristics, resource requirements and the execution bottlenecks for deep learning models when used to recognize categories of behavior and context. The results and insights of this study lay an empirical foundation for the development of optimization methods and execution environments that enable deep learning to be more readily integrated into next-generation IoT, smartphone, and wearable systems.
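
The kind of micro-measurement such a study performs can be reproduced with a simple timing harness: warm up, time the forward pass, and report latency and parameter count per model. The two small models and the choice of PyTorch below are stand-in assumptions, not the paper's benchmark set.

```python
# Hedged sketch: time inference latency of small deep models on the local device.
import time
import torch
import torch.nn as nn

models = {
    "dnn": (nn.Sequential(nn.Linear(640, 512), nn.ReLU(), nn.Linear(512, 10)),
            torch.randn(1, 640)),
    "cnn": (nn.Sequential(nn.Conv2d(1, 8, 3), nn.ReLU(), nn.Flatten(),
                          nn.Linear(8 * 26 * 26, 10)),
            torch.randn(1, 1, 28, 28)),
}
for name, (model, x) in models.items():
    model.eval()
    with torch.no_grad():
        for _ in range(10):                       # warm-up: skip one-off costs
            model(x)
        t0 = time.perf_counter()
        for _ in range(100):
            model(x)
        ms = (time.perf_counter() - t0) / 100 * 1e3
    params = sum(p.numel() for p in model.parameters())
    print(f"{name}: {ms:.2f} ms/inference, {params} parameters")
```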

Journal ArticleDOI
TL;DR: This study reveals that using angle diversity to build VLC-MIMO systems is very promising: the proposed receivers' channel capacities and BER performance are quite close to those of the link-blocked receiver.
Abstract: This paper proposes two novel and practical designs of angle diversity receivers to achieve multiple-input-multiple-output (MIMO) capacity for indoor visible light communications (VLC). Both designs are easy to construct and suitable for small mobile devices. By using light emitting diodes for both illumination and data transmission, our receiver designs consist of multiple photodetectors (PDs), which are oriented with different inclination angles to achieve high-rank MIMO channels and can be closely packed without the requirement of spatial separation. Due to the orientations of the PDs, the proposed receiver designs are named pyramid receiver (PR) and hemispheric receiver (HR). In a PR, the normal vectors of PDs are chosen the same as the normal vectors of the triangle faces of a pyramid with an equilateral $N$-gon base. On the other hand, the idea behind HR is to evenly distribute the PDs on a hemisphere. Through analytical investigation, simulations and experiments, the channel capacity and bit-error-rate (BER) performance under various settings are presented to show that our receiver designs are practical and promising for enabling VLC-MIMO. In comparison to the link-blocked receiver, our designs do not require any hardware adjustment at the receiver from location to location so that they can support user mobility. Besides, their channel capacities and BER performance are quite close to those of the link-blocked receiver. Meanwhile, they substantially outperform the spatially-separated receiver. This study reveals that using angle diversity to build VLC-MIMO systems is very promising.
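
The geometry of the pyramid receiver is simple to check numerically: tilt the PD normals like pyramid faces and the LOS channel matrix becomes well-conditioned. Room geometry, tilt angle, and the order-1 Lambertian model below are illustrative assumptions, not the paper's experimental setup.

```python
# Hedged geometry sketch: tilted PD normals de-correlate the VLC-MIMO channel.
import numpy as np

def pd_normals(n_pd=4, tilt_deg=35.0):
    t = np.radians(tilt_deg)
    az = 2 * np.pi * np.arange(n_pd) / n_pd        # evenly spaced azimuths
    return np.column_stack([np.sin(t) * np.cos(az),
                            np.sin(t) * np.sin(az),
                            np.cos(t) * np.ones(n_pd)])

leds = np.array([[1, 1, 3], [-1, 1, 3], [-1, -1, 3], [1, -1, 3]], float)
rx = np.array([0.3, 0.2, 0.8])                     # receiver position
H = np.zeros((4, 4))
for j, led in enumerate(leds):
    v = led - rx                                   # vector PD -> LED
    d = np.linalg.norm(v)
    cos_irr = v[2] / d                             # LED points straight down
    for i, n in enumerate(pd_normals()):
        cos_inc = max(np.dot(n, v / d), 0.0)       # incidence on tilted PD
        H[i, j] = cos_irr * cos_inc / d**2         # order-1 Lambertian LOS gain
print("rank:", np.linalg.matrix_rank(H), " cond: %.1f" % np.linalg.cond(H))
```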

Posted Content
TL;DR: The random demand setting is considered and a comprehensive characterization of the order-optimal rate for all regimes of the system parameters is provided, as well as an explicit placement and delivery scheme achieving order-optimal rates.
Abstract: We consider the canonical shared link network formed by a source node, hosting a library of $m$ information messages (files), connected via a noiseless common link to $n$ destination nodes (users), each with a cache of size $M$ files. Users request files at random and independently, according to a given a-priori demand distribution $\mathbf{q}$. A coding scheme for this network consists of a caching placement (i.e., a mapping of the library files into the user caches) and a delivery scheme (i.e., a mapping of the library files and user demands into a common multicast codeword) such that, after the codeword transmission, all users can retrieve their requested file. The rate of the scheme is defined as the average codeword length normalized with respect to the length of one file, where the expectation is taken over the random user demands. For the same shared link network, in the case of deterministic demands, the optimal min-max rate has been characterized within a uniform bound, independent of the network parameters. In particular, fractional caching (i.e., storing file segments) and using linear network coding has been shown to provide a min-max rate reduction proportional to 1/M with respect to standard schemes such as unicasting or "naive" uncoded multicasting. The case of random demands was previously considered by applying the same order-optimal min-max scheme separately within groups of files requested with similar probability. However, no order-optimal guarantee was provided for random demands under the average rate performance criterion. In this paper, we consider the random demand setting and provide general achievability and converse results. In particular, we consider a family of schemes that combine random fractional caching according to a probability distribution $\mathbf{p}$ that depends on the demand distribution $\mathbf{q}$, with a linear coded delivery scheme based on ...

Proceedings ArticleDOI
14 Jun 2015
TL;DR: In this article, the authors consider an interference channel in which each transmitter is equipped with an isolated cache memory, and the objective is to design both the placement and the delivery phases to maximize the rate in the delivery phase in response to any possible user demands.
Abstract: Over the past decade, the bulk of wireless traffic has shifted from speech to content. This shift creates the opportunity to cache part of the content in memories closer to the end users, for example in base stations. Most of the prior literature focuses on the reduction of load in the backhaul and core networks due to caching, i.e., on the benefits caching offers for the wireline communication link between the origin server and the caches. In this paper, we are instead interested in the benefits caching can offer for the wireless communication link between the caches and the end users. To quantify the gains of caching for this wireless link, we consider an interference channel in which each transmitter is equipped with an isolated cache memory. Communication takes place in two phases, a content placement phase followed by a content delivery phase. The objective is to design both the placement and the delivery phases to maximize the rate in the delivery phase in response to any possible user demands. Focusing on the three-user case, we show that through careful joint design of these phases, we can reap three distinct benefits from caching: a load balancing gain, an interference cancellation gain, and an interference alignment gain. In our proposed scheme, load balancing is achieved through a specific file splitting and placement, creating a particular pattern of content overlap at the caches. This overlap allows us to implement interference cancellation. Further, it allows us to construct several virtual transmitters, each responsible for a part of the requested content, which increases interference alignment possibilities.
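
The placement idea (split each file into parts and store each part at a chosen subset of transmitter caches, so the deliberate overlap enables cooperative transmission) can be sketched concretely. The specific split below is illustrative, not the paper's exact scheme.

```python
# Hedged sketch: file splitting and overlapping placement at transmitter caches.
from itertools import combinations

transmitters = (0, 1, 2)
files = ["A", "B", "C"]
caches = {t: [] for t in transmitters}

for f in files:
    # Part f_S is stored at every transmitter in subset S: singleton subsets
    # spread load; pairs create the overlap usable for cancellation/alignment.
    for S in list(combinations(transmitters, 1)) + list(combinations(transmitters, 2)):
        part = f"{f}_{{{''.join(map(str, S))}}}"
        for t in S:
            caches[t].append(part)

for t, parts in caches.items():
    print(f"Tx{t} cache:", parts)
# Any part stored at >= 2 transmitters can be sent cooperatively, acting as a
# virtual multi-antenna transmitter for that part of the content.
```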

Journal ArticleDOI
Thomas Pfeiffer1
TL;DR: An overview is given of currently available optical fronthaul technologies and of recently started activities toward more efficient and scalable solutions, and an outlook is given on which 5G-specific service characteristics may further impact future backhaul, midhaul, and fronthaul networks.
Abstract: Centralized processing is expected to bring about substantial benefits for wireless networks both on the technical side and on the economic side. While this concept is considered an important part of future radio access network architectures, it is more and more recognized that the current approach to fronthauling by employing the Common Public Radio Interface protocol will be inefficient for large-scale network deployments in many respects, and particularly for the new radio network generation 5G. In this paper, an overview is given of currently available optical fronthaul technologies and of recently started activities toward more efficient and scalable solutions, and finally an outlook is given on which 5G-specific service characteristics may further impact future backhaul, midhaul, and fronthaul networks.

Proceedings ArticleDOI
22 Mar 2015
TL;DR: 15-mode photonic lanterns enable low-loss coupling into and out of the fiber, and a time-multiplexed coherent receiver facilitates measurement of all 30 signals.
Abstract: We transmit over all 30 spatial and polarization modes of a 22.8-km multimode fiber. 15-mode photonic lanterns enable low-loss coupling into and out of the fiber, and a time-multiplexed coherent receiver facilitates measurement of all 30 signals.

Proceedings Article
01 Sep 2015
TL;DR: Coded MapReduce exploits the repetitive mapping of data blocks at different servers to create coded multicasting opportunities in the shuffling phase, cutting down the total communication load by a multiplicative factor.
Abstract: MapReduce is a commonly used framework for executing data-intensive tasks on distributed server clusters. We present “Coded MapReduce”, a new framework that enables and exploits a particular form of coding to significantly reduce the inter-server communication load of MapReduce. In particular, Coded MapReduce exploits the repetitive mapping of data blocks at different servers to create coded multicasting opportunities in the shuffling phase, cutting down the total communication load by a multiplicative factor that grows linearly with the number of servers in the cluster. We also analyze the tradeoff between the “computation load” and the “communication load” of the Coded MapReduce.
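
The shuffle-phase trick is concrete enough to demonstrate in a few lines: because each block is mapped at two servers, one XOR multicast can deliver two different intermediate values at once, each receiver cancelling the part it already computed locally. The 3-server layout below is an illustrative toy, not the paper's general construction.

```python
# Hedged toy of the Coded MapReduce shuffle with redundant mapping (r = 2).
def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

blocks_at = {1: {1, 2}, 2: {2, 3}, 3: {1, 3}}      # server -> mapped blocks

def v(key, block):
    # Stand-in for the intermediate value for reduce `key` from `block`.
    return bytes(f"v({key},{block})", "ascii")

# Server 2 needs v(2,1); server 3 needs v(3,2). Server 1 mapped blocks 1 and 2,
# so it knows both values and multicasts a single XOR:
coded = xor(v(2, 1), v(3, 2))

# Server 2 mapped block 2, so it recomputes v(3,2) and peels it off:
assert xor(coded, v(3, 2)) == v(2, 1)
# Server 3 mapped block 1, so it recomputes v(2,1) and peels it off:
assert xor(coded, v(2, 1)) == v(3, 2)
print("one multicast delivered two intermediate values")
```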

Journal ArticleDOI
TL;DR: In these experiments, InGaAsP nanorods emitting at ∼200 THz optical frequency show a spontaneous emission intensity enhancement of 35×, corresponding to a spontaneous emission rate speedup of ∼115×, for an antenna gap spacing d = 40 nm; classical antenna theory predicts a speedup proportional to $1/d^2$.
Abstract: Atoms and molecules are too small to act as efficient antennas for their own emission wavelengths. By providing an external optical antenna, the balance can be shifted; spontaneous emission could become faster than stimulated emission, which is handicapped by practically achievable pump intensities. In our experiments, InGaAsP nanorods emitting at ∼200 THz optical frequency show a spontaneous emission intensity enhancement of 35×, corresponding to a spontaneous emission rate speedup ∼115×, for antenna gap spacing, d = 40 nm. Classical antenna theory predicts ∼2,500× spontaneous emission speedup at d ∼ 10 nm, proportional to $1/d^2$. Unfortunately, at d < 10 nm, antenna efficiency drops below 50%, owing to optical spreading resistance, exacerbated by the anomalous skin effect (electron surface collisions). Quantum dipole oscillations in the emitter excited state produce an optical ac equivalent circuit current, $I_0 = q\omega|x_0|/d$, feeding the antenna-enhanced spontaneous emission, where $q|x_0|$ is the dipole matrix element. Despite the quantum-mechanical origin of the drive current, antenna theory makes no reference to the Purcell effect nor to local density of states models. Moreover, plasmonic effects are minor at 200 THz, producing only a small shift of antenna resonance frequency.
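
A quick back-of-envelope check of the stated $1/d^2$ scaling: anchoring the curve at the measured ∼115× speedup at d = 40 nm extrapolates to the same order of magnitude as the ∼2,500× that full antenna theory predicts near d ∼ 10 nm (before efficiency losses). This is a consistency sketch only, ignoring prefactor details.

```python
# Hedged back-of-envelope: antenna speedup scaling as 1/d^2 with gap spacing d.
measured_speedup, measured_d = 115.0, 40.0      # from the experiment, d in nm
for d in (40.0, 20.0, 10.0):
    speedup = measured_speedup * (measured_d / d) ** 2
    print(f"d = {d:4.0f} nm -> predicted speedup ~ {speedup:,.0f}x")
# d = 10 nm gives 115 * 16 = 1,840x, the same order as the ~2,500x quoted from
# full antenna theory; the simple 1/d^2 law omits the theory's prefactors.
```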

Posted Content
TL;DR: In this article, the authors consider an interference channel in which each transmitter is equipped with an isolated cache memory, and they show that through careful joint design of the content placement and delivery phases, they can reap three distinct benefits from caching: a load balancing gain, an interference cancellation gain, and an interference alignment gain.
Abstract: Over the past decade, the bulk of wireless traffic has shifted from speech to content. This shift creates the opportunity to cache part of the content in memories closer to the end users, for example in base stations. Most of the prior literature focuses on the reduction of load in the backhaul and core networks due to caching, i.e., on the benefits caching offers for the wireline communication link between the origin server and the caches. In this paper, we are instead interested in the benefits caching can offer for the wireless communication link between the caches and the end users. To quantify the gains of caching for this wireless link, we consider an interference channel in which each transmitter is equipped with an isolated cache memory. Communication takes place in two phases, a content placement phase followed by a content delivery phase. The objective is to design both the placement and the delivery phases to maximize the rate in the delivery phase in response to any possible user demands. Focusing on the three-user case, we show that through careful joint design of these phases, we can reap three distinct benefits from caching: a load balancing gain, an interference cancellation gain, and an interference alignment gain. In our proposed scheme, load balancing is achieved through a specific file splitting and placement, producing a particular pattern of content overlap at the caches. This overlap allows us to implement interference cancellation. Further, it allows us to create several virtual transmitters, each transmitting a part of the requested content, which increases interference-alignment possibilities.

Proceedings ArticleDOI
01 Nov 2015
TL;DR: This paper defines cell-free systems and analyzes algorithms for power optimization and linear precoding in Cell-Free Massive MIMO systems, which can yield more than ten-fold improvement in terms of 5%-outage rate.
Abstract: Cell-Free Massive MIMO systems comprise a large number of distributed, low cost, and low power access point antennas, connected to a network controller. The number of antennas is significantly larger than the number of users. The system is not partitioned into cells and each user is served by all access point antennas simultaneously. In this paper, we define cell-free systems and analyze algorithms for power optimization and linear precoding. Compared with the conventional small-cell scheme, Cell-Free Massive MIMO can yield more than ten-fold improvement in terms of 5%-outage rate.

Proceedings ArticleDOI
24 Aug 2015
TL;DR: This paper develops a traffic matrix oblivious algorithm for robust segment routing in the offline case and a competitive algorithm for online segment routing and shows that both these algorithms work well in practice.
Abstract: Segment Routing is a proposed IETF protocol to improve traffic engineering and online route selection in IP networks. The key idea in segment routing is to break up the routing path into segments in order to enable better network utilization. Segment routing also enables finer control of the routing paths and can be used to route traffic through middleboxes. This paper considers the problem of determining the optimal parameters for segment routing in the offline and online cases. We develop a traffic matrix oblivious algorithm for robust segment routing in the offline case and a competitive algorithm for online segment routing. We also show that both these algorithms work well in practice.
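
The core 2-segment decision (force traffic through an intermediate node, with shortest-path routing on each segment, chosen to keep the worst link utilization low) fits in a short sketch. The toy graph, single demand, and pre-existing load below are illustrative; the paper optimizes segment choices for whole, and uncertain, traffic matrices.

```python
# Hedged sketch: pick the 2-segment intermediate node minimizing max utilization.
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([("s", "a", 1), ("a", "t", 1),
                           ("s", "b", 2), ("b", "t", 2)])
capacity = {frozenset(e): 10.0 for e in G.edges()}
load = {frozenset(e): 0.0 for e in G.edges()}
load[frozenset(("s", "a"))] = 9.0                  # pre-existing traffic

def max_util_after(middle, demand=5.0):
    # Segment 1: s -> middle, segment 2: middle -> t, each on shortest paths.
    path = (nx.shortest_path(G, "s", middle, weight="weight")
            + nx.shortest_path(G, middle, "t", weight="weight")[1:])
    extra = {frozenset((u, w)): demand for u, w in zip(path, path[1:])}
    return max((load[e] + extra.get(e, 0.0)) / capacity[e] for e in load)

best = min(G.nodes, key=max_util_after)
print("route the demand via segment node:", best)  # steers around the loaded s-a link
```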

Journal ArticleDOI
TL;DR: The use of multimode graded-index fibers in the taper can significantly relax the adiabaticity requirement in comparison with using single-mode fibers.
Abstract: We demonstrate the first all-fiber mode-group-selective photonic lantern using multimode graded-index fibers. Mode selectivity for mode groups LP01, LP11 and LP21+LP02 is 20 dB, 10 dB and 7 dB, respectively. The insertion loss when butt-coupled to multimode graded-index fiber is below 0.6 dB. The use of the multimode graded-index fibers in the taper can significantly relax the adiabaticity requirement.

Journal ArticleDOI
TL;DR: It is shown how a disruptive force in mobile computing can be created by extending today's unmodified cloud to a second level consisting of self-managed data centers with no hard state called cloudlets, located at the edge of the Internet.
Abstract: We show how a disruptive force in mobile computing can be created by extending today's unmodified cloud to a second level consisting of self-managed data centers with no hard state, called cloudlets. These are located at the edge of the Internet, just one wireless hop away from associated mobile devices. By leveraging low-latency offload, cloudlets enable a new class of real-time cognitive assistive applications on wearable devices. By processing high data rate sensor inputs such as video close to the point of capture, cloudlets can reduce ingress bandwidth demand into the cloud. By serving as proxies for distant cloud services that are unavailable due to failures or cyberattacks, cloudlets can improve robustness and availability. We caution that proprietary software ecosystems surrounding cloudlets will lead to a fragmented marketplace that fails to realize the full business potential of mobile-cloud convergence. Instead, we urge that the software ecosystem surrounding cloudlets be based on the same principles of openness and end-to-end design that have made the Internet so successful.

Posted Content
TL;DR: The Cell-Free Massive MIMO system can provide an almost 20-fold increase in 95%-likely per-user throughput, compared with the small-cell system, and is more robust to shadow fading correlation than small-cell systems.
Abstract: We consider the downlink of Cell-Free Massive MIMO systems, where a very large number of distributed access points (APs) simultaneously serve a much smaller number of users. Each AP uses local channel estimates obtained from received uplink pilots and applies conjugate beamforming to transmit data to the users. We derive a closed-form expression for the achievable rate. This expression enables us to design an optimal max-min power control scheme that gives equal quality of service to all users. We further compare the performance of the Cell-Free Massive MIMO system to that of a conventional small-cell network and show that the throughput of the Cell-Free system is much more concentrated around its median compared to that of the small-cell system. The Cell-Free Massive MIMO system can provide an almost 20-fold increase in 95%-likely per-user throughput, compared with the small-cell system. Furthermore, Cell-Free systems are more robust to shadow fading correlation than small-cell systems.

Proceedings ArticleDOI
17 Jun 2015
TL;DR: The authors' measurements show that control actions, such as rule installation, have surprisingly high latency, due to both software implementation inefficiencies and fundamental traits of switch hardware.
Abstract: Timely interaction between an SDN controller and switches is crucial to many SDN applications---e.g., fast rerouting during link failure and fine-grained traffic engineering in data centers. However, it is not well understood how the control plane in SDN switches impacts these applications. To this end, we conduct a comprehensive measurement study using four types of production SDN switches. Our measurements show that control actions, such as rule installation, have surprisingly high latency, due to both software implementation inefficiencies and fundamental traits of switch hardware.

Proceedings ArticleDOI
01 Sep 2015
TL;DR: This work implemented a flexible transmission system operating at adjustable data rate and fixed bandwidth, baud rate, constellation, and overhead using probabilistic shaping, and demonstrated in a transmission experiment up to a 15% capacity and 43% reach increase versus 200 Gbit/s 16-QAM.
Abstract: We implemented a flexible transmission system operating at adjustable data rate and fixed bandwidth, baud rate, constellation, and overhead using probabilistic shaping. We demonstrated in a transmission experiment up to a 15% capacity and 43% reach increase versus 200 Gbit/s 16-QAM.
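
The shaping knob works as follows: keep the 16-QAM constellation and symbol rate fixed, and tune a Maxwell-Boltzmann distribution over the constellation points so the source entropy, and hence the net data rate, varies continuously. The sketch below computes that trade-off; the specific lambda values are illustrative, not the experiment's operating points.

```python
# Hedged sketch: probabilistic shaping of 16-QAM via a Maxwell-Boltzmann family.
import numpy as np

amps = np.array([-3, -1, 1, 3], float)
points = np.array([complex(i, q) for i in amps for q in amps])  # 16-QAM

def shaped_distribution(lam):
    p = np.exp(-lam * np.abs(points) ** 2)         # favor low-energy points
    return p / p.sum()

for lam in (0.0, 0.05, 0.15):
    p = shaped_distribution(lam)
    entropy = -np.sum(p * np.log2(p))              # bits carried per symbol
    avg_power = np.sum(p * np.abs(points) ** 2)
    print(f"lambda={lam:.2f}: {entropy:.2f} bit/symbol, mean power {avg_power:.2f}")
# lambda = 0 recovers uniform 16-QAM (4 bit/symbol); larger lambda trades rate
# for energy efficiency, so one transceiver runs at an adjustable data rate.
```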

Journal ArticleDOI
TL;DR: A first view of the METIS system concept is provided, the main features including the architecture are highlighted, and the challenges are addressed while discussing perspectives for further research work.
Abstract: The Mobile and wireless communications Enablers for the Twenty-twenty Information Society (METIS) project is laying the foundations of the Fifth Generation (5G) mobile and wireless communication system, bringing together the points of view of vendors, operators, vertical players, and academia. METIS envisions a 5G system concept that efficiently integrates new applications developed in the METIS horizontal topics and evolved versions of existing services and systems. This article provides a first view of the METIS system concept, highlights the main features including the architecture, and addresses the challenges while discussing perspectives for further research work.

Journal ArticleDOI
TL;DR: This work details the strategies adopted in the European research project IDEALIST to overcome the predicted data plane capacity crunch in optical networks and highlights the novelties stemming from the flex-grid concept.
Abstract: In this work we detail the strategies adopted in the European research project IDEALIST to overcome the predicted data plane capacity crunch in optical networks. In order for core and metropolitan telecommunication systems to be able to catch up with Internet traffic, which keeps growing exponentially, we exploit the elastic optical networks paradigm for its astounding characteristics: flexible bandwidth allocation and reach tailoring through adaptive line rate, modulation formats, and spectral efficiency. We emphasize the novelties stemming from the flex-grid concept and report on the corresponding proposed target network scenarios. Fundamental building blocks, like the bandwidth-variable transponder and complementary node architectures ushering in those systems, are detailed, focusing on the physical layer, monitoring aspects, and node architecture design.

Journal ArticleDOI
Laurent Schmalen1, Vahid Aref1, Junho Cho1, Detlef Suikat1, Detlef Rosener1, Andreas Leven1 
TL;DR: This paper presents some recent advances in the field of error-correcting codes, discusses their applicability to lightwave transmission systems, and shows how rapidly decodable codes can be constructed by careful selection of the degree distribution.
Abstract: In this paper, we present some recent advances in the field of error-correcting codes and discuss their applicability to lightwave transmission systems. We introduce several classes of spatially coupled codes, discuss several design options for them, and show how rapidly decodable codes can be constructed by careful selection of the degree distribution. We confirm the good performance of some spatially coupled codes at very low bit error rates using an FPGA-based simulation. Finally, we compare all proposed schemes and show how spatially coupled Low-Density Parity-Check (LDPC) codes outperform conventional LDPC and polar codes with similar receiver complexity and memory requirements.
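
Spatial coupling itself is a simple construction: copies of a small protograph are placed at consecutive positions and their edges are spread over neighboring positions, giving a banded parity-check matrix that supports low-latency windowed decoding. The tiny protograph and edge-spreading pattern below are illustrative assumptions, not the paper's code designs.

```python
# Hedged sketch: building a spatially coupled parity-check matrix by edge spreading.
import numpy as np

B_parts = [np.array([[1, 1, 0, 0]]),    # edge spreading: the protograph
           np.array([[0, 0, 1, 0]]),    # B = B0 + B1 + B2 is split across
           np.array([[0, 0, 0, 1]])]    # w = 3 consecutive positions
w, L = len(B_parts), 8                  # coupling width and chain length
rows, cols = B_parts[0].shape
H = np.zeros((rows * (L + w - 1), cols * L), dtype=int)
for pos in range(L):                    # copy at position `pos`...
    for i, Bi in enumerate(B_parts):    # ...sends edge bundle i to check row pos+i
        H[(pos + i) * rows:(pos + i + 1) * rows,
          pos * cols:(pos + 1) * cols] = Bi
print(H)   # banded structure: distant positions never share a check,
           # which is what enables low-latency windowed decoding
```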

Journal ArticleDOI
TL;DR: An extensive measurement study of the system architectures and performance of Netflix and Hulu finds that both platforms assign a CDN to a video request without considering the network conditions or optimizing the user-perceived video quality.
Abstract: Netflix and Hulu are leading Over-the-Top (OTT) content service providers in the US and Canada. Netflix alone accounted for 29.7% of the peak downstream traffic in the US in 2011. Understanding the system architectures and performance of Netflix and Hulu can shed light on the design of such large-scale video streaming platforms and help improve the design of future systems. In this paper, we perform an extensive measurement study to uncover their architectures and service strategies. Netflix and Hulu bear many similarities: both video streaming platforms rely heavily on third-party infrastructures, with Netflix migrating the majority of its functions to the Amazon cloud, while Hulu hosts its services out of Akamai. Both service providers employ the same set of three content distribution networks (CDNs) in delivering the video contents. Using active measurements, we dissect several key aspects of the OTT streaming platforms of Netflix and Hulu, e.g., employed streaming protocols, CDN selection strategy, user experience reporting, etc. We discover that both platforms assign a CDN to a video request without considering the network conditions or optimizing the user-perceived video quality. We further conduct performance measurement studies of the three CDNs employed by Netflix and Hulu. We show that the available bandwidths on all three CDNs vary significantly over time and across geographic locations. We propose a measurement-based adaptive CDN selection strategy and a multiple-CDN-based video delivery strategy that can significantly increase users' average available bandwidth.
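
The proposed measurement-based selection reduces to a simple control loop: probe each candidate CDN's available bandwidth, smooth the estimates, and stream from the currently best one instead of a fixed assignment. The probe model, CDN names, and smoothing constant below are illustrative assumptions, not the paper's measured values.

```python
# Hedged sketch: adaptive CDN selection from smoothed bandwidth measurements.
import random

CDNS = ["cdn-a", "cdn-b", "cdn-c"]

def probe_bandwidth(cdn):
    # Stand-in for a real measurement (e.g., timing a small chunk download).
    base = {"cdn-a": 4.0, "cdn-b": 7.0, "cdn-c": 5.5}[cdn]     # Mbit/s
    return max(0.5, random.gauss(base, 1.5))

def select_cdn(history, alpha=0.5):
    # Exponentially weighted estimate smooths out short-term fluctuation.
    for cdn in CDNS:
        sample = probe_bandwidth(cdn)
        history[cdn] = (sample if cdn not in history
                        else alpha * sample + (1 - alpha) * history[cdn])
    return max(CDNS, key=lambda c: history[c])

history = {}
for t in range(5):
    print(f"t={t}: stream from", select_cdn(history))
```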