
Showing papers published by Bell Labs in 2016


Journal ArticleDOI
TL;DR: In this article, a game-theoretic approach is adopted to solve the multi-user computation offloading problem in a multi-channel wireless interference environment in a distributed manner.
Abstract: Mobile-edge cloud computing is a new paradigm to provide cloud computing capabilities at the edge of pervasive radio access networks in close proximity to mobile users. In this paper, we first study the multi-user computation offloading problem for mobile-edge cloud computing in a multi-channel wireless interference environment. We show that it is NP-hard to compute a centralized optimal solution, and hence adopt a game theoretic approach for achieving efficient computation offloading in a distributed manner. We formulate the distributed computation offloading decision making problem among mobile device users as a multi-user computation offloading game. We analyze the structural property of the game and show that the game admits a Nash equilibrium and possesses the finite improvement property. We then design a distributed computation offloading algorithm that can achieve a Nash equilibrium, derive the upper bound of the convergence time, and quantify its efficiency ratio over the centralized optimal solutions in terms of two important performance metrics. We further extend our study to the scenario of multi-user computation offloading in the multi-channel wireless contention environment. Numerical results corroborate that the proposed algorithm can achieve superior computation offloading performance and scale well as the user size increases.
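
As a minimal illustration of the distributed mechanism described above, the sketch below runs asynchronous best-response updates in a toy offloading game until no user can improve, which is what the finite improvement property guarantees to terminate. The cost model (a constant local-computation cost versus an offloading cost that grows with the number of users sharing a channel) and all numbers are simplified stand-ins, not the paper's interference model.

```python
import random

K, M = 8, 3                      # users, wireless channels (toy sizes)
local_cost = [5.0] * K           # cost of computing locally (assumed constant here)
base, congestion = 1.0, 1.5      # offloading cost grows with channel sharing (assumed)

def cost(i, decision):
    """Cost of user i under the joint decision profile (0 = local, 1..M = channel)."""
    d = decision[i]
    if d == 0:
        return local_cost[i]
    sharing = sum(1 for j in range(K) if decision[j] == d)
    return base + congestion * sharing

def best_response(i, decision):
    return min(range(M + 1), key=lambda d: cost(i, decision[:i] + [d] + decision[i+1:]))

decision = [0] * K               # everyone starts by computing locally
improved, rounds = True, 0
while improved:                  # finite improvement property => this loop terminates
    improved = False
    for i in random.sample(range(K), K):
        br = best_response(i, decision)
        if cost(i, decision[:i] + [br] + decision[i+1:]) < cost(i, decision) - 1e-9:
            decision[i] = br
            improved = True
    rounds += 1

print("Nash equilibrium decisions:", decision, "after", rounds, "rounds")
```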

2,013 citations


Journal ArticleDOI
TL;DR: This overview article identifies 10 myths of Massive MIMO and explains why they are not true, and asks a question that is critical for the practical adoption of the technology and which will require intense future research activities to answer properly.
Abstract: Wireless communications is one of the most successful technologies in modern times, given that an exponential growth rate in wireless traffic has been sustained for over a century (known as Cooper’s law). This trend will certainly continue, driven by new innovative applications; for example, augmented reality and the Internet of Things. Massive MIMO has been identified as a key technology to handle orders of magnitude more data traffic. Despite the attention it is receiving from the communication community, we have personally witnessed that Massive MIMO is subject to several widespread misunderstandings, as epitomized by the following (fictional) abstract: “The Massive MIMO technology uses a nearly infinite number of high-quality antennas at the base stations. By having at least an order of magnitude more antennas than active terminals, one can exploit asymptotic behaviors that some special kinds of wireless channels have. This technology looks great at first sight, but unfortunately the signal processing complexity is off the charts and the antenna arrays would be so huge that it can only be implemented in millimeter-wave bands.” These statements are, in fact, completely false. In this overview article, we identify 10 myths and explain why they are not true. We also ask a question that is critical for the practical adoption of the technology and which will require intense future research activities to answer properly. We provide references to key technical papers that support our claims, while a further list of related overview and technical papers can be found at the Massive MIMO Info Point: http://massivemimo.eu
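
The asymptotic behavior alluded to in the quoted (fictional) abstract can be checked numerically: with i.i.d. Rayleigh channels, the normalized Gram matrix H^H H / M approaches the identity as the number of base-station antennas M grows, which is what makes simple linear processing effective. The snippet below is an illustrative sketch only and is not taken from the article.

```python
import numpy as np

# Toy check of favorable propagation with i.i.d. Rayleigh channels (illustrative only).
K = 10                                    # active terminals
for M in (10, 100, 1000):                 # base-station antennas
    H = (np.random.randn(M, K) + 1j * np.random.randn(M, K)) / np.sqrt(2)
    G = H.conj().T @ H / M                # normalized Gram matrix
    off_diag = G - np.diag(np.diag(G))
    print(f"M={M:5d}  max |off-diagonal| = {np.abs(off_diag).max():.3f}")
# Off-diagonal terms shrink roughly like 1/sqrt(M), so user channels become nearly orthogonal.
```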

1,040 citations


Posted Content
TL;DR: Under uncorrelated shadow fading conditions, the cell-free scheme provides nearly fivefold improvement in 95%-likely per-user throughput over the small-cell scheme, and tenfold improvement when shadow fading is correlated.
Abstract: A Cell-Free Massive MIMO (multiple-input multiple-output) system comprises a very large number of distributed access points (APs) which simultaneously serve a much smaller number of users over the same time/frequency resources based on directly measured channel characteristics. The APs and users have only one antenna each. The APs acquire channel state information through time-division duplex operation and the reception of uplink pilot signals transmitted by the users. The APs perform multiplexing/de-multiplexing through conjugate beamforming on the downlink and matched filtering on the uplink. Closed-form expressions for individual user uplink and downlink throughputs lead to max-min power control algorithms. Max-min power control ensures uniformly good service throughout the area of coverage. A pilot assignment algorithm helps to mitigate the effects of pilot contamination, but power control is far more important in that regard. Cell-Free Massive MIMO has considerably improved performance with respect to a conventional small-cell scheme, whereby each user is served by a dedicated AP, in terms of both 95%-likely per-user throughput and immunity to shadow fading spatial correlation. Under uncorrelated shadow fading conditions, the cell-free scheme provides nearly 5-fold improvement in 95%-likely per-user throughput over the small-cell scheme, and 10-fold improvement when shadow fading is correlated.
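
Below is a toy sketch of the conjugate-beamforming downlink described above, assuming perfect channel estimates, one antenna per AP and per user, and a simple per-AP power normalization; the paper's closed-form throughput expressions and max-min power control are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
M_ap, K = 64, 8                                       # distributed APs, users (toy sizes)
H = (rng.standard_normal((M_ap, K)) + 1j * rng.standard_normal((M_ap, K))) / np.sqrt(2)

s = (rng.integers(0, 2, K) * 2 - 1).astype(complex)   # unit-power BPSK symbols (toy)
eta = 1.0 / np.sum(np.abs(H) ** 2, axis=1, keepdims=True)   # per-AP power normalization (assumed)

# Conjugate beamforming: each AP weights user k's symbol by the conjugate of its channel.
x = (np.sqrt(eta) * H.conj()) @ s                     # signal transmitted by each AP
y = H.T @ x                                           # noiseless received samples at the users

desired = np.real(np.conj(s) * y)                     # projection onto each user's intended symbol
print("desired-signal component per user (should be positive for every user):")
print(np.round(desired, 2))
```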

893 citations


Journal ArticleDOI
TL;DR: A transmission system with adjustable data rate for single-carrier coherent optical transmission is proposed, which enables high-speed transmission close to the Shannon limit, and it is experimentally demonstrated that the optical transmission of probabilistically shaped 64-QAM signals outperforms regular 16-QAM and regular 64-QAM signals in transmission reach.
Abstract: A transmission system with adjustable data rate for single-carrier coherent optical transmission is proposed, which enables high-speed transmission close to the Shannon limit. The proposed system is based on probabilistically shaped 64-QAM modulation formats. Adjustable shaping is combined with a fixed-QAM modulation and a fixed forward-error correction code to realize a system with adjustable net data rate that can operate over a large reach range. At the transmitter, an adjustable distribution matcher performs the shaping. At the receiver, an inverse distribution matcher is used. Probabilistic shaping is implemented into a coherent optical transmission system for 64-QAM at 32 Gbaud to realize adjustable operation modes for net data rates ranging from 200 to 300 Gb/s. It is experimentally demonstrated that probabilistically shaped 64-QAM signals extend the transmission reach by more than 40% compared with regular 16-QAM and regular 64-QAM signals.
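
The shaping idea can be illustrated with a short calculation: drawing the eight amplitude levels of each 64-QAM quadrature rail from a Maxwell-Boltzmann distribution trades bits per symbol against average symbol energy. The shaping parameter nu below is a free knob chosen for illustration, not a value from the paper, and the distribution matcher itself is not shown.

```python
import numpy as np

amps = np.array([-7, -5, -3, -1, 1, 3, 5, 7], dtype=float)  # 8-ASK levels per I/Q rail of 64-QAM

def maxwell_boltzmann(nu):
    p = np.exp(-nu * amps ** 2)
    return p / p.sum()

for nu in (0.0, 0.02, 0.05):            # assumed shaping values; nu = 0 recovers uniform 64-QAM
    p = maxwell_boltzmann(nu)
    entropy = -np.sum(p * np.log2(p))   # bits carried per amplitude (per I or Q rail)
    energy = np.sum(p * amps ** 2)
    print(f"nu={nu:.2f}  entropy/rail={entropy:.2f} bit  mean energy={energy:.1f}")
# Larger nu: fewer bits per symbol but lower average energy, i.e. shaping gain / longer reach.
```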

564 citations


Posted Content
TL;DR: A novel variational autoencoder is developed to model images, as well as associated labels or captions, and a new semi-supervised setting is manifested for CNN learning with images; the framework even allows unsupervised CNN learning, based on images alone.
Abstract: A novel variational autoencoder is developed to model images, as well as associated labels or captions. The Deep Generative Deconvolutional Network (DGDN) is used as a decoder of the latent image features, and a deep Convolutional Neural Network (CNN) is used as an image encoder; the CNN is used to approximate a distribution for the latent DGDN features/code. The latent code is also linked to generative models for labels (Bayesian support vector machine) or captions (recurrent neural network). When predicting a label/caption for a new image at test, averaging is performed across the distribution of latent codes; this is computationally efficient as a consequence of the learned CNN-based encoder. Since the framework is capable of modeling the image in the presence/absence of associated labels/captions, a new semi-supervised setting is manifested for CNN learning with images; the framework even allows unsupervised CNN learning, based on images alone.
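
A minimal sketch of the test-time behavior described above: an approximate posterior q(z|x) is sampled via the reparameterization trick and the label prediction is averaged over latent-code samples. The linear "encoder" and "classifier" below are placeholders standing in for the CNN encoder and the Bayesian-SVM label model, so this is illustrative only.

```python
import numpy as np

rng = np.random.default_rng(1)
D, Z, C = 32, 8, 4                        # image-feature dim, latent dim, classes (toy sizes)

# Placeholder "networks": linear maps stand in for the CNN encoder and the label model.
W_mu, W_logvar = rng.standard_normal((Z, D)), rng.standard_normal((Z, D)) * 0.1
W_cls = rng.standard_normal((C, Z))

def encode(x):                            # q(z|x) = N(mu, diag(exp(logvar)))
    return W_mu @ x, W_logvar @ x

def predict_label(x, n_samples=64):
    """Average the class posterior over latent-code samples (reparameterization trick)."""
    mu, logvar = encode(x)
    probs = np.zeros(C)
    for _ in range(n_samples):
        z = mu + np.exp(0.5 * logvar) * rng.standard_normal(Z)   # z ~ q(z|x)
        logits = W_cls @ z
        e = np.exp(logits - logits.max())
        probs += e / e.sum()
    return probs / n_samples

x = rng.standard_normal(D)                # a stand-in "image"
print("averaged class posterior:", np.round(predict_label(x), 3))
```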

514 citations


Journal ArticleDOI
TL;DR: In this paper, 16 researchers, each a world-leading expert in their respective subfields, contribute a section to this invited review article, summarizing their views on state-of-the-art and future developments in optical communications.
Abstract: Lightwave communications is a necessity for the information age. Optical links provide enormous bandwidth, and the optical fiber is the only medium that can meet modern society's needs for transporting massive amounts of data over long distances. Applications range from global high-capacity networks, which constitute the backbone of the internet, to the massively parallel interconnects that provide data connectivity inside datacenters and supercomputers. Optical communications is a diverse and rapidly changing field, where experts in photonics, communications, electronics, and signal processing work side by side to meet the ever-increasing demands for higher capacity, lower cost, and lower energy consumption, while adapting the system design to novel services and technologies. Due to the interdisciplinary nature of this rich research field, Journal of Optics has invited 16 researchers, each a world-leading expert in their respective subfields, to contribute a section to this invited review article, summarizing their views on state-of-the-art and future developments in optical communications.

477 citations


Proceedings ArticleDOI
11 Apr 2016
TL;DR: Experiments show that DeepX can allow even large-scale deep learning models to execute efficiently on modern mobile processors and significantly outperform existing solutions, such as cloud-based offloading.
Abstract: Breakthroughs from the field of deep learning are radically changing how sensor data are interpreted to extract the high-level information needed by mobile apps. It is critical that the gains in inference accuracy that deep models afford become embedded in future generations of mobile apps. In this work, we present the design and implementation of DeepX, a software accelerator for deep learning execution. DeepX significantly lowers the device resources (viz. memory, computation, energy) required by deep learning that currently act as a severe bottleneck to mobile adoption. The foundation of DeepX is a pair of resource control algorithms, designed for the inference stage of deep learning, that: (1) decompose monolithic deep model network architectures into unit-blocks of various types, that are then more efficiently executed by heterogeneous local device processors (e.g., GPUs, CPUs); and (2) perform principled resource scaling that adjusts the architecture of deep models to shape the overhead each unit-block introduces. Experiments show that DeepX can allow even large-scale deep learning models to execute efficiently on modern mobile processors and significantly outperform existing solutions, such as cloud-based offloading.
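
One of the resource-scaling ideas, reshaping a monolithic layer into cheaper unit-blocks, can be sketched as a truncated-SVD factorization of a fully connected layer; the layer size, rank, and low-rank structure below are assumed for illustration and this is not DeepX's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
# A fully connected layer with approximately low-rank weights -- a toy stand-in for a trained layer.
W = rng.standard_normal((1024, 160)) @ rng.standard_normal((160, 1024)) / 40
W += 0.01 * rng.standard_normal((1024, 1024))
x = rng.standard_normal(1024)

rank = 192                                  # "scaled" unit-block: one big layer -> two thin ones
U, s, Vt = np.linalg.svd(W, full_matrices=False)
A, B = U[:, :rank] * s[:rank], Vt[:rank, :]

err = np.linalg.norm(W @ x - A @ (B @ x)) / np.linalg.norm(W @ x)
print(f"params {W.size:,} -> {A.size + B.size:,}, relative output error {err:.3f}")
```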

442 citations


Journal ArticleDOI
08 Mar 2016-ACS Nano
TL;DR: Silicon nanoparticle-based lithium-ion battery negative electrodes are described in which multiple nonactive electrode additives are replaced with a single conductive binder, in this case the conducting polymer PEDOT:PSS.
Abstract: This work describes silicon nanoparticle-based lithium-ion battery negative electrodes where multiple nonactive electrode additives (usually carbon black and an inert polymer binder) are replaced with a single conductive binder, in this case, the conducting polymer PEDOT:PSS. While enabling the production of well-mixed slurry-cast electrodes with high silicon content (up to 95 wt %), this combination eliminates the well-known occurrence of capacity losses due to physical separation of the silicon and traditional inorganic conductive additives during repeated lithiation/delithiation processes. Using an in situ secondary doping treatment of the PEDOT:PSS with small quantities of formic acid, electrodes containing 80 wt % SiNPs can be prepared with electrical conductivity as high as 4.2 S/cm. Even at the relatively high areal loading of 1 mg/cm2, this system demonstrated a first cycle lithiation capacity of 3685 mA·h/g (based on the SiNP mass) and a first cycle efficiency of ∼78%. After 100 repeated cycles a...

369 citations


Journal ArticleDOI
TL;DR: The lessons from the first wave of smartphone-sensing research are drawn to highlight areas of opportunity for psychological research, present practical considerations for designing smartphone studies, and discuss the ongoing methodological and ethical challenges associated with research in this domain.
Abstract: Smartphones now offer the promise of collecting behavioral data unobtrusively, in situ, as it unfolds in the course of daily life. Data can be collected from the onboard sensors and other phone logs embedded in today's off-the-shelf smartphone devices. These data permit fine-grained, continuous collection of people's social interactions (e.g., speaking rates in conversation, size of social groups, calls, and text messages), daily activities (e.g., physical activity and sleep), and mobility patterns (e.g., frequency and duration of time spent at various locations). In this article, we have drawn on the lessons from the first wave of smartphone-sensing research to highlight areas of opportunity for psychological research, present practical considerations for designing smartphone studies, and discuss the ongoing methodological and ethical challenges associated with research in this domain. It is our hope that these practical guidelines will facilitate the use of smartphones as a behavioral observation tool in psychological science.

350 citations


Journal ArticleDOI
TL;DR: In this paper, a path loss model incorporating both line-of-sight (LoS) and non-line-of-sight (NLoS) transmissions was introduced to study the impact of LoS and NLoS transmissions on the performance of dense small cell networks.
Abstract: In this paper, we introduce a sophisticated path loss model incorporating both line-of-sight (LoS) and non-line-of-sight (NLoS) transmissions to study their impact on the performance of dense small cell networks (SCNs). Analytical results are obtained for the coverage probability and the area spectral efficiency (ASE), assuming both a general path loss model and a special case with a linear LoS probability function. The performance impact of LoS and NLoS transmissions in dense SCNs in terms of the coverage probability and the ASE is significant, both quantitatively and qualitatively, compared with the previous work that does not differentiate LoS and NLoS transmissions. Our analysis demonstrates that the network coverage probability first increases with the increase of the base station (BS) density, and then decreases as the SCN becomes denser. This decrease further makes the ASE suffer from a slow growth or even a decrease with network densification. However, the ASE will again grow almost linearly as the BS density becomes ultra dense. For the practical regime of BS density, the performance results derived from our analysis are distinctively different from previous results, and thus shed new insights on the design and deployment of future dense SCNs.
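
A compact Monte Carlo sketch of the effect analyzed above is given below, using an assumed linear LoS probability function and assumed path-loss exponents rather than the paper's calibrated parameters; it estimates the coverage probability at a typical user for several BS densities, where the LoS/NLoS transition is what produces the non-monotonic trend discussed in the abstract.

```python
import numpy as np

rng = np.random.default_rng(2)
alpha_L, alpha_N = 2.1, 3.8          # assumed LoS / NLoS path-loss exponents
d0 = 0.3                             # linear LoS probability: Pr_LoS(d) = max(0, 1 - d/d0) (assumed)
sinr_th = 1.0                        # 0 dB SINR threshold
noise = 1e-9

def coverage(density, trials=500, radius=2.0):
    hits = 0
    for _ in range(trials):
        n = rng.poisson(density * np.pi * radius ** 2)
        if n == 0:
            continue
        d = radius * np.sqrt(rng.random(n))                 # BS distances from the user (PPP in a disk)
        los = rng.random(n) < np.maximum(0, 1 - d / d0)
        gain = rng.exponential(1.0, n)                      # Rayleigh fading power
        rx = gain * np.where(los, d ** -alpha_L, d ** -alpha_N)
        k = rx.argmax()                                     # associate with the strongest BS
        sinr = rx[k] / (rx.sum() - rx[k] + noise)
        hits += sinr > sinr_th
    return hits / trials

for lam in (1, 10, 100, 1000):       # BS density per unit area (toy values)
    print(f"density {lam:5d}: coverage probability ~ {coverage(lam):.2f}")
```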

282 citations


Proceedings Article
05 Dec 2016
TL;DR: In this paper, a variational autoencoder is used to model images, as well as associated labels or captions, and a deep generative deconvolutional network (DGDN) is used as a decoder of the latent image features.
Abstract: A novel variational autoencoder is developed to model images, as well as associated labels or captions. The Deep Generative Deconvolutional Network (DGDN) is used as a decoder of the latent image features, and a deep Convolutional Neural Network (CNN) is used as an image encoder; the CNN is used to approximate a distribution for the latent DGDN features/code. The latent code is also linked to generative models for labels (Bayesian support vector machine) or captions (recurrent neural network). When predicting a label/caption for a new image at test, averaging is performed across the distribution of latent codes; this is computationally efficient as a consequence of the learned CNN-based encoder. Since the framework is capable of modeling the image in the presence/absence of associated labels/captions, a new semi-supervised setting is manifested for CNN learning with images; the framework even allows unsupervised CNN learning, based on images alone.

Journal ArticleDOI
TL;DR: This work proposes an online coded caching scheme termed coded least-recently sent (LRS) and simulates it for a demand time series derived from the dataset made available by Netflix for the Netflix Prize, showing that the proposed coded LRS algorithm significantly outperforms the popular least-recently used caching algorithm.
Abstract: We consider a basic content distribution scenario consisting of a single origin server connected through a shared bottleneck link to a number of users each equipped with a cache of finite memory. The users issue a sequence of content requests from a set of popular files, and the goal is to operate the caches as well as the server such that these requests are satisfied with the minimum number of bits sent over the shared link. Assuming a basic Markov model for renewing the set of popular files, we characterize approximately the optimal long-term average rate of the shared link. We further prove that the optimal online scheme has approximately the same performance as the optimal offline scheme, in which the cache contents can be updated based on the entire set of popular files before each new request. To support these theoretical results, we propose an online coded caching scheme termed coded least-recently sent (LRS) and simulate it for a demand time series derived from the dataset made available by Netflix for the Netflix Prize. For this time series, we show that the proposed coded LRS algorithm significantly outperforms the popular least-recently used caching algorithm.
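
For context, the sketch below simulates only the uncoded least-recently-used baseline under a Markov renewal of the popular set, the kind of demand process assumed above; the coded LRS placement and delivery themselves are not reproduced here, and all parameters are illustrative.

```python
from collections import OrderedDict
import random

random.seed(0)
N, M, p_renew = 100, 20, 0.05        # popular-set size, cache size, renewal probability (assumed)
popular = list(range(N))
next_id = N
cache = OrderedDict()                # LRU cache of whole files (uncoded baseline only)
misses, T = 0, 20000

for t in range(T):
    if random.random() < p_renew:    # Markov renewal of the popular set
        popular[random.randrange(N)] = next_id
        next_id += 1
    f = random.choice(popular)       # user request
    if f in cache:
        cache.move_to_end(f)
    else:
        misses += 1                  # file must be sent over the shared link
        cache[f] = True
        if len(cache) > M:
            cache.popitem(last=False)

print(f"LRU miss rate over the shared link: {misses / T:.2f}")
```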

Proceedings ArticleDOI
03 Apr 2016
TL;DR: The targets for NB-IoT are described, a preliminary system design is presented, and coverage, capacity, latency, and battery life analyses are provided.
Abstract: In 3GPP, a narrowband system based on Long Term Evolution (LTE) is being introduced to support the Internet of Things. This system, named Narrowband Internet of Things (NB-IoT), can be deployed in three different operation modes - (1) stand-alone as a dedicated carrier, (2) in-band within the occupied bandwidth of a wideband LTE carrier, and (3) within the guard-band of an existing LTE carrier. In stand-alone operation mode, NB-IoT can occupy one GSM channel (200 kHz) while for in-band and guard-band operation modes, it will use one physical resource block of LTE (180 kHz). The design targets of NB-IoT include low-cost devices, high coverage (20-dB improvement over GPRS), long device battery life (more than 10 years), and massive capacity. Latency is relaxed although a delay budget of 10 seconds is the target for exception reports. The specifications for NB-IoT are expected to be finalized in 2016. In this paper, we describe the targets for NB-IoT and present a preliminary system design. In addition, coverage, capacity, latency, and battery life analysis are also presented.
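
A back-of-the-envelope sketch of where the 20-dB coverage target leads: if the extra link budget came from repetition alone, the repetition factor R would need to satisfy 10*log10(R) >= 20, i.e. about 100 repetitions; in practice part of the gain comes from the much narrower transmission bandwidth. The numbers below are illustrative arithmetic, not the paper's link-budget analysis.

```python
import math

target_gain_db = 20.0                                 # coverage improvement target over GPRS
# If the gain came from repetition alone, combining R identical transmissions
# ideally yields 10*log10(R) dB of processing gain.
R = math.ceil(10 ** (target_gain_db / 10))
print(f"repetitions for {target_gain_db:.0f} dB from repetition alone: {R}")

# Part of the gain can instead come from a narrower transmission bandwidth, e.g. a single
# 15 kHz tone instead of a 180 kHz resource block (assumed uplink example):
bw_gain_db = 10 * math.log10(180e3 / 15e3)
print(f"bandwidth-reduction gain: {bw_gain_db:.1f} dB, "
      f"leaving ~{target_gain_db - bw_gain_db:.1f} dB for repetition/other techniques")
```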

Proceedings ArticleDOI
14 Nov 2016
TL;DR: This paper proposes SparseSep, a new approach that leverages the sparsification of fully connected layers and separation of convolutional kernels to reduce the resource requirements of popular deep learning algorithms, and allows large-scale DNNs and CNNs to run efficiently on mobile and embedded hardware with only minimal impact on inference accuracy.
Abstract: Deep learning has revolutionized the way sensor data are analyzed and interpreted. The accuracy gains these approaches offer make them attractive for the next generation of mobile, wearable and embedded sensory applications. However, state-of-the-art deep learning algorithms typically require a significant amount of device and processor resources, even just for the inference stages that are used to discriminate high-level classes from low-level data. The limited availability of memory, computation, and energy on mobile and embedded platforms thus poses a significant challenge to the adoption of these powerful learning techniques. In this paper, we propose SparseSep, a new approach that leverages the sparsification of fully connected layers and separation of convolutional kernels to reduce the resource requirements of popular deep learning algorithms. As a result, SparseSep allows large-scale DNNs and CNNs to run efficiently on mobile and embedded hardware with only minimal impact on inference accuracy. We experiment using SparseSep across a variety of common processors such as the Qualcomm Snapdragon 400, ARM Cortex M0 and M3, and Nvidia Tegra K1, and show that it allows inference for various deep models to execute more efficiently; for example, on average requiring 11.3 times less memory and running 13.3 times faster on these representative platforms.
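
The kernel-separation idea can be sketched as follows: a 2-D convolution kernel that is approximately rank-1 can be replaced by one vertical and one horizontal 1-D pass, cutting the multiplies per output pixel from k*k to 2k. The kernel below is constructed to be nearly separable for illustration; this is not the paper's pipeline.

```python
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(3)
k = 5
kernel = np.outer(rng.standard_normal(k), rng.standard_normal(k))  # an exactly separable kernel (toy)
kernel += 0.01 * rng.standard_normal((k, k))                       # plus a little "noise"

U, s, Vt = np.linalg.svd(kernel)
v, h = U[:, 0] * np.sqrt(s[0]), Vt[0, :] * np.sqrt(s[0])           # rank-1 (separable) approximation

img = rng.standard_normal((64, 64))
full = convolve2d(img, kernel, mode="valid")
sep = convolve2d(convolve2d(img, v[:, None], mode="valid"), h[None, :], mode="valid")

err = np.linalg.norm(full - sep) / np.linalg.norm(full)
print(f"multiplies per pixel: {k*k} -> {2*k},  relative error: {err:.3f}")
```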

Journal ArticleDOI
TL;DR: This work proposes a fluid model for a large class of MP-TCP algorithms and identifies design criteria that guarantee the existence, uniqueness, and stability of system equilibrium and motivates the algorithm Balia (balanced linked adaptation), which generalizes existing algorithms and strikes a good balance among TCP-friendliness, responsiveness, and window oscillation.
Abstract: Multipath TCP (MP-TCP) has the potential to greatly improve application performance by using multiple paths transparently. We propose a fluid model for a large class of MP-TCP algorithms and identify design criteria that guarantee the existence, uniqueness, and stability of system equilibrium. We clarify how algorithm parameters impact TCP-friendliness, responsiveness, and window oscillation and demonstrate an inevitable tradeoff among these properties. We discuss the implications of these properties on the behavior of existing algorithms and motivate our algorithm Balia (balanced linked adaptation), which generalizes existing algorithms and strikes a good balance among TCP-friendliness, responsiveness, and window oscillation. We have implemented Balia in the Linux kernel. We use our prototype to compare the new algorithm to existing MP-TCP algorithms.
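
As a rough illustration of coupled multipath congestion control, the sketch below iterates a per-ACK window update on two subflows with Bernoulli losses; it uses the coupled-increase rule of RFC 6356 (LIA) as a stand-in for the class of algorithms the fluid model covers, since Balia's exact update is not reproduced here, and the RTTs and loss rates are assumed.

```python
import random

random.seed(0)
rtt = [0.02, 0.05]        # subflow RTTs in seconds (assumed)
loss = [0.01, 0.02]       # per-packet loss probabilities (assumed)
w = [2.0, 2.0]            # congestion windows in packets

def lia_alpha(w, rtt):
    """Coupled-increase parameter of RFC 6356 (a stand-in, not Balia's rule)."""
    total = sum(w)
    num = max(wr / r ** 2 for wr, r in zip(w, rtt))
    den = sum(wr / r for wr, r in zip(w, rtt)) ** 2
    return total * num / den

for step in range(100000):               # one iteration ~ one ACK/loss event per subflow (crude)
    a = lia_alpha(w, rtt)
    for i in range(2):
        if random.random() < loss[i]:
            w[i] = max(1.0, w[i] / 2)    # multiplicative decrease on loss
        else:
            w[i] += min(a / sum(w), 1.0 / w[i])   # coupled increase per ACK

print("window per subflow:", [round(x, 1) for x in w])
print("throughput per subflow (pkt/s):", [round(wr / r) for wr, r in zip(w, rtt)])
```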

Proceedings ArticleDOI
11 Apr 2016
TL;DR: The authors rely on data analysis to envision regulations that are responsive to real-time demands, contributing to the emerging idea of ``algorithmic regulation''.
Abstract: Sharing economy platforms have become extremely popular in the last few years, and they have changed the way in which we commute, travel, and borrow among many other activities. Despite their popularity among consumers, such companies are poorly regulated. For example, Airbnb, one of the most successful examples of sharing economy platform, is often criticized by regulators and policy makers. While, in theory, municipalities should regulate the emergence of Airbnb through evidence-based policy making, in practice, they engage in a false dichotomy: some municipalities allow the business without imposing any regulation, while others ban it altogether. That is because there is no evidence upon which to draft policies. Here we propose to gather evidence from the Web. After crawling Airbnb data for the entire city of London, we find out where and when Airbnb listings are offered and, by matching such listing information with census and hotel data, we determine the socio-economic conditions of the areas that actually benefit from the hospitality platform. The reality is more nuanced than one would expect, and it has changed over the years. Airbnb demand and offering have changed over time, and traditional regulations have not been able to respond to those changes. That is why, finally, we rely on our data analysis to envision regulations that are responsive to real-time demands, contributing to the emerging idea of ``algorithmic regulation''.

Journal ArticleDOI
TL;DR: An adversary model is defined, several vulnerabilities affecting current Docker usage are pointed out, and further research directions are discussed on the Docker environment's security implications through realistic use cases.
Abstract: The need for ever-shorter development cycles, continuous delivery, and cost savings in cloud-based infrastructures led to the rise of containers, which are more flexible than virtual machines and provide near-native performance. Among all container solutions, Docker, a complete packaging and software delivery tool, currently leads the market. This article gives an overview of the container ecosystem and discusses the Docker environment's security implications through realistic use cases. The authors define an adversary model, point out several vulnerabilities affecting current Docker usage, and discuss further research directions.

Proceedings ArticleDOI
01 Oct 2016
TL;DR: This paper provides an overview of NB-IoT design, including salient features from the physical and higher layers, and illustrative results with respect to performance objectives are provided.
Abstract: In 3GPP Rel-13, a narrowband system, named Narrowband Internet of Things (NB-IoT), has been introduced to provide low-cost, low-power, wide-area cellular connectivity for the Internet of Things. This system, based on Long Term Evolution (LTE) technology, supports most LTE functionalities albeit with essential simplifications to reduce device complexity. Further optimizations to increase coverage, reduce overhead and reduce power consumption while increasing capacity have been introduced as well. The design objectives of NB-IoT include low-complexity devices, high coverage, long device battery life, and massive capacity. Latency is relaxed although a delay budget of 10 seconds is the target for exception reports. This paper provides an overview of NB-IoT design, including salient features from the physical and higher layers. Illustrative results with respect to performance objectives are also provided. Finally, NB-IoT enhancements in LTE Rel-14 are briefly outlined.

Journal ArticleDOI
TL;DR: The merits of an HTTP/2 push-based approach to segment duration reduction, a measurement study on the available bandwidth in real 4G/LTE networks, and the induced bit-rate overhead for HEVC-encoded video segments with a sub-second duration are discussed.
Abstract: In HTTP Adaptive Streaming, video content is temporally divided into multiple segments, each encoded at several quality levels. The client can adapt the requested video quality to network changes, generally resulting in a smoother playback. Unfortunately, live streaming solutions still often suffer from playout freezes and a large end-to-end delay. By reducing the segment duration, the client can use a smaller temporal buffer and respond even faster to network changes. However, since segments are requested subsequently, this approach is susceptible to high round-trip times. In this letter, we discuss the merits of an HTTP/2 push-based approach. We present the details of a measurement study on the available bandwidth in real 4G/LTE networks, and analyze the induced bit-rate overhead for HEVC-encoded video segments with a sub-second duration. Through an extensive evaluation with the generated video content, we show that the proposed approach results in a higher video quality (+7.5%) and a lower freeze time (−50.4%), and allows the live delay to be reduced compared with traditional solutions over HTTP/1.1.
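
A quick back-of-the-envelope sketch (with assumed numbers, not the measurements in the letter) of why push matters once segments become sub-second: with pull-based HTTP/1.1 every segment costs at least one idle RTT, whereas pushing k segments per request amortizes that RTT.

```python
rtt = 0.120             # round-trip time in seconds (assumed 4G/LTE figure)
seg_dur = 0.5           # sub-second segment duration (assumed)
window = 30.0           # seconds of video considered

segments = int(window / seg_dur)
pull_overhead = segments * rtt                    # one request-response RTT per segment
for k in (1, 4, 8):                               # segments pushed per client request
    push_overhead = (segments / k) * rtt
    print(f"push k={k}: request overhead {push_overhead:.1f}s "
          f"vs HTTP/1.1 pull {pull_overhead:.1f}s over {window:.0f}s of video")
```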

Proceedings Article
10 Aug 2016
TL;DR: The first two authors were funded by Project “TEC4Growth - Pervasive Intelligence, Enhancers and Proofs of Concept with Industrial Impact/NORTE-01-0145-FEDER-000020”, which is supported by the North Portugal Regional Operational Programme (NORTE 2020), under the PORTUGAL 2020 Partnership Agreement, and through the European Regional Development Fund (ERDF).
Abstract: The first two authors were funded by Project “TEC4Growth - Pervasive Intelligence, Enhancers and Proofs of Concept with Industrial Impact/NORTE-01-0145-FEDER-000020”, which is financed by the North Portugal Regional Operational Programme (NORTE 2020), under the PORTUGAL 2020 Partnership Agreement, and through the European Regional Development Fund (ERDF). The third and fourth authors were supported by projects S2013/ICE2731 N-GREENS Software-CM and ONR Grants N000141210914 (AutoCrypt) and N000141512750 (SynCrypt). The fourth author was also supported by FP7 Marie Curie Actions-COFUND 291803 (Amarout II). We thank Peter Schwabe for providing us with a collection of negative examples. We thank Hovav Shacham, Craig Costello and Patrick Longa for helpful observations on our verification results.

Journal ArticleDOI
TL;DR: A new caching scheme that combines two basic approaches to provide coded multicasting opportunities within each layer and across multiple layers is proposed, which achieves the optimal communication rates to within a constant multiplicative and additive gap.
Abstract: Caching of popular content during off-peak hours is a strategy to reduce network loads during peak hours. Recent work has shown significant benefits of designing such caching strategies not only to deliver part of the content locally, but also to provide coded multicasting opportunities even among users with different demands. Exploiting both of these gains was shown to be approximately optimal for caching systems with a single layer of caches. Motivated by practical scenarios, we consider, in this paper, a hierarchical content delivery network with two layers of caches. We propose a new caching scheme that combines two basic approaches. The first approach provides coded multicasting opportunities within each layer; the second approach provides coded multicasting opportunities across multiple layers. By striking the right balance between these two approaches, we show that the proposed scheme achieves the optimal communication rates to within a constant multiplicative and additive gap. We further show that there is no tension between the rates in each of the two layers up to the aforementioned gap. Thus, both the layers can simultaneously operate at approximately the minimum rate.

Journal ArticleDOI
TL;DR: In this paper, snapshot multispectral cameras are analyzed and compared by examining the efficiency of their sampling schemes, formulated as the spectral sensing coherence between their sensing matrices and spectrum-specific bases learned from a large-scale multispectral image database.
Abstract: Multispectral cameras collect image data with a greater number of spectral channels than traditional trichromatic sensors, thus providing spectral information at a higher level of detail. Such data are useful in various fields, such as remote sensing, materials science, biophotonics, and environmental monitoring. The massive scale of multispectral data, at high resolutions in the spectral, spatial, and temporal dimensions, has long presented a major challenge in spectrometer design. With recent developments in sampling theory, this problem has become more manageable through use of undersampling and constrained reconstruction techniques. This article presents an overview of these state-of-the-art multispectral acquisition systems, with a particular focus on snapshot multispectral capture, from a signal processing perspective. We propose that undersampling-based multispectral cameras can be understood and compared by examining the efficiency of their sampling schemes, which we formulate as the spectral sensing coherence information between their sensing matrices and spectrum-specific bases learned from a large-scale multispectral image database. We analyze existing snapshot multispectral cameras in this manner, and additionally discuss their optical performance in terms of light throughput and system complexity.
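
The comparison metric described above can be sketched as a mutual-coherence computation between a sampling matrix and a sparsifying basis; the random sampling matrices and the bump-shaped orthonormal basis below are stand-ins for real sensing matrices and a learned spectral basis, so the numbers are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 64                                    # spectral dimension (toy)

def coherence(Phi, Psi):
    """Mutual coherence sqrt(n) * max |<phi_i, psi_j>| over unit-norm rows/columns."""
    P = Phi / np.linalg.norm(Phi, axis=1, keepdims=True)
    B = Psi / np.linalg.norm(Psi, axis=0, keepdims=True)
    return np.sqrt(n) * np.abs(P @ B).max()

idx = np.arange(n)
bumps = np.exp(-0.5 * ((idx[:, None] - idx[None, :]) / 2.0) ** 2)  # localized spectral atoms (assumed)
Psi = np.linalg.qr(bumps)[0]                                       # stand-in for a learned basis

Phi_rand = rng.standard_normal((16, n))                   # dense random (coded-aperture-like) sampling
Phi_sub = np.eye(n)[rng.choice(n, 16, replace=False)]     # direct subsampling of 16 of the n channels

print("coherence, random sampling    :", round(coherence(Phi_rand, Psi), 2))
print("coherence, channel subsampling:", round(coherence(Phi_sub, Psi), 2))
# Lower coherence indicates measurements that spread energy more evenly over the basis,
# i.e. a more efficient sampling scheme for sparse reconstruction.
```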

Proceedings ArticleDOI
Xin Yuan
01 Sep 2016
TL;DR: The generalized alternating projection (GAP) algorithm is considered and the Alternating Direction Method of Multipliers (ADMM) framework with TV minimization for video and hyperspectral image compressive sensing under the CACTI and CASSI framework is derived.
Abstract: We consider the total variation (TV) minimization problem used for compressive sensing and solve it using the generalized alternating projection (GAP) algorithm. Extensive results demonstrate the high performance of the proposed algorithm on compressive sensing, including two-dimensional images, hyperspectral images and videos. We further derive the Alternating Direction Method of Multipliers (ADMM) framework with TV minimization for video and hyperspectral image compressive sensing under the CACTI and CASSI frameworks, respectively. Connections between GAP and ADMM are also provided.
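
A simplified, plug-and-play style alternation in the spirit of GAP-TV is sketched below for a 1-D toy problem: project onto the data-consistency set, then take a few gradient steps on a smoothed total-variation penalty. The sensing matrix, signal, and denoiser are illustrative stand-ins, not the CACTI/CASSI forward models or the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(5)
n, m = 200, 80
x_true = np.repeat(rng.standard_normal(10), 20)      # piecewise-constant signal (TV-sparse)
A = rng.standard_normal((m, n)) / np.sqrt(m)         # random sensing matrix (stand-in for CACTI/CASSI)
y = A @ x_true

AAt_inv = np.linalg.inv(A @ A.T)

def tv_step(x, steps=30, h=0.05, eps=1e-2):
    """A few gradient steps on a smoothed total-variation penalty (toy denoiser)."""
    for _ in range(steps):
        d = np.diff(x)
        w = d / np.sqrt(d ** 2 + eps)
        g = np.zeros_like(x)
        g[:-1] -= w
        g[1:] += w
        x = x - h * g
    return x

x = np.zeros(n)
for _ in range(100):                                  # GAP-style alternation
    x = x + A.T @ (AAt_inv @ (y - A @ x))             # Euclidean projection onto {x : A x = y}
    x = tv_step(x)                                    # move toward lower total variation

err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(f"relative reconstruction error: {err:.3f}")
```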

Proceedings ArticleDOI
TL;DR: In this article, the authors considered a cache-aided wireless network with a library of files and showed that the sum degrees-of-freedom (sum-DoF) of the network is within a factor of 2 of the optimum under one-shot linear schemes.
Abstract: We consider a system comprising a library of $N$ files (e.g., movies) and a wireless network with $K_T$ transmitters, each equipped with a local cache of size of $M_T$ files, and $K_R$ receivers, each equipped with a local cache of size of $M_R$ files. Each receiver will ask for one of the $N$ files in the library, which needs to be delivered. The objective is to design the cache placement (without prior knowledge of receivers' future requests) and the communication scheme to maximize the throughput of the delivery. In this setting, we show that the sum degrees-of-freedom (sum-DoF) of $\min\left\{\frac{K_T M_T+K_R M_R}{N},K_R\right\}$ is achievable, and this is within a factor of 2 of the optimum, under one-shot linear schemes. This result shows that (i) the one-shot sum-DoF scales linearly with the aggregate cache size in the network (i.e., the cumulative memory available at all nodes), (ii) the transmitters' and receivers' caches contribute equally in the one-shot sum-DoF, and (iii) caching can offer a throughput gain that scales linearly with the size of the network. To prove the result, we propose an achievable scheme that exploits the redundancy of the content at transmitters' caches to cooperatively zero-force some outgoing interference and availability of the unintended content at receivers' caches to cancel (subtract) some of the incoming interference. We develop a particular pattern for cache placement that maximizes the overall gains of cache-aided transmit and receive interference cancellations. For the converse, we present an integer optimization problem which minimizes the number of communication blocks needed to deliver any set of requested files to the receivers. We then provide a lower bound on the value of this optimization problem, hence leading to an upper bound on the linear one-shot sum-DoF of the network, which is within a factor of 2 of the achievable sum-DoF.
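
The achievable one-shot sum-DoF expression quoted above is easy to evaluate directly; the example numbers below are arbitrary and simply illustrate the linear scaling with the aggregate cache size and the saturation at K_R.

```python
def one_shot_sum_dof(N, K_T, M_T, K_R, M_R):
    """Achievable sum-DoF min{(K_T*M_T + K_R*M_R)/N, K_R} quoted in the abstract."""
    return min((K_T * M_T + K_R * M_R) / N, K_R)

N, K_T, K_R = 12, 4, 6                 # library size, transmitters, receivers (example numbers)
for M_T, M_R in [(1, 1), (3, 2), (6, 4), (12, 12)]:
    dof = one_shot_sum_dof(N, K_T, M_T, K_R, M_R)
    print(f"M_T={M_T:2d}, M_R={M_R:2d}  ->  sum-DoF = {dof:.2f}")
# Once the aggregate cache K_T*M_T + K_R*M_R reaches N*K_R, the sum-DoF saturates at K_R.
```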

Proceedings ArticleDOI
06 Sep 2016
TL;DR: An information-theoretic lower bound on the latency-load tradeoff is proved, which is shown to be within a constant multiplicative gap from the achieved tradeoff at the two end points.
Abstract: We propose a unified coding framework for distributed computing with straggling servers, by introducing a tradeoff between "latency of computation" and "load of communication" for some linear computation tasks. We show that the coded scheme of [1]-[3] that repeats the intermediate computations to create coded multicasting opportunities to reduce communication load, and the coded scheme of [4] that generates redundant intermediate computations to combat against straggling servers can be viewed as special instances of the proposed framework, by considering two extremes of this tradeoff: minimizing either the load of communication or the latency of computation individually. Furthermore, the latency-load tradeoff achieved by the proposed coded framework allows one to systematically operate at any point on that tradeoff to perform distributed computing tasks. We also prove an information-theoretic lower bound on the latency-load tradeoff, which is shown to be within a constant multiplicative gap from the achieved tradeoff at the two end points.

Proceedings ArticleDOI
24 Oct 2016
TL;DR: In this article, the authors describe a lightweight protocol for oblivious evaluation of a pseudorandom function (OPRF) in the presence of semi-honest adversaries, which is particularly efficient when used to generate a large batch of OPRF instances.
Abstract: We describe a lightweight protocol for oblivious evaluation of a pseudorandom function (OPRF) in the presence of semi-honest adversaries. In an OPRF protocol a receiver has an input r; the sender gets output s and the receiver gets output F(s, r), where F is a pseudorandom function and s is a random seed. Our protocol uses a novel adaptation of 1-out-of-2 OT-extension protocols, and is particularly efficient when used to generate a large batch of OPRF instances. The cost to realize m OPRF instances is roughly the cost to realize 3.5m instances of standard 1-out-of-2 OTs (using state-of-the-art OT extension). We explore in detail our protocol's application to semi-honest secure private set intersection (PSI). The fastest state-of-the-art PSI protocol (Pinkas et al., Usenix 2015) is based on efficient OT extension. We observe that our OPRF can be used to remove their PSI protocol's dependence on the bit-length of the parties' items. We implemented both PSI protocol variants and found ours to be 3.1-3.6x faster than Pinkas et al. for PSI of 128-bit strings and sufficiently large sets. Concretely, ours requires only 3.8 seconds to securely compute the intersection of 2^20-size sets, regardless of the bitlength of the items. For very large sets, our protocol is only 4.3x slower than the insecure naive hashing approach for PSI.

Journal ArticleDOI
TL;DR: This paper designs a new random placement and an efficient clique cover-based delivery scheme that achieves this lower bound approximately and provides tight concentration results that show that the average number of transmissions concentrates very well requiring only a polynomial number of packets in the rest of the system parameters.
Abstract: We study a noiseless broadcast link serving $K$ users whose requests arise from a library of $N$ files. Every user is equipped with a cache of size $M$ files each. It has been shown that by splitting all the files into packets and placing individual packets in a random independent manner across all the caches prior to any transmission, at most $N/M$ file transmissions are required for any set of demands from the library. The achievable delivery scheme involves linearly combining packets of different files following a greedy clique cover solution to the underlying index coding problem. This remarkable multiplicative gain of random placement and coded delivery has been established in the asymptotic regime when the number of packets per file $F$ scales to infinity. The asymptotic coding gain obtained is roughly $t=KM/N$ . In this paper, we initiate the finite-length analysis of random caching schemes when the number of packets $F$ is a function of the system parameters $M,N$ , and $K$ . Specifically, we show that the existing random placement and clique cover delivery schemes that achieve optimality in the asymptotic regime can have at most a multiplicative gain of 2 even if the number of packets is exponential in the asymptotic gain $t=K({M}/{N})$ . Furthermore, for any clique cover-based coded delivery and a large class of random placement schemes that include the existing ones, we show that the number of packets required to get a multiplicative gain of $({4}/{3})g$ is at least $O(({g}/{K})(N/M)^{g-1})$ . We design a new random placement and an efficient clique cover-based delivery scheme that achieves this lower bound approximately. We also provide tight concentration results that show that the average (over the random placement involved) number of transmissions concentrates very well requiring only a polynomial number of packets in the rest of the system parameters.

Journal ArticleDOI
TL;DR: A broad survey of techniques aimed at tackling latency in the literature up to August 2014 is offered, finding that classifying techniques according to the sources of delay they alleviate provided the best insight into the following issues.
Abstract: Latency is increasingly becoming a performance bottleneck for Internet Protocol (IP) networks, but historically, networks have been designed with aims of maximizing throughput and utilization. This paper offers a broad survey of techniques aimed at tackling latency in the literature up to August 2014, as well as their merits. A goal of this work is to be able to quantify and compare the merits of the different Internet latency reducing techniques, contrasting their gains in delay reduction versus the pain required to implement and deploy them. We found that classifying techniques according to the sources of delay they alleviate provided the best insight into the following issues: 1) The structural arrangement of a network, such as placement of servers and suboptimal routes, can contribute significantly to latency; 2) each interaction between communicating endpoints adds a Round Trip Time (RTT) to latency, particularly significant for short flows; 3) in addition to base propagation delay, several sources of delay accumulate along transmission paths, today intermittently dominated by queuing delays; 4) it takes time to sense and use available capacity, with overuse inflicting latency on other flows sharing the capacity; and 5) within end systems, delay sources include operating system buffering, head-of-line blocking, and hardware interaction. No single source of delay dominates in all cases, and many of these sources are spasmodic and highly variable. Solutions addressing these sources often both reduce the overall latency and make it more predictable.

Journal ArticleDOI
TL;DR: The results show that, when properly optimized, the flexibility of IoT-Cloud networks can be efficiently exploited to deliver a wide range of IoT services in the context of next generation smart environments, while significantly reducing overall power consumption.
Abstract: The impact of the Internet of Things (IoT) on the evolution toward next generation smart environments (e.g., smart homes, buildings, and cities) will largely depend on the efficient integration of IoT and cloud computing technologies. With the predicted explosion in the number of connected devices and IoT services, current centralized cloud architectures, which tend to consolidate computing and storage resources into a few large data centers, will inevitably lead to excessive network load, end-to-end service latencies, and overall power consumption. Thanks to recent advances in network virtualization and programmability, highly distributed cloud networking architectures are a promising solution to efficiently host, manage, and optimize next generation IoT services in smart environments. In this paper, we mathematically formulate the service distribution problem (SDP) in IoT-Cloud networks, referred to as the IoT-CSDP, as a minimum cost mixed-cast flow problem that can be efficiently solved via linear programming. We focus on energy consumption as the major driver of today’s network and cloud operational costs and characterize the heterogeneous set of IoT-Cloud network resources according to their associated sensing, computing, and transport capacity and energy efficiency. Our results show that, when properly optimized, the flexibility of IoT-Cloud networks can be efficiently exploited to deliver a wide range of IoT services in the context of next generation smart environments, while significantly reducing overall power consumption.
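
The flavor of the formulation can be conveyed with a deliberately tiny linear program: route a unit of IoT service demand either through a distant central data center or a capacity-limited edge cloud, minimizing assumed per-unit energy costs. The real IoT-CSDP in the paper is a minimum-cost mixed-cast flow problem over the full IoT-Cloud network; everything below is an illustrative stand-in.

```python
from scipy.optimize import linprog

# Per-unit energy costs (assumed numbers): transport to/from plus computing at each location.
cost_central = 3 + 1 + 3     # long transport, cheap large data center
cost_edge = 1 + 2 + 1        # short transport, pricier edge cloud

# Decision variables: fraction of the demand served centrally (f1) and at the edge (f2).
res = linprog(
    c=[cost_central, cost_edge],
    A_eq=[[1, 1]], b_eq=[1],          # all of the demand must be served
    A_ub=[[0, 1]], b_ub=[0.6],        # limited edge-cloud computing capacity (assumed)
    bounds=[(0, None), (0, None)],
)
print("fraction served centrally / at the edge:", res.x.round(2),
      " total cost:", round(res.fun, 2))
```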

Proceedings ArticleDOI
01 Sep 2016
TL;DR: This paper presents a deployment study of Narrowband Internet of Things (NB-IoT) using existing LTE infrastructure, examines potential techniques to compensate for the high path loss and high interference, and provides analysis to indicate when partial deployment of NB-IoT is feasible.
Abstract: In 3GPP, a narrowband system based on Long Term Evolution (LTE) has been introduced to support the Internet of Things. This system, named Narrowband Internet of Things (NB-IoT), provides low-cost devices, high coverage (20 dB improvement over LTE/GPRS), long device battery life (more than 10 years), and massive capacity. Latency is relaxed although a delay budget of 10 seconds is the target for exception reports. NB-IoT can be deployed in three different operation modes: (1) stand-alone as a dedicated carrier, (2) in-band within the occupied bandwidth of a wideband LTE carrier, and (3) within the guard-band of an existing LTE carrier. In this paper, we undertake a deployment study of NB-IoT using existing LTE infrastructure. We consider the case when only a fraction of the existing LTE cell sites support NB-IoT (so-called partial deployment of NB-IoT). In this case, NB-IoT devices cannot attach to the best cell if that cell does not support NB-IoT. As a result, the path loss can be very high. In addition, they also suffer from high interference from non-NB-IoT cells. We examine potential techniques to compensate for the high path loss and high interference and provide analysis to indicate when partial deployment of NB-IoT is feasible. We also examine interference issues in asynchronous deployments and study performance.