
Showing papers on "Online algorithm published in 2017"


Journal ArticleDOI
TL;DR: This paper models online energy management as a stochastic optimal power flow problem and proposes an online EMS based on Lyapunov optimization that takes into account the power flow and system operational constraints of a distribution network.
Abstract: Energy management in microgrids is typically formulated as an offline optimization problem for day-ahead scheduling by previous studies. Most of these offline approaches assume perfect forecasting of the renewables, the demands, and the market, which is difficult to achieve in practice. Existing online algorithms, on the other hand, oversimplify the microgrid model by only considering the aggregate supply-demand balance while omitting the underlying power distribution network and the associated power flow and system operational constraints. Consequently, such approaches may result in control decisions that violate real-world constraints. This paper focuses on developing an online energy management strategy (EMS) for real-time operation of microgrids that takes into account the power flow and system operational constraints of a distribution network. We model online energy management as a stochastic optimal power flow problem and propose an online EMS based on Lyapunov optimization. The proposed online EMS is subsequently applied to a real microgrid system. The simulation results demonstrate that the performance of the proposed EMS exceeds that of a greedy algorithm and is close to that of an optimal offline algorithm. Lastly, the effect of the underlying network structure on energy management is observed and analyzed.
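
For readers unfamiliar with the Lyapunov technique behind this EMS, the sketch below shows the generic drift-plus-penalty pattern on a toy single-queue problem. It is illustrative only: prices, demands, and the parameters v and p_max are invented, and the paper's power-flow and network constraints are omitted entirely.

```python
import numpy as np

def drift_plus_penalty_step(q, price, demand, v=10.0, p_max=5.0):
    """One slot of a generic Lyapunov drift-plus-penalty controller.

    q      : virtual queue backlog (e.g., unserved energy demand)
    price  : current grid price (revealed online)
    demand : energy demand arriving this slot
    v      : penalty weight V trading off cost vs. queue stability
    p_max  : per-slot purchase limit

    Minimizes v * price * p - q * p over p in [0, p_max]: the objective is
    linear in p, so we buy at full rate only when the backlog outweighs the
    weighted price.
    """
    p = p_max if q > v * price else 0.0   # bang-bang minimizer of a linear objective
    q_next = max(q + demand - p, 0.0)     # queue update: arrivals minus service
    return p, q_next

# toy run with random prices and demands
rng = np.random.default_rng(0)
q = 0.0
for t in range(5):
    price, demand = rng.uniform(0.5, 2.0), rng.uniform(0.0, 3.0)
    p, q = drift_plus_penalty_step(q, price, demand)
    print(f"t={t} price={price:.2f} buy={p:.1f} backlog={q:.2f}")
```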

285 citations


Journal ArticleDOI
TL;DR: An online algorithm is proposed to learn the unknown dynamic environment, with a guarantee that the performance gap relative to the optimal strategy grows at most logarithmically with time.
Abstract: With mobile devices increasingly able to connect to cloud servers from anywhere, resource-constrained devices can potentially perform offloading of computational tasks to either save local resource usage or improve performance. It is of interest to find optimal assignments of tasks to local and remote devices that can take into account the application-specific profile, availability of computational resources, and link connectivity, and find a balance between energy consumption costs of mobile devices and latency for delay-sensitive applications. We formulate an NP-hard problem to minimize the application latency while meeting prescribed resource utilization constraints. Different from most existing works that either rely on an integer programming solver or on heuristics that offer no theoretical performance guarantees, we propose Hermes, a novel fully polynomial time approximation scheme (FPTAS). We show that, for a subset of problem instances in which the application task graphs can be described as serial trees, Hermes provides a solution with latency no more than $(1+\epsilon)$ times the minimum while incurring complexity that is polynomial in the problem size and $\frac{1}{\epsilon}$. We further propose an online algorithm to learn the unknown dynamic environment and guarantee that the performance gap compared to the optimal strategy is bounded by a logarithmic function of time. Evaluation using a real data set collected from several benchmarks shows that Hermes improves latency by 16 percent compared to a previously published heuristic while increasing CPU computing time by only 0.4 percent of overall latency.
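
The paper's online component is a learning scheme with logarithmic regret. The snippet below is not Hermes itself but a minimal bandit-style stand-in: a UCB rule over a binary local/remote choice with synthetic Gaussian latencies, illustrating how such assignment decisions can be learned online.

```python
import math, random

def ucb_offload(latency_fn, n_rounds=2000, arms=("local", "remote")):
    """UCB-style choice between execution options, minimizing latency:
    pick the arm with the smallest empirical mean minus an exploration
    bonus (a lower confidence bound, since lower latency is better)."""
    counts = {a: 0 for a in arms}
    means = {a: 0.0 for a in arms}
    for t in range(1, n_rounds + 1):
        if t <= len(arms):                 # play each arm once to initialize
            a = arms[t - 1]
        else:
            a = min(arms, key=lambda x: means[x]
                    - math.sqrt(2 * math.log(t) / counts[x]))
        lat = latency_fn(a)
        counts[a] += 1
        means[a] += (lat - means[a]) / counts[a]   # running-mean update
    return means, counts

# toy environment: remote is faster on average but noisier
random.seed(0)
means, counts = ucb_offload(
    lambda a: random.gauss(1.0, 0.2) if a == "local" else random.gauss(0.7, 0.4))
print(means, counts)   # nearly all pulls go to the faster option
```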

233 citations


Posted ContentDOI
17 Feb 2017-bioRxiv
TL;DR: An algorithm for fast Non-Rigid Motion Correction (NoRMCorre) based on template matching is introduced; it can be run in an online mode, registering streaming data at speeds comparable to or even faster than real time.
Abstract: Motion correction is a challenging pre-processing problem that arises early in the analysis pipeline of calcium imaging data sequences. Here we introduce an algorithm for fast Non-Rigid Motion Correction (NoRMCorre) based on template matching. NoRMCorre operates by splitting the field of view into overlapping spatial patches that are registered for rigid translation against a continuously updated template. The estimated alignments are subsequently up-sampled to create a smooth motion field for each frame that can efficiently approximate non-rigid motion in a piecewise-rigid manner. NoRMCorre allows for subpixel registration and can be run in an online mode, registering streaming data at speeds comparable to or even faster than real time. We evaluate the performance of the proposed method with simple yet intuitive metrics and compare against other non-rigid registration methods on two-photon calcium imaging datasets. Open source Matlab and Python code is also made available.
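
As a rough illustration of the template-matching core (not the NoRMCorre implementation, which uses overlapping patches, subpixel refinement, and template updating), the following sketch estimates a per-patch rigid shift by FFT phase correlation on synthetic data.

```python
import numpy as np

def rigid_shift(patch, template):
    """Estimate the integer (dy, dx) translation aligning patch to template
    via FFT phase correlation; subpixel refinement is omitted."""
    f = np.fft.fft2(patch) * np.conj(np.fft.fft2(template))
    corr = np.fft.ifft2(f / (np.abs(f) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = patch.shape
    # unwrap circular shifts into signed displacements
    return (dy - h if dy > h // 2 else dy), (dx - w if dx > w // 2 else dx)

def piecewise_rigid_shifts(frame, template, patch=32):
    """Split the field of view into patches and register each one rigidly,
    yielding a coarse motion field in the piecewise-rigid spirit; the
    upsampling to a smooth per-pixel field is omitted."""
    shifts = {}
    for y in range(0, frame.shape[0] - patch + 1, patch):
        for x in range(0, frame.shape[1] - patch + 1, patch):
            shifts[(y, x)] = rigid_shift(frame[y:y+patch, x:x+patch],
                                         template[y:y+patch, x:x+patch])
    return shifts

# toy check: a frame shifted by (2, -3) relative to its template
rng = np.random.default_rng(1)
template = rng.random((64, 64))
frame = np.roll(template, (2, -3), axis=(0, 1))
print(piecewise_rigid_shifts(frame, template))   # every patch reports (2, -3)
```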

225 citations


Proceedings ArticleDOI
01 Oct 2017
TL;DR: This work presents a deep-learning framework for real-time multiple spatio-temporal (S/T) action localisation and classification that is not only capable of performing S/T detection in real time, but can also perform early action prediction in an online fashion.
Abstract: We present a deep-learning framework for real-time multiple spatio-temporal (S/T) action localisation and classification. Current state-of-the-art approaches work offline, and are too slow to be useful in real-world settings. To overcome their limitations we introduce two major developments. Firstly, we adopt real-time SSD (Single Shot Multi-Box Detector) CNNs to regress and classify detection boxes in each video frame potentially containing an action of interest. Secondly, we design an original and efficient online algorithm to incrementally construct and label ‘action tubes’ from the SSD frame level detections. As a result, our system is not only capable of performing S/T detection in real time, but can also perform early action prediction in an online fashion. We achieve new state-of-the-art results in both S/T action localisation and early action prediction on the challenging UCF101-24 and J-HMDB-21 benchmarks, even when compared to the top offline competitors. To the best of our knowledge, ours is the first real-time (up to 40fps) system able to perform online S/T action localisation on the untrimmed videos of UCF101-24.
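
A bare-bones version of the incremental linking idea, greedy IoU matching of the current frame's detections to existing tubes, might look like the sketch below; the paper's algorithm additionally maintains per-class scores and terminates stale tubes, which are omitted here.

```python
def iou(a, b):
    """Intersection-over-union of two boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def link_tubes(tubes, detections, iou_min=0.3):
    """Greedily extend each tube with the best-overlapping unused detection
    from the current frame; unmatched detections start new tubes."""
    detections = sorted(detections, key=lambda d: -d[1])   # (box, score), best first
    used = set()
    for tube in tubes:
        best, best_iou = None, iou_min
        for i, (box, score) in enumerate(detections):
            if i not in used and iou(tube[-1], box) >= best_iou:
                best, best_iou = i, iou(tube[-1], box)
        if best is not None:
            tube.append(detections[best][0])
            used.add(best)
    tubes += [[box] for i, (box, _) in enumerate(detections) if i not in used]
    return tubes

tubes = link_tubes([], [((10, 10, 50, 50), 0.9)])
tubes = link_tubes(tubes, [((12, 11, 52, 49), 0.8), ((200, 200, 240, 240), 0.7)])
print(tubes)   # first tube extended; a new tube started for the far box
```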

222 citations


Journal ArticleDOI
TL;DR: This survey considers approximation and online algorithms for several classical generalizations of bin packing problem such as geometric bin packing, vector bin packing and various other related problems.

202 citations


Journal ArticleDOI
TL;DR: In this article, a modified online saddle-point (MOSP) scheme is developed, and proved to simultaneously yield sublinear dynamic regret and fit, provided that the accumulated variations of per-slot minimizers and constraints are sublinearly growing with time.
Abstract: Existing approaches to online convex optimization make sequential one-slot-ahead decisions, which lead to (possibly adversarial) losses that drive subsequent decision iterates. Their performance is evaluated by the so-called regret that measures the difference of losses between the online solution and the best yet fixed overall solution in hindsight. The present paper deals with online convex optimization involving adversarial loss functions and adversarial constraints, where the constraints are revealed after decisions are made and may be violated instantaneously but must be satisfied in the long term. Performance of an online algorithm in this setting is assessed by the difference of its losses relative to the best dynamic solution with one-slot-ahead information of the loss function and the constraint (here termed dynamic regret), and the accumulated amount of constraint violations (here termed dynamic fit). In this context, a modified online saddle-point (MOSP) scheme is developed, and proved to simultaneously yield sublinear dynamic regret and fit, provided that the accumulated variations of per-slot minimizers and constraints are sublinearly growing with time. MOSP is also applied to the dynamic network resource allocation task, and it is compared with the well-known stochastic dual gradient method. Numerical experiments demonstrate the performance gain of MOSP relative to the state of the art.
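
A schematic of the online saddle-point pattern the paper builds on is shown below: a primal projected-gradient step on an instantaneous Lagrangian and a dual ascent step on the constraint violation. The actual MOSP recursion, step sizes, and toy problem here are simplifications, not the paper's algorithm.

```python
import numpy as np

def saddle_point_step(x, lam, grad_f, g, grad_g, alpha=0.05, mu=0.05, box=(0.0, 1.0)):
    """One online primal-dual update for loss f_t and constraint g_t(x) <= 0,
    both revealed after committing to x.

    Primal: projected gradient step on the instantaneous Lagrangian.
    Dual:   projected (non-negative) ascent on the constraint violation.
    """
    x_next = np.clip(x - alpha * (grad_f(x) + lam * grad_g(x)), *box)
    lam_next = max(0.0, lam + mu * g(x_next))
    return x_next, lam_next

# toy time-varying problem: f_t(x) = (x - a_t)^2, constraint x - b_t <= 0
x, lam = 0.5, 0.0
for t in range(1, 6):
    a_t, b_t = 0.8 + 0.1 * np.sin(t), 0.6
    x, lam = saddle_point_step(x, lam,
                               grad_f=lambda z: 2 * (z - a_t),
                               g=lambda z: z - b_t,
                               grad_g=lambda z: 1.0)
    print(f"t={t} x={x:.3f} lambda={lam:.3f}")
```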

193 citations


Journal ArticleDOI
TL;DR: In this paper, the authors propose an offline algorithm that solves for the optimal configuration in a specific look-ahead time-window, and an online approximation algorithm with polynomial time-complexity to find the placement in real-time whenever an instance arrives.
Abstract: Mobile micro-clouds are promising for enabling performance-critical cloud applications. However, one challenge therein is the dynamics at the network edge. In this paper, we study how to place service instances to cope with these dynamics, where multiple users and service instances coexist in the system. Our goal is to find the optimal placement (configuration) of instances to minimize the average cost over time, leveraging the ability of predicting future cost parameters with known accuracy. We first propose an offline algorithm that solves for the optimal configuration in a specific look-ahead time-window. Then, we propose an online approximation algorithm with polynomial time-complexity to find the placement in real-time whenever an instance arrives. We analytically show that the online algorithm is $O(1)$-competitive for a broad family of cost functions. Afterwards, the impact of prediction errors is considered and a method for finding the optimal look-ahead window size is proposed, which minimizes an upper bound of the average actual cost. The effectiveness of the proposed approach is evaluated by simulations with both synthetic and real-world (San Francisco taxi) user-mobility traces. The theoretical methodology used in this paper can potentially be applied to a larger class of dynamic resource allocation problems.

165 citations


Journal ArticleDOI
27 Jul 2017
TL;DR: This paper formalizes the semantics for robust online monitoring of partial signals using the notion of robust satisfaction intervals (RoSIs), proposes an efficient algorithm to compute the RoSI, and demonstrates its usage on two real-world case studies from the automotive domain and massively-online CPS education.
Abstract: Signal temporal logic (STL) is a formalism used to rigorously specify requirements of cyberphysical systems (CPS), i.e., systems mixing digital or discrete components in interaction with a continuous environment or analog components. STL is naturally equipped with a quantitative semantics which can be used for various purposes: from assessing the robustness of a specification to guiding searches over the input and parameter space with the goal of falsifying the given property over system behaviors. Algorithms have been proposed and implemented for offline computation of such quantitative semantics, but only few methods exist for an online setting, where one would want to monitor the satisfaction of a formula during simulation. In this paper, we formalize a semantics for robust online monitoring of partial traces, i.e., traces for which there might not be enough data to decide the Boolean satisfaction (and to compute its quantitative counterpart). We propose an efficient algorithm to compute it and demonstrate its usage on two large scale real-world case studies coming from the automotive domain and from CPS education in a Massively Open Online Course setting. We show that savings in computationally expensive simulations far outweigh any overheads incurred by an online approach.
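
To make the RoSI idea concrete, here is a toy monitor for the single formula G_[0,H](x >= c) under an assumed bound on future signal values: the unseen suffix of the trace contributes its worst and best possible robustness, yielding an interval that collapses to the true robustness once the window is fully observed. This is a deliberate simplification; the paper's algorithm handles arbitrary STL formulas.

```python
def rosi_always(samples, horizon, c, value_bounds=(-1.0, 1.0)):
    """Robust satisfaction interval for G_[0,horizon](x >= c) on a partial
    trace. Robustness of the predicate is x - c, and 'always' takes the min
    over the window; missing future samples contribute their worst/best
    possible robustness, so the interval shrinks as data arrives."""
    seen = [x - c for x in samples[:horizon + 1]]
    missing = max(0, horizon + 1 - len(samples))
    lo = min(seen + [value_bounds[0] - c] * missing)
    hi = min(seen + [value_bounds[1] - c] * missing)
    return lo, hi

# monitor as samples stream in; the interval tightens over time
trace = [0.9, 0.7, 0.4, 0.6, 0.8]
for n in range(1, len(trace) + 1):
    print(n, rosi_always(trace[:n], horizon=4, c=0.3))
```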

126 citations


Journal ArticleDOI
TL;DR: This paper considers optimal PEV charging scheduling, where the noncausal information about future PEV arrivals is not known in advance but its statistical information can be estimated, and provides a model predictive control (MPC)-based algorithm with $O(T^3)$ computational complexity.
Abstract: With the increasing adoption of plug-in electric vehicles (PEVs), it is critical to develop efficient charging coordination mechanisms that minimize the cost and impact of PEV integration to the power grid. In this paper, we consider the optimal PEV charging scheduling, where the noncausal information about future PEV arrivals is not known in advance, but its statistical information can be estimated. This leads to an “online” charging scheduling problem that is naturally formulated as a finite-horizon dynamic programming with continuous state space and action space. To avoid the prohibitively high complexity of solving such a dynamic programming problem, we provide a model predictive control (MPC)-based algorithm with computational complexity $O(T^3)$, where $T$ is the total number of time stages. We rigorously analyze the performance gap between the near-optimal solution of the MPC-based approach and the optimal solution for any distributions of exogenous random variables. Furthermore, our rigorous analysis shows that when the random process describing the arrival of charging demands is first-order periodic, the complexity of the proposed algorithm can be reduced to $O(1)$, which is independent of $T$. Extensive simulations show that the proposed online algorithm performs very closely to the optimal online algorithm. The performance gap is smaller than $0.4\%$ in most cases.
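
The receding-horizon pattern behind any MPC scheme can be sketched in a few lines; here the per-plan cost is linear, so each re-plan reduces to filling the cheapest forecast slots first. Prices, forecast noise, and all parameters are invented for illustration, and this is emphatically not the paper's $O(T^3)$ algorithm.

```python
import numpy as np

def mpc_charging(prices_true, energy_needed, p_max=2.0, noise=0.1, seed=0):
    """Receding-horizon charging: at each slot, re-plan against a noisy
    price forecast for the remaining horizon and commit only to the first
    action, then repeat with updated state."""
    rng = np.random.default_rng(seed)
    T, remaining, schedule = len(prices_true), energy_needed, []
    for t in range(T):
        forecast = prices_true[t:] + rng.normal(0, noise, T - t)
        forecast[0] = prices_true[t]        # current price is observed exactly
        plan = np.zeros(T - t)
        need = remaining
        for i in np.argsort(forecast):      # cheapest forecast slots first
            plan[i] = min(p_max, need)
            need -= plan[i]
            if need <= 0:
                break
        schedule.append(float(plan[0]))     # commit to the first slot only
        remaining -= plan[0]
    return schedule

prices = np.array([1.0, 0.6, 0.9, 0.5, 1.2])
print(mpc_charging(prices, energy_needed=4.0))  # charging clusters in cheap slots
```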

123 citations


Journal ArticleDOI
TL;DR: This work proposes a hierarchical two-phase algorithm that integrates key concepts from both matching theory and coalitional games to solve the dynamic controller assignment problem efficiently and proves that the algorithm converges to a near-optimal Nash stable solution within tens of iterations.
Abstract: Software defined networking is increasingly prevalent in data center networks for it enables centralized network configuration and management. However, since switches are statically assigned to controllers and controllers are statically provisioned, traffic dynamics may cause long response time and incur high maintenance cost. To address these issues, we formulate the dynamic controller assignment problem (DCAP) as an online optimization to minimize the total cost caused by response time and maintenance on the cluster of controllers. By applying the randomized fixed horizon control framework, we decompose DCAP into a series of stable matching problems with transfers, guaranteeing a small loss in competitive ratio. Since the matching problem is NP-hard, we propose a hierarchical two-phase algorithm that integrates key concepts from both matching theory and coalitional games to solve it efficiently. Theoretical analysis proves that our algorithm converges to a near-optimal Nash stable solution within tens of iterations. Extensive simulations show that our online approach reduces total cost by about 46%, and achieves better load balancing among controllers compared with static assignment.
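
The matching phase can be grounded in the classic deferred-acceptance (Gale-Shapley) procedure. The sketch below is a capacity-constrained version with invented preference lists, standing in only for the stable-matching ingredient; the paper's transfers and coalitional-game phase are beyond this sketch.

```python
def gale_shapley(switch_prefs, ctrl_prefs, capacity):
    """Deferred acceptance with capacities: switches propose to controllers
    in preference order; a controller keeps its `capacity` most-preferred
    proposers and rejects the rest, who continue proposing."""
    free = list(switch_prefs)                        # switches still proposing
    next_idx = {s: 0 for s in switch_prefs}          # next controller to try
    accepted = {c: [] for c in ctrl_prefs}
    rank = {c: {s: i for i, s in enumerate(p)} for c, p in ctrl_prefs.items()}
    while free:
        s = free.pop()
        c = switch_prefs[s][next_idx[s]]
        next_idx[s] += 1
        accepted[c].append(s)
        if len(accepted[c]) > capacity:              # evict least-preferred proposer
            accepted[c].sort(key=lambda x: rank[c][x])
            free.append(accepted[c].pop())
    return accepted

switch_prefs = {"s1": ["c1", "c2"], "s2": ["c1", "c2"], "s3": ["c1", "c2"]}
ctrl_prefs = {"c1": ["s3", "s1", "s2"], "c2": ["s2", "s1", "s3"]}
print(gale_shapley(switch_prefs, ctrl_prefs, capacity=2))
# {'c1': ['s3', 's1'], 'c2': ['s2']} -- a stable assignment
```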

118 citations


Journal ArticleDOI
TL;DR: This work proposes Tiles, an algorithm that extracts overlapping communities and tracks their evolution in time following an online iterative procedure, and compares it with state-of-the-art community detection algorithms on both synthetic and real world networks having annotated community structure.
Abstract: Community discovery has emerged during the last decade as one of the most challenging problems in social network analysis. Many algorithms have been proposed to find communities on static networks, i.e. networks which do not change in time. However, social networks are dynamic realities (e.g. call graphs, online social networks): in such scenarios static community discovery fails to identify a partition of the graph that is semantically consistent with the temporal information expressed by the data. In this work we propose Tiles, an algorithm that extracts overlapping communities and tracks their evolution in time following an online iterative procedure. Our algorithm operates following a domino effect strategy, dynamically recomputing nodes community memberships whenever a new interaction takes place. We compare Tiles with state-of-the-art community detection algorithms on both synthetic and real world networks having annotated community structure: our experiments show that the proposed approach is able to guarantee lower execution times and better correspondence with the ground truth communities than its competitors. Moreover, we illustrate the specifics of the proposed approach by discussing the properties of the communities it identifies.

Proceedings ArticleDOI
05 Jun 2017
TL;DR: This paper proposes a novel online algorithm that optimally solves a series of subproblems with a carefully designed logarithmic objective, finally producing feasible solutions for edge cloud resource allocation over time and proves via rigorous analysis that the online algorithm can provide a parameterized competitive ratio.
Abstract: As clouds move to the network edge to facilitate mobile applications, edge cloud providers are facing new challenges on resource allocation. As users may move and resource prices may vary arbitrarily, resources in edge clouds must be allocated and adapted continuously in order to accommodate such dynamics. In this paper, we first formulate this problem with a comprehensive model that captures the key challenges, then introduce a gap-preserving transformation of the problem, and propose a novel online algorithm that optimally solves a series of subproblems with a carefully designed logarithmic objective, finally producing feasible solutions for edge cloud resource allocation over time. We further prove via rigorous analysis that our online algorithm can provide a parameterized competitive ratio, without requiring any a priori knowledge on either the resource price or the user mobility. Through extensive experiments with both real-world and synthetic data, we further confirm the effectiveness of the proposed algorithm. We show that the proposed algorithm achieves near-optimal results with an empirical competitive ratio of about 1.1, reduces the total cost by up to 4x compared to static approaches, and outperforms the online greedy one-shot optimizations by up to 70%.

Proceedings ArticleDOI
19 Apr 2017
TL;DR: This paper formally defines a novel dynamic online task assignment problem, called the trichromatic online matching in real-time spatial crowdsourcing (TOM) problem, which is proven to be NP-hard and presents a threshold-based randomized algorithm that not only guarantees a tighter competitive ratio but also includes an adaptive optimization technique, which can quickly learn the optimal threshold for the randomized algorithm.
Abstract: The prevalence of mobile Internet techniques and Online-To-Offline (O2O) business models has led the emergence of various spatial crowdsourcing (SC) platforms in our daily life. A core issue of SC is to assign real-time tasks to suitable crowd workers. Existing approaches usually focus on the matching of two types of objects, tasks and workers, or assume the static offline scenarios, where the spatio-temporal information of all the tasks and workers is known in advance. Recently, some new emerging O2O applications incur new challenges: SC platforms need to assign three types of objects, tasks, workers and workplaces, and support dynamic real-time online scenarios, where the existing solutions cannot handle. In this paper, based on the aforementioned challenges, we formally define a novel dynamic online task assignment problem, called the trichromatic online matching in real-time spatial crowdsourcing (TOM) problem, which is proven to be NP-hard. Thus, we first devise an efficient greedy online algorithm. However, the greedy algorithm can be trapped into local optimal solutions easily. We then present a threshold-based randomized algorithm that not only guarantees a tighter competitive ratio but also includes an adaptive optimization technique, which can quickly learn the optimal threshold for the randomized algorithm. Finally, we verify the effectiveness and efficiency of the proposed methods through extensive experiments on real and synthetic datasets.
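
A skeleton of the threshold rule at the heart of such randomized online matching is shown below: accept an arriving triple only if its utility clears a threshold and none of its three objects is already matched. The paper's algorithm additionally randomizes the threshold and learns it adaptively, which this sketch omits; the stream and threshold value are invented.

```python
import random

def threshold_matching(stream, threshold):
    """Online trichromatic matching skeleton: greedily accept an arriving
    (task, worker, workplace) triple if its utility clears the threshold
    and all three objects are still unmatched."""
    used, matches = set(), []
    for task, worker, place, utility in stream:
        if utility >= threshold and not {task, worker, place} & used:
            matches.append((task, worker, place))
            used |= {task, worker, place}
    return matches

random.seed(0)
stream = [(f"t{i}", f"w{random.randint(0, 3)}", f"p{random.randint(0, 2)}",
           random.random()) for i in range(10)]
print(threshold_matching(stream, threshold=0.5))
```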

Journal ArticleDOI
TL;DR: This work formulates a sum-rate maximization problem of joint resource block and power allocation for the D2D links, which results in a non-convex problem that is then transformed into a more tractable convex optimization problem.
Abstract: The specific family of device-to-device (D2D) communication underlying downlink cellular networks eliminates the reliance on base stations by allowing direct transmission between two devices in each other’s close proximity, which reuse the cellular resource blocks to enhance the attainable network capacity and spectrum efficiency. By considering downlink resource reuse and energy harvesting (EH), our goal is to maximize the sum-rate of the D2D links without degrading the quality-of-service requirement of the cellular users. We formulate a sum-rate maximization problem of joint resource block and power allocation for the D2D links, which results in a non-convex problem that we then transform into a more tractable convex optimization problem. Based on the results of our Lagrangian constrained optimization, we propose joint resource block and power allocation algorithms for the D2D links, when there is non-causal (offline) and causal (online) knowledge of the EH profiles at the D2D transmitters. The performance of the algorithms is quantified using simulation results for different network parameter settings, where our online algorithm performs close to the upper bound provided by our offline algorithm.

Journal ArticleDOI
TL;DR: This paper designs efficient online auctions for cloud resource provisioning that run in polynomial time, guarantee truthfulness, and achieve optimal social welfare for the cloud ecosystem.
Abstract: This paper studies the cloud market for computing jobs with completion deadlines, and designs efficient online auctions for cloud resource provisioning. A cloud user bids for future cloud resources to execute its job. Each bid includes: 1) a utility, reflecting the amount that the user is willing to pay for executing its job and 2) a soft deadline, specifying the preferred finish time of the job, as well as a penalty function that characterizes the cost of violating the deadline. We target cloud job auctions that execute in an online fashion, run in polynomial time, provide truthfulness guarantees, and achieve optimal social welfare for the cloud ecosystem. Towards these goals, we leverage the following classic and new auction design techniques. First, we adapt the posted pricing auction framework for eliciting truthful online bids. Second, we address the challenge posed by soft deadline constraints through a new technique of compact exponential-size LPs coupled with dual separation oracles. Third, we develop efficient social welfare approximation algorithms using the classic primal-dual framework based on both LP duals and Fenchel duals. Empirical studies driven by real-world traces verify the efficacy of our online auction design.

Proceedings ArticleDOI
21 May 2017
TL;DR: In this article, a novel approach based on the online secretary framework is proposed to find the desired set of neighboring fog nodes, and an online algorithm is developed to enable a task initiating fog node to decide on which other nodes can be used as part of its fog network, to offload computational tasks, without knowing any prior information on the future arrivals of those other nodes.
Abstract: Fog computing is seen as a promising approach to perform distributed, low-latency computation for supporting Internet of Things applications. However, due to the unpredictable arrival of available neighboring fog nodes, the dynamic formation of a fog network can be challenging. In essence, a given fog node must smartly select the set of neighboring fog nodes that can provide low-latency computations. In this paper, this problem of fog network formation and task distribution is studied considering a hybrid cloud-fog architecture. The goal of the proposed framework is to minimize the maximum computational latency by enabling a given fog node to form a suitable fog network, under uncertainty on the arrival process of neighboring fog nodes. To solve this problem, a novel approach based on the online secretary framework is proposed. To find the desired set of neighboring fog nodes, an online algorithm is developed to enable a task initiating fog node to decide on which other nodes can be used as part of its fog network, to offload computational tasks, without knowing any prior information on the future arrivals of those other nodes. Simulation results show that the proposed online algorithm can successfully select an optimal set of neighboring fog nodes while achieving a latency that is as small as the one resulting from an ideal, offline scheme that has complete knowledge of the system. The results also show how, using the proposed approach, the computational tasks can be properly distributed between the fog network and a remote cloud server.
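
The classic single-choice secretary rule that this framework generalizes fits in a few lines: observe the first n/e arrivals without committing, then accept the first candidate that beats the best seen so far. The latencies here are synthetic, and the paper's algorithm extends the idea to selecting multiple fog nodes.

```python
import math, random

def secretary_select(latencies):
    """Classic secretary rule for picking a low-latency node from an
    online arrival sequence: sample the first n/e arrivals, then accept
    the first arrival at least as good as the sample's best."""
    n = len(latencies)
    cutoff = max(1, int(n / math.e))
    benchmark = min(latencies[:cutoff])      # best latency in the sample phase
    for i in range(cutoff, n):
        if latencies[i] <= benchmark:
            return i, latencies[i]           # hire on the spot
    return n - 1, latencies[-1]              # forced to take the last arrival

random.seed(2)
arrivals = [random.uniform(5, 50) for _ in range(20)]
print("selected:", secretary_select(arrivals), "optimum:", min(arrivals))
```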

Journal ArticleDOI
TL;DR: An efficient online algorithm LS_OL is designed using a simple majority voting rule that can differentiate high and low quality labelers over time, and is shown to have a regret (with respect to always using the optimal set of labelers) of $O(\log^2 T)$ uniformly in time.
Abstract: We consider a crowd-sourcing problem where in the process of labeling massive data sets, multiple labelers with unknown annotation quality must be selected to perform the labeling task for each incoming data sample or task, with the results aggregated using, for example, a simple or weighted majority voting rule. In this paper, we approach this labeler selection problem in an online learning framework, whereby the quality of the labeling outcome by a specific set of labelers is estimated so that the learning algorithm over time learns to use the most effective combinations of labelers. This type of online learning in some sense falls under the family of multi-armed bandit (MAB) problems, but with a distinct feature not commonly seen: since the data is unlabeled to begin with and the labelers’ quality is unknown, their labeling outcome (or reward in the MAB context) cannot be readily verified; it can only be estimated against the crowd and be known probabilistically. We design an efficient online algorithm LS_OL using a simple majority voting rule that can differentiate high and low quality labelers over time, and show that it has a regret (with respect to always using the optimal set of labelers) of $O(\log^2 T)$ uniformly in time under mild assumptions on the collective quality of the crowd, and is thus regret free in the average sense. We discuss further performance improvement by using a more sophisticated majority voting rule, and show how to detect and filter out “bad” (dishonest, malicious or very incompetent) labelers to further enhance the quality of crowd-sourcing. Extension to the case when a labeler’s quality is task-type dependent is also discussed using techniques from the literature on continuous arms. We establish a lower bound on the order of $O(\log T D_{2}(T))$, where $D_{2}(T)$ is an arbitrary function such that $D_{2}(T) > O(1)$. We further provide a matching upper bound through a minor modification of the algorithm we proposed and studied earlier on. We present numerical results using both simulations and a set of images labeled by Amazon Mechanical Turk workers.
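
The core trick, scoring labelers by agreement with the crowd's majority vote rather than by unavailable ground truth, can be illustrated with a small simulation. Accuracies below are invented, and the exploration schedule and regret analysis of LS_OL are omitted.

```python
import random
from collections import Counter

def agreement_scores(labelers, n_tasks, seed=0):
    """Score each labeler by how often it agrees with the simple majority
    vote on binary tasks; without ground truth this is the observable
    proxy for quality, and it separates good labelers from bad over time."""
    rng = random.Random(seed)
    agree, total = Counter(), Counter()
    for _ in range(n_tasks):
        truth = rng.randint(0, 1)
        votes = {name: truth if rng.random() < acc else 1 - truth
                 for name, acc in labelers.items()}
        majority = Counter(votes.values()).most_common(1)[0][0]
        for name, v in votes.items():
            agree[name] += (v == majority)
            total[name] += 1
    return {name: round(agree[name] / total[name], 3) for name in labelers}

# three decent labelers and one near-random one: agreement scores separate them
print(agreement_scores({"a": 0.9, "b": 0.85, "c": 0.8, "d": 0.55}, n_tasks=2000))
```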

Journal ArticleDOI
TL;DR: This paper designs truthful, polynomial time auctions to achieve social welfare maximization and/or the provider’s profit maximization with good competitive ratios, and adopts a new application of Fenchel duality in the primal-dual framework, which provides richer structures for convex programs than the commonly used Lagrangian duality.
Abstract: Auction design has recently been studied for dynamic resource bundling and virtual machine (VM) provisioning in IaaS clouds, but is mostly restricted to one-shot or offline settings. This paper targets a more realistic case of online VM auction design, where: 1) cloud users bid for resources into the future to assemble customized VMs with desired occupation durations, possibly located in different data centers; 2) the cloud provider dynamically packs multiple types of resources on heterogeneous physical machines (servers) into the requested VMs; 3) the operational costs of servers are considered in resource allocation; and 4) both social welfare and the cloud provider’s net profit are to be maximized over the system running span. We design truthful, polynomial time auctions to achieve social welfare maximization and/or the provider’s profit maximization with good competitive ratios. Our mechanisms consist of two main modules: 1) an online primal-dual optimization framework for VM allocation to maximize the social welfare with server costs, and for revealing the payments through the dual variables to guarantee truthfulness; and 2) a randomized reduction algorithm to convert the social welfare maximizing auctions to ones that provide a maximal expected profit for the provider, with competitive ratios comparable to those for social welfare. We adopt a new application of Fenchel duality in our primal-dual framework, which provides richer structures for convex programs than the commonly used Lagrangian duality, and our optimization framework is general and expressive enough to handle various convex server cost functions. The efficacy of the online auctions is validated through careful theoretical analysis and trace-driven simulation studies.

Proceedings ArticleDOI
19 Jun 2017
TL;DR: In this paper, the authors give new results for the set cover problem in the fully dynamic model, where the set of "active" elements to be covered changes over time, and the goal is to maintain a near-optimal solution for the currently active elements, while making few changes in each timestep.
Abstract: In this paper, we give new results for the set cover problem in the fully dynamic model. In this model, the set of "active" elements to be covered changes over time. The goal is to maintain a near-optimal solution for the currently active elements, while making few changes in each timestep. This model is popular in both dynamic and online algorithms: in the former, the goal is to minimize the update time of the solution, while in the latter, the recourse (number of changes) is bounded. We present generic techniques for the dynamic set cover problem inspired by the classic greedy and primal-dual offline algorithms for set cover. The former leads to a competitive ratio of $O(\log n_t)$, where $n_t$ is the number of currently active elements at timestep $t$, while the latter yields competitive ratios dependent on $f_t$, the maximum number of sets that a currently active element belongs to. We demonstrate that these techniques are useful for obtaining tight results in both settings: update time bounds and limited recourse, exhibiting algorithmic techniques common to these two parallel threads of research.
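
For reference, the greedy baseline that the dynamic techniques build from is shown below on a toy instance, naively re-run as the active set churns. The paper's contribution is maintaining such a solution under arrivals and departures with bounded update time or recourse, which a full re-run does not provide.

```python
def greedy_cover(active, sets):
    """Plain greedy set cover over the currently active elements:
    repeatedly pick the set covering the most uncovered elements."""
    uncovered, chosen = set(active), []
    while uncovered:
        best = max(sets, key=lambda name: len(sets[name] & uncovered))
        if not sets[best] & uncovered:
            raise ValueError("active elements not coverable")
        chosen.append(best)
        uncovered -= sets[best]
    return chosen

sets = {"A": {1, 2, 3}, "B": {3, 4}, "C": {4, 5, 6}}
for t, active in enumerate([{1, 2, 4, 5}, {1, 4}, {1, 4, 6}, {6}]):
    print(t, greedy_cover(active, sets))   # recomputed from scratch each timestep
```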

Journal ArticleDOI
TL;DR: This work introduces a new recursive aggregation procedure called Bernstein Online Aggregation (BOA), which is optimal for the model selection aggregation problem in the bounded iid setting for the square loss and is the first online algorithm that satisfies the fast rate of convergence.
Abstract: We introduce a new recursive aggregation procedure called Bernstein Online Aggregation (BOA). Its exponential weights include a second order refinement. The procedure is optimal for the model selection aggregation problem in the bounded iid setting for the square loss: the excess risk of its batch version achieves the fast rate of convergence $\log(M)/n$ in deviation. The BOA procedure is the first online algorithm that satisfies this optimal fast rate. The second order refinement is required to achieve optimality in deviation, as the classical exponential weights cannot be optimal; see Audibert (Advances in neural information processing systems. MIT Press, Cambridge, MA, 2007). This refinement is settled thanks to a new stochastic conversion that estimates the cumulative predictive risk in any stochastic environment with observable second order terms. The observable second order term is shown to be sufficiently small to assert the fast rate in the iid setting when the loss is Lipschitz and strongly convex. We also introduce a multiple learning rates version of BOA. This fully adaptive BOA procedure is also optimal, up to a $\log\log(n)$ factor.
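
Schematically, the second-order refinement replaces the usual exp(-η·loss) update with one that also penalizes the squared instantaneous regret. The toy below follows that pattern; it is a paraphrase of the idea, not the paper's exact BOA recursion, and learning-rate tuning is ignored.

```python
import numpy as np

def second_order_weights_update(weights, expert_losses, eta=0.5):
    """Second-order exponential-weights step in the spirit of BOA: each
    expert is discounted by its instantaneous regret AND by the squared
    regret (the second-order refinement)."""
    agg_loss = float(weights @ expert_losses)   # loss of the aggregated forecaster
    regret = expert_losses - agg_loss           # instantaneous (linearized) regrets
    w = weights * np.exp(-eta * regret - eta**2 * regret**2)
    return w / w.sum()

rng = np.random.default_rng(3)
w = np.ones(3) / 3
for t in range(200):
    losses = rng.normal([0.2, 0.5, 0.8], 0.1)   # expert 0 is best on average
    w = second_order_weights_update(w, losses)
print(w.round(3))                               # mass concentrates on expert 0
```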

Proceedings ArticleDOI
05 Jun 2017
TL;DR: This paper designs the very first approximation algorithm with an approximation ratio of 2K for the NFV-enabled multicasting problem and proposes an online algorithm with a competitive ratio of O(log n) when K = 1, where n is the number of nodes in the network.
Abstract: Multicasting is a fundamental functionality of networks for many applications including online conferencing, event monitoring, video streaming, and system monitoring in data centers. To make multicasting reliable, secure, and scalable, a service chain consisting of network functions (e.g., firewalls, Intrusion Detection Systems (IDSs), and transcoders) usually is associated with each multicast request. Such a multicast request is referred to as an NFV-enabled multicast request. In this paper we study NFV-enabled multicasting in a Software-Defined Network (SDN) with the aims to minimize the implementation cost of each NFV-enabled multicast request or maximize the network throughput for a sequence of NFV-enabled requests, subject to network resource capacity constraints. We first formulate novel NFV-enabled multicasting and online NFV-enabled multicasting problems. We then devise the very first approximation algorithm with an approximation ratio of 2K for the NFV-enabled multicasting problem if the number of servers for implementing the network functions of each request is no more than a constant K. We also study dynamic admissions of NFV-enabled multicast requests without the knowledge of future request arrivals with the objective to maximize the network throughput, for which we propose an online algorithm with a competitive ratio of O(log n) when K = 1, where n is the number of nodes in the network. We finally evaluate the performance of the proposed algorithms through experimental simulations. Experimental results demonstrate that the proposed algorithms outperform other existing heuristics.

Proceedings Article
21 Oct 2017
TL;DR: ZOO-ADMM, proposed in this paper, is a zeroth-order online alternating direction method of multipliers (ADMM) algorithm that enjoys the dual advantages of gradient-free operation and of employing ADMM to accommodate complex structured regularizers.
Abstract: In this paper, we design and analyze a new zeroth-order online algorithm, namely, the zeroth-order online alternating direction method of multipliers (ZOO-ADMM), which enjoys the dual advantages of gradient-free operation and of employing ADMM to accommodate complex structured regularizers. Compared to the first-order gradient-based online algorithm, we show that ZOO-ADMM requires $\sqrt{m}$ times more iterations, leading to a convergence rate of $O(\sqrt{m}/\sqrt{T})$, where $m$ is the number of optimization variables, and $T$ is the number of iterations. To accelerate ZOO-ADMM, we propose two minibatch strategies: gradient sample averaging and observation averaging, resulting in an improved convergence rate of $O(\sqrt{1+q^{-1}m}/\sqrt{T})$, where $q$ is the minibatch size. In addition to convergence analysis, we also demonstrate applications of ZOO-ADMM in signal processing, statistics, and machine learning.
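
The gradient-free ingredient in such methods is a two-point random-direction estimator; a minimal version is shown below, with the ADMM splitting, minibatching, and averaging from the paper omitted.

```python
import numpy as np

def zo_gradient(f, x, mu=1e-4, rng=None):
    """Two-point zeroth-order gradient estimate: probe f along a random
    Gaussian direction u. Since E[u u^T] = I, the scaled finite difference
    estimates the gradient of a smoothed version of f without any
    gradient oracle."""
    rng = rng or np.random.default_rng()
    u = rng.standard_normal(x.shape)
    return (f(x + mu * u) - f(x)) / mu * u

# sanity check on f(x) = ||x||^2 (true gradient 2x): average many estimates
rng = np.random.default_rng(4)
x = np.array([1.0, -2.0])
est = np.mean([zo_gradient(lambda z: z @ z, x, rng=rng) for _ in range(20000)],
              axis=0)
print(est.round(2), "vs true", 2 * x)
```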

Journal ArticleDOI
TL;DR: This survey explains the models for online algorithms with advice, motivates the study in general, presents some examples of the work that has been carried out, and includes an extensive set of references, organized by problem studied.
Abstract: In online scenarios requests arrive over time, and each request must be serviced in an irrevocable manner before the next request arrives. Online algorithms with advice is an area of research where one attempts to measure how much knowledge of future requests is necessary to achieve a given performance level, as defined by the competitive ratio. When this knowledge, the advice, is obtainable, this leads to practical algorithms, called semi-online algorithms. On the other hand, each negative result gives robust results about the limitations of a broad range of semi-online algorithms. This survey explains the models for online algorithms with advice, motivates the study in general, presents some examples of the work that has been carried out, and includes an extensive set of references, organized by problem studied.

Journal ArticleDOI
TL;DR: In this article, a generative model is proposed to generate items from their underlying topics, and an efficient online algorithm based on particle learning is developed for inferring both latent parameters and states of the model.
Abstract: Online interactive recommender systems strive to promptly suggest to consumers appropriate items (e.g., movies, news articles) according to the current context including both the consumer and item content information. However, such context information is often unavailable in practice for the recommendation, where only the users' interaction data on items can be utilized. Moreover, the lack of interaction records, especially for new users and items, worsens the performance of recommendation further. To address these issues, collaborative filtering (CF), one of the recommendation techniques relying on the interaction data only, as well as the online multi-armed bandit mechanisms, capable of achieving the balance between exploitation and exploration, are adopted in the online interactive recommendation settings, by assuming independent items (i.e., arms). Nonetheless, the assumption rarely holds in reality, since the real-world items tend to be correlated with each other (e.g., two articles with similar topics). In this paper, we study online interactive collaborative filtering problems by considering the dependencies among items. We explicitly formulate the item dependencies as the clusters on arms, where the arms within a single cluster share the similar latent topics. In light of the topic modeling techniques, we come up with a generative model to generate the items from their underlying topics. Furthermore, an efficient online algorithm based on particle learning is developed for inferring both latent parameters and states of our model. Additionally, our inferred model can be naturally integrated with existing multi-armed selection strategies in the online interactive collaborative filtering setting. Empirical studies on two real-world applications, online recommendations of movies and news, demonstrate both the effectiveness and efficiency of the proposed approach.

Journal ArticleDOI
TL;DR: This paper first tackles the single-slot version of the D2D-LB problem, shows that it is NP-hard, and designs a polynomial-time offline algorithm with a small approximation ratio; building on this, it designs an online algorithm for the multi-slot problem with a sound competitive ratio.
Abstract: The device-to-device load balancing (D2D-LB) paradigm has been advocated in recent small-cell architecture design for cellular networks. The idea is to exploit inter-cell D2D communication and dynamically relay traffic of a busy cell to adjacent under-utilized cells to improve spectrum temporal efficiency, addressing a fundamental drawback of small-cell architecture. Technical challenges of D2D-LB have been studied in previous works. The potential of D2D-LB, however, cannot be fully realized without providing a proper incentive mechanism for device participation. In this paper, we address this economic challenge using an online procurement auction framework. In our design, multiple sellers (devices) submit bids to participate in D2D-LB and the auctioneer (cellular service provider) evaluates all the bids and decides to purchase a subset of them to fulfill the load balancing requirement with the minimum social cost. Different from similar auction design studies for cellular offloading, the battery limit of relaying devices imposes a time-coupled capacity constraint that turns the underlying problem into a challenging multi-slot one. Furthermore, the dynamics in the input to the multi-slot auction problem emphasize the need for online algorithm design. We first tackle the single-slot version of the problem, show that it is NP-hard, and design a polynomial-time offline algorithm with a small approximation ratio. Building upon the single-slot results, we design an online algorithm for the multi-slot problem with a sound competitive ratio. Our auction algorithm design ensures that truthful bidding is a dominant strategy for devices. Extensive experiments using real-world traces demonstrate that our proposed solution achieves near offline-optimum and reduces the cost by 45% compared with an alternative heuristic.

Journal ArticleDOI
TL;DR: A distributed online algorithm is developed using optimal stopping theory, in which, at each meeting event, nodes make adaptive online decisions on whether the communication opportunity should be exploited to deliver data packets; simulations are carried out to evaluate the scalability of the proposed schemes.
Abstract: Delivery delay and communication costs are two conflicting design issues for mobile opportunistic networks with nonreplenishable energy resources. In this paper, we study the optimal data dissemination for resource constrained mobile opportunistic networks, i.e., the delay-constrained least-cost multicasting in mobile opportunistic networks. We formally formulate the problem and introduce a centralized heuristic algorithm which aims to discover a tree for multicasting, in order to meet the delay constraint and achieve low communication cost. While the above algorithm can be implemented by each individual node, it is intrinsically centralized (requiring global information) and, thus, impractical for real-world implementation. However, it offers useful insights for the development of a distributed scheme. The essence of the centralized approach is to first learn the probabilities to deliver the data along different paths to different nodes and then decide the optimal multicast tree by striking the balance between cost and delivery probability. In mobile opportunistic networks, even if the optimal routing tree can be computed by the centralized solution, it is the “best” only on a statistic basis for a large number of data packets. It is not necessarily the best solution for every individual transmission. Based on the above observation, we develop a distributed online algorithm using optimal stopping theory, in which, at each meeting event, nodes make adaptive online decisions on whether the communication opportunity should be exploited to deliver data packets. We carry out simulations to evaluate the scalability of the proposed schemes. Furthermore, we prototype the proposed distributed online multicast algorithm using Nexus tablets and conduct an experiment that involves 37 volunteers and lasts for 21 days to demonstrate its effectiveness.

Journal ArticleDOI
TL;DR: An online algorithm is designed that decouples the original offline problem over time by constructing a series of regularized subproblems, solvable at each corresponding time slot using the output of the previous time slot, and achieves a parameterized competitive ratio for arbitrarily dynamic workloads and resource prices.
Abstract: The problem of dynamic resource allocation for service provisioning in multi-tier distributed clouds is particularly challenging due to the coexistence of several factors: the need for joint allocation of cloud and network resources, the need for online decision-making under time-varying service demands and resource prices, and the reconfiguration cost associated with changing resource allocation decisions. We study this problem from an online optimization perspective to address all these challenges. We design an online algorithm that decouples the original offline problem over time by constructing a series of regularized subproblems, solvable at each corresponding time slot using the output of the previous time slot. We prove that, without prediction beyond the current time slot, our algorithm achieves a parameterized competitive ratio for arbitrarily dynamic workloads and resource prices. If prediction is available, we demonstrate that existing prediction-based control algorithms lack worst case performance guarantees for our problem, and we design two novel predictive control algorithms that inherit the theoretical guarantees of our online algorithm, while exhibiting improved practical performance. We conduct evaluations in a variety of settings based on real-world dynamic inputs and show that, without prediction, our online algorithm achieves up to nine times total cost reduction compared with the sequence of greedy one-shot optimizations and at most three times the offline optimum; with moderate predictions, our control algorithms can achieve two times total cost reduction compared with existing prediction-based algorithms.
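
As a cartoon of the regularization technique, each slot can solve a single-slot problem whose objective adds a divergence from the previous decision in place of the hard reconfiguration cost. All constants, the entropic regularizer form, and the one-dimensional grid-search solve below are invented for illustration; the paper's subproblems and guarantees are considerably more involved.

```python
import numpy as np

def regularized_online_step(x_prev, demand, price, switch_cost=1.0, eps=0.1):
    """One slot of regularization-based online allocation: the switching
    cost is replaced by a smooth relative-entropy-style divergence from
    the previous decision, and the resulting single-slot problem is
    solved (here by brute-force grid search over a scalar allocation)."""
    grid = np.linspace(0.0, 10.0, 1001)
    obj = (price * grid                                     # resource price
           + 100.0 * np.maximum(demand - grid, 0.0)        # under-provision penalty
           + (switch_cost / eps) * (grid + eps)
           * np.log((grid + eps) / (x_prev + eps)))        # movement regularizer
    return float(grid[np.argmin(obj)])

x = 0.0
for t, (d, p) in enumerate([(3, 1.0), (5, 1.2), (2, 0.8), (6, 2.0)]):
    x = regularized_online_step(x, d, p)
    print(f"t={t} demand={d} price={p} alloc={x:.2f}")   # tracks demand smoothly
```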

Proceedings ArticleDOI
01 Oct 2017
TL;DR: This paper presents a novel input-sensitive analysis of a deterministic online algorithm for the minimum metric bipartite matching problem, showing that the cost of the edges of the optimal matching inside each larger ball is proportional to the weight times the radius of that ball.
Abstract: We present a novel input-sensitive analysis of a deterministic online algorithm for the minimum metric bipartite matching problem. We show that, in the adversarial model, for any metric space $M$ and a set of $n$ servers $S$, the competitive ratio of this algorithm is $O(\mu_M(S)\log^2 n)$; here $\mu_M(S)$ is the maximum ratio of the traveling salesman tour and the diameter of any subset of $S$. It is straightforward to show that any algorithm, even with complete knowledge of $M$ and $S$, will have a competitive ratio of $\Omega(\mu_M(S))$. So, the performance of this algorithm is sensitive to the input and near-optimal for any given $S$ and $M$. As consequences, we also achieve the following results: (i) if $S$ is a set of points on a line, then $\mu_M(S) = \Theta(1)$ and the competitive ratio is $O(\log^2 n)$; and (ii) if $S$ is a set of points spanning a subspace with doubling dimension $d$, then $\mu_M(S) = O(n^{1-1/d})$ and the competitive ratio is $O(n^{1-1/d}\log^2 n)$. Prior to this result, the previous best-known algorithm for the line metric has a competitive ratio of $O(n^{0.59})$ and requires both $S$ and the request set $R$ to be on a line. There is also an $O(\log n)$ competitive algorithm in the weaker oblivious adversary model. To obtain our results, we partition the requests into well-separated clusters and replace each cluster with a small and a large weighted ball; the weight of a ball is the number of requests in the cluster. We show that the cost of the online matching can be expressed as the sum of the weight times radius of the smaller balls. We also show that the cost of the edges of the optimal matching inside each larger ball is proportional to the weight times the radius of the larger ball. We then use a simple variant of the well-known Vitali covering lemma to relate the radii of these balls and obtain the competitive ratio.

Journal ArticleDOI
TL;DR: The experimental results show that the proposed sharing-aware online algorithms activate a smaller average number of physical servers than sharing-oblivious algorithms: by directly reducing the amount of required memory, they need fewer physical servers to instantiate the VM instances requested by users.
Abstract: One of the key problems that cloud providers need to efficiently solve when offering on-demand virtual machine (VM) instances to a large number of users is the VM Packing problem, a variant of Bin Packing. The VM Packing problem requires determining the assignment of user requested VM instances to physical servers such that the number of physical servers is minimized. In this paper, we consider a more general variant of the VM Packing problem, called the Sharing-Aware VM Packing problem, that has the same objective as the standard VM Packing problem, but allows the VM instances collocated on the same physical server to share memory pages, thus reducing the amount of cloud resources required to satisfy the users’ demand. Our main contributions consist of designing several online algorithms for solving the Sharing-Aware VM Packing problem, and performing an extensive set of experiments to compare their performance against that of several existing sharing-oblivious online algorithms. For small problem instances, we also compare the performance of the proposed online algorithms against the optimal solution obtained by solving the offline variant of the Sharing-Aware VM Packing problem (i.e., the version of the problem that assumes that the set of VM requests are known a priori). The experimental results show that our proposed sharing-aware online algorithms activate a smaller average number of physical servers than the sharing-oblivious algorithms: by directly reducing the amount of required memory, they need fewer physical servers to instantiate the VM instances requested by users.
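
To illustrate the sharing-aware twist on a classic packing rule, consider the toy first-fit below, where VMs are just sets of page ids and a server's load is the size of the union of its VMs' pages. This is a minimal sketch of the idea; the paper designs and compares several more refined online algorithms.

```python
def sharing_aware_first_fit(vms, capacity):
    """First-fit placement where a VM's memory footprint on a server is
    reduced by pages already resident there: only the new, unshared pages
    add load. Each server is modeled as the set of its resident pages."""
    servers = []
    for vm in vms:
        for pages in servers:
            if len(pages | vm) <= capacity:   # fits once shared pages are deduplicated
                pages |= vm
                break
        else:
            servers.append(set(vm))           # open a new server
    return servers

vms = [{1, 2, 3}, {2, 3, 4}, {7, 8, 9, 10}, {1, 4}]
placement = sharing_aware_first_fit(vms, capacity=5)
print(len(placement), [sorted(s) for s in placement])
# 2 servers; a sharing-oblivious first-fit (total size 12) would need 3
```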

Journal ArticleDOI
TL;DR: This paper formulates the complex cost optimization problem of data movement, resource provisioning, and reducer selection as a joint stochastic integer nonlinear optimization problem that minimizes five cost factors simultaneously: the bandwidth, storage, computing, migration, and latency costs between the two MapReduce phases across datacenters.
Abstract: With the globalization of service, organizations continuously produce large volumes of data that need to be analysed over geo-dispersed locations. The traditional centralized approach of moving all data to a single cluster is inefficient or infeasible due to limitations such as the scarcity of wide-area bandwidth and the low-latency requirement of data processing. Processing big data across geo-distributed datacenters continues to gain popularity in recent years. However, managing distributed MapReduce computations across geo-distributed datacenters poses a number of technical challenges: how to allocate data among a selection of geo-distributed datacenters to reduce the communication cost, how to determine the Virtual Machine (VM) provisioning strategy that offers high performance and low cost, and what criteria should be used to select a datacenter as the final reducer for big data analytics jobs. In this paper, these challenges are addressed by balancing the bandwidth cost, storage cost, computing cost, migration cost, and latency cost between the two MapReduce phases across datacenters. We formulate this complex cost optimization problem for data movement, resource provisioning, and reducer selection as a joint stochastic integer nonlinear optimization problem that minimizes the five cost factors simultaneously. The Lyapunov framework is integrated into our study and an efficient online algorithm that is able to minimize the long-term time-averaged operation cost is further designed. Theoretical analysis shows that our online algorithm can provide a near-optimum solution with a provable gap and can guarantee that the data processing can be completed within pre-defined bounded delays. Experiments on the WorldCup98 website trace validate the theoretical analysis results and demonstrate that our approach is close to the offline-optimum performance and superior to some representative approaches.