
Showing papers on "Approximation algorithm published in 2016"


Proceedings ArticleDOI
01 Dec 2016
TL;DR: A novel scalable algorithm for time series subsequence all-pairs-similarity-search is presented; it computes the answers to the time series motif and time series discord problems as a side-effect, and incidentally provides the fastest known algorithm for both of these extensively studied problems.
Abstract: The all-pairs-similarity-search (or similarity join) problem has been extensively studied for text and a handful of other datatypes. However, surprisingly little progress has been made on similarity joins for time series subsequences. The lack of progress probably stems from the daunting nature of the problem. For even modest sized datasets the obvious nested-loop algorithm can take months, and the typical speed-up techniques in this domain (i.e., indexing, lower-bounding, triangular-inequality pruning and early abandoning) at best produce one or two orders of magnitude speedup. In this work we introduce a novel scalable algorithm for time series subsequence all-pairs-similarity-search. For exceptionally large datasets, the algorithm can be trivially cast as an anytime algorithm and produce high-quality approximate solutions in reasonable time. The exact similarity join algorithm computes the answer to the time series motif and time series discord problem as a side-effect, and our algorithm incidentally provides the fastest known algorithm for both these extensively-studied problems. We demonstrate the utility of our ideas for two time series data mining problems, including motif discovery and novelty discovery.
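
The nested-loop baseline the abstract refers to is easy to state. Below is a minimal sketch (not the paper's scalable algorithm) of the brute-force subsequence similarity join under z-normalized Euclidean distance; the window length m is an assumed parameter for illustration.

```python
import numpy as np

def znorm(x):
    """z-normalize a subsequence; constant subsequences map to zeros."""
    s = x.std()
    return (x - x.mean()) / s if s > 0 else np.zeros_like(x)

def nested_loop_join(ts, m):
    """Naive all-pairs similarity search over all length-m subsequences of ts.

    Returns, for each subsequence, the distance to and index of its nearest
    non-trivial neighbor (overlapping matches are excluded).
    """
    n = len(ts) - m + 1
    subs = np.array([znorm(ts[i:i + m]) for i in range(n)])
    nn_dist = np.full(n, np.inf)
    nn_idx = np.full(n, -1)
    for i in range(n):
        for j in range(n):
            if abs(i - j) < m:          # skip trivial (overlapping) matches
                continue
            d = np.linalg.norm(subs[i] - subs[j])
            if d < nn_dist[i]:
                nn_dist[i], nn_idx[i] = d, j
    return nn_dist, nn_idx

# The smallest nn_dist identifies the time series motif pair; the largest
# identifies the discord, which is the "side-effect" the abstract mentions.
ts = np.sin(np.linspace(0, 20, 400)) + 0.1 * np.random.randn(400)
dist, idx = nested_loop_join(ts, m=25)
print("motif at", dist.argmin(), "discord at", dist.argmax())
```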

452 citations


Journal ArticleDOI
TL;DR: This work studies the downlink sum rate maximization problem when the NOMA principle is applied, identifies the conditions under which the achievable rate maximization can be simplified to a low-complexity design problem, and computes the probability of occurrence of this event.
Abstract: Non-orthogonal multiple access (NOMA) systems have the potential to deliver higher system throughput, compared with contemporary orthogonal multiple access techniques. For a linearly precoded multiple-input single-output (MISO) system, we study the downlink sum rate maximization problem, when the NOMA principle is applied. Being a non-convex and intractable optimization problem, we resort to approximating it with a minorization-maximization algorithm (MMA), which is a widely used tool in statistics. In each step of the MMA, we solve a second-order cone program, such that the feasibility set in each step contains that of the previous one, and is always guaranteed to be a subset of the feasibility set of the original problem. It should be noted that the algorithm takes a few iterations to converge. Furthermore, we study the conditions under which the achievable rates maximization can be further simplified to a low complexity design problem, and we compute the probability of occurrence of this event. Numerical examples are presented to compare the proposed approach against conventional multiple access systems.

403 citations


Journal ArticleDOI
TL;DR: An approximation algorithm for SamplingTSPN is presented, and it is shown how to model the UAV planning problem using a metric graph and formulate an orienteering instance to which a known approximation algorithm can be applied.
Abstract: We study two new informative path planning problems that are motivated by the use of aerial and ground robots in precision agriculture. The first problem, termed sampling traveling salesperson problem with neighborhoods (SamplingTSPN), is motivated by scenarios in which unmanned ground vehicles (UGVs) are used to obtain time-consuming soil measurements. The input in SamplingTSPN is a set of possibly overlapping disks. The objective is to choose a sampling location in each disk and a tour to visit the set of sampling locations so as to minimize the sum of the travel and measurement times. The second problem concerns obtaining the maximum number of aerial measurements using an unmanned aerial vehicle (UAV) with limited energy. We study the scenario in which the two types of robots form a symbiotic system—the UAV lands on the UGV, and the UGV transports the UAV between deployment locations. This paper makes the following contributions. First, we present an $\mathcal{O}(r_{\max}/r_{\min})$ approximation algorithm for SamplingTSPN, where $r_{\min}$ and $r_{\max}$ are the minimum and maximum radii of input disks. Second, we show how to model the UAV planning problem using a metric graph and formulate an orienteering instance to which a known approximation algorithm can be applied. Third, we apply the two algorithms to the problem of obtaining ground and aerial measurements in order to accurately estimate a nitrogen map of a plot. Along with theoretical results, we present results from simulations conducted using real soil data and preliminary field experiments with the UAV.

387 citations


Journal ArticleDOI
03 Feb 2016
TL;DR: In this paper, the authors studied nonconvex distributed optimization in multi-agent networks with time-varying (nonsymmetric) connectivity and proposed an algorithmic framework for the distributed minimization of the sum of a smooth (possibly nonconvex and nonseparable) function, the agents' sum-utility, plus a convex regularizer.
Abstract: We study nonconvex distributed optimization in multiagent networks with time-varying (nonsymmetric) connectivity. We introduce the first algorithmic framework for the distributed minimization of the sum of a smooth (possibly nonconvex and nonseparable) function—the agents’ sum-utility—plus a convex (possibly nonsmooth and nonseparable) regularizer. The latter is usually employed to enforce some structure in the solution, typically sparsity. The proposed method hinges on successive convex approximation techniques while leveraging dynamic consensus as a mechanism to distribute the computation among the agents: each agent first solves (possibly inexactly) a local convex approximation of the nonconvex original problem, and then performs local averaging operations. Asymptotic convergence to (stationary) solutions of the nonconvex problem is established. Our algorithmic framework is then customized to a variety of convex and nonconvex problems in several fields, including signal processing, communications, networking, and machine learning. Numerical results show that the new method compares favorably to existing distributed algorithms on both convex and nonconvex problems.
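
A heavily simplified, self-contained sketch of the two building blocks the abstract describes, a local convex surrogate step followed by consensus averaging, for a toy distributed sparse least-squares problem. This is an illustration of the idea only, not the paper's algorithm: in particular it omits the dynamic-consensus gradient tracking, and all names and parameters below are assumptions made for the example.

```python
import numpy as np

def soft_threshold(v, t):
    """Closed-form prox of the l1 regularizer."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def distributed_sca_sketch(A_list, b_list, W, lam=0.1, step=0.01, iters=500):
    """Each agent i repeatedly (1) minimizes a simple convex surrogate of its
    own smooth cost plus the shared l1 regularizer (here a single
    proximal-gradient step, which has a closed form), then (2) averages its
    iterate with its neighbors' iterates via the doubly-stochastic matrix W."""
    n_agents, d = len(A_list), A_list[0].shape[1]
    X = np.zeros((n_agents, d))          # one local copy of x per agent
    for _ in range(iters):
        # (1) local convex-approximation / proximal step
        for i in range(n_agents):
            grad = A_list[i].T @ (A_list[i] @ X[i] - b_list[i])
            X[i] = soft_threshold(X[i] - step * grad, step * lam)
        # (2) consensus averaging over the network
        X = W @ X
    return X

# Toy problem: 4 agents on a ring, shared sparse ground truth.
rng = np.random.default_rng(0)
x_true = np.array([1.0, 0.0, -2.0, 0.0, 0.5])
A_list = [rng.standard_normal((20, 5)) for _ in range(4)]
b_list = [A @ x_true + 0.01 * rng.standard_normal(20) for A in A_list]
W = np.array([[.5, .25, 0, .25], [.25, .5, .25, 0],
              [0, .25, .5, .25], [.25, 0, .25, .5]])   # doubly stochastic ring
print(distributed_sca_sketch(A_list, b_list, W).round(2))
```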

379 citations


Posted Content
TL;DR: SSA and D-SSA as mentioned in this paper are two sampling frameworks for IM-based viral marketing problems, which are up to 1200 times faster than the SIGMOD'15 best method, IMM, while providing the same $(1-1/e-\epsilon)$ approximation guarantee.
Abstract: Influence Maximization (IM), which seeks a small set of key users who spread the influence widely into the network, is a core problem in multiple domains. It finds applications in viral marketing, epidemic control, and assessing cascading failures within complex systems. Despite the huge amount of effort, IM in billion-scale networks such as Facebook, Twitter, and World Wide Web has not been satisfactorily solved. Even the state-of-the-art methods such as TIM+ and IMM may take days on those networks. In this paper, we propose SSA and D-SSA, two novel sampling frameworks for IM-based viral marketing problems. SSA and D-SSA are up to 1200 times faster than the SIGMOD'15 best method, IMM, while providing the same $(1-1/e-\epsilon)$ approximation guarantee. Underlying our frameworks is an innovative Stop-and-Stare strategy in which they stop at exponential check points to verify (stare) if there is adequate statistical evidence on the solution quality. Theoretically, we prove that SSA and D-SSA are the first approximation algorithms that use (asymptotically) minimum numbers of samples, meeting strict theoretical thresholds characterized for IM. The absolute superiority of SSA and D-SSA is confirmed through extensive experiments on real network data for IM and another topic-aware viral marketing problem, named TVM. The source code is available at this https URL
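
A compact sketch of the ingredients described above, under simplifying assumptions: reverse-reachable (RR) sets sampled under the independent cascade model with a uniform edge probability, a greedy max-coverage seed selection, and a stop-and-stare-style loop that doubles the number of samples at exponential checkpoints until the influence estimate stabilizes. This is illustrative only and does not reproduce SSA/D-SSA's statistical tests or guarantees.

```python
import random
from collections import defaultdict

def sample_rr_set(nodes, in_edges, p=0.1):
    """One reverse-reachable set under the independent cascade model:
    reverse-BFS from a uniformly random node, keeping each edge w.p. p."""
    root = random.choice(nodes)
    seen, stack = {root}, [root]
    while stack:
        v = stack.pop()
        for u in in_edges.get(v, ()):
            if u not in seen and random.random() < p:
                seen.add(u)
                stack.append(u)
    return seen

def greedy_seeds(rr_sets, k):
    """Greedy max-coverage: pick k nodes covering the most RR sets."""
    covered = [False] * len(rr_sets)
    node_to_sets = defaultdict(set)
    for idx, rr in enumerate(rr_sets):
        for v in rr:
            node_to_sets[v].add(idx)
    seeds = []
    for _ in range(k):
        if not node_to_sets:
            break
        best = max(node_to_sets,
                   key=lambda v: sum(not covered[i] for i in node_to_sets[v]))
        seeds.append(best)
        for i in node_to_sets.pop(best):
            covered[i] = True
    return seeds, sum(covered) / len(rr_sets)

def stop_and_stare_sketch(nodes, in_edges, k, eps=0.05, start=1000):
    """Double the sample size until the influence estimate stops changing."""
    n_samples, prev_est, rr_sets = start, None, []
    while True:
        while len(rr_sets) < n_samples:
            rr_sets.append(sample_rr_set(nodes, in_edges))
        seeds, frac = greedy_seeds(rr_sets, k)
        est = frac * len(nodes)                   # estimated influence spread
        if prev_est is not None and abs(est - prev_est) <= eps * est:
            return seeds, est                     # "stare" succeeded: stop
        prev_est, n_samples = est, 2 * n_samples  # next exponential checkpoint

# Tiny demo graph: edges u -> v stored as in_edges[v] = [u, ...]
nodes = list(range(6))
in_edges = {1: [0], 2: [0, 1], 3: [1], 4: [2, 3], 5: [4]}
print(stop_and_stare_sketch(nodes, in_edges, k=2))
```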

272 citations


Journal ArticleDOI
TL;DR: The Ising chip achieves 100 MHz operation, and its capability of solving combinatorial optimization problems using an Ising model is confirmed; its power efficiency is estimated to be 1800 times higher than that of a general-purpose CPU running an approximation algorithm.
Abstract: In the near future, the ability to solve combinatorial optimization problems will be a key technique to enable the IoT era. A new computing architecture called Ising computing, implemented using CMOS circuits, is proposed. This computing maps the problems to an Ising model, a model to express the behavior of magnetic spins, and solves combinatorial optimization problems efficiently by exploiting its intrinsic convergence properties. In the computing, “CMOS annealing” is used to find a better solution for the problems. A 20k-spin prototype Ising chip is fabricated in a 65 nm process. The Ising chip achieves 100 MHz operation and its capability of solving combinatorial optimization problems using an Ising model is confirmed. The power efficiency of the chip can be estimated to be 1800 times higher than that of a general-purpose CPU when running an approximation algorithm.
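
The chip itself is hardware, but the annealing principle it exploits is easy to mimic in software. A minimal simulated-annealing sketch on a small Ising instance follows; the random couplings, cooling schedule and sweep count are assumptions for illustration, not the chip's actual annealing scheme.

```python
import numpy as np

def ising_energy(s, J, h):
    """Ising energy E(s) = -1/2 * s^T J s - h^T s for spins s in {-1,+1}^n."""
    return -0.5 * s @ J @ s - h @ s

def anneal(J, h, sweeps=2000, T0=5.0, T1=0.01, rng=np.random.default_rng(0)):
    """Metropolis single-spin-flip annealing with a geometric cooling schedule,
    a software analogue of the chip's 'CMOS annealing'."""
    n = len(h)
    s = rng.choice([-1, 1], size=n)
    for t in range(sweeps):
        T = T0 * (T1 / T0) ** (t / (sweeps - 1))       # cooling schedule
        for i in rng.permutation(n):
            dE = 2 * s[i] * (J[i] @ s + h[i])          # energy change if spin i flips
            if dE <= 0 or rng.random() < np.exp(-dE / T):
                s[i] = -s[i]
    return s, ising_energy(s, J, h)

# Random symmetric couplings (zero diagonal) as a stand-in for a MAX-CUT-like instance.
rng = np.random.default_rng(1)
n = 30
J = np.triu(rng.choice([-1, 0, 1], size=(n, n)), 1)
J = J + J.T
h = np.zeros(n)
spins, energy = anneal(J, h)
print("final energy:", energy)
```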

260 citations


Proceedings ArticleDOI
14 Jun 2016
TL;DR: Theoretically, it is proved that SSA and D-SSA are the first approximation algorithms that use (asymptotically) minimum numbers of samples, meeting strict theoretical thresholds characterized for IM.
Abstract: Influence Maximization (IM), which seeks a small set of key users who spread the influence widely into the network, is a core problem in multiple domains. It finds applications in viral marketing, epidemic control, and assessing cascading failures within complex systems. Despite the huge amount of effort, IM in billion-scale networks such as Facebook, Twitter, and World Wide Web has not been satisfactorily solved. Even the state-of-the-art methods such as TIM+ and IMM may take days on those networks. In this paper, we propose SSA and D-SSA, two novel sampling frameworks for IM-based viral marketing problems. SSA and D-SSA are up to 1200 times faster than the SIGMOD'15 best method, IMM, while providing the same (1-1/e-ε) approximation guarantee. Underlying our frameworks is an innovative Stop-and-Stare strategy in which they stop at exponential check points to verify (stare) if there is adequate statistical evidence on the solution quality. Theoretically, we prove that SSA and D-SSA are the first approximation algorithms that use (asymptotically) minimum numbers of samples, meeting strict theoretical thresholds characterized for IM. The absolute superiority of SSA and D-SSA is confirmed through extensive experiments on real network data for IM and another topic-aware viral marketing problem, named TVM.

236 citations


Journal ArticleDOI
TL;DR: An efficient algorithm is provided which approximates, up to a multiplicative factor of O(log n) with n being the network size, any optimal actuator set that meets the same energy criteria; this is the best approximation factor one can achieve in polynomial time in the worst case.
Abstract: We address the problem of minimal actuator placement in a linear system subject to an average control energy bound. First, following the recent work of Olshevsky, we prove that this is NP-hard. Then, we provide an efficient algorithm which, for a given range of problem parameters, approximates up to a multiplicative factor of $O(\log n)$, with $n$ being the network size, any optimal actuator set that meets the same energy criteria; this is the best approximation factor one can achieve in polynomial time in the worst case. Moreover, the algorithm uses a perturbed version of the involved control energy metric, which we prove to be supermodular. Next, we focus on the related problem of cardinality-constrained actuator placement for minimum control effort, where the optimal actuator set is selected so that an average input energy metric is minimized. While this is also an NP-hard problem, we use our proposed algorithm to efficiently approximate its solutions as well. Finally, we run our algorithms over large random networks to illustrate their efficiency.
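
A hedged sketch of the greedy idea the abstract alludes to: actuators (columns of the identity) are added one at a time so as to minimize a perturbed average-energy metric, here taken to be the trace of the inverse of a finite-horizon controllability Gramian plus a small regularizer. The horizon, perturbation and exact metric are illustrative assumptions, not the paper's definitions.

```python
import numpy as np

def gramian(A, actuators, T=30):
    """Finite-horizon controllability Gramian for input matrix B = I[:, actuators]."""
    n = A.shape[0]
    B = np.eye(n)[:, list(actuators)]
    W, Ak = np.zeros((n, n)), np.eye(n)
    for _ in range(T):
        W += Ak @ B @ B.T @ Ak.T
        Ak = A @ Ak
    return W

def greedy_actuators(A, k, eps=1e-3):
    """Greedily pick k actuator nodes minimizing tr((W_S + eps*I)^{-1}),
    a perturbed average control-energy metric (assumed for illustration)."""
    n = A.shape[0]
    chosen, best_val = [], np.inf
    for _ in range(k):
        best, best_val = None, np.inf
        for cand in range(n):
            if cand in chosen:
                continue
            W = gramian(A, chosen + [cand])
            val = np.trace(np.linalg.inv(W + eps * np.eye(n)))
            if val < best_val:
                best, best_val = cand, val
        chosen.append(best)
    return chosen, best_val

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 8))
A /= 1.1 * np.max(np.abs(np.linalg.eigvals(A)))   # scale to a stable-ish system
print(greedy_actuators(A, k=3))
```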

233 citations


Journal ArticleDOI
TL;DR: This paper studies the cloudlet placement problem in a large-scale Wireless Metropolitan Area Network (WMAN) consisting of many wireless Access Points (APs), with the objective of minimizing the average access delay between mobile users and the cloudlets serving the users.
Abstract: Mobile cloud computing is emerging as a main ubiquitous computing platform to provide rich cloud resources for various applications of mobile devices. Although most existing studies in mobile cloud computing focus on energy savings of mobile devices by offloading computing-intensive jobs from mobile devices to remote clouds, the access delays between mobile users and remote clouds usually are long and sometimes unbearable. The cloudlet, as a new technology, is capable of bridging this gap and can enhance the performance of mobile devices significantly while meeting the crisp response time requirements of mobile users. In this paper, we study the cloudlet placement problem in a large-scale Wireless Metropolitan Area Network (WMAN) consisting of many wireless Access Points (APs). We first formulate the problem as a novel capacitated cloudlet placement problem that places $K$ cloudlets at some strategic locations in the WMAN with the objective to minimize the average access delay between mobile users and the cloudlets serving the users. We then propose an exact solution to the problem by formulating it as an Integer Linear Program (ILP). Due to the poor scalability of the ILP, we instead propose an efficient heuristic for the problem. For a special case of the problem where all cloudlets have identical computing capacities, we devise novel approximation algorithms with guaranteed approximation ratios. We also devise an online algorithm for dynamically allocating user requests to different cloudlets, if the $K$ cloudlets have already been placed. We finally evaluate the performance of the proposed algorithms through experimental simulations. Simulation results demonstrate that the proposed algorithms are promising and scalable.
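
An ILP of this kind can be written down compactly; a small illustrative formulation using the PuLP modeling library is given below. The delays, loads, capacity and variable names are assumptions for the example and omit details of the paper's exact model.

```python
import random
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

random.seed(0)
n_users, n_aps, K, cap = 12, 5, 2, 8
delay = [[random.randint(1, 20) for _ in range(n_aps)] for _ in range(n_users)]
load = [1] * n_users                        # unit demand per user (an assumption)

prob = LpProblem("cloudlet_placement", LpMinimize)
y = [LpVariable(f"y_{j}", cat=LpBinary) for j in range(n_aps)]   # cloudlet at AP j?
x = [[LpVariable(f"x_{i}_{j}", cat=LpBinary) for j in range(n_aps)]
     for i in range(n_users)]                                    # user i served by AP j?

# Objective: average access delay between users and their serving cloudlets.
prob += (1.0 / n_users) * lpSum(delay[i][j] * x[i][j]
                                for i in range(n_users) for j in range(n_aps))

for i in range(n_users):                    # every user assigned to exactly one cloudlet
    prob += lpSum(x[i][j] for j in range(n_aps)) == 1
for i in range(n_users):
    for j in range(n_aps):                  # only to an AP that hosts a cloudlet
        prob += x[i][j] <= y[j]
for j in range(n_aps):                      # cloudlet computing capacity
    prob += lpSum(load[i] * x[i][j] for i in range(n_users)) <= cap * y[j]
prob += lpSum(y) == K                       # exactly K cloudlets are placed

prob.solve()
print("cloudlets placed at APs:", [j for j in range(n_aps) if y[j].value() == 1])
```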

224 citations


Journal ArticleDOI
TL;DR: This is the first algorithm providing a constant factor approximation for treewidth which runs in time single exponential in $k$ and linear in the input size, and it can be used to speed up many algorithms to work in time single exponential in the treewidth and linear in the input size.
Abstract: We give an algorithm that for an input $n$-vertex graph $G$ and integer $k>0$, in time $2^{O(k)} n$, either outputs that the treewidth of $G$ is larger than $k$, or gives a tree decomposition of $G$ of width at most $5k+4$. This is the first algorithm providing a constant factor approximation for treewidth which runs in time single exponential in $k$ and linear in $n$. Treewidth-based computations are subroutines of numerous algorithms. Our algorithm can be used to speed up many such algorithms to work in time which is single exponential in the treewidth and linear in the input size.

215 citations


Journal ArticleDOI
TL;DR: Factorized graph matching (FGM) is proposed, which factorizes the large pairwise affinity matrix into smaller matrices that encode the local structure of each graph and the pairwise affinity between edges; several benefits follow from this factorization.
Abstract: Graph matching (GM) is a fundamental problem in computer science, and it plays a central role to solve correspondence problems in computer vision. GM problems that incorporate pairwise constraints can be formulated as a quadratic assignment problem (QAP). Although widely used, solving the correspondence problem through GM has two main limitations: (1) the QAP is NP-hard and difficult to approximate; (2) GM algorithms do not incorporate geometric constraints between nodes that are natural in computer vision problems. To address the aforementioned problems, this paper proposes factorized graph matching (FGM). FGM factorizes the large pairwise affinity matrix into smaller matrices that encode the local structure of each graph and the pairwise affinity between edges. Four benefits follow from this factorization: (1) there is no need to compute the costly (in space and time) pairwise affinity matrix; (2) the factorization allows the use of a path-following optimization algorithm, which leads to improved optimization strategies and matching performance; (3) given the factorization, it becomes straightforward to incorporate geometric transformations (rigid and non-rigid) into the GM problem; (4) using a matrix formulation for the GM problem and the factorization, it is easy to reveal commonalities and differences between different GM methods. The factorization also provides a clean connection with other matching algorithms such as iterative closest point. Experimental results on synthetic and real databases illustrate how FGM outperforms state-of-the-art algorithms for GM. The code is available at http://humansensing.cs.cmu.edu/fgm .

01 Jan 2016
TL;DR: This note focuses on the analysis of some heuristics or approximation algorithms which never deviate by more than 100% from the optimum.
Abstract: Each of n jobs is to be processed without interruption on a single machine. The machine can process only one job at a time. Job i (i = 1, ..., n) is available for processing at time ri, has a nonzero processing time pi and has a subsequent delivery time qi. We assume that all ri, pi and qi are integers. The objective is to find a sequence of jobs which minimizes the time by which all jobs are delivered. As the problem is stated above, it is in symmetric form because an equivalent inverse problem can be obtained from the original problem by interchanging ri and qi for all jobs i. For any constant K, we can define due dates for each job i by di = K - qi. This produces a modified problem in which the due dates replace the delivery times. Minimizing the time by which all jobs are delivered in the symmetric form is equivalent to minimizing maximum lateness with respect to the due dates in the modified form. It has been shown by Lenstra et al. (1977) that the problem is NP-hard, which implies that the existence of a polynomial bounded algorithm to solve the problem is unlikely. Implicit enumeration algorithms have been successfully applied to problems with up to 80 jobs by Baker and Su (1974), McMahon and Florian (1975) and Lageweg et al. (1976). Kise et al. (1978) have analyzed the performance of several heuristics, demonstrating that each heuristic can deviate by an amount arbitrarily close to 100% from the optimum. This analysis has since been extended by Kise and Uno (1978). In this note we shall concentrate on the analysis of some heuristics or approximation algorithms which never deviate by more than 100% from the optimum.
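
A commonly analyzed heuristic for this problem (release times r, processing times p, delivery times q) is the extended Jackson's rule: whenever the machine is free, start the available job with the largest delivery time. The sketch below is offered as an illustration of that rule, not necessarily one of the specific heuristics the note studies.

```python
def extended_jackson(jobs):
    """jobs: list of (r, p, q). Returns max_i (C_i + q_i), the time by which all
    jobs are delivered, when the machine always starts the released-but-unscheduled
    job with the largest delivery time q."""
    order = sorted(range(len(jobs)), key=lambda i: jobs[i][0])  # by release time
    released, t, obj, pos = [], 0, 0, 0
    while pos < len(jobs) or released:
        # move jobs whose release time has passed into the available pool
        while pos < len(jobs) and jobs[order[pos]][0] <= t:
            released.append(order[pos])
            pos += 1
        if not released:                    # machine idles until the next release
            t = jobs[order[pos]][0]
            continue
        i = max(released, key=lambda j: jobs[j][2])   # largest delivery time first
        released.remove(i)
        r, p, q = jobs[i]
        t += p
        obj = max(obj, t + q)
    return obj

# (r_i, p_i, q_i) triples; the answer is the time by which all jobs are delivered.
print(extended_jackson([(0, 3, 7), (1, 2, 5), (4, 4, 1)]))
```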

Journal ArticleDOI
TL;DR: Two efficient randomized algorithms for betweenness estimation are presented based on random sampling of shortest paths and offer probabilistic guarantees on the quality of the approximation.
Abstract: Betweenness centrality is a fundamental measure in social network analysis, expressing the importance or influence of individual vertices (or edges) in a network in terms of the fraction of shortest paths that pass through them. Since exact computation in large networks is prohibitively expensive, we present two efficient randomized algorithms for betweenness estimation. The algorithms are based on random sampling of shortest paths and offer probabilistic guarantees on the quality of the approximation. The first algorithm estimates the betweenness of all vertices (or edges): all approximate values are within an additive factor $\varepsilon \in (0,1)$ from the real values, with probability at least $1-\delta$. The second algorithm focuses on the top-K vertices (or edges) with highest betweenness and estimates their betweenness value to within a multiplicative factor $\varepsilon$, with probability at least $1-\delta$. This is the first algorithm that can compute such approximation for the top-K vertices (or edges). By proving upper and lower bounds to the VC-dimension of a range set associated with the problem at hand, we can bound the sample size needed to achieve the desired approximations. We obtain sample sizes that are independent from the number of vertices in the network and only depend on a characteristic quantity that we call the vertex-diameter, that is, the maximum number of vertices in a shortest path. In some cases, the sample size is completely independent from any quantitative property of the graph. An extensive experimental evaluation on real and artificial networks shows that our algorithms are significantly faster and much more scalable as the number of vertices grows than other algorithms with similar approximation guarantees.
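
A minimal sketch of the sampling idea for unweighted graphs: repeatedly pick a random vertex pair, sample one of their shortest paths uniformly at random (using BFS path counts), and credit 1/r to every interior vertex. Choosing the sample size r via the VC-dimension/vertex-diameter bound is the paper's contribution and is not reproduced here; r is simply a parameter below.

```python
import random
from collections import deque, defaultdict

def bfs_counts(adj, s):
    """BFS from s: distances, shortest-path counts and predecessor lists."""
    dist, sigma, preds = {s: 0}, {s: 1}, defaultdict(list)
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                sigma[v] = 0
                q.append(v)
            if dist[v] == dist[u] + 1:
                sigma[v] += sigma[u]
                preds[v].append(u)
    return dist, sigma, preds

def approx_betweenness(adj, r, rng=random.Random(0)):
    """Additive-error estimate of (normalized) betweenness for all vertices."""
    nodes = list(adj)
    bc = {v: 0.0 for v in nodes}
    for _ in range(r):
        s, t = rng.sample(nodes, 2)
        dist, sigma, preds = bfs_counts(adj, s)
        if t not in dist:
            continue                        # no s-t path: nothing to credit
        v = t
        while v != s:                       # walk back, sampling a shortest path
            u = rng.choices(preds[v], weights=[sigma[p] for p in preds[v]])[0]
            if u != s:
                bc[u] += 1.0 / r            # credit interior vertices only
            v = u
    return bc

adj = {1: [2, 3], 2: [1, 4], 3: [1, 4], 4: [2, 3, 5], 5: [4]}
print(approx_betweenness(adj, r=2000))
```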

Proceedings ArticleDOI
19 Jun 2016
TL;DR: A simple cost function on hierarchies over a set of points, given pairwise similarities between those points, is introduced and it is shown that this criterion behaves sensibly in canonical instances and that it admits a top-down construction procedure with a provably good approximation ratio.
Abstract: The development of algorithms for hierarchical clustering has been hampered by a shortage of precise objective functions. To help address this situation, we introduce a simple cost function on hierarchies over a set of points, given pairwise similarities between those points. We show that this criterion behaves sensibly in canonical instances and that it admits a top-down construction procedure with a provably good approximation ratio.
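
The cost function, as usually stated for this work, charges each pairwise similarity w(i, j) by the number of leaves under the lowest common ancestor of i and j in the hierarchy, and a good hierarchy is one of low cost. A small sketch computing it for a binary hierarchy given as nested tuples of leaf labels:

```python
def hierarchy_cost(tree, sim):
    """cost(T) = sum over leaf pairs {i, j} of sim[i, j] * |leaves(T[i v j])|,
    where T[i v j] is the subtree rooted at the lowest common ancestor of i, j.
    `tree` is a nested tuple of leaf labels, e.g. ((0, 1), (2, 3))."""
    def leaves(t):
        return [t] if not isinstance(t, tuple) else [x for c in t for x in leaves(c)]

    def cost(t):
        if not isinstance(t, tuple):
            return 0.0
        left, right = t
        size = len(leaves(t))
        # pairs split at this node have their lowest common ancestor here
        split = sum(sim.get((min(i, j), max(i, j)), 0.0)
                    for i in leaves(left) for j in leaves(right))
        return size * split + cost(left) + cost(right)

    return cost(tree)

sim = {(0, 1): 1.0, (2, 3): 1.0, (0, 2): 0.1, (1, 3): 0.1, (0, 3): 0.1, (1, 2): 0.1}
print(hierarchy_cost(((0, 1), (2, 3)), sim))   # 5.6: similar points merge low in the tree
print(hierarchy_cost(((0, 2), (1, 3)), sim))   # 9.2: a worse hierarchy has higher cost
```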

Proceedings ArticleDOI
01 Sep 2016
TL;DR: A new automatic and adaptive algorithm for choosing the transformations of the samples used in data augmentation, where for each sample, the main idea is to seek a small transformation that yields maximal classification loss on the transformed sample.
Abstract: Data augmentation is the process of generating samples by transforming training data, with the target of improving the accuracy and robustness of classifiers. In this paper, we propose a new automatic and adaptive algorithm for choosing the transformations of the samples used in data augmentation. Specifically, for each sample, our main idea is to seek a small transformation that yields maximal classification loss on the transformed sample. We employ a trust-region optimization strategy, which consists of solving a sequence of linear programs. Our data augmentation scheme is then integrated into a Stochastic Gradient Descent algorithm for training deep neural networks. We perform experiments on two datasets, and show that the proposed scheme outperforms random data augmentation algorithms in terms of accuracy and robustness, while yielding comparable or superior results with respect to existing selective sampling approaches.

Journal ArticleDOI
TL;DR: It is shown that NOMP achieves near-optimal performance under a variety of conditions, and is compared with classical algorithms such as MUSIC and more recent Atomic norm Soft Thresholding and Lasso algorithms, both in terms of frequency estimation accuracy and run time.
Abstract: We propose a fast sequential algorithm for the fundamental problem of estimating frequencies and amplitudes of a noisy mixture of sinusoids. The algorithm is a natural generalization of Orthogonal Matching Pursuit (OMP) to the continuum using Newton refinements, and hence is termed Newtonized OMP (NOMP). Each iteration consists of two phases: detection of a new sinusoid, and sequential Newton refinements of the parameters of already detected sinusoids. The refinements play a critical role in two ways: 1) sidestepping the potential basis mismatch from discretizing a continuous parameter space and 2) providing feedback for locally refining parameters estimated in previous iterations. We characterize convergence and provide a constant false alarm rate (CFAR) based termination criterion. By benchmarking against the Cramer–Rao Bound, we show that NOMP achieves near-optimal performance under a variety of conditions. We compare the performance of NOMP with classical algorithms such as MUSIC and more recent Atomic norm Soft Thresholding (AST) and Lasso algorithms, both in terms of frequency estimation accuracy and run time.

Proceedings ArticleDOI
15 Jun 2016
TL;DR: InsideOut as mentioned in this paper is a variation of the traditional dynamic programming approach for constraint programming based on variable elimination, which adds a couple of simple twists to basic variable elimination in order to deal with the generality of FAQ, to take full advantage of Grohe and Marx's fractional edge cover framework, and of the analysis of recent worst-case optimal relational join algorithms.
Abstract: We define and study the Functional Aggregate Query (FAQ) problem, which encompasses many frequently asked questions in constraint satisfaction, databases, matrix operations, probabilistic graphical models and logic. This is our main conceptual contribution. We then present a simple algorithm called "InsideOut" to solve this general problem. InsideOut is a variation of the traditional dynamic programming approach for constraint programming based on variable elimination. Our variation adds a couple of simple twists to basic variable elimination in order to deal with the generality of FAQ, to take full advantage of Grohe and Marx's fractional edge cover framework, and of the analysis of recent worst-case optimal relational join algorithms. As is the case with constraint programming and graphical model inference, to make InsideOut run efficiently we need to solve an optimization problem to compute an appropriate variable ordering. The main technical contribution of this work is a precise characterization of when a variable ordering is 'semantically equivalent' to the variable ordering given by the input FAQ expression. Then, we design an approximation algorithm to find an equivalent variable ordering that has the best 'fractional FAQ-width'. Our results imply a host of known and a few new results in graphical model inference, matrix operations, relational joins, and logic. We also briefly explain how recent algorithms on beyond worst-case analysis for joins and those for solving SAT and #SAT can be viewed as variable elimination to solve FAQ over compactly represented input functions.
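
A minimal sum-product instance of the FAQ problem, solved by variable elimination (the core of InsideOut): the query sums, over all assignments, the product of the input factors, eliminating one variable at a time in a fixed order. The factors, variable names and domains below are illustrative assumptions.

```python
from itertools import product

def eliminate(factors, var, domains):
    """Sum `var` out: multiply all factors mentioning it, marginalize, return the rest."""
    touching = [f for f in factors if var in f[0]]
    rest = [f for f in factors if var not in f[0]]
    scope = sorted({v for vars_, _ in touching for v in vars_ if v != var})
    new_table = {}
    for assign in product(*(domains[v] for v in scope)):
        ctx = dict(zip(scope, assign))
        total = 0.0
        for x in domains[var]:
            ctx[var] = x
            prod_val = 1.0
            for vars_, table in touching:
                prod_val *= table[tuple(ctx[v] for v in vars_)]
            total += prod_val
        new_table[assign] = total
    return rest + [(tuple(scope), new_table)]

def faq_sum_product(factors, order, domains):
    """Answer the query  sum_{all variables} prod_f f(x_scope(f))  by variable elimination."""
    for var in order:
        factors = eliminate(factors, var, domains)
    # all variables eliminated: each remaining factor is a scalar keyed by the empty tuple
    result = 1.0
    for _, table in factors:
        result *= table[()]
    return result

# Two factors over binary variables A, B, C: f1(A, B) and f2(B, C).
domains = {"A": [0, 1], "B": [0, 1], "C": [0, 1]}
f1 = (("A", "B"), {(a, b): 1.0 + a + b for a in (0, 1) for b in (0, 1)})
f2 = (("B", "C"), {(b, c): 2.0 if b == c else 0.5 for b in (0, 1) for c in (0, 1)})
print(faq_sum_product([f1, f2], order=["A", "C", "B"], domains=domains))   # -> 20.0
```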

Journal ArticleDOI
TL;DR: The proposed algorithm improves the state of the art on the two problems of scene categorization using representative images and time-series modeling and segmentation using representative models.
Abstract: Finding an informative subset of a large collection of data points or models is at the center of many problems in computer vision, recommender systems, bio/health informatics as well as image and natural language processing. Given pairwise dissimilarities between the elements of a ‘source set’ and a ‘target set,’ we consider the problem of finding a subset of the source set, called representatives or exemplars, that can efficiently describe the target set. We formulate the problem as a row-sparsity regularized trace minimization problem. Since the proposed formulation is, in general, NP-hard, we consider a convex relaxation. The solution of our optimization finds representatives and the assignment of each element of the target set to each representative, hence obtaining a clustering. We analyze the solution of our proposed optimization as a function of the regularization parameter. We show that when the two sets jointly partition into multiple groups, our algorithm finds representatives from all groups and reveals clustering of the sets. In addition, we show that the proposed framework can effectively deal with outliers. Our algorithm works with arbitrary dissimilarities, which can be asymmetric or violate the triangle inequality. To efficiently implement our algorithm, we consider an Alternating Direction Method of Multipliers (ADMM) framework, which results in quadratic complexity in the problem size. We show that the ADMM implementation allows us to parallelize the algorithm, hence further reducing the computational time. Finally, by experiments on real-world datasets, we show that our proposed algorithm improves the state of the art on the two problems of scene categorization using representative images and time-series modeling and segmentation using representative models.

Proceedings ArticleDOI
07 Nov 2016
TL;DR: The paper provides a methodology for the design of well-optimized, power-efficient NNs with a uniform structure suitable for hardware implementation, and shows the capability of the back-propagation learning algorithm to adapt to NNs containing the approximate multipliers.
Abstract: Artificial neural networks (NN) have shown a significant promise in difficult tasks like image classification or speech recognition. Even well-optimized hardware implementations of digital NNs show significant power consumption. It is mainly due to non-uniform pipeline structures and inherent redundancy of numerous arithmetic operations that have to be performed to produce each single output vector. This paper provides a methodology for the design of well-optimized power-efficient NNs with a uniform structure suitable for hardware implementation. An error resilience analysis was performed in order to determine key constraints for the design of approximate multipliers that are employed in the resulting structure of NN. By means of a search based approximation method, approximate multipliers showing desired tradeoffs between the accuracy and implementation cost were created. Resulting approximate NNs, containing the approximate multipliers, were evaluated using standard benchmarks (MNIST dataset) and a real-world classification problem of Street-View House Numbers. Significant improvement in power efficiency was obtained in both cases with respect to regular NNs. In some cases, 91% power reduction of multiplication led to classification accuracy degradation of less than 2.80%. Moreover, the paper showed the capability of the back propagation learning algorithm to adapt with NNs containing the approximate multipliers.

Journal ArticleDOI
TL;DR: This work proves that the minimum-number full-view point coverage problem is NP-hard, proposes two approximation algorithms to solve it from two different perspectives, and devises two distributed algorithms that obtain the same approximation ratios as GA and SCA.
Abstract: We study the problem of minimum-number full-view area coverage in camera sensor networks, i.e., how to select the minimum number of camera sensors to guarantee the full-view coverage of a given region. Full-view area coverage is challenging because the full-view coverage of a 2-D continuous domain has to be considered. To tackle this challenge, we first study the intrinsic geometric relationship between the full-view area coverage and the full-view point coverage and prove that the full-view area coverage can be guaranteed, as long as a selected full-view ensuring set of points is full-view covered. This leads to a significant dimension reduction for the full-view area coverage problem. Next, we prove that the minimum-number full-view point coverage is NP-hard and propose two approximation algorithms to solve it from two different perspectives, respectively: 1) By introducing a full-view coverage ratio function, we quantify the “contribution” of each camera sensor to the full-view coverage through which we transform the full-view point coverage into a submodular set cover problem and propose a greedy algorithm (GA); and 2) by studying the geometric relationship between the full-view coverage and the traditional coverage, we propose a set-cover-based algorithm (SCA). We analyze the performance of these two approximation algorithms and characterize their approximation ratios. Furthermore, we devise two distributed algorithms that obtain the same approximation ratios as GA and SCA, respectively. Finally, we provide extensive simulation results to validate our analysis.

Journal ArticleDOI
TL;DR: A control algorithm based on adaptive dynamic programming to solve the infinite-horizon optimal control problem for known deterministic nonlinear systems with saturating actuators and nonquadratic cost functionals is proposed.
Abstract: This paper proposes a control algorithm based on adaptive dynamic programming to solve the infinite-horizon optimal control problem for known deterministic nonlinear systems with saturating actuators and nonquadratic cost functionals. The algorithm is based on an actor/critic framework, where a critic neural network (NN) is used to learn the optimal cost, and an actor NN is used to learn the optimal control policy. The adaptive control nature of the algorithm requires a persistence of excitation condition to be a priori validated, but this can be relaxed using previously stored data concurrently with current data in the update of the critic NN. A robustifying control term is added to the controller to eliminate the effect of residual errors, leading to asymptotic stability of the closed-loop system. Simulation results show the effectiveness of the proposed approach for a controlled Van der Pol oscillator and also for a power system plant.

Journal ArticleDOI
TL;DR: This paper reviews research into solving the two-dimensional (2D) rectangular assignment problem and combines the best methods to implement a k-best 2D rectangular assignment algorithm with bounded runtime.
Abstract: This paper reviews research into solving the two-dimensional (2D) rectangular assignment problem and combines the best methods to implement a k-best 2D rectangular assignment algorithm with bounded runtime. This paper condenses numerous results as an understanding of the "best" algorithm, a strong polynomial-time algorithm with a low polynomial order (a shortest augmenting path approach), would require assimilating information from many separate papers, each making a small contribution. 2D rectangular assignment Matlab code is provided.

Journal ArticleDOI
TL;DR: An upper bound on the suboptimality gap of any local optimum is derived, suggesting that any local minimum is almost as good as any strictly feasible point, and it is shown how to greatly reduce the gradient computation in each iteration by using an approximate gradient derived from linearized power flow equations.
Abstract: We propose an online algorithm for solving optimal power flow (OPF) problems on radial networks where the controllable devices continuously interact with the network that implicitly computes a power flow solution given a control action. Collectively the controllable devices and the network implement a gradient projection algorithm for the OPF problem in real time. The key design feature that enables this approach is that the intermediate iterates of our algorithm always satisfy power flow equations and operational constraints. This is achieved by explicitly exploiting the network to implicitly solve power flow equations for us in real time at scale. We prove that the proposed algorithm converges to the set of local optima and provide sufficient conditions under which it converges to a global optimum. We derive an upper bound on the suboptimality gap of any local optimum. This bound suggests that any local minimum is almost as good as any strictly feasible point. We explain how to greatly reduce the gradient computation in each iteration by using approximate gradient derived from linearized power flow equations. Numerical results on test networks, ranging from 42-bus to 1990-bus, show a great speedup over a second-order cone relaxation method with negligible difference in objective values.

Journal ArticleDOI
TL;DR: In this article, the authors proposed a decentralized algorithm based on iterative water-filling to schedule the charging and discharging of plug-in electric vehicles (PEVs), with bidirectional vehicle-to-grid (V2G) energy flow between the vehicles and the grid.
Abstract: This paper focuses on the procurement of load shifting service by optimally scheduling the charging and discharging of PEVs in a decentralized fashion. We assume that the energy flow between PEVs and the grid is bidirectional, i.e., PEVs can also release energy back into the grid as distributed generation, which is known as vehicle-to-grid (V2G). The optimal scheduling problem is then formulated as a mixed discrete programming (MDP) problem, which is NP-hard and extremely difficult to solve directly. To overcome this difficulty, we propose a solvable approximation of the MDP problem by exploiting the shape feature of the base demand curve during the night, and develop a decentralized algorithm based on iterative water-filling. Our algorithm is decentralized in the sense that the PEVs compute locally and communicate with an aggregator. The advantages of our algorithm include reduced computational burden and privacy preservation. Simulation results are given to show the performance of our algorithm.
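
A simplified sketch of the iterative water-filling idea for charging only (discharging/V2G and the MDP approximation are omitted): in each round, every PEV re-solves its own valley-filling problem against the base demand plus the other PEVs' current profiles, raising a water level until its required energy is met. The demand numbers and parameters are illustrative.

```python
import numpy as np

def waterfill(other_load, energy, cap):
    """Charging profile x_t = clip(level - other_load_t, 0, cap) with sum x = energy."""
    lo, hi = other_load.min(), other_load.max() + cap + energy
    for _ in range(60):                       # bisection on the water level
        level = 0.5 * (lo + hi)
        x = np.clip(level - other_load, 0, cap)
        if x.sum() > energy:
            hi = level
        else:
            lo = level
    return np.clip(0.5 * (lo + hi) - other_load, 0, cap)

def iterative_waterfilling(base, energies, cap, rounds=50):
    """Decentralized scheduling: each PEV repeatedly best-responds (water-fills)
    against the base demand plus all other PEVs' current charging profiles."""
    T, N = len(base), len(energies)
    X = np.zeros((N, T))
    for _ in range(rounds):
        for n in range(N):
            others = base + X.sum(axis=0) - X[n]
            X[n] = waterfill(others, energies[n], cap)
    return X

base = np.array([60, 55, 40, 25, 20, 22, 30, 50], dtype=float)   # overnight base demand
X = iterative_waterfilling(base, energies=[10.0, 8.0, 12.0], cap=6.0)
print((base + X.sum(axis=0)).round(1))        # total load is flattened in the valley
```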

Journal ArticleDOI
TL;DR: A heuristic based on computing a Maximum Circulation on the demand graph together with a convex integer program solved optimally by a greedy algorithm is proposed and the performance ratio of this heuristic is proved to be exactly N/(N+M-1).

Journal ArticleDOI
TL;DR: An inverse-free ELM is proposed to incrementally increase the number of hidden nodes and update the connection weights progressively and optimally; the monotonic decrease of the training error under the proposed updating procedure is proved, together with the optimality of every updating step.
Abstract: The extreme learning machine (ELM) has drawn intensive research attention due to its effectiveness in solving many machine learning problems. However, the matrix inversion operation involved in the algorithm is computationally prohibitive and limits the wide applications of ELM in many scenarios. To overcome this problem, in this paper, we propose an inverse-free ELM to incrementally increase the number of hidden nodes, and update the connection weights progressively and optimally. Theoretical analysis proves the monotonic decrease of the training error with the proposed updating procedure and also proves the optimality in every updating step. Extensive numerical experiments show the effectiveness and accuracy of the proposed algorithm.

Journal ArticleDOI
TL;DR: For the uniform-speed case, in which all jobs have arbitrary power demands and must be processed at a single uniform speed, it is proved that the non-preemptive version of this problem is inapproximable within a constant factor unless P = NP; for the speed-scalable case, the non-preemptive version is strongly NP-hard, while the preemptive versions of both cases can be solved in polynomial time.
Abstract: We consider the problem of scheduling jobs on a single machine to minimize the total electricity cost of processing these jobs under time-of-use electricity tariffs. For the uniform-speed case, in which all jobs have arbitrary power demands and must be processed at a single uniform speed, we prove that the non-preemptive version of this problem is inapproximable within a constant factor unless P = NP. On the other hand, when all the jobs have the same workload and the electricity prices follow a so-called pyramidal structure, we show that this problem can be solved in polynomial time. For the speed-scalable case, in which jobs can be processed at an arbitrary speed with a trade-off between speed and power demand, we show that the non-preemptive version of the problem is strongly NP-hard. We also present different approximation algorithms for this case, and test the computational performance of these approximation algorithms on randomly generated instances. In addition, for both the uniform-speed and speed-scaling cases, we show how to compute optimal schedules for the preemptive version of the problem in polynomial time.

Journal ArticleDOI
TL;DR: This work investigates the optimal quality-aware coverage in mobile crowdsensing networks and proposes a (1 - 1/e)-approximation algorithm with O(n^{k+2}) time complexity, which achieves a near-optimal solution compared with the brute-force search results.
Abstract: Mobile crowdsensing has shown elegant capacity in data collection and has given rise to numerous applications. In the sense of coverage quality, marginal works have considered the efficient (less cost) and effective (considerable coverage) design for mobile crowdsensing networks. We investigate the optimal quality-aware coverage in mobile crowdsensing networks. The difference between ours and the conventional coverage problem is that we only select a subset of mobile users so that the coverage quality is maximized with constrained budget. To address this new problem, which is proved to be NP-hard, we first prove that the set function of coverage quality is nondecreasing submodular. By leveraging the favorable property in submodular optimization, we then propose an $(1-1/e)$ approximation algorithm with $O(n^{k+2})$ time complexity, where $k$ is an integer that is greater than or equal to 3. Finally, we conduct extensive simulations for the proposed scheme, and the results demonstrate that ours outperforms the random selection scheme and one of the state of the art in terms of total coverage quality by, at most, 2.4× and 1.5× and by, on average, 1.4× and 1.3×, respectively. Additionally, ours achieves a near-optimal solution, compared with the brute-force search results.
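
The (1 - 1/e) guarantee comes from the classical greedy rule for maximizing a nondecreasing submodular function under a budget constraint. A generic sketch follows, with coverage quality modeled simply as the number of covered points of interest; the paper's actual quality function is richer, so the model here is an assumption for illustration.

```python
def greedy_quality_coverage(user_cover, budget):
    """Pick at most `budget` mobile users maximizing the covered points of interest.
    `user_cover` maps each candidate user to the set of points it covers.
    Greedy on a nondecreasing submodular objective gives a (1 - 1/e) guarantee."""
    chosen, covered = [], set()
    for _ in range(budget):
        best = max((u for u in user_cover if u not in chosen),
                   key=lambda u: len(user_cover[u] - covered),
                   default=None)
        if best is None or not (user_cover[best] - covered):
            break                                   # no remaining marginal gain
        chosen.append(best)
        covered |= user_cover[best]
    return chosen, covered

user_cover = {"u1": {1, 2, 3}, "u2": {3, 4}, "u3": {4, 5, 6, 7}, "u4": {1, 7}}
print(greedy_quality_coverage(user_cover, budget=2))   # picks u3 then u1
```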

Journal ArticleDOI
TL;DR: In this article, the authors proposed a received signal strength indication-based distributed Bayesian localization algorithm based on message passing to solve the approximate inference problem for precision agriculture applications, such as pest management and pH sensing in large farms.
Abstract: In this paper, we propose a received signal strength indication-based distributed Bayesian localization algorithm based on message passing to solve the approximate inference problem. The algorithm is designed for precision agriculture applications, such as pest management and pH sensing in large farms, where greater power efficiency besides communication and computational scalability is needed but location accuracy requirements are less demanding. Communication overhead, which is a key limitation of popular non-Bayesian and Bayesian distributed techniques, is avoided by a message passing schedule, in which outgoing message by each node does not depend on the destination node, and therefore is a fixed size. Fast convergence is achieved by: 1) eliminating the setup phase linked with spanning tree construction, which is frequent in belief propagation schemes and 2) the parallel nature of the updates, since no message needs to be exchanged among nodes during each update, which is called the coupled variables phenomenon in non-Bayesian techniques and accounts for a significant amount of communication overhead. These features make the proposed algorithm highly compatible with realistic wireless sensor network (WSN) deployments, e.g., ZigBee, that are based upon the ad hoc on-demand distance vector, where route request and route reply packets are flooded in the network during route discovery phase.

Journal ArticleDOI
TL;DR: A class of algorithmic approaches suitable for online cost-sensitive learning is investigated, which leverages existing methods for online ensemble algorithms and combines them with batch-mode methods for cost-sensitive bagging/boosting algorithms.
Abstract: While both cost-sensitive learning and online learning have been studied separately, these two issues have seldom been addressed simultaneously. Yet, there are many applications where both aspects are important. This paper investigates a class of algorithmic approaches suitable for online cost-sensitive learning, designed for such problems. The basic idea is to leverage existing methods for online ensemble algorithms, and combine these with batch mode methods for cost-sensitive bagging/boosting algorithms. Within this framework, we describe several theoretically sound online cost-sensitive bagging and online cost-sensitive boosting algorithms, and show that the convergence of the proposed algorithms is guaranteed under certain conditions. We then present extensive experimental results on benchmark datasets to compare the performance of the various proposed approaches.
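
One simple member of this family is online bagging made cost-sensitive: in online bagging each base learner sees every incoming example a Poisson(1) number of times, and a natural cost-sensitive variant scales the Poisson rate by the example's misclassification cost. A hedged sketch using scikit-learn's partial_fit learners; the scaling rule and all parameters are illustrative choices, not necessarily the paper's.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

class OnlineCostSensitiveBagging:
    """Online bagging with class-dependent costs: each example is presented to
    each base learner k ~ Poisson(cost[y]) times."""

    def __init__(self, n_estimators=10, classes=(0, 1), costs=None, seed=0):
        self.learners = [SGDClassifier(random_state=seed + i)
                         for i in range(n_estimators)]
        self.classes = np.array(classes)
        self.costs = costs or {c: 1.0 for c in classes}
        self.rng = np.random.default_rng(seed)

    def partial_fit(self, x, y):
        x = np.asarray(x).reshape(1, -1)
        for clf in self.learners:
            k = self.rng.poisson(self.costs[y])      # cost-weighted resampling
            for _ in range(k):
                clf.partial_fit(x, [y], classes=self.classes)

    def predict(self, x):
        x = np.asarray(x).reshape(1, -1)
        votes = [clf.predict(x)[0] for clf in self.learners
                 if hasattr(clf, "coef_")]           # skip never-trained learners
        return max(set(votes), key=votes.count) if votes else self.classes[0]

# Toy stream where class 1 is rare but expensive to miss.
rng = np.random.default_rng(1)
model = OnlineCostSensitiveBagging(costs={0: 1.0, 1: 5.0})
for _ in range(2000):
    y = int(rng.random() < 0.1)
    x = rng.normal(loc=2.0 * y, size=2)
    model.partial_fit(x, y)
print(model.predict([2.0, 2.0]), model.predict([0.0, 0.0]))
```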