Showing papers on "Approximation algorithm" published in 2018


Journal ArticleDOI
TL;DR: This work first characterizes a class of ‘learnable algorithms’ and then designs DNNs to approximate some algorithms of interest in wireless communications, demonstrating the superior ability of DNNs to approximate two considerably complex algorithms designed for power allocation in wireless transmit signal design, while giving orders of magnitude speedup in computational time.
Abstract: Numerical optimization has played a central role in addressing key signal processing (SP) problems. Highly effective methods have been developed for a large variety of SP applications such as communications, radar, filter design, and speech and image analytics, just to name a few. However, optimization algorithms often entail considerable complexity, which creates a serious gap between theoretical design/analysis and real-time processing. In this paper, we aim at providing a new learning-based perspective to address this challenging issue. The key idea is to treat the input and output of an SP algorithm as an unknown nonlinear mapping and use a deep neural network (DNN) to approximate it. If the nonlinear mapping can be learned accurately by a DNN of moderate size, then SP tasks can be performed effectively, since passing the input through a DNN only requires a small number of simple operations. In our paper, we first identify a class of optimization algorithms that can be accurately approximated by a fully connected DNN. Second, to demonstrate the effectiveness of the proposed approach, we apply it to approximate a popular interference management algorithm, namely, the WMMSE algorithm. Extensive experiments using both synthetically generated wireless channel data and real DSL channel data have been conducted. It is shown that, in practice, only a small network is sufficient to obtain high approximation accuracy, and DNNs can achieve orders of magnitude speedup in computational time compared to the state-of-the-art interference management algorithm.

607 citations
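As a rough illustration of the "learning to optimize" idea above, the sketch below trains a small fully connected network to imitate the input-output map of a placeholder function; the target function, network size, and training settings are stand-ins of my own, not the paper's WMMSE setup.

```python
# Hedged sketch: approximate an algorithm's input-output mapping with a small
# fully connected network. The "algorithm" here is a toy nonlinear function.
import torch
import torch.nn as nn

def target_algorithm(x):             # placeholder for an expensive SP algorithm
    return torch.sin(3 * x) * x      # any smooth nonlinear mapping

net = nn.Sequential(nn.Linear(1, 64), nn.ReLU(),
                    nn.Linear(64, 64), nn.ReLU(),
                    nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    x = torch.rand(256, 1) * 2 - 1           # sample inputs in [-1, 1]
    y = target_algorithm(x)                   # "labels" obtained by running the algorithm
    loss = nn.functional.mse_loss(net(x), y)
    opt.zero_grad(); loss.backward(); opt.step()

# At inference time, net(x) replaces the costly algorithm with a few matrix products.
```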


Proceedings ArticleDOI
Jiezhong Qiu, Yuxiao Dong, Hao Ma, Jian Li, Kuansan Wang, Jie Tang
02 Feb 2018
TL;DR: In this paper, a unified matrix factorization framework for skip-gram based network embedding was proposed, leading to a better understanding of latent network representation learning and the theory of graph Laplacian.
Abstract: Since the invention of word2vec, the skip-gram model has significantly advanced the research of network embedding, such as the recent emergence of the DeepWalk, LINE, PTE, and node2vec approaches. In this work, we show that all of the aforementioned models with negative sampling can be unified into the matrix factorization framework with closed forms. Our analysis and proofs reveal that: (1) DeepWalk empirically produces a low-rank transformation of a network's normalized Laplacian matrix; (2) LINE, in theory, is a special case of DeepWalk when the size of vertices' context is set to one; (3) As an extension of LINE, PTE can be viewed as the joint factorization of multiple networks' Laplacians; (4) node2vec is factorizing a matrix related to the stationary distribution and transition probability tensor of a 2nd-order random walk. We further provide the theoretical connections between skip-gram based network embedding algorithms and the theory of graph Laplacian. Finally, we present the NetMF method as well as its approximation algorithm for computing network embedding. Our method offers significant improvements over DeepWalk and LINE for conventional network mining tasks. This work lays the theoretical foundation for skip-gram based network embedding methods, leading to a better understanding of latent network representation learning.

568 citations
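The sketch below illustrates a NetMF-style pipeline on a small dense graph: build a closed-form matrix from averaged random-walk powers, take an element-wise truncated logarithm, and factorize it with a truncated SVD. The constants (window T, negative-sampling parameter b) and the exact matrix construction follow my reading of the paper and may differ in detail from the authors' implementation.

```python
# Hedged sketch of a NetMF-style embedding for a small, dense, connected graph.
import numpy as np

def netmf_embed(A, dim=16, T=10, b=1.0):
    """A: dense symmetric adjacency matrix with no isolated nodes."""
    vol = A.sum()
    d = A.sum(axis=1)
    P = A / d[:, None]                          # random-walk transition matrix D^{-1} A
    S, Pk = np.zeros_like(P), np.eye(A.shape[0])
    for _ in range(T):                          # sum of the first T walk powers
        Pk = Pk @ P
        S += Pk
    M = (vol / (b * T)) * S @ np.diag(1.0 / d)  # closed-form matrix to factorize
    logM = np.log(np.maximum(M, 1.0))           # element-wise truncated logarithm
    U, s, _ = np.linalg.svd(logM)
    return U[:, :dim] * np.sqrt(s[:dim])        # embedding from truncated SVD
```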


Journal ArticleDOI
TL;DR: A surrogate-assisted reference vector guided evolutionary algorithm (SAEA) for computationally expensive optimization problems with more than three objectives that uses Kriging to approximate each objective function to reduce the computational cost.
Abstract: We propose a surrogate-assisted reference vector guided evolutionary algorithm (EA) for computationally expensive optimization problems with more than three objectives. The proposed algorithm is based on a recently developed EA for many-objective optimization that relies on a set of adaptive reference vectors for selection. The proposed surrogate-assisted EA (SAEA) uses Kriging to approximate each objective function to reduce the computational cost. In managing the Kriging models, the algorithm focuses on the balance of diversity and convergence by making use of the uncertainty information in the approximated objective values given by the Kriging models, the distribution of the reference vectors, as well as the location of the individuals. In addition, we design a strategy for choosing data for training the Kriging model to limit the computation time without impairing the approximation accuracy. Empirical results on comparing the new algorithm with the state-of-the-art SAEAs on a number of benchmark problems demonstrate the competitiveness of the proposed algorithm.

326 citations
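A minimal sketch of the surrogate-modelling step, assuming scikit-learn's Gaussian-process regressor as the Kriging implementation: one model per objective, with predictive means and standard deviations available for the diversity/convergence balancing described above. The reference-vector-guided EA itself is not shown.

```python
# Hedged sketch: one Kriging (Gaussian-process) surrogate per objective.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def fit_surrogates(X, F):
    """X: evaluated decision vectors (n x d); F: their objective values (n x m)."""
    models = []
    for j in range(F.shape[1]):
        gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(),
                                      normalize_y=True)
        gp.fit(X, F[:, j])
        models.append(gp)
    return models

def predict(models, Xcand):
    """Predictive mean and uncertainty for candidate solutions."""
    means, stds = zip(*(m.predict(Xcand, return_std=True) for m in models))
    return np.column_stack(means), np.column_stack(stds)  # stds drive exploration
```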


Journal ArticleDOI
TL;DR: In this article, the authors study the mobile edge service performance optimization problem under long-term cost budget constraint, and apply Lyapunov optimization to decompose the problem into a series of real-time optimization problems which do not require a priori knowledge such as user mobility.
Abstract: Mobile edge computing is a new computing paradigm, which pushes cloud computing capabilities away from the centralized cloud to the network edge. However, with the sinking of computing capabilities, the new challenge incurred by user mobility arises: since end users typically move erratically, the services should be dynamically migrated among multiple edges to maintain the service performance, i.e., user-perceived latency. Tackling this problem is non-trivial since frequent service migration would greatly increase the operational cost. To address this challenge in terms of the performance-cost tradeoff, in this paper, we study the mobile edge service performance optimization problem under long-term cost budget constraint. To address user mobility which is typically unpredictable, we apply Lyapunov optimization to decompose the long-term optimization problem into a series of real-time optimization problems which do not require a priori knowledge such as user mobility. As the decomposed problem is NP-hard, we first design an approximation algorithm based on Markov approximation to seek a near-optimal solution. To make our solution scalable and amenable to future fifth-generation application scenario with large-scale user devices, we further propose a distributed approximation scheme with greatly reduced time complexity, based on the technique of the best response update. Rigorous theoretical analysis and extensive evaluations demonstrate the efficacy of the proposed centralized and distributed schemes.

254 citations
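The following sketch shows the drift-plus-penalty skeleton implied by the abstract: a virtual queue tracks the cost-budget deficit, and each slot a myopic subproblem is solved with the queue backlog as a weight. The per-slot candidate set and solver are placeholders of my own; the paper's Markov-approximation and best-response algorithms are not reproduced here.

```python
# Hedged sketch of Lyapunov drift-plus-penalty for the performance-cost tradeoff.
def drift_plus_penalty(slots, budget_per_slot, candidates, V=10.0):
    """candidates(t) yields tuples (migration_decision, latency, cost) for slot t."""
    Q = 0.0                                   # virtual queue: accumulated cost deficit
    decisions = []
    for t in range(slots):
        # Myopic per-slot subproblem: trade latency against cost, weighted by Q.
        best = min(candidates(t), key=lambda c: V * c[1] + Q * c[2])
        decisions.append(best[0])
        Q = max(Q + best[2] - budget_per_slot, 0.0)   # virtual queue update
    return decisions
```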


Proceedings ArticleDOI
02 Jul 2018
TL;DR: This work shows that while the problem is polynomial-time solvable without storage constraints, it is NP-hard even if each edge cloud has unlimited communication or computation resources, and develops a constant-factor approximation algorithm for the homogeneous case and efficient heuristics for the general case.
Abstract: Mobile edge computing is an emerging technology to offer resource-intensive yet delay-sensitive applications from the edge of mobile networks, where a major challenge is to allocate limited edge resources to competing demands. While prior works often make a simplifying assumption that resources assigned to different users are non-sharable, this assumption does not hold for storage resources, where users interested in services (e.g., data analytics) based on the same set of data/code can share storage resource. Meanwhile, serving each user request also consumes non-sharable resources (e.g., CPU cycles, bandwidth). We study the optimal provisioning of edge services with non-trivial demands of both sharable (storage) and non-sharable (communication, computation) resources via joint service placement and request scheduling. In the homogeneous case, we show that while the problem is polynomial-time solvable without storage constraints, it is NP-hard even if each edge cloud has unlimited communication or computation resources. We further show that the hardness is caused by the service placement subproblem, while the request scheduling subproblem is polynomial-time solvable via maximum-flow algorithms. In the general case, both subproblems are NP-hard. We develop a constant-factor approximation algorithm for the homogeneous case and efficient heuristics for the general case. Our trace-driven simulations show that the proposed algorithms, especially the approximation algorithm, can achieve near-optimal performance, serving 2–3 times more requests than a baseline solution that optimizes service placement and request scheduling separately.

199 citations
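To make the request-scheduling subproblem concrete, here is a hedged sketch that casts it as a max-flow computation with networkx for a fixed service placement; the node and edge construction is my own simplification and models only a single per-cloud capacity rather than the paper's full set of non-sharable resources.

```python
# Hedged sketch: request scheduling as max flow, with service placement fixed.
import networkx as nx

def schedule_requests(demands, placement, capacity):
    """demands: {user: (service, reachable_clouds)}; placement: {cloud: set(services)};
       capacity: {cloud: max concurrent requests}. Each user wants one request served."""
    G = nx.DiGraph()
    for u, (svc, clouds) in demands.items():
        G.add_edge('s', ('u', u), capacity=1)
        for c in clouds:
            if svc in placement.get(c, set()):     # cloud can serve only placed services
                G.add_edge(('u', u), ('c', c), capacity=1)
    for c, cap in capacity.items():
        G.add_edge(('c', c), 't', capacity=cap)
    served, flow = nx.maximum_flow(G, 's', 't')
    return served, flow                            # number of requests served and routing
```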


Journal ArticleDOI
TL;DR: In this paper, the authors propose a new clustering-based band selection framework that can obtain the optimal clustering result for a particular form of objective function under a reasonable constraint, avoiding the suboptimal solutions produced by approximation-based methods.
Abstract: Band selection, by choosing a set of representative bands in a hyperspectral image, is an effective method to reduce the redundant information without compromising the original contents. Recently, various unsupervised band selection methods have been proposed, but most of them are based on approximation algorithms which can only obtain suboptimal solutions toward a specific objective function. This paper focuses on clustering-based band selection and proposes a new framework to solve the above dilemma, claiming the following contributions: 1) an optimal clustering framework, which can obtain the optimal clustering result for a particular form of objective function under a reasonable constraint; 2) a rank on clusters strategy, which provides an effective criterion to select bands on the existing clustering structure; and 3) an automatic method to determine the number of the required bands, which can better evaluate the distinctive information produced by a certain number of bands. In experiments, the proposed algorithm is compared with some state-of-the-art competitors. According to the experimental results, the proposed algorithm is robust and significantly outperforms the other methods on various data sets.

195 citations
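For orientation, the sketch below is a generic clustering-based band-selection baseline (k-means over band vectors, keeping the band nearest each centroid). It is explicitly not the paper's optimal clustering framework or its rank-on-clusters strategy; it only shows the kind of pipeline the paper improves upon.

```python
# Hedged sketch: a simple clustering-based band-selection baseline.
import numpy as np
from sklearn.cluster import KMeans

def select_bands(cube, k):
    """cube: hyperspectral image of shape (H, W, B); returns k selected band indices."""
    H, W, B = cube.shape
    X = cube.reshape(-1, B).T                   # one feature vector per band
    km = KMeans(n_clusters=k, n_init=10).fit(X)
    chosen = []
    for j in range(k):
        members = np.where(km.labels_ == j)[0]
        centre = km.cluster_centers_[j]
        # keep the band closest to its cluster centroid as the representative
        chosen.append(members[np.argmin(np.linalg.norm(X[members] - centre, axis=1))])
    return sorted(chosen)
```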


Journal ArticleDOI
TL;DR: Novel approximate compressors and an algorithm to exploit them for the design of efficient approximate multiplier circuits are proposed, and approximate multipliers are synthesized for several operand lengths using a 40-nm library.
Abstract: Approximate computing is an emerging trend in digital design that trades off the requirement of exact computation for improved speed and power performance. This paper proposes novel approximate compressors and an algorithm to exploit them for the design of efficient approximate multipliers. By using the proposed approach, we have synthesized approximate multipliers for several operand lengths using a 40-nm library. Comparison with previously presented approximated multipliers shows that the proposed circuits provide better power or speed for a target precision. Applications to image filtering and to adaptive least mean squares filtering are also presented in the paper.

173 citations
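The toy study below illustrates how approximate-multiplier accuracy is usually quantified in software, using a simple partial-product truncation rather than the paper's approximate compressors; the bit widths and the error metric are arbitrary choices for the sketch.

```python
# Hedged sketch: behavioural error study of a toy approximate multiplier.
# This is NOT the compressor design of the paper; it only shows the kind of
# accuracy/complexity trade-off evaluation used for approximate multipliers.
import random

def approx_multiply(a, b, nbits=8, drop=4):
    """Accumulate partial products with their lowest `drop` bits zeroed out."""
    acc = 0
    for i in range(nbits):
        if (b >> i) & 1:
            pp = a << i
            acc += pp & ~((1 << drop) - 1)      # drop low-order bits of this partial product
    return acc

errs = []
for _ in range(10000):
    a, b = random.randrange(256), random.randrange(256)
    exact = a * b
    if exact:
        errs.append(abs(exact - approx_multiply(a, b)) / exact)
print("mean relative error:", sum(errs) / len(errs))
```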


Proceedings ArticleDOI
11 Jun 2018
TL;DR: In this paper, a pseudopolynomial time algorithm for finding allocations that are EF1 and Pareto efficient is presented, bypassing the NP-hardness of maximizing Nash social welfare; the algorithm also gives a polynomial-time 1.45-approximation to the Nash social welfare objective.
Abstract: We study the problem of allocating a set of indivisible goods among a set of agents in a fair and efficient manner. An allocation is said to be fair if it is envy-free up to one good (EF1), which means that each agent prefers its own bundle over the bundle of any other agent up to the removal of one good. In addition, an allocation is deemed efficient if it satisfies Pareto efficiency. While each of these well-studied properties is easy to achieve separately, achieving them together is far from obvious. Recently, Caragiannis et al. (2016) established the surprising result that when agents have additive valuations for the goods, there always exists an allocation that simultaneously satisfies these two seemingly incompatible properties. Specifically, they showed that an allocation that maximizes the Nash social welfare objective is both EF1 and Pareto efficient. However, the problem of maximizing Nash social welfare is NP-hard. As a result, this approach does not provide an efficient algorithm for finding a fair and efficient allocation. In this paper, we bypass this barrier, and develop a pseudopolynomial time algorithm for finding allocations that are EF1 and Pareto efficient; in particular, when the valuations are bounded, our algorithm finds such an allocation in polynomial time. Furthermore, we establish a stronger existence result compared to Caragiannis et al. (2016): For additive valuations, there always exists an allocation that is EF1 and fractionally Pareto efficient. Another key contribution of our work is to show that our algorithm provides a polynomial-time 1.45-approximation to the Nash social welfare objective. This improves upon the best known approximation ratio for this problem (namely, the 2-approximation algorithm of Cole et al., 2017), and also matches the lower bound on the integrality gap of the convex program of Cole et al. (2017). Unlike many of the existing approaches, our algorithm is completely combinatorial, and relies on constructing integral Fisher markets wherein specific equilibria are not only efficient, but also fair.

163 citations
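As a small companion to the definitions above, here is a checker for the EF1 property under additive valuations; the paper's pseudopolynomial algorithm for producing an EF1 and Pareto-efficient allocation is not reproduced here.

```python
# Hedged sketch: verify envy-freeness up to one good (EF1) for additive valuations.
def is_ef1(valuations, allocation):
    """valuations[i][g]: agent i's value for good g; allocation[i]: list of goods of agent i."""
    n = len(valuations)
    for i in range(n):
        own = sum(valuations[i][g] for g in allocation[i])
        for j in range(n):
            if i == j or not allocation[j]:
                continue
            other = sum(valuations[i][g] for g in allocation[j])
            best_removal = max(valuations[i][g] for g in allocation[j])
            if own < other - best_removal:   # envy persists even after removing one good
                return False
    return True
```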


Proceedings ArticleDOI
08 May 2018
TL;DR: In this paper, the authors proposed a simple and fast algorithm that runs on the CPU and relies only on basic image processing operations to perform depth completion of sparse LIDAR depth data.
Abstract: With the rise of data driven deep neural networks as a realization of universal function approximators, most research on computer vision problems has moved away from handcrafted classical image processing algorithms. This paper shows that with a well designed algorithm, we are capable of outperforming neural network based methods on the task of depth completion. The proposed algorithm is simple and fast, runs on the CPU, and relies only on basic image processing operations to perform depth completion of sparse LIDAR depth data. We evaluate our algorithm on the challenging KITTI depth completion benchmark, and at the time of submission, our method ranks first on the KITTI test server among all published methods. Furthermore, our algorithm is data independent, requiring no training data to perform the task at hand. The code written in Python is publicly available at https://github.com/kujason/ip_basic

154 citations
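A heavily simplified sketch of classical depth completion in the spirit of the paper: invert the sparse depths so that morphological dilation favours nearer returns, dilate, and invert back. The real pipeline in the linked repository uses several kernel shapes, hole filling, and blurring; the kernel size and depth range here are illustrative.

```python
# Hedged, simplified sketch of image-processing-based depth completion.
import numpy as np
import cv2

def complete_depth(sparse_depth, max_depth=100.0, kernel_size=5):
    """sparse_depth: float32 array in metres, 0 where there is no LIDAR return."""
    d = sparse_depth.copy()
    valid = d > 0.1
    d[valid] = max_depth - d[valid]                  # invert so nearer points dominate
    kernel = np.ones((kernel_size, kernel_size), np.uint8)
    d = cv2.dilate(d, kernel)                        # spread measurements to empty pixels
    filled = d > 0.1
    d[filled] = max_depth - d[filled]                # invert back to metric depth
    return d
```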


Journal ArticleDOI
TL;DR: From the results of the experiments, it is found that: 1) the simplified Jacobian proposed by Ruano et al. is not a good choice for the VP algorithm and may render the algorithm hard to converge; 2) the fourth algorithm performs moderately among these four algorithms; and 3) the combination of the VP algorithm and the Levenberg–Marquardt method is more effective than the combination of the VP algorithm and the Gauss–Newton method.
Abstract: For a class of nonlinear least squares problems, it is usually very beneficial to separate the variables into a linear and a nonlinear part and take full advantage of reliable linear least squares techniques. Consequently, the original problem is turned into a reduced problem which involves only nonlinear parameters. We consider in this paper four separated algorithms for such problems. The first one is the variable projection (VP) algorithm with the full Jacobian matrix of Golub and Pereyra. The second and third ones are VP algorithms with the simplified Jacobian matrices proposed by Kaufman and by Ruano et al., respectively. The fourth one only uses the gradient of the reduced problem. Monte Carlo experiments are conducted to compare the performance of these four algorithms. From the results of the experiments, we find that: 1) the simplified Jacobian proposed by Ruano et al. is not a good choice for the VP algorithm; moreover, it may render the algorithm hard to converge; 2) the fourth algorithm performs moderately among these four algorithms; 3) the VP algorithm with the full Jacobian matrix performs more stably than the VP algorithm with Kaufman's simplified one; and 4) the combination of the VP algorithm and the Levenberg–Marquardt method is more effective than the combination of the VP algorithm and the Gauss–Newton method.

145 citations
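The sketch below shows the variable projection idea on a two-exponential toy model: the linear coefficients are eliminated with a linear least-squares solve inside the residual, and only the nonlinear parameters are iterated with a Levenberg–Marquardt solver. It uses SciPy's generic solver rather than the specific Jacobian variants (Golub-Pereyra, Kaufman, Ruano et al.) compared in the paper.

```python
# Hedged sketch: variable projection for a separable model y ~ Phi(theta) @ c.
import numpy as np
from scipy.optimize import least_squares

t = np.linspace(0, 1, 100)
y = 2.0 * np.exp(-1.3 * t) + 0.5 * np.exp(-4.0 * t) + 0.01 * np.random.randn(t.size)

def reduced_residual(theta):
    Phi = np.exp(-np.outer(t, theta))            # basis functions, nonlinear in theta
    c, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # eliminate the linear parameters
    return Phi @ c - y                           # residual of the reduced problem

sol = least_squares(reduced_residual, x0=[1.0, 3.0], method='lm')  # Levenberg-Marquardt
print("estimated nonlinear parameters:", sol.x)
```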


Proceedings ArticleDOI
01 Jan 2018
TL;DR: In this paper, the authors study the problem of ranking with fairness and diversity constraints over sensitive attributes such as gender, race, and political opinion, show that it is hard to approximate even with simple constraints, and give fast exact and approximation algorithms along with complementary hardness results.
Abstract: Ranking algorithms are deployed widely to order a set of items in applications such as search engines, news feeds, and recommendation systems. Recent studies, however, have shown that, left unchecked, the output of ranking algorithms can result in decreased diversity in the type of content presented, promote stereotypes, and polarize opinions. In order to address such issues, we study the following variant of the traditional ranking problem when, in addition, there are fairness or diversity constraints. Given a collection of items along with 1) the value of placing an item in a particular position in the ranking, 2) the collection of sensitive attributes (such as gender, race, political opinion) of each item and 3) a collection of fairness constraints that, for each k, bound the number of items with each attribute that are allowed to appear in the top k positions of the ranking, the goal is to output a ranking that maximizes the value with respect to the original rank quality metric while respecting the constraints. This problem encapsulates various well-studied problems related to bipartite and hypergraph matching as special cases and turns out to be hard to approximate even with simple constraints. Our main technical contributions are fast exact and approximation algorithms along with complementary hardness results that, together, come close to settling the approximability of this constrained ranking maximization problem. Unlike prior work on the approximability of constrained matching problems, our algorithm runs in linear time, even when the number of constraints is (polynomially) large, its approximation ratio does not depend on the number of constraints, and it produces solutions with small constraint violations. Our results rely on insights about the constrained matching problem when the objective function satisfies certain properties that appear in common ranking metrics such as discounted cumulative gain (DCG), Spearman's rho or Bradley-Terry, along with the nested structure of fairness constraints.
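To make the constraint structure concrete, here is a naive top-down greedy that respects per-attribute caps on every top-k prefix; it is not the paper's algorithm and carries no approximation guarantee, and the data layout (values, attrs, caps) is my own.

```python
# Hedged sketch: greedy ranking under top-k attribute caps (illustration only).
def greedy_fair_ranking(values, attrs, caps, m):
    """values[i][p]: value of item i at position p; attrs[i]: attribute of item i;
       caps[a][k]: max number of items with attribute a allowed in the top k+1 positions."""
    ranking, counts, used = [], {}, set()
    for p in range(m):
        feasible = [i for i in range(len(values)) if i not in used and
                    all(counts.get(attrs[i], 0) + 1 <= caps[attrs[i]][k]
                        for k in range(p, m))]       # placing at p affects all larger prefixes
        if not feasible:
            break
        best = max(feasible, key=lambda i: values[i][p])
        ranking.append(best)
        used.add(best)
        counts[attrs[best]] = counts.get(attrs[best], 0) + 1
    return ranking
```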

Posted Content
TL;DR: An approximation algorithm is presented for solving a class of bilevel programming problems where the inner objective function is strongly convex, and its finite-time convergence is analyzed under different convexity assumptions on the outer objective function.
Abstract: In this paper, we study a class of bilevel programming problems where the inner objective function is strongly convex. More specifically, under some mild assumptions on the partial derivatives of both inner and outer objective functions, we present an approximation algorithm for solving this class of problems and provide its finite-time convergence analysis under different convexity assumptions on the outer objective function. We also present an accelerated variant of this method which improves the rate of convergence under the convexity assumption. Furthermore, we generalize our results to the stochastic setting where only noisy information of both objective functions is available. To the best of our knowledge, this is the first time that such (stochastic) approximation algorithms with established iteration complexity (sample complexity) are provided for bilevel programming.

Proceedings ArticleDOI
01 Oct 2018
TL;DR: By combining the method of conditional expectation with network decompositions, this paper obtains a generic and clean recipe for derandomizing LOCAL algorithms, addressing the gap between known randomized and deterministic local distributed algorithms and leading to significant improvements on a number of problems, in some cases resolving known open problems.
Abstract: The gap between the known randomized and deterministic local distributed algorithms underlies arguably the most fundamental and central open question in distributed graph algorithms. In this paper, we combine the method of conditional expectation with network decompositions to obtain a generic and clean recipe for derandomizing LOCAL algorithms. This leads to significant improvements on a number of problems, in cases resolving known open problems. Two main results are: - An improved deterministic distributed algorithm for hypergraph maximal matching, improving on Fischer, Ghaffari, and Kuhn [FOCS '17]. This yields improved algorithms for edge-coloring, maximum matching approximation, and low out-degree edge orientation. The last result gives the first positive resolution in the Open Problem 11.10 in the book of Barenboim and Elkin. - Improved randomized and deterministic distributed algorithms for the Lovasz Local Lemma, which get closer to a conjecture of Chang and Pettie [FOCS '17].

Proceedings ArticleDOI
20 Jun 2018
TL;DR: The result implies that in the realizable case, where there is a true underlying function generating the data, Θ(log n) batches of adaptive samples are necessary and sufficient to approximately “learn to optimize” a monotone submodular function under a cardinality constraint.
Abstract: In this paper we study the adaptive complexity of submodular optimization. Informally, the adaptive complexity of a problem is the minimal number of sequential rounds required to achieve a constant factor approximation when polynomially-many queries can be executed in parallel at each round. Adaptivity is a fundamental concept that is heavily studied in computer science, largely due to the need for parallelizing computation. Somewhat surprisingly, very little is known about adaptivity in submodular optimization. For the canonical problem of maximizing a monotone submodular function under a cardinality constraint, to the best of our knowledge, all that is known to date is that the adaptive complexity is between 1 and Ω(n). Our main result in this paper is a tight characterization showing that the adaptive complexity of maximizing a monotone submodular function under a cardinality constraint is Θ(log n): - We describe an algorithm which requires O(log n) sequential rounds and achieves an approximation that is arbitrarily close to 1/3; - We show that no algorithm can achieve an approximation better than O(1 / log n) with fewer than O(log n / log log n) rounds. Thus, when allowing for parallelization, our algorithm achieves a constant factor approximation exponentially faster than any known existing algorithm for submodular maximization. Importantly, the approximation algorithm is achieved via adaptive sampling and complements a recent line of work on optimization of functions learned from data. In many cases we do not know the functions we optimize and learn them from labeled samples. Recent results show that no algorithm can obtain a constant factor approximation guarantee using polynomially-many labeled samples as in the PAC and PMAC models, drawn from any distribution. Since learning with non-adaptive samples over any distribution results in a sharp impossibility, we consider learning with adaptive samples where the learner obtains poly(n) samples drawn from a distribution of her choice in every round. Our result implies that in the realizable case, where there is a true underlying function generating the data, Θ(log n) batches of adaptive samples are necessary and sufficient to approximately “learn to optimize” a monotone submodular function under a cardinality constraint.

Proceedings ArticleDOI
19 Apr 2018
TL;DR: This work describes a pruned lattice-rescoring algorithm for ASR, improving the n-gram approximation method and bringing a 4x speedup for lattice-rescoring with 4-gram approximation while giving better recognition accuracies than the standard algorithm.
Abstract: Lattice-rescoring is a common approach to take advantage of recurrent neural language models in ASR, where a word-lattice is generated from 1st-pass decoding and the lattice is then rescored with a neural model, and an n-gram approximation method is usually adopted to limit the search space. In this work, we describe a pruned lattice-rescoring algorithm for ASR, improving the n-gram approximation method. The pruned algorithm further limits the search space and uses heuristic search to pick better histories when expanding the lattice. Experiments show that the proposed algorithm achieves better ASR accuracies while running much faster than the standard algorithm. In particular, it brings a 4x speedup for lattice-rescoring with 4-gram approximation while giving better recognition accuracies than the standard algorithm.

Proceedings ArticleDOI
16 Apr 2018
TL;DR: This paper considers IoT applications that receive continuous data streams from multiple sources in the network, and studies joint application placement and data routing to support all data streams with both bandwidth and delay guarantees.
Abstract: The emergence of the Internet-of-Things (IoT) has inspired numerous new applications. However, due to the limited resources in current IoT infrastructures and the stringent quality-of-service requirements of the applications, providing computing and communication supports for the applications is becoming increasingly difficult. In this paper, we consider IoT applications that receive continuous data streams from multiple sources in the network, and study joint application placement and data routing to support all data streams with both bandwidth and delay guarantees. We formulate the application provisioning problem both for a single application and for multiple applications, with both cases proved to be NP-hard. For the case with a single application, we propose a fully polynomial-time approximation scheme. For the multi-application scenario, if the applications can be parallelized among multiple distributed instances, we propose a fully polynomial-time approximation scheme; for general non-parallelizable applications, we propose a randomized algorithm and analyze its performance. Simulations show that the proposed algorithms greatly improve the quality-of-service of the IoT applications compared to the heuristics.

Posted Content
TL;DR: This paper applies Lyapunov optimization to decompose the long-term optimization problem into a series of real-time optimization problems which do not require a priori knowledge such as user mobility, and designs an approximation algorithm based on Markov approximation to seek a near-optimal solution.
Abstract: Mobile edge computing is a new computing paradigm, which pushes cloud computing capabilities away from the centralized cloud to the network edge. However, with the sinking of computing capabilities, the new challenge incurred by user mobility arises: since end-users typically move erratically, the services should be dynamically migrated among multiple edges to maintain the service performance, i.e., user-perceived latency. Tackling this problem is non-trivial since frequent service migration would greatly increase the operational cost. To address this challenge in terms of the performance-cost trade-off, in this paper we study the mobile edge service performance optimization problem under long-term cost budget constraint. To address user mobility which is typically unpredictable, we apply Lyapunov optimization to decompose the long-term optimization problem into a series of real-time optimization problems which do not require a priori knowledge such as user mobility. As the decomposed problem is NP-hard, we first design an approximation algorithm based on Markov approximation to seek a near-optimal solution. To make our solution scalable and amenable to future 5G application scenario with large-scale user devices, we further propose a distributed approximation scheme with greatly reduced time complexity, based on the technique of best response update. Rigorous theoretical analysis and extensive evaluations demonstrate the efficacy of the proposed centralized and distributed schemes.

Journal ArticleDOI
TL;DR: A two-stage low rank approximation (TSLRA) scheme is designed to recover image structures and refine texture details of corrupted images, which is comparable and even superior to some state-of-the-art inpainting algorithms.
Abstract: To recover the corrupted pixels, traditional inpainting methods based on low-rank priors generally need to solve a convex optimization problem by an iterative singular value shrinkage algorithm. In this paper, we propose a simple method for image inpainting using low rank approximation, which avoids the time-consuming iterative shrinkage. Specifically, if similar patches of a corrupted image are identified and reshaped as vectors, then a patch matrix can be constructed by collecting these similar patch-vectors. Due to its columns being highly linearly correlated, this patch matrix is low-rank. Instead of using an iterative singular value shrinkage scheme, the proposed method utilizes low rank approximation with truncated singular values to derive a closed-form estimate for each patch matrix. Depending upon an observation that there exists a distinct gap in the singular spectrum of patch matrix, the rank of each patch matrix is empirically determined by a heuristic procedure. Inspired by the inpainting algorithms with component decomposition, a two-stage low rank approximation (TSLRA) scheme is designed to recover image structures and refine texture details of corrupted images. Experimental results on various inpainting tasks demonstrate that the proposed method is comparable and even superior to some state-of-the-art inpainting algorithms.
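A minimal sketch of the core closed-form step described above: a truncated-SVD low-rank estimate of a patch matrix, with the rank picked at the largest gap in the singular spectrum. The grouping of similar patches and the two-stage structure/texture refinement of TSLRA are omitted.

```python
# Hedged sketch: closed-form low-rank estimate of one patch matrix via truncated SVD.
import numpy as np

def truncated_lowrank(patch_matrix):
    """patch_matrix: columns are vectorized similar patches of a corrupted image."""
    U, s, Vt = np.linalg.svd(patch_matrix, full_matrices=False)
    if s.size < 2:
        return patch_matrix
    r = int(np.argmax(s[:-1] - s[1:])) + 1      # rank chosen at the largest spectral gap
    return (U[:, :r] * s[:r]) @ Vt[:r, :]       # truncated-SVD reconstruction
```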

Journal ArticleDOI
TL;DR: An algorithm for the sparse signal recovery problem that incorporates damped Gaussian generalized approximate message passing into expectation-maximization-based sparse Bayesian learning (SBL) and is much more robust to arbitrary measurement matrix $\boldsymbol{A}$ than the standard damped GAMP algorithm.
Abstract: In this paper, we present an algorithm for the sparse signal recovery problem that incorporates damped Gaussian generalized approximate message passing (GGAMP) into expectation-maximization-based sparse Bayesian learning (SBL). In particular, GGAMP is used to implement the E-step in SBL in place of matrix inversion, leveraging the fact that GGAMP is guaranteed to converge with appropriate damping. The resulting GGAMP-SBL algorithm is much more robust to arbitrary measurement matrix $\boldsymbol{A}$ than the standard damped GAMP algorithm while being much lower complexity than the standard SBL algorithm. We then extend the approach from the single measurement vector case to the temporally correlated multiple measurement vector case, leading to the GGAMP-TSBL algorithm. We verify the robustness and computational advantages of the proposed algorithms through numerical experiments.

Journal ArticleDOI
TL;DR: This paper suggests a systematic way to adapt the existing metrics to quantitatively evaluate the performance of a preference-based evolutionary multiobjective optimization algorithm using reference points using multicriterion decision making approach.
Abstract: Measuring the performance of an algorithm for solving multiobjective optimization problem has always been challenging simply due to two conflicting goals, i.e., convergence and diversity of obtained tradeoff solutions. There are a number of metrics for evaluating the performance of a multiobjective optimizer that approximates the whole Pareto-optimal front. However, for evaluating the quality of a preferred subset of the whole front, the existing metrics are inadequate. In this paper, we suggest a systematic way to adapt the existing metrics to quantitatively evaluate the performance of a preference-based evolutionary multiobjective optimization algorithm using reference points. The basic idea is to preprocess the preferred solution set according to a multicriterion decision making approach before using a regular metric for performance assessment. Extensive experiments on several artificial scenarios, and benchmark problems fully demonstrate its effectiveness in evaluating the quality of different preferred solution sets with regard to various reference points supplied by a decision maker.

Proceedings ArticleDOI
01 Jul 2018
TL;DR: In this paper, the authors introduce an algorithmic framework for multiwinner voting problems when there is an additional requirement that the selected subset should be "fair" with respect to a given set of attributes.
Abstract: Multiwinner voting rules are used to select a small representative subset of candidates or items from a larger set given the preferences of voters. However, if candidates have sensitive attributes such as gender or ethnicity (when selecting a committee), or specified types such as political leaning (when selecting a subset of news items), an algorithm that chooses a subset by optimizing a multiwinner voting rule may be unbalanced in its selection -- it may under or over represent a particular gender or political orientation in the examples above. We introduce an algorithmic framework for multiwinner voting problems when there is an additional requirement that the selected subset should be "fair" with respect to a given set of attributes. Our framework provides the flexibility to (1) specify fairness with respect to multiple, non-disjoint attributes (e.g., ethnicity and gender) and (2) specify a score function. We study the computational complexity of this constrained multiwinner voting problem for monotone and submodular score functions and present several approximation algorithms and matching hardness of approximation results for various attribute group structure and types of score functions. We also present simulations that suggest that adding fairness constraints may not affect the scores significantly when compared to the unconstrained case.

Journal ArticleDOI
TL;DR: This work proposes a truthful, reverse-auction-based incentive mechanism that includes an approximation algorithm to select winning bids with a nearly minimum social cost and a payment algorithm to determine payments for all participants, and extends the problem to a more complex case in which the Quality of sensing Data of each vehicle is taken into consideration.
Abstract: In this paper, we focus on the incentive mechanism design for a vehicle-based, nondeterministic crowdsensing system. In this crowdsensing system, vehicles move along their trajectories and perform corresponding sensing tasks with different probabilities. Each task may be performed by multiple vehicles jointly so as to ensure a high probability of success. Designing an incentive mechanism for such a crowdsensing system is challenging since it contains a non-trivial set cover problem. To solve this problem, we propose a truthful, reverse-auction-based incentive mechanism that includes an approximation algorithm to select winning bids with a nearly minimum social cost and a payment algorithm to determine payments for all participants. Moreover, we extend the problem to a more complex case in which the Quality of sensing Data (QoD) of each vehicle is taken into consideration. For this problem, we propose a QoD-aware incentive mechanism, which consists of a QoD-aware winning-bid selection algorithm and a QoD-aware payment determination algorithm. We prove that the proposed incentive mechanisms have truthfulness, individual rationality, and computational efficiency. Moreover, we analyze the approximation ratios of the winning-bid selection algorithms. The simulations, based on a real vehicle trace, also demonstrate the significant performances of our incentive mechanisms.

Journal ArticleDOI
TL;DR: A novel online approximation algorithm is proposed by resorting to the regularization, rounding, and decomposition technique, which can be proved to have a parameterized competitive ratio with a polynomial running time and achieves an empirical competitive ratio around 2 – 4.
Abstract: In this paper, we advocate edge caching in cloud radio access networks (C-RAN) to facilitate the ever-increasing mobile multimedia services. In our framework, central offices will cooperatively allocate cloud resources to cache popular contents and satisfy user requests for those contents, so as to minimize the system costs in terms of storage, VM reconfiguration, content access latency, and content migration. However, this joint resource allocation, content placement and request routing, is nontrivial, since it needs to be continuously adjusted to accommodate system dynamics, such as user movement and content slashdot effect, while taking into account the time-correlated adjustment costs for VM reconfiguration and content migration. To this end, we build a comprehensive model to capture the key components of edge caching in C-RAN and formulate a joint optimization problem, aiming at minimizing the system costs over time and meanwhile satisfying the time-varying user requests and respecting various practical constraints (e.g., storage and bandwidth). Then, we propose a novel online approximation algorithm by resorting to the regularization, rounding, and decomposition technique, which can be proved to have a parameterized competitive ratio with a polynomial running time. Extensive trace-driven simulations corroborate the efficiency, flexibility, and lightweight of our proposed online algorithm; for instance, it achieves an empirical competitive ratio around 2 – 4 and gains over 30% improvement compared with many state-of-the-art algorithms in various system settings.

Posted Content
TL;DR: In this article, a numerical approximation method is presented that aims to deliver an approximation of the Kolmogorov PDE on an entire region $[a,b]^d$ without suffering from the curse of dimensionality.
Abstract: Stochastic differential equations (SDEs) and the Kolmogorov partial differential equations (PDEs) associated to them have been widely used in models from engineering, finance, and the natural sciences. In particular, SDEs and Kolmogorov PDEs, respectively, are highly employed in models for the approximative pricing of financial derivatives. Kolmogorov PDEs and SDEs, respectively, can typically not be solved explicitly and it has been and still is an active topic of research to design and analyze numerical methods which are able to approximately solve Kolmogorov PDEs and SDEs, respectively. Nearly all approximation methods for Kolmogorov PDEs in the literature suffer under the curse of dimensionality or only provide approximations of the solution of the PDE at a single fixed space-time point. In this paper we derive and propose a numerical approximation method which aims to overcome both of the above mentioned drawbacks and intends to deliver a numerical approximation of the Kolmogorov PDE on an entire region $[a,b]^d$ without suffering from the curse of dimensionality. Numerical results on examples including the heat equation, the Black-Scholes model, the stochastic Lorenz equation, and the Heston model suggest that the proposed approximation algorithm is quite effective in high dimensions in terms of both accuracy and speed.
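For the special case of the heat equation u_t = Δu on [a,b]^d, the approach described above reduces to regressing a network on simulated SDE endpoints via the Feynman-Kac representation; the sketch below follows that recipe with illustrative choices of dimension, horizon, initial condition, and network size, which are not the paper's exact setup.

```python
# Hedged sketch: learn u(T, .) of the heat equation u_t = Laplacian(u) on [a,b]^d,
# using the Feynman-Kac identity u(T, x) = E[phi(x + sqrt(2T) Z)] as the training signal.
import torch
import torch.nn as nn

d, T, a, b = 10, 1.0, -1.0, 1.0
phi = lambda x: (x ** 2).sum(dim=1, keepdim=True)     # illustrative initial condition

net = nn.Sequential(nn.Linear(d, 100), nn.ReLU(),
                    nn.Linear(100, 100), nn.ReLU(),
                    nn.Linear(100, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(3000):
    x = a + (b - a) * torch.rand(512, d)              # sample points in [a,b]^d
    x_T = x + (2 * T) ** 0.5 * torch.randn_like(x)    # endpoint of dX = sqrt(2) dW from x
    loss = ((net(x) - phi(x_T)) ** 2).mean()          # regress on Monte Carlo labels
    opt.zero_grad(); loss.backward(); opt.step()

# After training, net(x) approximates u(T, x) over the entire region [a,b]^d.
```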

Journal ArticleDOI
TL;DR: A cooperative secure transmission beamforming scheme is designed, realized through the satellite’s adaptive beamforming, artificial noise, and cooperative beamforming implemented by terrestrial BSs, to maximize the achievable secrecy rate of the eavesdropped fixed satellite service.
Abstract: In this paper, we consider a scenario where the satellite-terrestrial network is overlaid over the legacy cellular network. The established communication system is operated in the millimeter wave (mmWave) frequencies, which enables the massive antennas arrays to be equipped on the satellite and terrestrial base stations (BSs). The secure communication in this coexistence system of the satellite-terrestrial network and cellular network through the physical-layer security techniques is studied in this paper. To maximize the achievable secrecy rate of the eavesdropped fixed satellite service, we design a cooperative secure transmission beamforming scheme, which is realized through the satellite’s adaptive beamforming, artificial noise, and BSs’ cooperative beamforming implemented by terrestrial BSs. A non-cooperative beamforming scheme is also designed, according to which BSs implement the maximum ratio transmission beamforming strategy. Applying the designed secure beamforming schemes to the coexistence system established, we formulate the secrecy rate maximization problems subjected to the power and transmission quality constraints. To solve the nonconvex optimization problems, we design an approximation and iteration-based genetic algorithm, through which the original problems can be transformed into a series of convex quadratic problems. Simulation results show the impact of multiple antenna arrays at the mmWave on improving the secure communication. Our results also indicate that through the cooperative and adaptive beamforming, the secrecy rate can be greatly increased. In addition, the convergence and efficiency of the proposed iteration-based approximation algorithm are verified by the simulations.

Proceedings Article
25 Apr 2018
TL;DR: This paper investigates the ride-sharing assignment problem as a combinatorial optimization problem, shows that it is NP-hard, and designs an approximation algorithm which guarantees to output a solution with at most 2.5 times the optimal cost.
Abstract: We investigate the ride-sharing assignment problem from an algorithmic resource allocation point of view. Given a number of requests with source and destination locations, and a number of available car locations, the task is to assign cars to requests with two requests sharing one car. We formulate this as a combinatorial optimization problem, and show that it is NP-hard. We then design an approximation algorithm which guarantees to output a solution with at most 2.5 times the optimal cost. Experiments are conducted showing that our algorithm actually has a much better approximation ratio (around 1.2) on synthetically generated data.

Journal ArticleDOI
01 Jun 2018
TL;DR: This paper develops an efficient exact algorithm with several pruning techniques, an approximation algorithm with adjustable accuracy guarantees, and a novel quadtree based index that supports the efficient retrieval of users in a region and optimises the search regions with regard to the given query region.
Abstract: The problem of k-truss search has been well defined and investigated to find the highly correlated user groups in social networks. But there is no previous study to consider the constraint of users' spatial information in k-truss search, denoted as co-located community search in this paper. The co-located community can serve many real applications. To search the maximum co-located communities efficiently, we first develop an efficient exact algorithm with several pruning techniques. After that, we further develop an approximation algorithm with adjustable accuracy guarantees and explore more effective pruning rules, which can reduce the computational cost significantly. To accelerate the real-time efficiency, we also devise a novel quadtree based index to support the efficient retrieval of users in a region and optimise the search regions with regards to the given query region. Finally, we verify the performance of our proposed algorithms and index using five real datasets.

Journal ArticleDOI
TL;DR: This paper presents novel techniques to reformulate SCAPE into a traditional linear programming problem and proposes distributed algorithms with a provable (1 - ε) approximation ratio, which outperform the set-cover algorithm by up to 17.05% and achieve an average performance gain of 41.1% over the existing algorithm in terms of the overall charging utility.
Abstract: Wireless power transfer technology is considered as one of the promising solutions to address the energy limitation problems for end-devices, but its incurred potential risk of electromagnetic radiation (EMR) exposure is largely overlooked by most existing works. In this paper, we consider the Safe Charging with Adjustable PowEr (SCAPE) problem, namely, how to adjust the power of chargers to maximize the charging utility of devices, while assuring that EMR intensity at any location in the field does not exceed a given threshold $R_{t}$ . We present novel techniques to reformulate SCAPE into a traditional linear programming problem, and then remove its redundant constraints as much as possible to reduce computational effort. Next, we propose a series of distributed algorithms, including a fully distributed algorithm that provably achieves $(1-\epsilon)$ approximation ratio and requires only communications with neighbors within a constant distance for each charger. Through extensive simulation and testbed experiments, we demonstrate that our proposed algorithms can outperform the set-cover algorithm by up to 17.05%, and has an average performance gain of 41.1% over the existing algorithm in terms of the overall charging utility.
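A minimal sketch of the LP view of SCAPE, assuming a linearized charging utility, synthetic gain matrices, and EMR constraints sampled at a finite set of field locations; the paper's redundant-constraint reduction and distributed (1 - ε) algorithms are not shown.

```python
# Hedged sketch: choose charger powers to maximize a linearized charging utility
# subject to EMR caps at sampled field locations (synthetic data throughout).
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n_chargers, n_devices, n_points = 5, 8, 50
G_dev = rng.random((n_devices, n_chargers))      # power gain: charger -> device
G_emr = rng.random((n_points, n_chargers))       # EMR contribution: charger -> location
utility_weight = rng.random(n_devices)
Rt, Pmax = 2.0, 1.0                              # EMR threshold and per-charger power cap

c = -(utility_weight @ G_dev)                    # maximize weighted received power
res = linprog(c, A_ub=G_emr, b_ub=np.full(n_points, Rt),
              bounds=[(0, Pmax)] * n_chargers)
print("charger powers:", res.x)
```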

Proceedings ArticleDOI
01 Oct 2018
TL;DR: It is proved that the multivariate generating polynomial of the bases of any matroid is log-concave as a function over the positive orthant, and a general framework for approximate counting in discrete problems, based on convex optimization, is developed.
Abstract: We give a deterministic polynomial time 2^O(r)-approximation algorithm for the number of bases of a given matroid of rank r and the number of common bases of any two matroids of rank r. To the best of our knowledge, this is the first nontrivial deterministic approximation algorithm that works for arbitrary matroids. Based on a lower bound of Azar, Broder, and Frieze this is almost the best possible assuming oracle access to independent sets of the matroid. There are two main ingredients in our result: For the first, we build upon recent results of Adiprasito, Huh, and Katz and Huh and Wang on combinatorial hodge theory to derive a connection between matroids and log-concave polynomials. We expect that several new applications in approximation algorithms will be derived from this connection in future. Formally, we prove that the multivariate generating polynomial of the bases of any matroid is log-concave as a function over the positive orthant. For the second ingredient, we develop a general framework for approximate counting in discrete problems, based on convex optimization. The connection goes through subadditivity of the entropy. For matroids, we prove that an approximate superadditivity of the entropy holds by relying on the log-concavity of the corresponding polynomials.

Journal ArticleDOI
TL;DR: In this article, the authors proposed a new algorithm, dubbed Picard, which makes use of sparse approximate Hessians only as a preconditioner to the L-BFGS algorithm, refining the Hessian approximation from a memory of the past iterates.
Abstract: Independent Component Analysis (ICA) is a technique for unsupervised exploration of multichannel data that is widely used in observational sciences. In its classic form, ICA relies on modeling the data as linear mixtures of non-Gaussian independent sources. The maximization of the corresponding likelihood is a challenging problem if it has to be completed quickly and accurately on large sets of real data. This problem has been addressed by resorting to quasi-Newton methods, which rely on sparse approximations of the Hessian of the log-likelihood. However, those approximations are not accurate when the ICA model does not hold exactly, as is often the case for real datasets. We propose a new algorithm, dubbed Picard, which makes use of sparse approximate Hessians only as a preconditioner to the L-BFGS algorithm, refining the Hessian approximation from a memory of the past iterates. Extensive numerical comparisons to several algorithms of the same class demonstrate the superior performance of the proposed technique, especially on real data.