
Showing papers on "Approximation algorithm published in 2022"


Journal ArticleDOI
TL;DR: In this article, the authors formulated the robustness-oriented edge application deployment problem as a constrained optimization problem, proved its hardness, and provided an integer programming-based approach named READ-$\mathcal{O}$ for solving this problem precisely.
Abstract: In recent years, edge computing has emerged as a prospective distributed computing paradigm that overcomes several limitations of cloud computing. In the edge computing environment, a service provider can deploy its application instances on edge servers at the edge of the network to serve its own users with low latency. Given a limited budget $\mathcal{K}$ for deploying applications on the edge servers in a particular geographical area, a number of approaches have been proposed very recently to determine the optimal deployment strategy that achieves various optimization objectives, e.g., to maximize the servers’ coverage, to minimize the average network latency, etc. However, the robustness of the services collectively delivered by the service provider’s applications deployed on the edge servers has not been considered at all. This is a critical issue, especially in the highly distributed, dynamic and volatile edge computing environment. In this article, we make the first attempt to tackle this challenge. Specifically, we formulate this Robustness-oriented Edge Application Deployment (READ) problem as a constrained optimization problem and prove its $\mathcal{NP}$-hardness. Then, we provide an integer programming-based approach named READ-$\mathcal{O}$ for solving this problem precisely. We also provide an approximation algorithm, namely READ-$\mathcal{A}$, for finding near-optimal solutions to large-scale READ problems efficiently. We prove its approximation ratio is not worse than $\mathcal{K}/2$, which is a constant regardless of the total number of edge servers. We evaluate our approaches experimentally on a widely-used real-world dataset against five representative approaches. The experiment results demonstrate that our approaches can solve the READ problem effectively and efficiently.

28 citations


Journal ArticleDOI
TL;DR: In this paper, the authors studied the fundamental problem of scheduling the power of chargers so that the charging utility for all rechargeable devices is maximized while the probability that the EMR anywhere does not exceed a threshold $R_t$ is no less than a given confidence.
Abstract: One critical issue for wireless power transfer is to avoid human health impairments caused by electromagnetic radiation (EMR) exposure. The existing studies mainly focus on scheduling wireless chargers so that the (expected) EMR at any point in the area does not exceed a threshold $R_t$. Nevertheless, they overlook the EMR jitter that can push the EMR above $R_t$ even if the expected EMR is no more than $R_t$. This paper studies the fundamental problem of RObustly SafE charging for wireless power transfer (ROSE), that is, scheduling the power of chargers so that the charging utility for all rechargeable devices is maximized while the probability that the EMR anywhere does not exceed $R_t$ is no less than a given confidence. We first build our empirical probabilistic charging model and EMR model. Then, we present EMR approximation and area discretization techniques to formulate ROSE as a Second-Order Cone Program. After that, we propose the first redundant second-order cone constraints reduction algorithm to reduce the computational cost, and thereby obtain a $(1-\epsilon)$-approximation centralized algorithm. Further, we propose a $(1-\epsilon)$-approximation fully distributed algorithm scalable with network size for ROSE. We conduct both simulation and field experiments, and the results show that our algorithms can outperform comparison algorithms by 480.19 percent.
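For readers unfamiliar with the target formulation, a second-order cone program (SOCP) has the generic shape below; the abstract's EMR approximation and area discretization reduce ROSE to this form (the concrete constraint data come from the paper's models and are not reproduced here).

```latex
% Generic SOCP; ROSE is cast into this form after EMR approximation
% and area discretization (A_i, b_i, c_i, d_i encode the paper's constraints).
\begin{aligned}
\min_{x \in \mathbb{R}^{n}} \quad & f^{\top} x \\
\text{s.t.} \quad & \lVert A_{i} x + b_{i} \rVert_{2} \le c_{i}^{\top} x + d_{i},
\qquad i = 1, \dots, m.
\end{aligned}
```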

22 citations


Journal ArticleDOI
TL;DR: This paper formulates two novel optimization problems for delay-sensitive IoT applications, i.e., the total utility maximization problems under both static and dynamic offloading task request settings, and develops efficient approximation and online algorithms with provable performance guarantees for a special case in which the bandwidth capacity constraint is negligible.
Abstract: The Internet of Things (IoT) technology provides unprecedented opportunities to evolve the interconnection among human beings. However, the latency brought by unstable wireless networks and computation failures caused by limited resources on IoT devices prevent users from experiencing high efficiency and a seamless user experience. To address these shortcomings, integrating Mobile Edge Computing (MEC) with remote clouds is a promising platform to enable delay-sensitive service provisioning for IoT applications, where edge-clouds (cloudlets) are co-located with wireless access points in the proximity of IoT devices. Thus, computation-intensive tasks and sensing data from IoT devices can be offloaded to the MEC network immediately for processing, and the service response latency can be significantly reduced. In this paper, we first formulate two novel optimization problems for delay-sensitive IoT applications, i.e., the total utility maximization problems under both static and dynamic offloading task request settings, with the aim to maximize the accumulative user satisfaction on the use of the services provided by the MEC, and show the NP-hardness of the defined problems. We then devise efficient approximation and online algorithms with provable performance guarantees for the problems in a special case where the bandwidth capacity constraint is negligible. We also develop efficient heuristic algorithms for the problems with the bandwidth capacity constraint. We finally evaluate the performance of the proposed algorithms through experimental simulations. Experimental results demonstrate that the proposed algorithms are promising in reducing service delays and enhancing user satisfaction, and they outperform their counterparts by at least 10.8 percent.

22 citations


Journal ArticleDOI
TL;DR: A modular approximation methodology for efficient fixed-point hardware implementation of the sigmoid function that consists of three modules: piecewise linear (PWL) approximation as the initial solution, Taylor series approximation of the exponential function, and Newton–Raphson method-based approximation as the final solution.
Abstract: The sigmoid function is a widely used nonlinear activation function in neural networks. In this article, we present a modular approximation methodology for efficient fixed-point hardware implementation of the sigmoid function. Our design consists of three modules: piecewise linear (PWL) approximation as the initial solution, Taylor series approximation of the exponential function, and Newton–Raphson method-based approximation as the final solution. Its modularity enables the designer to flexibly choose the most appropriate approximation method for each module separately. Performance evaluation results indicate that our work strikes an appropriate balance among the objectives of approximation accuracy, hardware resource utilization, and performance.
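As a software-level illustration of the modular idea, the sketch below composes the three modules in one plausible way: a one-segment PWL seed for the reciprocal, a truncated Taylor series for the exponential, and Newton–Raphson refinement as the final step. The Taylor order, seed coefficients, and iteration count are illustrative choices, not the paper's fixed-point design.

```python
import math

def exp_taylor(x, terms=16):
    """Module 2: truncated Taylor series for e^x (adequate here for |x| <= 4)."""
    acc, term = 1.0, 1.0
    for k in range(1, terms):
        term *= x / k
        acc += term
    return acc

def pwl_seed(d):
    """Module 1: one-segment PWL initial guess for 1/d on [1, 2]
    (rescaled classic 48/17 - 32/17*D division seed; max relative error 1/17)."""
    return 24.0 / 17.0 - 8.0 / 17.0 * d

def sigmoid_approx(x, nr_iters=3):
    """sigmoid(x) = 1/(1 + e^{-x}), with the reciprocal refined by Newton-Raphson."""
    if x < 0:
        return 1.0 - sigmoid_approx(-x, nr_iters)   # sigmoid(-x) = 1 - sigmoid(x)
    d = 1.0 + exp_taylor(-x)                        # denominator lies in (1, 2]
    y = pwl_seed(d)                                 # Module 1: initial solution
    for _ in range(nr_iters):                       # Module 3: y <- y * (2 - d*y)
        y = y * (2.0 - d * y)
    return y

for x in (-4.0, -1.0, 0.0, 0.5, 3.0):
    print(f"{x:5.1f}  approx={sigmoid_approx(x):.6f}  exact={1/(1+math.exp(-x)):.6f}")
```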

16 citations


Journal ArticleDOI
01 Jan 2022
TL;DR: This work addresses model-free distributed stabilization of heterogeneous continuous-time linear multi-agent systems using reinforcement learning (RL); the second algorithm builds upon the results of the first and extends it to distributed stabilization of multi-agent systems with predefined interaction graphs.
Abstract: We address model-free distributed stabilization of heterogeneous continuous-time linear multi-agent systems using reinforcement learning (RL). Two algorithms are developed. The first algorithm solves a centralized linear quadratic regulator (LQR) problem without knowing any initial stabilizing gain in advance. The second algorithm builds upon the results of the first algorithm, and extends it to distributed stabilization of multi-agent systems with predefined interaction graphs. Rigorous proofs are provided to show that the proposed algorithms achieve guaranteed convergence if specific conditions hold. A simulation example is presented to demonstrate the theoretical results.

15 citations


Journal ArticleDOI
TL;DR: In this article, a combinatorial primal-dual $(3^{\alpha}+1)$-approximation algorithm was proposed for a submodular penalty function that is normalized and nondecreasing, and a polynomial time approximation scheme based on a plane subdivision technique was presented for a linear penalty function.
Abstract: In this paper, we introduce the minimum power cover problem with submodular and linear penalties. Suppose $U$ is a set of users and $S$ is a set of sensors in a $d$-dimensional space $\mathbb{R}^d$ with $d\geq 2$. Each sensor can adjust its power, and the relationship between the power $p(s)$ and the radius $r(s)$ of the service area of sensor $s$ satisfies $p(s)=c\cdot r(s)^{\alpha}$, where $c>0$ and $\alpha\geq 1$. Let $p$ be the power assignment for each sensor and $R$ be the set of users who are not covered by any sensor supported by $p$. The objective is to minimize the total power of $p$ plus the rejected penalty of $R$. For a submodular penalty function that is normalized and nondecreasing, we present a combinatorial primal-dual $(3^{\alpha}+1)$-approximation algorithm. For the case in which the penalty function is linear, we present a polynomial time approximation scheme based on a plane subdivision technique.
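To make the objective concrete, here is a hedged sketch that merely evaluates the cost of a candidate power assignment under the stated model (total power plus the linear penalties of uncovered users); the coordinates, radii, and penalties are illustrative placeholders, and no approximation algorithm is implied.

```python
import math

def power_cover_cost(sensors, users, radius, penalty, c=1.0, alpha=2.0):
    """Objective value: total power of the assignment plus the linear penalty
    of users left uncovered. Model: p(s) = c * r(s)**alpha with c > 0, alpha >= 1."""
    total_power = sum(c * r ** alpha for r in radius.values())
    uncovered = [u for u, pt in enumerate(users)
                 if all(math.dist(pt, sensors[s]) > r for s, r in radius.items())]
    return total_power + sum(penalty[u] for u in uncovered)

# Toy instance: two sensors, three users; rejecting the remote user is cheaper.
sensors = [(0.0, 0.0), (4.0, 0.0)]
users = [(1.0, 0.0), (3.5, 0.5), (10.0, 10.0)]
radius = {0: 1.0, 1: 1.0}     # chosen radii; total power is 1**2 + 1**2 = 2
penalty = [5.0, 5.0, 0.5]
print(power_cover_cost(sensors, users, radius, penalty))  # 2.0 + 0.5 = 2.5
```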

15 citations


Journal ArticleDOI
TL;DR: In this paper, the authors formulated the revenue-driven online task offloading problem, relaxed it to a linear fractional programming problem, and proposed the Level Balanced Allocation (LBA) algorithm to solve the relaxation.
Abstract: Mobile Edge Computing (MEC) has become an attractive solution to enhance the computing and storage capacity of mobile devices by leveraging available resources on edge nodes. In MEC, the arrivals of tasks are highly dynamic and are hard to predict precisely. It is of great importance yet very challenging to assign the tasks to edge nodes with guaranteed system performance. In this article, we aim to optimize the revenue earned by each edge node by optimally offloading tasks to the edge nodes. We formulate the revenue-driven online task offloading (ROTO) problem, which is proved to be NP-hard. We first relax ROTO to a linear fractional programming problem, for which we propose the Level Balanced Allocation (LBA) algorithm. We then show the performance guarantee of LBA through rigorous theoretical analysis, and present the LB-Rounding algorithm for ROTO using the primal-dual technique. The algorithm achieves an approximation ratio of $2(1+\xi)\ln(d+1)$ with a considerable probability, where $d$ is the maximum number of process slots of an edge node and $\xi$ is a small constant. The performance of the proposed algorithm is validated through both trace-driven simulations and testbed experiments. Results show that our proposed scheme is more efficient compared to baseline algorithms.

14 citations


Book ChapterDOI
01 Jan 2022
TL;DR: This paper presents a 4-approximation algorithm for MISR based on a similar recursive partitioning scheme; however, it uses a more general class of polygons—polygons that are horizontally or vertically convex—which allows for an arguably simpler analysis and an improved approximation ratio.
Abstract: We study the Maximum Independent Set of Rectangles (MISR) problem, where we are given a set of axis-parallel rectangles in the plane and the goal is to select a subset of non-overlapping rectangles of maximum cardinality. In a recent breakthrough, Mitchell [28] obtained the first constant-factor approximation algorithm for MISR. His algorithm achieves an approximation ratio of 10 and it is based on a dynamic program that intuitively recursively partitions the input plane into special polygons called corner-clipped rectangles (CCRs). In this paper, we present a 4-approximation algorithm for MISR which is based on a similar recursive partitioning scheme. However, we use a more general class of polygons—polygons that are horizontally or vertically convex—which allows us to provide an arguably simpler analysis and already improve the approximation ratio. Using a new fractional charging argument and fork-fences to guide the partitions, we improve the approximation ratio even more to 4. We hope that our ideas will lead to further progress towards a PTAS for MISR.
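As context for the problem statement only (not the paper's partitioning algorithm), a naive greedy baseline for MISR can be written in a few lines; it fixes the notions of overlap and independence but carries no approximation guarantee in general.

```python
def overlap(a, b):
    """Axis-parallel rectangles as (x1, y1, x2, y2) with x1 < x2 and y1 < y2.
    Interiors must intersect; sharing only a boundary does not count."""
    return not (a[2] <= b[0] or b[2] <= a[0] or a[3] <= b[1] or b[3] <= a[1])

def greedy_independent_rectangles(rects):
    """Naive baseline: scan by right edge, keep any rectangle that does not
    overlap those already chosen. No worst-case guarantee for MISR."""
    chosen = []
    for r in sorted(rects, key=lambda r: r[2]):
        if all(not overlap(r, c) for c in chosen):
            chosen.append(r)
    return chosen

rects = [(0, 0, 2, 2), (1, 1, 3, 3), (2, 0, 4, 2), (0, 3, 4, 4)]
print(greedy_independent_rectangles(rects))
# [(0, 0, 2, 2), (2, 0, 4, 2), (0, 3, 4, 4)] -- (1, 1, 3, 3) overlaps the first
```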

14 citations


Proceedings ArticleDOI
11 Apr 2022
TL;DR: A new primal-dual algorithm is proposed, inspired by the classic algorithm of Jain and Vazirani and the recent algorithm of Ahmadian, Norouzi-Fard, Svensson, and Ward, that achieves approximation ratios of 2.406 for Euclidean k-median and 5.912 for Euclidean k-means.
Abstract: Motivated by data analysis and machine learning applications, we consider the popular high-dimensional Euclidean k-median and k-means problems. We propose a new primal-dual algorithm, inspired by the classic algorithm of Jain and Vazirani and the recent algorithm of Ahmadian, Norouzi-Fard, Svensson, and Ward. Our algorithm achieves an approximation ratio of 2.406 and 5.912 for Euclidean k-median and k-means, respectively, improving upon the 2.633 approximation ratio of Ahmadian et al. and the 6.1291 approximation ratio of Grandoni, Ostrovsky, Rabani, Schulman, and Venkat. Our techniques involve a much stronger exploitation of the Euclidean metric than previous work on Euclidean clustering. In addition, we introduce a new method of removing excess centers using a variant of independent sets over graphs that we dub a “nested quasi-independent set”. In turn, this technique may be of interest for other optimization problems in Euclidean and ℓp metric spaces.

11 citations


Journal ArticleDOI
TL;DR: The aim is to find a feasible solution to the single-machine scheduling problem that minimizes the makespan plus the total resource consumption cost; the focus is on the design of pseudo-polynomial time and approximation algorithms.

Journal ArticleDOI
TL;DR: In this article, a bifactor approximation algorithm is proposed to solve the heterogeneous cloudlet placement problem, guaranteeing a bounded latency and placement cost while fully mapping user applications to appropriate cloudlets.
Abstract: Emerging applications with low-latency requirements such as real-time analytics, immersive media applications, and intelligent virtual assistants have rendered Edge Computing as a critical computing infrastructure. Existing studies have explored the cloudlet placement problem in a homogeneous scenario with different goals such as latency minimization, load balancing, energy efficiency, and placement cost minimization. However, placing cloudlets in a highly heterogeneous deployment scenario considering the next-generation 5G networks and IoT applications is still an open challenge. The novel requirements of these applications indicate that there is still a gap in ensuring low-latency service guarantees when deploying cloudlets. Furthermore, deploying cloudlets in a cost-effective manner and ensuring full coverage for all users in edge computing are other critical conflicting issues. In this article, we address these issues by designing a bifactor approximation algorithm to solve the heterogeneous cloudlet placement problem to guarantee a bounded latency and placement cost, while fully mapping user applications to appropriate cloudlets. We first formulate the problem as a multi-objective integer programming model and show that it is a computationally NP-hard problem. We then propose a bifactor approximation algorithm, ACP, to tackle its intractability. We investigate the effectiveness of ACP by performing extensive theoretical analysis and experiments on multiple deployment scenarios based on New York City OpenData. We prove that ACP provides a (2,4)-approximation ratio for the latency and the placement cost. The experimental results show that ACP obtains near-optimal results in a polynomial running time making it suitable for both short-term and long-term cloudlet placement in heterogeneous deployment scenarios.

Proceedings ArticleDOI
25 Apr 2022
TL;DR: FirmCore, a new family of dense subgraphs in ML networks, is presented; it satisfies many of the nice properties of k-cores in single-layer graphs and admits a polynomial time decomposition algorithm.
Abstract: A key graph mining primitive is extracting dense structures from graphs, and this has led to interesting notions such as k-cores which subsequently have been employed as building blocks for capturing the structure of complex networks and for designing efficient approximation algorithms for challenging problems such as finding the densest subgraph. In applications such as biological, social, and transportation networks, interactions between objects span multiple aspects. Multilayer (ML) networks have been proposed for accurately modeling such applications. In this paper, we present FirmCore, a new family of dense subgraphs in ML networks, and show that it satisfies many of the nice properties of k-cores in single-layer graphs. Unlike the state of the art core decomposition of ML graphs, FirmCores have a polynomial time algorithm, making them a powerful tool for understanding the structure of massive ML networks. We also extend FirmCore for directed ML graphs. We show that FirmCores and directed FirmCores can be used to obtain efficient approximation algorithms for finding the densest subgraphs of ML graphs and their directed counterparts. Our extensive experiments over several real ML graphs show that our FirmCore decomposition algorithm is significantly more efficient than known algorithms for core decompositions of ML graphs. Furthermore, it returns solutions of matching or better quality for the densest subgraph problem over (possibly directed) ML graphs.
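FirmCore generalizes the k-core idea from single-layer to multilayer graphs; for reference, the single-layer peeling primitive it builds on is the textbook routine sketched below (this is not the paper's FirmCore decomposition).

```python
from collections import defaultdict

def core_decomposition(edges):
    """Classic single-layer k-core peeling: repeatedly remove a minimum-degree
    vertex; a vertex's core number is its degree at removal time, made monotone."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    core, k, remaining = {}, 0, set(adj)
    while remaining:
        v = min(remaining, key=lambda x: len(adj[x] & remaining))
        k = max(k, len(adj[v] & remaining))   # core numbers never decrease
        core[v] = k
        remaining.remove(v)
    return core

edges = [(1, 2), (2, 3), (1, 3), (3, 4)]  # a triangle plus a pendant vertex
print(core_decomposition(edges))          # vertex 4 -> 1; vertices 1, 2, 3 -> 2
```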

Journal ArticleDOI
26 Jan 2022-Sensors
TL;DR: An innovative approach is introduced for adaptive piecewise linear interval approximation of sensor characteristics that are differentiable functions; it can easily be extended to many other sensor types and can improve the performance of resource-constrained devices.
Abstract: In this work, we introduce and use an innovative approach for adaptive piecewise linear interval approximation of sensor characteristics, which are differentiable functions. The aim is to obtain a discrete form of the inverse sensor characteristic with a predefined maximum approximation error, while minimizing the number of points defining the characteristic, which in turn is related to the possibilities for using microcontrollers with limited energy and memory resources. In this context, the results from the study indicate that to overcome the problems arising from the resource constraints of smart devices, appropriate “lightweight” algorithms are needed that allow efficient connectivity and intelligent management of the measurement processes. The method has two benefits: first, low-cost microcontrollers could be used for hardware implementation of the industrial sensor devices; second, the optimal subdivision of the measurement range reduces the space in the memory of the microcontroller necessary for storage of the parameters of the linearized characteristic. Although the discussed computational examples are aimed at building adaptive approximations for temperature sensors, the algorithm can easily be extended to many other sensor types and can improve the performance of resource-constrained devices. For a prescribed maximum approximation error, the inverse sensor characteristic is found directly in the linearized form. Further advantages of the proposed approach are: (i) the maximum error under linearization of the inverse sensor characteristic is the same at all intervals, except in the general case of the last one; (ii) the approach allows non-uniform distribution of the maximum approximation error, i.e., different maximum approximation errors could be assigned to particular intervals; (iii) the approach applies to the general type of differentiable sensor characteristics with piecewise concave/convex properties.
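A hedged sketch of the greedy mechanism described above: each linear segment is stretched (here by bisection) until the chord error would exceed the prescribed bound, so every interval except possibly the last uses its full error budget and the number of stored breakpoints stays small. The inverse characteristic below is an illustrative placeholder, not one of the paper's calibrated temperature curves.

```python
import math

def adaptive_pwl(f, lo, hi, max_err, probe=128):
    """Greedy left-to-right breakpoint placement for a differentiable f:
    grow each segment as far as the chord stays within max_err of f
    (checked on a dense probe grid). Assumes f is smooth enough that a
    positive-length segment always fits."""
    def chord_error(a, b):
        fa, fb = f(a), f(b)
        worst = 0.0
        for i in range(probe + 1):
            x = a + (b - a) * i / probe
            y = fa + (fb - fa) * (x - a) / (b - a) if b > a else fa
            worst = max(worst, abs(y - f(x)))
        return worst

    points, a = [lo], lo
    while a < hi:
        if chord_error(a, hi) <= max_err:
            b = hi                              # last segment reaches the end
        else:
            b_lo, b_hi = a, hi
            for _ in range(40):                 # bisect for the longest valid b
                mid = 0.5 * (b_lo + b_hi)
                if chord_error(a, mid) <= max_err:
                    b_lo = mid
                else:
                    b_hi = mid
            b = b_lo
        points.append(b)
        a = b
    return points

inverse_char = lambda r: 25.0 + 30.0 * math.log(10.0 / r)  # placeholder curve
pts = adaptive_pwl(inverse_char, 1.0, 10.0, max_err=0.05)
print(len(pts), [round(p, 3) for p in pts])
```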

Proceedings ArticleDOI
02 Jun 2022
TL;DR: This paper designs a simple algorithm with a significantly improved approximation ratio of (1-1/e) for max-weight stochastic matching under vertex arrivals with respect to the online benchmark, and is the first to consider the online benchmark for the edge arrival version of the problem.
Abstract: In this paper, we study max-weight stochastic matchings on online bipartite graphs under both vertex and edge arrivals. We focus on designing polynomial time approximation algorithms with respect to the online benchmark, which was first considered by Papadimitriou, Pollner, Saberi, and Wajc [EC'21]. In the vertex arrival version of the problem, the goal is to find an approximate max-weight matching of a given bipartite graph when the vertices in one part of the graph arrive online in a fixed order with independent chances of failure. Whenever a vertex arrives we should decide, irrevocably, whether to match it with one of its unmatched neighbors or leave it unmatched forever. There has been a long line of work designing approximation algorithms for different variants of this problem with respect to the offline benchmark (prophet). Papadimitriou et al., however, propose the alternative online benchmark and show that considering this new benchmark allows them to improve the 0.5 approximation ratio, which is the best ratio achievable with respect to the offline benchmark. They provide a 0.51-approximation algorithm which was later improved to 0.526 by Saberi and Wajc [ICALP'21]. The main contribution of this paper is designing a simple algorithm with a significantly improved approximation ratio of (1-1/e) for this problem. We also consider the edge arrival version in which, instead of vertices, edges of the graph arrive in an online fashion with independent chances of failure. Designing approximation algorithms for this problem has also been studied extensively with the best approximation ratio being 0.337 with respect to the offline benchmark. This paper, however, is the first to consider the online benchmark for the edge arrival version of the problem. For this problem, we provide a simple algorithm with an approximation ratio of 0.5 with respect to the online benchmark.
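To fix ideas about the vertex-arrival model (fixed online order, independent arrival failures, irrevocable decisions), the following Monte-Carlo sketch runs a plain greedy policy; it only illustrates the setting and is emphatically not the paper's (1-1/e)-approximation algorithm.

```python
import random

def greedy_matching_value(weights, arrival_prob, trials=10000, seed=0):
    """Estimate the expected matched weight of a greedy policy under the
    vertex-arrival model: online vertex i arrives with probability
    arrival_prob[i]; on arrival we irrevocably match it or leave it forever."""
    rng = random.Random(seed)
    n_offline = len(weights[0])
    total = 0.0
    for _ in range(trials):
        free = [True] * n_offline
        for i, row in enumerate(weights):
            if rng.random() > arrival_prob[i]:
                continue                          # vertex i failed to arrive
            best = max((j for j in range(n_offline) if free[j] and row[j] > 0),
                       key=lambda j: row[j], default=None)
            if best is not None:                  # greedy: heaviest free neighbor
                free[best] = False
                total += row[best]
    return total / trials

weights = [[3.0, 1.0], [0.0, 2.0]]                # 2 online x 2 offline vertices
print(greedy_matching_value(weights, [0.5, 0.9])) # about 0.5*3 + 0.9*2 = 3.3
```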


Proceedings ArticleDOI
09 Jun 2022
TL;DR: In this paper, the authors considered the turnstile model and gave a one-pass streaming algorithm for MST and a two-pass streaming algorithm for EMD, both achieving an approximation factor of Õ(log n) and using polylog(n, d, Δ) space only.
Abstract: We study streaming algorithms for two fundamental geometric problems: computing the cost of a Minimum Spanning Tree (MST) of an n-point set X ⊂ {1, 2, …, Δ}^d, and computing the Earth Mover Distance (EMD) between two multi-sets A, B ⊂ {1, 2, …, Δ}^d of size n. We consider the turnstile model, where points can be added and removed. We give a one-pass streaming algorithm for MST and a two-pass streaming algorithm for EMD, both achieving an approximation factor of Õ(log n) and using polylog(n, d, Δ) space only. Furthermore, our algorithm for EMD can be compressed to a single pass with a small additive error. Previously, the best known sublinear-space streaming algorithms for either problem achieved an approximation of O(min{log n, log(Δd)} · log n). For MST, we also prove that any constant-space streaming algorithm can only achieve an approximation of Ω(log n), analogous to the Ω(log n) lower bound for EMD.

Journal ArticleDOI
01 Jan 2022
TL;DR: In this article, the authors proposed a scenario sampling algorithm that is provably asymptotically optimal in obtaining the safe invariant set with arbitrarily high accuracy for an arbitrary scenario testing strategy.
Abstract: A typical scenario-based evaluation framework seeks to characterize a black-box system's safety performance (e.g., failure rate) through repeatedly sampling initialization configurations (scenario sampling) and executing a certain test policy for scenario propagation (scenario testing) with the black-box system involved as the test subject. In this letter, we first present a novel safety evaluation criterion that seeks to characterize the actual operational domain within which the test subject would remain safe indefinitely with high probability. By formulating the black-box testing scenario as a dynamic system, we show that the presented problem is equivalent to finding a certain almost robustly forward invariant set for the given system. Second, for an arbitrary scenario testing strategy, we propose a scenario sampling algorithm that is provably asymptotically optimal in obtaining the safe invariant set with arbitrarily high accuracy. Moreover, as one considers different testing strategies (e.g., biased sampling of safety-critical cases), we show that the proposed algorithm still converges to the unbiased approximation of the safety characterization outcome if the scenario testing satisfies a certain condition. Finally, the effectiveness of the presented scenario sampling algorithms and various theoretical properties are demonstrated in a case study of the safety evaluation of a control barrier function-based mobile robot collision avoidance system.

Book ChapterDOI
TL;DR: In this paper , the authors proposed a simple algorithm that, guided by an optimal solution to the cut LP, first selects a DFS tree and then finds a solution to MAP by computing an optimum augmentation of this tree.
Abstract: The Matching Augmentation Problem (MAP) has recently received significant attention as an important step towards better approximation algorithms for finding cheap 2-edge connected subgraphs. This has culminated in a $\frac{5}{3}$-approximation algorithm. However, the algorithm and its analysis are fairly involved and do not compare against the problem’s well-known LP relaxation, called the cut LP. In this paper, we propose a simple algorithm that, guided by an optimal solution to the cut LP, first selects a DFS tree and then finds a solution to MAP by computing an optimum augmentation of this tree. Using properties of extreme point solutions, we show that our algorithm always returns (in polynomial time) a better-than-2 approximation when compared to the cut LP. We thereby also obtain an improved upper bound on the integrality gap of this natural relaxation.

Journal ArticleDOI
TL;DR: A new method is proposed in which the MINLP model is reduced to an ILP model, more precisely a binary linear programming (BLP) model, without compromising global optimality and with extremely high efficiency.

Journal ArticleDOI
TL;DR: In this article, the authors presented an algorithm which, given any $m$-edge directed graph with positive integer capacities at most $U$, vertices $a$ and $b$, and an approximation parameter $\epsilon \in (0, 1)$, computes an additive $\epsilon mU$-approximate $a$-$b$ maximum flow in time $m^{1+o(1)}/\sqrt{\epsilon}$.
Abstract: We present an algorithm which given any $m$-edge directed graph with positive integer capacities at most $U$, vertices $a$ and $b$, and an approximation parameter $\epsilon \in (0, 1)$ computes an additive $\epsilon mU$-approximate $a$-$b$ maximum flow in time $m^{1+o(1)}/\sqrt{\epsilon}$. By applying the algorithm for $\epsilon = (mU)^{-2/3}$, rounding to an integral flow, and using augmenting paths, we obtain an algorithm which computes an exact $a$-$b$ maximum flow in time $m^{4/3+o(1)}U^{1/3}$ and an algorithm which given an $m$-edge bipartite graph computes an exact maximum cardinality matching in time $m^{4/3+o(1)}$.
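The exponents in the second claim follow from balancing the approximate-flow phase against the augmenting-path phase; a worked check of the stated parameter choice:

```latex
% With \epsilon = (mU)^{-2/3}, both phases cost m^{4/3+o(1)} U^{1/3}:
\underbrace{m^{1+o(1)}/\sqrt{\epsilon}}_{\text{approximate flow}}
  = m^{1+o(1)} (mU)^{1/3} = m^{4/3+o(1)} U^{1/3},
\qquad
\underbrace{O(m)\cdot \epsilon m U}_{\text{augmenting paths}}
  = m \cdot (mU)^{1/3} = m^{4/3} U^{1/3}.
```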

Proceedings ArticleDOI
01 Feb 2022
TL;DR: In this paper , the authors gave a polynomial-time constant-factor approximation algorithm for maximum independent set for (axis-aligned) rectangles in the plane, based on a new form of recursive partitioning, in which faces that are constant-complexity and orthogonally convex are recursively partitioned into a constant number of such faces.
Abstract: We give a polynomial-time constant-factor approximation algorithm for maximum independent set for (axis-aligned) rectangles in the plane. The best approximation factor previously known for polynomial-time algorithms is $O(\log\log n)$. The results are based on a new form of recursive partitioning in the plane, in which faces that are constant-complexity and orthogonally convex are recursively partitioned into a constant number of such faces.

Proceedings ArticleDOI
01 Feb 2022
TL;DR: In this paper, an approximation algorithm for weighted tree augmentation with approximation factor $1+\ln 2+\varepsilon < 1.7$ was presented, which is the first algorithm beating the longstanding factor of 2.
Abstract: We present an approximation algorithm for Weighted Tree Augmentation with approximation factor $1+\ln 2+\varepsilon < 1.7$. This is the first algorithm beating the longstanding factor of 2, which can be achieved through many standard techniques.

Proceedings ArticleDOI
08 Nov 2022
TL;DR: For any ε > 0, this paper gives a simple, deterministic (4+ε)-approximation algorithm for the Nash social welfare problem under submodular valuations, and an (ω+2+ε)-approximation for the asymmetric variant when the ratio between the largest weight and the average weight is at most ω.
Abstract: For any ε > 0, we give a simple, deterministic (4+ε)-approximation algorithm for the Nash social welfare (NSW) problem under submodular valuations. The previous best approximation factor was 380 via a randomized algorithm. We also consider the asymmetric variant of the problem, where the objective is to maximize the weighted geometric mean of agents’ valuations, and give an (ω+2+ε)-approximation if the ratio between the largest weight and the average weight is at most ω. We also show that the 1/2-EFX envy-freeness property can be attained simultaneously with a constant-factor approximation. More precisely, we can find an allocation in polynomial time which is both 1/2-EFX and an (8+ε)-approximation to the symmetric NSW problem under submodular valuations. The previous best approximation factor under 1/2-EFX was linear in the number of agents.
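For reference, the symmetric objective and the weighted geometric mean of the asymmetric variant are:

```latex
% Symmetric NSW over n agents with valuations v_i and allocation x = (x_1, ..., x_n):
\mathrm{NSW}(x) = \Bigl(\prod_{i=1}^{n} v_i(x_i)\Bigr)^{1/n},
\qquad
% asymmetric variant with agent weights w_i; \omega bounds \max_i w_i / \bar{w}:
\mathrm{NSW}_w(x) = \Bigl(\prod_{i=1}^{n} v_i(x_i)^{w_i}\Bigr)^{1/\sum_{i} w_i}.
```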

Journal ArticleDOI
TL;DR: In this paper, a new approximation algorithm for solving generalized Lyapunov matrix equations is proposed and a convergence analysis for this algorithm is presented, where the optimal parameter is determined to minimize the spectral radius of the corresponding iteration matrix and thereby obtain the fastest convergence.

Proceedings ArticleDOI
09 Jun 2022
TL;DR: This work establishes that the existence of a pseudodeterministic algorithm for APEP is fundamentally related to the gap between probabilistic promise classes and the corresponding standard complexity classes, and shows the following equivalence: APEP has a pseudodeterministic approximation algorithm if and only if every promise problem in PromiseBPP has a solution in BPP.
Abstract: A probabilistic algorithm A is pseudodeterministic if, on every input, there exists a canonical value that is output with high probability. If the algorithm outputs one of k canonical values with high probability, then it is called a k-pseudodeterministic algorithm. In the study of pseudodeterminism, the Acceptance Probability Estimation Problem (APEP), which is to additively approximate the acceptance probability of a Boolean circuit, is emerging as a central computational problem. This problem admits a 2-pseudodeterministic algorithm. Recently, it was shown that a pseudodeterministic algorithm for this problem would imply that any multi-valued function that admits a k-pseudodeterministic algorithm for a constant k (including approximation algorithms) also admits a pseudodeterministic algorithm (Dixon, Pavan, Vinodchandran; ITCS 2021). The contribution of the present work is two-fold. First, as our main conceptual contribution, we establish that the existence of a pseudodeterministic algorithm for APEP is fundamentally related to the gap between probabilistic promise classes and the corresponding standard complexity classes. In particular, we show the following equivalence: APEP has a pseudodeterministic approximation algorithm if and only if every promise problem in PromiseBPP has a solution in BPP. A conceptual interpretation of this equivalence is that the algorithmic gap between 2-pseudodeterminism and pseudodeterminism is equivalent to the gap between PromiseBPP and BPP. Based on this connection, we show that designing pseudodeterministic algorithms for APEP leads to the solution of some open problems in complexity theory, including new Boolean circuit lower bounds. This equivalence also explains how multi-pseudodeterminism is connected to problems in SearchBPP. In particular, we show that if APEP has a pseudodeterministic algorithm, then every problem that admits a k(n)-pseudodeterministic algorithm (for any polynomial k) is in SearchBPP and admits a pseudodeterministic algorithm. Motivated by this connection, we also explore its connection to probabilistic search problems and establish that APEP is complete for certain notions of search problems in the context of pseudodeterminism. Our second contribution is establishing query complexity lower bounds for multi-pseudodeterministic computations. We prove that for every k ≥ 1, there exists a problem whose (k+1)-pseudodeterministic query complexity, in the uniform query model, is O(1) but has a k-pseudodeterministic query complexity of Ω(n), even in the more general nonadaptive query model. A key contribution of this part of the work is the utilization of Sperner’s lemma in establishing query complexity lower bounds.
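Why APEP admits a 2-pseudodeterministic algorithm: estimate the acceptance probability by sampling and snap the estimate to a fixed grid; with high probability the estimate lands within the grid spacing of the true value, so the output concentrates on at most two adjacent canonical grid points. A minimal sketch, with a toy predicate standing in for the Boolean circuit:

```python
import random

def apep(circuit, n_inputs, eps, seed=None):
    """Additively estimate Pr[circuit accepts a uniform input], then snap the
    empirical estimate to the eps-grid. Across runs the answer is, w.h.p.,
    one of the (at most) two grid points flanking the true probability."""
    rng = random.Random(seed)
    samples = int(8 / eps ** 2)                   # Chernoff-style sample count
    hits = sum(circuit([rng.randrange(2) for _ in range(n_inputs)])
               for _ in range(samples))
    return eps * round(hits / samples / eps)      # snap to the grid

and2 = lambda bits: bits[0] and bits[1]           # toy 'circuit': Pr[accept] = 1/4
# Outputs over many seeds concentrate on the two grid values around 0.25:
print(sorted({round(apep(and2, 2, eps=0.1, seed=s), 2) for s in range(20)}))
```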

Journal ArticleDOI
TL;DR: Wang et al. proposed a contact coefficient to quantify the influence weight of each user and classified the users into different groups based on their influence weights, so that the rumors can be controlled by corresponding measures.

Proceedings ArticleDOI
09 Jun 2022
TL;DR: This work presents a deterministic algorithm that computes, in $n\cdot(\varepsilon^{-1}\log n)^{O(d)}$ time, a perfect matching between A and B whose cost is within a (1+ε) factor of the optimal matching under any ℓp-norm.
Abstract: Given two point sets A and B in ℝ^d of size n each, for some constant dimension d ≥ 1, and a parameter ε > 0, we present a deterministic algorithm that computes, in $n\cdot(\varepsilon^{-1}\log n)^{O(d)}$ time, a perfect matching between A and B whose cost is within a (1+ε) factor of the optimal matching under any ℓp-norm. Although a Monte-Carlo algorithm with a similar running time was proposed by Raghvendra and Agarwal [J. ACM 2020], the best-known deterministic ε-approximation algorithm takes $\Omega(n^{3/2})$ time. Our algorithm constructs a (refinement of a) tree cover of ℝ^d, and we develop several new tools to apply a tree-cover-based approach to compute an ε-approximate perfect matching.

Journal ArticleDOI
TL;DR: Wang et al. proposed an accurate and simplified worst-case approximation method to mitigate the non-line-of-sight (NLOS) influence in source positioning.
Abstract: Worst-case robust approximation was proven to be efficient in alleviating the non-line-of-sight (NLOS) influence for source positioning. However, the existing time-difference-of-arrival (TDOA)-based worst-case solutions still have two issues: 1) Inaccurate objective transformations are introduced in some algorithms, which reduce the accuracy; 2) A method with higher accuracy is computationally intensive. This study proposes an accurate and simplified worst-case approximation method to tackle the troubles. Precisely, we first prove that the nonconvex worst-case objective is piecewise monotone to the NLOS bias. We further use monotonicity to derive an accurate and convex expression of the worst-case objective. Then, we propose simplified transformations to redefine the worst-case approximation problem with fewer constraints. Besides, we prove the effectiveness of the simplified transformations. Simulations and experiments demonstrate that the proposed method with moderate computation exhibits better performance than the state-of-the-art worst-case approximation algorithms.

Proceedings ArticleDOI
01 Feb 2022
TL;DR: In this paper , a 380-approximation algorithm for the Nash Social Welfare problem with submodular valuations is presented, which builds on and extends a recent constant-factor approximation for Rado valuations.
Abstract: We present a 380-approximation algorithm for the Nash Social Welfare problem with submodular valuations. Our algorithm builds on and extends a recent constant-factor approximation for Rado valuations [15].