
Showing papers on "Approximation algorithm published in 2021"


Journal ArticleDOI
TL;DR: A channel estimation framework based on the parallel factor decomposition is proposed to unfold the resulting cascaded channel model, and it is demonstrated that the sum rate using the estimated channels always reaches that of perfect channels under various settings, verifying the effectiveness and robustness of the proposed estimation algorithms.
Abstract: Reconfigurable Intelligent Surfaces (RISs) have been recently considered as an energy-efficient solution for future wireless networks due to their fast and low-power configuration, which has increased potential in enabling massive connectivity and low-latency communications. Accurate and low-overhead channel estimation in RIS-based systems is one of the most critical challenges due to the usually large number of RIS unit elements and their distinctive hardware constraints. In this paper, we focus on the uplink of a RIS-empowered multi-user Multiple Input Single Output (MISO) communication system and propose a channel estimation framework based on the parallel factor decomposition to unfold the resulting cascaded channel model. We present two iterative estimation algorithms for the channels between the base station and RIS, as well as the channels between RIS and users. One is based on alternating least squares (ALS), while the other uses vector approximate message passing to iteratively reconstruct two unknown channels from the estimated vectors. To theoretically assess the performance of the ALS-based algorithm, we derive its estimation Cramer-Rao Bound (CRB). We also discuss the downlink achievable sum rate computation with estimated channels and different precoding schemes for the base station. Our extensive simulation results show that our algorithms outperform benchmark schemes and that the ALS technique achieves the CRB. It is also demonstrated that the sum rate using the estimated channels always reaches that of perfect channels under various settings, thus verifying the effectiveness and robustness of the proposed estimation algorithms.
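
The PARAFAC machinery underlying such a framework can be illustrated with a generic alternating least squares (ALS) fit of a rank-R CP decomposition of a three-way tensor. This is a minimal numpy sketch of the ALS principle only, not the paper's exact cascaded-channel model or its message-passing variant; tensor sizes and rank are illustrative.

```python
# Minimal CP/PARAFAC-ALS sketch: fit T ≈ sum_r a_r ∘ b_r ∘ c_r by cycling
# linear least-squares updates over the three factor matrices.
import numpy as np

def khatri_rao(A, B):
    """Column-wise Khatri-Rao product: (I x R), (J x R) -> (I*J x R)."""
    I, R = A.shape
    J, _ = B.shape
    return (A[:, None, :] * B[None, :, :]).reshape(I * J, R)

def unfold(T, mode):
    """Mode-n unfolding of a 3-way tensor (row-major convention)."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def cp_als(T, R, n_iter=200, seed=0):
    rng = np.random.default_rng(seed)
    A, B, C = (rng.standard_normal((n, R)) for n in T.shape)
    for _ in range(n_iter):
        A = np.linalg.lstsq(khatri_rao(B, C), unfold(T, 0).T, rcond=None)[0].T
        B = np.linalg.lstsq(khatri_rao(A, C), unfold(T, 1).T, rcond=None)[0].T
        C = np.linalg.lstsq(khatri_rao(A, B), unfold(T, 2).T, rcond=None)[0].T
    return A, B, C

# Sanity check on a synthetic rank-3 tensor: relative fit error.
rng = np.random.default_rng(1)
A0, B0, C0 = (rng.standard_normal((n, 3)) for n in (8, 6, 5))
T = np.einsum("ir,jr,kr->ijk", A0, B0, C0)
A, B, C = cp_als(T, R=3)
T_hat = np.einsum("ir,jr,kr->ijk", A, B, C)
print(np.linalg.norm(T_hat - T) / np.linalg.norm(T))
```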

260 citations


Journal ArticleDOI
TL;DR: This paper makes the first attempt to formulate the Edge Data Distribution (EDD) problem as a constrained optimization problem from the app vendor's perspective, and proposes an optimal approach named EDD-IP that solves the problem exactly with the Integer Programming technique.
Abstract: Edge computing, as an extension of cloud computing, distributes computing and storage resources from centralized cloud to distributed edge servers, to power a variety of applications demanding low latency, e.g., IoT services, virtual reality, real-time navigation, etc. From an app vendor's perspective, app data needs to be transferred from the cloud to specific edge servers in an area to serve the app users in the area. However, according to the pay-as-you-go business model, distributing a large amount of data from the cloud to edge servers can be expensive. The optimal data distribution strategy must minimize the cost incurred, which includes two major components, the cost of data transmission between the cloud and edge servers and the cost of data transmission between edge servers. In the meantime, the delay constraint must be fulfilled - the data distribution must not take too long. In this article, we make the first attempt to formulate this Edge Data Distribution (EDD) problem as a constrained optimization problem from the app vendor's perspective and prove its $\mathcal{NP}$-hardness. We propose an optimal approach named EDD-IP to solve this problem exactly with the Integer Programming technique. Then, we propose an $O(k)$-approximation algorithm named EDD-A for finding approximate solutions to large-scale EDD problems efficiently. EDD-IP and EDD-A are evaluated on a real-world dataset and the results demonstrate that they significantly outperform three representative approaches.
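
As a toy illustration of the greedy flavor of such approximations (not the paper's EDD-A itself), the sketch below repeatedly pays the expensive cloud-to-edge transfer for the server whose h-hop neighborhood covers the most uncovered destinations, letting cheaper edge-to-edge links serve the rest; the topology, cost values, and hop-count delay limit are all hypothetical stand-ins.

```python
# Toy greedy sketch of cloud-to-edge data distribution under a hop-count
# delay limit (illustrative only; EDD-A's exact rules are in the paper).
import networkx as nx

def greedy_distribution(G, destinations, max_hops, cloud_cost, edge_cost):
    """Pick 'seed' servers that receive the data from the cloud; every other
    destination must be within max_hops of some seed (edge-to-edge relay)."""
    uncovered, seeds, total_cost = set(destinations), [], 0.0
    while uncovered:
        best, best_covered = None, set()
        for v in G.nodes:
            # Ball of radius max_hops around candidate server v.
            ball = nx.single_source_shortest_path_length(G, v, cutoff=max_hops)
            covered = uncovered & set(ball)
            if len(covered) > len(best_covered):
                best, best_covered = v, covered
        seeds.append(best)
        total_cost += cloud_cost + edge_cost * len(best_covered - {best})
        uncovered -= best_covered
    return seeds, total_cost

# Example: a small random edge-server topology.
G = nx.erdos_renyi_graph(20, 0.2, seed=1)
print(greedy_distribution(G, destinations=range(20), max_hops=2,
                          cloud_cost=10.0, edge_cost=1.0))
```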

108 citations


Journal ArticleDOI
TL;DR: This work puts forth a neural network architecture inspired by the algorithmic unfolding of the iterative weighted minimum mean squared error (WMMSE) method; the architecture is permutation equivariant, facilitating generalizability across network topologies.
Abstract: We study the problem of optimal power allocation in a single-hop ad hoc wireless network. In solving this problem, we depart from classical purely model-based approaches and propose a hybrid method that retains key modeling elements in conjunction with data-driven components. More precisely, we put forth a neural network architecture inspired by the algorithmic unfolding of the iterative weighted minimum mean squared error (WMMSE) method, which we denote by unfolded WMMSE (UWMMSE). The learnable weights within UWMMSE are parameterized using graph neural networks (GNNs), where the time-varying underlying graphs are given by the fading interference coefficients in the wireless network. These GNNs are trained through a gradient descent approach based on multiple instances of the power allocation problem. We show that the proposed architecture is permutation equivariant, thus facilitating generalizability across network topologies. Comprehensive numerical experiments illustrate the performance attained by UWMMSE along with its robustness to hyper-parameter selection and generalizability to unseen scenarios such as different network densities and network sizes.
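
For reference, this is the classical model-based WMMSE iteration for scalar power control in a K-user interference channel, the procedure that UWMMSE unfolds into trainable layers. The single-antenna model and random channel gains below are an illustrative simplification, not the paper's setup.

```python
# Classical WMMSE iteration for a K-user scalar interference channel.
import numpy as np

def wmmse(H, Pmax, sigma2, n_iter=100):
    """H[i, j] = |channel gain| from transmitter j to receiver i."""
    K = H.shape[0]
    v = np.full(K, np.sqrt(Pmax))            # amplitudes; power p_i = v_i^2
    for _ in range(n_iter):
        rx_power = (H ** 2) @ (v ** 2)        # total received power per user
        u = np.diag(H) * v / (sigma2 + rx_power)       # MMSE receiver gains
        w = 1.0 / (1.0 - u * np.diag(H) * v)           # MSE weights
        v = w * u * np.diag(H) / ((H.T ** 2) @ (w * u ** 2))
        v = np.clip(v, 0.0, np.sqrt(Pmax))             # per-user power budget
    return v ** 2

rng = np.random.default_rng(0)
H = np.abs(rng.standard_normal((5, 5)))       # toy fading gains
print(wmmse(H, Pmax=1.0, sigma2=0.1))
```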

100 citations


Journal ArticleDOI
TL;DR: This article shows that it suffices to optimize the total charging rates to fulfill the charging requests before departure times and proposes a feature-based linear function approximator for the state–value function to further enhance the efficiency and generalization ability of the proposed algorithm.
Abstract: This article proposes a reinforcement-learning (RL) approach for optimizing charging scheduling and pricing strategies that maximize the system objective of a public electric vehicle (EV) charging station. The proposed algorithm is “online” in the sense that the charging and pricing decisions made at each time depend only on the observation of past events, and is “model-free” in the sense that the algorithm does not rely on any assumed stochastic models of uncertain events. To cope with the challenge arising from the time-varying continuous state and action spaces in the RL problem, we first show that it suffices to optimize the total charging rates to fulfill the charging requests before departure times. Then, we propose a feature-based linear function approximator for the state–value function to further enhance the efficiency and generalization ability of the proposed algorithm. Through numerical simulations with real-world data, we show that the proposed RL algorithm achieves on average 138.5% higher charging-station profit than representative benchmark algorithms.
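
A feature-based linear value approximator of the kind described is typically trained with semi-gradient TD updates; the sketch below shows that generic update rule on a toy episode stream. The features, rewards, and step size are placeholders, not the paper's charging-station model.

```python
import numpy as np

# Semi-gradient TD(0) with a linear state-value approximator V(s) ≈ w·φ(s).
def td0_linear(episodes, phi, dim, alpha=0.05, gamma=0.99):
    w = np.zeros(dim)
    for episode in episodes:               # episode = [(s, r, s_next), ...]
        for s, r, s_next in episode:       # s_next is None at termination
            v_next = w @ phi(s_next) if s_next is not None else 0.0
            td_error = r + gamma * v_next - w @ phi(s)
            w += alpha * td_error * phi(s)
    return w

# Toy two-state chain: state 0 -> state 1 -> terminal, rewards 0 then 1.
phi = lambda s: np.eye(2)[s]               # one-hot features
episodes = [[(0, 0.0, 1), (1, 1.0, None)]] * 200
print(td0_linear(episodes, phi, dim=2))    # ≈ [gamma * 1, 1]
```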

93 citations


Journal ArticleDOI
TL;DR: To suppress noise and improve accuracy in solving FDNO problems, a novel noise-tolerant neural (NTN) algorithm based on zeroing neural dynamics is proposed and investigated, and the quasi-Newton Broyden–Fletcher–Goldfarb–Shanno (BFGS) method is employed to eliminate the heavy computational burden of matrix inversion.
Abstract: Nonlinear optimization problems with dynamical parameters arise widely in many practical scientific and engineering applications, and various computational models have been presented for solving them under the hypothesis of short-time invariance. The only way to eliminate the large lagging error in the solution of the inherently dynamic nonlinear optimization problem is to estimate the future unknown information from the present and previous data during the solving process; this is termed the future dynamic nonlinear optimization (FDNO) problem. In this paper, to suppress noise and improve accuracy in solving FDNO problems, a novel noise-tolerant neural (NTN) algorithm based on zeroing neural dynamics is proposed and investigated. In addition, to reduce algorithm complexity, the quasi-Newton Broyden–Fletcher–Goldfarb–Shanno (BFGS) method is employed to eliminate the heavy computational burden of matrix inversion, yielding the NTN-BFGS algorithm. Moreover, theoretical analyses show that the proposed algorithms globally converge to a tiny error bound with or without the pollution of noise. Finally, numerical experiments are conducted to validate the superiority of the proposed NTN and NTN-BFGS algorithms for the online solution of FDNO problems.
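
The inversion-free ingredient can be seen in the standard BFGS recursion, which maintains an inverse-Hessian estimate through rank-two updates so each step costs only a matrix-vector product. This static toy example sketches the update alone and makes no attempt at the paper's dynamic, noise-tolerant setting.

```python
import numpy as np

# BFGS with the inverse-Hessian update H+ = (I-ρsyᵀ) H (I-ρysᵀ) + ρssᵀ,
# so no linear system is ever solved and no matrix is inverted.
def bfgs_minimize(grad, x0, n_iter=100, lr=0.5):
    n = x0.size
    H = np.eye(n)                       # inverse-Hessian estimate
    x, g = x0.copy(), grad(x0)
    for _ in range(n_iter):
        x_new = x - lr * H @ g          # quasi-Newton step (matrix-vector)
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        sy = s @ y
        if sy > 1e-12:                  # curvature condition
            rho = 1.0 / sy
            V = np.eye(n) - rho * np.outer(s, y)
            H = V @ H @ V.T + rho * np.outer(s, s)
        x, g = x_new, g_new
    return x

# Toy quadratic: minimize 0.5 x'Ax - b'x, whose gradient is Ax - b.
A = np.array([[3.0, 0.5], [0.5, 1.0]])
b = np.array([1.0, -2.0])
print(bfgs_minimize(lambda x: A @ x - b, np.zeros(2)))  # ≈ A^{-1} b
```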

88 citations


Journal ArticleDOI
TL;DR: In this paper, a general form of iterative algorithm induced deep-unfolding neural network (IAIDNN) is developed in matrix form to solve the sum-rate maximization problem for precoding design in MU-MIMO systems.
Abstract: Optimization theory assisted algorithms have received great attention for precoding design in multiuser multiple-input multiple-output (MU-MIMO) systems. Although the resultant optimization algorithms are able to provide excellent performance, they generally require considerable computational complexity, which gets in the way of their practical application in real-time systems. In this work, in order to address this issue, we first propose a framework for deep-unfolding, where a general form of iterative algorithm induced deep-unfolding neural network (IAIDNN) is developed in matrix form to better solve the problems in communication systems. Then, we implement the proposed deep-unfolding framework to solve the sum-rate maximization problem for precoding design in MU-MIMO systems. An efficient IAIDNN based on the structure of the classic weighted minimum mean-square error (WMMSE) iterative algorithm is developed. Specifically, the iterative WMMSE algorithm is unfolded into a layer-wise structure, where a number of trainable parameters are introduced to replace the high-complexity operations in the forward propagation. To train the network, a generalized chain rule of the IAIDNN is proposed to depict the recurrence relation of gradients between two adjacent layers in the back propagation. Moreover, we discuss the computational complexity and generalization ability of the proposed scheme. Simulation results show that the proposed IAIDNN efficiently achieves the performance of the iterative WMMSE algorithm with reduced computational complexity.

84 citations


Journal ArticleDOI
TL;DR: This work first utilizes the Lyapunov optimization method to decompose the long-term optimization problem into a series of instant optimization problems, and then proposes a sample average approximation-based stochastic algorithm to approximate the future expected system utility.
Abstract: The explosive growth of mobile devices promotes the prosperity of novel mobile applications, which can be realized by service offloading with the assistance of edge computing servers. However, due to the limited computation and storage capabilities of a single server, long service latency hinders the continuous development of service offloading in mobile networks. By supporting multi-server cooperation, Pervasive Edge Computing (PEC) is promising to enable service migration in highly dynamic mobile networks. With the objective of maximizing the system utility, we formulate the optimization problem by jointly considering the constraints of server storage capability and service execution latency. To enable dynamic service placement, we first utilize the Lyapunov optimization method to decompose the long-term optimization problem into a series of instant optimization problems. Then, a sample average approximation-based stochastic algorithm is proposed to approximate the future expected system utility. Afterwards, a distributed Markov approximation algorithm is utilized to determine the service placement configurations. Through theoretical analysis, the time complexity of our proposed algorithm is shown to be linear in the number of users, and the backlog queue of PEC servers is stable. Performance evaluations are conducted based on both synthetic and real trace-driven scenarios, with numerical results demonstrating the effectiveness of our proposed algorithm from various aspects.
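
The Lyapunov decomposition step commonly takes the drift-plus-penalty form sketched below, where a virtual queue tracks accumulated constraint violation and each slot solves a myopic weighted problem. The action set, utility, and budget here are placeholders rather than the paper's service-placement model.

```python
import math

# Generic drift-plus-penalty loop: a virtual queue Q turns a long-term
# average resource budget into per-slot penalty terms; V trades off queue
# stability against utility.
def drift_plus_penalty(actions, utility, usage, budget, V=10.0, T=1000):
    Q, chosen = 0.0, []
    for t in range(T):
        # Per-slot problem: maximize V*utility(a) - Q*usage(a).
        a = max(actions, key=lambda a: V * utility(a, t) - Q * usage(a, t))
        Q = max(Q + usage(a, t) - budget, 0.0)   # virtual-queue update
        chosen.append(a)
    return chosen

# Toy example: activate (1) or idle (0) under a 50% average activity budget.
picks = drift_plus_penalty(
    actions=[0, 1],
    utility=lambda a, t: a * (1.0 + 0.5 * math.sin(0.1 * t)),
    usage=lambda a, t: float(a),
    budget=0.5)
print("average activity:", sum(picks) / len(picks))
```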

83 citations


Journal ArticleDOI
TL;DR: The simulation results show that the proposed density-based content distribution method can substantially reduce the average transmission delay of content distribution under different network conditions, and offers better stability and self-adaptability under continuous time variation.
Abstract: Satellite-terrestrial networks (STN) utilize the spacious coverage and low transmission latency of Low Earth Orbit (LEO) constellations to distribute requested content to ground subscribers. With the development of the storage and computing capacity of satellite onboard equipment, it is considered promising to leverage in-network caching technology in STN to improve content distribution efficiency. However, traditional ground network caching schemes are not suitable for STN, given dynamic satellite propagation and the time-varying topology. More specifically, the unevenness of user distribution makes it difficult to guarantee quality of experience. To address these problems, we first propose a density-based block division algorithm to divide the content subscribers into a series of blocks with different sizes according to user density. The LEO satellite orbit and time-varying network model is established to describe the STN topology. Next, we propose an approximate minimum coverage vertex set algorithm and a novel cache node selection algorithm for optimal user block matching. The simulation results show that the proposed density-based content distribution method can substantially reduce the average transmission delay of content distribution under different network conditions and has better stability and self-adaptability under continuous time variation.
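
A density-based stand-in for the block-division idea: cluster users by geographic density with DBSCAN, so dense hot-spots form blocks and isolated users are flagged as noise. The paper's own block-division rules differ; the eps/min_samples values below are arbitrary.

```python
# Density-based grouping of ground users (illustrative stand-in only).
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
# Two dense user hot-spots plus sparse background users (lon/lat-like units).
users = np.vstack([rng.normal([0, 0], 0.05, (100, 2)),
                   rng.normal([1, 1], 0.05, (80, 2)),
                   rng.uniform(-1, 2, (20, 2))])
labels = DBSCAN(eps=0.1, min_samples=5).fit_predict(users)
print("blocks:", sorted(set(labels) - {-1}),
      "noise users:", int((labels == -1).sum()))
```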

81 citations


Journal ArticleDOI
TL;DR: This study investigates the problem of finding an optimal offloading scheme in which the objective of optimization is to maximize the system utility, trading off throughput against fairness, and provides an increment-based greedy approximation algorithm with an approximation ratio of $1 + \frac{1}{e-1}$.
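
Ratios of the form $1 + \frac{1}{e-1}$ typically come from incremental greedy selection on a monotone submodular utility; the generic template is sketched below with a toy coverage utility. The paper's actual utility and constraints are not reproduced here.

```python
# Generic incremental greedy under a cardinality budget: repeatedly add the
# element with the largest marginal utility gain.
def greedy_max(ground_set, utility, budget):
    chosen = set()
    while len(chosen) < budget:
        best = max((x for x in ground_set if x not in chosen),
                   key=lambda x: utility(chosen | {x}) - utility(chosen),
                   default=None)
        if best is None:
            break
        chosen.add(best)
    return chosen

# Toy coverage utility: each element covers a set of users.
cover = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"d"}, 4: {"a", "c", "d"}}
utility = lambda S: len(set().union(*(cover[x] for x in S)) if S else set())
print(greedy_max(cover, utility, budget=2))   # {1, 4} covers all four users
```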

73 citations


Journal ArticleDOI
TL;DR: A novel algorithm is designed to efficiently exploit the sparsity of patch-wise minimal pixels (PMP) in deblurring; the PMP of clear images is much sparser than that of blurred images, and hence is very effective in discriminating between clear and blurred images.
Abstract: Blind image deblurring is a long-standing challenging problem in image processing and low-level vision. Recently, sophisticated priors such as the dark channel prior, extreme channel prior, and local maximum gradient prior have shown promising effectiveness. However, these methods are computationally expensive. Meanwhile, since the subproblems involving these priors cannot be solved explicitly, approximate solutions are commonly used, which limits the best exploitation of their capability. To address these problems, this work first proposes a simplified sparsity prior of local minimal pixels, namely patch-wise minimal pixels (PMP). The PMP of clear images is much sparser than that of blurred ones, and hence is very effective in discriminating between clear and blurred images. Then, a novel algorithm is designed to efficiently exploit the sparsity of the PMP in deblurring. The new algorithm flexibly imposes sparsity-inducing regularization on the PMP under the maximum a posteriori (MAP) framework rather than directly using the half quadratic splitting algorithm. By this, it avoids the non-rigorous approximate solutions of existing algorithms, while being much more computationally efficient. Extensive experiments demonstrate that the proposed algorithm achieves better practical stability compared with the state of the art. In terms of deblurring quality, robustness, and computational efficiency, the new algorithm is superior to state-of-the-art methods. Code for reproducing the results of the new method is available at https://github.com/FWen/deblur-pmp.git.
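
The prior itself is simple to compute: the PMP map collects the minimum intensity of each image patch, and its sparsity is what separates clear from blurred images. A minimal sketch with non-overlapping patches follows (patch size and the near-zero threshold are illustrative choices).

```python
# Patch-wise minimal pixels (PMP): per-patch minimum intensities.
import numpy as np

def pmp(image, patch=8):
    """Return the map of per-patch minimum pixel values (grayscale in [0,1])."""
    H, W = image.shape
    Hc, Wc = H - H % patch, W - W % patch          # crop to whole patches
    blocks = image[:Hc, :Wc].reshape(Hc // patch, patch, Wc // patch, patch)
    return blocks.min(axis=(1, 3))                 # min over each patch

rng = np.random.default_rng(0)
img = rng.random((64, 64))                         # toy image
print("fraction of near-zero PMP entries:", (pmp(img) < 0.05).mean())
```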

68 citations


Journal ArticleDOI
TL;DR: A quantum algorithm for RR is presented, in which a technique of parallel Hamiltonian simulation for simulating a number of Hermitian matrices in parallel is proposed and used to develop a quantum version of $K$-fold cross-validation; the algorithm can efficiently handle non-sparse data matrices.
Abstract: Ridge regression (RR) is an important machine learning technique which introduces a regularization hyperparameter $\alpha$ to ordinary multiple linear regression for analyzing data suffering from multicollinearity. In this paper, we present a quantum algorithm for RR, where the technique of parallel Hamiltonian simulation to simulate a number of Hermitian matrices in parallel is proposed and used to develop a quantum version of the $K$-fold cross-validation approach, which can efficiently estimate the predictive performance of RR. Our algorithm consists of two phases: (1) using quantum $K$-fold cross-validation to efficiently determine a good $\alpha$ with which RR can achieve good predictive performance, and then (2) generating a quantum state encoding the optimal fitting parameters of RR with such $\alpha$, which can be further utilized to predict new data. Since indefinite dense Hamiltonian simulation has been adopted as a key subroutine, our algorithm can efficiently handle non-sparse data matrices. It is shown that our algorithm can achieve exponential speedup over the classical counterpart for (low-rank) data matrices with low condition numbers. However, when the condition numbers of the data matrices are large, as is the case for matrices of full or approximately full rank, only a polynomial speedup can be achieved.
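
The classical pipeline that phase (1) accelerates is easy to state: select $\alpha$ by $K$-fold cross-validation, then solve the regularized normal equations on all data. A minimal numpy sketch follows; the synthetic data and $\alpha$ grid are illustrative.

```python
import numpy as np

# Closed-form ridge: w = (X'X + alpha*I)^{-1} X'y via a linear solve.
def ridge_fit(X, y, alpha):
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ y)

def kfold_cv_error(X, y, alpha, K=5):
    folds = np.array_split(np.arange(len(y)), K)
    err = 0.0
    for f in folds:
        train = np.setdiff1d(np.arange(len(y)), f)
        w = ridge_fit(X[train], y[train], alpha)
        err += np.mean((X[f] @ w - y[f]) ** 2)     # held-out MSE
    return err / K

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))
w_true = rng.standard_normal(5)
y = X @ w_true + 0.1 * rng.standard_normal(100)
best = min([0.01, 0.1, 1.0, 10.0], key=lambda a: kfold_cv_error(X, y, a))
print("best alpha:", best, "weights:", ridge_fit(X, y, best))
```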

Journal ArticleDOI
TL;DR: This work investigates the problem of distributed representation learning from information-theoretic grounds, through a generalization of Tishby's centralized Information Bottleneck (IB) method to the distributed setting, in which the produced representations collectively preserve as much information as possible about the ground truth $Y$.
Abstract: The problem of distributed representation learning is one in which multiple sources of information $X_1,\ldots,X_K$ are processed separately so as to learn as much information as possible about some ground truth $Y$. We investigate this problem from information-theoretic grounds, through a generalization of Tishby's centralized Information Bottleneck (IB) method to the distributed setting. Specifically, $K$ encoders, $K \geq 2$, compress their observations $X_1,\ldots,X_K$ separately in a manner such that, collectively, the produced representations preserve as much information as possible about $Y$. We study both discrete memoryless (DM) and memoryless vector Gaussian data models. For the discrete model, we establish a single-letter characterization of the optimal tradeoff between complexity (or rate) and relevance (or information) for a class of memoryless sources (the observations $X_1,\ldots,X_K$ being conditionally independent given $Y$). For the vector Gaussian model, we provide an explicit characterization of the optimal complexity-relevance tradeoff. Furthermore, we develop a variational bound on the complexity-relevance tradeoff which generalizes the evidence lower bound (ELBO) to the distributed setting. We also provide two algorithms for computing this bound: i) a Blahut-Arimoto type iterative algorithm which computes optimal complexity-relevance encoding mappings by iterating over a set of self-consistent equations, and ii) a variational inference type algorithm in which the encoding mappings are parametrized by neural networks and the bound is approximated by Markov sampling and optimized with stochastic gradient descent. Numerical results on synthetic and real datasets are provided to support the efficiency of the approaches and algorithms developed in this paper.

Journal ArticleDOI
TL;DR: This work proposes a general primal-dual algorithmic framework that unifies many existing state-of-the-art algorithms, and establishes linear convergence of the proposed method to the exact minimizer in the presence of the nonsmooth term.
Abstract: This article studies a class of nonsmooth decentralized multiagent optimization problems where the agents aim at minimizing a sum of local strongly-convex smooth components plus a common nonsmooth term. We propose a general primal-dual algorithmic framework that unifies many existing state-of-the-art algorithms. We establish linear convergence of the proposed method to the exact minimizer in the presence of the nonsmooth term. Moreover, for the more general class of problems with agent-specific nonsmooth terms, we show that linear convergence cannot be achieved (in the worst case) for the class of algorithms that uses the gradients and the proximal mappings of the smooth and nonsmooth parts, respectively. We further provide a numerical counterexample that shows how some state-of-the-art algorithms fail to converge linearly for strongly convex objectives and different local nonsmooth terms.

Journal ArticleDOI
TL;DR: This article studies how to efficiently offload dependent tasks to edge nodes with limited (and predetermined) service caching, and designs an efficient convex programming based algorithm (CP) to solve this problem.
Abstract: In Mobile Edge Computing (MEC), many tasks require specific service support for execution and in addition, have a dependent order of execution among the tasks. However, previous works often ignore the impact of having limited services cached at the edge nodes on (dependent) task offloading, thus may lead to an infeasible offloading decision or a longer completion time. To bridge the gap, this article studies how to efficiently offload dependent tasks to edge nodes with limited (and predetermined) service caching. We formally define the problem of offloading dependent tasks with service caching (ODT-SC), and prove that there exists no algorithm with constant approximation for this hard problem. Then, we design an efficient convex programming based algorithm (CP) to solve this problem. Moreover, we study a special case with a homogeneous MEC and propose a favorite successor based algorithm (FS) to solve this special case with a competitive ratio of $O(1)$. Extensive simulation results using Google data traces show that our proposed algorithms can significantly reduce applications' completion time by about 21-47 percent compared with other alternatives.

Journal ArticleDOI
TL;DR: By clustering multiple users into independent communities based on their geographic locations, a 5G-enabled UAV-to-community offloading system is designed that maximizes the system throughput while guaranteeing the fraction of served users.
Abstract: Due to line-of-sight communication links and distributed deployment, Unmanned Aerial Vehicles (UAVs) have attracted substantial interest in agile Mobile Edge Computing (MEC) service provision. In this paper, by clustering multiple users into independent communities based on their geographic locations, we design a 5G-enabled UAV-to-community offloading system. A system throughput maximization problem is formulated, subject to constraints on the transmission rate, the atomicity of tasks, and the speed of UAVs. By relaxing the transmission rate constraint, the mixed integer non-linear program is transformed into two subproblems. We first develop an average throughput maximization-based auction algorithm to determine the trajectory of UAVs, where a community-based latency approximation algorithm is developed to regulate the designed auction bidding. Then, a dynamic task admission algorithm is proposed to solve the task scheduling subproblem within one community. Performance analyses demonstrate that our designed auction bidding guarantees user truthfulness and can be executed in polynomial time. Extensive simulations based on real-world data in health monitoring and online YouTube video services show that our proposed algorithm is able to maximize the system throughput while guaranteeing the fraction of served users.

Journal ArticleDOI
TL;DR: Simulation results show that the proposed hybrid detection algorithm can not only approach the performance of the near-optimal symbol-wise maximum a posteriori (MAP) algorithm, but also offer a substantial performance gain compared with existing algorithms.
Abstract: Orthogonal time frequency space (OTFS) modulation has attracted substantial attention recently due to its great potential of providing reliable communications in high-mobility scenarios. In this article, we propose a novel hybrid signal detection algorithm for OTFS modulation. Based on the system model, we first derive the near-optimal symbol-wise maximum a posteriori (MAP) detection algorithm for OTFS modulation. Then, in order to reduce the detection complexity, we propose a partitioning rule that separates the related received symbols into two subsets for detecting each transmitted symbol, according to the corresponding path gains. According to the partitioning rule, we design the hybrid detection algorithm to exploit the power discrepancy of each subset, where the MAP detection is applied to the subset with larger channel gains, while the parallel interference cancellation (PIC) detection is applied to the subset with smaller channel gains. We also provide the error performance analysis of the proposed hybrid detection algorithm. Simulation results show that the proposed hybrid detection algorithm can not only approach the performance of the near-optimal symbol-wise MAP algorithms, but also offer a substantial performance gain compared with existing algorithms.

Journal ArticleDOI
TL;DR: The geometric consistency index (GCI) approximated thresholds are extended to measure the degree of consistency for an FPR and an integrated algorithm is proposed to improve simultaneously the ordinal and multiplicative consistencies.
Abstract: Consistency, multiplicative and ordinal, of fuzzy preference relations (FPRs) is investigated. The geometric consistency index (GCI) approximated thresholds are extended to measure the degree of consistency for an FPR. For inconsistent FPRs, two algorithms are devised: 1) to find the multiplicative inconsistent elements and 2) to detect the ordinally inconsistent elements. An integrated algorithm is proposed to improve simultaneously the ordinal and multiplicative consistencies. Finally, some examples, comparative analysis, and simulation experiments are provided to demonstrate the effectiveness of the proposed methods.

Journal ArticleDOI
TL;DR: In this article, the authors consider the problem of solving complex multi-stage decision problems using methods that are based on the idea of policy iteration (PI), i.e., start from some base policy and generate an improved policy.
Abstract: We discuss the solution of complex multistage decision problems using methods that are based on the idea of policy iteration (PI), i.e., start from some base policy and generate an improved policy. Rollout is the simplest method of this type, where just one improved policy is generated. We can view PI as repeated application of rollout, where the rollout policy at each iteration serves as the base policy for the next iteration. In contrast with PI, rollout has a robustness property: it can be applied on-line and is suitable for on-line replanning. Moreover, rollout can use as base policy one of the policies produced by PI, thereby improving on that policy. This is the type of scheme underlying the prominently successful AlphaZero chess program. In this paper we focus on rollout and PI-like methods for problems where the control consists of multiple components each selected (conceptually) by a separate agent. This is the class of multiagent problems where the agents have a shared objective function, and a shared and perfect state information. Based on a problem reformulation that trades off control space complexity with state space complexity, we develop an approach, whereby at every stage, the agents sequentially (one-at-a-time) execute a local rollout algorithm that uses a base policy, together with some coordinating information from the other agents. The amount of total computation required at every stage grows linearly with the number of agents. By contrast, in the standard rollout algorithm, the amount of total computation grows exponentially with the number of agents. Despite the dramatic reduction in required computation, we show that our multiagent rollout algorithm has the fundamental cost improvement property of standard rollout: it guarantees an improved performance relative to the base policy. We also discuss autonomous multiagent rollout schemes that allow the agents to make decisions autonomously through the use of precomputed signaling information, which is sufficient to maintain the cost improvement property, without any on-line coordination of control selection between the agents. For discounted and other infinite horizon problems, we also consider exact and approximate PI algorithms involving a new type of one-agent-at-a-time policy improvement operation. For one of our PI algorithms, we prove convergence to an agent-by-agent optimal policy, thus establishing a connection with the theory of teams. For another PI algorithm, which is executed over a more complex state space, we prove convergence to an optimal policy. Approximate forms of these algorithms are also given, based on the use of policy and value neural networks. These PI algorithms, in both their exact and their approximate form are strictly off-line methods, but they can be used to provide a base policy for use in an on-line multiagent rollout scheme.
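
The one-agent-at-a-time idea reduces the per-stage search from the product of the agents' action sets to their sum. Below is a generic sketch of a single rollout step; the state, base policy, and Q-estimator are stubs standing in for a problem-specific simulator, and the toy demo at the end is only illustrative.

```python
# One-agent-at-a-time rollout: each agent in turn optimizes its own action
# component while the others are held fixed, so total work per stage grows
# linearly (not exponentially) in the number of agents.
def multiagent_rollout_step(state, agents, actions_of, base_policy, q_estimate):
    """q_estimate(state, joint_action) should simulate the base policy from
    the resulting state and return an estimated cost-to-go."""
    joint = {a: base_policy(state, a) for a in agents}   # start from base
    for a in agents:                                     # agents act in turn
        best_u, best_q = None, float("inf")
        for u in actions_of(state, a):
            trial = dict(joint, **{a: u})
            q = q_estimate(state, trial)
            if q < best_q:
                best_u, best_q = u, q
        joint[a] = best_u        # fix this agent's component, move to next
    return joint

# Toy demo: two agents, cost (u1 + u2 - 1)^2; base policy always plays 0.
demo = multiagent_rollout_step(
    state=None,
    agents=["a1", "a2"],
    actions_of=lambda s, a: [0, 1],
    base_policy=lambda s, a: 0,
    q_estimate=lambda s, joint: (sum(joint.values()) - 1) ** 2)
print(demo)   # {'a1': 1, 'a2': 0} -- improves on the all-zeros base policy
```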

Journal ArticleDOI
TL;DR: This paper investigates resource allocation for IRS-assisted green multiuser multiple-input single-output (MISO) systems and shows that the proposed algorithms can significantly reduce the total transmit power at the AP compared to various baseline schemes and that the optimal numbers of transmit antennas and IRS reflecting elements are finite.
Abstract: In this paper, we investigate resource allocation for IRS-assisted green multiuser multiple-input single-output (MISO) systems. To minimize the total transmit power, both the beamforming vectors at the access point (AP) and the phase shifts at multiple IRSs are jointly optimized, while taking into account the minimum required quality-of-service (QoS) of multiple users. First, two novel algorithms, namely a penalty-based alternating minimization (AltMin) algorithm and an inner approximation (IA) algorithm, are developed to tackle the non-convexity of the formulated optimization problem when perfect channel state information (CSI) is available. Existing designs employ semidefinite relaxation in AltMin-based algorithms, which, however, cannot ensure convergence. In contrast, the proposed penalty-based AltMin and IA algorithms are guaranteed to converge to a stationary point and a Karush-Kuhn-Tucker (KKT) solution of the design problem, respectively. Second, the impact of imperfect knowledge of the CSI of the channels between the AP and the users is investigated. To this end, a non-convex robust optimization problem is formulated and the penalty-based AltMin algorithm is extended to obtain a stationary solution. Simulation results reveal a key trade-off between the speed of convergence and the achievable total transmit power for the two proposed algorithms. In addition, we show that the proposed algorithms can significantly reduce the total transmit power at the AP compared to various baseline schemes and that the optimal numbers of transmit antennas and IRS reflecting elements, which maximize the system energy efficiency of the considered system, are finite.

Journal ArticleDOI
TL;DR: The authors develop a new algorithm for the case in which the number of sensors is greater than the number of state variables (oversampling), where the maximization of the determinant of the matrix appearing in pseudo-inverse matrix operations is employed as the objective function of the extended approach.
Abstract: In this paper, the sparse sensor placement problem for least-squares estimation is considered, and a previously proposed sparse sensor selection algorithm is extended. The maximization of the determinant of the matrix which appears in pseudo-inverse matrix operations is employed as the objective function of the problem in the present extended approach. The procedure for the maximization of the determinant of the corresponding matrix is proved to be mathematically the same as that of the previously proposed QR method when the number of sensors is less than the number of state variables (undersampling). In addition, the authors develop a new algorithm for the case in which the number of sensors is greater than the number of state variables (oversampling). A unified formulation of the two algorithms is then derived, and the lower bound of the objective function given by this algorithm is shown using the monotone submodularity of the objective function. The effectiveness of the proposed algorithm is demonstrated on real datasets by comparison with other algorithms. The numerical results show that the proposed algorithm improves the estimation error by approximately 10% compared with the conventional methods in the oversampling case, where the estimation error is defined as the ratio of the difference between the reconstructed data and the full observation data to the full observation. For the NOAA-SST sensor problem, which has more than ten thousand sensor candidate points, the proposed algorithm selects the sensor positions in a few seconds in the oversampling case on a 3.40 GHz computer, whereas the other algorithms required several hours.
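
In the undersampling regime, the determinant-maximizing selection coincides with QR column pivoting, which is easy to reproduce; below is a minimal sketch on a toy orthonormal basis. The oversampling extension developed in the paper is not shown.

```python
# Greedy determinant maximization via QR with column pivoting: pick p sensor
# rows of an n x r basis U (here p = r, the undersampling boundary case).
import numpy as np
from scipy.linalg import qr

def qr_sensor_placement(U, p):
    """U: n x r basis (e.g., POD modes). Returns indices of p sensor rows."""
    _, _, piv = qr(U.T, pivoting=True, mode="economic")
    return piv[:p]

rng = np.random.default_rng(0)
U, _ = np.linalg.qr(rng.standard_normal((200, 10)))   # toy orthonormal basis
sensors = qr_sensor_placement(U, p=10)
x = rng.standard_normal(10)                           # latent state
y = U[sensors] @ x                                    # sparse measurements
x_hat = np.linalg.solve(U[sensors], y)                # least-squares estimate
print(np.allclose(x, x_hat))                          # True: state recovered
```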

Journal ArticleDOI
TL;DR: The Robustness-oriented Edge Application Deployment (READ) problem is formulated as a constrained optimization problem and proved NP-hard; an integer programming based approach, READ-O, solves it exactly, and an approximation algorithm, READ-A, efficiently finds near-optimal solutions to large-scale problems.
Abstract: Edge computing (EC) can overcome several limitations of cloud computing. In the EC environment, a service provider can deploy its application instances on edge servers to serve users with low latency. Given a limited budget K for deploying applications in a particular geographical area, some approaches have been proposed to achieve various optimization objectives, e.g., to maximize the servers' coverage, to minimize the average network latency, etc. However, the robustness of the services collectively delivered by the service provider's applications deployed on the edge servers has not been considered at all. This is a critical issue, especially in the highly distributed, dynamic and volatile EC environment. We make the first attempt to tackle this challenge. Specifically, we formulate this Robustness-oriented Edge Application Deployment (READ) problem as a constrained optimization problem and prove its NP-hardness. Then, we provide an integer programming based approach READ-O for solving it precisely, and an approximation algorithm READ-A for efficiently finding near-optimal solutions to large-scale problems. READ-A's approximation ratio is not worse than K/2, which is constant regardless of the total number of edge servers. Evaluation on a widely-used real-world dataset against five representative approaches demonstrates that our approaches can solve the READ problem effectively and efficiently.

Journal ArticleDOI
TL;DR: This paper studies the problem of how to place VNFs on edge and public clouds and route the traffic among adjacent VNF pairs, such that the maximum link load ratio is minimized and each user's requested delay is satisfied, and proposes an efficient randomized rounding approximation algorithm.
Abstract: Mobile Edge Computing (MEC) offers a way to shorten the cloud servicing delay by building the small-scale cloud infrastructures at the network edge, which are in close proximity to the end users. Moreover, Network Function Virtualization (NFV) has been an emerging technology that transforms traditional dedicated hardware implementations into software instances running in a virtualized environment. In NFV, the requested service is implemented by a sequence of Virtual Network Functions (VNF) that can run on generic servers by leveraging the virtualization technology. Service Function Chaining (SFC) is defined as a chain-ordered set of placed VNFs that handles the traffic of the delivery and control of a specific application. NFV therefore allows network resources to be allocated in a more scalable and elastic manner, offers a more efficient and agile management and operation mechanism for network functions, and hence can largely reduce the overall costs in MEC. In this paper, we study the problem of how to place VNFs on edge and public clouds and route the traffic among adjacent VNF pairs, such that the maximum link load ratio is minimized and each user's requested delay is satisfied. We consider this problem for both totally ordered SFCs and partially ordered SFCs. We prove that this problem is NP-hard, even for the special case when only one VNF is requested. We subsequently propose an efficient randomized rounding approximation algorithm to solve this problem. Extensive simulation results show that the proposed approximation algorithm can achieve close-to-optimal performance in terms of acceptance ratio and maximum link load ratio.
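
The rounding step itself is simple to sketch: treat each VNF's fractional placement variables as a probability distribution over servers and sample from it. Deriving the fractional LP solution and repairing any violated capacity or delay constraints are problem-specific and omitted here; the matrix below is a made-up example.

```python
# Randomized rounding of a fractional placement: x_frac[f][s] is the LP
# "probability" of placing VNF f on server s; sample one server per VNF.
import numpy as np

def round_placement(x_frac, rng):
    """x_frac: (num_vnfs x num_servers) row-stochastic fractional solution."""
    return [int(rng.choice(x_frac.shape[1], p=row)) for row in x_frac]

rng = np.random.default_rng(0)
x_frac = np.array([[0.7, 0.3, 0.0],     # VNF 0: mostly server 0
                   [0.2, 0.5, 0.3],     # VNF 1: spread out
                   [0.0, 0.1, 0.9]])    # VNF 2: mostly server 2
print(round_placement(x_frac, rng))     # one sampled integral placement
```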

Journal ArticleDOI
TL;DR: The paper discusses the recent connections established between RM codes, thresholds of Boolean functions, polarization theory, hypercontractivity, and the techniques of approximating low weight codewords using lower degree polynomials, as well as some of the algorithmic developments.
Abstract: Reed-Muller (RM) codes are among the oldest, simplest and perhaps most ubiquitous family of codes. They are used in many areas of coding theory in both electrical engineering and computer science. Yet, many of their important properties are still under investigation. This paper covers some of the recent developments regarding the weight enumerator and the capacity-achieving properties of RM codes, as well as some of the algorithmic developments. In particular, the paper discusses the recent connections established between RM codes, thresholds of Boolean functions, polarization theory, hypercontractivity, and the techniques of approximating low weight codewords using lower degree polynomials (when codewords are viewed as evaluation vectors of degree $r$ polynomials in $m$ variables). It then overviews some of the algorithms for decoding RM codes. It covers both algorithms with provable performance guarantees for every block length, as well as algorithms with state-of-the-art performances in practical regimes, which do not perform as well for large block length. Finally, the paper concludes with a few open problems.
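
The evaluation-vector view mentioned above gives a direct construction of the RM(r, m) generator matrix: its rows are the evaluations over $F_2^m$ of all monomials of degree at most r. A small numpy sketch follows.

```python
# Generator matrix of RM(r, m): one row per monomial of degree <= r,
# evaluated at all 2^m points of F_2^m.
import itertools
import numpy as np

def rm_generator(r, m):
    points = np.array(list(itertools.product([0, 1], repeat=m)))  # F_2^m
    rows = []
    for deg in range(r + 1):
        for vars_ in itertools.combinations(range(m), deg):
            # Monomial prod_{i in vars_} x_i evaluated at every point.
            rows.append(np.prod(points[:, list(vars_)], axis=1) if vars_
                        else np.ones(len(points), dtype=int))
    return np.array(rows) % 2

G = rm_generator(1, 3)     # RM(1,3): the [8,4,4] extended Hamming code
print(G.shape)             # (4, 8): k = sum_{i<=r} C(m,i) = 4
```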

Journal ArticleDOI
TL;DR: This paper develops a new approach that gives a simple algorithm for showing the existence of a $\frac{3}{4}$-MMS allocation, and shows that there always exists a $(\frac{3}{4} + \frac{1}{12n})$-MMS allocation, improving the approximation guarantee.
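
For readers new to the notion: an agent's maximin share (MMS) is the best minimum-bundle value it could secure by partitioning the items into n bundles and keeping the worst one. The brute-force sketch below computes it for tiny additive instances, so guarantees like the one above can be checked by hand; it is exponential and purely illustrative.

```python
# Maximin share of one agent by brute force over all bundle assignments.
from itertools import product

def mms_value(valuations, n_agents):
    """valuations: additive item values from one agent's perspective."""
    best = 0
    for assignment in product(range(n_agents), repeat=len(valuations)):
        bundles = [0] * n_agents
        for item, bundle in enumerate(assignment):
            bundles[bundle] += valuations[item]
        best = max(best, min(bundles))   # keep the best worst-bundle value
    return best

print(mms_value([8, 5, 5, 3, 3], n_agents=2))   # -> 11 ({8,3} vs {5,5,3})
```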

Journal ArticleDOI
TL;DR: First, it is proved that the solution of the constrained optimization problem can be obtained through solving an array of optimal control problems of constrained auxiliary subsystems, and under the framework of approximate dynamic programming, a simultaneous policy iteration (SPI) algorithm is presented to solve the Hamilton–Jacobi–Bellman equations corresponding to the constrained auxiliary subsystems.
Abstract: In this paper, we study the constrained optimization problem of a class of uncertain nonlinear interconnected systems. First, we prove that the solution of the constrained optimization problem can be obtained through solving an array of optimal control problems of constrained auxiliary subsystems. Then, under the framework of approximate dynamic programming, we present a simultaneous policy iteration (SPI) algorithm to solve the Hamilton–Jacobi–Bellman equations corresponding to the constrained auxiliary subsystems. By building an equivalence relationship, we demonstrate the convergence of the SPI algorithm. Meanwhile, we implement the SPI algorithm via an actor–critic structure, where actor networks are used to approximate optimal control policies and critic networks are applied to estimate optimal value functions. By using the least squares method and the Monte Carlo integration technique together, we are able to determine the weight vectors of actor and critic networks. Finally, we validate the developed control method through the simulation of a nonlinear interconnected plant.

Journal ArticleDOI
TL;DR: A novel value iteration based off-policy adaptive dynamic programming (ADP) algorithm is proposed for a general class of CTLP systems, so that approximate optimal solutions can be obtained directly from the collected data, without the exact knowledge of system dynamics.
Abstract: This article studies the infinite-horizon adaptive optimal control of continuous-time linear periodic (CTLP) systems. A novel value iteration (VI) based off-policy adaptive dynamic programming (ADP) algorithm is proposed for a general class of CTLP systems, so that approximate optimal solutions can be obtained directly from the collected data, without the exact knowledge of system dynamics. Under mild conditions, the proofs on uniform convergence of the proposed algorithm to the optimal solutions are given for both the model-based and model-free cases. The VI-based ADP algorithm is able to find suboptimal controllers without assuming the knowledge of an initial stabilizing controller. Application to the optimal control of a triple inverted pendulum subjected to a periodically varying load demonstrates the feasibility and effectiveness of the proposed method.

Journal ArticleDOI
TL;DR: This article investigates a linear-quadratic-Gaussian control and sensing codesign problem, presents the first polynomial-time algorithms with per-instance suboptimality guarantees, proves original results on the performance of the algorithms, and establishes connections between their suboptimality and control-theoretic quantities.
Abstract: We investigate a linear-quadratic-Gaussian (LQG) control and sensing codesign problem, where one jointly designs sensing and control policies. We focus on the realistic case where the sensing design is selected among a finite set of available sensors, where each sensor is associated with a different cost (e.g., power consumption). We consider two dual problem instances: sensing-constrained LQG control, where one maximizes a control performance subject to a sensor cost budget, and minimum-sensing LQG control, where one minimizes a sensor cost subject to performance constraints. We prove that no polynomial time algorithm guarantees across all problem instances a constant approximation factor from the optimal. Nonetheless, we present the first polynomial time algorithms with per-instance suboptimality guarantees. To this end, we leverage a separation principle, which partially decouples the design of sensing and control. Then, we frame LQG codesign as the optimization of approximately supermodular set functions; we develop novel algorithms to solve the problems; and we prove original results on the performance of the algorithms and establish connections between their suboptimality and control-theoretic quantities. We conclude the article by discussing two applications, namely, sensing-constrained formation control and resource-constrained robot navigation.

Journal ArticleDOI
TL;DR: This article proposes a new paradigm of parallel EC algorithms by making the first attempt to parallelize the algorithm at the generation level, inspired by the industrial pipeline technique, and shows that generation-level parallelism is possible in EC algorithms and may have significant potential applications in time-consuming optimization problems.
Abstract: Due to the population-based and iterative characteristics of evolutionary computation (EC) algorithms, parallel techniques have been widely used to speed up EC algorithms. However, parallelism is usually applied at the population level, where multiple populations (or subpopulations) run in parallel, or at the individual level, where the individuals are distributed to multiple resources. That is, different populations or different individuals can be executed simultaneously to reduce running time. However, research into generation-level parallelism for EC algorithms has seldom been reported. In this article, we propose a new paradigm of the parallel EC algorithm by making the first attempt to parallelize the algorithm at the generation level. This idea is inspired by the industrial pipeline technique. Specifically, a kind of EC algorithm called local-version particle swarm optimization (PSO) is adopted to implement a pipeline-based parallel PSO (PPPSO, i.e., P3SO). Due to the generation-level parallelism in P3SO, while some particles are still performing their evolutionary operations in the current generation, other particles can simultaneously proceed to the next generation to carry out new evolutionary operations, or even advance to further generations. The experimental results show that the problem-solving ability of P3SO is not affected while the evolutionary process is substantially accelerated. Therefore, generation-level parallelism is possible in EC algorithms and may have significant potential applications in time-consuming optimization problems.
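
For context, this is the sequential local-version (ring-topology lbest) PSO that P3SO pipelines across generations; the sketch shows only the classical iteration, not the pipeline scheduling itself, and the inertia/acceleration constants are common textbook values rather than the paper's settings.

```python
# Ring-topology lbest PSO: each particle follows the best of itself and its
# two ring neighbors instead of a single global best.
import numpy as np

def lbest_pso(f, dim, n=30, iters=200, w=0.72, c1=1.49, c2=1.49, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n, dim))
    v = np.zeros((n, dim))
    pbest, pbest_val = x.copy(), np.apply_along_axis(f, 1, x)
    for _ in range(iters):
        # Neighborhood best among {left neighbor, self, right neighbor}.
        neigh = np.stack([np.roll(pbest_val, 1), pbest_val,
                          np.roll(pbest_val, -1)])
        k = np.argmin(neigh, axis=0)              # 0: left, 1: self, 2: right
        lbest = pbest[(np.arange(n) + k - 1) % n]
        r1, r2 = rng.random((2, n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (lbest - x)
        x = x + v
        vals = np.apply_along_axis(f, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
    return pbest[np.argmin(pbest_val)]

print(lbest_pso(lambda z: np.sum(z ** 2), dim=5))   # sphere function, -> ~0
```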

Journal ArticleDOI
TL;DR: A novel UAV-assisted MEC architecture is proposed to provision services to IoTDs, where a UAV provides both communication and computing services or works as a relay node; simulations demonstrate that the performance of the proposed AA-CAP algorithm is superior to baseline algorithms.
Abstract: Mobile edge computing (MEC) is leveraged to reduce the latency for the computation-intensive and latency-critical tasks offloaded from wireless devices and Internet of Things Devices (IoTDs). Unmanned aerial vehicles (UAVs) have attracted much attention from both academia and industry owing to their high mobility, flexibility, and maneuverability. In this article, a novel UAV-assisted MEC architecture is proposed to provision services to IoTDs, where a UAV provides both communication and computing services or works as a relay node. We then formulate the joint Computation offloading, spectrum resource Allocation, computation resource allocation, and UAV Placement (Joint-CAP) problem in the UAV-MEC network to minimize the operation cost of provisioning IoTDs. Since the Joint-CAP problem is a mixed integer non-linear programming problem and NP-hard, we decompose it into two sub-problems and solve them sequentially. Then, we propose a $(1+\epsilon)$-approximation algorithm, named AA-CAP, to solve the Joint-CAP problem, and the performance of the AA-CAP algorithm is demonstrated to be superior to the baseline algorithms via simulations.

Proceedings ArticleDOI
15 Jun 2021
TL;DR: In this paper, the authors give a $(3/2-\epsilon)$-approximation algorithm for metric TSP for some $\epsilon > 10^{-36}$.
Abstract: For some $\epsilon > 10^{-36}$ we give a $3/2-\epsilon$ approximation algorithm for metric TSP.
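
For context, the 3/2 baseline being beaten here is Christofides' algorithm; networkx ships an implementation, sketched below on a random Euclidean instance. The paper's improvement comes from randomizing the spanning-tree step, which this sketch does not attempt.

```python
# Christofides 3/2-approximation on a random metric (Euclidean) instance.
import itertools
import networkx as nx
import numpy as np
from networkx.algorithms.approximation import christofides

rng = np.random.default_rng(0)
pts = rng.random((8, 2))                 # random points -> complete metric graph
G = nx.Graph()
for i, j in itertools.combinations(range(len(pts)), 2):
    G.add_edge(i, j, weight=float(np.linalg.norm(pts[i] - pts[j])))

tour = christofides(G)                   # closed tour visiting every node
cost = sum(G[u][v]["weight"] for u, v in zip(tour, tour[1:]))
print(tour, round(cost, 3))
```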