Journal Article

Approximation Algorithms for Orienteering and Discounted-Reward TSP

TL;DR
This paper gives the first constant-factor approximation algorithm for the rooted Orienteering problem, as well as for a new problem, the Discounted-Reward traveling salesman problem (TSP), motivated by robot navigation.
Abstract
In this paper, we give the first constant-factor approximation algorithm for the rooted Orienteering problem, as well as a new problem that we call the Discounted-Reward traveling salesman problem (TSP), motivated by robot navigation. In both problems, we are given a graph with lengths on edges and rewards on nodes, and a start node $s$. In the Orienteering problem, the goal is to find a path starting at $s$ that maximizes the reward collected, subject to a hard limit on the total length of the path. In the Discounted-Reward TSP, instead of a length limit we are given a discount factor $\gamma$, and the goal is to maximize the total discounted reward collected, where the reward for a node reached at time $t$ is discounted by $\gamma^t$. This problem is motivated by an approximation to a planning problem in the Markov decision process (MDP) framework under the commonly employed infinite horizon discounted reward optimality criterion. The approximation arises from a need to deal with exponentially large state spaces that emerge when trying to model one-time events and nonrepeatable rewards (such as for package deliveries). We also consider tree and multiple-path variants of these problems and provide approximations for those as well. Although the unrooted Orienteering problem, where there is no fixed start node $s$, has been known to be approximable using algorithms for related problems such as $k$-TSP (in which the amount of reward to be collected is fixed and the total length is approximately minimized), ours is the first to approximate the rooted question, solving an open problem in [E. M. Arkin, J. S. B. Mitchell, and G. Narasimhan, Proceedings of the $14$th ACM Symposium on Computational Geometry, 1998, pp. 307-316] and [B. Awerbuch, Y. Azar, A. Blum, and S. Vempala, SIAM J. Comput., 28 (1998), pp. 254-262]. We complement our approximation result for Orienteering by showing that the problem is APX-hard.
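To make the two objectives concrete, here is a minimal brute-force sketch in Python on a made-up four-node metric instance; the graph, rewards, start node, limit D, and discount gamma are all invented for illustration. It enumerates every simple path from s and evaluates both objectives directly; it is exponential-time and only pins down the problem definitions, not the paper's polynomial-time approximation algorithms.

```python
# Illustrative only: brute-force evaluation of the two objectives on a tiny
# made-up metric graph, NOT the paper's constant-factor approximation.
from itertools import permutations

# Symmetric edge lengths of a small complete graph on nodes 0..3 (example values).
length = {
    (0, 1): 2, (0, 2): 3, (0, 3): 4,
    (1, 2): 2, (1, 3): 3, (2, 3): 1,
}

def dist(u, v):
    return 0 if u == v else length[(min(u, v), max(u, v))]

reward = {0: 0, 1: 4, 2: 5, 3: 7}  # rewards on nodes (example values)
s = 0        # fixed start node
D = 6        # hard limit on total path length (Orienteering)
gamma = 0.8  # discount factor (Discounted-Reward TSP)

best_orienteering = 0.0
best_discounted = 0.0
others = [v for v in reward if v != s]
# Enumerate every simple path starting at s; feasible only on tiny instances.
for k in range(len(others) + 1):
    for tail in permutations(others, k):
        path = (s,) + tail
        t, plain, disc = 0.0, float(reward[s]), float(reward[s])
        for u, v in zip(path, path[1:]):
            t += dist(u, v)                    # arrival time at node v
            plain += reward[v]                 # undiscounted reward
            disc += (gamma ** t) * reward[v]   # reward discounted by gamma^t
        if t <= D:                             # Orienteering: hard length limit
            best_orienteering = max(best_orienteering, plain)
        best_discounted = max(best_discounted, disc)  # no length limit here

print(f"best Orienteering reward with D={D}: {best_orienteering}")
print(f"best discounted reward with gamma={gamma}: {best_discounted}")
```

The same loop makes the structural difference between the two problems plain: Orienteering filters paths by a hard length budget, while in Discounted-Reward TSP elapsed time itself erodes the value of later pickups.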


Citations
Journal Article

Sensor Planning for a Symbiotic UAV and UGV System for Precision Agriculture

TL;DR: Presents an approximation algorithm for SamplingTSPN and shows how to model the UAV planning problem on a metric graph, formulating an orienteering instance to which a known approximation algorithm can be applied.
Proceedings Article

TimeAware test suite prioritization

TL;DR: Experimental results indicate that the prioritization approach frequently yields higher average percentage of faults detected (APFD) values for two case study applications when basic-block-level coverage is used instead of method-level coverage.
Proceedings Article

Toward Optimal Allocation of Location Dependent Tasks in Crowdsensing

TL;DR: A pricing mechanism based on bargaining theory is designed, in which the price of each task is determined by the performing cost and market demand (i.e., the number of mobile users who intend to perform the task).
Journal Article

A survey on algorithmic approaches for solving tourist trip design problems

TL;DR: Surveys models, algorithmic approaches, and methodologies for tourist trip design problems, focusing on problem models that best capture a multitude of realistic POI attributes and user constraints.
Journal Article

Improved algorithms for orienteering and related problems

TL;DR: Gives a (2+ε)-approximation algorithm for orienteering in undirected graphs, along with the first polylogarithmic approximation ratios for the problem in directed graphs.
References
Book

Reinforcement Learning: An Introduction

TL;DR: This book provides a clear and simple account of the key ideas and algorithms of reinforcement learning, ranging from the history of the field's intellectual foundations to the most recent developments and applications.
Book

Dynamic Programming and Optimal Control

TL;DR: The leading and most up-to-date textbook on the far-ranging algorithmic methodology of dynamic programming, which can be used for optimal control, Markovian decision problems, planning and sequential decision making under uncertainty, and discrete/combinatorial optimization.
Posted Content

Reinforcement Learning: A Survey

TL;DR: Surveys reinforcement learning from a computer science perspective, discussing the central issues of RL: trading off exploration and exploitation, establishing the foundations of the field via Markov decision theory, learning from delayed reinforcement, constructing empirical models to accelerate learning, making use of generalization and hierarchy, and coping with hidden state.

Book

Neuro-Dynamic Programming

TL;DR: The first textbook that fully explains the neuro-dynamic programming/reinforcement learning methodology, a recent breakthrough in the practical application of neural networks and dynamic programming to complex problems of planning, optimal decision making, and intelligent control.
Proceedings Article

A general approximation technique for constrained forest problems

TL;DR: Gives a 2-approximation algorithm for a broad class of constrained forest problems, including the Steiner tree problem, with running time O(n² log n), where n is the number of vertices in the graph.