Author

Piotr Faliszewski

Bio: Piotr Faliszewski is an academic researcher at AGH University of Science and Technology. His research focuses on voting and the Condorcet method. He has an h-index of 46 and has co-authored 198 publications receiving 6,737 citations. Previous affiliations of Piotr Faliszewski include Humboldt University of Berlin and the University of Rochester.


Papers
Journal ArticleDOI
TL;DR: Among systems with a polynomial-time winner problem, Copeland voting is the first natural election system proven to have full resistance to constructive control; the vulnerability results for microbribery are proven via a novel technique involving min-cost network flow.
Abstract: Control and bribery are settings in which an external agent seeks to influence the outcome of an election. Constructive control of elections refers to attempts by an agent to, via such actions as addition/deletion/partition of candidates or voters, ensure that a given candidate wins. Destructive control refers to attempts by an agent to, via the same actions, preclude a given candidate's victory. An election system in which an agent can sometimes affect the result and it can be determined in polynomial time on which inputs the agent can succeed is said to be vulnerable to the given type of control. An election system in which an agent can sometimes affect the result, yet in which it is NP-hard to recognize the inputs on which the agent can succeed, is said to be resistant to the given type of control. Aside from election systems with an NP-hard winner problem, the only systems previously known to be resistant to all the standard control types were highly artificial election systems created by hybridization. This paper studies a parameterized version of Copeland voting, denoted by Copeland^α, where the parameter α is a rational number between 0 and 1 that specifies how ties are valued in the pairwise comparisons of candidates. In every previously studied constructive or destructive control scenario, we determine which of resistance or vulnerability holds for Copeland^α for each rational α, 0 ≤ α ≤ 1. In particular, we prove that Copeland^0.5, the system commonly referred to as "Copeland voting," provides full resistance to constructive control, and we prove the same for Copeland^α, for all rational α, 0 < α < 1. Among systems with a polynomial-time winner problem, Copeland voting is the first natural election system proven to have full resistance to constructive control. In addition, we prove that both Copeland^0 and Copeland^1 (interestingly, Copeland^1 is an election system developed by the thirteenth-century mystic Llull) are resistant to all standard types of constructive control other than one variant of addition of candidates. Moreover, we show that for each rational α, 0 ≤ α ≤ 1, Copeland^α voting is fully resistant to bribery attacks, and we establish fixed-parameter tractability of bounded-case control for Copeland^α. We also study Copeland^α elections under more flexible models such as microbribery and extended control, we integrate the potential irrationality of voter preferences into many of our results, and we prove our results in both the unique-winner model and the nonunique-winner model. Our vulnerability results for microbribery are proven via a novel technique involving min-cost network flow.
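To make the Copeland^α scoring rule concrete, here is a minimal Python sketch that computes Copeland^α scores from complete strict rankings; the function name and the toy profile are illustrative and not taken from the paper.

```python
from itertools import combinations

def copeland_alpha_scores(profile, candidates, alpha=0.5):
    """Compute Copeland^alpha scores.

    profile: list of rankings; each ranking lists all candidates,
             most preferred first.
    alpha:   value awarded to BOTH candidates of a tied pairwise contest
             (alpha = 0.5 gives classic Copeland voting).
    """
    scores = {c: 0.0 for c in candidates}
    for a, b in combinations(candidates, 2):
        # Count voters preferring a to b; the rest prefer b to a.
        a_over_b = sum(1 for r in profile if r.index(a) < r.index(b))
        b_over_a = len(profile) - a_over_b
        if a_over_b > b_over_a:
            scores[a] += 1          # a wins the pairwise contest
        elif b_over_a > a_over_b:
            scores[b] += 1          # b wins the pairwise contest
        else:
            scores[a] += alpha      # tie: both candidates receive alpha
            scores[b] += alpha
    return scores

# Tiny illustrative profile with candidates a, b, c and four voters.
profile = [["a", "b", "c"], ["a", "c", "b"], ["b", "c", "a"], ["c", "b", "a"]]
print(copeland_alpha_scores(profile, ["a", "b", "c"], alpha=0.5))
```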

288 citations

Journal ArticleDOI
TL;DR: This work obtains both polynomial-time bribery algorithms and proofs of the intractability of bribery, and the results show that the complexity of bribery is extremely sensitive to the setting.
Abstract: We study the complexity of influencing elections through bribery: How computationally complex is it for an external actor to determine whether by paying certain voters to change their preferences a specified candidate can be made the election's winner? We study this problem for election systems as varied as scoring protocols and Dodgson voting, and in a variety of settings regarding homogeneous-vs.-nonhomogeneous electorate bribability, bounded-size-vs.-arbitrary-sized candidate sets, weighted-vs.-unweighted voters, and succinct-vs.-nonsuccinct input specification. We obtain both polynomial-time bribery algorithms and proofs of the intractability of bribery, and indeed our results show that the complexity of bribery is extremely sensitive to the setting. For example, we find settings in which bribery is NP-complete but manipulation (by voters) is in P, and we find settings in which bribing weighted voters is NP-complete but bribing voters with individual bribe thresholds is in P. For the broad class of elections (including plurality, Borda, k-approval, and veto) known as scoring protocols, we prove a dichotomy result for bribery of weighted voters: We find a simple-to-evaluate condition that classifies every case as either NP-complete or in P.
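As a concrete (if naive) illustration of the bribery decision problem for one scoring protocol, the sketch below brute-forces unweighted, unit-cost bribery under plurality; it is only meant to make the question precise on tiny instances and does not reproduce the paper's algorithms or dichotomy proof.

```python
from itertools import combinations
from collections import Counter

def plurality_bribery_possible(votes, target, budget):
    """Brute-force check of the unweighted, unit-cost bribery question for
    plurality: can `target` be made a winner by changing at most `budget`
    voters' votes?

    votes: one top choice per voter.
    For plurality it is never harmful to switch a bribed voter to `target`,
    so it suffices to try every subset of at most `budget` voters.
    """
    n = len(votes)
    for k in range(budget + 1):
        for bribed in combinations(range(n), k):
            new_votes = list(votes)
            for i in bribed:
                new_votes[i] = target
            tally = Counter(new_votes)
            # Nonunique-winner model: being a co-winner counts as winning.
            if tally[target] == max(tally.values()):
                return True
    return False

# Example: three voters for 'a', two for 'b'; one bribe suffices for 'b'.
print(plurality_bribery_possible(["a", "a", "a", "b", "b"], "b", 1))
```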

233 citations

Journal ArticleDOI
TL;DR: An overview of more than two decades of work that studies computational complexity as a barrier against manipulation in elections is provided.
Abstract: We provide an overview of more than two decades of work, mostly in AI, that studies computational complexity as a barrier against manipulation in elections.

233 citations

Journal ArticleDOI
TL;DR: Computational complexity may truly be the shield against election manipulation.
Abstract: Computational complexity may truly be the shield against election manipulation.

221 citations

Posted Content
TL;DR: This paper studies the complexity of influencing elections through bribery, i.e., of determining whether, by bribing voters within a given budget, a specified candidate can be made the election's winner.
Abstract: We study the complexity of influencing elections through bribery: How computationally complex is it for an external actor to determine whether, by bribing voters within a given budget, a specified candidate can be made the election's winner? We study this problem for election systems as varied as scoring protocols and Dodgson voting, and in a variety of settings regarding homogeneous-vs.-nonhomogeneous electorate bribability, bounded-size-vs.-arbitrary-sized candidate sets, weighted-vs.-unweighted voters, and succinct-vs.-nonsuccinct input specification. We obtain both polynomial-time bribery algorithms and proofs of the intractability of bribery, and indeed our results show that the complexity of bribery is extremely sensitive to the setting. For example, we find settings in which bribery is NP-complete but manipulation (by voters) is in P, and we find settings in which bribing weighted voters is NP-complete but bribing voters with individual bribe thresholds is in P. For the broad class of elections (including plurality, Borda, k-approval, and veto) known as scoring protocols, we prove a dichotomy result for bribery of weighted voters: We find a simple-to-evaluate condition that classifies every case as either NP-complete or in P.

216 citations


Cited by

Journal ArticleDOI
TL;DR: This article characterizes the exact number of candidates for which manipulation becomes hard for the plurality, Borda, STV, Copeland, maximin, veto, plurality with runoff, regular cup, and randomized cup protocols, and shows that for simpler manipulation problems, manipulation cannot be hard with few candidates.
Abstract: In multiagent settings where the agents have different preferences, preference aggregation is a central issue. Voting is a general method for preference aggregation, but seminal results have shown that all general voting protocols are manipulable. One could try to avoid manipulation by using protocols where determining a beneficial manipulation is hard. Especially among computational agents, it is reasonable to measure this hardness by computational complexity. Some earlier work has been done in this area, but it was assumed that the number of voters and candidates is unbounded. Such hardness results lose relevance when the number of candidates is small, because manipulation algorithms that are exponential only in the number of candidates (and only slightly so) might be available. We give such an algorithm for an individual agent to manipulate the Single Transferable Vote (STV) protocol, which has been shown hard to manipulate in the above sense. This motivates the core of this article, which derives hardness results for realistic elections where the number of candidates is a small constant (but the number of voters can be large). The main manipulation question we study is that of coalitional manipulation by weighted voters. (We show that for simpler manipulation problems, manipulation cannot be hard with few candidates.) We study both constructive manipulation (making a given candidate win) and destructive manipulation (making a given candidate not win). We characterize the exact number of candidates for which manipulation becomes hard for the plurality, Borda, STV, Copeland, maximin, veto, plurality with runoff, regular cup, and randomized cup protocols. We also show that hardness of manipulation in this setting implies hardness of manipulation by an individual in unweighted settings when there is uncertainty about the others' votes (but not vice-versa). To our knowledge, these are the first results on the hardness of manipulation when there is uncertainty about the others' votes.
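The following sketch makes the constructive coalitional-manipulation question for weighted voters concrete under the Borda rule by brute-forcing all joint ballots of the manipulating coalition; it is exponential in the number of manipulators and candidates and is purely illustrative, not one of the article's algorithms.

```python
from itertools import permutations, product

def borda_scores(weighted_rankings, candidates):
    """Weighted Borda scores: a candidate in position i (0-based) among m
    candidates earns (m - 1 - i) points times the voter's weight."""
    m = len(candidates)
    scores = {c: 0 for c in candidates}
    for ranking, weight in weighted_rankings:
        for pos, c in enumerate(ranking):
            scores[c] += (m - 1 - pos) * weight
    return scores

def constructive_coalition_manipulation(fixed_votes, manipulator_weights,
                                        candidates, target):
    """Try every joint ballot of the weighted manipulators and ask whether
    `target` can be made a (co-)winner under Borda. Only suitable for
    tiny instances."""
    all_rankings = list(permutations(candidates))
    for ballots in product(all_rankings, repeat=len(manipulator_weights)):
        votes = list(fixed_votes) + list(zip(ballots, manipulator_weights))
        scores = borda_scores(votes, candidates)
        if scores[target] == max(scores.values()):
            return True
    return False

# Two honest voters of weight 1, one manipulator of weight 2, candidates a, b, c.
fixed = [(("a", "b", "c"), 1), (("a", "c", "b"), 1)]
print(constructive_coalition_manipulation(fixed, [2], ["a", "b", "c"], "c"))
```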

411 citations

BookDOI
TL;DR: This handbook, written by thirty-six prominent members of the computational social choice community, covers the field comprehensively and offers detailed introductions to each of the field's major themes.
Abstract: The rapidly growing field of computational social choice, at the intersection of computer science and economics, deals with the computational aspects of collective decision making. This handbook, written by thirty-six prominent members of the computational social choice community, covers the field comprehensively. Chapters devoted to each of the field's major themes offer detailed introductions. Topics include voting theory (such as the computational complexity of winner determination and manipulation in elections), fair allocation (such as algorithms for dividing divisible and indivisible goods), coalition formation (such as matching and hedonic games), and many more. Graduate students, researchers, and professionals in computer science, economics, mathematics, political science, and philosophy will benefit from this accessible and self-contained book.

396 citations

Book
25 Oct 2011
TL;DR: This talk introduces basic concepts from cooperative game theory, in particular the key solution concepts of the core and the Shapley value, and then presents the key issues that arise when cooperative games are considered in a computational setting.
Abstract: The theory of cooperative games provides a rich mathematical framework with which to understand the interactions between self-interested agents in settings where they can benefit from cooperation, and where binding agreements between agents can be made. Our aim in this talk is to describe the issues that arise when we consider cooperative game theory through a computational lens. We begin by introducing basic concepts from cooperative game theory, and in particular the key solution concepts: the core and the Shapley value. We then introduce the key issues that arise if one is to consider cooperative games in a computational setting: in particular, the issue of representing games, and the computational complexity of cooperative solution concepts.
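As a small illustration of one of the solution concepts mentioned above, the sketch below computes exact Shapley values for a toy three-player game by averaging marginal contributions over all player orderings; the game itself is invented for the example and is not from the book.

```python
from itertools import permutations
from math import factorial

def shapley_values(players, v):
    """Exact Shapley values by averaging each player's marginal contribution
    over all orderings of the players; exponential in the number of players,
    so only suitable for small games.

    v: characteristic function mapping a frozenset of players to its worth."""
    phi = {p: 0.0 for p in players}
    n = len(players)
    for order in permutations(players):
        coalition = frozenset()
        for p in order:
            phi[p] += v(coalition | {p}) - v(coalition)
            coalition = coalition | {p}
    return {p: phi[p] / factorial(n) for p in phi}

# Illustrative three-player game: any coalition containing player 1 and at
# least one other player is worth 1; everything else is worth 0.
def v(coalition):
    return 1.0 if 1 in coalition and len(coalition) >= 2 else 0.0

print(shapley_values([1, 2, 3], v))   # player 1 gets 2/3, players 2 and 3 get 1/6
```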

395 citations

Proceedings ArticleDOI
06 Jul 2009
TL;DR: This paper establishes tight upper and lower bounds for the approximation ratio given by strategyproof mechanisms without payments, with respect to both deterministic and randomized mechanisms, under two objective functions: the social cost, and the maximum cost.
Abstract: The literature on algorithmic mechanism design is mostly concerned with game-theoretic versions of optimization problems to which standard economic money-based mechanisms cannot be applied efficiently. Recent years have seen the design of various truthful approximation mechanisms that rely on payments. In this article, we advocate the reconsideration of highly structured optimization problems in the context of mechanism design. We explicitly argue for the first time that, in such domains, approximation can be leveraged to obtain truthfulness without resorting to payments. This stands in contrast to previous work where payments are almost ubiquitous and (more often than not) approximation is a necessary evil that is required to circumvent computational complexity. We present a case study in approximate mechanism design without money. In our basic setting, agents are located on the real line and the mechanism must select the location of a public facility; the cost of an agent is its distance to the facility. We establish tight upper and lower bounds for the approximation ratio given by strategyproof mechanisms without payments, with respect to both deterministic and randomized mechanisms, under two objective functions: the social cost and the maximum cost. We then extend our results in two natural directions: a domain where two facilities must be located and a domain where each agent controls multiple locations.
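A standard example of a strategyproof mechanism without payments in this single-facility setting is the median mechanism, which minimizes the social cost exactly and 2-approximates the maximum cost. The sketch below illustrates that textbook mechanism; it is not code from the paper, and the agent locations are made up.

```python
def median_mechanism(reported_locations):
    """Locate a single facility at a (left) median of the reported locations.
    On the line this is strategyproof, minimizes the social cost (sum of
    distances), and is a 2-approximation for the maximum cost."""
    xs = sorted(reported_locations)
    return xs[(len(xs) - 1) // 2]

def social_cost(facility, locations):
    return sum(abs(facility - x) for x in locations)

def maximum_cost(facility, locations):
    return max(abs(facility - x) for x in locations)

# Hypothetical reported locations of three agents on the line.
agents = [0.0, 0.2, 0.9]
y = median_mechanism(agents)
print(y, social_cost(y, agents), maximum_cost(y, agents))
```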

379 citations