
Showing papers by "Silvio Micali published in 2018"


Posted Content
TL;DR: A simple Byzantine agreement protocol with leader election that works under a > 2/3 honest majority, does not rely on participants having synchronized clocks, is resilient to arbitrary network partitions of unknown length, and recovers quickly once the partition is resolved and bounded message delay is restored.
Abstract: We present a simple Byzantine agreement protocol with leader election that works under a > 2/3 honest majority and does not rely on the participants having synchronized clocks. When honest messages are delivered within a bounded worst-case delay, agreement is reached in an expected constant number of steps when the elected leader is malicious, and after two steps when the elected leader is honest. Our protocol is resilient to arbitrary network partitions of unknown length, and recovers quickly after the partition is resolved and bounded message delay is restored. We briefly discuss how the protocol applies to blockchains in a permissionless system. In particular, when an honest leader proposes a block of transactions, the first voting step happens in parallel with the block propagation. Effectively, after the block propagates, a certificate is generated in just one step of voting.
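The round structure described above (leader election, then voting toward a > 2/3 certificate) can be illustrated with a minimal simulation. This is a hedged sketch, not the paper's protocol: the `credential` hash is a hypothetical stand-in for the cryptographic sortition a real system would use, and network delays, malicious voting, and recovery are omitted.

```python
import hashlib

def credential(participant_id: int, round_num: int) -> str:
    # Hypothetical stand-in for a verifiable random credential:
    # a hash of the participant id and the round number.
    return hashlib.sha256(f"{participant_id}:{round_num}".encode()).hexdigest()

def elect_leader(participants, round_num):
    # The participant with the lowest credential leads this round.
    return min(participants, key=lambda p: credential(p, round_num))

def run_round(participants, honest, round_num, proposal="block"):
    leader = elect_leader(participants, round_num)
    # Honest participants vote for the value the leader proposed;
    # dishonest ones are modeled as simply not contributing votes.
    votes = [proposal for p in participants if p in honest]
    # A certificate forms once more than 2/3 of ALL participants
    # (not just the voters) vote for the same value.
    threshold = 2 * len(participants) / 3
    certified = votes.count(proposal) > threshold
    return leader, certified

participants = list(range(10))
honest = set(range(8))  # 8 of 10 honest: the > 2/3 honest-majority assumption holds
leader, certified = run_round(participants, honest, round_num=1)
print(certified)  # True: 8 matching votes exceed the 2/3 threshold of 6.67
```

With 8 of 10 participants honest, a single voting step already yields a certificate, matching the abstract's point that an honest leader's block is certified in one step of voting after propagation.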

63 citations


Journal ArticleDOI
TL;DR: It is proved that, for every continuous function f and every ε > 0, the principal can, by learning a single component x1 of x, incentivize the agent to report the correct value f(x) with accuracy ε.
Abstract: Collecting and processing large amounts of data is becoming increasingly crucial in our society. We model this task as evaluating a function f over a large vector x = (x1, . . . , xn), which is unknown but drawn from a publicly known distribution X. In our model, learning each component of the input x is costly, but computing the output f(x) has zero cost once x is known. We consider the problem of a principal who wishes to delegate the evaluation of f to an agent, whose cost of learning any number of components of x is always lower than the corresponding cost of the principal. We prove that, for every continuous function f and every ε > 0, the principal can, by learning a single component x1 of x, incentivize the agent to report the correct value f(x) with accuracy ε.
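The delegation idea can be made concrete with a toy mechanism. The sketch below is hypothetical and much cruder than the paper's construction: the principal learns only x1 and pays the agent only if the agent's reported x1 matches it, so an agent that never learned x cannot reliably collect the payment, while an agent that learned x reports truthfully at no extra cost. The function `f`, the payment amount, and the input vector are all illustrative assumptions.

```python
def delegate(f, x, payment=10.0):
    """Toy delegation check (hypothetical, loosely inspired by the abstract):
    the principal learns only x[0]; the agent reports (x1, f(x)) and is
    paid only when the reported x1 matches the component the principal learned."""
    learned_x1 = x[0]  # the single component the principal pays to learn

    def settle(reported_x1, reported_fx):
        return payment if reported_x1 == learned_x1 else 0.0

    honest_pay = settle(x[0], f(x))  # agent that actually learned x
    lazy_pay = settle(0.0, 0.0)      # agent that guessed without learning anything
    return honest_pay, lazy_pay

f = lambda x: sum(x) / len(x)        # an example continuous function of x
print(delegate(f, (0.7, 0.2, 0.9)))  # (10.0, 0.0)
```

The honest agent is paid and the lazy one is not, which is the incentive gap the result exploits; the paper's contribution is showing such a gap can be created for every continuous f with accuracy ε using just one learned component.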

5 citations