Author

Nicholas Hopper

Other affiliations: Carnegie Mellon University
Bio: Nicholas Hopper is an academic researcher from the University of Minnesota. The author has contributed to research in the topics of Anonymity and Cryptography. The author has an h-index of 36 and has co-authored 116 publications receiving 5712 citations. Previous affiliations of Nicholas Hopper include Carnegie Mellon University.


Papers
Book ChapterDOI
04 May 2003
TL;DR: This work introduces the captcha, an automated test that humans can pass but current computer programs cannot; any program with high success on a captcha can be used to solve an unsolved Artificial Intelligence (AI) problem. Several novel constructions of captchas are provided, implying a win-win situation.
Abstract: We introduce captcha, an automated test that humans can pass, but current computer programs can't pass: any program that has high success over a captcha can be used to solve an unsolved Artificial Intelligence (AI) problem. We provide several novel constructions of captchas. Since captchas have many applications in practical security, our approach introduces a new class of hard problems that can be exploited for security purposes. Much like research in cryptography has had a positive impact on algorithms for factoring and discrete log, we hope that the use of hard AI problems for security purposes allows us to advance the field of Artificial Intelligence. We introduce two families of AI problems that can be used to construct captchas and we show that solutions to such problems can be used for steganographic communication. captchas based on these AI problem families, then, imply a win-win situation: either the problems remain unsolved and there is a way to differentiate humans from computers, or the problems are solved and there is a way to communicate covertly on some channels.
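
The server-side flow implied here can be sketched in a few lines: generate a challenge whose answer only a human is assumed able to transcribe, then grade the response. This is a hedged illustration, not the paper's construction; the distorted-image rendering (the hard-AI step) is deliberately elided, and all names and parameters below are made up for the example.

```python
import hashlib
import hmac
import os
import random
import string

SECRET = os.urandom(16)  # server-side key so grading needs no stored session state

def new_challenge():
    """Pick a random answer and a MAC tag; in a real captcha the answer would be
    shown to the user only as a distorted image, the step assumed hard for programs."""
    answer = "".join(random.choices(string.ascii_lowercase, k=6))
    tag = hmac.new(SECRET, answer.encode(), hashlib.sha256).hexdigest()
    return answer, tag

def grade(response, tag):
    """Accept iff the submitted transcription matches the original answer."""
    expected = hmac.new(SECRET, response.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

answer, tag = new_challenge()
print(grade(answer, tag))   # True: a correct transcription passes
print(grade("wrong", tag))  # False
```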

1,525 citations

Book ChapterDOI
09 Dec 2001
TL;DR: This paper provides definitions of what the authors believe to be reasonable goals for secure human identification, demonstrates that existing solutions do not meet these definitions, and gives solutions that demonstrate the feasibility of the attached security conditions but are impractical for use by humans.
Abstract: One interesting and important challenge for the cryptologic community is that of providing secure authentication and identification for unassisted humans. There are a range of protocols for secure identification which require various forms of trusted hardware or software, aimed at protecting privacy and financial assets. But how do we verify our identity, securely, when we don't have or don't trust our smart card, palmtop, or laptop? In this paper, we provide definitions of what we believe to be reasonable goals for secure human identification. We demonstrate that existing solutions do not meet these reasonable definitions. Finally, we provide solutions which demonstrate the feasibility of the security conditions attached to our definitions, but which are impractical for use by humans.
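
One well-known construction in this line of work (the HB protocol of Hopper and Blum, based on the hardness of learning parity with noise) reduces identification to answering noisy parity challenges. The sketch below is only illustrative; the parameter values and the acceptance threshold are arbitrary choices, not the paper's recommendations.

```python
import random

N_BITS = 32        # secret length (illustrative)
ROUNDS = 100       # number of challenge/response rounds
NOISE = 0.15       # prover's deliberate error rate
THRESHOLD = 0.30   # accept when the error fraction stays below this

def keygen():
    """Shared secret: a random bit vector the human memorizes in some encoded form."""
    return [random.randint(0, 1) for _ in range(N_BITS)]

def respond(secret, challenge):
    """Prover computes the dot product mod 2 and flips it with probability NOISE."""
    bit = sum(a * x for a, x in zip(challenge, secret)) % 2
    return bit ^ (random.random() < NOISE)

def identify(secret, prover=respond):
    """Verifier issues random challenges and accepts if few answers are wrong."""
    errors = 0
    for _ in range(ROUNDS):
        challenge = [random.randint(0, 1) for _ in range(N_BITS)]
        expected = sum(a * x for a, x in zip(challenge, secret)) % 2
        if prover(secret, challenge) != expected:
            errors += 1
    return errors / ROUNDS < THRESHOLD

secret = keygen()
print(identify(secret))                                     # legitimate prover: True
print(identify(secret, lambda s, c: random.randint(0, 1)))  # random guessing: False
```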

567 citations

Journal ArticleDOI
TL;DR: This work presents two attacks on low-latency anonymity schemes using information about the network latency between the sender and one or more nodes in the system, and concludes that both protocols are highly vulnerable to these attacks.
Abstract: Low-latency anonymity systems such as Tor, AN.ON, Crowds, and Anonymizer.com aim to provide anonymous connections that are both untraceable by “local” adversaries who control only a few machines and have low enough delay to support anonymous use of network services like Web browsing and remote login. One consequence of these goals is that these services leak some information about the network latency between the sender and one or more nodes in the system. We present two attacks on low-latency anonymity schemes using this information. The first attack allows a pair of colluding Web sites to predict, based on local timing information and with no additional resources, whether two connections from the same Tor exit node are using the same circuit with high confidence. The second attack requires more resources but allows a malicious Web site to gain several bits of information about a client each time the client visits the site. We evaluate both attacks against two low-latency anonymity protocols, the Tor network and the MultiProxy proxy aggregator service, and conclude that both are highly vulnerable to these attacks.
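
The intuition behind the first attack (streams sharing a circuit see correlated queueing delay) can be conveyed by a simulated correlation test. This is a simplification for illustration only, not the paper's measurement methodology; all delay numbers are invented, and it uses statistics.correlation from Python 3.10+.

```python
import random
from statistics import correlation  # available in Python 3.10+

def circuit_congestion(samples):
    """Simulated per-request queueing delay shared by all streams on one circuit."""
    return [abs(random.gauss(20, 10)) for _ in range(samples)]

def observed_rtts(congestion, base=80, jitter=2):
    """What one colluding site measures: the shared congestion plus its own base
    delay and a little measurement jitter (all numbers are made up)."""
    return [base + c + random.gauss(0, jitter) for c in congestion]

def same_circuit(rtts_a, rtts_b, threshold=0.8):
    """Guess 'same circuit' when the two latency series are strongly correlated."""
    return correlation(rtts_a, rtts_b) > threshold

samples = 200
shared = circuit_congestion(samples)
site_a = observed_rtts(shared)                      # two streams on one circuit
site_b = observed_rtts(shared)
other = observed_rtts(circuit_congestion(samples))  # a stream on a different circuit

print(same_circuit(site_a, site_b))  # typically True
print(same_circuit(site_a, other))   # typically False
```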

182 citations

Journal ArticleDOI
TL;DR: Methods to mitigate resource depletion attacks at the routing protocol layer, which permanently disable networks by quickly draining nodes' battery power, are discussed, including a new proof-of-concept protocol that provably bounds the damage caused by Vampires during the packet forwarding phase.
Abstract: Ad hoc low-power wireless networks are an exciting research direction in sensing and pervasive computing. Prior security work in this area has focused primarily on denial of communication at the routing or medium access control levels. This paper explores resource depletion attacks at the routing protocol layer, which permanently disable networks by quickly draining nodes' battery power. These "Vampire" attacks are not specific to any one protocol, but rather rely on the properties of many popular classes of routing protocols. We find that all examined protocols are susceptible to Vampire attacks, which are devastating, difficult to detect, and easy to carry out using as few as one malicious insider sending only protocol-compliant messages. In the worst case, a single Vampire can increase network-wide energy usage by a factor of O(N), where N is the number of network nodes. We discuss methods to mitigate these types of attacks, including a new proof-of-concept protocol that provably bounds the damage caused by Vampires during the packet forwarding phase.
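
The O(N) blow-up is easy to see if forwarding energy is charged per hop: a protocol-compliant route that visits every node makes the whole network pay for a single packet. A toy calculation, with arbitrary node count and per-hop cost, is sketched below; it is an illustration of the effect, not the paper's energy model.

```python
# Toy illustration of the O(N) energy cost of a Vampire-style route: energy
# spent on one packet is taken to be proportional to the number of hops it
# traverses. Node count and per-hop cost are illustrative only.

ENERGY_PER_HOP = 1.0   # arbitrary units

def forwarding_energy(route):
    """Total transmit/receive energy the network spends forwarding one packet."""
    return ENERGY_PER_HOP * (len(route) - 1)

nodes = list(range(100))          # N = 100 sensor nodes
honest_route = [0, 10, 50, 99]    # a short path from source 0 to sink 99
malicious_route = nodes           # protocol-compliant but visits every node

print(forwarding_energy(honest_route))     # 3.0
print(forwarding_energy(malicious_route))  # 99.0, i.e. O(N) for a single packet
```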

170 citations

Book ChapterDOI
18 Aug 2002
TL;DR: In this article, the authors initiate the study of steganography from a complexity-theoretic point of view, introduce definitions based on computational indistinguishability, and prove that the existence of one-way functions implies the existence of secure steganographic protocols.
Abstract: Informally, steganography is the process of sending a secret message from Alice to Bob in such a way that an eavesdropper (who listens to all communications) cannot even tell that a secret message is being sent. In this work, we initiate the study of steganography from a complexity-theoretic point of view. We introduce definitions based on computational indistinguishability and we prove that the existence of one-way functions implies the existence of secure steganographic protocols.
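
The flavor of such constructions can be conveyed by the standard rejection-sampling idea: to hide a bit, draw covertexts from the ordinary channel distribution until a keyed function of the covertext equals that bit. The sketch below is illustrative only; the toy channel, the use of HMAC in place of a PRF, and the retry bound are assumptions, not the paper's exact scheme.

```python
import hashlib
import hmac
import os
import random

KEY = os.urandom(16)

# A toy "channel": innocuous messages the parties could plausibly exchange anyway.
ADJECTIVES = ("sunny", "cloudy", "rainy", "windy", "mild", "cold", "hot", "humid")
TIMES = ("today", "this morning", "this evening", "right now")
CHANNEL = [f"the weather is {a} {t}" for a in ADJECTIVES for t in TIMES]

def keyed_bit(key, covertext):
    """One bit extracted from a covertext (played by a PRF in the real construction)."""
    return hmac.new(key, covertext.encode(), hashlib.sha256).digest()[0] & 1

def encode_bit(bit, max_tries=64):
    """Sample channel messages until the keyed bit matches; every output is a
    legitimate channel message, which is what makes the stegotext look innocent."""
    covertext = random.choice(CHANNEL)
    for _ in range(max_tries):
        if keyed_bit(KEY, covertext) == bit:
            break
        covertext = random.choice(CHANNEL)
    return covertext

def decode_bit(covertext):
    return keyed_bit(KEY, covertext)

message = [1, 0, 1, 1]
stegotext = [encode_bit(b) for b in message]
print(stegotext)
print([decode_bit(c) for c in stegotext])  # recovers [1, 0, 1, 1] with high probability
```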

163 citations


Cited by
Journal ArticleDOI
08 Dec 2001-BMJ
TL;DR: There is, I think, something ethereal about i, the square root of minus one; it seemed an odd beast at first, an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

Book ChapterDOI
04 Oct 2019
TL;DR: A computational complexity theory of the “knowledge” contained in a proof is developed; zero-knowledge proofs are defined as those conveying no knowledge beyond the correctness of the proposition in question, with the first examples given for quadratic residuosity and quadratic nonresiduosity.
Abstract: Usually, a proof of a theorem contains more knowledge than the mere fact that the theorem is true. For instance, to prove that a graph is Hamiltonian it suffices to exhibit a Hamiltonian tour in it; however, this seems to contain more knowledge than the single bit Hamiltonian/non-Hamiltonian. In this paper a computational complexity theory of the “knowledge” contained in a proof is developed. Zero-knowledge proofs are defined as those proofs that convey no additional knowledge other than the correctness of the proposition in question. Examples of zero-knowledge proof systems are given for the languages of quadratic residuosity and quadratic nonresiduosity. These are the first examples of zero-knowledge proofs for languages not known to be efficiently recognizable.
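
The quadratic residuosity example admits a compact interactive protocol: the prover commits to a random square, the verifier sends a challenge bit, and the prover's reply must satisfy one algebraic relation. The toy run below uses deliberately tiny, insecure parameters chosen only to make the arithmetic readable.

```python
import math
import random

# Toy parameters: far too small to be secure, chosen only for illustration.
n = 77                   # modulus (7 * 11); real instances use an RSA-sized n
w = 10                   # the prover's secret witness
x = pow(w, 2, n)         # public statement: x is a quadratic residue mod n

def prove_round():
    """One round of the protocol; soundness error 1/2, so rounds are repeated."""
    # Prover commits to a random square y = r^2 mod n.
    while True:
        r = random.randrange(2, n)
        if math.gcd(r, n) == 1:
            break
    y = pow(r, 2, n)
    # Verifier sends a random challenge bit.
    b = random.randint(0, 1)
    # Prover replies with r (b = 0) or r * w (b = 1); either value alone leaks nothing.
    z = (r * pow(w, b, n)) % n
    # Verifier checks z^2 = y * x^b (mod n).
    return pow(z, 2, n) == (y * pow(x, b, n)) % n

print(all(prove_round() for _ in range(40)))  # an honest prover is always accepted
```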

1,962 citations

Book
26 Mar 2008
TL;DR: This unique overview of an exciting technique, written by three of the most active scientists in GP, explains how GP starts from an ooze of random computer programs and progressively refines them through mutation and sexual recombination until high-fitness solutions emerge.
Abstract: Genetic programming (GP) is a systematic, domain-independent method for getting computers to solve problems automatically starting from a high-level statement of what needs to be done. Using ideas from natural evolution, GP starts from an ooze of random computer programs, and progressively refines them through processes of mutation and sexual recombination, until high-fitness solutions emerge. All this without the user having to know or specify the form or structure of solutions in advance. GP has generated a plethora of human-competitive results and applications, including novel scientific discoveries and patentable inventions. This unique overview of this exciting technique is written by three of the most active scientists in GP. See www.gp-field-guide.org.uk for more information on the book.
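
The evolutionary loop described here (random programs, fitness-based selection, crossover, mutation) can be compressed into a small symbolic-regression example. This is a generic toy, not taken from the book; the target function, operator set, and parameters are all arbitrary choices for illustration.

```python
import operator
import random

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}
TERMS = ["x", 1.0, 2.0]

def random_tree(depth=3):
    """Grow a random expression tree: either a terminal or [op, left, right]."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMS)
    return [random.choice(list(OPS)), random_tree(depth - 1), random_tree(depth - 1)]

def evaluate(tree, x):
    if tree == "x":
        return x
    if isinstance(tree, float):
        return tree
    return OPS[tree[0]](evaluate(tree[1], x), evaluate(tree[2], x))

def fitness(tree):
    """Lower is better: squared error against the target x**2 + x."""
    return sum((evaluate(tree, x) - (x * x + x)) ** 2 for x in range(-5, 6))

def mutate(tree):
    """Occasionally replace a subtree with a fresh random one."""
    if random.random() < 0.2:
        return random_tree(2)
    if not isinstance(tree, list):
        return tree
    return [tree[0], mutate(tree[1]), mutate(tree[2])]

def crossover(a, b):
    """Crude recombination: splice branches of one parent into the other."""
    if not isinstance(a, list) or not isinstance(b, list) or random.random() < 0.3:
        return random.choice([a, b])
    return [a[0], crossover(a[1], b[1]), crossover(a[2], b[2])]

population = [random_tree() for _ in range(200)]
for generation in range(30):
    population.sort(key=fitness)
    parents = population[:50]
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(150)]
    population = parents + children
population.sort(key=fitness)
print(population[0], fitness(population[0]))  # best evolved program and its error
```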

1,856 citations

Proceedings ArticleDOI
17 May 2008
TL;DR: In this article, the authors show how to construct a variety of "trapdoor" cryptographic tools assuming the worst-case hardness of standard lattice problems (such as approximating the length of the shortest nonzero vector to within certain polynomial factors).
Abstract: We show how to construct a variety of "trapdoor" cryptographic tools assuming the worst-case hardness of standard lattice problems (such as approximating the length of the shortest nonzero vector to within certain polynomial factors). Our contributions include a new notion of trapdoor function with preimage sampling, simple and efficient "hash-and-sign" digital signature schemes, and identity-based encryption. A core technical component of our constructions is an efficient algorithm that, given a basis of an arbitrary lattice, samples lattice points from a discrete Gaussian probability distribution whose standard deviation is essentially the length of the longest Gram-Schmidt vector of the basis. A crucial security property is that the output distribution of the algorithm is oblivious to the particular geometry of the given basis.
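
The discrete Gaussian sampling at the heart of these constructions is easiest to see in one dimension: sample integers with probability proportional to exp(-pi*(z-c)^2/s^2). The rejection sampler below illustrates only that distribution, not the paper's lattice-basis algorithm, and its parameters are illustrative.

```python
import math
import random

def discrete_gaussian(c=0.0, s=3.0, tail=6):
    """Sample an integer z with probability proportional to exp(-pi*(z-c)^2/s^2),
    by rejection from a uniform proposal on a wide window around the center."""
    lo, hi = int(c - tail * s), int(c + tail * s)
    while True:
        z = random.randint(lo, hi)
        if random.random() < math.exp(-math.pi * (z - c) ** 2 / s ** 2):
            return z

samples = [discrete_gaussian() for _ in range(10000)]
print(sum(samples) / len(samples))  # close to the center 0
print(min(samples), max(samples))   # concentrated within a few multiples of s
```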

1,834 citations

Book
23 Nov 2007
TL;DR: This new edition now contains essential information on steganalysis and steganography, and digital watermark embedding is given a complete update with new processes and applications.
Abstract: Digital audio, video, images, and documents are flying through cyberspace to their respective owners. Unfortunately, along the way, individuals may choose to intervene and take this content for themselves. Digital watermarking and steganography technology greatly reduces the instances of this by limiting or eliminating the ability of third parties to decipher the content that they have taken. The many techniques of digital watermarking (embedding a code) and steganography (hiding information) continue to evolve as the applications that necessitate them do the same. The authors of this second edition provide an update on the framework for applying these techniques that they provided researchers and professionals in the first well-received edition. Steganography and steganalysis (the art of detecting hidden information) have been added to a robust treatment of digital watermarking, as many in each field research and deal with the other. New material includes watermarking with side information, QIM, and dirty-paper codes. The revision and inclusion of new material by these influential authors has created a must-own book for anyone in this profession.
*This new edition now contains essential information on steganalysis and steganography
*New concepts and new applications including QIM are introduced
*Digital watermark embedding is given a complete update with new processes and applications
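
QIM (quantization index modulation), mentioned among the new material, embeds a bit by quantizing a sample onto one of two interleaved grids and detects it by finding the nearer grid. The sketch below is a hedged toy version with an arbitrary step size and made-up signal values, not an excerpt from the book.

```python
# Toy quantization index modulation: the chosen quantization grid encodes one bit.
DELTA = 4.0  # quantizer step size (illustrative)

def embed(sample, bit):
    """Quantize the sample onto the grid selected by the bit (offset by DELTA/2)."""
    offset = bit * DELTA / 2
    return DELTA * round((sample - offset) / DELTA) + offset

def detect(sample):
    """Return the bit whose grid lies closest to the (possibly distorted) sample."""
    return min((0, 1), key=lambda b: abs(sample - embed(sample, b)))

signal = [3.2, -1.7, 8.4, 0.9]
bits = [1, 0, 1, 1]
watermarked = [embed(s, b) for s, b in zip(signal, bits)]
noisy = [w + 0.6 for w in watermarked]  # mild distortion, below DELTA/4
print([detect(w) for w in noisy])       # recovers [1, 0, 1, 1]
```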

1,773 citations