Book Chapter DOI

Why Isn't Trust Transitive?

TL;DR: The notion of trust is distinguished from a number of other (transitive) notions with which it is frequently confused, and it is argued that “proofs” of the unintentional transitivity of trust typically involve unpalatable logical assumptions as well as undesirable consequences.
Abstract: One of the great strengths of public-key cryptography is its potential to allow the localization of trust. This potential is greatest when cryptography is present to guarantee data integrity rather than secrecy, and where there is no natural hierarchy of trust. Both these conditions are typically fulfilled in the commercial world, where CSCW requires sharing of data and resources across organizational boundaries. One property which trust is frequently assumed or “proved” to have is transitivity (if A trusts B and B trusts C then A trusts C) or some generalization of transitivity such as *-closure. We use the loose term unintentional transitivity of trust to refer to a situation where B can effectively put things into A's set of trust assumptions without A's explicit consent (or sometimes even awareness). Any account of trust which allows such situations to arise clearly poses major obstacles to the effective confinement (localization) of trust. In this position paper, we argue against the need to accept unintentional transitivity of trust. We distinguish the notion of trust from a number of other (transitive) notions with which it is frequently confused, and argue that “proofs” of the unintentional transitivity of trust typically involve unpalatable logical assumptions as well as undesirable consequences.
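As a rough illustration of the point at issue (this sketch is not from the paper; the principal names and the consent predicate are hypothetical), the following Python fragment contrasts a naive *-closure of a trust relation, which silently enlarges each principal's trust assumptions, with a policy that only extends trust when the truster explicitly consents to the delegation:

    # Hypothetical sketch: naive transitive (*-closure) trust versus consent-gated trust.
    # Nothing here is taken from the paper; it only illustrates the abstract's point.

    def star_closure(trusts):
        """Naive *-closure: if A trusts B and B trusts C, conclude A trusts C."""
        closed = {a: set(bs) for a, bs in trusts.items()}
        changed = True
        while changed:
            changed = False
            for a in list(closed):
                for b in list(closed[a]):
                    for c in list(closed.get(b, set())):
                        if c not in closed[a]:
                            closed[a].add(c)   # A never consented to this addition
                            changed = True
        return closed

    def consented_closure(trusts, consents):
        """Only add A -> C when A explicitly consents to the delegation via B."""
        closed = {a: set(bs) for a, bs in trusts.items()}
        for a in closed:
            for b in list(closed[a]):
                for c in trusts.get(b, set()):
                    if consents(a, b, c):
                        closed[a].add(c)
        return closed

    trusts = {"Alice": {"Bob"}, "Bob": {"Carol"}, "Carol": set()}
    print(star_closure(trusts)["Alice"])                              # {'Bob', 'Carol'} (order may vary)
    print(consented_closure(trusts, lambda a, b, c: False)["Alice"])  # {'Bob'}

With the naive closure Alice ends up "trusting" Carol without ever being consulted, which is exactly the unintentional transitivity the abstract warns against; with the consent-gated version her trust assumptions stay where she put them.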







Citations
Proceedings Article DOI
01 Jan 2005
TL;DR: Taking into consideration the power and bandwidth constraints of WSNs, a novel agent-based trust and reputation management scheme (ATRM) is proposed from a system design point of view; its objective is to manage trust and reputation with minimal overhead in terms of extra messages and time delay.
Abstract: The operation of wireless sensor networks (WSNs) is functionally affected by selfish and/or malicious network nodes, and their resource constraints complicate the design of any WSN-based protocol and application. Rating nodes' trust and reputation has proven to be an effective solution to improve security, to support decision-making and to promote node collaboration in both wired and wireless networks. However, existing approaches to trust and reputation management focus mostly on trust and reputation modeling and ignore the overhead introduced by their proposed schemes. In this paper, taking into consideration the power and bandwidth constraints of WSNs, we propose a novel agent-based trust and reputation management scheme (ATRM) from a system design point of view. Our objective is to manage trust and reputation with minimal overhead in terms of extra messages and time delay. The main contribution of our work is the introduction of a localized trust and reputation management strategy, which reduces both communication cost and acquisition latency.
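To make the locality argument concrete, here is a minimal Python sketch, assuming a per-node agent that keeps trust values for its neighbours and answers queries from its own table; it only illustrates why local answers avoid extra messages and acquisition latency, and is not the ATRM protocol itself (the class name, smoothing factor and update rule are invented for the example):

    # Hypothetical sketch of a localized trust store: an agent co-located with each
    # node records direct observations and answers trust queries without contacting
    # other nodes. Not the authors' scheme, just an illustration of locality.

    class LocalTrustAgent:
        def __init__(self, alpha=0.2):
            self.alpha = alpha      # smoothing factor for new evidence (assumed)
            self.trust = {}         # neighbour id -> trust value in [0, 1]

        def observe(self, neighbour, cooperated):
            """Update trust from a direct observation (True = cooperated)."""
            old = self.trust.get(neighbour, 0.5)   # 0.5 = no prior information
            new = 1.0 if cooperated else 0.0
            self.trust[neighbour] = (1 - self.alpha) * old + self.alpha * new

        def query(self, neighbour):
            """Answer locally: no extra network messages, no acquisition latency."""
            return self.trust.get(neighbour, 0.5)

    agent = LocalTrustAgent()
    for outcome in [True, True, False, True]:
        agent.observe("node-17", outcome)
    print(round(agent.query("node-17"), 3))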

64 citations


Cites background from "Why Isn't Trust Transitive?"

  • ...Trust is the degree of belief about the future behavior of other entities, which is based on one's past experience with and observation of their actions [1]; and its properties can be summarized as: subjectivity [2], non-transitivity [3], temporalness [4], contextualness and dynamicity as well as non-monotonicity [1]....

    [...]

Book
15 Jun 2007
TL;DR: An edited collection surveying network security, organized into parts on Internet security, secure services, mobile and wireless security, and trust, anonymity and privacy, with appendices on cryptographic principles, legal and policy issues, and security standards.
Abstract: Preface. Contributors. 1. Computer Network Security: Basic Background and Current Issues (Panayiotis Kotzanikolaou and Christos Douligeris). 1.1 Some Terminology on Network Security. 1.2 ISO/OSI Reference Model for Networks. 1.3 Network Security Attacks. 1.4 Mechanisms and Controls for Network Security: Book Overview and Structure. References. Part One Internet Security. 2. Secure Routing (Ioannis Avramopoulos, Hisashi Kobayashi, Arvind Krishnamurthy, and Randy Wang). 2.1 Introduction. 2.2 Networking Technologies. 2.3 Attacks in Networks. 2.4 State of the Art. 2.5 Conclusion and Research Issues. References. 3. Designing Firewalls: A Survey (Angelos D. Keromytis and Vassilis Prevelakis). 3.1 Introduction. 3.2 Firewall Classification. 3.3 Firewall Deployment: Management. 3.4 Conclusions. References. 4. Security in Virtual Private Networks (Srinivas Sampalli). 4.1 Introduction. 4.2 VPN Overview. 4.3 VPN Benefits. 4.4 VPN Terminology. 4.5 VPN Taxonomy. 4.6 IPSec. 4.7 Current Research on VPNs. 4.8 Conclusions. References. 5. IP Security (IPSec) (Anirban Chakrabarti and Manimaran Govindarasu). 5.1 Introduction. 5.2 IPSec Architecture and Components. 5.3 Benefits and Applications of IPSec. 5.4 Conclusions. References. 6. IDS for Networks (John C. McEachen and John M. Zachary). 6.1 Introduction. 6.2 Background. 6.3 Modern NIDSs. 6.4 Research and Trends. 6.5 Conclusions. References. 7. Intrusion Detection Versus Intrusion Protection (Luis Sousa Cardoso). 7.1 Introduction. 7.2 Detection Versus Prevention. 7.3 Intrusion Prevention Systems: The Next Step in Evolution of IDS. 7.4 Architecture Matters. 7.5 IPS Deployment. 7.6 IPS Advantages. 7.7 IPS Requirements: What to Look For. 7.8 Conclusions. References. 8. Denial-of-Service Attacks (Aikaterini Mitrokotsa and Christos Douligeris). 8.1 Introduction. 8.2 DoS Attacks. 8.3 DDoS Attacks. 8.4 DDoS Defense Mechanisms. 8.5 Conclusions. References. 9. Secure Architectures with Active Networks (Srinivas Sampalli, Yaser Haggag, and Christian Labonte). 9.1 Introduction. 9.2 Active Networks. 9.3 SAVE Test bed. 9.4 Adaptive VPN Architecture with Active Networks. 9.5 (SAM) Architecture. 9.6 Conclusions. References. Part Two Secure Services. 10. Security in E-Services and Applications (Manish Mehta, Sachin Singh, and Yugyung Lee). 10.1 Introduction. 10.2 What Is an E-Service? 10.3 Security Requirements for E-Services and Applications. 10.4 Security for Future E-Services. References. 11. Security in Web Services (Christos Douligeris and George P. Ninios). 11.1 Introduction. 11.2 Web Services Technologies and Standards. 11.3 Web Services Security Standard. 11.4 Conclusions. References. 12. Secure Multicasting (Constantinos Boukouvalas and Anthony G. Petropoulos). 12.1 Introduction. 12.2 IP Multicast. 12.3 Application Security Requirements. 12.4 Multicast Security Issues. 12.5 Data Authentication. 12.6 Source Authentication Schemes. 12.7 Group Key Management. 12.8 Group Management and Secure Multicast Routing. 12.9 Secure IP Multicast Architectures. 12.10 Secure IP Multicast Standardization Efforts. 12.11 Conclusions. References. 13. Voice Over IP Security (Son Vuong and Kapil Kumar Singh). 13.1 Introduction. 13.2 Security Issues in VoIP. 13.3 Vulnerability Testing. 13.4 Intrusion Detection Systems. 13.5 Conclusions. References. 14. Grid Security (Kyriakos Stefanidis, Artemios G. Voyiatzis, and Dimitrios N. Serpanos). 14.1 Introduction. 14.2 Security Challenges for Grids. 14.3 Grid Security Infrastructure. 14.4 Grid Computing Environments.
14.5 Grid Network Security. 14.6 Conclusions and Future Directions. References. 15. Mobile Agent Security (Panayiotis Kotzanikolaou, Christos Douligeris, Rosa Mavropodi, and Vassilios Chrissikopoulos). 15.1 Introduction. 15.2 Taxonomy of Solutions. 15.3 Security Mechanisms for Mobile Agent Systems. References. Part Three Mobile and Security. 16. Mobile Terminal Security (Olivier Benoit, Nora Dabbous, Laurent Gauteron, Pierre Girard, Helena Handschuh, David Naccache, Stephane Socie, and Claire Whelan). 16.1 Introduction. 16.2 WLAN and WPAN Security. 16.3 GSM and 3GPP Security. 16.4 Mobile Platform Layer Security. 16.5 Hardware Attacks on Mobile Equipment. 16.6 Conclusion. References. 17. IEEE 802.11 Security (Daniel L. Lough, David J. Robinson, and Ian G. Schneller). 17.1 Introduction. 17.2 Introduction to IEEE 802.11. 17.3 Wired Equivalent Privacy. 17.4 Additional IEEE 802.11 Security Techniques. 17.5 Wireless Intrusion Detection Systems. 17.6 Practical IEEE 802.11 Security Measures. 17.7 Conclusions. References. 18. Bluetooth Security (Christian Gehrmann). 18.1 Introduction. 18.2 Bluetooth Wireless Technology. 18.3 Security Architecture. 18.4 Security Weaknesses and Countermeasures. 18.5 Bluetooth Security: What Comes Next? References. 19. Mobile Telecom Networks (Christos Xenakis and Lazaros Merakos). 19.1 Introduction. 19.2 Architectures Network. 19.3 Security Architectures. 19.4 Research Issues. 19.5 Conclusions. References. 20. Security in Mobile Ad Hoc Networks (Mike Burmester, Panayiotis Kotzanikolaou, and Christos Douligeris). 20.1 Introduction. 20.2 Routing Protocols. 20.3 Security Vulnerabilities. 20.4 Preventing Attacks in MANETs. 20.5 Trust in MANETs. 20.6 Establishing Secure Routes in a MANET. 20.7 Cryptographic Tools for MANETs. References. 21. Wireless Sensor Networks (Artemios G. Voyiatzis and Dimitrios N. Serpanos). 21.1 Introduction. 21.2 Sensor Devices. 21.3 Sensor Network Security. 21.4 Future Directions. 21.5 Conclusions. References. 22. Trust (Lidong Chen). 22.1 Introduction. 22.2 What Is a Trust Model? 22.3 How Trust Models Work? 22.4 Where Trust Can Go Wrong? 22.5 Why Is It Difficult to Define Trust? 22.6 Which Lessons Have We Learned? References. Part Four Trust, Anonymity, and Privacy. 23. PKI Systems (Nikos Komninos). 23.1 Introduction. 23.2 Origins of Cryptography. 23.3 Overview of PKI Systems. 23.4 Components of PKI Systems. 23.5 Procedures of PKI Systems. 23.6 Current and Future Aspects of PKI Systems. 23.7 Conclusions. References. 24. Privacy in Electronic Communications (Alf Zugenmaier and Joris Claessens). 24.1 Introduction. 24.2 Protection from Third Party: Confidentiality. 24.3 Protection from Communication Partner. 24.4 Invasions of Electronic Private Sphere. 24.5 Balancing Privacy with Other Needs. 24.6 Structure of Privacy. 24.7 Conclusion and Future Trends. References. 25. Securing Digital Content (Magda M. Mourad and Ahmed N. Tantawy). 25.1 Introduction. 25.2 Securing Digital Content: Need and Challenges. 25.3 Content Protection Techniques. 25.4 Illustrative Application: E-Publishing of E-Learning Content. 25.5 Concluding Remarks. References. Appendix A. Cryptography Primer: Introduction to Cryptographic Principles and Algorithms (Panayiotis Kotzanikolaou and Christos Douligeris). A.1 Introduction. A.2 Cryptographic Primitives. A.3 Symmetric-Key Cryptography. A.4 Asymmetric-Key Cryptography. A.5 Key Management. A.6 Conclusions and Other Fields of Cryptography. References. Appendix B.
Network Security: Overview of Current Legal and Policy Issues (Andreas Mitrakas). B.1 Introduction. B.2 Network Security as a Legal Requirement. B.3 Network Security Policy Overview. B.4 Legal Aspects of Network Security. B.5 Self-Regulatory Security Frameworks. B.6 Conclusions. References. Appendix C. Standards in Network Security (Despina Polemi and Panagiotis Sklavos). C.1 Introduction. C.2 Virtual Private Networks: Internet Protocol Security (IPSec). C.3 Multicast Security (MSEC). C.4 Transport Layer Security (TLS). C.5 Routing Security. C.6 ATM Networks Security. C.7 Third-Generation (3G) Mobile Networks. C.8 Wireless LAN (802.11) Security. C.9 E-Mail Security. C.10 Public-Key Infrastructure (X.509). Index. About the Editors and Authors.

63 citations


Cites background or methods from "Why Isn't Trust Transitive?"

  • ...In the Byzantine detection protocol of [60, 61], authentication is based on MACs (that protect data packets) and multiple short hash chains (that protect ACKs and fault announcements)....

    [...]

  • ...A fundamental ambiguity in detecting faults is also identified in [60, 61]; malicious sources can exploit the replay protection mechanism so that nonfaulty routers will drop packets simply because a source is faulty....

    [...]

Journal Article DOI
TL;DR: A heuristic algorithm based on learning automata, called DLATrust, is presented for discovering reliable trust paths between two users and inferring the value of trust using the proposed aggregation strategy.
Abstract: Online social networks have provided an appropriate infrastructure for users to interact with one another and share information. Since trust is one of the most important factors in forming social interactions, it is necessary in these networks to evaluate trust from one user to another indirectly connected user by propagating trust along reliable trust paths between the two users. The quality of trust inference based on trust propagation is affected by the length of trust paths and also by the aggregation strategies used for combining trust values derived from multiple paths. While evaluating trust values based on all paths provides more accurate trust inference results, it is too time consuming to be acceptable in large social networks. Therefore, discovering reliable trust paths is always challenging in these networks. Another important challenge is how to aggregate trust values of multiple paths. In this paper, we first propose a new aggregation strategy on the basis of standard collaborative filtering. We then present a heuristic algorithm based on learning automata, called DLATrust, for discovering reliable paths between two users and inferring the value of trust using the proposed aggregation strategy. The experimental results conducted on the online social network dataset of Advogato demonstrate that DLATrust can efficiently identify reliable trust paths and predict trust with high accuracy.
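A hedged sketch of the general idea rather than of DLATrust itself: a common convention is to propagate trust along a path by multiplying the edge values, then to aggregate several paths; here a simple length-based weighting stands in for the paper's collaborative-filtering aggregation and learning-automata path discovery:

    # Illustrative only: multiplicative propagation along a path and a weighted
    # average across paths. The actual DLATrust operators are more involved.

    def propagate(edge_values):
        """Concatenate a path by multiplying the trust values of its edges."""
        result = 1.0
        for t in edge_values:
            result *= t
        return result

    def aggregate(paths):
        """Combine several source-to-target paths, weighting shorter paths more."""
        weighted, total_weight = 0.0, 0.0
        for edge_values in paths:
            weight = 1.0 / len(edge_values)     # shorter path -> larger weight (assumed)
            weighted += weight * propagate(edge_values)
            total_weight += weight
        return weighted / total_weight if total_weight else 0.0

    paths_a_to_c = [[0.9, 0.8], [0.7, 0.9, 0.6]]    # two hypothetical trust paths
    print(round(aggregate(paths_a_to_c), 3))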

60 citations

Proceedings Article
01 Jan 2005
TL;DR: In this paper, the authors describe principles for expressing and analysing transitive trust networks, and define requirements for their validity, which can be used for modelling transitive trust in computerised interactions, and can be combined with algebras and algorithms for computing propagation of both trust and distrust.
Abstract: To describe the concept of transitive trust in a simplified way, assume that agent A trusts agent B, and that agent B trusts agent C, then by transitivity, agent A trusts agent C. Trust transitivity manifests itself in various forms during real life human interaction, but can be challenging to concisely model in a formal way. In this paper we describe principles for expressing and analysing transitive trust networks, and define requirements for their validity. This framework can be used for modelling transitive trust in computerised interactions, and can be combined with algebras and algorithms for computing propagation of both trust and distrust. This is illustrated by an example where transitive trust is mathematically analysed with belief calculus.
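For readers who want to see what "computing propagation of trust" can look like, the sketch below chains two opinions with a discounting operator of the kind used in belief calculus (subjective logic); the (b, d, u) representation and the particular operator form are assumptions for illustration, not quoted from the paper:

    # A trust opinion is (belief, disbelief, uncertainty) with b + d + u = 1.
    # Discounting B's opinion about C by A's opinion about B (one common form).

    def discount(op_ab, op_bc):
        b1, d1, u1 = op_ab
        b2, d2, u2 = op_bc
        b = b1 * b2
        d = b1 * d2
        u = d1 + u1 + b1 * u2    # A's doubt about B becomes uncertainty about C
        return (b, d, u)

    omega_ab = (0.8, 0.1, 0.1)   # A's opinion about B as a recommender
    omega_bc = (0.9, 0.0, 0.1)   # B's opinion about C
    print(discount(omega_ab, omega_bc))   # opinion A derives about C: (0.72, 0.0, 0.28)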

54 citations

Proceedings Article DOI
13 Oct 2003
TL;DR: The term trust justification is used to describe the process in which an agent integrates the beliefs of other agents, trust information, and its own beliefs to update its trust model, and the results of simulation experiments of the use and evolution of trust in multiagent systems are described.
Abstract: The semantic Web enables intelligent agents to “outsource” knowledge, extending and enhancing their limited knowledge bases. An open question is how agents can efficiently and effectively access the vast knowledge on the inherently open and dynamic semantic Web. The problem is not that of finding a source for desired information, but deciding which among many possibly inconsistent sources is most reliable. We propose an approach to agent knowledge outsourcing inspired by the use of trust in human society. Trust is a type of social knowledge and encodes evaluations about which agents can be taken as reliable sources of information or services. We focus on two important practical issues: learning trust and justifying trust. An agent can learn trust relationships by reasoning about its direct interactions with other agents and about public or private reputation information, i.e., the aggregate trust evaluations of other agents. We use the term trust justification to describe the process in which an agent integrates the beliefs of other agents, trust information, and its own beliefs to update its trust model. We describe the results of simulation experiments of the use and evolution of trust in multiagent systems. Our experiments demonstrate that the use of explicit trust knowledge can significantly improve knowledge outsourcing performance. We also describe a collaborative trust justification technique that focuses on reducing search complexity, handling inconsistent knowledge, and avoiding error propagation.
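As a toy illustration of combining direct experience with reputation reports (the weighting scheme and function names are invented for this sketch and are not the authors' algorithm):

    # Blend an agent's own interaction history with other agents' reported
    # evaluations; a crude stand-in for trust learning and justification.

    def direct_trust(successes, failures):
        """Simple frequency estimate from the agent's own interactions."""
        total = successes + failures
        return successes / total if total else 0.5

    def justified_trust(successes, failures, reputation_reports, self_weight=0.7):
        """Weight own evidence against the average of other agents' reports."""
        own = direct_trust(successes, failures)
        if not reputation_reports:
            return own
        reputation = sum(reputation_reports) / len(reputation_reports)
        return self_weight * own + (1 - self_weight) * reputation

    print(round(justified_trust(8, 2, [0.4, 0.6, 0.5]), 3))   # 0.71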

54 citations


Cites background from "Why Isn't Trust Transitive?"

  • ...Christianson and Harbison [7] narrowed trust as a highly subjective binary security primitive and argued that trust is intransitive....

    [...]


References
Journal Article DOI
TL;DR: This paper describes the beliefs of trustworthy parties involved in authentication protocols and the evolution of these beliefs as a consequence of communication, and gives the results of the analysis of four published protocols.
Abstract: Authentication protocols are the basis of security in many distributed systems, and it is therefore essential to ensure that these protocols function correctly. Unfortunately, their design has been extremely error prone. Most of the protocols found in the literature contain redundancies or security flaws. A simple logic has allowed us to describe the beliefs of trustworthy parties involved in authentication protocols and the evolution of these beliefs as a consequence of communication. We have been able to explain a variety of authentication protocols formally, to discover subtleties and errors in them, and to suggest improvements. In this paper we present the logic and then give the results of our analysis of four published protocols, chosen either because of their practical importance or because they serve to illustrate our method.
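To give a flavour of the logic, two representative inference rules are the shared-key message-meaning rule and the nonce-verification rule; the LaTeX below is reconstructed from memory rather than quoted from the paper, so consult the original for the exact formulation:

    % Message-meaning (shared keys): if P believes K is a key shared with Q,
    % and P sees X encrypted under K, then P believes Q once said X.
    \frac{P \mid\equiv Q \stackrel{K}{\leftrightarrow} P \qquad P \triangleleft \{X\}_K}
         {P \mid\equiv Q \mid\sim X}
    % Nonce-verification: if P believes X is fresh and believes Q once said X,
    % then P believes that Q believes X.
    \frac{P \mid\equiv \sharp(X) \qquad P \mid\equiv Q \mid\sim X}
         {P \mid\equiv Q \mid\equiv X}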

2,638 citations


"Why Isn't Trust Transitive?" refers background in this paper

  • ...If Ted finds himself under physical threat (eg power loss) then at least two different security policies are possible, depending upon whether disclosure or non-delivery is considered the greater threat: (1) destroy the message M (2) broadcast the message "Bob: M" Since it may be important to conceal the fact of which policy is in operation, when power is cut, Ted may well broadcast the message "Bob: Aunt Agatha will arrive ......

    [...]

Book
06 Mar 2003
TL;DR: The first edition made a number of predictions, explicitly or implicitly; since then the Web has grown and the patterns of Internet connectivity have vastly increased, and the authors also warn of issues posed by home LANs and of the problems caused by roaming laptops.
Abstract: From the Book: But after a time, as Frodo did not show any sign of writing a book on the spot, the hobbits returned to their questions about doings in the Shire. Lord of the Rings —J.R.R. TOLKIEN The first printing of the First Edition appeared at the Las Vegas Interop in May, 1994. At that same show appeared the first of many commercial firewall products. In many ways, the field has matured since then: You can buy a decent firewall off the shelf from many vendors. The problem of deploying that firewall in a secure and useful manner remains. We have studied many Internet access arrangements in which the only secure component was the firewall itself—it was easily bypassed by attackers going after the “protected” inside machines. Before the trivestiture of AT&T/Lucent/NCR, there were over 300,000 hosts behind at least six firewalls, plus special access arrangements with some 200 business partners. Our first edition did not discuss the massive sniffing attacks discovered in the spring of 1994. Sniffers had been running on important Internet Service Provider (ISP) machines for months—machines that had access to a major percentage of the ISP’s packet flow. By some estimates, these sniffers captured over a million host name/user name/password sets from passing telnet, ftp, and rlogin sessions. There were also reports of increased hacker activity on military sites. It’s obvious what must have happened: If you are a hacker with a million passwords in your pocket, you are going to look for the most interesting targets, and .mil certainly qualifies. Since the First Edition, we have been slowly losing the Internet arms race. The hackers have developed and deployed tools for attacks we had been anticipating for years. IP spoofing [Shimomura, 1996] and TCP hijacking are now quite common, according to the Computer Emergency Response Team (CERT). ISPs report that attacks on the Internet’s infrastructure are increasing. There was one attack we chose not to include in the First Edition: the SYN-flooding denial-of-service attack that seemed to be unstoppable. Of course, the Bad Guys learned about the attack anyway, making us regret that we had deleted that paragraph in the first place. We still believe that it is better to disseminate this information, informing saints and sinners at the same time. The saints need all the help they can get, and the sinners have their own channels of communication. Crystal Ball or Bowling Ball? The first edition made a number of predictions, explicitly or implicitly. Was our foresight accurate? Our biggest failure was neglecting to foresee how successful the Internet would become. We barely mentioned the Web and declined a suggestion to use some weird syntax when listing software resources. The syntax, of course, was the URL... Concomitant with the growth of the Web, the patterns of Internet connectivity vastly increased. We assumed that a company would have only a few external connections—few enough that they’d be easy to keep track of, and to firewall. Today’s spaghetti topology was a surprise. We didn’t realize that PCs would become Internet clients as soon as they did. We did, however, warn that as personal machines became more capable, they’d become more vulnerable. Experience has proved us very correct on that point. We did anticipate high-speed home connections, though we spoke of ISDN, rather than cable modems or DSL. (We had high-speed connectivity even then, though it was slow by today’s standards.)
We also warned of issues posed by home LANs, and we warned about the problems caused by roaming laptops. We were overly optimistic about the deployment of IPv6 (which was called IPng back then, as the choice hadn’t been finalized). It still hasn’t been deployed, and its future is still somewhat uncertain. We were correct, though, about the most fundamental point we made: Buggy host software is a major security issue. In fact, we called it the “fundamental theorem of firewalls”: Most hosts cannot meet our requirements: they run too many programs that are too large. Therefore, the only solution is to isolate them behind a firewall if you wish to run any programs at all. If anything, we were too conservative. Our Approach: This book is nearly a complete rewrite of the first edition. The approach is different, and so are many of the technical details. Most people don’t build their own firewalls anymore. There are far more Internet users, and the economic stakes are higher. The Internet is a factor in warfare. The field of study is also much larger—there is too much to cover in a single book. One reviewer suggested that Chapters 2 and 3 could be a six-volume set. (They were originally one mammoth chapter.) Our goal, as always, is to teach an approach to security. We took far too long to write this edition, but one of the reasons why the first edition survived as long as it did was that we concentrated on the concepts, rather than details specific to a particular product at a particular time. The right frame of mind goes a long way toward understanding security issues and making reasonable security decisions. We’ve tried to include anecdotes, stories, and comments to make our points. Some complain that our approach is too academic, or too UNIX-centric, that we are too idealistic, and don’t describe many of the most common computing tools. We are trying to teach attitudes here more than specific bits and bytes. Most people have hideously poor computing habits and network hygiene. We try to use a safer world ourselves, and are trying to convey how we think it should be. The chapter outline follows, but we want to emphasize the following: It is OK to skip the hard parts. If we dive into detail that is not useful to you, feel free to move on. The introduction covers the overall philosophy of security, with a variety of time-tested maxims. As in the first edition, Chapter 2 discusses most of the important protocols, from a security point of view. We moved material about higher-layer protocols to Chapter 3. The Web merits a chapter of its own. The next part discusses the threats we are dealing with: the kinds of attacks in Chapter 5, and some of the tools and techniques used to attack hosts and networks in Chapter 6. Part III covers some of the tools and techniques we can use to make our networking world safer. We cover authentication tools in Chapter 7, and safer network servicing software in Chapter 8. Part IV covers firewalls and virtual private networks (VPNs). Chapter 9 introduces various types of firewalls and filtering techniques, and Chapter 10 summarizes some reasonable policies for filtering some of the more essential services discussed in Chapter 2. If you don’t find advice about filtering a service you like, we probably think it is too dangerous (refer to Chapter 2). Chapter 11 covers a lot of the deep details of firewalls, including their configuration, administration, and design. It is certainly not a complete discussion of the subject, but should give readers a good start.
VPN tunnels, including holes through firewalls, are covered in some detail in Chapter 12. There is more detail in Chapter 18. In Part V, we apply these tools and lessons to organizations. Chapter 13 examines the problems and practices on modern intranets. See Chapter 15 for information about deploying a hacking-resistant host, which is useful in any part of an intranet. Though we don’t especially like intrusion detection systems (IDSs) very much, they do play a role in security, and are discussed in Chapter 15. The last part offers a couple of stories and some further details. The Berferd chapter is largely unchanged, and we have added “The Taking of Clark,” a real-life story about a minor break-in that taught useful lessons. Chapter 18 discusses secure communications over insecure networks, in quite some detail. For even further detail, Appendix A has a short introduction to cryptography. The conclusion offers some predictions by the authors, with justifications. If the predictions are wrong, perhaps the justifications will be instructive. (We don’t have a great track record as prophets.) Appendix B provides a number of resources for keeping up in this rapidly changing field. Errata and Updates: Everyone and everything seems to have a Web site these days; this book is no exception. Our “official” Web site is . We’ll post an errata list there; we’ll also keep an up-to-date list of other useful Web resources. If you find any errors—we hope there aren’t many—please let us know via e-mail at . Acknowledgments: For many kindnesses, we’d like to thank Joe Bigler, Steve “Hollywood” Branigan, Hal Burch, Brian Clapper, David Crocker, Tom Dow, Phil Edwards and the Internet Public Library, Anja Feldmann, Karen Gettman, Brian Kernighan, David Korman, Tom Limoncelli, Norma Loquendi, Cat Okita, Robert Oliver, Vern Paxson, Marcus Ranum, Eric Rescorla, Guido van Rooij, Luann Rouff (a most excellent copy editor), Abba Rubin, Peter Salus, Glenn Sieb, Karl Siil (we’ll always have Boston), Irina Strizhevskaya, Rob Thomas, Win Treese, Dan Wallach, Avishai Wool, Karen Yannetta, and Michal Zalewski, among many others. BILL CHESWICK STEVE BELLOVIN AVI RUBIN

730 citations

Proceedings Article DOI
07 May 1990
TL;DR: A mechanism is presented for reasoning about belief as a systematic way to understand the working of cryptographic protocols; it places a strong emphasis on the separation between the content and the meaning of messages.
Abstract: A mechanism is presented for reasoning about belief as a systematic way to understand the working of cryptographic protocols. The mechanism captures more features of such protocols than the one given by M. Burrows et al. (1989), to which these proposals are a substantial extension. The notion of possession incorporated in the approach assumes that principals can include in messages data they do not believe in, but merely possess. This also enables conclusions such as 'Q possesses the shared key' to be derived, as in one of the examples. The approach places a strong emphasis on the separation between the content and the meaning of messages. This can increase consistency in the analysis and, more importantly, introduce the ability to reason at more than one level. The final position in a given run will depend on the level of mutual trust of the specified principals participating in that run.
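As a rough rendering of the possession notion (rule shapes only, reconstructed from memory rather than quoted from the paper): a principal possesses whatever it is told, and possesses whatever it can compute from things it already possesses, for example an encryption of a possessed message under a possessed key:

    % P \triangleleft X : "P is told X";  P \ni X : "P possesses X".
    \frac{P \triangleleft X}{P \ni X}
    \qquad
    \frac{P \ni X \qquad P \ni K}{P \ni \{X\}_K}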

682 citations

Book
01 Jan 1973

361 citations

Book
01 Jan 1994
TL;DR: The 2-amino-3-bromoanthraquinone which is isolated may be used for the manufacture of dyes and is at least as pure as that obtained from purified 2- aminoanthraquin one by the process of the prior art.
Abstract: In a process for the manufacture of 2-amino-3-bromoanthraquinone by heating 2-aminoanthraquinone with bromine (in the molar ratio of 1:1) in sulfuric acid, while mixing, the improvement wherein crude 2-aminoanthraquinone, in sulfuric acid of from 60 to 90 percent strength by weight, which contains from 10 to 15% by weight of an alkanecarboxylic acid of 3 or 4 carbon atoms or a mixture of such acids, is heated with from 1 to 1.05 moles of bromine per mole of 2-aminoanthraquinone at from 130 to 150 DEG C. The 2-amino-3-bromoanthraquinone which is isolated may be used for the manufacture of dyes. It is at least as pure as that obtained from purified 2-aminoanthraquinone by the process of the prior art.

356 citations


"Why Isn't Trust Transitive?" refers background in this paper

  • ...If Ted finds himself under physical threat (eg power loss) then at least two different security policies are possible, depending upon whether disclosure or non-delivery is considered the greater threat: (1) destroy the message M (2) broadcast the message "Bob: M" Since it may be important to conceal the fact of which policy is in operation, when power is cut, Ted may well broadcast the message "Bob: Aunt Agatha will arrive ......

    [...]