
Showing papers by "Michael J. Freedman published in 2009"


Journal ArticleDOI
TL;DR: Ethane allows managers to define a single network-wide fine-grain policy and then enforces it directly; ports to popular commodity switching chipsets show that it is compatible with existing high-fanout switches.
Abstract: This paper presents Ethane, a new network architecture for the enterprise. Ethane allows managers to define a single network-wide fine-grain policy and then enforces it directly. Ethane couples extremely simple flow-based Ethernet switches with a centralized controller that manages the admittance and routing of flows. While radical, this design is backwards-compatible with existing hosts and switches. We have implemented Ethane in both hardware and software, supporting both wired and wireless hosts. We also show that it is compatible with existing high-fanout switches by porting it to popular commodity switching chipsets. We have deployed and managed two operational Ethane networks, one in the Stanford University Computer Science Department supporting over 300 hosts, and another within a small business of 30 hosts. Our deployment experiences have significantly affected Ethane's design.

187 citations
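
For illustration only, a minimal sketch of the control loop that the abstract describes: simple switches forward the first packet of each unknown flow to a centralized controller, which checks the flow against one network-wide policy before installing forwarding state. All names (Flow, Policy, Controller, StubSwitch) and the default-deny rule format are assumptions of this sketch, not Ethane's actual interfaces.

```python
# Illustrative sketch only: a centralized controller admitting or denying flows
# against a single network-wide policy, in the spirit of Ethane.
from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    src_host: str
    dst_host: str
    dst_port: int

class Policy:
    """A network-wide policy expressed as ordered (predicate, allow) rules."""
    def __init__(self, rules):
        self.rules = rules

    def allows(self, flow: Flow) -> bool:
        for predicate, allow in self.rules:
            if predicate(flow):
                return allow
        return False  # default-deny

class Controller:
    """Switches send the first packet of every unknown flow here."""
    def __init__(self, policy: Policy, paths):
        self.policy = policy
        self.paths = paths  # (src, dst) -> list of switches on the path

    def handle_new_flow(self, flow: Flow):
        if not self.policy.allows(flow):
            return None  # switch will drop subsequent packets of this flow
        path = self.paths.get((flow.src_host, flow.dst_host), [])
        for switch in path:
            switch.install_flow_entry(flow)  # per-flow forwarding state
        return path

# Example with a stub switch: permit only HTTP between h1 and h2.
class StubSwitch:
    def install_flow_entry(self, flow):
        print("installed entry for", flow)

controller = Controller(Policy([(lambda f: f.dst_port == 80, True)]),
                        {("h1", "h2"): [StubSwitch()]})
controller.handle_new_flow(Flow("h1", "h2", 80))  # admitted, entry installed
controller.handle_new_flow(Flow("h1", "h2", 22))  # denied by default-deny
```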


Proceedings Article
14 Jun 2009
TL;DR: The paper explores design and implementation considerations for geo-replicating CRAQ storage across multiple datacenters to provide locality-optimized operations, and discusses multi-object atomic updates and multicast optimizations for large-object updates.
Abstract: Massive storage systems typically replicate and partition data over many potentially-faulty components to provide both reliability and scalability. Yet many commercially deployed systems, especially those designed for interactive use by customers, sacrifice stronger consistency properties in the desire for greater availability and higher throughput. This paper describes the design, implementation, and evaluation of CRAQ, a distributed object-storage system that challenges this inflexible tradeoff. Our basic approach, an improvement on Chain Replication, maintains strong consistency while greatly improving read throughput. By distributing load across all object replicas, CRAQ scales linearly with chain size without increasing consistency coordination. At the same time, it exposes noncommitted operations for weaker consistency guarantees when this suffices for some applications, which is especially useful under periods of high system churn. This paper explores additional design and implementation considerations for geo-replicated CRAQ storage across multiple datacenters to provide locality-optimized operations. We also discuss multi-object atomic updates and multicast optimizations for large-object updates.

94 citations
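
For illustration only, a rough sketch of the read path behind the throughput claim: any node in the chain may answer a read, serving its value directly when it is clean (fully committed) and first asking the tail for the latest committed version number when it is dirty. The single-object ChainNode class below is a simplifying assumption of this sketch, not the system's code; forwarding of writes down the chain is omitted.

```python
# Illustrative sketch of CRAQ-style reads on a single object.
class ChainNode:
    def __init__(self, tail=None):
        self.versions = {}   # version number -> value
        self.committed = 0   # highest version known committed
        self.latest = 0      # highest version received (may be uncommitted/"dirty")
        self.tail = tail     # None if this node is the tail

    def write(self, value):
        """Apply a write as it propagates down the chain (head -> tail)."""
        self.latest += 1
        self.versions[self.latest] = value
        if self.tail is None:            # the tail commits the write
            self.committed = self.latest
        return self.latest

    def ack_commit(self, version):
        """Acks flowing back from the tail mark versions clean."""
        self.committed = max(self.committed, version)
        for v in [v for v in self.versions if v < self.committed]:
            del self.versions[v]         # keep only committed-and-newer versions

    def read(self):
        if self.latest == self.committed:    # clean: answer locally
            return self.versions[self.committed]
        v = self.tail.committed              # dirty: ask tail which version is committed
        return self.versions[v]

# Example: a two-node chain where a dirty read is resolved via the tail.
tail = ChainNode()
node = ChainNode(tail=tail)
node.write("v1"); tail.write("v1")   # write reaches the tail, which commits it
node.ack_commit(tail.committed)      # ack flows back up; node is clean
node.write("v2")                     # newer write not yet committed: node is dirty
print(node.read())                   # "v1" -- the tail's committed version
```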


Posted Content
TL;DR: This paper proposes a semi-centralized architecture that divides responsibility between a proxy, which obliviously blinds the client inputs, and a database, which identifies the (blinded) keywords whose values satisfy some evaluation function.
Abstract: Combining and analyzing data collected at multiple locations is critical for a wide variety of applications, such as detecting and diagnosing malicious attacks or computing an accurate estimate of the popularity of Web sites. However, legitimate concerns about privacy often inhibit participation in collaborative data-analysis systems. In this paper, we design, implement, and evaluate a practical solution for privacy-preserving collaboration among a large number of participants. Scalability is achieved through a “semi-centralized” architecture that divides responsibility between a proxy that obliviously blinds the client inputs and a database that identifies the (blinded) keywords that have values satisfying some evaluation function. Our solution leverages a novel cryptographic protocol that provably protects the privacy of both the participants and the keywords. For example, if web servers collaborate to detect source IP addresses responsible for denial-of-service attacks, our protocol would not reveal the traffic mix of the Web servers or the identity of the “good” IP addresses. We implemented a prototype of our design, including an amortized oblivious transfer protocol that substantially improves the efficiency of client-proxy interactions. Our experiments show that the performance of our system scales linearly with computing resources, making it easy to improve performance by adding more cores or machines. For collaborative diagnosis of denial-of-service attacks, our system can handle millions of suspect IP addresses per hour when the proxy and the database each run on two quad-core machines.

47 citations
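
For illustration only, a toy version of the semi-centralized split: the proxy blinds client keywords with a key only it holds, so the database aggregates opaque identifiers and applies the evaluation function (here, a simple count threshold). The HMAC blinding and every name below are simplifying assumptions of this sketch, standing in for the paper's actual cryptographic protocol and its amortized oblivious transfer.

```python
# Toy illustration of the proxy/database division of labor.
import hashlib
import hmac
from collections import Counter

class Proxy:
    """Obliviously blinds client keywords before they reach the database."""
    def __init__(self, secret: bytes):
        self._secret = secret  # known only to the proxy

    def blind(self, keyword: str) -> str:
        return hmac.new(self._secret, keyword.encode(), hashlib.sha256).hexdigest()

class Database:
    """Sees only blinded keywords; flags those satisfying the evaluation function."""
    def __init__(self, threshold: int):
        self.threshold = threshold
        self.counts = Counter()

    def submit(self, blinded_keyword: str):
        self.counts[blinded_keyword] += 1

    def hot_keywords(self):
        return [k for k, c in self.counts.items() if c >= self.threshold]

# Example: web servers reporting suspect source IPs of a denial-of-service attack.
proxy, db = Proxy(b"proxy-only-secret"), Database(threshold=2)
for report in (["10.0.0.7", "10.0.0.9"], ["10.0.0.7"]):
    for ip in report:
        db.submit(proxy.blind(ip))
print(db.hot_keywords())  # the blinded identifier for 10.0.0.7 crosses the threshold
```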


Proceedings Article
21 Apr 2009
TL;DR: Firecoral provides a highly-configurable interface through which users can enforce privacy preferences by carefully specifying which content they will share, and a security model that guarantees content integrity even in the face of untrusted peers.
Abstract: Peer-to-peer systems have been a disruptive technology for enabling large-scale Internet content distribution. Yet web browsers, today's dominant application platform, seem inherently based on the client/server communication model. This paper presents the design of Firecoral, a browser-based extension platform that enables the peer-to-peer exchange of web content in a secure, flexible manner. Firecoral provides a highly-configurable interface through which users can enforce privacy preferences by carefully specifying which content they will share, and a security model that guarantees content integrity even in the face of untrusted peers. The Firecoral protocol is backwards compatible with today's web standards, integrates easily with existing web servers, and is designed not to interfere with a typical browsing experience and publishing ecosystem.

24 citations
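
For illustration only, the integrity model reduces to a familiar check: content fetched from an untrusted peer is accepted only if it matches a digest obtained from a trusted party. The fetch_from_peer and fetch_trusted_digest callables below are hypothetical stand-ins of this sketch; the real system's metadata handling is richer.

```python
# Minimal sketch: verify peer-served content against a digest from a trusted source.
import hashlib

class IntegrityError(Exception):
    pass

def fetch_verified(url: str, fetch_from_peer, fetch_trusted_digest) -> bytes:
    expected = fetch_trusted_digest(url)   # digest learned from a trusted party
    body = fetch_from_peer(url)            # bytes served by an untrusted peer
    actual = hashlib.sha256(body).hexdigest()
    if actual != expected:
        raise IntegrityError(f"hash mismatch for {url}")
    return body                            # safe to hand to the browser

# Usage with stub callables standing in for real network fetches:
page = b"<html>cached page</html>"
ok = fetch_verified(
    "http://example.com/page",
    fetch_from_peer=lambda u: page,
    fetch_trusted_digest=lambda u: hashlib.sha256(page).hexdigest(),
)
```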


Journal ArticleDOI
TL;DR: The Meru Project at Stanford University is designing and implementing an architecture for the virtual worlds of the future that can, like the original World Wide Web at CERN, investigate basic questions about system design.
Abstract: This article outlines the Meru Project at Stanford University, which is designing and implementing an architecture for the virtual worlds of the future. The hope is that we can avoid some of the complexities the Web has encountered by learning how to build applications and services before they are subject to the short-term necessities of commercial development. While Meru cannot compete with the content creation of commercial virtual worlds, it can, like the original World Wide Web at CERN, investigate basic questions about system design. By doing so, the door can be opened to a future where physical sensors in the real world seed their virtual reflections, users can visually browse a sea of information, and virtual avatars convey physical social cues to bring distance interaction to the level of actual presence.

15 citations


Book ChapterDOI
27 Feb 2009
TL;DR: It is shown that multilateral exchanges satisfy several desirable efficiency and robustness properties that bilateral exchanges do not, and that an equilibrium in bilateral exchange corresponds to a multilateral exchange equilibrium if and only if it is robust to deviations by coalitions of users.
Abstract: Peer-assisted content distribution matches user demand for content with available supply at other peers in the network. Inspired by this supply-and-demand interpretation of the nature of content sharing, we employ price theory to study peer-assisted content distribution. In this approach, the market-clearing prices are those which exactly align supply and demand, and the system is studied through the characterization of price equilibria. We rigorously analyze the efficiency and robustness gains that are enabled by price-based multilateral exchange. We show that multilateral exchanges satisfy several desirable efficiency and robustness properties that bilateral exchanges do not: e.g., equilibria in bilateral exchange may fail to exist, may be inefficient if they do exist, and may fail to remain robust to collusive deviations even if they are Pareto efficient. Further, we show that an equilibrium in bilateral exchange corresponds to a multilateral exchange equilibrium if and only if it is robust to deviations by coalitions of users.

8 citations
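
To make the supply-and-demand reading concrete, the market-clearing condition has the following general form; the notation is ours, introduced only for illustration, and is not necessarily the chapter's.

```latex
% Users i, content items k, prices p_k; d_i^k(p) and s_i^k(p) denote user i's
% demand for and supply of item k at price vector p (illustrative notation).
% A price vector p is market-clearing when demand meets supply for every item:
\[
  \sum_{i} d_i^{k}(p) \;=\; \sum_{i} s_i^{k}(p) \qquad \text{for every item } k,
\]
% while each user's purchases are covered by the value of what it supplies:
\[
  \sum_{k} p_k \, d_i^{k}(p) \;\le\; \sum_{k} p_k \, s_i^{k}(p) \qquad \text{for every user } i.
\]
```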


Proceedings ArticleDOI
12 Jun 2009
TL;DR: This work provides a formal comparison of P2P system designs based on bilateral exchange with those that enable multilateral exchange via a price-based market mechanism to match supply and demand.
Abstract: Users of peer-to-peer systems are often incentivized to contribute their upload capacity in a bilateral manner: downloading is possible in return for uploading to the same peer (e.g., BitTorrent). An alternative is to use multilateral exchange to match user demand for content to available supply at other peers in the system. Multilateral exchange can be enabled through prices and a virtual currency. Monetary incentives for uploading in P2P systems have been proposed previously [1], [2], [3], [4]. We provide a formal comparison of P2P system designs based on bilateral exchange with those that enable multilateral exchange via a price-based market mechanism to match supply and demand. This work surveys and generalizes [5] and [6].
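
A tiny example, purely illustrative and in our own toy notation, shows the distinction the comparison turns on: under cyclic demand no pair of peers has mutually useful content, so bilateral (tit-for-tat style) exchange stalls, whereas a price- or currency-mediated multilateral exchange clears around the cycle.

```python
# Illustrative only: cyclic demand where bilateral exchange stalls but
# multilateral, currency-mediated exchange clears.
wants = {"A": "B", "B": "C", "C": "A"}  # peer -> the peer whose content it wants

def bilateral_pairs(wants):
    """Pairs that can trade bilaterally: each wants content held by the other."""
    return [(x, y) for x in wants for y in wants
            if x < y and wants[x] == y and wants[y] == x]

def multilateral_cycle(wants):
    """With prices or a virtual currency, a cycle of wants can clear: every peer
    uploads once, downloads once, and payments net to zero around the cycle."""
    start = next(iter(wants))
    cycle, current = [start], wants[start]
    while current != start:
        cycle.append(current)
        current = wants[current]
    return cycle

print(bilateral_pairs(wants))     # [] -- no pair has mutual wants, so no bilateral trade
print(multilateral_cycle(wants))  # ['A', 'B', 'C'] -- clears once currency mediates
```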