Author

Robert N. M. Watson

Other affiliations: McAfee
Bio: Robert N. M. Watson is an academic researcher at the University of Cambridge. He has contributed to research on topics including instruction sets and Unix, has an h-index of 25, and has co-authored 80 publications receiving 2,588 citations. His previous affiliations include McAfee.


Papers
Journal ArticleDOI
14 Jun 2014
TL;DR: CHERI, a hybrid capability model that extends the 64-bit MIPS ISA with byte-granularity memory protection, is presented, demonstrating that it enables language memory model enforcement and fault isolation in hardware rather than software, and that the CHERI mechanisms are easily adopted by existing programs for efficient in-program memory safety.
Abstract: Motivated by contemporary security challenges, we reevaluate and refine capability-based addressing for the RISC era. We present CHERI, a hybrid capability model that extends the 64-bit MIPS ISA with byte-granularity memory protection. We demonstrate that CHERI enables language memory model enforcement and fault isolation in hardware rather than software, and that the CHERI mechanisms are easily adopted by existing programs for efficient in-program memory safety. In contrast to past capability models, CHERI complements, rather than replaces, the ubiquitous page-based protection mechanism, providing a migration path towards deconflating data-structure protection and OS memory management. Furthermore, CHERI adheres to a strict RISC philosophy: it maintains a load-store architecture and requires only single-cycle instructions, and supplies protection primitives to the compiler, language runtime, and operating system. We demonstrate a mature FPGA implementation that runs the FreeBSD operating system with a full range of software and an open-source application suite compiled with an extended LLVM to use CHERI memory protection. A limit study compares published memory safety mechanisms in terms of instruction count and memory overheads. The study illustrates that CHERI is performance-competitive even while providing assurance and greater flexibility with simpler hardware.
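
The protection model is easiest to see as code. Below is a toy, software-only C model of the check a CHERI capability load performs in hardware; the struct layout, permission bits, and function names are invented for illustration and are not the CHERI ISA, where bounds and permissions live in tagged registers and the check happens in the load/store pipeline.

    /* Toy model of a CHERI-style capability; illustration only. */
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define PERM_LOAD  0x1
    #define PERM_STORE 0x2

    typedef struct {
        uintptr_t base;    /* lowest address the capability may touch */
        size_t    length;  /* size of the region in bytes */
        unsigned  perms;   /* e.g. PERM_LOAD | PERM_STORE */
    } toy_cap;

    /* Model of the hardware check on a load through a capability:
     * out-of-bounds or under-privileged access traps instead of
     * silently corrupting memory. */
    static uint8_t cap_load_u8(toy_cap c, uintptr_t addr)
    {
        if (!(c.perms & PERM_LOAD) ||
            addr < c.base || addr >= c.base + c.length) {
            fprintf(stderr, "capability fault at %#lx\n", (unsigned long)addr);
            abort();  /* hardware would raise an exception here */
        }
        return *(uint8_t *)addr;
    }

    int main(void)
    {
        uint8_t buf[16] = {0};
        toy_cap c = { (uintptr_t)buf, sizeof buf, PERM_LOAD };

        (void)cap_load_u8(c, (uintptr_t)&buf[15]);  /* in bounds: OK */
        (void)cap_load_u8(c, (uintptr_t)&buf[16]);  /* one past the end: traps */
        return 0;
    }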

262 citations

Proceedings ArticleDOI
17 May 2015
TL;DR: This work demonstrates multiple orders-of-magnitude improvement in scalability, simplified programmability, and resulting tangible security benefits as compared to compartmentalization based on pure Memory-Management Unit (MMU) designs.
Abstract: CHERI extends a conventional RISC Instruction-Set Architecture, compiler, and operating system to support fine-grained, capability-based memory protection to mitigate memory-related vulnerabilities in C-language TCBs. We describe how CHERI capabilities can also underpin a hardware-software object-capability model for application compartmentalization that can mitigate broader classes of attack. Prototyped as an extension to the open-source 64-bit BERI RISC FPGA soft-core processor, FreeBSD operating system, and LLVM compiler, we demonstrate multiple orders-of-magnitude improvement in scalability, simplified programmability, and resulting tangible security benefits as compared to compartmentalization based on pure Memory-Management Unit (MMU) designs. We evaluate incrementally deployable CHERI-based compartmentalization using several real-world UNIX libraries and applications.
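
A toy C model of the object-capability idea (not the libcheri API): a compartment can use only the memory explicitly delegated to it. The names below are invented; on CHERI the isolation is enforced by tagged capability registers at domain crossings rather than by programmer convention.

    #include <stddef.h>
    #include <stdio.h>
    #include <string.h>

    typedef struct {
        void  *base;   /* region explicitly delegated to the sandbox */
        size_t length;
    } delegated_cap;

    /* An untrusted "compartment": it may only use what it was handed.
     * On CHERI, any stray pointer it fabricated would be an invalid
     * (untagged) capability and would trap on use. */
    static int untrusted_parse(delegated_cap input)
    {
        const char *p = input.base;
        size_t n = input.length;
        return n > 0 && p[0] == '{';  /* inspect at most n bytes of p */
    }

    int main(void)
    {
        char msg[] = "{\"ok\":true}";
        delegated_cap c = { msg, strlen(msg) };
        printf("looks like JSON: %d\n", untrusted_parse(c));
        return 0;
    }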

216 citations

Book ChapterDOI
28 Jun 2006
TL;DR: The so-called “Great Firewall of China” operates, in part, by inspecting TCP packets for keywords that are to be blocked, but if the endpoints completely ignore the firewall's resets, the connection proceeds unhindered.
Abstract: The so-called “Great Firewall of China” operates, in part, by inspecting TCP packets for keywords that are to be blocked. If the keyword is present, TCP reset packets (viz: with the RST flag set) are sent to both endpoints of the connection, which then close. However, because the original packets are passed through the firewall unscathed, if the endpoints completely ignore the firewall's resets, then the connection will proceed unhindered. Once one connection has been blocked, the firewall makes further easy-to-evade attempts to block further connections from the same machine. This latter behaviour can be leveraged into a denial-of-service attack on third-party machines.
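
The evasion the paper describes is simply to drop or ignore RST segments at the endpoints (e.g., with a local firewall rule). As a hedged illustration, the libpcap sketch below merely makes injected resets visible on the wire; the interface name "eth0" is an assumption, and capture requires root.

    #include <pcap/pcap.h>
    #include <stdio.h>

    static void on_packet(u_char *user, const struct pcap_pkthdr *h,
                          const u_char *bytes)
    {
        (void)user; (void)bytes;
        printf("RST seen, %u bytes on the wire\n", h->len);
    }

    int main(void)
    {
        char errbuf[PCAP_ERRBUF_SIZE];
        pcap_t *p = pcap_open_live("eth0", 96, 0, 1000, errbuf);
        if (!p) { fprintf(stderr, "pcap: %s\n", errbuf); return 1; }

        /* BPF filter: TCP segments with the RST flag set. */
        struct bpf_program prog;
        if (pcap_compile(p, &prog, "tcp[tcpflags] & tcp-rst != 0", 1,
                         PCAP_NETMASK_UNKNOWN) == 0)
            pcap_setfilter(p, &prog);

        pcap_loop(p, -1, on_packet, NULL);
        pcap_close(p);
        return 0;
    }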

193 citations

Proceedings ArticleDOI
02 Nov 2016
TL;DR: Firmament is described, a centralized scheduler that scales to over ten thousand machines at sub-second placement latency even though it continuously reschedules all tasks via a min-cost max-flow (MCMF) optimization, and exceeds the placement quality of four widely-used centralized and distributed schedulers on a real-world cluster.
Abstract: Centralized datacenter schedulers can make high-quality placement decisions when scheduling tasks in a cluster. Today, however, high-quality placements come at the cost of high latency at scale, which degrades response time for interactive tasks and reduces cluster utilization. This paper describes Firmament, a centralized scheduler that scales to over ten thousand machines at sub-second placement latency even though it continuously reschedules all tasks via a min-cost max-flow (MCMF) optimization. Firmament achieves low latency by using multiple MCMF algorithms, by solving the problem incrementally, and via problem-specific optimizations. Experiments with a Google workload trace from a 12,500-machine cluster show that Firmament improves placement latency by 20× over Quincy [22], a prior centralized scheduler using the same MCMF optimization. Moreover, even though Firmament is centralized, it matches the placement latency of distributed schedulers for workloads of short tasks. Finally, Firmament exceeds the placement quality of four widely-used centralized and distributed schedulers on a real-world cluster, and hence improves batch task response time by 6×.
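
To make the formulation concrete, here is a tiny C sketch of the Quincy-style flow network Firmament solves: each task supplies one unit of flow, arcs carry placement costs, and a min-cost max-flow solution encodes a schedule. The node layout and cost values are invented for illustration, and no MCMF solver is included; Firmament's contribution is running several such solvers incrementally so the network can be re-solved on every cluster change.

    #include <stdio.h>

    enum { N_TASKS = 2, N_MACHINES = 2 };

    typedef struct { int from, to, capacity, cost; } arc;

    int main(void)
    {
        /* Node ids: 0..1 tasks, 2..3 machines, 4 unscheduled
         * aggregator, 5 sink. */
        arc arcs[] = {
            /* task -> machine arcs; cost models placement penalty */
            {0, 2, 1, 3}, {0, 3, 1, 7},
            {1, 2, 1, 5}, {1, 3, 1, 2},
            /* task -> unscheduled arcs: high cost makes leaving a
             * task unplaced a last resort */
            {0, 4, 1, 100}, {1, 4, 1, 100},
            /* machines and aggregator drain into the sink */
            {2, 5, 1, 0}, {3, 5, 1, 0}, {4, 5, N_TASKS, 0},
        };

        /* A real scheduler hands this graph to an MCMF solver. */
        for (size_t i = 0; i < sizeof arcs / sizeof arcs[0]; i++)
            printf("arc %d->%d cap=%d cost=%d\n",
                   arcs[i].from, arcs[i].to, arcs[i].capacity, arcs[i].cost);
        return 0;
    }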

181 citations

Proceedings Article
04 May 2015
TL;DR: It is shown that QJUMP achieves bounded latency and reduces in-network interference by up to 300×, outperforming Ethernet Flow Control (802.3x), ECN (WRED) and DCTCP and pFabric.
Abstract: QJUMP is a simple and immediately deployable approach to controlling network interference in datacenter networks. Network interference occurs when congestion from throughput-intensive applications causes queueing that delays traffic from latency-sensitive applications. To mitigate network interference, QJUMP applies Internet QoS-inspired techniques to datacenter applications. Each application is assigned to a latency sensitivity level (or class). Packets from higher levels are rate-limited in the end host, but once allowed into the network can "jump-the-queue" over packets from lower levels. In settings with known node counts and link speeds, QJUMP can support service levels ranging from strictly bounded latency (but with low rate) through to line-rate throughput (but with high latency variance). We have implemented QJUMP as a Linux Traffic Control module. We show that QJUMP achieves bounded latency and reduces in-network interference by up to 300×, outperforming Ethernet Flow Control (802.3x), ECN (WRED) and DCTCP. We also show that QJUMP improves average flow completion times, performing close to or better than DCTCP and pFabric.
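
A sketch of the end-host half of the mechanism: per-level token-bucket rate limiting in C, where higher (more latency-sensitive) levels accept a lower rate in exchange for in-network priority. The rates, units, and names are illustrative assumptions, not QJUMP's actual Linux Traffic Control implementation.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    typedef struct {
        double rate_bytes_per_us; /* refill rate for this level */
        double tokens;            /* current budget, in bytes */
        double burst;             /* bucket depth */
        uint64_t last_us;         /* time of last refill */
    } qjump_bucket;

    /* Admit a packet if the level's budget covers it; admitted packets
     * are then prioritized in the network ahead of lower levels. */
    static bool admit(qjump_bucket *b, uint64_t now_us, double pkt_bytes)
    {
        b->tokens += (now_us - b->last_us) * b->rate_bytes_per_us;
        if (b->tokens > b->burst)
            b->tokens = b->burst;
        b->last_us = now_us;
        if (b->tokens >= pkt_bytes) {
            b->tokens -= pkt_bytes;
            return true;  /* send now; may "jump the queue" in-network */
        }
        return false;     /* hold until the bucket refills */
    }

    int main(void)
    {
        /* Highest sensitivity level: strict low rate, tiny burst. */
        qjump_bucket latency_level = { 0.1, 1500, 1500, 0 };
        printf("%d\n", admit(&latency_level, 10, 1500));  /* 1: within budget */
        printf("%d\n", admit(&latency_level, 20, 1500));  /* 0: must wait */
        return 0;
    }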

176 citations


Cited by
01 Jan 1978
TL;DR: This ebook is the first authorized digital version of Kernighan and Ritchie's 1988 classic, The C Programming Language (2nd Ed.), and is a "must-have" reference for every serious programmer's digital library.
Abstract: This ebook is the first authorized digital version of Kernighan and Ritchie's 1988 classic, The C Programming Language (2nd Ed.). One of the best-selling programming books published in the last fifty years, "K&R" has been called everything from the "bible" to "a landmark in computer science" and it has influenced generations of programmers. Available now for all leading ebook platforms, this concise and beautifully written text is a "must-have" reference for every serious programmer's digital library. As modestly described by the authors in the Preface to the First Edition, this "is not an introductory programming manual; it assumes some familiarity with basic programming concepts like variables, assignment statements, loops, and functions. Nonetheless, a novice programmer should be able to read along and pick up the language, although access to a more knowledgeable colleague will help."

2,120 citations

Proceedings ArticleDOI
27 Oct 2003
TL;DR: A new, general approach for safeguarding systems against any type of code-injection attack, by creating process-specific randomized instruction sets of the system executing potentially vulnerable software that can serve as a low-overhead protection mechanism, and can easily complement other mechanisms.
Abstract: We describe a new, general approach for safeguarding systems against any type of code-injection attack. We apply Kerckhoff's principle, by creating process-specific randomized instruction sets (e.g., machine instructions) of the system executing potentially vulnerable software. An attacker who does not know the key to the randomization algorithm will inject code that is invalid for that randomized processor, causing a runtime exception. To determine the difficulty of integrating support for the proposed mechanism in the operating system, we modified the Linux kernel, the GNU binutils tools, and the bochs-x86 emulator. Although the performance penalty is significant, our prototype demonstrates the feasibility of the approach, and should be directly usable on a suitably modified processor (e.g., the Transmeta Crusoe). Our approach is equally applicable against code-injection attacks in scripting and interpreted languages, e.g., web-based SQL injection. We demonstrate this by modifying the Perl interpreter to permit randomized script execution. The performance penalty in this case is minimal. Where our proposed approach is feasible (i.e., in an emulated environment, in the presence of programmable or specialized hardware, or in interpreted languages), it can serve as a low-overhead protection mechanism, and can easily complement other mechanisms.
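
The core idea reduces to a keyed transformation over code bytes. This C sketch shows the XOR variant often used to explain instruction-set randomization; real systems apply it at load and fetch time inside an emulator or processor, and the bytes and key below are made-up examples.

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    /* The same routine randomizes (at load) and de-randomizes (at fetch). */
    static void xor_with_key(uint8_t *code, size_t len, uint8_t key)
    {
        for (size_t i = 0; i < len; i++)
            code[i] ^= key;
    }

    int main(void)
    {
        uint8_t program[] = { 0x90, 0x90, 0xC3 };   /* legitimate code */
        uint8_t key = 0x5A;                         /* per-process secret */

        xor_with_key(program, sizeof program, key); /* stored randomized */
        xor_with_key(program, sizeof program, key); /* fetch de-randomizes */

        /* Injected bytes never saw the key, so the fetch-time XOR turns
         * them into garbage and the process faults at runtime. */
        uint8_t injected[] = { 0xCD, 0x80 };
        xor_with_key(injected, sizeof injected, key);
        printf("injected decodes to: %02x %02x\n", injected[0], injected[1]);
        return 0;
    }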

779 citations

Proceedings Article
Chris Wright, Crispin Cowan, Stephen Smalley, James Morris, Greg Kroah-Hartman
05 Aug 2002
TL;DR: The design and implementation of LSM are presented and the challenges in providing a truly general solution that minimally impacts the Linux kernel are discussed.
Abstract: The access control mechanisms of existing mainstream operating systems are inadequate to provide strong system security. Enhanced access control mechanisms have failed to win acceptance into mainstream operating systems due in part to a lack of consensus within the security community on the right solution. Since general-purpose operating systems must satisfy a wide range of user requirements, any access control mechanism integrated into such a system must be capable of supporting many different access control models. The Linux Security Modules (LSM) project has developed a lightweight, general purpose, access control framework for the mainstream Linux kernel that enables many different access control models to be implemented as loadable kernel modules. A number of existing enhanced access control implementations, including Linux capabilities, Security-Enhanced Linux (SELinux), and Domain and Type Enforcement (DTE), have already been adapted to use the LSM framework. This paper presents the design and implementation of LSM and discusses the challenges in providing a truly general solution that minimally impacts the Linux kernel.
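
For flavor, here is a minimal hook sketch in the style of recent (~5.x) kernels. The registration interface has changed since the 2002 paper (modern LSMs are compiled in and initialized at boot rather than loaded as modules), and hook signatures vary across kernel versions, so treat this as illustrative rather than drop-in.

    #include <linux/init.h>
    #include <linux/kernel.h>
    #include <linux/lsm_hooks.h>
    #include <linux/fs.h>

    /* A policy decision point: called on every open(). Returning 0
     * allows the access; a real module would consult its model
     * (SELinux labels, DTE domains, ...) and might return -EPERM. */
    static int toy_file_open(struct file *file)
    {
        return 0;
    }

    static struct security_hook_list toy_hooks[] __lsm_ro_after_init = {
        LSM_HOOK_INIT(file_open, toy_file_open),
    };

    static int __init toy_init(void)
    {
        security_add_hooks(toy_hooks, ARRAY_SIZE(toy_hooks), "toy");
        return 0;
    }

    DEFINE_LSM(toy) = {
        .name = "toy",
        .init = toy_init,
    };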

471 citations

Patent
21 Mar 2001
TL;DR: A web agent is a component (usually software, but can be hardware or a combination of hardware and software) that plugs into (or otherwise integrates with) a web server (or equivalent) in order to participate in providing access services.
Abstract: An access system provides identity management and/or access management services for a network. An application program interface for the access system enables an application without a web agent front end to read and use contents of an existing encrypted cookie to bypass authentication and proceed to authorization. A web agent is a component (usually software, but can be hardware or a combination of hardware and software) that plugs into (or otherwise integrates with) a web server (or equivalent) in order to participate in providing access services.

464 citations

Journal ArticleDOI
28 Jul 2012
TL;DR: This work systematically reviews how researchers conducted Wizard of Oz experiments published in the primary HRI publication venues from 2001–2011 and proposes new reporting guidelines to aid future research.
Abstract: Many researchers use Wizard of Oz (WoZ) as an experimental technique, but there are methodological concerns over its use, and no comprehensive criteria on how to best employ it. We systematically review 54 WoZ experiments published in the primary HRI publication venues from 2001–2011. Using criteria proposed by Fraser and Gilbert (1991), Green et al. (2004), Steinfeld et al. (2009), and Kelley (1984), we analyzed how researchers conducted HRI WoZ experiments. Researchers mainly used WoZ for verbal (72.2%) and non-verbal (48.1%) processing. Most constrained wizard production (90.7%), but few constrained wizard recognition (11%). Few reported measuring wizard error (3.7%), and few reported pre-experiment wizard training (5.4%). Few reported using WoZ in an iterative manner (24.1%). Based on these results we propose new reporting guidelines to aid future research.

456 citations