
Showing papers by "Srinivas Devadas published in 2002"


Proceedings ArticleDOI
18 Nov 2002
TL;DR: It is argued that a complex integrated circuit can be viewed as a silicon PUF, and a technique to identify and authenticate individual integrated circuits (ICs) is described.
Abstract: We introduce the notion of a Physical Random Function (PUF). We argue that a complex integrated circuit can be viewed as a silicon PUF and describe a technique to identify and authenticate individual integrated circuits (ICs). We describe several possible circuit realizations of different PUFs. These circuits have been implemented in commodity Field Programmable Gate Arrays (FPGAs). We present experiments which indicate that reliable authentication of individual FPGAs can be performed even in the presence of significant environmental variations. We describe how secure smart cards can be built, and also briefly describe how PUFs can be applied to licensing and certification applications.
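To make the identification idea concrete, here is a minimal sketch of challenge-response authentication with a PUF-bearing device. The PUF is mocked as a seeded pseudorandom function, and the names (mock_puf, enroll, authenticate) are illustrative rather than taken from the paper; a real scheme would also tolerate a few noisy response bits.

# Hedged sketch: the PUF is mocked; a real silicon PUF derives its behavior
# from manufacturing variation and its responses are slightly noisy.
import os
import random

def mock_puf(device_seed: bytes, challenge: bytes) -> bytes:
    # Stand-in for the physical circuit: deterministic per device,
    # unpredictable without it.
    rng = random.Random(device_seed + challenge)
    return bytes(rng.getrandbits(8) for _ in range(16))

def enroll(device_seed: bytes, num_pairs: int = 100):
    # A trusted party records challenge-response pairs (CRPs) while it
    # still holds the authentic IC.
    return [(c, mock_puf(device_seed, c))
            for c in (os.urandom(16) for _ in range(num_pairs))]

def authenticate(device_seed: bytes, crps) -> bool:
    # Later: pick an unused CRP, query the device, compare.
    challenge, expected = crps.pop()
    return mock_puf(device_seed, challenge) == expected

genuine = os.urandom(16)                     # stands in for manufacturing variation
crp_db = enroll(genuine)
print(authenticate(genuine, crp_db))         # True: genuine device
print(authenticate(os.urandom(16), crp_db))  # False: counterfeit device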

1,644 citations


Proceedings ArticleDOI
09 Dec 2002
TL;DR: Controlled physical random functions (CPUFs) are introduced which are PUFs that can only be accessed via an algorithm that is physically bound to the PUF in an inseparable way.
Abstract: A physical random function (PUF) is a random function that can only be evaluated with the help of a complex physical system. We introduce controlled physical random functions (CPUFs) which are PUFs that can only be accessed via an algorithm that is physically bound to the PUF in an inseparable way. CPUFs can be used to establish a shared secret between a physical device and a remote user. We present protocols that make this possible in a secure and flexible way, even in the case of multiple mutually mistrusting parties. Once established, the shared secret can be used to enable a wide range of applications. We describe certified execution, where a certificate is produced that proves that a specific computation was carried out on a specific processor. Certified execution has many benefits, including protection against malicious nodes in distributed computation networks. We also briefly discuss a software licensing application.
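A minimal sketch of the control idea, under the assumption that the control logic exposes only a hash of the raw PUF response and that the derived value serves as a MAC key for certified execution; the function names and the enrollment step are illustrative, not the paper's exact protocol.

import hashlib
import hmac

def puf(device_secret: bytes, challenge: bytes) -> bytes:
    # Stand-in for the physical random function.
    return hashlib.sha256(device_secret + challenge).digest()

def get_secret(device_secret: bytes, challenge: bytes) -> bytes:
    # Control logic bound to the PUF: only a hash of the response leaves the device.
    return hashlib.sha256(challenge + puf(device_secret, challenge)).digest()

def certified_execution(device_secret: bytes, challenge: bytes, x: int):
    # Run a computation and MAC the result with the challenge-derived secret.
    shared = get_secret(device_secret, challenge)
    result = str(x * x).encode()                      # the "computation"
    tag = hmac.new(shared, result, hashlib.sha256).digest()
    return result, tag

def user_verify(shared: bytes, result: bytes, tag: bytes) -> bool:
    # The user obtained `shared` for this challenge earlier, in a trusted
    # setting; the MAC proves the result came from this device.
    return hmac.compare_digest(tag, hmac.new(shared, result, hashlib.sha256).digest())

dev = b"manufacturing variation"
challenge = b"user's private challenge"
shared_at_enrollment = get_secret(dev, challenge)     # collected while trusted
result, tag = certified_execution(dev, challenge, 7)
print(result, user_verify(shared_at_enrollment, result, tag))   # b'49' True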

430 citations


Proceedings ArticleDOI
02 Feb 2002
TL;DR: A scheme is described that accurately estimates the isolated miss-rate of each process as a function of cache size under the standard LRU replacement policy; this information can be used to schedule jobs or to partition the cache to minimize the overall miss-rate.
Abstract: We propose a low overhead, online memory monitoring scheme utilizing a set of novel hardware counters. The counters indicate the marginal gain in cache hits as the size of the cache is increased, which gives the cache miss-rate as a function of cache size. Using the counters, we describe a scheme that enables an accurate estimate of the isolated miss-rates of each process as a function of cache size under the standard LRU replacement policy. This information can be used to schedule jobs or to partition the cache to minimize the overall miss-rate. The data collected by the monitors can also be used by an analytical model of cache and memory behavior to produce a more accurate overall miss-rate for the collection of processes sharing a cache in both time and space. This overall miss-rate can be used to improve scheduling and partitioning schemes.
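The counters can be understood through the classic LRU stack-distance observation: a hit at depth d would also hit in any larger cache, so counting hits per depth yields the miss-rate as a function of cache size in one pass. A minimal software sketch of that idea follows; the paper implements the counters in hardware, and the names here are illustrative.

def marginal_gain_counters(trace, max_size):
    # counters[d] counts hits at LRU stack depth d (0 = most recently used).
    counters = [0] * max_size
    stack = []                       # index 0 = most recently used
    accesses = 0
    for addr in trace:
        accesses += 1
        if addr in stack:
            depth = stack.index(addr)
            if depth < max_size:
                counters[depth] += 1
            stack.remove(addr)
        stack.insert(0, addr)
    return counters, accesses

def miss_rate(counters, accesses, cache_size):
    # Miss-rate of a fully-associative LRU cache holding `cache_size` blocks.
    return 1.0 - sum(counters[:cache_size]) / accesses

trace = ["a", "b", "c", "a", "b", "d", "a"]
ctr, n = marginal_gain_counters(trace, max_size=4)
print([round(miss_rate(ctr, n, k), 2) for k in range(1, 5)])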

325 citations


Book ChapterDOI
26 Aug 2002
TL;DR: In this paper, the authentication problem is reduced to a simpler problem, in which the user carries a trusted device with her, and a description is given of two camera-based devices that are being developed.
Abstract: The use of computers in public places is increasingly common in everyday life. In using one of these computers, a user is trusting it to correctly carry out her orders. For many transactions, particularly banking operations, blind trust in a public terminal will not satisfy most users. In this paper the aim is therefore to provide the user with authenticated communication between herself and a remote trusted computer, via the untrusted computer. After defining the authentication problem that is to be solved, this paper reduces it to a simpler problem. Solutions to the simpler problem are explored in which the user carries a trusted device with her. Finally, a description is given of two camera-based devices that are being developed.
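As a rough illustration of the reduced problem, the sketch below assumes the remote trusted computer and the user's hand-held device share a key, so anything the untrusted terminal relays or displays can be checked by the device. This is only one plausible instantiation, not the paper's camera-based design; all names are illustrative.

import hmac, hashlib

def remote_send(shared_key: bytes, message: bytes):
    # The remote trusted computer MACs its message; the untrusted terminal
    # merely displays it (e.g., in a form the device's camera can read).
    tag = hmac.new(shared_key, message, hashlib.sha256).digest()
    return message, tag

def trusted_device_check(shared_key: bytes, message: bytes, tag: bytes) -> bool:
    # The hand-held device verifies: the terminal can relay but not forge.
    expected = hmac.new(shared_key, message, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected)

key = b"key shared out of band"
msg, tag = remote_send(key, b"Transfer $100 to account 42?")
print(trusted_device_check(key, msg, tag))                          # True
print(trusted_device_check(key, b"Transfer $9999 to account 13?", tag))  # False: tampered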

87 citations


Proceedings ArticleDOI
11 Mar 2002
TL;DR: A resource discovery and communication system designed for security and privacy that allows for secure, yet efficient, access to networked, mobile devices and a quantitative evaluation of this system using various metrics is presented.
Abstract: We describe a resource discovery and communication system designed for security and privacy. All objects in the system, e.g., appliances, wearable gadgets, software agents, and users have associated trusted software proxies that either run on the appliance hardware or on a trusted computer. We describe how security and privacy are enforced using two separate protocols: a protocol for secure device-to-proxy communication, and a protocol for secure proxy-to-proxy communication. Using two separate protocols allows us to run a computationally inexpensive protocol on impoverished devices, and a sophisticated protocol for resource authentication and communication on more powerful devices. We detail the device-to-proxy protocol for lightweight wireless devices and the proxy-to-proxy protocol which is based on SPKI/SDSI (Simple Public Key Infrastructure / Simple Distributed Security Infrastructure). A prototype system has been constructed, which allows for secure, yet efficient, access to networked, mobile devices. We present a quantitative evaluation of this system using various metrics.
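A toy sketch of the two-tier split described above: a cheap pre-shared-key MAC between an impoverished device and its proxy, and a certificate-style delegation check between proxies standing in for SPKI/SDSI. All names, message formats, and the delegation model are illustrative.

import hmac, hashlib, os

# Device-to-proxy: pre-shared key, nonce + MAC, cheap enough for a small device.
def device_send(psk: bytes, command: bytes):
    nonce = os.urandom(8)
    tag = hmac.new(psk, nonce + command, hashlib.sha256).digest()
    return nonce, command, tag

def proxy_accept(psk: bytes, nonce: bytes, command: bytes, tag: bytes) -> bool:
    return hmac.compare_digest(
        tag, hmac.new(psk, nonce + command, hashlib.sha256).digest())

# Proxy-to-proxy: SPKI/SDSI-style authorization, mocked here as a chain of
# (issuer, subject) delegation certificates rooted at the resource owner.
def authorized(owner: str, requester: str, cert_chain) -> bool:
    current = owner
    for issuer, subject in cert_chain:
        if issuer != current:
            return False
        current = subject
    return current == requester

psk = os.urandom(16)
nonce, cmd, tag = device_send(psk, b"lights on")
print(proxy_accept(psk, nonce, cmd, tag))                                   # True
print(authorized("alice", "carol", [("alice", "bob"), ("bob", "carol")]))   # True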

77 citations


01 Jan 2002
TL;DR: For most benchmarks, the performance overhead of authentication using the integrated Merkle tree/caching scheme is less than 25%, whereas the overhead for a naive scheme can be as large as 10×.
Abstract: We describe a hardware scheme to authenticate all or a part of untrusted external memory using trusted on-chip storage. Our scheme uses Merkle trees and caches to efficiently authenticate memory. Proper placement of Merkle tree checking and generation is critical to ensure good performance. Naive schemes where the Merkle tree machinery is placed between caches can result in a large increase in memory bandwidth usage. We integrate the Merkle tree machinery with one of the cache levels to significantly reduce memory bandwidth requirements. We present an evaluation of the area and performance costs of various schemes using simulation. For most benchmarks, the performance overhead of authentication using our integrated Merkle tree/caching scheme is less than 25%, whereas the overhead of authentication for a naive scheme can be as large as 10×. We explore tradeoffs between external memory overhead and processor performance.
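For readers unfamiliar with the mechanism, here is a minimal software sketch of Merkle-tree memory authentication against a trusted root; the paper's contribution, integrating the tree machinery with an on-chip cache to cut memory bandwidth, is deliberately omitted, and the names are illustrative.

import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(blocks):
    # Returns a list of levels; levels[0] = leaf hashes, levels[-1] = [root].
    level = [h(b) for b in blocks]
    levels = [level]
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def verify_block(block, index, levels, trusted_root) -> bool:
    # Recompute hashes along the path from one leaf up to the trusted root.
    node = h(block)
    for level in levels[:-1]:
        sibling = index ^ 1
        node = h(node + level[sibling]) if index % 2 == 0 else h(level[sibling] + node)
        index //= 2
    return node == trusted_root

blocks = [b"block0", b"block1", b"block2", b"block3"]   # power-of-two count assumed
levels = build_tree(blocks)
root = levels[-1][0]                                    # kept in trusted on-chip storage
print(verify_block(b"block0", 0, levels, root))         # True
print(verify_block(b"tampered", 0, levels, root))       # False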

50 citations


Book ChapterDOI
01 Jan 2002
TL;DR: This paper presents the first wave of a variety of new code-optimization approaches aimed at supplying the highest code quality possible in the coming generation of integrated circuits.
Abstract: The emergence of integrated circuits in which both the program-ROM and the processor are integrated on a single die initiates a new era of problems for programming language compilers. In such a micro-architecture, code performance, and particularly code density, gain an unprecedented level of importance and new code-optimization algorithms will be required to supply the required code quality. This paper presents the first wave of a variety of new code-optimization approaches aimed at supplying the highest code quality possible.

19 citations


01 Jan 2002
TL;DR: POWFs were introduced in [1], where they are implemented by shining a mobile laser beam through a nonhomogeneous medium and observing the resulting speckle pattern, and were used to make unclonable ID cards.
Abstract: POWFs (physical one-way functions) were introduced in [1], where they are implemented by shining a mobile laser beam through a nonhomogeneous medium and observing the resulting speckle pattern. They were used to make unclonable ID cards. Indeed, an important characteristic of POWFs is that when it is difficult to reproduce the physical system, or to characterize it precisely enough to simulate it, an unclonable system results.

13 citations


Journal ArticleDOI
TL;DR: Directed search methods and observability-based code coverage metric (OCCOM) computation are integrated into an algorithm for generating test vectors under OCCOM for sequential HDL models for design validation and verification.
Abstract: Design validation and verification is the process of ensuring correctness of a design described at different levels of abstraction during the design process. Design validation is the main bottleneck in improving design turnaround time. Currently, simulation is the primary methodology for validation of the first description of a design. In this paper we integrate directed search methods and observability-based code coverage metric (OCCOM) computation into an algorithm for generating test vectors under OCCOM for sequential HDL models. A prototype system for design validation under OCCOM has been built. The system uses repeated coverage computation to minimize the number of vectors generated. Experimental results using the test vector generation system are presented.
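A greatly simplified sketch of the outer loop of coverage-directed vector generation, with a mocked coverage function standing in for OCCOM computation on the HDL model; the paper's directed search is more sophisticated than this greedy candidate selection, and all names are illustrative.

import random

def covered_tags(vector):
    # Mock: pretend each bit position set to 1 observes one OCCOM tag.
    return {i for i, bit in enumerate(vector) if bit}

def generate_tests(width=8, candidates_per_step=16, max_vectors=20, seed=0):
    rng = random.Random(seed)
    covered, tests = set(), []
    for _ in range(max_vectors):
        pool = [[rng.randint(0, 1) for _ in range(width)]
                for _ in range(candidates_per_step)]
        best = max(pool, key=lambda v: len(covered_tags(v) - covered))
        gain = covered_tags(best) - covered
        if not gain:
            break                      # coverage saturated; stop generating vectors
        covered |= gain
        tests.append(best)
    return tests, covered

tests, covered = generate_tests()
print(len(tests), "vectors cover", len(covered), "tags")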

12 citations


Proceedings Article
19 Jun 2002
TL;DR: This volume contains the proceedings of the Joint Languages, Compilers, and Tools for Embedded Systems (LCTES'02) and Software and Compilers for Embedded Systems (SCOPES'02) Conference, which took place in Berlin from June 19th to the 21st.
Abstract: This volume contains the proceedings of the Joint Languages, Compilers, and Tools for Embedded Systems (LCTES'02) and Software and Compilers for Embedded Systems (SCOPES'02) Conference. LCTES/SCOPES'02 took place in Berlin from June 19th to the 21st. For the first time, LCTES and SCOPES were held together, resulting in stimulating contacts between researchers predominantly having a background in programming languages and electronic design automation, respectively. Also, for the very first time, LCTES was held as a conference and not as a workshop. LCTES/SCOPES'02 received a total of 73 papers. During a comprehensive review process a total of 234 reviews were submitted. Finally, 25 papers were accepted and included in the resulting high-quality program. Accepted papers covered the following areas: compilers including low-energy compilation, synthesis, design space exploration, debugging and validation, code generation and register allocation, processor modeling, hardware/software codesign, and real-time scheduling. Accepted papers were grouped into 10 sessions. In addition, two invited keynotes given by Dr. Philippe Magarshack of STMicroelectronics and Prof. Gerhard Fettweis of Dresden University emphasized industrial viewpoints and perspectives.

5 citations


01 Jan 2002
TL;DR: It is argued that delay-based authentication of the key card is secure because it is hard to create an accurate timing model for the circuit used in the key card, and that key cards built in this fashion are resistant to many known kinds of attacks.
Abstract: We describe a technique to reliably identify individual integrated circuits (ICs), based on a prior delay characterization of the IC. We describe a circuit architecture for a key card for which authentication is delay based, rather than based on a digital secret key. We argue that delay-based authentication of the key card is secure because it is hard to create an accurate timing model for the circuit used in the key card. We also argue that key cards built in this fashion are resistant to many known kinds of attacks. Since the delay of ICs can vary with environmental conditions such as temperature, we develop compensation schemes and show experimentally that reliable authentication can be performed in the presence of significant environmental variations.
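One common compensation idea consistent with the abstract is to authenticate on relative rather than absolute delays, so that a uniform environmental slowdown largely cancels. The sketch below mocks the delay measurement; it is illustrative only, not the paper's circuit or compensation scheme.

import random

def measure_delays(device_id: int, challenge: int, env_factor: float):
    # Mock: two challenge-selected paths whose delays scale together with
    # temperature/voltage (env_factor). A real key card measures a circuit.
    rng = random.Random(device_id * 100003 + challenge)
    return rng.uniform(0.9, 1.1) * env_factor, rng.uniform(0.9, 1.1) * env_factor

def response_bit(device_id: int, challenge: int, env_factor: float) -> int:
    d1, d2 = measure_delays(device_id, challenge, env_factor)
    return int(d1 > d2)         # relative comparison: a common scaling cancels

device = 1234
enrolled = [response_bit(device, c, env_factor=1.0) for c in range(64)]   # cool chip
reread   = [response_bit(device, c, env_factor=1.3) for c in range(64)]   # hot chip
print(enrolled == reread)       # True: sign-of-difference bits stay stable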

01 Jan 2002
TL;DR: A certificate-based authorization step is introduced during the resolution of an INS request so that INS knows the identity and privileges of a requestor, and real-time maintenance of access control lists (ACLs) is implemented in the INS name resolvers to give INS knowledge of how each resource should be protected.
Abstract: Approach: Our approach to integrating access control with INS is based on the assumption that the interface between a resource discovery system and security infrastructure should not be hard. It is inefficient if a user (requestor) has to repeatedly iterate through lists of resources that he is prohibited from using while searching for the best accessible resource. One only has to consider a scenario where a user is in an environment with several resources that are inaccessible to him to see how scalability becomes a major issue when adding security to resource discovery. If the resource discovery system has no knowledge about which resources the requestor can access, it could take tremendous computational effort to find an accessible resource. A better approach would be to give the resource discovery system knowledge about the access control lists (ACLs) that protect each of the resources and the access-control groups and capabilities of the requestor. A resource discovery system already knows about the service and performance characteristics of each resource it represents; an access control list is nothing more than an additional characteristic that defines the resource. The idea here is not to leave security entirely up to the resource discovery system, but, instead, to provide the discovery system with "hints" regarding the accessibility of resources. Security is still enforced by an end-to-end proxy protocol (this protocol is described in our previous work [2]), but the sharing of access information enables the resource discovery system to find resources that are guaranteed to be accessible. Figure 1 shows a system level diagram of our entire security infrastructure. This summary focuses on integrating access control into INS. INS provides users with a layer of abstraction so that applications do not need to know the availability or exact name of the resource for which they are looking. We extend INS to provide access-controlled resource discovery in two main ways (a sketch follows below):
• Dynamic maintenance of ACLs in INS. We implement real-time maintenance of access control lists (ACLs) in the INS name resolvers in order to give INS knowledge about how each resource should be protected.
• User authorization. We introduce a certificate-based authorization step during the resolution of an INS request in order for INS to know the identity and privileges of a requestor.
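A minimal sketch of the two extensions, assuming a resolver that stores an ACL alongside each resource record and a mocked certificate check; in the described system, authorization is certificate based (SPKI/SDSI-style) and security is still enforced end-to-end by the proxy protocol. All names and data structures are illustrative.

RESOURCES = {
    "printer/floor1": {"addr": "10.0.0.5", "acl": {"staff", "admin"}},
    "camera/lobby":   {"addr": "10.0.0.9", "acl": {"admin"}},
}

def verify_certificate(cert):
    # Mock authorization step: a real resolver would verify a signed
    # certificate chain and extract the requester's groups from it.
    return set(cert.get("groups", [])) if cert.get("valid") else set()

def resolve(query_prefix, cert):
    # Name resolution filtered by the ACL stored with each resource record,
    # so only resources the requester can actually use are returned.
    groups = verify_certificate(cert)
    return [
        (name, rec["addr"])
        for name, rec in RESOURCES.items()
        if name.startswith(query_prefix) and rec["acl"] & groups
    ]

print(resolve("printer", {"valid": True, "groups": ["staff"]}))  # accessible hit
print(resolve("camera",  {"valid": True, "groups": ["staff"]}))  # filtered out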