
Srinivas Devadas

Researcher at Massachusetts Institute of Technology

Publications: 498
Citations: 35003

Srinivas Devadas is an academic researcher at the Massachusetts Institute of Technology. He has contributed to research on topics including sequential logic and combinational logic. He has an h-index of 88 and has co-authored 480 publications receiving 31897 citations. His previous affiliations include the University of California, Berkeley and Cornell University.

Papers
Proceedings Article (DOI)

Static virtual channel allocation in oblivious routing

TL;DR: Presents methods that statically allocate virtual channels to flows at each link under oblivious routing, ensuring deadlock freedom for arbitrary minimal routes when two or more virtual channels are available.
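
The sketch below illustrates the deadlock-freedom reasoning behind a static virtual channel allocation: treat each (link, virtual channel) pair as a node of the channel-dependency graph, add an edge for every consecutive pair of hops in a flow's fixed route, and accept an assignment only if that graph is acyclic. This is the standard Dally/Seitz acyclicity condition, used here as a checker; the routes, link names, and manual reassignment are illustrative and are not the paper's allocation algorithm.

```python
from collections import defaultdict

def dependency_graph(routes, vc_assignment):
    """routes: flow -> list of links along its oblivious route;
    vc_assignment: (flow, link) -> statically allocated virtual channel."""
    edges = defaultdict(set)
    for flow, links in routes.items():
        for a, b in zip(links, links[1:]):
            edges[(a, vc_assignment[(flow, a)])].add((b, vc_assignment[(flow, b)]))
    return edges

def is_acyclic(edges):
    """DFS cycle check over the channel-dependency graph."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = defaultdict(int)

    def visit(node):
        color[node] = GRAY
        for nxt in edges.get(node, ()):
            if color[nxt] == GRAY or (color[nxt] == WHITE and not visit(nxt)):
                return False
        color[node] = BLACK
        return True

    return all(color[n] == BLACK or visit(n) for n in list(edges))

# Three flows whose routes chase each other around a cycle of links:
# on a single virtual channel the dependency graph is cyclic (deadlock risk);
# moving one flow to a second virtual channel breaks the cycle.
routes = {"f1": ["AB", "BC"], "f2": ["BC", "CA"], "f3": ["CA", "AB"]}
vcs = {(f, l): 0 for f, ls in routes.items() for l in ls}
assert not is_acyclic(dependency_graph(routes, vcs))
vcs[("f3", "CA")] = vcs[("f3", "AB")] = 1
assert is_acyclic(dependency_graph(routes, vcs))
```
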
Proceedings Article (DOI)

A Low-Latency, Low-Area Hardware Oblivious RAM Controller

TL;DR: Tiny ORAM, an Oblivious RAM prototype on FPGA, is the first hardware ORAM design to support arbitrary block sizes and the first to implement and report real numbers for the cost of symmetric encryption in hardware ORAM constructions.
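
Tiny ORAM itself is a hardware controller, but the tree-based access protocol such controllers build on can be sketched in software. Below is a minimal, unencrypted Path ORAM (a binary tree of buckets on the untrusted server, a position map and stash on the client), offered as an illustration of that protocol family rather than of the Tiny ORAM design; encryption, integrity checking, and position-map recursion are omitted, and all names and parameters are made up.

```python
import random

class PathORAM:
    """Minimal, unencrypted Path ORAM sketch (illustrative only)."""

    def __init__(self, depth=4, bucket_size=4):
        self.depth = depth
        self.bucket_size = bucket_size
        self.tree = [[] for _ in range(2 ** (depth + 1) - 1)]   # server buckets
        self.position = {}                                      # block id -> leaf
        self.stash = {}                                         # client-side stash

    def _path(self, leaf):
        """Bucket indices from the root down to the given leaf."""
        node, path = leaf + 2 ** self.depth - 1, []
        while True:
            path.append(node)
            if node == 0:
                return path[::-1]
            node = (node - 1) // 2

    def access(self, block_id, new_value=None):
        """Read (and optionally update) a block. The server only ever sees a
        whole path being read and rewritten, independent of which block it is."""
        leaf = self.position.get(block_id, random.randrange(2 ** self.depth))
        self.position[block_id] = random.randrange(2 ** self.depth)  # remap block

        for node in self._path(leaf):            # read the path into the stash
            for bid, val in self.tree[node]:
                self.stash[bid] = val
            self.tree[node] = []

        value = self.stash.get(block_id)
        if new_value is not None:
            self.stash[block_id] = new_value

        for node in reversed(self._path(leaf)):  # write back, deepest bucket first
            bucket = []
            for bid in list(self.stash):
                if len(bucket) < self.bucket_size and node in self._path(self.position[bid]):
                    bucket.append((bid, self.stash.pop(bid)))
            self.tree[node] = bucket
        return value

oram = PathORAM()
oram.access("x", b"secret")
assert oram.access("x") == b"secret"
```
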
Proceedings Article (DOI)

Offline untrusted storage with immediate detection of forking and replay attacks

TL;DR: Introduces a log-based scheme in which the TTD is used to securely implement a large number of virtual monotonic counters, which can then be used to time-stamp data and provide tamper-evident storage.
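
A hedged sketch of the core idea: a single physical monotonic counter on the trusted device, combined with an append-only, hash-chained log, can simulate many virtual monotonic counters. The class and method names below are illustrative and not the paper's API; a real scheme would also have the trusted device sign each log entry so an untrusted server cannot fork or replay the history.

```python
import hashlib

class VirtualCounterLog:
    """One trusted monotonic counter plus a hash-chained log simulating many
    virtual monotonic counters (illustrative sketch; signatures omitted)."""

    def __init__(self):
        self.global_counter = 0          # the single trusted monotonic counter
        self.log = []                    # append-only log of increments
        self.head = b"\x00" * 32         # hash-chain head over the log

    def increment(self, counter_id):
        """Advance one virtual counter; return its value and the chain head,
        which acts as a commitment to the whole increment history."""
        self.global_counter += 1
        entry = f"{self.global_counter}:{counter_id}".encode()
        self.head = hashlib.sha256(self.head + entry).digest()
        self.log.append(entry)
        return self.value(counter_id), self.head

    def value(self, counter_id):
        """A virtual counter's value is the number of log entries naming it."""
        return sum(1 for e in self.log
                   if e.split(b":", 1)[1] == str(counter_id).encode())

log = VirtualCounterLog()
log.increment("file_a")
log.increment("file_b")
v, head = log.increment("file_a")
assert v == 2
```
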
Journal Article (DOI)

Computation of floating mode delay in combinational circuits: practice and implementation

TL;DR: The authors use a recently developed single-vector condition, known to be necessary and sufficient for a path to be responsible for the delay of a circuit (i.e., true) under the floating delay model, to develop an efficient and correct algorithm for computing floating mode delay.
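
For context only, the sketch below computes the simple topological (worst-case) delay of a combinational circuit by a longest-path pass over the gate DAG. Floating mode analysis tightens this bound by discarding paths that no input vector can sensitize; that sensitization condition is the paper's subject and is not reproduced here. Gate names and delays are made up.

```python
def topological_delay(gates):
    """gates: name -> (gate_delay, [fanin names]); primary inputs have no fanins.
    Returns the longest structural path delay, an upper bound on the true
    (floating mode) delay."""
    arrival = {}

    def at(g):
        if g not in arrival:
            delay, fanins = gates[g]
            arrival[g] = delay + max((at(f) for f in fanins), default=0)
        return arrival[g]

    return max(at(g) for g in gates)

# Toy two-level circuit: primary inputs a and b feed g1, which feeds g2.
circuit = {"a": (0, []), "b": (0, []),
           "g1": (2, ["a", "b"]), "g2": (3, ["g1", "a"])}
assert topological_delay(circuit) == 5
```
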

Caches and Merkle Trees for Efficient Memory Authentication

TL;DR: For most benchmarks, the performance overhead of authentication using the integrated Merkle tree/caching scheme is less than 25%, whereas the overhead for a naive scheme can be as large as 10×.
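
A minimal sketch of the underlying structure: a binary Merkle tree over fixed-size memory blocks, where only the root hash needs trusted on-chip storage and every read is verified against it. The integrated caching idea from the paper is only noted in a comment, not implemented; block contents and sizes here are illustrative.

```python
import hashlib

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

class MerkleMemory:
    """Binary hash tree over memory blocks; only self.root is trusted.
    (A real design would cache verified tree nodes on-chip so a read can
    stop at the first cached ancestor instead of walking to the root.)"""

    def __init__(self, blocks):                 # len(blocks): a power of two
        self.blocks = list(blocks)              # untrusted data blocks
        self.tree = self._build()               # hash levels, leaves first
        self.root = self.tree[-1][0]            # trusted root hash

    def _build(self):
        level = [_h(b) for b in self.blocks]
        tree = [level]
        while len(level) > 1:
            level = [_h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
            tree.append(level)
        return tree

    def read(self, i):
        """Return block i after verifying its hash path up to the root."""
        node, idx = _h(self.blocks[i]), i
        for level in self.tree[:-1]:
            sibling = level[idx ^ 1]            # the other child at this level
            node = _h(node + sibling) if idx % 2 == 0 else _h(sibling + node)
            idx //= 2
        assert node == self.root, "memory authentication failure"
        return self.blocks[i]

mem = MerkleMemory([b"blk0", b"blk1", b"blk2", b"blk3"])
assert mem.read(2) == b"blk2"
```
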