Author

Shidhartha Das

Bio: Shidhartha Das is an academic researcher from the University of Michigan. His research focuses on dynamic voltage scaling and error detection and correction. He has an h-index of 19 and has co-authored 71 publications receiving 3,982 citations.


Papers
Proceedings ArticleDOI
03 Dec 2003
TL;DR: A solution that allows the circuit to operate even below the ‘critical’ voltage, so that no margins are required and more energy can be saved.
Abstract: With increasing clock frequencies and silicon integration, power-aware computing has become a critical concern in the design of embedded processors and systems-on-chip. One of the more effective and widely used methods for power-aware computing is dynamic voltage scaling (DVS). In order to obtain the maximum power savings from DVS, it is essential to scale the supply voltage as low as possible while ensuring correct operation of the processor. The critical voltage is chosen such that under a worst-case scenario of process and environmental variations, the processor always operates correctly. However, this approach leads to a very conservative supply voltage, since such a worst-case combination of different variabilities is very rare. In this paper, we propose a new approach to DVS, called Razor, based on dynamic detection and correction of circuit timing errors. The key idea of Razor is to tune the supply voltage by monitoring the error rate during circuit operation, thereby eliminating the need for voltage margins and exploiting the data dependence of circuit delay. A Razor flip-flop is introduced that double-samples pipeline stage values, once with a fast clock and again with a time-borrowing delayed clock. A metastability-tolerant comparator then validates latch values sampled with the fast clock. In the event of a timing error, a modified pipeline mispeculation recovery mechanism restores correct program state. A prototype Razor pipeline was designed and analyzed in a 0.18 µm technology. Razor energy overhead during normal operation is limited to 3.1%. Analyses of a full-custom multiplier and a SPICE-level Kogge-Stone adder model reveal that substantial energy savings are possible for these devices (up to 64.2%) with little impact on performance due to error recovery (less than 3%).
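The double-sampling mechanism is easiest to see in a small behavioral model. The sketch below is a minimal Python illustration of the idea, not the authors' circuit; the function name and its inputs are assumptions. The main flip-flop samples on the fast clock, the shadow latch samples on the delayed clock after the logic has settled, and a mismatch flags a timing error whose correct value is recovered from the shadow.

```python
# Minimal behavioral sketch of a Razor flip-flop (illustrative only;
# names and inputs are assumptions, not the paper's RTL).

def razor_cycle(value_at_fast_clock, value_at_delayed_clock):
    """Model one pipeline-stage boundary guarded by a Razor flip-flop.

    The main flip-flop samples the stage output on the fast clock edge;
    the shadow latch samples the same node on a time-borrowing delayed
    clock, by which point the combinational logic is guaranteed to have
    settled. A mismatch means the main flop captured a not-yet-settled
    value, i.e. a timing error.
    """
    main_ff = value_at_fast_clock            # may be wrong if the logic ran late
    shadow_latch = value_at_delayed_clock    # correct by construction
    error = (main_ff != shadow_latch)
    # On an error, the pipeline recovery mechanism restores the shadow value.
    committed = shadow_latch if error else main_ff
    return committed, error

# Example: the stage output settles to 1 only after the fast clock edge.
value, err = razor_cycle(value_at_fast_clock=0, value_at_delayed_clock=1)
assert err and value == 1
```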

1,137 citations

Journal ArticleDOI
TL;DR: This paper presents a design (RazorII) which implements a flip-flop with in situ detection and architectural correction of variation-induced delay errors and demonstrates SER tolerance on the RazorII processor through radiation experiments.
Abstract: Traditional adaptive methods that compensate for PVT variations need safety margins and cannot respond to rapid environmental changes. In this paper, we present a design (RazorII) which implements a flip-flop with in situ detection and architectural correction of variation-induced delay errors. Error detection is based on flagging spurious transitions in the state-holding latch node. The RazorII flip-flop naturally detects logic and register SER. We implement a 64-bit processor in 0.13 µm technology which uses RazorII for SER tolerance and dynamic supply adaptation. RazorII-based DVS allows elimination of safety margins and operation at the point of first failure of the processor. We tested and measured 32 different dies and obtained 33% energy savings over traditional DVS using RazorII for supply voltage control. We demonstrate SER tolerance on the RazorII processor through radiation experiments.
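The detection principle described here, flagging any transition on the state-holding latch node inside a window after the clock edge, can be sketched in a few lines. The Python fragment below is an illustrative model under assumed names, trace representation, and window length, not the published circuit. It also shows why one mechanism covers both error types: a late-arriving signal and an SER-induced flip both look like a spurious transition.

```python
# Illustrative sketch of RazorII-style transition detection. Assumed
# representation: a list of (time, value) samples of the latch node
# taken after the clock edge.

def spurious_transition(latch_node_trace, detection_window):
    """Return True if the state-holding latch node toggles inside the
    detection window that follows the clock edge.

    A toggle in the window indicates either a late-arriving data signal
    (a variation-induced delay error) or a particle strike flipping the
    node (a soft error); both raise the same error flag and trigger
    architectural correction.
    """
    last_value = latch_node_trace[0][1]
    for t, value in latch_node_trace[1:]:
        if t > detection_window:
            break
        if value != last_value:
            return True          # spurious transition: flag an error
        last_value = value
    return False

# A late arrival toggles the node 0.3 time units after the edge: error.
assert spurious_transition([(0.0, 0), (0.3, 1)], detection_window=0.5)
```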

614 citations

Journal ArticleDOI
TL;DR: A dynamic voltage scaling (DVS) technique called Razor is presented which incorporates an in situ error detection and correction mechanism to recover from timing errors and achieves an average energy savings of 50% over worst case operating conditions.
Abstract: In this paper, we present a dynamic voltage scaling (DVS) technique called Razor which incorporates an in situ error detection and correction mechanism to recover from timing errors. We also present the implementation details and silicon measurement results of a 64-bit processor fabricated in 0.18 µm technology that uses Razor for supply voltage control. Traditional DVS techniques require significant voltage safety margins to guarantee computational correctness at the worst-case combination of process, voltage, and temperature conditions, leading to a loss in energy efficiency. In Razor-based DVS, however, the supply voltage is automatically reduced to the point of first failure using the error detection and correction mechanism, thereby eliminating safety margins while still ensuring correct operation. In addition, the supply voltage can be intentionally scaled below the point of first failure of the processor to achieve an optimal tradeoff between energy savings from further voltage reduction and energy overhead from increased error detection and correction activity. We tested and measured savings due to Razor DVS for 33 different dies and obtained an average energy savings of 50% over worst-case operating conditions by scaling supply voltage to achieve a 0.1% targeted error rate, at a fixed frequency of 120 MHz.
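The voltage-control loop implied by the abstract, servoing the supply toward a fixed target error rate, amounts to a simple feedback controller. The sketch below is a hedged Python illustration; the step size, rail limits, and the `measured_error_rate` input are assumptions for illustration, not values from the silicon implementation.

```python
# One iteration of a Razor-style DVS control loop (illustrative sketch;
# constants are assumptions, not the paper's controller parameters).

TARGET_ERROR_RATE = 1e-3   # the paper's 0.1% targeted error rate
V_MIN, V_MAX = 0.9, 1.8    # assumed regulator rail limits (V)
STEP = 0.01                # assumed regulator step size (V)

def dvs_step(vdd, measured_error_rate):
    """Lower Vdd while errors are below target (recovery is still cheap);
    raise it when errors exceed target (recovery overhead now outweighs
    the savings from further voltage reduction)."""
    if measured_error_rate < TARGET_ERROR_RATE:
        vdd -= STEP   # margin remains: harvest more energy savings
    elif measured_error_rate > TARGET_ERROR_RATE:
        vdd += STEP   # too many errors: back off toward the sweet spot
    return min(max(vdd, V_MIN), V_MAX)
```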

413 citations

Proceedings ArticleDOI
01 Feb 2008
TL;DR: A Razor II approach is proposed that introduces two components: first, instead of performing both error detection and correction in the FF, Razor II performs only detection in the FF, while correction is performed through architectural replay.
Abstract: We take advantage of these findings and propose a Razor II approach that introduces two components. First, instead of performing both error detection and correction in the FF, Razor II performs only detection in the FF, while correction is performed through architectural replay.
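The division of labor described here, detection in the flip-flop and correction by re-executing from committed state, can be sketched as a simple replay loop. The Python model below is an assumption-laden illustration in which the `execute` and `error_flagged` hooks are hypothetical stand-ins, not the RazorII microarchitecture; it assumes a replayed instruction eventually completes without an error.

```python
# Minimal sketch of architectural replay on a detected timing error
# (illustrative; hooks and state representation are assumptions).

def run_with_replay(program, execute, error_flagged):
    """Execute instructions in order; when the flip-flops flag an error,
    discard the speculative result and re-issue the same instruction from
    the last committed state. Correction lives in the architecture, so
    the flip-flop only needs to detect."""
    pc = 0
    committed_state = {}
    while pc < len(program):
        # Run against a copy so speculative writes cannot corrupt
        # the committed state.
        speculative = execute(program[pc], dict(committed_state))
        if error_flagged():
            continue                   # replay: result is untrusted
        committed_state = speculative  # commit only error-free results
        pc += 1
    return committed_state
```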

396 citations

Journal ArticleDOI
TL;DR: This work presents a DVS approach that uses dynamic detection and correction of circuit timing errors to tune processor supply voltage and eliminate the need for voltage margins.
Abstract: Dynamic voltage scaling is one of the more effective and widely used methods for power-aware computing. We present a DVS approach that uses dynamic detection and correction of circuit timing errors to tune processor supply voltage and eliminate the need for voltage margins.

383 citations


Cited by
Journal ArticleDOI
Shekhar Borkar
TL;DR: This article discusses effects of variability in transistor performance and proposes microarchitecture, circuit, and testing research that focuses on designing with many unreliable components (transistors) to yield reliable system designs.
Abstract: As technology scales, variability in transistor performance continues to increase, making transistors less and less reliable. This creates several challenges in building reliable systems, from the unpredictability of delay to increasing leakage current. Finding solutions to these challenges requires a concerted effort from all the players in system design. This article discusses these effects and proposes microarchitecture, circuit, and testing research that focuses on designing with many unreliable components (transistors) to yield reliable system designs.

1,421 citations

Journal ArticleDOI
TL;DR: This Review provides an overview of memory devices and the key computational primitives enabled by these memory devices as well as their applications spanning scientific computing, signal processing, optimization, machine learning, deep learning and stochastic computing.
Abstract: Traditional von Neumann computing systems involve separate processing and memory units. However, data movement is costly in terms of time and energy, and this problem is aggravated by the recent explosive growth in highly data-centric applications related to artificial intelligence. This calls for a radical departure from traditional systems, and one such non-von Neumann computational approach is in-memory computing, in which certain computational tasks are performed in place in the memory itself by exploiting the physical attributes of the memory devices. Both charge-based and resistance-based memory devices are being explored for in-memory computing. In this Review, we provide a broad overview of the key computational primitives enabled by these memory devices as well as their applications spanning scientific computing, signal processing, optimization, machine learning, deep learning and stochastic computing.
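The core primitive behind resistance-based in-memory computing is an analog matrix-vector multiply performed by the memory array itself: Ohm's law does the multiplications at each crosspoint and Kirchhoff's current law does the accumulation along the bit lines. The NumPy sketch below is a hedged illustration with assumed conductance and noise values, not a model of any specific device from the Review.

```python
# Illustrative sketch of an analog matrix-vector multiply in a resistive
# crossbar (all values are assumptions for illustration).
import numpy as np

rng = np.random.default_rng(0)

G = rng.uniform(1e-6, 1e-4, size=(3, 4))  # conductance at each crosspoint (S)
v = np.array([0.2, 0.5, 0.1])             # voltages on the 3 word lines (V)

# Ohm's law multiplies (I = G * V per device); Kirchhoff's current law
# accumulates the per-device currents along each of the 4 bit lines.
# The multiply-accumulate happens in place, with no data movement.
i_ideal = G.T @ v

# Real devices deviate from their programmed conductance; a small
# multiplicative error term stands in for that non-ideality here.
i_measured = i_ideal * (1 + 0.02 * rng.standard_normal(i_ideal.shape))
```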

841 citations

Journal ArticleDOI
22 Jan 2010
TL;DR: In this paper, the authors define and explore near-threshold computing (NTC), a design space where the supply voltage is approximately equal to the threshold voltage of the transistors.
Abstract: Power has become the primary design constraint for chip designers today. While Moore's law continues to provide additional transistors, power budgets have begun to prohibit those devices from actually being used. To reduce energy consumption, voltage scaling has proved a popular technique, with subthreshold design representing the endpoint of voltage scaling. Although it is extremely energy efficient, subthreshold design has been relegated to niche markets due to its major performance penalties. This paper defines and explores near-threshold computing (NTC), a design space where the supply voltage is approximately equal to the threshold voltage of the transistors. This region retains much of the energy savings of subthreshold operation with more favorable performance and variability characteristics. This makes it applicable to a broad range of power-constrained computing segments from sensors to high-performance servers. This paper explores the barriers to the widespread adoption of NTC and describes current work aimed at overcoming these obstacles.
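Why the near-threshold region is attractive can be seen from a first-order energy model: dynamic energy falls quadratically with the supply, but delay (and hence leakage energy per operation) blows up as Vdd approaches Vth, so total energy per operation bottoms out slightly above threshold. The sketch below uses an alpha-power-law delay model with entirely assumed constants; it illustrates the shape of the tradeoff, not the paper's numbers.

```python
# First-order energy-per-operation model around the near-threshold region
# (all constants are assumptions chosen only to make the tradeoff visible).

C = 1e-12      # switched capacitance per operation (F), assumed
VTH = 0.35     # transistor threshold voltage (V), assumed
ALPHA = 1.5    # alpha-power-law exponent (velocity saturation), assumed
I_LEAK = 1e-5  # effective leakage current (A), assumed
TAU = 1e-10    # characteristic delay scale (s), assumed

def energy_per_op(vdd):
    """Dynamic C*Vdd^2 energy plus leakage integrated over a cycle time
    that follows the alpha-power law, delay ~ Vdd / (Vdd - Vth)^alpha."""
    delay = TAU * vdd / (vdd - VTH) ** ALPHA
    return C * vdd**2 + I_LEAK * vdd * delay

# Sweep the supply: the minimum lands just above Vth, where slower cycles
# make leakage offset any further dynamic-energy savings.
vdds = [0.36 + 0.005 * k for k in range(130)]
v_opt = min(vdds, key=energy_per_op)  # ~0.39 V with these assumed constants
```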

767 citations

