Author

Kee Sup Kim

Bio: Kee Sup Kim is an academic researcher from Intel. The author has contributed to research in topics including design for testing and soft errors. The author has an h-index of 14 and has co-authored 20 publications receiving 1,849 citations.

Papers
Journal ArticleDOI
Subhasish Mitra, N. Seifert, Ming Zhang, Quan Shi, Kee Sup Kim
TL;DR: A new design paradigm reuses design-for-testability and debug resources to eliminate transient errors caused by terrestrial radiation in chip designs.
Abstract: Transient errors caused by terrestrial radiation pose a major barrier to robust system design. A system's susceptibility to such errors increases in advanced technologies, making the incorporation of effective protection mechanisms into chip designs essential. A new design paradigm reuses design-for-testability and debug resources to eliminate such errors.

600 citations

Proceedings ArticleDOI
Subhasish Mitra, Kee Sup Kim
07 Oct 2002
TL;DR: This work presents a technique for compacting test response data using combinational logic circuits, enabling up to an exponential reduction in the number of pins required to collect test responses from a chip.
Abstract: We present a technique for compacting test response data using combinational logic circuits. Our compaction technique enables up to an exponential reduction in the number of pins required to collect test response from a chip. The combinational circuits require negligible area, do not add any extra delay during normal operation, guarantee detection of defective chips even in the presence of sources of unknown logic values (often referred to as Xs) and preserve diagnosis capabilities for all practical scenarios. The technique has minimum impact on current design and test flows, and can be used to reduce test time, test data volume, test-I/O pins and tester channels, and also to improve test quality.

243 citations
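At its core, the compactor is an XOR network: each scan chain drives the subset of output pins given by a distinct, nonzero, odd-weight binary row, so a single-chain error flips an odd number of pins and can never cancel itself out, and n chains share only about log2(n) pins. The Python sketch below illustrates the idea under those assumptions; the helper names are invented, and this is not the paper's actual construction procedure.

```python
from itertools import combinations

def xcompact_matrix(n_chains, q):
    """Assign each scan chain a distinct, nonzero, odd-weight q-bit row;
    odd weight means a single-chain error flips an odd number of output
    pins and therefore can never cancel itself out."""
    rows = []
    for weight in range(1, q + 1, 2):                  # odd weights only
        for ones in combinations(range(q), weight):
            rows.append(tuple(1 if j in ones else 0 for j in range(q)))
            if len(rows) == n_chains:
                return rows
    raise ValueError("q too small for this many chains")

def compact(matrix, chain_bits):
    """One scan-shift cycle: XOR together the rows of all chains whose
    output bit is 1, giving the q observed compactor bits."""
    out = [0] * len(matrix[0])
    for bit, row in zip(chain_bits, matrix):
        if bit:
            out = [o ^ r for o, r in zip(out, row)]
    return out

# 64 chains observed through 8 pins instead of 64.
M = xcompact_matrix(64, 8)
golden = compact(M, [0] * 64)
faulty = compact(M, [0] * 63 + [1])    # chain 63 captures a wrong value
print(golden != faulty)                 # True: the error reaches the tester
```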

Journal ArticleDOI
Subhasish Mitra, Kee Sup Kim
TL;DR: X-Compact is an X-tolerant test response compaction technique that enables up to exponential reduction in the test response data volume and the number of pins required to collect test response from a chip.
Abstract: X-Compact is an X-tolerant test response compaction technique. It enables up to exponential reduction in the test response data volume and the number of pins required to collect test response from a chip. The compaction hardware requires negligible area, does not add any extra delay during normal operation, guarantees detection of defective chips even in the presence of unknown logic values (often referred to as X's), and preserves diagnosis capabilities for most practical scenarios. The technique has minimum impact on current design and test flows, and can be used to reduce test time, test-data volume, test-input/output pins and tester channels, and also to improve test quality.

240 citations
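The X-tolerance property can be sketched the same way. One simple construction, assuming all rows share a single fixed odd weight, makes single errors survive a single X: distinct equal-size supports can never nest, so the erroneous chain always touches a pin the X-producing chain leaves clean. The exhaustive check below illustrates that argument; it is not the paper's proof or its production matrices.

```python
from itertools import combinations

Q = 9                                            # compactor output pins
ROWS = [tuple(1 if j in ones else 0 for j in range(Q))
        for ones in combinations(range(Q), 3)]   # 84 distinct weight-3 rows

def error_visible(err_chain, x_chain):
    """With chain x_chain producing an unknown X, only the output pins
    where its row is 0 can be trusted; a single error in err_chain is
    still seen iff its row has a 1 on at least one trusted pin."""
    err_row, x_row = ROWS[err_chain], ROWS[x_chain]
    return any(e and not x for e, x in zip(err_row, x_row))

# Distinct rows of equal weight can never have nested supports, so every
# single-chain error survives any single X in another chain:
print(all(error_visible(i, j)
          for i in range(len(ROWS))
          for j in range(len(ROWS)) if i != j))   # True
```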

Journal ArticleDOI
TL;DR: The presented error-correcting latch and flip-flop designs are power efficient, introduce minimal speed penalty, and employ reuse of on-chip scan design-for-testability and design-for-debug resources to minimize area overheads.
Abstract: This paper presents a built-in soft error resilience (BISER) technique for correcting radiation-induced soft errors in latches and flip-flops. The presented error-correcting latch and flip-flop designs are power efficient, introduce minimal speed penalty, and employ reuse of on-chip scan design-for-testability and design-for-debug resources to minimize area overheads. Circuit simulations using a sub-90-nm technology show that the presented designs achieve more than a 20-fold reduction in cell-level soft error rate (SER). Fault injection experiments conducted on a microprocessor model further demonstrate that chip-level SER improvement is tunable by selective placement of the presented error-correcting designs. When coupled with error correction code to protect in-pipeline memories, the BISER flip-flop design improves chip-level SER by 10 times over an unprotected pipeline, with the flip-flops contributing an extra 7-10.5% in power. When only soft errors in flip-flops are considered, the BISER technique improves chip-level SER by 10 times with an increased power of 10.3%. The error correction mechanism is configurable (i.e., can be turned on or off), which enables the use of the presented techniques in designs that target multiple applications with a wide range of reliability requirements.

226 citations
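Behaviorally, the BISER cell duplicates state into the functional latch and a redundant latch (for example, the reused scan element) and merges the two through a C-element that holds its last value whenever they disagree. The Python model below is a behavioral sketch of that published idea under simplifying assumptions; the class and method names are invented, and circuit-level details such as the keeper and timing are not modeled.

```python
def c_element(a, b, prev):
    """Muller C-element with keeper: drives the common value when the two
    redundant latches agree, holds the previous output when they disagree."""
    return a if a == b else prev

class BiserFF:
    """Sketch of a BISER-style flip-flop: the functional latch is paired
    with a redundant (e.g., reused scan) latch; a C-element merges them."""
    def __init__(self):
        self.main = self.shadow = self.out = 0

    def clock(self, d):
        self.main = self.shadow = d      # both latches capture the same data

    def strike(self, which):
        # a particle strike flips the state of one latch only
        if which == "main":
            self.main ^= 1
        else:
            self.shadow ^= 1

    def read(self):
        self.out = c_element(self.main, self.shadow, self.out)
        return self.out

ff = BiserFF()
ff.clock(1)
print(ff.read())        # 1
ff.strike("shadow")     # soft error flips the redundant latch
print(ff.read())        # still 1: the C-element holds the last agreed value
```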

Proceedings ArticleDOI
Subhasish Mitra, Ming Zhang, S. Waqas, N. Seifert, B. Gill, Kee Sup Kim
01 Oct 2006
TL;DR: Two techniques for correcting radiation-induced soft errors in combinational logic are presented: error correction using duplication, and error correction using time-shifted outputs.
Abstract: We present two techniques for correcting radiation-induced soft errors in combinational logic: Error Correction using Duplication, and Error Correction using Time-Shifted Outputs. Simulation results show that both techniques reduce combinational logic soft error rate by more than an order of magnitude. Soft errors affecting sequential elements (latches and flip-flops) at combinational logic outputs are automatically corrected using these techniques.

158 citations
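The time-shifted-outputs technique can be sketched at the same behavioral level: capture the combinational output twice, separated by more than the worst-case transient pulse width, so a pulse can corrupt at most one sample, and let a C-element block the disagreement. The toy below assumes an idealized waveform and invented function names; it is not the paper's circuit.

```python
def c_element(a, b, prev):
    """Drive the common value when both samples agree; hold otherwise."""
    return a if a == b else prev

def corrected_output(waveform, t, delta, prev):
    """Sample the same combinational output at t and t + delta; a transient
    pulse narrower than delta can corrupt at most one of the two samples,
    and the C-element keeps the disagreement from propagating."""
    return c_element(waveform(t), waveform(t + delta), prev)

# Correct steady value is 0; a strike causes a 0.2 ns pulse at t = 5 ns.
def struck(t):
    return 1 if 5.0 <= t < 5.2 else 0

print(corrected_output(struck, t=5.1, delta=0.5, prev=0))   # 0: pulse filtered
```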


Cited by
Proceedings ArticleDOI
C. Ranger, R. Raghuraman, A. Penmetsa, Gary Bradski, Christos Kozyrakis
10 Feb 2007
TL;DR: It is established that, given a careful implementation, MapReduce is a promising model for scalable performance on shared-memory systems with simple parallel code.
Abstract: This paper evaluates the suitability of the MapReduce model for multi-core and multi-processor systems. MapReduce was created by Google for application development on data-centers with thousands of servers. It allows programmers to write functional-style code that is automatically parallelized and scheduled in a distributed system. We describe Phoenix, an implementation of MapReduce for shared-memory systems that includes a programming API and an efficient runtime system. The Phoenix runtime automatically manages thread creation, dynamic task scheduling, data partitioning, and fault tolerance across processor nodes. We study Phoenix with multi-core and symmetric multiprocessor systems and evaluate its performance potential and error recovery features. We also compare MapReduce code to code written in lower-level APIs such as Pthreads. Overall, we establish that, given a careful implementation, MapReduce is a promising model for scalable performance on shared-memory systems with simple parallel code.

1,058 citations
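Phoenix itself exposes a C API over threads, but the programming model it implements is easy to sketch. In the Python illustration below, a process pool stands in for Phoenix's shared-memory runtime scheduler; this mirrors only the map/reduce split, not the actual Phoenix API.

```python
from collections import Counter
from functools import reduce
from multiprocessing import Pool

def map_task(split):
    """map(): emit (word, count) pairs for one input split."""
    return Counter(split.split())

def reduce_task(acc, partial):
    """reduce(): merge intermediate counts that share a key."""
    acc.update(partial)
    return acc

def word_count(text, workers=4):
    """The runtime partitions the input, runs map tasks in parallel, then
    folds the partial results together (a process pool stands in here for
    Phoenix's shared-memory thread scheduler)."""
    splits = text.splitlines()
    with Pool(workers) as pool:
        partials = pool.map(map_task, splits)
    return reduce(reduce_task, partials, Counter())

if __name__ == "__main__":
    print(word_count("the quick fox\nthe lazy dog\nthe end"))
```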

Journal ArticleDOI
TL;DR: This paper presents a novel test-data-volume compression methodology called embedded deterministic test (EDT), which reduces manufacturing test cost by providing one to two orders of magnitude reduction in scan test data volume and scan test time.
Abstract: This paper presents a novel test-data-volume compression methodology called embedded deterministic test (EDT), which reduces manufacturing test cost by providing one to two orders of magnitude reduction in scan test data volume and scan test time. The presented scheme is widely applicable and easy to deploy because it is based on the standard scan/ATPG methodology and adopts a very simple flow. It is nonintrusive, as it does not require any modifications to the core logic such as the insertion of test points or logic bounding unknown states. The EDT scheme consists of logic embedded on a chip and a new deterministic test-pattern generation technique. The main contributions of the paper are test-stimuli compression schemes that allow test data to be delivered to the on-chip continuous-flow decompressor; in particular, this can be done by repeating certain patterns at rates adjusted to the requirements of the test cubes. Experimental results show that for industrial circuits with test cubes with very low fill rates, ranging from 3% to 0.2%, these schemes result in compression ratios of 30 to 500 times. A comprehensive analysis of the encoding efficiency of the proposed compression schemes is also provided.

529 citations
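The encoding step at the heart of EDT-style compression is linear algebra over GF(2): after symbolic expansion, every scan cell holds an XOR of injected tester bits, each care bit contributes one equation, and very low fill rates leave the system heavily under-constrained, which is where the compression comes from. The toy model below uses a 1-channel shift register with invented taps, nothing like the production ring generator, purely to sketch that encoding.

```python
def expand(n_vars, n_cells, taps=(0, 2)):
    """Clock a toy 1-channel decompressor symbolically for n_vars cycles,
    injecting one fresh tester variable per cycle; each cell ends up as a
    bitmask naming the GF(2) sum of variables it holds."""
    cells = [0] * n_cells
    for v in range(n_vars):
        feedback = cells[taps[0]] ^ cells[taps[1]]
        cells = [feedback ^ (1 << v)] + cells[:-1]
    return cells

def solve_gf2(eqs, n_vars):
    """Gaussian elimination over GF(2). eqs is a list of (mask, rhs) rows
    meaning 'XOR of the variables in mask equals rhs'; returns a satisfying
    assignment as a bitmask (free variables set to 0), or None."""
    rows = list(eqs)
    pivot_row = {}
    r = 0
    for col in range(n_vars):
        piv = next((i for i in range(r, len(rows)) if rows[i][0] >> col & 1), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][0] >> col & 1:
                rows[i] = (rows[i][0] ^ rows[r][0], rows[i][1] ^ rows[r][1])
        pivot_row[col] = r
        r += 1
    if any(mask == 0 and rhs for mask, rhs in rows):
        return None                     # care bits over-constrain the input
    return sum(1 << col for col, i in pivot_row.items() if rows[i][1])

# 16 compressed tester bits expand into 12 scan cells; the test cube cares
# about only 4 of them (a low fill rate, as in the paper's industrial data).
cells = expand(n_vars=16, n_cells=12)
care_bits = {0: 1, 3: 0, 7: 1, 11: 1}
seed = solve_gf2([(cells[i], v) for i, v in care_bits.items()], n_vars=16)
print(f"{seed:016b}" if seed is not None else "cube not encodable")
```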

Book
01 Jul 2006
TL;DR: This book is a comprehensive guide to new DFT methods that will show readers how to design a testable and quality product, drive down test cost, improve product quality and yield, and speed up time-to-market and time-to-volume.

522 citations

Journal ArticleDOI
01 May 2014
TL;DR: This paper presents a report produced by a workshop on 'Addressing failures in exascale computing', held in Park City, Utah, 4–11 August 2012, which summarizes and builds on the workshop's discussions of resilience.
Abstract: We present here a report produced by a workshop on 'Addressing failures in exascale computing' held in Park City, Utah, 4-11 August 2012. The charter of this workshop was to establish a common taxonomy about resilience across all the levels in a computing system, discuss existing knowledge on resilience across the various hardware and software layers of an exascale system, and build on those results, examining potential solutions from both a hardware and software perspective and focusing on a combined approach. The workshop brought together participants with expertise in applications, system software, and hardware; they came from industry, government, and academia, and their interests ranged from theory to implementation. The combination allowed broad and comprehensive discussions and led to this document, which summarizes and builds on those discussions.

406 citations

Book
21 Jul 2006
TL;DR: A comprehensive guide to new DFT methods that will show readers how to design a testable and quality product, drive down test cost, improve product quality and yield, and speed up time-to-market and time-to-volume.
Abstract: This book is a comprehensive guide to new DFT methods that will show readers how to design a testable and quality product, drive down test cost, improve product quality and yield, and speed up time-to-market and time-to-volume.
· Most up-to-date coverage of design for testability.
· Coverage of industry practices commonly found in commercial DFT tools but not discussed in other books.
· Numerous practical examples in each chapter illustrating basic VLSI test principles and DFT architectures.
· Lecture slides and exercise solutions for all chapters are now available.
· Instructors are also eligible to download PPT slide files and MS Word solution files from the manual website.
Table of Contents:
Chapter 1 - Introduction
Chapter 2 - Design for Testability
Chapter 3 - Logic and Fault Simulation
Chapter 4 - Test Generation
Chapter 5 - Logic Built-In Self-Test
Chapter 6 - Test Compression
Chapter 7 - Logic Diagnosis
Chapter 8 - Memory Testing and Built-In Self-Test
Chapter 9 - Memory Diagnosis and Built-In Self-Repair
Chapter 10 - Boundary Scan and Core-Based Testing
Chapter 11 - Analog and Mixed-Signal Testing
Chapter 12 - Test Technology Trends in the Nanometer Age

340 citations