Author

Xinli Gu

Bio: Xinli Gu is an academic researcher from Cisco Systems, Inc. The author has contributed to research in topics: Automatic test pattern generation & Design for testing. The author has an h-index of 10 and has co-authored 32 publications receiving 367 citations. Previous affiliations of Xinli Gu include Linköping University & Synopsys.

Papers
Proceedings ArticleDOI
01 Oct 2006
TL;DR: This paper describes an approach that extends structural test techniques to the board and system level to improve test accessibility, test time, and diagnostic capability; the approach has been put into practice at a large telecommunication company.
Abstract: The success of system test is measured by test quality and cost. System test quality and cost rely on several factors, such as component and board test quality, system test completeness, the support of system diagnostics, and a process that controls overall quality, resource, and cost balances. Traditional structural test techniques used at the component level can achieve both high test quality and low test cost. This paper describes an approach to extend the functionality of structural test techniques to the board and system level to improve test accessibility, test time, and diagnostic capability. This approach has become standard practice in a large telecommunication company, and the benefits from this practice have been substantial. Examples are given at the end of the paper.
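The paper itself does not give an algorithm here, but a common way to realize board-level structural test is boundary-scan interconnect testing. The sketch below is an illustration rather than the paper's method: it generates a counting-sequence vector set for a few hypothetical board nets so that shorts between any two nets and stuck-at faults on any net become distinguishable.

```python
# Minimal sketch (not from the paper): boundary-scan interconnect test vectors
# for board-level nets using a binary counting sequence. Each net gets a unique
# nonzero, non-all-ones code so shorts produce distinguishable responses; an
# all-zero / all-one vector pair screens stuck-at faults. Net names are hypothetical.

from math import ceil, log2

def counting_sequence_vectors(nets):
    """Return a list of parallel test vectors (one bit per net)."""
    n = len(nets)
    width = max(1, ceil(log2(n + 2)))        # +2 keeps codes away from all-0 / all-1
    codes = [i + 1 for i in range(n)]        # unique nonzero code per net
    vectors = []
    for bit in range(width):                 # one vector per code bit
        vectors.append({net: (code >> bit) & 1
                        for net, code in zip(nets, codes)})
    # extra vectors to catch stuck-at-0 / stuck-at-1 on every net
    vectors.append({net: 0 for net in nets})
    vectors.append({net: 1 for net in nets})
    return vectors

if __name__ == "__main__":
    board_nets = ["asic1_d0", "asic1_d1", "asic2_rx", "asic2_tx"]  # hypothetical
    for i, vec in enumerate(counting_sequence_vectors(board_nets)):
        print(f"vector {i}: {vec}")
```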

47 citations

Proceedings ArticleDOI
23 Sep 1994
TL;DR: This paper presents a testability improvement method for digital systems described in VHDL behavioral specification based on testability analysis at register-transfer (RT) level which reflects test pattern generation costs, fault coverage and test application time.

Abstract: This paper presents a testability improvement method for digital systems described in VHDL behavioral specification. The method is based on testability analysis at register-transfer (RT) level which reflects test pattern generation costs, fault coverage and test application time. The testability is measured by controllability and observability, and determined by the structure of a design, the depth from I/O ports and the functional units used. In our approach, hard-to-test parts are detected by a testability analysis algorithm and transformed by some known DFT techniques. Our experimental results show that testability improvement transformations guided by the RT level testability analysis have a strong correlation to ATPG results at gate level.
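As a rough illustration of the kind of RT-level testability analysis described above (the scoring, threshold, and graph encoding below are assumptions of this sketch, not the paper's measures), controllability can be estimated as depth from primary inputs and observability as depth to primary outputs over a dataflow graph, with deep nodes flagged as hard-to-test candidates for DFT transformation.

```python
# Minimal sketch (assumed scoring, not the paper's exact metric): estimate
# controllability as shortest depth from primary inputs and observability as
# shortest depth to primary outputs over an RT-level dataflow graph, then
# flag hard-to-test nodes as candidates for DFT transformations.

from collections import deque

def depths_from(sources, graph):
    """BFS depth of every reachable node from a set of source nodes."""
    depth = {s: 0 for s in sources}
    queue = deque(sources)
    while queue:
        node = queue.popleft()
        for nxt in graph.get(node, []):
            if nxt not in depth:
                depth[nxt] = depth[node] + 1
                queue.append(nxt)
    return depth

def testability(graph, primary_inputs, primary_outputs, threshold=3):
    reverse = {}
    for src, dsts in graph.items():
        for dst in dsts:
            reverse.setdefault(dst, []).append(src)
    ctrl = depths_from(primary_inputs, graph)       # controllability ~ depth from PIs
    obsv = depths_from(primary_outputs, reverse)    # observability ~ depth to POs
    nodes = set(graph) | {d for ds in graph.values() for d in ds}
    hard = [n for n in nodes
            if ctrl.get(n, float("inf")) + obsv.get(n, float("inf")) > threshold]
    return ctrl, obsv, hard

if __name__ == "__main__":
    # hypothetical RT-level dataflow: registers / functional units as nodes
    rtl = {"in1": ["add1"], "in2": ["add1", "out2"], "add1": ["reg1"],
           "reg1": ["mul1"], "mul1": ["reg2"], "reg2": ["out1"]}
    ctrl, obsv, hard = testability(rtl, ["in1", "in2"], ["out1", "out2"])
    # the deep arithmetic chain is flagged; the shallow in2 -> out2 path is not
    print("hard-to-test candidates:", sorted(hard))
```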

46 citations

Proceedings ArticleDOI
08 Nov 2005
TL;DR: This paper presents some of the issues that Cisco Systems has experienced with respect to NTFs and how some of those issues were resolved and advocates for much better correlation between the ASIC test on the component tester and the functional test in the system chassis.
Abstract: As chip, board and system technologies scale towards higher speeds and greater logic density, the effect of defects becomes more subtle, but more pervasive. As hardware designers push technologies to the limit, system DPM (defects per million) rates continue to increase, yet it becomes increasingly difficult to determine the nature of the failures. More and more often, components/ASICs which fail at board and system test are sent to suppliers, only to have them returned "NTF" (no trouble found). This paper presents some of the issues that Cisco Systems has experienced with respect to NTFs, and how some of those issues were resolved. These issues span from chip to system and from process to test to debug. The paper discusses the importance of a process to deal with NTFs and the importance of accurate data to determine and fix unwanted trends. Ultimately, most problems were resolved once the trend data and the offending logic were completely understood. Not all NTFs resulted from test escapes. It was clear, however, that some sort of "correlation" between the ASIC test and the system test needed to be in place to resolve/prevent NTF issues. In its conclusion, this paper advocates for much better correlation between the ASIC test on the component tester and the functional test in the system chassis.

40 citations

Proceedings ArticleDOI
19 Apr 2010
TL;DR: This paper uses Bayesian inference to develop a new board-level diagnosis framework that allows us to identify faulty devices or faulty modules within a device on a failing board with high confidence and highlights the effectiveness of the proposed framework in terms of fault-localization accuracy and correctness of diagnosis.
Abstract: Increasing integration densities and high operating speeds are leading to subtle manifestations of defects at the board level. Board-level functional test is therefore necessary for product qualification. The diagnosis of functional failures is especially challenging, and the cost associated with board-level diagnosis is escalating rapidly. An effective and cost-efficient board-level diagnosis strategy is needed to reduce manufacturing cost and time-to-market, as well as to improve product quality. In this paper, we use Bayesian inference to develop a new board-level diagnosis framework that allows us to identify faulty devices or faulty modules within a device on a failing board with high confidence. Bayesian inference offers a powerful probabilistic method for pattern analysis, classification, and decision making under uncertainty. We apply this inference technique by first generating a database of fault syndromes obtained using fault-insertion test at the module pin level on a fault-free board, and then use this database along with the observed erroneous behavior of a failing board to infer the most likely faulty device. Results on a case study using an open-source RISC system-on-chip highlight the effectiveness of the proposed framework in terms of fault-localization accuracy and correctness of diagnosis.
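The paper's framework is summarized above; as a minimal sketch of the underlying idea (a naive-Bayes model over binary pass/fail syndromes with Laplace smoothing, using made-up entries rather than the paper's fault-insertion database or case study), candidate devices can be ranked by posterior probability given an observed syndrome.

```python
# Minimal sketch (assumed model, not the paper's exact framework): naive-Bayes
# diagnosis of the most likely faulty device from binary test syndromes.
# The syndrome database would come from fault-insertion tests on a known-good
# board; the entries below are made up for illustration.

import math
from collections import defaultdict

def train(syndrome_db):
    """syndrome_db: list of (faulty_device, syndrome); syndrome = tuple of 0/1."""
    by_device = defaultdict(list)
    for device, syndrome in syndrome_db:
        by_device[device].append(syndrome)
    model = {}
    for device, syndromes in by_device.items():
        n = len(syndromes)
        n_tests = len(syndromes[0])
        # P(test_i fails | device faulty), with Laplace smoothing
        fail_prob = [(sum(s[i] for s in syndromes) + 1) / (n + 2)
                     for i in range(n_tests)]
        prior = n / len(syndrome_db)
        model[device] = (prior, fail_prob)
    return model

def diagnose(model, observed):
    """Rank devices by (log) posterior probability given an observed syndrome."""
    scores = {}
    for device, (prior, fail_prob) in model.items():
        logp = math.log(prior)
        for bit, p in zip(observed, fail_prob):
            logp += math.log(p if bit else 1 - p)
        scores[device] = logp
    return sorted(scores, key=scores.get, reverse=True)

if __name__ == "__main__":
    db = [("U1", (1, 0, 1)), ("U1", (1, 0, 0)),   # hypothetical fault-insertion data
          ("U2", (0, 1, 1)), ("U2", (0, 1, 0))]
    model = train(db)
    print("ranked suspects:", diagnose(model, (1, 0, 1)))   # U1 is the likely culprit
```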

36 citations

Proceedings ArticleDOI
Xinli Gu, Weili Wang, K. Li, Heon Kim, S.S. Chung
07 Oct 2002
TL;DR: By re-configuring the existing DFT logic implemented on an ASIC, the authors are able to test each part of an ASIC in a system environment separately and thus locate manufacturing defects, and to use structural tests to cover devices and their interconnects on a board.
Abstract: This paper presents a technique for re-using DFT logic for system functional and silicon debugging. By re-configuring the existing DFT logic implemented on an ASIC, we are able to 1) test each part of an ASIC in a system environment separately and thus locate manufacturing defects, 2) control and observe any state element of an ASIC to facilitate system function and silicon debugging, and 3) use structural tests to cover devices and their interconnects on a board. Therefore, we can achieve debugging and test at both the device level and the system board level.
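One way to picture the observability side of this technique: once the existing scan chains can be dumped in-system, the captured state can be diffed against a golden reference from simulation to localize mismatching state elements. The dump format and register map below are assumptions for illustration, not the paper's tooling.

```python
# Minimal sketch (assumed data formats, not the paper's flow): localize failing
# state elements by diffing a scan-chain dump captured in-system against a
# golden reference from simulation. The register map is hypothetical.

def diff_scan_dump(captured_bits, golden_bits, register_map):
    """register_map: {register_name: (offset, width)} within the scan chain."""
    mismatches = []
    for name, (offset, width) in register_map.items():
        cap = captured_bits[offset:offset + width]
        ref = golden_bits[offset:offset + width]
        if cap != ref:
            mismatches.append((name, ref, cap))
    return mismatches

if __name__ == "__main__":
    register_map = {"pkt_cnt": (0, 4), "fsm_state": (4, 3), "crc_err": (7, 1)}
    golden   = "1010" + "001" + "0"     # expected chain contents from simulation
    captured = "1010" + "011" + "0"     # contents shifted out of the ASIC in-system
    for name, ref, cap in diff_scan_dump(captured, golden, register_map):
        print(f"{name}: expected {ref}, captured {cap}")
```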

27 citations


Cited by
02 Nov 2011
TL;DR: This paper presents a novel statistical change-point detection algorithm based on non-parametric divergence estimation between time-series samples from two retrospective segments that is accurately and efficiently estimated by a method of direct density-ratio estimation.
Abstract: The objective of change-point detection is to discover abrupt property changes lying behind time-series data. In this paper, we present a novel statistical change-point detection algorithm based on non-parametric divergence estimation between time-series samples from two retrospective segments. Our method uses the relative Pearson divergence as a divergence measure, and it is accurately and efficiently estimated by a method of direct density-ratio estimation. Through experiments on artificial and real-world datasets including human-activity sensing, speech, and Twitter messages, we demonstrate the usefulness of the proposed method.
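The paper estimates the relative Pearson divergence by direct density-ratio estimation (RuLSIF), which is too involved to reproduce here; the simplified sketch below only illustrates the windowed divergence-scoring idea, using a histogram-based Pearson divergence between two adjacent retrospective segments rather than the authors' estimator.

```python
# Simplified illustration (not the paper's RuLSIF estimator): score each time
# step by a divergence between histograms of two adjacent retrospective
# windows; peaks in the score suggest change points.

import numpy as np

def pearson_divergence(p, q, eps=1e-12):
    """Pearson (chi-squared) divergence between two discrete distributions."""
    p = p + eps
    q = q + eps
    return 0.5 * np.sum(q * (p / q - 1.0) ** 2)

def change_scores(series, window=50, bins=10):
    series = np.asarray(series, dtype=float)
    lo, hi = series.min(), series.max()
    scores = []
    for t in range(window, len(series) - window):
        left = series[t - window:t]              # retrospective segment before t
        right = series[t:t + window]             # retrospective segment after t
        p, _ = np.histogram(left, bins=bins, range=(lo, hi))
        q, _ = np.histogram(right, bins=bins, range=(lo, hi))
        scores.append(pearson_divergence(p / p.sum(), q / q.sum()))
    return np.array(scores)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    data = np.concatenate([rng.normal(0, 1, 300), rng.normal(3, 1, 300)])
    s = change_scores(data)
    print("peak score near index:", int(np.argmax(s)) + 50)   # ~300 expected
```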

271 citations

Journal ArticleDOI
Handbook of Data Mining and Knowledge Discovery (no abstract available).

252 citations

Book
11 Mar 2009
TL;DR: EDA/VLSI practitioners and researchers in need of fluency in an "adjacent" field will find this an invaluable reference to the basic EDA concepts, principles, data structures, algorithms, and architectures for the design, verification, and test of VLSI circuits.
Abstract: This book provides broad and comprehensive coverage of the entire EDA flow. EDA/VLSI practitioners and researchers in need of fluency in an "adjacent" field will find this an invaluable reference to the basic EDA concepts, principles, data structures, algorithms, and architectures for the design, verification, and test of VLSI circuits. Anyone who needs to learn the concepts, principles, data structures, algorithms, and architectures of the EDA flow will benefit from this book. It covers the complete spectrum of the EDA flow, from ESL design modeling to logic/test synthesis, verification, physical design, and test, helping EDA newcomers get "up-and-running" quickly. It includes comprehensive coverage of EDA concepts, principles, data structures, algorithms, and architectures, helping all readers improve their VLSI design competence, and contains the latest advancements not yet available in other books, including test compression, ESL design modeling, large-scale floorplanning, placement, routing, and synthesis of clock and power/ground networks, helping readers design and develop testable chips or products. Industry best practices are included wherever appropriate in most chapters, helping readers avoid costly mistakes. Table of Contents: 1. Introduction; 2. Fundamentals of CMOS Design; 3. Design for Testability; 4. Fundamentals of Algorithms; 5. Electronic System-Level Design and High-Level Synthesis; 6. Logic Synthesis in a Nutshell; 7. Test Synthesis; 8. Logic and Circuit Simulation; 9. Functional Verification; 10. Floorplanning; 11. Placement; 12. Global and Detailed Routing; 13. Synthesis of Clock and Power/Ground Networks; 14. Fault Simulation and Test Generation.

200 citations

Book
20 Nov 2007
TL;DR: This book is a comprehensive guide to new VLSI testing and design-for-testability techniques that will allow students, researchers, DFT practitioners, and VLSI designers to quickly master System-on-Chip test architectures for test, debug, and diagnosis of digital, memory, and analog/mixed-signal designs.
Abstract: Modern electronics testing has a legacy of more than 40 years. The introduction of new technologies, especially nanometer technologies with 90nm or smaller geometry, has allowed the semiconductor industry to keep pace with the increased performance-capacity demands from consumers. As a result, semiconductor test costs have been growing steadily and typically amount to 40% of today's overall product cost. This book is a comprehensive guide to new VLSI testing and design-for-testability techniques that will allow students, researchers, DFT practitioners, and VLSI designers to quickly master System-on-Chip test architectures for test, debug, and diagnosis of digital, memory, and analog/mixed-signal designs. KEY FEATURES:
* Emphasizes VLSI test principles and design-for-testability architectures, with numerous illustrations and examples.
* Offers the most up-to-date coverage available, including fault tolerance, low-power testing, defect and error tolerance, network-on-chip (NOC) testing, software-based self-testing, FPGA testing, MEMS testing, and system-in-package (SIP) testing, which are not yet available in any other testing book.
* Covers the entire spectrum of VLSI testing and DFT architectures, from digital and analog to memory circuits, and fault diagnosis and self-repair from digital to memory circuits.
* Discusses future nanotechnology test trends and challenges facing the nanometer design era, along with promising nanotechnology test techniques, including quantum dots, cellular automata, carbon nanotubes, and hybrid semiconductor/nanowire/molecular computing.
* Provides practical problems at the end of each chapter for students.

151 citations

Journal ArticleDOI
TL;DR: This paper describes a high-level synthesis system, called CAMAD, for transforming algorithms into hardware implementation structures at the register-transfer level and shows that this approach produces improved register-transfer designs, especially in cases where the designed hardware consists of data paths and control logic that are tightly coupled.
Abstract: This paper describes a high-level synthesis system, called CAMAD, for transforming algorithms into hardware implementation structures at the register-transfer level. The algorithms are used to specify the behaviors of the hardware to be designed. They are first translated into a formal representation model which is based on timed Petri nets and consists of separate but related descriptions of control and data path. The formal model is used as an intermediate design representation and supports an iterative transformation approach to high-level synthesis. The basic idea is that once the behavioral specification is translated into the initial design representation, it can be viewed as a primitive implementation. Correctness-preserving transformations are then used to successively transform the initial design into an efficient implementation. Selection of transformations is guided by an optimization strategy which makes design decisions concerning operation scheduling, data path allocation, and control allocation simultaneously. The integration of these several synthesis subtasks has resulted in a better chance to reach the globally optimal solution. Experimental results show that our approach produces improved register-transfer designs, especially in cases where the designed hardware consists of data paths and control logic that are tightly coupled.
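As a toy illustration of one of the synthesis subtasks mentioned above (operation scheduling), and not of CAMAD's transformation-based algorithm, an as-soon-as-possible (ASAP) schedule can be computed over a dataflow graph as follows.

```python
# Toy illustration of one high-level synthesis subtask (operation scheduling),
# not the CAMAD transformation-based algorithm: as-soon-as-possible (ASAP)
# scheduling of a dataflow graph into control steps.

def asap_schedule(deps):
    """deps: {operation: [operations it depends on]} -> {operation: control step}."""
    step = {}
    def visit(op, stack=()):
        if op in step:
            return step[op]
        if op in stack:
            raise ValueError("dependency cycle at " + op)
        preds = deps.get(op, [])
        step[op] = 1 if not preds else 1 + max(visit(p, stack + (op,)) for p in preds)
        return step[op]
    for op in deps:
        visit(op)
    return step

if __name__ == "__main__":
    # hypothetical dataflow for y = (a + b) * (c + d) - e
    dfg = {"add1": [], "add2": [], "mul1": ["add1", "add2"], "sub1": ["mul1"]}
    print(asap_schedule(dfg))   # {'add1': 1, 'add2': 1, 'mul1': 2, 'sub1': 3}
```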

103 citations