
Showing papers by "Matteo Sonza Reorda" published in 2000


Journal ArticleDOI
TL;DR: In this article, the authors propose a suite of RT-level benchmarks that help improve research in high-level ATPG tools and in the evaluation of circuit testability before synthesis.
Abstract: New design flows require reducing work at the gate level and performing most activities before the synthesis step, including evaluation of testability of circuits. We propose a suite of RT-level benchmarks that help improve research in high-level ATPG tools. First results on the benchmarks obtained with our prototype tool show the feasibility of the approach.

464 citations


Proceedings ArticleDOI
30 Apr 2000
TL;DR: This paper proposes an algorithm for designing a cellular-automata-based test pattern generator for combinational circuits that effectively reduces power consumption while attaining high fault coverage; experimental results show that the approach reduces the power consumed during test by 34% on average.
Abstract: In the last decade, researchers devoted much effort to reduce the average power consumption in VLSI systems during normal operation mode, while power consumption during test operation mode was usually neglected. However, during test application, circuits are subjected to an activity level higher than the normal one: the extra power consumption due to test application may thus cause severe hazards to circuit reliability. Moreover, it can dramatically shorten battery life when periodic testing of battery-powered systems is considered. In this paper we propose an algorithm to design a test pattern generator based on cellular automata for testing combinational circuits that effectively reduces power consumption while attaining high fault coverage. Experimental results show that our approach reduces the power consumed during test by 34% on average, without affecting fault coverage, test length and area overhead.

106 citations
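The abstract above only summarizes the idea; the paper's actual algorithm is not reproduced here. As a purely illustrative sketch, the fragment below shows how a one-dimensional cellular automaton with per-cell rules can act as a test pattern generator, and how the switching activity between consecutive patterns (a rough proxy for test power) can be measured. All names, rule encodings and parameters are invented for the example.

```python
import random

def ca_step(state, rules):
    """Advance a one-dimensional cellular automaton by one step.
    Each cell's next value depends on its left neighbour, itself and its
    right neighbour (null boundary conditions), through a per-cell
    3-input rule encoded as an 8-bit truth table."""
    n = len(state)
    nxt = []
    for i in range(n):
        left = state[i - 1] if i > 0 else 0
        right = state[i + 1] if i < n - 1 else 0
        idx = (left << 2) | (state[i] << 1) | right
        nxt.append((rules[i] >> idx) & 1)
    return nxt

def switching_activity(patterns):
    """Count bit transitions between consecutive patterns: a crude proxy
    for the power consumed while applying them to the circuit inputs."""
    return sum(a != b
               for p, q in zip(patterns, patterns[1:])
               for a, b in zip(p, q))

if __name__ == "__main__":
    random.seed(0)
    width, length = 8, 20
    rules = [random.randrange(256) for _ in range(width)]   # invented per-cell rules
    state = [random.randrange(2) for _ in range(width)]     # invented seed
    patterns = [state]
    for _ in range(length - 1):
        state = ca_step(state, rules)
        patterns.append(state)
    print("generated", len(patterns), "patterns,",
          "switching activity =", switching_activity(patterns))
```

In this spirit, a design algorithm would search over the per-cell rules so that the generated patterns keep fault coverage high while keeping the measured switching activity low.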


Proceedings ArticleDOI
03 Jul 2000
TL;DR: Static and dynamic methods are proposed to analyze the list of faults to be injected and to remove faults as soon as their behaviour is known; common features available in most VHDL simulation environments are also exploited.
Abstract: Simulation-based fault injection in VHDL descriptions is increasingly common due to the popularity of top-down design flows exploiting this language. However, the large CPU time required to perform VHDL simulations often represents a major drawback stemming from the adoption of this method. This paper presents some techniques for reducing the time to perform the fault injection experiments. Static and dynamic methods are proposed to analyze the list of faults to be injected, and for removing faults as soon as their behaviour is known. Common features available in most VHDL simulation environments are also exploited. Experimental results show that the proposed techniques are able to reduce the time required by a typical fault injection campaign by a factor ranging from 51% to 96%.

57 citations
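The key cost-saving idea, dropping a fault from the campaign as soon as its behaviour is known, can be illustrated with a toy example. The sketch below is not the paper's technique (which operates on VHDL simulations); the "circuit", the fault model and the classification rules are invented purely to show how early dropping shortens a campaign.

```python
def golden_run(inputs):
    """Fault-free simulation of a toy 8-bit accumulator 'circuit'."""
    acc, trace = 0, []
    for v in inputs:
        acc = (acc + v) & 0xFF
        trace.append(acc)
    return trace

def faulty_run(inputs, stuck_bit, golden):
    """Simulate with a stuck-at-1 fault on one accumulator bit and stop
    as soon as the fault is classified: 'detected' when the faulty trace
    diverges from the golden one, 'silent' if it never diverges."""
    acc = 0
    for step, v in enumerate(inputs):
        acc = ((acc + v) | (1 << stuck_bit)) & 0xFF
        if acc != golden[step]:
            return "detected", step + 1      # dynamic dropping: stop early
    return "silent", len(inputs)

if __name__ == "__main__":
    inputs = [3, 5, 250, 7, 1, 0, 2]
    golden = golden_run(inputs)
    fault_list = list(range(8))              # one stuck-at-1 fault per bit
    simulated_cycles = 0
    for bit in fault_list:
        outcome, cycles = faulty_run(inputs, bit, golden)
        simulated_cycles += cycles
        print(f"stuck-at-1 on bit {bit}: {outcome} after {cycles} cycle(s)")
    print("simulated cycles with early dropping:", simulated_cycles,
          "vs", len(fault_list) * len(inputs), "without")
```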


Proceedings ArticleDOI
M. Lajolo, M. Rebaudengo, Matteo Sonza Reorda, Massimo Violante, Luciano Lavagno
08 Nov 2000
TL;DR: The paper proposes an approach for integrating the ability to generate test sequences into an existing co-design tool, and preliminary experimental results are reported, assessing the feasibility of the proposed approach.
Abstract: Co-design tools represent an effective solution for reducing costs and shortening time-to-market, when system-on-chip design is considered. In a top-down design flow, designers would greatly benefit from the availability of tools able to automatically generate test sequences, which can be reused during the following design steps, from the system-level specification to the gate-level description. This would significantly increase the chance of identifying testability problems early in the design flow, thus reducing the costs and increasing the final product quality. The paper proposes an approach for integrating the ability to generate test sequences into an existing co-design tool. Preliminary experimental results are reported, assessing the feasibility of the proposed approach.

32 citations


Proceedings ArticleDOI
30 Apr 2000
TL;DR: Experimental results show how sharp observability metrics are crucial for making effective RT-level ATPG possible: test sequences generated at RT-level outperform commercial gate-level ATPGs on some ITC99 benchmark circuits.
Abstract: This paper focuses on observability, one of the open issues in high-level test generation. Three different approximate metrics for taking observability into account during RT-level ATPG are presented. The metrics range from a naive, optimistic one to more sophisticated analyses. They are evaluated by including them in the fitness function used by an RT-level ATPG, and the advantages and disadvantages of each are illustrated. Experimental results show how sharp observability metrics are crucial for making effective RT-level ATPG possible: test sequences generated at the RT level outperform commercial gate-level ATPGs on some ITC99 benchmark circuits.

31 citations
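As a hypothetical illustration of how an observability metric can be folded into the fitness function of an RT-level ATPG (the paper's actual metrics and weights are not reproduced here), one may weight each reached statement by an estimate of the probability that an error on it propagates to an output:

```python
def fitness(executed, observability, alpha=0.7):
    """Hypothetical fitness of one candidate test sequence.

    executed      -- set of statement ids reached by the sequence
    observability -- dict mapping statement id to the estimated probability
                     (0..1) that an error on it propagates to an output
    alpha         -- weight between observability-weighted coverage and raw
                     coverage (the value here is arbitrary)
    """
    if not observability:
        return 0.0
    total = len(observability)
    raw = len(executed) / total
    weighted = sum(observability[s] for s in executed) / total
    return alpha * weighted + (1 - alpha) * raw

if __name__ == "__main__":
    # a naive metric might set every value to 1.0; a sharper analysis
    # would assign lower scores to statements whose effects rarely reach
    # an output (the numbers below are invented)
    obs = {1: 1.0, 2: 0.8, 3: 0.1, 4: 0.5}
    print(fitness({1, 2}, obs))
    print(fitness({1, 2, 3}, obs))
```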


Proceedings Article
01 Jan 2000
TL;DR: This paper aims at exploiting the capabilities of commercial VHDL simulators to compute faulty responses without modifying the VHDL source code, and results show that simulation of a faulty circuit is no more costly than simulation of the original circuit.
Abstract: With the advent of new RT-level design and test flows, new tools are needed to migrate the activities of fault simulation, testability analysis, and test pattern generation to the RT level. This paper focuses on fault simulation at the RT level, and aims at exploiting the capabilities of commercial VHDL simulators to compute faulty responses without modifying the VHDL source code. The proposed approach was implemented as a prototype tool, and experimental results show that simulation of a faulty circuit is no more costly than simulation of the original circuit. For defining RT-level faults, we adopted a refinement of the observability-enhanced statement coverage metric. While this metric usually handles observability in an approximate way, we were able to efficiently and exactly determine the observability of single-bit stuck-at faults on all assignment statements.

20 citations
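To make the notion of observability-enhanced statement coverage concrete, the toy sketch below (the "description", fault list and checking code are all invented) counts an assignment as covered only if forcing a single bit of the assigned value is visible at an output, in the spirit of the single-bit stuck-at check the abstract mentions.

```python
def run(a, b, force=None):
    """Toy data-flow 'description' with three assignment statements.
    force = (statement_id, bit) optionally forces one bit of the assigned
    value to 1, mimicking a single-bit stuck-at fault on that statement."""
    def assign(stmt, value):
        if force is not None and force[0] == stmt:
            value |= 1 << force[1]
        return value & 0xF
    t = assign(1, a + b)       # statement 1
    u = assign(2, t & a)       # statement 2
    dead = assign(3, t ^ u)    # statement 3: never reaches the output
    return u                   # single primary output

if __name__ == "__main__":
    tests = [(1, 2), (7, 7), (15, 0)]
    for stmt in (1, 2, 3):
        observable = any(run(a, b) != run(a, b, force=(stmt, bit))
                         for a, b in tests for bit in range(4))
        status = "covered (fault observable)" if observable \
                 else "executed but not observable"
        print(f"statement {stmt}: {status}")
```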


Proceedings ArticleDOI
01 Jan 2000
TL;DR: This paper describes how fault injection techniques have been integrated in an existing co-design tool and what advantages derive from the availability of such an enhanced tool.
Abstract: The widespread adoption of embedded microprocessor-based systems for safety-critical applications mandates the use of co-design tools able to evaluate system dependability at every step of the design cycle. In this paper, we describe how fault injection techniques have been integrated in an existing co-design tool and what advantages derive from the availability of such an enhanced tool. The effectiveness of the proposed tool is assessed on a simple case study.

19 citations


Book ChapterDOI
17 Apr 2000
TL;DR: Experimental results show that in most of the standard benchmark circuits the Cellular Automaton selected by the Selfish Gene algorithm is able to reach a Fault Coverage higher than what can be obtained with current engineering practice with comparable area occupation.
Abstract: Testing is a key issue in the design and production of digital circuits: the adoption of BIST (Built-In Self-Test) techniques is increasingly popular, but requires efficient algorithms for generating the logic that produces the test vectors applied to the Unit Under Test. This paper addresses the issue of identifying a Cellular Automaton able to generate input patterns that detect stuck-at faults inside a Finite State Machine (FSM) circuit. Previous work already proposed a solution based on a Genetic Algorithm which directly identifies a Cellular Automaton able to reach good Fault Coverage of the stuck-at faults. However, that method requires 2-bit cells in the Cellular Automaton, thus resulting in a high area overhead. This paper presents a new solution, with an area occupation limited to 1 bit per cell; the improved results are made possible by the adoption of a new optimization algorithm, the Selfish Gene algorithm. Experimental results are provided, showing that in most of the standard benchmark circuits the Cellular Automaton selected by the Selfish Gene algorithm is able to reach a Fault Coverage higher than what can be obtained with current engineering practice, with comparable area occupation.

16 citations
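The Selfish Gene algorithm evolves allele frequencies rather than an explicit population. The sketch below is a minimal, simplified illustration of that principle only: in the paper the genome encodes the 1-bit-per-cell Cellular Automaton and the fitness is the fault coverage obtained by fault simulation, whereas here a toy fitness and arbitrary constants stand in.

```python
import random

def sg_optimize(n_genes, fitness, iterations=2000, epsilon=0.02, seed=1):
    """Minimal sketch of a Selfish Gene-style optimizer for binary genomes.
    The 'virtual population' is stored as the marginal probability of
    allele 1 for each gene; at every step two individuals are sampled and
    the frequencies are shifted toward the winner's alleles (the full
    algorithm also penalizes the loser; that part is omitted here)."""
    rng = random.Random(seed)
    prob = [0.5] * n_genes                      # P(gene i == 1)
    best, best_fit = None, float("-inf")
    for _ in range(iterations):
        a = [int(rng.random() < p) for p in prob]
        b = [int(rng.random() < p) for p in prob]
        fa, fb = fitness(a), fitness(b)
        winner, win_fit = (a, fa) if fa >= fb else (b, fb)
        for i, allele in enumerate(winner):
            if allele == 1:
                prob[i] = min(1.0, prob[i] + epsilon)   # reward allele 1
            else:
                prob[i] = max(0.0, prob[i] - epsilon)   # reward allele 0
        if win_fit > best_fit:
            best, best_fit = winner, win_fit
    return best, best_fit

if __name__ == "__main__":
    # toy stand-in for "fault coverage of the Cellular Automaton built
    # from this genome": count matches against an arbitrary target
    target = [1, 0, 1, 1, 0, 0, 1, 0]
    toy_fitness = lambda g: sum(x == t for x, t in zip(g, target))
    genome, fit = sg_optimize(len(target), toy_fitness)
    print("best genome:", genome, "fitness:", fit)
```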


Proceedings Article
01 Jan 2000

16 citations


Proceedings ArticleDOI
16 Jul 2000
TL;DR: This paper shows an application in the field of electronic CAD of the Selfish Gene algorithm, an evolutionary algorithm based on a recent interpretation of Darwinian theory, which is able to achieve good fault coverage results with a reduced area overhead.
Abstract: Testing is a key issue in the design and production of digital circuits, and the adoption of built-in self-test techniques is increasingly popular. This paper shows an application of the Selfish Gene algorithm, an evolutionary algorithm based on a recent interpretation of Darwinian theory, in the field of electronic CAD. A three-phase optimization algorithm is exploited for determining the structure of a built-in self-test architecture able to achieve good fault coverage with a reduced area overhead. Experimental results show that the attained fault coverage is substantially higher than what can be obtained by previously proposed methods with comparable area requirements.

13 citations


Proceedings ArticleDOI
03 Jul 2000
TL;DR: This paper presents a method able to provide a microprocessor-based system with safety capabilities by modifying only the source code of the executed application; the method exploits a set of transformations which can be applied automatically.
Abstract: This paper deals with a method able to provide a microprocessor-based system with safety capabilities by modifying only the source code of the executed application. The method exploits a set of transformations which can be applied automatically, thus greatly reducing the cost of designing a safe system and increasing the confidence in its correctness. Fault injection experiments have been performed on a sample application using two different systems based on CISC and RISC processors. Results demonstrate that the method's effectiveness is largely independent of the adopted platform.
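The specific transformation rules of the paper are not listed in the abstract; a classic example of this family of source-level transformations is duplicating variables and operations and comparing the copies before a value is used. The sketch below is hypothetical and written in Python for readability, whereas such transformations would normally be applied in the application's own source language.

```python
class FaultDetected(Exception):
    """Raised when the duplicated computations disagree."""

def checked(x, x2):
    """Compare a variable with its duplicate before using its value."""
    if x != x2:
        raise FaultDetected("mismatch between duplicated variables")
    return x

def average_original(values):
    total = 0
    for v in values:
        total += v
    return total / len(values)

def average_hardened(values):
    # every variable and operation is duplicated, and the two copies are
    # compared whenever a value is read (rule set simplified for the sketch)
    total, total2 = 0, 0
    for v in values:
        v2 = v
        total += checked(v, v2)
        total2 += checked(v, v2)
    return checked(total, total2) / len(values)

if __name__ == "__main__":
    data = [4, 8, 15, 16]
    assert average_original(data) == average_hardened(data)
    print("hardened version matches the original:", average_hardened(data))
```

A transient fault that corrupts only one of the duplicated variables at run time makes the two copies disagree, so the check raises an error instead of letting a wrong result propagate.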

Book ChapterDOI
24 Oct 2000
TL;DR: Experimental results show that the proposed techniques are able to reduce the time required by a typical Fault Injection campaign by a factor ranging from 43.9% to 96.6%.
Abstract: Simulation-based Fault Injection in VHDL descriptions is increasingly common due to the popularity of top-down design flows exploiting this language. This paper presents some techniques for reducing the time to perform the required simulation experiments. Static and dynamic methods are proposed to analyze the list of faults to be injected, removing faults as soon as their behavior is known. Common features available in most VHDL simulation environments are also exploited. Experimental results show that the proposed techniques are able to reduce the time required by a typical Fault Injection campaign by a factor ranging from 43.9% to 96.6%.


Book ChapterDOI
17 Apr 2000
TL;DR: The goal of this paper is to propose an automatic input pattern generation tool able to assist designers in generating a test bench for difficult parts of small- or medium-sized digital protocol interfaces.
Abstract: Nowadays, most design activity is performed at a high level of abstraction, so designers need to be sure that their designs are syntactically and semantically correct before starting the automatic synthesis process. The goal of this paper is to propose an automatic input pattern generation tool able to assist designers in generating a test bench for difficult parts of small- or medium-sized digital protocol interfaces. The proposed approach exploits a Genetic Algorithm connected to a commercial simulator for cultivating a set of input sequences able to execute given statements in the interface description. The approach has been evaluated on the new ITC'99 benchmark set, a collection of circuits offering a wide spectrum of complexity. Experimental results show that some portions of the circuits remained uncovered, and the subsequent manual analysis allowed the identification of design redundancies.
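A minimal sketch of the "Genetic Algorithm connected to a simulator" idea is shown below. Everything in it is invented for illustration: the toy FSM stands in for the simulated interface description, and the fitness simply counts how many target statements a candidate sequence executes.

```python
import random

def simulate(sequence):
    """Toy stand-in for the simulator: returns the set of 'statements'
    executed by a small protocol-like FSM when fed the input sequence."""
    executed, state = set(), 0
    for v in sequence:
        executed.add(("state", state))
        if state == 0 and v == 0xA5:
            state = 1                      # saw the start byte
        elif state == 1 and v > 0x7F:
            state = 2                      # header accepted
        elif state == 2 and v == 0x00:
            executed.add(("state", 3))     # hard-to-reach branch
            state = 0
        else:
            state = 0
    return executed

TARGETS = {("state", 0), ("state", 1), ("state", 2), ("state", 3)}

def fitness(seq):
    return len(simulate(seq) & TARGETS)

def evolve(pop_size=30, seq_len=8, generations=40, seed=3):
    rng = random.Random(seed)
    rand_seq = lambda: [rng.randrange(256) for _ in range(seq_len)]
    pop = [rand_seq() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            p, q = rng.sample(survivors, 2)
            cut = rng.randrange(1, seq_len)
            child = p[:cut] + q[cut:]           # one-point crossover
            if rng.random() < 0.3:              # mutation
                child[rng.randrange(seq_len)] = rng.randrange(256)
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

if __name__ == "__main__":
    best = evolve()
    print("best sequence:", [hex(v) for v in best],
          "covers", fitness(best), "of", len(TARGETS), "targets")
```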

01 Jan 2000
TL;DR: The COTEST project aimed at assessing whether it is feasible and effective to take test issues into account early in the circuit design process, when only a behavioral description of the circuit is available.
Abstract: The COTEST project aimed at assessing whether it is feasible and effective to take test issues into account early in the circuit design process, when only a behavioral description of the circuit is available. The project focused on two main problems: generation of test sequences starting from behavioral descriptions, and modification of behavioral descriptions to increase testability (Design for Testability). Due to the critical nature of this assessment project, it is extremely important to obtain as much feedback about the results as possible. The results have been discussed at meetings with industry and academic partners and in the framework of several other Community projects. Several scientific papers have been published or submitted for publication. In this report, the current dissemination status and future dissemination plans of both partners are given.

01 Jan 2000
TL;DR: Experimental results show that the proposed approach is far more effective than ABFT in terms of fault detection capability when injecting transient faults in data and code memory, at a cost of an increased memory overhead.
Abstract: Over the last years, an increasing number of safety-critical tasks have been demanded of computer systems. In particular, safety-critical computer-based applications are hitting markets where cost is a major issue, and thus solutions are required which combine fault tolerance with low cost. In this paper, a software-based approach for developing safety-critical applications is analyzed. By exploiting an ad-hoc automatic tool implementing the proposed technique, several benchmark applications have been hardened against transient errors. Fault injection campaigns have been performed to evaluate the fault detection capability of the hardened applications. Moreover, a comparison of the proposed technique with the Algorithm-Based Fault Tolerance (ABFT) approach is presented. Experimental results show that the proposed approach is far more effective than ABFT in terms of fault detection capability when injecting transient faults in data and code memory, at the cost of an increased memory overhead. Moreover, the performance penalty introduced by the proposed technique is comparable to, and sometimes lower than, that required by ABFT.

Book ChapterDOI
13 Sep 2000
TL;DR: To provide early and accurate power consumption estimation, this work proposes an automatic input sequence generation approach based on a heuristic algorithm able to upgrade a set of test vectors provided by the designer.
Abstract: Reducing chip packaging and cooling costs for deep submicron System-On-Chip (SOC) designs is an emerging issue. We present a simulation-based methodology able to realistically model the complex environment in which a SOC design operates, in order to provide early and accurate power consumption estimation. We show that a rich functional test bench provided by a designer with deep knowledge of a complex system is very often not appropriate for power analysis and can lead to power estimation errors of several orders of magnitude. To address this issue, we propose an automatic input sequence generation approach based on a heuristic algorithm able to upgrade a set of test vectors provided by the designer. The obtained sequence closely reflects the worst-case power consumption for the chip and allows examining how the chip is going to behave over time.
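One simple way to "upgrade" a designer-provided test bench toward worst-case power, shown here purely as an illustration (the paper's heuristic and power model are not reproduced), is to hill-climb on the vectors, keeping a random single-bit change only when the estimated power figure increases.

```python
import random

def toggles(sequence):
    """Switching activity of an input sequence: a crude stand-in for the
    simulated power figure used in a real flow."""
    return sum(bin(a ^ b).count("1") for a, b in zip(sequence, sequence[1:]))

def upgrade(seed_sequence, width=16, steps=500, rng_seed=7):
    """Hill-climb from the designer's functional vectors: randomly flip
    one bit of one vector and keep the change only if the estimated
    power (here: toggle count) increases."""
    rng = random.Random(rng_seed)
    seq = list(seed_sequence)
    best = toggles(seq)
    for _ in range(steps):
        i = rng.randrange(len(seq))
        candidate = list(seq)
        candidate[i] ^= 1 << rng.randrange(width)
        score = toggles(candidate)
        if score > best:
            seq, best = candidate, score
    return seq, best

if __name__ == "__main__":
    designer_vectors = [0x0001, 0x0003, 0x0007, 0x000F, 0x001F]  # invented example
    upgraded, activity = upgrade(designer_vectors)
    print("designer vectors toggle count:", toggles(designer_vectors))
    print("upgraded vectors toggle count:", activity)
```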

Book ChapterDOI
17 Apr 2000
TL;DR: This paper formulates the problem as a constrained optimization problem, solves it by resorting to an evolutionary algorithm, and empirically assesses the effectiveness of this formulation with respect to the classical unconstrained formulation.
Abstract: Modern VLSI design methodologies and manufacturing technologies are making circuits increasingly fast. The quest for higher circuit performance and integration density stems from fields such as telecommunications, where high speed and the capability of dealing with large data sets are mandatory. The design of high-speed circuits is a challenging task, and can be carried out only if designers can exploit suitable CAD tools. Among the several aspects of high-speed circuit design, controlling power consumption is today a major issue for ensuring that circuits can operate at full speed without damage. In particular, tools for fast and accurate estimation of the power consumption of high-speed circuits are required. In this paper we focus on the problem of predicting the maximum power consumption of sequential circuits. We formulate the problem as a constrained optimization problem, and solve it by resorting to an evolutionary algorithm. Moreover, we empirically assess the effectiveness of our problem formulation with respect to the classical unconstrained formulation. Finally, we report experimental results assessing the effectiveness of the prototype tool we implemented.
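The abstract does not spell out the constraints, so the sketch below only illustrates the general mechanism: an evolutionary algorithm maximizing an estimated power figure while folding an invented constraint into the fitness as a penalty term. The power model, constraint and parameters are all assumptions made for the example.

```python
import random

WIDTH, LENGTH, MAX_STEP = 8, 6, 3        # invented circuit/constraint parameters

def estimated_power(seq):
    """Stand-in for a power estimate obtained by simulation:
    here, simply the input switching activity."""
    return sum(bin(a ^ b).count("1") for a, b in zip(seq, seq[1:]))

def violation(seq):
    """Invented constraint: consecutive vectors may differ in at most
    MAX_STEP bits. Returns how much the sequence exceeds that bound."""
    return sum(max(0, bin(a ^ b).count("1") - MAX_STEP)
               for a, b in zip(seq, seq[1:]))

def fitness(seq, penalty=10):
    # constrained formulation: violations are penalized instead of forbidden
    return estimated_power(seq) - penalty * violation(seq)

def evolve(pop_size=40, generations=60, seed=5):
    rng = random.Random(seed)
    new = lambda: [rng.randrange(1 << WIDTH) for _ in range(LENGTH)]
    pop = [new() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]
        offspring = []
        while len(parents) + len(offspring) < pop_size:
            child = list(rng.choice(parents))
            child[rng.randrange(LENGTH)] ^= 1 << rng.randrange(WIDTH)  # mutation
            offspring.append(child)
        pop = parents + offspring
    best = max(pop, key=fitness)
    return best, estimated_power(best), violation(best)

if __name__ == "__main__":
    seq, power, viol = evolve()
    print("best sequence:", [hex(v) for v in seq])
    print("estimated power:", power, "constraint violation:", viol)
```

The unconstrained formulation mentioned in the abstract would correspond to dropping the penalty term and maximizing the power estimate alone.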