Author

Matteo Sonza Reorda

Bio: Matteo Sonza Reorda is an academic researcher from Polytechnic University of Turin. The author has contributed to research on topics including fault coverage and automatic test pattern generation. The author has an h-index of 32 and has co-authored 295 publications receiving 4,525 citations. Previous affiliations of Matteo Sonza Reorda include University of California, Riverside & NXP Semiconductors.


Papers
Proceedings ArticleDOI
14 Mar 2022
TL;DR: The authors propose a method to automatically compact the test programs of a given Self-Test Library (STL) targeting GPUs. The method combines a multi-level abstraction analysis that uses logic simulation to extract the microarchitectural operations triggered by the test program and the thread-level activity of each instruction, with fault simulation to assess each instruction's ability to propagate faults to an observable point.
Abstract: Nowadays, Graphics Processing Units (GPUs) are effective platforms for implementing complex algorithms (e.g., for Artificial Intelligence) in different domains (e.g., automotive and robotics), where massive parallelism and high computational effort are required. In some domains, strict safety-critical requirements exist, mandating the adoption of mechanisms to detect faults during the operational phases of a device. An effective test solution is based on Self-Test Libraries (STLs) aiming at testing devices functionally. This solution is frequently adopted for CPUs, but can also be used with GPUs. Nevertheless, the in-field constraints restrict the size and duration of acceptable STLs. This work proposes a method to automatically compact the test programs of a given STL targeting GPUs. The proposed method combines a multi-level abstraction analysis resorting to logic simulation to extract the microarchitectural operations triggered by the test program and the information about the thread-level activity of each instruction and to fault simulation to know its ability to propagate faults to an observable point. The main advantage of the proposed method is that it requires a single fault simulation to perform the compaction. The effectiveness of the proposed approach was evaluated, resorting to several test programs developed for an open-source GPU model (FlexGripPlus) compatible with NVIDIA GPUs. The results show that the method can compact test programs by up to 98.64% in code size and by up to 98.42% in terms of duration, with minimum effects on the achieved fault coverage.
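The core idea above, keeping only the instructions that contribute new detected faults, can be sketched as a set-cover-style pass over per-instruction fault data gathered from a single fault simulation. The function and data below are illustrative stand-ins, not the paper's actual tool or fault lists.

```python
# Hypothetical sketch of the single-fault-simulation compaction idea: given,
# for each instruction of a test program, the set of faults it propagates to
# an observable point, drop every instruction whose faults are already
# covered by the instructions we keep.

def compact(per_instr_faults):
    """per_instr_faults: list of (instr_id, set_of_detected_faults)."""
    kept, covered = [], set()
    # Visit instructions that detect the most faults first, so redundant
    # ones are discarded afterwards.
    for instr, faults in sorted(per_instr_faults,
                                key=lambda p: len(p[1]), reverse=True):
        if not faults <= covered:   # contributes at least one new fault
            kept.append(instr)
            covered |= faults
    return kept, covered

program = [("i0", {1, 2, 3}), ("i1", {2, 3}), ("i2", {4}), ("i3", set())]
kept, covered = compact(program)
# kept == ["i0", "i2"]: same fault coverage with half the instructions
```

A greedy pass like this needs the fault data only once, which mirrors the paper's main advantage of requiring a single fault simulation for the whole compaction.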

5 citations

Proceedings ArticleDOI
01 Nov 2016
TL;DR: The method incorporates hierarchical, fast yet accurate modelling of NBTI-induced delays at the transistor, gate, and path levels to generate rejuvenation assembler programs with an Evolutionary Algorithm, and aims at extending the reliable lifetime of nanoelectronic processors.
Abstract: The time-dependent variation caused by Negative Bias Temperature Instability (NBTI) is agreed to be one of the main reliability concerns in integrated circuits implemented with current nanotechnology nodes. NBTI increases the threshold voltage of pMOS transistors: hence, it slows down signal propagation along logic paths between flip-flops. It may cause intermittent faults and, ultimately, permanent functional failures in processor circuits. In this paper, we study an NBTI mitigation approach in processor designs by rejuvenation of pMOS transistors along NBTI-critical paths. The method incorporates hierarchical fast, yet accurate modelling of NBTI-induced delays at transistor, gate and path levels for generation of rejuvenation Assembler programs using an Evolutionary Algorithm. These programs are applied further as an execution overhead to drive those pMOS transistors to the recovery phase, which are the most critical for the NBTI-induced path delay in processors. The experimental results demonstrate efficiency of evolutionary generation and significant reduction of NBTI-induced delays by the rejuvenation stimuli with an execution overhead of 0.1% or less. The proposed approach aims at extending the reliable lifetime of nanoelectronic processors.
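The evolutionary generation of rejuvenation programs can be sketched as a simple mutation-driven loop whose fitness rewards sequences that drive many NBTI-critical pMOS transistors into recovery. The instruction set, the transistor map, and the fitness model below are toy assumptions for illustration, not the paper's transistor/gate/path-level delay model.

```python
# Minimal evolutionary-loop sketch of the rejuvenation-stimuli idea: evolve
# short instruction sequences so that as many NBTI-critical pMOS devices as
# possible are driven to the recovery state.
import random

INSTRS = ["NOP", "AND", "OR", "XOR", "MOV"]
# Hypothetical map: which critical transistors each instruction recovers.
CRITICAL = {"AND": {0, 1}, "OR": {1, 2}, "XOR": {3}, "MOV": {4}, "NOP": set()}

def fitness(seq):
    recovered = set().union(*(CRITICAL[i] for i in seq))
    return len(recovered)   # number of critical transistors in recovery

def evolve(generations=50, pop_size=20, length=4, seed=0):
    rng = random.Random(seed)
    pop = [[rng.choice(INSTRS) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]       # elitist selection
        children = []
        for parent in survivors:
            child = parent[:]
            child[rng.randrange(length)] = rng.choice(INSTRS)  # mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
```

In the real setting the fitness would come from the hierarchical delay model, and the sequence length would be bounded to keep the 0.1% execution overhead reported above.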

5 citations

Proceedings ArticleDOI
28 Jun 2021
TL;DR: The authors propose to load a pattern into the DUT not by shifting it in one bit at a time, but by loading the entire pattern at once; this procedure allows for conservative stress measures, making it suitable for stress analysis purposes.
Abstract: Burn-In equipment provides both external and internal stress to the device under test. External stress, such as thermal stress, is provided by a climatic chamber or by socket-level local temperature forcing tools and aims at aging the circuit material, while internal stress, such as electrical stress, consists of driving the circuit nodes to produce high internal activity. To support internal stress, Burn-In test equipment is usually characterized by large memory capabilities required to store precomputed patterns that are then sequenced to the circuit inputs. Because of the increasing complexity and density of the new generations of SoCs, evaluating the effectiveness of the patterns applied to a Device under Test (DUT) through a simulation phase requires long periods of time. Moreover, topology-related considerations are becoming more and more important in modern high-density designs, so a way to include this information into the evaluation has to be devised. In this paper we show a feasible solution to this problem: the idea is to load a pattern into the DUT not by shifting it in one bit at a time, but by loading the entire pattern at once; this procedure allows for conservative stress measures, so it fits stress analysis purposes. Moreover, a method to take the topology of the DUT into account when calculating the activity metrics is proposed, so as to obtain stress metrics that better represent the activity a circuit is subject to. An automotive chip of about 20 million gates is considered as a case study. With it we show both the feasibility and the effectiveness of the proposed methodology.
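A topology-aware activity metric of the kind described above can be sketched by weighting each node's switching activity by its local gate density, so that toggling in crowded regions of the layout contributes more stress. The data and the weighting rule below are assumptions for illustration, not the paper's actual metric.

```python
# Illustrative topology-weighted stress metric: each node's toggle count is
# scaled by (1 + number of adjacent gates), compared against the plain,
# topology-blind toggle total.

def weighted_activity(toggles, neighbours):
    """toggles: node -> toggle count; neighbours: node -> adjacent gates."""
    total = sum(count * (1 + neighbours.get(node, 0))
                for node, count in toggles.items())
    plain = sum(toggles.values())
    return total, plain

toggles = {"n1": 10, "n2": 3, "n3": 7}
neighbours = {"n1": 4, "n2": 0, "n3": 2}
weighted, plain = weighted_activity(toggles, neighbours)
# weighted == 10*5 + 3*1 + 7*3 == 74; plain == 20
```

The gap between the two numbers shows how a topology-blind metric can understate the stress seen by dense regions of a 20-million-gate design.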

5 citations

Proceedings ArticleDOI
01 Jan 1999
TL;DR: A prototype tool and an experimental analysis show that VEGA2 provides a larger number of correct results than both an exact method and the previous GA-based approach, so it can increase confidence in the validity of an optimization process.
Abstract: We have presented VEGA2: a Genetic Algorithm-based approach to the problem of equivalence verification of sequential circuits. Although it sacrifices the exactness of the verification, the advantages of such an approach lie in the ability to handle large designs and in the possibility to easily trade off CPU time against confidence in the result (by tuning the maximum number of generations). VEGA2 is not a replacement for exact verification tools but a complement: when the complexity of the circuits prevents the use of a BDD-based algorithm, it is still able to provide meaningful results. We also presented a prototype tool and an experimental analysis showing that VEGA2 provides a larger number of correct results than both an exact method and the previous GA-based approach. Thus it can increase confidence in the validity of an optimization process.

5 citations

Proceedings ArticleDOI
07 Jul 2014
TL;DR: This paper focuses on Control Flow Errors (CFEs) and extends a previously proposed method based on the usage of the debug interface existing in several processors/controllers, which achieves good detection capability with very limited impact on the system development flow and reduced hardware cost.
Abstract: Transient faults can affect the behavior of electronic systems, and represent a major issue in many safety-critical applications. This paper focuses on Control Flow Errors (CFEs) and extends a previously proposed method, based on the usage of the debug interface existing in several processors/controllers. The new method achieves a good detection capability with very limited impact on the system development flow and reduced hardware cost: moreover, the proposed technique does not involve any change either in the processor hardware or in the application software, and works even if the processor uses caches. Experimental results are reported, showing both the advantages and the costs of the method.

5 citations


Cited by
01 Jan 1999
TL;DR: This research organizes, presents, and analyzes contemporary Multiobjective Evolutionary Algorithm (MOEA) research and associated Multiobjective Optimization Problems (MOPs), and uses a consistent MOEA terminology and notation to present a complete, contemporary view of the current MOEA "state of the art" and possible future research.
Abstract: This research organizes, presents, and analyzes contemporary Multiobjective Evolutionary Algorithm (MOEA) research and associated Multiobjective Optimization Problems (MOPs). Using a consistent MOEA terminology and notation, each cited MOEA's key factors are presented in tabular form for ease of MOEA identification and selection. A detailed quantitative and qualitative MOEA analysis is presented, providing a basis for conclusions about various MOEA-related issues. The traditional notion of building blocks is extended to the MOP domain in an effort to develop more effective and efficient MOEAs. Additionally, the MOEA community's limited test suites contain various functions whose origins and rationale for use are often unknown. Thus, using general test suite guidelines, appropriate MOEA test function suites are substantiated and generated. An experimental methodology incorporating a solution database and appropriate metrics is offered as a proposed evaluation framework allowing absolute comparisons of specific MOEA approaches. Taken together, this document's classifications, analyses, and new innovations present a complete, contemporary view of the current MOEA "state of the art" and possible future research. Researchers with basic EA knowledge may also use part of it as a largely self-contained introduction to MOEAs.

1,287 citations

Book
31 Jan 1993
TL;DR: This book is a core reference for graduate students and CAD professionals and presents a balance of theory and practice in an intuitive manner.
Abstract: From the Publisher: This work covers all aspects of physical design. The book is a core reference for graduate students and CAD professionals. For students, concept and algorithms are presented in an intuitive manner. For CAD professionals, the material presents a balance of theory and practice. An extensive bibliography is provided which is useful for finding advanced material on a topic. At the end of each chapter, exercises are provided, which range in complexity from simple to research level.

927 citations

Journal ArticleDOI
TL;DR: This paper presents crossover and mutation operators developed to tackle the Travelling Salesman Problem with Genetic Algorithms under different representations: binary, path, adjacency, ordinal, and matrix.
Abstract: This paper is the result of a literature study carried out by the authors. It is a review of the different attempts made to solve the Travelling Salesman Problem with Genetic Algorithms. We present crossover and mutation operators developed to tackle the Travelling Salesman Problem with Genetic Algorithms under different representations: binary representation, path representation, adjacency representation, ordinal representation, and matrix representation. Likewise, we show the experimental results obtained on different standard examples using combinations of crossover and mutation operators with the path representation.
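One of the classic path-representation operators covered by such surveys is order crossover (OX): copy a slice of cities from one parent, then fill the remaining positions with the missing cities in the order they appear in the other parent. The sketch below is a common variant of OX, shown for illustration.

```python
# Order crossover (OX) on the path representation: child inherits the slice
# p1[i:j] verbatim, and the remaining cities in the order of parent p2.

def order_crossover(p1, p2, i, j):
    n = len(p1)
    child = [None] * n
    child[i:j] = p1[i:j]                        # inherited slice from p1
    fill = [c for c in p2 if c not in child]    # missing cities, p2 order
    for k in list(range(j, n)) + list(range(i)):
        child[k] = fill.pop(0)                  # wrap around after slice
    return child

p1 = [1, 2, 3, 4, 5, 6, 7]
p2 = [3, 7, 5, 1, 6, 2, 4]
child = order_crossover(p1, p2, 2, 5)
# child == [6, 2, 3, 4, 5, 7, 1], a valid tour mixing both parents
```

Unlike naive one-point crossover, OX always yields a valid permutation, which is why it and related operators (PMX, cycle crossover) dominate the path-representation literature the review discusses.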

839 citations

Journal ArticleDOI
TL;DR: A taxonomy of hybrid metaheuristics is presented in an attempt to provide a common terminology and classification mechanism; while framed in terms of metaheuristics, it is also applicable to most types of heuristics and exact optimization algorithms.
Abstract: Hybrid metaheuristics have received considerable interest in recent years in the field of combinatorial optimization. A wide variety of hybrid approaches have been proposed in the literature. In this paper, a taxonomy of hybrid metaheuristics is presented in an attempt to provide a common terminology and classification mechanism. The taxonomy, while presented in terms of metaheuristics, is also applicable to most types of heuristics and exact optimization algorithms. As an illustration of the usefulness of the taxonomy, an annotated bibliography is given which classifies a large number of hybrid approaches according to the taxonomy.

829 citations

Journal Article
TL;DR: In benchmark studies using a set of large industrial circuit verification instances, this method is greatly more efficient than BDD-based symbolic model checking, and compares favorably to some recent SAT-based model checking methods on positive instances.
Abstract: We consider a fully SAT-based method of unbounded symbolic model checking based on computing Craig interpolants. In benchmark studies using a set of large industrial circuit verification instances, this method is greatly more efficient than BDD-based symbolic model checking, and compares favorably to some recent SAT-based model checking methods on positive instances.

775 citations