Author

Ernesto Sanchez

Other affiliations: Instituto Politécnico Nacional
Bio: Ernesto Sanchez is an academic researcher from Polytechnic University of Turin. The author has contributed to research in topics including fault coverage and automatic test pattern generation. The author has an h-index of 17 and has co-authored 118 publications receiving 1,172 citations. Previous affiliations of Ernesto Sanchez include Instituto Politécnico Nacional.


Papers
Journal ArticleDOI
TL;DR: A taxonomy for different SBST methodologies according to their test program development philosophy is proposed, and research approaches based on SBST techniques for optimizing other key aspects are summarized.
Abstract: This article discusses the potential role of software-based self-testing in the microprocessor test and validation process, as well as its supplementary role in other classic functional- and structural-test methods. In addition, the article proposes a taxonomy for different SBST methodologies according to their test program development philosophy, and summarizes research approaches based on SBST techniques for optimizing other key aspects.
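
To make the idea concrete, here is a minimal sketch of an SBST-style routine (in Python rather than the assembly a real test program would use; the unit model, test patterns, and signature scheme are all illustrative, not taken from the article). A self-test program exercises a unit with deterministic stimuli and compacts the responses into a signature compared against a fault-free reference:

```python
# Illustrative SBST sketch: exercise a unit, compact responses, compare.

def alu_add(a, b):
    """Stand-in for the hardware adder under test (32-bit wrap-around)."""
    return (a + b) & 0xFFFFFFFF

def signature_of(patterns):
    """Rotate-and-xor compaction, a software stand-in for a MISR."""
    sig = 0
    for a, b in patterns:
        sig = (((sig << 1) | (sig >> 31)) & 0xFFFFFFFF) ^ alu_add(a, b)
    return sig

PATTERNS = [(0x00000000, 0x00000000), (0xFFFFFFFF, 0x00000001),
            (0xAAAAAAAA, 0x55555555), (0x7FFFFFFF, 0x7FFFFFFF)]
GOLDEN = signature_of(PATTERNS)  # on silicon: a constant from a fault-free run

def sbst_adder_test():
    return "PASS" if signature_of(PATTERNS) == GOLDEN else "FAIL"

print(sbst_adder_test())
```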

231 citations

Journal ArticleDOI
TL;DR: This work focuses on simulation-based design validation performed at the behavioral register-transfer level, where designers typically write assertions inside hardware description language (HDL) models and run extensive simulations to increase confidence in device correctness.
Abstract: Design validation is a critical step in the development of present-day microprocessors, and some authors suggest that up to 60% of the design cost is attributable to this activity. Of the numerous activities performed in different stages of the design flow and at different levels of abstraction, we focus on simulation-based design validation performed at the behavioral register-transfer level. Designers typically write assertions inside hardware description language (HDL) models and run extensive simulations to increase confidence in device correctness. Simulation results can also be useful in comparing the HDL model against higher-level references or instruction set simulators. Microprocessor validation has become more difficult since the adoption of pipelined architectures, mainly because you can't evaluate the behavior of a pipelined microprocessor by considering one instruction at a time; a pipeline's behavior depends on a sequence of instructions and all their operands.
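
The sequence-dependence problem is easy to reproduce in a toy model. The sketch below (with an invented two-instruction ISA, not anything from the article) compares a reference instruction-set simulator against a two-stage pipelined model with a deliberately missing forwarding path: the two models agree on any single instruction, but diverge on a read-after-write dependent pair.

```python
# Toy demonstration of why pipelines must be validated with sequences.

def iss_run(program, nregs=4):
    """Reference instruction-set simulator: one instruction at a time."""
    regs = [0] * nregs
    for op, *args in program:
        if op == "li":
            regs[args[0]] = args[1]
        elif op == "add":
            regs[args[0]] = regs[args[1]] + regs[args[2]]
    return regs

def pipelined_run(program, nregs=4):
    """Two-stage model with a bug: operands are read before the previous
    instruction's result is written back (no forwarding path)."""
    regs, pending = [0] * nregs, None
    for op, *args in program:
        read = list(regs)          # operand read (decode stage)
        if pending:                # previous result commits only now
            regs[pending[0]] = pending[1]
        if op == "li":
            pending = (args[0], args[1])
        elif op == "add":
            pending = (args[0], read[args[1]] + read[args[2]])
    if pending:
        regs[pending[0]] = pending[1]
    return regs

single = [("add", 1, 2, 3)]
assert iss_run(single) == pipelined_run(single)   # agrees in isolation
pair = [("li", 0, 5), ("add", 1, 0, 0)]
print(iss_run(pair), pipelined_run(pair))         # RAW hazard exposed
```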

129 citations

Journal ArticleDOI
TL;DR: This paper illustrates the several issues that need to be taken into account when generating test programs for on-line execution and proposes an overall development flow, based on ordered generation of test programs, that minimizes the computational effort.
Abstract: Software-Based Self-Test is an effective methodology for devising the online testing of Systems-on-Chip. In the automotive field, a set of test programs to be run during mission mode is also called a Core Self-Test library. This paper introduces several new contributions: (1) it illustrates the issues that need to be taken into account when generating test programs for on-line execution; (2) it proposes an overall development flow based on ordered generation of test programs that minimizes the computational effort; (3) it provides guidelines for allowing the coexistence of the Core Self-Test library with the mission application while guaranteeing execution robustness. The proposed methodology has been applied to a large industrial case study. The coverage level reached after one year of team work exceeds 87 percent stuck-at fault coverage, and the execution time is compliant with the ISO 26262 specification. Experimental results suggest that alternative approaches may require excessive evaluation time, making the generation flow unfeasible for large designs.
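
A rough sketch of what such an ordered flow can look like is given below (module names, fault counts, and the generator and fault-simulation stubs are all invented for illustration): modules are processed from largest to smallest fault population, and each new test program is fault-simulated only against the faults still undetected, so the computational effort shrinks as the flow proceeds.

```python
# Illustrative ordered-generation loop for a self-test library.
import random
random.seed(0)

modules = {"alu": 900, "mul": 600, "lsu": 400, "ctrl": 250}  # made-up fault counts

def generate_program(module):
    """Stand-in for the real generator (e.g., an evolutionary engine)."""
    return f"test_{module}_{random.randrange(1000):03d}"

def fault_simulate(program, undetected):
    """Stand-in fault simulation: returns the faults the program detects."""
    return {f for f in undetected if random.random() < 0.7}

undetected = {f"{m}:{i}" for m, n in modules.items() for i in range(n)}
total = len(undetected)
for module in sorted(modules, key=modules.get, reverse=True):
    prog = generate_program(module)
    targets = {f for f in undetected if f.startswith(module)}
    undetected -= fault_simulate(prog, targets)   # simulate only what remains
    coverage = 100 * (total - len(undetected)) / total
    print(f"{prog}: coverage now {coverage:.1f}%")
```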

67 citations

Proceedings ArticleDOI
11 Mar 2019
TL;DR: This work presents a methodology to evaluate the impact of permanent faults affecting CNNs used for automotive applications, through a fault injection environment built upon the darknet open-source DNN framework.
Abstract: Deep Learning, and in particular its implementation using Convolutional Neural Networks (CNNs), is currently one of the most intensively and widely used predictive models for safety-critical applications like autonomous driving assistance, for the recognition of pedestrians, objects, and structures. Today, ensuring the reliability of these innovations is becoming very important since they involve human lives. One of the peculiarities of CNNs is their inherent resilience to errors, due to the iterative nature of the learning process. In this work we present a methodology to evaluate the impact of permanent faults affecting CNNs used in automotive applications. Such a characterization is performed through a fault injection environment built upon the darknet open-source DNN framework. Results are reported for fault injection campaigns in which permanent faults affect the connection weights of LeNet and YOLO; the behavior of the corrupted CNN is classified according to the criticality of the introduced deviation.
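
A minimal sketch of such a campaign is shown below; it uses a tiny NumPy stand-in classifier instead of darknet, and both the fault model (a single bit flip in one float32 weight, a common proxy for a permanent fault) and the masked/observed/critical classification are illustrative, not the paper's exact scheme.

```python
# Illustrative weight-level fault injection campaign on a stand-in model.
import numpy as np
rng = np.random.default_rng(0)

W = rng.standard_normal((8, 4)).astype(np.float32)  # stand-in weight matrix

def infer(W, x):
    out = W @ x
    return np.argmax(out), out

def flip_bit(W, idx, bit):
    """Return a copy of W with one bit flipped in one float32 weight."""
    Wf = W.copy()
    view = Wf.reshape(-1).view(np.uint32)
    view[idx] ^= np.uint32(1 << bit)
    return Wf

x = rng.standard_normal(4).astype(np.float32)
golden_cls, golden_out = infer(W, x)
counts = {"masked": 0, "observed": 0, "critical": 0}
for _ in range(200):
    idx, bit = rng.integers(W.size), rng.integers(32)
    cls, out = infer(flip_bit(W, idx, bit), x)
    if np.array_equal(out, golden_out):
        counts["masked"] += 1      # fault has no visible effect
    elif cls == golden_cls:
        counts["observed"] += 1    # output deviates, classification holds
    else:
        counts["critical"] += 1    # top-1 class changes: safety-relevant
print(counts)
```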

64 citations

Journal ArticleDOI
TL;DR: An evolutionary adaptive drift-correction method designed to work with state-of-the-art classification systems: it exploits an evolutionary strategy to iteratively tweak the coefficients of a linear transformation that transparently corrects raw sensor measures, thus mitigating the negative effects of drift.
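
A sketch of this kind of loop follows (data, drift model, and fitness are invented for illustration, not taken from the paper): a simple (1+λ)-style evolution strategy tweaks per-sensor gain and offset coefficients so that a frozen nearest-centroid classifier keeps working on drifted measures.

```python
# Illustrative evolutionary drift correction: evolve y = a*x + b per sensor.
import numpy as np
rng = np.random.default_rng(1)

centroids = np.array([[0.0, 0.0], [3.0, 3.0]])       # frozen classifier
labels = rng.integers(2, size=200)
X = rng.standard_normal((200, 2)) * 0.5 + centroids[labels]
drifted = X * -0.8 + 2.5                              # simulated sensor drift

def fitness(params):
    """Negative mean distance of corrected samples to their true centroid."""
    a, b = params[:2], params[2:]
    corrected = drifted * a + b
    return -np.linalg.norm(corrected - centroids[labels], axis=1).mean()

parent = np.array([1.0, 1.0, 0.0, 0.0])               # identity correction
best = fitness(parent)
for gen in range(200):
    children = parent + rng.normal(0, 0.1, size=(8, 4))   # lambda = 8
    fits = np.array([fitness(c) for c in children])
    if fits.max() > best:
        best, parent = fits.max(), children[fits.argmax()]

a, b = parent[:2], parent[2:]
corrected = drifted * a + b
pred = np.argmin(((corrected[:, None, :] - centroids) ** 2).sum(-1), axis=1)
print(f"accuracy after correction: {(pred == labels).mean():.2f}")
```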

55 citations


Cited by
Book
26 Mar 2008
TL;DR: A unique overview of this exciting technique, written by three of the most active scientists in GP: starting from an ooze of random computer programs, GP progressively refines them through processes of mutation and sexual recombination until high-fitness solutions emerge.
Abstract: Genetic programming (GP) is a systematic, domain-independent method for getting computers to solve problems automatically starting from a high-level statement of what needs to be done. Using ideas from natural evolution, GP starts from an ooze of random computer programs, and progressively refines them through processes of mutation and sexual recombination, until high-fitness solutions emerge. All this without the user having to know or specify the form or structure of solutions in advance. GP has generated a plethora of human-competitive results and applications, including novel scientific discoveries and patentable inventions. This unique overview of this exciting technique is written by three of the most active scientists in GP. See www.gp-field-guide.org.uk for more information on the book.
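
A minimal sketch of the process the book describes is given below (representation, operators, and parameters are all illustrative): random expression trees are evolved by subtree crossover and mutation toward a target function, here x² + x.

```python
# Minimal genetic-programming sketch: evolve expression trees toward x*x + x.
import random
random.seed(42)

OPS = {"+": lambda a, b: a + b, "*": lambda a, b: a * b}
TERMS = ["x", 1.0]

def random_tree(depth=3):
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMS)
    return (random.choice(list(OPS)), random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    if tree == "x":
        return x
    if isinstance(tree, float):
        return tree
    op, left, right = tree
    return OPS[op](evaluate(left, x), evaluate(right, x))

def error(tree):
    pts = [i / 2 for i in range(-6, 7)]
    return sum((evaluate(tree, x) - (x * x + x)) ** 2 for x in pts)

def random_subtree(t):
    while isinstance(t, tuple) and random.random() < 0.5:
        t = random.choice(t[1:])
    return t

def graft(a, donor):
    """Replace one randomly chosen subtree of a with donor."""
    if not isinstance(a, tuple) or random.random() < 0.3:
        return donor
    if random.random() < 0.5:
        return (a[0], graft(a[1], donor), a[2])
    return (a[0], a[1], graft(a[2], donor))

pop = [random_tree() for _ in range(60)]
for gen in range(30):
    pop.sort(key=error)
    nxt = pop[:2]                                     # elitism
    while len(nxt) < len(pop):
        p1 = min(random.sample(pop, 3), key=error)    # tournament selection
        p2 = min(random.sample(pop, 3), key=error)
        child = graft(p1, random_subtree(p2))         # subtree crossover
        if random.random() < 0.2:
            child = graft(child, random_tree(2))      # subtree mutation
        nxt.append(child)
    pop = nxt
best = min(pop, key=error)
print(error(best), best)
```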

1,856 citations

Journal ArticleDOI
John R. Koza
TL;DR: It is predicted that the increased availability of computing power (through both parallel computing and Moore's law) should result in the production, in the future, of an increasing flow of human-competitive results, as well as more intricate and impressive results.
Abstract: Genetic programming has now been used to produce at least 76 instances of results that are competitive with human-produced results. These human-competitive results come from a wide variety of fields, including quantum computing circuits, analog electrical circuits, antennas, mechanical systems, controllers, game playing, finite algebras, photonic systems, image recognition, optical lens systems, mathematical algorithms, cellular automata rules, bioinformatics, sorting networks, robotics, assembly code generation, software repair, scheduling, communication protocols, symbolic regression, reverse engineering, and empirical model discovery. This paper observes that, despite considerable variation in the techniques employed by the various researchers and research groups that produced these human-competitive results, many of the results share several common features. Many of the results were achieved by using a developmental process and by using native representations regularly used by engineers in the fields involved. The best individual in the initial generation of the run of genetic programming often contains only a small number of operative parts. Most of the results that duplicated the functionality of previously issued patents were novel solutions, not infringing solutions. In addition, the production of human-competitive results, as well as the increased intricacy of the results, are broadly correlated to increased availability of computing power tracked by Moore's law. The paper ends by predicting that the increased availability of computing power (through both parallel computing and Moore's law) should result in the production, in the future, of an increasing flow of human-competitive results, as well as more intricate and impressive results.

315 citations

Journal ArticleDOI
TL;DR: Experiments on the popular sensor drift data with multiple batches collected using E-nose system clearly demonstrate that the proposed DAELM significantly outperforms existing drift-compensation methods without cumbersome measures, and also bring new perspectives for ELM.
Abstract: This paper addresses an important issue known as sensor drift, which exhibits a nonlinear dynamic property in electronic noses (E-noses), from the viewpoint of machine learning. Traditional methods for drift compensation are laborious and costly owing to the frequent acquisition and labeling process needed to recalibrate gas samples. Extreme learning machines (ELMs) have been confirmed to be efficient and effective learning techniques for pattern recognition and regression. However, ELMs primarily focus on supervised, semisupervised, and unsupervised learning problems in a single domain (i.e., the source domain). To the best of our knowledge, ELM with cross-domain learning capability has never been studied. This paper proposes a unified framework called domain adaptation extreme learning machine (DAELM), which learns a robust classifier by leveraging a limited number of labeled data from the target domain for drift compensation as well as gas recognition in E-nose systems, without losing the computational efficiency and learning ability of traditional ELM. In the unified framework, two algorithms called source DAELM (DAELM-S) and target DAELM (DAELM-T) are proposed. In order to clarify the differences among ELM, DAELM-S, and DAELM-T, two remarks are provided. Experiments on the popular sensor drift data with multiple batches collected using an E-nose system clearly demonstrate that the proposed DAELM significantly outperforms existing drift-compensation methods without cumbersome measures, and also brings new perspectives for ELM.
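
For readers unfamiliar with the building block, a minimal ELM sketch follows (illustrative only: the paper's DAELM-S and DAELM-T extend this with regularization terms that leverage a few labeled target-domain samples, which are omitted here). The key ELM property is that the hidden layer is random and fixed, so training reduces to a regularized least-squares solve for the output weights.

```python
# Minimal ELM sketch: random hidden layer + ridge-regression output weights.
import numpy as np
rng = np.random.default_rng(0)

def elm_train(X, Y, n_hidden=50, C=1.0):
    W = rng.standard_normal((X.shape[1], n_hidden))   # random, never trained
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)                            # hidden activations
    # beta = (H^T H + I/C)^(-1) H^T Y  (regularized least squares)
    beta = np.linalg.solve(H.T @ H + np.eye(n_hidden) / C, H.T @ Y)
    return W, b, beta

def elm_predict(model, X):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

# Toy source-domain data: two Gaussian classes with one-hot targets.
X = np.vstack([rng.normal(0, 1, (100, 5)), rng.normal(2, 1, (100, 5))])
Y = np.vstack([np.tile([1, 0], (100, 1)), np.tile([0, 1], (100, 1))]).astype(float)
model = elm_train(X, Y)
acc = (elm_predict(model, X).argmax(1) == Y.argmax(1)).mean()
print(f"training accuracy: {acc:.2f}")
```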

283 citations