scispace - formally typeset
Author

Celia Lopez-Ongil

Other affiliations: Carlos III Health Institute
Bio: Celia Lopez-Ongil is an academic researcher from Charles III University of Madrid. The author has contributed to research on the topics of fault injection and emulation, has an h-index of 13, and has co-authored 70 publications receiving 729 citations. Previous affiliations of Celia Lopez-Ongil include the Carlos III Health Institute.


Papers
Journal ArticleDOI
TL;DR: In this paper, a very fast and cost-effective solution for SEU sensitivity evaluation is presented, which uses FPGA emulation in an autonomous manner to fully exploit the FPGA's emulation speed.
Abstract: The appearance of nanometer technologies has produced a significant increase in integrated circuit sensitivity to radiation, making the occurrence of soft errors much more frequent, not only in applications working in harsh environments, like aerospace circuits, but also in applications working at the earth's surface. Therefore, hardened circuits are currently demanded in many applications where fault tolerance was not a concern in the very recent past. To this purpose, efficient hardness evaluation solutions are required to deal with the increasing size and complexity of modern VLSI circuits. In this paper, a very fast and cost-effective solution for SEU sensitivity evaluation is presented. The proposed approach uses FPGA emulation in an autonomous manner to fully exploit the FPGA emulation speed. Three different techniques to implement it are proposed and analyzed. Experimental results show that the proposed Autonomous Emulation approach can reach execution rates higher than one million faults per second, providing a performance improvement of two orders of magnitude with respect to previous approaches. These rates make it feasible to run very large fault injection campaigns that were not possible in the past.
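As a rough illustration of the idea only (not the paper's FPGA implementation), the skeleton of an autonomous fault-injection campaign can be sketched in software: a fault-free golden run is compared against one run per possible (cycle, bit) SEU. The toy accumulator circuit, its threshold output, and all names here are illustrative assumptions.

```python
def run(inputs, fault=None):
    # Cycle-accurate toy model of a device under test: a 4-bit
    # accumulator whose observable output is one threshold flag.
    # `fault` is an optional (cycle, bit) pair: that state bit is
    # flipped at that cycle, emulating an SEU in a flip-flop.
    acc = 0
    for t, x in enumerate(inputs):
        acc = (acc + x) & 0xF              # 4-bit state register
        if fault is not None and fault[0] == t:
            acc ^= 1 << fault[1]           # inject the bit flip
    return acc > 7                         # circuit output

def campaign(inputs):
    # Exhaustive autonomous campaign: one emulated run per possible
    # (cycle, bit) SEU, each compared against the fault-free golden run.
    golden = run(inputs)
    faults = [(t, b) for t in range(len(inputs)) for b in range(4)]
    failures = sum(run(inputs, f) != golden for f in faults)
    return failures, len(faults)

print(campaign([3, 5, 2, 7]))  # (observable failures, faults injected)
```

The point of the paper's contribution is that the FPGA performs this whole loop autonomously, without host communication per fault, which is what makes million-faults-per-second rates possible.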

142 citations

Journal ArticleDOI
TL;DR: Experimental results demonstrate that AMUSE can emulate soft error effects for complex circuits including microprocessors and memories, considering the real delays of an ASIC technology, and can support massive fault injection campaigns, on the order of tens of millions of faults, within an acceptable time.
Abstract: Estimation of soft error sensitivity is crucial in order to devise optimal mitigation solutions that can satisfy reliability requirements with reduced impact on area, performance, and power consumption. In particular, the estimation of Single Event Transient (SET) effects for complex systems that include a microprocessor is challenging, due to the huge potential number of different faults and effects that must be considered, and the delay-dependent nature of SET effects. In this paper, we propose a multilevel FPGA emulation-based fault injection approach for evaluation of SET effects called AMUSE (Autonomous MUltilevel emulation system for Soft Error evaluation). This approach integrates Gate level and Register-Transfer level models of the circuit under test in an FPGA and is able to switch to the appropriate model as needed during emulation. Fault injection is performed at the Gate level, which provides delay accuracy, while fault propagation across clock cycles is performed at the Register-Transfer level for higher performance. Experimental results demonstrate that AMUSE can emulate soft error effects for complex circuits including microprocessors and memories, considering the real delays of an ASIC technology, and can support massive fault injection campaigns, on the order of tens of millions of faults, within an acceptable time.
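The multilevel idea can be caricatured in a few lines of software (a sketch under assumed toy models, not AMUSE itself): a fast behavioral model runs on every cycle except the injection cycle, where a slower "gate-level" model that exposes internal nodes is swapped in.

```python
def rtl_step(state, x):
    # Register-Transfer level model: fast, no internal node detail.
    return (state + x) & 0xFF

def gate_step(state, x, set_node=None):
    # Gate-level model of the same register update: an 8-bit
    # ripple-carry adder whose internal carry nodes are exposed, so a
    # Single Event Transient can be injected on carry node `set_node`.
    carry, out = 0, 0
    for i in range(8):
        a, b = (state >> i) & 1, (x >> i) & 1
        out |= (a ^ b ^ carry) << i              # sum bit
        carry = (a & b) | (carry & (a ^ b))      # next carry
        if set_node == i:
            carry ^= 1                           # transient pulse
    return out

def run(inputs, fault=None):
    # Multilevel emulation in miniature: the fast RT-level model runs
    # on every cycle except the injection cycle, where the gate-level
    # model with the SET is used (fault = (cycle, carry_node)).
    state = 0
    for t, x in enumerate(inputs):
        if fault is not None and fault[0] == t:
            state = gate_step(state, x, set_node=fault[1])
        else:
            state = rtl_step(state, x)
    return state
```

With no fault, both models agree cycle by cycle; with a fault such as `run([10, 20, 30], fault=(1, 3))`, the result diverges from the golden `run([10, 20, 30])`, and only the injection cycle pays the gate-level cost.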

102 citations

Proceedings ArticleDOI
13 Jul 2011
TL;DR: A solution for accelerating the statistical tests of the Diehard battery using reconfigurable hardware is described, benefiting from task parallelization and high operating frequencies.
Abstract: Pseudorandom number generators (PRNGs) are used frequently in secure data processing algorithms. Measuring randomness is an essential test performed on these generators; it helps to gauge the security of the designed algorithm by verifying that the processed data is strongly mixed. This paper describes a solution for accelerating the statistical tests of the Diehard battery using reconfigurable hardware, benefiting from task parallelization and high operating frequencies. With this proposal, users can obtain a fast, cheap, and reliable measure of randomness properties, enabling a complete exploration of the design space to produce better devices in shorter times.
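For illustration only, here is a simple chi-square byte-frequency check in the spirit of such statistical batteries. It is not one of the actual Diehard tests, and this software sketch does not model the FPGA parallelization that is the paper's contribution; the streams and thresholds below are illustrative assumptions.

```python
import random

def chi_square_bytes(data):
    # Chi-square goodness-of-fit of byte frequencies against the
    # uniform distribution (255 degrees of freedom). A healthy stream
    # should score near 255; a biased one scores far higher.
    counts = [0] * 256
    for b in data:
        counts[b] += 1
    expected = len(data) / 256
    return sum((c - expected) ** 2 / expected for c in counts)

rng = random.Random(42)
good = bytes(rng.randrange(256) for _ in range(65536))  # plausible PRNG output
bad = bytes(i % 16 for i in range(65536))               # strongly biased stream

print(chi_square_bytes(good), chi_square_bytes(bad))
```

Each Diehard test is an independent pass over the same stream, which is exactly why a battery like this parallelizes well across dedicated hardware blocks.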

72 citations

Journal ArticleDOI
TL;DR: In this article, a unified emulation environment combining two FPGA-based fault injection techniques is proposed, providing both a high-speed tool for quick fault detection and a medium-speed tool for in-depth analysis of SEU propagation.
Abstract: The sensitivity of electronic circuits to radiation effects is an increasing concern in modern designs. As technology scales down, Single Event Upsets (SEUs) become more frequent and more probable, affecting not only space applications but also applications at the earth's surface, such as automotive systems. Fault injection is a method widely used to evaluate the SEU sensitivity of digital circuits. Among the existing fault injection techniques, those based on FPGA emulation have proven to be the fastest. In this paper a unified emulation environment combining two FPGA-based fault injection techniques is proposed. The new emulation environment provides both a high-speed tool for quick fault detection and a medium-speed tool for in-depth analysis of SEU propagation. The experiments presented here show that the two techniques can be successfully applied in a complementary manner.

38 citations

Proceedings ArticleDOI
27 Jun 2012
TL;DR: This work proposes a new approach to building approximate logic circuits driven by testability estimations and the concept of unate functions, which can provide a variety of solutions for different trade-offs between error coverage and overheads.
Abstract: Logic masking approaches for Single-Event Transient (SET) mitigation use hardware redundancy to mask the propagation of SET effects. Conventional techniques, such as Triple-Modular Redundancy (TMR), can guarantee full fault coverage, but they also introduce very large overheads. Alternatively, approximate logic circuits can provide the necessary flexibility to find an optimal balance between error coverage and overheads. In this work, we propose a new approach to build approximate logic circuits driven by testability estimations. Using the concept of unate functions, approximations are performed on lines with low testability in order to minimize the impact on error coverage. The proposed approach is scalable and can provide a variety of solutions for different trade-offs between error coverage and overheads.
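A toy sketch can show how approximate logic trades coverage for overhead: instead of two exact replicas, the voter is fed an under- and an over-approximation of the protected function, and an SET on the exact copy is masked wherever the two approximations agree. The function, the approximations, and the coverage metric below are illustrative choices, not the paper's testability-driven construction.

```python
from itertools import product

def maj(x, y, z):
    # Majority voter, the same gate used by classic TMR.
    return (x & y) | (x & z) | (y & z)

def f(a, b, c):        # exact function to protect: 3-input majority
    return maj(a, b, c)

def f_under(a, b, c):  # under-approximation: 1 only where f is 1
    return a & b

def f_over(a, b, c):   # over-approximation: 0 only where f is 0
    return a | b

def coverage():
    # Exhaustively inject an SET on the exact copy's output and count
    # the inputs on which the voter still produces the correct value.
    masked = total = 0
    for bits in product((0, 1), repeat=3):
        total += 1
        golden = f(*bits)
        voted = maj(golden ^ 1, f_under(*bits), f_over(*bits))
        masked += voted == golden
    return masked, total

print(coverage())  # (4, 8): masked wherever the approximations agree
```

Full TMR would mask all eight cases at roughly triple the area; the approximations here are much smaller than full replicas, which is the error-coverage/overhead trade-off the abstract describes.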

33 citations


Cited by
01 Apr 1997
TL;DR: The objective of this paper is to give a comprehensive introduction to applied cryptography, with an engineer or computer scientist in mind, emphasizing the knowledge needed to create practical systems that support integrity, confidentiality, or authenticity.
Abstract: The objective of this paper is to give a comprehensive introduction to applied cryptography with an engineer or computer scientist in mind. The emphasis is on the knowledge needed to create practical systems that support integrity, confidentiality, or authenticity. Topics covered include an introduction to the concepts of cryptography, attacks against cryptographic systems, key use and handling, random bit generation, encryption modes, and message authentication codes. Recommendations on algorithms and further reading are given at the end of the paper. This paper should enable the reader to build, understand, and evaluate system descriptions and designs based on the cryptographic components described in it.

2,188 citations

Journal ArticleDOI
TL;DR: This review discusses the current status of devices that generate quantum random numbers, arguably the most fundamental physical sources of randomness, as they are based on elementary quantum mechanical processes.
Abstract: In mathematics and computer science, random numbers have the role of a resource for assisting proofs, making cryptography secure, and enabling computational protocols. This role motivates efforts to produce random numbers as a physical process. Potential physical sources abound, but arguably the most fundamental are those based on elementary quantum mechanical processes. This review discusses the current status of devices that generate quantum random numbers.

446 citations

Journal ArticleDOI
TL;DR: This book is mainly oriented towards a final year undergraduate course on fault-tolerant computing, primarily with an implementation bias, and draws considerably on the author's experience in industry, particularly reflected in the projects accompanying chapter 5.
Abstract: Design and Analysis of Fault-Tolerant Digital Systems: B. W. JOHNSON (Addison Wesley, 1989, 577 pp., £41.35) The book provides an introduction to the important aspects of designing fault-tolerant systems, and an evaluation of how well the reliability goals have been achieved. The book is mainly oriented towards a final year undergraduate course on fault-tolerant computing, primarily with an implementation bias. In chapters 1 and 2, definitions and basic terminology are covered, which sets the stage for the remaining chapters, and provides the background and motivation for the remainder of the book. Chapter 3 provides a thorough analysis of fault-tolerance techniques and concepts. This chapter in particular is remarkably well written, covering the issues of hardware and information redundancy, which form the mainstay of fault-tolerant computing. Subsequent chapters on the use and evaluation of the various approaches illustrate the principles as they have been put into practice. At the end of chapter 5, small projects that allow the reader to apply the material presented in the preceding chapters are included. The resurgence of interest in fault-tolerance with the emergence of VLSI is the theme of chapter 6, focussing on designing fault-tolerant systems in a VLSI environment. The problems and opportunities presented by VLSI are discussed and the use of redundancy techniques in order to enhance manufacturing yield and to provide in-service reliability are reviewed. The final chapter covers testing, design for testability and testability analysis, which must be considered during each phase of the design process to guarantee that resulting designs can be thoroughly tested. Each chapter is followed by a summary of the key issues and concepts presented therein, and a separate list of references, which makes it easily readable. In addition, there is a reading list with more comprehensive and specialised references devoted to each chapter.
Overall, the book is well written, and contains a great deal of information in 577 pages. The book has a definite implementation bias, and draws considerably on the author's experience in industry, particularly reflected in the projects accompanying chapter 5. The book should be a useful addition to a library, and a suitable text to accompany a lecture course on fault-tolerant computing. R. RAMASWAMI, Department of Computation, UMIST

444 citations