Author

David de Andrés

Other affiliations: University of Valencia
Bio: David de Andrés is an academic researcher from the Polytechnic University of Valencia. The author has contributed to research in the topics of fault injection and dependability. The author has an h-index of 10 and has co-authored 55 publications receiving 352 citations. Previous affiliations of David de Andrés include the University of Valencia.


Papers
Journal Article
TL;DR: The procedures to inject a wide set of faults representative of deep-submicrometer technology (stuck-at, bit-flip, pulse, indetermination, stuck-open, delay, short, open-line, and bridging) using the most suitable FPGA-based technique are described.
Abstract: Advances in semiconductor technologies are greatly increasing the likelihood of fault occurrence in deep-submicrometer manufactured VLSI systems. The dependability assessment of VLSI critical systems is a hot topic that requires further research. Field-programmable gate arrays (FPGAs) have recently been proposed as a means for speeding up the fault injection process in VLSI system models (fault emulation) and for reducing the cost of fixing any error, due to their applicability in the first steps of the development cycle. However, only a reduced set of fault models, mainly stuck-at and bit-flip, has been considered in fault emulation approaches. This paper describes the procedures to inject a wide set of faults representative of deep-submicrometer technology, such as stuck-at, bit-flip, pulse, indetermination, stuck-open, delay, short, open-line, and bridging, using the most suitable FPGA-based technique. This paper also sets some basic guidelines for comparing VLSI systems in terms of their availability and safety, which is mandatory in mission- and safety-critical application contexts. This represents a step forward in the dependability benchmarking of VLSI systems and towards the definition of a framework for their evaluation and comparison in terms of performance, power consumption, and dependability.
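As a rough illustration of the fault models listed above (not the paper's FPGA-based emulation technique), the following Python sketch injects stuck-at, bit-flip, and indetermination faults into a simulated signal value. The signal representation and function names are assumptions made purely for illustration.

```python
import random

# Illustrative-only behavioural fault models on an n-bit signal value.
# This is a sketch, not the FPGA-based fault emulation described in the paper.

def inject_stuck_at(value: int, bit: int, stuck_to: int) -> int:
    """Force one bit permanently to 0 or 1 (stuck-at fault)."""
    if stuck_to:
        return value | (1 << bit)
    return value & ~(1 << bit)

def inject_bit_flip(value: int, bit: int) -> int:
    """Invert one bit (transient bit-flip fault, e.g. an upset in a register)."""
    return value ^ (1 << bit)

def inject_indetermination(value: int, bit: int) -> int:
    """Model an indeterminate logic level by randomly resolving the bit to 0 or 1."""
    return inject_stuck_at(value, bit, random.randint(0, 1))

if __name__ == "__main__":
    golden = 0b1010_1100                       # fault-free reference value
    faulty = inject_bit_flip(golden, bit=3)    # single bit-flip
    print(f"golden={golden:08b} faulty={faulty:08b}")
```

Comparing faulty runs against a fault-free (golden) run in this way is the usual basis for classifying the effect of each injected fault.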

41 citations

Proceedings Article
22 Jun 2003
TL;DR: This paper presents a new SWIFI tool (INERTE) that solves the temporal overhead problem by using a standard debug interface called Nexus, and is able to inject transient faults without any temporal overhead.
Abstract: Software-implemented fault injection (SWIFI) techniques enable the emulation of hardware and software faults. This emulation can be based on the debugging mechanisms of general-purpose processors [1] or on the special debugging ports of embedded processors [2]. A well-known drawback of existing SWIFI tools is the temporal overhead they introduce in the target system. This overhead is a problem when validating real-time systems. This paper presents a new SWIFI tool (INERTE) that solves this problem by using a standard debug interface called Nexus [3]. Using Nexus, system memory can be accessed at runtime without any intrusion into the target system. Thus, INERTE is able to inject transient faults without any temporal overhead.
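A minimal sketch of the software-implemented fault injection idea described above: a fault is specified as a target memory address, a bit mask, and a trigger time, and applied by writing to the target's memory while it runs. The MemoryAccessPort class and its methods are hypothetical placeholders standing in for a Nexus-like runtime memory access interface; they are not part of the INERTE tool.

```python
from dataclasses import dataclass

@dataclass
class TransientFault:
    """Flip the bits in `mask` at `address` once `trigger_time_us` is reached."""
    address: int
    mask: int
    trigger_time_us: int

class MemoryAccessPort:
    """Hypothetical stand-in for a debug port giving runtime memory access."""
    def __init__(self):
        self.mem = {}  # simulated target memory: address -> byte value

    def read8(self, address: int) -> int:
        return self.mem.get(address, 0)

    def write8(self, address: int, value: int) -> None:
        self.mem[address] = value & 0xFF

def inject_when_triggered(port: MemoryAccessPort, fault: TransientFault, now_us: int) -> bool:
    """Apply the fault by XOR-ing the mask into memory once the trigger time is reached."""
    if now_us < fault.trigger_time_us:
        return False
    port.write8(fault.address, port.read8(fault.address) ^ fault.mask)
    return True

if __name__ == "__main__":
    port = MemoryAccessPort()
    port.write8(0x2000_0000, 0x5A)
    fault = TransientFault(address=0x2000_0000, mask=0x04, trigger_time_us=100)
    inject_when_triggered(port, fault, now_us=150)
    print(hex(port.read8(0x2000_0000)))  # 0x5e: bit 2 flipped
```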

24 citations

Journal Article
01 Nov 2011
TL;DR: This paper proposes a benchmarking methodology to experimentally evaluate and compare the behaviour of multi-hop routing protocols for WMNs and reflects to what extent this methodology can be useful in increasing our knowledge of how real WMNs behave in practice.
Abstract: Wireless mesh networks (WMNs) constitute a new, quick and low-cost alternative for providing communications when deploying a fixed infrastructure could prove prohibitive in terms of either time or money. In recent years, the specification of multi-hop routing protocols for WMNs has been promoted, leading to their recent exploitation in commercial solutions. The selection of routing protocols for integration in WMNs requires the evaluation, comparison and ranking of eligible candidates according to a representative set of meaningful measures. In this context, the development of suitable experimental techniques to balance the different features of each protocol is an essential requirement. This paper copes with this challenging task by proposing a benchmarking methodology to experimentally evaluate and compare the behaviour of these protocols. The feasibility of the proposed approach is illustrated through a simple but real (non-simulated) case study, which reflects to what extent this methodology can be useful in increasing our knowledge of how real WMNs behave in practice.
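As a hedged illustration of the kind of measures such a benchmark might aggregate, the sketch below computes packet delivery ratio and mean route recovery time per protocol from hypothetical experiment records and ranks the candidates. The measure names, field names, and data are assumptions for illustration, not the paper's actual benchmark specification.

```python
from statistics import mean

# Hypothetical per-experiment records; protocol names and fields are illustrative only.
experiments = [
    {"protocol": "ProtoA", "sent": 1000, "delivered": 942, "recovery_s": 1.8},
    {"protocol": "ProtoA", "sent": 1000, "delivered": 930, "recovery_s": 2.1},
    {"protocol": "ProtoB", "sent": 1000, "delivered": 965, "recovery_s": 3.4},
    {"protocol": "ProtoB", "sent": 1000, "delivered": 958, "recovery_s": 3.0},
]

def summarise(records):
    """Aggregate delivery ratio and route recovery time per routing protocol."""
    by_proto = {}
    for r in records:
        by_proto.setdefault(r["protocol"], []).append(r)
    return {
        proto: {
            "delivery_ratio": sum(r["delivered"] for r in runs) / sum(r["sent"] for r in runs),
            "mean_recovery_s": mean(r["recovery_s"] for r in runs),
        }
        for proto, runs in by_proto.items()
    }

if __name__ == "__main__":
    # Rank candidates by delivery ratio (one possible ranking criterion).
    for proto, measures in sorted(summarise(experiments).items(),
                                  key=lambda kv: -kv[1]["delivery_ratio"]):
        print(proto, measures)
```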

23 citations

Proceedings Article
07 Jun 2015
TL;DR: This paper analyzes the existing correlation between fault injection experiments in an RTL microcontroller description and the information available at the ISS to enable accurate ISS-based fault injection.
Abstract: Increasingly complex microcontroller designs for safety-relevant automotive systems require the adoption of new methods and tools to enable a cost-effective verification of their robustness. In particular, the costs associated with certification against the ISO 26262 safety standard must be kept low for economic reasons. In this context, simulation-based verification using instruction set simulators (ISS) arises as a promising approach to partially cope with the increasing cost of the verification process, as it allows design decisions to be taken in early design stages, when modifications can be performed quickly and at low cost. However, it remains to be proven that verification in those stages provides information accurate enough to be used in the context of automotive microcontrollers. In this paper we analyze the correlation between fault injection experiments in an RTL microcontroller description and the information available at the ISS level to enable accurate ISS-based fault injection.
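The correlation analysis mentioned above can be pictured, very roughly, as comparing how each injected fault is classified at the two abstraction levels. The sketch below computes the agreement rate between hypothetical RTL and ISS outcome labels; the labels and data are invented for illustration and do not reproduce the paper's experiments.

```python
from collections import Counter

# Hypothetical outcomes of the same fault list classified at RTL and at ISS level.
# Outcome labels are illustrative only.
rtl_outcomes = ["masked", "wrong_result", "masked", "hang", "wrong_result"]
iss_outcomes = ["masked", "wrong_result", "wrong_result", "hang", "wrong_result"]

def agreement(rtl, iss):
    """Fraction of faults classified identically at both abstraction levels."""
    assert len(rtl) == len(iss)
    return sum(1 for a, b in zip(rtl, iss) if a == b) / len(rtl)

def confusion(rtl, iss):
    """Count (RTL label, ISS label) pairs to see where the two levels disagree."""
    return Counter(zip(rtl, iss))

if __name__ == "__main__":
    print(f"agreement = {agreement(rtl_outcomes, iss_outcomes):.2f}")
    for pair, count in confusion(rtl_outcomes, iss_outcomes).items():
        print(pair, count)
```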

19 citations

Journal Article
TL;DR: A survey of current state-of-the-art evaluation platforms for ad hoc routing protocols is provided, paying special attention to the resilience dimension.

16 citations


Cited by
Journal Article
TL;DR: This book by a teacher of statistics (as well as a consultant for "experimenters") is a comprehensive study of the philosophical background for the statistical design of experiment.

Abstract: THE DESIGN AND ANALYSIS OF EXPERIMENTS. By Oscar Kempthorne. New York, John Wiley and Sons, Inc., 1952. 631 pp. $8.50. This book by a teacher of statistics (as well as a consultant for "experimenters") is a comprehensive study of the philosophical background for the statistical design of experiment. It is necessary to have some facility with algebraic notation and manipulation to be able to use the volume intelligently. The problems are presented from the theoretical point of view, without such practical examples as would be helpful for those not acquainted with mathematics. The mathematical justification for the techniques is given. As a somewhat advanced treatment of the design and analysis of experiments, this volume will be interesting and helpful for many who approach statistics theoretically as well as practically. With emphasis on the "why," and with description given broadly, the author relates the subject matter to the general theory of statistics and to the general problem of experimental inference. MARGARET J. ROBERTSON

13,333 citations

Journal Article
TL;DR: This research examines the interaction between demand and socioeconomic attributes through Mixed Logit models, as well as the state of the art in the field of automatic transport systems, within the CityMobil project.
Abstract (report outline):
1 The innovative transport systems and the CityMobil project
1.1 The research questions
2 The state of the art in the field of automatic transport systems
2.1 Case studies and demand studies for innovative transport systems
3 The design and implementation of surveys
3.1 Definition of experimental design
3.2 Questionnaire design and delivery
3.3 First analyses on the collected sample
4 Calibration of Multinomial Logit demand models
4.1 Methodology
4.2 Calibration of the “full” model
4.3 Calibration of the “final” model
4.4 The demand analysis through the final Multinomial Logit model
5 The analysis of interaction between the demand and socioeconomic attributes
5.1 Methodology
5.2 Application of Mixed Logit models to the demand
5.3 Analysis of the interactions between demand and socioeconomic attributes through Mixed Logit models
5.4 Mixed Logit model and interaction between age and the demand for the CTS
5.5 Demand analysis with Mixed Logit model
6 Final analyses and conclusions
6.1 Comparison between the results of the analyses
6.2 Conclusions
6.3 Answers to the research questions and future developments

4,784 citations

Journal Article
TL;DR: A survey on fault tolerance in neural networks is presented, mainly focusing on well-established passive techniques to exploit and improve, by design, such a potential but limited intrinsic property of neural models, particularly feedforward neural networks.
Abstract: Beyond energy, the growing number of defects in physical substrates is becoming another major constraint that affects the design of computing devices and systems. As the underlying semiconductor technologies become less and less reliable, the probability that some components of computing devices fail also increases, preventing designers from realizing the full potential benefits of on-chip exascale integration derived from near-atomic-scale feature dimensions. As the quest for performance confronts permanent and transient faults, device variation, and thermal issues, major breakthroughs in computing efficiency are expected to benefit from unconventional and new models of computation, such as brain-inspired computing. The challenge is then to find computing solutions that are not only high-performance and energy-efficient, but also fault-tolerant. Neural computing principles remain elusive, yet they are the source of a promising fault-tolerant computing paradigm. The quest to translate fault tolerance into scalable and reliable computing systems, whether through hardware design itself or through the use of circuits even in the presence of faults, has further motivated research on neural networks, which are potentially capable of absorbing some degree of vulnerability thanks to their natural properties. This paper presents a survey on fault tolerance in neural networks, mainly focusing on well-established passive techniques to exploit and improve, by design, such a potential but limited intrinsic property of neural models, particularly feedforward neural networks. First, fundamental concepts and background on fault tolerance are introduced. Then, we review fault types, models, and measures used to evaluate performance and provide a taxonomy of the main techniques to enhance the intrinsic properties of some neural models, based on the principles and mechanisms that they exploit to achieve fault tolerance passively. For completeness, we briefly review some representative works on active fault tolerance in neural networks. We present some key challenges that remain to be overcome and conclude with an outlook for this field.
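To give a concrete, hedged picture of the kind of intrinsic fault tolerance evaluated in such surveys, the sketch below measures how much the output of a tiny feedforward network changes when a single weight is stuck at zero. The network, sizes, and fault location are arbitrary assumptions for illustration; this is not a technique taken from the surveyed works.

```python
import numpy as np

# Tiny feedforward network used only to illustrate passive fault tolerance evaluation:
# inject a stuck-at-zero fault into one weight and measure the output deviation.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 8))   # input -> hidden weights (illustrative sizes)
W2 = rng.normal(size=(8, 2))   # hidden -> output weights

def forward(x, w1, w2):
    """Two-layer feedforward pass with a tanh hidden activation."""
    return np.tanh(x @ w1) @ w2

def output_deviation_with_stuck_weight(x, layer, index):
    """Return the maximum output change when one weight is stuck at zero."""
    golden = forward(x, W1, W2)
    w1_f, w2_f = W1.copy(), W2.copy()
    (w1_f if layer == 1 else w2_f)[index] = 0.0   # stuck-at-zero fault
    faulty = forward(x, w1_f, w2_f)
    return float(np.max(np.abs(golden - faulty)))

if __name__ == "__main__":
    x = rng.normal(size=(1, 4))
    print(output_deviation_with_stuck_weight(x, layer=1, index=(0, 0)))
```

A small deviation for most single-weight faults is what "passive" (intrinsic, by-design) fault tolerance looks like in practice; passive techniques aim to keep that deviation small across many fault locations.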

181 citations

Journal Article
TL;DR: This paper presents a CPS taxonomy by providing a broad overview of data collection, storage, access, processing, and analysis, and discusses big data meeting green challenges in the context of CPS.
Abstract: The world is witnessing an unprecedented growth of cyber-physical systems (CPS), which are foreseen to revolutionize our world by creating new services and applications in a variety of sectors, such as environmental monitoring, mobile-health systems, intelligent transportation systems, and so on. The information and communication technology sector is experiencing a significant growth in data traffic, driven by the widespread usage of smartphones, tablets, and video streaming, along with the significant growth in sensor deployments anticipated in the near future, which is expected to dramatically increase the growth rate of raw sensed data. In this paper, we present a CPS taxonomy by providing a broad overview of data collection, storage, access, processing, and analysis. Compared with other survey papers, this is the first panoramic survey on big data for CPS, where our objective is to provide a panoramic summary of different CPS aspects. Furthermore, CPS require cybersecurity to protect them against malicious attacks and unauthorized intrusion, which becomes a challenge with the enormous amount of data that is continuously being generated in the network. Thus, we also provide an overview of the different security solutions proposed for CPS big data storage, access, and analytics. We also discuss big data meeting green challenges in the context of CPS.

149 citations

01 Jan 1978

131 citations