
Showing papers by "Matteo Sonza Reorda published in 2004"


Journal ArticleDOI
TL;DR: This work focuses on simulation-based design validation performed at the behavioral register-transfer level, where designers typically write assertions inside hardware description language (HDL) models and run extensive simulations to increase confidence in device correctness.
Abstract: Design validation is a critical step in the development of present-day microprocessors, and some authors suggest that up to 60% of the design cost is attributable to this activity. Of the numerous activities performed in different stages of the design flow and at different levels of abstraction, we focus on simulation-based design validation performed at the behavioral register-transfer level. Designers typically write assertions inside hardware description language (HDL) models and run extensive simulations to increase confidence in device correctness. Simulation results can also be useful in comparing the HDL model against higher-level references or instruction set simulators. Microprocessor validation has become more difficult since the adoption of pipelined architectures, mainly because you can't evaluate the behavior of a pipelined microprocessor by considering one instruction at a time; a pipeline's behavior depends on a sequence of instructions and all their operands.

129 citations


Proceedings ArticleDOI
16 Feb 2004
TL;DR: This paper analyses the effects of single event upsets in an SRAM-based FPGA, with special emphasis on the transient faults affecting the configuration memory, and describes a method for obtaining the same results with similar devices.
Abstract: This paper analyses the effects of single event upsets in an SRAM-based FPGA, with special emphasis on the transient faults affecting the configuration memory. Two approaches are combined: on one side, by exploiting the available information and tools dealing with the device configuration memory, we were able to formulate hypotheses on the meaning of every bit in the configuration memory. On the other side, radiation testing was exploited to validate these hypotheses and to gather experimental evidence about the correctness of the obtained results. As a major result, we can provide detailed information about the effects of SEUs affecting the configuration memory of a commercial FPGA device. As a second contribution, we describe a method for obtaining the same results with similar devices. Finally, the obtained results are crucial to allow the possible usage of SRAM-based FPGAs in safety-critical environments, e.g., by working on the place and route strategies of the supporting tools.

94 citations


Proceedings ArticleDOI
12 Jul 2004
TL;DR: A fault-injection environment developed at this institution is exploited to analyze the impact of single event upsets affecting the configuration memory of SRAM-based FPGAs when fault tolerant design techniques are adopted, and shows that the sensitivity of the TMR design technique mainly depends on the characteristics of the adopted TMR architecture in terms of placement and routing.
Abstract: The growing adoption of SRAM-based field programmable gate arrays (FPGAs) in safety-critical applications demands efficient methodologies for evaluating their reliability. Single event upsets (SEUs) affecting the configuration memory of SRAM-based FPGAs are a major concern, since they can permanently affect the function implemented by the device. We exploited a fault-injection environment developed at our institution to analyze the impact of such faults on SRAM-based FPGAs when fault tolerant design techniques are adopted. The experimental results allow quantitative evaluations of the effects of these faults, and show that the sensitivity of the TMR design technique mainly depends on the characteristics of the adopted TMR architecture in terms of placement and routing.
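The fault-injection campaign the abstract describes can be sketched at toy scale: flip one configuration bit per run, re-evaluate the design, and count how often the upset propagates to the output. All names and the miniature "TMR voter" design below are illustrative assumptions, not the paper's actual environment.

```python
import random

def injection_campaign(config_bits, golden, run_design, n_runs, seed=0):
    """Toy fault-injection campaign: flip one configuration bit per run,
    re-evaluate the design model, and measure how often the single
    event upset propagates to the output."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_runs):
        faulty = list(config_bits)
        faulty[rng.randrange(len(faulty))] ^= 1  # single bit flip (SEU model)
        if run_design(faulty) != golden:
            failures += 1
    return failures / n_runs

# Toy TMR-like design: a 2-of-3 majority vote over three replicated bits.
def tmr_voter(bits):
    return int(bits[0] + bits[1] + bits[2] >= 2)

config = [1, 1, 1] + [0] * 29  # 3 voted bits plus unused configuration bits
print(injection_campaign(config, 1, tmr_voter, 1000))  # 0.0: every single flip is masked
```

At this scale the majority voter masks every single-bit upset, which is exactly why (as the results suggest) the residual sensitivity of a real TMR design comes from how the replicas share placement and routing resources rather than from the voting logic itself.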

61 citations


Journal ArticleDOI
TL;DR: A new approach for predicting SEU effects in circuits mapped on SRAM-based FPGAs that combines radiation testing with simulation is described, which is used to characterize (in terms of device cross section) the technology on which the FPGA device is based, no matter which circuit it implements.
Abstract: SRAM-based field programmable gate arrays (FPGAs) are particularly sensitive to single event upsets (SEUs) that, by changing the FPGA's configuration memory, may dramatically affect the functions implemented by the device. In this work we describe a new approach for predicting SEU effects in circuits mapped on SRAM-based FPGAs that combines radiation testing with simulation. The former is used to characterize (in terms of device cross section) the technology on which the FPGA device is based, no matter which circuit it implements. The latter is used to predict the probability for a SEU to alter the expected behavior of a given circuit. By combining the two figures, we then compute the cross section of the circuit mapped on the pre-characterized device. Experimental results are presented that compare the approach we developed with a traditional one based on radiation testing only, to measure the cross section of a circuit mapped on an FPGA. The figures here reported confirm the accuracy of our approach.
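The combination step is a simple product of the two figures: the device-level cross section from radiation testing, scaled by the simulation-derived probability that an SEU alters the circuit's behavior. The numeric values below are hypothetical, purely to illustrate the arithmetic.

```python
def circuit_cross_section(device_cross_section_cm2, p_seu_alters_behavior):
    """Combine the two figures from the abstract: the device cross
    section measured by radiation testing and the probability, obtained
    by simulation, that a configuration-memory SEU alters the circuit."""
    return device_cross_section_cm2 * p_seu_alters_behavior

# Hypothetical figures, for illustration only:
sigma_device = 2.0e-14  # cm^2 per configuration bit, from radiation testing
p_failure = 0.05        # fraction of SEUs that alter behavior, from simulation
print(circuit_cross_section(sigma_device, p_failure))  # about 1e-15 cm^2
```

The appeal of the split is that the expensive radiation test characterizes the device once, while the per-circuit factor is obtained by simulation alone.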

51 citations


Proceedings ArticleDOI
12 Jul 2004
TL;DR: This paper proposes to adopt low-cost infrastructure-intellectual-property cores in conjunction with software-based techniques to perform soft error detection and results are reported that show the effectiveness of the proposed approach.
Abstract: High integration levels, coupled with the increased sensitivity to soft errors even at ground level, make the task of guaranteeing adequate dependability levels more difficult than ever. In this paper, we propose to adopt low-cost infrastructure-intellectual-property (I-IP) cores in conjunction with software-based techniques to perform soft error detection. Experimental results are reported that show the effectiveness of the proposed approach.

21 citations


Proceedings ArticleDOI
10 Oct 2004
TL;DR: A new environment that can be fruitfully exploited to assess the effects of faults in CAN-based networks is presented, which is particularly well suited when a prototype of the network under analysis is available.
Abstract: The controller area network (CAN) is a well-known standard, and it is widely used in many safety-critical applications, spanning from automotive electronics to aircraft and aerospace electronics. Due to its widespread adoption in critical applications, the capability of accurately evaluating the dependability properties of CAN-based networks is becoming a major concern. In this paper we present a new environment that can be fruitfully exploited to assess the effects of faults in CAN-based networks, and which is particularly well suited when a prototype of the network under analysis is available. The core of our new environment is a special-purpose board that plugs into an existing CAN network, and that is able to monitor and, when asked, to modify the information traveling over the bus. Observation and modification of CAN frames are done concurrently with normal CAN-bus operations without introducing any performance degradation. The obtained environment is thus suitable for deployment in complex CAN-based networks to validate their dependability.
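The board's frame-modification step amounts to corrupting selected bits of a frame in transit. A minimal software model of that operation, with hypothetical function and parameter names (the real manipulation happens in hardware, on the bus):

```python
def flip_bit_in_frame(frame_bytes, byte_index, bit_index):
    """Toy model of the board's frame-modification step: flip a single
    bit of a CAN data field as the frame travels over the bus."""
    corrupted = bytearray(frame_bytes)
    corrupted[byte_index] ^= 1 << bit_index  # XOR injects the fault
    return bytes(corrupted)

frame = bytes([0x12, 0x34])                   # 2-byte CAN data field
print(flip_bit_in_frame(frame, 0, 0).hex())   # '1334'
```

Because XOR is its own inverse, flipping the same bit twice restores the original frame, which makes such injections easy to verify against a golden trace.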

20 citations


Proceedings ArticleDOI
09 Sep 2004
TL;DR: A new approach to automatic test program generation is proposed that exploits such hardware to monitor specific micro-architectural events and repeatedly evaluates and improves candidate programs directly running on the target microprocessor.
Abstract: In the past, performance counters were available only in top-end microprocessors, as hardware luxuries for profiling critical applications. Today, on the contrary, several desktop microprocessors contain hardware support for monitoring performance events. This paper proposes a new approach to automatic test program generation that exploits such hardware to monitor specific micro-architectural events. In the approach, the generation tool repeatedly evaluates and improves candidate programs directly running on the target microprocessor: candidate programs are not "simulated", but rather "executed". The fast evaluation of candidate tests enables the use of an automatic methodology even on large designs. As a case study, an experiment targeting the Intel® Pentium® 4 microprocessor is reported.
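The evaluate-and-improve loop can be sketched as a simple hill climber over instruction sequences, where the fitness would come from a hardware performance counter on the target machine. The opcode list and the branch-counting stand-in for the counter below are illustrative assumptions, not the paper's setup.

```python
import random

OPCODES = ["ADD", "SUB", "MUL", "LOAD", "STORE", "BRANCH"]  # hypothetical ISA subset

def generate(evaluate, length, iterations, seed=1):
    """Hill-climbing sketch of the evaluate-and-improve loop: mutate a
    candidate instruction sequence and keep the mutant whenever the
    monitored event count (its fitness) does not decrease."""
    rng = random.Random(seed)
    candidate = [rng.choice(OPCODES) for _ in range(length)]
    best = evaluate(candidate)
    for _ in range(iterations):
        mutant = list(candidate)
        mutant[rng.randrange(length)] = rng.choice(OPCODES)
        score = evaluate(mutant)  # on real hardware: run it, read the counter
        if score >= best:
            candidate, best = mutant, score
    return candidate, best

# Stand-in for a performance counter: number of branch instructions in the program.
count_branches = lambda prog: prog.count("BRANCH")
prog, score = generate(count_branches, 10, 300)
print(score)
```

The key point the abstract makes is that `evaluate` runs on silicon rather than in a simulator, so each iteration costs microseconds instead of hours, which is what makes the loop practical for large designs.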

13 citations


Proceedings ArticleDOI
10 Oct 2004
TL;DR: A new test control schema based on the use of an infrastructure IP (I-IP) is proposed for the on-site test of SoCs including microprocessors and memories equipped with P1500 compliant solutions.
Abstract: Today's complex system-on-chip integrated circuits include a wide variety of functional IPs whose correct manufacturing must be guaranteed by IC producers. Infrastructure IPs are increasingly often inserted to achieve this purpose; such blocks, explicitly designed for test, are coupled with functional IPs both to obtain yield improvement during the manufacturing process and to perform volume production test. In this paper, a new test control schema based on the use of an infrastructure IP (I-IP) is proposed for the on-site test of SoCs. The proposed in-field test strategy is based on the ability of a single I-IP to periodically monitor the behavior of the system by reusing the test structures introduced for manufacturing test. The feasibility of this approach has been proved for SoCs including microprocessors and memories equipped with P1500 compliant solutions. Experimental results highlight the advantages in terms of reusability and scalability, low impact on system availability and reduced area overhead.

12 citations


Proceedings ArticleDOI
10 Nov 2004
TL;DR: This paper presents an environment to study how soft errors affecting the memory elements of network nodes in CAN-based systems may alter the dynamic behavior of a car.
Abstract: The validation of networked systems is mandatory to guarantee the dependability levels that international standards impose in many safety-critical applications. In this paper we present an environment to study how soft errors affecting the memory elements of network nodes in CAN-based systems may alter the dynamic behavior of a car. The experimental evidence of the effectiveness of the approach is reported on a case study.

6 citations


Proceedings ArticleDOI
04 Sep 2004
TL;DR: This paper describes how the interaction between the two levels of abstraction is managed to provide accurate analysis of the dependability of the whole system, and is shown to be able to identify faults affecting the CAN network whose effects are most likely to be critical for the vehicle's dynamics.
Abstract: Safety-critical applications are now common where both digital and mechanical components are deployed, as in the automotive field. The analysis of the dependability of such systems is a particularly complex task that mandates modeling capabilities in both the discrete and continuous domains. To tackle this problem a multi-level approach is presented here, which is based on abstract functional models to capture the behavior of the whole system, and on detailed structural models to cope with the details of system components. In this paper, we describe how the interaction between the two levels of abstraction is managed to provide accurate analysis of the dependability of the whole system. In particular, the proposed technique is shown to be able to identify faults affecting the CAN network whose effects are most likely to be critical for the vehicle's dynamics. Exploiting the information about the effects of these faults, they can then be further analyzed at a higher level of detail.

5 citations


Journal ArticleDOI
TL;DR: Experimental results show that the proposed evolutionary simulation-based validation method is effectively able to deal with realistic designs, discovering potential problems, and it exhibits a natural robustness even when used starting from incomplete information.
Abstract: This paper describes evolutionary simulation-based validation, a new point in the spectrum of design validation techniques, besides pseudo-random simulation, designer-generated patterns and formal verification. The proposed approach is based on coupling an evolutionary algorithm with a hardware simulator, and it is able to fit painlessly in an existing industrial flow. Prototypical tools were used to validate gate-level designs, comparing them against both their RT-level specifications and different gate-level implementations. Experimental results show that the proposed method is effectively able to deal with realistic designs, discovering potential problems, and, although approximate in nature, it is able to provide a high degree of confidence in the results and it exhibits a natural robustness even when used starting from incomplete information.
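At its core, simulation-based validation drives a specification model and an implementation model with the same stimuli and flags any disagreement. The sketch below replaces the evolutionary driver with plain pseudo-random stimuli for brevity, and the two toy models (one with a deliberately planted bug) are illustrative assumptions.

```python
import random

def cross_validate(spec_model, impl_model, n_inputs, n_vectors=1000, seed=7):
    """Pseudo-random stand-in for the evolutionary driver: feed the same
    stimuli to the specification model and the implementation model and
    collect every input vector on which they disagree."""
    rng = random.Random(seed)
    mismatches = set()
    for _ in range(n_vectors):
        vec = tuple(rng.randint(0, 1) for _ in range(n_inputs))
        if spec_model(vec) != impl_model(vec):
            mismatches.add(vec)
    return mismatches

# Toy pair: the "implementation" mishandles the all-ones input.
spec = lambda v: v[0] & v[1]
impl = lambda v: 0 if v == (1, 1) else v[0] & v[1]
print(cross_validate(spec, impl, 2))  # the planted discrepancy is exposed
```

The evolutionary algorithm's role in the paper is to bias this stimulus generation toward vectors likely to expose discrepancies, which matters when the input space is far too large to cover at random.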



Proceedings ArticleDOI
10 Oct 2004
TL;DR: In this article, a coupled methodology is proposed to generate test-programs, using complementary techniques: one pseudo-exhaustive and one driven by an evolutionary optimizer, for the Motorola 6800.
Abstract: The actual operating lifetime of many electronic systems has turned out to be much longer than originally foreseen, leading to the use of obsolete components in critical projects. To cope with microprocessor obsolescence, companies must either buy larger stocks of components while they are still available, or find parts on secondary markets later. Alternatively, a suitable low-cost solution could be replacing the obsolete component by emulating its functionalities with a programmable logic device. However, design verification of microprocessors is well known as a challenging task. This paper proposes a coupled methodology to generate test-programs, using complementary techniques: one pseudo-exhaustive and one driven by an evolutionary optimizer. As a case study, the Motorola 6800 was targeted.
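The pseudo-exhaustive half of the coupled flow enumerates all short instruction sequences over some opcode set; the evolutionary half then covers behaviors that enumeration cannot reach at realistic lengths. A toy-scale sketch of the enumeration, over a small hypothetical opcode subset (not the full 6800 instruction set):

```python
from itertools import product

def pseudo_exhaustive(opcodes, max_len):
    """Enumerate every instruction sequence up to max_len: the
    pseudo-exhaustive half of the coupled flow, at toy scale."""
    for length in range(1, max_len + 1):
        for seq in product(opcodes, repeat=length):
            yield list(seq)

ops = ["NOP", "LDAA", "STAA", "ABA"]  # hypothetical 4-opcode subset
programs = list(pseudo_exhaustive(ops, 2))
print(len(programs))  # 4 + 4*4 = 20
```

The count grows as the sum of |opcodes|^k for k up to max_len, which is exactly why pure enumeration must be complemented by a guided search for longer test programs.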