
Showing papers on "Fault model published in 2006"


Book ChapterDOI
10 Oct 2006
TL;DR: Two differential fault attack techniques against the Advanced Encryption Standard (AES) are described; all 128 key bits can be found with one of the proposed fault models using only 6 faulty ciphertexts, establishing a novel technique for cryptanalyzing AES without side-channel information.
Abstract: In this paper we describe two differential fault attack techniques against the Advanced Encryption Standard (AES). We propose two models for fault occurrence; we could find all 128 bits of the key using one of them and only 6 faulty ciphertexts. We need approximately 1500 faulty ciphertexts to discover the key with the other fault model. The union of these models covers all faults that can occur in the 9th round of the encryption algorithm of the AES-128 cryptosystem. One of the main advantages of the proposed fault models is that any fault in the AES encryption from the start (AddRoundKey with the main key before the first round) to the MixColumns function of the 9th round can be modeled with one of our fault models. These models cover all states, so differences generated by diverse plaintexts or ciphertexts can be treated as faults and modeled with our models. This establishes a novel technique for cryptanalyzing AES without side-channel information. The major difference between these methods and previous ones lies in the assumed fault models. Our proposed fault models use very common and general assumptions about the locations and values of the injected faults.
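As an illustration of why a single byte fault before the final MixColumns is so informative, the sketch below (illustrative Python, not the authors' code; the state column and fault value are made up) propagates one faulted byte through MixColumns and shows the fixed four-byte difference pattern that byte-oriented differential fault analysis exploits.

```python
# Illustrative sketch: a single-byte fault injected into one AES state column
# before MixColumns spreads to a 4-byte difference with a fixed pattern.

def xtime(b):                      # multiply by x (0x02) in GF(2^8), AES polynomial 0x11B
    b <<= 1
    return (b ^ 0x1B) & 0xFF if b & 0x100 else b

def gmul(a, b):                    # GF(2^8) multiplication
    r = 0
    for _ in range(8):
        if b & 1:
            r ^= a
        a, b = xtime(a), b >> 1
    return r

def mix_column(col):               # AES MixColumns on a single 4-byte column
    a0, a1, a2, a3 = col
    return [
        gmul(a0, 2) ^ gmul(a1, 3) ^ a2 ^ a3,
        a0 ^ gmul(a1, 2) ^ gmul(a2, 3) ^ a3,
        a0 ^ a1 ^ gmul(a2, 2) ^ gmul(a3, 3),
        gmul(a0, 3) ^ a1 ^ a2 ^ gmul(a3, 2),
    ]

col   = [0x32, 0x88, 0x31, 0xE0]   # arbitrary column of the round-9 state
delta = 0x5A                       # unknown fault value injected into byte 0
faulty = [col[0] ^ delta] + col[1:]

diff = [x ^ y for x, y in zip(mix_column(col), mix_column(faulty))]
# Because MixColumns is linear over XOR, the output difference is
# (2*delta, delta, delta, 3*delta): one unknown byte value constrains four
# ciphertext bytes, which is what lets a handful of faulty ciphertexts
# narrow the last round key to a small candidate set.
assert diff == [gmul(delta, 2), delta, delta, gmul(delta, 3)]
print([hex(d) for d in diff])
```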

187 citations


Journal ArticleDOI
TL;DR: In this paper, a Gauss-Newton iterative approach is used to flatten seismic data and a weighted inversion scheme is applied to identify locations of faults, allowing dips to be summed around the faults to reduce the influence of erroneous estimates near the faults.
Abstract: We present an efficient full-volume automatic dense-picking method for flattening seismic data. First local dips (stepouts) are calculated over the entire seismic volume. The dips are then resolved into time shifts (or depth shifts) using a nonlinear Gauss-Newton iterative approach that exploits fast Fourier transforms to minimize computation time. To handle faults (discontinuous reflections), we apply a weighted inversion scheme. The weight identifies locations of faults, allowing dips to be summed around the faults to reduce the influence of erroneous dip estimates near the fault. If a fault model is not provided, we can estimate a suitable weight (essentially a fault indicator) within our inversion using an iteratively reweighted least squares (IRLS) method. The method is tested successfully on both synthetic and field data sets of varying degrees of complexity, including salt piercements, angular unconformities, and laterally limited faults.
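A generic form of such a weighted flattening inversion (notation ours, not the authors'): the time-shift field tau is obtained by a least-squares fit of its gradient to the measured dips p, and the IRLS weights act as the fault indicator by down-weighting residuals across discontinuities:

\[
\min_{\tau}\;\sum_i w_i \,\bigl\| (\nabla \tau)_i - p_i \bigr\|^2 ,
\qquad
w_i^{(k+1)} = \Bigl( \bigl| (\nabla \tau^{(k)})_i - p_i \bigr|^2 + \epsilon^2 \Bigr)^{-1/2} .
\]

Large residuals near faults thus receive small weights on the next iteration, so the fit approaches a robust (L1-like) solution without requiring an explicit fault model.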

154 citations


Journal ArticleDOI
TL;DR: The fact that detected faults cannot be immediately corrected is illustrated with several examples, and an optimal software release policy for the proposed models, based on a cost-reliability criterion, is proposed.
Abstract: Over the past 30 years, many software reliability growth models (SRGM) have been proposed. Often, it is assumed that detected faults are immediately corrected when mathematical models are developed. This assumption may not be realistic in practice because the time to remove a detected fault depends on the complexity of the fault, the skill and experience of the personnel, the size of the debugging team, the technique(s) being used, and so on. During software testing, practical experience shows that mutually independent faults can be directly detected and removed, but mutually dependent faults can be removed only if the leading faults have been removed. That is, dependent faults may not be immediately removed, and the fault removal process lags behind the fault detection process. In this paper, we will first give a review of fault detection and correction processes in software reliability modeling. We will then illustrate, with several examples, the fact that detected faults cannot be immediately corrected. We also discuss software fault dependency in detail, and study how to incorporate both fault dependency and debugging time lag into software reliability modeling. The proposed models are fairly general and cover a variety of known SRGM under different conditions. Numerical examples are presented, and the results show that the proposed framework incorporating both fault dependency and debugging time lag into SRGM has better prediction capability. In addition, an optimal software release policy for the proposed models, based on a cost-reliability criterion, is proposed. The main purpose is to minimize the cost of software development when a desired reliability objective is given.
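A common way to write down the detection/correction lag that this line of work builds on (a generic construction for orientation, not necessarily the paper's exact models): let fault detection follow an NHPP with mean value function

\[
m_d(t) = a\left(1 - e^{-bt}\right),
\]

and let correction lag detection by a delay \(\varphi(t)\), so that the expected number of corrected faults is

\[
m_c(t) = m_d\bigl(t - \varphi(t)\bigr) = a\left(1 - e^{-b\,(t-\varphi(t))}\right),
\]

where a constant lag \(\varphi(t)=\Delta\) gives the simplest delayed-correction model; fault dependency further postpones the removal of dependent faults until their leading faults have been corrected.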

143 citations


Proceedings ArticleDOI
05 Nov 2006
TL;DR: In this paper, the authors propose a new type of failure proximity, called R-Proximity, which regards two failing traces as similar if they suggest roughly the same fault location.
Abstract: Recent software systems usually feature an automated failure reporting system, with which a huge number of failing traces are collected every day. In order to prioritize fault diagnosis, failing traces due to the same fault are expected to be grouped together. Previous methods, by hypothesizing that similar failing traces imply the same fault, cluster failing traces based on the literal trace similarity, which we call trace proximity. However, since a fault can be triggered in many ways, failing traces due to the same fault can be quite different. Therefore, previous methods actually group together traces exhibiting similar behaviors, like similar branch coverage, rather than traces due to the same fault. In this paper, we propose a new type of failure proximity, called R-Proximity, which regards two failing traces as similar if they suggest roughly the same fault location. The fault location each failing case suggests is automatically obtained with Sober, an existing statistical debugging tool. We show that with R-Proximity, failing traces due to the same fault can be grouped together. In addition, we find that R-Proximity is helpful for statistical debugging: It can help developers interpret and utilize the statistical debugging result. We illustrate the usage of R-Proximity with a case study on the grep program and some experiments on the Siemens suite, and the result clearly demonstrates the advantage of R-Proximity over trace proximity.
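A toy sketch of the R-Proximity idea (hypothetical score data and a simple greedy clusterer; the paper derives the suggested fault location with Sober rather than from these made-up vectors): each failing trace is summarized by a suspiciousness score per program location, and traces are grouped by the location they implicate rather than by raw coverage similarity.

```python
# Hedged sketch: cluster failing traces by the fault location they point to.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu, nv = math.sqrt(sum(a * a for a in u)), math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def cluster_by_fault_location(score_vectors, threshold=0.8):
    """Greedy clustering: a trace joins the first cluster whose centroid it resembles."""
    clusters = []                                # each cluster: list of trace indices
    centroids = []                               # running mean score vector per cluster
    for idx, vec in enumerate(score_vectors):
        for c, centroid in enumerate(centroids):
            if cosine(vec, centroid) >= threshold:
                clusters[c].append(idx)
                n = len(clusters[c])
                centroids[c] = [(x * (n - 1) + y) / n for x, y in zip(centroid, vec)]
                break
        else:
            clusters.append([idx])
            centroids.append(list(vec))
    return clusters

# Toy example: traces 0 and 1 implicate location 2; trace 2 implicates location 0.
scores = [
    [0.1, 0.0, 0.9, 0.2],
    [0.0, 0.1, 0.8, 0.3],
    [0.9, 0.2, 0.1, 0.0],
]
print(cluster_by_fault_location(scores))         # expected: [[0, 1], [2]]
```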

142 citations


Proceedings ArticleDOI
01 Oct 2006
TL;DR: Results from several simulated and over 800 failing ICs reveal a significant improvement in localization and an accurate model of the logic-level defect behavior that provides useful insight into the actual defect mechanism.
Abstract: DIAGNOSIX is a comprehensive fault diagnosis methodology for characterizing failures in digital ICs. Using limited layout information, DIAGNOSIX automatically extracts a fault model for a failing IC by analyzing the behavior of the physical neighborhood surrounding suspect lines. Results from several simulated and over 800 failing ICs reveal a significant improvement in localization. More importantly, the output of DIAGNOSIX is an accurate model of the logic-level defect behavior that provides useful insight into the actual defect mechanism. Experiment results for the failing chips with successful physical failure analysis reveal that the extracted faults accurately describe the actual defects.

112 citations


Proceedings ArticleDOI
01 Oct 2006
TL;DR: The experiences in validating a number of model transformations are reported and three techniques that can be used for constructing test cases are proposed.
Abstract: Validation of model transformations is important for ensuring their quality. Successful validation must take into account the characteristics of model transformations and develop a suitable fault model on which test case generation can be based. In this paper, we report our experiences in validating a number of model transformations and propose three techniques that can be used for constructing test cases.

103 citations


Proceedings ArticleDOI
14 Jun 2006
TL;DR: An unknown input observer (UIO) design technique is proposed for fault detection and isolation in uncertain Lipschitz nonlinear systems, with the emphasis placed on fault isolation and fault detection treated as a byproduct.
Abstract: With an emphasis on fault isolation, and by treating fault detection as a byproduct of fault isolation, both actuator and sensor fault detection and isolation (FDI) problems for a class of uncertain Lipschitz nonlinear systems are studied using an unknown input observer (UIO) design technique. To solve the actuator fault detection and isolation problem, we develop a particular system structure by regrouping the system inputs, which is suitable for UIO design. By filtering the regrouped outputs properly, the same system structure can be developed for the sensor fault detection and isolation problem, which allows us to treat it as an actuator fault detection and isolation problem. To accomplish FDI efficiently, a novel full-order nonlinear UIO is designed with a special property suitable for fault isolation purposes, and a necessary and sufficient condition for its existence is presented. The LMI-based sufficient condition enables designers to use the Matlab LMI toolbox and makes the computationally difficult UIO design much easier. For UIO-based FDI, the following three problems are investigated: 1) under what conditions is it possible to isolate single and/or multiple faults? 2) What is the maximum number of faults that can be isolated simultaneously? 3) How can fault isolation schemes be designed to achieve multiple fault isolation (that is, to decide how many faults have occurred and the location of each fault)? Conditions for problem 1) are derived, and the maximum number of faults that can be isolated is determined for problem 2). To solve problem 3), an FDI scheme is designed using a bank of nonlinear UIOs, and its design procedure is presented in a step-by-step fashion. An example is given to show how to use the proposed FDI scheme, and simulation results illustrate that the proposed technique works well for FDI in uncertain Lipschitz nonlinear systems.
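For the linear part of such a system, \( \dot{x} = Ax + Bu + Ed,\; y = Cx \), with d the unknown input, one standard full-order UIO structure (a textbook form given here for orientation; the paper's nonlinear observer adds a Lipschitz term and LMI-based conditions) is

\[
\dot{z} = Nz + TBu + Ky, \qquad \hat{x} = z + Hy, \qquad r = y - C\hat{x},
\]

with \( T = I - HC \), \( (I - HC)E = 0 \), \( N = TA - K_1 C \) Hurwitz, and \( K = K_1 + K_2,\; K_2 = NH \). The estimation error then obeys \( \dot{e} = Ne \) independently of d, so the residual r reacts to faults but not to the decoupled input; a bank of such observers, each decoupled from a different actuator or sensor channel, provides the isolation logic.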

97 citations


Journal ArticleDOI
TL;DR: In this article, the authors use radar interferometry and aftershock relocation of the earthquake sequence to resolve the ambiguity and conclude that the NW−SE striking nodal plane is the primary fault plane and slip was right lateral.
Abstract: The 1994 Mw 6.0 and 2004 Mw 6.5 Al Hoceima earthquakes are the largest to have occurred in Morocco for 100 yr, and give valuable insight into the poorly understood tectonics of the area. Body-wave modelling indicates the earthquakes occurred on near-vertical, strike-slip faults with the nodal planes oriented NW-SE and NE-SW. Distinguishing between the primary fault plane and auxiliary planes, using either geodetic or seismic data, is difficult due to the spatial symmetry in the deformation fields and radiation patterns of moderately sized, buried, strike-slip earthquakes. Preliminary studies, using aftershock locations and surface observations, have been unable to identify the orientation of the primary fault plane for either earthquake conclusively. We use radar interferometry and aftershock relocation of the earthquake sequence to resolve the ambiguity. For the 2004 earthquake, inverting the interferograms for a uniform slip model based on either of the two potential nodal planes results in similar misfits to the data. However, the NE-SW best-fit fault plane has an unrealistically high fault slip-to-length ratio, and we conclude that the NW-SE striking nodal plane is the primary fault plane and slip was right lateral. We carry out tests on synthetic data for a buried strike-slip earthquake in which the orientation of the fault plane is known a priori. Independent of geometry, missing data, and correlated noise, models produced assuming the auxiliary plane to be the fault plane have very high fault slip-to-length ratios. The 1994 earthquake had a smaller magnitude, and comparisons of model misfits and slip-to-length ratios do not conclusively indicate which of the nodal planes is the primary fault plane. Nonetheless, the InSAR data provide valuable information by improving the accuracy of the earthquake location by an order of magnitude. We carry out a multiple-event relocation of the entire earthquake sequence, including aftershocks, making use of the absolute locations for the 1994 and 2004 main shocks from our InSAR study. The aftershock locations are consistent with a NW-SE oriented fault plane in 2004 and suggest that the 1994 earthquake occurred on a NE-SW fault, perpendicular to the fault which ruptured in 2004. Previous tectonic models of the area proposed a bookshelf model of block rotation with NNE-SSW left-lateral faults. This model requires modification to accommodate the observation of right-lateral slip on a NW-SE fault plane for the 2004 earthquake, and we prefer to interpret the fault orientations as due to a zone of distributed shear with a right-lateral fault striking at ∼115° and conjugate, clockwise-rotating, left-lateral faults striking at ∼25°.

95 citations


Journal ArticleDOI
05 Jun 2006
TL;DR: This paper presents a procedure using the finite-element (FE)-based phase variable model combined with wavelet analysis to facilitate the fault diagnostic study for permanent magnet machines with internal short circuit faults.
Abstract: This paper presents a procedure using the finite-element (FE)-based phase variable model combined with wavelet analysis to facilitate fault diagnostic studies for permanent magnet machines with internal short-circuit faults. Our efforts are dedicated to the aspects of fault modeling and fault extraction. The FE-based phase variable model is developed to describe the PM machine with internal short-circuit faults. This model is built with the parameters [inductances and back electromotive force (EMF)] obtained from FE computations of the machine with the same type of fault. The developed model has two features. It includes detailed information on the fault, including the location of the shorted turns and the number of turns involved. It retains the accuracy of the FE model while taking only a fraction of the time needed by an FE computation. This is particularly desirable for diagnosing faults in machines connected to a control circuit. The wavelet transform is used to perform machine current/voltage signature analysis. Excellent results were obtained, providing information that would not otherwise be available except by measurement.

95 citations


Journal ArticleDOI
TL;DR: This paper presents a new fault-tolerant routing methodology that does not degrade performance in the absence of faults and tolerates a reasonably large number of faults without disabling any healthy node.
Abstract: Massively parallel computing systems are being built with thousands of nodes. The interconnection network plays a key role for the performance of such systems. However, the high number of components significantly increases the probability of failure. Additionally, failures in the interconnection network may isolate a large fraction of the machine. It is therefore critical to provide an efficient fault-tolerant mechanism to keep the system running, even in the presence of faults. This paper presents a new fault-tolerant routing methodology that does not degrade performance in the absence of faults and tolerates a reasonably large number of faults without disabling any healthy node. In order to avoid faults, for some source-destination pairs, packets are first sent to an intermediate node and then from this node to the destination node. Fully adaptive routing is used along both subpaths. The methodology assumes a static fault model and the use of a checkpoint/restart mechanism. However, there are scenarios where the faults cannot be avoided solely by using an intermediate node. Thus, we also provide some extensions to the methodology. Specifically, we propose disabling adaptive routing and/or using misrouting on a per-packet basis. We also propose the use of more than one intermediate node for some paths. The proposed fault-tolerant routing methodology is extensively evaluated in terms of fault tolerance, complexity, and performance.
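A minimal sketch of the intermediate-node idea on a 2-D mesh (illustrative Python with a toy fault set; the paper's methodology additionally handles deadlock freedom, misrouting, and multiple intermediate nodes): when fully adaptive minimal routing from source to destination is blocked by faulty links, pick an intermediate node I such that both the source-to-I and I-to-destination subpaths can still be routed minimally.

```python
from collections import deque
from itertools import product

def neighbors(node, size):
    x, y = node
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nx, ny = x + dx, y + dy
        if 0 <= nx < size and 0 <= ny < size:
            yield (nx, ny)

def minimally_reachable(src, dst, size, faulty_links):
    """Can a packet reach dst from src using only minimal (profitable) hops
    over healthy links, adapting freely among them?"""
    seen, queue = {src}, deque([src])
    while queue:
        cur = queue.popleft()
        if cur == dst:
            return True
        for nxt in neighbors(cur, size):
            closer = (abs(nxt[0] - dst[0]) + abs(nxt[1] - dst[1])
                      < abs(cur[0] - dst[0]) + abs(cur[1] - dst[1]))
            if closer and nxt not in seen and frozenset((cur, nxt)) not in faulty_links:
                seen.add(nxt)
                queue.append(nxt)
    return False

def pick_intermediate(src, dst, size, faulty_links):
    """Return a node I so that src->I and I->dst are both minimally routable, or None."""
    for cand in product(range(size), repeat=2):
        if cand in (src, dst):
            continue
        if minimally_reachable(src, cand, size, faulty_links) and \
           minimally_reachable(cand, dst, size, faulty_links):
            return cand
    return None

# Two broken links block every minimal path from (0,0) to (1,1) in a 3x3 mesh...
faults = {frozenset(((0, 0), (1, 0))), frozenset(((0, 1), (1, 1)))}
assert not minimally_reachable((0, 0), (1, 1), 3, faults)
# ...but routing via an intermediate node restores a fully minimal two-leg path.
print(pick_intermediate((0, 0), (1, 1), 3, faults))   # e.g. (0, 2)
```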

92 citations


Journal ArticleDOI
TL;DR: A fuzzy logic-based algorithm to identify the type of faults in radial, unbalanced distribution systems has been developed, which is able to accurately identify the phase(s) involved in all ten types of shunt faults that may occur in an electric power distribution system under different fault types, fault resistances, fault inception angles, system topologies and loading levels.
Abstract: In this paper, a fuzzy logic-based algorithm to identify the type of faults in radial, unbalanced distribution systems has been developed. The proposed technique is able to accurately identify the phase(s) involved in all ten types of shunt faults that may occur in an electric power distribution system under different fault types, fault resistances, fault inception angles, system topologies and loading levels. The proposed method needs only the three line-current measurements available at the substation and can perform the fault classification task in about a half cycle. All the test results show that the proposed fault identifier is well suited for identifying fault types in radial, unbalanced distribution systems.
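A toy sketch of the underlying classification logic (hypothetical membership functions, thresholds, and current ratios; the paper's fuzzy system is tuned on simulated distribution-feeder data): phases whose current rises well above its pre-fault value are declared involved, and a significant zero-sequence current indicates ground involvement.

```python
# Hedged sketch of a fuzzy fault-type identifier (toy rules, not the paper's system).
# Inputs: per-phase current magnitudes normalized by their pre-fault values, plus the
# zero-sequence current ratio, all measured at the substation.

def mu_high(x, lo=1.5, hi=3.0):
    """Trapezoidal membership of 'current is high' (x = fault/pre-fault ratio)."""
    if x <= lo:
        return 0.0
    if x >= hi:
        return 1.0
    return (x - lo) / (hi - lo)

def classify(ia, ib, ic, i0, threshold=0.5):
    phases = [p for p, r in zip("ABC", (ia, ib, ic)) if mu_high(r) >= threshold]
    grounded = mu_high(i0, lo=0.1, hi=0.5) >= threshold   # zero-sequence ratio
    if len(phases) == 3:
        return "ABC"                      # three-phase fault
    return "".join(phases) + ("G" if grounded else "")

# A-to-ground fault: phase A current jumps, zero-sequence current appears.
print(classify(ia=4.0, ib=1.0, ic=1.1, i0=0.8))   # -> "AG"
# B-C fault without ground: two phases jump, no zero-sequence current.
print(classify(ia=1.0, ib=3.5, ic=3.5, i0=0.02))  # -> "BC"
```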

Journal ArticleDOI
01 May 2006
TL;DR: An algorithm for vague fault-tree analysis is proposed to calculate the fault intervals of system components by integrating experts' knowledge and experience, expressed as possibilities of failure of the bottom events.
Abstract: An algorithm for vague fault-tree analysis is proposed in this paper to calculate the fault intervals of system components by integrating experts' knowledge and experience, expressed as the possibility of failure of bottom events. We also modify Tanaka et al.'s definition and extend its use in vague fault-tree analysis to finding the most important basic system component for managerial decision-making. In the numerical verification, the fault of an automatic gun is presented as a numerical example. For an advanced experiment, a fault tree for the reactor protective system is adopted as a simulation example and we compare the results with other methods. This paper also develops vague fault-tree decision support systems (VFTDSS) to generate the fault tree and fault-tree nodes, and then directly compute the vague fault-tree interval, traditional reliability, and the vague reliability interval.
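The interval propagation at the heart of such an analysis can be sketched as follows (a simplified illustration with made-up numbers; the paper's vague-set formulation and importance ranking go beyond plain interval arithmetic):

```python
# Hedged sketch: basic events carry failure possibilities as intervals [lower, upper]
# elicited from experts, and the intervals are propagated through AND/OR gates.
from functools import reduce

def and_gate(*events):
    """All inputs must fail: multiply bounds (assumes independence)."""
    lo = reduce(lambda a, b: a * b, (e[0] for e in events))
    hi = reduce(lambda a, b: a * b, (e[1] for e in events))
    return (lo, hi)

def or_gate(*events):
    """Any input failing suffices: complement-product rule."""
    lo = 1 - reduce(lambda a, b: a * b, (1 - e[0] for e in events))
    hi = 1 - reduce(lambda a, b: a * b, (1 - e[1] for e in events))
    return (lo, hi)

# Toy tree: TOP = (B1 AND B2) OR B3, with expert-elicited bottom-event intervals.
b1, b2, b3 = (0.01, 0.03), (0.02, 0.05), (0.001, 0.004)
top = or_gate(and_gate(b1, b2), b3)
print(tuple(round(x, 6) for x in top))   # interval possibility of the top event
```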

Journal ArticleDOI
TL;DR: In this article, the authors describe the data assimilation procedure used to construct the fault model and assign frictional properties, and show that the frictional failure physics leads to self-organization of the statistical dynamics, and produces empirical statistical distributions (probability density functions: PDFs) that characterize the activity.
Abstract: Virtual California is a topologically realistic simulation of the interacting earthquake faults in California. Inputs to the model arise from field data, and typically include realistic fault system topologies, realistic long-term slip rates, and realistic frictional parameters. Outputs from the simulations include synthetic earthquake sequences and space-time patterns together with associated surface deformation and strain patterns that are similar to those seen in nature. Here we describe details of the data assimilation procedure we use to construct the fault model and to assign frictional properties. In addition, by analyzing the statistical physics of the simulations, we can show that the frictional failure physics, which includes a simple representation of a dynamic stress intensity factor, leads to self-organization of the statistical dynamics, and produces empirical statistical distributions (probability density functions: PDFs) that characterize the activity. One type of distribution that can be constructed from empirical measurements of simulation data is the PDF of recurrence intervals on selected faults. Inputs to simulation dynamics are based on the use of time-averaged event-frequency data, and outputs include PDFs representing measurements of dynamical variability arising from fault interactions and space-time correlations. As a first step toward productively using model-based methods for earthquake forecasting, we propose that simulations be used to generate the PDFs for recurrence intervals instead of the usual practice of basing the PDFs on standard forms (Gaussian, Log-Normal, Pareto, Brownian Passage Time, and so forth). Subsequent development of simulation-based methods should include model enhancement, data assimilation and data mining methods, and analysis techniques based on statistical physics.
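A minimal sketch of the proposed forecasting step (toy synthetic catalog, not Virtual California output): build the empirical PDF of recurrence intervals for a fault directly from the simulated event times and use it for conditional probabilities, instead of assuming a Gaussian, log-normal, or Brownian Passage Time form.

```python
import numpy as np

rng = np.random.default_rng(0)
event_times = np.sort(rng.uniform(0.0, 5000.0, size=400))   # toy event times (years) on one fault
intervals = np.diff(event_times)                            # recurrence intervals

density, edges = np.histogram(intervals, bins=30, density=True)  # empirical PDF
centers = 0.5 * (edges[:-1] + edges[1:])

def conditional_prob(t0, dt):
    """P(event within the next dt years | t0 years have elapsed), from the empirical PDF."""
    tail = np.trapz(density[centers >= t0], centers[centers >= t0])
    window = np.trapz(density[(centers >= t0) & (centers < t0 + dt)],
                      centers[(centers >= t0) & (centers < t0 + dt)])
    return window / tail if tail > 0 else 0.0

print(conditional_prob(t0=10.0, dt=5.0))
```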

Journal ArticleDOI
TL;DR: In this paper, the authors estimate coseismic displacements from the 2002 Mw 7.9 Denali Fault earthquake at 232 GPS sites in Alaska and Canada and restrict the motion to right-lateral slip and north-side-up dip slip.
Abstract: We estimate coseismic displacements from the 2002 Mw 7.9 Denali Fault earthquake at 232 GPS sites in Alaska and Canada. Displacements along a N-S profile crossing the fault indicate right-lateral slip on a near-vertical fault with a significant component of vertical motion, north side up. We invert both GPS displacements and geologic surface offsets for slip on a three-dimensional (3-D) fault model in an elastic half-space. We restrict the motion to right-lateral slip and north-side-up dip slip. Allowing for oblique slip along the Denali and Totschunda faults improves the model fit to the GPS data by about 30%. We see mostly right-lateral strike-slip motion on the Denali and Totschunda faults, but in a few areas we see a significant component of dip slip. The slip model shows increasing slip from west to east along the Denali Fault, with four localized higher-slip patches, three near the Trans-Alaska pipeline crossing and a large slip patch corresponding to a Mw 7.5 subevent about 40 km west of the Denali-Totschunda junction. Slip of 1-3 m was estimated along the Totschunda Fault, with the majority of slip at depths shallower than 9 km. We have limited resolution on the Susitna Glacier Fault, but the estimated slip along the fault is consistent with a Mw 7.2 thrust subevent. The total estimated moment in the Denali Fault earthquake is equivalent to Mw 7.89. The estimated slip distribution along the surface is in very good agreement with geological surface offsets, but we find that surface offsets measured on glaciers are biased toward lower values.

Proceedings ArticleDOI
01 Aug 2006
TL;DR: The modeling of a power distribution system and its protective relaying to obtain an extensive fault database using the capabilities of ATP and Matlab is described; a methodology to perform automatic simulations is presented, and a database with 930 fault situations in a 25 kV test system is obtained.
Abstract: Opportune fault location in power distribution systems is an important aspect related to power quality, and especially to maintaining good continuity indexes. Fault location methods which use more information than the RMS values of voltage and current are commonly known as Knowledge-Based Methods (KBM). These require a complete fault database to adequately perform the training and validation stages and, as a consequence, successfully perform the fault location task. In this paper, the modeling of a power distribution system and its protective relaying to obtain an extensive fault database using the capabilities of ATP and Matlab is described. The obtained database can be used to perform different types of system analysis and, in this specific case, to solve the problem of fault location in power distribution systems. As a result, a methodology to perform automatic simulations and a database with 930 fault situations in a 25 kV test system were obtained.

Journal ArticleDOI
TL;DR: The paper deals with multiple fault diagnosis of analogue AC or DC circuits with limited accessible terminals for excitation and measurement, and presents an algorithm for identifying faulty elements and evaluating their parameters.
Abstract: The paper deals with multiple fault diagnosis of analogue AC or DC circuits with limited accessible terminals for excitation and measurement, and presents an algorithm for identifying faulty elements and evaluating their parameters. The main achievement is a method enabling us to efficiently identify faulty elements. For this purpose, some testing equations are derived that play a key role in the identification of possibly faulty elements, which are then verified using a test of acceptance. The proposed approach is described in detail for double fault diagnosis. An extension to triple fault diagnosis is also given. Although the method pertains to linear circuits, some aspects of multiple fault diagnosis of non-linear circuits can also be handled using the small-signal approach. Two numerical examples illustrate the proposed method and show its efficiency. Copyright © 2006 John Wiley & Sons, Ltd.

Proceedings ArticleDOI
01 Oct 2006
TL;DR: This work describes a new diagnostic ATPG implementation that uses a generalized fault model and shows that diagnostic resolution can be significantly enhanced over a traditional diagnostic test set aimed only at stuck-at faults.
Abstract: It is now generally accepted that the stuck-at fault model is no longer sufficient for many manufacturing test activities. Consequently, diagnostic test pattern generation based solely on distinguishing stuck-at faults is unlikely to achieve the resolution required for emerging fault types. In this work we describe a new diagnostic ATPG implementation that uses a generalized fault model. It can be easily used in any diagnosis framework to refine diagnostic resolution for complex defects. For various types of faults that include, for example, bridge, transition, and transistor stuck-open, we show that diagnostic resolution can be significantly enhanced over a traditional diagnostic test set aimed only at stuck-at faults. Finally, we illustrate the use of our diagnostic ATPG to distinguish faults derived from a state-of-the-art diagnosis flow based on layout.

Journal ArticleDOI
TL;DR: A simulator for resistive-bridging and stuck-at faults based on electrical equations rather than table look-up is presented, thus exposing more flexibility, and the interaction of fault effects in the current time frame and earlier time frames is elaborated on.
Abstract: The authors present a simulator for resistive-bridging and stuck-at faults. In contrast to earlier work, it is based on electrical equations rather than table look-up, thus exposing more flexibility. For the first time, simulation of sequential circuits is dealt with; the interaction of fault effects in the current time frame and earlier time frames is elaborated on for different bridge resistances. Experimental results are given for resistive-bridging and stuck-at faults in combinational and sequential circuits. Different definitions of fault coverage are listed, and quantitative results with respect to all these definitions are given for the first time.
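The kind of electrical equation such a simulator rests on can be illustrated with a simplified, linearized bridge model (an illustration only, not the simulator's transistor-level equations): for a bridge of resistance \(R_{sh}\) between a node pulled up through an effective resistance \(R_p\) and a node pulled down through \(R_n\), the two bridged nodes settle at

\[
V_1 = V_{DD}\,\frac{R_{sh} + R_n}{R_p + R_{sh} + R_n},
\qquad
V_0 = V_{DD}\,\frac{R_n}{R_p + R_{sh} + R_n}.
\]

A downstream gate misinterprets its input only if the corresponding voltage crosses that gate's logic threshold, so each bridge defines a critical resistance range \([0, R_{crit}]\) within which the fault is detectable; fault coverage then depends on the assumed distribution of bridge resistances.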

Proceedings ArticleDOI
11 Sep 2006
TL;DR: In this article, a new fault model, labeled crosspoint faults, is proposed for reversible logic circuits and a randomized Automatic Test Pattern Generation algorithm targeting this specific kind of fault is introduced and analyzed.
Abstract: Reversible logic computing is a rapidly developing research area. Testing such circuits is obviously an important issue. In this paper, we consider a new fault model, labeled crosspoint faults, for reversible logic circuits. A randomized Automatic Test Pattern Generation algorithm targeting this specific kind of fault is introduced and analyzed. Simulation results show that the algorithm yields very good performance. The relationship between the crosspoint faults and stuck-at faults is also investigated. We show that the crosspoint fault model is a better fault model for reversible circuits since it dominates the traditional stuck-at fault model in most instances.
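A toy illustration of the fault model and the randomized test search (hypothetical two-gate circuit; the paper's ATPG and its dominance argument over stuck-at faults are more elaborate): a reversible circuit is a cascade of multiple-control Toffoli gates, and a crosspoint fault adds or removes a single control connection.

```python
# Hedged sketch of the crosspoint fault model on a Toffoli-gate cascade.
# A gate is (controls, target); a crosspoint fault deletes or adds one control.
import random

def run(circuit, x, width):
    bits = [(x >> i) & 1 for i in range(width)]
    for controls, target in circuit:
        if all(bits[c] for c in controls):      # Toffoli: invert target if all controls are 1
            bits[target] ^= 1
    return sum(b << i for i, b in enumerate(bits))

def inject_missing_control(circuit, gate_idx, ctrl):
    faulty = [(set(c), t) for c, t in circuit]
    faulty[gate_idx][0].discard(ctrl)           # "disappearance" crosspoint fault
    return [(frozenset(c), t) for c, t in faulty]

def random_atpg(good, faulty, width, budget=64):
    """Draw random patterns until one distinguishes the faulty circuit."""
    for _ in range(budget):
        x = random.randrange(1 << width)
        if run(good, x, width) != run(faulty, x, width):
            return x
    return None

WIDTH = 3
good = [(frozenset({0, 1}), 2), (frozenset({2}), 0)]      # Toffoli(0,1 -> 2), CNOT(2 -> 0)
faulty = inject_missing_control(good, 0, 1)               # drop control 1 of the first gate
test = random_atpg(good, faulty, WIDTH)
print(None if test is None else format(test, "03b"))      # a detecting input pattern
```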

Proceedings ArticleDOI
01 Oct 2006
TL;DR: The evaluation of the functional constraints on large industrial circuits shows that the proposed constraint generation algorithm generates a powerful set of constraints, most of which are not captured in the constraints extracted by designers for design-verification purposes.
Abstract: In this paper, we present a study of implication-based functional constraint extraction techniques to generate pseudo-functional scan tests. Novel algorithms to extract pair-wise and multi-node constraints as Boolean expressions on arbitrary gates in the design are presented. We analyze their impact on reducing overkill in testing, and report the trade-offs in coverage and scan-loads for a number of fault models. In the case of the path-delay fault model, we show that the longest paths contribute most to the over-testing problem, raising the question of scan testing of the longest paths. Finally, our evaluation of the functional constraints on large industrial circuits shows that the proposed constraint generation algorithm generates a powerful set of constraints, most of which are not captured in the constraints extracted by designers for design-verification purposes.

Journal ArticleDOI
TL;DR: In this article, the authors inverted the high-resolution spatiotemporal slip distribution of the 21 September 1999 Chi-Chi, Taiwan, earthquake utilizing data from densely distributed island-wide strong motion stations for a 3-dimensional (3-D) fault geometry, and 3-D Green's functions calculations based upon parallel nonnegative least squares inversion.
Abstract: We inverted the high-resolution spatiotemporal slip distribution of the 21 September 1999 Chi-Chi, Taiwan, earthquake utilizing data from densely distributed island-wide strong motion stations, for a three-dimensional (3-D) fault geometry and 3-D Green's function calculations, based upon a parallel nonnegative least squares inversion. The 3-D fault geometry, consistent with a high-resolution reflection profile, is determined from GPS inversion and the aftershock distribution. This 3-D fault model shows the dip angle gradually becoming shallower from south to north along the fault and nearly flat at the deeper portion of the fault. The 3-D Green's functions are calculated through numerical wavefield simulation using a three-dimensional heterogeneous velocity structure derived from tomography studies. The Green's functions show significant azimuthal variations and suggest the necessity of lateral heterogeneity in the velocity structure. Considering complex fault geometry and Green's functions in full 3-D scale, we invert the spatial/temporal slip distribution of the 1999 Chi-Chi earthquake using the best available and most densely populated strong motion waveform data. We perform the inversion in a parallel environment utilizing multiple time windows to manage the large data volume and number of source parameters. Results indicate that most slip occurred at the shallower portion of the fault above the decollement. Two major asperities are found, one in the middle of the fault and another at the northern portion of the fault near the bend in the fault trace. The slip in the southern portion of the fault shows a relatively low slip rate with longer time duration, while the slip in the northern portion of the fault shows a large slip rate with shorter time duration. The synthetics explain the observations well for the island-wide distributed strong motion stations. This comprehensive study emphasizes the importance of realistic fault geometry, 3-D Green's functions, and a parallel inversion technique in correctly accounting for both the detailed source rupture process and its relationship with the strong ground motion of this intense earthquake.

Journal ArticleDOI
TL;DR: This methodology advocates testing dies for process variation by monitoring parameter variations across a die and analyzing the data that the monitoring devices provide, and uses ring oscillators (ROs) to map parameter variations into the frequency domain.
Abstract: Ring oscillators are not new, but the authors of this article use them in a novel, unconventional way to monitor process variation at different regions of a die in the frequency domain. Measuring the variation of each design or fabrication parameter is infeasible from a circuit designer's perspective. Therefore, we propose a methodology that approaches process variation (PV) from a test perspective. This methodology advocates testing dies for process variation by monitoring parameter variations across a die and analyzing the data that the monitoring devices provide. We use ring oscillators (ROs) to map parameter variations into the frequency domain. Our use of ROs is far more rigorous than in standard practices. To keep complexity and overhead low, we neither employ analog channels nor use zero-crossing counters. Instead, we use a frequency-domain analysis because it allows compacting RO signals using digital adders (thereby also reducing the number of wires), and decoupling frequencies to identify high PVs and problematic regions. Our PV test methodology includes defining the PV fault model; deciding on the types, numbers, and positions of a small distributed network of frequency-sensitive sensors (ROs); and designing an efficient, fully digital communication channel with sufficient bandwidth to transfer sensor information to an analysis point. With this methodology, users can trade off cost and accuracy by choosing the number or frequency of sensors and the regions on the die to monitor.
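The mapping that makes this work is the standard first-order relation between stage delay and ring-oscillator frequency (given here for orientation; the article's sensor placement and frequency-domain compaction are more involved):

\[
f_{RO} = \frac{1}{2 N t_{pd}},
\qquad
\frac{\Delta f}{f_{RO}} \approx -\,\frac{\Delta t_{pd}}{t_{pd}},
\]

where N is the number of stages and \(t_{pd}\) the average stage delay, so a parameter variation that changes device delay in one region of the die appears directly as a fractional frequency shift of the RO placed there.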

Proceedings ArticleDOI
07 Nov 2006
TL;DR: The pointcut fault model is a first step toward a fault model that focuses on the unique constructs of the AspectJ language, which is needed for the systematic and effective testing of AspectJ programs.
Abstract: We present a candidate fault model for pointcuts in AspectJ programs. The fault model identifies faults that we believe are likely to occur when writing pointcuts in the AspectJ language. Categories of fault types are identified, and each individual fault type is described and categorized. We argue that a fault model that focuses on the unique constructs of the AspectJ language is needed for the systematic and effective testing of AspectJ programs. Our pointcut fault model is a first step towards such a model.

Book ChapterDOI
TL;DR: The value of this work is demonstrated by showing how FTA can identify safety defects in the process from which the Fault Trees were automatically derived.
Abstract: Defects in safety critical processes can lead to accidents that result in harm to people or damage to property. Therefore, it is important to find ways to detect and remove defects from such processes. Earlier work has shown that Fault Tree Analysis (FTA) [3] can be effective in detecting safety critical process defects. Unfortunately, it is difficult to build a comprehensive set of Fault Trees for a complex process, especially if this process is not completely well-defined. The Little-JIL process definition language has been shown to be effective for defining complex processes clearly and precisely at whatever level of granularity is desired [1]. In this work, we present an algorithm for generating Fault Trees from Little-JIL process definitions. We demonstrate the value of this work by showing how FTA can identify safety defects in the process from which the Fault Trees were automatically derived.

Journal Article
TL;DR: In this paper, the authors present a comprehensive fault hypothesis for safety-critical real-time computer systems, under the assumption that a distributed system node is expected to be a system-on-a-chip (SOC).
Abstract: A safety-critical real-time computer system must provide its services with a dependability that is much better than the dependability of any one of its constituent components. This challenging goal can only be achieved by the provision of fault tolerance. The design of any fault-tolerant system proceeds in four distinct phases. In the first phase the fault hypothesis is shaped, i.e. assumptions are made about the types and numbers of faults that must be tolerated by the planned system. In the second phase an architecture is designed that tolerates the specified faults. In the third phase the architecture is implemented and the functions and fault-tolerance mechanisms are validated. Finally, in the fourth phase it has to be confirmed experimentally that the assumptions contained in the fault-hypothesis are met by reality. The first part of this contribution focuses on the establishment of a comprehensive fault hypothesis for safety-critical real-time computer systems. The size of the fault containment regions, the failure mode of the fault containment regions, the assumed frequency of the faults and the assumptions about error detection latency and error containment are discussed under the premise that in future a distributed system node is expected to be a system-on-a-chip (SOC). The second part of this contribution focuses on the implications that such a fault hypothesis will have on the future architecture of distributed safety-critical real-time computer systems in the automotive domain.

Journal Article
TL;DR: A computational framework to deal with the problem of early fault classification using Case-Based Reasoning is presented and different techniques for case retrieval and reuse that have been applied at different times of fault evolution are illustrated.
Abstract: In this paper we introduce a system for early classification of several fault modes in a continuous process. Early fault classification is basic in supervision and diagnosis systems, since a fault could arise at any time, and the system must identify the fault as soon as possible. We present a computational framework to deal with the problem of early fault classification using Case-Based Reasoning. This work illustrates different techniques for case retrieval and reuse that have been applied at different times of fault evolution. The technique has been tested for a set of fourteen fault classes simulated in a laboratory plant.

Journal ArticleDOI
TL;DR: How fault tuples can and have been used to enhance testing tasks such as fault simulation, test generation, and diagnosis, and enable new capabilities such as interfault collapsing and application-based quality metrics is described.
Abstract: Fault tuples represent a defect modeling mechanism capable of capturing the logical misbehavior of arbitrary defects in digital circuits. To justify this claim, this paper describes two types of logic faults (state transition and signal line faults) and formally shows how fault tuples can be used to precisely represent any number of faults of this kind. The capability of fault tuples to capture misbehaviors beyond logic faults is then illustrated using many examples of varying degrees of complexity. In particular, the ability of fault tuples to modulate fault controllability and observability is examined. Finally, it is described how fault tuples can and have been used to enhance testing tasks such as fault simulation, test generation, and diagnosis, and enable new capabilities such as interfault collapsing and application-based quality metrics.

Proceedings ArticleDOI
30 Apr 2006
TL;DR: A scheme to functionally test the networking infrastructure of a system within a network on chip and a test pattern generation and application algorithm that relies on a network simulator are presented.
Abstract: A scheme to functionally test the networking infrastructure of a system within a network on chip is presented. A fault model and a test pattern generation and application algorithm that relies on a network simulator are presented. Experimental results demonstrate the impact of the presented algorithm.

Book ChapterDOI
10 Oct 2006
TL;DR: The authors introduce the first fault attack applied to RSA in standard mode, in which the assumptions on the induced faults' effects are relaxed and the attack is done by modifying the modulus.
Abstract: It is well known that a malicious adversary can try to retrieve secret information by inducing a fault during cryptographic operations. Following the work of Seifert on fault inductions during RSA signature verification, we consider in this paper the signature counterpart. Our article introduces the first fault attack applied to RSA in standard mode. By only corrupting one public key element, one can recover the private exponent. Indeed, similarly to Seifert's attack, our attack is done by modifying the modulus. One of the strong points of our attack is that the assumptions on the induced faults' effects are relaxed. In one mode, absolutely no knowledge of the fault's behavior is needed to achieve the full recovery of the private exponent. In another mode, based on a fault model defining what is called a dictionary, the attack's efficiency is improved and the number of faults is dramatically reduced. All our attacks are very practical. Note that those attacks do work even against implementations with deterministic (e.g., RSA-FDH) or random (e.g., RSA-PFDH) paddings, except for cases where we have signatures with randomness recovery (such as RSA-PSS). The results finally presented in this paper lead us to conclude that it is also mandatory to protect RSA's public parameters against fault attacks.

Journal Article
TL;DR: The strength of SMS4 against the differential fault attack is examined, and the authors suggest that the encryption device should be protected to prevent the adversary from deducing faults.
Abstract: SMS4 is the block cipher used in WAPI,and it is also the first commercial block(cipher) disclosed by the government.Since it was disclosed only a short time ago,on its security,there has been no published paper at present.In this paper the strength of SMS4(against) the differential fault attack is examined.The authors use the byte-oriented fault model,and take advantage of the differential analysis as well.Theoretically,the 128bit master key for SMS4 can be obtained by using 32 faulty ciphertexts.But in practice,for the fact that the byte position where the fault happens isn't equally distributed,the number of faulty ciphertexts needed will be a little bigger than the theoretical value.The attack experiment result validates this fact too.The result shows that only need average 47 faulty ciphertexts to recover the 128bit keys for SMS4.So SMS4 is vulnerable to differential fault attack.To(avoid) this kind of attack, the authors suggest that the encryption device should be protected to prevent the adversary from deducing faults.