Author

V. V. Saposhnikov

Bio: V. V. Saposhnikov is an academic researcher from Saint Petersburg State University. The author has contributed to research in topics: Combinational logic & Electronic circuit. The author has an h-index of 7 and has co-authored 9 publications receiving 196 citations.

Papers
Journal ArticleDOI
TL;DR: A structure dependent method for the systematic design of a self-checking circuit which is well adapted to the fault model of single gate faults and which can be used in test mode is proposed.
Abstract: In this paper we propose a structure dependent method for the systematic design of a self-checking circuit which is well adapted to the fault model of single gate faults and which can be used in test mode.

54 citations

Journal ArticleDOI
TL;DR: Carefully selected non-unidirectional gates of the original circuit are duplicated such that every single gate fault can only be propagated to the circuit outputs on paths with either an even or an odd number of inverters.
Abstract: In this paper, a new method for the design of unidirectional combinational circuits is proposed. Carefully selected non-unidirectional gates of the original circuit are duplicated such that every single gate fault can only be propagated to the circuit outputs on paths with either an even or an odd number of inverters. Unlike previous methods, it is not necessary to localize all the inverters of the circuit at the primary inputs. The average area overhead for the described method of circuit transformation is 16% of the original circuit, which is less than half of the area overhead of other known methods. The transformed circuits are monitored by Berger codes, or by the least significant two bits of a Berger code. All single stuck-at faults are detected by the proposed method.
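Since the transformed circuits are monitored by Berger codes, a minimal sketch of Berger encoding may help readers unfamiliar with it. The Python below (function names and the toy word width are my own) computes the check symbol as the binary count of zeros in the information bits and shows why a unidirectional error is always detected.

```python
# Minimal Berger-code sketch: the check symbol is the count of 0s in the
# information bits. Unidirectional errors (all flips 0->1, or all 1->0)
# move the information zero-count and the stored check value in opposite
# directions, so they can never compensate each other.

def berger_encode(info_bits):
    """Return (info_bits, check_bits) with the check = number of 0s in info."""
    k = len(info_bits)
    r = max(1, k.bit_length())            # ceil(log2(k+1)) check bits, enough to hold 0..k
    zeros = info_bits.count(0)
    check = [(zeros >> i) & 1 for i in reversed(range(r))]
    return info_bits, check

def berger_check(info_bits, check_bits):
    """True if the stored check equals the recomputed zero count."""
    stored = int("".join(map(str, check_bits)), 2)
    return stored == info_bits.count(0)

info, check = berger_encode([1, 0, 1, 1, 0, 0])
assert berger_check(info, check)

# Inject a unidirectional 0->1 error on two information bits:
faulty = info.copy()
faulty[1] = 1
faulty[4] = 1
assert not berger_check(faulty, check)    # zero count dropped, stored check did not
```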

45 citations

Proceedings ArticleDOI
03 Jul 2000
TL;DR: A new approach for concurrent checking by Berger codes is proposed, which modifies a subset of outputs of the original circuit by adding, modulo 2, the outputs of a complementary circuit.
Abstract: In this paper, a new approach for concurrent checking by Berger codes is proposed. We modify a subset of outputs of the original circuit by adding, modulo 2, the outputs of a complementary circuit. In the error-free case the unmodified outputs together with their corresponding modified outputs are elements of a Berger code. The number of outputs of the original circuit does not increase. Compared to the traditional method of concurrent checking by Berger codes, a smaller checker is needed.
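The sketch below is one plausible toy reading of this scheme, not the authors' construction: the complementary circuit is chosen so that, after the XOR, the selected outputs become the Berger check symbol of the unmodified outputs. The example circuit is invented, and the helpers berger_encode / berger_check come from the sketch above.

```python
# Toy reading of the modified-output scheme (all circuit functions invented):
# the last outputs of the original circuit are XORed with a complementary
# circuit so that, fault-free, (unmodified outputs, modified outputs) form
# a Berger codeword. Reuses berger_encode / berger_check from the sketch above.

def original_circuit(a, b, c):
    # Some arbitrary 5-output combinational function.
    return [a & b, a | c, a ^ b ^ c, b & c, a]

def complementary_circuit(a, b, c):
    # Chosen so that XORing it onto the last two outputs turns them into
    # the Berger check of the first three outputs.
    y = original_circuit(a, b, c)
    info, check = y[:3], y[3:]
    _, predicted = berger_encode(info)
    return [p ^ q for p, q in zip(predicted, check)]

def checked_outputs(a, b, c):
    y = original_circuit(a, b, c)
    comp = complementary_circuit(a, b, c)
    modified = [p ^ q for p, q in zip(y[3:], comp)]
    return y[:3], modified

for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            info, mod = checked_outputs(a, b, c)
            assert berger_check(info, mod)   # a Berger codeword for every input
```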

34 citations

Proceedings ArticleDOI
28 Apr 1996
TL;DR: Self-dual parity checking as a modification of ordinary parity checking is proposed in this paper and the usefulness of the proposed method is demonstrated for MCNC benchmark circuits.
Abstract: Self-dual parity checking as a modification of ordinary parity checking is proposed in this paper. This method is based on the newly introduced concept of a self-dual complement of a given Boolean function. The parity prediction function f_p of ordinary parity checking is replaced by the self-dual complement δ_p of this function such that the modulo-2 sum of the outputs of the monitored circuit and of δ_p is an arbitrary self-dual Boolean function h. Because of the large number of possible choices for h as an arbitrary self-dual Boolean function, the area overhead for an optimal self-dual complement δ_p is small. Alternating inputs are applied to the circuit; the output h is alternating as long as no error occurs. The fault coverage of this method is almost the same as for parity checking. The usefulness of the proposed method is demonstrated for MCNC benchmark circuits.
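For readers unfamiliar with self-duality: a Boolean function h is self-dual if complementing all inputs complements the output, i.e. h(x̄) = ¬h(x), so under alternating inputs (x followed by x̄) a fault-free self-dual output alternates. The sketch below follows the construction as the abstract states it, with an invented three-output example circuit; δ_p is obtained as the XOR of the circuit's output parity with a chosen self-dual function h (here, the 3-input majority).

```python
# Self-dual parity checking sketch. A function h is self-dual iff h(~x) = ~h(x);
# the 3-input majority function is a classic example. Per the abstract, delta_p
# is chosen so that parity(circuit(x)) XOR delta_p(x) equals a self-dual h(x).
from itertools import product

def circuit(a, b, c):                   # arbitrary example circuit (invented)
    return [a & b, b | c, a ^ c]

def parity(bits):
    out = 0
    for bit in bits:
        out ^= bit
    return out

def h(a, b, c):                         # 3-input majority: self-dual
    return (a & b) | (a & c) | (b & c)

def delta_p(a, b, c):                   # self-dual complement of the parity prediction
    return parity(circuit(a, b, c)) ^ h(a, b, c)

for a, b, c in product((0, 1), repeat=3):
    # The check sum equals the chosen self-dual function...
    assert parity(circuit(a, b, c)) ^ delta_p(a, b, c) == h(a, b, c)
    # ...so with alternating inputs the fault-free check output alternates.
    assert h(1 - a, 1 - b, 1 - c) == 1 - h(a, b, c)
```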

32 citations

Proceedings ArticleDOI
02 Dec 1998
TL;DR: A new method for the implementation of a self-dual circuit with alternating inputs that is especially useful for online testing of control systems for which time is not critical is proposed.
Abstract: In this paper we propose a new method for the implementation of a self-dual circuit with alternating inputs. For every circuit output the self-dual complement is designed. In contrast to ordinary duplication and comparison, the corresponding self-dual complements and the monitored circuit itself can be jointly implemented. The self-dual duplicated circuits can be used in test mode, in online mode, and in fast mode without alternating inputs. Because of the necessary time redundancy, the approach is especially useful for online testing of control systems for which time is not critical.

15 citations


Cited by
Proceedings ArticleDOI
06 Nov 2003
TL;DR: A new paradigm for designing logic circuits with concurrent error detection (CED) based on partial duplication is described, capable of reducing the soft error failure rate significantly with a fraction of the overhead required for full duplication.
Abstract: In this paper, a new paradigm for designing logic circuits with concurrent error detection (CED) is described. The key idea is to exploit the asymmetric soft error susceptibility of nodes in a logic circuit. Rather than target all modeled faults, CED is targeted towards the nodes that have the highest soft error susceptibility to achieve cost-effective tradeoffs between overhead and reduction in the soft error failure rate. Under this new paradigm, we present one particular approach that is based on partial duplication and show that it is capable of reducing the soft error failure rate significantly with a fraction of the overhead required for full duplication. A procedure for characterizing the soft error susceptibility of nodes in a logic circuit and a heuristic procedure for selecting the set of nodes for partial duplication are described. A full set of experimental results demonstrates the cost-effective tradeoffs that can be achieved.
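As a rough illustration of the selection step only (the paper's susceptibility characterization and heuristic are not reproduced here), the sketch below greedily duplicates the nodes with the highest soft-error susceptibility until an area budget is exhausted; all node names, scores, and costs are invented.

```python
# Greedy sketch of susceptibility-driven partial duplication (illustrative
# only; node names, scores, and costs are invented, not from the paper).

def select_nodes_for_duplication(susceptibility, area_cost, budget):
    """Pick nodes in decreasing susceptibility until the area budget is spent."""
    ranked = sorted(susceptibility, key=susceptibility.get, reverse=True)
    chosen, spent = [], 0.0
    for node in ranked:
        if spent + area_cost[node] <= budget:
            chosen.append(node)
            spent += area_cost[node]
    return chosen, spent

susceptibility = {"n1": 0.40, "n2": 0.25, "n3": 0.20, "n4": 0.10, "n5": 0.05}
area_cost      = {"n1": 3.0,  "n2": 2.0,  "n3": 2.5,  "n4": 1.0,  "n5": 0.5}

nodes, used = select_nodes_for_duplication(susceptibility, area_cost, budget=6.0)
print(nodes, used)   # ['n1', 'n2', 'n4'] with 6.0 area units used
```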

295 citations

Journal ArticleDOI
TL;DR: An overview of a comprehensive collection of on-line testing techniques for VLSI, including self-checking design, signature monitoring, on-line parameter monitoring, BIST variants, scan-path exploitation, fail-safe techniques that avoid complex fail-safe interfaces built from discrete components, and radiation-hardened designs that avoid expensive fabrication processes such as SOI.
Abstract: This paper presents an overview of a comprehensive collection of on-line testing techniques for VLSI. Such techniques are, for instance: self-checking design, allowing high-quality concurrent checking at a hardware cost drastically lower than duplication; signature monitoring, allowing low-cost concurrent error detection for FSMs; on-line monitoring of reliability-relevant parameters such as current, temperature, abnormal delay, signal activity during steady state, radiation dose, clock waveforms, etc.; exploitation of standard BIST, or implementation of BIST techniques specific to on-line testing (Transparent BIST, Built-In Concurrent Self-Test, ...); exploitation of scan paths to transfer internal states for performing various tasks for on-line testing or fault tolerance; fail-safe techniques for VLSI, avoiding complex fail-safe interfaces using discrete components; and radiation-hardened designs, avoiding expensive fabrication processes such as SOI.

234 citations

Book
01 Jan 1985
TL;DR: Design for testability techniques offer one approach toward alleviating this situation by adding enough extra circuitry to a circuit or chip to reduce the complexity of testing.
Abstract: Today's computers must perform with increasing reliability, which in turn depends on the problem of determining whether a circuit has been manufactured properly or behaves correctly. However, the greater circuit density of VLSI circuits and systems has made testing more difficult and costly. This book notes that one solution is to develop faster and more efficient algorithms to generate test patterns or use design techniques to enhance testability - that is, "design for testability." Design for testability techniques offer one approach toward alleviating this situation by adding enough extra circuitry to a circuit or chip to reduce the complexity of testing. Because the cost of hardware is decreasing as the cost of testing rises, there is now a growing interest in these techniques for VLSI circuits.The first half of the book focuses on the problem of testing: test generation, fault simulation, and complexity of testing. The second half takes up the problem of design for testability: design techniques to minimize test application and/or test generation cost, scan design for sequential logic circuits, compact testing, built-in testing, and various design techniques for testable systems.Hideo Fujiwara is an associate professor in the Department of Electronics and Communication, Meiji University. Logic Testing and Design for Testability is included in the Computer Systems Series, edited by Herb Schwetman.

127 citations

Journal ArticleDOI
26 Apr 1998
TL;DR: An efficient scheme for concurrent error detection in sequential circuits with no constraint on the state encoding is presented, and its cost is reduced significantly compared to methods based on other codes.
Abstract: This paper presents a procedure for synthesizing multilevel circuits with concurrent error detection based on Bose-Lin codes (1985). Bose-Lin codes are an efficient solution for providing concurrent error detection as they are separable codes and have a fixed number of check bits, independent of the number of information bits. Furthermore, Bose-Lin code checkers have a simple structure as they are based on modulo operations. Procedures are described for synthesizing circuits in a way that their structure ensures that all single-point faults can only cause errors that are detected by a Bose-Lin code. This paper also presents an efficient scheme for concurrent error detection in sequential circuits. Both the state bits and the output bits are encoded with a Bose-Lin code and their checking is combined such that one checker suffices. Results indicate low area overhead. The cost of concurrent error detection is reduced significantly compared to other methods.
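A quick sketch of why Bose-Lin checkers stay small: the check symbol is a modular count over the information bits, so its width does not grow with the data width. The Python below uses the commonly described two-check-bit variant (information weight modulo 4); the exact constructions and detection bounds are those of Bose and Lin (1985), and the helper names are my own.

```python
# Two-check-bit Bose-Lin-style sketch: the check symbol is the number of 1s
# in the information bits reduced modulo 4, so the checker is a small modulo
# counter regardless of the information width.

def bose_lin_check_symbol(info_bits):
    return info_bits.count(1) % 4        # fits in two check bits

def bose_lin_ok(info_bits, check_symbol):
    return bose_lin_check_symbol(info_bits) == check_symbol

info = [1, 0, 1, 1, 0, 1, 0, 0]
check = bose_lin_check_symbol(info)      # weight 4 -> check symbol 0

# A unidirectional error of small multiplicity in the information bits
# shifts the weight in one direction and changes it modulo 4:
faulty = info.copy()
faulty[1] = 1                            # single 0->1 flip
assert not bose_lin_ok(faulty, check)
```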

113 citations

Proceedings ArticleDOI
24 Oct 2001
TL;DR: These approaches exploit the inverse relationship that exists between Rijndael encryption and decryption at various levels and develop CED architectures that explore the trade-off between area overhead, performance penalty and error detection latency.
Abstract: Fault-based side channel cryptanalysis is very effective against symmetric and asymmetric encryption algorithms. Although straightforward hardware and time redundancy based Concurrent Error Detection (CED) architectures can be used to thwart such attacks, they entail significant overhead (either area or performance). In this paper we investigate systematic approaches to low-cost, low-latency CED for Rijndael symmetric encryption algorithm. These approaches exploit the inverse relationship that exists between Rijndael encryption and decryption at various levels and develop CED architectures that explore the trade-off between area overhead, performance penalty and error detection latency. The proposed techniques have been validated on FPGA implementations.
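At the algorithm level, the simplest instance of this inverse relationship is to decrypt the freshly produced ciphertext and compare it with the original plaintext before releasing the output. The sketch below shows only that round-trip check; the toy XOR "cipher" is a stand-in so the example runs and is not a real Rijndael implementation, and the paper's round-level and operation-level variants and their latency/area trade-offs are not modeled.

```python
# Algorithm-level round-trip CED sketch. The toy XOR "cipher" below stands in
# for a real Rijndael implementation purely so the check is executable.

def toy_encrypt_block(key: bytes, plaintext: bytes) -> bytes:
    return bytes(p ^ k for p, k in zip(plaintext, key))

def toy_decrypt_block(key: bytes, ciphertext: bytes) -> bytes:
    return bytes(c ^ k for c, k in zip(ciphertext, key))

def encrypt_with_ced(key: bytes, plaintext: bytes) -> bytes:
    """Encrypt, then run the inverse operation and compare to detect faults."""
    ciphertext = toy_encrypt_block(key, plaintext)
    if toy_decrypt_block(key, ciphertext) != plaintext:
        # A fault injected during encryption (or decryption) was caught:
        # withhold the possibly faulty ciphertext instead of releasing it.
        raise RuntimeError("concurrent error detected; ciphertext withheld")
    return ciphertext

key = bytes(range(16))
block = b"sixteen byte msg"
assert encrypt_with_ced(key, block) == toy_encrypt_block(key, block)
```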

110 citations