
Showing papers by "Fabrizio Lombardi published in 2004"


Journal ArticleDOI
TL;DR: A detailed simulation-based characterization of QCA defects and study of their effects at logic level are presented and a testing technique requires only a constant number of test vectors to achieve 100% fault coverage with respect to the fault list of the original design.
Abstract: There has been considerable research on quantum dot cellular automata (QCA) as a new computing scheme in the nanoscale regimes. The basic logic element of this technology is the majority voter. In this paper, a detailed simulation-based characterization of QCA defects and study of their effects at logic level are presented. Testing of these QCA devices at logic level is investigated and compared with conventional CMOS-based designs. Unique testing features of designs based on this technology are presented and interesting properties have been identified. A testing technique is presented; it requires only a constant number of test vectors to achieve 100% fault coverage with respect to the fault list of the original design. A design-for-test scheme is also presented, which results in the generation of a reduced test set at 100% fault coverage.
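
As a logic-level illustration of the constant-size test set property (not a model of the QCA defects themselves), the following sketch brute-forces the smallest test set achieving 100% single stuck-at coverage on a gate-level AND-OR realization of the 3-input majority function M(a, b, c) = ab + bc + ca; the netlist and fault list are assumptions chosen for illustration.

```python
from itertools import combinations, product

# Gate-level AND-OR model of the 3-input majority voter: M(a, b, c) = ab + bc + ca.
# Illustrative fault list: single stuck-at-0/1 on the inputs and on each gate output.
LINES = ("a", "b", "c", "g1", "g2", "g3", "out")

def simulate(vec, fault=None):
    """Evaluate the netlist for vec = (a, b, c); fault = (line, stuck_value) or None."""
    def inject(name, value):
        return fault[1] if fault is not None and fault[0] == name else value
    a = inject("a", vec[0])
    b = inject("b", vec[1])
    c = inject("c", vec[2])
    g1 = inject("g1", a & b)
    g2 = inject("g2", b & c)
    g3 = inject("g3", a & c)
    return inject("out", g1 | g2 | g3)

faults = [(line, sv) for line in LINES for sv in (0, 1)]
vectors = list(product((0, 1), repeat=3))

def covered(test_set):
    """Faults whose faulty response differs from the fault-free one for some vector."""
    return {f for f in faults if any(simulate(v, f) != simulate(v) for v in test_set)}

detectable = covered(vectors)
for k in range(1, len(vectors) + 1):          # brute-force the smallest 100%-coverage set
    best = next((cand for cand in combinations(vectors, k)
                 if covered(cand) >= detectable), None)
    if best is not None:
        print(f"{k} vectors reach 100% coverage of {len(detectable)} detectable faults: {best}")
        break
```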

172 citations


Proceedings ArticleDOI
25 Apr 2004
TL;DR: A detailed simulation-based characterization of QCA defects and study of their effects at logic-level are presented and unique testing features of designs based on this technology are presented.
Abstract: There has been considerable research on quantum dot cellular automata (QCA) as a new computing scheme in the nano-scale regimes. The basic logic element of this technology is the majority voter. In this paper, a detailed simulation-based characterization of QCA defects and a study of their effects at logic level are presented. Testing of these devices is investigated and compared with conventional CMOS-based designs. Unique testing features of designs based on this technology are presented and interesting properties have been identified.

131 citations


Proceedings ArticleDOI
10 Oct 2004
TL;DR: In this article, the authors investigate defect tolerance properties of a 2D nano-scale crossbar, which is the basic block of various nano architectures which have been recently proposed, and their impact on the routability of a crossbar is investigated.
Abstract: Defect tolerance is an extremely important aspect in nano-scale electronics as the bottom-up self-assembly fabrication process results in a significantly higher defect density compared to conventional lithography-based processes. Defect tolerance techniques are therefore essential to obtain an acceptable manufacturing yield. In this paper, we investigate defect tolerance properties of a 2D nano-scale crossbar, which is the basic block of various recently proposed nano architectures. Various nano-wire and switch faults are studied and their impact on the routability of a crossbar is investigated. In the presence of defects, it is still possible to utilize a defective crossbar at reduced functionality, i.e. as a smaller defect-free crossbar. Simulation results for different sizes and defect densities are presented. The proposed approach can be utilized by architecture designers to determine the expected size of a functional (defect-free) crossbar based on defect density information obtained from the fabrication process.
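
The salvaging idea (using a defective crossbar as a smaller defect-free one) can be sketched with a simple greedy heuristic over a crosspoint defect map; the boolean junction map and the prune-worst-row-or-column policy below are assumptions for illustration, not the authors' algorithm.

```python
import random

def defect_free_subcrossbar(ok):
    """Greedy salvage of a defect-free sub-crossbar from a crosspoint defect map.
    ok[i][j] is True when junction (i, j) is usable; rows/columns containing the
    most defects are pruned until no defect remains. Returns the kept (rows, cols)."""
    rows, cols = set(range(len(ok))), set(range(len(ok[0])))
    while rows and cols:
        bad_r = {i: sum(not ok[i][j] for j in cols) for i in rows}
        bad_c = {j: sum(not ok[i][j] for i in rows) for j in cols}
        worst_r = max(bad_r, key=bad_r.get)
        worst_c = max(bad_c, key=bad_c.get)
        if bad_r[worst_r] == 0 and bad_c[worst_c] == 0:
            break                                  # remaining sub-array is defect free
        if bad_r[worst_r] >= bad_c[worst_c]:
            rows.remove(worst_r)
        else:
            cols.remove(worst_c)
    return sorted(rows), sorted(cols)

# Example: a 16x16 crossbar with 5% randomly defective crosspoints (assumed defect model).
random.seed(1)
n = 16
ok = [[random.random() > 0.05 for _ in range(n)] for _ in range(n)]
r, c = defect_free_subcrossbar(ok)
print(f"usable defect-free sub-crossbar: {len(r)} x {len(c)} out of {n} x {n}")
```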

76 citations


Journal ArticleDOI
TL;DR: In this article, a CMOS subbandgap reference circuit with 1-V supply voltage is described, where threshold voltage reduction and subthreshold operation techniques are used to obtain subbandgap reference voltages.
Abstract: A CMOS subbandgap reference circuit with 1-V supply voltage is described. To obtain subbandgap reference voltages with a 1-V supply voltage, threshold voltage reduction and subthreshold operation techniques are used. Large ΔV_BE (100 mV) as well as a 90-dB operational amplifier are used to circumvent the amplifier offset. A power-on-reset (POR) circuit is used as startup. This circuit has been implemented using a standard 0.5-μm CMOS process, and its size is 940 μm × 1160 μm. The temperature coefficient is 17 ppm from -40°C to 125°C after resistor trimming and the minimum power supply voltage is 0.95 V. The measured total current consumption is below 10 μA and the measured output voltage is 0.631 V at room temperature.

76 citations


Proceedings ArticleDOI
26 Apr 2004
TL;DR: A detailed simulation-based analysis of the AOI gate is presented as well as the characterization of QCA defects and study of their effects at logic level, for a novel complex and universal QCA gate: the And-Or-Inverter (AOI) gate.
Abstract: Quantum-dot Cellular Automata (QCA) offers a new computing paradigm in nanotechnology. The basic logic elements of this technology are the inverter and the majority voter. In this paper, we propose a novel complex and universal QCA gate: the And-Or-Inverter (AOI) gate, which is a 5-input gate consisting of 7 cells. This paper presents a detailed simulation-based analysis of the AOI gate as well as the characterization of QCA defects and a study of their effects at logic level. Design implementations using the AOI gate are compared with the conventional CMOS and the majority voter-based QCA methodology. Testing of the AOI gate at logic level is also addressed; unique testing features of designs based on this complex gate have been investigated.

44 citations


Proceedings ArticleDOI
10 Oct 2004
TL;DR: In this paper, the impact of scaling on defects that may arise in the manufacturing of quantum dot cellular automata (QCA) devices is discussed, and it is shown that in most defect cases, the scaling relationship between l and d is linear, albeit with different slopes.
Abstract: In this paper, we present the impact of scaling on defects that may arise in the manufacturing of quantum dot cellular automata (QCA) devices. This study shows how the sensitivity to manufacturing processing variations changes with device scaling. Scaling in QCA technology is related to cell dimension/size and cell-to-cell spacing within a Cartesian layout. Extensive simulation results on scaling of QCA devices, such as the majority voter, the inverter and the binary wire, are provided to show that defects have definitive trends in their behavior. These trends relate cell size (l) to the smallest cell-to-cell spacing (d) for erroneous behavior in the presence of different defects (such as misalignment and displacement); their impact on the correct functionality of QCA devices is extensively discussed. It is shown that in most defect cases, the scaling relationship between l and d is linear, albeit with different slopes.

40 citations


Journal ArticleDOI
TL;DR: This paper analyzes an environment which utilizes built-in self-test (BIST) and automatic test equipment (ATE), and presents closed-form expressions for fault coverage as a function of the number of BIST and ATE test vectors.
Abstract: This paper analyzes an environment which utilizes built-in self-test (BIST) and automatic test equipment (ATE), and presents closed-form expressions for fault coverage as a function of the number of BIST and ATE test vectors. This requires incorporating the time to switch from BIST to ATE (referred to as switchover time), and utilizing ATE generated vectors to finally achieve the desired level of fault coverage. For this environment, we model fault coverage as a function of the testability of the circuit under test and the numbers of vectors which are supplied by the BIST circuitry and the ATE. A novel approach is proposed; this approach is initially based on fault simulation using a small set of random vectors; an estimate of the so-called detection profile of the circuit under test is established as the basis of the test model. This analytical model effectively relates the testable features of the circuit under test to detection using both BIST and ATE as related testing processes.
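
A common way to relate a detection profile to expected coverage after n random vectors is FC(n) = (1/F) * sum_i [1 - (1 - p_i)^n], where p_i is the per-vector detection probability of fault i; the sketch below splits this generic estimator (not necessarily the paper's closed form) across BIST and ATE vectors, with an illustrative profile.

```python
def expected_coverage(det_profile, n_bist, n_ate):
    """Expected fault coverage after n_bist BIST vectors followed by n_ate ATE vectors.
    det_profile lists, for every fault, its per-vector detection probabilities
    (p_bist, p_ate) under the two vector sources (these would be estimated from
    fault simulation of a small random sample). Independent vectors are assumed;
    this is a generic detection-profile model, not the paper's exact expression."""
    detected = 0.0
    for p_bist, p_ate in det_profile:
        miss = (1.0 - p_bist) ** n_bist * (1.0 - p_ate) ** n_ate
        detected += 1.0 - miss
    return detected / len(det_profile)

# Illustrative profile: many easy faults plus a tail of random-pattern-resistant ones.
profile = [(0.2, 0.5)] * 50 + [(0.01, 0.05)] * 40 + [(0.0005, 0.02)] * 10
print(f"coverage after 1000 BIST vectors:           {expected_coverage(profile, 1000, 0):.3f}")
print(f"coverage after 1000 BIST + 200 ATE vectors: {expected_coverage(profile, 1000, 200):.3f}")
```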

34 citations


Proceedings ArticleDOI
09 Aug 2004
TL;DR: This paper proposes Markov based models for analyzing the reliability and availability of different fault-tolerant memory arrangements under the operational scenario of an SEU and shows that a duplex memory system encoded with error control codes has a higher reliability than the triplex arrangement.
Abstract: A single event upset (SEU) can affect the correct operation of digital systems, such as memories and processors. This paper proposes Markov based models for analyzing the reliability and availability of different fault-tolerant memory arrangements under the operational scenario of an SEU. These arrangements exploit redundancy (either duplex or triplex replication) for dynamic fault-tolerant operation as provided by arbitration (for error detection and output selection) as well as in the presence of dedicated circuitry implementing different correction/detection codes for bit-flips as errors. The primary objective is to preserve either the correctness, or the fail-safe nature of the data output of the memory system for a long mission time. It is shown that a duplex memory system encoded with error control codes has a higher reliability than the triplex arrangement. Moreover, the use of a code for single error correction and double error detection (SEC-DED) improves both availability and reliability compared to an error correction code with the same error detection capabilities.
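
For orientation only, a textbook no-repair comparison of a single module against a triplicated (TMR) arrangement under a constant SEU-induced failure rate is sketched below; this is not the paper's Markov model of duplex-with-ECC versus triplex arrangements, and the failure rate is an assumed value.

```python
import math

def r_simplex(lam, t):
    """Reliability of a single non-redundant module with constant failure rate lam, no repair."""
    return math.exp(-lam * t)

def r_tmr(lam, t):
    """Reliability of a triplicated (TMR) arrangement with an ideal voter and no repair:
    at least 2 of the 3 modules must survive, giving 3e^(-2*lam*t) - 2e^(-3*lam*t)."""
    return 3 * math.exp(-2 * lam * t) - 2 * math.exp(-3 * lam * t)

lam = 1e-4                      # assumed SEU-induced failure rate per module-hour
for t in (1_000, 10_000, 50_000):
    print(f"t = {t:>6} h   simplex = {r_simplex(lam, t):.4f}   TMR = {r_tmr(lam, t):.4f}")
```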

23 citations


Journal ArticleDOI
TL;DR: It is shown that for industrial system-on-chip (SoC) designs, the efficiency of the reuse compression technique is comparable to sophisticated software techniques with the advantage of easy and fast decoding.
Abstract: Compression has been used in automatic test equipment (ATE) to reduce storage and application time for high volume data by exploiting the repetitive nature of test vectors. The application of a binary compression method to an ATE environment for manufacturing is studied using a technique referred to as reuse. In reuse, compression is achieved by partitioning the vector set and removing repeating segments. This process has O(n^2) time complexity for compression (where n is the number of vectors) with simple hardware decoding circuitry. It is shown that for industrial system-on-chip (SoC) designs, the efficiency of the reuse compression technique is comparable to sophisticated software techniques, with the advantage of easy and fast decoding. Two shift register-based decompression schemes are presented; they can be either incorporated into internal scan chains or built into the tester's head. The proposed compression method has been applied to industrial test data and an average compression rate of 84% has been achieved.
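
The reuse idea, partitioning the vector set into segments and storing each distinct segment only once plus a stream of references, can be sketched as follows; the fixed segment length and the index encoding are assumptions for illustration, not the industrial implementation.

```python
def reuse_compress(vectors, seg_len):
    """Partition each test vector into seg_len-bit segments and keep only the first
    occurrence of every distinct segment; repeated segments are replaced by an
    index into the segment dictionary (illustrative sketch of the 'reuse' idea)."""
    dictionary, refs, seen = [], [], {}
    for vec in vectors:
        for i in range(0, len(vec), seg_len):
            seg = vec[i:i + seg_len]
            if seg not in seen:
                seen[seg] = len(dictionary)
                dictionary.append(seg)
            refs.append(seen[seg])
    return dictionary, refs

vectors = ["0011001100110011", "0011111100110000", "0000111100110011"]
dictionary, refs = reuse_compress(vectors, seg_len=4)
ref_bits = max(1, (len(dictionary) - 1).bit_length())     # bits per dictionary index
stored_bits = len(dictionary) * 4 + len(refs) * ref_bits
original_bits = sum(len(v) for v in vectors)
print(f"{len(dictionary)} distinct segments, {len(refs)} references")
print(f"compression: {original_bits} -> {stored_bits} bits")
```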

17 citations


Proceedings ArticleDOI
10 Oct 2004
TL;DR: This paper presents a novel technique for achieving error-resilience to bit-flips in compressed test data streams using Tunstall coding and bit-padding to preserve vector boundaries, and requires a simple algorithm for compression and its hardware for decompression is very small.
Abstract: This paper presents a novel technique for achieving error-resilience to bit-flips in compressed test data streams. Error-resilience is related to the capability of a test data stream (or sequence) to tolerate bit-flips which may occur in an automatic test equipment (ATE), either in the electronics components of the loadboard or in the high speed serial communication links between the user interface workstation and the head. Initially, it is shown that errors caused by bit-flips can seriously degrade test quality (as measured by the coverage), as such degradation is very significant for variable codeword techniques such as Huffman coding. To address this issue a variable-to-constant compression technique (namely Tunstall coding) is proposed. Using Tunstall coding and bit-padding to preserve vector boundaries, an error-resilient compression technique is proposed. This technique requires a simple algorithm for compression and its hardware for decompression is very small, while achieving a much higher error-resilience against bit-flips compared with previous techniques (albeit at a small reduction in compression). Simulation results on benchmark circuits are provided to substantiate the validity of this approach in an ATE environment.
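
Tunstall coding maps variable-length source strings to fixed-length codewords, so a flipped bit corrupts at most one parsed entry instead of desynchronizing the decoder as a variable-length (e.g. Huffman) code can. The sketch below is the standard Tunstall codebook construction; the binary source probabilities are an illustrative assumption.

```python
import heapq

def tunstall_codebook(probs, codeword_bits):
    """Build a Tunstall (variable-to-fixed) parse dictionary.
    probs: dict symbol -> probability. Returns the list of source strings, each of
    which would receive one fixed codeword of 'codeword_bits' bits."""
    max_words = 2 ** codeword_bits
    heap = [(-p, s) for s, p in probs.items()]      # max-heap via negated probabilities
    heapq.heapify(heap)
    n_leaves = len(probs)
    while n_leaves + len(probs) - 1 <= max_words:
        neg_p, word = heapq.heappop(heap)           # expand the most probable leaf
        n_leaves += len(probs) - 1                  # adds |alphabet| - 1 new leaves
        for s, p in probs.items():
            heapq.heappush(heap, (neg_p * p, word + s))
    return sorted(word for _, word in heap)

# Illustrative binary source (test data dominated by one filler value).
book = tunstall_codebook({"0": 0.9, "1": 0.1}, codeword_bits=3)
print(len(book), "parse strings:", book)
```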

17 citations



Journal ArticleDOI
TL;DR: A traditional model is transformed into a new model that performs fault simulation using a VHDL simulation engine and an automatic switching mechanism selects gate level descriptions for originating faults and behavioral descriptions for propagating them.
Abstract: This paper presents a novel fault simulation environment in VHDL. By writing a library of special fault simulation models, a traditional model is transformed into a new model that performs fault simulation using a VHDL simulation engine. Pre- and post-synthesis VHDL models are used for an effective implementation, better performance and to minimize the overhead associated with VHDL simulation. Models are written such that an automatic switching mechanism selects gate level descriptions for originating faults and behavioral descriptions for propagating them.

Journal ArticleDOI
TL;DR: An upper bound is established on the maximum number of faults which can be sustained without invalidating the test results under worst-case conditions, and test schedules and diagnostic algorithms are given which meet the upper bound as far as the highest-order term.
Abstract: We examine the diagnosis of processor array systems formed as two-dimensional arrays, with boundaries, and either four or eight neighbors for each interior processor. We employ a parallel test schedule. Neighboring processors test each other, and report the results. Our diagnostic objective is to find a fault-free processor or set of processors. The system may then be sequentially diagnosed by repairing those processors tested faulty according to the identified fault-free set, or a job may be run on the identified fault-free processors. We establish an upper bound on the maximum number of faults which can be sustained without invalidating the test results under worst case conditions. We give test schedules and diagnostic algorithms which meet the upper bound as far as the highest order term. We compare these near optimal diagnostic algorithms to alternative algorithms, both new and already in the literature, and against an upper bound ideal case algorithm, which is not necessarily practically realizable. For eight-way array systems with N processors, an ideal algorithm has diagnosability 3N^(2/3) - 2N^(1/2) plus lower-order terms. No algorithm exists which can exceed this. We give an algorithm which starts with tests on diagonally connected processors, and which achieves approximately this diagnosability. So the given algorithm is optimal to within the two most significant terms of the maximum diagnosability. Similarly, for four-way array systems with N processors, no algorithm can have diagnosability exceeding 3N^(2/3)/2^(1/3) - 2N^(1/2) plus lower-order terms. And we give an algorithm which begins with tests arranged in a zigzag pattern, one consisting of pairing nodes for tests in two different directions in two consecutive test stages; this algorithm achieves diagnosability (3/2)(5/2)^(1/3)N^(2/3) - (5/4)N^(1/2) plus lower-order terms, which is about 0.85 of the upper bound due to an ideal algorithm.
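
The quoted diagnosability expressions can be evaluated numerically to see the scale of the bounds; the sketch below keeps only the leading terms stated above and uses example array sizes.

```python
def bounds(N):
    """Leading terms of the diagnosability expressions quoted above (lower-order terms omitted)."""
    ideal_8way = 3 * N ** (2 / 3) - 2 * N ** 0.5
    ideal_4way = 3 * N ** (2 / 3) / 2 ** (1 / 3) - 2 * N ** 0.5
    zigzag_4way = 1.5 * (5 / 2) ** (1 / 3) * N ** (2 / 3) - 1.25 * N ** 0.5
    return ideal_8way, ideal_4way, zigzag_4way

for N in (1024, 16384):
    e, f, z = bounds(N)
    print(f"N={N:>6}: 8-way ideal ~ {e:.0f}, 4-way ideal ~ {f:.0f}, "
          f"zigzag ~ {z:.0f} ({z / f:.2f} of the 4-way bound)")
```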

Proceedings ArticleDOI
18 May 2004
TL;DR: In this article, the authors proposed a numerical method to find an optimized fault-coverage implemented in BIST and ATE so that a minimum cost can be achieved, which is applied to two parallel combined BIST/ATE testing schemes to assure its technical validity.
Abstract: As the design and fabrication complexities of instrumentation-on-silicon systems intensify, optimization of combined Built-In Self-Test (BIST) and Automated Test Equipment (ATE) testing becomes more desirable to meet the required fault-coverage while maintaining acceptable cost overhead. The cost associated with combined BIST/ATE testing of such systems mainly consists of the following: (1) the cost induced by the BIST area overhead and (2) the cost induced by the overall testing time. In general, BIST has a faster testing speed than ATE, but it can provide only limited fault-coverage, and driving higher fault-coverage from BIST means additional area cost overhead. On the other hand, higher fault-coverage can usually be achieved from ATE, but excessive use of ATE results in additional test time cost. The fault-coverage of BIST and ATE plays a significant role since it affects the area overhead in BIST and the test time in BIST/ATE. This paper proposes a novel numerical method to find an optimized fault-coverage to be implemented in BIST and ATE so that a minimum cost can be achieved. The proposed method, then, is applied to two parallel combined BIST/ATE testing schemes to assure its technical validity.
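
The trade-off can be sketched as a one-dimensional search over the fault-coverage assigned to BIST: the BIST area penalty grows with the coverage demanded of it, while the ATE time penalty grows with the residual coverage it must supply. The cost shapes below are assumptions for the sketch, not the paper's cost model.

```python
import math

def combined_cost(fc_bist, fc_target=0.99, area_weight=0.5, time_weight=1.0):
    """Illustrative combined-cost model (assumptions only): random-pattern coverage
    needs roughly ln(1/(1 - fc)) vectors; the BIST area penalty is taken to grow
    quadratically in that term (hard faults need extra hardware), while the ATE
    supplies the residual coverage at a cost linear in the extra vectors it applies."""
    fc_bist = min(max(fc_bist, 0.0), fc_target - 1e-6)
    n_bist = math.log(1.0 / (1.0 - fc_bist))
    n_ate = math.log((1.0 - fc_bist) / (1.0 - fc_target))
    return area_weight * n_bist ** 2 + time_weight * n_ate

# One-dimensional grid search for the BIST coverage that minimizes combined cost.
grid = [i / 1000 for i in range(990)]
best = min(grid, key=combined_cost)
print(f"least-cost split: BIST coverage ~ {best:.3f}, "
      f"ATE tops up to 0.99 (cost {combined_cost(best):.3f})")
```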

Proceedings ArticleDOI
16 Feb 2004
TL;DR: Testing of quantum-dot cellular automata is investigated and their designs are compared with conventional CMOS-based designs; a testing technique requires only a constant number of test vectors to achieve 100% fault coverage with respect to the fault list of the original design.
Abstract: There has been considerable research on quantum-dot cellular automata as a new computing scheme in the nano-scale regimes. The basic logic element of this technology is a majority voter. In this paper, testing of these devices is investigated and compared with conventional CMOS-based designs. A testing technique is presented; it requires only a constant number of test vectors to achieve 100% fault coverage with respect to the fault list of the original design. A design-for-test scheme is also presented which results in the generation of a reduced test set.

Proceedings ArticleDOI
26 Oct 2004
TL;DR: Two new probabilistic routing (routability) metrics are proposed and used as figures of merit for evaluating the interconnect resources of commercially available FPGAs as well as academic architectures.
Abstract: This work presents a new approach for the evaluation of FPGA routing resources in the presence of interconnect faults. All possible interconnect faults for programmable switches and wiring channels are considered. Signal routing in the presence of faulty interconnect resources is analyzed at both the switch block level and the level of the entire FPGA. Two new probabilistic routing (routability) metrics are proposed and used as figures of merit for evaluating the interconnect resources of commercially available FPGAs as well as academic architectures.

Proceedings ArticleDOI
10 Oct 2004
TL;DR: An extensive evaluation of the manufacturing yield of embedded SRAMs (eSRAM) which are designed using a memory compiler and an industrial ASIC chip is also considered as a design case.
Abstract: This work presents an extensive evaluation of the manufacturing yield of embedded SRAMs (eSRAM) which are designed using a memory compiler. The yield is evaluated by considering the different design constructs (generally referred to as kernels) that are used in defining the memory architecture through a compiler. Architectural considerations such as array size and line (word and bit) organization are analyzed. Compiler-based features of different kernels (such as required for decoding) are also treated in detail. An extensive evaluation of the provided redundancy (row, column and combined) is pursued to characterize its impact on the memory yield. Industrial data is used in the evaluation and an industrial ASIC chip (made of multiple eSRAMs) is also considered as a design case.

Proceedings ArticleDOI
14 Apr 2004
TL;DR: This paper characterizes the yield and reliability properties of the two-phase clockless asynchronous pipeline with respect to glitch and proposes a simple yet effective fault tolerant architecture by using redundant request signals.
Abstract: This paper presents a fault tolerant design technique for the clockless wave pipeline. The specific architectural model investigated in this paper is the two-phase clockless wave pipeline [12], which is ideally supposed to yield the theoretical maximum performance. The request signal is the most critical component for the clockless control of the wave-pipelined processing of data. In practice, the request signal is very sensitive and vulnerable to electronic crosstalk noise, referred to as glitch, and this problem has become extremely severe in today's ultra-high-density integrated circuits. Electronic crosstalk noise may devastate the operational confidence level of the clockless wave pipeline. In this context, this paper characterizes the yield and reliability properties of the two-phase clockless asynchronous pipeline with respect to glitch. Based on the yield and reliability characterization, a simple yet effective fault tolerant architecture using redundant request signals is proposed. The reliability model evaluates the impact of the request signal glitch on the overall reliability, and can be used to maneuver the proposed fault tolerant architecture. An experimental simulation is conducted to demonstrate the efficiency and effectiveness of the proposed fault tolerant technique.

Proceedings ArticleDOI
26 Apr 2004
TL;DR: For this application, it is shown that the best heuristic technique is not the famous Christofides or Lin-Kernighan, but the Multi-Fragment technique.
Abstract: Vector reordering is an essential task in testing VLSI systems because it affects this process from two perspectives: power consumption and correlation among data. The former feature is crucial because, if not properly controlled during testing, it may result in permanent failure of the device-under-test (DUT). The latter feature is also important because correlation is captured by coding schemes to efficiently compress test data and ease the memory requirements of Automatic Test Equipment (ATE), while reducing the volume of data and lowering the test application time. Reordering, however, is NP-complete. This paper presents an evaluation of different heuristic techniques for vector reordering using ISCAS85 and ISCAS89 benchmark circuits in terms of time and quality. For this application, it is shown that the best heuristic technique is not the famous Christofides or Lin-Kernighan, but the Multi-Fragment technique.
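
Reordering can be cast as a path-TSP over the test set with, for example, Hamming distance as the edge weight (fewer bit transitions between consecutive vectors means lower switching power and higher run-length correlation). The Multi-Fragment (greedy-edge) heuristic builds the order by repeatedly adding the shortest edge that keeps every vector at degree two or less and does not close a cycle; the distance metric and the small example below are assumptions for illustration.

```python
from itertools import combinations

def hamming(u, v):
    return sum(a != b for a, b in zip(u, v))

def multi_fragment_order(vectors):
    """Multi-fragment (greedy-edge) heuristic for ordering test vectors into a
    single low-transition Hamiltonian path."""
    n = len(vectors)
    parent = list(range(n))                      # union-find to reject premature cycles
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    degree = [0] * n
    adj = [[] for _ in range(n)]
    edges = sorted(combinations(range(n), 2),
                   key=lambda e: hamming(vectors[e[0]], vectors[e[1]]))
    added = 0
    for i, j in edges:
        if degree[i] < 2 and degree[j] < 2 and find(i) != find(j):
            parent[find(i)] = find(j)
            degree[i] += 1; degree[j] += 1
            adj[i].append(j); adj[j].append(i)
            added += 1
            if added == n - 1:
                break
    # Walk the resulting path from one endpoint (a vertex of degree 1).
    start = next(v for v in range(n) if degree[v] == 1)
    order, prev = [start], -1
    while len(order) < n:
        nxt = next(v for v in adj[order[-1]] if v != prev)
        prev = order[-1]
        order.append(nxt)
    return order

vecs = ["0000", "1111", "0001", "1110", "0011", "1100"]
order = multi_fragment_order(vecs)
print("order:", [vecs[i] for i in order])
print("total transitions:",
      sum(hamming(vecs[order[k]], vecs[order[k + 1]]) for k in range(len(order) - 1)))
```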

Proceedings ArticleDOI
26 Apr 2004
TL;DR: Signal routing in the presence of faulty switches is analyzed at both switch block and array levels; probabilistic routing (routability) is used as a figure of merit for evaluating the programmable interconnect resources of FPGA architectures.
Abstract: Summary form only given. We present a new approach for the evaluation of FPGA routing resources in the presence of faulty switches. Switch stuck-open faults (switch permanently off) as well as switch stuck-closed faults (switch permanently on) are addressed; this is directly related to fault tolerance of the interconnect for testing and reconfiguration at manufacturing and run-time application. Signal routing in the presence of faulty switches is analyzed at both switch block and array levels; probabilistic routing (routability) is used as a figure of merit for evaluating the programmable interconnect resources of FPGA architectures. Two approaches are proposed in this paper. The first approach is based on finding a permutation (one-to-one mapping) between the input and output endpoints. A probabilistic approach is also presented to evaluate fault tolerant routing for the entire FPGA by connecting switch blocks in chains as required for testing and to account for the I/O pin restrictions of an FPGA chip. The results are reported for various commercial and academic FPGA architectures.

Proceedings ArticleDOI
26 Apr 2004
TL;DR: This work gives a Markov chain model of the yield of an embedded memory core and concludes that as long as there is at least one spare of each type, the spares do not need to be balanced, once the yield impact of being part of a system-on-a-chip has been taken into account.
Abstract: We give a Markov chain model of the yield of an embedded memory core. The model allows easy inclusion of the effect of possible defects elsewhere on the chip that includes the embedded memory. We propose a reconfiguration algorithm for the case of both spare rows and columns that is simple enough that it could serve as built-in self-repair on the chip. Compared to an optimal configuration algorithm, there is no visible difference in the yield. We use parameters from an IBM embedded SRAM process to illustrate the yield calculation. We study the effect of different spare allocations. We conclude that as long as there is at least one spare of each type, the spares do not need to be balanced, once the yield impact of being part of a system-on-a-chip has been taken into account.
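
A BISR-friendly allocation rule in the spirit of the paper's conclusion (spares need not be balanced, provided at least one of each type exists) is a greedy must-repair-first policy; the fault-map representation and the tie-breaking below are assumptions, not the paper's reconfiguration algorithm.

```python
from collections import Counter

def greedy_repair(faults, spare_rows, spare_cols):
    """Greedy spare allocation for a fault map given as a set of (row, col) cells.
    Must-repair first: a row with more faults than the remaining spare columns can
    only be fixed by a spare row (symmetrically for columns); remaining faults then
    take whichever spare covers the most of them. Returns True if all faults are covered."""
    faults = set(faults)
    while faults:
        rows = Counter(r for r, _ in faults)
        cols = Counter(c for _, c in faults)
        must_r = [r for r, n in rows.items() if n > spare_cols]
        must_c = [c for c, n in cols.items() if n > spare_rows]
        if must_r:
            if spare_rows == 0:
                return False
            spare_rows -= 1
            faults = {(r, c) for r, c in faults if r != must_r[0]}
            continue
        if must_c:
            if spare_cols == 0:
                return False
            spare_cols -= 1
            faults = {(r, c) for r, c in faults if c != must_c[0]}
            continue
        # No must-repairs left: cover the row or column with the most faults.
        r_best, r_cnt = rows.most_common(1)[0]
        c_best, c_cnt = cols.most_common(1)[0]
        if (r_cnt >= c_cnt and spare_rows) or not spare_cols:
            if spare_rows == 0:
                return False
            spare_rows -= 1
            faults = {(r, c) for r, c in faults if r != r_best}
        else:
            spare_cols -= 1
            faults = {(r, c) for r, c in faults if c != c_best}
    return True

# Example fault map: a clustered row, a clustered column, with 1 spare of each type.
print(greedy_repair({(0, 1), (0, 5), (3, 2), (7, 2)}, spare_rows=1, spare_cols=1))
```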

Proceedings ArticleDOI
10 Oct 2004
TL;DR: This work presents arithmetic coding and its application to data compression for VLSI testing through a practical integer implementation of arithmetic coding/decoding and analyzes its deviation from the entropy bound.
Abstract: This work presents arithmetic coding and its application to data compression for VLSI testing. The use of arithmetic codes for compression results in a codeword whose length is close to the optimal value as predicted by entropy in information theory. Previous techniques (such as those based on Huffman or Golomb coding) result in optimal codes for test data sets in which the probability model of the symbols satisfies specific requirements. We show that Huffman and Golomb codes result in large differences between entropy bound and sustained compression. We present compression results of arithmetic coding for circuits through a practical integer implementation of arithmetic coding/decoding and analyze its deviation from the entropy bound as well. A software implementation approach is proposed and studied in detail using industrial embedded DSP cores.
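
The gap being exploited can be seen by comparing the entropy bound H = -sum_i p_i log2 p_i with the average codeword length of an optimal prefix (Huffman) code, which can waste close to a bit per symbol on skewed distributions, whereas arithmetic coding approaches H; the symbol distribution below is an illustrative assumption.

```python
import heapq, itertools, math

def entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

def huffman_avg_length(probs):
    """Average codeword length of an optimal prefix (Huffman) code for 'probs'."""
    counter = itertools.count()                    # tie-breaker so heap never compares dicts
    heap = [(p, next(counter), {i: 0}) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, _, d1 = heapq.heappop(heap)
        p2, _, d2 = heapq.heappop(heap)
        merged = {k: v + 1 for k, v in {**d1, **d2}.items()}   # one more bit for every leaf below
        heapq.heappush(heap, (p1 + p2, next(counter), merged))
    _, _, depths = heap[0]
    return sum(probs[i] * depth for i, depth in depths.items())

# Skewed symbol distribution, typical of test data dominated by one filler pattern.
probs = [0.80, 0.10, 0.05, 0.03, 0.02]
print(f"entropy bound   : {entropy(probs):.3f} bits/symbol")
print(f"Huffman average : {huffman_avg_length(probs):.3f} bits/symbol")
print("arithmetic coding approaches the entropy bound to within a small constant")
```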

Proceedings ArticleDOI
18 May 2004
TL;DR: The proposed methodology covers optimal latch insertion point identification, how to account for clock skew in timing, and how to simulate circuits to verify timing and functionality in the presence of clock skew in high-speed latch-based VLSI systems.
Abstract: This paper presents a framework of simulation and verification methodology for latch-based VLSI design. The proposed methodology covers optimal latch insertion point identification, how to account for clock skew in timing, and how to simulate circuits to verify timing and functionality in the presence of clock skew in high-speed latch-based VLSI systems. An existing flip-flop based FFT block is converted to a latch-based design using the proposed methodology, and the performance of the block is improved by 10%.

Proceedings ArticleDOI
16 Feb 2004
TL;DR: The presented approach utilizes a path-based technique to find the probability of establishing a path between pairs of input and output endpoints in a switch block to evaluate FPGA routing resources in the presence of faulty switches.
Abstract: This paper presents a new approach for the evaluation of FPGA routing resources in the presence of faulty switches. This is considered under the worst case scenario of open faults. Signal routing in the presence of faulty switches is analyzed at the switch block level; probabilistic routing (routability) is used as a figure of merit for evaluating the interconnect resources of FPGAs. The presented approach utilizes a path-based technique to find the probability of establishing a path between pairs of input and output endpoints in a switch block. The results are reported for various commercial and academic FPGAs.
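
The path-based routability figure of merit can be approximated by Monte Carlo: model the switch block as a graph of programmable switches, fail each switch (stuck-open) independently with the defect probability, and count the fraction of trials in which a path between a chosen input/output pair survives. The disjoint-style switch-block topology and the parameters below are assumptions, not a specific commercial architecture.

```python
import random
from collections import defaultdict, deque

def subset_switch_block(tracks):
    """Edges of a 'disjoint/subset'-style switch block: track i on one side connects
    only to track i on the other three sides (6 programmable switches per track).
    Nodes are (side, track); this simple topology is an assumption for the sketch."""
    sides = ["N", "E", "S", "W"]
    edges = []
    for t in range(tracks):
        for a in range(4):
            for b in range(a + 1, 4):
                edges.append(((sides[a], t), (sides[b], t)))
    return edges

def routability(edges, src, dst, p_fail, trials=20000, rng=random.Random(0)):
    """Monte Carlo estimate of P(a path from src to dst survives) when every
    switch (edge) is independently stuck-open with probability p_fail."""
    hits = 0
    for _ in range(trials):
        alive = defaultdict(list)
        for u, v in edges:
            if rng.random() > p_fail:
                alive[u].append(v)
                alive[v].append(u)
        seen, queue = {src}, deque([src])
        while queue:
            node = queue.popleft()
            if node == dst:
                hits += 1
                break
            for nxt in alive[node]:
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
    return hits / trials

edges = subset_switch_block(tracks=4)
for p in (0.05, 0.20, 0.50):
    print(f"switch stuck-open prob {p:.2f} -> "
          f"routability W0->E0 ~ {routability(edges, ('W', 0), ('E', 0), p):.3f}")
```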

Journal ArticleDOI
TL;DR: To realize enhanced manufacturing yield and field reliability, both ATE (automated test equipment) and BISR (built-in self-repair) are commonly utilized to allocate redundancy for embedded memory system cores; in this article, probabilistic redundancy partitioning and utilization techniques are proposed to achieve an optimal combination of yield and reliability for the embedded memory core.

Proceedings ArticleDOI
18 May 2004
TL;DR: In this paper, the authors provide a framework by which jitter phenomena, which are encountered at the output signals of a head board in an automatic test equipment (ATE), can be studied.
Abstract: The objective of this paper is to provide a framework by which jitter phenomena, which are encountered at the output signals of a head board in an automatic test equipment (ATE), can be studied. In this paper, the jitter refers to that caused by radiated electromagnetic interference (EMI) noise, which is present in the head of all ATE due to DC-DC converter activity. An initial analysis of the areas of the head board most sensitive to EMI noise has been made. It identifies a sensitive part in the loop filter of a phase locked loop which is used to obtain a high frequency clock for the timing generator. Different H-fields are then applied externally at the loop filter to verify the behavior of the output signal of the head board in terms of RMS jitter. For RMS jitter measurements, a frequency domain methodology has been employed. A trend for RMS jitter variation with respect to radiated EMI magnitude as well as frequency has been obtained. The orientation of the external H-field source with respect to the target board and its effect on the measured RMS jitter has also been investigated. For measuring the RMS value, a proper circuit has been designed on a daughter board to circumvent ground noise and connectivity problems arising from the head environment.

Proceedings ArticleDOI
10 Oct 2004
TL;DR: A novel probabilistic method is proposed to balance the fault-coverage and the test overhead costs in a combined BIST/ATE test environment and is applied to two BIST-ATE test scenarios to find the optimal fault- coverage/cost combinations.
Abstract: As the design and test complexities of SoCs intensify, the balanced utilization of combined built-in self-test (BIST) and automated test equipment (ATE) testing becomes desirable to meet the required minimum fault-coverage while maintaining an acceptable cost overhead. The cost associated with combined BIST/ATE testing of such systems mainly consists of 1) the cost induced by the BIST area overhead and 2) the cost induced by the overall testing time. In general, BIST is significantly faster than ATE, but it can provide only limited fault-coverage, and driving higher fault-coverage from BIST means additional area cost overhead. On the other hand, higher fault-coverage can be easily achieved from ATE, but excessive use of ATE results in additional test time. This paper proposes a novel probabilistic method to balance the fault-coverage and the test overhead costs in a combined BIST/ATE test environment. The proposed technique is then applied to two BIST/ATE test scenarios to find the optimal fault-coverage/cost combinations.

Proceedings ArticleDOI
10 Oct 2004
TL;DR: In this article, a new March algorithm, March-NU, is proposed to detect inter-word coupling faults in word-organized SRAMs; it relies on a new fault model which extends fault detection to three additional types of coupling faults, i.e. read destructive, deceptive read destructive and incorrect read coupling faults.
Abstract: A new algorithm to detect inter-word coupling faults in word-organized SRAMs (WOMs) is proposed in this paper. This algorithm (referred to as March-NU) relies on a new fault model which extends fault detection to three additional types of coupling faults, i.e. read destructive, deceptive read destructive and incorrect read coupling faults. These faults are related to well known fault mechanisms, reported in the literature, which occur in the read operation of SRAMs. Previous algorithms cannot guarantee 100% fault detection of these coupling faults. March-NU sensitizes and detects with 100% coverage all coupling faults as well as traditional faults. A detailed analysis of its fault detection capabilities is presented. March-NU utilizes 8 March elements and its complexity is 30N, where N is the number of words in the WOM under test.
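
A March algorithm is a sequence of elements, each consisting of an address order and a per-word sequence of read/write operations; the engine below applies such a sequence to a simple word-organized memory model with an injected inter-word coupling fault and flags any read mismatch. The element list shown is the classical March C- (illustrative only; the actual 8-element, 30N March-NU sequence is defined in the paper), and the coupling-fault model is an assumption.

```python
class WordMemory:
    """Simple word-organized memory with an optional inter-word coupling fault:
    writing the aggressor word flips one bit of the victim word (illustrative fault)."""
    def __init__(self, words, width, coupling=None):
        self.mem = [0] * words
        self.width = width
        self.coupling = coupling          # (aggressor_addr, victim_addr, bit) or None

    def write(self, addr, data):
        self.mem[addr] = data
        if self.coupling and addr == self.coupling[0]:
            _, victim, bit = self.coupling
            self.mem[victim] ^= 1 << bit

    def read(self, addr):
        return self.mem[addr]

def run_march(mem, elements):
    """Apply a March sequence. Each element is (order, ops); order is +1 (ascending)
    or -1 (descending); ops is a list like [('r', 0), ('w', 1)], where 0/1 selects the
    all-zeros or all-ones data background. Returns the set of failing addresses."""
    n, full = len(mem.mem), (1 << mem.width) - 1
    background = lambda b: full if b else 0
    failures = set()
    for order, ops in elements:
        addrs = range(n) if order > 0 else range(n - 1, -1, -1)
        for a in addrs:
            for op, b in ops:
                if op == 'w':
                    mem.write(a, background(b))
                elif mem.read(a) != background(b):
                    failures.add(a)
    return failures

# Classical March C- element list (illustrative; not the paper's March-NU sequence).
march_c_minus = [(+1, [('w', 0)]),
                 (+1, [('r', 0), ('w', 1)]),
                 (+1, [('r', 1), ('w', 0)]),
                 (-1, [('r', 0), ('w', 1)]),
                 (-1, [('r', 1), ('w', 0)]),
                 (-1, [('r', 0)])]

mem = WordMemory(words=16, width=8, coupling=(3, 7, 0))   # word 3 disturbs bit 0 of word 7
print("failing addresses:", run_march(mem, march_c_minus))
```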

Proceedings ArticleDOI
18 May 2004
TL;DR: A test circuit used to perform electronic neuron IC testing and how a subthreshold circuit can reduce power consumption is described, and design techniques targeting a neural oscillator are presented.
Abstract: This paper presents new test and verification methodologies, including design techniques targeting a neural oscillator. Because the output signal of a neuron is chaotic, customized verification and test methodologies are required. We have chosen to use MATLAB to verify our experimental results at a simulation level. In this paper we also describe a test circuit used to perform electronic neuron IC testing. We investigate how a subthreshold circuit can reduce power consumption. In our HSPICE simulations, we both validate the proposed test circuit and verify the electronic neuron and synapse circuit.