Author

Kozo Kinoshita

Bio: Kozo Kinoshita is an academic researcher from Osaka Gakuin University. The author has contributed to research in topics: Automatic test pattern generation & Fault coverage. The author has an h-index of 19 and has co-authored 115 publications receiving 1743 citations. Previous affiliations of Kozo Kinoshita include Osaka University & Hiroshima University.


Papers
Journal ArticleDOI
TL;DR: New cost-effective heuristics for the generation of minimal test sets are presented; they reduce the number of tests and allow tests generated earlier in the test generation process to be dropped.
Abstract: This paper presents new cost-effective heuristics for the generation of minimal test sets. Both dynamic techniques, which are introduced into the test generation process, and a static technique, which is applied to already generated test sets, are used. The dynamic compaction techniques maximize the number of faults that a new test vector detects out of the yet-undetected faults as well as out of the already-detected ones. Thus, they reduce the number of tests and allow tests generated earlier in the test generation process to be dropped. The static compaction technique replaces N test vectors by M new test vectors, where M < N, that detect the same faults.
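
As a concrete illustration of the static side of this idea, here is a minimal Python sketch of a reverse-order static compaction pass. It is not the paper's actual replacement procedure (which substitutes M new vectors for N original ones); the test vectors and fault names are made up, and each test is assumed to be annotated with the faults it detects.

# Minimal sketch of reverse-order static test compaction (illustrative only):
# a test vector is dropped when every fault it detects is already covered by
# the tests kept so far. Data and names are hypothetical.

def static_compaction(test_set):
    """test_set: list of (vector, detected_faults) pairs in generation order."""
    kept = []
    covered = set()
    # Walk the tests in reverse generation order: tests generated later tend to
    # detect many faults and often make earlier tests redundant.
    for vector, faults in reversed(test_set):
        if not faults <= covered:            # this test still detects a new fault
            kept.append((vector, faults))
            covered |= faults
    kept.reverse()
    return kept

tests = [
    ("1010", {"f1", "f2"}),
    ("1100", {"f2"}),
    ("0111", {"f1", "f2", "f3"}),
]
print([v for v, _ in static_compaction(tests)])   # keeps only "0111"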

228 citations

Proceedings ArticleDOI
01 May 2005
TL;DR: Experimental results show the effectiveness of the novel low-capture-power X-filling method in reducing capture power dissipation without any impact on area, timing, and fault coverage.
Abstract: Research on low-power scan testing has been focused on the shift mode, with little or no consideration given to the capture mode power. However, high switching activity when capturing a test response can cause excessive IR drop, resulting in significant yield loss. This paper addresses this problem with a novel low-capture-power X-filling method by assigning 0's and 1's to unspecified (X) bits in a test cube to reduce the switching activity in capture mode. This method can be easily incorporated into any test generation flow, where test cubes are obtained during ATPG or by X-bit identification. Experimental results show the effectiveness of this method in reducing capture power dissipation without any impact on area, timing, and fault coverage.
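
To make the X-filling step concrete, the following is a hypothetical sketch (not the authors' algorithm): unspecified bits are filled greedily so that as few scan cells as possible change value between the loaded test vector and the captured response. A toy next-state function stands in for real circuit simulation, and the greedy fill order is an assumption.

def toy_capture(loaded):
    # Hypothetical next-state function: cell i captures loaded[i] XOR loaded[i+1].
    n = len(loaded)
    return [loaded[i] ^ loaded[(i + 1) % n] for i in range(n)]

def capture_transitions(loaded):
    # Number of scan cells whose captured value differs from the loaded value.
    return sum(a != b for a, b in zip(loaded, toy_capture(loaded)))

def x_fill(cube):
    """cube: list of 0, 1, or 'X'. Greedily fill each X to reduce capture transitions."""
    filled = [b if b != 'X' else 0 for b in cube]      # provisional fill for the X bits
    for i, b in enumerate(cube):
        if b == 'X':
            filled[i] = min((0, 1),
                            key=lambda v: capture_transitions(filled[:i] + [v] + filled[i + 1:]))
    return filled

cube = [1, 'X', 'X', 0, 'X', 1]
filled = x_fill(cube)
print(filled, "capture transitions:", capture_transitions(filled))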

183 citations

Proceedings ArticleDOI
08 Nov 2005
TL;DR: A novel low-capture-power X-filling method assigns 0's and 1's to unspecified (X) bits in a test cube obtained during ATPG, improving the applicability of scan-based at-speed testing by reducing the risk of test yield loss.
Abstract: Scan-based at-speed testing is a key technology to guarantee timing-related test quality in the deep submicron era. However, its applicability is being severely challenged since significant yield loss may occur from circuit malfunction due to excessive IR drop caused by high power dissipation when a test response is captured. This paper addresses this critical problem with a novel low-capture-power X-filling method of assigning 0's and 1's to unspecified (X) bits in a test cube obtained during ATPG. This method reduces the circuit switching activity in capture mode and can be easily incorporated into any test generation flow to achieve capture power reduction without any area, timing, or fault coverage impact. Test vectors generated with this practical method greatly improve the applicability of scan-based at-speed testing by reducing the risk of test yield loss.

144 citations

Proceedings ArticleDOI
30 Apr 2006
TL;DR: Compared with previous methods that passively conduct X-filling for unspecified bits in test cubes generated only for fault detection, the new method achieves more capture power reduction with less test set inflation.
Abstract: High power dissipation can occur when the response to a test vector is captured by flip-flops in scan testing, resulting in excessive IR drop, which may cause significant capture-induced yield loss in the DSM era. This paper addresses this serious problem with a novel test generation method, featuring a unique algorithm that deterministically generates test cubes not only for fault detection but also for capture power reduction. Compared with previous methods that passively conduct X-filling for unspecified bits in test cubes generated only for fault detection, the new method achieves more capture power reduction with less test set inflation. Experimental results show its effectiveness.

96 citations

Journal ArticleDOI
TL;DR: In this paper, a logic-level characterization and fault model for crosstalk faults is presented, a fault list of such faults is generated from the layout data, and an automatic test pattern generation procedure for them is given.
Abstract: The continuous reduction of the device size in integrated circuits and the increase in the switching rate cause parasitic capacitances between conducting layers to become dominant and cause logic errors in the circuits. Therefore, capacitive couplings can be considered as potential logic faults. Classical fault models do not cover this class of faults. This paper presents a logic level characterization and fault model for crosstalk faults. The authors also show how a fault list of such faults can be generated from the layout data, and give an automatic test pattern generation procedure for them.
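
The step from layout data to a fault list can be pictured with the hypothetical sketch below: every extracted coupling capacitance above a threshold yields candidate crosstalk faults in which one net (the aggressor) switching disturbs the other (the victim). The threshold value, input format, and fault fields are illustrative assumptions, not the authors' actual procedure.

COUPLING_THRESHOLD_FF = 5.0   # femtofarads; assumed cut-off for "dominant" coupling

def crosstalk_fault_list(coupling_caps):
    """coupling_caps: iterable of (net_a, net_b, cap_fF) from parasitic extraction."""
    faults = []
    for net_a, net_b, cap in coupling_caps:
        if cap < COUPLING_THRESHOLD_FF:
            continue
        # Each net of a strongly coupled pair is a potential victim of the other.
        for aggressor, victim in ((net_a, net_b), (net_b, net_a)):
            for aggressor_edge in ("rise", "fall"):
                faults.append({"aggressor": aggressor,
                               "victim": victim,
                               "aggressor_edge": aggressor_edge,
                               "coupling_fF": cap})
    return faults

extracted = [("n12", "n47", 7.3), ("n12", "n90", 1.1), ("n47", "n88", 9.8)]
for fault in crosstalk_fault_list(extracted):
    print(fault)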

94 citations


Cited by
Book
01 Jan 2001
TL;DR: Conservative logic shows that it is ideally possible to build sequential circuits with zero internal power dissipation and proves that universal computing capabilities are compatible with the reversibility and conservation constraints.
Abstract: Conservative logic is a comprehensive model of computation which explicitly reflects a number of fundamental principles of physics, such as the reversibility of the dynamical laws and the conservation of certain additive quantities (among which energy plays a distinguished role). Because it more closely mirrors physics than traditional models of computation, conservative logic is in a better position to provide indications concerning the realization of high-performance computing systems, i.e., of systems that make very efficient use of the "computing resources" actually offered by nature. In particular, conservative logic shows that it is ideally possible to build sequential circuits with zero internal power dissipation. After establishing a general framework, we discuss two specific models of computation. The first uses binary variables and is the conservative-logic counterpart of switching theory; this model proves that universal computing capabilities are compatible with the reversibility and conservation constraints. The second model, which is a refinement of the first, constitutes a substantial breakthrough in establishing a correspondence between computation and physics. In fact, this model is based on elastic collisions of identical "balls" and thus is formally identical with the atomic model that underlies the (classical) kinetic theory of perfect gases. Quite literally, the functional behavior of a general-purpose digital computer can be reproduced by a perfect gas placed in a suitably shaped container and given appropriate initial conditions.
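
The binary model rests on reversible, bit-conserving primitives, the best known being the Fredkin (controlled-swap) gate. The short check below is an illustration written for this summary, not code from the paper: it verifies that the gate is its own inverse, that it conserves the number of 1s, and that AND can be embedded in it by fixing one input to 0.

from itertools import product

def fredkin(c, a, b):
    """Controlled swap: if the control c is 1, swap a and b; otherwise pass through."""
    return (c, b, a) if c == 1 else (c, a, b)

for c, a, b in product((0, 1), repeat=3):
    out = fredkin(c, a, b)
    assert fredkin(*out) == (c, a, b)   # reversible: the gate is its own inverse
    assert sum(out) == c + a + b        # conservative: the number of 1s is unchanged

# With b fixed to 0, the third output equals c AND a.
print(fredkin(1, 1, 0)[2], fredkin(1, 0, 0)[2], fredkin(0, 1, 0)[2])   # prints: 1 0 0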

1,888 citations

Proceedings Article
14 Jul 1980
TL;DR: According to a physical interpretation, the central result of this paper is that it is ideally possible to build sequential circuits with zero internal power dissipation.
Abstract: The theory of reversible computing is based on invertible primitives and composition rules that preserve invertibility. With these constraints, one can still satisfactorily deal with both functional and structural aspects of computing processes; at the same time, one attains a closer correspondence between the behavior of abstract computing systems and the microscopic physical laws (which are presumed to be strictly reversible) that underlie any concrete implementation of such systems. According to a physical interpretation, the central result of this paper is that it is ideally possible to build sequential circuits with zero internal power dissipation.

1,357 citations

Journal ArticleDOI
TL;DR: Cache memories are a general solution to improving the performance of a memory system: by placing smaller, faster memories in front of larger, slower, and cheaper memories, the memory system's performance can approach that of a perfect memory system at a reasonable cost.
Abstract: A computer’s memory system is the repository for all the information the computer’s central processing unit (CPU, or processor) uses and produces. A perfect memory system is one that can supply immediately any datum that the CPU requests. This ideal memory is not practically implementable, however, as the three factors of memory capacity, speed, and cost are directly in opposition. By placing smaller faster memories in front of larger, slower, and cheaper memories, the performance of the memory system may approach that of a perfect memory system—at a reasonable cost. The memory hierarchies of modern general-purpose computers generally contain registers at the top, followed by one or more levels of cache memory, main memory (all three are semiconductor memory), and virtual memory (on a magnetic or optical disk). Figure 1 shows a memory hierarchy typical of today’s (1995) commodity systems. Performance of a memory system is measured in terms of latency and bandwidth. The latency of a memory request is how long it takes the memory system to produce the result of the request. The bandwidth of a memory system is the rate at which the memory system can accept requests and produce results. The memory hierarchy improves average latency by quickly returning results that are found in the higher levels of the hierarchy. The memory hierarchy usually reduces bandwidth requirements by intercepting a fraction of the memory requests at higher levels of the hierarchy. Some machines, such as high-performance vector machines, may have fewer levels in the hierarchy—increasing cost for better predictability and performance. Some of these machines contain no caches at all, relying on large arrays of main memory banks to supply very high bandwidth. Pipelined accesses of operands reduce the performance impact of long latencies in these machines. Cache memories are a general solution to improving the performance of a memory system. Although caches are smaller than typical main memory sizes, they ideally contain the most frequently accessed portions of main memory. By keeping the most heavily used data near the CPU, caches can service a large fraction of the requests without needing to access main memory (the fraction serviced is called the hit rate). Caches require locality of reference to work well transparently—they assume that accessed memory words will be accessed again quickly (temporal locality) and that memory words adjacent to an accessed word will be accessed soon after the access in question (spatial locality). When the CPU issues a request for a datum not in the cache (a cache miss), the cache loads that datum and some number of adjacent data (a cache block) into itself from main memory. To reduce cache misses, some caches are associative—a cache may place a given block in one of several places, collectively called a set. This set is content-addressable; a block may be accessed based on an address tag, one of which is coupled with each block. When a new block is brought into a set and the set is full, the cache’s replacement policy dictates which of the old blocks should be removed from the cache to make room for the new. Most caches use an approximation of least recently used (LRU) replacement.
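
The set-associative lookup and LRU replacement described above can be made concrete with a small simulation. The sketch below is illustrative only; the cache geometry and the address trace are made up.

from collections import OrderedDict

class SetAssociativeCache:
    def __init__(self, num_sets, ways, block_size):
        self.num_sets, self.ways, self.block_size = num_sets, ways, block_size
        self.sets = [OrderedDict() for _ in range(num_sets)]   # tag -> None, kept in LRU order
        self.hits = self.misses = 0

    def access(self, address):
        block = address // self.block_size
        index = block % self.num_sets          # which set the block maps to
        tag = block // self.num_sets           # identifies the block within its set
        cache_set = self.sets[index]
        if tag in cache_set:                   # hit: mark the block most recently used
            cache_set.move_to_end(tag)
            self.hits += 1
        else:                                  # miss: fetch the block, evicting the LRU entry if the set is full
            if len(cache_set) >= self.ways:
                cache_set.popitem(last=False)
            cache_set[tag] = None
            self.misses += 1

cache = SetAssociativeCache(num_sets=4, ways=2, block_size=16)
trace = [0, 4, 16, 64, 0, 128, 64, 4]          # byte addresses with some temporal and spatial reuse
for address in trace:
    cache.access(address)
print(f"hit rate = {cache.hits / len(trace):.2f}")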

702 citations

Journal Article
TL;DR: In this article, the effects of switching transients in digital MOS circuits that perturb analog circuits integrated on the same die through substrate coupling are observed experimentally, and device simulations predict that in uniform, lightly doped bulk substrates the substrate noise is highly dependent on layout geometry.
Abstract: An experimental technique is described for observing the effects of switching transients in digital MOS circuits that perturb analog circuits integrated on the same die by means of coupling through the substrate. Various approaches to reducing substrate crosstalk (the use of physical separation of analog and digital circuits, guard rings, and a low-inductance substrate bias) are evaluated experimentally for a CMOS technology with a substrate comprising an epitaxial layer grown on a heavily doped bulk wafer. Observations indicate that reducing the inductance in the substrate bias is the most effective. Device simulations are used to show how crosstalk propagates via the heavily doped bulk and to predict the nature of substrate crosstalk in CMOS technologies integrated in uniform, lightly doped bulk substrates, showing that in such cases the substrate noise is highly dependent on layout geometry. A method of including substrate effects in SPICE simulations for circuits fabricated on epitaxial, heavily doped substrates is developed.
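
A back-of-envelope estimate shows why the substrate-bias inductance matters: a fast digital switching current returning through that inductance produces a bounce of roughly V = L * dI/dt, which the shared substrate couples into the analog section. The values below are assumptions chosen for illustration, not measurements from the paper.

L_BIAS = 5e-9      # H, assumed substrate-bias (bond wire plus package) inductance
DELTA_I = 10e-3    # A, assumed digital switching current step
DELTA_T = 1e-9     # s, assumed current rise time

v_bounce = L_BIAS * DELTA_I / DELTA_T
print(f"substrate bounce ~ {v_bounce * 1e3:.0f} mV")   # about 50 mV for these values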

567 citations

Journal ArticleDOI
TL;DR: This paper presents a novel test-data volume-compression methodology called the embedded deterministic test (EDT), which reduces manufacturing test cost by providing one to two orders of magnitude reduction in scan test data volume and scan test time.
Abstract: This paper presents a novel test-data volume-compression methodology called the embedded deterministic test (EDT), which reduces manufacturing test cost by providing one to two orders of magnitude reduction in scan test data volume and scan test time. The presented scheme is widely applicable and easy to deploy because it is based on the standard scan/ATPG methodology and adopts a very simple flow. It is nonintrusive as it does not require any modifications to the core logic such as the insertion of test points or logic bounding unknown states. The EDT scheme consists of logic embedded on a chip and a new deterministic test-pattern generation technique. The main contributions of the paper are test-stimuli compression schemes that allow us to deliver test data to the on-chip continuous-flow decompressor. In particular, it can be done by repeating certain patterns at the rates, which are adjusted to the requirements of the test cubes. Experimental results show that for industrial circuits with test cubes with very low fill rates, ranging from 3% to 0.2%, these schemes result in compression ratios of 30 to 500 times. A comprehensive analysis of the encoding efficiency of the proposed compression schemes is also provided.
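
The linear-decompression idea behind such schemes can be sketched as follows; this is an illustration of the principle, not the EDT ring generator or its solver. An on-chip linear decompressor expands a few compressed bits into many scan-cell values over GF(2), so a test cube with a low fill rate can be encoded by choosing compressed bits whose expansion matches just the specified positions. The expansion mapping and the brute-force search below are assumptions made for the example; a real scheme solves the resulting linear equations directly.

import itertools

K, N = 8, 64                                 # 8 compressed bits expand into 64 scan cells

def row_mask(cell):
    # Each scan cell XORs the compressed bits selected by the binary code of
    # (cell + 1); any fixed GF(2) linear mapping would do for this sketch.
    return [(cell + 1) >> k & 1 for k in range(K)]

def expand(compressed):
    """Linear decompressor: each scan-cell value is a GF(2) sum of compressed bits."""
    return [sum(m * c for m, c in zip(row_mask(i), compressed)) % 2 for i in range(N)]

def encode(test_cube):
    """test_cube: {scan cell index: required value} for the specified bits only."""
    for seed in itertools.product((0, 1), repeat=K):   # brute force; real schemes solve the equations
        image = expand(seed)
        if all(image[i] == v for i, v in test_cube.items()):
            return seed
    return None                                        # cube not encodable with this mapping

cube = {3: 1, 17: 0, 40: 1, 59: 1}                     # low fill rate: 4 of 64 bits specified
print("compressed bits:", encode(cube))
print("expansion ratio:", N // K)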

529 citations