
Showing papers by "Dong Xiang published in 2010"


Proceedings ArticleDOI
19 Apr 2010
TL;DR: A new test application scheme called partial launch-on-capture (PLOC) is proposed to reduce capture power in both the launch and capture cycles of broadside testing, together with a scan flip-flop partition algorithm that minimizes the overlapping part.
Abstract: Most previous DFT-based techniques for low-capture-power broadside testing can only reduce test power in one of the two capture cycles, the launch cycle or the capture cycle. Even the methods that reduce both may make some faults that are testable under standard broadside testing untestable. In this paper, a new test application scheme called partial launch-on-capture (PLOC) is proposed to solve both problems. It allows only a subset of scan flip-flops to be active in the launch and capture cycles. To guarantee that all faults testable under standard broadside testing can still be detected, extra effort is required to check the overlapping part. In addition, calculating the overlapping part differs from previous techniques for stuck-at fault testing because broadside testing requires two consecutive capture cycles. Therefore, a new scan flip-flop partition algorithm is proposed to minimize the overlapping part. Extensive experimental results demonstrate the efficiency of the proposed method.
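As a toy illustration of the overlap the abstract refers to, the flip-flops that stay active in both the launch and the capture cycle of a PLOC-style partition can be computed as a set intersection. The function and flip-flop names below are hypothetical, not from the paper:

```python
def overlapping_part(launch_active, capture_active):
    """Flip-flops active in both the launch and the capture cycle.

    PLOC-style partitioning tries to keep this set small, since these
    flip-flops must be checked so that no broadside-testable fault is
    lost. (Hypothetical sketch; names are illustrative.)
    """
    return set(launch_active) & set(capture_active)

# Two hypothetical activation groups for the two consecutive cycles
print(sorted(overlapping_part({"f1", "f2", "f3"}, {"f2", "f3", "f4"})))
# → ['f2', 'f3']
```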

24 citations


Proceedings ArticleDOI
Jia Li1, Dong Xiang1
29 Nov 2010
TL;DR: This paper proposes to provide testability for the breaking points introduced by the Through-Silicon Vias (TSVs) of 3D-Stacked ICs during pre-bond testing, at low Design-for-Testability (DfT) cost.
Abstract: This paper proposes to provide testability for the breaking points introduced by the Through-Silicon Vias (TSVs) of 3D-Stacked ICs (3D-SICs) during pre-bond testing, at low Design-for-Testability (DfT) cost. Unlike prior solutions, which add two extra wrapper cells for the breaking point at each TSV, this paper provides testability for the two ends of each TSV by reusing the existing Primary Inputs/Primary Outputs (PIs/POs) and Pseudo-PIs/Pseudo-POs (PPIs/PPOs). To further reduce hardware overhead and improve the efficiency of the proposed method, the paper also proposes metrics and an algorithm for deciding the selection order of the TSVs and of the PIs/PPIs (POs/PPOs) to be reused. Experimental results on the larger ITC'99 benchmark circuits validate the effectiveness of the proposed method.

21 citations


Proceedings ArticleDOI
07 Nov 2010
TL;DR: This paper proposes minimum-violations partitioning (MVP), a scan-cell clustering method that can support multiple capture cycles in delay testing without increasing test-data volume.
Abstract: Scan shift power can be reduced by activating only a subset of scan cells in each shift cycle. In contrast to shift power reduction, the use of only a subset of scan cells to capture responses in a cycle may cause capture violations, thereby leading to fault coverage loss. In order to restore the original fault coverage, new test patterns must be generated, leading to higher test-data volume. In this paper, we propose minimum-violations partitioning (MVP), a scan-cell clustering method that can support multiple capture cycles in delay testing without increasing test-data volume. This method is based on an integer linear programming model and it can cluster the scan flip-flops into balanced parts with minimum capture violations. Based on this approach, hierarchical partitioning is proposed to make the partitioning method routing-aware. Experimental results on ISCAS'89 and IWLS'05 benchmark circuits demonstrate the effectiveness of our method.
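The paper formulates the clustering as an integer linear program; as a rough brute-force stand-in for that objective (balanced parts, minimum cross-part capture violations), one might write something like the following. The function names and the tiny dependency model are assumptions, not the authors' ILP formulation:

```python
from itertools import combinations

def min_violation_bipartition(cells, dep_pairs):
    """Split scan cells into two equal-size groups, minimizing the number
    of dependency pairs split across groups (a toy model of capture
    violations). Brute force over balanced bipartitions; the paper
    instead solves an integer linear program.
    """
    n = len(cells)
    best = None
    for group in combinations(cells, n // 2):
        a = set(group)
        # A violation occurs when a dependent pair straddles the cut
        cost = sum(1 for u, v in dep_pairs if (u in a) != (v in a))
        if best is None or cost < best[0]:
            best = (cost, a, set(cells) - a)
    return best

cells = ["f1", "f2", "f3", "f4"]
deps = [("f1", "f2"), ("f3", "f4"), ("f1", "f3")]
cost, part_a, part_b = min_violation_bipartition(cells, deps)
print(cost)  # → 1 (only the f1-f3 pair is split)
```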

16 citations


Journal ArticleDOI
TL;DR: Based on an effective scan chain configuration, a segment-based X-filling is presented that reduces test power while preserving defect coverage; experimental results show that low test power and high defect coverage can be achieved.
Abstract: Test power is a serious problem in scan-based testing. DFT-based techniques and X-filling are two effective ways to reduce both shift power and capture power. However, few previous methods pay attention to defect coverage when reducing test power, and many of them, especially X-filling methods, may lead to low defect coverage. In this paper, based on an effective scan chain configuration, we present a segment-based X-filling that reduces test power while preserving defect coverage. The scan chain configuration clusters scan flip-flops with common successors into one scan chain, so that the specified bits of each pattern are distributed over a minimum number of chains. Under this configuration, all the bits loaded into some scan chains by a vector may be don't-care (X) bits. For these scan chains, segment-based X-filling is used to reduce test power while preserving defect coverage. Compared with the ordinary full-scan architecture, experimental results show that low test power and high defect coverage can be achieved.
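A minimal sketch of the segment-based filling idea, under the assumption (mine, not necessarily the paper's exact rule) that each segment's don't-care bits are filled with a single constant to suppress shift transitions:

```python
def segment_fill(pattern, seg_len):
    """Fill the X bits of each scan-chain segment with the segment's
    majority known bit (0 on a tie or an all-X segment), reducing bit
    transitions during shifting. (Simplified illustration only; the
    paper's filling rule may differ.)
    """
    out = []
    for i in range(0, len(pattern), seg_len):
        seg = pattern[i:i + seg_len]
        fill = "1" if seg.count("1") > seg.count("0") else "0"
        out.append("".join(fill if b == "X" else b for b in seg))
    return "".join(out)

# One hypothetical 8-bit pattern, two segments of length 4
print(segment_fill("1XX0XXXX", 4))  # → 10000000
```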

10 citations


Journal ArticleDOI
01 Jun 2010
TL;DR: Experimental results show that fault coverage with the proposed method is comparable to that of enhanced scan, with efficient conflict-driven selection and reordering schemes that greatly improve coverage.
Abstract: This paper presents a new method for improving transition fault coverage in hybrid scan testing. It is based on a novel test application scheme that breaks the functional dependence of broadside testing. The new technique analyzes the automatic test pattern generation (ATPG) conflicts in broadside and skewed-load test generation, and tries to control the flip-flops with the most influence on fault coverage. The conflict-driven selection method selects flip-flops to work in the enhanced-scan or skewed-load scan mode, and the conflict-driven reordering method distributes the selected flip-flops into different chains. In the multiple scan chain architecture, to avoid requiring too many scan-in pins, some chains are driven by the same scan-in pin to construct a tree-based architecture. Based on this architecture, the new test application scheme allows some flip-flops to work in enhanced-scan or skewed-load mode, while most of the others work in the traditional broadside scan mode. With the efficient conflict-driven selection and reordering schemes, fault coverage is improved greatly, which also reduces test application time and test data volume. Experimental results show that fault coverage with the proposed method is comparable to that of enhanced scan.
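The conflict-driven selection described above can be caricatured as a top-k pick over per-flip-flop conflict counts. The function name, the counts, and the scoring below are hypothetical stand-ins for the paper's actual analysis:

```python
def select_enhanced_scan(conflict_counts, k):
    """Pick the k flip-flops with the most ATPG conflicts to run in
    enhanced-scan (or skewed-load) mode; the rest stay in broadside
    mode. (Illustrative sketch; the paper's selection metric is richer.)
    """
    return sorted(conflict_counts, key=conflict_counts.get, reverse=True)[:k]

# Hypothetical conflict counts gathered during test generation
counts = {"f1": 5, "f2": 9, "f3": 2, "f4": 7}
print(select_enhanced_scan(counts, 2))  # → ['f2', 'f4']
```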

9 citations


Proceedings ArticleDOI
03 Jan 2010
TL;DR: The double-tree scan-path architecture, originally proposed for low test power, is adapted to simultaneously reduce the test application time and test data volume under external testing.
Abstract: The double-tree scan-path architecture, originally proposed for low test power, is adapted to simultaneously reduce the test application time and test data volume under external testing. Experimental results show significant performance improvements over other existing scan architectures.

8 citations


Proceedings ArticleDOI
08 Dec 2010
TL;DR: A simple and efficient selection function based on the concept of dynamic-bandwidth-estimation (DBE) to relieve network congestion and improve network performance is presented.
Abstract: The performance of interconnection networks is heavily influenced by routing algorithms. The selection function, which decides the final output channel when several admissible output channels exist, is essential to an adaptive routing algorithm. Congestion is usually a major cause of performance degradation in interconnection networks. In this paper, we present a simple and efficient selection function based on the concept of dynamic bandwidth estimation (DBE) to relieve network congestion and improve network performance. With the proposed selection function, each router gathers congestion information locally, dynamically estimates the actual bandwidth of each output channel from that information, and tries to route each packet through the channel that keeps it as clear as possible of congested nodes and links. In this way, the traffic loads on the network's links also become more balanced. The DBE selection function can be coupled with any network topology and adaptive routing algorithm. Performance evaluation was carried out with a flit-accurate network simulator under various routing algorithms and traffic patterns. The results show that the DBE selection function consistently improves both throughput and average latency.
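As a hedged sketch of what a bandwidth-estimating selection function could look like (the weighting below is my guess, not the paper's DBE formula):

```python
def select_channel(admissible, free_slots, recent_throughput):
    """Choose the admissible output channel with the highest estimated
    bandwidth, here modeled as free buffer slots scaled by recently
    observed throughput. (Hypothetical estimate, not the paper's
    actual DBE metric.)
    """
    return max(admissible,
               key=lambda ch: free_slots[ch] * (1 + recent_throughput[ch]))

# Locally observed state at one router (illustrative numbers)
free = {"east": 2, "north": 4}
tput = {"east": 0.9, "north": 0.1}
print(select_channel(["east", "north"], free, tput))  # → north
```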

7 citations


Proceedings ArticleDOI
08 Dec 2010
TL;DR: The X-Y routing scheme with wormhole-switching technique used in multi-mapping meshes is developed, which allows one processing element (PE) to be connected to multiple routers, and vice versa, by properly establishing the mapping relationships between PE and router.
Abstract: The traditional mesh is a popular interconnection architecture for Networks-on-Chip (NoCs). However, as the number of cores in chip multiprocessors grows, meshes suffer from fast-growing diameter and average distance. Concentration and express channels are two recent countermeasures. In this paper, a new scheme called multi-mapping is proposed, which allows one processing element (PE) to be connected to multiple routers, and vice versa. By properly establishing the mapping relationships between PEs and routers, both the diameter and the average distance of the network are lowered while the interconnections between routers are unaltered. To provide efficient, in-order communication in NoCs, we develop an X-Y routing scheme with the wormhole-switching technique for multi-mapping meshes. Simulation results comparing against traditional meshes and mesh-based express cubes under different traffic patterns show the effectiveness of our method.
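Plain dimension-order (X-Y) routing on a mesh, which the paper adapts to multi-mapping, can be sketched as follows; the coordinate convention and function name are assumptions for illustration:

```python
def xy_route(src, dst):
    """Dimension-order (X-Y) routing on a mesh: travel along the X axis
    first, then along Y. Returns the hop-by-hop router coordinates.
    (Baseline sketch; the paper extends X-Y routing to multi-mapping
    meshes with wormhole switching.)
    """
    x, y = src
    path = [src]
    while x != dst[0]:
        x += 1 if dst[0] > x else -1
        path.append((x, y))
    while y != dst[1]:
        y += 1 if dst[1] > y else -1
        path.append((x, y))
    return path

print(xy_route((0, 0), (2, 1)))  # → [(0, 0), (1, 0), (2, 0), (2, 1)]
```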

1 citation


Journal ArticleDOI
TL;DR: It is shown that DTS allows the same test vector to be loaded into the scan path in multiple ways dynamically, which enhances the linear independence of broadcast-based compression and, in consequence, reduces test application time and test data volume under external testing.
Abstract: The double-tree scan (DTS) architecture was originally proposed for reducing power consumption in test mode. In this paper, we show that DTS allows the same test vector to be loaded into the scan path in multiple ways dynamically. This feature enhances the linear independence of broadcast-based compression, which in consequence reduces test application time and test data volume under external testing. Unlike a linear scan chain, the DTS has multiple paths from the source to the sink, so each pattern can be loaded into the scan chains in different ways, breaking the correlation of the traditional broadcast loading mode. By combining this flexibility of DTS with the broadcast-scan architecture to load different DTS-based scan chains simultaneously, we reduce both test data volume and test application time.

1 citation


Proceedings ArticleDOI
19 Apr 2010
TL;DR: The hybrid (LOS+LOC) scheme proposed here simultaneously considers all three test cost parameters and achieves better fault coverage than prior schemes, as demonstrated by experimental results.
Abstract: Test power, volume, and time are the major test cost parameters that must be minimized while achieving the desired level of fault coverage. Unlike prior research in delay fault testing, which has focused on at most two of these test cost parameters, the hybrid (LOS+LOC) scheme proposed here simultaneously considers all three and achieves better fault coverage than prior schemes, as demonstrated by experimental results. A factor of n / log n reduction in test power is achieved by using a nonlinear double-tree scan (DTS) structure instead of a linear scan chain of length n. Concomitantly, by exploiting the permutation feature of DTS, whereby the same test data can be loaded in multiple ways, we also achieve substantial reductions in test data volume. By incorporating Illinois scan (ILS) within this framework, we minimize the test time and achieve further reductions in test data volume.

1 citation