
A Hybrid BIST Architecture and its Optimization for SoC Testing
Gert Jervan, Zebo Peng
Linköping University, Sweden
{gerje, zebpe}@ida.liu.se
Raimund Ubar, Helena Kruus
Tallinn Technical University, Estonia
raiub@pld.ttu.ee, helen.kruus@ttu.ee
Abstract
This paper presents a hybrid BIST architecture and
methods for optimizing it to test systems-on-chip in a cost
effective way. The proposed self-test architecture can be
implemented either only in software or by using some test
related hardware. In our approach we combine
pseudorandom test patterns with stored deterministic test
patterns to perform core test with minimum time and
memory, without losing test quality. We propose two
algorithms to calculate the cost of the test process. To
speed up the optimization procedure, a Tabu search
based method is employed for finding the global cost
minimum. Experimental results have demonstrated the feasibility and efficiency of the approach and a significant decrease in overall test cost.
1. Introduction
The rapid advances of microelectronics technology in recent years have brought new possibilities to integrated circuit (IC) design and manufacturing. Many systems are nowadays designed by embedding predesigned and preverified complex functional blocks, usually referred to as cores, into a single die (Figure 1). Such a design style allows designers to reuse previous designs and therefore leads to shorter time-to-market and reduced cost. Such a
System-on-Chip (SoC) approach is very attractive from the
designers’ perspective. Testing of SoC, on the other hand,
shares all the problems related to testing modern deep
submicron chips, and also introduces some additional
challenges due to the protection of intellectual property as
well as the increased complexity and higher density [1].
To test the individual cores of the system the test pattern
source and sink have to be available together with an
appropriate test access mechanism (TAM) [2] as depicted in
Figure 1. A traditional approach implements both source and
sink off-chip and therefore requires the use of external
Automatic Test Equipment (ATE). But, as the requirements
for the ATE speed and memory size are continuously
increasing, the ATE solution can be unacceptably expensive
and inaccurate. Therefore, in order to apply at-speed tests
and to keep the test costs under control, on-chip test solutions
are becoming more and more popular. Such a solution is
usually referred to as built-in self-test (BIST).
A typical BIST architecture consists of a test pattern
generator (TPG), a test response analyzer (TRA) and a BIST
control unit (BCU), all implemented on the chip. This
approach allows at-speed tests and eliminates the need for an
external tester. It can be used not only for manufacturing test
but also for periodical field maintenance tests.
The classical way to implement the TPG for logic BIST
(LBIST) is to use linear feedback shift registers (LFSR). But
as the test patterns generated by the LFSR are pseudorandom by nature [3], the LFSR-based approach often does not
guarantee a sufficiently high fault coverage (especially in the
case of large and complex designs) and demands very long
test application times in addition to high area overheads.
Therefore, several proposals have been made to combine
pseudorandom test patterns, generated by LFSRs, with
deterministic patterns [4-8], to form a hybrid BIST solution.
The main concern of the hybrid BIST approaches has
been to improve the fault coverage by mixing pseudorandom
vectors with deterministic ones, while the issue of cost
minimization has not been addressed directly.
To reduce the hardware overhead in the LBIST
architectures the hardware LFSR implementation can be
replaced by software, which is especially attractive to test
SoCs, because of the availability of computing resources
directly in the system (a typical SoC usually contains at least
one processor core). On the other hand, the software based approach is criticized because of its large memory requirements (to store the test program and test patterns).

[Figure 1. Testing a system-on-chip: a SoC containing CPU, SRAM, ROM, DRAM, MPEG, UDL and peripheral component interconnect blocks; the core under test, surrounded by a wrapper, is connected to the test pattern source and sink through the test access mechanism.]
Similar work has been reported in [7]. However, the
approach presented there has no direct cost considerations
and can therefore lead to very long test application times
because of the unlimited number of pseudorandom test
patterns.
In our approach we propose to use a hybrid test set, which
contains a limited number of pseudorandom and
deterministic test vectors. The pseudorandom test vectors
can be generated either by hardware or by software and later
complemented by the stored deterministic test set which is
specially designed to shorten the pseudorandom test cycle
and to target the random resistant faults. The basic idea of
Hybrid BIST was discussed in [13].
The main objective of the current work is to propose a test
architecture that supports the combination of pseudorandom
and deterministic vectors and to find the optimal balance
between those two test sets with minimum cost of time and
memory, without losing test quality. We propose two
different algorithms to calculate the total cost of the hybrid
BIST solution and a fast method to find the optimal
switching moment from the pseudorandom test to the stored
deterministic test patterns.
A similar problem has been addressed in [8], where an
approach to minimize testing time has been presented. It has been shown that hybrid BIST (or CBET in their terminology) can
achieve shorter testing time than pseudorandom or
deterministic test alone. However, the proposed algorithm
does not address total cost minimization (time and memory).
The rest of this paper is organized as follows. In section 2
we introduce the target architecture to implement our
approach, section 3 gives an overview of the concepts of
hybrid BIST. In sections 4 and 5 we discuss the concepts of
calculating the test cost for different solutions, including
hybrid BIST. In section 6 a Tabu search based method is
proposed for optimizing the hybrid BIST test set and in
section 7 we present the experimental results which
demonstrate the efficiency of our approach. In section 8 we
will draw some conclusions together with an introduction to
future work.
2. Target Hybrid BIST Architecture
A hardware based hybrid BIST architecture is depicted in
Figure 2, where the pseudorandom pattern generator (PRPG)
and the multiple input signature register (MISR) are implemented inside the core under test (CUT). The deterministic test patterns are precomputed off-line and stored
inside the system.
To avoid the hardware overhead caused by the PRPG and
MISR, and the performance degradation due to excessively
large LFSRs, a software based hybrid BIST can be used
where pseudorandom test patterns are produced by the test
software. However, the cost calculation and optimization
algorithms to be proposed are general, and can be applied to both the hardware based and the software based hybrid BIST optimization.
In the case of a software based solution, the test program,
together with test data (LFSR polynomials, initial states,
pseudorandom test length, signatures), is kept in a ROM.
The deterministic test vectors are generated during the
development process and are stored in the same place. For
transporting the test patterns, we assume that some form of
TAM is available.
In test mode the test program is executed in the processor
core. The test program proceeds in two successive stages. In
the first stage the pseudorandom test pattern generator,
which emulates the LFSR, is executed. In the second stage
the test program will apply precomputed deterministic test
vectors to the core under test.
The pseudorandom TPG software is the same for all cores
in the system and is stored as one single copy. All
characteristics of the LFSR needed for emulation are specific
to each core and are stored in the ROM. They will be loaded
upon request. Such an approach is very effective in the case
of multiple cores, because for each additional core, only the
BIST characteristics for this core have to be stored. The
general concept of the software based pseudorandom TPG is
depicted in Figure 3.
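
As an illustration of this scheme, the following C sketch emulates a per-core LFSR and runs the two test stages described above. It is a minimal sketch, not the implementation used in the paper: the lfsr_cfg record, the apply_to_core() TAM access routine and the 32-bit Fibonacci LFSR are assumptions made for illustration.

    #include <stddef.h>
    #include <stdint.h>

    /* Per-core BIST data as it might be stored in the ROM: feedback
       polynomial (tap mask), initial state and test length N_j. */
    typedef struct {
        uint32_t poly;   /* taps of the feedback polynomial */
        uint32_t seed;   /* initial LFSR state */
        uint32_t length; /* N_j: number of pseudorandom patterns for core j */
    } lfsr_cfg;

    /* Assumed TAM access routine: applies one pattern to the given core. */
    extern void apply_to_core(int core, uint32_t pattern);

    /* Stage 1: one shared emulation routine for all cores; only the
       configuration record loaded from the ROM differs per core. */
    static void pseudorandom_test(int core, const lfsr_cfg *cfg)
    {
        uint32_t state = cfg->seed;
        for (uint32_t i = 0; i < cfg->length; i++) {
            apply_to_core(core, state);
            /* Fibonacci LFSR step: shift right and feed the parity of
               the tapped bits back into the top bit (GCC/Clang builtin). */
            uint32_t fb = __builtin_parity(state & cfg->poly);
            state = (state >> 1) | (fb << 31);
        }
    }

    /* Hybrid BIST for one core: L pseudorandom patterns followed by
       the S stored deterministic patterns. */
    void hybrid_bist(int core, const lfsr_cfg *cfg,
                     const uint32_t *stored, size_t S)
    {
        pseudorandom_test(core, cfg);        /* stage 1 */
        for (size_t i = 0; i < S; i++)
            apply_to_core(core, stored[i]);  /* stage 2 */
    }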
Although it is assumed that the best possible pseudorandom sequence is used, not all parts of the system are always testable by a pure pseudorandom test sequence. It can also take a very long test application time to reach a
good fault coverage level.

[Figure 2. Hardware based hybrid BIST architecture: the PRPG feeds the mission logic of the core and the MISR compacts its responses, both inside the SoC, together with a BIST controller and a ROM.]

[Figure 3. LFSR emulation: the CPU core runs a single generic emulation loop (load(LFSR_j); for (i = 0; i < N_j; i++) ...), while the core-specific data, e.g. LFSR1 = 001010010101010011 with N1 = 275 and LFSR2 = 110101011010110101 with N2 = 900, are stored in the ROM.]
In the case of hybrid BIST, we can dramatically reduce
length of the initial pseudorandom sequence by
complementing it with deterministic stored test patterns, and
achieve 100% fault coverage. The method proposed in
the paper helps to find tradeoffs between the length of the
best pseudorandom test sequence and the number of stored
deterministic patterns.
3. Cost of Hybrid BIST
Since the test patterns generated by LFSRs are
pseudorandom by nature, the generated test sequences are
usually very long and not sufficient to detect all the faults.
Figure 4 shows the fault coverage of the pseudorandom test as a function of the test length for some larger ISCAS'85
[9] benchmark circuits. To avoid the test quality loss due to
the random pattern resistant faults and to speed up the testing
process, we have to apply deterministic test patterns targeting
the random resistant and difficult to test faults. Such a hybrid
BIST approach starts with a pseudorandom test sequence of
length L. In the next stage the stored deterministic test takes over: precomputed test patterns, stored in the ROM, are applied to the core under test to reach the desired fault coverage.
In a hybrid BIST technique the length L of the pseudorandom test is an important parameter, which determines the behavior of the whole test process [7]. It is assumed in this paper that for the hybrid BIST the best polynomial for pseudorandom sequence generation will be chosen. Removing the latter part of the pseudorandom sequence leads to a lower fault coverage achievable by the pseudorandom test. The loss in fault coverage has to be covered by additional deterministic test patterns. In other words, a shorter pseudorandom test set implies a larger deterministic test set. This requires additional memory space, but at the same time shortens the overall test process. A longer pseudorandom test, on the other hand, leads to a longer test application time with reduced memory requirements. Therefore it is crucial to determine the optimal length L_OPT of the pseudorandom test in order to minimize the total testing cost.
Figure 5 illustrates the total cost calculation for the hybrid BIST consisting of pseudorandom test and stored test, generated off-line. We can define the total test cost of the hybrid BIST, C_TOTAL, as:

C_TOTAL = C_GEN + C_MEM = α·L + β·S    (1)

where C_GEN is the cost related to the time for generating the L pseudorandom test patterns (number of clock cycles), C_MEM is related to the memory cost for storing the S precomputed test patterns that improve the pseudorandom test set, and α, β are constants that map the test length and memory space to the costs of the two parts of the test solution to be mixed. Figure 5 illustrates how the cost of the pseudorandom test increases when striving for higher fault coverage (the C_GEN curve). The total cost C_TOTAL is the sum of the two costs. The weights α and β reflect the correlation between the cost and the pseudorandom test time (number of clock cycles used), and between the cost and the memory size needed for storing the precomputed test sequence, respectively. For simplicity we assume here α = 1 and β = B, where B is the number of bytes of the input test vector to be applied to the CUT. Hence, in the experimental work demonstrating the feasibility and efficiency of the following algorithms, we use as cost units the number of clock cycles used for pseudorandom test generation and the number of bytes of memory needed for storing the precomputed deterministic test patterns. In practice those weights are determined by the system specification and requirements, and can be used to drive the final implementation towards different alternatives (for example, a slower but more memory efficient solution).
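
To make the cost model concrete, the following C sketch scans all efficient clocks and picks the switching moment with minimum total cost; it assumes the pairs (k, t(k)) from a table such as Table 1 are already available, and uses the simplification α = 1, β = B described above.

    #include <stddef.h>
    #include <stdint.h>

    /* One entry per efficient clock: k and the number t(k) of
       deterministic patterns still needed after k pseudorandom ones. */
    typedef struct {
        uint32_t k; /* efficient clock number (pseudorandom test length L) */
        uint32_t t; /* t(k): deterministic patterns left to store */
    } bist_entry;

    /* C_TOTAL = alpha*L + beta*S with alpha = 1 and beta = B,
       where B is the width of one test vector in bytes. */
    static uint64_t total_cost(const bist_entry *e, uint32_t B)
    {
        return (uint64_t)e->k + (uint64_t)B * e->t;
    }

    /* Exhaustive scan over all efficient clocks; returns the index of
       the entry with minimum C_TOTAL, so L_OPT = entries[best].k. */
    size_t find_l_opt(const bist_entry *entries, size_t n, uint32_t B)
    {
        size_t best = 0;
        for (size_t i = 1; i < n; i++)
            if (total_cost(&entries[i], B) < total_cost(&entries[best], B))
                best = i;
        return best;
    }

For example, with the Table 1 data for c880 and B = 8 (the 60-bit input vectors of c880 rounded up to bytes, an assumption made for illustration), such a scan would weigh switching at k = 100 (cost 100 + 8*52 = 516) against switching at k = 411 (cost 411 + 8*26 = 619).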
[Figure 4. Pseudorandom test for some ISCAS'85 circuits: fault coverage as a function of the pseudorandom test length.]

[Figure 5. Cost calculation for hybrid BIST: the cost C_GEN of the pseudorandom test grows with its length, the cost C_MEM of the stored test shrinks together with the number r_NOT(k) of faults remaining after k pseudorandom patterns, and their sum, the total cost C_TOTAL, reaches its minimum at L_OPT on the time/memory axis.]

Equation 1, which is used for calculating the test cost as a sum of the cost of the pseudorandom test and of the memory cost associated with storing the ATPG produced test, represents a simplified cost model for the hybrid BIST. In this model
neither the basic cost of memory (or its equivalent) occupied
by the LFSR emulator, nor the time needed for generating
deterministic test patterns is taken into account. However, the goal of this paper was not to develop an accurate cost
function for the whole BIST solution. The goal was to show
that the total cost of a hybrid BIST is essentially a function of
arguments L and S, and to develop a method to calculate the
value of S at a given value of L to find the tradeoffs between
the length of pseudorandom test and the number of
deterministic patterns to minimize the total cost of a hybrid
BIST.
Hence, the main problem of the formulated optimization task is how to find the curves C_GEN and C_MEM in Figure 5 in the most efficient way.
4. Calculation of the Cost for Pseudorandom Test
Creating the curve C_GEN = αL is not difficult. For this purpose, the cumulative fault coverage (as in Figure 4) of the pseudorandom sequence generated by an LFSR should be calculated by fault simulation. As the result we find, for each clock cycle, the list of faults covered at this time moment. In fact, we are interested in identifying only those clock numbers at which at least one new fault is covered. Let us call such clock numbers and the corresponding pseudorandom test patterns efficient clocks and efficient patterns, respectively.
As an example, the first four columns of Table 1 present a fragment of selected fault simulation results for a pseudorandom test of the ISCAS'85 circuit c880, where
k is the number of the clock cycle,
r_DET(k) is the number of new faults detected by the test pattern generated at clock signal k,
r_NOT(k) is the number of faults not yet covered by the sequence generated by k clock signals,
FC(k) is the fault coverage reached by the sequence generated by k clock signals.
The rows in Table 1 correspond to selected efficient clocks for the circuit c880. If we decide to switch from pseudorandom mode to deterministic mode after clock number k, then L = k.
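
As a small illustration of this step, the sketch below extracts the efficient clocks from per-clock fault simulation output; the new_faults array is a hypothetical interface to the fault simulator, not part of the paper.

    #include <stddef.h>
    #include <stdint.h>

    /* Keep only the clocks at which at least one new fault is detected
       ("efficient clocks"); these are the candidate switching moments. */
    size_t efficient_clocks(const uint32_t *new_faults, size_t n_clocks,
                            uint32_t *eff_out)
    {
        size_t m = 0;
        for (size_t k = 0; k < n_clocks; k++)
            if (new_faults[k] > 0)
                eff_out[m++] = (uint32_t)(k + 1); /* clocks are 1-based in Table 1 */
        return m;
    }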
It is more difficult to find the values for C_MEM = βS. Let t(k) be the number of test patterns needed to cover the r_NOT(k) not yet detected faults (these patterns should be precomputed and used as stored test patterns in the hybrid BIST). As an example, this data for the circuit c880 is depicted in the last column of Table 1. In the following section the difficulties and possible ways to solve the problem are discussed.
5. Calculation of the Cost for Stored Test
There are two approaches to find t(k): ATPG based and fault table based. Let us use the following notation:
i – the current number of the entry in the table of BIST analysis data;
k(i) – the number of the efficient clock cycle;
R_DET(i) – the set of new faults detected by the pseudorandom pattern generated at k(i);
R_NOT(i) – the set of not yet covered faults after applying the pseudorandom pattern number k(i);
T(i) – the set of test patterns found by ATPG to cover the faults in R_NOT(i);
N – the number of all efficient patterns in the sequence created by the pseudorandom test;
FT – the fault table for a given set of test patterns T and the given set of faults R: the table defines the subset R(t_j) ⊆ R of detected faults for each pattern t_j ∈ T.
Algorithm 1: ATPG based generation of t(k)
1. Let k := N;
2. Generate for R_NOT(k) a test set T(k); T := T(k); t(k) := |T|;
3. For all k = N-1, N-2, ..., 1: generate for the faults of R_NOT(k) not covered by T a test set T(k); T := T + T(k); t(k) := |T|;
4. END.
This algorithm generates a new deterministic test set for the not yet detected faults at every efficient clock cycle. In this way we have a complete test set (consisting of pseudorandom and deterministic test vectors) for every efficient clock, which can reach the maximal achievable fault coverage. The numbers of deterministic test vectors at all efficient clocks are then used to create the curve C_MEM = βS. The algorithm is straightforward but very time consuming because of the repetitive use of ATPG.
Algorithm 2: Fault table based generation of t(k)
1. Calculate the whole test set T = {t_j} for the whole set of faults R by any ATPG to reach as high fault coverage as possible;
2. Create for T and R the fault table FT = {R(t_j)};
3. Take k = 0, T_k = T, R_k = R, FT_k = FT;
4. Take k = k + 1;
5. Calculate by fault simulation R_DET(k);
6. Update the fault table: for all j, t_j ∈ T_k: R(t_j) := R(t_j) - R_DET(k);
7. Remove from the test set T_k all test patterns t_j ∈ T_k where R(t_j) = ∅;
8. If T_k = ∅, go to END;
9. Optimize the test set T_k by any test compaction algorithm; t(k) := |T_k|; go to 4;
10. END.

Table 1. BIST analysis data

   k    r_DET(k)  r_NOT(k)   FC(k)   t(k)
   1      155       839      15.6%   104
   2       76       763      23.2%   104
   3       65       698      29.8%   100
   4       90       608      38.8%   101
   5       44       564      43.3%    99
  10      104       421      57.6%    95
  20       44       311      68.7%    87
  50       51       218      78.1%    74
 100       16       145      85.4%    52
 200       18       114      88.5%    41
 411       31        70      93.0%    26
 954       18        28      97.2%    12
1560        8        16      98.4%     7
2153       11         5      99.5%     3
3449        2         3      99.7%     2
4519        2         1      99.9%     1
4520        1         0     100.0%     0
This algorithm starts by generating a test set T for all detectable faults. Based on the fault simulation results a fault table FT will be created. By applying k pseudorandom patterns, we can remove from the original fault table all faults which were covered by the pseudorandom vectors, and by using static test compaction reduce the original deterministic test set. Those modifications should be performed iteratively for all possible breakpoints to calculate the curve C_MEM = βS and to use this information to find the optimal C_TOTAL.
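
Steps 6 and 7 of Algorithm 2 can be sketched in C with fault sets represented as bit vectors; the fixed-size fault_set type and the function names are assumptions made for illustration.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    #define FAULT_WORDS 32 /* room for up to 2048 faults in this sketch */

    typedef struct {
        uint64_t bits[FAULT_WORDS]; /* R(t_j): faults detected by pattern t_j */
    } fault_set;

    /* Step 6: remove the faults R_DET(k), detected by the k-th
       pseudorandom pattern, from every row of the fault table. */
    void update_fault_table(fault_set *ft, size_t n_patterns,
                            const fault_set *r_det)
    {
        for (size_t j = 0; j < n_patterns; j++)
            for (size_t w = 0; w < FAULT_WORDS; w++)
                ft[j].bits[w] &= ~r_det->bits[w];
    }

    /* Step 7: a pattern whose fault set has become empty detects
       nothing new and can be dropped from T_k. */
    bool is_empty(const fault_set *s)
    {
        for (size_t w = 0; w < FAULT_WORDS; w++)
            if (s->bits[w])
                return false;
        return true;
    }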
More details about the algorithm can be found in [10], but
in the case of very large circuits both of these algorithms
may lead to very expensive and time-consuming
experiments. It would be desirable to find the global
optimum of the total cost curve by as few sampled
calculations of the total cost for selected values of k as
possible.
6. Tabu search
To reduce the number of total cost calculations needed by Algorithms 1 and 2 to find the minimum value, we use Tabu search [11-12], a general iterative heuristic for solving combinatorial optimization problems.
Algorithm 3: Tabu search
Start with initial solution SO; BestSolution := SO; T := ∅;
While (number of empty iterations < E) Or (there is no return to a previously visited solution) Do
  Generate a sample of neighbor solutions V* ⊆ N(SO);
  Find the best Cost(SO*), SO* ∈ V*;
  M: If the move to solution SO* is not in T Then
    SO_trial := SO*;
    Update the Tabu list;
  Else
    Find the next best Cost(SO*), SO* ∈ V*;
    Go to M;
  End If;
  If Cost(SO_trial) < Cost(BestSolution) Then
    BestSolution := SO_trial;
  Else
    Increment the number of empty iterations;
  End If;
End While;
END.
Tabu search is a form of local neighborhood search. Each solution SO ∈ Ω, where Ω is the search space (the set of all feasible solutions), has an associated set of neighbors N(SO) ⊆ Ω. A solution SO' ∈ N(SO) can be reached from SO by an operation called a move to SO'. At each step, the local neighborhood of the current solution is explored and the best solution is selected as the new current solution. Unlike local search, which stops when no improved new solution is found in the current neighborhood, Tabu search continues the search from the best solution in the neighborhood even if it is worse than the current solution. To prevent cycling, information pertaining to the most recently visited solutions is inserted into a list called the Tabu list. Moves to Tabu solutions are not allowed. The Tabu status of a solution is overridden when a certain criterion (aspiration criterion) is satisfied. One example of an aspiration criterion is when the cost of the selected solution is better than the best seen so far, which is an indication that the search is actually not cycling back, but rather moving to a new solution not encountered before [12].

The procedure of the Tabu search starts from an initial feasible solution SO (the current solution) in the search space Ω.
In our approach we use a fast estimation method proposed in [13] to find an initial solution. This estimation method is based on the number of not yet covered faults R_NOT(i) and can be obtained from the pseudorandom test simulation results (Table 1). A neighborhood N(SO) is defined for each SO. Based on the experimental results it was concluded that the most efficient step size for defining the neighborhood N(SO) was 3% of the efficient clocks; a larger step size, even if it can give considerable speedup, decreases the accuracy of the final result. A sample of neighbor solutions V* ⊆ N(SO) is generated. An extreme case is to generate the entire neighborhood, that is, to take V* = N(SO). Since this is generally impractical (computationally expensive), a small sample of neighbors V* ⊆ N(SO) is generated, called trial solutions (|V*| = n << |N(SO)|). In the case of the ISCAS'85 benchmark circuits the best results were obtained when the size of the sample of neighborhood solutions was 4; increasing the size of V* had no effect on the improvement of the results. From these trial solutions the best solution, say SO* ∈ V*, is chosen for consideration as the next solution. The move to SO* is considered even if SO* is worse than SO, that is, Cost(SO*) > Cost(SO). A move from SO to SO* is made provided certain conditions are satisfied. The best candidate solution SO* ∈ V* may or may not improve the current solution but is still considered. It is this feature that enables escaping from local optima.
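
Below is a compact C sketch of how Algorithm 3 might be specialized to searching over the indices of the efficient clocks; the cost callback, the fixed-size Tabu list and the constants are assumptions made for illustration, and the aspiration criterion is omitted for brevity.

    #include <stddef.h>
    #include <stdint.h>

    #define TABU_LEN  8   /* size of the Tabu list (assumed) */
    #define SAMPLES   4   /* |V*|: neighbors tried per iteration, as in the paper */
    #define MAX_EMPTY 20  /* E: allowed empty iterations (assumed) */

    /* Evaluates C_TOTAL at the given efficient-clock index. */
    typedef uint64_t (*cost_fn)(size_t idx);

    size_t tabu_search(size_t n_entries, size_t start, size_t step, cost_fn cost)
    {
        size_t tabu[TABU_LEN];
        size_t tabu_pos = 0;
        for (size_t i = 0; i < TABU_LEN; i++)
            tabu[i] = (size_t)-1;             /* empty Tabu list */

        size_t cur = start, best = start;
        uint64_t best_cost = cost(best);
        int empty = 0;

        while (empty < MAX_EMPTY) {
            /* Generate a small sample V* of neighbors around cur and
               pick the cheapest move that is not in the Tabu list. */
            size_t trial = cur;
            uint64_t trial_cost = UINT64_MAX;
            for (int s = -SAMPLES / 2; s <= SAMPLES / 2; s++) {
                if (s == 0)
                    continue;
                long cand = (long)cur + (long)step * s;
                if (cand < 0 || (size_t)cand >= n_entries)
                    continue;
                int banned = 0;
                for (size_t i = 0; i < TABU_LEN; i++)
                    if (tabu[i] == (size_t)cand)
                        banned = 1;
                if (banned)
                    continue;
                uint64_t c = cost((size_t)cand);
                if (c < trial_cost) { trial_cost = c; trial = (size_t)cand; }
            }
            if (trial == cur)
                break;                         /* no admissible move left */

            cur = trial;                       /* move even if worse than cur */
            tabu[tabu_pos++ % TABU_LEN] = cur; /* remember visited solution */
            if (trial_cost < best_cost) {
                best_cost = trial_cost;
                best = cur;
                empty = 0;
            } else {
                empty++;                       /* one more empty iteration */
            }
        }
        return best; /* index of the efficient clock giving minimum C_TOTAL */
    }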
One of the parameters of the algorithm is the size of the Tabu list. A Tabu list T is maintained to prevent returning to previously visited solutions. The list contains information that to some extent forbids the search from returning to a previously visited solution. Generally the Tabu list size is small; it can be determined by experimental runs.

References
C. R. Reeves (editor), Modern Heuristic Techniques for Combinatorial Problems, Blackwell Scientific Publications, 1993.
S. W. Golomb, Shift Register Sequences, revised edition, Aegean Park Press, 1982.
F. Glover, E. Taillard and D. de Werra, "A user's guide to tabu search," Annals of Operations Research, vol. 41, 1993.
Y. Zorian, E. J. Marinissen and S. Dey, "Testing embedded-core based system chips," Proceedings of the IEEE International Test Conference (ITC), 1998.
Frequently Asked Questions (2)
Q1. What have the authors contributed in "A hybrid BIST architecture and its optimization for SoC testing"?

This paper presents a hybrid BIST architecture and methods for optimizing it to test systems-on-chip in a cost effective way. In their approach the authors combine pseudorandom test patterns with stored deterministic test patterns to perform core test with minimum time and memory, without losing test quality. The authors propose two algorithms to calculate the cost of the test process. 

As future work the authors would like to investigate possibilities to use the proposed approach for parallel testing (testing multiple cores simultaneously) and to use the same ideas in the case of sequential cores.