
Showing papers on "Design for testing" published in 2004


Journal ArticleDOI
TL;DR: This paper presents a novel test-data volume-compression methodology called the embedded deterministic test (EDT), which reduces manufacturing test cost by providing one to two orders of magnitude reduction in scan test data volume and scan test time.
Abstract: This paper presents a novel test-data volume-compression methodology called the embedded deterministic test (EDT), which reduces manufacturing test cost by providing one to two orders of magnitude reduction in scan test data volume and scan test time. The presented scheme is widely applicable and easy to deploy because it is based on the standard scan/ATPG methodology and adopts a very simple flow. It is nonintrusive as it does not require any modifications to the core logic such as the insertion of test points or logic bounding unknown states. The EDT scheme consists of logic embedded on a chip and a new deterministic test-pattern generation technique. The main contributions of the paper are test-stimuli compression schemes that allow us to deliver test data to the on-chip continuous-flow decompressor. In particular, this can be done by repeating certain patterns at rates adjusted to the requirements of the test cubes. Experimental results show that for industrial circuits with test cubes with very low fill rates, ranging from 3% to 0.2%, these schemes result in compression ratios of 30 to 500 times. A comprehensive analysis of the encoding efficiency of the proposed compression schemes is also provided.

529 citations
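
The compression gain comes from the very low fill rates of the test cubes: only a few bits per cube are specified, so the tester data can be far shorter than the expanded scan data. The toy sketch below is not the EDT ring-generator decompressor; it is a simple repeat-based encoding over an assumed random sparse cube, just to illustrate the fill-rate effect.

```python
# Toy illustration of why low fill rates enable high scan-data compression.
# This is NOT the EDT decompressor; it simply shows a repeat-based encoding
# of a sparse test cube (mostly don't-cares), with assumed cost parameters.

import random

def make_cube(length, fill_rate, seed=0):
    """Random test cube: a few specified bits ('0'/'1'), the rest 'X'."""
    rng = random.Random(seed)
    cube = ['X'] * length
    for i in rng.sample(range(length), int(length * fill_rate)):
        cube[i] = rng.choice('01')
    return cube

def repeat_encode(cube):
    """Encode the cube as (value, run_length) pairs; don't-cares extend
    whatever value is currently being repeated."""
    runs, current, count = [], None, 0
    for bit in cube:
        if bit == 'X' or bit == current or current is None:
            current = bit if bit != 'X' else (current or '0')
            count += 1
        else:
            runs.append((current, count))
            current, count = bit, 1
    runs.append((current, count))
    return runs

cube = make_cube(length=10_000, fill_rate=0.01)       # 1% fill rate (assumed)
runs = repeat_encode(cube)
# Assume each run costs 1 value bit plus a 16-bit repeat count on the tester.
tester_bits = len(runs) * (1 + 16)
print(f"expanded scan bits : {len(cube)}")
print(f"encoded tester bits: {tester_bits}")
print(f"compression ratio  : {len(cube) / tester_bits:.1f}x")
```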


Proceedings ArticleDOI
26 Oct 2004
TL;DR: It is shown that scan chains can be used as a side channel to recover secret keys from a hardware implementation of the Data Encryption Standard (DES) by loading pairs of known plaintexts with one-bit difference in the normal mode and scanning out the internal state in the test mode.
Abstract: Scan based test is a double edged sword. On one hand, it is a powerful test technique. On the other hand, it is an equally powerful attack tool. We show that scan chains can be used as a side channel to recover secret keys from a hardware implementation of the Data Encryption Standard (DES). By loading pairs of known plaintexts with one-bit difference in the normal mode and then scanning out the internal state in the test mode, we first determine the position of all scan elements in the scan chain. Then, based on a systematic analysis of the structure of the nonlinear substitution boxes, and using three additional plaintexts we discover the DES secret key. Finally, some assumptions in the attack are discussed.

322 citations
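
The first step of the attack (locating the round-register bits inside the scan chain) can be illustrated with a mock device: load two plaintexts differing in one bit, scan out both snapshots, and note which scan position flips. The `MockChip` below is a hypothetical stand-in whose round register simply captures the loaded plaintext; it is not the DES hardware analyzed in the paper.

```python
# Sketch of the scan-chain mapping step of a scan-based side-channel attack.
# MockChip is a hypothetical stand-in: its "round register" captures the
# loaded plaintext, and its scan chain interleaves that register with
# unrelated flip-flops at positions unknown to the attacker.

import random

REG_BITS = 64
CHAIN_LEN = 200

class MockChip:
    def __init__(self, seed=1):
        rng = random.Random(seed)
        # Secret placement of the 64 register bits inside the scan chain.
        self._slots = rng.sample(range(CHAIN_LEN), REG_BITS)
        self._noise = [rng.getrandbits(1) for _ in range(CHAIN_LEN)]
        self._reg = 0

    def load_plaintext(self, p):          # "normal mode" load
        self._reg = p & ((1 << REG_BITS) - 1)

    def scan_out(self):                   # "test mode" dump
        chain = list(self._noise)
        for bit, pos in enumerate(self._slots):
            chain[pos] = (self._reg >> bit) & 1
        return chain

def map_scan_positions(chip):
    """Return the scan-chain position of each register bit via one-bit diffs."""
    base = 0x0123456789ABCDEF
    chip.load_plaintext(base)
    ref = chip.scan_out()
    mapping = {}
    for bit in range(REG_BITS):
        chip.load_plaintext(base ^ (1 << bit))
        diff = [i for i, (a, b) in enumerate(zip(ref, chip.scan_out())) if a != b]
        assert len(diff) == 1             # exactly one flop should flip
        mapping[bit] = diff[0]
    return mapping

chip = MockChip()
mapping = map_scan_positions(chip)
print("register bit 0 sits at scan position", mapping[0])
assert sorted(mapping.values()) == sorted(chip._slots)
```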


Proceedings ArticleDOI
26 Oct 2004
TL;DR: ICs have been observed to fail at specified minimum operating voltages during structured at-speed testing while passing all other forms of test; case-study information on ATPG- and DFT-based solutions for test power reduction is presented.
Abstract: It is a well-known phenomenon that test power consumption may exceed that of functional operation. ICs have been observed to fail at specified minimum operating voltages during structured at-speed testing while passing all other forms of test. Methods exist to reduce power without dramatically increasing pattern volume for a given coverage. We present case study information on ATPG- and DFT-based solutions for test power reduction.

285 citations
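
One ATPG-side lever in this line of work is how don't-care bits are filled before shifting. The sketch below compares random fill with adjacent fill (repeat the last specified value), using a simple transition count as a shift-power proxy; the pattern, fill rate, and metric are assumptions, not the paper's case-study data.

```python
# Toy comparison of don't-care fill strategies for scan shift power.
# Assumed pattern and a simple transition-count proxy for shift power;
# not the exact case-study flow from the paper.

import random

def random_fill(cube, rng):
    return [b if b in '01' else rng.choice('01') for b in cube]

def adjacent_fill(cube):
    """Repeat the most recent specified value into the don't-cares."""
    out, last = [], '0'
    for b in cube:
        last = b if b in '01' else last
        out.append(last)
    return out

def transitions(vec):
    """Number of adjacent 0<->1 boundaries shifted through the chain."""
    return sum(a != b for a, b in zip(vec, vec[1:]))

rng = random.Random(0)
cube = ['X'] * 1000
for i in rng.sample(range(1000), 50):          # 5% specified bits (assumed)
    cube[i] = rng.choice('01')

print("random fill transitions  :", transitions(random_fill(cube, rng)))
print("adjacent fill transitions:", transitions(adjacent_fill(cube)))
```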


Journal ArticleDOI
TL;DR: A scan architecture with mutually exclusive scan segment activation which overcomes the shortcomings of previous approaches and achieves both shift and capture-power reduction with no impact on the performance of the design, and with minimal impact on area and testing time.
Abstract: Power dissipation during scan testing is becoming an important concern as design sizes and gate densities increase. While several approaches have been recently proposed for reducing power dissipation during the shift cycle (minimum-transition don't care fill, special scan cells, and scan chain partitioning), limited work has been carried out toward reducing the peak power during test response capture, and the few existing approaches for reducing capture power rely on complex automatic test pattern generation (ATPG) algorithms. This paper proposes a scan architecture with mutually exclusive scan segment activation which overcomes the shortcomings of previous approaches. The proposed architecture achieves both shift and capture-power reduction with no impact on the performance of the design, and with minimal impact on area and testing time (typically 2%-3%). An algorithmic procedure for assigning flip-flops to scan segments enables reuse of test patterns generated by standard ATPG tools. An implementation of the proposed method has been integrated into an automated design flow using commercial synthesis and simulation tools, which was used on a wide range of benchmark designs. Reductions up to 57% in average power, and up to 44% and 34% in peak-power dissipation during shift and capture cycles, respectively, were obtained when using two scan segments. Increasing the number of scan segments to six leads to reductions of 96% in average power and 80% in the maximum number of simultaneous transitions.

196 citations
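
A rough model of the segmented-scan idea: split the chain into segments, clock only one segment per shift cycle, and compare the peak number of simultaneously toggling cells against a single monolithic chain. Chain length, segment counts, and data below are illustrative assumptions, not the paper's benchmark results.

```python
# Rough model of mutually exclusive scan-segment activation: only one
# segment is clocked per shift cycle, so fewer cells can toggle at once.
# Chain length, segment counts, and random data are illustrative assumptions.

import random

def shift_in(cells, data):
    """Shift 'data' bit-serially into 'cells'; return per-cycle toggle counts."""
    toggles = []
    for bit in data:
        new_cells = [bit] + cells[:-1]
        toggles.append(sum(a != b for a, b in zip(cells, new_cells)))
        cells = new_cells
    return toggles

def peak_toggles(chain, stimulus, num_segments):
    seg_len = len(chain) // num_segments
    peak = 0
    for s in range(num_segments):          # segments are shifted one at a time
        seg = chain[s * seg_len:(s + 1) * seg_len]
        data = stimulus[s * seg_len:(s + 1) * seg_len]
        peak = max(peak, max(shift_in(seg, data)))
    return peak

rng = random.Random(3)
chain = [rng.getrandbits(1) for _ in range(600)]
stimulus = [rng.getrandbits(1) for _ in range(600)]
for n in (1, 2, 6):
    print(f"{n} segment(s): peak per-cycle toggles = {peak_toggles(chain, stimulus, n)}")
```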


Journal ArticleDOI
TL;DR: A detailed simulation-based characterization of QCA defects and a study of their effects at the logic level are presented, along with a testing technique that requires only a constant number of test vectors to achieve 100% fault coverage with respect to the fault list of the original design.
Abstract: There has been considerable research on quantum dot cellular automata (QCA) as a new computing scheme in the nanoscale regimes. The basic logic element of this technology is the majority voter. In this paper, a detailed simulation-based characterization of QCA defects and study of their effects at logic level are presented. Testing of these QCA devices at logic level is investigated and compared with conventional CMOS-based designs. Unique testing features of designs based on this technology are presented and interesting properties have been identified. A testing technique is presented; it requires only a constant number of test vectors to achieve 100% fault coverage with respect to the fault list of the original design. A design-for-test scheme is also presented, which results in the generation of a reduced test set at 100% fault coverage.

172 citations
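
The basic QCA logic element is the three-input majority voter, M(a, b, c) = ab + bc + ca. The sketch below enumerates single stuck-at faults on its terminals and checks that a small fixed vector set detects them all; the fault list and test set are illustrative, not the paper's simulation-based defect characterization.

```python
# Majority voter M(a, b, c) = ab + bc + ca, the basic QCA logic element.
# The sketch enumerates single stuck-at faults on its terminals and checks
# that a fixed, constant-size vector set detects all of them. The fault
# list and test set are illustrative, not the paper's defect data.

def majority(a, b, c):
    return (a & b) | (b & c) | (a & c)

def faulty(fault):
    """Return a majority function with one terminal stuck at a value."""
    site, value = fault                   # site in {'a', 'b', 'c', 'out'}
    def f(a, b, c):
        if site == 'a': a = value
        if site == 'b': b = value
        if site == 'c': c = value
        out = majority(a, b, c)
        return value if site == 'out' else out
    return f

faults = [(site, v) for site in 'abc' for v in (0, 1)] + [('out', 0), ('out', 1)]
tests = [(0, 0, 1), (0, 1, 0), (1, 0, 0), (1, 1, 0), (1, 0, 1), (0, 1, 1)]

detected = {
    fault for fault in faults
    if any(majority(*t) != faulty(fault)(*t) for t in tests)
}
print(f"{len(detected)}/{len(faults)} stuck-at faults detected")
assert detected == set(faults)            # 100% coverage with 6 vectors
```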


Proceedings ArticleDOI
26 Oct 2004
TL;DR: This work describes a simple but cost-effective solution called channel masking that masks the X-states and allows test compression methods to be widely deployed on a variety of designs.
Abstract: The effectiveness of on-product test compression methods is degraded by the capture of unknown logic states ("X-states") by the scan elements. This work describes a simple but cost-effective solution called channel masking that masks the X-states and allows test compression methods to be widely deployed on a variety of designs. It also discusses various aspects of the channel masking hardware and the synthesis and validation methodology to support its use in a typical design flow. Results are presented to show its effectiveness on some large industrial designs.

136 citations
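
Why masking matters: a single unknown (X) value captured into a compactor can corrupt an entire observation window. The sketch below models a simple XOR space compactor with a per-chain mask; the chain count, data, and mask interface are assumptions, not the channel-masking hardware described in the paper.

```python
# Toy XOR space compactor with per-chain masking of unknown states (X).
# Chain count, data, and the mask interface are illustrative assumptions.

def compact(slices, mask=None):
    """XOR-compact one scan-out bit per chain into a single output bit per
    shift cycle. 'X' denotes an unknown captured value; a masked chain
    contributes a constant 0 instead of its (possibly unknown) bit."""
    out = []
    for cycle in slices:                    # one bit from each chain
        acc = 0
        for chain, bit in enumerate(cycle):
            if mask and mask[chain]:
                continue                    # masked chain: forced to 0
            if bit == 'X':
                acc = 'X'                   # unknown poisons the XOR
            elif acc != 'X':
                acc ^= bit
        out.append(acc)
    return out

# Four scan chains observed over five shift cycles; chain 2 captures X-states.
slices = [(1, 0, 'X', 1),
          (0, 1, 'X', 0),
          (1, 1, 0,   1),
          (0, 0, 'X', 1),
          (1, 0, 1,   0)]

print("unmasked :", compact(slices))                     # mostly unknown
print("masked   :", compact(slices, mask=[0, 0, 1, 0]))  # chain 2 blocked
```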


Proceedings ArticleDOI
12 Jul 2004
TL;DR: This analysis aims at pointing out the security vulnerability induced by using such a DfT technique, and a solution securing the scan is finally proposed.
Abstract: Testing a secure system is often considered as a severe bottleneck. While testability requires an increase in both observability and controllability, secure chips are designed with the reverse in mind, limiting access to chip content and on-chip controllability functions. As a result, using usual design for testability techniques when designing secure ICs may seriously decrease the level of security provided by the chip. This dilemma is even more severe as secure applications need well-tested hardware to ensure that the programmed operations are correctly executed. In this paper, a security analysis of the scan technique is performed. This analysis aims at pointing out the security vulnerability induced by using such a DfT technique. A solution securing the scan is finally proposed.

133 citations


Proceedings ArticleDOI
15 Nov 2004
TL;DR: A new fault model, the missing gate fault (MGF) model, is proposed to better represent the physical failure modes of quantum technologies and it is shown that MGFs are highly testable, and that all M GFs in an N-gate k-CNOT circuit can be detected with from one to [N/2] test vectors.
Abstract: Logical reversibility occurs in low-power applications and is an essential feature of quantum circuits. Of special interest are reversible circuits constructed from a class of reversible elements called k-CNOT (controllable NOT) gates. We review the characteristics of k-CNOT circuits and observe that traditional fault models like the stuck-at model may not accurately represent their faulty behavior or test requirements. A new fault model, the missing gate fault (MGF) model, is proposed to better represent the physical failure modes of quantum technologies. It is shown that MGFs are highly testable, and that all MGFs in an N-gate k-CNOT circuit can be detected with from one to [N/2] test vectors. A design-for-test (DFT) method to make an arbitrary circuit fully testable for MGFs using a single test vector is described. Finally, we present simulation results to determine (near) optimal test sets and DFT configurations for some benchmark circuits.

107 citations
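
A small sketch of the missing-gate-fault idea: simulate a reversible k-CNOT circuit with one gate removed and see which input vectors expose the difference. The 3-wire example circuit below is made up for illustration and is not one of the paper's benchmarks.

```python
# Missing-gate-fault (MGF) sketch for a reversible k-CNOT circuit.
# A gate is (controls, target): the target bit flips iff all controls are 1.
# The 3-wire example circuit below is made up for illustration only.

from itertools import product

circuit = [((0, 1), 2),    # Toffoli: controls on wires 0 and 1, target wire 2
           ((2,), 0),      # CNOT: control on wire 2, target wire 0
           ((), 1)]        # unconditional NOT on wire 1

def run(gates, state):
    bits = list(state)
    for controls, target in gates:
        if all(bits[c] for c in controls):
            bits[target] ^= 1
    return tuple(bits)

width = 3
for missing in range(len(circuit)):
    faulty = circuit[:missing] + circuit[missing + 1:]   # drop one gate
    detecting = [v for v in product((0, 1), repeat=width)
                 if run(circuit, v) != run(faulty, v)]
    print(f"MGF on gate {missing}: detected by {len(detecting)}/8 vectors,"
          f" e.g. {detecting[0]}")
```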


Proceedings ArticleDOI
Irith Pomeranz1
07 Jun 2004
TL;DR: The proposed procedure is able to produce test sets that detect many of the circuit faults, which are detectable using scan, and practically all the sequentially irredundant faults, by using test vectors with reachable states.
Abstract: Design-for-testability (DFT) for synchronous sequential circuits allows the generation and application of tests that rely on non-functional operation of the circuit. This can result in unnecessary yield loss due to the detection of faults that do not affect normal circuit operation. Considering single stuck-at faults in full-scan circuits, a test vector consists of a primary input vector U and a state S. We say that the test vector consisting of U and S relies on non-functional operation if S is an unreachable state, i.e., a state that cannot be reached from all the circuit states. Our goal is to obtain test sets with states S that are reachable states. Given a test set C, the solution we explore is based on a simulation-based procedure to identify reachable states that can replace unreachable states in C. No modifications are required to the test generation procedure and no sequential test generation is needed. Our results demonstrate that the proposed procedure is able to produce test sets that detect many of the circuit faults which are detectable using scan, and practically all the sequentially irredundant faults, by using test vectors with reachable states. The procedure is applicable to any type of scan-based test set, including test sets for delay faults.

98 citations
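
The underlying idea can be sketched on a toy example: enumerate the states reachable from reset by functional simulation and prefer those as the state part of each scan test vector. The 3-flip-flop next-state function below is hypothetical, and the sketch is not the paper's simulation-based replacement procedure.

```python
# Sketch of the reachable-state idea for scan tests of a sequential circuit.
# A hypothetical 3-flip-flop FSM is given as a next-state function; states
# reachable from reset are enumerated by simulation, and all candidate scan
# states are then classified. Not the paper's replacement procedure.

from itertools import product

def next_state(state, x):
    """Made-up next-state function of a 3-bit state and a 1-bit input."""
    s2, s1, s0 = state
    return (s1 ^ x, s0 & ~x & 1, (s2 | s0) & 1)

def reachable_states(reset=(0, 0, 0)):
    seen, frontier = {reset}, [reset]
    while frontier:
        s = frontier.pop()
        for x in (0, 1):
            n = next_state(s, x)
            if n not in seen:
                seen.add(n)
                frontier.append(n)
    return seen

reachable = reachable_states()
for state in product((0, 1), repeat=3):
    tag = "reachable" if state in reachable else "unreachable (avoid in scan tests)"
    print(state, "->", tag)
```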


Journal ArticleDOI
TL;DR: In this article, the authors provide a discussion of the emerging BOT and BIT schemes for embedded high-speed RF/analog/mixed-signal circuits in SOPs.
Abstract: Increasing levels of integration and high speeds of operation have made the problem of testing complex systems-on-packages (SOPs) very difficult. Testing packages with multigigahertz RF and optical components is even more difficult as external tester costs tend to escalate rapidly beyond 3 GHz. The extent of the problem can be gauged by the fact that test cost is approaching almost 40% of the total manufacturing cost of these packages. To alleviate test costs, various solutions relying on built-off test (BOT) and built-in test (BIT) of embedded high-speed components of SOPs have been developed. These migrate some of the external tester functions to the tester load board (BOT) and to the package and the die encapsulated in the package (BIT) in an "intelligent" manner. This paper provides a discussion of the emerging BOT and BIT schemes for embedded high-speed RF/analog/mixed-signal circuits in SOPs. The pros and cons of each scheme are discussed and preliminary available data on case studies are presented.

95 citations


Proceedings ArticleDOI
25 Apr 2004
TL;DR: This paper is the first paper of its kind that treats the scan enable signal as a test data signal during the scan operation of a test pattern and shows that the extra flexibility of reconfiguring the scan chains every shift cycle reduces the number of different configurations required by RSSA while keeping test coverage the same.
Abstract: This paper extends the reconfigurable shared scan-in architecture (RSSA) to provide additional ability to change values on the scan configuration signals (scan enable signals) during the scan operation on a per-shift basis. We show that the extra flexibility of reconfiguring the scan chains every shift cycle reduces the number of different configurations required by RSSA while keeping test coverage the same. In addition a simpler analysis can be used to construct the scan chains. This is the first paper of its kind that treats the scan enable signal as a test data signal during the scan operation of a test pattern. Results are presented on some ISCAS as well as industrial circuits.

Journal ArticleDOI
TL;DR: The challenges in meeting the quality requirements of gigascale integration are examined, and functional testing as well as statistical models and methods that could alleviate some of those problems are explored.
Abstract: Less predictable path delays and many paths with delays close to the clock period are the main trends affecting the delay testability of deep-submicron designs. We examine the challenges in meeting the quality requirements of gigascale integration, and explore functional testing as well as statistical models and methods that could alleviate some of those problems.

Proceedings ArticleDOI
15 Nov 2004
TL;DR: Part of intelligible testing, a radically new test methodology required to support error-tolerance, is addressed, including three types of error attributes, namely error-rate, error-accumulation (retention), and error-significance.
Abstract: We have developed a new digital system mode of operation, referred to as error-tolerance, the purpose of which is to increase effective yield. Error-tolerance is based on the fact that many digital systems exhibit acceptable behavior even though they contain defects and occasionally output errors. A radically new test methodology, called intelligible testing, is required to support error-tolerance. This paper addresses parts of this methodology. There are several fundamental philosophical differences between intelligible testing and classical testing, such as: intelligible testing is application oriented; it partitions die and chips into multiple categories, not just good and bad parts; and it supplies quantitative information about the effects of defects on errors, i.e. it is error based rather than fault based. We describe three types of error attributes, namely error-rate, error-accumulation (retention), and error-significance. We present test techniques for estimating quantitative values for these qualitative attributes. Testing to support error-tolerance involves new ATPG tools, new fault simulators, and new DFT and BIST techniques.
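
One of the attributes above, error rate, lends itself to a Monte-Carlo estimate: inject a defect model and count how often random functional inputs produce a wrong output. The circuit, defect, and sample size below are hypothetical.

```python
# Monte-Carlo estimate of the error-rate attribute used in error-tolerance:
# how often does a defective circuit produce a wrong output under random
# functional inputs? Circuit, defect, and sample size are hypothetical.

import random

def good_adder(a, b):
    return (a + b) & 0xFF                    # 8-bit adder

def defective_adder(a, b):
    # Hypothetical defect: bit 2 of the sum is stuck at 0.
    return good_adder(a, b) & ~(1 << 2) & 0xFF

rng = random.Random(0)
samples = 100_000
errors = sum(
    good_adder(a, b) != defective_adder(a, b)
    for a, b in ((rng.randrange(256), rng.randrange(256)) for _ in range(samples))
)
print(f"estimated error rate: {errors / samples:.3f}")   # expect about 0.5
```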

Proceedings ArticleDOI
15 Nov 2004
TL;DR: It is shown how by consciously creating scan paths prior to logic synthesis, both the transition delay fault coverage and circuit speed can be improved.
Abstract: This paper introduces a new method to construct functional scan chains at the register-transfer level aimed at increasing the delay fault coverage when using the skewed-load test application strategy. It is shown how by consciously creating scan paths prior to logic synthesis, both the transition delay fault coverage and circuit speed can be improved.

01 Jan 2004
TL;DR: The full hold-scan testing system implemented in the 90nm Intel Pentium 4 processor is described, particularly the design challenges, cost optimizations, and test benefits, and the costs and benefits of having implemented this successful testing system are discussed.
Abstract: Ever-shrinking microprocessor product development times require enhanced High-Volume Manufacturing (HVM) techniques. This paper describes the full hold-scan testing system implemented in the 90nm Intel Pentium 4 processor. Benefits of this scan system include significantly reduced functional test-writing and fault-grade effort, extensive initialization of the design for test and debug, massive visibility into the design for post-silicon debug and fault isolation, and ultimately, a significantly accelerated ramp to production test quality. Any full hold-scan system such as this impacts timing, power, area, and schedule. In a high-performance microprocessor, in particular, this significantly impacts product viability and must be closely managed. In this paper, the Intel full hold-scan system is described, particularly the design challenges, cost optimizations, and test benefits, and we also discuss the costs and benefits of having implemented this successful testing system.

BookDOI
01 Jan 2004
TL;DR: The IEEE 1149.4 test bus is used for mixed-signal test, including the test of A/D converters, as discussed by the authors.
Abstract: Contents: 0. Introduction; 1. Mixed-Signal Test; 2. Analog and Mixed Signal Test Bus: IEEE 1149.4 Test Standard; 3. Test of A/D Converters; 4. Phase-Locked Loop Test Methodologies; 5. Behavioral Testing of Mixed-Signal Circuits; 6. Behavioral Modeling of Multistage ADCs and its Use for Design, Calibration and Test; 7. DFT and BIST Techniques for Embedded Analog Integrated Filters; 8. Oscillation-based Test Strategies.

Proceedings ArticleDOI
26 Apr 2004
TL;DR: An on-chip scheme for delay fault detection and performance characterization is presented that allows for accurate measurement of delays of speed paths for speed binning and facilitates a systematic and efficient test and debug scheme fordelay faults.
Abstract: Efficient test and debug techniques are indispensable for performance characterization of large complex integrated circuits in deep-submicron and nanometer technologies. Performance characterization of such chips requires on-chip hardware and efficient debug schemes in order to reduce time to market and ensure shipping of chips with lower defect levels. In this paper we present an on-chip scheme for delay fault detection and performance characterization. The proposed technique allows for accurate measurement of delays of speed paths for speed binning and facilitates a systematic and efficient test and debug scheme for delay faults. The area overhead associated with the proposed technique is very low.

Proceedings ArticleDOI
05 Jan 2004
TL;DR: This paper investigates the use of random access scan for simultaneous reduction of test power, test data volume and test application time and provides an asymmetric traveling salesman formulation of these problems to minimize random access scans and the test data.
Abstract: Adherence to serial scan is preventing researchers from investigating alternative design-for-test techniques that may offer larger test benefit at the cost of somewhat higher overhead. In this paper, we investigate the use of random access scan for simultaneous reduction of test power, test data volume and test application time. We provide an asymmetric traveling salesman formulation of these problems to minimize random access scans and the test data. Application of our method results in nearly 3× speedup in test application time, 60% reduction in test data volume and over 99% reduction in power consumption for benchmark circuits.
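
In random access scan, only the cells whose values differ between consecutive patterns need to be rewritten, so test time and data track the Hamming distance between successive patterns; ordering the patterns is therefore a traveling-salesman-type problem. The greedy nearest-neighbor sketch below, on assumed random patterns, illustrates the effect; it is not the paper's asymmetric traveling salesman formulation.

```python
# Random access scan: between two consecutive test patterns only the scan
# cells whose values differ need to be rewritten, so ordering patterns to
# minimize successive Hamming distance cuts test time and data. A greedy
# nearest-neighbor ordering is sketched on assumed random patterns; the
# paper instead formulates this as an asymmetric traveling salesman problem.

import random

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def total_writes(order, patterns):
    prev = [0] * len(patterns[0])                 # assume all-zero initial state
    cost = 0
    for idx in order:
        cost += hamming(prev, patterns[idx])
        prev = patterns[idx]
    return cost

def greedy_order(patterns):
    remaining = set(range(len(patterns)))
    prev, order = [0] * len(patterns[0]), []
    while remaining:
        best = min(remaining, key=lambda i: hamming(prev, patterns[i]))
        order.append(best)
        remaining.remove(best)
        prev = patterns[best]
    return order

rng = random.Random(7)
patterns = [[rng.getrandbits(1) for _ in range(128)] for _ in range(60)]
naive = list(range(len(patterns)))
print("writes, original order:", total_writes(naive, patterns))
print("writes, greedy order  :", total_writes(greedy_order(patterns), patterns))
```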

Proceedings ArticleDOI
25 Apr 2004
TL;DR: The proposed scheme makes use of low-speed low-resolution undersampling to eliminate the need for a bulky analog-to-digital converter and the use of a noise reference for comparison makes it possible to compensate for imperfect stimulus generation.
Abstract: This paper addresses the cost, signal integrity and I/O bandwidth problems in radio-frequency testing by proposing a feature extraction based built-in alternate test scheme. The scheme is suitable for built-in self-test of radio-frequency components embedded in a system with available digital signal processing resources, and can also be extended to implement built-in test solutions for individual RF devices that have access to a low-end digital tester. The process applies an alternate test and automatically extracts features from the component response to predict specifications like third order intercept point, 1dB compression point, noise figure, gain and power supply rejection ratio. The proposed scheme makes use of low-speed low-resolution undersampling to eliminate the need for a bulky analog-to-digital converter and the use of a noise reference for comparison makes it possible to compensate for imperfect stimulus generation. The simulation results for a 1 GHz downconversion mixer and a 900 MHz low-noise amplifier present an average of 97.3% prediction accuracy of specifications under test.
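
The mapping from cheap response features to RF specifications is learned by regression on characterization data. The sketch below fits an ordinary least-squares model on synthetic feature/gain pairs; the data, feature values, and the plain linear model are stand-ins for the paper's trained mapping.

```python
# Alternate test in a nutshell: measure cheap features of the response to a
# crafted stimulus, then predict specifications (gain, IIP3, NF, ...) with a
# regression model trained on characterization data. The synthetic data and
# the plain least-squares model below are stand-ins for the paper's mapping.

import random

def lstsq(X, y):
    """Ordinary least squares via normal equations (tiny, dependency-free)."""
    n = len(X[0])
    XtX = [[sum(r[i] * r[j] for r in X) for j in range(n)] for i in range(n)]
    Xty = [sum(r[i] * t for r, t in zip(X, y)) for i in range(n)]
    for col in range(n):                       # Gauss-Jordan elimination
        pivot = XtX[col][col]
        for j in range(col, n):
            XtX[col][j] /= pivot
        Xty[col] /= pivot
        for row in range(n):
            if row != col:
                factor = XtX[row][col]
                for j in range(col, n):
                    XtX[row][j] -= factor * XtX[col][j]
                Xty[row] -= factor * Xty[col]
    return Xty

rng = random.Random(1)
train_X, train_y = [], []
for _ in range(200):
    f1, f2 = rng.gauss(1.0, 0.1), rng.gauss(0.5, 0.05)        # extracted features
    gain_db = 12.0 + 8.0 * f1 - 3.0 * f2 + rng.gauss(0, 0.05)  # "true" spec
    train_X.append([1.0, f1, f2])
    train_y.append(gain_db)

coeffs = lstsq(train_X, train_y)
f1, f2 = 1.05, 0.48                            # features of a device under test
predicted = coeffs[0] + coeffs[1] * f1 + coeffs[2] * f2
print(f"predicted gain for device under test: {predicted:.2f} dB")
```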

Proceedings ArticleDOI
07 Jun 2004
TL;DR: A novel selector architecture that allows arbitrary compression ratios, scales to any number of scan chains and minimizes area overhead is introduced.
Abstract: X-tolerant deterministic BIST (XDBIST) was recently presented as a method to efficiently compress and apply scan patterns generated by automatic test pattern generation (ATPG) in a logic built-in self-test architecture. In this paper we introduce a novel selector architecture that allows arbitrary compression ratios, scales to any number of scan chains and minimizes area overhead. XDBIST test-coverage, full X-tolerance and scan-based diagnosis ability are preserved and are the same as deterministic scan-ATPG.

Proceedings ArticleDOI
23 May 2004
TL;DR: A novel technique to reuse the existing scanpaths in a chip for delay fault testing and silicon debug is described, which facilitates an efficient scheme for detecting and debugging delay faults and has minimal area and power overhead.
Abstract: This paper describes a novel technique to reuse the existing scanpaths in a chip for delay fault testing and silicon debug. Efficient test and debug techniques for VLSI chips are indispensable in Deep Submicron technologies. A systematic debug scheme is also necessary in order to reduce time-to-market. Due to stringent timing requirements of modern chips, test and debug schemes have to be tailored for detection and debug of functional defects as well as delay faults quickly and efficiently. The proposed technique facilitates an efficient scheme for detecting and debugging delay faults and has minimal area and power overhead.

Journal ArticleDOI
TL;DR: In NIMA, test stimuli and expected results for digital cores are first compiled into new formats and subsequently encapsulated into packets and augmented with control and address bits such that they can autonomously be transmitted to their destination through a switching fabric.
Abstract: A generic model for test architectures in core-based system-on-chip (SoC) designs consists of source/sink, wrapper, and test access mechanism (TAM). Current test architectures for digital cores assume a direct connection between the core and the tester. In these architectures, the tester establishes a physical link between itself and the core, such that it can directly control the core's design-for-testability (DFT) features, such as the scan chains or primary inputs. This direct connection undermines the modularity in the generic test architecture by tightly coupling its elements. In this paper, we propose a network-oriented indirect and modular architecture (NIMA) for postfabrication test in an SoC design methodology. In NIMA, test stimuli and expected results for digital cores are first compiled into new formats and subsequently encapsulated into packets. These packets are augmented with control and address bits such that they can autonomously be transmitted to their destination through a switching fabric. Owing to the indirect nature of the connection, embedded autonomous blocks at each core are used to apply the test to the core and compare the test results with expected values. This indirect access to the core decouples test data processing at the core from its communication, providing the basis for flexible and modular test design and programming. Moreover, NIMA facilitates remote access of single or multiple testers to an SoC, and enables the sending of test data to an SoC in-field in order to test the chip in its target system. Finally, NIMA contributes toward the development of new test architectures that benefit from network-centric SoCs. We present a first implementation of NIMA when applied to a number of SoC benchmarks.

Proceedings ArticleDOI
23 May 2004
TL;DR: A new scan tree architecture for test application time reduction is proposed based on a dynamic reconfiguration mode allowing one to reduce the dependence between the test set and the final scan tree architecture.
Abstract: We propose a new scan tree architecture for test application time reduction. This technique is based on a dynamic reconfiguration mode allowing one to reduce the dependence between the test set and the final scan tree architecture. The proposed method includes two different configuration modes: the scan tree mode and the single scan mode. The proposed method does not require any additional input or output. Experimental results show up to 95% savings in test application time and test data volume in comparison with a single scan chain architecture.

Journal ArticleDOI
TL;DR: A new test generation algorithm is introduced that overcomes both of the limitations of the algorithms for testing from a non-deterministic stream X-machine for situations where the implementation is known to be deterministic.

Proceedings ArticleDOI
28 Jan 2004
TL;DR: The clustering process has been modified to allow a better distribution of scan cells in each cluster and hence lead to greater power reductions, and results show that scan design constraints are still satisfied.
Abstract: Scan-based architectures, though widely used in modern designs, are expensive in power consumption. Recently, we proposed a technique based on clustering and reordering of scan cells that allows the design of low-power scan chains. The main feature of this technique is that power consumption during scan testing is minimized while constraints on scan routing are satisfied. In this paper, we propose a new version of this technique. The clustering process has been modified to allow a better distribution of scan cells in each cluster and hence lead to greater power reductions. Results are provided at the end of the paper to highlight this point and show that scan design constraints (length of scan connections, congestion problems) are still satisfied.

Proceedings ArticleDOI
25 Apr 2004
TL;DR: A loop-back architecture is proposed, along with a novel, all-digital design-for-testability (DfT) modification, that enables cost-efficient testing of various defects at the wafer level and is applicable to a wide range of cost-sensitive applications that use modulation of the voltage-controlled oscillator (VCO).
Abstract: Traditionally, radio frequency (RF) paths are bypassed during wafer sort due to the high cost of RF testing. Increasing packaging costs, however, result in a need for more thorough wafer-level testing, including the RF path. In this paper, we propose a loop-back architecture, along with a novel, all-digital design-for-testability (DfT) modification that enables cost-efficient testing of various defects at the wafer level. These methods are applicable to a wide range of cost-sensitive applications that use modulation of the voltage-controlled oscillator (VCO). Experimental results using a Bluetooth platform and considering a variety of defects confirm the viability of the approach.

Proceedings ArticleDOI
15 Nov 2004
TL;DR: This paper investigates the applicability of multi-frequency test access mechanism (TAM) design for reducing the system-on-a-chip (SOC) test application time by exploring a larger solution space, which, as shown by experimental data, can lead to improved test application time.
Abstract: This paper investigates the applicability of multi-frequency test access mechanism (TAM) design for reducing the system-on-a-chip (SOC) test application time. Based on the bandwidth matching concept, the proposed algorithms explore a larger solution space, which, as shown by experimental data, can lead to improved test application time.

01 Jan 2004
TL;DR: The purpose of this work is to present a Hardware Fault Simulation (HFS) methodology and tool, using partial reconfiguration, suitable for efficient fault modeling and simulation in FPGAs, particularly useful for BIST effectiveness evaluation and for applications in which multiple fault injection is mandatory.
Abstract: The purpose of this work is to present a Hardware Fault Simulation (HFS) methodology and tool (f2s), using partial reconfiguration, suitable for efficient fault modeling and simulation in FPGAs. The methodology is particularly useful for BIST effectiveness evaluation and for applications in which multiple fault injection is mandatory, such as in safety-critical applications. A novel CSA (Combination Stuck-At) fault model is proposed, which leads to better test quality estimates than the classic LSA (Line Stuck-At) model at the LUT terminals. Fault injection is performed using only local reconfiguration with small binary files. The efficiency of software- and FPGA-based HFS, with and without partial reconfiguration, is compared using ISCAS'89 sequential benchmarks, showing the usefulness of the proposed methodology.
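
The difference between the two fault models can be pictured directly on a LUT truth table: a line stuck-at (LSA) fault collapses the table along one input, whereas the proposed combination stuck-at (CSA) fault forces a single table entry. The 4-input LUT and the fault injections below are illustrative; the actual tool injects faults by partial reconfiguration of the FPGA bitstream.

```python
# LUT-level view of the two fault models discussed above, for a 4-input LUT
# represented as a 16-bit truth table (INIT value). The example LUT and the
# injected faults are illustrative only.

LUT_INIT = 0b0110_1001_1001_0110        # example: 4-input XOR

def lut_eval(init, inputs):
    index = sum(bit << i for i, bit in enumerate(inputs))
    return (init >> index) & 1

def lsa_fault(init, input_idx, stuck):
    """Line stuck-at: the LUT behaves as if one input were tied to 'stuck'."""
    new = 0
    for index in range(16):
        inputs = [(index >> i) & 1 for i in range(4)]
        inputs[input_idx] = stuck
        new |= lut_eval(init, inputs) << index
    return new

def csa_fault(init, combination, stuck):
    """Combination stuck-at: one truth-table entry is forced to 'stuck'."""
    index = sum(bit << i for i, bit in enumerate(combination))
    return (init & ~(1 << index)) | (stuck << index)

lsa = lsa_fault(LUT_INIT, input_idx=2, stuck=0)
csa = csa_fault(LUT_INIT, combination=(1, 0, 1, 1), stuck=0)
for name, faulty in (("LSA", lsa), ("CSA", csa)):
    diff = [i for i in range(16) if (faulty >> i) & 1 != (LUT_INIT >> i) & 1]
    print(f"{name}: truth-table entries changed at indices {diff}")
```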

Proceedings ArticleDOI
TL;DR: The micro scanning mirror with lateral out-of-plane comb drives is on its way to volume fabrication; this article reviews the development and highlights the most important activities and decisions that are representative of many MEMS devices expected to go the same way.
Abstract: The micro scanning mirror with lateral out-of-plane comb drives is on its way to volume fabrication. This article reviews the development and highlights the most important activities and decisions that are representative of many MEMS devices that are expected to go the same way. Careful analysis of the product requirements, design for reliability, design for testability, design for packaging, a mature process, and automated testing, preferably at wafer level, have been identified as keys to volume fabrication of MEMS.