Using laser defect avoidance to build large-area FPGAs
TL;DR: Experiments on test FPGAs show that laser defect avoidance produces signal delays half those of active switches, extending the complexity limits of field-programmable gate arrays.
Abstract: Wafer-scale defect-avoidance techniques extend the complexity limits of field-programmable gate arrays by routing around flawed blocks to build working systems. Experiments on test FPGAs show that laser defect avoidance produces signal delays half those of active switches.
Citations
01 Nov 1999
TL;DR: A new approach for tolerating defects in the FPGA's configurable logic blocks (CLBs) is proposed, and two possibilities for distributing the spare resources are introduced and compared.
Abstract: The homogeneous structure of field-programmable gate arrays (FPGAs) suggests that defect tolerance can be achieved by shifting the configuration data inside the FPGA. This paper proposes a new approach for tolerating defects in the FPGA's configurable logic blocks (CLBs). Defects affecting the FPGA's interconnection resources can also be tolerated with high probability. This method suits manufacturers, since chip yield is considerably improved, especially for large array sizes. On the other hand, defect-free chips can be used either as maximum-size ordinary array chips or as fault-tolerant chips. In the fault-tolerant chips, users can achieve fault tolerance directly by simply shifting the design data automatically, without changing the physical design of the running application, without loading other configuration data from outside the FPGA, and without the manufacturer's intervention. Tolerating defective resources requires spare CLBs. In this paper, two possibilities for distributing the spare resources (king-shifting and horse-allocation) are introduced and compared.
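As a minimal sketch of the configuration-shifting idea described in this abstract (not the paper's actual king-shifting or horse-allocation schemes, which allocate spares in more elaborate patterns), one can place a row of logical CLB configurations onto physical columns while skipping columns known to be defective; the function name and data layout here are illustrative assumptions:

```python
# Minimal sketch of defect avoidance by shifting configuration data.
# Assumes one or more spare columns per row; the paper's king-shifting
# and horse-allocation strategies are more elaborate than this.

def place_row(row_cfg, defective_cols, n_phys):
    """Map logical CLB configurations onto physical columns, skipping defects.

    row_cfg: list of per-CLB configuration words (logical order)
    defective_cols: set of physical column indices known to be bad
    n_phys: number of physical columns (>= len(row_cfg) + spares)
    Returns a list of length n_phys; None marks unused or defective cells.
    """
    placed = [None] * n_phys
    col = 0
    for cfg in row_cfg:
        while col < n_phys and col in defective_cols:
            col += 1          # shift past the defective CLB
        if col == n_phys:
            raise ValueError("not enough spare CLBs in this row")
        placed[col] = cfg
        col += 1
    return placed

# Example: 3 logical CLBs, 4 physical columns, column 1 defective.
print(place_row(["A", "B", "C"], {1}, 4))  # ['A', None, 'B', 'C']
```

The shifted placement leaves the defective column unused, which is exactly how yield is recovered without changing the logical design.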
71 citations
Cites methods from "Using laser defect avoidance to bui..."
...The utilization of a laser technique for defect avoidance is another direction for defect tolerance approaches [11]....
01 Jun 2003
TL;DR: Provides a guided tour of approaches to faults in SRAM-based field-programmable gate arrays, covering both techniques already applied to FPGAs and others recently introduced that can be applied to today's FPGAs.
Abstract: Topics related to faults in SRAM-based field-programmable gate arrays (FPGAs) have been intensively studied in recent research. These topics include FPGA fault detection, FPGA fault diagnosis, FPGA defect tolerance, and FPGA fault tolerance. This paper provides a guided tour of the approaches related to these topics, including techniques that have already been applied to the FPGA and others that have been recently introduced and can be applied to today's FPGAs.
59 citations
01 Jan 1984
TL;DR: The authors discuss unique technological approaches, future potential, and limitations that may arise; assess the problems that limited early implementations in the 1960s and whether they can be overcome; and consider whether wafer-scale integration will provide new opportunities such as systolic arrays, connection-oriented architectures, and other related disciplines.
Abstract: With lowering defect densities in LSI fabrication technologies, interest in wafer-scale integration has revived. The process holds the promise of higher performance, lower cost, and increased packing density, particularly at the system level. However, it is necessary to consider whether recent advances are sufficient to outweigh problems in testing, repairability, system configuration, and flexibility. Also to be assessed are the problems that limited early implementations in the 60s and whether they can be overcome, and whether wafer-scale integration will provide new opportunities such as systolic arrays, connection-oriented architectures, and other related disciplines. Panelists will discuss unique technological approaches and the future potential along with limitations that may arise.
29 citations
TL;DR: A new approach for tolerating defects in the FPGA's configurable logic blocks (CLBs) is proposed, and two possibilities for distributing the spare resources (king-shifting and horse-allocation) are introduced and compared.
Abstract: The homogeneous structure of field-programmable gate arrays (FPGAs) suggests that defect tolerance can be achieved by shifting the configuration data inside the FPGA. This paper proposes a new approach for tolerating defects in the FPGA's configurable logic blocks (CLBs). Defects affecting the FPGA's interconnection resources can also be tolerated with high probability. This method suits manufacturers, since chip yield is considerably improved, especially for large array sizes. On the other hand, defect-free chips can be used either as maximum-size ordinary array chips or as fault-tolerant chips. In the fault-tolerant chips, users can achieve fault tolerance directly by simply shifting the design data automatically, without changing the physical design of the running application, without loading other configuration data from outside the FPGA, and without the manufacturer's intervention. Tolerating defective resources requires spare CLBs. In this paper, two possibilities for distributing the spare resources (king-shifting and horse-allocation) are introduced and compared. Key words: defect tolerance, fault tolerance, field programmable gate array (FPGA), shifting configuration data, yield improvement
10 citations
Cites methods from "Using laser defect avoidance to bui..."
...The utilization of a laser technique for defect avoidance is another direction for defect tolerance approaches [11]....
References
TL;DR: FPGAs enable engineers to mitigate the effects entailed by the well-known tradeoff in computing between cost and performance, combining the advantages of both general-purpose processors and specialized circuits.
Abstract: Field-programmable gate arrays (FPGAs) are integrated circuits which constitute the “new kid in town” where digital hardware technology is concerned. An FPGA is an array of logic blocks (cells) placed in an infrastructure of interconnections, which can be programmed at three distinct levels (see Figure 1): (1) the function of the logic cells, (2) the interconnections between cells, and (3) the inputs and outputs. All three levels are configured via a string of bits that is loaded from an external source, either once or several times. In the latter case the FPGA is considered reconfigurable. FPGAs are highly versatile devices that offer the designer a wide range of design choices and options. However, this potential power necessitates a suite of tools in order to design a system. Essentially, these tools generate the configuration bit string, given such inputs as a logic diagram or a high-level functional description. Our focus here is on the hardware aspect of digital FPGAs; we shall not treat such issues as programming tools and analog FPGAs [4, 5]. FPGAs enable engineers to mitigate the effects entailed by the well-known tradeoff in computing between cost and performance. When one sets about implementing a certain computational task, obtaining the highest performance (speed) is inarguably achieved by constructing a specialized machine, that is, hardware. This possibility exists, for example, in the form of application-specific integrated circuits (ASICs); however, the price per application as well as the turnaround time (from design to actual operation) are both quite prohibitive. On the whole, the computing industry has opted for general-purpose computing, which trades maximum speed for much lower design and implementation costs. A general-purpose processor can be easily and quickly instructed to change its task, say, from word processing to number crunching.
This ability is made possible because such a processor is programmable; nonetheless, programming (and reprogramming) does not change the processor's hardware. An FPGA is programmable at the hardware level, thus combining the advantages of both general-purpose processors and specialized circuits.
248 citations
09 Feb 1989
TL;DR: "Fault Tolerance Through Reconfiguration in VLSI and WSI Arrays" is included in the Computer Systems series, edited by Herb Schwetman, and presents the authors' own results in the reconfiguration of processing arrays.
Abstract: Fault tolerance is one of the principal mechanisms for achieving high reliability and high availability in digital systems. It is the survival attribute of digital systems. This book brings together and discusses the most significant results scattered across the vast field of research in fault tolerance. It focuses in particular on reconfiguration techniques and presents the authors' own results in the reconfiguration of processing arrays. By means of dedicated arrays, they note, it is possible to build systems that are orders of magnitude more powerful than programmed computers. Their treatment of networks and arrays is extensive and has wide applicability. Contents: Introduction. Typical Processing Arrays. Failure Mechanisms and Fault Models. Basic Problems of Fault Tolerance Through Array Configuration. Technologies Supporting Reconfiguration. Testing. Reconfiguration: An Introduction. The Diogenes Approach. Reconfiguration for Linear Arrays. Graph-Theoretical Approaches to Reconfiguration. Local Reconfiguration. Global Reconfiguration Techniques: Row/Column Elimination. Global Mapping: Index Mapping Reconfiguration Techniques. Reconfiguration Based on Request-Acknowledge Local Protocols. Reconfiguration of Multiple-Pipeline Structures. Some Extensions Toward Time Redundancy. Appendix: Reliability Prediction of Arrays. R. Negrini, M. G. Sami, and R. Stefanelli are researchers at the Politecnico di Milano. "Fault Tolerance Through Reconfiguration in VLSI and WSI Arrays" is included in the Computer Systems series, edited by Herb Schwetman.
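The row/column elimination technique named in the book's contents can be illustrated with a small sketch. This is a greedy heuristic of our own devising for illustration, not the book's algorithms: given a map of faulty cells, repeatedly delete the row or column covering the most remaining faults until a fault-free subarray remains:

```python
from collections import Counter

def row_column_elimination(defects):
    """Return (kept_rows, kept_cols) indexing a fault-free subarray.

    defects: 2D list of booleans, True marking a faulty cell.
    Greedy heuristic: repeatedly delete the surviving row or column
    that covers the most remaining faults until none are left.
    """
    rows = set(range(len(defects)))
    cols = set(range(len(defects[0])))

    def remaining_faults():
        return [(r, c) for r in rows for c in cols if defects[r][c]]

    faults = remaining_faults()
    while faults:
        worst_row, row_hits = Counter(r for r, _ in faults).most_common(1)[0]
        worst_col, col_hits = Counter(c for _, c in faults).most_common(1)[0]
        if row_hits >= col_hits:
            rows.discard(worst_row)   # deleting this row clears the most faults
        else:
            cols.discard(worst_col)
        faults = remaining_faults()
    return sorted(rows), sorted(cols)

# A single fault at (1, 1) costs one row; the remaining 2x3 subarray is clean.
print(row_column_elimination([[False, False, False],
                              [False, True,  False],
                              [False, False, False]]))
```

The tradeoff this exposes is the book's central one: coarse elimination (whole rows or columns) is simple to wire but discards many good cells per fault.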
152 citations
TL;DR: The inability to contain faults within single cells and the need for fast reconfiguration are identified as the key obstacles to obtaining a significant increase in yield.
Abstract: The fine granularity and reconfigurable nature of field-programmable gate arrays (FPGAs) suggest that defect-tolerant methods can be readily applied to these devices in order to increase their maximum economic sizes through increased yield. This paper identifies the inability to contain faults within single cells and the need for fast reconfiguration as the key obstacles to obtaining a significant increase in yield. Monte Carlo defect modeling of the photolithographic layers of VLSI FPGAs is used as a foundation for the yield modeling of various defect-tolerant architectures. Results suggest that a medium-grain architecture is the best solution, offering a substantial increase in size without significant side effects. This architecture is shown to produce greater gate densities than the alternative approach to realizing ultra-large-scale FPGAs: multichip modules.
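The Monte Carlo yield modeling mentioned above can be illustrated with a deliberately crude sketch. It replaces the paper's layer-by-layer photolithographic defect model with a single per-block defect probability (an assumption made for brevity) and counts a chip as good whenever the defective blocks do not outnumber the spares:

```python
import random

def mc_yield(n_blocks, n_spares, p_defect, trials=10000, seed=1):
    """Monte Carlo yield estimate for a block-redundant FPGA.

    Each block is defective independently with probability p_defect
    (a crude stand-in for per-layer photolithographic defect models).
    A chip is counted good when the number of defective blocks does
    not exceed the number of spares.
    """
    rng = random.Random(seed)
    good = 0
    for _ in range(trials):
        defective = sum(rng.random() < p_defect for _ in range(n_blocks))
        if defective <= n_spares:
            good += 1
    return good / trials

# Without spares, yield falls toward (1 - p)**n; a few spares recover most of it.
print(mc_yield(100, 0, 0.01), mc_yield(100, 4, 0.01))
```

Even this toy model reproduces the qualitative result: modest redundancy sharply raises yield as array size grows, which is what makes defect tolerance economically interesting.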
89 citations
TL;DR: The PI faults are modeled, and the design and routability of the FPGA channel architecture are addressed to achieve 100% routing with minimum performance penalty in the presence of PI faults, showing the feasibility of routing with minimum performance penalty when a large number of faults are present in the channel.
Abstract: The field programmable gate array (FPGA) routing resources are fixed, and their usage is constrained by the location of programmable interconnects (PIs) such as antifuses. The routing or interconnect delays are determined by the length of the segments assigned to nets of various lengths and by the number of PIs programmed for the routing of each net. Due to the use of PIs, certain unconventional faults may appear. In this paper we model the PI faults and address the design and routability of the FPGA channel architecture to achieve 100% routing with minimum performance penalty in the presence of PI faults. A channel routing algorithm has also been developed which routes nets in the presence of PI faults. Experiments were performed by randomly injecting faults of different types into the routing channel and then using the routing algorithm to determine the routability of the synthesized architecture. Results on a set of industrial designs and MCNC benchmark examples show the feasibility of achieving routability with minimum performance penalty when a large number of faults are present in the channel.
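A toy version of routing around faulty PIs might look like the following greedy channel router. This is our own illustrative sketch, not the paper's algorithm: each net needs a track whose PIs at the net's endpoints are fault-free and whose span does not overlap nets already assigned to that track:

```python
def route_channel(nets, tracks, faulty_pis):
    """Greedily assign nets to channel tracks, avoiding faulty PIs.

    nets: list of (left, right) column spans, one per net
    tracks: number of horizontal tracks in the channel
    faulty_pis: set of (track, column) PI locations known to be bad
    Returns {net_index: track}, or None when 100% routing fails.
    """
    assign = {}
    used = {t: [] for t in range(tracks)}          # spans occupied per track
    for i, (left, right) in enumerate(nets):
        for t in range(tracks):
            # The net connects through PIs at both of its endpoints.
            pis_ok = ((t, left) not in faulty_pis
                      and (t, right) not in faulty_pis)
            no_overlap = all(right < a or left > b for a, b in used[t])
            if pis_ok and no_overlap:
                assign[i] = t
                used[t].append((left, right))
                break
        else:
            return None
    return assign

# Example: the PI at (track 0, column 1) is faulty, so net (1, 4) detours
# to track 1 while the other two nets share track 0.
print(route_channel([(0, 2), (3, 5), (1, 4)], 2, {(0, 1)}))
```

Detouring nets to fault-free tracks is what the abstract means by trading a small performance penalty (longer or less convenient track assignments) for 100% routability.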
39 citations
01 Jan 1989
TL;DR: The Restructurable VLSI project at MIT Lincoln Laboratory has developed a design methodology, new technology, and CAD tools for WSI; six wafer-scale systems have been fabricated and three of much larger size are being designed.
Abstract: The Restructurable VLSI project at MIT Lincoln Laboratory has developed a design methodology, new technology, and CAD tools for WSI. Six wafer scale systems have been fabricated and three of much larger size are being designed. Figure 1 shows one of these packaged WS circuits. The accomplishments and current research status of this project, which was conceived in 1979 [1], are described in this chapter.
33 citations