Author

Douglas W. Stout

Other affiliations: GlobalFoundries
Bio: Douglas W. Stout is an academic researcher from IBM. The author has contributed to research in topics including Integrated circuit and Dropout voltage, has an h-index of 17, and has co-authored 50 publications receiving 1,513 citations. Previous affiliations of Douglas W. Stout include GlobalFoundries.

Papers
Proceedings ArticleDOI
10 Nov 2002
TL;DR: In this article, the authors discuss Voltage Islands, a system architecture and chip implementation methodology that can be used to dramatically reduce active and static power consumption for System-on-Chip (SoC) designs.
Abstract: This paper discusses Voltage Islands, a system architecture and chip implementation methodology that can be used to dramatically reduce active and static power consumption for System-on-Chip (SoC) designs. As technology scales for increased circuit density and performance, the need to reduce power consumption increases in significance as designers strive to utilize the advancing silicon capabilities. The consumer product market further drives the need to minimize chip power consumption. Effective use of Voltage Islands for meeting SoC power and performance requirements, while meeting Time to Market (TAT) demands, requires novel approaches throughout the design flow as well as special circuit components and chip powering structures. This paper outlines methods being used today to design Voltage Islands in a rapid-TAT product development environment, and discusses the need for industry EDA advances to create an industry-wide Voltage Island design capability.
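The power argument behind voltage islands can be made concrete with a back-of-the-envelope model. The sketch below is not from the paper: the block names, supply voltages, and the simple switching-power and leakage formulas are illustrative assumptions showing why moving non-critical blocks onto a lower-voltage island reduces both active and static power.

```python
# Illustrative sketch (not from the paper): dynamic power scales roughly with
# C * V^2 * f and leakage falls steeply with supply voltage, so moving
# non-critical blocks onto a lower-voltage island saves both components.

def dynamic_power(cap_farads, vdd, freq_hz, activity=0.1):
    """Classic switching-power estimate: P = a * C * Vdd^2 * f."""
    return activity * cap_farads * vdd ** 2 * freq_hz

def leakage_power(i_leak_nominal_watts, vdd, vdd_nominal=1.2, exponent=3.0):
    """Crude leakage model: static power drops super-linearly as Vdd is lowered."""
    return i_leak_nominal_watts * (vdd / vdd_nominal) ** exponent

# Hypothetical SoC blocks: (switched capacitance in F, clock in Hz, nominal leakage in W)
blocks = {
    "cpu_core":    (2.0e-9, 500e6, 0.030),   # performance-critical: stays at 1.2 V
    "audio_dsp":   (0.8e-9, 100e6, 0.010),   # tolerant of a slower, 0.9 V island
    "peripherals": (0.5e-9,  50e6, 0.008),   # also assigned to the 0.9 V island
}
island_vdd = {"cpu_core": 1.2, "audio_dsp": 0.9, "peripherals": 0.9}

for name, (cap, freq, leak) in blocks.items():
    v = island_vdd[name]
    p_flat = dynamic_power(cap, 1.2, freq) + leakage_power(leak, 1.2)
    p_isle = dynamic_power(cap, v, freq) + leakage_power(leak, v)
    print(f"{name:12s} flat 1.2 V: {p_flat*1e3:6.1f} mW -> {v} V island: {p_isle*1e3:6.1f} mW")
```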

331 citations

Patent
Alice Irene Biber1, Douglas W. Stout1
07 Jun 1990
TL;DR: In this paper, a self-adjusting impedance matching driver for a digital circuit is presented, where the driver has both a pull-up gate to VDD and a pull-down gate to ground.
Abstract: A self-adjusting impedance matching driver for a digital circuit. The driver has both a pull-up gate to VDD and a pull-down gate to ground. An array of gates is provided in parallel with each of the pull-up gate and the pull-down gate, with any one or more of such gates being selectively enabled in response to circuit means that monitors the impedance match between the output of the driver and the network it drives. By enabling selectively such gates, any impedance mismatch can be minimized. The selective enablement may be done only at power up, and thereafter only if the driven network is changed substantially.
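The patent's calibration idea can be pictured in software: parallel driver legs are enabled one at a time, and a leg is kept only if it brings the driver's effective output impedance closer to that of the driven network. The leg resistances, target impedance, and greedy power-up strategy below are illustrative assumptions, not the patented circuit.

```python
# Illustrative sketch of the self-adjusting idea: each enabled leg adds a
# parallel conductance; legs are switched in until the driver's effective
# output impedance is as close as possible to the network it drives.

def effective_impedance(enabled_legs_ohms):
    """Parallel combination of the on-resistances of the enabled legs."""
    if not enabled_legs_ohms:
        return float("inf")
    return 1.0 / sum(1.0 / r for r in enabled_legs_ohms)

def calibrate(leg_resistances, target_ohms):
    """Greedy power-up calibration: keep a leg only if it reduces the mismatch."""
    enabled = []
    for r in sorted(leg_resistances, reverse=True):      # weakest legs first
        trial = enabled + [r]
        if abs(effective_impedance(trial) - target_ohms) < abs(effective_impedance(enabled) - target_ohms):
            enabled = trial
    return enabled, effective_impedance(enabled)

# Hypothetical pull-up legs (ohms) matched against a 50-ohm network.
legs, z_out = calibrate([400, 200, 100, 100], target_ohms=50.0)
print(f"enabled legs: {legs}, driver impedance = {z_out:.1f} ohms")
```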

276 citations

Patent
19 Oct 2007
TL;DR: In this article, a gate array cell is presented that is adapted for standard cell design methodology or programmable gate arrays and incorporates a dual gate FET device to offer a range of performance options within the same unit cell area.
Abstract: A gate array cell adapted for standard cell design methodology or programmable gate array that incorporates a dual gate FET device to offer a range of performance options within the same unit cell area. The conductivity and drive strength of the dual gate device may be selectively tuned through independent processing of manufacturing parameters to provide an asymmetric circuit response for the device or a symmetric response as dictated by the circuit application.

158 citations

Patent
15 Jun 2004
TL;DR: In this article, a method and structure for designing an integrated circuit chip supplies a chip design and partitions elements of the chip design according to similarities in voltage requirements and timing of power states of the elements to create voltage islands.
Abstract: A method and structure for designing an integrated circuit chip supplies a chip design and partitions elements of the chip design according to similarities in voltage requirements and timing of power states of the elements to create voltage islands. The invention outputs a voltage island specification list comprising power and timing information for each voltage island and automatically, without user intervention, synthesizes power supply networks for the voltage islands.
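The partitioning step can be sketched as a simple grouping: elements that require the same supply voltage and share a power-state schedule are collected into one island, and the resulting island list carries the power and timing information a later step would use to synthesize supply networks. The element names and attributes below are hypothetical, not taken from the patent.

```python
# Illustrative sketch (not the patented flow): group design elements by
# (required supply voltage, power-state schedule) so each group can be
# treated as one voltage island with a single supply network.

from collections import defaultdict

# Hypothetical design elements: name -> (required Vdd, power-state timing tag)
elements = {
    "cpu":       (1.2, "always_on"),
    "l2_cache":  (1.2, "always_on"),
    "video_dec": (0.9, "gated_when_idle"),
    "audio_dsp": (0.9, "gated_when_idle"),
    "usb_phy":   (1.0, "wake_on_event"),
}

islands = defaultdict(list)
for name, (vdd, timing) in elements.items():
    islands[(vdd, timing)].append(name)      # same Vdd + same schedule -> same island

# A stand-in for the "voltage island specification list": one entry per island
# with the power and timing information the supply-network synthesis would use.
for idx, ((vdd, timing), members) in enumerate(sorted(islands.items())):
    print(f"island {idx}: Vdd={vdd} V, power states={timing}, members={members}")
```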

70 citations

Patent
31 Dec 1996
TL;DR: In this article, a gate array book layout for an integrated circuit chip is disclosed in which a local interconnect layer provides N-well and P-well contact straps extending substantially along the entire width of said gate array book across the top and bottom edges thereof.
Abstract: A gate array book layout for an integrated circuit chip is disclosed in which a local interconnect layer provides N-well and P-well contact straps extending substantially along the entire width of said gate array book across the top and bottom edges thereof. This enables efficient electrical connections between the various connection points located within the book. In particular, primarily vertical strips of local interconnect are used to connect contact points which exist at or near the same layer as the local interconnect layer. By using local interconnect in this manner, metal-1 layer usage is significantly reduced thereby allowing for a more efficient integrated circuit chip design.

69 citations


Cited by
Book
01 Jul 1990
TL;DR: In this article, a wheel decorating ornament comprising an annular, planar sheet of material decorated on opposite sides, axially disposed between the groups of spokes and radially disposed between the rim and the hub, is presented.
Abstract: In combination with a wheel for a bicycle and the like having an annular rim, a hub rotatable about its axis, and axially offset groups of circumferentially spaced spokes which centrally support the hub on the rim; a wheel decorating ornament comprising an annular, planar sheet of material decorated on opposite sides, axially disposed between the groups of spokes and radially disposed between the rim and the hub.

1,093 citations

Journal ArticleDOI
TL;DR: Cache memories are a general solution to improving the performance of a memory system: by placing smaller, faster memories in front of larger, slower, and cheaper memories, the performance of the memory system may approach that of a perfect memory system at a reasonable cost.
Abstract: A computer’s memory system is the repository for all the information the computer’s central processing unit (CPU, or processor) uses and produces. A perfect memory system is one that can supply immediately any datum that the CPU requests. This ideal memory is not practically implementable, however, as the three factors of memory capacity, speed, and cost are directly in opposition. By placing smaller, faster memories in front of larger, slower, and cheaper memories, the performance of the memory system may approach that of a perfect memory system at a reasonable cost. The memory hierarchies of modern general-purpose computers generally contain registers at the top, followed by one or more levels of cache memory, main memory (all three are semiconductor memory), and virtual memory (on a magnetic or optical disk). Figure 1 shows a memory hierarchy typical of today’s (1995) commodity systems.

Performance of a memory system is measured in terms of latency and bandwidth. The latency of a memory request is how long it takes the memory system to produce the result of the request. The bandwidth of a memory system is the rate at which the memory system can accept requests and produce results. The memory hierarchy improves average latency by quickly returning results that are found in the higher levels of the hierarchy. The memory hierarchy usually reduces bandwidth requirements by intercepting a fraction of the memory requests at higher levels of the hierarchy. Some machines, such as high-performance vector machines, may have fewer levels in the hierarchy, trading increased cost for better predictability and performance. Some of these machines contain no caches at all, relying on large arrays of main memory banks to supply very high bandwidth. Pipelined accesses of operands reduce the performance impact of long latencies in these machines.

Cache memories are a general solution to improving the performance of a memory system. Although caches are smaller than typical main memory sizes, they ideally contain the most frequently accessed portions of main memory. By keeping the most heavily used data near the CPU, caches can service a large fraction of the requests without needing to access main memory (the fraction serviced is called the hit rate). Caches require locality of reference to work well transparently: they assume that accessed memory words will be accessed again quickly (temporal locality) and that memory words adjacent to an accessed word will be accessed soon after the access in question (spatial locality). When the CPU issues a request for a datum not in the cache (a cache miss), the cache loads that datum and some number of adjacent data (a cache block) into itself from main memory. To reduce cache misses, some caches are associative: a cache may place a given block in one of several places, collectively called a set. This set is content-addressable; a block may be accessed based on an address tag, one of which is coupled with each block. When a new block is brought into a set and the set is full, the cache’s replacement policy dictates which of the old blocks should be removed from the cache to make room for the new. Most caches use an approximation of least recently used (LRU) replacement.
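The associativity and replacement behavior described above can be made concrete with a tiny simulator: a set-associative cache that maps an address to a set, searches the set by tag, and evicts the least recently used block on a miss. This is a generic textbook model written for illustration, not code from the article; the geometry (64 sets, 4 ways, 64-byte blocks) is an arbitrary assumption.

```python
# Minimal set-associative cache model: an address maps to a set, the set is
# searched by tag (content-addressable), and a full set evicts its
# least-recently-used block on a miss.

from collections import OrderedDict

class SetAssociativeCache:
    def __init__(self, num_sets=64, ways=4, block_bytes=64):
        self.num_sets, self.ways, self.block_bytes = num_sets, ways, block_bytes
        # Each set is an OrderedDict of tag -> block; order tracks recency of use.
        self.sets = [OrderedDict() for _ in range(num_sets)]
        self.hits = self.misses = 0

    def access(self, address):
        block = address // self.block_bytes
        index, tag = block % self.num_sets, block // self.num_sets
        s = self.sets[index]
        if tag in s:                      # hit: refresh recency
            s.move_to_end(tag)
            self.hits += 1
        else:                             # miss: fetch block, evict LRU if set is full
            self.misses += 1
            if len(s) >= self.ways:
                s.popitem(last=False)     # least recently used block sits at the front
            s[tag] = None                 # placeholder for the fetched block data

# Temporal and spatial locality keep the hit rate high even for a small cache.
cache = SetAssociativeCache()
for _ in range(10):
    for addr in range(0, 4096, 8):        # repeatedly walk a 4 KiB array
        cache.access(addr)
print(f"hit rate = {cache.hits / (cache.hits + cache.misses):.2%}")
```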

702 citations

Proceedings ArticleDOI
10 Nov 2002
TL;DR: A new hybrid ASIC/FPGA chip architecture that is being developed in collaboration between IBM and Xilinx is introduced, and some of the design challenges this offers for designers and CAD developers are highlighted.
Abstract: This paper introduces a new hybrid ASIC/FPGA chip architecture that is being developed in collaboration between IBM and Xilinx, and highlights some of the design challenges this offers for designers and CAD developers. We review recent data from both the ASIC and FPGA industries, including technology features, and trends in usage and costs. This background data indicates that there are advantages to using standard ASICs and FPGAs for many applications, but technical and financial considerations are increasingly driving the need for a hybrid ASIC/FPGA architecture at specific volume tiers and technology nodes. As we describe the hybrid chip architecture, we point out evolving tool and methodology issues that will need to be addressed to enable customers to effectively design hybrid ASIC/FPGAs. The discussion highlights specific automation issues in the areas of logic partitioning, logic simulation, verification, timing, layout and test.

328 citations

Proceedings ArticleDOI
01 Nov 1998
TL;DR: A comprehensive approach to accurately characterize the device and interconnect characteristics of present and future process generations is described, resulting in the generation of a representative strawman technology that is used in conjunction with analytical model simulation tools and empirical design data to obtain a realistic picture of the future of circuit design.
Abstract: We take a fresh look at the problems posed by deep submicron (DSM) geometries and re-open the investigation into how DSM effects are most likely to affect future design methodologies. We describe a comprehensive approach to accurately characterize the device and interconnect characteristics of present and future process generations. This approach results in the generation of a representative strawman technology that is used in conjunction with analytical model simulation tools and empirical design data to obtain a realistic picture of the future of circuit design. We then proceed to quantify the precise impact of interconnect, including delay degradation due to noise, on high performance ASIC designs. Having determined the role of interconnect in performance, we then reconsider the impact of future processes on ASIC design methodology.
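As one concrete instance of the kind of analytical interconnect model the paper relies on, the sketch below computes a first-order Elmore delay for a gate driving a distributed RC wire. The driver resistance, per-length wire parasitics, and load capacitance are illustrative assumptions, not values from the paper's strawman technology.

```python
# Illustrative first-order (Elmore) delay estimate for a driver plus an RC
# interconnect, the kind of analytical model used to study how wire delay
# grows relative to gate delay in deep-submicron processes.

def elmore_delay(r_driver, c_load, r_wire_per_mm, c_wire_per_mm, length_mm, segments=50):
    """Approximate a distributed RC wire with `segments` lumped RC sections."""
    r_seg = r_wire_per_mm * length_mm / segments
    c_seg = c_wire_per_mm * length_mm / segments
    delay, upstream_r = 0.0, r_driver
    for _ in range(segments):
        upstream_r += r_seg
        delay += upstream_r * c_seg       # each capacitor sees all resistance upstream of it
    delay += upstream_r * c_load          # receiver input capacitance at the far end
    return delay

# Hypothetical numbers: 1 kOhm driver, 100 Ohm/mm and 0.2 pF/mm wire, 10 fF load.
for length in (1.0, 5.0, 10.0):          # wire length in mm
    d = elmore_delay(1e3, 10e-15, 100.0, 0.2e-12, length)
    print(f"{length:4.1f} mm wire: Elmore delay = {d*1e12:7.1f} ps")
```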

322 citations

Patent
30 Mar 1993
TL;DR: In this article, electrical current source circuitry for a bus is described, which includes transistor circuitry coupled between the bus and ground for controlling bus current, control circuitry coupled to the transistor circuitry, and a controller coupled to the control circuitry for controlling the transistor circuitry.
Abstract: Electrical current source circuitry for a bus is described. The circuitry includes transistor circuitry coupled between the bus and ground for controlling bus current, control circuitry coupled to the transistor circuitry, and a controller coupled to the control circuitry for controlling the transistor circuitry. The controller comprises a variable level circuit comprising setting circuitry for setting a desired current for the bus and transistor reference circuitry coupled to the setting circuitry. The variable level circuit provides a first voltage. Voltage reference circuitry provides a reference voltage. Comparison circuitry is coupled to the voltage reference circuitry and to the variable level circuit for comparing the first voltage with the reference voltage. Logic circuitry is responsive to a trigger signal from the comparison circuitry. An output of the logic circuitry is coupled to the control circuitry in order to turn on the transistor circuitry in a manner dependent upon an output of the logic circuitry.
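The control flow in the abstract resembles a comparator-driven loop: the setting circuitry turns a desired bus current into a first voltage, the comparison circuitry checks it against the reference, and the logic output enables the bus transistor circuitry. The behavioral model below is a loose software abstraction of that description, with arbitrary sense-resistance and reference values; it is not the patented circuit.

```python
# Behavioral abstraction of the described controller: the variable level
# circuit turns a desired bus current into a voltage, the comparator checks
# it against a reference, and the logic output gates the bus transistors.

def variable_level_voltage(desired_current_amps, sense_resistance_ohms=10.0):
    """Setting circuitry modeled as a simple current-to-voltage scaling (assumed)."""
    return desired_current_amps * sense_resistance_ohms

def controller(desired_current_amps, reference_volts=0.5):
    first_voltage = variable_level_voltage(desired_current_amps)
    trigger = first_voltage >= reference_volts          # comparison circuitry
    enable_transistors = trigger                        # logic responsive to the trigger
    return first_voltage, trigger, enable_transistors

for i_set in (0.02, 0.05, 0.08):                        # hypothetical current settings (A)
    v, trig, en = controller(i_set)
    print(f"set {i_set*1e3:4.0f} mA -> first voltage {v:.2f} V, trigger={trig}, drive bus={en}")
```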

272 citations