Author

Carlo Guardiani

Other affiliations: PDF Solutions
Bio: Carlo Guardiani is an academic researcher from STMicroelectronics. The author has contributed to research in topics: Design for manufacturability & Very-large-scale integration. The author has an h-index of 8 and has co-authored 22 publications receiving 212 citations. Previous affiliations of Carlo Guardiani include PDF Solutions.

Papers
Proceedings ArticleDOI
10 Nov 1996
TL;DR: A methodology for hierarchical statistical circuit characterization that does not rely on circuit-level Monte Carlo simulation is presented; it permits the statistical characterization of large analog and mixed-signal systems.
Abstract: A methodology for hierarchical statistical circuit characterization which does not rely upon circuit-level Monte Carlo simulation is presented. The methodology uses principal component analysis, response surface methodology, and statistics to directly calculate the statistical distributions of higher-level parameters from the distributions of lower-level parameters. We have used the methodology to characterize a folded cascode operational amplifier and a phase-locked loop. This methodology permits the statistical characterization of large analog and mixed-signal systems, many of which are extremely time-consuming or impossible to characterize using existing methods.
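A minimal sketch of the general flow in Python follows, using synthetic low-level parameter data and a placeholder performance function rather than the paper's amplifier or PLL models: principal component analysis reduces the correlated low-level parameters to independent factors, a quadratic response surface maps the factors to a higher-level performance, and the distribution is then propagated through the cheap surrogate instead of circuit-level Monte Carlo.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Synthetic, correlated "low-level" process parameters (stand-ins for
# measured device statistics such as Vth and Tox deviations).
n_params, n_samples = 6, 2000
mixing = rng.normal(size=(n_params, n_params))
low_level = rng.normal(size=(n_samples, n_params)) @ mixing.T

# Step 1: PCA turns the correlated parameters into a few independent factors.
pca = PCA(n_components=3).fit(low_level)
factors = pca.transform(low_level)

# Step 2: fit a quadratic response surface of a higher-level performance
# (a made-up "gain" returned by a placeholder for a few circuit simulations).
def simulate_gain(z):
    return 60.0 + 1.5 * z[:, 0] - 0.8 * z[:, 1] + 0.3 * z[:, 0] * z[:, 2]

poly = PolynomialFeatures(degree=2)
rsm = LinearRegression().fit(poly.fit_transform(factors[:200]),
                             simulate_gain(factors[:200]))

# Step 3: propagate the factor distribution through the cheap response
# surface instead of running circuit-level Monte Carlo.
gain = rsm.predict(poly.transform(factors))
print(f"gain: mean = {gain.mean():.2f}, std = {gain.std():.2f}")
```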

74 citations

Proceedings ArticleDOI
12 Aug 1996
TL;DR: This paper discusses the manufacturability of state-of-the-art low-power technologies, reporting results on two generations of bulk CMOS technologies, triple-well CMOS, and Thin Film Silicon on Insulator technologies.
Abstract: This paper discusses manufacturability of state-of-the-art low power technologies. We report the results on two generations of bulk CMOS technologies, triple-well CMOS and Thin Film Silicon on Insulator (TFSOI) technologies. We present technology capabilities for several values of supply voltage and address the issue of performance scaling with the supply voltage reduction. Then we focus on the statistical characterization of these technologies and discuss both interchip and intrachip variations. Finally, we present the digital and analog designer perspectives on the low power IC operation.

29 citations

Journal ArticleDOI
TL;DR: Innovative methods for statistical design optimization are applied to the design of a CMOS op-amp, based on the derivation of an analytic function representing the yield surface in the design parameter space.
Abstract: Innovative methods for statistical design optimization have been applied to the development of analog IC blocks. The most important feature of these methods is the derivation of an analytic function representing the yield surface in the design parameter space. Using this analytic model it is possible to optimize the yield accurately and efficiently. All the required operations are implemented in an integrated and fully automated CAD system. A comparison between simulated and measured data for several wafer lots demonstrates the validity of the approach.
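As a rough illustration of the yield-surface idea (not the paper's CAD system), the sketch below estimates yield at a few design points with a placeholder Monte Carlo, fits an analytic quadratic yield model over two hypothetical design parameters, and then optimizes the model; all functions and coefficients are invented for the example.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

def estimate_yield(design):
    """Placeholder yield estimator: fraction of Monte Carlo samples of a
    made-up op-amp gain spec that pass, for two design parameters (w, l)."""
    w, l = design
    gain = (58.0 - 3.0 * (w - 1.2) ** 2 - 2.0 * (l - 0.8) ** 2
            + rng.normal(0.0, 1.5, 4000))
    return np.mean(gain > 57.0)

# Evaluate yield on a coarse sample of the design space, then fit an
# analytic (quadratic) yield surface over the design parameters.
pts = rng.uniform([0.5, 0.4], [2.0, 1.5], size=(40, 2))
y = np.array([estimate_yield(p) for p in pts])
X = np.column_stack([np.ones(len(pts)), pts, pts ** 2, pts[:, :1] * pts[:, 1:]])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

def yield_model(d):
    w, l = d
    return coef @ np.array([1.0, w, l, w ** 2, l ** 2, w * l])

# Optimize the analytic model rather than re-running Monte Carlo per step.
best = minimize(lambda d: -yield_model(d), x0=[1.0, 1.0],
                bounds=[(0.5, 2.0), (0.4, 1.5)])
print("predicted optimum:", best.x, "modelled yield:", -best.fun)
```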

20 citations

Proceedings ArticleDOI
01 Jan 1995
TL;DR: This paper describes a technique that allows the worst-case points to be determined with an assigned probability value; the results are presented and compared with those obtained by standard methods at both cell and circuit level, showing the considerable benefits of the new method.
Abstract: The ability to determine accurate worst-case timing performance for a library of standard cells is of great importance in a modern VLSI structured semicustom IC design flow. The margin for profitability is extremely tight because of the ever-increasing performance demand, which can hardly be satisfied by a corresponding progress in process technology. It is therefore of the utmost importance to avoid excessively pessimistic estimates of the actual cell performance in order to exploit the full potential of the fabrication process. This paper describes a technique that allows the worst-case points to be determined with an assigned probability value. It is thus possible to select the desired level of confidence for the worst-case evaluation of digital IC designs with good accuracy. The results of the Assigned Probability Technique (APT) are presented and compared with those obtained by standard methods at both cell and circuit level, showing the considerable benefits of the new method.
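The paper's APT details are not reproduced here, but a common formulation of the same idea can be sketched: choose a confidence level, convert it into an equiprobability contour of the process-parameter distribution, and search for the maximum delay on that contour instead of using a fixed pessimistic corner. The delay model and covariance below are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2

# Hypothetical two-parameter process space (say, normalised NMOS/PMOS
# threshold shifts) with correlated statistics; delay_model stands in for a
# characterised cell or path delay in ps.
cov = np.array([[1.0, 0.4],
                [0.4, 1.0]])
cov_inv = np.linalg.inv(cov)

def delay_model(p):
    dn, dp = p
    return 100.0 + 6.0 * dn + 4.0 * dp + 0.5 * dn * dp

# Assign the probability level instead of taking a blanket pessimistic corner:
# for 2 jointly Gaussian parameters, the squared Mahalanobis radius of the
# equiprobability contour follows a chi-square distribution with 2 dof.
prob = 0.99
radius2 = chi2.ppf(prob, df=2)

# Worst case = maximum delay subject to staying within that contour.
res = minimize(lambda p: -delay_model(p), x0=[1.0, 1.0],
               constraints=[{"type": "ineq",
                             "fun": lambda p: radius2 - p @ cov_inv @ p}])
print(f"{prob:.0%} worst-case point: {res.x}, delay = {delay_model(res.x):.1f} ps")
```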

19 citations

Proceedings ArticleDOI
10 Aug 1998
TL;DR: An automatic modeling technique is presented in this paper that allows one to build an accurate model of power consumption in embedded memory blocks by automatically splitting those variables that have a discontinuous effect on the power consumption.
Abstract: An automatic modeling technique is presented in this paper that allows one to build an accurate model of power consumption in embedded memory blocks. A software neural-network is used to create a regression tree by automatically splitting those variables that have a discontinuous effect on the power consumption. An application of the methodology to the modeling of a 0.35 µm CMOS embedded SRAM is presented.
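A minimal stand-in for the regression-tree part of the methodology (using scikit-learn's DecisionTreeRegressor on synthetic data, not the paper's neural-network-driven tree construction) shows how a variable with a discontinuous effect on power, here a hypothetical access mode, is isolated by the first splits:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(2)

# Synthetic power data for a hypothetical embedded SRAM: power depends
# smoothly on clock frequency and switching activity, but jumps
# discontinuously with the access mode (0 = idle, 1 = read, 2 = write).
n = 3000
freq = rng.uniform(50.0, 200.0, n)        # MHz
activity = rng.uniform(0.0, 1.0, n)
mode = rng.integers(0, 3, n)
power = (0.02 * freq * activity
         + np.array([0.5, 3.0, 4.5])[mode]   # discontinuous mode offsets, mW
         + rng.normal(0.0, 0.1, n))

X = np.column_stack([freq, activity, mode])
tree = DecisionTreeRegressor(max_depth=6).fit(X, power)

# The first splits isolate 'mode' (the variable with a discontinuous effect),
# and the smooth frequency/activity dependence is fitted within each region.
print("R^2 on training data:", round(tree.score(X, power), 3))
print("estimated read power at 100 MHz, 50% activity:",
      round(float(tree.predict([[100.0, 0.5, 1.0]])[0]), 2), "mW")
```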

16 citations


Cited by
Journal ArticleDOI
01 Dec 2000
TL;DR: This survey presents an overview of recent advances in the state of the art for computer-aided design (CAD) tools for analog and mixed-signal integrated circuits (ICs) and outlines progress on the various design problems involved.
Abstract: This survey presents an overview of recent advances in the state of the art for computer-aided design (CAD) tools for analog and mixed-signal integrated circuits (ICs). Analog blocks typically constitute only a small fraction of the components on mixed-signal ICs and emerging systems-on-a-chip (SoC) designs. But due to the increasing levels of integration available in silicon technology and the growing requirement for digital systems to communicate with the continuous-valued external world, there is a growing need for CAD tools that increase the design productivity and improve the quality of analog integrated circuits. This paper describes the motivation and evolution of these tools and outlines progress on the various design problems involved: simulation and modeling, symbolic analysis, synthesis and optimization, layout generation, yield analysis and design centering, and test. This paper summarizes the problems for which viable solutions are emerging and those which are still unsolved.

579 citations

Journal ArticleDOI
TL;DR: It is found that circuits with a large number of critical paths and with a low logic depth are most sensitive to uncorrelated gate delay variations, and scenarios for future technologies show the increased impact of uncorrelated delay variations on digital design.
Abstract: The yield of low-voltage digital circuits is found to be sensitive to local gate delay variations due to uncorrelated intra-die parameter deviations. Caused by statistical deviations of the doping concentration, these variations lead to more pronounced delay variations for minimum transistor sizes. Their influence on path delays in digital circuits is verified using a carry select adder test circuit fabricated in 0.5 and 0.35 µm complementary metal-oxide-semiconductor (CMOS) technologies with two different threshold voltages. The increase of the path delay variations for smaller device dimensions and reduced supply voltages, as well as the dependence on the path length, is shown. It is found that circuits with a large number of critical paths and with a low logic depth are most sensitive to uncorrelated gate delay variations. Scenarios for future technologies show the increased impact of uncorrelated delay variations on digital design. A reduction of the maximum clock frequency of 10% is found, for example, for highly pipelined systems realized in a 0.18 µm CMOS technology.
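The qualitative finding can be reproduced with a small Monte Carlo sketch (invented numbers, nominal gate delay of 1): the clock period is set by the slowest of the critical paths, so uncorrelated per-gate variation hurts most when paths are short and numerous.

```python
import numpy as np

rng = np.random.default_rng(3)

def mean_slowdown(logic_depth, n_paths, sigma_gate=0.15, n_mc=5000):
    """Mean relative increase of the clock-limiting path delay over nominal,
    with uncorrelated per-gate variation (nominal gate delay = 1)."""
    gates = rng.normal(1.0, sigma_gate, size=(n_mc, n_paths, logic_depth))
    worst = gates.sum(axis=2).max(axis=1)      # slowest path in each MC sample
    return worst.mean() / logic_depth - 1.0

# Shallow pipelines with many near-critical paths lose the most speed,
# matching the paper's qualitative conclusion.
for depth in (20, 10, 5):
    for paths in (1, 100):
        print(f"logic depth {depth:2d}, {paths:3d} critical paths: "
              f"{100.0 * mean_slowdown(depth, paths):5.1f}% mean slowdown")
```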

177 citations

Journal ArticleDOI
TL;DR: A simple yet reliable methodology to expedite yield estimation and optimization of microwave structures is proposed, exploiting the almost linear dependence of the feature points on the designable parameters of the structure.
Abstract: In this paper, we propose a simple, yet reliable methodology to expedite yield estimation and optimization of microwave structures. In our approach, the analysis of the entire response of the structure at hand (e.g., S-parameters as a function of frequency) is replaced by response surface modeling of suitably selected feature points. On the one hand, this is sufficient to determine whether a design satisfies given performance specifications. On the other, by exploiting the almost linear dependence of the feature points on the designable parameters of the structure, reliable yield estimates can be realized at low computational cost. Our methodology is verified using two examples of waveguide filters and one microstrip hairpin filter and compared with conventional Monte Carlo analysis based on repetitive electromagnetic simulations, as well as with statistical analysis exploiting linear response expansions around the nominal design. Finally, we perform yield-driven design optimizations on these filters.
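A heavily simplified sketch of the feature-point idea follows, with a placeholder in lieu of an EM solver and made-up geometry parameters and specifications: fit a linear surrogate of one feature point from a handful of expensive simulations, then run a large, cheap Monte Carlo on the surrogate to estimate yield.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)

def em_feature(x):
    """Placeholder for a full-wave EM solve: returns one feature point of the
    response (say the filter centre frequency in GHz) for geometry deviation x."""
    return 10.0 + 0.8 * x[..., 0] - 0.5 * x[..., 1] + 0.05 * x[..., 0] * x[..., 1]

# A handful of expensive "EM simulations" around the nominal design ...
train_x = rng.normal(0.0, 0.05, size=(20, 2))      # geometry deviations, mm
surrogate = LinearRegression().fit(train_x, em_feature(train_x))

# ... then a large, cheap Monte Carlo on the (almost linear) surrogate.
mc_x = rng.normal(0.0, 0.05, size=(100_000, 2))    # manufacturing spread
centre = surrogate.predict(mc_x)
spec_low, spec_high = 9.95, 10.05                  # GHz acceptance window
print("estimated yield:", np.mean((centre > spec_low) & (centre < spec_high)))
```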

134 citations

Journal ArticleDOI
TL;DR: This paper develops an analysis-of-variance-based stochastic circuit/microelectromechanical systems simulator to efficiently extract the surrogate models at the low level, and employs tensor-train decomposition at the high level to construct the basis functions and Gauss quadrature points.
Abstract: Hierarchical uncertainty quantification can reduce the computational cost of stochastic circuit simulation by employing spectral methods at different levels. This paper presents an efficient framework to hierarchically simulate some challenging stochastic circuits/systems that include high-dimensional subsystems. Due to the high parameter dimensionality, it is challenging both to extract surrogate models at the low level of the design hierarchy and to handle them in the high-level simulation. In this paper, we develop an analysis-of-variance-based stochastic circuit/microelectromechanical systems simulator to efficiently extract the surrogate models at the low level. In order to avoid the curse of dimensionality, we employ tensor-train decomposition at the high level to construct the basis functions and Gauss quadrature points. As a demonstration, we verify our algorithm on a stochastic oscillator with four MEMS capacitors and 184 random parameters. This challenging example is efficiently simulated by our simulator at a cost of only 10 minutes in MATLAB on a regular personal computer.
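A drastically simplified sketch of the hierarchy alone, omitting the paper's ANOVA decomposition and tensor-train machinery: fit a cheap surrogate of a high-dimensional subsystem at the low level, then propagate its output through a system-level model. All models and numbers below are invented.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(5)
n_low = 20                                  # many random parameters per subsystem

def mems_capacitor(xi):
    """Placeholder low-level model: capacitance (pF) of one MEMS device as a
    function of its own high-dimensional random parameters xi."""
    return 1.0 + 0.01 * xi.sum(axis=-1) + 0.002 * xi[..., 0] * xi[..., 1]

# Low level: fit a cheap surrogate of the subsystem from a modest sample budget.
xi_train = rng.normal(size=(300, n_low))
poly = PolynomialFeatures(degree=2, include_bias=False)
surrogate = Ridge(alpha=1e-3).fit(poly.fit_transform(xi_train),
                                  mems_capacitor(xi_train))

# High level: treat the subsystem output as a single random input to the
# system-level quantity (a made-up LC oscillator frequency, L = 5 nH).
xi_mc = rng.normal(size=(20000, n_low))
cap_pf = surrogate.predict(poly.transform(xi_mc))
freq = 1.0 / (2.0 * np.pi * np.sqrt(5e-9 * cap_pf * 1e-12))
print(f"frequency: mean = {freq.mean():.3e} Hz, std = {freq.std():.3e} Hz")
```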

126 citations

Proceedings ArticleDOI
07 Nov 2004
TL;DR: An asymptotic probability extraction method, APEX, is proposed for estimating the unknown random distribution when using nonlinear response surface modeling; it uses a binomial moment evaluation to efficiently compute the high-order moments of the unknown distribution and applies moment matching to approximate the characteristic function of the random circuit performance by an efficient rational function.
Abstract: While process variations are becoming more significant with each new IC technology generation, they are often modeled via linear regression models so that the resulting performance variations can be captured via normal distributions. Nonlinear (e.g. quadratic) response surface models can be utilized to capture larger-scale process variations; however, such models result in non-normal distributions for circuit performance, which are difficult to capture since the distribution model is unknown. In this paper we propose an asymptotic probability extraction method, APEX, for estimating the unknown random distribution when using nonlinear response surface modeling. APEX first uses a binomial moment evaluation to efficiently compute the high-order moments of the unknown distribution, and then applies moment matching to approximate the characteristic function of the random circuit performance by an efficient rational function. A simple statistical timing example and an analog circuit example demonstrate that APEX can provide better accuracy than Monte Carlo simulation with 10^4 samples and achieve orders of magnitude more efficiency. We also show the error incurred by the popular normal modeling assumption using standard IC technologies.
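APEX's binomial moment evaluation and rational-function moment matching are not reproduced here; the sketch below only illustrates the motivating observation: a quadratic response surface of Gaussian process variables yields a visibly non-normal performance distribution, and its higher-order moments quantify the error of the normal assumption. Coefficients are made up.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)

# Quadratic response surface of a circuit performance versus two Gaussian
# process variables (coefficients invented for illustration).
def quadratic_rsm(x):
    return (1.0 + 0.3 * x[:, 0] - 0.2 * x[:, 1]
            + 0.25 * x[:, 0] ** 2 + 0.1 * x[:, 0] * x[:, 1])

x = rng.normal(size=(200_000, 2))
y = quadratic_rsm(x)

# Higher-order moments expose how far the true distribution is from the
# normal model implied by a purely linear sensitivity analysis.
print(f"mean = {y.mean():.3f}, std = {y.std():.3f}")
print(f"skewness = {stats.skew(y):.3f} (0 for a normal)")
print(f"excess kurtosis = {stats.kurtosis(y):.3f} (0 for a normal)")

# Tail error of the normal assumption at the 99th percentile.
normal_q99 = y.mean() + stats.norm.ppf(0.99) * y.std()
print(f"true 99th percentile = {np.quantile(y, 0.99):.3f}, "
      f"normal-model estimate = {normal_q99:.3f}")
```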

110 citations