Author

Sunil P. Khatri

Bio: Sunil P. Khatri is an academic researcher from Texas A&M University. The author has contributed to research in topics including logic synthesis and logic gates. The author has an h-index of 30 and has co-authored 275 publications receiving 3,984 citations. Previous affiliations of Sunil P. Khatri include Motorola and Arizona State University.


Papers
Book ChapterDOI
03 Aug 1996
TL;DR: VIS provides the capability to check the combinational equivalence of two designs and provides traditional verification in the form of a cycle-based simulator that uses BDD techniques.
Abstract: Abstraction: Manual abstraction can be performed by giving a file containing the names of variables to abstract. For each variable appearing in the file, a new primary input node is created to drive all the nodes that were previously driven by the variable. Abstracting a net effectively allows it to take any value in its range at every clock cycle. Fair CTL model checking and language emptiness check: VIS performs fair CTL model checking under Büchi fairness constraints. In addition, VIS can perform language emptiness checking by model checking the formula EG true. The language of a design is given by sequences over the set of reachable states that do not violate the fairness constraint. The language emptiness check can be used to perform language containment by expressing the set of bad behaviors as another component of the system. If model checking or language emptiness fails, VIS reports the failure with a counterexample (i.e., behavior seen in the system that does not satisfy the property, for model checking, or valid behavior seen in the system, for language emptiness). This is called the “debug” trace. Debug traces list a set of states that are on a path to a fair cycle and fail the CTL formula. Equivalence checking: VIS provides the capability to check the combinational equivalence of two designs. An important use of combinational equivalence is to provide a sanity check when re-synthesizing portions of a network. VIS also provides the capability to test the sequential equivalence of two designs. Sequential verification is done by building the product finite state machine and checking whether a state where the values of two corresponding outputs differ can be reached from the set of initial states of the product machine. If this happens, a debug trace is provided. Both combinational and sequential verification are implemented using BDD-based routines. Simulation: VIS also provides traditional design verification in the form of a cycle-based simulator that uses BDD techniques. Since VIS performs both formal verification and simulation using the same data structures, consistency between them is ensured. VIS can generate random input patterns or accept user-specified input patterns. Any subtree of the specified hierarchy may be simulated.
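To make the equivalence-checking idea concrete, here is a minimal sketch that is not drawn from VIS itself: VIS compares designs with BDD-based routines, whereas this toy version simply enumerates all input assignments (feasible only for small circuits) and returns a distinguishing input, the analogue of a debug trace. The function names and example circuits are illustrative.

```python
# Toy illustration of combinational equivalence checking (not VIS itself).
# VIS uses BDD-based routines; this sketch enumerates all input assignments,
# which is only feasible for small circuits.
from itertools import product

def equivalent(f, g, num_inputs):
    """Return (True, None) if f and g agree on every input vector,
    otherwise (False, counterexample)."""
    for bits in product([0, 1], repeat=num_inputs):
        if f(*bits) != g(*bits):
            return False, bits  # a distinguishing input: the "debug trace" analogue
    return True, None

# Example: two structurally different but functionally identical circuits.
original      = lambda a, b, c: (a & b) | (a & c)
resynthesized = lambda a, b, c: a & (b | c)

ok, cex = equivalent(original, resynthesized, 3)
print("equivalent" if ok else f"differ on input {cex}")
```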

655 citations

Proceedings ArticleDOI
07 Nov 2004
TL;DR: This work uses an array of dynamic PLAs which require only metal and via mask customization in order to implement a new design, and demonstrates that this approach strikes a reasonable compromise between ASIC and field programmable design methodologies in terms of placed-and-routed area and delay.
Abstract: In recent times there has been a substantial increase in the cost and complexity of fabricating a VLSI chip. The lithography masks themselves are a significant contributor to this cost. It is conjectured that, due to these increasing costs, the number of ASIC starts in the last few years has declined. We address this problem by using an array of dynamic PLAs which require only metal and via mask customization in order to implement a new design. This allows several similar-sized designs to share the same base set of masks (right up to the metal layers) and differ only in their metal and via masks. We have implemented our methodology for both combinational and sequential designs, and demonstrate that our approach strikes a reasonable compromise between ASIC and field-programmable design methodologies in terms of placed-and-routed area and delay. Our method has a 2.89× (3.58×) delay overhead and a 4.96× (3.44×) area overhead compared to standard cells for combinational (sequential) designs.
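As a rough illustration of the PLA-based idea, the sketch below evaluates a sum-of-products function against an AND-plane/OR-plane "personality". In the authors' flow the PLA array itself is fixed and only the metal and via masks change between designs; the personality matrices loosely play that role here. The encoding and names are assumptions for illustration, not the paper's format.

```python
# Minimal sketch of evaluating a sum-of-products function on a PLA-style
# AND-plane / OR-plane personality.  The matrices below are illustrative.

# Each AND-plane row is a product term: per input, 1 = true literal,
# 0 = complemented literal, None = input unused in this term.
AND_PLANE = [
    [1, 1, None],   # a & b
    [1, None, 1],   # a & c
]
# OR plane: which product terms feed each output.
OR_PLANE = [
    [1, 1],         # out0 = (a & b) | (a & c)
]

def eval_pla(inputs, and_plane, or_plane):
    # Evaluate every product term on the given input vector.
    terms = []
    for row in and_plane:
        term = 1
        for lit, x in zip(row, inputs):
            if lit is None:
                continue
            term &= x if lit == 1 else 1 - x
        terms.append(term)
    # OR together the selected terms for each output.
    return [int(any(t == 1 for t, sel in zip(terms, out_row) if sel))
            for out_row in or_plane]

print(eval_pla([1, 0, 1], AND_PLANE, OR_PLANE))  # -> [1], since a & c holds
```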

187 citations

Proceedings ArticleDOI
22 Aug 2001
TL;DR: The experimental results show that the proposed techniques result in reduced delay variation due to cross-talk, and the overall delay of a bus actually decreases even after the use of the encoding scheme.
Abstract: We present techniques to analyze and alleviate cross-talk in on-chip buses. With rapidly shrinking process feature sizes, wire delay is becoming a large fraction of the overall delay of a circuit. Additionally, the increasing cross-coupling capacitances between wires on the same metal layer create a situation where the delay of a wire is strongly dependent on the electrical state of its neighboring wires. The delay of a wire can vary widely depending on whether its neighbors perform a like or unlike transition. This effect is acute for long on-chip buses. In this work, we classify cross-talk interactions between the wires of an on-chip bus. We present encoding techniques which can help a designer trade off cross-talk against area overhead. Our experimental results show that the proposed techniques result in reduced delay variation due to cross-talk. As a result, the overall delay of a bus actually decreases even after the use of the encoding scheme.
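As a hedged illustration of the kind of classification the abstract refers to, the sketch below estimates the relative coupling seen by each wire of a bus for one transition using the common Miller-factor approximation (same-direction neighbor ≈ 0×, quiet neighbor ≈ 1×, opposite-direction neighbor ≈ 2×). The paper's own classification and encoding scheme are more detailed; the weights here are illustrative.

```python
# Hedged sketch: estimate relative coupling per bus wire for one transition
# using a simple Miller-factor approximation.  Weights are illustrative.

def coupling_classes(prev_word, next_word):
    """prev_word/next_word: lists of 0/1 bus values before and after the transition."""
    deltas = [n - p for p, n in zip(prev_word, next_word)]   # -1, 0, or +1 per wire
    classes = []
    for i, d in enumerate(deltas):
        factor = 0
        for j in (i - 1, i + 1):                             # left and right neighbors
            if 0 <= j < len(deltas):
                if deltas[j] == 0:
                    factor += 1            # quiet neighbor
                elif deltas[j] == d:
                    factor += 0            # like transition
                else:
                    factor += 2            # unlike (opposing) transition
        classes.append(factor)
    return classes

# 4-bit bus: 0101 -> 1010 is the worst case (every neighbor switches the other way).
print(coupling_classes([0, 1, 0, 1], [1, 0, 1, 0]))   # -> [2, 4, 4, 2]
```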

173 citations

Proceedings ArticleDOI
08 Jun 2008
TL;DR: This paper implements a fault simulator that exploits thread-level parallelism on a graphics processing unit (GPU) and fault-simulates all the gates in a particular level of a circuit, including good and faulty circuit simulations, for all patterns, in parallel.
Abstract: In this paper, we explore the implementation of fault simulation on a graphics processing unit (GPU). In particular, we implement a fault simulator that exploits thread-level parallelism. Fault simulation is inherently parallelizable, and the large number of threads that can be computed in parallel on a GPU results in a natural fit for the problem of fault simulation. Our implementation fault-simulates all the gates in a particular level of a circuit, including good and faulty circuit simulations, for all patterns, in parallel. Since GPUs have an extremely large memory bandwidth, we implement each of our fault simulation threads (which execute in parallel with no data dependencies) using memory lookup. Fault injection is also done along with gate evaluation, with each thread using a different fault injection mask. All threads compute identical instructions, but on different data, as required by the Single Instruction Multiple Data (SIMD) programming semantics of the GPU. Our results, implemented on an NVIDIA GeForce GTX 8800 GPU card, indicate that our approach is on average 35× faster when compared to a commercial fault simulation engine. With the recently announced Tesla GPU servers housing up to eight GPUs, our approach would potentially be 238 times faster. The correctness of the GPU-based fault simulator has been verified by comparing its results with those of a CPU-based fault simulator.
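The data-parallel idea can be sketched on the CPU with NumPy standing in for the GPU threads: one gate is evaluated over all test patterns at once, and a stuck-at fault is injected by overriding the gate output. This is only an illustration of the parallelism; the actual implementation uses CUDA kernels with lookup-table-based gate evaluation and per-thread fault injection masks.

```python
# Hedged sketch of the data-parallel idea behind GPU fault simulation:
# evaluate one gate over all test patterns at once and inject a stuck-at
# fault.  NumPy stands in for the GPU here; names are illustrative.
import numpy as np

patterns = 8
a = np.random.randint(0, 2, patterns)        # fanin values for all patterns
b = np.random.randint(0, 2, patterns)

good = a & b                                 # good-circuit AND gate, all patterns in parallel

# Inject "output stuck-at-0": force the gate output to 0 regardless of inputs.
stuck_at_value = 0
faulty = np.full(patterns, stuck_at_value)

detected = good != faulty                    # patterns whose output differs detect the fault
print("patterns detecting the fault:", np.nonzero(detected)[0])
```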

112 citations

Journal ArticleDOI
TL;DR: This work presents guidelines for the CODEC design of the "forbidden pattern free crosstalk avoidance code" (FPF-CAC), and shows that mathematically, a mapping scheme exists based on the representation of numbers in the Fibonacci numeral system.
Abstract: Interconnect delay has become a limiting factor for circuit performance in deep sub-micrometer designs. As the crosstalk in an on-chip bus is highly dependent on the data patterns transmitted on the bus, different crosstalk avoidance coding schemes have been proposed to boost the bus speed and/or reduce the overall energy consumption. Despite the availability of the codes, no systematic mapping of data words to codewords has been proposed for CODEC design. This is mainly due to the nonlinear nature of crosstalk avoidance codes (CACs). The lack of practical CODEC construction schemes has hampered the use of such codes in practical designs. This work presents guidelines for the CODEC design of the "forbidden pattern free crosstalk avoidance code" (FPF-CAC). We analyze the properties of the FPF-CAC and show that, mathematically, a mapping scheme exists based on the representation of numbers in the Fibonacci numeral system. Our first proposed CODEC design offers near-optimal area overhead. An improved version of the CODEC is then presented, which achieves the theoretical optimum. We also investigate the implementation details of the CODECs, including design complexity and speed. Optimization schemes are provided to reduce the size of the CODEC and improve its speed.
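For intuition about the underlying numeral system, the sketch below performs a greedy (Zeckendorf-style) conversion of a data value into Fibonacci-base digits. This only illustrates the Fibonacci numeral system that the CODEC construction builds on; the paper's actual mapping chooses among the (non-unique) Fibonacci representations so that the resulting codewords avoid the forbidden patterns, which this sketch does not do.

```python
# Hedged sketch: representing a value in the Fibonacci numeral system, the
# mathematical tool the CODEC construction is built on.  Greedy conversion
# only; the paper's codeword mapping is more involved.

def fib_digits(value, width):
    """Greedy conversion of `value` to `width` Fibonacci-base digits (MSB first)."""
    # Fibonacci weights 1, 2, 3, 5, 8, ... (one weight per codeword bit).
    fibs = [1, 2]
    while len(fibs) < width:
        fibs.append(fibs[-1] + fibs[-2])
    digits = []
    for w in reversed(fibs[:width]):
        if value >= w:
            digits.append(1)
            value -= w
        else:
            digits.append(0)
    return digits

for v in range(8):
    print(v, fib_digits(v, 5))
```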

107 citations


Cited by
Book
25 Apr 2008
TL;DR: Principles of Model Checking offers a comprehensive introduction to model checking that is not only a text suitable for classroom use but also a valuable reference for researchers and practitioners in the field.
Abstract: Our growing dependence on increasingly complex computer and software systems necessitates the development of formalisms, techniques, and tools for assessing functional properties of these systems. One such technique that has emerged in the last twenty years is model checking, which systematically (and automatically) checks whether a model of a given system satisfies a desired property such as deadlock freedom, invariants, and request-response properties. This automated technique for verification and debugging has developed into a mature and widely used approach with many applications. Principles of Model Checking offers a comprehensive introduction to model checking that is not only a text suitable for classroom use but also a valuable reference for researchers and practitioners in the field. The book begins with the basic principles for modeling concurrent and communicating systems, introduces different classes of properties (including safety and liveness), presents the notion of fairness, and provides automata-based algorithms for these properties. It introduces the temporal logics LTL and CTL, compares them, and covers algorithms for verifying these logics, discussing real-time systems as well as systems subject to random phenomena. Separate chapters treat such efficiency-improving techniques as abstraction and symbolic manipulation. The book includes an extensive set of examples (most of which run through several chapters) and a complete set of basic results accompanied by detailed proofs. Each chapter concludes with a summary, bibliographic notes, and an extensive list of exercises of both practical and theoretical nature.
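As a minimal taste of the idea the book develops, the sketch below checks an invariant by explicit-state reachability (breadth-first search) and returns an error trace when the invariant is violated. The transition system and property are made up for illustration; the book's symbolic and temporal-logic algorithms go far beyond this.

```python
# Toy sketch of invariant checking by explicit-state reachability (BFS),
# the simplest instance of the model-checking idea.  The transition system
# and property below are illustrative.
from collections import deque

def check_invariant(initial, successors, invariant):
    """Explore all reachable states; return a counterexample path if the
    invariant is violated, otherwise None."""
    parent = {initial: None}
    queue = deque([initial])
    while queue:
        s = queue.popleft()
        if not invariant(s):
            path = []
            while s is not None:            # reconstruct the error trace
                path.append(s)
                s = parent[s]
            return list(reversed(path))
        for t in successors(s):
            if t not in parent:
                parent[t] = s
                queue.append(t)
    return None

# A mod-8 counter that should never reach 6 (it does, so we get a trace).
trace = check_invariant(0, lambda s: [(s + 1) % 8], lambda s: s != 6)
print("counterexample:", trace)             # [0, 1, 2, 3, 4, 5, 6]
```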

4,905 citations

01 Apr 1997
TL;DR: The objective of this paper is to give a comprehensive introduction to applied cryptography with an engineer or computer scientist in mind, emphasizing the knowledge needed to create practical systems that support integrity, confidentiality, or authenticity.
Abstract: The objective of this paper is to give a comprehensive introduction to applied cryptography with an engineer or computer scientist in mind. The emphasis is on the knowledge needed to create practical systems that support integrity, confidentiality, or authenticity. Topics covered include an introduction to the concepts in cryptography, attacks against cryptographic systems, key use and handling, random bit generation, encryption modes, and message authentication codes. Recommendations on algorithms and further reading are given at the end of the paper. This paper should enable the reader to build, understand, and evaluate system descriptions and designs based on the cryptographic components described in the paper.
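One of the primitives the paper surveys, the message authentication code, can be illustrated with Python's standard hmac and hashlib modules; the key, message, and hash choice below are illustrative.

```python
# Minimal example of a message authentication code using Python's standard
# hmac and hashlib modules.  Key, message, and hash choice are illustrative.
import hmac, hashlib, os

key = os.urandom(32)                          # shared secret key
message = b"transfer 100 units to account 42"

tag = hmac.new(key, message, hashlib.sha256).hexdigest()

# The receiver recomputes the tag and compares in constant time.
valid = hmac.compare_digest(tag, hmac.new(key, message, hashlib.sha256).hexdigest())
print("authentic:", valid)
```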

2,188 citations

Journal ArticleDOI
TL;DR: The state of the art in specification and verification, which includes advances in model checking and theorem proving, is assessed and future directions in fundamental concepts, new methods and tools, integration of methods, and education and technology transfer are outlined.
Abstract: Hardware and software systems will inevitably grow in scale and functionality. Because of this increase in complexity, the likelihood of subtle errors is much greater. Moreover, some of these errors may cause catastrophic loss of money, time, or even human life. A major goal of software engineering is to enable developers to construct systems that operate reliably despite this complexity. One way of achieving this goal is by using formal methods, which are mathematically based languages, techniques, and tools for specifying and verifying such systems. Use of formal methods does not a priori guarantee correctness. However, they can greatly increase our understanding of a system by revealing inconsistencies, ambiguities, and incompleteness that might otherwise go undetected. The first part of this report assesses the state of the art in specification and verification. For verification, we highlight advances in model checking and theorem proving. In the three sections on specification, model checking, and theorem proving, we explain what we mean by the general technique and briefly describe some successful case studies and well-known tools. The second part of this report outlines future directions in fundamental concepts, new methods and tools, integration of methods, and education and technology transfer. We close with summary remarks and pointers to resources for more information.

1,429 citations

Journal ArticleDOI
01 May 2001
TL;DR: This paper analyzes the limitations of the existing interconnect technologies and design methodologies and presents a novel three-dimensional chip design strategy that exploits the vertical dimension to alleviate the interconnect related problems and to facilitate heterogeneous integration of technologies to realize a system-on-a-chip (SoC) design.
Abstract: Performance of deep-submicrometer very large scale integrated (VLSI) circuits is being increasingly dominated by the interconnects due to decreasing wire pitch and increasing die size. Additionally, heterogeneous integration of different technologies in one single chip is becoming increasingly desirable, for which planar (two-dimensional) ICs may not be suitable. This paper analyzes the limitations of the existing interconnect technologies and design methodologies and presents a novel three-dimensional (3-D) chip design strategy that exploits the vertical dimension to alleviate the interconnect related problems and to facilitate heterogeneous integration of technologies to realize a system-on-a-chip (SoC) design. A comprehensive analytical treatment of these 3-D ICs has been presented and it has been shown that by simply dividing a planar chip into separate blocks, each occupying a separate physical level interconnected by short and vertical interlayer interconnects (VILICs), significant improvement in performance and reduction in wire-limited chip area can be achieved, without the aid of any other circuit or design innovations. A scheme to optimize the interconnect distribution among different interconnect tiers is presented and the effect of transferring the repeaters to upper Si layers has been quantified in this analysis for a two-layer 3-D chip. Furthermore, one of the major concerns in 3-D ICs, arising from power dissipation, has been analyzed and an analytical model has been presented to estimate the temperatures of the different active layers. It is demonstrated that advancement in heat sinking technology will be necessary in order to extract maximum performance from these chips. Implications of 3-D device architecture on several design issues have also been discussed with special attention to SoC design strategies. Finally, some of the promising technologies for manufacturing 3-D ICs have been outlined.
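As a back-of-the-envelope illustration, not taken from the paper's analytical model: if a planar die is split across n strata connected by short vertical vias, lateral distances, and hence long-wire length and its delay, shrink roughly with the square root of n. The sketch below just evaluates that rough scaling.

```python
# Back-of-the-envelope sketch (not from the paper): splitting a planar die
# across n strata shrinks the longest lateral wire roughly as 1/sqrt(n).
import math

def lateral_scale(n_strata):
    """Rough factor by which the longest lateral wire shrinks with n strata."""
    return 1.0 / math.sqrt(n_strata)

for n in (1, 2, 4):
    print(f"{n} strata: longest lateral wire ~ {lateral_scale(n):.2f}x of planar")
```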

1,057 citations