
Showing papers presented at "Asia and South Pacific Design Automation Conference in 2001"


Proceedings ArticleDOI
30 Jan 2001
TL;DR: This paper introduces a new approach for low-energy scheduling (LEneS), based on a list-scheduling heuristic with dynamic recalculation of priorities that assumes a given allocation and assignment of tasks to processors, and compares it to two other scheduling methods.
Abstract: The work presented in this paper addresses minimization of the energy consumption of a system during system-level design. The paper focuses on scheduling techniques for architectures containing variable supply voltage processors, running dependent tasks. We introduce our new approach for low-energy scheduling (LEneS) and compare it to two other scheduling methods. LEneS is based on a list-scheduling heuristic with dynamic recalculation of priorities, and assumes a given allocation and assignment of tasks to processors. Our approach minimizes the energy by choosing the best combination of supply voltages for each task running on its processor. The set of experiments we present shows that, using the LEneS approach, we can achieve up to 28% energy savings for the tightest deadlines, and up to 77% energy savings when these deadlines are relaxed by 50%.

194 citations
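The core idea behind such scheduling — run each task at the lowest supply voltage that still meets the deadline, since dynamic energy scales roughly with V² while delay grows as the voltage drops — can be sketched for a simple chain of dependent tasks. This is an illustrative first-order model, not the authors' LEneS implementation; the task tuples, voltage set, and scaling laws are all assumptions:

```python
def schedule_low_energy(tasks, deadline, voltages=(1.0, 2.0)):
    """tasks: list of (name, delay_at_vmax, switched_cap) for a chain of
    dependent tasks; returns (plan, finish_time, energy)."""
    vmax = max(voltages)
    t = energy = 0.0
    plan = []
    remaining = sum(d for _, d, _ in tasks)    # time still needed at vmax
    for name, d_vmax, cap in tasks:
        remaining -= d_vmax
        chosen = vmax
        for v in sorted(voltages):             # try the lowest voltage first
            # feasible if this task runs at v and the rest run at vmax
            if t + d_vmax * vmax / v + remaining <= deadline:
                chosen = v                     # slowest feasible = least energy
                break
        t += d_vmax * vmax / chosen            # first-order: delay ~ 1/V
        energy += cap * chosen ** 2            # dynamic energy ~ C * V^2
        plan.append((name, chosen))
    return plan, t, energy
```

Relaxing the deadline lets more tasks drop to the low voltage, which is the mechanism behind savings growing as deadlines are relaxed.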


Proceedings ArticleDOI
30 Jan 2001
TL;DR: FAST-SP translates each sequence pair to its corresponding block placement in O(n log log n) time based on a fast longest common subsequence computation, much faster than the traditional O(n²) method of first constructing horizontal and vertical constraint graphs and then performing longest path computations.
Abstract: In this paper we present FAST-SP, a fast block placement algorithm based on the sequence-pair placement representation. FAST-SP has two significant improvements over previous sequence-pair based placement algorithms: (1) FAST-SP translates each sequence pair to its corresponding block placement in O(n log log n) time based on a fast longest common subsequence computation. This is much faster than the traditional O(n²) method of first constructing horizontal and vertical constraint graphs and then performing longest path computations. As a result, FAST-SP can examine more sequence pairs and obtain a better placement solution in less runtime. (2) FAST-SP can handle placement constraints such as the pre-placed constraint, range constraint, and boundary constraint. No previous sequence-pair based algorithm can handle the range constraint and boundary constraint. Fast evaluation in O(n log log n) time is still valid in the presence of placement constraints, and a novel cost function which unifies the evaluation of feasible and infeasible sequence pairs is used. We have implemented FAST-SP and obtained excellent experimental results. For all MCNC benchmark block placement problems, we have obtained the best results ever reported in the literature (including those reported by algorithms based on O-tree and B*-tree) with significantly less runtime. For example, the best known result for ami49 (36.8 mm²) was obtained by a B*-tree based algorithm using 4752 seconds, and FAST-SP obtained a better result (36.5 mm²) in 31 seconds.

174 citations
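For intuition, here is the baseline evaluation FAST-SP accelerates: in a sequence pair, block a sits left of b when a precedes b in both sequences, and below b when a follows b in the first sequence but precedes it in the second. The plain O(n²) sketch below computes the same coordinates the paper's O(n log log n) LCS-based method produces; block names and sizes are made up:

```python
def sp_to_placement(gp, gn, widths, heights):
    """Evaluate a sequence pair (gp, gn): a is left of b iff a precedes b
    in both sequences; a is below b iff a follows b in gp but precedes b
    in gn. Returns lower-left (x, y) coordinates per block.
    Plain O(n^2) constraint evaluation (the baseline, not FAST-SP's LCS)."""
    pos_p = {b: i for i, b in enumerate(gp)}
    pos_n = {b: i for i, b in enumerate(gn)}
    x, y = {}, {}
    for b in gn:                       # gn order visits predecessors first
        x[b] = max((x[a] + widths[a] for a in x
                    if pos_p[a] < pos_p[b] and pos_n[a] < pos_n[b]),
                   default=0)
    for b in gn:
        y[b] = max((y[a] + heights[a] for a in y
                    if pos_p[a] > pos_p[b] and pos_n[a] < pos_n[b]),
                   default=0)
    return x, y
```

Evaluating one candidate faster is what lets an annealer examine more sequence pairs in the same runtime.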


Proceedings ArticleDOI
30 Jan 2001
TL;DR: This paper presents a much improved, highly accurate yet efficient crosstalk noise model, the 2-π model, and applies it to noise-constrained interconnect optimizations, demonstrating its effectiveness in two applications.
Abstract: This paper presents a much improved, highly accurate yet efficient crosstalk noise model, the 2-π model, and applies it to noise-constrained interconnect optimizations. Compared with previous crosstalk noise models of similar complexity, our 2-π model takes into consideration many key parameters, such as coupling locations (near-driver or near-receiver) and the coarse distributed RC characteristics of the victim net. Thus, it is very accurate (less than 6% error on average compared with HSPICE simulations). Moreover, our model provides simple closed-form expressions for both peak noise amplitude and noise width, so it is very useful for noise-aware layout optimizations. In particular we demonstrate its effectiveness in two applications: (i) optimization rule generation for noise reduction using various interconnect optimization techniques, and (ii) simultaneous wire spacing to multiple nets for noise-constrained interconnect minimization.

131 citations


Proceedings ArticleDOI
30 Jan 2001
TL;DR: The paper gives a brief survey over a decade of R&D on coarse-grain reconfigurable hardware and related compilation techniques and points out its significance to the emerging discipline of reconfigurable computing.
Abstract: The paper gives a brief survey over a decade of R&D on coarse grain reconfigurable hardware and related compilation techniques and points out its significance to the emerging discipline of reconfigurable computing.

121 citations


Proceedings ArticleDOI
30 Jan 2001
TL;DR: The Elmore delay is extended to account for a distributed model with a distributed coupling component and an arbitrary number of lines driven by independent sources, and a technique to speed up communication through a data bus using coding is proposed.
Abstract: In this paper we study the delay associated with transmission of data through buses. Previous work in this area has presented models for delay assuming a distributed wire model or a lumped capacitive coupling between wires. In this paper we extend the Elmore delay to account for a distributed model with a distributed coupling component and an arbitrary number of lines driven by independent sources. The effect of data patterns is taken into account, allowing us to estimate the delay on a sample-by-sample basis instead of making a worst-case assumption. Using this detailed wire delay model, we propose a technique to speed up communication through a data bus using coding. The idea is to encode the data being transmitted through the bus with the goal of eliminating certain types of transitions that require a large delay. We show that by using proper encoding techniques, the bus can be sped up by a factor of 2.

118 citations
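The transitions such coding targets are those where adjacent wires switch in opposite directions, which maximizes the effective coupling capacitance in an Elmore-style coupled model. A minimal detector for that worst-case pattern (illustrative only; the paper's actual codes are not reproduced here):

```python
def has_opposing_adjacent_transition(prev, cur, width):
    """True if, between bus words prev and cur, any two adjacent wires
    switch in opposite directions -- the worst-case pattern for a coupled
    (Miller-factor) delay model. Words are plain integers, LSB = wire 0."""
    for i in range(width - 1):
        d_lo = ((cur >> i) & 1) - ((prev >> i) & 1)
        d_hi = ((cur >> (i + 1)) & 1) - ((prev >> (i + 1)) & 1)
        if d_lo * d_hi == -1:    # one wire rises while its neighbor falls
            return True
    return False
```

A code whose valid word pairs never trigger this check bounds every wire's effective coupling, which is the mechanism behind the reported speedup.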


Proceedings ArticleDOI
Shekhar Borkar1
30 Jan 2001
TL;DR: In this paper, the authors discuss a few techniques that reduce active and leakage power, and deliver higher performance, and point out some potential paradigm shifts in the design of circuits beyond 0.18 micron.
Abstract: Technology scaling will become difficult beyond 0.18 micron. For continued growth in performance, transistor density, and reduced energy per computation, circuit design will have to employ a new set of design techniques, with adequate design automation tools support. This paper discusses a few such techniques that reduce active and leakage power, and deliver higher performance. It concludes by pointing out some of the potential paradigm shifts.

108 citations


Proceedings ArticleDOI
Sani R. Nassif1
30 Jan 2001
TL;DR: In this article, the authors examine the sources and trends of process variability, the new challenges associated with the increase in within-die variability analysis, and propose a modeling and simulation methodology to deal with this variability.
Abstract: Process-induced variations are an important consideration in the design of integrated circuits. Until recently, it was sufficient to model die-to-die shifts in device performance, leading to the well known worst-case modeling and design methodology. However, current and near-future integrated circuits are large enough that device and interconnect parameter variations within the chip are as important as those same variations from chip to chip. This presents a new set of challenges for process modeling and characterization and for the associated design tools and methodologies. This paper examines the sources and trends of process variability, the new challenges associated with the increase in within-die variability analysis, and proposes a modeling and simulation methodology to deal with this variability.

85 citations


Proceedings ArticleDOI
30 Jan 2001
TL;DR: This paper presents a routability-driven clustering method for cluster-based FPGAs that packs LUTs into logic clusters while incorporating routability metrics into a cost function, which the method then minimizes.
Abstract: Routing tools consume a significant portion of the total design time. Considering routability at earlier steps of the CAD flow yields both better quality and a faster design process. In this paper we present a routability-driven clustering method for cluster-based FPGAs. Our method packs LUTs into logic clusters while incorporating routability metrics into a cost function. The objective is to minimize this routability cost function, which is consistently able to indicate improved routability. Our method yields up to 50% improvement over existing clustering methods in terms of the number of routing tracks required; the average improvement is 16.5%. The reduction in the number of tracks yields reduced routing area.

74 citations


Proceedings ArticleDOI
30 Jan 2001
TL;DR: A cosimulation environment that provides modularity, scalability, and flexibility in cosimulation of SoC designs with heterogeneous multi-processor target architectures is presented, and experiments with an IS-95 CDMA cellular phone system design show the effectiveness of the cosimulation environment.
Abstract: In this paper, we present a cosimulation environment that provides modularity, scalability, and flexibility in cosimulation of SoC designs with heterogeneous multi-processor target architectures. Our cosimulation environment is based on an object-oriented simulation environment, SystemC. Exploiting the object orientation in the SystemC representation, we achieve modularity and scalability of cosimulation by developing modular cosimulation interfaces. The object orientation also enables mixed-level cosimulation to be easily implemented, giving the designer flexibility to trade off simulation performance against accuracy. Experiments with an IS-95 CDMA cellular phone system design show the effectiveness of the cosimulation environment.

69 citations


Proceedings ArticleDOI
30 Jan 2001
TL;DR: This paper is not intended as a comprehensive review, but rather as a starting point for understanding power-aware design methodologies and techniques targeted toward embedded systems.
Abstract: Power-efficient design requires reducing power dissipation in all parts of the design and during all stages of the design process, subject to constraints on system performance and quality of service (QoS). Power-aware high-level language compilers, dynamic power management policies, memory management schemes, bus encoding techniques, and hardware design tools are needed to meet these often-conflicting design requirements. This paper reviews techniques and tools for power-efficient embedded system design, considering the hardware platform, the application software, and the system software. Design examples from an Intel StrongARM based system are provided to illustrate the concepts and the techniques. This paper is not intended as a comprehensive review, but rather as a starting point for understanding power-aware design methodologies and techniques targeted toward embedded systems.

Proceedings ArticleDOI
30 Jan 2001
TL;DR: A new congestion-driven placement method based on cell inflation is described, using probability estimation to evaluate the routing of nets and cell inflation to eliminate routing congestion.
Abstract: In this paper, we describe a new congestion-driven placement method based on cell inflation. In our approach, we use probability estimation to evaluate the routing of nets, and we make use of cell inflation to eliminate routing congestion. A further reduction in congestion is obtained by a cell-moving scheme. We have tested our algorithm on a set of sample circuits from American industry, and the results show a significant improvement in routability.

Proceedings ArticleDOI
30 Jan 2001
TL;DR: The design and implementation of a systolic RSA cryptosystem based on a modified Montgomery's algorithm and the Chinese Remainder Theorem (CRT) technique is presented; the CRT technique improves the throughput rate by up to 4 times in the best case.
Abstract: In this paper, we present the design and implementation of a systolic RSA cryptosystem based on a modified Montgomery's algorithm and the Chinese Remainder Theorem (CRT) technique. The CRT technique improves the throughput rate by up to 4 times in the best case. The processing unit of the systolic array has 100% utilization because of the proposed block interleaving technique for multiplication and square operations in the modular exponentiation algorithm. For 512-bit inputs, the number of clock cycles needed for a modular exponentiation is about 0.13M to 0.24M. The critical path delay is 6.13 ns using a 0.6 µm CMOS technology. With a 150 MHz clock, we can achieve an encryption/decryption rate of about 328 to 578 Kb/s.
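The roughly 4× best-case speedup from the CRT comes from replacing one full-size modular exponentiation with two half-size ones (exponentiation cost grows roughly cubically in operand width, and 2·(1/2)³ = 1/4). A software sketch of CRT-based RSA decryption using standard Garner recombination, not the paper's systolic hardware:

```python
def rsa_crt_decrypt(c, d, p, q):
    """Decrypt RSA ciphertext c with private exponent d and primes p, q,
    via the CRT: two half-size exponentiations plus Garner recombination."""
    dp, dq = d % (p - 1), d % (q - 1)
    mp = pow(c, dp, p)            # m mod p
    mq = pow(c, dq, q)            # m mod q
    qinv = pow(q, -1, p)          # q^-1 mod p (Python 3.8+)
    h = (qinv * (mp - mq)) % p
    return mq + h * q             # recombined m mod p*q
```

With the textbook toy parameters p=61, q=53, e=17, d=2753, this recovers the plaintext that plain `pow(c, d, p*q)` would.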

Proceedings ArticleDOI
30 Jan 2001
TL;DR: The first methods for hierarchical layout density control for process uniformity are given; the approach trades off naturally between runtime, solution quality, and output data volume.
Abstract: To improve manufacturability and performance predictability, we seek to make a layout uniform with respect to prescribed density criteria, by inserting "fill" geometries into the layout. Previous approaches for flat layout density control are not scalable due to the necessity of solving very large linear programs, the large data volume of the solution, and the impact of hierarchy-breaking on verification. In this paper, we give the first methods for hierarchical layout density control for process uniformity. Our approach trades off naturally between runtime, solution quality, and output data volume. We also allow generation of compressed GDSII of fill geometries. Our experiments show that this hybrid hierarchical filling approach saves data volume and is scalable, while yielding solution quality that is competitive with existing Monte-Carlo and linear programming based approaches.

Proceedings ArticleDOI
30 Jan 2001
TL;DR: A concurrent scheduling and binding algorithm that takes interconnect delay into account is proposed, obtaining a latency improvement of up to 54% and of 37% on average by introducing interconnect delay.
Abstract: As process technology goes into the deep submicron range, interconnect delay becomes dominant in overall system delay, occupying most of the system clock cycle time. Interconnect delay is now a crucial factor that needs to be considered even during high-level synthesis. In this paper, we propose a concurrent scheduling and binding algorithm that takes interconnect delay into account. We first define our distributed target architecture, which minimizes the effect of interconnect delay on clock cycle time. We no longer assume that interconnect delay between functional units is a part of one clock cycle; interconnect delay can span multiple clock cycles. We incorporate the concept of multi-cycle interconnect delay into the scheduling and binding process to reduce the critical path length and therefore the system latency. We show that by introducing interconnect delay, we can obtain a latency improvement of up to 54% and of 37% on average.

Proceedings ArticleDOI
30 Jan 2001
TL;DR: The physical interpretation of K is presented and it is proved that after ignoring faraway mutual K, the resultant K matrix is positive definite (stability).
Abstract: On-chip inductance extraction is difficult due to the global effect of inductance, and simulating the resulting dense partial inductance matrix is even more difficult. Furthermore, it is well known that simply discarding the smallest terms to sparsify the inductance matrix can render the partial inductance matrix indefinite and result in an unstable circuit model. Recently a new circuit element, K, has been introduced to capture the global effect of inductance by evaluating a corresponding sparse K matrix [1]. However, the reason that K has such local properties is not clear, and the positive semi-definiteness of the corresponding sparse K matrix has not been proved. In this paper, we present the physical interpretation of K. Based on the physical interpretation, we explain why the faraway mutual K can be ignored (locality) and prove that after ignoring faraway mutual K, the resultant K matrix is positive definite (stability). Together with an RKC equivalent circuit model, the locality and stability enable us to simulate RKC circuits directly and efficiently for real circuits. A new circuit simulation tool, KSim, has been developed by incorporating the new circuit element K into Berkeley SPICE. The RKC simulation matches the full partial inductance matrix simulation more closely, with significantly less computing time and memory usage, than other proposed methods, such as the shift-truncate method [2, 3].

Proceedings ArticleDOI
30 Jan 2001
TL;DR: A new graph bipartization formulation that pertains to the more technologically relevant bright-field regime is defined, which allows two degrees of freedom for layout perturbation and corresponds to node deletion in a new layout-derived graph, called the feature graph.
Abstract: We describe new graph bipartization algorithms for layout modification and phase assignment of bright-field alternating phase-shifting masks (AltPSM). The problem of layout modification for phase-assignability reduces to the problem of making a certain layout-derived graph bipartite (i.e., 2-colorable). Previous work by Berman et al. (2000) solves bipartization optimally for the dark-field alternating PSM regime. Only one degree of freedom is allowed (and relevant) for such a bipartization: edge deletion, which corresponds to increasing the spacing between features in order to remove phase conflict. Unfortunately, dark-field PSM is used only for contact layers, due to limitations of negative photoresists. Poly and metal layers are actually created using positive photoresists and bright-field masks. In this paper, we define a new graph bipartization formulation that pertains to the more technologically relevant bright-field regime. The previous work by Berman et al. does not apply to this regime. This formulation allows two degrees of freedom for layout perturbation: (i) increasing the spacing between features, and (ii) increasing the width of critical features. Each of these corresponds to node deletion in a new layout-derived graph that we define, called the feature graph. Graph bipartization by node deletion asks for a minimum-weight node set A such that deletion of A makes the graph bipartite. Unlike bipartization by edge deletion, this problem is NP-hard. We investigate several practical heuristics for the node-deletion bipartization of planar graphs, including one with a 9/4 approximation ratio. Computational experience with industrial VLSI layout benchmarks shows promising results.
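The underlying feasibility test is ordinary graph bipartization: a layout is phase-assignable exactly when its layout-derived graph is 2-colorable, i.e. contains no odd cycle. A minimal BFS 2-coloring check (the adjacency-dict encoding is an assumption; the paper's feature graph and node-deletion heuristics are not reproduced here):

```python
from collections import deque

def is_bipartite(adj):
    """BFS 2-coloring of an undirected graph given as {node: [neighbors]}.
    Returns a {node: 0/1} phase assignment, or None on an odd cycle
    (i.e., a phase conflict that would require layout perturbation)."""
    color = {}
    for start in adj:
        if start in color:
            continue
        color[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in color:
                    color[v] = 1 - color[u]
                    queue.append(v)
                elif color[v] == color[u]:
                    return None      # odd cycle: not 2-colorable
    return color
```

Node deletion then asks which minimum-weight set of nodes to remove so that this check succeeds.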

Proceedings ArticleDOI
30 Jan 2001
TL;DR: This paper presents a new algorithm for the statistical static timing analysis of a CMOS combinatorial circuit, which can treat correlations of arrival times of input signals to a logic gate and correlations of switching delays within a logic gate.
Abstract: In this paper, we present a new algorithm for the statistical static timing analysis of a CMOS combinatorial circuit, which can treat correlations of arrival times of input signals to a logic gate and correlations of switching delays in a logic gate. We model each switching delay by a normal distribution, and use a normal distribution of two stochastic variables with a coefficient of correlation for computing the distribution of the output delay of a logic gate. Since the algorithm takes the correlation into account, the time complexity is O(n·m) in the worst case, where n and m are the numbers of vertices and edges of the acyclic graph representing a given combinatorial circuit.
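The key primitive such an analysis needs is the distribution of the later of two correlated normal arrival times. A Monte-Carlo sketch of E[max(D1, D2)] for correlated normals (illustrative only; the paper propagates the distributions analytically rather than by sampling):

```python
import random

def stat_max_delay(mu1, s1, mu2, s2, rho, n=200_000, seed=1):
    """Monte-Carlo estimate of E[max(D1, D2)] where D1 ~ N(mu1, s1^2),
    D2 ~ N(mu2, s2^2), corr(D1, D2) = rho -- the 'max' step a statistical
    STA performs where two signal paths converge at a gate."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n):
        z1 = rng.gauss(0.0, 1.0)
        # build a second standard normal with correlation rho to z1
        z2 = rho * z1 + (1.0 - rho ** 2) ** 0.5 * rng.gauss(0.0, 1.0)
        acc += max(mu1 + s1 * z1, mu2 + s2 * z2)
    return acc / n
```

Note how correlation matters: for two N(10, 1) arrivals, the expected max is 10 when they are perfectly correlated but about 10.56 when independent, which is why ignoring correlation biases the timing estimate.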

Proceedings ArticleDOI
30 Jan 2001
TL;DR: The Tangram framework supports the design of asynchronous circuits in a high-level programming language and has designed several chips, for instance for pagers and smart cards, which are clearly superior to synchronous designs.
Abstract: Asynchronous CMOS circuits have the potential for very low power consumption, because they only dissipate when and where active. In addition they have favorable EMC properties, since they emit less energy, which in addition is evenly distributed over the spectrum. The Tangram framework supports the design of asynchronous circuits in a high-level programming language. Using this framework we have designed several chips, for instance for pagers and smart cards, which are clearly superior to synchronous designs.

Proceedings ArticleDOI
30 Jan 2001
TL;DR: This paper presents an efficient heuristic algorithm for lookup table (LUT) based FPGA technology mapping for power minimization in combinational circuits, exploiting the "cut enumeration" technique to generate possible mapping solutions for the sub-circuit rooted at each node.
Abstract: In this paper, we consider the problem of lookup table (LUT) based FPGA technology mapping for power minimization in combinational circuits. The problem has been previously proved to be NP-hard, and hence we present an efficient heuristic algorithm for it. The main idea of our algorithm is to exploit the "cut enumeration" technique to generate possible mapping solutions for the sub-circuit rooted at each node. However, for the consideration of both run time and memory space, only a fixed-number of solutions are selected and stored by our algorithm. To facilitate the selection process, a method that correctly calculates the estimated power consumption for each mapped sub-circuit is developed. The experimental results indicate that our algorithm reduces the average power consumption by up to 14.18%, and the average number of LUTs by up to 6.99% over an existing method.

Proceedings ArticleDOI
30 Jan 2001
TL;DR: This paper describes a C-based system LSI design system called Bach and, using the example of an MPEG-4 video codec design, summarizes its design flow, benefits, and current issues.
Abstract: In system LSI design, a desirable system is one that allows the designer to describe, partition, and verify systems, and to generate circuits efficiently. In this paper, we describe a C-based system LSI design system called Bach which we have developed. Using the example of an MPEG-4 video codec design, we summarize its design flow, benefits, and current issues.

Proceedings ArticleDOI
30 Jan 2001
TL;DR: A processor-programmable built-in self-test (BIST) scheme suitable for embedded memory testing in the system-on-a-chip (SOC) environment is presented; compared with processor-based schemes, the test time of the proposed memory BIST scheme is greatly reduced.
Abstract: We present a processor-programmable built-in self-test (BIST) scheme suitable for embedded memory testing in the system-on-a-chip (SOC) environment. The proposed BIST circuit can be programmed via an on chip microprocessor. Upon receiving the commands from the microprocessor, the BIST circuit generates pre-defined test patterns and compares the memory outputs with the expected outputs. Most popular memory test algorithms can be realized by properly programming the BIST circuit using the processor instructions. Compared with processor-based memory BIST schemes that use an assembly-language program to generate test patterns and compare the memory outputs, the test time of the proposed memory BIST scheme is greatly reduced.
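The "pre-defined test patterns" such a BIST engine generates are typically march algorithms. A software sketch of March C-, the kind of test the programmable commands could realize (the `read`/`write` callbacks standing in for the memory interface are assumptions, not the paper's command set):

```python
def march_c_minus(mem_size, read, write):
    """Run the March C- test over addresses 0..mem_size-1.
    read(addr) -> bit, write(addr, bit); returns addresses whose
    read-back did not match the expected value."""
    fails = []

    def element(addrs, ops):
        for a in addrs:
            for op, val in ops:
                if op == "w":
                    write(a, val)
                elif read(a) != val:      # op == "r": expect val
                    fails.append(a)

    up = range(mem_size)
    down = range(mem_size - 1, -1, -1)
    element(up,   [("w", 0)])             # up(w0)
    element(up,   [("r", 0), ("w", 1)])   # up(r0, w1)
    element(up,   [("r", 1), ("w", 0)])   # up(r1, w0)
    element(down, [("r", 0), ("w", 1)])   # down(r0, w1)
    element(down, [("r", 1), ("w", 0)])   # down(r1, w0)
    element(down, [("r", 0)])             # down(r0)
    return fails
```

A hardware engine realizing each march element from a short command is what saves test time versus issuing every read and write from an assembly program.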

Proceedings ArticleDOI
30 Jan 2001
TL;DR: This paper discusses several industrial timed circuits, which cannot be efficiently and accurately analyzed using traditional static timing analysis methods, and gives an overview of a timed circuit design methodology.
Abstract: In order to continue to produce circuits of increasing speeds, designers must consider aggressive circuit design styles such as self-resetting or delayed-reset domino circuits used in IBM's gigahertz processor (GUTS) and asynchronous circuits used in Intel's RAPPID instruction length decoder. These new timed circuit styles, however, cannot be efficiently and accurately analyzed using traditional static timing analysis methods. This lack of efficient analysis tools is one of the reasons for the lack of mainstream acceptance of these design styles. This paper discusses several industrial timed circuits and gives an overview of our timed circuit design methodology.

Proceedings ArticleDOI
30 Jan 2001
TL;DR: The results show that energy can be conserved in embedded real-time systems using energy-aware task scheduling and it is shown that switching times have a significant effect on the energy consumed in hard real- time systems.
Abstract: We investigate the effect of voltage-switching on task execution times and energy consumption for dual-speed hard real-time systems, and present a new approach for scheduling workloads containing periodic tasks. Our method minimizes the total energy consumed by the task set and guarantees that the deadline for every task is met. We present a mixed-integer linear programming model for the NP-complete scheduling problem and solve it for moderate-sized problem instances using a public-domain solver. For larger task sets, we present a novel extended-low-energy earliest-deadline-first (E-LEDF) scheduling algorithm and apply it to two real-life task sets. Our results show that energy can be conserved in embedded real-time systems using energy-aware task scheduling. We also show that switching times have a significant effect on the energy consumed in hard real-time systems.
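The flavor of such a scheduler can be sketched for a dual-speed processor and independent tasks: serve tasks in deadline order and run each at the slow, low-energy speed only when every remaining deadline still holds. This is an illustrative simplification, not the authors' E-LEDF; the speed and energy ratios are made up:

```python
def ledf_schedule(tasks, slow_factor=2.0, e_fast=4.0, e_slow=1.0):
    """tasks: list of (name, wcet_at_fast_speed, deadline).
    Returns (plan, finish_time, energy)."""
    tasks = sorted(tasks, key=lambda task: task[2])   # EDF order
    t = energy = 0.0
    plan = []
    for i, (name, wcet, deadline) in enumerate(tasks):
        # Feasible to run this task slow if it and every later task
        # (assumed to run at the fast speed) still meet their deadlines.
        finish = t + wcet * slow_factor
        feasible = finish <= deadline
        for _, w_next, d_next in tasks[i + 1:]:
            finish += w_next
            feasible = feasible and finish <= d_next
        if feasible:
            t += wcet * slow_factor
            energy += e_slow * wcet       # fewer joules per unit of work
            plan.append((name, "slow"))
        else:
            t += wcet
            energy += e_fast * wcet
            plan.append((name, "fast"))
    return plan, t, energy
```

A fuller model would also charge time and energy for each speed switch, which is the effect the abstract reports as significant.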

Proceedings ArticleDOI
30 Jan 2001
TL;DR: A very fast, low-complexity FIR digital filter on FPGA is presented; multipliers whose coefficients are expressed in canonic signed digit (CSD) code are realized with wired shifters, adders, and subtracters.
Abstract: A very fast and low-complexity FIR digital filter on FPGA is presented. Multipliers in the filter, whose coefficients are expressed in canonic signed digit (CSD) code, are realized with wired shifters, adders, and subtracters. The critical path is minimized by insertion of pipeline registers and is equal to the propagation delay of an adder. The number of pipeline registers is limited by using an equivalent transformation on a signal flow graph. The price paid for the 100% speedup is a 5% increase in area. The maximum sampling frequency is 78.6 MHz.
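A CSD recoding sketch shows why this saves hardware: digits are restricted to {-1, 0, +1} with no two adjacent nonzeros, so each nonzero digit of a coefficient maps to one wired shift feeding an adder or subtracter (an illustrative helper, not the paper's filter generator):

```python
def to_csd(x):
    """Canonic signed digit recoding of a positive integer.
    Returns digits (LSB first) in {-1, 0, 1} with no two adjacent
    nonzeros; each nonzero digit costs one shift-add or shift-subtract."""
    digits = []
    while x:
        if x & 1:
            d = 2 - (x & 3)    # x % 4 == 1 -> +1 ; x % 4 == 3 -> -1
            x -= d             # x - d is divisible by 4: next digit is 0
        else:
            d = 0
        digits.append(d)
        x >>= 1
    return digits
```

For example a coefficient of 7 (binary 111, three adders) recodes to 8 - 1: a single shift and one subtracter.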

Proceedings ArticleDOI
30 Jan 2001
TL;DR: The boundary constraint algorithm for general floorplan is implemented by extending the Corner Block List (CBL) - a new efficient topology representation for non-slicing floorplan by finding the necessary and sufficient characterization of the modules along the boundary represented by Corner Block list.
Abstract: In floorplanning of typical VLSI designs, some modules are required to satisfy placement constraints in the final packing. A boundary constraint is one such constraint, requiring certain modules to be packed along one of the four sides of the final floorplan: on the left, on the right, at the bottom, or at the top. We implement the boundary constraint algorithm for general floorplans by extending the Corner Block List (CBL), a new efficient topology representation for non-slicing floorplans. Our contribution is to find the necessary and sufficient characterization of the modules along the boundary as represented by the Corner Block List, so that we can check the boundary constraints by scanning the intermediate solutions in linear time during the simulated annealing process and fix the corner block list in case the constraints are violated. The experimental results are demonstrated on several MCNC benchmarks, and the performance is remarkable.

Proceedings ArticleDOI
30 Jan 2001
TL;DR: A technique for minimizing the overall sum of switching probabilities is presented; the resulting circuit, obtained by mapping the BDD to CMOS pass transistors, has shown a reduced power dissipation characteristic in simulation using a commercially available process model.
Abstract: The minimization of power consumption is an important design constraint for circuits used in portable devices. The switching activity of a circuit node in a CMOS digital circuit directly contributes to overall power dissipation. By approximating the switching activity of circuit nodes as internal switching probabilities in binary decision diagrams (BDDs), it is possible to estimate the dynamic power dissipation characteristic of circuits resulting from a structural mapping of a BDD. A technique for minimizing the overall sum of switching probabilities is presented. The method is based on efficient local operations on a BDD representing the functionality of the circuit to be realized. The resulting circuit, obtained by mapping the BDD to CMOS pass transistors, has shown a reduced power dissipation characteristic in simulation (using a commercially available process model). Experimental results on a set of MCNC benchmarks are given for this technique.

Proceedings ArticleDOI
30 Jan 2001
TL;DR: The model maps two coupled lines into two completely isolated lines with separated drivers and receivers, with no loss of accuracy during the decoupling procedure, and a closed-form time domain response is derived for an isolated transmission line using a one-segment RLC Π model.
Abstract: In this paper, we present a new decoupled model for two coupled transmission lines with consideration of the inductive effect. It maps two coupled lines into two completely isolated lines with separated drivers and receivers, and has no loss of accuracy during the decoupling procedure. Further, we derive a closed-form time domain response for an isolated transmission line using a one-segment RLC Π model. Combining the two models, we have an analytical time-domain solution for two coupled transmission lines. The model gives satisfactory results for lines up to 5000 µm long when compared to SPICE simulation over an accurate distributed RLC circuit model, and can be used to model on-chip wires in layout design, logic synthesis, and high-level design.