
Showing papers on "Computation published in 2003"


Journal ArticleDOI
TL;DR: An algorithm is presented for generating a binary search tree that allows efficient computation of piecewise affine (PWA) functions defined on a polyhedral partition. This is useful for PWA control approaches, such as explicit model predictive control, because it allows the controller to be implemented online with small computational effort.
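
A minimal sketch of the idea (not the paper's tree-construction algorithm): the offline-built binary tree stores a separating hyperplane at each internal node and an affine feedback law at each leaf, so online evaluation costs one inner product per tree level. The Node class and the tiny one-dimensional example are hypothetical.

```python
import numpy as np

class Node:
    """Internal node: hyperplane test a.x <= b; leaf: affine law u = F x + g."""
    def __init__(self, a=None, b=None, left=None, right=None, F=None, g=None):
        self.a, self.b, self.left, self.right, self.F, self.g = a, b, left, right, F, g

def evaluate_pwa(node, x):
    # Descend the tree with one hyperplane test per level, then apply the
    # affine law stored at the reached leaf (the online part of explicit MPC).
    while node.F is None:
        node = node.left if node.a @ x <= node.b else node.right
    return node.F @ x + node.g

# Hypothetical 1D controller with two regions split at x = 0.
leaf_neg = Node(F=np.array([[-1.0]]), g=np.array([0.0]))
leaf_pos = Node(F=np.array([[-2.0]]), g=np.array([0.0]))
root = Node(a=np.array([1.0]), b=0.0, left=leaf_neg, right=leaf_pos)
print(evaluate_pwa(root, np.array([0.5])))   # [-1.0]
```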

375 citations


Journal ArticleDOI
TL;DR: This work accelerates the computation of the LBM on general-purpose graphics hardware, by grouping particle packets into 2D textures and mapping the Boltzmann equations completely to the rasterization and frame buffer operations.
Abstract: The Lattice Boltzmann Model (LBM) is a physically-based approach that simulates the microscopic movement of fluid particles by simple, identical, and local rules. We accelerate the computation of the LBM on general-purpose graphics hardware, by grouping particle packets into 2D textures and mapping the Boltzmann equations completely to the rasterization and frame buffer operations. We apply stitching and packing to further improve the performance. In addition, we propose techniques, namely range scaling and range separation, that systematically transform variables into the range required by the graphics hardware and thus prevent overflow. Our approach can be extended to acceleration of the computation of any cellular automata model.
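
A rough sketch of the range-scaling idea described above, in NumPy rather than on graphics hardware: simulation variables are affinely mapped into the unit range that a fixed-point frame buffer can hold, and mapped back when read out. The bounds and array shapes are assumptions, not values from the paper.

```python
import numpy as np

def range_scale(v, v_min, v_max):
    # Affine map of a simulation variable into [0, 1], the range assumed to be
    # representable by the graphics hardware without overflow.
    return (v - v_min) / (v_max - v_min)

def range_unscale(s, v_min, v_max):
    # Inverse map applied when scaled values are read back for further computation.
    return s * (v_max - v_min) + v_min

f = np.random.uniform(-0.3, 0.7, size=(64, 64))   # hypothetical packet of distribution values
s = range_scale(f, -0.3, 0.7)
assert np.allclose(range_unscale(s, -0.3, 0.7), f)
```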

199 citations


Proceedings ArticleDOI
16 Jun 2003
TL;DR: This paper presents a formal characterization of the degree of simplification of the θ-SMA as a function of θ, and quantifies the degree to which the simplified medial axis retains the features of the original polyhedron.
Abstract: Applications of the medial axis have been limited because of its instability and algebraic complexity. In this paper, we use a simplification of the medial axis, the θ-SMA, that is parameterized by a separation angle (θ) formed by the vectors connecting a point on the medial axis to the closest points on the boundary. We present a formal characterization of the degree of simplification of the θ-SMA as a function of θ, and we quantify the degree to which the simplified medial axis retains the features of the original polyhedron. We present a fast algorithm to compute an approximation of the θ-SMA. It is based on a spatial subdivision scheme, and uses fast computation of a distance field and its gradient using graphics hardware. The complexity of the algorithm varies based on the error threshold that is used, and is a linear function of the input size. We have applied this algorithm to approximate the SMA of models with tens or hundreds of thousands of triangles. Its running time varies from a few seconds, for a model consisting of hundreds of triangles, to minutes for highly complex models.
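
The separation-angle criterion itself is easy to state in code. Below is a hedged sketch: given a candidate medial-axis point and its two closest boundary points, keep the point only if the angle between the two vectors exceeds θ. The helper names and the 60° threshold are illustrative, not taken from the paper.

```python
import numpy as np

def separation_angle(p, c1, c2):
    # Angle (degrees) between the vectors from a medial-axis point p to its two
    # closest boundary points c1 and c2.
    u, v = c1 - p, c2 - p
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

def on_theta_sma(p, c1, c2, theta_deg=60.0):
    # A point is retained on the simplified medial axis only if its separation
    # angle is at least theta.
    return separation_angle(p, c1, c2) >= theta_deg

print(on_theta_sma(np.zeros(3), np.array([1.0, 0, 0]), np.array([-1.0, 0, 0])))  # True (180 degrees)
```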

174 citations



Journal ArticleDOI
TL;DR: This work introduces and develops a mathematical model of dendrite computation in a morphological neuron based on lattice algebra and proves that any single layer morphological perceptron endowed with dendrites and their corresponding input and output synaptic processes is able to approximate any compact region in higher dimensional Euclidean space to within any desired degree of accuracy.
Abstract: Recent advances in the biophysics of computation and neurocomputing models have brought to the foreground the importance of dendritic structures in a single neuron cell. Dendritic structures are now viewed as the primary autonomous computational units capable of realizing logical operations. By changing the classic simplified model of a single neuron with a more realistic one that incorporates the dendritic processes, a novel paradigm in artificial neural networks is being established. In this work, we introduce and develop a mathematical model of dendrite computation in a morphological neuron based on lattice algebra. The computational capabilities of this enriched neuron model are demonstrated by means of several illustrative examples and by proving that any single layer morphological perceptron endowed with dendrites and their corresponding input and output synaptic processes is able to approximate any compact region in higher dimensional Euclidean space to within any desired degree of accuracy. Based on this result, we describe a training algorithm for single layer morphological perceptrons and apply it to some well-known nonlinear problems in order to exhibit its performance.
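
To give a flavour of the lattice algebra involved (additions combined with minima and maxima instead of sums of products), here is a toy dendrite response and neuron output. The weights, the single dendrite and the box-shaped accepted region are hypothetical; this is not the paper's training algorithm.

```python
import numpy as np

def dendrite_response(x, w_exc, w_inh, p=1):
    # Lattice response of one dendrite: a minimum over excitatory terms (x_i + w_exc_i)
    # and negated inhibitory terms -(x_i + w_inh_i); p = +1/-1 marks the dendrite type.
    terms = np.concatenate([x + w_exc, -(x + w_inh)])
    return p * np.min(terms)

def morphological_neuron(x, dendrites):
    # Neuron output: maximum over dendrite responses followed by a hard limiter.
    s = max(dendrite_response(x, w_exc, w_inh, p) for w_exc, w_inh, p in dendrites)
    return 1 if s >= 0 else 0

# Hypothetical weights that accept the box [-1, 1] x [-1, 1].
dendrites = [(np.array([1.0, 1.0]), np.array([-1.0, -1.0]), 1)]
print(morphological_neuron(np.array([0.2, -0.5]), dendrites))  # 1 (inside the box)
print(morphological_neuron(np.array([3.0, 0.0]), dendrites))   # 0 (outside)
```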

144 citations


Journal Article
TL;DR: Developments in numerical and analytical techniques for handling multiscale phenomena that act at scales of more than one order of magnitude are discussed.
Abstract: Many applications involve phenomena that act at scales of more than one order of magnitude. This article discusses developments in numerical and analytical techniques for handling such multiscale d ...

135 citations


Journal ArticleDOI
TL;DR: In this paper, the authors consider a number of visibility problems on terrains and present an overview of algorithms to tackle such problems on triangulated irregular networks and regular square grids.
Abstract: Several environmental applications require the computation of visibility information on a terrain. Examples are optimal placement of observation points, line-of-sight communication, and computation of hidden as well as scenic paths. Visibility computations on a terrain may involve either one or many viewpoints, and range from visibility queries (for example, testing whether a given query point is visible), to the computation of structures that encode the visible portions of the surface. In this paper, the authors consider a number of visibility problems on terrains and present an overview of algorithms to tackle such problems on triangulated irregular networks and regular square grids.
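
As a concrete illustration of the simplest visibility query mentioned above, here is a hedged sketch of a point-to-point line-of-sight test on a regular square grid: elevations along the segment are compared against the sight line. The sampling density and the toy DEM are assumptions; real implementations handle interpolation and earth curvature more carefully.

```python
import numpy as np

def line_of_sight(dem, viewer, target):
    # Sample the terrain along viewer -> target and report False as soon as a
    # cell rises above the straight sight line between the two endpoints.
    (r0, c0), (r1, c1) = viewer, target
    n = int(max(abs(r1 - r0), abs(c1 - c0)) * 4) + 1      # oversample the segment
    z0, z1 = dem[r0, c0], dem[r1, c1]
    for t in np.linspace(0.0, 1.0, n)[1:-1]:
        r, c = r0 + t * (r1 - r0), c0 + t * (c1 - c0)
        sight = z0 + t * (z1 - z0)                         # sight-line elevation at t
        if dem[int(round(r)), int(round(c))] > sight:
            return False
    return True

dem = np.zeros((20, 20)); dem[10, 10] = 5.0               # hypothetical ridge cell
print(line_of_sight(dem, (0, 0), (19, 19)))               # False: the ridge blocks the view
print(line_of_sight(dem, (0, 0), (5, 19)))                # True
```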

128 citations


Journal ArticleDOI
TL;DR: The underlying computation and implementation of such a mechanism in SpikeNET, the authors' neural network simulation package, is described and the type of model one can build is not only biologically compliant, it is also computationally efficient.
Abstract: Many biological neural network models face the problem of scalability because of the limited computational power of today's computers. Thus, it is difficult to assess the efficiency of these models to solve complex problems such as image processing. Here, we describe how this problem can be tackled using event-driven computation. Only the neurons that emit a discharge are processed and, as long as the average spike discharge rate is low, millions of neurons and billions of connections can be modelled. We describe the underlying computation and implementation of such a mechanism in SpikeNET, our neural network simulation package. The type of model one can build is not only biologically compliant, it is also computationally efficient as 400 000 synaptic weights can be propagated per second on a standard desktop computer. In addition, for large networks, we can set very small time steps (<0.01 ms) without significantly increasing the computation time. As an example, this method is applied to solve complex co...
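
A toy sketch of event-driven propagation, well short of SpikeNET itself: spikes are kept in a priority queue and only the targets of neurons that actually fire are touched, so quiescent neurons cost nothing. The threshold, reset rule and fixed 1 ms delay are assumptions for illustration.

```python
import heapq
from collections import defaultdict

def event_driven_sim(seed_spikes, weights, threshold=1.0, horizon=100.0):
    # seed_spikes: list of (time, neuron); weights[src]: dict target -> synaptic weight.
    potential = defaultdict(float)
    fired, queue = [], list(seed_spikes)
    heapq.heapify(queue)
    while queue:
        t, src = heapq.heappop(queue)
        if t > horizon:
            break
        fired.append((t, src))
        for tgt, w in weights.get(src, {}).items():       # only spiking neurons are processed
            potential[tgt] += w
            if potential[tgt] >= threshold:
                potential[tgt] = 0.0                       # reset after firing
                heapq.heappush(queue, (t + 1.0, tgt))      # hypothetical 1 ms transmission delay
    return fired

# Tiny hypothetical chain 0 -> 1 -> 2.
print(event_driven_sim([(0.0, 0)], {0: {1: 1.0}, 1: {2: 1.0}}))   # [(0.0, 0), (1.0, 1), (2.0, 2)]
```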

122 citations


Journal ArticleDOI
TL;DR: In this paper, it was shown that a very different model involving only projective measurements and quantum memory is also universal for quantum computation, in particular, no coherent unitary dynamics are involved in the computation.

112 citations


Journal ArticleDOI
01 Apr 2003
TL;DR: Researchers are seeking appropriate methods to model computing and human thought to help improve the quality of knowledge in the rapidly changing environment.
Abstract: Seeking appropriate methods to model computing and human thought.

110 citations


Journal ArticleDOI
TL;DR: An approach that departs from the mainstream of methods for the reliability analysis of SFE models is proposed: the reliability problem is treated as a classification task rather than as the computation of an integral.
Abstract: The reliability of structural systems is increasingly being assessed with regard to the spatial fluctuation of the mechanical properties as well as loads. This leads to a detailed probabilistic modeling known as stochastic finite elements (SFE). In this paper an approach that departs from the mainstream of methods for the reliability analysis of SFE models is proposed. The difference lies in that the reliability problem is treated as a classification task and not as the computation of an integral. To this purpose use is made of a kernel method for classification, which is the object of intensive research in pattern recognition, image analysis, and other fields. A greedy sequential procedure requiring a minimal number of limit state evaluations is developed. The algorithm is based on the key concept of support vectors, which guarantee that only the points closest to the decision rule need to be evaluated. The numerical examples show that this algorithm allows obtaining a highly accurate approximation of the failure probability of SFE models with a minimal number of calls of the finite element solver and also a fast computation.
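
A rough sketch of the classification viewpoint (not the paper's greedy, support-vector-driven sampling scheme): a support vector classifier is trained on a modest number of limit-state evaluations and then used to label a large Monte Carlo population as safe or failed, so the failure probability is estimated without further calls to the expensive solver. The toy limit-state function and the use of scikit-learn are assumptions.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def limit_state(x):
    # Hypothetical limit-state function g(x); failure corresponds to g(x) < 0.
    return 2.0 - x[:, 0] - x[:, 1]

# A small design of experiments stands in for the expensive finite element runs.
X_train = rng.normal(size=(200, 2))
y_train = (limit_state(X_train) < 0).astype(int)
clf = SVC(kernel="rbf", C=10.0).fit(X_train, y_train)

# Classify a large Monte Carlo population using only the cheap surrogate.
X_mc = rng.normal(size=(100_000, 2))
print("estimated failure probability:", clf.predict(X_mc).mean())
```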

Journal ArticleDOI
TL;DR: The progress that has been made toward building a minimal biped system is described, and a significant portion of the computation is embedded in physical devices, such as capacitors and transistors, to underline the potential power of emphasizing the understanding of physical computation.
Abstract: In biological systems, the task of computing a gait trajectory is shared between the biomechanical and nervous systems. We take the perspective that both of these seemingly different computations are examples of physical computation. Here we describe the progress that has been made toward building a minimal biped system that illustrates this idea. We embed a significant portion of the computation in physical devices, such as capacitors and transistors, to underline the potential power of emphasizing the understanding of physical computation. We describe results in the exploitation of physical computation by (1) using a passive knee to assist in dynamics computation, (2) using an oscillator to drive a monoped mechanism based on the passive knee, (3) using sensory entrainment to coordinate the mechanics with the neural oscillator, (4) coupling two such systems together mechanically at the hip and computationally via the resulting two oscillators to create a biped mechanism, and (5) demonstrating the resulting gait generation in the biped mechanism.
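
Item (4) above, coupling the two legs through their oscillators, can be mimicked in a few lines of ordinary software, purely to illustrate the coordination principle; the paper realises it with analog circuitry and mechanical coupling. The phase-oscillator model, gains and time step below are all assumptions.

```python
import numpy as np

def coupled_hip_oscillators(steps=2000, dt=0.001, omega=2 * np.pi, k=5.0):
    # Two phase oscillators, one per leg, pulled toward antiphase by a coupling term;
    # each phase can be read out as a swing/stance command for its leg.
    th = np.array([0.0, 0.3])                          # hypothetical initial phases
    for _ in range(steps):
        coupling = k * np.sin(th[::-1] - th - np.pi)   # antiphase locking term
        th = th + dt * (omega + coupling)
    return th

th = coupled_hip_oscillators()
print((th[1] - th[0]) % (2 * np.pi))                   # close to pi: the legs settle into antiphase
```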

Journal ArticleDOI
TL;DR: A highly efficient gradient-based algorithm to determine the subpixel registration of the DSCM is presented, and practical deformation measurements with rigid body translation and rotation as well as an experiment on biomechanics certify the feasibility and validity of the algorithm.
Abstract: The digital speckle correlation method (DSCM) has been widely used in experimental mechanics to obtain surface deformation fields. One of the challenges in practical applications is how to obtain high accuracy with far less computational complexity. To determine the subpixel registration of the DSCM, a highly efficient gradient-based algorithm is developed in this paper. The principle is described and four different modes of the algorithm are given. Based on computer-simulated images, the optimal mode of the algorithm is verified through the comparison of computation time, optimal subset-region size and sensitivity of the four modes. The influences of speckle-granule size and speckle-granule density on accuracy are studied, and a quantitative estimation of the optimal speckle-granule size range is obtained. As applications of this method, practical deformation measurements with rigid body translation and rotation as well as an experiment on biomechanics are presented to certify the feasibility and the validity of the algorithm.
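
A generic gradient-based subpixel estimate, not any of the paper's four modes: the deformed subset is linearised along the image gradients and a small displacement is obtained from a least-squares solve over the subset. The smooth synthetic pattern and the sub-pixel shifts are hypothetical test data.

```python
import numpy as np

def subpixel_shift(f, g):
    # Linearise f(x) ~ g(x) + gx*u + gy*v over the subset and solve for (u, v)
    # in the least-squares sense; u, v are the subpixel displacement components.
    gy, gx = np.gradient(g)
    A = np.stack([gx.ravel(), gy.ravel()], axis=1)
    b = (f - g).ravel()
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v

# Hypothetical smooth pattern shifted by about 0.3 px in x and 0.1 px in y.
yy, xx = np.mgrid[0:32, 0:32]
pattern = lambda x, y: np.sin(0.7 * x) * np.cos(0.5 * y)
f = pattern(xx, yy)
g = pattern(xx - 0.3, yy - 0.1)
print(subpixel_shift(f, g))   # roughly (0.3, 0.1)
```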

Proceedings ArticleDOI
15 Nov 2003
TL;DR: A new adaptive fast multipole algorithm that is kernel-independent in the sense that the evaluation of pairwise interactions does not rely on any analytic expansions, but only utilizes kernel evaluations, and its parallel implementation logically separates the computation and communication phases.
Abstract: We present a new adaptive fast multipole algorithm and its parallel implementation. The algorithm is kernel-independent in the sense that the evaluation of pairwise interactions does not rely on any analytic expansions, but only utilizes kernel evaluations. The new method provides the enabling technology for many important problems in computational science and engineering. Examples include viscous flows, fracture mechanics and screened Coulombic interactions. Our MPI-based parallel implementation logically separates the computation and communication phases to avoid synchronization in the upward and downward computation passes, and thus allows us to fully exploit computation and communication overlapping. We measure isogranular and fixed-size scalability for a variety of kernels on the Pittsburgh Supercomputing Center's TCS-1 Alphaserver on up to 3000 processors. We have solved viscous flow problems with up to 2.1 billion unknowns and we have achieved 1.6 Tflops/s peak performance and 1.13 Tflops/s sustained performance.
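
Kernel independence is easiest to see against the O(N^2) direct sum that the fast multipole method approximates: the kernel enters only through point evaluations, so swapping in a different interaction (here a hypothetical screened-Coulomb/Yukawa kernel) changes nothing else. This reference sum is a sketch, not the paper's algorithm.

```python
import numpy as np

def direct_sum(targets, sources, charges, kernel):
    # O(N*M) reference evaluation; the FMM approximates this sum while touching the
    # kernel only through black-box evaluations like kernel(x, sources).
    out = np.zeros(len(targets))
    for i, x in enumerate(targets):
        out[i] = np.sum(charges * kernel(x, sources))
    return out

def screened_coulomb(x, sources, lam=1.0):
    # Hypothetical kernel exp(-lam*r)/r; the self term (r = 0) is skipped.
    r = np.linalg.norm(sources - x, axis=-1)
    r = np.where(r == 0.0, np.inf, r)
    return np.exp(-lam * r) / r

rng = np.random.default_rng(0)
pts, q = rng.random((500, 3)), rng.random(500)
print(direct_sum(pts[:3], pts, q, screened_coulomb))
```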

Book ChapterDOI
25 Aug 2003
TL;DR: It is shown that real-time variational computation of optic flow fields is possible when appropriate methods are combined with modern numerical techniques, and the CLG method is considered, a recent variational technique that combines the quality of the dense flow fields of the Horn and Schunck approach with the noise robustness of the Lucas–Kanade method.
Abstract: Variational methods for optic flow computation have the reputation of producing good results at the expense of being too slow for real-time applications. We show that real-time variational computation of optic flow fields is possible when appropriate methods are combined with modern numerical techniques. We consider the CLG method, a recent variational technique that combines the quality of the dense flow fields of the Horn and Schunck approach with the noise robustness of the Lucas–Kanade method. For the linear system of equations resulting from the discretised Euler–Lagrange equations, we present a fast full multigrid scheme in detail. We show that under realistic accuracy requirements this method is 175 times more efficient than the widely used Gauss–Seidel algorithm. On a 3.06 GHz PC, we have computed 27 dense flow fields of size 200 × 200 pixels within a single second.
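
For orientation, here is the plain Horn–Schunck fixed-point iteration that the CLG model generalises; it is the slow, Jacobi-style relaxation that the paper's full multigrid scheme is designed to outperform, not the CLG method or the multigrid solver itself. The synthetic image pair and parameters are assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def horn_schunck(I1, I2, alpha=0.5, iters=200):
    # Classic Horn-Schunck relaxation: the flow is repeatedly set to its neighbourhood
    # average corrected along the image gradient (one Jacobi-like sweep per iteration).
    Iy, Ix = np.gradient(I1)
    It = I2 - I1
    u = np.zeros_like(I1)
    v = np.zeros_like(I1)
    for _ in range(iters):
        u_bar, v_bar = uniform_filter(u, 3), uniform_filter(v, 3)
        num = Ix * u_bar + Iy * v_bar + It
        den = alpha ** 2 + Ix ** 2 + Iy ** 2
        u = u_bar - Ix * num / den
        v = v_bar - Iy * num / den
    return u, v

# Hypothetical pair: a smooth pattern translated by one pixel in x.
yy, xx = np.mgrid[0:64, 0:64]
I1 = np.sin(0.3 * xx) + np.cos(0.2 * yy)
I2 = np.sin(0.3 * (xx - 1)) + np.cos(0.2 * yy)
u, v = horn_schunck(I1, I2)
print(u.mean(), v.mean())   # u close to 1, v close to 0
```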

Journal ArticleDOI
TL;DR: Two new algorithms, namely coefficient method and p-recursive method, are proposed, to accelerate the computation of pseudo-Zernike moments and the performance of the proposed algorithms on moment computation and image reconstruction is experimentally verified.
Abstract: Pseudo-Zernike moments have better feature representation capability, and are more robust to image noise, than conventional Zernike moments. However, due to the computation complexity of pseudo-Zernike polynomials, pseudo-Zernike moments are yet to be extensively used as feature descriptors as compared to Zernike moments. In this paper, we propose two new algorithms, namely the coefficient method and the p-recursive method, to accelerate the computation of pseudo-Zernike moments. The coefficient method calculates polynomial coefficients recursively. It eliminates the need of using factorial functions. Individual orders or indices of pseudo-Zernike moments can be derived independently, which is useful if selected orders or indices of moments are needed as pattern features. The p-recursive method uses a combination of lower order polynomials to derive higher order polynomials with the same index q. Fast computation is achieved because it eliminates the requirements of calculating the polynomial coefficients, B_pqk, and the powers of the radius, r^k, in each polynomial. The performance of the proposed algorithms on moment computation and image reconstruction, as compared to that of the present methods, is experimentally verified using a set of binary and grayscale images.
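
For reference, the direct, factorial-based form of the pseudo-Zernike radial polynomial (as it is commonly stated) is sketched below; it is exactly the kind of repeated factorial and power evaluation that the coefficient and p-recursive methods are designed to avoid. Treat the formula as a textbook recollection rather than a statement of the paper's notation.

```python
from math import factorial

def pseudo_zernike_radial(p, q, r):
    # Direct evaluation of R_pq(r) = sum_k (-1)^k (2p+1-k)! /
    #   [k! (p+|q|+1-k)! (p-|q|-k)!] * r^(p-k), the costly baseline form.
    q = abs(q)
    value = 0.0
    for k in range(p - q + 1):
        coeff = ((-1) ** k * factorial(2 * p + 1 - k)
                 / (factorial(k) * factorial(p + q + 1 - k) * factorial(p - q - k)))
        value += coeff * r ** (p - k)
    return value

print(pseudo_zernike_radial(1, 0, 0.5))   # 3*0.5 - 2 = -0.5
print(pseudo_zernike_radial(1, 1, 0.5))   # 0.5
```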

Journal ArticleDOI
TL;DR: This paper designs an efficient algorithm by rearranging the recursion formula in the reflection matrix approach to compute plane wave seismograms and the Frechet derivative matrix as a by-product of forward modeling, and demonstrates that in a gradient-descent-...
Abstract: Seismic waveform inversion is a highly challenging task. Nonlinearity, nonuniqueness, and robustness issues tend to make the problem computationally intractable. We have developed a simple regularized Gauss-Newton-type algorithm for the inversion of seismic data that addresses several of these issues. The salient features of our algorithm include an efficient approach to sensitivity computation, a strategy for band-limiting the Jacobian matrix, and a novel approach to computing regularization weight that is iteration adaptive. In this paper, we first review various forward modeling and differential seismogram computation algorithms and then evaluate different strategies for choosing the regularization weight. Under the assumption of locally 1D earth models, we design an efficient algorithm by rearranging the recursion formula in the reflection matrix approach to compute plane wave seismograms and the Frechet derivative matrix as a by-product of forward modeling. We then demonstrate that in a gradient-descent-...
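
A compact sketch of a single regularised Gauss-Newton update of the kind described above, on a deliberately tiny linear toy problem; the band-limiting of the Jacobian and the adaptive choice of the regularisation weight from the paper are not reproduced, and all names and numbers are hypothetical.

```python
import numpy as np

def gauss_newton_step(m, d, forward, jacobian, mu, L):
    # One update minimising ||d - f(m)||^2 + mu * ||L m||^2, linearised about m.
    r = d - forward(m)                    # data residual
    J = jacobian(m)                       # sensitivity (Frechet derivative) matrix
    A = J.T @ J + mu * (L.T @ L)
    g = J.T @ r - mu * (L.T @ L) @ m
    return m + np.linalg.solve(A, g)

# Hypothetical linear problem d = G m with an identity regularisation operator.
G = np.array([[1.0, 0.5], [0.2, 1.0], [0.3, 0.3]])
m_true = np.array([1.0, -2.0])
d = G @ m_true
m = np.zeros(2)
for _ in range(5):
    m = gauss_newton_step(m, d, lambda m: G @ m, lambda m: G, mu=1e-3, L=np.eye(2))
print(m)   # close to [1, -2]
```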

Journal ArticleDOI
TL;DR: A method for the exact computation of the parameters which determine the optimal periodic policy and two different approximations, one based on an approximation of the value-function in the dynamic programming problem while the other based on a deterministic model are provided.

Book ChapterDOI
02 Jun 2003
TL;DR: This paper discusses the multiresolution formulation of quantum chemistry including application to density functional theory and developments that make practical computation in three and higher dimensions.
Abstract: Multiresolution analysis in multiwavelet bases is being investigated as an alternative computational framework for molecular electronic structure calculations. The features that make it attractive include an orthonormal basis, fast algorithms with guaranteed precision and sparse representations of many operators (e.g., Green functions). In this paper, we discuss the multiresolution formulation of quantum chemistry including application to density functional theory and developments that make practical computation in three and higher dimensions.
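
To convey the flavour of a multiresolution representation with controlled precision, here is a toy one-dimensional Haar decomposition with coefficient thresholding; the paper works with multiwavelet bases in three and more dimensions, so this is only a loose analogy, and the test function and tolerance are assumptions.

```python
import numpy as np

def haar_decompose(f, tol=1e-3):
    # Repeatedly split into scaling (average) and wavelet (detail) coefficients and
    # drop details below the tolerance, giving a sparse representation whose
    # truncation error is controlled by tol.
    levels, c = [], f.copy()
    while len(c) > 1:
        s = (c[0::2] + c[1::2]) / np.sqrt(2)
        d = (c[0::2] - c[1::2]) / np.sqrt(2)
        d[np.abs(d) < tol] = 0.0
        levels.append(d)
        c = s
    return c, levels

x = np.linspace(0.0, 1.0, 256)
f = np.exp(-50.0 * (x - 0.5) ** 2)          # smooth function: most fine-scale details vanish
coarse, details = haar_decompose(f)
kept = sum(np.count_nonzero(d) for d in details)
print(kept, "of 255 detail coefficients retained")
```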

Journal Article
TL;DR: In this paper, the authors propose a secure MPC protocol over an arbitrary finite ring, an algebraic object with a much less nice structure than a field, and obtain efficient MPC protocols requiring only a black-box access to the ring operations and to random ring elements.
Abstract: Secure multi-party computation (MPC) is an active research area, and a wide range of literature can be found nowadays suggesting improvements and generalizations of existing protocols in various directions. However, all current techniques for secure MPC apply to functions that are represented by (boolean or arithmetic) circuits over finite fields. We are motivated by two limitations of these techniques: - GENERALITY. Existing protocols do not apply to computation over more general algebraic structures (except via a brute-force simulation of computation in these structures). - EFFICIENCY. The best known constant-round protocols do not efficiently scale even to the case of large finite fields. Our contribution goes in these two directions. First, we propose a basis for unconditionally secure MPC over an arbitrary finite ring, an algebraic object with a much less nice structure than a field, and obtain efficient MPC protocols requiring only a black-box access to the ring operations and to random ring elements. Second, we extend these results to the constant-round setting, and suggest efficiency improvements that are relevant also for the important special case of fields. We demonstrate the usefulness of the above results by presenting a novel application of MPC over (non-field) rings to the round-efficient secure computation of the maximum function.
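
One ingredient is easy to illustrate: black-box use of ring operations in additive secret sharing over Z_{2^32}, a ring with zero divisors rather than a field. The sketch below covers only sharing and local addition of shares, not the paper's full multiparty protocol or its multiplication subprotocols.

```python
import secrets

MOD = 2 ** 32                   # the ring Z_{2^32}: e.g. 2 has no multiplicative inverse

def share(x, n=3):
    # Additive secret sharing: n random-looking shares that sum to x modulo 2^32.
    shares = [secrets.randbelow(MOD) for _ in range(n - 1)]
    shares.append((x - sum(shares)) % MOD)
    return shares

def reconstruct(shares):
    return sum(shares) % MOD

def add_shared(a_shares, b_shares):
    # Addition of shared values needs only local, black-box ring additions.
    return [(a + b) % MOD for a, b in zip(a_shares, b_shares)]

a, b = 123456789, 987654321
print(reconstruct(add_shared(share(a), share(b))) == (a + b) % MOD)   # True
```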


Patent
19 Dec 2003
TL;DR: In this article, a decoder of LDPC codewords using the iterative belief propagation algorithm stores a posteriori information on variables, and a shuffle device transfers the information from the second computation device to the storing device.
Abstract: A decoder of LDPC codewords using the iterative belief propagation algorithm stores a posteriori information on variables. An updating device updates the a posteriori information on variables, and a first computation device computes variable to constraint messages from the a posteriori information on variables and from the variable to constraint messages of the previous iteration. A second computation device computes constraint to variable messages from the variable to constraint messages computed by the first computation device. A further computation device updates the a posteriori information on variables. A shuffle device transfers the a posteriori information on variables to the first computation device, and a further shuffle device transfers information from the second computation device to the storing device. The decoder further includes a device for compression-storage-decompression of the constraint to variable messages. The disclosure also relates to a corresponding method, computer program and system.
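
The message computations these devices implement can be written compactly in software. Below is a plain min-sum belief-propagation sketch of the variable-to-constraint and constraint-to-variable updates and the a posteriori refresh, with a tiny (7,4) Hamming parity-check matrix standing in for an LDPC code; none of the hardware shuffling, storage or compression from the patent is modelled.

```python
import numpy as np

def min_sum_decode(H, llr, iters=20):
    m, n = H.shape
    v2c = np.where(H, llr, 0.0)                        # variable -> constraint messages
    for _ in range(iters):
        # constraint -> variable: sign product and minimum magnitude of the other inputs
        c2v = np.zeros_like(v2c)
        for i in range(m):
            idx = np.flatnonzero(H[i])
            for j in idx:
                others = v2c[i, idx[idx != j]]
                c2v[i, j] = np.prod(np.sign(others)) * np.min(np.abs(others))
        post = llr + c2v.sum(axis=0)                   # a posteriori information on variables
        v2c = np.where(H, post - c2v, 0.0)             # extrinsic variable -> constraint update
        hard = (post < 0).astype(int)
        if not np.any(H @ hard % 2):                   # all parity checks satisfied
            break
    return hard

H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
llr = np.array([2.0, 1.5, -0.4, 2.2, 1.8, 1.1, 0.9])  # hypothetical LLRs, one unreliable bit
print(min_sum_decode(H, llr))                          # the all-zero codeword is recovered
```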

Journal Article
TL;DR: In this paper, the authors explore the latency and energy tradeoffs introduced by the heterogeneity of sensor nodes in the network.
Abstract: Explored the latency and energy tradeoffs introduced by the heterogeneity of sensor nodes in the network.

Journal ArticleDOI
TL;DR: In this paper, the authors present an inversion method for 3D electrical imaging in media with an inhomogeneous and anisotropic conductivity distribution, which is formulated as a functional optimization with an error functional containing terms measuring data misfit and model covariance by means of smoothness, anisotropy and deviation from a starting model.
Abstract: We present an inversion method for 3D electrical imaging in media with an inhomogeneous and anisotropic conductivity distribution. The conductivity distribution is discretized via finite elements and is described by a second-order tensor at each finite element node. The inversion method is formulated as a functional optimization with an error functional containing terms measuring data misfit and model covariance by means of smoothness, anisotropy and deviation from a starting model. Including the model covariance information overcomes the problem of ill-posedness at the expense of limiting the allowed models to the class of models which are compatible with the provided model covariance information. The discretized form of the error functional is minimized by a Levenberg–Marquardt type method using an iterative preconditioned conjugate gradient solver. The use of an iterative solver allows one to bypass the actual computation of the Jacobian or an inverse system matrix. The use of a memory efficient iterative solver together with the implementation on parallel computers allows large-scale inverse problems, comprising several hundred thousand nodes with hundreds of sources and receivers, to be solved. The new method is tested using computer-generated data from two- and three-dimensional synthetic models. For each inversion a choice of penalty parameters, gauging the level of model covariance information imposed, has to be made and the level of regularization required is hard to estimate. We find that running a suite of inversions with varying penalty parameters and subsequent examination of the results (including inspection of residual maps) offers a viable method for choosing appropriate numerical values for the penalty levels. In the applications we found the inversion process to be highly non-linear. Inversion models from intermediate steps of the iterative inversion show structure in places that do not exhibit structure in the true model and only at later iterations do anomalies move to the correct location in the modelling domain. This result indicates that linearized inversions that fail to re-linearize during the inversion process will fail to find meaningful inversion images. The inversion images achieved using the new method recover the important features of the true models, including the approximate magnitudes of the conductivity anomalies and the magnitudes and directions of anisotropy anomalies. The inversion images are generally 'blurred', that is sharp edges are smoothed, and the recovered magnitudes of conductivity, anisotropy and anisotropy direction are generally under-estimated.
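
The Jacobian-free aspect can be illustrated in a few lines: a Krylov solver only needs matrix-vector products, so a normal-equations operator of the form J^T J + lam*I can be applied through two callbacks without ever assembling J. The toy operator, damping value and use of SciPy's LinearOperator are assumptions; the paper's preconditioning and model-covariance terms are omitted.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

# Hypothetical forward operator exposed only through products J v and J^T v.
n = 200
diag = np.linspace(1.0, 3.0, n)
J_mv = lambda v: diag * v + 0.1 * np.roll(v, 1)
JT_mv = lambda v: diag * v + 0.1 * np.roll(v, -1)

lam = 0.5                                     # Levenberg-Marquardt style damping weight
A = LinearOperator((n, n), matvec=lambda v: JT_mv(J_mv(v)) + lam * v)

m_true = np.sin(np.linspace(0.0, 3.0, n))
rhs = A.matvec(m_true)                        # synthetic right-hand side
m, info = cg(A, rhs)                          # preconditioning omitted for brevity
print(info, np.max(np.abs(m - m_true)))       # 0 (converged), small error
```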

Journal ArticleDOI
TL;DR: This work shows a natural model of neural computing that gives rise to hyper-computation, and proposes it as standard in the field of analog computation, functioning in a role similar to that of the universal Turing machine in digital computation.
Abstract: ``Neural computing'' is a research field based on perceiving the human brain as an information system. This system reads its input continuously via the different senses, encodes data into various biophysical variables such as membrane potentials or neural firing rates, stores information using different kinds of memories (e.g., short-term memory, long-term memory, associative memory), performs some operations called ``computation'', and outputs onto various channels, including motor control commands, decisions, thoughts, and feelings. We show a natural model of neural computing that gives rise to hyper-computation. Rigorous mathematical analysis is applied, explicating our model's exact computational power and how it changes with the change of parameters. Our analog neural network allows for supra-Turing power while keeping track of computational constraints, and thus embeds a possible answer to the superiority of the biological intelligence within the framework of classical computer science. We further propose it as standard in the field of analog computation, functioning in a role similar to that of the universal Turing machine in digital computation. In particular an analog of the Church-Turing thesis of digital computation is stated where the neural network takes place of the Turing machine.

Journal ArticleDOI
TL;DR: This study shows that various forms of caps employed in a recently developed molecular fractionation scheme for full quantum mechanical computation of protein–molecule interaction energy all give consistently accurate energies compared to the corresponding full system calculation with only small deviations.
Abstract: We present a systematic study of numerical accuracy of various forms of molecular caps that are employed in a recently developed molecular fractionation scheme for full quantum mechanical computation of protein-molecule interaction energy. A previously studied pentapeptide (Gly-Ser-Ala-Asp-Val) or P5 interacting with a water molecule is used as a benchmark system for numerical testing. One-dimensional potential energy curves are generated for a number of peptide-water interaction pathways. Our study shows that various forms of caps all give consistently accurate energies compared to the corresponding full system calculation with only small deviations. We also tested the accuracy of cutting peptide backbone at different positions and comparisons of results are presented.

Proceedings ArticleDOI
09 Dec 2003
TL;DR: This paper proposes an algorithm for the computation of shortest-path that reduces the problem to an optimization over a finite graph, and proposes a novel "honeycomb" sampling algorithm that minimizes the cost penalty introduced by discretization.
Abstract: This paper addresses the weighted anisotropic shortest-path problem on a continuous domain, i.e., the computation of a path between two points that minimizes the line integral of a cost-weighting function along the path. The cost-weighting depends both on the instantaneous position and direction of motion. We propose an algorithm for the computation of shortest-path that reduces the problem to an optimization over a finite graph. This algorithm restricts the search to paths formed by the concatenation of straight-line segments between points, from a suitably chosen discretization of the continuous region. To maximize efficiency, the discretization of the continuous region should not be uniform. We propose a novel "honeycomb" sampling algorithm that minimizes the cost penalty introduced by discretization. The resulting path is not optimal but the cost penalty can be made arbitrarily small at the expense of increased computation. This methodology is applied to the computation of paths for groups of unmanned air vehicles (UAVs) that minimize the risk of being destroyed by ground defenses. We show that this problem can be formulated as a weighted anisotropic shortest-path optimization and show that the algorithm proposed can efficiently produce low-risk paths.
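
A stripped-down sketch of the graph reduction: edge weights are numerical line integrals of the direction-dependent cost, and Dijkstra's algorithm finds the cheapest concatenation of segments. The point set, edge list and the toy anisotropic cost (cheap motion in the +x direction) are all hypothetical, and no honeycomb sampling is performed.

```python
import heapq
import numpy as np

def segment_cost(p, q, cost, samples=10):
    # Midpoint-rule approximation of the line integral of cost(position, direction) along p -> q.
    d = q - p
    length = np.linalg.norm(d)
    ts = (np.arange(samples) + 0.5) / samples
    return sum(cost(p + t * d, d / length) for t in ts) * length / samples

def shortest_path_cost(points, edges, cost, src, dst):
    # Dijkstra over the discretized graph; each edge direction gets its own weight
    # because the cost depends on the direction of motion.
    graph = {i: [] for i in range(len(points))}
    for i, j in edges:
        graph[i].append((j, segment_cost(points[i], points[j], cost)))
        graph[j].append((i, segment_cost(points[j], points[i], cost)))
    dist, pq = {src: 0.0}, [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            return d
        if d > dist.get(u, np.inf):
            continue
        for v, w in graph[u]:
            if d + w < dist.get(v, np.inf):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return np.inf

cost = lambda x, direction: 1.0 + 2.0 * (1.0 - direction[0])   # moving along +x is cheapest
points = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [2.0, 0.0]])
edges = [(0, 1), (1, 2), (1, 3), (0, 2), (2, 3)]
print(shortest_path_cost(points, edges, cost, 0, 3))            # 2.0 via the straight x-axis route
```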

Journal ArticleDOI
TL;DR: A novel computation-aware scheme is proposed, which first dynamically determines the target amount of computation power allocated to a frame, and then allocates this to each block in a computation-distortion-optimized manner.
Abstract: Many fast block-matching algorithms (BMAs) reduce the computational complexity of motion estimation by sophisticatedly inspecting a subset of checking points, and stop only when all those checking points have been examined. This means that the searching process for each current block cannot be interrupted, even when it is performed in a software-based computation environment. Our main goal is to allow the searching process to stop once a specified amount of computation has been performed. A novel computation-aware scheme is proposed, which first dynamically determines the target amount of computation power allocated to a frame, and then allocates this to each block in a computation-distortion-optimized manner. We propose a rate-control-like procedure and a predicted computation-distortion benefit heuristic to realize this scheme. Conventional BMAs, such as full-search block matching, three-step search, new three-step search, four-step search, and diamond search, can be transformed into their corresponding computation-aware BMA versions. In our simulations, the resulting computation-aware BMAs not only exhibit higher efficiency than conventional BMAs, but also allow the motion estimation to terminate after any specified amount of computation has been performed (in units of checking points).
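
A minimal sketch of the interruptible search idea (not the paper's rate-control or benefit-prediction machinery): candidate motion vectors are visited in a most-promising-first order and the loop simply stops when the block's budget of checking points is spent. Block size, search radius and the synthetic frames are assumptions.

```python
import numpy as np

def sad(cur, ref, bx, by, dx, dy, bs=8):
    # Sum of absolute differences between the current block and a displaced reference block.
    ref_blk = ref[by + dy: by + dy + bs, bx + dx: bx + dx + bs].astype(int)
    cur_blk = cur[by: by + bs, bx: bx + bs].astype(int)
    return np.abs(cur_blk - ref_blk).sum()

def budgeted_search(cur, ref, bx, by, budget, radius=7):
    # Visit candidates in order of increasing |dx|+|dy| and stop once the allotted
    # number of checking points for this block has been consumed.
    candidates = sorted(((dx, dy) for dy in range(-radius, radius + 1)
                                  for dx in range(-radius, radius + 1)),
                        key=lambda d: abs(d[0]) + abs(d[1]))
    best, best_cost, used = (0, 0), np.inf, 0
    for dx, dy in candidates:
        if used >= budget:
            break                                        # computation budget exhausted
        cost = sad(cur, ref, bx, by, dx, dy)
        used += 1
        if cost < best_cost:
            best, best_cost = (dx, dy), cost
    return best, best_cost, used

rng = np.random.default_rng(0)
ref = rng.integers(0, 255, (64, 64), dtype=np.uint8)
cur = np.roll(ref, shift=2, axis=1)                      # true motion of 2 pixels in x
print(budgeted_search(cur, ref, 16, 16, budget=30))      # finds (-2, 0) with zero SAD
```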

Journal ArticleDOI
TL;DR: In this paper, the forward kinematics of the general Stewart-Gough platform using dialytic elimination is presented, which leads to a 40th degree univariate equation from a constructed 15×15 Sylvester's matrix which is relatively small in size.

Journal ArticleDOI
15 Apr 2003
TL;DR: In this article, a state-space technique for the computation of electromagnetic transients on transmission lines with nonlinear components is presented, where state equations for the nonlinear system are derived and these equations are converted to a set of algebraic equations using the trapezoidal rule of integration.
Abstract: A state-space technique (SST) for the computation of electromagnetic transients on transmission lines with nonlinear components is presented. State equations for the nonlinear system are derived and these equations are converted to a set of algebraic equations using the trapezoidal rule of integration. A state-space formulation and numerical solution steps are described. To show the validity of the proposed method, two illustrative examples are given. In the first example, lightning surges on a single-phase line with corona are analysed. The effect of the presence of a surge arrester is investigated in the second example. The results obtained using the state-space technique are compared with those obtained using the electromagnetic transients program (EMTP) and with experimental results available in the literature. The proposed method is accurate, numerically stable and suitable for the computation of electromagnetic transients on transmission lines with several nonlinearities.
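
A hedged sketch of the trapezoidal companion step for a linear network (the paper's corona and arrester examples additionally require solving nonlinear algebraic equations at each step): for x' = A x + B u, each step solves (I - (h/2) A) x_{k+1} = (I + (h/2) A) x_k + (h/2) B (u_k + u_{k+1}). The series RLC parameters below are hypothetical.

```python
import numpy as np

def trapezoidal_lti(A, B, x0, u, h):
    # Trapezoidal-rule time stepping of x' = A x + B u(t); each step is the solution
    # of a (here linear) algebraic system, as in companion-model transient solvers.
    I = np.eye(A.shape[0])
    lhs, rhs = I - 0.5 * h * A, I + 0.5 * h * A
    X = [np.asarray(x0, float)]
    for k in range(len(u) - 1):
        b = rhs @ X[-1] + 0.5 * h * B * (u[k] + u[k + 1])
        X.append(np.linalg.solve(lhs, b))
    return np.array(X)

# Hypothetical series RLC energisation: R = 5 ohm, L = 10 mH, C = 1 uF, 100 V step source.
R, L, C = 5.0, 10e-3, 1e-6
A = np.array([[-R / L, -1.0 / L], [1.0 / C, 0.0]])   # states: inductor current, capacitor voltage
B = np.array([1.0 / L, 0.0])
t = np.arange(0.0, 20e-3, 1e-6)
u = np.full_like(t, 100.0)
X = trapezoidal_lti(A, B, [0.0, 0.0], u, h=1e-6)
print(X[-1])   # approaches the steady state: current near 0 A, capacitor voltage near 100 V
```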