
Showing papers presented at "Conference on Scientific Computing in 2006"


Proceedings Article
01 Jan 2006
TL;DR: The proposed model combines the teacher assignment and course scheduling problems simultaneously, which makes the entire model more complex, but it can still be solved for several randomly generated data sets of sizes comparable to those encountered in an institution.
Abstract: This paper considers a timetabling problem and describes a mathematical programming model for solving it. The proposed model combines the teacher assignment and course scheduling problems simultaneously, which makes the entire model more complex. However, we are able to solve the model for several randomly generated data sets of sizes comparable to those encountered in an institution. The computational results for solving the model are reported, together with some comparison and analysis of the optimal solutions obtained.

21 citations


Proceedings Article
01 Jan 2006
TL;DR: A new topology independent scheme called the selective migration model is presented, which allows migration among demes only if the individuals meet certain criteria at both the source and the destination.
Abstract: Genetic algorithms are heuristic search algorithms used in science, engineering and many other areas. They are powerful but slow because of their evolutionary nature, which mimics the natural selection process. The quality of the solutions delivered depends on the population size, placing a larger demand on processing power. Parallel and distributed processing techniques address this issue by allocating subpopulations to a number of processors that interact by exchanging parts of their populations through a migration process. Two migration schemes are in use today: the island and stepping-stone models. This paper presents a new topology-independent scheme called the selective migration model. This scheme allows migration among demes only if the individuals meet certain criteria at both the source and the destination. Experiments show that this model improves performance by offering faster convergence in large-population setups and better solutions in time-constrained small-population setups.

14 citations
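The gating idea in this abstract — individuals migrate between demes only if they satisfy criteria at both the source and the destination — can be sketched as follows. The concrete criteria (top half of the source deme, above the destination's median fitness) and the function name are illustrative assumptions, not the paper's exact rules.

```python
def selective_migrate(source, dest, k=2):
    """Move up to k individuals from the source deme to the destination
    deme, admitting a migrant only if it (a) ranks in the top half of the
    source and (b) beats the destination's median fitness.  Individuals
    are (fitness, genome) pairs; higher fitness is better.  Both criteria
    are illustrative stand-ins for the paper's selection rules."""
    by_fitness = sorted(source, key=lambda ind: ind[0], reverse=True)
    dest_median = sorted(f for f, _ in dest)[len(dest) // 2]
    migrants = [ind for ind in by_fitness[: len(source) // 2]
                if ind[0] > dest_median][:k]
    for ind in migrants:
        source.remove(ind)
        dest.append(ind)
    return migrants
```

Because the gate consults both demes, a weak source individual never displaces a strong destination population, which is the intuition behind the reported faster convergence.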


Proceedings Article
01 Jan 2006
TL;DR: A very simple computational algorithm is adopted for calculating higher-order Legendre polynomials and using them in Gaussian quadrature numerical integration up to order 44.
Abstract: Many numerical methods are used to solve mathematical problems. Early researchers focused on methods that reduce computational cost. In recent years, falling computational costs have made available many numerical methods that were previously not tried for this reason. The use of Legendre polynomials beyond order 5-7 is not common, and efficient, quick numerical methods like Gaussian quadrature were not adopted for higher orders. In this paper a very simple computational algorithm is adopted for calculating higher-order Legendre polynomials and for their use in Gaussian quadrature numerical integration up to order 44.

7 citations
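A minimal sketch of the kind of computation described — higher-order Legendre polynomials via the standard three-term recurrence, with Gauss-Legendre nodes and weights found by Newton iteration — is shown below. This is textbook material, not the paper's specific algorithm, and the function names are ours.

```python
import math

def legendre(n, x):
    """P_n(x) and P_n'(x) via the three-term recurrence
    (k+1) P_{k+1}(x) = (2k+1) x P_k(x) - k P_{k-1}(x)."""
    p0, p1 = 1.0, x
    for k in range(1, n):
        p0, p1 = p1, ((2 * k + 1) * x * p1 - k * p0) / (k + 1)
    dp = n * (x * p1 - p0) / (x * x - 1)   # standard derivative identity
    return p1, dp

def gauss_legendre(n):
    """Nodes and weights of the n-point Gauss-Legendre rule on [-1, 1].
    Each root of P_n is polished by Newton's method from the classical
    cosine initial guess; weights are 2 / ((1 - x^2) P_n'(x)^2)."""
    nodes, weights = [], []
    for i in range(1, n + 1):
        x = math.cos(math.pi * (i - 0.25) / (n + 0.5))
        for _ in range(100):
            p, dp = legendre(n, x)
            dx = p / dp
            x -= dx
            if abs(dx) < 1e-15:
                break
        _, dp = legendre(n, x)
        nodes.append(x)
        weights.append(2.0 / ((1 - x * x) * dp * dp))
    return nodes, weights
```

With n = 44 this reproduces the "order 44" rule mentioned in the abstract; the recurrence is numerically stable at these orders in double precision.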


Proceedings Article
01 Jan 2006
TL;DR: This paper presents two iterative, locally and globally convergent algorithms to solve the IASVP efficiently when the matrix is Toeplitz (IASVPT), and implements parallel versions of both algorithms, called PMIIIT and PLPT, which greatly reduce the execution time of the sequential algorithms in the experiments.
Abstract: When the Inverse Additive Singular Value Problem (IASVP) involves Toeplitz-type matrices, it is possible to exploit this special structure to reduce the execution time. In this paper, we present two iterative, locally and globally convergent algorithms (MIIIT and LPT) to solve the IASVP efficiently when the matrix is Toeplitz (IASVPT). As will be shown, an asymptotic complexity one order of magnitude lower can be achieved than with algorithms that do not exploit the Toeplitz-like structure. Furthermore, we have implemented parallel versions of both algorithms, called PMIIIT and PLPT respectively, which greatly reduce the execution time of the sequential algorithms, as the experiments show. Keywords: Parallel programming, Inverse Singular Value Problem, Toeplitz matrices, Newton-type methods, Least squares problem

7 citations


Proceedings Article
01 Jan 2006
TL;DR: This work reduces the highly complex geometry of the fractures by applying a local transformation that suppresses the cumbersome meshing configurations while keeping the networks' fundamental geological and geometrical characteristics.
Abstract: Natural fractured media are highly unpredictable because of complex structures at both the fracture and the network levels. Fractures are by themselves heterogeneous objects of broadly distributed sizes, shapes and orientations that are interconnected in large correlated networks. With little field data and evidence, numerical modeling can provide important information on underground transport phenomena. However, it must overcome several barriers. Firstly, the complex network structure is difficult to mesh. Secondly, the absence of an a priori homogenization scale, along with the dual fracture- and network-level heterogeneity, requires the calculation of large but finely resolved fracture networks, resulting in very large simulation domains. To tackle these two related issues, we reduce the highly complex geometry of the fractures by applying a local transformation that suppresses the cumbersome meshing configurations while keeping the networks' fundamental geological and geometrical characteristics. We show that the flow properties are marginally affected while the problem complexity (i.e., memory capacity and resolution time) can be reduced by orders of magnitude. In conclusion, the developed method provides an adaptive treatment of complex configurations.

6 citations


Proceedings Article
01 Jan 2006
TL;DR: This paper proposes several approaches to applying the high order weighted essentially non-oscillatory (WENO) scheme to 1D cylindrical and spherical grids, tests these schemes on the Sedov explosion problem, and finds that conservation in the multi-dimensional sense is essential to generate physical solutions.
Abstract: In this paper, we apply high order WENO schemes to uniform cylindrical and spherical grids. Many 2-D and 3-D problems can be solved with 1-D equations if they have angular and radial symmetry. The reduced equations typically involve geometric source terms, so conventional numerical schemes for Cartesian grids may not work well. We propose several approaches to applying the high order weighted essentially non-oscillatory (WENO) scheme to 1D cylindrical and spherical grids. We have tested these schemes on the Sedov explosion problem and found that conservation in the multi-dimensional sense is essential to generate physical solutions. The numerical results show that global flux-splitting may fail to work even for high order WENO finite-difference schemes. We have also shown that only high order WENO finite-volume schemes achieve both high order accuracy and conservation. Keywords: PDE, WENO, cylindrical and spherical, Euler equations, Sedov

5 citations


Proceedings Article
01 Jan 2006
TL;DR: Computational results are presented that demonstrate the effectiveness of a geometry-based domain-partitioning heuristic with element weights for solving this load-balancing problem, and the heuristic is compared with competing schemes for a representative combustion problem.
Abstract: The computation of radiative effects by the photon Monte Carlo method is computationally demanding, especially when complex, nongray absorption models are employed. To solve such computationally expensive problems we have developed a parallel software framework for the photon Monte Carlo method based on ray tracing to compute radiative heat transfer effects. The central obstacle to scalable performance for this method is that widely varying physical properties over the computational domain result in highly skewed processor work assignments. In this paper we present computational results that demonstrate the effectiveness of a geometry-based domain-partitioning heuristic with element weights for solving this load-balancing problem, and we compare this heuristic to competing schemes for a representative combustion problem.

4 citations
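The flavor of a geometry-based, weight-aware partition can be sketched in one dimension: sort elements along a coordinate and cut the ordered sequence into contiguous slabs of roughly equal total weight. This is a generic greedy sketch under our own assumptions, not the paper's heuristic.

```python
def weighted_partition(elements, nproc):
    """Split elements, each an (x_coordinate, weight) pair, into nproc
    contiguous slabs along x with roughly equal total weight -- a 1-D
    sketch of a geometry-based partition with element weights.  A slab is
    closed once the cumulative weight reaches its share of the total."""
    elems = sorted(elements)                 # order along the x axis
    total = sum(w for _, w in elems)
    target = total / nproc                   # ideal weight per processor
    parts, current, acc = [], [], 0.0
    for x, w in elems:
        current.append((x, w))
        acc += w
        # cut when the cumulative weight passes the next slab boundary
        if acc >= target * (len(parts) + 1) and len(parts) < nproc - 1:
            parts.append(current)
            current = []
    parts.append(current)
    return parts
```

For the skewed work distributions described in the abstract, equal-weight slabs can be far from equal-count slabs, which is exactly the point of weighting elements.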


Proceedings Article
01 Jan 2006
TL;DR: A new method is proposed to evaluate the stochastic solution of a linear random differential equation, based on combining the probabilistic transformation method for a single random variable with numerical methods.
Abstract: In this paper, a new method is proposed to evaluate the stochastic solution of a linear random differential equation. The method is based on combining the probabilistic transformation method for a single random variable with numerical methods (e.g. finite difference, finite element, Runge-Kutta, etc.). The transformation technique evaluates the probability density function (PDF) of the solution by multiplying the PDF of the random variable by the Jacobian of the inverse function.

4 citations
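The transformation technique the abstract describes is the change-of-variables rule for densities. A minimal sketch, with an illustrative map standing in for the solution of a random ODE (the lognormal example is our assumption, not the paper's equation):

```python
import math

def transform_pdf(pdf_x, g_inv, dg_inv, y):
    """Probabilistic transformation rule: for a monotone map Y = g(X),
        f_Y(y) = f_X(g^{-1}(y)) * |d g^{-1}(y) / dy|,
    i.e. the PDF of the input random variable times the Jacobian of the
    inverse function.  pdf_x, g_inv and dg_inv are callables."""
    return pdf_x(g_inv(y)) * abs(dg_inv(y))

# Illustration: X ~ N(0, 1) and Y = exp(X), so Y is lognormal.  Here
# g^{-1}(y) = log(y) and d g^{-1}/dy = 1/y.
phi = lambda x: math.exp(-x * x / 2) / math.sqrt(2 * math.pi)
f_y = lambda y: transform_pdf(phi, math.log, lambda y: 1.0 / y, y)
```

In the paper's setting, g would be the numerically computed solution map of the differential equation evaluated at a fixed point, with the Jacobian obtained from the same discretization.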


Proceedings Article
01 Jan 2006
TL;DR: (No summary available: the entry contains only author contact details.)
Abstract: D.R. Bosanquet and L.R. John, MRC/UCT Medical Imaging Research Unit, Department of Human Biology, University of Cape Town, Observatory 7925, South Africa. Email: dbosan@cormack.uct.ac.za, ljohn@cormack.uct.ac.za.

3 citations


Proceedings Article
01 Jan 2006
TL;DR: A hybrid SLI-FLP number system, together with some recent improvements of SLI arithmetic can result in a sound implementation of over/underflow free computer arithmetic.
Abstract: Symmetric level-index (SLI) arithmetic was introduced to overcome the problems of overflow and underflow in scientific computations. A hybrid SLI-FLP number system, together with some recent improvements in SLI arithmetic, can yield a sound implementation of over/underflow-free computer arithmetic. The hybrid arithmetic automatically switches between FLP and SLI in order to achieve both efficiency and robustness. The number representation scheme and algorithms are discussed briefly in this paper, followed by a description of a software implementation and its successful application to a turbulent combustion problem.

2 citations
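The core of the level-index representation can be sketched in a few lines: a value is stored as a level (how many nested exponentials) plus an index in [0, 1). This is only the representation idea; real SLI systems also handle signs and reciprocals (the "symmetric" part) and work to a fixed precision, and the function names here are ours.

```python
import math

def sli_encode(x):
    """Level-index form of x >= 1: take logarithms until the value falls
    into [0, 1).  The count of logs is the level l and the remainder is
    the index i, so that x = exp(exp(...exp(i)...)) with l nested exps.
    Values this scheme can express (e.g. level 6) dwarf any floating-point
    range, which is why SLI arithmetic cannot overflow."""
    level, v = 0, float(x)
    while v >= 1.0:
        v = math.log(v)
        level += 1
    return level, v

def sli_decode(level, index):
    """Inverse map: apply exp() level times to the index."""
    v = index
    for _ in range(level):
        v = math.exp(v)
    return v
```

Note that decoding amplifies index error by the chain of exponentials, which is why practical SLI implementations carry the index to extra precision.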


Proceedings Article
01 Jan 2006
TL;DR: Commonly considered the fastest, Gärtner's modification of Welzl's algorithm has a proven expected running time of O(n), yet in the experiments the MEC algorithm outperformed it by more than a factor of 7 on average.
Abstract (CSC'06): Partitions of a plane, based on two or three of its points, are introduced. The study of these partitions is applied to finding the minimal enclosing circle (MEC) of a set S of n planar points. MEC(S), the MEC of the n points of S, is defined either by a pair of S points at the largest distance (a tight two-tuple) or by a triplet of S points spread over more than half of its circumference (a tight three-tuple) with the largest radius. An extension of an existing MEC by an outside point P ∈ S is a MEC for point P and the points of the tight tuple of the existing MEC; it has a larger radius than the existing MEC. The MEC problem is dual to the problem of finding an optimal partition of the S-plane by two or three points of S defined by the largest circular region. A two-point partition divides the S-plane into 4 regions, a three-point partition into 7; one region is a circle in either partition. The MEC algorithm is based on this duality. It begins with a MEC of two arbitrary points of S and the corresponding two-point partition of the S-plane. Next, each point P of S is examined in a separate step of the algorithm. If it lies outside the current MEC, its extension by this point is obtained. The tight tuple for the extension is formed by replacing none, one or two points of the current MEC's tight tuple with point P. Which points of the tuple are replaced by P depends on the region to which P belongs in the plane partition by the points of the current tight tuple. The next circle has a larger diameter and retains at least one of the defining points of the previous circle, thus limiting a possible loss of S-points during an extension. An n-step iteration is completed once each point of S has been examined. It is repeated until no point of S is found outside the current MEC during an entire iteration.
The observed number of steps of the algorithm rarely reached 5n and never exceeded 6n in experiments over several point distributions with n ranging from 10 to 28,000,000. Commonly considered the fastest, Gärtner's modification of Welzl's algorithm [18], [6] has a proven expected running time of O(n). In the experiments the MEC algorithm outperformed it by more than a factor of 7 on average. At this point no satisfying theoretical bound matching this remarkable performance of the MEC algorithm has been found. This incremental algorithm is an on-line algorithm: if the set S gains new points during execution, the current and following iterations continue with the updated set S without losing the progress achieved before the update. The algorithm has already been extended to R and this will be reported elsewhere. This paper is also about the two- and three-point partitions, which provide the basis for the MEC algorithm.
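For reference, the Welzl-style randomized incremental construction that the abstract compares against can be sketched in a few dozen lines. This is the textbook expected-O(n) algorithm (a circle is grown as points outside it are met, anchored on one, two, then three boundary points), not the partition-based MEC algorithm of the paper; the helper names are ours.

```python
import random

def _circle2(p, q):
    """Smallest circle with p and q on its boundary (diameter circle)."""
    cx, cy = (p[0] + q[0]) / 2, (p[1] + q[1]) / 2
    r = ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5 / 2
    return (cx, cy, r)

def _circle3(a, b, c):
    """Circumcircle of three non-collinear points."""
    ax, ay = a; bx, by = b; cx, cy = c
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return (ux, uy, ((ax - ux) ** 2 + (ay - uy) ** 2) ** 0.5)

def _inside(c, p, eps=1e-9):
    return (p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2 <= (c[2] + eps) ** 2

def welzl(points):
    """Minimal enclosing circle by the randomized incremental method of
    Welzl (expected O(n) after shuffling); returns (cx, cy, radius)."""
    pts = points[:]
    random.shuffle(pts)
    c = None
    for i, p in enumerate(pts):
        if c is None or not _inside(c, p):
            c = (p[0], p[1], 0.0)          # p is on the boundary
            for j, q in enumerate(pts[:i]):
                if not _inside(c, q):      # q must also be on the boundary
                    c = _circle2(p, q)
                    for k in pts[:j]:
                        if not _inside(c, k):
                            c = _circle3(p, q, k)
    return c
```

Since the MEC of a point set is unique, the result does not depend on the random shuffle, only the running time does.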

Proceedings Article
01 Jan 2006
TL;DR: The results variously validate, and characterize some limits of applicability of, the ballistic limit equations (BLEs) that are commonly used in spacecraft shield design and spacecraft mission planning.
Abstract: Hypervelocity collisions with space debris (SD, natural meteoroids and man-made artifacts) can significantly affect the performance of spacecraft. Here, I use an adaptive-mesh Eulerian hydrodynamic code, Mie-Grüneisen solid mechanics, and a simple material-failure model, running on a modern PC, to analyze the protection afforded by a nominal two-plate aluminum shield against hypervelocity collisions with millimeter- and centimeter-sized aluminum and iron-nickel spheres, considered as SD proxies. The results indicate that such a shield would stop a 1-mm iron-nickel impactor at 9 km/s (the nominal mean speed of SD), and would stop a 1-mm aluminum impactor at 20 km/s. The shield would fail to stop a 1-cm aluminum impactor at 9 km/s, and 1-mm and 1-cm iron-nickel impactors at 20 km/s. These results variously validate, and characterize some limits of applicability of, the ballistic limit equations (BLEs) that are commonly used in spacecraft shield design and spacecraft mission planning.

Proceedings Article
01 Jan 2006
TL;DR: An algorithm is developed that finds a Hamilton circuit in a given graph of degree three in a polynomial number of steps; its underlying principles and their use are explained through various examples.
Abstract: The purpose of this paper is to develop an algorithm to determine a Hamilton circuit in a given graph of degree three. This algorithm finds a Hamilton circuit in a polynomial number of steps. We derive several properties, which are combined to develop the algorithm. These principles and their use are explained through various examples. For the most part, the principles are simple and intuitive.
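For context, a plain backtracking search for a Hamilton circuit — exponential in the worst case — is easy to state; it is useful as a correctness baseline on small degree-3 graphs. This sketch is ours and is not the paper's polynomial-step algorithm, whose properties are not reproduced in the abstract.

```python
def hamilton_circuit(adj):
    """Backtracking search for a Hamiltonian circuit in a graph given as
    {vertex: set_of_neighbours}.  Returns a vertex order visiting every
    vertex once, whose last vertex is adjacent to the first, or None.
    Exponential worst case: a brute-force baseline, not the paper's
    polynomial algorithm."""
    vertices = list(adj)
    n = len(vertices)
    start = vertices[0]
    path = [start]
    used = {start}

    def extend():
        if len(path) == n:
            return start in adj[path[-1]]      # can the circuit be closed?
        for v in adj[path[-1]]:
            if v not in used:
                used.add(v)
                path.append(v)
                if extend():
                    return True
                path.pop()
                used.remove(v)
        return False

    return path[:] if extend() else None
```

On cubic graphs each step branches at most twice, so the search is fast on small instances; the Petersen graph is the classic 3-regular graph with no Hamilton circuit.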

Proceedings Article
01 Jan 2006
TL;DR: This paper extends earlier work and describes several strategies for interconnecting macromodels that result in further savings of computation time.
Abstract: Full-wave electromagnetic simulation requires numerically expensive methods such as FDTD. The computation time depends superlinearly on the number of unknowns in the simulation region. In some situations, especially when the results are not needed at every point of the grid, simulation time can be reduced. This reduction can be accomplished by partitioning the grid into macromodels and determining each macromodel's impulse response. The impulse response can then be decomposed into its eigenmodes, some of which can be eliminated because they are non-essential. In this paper we extend our earlier work and describe several strategies for interconnecting macromodels that result in further savings of computation time.

Proceedings Article
01 Jan 2006
TL;DR: This paper investigates approximating the local function by the kernel-based statistical method of Nadaraya and Watson (NW) and shows how the approximated solution on an arbitrary mesh can be marched in time to obtain the steady-state and/or time-dependent solution of the PDE.
Abstract: This paper addresses the problem of time-marching function-approximated solutions inherent in emerging meshfree Computational Fluid Dynamics (CFD) solution techniques. The numerical solution of the partial differential equations (PDEs) of CFD has been dominated by finite difference methods (FDM), finite element methods (FEM), and finite volume methods (FVM). These methods can be derived from Taylor-expansion-based local interpolation schemes, and they require a mesh to support the local approximation. The problem is that in complex-shaped domains, constructing the mesh is non-trivial. Typically with these methods, only the function is continuous across meshes, not its partial derivatives. The difficulties of mesh construction and discontinuous derivatives have led to the development of mesh-independent, or meshfree (MF), methods. As they mature, these new meshfree methods represent the next generation of CFD solvers. In these methods the local function approximation is independent of the mesh (or design points) of the geometric domain in which a solution is sought. In this paper we investigate approximating the local function by the kernel-based statistical method of Nadaraya and Watson (NW). We show how the approximated solution on an arbitrary mesh can be marched in time to obtain the steady-state and/or time-dependent solution of the PDE.
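The Nadaraya-Watson estimator the abstract builds on is a kernel-weighted average of scattered samples; it needs no mesh, only sample locations. A minimal sketch (the Gaussian kernel and bandwidth are illustrative choices, and this is the generic estimator, not the paper's full meshfree PDE solver):

```python
import math

def nadaraya_watson(xs, ys, x, h=0.25):
    """Nadaraya-Watson kernel regression at point x from samples (xs, ys):
        m(x) = sum_i K_h(x - x_i) y_i / sum_i K_h(x - x_i),
    with a Gaussian kernel K_h(u) = exp(-(u/h)^2 / 2).  The bandwidth h
    controls how local the approximation is."""
    weights = [math.exp(-(((x - xi) / h) ** 2) / 2) for xi in xs]
    return sum(w * y for w, y in zip(weights, ys)) / sum(weights)
```

In a meshfree solver, this local approximation (and its analytic derivatives) replaces the mesh-based interpolation of FDM/FEM/FVM at each design point.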

Proceedings Article
01 Jan 2006
TL;DR: It is interesting to mention that the best quadrature and its worst case error bound can be recursively expressed in terms of the given Hermite information via combinatorial analysis, obviating solving the nonlinear system.
Abstract: As usual, denote by KW[a, b] the Sobolev class consisting of every function whose (r − 1)st derivative is absolutely continuous on the interval [a, b] and whose rth derivative is bounded by K a.e. in [a, b]. For a function f ∈ KW[a, b], its values and derivatives up to order r − 1 at a set of nodes x are known. These values are called the given Hermite information. This work reports results on the best quadrature based on the given Hermite information for the class KW[a, b]. The existence and concrete construction of the best quadrature are settled by perfect spline interpolation. It turns out that the best quadrature depends on a system of algebraic equations satisfied by a set of free nodes of the interpolating perfect spline. By another new result of ours, the system can be converted in closed form to two single-variable polynomial equations, each of degree approximately r/2. It is interesting to note that the best quadrature and its worst-case error bound, although nonlinear in nature, can be expressed recursively in terms of the given Hermite information via combinatorial analysis, obviating the need to solve the nonlinear system. As a by-product, the best interpolation formula for the class KW[a, b] is also obtained.

Proceedings Article
01 Jan 2006
TL;DR: These algorithms are shown to compare favorably against many other lattice algorithms, which take at least quadratic time, and to improve the performance of pricing a wide variety of options.
Abstract: How to price options efficiently and accurately is an important research problem. Options can be priced by the lattice model. Although the pricing results converge to the theoretical option value, the prices do not converge monotonically. Worse, for some options like barrier options, the prices can oscillate significantly, so large computational time may be required to achieve acceptable accuracy. Combinatorial techniques can be used to improve the performance of pricing a wide variety of options. This paper uses vanilla options, single-barrier options, double-barrier options, and lookback options as examples to show how to derive linear-time pricing algorithms by combinatorial techniques. These algorithms are shown to compare favorably against many other lattice algorithms, which take at least quadratic time.
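For the vanilla case, the combinatorial idea can be made concrete: instead of backward induction over the O(n^2) lattice, sum over the n + 1 terminal nodes and update the binomial weight C(n, j) p^j (1 − p)^{n−j} by a running ratio, giving O(n) time. This is a standard CRR setup sketched under our own assumptions, not the paper's algorithms for barrier or lookback options; for very large n the weight should be carried in log space to avoid underflow.

```python
import math

def crr_call(S, K, r, sigma, T, n):
    """European call under the Cox-Ross-Rubinstein binomial lattice,
    priced in O(n): sum the discounted payoff over terminal nodes, with
    the binomial probability updated combinatorially via the ratio
    C(n, j+1) / C(n, j) = (n - j) / (j + 1)."""
    dt = T / n
    u = math.exp(sigma * math.sqrt(dt))
    d = 1 / u
    p = (math.exp(r * dt) - d) / (u - d)     # risk-neutral up probability
    prob = (1 - p) ** n                      # weight of the all-down node
    price = 0.0
    for j in range(n + 1):                   # j = number of up moves
        payoff = max(S * u**j * d**(n - j) - K, 0.0)
        price += prob * payoff
        if j < n:                            # advance C(n,j) p^j (1-p)^(n-j)
            prob *= p * (n - j) / ((1 - p) * (j + 1))
    return math.exp(-r * T) * price
```

The same "count paths instead of walking the lattice" principle is what the paper extends to barrier and lookback payoffs via the reflection principle and related counts.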

Proceedings Article
01 Jan 2006
TL;DR: Integer programming algorithms based on the ordered enumeration method are described and several such algorithms, extended application and parallelization of the method are presented.
Abstract: Integer programming algorithms based on the ordered enumeration method are described. Combining dynamic programming and branch-and-bound ideas in one efficient computational process, S. S. Lebedev implemented his method for solving integer linear programming problems in 1968. A number of ordered enumeration algorithms have been developed since the 1970s. The author has been actively involved in this research, and the paper presents several such algorithms, extended applications, and a parallelization of the method. Most papers on this highly competitive combinatorial method of discrete optimization are written in Russian and are little known to researchers and educators not familiar with the language.

Proceedings Article
01 Jan 2006
TL;DR: This work is about an algorithm for solving a linear program which is simple to apply and which will eventually lead to a stable solution that is optimal, if the problem is feasible.
Abstract: This work is about an algorithm for solving a linear program which is simple to apply. There are three algorithms in this work. The first algorithm solves a two-variable linear program. This algorithm is built on simple concepts such as the slope and the intercept of a line. The core idea of the algorithm is the Deleting Principle based on the consistency between slopes and intercepts along the boundary of the feasible region. The second algorithm is the main algorithm. It solves a general linear program. The algorithm starts from the origin and moves to other points on the boundary of the feasible region. The algorithm depends on a two-dimensional intersection process: a systematic and repeated application of the first algorithm that will lead to a feasible solution on the boundary of the feasible region called a stable solution. The third algorithm or the judging algorithm is applied on every stable solution reached in the main algorithm. The judging algorithm either verifies that a stable solution is optimal or outputs a new feasible solution on the boundary. In the latter case, the judging algorithm shows how to get out of a trapped stable solution. The main algorithm may then be repeated by applying the first algorithm along the new direction suggested by the judging algorithm where the objective function is guaranteed to improve. Thus the main algorithm will eventually lead to a stable solution that is optimal, if the problem is feasible.


Proceedings Article
01 Jan 2006
TL;DR: A Normal Mode Analysis of the Chesapeake Bay was performed using Neumann boundary conditions and COMSOL MultiPhysics and the lowest 100 eigenstates were calculated and compared to a finite difference solution.
Abstract: A Normal Mode Analysis (NMA) of the Chesapeake Bay was performed using Neumann boundary conditions and COMSOL MultiPhysics (formerly known as FEMLAB). The lowest 100 eigenstates were calculated and compared to a finite difference solution. Based on the normal modes derived numerically, surface current vector fields can be calculated. The vector fields of the Chesapeake Bay provide tools for the solution of problems such as the diffusion of pollutants, tracking of crab spat, particle transport (bio-terrorism), as well as providing a basis set for decomposing real-time currents. Given the difficulty of the boundary, attempts to measure the error in the calculation included tests for orthogonality within the basis set, convergence of the eigenvalue as a function of grid chosen and a comparison to a finite difference calculation for a similar sized grid.

Proceedings Article
01 Jan 2006
TL;DR: Two software packages for solving sparse systems of linear equations, SuperLU and UMFPACK, have been integrated with the University of Maine Ice Sheet Model for predicting the formation and disappearance of glacial ice sheets and are able to solve non-banded systems that can be produced by the ice sheet, but are impractical to solve with straightforward Gaussian elimination.
Abstract: Two software packages for solving sparse systems of linear equations, SuperLU and UMFPACK, have been integrated with the University of Maine Ice Sheet Model for predicting the formation and disappearance of glacial ice sheets. Using a library of basic linear algebra subprograms (BLAS) tuned for the underlying hardware, these packages perform significantly better than our banded Gaussian elimination routine. They are also able to solve non-banded systems that can be produced by the ice sheet model but are impractical to solve with straightforward Gaussian elimination. A modified compressed-column data structure is presented for interfacing the ice sheet model with the two software packages. Test results are presented that indicate careful consideration must be given to the column-ordering methods used by the packages when solving specific problems.
More recent models have attempted to use better physics to describe ice velocities in regions where velocities vary significantly over short distances. These models use a 3-dimensional, rectangular FEM grid. While the 2-dimensional model is used for an entire ice sheet, the 3-dimensional model is used over a limited area of interest with a smaller distance between grid points. Output of the 2-dimensional model is used to establish boundary conditions for the 3-dimensional model. When solving for 3D velocities, FEM generates systems of banded linear equations with 81 non-zero coefficients per equation. Unlike in the 2D model, these equations are not diagonally dominant. For this reason, the decision was made to investigate direct methods based on Gaussian elimination for solving these systems of equations. An additional complication in the 3D model is internal pressure within the ice sheet. In one version of the model the internal pressure is eliminated, resulting in a system of linear equations that is purely banded. In a second version of the 3D model, pressure is explicitly calculated and results in a banded system of equations with lower and right borders. Figure 1 depicts the non-zero entries in these systems of equations.

Proceedings Article
01 Jan 2006
TL;DR: This paper considers the probability density function of a non-central χ distribution with an odd number of degrees of freedom ν and presents three alternative expressions for this pdf, the first in terms of a partial derivative of the hyperbolic cosine function.
Abstract: In this paper, we consider the probability density function (pdf) of a non-central χ distribution with an odd number of degrees of freedom ν. This pdf is represented in the literature as an infinite sum. Here, we present three alternative expressions for this pdf. The first expression is in terms of a partial derivative of the hyperbolic cosine function. The second expression is a finite-sum representation of only (ν + 1)/2 terms instead of the infinite sum. Finally, we present a general recurrence relation for the pdf. These results have applications in approximating the pdf of non-central χ distributed random variables.
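The kind of identity involved can be illustrated for ν = 1: a non-central χ variable with one degree of freedom and non-centrality λ is |Z + λ| with Z ~ N(0, 1), giving the finite hyperbolic-cosine form f(x) = sqrt(2/π) exp(−(x² + λ²)/2) cosh(λx). The sketch below checks this closed form against the truncated infinite-sum (Poisson-mixture) representation from the literature; it is our verification exercise, not the paper's general (ν + 1)/2-term formula.

```python
import math

def noncentral_chi_pdf_series(x, nu, lam, terms=40):
    """Classical infinite-sum form of the non-central chi pdf: a
    Poisson(lam^2/2) mixture of central chi pdfs with nu + 2k degrees of
    freedom, truncated after `terms` terms."""
    total = 0.0
    for k in range(terms):
        w = math.exp(-lam * lam / 2) * (lam * lam / 2) ** k / math.factorial(k)
        m = nu + 2 * k
        chi = (x ** (m - 1) * math.exp(-x * x / 2)
               / (2 ** (m / 2 - 1) * math.gamma(m / 2)))
        total += w * chi
    return total

def noncentral_chi_pdf_nu1(x, lam):
    """Closed form for nu = 1: the pdf of |Z + lam|, Z ~ N(0, 1), i.e.
    f(x) = sqrt(2/pi) * exp(-(x^2 + lam^2)/2) * cosh(lam * x) --
    the kind of hyperbolic-cosine expression the paper generalizes."""
    return (math.sqrt(2 / math.pi) * math.exp(-(x * x + lam * lam) / 2)
            * math.cosh(lam * x))
```

The closed form follows from folding the two Gaussian bumps φ(x − λ) + φ(x + λ) onto the positive axis.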

Proceedings Article
01 Jan 2006
TL;DR: This paper develops a method that overcomes the limitations of the standard Tikhonov regularization and presents a criterion by which approximate solutions can be evaluated and used in a search method that is effective in locating points of irregular behavior.
Abstract: Tikhonov regularization is a popular and effective method for the approximate solution of ill-posed problems, including Fredholm equations of the first kind. The Tikhonov method works well when the solution of the equation is well-behaved, but fails for solutions with irregularities, such as jump discontinuities. In this paper we develop a method that overcomes the limitations of standard Tikhonov regularization. We present a criterion by which approximate solutions can be evaluated, and use it in a search method that is effective in locating points of irregular behavior. Once the points of irregularity have been found, the solution can be recovered with good accuracy.


Proceedings Article
01 Jan 2006
TL;DR: Polyurethane reaction injection molded products are prepared which have flexural modulus factors below 3.4 and often below 2.4, and are employed within specified solubility parameter relationships.
Abstract: Polyurethane reaction injection molded products are prepared which have flexural modulus factors (−20 °F/158 °F) below 3.4 and often below 2. They are prepared by employing three different polyols or mixtures of polyols, each polyol having a specified reactivity relationship, and are employed within specified solubility parameter relationships. These products also have at least two thermal transition temperatures.

Proceedings Article
01 Jan 2006
TL;DR: The e-AIRS system and its middleware are described, which construct a joint research environment for geographically distributed aeronautical engineering and avoid duplicated investment by individual research centers.
Abstract: Nowadays, e-Science studies are concentrated on biology, meteorology, aeronautical engineering and medical science. In particular, e-AIRS belongs to aeronautical engineering, a field that has been progressing actively. Studies and experiments in aeronautical engineering take a long time owing to a shortage of experimental equipment, and they are hampered by geographical distance; duplicated investment by individual research centers therefore undermines the research. Sharing and analyzing research results is especially important in aeronautical engineering, yet in practice both are difficult. In this paper we therefore describe e-AIRS and the e-AIRS middleware, which construct a joint research environment for geographically distributed aeronautical engineering. Keywords: e-Science, e-AIRS, numerical wind tunnel

Proceedings Article
01 Jan 2006
TL;DR: The solution of the equation of heat transfer for pyramidal spines is obtained using computer algebra software using Bessel functions and can be applied to the design of cooling systems of electronic circuits.
Abstract: The solution of the equation of heat transfer for pyramidal spines is obtained using computer algebra software. We consider pyramidal spines with three different sectional areas: triangle, square and pentagon. The solutions are given in terms of Bessel functions. From the temperature profiles that were obtained it is possible to deduce explicit formulas for the efficiency of the spine. The results that we derive here can be applied to the design of cooling systems of electronic circuits.

Proceedings Article
01 Jan 2006
TL;DR: An analysis of ELIMINO, a computer-mathematics research system that has been developed at the Chinese Academy of Sciences, and an overview of the Characteristic Sets Method are presented.
Abstract: This paper presents an analysis of ELIMINO, a computer-mathematics research system that has been developed at the Chinese Academy of Sciences. Also presented are ideas to improve the performance of ELIMINO and an overview of the Characteristic Sets Method.

Proceedings Article
01 Jan 2006
TL;DR: Using a help system to ease the CAS-user connection, the problem of solving the diffusion equation with variable coefficients is posed, and the inverse Laplace transform is obtained.
Abstract: Using a help system to ease the CAS-user connection, we pose the problem of solving the diffusion equation with variable coefficients. We transform the problem into the Laplace domain and use the Bromwich integral and the residue theorem to compute the inverse Laplace transform, from which an explicit solution is obtained.