
Showing papers presented at "Conference on Scientific Computing in 2008"



Proceedings Article
01 Jan 2008
TL;DR: In this paper, the Dirichlet problem for the Laplace equation in a star-like domain is considered and the explicit solution is given in terms of a Fourier series whose coefficients are determined by solving an infinite system of linear equations depending on the boundary data.
Abstract: We consider the Dirichlet problem for the Laplace equation in a starlike domain, i.e. a domain which is normal with respect to a suitable polar co-ordinate system. Such a domain can be interpreted as a non-isotropically stretched unit circle. We write down the explicit solution in terms of a Fourier series whose coefficients are determined by solving an infinite system of linear equations depending on the boundary data. Numerical experiments show that the same method works even if the considered starlike domain belongs to a two-fold Riemann surface.

11 citations
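
A minimal Python sketch of the kind of computation described above (not the authors' code): the harmonic series is truncated at order N and its coefficients are fitted to the boundary data by least-squares collocation, a finite stand-in for the infinite system. The boundary shape R(θ) and data f(θ) below are invented examples.

```python
import numpy as np

# Sketch (not the authors' code): fit the truncated harmonic series
# u(r,t) = a0 + sum_n r^n (a_n cos nt + b_n sin nt) to the boundary data by
# least-squares collocation -- a finite stand-in for the infinite system.
R = lambda t: 1.0 + 0.3 * np.cos(3 * t)    # invented starlike boundary r = R(theta)
f = lambda t: np.cos(t)                     # invented boundary data
N, M = 20, 200                              # truncation order, collocation points

t = np.linspace(0, 2 * np.pi, M, endpoint=False)
r = R(t)
cols = [np.ones(M)]
for n in range(1, N + 1):
    cols.append(r**n * np.cos(n * t))       # each basis term is harmonic
    cols.append(r**n * np.sin(n * t))
coef, *_ = np.linalg.lstsq(np.column_stack(cols), f(t), rcond=None)

def u(rr, tt):
    """Evaluate the truncated series at an interior point (r, theta)."""
    val = coef[0]
    for n in range(1, N + 1):
        val += rr**n * (coef[2*n - 1] * np.cos(n * tt) + coef[2*n] * np.sin(n * tt))
    return val

print(u(0.5, 1.0))
```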


Proceedings Article
01 Jan 2008
TL;DR: A threaded pipe connection whose thread flanks, roots, and crests form thread seals when the connection is made up, with a seal ring of resilient material trapped in a cavity between a groove in either the box or the pin and the thread seals to provide a secondary seal.
Abstract: A threaded pipe connection is disclosed that has threads whose flanks, roots, and crests form thread seals when the connection is made up, and that has a seal ring of resilient material trapped in a cavity between a groove in either the box or the pin and the thread seals to provide a secondary seal.

7 citations


Proceedings Article
01 Jan 2008
TL;DR: Experiments comparing the performance of a GPU with that of a CPU for an astrophysical simulation, using NVIDIA's CUDA programming interface, are described.
Abstract: Graphics Processing Units (GPUs) are quickly becoming viable alternatives for certain simulations currently run on CPUs. They are an attractive option because of their inherent parallelism: like older vector-style supercomputers, they perform repeated calculations on large data sets by processing multiple elements at once. Performance is further improved by their built-in mathematical functions. This paper describes experiments comparing the performance of a GPU with that of a CPU for an astrophysical simulation, using NVIDIA's CUDA programming interface.

5 citations
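
The paper benchmarks CUDA C kernels; as a loose Python illustration of the same CPU-versus-GPU comparison, the sketch below times one elementwise kernel with NumPy and, when available, with CuPy (a stand-in library, not what the authors used).

```python
import time
import numpy as np

# Loose illustration only (the paper used CUDA C directly, not CuPy): time the
# same elementwise kernel on the CPU with NumPy and, if available, on the GPU.
try:
    import cupy as cp
except ImportError:
    cp = None                                   # no GPU stack installed

n = 10_000_000
x = np.random.rand(n).astype(np.float32)

t0 = time.perf_counter()
y = np.sin(x) * x + np.sqrt(x)                  # exercises built-in math functions
print(f"CPU: {time.perf_counter() - t0:.3f} s")

if cp is not None:
    xg = cp.asarray(x)                          # copy the data to the device
    cp.cuda.Stream.null.synchronize()
    t0 = time.perf_counter()
    yg = cp.sin(xg) * xg + cp.sqrt(xg)
    cp.cuda.Stream.null.synchronize()           # wait for the kernels to finish
    print(f"GPU: {time.perf_counter() - t0:.3f} s")
```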


Proceedings Article
01 Jan 2008
TL;DR: A hybrid method for predicting the execution time of parallel applications is presented, relying on both profile-based and historic-based predictions.
Abstract: Much research has been carried out on execution time prediction for sequential and parallel applications. These data can be used by job scheduling algorithms on grid or cluster infrastructures to improve their behavior. In a real-time context, the predicted execution time is crucial, since meeting deadline constraints may depend on it. Both domains introduce their own prediction models. In parallel job scheduling, historic-based models can be used to estimate the execution time of a job from an experience base of past executions of similar jobs. In the real-time domain, the Worst Case Execution Time (WCET) of applications is notably computed from application profiles. In this paper, a hybrid method for predicting the execution time of parallel applications is presented. This method relies on both profile-based and historic-based predictions. Program profiles are analyzed in order to decompose them into a set of basic blocks. The execution time of each block is determined using past executions of the programs. Then, a prediction of the overall execution time can be performed by applying a historic-based prediction model to estimate the execution count of each basic block.

4 citations
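
A minimal sketch of the hybrid idea (names and data are illustrative, not the paper's API): the predicted run time is the sum over basic blocks of a profile-derived time per execution multiplied by an execution count estimated from past runs of similar jobs.

```python
from statistics import mean

# Illustrative names and data, not the paper's API: run time is predicted as
# sum over basic blocks of (profile-based time per execution) x
# (historic-based predicted execution count).
def predict_runtime(block_times, history):
    """block_times: {block: seconds per execution}, from profiling.
    history: list of {block: execution count} from past similar runs."""
    counts = {b: mean(run.get(b, 0) for run in history) for b in block_times}
    return sum(block_times[b] * counts[b] for b in block_times)

times = {"loop_body": 2e-6, "reduce": 5e-5}          # hypothetical profile
past = [{"loop_body": 1.0e6, "reduce": 40},          # hypothetical experience base
        {"loop_body": 1.2e6, "reduce": 42},
        {"loop_body": 0.9e6, "reduce": 38}]
print(predict_runtime(times, past))
```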


Proceedings Article
Ken A. Hawick1
01 Jan 2008
TL;DR: Matrix analysis techniques are presented that relate the shape and form of the eigenvalue spectrum of the adjacency matrices of complex networks to the phase transition of the Kauffman NK model.
Abstract: Complex network models such as Kauffman's NK model have been shown to have interesting phase transitional properties as the connectivity is varied. Conventional network and graph analysis metrics concerning path lengths, numbers of components and even circuits have been used to explore these transitions. This paper presents the use of matrix analysis techniques to relate the shape and form of the eigenvalue spectrum of the adjacency matrices of complex networks to the phase transition. A number of computational experiments with different network realisations yield results for the Kauffman NK model at connectivity K = 1, 2, ..., 7, and these are discussed in terms of the transition at K = 2.

4 citations
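
A small sketch of this kind of experiment, under an assumption: a random directed graph with exactly K inputs per node stands in for the NK dependency structure, so each adjacency row sums to K and the spectral radius tracks the connectivity.

```python
import numpy as np

# Assumption: a random directed graph with exactly K inputs per node stands in
# for the NK model's dependency structure. Each row of the adjacency matrix
# then sums to K, so the spectral radius tracks the connectivity.
rng = np.random.default_rng(0)

def nk_adjacency(n, k):
    a = np.zeros((n, n))
    for i in range(n):
        inputs = rng.choice([j for j in range(n) if j != i], size=k, replace=False)
        a[i, inputs] = 1.0
    return a

for k in range(1, 8):
    eigs = np.linalg.eigvals(nk_adjacency(200, k))
    print(k, np.max(np.abs(eigs)))              # spectral radius vs. connectivity K
```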


Proceedings Article
01 Jan 2008
TL;DR: A new algorithm Check-And-Remove (CAR) is proposed with a better time complexity than the greedy algorithm presented in [1], and a new greedy algorithm whose time complexity is the same as that of the CAR algorithm is presented.
Abstract: The set-covering problem is an interesting problem in computational complexity theory. In [1], the set-covering problem was proved to be NP-hard and a greedy heuristic algorithm was presented to solve it. In [2], the set-covering problem was found to be equivalent to the problem of identifying redundant search engines on the Web, and finding efficient and effective practical algorithms for the problem became a key issue in building a very large-scale Web meta-search engine. A new algorithm, Check-And-Remove (CAR), was proposed in [2] with a better time complexity than the greedy algorithm presented in [1]. However, in some cases the cover set produced by the new algorithm is too large to be acceptable. We propose some changes to the data structure that improve the performance of both algorithms. We also present a new greedy algorithm whose time complexity is the same as that of the CAR algorithm. The experimental results show that our final greedy algorithm runs faster than the CAR algorithm and produces better results in all test cases.

4 citations
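
Sketches of the two heuristics as the abstract describes them (not the published pseudocode; in particular, the Check-And-Remove body below is one plausible reading of the name: start from all sets and drop any set whose removal keeps full coverage).

```python
# Sketches of both heuristics as the abstract describes them, not the
# published pseudocode.
def greedy_cover(universe, sets):
    """[1]-style greedy: repeatedly pick the set covering most uncovered elements."""
    uncovered, cover = set(universe), []
    while uncovered:
        best = max(sets, key=lambda s: len(sets[s] & uncovered))
        cover.append(best)
        uncovered -= sets[best]
    return cover

def check_and_remove(universe, sets):
    """One plausible reading of CAR: start from all sets, drop any set whose
    removal still leaves the universe covered."""
    cover = dict(sets)
    for name in list(cover):
        rest = set().union(*(s for n, s in cover.items() if n != name))
        if set(universe) <= rest:
            del cover[name]
    return list(cover)

sets = {"a": {1, 2, 3}, "b": {3, 4}, "c": {4, 5}, "d": {1, 2, 3, 4}}
print(greedy_cover({1, 2, 3, 4, 5}, sets))
print(check_and_remove({1, 2, 3, 4, 5}, sets))
```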



Proceedings Article
01 Oct 2008
TL;DR: A receding horizon (RH) controller is developed for the tracking control of wheeled mobile robots subject to nonholonomic constraints in environments without obstacles, using the QP (Quadratic Programming) method.
Abstract: In this study, a receding horizon (RH) controller is developed for the tracking control of wheeled mobile robots (WMRs) subject to nonholonomic constraints in environments without obstacles. The tracking-error kinematic model is first linearized at the equilibrium point. Then, it is transformed into an exact discrete form that accounts for the time delay. The control policy is derived from the optimization of a quadratic cost function, which penalizes the tracking error and control variables at each sampling time. The minimization problem is solved by using the QP (Quadratic Programming) method, taking the current error state as the initial value and including the input constraints, which comprise velocity and acceleration constraints to prevent the mobile robot from slipping. The performance of the control algorithm is verified via computer simulations with a predefined trajectory, showing that the strategy is feasible.

3 citations
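
A sketch of one receding-horizon step under stated simplifications: the error-dynamics matrices below are illustrative, not the linearized WMR model, and the velocity/acceleration constraints are omitted, so the QP reduces to a linear solve.

```python
import numpy as np

# Illustrative error-dynamics matrices, not the linearized WMR model; the
# velocity/acceleration constraints are omitted, so the QP reduces to a
# linear solve.
A = np.array([[1.0, 0.1], [0.0, 1.0]])      # assumed discrete error dynamics
B = np.array([[0.0], [0.1]])
Q, Rw, N = 1.0, 0.1, 10                      # state weight, input weight, horizon

def rh_control(e0):
    """Minimize sum_k Q|e_k|^2 + Rw|u_k|^2 over the horizon; return u_0."""
    n, m = A.shape[0], B.shape[1]
    F = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(N)])
    G = np.zeros((N * n, N * m))
    for i in range(N):                       # stacked prediction e = F e0 + G u
        for j in range(i + 1):
            G[i*n:(i+1)*n, j*m:(j+1)*m] = np.linalg.matrix_power(A, i - j) @ B
    H = Q * G.T @ G + Rw * np.eye(N * m)     # quadratic cost in u
    u = np.linalg.solve(H, -Q * G.T @ F @ e0)
    return u[:m]                             # receding horizon: apply only u_0

e = np.array([0.5, -0.2])
for _ in range(3):
    e = A @ e + B @ rh_control(e)            # re-solve at every sampling time
    print(e)
```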


Proceedings Article
01 Jan 2008
TL;DR: A methodology for modeling complex natural systems through Macroscopic Cellular Automata is presented and applied to lava flow simulation and an application related to Civil Defense purposes is briefly described and proposed as a development.
Abstract: Cellular Automata are parallel computational models which are capable of giving rise to heterogeneous emergent behaviors notwithstanding simple local rules of evolution. In this review paper, a methodology for modeling complex natural systems through Macroscopic Cellular Automata is presented and applied to lava flow simulation. In particular, the 2001 Mt. Etna (Nicolosi, Italy) case study has been considered for model calibration, while validation has been performed on further case studies, which differ in both duration and emission rate. Parameter optimization was carried out by a Parallel Master-Slave Genetic Algorithm. Results have confirmed the goodness of both the simulation model and the calibration algorithm. Finally, an application related to Civil Defense purposes is briefly described and proposed as a future development.

3 citations
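
A toy in the Macroscopic CA spirit (an invented outflow rule with periodic boundaries, not the calibrated lava-flow model): each cell passes a fraction of its lava to von Neumann neighbours whose surface height is lower.

```python
import numpy as np

# Invented outflow rule with periodic boundaries -- a toy in the Macroscopic
# CA spirit, not the calibrated model: each cell passes a fraction of its lava
# to von Neumann neighbours whose surface height (ground + lava) is lower.
ground = np.add.outer(np.arange(20)[::-1].astype(float), np.zeros(20))  # slope
lava = np.zeros_like(ground)
lava[0, 10] = 50.0                            # hypothetical vent
rate = 0.2                                    # assumed relaxation parameter

for step in range(100):
    h = ground + lava
    for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
        diff = h - np.roll(h, (dx, dy), axis=(0, 1))   # height drop per direction
        flow = np.minimum(rate * np.clip(diff, 0, None) / 4, lava / 4)
        lava -= flow
        lava += np.roll(flow, (-dx, -dy), axis=(0, 1)) # neighbour receives it
print(lava.round(1))
```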


Proceedings Article
01 Jan 2008
TL;DR: This work shows how to design the value of the diffusion constant D for the random walks of Squares and Triangles over their respective regular lattices in two dimensions by allowing movements to grid locations other than nearest neighbors.
Abstract: I show how to design the value of the diffusion constant D for the random walks of Squares and Triangles over their respective regular lattices in two dimensions. By allowing movements to grid locations other than nearest neighbors, I can set the diffusion constant D to a value larger than unity (the default) or to a value less than unity.
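
A quick Monte Carlo check of the design idea (the move sets and the 2-D estimator <r²> = 4Dt are assumptions, and normalization conventions for D vary): allowing jumps of length 2 instead of 1 scales the measured diffusion constant by the square of the jump length.

```python
import numpy as np

# Monte Carlo check of the design idea (move sets and the 2-D estimator
# <r^2> = 4 D t are assumptions; normalization conventions for D vary):
# doubling the jump length scales D by the square of that length.
rng = np.random.default_rng(1)

def estimate_d(steps, walkers, moves):
    pos = np.zeros((walkers, 2))
    for _ in range(steps):
        pos += moves[rng.integers(len(moves), size=walkers)]
    return np.mean(np.sum(pos**2, axis=1)) / (4 * steps)

nearest = np.array([(1, 0), (-1, 0), (0, 1), (0, -1)], dtype=float)
longer = 2 * nearest                          # assumed longer jumps
print(estimate_d(1000, 4000, longer) / estimate_d(1000, 4000, nearest))  # ~4
```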

Proceedings Article
01 Jan 2008
TL;DR: This compact difference scheme can achieve fourth-order accuracy for both temporal and spatial variables, and it is unconditionally stable due to the favorable stability property of the boundary value methods.
Abstract: In this paper we combine boundary value methods (for discretizing the temporal variable) with a finite difference compact scheme (for discretizing the spatial variables) to numerically solve the two-dimensional heat equation. We first employ a fourth-order compact scheme to discretize the spatial derivatives, obtaining a linear system of ordinary differential equations. We then apply a fourth-order boundary value method to this system. The scheme therefore achieves fourth-order accuracy in both the temporal and spatial variables, and it is unconditionally stable due to the favorable stability property of boundary value methods. Numerical results are presented to illustrate the accuracy and efficiency of this compact difference scheme, compared with the classical second-order Crank-Nicolson scheme. (This research is supported by research grants RG-UL/07-08S/Y1/JXQ/FST, RG077/05-06S/SHW/FST and CG094/07-08S/SHW/FST from the University of Macau.)
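
A sketch of the spatial half of the method, reduced to one dimension for brevity (the paper works in 2-D and integrates in time with a boundary value method; a stock ODE solver is substituted here): the fourth-order compact scheme obtains u_xx from a tridiagonal system.

```python
import numpy as np
from scipy.integrate import solve_ivp

# 1-D reduction for brevity (the paper is 2-D and uses a boundary value
# method in time; a stock ODE solver is substituted here). The compact scheme
# (1/12)(f_{i-1} + 10 f_i + f_{i+1}) = (u_{i-1} - 2 u_i + u_{i+1}) / h^2,
# with f = u_xx, is fourth-order accurate in space.
n = 64
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)                  # interior points, u = 0 at the ends
P = (10 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)) / 12.0
D2 = (np.eye(n, k=1) - 2 * np.eye(n) + np.eye(n, k=-1)) / h**2
# (boundary couplings dropped: u and u_xx vanish at the ends for this test)

def rhs(t, u):
    return np.linalg.solve(P, D2 @ u)         # dense solve of the tridiagonal system

sol = solve_ivp(rhs, (0, 0.1), np.sin(np.pi * x), rtol=1e-8, atol=1e-10)
exact = np.exp(-np.pi**2 * 0.1) * np.sin(np.pi * x)
print(np.max(np.abs(sol.y[:, -1] - exact)))   # small error for u_t = u_xx
```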


Proceedings Article
01 Jan 2008
TL;DR: The study comprises testing carbon/epoxy composite beams with various embedded delaminations, using a mechanical actuator and a scanning laser vibrometer as a sensor for recording the frequency response, and analysing the acquired signatures based on displacement and curvature mode shapes.
Abstract: Composite materials are supplanting conventional metals in the aerospace, automotive, civil and marine industries. Despite their advantageous properties, however, they are prone to delamination and matrix cracking, necessitating early detection of a crack or flaw before it develops into a serious defect. An offline approach was commonly used, wherein parts are examined away from service/operation; this not only consumes a lot of time but risks damage to the part during handling. A detailed understanding of the various proven methods and techniques, and of their applicability to the analysis of vibration signatures obtained from damaged structures under dynamic conditions, is essential to develop a reliable Structural Health Monitoring System (SHMS). This paper covers vibration-based damage detection testing on carbon/epoxy composite beams. Such composites are commonly used in the aerospace and marine industries and are gaining acceptance in the automotive and construction industries as well. The paper reports the processing of vibration signatures from healthy and damaged composite beams upon excitation, and the analysis of the acquired mode shapes. The study comprises testing carbon/epoxy composite beams with various embedded delaminations, using a mechanical actuator and a scanning laser vibrometer (SLV) as a sensor for recording the frequency response, and analysing the acquired signatures based on displacement and curvature mode shapes.
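
A sketch of the curvature-mode-shape comparison on synthetic data (the mode shapes below are invented, not SLV measurements): curvature is the second spatial derivative of a mode shape, and the healthy/damaged difference peaks near the flaw.

```python
import numpy as np

# Synthetic mode shapes, not SLV measurements: the damage index is the
# difference of curvatures (second spatial derivatives) between the healthy
# and damaged beams, and it peaks near the seeded flaw.
x = np.linspace(0, 1, 101)
healthy = np.sin(np.pi * x)                                 # idealized first bending mode
damaged = healthy + 0.002 * np.exp(-((x - 0.6) / 0.03)**2)  # assumed local softening

def curvature(mode, dx):
    return (mode[2:] - 2 * mode[1:-1] + mode[:-2]) / dx**2  # central differences

dx = x[1] - x[0]
index = np.abs(curvature(damaged, dx) - curvature(healthy, dx))
print(x[1:-1][np.argmax(index)])                            # ~0.6, the flaw location
```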




Proceedings Article
01 Jan 2008
TL;DR: A dot matrix display panel with a thin film transistor and the manufacturing method therefor, the panel being so constructed that a gate insulating layer and a semiconductor layer are provided as one laminated film substantially equal in size thereto.
Abstract: A dot matrix display panel with a thin film transistor and the manufacturing method therefor, the panel being so constructed that a gate insulating layer and a semiconductor layer are provided as one laminated film substantially equal in size thereto on an insulating substrate having a gate electrode and in a region of the substrate except for the peripheral portion thereof, and a source electrode and a drain electrode come into contact with the semiconductor layer in a region covering the gate electrode and gate insulating layer so as to constitute a thin film transistor array substrate, so that a display medium is sandwiched between the array substrate and the substrate having a transparent electrode.


Proceedings Article
01 Jan 2008
TL;DR: Two multilevel refinement algorithms for the capacitated clustering problem are presented, one using a simple tabu search while the other executes a standard local search procedure.
Abstract: This paper presents two multilevel refinement algorithms for the capacitated clustering problem. Multilevel refinement is a collaborative technique capable of significantly aiding the solution process for optimisation problems. The central methodologies of the technique are filtering solutions from the search space and reducing the level of problem detail to be considered at each level of the solution process. The first multilevel algorithm uses a simple tabu search while the other executes a standard local search procedure. Both algorithms demonstrate that the multilevel technique is capable of aiding the solution process for this combinatorial optimisation problem.
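
A toy sketch of the multilevel pattern on a 1-D instance (illustrative only; capacity constraints and the tabu-search variant are omitted): coarsen by merging adjacent points, solve the coarse instance trivially, project the assignment back, and refine with local search.

```python
import random

# Toy 1-D instance, illustrative only: capacity constraints and the
# tabu-search variant are omitted. Coarsen by merging adjacent points, solve
# the coarse instance trivially, project back, refine by local search.
random.seed(0)
points = sorted(random.random() for _ in range(16))
K = 2

def cost(assign):
    total = 0.0
    for k in range(K):
        cl = [p for p, a in zip(points, assign) if a == k]
        if cl:
            centre = sum(cl) / len(cl)
            total += sum(abs(p - centre) for p in cl)
    return total

def local_search(assign):
    improved = True
    while improved:
        improved = False
        for i in range(len(assign)):
            for k in range(K):
                trial = assign[:i] + [k] + assign[i + 1:]
                if cost(trial) < cost(assign):
                    assign, improved = trial, True
    return assign

pairs = [(i, i + 1) for i in range(0, len(points), 2)]   # coarsen: merge pairs
merged = [(points[a] + points[b]) / 2 for a, b in pairs]
median = sorted(merged)[len(merged) // 2]
coarse = [0 if m < median else 1 for m in merged]        # trivial coarse solve
assign = [0] * len(points)
for label, (a, b) in zip(coarse, pairs):                 # uncoarsen: project back
    assign[a] = assign[b] = label
print(cost(assign), cost(local_search(assign)))          # refinement never worsens it
```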

Proceedings Article
01 Jan 2008
TL;DR: A rank-revealing two-sided orthogonal decomposition method for solving the STLS problem is presented, and it is shown that this algorithm computes the STLS solution as accurately as the SVD method with less computation.
Abstract: In this paper, we present a rank-revealing two-sided orthogonal decomposition method for solving the STLS problem. An error analysis of the algorithm is given. Our numerical experiments show that this algorithm computes the STLS solution as accurately as the SVD method with less computation.
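
For orientation, a sketch of the SVD baseline the paper compares against (classical, unstructured total least squares on synthetic data; the rank-revealing two-sided orthogonal decomposition for structured TLS is not reproduced here).

```python
import numpy as np

# Classical (unstructured) TLS via the SVD on synthetic data -- the baseline
# the paper compares against; the rank-revealing two-sided orthogonal
# decomposition for structured TLS is not reproduced here.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 3))
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true + 0.01 * rng.standard_normal(50)

_, _, vt = np.linalg.svd(np.column_stack([A, b]))
v = vt[-1]                     # right singular vector of the smallest singular value
print(-v[:-1] / v[-1])         # TLS solution, close to x_true
```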

Proceedings Article
01 Jan 2008
TL;DR: In this article, the problem of cylindrical quantum wells with nonzero potential energy and a singularity of the energy on the axis of the cylinder is solved using methods of computer algebra.
Abstract: This paper presents how to solve the problem of cylindrical quantum wells with potential energy different from zero and with a singularity of the energy on the axis of the cylinder. The solution was obtained using methods of computer algebra, and the results depend on Bessel and Kummer functions. The paper presents energy levels and wave functions, in exact form in some cases and in approximate form in others, depending on the possibility of integrating the special functions involved and calculating their zeros. This shows the power of the method in applications to complex problems of quantum mechanics, and the possibility of applying it to other problems in science and engineering.
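
A sketch of the simplest limiting case only (an infinite cylindrical well with zero interior potential, in assumed natural units; the paper's nonzero, singular potentials bring in Kummer functions as well): the energy levels follow from the zeros of the Bessel functions J_m.

```python
import numpy as np
from scipy.special import jn_zeros

# Simplest limiting case only (infinite cylindrical well, zero interior
# potential, natural units): the radial solutions are Bessel functions J_m,
# and the transverse energies follow from their zeros. The paper's singular,
# nonzero potentials bring in Kummer functions as well.
hbar, mass, radius = 1.0, 1.0, 1.0            # assumed units and well radius

for m in range(3):                            # angular quantum number
    alphas = jn_zeros(m, 3)                   # first three zeros of J_m
    print(m, (hbar * alphas / radius)**2 / (2 * mass))   # E_{m,n}
```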

Proceedings Article
01 Jan 2008
TL;DR: Results show that this alternate Hamiltonian can be used in place of the full Hamiltonian on some simple systems without loss of the dynamics of the system.
Abstract: A method of deriving a Hamiltonian from a tree that can be used for fast symplectic maps of N-body systems is presented. A full description of the Hamiltonian is given, along with how its derivatives can be used to implement a symplectic map based upon it. Results show that this alternate Hamiltonian can be used in place of the full Hamiltonian on some simple systems without loss of the dynamics of the system. Speed tests of how the method scales with particle count are also presented and show that the tree-based Hamiltonian scales better than the O(N²) of the standard Hamiltonian. In addition, even with the overhead of the tree, the new scheme can outperform the standard scheme with as few as 1000 particles in the integration. Because of the superior scaling, the tree-based scheme achieves far superior performance when tens of thousands of particles, or more, are involved.
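
For scale, a sketch of the baseline the tree-based scheme is measured against (a second-order kick-drift-kick symplectic map with standard O(N²) pairwise forces; the softening and unit masses are assumptions, and the tree Hamiltonian itself is not reproduced).

```python
import numpy as np

# Baseline for comparison (not the tree Hamiltonian): a second-order
# kick-drift-kick symplectic map with standard O(N^2) pairwise forces;
# softening and the mass values are assumptions.
def accelerations(pos, masses, eps=1e-3):
    d = pos[:, None, :] - pos[None, :, :]                # pairwise separations
    r2 = np.sum(d * d, axis=-1) + eps**2
    np.fill_diagonal(r2, np.inf)                         # no self-force
    return -np.sum(d * masses[None, :, None] / r2[..., None]**1.5, axis=1)

def kick_drift_kick(pos, vel, masses, dt):
    vel = vel + 0.5 * dt * accelerations(pos, masses)    # half kick
    pos = pos + dt * vel                                 # drift
    vel = vel + 0.5 * dt * accelerations(pos, masses)    # half kick
    return pos, vel

rng = np.random.default_rng(0)
p, v, m = rng.standard_normal((100, 3)), np.zeros((100, 3)), np.ones(100) / 100
for _ in range(10):
    p, v = kick_drift_kick(p, v, m, 1e-3)
print(p[0])
```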

Proceedings Article
01 Jul 2008
TL;DR: This is the first attempt to extend the application of the BGK (Bhatnagar-Gross-Krook) scheme to solve RANS equations with a turbulence model using a finite difference method, and a very good agreement is obtained.
Abstract: In this paper, a gas kinetic solver is developed for the Reynolds-Averaged Navier-Stokes (RANS) equations in two space dimensions. To the best of our knowledge, this is the first attempt to extend the application of the BGK (Bhatnagar-Gross-Krook) scheme to solve RANS equations with a turbulence model using a finite difference method. The convection flux terms, which appear on the left-hand side of the RANS equations, are discretized by a semi-discrete finite difference method. The resulting inviscid flux functions are then approximated by the gas-kinetic BGK scheme, which is based on the BGK model of the approximate collisional Boltzmann equation. The cell interface values required by the inviscid flux functions are reconstructed to higher-order spatial accuracy via the MUSCL (Monotone Upstream-Centered Schemes for Conservation Laws) variable interpolation method coupled with a minmod limiter. The diffusion flux terms are discretized by a second-order central difference scheme. To account for the turbulence effect, a combined k-ε / k-ω SST (Shear-Stress Transport) two-equation turbulence model is used in the solver. An explicit time integration method, the modified fourth-order Runge-Kutta method, is used to compute steady-state solutions. Computed results for a supersonic flow past a flat plate, where transition is artificially triggered at 50% of the plate length, are presented. Validating the computed results against existing analytical solutions and comparing them with results from other well-known numerical schemes shows that very good agreement is obtained.
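
A sketch of just the reconstruction step named above (the BGK flux evaluation itself is beyond a few lines and is not reproduced): MUSCL interface states limited by minmod.

```python
import numpy as np

# Only the reconstruction step named in the abstract (the BGK flux evaluation
# itself is not reproduced): MUSCL interface states limited by minmod.
def minmod(a, b):
    return np.where(a * b > 0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def muscl_states(u):
    """Limited left/right extrapolations at the faces of each interior cell."""
    s = minmod(u[1:-1] - u[:-2], u[2:] - u[1:-1])   # limited slope per cell
    return u[1:-1] + 0.5 * s, u[1:-1] - 0.5 * s     # right-face and left-face states

u = np.array([0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0])
print(muscl_states(u))
```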

Proceedings Article
01 Jan 2008
TL;DR: In this paper, some insight is provided into the peculiarities of both the Cell Broadband Engine and the NVIDIA Tesla computing solutions, as regards their cost, performance, programmability and limitations, in order to target scientific computing.
Abstract: Two multicore platforms are currently attracting enormous attention due to their tremendous potential in terms of sustained performance: the Cell Broadband Engine (Cell BE from now on) and the NVIDIA Tesla computing solutions. The former is a recent heterogeneous chip-multiprocessor (CMP) architecture jointly developed by IBM, Sony and Toshiba to offer very high performance, especially on game and multimedia applications; in fact, it is the heart of the PlayStation 3. The latter are general-purpose GPUs (GPGPUs) used as data-parallel computing devices based on the Compute Unified Device Architecture (CUDA) common to the latest NVIDIA GPUs. The common denominator is a multicore platform which provides an enormous potential performance benefit driven by a non-traditional programming model. In this paper we try to provide some insight into the peculiarities of both, as regards their cost, performance, programmability and limitations, in order to target scientific computing.