
Showing papers presented at "Computational Science and Engineering in 2006"


Book ChapterDOI
01 Jan 2006
TL;DR: This chapter discusses the design of the conceptual interfaces in hypre, illustrates their use with various examples, and discusses the data structures and parallel implementation of these interfaces.
Abstract: The hypre software library provides high performance preconditioners and solvers for the solution of large, sparse linear systems on massively parallel computers. One of its attractive features is the provision of conceptual interfaces. These interfaces give application users a more natural means for describing their linear systems, and provide access to methods such as geometric multigrid which require additional information beyond just the matrix. This chapter discusses the design of the conceptual interfaces in hypre and illustrates their use with various examples. We discuss the data structures and parallel implementation of these interfaces. A brief overview of the solvers and preconditioners available through the interfaces is also given.

318 citations


Journal ArticleDOI
01 Mar 2006
TL;DR: The author describes some of the algorithms that have been developed to perform Monte Carlo simulations in science and engineering and to extend the method's applicability and efficiency.
Abstract: Since 1953, researchers have applied the Monte Carlo method to a wide range of areas. Specialized algorithms have also been developed to extend the method's applicability and efficiency. The author describes some of the algorithms that have been developed to perform Monte Carlo simulations in science and engineering.
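The method's core idea fits in a few lines; a minimal textbook illustration (not taken from the article) estimates pi by uniform sampling of the unit square:

```python
import random

def estimate_pi(n_samples, seed=0):
    """Plain Monte Carlo: the fraction of random points in the unit
    square that land inside the quarter circle approximates pi/4."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            hits += 1
    return 4.0 * hits / n_samples
```

The statistical error shrinks only as 1/sqrt(n), which is why the specialized variance-reduction algorithms the article surveys matter in practice.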

109 citations


Journal ArticleDOI
01 Jan 2006
TL;DR: The authors describe the computer architectural choices that have been shaped by almost two decades of collaboration activity on parallel computers for computationally intensive calculations such as quantum chromodynamics on the lattice.
Abstract: apeNEXT is the latest in the APE collaboration's series of parallel computers for computationally intensive calculations such as quantum chromodynamics on the lattice. The authors describe the computer architectural choices that have been shaped by almost two decades of collaboration activity.

45 citations


Journal ArticleDOI
01 Aug 2006
TL;DR: This paper presents an engine that completely automates the prediction of metrics to support a better management of business operations.
Abstract: The ability to forecast metrics and performance indicators for business operations is crucial to proactively avoid abnormal situations, and to do effective business planning. However, expertise is typically required to drive each step of the prediction process. This is impractical when there are thousands of metrics to monitor. Fortunately, for business operations management, extreme accuracy is not required. It is usually enough to know when a metric is likely to go beyond the normal range of values. This gives opportunity for automation. In this paper, we present an engine that completely automates the prediction of metrics to support a better management of business operations.
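The paper's engine is not specified in detail here, but the underlying idea, flagging a metric whose forecast is likely to leave its normal range rather than predicting it exactly, can be sketched with a simple mean-and-band rule (function names are hypothetical):

```python
def normal_band(history, k=3.0):
    """Estimate a 'normal range' for a metric as mean +/- k standard
    deviations of its recent history."""
    n = len(history)
    mean = sum(history) / n
    var = sum((v - mean) ** 2 for v in history) / n
    sd = var ** 0.5
    return mean - k * sd, mean + k * sd

def likely_abnormal(history, forecast, k=3.0):
    """Flag a forecast value that falls outside the normal band;
    extreme accuracy of the forecast itself is not needed."""
    lo, hi = normal_band(history, k)
    return not (lo <= forecast <= hi)
```

Because only band crossings matter, a crude rule like this can be applied automatically to thousands of metrics without per-metric expert tuning, which is the opportunity for automation the abstract describes.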

40 citations


Book ChapterDOI
01 Jan 2006
TL;DR: Two applications are presented in this paper: variational data assimilation and adjoint sensitivity analysis of flood modeling using automatic differentiation tools.
Abstract: Flood modeling involves catchment-scale hydrology and river hydraulics. Analysis and reduction of model uncertainties require sensitivity analysis, reliable initial and boundary conditions, and calibration of empirical parameters. A deterministic approach to the aforementioned estimation and sensitivity analysis problems results in the need to compute the derivatives of a function of model output variables with respect to input variables. Modern Automatic Differentiation (AD) tools such as Tapenade provide an easier and safer way to fulfill this need. Two applications are presented in this paper: variational data assimilation and adjoint sensitivity analysis.

36 citations


Journal ArticleDOI
01 Jan 2006
TL;DR: With Ianus, a next-generation field-programmable gate array (FPGA)-based machine, the authors hope to build a system that can fully exploit the performance potential of FPGA devices.
Abstract: With Ianus, a next-generation field-programmable gate array (FPGA)-based machine, the authors hope to build a system that can fully exploit the performance potential of FPGA devices. A software platform that simplifies Ianus programming will extend its intended application range to a wide class of interesting and computationally demanding problems.

32 citations


Journal ArticleDOI
01 Sep 2006
TL;DR: Austin Peay State University's Department of Physics and Astronomy has reinvigorated its physics program by adding a required computational methods class and small computational components to classes across its curriculum.
Abstract: Austin Peay State University's Department of Physics and Astronomy has reinvigorated its physics program by adding a required computational methods class and small computational components to classes across its curriculum. A front-to-back problem management approach has required a change in the way the department assesses students' performance.

17 citations


Journal ArticleDOI
01 Jun 2006
TL;DR: The kinetic equation for the Wigner function is employed as a model for dissipative quantum transport and applications to single barrier and double barrier structures are discussed.
Abstract: Coherent transport in mesoscopic devices is well described by the Schrödinger equation supplemented by open boundary conditions. When electronic devices are operated at room temperature, however, a realistic transport model needs to include carrier scattering. In this work, the kinetic equation for the Wigner function is employed as a model for dissipative quantum transport. Carrier scattering is treated in an approximate manner through a Boltzmann collision operator. The development of Monte Carlo algorithms for this quantum kinetic equation is complicated by the fact that, as opposed to the semi-classical case, the integral kernel is no longer positive. This so-called negative sign problem requires the introduction of new numerical techniques in order to obtain stable Monte Carlo methods. A particular method for the solution of the stationary Wigner equation is presented. Applications to single-barrier and double-barrier structures are discussed.
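The negative sign problem the abstract mentions can be seen in a toy setting far simpler than the Wigner equation: Monte Carlo estimation of an integral whose kernel changes sign. The sketch below is illustrative only, not the paper's method:

```python
import math
import random

def signed_mc(freq, n_samples, seed=1):
    """Estimate I = integral_0^1 cos(freq*x) dx by uniform sampling.
    The integrand changes sign, so samples carry signed weights that
    largely cancel -- the essential ingredient of the sign problem."""
    rng = random.Random(seed)
    total = 0.0
    total_abs = 0.0
    for _ in range(n_samples):
        w = math.cos(freq * rng.random())
        total += w
        total_abs += abs(w)
    estimate = total / n_samples
    # average sign: the closer to zero, the worse the cancellation
    avg_sign = total / total_abs
    return estimate, avg_sign
```

The exact value is sin(freq)/freq; as freq grows, the average sign shrinks and the sample size needed for a given accuracy grows sharply, motivating the stabilized techniques the paper develops.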

17 citations


Journal ArticleDOI
01 Jun 2006
TL;DR: A new global optimisation algorithm is presented and applied to the design of high-channel-count multichannel filters based on sampled Fibre Bragg Gratings (FBGs) and the results are compared with those obtained by a hybrid Genetic Algorithm (GA) and by the classical Sinc method.
Abstract: A new global optimisation algorithm is presented and applied to the design of high-channel-count multichannel filters based on sampled Fibre Bragg Gratings (FBGs). We focus on the realisation of particular designs corresponding to multichannel filters that consist of 16 and 38 totally reflective, identical channels spaced 100 GHz apart. The results are compared with those obtained by a hybrid Genetic Algorithm (GA) and by the classical Sinc method.

16 citations


Journal ArticleDOI
01 Nov 2006
TL;DR: This article numerically demonstrates the adaptive finite element method's performance and utility on 2D and 3D problems.
Abstract: Using multigrid solvers in the adaptive finite element method yields a powerful tool for solving large-scale partial differential equations that exhibit localized features such as singularities or shocks. In addition to describing the basic method and related theory, this article numerically demonstrates the method's performance and utility on 2D and 3D problems.
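The article's multigrid-AFEM machinery cannot be reproduced here, but a 1D toy version of the adaptivity idea bisects cells wherever a local error indicator is large, so the mesh concentrates near the localized feature:

```python
def adapt(f, a, b, tol, cells):
    """Recursively bisect [a, b] until the midpoint's deviation from
    linear interpolation (a simple local error indicator) is <= tol."""
    mid = 0.5 * (a + b)
    indicator = abs(f(mid) - 0.5 * (f(a) + f(b)))
    if indicator > tol and (b - a) > 1e-8:
        adapt(f, a, mid, tol, cells)
        adapt(f, mid, b, tol, cells)
    else:
        cells.append((a, b))

# refine around the square-root singularity at x = 0
cells = []
adapt(lambda x: x ** 0.5, 0.0, 1.0, 1e-4, cells)
```

The resulting cells are tiny near the singularity and coarse elsewhere, which is exactly the behaviour that makes adaptive methods efficient for solutions with singularities or shocks.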

14 citations


Journal ArticleDOI
01 Jun 2006
TL;DR: The authors test and expand the current state of the art in eigenvalue solvers applied to the field of nanotechnology, singling out non-linear Conjugate Gradients methods as the backbone of their efforts owing to those methods' previous success in predicting the electronic properties of large nanostructures.
Abstract: In this paper we report on our efforts to test and expand the current state-of-the-art in eigenvalue solvers applied to the field of nanotechnology. We singled out the non-linear Conjugate Gradients (CG) methods as the backbone of our efforts for their previous success in predicting the electronic properties of large nanostructures and made a library of three different solvers (two recent and one new) that we integrated into the Parallel Energy SCAN (PESCAN) code to perform a comparison. The methods and their implementation are tuned to the specifics of the physics problem. The main requirements are to be able to find (1) a few, approximately 4-10, of the (2) interior eigenstates, including (3) repeated eigenvalues, for (4) large Hermitian matrices.
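The paper's solver library is not reproduced here, but the simplest relative of the non-linear CG methods it builds on, plain steepest descent on the Rayleigh quotient, can be sketched for a small symmetric matrix; note that this basic form finds an extremal rather than interior eigenpair:

```python
def min_eig_rq(A, x0, steps=2000, lr=0.2):
    """Minimize the Rayleigh quotient rho(x) = x'Ax / x'x by steepest
    descent with renormalization; converges to the smallest eigenvalue
    of the symmetric matrix A (given as a list of rows)."""
    n = len(A)
    x = x0[:]
    rho = 0.0
    for _ in range(steps):
        norm = sum(v * v for v in x) ** 0.5
        x = [v / norm for v in x]
        Ax = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        rho = sum(x[i] * Ax[i] for i in range(n))
        # residual Ax - rho*x is parallel to the Rayleigh-quotient gradient
        g = [Ax[i] - rho * x[i] for i in range(n)]
        x = [x[i] - lr * g[i] for i in range(n)]
    return rho
```

Nonlinear CG accelerates this basic iteration, and spectral transformations (folded spectrum and the like) redirect it toward the interior eigenstates the physics requires.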

Journal ArticleDOI
01 Sep 2006
TL;DR: The trade-offs in using high-level tools for parallel computing, focusing particularly on those that integrate with existing scientific computing software on the desktop, are described.
Abstract: In this article, I describe the trade-offs in using high-level tools for parallel computing, focusing particularly on those that integrate with existing scientific computing software on the desktop.

Journal ArticleDOI
01 May 2006
TL;DR: Teachers should use technology as the catalyst to transform instruction into a learner-centered and inquiry-based education, one in which students construct knowledge through their own investigations.
Abstract: As technology is increasingly used in most facets of the workplace, it is imperative that primary and secondary schools and colleges help create a workforce capable of turning technological advancements into societal benefits. Such a workforce should possess strong backgrounds in math, science, and technology, and its numbers should be sustainable. Educators should use technology as the catalyst to transform instruction into a learner-centered and inquiry-based education, one in which students construct knowledge through their own investigations.

Journal ArticleDOI
01 Aug 2006
TL;DR: A new digital annotation system with a client-server architecture, where the client is a plug-in for a standard web browser and servers are annotation repositories is presented.
Abstract: Digital annotation of multimedia documents adds personal information to them. Sharing of annotations among different users allows discussions and cooperative work. We discuss architectural solutions and storage schemas for the problem of annotating multimedia documents with multimedia objects. Annotations can refer to whole documents or single portions, but also to groups of objects in the same document. We present a new digital annotation system with a client-server architecture, where the client is a plug-in for a standard web browser and the servers are annotation repositories. Annotations can be retrieved and filtered based on their metadata descriptors, and possibly on their content.
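The paper does not give the system's actual storage schema; as a purely hypothetical sketch, an annotation record that targets a whole document or a portion of it, together with a metadata filter, might look like:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Annotation:
    """Hypothetical annotation record: targets a document, optionally a
    fragment of it, and carries metadata descriptors used for filtering."""
    doc_id: str
    author: str
    body: str
    fragment: Optional[str] = None      # None = annotates the whole document
    tags: list = field(default_factory=list)

def filter_annotations(repo, doc_id=None, author=None, tag=None):
    """Retrieve annotations matching all of the given metadata descriptors."""
    return [a for a in repo
            if (doc_id is None or a.doc_id == doc_id)
            and (author is None or a.author == author)
            and (tag is None or tag in a.tags)]
```

In the architecture the paper describes, records like these would live in the server-side annotation repositories, with the browser plug-in issuing the retrieval and filtering queries.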

Journal ArticleDOI
01 Jul 2006
TL;DR: To solve the discrete version of the stationary state Schrödinger equation to Numerov accuracy, the author uses boundary conditions at the limits of the computational domain that mimic an interval of infinite extent.
Abstract: To solve the discrete version of the stationary state Schrödinger equation to Numerov accuracy, the author uses boundary conditions at the limits of the computational domain that mimic an interval of infinite extent. He also describes methods for finding particle energies, scattering coefficients, and partial-wave phase shifts.
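The author's infinite-domain boundary conditions are not reproduced here, but the Numerov recurrence itself can be sketched for the simpler hard-wall (particle-in-a-box) case, using shooting plus bisection to locate the ground-state energy; in units with hbar = 2m = 1 and a box of width 1, the exact answer is E1 = pi^2:

```python
import math

def psi_end(E, n=1000):
    """Integrate psi'' = -E*psi on [0, 1] with the Numerov recurrence,
    starting from psi(0) = 0, and return psi(1)."""
    h = 1.0 / n
    c = h * h * E / 12.0            # constant because V = 0 inside the well
    y0, y1 = 0.0, h                 # psi(0) = 0; initial slope is arbitrary
    for _ in range(n - 1):
        y2 = (2.0 * y1 * (1.0 - 5.0 * c) - y0 * (1.0 + c)) / (1.0 + c)
        y0, y1 = y1, y2
    return y1

def ground_state(lo=5.0, hi=15.0, iters=60):
    """Shooting method: bisect on E until psi(1) changes sign."""
    flo = psi_end(lo)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        fm = psi_end(mid)
        if flo * fm <= 0:
            hi = mid
        else:
            lo, flo = mid, fm
    return 0.5 * (lo + hi)
```

The Numerov scheme is fourth-order accurate, so even this modest grid recovers the eigenvalue to many digits; the article's transparent boundary conditions extend the same recurrence to scattering problems on an effectively infinite interval.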

Journal ArticleDOI
01 Aug 2006
TL;DR: It is argued that integrating interactive Virtual Reality environments with Database (DB) technology has the potential of providing much flexibility and of resulting in enhanced interfaces for accessing content from digital archives.
Abstract: This paper deals with the development of interactive Virtual Reality (VR) environments. We argue that integrating such environments with Database (DB) technology has the potential, on the one hand, of providing much flexibility and, on the other, of resulting in enhanced interfaces for accessing content from digital archives. The paper discusses the main issues related to such integration. It also describes two projects related to the use of advanced tools for the dissemination of Cultural Heritage (CH) content. Within these projects, an integrated framework has been developed that enhances conventional VR environments with DB interactions.

Journal ArticleDOI
01 Mar 2006
TL;DR: The scheduling problem that minimises both schedule length and switching activities for applications with loops on multiple functional unit architectures is studied and an algorithm, PRRS, is proposed, which attempts to minimise both switching activities and schedule length while performing scheduling and allocation simultaneously.
Abstract: Switching activity and schedule length are two of the most important factors in power dissipation. This paper studies the scheduling problem of minimising both schedule length and switching activities for applications with loops on multiple-functional-unit architectures. We show that finding a schedule with minimal switching activities among all minimum-latency schedules, with or without resource constraints, is NP-complete. Although the minimum-latency scheduling problem is polynomial-time solvable if there is no resource constraint or only one Functional Unit (FU), the problem becomes NP-complete when switching activities are considered as a second constraint. An algorithm, Power Reduction Rotation Scheduling (PRRS), is proposed. The algorithm attempts to minimise both switching activities and schedule length while performing scheduling and allocation simultaneously. Compared with list scheduling, PRRS shows an average of 20.1% reduction in schedule length and 52.2% reduction in bus switching activities. Our algorithm also outperforms the approach that considers scheduling and allocation in separate phases.

Journal ArticleDOI
01 Mar 2006
TL;DR: A new key agreement protocol based on a shared conference password is proposed, which provides an efficient algorithm with low computation cost to construct a secret communication channel and is suitable for application in ad hoc networks.
Abstract: A new key agreement protocol based on a shared conference password is proposed in this paper. The protocol provides an efficient algorithm and incurs low computation cost to construct a secret communication channel. In addition, honest participants can use the password to authenticate the other participants. The proposed scheme is suitable for application in ad hoc networks. It also provides an efficient protocol to reconstruct a new session key when members join or leave the conference.
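The paper's own protocol is not specified here; for illustration, the classic Burmester-Desmedt conference-key agreement, whose broadcasts a shared password could be used to authenticate, shows how a ring of participants derives one common key in two broadcast rounds (toy parameters, not secure):

```python
import random

P = 2147483647          # toy prime modulus (the Mersenne prime 2^31 - 1)
G = 7                   # toy generator; real deployments use vetted groups

def bd_conference_key(n, seed=0):
    """Burmester-Desmedt group key agreement among n participants in a
    ring; every participant derives the same key g^(r1*r2 + r2*r3 + ...)."""
    rng = random.Random(seed)
    r = [rng.randrange(2, P - 1) for _ in range(n)]       # secret exponents
    z = [pow(G, ri, P) for ri in r]                       # round 1 broadcasts
    # round 2 broadcasts: X_i = (z_{i+1} / z_{i-1})^{r_i} mod P
    X = [pow(z[(i + 1) % n] * pow(z[(i - 1) % n], P - 2, P) % P, r[i], P)
         for i in range(n)]
    keys = []
    for i in range(n):
        k = pow(z[(i - 1) % n], n * r[i], P)
        for j in range(n - 1):
            k = k * pow(X[(i + j) % n], n - 1 - j, P) % P
        keys.append(k)
    return keys
```

Each participant does only a handful of modular exponentiations, which is the kind of low computation cost that makes password-authenticated group protocols attractive for ad hoc networks.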

Journal ArticleDOI
A. Azooz1
01 Jul 2006
TL;DR: Using Matlab's data-acquisition feature and a computer sound card, the author describes a method to obtain voltage and current data to plot the I-V characteristics of a semiconductor diode and a plasma Langmuir probe glow discharge without installing any additional hardware.
Abstract: Using Matlab's data-acquisition feature and a computer sound card, the author describes a method to obtain voltage and current data to plot the I-V characteristics of a semiconductor diode and a plasma Langmuir probe glow discharge without installing any additional hardware.

Journal ArticleDOI
Norman Chonacky1
01 Sep 2006
TL;DR: As someone who began using computers in physics instruction starting way back in 1970, I must confess that the excitement of that odd band of physics professors who were my fellow enthusiasts was somewhat misplaced.
Abstract: As someone who began using computers in physics instruction starting way back in 1970, I must confess that the excitement of that odd band of physics professors who were my fellow enthusiasts was somewhat misplaced. What then appeared to be wild dreams has become mundane practice, and the cutting edge of computer use, especially for numerical computations, has moved beyond the bounds that even our eager eyes had conceived as extreme.

Journal ArticleDOI
01 Aug 2006
TL;DR: This paper introduces GridFS, a next-generation I/O solution that can scale to hundreds or thousands of nodes and several hundreds of terabytes of storage with very high I/O and metadata throughput, and goes a step further to eliminate runtime file access overheads as compared to other implementations on the same model.
Abstract: I/O has always been a performance bottleneck for applications running on clusters. Most traditional storage architectures fail to meet the requirement of concurrent access to the same file that is posed by most High-Performance Computing (HPC) applications. Many parallel and cluster file systems are still plagued by metadata overheads and associated management complexities that prevail in read/write-intensive scenarios. In this paper we introduce GridFS, a next-generation I/O solution that can scale to hundreds or thousands of nodes and several hundreds of terabytes of storage with very high I/O and metadata throughput. It is based on the Object-based Storage Architecture (OSA) model and goes a step further to eliminate runtime file access overheads as compared to other implementations on the same model.

Journal ArticleDOI
01 Jun 2006
TL;DR: The experimental results show that standard techniques are more robust than evolutionary algorithms, while the latter are more effective for the multiobjective problem in terms of standard metrics and function calls.
Abstract: In this work, we compare evolutionary algorithms and standard optimisation methods on two circuit design problems: the parameter extraction of a device circuit model and the multiobjective optimisation of an operational transconductance amplifier. The comparison is made in terms of quality of the solutions and computational effort, that is, the objective function evaluations needed to compute them. The experimental results show that standard techniques are more robust than evolutionary algorithms, while the latter are more effective in terms of the standard metrics and function calls. In particular, for the multiobjective problem, the observed Pareto front determined by evolutionary algorithms has a better spread of solutions, with a larger number of non-dominated solutions, than the fronts found by the standard multiobjective techniques.

Journal ArticleDOI
01 Mar 2006
TL;DR: The ability of DNA-based computing to resolve NP-complete problems is shown by demonstrating how to apply stickers in the sticker-based model to construct the DNA solution space for the set-basis problem.
Abstract: This paper demonstrates how to apply stickers in the sticker-based model to construct the DNA solution space for the set-basis problem, and how to apply DNA operations in the Adleman-Lipton model to solve that problem from the sticker solution space. Furthermore, this work shows the ability of DNA-based computing to resolve NP-complete problems.

Journal ArticleDOI
01 Mar 2006
TL;DR: The bound accurately depicts the overall blocking behaviour of HVOB networks, reveals the inherent relationships among blocking probability, network depth and network hardware cost, and enables a desirable tradeoff to be made among them.
Abstract: A combination of horizontal expansion and vertical stacking of optical banyan (HVOB) is the general scheme for building banyan-based optical switching networks. HVOB networks usually require either higher hardware cost or larger depth to guarantee the nonblocking property. In this paper, we analyse the blocking probabilities of HVOB networks with one extra stage and develop their upper bound. The bound accurately depicts the overall blocking behaviour of HVOB networks, reveals the inherent relationships among blocking probability, network depth and network hardware cost, and enables a desirable tradeoff to be made among them.

Journal ArticleDOI
01 Aug 2006
TL;DR: In this paper, the basic idea of the 'what' and 'how' problem descriptions, as well as attributes and corresponding multimedia symbols, are considered and some examples of interface panels that use these attributes are described.
Abstract: A searching method based on 'what' and 'how' problem descriptions, within a special software component library, is presented. The 'what' problem description is based on a high-level representation of general features of initial and final data the problem can process or produce. The 'how' problem description is based on another high-level representation of the algorithmic features of the problem solution. In this paper, the basic idea of the 'what' and 'how' problem descriptions, as well as attributes and corresponding multimedia symbols, are considered. Some examples of interface panels that use these attributes are also described.

Journal ArticleDOI
01 Mar 2006
TL;DR: In this paper, a Hierarchical Grown Bluetree (HGB) topology is proposed, in which nodes are added to the bluetree level by level as it grows, so as to preserve shorter routing paths.
Abstract: Bluetooth is a promising technology for short-range wireless communication and networking, mainly used as a replacement for connecting cables. Since the Bluetooth specification only defines how to build a piconet, several solutions have been proposed in the literature to construct a scatternet from piconets. A tree-shaped scatternet is called a bluetree. In this paper, we present a method to generate the bluetree hierarchically; namely, the nodes are added into the bluetree level by level. This kind of Hierarchical Grown Bluetree (HGB) topology resolves the defects of the conventional bluetree. As it grows, the HGB always remains balanced, so as to preserve shorter routing paths. Besides, the links between siblings provide alternative paths for routing. As a result, the traffic load at parent nodes can be greatly reduced, and only two separate parts will result if a parent node is lost. The Bluetooth network therefore achieves better reliability.

Journal ArticleDOI
01 Jun 2006
TL;DR: The kinematics of nanoparticle-reinforced composite materials as a continuum media, the formulation of governing equations (fundamentals) and the statement of boundary conditions for multi-scale modelling of the material are described.
Abstract: Currently, research on modelling the interface phenomena of nanoparticle-reinforced composite materials, notably Carbon Nanotube (CNT)-epoxy composites, is being pursued across the length scales. This paper describes the kinematics of nanoparticle-reinforced composite materials as a continuum medium, the formulation of governing equations (fundamentals) and the statement of boundary conditions for multi-scale modelling of the material. The identification problem for the non-classical parameters of the model has been solved using experimental results and a conjugate gradient method. The model has been validated to predict some basic mechanical properties of a polymeric matrix reinforced with nanoscale particles/fibres/tubes (including CNTs) as a function of the size and dispersion of the nanoparticles. The outcome of this paper is expected to have wide-ranging technical benefits, with direct relevance to industry in the areas of transportation (aerospace, automotive, rail, maritime) and civil infrastructure development.

Journal ArticleDOI
01 Jun 2006
TL;DR: It is shown that relevant properties of nanocrystalline semiconductors containing a large fraction of high-energy GBs are quite distinct from those of coarse-grained and bulk semiconductors.
Abstract: Nanocrystalline semiconductors display unique features compared to coarse-grained microstructures and even to their monocrystalline counterparts. We contend that such peculiarities are due to: (1) the extremely large fraction of atoms located at Grain Boundaries (GBs) and (2) the 'character distribution' of GBs, which are mostly high-energy, random interfaces. Initially, we study the structure of random GBs in nanocrystalline semiconductors by means of large-scale Molecular Dynamics (MD) simulations. Subsequently, the atomic structure and electronic properties of some typical high-energy GBs in Si- and C-based nanostructures are characterised by means of a semi-empirical tight-binding Hamiltonian. We show that relevant properties of nanocrystalline semiconductors containing a large fraction of high-energy GBs are quite distinct from those of coarse-grained and bulk semiconductors.

Journal ArticleDOI
01 Mar 2006
TL;DR: From the experimental results, it is shown that the proposed method correctly identifies the fragments belonging to the same sub-image and successfully collects them together into a complete sub-image, which can be distributed to different processing nodes for further processing.
Abstract: Uniform image partitioning based on the spiral architecture plays an important role in parallel image processing in many respects, such as uniform data partitioning, load balancing, and zero data exchange between the processing nodes. However, when the number of partitions is not a power of seven (e.g., 7 or 49), every sub-image except one is split into a few fragments that are mixed together, and we cannot tell which fragments belong to which sub-image. This is an unacceptable flaw for parallel image processing. This paper proposes a method to resolve this problem. The experimental results show that the proposed method correctly identifies the fragments belonging to the same sub-image and successfully collects them together into a complete sub-image. These sub-images can then be distributed to different processing nodes for further processing.

Journal ArticleDOI
01 Aug 2006
TL;DR: This paper addresses the dynamic context of a collection of linked multimedia documents, of which the web is a perfect example, and concludes that the author of a web page cannot completely define that document's semantics.
Abstract: It is well known that interpretation depends on context, whether for a work of art, a piece of literature, or a natural language utterance. This paper addresses the dynamic context of a collection of linked multimedia documents, of which the web is a perfect example. Contextual document semantics emerge through identification of various users' browsing paths through this multimedia document collection. Some implications of our approach are that the author of a web page cannot completely define that document's semantics and that semantics can emerge through use.