
Showing papers on "Basis function published in 1998"


Journal ArticleDOI
TL;DR: In this article, a variational procedure is proposed and applied to optimize auxiliary bases for main group and transition metal atoms, which are tested for more than 350 molecules; the RI approximation affects molecular MP2 energies by less than 60 μEh per atom and equilibrium distances by 0.2 pm.

2,392 citations


Journal ArticleDOI
TL;DR: All the reconstruction methods considered in this paper can be implemented by a unified neural network architecture, which the brain feasibly could use to solve related problems.
Abstract: Physical variables such as the orientation of a line in the visual field or the location of the body in space are coded as activity levels in populations of neurons. Reconstruction or decoding is an inverse problem in which the physical variables are estimated from observed neural activity. Reconstruction is useful first in quantifying how much information about the physical variables is present in the population and, second, in providing insight into how the brain might use distributed representations in solving related computational problems such as visual object recognition and spatial navigation. Two classes of reconstruction methods, namely, probabilistic or Bayesian methods and basis function methods, are discussed. They include important existing methods as special cases, such as population vector coding, optimal linear estimation, and template matching. As a representative example for the reconstruction problem, different methods were applied to multi-electrode spike train data from hippocampal place cells in freely moving rats. The reconstruction accuracy of the trajectories of the rats was compared for the different methods. Bayesian methods were especially accurate when a continuity constraint was enforced, and the best errors were within a factor of two of the information-theoretic limit on how accurate any reconstruction can be and were comparable with the intrinsic experimental errors in position tracking. In addition, the reconstruction analysis uncovered some interesting aspects of place cell activity, such as the tendency for erratic jumps of the reconstructed trajectory when the animal stopped running. In general, the theoretical values of the minimal achievable reconstruction errors quantify how accurately a physical variable is encoded in the neuronal population in the sense of mean square error, regardless of the method used for reading out the information. 
One related result is that the theoretical accuracy is independent of the width of the Gaussian tuning function only in two dimensions. Finally, all the reconstruction methods considered in this paper can be implemented by a unified neural network architecture, which the brain feasibly could use to solve related problems.
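The Bayesian decoding described above can be sketched in a few lines: with Gaussian tuning curves and a Poisson spike-count likelihood, the maximum-likelihood position is the one whose predicted firing best explains the observed counts. All tuning parameters below (centers, width, peak rate, time window) are made-up illustration values, not taken from the paper.

```python
import math

# Hypothetical 1-D track discretized into candidate positions; Gaussian
# tuning curves with invented centers and a common width.
positions = [i * 0.05 for i in range(41)]          # 0.0 .. 2.0 m
centers = [0.2, 0.6, 1.0, 1.4, 1.8]                # one per "place cell"
width, peak_rate, tau = 0.15, 20.0, 0.25           # m, Hz, s (time window)

def rate(c, x):
    """Gaussian tuning curve: firing rate of a cell centered at c."""
    return peak_rate * math.exp(-0.5 * ((x - c) / width) ** 2)

def decode(counts):
    """Poisson maximum-likelihood estimate of position from spike counts."""
    best_x, best_ll = None, -float("inf")
    for x in positions:
        ll = 0.0
        for c, n in zip(centers, counts):
            lam = tau * rate(c, x) + 1e-12         # expected spike count
            ll += n * math.log(lam) - lam          # Poisson log-likelihood
        if ll > best_ll:
            best_x, best_ll = x, ll
    return best_x

# A burst from the cell centered at 1.0 m should pull the estimate there.
print(decode([0, 1, 5, 1, 0]))
```

Enforcing the paper's continuity constraint would amount to multiplying this likelihood by a transition prior centered on the previous estimate before taking the maximum.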

632 citations


Proceedings ArticleDOI
24 Jul 1998
TL;DR: This paper disproves the widespread belief within the computer graphics community that Catmull-Clark subdivision surfaces cannot be evaluated directly without explicitly subdividing, and shows that the surface and all its derivatives can be evaluated in terms of a set of eigenbasis functions that depend only on the subdivision scheme.
Abstract: In this paper we disprove the belief widespread within the computer graphics community that Catmull-Clark subdivision surfaces cannot be evaluated directly without explicitly subdividing. We show that the surface and all its derivatives can be evaluated in terms of a set of eigenbasis functions which depend only on the subdivision scheme and we derive analytical expressions for these basis functions. In particular, on the regular part of the control mesh where Catmull-Clark surfaces are bi-cubic B-splines, the eigenbasis is equal to the power basis. Also, our technique is both easy to implement and efficient. We have used our implementation to compute high quality curvature plots of subdivision surfaces. The cost of our evaluation scheme is comparable to that of a bi-cubic spline. Therefore, our method allows many algorithms developed for parametric surfaces to be applied to Catmull-Clark subdivision surfaces. This makes subdivision surfaces an even more attractive tool for free-form surface modeling. CR Categories: I.3.5 [Computer Graphics]: Computational Geometry and Object Modeling—Curve, Surface, Solid, and Object Representations J.6 [Computer Applications]: Computer-Aided Engineering—Computer Aided Design (CAD)
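On the regular part of the control mesh the surface is a bicubic B-spline, so evaluation there reduces to the familiar power-basis form. The sketch below shows the one-dimensional cubic case using the standard uniform B-spline coefficient matrix; it is a simplified illustration only, not the paper's eigenbasis machinery for extraordinary vertices.

```python
# Uniform cubic B-spline basis in power (monomial) form: each basis value is
# a cubic polynomial in t, with the standard coefficient matrix divided by 6.
M = [
    [1/6, -3/6,  3/6, -1/6],   # weight polynomial for control point P0
    [4/6,  0/6, -6/6,  3/6],   # P1
    [1/6,  3/6,  3/6, -3/6],   # P2
    [0/6,  0/6,  0/6,  1/6],   # P3
]

def bspline_basis(t):
    """The four cubic B-spline basis values at parameter t in [0, 1]."""
    powers = [1.0, t, t * t, t * t * t]
    return [sum(m * p for m, p in zip(row, powers)) for row in M]

def curve_point(ctrl, t):
    """Point on the cubic B-spline segment defined by 4 control values."""
    return sum(c * b for c, b in zip(ctrl, bspline_basis(t)))

# Partition of unity: the four basis functions always sum to 1.
print(sum(bspline_basis(0.3)))
```

A bicubic surface patch evaluates the same basis in two parameters and takes the tensor product; the paper's contribution is extending this kind of direct evaluation to patches containing an extraordinary vertex.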

584 citations


Journal ArticleDOI
TL;DR: This paper gives the first theoretical foundation for methods that solve partial differential equations by collocation with (possibly radial) basis functions.

559 citations


Journal ArticleDOI
TL;DR: The RI‐J technique to approximate Coulomb interactions (by means of an auxiliary basis set approximation for the electron density) even shows superlinear speedup on distributed memory architectures.
Abstract: The parallelization of density functional treatments of molecular electronic energy and first-order gradients is described, and the performance is documented. The quadrature required for exchange correlation terms and the treatment of exact Coulomb interaction scales virtually linearly up to 100 nodes. The RI-J technique to approximate Coulomb interactions (by means of an auxiliary basis set approximation for the electron density) even shows superlinear speedup on distributed memory architectures. The bottleneck is then linear algebra. Demonstrative application examples include molecules with up to 300 atoms and 3000 basis functions that can now be treated in a few hours per geometry optimization cycle in C1 symmetry. © 1998 John Wiley & Sons, Inc. J Comput Chem 19: 1746–1757, 1998

480 citations


Journal ArticleDOI
TL;DR: In this paper, a meshless method based on a Local Boundary Integral Equation (LBIE) approach is proposed that combines the advantageous features of the Galerkin finite element method (GFEM), the boundary element method (BEM), and element-free Galerkin methods; the approach is quite general and easily applicable to nonlinear problems and non-homogeneous domains.
Abstract: The Galerkin finite element method (GFEM) owes its popularity to the local nature of nodal basis functions, i.e., the nodal basis function, when viewed globally, is non-zero only over a patch of elements connecting the node in question to its immediately neighboring nodes. The boundary element method (BEM), on the other hand, reduces the dimensionality of the problem by one, through involving the trial functions and their derivatives, only in the integrals over the global boundary of the domain; whereas, the GFEM involves the integration of the “energy” corresponding to the trial function over a patch of elements immediately surrounding the node. The GFEM leads to banded, sparse and symmetric matrices; the BEM based on the global boundary integral equation (GBIE) leads to full and unsymmetrical matrices. Because of the seemingly insurmountable difficulties associated with the automatic generation of element-meshes in GFEM, especially for 3-D problems, there has been a considerable interest in element free Galerkin methods (EFGM) in recent literature. However, the EFGMs still involve domain integrals over shadow elements and lead to difficulties in enforcing essential boundary conditions and in treating nonlinear problems. The object of the present paper is to present a new method that combines the advantageous features of all the three methods: GFEM, BEM and EFGM. It is a meshless method. It involves only boundary integration, however, over a local boundary centered at the node in question; it poses no difficulties in satisfying essential boundary conditions; it leads to banded and sparse system matrices; it uses the moving least squares (MLS) approximations. The method is based on a Local Boundary Integral Equation (LBIE) approach, which is quite general and easily applicable to nonlinear problems, and non-homogeneous domains. 
The concept of a “companion solution” is introduced so that the LBIE for the value of the trial solution at a source point inside the domain Ω of the given problem involves only the trial function in the integral over the local boundary ∂Ω_s of a sub-domain Ω_s centered at the node in question. This is in contrast to the traditional GBIE, which involves the trial function as well as its gradient over the global boundary Γ of Ω. For source points that lie on Γ, the integrals over ∂Ω_s involve, on the other hand, both the trial function and its gradient. It is shown that the satisfaction of the essential as well as natural boundary conditions is quite simple and algorithmically very efficient in the present LBIE approach. In the example problems dealing with Laplace and Poisson's equations, high rates of convergence for the Sobolev norms ||·||₀ and ||·||₁ have been found. In essence, the present EF-LBIE (Element Free-Local Boundary Integral Equation) approach is found to be a simple, efficient, and attractive alternative to the EFG methods that have been extensively popularized in recent literature.

471 citations


Book ChapterDOI
29 Jun 1998
TL;DR: A reparameterization of the BRDF is proposed as a function of the halfangle and a difference angle instead of the usual parameterization in terms of angles of incidence and reflection, which reduces storage requirements for a large class of BRDFs.
Abstract: We describe an idea for making decomposition of Bidirectional Reflectance Distribution Functions into basis functions more efficient, by performing a change-of-variables transformation on the BRDFs. In particular, we propose a reparameterization of the BRDF as a function of the halfangle (i.e. the angle halfway between the directions of incidence and reflection) and a difference angle instead of the usual parameterization in terms of angles of incidence and reflection. Because features in common BRDFs, including specular and retroreflective peaks, are aligned with the transformed coordinate axes, the change of basis reduces storage requirements for a large class of BRDFs. We present results derived from analytic BRDFs and measured data.
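The change of variables at the heart of this idea is easy to compute. The sketch below maps a pair of incidence/reflection directions to the half-angle and difference angle; it is a simplified isotropic version that ignores the azimuthal angles the paper's full parameterization also tracks.

```python
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def half_diff_angles(wi, wo):
    """Map incidence/reflection directions (z = surface normal) to
    (theta_h, theta_d): the half-vector's polar angle and the angle
    between the incidence direction and the half-vector."""
    wi, wo = normalize(wi), normalize(wo)
    h = normalize(tuple(a + b for a, b in zip(wi, wo)))   # half-vector
    theta_h = math.acos(max(-1.0, min(1.0, h[2])))        # vs. normal
    cos_d = sum(a * b for a, b in zip(wi, h))
    theta_d = math.acos(max(-1.0, min(1.0, cos_d)))
    return theta_h, theta_d

# Mirror reflection about the normal: the half-vector coincides with the
# normal, so theta_h = 0 regardless of the (equal) angle of incidence.
print(half_diff_angles((1, 0, 1), (-1, 0, 1)))
```

This is why the reparameterization helps: a specular lobe, which wanders through the usual (incidence, reflection) coordinates, sits at a fixed theta_h ≈ 0 here, so far fewer basis functions are needed to represent it.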

297 citations


Journal ArticleDOI
TL;DR: In this paper, a finite element method is proposed for one-dimensional interface problems involving discontinuities in the coefficients of the differential equations and the derivatives of the solutions, which is shown to be second-order accurate in the infinity norm.

287 citations


Journal ArticleDOI
TL;DR: This paper addresses architectural, attribute-relevance, and scaling issues by initializing RBF networks with decision trees that define relatively pure regions in the instance space; each of these regions then determines one basis function.
Abstract: Successful implementations of radial-basis function (RBF) networks for classification tasks must deal with architectural issues, the burden of irrelevant attributes, scaling, and some other problems. This paper addresses these issues by initializing RBF networks with decision trees that define relatively pure regions in the instance space; each of these regions then determines one basis function. The resulting network is compact, easy to induce, and has favorable classification accuracy.
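The tree-to-RBF initialization can be sketched directly: each leaf region of a fitted decision tree supplies one Gaussian basis function whose center and width come from the region's geometry. The one-attribute regions and the width heuristic below are invented for illustration, not taken from the paper.

```python
import math

# Hypothetical leaf regions from a decision tree fitted on one attribute:
# each "relatively pure" region becomes one Gaussian basis function.
regions = [(0.0, 2.0), (2.0, 3.0), (3.0, 6.0)]   # (lo, hi) intervals

def region_to_rbf(lo, hi):
    """Center the basis function in the region; scale width to region size
    (a heuristic choice for this sketch)."""
    return (lo + hi) / 2.0, (hi - lo) / 2.0

basis = [region_to_rbf(lo, hi) for lo, hi in regions]

def activations(x):
    """Hidden-layer outputs of the tree-initialized RBF network."""
    return [math.exp(-0.5 * ((x - c) / s) ** 2) for c, s in basis]

# A point deep inside the first region activates its basis function most.
acts = activations(1.0)
print(acts.index(max(acts)))
```

In higher dimensions each leaf is a hyperrectangle, so the same idea yields one center coordinate and one width per attribute, which also sidesteps irrelevant attributes the tree never split on.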

130 citations


Journal ArticleDOI
TL;DR: In this article, the adaptive finite element method is extended to fully self-consistent calculations of realistic materials; the method is highly adaptive, sparse, parallel, and suited to O(N) methods, thanks to localized finite-element basis functions.
Abstract: The adaptive finite-element method proposed in our previous work [Phys. Rev. B 54 (1996) 7602] is extended to fully self-consistent calculations of realistic materials. Our method is highly adaptive, sparse, parallel, and suited to O(N) methods, thanks to the localized finite-element basis functions. Accurate ionic forces can also be calculated within practical computation times. Applications to the structural properties of diamond, c-BN, and the C60 molecule, and molecular dynamics with O(N³) scaling, are shown first, followed by detailed error analyses. Then the O(N) method based on the orbital formulation is realized within our approach.

125 citations


Journal ArticleDOI
TL;DR: Numerical examples are presented, showing that the considered DG method is of second-order accuracy in space and third-order accuracy in time, and that the adaptive procedure is capable of updating the spatial meshes and the time steps when necessary, making the solutions reliable and the computation efficient.

Journal ArticleDOI
TL;DR: By iteratively combining two procedures, this work achieves a controlled way of training and modifying RBF networks, which balances accuracy, training time, and complexity of the resulting network.

Proceedings ArticleDOI
16 Dec 1998
TL;DR: In this paper, an intelligent tracking control architecture is proposed for a class of continuous-time nonlinear dynamic systems actuated by piezoelectric actuators, where an approximation function is introduced to compensate for effects of the hysteresis nonlinearities.
Abstract: An intelligent tracking control architecture is proposed for a class of continuous-time nonlinear dynamic systems actuated by piezoelectric actuators. Generally, hysteresis nonlinearity exists in the piezoelectric actuator, which may cause undesirable inaccuracy. Based on solutions of a general hysteresis model, an approximation function is introduced to compensate for effects of the hysteresis nonlinearities. This approximation function is implemented by a fuzzy-logic method and is expressed as a series expansion of basis functions. Combining this approximation function with adaptive control techniques, an intelligent control algorithm is developed. As a result, global asymptotic stability of the system is established in the Lyapunov sense. Simulation results are included to demonstrate the control performance.

Journal ArticleDOI
TL;DR: The numerical wavelet-optimized finite difference method is extended to arbitrarily high order, so that one obtains, in effect, an adaptive grid and adaptive order numerical method which can achieve errors equivalent to errors obtained with a "spectrally accurate" numerical method.
Abstract: Wavelets detect information at different scales and at different locations throughout a computational domain. Furthermore, wavelets can detect the local polynomial content of computational data. Numerical methods are most efficient when the basis functions of the method are similar to the data present. By designing a numerical scheme in a completely adaptive manner around the data present in a computational domain, one can obtain optimal computational efficiency. This paper extends the numerical wavelet-optimized finite difference (WOFD) method to arbitrarily high order, so that one obtains, in effect, an adaptive grid and adaptive order numerical method which can achieve errors equivalent to errors obtained with a "spectrally accurate" numerical method.

Proceedings Article
01 Dec 1998
TL;DR: This work presents an algorithm for encoding a time series that does not require blocking the data and results in a shift invariant, spikelike representation that resembles coding in the cochlear nerve.
Abstract: A common way to represent a time series is to divide it into short-duration blocks, each of which is then represented by a set of basis functions. A limitation of this approach, however, is that the temporal alignment of the basis functions with the underlying structure in the time series is arbitrary. We present an algorithm for encoding a time series that does not require blocking the data. The algorithm finds an efficient representation by inferring the best temporal positions for functions in a kernel basis. These can have arbitrary temporal extent and are not constrained to be orthogonal. This allows the model to capture structure in the signal that may occur at arbitrary temporal positions and preserves the relative temporal structure of underlying events. The model is shown to be equivalent to a very sparse and highly overcomplete basis. Under this model, the mapping from the data to the representation is nonlinear, but can be computed efficiently. This form also allows the use of existing methods for adapting the basis itself to data. This approach is applied to speech data and results in a shift invariant, spikelike representation that resembles coding in the cochlear nerve.

Journal ArticleDOI
TL;DR: In this study, the moving scene is decomposed into different regions with respect to their motion, by means of a pattern recognition scheme, using the median radial basis function (MRBF) neural network.
Abstract: Various approaches have been proposed for simultaneous optical flow estimation and segmentation in image sequences. In this study, the moving scene is decomposed into different regions with respect to their motion, by means of a pattern recognition scheme. The inputs of the proposed scheme are the feature vectors representing still image and motion information. Each class corresponds to a moving object. The classifier employed is the median radial basis function (MRBF) neural network. An error criterion function derived from the probability estimation theory and expressed as a function of the moving scene model is used as the cost function. Each basis function is activated by a certain image region. Marginal median and median of the absolute deviations from the median (MAD) estimators are employed for estimating the basis function parameters. The image regions associated with the basis functions are merged by the output units in order to identify moving objects.

Dissertation
01 Jan 1998
TL;DR: This thesis provides an analysis (a proof of convergence, together with bounds on approximation error) of temporal-difference learning in the context of autonomous (uncontrolled) systems, as applied to the approximation of infinite-horizon discounted rewards and of average and differential rewards.
Abstract: In principle, a wide variety of sequential decision problems--ranging from dynamic resource allocation in telecommunication networks to financial risk management--can be formulated in terms of stochastic control and solved by the algorithms of dynamic programming. Such algorithms compute and store a value function, which evaluates expected future reward as a function of current state. Unfortunately, exact computation of the value function typically requires time and storage that grow proportionately with the number of states, and consequently, the enormous state spaces that arise in practical applications render the algorithms intractable. In this thesis, we study tractable methods that approximate the value function. Our work builds on research in an area of artificial intelligence known as reinforcement learning. A point of focus of this thesis is temporal-difference learning--a stochastic algorithm inspired to some extent by phenomena observed in animal behavior. Given a selection of basis functions, the algorithm updates weights during simulation of the system such that the weighted combination of basis functions ultimately approximates a value function. We provide an analysis (a proof of convergence, together with bounds on approximation error) of temporal-difference learning in the context of autonomous (uncontrolled) systems as applied to the approximation of (1) infinite horizon discounted rewards and (2) average and differential rewards. As a special case of temporal-difference learning in a context involving control, we propose variants of the algorithm that generate approximate solutions to optimal stopping problems. We analyze algorithms designed for several problem classes: (1) optimal stopping of a stationary mixing process with an infinite horizon and discounted rewards; (2) optimal stopping of an independent increments process with an infinite horizon and discounted rewards; (3) optimal stopping with a finite horizon and discounted rewards; (4) a zero-sum two-player stopping game with an infinite horizon and discounted rewards. We also present a computational case study involving a complex optimal stopping problem that is representative of those arising in the financial derivatives industry. In addition to algorithms for tuning basis function weights, we study an approach to basis function generation. In particular, we explore the use of "scenarios" that are representative of the range of possible events in a system. Each scenario is used to construct a basis function that maps states to future rewards contingent on the future realization of the scenario. We derive, in the context of autonomous systems, a bound on the number of "representative scenarios" that suffices for uniformly accurate approximation of the value function. The bound exhibits a dependence on a measure of "complexity" of the system that can often grow at a rate much slower than the state space size. (Copies available exclusively from MIT Libraries, Rm. 14-0551, Cambridge, MA 02139-4307. Ph. 617-253-5668; Fax 617-253-1690.)
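The weight-update rule the abstract describes, temporal-difference learning with a linear combination of basis functions, fits in a few lines. The toy autonomous chain below (uniform transitions, a reward on entering one state, one-hot basis functions, invented constants) is not from the thesis; it only illustrates the TD(0) update.

```python
import random

# TD(0) with linearly combined basis functions on a toy 5-state
# uncontrolled chain with discounted rewards (all constants made up).
random.seed(0)
n_states, gamma, alpha = 5, 0.9, 0.05

def phi(s):
    """Basis functions: simple one-hot features, one per state."""
    return [1.0 if i == s else 0.0 for i in range(n_states)]

w = [0.0] * n_states                 # weights tuned during simulation

def value(s):
    """Approximate value: weighted combination of basis functions."""
    return sum(wi * fi for wi, fi in zip(w, phi(s)))

s = 0
for _ in range(20000):               # simulate the autonomous system
    s_next = random.randrange(n_states)           # uniform transition law
    r = 1.0 if s_next == 0 else 0.0               # reward entering state 0
    delta = r + gamma * value(s_next) - value(s)  # temporal-difference error
    w = [wi + alpha * delta * fi for wi, fi in zip(w, phi(s))]
    s = s_next
```

With uniform transitions every state has the same true value, 0.2 / (1 − 0.9) = 2.0, and each weight should drift toward it; the thesis supplies the convergence proof and error bounds for the general case.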

Journal ArticleDOI
TL;DR: A Bayesian framework for the analysis of radial basis functions (RBF) is proposed that accommodates uncertainty in the dimension of the model, and posterior densities are computed using reversible jump Markov chain Monte Carlo samplers.
Abstract: A Bayesian framework for the analysis of radial basis functions (RBF) is proposed that accommodates uncertainty in the dimension of the model. A distribution is defined over the space of all RBF models of a given basis function, and posterior densities are computed using reversible jump Markov chain Monte Carlo samplers (Green, 1995). This alleviates the need to select the architecture during the modeling process. The resulting networks are shown to adjust their size to the complexity of the data.

Journal ArticleDOI
TL;DR: A stereo correspondence method by minimizing intensity and gradient errors simultaneously by parameterizing the disparity function by hierarchical Gaussians to avoid local minima in the function minimization.
Abstract: We propose a stereo correspondence method by minimizing intensity and gradient errors simultaneously. In contrast to conventional use of image gradients, the gradients are applied in the deformed image space. Although a uniform smoothness constraint is imposed, it is applied only to nonfeature regions. To avoid local minima in the function minimization, we propose to parameterize the disparity function by hierarchical Gaussians. Both the uniqueness and the ordering constraints can be easily imposed in our minimization framework. Besides, we propose a method to estimate the disparity map and the camera response difference parameters simultaneously. Experiments with various real stereo images show robust performances of our algorithm.

Journal ArticleDOI
TL;DR: Algorithms for multiscale basis selection and feature extraction for pattern classification problems are presented and have been tested for classification and segmentation of one-dimensional radar signals and two-dimensional texture and document images.
Abstract: Algorithms for multiscale basis selection and feature extraction for pattern classification problems are presented. The basis selection algorithm is based on class separability measures rather than energy or entropy. At each level the "accumulated" tree-structured class separabilities obtained from the tree which includes a parent node and the one which includes its children are compared. The decomposition of the node (or subband) is performed (creating the children) if it provides larger combined separability. The suggested feature extraction algorithm focuses on dimensionality reduction of a multiscale feature space subject to maximum preservation of information useful for classification. At each level of decomposition, an optimal linear transform that preserves class separabilities and results in a reduced dimensional feature space is obtained. Classification and feature extraction are then performed at each scale, and the resulting "soft decisions" obtained for each area are integrated across scales. The suggested algorithms have been tested for classification and segmentation of one-dimensional (1-D) radar signals and two-dimensional (2-D) texture and document images. The same idea can be used for other tree-structured local bases, e.g., local trigonometric basis functions, and even for nonorthogonal, redundant, and composite basis dictionaries.

Journal ArticleDOI
TL;DR: In this paper, a generalized projection-based order-N method is presented for computing Wannier-like functions in nonorthogonal basis sets of spatially localized orbitals.
Abstract: We present a generalized projection-based order-N method which is applicable within nonorthogonal basis sets of spatially localized orbitals. The projection to the occupied subspace of a Hamiltonian, performed by means of a Chebyshev-polynomial representation of the density operator, allows the nonvariational computation of band-structure energies, density matrices, and forces for systems with nonvanishing gaps. Furthermore, the explicit application of the density operator to local basis functions gives a powerful method for the calculation of Wannier-like functions without using eigenstates. In this paper, we investigate such functions within models of diamond and fourfold-coordinated amorphous carbon starting from bonding pairs of hybrid orbitals. The resulting Wannier states are exponentially localized and show an ellipsoidal spatial dependence. These results are used to maximize the efficiency of a linear-scaling orthonormalization scheme for truncated Wannier functions.

Journal ArticleDOI
01 Feb 1998
TL;DR: Computational results on difficult industrial problems demonstrate that the use of energy-minimal basis functions improves algebraic multigrid performance and yields a more robust multigrid algorithm than smoothed aggregation.
Abstract: We propose a fast iterative method to optimize coarse basis functions in algebraic multigrid by minimizing the sum of their energies, subject to the condition that linear combinations of the basis functions equal given zero-energy modes, and subject to restrictions on the supports of the coarse basis functions. The convergence rate of the minimization algorithm is bounded independently of the mesh size under usual assumptions on finite elements. The first iteration gives exactly the same basis functions as our earlier method using smoothed aggregation. The construction is presented for scalar problems as well as for linear elasticity. Computational results on difficult industrial problems demonstrate that the use of energy-minimal basis functions improves algebraic multigrid performance and yields a more robust multigrid algorithm than smoothed aggregation.

Journal ArticleDOI
TL;DR: An unconventional and largely unknown finite-element pair, based on a modified combination of linear and constant basis functions, is shown to be a good compromise and to give good results for gravity-wave propagation.
Abstract: The finite-element spatial discretization of the linear shallow-water equations on unstructured triangular meshes is examined in the context of a semi-implicit temporal discretization. Triangular finite elements are attractive for ocean modeling because of their flexibility for representing irregular boundaries and for local mesh refinement. The semi-implicit scheme is beneficial because it slows the propagation of the high-frequency small-amplitude surface gravity waves, thereby circumventing a severe time step restriction. High-order computationally expensive finite elements are, however, of little benefit for the discretization of the terms responsible for rapidly propagating gravity waves in a semi-implicit formulation. Low-order velocity/surface-elevation finite-element combinations are therefore examined here. Ideally, the finite-element basis-function pair should adequately represent approximate geostrophic balance, avoid generating spurious computational modes, and give a consistent discretization of the governing equations. Existing finite-element combinations fail to simultaneously satisfy all of these requirements and consequently suffer to a greater or lesser extent from noise problems. An unconventional and largely unknown finite-element pair, based on a modified combination of linear and constant basis functions, is shown to be a good compromise and to give good results for gravity-wave propagation.

Journal ArticleDOI
TL;DR: An explicit formula for the dual basis functions of the Bernstein basis is derived; the dual basis functions are expressed as linear combinations of Bernstein polynomials.
Abstract: An explicit formula for the dual basis functions of the Bernstein basis is derived. The dual basis functions are expressed as linear combinations of Bernstein polynomials.
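The paper's contribution is the explicit closed-form formula; as a hedged numerical cross-check, the duals can also be obtained by inverting the Bernstein Gram matrix, whose entries have the well-known closed form used below. The rows of the inverse are then the coefficients expressing each dual function as a linear combination of Bernstein polynomials.

```python
from fractions import Fraction
from math import comb

# Gram matrix of the degree-n Bernstein basis on [0, 1]:
# G[i][j] = ∫ B_i B_j dt = C(n,i) C(n,j) / ((2n+1) C(2n, i+j)).
n = 2
G = [[Fraction(comb(n, i) * comb(n, j), (2 * n + 1) * comb(2 * n, i + j))
      for j in range(n + 1)] for i in range(n + 1)]

def invert(M):
    """Exact Gauss-Jordan inverse over the rationals."""
    size = len(M)
    A = [row[:] + [Fraction(int(i == j)) for j in range(size)]
         for i, row in enumerate(M)]
    for col in range(size):
        pivot = next(r for r in range(col, size) if A[r][col] != 0)
        A[col], A[pivot] = A[pivot], A[col]
        A[col] = [x / A[col][col] for x in A[col]]
        for r in range(size):
            if r != col and A[r][col] != 0:
                A[r] = [x - A[r][col] * y for x, y in zip(A[r], A[col])]
    return [row[size:] for row in A]

C = invert(G)   # C[i][j]: coefficient of B_j in the dual function D_i

# Biorthogonality check: ∫ D_i B_k dt = Σ_j C[i][j] G[j][k] = δ_ik.
for i in range(n + 1):
    for k in range(n + 1):
        val = sum(C[i][j] * G[j][k] for j in range(n + 1))
        assert val == (1 if i == k else 0)
```

Exact rational arithmetic keeps the biorthogonality check free of round-off; the paper's explicit formula gives the same coefficients without forming or inverting the Gram matrix.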

Journal ArticleDOI
TL;DR: In this paper, a 3D multiresolution analysis procedure similar to the finite-difference time-domain (FDTD) method is derived using a complete set of three-dimensional orthonormal bases of Haar scaling and wavelet functions.
Abstract: A three-dimensional (3-D) multiresolution analysis procedure similar to the finite-difference time-domain (FDTD) method is derived using a complete set of three-dimensional orthonormal bases of Haar scaling and wavelet functions. The expansion of the electric and the magnetic fields in these basis functions leads to the time iterative difference approximation of Maxwell's equations that is similar to the FDTD method. This technique effectively models realistic microwave passive components by virtue of its multiresolution property; the computational time is reduced approximately by half compared to the FDTD method. The proposed technique is validated by analyzing several 3-D rectangular resonators with inhomogeneous dielectric loading. It is also applied to the analyses of microwave passive devices with open boundaries such as microstrip low-pass filters and spiral inductors to extract their S-parameters and field distributions. The results of the proposed technique agree well with those of the traditional FDTD method.

Journal ArticleDOI
TL;DR: In this article, the authors show that the weights of a p-Bezier curve can be written as a combination of its control points and certain Bernstein-like trigonometric basis functions.

Journal ArticleDOI
TL;DR: These bases, which generalise the common FIR, Laguerre and two-parameter Kautz ones, are shown to be fundamental in the disc algebra provided a very mild condition on the choice of poles is satisfied.

Proceedings Article
01 Dec 1998
TL;DR: The statistics of natural monochromatic images decomposed using a multi-scale wavelet basis are examined to provide evidence for the hypothesis that early visual neural processing is well matched to these statistical properties of images.
Abstract: We examine the statistics of natural monochromatic images decomposed using a multi-scale wavelet basis. Although the coefficients of this representation are nearly decorrelated, they exhibit important higher-order statistical dependencies that cannot be eliminated with purely linear processing. In particular, rectified coefficients corresponding to basis functions at neighboring spatial positions, orientations and scales are highly correlated. A method of removing these dependencies is to divide each coefficient by a weighted combination of its rectified neighbors. Several successful models of the steady-state behavior of neurons in primary visual cortex are based on such "divisive normalization" computations, and thus our analysis provides a theoretical justification for these models. Perhaps more importantly, the statistical measurements explicitly specify the weights that should be used in computing the normalization signal. We demonstrate that this weighting is qualitatively consistent with recent physiological experiments that characterize the suppressive effect of stimuli presented outside of the classical receptive field. Our observations thus provide evidence for the hypothesis that early visual neural processing is well matched to these statistical properties of images.
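The divisive-normalization computation itself is simple: each squared coefficient is divided by a weighted sum of its squared neighbors plus a constant. The coefficient values, the single uniform weight, and the constant below are made-up illustration numbers; the paper's point is that the proper weights can be read off from measured image statistics.

```python
# Responses of a small neighborhood of wavelet basis functions (invented).
coeffs = [0.1, 2.0, 1.8, 0.2, 0.05]

def normalize_coeff(i, w=0.3, sigma2=0.1):
    """Squared response i divided by a pool of its rectified neighbors:
    a uniform-weight sketch of divisive normalization."""
    pool = sigma2 + w * sum(c * c for j, c in enumerate(coeffs) if j != i)
    return coeffs[i] ** 2 / pool

normalized = [normalize_coeff(i) for i in range(len(coeffs))]
# A strong response flanked by another strong response is suppressed
# relative to what it would be in isolation, mimicking the surround
# suppression seen outside the classical receptive field.
print(normalized)
```

Replacing the uniform weight with per-neighbor weights fit to the conditional statistics of natural-image coefficients recovers the model the paper analyzes.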

Journal ArticleDOI
TL;DR: In this article, an algorithm for the four-index transformation of electron repulsion integrals to a localized molecular orbital (MO) basis is presented; thresholds are applied to distant orbital pairs, to the virtual space before and after the orthogonalizing projection onto the occupied space, and to small contributions in the transformation.
Abstract: An algorithm is presented for the four-index transformation of electron repulsion integrals to a localized molecular orbital (MO) basis. Unlike in most programs, the first two indices are transformed in a single step. This and the localization of the orbitals allows the efficient neglect of small contributions at several points in the algorithm, leading to significant time savings. Thresholds are applied to the following quantities: distant orbital pairs, the virtual space before and after the orthogonalizing projection to the occupied space, and small contributions in the transformation. A series of calculations on medium-sized molecules has been used to determine appropriate thresholds that keep the truncation errors small (below 0.01% of the correlation energy in most cases). Benchmarks for local second-order Møller–Plesset perturbation theory (MP2; i.e., MP2 with a localized MO basis in the occupied subspace) are presented for several large molecules with no symmetry, up to 975 contracted basis functions, and 60 atoms. These are among the largest MP2 calculations performed on a single processor. The computational time (with constant basis set) scales with a somewhat lower than cubic power of the molecular size, and the memory demand is moderate even for large molecules, making calculations that require a supercomputer for the traditional MP2 feasible on workstations. © 1998 John Wiley & Sons, Inc. J Comput Chem 19: 1241–1254, 1998
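The "first two indices in a single step" idea can be sketched with a toy half-transformation of the two-electron integrals, (ij|rs) = Σ_pq C[p][i] C[q][j] (pq|rs). A production code would exploit orbital locality to screen the p,q sums; this minimal version loops over everything and uses plain nested lists, so all names and the dense layout are illustrative only.

```python
def transform_first_two(eri, C):
    # eri[p][q][r][s]: two-electron integrals (pq|rs) in the AO basis
    # C[p][i]: MO coefficient of AO p in localized MO i
    # returns half[i][j][r][s] = sum_{p,q} C[p][i] * C[q][j] * eri[p][q][r][s]
    n = len(C)
    half = [[[[0.0] * n for _ in range(n)] for _ in range(n)] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for r in range(n):
                for s in range(n):
                    acc = 0.0
                    for p in range(n):
                        for q in range(n):
                            acc += C[p][i] * C[q][j] * eri[p][q][r][s]
                    half[i][j][r][s] = acc
    return half
```

With an identity coefficient matrix the integrals pass through unchanged, which is a convenient sanity check on the index conventions.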

Journal ArticleDOI
TL;DR: In this paper, exponential basis functions preconvolved with the system waveform are used to convert measured transient decays to an ideal frequency-domain response that can be modeled more easily than arbitrary waveform data.
Abstract: Exponential basis functions preconvolved with the system waveform are used to convert measured transient decays to an ideal frequency-domain response that can be modeled more easily than arbitrary waveform data. Singular-value decomposition (SVD) of the basis functions is used to assess which specific EM waveform provides superior resolution of a range of exponential time constants that can be related to earth conductivities. The pulse shape, pulse length, transient sampling scheme, noise levels, and primary field removal used in practical EM systems all affect the resolution of time constants. Step response systems are more diagnostic of long time constants, and hence good conductors, than impulse response systems. The limited bandwidth of airborne EM systems compared with ground systems is improved when the response is sampled during the transmitter on time and gives better resolution of short time constants or fast decays.
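The preconvolution step can be sketched as a discrete convolution of each exponential decay exp(-t/tau) with the transmitter waveform; fitting the measured transient with these preconvolved functions then recovers an amplitude per time constant. The uniform sampling scheme and function names below are illustrative, not the paper's.

```python
import math

def preconvolved_basis(taus, waveform, times, dt):
    # taus: candidate exponential time constants
    # waveform: transmitter waveform sampled at the same spacing dt
    # times: uniformly spaced sample times of the transient
    # returns one preconvolved basis function (list of samples) per tau
    basis = []
    for tau in taus:
        decay = [math.exp(-t / tau) for t in times]
        conv = []
        for n in range(len(times)):
            acc = 0.0
            for m in range(min(n + 1, len(waveform))):
                acc += waveform[m] * decay[n - m]
            conv.append(acc * dt)
        basis.append(conv)
    return basis
```

With a unit-impulse waveform the preconvolved functions reduce to the bare decays; stacking them as columns of a matrix and taking its SVD, as in the paper, shows how well a given waveform separates the candidate time constants.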