
Showing papers on "Basis function published in 1994"


Journal ArticleDOI
TL;DR: The authors first describe the general class of approximation spaces generated by translation of a function ψ(x) and provide a full characterization of their basis functions, then present a general sampling theorem for computing the approximation of signals in these subspaces based on a simple consistency principle.
Abstract: The authors first describe the general class of approximation spaces generated by translation of a function ψ(x), and provide a full characterization of their basis functions. They then present a general sampling theorem for computing the approximation of signals in these subspaces based on a simple consistency principle. The theory puts no restrictions on the system input, which can be an arbitrary finite-energy signal; bandlimitedness is not required. In contrast to previous approaches, this formulation allows for an independent specification of the sampling (analysis) and approximation (synthesis) spaces. In particular, when both spaces are identical, the theorem provides a simple procedure for obtaining the least squares approximation of a signal. They discuss the properties of this new sampling procedure and present some examples of applications involving bandlimited and polynomial spline signal representations. They also define a spectral coherence function that measures the "similarity" between the sampling and approximation spaces, and derive a relative performance bound for the comparison with the least squares solution.
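The least squares case of this procedure can be illustrated numerically. Below is a minimal numpy sketch, not the paper's algorithm: it projects an arbitrary finite-energy signal onto the space spanned by integer shifts of a cubic B-spline, with a dense grid standing in for continuous time. The signal, grid, and shift range are all illustrative choices.

```python
import numpy as np

def bspline3(x):
    """Cubic B-spline beta^3(x), supported on [-2, 2]."""
    x = np.abs(np.asarray(x, dtype=float))
    out = np.zeros_like(x)
    m1, m2 = x < 1, (x >= 1) & (x < 2)
    out[m1] = 2/3 - x[m1]**2 + x[m1]**3 / 2
    out[m2] = (2 - x[m2])**3 / 6
    return out

# Dense grid standing in for continuous time; the approximation space is
# spanned by integer shifts of the cubic B-spline.
t = np.linspace(0.0, 20.0, 2001)
shifts = np.arange(21)
Phi = np.stack([bspline3(t - k) for k in shifts], axis=1)  # (samples, basis)

f = np.sin(0.7 * t) + 0.3 * np.cos(2.1 * t)   # arbitrary finite-energy signal
c, *_ = np.linalg.lstsq(Phi, f, rcond=None)   # least squares coefficients
f_hat = Phi @ c
err = np.linalg.norm(f - f_hat) / np.linalg.norm(f)
```

With identical analysis and synthesis spaces, as here, the least squares solution is exactly the orthogonal projection onto the spline space, so the residual `err` stays small for signals that are smooth at the knot spacing.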

414 citations


Journal ArticleDOI
TL;DR: The approximation capability to capture fast-changing system dynamics is enhanced, and the range of applicability of the method presented by Su et al. is broadened.
Abstract: An adaptive tracking control architecture is proposed for a class of continuous-time nonlinear dynamic systems, for which an explicit linear parameterization of the uncertainty in the dynamics is either unknown or impossible. The architecture employs fuzzy systems, which are expressed as a series expansion of basis functions, to adaptively compensate for the plant nonlinearities. Global asymptotic stability of the algorithm is established in the Lyapunov sense, with tracking errors converging to a neighborhood of zero. Simulation results for an unstable nonlinear plant are included to demonstrate that incorporating the linguistic fuzzy information from human experts results in superior tracking performance.

353 citations


01 Jan 1994
TL;DR: The approach explored here is guided by the best-basis paradigm, which consists of three steps: select a "best" basis (or coordinate system) for the problem at hand from a library of bases (a fixed yet flexible set of bases that can capture local features), sort the coordinates by importance, and discard the unimportant ones; the resulting bases provide an array of tools unifying the conventional techniques.
Abstract: Extracting relevant features from signals is important for signal analysis such as compression, noise removal, classification, or regression (prediction). Often, important features for these problems, such as edges, spikes, or transients, are characterized by local information in the time (space) domain and the frequency (wave number) domain. Conventional techniques are not efficient at extracting features localized simultaneously in the time and frequency domains. These methods include the Fourier transform for signal/noise separation, the Karhunen-Loeve transform for compression, and linear discriminant analysis for classification. The features extracted by these methods are global in nature, either in the time or in the frequency domain, so the interpretation of the results may not be straightforward. Moreover, some of them require solving eigenvalue systems, so they are sensitive to outliers or perturbations and are computationally expensive, i.e., O(n³), where n is the dimensionality of a signal. The approach explored here is guided by the best-basis paradigm, which consists of three steps: (1) select a "best" basis (or coordinate system) for the problem at hand from a library of bases (a fixed yet flexible set of bases consisting of wavelets, wavelet packets, local trigonometric bases, and the autocorrelation functions of wavelets); (2) sort the coordinates (features) by "importance" for the problem at hand and discard "unimportant" coordinates; and (3) use the surviving coordinates to solve the problem at hand. What is "best" and "important" clearly depends on the problem: for example, minimizing a description length (or entropy) is important for signal compression, whereas maximizing class separation (or relative entropy among classes) is important for classification.
These bases "fill the gap" between the standard Euclidean basis and the Fourier basis, so they can capture local features and provide an array of tools unifying the conventional techniques. Moreover, these tools admit efficient numerical algorithms, e.g., O(n(log n)^p), where p = 0, 1, 2, depending on the basis. In the present thesis, these methods have been applied to a variety of problems: simultaneous noise suppression and signal compression, classification, regression, multiscale edge detection and representation, and extraction of geological information from acoustic waveforms.
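The selection step of the best-basis paradigm can be sketched with a toy two-basis "library". The following is a hedged illustration, not the wavelet-packet search itself: it compares the Shannon entropy of a signal's energy distribution in the standard (Euclidean) basis and in the unitary Fourier basis, and keeps whichever concentrates the energy into fewer coordinates.

```python
import numpy as np

def shannon_entropy(coeffs):
    """Entropy of the normalized energy distribution of the coefficients."""
    p = np.abs(coeffs) ** 2
    p = p / p.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

n = 256
t = np.arange(n)
signal = np.cos(2 * np.pi * 5 * t / n)      # smooth oscillation

# A two-entry "library of bases": standard coordinates vs. the unitary DFT.
coeffs = {
    "standard": signal,
    "fourier": np.fft.fft(signal, norm="ortho"),
}
entropies = {name: shannon_entropy(c) for name, c in coeffs.items()}
best = min(entropies, key=entropies.get)    # minimize entropy, as for compression
```

For a pure oscillation the Fourier coefficients concentrate essentially all the energy in two bins (entropy near ln 2), while the time-domain energy is spread over all samples, so the entropy criterion selects the Fourier basis.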

259 citations


Journal ArticleDOI
TL;DR: It is demonstrated, through theory and examples, how it is possible to construct directly and noniteratively a feedforward neural network to approximate arbitrary linear ordinary differential equations.

218 citations


Journal ArticleDOI
TL;DR: A new model for producing controlled spatial deformations, termed Simple Constrained Deformations (Scodef), is presented; the user defines a set of constraint points, giving a desired displacement and radius of influence for each.
Abstract: Deformations are a powerful tool for shape modeling and design. We present a new model for producing controlled spatial deformations, which we term Simple Constrained Deformations (Scodef). The user defines a set of constraint points, giving a desired displacement and radius of influence for each. Each constraint point determines a local B-spline basis function centered at the constraint point, falling to zero for points beyond the radius. The deformed image of any point in space is a blend of these basis functions, using a projection matrix computed to satisfy the constraints. The deformation operates on the whole space regardless of the representation of the objects embedded inside the space. The constraints directly influence the final shape of the deformed objects, and this shape can be fine-tuned by adjusting the radius of influence of each constraint point. The computations required by the technique can be done very efficiently, and real-time interactive deformation editing on current workstations is possible.
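A minimal sketch of the idea, under stated assumptions: each constraint carries a radial B-spline bump that vanishes beyond its radius, and the blend weights are solved from a small linear system so that each constraint point moves exactly by its displacement (standing in for the paper's projection matrix). This is an illustration, not the Scodef implementation.

```python
import numpy as np

def bspline3(x):
    """Cubic B-spline profile, supported on [-2, 2]."""
    x = np.abs(np.asarray(x, dtype=float))
    out = np.zeros_like(x)
    m1, m2 = x < 1, (x >= 1) & (x < 2)
    out[m1] = 2/3 - x[m1]**2 + x[m1]**3 / 2
    out[m2] = (2 - x[m2])**3 / 6
    return out

B0 = 2/3  # bspline3(0); normalizes each bump to 1 at its center

def scodef(points, centers, disps, radii):
    """Deform points by a blend of radial B-spline bumps, one per constraint.
    Weights are chosen so each constraint point moves by its displacement."""
    pts = np.asarray(points, dtype=float)
    r = np.linalg.norm(pts[:, None, :] - centers[None, :, :], axis=2)
    B = bspline3(2 * r / radii[None, :]) / B0          # zero for r >= radius
    rc = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=2)
    Bc = bspline3(2 * rc / radii[None, :]) / B0        # constraint matrix
    W = np.linalg.solve(Bc, disps)                     # satisfy the constraints
    return pts + B @ W

centers = np.array([[0.0, 0.0]])
disps = np.array([[1.0, 0.0]])       # move the constraint point one unit right
radii = np.array([2.0])
pts = np.array([[0.0, 0.0], [3.0, 0.0], [1.0, 0.0]])
deformed = scodef(pts, centers, disps, radii)
```

The point at the constraint moves exactly by the prescribed displacement, the point outside the radius of influence is untouched, and points in between are displaced by the B-spline falloff, matching the local, fine-tunable behavior the abstract describes.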

154 citations


Proceedings ArticleDOI
11 Oct 1994
TL;DR: An extension to the `best-basis' method is described that constructs an orthonormal basis maximizing class separability for signal classification problems, along with a method to extract the signal component from data consisting of signal and textured background.
Abstract: We describe an extension to the `best-basis' method to construct an orthonormal basis which maximizes class separability for signal classification problems. This algorithm reduces the dimensionality of these problems by using basis functions which are well localized in the time-frequency plane as feature extractors. We tested our method using two synthetic datasets: we extracted features (expansion coefficients of input signals in these basis functions), supplied them to conventional pattern classifiers, then computed the misclassification rates. These examples show the superiority of our method over the direct application of these classifiers to the input signals. As a further application, we also describe a method to extract the signal component from data consisting of signal and textured background.
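The separability criterion can be sketched on synthetic data. The toy below, a simplified stand-in for the local discriminant basis search, measures symmetrized relative entropy between the per-class energy distributions in two candidate bases and picks the basis that separates the classes best; the class frequencies and noise level are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 128, 50
t = np.arange(n)

def make_class(freq):
    """m noisy examples of a pure oscillation at the given frequency."""
    return (np.cos(2 * np.pi * freq * t / n)[None, :]
            + 0.2 * rng.standard_normal((m, n)))

def energy_map(X):
    """Normalized per-coordinate energy distribution of a signal class."""
    p = (np.abs(X) ** 2).sum(axis=0)
    return p / p.sum()

def kl(p, q, eps=1e-12):
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

classes = [make_class(5), make_class(12)]
separability = {}
for name, T in [("standard", lambda x: x),
                ("fourier", lambda x: np.fft.fft(x, axis=1, norm="ortho"))]:
    p, q = (energy_map(T(X)) for X in classes)
    separability[name] = kl(p, q) + kl(q, p)   # symmetrized relative entropy
best = max(separability, key=separability.get)
```

In the Fourier basis each class concentrates its energy at a different frequency bin, so the relative entropy between the class energy maps is large; in the time domain both classes look alike, and the criterion correctly prefers the frequency-localized basis.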

152 citations


Journal ArticleDOI
TL;DR: It is demonstrated, through theory and numerical examples, how it is possible to directly construct a feedforward neural network to approximate nonlinear ordinary differential equations without the need for training.

152 citations


Journal ArticleDOI
TL;DR: In this article, a new inversion algorithm for the simultaneous reconstruction of permittivity and conductivity recasts the nonlinear inversion as the solution of a coupled set of linear equations.
Abstract: A new inversion algorithm for the simultaneous reconstruction of permittivity and conductivity recasts the nonlinear inversion as the solution of a coupled set of linear equations. The algorithm is iterative and proceeds through the minimization of two cost functions. At the initial step the data are matched through the reconstruction of the radiating or minimum norm scattering currents; subsequent steps refine the nonradiating scattering currents and the material properties inside the scatterer. Two types of basis functions are constructed for the nonradiating currents: "invisible" (global) basis functions, which are appropriate for discrete measurements, and nonradiating (local) basis functions, which are useful in studying the limit of continuous measurements. Reconstructions of square cylinders from multiple source receiver measurements at a single frequency show that the method can handle large contrasts in material properties.

142 citations


Journal ArticleDOI
TL;DR: In this paper, a domain decomposition technique for the differential quadrature method is proposed to analyse truss and frame structures where the whole structural domain is represented by a collection of simple element subdomains connected together at specific nodal points.

141 citations


Book ChapterDOI
14 Dec 1994
TL;DR: In this paper, a least squares identification method is studied that estimates a finite number of expansion coefficients in the series expansion of a transfer function, where the expansion is in terms of generalized basis functions.
Abstract: A least squares identification method is studied that estimates a finite number of expansion coefficients in the series expansion of a transfer function, where the expansion is in terms of generalized basis functions. The basis functions are orthogonal in H2 and generalize the pulse, Laguerre and Kautz (1954) bases. The construction of the basis is considered and bias and variance expressions of the identification algorithm are discussed. The basis induces a new transformation (Hambo transform) of signals and systems, for which state space expressions are derived.
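The Laguerre special case of such a basis is easy to construct numerically: the first function is a scaled exponential, and each subsequent one is obtained by passing the previous one through an all-pass filter. The sketch below (pole location, horizon, and test system are illustrative choices) builds the discrete Laguerre functions, checks their orthonormality, and expands a toy impulse response in them.

```python
import numpy as np

def allpass(x, a):
    """Apply the all-pass (z^-1 - a)/(1 - a z^-1) to a causal sequence x."""
    y = np.zeros_like(x)
    y[0] = -a * x[0]
    for n in range(1, len(x)):
        y[n] = a * y[n - 1] + x[n - 1] - a * x[n]
    return y

a, N, K = 0.5, 400, 5                   # pole, horizon, number of functions
phi = np.zeros((K, N))
phi[0] = np.sqrt(1 - a**2) * a ** np.arange(N)   # first Laguerre function
for k in range(1, K):
    phi[k] = allpass(phi[k - 1], a)     # each step adds one all-pass factor

G = phi @ phi.T                          # Gram matrix, close to the identity
g = 0.3 * 0.8 ** np.arange(N)            # impulse response of a toy system
coef = phi @ g                           # least squares = inner products here
approx = coef @ phi                      # truncated series reconstruction
resid = np.linalg.norm(approx - g)
```

Because the basis is orthonormal, the least squares coefficients reduce to inner products, and the truncation error decays geometrically with the number of retained functions when the system pole is reasonably close to the basis pole.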

141 citations


Journal ArticleDOI
TL;DR: An evolutionary training algorithm is proposed for radial basis function (RBF) networks; the resulting networks appear to have better generalization performance on the Mackey-Glass time series than corresponding networks whose centers are determined by k-means clustering.
Abstract: An evolutionary neural network training algorithm is proposed for radial basis function (RBF) networks. The locations of basis function centers are not directly encoded in a genetic string, but are governed by space-filling curves whose parameters evolve genetically. This encoding causes each group of codetermined basis functions to evolve to fit a region of the input space. A network produced from this encoding is evaluated by training its output connections only. Networks produced by this evolutionary algorithm appear to have better generalization performance on the Mackey-Glass time series than corresponding networks whose centers are determined by k-means clustering.
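The k-means baseline the paper compares against is simple to sketch: choose centers by clustering the inputs, then train only the output weights by linear least squares. The toy regression problem below stands in for the Mackey-Glass prediction task; the cluster count and RBF width are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def kmeans(X, k, iters=50):
    """Plain k-means for choosing RBF centers."""
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(iters):
        labels = np.linalg.norm(X[:, None] - centers[None], axis=2).argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers

def rbf_design(X, centers, width):
    """Gaussian RBF design matrix."""
    d2 = ((X[:, None] - centers[None]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2 * width**2))

# Toy 1-D regression problem standing in for a time-series prediction task.
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(2 * X[:, 0]) + 0.05 * rng.standard_normal(200)

centers = kmeans(X, k=15)
Phi = rbf_design(X, centers, width=0.5)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # train output weights only
rmse = np.sqrt(np.mean((Phi @ w - y) ** 2))
```

Training only the output layer keeps the problem linear in the unknowns, which is exactly why the quality of the center placement (clustered here, evolved in the paper) dominates generalization.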

Journal ArticleDOI
TL;DR: A second-order nonhierarchic system optimization algorithm developed in earlier studies is modified to provide for individual constraint/state modeling; a significant reduction in the number of system analyses required for optimization is observed as compared with conventional optimization using the generalized reduced-gradient method.
Abstract: This paper reports on the effectiveness of a nonhierarchic system optimization algorithm in application to complex coupled systems problems. A second-order nonhierarchic system optimization algorithm developed in earlier studies is modified in this study to provide for individual constraint/state modeling. A cumulative constraint formulation was used in previous implementation studies. The test problems in this study are each complex coupled systems, which require an iterative solution strategy to evaluate system states. Nonhierarchic algorithm development is driven by these types of problems, and their study is imperative. The algorithm successfully optimizes each of the complex coupled systems. A significant reduction in the number of system analyses required for optimization is observed as compared with conventional optimization using the generalized reduced-gradient method. Design site information generated during the subspace optimizations is stored in the design database. A quadratic polynomial approximation to the design is formed using the strategy of Vanderplaats. A weighted least-squares solution strategy is employed to solve for the second-order terms in Vanderplaats' strategy; exact data in the design database are more heavily weighted in the least-squares solution procedure. The resulting quadratic polynomial forms the basis function of accumulated approximation, replacing the linear basis used in the original formulation. In Renaud and Gabriele, improved convergence is observed for the welded beam test problem, attributed to the improved accuracy of cumulative constraint approximations when using second-order-based approximating functions. Additional studies using the second-order-based coordination procedure of system approximation indicated that replacing the cumulative constraints with their component constraints may improve algorithm performance.
Implementation of the second-order-based coordination procedure of system approximation was less effective in reducing cycling when applied to the Golinski speed reducer problem. The speed reducer cumulative constraints were composed of a large number of individual constraints as compared with the cumulative constraints in the welded beam test problem. With a larger number of individual constraints assigned to a cumulative constraint, it will more likely undergo a change in its active set during the coordination procedure. It is difficult to approximate these changes in the cumulative constraints. Inaccurate cumulative constraint approximations reduce algorithm performance and delay convergence. Approximating individual constraints/states in the coordination…

Journal ArticleDOI
TL;DR: In this paper, two methods for the determination of scattering length density profiles from specular reflectivity data are described, one based on cubic splines and the other based on a series of sine and cosine terms.
Abstract: Two methods for the determination of scattering length density profiles from specular reflectivity data are described. Both kinematical and dynamical theory can be used for calculating the reflectivity. In the first method, the scattering density is parameterized using cubic splines. The coefficients in the series are determined by constrained nonlinear least-squares methods, in which the smoothest solution that agrees with the data is chosen. The method is a further development of the two-step approach of Pedersen [J. Appl. Cryst. (1992), 25, 129–145]. The second approach is based on a method introduced by Singh, Tirrell & Bates [J. Appl. Cryst. (1993), 26, 650–659] for analyzing reflectivity data from periodic profiles. In this approach, the profile is expressed as a series of sine and cosine terms. Several new features have been introduced in the method, of which the most important is the inclusion of a smoothness constraint, which reduces the coefficients of the higher harmonics in the Fourier series. This makes it possible to apply the method also to aperiodic profiles. For the analysis of neutron reflectivity data, the instrumental smearing of the model reflectivity is important and a method for fast calculation of smeared reflectivity curves is described. The two methods of analyzing reflectivity data have been applied to sets of simulated data based on examples from the literature, including an amphiphilic monolayer and block copolymer thin films. The two methods work equally well in most situations and are able to recover the original profiles. In general, the method using splines as the basis functions is better suited to aperiodic than to periodic structures, whereas the sine/cosine basis is well suited to periodic and nearly periodic structures.
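The core numerical idea of the second method, a series fit with a smoothness constraint that damps the higher harmonics, can be sketched as a penalized least squares problem. This is a generic illustration, not the authors' reflectivity code: the profile, noise level, and penalty weight are invented, and the data are fit directly rather than through a reflectivity calculation.

```python
import numpy as np

rng = np.random.default_rng(2)
n, nh = 200, 19                    # samples; number of harmonics (illustrative)
z = np.linspace(0.0, 1.0, n)

# Sine/cosine basis, as in the second method described above.
cols, orders = [np.ones(n)], [0]
for k in range(1, nh + 1):
    cols += [np.sin(2 * np.pi * k * z), np.cos(2 * np.pi * k * z)]
    orders += [k, k]
A = np.column_stack(cols)
orders = np.array(orders, dtype=float)

profile = np.exp(np.cos(2 * np.pi * z))        # smooth periodic test profile
data = profile + 0.05 * rng.standard_normal(n)

# Smoothness constraint: a quadratic penalty growing with harmonic order,
# which suppresses the coefficients of the higher harmonics.
lam = 1e-4
coef = np.linalg.solve(A.T @ A + lam * n * np.diag(orders**2), A.T @ data)
fit = A @ coef
rmse = np.sqrt(np.mean((fit - profile) ** 2))
```

Because the penalty grows quadratically with harmonic order, noisy high-frequency coefficients are shrunk while the low harmonics carrying the profile are nearly untouched, which is what lets the constrained Fourier series avoid overfitting.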

Journal ArticleDOI
TL;DR: In this paper, the authors present a method for calculating dielectric matrices of periodic systems using a product basis, which, in the linear-muffin-tin-orbital formalism, consists of products of orbitals.
Abstract: We present a method for calculating dielectric matrices of periodic systems. Unlike the conventional method, which uses a plane-wave basis, the present method employs a product basis, which, in the linear-muffin-tin-orbital formalism, consists of products of orbitals. The method can be used for any system, including sp as well as narrow band systems. We demonstrate the applicability of our method by calculating the energy-loss spectra of Ni and Si, including local-field effects that require the full dielectric matrix. Good agreement with experiment is found. The small number of basis functions makes the method suitable for self-energy calculations within the GW approximation, without making the so-called plasmon-pole approximation for the dielectric matrix.

Proceedings ArticleDOI
17 Oct 1994
TL;DR: This paper explores the use of multi-dimensional trees to provide spatial and temporal efficiencies in imaging large data sets; two evaluation metrics are used, the first comparing the hierarchical model to actual data values and the second comparing the pixel values of images produced by different parameter settings.
Abstract: This paper explores the use of multi-dimensional trees to provide spatial and temporal efficiencies in imaging large data sets. Each node of the tree contains a model of the data in terms of a fixed number of basis functions, a measure of the error in that model, and a measure of the importance of the data in the region covered by the node. A divide-and-conquer algorithm permits efficient computation of these quantities at all nodes of the tree. The flexible design permits various sets of basis functions, error criteria, and importance criteria to be implemented easily. Selective traversal of the tree provides images in acceptable time, by drawing nodes that cover a large volume as single objects when the approximation error and/or importance are low, and descending to finer detail otherwise. Trees over very large datasets can be pruned by the same criterion to provide data representations of acceptable size and accuracy. Compression and traversal are controlled by a user-defined combination of modeling error and data importance. For imaging decisions additional parameters are considered, including grid location, allowed time, and projected screen area. To analyse results, two evaluation metrics are used: the first compares the hierarchical model to actual data values, and the second compares the pixel values of images produced by different parameter settings.

Journal ArticleDOI
TL;DR: In this paper, a novel approach to reduce the matrix size associated with the method of moments (MoM) solution of the problem of electromagnetic scattering from arbitrary shaped closed bodies is presented.
Abstract: A novel approach to reducing the matrix size associated with the method of moments (MoM) solution of the problem of electromagnetic scattering from arbitrary shaped closed bodies is presented. The key step in this approach is to represent the scattered field in terms of a series of beams produced by multipole sources located in a complex space. On the scatterer boundary, the fields generated by these multipole sources resemble the Gabor basis functions. By utilizing the properties of the Gabor series, guidelines for selecting the orders as well as locations of the multipole sources are developed. It is shown that the present approach not only reduces the number of unknowns, but also generates a generalized impedance matrix with a banded structure and a low condition number. The accuracy of the proposed method is verified by comparing the numerical results with those derived by using the method of moments.

Journal ArticleDOI
TL;DR: Simulation results indicate superior vibration attenuation compared to the minimum-energy forcing function, especially when some error in natural frequency exists.
Abstract: Forcing functions are developed to produce vibration-free motions in flexible systems. These forcing functions are constructed from ramped sinusoid basis functions so as to minimize excitation in a range of frequencies surrounding the system natural frequency. Frequency domain attributes of a particular ramped sinusoid forcing function are compared with corresponding attributes of a minimum-energy optimal input and an impulse-filtered ramp signal. A closed-loop control system is developed that utilizes each of these forcing functions to generate a reference profile, and also feeds the particular forcing function forward directly to the system to enhance closed-loop bandwidth. The use of a direct feedforward signal in the closed-loop implementation results in significantly faster response times when the closed-loop bandwidth is otherwise constrained to be very low. Simulation results indicate that residual vibration has been nearly eliminated for all three forcing functions, even when some error in natural frequency exists. However, the ramped sinusoid input provides the most control over spectral energy near resonance and uses the least amount of energy to achieve vibration-free motions.

Book ChapterDOI
01 Jan 1994
TL;DR: This paper shows how an approach based on regularization theory leads to a family of approximation techniques, including Radial Basis Functions and some tensor product and additive splines, and how this fairly classical approach has to be extended in order to cope with special features of the problem of learning from examples.

Abstract: This paper consists of two parts. In the first part we consider the problem of learning from examples in the setting of the theory of the approximation of multivariate functions from sparse data. We first show how an approach based on regularization theory leads to a family of approximation techniques, including Radial Basis Functions and some tensor product and additive splines. Then we show how this fairly classical approach has to be extended in order to cope with special features of the problem of learning from examples, such as high dimensionality and strong anisotropies. Furthermore, the same extension that leads from Radial Basis Functions (RBF) to Hyper Basis Functions (HBF) also leads from additive models to ridge approximation models, such as some forms of Projection Pursuit Regression.

Journal ArticleDOI
TL;DR: Induced current distributions on conducting bodies of arbitrary shape modelled by NURBS surfaces are obtained by using a moment method approach to solve an electric field integral equation (EFIE) by applying the Cox-de Boor transformation algorithm.
Abstract: Induced current distributions on conducting bodies of arbitrary shape modelled by NURBS (non-uniform rational B-splines) surfaces are obtained by using a moment method approach to solve an electric field integral equation (EFIE). The NURBS surfaces are expanded in terms of Bezier patches by applying the Cox-de Boor transformation algorithm. This transformation is justified because Bezier patches are numerically more stable than NURBS surfaces. New basis functions have been developed which extend over pairs of Bezier patches. These basis functions can be considered as a generalization of "rooftop" functions. The method is applied to obtain RCS values of several objects modelled with NURBS surfaces. Good agreement with results from other methods is observed. The method is efficient and versatile because it uses geometrical modelling tools that are quite powerful.

Patent
07 Dec 1994
TL;DR: In this paper, a priori information about the nature of the expected signals is used to obtain an approximation of the signal using a set of pre-selected basis functions; a singular value decomposition (SVD) of the basis function matrix is computed and stored off-line in memory.

Abstract: A method and apparatus (28) is disclosed for efficient processing of NMR echo trains (fig. 3) in well logging. A priori information about the nature of the expected signals is used to obtain an approximation of the signal using a set of pre-selected basis functions (fig. 5). A singular value decomposition (SVD) is applied to a matrix incorporating information about the basis functions and is stored off-line in a memory. During the actual measurement, the apparatus estimates a parameter related to the SNR of the received NMR echo trains and uses it to determine a signal approximation model in conjunction with the SVD of the basis function matrix. This approximation is used to determine in real time attributes (fig. 8) of the earth formation being investigated.
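The off-line/on-line split can be sketched with exponential-decay basis functions. This is an illustrative stand-in for the patented method: the echo times, relaxation times, mixture, and the "10x noise" truncation rule are all invented for the example; only the structure (SVD computed off-line, SNR-dependent rank selection applied on-line) follows the description above.

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0.001, 1.0, 500)               # echo times (s), hypothetical
T2 = np.array([0.05, 0.1, 0.2, 0.4, 0.8])      # pre-selected relaxation times
A = np.exp(-t[:, None] / T2[None, :])          # basis-function matrix

# Off-line step: SVD of the matrix built from the pre-selected basis.
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Measured echo train: a mixture of the basis decays plus noise.
true_coef = np.array([0.0, 1.0, 0.0, 0.5, 0.0])
noise = 0.02
echo = A @ true_coef + noise * rng.standard_normal(len(t))

# On-line step: an SNR-derived threshold picks how many singular components
# to keep; the echo train is approximated in that reduced subspace.
keep = s > 10 * noise
approx = U[:, keep] @ (U[:, keep].T @ echo)
rmse = np.sqrt(np.mean((approx - A @ true_coef) ** 2))
```

Dropping the singular components that fall below the noise level filters most of the measurement noise while losing only a negligible part of the signal, since exponential bases of this kind have rapidly decaying singular values.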

Journal ArticleDOI
TL;DR: In this article, a hybrid vector finite element method was used for full-wave analysis of lossy dielectric waveguides, where edge elements and first-order nodal finite element basis functions were used to span the transverse and the z components of the electric field, respectively.
Abstract: This paper presents a full-wave analysis of lossy dielectric waveguides using a hybrid vector finite element method. To avoid the occurrences of spurious modes in the formulation, edge elements and first-order nodal finite element basis functions are used to span the transverse and the z components of the electric field, respectively. Furthermore, the direct matrix solution technique with minimum degree of reordering has been combined with the modified Lanczos algorithm to solve for the resultant sparse generalized eigenmatrix equation efficiently.

Journal ArticleDOI
TL;DR: In this paper, the dependence of the radiation efficiencies and radiation mode shapes on the number of degrees of freedom permitted in the derivation of a radiation operator is investigated for a baffled finite rectangular plate.
Abstract: The modal‐style approach for representing the exterior radiation characteristics of structures generally seeks to find a set of orthogonal functions, or radiation modes, that diagonalize a discretized radiation operator in the exterior domain of the structure. The choice of basis functions for the modal representation is arbitrary, though use of the structural modes of vibration tends to provide some physical insight. The radiation modes are found through an eigenanalysis or singular‐value decomposition analysis of the radiation operator. The eigenvalue or singular value associated with a given radiation mode is directly proportional to the radiation efficiency of that radiation mode. In this paper, the dependency of the radiation efficiencies and radiation mode shapes on the number of degrees of freedom permitted in the derivation of the radiation operator is investigated for a baffled finite rectangular plate. The accuracy of the acoustic modal representation depends on the number of degrees of freedom in the radiation operator, with the least efficient radiation modes converging slowest. Further, the rate of convergence is dependent on the particular basis function selection. The convergence behavior has significant impact on those applications of the exterior acoustic modal approach that seek to exploit the least efficient radiation modes.

Journal ArticleDOI
TL;DR: In this article, an efficient method to compute the 2D and 3D capacitance matrices of multiconductor interconnects in a multilayered dielectric medium is presented.
Abstract: An efficient method to compute the 2-D and 3-D capacitance matrices of multiconductor interconnects in a multilayered dielectric medium is presented. The method is based on an integral equation approach and assumes the quasi-static condition. It is applicable to conductors of arbitrary polygonal shape embedded in a multilayered dielectric medium with possible ground planes on the top or bottom of the dielectric layers. The computation time required to evaluate the space-domain Green's function for the multilayered medium, which involves an infinite summation, has been greatly reduced by obtaining a closed-form expression, which is derived by approximating the Green's function using a finite number of images in the spectral domain. Then the corresponding space-domain Green's functions are obtained using the proper closed-form integrations. In both 2-D and 3-D cases, the unknown surface charge density is represented by pulse basis functions, and the delta testing function (point matching) is used to solve the integral equation. The elements of the resulting matrix are computed using the closed-form formulation, avoiding any numerical integration. The presented method is compared with other published results, showing good agreement. Finally, the equivalent microstrip crossover capacitance is computed to illustrate the use of a combination of 2-D and 3-D Green's functions.

Journal ArticleDOI
TL;DR: The author shows that modern multilevel algorithms can be considered as standard iterative methods over the semidefinite system.

Abstract: For the representation of piecewise d-linear functions instead of the usual finite element basis, a generating system is introduced that contains the nodal basis functions of the finest level and of all coarser levels of discretization. This approach enables the author to work directly with multilevel decompositions of a function. For a partial differential equation, the Galerkin scheme based on this generating system results in a semidefinite matrix equation that has in the one-dimensional (1D) case only about twice, in the two-dimensional (2D) case about $4/3$ times, and in the three-dimensional (3D) case about $8/7$ times as many unknowns as the usual system. Furthermore, the semidefinite system possesses not just one, but many solutions. However, the unique solution of the usual definite finite element problem can be easily computed from every solution of the semidefinite problem. The author shows that modern multilevel algorithms can be considered as standard iterative methods over the semidefinite system.

Journal ArticleDOI
TL;DR: In this article, the authors provide lattice sum formulas for upper frame bounds that provide guidance in choosing lattice parameters that yield the most snug frame at a stipulated density of basis functions.
Abstract: In the early 1960s research into radar signal synthesis produced important formulas describing the action of the two-dimensional Fourier transform on auto- and crossambiguity surfaces. When coupled with the Poisson Summation formula, these results become applicable to the theory of Weyl-Heisenberg systems, in the form of lattice sum formulas that relate the energy of the discrete crossambiguity function of two signals f and g over a lattice with the inner product of the discrete autoambiguity functions of f and g over a "complementary" lattice. These lattice sum formulas provide a framework for a new proof of a result of N.J. Munch characterizing tight frames and for establishing an important relationship between l1-summability (condition A) of the discrete ambiguity function of g over a lattice and properties of the Weyl-Heisenberg system of g over the complementary lattice. This condition leads to formulas for upper frame bounds that appear simpler than those previously published and provide guidance in choosing lattice parameters that yield the most snug frame at a stipulated density of basis functions.

Journal ArticleDOI
01 Feb 1994
TL;DR: In this article, a continuum multi-configurational dynamical theory of electron transfer (ET) reactions in a chemical solute immersed in a polar solvent is developed, where the solute wave function is represented as a CI expansion.
Abstract: The continuum multi-configurational dynamical theory of electron transfer (ET) reactions in a chemical solute immersed in a polar solvent is developed. The solute wave function is represented as a CI expansion. The corresponding decomposition of the solute charge density generates a set of dynamical variables, the discrete medium coordinates. A new expression for the free energy surface in terms of these coordinates is derived. The stochastic equations of motion derived earlier are shown to be invariant under unitary transformations of orbitals used to build the CI expansion provided the latter is complete over the corresponding orbital subspace, and also under general linear transformations of the bases employed in expanding the charge density. The interrelation between the present general treatment and the reduced theory applied previously in terms of the two-level ET model is investigated. Finally, the explicit expression for the screening potential of medium electrons is derived in the electronic Born-Oppenheimer approximation (fast (slow) electronic timescale for solvent (solute)). The theory leads to a self-consistent scheme for practical calculations of rate constants for ET reactions involving complex solutes. Illustrative test calculations for two-level ET systems are presented, and the importance of proper boundary conditions for realistic molecular cavities is demonstrated.
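The reduced two-level ET picture the abstract compares against can be sketched as two diabatic free energy parabolas in a single collective medium coordinate, with adiabatic surfaces obtained by diagonalizing the 2x2 electronic Hamiltonian. All parameter values here are illustrative, not taken from the paper:

```python
import numpy as np

# Two-level electron-transfer model: reorganization energy lam, driving
# force dG, electronic coupling V (arbitrary illustrative units).
lam, dG, V = 1.0, -0.25, 0.05

def diabatic(x):
    g1 = lam * x**2                      # reactant surface, minimum at x = 0
    g2 = lam * (x - 1.0)**2 + dG         # product surface, minimum at x = 1
    return g1, g2

def adiabatic(x):
    g1, g2 = diabatic(x)
    avg, half = 0.5 * (g1 + g2), 0.5 * (g1 - g2)
    gap = np.sqrt(half**2 + V**2)
    return avg - gap, avg + gap          # lower / upper adiabatic surface

# Diabatic crossing point, where g1(xc) = g2(xc):
xc = 0.5 * (1.0 + dG / lam)
g1c, g2c = diabatic(xc)
lo, hi = adiabatic(xc)
print(f"crossing at x = {xc:.3f}, activation = {g1c:.4f}")
print(f"Marcus (lam+dG)^2/(4 lam) = {(lam + dG)**2 / (4 * lam):.4f}")
print(f"adiabatic splitting at crossing = {hi - lo:.3f} (2V = {2 * V})")
```

The activation energy at the crossing reproduces the Marcus expression (lam+dG)^2/(4 lam), and the adiabatic surfaces avoid the crossing by 2V.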

Journal ArticleDOI
TL;DR: In this article, the combination of the curl-curl form of the vector Helmholtz equation with a local radiation boundary condition (RBC) is used to eliminate spurious nonzero eigenvalues in the spectrum of the matrix operator.
Abstract: A formulation is proposed for electromagnetic scattering from two-dimensional heterogeneous structures that illustrates the combination of the curl-curl form of the vector Helmholtz equation with a local radiation boundary condition (RBC). To eliminate spurious nonzero eigenvalues in the spectrum of the matrix operator, vector basis functions incorporating the Nedelec constraints are employed. Basis functions of linear and quadratic order are presented, and approximations made necessary by the use of the local RBC are discussed. Results obtained with linear-tangential/quadratic-normal vector basis functions exhibit excellent agreement with exact solutions for layered circular cylinder geometries, and demonstrate that abrupt jump discontinuities in the normal field components at material interfaces can be accurately modeled. The vector 2D formulation illustrates the features necessary for a general three-dimensional implementation.
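The lowest-order Nedelec (Whitney) edge functions behind such constraints are W_ij = l_i grad(l_j) - l_j grad(l_i) in barycentric coordinates; their defining property, a constant tangential component along the associated edge, is what permits normal-field jumps while suppressing spurious modes. A small sketch on the reference triangle (not code from the paper):

```python
import numpy as np

# Reference triangle with vertices v1=(0,0), v2=(1,0), v3=(0,1).
# Barycentric coordinates l1 = 1-x-y, l2 = x, l3 = y and their
# (constant) gradients:
grad = {1: np.array([-1.0, -1.0]),
        2: np.array([1.0, 0.0]),
        3: np.array([0.0, 1.0])}

def lam(i, p):
    x, y = p
    return {1: 1 - x - y, 2: x, 3: y}[i]

def whitney(i, j, p):
    """Lowest-order Nedelec edge function W_ij = l_i grad l_j - l_j grad l_i."""
    return lam(i, p) * grad[j] - lam(j, p) * grad[i]

# Tangential component of W_12 along its own edge v1->v2 (tangent t=(1,0))
# is constant, even though the normal component varies linearly:
t = np.array([1.0, 0.0])
vals = [whitney(1, 2, (x, 0.0)) @ t for x in np.linspace(0.0, 1.0, 5)]
print(vals)  # every entry equals 1.0
```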

Journal ArticleDOI
TL;DR: In this paper, two methods for the determination of scattering length density profiles from specular reflectivity data are discussed: cubic splines and series of sine and cosine terms.
Abstract: Two methods for the determination of scattering length density profiles from specular reflectivity data are discussed. For either method kinematical or dynamical theory can be used to calculate the reflectivity. In the first method the scattering density is parametrized using cubic splines. The coefficients in the series are determined by constrained nonlinear least-squares methods, in which the smoothest solution that agrees with the data is chosen. In the second approach the profile is expressed as a series of sine and cosine terms. A smoothness constraint is used which reduces the coefficients of the higher harmonics in the Fourier series. The two methods work equally well in most situations, and they are able to recover the original profiles. In general, the method using splines as the basis functions is better suited for aperiodic than for periodic structures, whereas the sine/cosine basis is well suited for periodic and nearly periodic structures.
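The second approach, a sine/cosine series with a smoothness constraint that damps higher harmonics, amounts to a weighted ridge regression. A minimal sketch with a synthetic profile; the penalty weights, smoothing strength, and noise level are all hypothetical choices, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)
z = np.linspace(0.0, 1.0, 200)
true_profile = 1.0 + 0.5 * np.cos(2 * np.pi * z) + 0.2 * np.sin(4 * np.pi * z)
data = true_profile + 0.05 * rng.standard_normal(z.size)

# Design matrix: constant term plus sine/cosine terms up to harmonic M.
M = 10
cols, weights = [np.ones_like(z)], [0.0]
for m in range(1, M + 1):
    cols += [np.cos(2 * np.pi * m * z), np.sin(2 * np.pi * m * z)]
    weights += [m**2, m**2]          # penalize higher harmonics more strongly
A = np.column_stack(cols)
W = np.diag(weights)

mu = 1e-3                             # smoothing strength (chosen by hand)
coef = np.linalg.solve(A.T @ A + mu * W, A.T @ data)
fit = A @ coef
rms = np.sqrt(np.mean((fit - true_profile) ** 2))
print(f"rms error vs. true profile: {rms:.4f}")
```

In practice the constrained nonlinear least-squares step would also enforce agreement with the measured reflectivity rather than the profile itself; the sketch only shows how the harmonic penalty selects the smoothest consistent series.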

Proceedings ArticleDOI
30 Oct 1994
TL;DR: For 3D positron emission tomography, the 3D algebraic reconstruction technique using blobs can reach comparable or even better quality than the 3D filtered backprojection method after only one cycle through the projection data.
Abstract: Incorporation of spherically-symmetric volume elements (blobs), instead of the conventional voxels, into iterative image reconstruction algorithms has been found in the authors' previous studies to lead to significant improvement in the quality of the reconstructed images. Furthermore, for 3D positron emission tomography, the 3D algebraic reconstruction technique using blobs can reach comparable or even better quality than the 3D filtered backprojection method after only one cycle through the projection data. The only shortcoming of the blob reconstruction is an increased computational demand, because of the overlapping nature of the blobs. These encouraging results were obtained in the authors' previous studies for the case when the blobs were placed on the same 3D simple cubic grid used for voxel basis functions. For basis functions which are spherically symmetric, there are more advantageous arrangements of the 3D grid, enabling a more isotropic distribution of the spherical functions in 3D space and a better packing efficiency of the image spectrum. A good arrangement is the body-centered cubic grid. The authors' studies confirmed that, when using this type of 3D grid, the number of grid points can be effectively reduced, decreasing the computational and memory demands while preserving the quality of the reconstructed images.
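A body-centered cubic grid is simply two interleaved simple cubic grids, the second offset by half a cell in every direction, giving two points per cubic cell but a larger nearest-neighbour distance relative to the cell edge. A construction sketch (illustrative only; whether fewer BCC points suffice for a given image depends on its spectral support, which is the packing-efficiency argument above):

```python
import numpy as np

def simple_cubic(n):
    """n^3 points of a simple cubic grid on the unit cube."""
    g = np.arange(n) / n
    return np.array(np.meshgrid(g, g, g)).reshape(3, -1).T

def body_centered_cubic(n):
    """BCC grid: two interleaved simple cubic grids, the second offset by
    half the cubic cell in every direction (2 points per cubic cell)."""
    sc = simple_cubic(n)
    return np.vstack([sc, sc + 0.5 / n])

n = 8
sc, bcc = simple_cubic(n), body_centered_cubic(n)
print(len(sc), len(bcc))                  # n^3 and 2*n^3 points

# Nearest BCC neighbours sit at half the body diagonal of the cubic cell,
# i.e. (sqrt(3)/2) * edge, vs. one full edge for the simple cubic grid:
print(np.sqrt(3) / 2 / n, 1 / n)
```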

Journal ArticleDOI
TL;DR: In this paper, a numerically efficient technique for the calculation of the method of moments (MoM) impedance matrix is presented for planar periodic structures rendered in arbitrary triangular discretizations and embedded in layered media.
Abstract: A numerically efficient technique for the calculation of the method of moments (MoM) impedance matrix is presented for planar periodic structures rendered in arbitrary triangular discretizations and embedded in layered media. The technique is based on the MoM applied to a mixed potential integral equation (MPIE) in conjunction with triangular-domain basis functions. Rapid convergence of the matrix elements is achieved with a hybrid spectral/spatial decomposition of the Green's functions. Transformations of the spectral to spatial Green's functions for layered media are performed by using complex images. Examples demonstrating the numerical efficiency and accuracy of the method are given.