
Showing papers on "Basis (linear algebra)" published in 2018


Journal ArticleDOI
TL;DR: A non-intrusive reduced basis (RB) method is proposed for parametrized nonlinear structural analysis involving large deformations and elasto-plastic constitutive relations; Gaussian process regression is used to approximate the projection coefficients.

180 citations


Journal ArticleDOI
TL;DR: A powerful graph-theoretic representation of the energy flow polynomials is established which enables efficient algorithms for their computation; linear classification with these polynomials achieves excellent performance on three representative jet tagging problems: quark/gluon discrimination, boosted W tagging, and boosted top tagging.
Abstract: We introduce the energy flow polynomials: a complete set of jet substructure observables which form a discrete linear basis for all infrared- and collinear-safe observables. Energy flow polynomials are multiparticle energy correlators with specific angular structures that are a direct consequence of infrared and collinear safety. We establish a powerful graph-theoretic representation of the energy flow polynomials which allows us to design efficient algorithms for their computation. Many common jet observables are exact linear combinations of energy flow polynomials, and we demonstrate the linear spanning nature of the energy flow basis by performing regression for several common jet observables. Using linear classification with energy flow polynomials, we achieve excellent performance on three representative jet tagging problems: quark/gluon discrimination, boosted W tagging, and boosted top tagging. The energy flow basis provides a systematic framework for complete investigations of jet substructure using linear methods.
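For concreteness, the nested-sum structure of these correlators can be sketched directly in code. Below is a minimal brute-force evaluation of an EFP for a given multigraph, assuming the standard definition (energy fractions z_i, pairwise angular distances theta_ij, one theta factor per graph edge); the paper's graph-theoretic algorithms compute the same quantities far more efficiently.

```python
import itertools

import numpy as np

def efp(z, theta, graph):
    """Brute-force energy flow polynomial for a multigraph.

    z:     energy fractions of the M jet constituents, shape (M,)
    theta: pairwise angular distances, shape (M, M)
    graph: list of edges (k, l) over vertices 0..N-1
    """
    n_vertices = 1 + max(max(edge) for edge in graph)
    total = 0.0
    # Sum over all assignments of constituents to graph vertices.
    for idx in itertools.product(range(len(z)), repeat=n_vertices):
        term = np.prod([z[i] for i in idx])
        for k, l in graph:
            term *= theta[idx[k], idx[l]]
        total += term
    return total

# Toy three-particle jet in the (rapidity, azimuth) plane.
z = np.array([0.5, 0.3, 0.2])
pts = np.array([[0.0, 0.0], [0.1, 0.2], [-0.2, 0.1]])
theta = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
print(efp(z, theta, graph=[(0, 1)]))  # simplest two-point correlator
```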

158 citations


Journal ArticleDOI
TL;DR: In this article, the authors propose a constraint energy minimization, performed in the oversampling domain, to construct multiscale spaces for GMsFEM; the resulting construction can handle non-decaying components of the local minimizers.

151 citations



Journal ArticleDOI
TL;DR: The auxiliary-model-based orthogonal matching pursuit algorithm can simultaneously identify the parameters and orders of the Hammerstein system, and it achieves highly efficient identification performance.

100 citations


Proceedings ArticleDOI
01 Jul 2018
TL;DR: A Doubly Aligned Incomplete Multi-view Clustering algorithm (DAIMC) based on weighted semi-nonnegative matrix factorization (semi-NMF) is proposed.
Abstract: Nowadays, multi-view clustering has attracted more and more attention. To date, almost all the previous studies assume that views are complete. However, in reality, it is often the case that each view may contain some missing instances. Such incompleteness makes it impossible to directly use traditional multi-view clustering methods. In this paper, we propose a Doubly Aligned Incomplete Multi-view Clustering algorithm (DAIMC) based on weighted semi-nonnegative matrix factorization (semi-NMF). Specifically, on the one hand, DAIMC utilizes the given instance alignment information to learn a common latent feature matrix for all the views. On the other hand, DAIMC establishes a consensus basis matrix with the help of $L_{2,1}$-Norm regularized regression for reducing the influence of missing instances. Consequently, compared with existing methods, besides inheriting the strength of semi-NMF with ability to handle negative entries, DAIMC has two unique advantages: 1) solving the incomplete view problem by introducing a respective weight matrix for each view, making it able to easily adapt to the case with more than two views; 2) reducing the influence of view incompleteness on clustering by enforcing the basis matrices of individual views being aligned with the help of regression. Experiments on four real-world datasets demonstrate its advantages.
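Reading the abstract, the objective couples a weighted per-view factorization with an $L_{2,1}$-regularized alignment of the per-view basis matrices to a consensus. A schematic of that kind of objective (illustrative only; $W_v$, $X_v$, $U_v$, $V$, $U^*$ and $\alpha$ are generic names, and the paper's exact formulation differs in detail) is

$$\min_{\{U_v\},\,V}\;\sum_{v=1}^{n_v}\bigl\|W_v \odot (X_v - U_v V^{\top})\bigr\|_F^{2}\;+\;\alpha\sum_{v=1}^{n_v}\bigl\|U_v - U^{*}\bigr\|_{2,1},$$

where the weight matrices $W_v$ down-weight missing instances in each view and $V$ is the common latent feature matrix shared across views.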

96 citations


Journal ArticleDOI
TL;DR: In this article, the tools of intersection theory are introduced to the study of Feynman integrals, allowing for a new way of projecting integrals onto a basis; the authors consider the Baikov representation of maximal cuts in arbitrary space-time dimension.
Abstract: We introduce the tools of intersection theory to the study of Feynman integrals, which allows for a new way of projecting integrals onto a basis. In order to illustrate this technique, we consider the Baikov representation of maximal cuts in arbitrary space-time dimension. We introduce a minimal basis of differential forms with logarithmic singularities on the boundaries of the corresponding integration cycles. We give an algorithm for computing a basis decomposition of an arbitrary maximal cut using so-called intersection numbers and describe two alternative ways of computing them. Furthermore, we show how to obtain Pfaffian systems of differential equations for the basis integrals using the same technique. All the steps are illustrated on the example of a two-loop non-planar triangle diagram with a massive loop.
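The projection the abstract refers to can be stated compactly. In the notation commonly used for intersection numbers (a hedged schematic, not the paper's full construction), with basis forms $\langle e_i|$, dual forms $|h_j\rangle$, and the matrix $\mathbf{C}_{ij} = \langle e_i | h_j \rangle$ of their intersection numbers, an arbitrary form $\langle \varphi|$ decomposes as

$$\langle \varphi | \;=\; \sum_{i,j} \langle \varphi | h_j \rangle \,\bigl(\mathbf{C}^{-1}\bigr)_{ji}\, \langle e_i |,$$

so basis coefficients follow from evaluating intersection numbers rather than from a system of integration-by-parts identities.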

93 citations


Journal ArticleDOI
TL;DR: In this paper, the authors present a natural extension of this space to non-planar pentagon functions, which provides the basis for their pentagon bootstrap program. They use this method to evaluate the symbols of two non-trivial non-planar five-particle integrals, up to and including the finite part.
Abstract: In Phys. Rev. Lett. 116 (2016) 062001, the space of planar pentagon functions that describes all two-loop on-shell five-particle scattering amplitudes was introduced. In the present paper we present a natural extension of this space to non-planar pentagon functions. This provides the basis for our pentagon bootstrap program. We classify the relevant functions up to weight four, which is relevant for two-loop scattering amplitudes. We constrain the first entry of the symbol of the functions using information on branch cuts. Drawing on an analogy from the planar case, we introduce a conjectural second-entry condition on the symbol. We then show that the information on the function space, when complemented with some additional insights, can be used to efficiently bootstrap individual Feynman integrals. The extra information is read off of Mellin-Barnes representations of the integrals, either by evaluating simple asymptotic limits, or by taking discontinuities in the kinematic variables. We use this method to evaluate the symbols of two non-trivial non-planar five-particle integrals, up to and including the finite part.

92 citations


Journal ArticleDOI
TL;DR: In this paper, the conformal bootstrap equations are set up in Mellin space and the anomalous dimensions and OPE coefficients of large-spin double-trace operators are analysed. Decomposing the equations in terms of continuous Hahn polynomials yields explicit expressions as an asymptotic expansion in inverse conformal spin to any order, reproducing the contribution of any primary operator and its descendants in the crossed channel.
Abstract: We set up the conventional conformal bootstrap equations in Mellin space and analyse the anomalous dimensions and OPE coefficients of large spin double trace operators. By decomposing the equations in terms of continuous Hahn polynomials, we derive explicit expressions as an asymptotic expansion in inverse conformal spin to any order, reproducing the contribution of any primary operator and its descendants in the crossed channel. The expressions are in terms of known mathematical functions and involve generalized Bernoulli (Norlund) polynomials and the Mack polynomials and enable us to derive certain universal properties. Comparing with the recently introduced reformulated equations in terms of crossing symmetric tree level exchange Witten diagrams, we show that to leading order in anomalous dimension but to all orders in inverse conformal spin, the equations are the same as in the conventional formulation. At the next order, the polynomial ambiguity in the Witten diagram basis is needed for the equivalence and we derive the necessary constraints for the same.

88 citations


Journal ArticleDOI
TL;DR: In this paper, the authors studied general properties of the conformal basis, the space of wave functions in ($d+2$)-dimensional Minkowski space that are primaries of the Lorentz group $SO(1,d+1)$.
Abstract: We study general properties of the conformal basis, the space of wave functions in ($d+2$)-dimensional Minkowski space that are primaries of the Lorentz group $SO(1,d+1)$. Scattering amplitudes written in this basis have the same symmetry as $d$-dimensional conformal correlators. We translate the optical theorem, which is a direct consequence of unitarity, into the conformal basis. In the particular case of a tree-level exchange diagram, the optical theorem takes the form of a conformal block decomposition on the principal continuous series, with operator product expansion (OPE) coefficients being the three-point coupling written in the same basis. We further discuss the relation between the massless conformal basis and the bulk point singularity in $\mathrm{AdS}/\mathrm{CFT}$. Some three- and four-point amplitudes in ($2+1$) dimensions are explicitly computed in this basis to demonstrate these results.

85 citations


Journal ArticleDOI
TL;DR: A new version of the backward substitution method (BSM) for simulating transfer in anisotropic and inhomogeneous media governed by linear and fully nonlinear advection–diffusion–reaction equations (ADREs) is presented; the method is extended to general fully nonlinear ADREs in combination with the quasilinearization technique.

Journal ArticleDOI
TL;DR: This review uses raw data from second-order Møller-Plesset perturbation theory, as well as CCSD, CCSD(T), and multireference configuration interaction methods; the emphasis is on work done by the author's research group.
Abstract: Because the one-electron basis set limit is difficult to reach in correlated post-Hartree–Fock ab initio calculations, the low-cost route of using methods that extrapolate to the estimated basis set limit attracts immediate interest. The situation is somewhat more satisfactory at the Hartree–Fock level because numerical calculation of the energy is often affordable at nearly converged basis set levels. Still, extrapolation schemes for the Hartree–Fock energy are addressed here, although the focus is on the more slowly convergent and computationally demanding correlation energy. Because they are frequently based on the gold-standard coupled-cluster theory with single, double, and perturbative triple excitations [CCSD(T)], correlated calculations are often affordable only with the smallest basis sets, and hence single-level extrapolations from one raw energy could attain maximum usefulness. This possibility is examined. Whenever possible, this review uses raw data from second-order Møller–Plesset perturbation theory, as well as CCSD, CCSD(T), and multireference configuration interaction methods.
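As a concrete illustration of the extrapolation schemes this review surveys, here is the standard two-point inverse-power formula in code (a generic sketch, not the review's specific protocols; the sample energies are hypothetical):

```python
def cbs_two_point(e_x, e_y, x, y, power=3):
    """Two-point extrapolation assuming E_X = E_CBS + A / X**power.

    Eliminating A between the equations for cardinal numbers X and Y
    gives the closed form below; power=3 is the conventional choice for
    correlation energies with correlation-consistent basis sets.
    """
    return (x**power * e_x - y**power * e_y) / (x**power - y**power)

# Hypothetical correlation energies (hartree) for cardinal numbers 3 and 4.
print(cbs_two_point(-0.30512, -0.31234, 3, 4))  # estimated basis set limit
```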

Proceedings ArticleDOI
18 Jun 2018
TL;DR: In this article, the authors show how to make polynomial solvers based on the action matrix method faster by careful selection of the monomial bases, which have traditionally been derived from a Gröbner basis; going beyond Gröbner bases leads to more efficient solvers in many cases.
Abstract: Many computer vision applications require robust estimation of the underlying geometry, in terms of camera motion and 3D structure of the scene. These robust methods often rely on running minimal solvers in a RANSAC framework. In this paper we show how we can make polynomial solvers based on the action matrix method faster, by careful selection of the monomial bases. These monomial bases have traditionally been based on a Gröbner basis for the polynomial ideal. Here we describe how we can enumerate all such bases in an efficient way. We also show that going beyond Gröbner bases leads to more efficient solvers in many cases. We present a novel basis sampling scheme that we evaluate on a number of problems.
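The action matrix idea is easy to see on a toy system. The sketch below uses a hand-computed Gröbner basis (no solver library) and shows how eigenvalues of the multiplication ("action") matrix on the quotient-ring monomial basis recover solution coordinates:

```python
import numpy as np

# System: f1 = x^2 + y^2 - 2, f2 = x - y.
# A lex Groebner basis is {x - y, y^2 - 1}, so the quotient ring has
# monomial basis {1, y}. Multiplication by y acts on this basis as
#   y * 1 = y,    y * y = y^2 = 1   (reduced modulo y^2 - 1),
# giving the action matrix below. Its eigenvalues are the y-coordinates
# of the two solutions, (1, 1) and (-1, -1).
M_y = np.array([[0.0, 1.0],
                [1.0, 0.0]])
print(np.linalg.eigvals(M_y))  # [ 1. -1.]
```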

Journal ArticleDOI
TL;DR: It is argued that the use of function products can have a wide-reaching effect in extending the power of functional maps in a variety of applications, in particular by enabling the transfer of high-frequency functions without changing the representation size or complexity.
Abstract: In this paper, we consider the problem of information transfer across shapes and propose an extension to the widely used functional map representation. Our main observation is that in addition to the vector space structure of the functional spaces, which has been heavily exploited in the functional map framework, the functional algebra (i.e., the ability to take pointwise products of functions) can significantly extend the power of this framework. Equipped with this observation, we show how to improve one of the key applications of functional maps, namely transferring real-valued functions without conversion to point-to-point correspondences. We demonstrate through extensive experiments that by decomposing a given function into a linear combination consisting not only of basis functions but also of their pointwise products, both the representation power and the quality of the function transfer can be improved significantly. Our modification, while computationally simple, allows us to achieve higher transfer accuracy while keeping the size of the basis and the functional map fixed. We also analyze the computational complexity of optimally representing functions through linear combinations of products in a given basis and prove NP-completeness in some general cases. Finally, we argue that the use of function products can have a wide-reaching effect in extending the power of functional maps in a variety of applications, in particular by enabling the transfer of high-frequency functions without changing the representation size or complexity.
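A minimal numerical sketch of the product-extended representation follows, assuming a generic truncated basis stored column-wise; all names are illustrative, and the paper's actual pipeline (choice of product pairs, use within a functional map) is more involved.

```python
import numpy as np

def extended_basis(Phi, pairs):
    """Augment a truncated basis Phi (n_vertices x k) with pointwise
    products of selected pairs of its columns."""
    prods = np.stack([Phi[:, i] * Phi[:, j] for i, j in pairs], axis=1)
    return np.hstack([Phi, prods])

def represent(f, B):
    """Least-squares coefficients of f in the (generally non-orthogonal)
    extended basis B."""
    coeffs, *_ = np.linalg.lstsq(B, f, rcond=None)
    return coeffs

# Toy example: 100 "vertices", 4 basis functions, 2 added products.
rng = np.random.default_rng(0)
Phi = rng.standard_normal((100, 4))
B = extended_basis(Phi, pairs=[(0, 1), (2, 3)])
f = rng.standard_normal(100)
print(np.linalg.norm(B @ represent(f, B) - f))  # projection residual
```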

Journal ArticleDOI
TL;DR: Two new suboptimal, but feasible, algorithms are introduced: a search for balances following a constrained principal component approach, and a hierarchical cluster analysis of variables based on the relation between the variation matrix and the Aitchison distance.
Abstract: Compositional data analysis requires selecting an orthonormal basis with which to work on coordinates. In most cases this selection is based on a data driven criterion. Principal component analysis provides bases that are, in general, functions of all the original parts, each with a different weight hindering their interpretation. For interpretative purposes, it would be better to have each basis component as a ratio or balance of the geometric means of two groups of parts, leaving irrelevant parts with a zero weight. This is the role of principal balances, defined as a sequence of orthonormal balances which successively maximize the explained variance in a data set. The new algorithm to compute principal balances requires an exhaustive search along all the possible sets of orthonormal balances. To reduce computational time, the sets of possible partitions for up to 15 parts are stored. Two other suboptimal, but feasible, algorithms are also introduced: (i) a new search for balances following a constrained principal component approach and (ii) the hierarchical cluster analysis of variables. The latter is a new approach based on the relation between the variation matrix and the Aitchison distance. The properties and performance of these three algorithms are illustrated using a typical data set of geochemical compositions and a simulation exercise.
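For reference, a single balance between two groups of parts $R$ and $S$, containing $r$ and $s$ parts respectively and with $g(\cdot)$ the geometric mean, takes the standard isometric log-ratio form

$$b \;=\; \sqrt{\frac{r\,s}{r+s}}\;\ln\frac{g(x_R)}{g(x_S)},$$

and principal balances are an orthonormal sequence of such balances chosen to successively maximize the explained variance.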

Journal ArticleDOI
TL;DR: In this article, the Cumulative Distribution Transform (CDT) is proposed for pattern representation; it interprets patterns as probability density functions and has special properties with regard to classification.
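For orientation, the CDT of a one-dimensional density is built from cumulative distribution functions. In one common formulation (a hedged sketch of the construction, with $I$ the pattern's density, $I_0$ a fixed reference density, and $F_I$, $F_{I_0}$ their CDFs), the transform is the map $f$ satisfying

$$\int_{-\infty}^{f(x)} I(t)\,dt \;=\; \int_{-\infty}^{x} I_0(t)\,dt, \qquad \text{i.e.}\quad f = F_I^{-1}\circ F_{I_0},$$

which underlies the special classification properties mentioned above.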

Journal ArticleDOI
TL;DR: In this article, the performance of Gaussian basis sets for density functional theory based calculations of core-electron spectroscopies is assessed; the convergence of core-electron binding energies and core-excitation energies is studied for a range of basis sets, including split-valence, correlation-consistent, polarisation-consistent and individual gauge for localized orbitals (IGLO) basis sets.
Abstract: The performance of Gaussian basis sets for density functional theory based calculations of core electron spectroscopies is assessed. The convergence of core-electron binding energies and core-excitation energies using a range of basis sets, including split-valence, correlation consistent, polarisation consistent and individual gauge for localized orbitals basis sets is studied. For Δself-consistent field calculations of core-electron binding energies and core-excitation energies of first row elements, relatively small basis sets can accurately reproduce the values of much larger basis sets, with the IGLO basis sets performing particularly well. Calculations for the K-edge of second row elements are more challenging and of the smaller basis sets, pcSseg-2 has the best performance. For the correlation-consistent basis sets, inclusion of core-valence correlation functions is important, with the cc-pCVTZ basis set giving accurate results. Time-dependent density functional theory based calculations of core-excitation energies show less sensitivity to the basis set with relatively small basis sets, such as pcSseg-1 or pcSseg-2, reproducing the values for much larger basis sets accurately. In contrast, time-dependent density functional theory calculations of X-ray emission energies are highly dependent on the basis set, but the IGLO-II, IGLO-III and pcSseg-2 basis sets provide a good level of accuracy.
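For context, the Δself-consistent-field core-electron binding energies referenced here are simple total-energy differences between two separate SCF calculations:

$$\mathrm{CEBE} \;=\; E_{\mathrm{SCF}}^{N-1}(\text{core-ionized}) \;-\; E_{\mathrm{SCF}}^{N}(\text{ground state}).$$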

Journal ArticleDOI
TL;DR: This paper introduces reduced basis spaces not only for the state and adjoint variable but also for the distributed control variable and proposes two different error estimation procedures that provide rigorous bounds for the error in the optimal control and the associated cost functional.
Abstract: In this paper, we consider the efficient and reliable solution of distributed optimal control problems governed by parametrized elliptic partial differential equations. The reduced basis method is used as a low-dimensional surrogate model to solve the optimal control problem. To this end, we introduce reduced basis spaces not only for the state and adjoint variable but also for the distributed control variable. We also propose two different error estimation procedures that provide rigorous bounds for the error in the optimal control and the associated cost functional. The reduced basis optimal control problem and associated a posteriori error bounds can be efficiently evaluated in an offline–online computational procedure, thus making our approach relevant in the many-query or real-time context. We compare our bounds with a previously proposed bound based on the Banach–Necas–Babuska theory and present numerical results for two model problems: a Graetz flow problem and a heat transfer problem. Finally, we also apply and test the performance of our newly proposed bound on a hyperthermia treatment planning problem.

Journal ArticleDOI
TL;DR: In this article, the authors propose an adaptive algorithm that enriches the approximation space in selected regions with large residuals, achieving an error reduction of three orders of magnitude over previous methods.

Proceedings ArticleDOI
01 Jan 2018
TL;DR: A generalized classification of vector control techniques that combines various principles of vector control and speed control approaches is offered.
Abstract: This paper reviews permanent magnet synchronous motor vector control techniques and existing classifications of such techniques. This paper offers a generalized classification of vector control techniques that combines various principles of vector control and speed control approaches. Methods listed in the classification are characterized by their basic qualitative characteristics. On the basis of a comparative analysis of these techniques, recommendations are given on their practical application.

Journal ArticleDOI
TL;DR: This paper revisits sampling approximation in the LCT domain to introduce a generalized approximation operator and derives an exact closed-form expression for the integrated squared error that occurs when a signal is approximated by a basis of shifted, scaled, and chirp-modulated versions of a generating function in theLCT domain.
Abstract: In this paper, we consider the performance of sampling associated with the linear canonical transform (LCT), which generalizes a large number of classical integral transforms and fundamental operations linked to signal processing and optics. First, we revisit sampling approximation in the LCT domain to introduce a generalized approximation operator. Then, we derive an exact closed-form expression for the integrated squared error that occurs when a signal is approximated by a basis of shifted, scaled, and chirp-modulated versions of a generating function in the LCT domain. Several basic properties of the approximation error are presented. The derived results can be applied to a wide variety of sampling approximation schemes in the LCT domain. Finally, experimental examples are given to illustrate the theoretical derivations.
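For reference, one common convention for the LCT with parameter matrix $(a, b; c, d)$, $ad - bc = 1$, $b \neq 0$, is

$$\mathcal{L}_{(a,b;c,d)}\{f\}(u) \;=\; \frac{1}{\sqrt{j 2\pi b}}\int_{-\infty}^{\infty} f(t)\,\exp\!\Bigl(\frac{j}{2b}\bigl(a t^{2} - 2 u t + d u^{2}\bigr)\Bigr)\,dt,$$

which reduces to the Fourier, fractional Fourier, and Fresnel transforms for particular parameter choices (sign and normalization conventions vary across the literature).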

Journal ArticleDOI
TL;DR: The prior on the unknown function can be described in terms of its approximability by finite-dimensional reduced model spaces, e.g., finite elements or trigonometric polynomials, as well as reduced basis spaces which are designed to match the solution manifold more closely.
Abstract: We consider the problem of optimal recovery of an unknown function u in a Hilbert space V from measurements of the form $\ell_j(u)$, $j = 1, \dots, m$, where the $\ell_j$ are known linear functionals on V. We are motivated by the setting where u is a solution to a PDE with some unknown parameters, therefore lying on a certain manifold contained in V. Following the approach adopted in [9, 3], the prior on the unknown function can be described in terms of its approximability by finite-dimensional reduced model spaces $(V_n)_{n\ge 1}$ where $\dim(V_n) = n$. Examples of such spaces include classical approximation spaces, e.g. finite elements or trigonometric polynomials, as well as reduced basis spaces which are designed to match the solution manifold more closely. The error bounds for optimal recovery under such priors are of the form $\mu(V_n, W_m)\,e_n$, where $e_n$ is the accuracy of the reduced model $V_n$ and $\mu(V_n, W_m)$ is the inverse of an inf-sup constant that describes the angle between $V_n$ and the space $W_m$ spanned by the Riesz representers of $(\ell_1, \dots, \ell_m)$. This paper addresses the problem of properly selecting the measurement functionals, in order to control at best the stability constant $\mu(V_n, W_m)$ for a given reduced model space $V_n$. Assuming that the $\ell_j$ can be picked from a given dictionary $\mathcal{D}$, we introduce and analyze greedy algorithms that perform a sub-optimal selection in reasonable computational time. We study the particular case of dictionaries that consist either of point value evaluations or local averages, as idealized models for sensors in physical systems. Our theoretical analysis and greedy algorithms may therefore be used in order to optimize the position of such sensors.
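A minimal sketch of the greedy selection idea follows, assuming an orthonormal reduced basis and orthonormalizing the chosen measurement vectors at each step; the smallest singular value of the cross-Gramian serves as a surrogate for the inf-sup constant. Names and the dictionary of point evaluations are illustrative, and this is not the paper's exact algorithm.

```python
import numpy as np

def greedy_sensor_selection(V, dictionary, m):
    """Greedily pick m columns of `dictionary` (candidate Riesz
    representers) to keep the inf-sup constant between span(V) and the
    measurement space large, i.e. the stability constant mu = 1/beta small.

    V:          orthonormal reduced basis, shape (N, n)
    dictionary: candidate measurement vectors, shape (N, D)
    """
    chosen, beta = [], 0.0
    for _ in range(m):
        best_j, best_beta = None, -1.0
        for j in range(dictionary.shape[1]):
            if j in chosen:
                continue
            W, _ = np.linalg.qr(dictionary[:, chosen + [j]])
            cand = np.linalg.svd(W.T @ V, compute_uv=False).min()
            if cand > best_beta:
                best_j, best_beta = j, cand
        chosen.append(best_j)
        beta = best_beta
    return chosen, beta

rng = np.random.default_rng(0)
V, _ = np.linalg.qr(rng.standard_normal((50, 3)))  # reduced model space V_n
D = np.eye(50)                                     # point-evaluation dictionary
sensors, beta = greedy_sensor_selection(V, D, m=5)
print(sensors, 1.0 / beta)                         # sensor indices and mu
```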

Journal ArticleDOI
TL;DR: Model Order Reduction, roughly speaking, can be summarized in two parts, an offline part and an online part: one constructs the database of solutions, or snapshots, for properly selected parameters, and the other uses the database for a fast evaluation for a new parameter.
Abstract: Model Order Reduction, roughly speaking, can be summarized in two parts: an offline and an online part. In the first, one constructs the database of solutions, or snapshots, for properly selected parameters. In the latter, the database is used for a fast evaluation of the quantity of interest for a new parameter (see for example Schilders, Van der Vorst, and Rommes 2008; Hesthaven et al. 2016). Choices can be made either for the selection of the parameters in the construction of the database or on how the database is used to approximate the manifold of the snapshots.
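The offline/online split is easy to make concrete with proper orthogonal decomposition (POD) as the compression step. A minimal sketch under that assumption follows (the text above also covers greedy parameter selection and other ways of using the database); the linear system here is a stand-in:

```python
import numpy as np

def offline(snapshots, tol=1e-6):
    """Compress a snapshot matrix (one solution per column) into a
    reduced basis via the SVD, keeping enough modes to capture the
    snapshot energy up to `tol`."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(energy, 1.0 - tol)) + 1
    return U[:, :r]

def online(A, b, V):
    """Galerkin projection: solve the small r x r system, lift back up."""
    y = np.linalg.solve(V.T @ A @ V, V.T @ b)
    return V @ y

rng = np.random.default_rng(1)
S = rng.standard_normal((200, 5)) @ rng.standard_normal((5, 30))  # rank-5 snapshots
V = offline(S)
A, b = np.eye(200), S[:, 0]
print(np.linalg.norm(online(A, b, V) - b))  # a snapshot is recovered exactly
```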

Journal ArticleDOI
TL;DR: It is shown that machine learning can be used to predict such adaptive basis sets using local geometrical information only, so that various properties of standard DFT calculations, including nuclear gradients, can be obtained at much lower cost.
Abstract: It is chemically intuitive that an optimal atom centered basis set must adapt to its atomic environment, for example by polarizing toward nearby atoms. Adaptive basis sets of small size can be significantly more accurate than traditional atom centered basis sets of the same size. The small size and well conditioned nature of these basis sets leads to large savings in computational cost, in particular in a linear scaling framework. Here, it is shown that machine learning can be used to predict such adaptive basis sets using local geometrical information only. As a result, various properties of standard DFT calculations can be easily obtained at much lower costs, including nuclear gradients. In our approach, a rotationally invariant parametrization of the basis is obtained by employing a potential anchored on neighboring atoms to ultimately construct a rotation matrix that turns a traditional atom centered basis set into a suitable adaptive basis set. The method is demonstrated using MD simulations of liquid...

Posted Content
TL;DR: In this article, a model reduction approach is presented for problems with coherent structures that propagate over time, such as convection-dominated flows and wave-type phenomena. Traditional model reduction methods struggle with these transport-dominated problems because propagating coherent structures introduce high-dimensional features that require high-dimensional approximation spaces.
Abstract: This work presents a model reduction approach for problems with coherent structures that propagate over time such as convection-dominated flows and wave-type phenomena. Traditional model reduction methods have difficulties with these transport-dominated problems because propagating coherent structures typically introduce high-dimensional features that require high-dimensional approximation spaces. The approach proposed in this work exploits the locality in space and time of propagating coherent structures to derive efficient reduced models. Full-model solutions are approximated locally in time via local reduced spaces that are adapted with basis updates during time stepping. The basis updates are derived from querying the full model at a few selected spatial coordinates. A core contribution of this work is an adaptive sampling scheme for selecting at which components to query the full model to compute basis updates. The presented analysis shows that, in probability, the more local the coherent structure is in space, the fewer full-model samples are required to adapt the reduced basis with the proposed adaptive sampling scheme. Numerical results on benchmark examples with interacting wave-type structures and time-varying transport speeds and on a model combustor of a single-element rocket engine demonstrate the wide applicability of the proposed approach and runtime speedups of up to one order of magnitude compared to full models and traditional reduced models.

Journal ArticleDOI
25 Jul 2018 - Symmetry
TL;DR: The metric dimension and metric basis of 2D lattices of alpha-boron nanotubes are computed.
Abstract: Concepts of resolving set and metric basis have enjoyed a lot of success because of multi-purpose applications both in computer and mathematical sciences. For a connected graph G(V,E), a subset W of V(G) is a resolving set for G if every two vertices of G have distinct representations with respect to W. A resolving set of minimum cardinality is called a metric basis for graph G, and this minimum cardinality is known as the metric dimension of G. Boron nanotubes with different lattice structures, radii and chiralities have attracted attention due to their transport properties, electronic structure and structural stability. In the present article, we compute the metric dimension and metric basis of 2D lattices of alpha-boron nanotubes.
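The definitions in this abstract translate directly into a small brute-force check, shown below for illustration (practical only for small graphs; the paper instead derives results for the alpha-boron nanotube lattices analytically):

```python
import itertools

import networkx as nx

def metric_dimension(G):
    """Smallest W such that the vector of shortest-path distances to W
    distinguishes every pair of vertices; returns (dimension, a basis)."""
    nodes = list(G.nodes)
    dist = dict(nx.all_pairs_shortest_path_length(G))
    for k in range(1, len(nodes) + 1):
        for W in itertools.combinations(nodes, k):
            reps = {tuple(dist[v][w] for w in W) for v in nodes}
            if len(reps) == len(nodes):  # all representations distinct
                return k, W              # W is a metric basis
    raise ValueError("graph must be connected")

print(metric_dimension(nx.cycle_graph(6)))  # cycles have metric dimension 2
```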

Journal ArticleDOI
TL;DR: A certified reduced basis approach is proposed for the strong- and weak-constraint four-dimensional variational (4D-Var) data assimilation problem for a parametrized PDE model, generating reduced order approximations for the state, adjoint, initial condition, and model error.
Abstract: We propose a certified reduced basis approach for the strong- and weak-constraint four-dimensional variational (4D-Var) data assimilation problem for a parametrized PDE model. While the standard strong-constraint 4D-Var approach uses the given observational data to estimate only the unknown initial condition of the model, the weak-constraint 4D-Var formulation additionally provides an estimate for the model error and thus can deal with imperfect models. Since the model error is a distributed function in both space and time, the 4D-Var formulation leads to a large-scale optimization problem for every given parameter instance of the PDE model. To solve the problem efficiently, various reduced order approaches have therefore been proposed in the recent past. Here, we employ the reduced basis method to generate reduced order approximations for the state, adjoint, initial condition, and model error. Our main contribution is the development of efficiently computable a posteriori upper bounds for the error of the reduced basis approximation with respect to the underlying high-dimensional 4D-Var problem. Numerical experiments are conducted to test the validity of our approach.

Posted Content
TL;DR: In this paper, the problem of retrieving the most relevant labels for a given input when the size of the output space is very large is considered, and a statistical and analytical basis for using surrogate losses is established.
Abstract: We consider the problem of retrieving the most relevant labels for a given input when the size of the output space is very large. Retrieval methods are modeled as set-valued classifiers which output a small set of classes for each input, and a mistake is made if the label is not in the output set. Despite its practical importance, a statistically principled, yet practical solution to this problem is largely missing. To this end, we first define a family of surrogate losses and show that they are calibrated and convex under certain conditions on the loss parameters and data distribution, thereby establishing a statistical and analytical basis for using these losses. Furthermore, we identify a particularly intuitive class of loss functions in the aforementioned family and show that they are amenable to practical implementation in the large output space setting (i.e. computation is possible without evaluating scores of all labels) by developing a technique called Stochastic Negative Mining. We also provide generalization error bounds for the losses in the family. Finally, we conduct experiments which demonstrate that Stochastic Negative Mining yields benefits over commonly used negative sampling approaches.
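The key computational idea, scoring only a sampled subset of the label space and keeping the hardest negatives, can be sketched in a few lines. The loss form and all names below are illustrative, not the paper's exact construction:

```python
import numpy as np

def snm_loss(scores, positive, sample_size, k, rng):
    """Sample negatives, keep the k highest-scoring ("hardest") ones, and
    average a hinge-type surrogate over them, so the full label space is
    never scored exhaustively."""
    negatives = rng.choice(
        [j for j in range(len(scores)) if j != positive],
        size=sample_size, replace=False)
    hardest = np.sort(scores[negatives])[::-1][:k]
    # Penalize negatives scoring within margin 1 of the positive label.
    return float(np.mean(np.maximum(0.0, 1.0 + hardest - scores[positive])))

rng = np.random.default_rng(0)
scores = rng.standard_normal(10_000)  # model scores over a large label space
print(snm_loss(scores, positive=42, sample_size=100, k=5, rng=rng))
```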

Journal ArticleDOI
TL;DR: A reliable reduced-order model (ROM) for fast frequency sweep in time-harmonic Maxwell’s equations by means of the reduced-basis method is detailed; emphasis is placed on a fast evaluation of the ROM error measure and on providing a reliable convergence criterion.
Abstract: A reliable reduced-order model (ROM) for fast frequency sweep in time-harmonic Maxwell’s equations by means of the reduced-basis method is detailed. Taking frequency as a parameter, the electromagnetic field in microwave circuits does not arbitrarily vary as frequency changes, but evolves on a very low-dimensional manifold. Approximating this low-dimensional manifold by a low-dimensional subspace, namely, the reduced-basis space, gives rise to an ROM for fast frequency sweep in microwave circuits. This avoids carrying out time-consuming finite-element analysis for each frequency in the band of interest. The behavior of the solutions to Maxwell’s equations as a function of the frequency parameter is studied and highlighted. As a result, a compact reduced-basis space for efficient model-order reduction is proposed. In this paper, the reduced-basis space is composed of two parts: 1) eigenmodes hit in the frequency band of interest, which form an orthogonal, fundamental set that describes the natural oscillating dynamics of the electromagnetic field, and 2) whatever other electromagnetic fields, sampled in the frequency band of interest, are needed to achieve convergence in the reduced-basis approximation. The reduced-basis method aims not only to find a reduced-basis space in an efficient way, but also to certify the reliability of the approximation carried out. Emphasis is placed on a fast evaluation of the ROM error measure and on providing a reliable convergence criterion. This approach is applied to both narrowband resonating structures and wideband nonresonating devices in order to show the capabilities of the method in real-life applications.

Journal ArticleDOI
TL;DR: This work supports the recently described hypothesis that the OPB problem is both a method and a basis set effect; the symmetric OPB ν9 is predicted to be the second-brightest transition and will be observed very close to 775 cm-1.
Abstract: Truncated, correlated, wave function methods either produce imaginary frequencies (in the extreme case) or nonphysically low frequencies in out-of-plane motions for carbon and adjacent atoms when the carbon atoms engage in π bonding. Cyclopropenylidene is viewed as the simplest aromatic hydrocarbon, and the present as well as previous theoretical studies have shown that this simple molecule exhibits this behavior in the two out-of-plane bends (OPBs). This nonphysical behavior has been treated by removing nearly linear dependent basis functions according to eigenvalues of the overlap matrix, by employing basis sets where the spd space saturation is balanced with higher angular momentum functions, by including basis set superposition/incompleteness error (BSSE/BSIE) corrections, or by combining standard correlation methods with explicitly correlated methods to produce hybrid potential surfaces. However, this work supports the recently described hypothesis that the OPB problem is both a method and a basis set effect.