
Showing papers in "Foundations of Computational Mathematics in 2010"


Journal ArticleDOI
TL;DR: New regularity theorems are established describing the smoothness of the solution u as a map from y∈U=(−1,1)∞ to a smoothness space W⊂V, leading to analytic estimates on the W norms of the gpc coefficients and on their space discretization error.
Abstract: Deterministic Galerkin approximations of a class of second order elliptic PDEs with random coefficients on a bounded domain D⊂ℝ^d are introduced and their convergence rates are estimated. The approximations are based on expansions of the random diffusion coefficients in L^2(D)-orthogonal bases, and on viewing the coefficients of these expansions as random parameters y=y(ω)=(y_i(ω)). This yields an equivalent parametric deterministic PDE whose solution u(x,y) is a function of both the space variable x∈D and the in general countably many parameters y. We establish new regularity theorems describing the smoothness properties of the solution u as a map from y∈U=(−1,1)∞ to $V=H^{1}_{0}(D)$. These results lead to analytic estimates on the V norms of the coefficients (which are functions of x) in a so-called “generalized polynomial chaos” (gpc) expansion of u. Convergence estimates of approximations of u by best N-term truncated V-valued polynomials in the variable y∈U are established. These estimates are of the form N^{−r}, where the rate of convergence r depends only on the decay of the random input expansion. It is shown that r exceeds the benchmark rate 1/2 afforded by Monte Carlo simulations with N “samples” (i.e., deterministic solves) under mild smoothness conditions on the random diffusion coefficients. A class of fully discrete approximations is obtained by Galerkin approximation from a hierarchic family $\{V_{l}\}_{l=0}^{\infty}\subset V$ of finite element spaces in D of the coefficients in the N-term truncated gpc expansions of u(x,y). In contrast to previous works, the level l of spatial resolution is adapted to the gpc coefficient. New regularity theorems describing the smoothness properties of the solution u as a map from y∈U=(−1,1)∞ to a smoothness space W⊂V are established, leading to analytic estimates on the W norms of the gpc coefficients and on their space discretization error.
The space W coincides with $H^{2}(D)\cap H^{1}_{0}(D)$ in the case where D is a smooth or convex domain. Our analysis shows that in realistic settings a convergence rate $N_{\mathrm{dof}}^{-s}$ in terms of the total number of degrees of freedom $N_{\mathrm{dof}}$ can be obtained. Here the rate s is determined by both the best N-term approximation rate r and the approximation order of the space discretization in D.

322 citations


Journal ArticleDOI
TL;DR: Two stability results for Lipschitz functions on triangulable, compact metric spaces are proved and applications of both to problems in systems biology are considered.
Abstract: We prove two stability results for Lipschitz functions on triangulable, compact metric spaces and consider applications of both to problems in systems biology. Given two functions, the first result is formulated in terms of the Wasserstein distance between their persistence diagrams and the second in terms of their total persistence.
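Total persistence and persistence diagrams can be made concrete in the simplest setting. As a toy illustration only (not the construction of the paper), the following sketch computes the 0-dimensional persistence pairs of the sublevel-set filtration of a function sampled on a line, using a union–find sweep with the elder rule; the sample values are invented:

```python
def persistence_pairs(f):
    """0-dim persistence pairs of the sublevel-set filtration of f on a path graph.

    Returns (pairs, essential): the finite (birth, death) pairs and the birth
    value of the essential class (the global minimum, which never dies).
    """
    n = len(f)
    parent = list(range(n))
    birth = [None] * n              # birth value of the component at each root
    alive = [False] * n

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    pairs = []
    for i in sorted(range(n), key=lambda j: f[j]):   # sweep by function value
        roots = {find(j) for j in (i - 1, i + 1) if 0 <= j < n and alive[j]}
        alive[i] = True
        if not roots:
            birth[i] = f[i]          # a new local minimum creates a component
        elif len(roots) == 1:
            parent[i] = roots.pop()  # extend an existing component
        else:
            r1, r2 = roots           # elder rule: the younger component dies here
            young, old = (r1, r2) if birth[r1] > birth[r2] else (r2, r1)
            pairs.append((birth[young], f[i]))
            parent[young] = old
            parent[i] = old
    return pairs, min(f)

pairs, essential = persistence_pairs([0.0, 2.0, 1.0, 3.0])
total_persistence = sum(d - b for b, d in pairs)   # = 1.0
```

Nudging each sample by at most ε moves each birth and death value by at most ε, which is the kind of stability (with respect to Wasserstein and bottleneck distances between diagrams) that the paper quantifies.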

316 citations


Journal ArticleDOI
TL;DR: In this article, the ranks and border ranks of symmetric tensors are studied using geometric methods; improved lower bounds for the rank of a symmetric tensor (i.e., a homogeneous polynomial) are obtained by considering the singularities of the hypersurface defined by the polynomial.
Abstract: Motivated by questions arising in signal processing, computational complexity, and other areas, we study the ranks and border ranks of symmetric tensors using geometric methods. We provide improved lower bounds for the rank of a symmetric tensor (i.e., a homogeneous polynomial) obtained by considering the singularities of the hypersurface defined by the polynomial. We obtain normal forms for polynomials of border rank up to five, and compute or bound the ranks of several classes of polynomials, including monomials, the determinant, and the permanent.

244 citations


Journal ArticleDOI
TL;DR: This paper develops the first known deterministic sublinear-time sparse Fourier transform algorithm which is guaranteed to produce accurate results; it also implies a simpler optimized version of the deterministic compressed sensing method previously developed by Iwen (SODA’08).
Abstract: We study the problem of estimating the best k term Fourier representation for a given frequency sparse signal (i.e., vector) A of length N≫k. More explicitly, we investigate how to deterministically identify k of the largest magnitude frequencies of $\hat{\mathbf{A}}$, and estimate their coefficients, in polynomial(k,log N) time. Randomized sublinear-time algorithms which have a small (controllable) probability of failure for each processed signal exist for solving this problem (Gilbert et al. in ACM STOC, pp. 152–161, 2002; Proceedings of SPIE Wavelets XI, 2005). In this paper we develop the first known deterministic sublinear-time sparse Fourier Transform algorithm which is guaranteed to produce accurate results. As an added bonus, a simple relaxation of our deterministic Fourier result leads to a new Monte Carlo Fourier algorithm with similar runtime/sampling bounds to the current best randomized Fourier method (Gilbert et al. in Proceedings of SPIE Wavelets XI, 2005). Finally, the Fourier algorithm we develop here implies a simpler optimized version of the deterministic compressed sensing method previously developed in (Iwen in Proc. of ACM-SIAM Symposium on Discrete Algorithms (SODA’08), 2008).
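For orientation, the object being approximated — the best k-term Fourier representation — can be computed non-sublinearly with a full FFT; the signal below and its frequencies are invented for illustration, and this O(N log N) computation is precisely what the paper's poly(k, log N)-time algorithm avoids:

```python
import numpy as np

N, k = 1024, 4
t = np.arange(N)
# A k-sparse test signal: 4 dominant frequencies with known coefficients.
true_freqs = {10: 5.0, 50: 3.0, 200: 2.0, 400: 1.0}
A = sum(c * np.exp(2j * np.pi * f * t / N) for f, c in true_freqs.items())

# Best k-term representation via a full FFT: identify the k largest-magnitude
# frequencies of A-hat and read off their coefficients.
Ahat = np.fft.fft(A) / N
top_k = np.argsort(np.abs(Ahat))[-k:]
approx = {int(f): Ahat[f] for f in top_k}
```

Here the FFT touches all N samples; the paper's contribution is recovering the same k frequencies and coefficients deterministically from far fewer samples.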

170 citations


Journal ArticleDOI
TL;DR: It is shown that in the case of Gaussian measurements, ℓ1 minimization recovers the signal well from inaccurate measurements, thus improving the result from Candes et al. (Commun. Pure Appl. Math. 59:1207–1223, 2006).
Abstract: In compressed sensing, we seek to gain information about a vector x∈ℝ^N from d ≪ N nonadaptive linear measurements. Candes, Donoho, Tao et al. (see, e.g., Candes, Proc. Intl. Congress Math., Madrid, 2006; Candes et al., Commun. Pure Appl. Math. 59:1207–1223, 2006; Donoho, IEEE Trans. Inf. Theory 52:1289–1306, 2006) proposed to seek a good approximation to x via ℓ1 minimization. In this paper, we show that in the case of Gaussian measurements, ℓ1 minimization recovers the signal well from inaccurate measurements, thus improving the result from Candes et al. (Commun. Pure Appl. Math. 59:1207–1223, 2006). We also show that this numerically friendly algorithm (see Candes et al., Commun. Pure Appl. Math. 59:1207–1223, 2006) with overwhelming probability recovers the signal with accuracy comparable to that of the best k-term approximation in the Euclidean norm when k∼d/ln N.

135 citations


Journal Article
TL;DR: This paper defines a new transport metric that interpolates between the quadratic Wasserstein and the Fisher–Rao metrics and generalizes optimal transport to measures with different masses and proposes a numerical scheme making use of first-order proximal splitting methods.
Abstract: This paper defines a new transport metric over the space of non-negative measures. This metric interpolates between the quadratic Wasserstein and the Fisher–Rao metrics and generalizes optimal transport to measures with different masses. It is defined as a generalization of the dynamical formulation of optimal transport of Benamou and Brenier, by introducing a source term in the continuity equation. The influence of this source term is measured using the Fisher–Rao metric, and is averaged with the transportation term. This gives rise to a convex variational problem defining our metric. Our first contribution is a proof of the existence of geodesics (i.e., solutions to this variational problem). We then show that (generalized) optimal transport and Fisher–Rao metrics are obtained as limiting cases of our metric. Our last theoretical contribution is a proof that geodesics between mixtures of sufficiently close Diracs are made of translating mixtures of Diracs. Lastly, we propose a numerical scheme making use of first-order proximal splitting methods and we show an application of this new distance to image interpolation.

110 citations


Journal ArticleDOI
TL;DR: This article emphasizes algebraic structures (groups and Hopf algebras of trees) that have recently received much attention also in the non-numerical literature and presents interesting relationships among them.
Abstract: B-series are a fundamental tool in practical and theoretical aspects of numerical integrators for ordinary differential equations. A composition law for B-series permits an elegant derivation of order conditions, and a substitution law gives much insight into modified differential equations of backward error analysis. These two laws give rise to algebraic structures (groups and Hopf algebras of trees) that have recently received much attention also in the non-numerical literature. This article emphasizes these algebraic structures and presents interesting relationships among them.

106 citations


Journal ArticleDOI
TL;DR: This work describes the linear subspaces of energy-preserving and Hamiltonian modified vector fields which admit a B-series, their finite-dimensional truncations, and their annihilators.
Abstract: B-series are a powerful tool in the analysis of Runge–Kutta numerical integrators and some of their generalizations (“B-series methods”). A general goal is to understand what structure-preservation can be achieved with B-series and to design practical numerical methods that preserve such structures. B-series of Hamiltonian vector fields have a rich algebraic structure that arises naturally in the study of symplectic or energy-preserving B-series methods and is developed in detail here. We study the linear subspaces of energy-preserving and Hamiltonian modified vector fields which admit a B-series, their finite-dimensional truncations, and their annihilators. We characterize the manifolds of B-series that are conjugate to Hamiltonian and conjugate to energy-preserving and describe the relationships of all these spaces.

71 citations


Journal ArticleDOI
TL;DR: A recursive definition of the neural response and associated derived kernel is given that can be used in a variety of application domains such as classification of images, strings of text and genomics data.
Abstract: We propose a natural image representation, the neural response, motivated by the neuroscience of the visual cortex. The inner product defined by the neural response leads to a similarity measure between functions which we call the derived kernel. Based on a hierarchical architecture, we give a recursive definition of the neural response and associated derived kernel. The derived kernel can be used in a variety of application domains such as classification of images, strings of text and genomics data.

66 citations


Journal ArticleDOI
TL;DR: This paper presents a parameter choice strategy, called the balancing principle, to choose the regularization parameter without knowledge of the regularity of the target function, which adaptively achieves the best error rate.
Abstract: The regularization parameter choice is a fundamental problem in Learning Theory since the performance of most supervised algorithms crucially depends on the choice of one or more of such parameters. In particular, a main theoretical issue concerns the amount of prior knowledge needed to choose the regularization parameter in order to obtain good learning rates. In this paper we present a parameter choice strategy, called the balancing principle, to choose the regularization parameter without knowledge of the regularity of the target function. Such a choice adaptively achieves the best error rate. Our main result applies to regularization algorithms in reproducing kernel Hilbert space with the square loss, though we also study how a similar principle can be used in other situations. As a straightforward corollary we can immediately derive adaptive parameter choices for various kernel methods recently studied. Numerical experiments with the proposed parameter choice rules are also presented.

60 citations


Journal ArticleDOI
TL;DR: B-series is used to analyze multiscale numerical integrators that are able to approximate not only the simplest, lowest-order averaged equation but also its high-order counterparts.
Abstract: We show how B-series may be used to derive in a systematic way the analytical expressions of the high-order stroboscopic averaged equations that approximate the slow dynamics of highly oscillatory systems. For first-order systems we give explicitly the form of the averaged systems with $\mathcal{O}(\epsilon^{j})$ errors, j=1,2,3 (2πε denotes the period of the fast oscillations). For second-order systems with large $\mathcal{O}(\epsilon^{-1})$ forces, we give the explicit form of the averaged systems with $\mathcal{O}(\epsilon^{j})$ errors, j=1,2. A variant of the Fermi–Pasta–Ulam model and the inverted Kapitsa pendulum are used as illustrations. For the former it is shown that our approach establishes the adiabatic invariance of the oscillatory energy. Finally, we use B-series to analyze multiscale numerical integrators that implement the method of averaging. We construct integrators that are able to approximate not only the simplest, lowest-order averaged equation but also its high-order counterparts.

Journal ArticleDOI
TL;DR: Cubic Schrödinger equations with small initial data (or small nonlinearity) and their spectral semi-discretizations in space are analyzed, and it is shown that along both the solution of the nonlinear Schrödinger equation and the solution of the semi-discretized equation, the actions of the linear Schrödinger equation are approximately conserved over long times.
Abstract: Cubic Schrödinger equations with small initial data (or small nonlinearity) and their spectral semi-discretizations in space are analyzed. It is shown that along both the solution of the nonlinear Schrödinger equation and the solution of the semi-discretized equation, the actions of the linear Schrödinger equation are approximately conserved over long times. This also allows us to show approximate conservation of energy and momentum along the solution of the semi-discretized equation over long times. These results are obtained by analyzing a modulated Fourier expansion in time. They are valid in arbitrary spatial dimension.

Journal ArticleDOI
TL;DR: In this paper, the conservation properties of a full discretization via a spectral semi-discretization in space and a Lie–Trotter splitting in time for cubic Schrödinger equations with small initial data (or small nonlinearity) are studied.
Abstract: Conservation properties of a full discretization via a spectral semi-discretization in space and a Lie–Trotter splitting in time for cubic Schrödinger equations with small initial data (or small nonlinearity) are studied. The approximate conservation of the actions of the linear Schrödinger equation, energy, and momentum over long times is shown using modulated Fourier expansions. The results are valid in arbitrary spatial dimension.

Journal ArticleDOI
TL;DR: By establishing an iterative thresholding algorithm for discrete free-discontinuity problems, this paper provides new insights on properties of minimizing solutions thereof.
Abstract: Free-discontinuity problems describe situations where the solution of interest is defined by a function and a lower-dimensional set consisting of the discontinuities of the function. Hence, the derivative of the solution is assumed to be a ‘small’ function almost everywhere except on sets where it concentrates as a singular measure. This is the case, for instance, in crack detection from fracture mechanics or in certain digital image segmentation problems. If we discretize such situations for numerical purposes, the free-discontinuity problem in the discrete setting can be re-formulated as that of finding a derivative vector with small components at all but a few entries that exceed a certain threshold. This problem is similar to those encountered in the field of ‘sparse recovery’, where vectors with a small number of dominating components in absolute value are recovered from a few given linear measurements via the minimization of related energy functionals. Several iterative thresholding algorithms that intertwine gradient-type iterations with thresholding steps have been designed to recover sparse solutions in this setting. It is natural to wonder if and/or how such algorithms can be used towards solving discrete free-discontinuity problems. The current paper explores this connection, and, by establishing an iterative thresholding algorithm for discrete free-discontinuity problems, provides new insights on properties of minimizing solutions thereof.
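As a hedged illustration of the "iterative thresholding" idea referenced above — in the classical sparse-recovery setting rather than the paper's free-discontinuity functional — the following sketch runs iterative soft thresholding (ISTA) on an invented Gaussian measurement matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 40, 100
A = rng.standard_normal((m, n)) / np.sqrt(m)     # illustrative measurement matrix
x_true = np.zeros(n)
x_true[[5, 37, 80]] = [2.0, -1.5, 3.0]           # a few dominating components
y = A @ x_true                                   # given linear measurements

def soft(v, t):
    """Soft-thresholding: shrink every entry toward zero by t."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

lam = 0.01                                       # weight of the sparsity penalty
L = np.linalg.norm(A, 2) ** 2                    # step size from the spectral norm
x = np.zeros(n)
for _ in range(3000):
    # gradient-type iteration intertwined with a thresholding step
    x = soft(x + A.T @ (y - A @ x) / L, lam / L)
```

Each pass takes one gradient step on the data-fidelity term ‖Ax − y‖² and then thresholds, so iterates keep a small number of dominating components — the same intertwining the paper transfers to discrete free-discontinuity problems.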

Journal ArticleDOI
TL;DR: A loss of asymptotic order is observed, but in the most relevant cases the overall asymptotic order remains higher than that of a truncated asymptotic expansion at similar computational effort.
Abstract: We propose a variant of the numerical method of steepest descent for oscillatory integrals by using a low-cost explicit polynomial approximation of the paths of steepest descent. A loss of asymptotic order is observed, but in the most relevant cases the overall asymptotic order remains higher than a truncated asymptotic expansion at similar computational effort. Theoretical results based on number theory underpinning the mechanisms behind this effect are presented.

Journal ArticleDOI
TL;DR: This paper formulates and proves a real analogue of Toda’s theorem and relates the computational hardness of two extremely well-studied problems in algorithmic semi-algebraic geometry: deciding sentences in the first-order theory of the reals with a constant number of quantifier alternations, and computing Betti numbers of semi-algebraic sets.
Abstract: Toda (in SIAM J. Comput. 20(5):865–877, 1991) proved in 1989 that the (discrete) polynomial time hierarchy, PH, is contained in the class P^{#P}, namely the class of languages that can be decided by a Turing machine in polynomial time given access to an oracle with the power to compute a function in the counting complexity class #P. This result, which illustrates the power of counting, is considered to be a seminal result in computational complexity theory. An analogous result in the complexity theory over the reals (in the sense of Blum–Shub–Smale real machines in Bull. Am. Math. Soc. (N.S.) 21(1):1–46, 1989) has been missing so far. In this paper we formulate and prove a real analogue of Toda’s theorem. Unlike Toda’s proof in the discrete case, which relied on sophisticated combinatorial arguments, our proof is topological in nature. As a consequence of our techniques, we are also able to relate the computational hardness of two extremely well-studied problems in algorithmic semi-algebraic geometry: the problem of deciding sentences in the first-order theory of the reals with a constant number of quantifier alternations, and that of computing Betti numbers of semi-algebraic sets. We obtain a polynomial time reduction of the compact version of the first problem to the second. This latter result may be of independent interest to researchers in algorithmic semi-algebraic geometry.

Journal ArticleDOI
TL;DR: A new method for computing the Chow rings of flag varieties and to construct the integral cohomologies of Lie groups in terms of Schubert classes is introduced.
Abstract: Based on the basis theorem of Bruhat–Chevalley (in Algebraic Groups and Their Generalizations: Classical Methods, Proceedings of Symposia in Pure Mathematics, vol. 56 (part 1), pp. 1–26, AMS, Providence, 1994) and the formula for multiplying Schubert classes obtained in (Duan, Invent. Math. 159:407–436, 2005) and programmed in (Duan and Zhao, Int. J. Algebra Comput. 16:1197–1210, 2006), we introduce a new method for computing the Chow rings of flag varieties (resp. the integral cohomology of homogeneous spaces). The method and results of this paper have been extended in (Duan and Zhao, arXiv:math.AT/0801.2444 and arXiv:math.AT/0711.2541) to obtain the integral cohomology rings of all complete flag manifolds, and to construct the integral cohomologies of Lie groups in terms of Schubert classes.

Journal ArticleDOI
TL;DR: In this article, the boundary measures of compact subsets of the d-dimensional Euclidean space, which are closely related to Federer's curvature measures, are studied.
Abstract: We study the boundary measures of compact subsets of the d-dimensional Euclidean space, which are closely related to Federer’s curvature measures. We show that they can be computed efficiently for point clouds and suggest that these measures can be used for geometric inference. The main contribution of this work is the proof of a quantitative stability theorem for boundary measures using tools of convex analysis and geometric measure theory. As a corollary we obtain a stability result for Federer’s curvature measures of a compact set, showing that they can be reliably estimated from point-cloud approximations.

Journal ArticleDOI
TL;DR: The convergence of a time discretisation with variable time steps is shown for a class of doubly nonlinear evolution equations of second order; this also proves the existence of a weak solution.
Abstract: The convergence of a time discretisation with variable time steps is shown for a class of doubly nonlinear evolution equations of second order. This also proves existence of a weak solution. The operator acting on the zero-order term is assumed to be the sum of a linear, bounded, symmetric, strongly positive operator and a nonlinear operator that fulfils a certain growth and a Hölder-type continuity condition. The operator acting on the first-order time derivative is a nonlinear hemicontinuous operator that fulfils a certain growth condition and is (up to some shift) monotone and coercive.

Journal ArticleDOI
TL;DR: A new algorithm for obtaining rigorous results concerning the existence of chaotic invariant sets of dynamical systems generated by non-autonomous, time-periodic differential equations is presented, based on a new theoretical approach to the computation of the homology of the Poincaré map.
Abstract: A new algorithm for obtaining rigorous results concerning the existence of chaotic invariant sets of dynamical systems generated by non-autonomous, time-periodic differential equations is presented. Unlike other algorithms, the one presented here does not require the numerical integration of the solutions and, as a consequence, is insensitive to the rapid error growth that occurs in long integrations. The result is based on a new theoretical approach to the computation of the homology of the Poincaré map. A concrete numerical example concerning a time-periodic differential equation in the complex plane is provided.

Journal ArticleDOI
TL;DR: This work shows how to approximate the feasible region of structured convex optimization problems by a family of convex sets with explicitly given and efficient (if the accuracy of the approximation is moderate) self-concordant barriers.
Abstract: We show how to approximate the feasible region of structured convex optimization problems by a family of convex sets with explicitly given and efficient (if the accuracy of the approximation is moderate) self-concordant barriers. This approach extends the reach of the modern theory of interior-point methods, and lays the foundation for new ways to treat structured convex optimization problems with a very large number of constraints. Moreover, our approach provides a strong connection from the theory of self-concordant barriers to the combinatorial optimization literature on solving packing and covering problems.

Journal ArticleDOI
TL;DR: In this article, a non-standard wave atom form was proposed for the boundary integral equation of acoustic scattering in two dimensions, and the authors showed that this form provides a compression of the acoustic single-and double-layer potentials with wave number k as O(k)-by-O(k) matrices with C e δ k 1+δ non-negligible entries, with δ>0 arbitrarily small, and e the desired accuracy.
Abstract: This paper presents a numerical compression strategy for the boundary integral equation of acoustic scattering in two dimensions. These equations have oscillatory kernels that we represent in a basis of wave atoms, and compress by thresholding the small coefficients to zero. This phenomenon was perhaps first observed in 1993 by Bradie, Coifman, and Grossman, in the context of local Fourier bases (Bradie et al. in Appl. Comput. Harmon. Anal. 1:94–99, 1993). Their results have since then been extended in various ways. The purpose of this paper is to bridge a theoretical gap and prove that a well-chosen fixed expansion, the non-standard wave atom form, provides a compression of the acoustic single- and double-layer potentials with wave number k as O(k)-by-O(k) matrices with $C_{\varepsilon,\delta}\,k^{1+\delta}$ non-negligible entries, with δ>0 arbitrarily small, and ε the desired accuracy. The argument assumes smooth, separated, and not necessarily convex scatterers in two dimensions. The essential features of wave atoms that allow this result to be written as a theorem are a sharp time-frequency localization that wavelet packets do not obey, and a parabolic scaling (wavelength of the wave packet) ∼ (essential diameter)². Numerical experiments support the estimate and show that this wave atom representation may be of interest for applications where the same scattering problem needs to be solved for many boundary conditions, for example, the computation of radar cross sections.

Journal ArticleDOI
TL;DR: This paper proves that some other higher moments are also finite, and shows that the variance is polynomial in the size of the input.
Abstract: In the forthcoming paper of Beltran and Pardo, the average complexity of linear homotopy methods to solve polynomial equations with random initial input (in a sense to be described below) was proven to be finite, and even polynomial in the size of the input. In this paper, we prove that some other higher moments are also finite. In particular, we show that the variance is polynomial in the size of the input.

Journal ArticleDOI
TL;DR: It is shown that there exist matrices for which the rate of convergence is strictly quadratic: there is a neighborhood of the matrix $T_{\mathcal{X}}$ that is invariant under Wilkinson’s shift strategy, and within it the union of the quadratically convergent sequences is a set of Hausdorff dimension 1.
Abstract: One of the most widely used methods for eigenvalue computation is the QR iteration with Wilkinson’s shift: here, the shift s is the eigenvalue of the bottom 2×2 principal minor closest to the corner entry. It has been a long-standing question whether the rate of convergence of the algorithm is always cubic. In contrast, we show that there exist matrices for which the rate of convergence is strictly quadratic. More precisely, let $T_{\mathcal{X}}$ be the 3×3 matrix having only two nonzero entries, $(T_{\mathcal{X}})_{12}=(T_{\mathcal{X}})_{21}=1$, and let ${\mathcal{T}}_{\varLambda}$ be the set of real, symmetric tridiagonal matrices with the same spectrum as $T_{\mathcal{X}}$. There exists a neighborhood $\boldsymbol{\mathcal{U}}\subset{\mathcal{T}}_{\varLambda}$ of $T_{\mathcal{X}}$ which is invariant under Wilkinson’s shift strategy with the following properties. For $T_{0}\in\boldsymbol{\mathcal{U}}$, the sequence of iterates $(T_{k})$ exhibits either strictly quadratic or strictly cubic convergence to zero of the entry $(T_{k})_{23}$. In fact, quadratic convergence occurs exactly when $\lim T_{k}=T_{\mathcal{X}}$. Let $\boldsymbol{\mathcal{X}}$ be the union of such quadratically convergent sequences $(T_{k})$: the set $\boldsymbol{\mathcal{X}}$ has Hausdorff dimension 1 and is a union of disjoint arcs $\boldsymbol{\mathcal{X}}^{\sigma}$ meeting at $T_{\mathcal{X}}$, where σ ranges over a Cantor set.
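For context, one step of the QR iteration with Wilkinson's shift can be sketched as follows; the 3×3 symmetric tridiagonal test matrix is an arbitrary illustration, not the special matrix $T_{\mathcal{X}}$ analyzed in the paper:

```python
import numpy as np

def wilkinson_shift(T):
    # Eigenvalue of the bottom 2x2 principal minor closest to the corner entry.
    a, b, c = T[-2, -2], T[-1, -2], T[-1, -1]
    d = (a - c) / 2.0
    if d == 0.0:
        return c - abs(b)
    return c - np.sign(d) * b * b / (abs(d) + np.hypot(d, b))

T = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 0.5],
              [0.0, 0.5, 4.0]])
spectrum = np.linalg.eigvalsh(T)    # the spectrum is preserved by each QR step

for _ in range(8):
    s = wilkinson_shift(T)
    Q, R = np.linalg.qr(T - s * np.eye(3))
    T = R @ Q + s * np.eye(3)       # similarity transform: T <- Q^T T Q
```

The entry (T_k)_{23} is driven to zero extremely fast; the paper's point is that this convergence, long believed always cubic, can be strictly quadratic on an exceptional set of initial matrices.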


Journal ArticleDOI
TL;DR: This work identifies a family of toric patches with trapezoidal shape, each of which has linear precision, and classifies the homogeneous polynomials in three variables whose toric polar linear system defines a Cremona transformation.
Abstract: We classify the homogeneous polynomials in three variables whose toric polar linear system defines a Cremona transformation. This classification includes, as a proper subset, the classification of toric surface patches from geometric modeling which have linear precision. Besides the well-known tensor product patches and Bézier triangles, we identify a family of toric patches with trapezoidal shape, each of which has linear precision. Furthermore, Bézier triangles and tensor product patches are special cases of trapezoidal patches.

Journal ArticleDOI
TL;DR: An efficient strategy is proposed, which makes use of parallel computations on multiple machines and refines the estimate gradually, and it is proved that under certain assumptions the result of computations converges to the exact result as the precision of calculations increases.
Abstract: An automated general purpose method is introduced for computing a rigorous estimate of a bounded region in ℝ^n whose points satisfy a given property. The method is based on calculations conducted in interval arithmetic and the constructed approximation is built of rectangular boxes of variable sizes. An efficient strategy is proposed, which makes use of parallel computations on multiple machines and refines the estimate gradually. It is proved that under certain assumptions the result of computations converges to the exact result as the precision of calculations increases. The time complexity of the algorithm is analyzed, and the effectiveness of this approach is illustrated by constructing a lower bound on the set of parameters for which an overcompensatory nonlinear Leslie population model exhibits more than one attractor, which is of interest from the biological point of view. This paper is accompanied by efficient and flexible software written in C++ whose source code is freely available at http://www.pawelpilarczyk.com/parallel/ .
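As a minimal sketch of the underlying idea — interval evaluation over rectangular boxes of variable size, refined gradually until each box is decided — the following toy code builds a rigorous inner cover of the hypothetical region {x² + y² < 1}; the paper's method (and its accompanying C++ software) is far more general and runs in parallel:

```python
from itertools import product

def f_range(box):
    """Exact range of f(x, y) = x^2 + y^2 over a box (a sum of per-axis ranges)."""
    lo = sum(0.0 if a <= 0.0 <= b else min(a * a, b * b) for a, b in box)
    hi = sum(max(a * a, b * b) for a, b in box)
    return lo, hi

def inner_cover(box, depth):
    """Boxes of variable size rigorously contained in {f < 1}."""
    lo, hi = f_range(box)
    if hi < 1.0:
        return [box]        # provably inside the region: keep the whole box
    if lo >= 1.0 or depth == 0:
        return []           # provably outside, or refinement budget exhausted
    (xa, xb), (ya, yb) = box
    xm, ym = (xa + xb) / 2.0, (ya + yb) / 2.0
    halves_x, halves_y = [(xa, xm), (xm, xb)], [(ya, ym), (ym, yb)]
    return [b for bx, by in product(halves_x, halves_y)
            for b in inner_cover((bx, by), depth - 1)]

boxes = inner_cover(((-1.5, 1.5), (-1.5, 1.5)), depth=7)
area = sum((b - a) * (d - c) for (a, b), (c, d) in boxes)   # rigorous lower bound on pi
```

Increasing `depth` refines the undecided boxes near the boundary, so the covered area converges to the true area — the same convergence-with-precision guarantee the paper proves in its general setting.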
