Author

Hongxia Yin

Other affiliations: Chinese Academy of Sciences
Bio: Hongxia Yin is an academic researcher from Minnesota State University, Mankato. The author has contributed to research in topics: Smoothing & Newton's method. The author has an h-index of 6 and has co-authored 18 publications receiving 295 citations. Previous affiliations of Hongxia Yin include Chinese Academy of Sciences.

Papers
Journal ArticleDOI
TL;DR: This paper shows that the sufficient descent condition is actually not needed in the convergence analyses of conjugate gradient methods; convergence results for Fletcher--Reeves- and Polak--Ribiere-type methods are established in the absence of this condition.
Abstract: Recently, important contributions on convergence studies of conjugate gradient methods were made by Gilbert and Nocedal [SIAM J. Optim., 2 (1992), pp. 21--42]. They introduce a "sufficient descent condition" to establish global convergence results. Although this condition is not needed in the convergence analyses of Newton and quasi-Newton methods, Gilbert and Nocedal hint that the sufficient descent condition, which was enforced by their two-stage line search algorithm, may be crucial for ensuring the global convergence of conjugate gradient methods. This paper shows that the sufficient descent condition is actually not needed in the convergence analyses of conjugate gradient methods. Consequently, convergence results on the Fletcher--Reeves- and Polak--Ribiere-type methods are established in the absence of the sufficient descent condition. To show the differences between the convergence properties of Fletcher--Reeves- and Polak--Ribiere-type methods, two examples are constructed, showing that neither the boundedness of the level set nor the restriction $\beta_k \geq 0$ can be relaxed for the Polak--Ribiere-type methods.
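
For reference, the standard quantities behind this discussion (classical background formulas, not quoted from the paper): with gradient $g_k$ and search direction $d_k = -g_k + \beta_k d_{k-1}$, the two updates are

$\beta_k^{FR} = \|g_k\|^2 / \|g_{k-1}\|^2$ (Fletcher--Reeves) and $\beta_k^{PR} = g_k^{\top}(g_k - g_{k-1}) / \|g_{k-1}\|^2$ (Polak--Ribiere),

and the sufficient descent condition of Gilbert and Nocedal requires $g_k^{\top} d_k \le -c\,\|g_k\|^2$ for some fixed constant $c > 0$.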

212 citations

Journal ArticleDOI
TL;DR: The numerical results show that the active-set strategy yields modified Armijo gradient or Gauss--Newton-like methods that require less than a quarter of the gradient evaluations needed when these methods are used without the strategy.
Abstract: We present a new active-set strategy which can be used in conjunction with exponential (entropic) smoothing for solving large-scale minimax problems arising from the discretization of semi-infinite minimax problems. The main effect of the active-set strategy is to dramatically reduce the number of gradient calculations needed in the optimization. Discretization of multidimensional domains gives rise to minimax problems with thousands of component functions. We present an application to minimizing the sum of squares of the Lagrange polynomials to find good points for polynomial interpolation on the unit sphere in $\mathbb{R}^3$. Our numerical results show that the active-set strategy results in modified Armijo gradient or Gauss--Newton-like methods requiring less than a quarter of the gradients, as compared to the use of these methods without our active-set strategy. Finally, we show how this strategy can be incorporated in an algorithm for solving semi-infinite minimax problems.
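
As a rough illustration of the smoothing-plus-active-set idea (the function name entropic_smoothing_grad, the weight tolerance eps, and the lazy gradient callback are illustrative choices, not the authors' code): the entropic smoothing replaces $\max_j f_j(x)$ by $\frac{1}{p}\ln\sum_j e^{p f_j(x)}$, whose gradient is a softmax-weighted sum of component gradients, and the active-set strategy simply skips the components whose weights are negligible.

import numpy as np

def entropic_smoothing_grad(fvals, grad_of, p, eps=1e-8):
    """Gradient of the exponential (entropic) smoothing of max_j f_j(x).

    fvals   : (m,) array of component values f_j(x)
    grad_of : callable j -> gradient of f_j at x (called only when needed)
    p       : smoothing parameter; larger p means a tighter approximation
    eps     : components with softmax weight below eps are treated as inactive
    """
    fmax = fvals.max()
    w = np.exp(p * (fvals - fmax))      # shift by fmax for numerical stability
    w /= w.sum()                        # softmax weights, sum to 1
    active = np.flatnonzero(w > eps)    # near-maximal components only
    g = sum(w[j] * grad_of(j) for j in active)
    return g, active

In this sketch only the gradients of the (typically few) near-active components are ever computed, which is the source of the savings reported above.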

31 citations

Journal ArticleDOI
TL;DR: This paper proves local quadratic convergence of the Newton-type method of Irvine, Marin, and Smith for shape-preserving interpolation by viewing it as a semismooth Newton method, and presents a modification of the method that has global quadratic convergence.
Abstract: In 1986, Irvine, Marin, and Smith proposed a Newton-type method for shape-preserving interpolation and, based on numerical experience, conjectured its quadratic convergence. In this paper, we prove local quadratic convergence of their method by viewing it as a semismooth Newton method. We also present a modification of the method which has global quadratic convergence. Numerical examples illustrate the results.
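
For background, the semismooth Newton iteration invoked here is the standard one (stated generically, not as the paper's specific equations): to solve $F(x) = 0$ with $F$ semismooth, one iterates

$x_{k+1} = x_k - V_k^{-1} F(x_k)$, with $V_k \in \partial F(x_k)$ an element of the Clarke generalized Jacobian;

if $F$ is strongly semismooth and every element of $\partial F(x^*)$ is nonsingular, the iteration converges locally quadratically to $x^*$, which is the property exploited in the convergence proof described above.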

20 citations

Journal ArticleDOI
TL;DR: In this paper, the authors considered the least $l_2$-norm solution for a possibly inconsistent system of nonlinear inequalities, where the objective function of the problem is only first-order continuously differentiable.
Abstract: In this paper, we consider the least $l_2$-norm solution for a possibly inconsistent system of nonlinear inequalities. The objective function of the problem is only first-order continuously differentiable. By introducing a new smoothing function, the problem is approximated by a family of parameterized optimization problems with twice continuously differentiable objective functions. A Levenberg–Marquardt algorithm is then proposed to solve the parameterized smooth optimization problems. It is proved that the algorithm either terminates finitely at a solution of the original inequality problem or generates an infinite sequence; in the latter case, the infinite sequence converges to a least $l_2$-norm solution of the inequality problem. Local quadratic convergence of the algorithm is established under suitable conditions.
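
A sketch of the smoothing idea, reading the problem as minimizing the constraint violation of $g(x) \le 0$ with each $g_i$ twice continuously differentiable, so that the nonsmoothness comes only from the plus function; the CHKS function below is shown purely as an illustrative smoothing and may differ from the one introduced in the paper:

$\min_x \tfrac{1}{2}\sum_i [g_i(x)]_+^2 \;\approx\; \min_x \tfrac{1}{2}\sum_i \phi_\mu(g_i(x))^2$, where $\phi_\mu(t) = \tfrac{1}{2}\big(t + \sqrt{t^2 + 4\mu^2}\big)$.

For $\mu > 0$ the smoothed objective is twice continuously differentiable and can be handled by a Levenberg--Marquardt iteration, while driving $\mu$ to zero recovers the original problem, since $\phi_\mu(t) \to [t]_+$.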

17 citations

Journal ArticleDOI
TL;DR: It is shown that f is a strongly semismooth function if g is continuous and B is affine with respect to t and strongly semismooth with respect to x, i.e., B(x, t) = u(x)t + v(x), where u and v are two strongly semismooth functions on $\mathbb{R}^n$.
Abstract: As shown by an example, the integral function $f : \mathbb{R}^n \to \mathbb{R}$, defined by $f(x) = \int_a^b [B(x, t)]_+\, g(t)\, dt$, may not be a strongly semismooth function, even if $g(t) \equiv 1$ and $B$ is a quadratic polynomial with respect to $t$ and infinitely many times smooth with respect to $x$. We show that $f$ is a strongly semismooth function if $g$ is continuous and $B$ is affine with respect to $t$ and strongly semismooth with respect to $x$, i.e., $B(x, t) = u(x)t + v(x)$, where $u$ and $v$ are two strongly semismooth functions on $\mathbb{R}^n$. We also show that $f$ is not a piecewise smooth function if $u$ and $v$ are two linearly independent linear functions, $g$ is continuous and $g \not\equiv 0$ on $[a, b]$, and $n \ge 2$. We apply the first result to the edge convex minimum norm network interpolation problem, which is a two-dimensional interpolation problem.

16 citations


Cited by
Journal ArticleDOI
TL;DR: A new nonlinear conjugate gradient method and an associated implementation, based on an inexact line search, are proposed and analyzed; the line search relies on ``approximate Wolfe'' conditions, whose decrease criterion can be evaluated with greater precision in a neighborhood of a local minimum than the usual sufficient decrease criterion.
Abstract: A new nonlinear conjugate gradient method and an associated implementation, based on an inexact line search, are proposed and analyzed. With exact line search, our method reduces to a nonlinear version of the Hestenes--Stiefel conjugate gradient scheme. For any (inexact) line search, our scheme satisfies the descent condition ${\bf g}_k^{\sf T} {\bf d}_k \le -\frac{7}{8} \|{\bf g}_k\|^2$. Moreover, a global convergence result is established when the line search fulfills the Wolfe conditions. A new line search scheme is developed that is efficient and highly accurate. Efficiency is achieved by exploiting properties of linear interpolants in a neighborhood of a local minimizer. High accuracy is achieved by using a convergence criterion, which we call the ``approximate Wolfe'' conditions, obtained by replacing the sufficient decrease criterion in the Wolfe conditions with an approximation that can be evaluated with greater precision in a neighborhood of a local minimum than the usual sufficient decrease criterion. Numerical comparisons are given with both L-BFGS and conjugate gradient methods using the unconstrained optimization problems in the CUTE library.
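
For orientation, the ``approximate Wolfe'' conditions can be summarized as follows (standard formulation with $\phi(\alpha) = f(x_k + \alpha d_k)$ and parameters $0 < \delta < 1/2$, $\delta \le \sigma < 1$; a paraphrase rather than a quotation): the sufficient decrease test $\phi(\alpha) \le \phi(0) + \delta \alpha \phi'(0)$ is replaced by the derivative-only test

$\sigma\, \phi'(0) \le \phi'(\alpha_k) \le (2\delta - 1)\, \phi'(0)$,

which avoids subtracting nearly equal function values and can therefore be checked to much higher relative accuracy near a local minimizer.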

936 citations

01 Jan 2005
TL;DR: This article reviews the development of different versions of nonlinear conjugate gradient methods, with special attention given to their global convergence properties.
Abstract: This paper reviews the development of different versions of nonlinear conjugate gradient methods, with special attention given to global convergence properties.
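
To make the family concrete, here is a minimal generic nonlinear conjugate gradient loop (an illustrative sketch only: the backtracking Armijo line search and the restart safeguard are simplifications, not any particular method from the survey):

import numpy as np

def nonlinear_cg(f, grad, x0, beta_rule="PR+", tol=1e-6, max_iter=1000):
    """Generic nonlinear CG with a pluggable beta formula.

    beta_rule: "FR" (Fletcher-Reeves), "PR+" (Polak-Ribiere truncated at 0),
               or "HS" (Hestenes-Stiefel).
    """
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # crude backtracking Armijo search (a Wolfe search is normally used)
        alpha, fx = 1.0, f(x)
        while f(x + alpha * d) > fx + 1e-4 * alpha * g.dot(d):
            alpha *= 0.5
        x_new = x + alpha * d
        g_new = grad(x_new)
        y = g_new - g
        if beta_rule == "FR":
            beta = g_new.dot(g_new) / g.dot(g)
        elif beta_rule == "PR+":
            beta = max(g_new.dot(y) / g.dot(g), 0.0)
        else:  # "HS"
            beta = g_new.dot(y) / d.dot(y)
        d = -g_new + beta * d
        if g_new.dot(d) >= 0:   # safeguard: fall back to steepest descent
            d = -g_new
        x, g = x_new, g_new
    return x

Swapping the beta formula is all that distinguishes the classical variants at this level of detail; the convergence theory surveyed in the paper is largely about which formulas tolerate which line searches.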

775 citations

Journal ArticleDOI
TL;DR: A new conjugacy condition is proposed that accounts for an inexact line search but reduces to the old condition if the line search is exact; based on it, two nonlinear conjugate gradient methods are constructed.
Abstract: Conjugate gradient methods are a class of important methods for unconstrained optimization, especially when the dimension is large. This paper proposes a new conjugacy condition, which considers an inexact line search scheme but reduces to the old one if the line search is exact. Based on the new conjugacy condition, two nonlinear conjugate gradient methods are constructed. Convergence analysis for the two methods is provided. Our numerical results show that one of the methods is very efficient for the given test problems.
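
For context, the contrast can be written out in standard notation (with $s_{k-1} = x_k - x_{k-1}$, $y_{k-1} = g_k - g_{k-1}$, and a parameter $t \ge 0$; given here as background, not quoted from the paper):

classical conjugacy condition: $d_k^{\top} y_{k-1} = 0$; extended condition: $d_k^{\top} y_{k-1} = -t\, g_k^{\top} s_{k-1}$.

Under an exact line search $g_k^{\top} s_{k-1} = 0$, so the extended condition reduces to the classical one, which is the reduction mentioned in the abstract.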

353 citations