Book

Linear complementarity, linear and nonlinear programming

01 Jan 1988
About: The book was published on 1988-01-01 and is currently open access. It has received 1,012 citations to date. It focuses on the topics: Mixed complementarity problem & Complementarity theory.
Citations
Proceedings ArticleDOI
01 Aug 2018
TL;DR: A sparse coding dictionary-update algorithm based on the BB method is proposed: the two-point step size (BB) method of Barzilai and Borwein is combined with the Lagrange dual method to handle the step size adjustment.
Abstract: In recent years many dictionary learning algorithms have been proposed, among which the Lagrange dual method achieves good performance. It is a gradient-descent-based optimization algorithm, so the choice of step size must be addressed. In this paper we first introduce the Lagrange dual method thoroughly and then discuss the step size problem in detail. Next, a sparse coding dictionary-update algorithm based on the BB method is proposed: the two-point step size (BB) method proposed by Barzilai and Borwein is combined with the Lagrange dual method to adjust the step size automatically. Experimental results show that the proposed algorithm outperforms other methods on many data sets.
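
For context, the Barzilai-Borwein (BB) two-point step size mentioned above replaces a fixed gradient step with one estimated from the two most recent iterates and gradients. The sketch below illustrates the BB1 rule on a generic smooth objective; the function names and the quadratic test problem are illustrative placeholders and are not taken from the cited paper.

    import numpy as np

    def bb_gradient_descent(grad, x0, n_iter=100, alpha0=1e-3):
        """Gradient descent with the Barzilai-Borwein (BB) two-point step size.

        grad : callable returning the gradient of a smooth objective
        x0   : starting point (NumPy array)
        """
        x_prev = np.asarray(x0, dtype=float)
        g_prev = grad(x_prev)
        x = x_prev - alpha0 * g_prev          # first step uses a fixed step size
        for _ in range(n_iter):
            g = grad(x)
            s = x - x_prev                    # difference of iterates
            y = g - g_prev                    # difference of gradients
            # BB1 step size: alpha = (s^T s) / (s^T y); guard against division by zero
            denom = s @ y
            alpha = (s @ s) / denom if abs(denom) > 1e-12 else alpha0
            x_prev, g_prev = x, g
            x = x - alpha * g
        return x

    # Example: minimize the quadratic 0.5 * x^T A x - b^T x
    A = np.array([[3.0, 1.0], [1.0, 2.0]])
    b = np.array([1.0, 1.0])
    x_star = bb_gradient_descent(lambda x: A @ x - b, x0=np.zeros(2))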

Additional excerpts

  • ...The Active Set (AS) algorithm maintains an active set at each iteration to reduce the complexity of the search [9]....


Posted Content
TL;DR: In this article, the authors introduce a new notion of equivalence between linear complementarity problems that sets the basis to translate the powerful tools of smooth bifurcation theory to this class of models.
Abstract: Many systems of interest to control engineering can be modeled by linear complementarity problems. We introduce a new notion of equivalence between linear complementarity problems that sets the basis to translate the powerful tools of smooth bifurcation theory to this class of models. Leveraging this notion of equivalence, we introduce new tools to analyze, classify, and design non-smooth bifurcations in linear complementarity problems and their interconnection.
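
For reference, the object shared by this preprint and the cited book is the standard linear complementarity problem LCP(q, M): given a square matrix M and a vector q, find a vector z with

    \[
    z \ge 0, \qquad w = M z + q \ge 0, \qquad z^{\top} w = 0 .
    \]

This is the textbook formulation; the notation here is generic and not taken from the preprint itself.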
Proceedings ArticleDOI
20 Oct 2014
TL;DR: In this article, a method based on the parametric variational principle and a finite-element time-domain scheme is proposed for handling nonlinear constitutive laws in electromagnetics, such as the Kerr medium and ferroelectric and ferromagnetic hysteresis.
Abstract: A novel method for the nonlinear Maxwell's equations is proposed in this paper. The method is based on the parametric variational principle and a finite-element time-domain scheme. Unlike conventional nonlinear methods based on iteration, the proposed method treats the nonlinear constitutive relations as a set of linear complementarity problems. The method is effective in handling nonlinear constitutive laws in electromagnetics, such as the Kerr medium and ferroelectric and ferromagnetic hysteresis.
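
The abstract's claim that the nonlinear constitutive relations reduce to standard linear complementarity problems can be made concrete with a small solver sketch. The projected Gauss-Seidel method below is one common choice for LCP(q, M) when M is suitable (e.g., symmetric positive definite); it is shown only as an illustration and is not the solver used in the cited paper.

    import numpy as np

    def lcp_projected_gauss_seidel(M, q, n_iter=200, tol=1e-10):
        """Solve the standard LCP: find z >= 0 with w = M z + q >= 0 and z^T w = 0.

        Projected Gauss-Seidel; convergence is guaranteed for suitable M
        (e.g., symmetric positive definite).
        """
        M = np.asarray(M, dtype=float)
        q = np.asarray(q, dtype=float)
        n = len(q)
        z = np.zeros(n)
        for _ in range(n_iter):
            z_old = z.copy()
            for i in range(n):
                # residual excluding the diagonal term, then project onto z_i >= 0
                r = q[i] + M[i] @ z - M[i, i] * z[i]
                z[i] = max(0.0, -r / M[i, i])
            if np.max(np.abs(z - z_old)) < tol:
                break
        return z

    # Tiny example: at the returned z, z >= 0, w = M z + q >= 0 and z * w ~ 0
    M = np.array([[2.0, 1.0], [1.0, 2.0]])
    q = np.array([-1.0, 1.0])
    z = lcp_projected_gauss_seidel(M, q)
    w = M @ z + q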

Cites background or methods from "Linear complementarity, linear and ..."

  • ...Instead of performing time-consuming iterations at each time step as in conventional schemes, the proposed method deals with a series of standard linear complementarity problems [3], which can be efficiently solved by a number of mature mathematical tools [4]....


  • ...Several programming methods are available for this well-studied problem [4]....


Posted Content
TL;DR: In this paper, several formulations of the cone regression problem are considered and, focusing on the particular case of concave regression as example, several algorithms are analyzed and compared both qualitatively and quantitatively through numerical simulations.
Abstract: Cone regression is a particular case of quadratic programming that minimizes a weighted sum of squared residuals under a set of linear inequality constraints. Several important statistical problems, such as isotonic regression, concave regression, and ANOVA under partial orderings, to name a few, can be considered particular instances of the cone regression problem. Given its relevance in Statistics, this paper aims to address the fundamentals of cone regression from a theoretical and practical point of view. Several formulations of the cone regression problem are considered and, focusing on the particular case of concave regression as an example, several algorithms are analyzed and compared both qualitatively and quantitatively through numerical simulations. Several improvements to enhance numerical stability and to bound the computational cost are proposed. For each analyzed algorithm, the pseudo-code and its corresponding code in Scilab are provided. The results from this study demonstrate that the choice of the optimization approach strongly impacts numerical performance. It is also shown that no method is currently available to efficiently solve cone regression problems of large dimension (more than several thousand points). We suggest further research to fill this gap by exploiting and adapting classical multi-scale strategies to compute an approximate solution.
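
As a concrete instance of the cone regression framework described above, isotonic regression minimizes a weighted sum of squared residuals under monotonicity (order) constraints and admits the classical pool-adjacent-violators algorithm. The cited paper provides Scilab implementations; the Python sketch below is only an illustrative stand-in and not one of the paper's algorithms.

    import numpy as np

    def isotonic_regression_pava(y, w=None):
        """Weighted isotonic regression: minimize sum_i w_i (x_i - y_i)^2
        subject to x_1 <= x_2 <= ... <= x_n, via pool adjacent violators."""
        y = np.asarray(y, dtype=float)
        w = np.ones_like(y) if w is None else np.asarray(w, dtype=float)
        # each block stores [weighted mean, total weight, number of points]
        blocks = []
        for yi, wi in zip(y, w):
            blocks.append([yi, wi, 1])
            # merge adjacent blocks while the monotonicity constraint is violated
            while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
                m2, w2, n2 = blocks.pop()
                m1, w1, n1 = blocks.pop()
                wt = w1 + w2
                blocks.append([(w1 * m1 + w2 * m2) / wt, wt, n1 + n2])
        # expand block means back to a full-length fitted vector
        return np.concatenate([np.full(n, m) for m, _, n in blocks])

    # Example: a noisy increasing trend with one violation
    fit = isotonic_regression_pava([1.0, 3.0, 2.0, 4.0])
    # fit == [1.0, 2.5, 2.5, 4.0]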
Book ChapterDOI
01 Jan 2001
TL;DR: Several optimization problems arise in nature and are known, mainly for historical reasons, as principles; examples discussed in this chapter include the principle of minimum potential energy in statics, the maximum dissipation principle in dissipative media, and the least action principle in dynamics.
Abstract: Optimization deals with the determination of the extremum or the extrema of a given function over the space where the function is defined or over a subset of it. Several optimization problems arise in nature and they are known, mainly for historical reasons, as principles. The principles of minimum potential energy in statics, the maximum dissipation principle in dissipative media and the least action principle in dynamics are some examples (see, among others, [Hamel, 1949], [Lippmann, 1972], [Cohn and Maier, 1979], [de Freitas, 1984], [de Freitas and Smith, 1985], [Panagiotopoulos, 1985], [Hartmann, 1985], [Sewell, 1987], [Bazant and Cedolin, 1991]). Furthermore, mathematical optimization is tightly connected with optimal structural design, control and identification. Applications include contemporary questions in biomechanics, like the understanding of the inner structure in bones [Wainwright et al., 1982] or of the shape in trees [Mattheck, 1997].