Book

Linear complementarity, linear and nonlinear programming

01 Jan 1988
About: The book was published on 1988-01-01 and is currently open access. It has received 1012 citations to date. The book focuses on the topics: Mixed complementarity problem & Complementarity theory.
Citations
Posted Content
TL;DR: A local minimum of a cubic polynomial can be efficiently found by solving semidefinite programs of size linear in the number of variables, and the set of second-order points of any cubic polynomial is a spectrahedron.
Abstract: We consider the notions of (i) critical points, (ii) second-order points, (iii) local minima, and (iv) strict local minima for multivariate polynomials. For each type of point, and as a function of the degree of the polynomial, we study the complexity of deciding (1) if a given point is of that type, and (2) if a polynomial has a point of that type. Our results characterize the complexity of these two questions for all degrees left open by prior literature. Our main contributions reveal that many of these questions turn out to be tractable for cubic polynomials. In particular, we present an efficiently-checkable necessary and sufficient condition for local minimality of a point for a cubic polynomial. We also show that a local minimum of a cubic polynomial can be efficiently found by solving semidefinite programs of size linear in the number of variables. By contrast, we show that it is strongly NP-hard to decide if a cubic polynomial has a critical point. We also prove that the set of second-order points of any cubic polynomial is a spectrahedron, and conversely that any spectrahedron is the projection of the set of second-order points of a cubic polynomial. In our final section, we briefly present a potential application of finding local minima of cubic polynomials to the design of a third-order Newton method.
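
A minimal numerical sketch of the point classifications discussed in this abstract; the polynomial, test point, and tolerance are illustrative assumptions, and the paper's own local-minimality certificate and SDP construction are not reproduced here:

```python
import numpy as np

# Illustrative cubic p(x, y) = x^3 - 3*x*y^2 + y^2; its gradient and
# Hessian are written out by hand for this example.
def grad(v):
    x, y = v
    return np.array([3 * x**2 - 3 * y**2, -6 * x * y + 2 * y])

def hess(v):
    x, y = v
    return np.array([[6 * x, -6 * y],
                     [-6 * y, -6 * x + 2]])

def classify(v, tol=1e-8):
    """Classify a point with the standard first- and second-order tests."""
    g, H = grad(v), hess(v)
    if np.linalg.norm(g) > tol:
        return "not a critical point"
    lam_min = np.linalg.eigvalsh(H).min()   # Hessian PSD <=> second-order point
    if lam_min > tol:
        return "strict local minimum"       # gradient zero, Hessian positive definite
    if lam_min >= -tol:
        return "second-order point"         # gradient zero, Hessian PSD
    return "critical point, not a local minimum"

print(classify(np.array([0.0, 0.0])))       # -> "second-order point"
```

At the chosen point the gradient vanishes and the Hessian is only positive semidefinite, so the sketch reports a second-order point even though p(x, 0) = x^3 shows it is not a local minimum; this gap between the second-order test and true local minimality is what the paper's efficiently-checkable condition for cubics addresses.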

6 citations

Proceedings ArticleDOI
16 Apr 2018
TL;DR: This paper proposes a robust tomography-based loss inference method capable of accurately inferring all link loss rates even when the network system changes dynamically; experimental results strongly confirm the promising performance of the proposed approach.
Abstract: This paper addresses the problem of inferring link loss rates based on network performance tomography in noisy network systems. Since network tomography emerged, all existing tomography-based methods have been limited to the basic condition that both network topologies and end-to-end routes must be absolutely accurate, which in most cases is impractical, especially for large-scale heterogeneous networks. To overcome the impracticability of tomography-based methods, we propose a robust tomography-based loss inference method capable of accurately inferring all link loss rates even when the network system changes dynamically. The new method first measures the end-to-end loss rates of selected paths to reduce the probing cost, and then calculates an upper bound for the loss rate of each link using the measurement results. Finally, it finds all the link loss rates that most closely conform to the measurement results within their upper bounds. Compared with two traditional loss inference methods (with and without path selection, respectively), the results strongly confirm the promising performance of our proposed approach.
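
A hedged sketch of the bound-then-fit idea summarized above, assuming an illustrative 3-path / 3-link routing matrix and made-up path loss measurements; this is not the paper's exact algorithm or its path-selection step:

```python
import numpy as np
from scipy.optimize import lsq_linear

# Illustrative routing matrix (paths x links) and measured path loss rates.
A = np.array([[1, 1, 0],      # path 1 traverses links 1 and 2
              [0, 1, 1],      # path 2 traverses links 2 and 3
              [1, 0, 1]],     # path 3 traverses links 1 and 3
             dtype=float)
path_loss = np.array([0.05, 0.08, 0.06])

# Work with y = -log(1 - loss) so per-link quantities add along a path.
y = -np.log(1.0 - path_loss)

# Upper bound per link: a link cannot lose more than any path that carries it.
ub = np.array([y[A[:, j] > 0].min() for j in range(A.shape[1])])

# Link values that most closely conform to the measurements within [0, bound].
res = lsq_linear(A, y, bounds=(np.zeros_like(ub), ub))
link_loss = 1.0 - np.exp(-res.x)
print(np.round(link_loss, 4))
```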

6 citations


Cites methods from "Linear complementarity, linear and ..."

  • ...(8) can be efficiently solved by MATLAB tools through the active-set (line search) algorithm described in [16]....


Journal ArticleDOI
TL;DR: In this article, the authors consider a large system of Lotka-Volterra equations in which the interactions between the various species are a realization of a random matrix; they provide conditions for a unique equilibrium and present a heuristic to compute the number of surviving species.
Abstract: Lotka-Volterra (LV) equations play a key role in the mathematical modeling of various ecological, biological and chemical systems. When the number of species (or, depending on the viewpoint, chemical components) becomes large, basic but fundamental questions such as computing the number of surviving species still lack theoretical answers. In this paper, we consider a large system of LV equations where the interactions between the various species are a realization of a random matrix. We provide conditions to have a unique equilibrium and present a heuristic to compute the number of surviving species. This heuristic combines arguments from Random Matrix Theory, mathematical optimization (LCP), and standard extreme value theory. Numerical simulations, together with an empirical study where the strength of interactions evolves with time, illustrate the accuracy and scope of the results.
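
A hedged illustration of the LCP connection mentioned above: the LV equilibrium with growth rates r and interaction matrix B can be written as the LCP "find x >= 0 with w = (I - B)x - r >= 0 and x.w = 0". The interaction scale, growth rates, and the projected Gauss-Seidel solver below are illustrative assumptions, not the paper's construction:

```python
import numpy as np

# Weak random interactions so that M = I - B stays close to the identity;
# projected Gauss-Seidel needs a well-behaved M to converge.
rng = np.random.default_rng(0)
n = 50
B = rng.normal(scale=0.01, size=(n, n))
np.fill_diagonal(B, 0.0)
r = np.ones(n)                 # illustrative growth rates

M = np.eye(n) - B              # LCP data: find x >= 0, w = M x + q >= 0, x.w = 0
q = -r

def pgs_lcp(M, q, iters=2000):
    """Projected Gauss-Seidel iteration for the LCP (M, q)."""
    x = np.zeros(len(q))
    for _ in range(iters):
        for i in range(len(q)):
            s = q[i] + M[i] @ x - M[i, i] * x[i]   # residual excluding x_i
            x[i] = max(0.0, -s / M[i, i])
    return x

x = pgs_lcp(M, q)
print("surviving species:", int((x > 1e-9).sum()), "of", n)
```

The simple iterative solver works here only because the interactions are scaled to keep I - B diagonally dominant; stronger random interactions would call for a dedicated LCP method such as Lemke's algorithm.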

6 citations

Journal ArticleDOI
TL;DR: This article surveys results on the order-field property of 2-player and multi-player stochastic games and their mixtures, and shows that certain new subclasses and mixtures of n-person stochastic games can be solved via LCP formulations.
Abstract: We briefly survey some results on the order-field property of 2-player and multi-player stochastic games and their mixtures. Some of these classes of stochastic games can be solved by formulating them as a Linear Complementarity Problem (LCP) or a (Generalized) Vertical Linear Complementarity Problem (VLCP). We discuss some of these results and prove that certain new subclasses and mixtures of multi-player (or n-person) stochastic games can be solved via LCP formulations.
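
For the simplest 2-player case, the classical LCP encoding of a bimatrix game can be sketched as follows; the construction is standard, but the specific 2x2 payoffs below are illustrative, and the snippet only verifies a known equilibrium rather than running an LCP solver:

```python
import numpy as np

# Entrywise-positive payoff matrices for an illustrative 2x2 game:
# the row player wants to match, the column player wants to mismatch.
A = np.array([[3.0, 1.0], [1.0, 3.0]])
B = np.array([[1.0, 3.0], [3.0, 1.0]])

# Classical LCP encoding: z = (x, y) >= 0, w = e - M z >= 0, z.w = 0,
# with M = [[0, A], [B^T, 0]]; mixed strategies are x/sum(x), y/sum(y).
M = np.block([[np.zeros((2, 2)), A],
              [B.T, np.zeros((2, 2))]])
e = np.ones(4)

# Known equilibrium of this game: both players mix 50/50.
x_bar = np.array([0.5, 0.5])
y_bar = np.array([0.5, 0.5])
z = np.concatenate([x_bar / (x_bar @ B @ y_bar),   # rescaling used by the LCP form
                    y_bar / (x_bar @ A @ y_bar)])

w = e - M @ z
print("w >= 0:", bool(np.all(w >= -1e-12)), " z.w =", float(z @ w))
```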

5 citations

Dissertation
01 Jan 2017
TL;DR: The conclusion was that using SVMs is an effective way to assess the possible presence of interactions before searching for them explicitly.
Abstract: One of the biggest challenges in psychiatric genetics is examining the effects of interactions between genetic variants on the aetiologies of complex disorders. Current techniques involve looking at linear combinations of the variants, as considering all the possible combinations of interactions is computationally infeasible. The work in this thesis attempted to address this problem by using a machine learning model called a Support Vector Machine (SVM). These algorithms are capable of either building linear models or using kernel methods to consider the effects of interactions. The dataset used for all of the experiments was taken from a study of sufferers of treatment-resistant schizophrenia receiving the medication Clozapine, with controls taken from the Wellcome Trust Case/Control Consortium study. The first experiment used information from the individual Single Nucleotide Polymorphisms (SNPs) as inputs to the SVMs, and compared the results with a technique called a polygenic score, a linear combination of the risk contributions of the SNPs that provides a single risk score for each individual. When more SNPs were entered into the models, one of the non-linear kernels provided better results than the linear SVMs. The second experiment attempted to explain this behaviour by using simulated phenotypes made from different contributions of main effects and interactions. The results strongly suggested that interactions were making a contribution. The final experiment looked at using risk scores made from gene sets. The models identified a set involved in synaptic development that has been previously implicated in schizophrenia, and when the scores from the individual genes were entered, the non-linear kernels again showed improvement, suggesting that there are interactions occurring between these genes. The conclusion was that using SVMs is an effective way to assess the possible presence of interactions before searching for them explicitly.
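
A small illustrative sketch, on simulated data, of the linear-versus-nonlinear-kernel comparison described above; the genotype simulation, effect sizes, and model settings are assumptions for illustration, not the thesis's data or pipeline:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Simulate SNP-like 0/1/2 genotypes and a phenotype driven partly by a
# SNP-SNP interaction, then compare linear and RBF-kernel SVMs.
rng = np.random.default_rng(0)
n, p = 400, 50
X = rng.integers(0, 3, size=(n, p)).astype(float)

# Main effects on two SNPs plus an interaction between them.
score = 0.5 * X[:, 0] + 0.5 * X[:, 1] + 1.5 * X[:, 0] * X[:, 1]
y = (score + rng.normal(scale=1.0, size=n) > np.median(score)).astype(int)

for kernel in ("linear", "rbf"):
    acc = cross_val_score(SVC(kernel=kernel, C=1.0), X, y, cv=5).mean()
    print(kernel, round(acc, 3))
```

When the interaction term dominates, the RBF kernel typically recovers more of the signal than the linear SVM, which mirrors the behaviour the thesis attributes to interactions.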

5 citations