Open Access Book

Nonlinear Programming

TLDR
It is shown that A is closed: if x_k → x and y_k → y, where y_k ∈ A(x_k), then y ∈ A(x).
Abstract
Part 1 (if): Assume that Z is closed. We must show that A is closed, i.e., that if x_k → x and y_k → y, where y_k ∈ A(x_k), then y ∈ A(x). By the definition of Z being closed, we know that all points arbitrarily close to Z are in Z. Let x_k → x, y_k → y, and y_k ∈ A(x_k), so that (x_k, y_k) ∈ Z for all k. Now, for any ε > 0 there exists an N such that for all k ≥ N we have ||x_k − x|| < ε and ||y_k − y|| < ε, which implies that (x, y) is arbitrarily close to Z, so (x, y) ∈ Z and hence y ∈ A(x). Thus, A is closed.
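
As a compact restatement of what Part 1 establishes (a sketch; the definition of Z as the graph of the point-to-set mapping A is assumed from the exercise statement):

Z = \{ (x, y) : y \in A(x) \} \text{ closed}
\;\Longrightarrow\;
\bigl[\, x_k \to x,\; y_k \to y,\; y_k \in A(x_k)\ \forall k \;\Longrightarrow\; y \in A(x) \,\bigr].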


Citations
Journal Article

A Tutorial on Support Vector Machines for Pattern Recognition

TL;DR: Several arguments that support the observed high accuracy of SVMs are reviewed, and numerous examples and proofs of most of the key theorems are given.
Journal Article

A tutorial on support vector regression

TL;DR: This tutorial gives an overview of the basic ideas underlying Support Vector (SV) machines for function estimation, and includes a summary of currently used algorithms for training SV machines, covering both the quadratic programming part and advanced methods for dealing with large datasets.
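
For context, the ε-insensitive formulation at the core of SV regression can be written as the following primal quadratic program (standard textbook form, stated here for illustration rather than quoted from the tutorial):

\min_{w, b, \xi, \xi^*} \ \tfrac{1}{2}\|w\|^2 + C \sum_{i=1}^{m} (\xi_i + \xi_i^*)
\quad \text{s.t.} \quad
y_i - \langle w, x_i \rangle - b \le \varepsilon + \xi_i, \quad
\langle w, x_i \rangle + b - y_i \le \varepsilon + \xi_i^*, \quad
\xi_i, \xi_i^* \ge 0.

Its dual is the quadratic program solved by the training algorithms the tutorial surveys.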
Journal Article

Convergence of a block coordinate descent method for nondifferentiable minimization

TL;DR: In this article, the convergence properties of a block coordinate descent method applied to minimize a non-convex function f(x_1, …, x_N) with certain separability and regularity properties are studied.
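
As an illustrative sketch of the method being analyzed, the snippet below runs cyclic block coordinate descent on a smooth, strictly convex quadratic; the objective, the scalar block split, and the exact per-block updates are assumptions made for the demo, not details taken from the paper.

def coordinate_descent(n_sweeps=50):
    # Illustrative objective: f(x1, x2) = (x1 - 1)**2 + (x2 + 2)**2 + x1*x2,
    # which is strictly convex (Hessian [[2, 1], [1, 2]] is positive definite).
    x1, x2 = 0.0, 0.0
    for _ in range(n_sweeps):
        # Minimize exactly over each block while the other is held fixed:
        x1 = 1.0 - x2 / 2.0   # solves df/dx1 = 2*(x1 - 1) + x2 = 0
        x2 = -2.0 - x1 / 2.0  # solves df/dx2 = 2*(x2 + 2) + x1 = 0
    return x1, x2

print(coordinate_descent())  # approaches the unique minimizer (8/3, -10/3)

In the nondifferentiable setting studied in the paper, such cyclic sweeps require the separability and regularity conditions identified there for convergence to be guaranteed.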
Book Chapter

A Generalized Representer Theorem

TL;DR: The result shows that a wide range of problems have optimal solutions that live in the finite-dimensional span of the training examples mapped into feature space, thus enabling kernel algorithms to be carried out independently of the (potentially infinite) dimensionality of the feature space.
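
Concretely, the conclusion of the theorem takes the standard form below, where k is the reproducing kernel, x_1, …, x_m are the training examples, and the notation is supplied here for illustration:

f^{\ast}(\cdot) = \sum_{i=1}^{m} \alpha_i \, k(x_i, \cdot), \qquad \alpha_i \in \mathbb{R},

so any optimizer is determined by the m coefficients α_i, regardless of the dimensionality of the feature space.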
Proceedings Article

Regularized multi-task learning

TL;DR: An approach to multi-task learning is presented, based on the minimization of regularization functionals similar to existing ones, such as the one for Support Vector Machines, that have been successfully used in the past for single-task learning.
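
A sketch of how such a functional can couple the tasks (the shared-plus-specific parameterization below is a common construction associated with this line of work; the symbols and weights are illustrative assumptions):

w_t = w_0 + v_t, \quad t = 1, \dots, T, \qquad
\text{regularizer} = \frac{\lambda_1}{T} \sum_{t=1}^{T} \|v_t\|^2 + \lambda_2 \|w_0\|^2,

where w_0 captures what the tasks share and the v_t capture task-specific deviations.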
References

How good is the simplex algorithm?

TL;DR: By constructing long 'increasing' paths on appropriate convex polytopes, it is shown that the simplex algorithm for linear programs is not a 'good algorithm' in the sense of J. Edmonds.
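
The canonical construction behind this result is the Klee–Minty cube; one standard presentation (exact constants vary across expositions) is the linear program

\max \ x_n \quad \text{s.t.} \quad 0 \le x_1 \le 1, \qquad
\varepsilon x_{j-1} \le x_j \le 1 - \varepsilon x_{j-1}, \ \ j = 2, \dots, n,

with 0 < ε < 1/2, on which the simplex method with a classical pivot rule can visit all 2^n vertices, giving exponential worst-case running time.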