
Showing papers on "Monotone polygon published in 2017"


Journal ArticleDOI
TL;DR: In this paper, a nicely behaved fixed-point equation for solving monotone inclusions with three operators is proposed; the equation employs resolvent and forward operators, one at a time, in succession.
Abstract: Operator-splitting methods convert optimization and inclusion problems into fixed-point equations; when applied to convex optimization and monotone inclusion problems, the equations given by operator-splitting methods are often easy to solve by standard techniques. The hard part of this conversion, then, is to design nicely behaved fixed-point equations. In this paper, we design a new, and thus far, the only nicely behaved fixed-point equation for solving monotone inclusions with three operators; the equation employs resolvent and forward operators, one at a time, in succession. We show that our new equation extends the Douglas-Rachford and forward-backward equations; we prove that standard methods for solving the equation converge; and we give two accelerated methods for solving the equation.
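
For intuition, here is a minimal numerical sketch of a fixed-point iteration of this three-operator kind (two resolvent steps and one forward step per iteration) on a toy problem. The splitting order, toy objective, and step size are our assumptions, not the paper's notation:

```python
import numpy as np

# Three-operator fixed-point iteration: two resolvent (prox) steps and one
# forward (gradient) step per sweep, applied to
#   minimize 0.5*||Ax - b||^2 + lam*||x||_1  subject to  x in [0, 1]^n.
rng = np.random.default_rng(0)
A, b = rng.standard_normal((20, 10)), rng.standard_normal(20)
lam = 0.1
gamma = 1.0 / np.linalg.norm(A, 2) ** 2    # gamma < 2/L with L = ||A||_2^2

prox_box = lambda v: np.clip(v, 0.0, 1.0)  # resolvent of the box indicator
prox_l1 = lambda v: np.sign(v) * np.maximum(np.abs(v) - gamma * lam, 0.0)
grad_h = lambda v: A.T @ (A @ v - b)       # forward operator of the smooth term

z = np.zeros(10)
for _ in range(500):
    x_g = prox_l1(z)                                    # first resolvent step
    x_f = prox_box(2 * x_g - z - gamma * grad_h(x_g))   # forward + second resolvent
    z = z + x_f - x_g                                   # fixed-point update
print(x_f)  # approximate constrained lasso solution
```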

202 citations


Journal ArticleDOI
TL;DR: In this paper, a (1 − c/e)-approximation algorithm was proposed for the problem of maximizing a monotone increasing submodular function subject to a single matroid constraint.
Abstract: We design new approximation algorithms for the problems of optimizing submodular and supermodular functions subject to a single matroid constraint. Specifically, we consider the case in which we wish to maximize a monotone increasing submodular function or minimize a monotone decreasing supermodular function with a bounded total curvature c. Intuitively, the parameter c represents how nonlinear a function f is: when c = 0, f is linear, while for c = 1, f may be an arbitrary monotone increasing submodular function. For the case of submodular maximization with total curvature c, we obtain a (1 − c/e)-approximation—the first improvement over the greedy algorithm of Conforti and Cornuejols from 1984, which holds for a cardinality constraint, as well as a recent analogous result for an arbitrary matroid constraint. Our approach is based on modifications of the continuous greedy algorithm and nonoblivious local search, and allows us to approximately maximize the sum of a nonnegative, monotone increasing submodular function...
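
For reference, the classic greedy baseline that the (1 − c/e) result improves on can be sketched in a few lines for a cardinality constraint; the coverage objective below is an illustrative stand-in, not the paper's setting:

```python
# Classic greedy for maximizing a monotone submodular f under |S| <= k.
def greedy(f, ground, k):
    S = set()
    for _ in range(k):
        e = max(ground - S, key=lambda x: f(S | {x}) - f(S))  # best marginal gain
        S.add(e)
    return S

# Illustrative coverage objective (a monotone submodular function).
sets = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"c", "d", "e"}, 4: {"a"}}
coverage = lambda S: len(set().union(*(sets[i] for i in S))) if S else 0
print(greedy(coverage, set(sets), 2))  # {1, 3}: covers all five elements
```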

130 citations


Journal ArticleDOI
TL;DR: Several modified hybrid projection methods, based on differently constructed half-spaces, are proposed for finding common solutions of variational inequality problems involving monotone and Lipschitz continuous operators.
Abstract: In this paper we propose several modified hybrid projection methods for finding common solutions of variational inequality problems involving monotone and Lipschitz continuous operators. Based on differently constructed half-spaces, the proposed methods reduce the number of projections onto feasible sets as well as the number of operator values that need to be computed. Strong convergence theorems are established under standard assumptions imposed on the operators. An extension of the proposed algorithm to a system of generalized equilibrium problems is considered and numerical experiments are also presented.

124 citations


Posted Content
TL;DR: In this paper, the authors show that stochastic projected gradient methods provide strong approximation guarantees for maximizing continuous submodular functions with convex constraints, including submodular functions defined as an expectation over a family of submodular functions with an unknown distribution.
Abstract: In this paper, we study the problem of maximizing continuous submodular functions that naturally arise in many learning applications such as those involving utility functions in active learning and sensing, matrix approximations and network inference. Despite the apparent lack of convexity in such functions, we prove that stochastic projected gradient methods can provide strong approximation guarantees for maximizing continuous submodular functions with convex constraints. More specifically, we prove that for monotone continuous DR-submodular functions, all fixed points of projected gradient ascent provide a factor $1/2$ approximation to the global maxima. We also study stochastic gradient and mirror methods and show that after $\mathcal{O}(1/\epsilon^2)$ iterations these methods reach solutions which achieve in expectation objective values exceeding $(\frac{\text{OPT}}{2}-\epsilon)$. An immediate application of our results is to maximize submodular functions that are defined stochastically, i.e. the submodular function is defined as an expectation over a family of submodular functions with an unknown distribution. We will show how stochastic gradient methods are naturally well-suited for this setting, leading to a factor $1/2$ approximation when the function is monotone. In particular, it allows us to approximately maximize discrete, monotone submodular optimization problems via projected gradient descent on a continuous relaxation, directly connecting the discrete and continuous domains. Finally, experiments on real data demonstrate that our projected gradient methods consistently achieve the best utility compared to other continuous baselines while remaining competitive in terms of computational effort.
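
A minimal sketch of projected gradient ascent on a toy monotone DR-submodular function, assuming a scaled-simplex constraint and a fixed step size of our choosing:

```python
import numpy as np

# Projected gradient ascent for a monotone DR-submodular toy objective
# f(x) = 1 - prod(1 - x_i) over the scaled simplex {x >= 0, sum(x) = b}.
def project_simplex(v, b):
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > css - b)[0][-1]
    return np.maximum(v - (css[rho] - b) / (rho + 1.0), 0.0)

f = lambda x: 1.0 - np.prod(1.0 - x)
grad = lambda x: np.array([np.prod(np.delete(1.0 - x, j)) for j in range(x.size)])

rng = np.random.default_rng(0)
b, eta = 1.0, 0.2
x = project_simplex(rng.random(4), b)
for _ in range(200):
    x = project_simplex(x + eta * grad(x), b)   # ascent step, then projection
print(x, f(x))  # any fixed point is a 1/2-approximate maximizer per the paper
```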

97 citations


Journal ArticleDOI
TL;DR: In this article, a stochastic accelerated mirror-prox (SAMP) method is proposed for solving a class of monotone stochastic variational inequalities (SVIs) by incorporating a multi-step acceleration scheme into the stochastic mirror-prox method.
Abstract: We propose a novel stochastic method, namely the stochastic accelerated mirror-prox (SAMP) method, for solving a class of monotone stochastic variational inequalities (SVI). The main idea of the proposed algorithm is to incorporate a multi-step acceleration scheme into the stochastic mirror-prox method. The developed SAMP method computes weak solutions with the optimal iteration complexity for SVIs. In particular, if the operator in SVI consists of the stochastic gradient of a smooth function, the iteration complexity of the SAMP method can be accelerated in terms of its dependence on the Lipschitz constant of the smooth function. For SVIs with bounded feasible sets, the bound of the iteration complexity of the SAMP method depends on the diameter of the feasible set. For unbounded SVIs, we adopt the modified gap function introduced by Monteiro and Svaiter for solving monotone inclusions, and show that the iteration complexity of the SAMP method depends on the distance from the initial point to the set of strong solutions. It is worth noting that our study also significantly improves a few existing complexity results for solving deterministic variational inequality problems. We demonstrate the advantages of the SAMP method over some existing algorithms through our preliminary numerical experiments.
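
In the Euclidean setting, the mirror-prox step that SAMP builds on reduces to the extragradient update sketched below; the toy operator, feasible set, and step size are our assumptions:

```python
import numpy as np

# Extragradient / Euclidean mirror-prox step for a monotone variational
# inequality with a merely monotone (skew-symmetric) operator F(z) = Mz.
M = np.array([[0.0, 1.0], [-1.0, 0.0]])
F = lambda z: M @ z
proj = lambda z: np.clip(z, -1.0, 1.0)   # projection onto the box [-1, 1]^2
gamma = 0.5                              # requires gamma < 1/L, here L = 1

z = np.array([1.0, 0.8])
for _ in range(200):
    w = proj(z - gamma * F(z))           # extrapolation step
    z = proj(z - gamma * F(w))           # correction step, operator evaluated at w
print(z)  # approaches the unique solution z* = 0 (plain projection would circle)
```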

96 citations


Journal ArticleDOI
TL;DR: A new method of lower and upper solutions is proposed and used to study multi-point boundary value problems for nonlinear fractional differential equations with mixed fractional derivatives and a p-Laplacian operator.

94 citations


Posted Content
TL;DR: In this paper, generalized derivative concepts useful in deriving necessary optimality conditions and numerical algorithms for non-differentiable optimization problems in inverse problems, imaging, and PDE-constrained optimization are discussed.
Abstract: These lecture notes for a graduate course cover generalized derivative concepts useful in deriving necessary optimality conditions and numerical algorithms for nondifferentiable optimization problems in inverse problems, imaging, and PDE-constrained optimization. Treated are convex functions and subdifferentials, Fenchel duality, monotone operators and resolvents, Moreau–Yosida regularization, proximal point and (some) first-order splitting methods, Clarke subdifferentials, and semismooth Newton methods. The required background from functional analysis and calculus of variations is also briefly summarized.

86 citations


Journal ArticleDOI
TL;DR: This work proposes a new splitting technique, namely Asymmetric Forward–Backward–Adjoint splitting, for solving monotone inclusions involving three terms: a maximally monotone, a cocoercive, and a bounded linear operator.
Abstract: In this work we propose a new splitting technique, namely Asymmetric Forward–Backward–Adjoint splitting, for solving monotone inclusions involving three terms: a maximally monotone, a cocoercive, and a bounded linear operator. Our scheme cannot be recovered from existing operator splitting methods, while classical methods like Douglas–Rachford and Forward–Backward splitting are special cases of the new algorithm. Asymmetric preconditioning is the main feature of Asymmetric Forward–Backward–Adjoint splitting; it allows us to unify, extend and shed light on the connections between many seemingly unrelated primal-dual algorithms for solving structured convex optimization problems proposed in recent years. One important special case leads to a Douglas–Rachford-type scheme that includes a third cocoercive operator.

81 citations


Journal ArticleDOI
TL;DR: A regularized smoothed SA (RSSA) scheme is developed wherein the stepsize, smoothing, and regularization parameters are reduced after every iteration at a prescribed rate, and it is shown that the algorithm generates iterates that converge to the least norm solution in an almost sure sense.
Abstract: Traditionally, most stochastic approximation (SA) schemes for stochastic variational inequality (SVI) problems have required the underlying mapping to be either strongly monotone or monotone and Lipschitz continuous. In contrast, we consider SVIs with merely monotone and non-Lipschitzian maps. We develop a regularized smoothed SA (RSSA) scheme wherein the stepsize, smoothing, and regularization parameters are reduced after every iteration at a prescribed rate. Under suitable assumptions on the sequences, we show that the algorithm generates iterates that converge to the least norm solution in an almost sure sense, extending the results in Koshal et al. (IEEE Trans Autom Control 58(3):594–609, 2013) to the non-Lipschitzian regime. Additionally, we provide rate estimates that relate iterates to their counterparts derived from a smoothed Tikhonov trajectory associated with a deterministic problem. To derive non-asymptotic rate statements, we develop a variant of the RSSA scheme, denoted by aRSSA$_r$, in which we employ a weighted iterate-averaging, parameterized by a scalar $r$, where $r = 1$ provides us with the standard averaging scheme. The main contributions are threefold: (i) when $r < 1$ and the parameter sequences are chosen appropriately, we show that the averaged sequence converges to the least norm solution almost surely and a suitably defined gap function diminishes at an approximate rate $\mathcal{O}(1/\sqrt[6]{k})$ after $k$ steps; (ii) when $r < 1$, and smoothing and regularization are suppressed, the gap function admits the rate $\mathcal{O}(1/\sqrt{k})$, thus improving the rate $\mathcal{O}(\ln(k)/\sqrt{k})$ under standard averaging; and (iii) we develop a window-based variant of this scheme that also displays the optimal rate for $r < 1$. Notably, we prove the superiority of the scheme with $r < 1$ over its counterpart with $r = 1$ in terms of the constant factor of the error bound when the size of the averaging window is sufficiently large. We present the performance of the developed schemes on a stochastic Nash–Cournot game with merely monotone and non-Lipschitzian maps.

78 citations


Journal ArticleDOI
TL;DR: In this article, the authors established the existence of traveling waves and spreading speeds for time-space periodic monotone systems with monostable structure via the Poincaré maps approach combined with an evolution viewpoint.

68 citations


Journal ArticleDOI
TL;DR: The aim of this paper is to obtain strong convergence results for variational inequalities with L-Lipschitz continuous monotone operators, where L is unknown, by combining the subgradient extragradient method with the viscosity approximation method and an Armijo-like step-size rule in infinite-dimensional real Hilbert spaces.
Abstract: Our aim in this paper is to obtain strong convergence results for variational inequalities with L-Lipschitz continuous monotone operators, where L is unknown, using a combination of the subgradient extragradient method and the viscosity approximation method with an Armijo-like step-size rule in infinite-dimensional real Hilbert spaces. Our results are obtained under mild conditions on the iterative parameters. We apply our result to nonlinear Hammerstein integral equations and finally provide some numerical experiments to illustrate our proposed algorithm.
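
A sketch of an extragradient iteration with an Armijo-type backtracking rule of the kind used when L is unknown. The paper's full method also involves a subgradient half-space projection and a viscosity term, which this sketch omits; the operator, feasible set, and constants are our assumptions:

```python
import numpy as np

# Extragradient step with Armijo-like backtracking when the Lipschitz
# constant L of the monotone operator F is unknown.
M = np.array([[2.0, 1.0], [-1.0, 2.0]])
F = lambda z: M @ z
proj = lambda z: np.clip(z, -2.0, 2.0)   # feasible set: the box [-2, 2]^2
mu, beta = 0.9, 0.5                      # backtracking parameters

z = np.array([1.5, -1.0])
for _ in range(100):
    gamma = 1.0
    y = proj(z - gamma * F(z))
    # shrink gamma until gamma * ||F(z) - F(y)|| <= mu * ||z - y||
    while gamma * np.linalg.norm(F(z) - F(y)) > mu * np.linalg.norm(z - y):
        gamma *= beta
        y = proj(z - gamma * F(z))
    z = proj(z - gamma * F(y))           # extragradient correction step
print(z)  # converges to the solution z* = 0 without ever knowing L
```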

Journal ArticleDOI
TL;DR: In this paper, the authors prove a strong convergence result for finding a zero of the sum of two monotone operators, with one of the two operators being co-coercive using an iterative method which is a combination of Nesterov's acceleration scheme and Haugazeau's algorithm in real Hilbert spaces.
Abstract: Our interest in this paper is to prove a strong convergence result for finding a zero of the sum of two monotone operators, with one of the two operators being co-coercive using an iterative method which is a combination of Nesterov’s acceleration scheme and Haugazeau’s algorithm in real Hilbert spaces. Our numerical results show that the proposed algorithm converges faster than the un-accelerated Haugazeau’s algorithm.

Journal ArticleDOI
TL;DR: A new framework for sequential multiblock component methods is presented that relies on a new version of regularized generalized canonical correlation analysis (RGCCA) where various scheme functions and shrinkage constants are considered.
Abstract: A new framework for sequential multiblock component methods is presented. This framework relies on a new version of regularized generalized canonical correlation analysis (RGCCA) where various scheme functions and shrinkage constants are considered. Two types of between-block connections are considered: blocks are either fully connected or connected to the superblock (the concatenation of all blocks). The proposed iterative algorithm is monotone convergent and guarantees obtaining at convergence a stationary point of RGCCA. In some cases, the solution of RGCCA is the first eigenvalue/eigenvector of a certain matrix. For the scheme functions x, [Formula: see text], [Formula: see text] or [Formula: see text] and shrinkage constants 0 or 1, many multiblock component methods are recovered.

Journal ArticleDOI
TL;DR: A Hadamard-type fractional integro-differential equation on infinite intervals is studied via the monotone iterative technique: the existence of positive solutions is established, the positive minimal and maximal solutions are obtained, and two explicit monotone iterative sequences converging to these extremal solutions are constructed.

Proceedings ArticleDOI
19 Jun 2017
TL;DR: In this paper, the authors show that there is no constant-factor approximation for maximizing coverage functions under a cardinality constraint using polynomially many samples drawn from any distribution, and give tight approximation guarantees for maximizing several interesting classes of functions, including unit-demand, additive, and general monotone submodular functions.
Abstract: In this paper we consider the following question: can we optimize objective functions from the training data we use to learn them? We formalize this question through a novel framework we call optimization from samples (OPS). In OPS, we are given sampled values of a function drawn from some distribution and the objective is to optimize the function under some constraint. While there are interesting classes of functions that can be optimized from samples, our main result is an impossibility. We show that there are classes of functions which are statistically learnable and optimizable, but for which no reasonable approximation for optimization from samples is achievable. In particular, our main result shows that there is no constant factor approximation for maximizing coverage functions under a cardinality constraint using polynomially-many samples drawn from any distribution. We also show tight approximation guarantees for maximization under a cardinality constraint of several interesting classes of functions including unit-demand, additive, and general monotone submodular functions, as well as a constant factor approximation for monotone submodular functions with bounded curvature.

Journal ArticleDOI
TL;DR: A derivative-free iterative scheme that uses the residual vector as search direction for solving large-scale systems of nonlinear monotone equations is presented and computational experiments show that the new algorithm is computationally efficient.
Abstract: A derivative-free iterative scheme that uses the residual vector as search direction for solving large-scale systems of nonlinear monotone equations is presented. It is closely related to two recently proposed spectral residual methods for nonlinear systems which use a nonmonotone line-search globalization strategy and a step-size based on the Barzilai-Borwein choice. The global convergence analysis is presented. In order to study the numerical behavior of the algorithm, an extensive series of numerical experiments is included. Our computational experiments show that the new algorithm is computationally efficient.
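
A minimal sketch of such a residual-direction iteration with a Barzilai-Borwein spectral step, assuming a toy monotone system and omitting the nonmonotone line-search safeguard the paper uses:

```python
import numpy as np

# Derivative-free residual iteration with a Barzilai-Borwein spectral step
# for a monotone system F(x) = 0 (here F(x) = x + sin(x), componentwise).
F = lambda x: x + np.sin(x)   # monotone: d/dx (x + sin x) = 1 + cos x >= 0
x, sigma = np.full(5, 2.0), 1.0
Fx = F(x)
for _ in range(50):
    x_new = x - sigma * Fx            # residual vector as the search direction
    F_new = F(x_new)
    s, y = x_new - x, F_new - Fx
    sigma = (s @ s) / (s @ y)         # Barzilai-Borwein spectral step length
    x, Fx = x_new, F_new
    if np.linalg.norm(Fx) < 1e-10:
        break
print(x)  # the unique root is x = 0
```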

Journal ArticleDOI
TL;DR: In this paper, it was shown that in the (possibly inconsistent) convex feasibility setting, the shadow sequence remains bounded and its weak cluster points solve a best approximation problem, and a more general sufficient condition for weak convergence in the general case is presented.
Abstract: The Douglas–Rachford algorithm is a very popular splitting technique for finding a zero of the sum of two maximally monotone operators. The behaviour of the algorithm remains mysterious in the general inconsistent case, i.e., when the sum problem has no zeros. However, more than a decade ago, it was shown that in the (possibly inconsistent) convex feasibility setting, the shadow sequence remains bounded and its weak cluster points solve a best approximation problem. In this paper, we advance the understanding of the inconsistent case significantly by providing a complete proof of the full weak convergence in the convex feasibility setting. In fact, a more general sufficient condition for the weak convergence in the general case is presented. Our proof relies on a new convergence principle for Fejér monotone sequences. Numerous examples illustrate our results.
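
The shadow-sequence behaviour the abstract describes can be observed on a small inconsistent feasibility problem; the two disjoint balls below are our toy example, not the paper's:

```python
import numpy as np

# Douglas-Rachford for the inconsistent feasibility problem of two disjoint
# balls: the governing sequence z drifts off, but the shadow P_A(z) converges.
def proj_ball(x, center, r):
    d = x - center
    n = np.linalg.norm(d)
    return x if n <= r else center + r * d / n

PA = lambda x: proj_ball(x, np.array([0.0, 0.0]), 1.0)  # ball A at the origin
PB = lambda x: proj_ball(x, np.array([3.0, 0.0]), 1.0)  # ball B, disjoint from A

z = np.array([0.5, 2.0])
for _ in range(300):
    a = PA(z)
    z = z + PB(2 * a - z) - a        # Douglas-Rachford update
print(PA(z))  # shadow converges to (1, 0), the point of A closest to B
```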

Proceedings ArticleDOI
16 Jan 2017
TL;DR: A novel framework of Prophet Inequalities for combinatorial valuation functions is introduced and a variant of the Correlation Gap Lemma for non-monotone submodular functions is shown.
Abstract: We introduce a novel framework of Prophet Inequalities for combinatorial valuation functions. For a (non-monotone) submodular objective function over an arbitrary matroid feasibility constraint, we give an O(1)-competitive algorithm. For a monotone subadditive objective function over an arbitrary downward-closed feasibility constraint, we give an O(log n · log² r)-competitive algorithm (where r is the cardinality of the largest feasible subset). Inspired by the proof of our subadditive prophet inequality, we also obtain an O(log n · log² r)-competitive algorithm for the Secretary Problem with a monotone subadditive objective function subject to an arbitrary downward-closed feasibility constraint. Even for the special case of a cardinality feasibility constraint, our algorithm circumvents an [EQUATION] lower bound by Bateni, Hajiaghayi, and Zadimoghaddam [10] in a restricted query model. En route to our submodular prophet inequality, we prove a technical result of independent interest: we show a variant of the Correlation Gap Lemma [14, 1] for non-monotone submodular functions.

Journal ArticleDOI
TL;DR: A review of numerical methods for strongly nonlinear PDEs with an emphasis on convex and non-convex fully nonlinear equations and the convergence to viscosity solutions can be found in this article.
Abstract: We review the construction and analysis of numerical methods for strongly nonlinear PDEs, with an emphasis on convex and non-convex fully nonlinear equations and the convergence to viscosity solutions. We begin by describing a fundamental result in this area which states that stable, consistent and monotone schemes converge as the discretization parameter tends to zero. We review methodologies to construct finite difference, finite element and semi-Lagrangian schemes that satisfy these criteria, and, in addition, discuss some rather novel tools that have paved the way to derive rates of convergence within this framework.
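
To make the "stable, consistent and monotone" criterion concrete, here is a sketch of a monotone Lax-Friedrichs-type scheme for the model Hamilton-Jacobi equation u_t + |u_x| = 0; grid sizes, initial data, and boundary handling are our illustrative choices:

```python
import numpy as np

# Monotone Lax-Friedrichs-type scheme for u_t + |u_x| = 0 on [-1, 1].
# Consistent + stable + monotone, so by the cited convergence framework the
# discrete solution converges to the viscosity solution as dx, dt -> 0.
nx = 201
x = np.linspace(-1.0, 1.0, nx)
dx = x[1] - x[0]
sigma = 1.0                   # artificial viscosity >= max |H'(p)| for H(p) = |p|
dt = 0.5 * dx / sigma         # CFL condition that keeps the scheme monotone
u = -np.abs(x)                # initial data with a kink at x = 0

for _ in range(100):          # integrate up to t = 0.5
    up, um = np.roll(u, -1), np.roll(u, 1)
    u = u - dt * (np.abs((up - um) / (2 * dx)) - sigma * (up - 2 * u + um) / (2 * dx))
    u[0], u[-1] = u[1], u[-2] # crude one-sided boundary copy

print(u[nx // 2])             # exact viscosity solution gives u(0, 0.5) = -0.5
```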

Proceedings ArticleDOI
19 Jun 2017
TL;DR: In this article, a lower bound of Ω(n^{1/3}) was shown for the query complexity of any two-sided and adaptive algorithm that tests whether an unknown Boolean function f: {0,1}^n → {0,1} is monotone versus far from monotone.
Abstract: We prove a lower bound of Ω(n^{1/3}) for the query complexity of any two-sided and adaptive algorithm that tests whether an unknown Boolean function f: {0,1}^n → {0,1} is monotone versus far from monotone. This improves the recent lower bound of Ω(n^{1/4}) for the same problem by Belovs and Blais (STOC'16). Our result builds on a new family of random Boolean functions that can be viewed as a two-level extension of Talagrand's random DNFs. Beyond monotonicity, we prove a lower bound of Ω(√n) for two-sided, adaptive algorithms and a lower bound of Ω(n) for one-sided, non-adaptive algorithms for testing unateness, a natural generalization of monotonicity. The latter matches the linear upper bounds by Khot and Shinkar (RANDOM'16) and by Baleshzar, Chakrabarty, Pallavoor, Raskhodnikova, and Seshadhri (2017).
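
For context, the simple one-sided, non-adaptive edge tester against which such lower bounds are contrasted can be sketched as follows; the query budget and test functions are arbitrary illustrative choices:

```python
import random

# One-sided, non-adaptive "edge tester": sample random hypercube edges and
# reject on any monotonicity violation.
def edge_tester(f, n, queries=1000):
    for _ in range(queries):
        x = [random.randint(0, 1) for _ in range(n)]
        i = random.randrange(n)
        lo, hi = x[:], x[:]
        lo[i], hi[i] = 0, 1
        if f(lo) > f(hi):        # violated edge along coordinate i
            return False
    return True                  # never rejects a monotone function (one-sided)

n = 8
majority = lambda x: int(sum(x) > n // 2)   # monotone
parity = lambda x: sum(x) % 2               # far from monotone
print(edge_tester(majority, n), edge_tester(parity, n))  # True, almost surely False
```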

Journal ArticleDOI
TL;DR: In this article, a forward-backward splitting algorithm based on Bregman distances for composite minimization problems in general reflexive Banach spaces is proposed and convergence is established using the notion of variable quasi-Bregman monotone sequences.
Abstract: We propose a forward-backward splitting algorithm based on Bregman distances for composite minimization problems in general reflexive Banach spaces. The convergence is established using the notion of variable quasi-Bregman monotone sequences. Various examples are discussed, including some in Euclidean spaces, where new algorithms are obtained.
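
In the simplest Bregman setting (entropy distance on the probability simplex, with g the simplex indicator), the forward-backward step reduces to an exponentiated-gradient update; the problem data and step size below are our assumptions:

```python
import numpy as np

# Forward-backward step with the entropy Bregman distance on the simplex:
# the backward step becomes the exponentiated-gradient (multiplicative) update.
rng = np.random.default_rng(1)
Q = rng.standard_normal((5, 5))
Q = Q.T @ Q                               # PSD, so f(x) = 0.5 * x'Qx is convex
Q /= np.linalg.norm(Q, 2)                 # normalize so the gradient is 1-Lipschitz
grad = lambda x: Q @ x

x, gamma = np.full(5, 0.2), 0.1
for _ in range(500):
    w = x * np.exp(-gamma * grad(x))   # forward step in the mirror (log) domain
    x = w / w.sum()                    # backward step: Bregman projection onto simplex
print(x, 0.5 * x @ Q @ x)              # approximate minimizer of f over the simplex
```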

Journal ArticleDOI
TL;DR: In this paper, the proximal point algorithm is generalized to complete CAT(0) spaces, and it is shown that the sequence generated by this algorithm converges to a zero of the monotone operator.
Abstract: In this paper, we generalize monotone operators, their resolvents and the proximal point algorithm to complete CAT(0) spaces. We study some properties of monotone operators and their resolvents. We show that the sequence generated by the inexact proximal point algorithm Δ-converges to a zero of the monotone operator in complete CAT(0) spaces. A strong convergence (convergence in metric) result is also presented. Finally, we consider two important special cases of monotone operators and we prove that they satisfy the range condition (see Section 4 for the definition), which guarantees the existence of the sequence generated by the proximal point algorithm.
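
As a Euclidean special case (R^n with its usual metric is a CAT(0) space), the proximal point algorithm iterates the resolvent; for f(x) = ||x||_1 the resolvent is soft thresholding. This sketch is our illustration, not the paper's general setting:

```python
import numpy as np

# Proximal point algorithm in the Euclidean special case:
# iterate the resolvent x_{k+1} = argmin_u f(u) + (1/(2h)) * ||u - x_k||^2.
# For f(x) = ||x||_1 the resolvent is soft thresholding.
resolvent = lambda x, h: np.sign(x) * np.maximum(np.abs(x) - h, 0.0)

x, h = np.array([3.0, -1.5, 0.2]), 0.5
for _ in range(20):
    x = resolvent(x, h)   # one proximal point step
print(x)  # converges to the zero of the subdifferential of ||.||_1, i.e. x = 0
```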

Journal ArticleDOI
Pontus Giselsson
TL;DR: In this article, the authors show that the linear convergence rate bound of Lions and Mercier for Douglas–Rachford splitting is not tight, meaning that no problem from the considered class converges exactly with that rate, and present tight global linear convergence rate bounds.
Abstract: Recently, several authors have shown local and global convergence rate results for Douglas–Rachford splitting under strong monotonicity, Lipschitz continuity, and cocoercivity assumptions. Most of these focus on the convex optimization setting. In the more general monotone inclusion setting, Lions and Mercier showed a linear convergence rate bound under the assumption that one of the two operators is strongly monotone and Lipschitz continuous. We show that this bound is not tight, meaning that no problem from the considered class converges exactly with that rate. In this paper, we present tight global linear convergence rate bounds for that class of problems. We also provide tight linear convergence rate bounds under the assumptions that one of the operators is strongly monotone and cocoercive, and that one of the operators is strongly monotone and the other is cocoercive. All our linear convergence results are obtained by proving the stronger property that the Douglas–Rachford operator is contractive.

Proceedings ArticleDOI
16 Jan 2017
TL;DR: In this article, the authors study the gap between adaptive and non-adaptive strategies for maximizing submodular functions in a stochastic setting, where elements are not all active and value is obtained only from active elements.
Abstract: Suppose we are given a submodular function f over a set of elements, and we want to maximize its value subject to certain constraints. Good approximation algorithms are known for such problems under both monotone and non-monotone submodular functions. We consider these problems in a stochastic setting, where elements are not all active and we only get value from active elements. Each element e is active independently with some known probability pe, but we don't know the element's status a priori: we find it out only when we probe the element e. Moreover, the sequence of elements we probe must satisfy a given prefix-closed constraint, e.g., matroid, orienteering, deadline, precedence, or any downward-closed constraint. In this paper we study the gap between adaptive and non-adaptive strategies for f being a submodular or a fractionally subadditive (XOS) function. If this gap is small, we can focus on finding good non-adaptive strategies instead, which are easier to find as well as to represent. We show that the adaptivity gap is a constant for monotone and non-monotone submodular functions, and logarithmic for XOS functions of small width. These bounds are nearly tight. Our techniques show new ways of arguing about the optimal adaptive decision tree for stochastic optimization problems.

Journal ArticleDOI
TL;DR: In this article, a class of explicit balanced schemes is introduced for stochastic differential equations with coefficients of superlinear growth satisfying a global monotonicity condition, and some numerical results are presented.

Proceedings Article
01 Jan 2017
TL;DR: It is proved that PONSS can achieve a better approximation ratio under assumptions such as an i.i.d. noise distribution, and the empirical results on influence maximization and sparse regression problems show the superior performance of PONSS.
Abstract: The problem of selecting the best $k$-element subset from a universe is involved in many applications. While previous studies assumed a noise-free environment or a noisy monotone submodular objective function, this paper considers a more realistic and general situation where the evaluation of a subset is a noisy monotone function (not necessarily submodular), with both multiplicative and additive noises. To understand the impact of the noise, we firstly show the approximation ratio of the greedy algorithm and POSS, two powerful algorithms for noise-free subset selection, in the noisy environments. We then propose to incorporate a noise-aware strategy into POSS, resulting in the new PONSS algorithm. We prove that PONSS can achieve a better approximation ratio under assumptions such as an i.i.d. noise distribution. The empirical results on influence maximization and sparse regression problems show the superior performance of PONSS.

Journal ArticleDOI
TL;DR: In this article, it was shown that the left-monotone martingale coupling is optimal for any given performance function satisfying the Spence-Mirrlees condition, without assuming additional structural conditions on the marginals.

Journal ArticleDOI
01 Jan 2017
TL;DR: This letter shows that a large class of co-design problems has a common structure: they are described by two posets, representing functionality and resources, and the co-design constraints can be expressed as two maps in opposite directions between the two posets.
Abstract: Co-design problems in the field of robotics involve the tradeoff of "resources" usage, such as cost, execution time, and energy, with mission performance, under recursive constraints that involve energetics, mechanics, computation, and communication. This letter shows that a large class of co-design problems has a common structure: they are described by two posets, representing functionality and resources. The co-design constraints can be expressed as two maps in opposite directions between the two posets. Finding the most resource-economical feasible solution is equivalent to finding the least fixed point of the composition of those two maps. If the two maps are monotone, results from order theory allow concluding uniqueness and systematically deriving an optimal design or a certificate for infeasibility.
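
The least-fixed-point computation can be sketched with Kleene iteration on a finite lattice; the tiny resource lattice and monotone map below are our illustrative assumptions:

```python
# Kleene iteration: for a monotone map on a finite complete lattice, iterating
# from the bottom element reaches the least fixed point.
def least_fixed_point(f, bottom):
    x = bottom
    while (y := f(x)) != x:
        x = y
    return x

# Toy "resources" lattice: pairs (cost, mass) in {0..10}^2, ordered
# componentwise; f maps provided resources to required resources and is monotone.
f = lambda r: (min(10, 1 + r[1] // 2), min(10, 2 + r[0] // 3))
print(least_fixed_point(f, (0, 0)))  # (2, 2): the cheapest feasible design
```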

Proceedings ArticleDOI
01 Dec 2017
TL;DR: In this paper, the authors focus on applications in machine learning, optimization, and control that call for the resilient selection of a few elements, e.g. features, sensors, or leaders, against a number of adversarial denial of service attacks or failures.
Abstract: In this paper, we focus on applications in machine learning, optimization, and control that call for the resilient selection of a few elements, e.g. features, sensors, or leaders, against a number of adversarial denial-of-service attacks or failures. In general, such resilient optimization problems are hard, and cannot be solved exactly in polynomial time, even though they often involve objective functions that are monotone and submodular. Notwithstanding, in this paper we provide the first scalable algorithm for their approximate solution, that is valid for any number of attacks or failures, and which, for functions with low curvature, guarantees superior approximation performance. Notably, the curvature has been known to tighten approximations for several non-resilient maximization problems, yet its effect on resilient maximization had hitherto been unknown. We complement our theoretical analyses with supporting empirical evaluations.

Journal ArticleDOI
TL;DR: The projection neurodynamic model is proved to be stable in the sense of Lyapunov, globally convergent, globally asymptotically stable, and globally exponentially stable, and the new neurodynamic model is shown to be effective for solving nonconvex optimization problems.
Abstract: In this paper, a neurodynamic model is given to solve the nonlinear pseudo-monotone projection equation. Under the pseudo-monotonicity and Lipschitz continuity conditions, the projection neurodynamic model is proved to be stable in the sense of Lyapunov, globally convergent, globally asymptotically stable, and globally exponentially stable. Also, we show that our new neurodynamic model is effective for solving nonconvex optimization problems. Moreover, since monotonicity is a special case of pseudo-monotonicity, since a co-coercive mapping is Lipschitz continuous and monotone, and since a strongly pseudo-monotone mapping is pseudo-monotone, the neurodynamic model can be applied to solve a broader class of constrained optimization problems related to variational inequalities, pseudo-convex optimization problems, linear and nonlinear complementarity problems, and linear and convex quadratic programming problems. Finally, several illustrative examples are stated to demonstrate the effectiveness and efficiency of our new neurodynamic model.
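
A sketch of the dynamics of such a projection neurodynamic model, dx/dt = P_Omega(x - alpha*F(x)) - x, integrated with a simple Euler step; the map F, the set Omega, and all constants are our assumptions:

```python
import numpy as np

# Euler integration of the projection neurodynamics
# dx/dt = P_Omega(x - alpha * F(x)) - x for a monotone affine map F.
M = np.array([[3.0, 1.0], [1.0, 2.0]])
q = np.array([-4.0, -3.0])
F = lambda x: M @ x + q                    # monotone, hence pseudo-monotone
P = lambda x: np.clip(x, 0.0, 2.0)         # projection onto Omega = [0, 2]^2

x, alpha, dt = np.array([2.0, 0.0]), 0.5, 0.1
for _ in range(500):
    x = x + dt * (P(x - alpha * F(x)) - x) # one Euler step of the dynamics
print(x)  # equilibrium (1, 1) solves x = P(x - alpha * F(x))
```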