
Showing papers on "Strongly monotone published in 2019"


Journal ArticleDOI
TL;DR: This paper designs novel center-free distributed GNE seeking algorithms for equality and inequality affine coupling constraints, respectively, and proves their convergence by showing that the two algorithms can be seen as specific instances of preconditioned proximal point algorithms for finding zeros of monotone operators.
Abstract: In this paper, we investigate distributed generalized Nash equilibrium (GNE) computation of monotone games with affine coupling constraints. Each player can only utilize its local objective function, local feasible set, and a local block of the coupling constraint, and can only communicate with its neighbors. We assume the game has monotone pseudo-subdifferential without Lipschitz continuity restrictions. We design novel center-free distributed GNE seeking algorithms for equality and inequality affine coupling constraints, respectively. A proximal alternating direction method of multipliers is proposed for the equality case, while for the inequality case, a parallel splitting type algorithm is proposed. In both algorithms, the GNE seeking task is decomposed into a sequential Nash equilibrium (NE) computation of regularized subgames and distributed update of multipliers and auxiliary variables, based on local data and local communication. Our two double-layer GNE algorithms need not specify the inner loop NE seeking algorithm, and moreover, only require that the strongly monotone subgames are inexactly solved. We prove their convergence by showing that the two algorithms can be seen as specific instances of preconditioned proximal point algorithms for finding zeros of monotone operators. Applications and numerical simulations are given for illustration.
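The convergence proof reduces both algorithms to preconditioned proximal point iterations for finding zeros of monotone operators. As a minimal illustrative sketch (not the paper's algorithm), the plain proximal point method below finds the zero of a strongly monotone affine operator T(x) = Ax - b by repeatedly applying its resolvent; the matrix, vector, and step size are hypothetical choices for the example.

```python
import numpy as np

# Classical proximal point iteration x_{k+1} = (I + lam*T)^{-1}(x_k) for the
# monotone affine operator T(x) = A x - b; its zero solves A x = b.
def proximal_point(A, b, lam=1.0, iters=200):
    n = A.shape[0]
    x = np.zeros(n)
    M = np.linalg.inv(np.eye(n) + lam * A)  # resolvent of the affine operator
    for _ in range(iters):
        x = M @ (x + lam * b)               # resolvent step
    return x

# A symmetric positive definite A makes T strongly monotone.
A = np.array([[2.0, 0.5], [0.5, 1.0]])
b = np.array([1.0, 1.0])
x = proximal_point(A, b)
```

Since the resolvent of a strongly monotone operator is a contraction, the iteration converges linearly to the zero of T.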

75 citations


Journal ArticleDOI
TL;DR: This paper studies lower iteration complexity bounds for finding the saddle point of a strongly convex and strongly concave saddle point problem $\min_x\max_yF(x,y)$, restricting the algorithms under investigation to either pure first-order methods or methods using proximal mappings.
Abstract: In this paper, we study the lower iteration complexity bounds for finding the saddle point of a strongly convex and strongly concave saddle point problem: $\min_x\max_yF(x,y)$. We restrict the classes of algorithms in our investigation to be either pure first-order methods or methods using proximal mappings. The existing lower bound result for this type of problems is obtained via the framework of strongly monotone variational inequality problems, which corresponds to the case where the gradient Lipschitz constants ($L_x, L_y$ and $L_{xy}$) and strong convexity/concavity constants ($\mu_x$ and $\mu_y$) are uniform with respect to variables $x$ and $y$. However, specific to the min-max saddle point problem these parameters are naturally different. Therefore, one is led to finding the best possible lower iteration complexity bounds, specific to the min-max saddle point models. In this paper we present the following results. For the class of pure first-order algorithms, our lower iteration complexity bound is $\Omega\left(\sqrt{\frac{L_x}{\mu_x}+\frac{L_{xy}^2}{\mu_x\mu_y}+\frac{L_y}{\mu_y}}\cdot\ln\left(\frac{1}{\epsilon}\right)\right)$, where the term $\frac{L_{xy}^2}{\mu_x\mu_y}$ explains how the coupling influences the iteration complexity. Under several special parameter regimes, this lower bound has been achieved by corresponding optimal algorithms. However, whether or not the bound under the general parameter regime is optimal remains open. Additionally, for the special case of bilinear coupling problems, given the availability of certain proximal operators, a lower bound of $\Omega\left(\sqrt{\frac{L_{xy}^2}{\mu_x\mu_y}+1}\cdot\ln(\frac{1}{\epsilon})\right)$ is established in this paper, and optimal algorithms have already been developed in the literature.

69 citations


Journal ArticleDOI
TL;DR: This work proposes a novel distributed payoff-based algorithm, where each agent uses information only about its cost value and the constraint value with its associated dual multiplier, and proves convergence to Nash equilibria under significantly weaker assumptions, not requiring a potential function.
Abstract: We consider multiagent decision making where each agent optimizes its convex cost function subject to individual and coupling constraints. The constraint sets are compact convex subsets of a Euclidean space. To learn Nash equilibria, we propose a novel distributed payoff-based algorithm, where each agent uses information only about its cost value and the constraint value with its associated dual multiplier. We prove convergence of this algorithm to a Nash equilibrium, under the assumption that the game admits a strictly convex potential function. In the absence of coupling constraints, we prove convergence to Nash equilibria under significantly weaker assumptions, not requiring a potential function. Namely, strict monotonicity of the game mapping is sufficient for convergence. We also derive the convergence rate of the algorithm for strongly monotone game maps.

59 citations


Posted Content
TL;DR: In this article, a general algorithmic framework for first-order methods in optimization in a broad sense, including minimization problems, saddle-point problems and variational inequalities, is proposed.
Abstract: In this paper we propose a general algorithmic framework for first-order methods in optimization in a broad sense, including minimization problems, saddle-point problems and variational inequalities. This framework allows one to obtain many known methods as special cases, including the accelerated gradient method, composite optimization methods, level-set methods, and proximal methods. The idea of the framework is based on constructing an inexact model of the main problem component, i.e., the objective function in optimization or the operator in variational inequalities. Besides reproducing known results, our framework allows us to construct new methods, which we illustrate by constructing a universal method for variational inequalities with composite structure. This method works for smooth and non-smooth problems with optimal complexity without a priori knowledge of the problem smoothness. We also generalize our framework to strongly convex objectives and strongly monotone variational inequalities.

29 citations


Journal ArticleDOI
TL;DR: The case when one operator is Lipschitz continuous but not necessarily a subdifferential operator and the other is strongly monotone arises in optimization methods based on primal–dual approaches; new linear convergence results are provided for this setting.
Abstract: The Douglas–Rachford method is a popular splitting technique for finding a zero of the sum of two subdifferential operators of proper, closed, and convex functions and, more generally, two maximally monotone operators. Recent results concerned with linear rates of convergence of the method require additional properties of the underlying monotone operators, such as strong monotonicity and cocoercivity. In this paper, we study the case, when one operator is Lipschitz continuous but not necessarily a subdifferential operator and the other operator is strongly monotone. This situation arises in optimization methods based on primal–dual approaches. We provide new linear convergence results in this setting.
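As a self-contained sketch of the Douglas–Rachford iteration in its simplest guise (a two-set feasibility problem, not the paper's Lipschitz-plus-strongly-monotone setting), the loop below alternates the two resolvents, which here are projections onto a hypothetical ball and half-space:

```python
import numpy as np

# Douglas–Rachford for two convex sets: the resolvents of the two normal-cone
# operators are the projections onto the sets.
def douglas_rachford(proj_a, proj_b, z0, iters=500):
    z = z0
    for _ in range(iters):
        x = proj_a(z)                  # resolvent of the first operator
        z = z + proj_b(2 * x - z) - x  # reflect, project, average
    return proj_a(z)                   # the shadow sequence converges

# Hypothetical example sets: a ball of radius 2 and the half-space x[0] >= 1.
proj_ball = lambda z: z if np.linalg.norm(z) <= 2 else 2 * z / np.linalg.norm(z)
proj_half = lambda z: np.array([max(z[0], 1.0), z[1]])
x = douglas_rachford(proj_ball, proj_half, np.array([-3.0, 0.0]))
```

The returned point lies in the intersection of the two sets whenever it is nonempty.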

26 citations


Posted Content
TL;DR: This work provides a distributed algorithm to learn a Nash equilibrium in a class of non-cooperative games with strongly monotone mappings and unconstrained action sets, and proves geometric convergence to a Nash equilibrium.
Abstract: We provide a distributed algorithm to learn a Nash equilibrium in a class of non-cooperative games with strongly monotone mappings and unconstrained action sets. Each player has access to her own smooth local cost function and can communicate to her neighbors in some undirected graph. We consider a distributed communication-based gradient algorithm. For this procedure, we prove geometric convergence to a Nash equilibrium. In contrast to our previous works [15], [16], where the proposed algorithms required two parameters to be set up and the analysis was based on a so-called augmented game mapping, the procedure in this work corresponds to a standard distributed gradient play and, thus, only one constant step size parameter needs to be chosen appropriately to guarantee fast convergence to a game solution. Moreover, we provide a rigorous comparison between the convergence rate of the proposed distributed gradient play and the rate of the GRANE algorithm presented in [15]. It allows us to demonstrate that the distributed gradient play outperforms the GRANE in terms of convergence speed.
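A centralized analog of gradient play on a toy strongly monotone quadratic game can be sketched as follows (the distributed version would additionally exchange local action estimates over the communication graph; the cost coefficients are hypothetical):

```python
import numpy as np

# Gradient play: each player takes a step along its own partial gradient.
# For a strongly monotone game mapping F, a small constant step size gives
# geometric convergence to the Nash equilibrium.
def gradient_play(F, x0, step=0.1, iters=1000):
    x = x0
    for _ in range(iters):
        x = x - step * F(x)
    return x

# Two-player quadratic game with mapping
# F(x) = (a1*x1 + b*x2 + c1, a2*x2 + b*x1 + c2),
# strongly monotone when a1, a2 > |b|.
a1, a2, b = 2.0, 3.0, 1.0
c = np.array([1.0, -1.0])
F = lambda x: np.array([a1 * x[0] + b * x[1], a2 * x[1] + b * x[0]]) + c
x = gradient_play(F, np.zeros(2))
```

The equilibrium solves the linear system given by F(x) = 0, here x* = (-0.8, 0.6).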

19 citations


Journal ArticleDOI
TL;DR: In this paper, the authors propose new subgradient extragradient methods for finding a solution of a strongly monotone equilibrium problem over the solution set of another monotone equilibrium problem.
Abstract: In this paper, we propose new subgradient extragradient methods for finding a solution of a strongly monotone equilibrium problem over the solution set of another monotone equilibrium problem which...

17 citations


Posted Content
TL;DR: In this article, a class of elliptic variational-hemivariational inequalities in an abstract Banach space is considered, for which the concept of well-posedness in the sense of Tykhonov is introduced.
Abstract: We consider a class of elliptic variational-hemivariational inequalities in an abstract Banach space for which we introduce the concept of well-posedness in the sense of Tykhonov. We characterize the well-posedness in terms of metric properties of a family of associated sets. Our results, which provide necessary and sufficient conditions for the well-posedness of inequalities under consideration, are valid under mild assumptions on the data. Their proofs are based on arguments of monotonicity, lower semicontinuity and properties of the Clarke directional derivative. For well-posed inequalities we also prove a continuous dependence result of the solution with respect to the data. We illustrate our abstract results in the study of one-dimensional examples, then we focus on some relevant particular cases, including variational-hemivariational inequalities with strongly monotone operators. Finally, we consider a model variational-hemivariational inequality which arises in Contact Mechanics for which we discuss its well-posedness and provide the corresponding mechanical interpretations.

16 citations


Posted Content
TL;DR: In this paper, the authors consider a smooth flow that is strongly monotone with respect to a cone of rank $k$ and prove that orbits with initial data from an open and dense subset of the phase space are either pseudo-ordered or convergent to equilibria.
Abstract: We consider a $C^{1,\alpha}$ smooth flow in $\mathbb{R}^n$ which is "strongly monotone" with respect to a cone $C$ of rank $k$, a closed set that contains a linear subspace of dimension $k$ and no linear subspaces of higher dimension. We prove that orbits with initial data from an open and dense subset of the phase space are either pseudo-ordered or convergent to equilibria. This covers Hirsch's celebrated Generic Convergence Theorem in the case $k=1$, yields a generic Poincaré–Bendixson Theorem for the case $k=2$, and holds for arbitrary rank $k$. Our approach involves an ergodic argument using the $k$-exponential separation and the associated $k$-Lyapunov exponent (which reduces to the first Lyapunov exponent if $k=1$).

15 citations


Proceedings ArticleDOI
08 Dec 2019
TL;DR: This work develops one of the first proximal-point algorithms with variable sample-sizes (PPAWSS), where increasingly accurate solutions of strongly monotone SVIs are obtained via (VS-Ave) at every step, which allows for a sublinear convergence rate matching that obtained for deterministic monotone VIs.
Abstract: We consider a stochastic variational inequality (SVI) problem with a continuous and monotone mapping over a closed and convex set. In strongly monotone regimes, we present a variable sample-size averaging scheme (VS-Ave) that achieves a linear rate with an optimal oracle complexity. In addition, the iteration complexity is shown to display a muted dependence on the condition number compared with standard variance-reduced projection schemes. To contend with merely monotone maps, we develop among the first proximal-point algorithms with variable sample-sizes (PPAWSS), where increasingly accurate solutions of strongly monotone SVIs are obtained via (VS-Ave) at every step. This allows for achieving a sublinear convergence rate that matches that obtained for deterministic monotone VIs. Preliminary numerical evidence suggests that the schemes compare well with competing ones.
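A minimal sketch of the variable sample-size idea on a toy strongly monotone SVI over a box (the operator, noise level, and growth factor below are hypothetical, and this is not the paper's (VS-Ave) scheme verbatim):

```python
import numpy as np

# Projection scheme for a strongly monotone SVI where the operator is
# estimated at iteration k by averaging N_k samples, with N_k growing
# geometrically; the shrinking sample variance is what enables a fast rate.
def vss_projection(sample_F, proj, x0, step=0.3, rounds=25, growth=1.3):
    x = x0
    n = 1.0
    for _ in range(rounds):
        batch = np.mean([sample_F(x) for _ in range(int(n))], axis=0)
        x = proj(x - step * batch)
        n *= growth                    # geometric growth of the sample size
    return x

# Toy SVI: F(x) = E[x - a + noise] over the box [0,1]^2; solution clip(a,0,1).
rng = np.random.default_rng(2)
a = np.array([0.25, 2.0])
sample_F = lambda x: x - a + 0.1 * rng.standard_normal(2)
proj = lambda z: np.clip(z, 0.0, 1.0)
x = vss_projection(sample_F, proj, np.zeros(2))
```

For this toy operator, the exact solution is the projection of a onto the box, i.e. (0.25, 1.0).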

15 citations


Journal ArticleDOI
TL;DR: In this paper, a class of abstract mixed variational problems governed by a strongly monotone Lipschitz continuous operator is studied, and a general convergence result is proved.
Abstract: The present paper concerns a class of abstract mixed variational problems governed by a strongly monotone Lipschitz continuous operator. Building on existence and uniqueness results available in the literature for the problem under consideration, we prove a general convergence result, which shows the continuous dependence of the solution with respect to the data by using arguments of monotonicity, compactness, lower semicontinuity and Mosco convergence. Then we consider an associated optimal control problem for which we prove the existence of optimal pairs. The mathematical tools developed in this paper are useful in the analysis and control of a large class of boundary value problems which, in a weak formulation, lead to mixed variational problems. To provide an example, we illustrate our results in the study of a mathematical model which describes the equilibrium of an elastic body in frictional contact with a foundation.

Journal ArticleDOI
TL;DR: A strongly convergent algorithm for solving a bilevel split variational inequality problem (BSVIP) involving a strongly monotone mapping in the upper-level problem and pseudomonotone mappings in the lower-level one is proposed and analyzed.
Abstract: In this paper, we propose linesearch methods for solving a bilevel split variational inequality problem (BSVIP) involving a strongly monotone mapping in the upper-level problem and pseudomonotone mappings in the lower-level one. A strongly convergent algorithm for such a BSVIP is proposed and analyzed.

Posted Content
TL;DR: In this article, the strong averaging principle was proved for slow-fast stochastic partial differential equations with locally monotone coefficients, applicable to a large class of examples such as the stochastic porous medium equation, the stochastic $p$-Laplace equation, and the stochastic 2D Navier-Stokes equation; the main techniques are based on time discretization and the variational approach to SPDEs.
Abstract: This paper is devoted to proving the strong averaging principle for slow-fast stochastic partial differential equations with locally monotone coefficients, where the slow component is a stochastic partial differential equation (SPDE) with locally monotone coefficients and the fast component is an SPDE with strongly monotone coefficients. The result is applicable to a large class of examples, such as the stochastic porous medium equation, the stochastic $p$-Laplace equation, the stochastic Burgers type equation and the stochastic 2D Navier-Stokes equation, all of which are nonlinear SPDEs. The main techniques are based on time discretization and the variational approach to SPDEs.

Journal ArticleDOI
TL;DR: This paper revisits the numerical approach to variational inequality problems involving strongly monotone and Lipschitz continuous operators via a variant of the projected reflected gradient method.
Abstract: In this paper, we revisit the numerical approach to variational inequality problems involving strongly monotone and Lipschitz continuous operators by a variant of the projected reflected gradient...
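A sketch of the projected reflected gradient iteration on a toy strongly monotone VI (the operator and feasible set are hypothetical; the paper studies a variant of this template):

```python
import numpy as np

# Projected reflected gradient method: the operator is evaluated at the
# reflected point 2*x_k - x_{k-1}, so only one projection and one operator
# evaluation are needed per iteration.
def projected_reflected_gradient(F, proj, x0, step=0.2, iters=500):
    x_prev = x0.copy()
    x = x0.copy()
    for _ in range(iters):
        y = 2 * x - x_prev                       # reflection step
        x_prev, x = x, proj(x - step * F(y))     # projected update
    return x

# Toy strongly monotone VI over the nonnegative orthant with F(x) = x - a;
# its solution is the projection max(a, 0).
a = np.array([1.5, -2.0, 0.5])
F = lambda x: x - a
proj = lambda z: np.maximum(z, 0.0)
x = projected_reflected_gradient(F, proj, np.zeros(3))
```

Here F is 1-Lipschitz and 1-strongly monotone, so a step size of 0.2 is well within the method's admissible range.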


Journal ArticleDOI
TL;DR: A new convergence analysis of a variable metric forward–backward splitting algorithm with extended relaxation parameters in real Hilbert spaces is presented and it is proved that this algorithm is weakly convergent when certain weak conditions are imposed upon the relaxation parameters.
Abstract: The forward–backward splitting algorithm is a popular operator-splitting method for solving monotone inclusion of the sum of a maximal monotone operator and an inverse strongly monotone operator. In this paper, we present a new convergence analysis of a variable metric forward–backward splitting algorithm with extended relaxation parameters in real Hilbert spaces. We prove that this algorithm is weakly convergent when certain weak conditions are imposed upon the relaxation parameters. Consequently, we recover the forward–backward splitting algorithm with variable step sizes. As an application, we obtain a variable metric forward–backward splitting algorithm for solving the minimization problem of the sum of two convex functions, where one of them is differentiable with a Lipschitz continuous gradient. Furthermore, we discuss the applications of this algorithm to the variational inequality problem, the constrained convex minimization problem, and the split feasibility problem. Numerical results on the LASSO problem in statistical learning demonstrate the effectiveness of the proposed iterative algorithm.
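A basic forward–backward (proximal gradient) sketch on LASSO illustrates the template the paper generalizes; the problem data below are hypothetical, and the paper's algorithm additionally allows variable metrics and extended relaxation parameters.

```python
import numpy as np

def soft_threshold(z, t):
    # proximal operator of t*||.||_1 (the backward step for the l1 term)
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

# Forward–backward splitting for min 0.5*||Ax - b||^2 + lam*||x||_1.
def forward_backward(A, b, lam=0.1, iters=2000):
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L with L = ||A||^2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)             # forward (gradient) step
        x = soft_threshold(x - step * grad, step * lam)  # backward step
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = A @ np.array([1.0, 0.0, -2.0, 0.0, 0.5]) + 0.01 * rng.standard_normal(20)
x = forward_backward(A, b)
```

At convergence, x is a fixed point of the forward–backward map, which characterizes a minimizer.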

Posted Content
TL;DR: An approach to signal recovery in Generalized Linear Models (GLM) in which the signal estimation problem is reduced to the problem of solving a stochastic monotone variational inequality (VI).
Abstract: We discuss an approach to signal recovery in Generalized Linear Models (GLM) in which the signal estimation problem is reduced to the problem of solving a stochastic monotone variational inequality (VI). The solution to the stochastic VI can be found in a computationally efficient way, and in the case when the VI is strongly monotone we derive finite-time upper bounds on the expected $\|\cdot\|_2^2$ error converging to 0 at the rate $O(1/K)$ as the number $K$ of observations grows. Our structural assumptions are essentially weaker than those necessary to ensure convexity of the optimization problem resulting from Maximum Likelihood estimation. In hindsight, the approach we promote can be traced back directly to the ideas behind Rosenblatt's perceptron algorithm.
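An illustrative sketch of solving the GLM signal-recovery VI by projected stochastic approximation, under simplifying assumptions (identity link and Gaussian regressors, both hypothetical; the paper covers general monotone links):

```python
import numpy as np

# Stochastic approximation for the GLM variational inequality: each operator
# sample is a*(phi(a.x) - y), here with phi the identity, and iterates are
# kept in a safety ball by projection.
def sa_glm(samples, radius=10.0):
    x = np.zeros_like(samples[0][0])
    for k, (a, y) in enumerate(samples, start=1):
        x = x - (1.0 / k) * a * (a @ x - y)   # SA step with O(1/k) stepsize
        nrm = np.linalg.norm(x)
        if nrm > radius:                       # projection onto the ball
            x *= radius / nrm
    return x

rng = np.random.default_rng(1)
x_true = np.array([1.0, -1.0])
samples = [(a, a @ x_true + 0.1 * rng.standard_normal())
           for a in rng.standard_normal((20000, 2))]
x_hat = sa_glm(samples)
```

With the identity link the VI operator is strongly monotone, and the estimate approaches the true signal as the number of observations grows.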

Journal ArticleDOI
TL;DR: In this paper, a Mann-type iterative algorithm that approximates the zero of a generalized-Φ-strongly monotone map is constructed and a strong convergence theorem for a sequence generated by the algorithm is proved.
Abstract: Let X be a uniformly convex and uniformly smooth real Banach space with dual space $X^{*}$ . In this paper, a Mann-type iterative algorithm that approximates the zero of a generalized-Φ-strongly monotone map is constructed. A strong convergence theorem for a sequence generated by the algorithm is proved. Furthermore, the theorem is applied to approximate the solution of a convex optimization problem, a Hammerstein integral equation, and a variational inequality problem. This theorem generalizes, improves, and complements some recent results. Finally, examples of generalized-Φ-strongly monotone maps are constructed and numerical experiments which illustrate the convergence of the sequence generated by our algorithm are presented.
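The Mann-type template itself is simple; below is a hedged toy sketch in Euclidean space with a constant Mann parameter and a contractive map (the paper's setting in a Banach space with duality mappings is substantially more general, and the map T here is a hypothetical example):

```python
import numpy as np

# Mann iteration x_{n+1} = (1 - a) x_n + a T(x_n); for a contraction T the
# iterates converge to its unique fixed point.
def mann_iteration(T, x0, a=0.5, iters=500):
    x = x0
    for _ in range(iters):
        x = (1 - a) * x + a * T(x)
    return x

theta = 0.5  # rotate by theta, shrink by 0.9, then shift: a contraction
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
T = lambda x: 0.9 * R @ x + np.array([1.0, 0.0])
x = mann_iteration(T, np.zeros(2))
```

The fixed point solves (I - 0.9 R) x = (1, 0), so the result can be checked against a direct linear solve.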

Journal ArticleDOI
10 May 2019-Symmetry
TL;DR: An iterative algorithm is introduced which converges strongly to a common element of fixed point sets of nonexpansive mappings and sets of zeros of maximal monotone mappings to solve a general system of variational inequalities and strong convergence results are derived.
Abstract: We introduce an iterative algorithm which converges strongly to a common element of fixed point sets of nonexpansive mappings and sets of zeros of maximal monotone mappings. Our iterative method is quite general and includes a large number of iterative methods considered in recent literature as special cases. In particular, we apply our algorithm to solve a general system of variational inequalities, convex feasibility problem, zero point problem of inverse strongly monotone and maximal monotone mappings, split common null point problem, split feasibility problem, split monotone variational inclusion problem and split variational inequality problem. Under relaxed conditions on the parameters, we derive some algorithms and strong convergence results to solve these problems. Our results improve and generalize several known results in the recent literature.

Journal ArticleDOI
01 Mar 2019-Calcolo
TL;DR: In this article, the authors consider monotone inclusions in real Hilbert spaces and suggest a new splitting method, which can be viewed as a variant of a proximal-descent algorithm in a sense.
Abstract: In this article, we consider monotone inclusions in real Hilbert spaces and suggest a new splitting method. The associated monotone inclusions consist of the sum of one bounded linear monotone operator, one inverse strongly monotone operator, and one maximal monotone operator. The new method, at each iteration, first implements one forward–backward step as usual and next implements a descent step, and it can be viewed as a variant of a proximal-descent algorithm in a sense. Its most important feature is that, at each iteration, it needs to evaluate the inverse strongly monotone part only once, in the forward–backward step; in contrast, the original proximal-descent algorithm evaluates this part twice, both in the forward–backward step and in the descent step. Moreover, unlike a recent work, we no longer require the adjoint operation of this bounded linear monotone operator in the descent step. Under standard assumptions, we analyze weak and strong convergence properties of this new method. Rudimentary experiments indicate the superiority of our suggested method over several recently proposed ones for our test problems.

Journal ArticleDOI
M. D. Voisei1
TL;DR: In this article, the authors studied the intrinsic properties of monotone operators needed for the sum conjecture for maximal monotone operators to hold under classical interiority-type domain constraints.
Abstract: In this paper we study, in the relaxed context of locally convex spaces, intrinsic properties of monotone operators needed for the sum conjecture for maximal monotone operators to hold under classical interiority-type domain constraints.

Journal ArticleDOI
TL;DR: In this paper, a strongly convergent algorithm for solving strongly monotone variational inequality problems over the solution set of a split-monotone equilibrium problem has been proposed.
Abstract: We propose a strongly convergent algorithm for solving strongly monotone variational inequality problems over the solution set of a split monotone equilibrium problem. The proposed algorithm can be...

Journal ArticleDOI
TL;DR: In this article, the signal estimation problem is reduced to the problem of solving a stochastic monotone variational inequality (VI), and the solution is found in a computationally efficient way.
Abstract: We discuss an approach to signal recovery in Generalized Linear Models (GLM) in which the signal estimation problem is reduced to the problem of solving a stochastic monotone Variational Inequality (VI). The solution to the stochastic VI can be found in a computationally efficient way, and in the case when the VI is strongly monotone we derive finite-time upper bounds on the expected $\|\cdot\|_2^2$ error converging to 0 at the rate $O(1/K)$ as the number $K$ of observations grows. Our structural assumptions are essentially weaker than those necessary to ensure convexity of the optimization problem resulting from Maximum Likelihood estimation. In hindsight, the approach we promote can be traced back directly to the ideas behind Rosenblatt's perceptron algorithm.

Journal ArticleDOI
TL;DR: In this paper, a multivalued mapping $A : E \to 2^{E^*}$ is considered which is bounded, generalized $\Phi$-strongly monotone, and such that for all $t > 0$ the range $R(J_p + tA) = E^*$, where $J_p$ ($p > 1$) is the generalized duality mapping from $E$ into $2^{E^*}$.
Abstract: Let $E$ be a uniformly smooth and uniformly convex real Banach space and $E^*$ be its dual space. We consider a multivalued mapping $A : E \to 2^{E^*}$ which is bounded, generalized $\Phi$-strongly monotone, and such that for all $t > 0$ the range $R(J_p + tA) = E^*$, where $J_p$ ($p > 1$) is the generalized duality mapping from $E$ into $2^{E^*}$. Supposing $A^{-1}(0) \neq \emptyset$, we construct an algorithm which converges strongly to the solution of $0 \in Ax$. The result is then applied to the generalized convex optimization problem.

Journal ArticleDOI
TL;DR: This study develops a set-point regulation approach for a certain class of systems subject to input and state constraints which leads to closed-loop strongly monotone systems and finds some well-organised controllers that satisfy the mentioned control objectives.
Abstract: Monotone systems are dynamical systems whose solutions preserve an ordering relative to the initial data. This study develops a set-point regulation approach for a certain class of systems subject to input and state constraints which leads to closed-loop strongly monotone systems. The proposed method gives static output feedback controllers that guarantee the convergence of generic bounded solutions to the desired set-point, satisfy the constraints and preserve the control performance under the input saturation. Although such a set-point regulation problem is too challenging for general non-linear systems, the proposed approach finds some well-organised controllers that satisfy the mentioned control objectives. To investigate the applicability of the proposed control technique, the authors exploit a model of cancer tumour growth in an unhealthy tissue. The medication (control) intends to take solutions to the healthy state through a constrained chemotherapy protocol. The authors present a full dynamical analysis for this system.

Journal ArticleDOI
TL;DR: In this article, a new iterative scheme is proposed which approximates a common solution of split equality fixed point problems involving $\eta$-demimetric mappings, a finite family of $\gamma$-inverse strongly monotone mappings, a finite family of relatively quasi-nonexpansive mappings, and a finite family of systems of generalized mixed equilibrium problems in real Banach spaces.
Abstract: A new iterative scheme is proposed which approximates a common solution of split equality fixed point problems involving $\eta$-demimetric mappings, a finite family of $\gamma$-inverse strongly monotone mappings, a finite family of relatively quasi-nonexpansive mappings, and a finite family of systems of generalized mixed equilibrium problems in real Banach spaces which are 2-uniformly convex and uniformly smooth. Our theorems extend and complement several existing results in this area of research.

Posted Content
TL;DR: In this paper, the authors consider differentiable games where the goal is to find a Nash equilibrium, provide new analyses of the extragradient method's local and global convergence properties, and use them to obtain tighter global convergence rates for OG and CO.
Abstract: We consider differentiable games where the goal is to find a Nash equilibrium. The machine learning community has recently started using variants of the gradient method (GD). Prime examples are extragradient (EG), the optimistic gradient method (OG) and consensus optimization (CO), which enjoy linear convergence in cases like bilinear games, where the standard GD fails. The full benefits of these relatively new methods are not known as there is no unified analysis for both strongly monotone and bilinear games. We provide new analyses of EG's local and global convergence properties and use them to obtain tighter global convergence rates for OG and CO. Our analysis covers the whole range of settings between bilinear and strongly monotone games. It reveals that these methods converge via different mechanisms at these extremes; in between, it exploits the most favorable mechanism for the given problem. We then prove that EG achieves the optimal rate for a wide class of algorithms with any number of extrapolations. Our tight analysis of EG's convergence rate in games shows that, unlike in convex minimization, EG may be much faster than GD.
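The extragradient mechanism on the bilinear extreme can be sketched directly: on a toy min-max game with a hypothetical coupling matrix B, plain gradient descent-ascent spirals outward, while EG contracts to the equilibrium at the origin.

```python
import numpy as np

# Extragradient: take a look-ahead (extrapolation) step, then update using
# the vector field evaluated at the look-ahead point.
def extragradient(F, z0, step=0.2, iters=4000):
    z = z0
    for _ in range(iters):
        z_half = z - step * F(z)      # extrapolation step
        z = z - step * F(z_half)      # update with the look-ahead gradient
    return z

# Bilinear game min_x max_y x.T B y; its vector field is skew and the unique
# equilibrium is (0, 0).
B = np.array([[1.0, 2.0], [0.0, 1.0]])
def F(z):
    x, y = z[:2], z[2:]
    return np.concatenate([B @ y, -B.T @ x])

z = extragradient(F, np.array([1.0, -1.0, 2.0, 0.5]))
```

The step size must satisfy step * sigma_max(B) < 1 for the bilinear case; 0.2 is comfortably inside that range here.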

Journal ArticleDOI
TL;DR: In this paper, an adaptive analog of Nesterov's method for variational inequalities with a strongly monotone operator is proposed, where the main idea of the method is an adaptive choice of constants in the maximized concave functionals at each iteration.
Abstract: An adaptive analog of Nesterov's method for variational inequalities with a strongly monotone operator is proposed. The main idea of the method is an adaptive choice of constants in the maximized concave functionals at each iteration. In this case there is no need to specify exact values of the constants, since this method makes it possible to find suitable constants at each iteration. Some estimates for the parameters determining the quality of the solution to the variational inequality are obtained as functions of the number of iterations.

Posted Content
TL;DR: In this paper, a new multiscale method for strongly nonlinear monotone equations in the spirit of Localized Orthogonal Decomposition is introduced and analyzed, and the results neither require structural assumptions on the coefficient such as periodicity or scale separation nor higher regularity of the solution.
Abstract: In this work we introduce and analyze a new multiscale method for strongly nonlinear monotone equations in the spirit of the Localized Orthogonal Decomposition. A problem-adapted multiscale space is constructed by solving linear local fine-scale problems which is then used in a generalized finite element method. The linearity of the fine-scale problems allows their localization and, moreover, makes the method very efficient to use. The new method gives optimal a priori error estimates up to linearization errors. The results neither require structural assumptions on the coefficient such as periodicity or scale separation nor higher regularity of the solution. The effect of different linearization strategies is discussed in theory and practice. Several numerical examples including stationary Richards equation confirm the theory and underline the applicability of the method.

Posted Content
TL;DR: In this paper, the authors consider incentive compatible mechanisms for a domain that is very close to the domain of scheduling $n$ unrelated machines: the single exception is that the valuation of just one machine is submodular.
Abstract: We consider incentive compatible mechanisms for a domain that is very close to the domain of scheduling $n$ unrelated machines: the single exception is that the valuation of just one machine is submodular. For the scheduling problem with such cost functions, we give a lower bound of $\Omega(\sqrt{n})$ on the approximation ratio of incentive compatible deterministic mechanisms. This is a strong information-theoretic impossibility result on the approximation ratio of mechanisms on relatively simple domains. The lower bound of the current work assumes no restriction on the mechanism side, but an expanded class of valuations, in contrast to previous general results on the Nisan-Ronen conjecture that hold for only special classes of mechanisms such as local, strongly monotone, and anonymous mechanisms. Our approach is based on a novel characterization of appropriately selected smaller instances that allows us to focus on particular type of algorithms (linear mechanisms), from which we extract a locality property that gives the lower bound.