Topic
Rate of convergence
About: Rate of convergence is a research topic. Over the lifetime, 31257 publications have been published within this topic receiving 795334 citations. The topic is also known as: convergence rate.
Papers published on a yearly basis
Papers
TL;DR: This work considers the method for constrained convex optimization in a Hilbert space, consisting of a step in the direction opposite to an $\epsilon_k$-subgradient of the objective at a current iterate, followed by an orthogonal projection onto the feasible set.
Abstract: We consider the method for constrained convex optimization in a Hilbert space, consisting of a step in the direction opposite to an $\epsilon_k$-subgradient of the objective at a current iterate, followed by an orthogonal projection onto the feasible set. The normalized stepsizes $\alpha_k$ are exogenously given, satisfying $\sum_{k=0}^{\infty} \alpha_k = \infty$ and $\sum_{k=0}^{\infty} \alpha_k^2 < \infty$. We prove that the sequence generated in this way is weakly convergent to a minimizer if the problem has solutions, and is unbounded otherwise. Among the features of our convergence analysis, we mention that it covers the nonsmooth case, in the sense that we make no assumption of differentiability of $f$, much less of Lipschitz continuity of its gradient. Also, we prove weak convergence of the whole sequence, rather than just boundedness of the sequence and optimality of its weak accumulation points, thus improving over all previously known convergence results. We also present convergence rate results. © 1998 The Mathematical Programming Society, Inc. Published by Elsevier Science B.V.
190 citations
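The projected subgradient scheme described above can be sketched in a few lines. This is a toy illustration, not the paper's Hilbert-space setting: it uses exact subgradients (i.e., $\epsilon_k = 0$), a finite-dimensional problem, and the hypothetical stepsize choice $\alpha_k = 1/(k+1)$, which satisfies the divergent-sum / square-summable conditions quoted in the abstract.

```python
import numpy as np

def project_unit_ball(x):
    """Orthogonal projection onto the closed unit Euclidean ball."""
    nrm = np.linalg.norm(x)
    return x if nrm <= 1.0 else x / nrm

def projected_subgradient(subgrad, project, x0, n_iter=1000):
    """One subgradient step followed by projection, with diminishing
    stepsizes alpha_k = 1/(k+1): sum alpha_k = inf, sum alpha_k^2 < inf."""
    x = np.asarray(x0, dtype=float)
    for k in range(n_iter):
        alpha = 1.0 / (k + 1)
        x = project(x - alpha * subgrad(x))
    return x

# Minimize the nonsmooth f(x) = ||x - c||_1 over the unit ball, with
# c = (2, 0); the constrained minimizer is (1, 0).
c = np.array([2.0, 0.0])
x_star = projected_subgradient(lambda x: np.sign(x - c),
                               project_unit_ball, np.zeros(2))
```

Note that the objective is nondifferentiable at the minimizer, matching the nonsmooth setting the analysis covers.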
TL;DR: A Wiener system, i.e., a system in which a linear dynamic part is followed by a nonlinear and memoryless one, is identified and a nonparametric algorithm recovering the characteristic from input-output observations of the whole system is proposed.
Abstract: A Wiener system, i.e., a system in which a linear dynamic part is followed by a nonlinear and memoryless one, is identified. No parametric restriction is imposed on the functional form of the nonlinear characteristic of the memoryless subsystem, and a nonparametric algorithm recovering the characteristic from input-output observations of the whole system is proposed. Its consistency is shown and the rate of convergence is given. An idea for identification of the impulse response of the linear subsystem is proposed. Results of numerical simulation are also presented.
190 citations
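To illustrate the nonparametric flavor of such estimators (this is a generic Nadaraya-Watson kernel regression sketch, not the paper's exact algorithm), one can recover a memoryless nonlinearity from noisy input-output samples; the bandwidth choice $h = n^{-1/5}$ below is a standard assumption tied to nonparametric rates of convergence.

```python
import numpy as np

rng = np.random.default_rng(0)

def nw_estimate(u_obs, y_obs, u, h):
    """Nadaraya-Watson kernel regression estimate of E[y | u] with a
    Gaussian kernel and bandwidth h."""
    w = np.exp(-0.5 * ((u_obs - u) / h) ** 2)
    return np.sum(w * y_obs) / np.sum(w)

# Toy data: memoryless nonlinearity m(u) = u**3 observed through noise.
n = 5000
u_obs = rng.uniform(-1.0, 1.0, n)
y_obs = u_obs ** 3 + 0.1 * rng.standard_normal(n)

# Estimate m(0.5) = 0.125 with the rate-motivated bandwidth n^{-1/5}.
est = nw_estimate(u_obs, y_obs, u=0.5, h=n ** (-1 / 5))
```

As $n$ grows (and $h$ shrinks accordingly), the estimate concentrates around the true characteristic, which is the sense in which a rate of convergence is stated for such algorithms.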
TL;DR: It is shown that Newton's method, which is often used for solving nonlinear equations, converges under weaker convergence criteria than those given in earlier studies such as Argyros (2004) and Hilout (2010).
190 citations
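For reference, the classical Newton iteration the convergence criteria concern is $x_{k+1} = x_k - f(x_k)/f'(x_k)$; a minimal sketch (the function and starting point below are illustrative choices, not from the paper):

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Classical Newton iteration x_{k+1} = x_k - f(x_k)/f'(x_k);
    stops when the step size falls below tol."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Solve x^2 - 2 = 0 starting from 1.5; near the root the error roughly
# squares at each step (quadratic convergence).
root = newton(lambda x: x * x - 2, lambda x: 2 * x, x0=1.5)
```

Weaker convergence criteria, as in the paper, enlarge the set of starting points and functions for which such an iteration is guaranteed to converge.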
TL;DR: It is proved that simultaneously the thresholded Lasso and Dantzig estimators with a proper choice of the threshold enjoy a sign concentration property provided that the non-zero components of the target vector are not too small.
Abstract: We derive the $l_{\infty}$ convergence rate simultaneously for Lasso and Dantzig estimators in a high-dimensional linear regression model under a mutual coherence assumption on the Gram matrix of the design and two different assumptions on the noise: Gaussian noise and general noise with finite variance. Then we prove that simultaneously the thresholded Lasso and Dantzig estimators with a proper choice of the threshold enjoy a sign concentration property provided that the non-zero components of the target vector are not too small.
190 citations
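The thresholding idea above can be sketched numerically. This is a hedged toy example, not the paper's construction: it solves the Lasso by plain ISTA (proximal gradient) on a small synthetic design, then zeroes coefficients below an illustrative threshold and checks that the signs of the true nonzero components are recovered.

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_ista(X, y, lam, n_iter=500):
    """ISTA for (1/(2n))||y - Xb||^2 + lam * ||b||_1."""
    n, p = X.shape
    L = np.linalg.norm(X, 2) ** 2 / n   # Lipschitz constant of the gradient
    b = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (X @ b - y) / n
        b = soft_threshold(b - grad / L, lam / L)
    return b

rng = np.random.default_rng(0)
n, p = 100, 20
beta = np.zeros(p)
beta[[0, 3, 7]] = [2.0, -1.5, 1.0]      # sparse target with known signs
X = rng.standard_normal((n, p))
y = X @ beta + 0.1 * rng.standard_normal(n)

b_hat = lasso_ista(X, y, lam=0.05)
# Threshold small coefficients to zero before reading off signs.
b_thr = np.where(np.abs(b_hat) > 0.2, b_hat, 0.0)
```

The sign-recovery guarantee in the paper is of this form: provided the nonzero components of the target are not too small relative to the threshold, the thresholded estimator matches their sign pattern with high probability.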
TL;DR: This paper develops a lower bound technique that is particularly well suited for treating “two-directional” problems such as estimating sparse covariance matrices and establishes the optimal rate of convergence under a range of matrix operator norm and Bregman divergence losses.
Abstract: This paper considers estimation of sparse covariance matrices and establishes the optimal rate of convergence under a range of matrix operator norm and Bregman divergence losses. A major focus is on the derivation of a rate sharp minimax lower bound. The problem exhibits new features that are significantly different from those that occur in the conventional nonparametric function estimation problems. Standard techniques fail to yield good results, and new tools are thus needed. We first develop a lower bound technique that is particularly well suited for treating "two-directional" problems such as estimating sparse covariance matrices. The result can be viewed as a generalization of Le Cam's method in one direction and Assouad's Lemma in another. This lower bound technique is of independent interest and can be used for other matrix estimation problems. We then establish a rate sharp minimax lower bound for estimating sparse covariance matrices under the spectral norm by applying the general lower bound technique. A thresholding estimator is shown to attain the optimal rate of convergence under the spectral norm. The results are then extended to the general matrix $\ell_w$ operator norms for $1\le w\le \infty$. In addition, we give a unified result on the minimax rate of convergence for sparse covariance matrix estimation under a class of Bregman divergence losses.
189 citations
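The thresholding estimator shown to be rate-optimal under the spectral norm can be sketched as follows. The threshold level $\tau \propto \sqrt{\log p / n}$ is the standard choice in this literature; the constant 2 and the synthetic covariance below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

p, n = 30, 500
sigma = np.eye(p)
sigma[0, 1] = sigma[1, 0] = 0.5         # one true off-diagonal entry
X = rng.multivariate_normal(np.zeros(p), sigma, size=n)

S = np.cov(X, rowvar=False)             # sample covariance

# Hard-threshold entries at tau ~ sqrt(log p / n), keeping the diagonal.
tau = 2.0 * np.sqrt(np.log(p) / n)
S_thr = np.where(np.abs(S) >= tau, S, 0.0)
np.fill_diagonal(S_thr, np.diag(S))
```

The thresholded estimate zeroes the small spurious off-diagonal entries of the sample covariance while retaining the genuinely nonzero one, which is why it attains a faster rate than the raw sample covariance under the spectral norm when the truth is sparse.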