Author

Sever S Dragomir

Bio: Sever S Dragomir is an academic researcher from Victoria University, Australia. The author has contributed to research in topics including convex functions and the Kantorovich inequality. The author has an h-index of 59, has co-authored 840 publications, and has received 14,865 citations. Previous affiliations of Sever S Dragomir include the West University of Timișoara and the University of Adelaide.


Papers
Posted Content
TL;DR: The Hermite-Hadamard double inequality for convex functions has been studied extensively in the literature; this monograph presents a survey of Hermite-Hadamard type inequalities and their applications.
Abstract: The Hermite-Hadamard double inequality is the first fundamental result for convex functions defined on an interval of real numbers, with a natural geometrical interpretation and many applications to particular inequalities. In this monograph we present the basic facts related to Hermite-Hadamard inequalities for convex functions and a large number of results for special means which can naturally be deduced. Hermite-Hadamard type inequalities for other concepts of convexity are also given. The properties of a number of functions, functionals, and sequences of functions which can be associated in order to refine the result are pointed out. Recent references that are available online are mentioned as well.
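
For reference, the Hermite-Hadamard double inequality states that for a convex function f on [a, b] (a standard statement of the result, included here for orientation rather than quoted from the monograph):

f\left(\frac{a+b}{2}\right) \le \frac{1}{b-a}\int_a^b f(x)\,dx \le \frac{f(a)+f(b)}{2}.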

685 citations

Journal Article
TL;DR: In this paper, two inequalities for differentiable convex mappings are given which are connected with the celebrated Hermite-Hadamard integral inequality for convex functions.
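
A representative trapezoid-type bound of this kind, as commonly stated in the follow-up literature (reconstructed here for illustration, not quoted from the paper), holds for a differentiable mapping f on [a, b] whose derivative modulus |f'| is convex:

\left|\frac{f(a)+f(b)}{2} - \frac{1}{b-a}\int_a^b f(x)\,dx\right| \le \frac{(b-a)\left(|f'(a)|+|f'(b)|\right)}{8}.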

622 citations

Journal Article
TL;DR: An inequality of Hadamard's type for convex functions on the co-ordinates defined in a rectangle in the plane is established in this article, together with some applications.
Abstract: An inequality of Hadamard's type for convex functions and for convex functions on the co-ordinates defined in a rectangle in the plane is given, together with some applications.
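
For orientation, the Hadamard-type inequality for a function f that is convex on the co-ordinates of the rectangle [a, b] \times [c, d] can be stated as follows (a standard formulation from the related literature, not quoted from the article):

f\left(\frac{a+b}{2},\frac{c+d}{2}\right) \le \frac{1}{(b-a)(d-c)}\int_a^b\!\int_c^d f(x,y)\,dy\,dx \le \frac{f(a,c)+f(a,d)+f(b,c)+f(b,d)}{4}.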

292 citations

Journal Article
TL;DR: In this article, a new inequality of Ostrowski-Grüss type is derived and applied to the estimation of error bounds for some special means and for some numerical quadrature rules.
Abstract: In this paper we derive a new inequality of Ostrowski-Grüss type and apply it to the estimation of error bounds for some special means and for some numerical quadrature rules.
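
One Ostrowski-Grüss type bound of this kind (stated here from the general literature, not quoted from the paper) reads: if f is differentiable on [a, b] with γ ≤ f'(t) ≤ Γ for all t, then for every x in [a, b]

\left| f(x) - \frac{1}{b-a}\int_a^b f(t)\,dt - \frac{f(b)-f(a)}{b-a}\left(x-\frac{a+b}{2}\right) \right| \le \frac{1}{4}(b-a)(\Gamma-\gamma).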

287 citations

Book
01 Sep 2003
TL;DR: Some Gronwall-type inequalities for kernels of L-type, with applications in the qualitative theory of Volterra integral equations and of systems of differential equations, are presented in this book.
Abstract: Some Gronwall-type inequalities for kernels of L-type and their applications in the qualitative theory of Volterra integral equations and of systems of differential equations are presented.
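
For context, the classical (linear) Gronwall inequality that such results generalize can be stated as follows (standard form, not quoted from the book): if u and β are continuous, β ≥ 0, and u(t) ≤ α + \int_{t_0}^{t} \beta(s)\,u(s)\,ds for t ≥ t_0 with constant α, then

u(t) \le \alpha \exp\left(\int_{t_0}^{t} \beta(s)\,ds\right).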

252 citations


Cited by
Journal Article


08 Dec 2001-BMJ
TL;DR: A reflection on i, the square root of minus one, described as an odd beast and an intruder hovering on the edge of reality, whose surreal character intensified rather than dulled with familiarity.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

Journal Article
01 May 1970

1,935 citations

Proceedings Article
20 Jun 2018
TL;DR: This talk introduces the Neural Tangent Kernel (NTK) formalism, gives a number of results on the NTK, and explains how they give insight into the dynamics of neural networks during training and into their generalization features.
Abstract: At initialization, artificial neural networks (ANNs) are equivalent to Gaussian processes in the infinite-width limit, thus connecting them to kernel methods. We prove that the evolution of an ANN during training can also be described by a kernel: during gradient descent on the parameters of an ANN, the network function (which maps input vectors to output vectors) follows the so-called kernel gradient associated with a new object, which we call the Neural Tangent Kernel (NTK). This kernel is central to describing the generalization features of ANNs. While the NTK is random at initialization and varies during training, in the infinite-width limit it converges to an explicit limiting kernel and stays constant during training. This makes it possible to study the training of ANNs in function space instead of parameter space. Convergence of the training can then be related to the positive-definiteness of the limiting NTK. We then focus on the setting of least-squares regression and show that in the infinite-width limit, the network function follows a linear differential equation during training. The convergence is fastest along the largest kernel principal components of the input data with respect to the NTK, hence suggesting a theoretical motivation for early stopping. Finally, we study the NTK numerically, observe its behavior for wide networks, and compare it to the infinite-width limit.
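
For reference, the Neural Tangent Kernel of a network function f(x; θ) can be written as (a standard definition consistent with the abstract, not quoted from it):

\Theta(x, x') = \nabla_\theta f(x;\theta) \cdot \nabla_\theta f(x';\theta) = \sum_{p} \partial_{\theta_p} f(x;\theta)\,\partial_{\theta_p} f(x';\theta),

so that under gradient descent on a loss L the outputs on the training set evolve as \partial_t f_t(x) = -\sum_{x'} \Theta(x, x')\,\partial_{f(x')} L, which is the kernel gradient flow described in the abstract.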

1,787 citations