Author

Giuseppe Marino

Bio: Giuseppe Marino is an academic researcher from the University of Calabria. The author has contributed to research on topics including fixed point theory and variational inequalities. The author has an h-index of 24 and has co-authored 155 publications receiving 4,120 citations. Previous affiliations of Giuseppe Marino include Seconda Università degli Studi di Napoli and King Abdulaziz University.


Papers
Journal ArticleDOI
TL;DR: Theorem 2.7 as discussed by the authors generalizes a result of Gao and Xu [4] concerning the approximation of functions of bounded variation by linear combinations of a fixed sigmoidal function.
Abstract: We generalize a result of Gao and Xu [4] concerning the approximation of functions of bounded variation by linear combinations of a fixed sigmoidal function to the class of functions of bounded φ-variation (Theorem 2.7). Also, in the case of one variable, [1: Proposition 1] is improved. Our proofs are similar to those in [4].
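The precise statement of Theorem 2.7 is not reproduced on this page. As a hedged sketch, results of this kind bound the error of approximants built from a single fixed sigmoidal function σ; the error functional ε below and its dependence on the φ-variation of f are placeholders for illustration, not the paper's bound.

% Hedged sketch: generic form of the sigmoidal approximants; the exact hypotheses and
% error estimate of Theorem 2.7 are not given in this excerpt.
\[
  G_N(x) \;=\; \sum_{j=1}^{N} c_j\,\sigma\!\left(a_j x + b_j\right),
  \qquad
  \bigl|f(x) - G_N(x)\bigr| \;\le\; \varepsilon\bigl(N,\,V_\varphi(f)\bigr),
\]
% where \sigma is a fixed sigmoidal function (\sigma(t) \to 1 as t \to +\infty and
% \sigma(t) \to 0 as t \to -\infty) and V_\varphi(f) denotes the \varphi-variation of f.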

1,316 citations

Journal ArticleDOI
TL;DR: In this paper, the authors consider a nonexpansive mapping T with a fixed point, a contraction f with coefficient 0 < α < 1, and a strongly positive bounded linear operator A with coefficient γ̄ > 0, where 0 < γ < γ̄/α is a constant.
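The iteration itself is not shown in the TL;DR above. As a hedged reconstruction, the viscosity-type scheme studied in this setting combines the contraction f, the nonexpansive mapping T, and the operator A; the parameter conditions below are recalled from the standard formulation and should be checked against the paper.

% Hedged sketch of the general iterative (viscosity-type) method; assumptions abbreviated.
\[
  x_{n+1} \;=\; \alpha_n\,\gamma f(x_n) \;+\; (I - \alpha_n A)\,T x_n,
  \qquad \alpha_n \to 0,\quad \sum_{n} \alpha_n = \infty.
\]
% Under conditions of this type the sequence (x_n) converges strongly to the fixed point x^*
% of T that solves the variational inequality
% \langle (A - \gamma f)x^*,\, x - x^* \rangle \ge 0 for all x \in \mathrm{Fix}(T).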

512 citations

Journal ArticleDOI
TL;DR: The strong convergence theorem of Nakajo and Takahashi for nonexpansive mappings is extended in this paper to strict pseudo-contractions: Mann's algorithm is modified by applying projections onto suitably constructed closed convex sets, giving an algorithm that generates a strongly convergent sequence.
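As a hedged sketch of the modification just described, a CQ-type hybrid projection step in the spirit of Nakajo and Takahashi takes the following form; the extra term in C_n for a κ-strict pseudo-contraction T is written from the standard formulation and is an assumption here, not a quotation from the paper.

% Hedged sketch of a hybrid (CQ) projection modification of Mann's algorithm.
\begin{align*}
  y_n     &= \alpha_n x_n + (1 - \alpha_n) T x_n, \\
  C_n     &= \bigl\{\, z : \|y_n - z\|^2 \le \|x_n - z\|^2
              + (1 - \alpha_n)(\kappa - \alpha_n)\,\|x_n - T x_n\|^2 \,\bigr\}, \\
  Q_n     &= \bigl\{\, z : \langle x_n - z,\; x_0 - x_n \rangle \ge 0 \,\bigr\}, \\
  x_{n+1} &= P_{C_n \cap Q_n}(x_0),
\end{align*}
% where P_K denotes the metric projection onto the closed convex set K; projecting onto
% C_n \cap Q_n at each step is what forces strong convergence of (x_n).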

477 citations

Journal ArticleDOI
TL;DR: Weak and strong convergence for some generalized proximal point algorithms are proved; these include the Eckstein and Bertsekas generalized proximal point algorithm, a contraction-proximal point algorithm, and inexact proximal point algorithms.
Abstract: Weak and strong convergence for some generalized proximal point algorithms are proved. These algorithms include the Eckstein and Bertsekas generalized proximal point algorithm, a contraction-proximal point algorithm, and inexact proximal point algorithms. Convergence rate is also considered.
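As a purely illustrative sketch, not the algorithms analyzed in the paper, the basic proximal point idea can be shown on a one-dimensional convex function whose proximal map is available in closed form; the inexact variant below perturbs each step by a summable error term, and all names and parameters are assumptions.

# Hedged sketch of a (possibly inexact) proximal point iteration on f(x) = |x|;
# illustrative only, not the generalized algorithms studied in the paper.
import numpy as np

def prox_abs(v, c):
    """Proximal map of f(x) = |x| with step size c: soft-thresholding."""
    return np.sign(v) * max(abs(v) - c, 0.0)

def proximal_point(x0, c=1.0, n_iter=50, inexact_scale=0.0, seed=0):
    rng = np.random.default_rng(seed)
    x = x0
    for k in range(n_iter):
        e_k = inexact_scale * rng.normal() / (k + 1) ** 2   # summable error sequence
        x = prox_abs(x, c) + e_k                            # x_{k+1} ~ prox_{c f}(x_k)
    return x

print(proximal_point(5.0))                       # exact steps reach the minimizer 0
print(proximal_point(5.0, inexact_scale=1e-3))   # inexact steps still end up near 0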

139 citations


Cited by
BookDOI
01 Jan 2001
TL;DR: This book presents the first comprehensive treatment of Monte Carlo techniques, including convergence results and applications to tracking, guidance, automated target recognition, aircraft navigation, robot navigation, econometrics, financial modeling, neural networks, optimal control, optimal filtering, communications, reinforcement learning, signal enhancement, model averaging and selection.
Abstract: Monte Carlo methods are revolutionizing the on-line analysis of data in fields as diverse as financial modeling, target tracking and computer vision. These methods, appearing under the names of bootstrap filters, condensation, optimal Monte Carlo filters, particle filters and survival of the fittest, have made it possible to solve numerically many complex, non-standard problems that were previously intractable. This book presents the first comprehensive treatment of these techniques, including convergence results and applications to tracking, guidance, automated target recognition, aircraft navigation, robot navigation, econometrics, financial modeling, neural networks, optimal control, optimal filtering, communications, reinforcement learning, signal enhancement, model averaging and selection, computer vision, semiconductor design, population biology, dynamic Bayesian networks, and time series analysis. The book will be of great value to students, researchers and practitioners who have some basic knowledge of probability. Arnaud Doucet received the Ph.D. degree from the University of Paris-XI Orsay in 1997. From 1998 to 2000, he conducted research at the Signal Processing Group of Cambridge University, UK. He is currently an assistant professor at the Department of Electrical Engineering of Melbourne University, Australia. His research interests include Bayesian statistics, dynamic models and Monte Carlo methods. Nando de Freitas obtained a Ph.D. degree in information engineering from Cambridge University in 1999. He is presently a research associate with the artificial intelligence group of the University of California at Berkeley. His main research interests are in Bayesian statistics and the application of on-line and batch Monte Carlo methods to machine learning. Neil Gordon obtained a Ph.D. in Statistics from Imperial College, University of London in 1993. He is with the Pattern and Information Processing group at the Defence Evaluation and Research Agency in the United Kingdom. His research interests are in time series, statistical data analysis, and pattern recognition with a particular emphasis on target tracking and missile guidance.
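As a hedged, minimal illustration of the bootstrap particle filter named above (the state-space model, noise levels, and particle count are assumptions chosen for the example, not taken from the book):

# Minimal bootstrap particle filter for a 1-D random-walk state observed in Gaussian noise.
import numpy as np

rng = np.random.default_rng(0)
T, N = 50, 500                       # time steps, particles
q, r = 0.1, 0.5                      # process / observation noise standard deviations

x_true = np.cumsum(q * rng.normal(size=T))        # simulated hidden state
y = x_true + r * rng.normal(size=T)               # noisy observations

particles = np.zeros(N)
estimates = []
for t in range(T):
    particles = particles + q * rng.normal(size=N)        # propagate through the dynamics
    w = np.exp(-0.5 * ((y[t] - particles) / r) ** 2)      # weight by the observation likelihood
    w /= w.sum()
    estimates.append(np.dot(w, particles))                # weighted-mean state estimate
    idx = rng.choice(N, size=N, p=w)                      # resampling ("survival of the fittest")
    particles = particles[idx]

print(np.mean(np.abs(np.array(estimates) - x_true)))      # mean absolute tracking error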

6,574 citations

Journal ArticleDOI
TL;DR: This paper reviews several optimization methods to improve the accuracy of the training and to reduce training time, and delves into the math behind training algorithms used in recent deep networks.
Abstract: Deep learning (DL) is playing an increasingly important role in our lives. It has already made a huge impact in areas such as cancer diagnosis, precision medicine, self-driving cars, predictive forecasting, and speech recognition. The painstakingly handcrafted feature extractors used in traditional learning, classification, and pattern recognition systems are not scalable for large-sized data sets. In many cases, depending on the problem complexity, DL can also overcome the limitations of earlier shallow networks that prevented efficient training and abstractions of hierarchical representations of multi-dimensional training data. A deep neural network (DNN) uses multiple (deep) layers of units with highly optimized algorithms and architectures. This paper reviews several optimization methods to improve the accuracy of the training and to reduce training time. We delve into the math behind training algorithms used in recent deep networks. We describe current shortcomings, enhancements, and implementations. The review also covers different types of deep architectures, such as deep convolutional networks, deep residual networks, recurrent neural networks, reinforcement learning, variational autoencoders, and others.
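As a hedged sketch of one optimization method such a review typically covers, here is mini-batch stochastic gradient descent with momentum on a toy least-squares problem; the data, hyperparameters, and variable names are illustrative assumptions.

# Mini-batch SGD with momentum on 0.5 * ||X w - y||^2 / n; illustrative sketch only.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))
w_true = rng.normal(size=10)
y = X @ w_true + 0.01 * rng.normal(size=1000)

w = np.zeros(10)
v = np.zeros(10)                                       # momentum (velocity) buffer
lr, beta, batch = 0.05, 0.9, 32
for step in range(500):
    idx = rng.choice(len(X), size=batch, replace=False)
    grad = X[idx].T @ (X[idx] @ w - y[idx]) / batch    # stochastic gradient on the mini-batch
    v = beta * v + grad                                # accumulate momentum
    w -= lr * v                                        # parameter update
print(np.linalg.norm(w - w_true))                      # should be close to zero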

907 citations

Patent
24 Jun 1991
TL;DR: In this article, an adaptive control system uses a neural network to provide adaptive control when the plant is operating within a normal operating range, but shifts to other types of control as the plant operating conditions move outside of the normal operating range.
Abstract: An adaptive control system uses a neural network to provide adaptive control when the plant is operating within a normal operating range, but shifts to other types of control as the plant operating conditions move outside of the normal operating range. The controller uses a structure which allows the neural network parameters to be determined from minimal information about plant structure and the neural network is trained on-line during normal plant operation. The resulting system can be proven to be stable over all possible conditions. Further, with the inventive techniques, the tracking accuracy can be controlled by appropriate network design.
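The patent's actual controller structure, training rule, and stability argument are not reproduced here; the snippet below is only a hedged sketch of the high-level switching idea, with placeholder function names and a made-up fallback law.

# Hedged sketch: use the neural controller inside the normal operating range,
# shift to a conservative fallback controller outside it.
def neural_control(x, net):
    """Control signal from an (already trained) neural network; `net` is a placeholder."""
    return net(x)

def fallback_control(x, k=0.5):
    """Simple conservative feedback law used outside the normal operating range."""
    return -k * x

def supervisory_control(x, net, normal_range=(-1.0, 1.0)):
    lo, hi = normal_range
    if lo <= x <= hi:                  # plant state inside its normal operating range
        return neural_control(x, net)
    return fallback_control(x)         # otherwise shift to the safe controller

# illustrative use with a stand-in "network"
print(supervisory_control(0.3, net=lambda x: -0.8 * x))
print(supervisory_control(2.5, net=lambda x: -0.8 * x))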

850 citations

Journal ArticleDOI
TL;DR: Results produced with CPPNs through interactive evolution of two-dimensional images show that such an encoding can nevertheless produce structural motifs often attributed to more conventional developmental abstractions, suggesting that local interaction may not be essential to the desirable properties of natural encoding in the way that is usually assumed.
Abstract: Natural DNA can encode complexity on an enormous scale. Researchers are attempting to achieve the same representational efficiency in computers by implementing developmental encodings, i.e. encodings that map the genotype to the phenotype through a process of growth from a small starting point to a mature form. A major challenge in this effort is to find the right level of abstraction of biological development to capture its essential properties without introducing unnecessary inefficiencies. In this paper, a novel abstraction of natural development, called Compositional Pattern Producing Networks (CPPNs), is proposed. Unlike currently accepted abstractions such as iterative rewrite systems and cellular growth simulations, CPPNs map to the phenotype without local interaction, that is, each individual component of the phenotype is determined independently of every other component. Results produced with CPPNs through interactive evolution of two-dimensional images show that such an encoding can nevertheless produce structural motifs often attributed to more conventional developmental abstractions, suggesting that local interaction may not be essential to the desirable properties of natural encoding in the way that is usually assumed.
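As a hedged, minimal illustration of the CPPN idea (a hand-written composition rather than an evolved network), each phenotype component, here a pixel, is computed independently from its coordinates by composing simple pattern-producing functions; all function choices below are arbitrary assumptions.

# Tiny fixed Compositional Pattern Producing Network queried pixel by pixel.
import numpy as np

def gaussian(t):
    return np.exp(-t * t)

def cppn(x, y):
    """Hand-written composition of pattern-producing functions; real CPPNs are evolved."""
    d = np.sqrt(x * x + y * y)                        # distance input gives radial symmetry
    return np.tanh(np.sin(4.0 * x) + gaussian(3.0 * d) - 0.5 * y)

xs = np.linspace(-1.0, 1.0, 64)
image = np.array([[cppn(x, y) for x in xs] for y in xs])   # every pixel computed independently
print(image.shape)   # (64, 64) phenotype produced with no growth process or local interaction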

751 citations

MonographDOI
05 Sep 2001
Abstract: Within this text, neural networks are considered as massively interconnected nonlinear adaptive filters.
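As a hedged illustration of that viewpoint (not code from the monograph), a single tanh neuron fed by a tapped delay line and adapted online with an LMS-style gradient rule acts as a nonlinear adaptive filter for one-step prediction; the signal and step size are assumptions.

# Single-neuron nonlinear adaptive filter: online one-step prediction of a noisy sinusoid.
import numpy as np

rng = np.random.default_rng(0)
signal = np.sin(0.2 * np.arange(500)) + 0.05 * rng.normal(size=500)

w = np.zeros(4)                         # filter taps / neuron weights
mu = 0.05                               # adaptation step size
for n in range(4, len(signal)):
    x = signal[n - 4:n][::-1]           # tapped delay line (most recent sample first)
    y = np.tanh(w @ x)                  # nonlinear filter output (predicted sample)
    e = signal[n] - y                   # prediction error
    w += mu * e * (1.0 - y * y) * x     # gradient update through the tanh nonlinearity
print(w)                                # adapted weights after one pass over the signal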

636 citations