Journal ISSN: 2162-237X

IEEE Transactions on Neural Networks and Learning Systems

Institute of Electrical and Electronics Engineers
About: IEEE Transactions on Neural Networks and Learning Systems is an academic journal published by the Institute of Electrical and Electronics Engineers (IEEE). The journal publishes primarily in the areas of computer science and medicine. Its ISSN identifier is 2162-237X. Over its lifetime, 1327 publications have appeared, receiving 3077 citations. The journal is also known as IEEE Trans Neural Netw Learn Syst.

Papers published on a yearly basis

Papers
Journal Article · DOI
TL;DR: This article proposes an adaptive neural network (NN) output-feedback optimized control design for a class of strict-feedback nonlinear systems with unknown internal dynamics and immeasurable states constrained within predefined compact sets.
Abstract: This article proposes an adaptive neural network (NN) output-feedback optimized control design for a class of strict-feedback nonlinear systems that contain unknown internal dynamics and states that are immeasurable and constrained within some predefined compact sets. NNs are used to approximate the unknown internal dynamics, and an adaptive NN state observer is developed to estimate the immeasurable states. By constructing barrier-type optimal cost functions for the subsystems and employing the observer and the actor-critic architecture, the virtual and actual optimal controllers are developed within the framework of the backstepping technique. In addition to ensuring the boundedness of all closed-loop signals, the proposed strategy guarantees that the system states are confined within some preselected compact sets at all times. This is achieved by means of barrier Lyapunov functions, which have been successfully applied to various kinds of nonlinear systems, such as strict-feedback and pure-feedback dynamics. Moreover, the developed optimal controller requires fewer conditions on the system dynamics than some existing optimal control approaches. The effectiveness of the proposed approach is validated by both numerical and practical examples.
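
To make the constraint mechanism concrete, the following is a minimal sketch of a log-type barrier Lyapunov function of the kind referred to above; the notation (error signal z, constraint bound k_b) is generic and not necessarily the paper's exact construction.

```latex
% Log-type barrier Lyapunov function for an error signal z constrained by |z| < k_b:
V_b(z) = \frac{1}{2}\ln\frac{k_b^2}{k_b^2 - z^2}, \qquad |z| < k_b.
% V_b(0) = 0, V_b(z) > 0 otherwise, and V_b(z) \to \infty as |z| \to k_b,
% so any control law keeping V_b bounded along closed-loop trajectories
% keeps z strictly inside the prescribed compact set for all time.
```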

217 citations

Journal Article · DOI
TL;DR: This paper uses a novel double-layer switching regulation, combining a Markov chain with persistent dwell-time switching regulation (PDTSR), to solve the H∞ synchronization problem for singularly perturbed coupled neural networks (SPCNNs).
Abstract: This work explores the H∞ synchronization problem for singularly perturbed coupled neural networks (SPCNNs) affected by both nonlinear constraints and gain uncertainties, in which a novel double-layer switching regulation containing a Markov chain and persistent dwell-time switching regulation (PDTSR) is used. The first layer of switching regulation is the Markov chain, which characterizes the switching stochastic properties of systems suffering from random component failures and sudden environmental disturbances. Meanwhile, PDTSR, as the second-layer switching regulation, depicts the variations in the transition probability of the aforementioned Markov chain. For systems under this double-layer switching regulation, the goal is to design a mode-dependent synchronization controller for the network, with the desired controller gains calculated by solving convex optimization problems. To this end, new sufficient conditions are established to ensure that the synchronization error systems are mean-square exponentially stable with a specified level of H∞ performance. Finally, the solvability and validity of the proposed control scheme are illustrated through a numerical simulation.
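
For readers unfamiliar with the performance notions above, the following are the standard definitions in generic form (the paper's weighted variants may differ) of what the designed controller is required to guarantee.

```latex
% Mean-square exponential stability of the synchronization error e(t):
\mathbb{E}\{\|e(t)\|^2\} \le \alpha\, e^{-\beta t}\, \mathbb{E}\{\|e(0)\|^2\},
\qquad \alpha > 0,\ \beta > 0,
% and, under zero initial conditions, H-infinity performance at level gamma:
\mathbb{E}\left\{\int_0^\infty e^{\top}(t)\, e(t)\, dt\right\}
\le \gamma^2 \int_0^\infty v^{\top}(t)\, v(t)\, dt
% for every nonzero disturbance v in L_2[0, \infty).
```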

128 citations

Journal Article · DOI
TL;DR: This paper presents a comprehensive review of 62 state-of-the-art robust training methods, categorized into five groups according to their methodological differences, followed by a systematic comparison of six properties used to evaluate their superiority.
Abstract: Deep learning has achieved remarkable success in numerous domains with the help of large amounts of data. However, the quality of data labels is a concern because high-quality labels are lacking in many real-world scenarios. As noisy labels severely degrade the generalization performance of deep neural networks, learning from noisy labels (robust training) has become an important task in modern deep learning applications. In this survey, we first describe the problem of learning with label noise from a supervised learning perspective. Next, we provide a comprehensive review of 62 state-of-the-art robust training methods, all of which are categorized into five groups according to their methodological differences, followed by a systematic comparison of six properties used to evaluate their superiority. Subsequently, we perform an in-depth analysis of noise rate estimation and summarize the typically used evaluation methodology, including public noisy datasets and evaluation metrics. Finally, we present several promising research directions that can serve as a guideline for future studies.
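
As a concrete illustration of the kind of method such a survey covers, below is a minimal PyTorch sketch of one well-known robust loss, the generalized cross entropy (GCE) of Zhang and Sabuncu, which interpolates between standard cross entropy (q → 0) and the noise-robust MAE (q = 1). It is one representative technique, not the survey's own method.

```python
# Minimal sketch of the generalized cross entropy (GCE) robust loss.
import torch
import torch.nn.functional as F

def gce_loss(logits: torch.Tensor, targets: torch.Tensor, q: float = 0.7) -> torch.Tensor:
    """L_q = (1 - p_y^q) / q, averaged over the batch."""
    probs = F.softmax(logits, dim=1)
    # Probability assigned to the (possibly noisy) label of each sample.
    p_y = probs.gather(1, targets.unsqueeze(1)).squeeze(1)
    return ((1.0 - p_y.clamp_min(1e-7) ** q) / q).mean()
```

In an otherwise standard training loop, replacing F.cross_entropy(logits, labels) with gce_loss(logits, labels) is the only change; the bounded loss limits the influence of mislabeled samples.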

110 citations

Journal Article · DOI
TL;DR: This article proposes a novel method that applies a gradient activation function (GAF) to the gradient to handle the saddle point problem: the GAF enlarges tiny gradients and restricts large gradients.
Abstract: Deep neural networks often suffer from poor performance or even training failure due to the ill-conditioned problem, the vanishing/exploding gradient problem, and the saddle point problem. In this article, a novel method that applies a gradient activation function (GAF) to the gradient is proposed to handle these challenges. Intuitively, the GAF enlarges tiny gradients and restricts large gradients. Theoretically, this article gives the conditions that the GAF needs to meet and, on this basis, proves that the GAF alleviates the problems mentioned above. In addition, this article proves that the convergence rate of SGD with the GAF is faster than that without the GAF under some assumptions. Furthermore, experiments on CIFAR, ImageNet, and PASCAL Visual Object Classes confirm the GAF's effectiveness. The experimental results also demonstrate that the proposed method can be adopted in various deep neural networks to improve their performance. The source code is publicly available at https://github.com/LongJin-lab/Activated-Gradients-for-Deep-Neural-Networks.
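
The description above suggests a simple implementation pattern: transform each parameter gradient with the GAF after backpropagation and before the optimizer step. The sketch below uses a scaled tanh with illustrative hyperparameters; it has the qualitative properties described (amplifies tiny gradients, bounds large ones), but the authors' exact functions should be taken from the repository linked above.

```python
# Sketch of applying a gradient activation function (GAF) before the update.
import torch

def gaf(grad: torch.Tensor, alpha: float = 0.5, beta: float = 10.0) -> torch.Tensor:
    # Near zero the slope is alpha * beta (= 5 here), so tiny gradients are
    # enlarged; tanh saturation bounds the output by alpha, restricting large ones.
    return alpha * torch.tanh(beta * grad)

def step_with_gaf(model: torch.nn.Module, loss: torch.Tensor,
                  optimizer: torch.optim.Optimizer) -> None:
    optimizer.zero_grad()
    loss.backward()
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is not None:
                p.grad.copy_(gaf(p.grad))  # activate gradients in place
    optimizer.step()
```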

69 citations

Journal Article · DOI
TL;DR: This survey explores the domain of personalized federated learning (PFL) to address the fundamental challenges of FL on heterogeneous data, a characteristic inherent in real-world datasets.
Abstract: In parallel with the rapid adoption of artificial intelligence (AI) empowered by advances in AI research, there has been growing awareness of and concern about data privacy. Recent significant developments in the data regulation landscape have prompted a seismic shift in interest toward privacy-preserving AI. This has contributed to the popularity of federated learning (FL), the leading paradigm for training machine learning models on data silos in a privacy-preserving manner. In this survey, we explore the domain of personalized FL (PFL) to address the fundamental challenges of FL on heterogeneous data, a characteristic inherent in real-world datasets. We analyze the key motivations for PFL and present a unique taxonomy of PFL techniques categorized according to the key challenges and personalization strategies in PFL. We highlight their key ideas, challenges, and opportunities, and we envision promising future trajectories of research toward new PFL architectural designs, realistic PFL benchmarking, and trustworthy PFL approaches.
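
To ground the idea of personalization on heterogeneous data, here is a minimal sketch of one simple PFL pattern: a shared feature-extractor backbone aggregated by federated averaging while each client keeps a private classification head. The client interface (backbone, train_locally) is hypothetical, and this is just one of the strategy families a PFL taxonomy would cover.

```python
# Sketch of parameter-decoupling personalization: average the backbone only.
import copy
import torch

def fedavg(state_dicts: list) -> dict:
    """Element-wise average of the shared backbone parameters across clients."""
    avg = copy.deepcopy(state_dicts[0])
    for key in avg:
        avg[key] = torch.stack([sd[key].float() for sd in state_dicts]).mean(dim=0)
    return avg

def pfl_round(server_backbone: dict, clients: list) -> dict:
    """One communication round: broadcast the backbone, train locally, aggregate."""
    local_backbones = []
    for client in clients:
        client.backbone.load_state_dict(server_backbone)  # pull the shared part
        client.train_locally()       # updates the backbone and the private head
        local_backbones.append(client.backbone.state_dict())
    return fedavg(local_backbones)   # only the backbone is averaged; heads stay local
```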

67 citations

Performance Metrics

No. of papers from the journal in previous years:

Year    Papers
2023    1,063
2022    2,034