
Showing papers in "Neural Networks in 2020"


Journal ArticleDOI
TL;DR: This work develops a novel architecture, MultiResUNet, as a potential successor to the U-Net architecture, and tests and compares it with the classical U-Net on a vast repertoire of multimodal medical images.

1,027 citations


Journal ArticleDOI
TL;DR: A comparative study of deep techniques in image denoising, classifying the deep convolutional neural networks (CNNs) for additive white noisy images, the deep CNNs for real noisy images, the deep CNNs for blind denoising, and the deep networks for hybrid noisy images.

518 citations


Journal ArticleDOI
TL;DR: An attention-guided denoising convolutional neural network (ADNet) is proposed for image denoising, mainly comprising a sparse block (SB), a feature enhancement block (FEB), an attention block (AB), and a reconstruction block (RB).

343 citations


Journal ArticleDOI
TL;DR: The design of a novel network called a batch-renormalization denoising network (BRDNet) is reported, which combines two networks to increase the width of the network and thus obtain more features.

305 citations


Journal ArticleDOI
TL;DR: This study establishes that RNNs are a potent computational framework for the learning and forecasting of complex spatiotemporal systems.

272 citations


Journal ArticleDOI
TL;DR: This paper utilizes LSTM to obtain a data-driven forecasting model for an application of weather forecasting and proposes Transductive LSTM (T-LSTM), which exploits the local information in time-series prediction.

224 citations
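The transductive variant (T-LSTM) is not specified in this summary; as a generic, hypothetical sketch of the standard LSTM gating that such forecasters build on (all names, sizes, and data are illustrative, not the paper's), a single time step in NumPy might look like:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, b):
    """One LSTM time step.

    x: input vector (d,); h_prev/c_prev: previous hidden/cell state (n,);
    W: weight matrix (4n, d+n); b: bias (4n,). Gates are stacked in the
    order [input, forget, cell-candidate, output].
    """
    n = h_prev.shape[0]
    z = W @ np.concatenate([x, h_prev]) + b
    i = sigmoid(z[:n])          # input gate
    f = sigmoid(z[n:2 * n])     # forget gate
    g = np.tanh(z[2 * n:3 * n]) # candidate cell update
    o = sigmoid(z[3 * n:])      # output gate
    c = f * c_prev + i * g      # new cell state
    h = o * np.tanh(c)          # new hidden state
    return h, c

# Unroll over a toy series to produce a hidden representation, which a
# linear head would then map to the forecast.
rng = np.random.default_rng(0)
d, n, T = 1, 8, 20
W, b = rng.normal(scale=0.1, size=(4 * n, d + n)), np.zeros(4 * n)
h, c = np.zeros(n), np.zeros(n)
series = np.sin(np.linspace(0, 3, T))
for t in range(T):
    h, c = lstm_step(np.array([series[t]]), h, c, W, b)
print(h.shape)  # (8,)
```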


Journal ArticleDOI
TL;DR: It can be concluded that EEG-based classification of seizure type using the CNN model could be used in pre-surgical evaluation for treating patients with epilepsy.

219 citations


Journal ArticleDOI
TL;DR: The paper takes a top-down view of the problem, introducing a generalized formulation of graph representation learning based on a local and iterative approach to structured information processing and introduces the basic building blocks that can be combined to design novel and effective neural models for graphs.

200 citations


Journal ArticleDOI
TL;DR: A review of recent developments in learning of spiking neurons and a critical review of the state-of-the-art learning algorithms for SNNs using single and multiple spikes are presented.

183 citations


Journal ArticleDOI
TL;DR: An improved recurrent neural network (RNN) scheme is proposed to perform the trajectory control of redundant robot manipulators using remote center of motion (RCM) constraints to facilitate accurate task tracking based on the general quadratic performance index.

177 citations


Journal ArticleDOI
TL;DR: This paper designs a series of contrast tests using different types of datasets (ANN-oriented and SNN-oriented), diverse processing models, signal conversion methods, and learning algorithms, and recommends the most suitable model for each scenario and highlights the urgent need to build a benchmarking framework for SNNs with broader tasks, datasets, and metrics.

Journal ArticleDOI
TL;DR: A novel multi-modal Machine Learning (ML) based approach is proposed to integrate EEG engineered features for automatic classification of brain states; results show that the Multi-Layer Perceptron (MLP) classifier outperforms all other models, specifically the Autoencoder, Logistic Regression (LR), and Support Vector Machine (SVM).

Journal ArticleDOI
TL;DR: A unified multiview subspace clustering model is proposed in which the graph learning from each view, the generation of basic partitions, and the fusion of the consensus partition are seamlessly integrated and iteratively boost each other towards an overall optimal solution.

Journal ArticleDOI
TL;DR: The proposed xDNN offers a new deep learning architecture that combines reasoning and learning in a synergy that is non-iterative and non-parametric, which explains its efficiency in terms of time and computational resources.

Journal ArticleDOI
TL;DR: This paper proposes an adaptive hierarchical network structure composed of DCNNs that can grow and learn as new data becomes available, and improves upon existing hierarchical CNN models by adding the capability of self-growth.

Journal ArticleDOI
TL;DR: This work proposes an approach based on a convolutional neural network pre-trained on a large-scale image classification task that achieves competitive and consistent results across multiple evaluation metrics on two public saliency benchmarks and demonstrates the effectiveness of the suggested approach on five datasets and selected examples.

Journal ArticleDOI
TL;DR: A three-dimensional fractional-order discrete Hopfield neural network (FODHNN) in the left Caputo discrete delta sense is proposed; the dynamic behavior and synchronization of the FODHNN are studied, and the system is applied to image encryption.
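For contrast with the fractional-order model, here is a minimal sketch of the classical discrete Hopfield network it generalizes (Hebbian storage plus sign updates; purely illustrative, not the paper's Caputo-delta dynamics, and the pattern data is made up):

```python
import numpy as np

def hebbian_weights(patterns):
    """Hebbian storage: W = (1/N) * sum_p x_p x_p^T, with zero diagonal."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, state, steps=10):
    """Synchronous sign-update dynamics of a classical discrete Hopfield net."""
    for _ in range(steps):
        new = np.sign(W @ state)
        new[new == 0] = 1  # break ties toward +1
        if np.array_equal(new, state):
            break
        state = new
    return state

# Store one +/-1 pattern and recover it from a corrupted copy.
p = np.array([1, -1, 1, 1, -1, -1, 1, -1], dtype=float)
W = hebbian_weights(p[None, :])
noisy = p.copy()
noisy[0] *= -1  # flip one bit
print(np.array_equal(recall(W, noisy), p))  # True
```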

Journal ArticleDOI
TL;DR: This paper proposes a new heart sound classification method based on improved Mel-frequency cepstrum coefficient (MFCC) features and convolutional recurrent neural networks and comprehensive studies on the performance of different network parameters and different network connection strategies are presented.

Journal ArticleDOI
TL;DR: In this article, the authors present a comprehensive overview of the use of Spiking Neural Networks for online learning in non-stationary data streams and propose a new algorithm to adapt to these changes as fast as possible, while maintaining good performance scores.

Journal ArticleDOI
TL;DR: It is proved that downsampled deep convolutional neural networks can be used to approximate ridge functions nicely, which hints at some advantages of these structured networks in terms of approximation or modeling; a theorem for approximating functions on Riemannian manifolds is also presented, which demonstrates that deep convolutional neural networks can be used to learn manifold features of data.

Journal ArticleDOI
TL;DR: This article presents a comprehensive review of supervised learning algorithms for spiking neural networks, evaluates them qualitatively and quantitatively, provides five qualitative performance evaluation criteria, and presents a new taxonomy for supervised learning algorithms based on these five criteria.
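The spiking neurons that such learning algorithms train can be illustrated with a minimal leaky integrate-and-fire simulation (a generic textbook model, not any specific algorithm from the review; all parameter values are illustrative):

```python
import numpy as np

def lif_simulate(current, dt=1.0, tau=20.0, v_rest=0.0, v_th=1.0, v_reset=0.0):
    """Euler simulation of a leaky integrate-and-fire (LIF) neuron.

    Membrane dynamics: dv/dt = (-(v - v_rest) + I(t)) / tau.
    A spike is recorded and the membrane reset whenever v crosses v_th.
    """
    v = v_rest
    spikes = []
    for t, I in enumerate(current):
        v += dt * (-(v - v_rest) + I) / tau
        if v >= v_th:
            spikes.append(t)
            v = v_reset
    return spikes

# A constant supra-threshold input drives a regular spike train.
spikes = lif_simulate(np.full(200, 1.5))
print(len(spikes) > 1)  # True
```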

Journal ArticleDOI
TL;DR: In this paper, the authors study the average generalization dynamics of large neural networks trained using gradient descent and find that the dynamics of gradient descent learning naturally protect against overtraining and overfitting in large networks.

Journal ArticleDOI
TL;DR: Numerical simulations illustrate that the new upper-bound estimate formula for the settling time is much tighter than those in the existing fixed-time stability theorems, and that the plaintext signals can be recovered according to the new fixed-time stability theorem, whereas they cannot be recovered under the existing theorems.

Journal ArticleDOI
TL;DR: The proposed NFN+ model achieved, to the best of the authors' knowledge, state-of-the-art retinal vessel segmentation accuracy on color fundus images (AUC: 98.30%, 98.75%, and 98.94%, respectively).

Journal ArticleDOI
TL;DR: In this article, the authors proposed new symplectic networks (SympNets) for identifying Hamiltonian systems from data based on a composition of linear, activation and gradient modules.

Journal ArticleDOI
TL;DR: This paper investigates motor planning activity by using electroencephalographic signals with the aim to decode motor preparation phases using a novel system that succeeded in discriminating premovement from resting with an average accuracy of 90.3%, outperforming comparable methods in the literature.

Journal ArticleDOI
TL;DR: A new event-triggered mechanism (ETM) is presented, which can be regarded as a switching between the discrete-time periodic sampled-data control and a continuous ETM; a saturating controller which is equipped with two switching gains is designed to match the switching property of the proposed ETM.

Journal ArticleDOI
TL;DR: In this article, the authors propose a unified complete quantization framework termed "WAGEUBN" to quantize DNNs involving all data paths, including W (Weights), A (Activation), G (Gradient), E (Error), U (Update), and BN.

Journal ArticleDOI
TL;DR: A class-weighted adversarial neural network is proposed to encourage positive transfer of the shared classes and ignore the source outliers in a challenging partial transfer learning problem, using a deep learning-based domain adaptation method.

Journal ArticleDOI
TL;DR: In this article, the approximation rate of a two-layer neural network with a polynomially-decaying non-sigmoidal activation function was shown to be dimension independent.