Open Access Journal ArticleDOI

Recent advances in convolutional neural networks

TL;DR: This article provides a broad survey of recent advances in convolutional neural networks, discussing improvements to CNNs along several aspects: layer design, activation function, loss function, regularization, optimization, and fast computation.
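To make the surveyed aspects concrete, the following is a minimal PyTorch training-step sketch (not taken from the survey itself; all layer sizes and hyperparameters are illustrative assumptions) showing where layer design, activation function, loss function, regularization, and optimization each enter a CNN pipeline.

```python
# Minimal sketch (not from the survey): where each surveyed aspect appears.
import torch
import torch.nn as nn

model = nn.Sequential(                      # layer design
    nn.Conv2d(1, 16, kernel_size=3, padding=1),
    nn.ReLU(),                              # activation function
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Dropout(p=0.5),                      # regularization (dropout)
    nn.Linear(16 * 14 * 14, 10),
)
loss_fn = nn.CrossEntropyLoss()             # loss function
optimizer = torch.optim.SGD(model.parameters(), lr=0.01,
                            weight_decay=1e-4)  # optimization (+ L2 regularization)

x = torch.randn(8, 1, 28, 28)               # dummy batch of 28x28 images
y = torch.randint(0, 10, (8,))
optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
```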
About
This article was published in Pattern Recognition on 2018-05-01 and is currently open access. It has received 3,125 citations to date. The article focuses on the topics: Deep learning & Convolutional neural network.


Citations
Journal ArticleDOI

Predicting lecithin concentration from differential mobility spectrometry measurements with linear regression models and neural networks.

TL;DR: The results demonstrate that DMS is sufficiently sensitive to detect biologically relevant changes in phospholipid concentration, potentially explaining its ability to detect cancerous tissue.
Journal ArticleDOI

A survey on the application of deep learning for code injection detection

TL;DR: This survey fills that gap by analysing and classifying the existing machine learning techniques applied to code injection attack detection, with special attention to deep learning.
Journal ArticleDOI

A Novel Feature-Selection Method for Human Activity Recognition in Videos

TL;DR: The results indicated that the proposed method outperformed the original RBG feature-selection method in terms of accuracy, time, and memory requirements.
Proceedings ArticleDOI

Time-Based Roofline for Deep Learning Performance Analysis

TL;DR: In this paper, an extension of the Roofline model is introduced to analyze two representative computation kernels in deep learning, 2D convolution and long short-term memory, on NVIDIA GPUs.
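As a rough illustration of the model being extended, the sketch below computes the classic Roofline bound, min(peak compute, arithmetic intensity × memory bandwidth). The GPU peak figures and the kernel intensities are illustrative assumptions, not measurements from the paper, and the paper's time-based extension is not reproduced here.

```python
# Sketch of the classic Roofline bound; peak numbers are illustrative assumptions.
def roofline_gflops(arithmetic_intensity, peak_gflops=15700.0, peak_gbps=900.0):
    """Attainable performance (GFLOP/s) for a kernel with the given
    arithmetic intensity (FLOPs per byte moved)."""
    return min(peak_gflops, arithmetic_intensity * peak_gbps)

# Hypothetical intensities: a 2D convolution with high data reuse vs.
# an LSTM step that is closer to bandwidth-bound.
for name, ai in [("2D convolution", 60.0), ("LSTM cell", 2.0)]:
    print(f"{name}: attainable ~{roofline_gflops(ai):.0f} GFLOP/s")
```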
Journal ArticleDOI

Performance modeling of the sparse matrix–vector product via convolutional neural networks

TL;DR: A high-level abstraction of the sparsity pattern of the problem matrix is presented, and a blockwise strategy that feeds the CNN models with blocks of nonzero elements is proposed, providing an effective estimate of the performance of the SpMV operation.
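A hedged sketch of the blockwise idea: tiling the sparsity pattern of a matrix into fixed-size dense blocks that a CNN could consume. The block size, the handling of the ragged border, and the downstream CNN are all assumptions rather than the paper's actual pipeline.

```python
# Illustrative sketch: turn a sparse matrix's sparsity pattern into CNN-ready tiles.
import numpy as np
import scipy.sparse as sp

def sparsity_blocks(A, block=16):
    """Return a (num_blocks, block, block) array of 0/1 sparsity-pattern tiles."""
    A = sp.csr_matrix(A)
    n = (A.shape[0] // block) * block          # drop the ragged border for simplicity
    dense = (A[:n, :n].toarray() != 0).astype(np.float32)
    tiles = dense.reshape(n // block, block, n // block, block)
    return tiles.transpose(0, 2, 1, 3).reshape(-1, block, block)

A = sp.random(128, 128, density=0.05, format="csr", random_state=0)
blocks = sparsity_blocks(A)
print(blocks.shape)    # (64, 16, 16), ready to feed a small CNN
```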
References
Proceedings ArticleDOI

Deep Residual Learning for Image Recognition

TL;DR: In this article, the authors proposed a residual learning framework to ease the training of networks substantially deeper than those used previously; the resulting residual networks won 1st place on the ILSVRC 2015 classification task.
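The core idea is to learn a residual mapping F(x) and add it back to the input through an identity shortcut. Below is a minimal PyTorch sketch of such a block; the use of batch normalization and the channel count are illustrative choices, not the authors' reference implementation.

```python
# Minimal residual block sketch: output = ReLU(F(x) + x).
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)   # identity shortcut: learn a residual F(x)

y = ResidualBlock(64)(torch.randn(1, 64, 32, 32))
print(y.shape)   # torch.Size([1, 64, 32, 32])
```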
Proceedings Article

Adam: A Method for Stochastic Optimization

TL;DR: This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
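The update keeps exponential moving averages of the gradient and its element-wise square, corrects their initialization bias, and scales the step by the ratio of the two. A NumPy sketch of one step, using the paper's suggested default hyperparameters:

```python
# One Adam step: adaptive first/second moment estimates with bias correction.
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    m = beta1 * m + (1 - beta1) * grad            # first-moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2       # second-moment estimate
    m_hat = m / (1 - beta1 ** t)                  # bias correction
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Minimise f(theta) = theta^2 starting from theta = 1.
theta, m, v = 1.0, 0.0, 0.0
for t in range(1, 5001):
    theta, m, v = adam_step(theta, 2 * theta, m, v, t)
print(theta)   # close to the minimum at 0
```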
Journal ArticleDOI

Long short-term memory

TL;DR: A novel, efficient, gradient based method called long short-term memory (LSTM) is introduced, which can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units.
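A NumPy sketch of a single LSTM step in the now-common formulation with a forget gate (the original 1997 cell omits it); weight shapes and initialization are illustrative.

```python
# One LSTM time step: gated additive cell state preserves error flow over long lags.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, b):
    """W: (4*hidden, input+hidden), b: (4*hidden,)."""
    z = W @ np.concatenate([x, h_prev]) + b
    i, f, o, g = np.split(z, 4)
    i, f, o, g = sigmoid(i), sigmoid(f), sigmoid(o), np.tanh(g)
    c = f * c_prev + i * g          # cell state: additive path ("error carousel")
    h = o * np.tanh(c)              # hidden state exposed to the rest of the network
    return h, c

hidden, inp = 8, 4
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(4 * hidden, inp + hidden))
b = np.zeros(4 * hidden)
h = c = np.zeros(hidden)
for t in range(5):                  # run a few steps over random inputs
    h, c = lstm_step(rng.normal(size=inp), h, c, W, b)
print(h.shape, c.shape)             # (8,) (8,)
```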
Proceedings Article

Very Deep Convolutional Networks for Large-Scale Image Recognition

TL;DR: In this paper, the authors investigated the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting and showed that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 layers.
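The design pattern is stacks of 3x3 convolutions with ReLU, with pooling halving the spatial resolution while the channel count grows. The PyTorch sketch below builds only the first few stages of such a network, not the full 16-19 layer configurations.

```python
# VGG-style stages: repeated 3x3 conv + ReLU, then 2x2 max pooling.
import torch
import torch.nn as nn

def vgg_stage(in_ch, out_ch, num_convs):
    layers = []
    for i in range(num_convs):
        layers += [nn.Conv2d(in_ch if i == 0 else out_ch, out_ch, 3, padding=1),
                   nn.ReLU(inplace=True)]
    layers.append(nn.MaxPool2d(2))
    return nn.Sequential(*layers)

features = nn.Sequential(
    vgg_stage(3, 64, 2),
    vgg_stage(64, 128, 2),
    vgg_stage(128, 256, 3),
)
print(features(torch.randn(1, 3, 224, 224)).shape)   # torch.Size([1, 256, 28, 28])
```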
Journal ArticleDOI

Gradient-based learning applied to document recognition

TL;DR: In this article, convolutional neural networks trained with gradient-based learning are shown to synthesize complex decision surfaces that classify high-dimensional patterns such as handwritten characters, and a graph transformer network (GTN) paradigm is introduced to train multi-module document recognition systems globally.
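A PyTorch sketch loosely following the LeNet-style convolutional feature extractor used in the paper for digit recognition; the exact layer sizes are approximations, and the GTN machinery for whole-document recognition is not reproduced.

```python
# LeNet-style sketch: alternating convolution/pooling stages, then fully connected layers.
import torch
import torch.nn as nn

lenet = nn.Sequential(
    nn.Conv2d(1, 6, kernel_size=5), nn.Tanh(), nn.AvgPool2d(2),   # 32x32 -> 28x28 -> 14x14
    nn.Conv2d(6, 16, kernel_size=5), nn.Tanh(), nn.AvgPool2d(2),  # 14x14 -> 10x10 -> 5x5
    nn.Flatten(),
    nn.Linear(16 * 5 * 5, 120), nn.Tanh(),
    nn.Linear(120, 84), nn.Tanh(),
    nn.Linear(84, 10),                                            # 10 digit classes
)
print(lenet(torch.randn(1, 1, 32, 32)).shape)   # torch.Size([1, 10])
```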