Open Access Journal Article (DOI)

Review of Deep Learning Algorithms and Architectures

Ajay Shrestha, +1 more
22 Apr 2019
Vol. 7, pp. 53040-53065
TL;DR: This paper reviews several optimization methods to improve training accuracy and reduce training time, and delves into the math behind the training algorithms used in recent deep networks.
Abstract
Deep learning (DL) is playing an increasingly important role in our lives. It has already made a huge impact in areas such as cancer diagnosis, precision medicine, self-driving cars, predictive forecasting, and speech recognition. The painstakingly handcrafted feature extractors used in traditional learning, classification, and pattern recognition systems do not scale to large data sets. In many cases, depending on the problem complexity, DL can also overcome the limitations of earlier shallow networks that prevented efficient training and the abstraction of hierarchical representations of multi-dimensional training data. A deep neural network (DNN) uses multiple (deep) layers of units with highly optimized algorithms and architectures. This paper reviews several optimization methods that improve training accuracy and reduce training time. We delve into the math behind the training algorithms used in recent deep networks. We describe current shortcomings, enhancements, and implementations. The review also covers different types of deep architectures, such as deep convolutional networks, deep residual networks, recurrent neural networks, reinforcement learning, variational autoencoders, and others.
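The gradient-based training the review analyzes is easiest to see in miniature. Below is a minimal sketch, not taken from the paper, of one stochastic-gradient-descent step for logistic regression in NumPy; all names and the toy data are illustrative.

import numpy as np

def sgd_step(w, b, x, y, lr=0.1):
    """One stochastic-gradient-descent step for logistic regression.
    x: (n_features,) input, y: scalar label in {0, 1}."""
    z = np.dot(w, x) + b
    p = 1.0 / (1.0 + np.exp(-z))   # sigmoid activation
    grad_z = p - y                 # dLoss/dz for cross-entropy loss
    w -= lr * grad_z * x           # dLoss/dw = (dLoss/dz) * x
    b -= lr * grad_z
    return w, b

rng = np.random.default_rng(0)
w, b = np.zeros(3), 0.0
for _ in range(100):
    x = rng.normal(size=3)
    y = float(x.sum() > 0)         # toy separable target
    w, b = sgd_step(w, b, x, y)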

Citations
Journal Article (DOI)

Review of deep learning: concepts, CNN architectures, challenges, applications, future directions

TL;DR: This paper provides a comprehensive survey of the most important aspects of DL, including enhancements recently added to the field, and discusses the challenges and suggested solutions to help researchers understand existing research gaps.
Journal Article (DOI)

Within the lack of chest COVID-19 X-ray dataset: A novel detection model based on GAN and deep transfer learning

TL;DR: The main idea is to collect all available COVID-19 chest X-ray images at the time of writing and use a GAN to generate more images, helping detect the virus from the available X-ray images with the highest accuracy possible.
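As background for this approach, here is a hypothetical minimal GAN training loop in PyTorch on 1-D toy data; the paper's actual model is an image GAN combined with deep transfer learning, and every name below is illustrative.

import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(1000):
    real = torch.randn(64, 1) * 0.5 + 3.0   # "real" samples ~ N(3, 0.5)
    fake = G(torch.randn(64, 8))            # generator maps noise to samples

    # Discriminator: push real -> 1, fake -> 0
    opt_d.zero_grad()
    loss_d = bce(D(real), torch.ones(64, 1)) + \
             bce(D(fake.detach()), torch.zeros(64, 1))
    loss_d.backward()
    opt_d.step()

    # Generator: fool D into predicting 1 on fakes
    opt_g.zero_grad()
    loss_g = bce(D(fake), torch.ones(64, 1))
    loss_g.backward()
    opt_g.step()

# Generated samples can then augment a scarce training set.
augmented = G(torch.randn(100, 8)).detach()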
Journal Article (DOI)

Scientific Machine Learning Through Physics–Informed Neural Networks: Where we are and What’s Next

TL;DR: This article presents a comprehensive review of the literature on physics-informed neural networks. The primary goal of the study is to characterize these networks and their advantages and disadvantages, incorporating publications on a broader range of collocation-based physics-informed neural networks.
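To make the collocation idea concrete, here is a hypothetical PINN sketch in PyTorch for the toy ODE u'(t) = -u(t) with u(0) = 1 (true solution exp(-t)); real PINNs target PDEs at much larger scale, and all names are illustrative.

import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    t = 2.0 * torch.rand(128, 1)     # collocation points in [0, 2]
    t.requires_grad_(True)
    u = net(t)
    du_dt, = torch.autograd.grad(u, t, grad_outputs=torch.ones_like(u),
                                 create_graph=True)
    residual = du_dt + u             # physics residual of u' = -u
    u0 = net(torch.zeros(1, 1))      # initial-condition penalty
    loss = (residual ** 2).mean() + (u0 - 1.0).pow(2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()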
Journal Article (DOI)

Artificial intelligence for sustainability: Challenges, opportunities, and a research agenda

TL;DR: It is argued that AI can support the derivation of culturally appropriate organizational processes and individual practices to reduce the natural resource and energy intensity of human activities, and can facilitate and foster environmental governance.
Journal Article (DOI)

Application of deep learning algorithms in geotechnical engineering: a short critical review

TL;DR: This study presents the state of practice of DL in geotechnical engineering, depicts the statistical trend of published papers, and describes four major algorithms: feedforward neural networks, recurrent neural networks, convolutional neural networks, and generative adversarial networks.
References
Proceedings Article (DOI)

Deep Residual Learning for Image Recognition

TL;DR: The authors propose a residual learning framework that eases the training of networks substantially deeper than those used previously; their residual networks won 1st place in the ILSVRC 2015 classification task.
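The core idea reduces to adding an identity shortcut around a stack of layers so they learn only a residual. A minimal sketch of such a block in PyTorch, with illustrative names (the full ResNet architectures stack many of these, plus downsampling shortcuts):

import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualBlock(nn.Module):
    """Basic residual block: output = F(x) + x, so the stacked
    layers only have to learn the residual mapping F(x)."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return F.relu(out + x)          # identity shortcut (skip connection)

x = torch.randn(1, 64, 32, 32)
print(ResidualBlock(64)(x).shape)       # torch.Size([1, 64, 32, 32])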
Proceedings Article

Adam: A Method for Stochastic Optimization

TL;DR: This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
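The published update rule is short enough to transcribe directly. A NumPy sketch using the paper's default hyperparameters (the variable names are ours):

import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update as in Kingma & Ba (2015)."""
    m = b1 * m + (1 - b1) * grad          # first-moment (mean) estimate
    v = b2 * v + (1 - b2) * grad ** 2     # second-moment (uncentered variance)
    m_hat = m / (1 - b1 ** t)             # bias correction
    v_hat = v / (1 - b2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Minimize f(theta) = theta^2, whose gradient is 2 * theta.
theta, m, v = np.array([5.0]), np.zeros(1), np.zeros(1)
for t in range(1, 2001):
    theta, m, v = adam_step(theta, 2 * theta, m, v, t)
print(theta)   # approaches 0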
Journal Article (DOI)

Long short-term memory

TL;DR: A novel, efficient, gradient-based method called long short-term memory (LSTM) is introduced, which can learn to bridge minimal time lags in excess of 1,000 discrete time steps by enforcing constant error flow through constant error carousels within special units.
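A single LSTM step can be written compactly; the cell state c below is the "constant error carousel" the summary refers to. A NumPy sketch with illustrative names and random weights:

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM time step. W: (4H, D), U: (4H, H), b: (4H,);
    gate pre-activations stacked as [input, forget, output, candidate]."""
    H = h.shape[0]
    z = W @ x + U @ h + b
    i, f, o = sigmoid(z[:H]), sigmoid(z[H:2*H]), sigmoid(z[2*H:3*H])
    g = np.tanh(z[3*H:])             # candidate cell update
    c = f * c + i * g                # cell state: the constant error carousel
    h = o * np.tanh(c)               # hidden state / output
    return h, c

D, H = 5, 4
rng = np.random.default_rng(0)
W, U, b = rng.normal(size=(4*H, D)), rng.normal(size=(4*H, H)), np.zeros(4*H)
h, c = np.zeros(H), np.zeros(H)
for t in range(10):
    h, c = lstm_step(rng.normal(size=D), h, c, W, U, b)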
Journal Article (DOI)

Gradient-based learning applied to document recognition

TL;DR: This article proposes graph transformer networks (GTNs) for document recognition and shows that gradient-based learning can synthesize a complex decision surface able to classify high-dimensional patterns such as handwritten characters.
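The same paper popularized the LeNet family of convolutional networks. Below is a LeNet-style sketch in PyTorch for 28x28 grayscale digits, in the spirit of the paper rather than its exact architecture:

import torch
import torch.nn as nn

lenet = nn.Sequential(
    nn.Conv2d(1, 6, 5, padding=2), nn.Tanh(), nn.AvgPool2d(2),  # 28x28 -> 14x14
    nn.Conv2d(6, 16, 5), nn.Tanh(), nn.AvgPool2d(2),            # 14x14 -> 5x5
    nn.Flatten(),
    nn.Linear(16 * 5 * 5, 120), nn.Tanh(),
    nn.Linear(120, 84), nn.Tanh(),
    nn.Linear(84, 10),                                          # 10 digit classes
)
print(lenet(torch.randn(1, 1, 28, 28)).shape)   # torch.Size([1, 10])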
Journal Article

Dropout: a simple way to prevent neural networks from overfitting

TL;DR: It is shown that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets.
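Dropout itself is a few lines: during training, zero each unit with probability p and rescale the survivors ("inverted dropout", a common variant) so inference needs no change. A NumPy sketch with illustrative names:

import numpy as np

def dropout(x, p=0.5, training=True, rng=None):
    """Inverted dropout: randomly zero units with probability p during
    training and rescale survivors so test time needs no rescaling."""
    if not training or p == 0.0:
        return x
    rng = rng or np.random.default_rng()
    mask = rng.random(x.shape) >= p    # keep each unit with prob 1 - p
    return x * mask / (1.0 - p)        # rescale to preserve expected activation

h = np.ones(8)
print(dropout(h, p=0.5))               # roughly half zeros, survivors scaled to 2.0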