Open Access · Journal Article · DOI

Review of deep learning: concepts, CNN architectures, challenges, applications, future directions

TLDR
This paper provides a comprehensive survey of the most important aspects of DL, including those enhancements recently added to the field, and presents the challenges and suggested solutions to help researchers understand the existing research gaps.
Abstract
In the last few years, the deep learning (DL) computing paradigm has been deemed the gold standard in the machine learning (ML) community. It has gradually become the most widely used computational approach in the field of ML, achieving outstanding results on several complex cognitive tasks and matching or even beating human performance. One of the benefits of DL is the ability to learn from massive amounts of data. The DL field has grown rapidly in the last few years and has been used extensively and successfully to address a wide range of traditional applications. More importantly, DL has outperformed well-known ML techniques in many domains, e.g., cybersecurity, natural language processing, bioinformatics, robotics and control, and medical information processing, among many others. Although several works have reviewed the state of the art in DL, each of them tackled only one aspect of the field, which leads to an overall lack of knowledge about it. Therefore, in this contribution, we take a more holistic approach in order to provide a more suitable starting point from which to develop a full understanding of DL. Specifically, this review provides a comprehensive survey of the most important aspects of DL, including those enhancements recently added to the field. In particular, this paper outlines the importance of DL and presents the types of DL techniques and networks. It then presents convolutional neural networks (CNNs), the most utilized DL network type, and describes the development of CNN architectures together with their main features, starting with the AlexNet network and closing with the High-Resolution network (HRNet). Finally, we present the challenges and suggested solutions to help researchers understand the existing research gaps, followed by a list of the major DL applications.
Computational tools including FPGA, GPU, and CPU are summarized along with a description of their influence on DL. The paper ends with the evolution matrix, benchmark datasets, and a summary and conclusion.



Citations
Journal ArticleDOI

Deep learning in computer vision: A critical review of emerging techniques and application scenarios

TL;DR: Deep learning has been overwhelmingly successful in computer vision (CV), natural language processing, and video/speech recognition; this paper provides a critical review of recent achievements in terms of techniques and applications.
Journal ArticleDOI

Multi-sensor information fusion based on machine learning for real applications in human activity recognition: State-of-the-art and research challenges

01 Apr 2022
TL;DR: In this article, a comprehensive survey of the most important aspects of multi-sensor applications for human activity recognition, including those recently added to the field for unsupervised learning and transfer learning, is presented.
Journal ArticleDOI

TransMed: Transformers Advance Multi-Modal Medical Image Classification.

TL;DR: TransMed combines the advantages of CNNs and transformers to efficiently extract low-level image features and establish long-range dependencies between modalities, achieving improvements of 10.1% and 1.9% in average accuracy.
Journal ArticleDOI

Machine learning for structural engineering: A state-of-the-art review

Huu-Tai Thai
01 Apr 2022
TL;DR: An overview of ML techniques for structural engineering is presented in this article with a particular focus on basic ML concepts, ML libraries, open-source Python codes, and structural engineering datasets.
References
Proceedings ArticleDOI

Deep Residual Learning for Image Recognition

TL;DR: In this article, the authors propose a residual learning framework to ease the training of networks that are substantially deeper than those used previously; their model won 1st place on the ILSVRC 2015 classification task.
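The residual idea summarized above can be sketched in a few lines. This is a NumPy illustration using assumed small dense layers rather than the paper's convolutional blocks: the block learns a residual function F(x) and outputs F(x) + x, so an identity mapping is trivially available when the learned weights are near zero.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    """One residual unit: output relu(F(x) + x), where F is a small
    two-layer transform. The skip connection adds the input back."""
    f = relu(x @ w1) @ w2   # the residual function F(x)
    return relu(f + x)      # identity shortcut

# With zero weights the block reduces to relu(x), i.e. near-identity,
# which is what makes very deep stacks of such blocks easy to train:
x = np.array([[1.0, -2.0, 3.0]])
w_zero = np.zeros((3, 3))
print(residual_block(x, w_zero, w_zero))
```

With zero weights the output is simply relu(x), illustrating why adding more residual blocks cannot make the representation worse than the identity.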
Proceedings Article

Very Deep Convolutional Networks for Large-Scale Image Recognition

TL;DR: This work investigates the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting using an architecture with very small convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
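The benefit of the very small convolution filters mentioned above can be checked with simple arithmetic: two stacked 3×3 layers cover the same 5×5 receptive field as a single 5×5 layer but with fewer parameters and an extra non-linearity. The channel count C = 64 below is an assumed example, not a figure from the paper.

```python
# Parameter counts (weights only, ignoring biases) for C input and
# C output channels per layer.
C = 64  # assumed channel count for illustration

two_3x3 = 2 * (3 * 3 * C * C)   # two stacked 3x3 conv layers
one_5x5 = 5 * 5 * C * C         # one 5x5 conv layer, same receptive field

print(two_3x3, one_5x5)  # 73728 vs 102400
```

Stacking small filters is therefore cheaper per unit of receptive field, which is what lets the depth be pushed to 16-19 weight layers.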
Proceedings ArticleDOI

ImageNet: A large-scale hierarchical image database

TL;DR: A new database called “ImageNet” is introduced, a large-scale ontology of images built upon the backbone of the WordNet structure, much larger in scale and diversity and much more accurate than the current image datasets.
Book ChapterDOI

U-Net: Convolutional Networks for Biomedical Image Segmentation

TL;DR: Ronneberger et al. proposed a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently; it can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks.
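The heavy-augmentation strategy described above can be sketched minimally. This uses NumPy flips and 90-degree rotations as simple stand-ins for the paper's elastic deformations; the `augment` function name is illustrative. The key point is that the same random transform is applied to the image and its segmentation mask so the annotation stays aligned.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(image, mask):
    """Apply the same random rotation/flip to an image and its mask,
    multiplying the effective number of annotated training samples."""
    k = rng.integers(0, 4)              # random multiple-of-90-degree rotation
    image, mask = np.rot90(image, k), np.rot90(mask, k)
    if rng.random() < 0.5:              # random horizontal flip
        image, mask = np.fliplr(image), np.fliplr(mask)
    return image, mask

img = np.arange(16.0).reshape(4, 4)
aug_img, aug_mask = augment(img, img > 7)
print(aug_img.shape)  # (4, 4)
```

Because every augmented pair is a geometric transform of an existing annotated pair, a segmentation network can be trained end-to-end from very few labeled images.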
Journal ArticleDOI

Deep learning

TL;DR: Deep learning is making major advances in solving problems that have resisted the best attempts of the artificial intelligence community for many years, and will have many more successes in the near future because it requires very little engineering by hand and can easily take advantage of increases in the amount of available computation and data.