Open Access · Journal Article · DOI

Deep Learning in Mining Biological Data

TLDR
In this paper, a meta-analysis has been performed and the resulting resources have been critically analyzed, focusing on the use of DL architectures to analyse patterns in data from diverse biological domains.
Abstract
Recent technological advancements in data acquisition tools have allowed life scientists to acquire multimodal data from different biological application domains. Categorized into three broad types (i.e. images, signals, and sequences), these data are vast in volume and complex in nature. Mining such an enormous amount of data for pattern recognition is a big challenge and requires sophisticated data-intensive machine learning techniques. Artificial neural network-based learning systems are well known for their pattern recognition capabilities, and lately their deep architectures—known as deep learning (DL)—have been successfully applied to solve many complex pattern recognition problems. To investigate how DL—especially its different architectures—has contributed to and been utilized in the mining of biological data pertaining to those three types, a meta-analysis has been performed and the resulting resources have been critically analysed. Focusing on the use of DL to analyse patterns in data from diverse biological domains, this work investigates different DL architectures’ applications to these data. This is followed by an exploration of available open access data sources pertaining to the three data types, along with popular open-source DL tools applicable to these data. Also, comparative investigations of these tools from qualitative, quantitative, and benchmarking perspectives are provided. Finally, some open research challenges in using DL to mine biological data are outlined and a number of possible future perspectives are put forward.



Citations

Artificial neural networks

Andrea Roli
TL;DR: Artificial neural networks (ANNs) constitute a class of flexible nonlinear models designed to mimic biological neural systems, and they have been widely used in computer vision applications.
Proceedings ArticleDOI

Performance Analysis of Machine Learning Approaches in Stroke Prediction

TL;DR: In this article, the authors propose early prediction of stroke using different machine learning approaches, based on features including hypertension, body mass index, heart disease, average glucose level, smoking status, previous stroke, and age.
Book ChapterDOI

3D DenseNet Ensemble in 4-Way Classification of Alzheimer’s Disease

TL;DR: The research makes use of dense connections, in which each layer in a block is linked to all subsequent layers. This improves the flow of information through the model and achieves better results than other state-of-the-art methods for 3D MR images.
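The dense connectivity described above can be illustrated with a minimal NumPy sketch. Here, dense 1-D feature vectors with a simple linear-plus-ReLU layer stand in for the 3-D convolutions of the actual DenseNet; the function name, sizes, and `growth_rate` value are illustrative, not taken from the paper.

```python
import numpy as np

def dense_block(x, num_layers, growth_rate, rng):
    """Sketch of DenseNet-style connectivity: each layer receives the
    concatenation of the block input and every earlier layer's output,
    and contributes `growth_rate` new feature channels."""
    features = [x]
    for _ in range(num_layers):
        inp = np.concatenate(features, axis=-1)          # all prior features
        w = rng.standard_normal((inp.shape[-1], growth_rate)) * 0.01
        features.append(np.maximum(inp @ w, 0.0))        # linear + ReLU stand-in
    return np.concatenate(features, axis=-1)

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 16))       # 4 samples, 16 input channels
out = dense_block(x, num_layers=3, growth_rate=8, rng=rng)
# channel count grows as 16 + 3 * 8 = 40
```

Because every layer sees all earlier features directly, gradients also flow back to early layers along short paths, which is the property the TL;DR credits for improved information movement.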
Book ChapterDOI

An XAI Based Autism Detection: The Context Behind the Detection

TL;DR: In this paper, the authors proposed an AI model for early detection of autism in children and showed why AI with explainability is important, and provided examples focused on the Autism Spectrum Disorder dataset (Autism screening data for toddlers by Dr Fadi Fayez Thabtah).
Proceedings ArticleDOI

SENSE: a Student Performance Quantifier using Sentiment Analysis

TL;DR: This work proposes SENSE (Student pErformance quaNtifier using SEntiment analysis), a system for improving the information conveyed in secondary school reports by means of natural language processing. By conveying information more effectively, it reduces the likelihood of a tarnished relationship between home and school and maintains communication between students, teachers, and parents or guardians.
References
Proceedings Article

ImageNet Classification with Deep Convolutional Neural Networks

TL;DR: A deep convolutional neural network consisting of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax, achieved state-of-the-art performance on the ImageNet classification benchmark.
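The five-conv-plus-three-FC layout above can be checked with a short pure-Python calculation of the feature-map sizes through the convolutional stack, using the standard output-size formula. The kernel, stride, and padding values below follow the commonly cited AlexNet configuration (effective 227x227 input); treat them as an illustrative reconstruction rather than a quote from the paper.

```python
def conv_out(size, kernel, stride=1, pad=0):
    # Spatial size of a square feature map after a conv or pooling layer.
    return (size - kernel + 2 * pad) // stride + 1

size = 227                      # effective input resolution
size = conv_out(size, 11, 4)    # conv1, 96 filters  -> 55x55
size = conv_out(size, 3, 2)     # max-pool           -> 27x27
size = conv_out(size, 5, 1, 2)  # conv2, 256 filters -> 27x27
size = conv_out(size, 3, 2)     # max-pool           -> 13x13
size = conv_out(size, 3, 1, 1)  # conv3, 384 filters -> 13x13
size = conv_out(size, 3, 1, 1)  # conv4, 384 filters -> 13x13
size = conv_out(size, 3, 1, 1)  # conv5, 256 filters -> 13x13
size = conv_out(size, 3, 2)     # max-pool           -> 6x6
flat = size * size * 256        # features feeding the first FC layer
```

The 6x6x256 = 9216 flattened features then pass through two 4096-unit fully-connected layers before the final 1000-way softmax.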
Journal ArticleDOI

Long short-term memory

TL;DR: A novel, efficient, gradient based method called long short-term memory (LSTM) is introduced, which can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units.
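The constant-error-carousel idea mentioned above can be made concrete with a minimal NumPy LSTM step. The gate layout and weight names here are a generic modern formulation, not the exact notation of the original paper; sizes in the usage example are arbitrary.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U, b):
    """One LSTM time step. Input (i), forget (f), and output (o) gates
    plus the candidate (g) are computed from x and the previous h; the
    cell state c is updated *additively*, which is what lets error flow
    back through long time lags (the constant error carousel)."""
    z = x @ W + h @ U + b              # pre-activations for all gates
    H = h.shape[-1]
    i = sigmoid(z[..., 0*H:1*H])
    f = sigmoid(z[..., 1*H:2*H])
    o = sigmoid(z[..., 2*H:3*H])
    g = np.tanh(z[..., 3*H:4*H])
    c_new = f * c + i * g              # additive cell-state update
    h_new = o * np.tanh(c_new)
    return h_new, c_new

rng = np.random.default_rng(0)
I, H = 4, 3                            # illustrative input / hidden sizes
W = rng.standard_normal((I, 4 * H)) * 0.1
U = rng.standard_normal((H, 4 * H)) * 0.1
b = np.zeros(4 * H)
h, c = np.zeros(H), np.zeros(H)
for _ in range(5):                     # unroll five time steps
    h, c = lstm_step(rng.standard_normal(I), h, c, W, U, b)
```

When the forget gate stays near 1, the derivative of `c_new` with respect to `c` stays near 1 as well, so gradients are neither squashed nor blown up across many steps, which is the mechanism behind bridging lags in excess of 1000 discrete time steps.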
Proceedings Article

Very Deep Convolutional Networks for Large-Scale Image Recognition

TL;DR: In this paper, the authors investigated the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting and showed that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 layers.
Journal ArticleDOI

Deep learning

TL;DR: Deep learning is making major advances in solving problems that have resisted the best attempts of the artificial intelligence community for many years, and will have many more successes in the near future because it requires very little engineering by hand and can easily take advantage of increases in the amount of available computation and data.
Related Papers (5)