Journal ArticleDOI

Performance of Deep and Shallow Neural Networks, the Universal Approximation Theorem, Activity Cliffs, and QSAR

TLDR
The differences in approach between deep and shallow neural networks are described, their abilities to predict the properties of test sets for 15 large drug data sets are compared, the results are discussed in terms of the Universal Approximation theorem, and it is shown how DNNs may ameliorate or remove troublesome "activity cliffs" in QSAR data sets.
Abstract
Neural networks have generated valuable Quantitative Structure-Activity/Property Relationship (QSAR/QSPR) models for a wide variety of small molecule and materials properties. They have grown in sophistication, and many of their initial problems have been overcome by modern mathematical techniques. QSAR studies have almost always used so-called "shallow" neural networks, in which there is a single hidden layer between the input and output layers. Recently, a new and potentially paradigm-shifting type of neural network based on deep learning has appeared. Deep learning methods have generated impressive improvements in image and voice recognition and are now being applied to QSAR and QSPR modelling. This paper describes the differences in approach between deep and shallow neural networks, compares their abilities to predict the properties of test sets for 15 large drug data sets (the Kaggle set), discusses the results in terms of the Universal Approximation theorem for neural networks, and describes how DNNs may ameliorate or remove troublesome "activity cliffs" in QSAR data sets.
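The shallow-versus-deep distinction the abstract draws can be illustrated with a minimal NumPy sketch (an assumption-laden toy example, not the paper's actual models: layer widths, the ReLU activation, and the 8-feature descriptor vector are all illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def init_layer(n_in, n_out):
    # small random weights, zero biases
    return rng.normal(0.0, 0.1, (n_in, n_out)), np.zeros(n_out)

def forward(x, layers):
    # apply each (W, b) pair, with a ReLU on every hidden layer
    for i, (W, b) in enumerate(layers):
        x = x @ W + b
        if i < len(layers) - 1:  # no activation on the output layer
            x = relu(x)
    return x

n_features = 8  # a hypothetical small molecular descriptor vector
# "shallow" QSAR-style net: a single hidden layer
shallow = [init_layer(n_features, 32), init_layer(32, 1)]
# "deep" net: the same input/output, but several hidden layers
deep = [init_layer(n_features, 32), init_layer(32, 32),
        init_layer(32, 32), init_layer(32, 1)]

x = rng.normal(size=(4, n_features))  # four hypothetical compounds
y_shallow = forward(x, shallow)       # shape (4, 1)
y_deep = forward(x, deep)             # shape (4, 1)
```

Both nets map descriptors to a single predicted property; the deep variant simply composes more nonlinear layers between input and output.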


Citations
Journal ArticleDOI

Automating drug discovery

TL;DR: This article aims to identify the approaches and technologies that could be implemented robustly by medicinal chemists in the near future and to critically analyse the opportunities and challenges for their more widespread application.
Posted ContentDOI

Universal Differential Equations for Scientific Machine Learning

TL;DR: The UDE model augments scientific models with machine-learnable structures for scientifically-based learning and shows how UDEs can be utilized to discover previously unknown governing equations, accurately extrapolate beyond the original data, and accelerate model simulation, all in a time and data-efficient manner.
Journal ArticleDOI

Deep Learning for Drug Design: an Artificial Intelligence Paradigm for Drug Discovery in the Big Data Era

TL;DR: Several of the most powerful and mainstream architectures for supervised and unsupervised learning are discussed, including convolutional neural networks (CNNs), recurrent neural networks (RNNs), and deep auto-encoder networks (DAENs).
Journal ArticleDOI

Demystifying Multitask Deep Neural Networks for Quantitative Structure-Activity Relationships.

TL;DR: A strategy is developed to use multitask DNNs that incorporate prior domain knowledge to select training sets with correlated activities, and its effectiveness is demonstrated on several examples.
References
Journal ArticleDOI

Deep learning

TL;DR: Deep learning is making major advances in solving problems that have resisted the best attempts of the artificial intelligence community for many years, and will have many more successes in the near future because it requires very little engineering by hand and can easily take advantage of increases in the amount of available computation and data.
Book

Deep Learning

TL;DR: Deep learning is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts; it is used in many applications such as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and video games.
Journal Article

Dropout: a simple way to prevent neural networks from overfitting

TL;DR: It is shown that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets.
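The mechanism this TL;DR summarizes can be sketched in a few lines. This is a generic "inverted dropout" sketch under the usual formulation (drop probability, rescaling of survivors), not code from the paper itself:

```python
import numpy as np

def dropout(activations, p_drop, rng, training=True):
    """Inverted dropout: at training time, zero each unit with
    probability p_drop and rescale the survivors by 1/(1 - p_drop)
    so the expected activation is unchanged; at test time the
    layer is the identity."""
    if not training or p_drop == 0.0:
        return activations
    keep = rng.random(activations.shape) >= p_drop
    return activations * keep / (1.0 - p_drop)

rng = np.random.default_rng(0)
a = np.ones((2, 6))
train_out = dropout(a, p_drop=0.5, rng=rng)        # entries are 0.0 or 2.0
test_out = dropout(a, p_drop=0.5, rng=rng, training=False)  # unchanged
```

Randomly silencing units in this way prevents co-adaptation of features, which is the overfitting mechanism the paper targets.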
Journal ArticleDOI

Approximation by superpositions of a sigmoidal function

TL;DR: It is demonstrated that finite linear combinations of compositions of a fixed, univariate function and a set of affine functionals can uniformly approximate any continuous function of n real variables with support in the unit hypercube.
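Stated symbolically (a standard rendering of Cybenko's result; the notation is assumed here rather than taken from this page): for any continuous sigmoidal function $\sigma$, any continuous target $f$ on the unit hypercube, and any $\varepsilon > 0$, there exist $N$, $\alpha_j, \theta_j \in \mathbb{R}$, and $w_j \in \mathbb{R}^{n}$ such that

```latex
G(x) = \sum_{j=1}^{N} \alpha_j \,\sigma\!\left(w_j^{\top} x + \theta_j\right),
\qquad
\sup_{x \in [0,1]^{n}} \left| G(x) - f(x) \right| < \varepsilon .
```

That is, sums of this form are dense in $C([0,1]^{n})$, which is the one-hidden-layer universal approximation property discussed in the abstract above.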
Journal ArticleDOI

Approximation capabilities of multilayer feedforward networks

TL;DR: It is shown that standard multilayer feedforward networks with as few as a single hidden layer and arbitrary bounded and nonconstant activation function are universal approximators with respect to L p (μ) performance criteria, for arbitrary finite input environment measures μ.