Book

Neural network design

TL;DR: This book, by the authors of the Neural Network Toolbox for MATLAB, provides clear and detailed coverage of fundamental neural network architectures and learning rules, methods for training them, and their applications to practical problems.
Abstract: This book, by the authors of the Neural Network Toolbox for MATLAB, provides clear and detailed coverage of fundamental neural network architectures and learning rules. The authors emphasize a coherent presentation of the principal neural networks, methods for training them, and their applications to practical problems. Features:

  • Extensive coverage of training methods for both feedforward networks (including multilayer and radial basis networks) and recurrent networks. In addition to the conjugate gradient and Levenberg-Marquardt variations of the backpropagation algorithm, the text covers Bayesian regularization and early stopping, which improve the generalization of trained networks.

  • Associative and competitive networks, including feature maps and learning vector quantization, explained with simple building blocks.

  • A chapter of practical training tips for function approximation, pattern recognition, clustering, and prediction, along with five chapters presenting detailed real-world case studies.

  • Detailed examples and numerous solved problems.

  • Slides and comprehensive demonstration software, downloadable from hagan.okstate.edu/nnd.html.
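As a concrete illustration of the Levenberg-Marquardt variation mentioned above, here is a minimal sketch of one LM weight update in Python/NumPy. It is a sketch under stated assumptions, not code from the book or the MATLAB toolbox; compute_errors and compute_jacobian are hypothetical callbacks supplied by the caller.

    import numpy as np

    def levenberg_marquardt_step(w, compute_errors, compute_jacobian, mu):
        """One Levenberg-Marquardt update for a sum-of-squared-errors objective.

        w                : current weight vector, shape (n,)
        compute_errors   : hypothetical callback returning the error vector e(w), shape (m,)
        compute_jacobian : hypothetical callback returning J = de/dw, shape (m, n)
        mu               : damping parameter (large mu ~ gradient descent,
                           small mu ~ Gauss-Newton)
        """
        e = compute_errors(w)
        J = compute_jacobian(w)
        H = J.T @ J                    # Gauss-Newton approximation to the Hessian
        g = J.T @ e                    # gradient of 0.5 * ||e||^2
        # Solve the damped normal equations (H + mu*I) dw = -g.
        dw = np.linalg.solve(H + mu * np.eye(len(w)), -g)
        w_new = w + dw
        # Standard LM heuristic: keep the step only if the error decreased,
        # shrinking mu on success and growing it on failure.
        if np.sum(compute_errors(w_new) ** 2) < np.sum(e ** 2):
            return w_new, mu / 10.0
        return w, mu * 10.0

The caller iterates this step to convergence; early stopping, also covered in the book, would additionally monitor the error on a held-out validation set and halt training when it stops improving.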
Citations
Book
05 Jan 1998
TL;DR: Introduction to Optimization; The Binary Genetic Algorithm; The Continuous Parameter Genetic Algorithm; Applications; An Added Level of Sophistication; Advanced Applications; Evolutionary Trends; Appendix; Glossary; Index.
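The contents above center on the binary genetic algorithm; as a rough illustration of that idea (not taken from this book), a minimal binary GA in Python with tournament selection, one-point crossover, and bit-flip mutation might look like:

    import numpy as np

    def binary_ga(fitness, n_bits, pop_size=40, generations=100,
                  p_crossover=0.9, p_mutation=0.01, seed=0):
        """Maximize fitness(bits) over 0/1 arrays of length n_bits."""
        rng = np.random.default_rng(seed)
        pop = rng.integers(0, 2, size=(pop_size, n_bits))
        for _ in range(generations):
            scores = np.array([fitness(ind) for ind in pop])
            new_pop = []
            while len(new_pop) < pop_size:
                # Tournament selection: keep the fitter of two random picks.
                parents = []
                for _ in range(2):
                    i, j = rng.integers(0, pop_size, size=2)
                    parents.append(pop[i] if scores[i] >= scores[j] else pop[j])
                a, b = parents[0].copy(), parents[1].copy()
                if rng.random() < p_crossover:
                    cut = rng.integers(1, n_bits)        # one-point crossover
                    a[cut:], b[cut:] = parents[1][cut:], parents[0][cut:]
                for child in (a, b):
                    flips = rng.random(n_bits) < p_mutation
                    child[flips] ^= 1                    # bit-flip mutation
                    new_pop.append(child)
            pop = np.array(new_pop[:pop_size])
        scores = np.array([fitness(ind) for ind in pop])
        return pop[np.argmax(scores)]

For example, binary_ga(lambda bits: bits.sum(), n_bits=20) converges toward the all-ones string.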

4,006 citations

Proceedings ArticleDOI
09 Jun 1997
TL;DR: The application of Bayesian regularization to the training of feedforward neural networks is described, using a Gauss-Newton approximation to the Hessian matrix to reduce the computational overhead.
Abstract: This paper describes the application of Bayesian regularization to the training of feedforward neural networks. A Gauss-Newton approximation to the Hessian matrix, which can be conveniently implemented within the framework of the Levenberg-Marquardt algorithm, is used to reduce the computational overhead. The resulting algorithm is demonstrated on a simple test problem and is then applied to three practical problems. The results demonstrate that the algorithm produces networks which have excellent generalization capabilities.
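At the heart of the described algorithm is a cheap re-estimation of the two regularization hyperparameters from the Gauss-Newton Hessian that Levenberg-Marquardt training already computes. Below is a minimal sketch of those updates in Python/NumPy, following MacKay's evidence framework as applied by Foresee and Hagan; the Jacobian J, error vector e, and weights w are assumed to come from a surrounding LM training loop, which is omitted here:

    import numpy as np

    def update_hyperparameters(J, e, w, alpha, beta):
        """Re-estimate alpha (weight penalty) and beta (noise level) for the
        regularized objective F = beta * sum(e^2) + alpha * sum(w^2)."""
        n_w = len(w)                       # number of network parameters
        n_e = len(e)                       # number of training errors
        E_D = np.sum(e ** 2)               # data misfit
        E_W = np.sum(w ** 2)               # weight penalty
        # Gauss-Newton approximation to the Hessian of F.
        H = 2.0 * beta * (J.T @ J) + 2.0 * alpha * np.eye(n_w)
        # gamma: effective number of well-determined parameters.
        gamma = n_w - 2.0 * alpha * np.trace(np.linalg.inv(H))
        alpha_new = gamma / (2.0 * E_W)
        beta_new = (n_e - gamma) / (2.0 * E_D)
        return alpha_new, beta_new, gamma

Training alternates LM minimization of F with these updates until gamma and the errors stabilize; gamma also indicates how many parameters the data actually support.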

1,338 citations


Cites methods from "Neural network design"

  • ...This approximation is readily available when using the Levenberg-Marquardt algorithm for network training [2], [3]....


Journal ArticleDOI
TL;DR: By extending randomization approaches to ANNs, the "black box" mechanics of ANNs can be greatly illuminated; coupling this explanatory power with their strong predictive abilities makes ANNs a promising quantitative tool to evaluate, understand, and predict ecological phenomena.
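One common randomization approach for opening the "black box" of a trained network (illustrative only; not necessarily the specific randomization test used in this article) is permutation importance: shuffle one input at a time and measure how much prediction error grows. A minimal sketch in Python/NumPy, where predict is a hypothetical callable wrapping the trained network:

    import numpy as np

    def permutation_importance(predict, X, y, n_repeats=10, seed=0):
        """Importance of each input = mean increase in MSE when that
        column of X is randomly shuffled."""
        rng = np.random.default_rng(seed)
        base_mse = np.mean((predict(X) - y) ** 2)
        importance = np.zeros(X.shape[1])
        for j in range(X.shape[1]):
            increases = []
            for _ in range(n_repeats):
                Xp = X.copy()
                rng.shuffle(Xp[:, j])      # destroy feature j's link to y
                increases.append(np.mean((predict(Xp) - y) ** 2) - base_mse)
            importance[j] = np.mean(increases)
        return importance

Inputs whose shuffling barely changes the error contribute little to the network's predictions; large increases flag the important variables.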

1,035 citations

Proceedings ArticleDOI
09 Nov 2016
TL;DR: This work presents DeepRM, an example solution that translates the problem of packing tasks with multiple resource demands into a learning problem, and shows that it performs comparably to state-of-the-art heuristics, adapts to different conditions, converges quickly, and learns strategies that are sensible in hindsight.
Abstract: Resource management problems in systems and networking often manifest as difficult online decision making tasks where appropriate solutions depend on understanding the workload and environment. Inspired by recent advances in deep reinforcement learning for AI problems, we consider building systems that learn to manage resources directly from experience. We present DeepRM, an example solution that translates the problem of packing tasks with multiple resource demands into a learning problem. Our initial results show that DeepRM performs comparably to state-of-the-art heuristics, adapts to different conditions, converges quickly, and learns strategies that are sensible in hindsight.
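Deep reinforcement learning systems of this kind typically train the policy with a policy-gradient method. As a hedged sketch of that underlying technique, here is one REINFORCE update for a simple linear softmax policy in Python/NumPy; env_reset and env_step are hypothetical callbacks with a Gym-like contract, and nothing here reflects DeepRM's actual network architecture or state encoding:

    import numpy as np

    def softmax(z):
        z = z - z.max()
        p = np.exp(z)
        return p / p.sum()

    def reinforce_episode(theta, env_reset, env_step, lr=0.01, gamma=0.99, seed=0):
        """One episode of REINFORCE for pi(a|s) = softmax(theta @ s).

        theta     : policy parameters, shape (n_actions, n_state_features)
        env_reset : hypothetical callback, () -> initial state vector
        env_step  : hypothetical callback, action -> (state, reward, done)
        """
        rng = np.random.default_rng(seed)
        states, actions, rewards = [], [], []
        s, done = env_reset(), False
        while not done:
            p = softmax(theta @ s)
            a = rng.choice(len(p), p=p)     # sample an action from the policy
            states.append(s)
            actions.append(a)
            s, r, done = env_step(a)
            rewards.append(r)
        # Discounted returns G_t, computed backwards over the episode.
        G, returns = 0.0, []
        for r in reversed(rewards):
            G = r + gamma * G
            returns.append(G)
        returns.reverse()
        baseline = np.mean(returns)         # constant baseline to reduce variance
        for s, a, G in zip(states, actions, returns):
            p = softmax(theta @ s)
            grad_logp = -np.outer(p, s)     # grad log pi(a|s) for linear softmax
            grad_logp[a] += s
            theta = theta + lr * (G - baseline) * grad_logp
        return theta

Per the abstract, the paper's contribution is the problem formulation (translating multi-resource task packing into a learning problem) rather than the learning rule itself.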

948 citations


Cites methods from "Neural network design"

  • ...Deep Neural Networks (DNNs) [18] have recently been used successfully as function approximators to solve large-scale RL tasks [30, 33]....
