Journal ArticleDOI

Heuristic techniques to optimize neural network architecture in manufacturing applications

TLDR
Four different approaches have been used to increase the generalization abilities of a neural network, based on the use of genetic algorithms, the Taguchi method, tabu search and decision trees, and the results show that a suitable technique for determining the architecture of a neural network can generate a significant performance improvement compared to a trial-and-error approach.
Abstract
Nowadays, the application of neural networks in the manufacturing field is well established, even though this type of problem is typically characterized by an insufficient availability of data for robust network training. Satisfactory results can be found in the literature, in both forming and machining operations, regarding the use of a neural network as a predictive tool. Nevertheless, the search for the optimal network configuration is still based on trial-and-error approaches rather than on the application of specific techniques. As a consequence, the best method for determining the optimal neural network configuration remains an open question in the literature. Accordingly, a comparative analysis is proposed in this work. More specifically, four different approaches have been used to increase the generalization abilities of a neural network, based respectively on genetic algorithms, the Taguchi method, tabu search and decision trees. The parameters taken into account in this work are the training algorithm, the number of hidden layers, the number of neurons and the activation function of each hidden layer. These techniques were first tested on three different datasets, generated through numerical simulations in the Deform2D environment, in an attempt to map the input-output relationship for an extrusion, a rolling and a shearing process. Subsequently, the same approach was validated on a fourth dataset, derived from the literature, for a complex industrial process, in order to generalize and assess the proposed methodology across the whole manufacturing field. Four tests were carried out for each dataset, modifying the original data with random noise with zero mean and a standard deviation of one, two and five per cent. The results show that the use of a suitable technique for determining the architecture of a neural network can generate a significant performance improvement compared to a trial-and-error approach.
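As an illustration of the genetic-algorithm approach described in the abstract, the sketch below evolves a population of architecture encodings over the four parameters the paper names (training algorithm, number of hidden layers, neurons per layer, activation function). The value grids and the fitness function are stand-in assumptions for this sketch: in the paper, fitness would be the validation error of a network actually trained on one of the Deform2D-generated datasets.

```python
import random

# Search space over the four architecture parameters named in the paper.
# The concrete value sets are illustrative assumptions, not the authors' grids.
SPACE = {
    "algorithm": ["sgd", "rprop", "lm"],
    "layers": [1, 2, 3],
    "neurons": [4, 8, 16, 32],
    "activation": ["tanh", "sigmoid", "relu"],
}

def random_config(rng):
    return {k: rng.choice(v) for k, v in SPACE.items()}

def fitness(cfg):
    # Stand-in for validation error (lower is better); in practice this
    # would train and evaluate a network with the given configuration.
    score = abs(cfg["neurons"] - 16) / 16 + abs(cfg["layers"] - 2)
    score += 0.0 if cfg["activation"] == "tanh" else 0.5
    score += 0.0 if cfg["algorithm"] == "rprop" else 0.3
    return score

def crossover(a, b, rng):
    # Uniform crossover: each parameter inherited from either parent.
    return {k: rng.choice([a[k], b[k]]) for k in SPACE}

def mutate(cfg, rng, rate=0.2):
    # Each parameter is resampled from its value set with a small probability.
    return {k: (rng.choice(SPACE[k]) if rng.random() < rate else v)
            for k, v in cfg.items()}

def ga_search(generations=30, pop_size=20, seed=0):
    rng = random.Random(seed)
    pop = [random_config(rng) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]  # truncation selection keeps the elite
        children = [mutate(crossover(rng.choice(parents),
                                     rng.choice(parents), rng), rng)
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return min(pop, key=fitness)

best = ga_search()
```

Because the elite half of each generation is carried over unchanged, the best fitness found is monotone non-increasing across generations within a run. The Taguchi, tabu-search and decision-tree approaches compared in the paper would replace only the search loop; the encoding of the four parameters stays the same.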


Citations
Journal ArticleDOI

Particle swarm optimization trained neural network for structural failure prediction of multistoried RC buildings

TL;DR: A particle swarm optimization-based approach to training the NN (NN-PSO), capable of tackling the problem of predicting structural failure of multistoried reinforced concrete buildings by detecting the future failure possibility of the building structure.

Manufacturing Processes For Engineering Materials

Journal ArticleDOI

Genetic algorithm-optimized multi-channel convolutional neural network for stock market prediction

TL;DR: This study proposes a method to systematically optimize the parameters of the CNN model using a genetic algorithm (GA); the results show that GA-CNN outperforms the comparative models, demonstrating the effectiveness of the hybrid GA-CNN approach.
Journal ArticleDOI

Comprehensive Overview on Computational Intelligence Techniques for Machinery Condition Monitoring and Fault Diagnosis

TL;DR: Recent research and development of computational intelligence techniques for fault diagnosis, prediction and optimal sensor placement are reviewed; the characteristics of different algorithms are compared and their application scenarios summarized.
Book ChapterDOI

Optimization of ANN Architecture: A Review on Nature-Inspired Techniques

TL;DR: This chapter aims to cover a wide range of FNN optimization approaches, with emphasis on nature-inspired algorithms.
References
Book

Neural Networks: A Comprehensive Foundation

Simon Haykin
TL;DR: Thorough, well-organized, and completely up to date, this book examines all the important aspects of this emerging technology, including the learning process, back-propagation learning, radial-basis function networks, self-organizing systems, modular networks, temporal processing and neurodynamics, and VLSI implementation of neural networks.
Journal ArticleDOI

Approximation by superpositions of a sigmoidal function

TL;DR: It is demonstrated that finite linear combinations of compositions of a fixed, univariate function and a set of affine functionals can uniformly approximate any continuous function of n real variables with support in the unit hypercube.
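For reference, the standard statement of this result (Cybenko's universal approximation theorem) is that, for a sigmoidal function $\sigma$, sums of the form

```latex
G(x) = \sum_{j=1}^{N} \alpha_j \,\sigma\!\left(w_j^{\mathsf{T}} x + \theta_j\right),
\qquad x \in I_n = [0,1]^n,
```

with $\alpha_j, \theta_j \in \mathbb{R}$ and $w_j \in \mathbb{R}^n$, are dense in $C(I_n)$: for any continuous $f$ on the unit hypercube and any $\varepsilon > 0$ there exists such a $G$ with $\sup_{x \in I_n} |f(x) - G(x)| < \varepsilon$. This is the theoretical basis for using single-hidden-layer networks as predictive tools, as in the main paper.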
Book

Statistical Decision Theory and Bayesian Analysis

TL;DR: An overview of statistical decision theory, emphasizing the use and application of its philosophical ideas and mathematical structure.
Proceedings ArticleDOI

A direct adaptive method for faster backpropagation learning: the RPROP algorithm

TL;DR: A learning algorithm for multilayer feedforward networks, RPROP (resilient propagation), is proposed that performs a local adaptation of the weight updates according to the behavior of the error function, overcoming the inherent disadvantages of pure gradient descent.
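The core idea of RPROP is that each weight gets its own step size, adapted from the sign agreement of successive partial derivatives rather than their magnitude. The sketch below implements a minimal Rprop⁻ (no weight-backtracking) update under assumed default hyperparameters; it is an illustration of the rule, not the paper's full algorithm.

```python
def rprop_step(grads, prev_grads, steps, params,
               eta_plus=1.2, eta_minus=0.5,
               step_min=1e-6, step_max=50.0):
    """One Rprop- update: each per-weight step size grows when the partial
    derivative keeps its sign and shrinks when the sign flips (an overshoot);
    the weight then moves against the current gradient sign."""
    new_params, new_steps = [], []
    for g, pg, s, p in zip(grads, prev_grads, steps, params):
        if g * pg > 0:            # same sign: accelerate
            s = min(s * eta_plus, step_max)
        elif g * pg < 0:          # sign flip: overshoot, slow down
            s = max(s * eta_minus, step_min)
        if g > 0:
            p -= s
        elif g < 0:
            p += s
        new_params.append(p)
        new_steps.append(s)
    return new_params, new_steps

# Toy usage: minimise f(w) = w^2 (gradient 2w) starting from w = 3.0.
w, step, prev_g = [3.0], [0.1], [0.0]
for _ in range(100):
    g = [2.0 * w[0]]
    w, step = rprop_step(g, prev_g, step, w)
    prev_g = g
```

Because only the sign of the gradient is used, the update is insensitive to the scale of the error surface, which is the "inherent disadvantage of pure gradient descent" the TL;DR refers to.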
Trending Questions (1)
How does the generalization performance of neural networks depend on the architecture and training procedure?

The paper discusses different techniques to determine the optimal architecture of a neural network in manufacturing applications, but it does not specifically address the generalization performance of neural networks based on architecture and training procedure.