This study examines a feed-forward neural network trained on two independent function approximation tasks and demonstrates that the sizes of the resulting modules can be dynamically driven by varying the complexities of the tasks.
Abstract:
We study a feed-forward neural network for two independent function approximation tasks. Upon training, two modules form automatically in the hidden layers, each predominantly handling one of the tasks. We demonstrate that the sizes of the modules can be dynamically driven by varying the complexities of the tasks. The network serves as a simple example of an artificial neural network with an adaptable modular structure. This study was motivated by the analogous dynamical nature of modules in animal brains.
TL;DR: A novel state-of-health (SOH) estimation method using a prior knowledge-based neural network (PKNN) and the Markov chain for a single lithium-ion battery (LIB); the maximum SOH estimation error is reduced to less than 1.7% by the proposed method.
TL;DR: It is shown that evolved phenotypes exhibit robustness to damage, which is conjectured to result from the effects of a complex genotype-phenotype mapping upon simulated evolution.
TL;DR: The authors empirically studied the evolution of connectionist models in the context of modular problems and found that the modularity of the problem is reflected by the architecture of adapted systems, although learning can counterbalance some imperfection of the architecture.
TL;DR: An investigation is conducted into the effects of a complex mapping between genotype and phenotype upon a simulated evolutionary process, and a model of embryogeny is utilised to grow simple French-flag-like patterns.
TL;DR: This book is a detailed, logically developed treatment that covers the theory and uses of collective computational networks, including associative memory, feed-forward networks, and unsupervised learning.
TL;DR: This text is a beginning graduate-level introduction to neural networks, focussing on current theoretical models, examining what these models can reveal about how the brain functions, and discussing the ramifications for psychology, artificial intelligence and the construction of a new generation of intelligent computers.
Q1. What are the contributions mentioned in the paper "Formation and dynamics of modules in a dual-tasking multilayer feed-forward neural network" ?
The authors study a feed-forward neural network for two independent function approximation tasks and demonstrate that the sizes of the modules can be dynamically driven by varying the complexities of the tasks. The study was motivated by the analogous dynamical nature of modules in animal brains.
Q2. What is the output of the ith neuron in the first hidden layer?
The output Vi (1) of the ith neuron in the first hidden layer is given byVi ~1 !5tanhF (j512wi j ~0 !j j1u i ~0 !G , ~1! where wi j (0) and u i (0) are weight and bias, respectively.
Q3. What is the proportionality constant of the modules?
The proportionality constant k is related to the compressibility of the modules with respect to changes in the complexities of the tasks.
Q4. Why are noncompact modules not favored energetically?
In fact, noncompact modules are not favored energetically, since they usually give a slightly larger error owing to their fewer internal connections.
Q5. How are the outputs of the second and third hidden layers computed, and what are their weights?
The outputs of the neurons in the second and the third hidden layers, corresponding to $m = 2$ and $3$ respectively, are given by
$$V_i^{(m)} = \tanh\!\left[\sum_{j=1}^{6} w_{ij}^{(m-1)} V_j^{(m-1)} + \theta_i^{(m-1)}\right]. \qquad (2)$$
The weight $w_{ij}^{(m-1)}$ is zero if the corresponding synaptic connection does not exist.
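A minimal sketch of Eq. (2) follows, with a binary mask standing in for absent synaptic connections (forcing the corresponding $w_{ij}^{(m-1)}$ to zero). The layer width of six follows the summation limit in Eq. (2); the mask pattern itself is hypothetical.

```python
import numpy as np

# Minimal sketch of Eq. (2): outputs of hidden layer m (m = 2 or 3).
# A 0/1 mask encodes the architecture: the effective weight is zero
# wherever the synaptic connection does not exist.
rng = np.random.default_rng(1)

V_prev = np.tanh(rng.normal(size=6))        # V_j^(m-1) from the previous layer
w = rng.normal(size=(6, 6))                 # candidate weights w_ij^(m-1)
mask = rng.integers(0, 2, size=(6, 6))      # 1 = connection exists, 0 = absent
theta = rng.normal(size=6)                  # biases theta_i^(m-1)

V_m = np.tanh((w * mask) @ V_prev + theta)  # V_i^(m), Eq. (2)
print(V_m)
```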
Q6. What is the purpose of this paper?
In conclusion, motivated by the fluidity of modules in the brain, the authors have proposed a novel artificial neural network with analogous adaptable modular structures.
Q7. What is the meaning of the sentence?
The authors have already explained in Sec. II that the approximations of the two functions are uncorrelated tasks, since their inputs are independent.