
Showing papers on "Hybrid neural network published in 2015"


Journal ArticleDOI
TL;DR: It can be concluded that the DE and ACO algorithms are considerably more adaptive in optimizing the forecasting problem for the HNN model, which is based on fuzzy pattern-recognition and the continuity equation.

159 citations


Dissertation
13 May 2015
TL;DR: It is shown that deep neural networks produce consistent and significant improvements over networks with one or two hidden layers, independently of the kind of neural network, MLP or RNN, and of input, handcrafted features or pixels, and that depth plays an important role in the reduction of the performance gap between the two kinds of inputs.
Abstract: The automatic transcription of text in handwritten documents has many applications, from automatic document processing to indexing and document understanding. One of the most popular approaches nowadays consists in scanning the text line image with a sliding window, from which features are extracted, and modeled by Hidden Markov Models (HMMs). Associated with neural networks, such as Multi-Layer Perceptrons (MLPs) or Long Short-Term Memory Recurrent Neural Networks (LSTM-RNNs), and with a language model, these models yield good transcriptions. On the other hand, in many machine learning applications, including speech recognition and computer vision, deep neural networks consisting of several hidden layers recently produced a significant reduction of error rates. In this thesis, we have conducted a thorough study of different aspects of optical models based on deep neural networks in the hybrid neural network / HMM scheme, in order to better understand and evaluate their relative importance. First, we show that deep neural networks produce consistent and significant improvements over networks with one or two hidden layers, independently of the kind of neural network, MLP or RNN, and of input, handcrafted features or pixels. Then, we show that deep neural networks with pixel inputs compete with those using handcrafted features, and that depth plays an important role in the reduction of the performance gap between the two kinds of inputs, supporting the idea that deep neural networks effectively build hierarchical and relevant representations of their inputs, and that features are automatically learnt on the way. Despite the dominance of LSTM-RNNs in the recent literature of handwriting recognition, we show that deep MLPs achieve comparable results. Moreover, we evaluated different training criteria. With sequence-discriminative training, we report similar improvements for MLP/HMMs as those observed in speech recognition. We also show how the Connectionist Temporal Classification framework is especially suited to RNNs. Finally, the novel dropout technique to regularize neural networks was recently applied to LSTM-RNNs. We tested its effect at different positions in LSTM-RNNs, thus extending previous works, and we show that its position relative to the recurrent connections is important. We conducted the experiments on three public databases, representing two languages (English and French) and two epochs, using different kinds of neural network inputs: handcrafted features and pixels. We validated our approach by taking part in the HTRtS contest in 2014. The results of the final systems presented in this thesis, namely MLPs and RNNs, with handcrafted feature or pixel inputs, are comparable to the state-of-the-art on Rimes and IAM. Moreover, the combination of these systems outperformed all published results on the considered databases.
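One concrete point from the thesis, the placement of dropout relative to the recurrent connections, can be illustrated with a minimal sketch. This is not the thesis's actual model: the framework (Keras) and all dimensions below are assumptions for illustration; the idea shown is simply that dropout can be applied to the feed-forward connections between stacked LSTM layers while the recurrent connections are left untouched.

```python
# Minimal sketch, not the thesis's systems: dropout placed on the feed-forward
# connections between stacked LSTM layers, leaving recurrent connections intact.
# Framework (Keras) and all dimensions are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

num_frames, num_features, num_labels = 100, 40, 80   # assumed sizes

inputs = layers.Input(shape=(num_frames, num_features))
x = layers.LSTM(128, return_sequences=True)(inputs)   # recurrent weights: no dropout here
x = layers.Dropout(0.5)(x)                            # dropout on inter-layer activations only
x = layers.LSTM(128, return_sequences=True)(x)
x = layers.Dropout(0.5)(x)
outputs = layers.TimeDistributed(layers.Dense(num_labels, activation="softmax"))(x)

model = models.Model(inputs, outputs)
model.summary()
```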

65 citations


Journal ArticleDOI
TL;DR: A new meta-heuristic algorithm, based on shark abilities in nature, for optimizing the number of hidden nodes pertaining to the NN, is presented and tested on two real-world case studies for predicting wind power.
Abstract: With the rapid growth of wind power generation around the world, this clean energy has become an important green electrical source in many countries. However, the volatile and non-dispatchable nature of this energy source motivates researchers to find accurate and robust methods to predict its future values. Because of the nonlinear and complex behavior of this signal, more efficient wind power forecasting methods are still in demand. In this paper, a new forecasting engine based on a Neural Network (NN) and a novel Chaotic Shark Smell Optimization (CSSO) algorithm is proposed. Choosing the optimal number of nodes for the hidden layer can enhance the efficiency of the NN's training performance. Accordingly, a new meta-heuristic algorithm, based on shark abilities in nature, is presented in this paper for optimizing the number of hidden nodes of the NN. The effectiveness of the proposed forecasting strategy is tested on two real-world case studies for predicting wind power. The obtained results demonstrate the capability of the proposed technique to cope with the variability and intermittency of wind power time series and to provide accurate predictions of its future values.
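The role such a meta-heuristic plays can be sketched generically: candidate hidden-layer sizes are scored by validation error and the best one is kept. The sketch below uses a plain random population search in place of the paper's Chaotic Shark Smell Optimization, and the synthetic data, library (scikit-learn) and sizes are all assumptions, not the paper's setup.

```python
# Illustrative sketch only: choosing the number of hidden nodes by validation
# error with a simple population-based random search standing in for the
# paper's CSSO meta-heuristic. Data and sizes are synthetic/assumed.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))                       # stand-in for lagged wind-power features
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=500)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, random_state=0)

def validation_error(n_hidden):
    model = MLPRegressor(hidden_layer_sizes=(n_hidden,), max_iter=2000, random_state=0)
    model.fit(X_tr, y_tr)
    return np.mean((model.predict(X_va) - y_va) ** 2)

candidates = rng.integers(2, 50, size=10)           # "population" of hidden-layer sizes
best = min(candidates, key=validation_error)
print("selected number of hidden nodes:", best)
```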

52 citations


Journal ArticleDOI
TL;DR: The simulation results indicated that the proposed technique can achieve good compressed images at high decomposition levels in comparison to JPEG2000.

45 citations


Proceedings ArticleDOI
23 Aug 2015
TL;DR: This paper shows that CTC training is close to forward-backward training of NN/HMMs and can be extended to more standard HMM topologies; the method is applied to Multi-Layer Perceptrons (MLPs), and the properties of CTC are investigated.
Abstract: In recent years, Long Short-Term Memory Recurrent Neural Networks (LSTM-RNNs) trained with the Connectionist Temporal Classification (CTC) objective have won many international handwriting recognition evaluations. The CTC algorithm is based on a forward-backward procedure, avoiding the need for a segmentation of the input before training. The network outputs are character labels and a special non-character label. On the other hand, in the hybrid Neural Network / Hidden Markov Models (NN/HMM) framework, networks are trained with framewise criteria to predict state labels. In this paper, we show that CTC training is close to forward-backward training of NN/HMMs and can be extended to more standard HMM topologies. We apply this method to Multi-Layer Perceptrons (MLPs), and investigate the properties of CTC, namely the modeling of characters by single labels and the role of the special label.
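For reference, the CTC objective that the paper relates to forward-backward NN/HMM training can be written in its standard general form (this is the textbook formulation, not necessarily the paper's notation): the probability of a label sequence given the input sums the framewise network outputs over all frame-level paths that collapse to that sequence under the mapping that removes blanks and repeated labels.

```latex
% Standard CTC objective (general formulation, not necessarily the paper's notation).
% y^t_k is the network output for label k at frame t; B removes blanks and repetitions.
\[
  p(\mathbf{l}\mid\mathbf{x}) \;=\; \sum_{\boldsymbol{\pi}\in\mathcal{B}^{-1}(\mathbf{l})}\;\prod_{t=1}^{T} y^{t}_{\pi_t},
  \qquad
  \mathcal{L}_{\mathrm{CTC}} \;=\; -\ln p(\mathbf{l}\mid\mathbf{x})
\]
```

This sum over paths is exactly the kind of quantity a forward-backward recursion computes, which is the connection to NN/HMM training highlighted in the abstract.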

39 citations


Journal ArticleDOI
TL;DR: A novel hybrid model which combines continuity equation and fuzzy pattern-recognition concept with artificial neural network (ANN) is presented for downstream river discharge forecasting in a river network and results indicate that the proposed hybrid model delivers better performance, which can effectively improve forecasting capability at the studied station.
Abstract: Forecasting of river discharge is crucial in hydrology and hydraulic engineering owing to its use in the design and management of water resource projects. The problem is customarily addressed with data-driven models. In this research, a novel hybrid model, which combines the continuity equation and the fuzzy pattern-recognition concept with an artificial neural network (ANN), is presented for downstream river discharge forecasting in a river network. Time-varying water storage at a river station and the fuzzy feature of river flow are considered accordingly. To verify the proposed model, a traditional ANN model, a fuzzy pattern-recognition neural network model, and a hydrological modeling network model have been employed as benchmark models. The root mean squared error, the Nash–Sutcliffe efficiency coefficient and accuracy are adopted as evaluation criteria. The proposed hybrid model is applied to compute downstream river discharge in the Yellow River, Georgia, USA. Results indicate that the proposed hybrid model delivers better performance and can effectively improve forecasting capability at the studied station. It is, therefore, proposed as a novel model for downstream river discharge forecasting, given the highly nonlinear, fuzzy and non-stationary nature of river discharge.
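The two main evaluation criteria named in the abstract can be computed as below; this is a generic sketch with made-up discharge values, not data from the study.

```python
# Generic implementation of the two criteria named in the abstract: root mean
# squared error and the Nash-Sutcliffe efficiency coefficient. Values are made up.
import numpy as np

def rmse(observed, predicted):
    observed, predicted = np.asarray(observed), np.asarray(predicted)
    return np.sqrt(np.mean((observed - predicted) ** 2))

def nash_sutcliffe(observed, predicted):
    observed, predicted = np.asarray(observed), np.asarray(predicted)
    return 1.0 - np.sum((observed - predicted) ** 2) / np.sum((observed - observed.mean()) ** 2)

obs = np.array([120.0, 135.0, 150.0, 160.0, 148.0])   # illustrative discharge values (m^3/s)
pred = np.array([118.0, 140.0, 149.0, 155.0, 150.0])
print("RMSE:", rmse(obs, pred), "NSE:", nash_sutcliffe(obs, pred))
```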

36 citations


Journal ArticleDOI
TL;DR: In this paper, a quantitative correlation is made between hydraulic flow units and well logs in South Pars gasfield, offshore southern Iran, by integrating intelligent and clustering methods of data analysis.
Abstract: Hydraulic flow units are defined as reservoir units with lateral continuity whose geological properties controlling fluid flow are consistent and different from those of other flow units. Because pore-throat size is the ultimate control on fluid flow, each flow unit has a relatively similar pore-throat size distribution resulting in consistent flow behaviour. The relations between porosity and permeability in terms of hydraulic flow units can be used to characterize heterogeneous carbonate reservoirs. In this study, a quantitative correlation is made between hydraulic flow units and well logs in South Pars gasfield, offshore southern Iran, by integrating intelligent and clustering methods of data analysis. For this purpose, a supervised artificial neural network model was integrated with multi-resolution graph-based clustering (MRGC) to identify hydraulic flow units from well log data. The hybrid model provides a more precise definition of flow units compared to definitions based only on a neural network. There is a good agreement between the results of well log analyses and core-derived flow units. The synthesized flow units derived from the well log data are sufficiently reliable to be considered as inputs in the construction of a 3D reservoir model of the South Pars field.

23 citations


Proceedings ArticleDOI
19 Apr 2015
TL;DR: Performance was evaluated in the framework of a hybrid neural network - hidden Markov model (NN-HMM) system on the TIMIT phoneme sequence recognition task, and the results reveal that the cochleogram-spectrogram feature combination provides significant advantages.
Abstract: This paper explores the use of auditory features based on cochleograms, two-dimensional speech features derived from gammatone filters, within the convolutional neural network (CNN) framework. Furthermore, we also propose various possibilities to combine cochleogram features with log-mel filter banks or spectrogram features. In particular, we combine the features at low and high levels of the CNN framework, which we refer to as low-level and high-level feature combination. For comparison, we also construct a similar configuration with a deep neural network (DNN). Performance was evaluated in the framework of a hybrid neural network - hidden Markov model (NN-HMM) system on the TIMIT phoneme sequence recognition task. The results reveal that the cochleogram-spectrogram feature combination provides significant advantages. The best accuracy was obtained by the high-level combination of two-dimensional cochleogram-spectrogram features using the CNN, achieving up to an 8.2% relative phoneme error rate (PER) reduction over CNN single features, or a 19.7% relative PER reduction over DNN single features.
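The two fusion points described in the abstract can be sketched as follows, with Keras assumed as the framework and all feature-map sizes and layer widths chosen purely for illustration (the paper's actual CNN configuration is not reproduced): low-level combination stacks the cochleogram and spectrogram as input channels, while high-level combination merges two separate convolutional branches before the classifier.

```python
# Sketch of low-level vs. high-level feature combination. Keras is an assumed
# framework; dimensions and layer widths are illustrative only.
import tensorflow as tf
from tensorflow.keras import layers, models

frames, bins, n_classes = 100, 40, 39   # assumed TIMIT-like dimensions

# Low-level combination: cochleogram and spectrogram stacked as two input channels.
low_in = layers.Input(shape=(frames, bins, 2))
x = layers.Conv2D(32, (5, 5), activation="relu")(low_in)
x = layers.MaxPooling2D((2, 2))(x)
x = layers.Flatten()(x)
low_out = layers.Dense(n_classes, activation="softmax")(x)
low_model = models.Model(low_in, low_out)

# High-level combination: separate convolutional branches merged before the output layer.
coch_in = layers.Input(shape=(frames, bins, 1))
spec_in = layers.Input(shape=(frames, bins, 1))

def branch(t):
    t = layers.Conv2D(32, (5, 5), activation="relu")(t)
    t = layers.MaxPooling2D((2, 2))(t)
    return layers.Flatten()(t)

merged = layers.Concatenate()([branch(coch_in), branch(spec_in)])
high_out = layers.Dense(n_classes, activation="softmax")(merged)
high_model = models.Model([coch_in, spec_in], high_out)

low_model.summary()
high_model.summary()
```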

18 citations


Proceedings ArticleDOI
18 Feb 2015
TL;DR: The paper examines the effectiveness of feature reduction techniques such as rough set analysis (RSA) and principal component analysis (PCA), and finds that the parallel computing concept is helpful in accelerating the training procedure of the neural network model.
Abstract: Software maintenance is an important aspect of the software development life cycle, hence prior estimation of maintainability effort plays a vital role. Existing approaches for maintainability estimation are mostly based on regression analysis and neural network approaches, and numerous software metrics are used as input for estimation. In this study, Object-Oriented software metrics are considered to provide the requisite input data for designing a model that helps in estimating the maintainability of Object-Oriented software. Models for estimating maintainability are designed using the parallel computing concept of a Neuro-Genetic algorithm (a hybrid approach of neural network and genetic algorithm). This technique is employed to estimate the software maintainability of two case studies, the User Interface System (UIMS) and the Quality Evaluation System (QUES). This paper also examines the effectiveness of feature reduction techniques such as rough set analysis (RSA) and principal component analysis (PCA). The results show that RSA and PCA obtained better results for UIMS and QUES, respectively. Further, it is observed that the parallel computing concept is helpful in accelerating the training procedure of the neural network model.
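The PCA-based feature-reduction step can be sketched with scikit-learn as below; the rough set analysis variant, the Neuro-Genetic training and the UIMS/QUES data are not reproduced, and the synthetic feature matrix merely stands in for the object-oriented metrics.

```python
# Sketch of PCA feature reduction feeding a neural estimator of maintainability
# effort. Data are synthetic stand-ins; RSA and the Neuro-Genetic training are
# not reproduced here.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 11))                            # stand-in for OO metrics
y = X[:, :3].sum(axis=1) + 0.1 * rng.normal(size=200)     # stand-in maintainability effort

model = make_pipeline(PCA(n_components=5), MLPRegressor(max_iter=3000, random_state=0))
model.fit(X, y)
print("training R^2:", model.score(X, y))
```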

17 citations


Patent
12 Aug 2015
TL;DR: A hybrid neural network-based gesture recognition method is proposed in which a pulse coupled neural network detects noise points, a composite denoising algorithm removes them, a cellular neural network extracts edge points to obtain connected regions, curvature-based fingertip detection and elimination of the face region isolate the gesture region, phase-preserving Fourier descriptors computed from the contour of the partitioned gesture region serve as features, and a BP neural network trained on these features performs the recognition.
Abstract: The invention discloses a hybrid neural network-based gesture recognition method. For a gesture image to be recognized and the gesture image training samples, a pulse coupled neural network is first used to detect noise points, and a composite denoising algorithm is then used to process them. A cellular neural network is next used to extract edge points in the gesture image, connected regions are obtained from the extracted edge points, and curvature-based fingertip detection is performed on each connected region to obtain candidate fingertip points. Interference from the face region is eliminated to obtain the gesture region, which is then partitioned according to gesture shape features. Fourier descriptors that keep phase information are computed from the contour points of the partitioned gesture region, and the first several Fourier descriptors are selected as gesture features. A BP neural network is trained on the gesture features of the training samples, and the gesture features of the image to be recognized are input to the BP neural network for recognition. The hybrid neural network-based gesture recognition method provided by the invention improves the accuracy of gesture recognition through the use of several kinds of neural networks.
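Of the pipeline stages, the phase-preserving Fourier descriptor step is the easiest to isolate; a hedged sketch is given below, with the contour represented as complex numbers and the normalisation being an assumed choice rather than the patent's exact procedure. The denoising, edge-extraction and fingertip-detection stages are not shown.

```python
# Sketch of one stage of the pipeline: Fourier descriptors of a gesture contour.
# Contour points are treated as complex numbers x + iy; keeping the first
# coefficients (with their phase) gives a compact shape feature vector.
# Normalisation is an assumed choice, not the patent's exact procedure.
import numpy as np

def fourier_descriptors(contour_xy, n_keep=16):
    """contour_xy: (N, 2) array of ordered contour points."""
    z = contour_xy[:, 0] + 1j * contour_xy[:, 1]
    coeffs = np.fft.fft(z)
    coeffs = coeffs / (np.abs(coeffs[1]) + 1e-12)   # scale normalisation (assumed)
    return coeffs[1:n_keep + 1]                     # drop DC term, keep phase information

theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
contour = np.c_[np.cos(theta), 0.5 * np.sin(theta)]   # toy elliptical "gesture" contour
features = fourier_descriptors(contour)
print(features.shape)
```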

16 citations


Journal ArticleDOI
TL;DR: This paper proposes a structured ANN hybridized with the Gravitational Search Algorithm to solve the inverse kinematics of a 6R PUMA robot manipulator; MLPGSA is found to give better results and lower error than MLPBP.
Abstract: The inverse kinematics of a robot manipulator is to determine the joint variables for a given Cartesian position and orientation of the end effector. There is no unique solution for the inverse kinematics, thus necessitating the application of appropriate predictive models from the soft computing domain. Although an artificial neural network (ANN) can be gainfully used to yield the desired results, the gradient descent learning algorithm does not have the ability to search for the global optimum and gives a slow convergence rate. This paper proposes a structured ANN hybridized with the Gravitational Search Algorithm to solve the inverse kinematics of a 6R PUMA robot manipulator. The ANN model used is a multi-layered perceptron neural network (MLPNN) with the back-propagation (BP) algorithm, which is compared with a hybrid multi-layered perceptron gravitational search algorithm (MLPGSA). An attempt has been made to find the best ANN configuration for the problem. It has been observed that MLPGSA gives a faster convergence rate and alleviates the problem of trapping in local minima. MLPGSA is found to give better results and lower error compared to MLPBP.

Journal ArticleDOI
TL;DR: Simulation results show that the multilayer perceptron with recurrent architecture achieves the best classification accuracy with minimal computational complexity.
Abstract: A multilayer perceptron (MLP) with recurrent architecture is proposed for stabilizing the state vector, which represents the characteristics of the nodes in a graph, in order to classify graph-structured data. M input and output networks are constructed for M-node undirected graphs to classify the graph-structured data. The output of every input network represents the characteristics of a node as a state vector. The output of each input MLP is also fed back as input to the same network, along with the outputs of the neighboring nodes' MLPs. Both the input and output networks are trained by backpropagation. The proposed approach is implemented on standard benchmark classification problems, namely the mutagenesis problem, the subgraph matching problem and the clique problem. Simulation results show that the best classification accuracy is obtained with minimal computational complexity.
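The state-stabilisation idea can be sketched in plain NumPy: each node's state vector is recomputed from its own features and its neighbours' current states until the states stop changing. The fixed random two-layer transform below merely stands in for the trained input MLPs, and the graph and dimensions are illustrative assumptions.

```python
# Sketch of the node-state stabilisation idea: states are recomputed from node
# features and neighbouring states until convergence. A random two-layer
# transform stands in for the trained input MLPs; graph and sizes are assumed.
import numpy as np

rng = np.random.default_rng(0)
n_nodes, d_feat, d_state = 5, 4, 3
adjacency = np.array([[0, 1, 1, 0, 0],
                      [1, 0, 1, 0, 0],
                      [1, 1, 0, 1, 0],
                      [0, 0, 1, 0, 1],
                      [0, 0, 0, 1, 0]], dtype=float)
features = rng.normal(size=(n_nodes, d_feat))

W1 = rng.normal(scale=0.3, size=(d_feat + d_state, 8))
W2 = rng.normal(scale=0.3, size=(8, d_state))

def update(states):
    neighbour_avg = adjacency @ states / np.maximum(adjacency.sum(1, keepdims=True), 1)
    h = np.tanh(np.hstack([features, neighbour_avg]) @ W1)
    return np.tanh(h @ W2)

states = np.zeros((n_nodes, d_state))
for _ in range(50):                       # iterate until the state vectors stabilise
    new_states = update(states)
    if np.max(np.abs(new_states - states)) < 1e-6:
        break
    states = new_states
print(states.round(3))
```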

Journal ArticleDOI
TL;DR: Results evaluating the performance of the supervisory fuzzy PID-based control system and the hybrid NN-based pH estimator are presented and lead to the conclusion that the proposed algorithms are appropriate for the nonlinearities encountered in pH reactors.
Abstract: This work concerns designing multiregional supervisory fuzzy PID (Proportional-Integral-Derivative) control for pH reactors. The proposed work focuses mainly on two themes. The first is to propose a multiregional supervisory fuzzy-based cascade control structure that would enable modifying the dynamics and enhance the system's stability. The fuzzy system (master loop) has been chosen as a tuner for the PID controller (slave loop); it takes into consideration parameter uncertainties and reference tracking. The second theme concerns designing a hybrid neural network-based pH estimator. The proposed estimator would overcome the industrial drawbacks, namely cost and size, found with conventional methods for pH measurement. The final end-user-interface (EUI) front panel and the results that evaluate the performance of the supervisory fuzzy PID-based control system and the hybrid NN-based estimator have been presented using the compatibility between LabVIEW and MATLAB. They lead to the conclusion that the proposed algorithms are appropriate for the system nonlinearities encountered in pH reactors.

Proceedings ArticleDOI
01 Jan 2015
TL;DR: The article analyses the influence of training sample data on the results of power equipment actual state assessment using different statistical criteria.
Abstract: The article is concerned with problems in the development and implementation of expert systems, based on hybrid neural networks, for assessing the actual state of power equipment at power stations and substations. It analyses the influence of training sample data on the results of the actual state assessment using different statistical criteria.

Journal ArticleDOI
TL;DR: A tool based on a hybrid neural network approach for identifying the photovoltaic one-diode model is presented; it constitutes a complete and extremely easy-to-use tool suitable for implementation in a microcontroller-based architecture.
Abstract: A tool based on a hybrid neural network approach for identifying the photovoltaic one-diode model is presented. The generalization capabilities of neural networks are used together with the robustness of the reduced form of the one-diode model. Indeed, from the studies performed by the authors and the works present in the literature, it was found that a direct computation of the five parameters via a multiple-input, multiple-output neural network is a very difficult task. The reduced form consists of a series of explicit formulae supporting the neural network, which, in our case, is aimed at predicting just two of the five parameters identifying the model: the other three parameters are computed by the reduced form. The present hybrid approach is efficient from the computational cost point of view and accurate in the estimation of the five parameters. It constitutes a complete and extremely easy-to-use tool suitable for implementation in a microcontroller-based architecture. Validations are made on about 10000 PV panels belonging to the California Energy Commission database.
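For reference, the five-parameter single-diode model that the hybrid tool identifies is commonly written as follows (general form; the authors' reduced-form expressions are not reproduced here), with photocurrent I_ph, saturation current I_0, ideality factor n, series resistance R_s, shunt resistance R_sh and thermal voltage V_t:

```latex
% Standard five-parameter single-diode model of a PV panel (general form; the
% paper's reduced-form expressions are not reproduced here).
\[
  I \;=\; I_{ph} \;-\; I_{0}\!\left[\exp\!\left(\frac{V + I R_{s}}{n\,V_{t}}\right) - 1\right]
        \;-\; \frac{V + I R_{s}}{R_{sh}}
\]
```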

Journal ArticleDOI
01 Sep 2015
TL;DR: A hybrid algorithm was developed to estimate the RBF neural network parameters (the weights, widths and centers of the hidden units) simultaneously and the results demonstrated the superior performance of the hybrid algorithmic method.
Abstract: Highlights: a hybrid algorithm based on GBMO is proposed for image classification; a new hybrid neural network is introduced for MIML problems; SNPOM is utilized to decrease the training and test times. Multi-instance multi-label (MIML) learning plays a pivotal role in Artificial Intelligence studies. The MIML setting introduces a framework in which data are described by a bag of instances associated with a set of labels, and modeling the connection between instances and labels is the challenging problem for MIML. The RBF neural network can capture the complex relations between the instances and labels in MIMLRBF, but parameter estimation for the RBF network is a difficult task. In this paper, the computational convergence and the modeling accuracy of the RBF network are improved. The present study investigates the impact on multi-instance multi-label learning of a novel hybrid algorithm consisting of the Gases Brownian Motion optimization (GBMO) algorithm and a gradient-based, fast-converging parameter estimation method. A hybrid algorithm was developed to estimate the RBF neural network parameters (the weights, widths and centers of the hidden units) simultaneously; it uses the robustness of the GBMO to search the parameter space and the efficiency of the gradient-based method. For this purpose, two real-world MIML tasks and a Corel dataset were utilized within a two-step experimental design. In the first step, the GBMO algorithm was used to determine the widths and centers of the network nodes. In the second step, for each molecule with fixed inputs and number of hidden nodes, the parameters were optimized by a structured nonlinear parameter optimization method (SNPOM). The findings demonstrate the superior performance of the hybrid algorithmic method. Additionally, the results for training and testing the dataset reveal that the hybrid method enhances RBF network learning more efficiently than other conventional RBF approaches and obtains better modeling accuracy than some other algorithms.
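The quantities being estimated (weights, centres and widths of the hidden units) enter through the standard Gaussian RBF network output, written here in its general form rather than the MIMLRBF-specific formulation:

```latex
% Standard Gaussian RBF network output (general form). The hybrid algorithm in
% the paper estimates the weights w_j, centres c_j and widths sigma_j.
\[
  y(\mathbf{x}) \;=\; \sum_{j=1}^{M} w_{j}\,
      \exp\!\left(-\frac{\lVert \mathbf{x} - \mathbf{c}_{j} \rVert^{2}}{2\sigma_{j}^{2}}\right)
\]
```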

Proceedings ArticleDOI
01 Jun 2015
TL;DR: A new hybrid neural network for image compression is presented in this paper, in which a hybrid genetic algorithm and BP algorithm approach is used to train the weight vector.
Abstract: A new hybrid neural network for image compression is presented in this paper, in which a hybrid genetic algorithm and BP algorithm approach is used to train the weight vector. The essence of the hybrid neural network in this paper is a feed-forward artificial neural network that uses the hybrid intelligent learning algorithm for training. The advantages of the genetic algorithm are parallel search and high search efficiency, so the convergence speed and precision are greatly improved. The results of this method show a high compression ratio, a high signal-to-noise ratio, low coding errors, high decoding speed and good subjective reconstruction quality.

Proceedings ArticleDOI
28 Jul 2015
TL;DR: In this paper, a predictive model is established for the power kite using a hybrid neural network, and the PFC principles are then applied for its controller design, which integrates on-line identification, a learning mechanism and a predictive controller.
Abstract: The power kite is a kind of high altitude wind energy (HAWE) which has received increasing attention in the last decade. The unique feature of the kite-based system is its structural simplicity coupled with the complexity of its modeling and control. Since the system is open-loop unstable, it is difficult to model, and it is subject to significant external disturbances during operation. To address these challenges, a nonlinear predictive functional controller (PFC) is presented in this paper. First, a predictive model is established for the power kite using a hybrid neural network, and the PFC principles are then applied for its controller design. With the neural network structure, the PFC integrates on-line identification, a learning mechanism and a predictive controller. A closed-loop control system is developed and implemented to improve the performance of the power kite. The effectiveness of the proposed approach is illustrated by numerical simulation tests.

Journal ArticleDOI
TL;DR: This paper reviews various intelligent computing methods used to detect sleep disorders and notes that the traditional questionnaire-based approach has now been superseded by these techniques, which enhance accuracy, sensitivity and specificity.
Abstract: Intelligent computing methods and knowledge-based systems are well-known techniques used for the detection of various medical disorders. This paper reviews the intelligent computing methods that are used to detect sleep disorders, with a focus on sleep apnea, insomnia, parasomnia and snoring. The most common diagnostic methods used by researchers are based on knowledge-based systems (KBS), rule based reasoning (RBR), case based reasoning (CBR), fuzzy logic (FL), artificial neural networks (ANN), support vector machines (SVM), multi-layer perceptron (MLP) neural networks, genetic algorithms (GA), k-nearest neighbor (kNN), hybrid neural networks, Bayesian networks (BN), data mining (DM) and many other integrated approaches. In the traditional approach, a questionnaire was used for the detection of these disorders; this has now been superseded by the above-mentioned techniques, which enhance accuracy, sensitivity and specificity.

Journal ArticleDOI
TL;DR: Results showed that the integration of different artificial neural networks using a generalized regression neural network can significantly improve the accuracy of the final prediction.
Abstract: Stoneley wave velocity (Vst) is capable of providing accurate data for reservoir characterization objectives such as permeability estimation, fracture evaluation, formation anisotropy identification, etc. At the first stage of this study, different types of artificial neural networks, including a generalized regression neural network, a radial basis neural network, and a feed-forward backpropagation neural network, were utilized to predict Vst from conventional well log data. A generalized regression neural network was then employed to combine the results of these artificial neural networks for an overall estimation of Vst. This novel hybrid method can enhance the accuracy of the final prediction by reaping the benefits of the individual artificial neural networks. The proposed methodology, a hybrid neural network, was applied to the Asmari formation, the major carbonate reservoir rock of the southern Iranian oil fields. A group of 1,640 data points was used to establish the intelligent model, and a group of 8...

Journal ArticleDOI
TL;DR: In this paper, a wavelet hybrid neural network (WHNN) is proposed to classify multiple harmonic sources, whose typical voltage-current characteristics are non-linear closed curves in the time domain corresponding to converters, reactors, and non-linear loads.
Abstract: This paper proposes a method that uses non-linear voltage-current characteristics to classify multiple harmonic sources with a wavelet hybrid neural network (WHNN). Typical voltage-current characteristics of harmonic sources are non-linear closed curves in the time domain, corresponding to converters, reactors, and non-linear loads. The hybrid neural network is a two-subnetwork architecture, consisting of a wavelet layer and a self-organizing feature map (SOFM) network connected in cascade. The effectiveness of the proposed method is demonstrated by numerical tests, whose results for multiple harmonic sources show computational efficiency and accurate classification.

Journal ArticleDOI
Xian Wang, Jianrong Pan, Zhan Dong
TL;DR: The improved chaotic particle swarm is applied to optimize the neural network so as to improve the computational efficiency and accuracy, and the proposed sensitivity analysis method evaluates the global response of the outputs by varying all the input parameters at a time while accounting for the correlations of parameters.
Abstract: In stochastic sensitivity analysis, a large number of simulation models leads to low computational efficiency. As the dimension of the assigned problem increases, accuracy is difficult to achieve with popular regression methodologies. Without considering the correlations of parameters, inaccurate sensitivity coefficients would be calculated when analyzing the effect of parametric variables on structures; moreover, the existing methods only calculate the local gradient as the sensitivity. To address these problems, an approximate model and a global sensitivity method are employed for the design sensitivity analysis of structures. The approximate model is constructed by a hybrid neural network, which possesses significant learning capacity and generalization capability with a small amount of information. An improved chaotic particle swarm is applied to optimize the neural network so as to improve the computational efficiency and accuracy. The proposed sensitivity analysis method evaluates the global response of the outputs by varying all the input parameters at a time while accounting for the correlations of parameters. Uniform design and Latin hypercube sampling are used to sample points. Numerical analysis shows that the proposed method can successfully measure the actual sensitivity.
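The Latin hypercube sampling step mentioned in the abstract can be sketched with SciPy's quasi-Monte Carlo module; the number of variables, bounds and sample size below are illustrative assumptions, not the study's design.

```python
# Sketch of Latin hypercube sampling used to generate training points for an
# approximate model. Dimensions, bounds and sample size are illustrative only.
import numpy as np
from scipy.stats import qmc

n_vars, n_samples = 4, 50
sampler = qmc.LatinHypercube(d=n_vars, seed=0)
unit_samples = sampler.random(n=n_samples)                 # points in [0, 1)^d
lower = np.array([0.0, 1.0, -1.0, 10.0])                   # assumed parameter bounds
upper = np.array([1.0, 5.0, 1.0, 20.0])
design_points = qmc.scale(unit_samples, lower, upper)      # rescale to the bounds
print(design_points[:3])
```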

Journal ArticleDOI
TL;DR: A neural network is proposed to be used, with features derived from the probabilistic Hough voting step of the Generalized Hough Transform, to implement an improved version of the GHT where the output of the network represents the conventional target class posteriors.
Abstract: While typical hybrid neural network architectures for automatic speech recognition (ASR) use a context window of frame-based features, this may not be the best approach to capture the wider temporal context, which contains phonetic and linguistic information that is equally important. In this paper, we introduce a system that integrates both the spectral and geometrical shape information from the acoustic spectrum, inspired by research in the field of machine vision. In particular, we focus on the Generalized Hough Transform (GHT), which is a sophisticated technique that can model the geometrical distribution of speech information over the wider temporal context. To integrate the GHT as part of a hybrid-ASR system, we propose to use a neural network, with features derived from the probabilistic Hough voting step of the GHT, to implement an improved version of the GHT where the output of the network represents the conventional target class posteriors. A major advantage of our approach is that each step of the GHT is highly interpretable, particularly compared to deep neural network (DNN) systems which are commonly treated as powerful black-box classifiers that give little insight into how the output is achieved. Experiments are carried out on two speech pattern classification tasks. The first is the TIMIT phoneme classification, which demonstrates the performance of the approach on a standard ASR task. The second is a spoken word recognition challenge, which highlights the flexibility of the approach to capture phonetic information within a longer temporal context.

Journal Article
TL;DR: An Adaptive-Genetic Algorithm (A-GA) based ANN learning and weight-estimation scheme has been developed that alleviates existing Artificial Neural Network limitations such as local minima and convergence issues, and it performs better than major existing schemes.
Abstract: To meet the requirement of efficient software defect prediction, an evolutionary computing based neural network learning scheme has been developed in this paper that alleviates existing Artificial Neural Network (ANN) limitations such as local minima and convergence issues. To achieve optimal software defect prediction, an Adaptive-Genetic Algorithm (A-GA) based ANN learning and weight-estimation scheme has been developed. Unlike conventional GA, we have used adaptive crossover and mutation probability parameters, which alleviates the issue of disruption towards the optimal solution. We have used object-oriented software metrics (CK metrics) for fault prediction, and the proposed Evolutionary Computing Based Hybrid Neural Network (HENN) algorithm has been examined for performance in terms of accuracy, precision, recall, F-measure, completeness, etc., where it performed better compared to major existing schemes. The proposed scheme exhibited 97.99% prediction accuracy while ensuring optimal precision, F-measure and recall.
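The adaptive crossover and mutation probabilities can be illustrated with one common adaptation rule (in the spirit of Srinivas and Patnaik's adaptive GA); the paper's exact rule and constants are not specified here, so the functions below are a generic sketch rather than the authors' implementation.

```python
# Generic illustration of fitness-dependent (adaptive) crossover and mutation
# probabilities, in the spirit of Srinivas & Patnaik's adaptive GA. Constants
# and the exact rule used in the paper are not reproduced.
def adaptive_crossover_prob(f_parent, f_avg, f_max, k1=1.0, k3=1.0):
    """Fitter parents get a lower crossover probability, protecting good solutions."""
    if f_parent >= f_avg and f_max > f_avg:
        return k1 * (f_max - f_parent) / (f_max - f_avg)
    return k3

def adaptive_mutation_prob(f_individual, f_avg, f_max, k2=0.5, k4=0.5):
    """Fitter individuals get a lower mutation probability."""
    if f_individual >= f_avg and f_max > f_avg:
        return k2 * (f_max - f_individual) / (f_max - f_avg)
    return k4

print(adaptive_crossover_prob(0.9, 0.6, 1.0), adaptive_mutation_prob(0.9, 0.6, 1.0))
```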

Journal Article
TL;DR: Experimental verification shows that the 3-3-3 combination under the trapezoid membership function with the hybrid neural network support and the 2-2-2 combination under the g-bell membership function with the same neural network support perform the best among all combinations, with RMSE 4.78881 and 4.12944, giving on average 5% deviation from the observed values.
Abstract: Modeling of groundwater recharge is one of the most important topics in hydrology due to its essential application to water resources management. In this study, an Adaptive Neuro Fuzzy Inference System (ANFIS) method is used to simulate groundwater recharge for watersheds. In-situ observational datasets for temperature, precipitation, evapotranspiration (ETo) and groundwater recharge of the Lake Karla watershed, Thessaly, Greece, were taken into consideration for the present study. The datasets consisted of monthly average values covering almost the last 50 years, where 70% of the values were used for learning and the rest for the testing phase. The testing was performed under a set of different membership functions without expert knowledge acquisition and with the support of a five-layer neural network. Experimental verification shows that the 3-3-3 combination under the trapezoid membership function with the hybrid neural network support and the 2-2-2 combination under the g-bell membership function with the same neural network support perform the best among all combinations, with RMSE 4.78881 and 4.12944, giving on average 5% deviation from the observed values.

Book ChapterDOI
02 Aug 2015
TL;DR: The results indicated that the neural network combined with association rules not only has excellent dimensionality-reduction ability but also achieves prediction accuracy similar to that of the correlation-based neural network, which has the best prediction accuracy among the three systems compared.
Abstract: Breast cancer is the second leading cause of death among women aged between 40 and 59 worldwide. The diagnosis of this disease has been a challenging research problem. With the advancement of artificial intelligence in medical science, numerous AI-based breast cancer diagnosis systems have been proposed. Many studies combine different algorithms to develop hybrid systems that improve the diagnosis accuracy. In this study, we propose three artificial neural network based hybrid diagnosis systems, combining association rules, correlation and a genetic algorithm, respectively. The effectiveness of these systems is examined on the Wisconsin Breast Cancer Dataset, and the accuracy of the three hybrid diagnosis systems is compared. The results indicated that the neural network combined with association rules not only has excellent dimensionality-reduction ability but also achieves prediction accuracy similar to that of the correlation-based neural network, which has the best prediction accuracy among the three systems compared.

Journal ArticleDOI
TL;DR: The recovery predicted by the ACO-ANN model was in good agreement with the values measured from simulations and comparable to those estimated from the other models proposed in the literature.
Abstract: A hybrid system is a potential tool for dealing with nonlinear regression problems. The authors present an efficient prediction model for the gas-assisted gravity drainage injection recovery process based on an artificial neural network (ANN) and dimensionless groups. Ant colony optimization (ACO) is applied to determine the network parameters. Results show that the ACO optimization algorithm can obtain the optimal parameters of the ANN model with very high predictive accuracy. The recovery predicted by the ACO-ANN model was in good agreement with the values measured from simulations and comparable to those estimated from the other models proposed in the literature.

Journal ArticleDOI
TL;DR: The results showed that muscular actuations exhibited periodic behaviors and that the maximum length variation of the temporalis muscle was larger than that of the masseter and pterygoid muscles in the 6-universal-prismatic-spherical parallel mechanism.
Abstract: Introduction: We aimed to introduce a 6-universal-prismatic-spherical (UPS) parallel mechanism for human jaw motion and to theoretically evaluate its kinematic problem. We proposed a strategy to provide a fast and accurate solution to the kinematic problem; the strategy accelerates the process of solution-finding for the direct kinematic problem by reducing the number of iterations required to reach the desired accuracy level. Materials and Methods: To overcome the direct kinematic problem, an artificial neural network and a third-order Newton-Raphson algorithm were combined to provide an improved hybrid method. In this method, an approximate solution to the direct kinematic problem is provided by the neural network; this solution is then used as the initial guess for the third-order Newton-Raphson algorithm, which provides an answer with the desired level of accuracy. Results: The results showed that the proposed combination could help find an approximate solution and reduce the execution time for the direct kinematic problem. Muscular actuations exhibited periodic behaviors, and the maximum length variation of the temporalis muscle was larger than that of the masseter and pterygoid muscles. By reducing the processing time for solving the direct kinematic problem, more time could be devoted to control calculations. In this method, for relatively high levels of accuracy, the number of iterations and the computational time decreased by 90% and 34%, respectively, compared to the conventional Newton method. Conclusion: The present analysis could allow researchers to characterize and study the mastication process by specifying different chewing patterns (e.g., muscle displacements).
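The core hybrid idea, an approximate predictor supplying the starting point for a Newton refinement, can be sketched on a toy two-equation system; the trained network, the 6-UPS constraint equations and the third-order Newton variant used in the paper are not reproduced, and a standard Newton iteration stands in.

```python
# Sketch of the hybrid idea: an approximate predictor (the neural network in the
# paper) supplies the starting point, and a Newton iteration refines it. A toy
# two-equation system stands in for the 6-UPS constraint equations; the paper's
# third-order Newton variant is not reproduced.
import numpy as np

def residual(q):
    x, y = q
    return np.array([x**2 + y**2 - 1.0,      # toy constraint equations
                     x - y**3])

def jacobian(q):
    x, y = q
    return np.array([[2 * x, 2 * y],
                     [1.0, -3 * y**2]])

def approximate_predictor():
    return np.array([0.8, 0.6])              # stands in for the NN's initial guess

q = approximate_predictor()
for _ in range(20):
    step = np.linalg.solve(jacobian(q), residual(q))
    q = q - step
    if np.linalg.norm(step) < 1e-12:          # stop once the desired accuracy is reached
        break
print(q)
```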

Journal Article
TL;DR: Results indicate that optimally trained artificial neural networks may accurately predict airfoil profile.
Abstract: Here, we investigate a different hybrid neural network method for the design of airfoils using an inverse procedure. The aerodynamic force coefficients corresponding to a series of airfoils are stored in a database along with the airfoil coordinates. A feedforward neural network (FNN) is created with the aerodynamic coefficients as input and the airfoil coordinates as output. Existing FNN training methods have limitations associated with local optima and oscillation. The cost terms of the first algorithm are selected based on the activation functions of the hidden neurons and the first-order derivatives of the activation functions of the output neurons. The cost terms of the second algorithm are selected based on the first-order derivatives of the activation functions of the hidden neurons and the activation functions of the output neurons. Results indicate that optimally trained artificial neural networks may accurately predict the airfoil profile.

Book ChapterDOI
09 Nov 2015
TL;DR: A non-linear ordinal logistic regression method is proposed based on the combination of a linear regression model and an evolutionary neural network with hybrid basis functions, combining Sigmoidal Unit and Radial Basis Function neural networks.
Abstract: This paper proposes a non-linear ordinal logistic regression method based on the combination of a linear regression model and an evolutionary neural network with hybrid basis functions, combining Sigmoidal Unit (SU) and Radial Basis Function (RBF) neural networks. The process for obtaining the coefficients is carried out in several steps. First, we use an evolutionary algorithm to determine the structure of the hybrid neural network model; in a second step, we augment the initial covariate space by adding the non-linear transformations of the input variables given by the hybrid hidden layer of the best individual of the evolutionary algorithm. Finally, we apply an ordinal logistic regression in the new feature space. This methodology is tested using 10 benchmark problems from the UCI repository. The hybrid model outperforms both the pure RBF and the pure SU models, obtaining a good compromise between them and better results in terms of accuracy and ordinal classification error.
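The feature-augmentation step can be sketched as below: the original covariates are extended with sigmoidal-unit and Gaussian-RBF basis outputs before fitting a logistic model. The evolutionary structure search and the ordinal (proportional-odds) link used in the paper are not reproduced; a plain scikit-learn logistic regression on a toy binary target stands in, and the random projections and centres are illustrative assumptions.

```python
# Sketch of augmenting the covariate space with hybrid (sigmoidal + Gaussian RBF)
# basis outputs before a logistic fit. The evolutionary search and the ordinal
# link are not reproduced; a plain logistic regression on toy data stands in.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 3))
y = (X[:, 0] + X[:, 1] ** 2 > 0.5).astype(int)        # toy binary target

def hybrid_basis(X, n_su=2, n_rbf=2):
    su_w = rng.normal(size=(X.shape[1], n_su))         # sigmoidal-unit projections (assumed)
    centres = rng.normal(size=(n_rbf, X.shape[1]))     # RBF centres (assumed)
    su = 1.0 / (1.0 + np.exp(-X @ su_w))
    rbf = np.exp(-np.linalg.norm(X[:, None, :] - centres[None], axis=2) ** 2)
    return np.hstack([X, su, rbf])                     # augmented covariate space

Z = hybrid_basis(X)
clf = LogisticRegression(max_iter=1000).fit(Z, y)
print("augmented features:", Z.shape[1], "training accuracy:", clf.score(Z, y))
```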