scispace - formally typeset
Topic

Hybrid neural network

About: Hybrid neural network is a research topic. Over its lifetime, 1,305 publications have been published within this topic, receiving 18,223 citations.


Papers
Proceedings ArticleDOI
21 Jun 2004
TL;DR: This paper presents a genetic-neural network for controlling the sinter burn-through point (BTP), since BTP control is of primary importance and is tightly coupled with sinter ore quality.
Abstract: This paper presents a genetic-neural network for controlling the sinter burn-through point (BTP), since BTP control is of primary importance and is tightly coupled with sinter ore quality. Offline, a genetic algorithm (GA) is used to optimize the initial connection weights and thresholds; online, a hybrid neural network (HNN) based on the backpropagation principle trains the mapping parameters and improves the system precision in each sampling period. Results obtained from the actual process demonstrate the superior performance and capability of the proposed system.
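The two-stage scheme the abstract describes, a GA searching for good initial weights offline and backpropagation refining them afterwards, can be sketched on a toy regression task. Everything below (the data, the network size, the GA settings) is an illustrative assumption, not the paper's actual sinter model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data standing in for sinter process measurements (assumption).
X = rng.uniform(-1, 1, (64, 2))
y = np.tanh(X @ np.array([1.5, -2.0]) + 0.3)[:, None]

H = 5                              # hidden units
n_params = 2 * H + H + H + 1       # W1 (2xH) + b1 + W2 (Hx1) + b2

def unpack(theta):
    i = 0
    W1 = theta[i:i + 2 * H].reshape(2, H); i += 2 * H
    b1 = theta[i:i + H]; i += H
    W2 = theta[i:i + H].reshape(H, 1); i += H
    b2 = theta[i:i + 1]
    return W1, b1, W2, b2

def mse(theta):
    W1, b1, W2, b2 = unpack(theta)
    pred = np.tanh(X @ W1 + b1) @ W2 + b2
    return float(np.mean((pred - y) ** 2))

# --- Offline stage: GA optimizes the initial connection weights ---
pop = rng.normal(0, 1, (30, n_params))
for gen in range(40):
    order = np.argsort([mse(p) for p in pop])
    parents = pop[order[:10]]                      # truncation selection
    children = []
    for _ in range(20):
        a, b = parents[rng.integers(10, size=2)]
        mask = rng.random(n_params) < 0.5          # uniform crossover
        child = np.where(mask, a, b)
        child = child + rng.normal(0, 0.1, n_params) * (rng.random(n_params) < 0.1)  # mutation
        children.append(child)
    pop = np.vstack([parents, children])

theta = pop[np.argmin([mse(p) for p in pop])]
loss_after_ga = mse(theta)

# --- Online stage: backpropagation refines the GA solution each sampling period ---
lr = 0.05
for step in range(200):
    W1, b1, W2, b2 = unpack(theta)
    h = np.tanh(X @ W1 + b1)
    pred = h @ W2 + b2
    err = 2 * (pred - y) / len(X)                  # dLoss/dpred for MSE
    gW2 = h.T @ err
    gb2 = err.sum(0)
    dh = (err @ W2.T) * (1 - h ** 2)               # backprop through tanh
    gW1 = X.T @ dh
    gb1 = dh.sum(0)
    grad = np.concatenate([gW1.ravel(), gb1, gW2.ravel(), gb2])
    theta = theta - lr * grad

loss_final = mse(theta)
```

The GA explores the weight space globally (avoiding poor random initializations), while the gradient steps exploit the local basin the GA found, which is the division of labor the abstract describes.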

6 citations

Journal ArticleDOI
Lingyu Yan, Menghan Sheng, Chunzhi Wang, Rong Gao, Han Yu
TL;DR: In this article, a hybrid neural network structure combining a Sparse Autoencoder and a Convolutional Neural Network (SCNN) is proposed. The Sparse Autoencoder reconstructs the input data, learning to bring the reconstruction close to the original and thereby extracting more abstract, higher-level features.
Abstract: With the development of science and technology, intelligence is gradually being integrated into human daily life; the smart city uses innovative technology to manage and operate cities intelligently. Through research on facial expression recognition technology, this paper explores its application in smart-city construction. A hybrid neural network structure is proposed that combines a Sparse Autoencoder and a Convolutional Neural Network (SCNN). The network reconstructs the input data with the Sparse Autoencoder, learning to bring the reconstruction close to the original data and thereby obtaining more abstract, higher-level features. The Convolutional Neural Network then further extracts these features and reduces their dimensionality. The model effectively addresses two problems: a shallow network structure cannot fully extract image features, and the model must be trained with a small number of samples. The model is cross-validated on the CK+, FER2013, and Oulu-CASIA databases. The experimental results show that the model achieves good results on all three databases, and its accuracy is greatly improved compared with other methods.
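The SCNN pipeline (sparse reconstruction first, then convolutional feature extraction and dimensionality reduction) can be sketched with NumPy. The data, layer sizes, and the top-k sparsity rule below are illustrative assumptions; the paper's model would use a trained KL-divergence sparsity penalty and 2-D convolutions:

```python
import numpy as np

rng = np.random.default_rng(1)

def relu(x):
    return np.maximum(x, 0.0)

# Toy 8x8 "face patches" flattened to 64-D vectors (assumption).
X = rng.random((16, 64))

# --- Sparse autoencoder: reconstruct the input through a sparse code ---
W_enc = rng.normal(0, 0.1, (64, 32))
W_dec = rng.normal(0, 0.1, (32, 64))

def sae(X, sparsity=0.05):
    code = relu(X @ W_enc)
    # Hard top-quantile sparsity: keep only the strongest activations per
    # sample (a stand-in for the KL sparsity penalty used during training).
    thresh = np.quantile(code, 1 - sparsity, axis=1, keepdims=True)
    code = np.where(code >= thresh, code, 0.0)
    recon = code @ W_dec           # decoder tries to match the original input
    return code, recon

code, recon = sae(X)

# --- Convolutional stage: extract local patterns from the sparse code ---
def conv1d_valid(x, kernel):
    k = len(kernel)
    return np.array([x[i:i + k] @ kernel for i in range(len(x) - k + 1)])

kernel = np.array([1.0, -1.0, 1.0])
features = np.stack([relu(conv1d_valid(c, kernel)) for c in code])

# Max pooling reduces dimensionality before a classifier head would follow.
pooled = features.reshape(len(X), -1, 5).max(axis=2)
```

Feeding the sparse code (rather than raw pixels) to the convolutional stage is the point of the hybrid: the autoencoder supplies abstract features, the CNN compresses and refines them.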

6 citations

Proceedings ArticleDOI
27 Nov 1995
TL;DR: Experiments show that consistency can improve the classification capability of LVQ, both by reducing the influence of distorted features and by making the boundaries of overlapping classes more discriminative.
Abstract: This paper presents a hybrid neural network system that combines a learning vector quantization (LVQ) classifier with the theory of consistency. The hybrid system uses consistency to measure the degree of matching between input feature vectors and output classes. In the calculation of consistency, a probability distribution describes the occurrence frequencies of the various classes in a neighborhood region associated with the input feature. This avoids a failure mode common in complex machine-fault classification problems, in which one or a few deviated input features distort the Euclidean distance and lead to misclassification. Experiments show that consistency can improve the classification capability of LVQ, both by reducing the influence of distorted features and by making the boundaries of overlapping classes more discriminative. In identifying faults in a tapping machine, the success rate of this hybrid method exceeded that of backpropagation and conventional LVQ classifiers.
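The idea of weighting LVQ's prototype distances by class-occurrence frequencies in a neighborhood can be sketched as follows. The toy data, the k-nearest-neighbor frequency estimate, and the score formula are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

# Toy 2-D fault features: class 0 near the origin, class 1 near (3, 0) (assumption).
X = np.array([
    [0.0, 0.0], [0.3, 0.2], [-0.2, 0.1], [0.1, -0.3], [0.2, 0.4],   # class 0
    [3.0, 0.0], [3.2, 0.1], [2.8, -0.2], [3.1, 0.3], [2.9, 0.2],    # class 1
])
y = np.array([0] * 5 + [1] * 5)
classes = np.array([0, 1])

# One LVQ prototype per class, placed at the class mean.
protos = np.array([X[y == c].mean(axis=0) for c in classes])

def lvq_predict(x):
    """Plain LVQ: assign the class of the nearest prototype (Euclidean)."""
    return classes[np.argmin(np.linalg.norm(protos - x, axis=1))]

def consistency(x, k=5):
    """Occurrence frequency of each class among the k nearest training samples."""
    nn = y[np.argsort(np.linalg.norm(X - x, axis=1))[:k]]
    return np.array([(nn == c).mean() for c in classes])

def hybrid_predict(x, k=5):
    # Weight the inverse prototype distance by the neighborhood class
    # frequency: a class must both be near (prototype distance) and actually
    # occur around the input (consistency), which tempers the effect of a
    # single distorted feature on the raw Euclidean distance.
    score = consistency(x, k) / (np.linalg.norm(protos - x, axis=1) + 1e-9)
    return classes[np.argmax(score)]
```

With clean inputs both predictors agree; the consistency term matters precisely when the distance alone is misleading, as in the overlapping-class and distorted-feature cases the abstract discusses.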

6 citations

Journal ArticleDOI
TL;DR: In this article, a hybrid bushing model combining a linear model and a neural network is suggested: the linear model represents the stiffness and damping effects, and the artificial neural network accounts for the hysteretic responses of bushings.
Abstract: Although linear models are widely used for bushings in vehicle suspension systems, they cannot express a bushing's nonlinear dependence on amplitude and frequency. An artificial neural network model has been suggested to capture the hysteretic responses of bushings; this model, however, often diverges under unexpected excitation inputs because of the uncertainties of the neural network. In this paper, a hybrid bushing model combining a linear model and a neural network is suggested. The linear model represents the stiffness and damping effects, and the artificial neural network accounts for the hysteretic responses. A rubber test with sine excitation at different frequencies and amplitudes was performed to capture the bushing characteristics, and random test results were used to update the weighting factors of the neural network model. The proposed model is shown to be more robust than a simple neural network model under step excitation input. A full-car simulation was carried out to verify the proposed bushing model; under several maneuvers, the hybrid model's results were almost identical to those of the linear model.
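The decomposition the abstract describes, a linear stiffness/damping term plus a neural network that learns only the hysteretic residual, can be sketched on synthetic data. The force law, network size, and training loop below are illustrative assumptions, not the paper's test data or model:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic bushing data: displacement x, velocity v, and a force with a
# hysteresis-like nonlinear component (assumption, not measured data).
t = np.linspace(0, 4 * np.pi, 400)
x = np.sin(t)
v = np.cos(t)
force = 10.0 * x + 0.8 * v + 1.5 * np.tanh(3 * v) * np.abs(x)

# --- Linear part: least-squares fit of stiffness k and damping c ---
A = np.column_stack([x, v])
(k, c), *_ = np.linalg.lstsq(A, force, rcond=None)
residual = force - (k * x + c * v)     # what the linear model cannot capture

# --- Neural part: a tiny MLP learns only the residual hysteresis ---
W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
inputs = np.column_stack([x, v])

lr = 0.05
for _ in range(2000):
    h = np.tanh(inputs @ W1 + b1)
    pred = (h @ W2 + b2).ravel()
    err = 2 * (pred - residual) / len(t)          # dMSE/dpred
    gW2 = h.T @ err[:, None]
    gb2 = err.sum(keepdims=True)
    dh = (err[:, None] @ W2.T) * (1 - h ** 2)     # backprop through tanh
    gW1 = inputs.T @ dh
    gb1 = dh.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

# Hybrid prediction: linear part + learned nonlinear residual.
hybrid = k * x + c * v + (np.tanh(inputs @ W1 + b1) @ W2 + b2).ravel()
linear_rmse = np.sqrt(np.mean(residual ** 2))
hybrid_rmse = np.sqrt(np.mean((hybrid - force) ** 2))
```

Because the linear term carries the bulk of the response, the network only has to model a small residual, which is what makes the hybrid more robust than a pure neural network under inputs outside the training excitation.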

6 citations

Journal ArticleDOI
TL;DR: This paper proposes a conditioning method for two types of neural networks, replacing LSTM with a gated recurrent unit network (GRU) and DC-CNN with dilated depthwise separable temporal convolutional networks (DDSTCNs) to reduce the number of parameters.
Abstract: Traditional time-series forecasting techniques cannot extract sufficiently informative features from sequence data, so their accuracy is limited. The deep learning structure SeriesNet is an advanced method that adopts hybrid neural networks, a dilated causal convolutional neural network (DC-CNN) and a long short-term memory recurrent neural network (LSTM-RNN), to learn multi-range and multi-level features from multi-conditional time series with higher accuracy. However, SeriesNet does not use attention mechanisms to learn temporal features, its conditioning method for the CNN and RNN is not specific, and the number of parameters in each layer is tremendous. This paper proposes a conditioning method for both types of neural networks, and replaces LSTM with the gated recurrent unit network (GRU) and DC-CNN with dilated depthwise separable temporal convolutional networks (DDSTCNs) to reduce the parameters. Furthermore, this paper presents a lightweight RNN-based hidden state attention module (HSAM) combined with the proposed CNN-based convolutional block attention module (CBAM) for time-series forecasting. Experimental results show that the model is superior to other models in both forecasting accuracy and computational efficiency.
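The parameter saving behind DDSTCNs comes from combining two standard ideas: dilated causal convolution (each output depends only on current and past inputs, with exponentially growing receptive field) and depthwise separable convolution (a per-channel filter followed by a 1x1 channel mixer). A minimal NumPy sketch, with illustrative shapes and random weights rather than the paper's architecture:

```python
import numpy as np

def dilated_causal_conv(x, kernel, dilation):
    """Causal 1-D convolution: output[t] depends only on x[t], x[t-d], ..."""
    k = len(kernel)
    pad = dilation * (k - 1)
    xp = np.concatenate([np.zeros(pad), x])   # left padding keeps causality
    return np.array([
        sum(kernel[j] * xp[pad + t - j * dilation] for j in range(k))
        for t in range(len(x))
    ])

def depthwise_separable_block(X, depth_kernels, point_weights, dilation):
    # Depthwise: each input channel gets its own small dilated causal filter.
    depth_out = np.stack([
        dilated_causal_conv(X[c], depth_kernels[c], dilation)
        for c in range(X.shape[0])
    ])
    # Pointwise: a 1x1 convolution mixes channels. Parameter count is
    # C_in*k + C_in*C_out instead of C_in*C_out*k for a full convolution.
    return point_weights @ depth_out

rng = np.random.default_rng(4)
X = rng.random((3, 20))                        # 3 channels, 20 time steps
depth_kernels = rng.normal(0, 1, (3, 2))       # one length-2 filter per channel
point_weights = rng.normal(0, 1, (4, 3))       # mixes 3 channels into 4
out = depthwise_separable_block(X, depth_kernels, point_weights, dilation=4)

# Causality check: perturbing future inputs must not change past outputs.
X_fut = X.copy()
X_fut[:, 10:] += 1.0
out_fut = depthwise_separable_block(X_fut, depth_kernels, point_weights, dilation=4)
```

Stacking such blocks with dilations 1, 2, 4, 8, ... gives the multi-range receptive field SeriesNet aims for, at a fraction of the parameters of full dilated convolutions.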

6 citations


Network Information
Related Topics (5)
Artificial neural network: 207K papers, 4.5M citations (89% related)
Feature extraction: 111.8K papers, 2.1M citations (88% related)
Fuzzy logic: 151.2K papers, 2.3M citations (85% related)
Convolutional neural network: 74.7K papers, 2M citations (84% related)
Deep learning: 79.8K papers, 2.1M citations (83% related)
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    3
2022    8
2021    128
2020    119
2019    104
2018    63