Topic
Hybrid neural network
About: Hybrid neural network is a research topic. Over the lifetime, 1305 publications have been published within this topic receiving 18223 citations.
Papers
24 Sep 2000
TL;DR: Improved performance for the NNM (client barcode) configuration, with more inputs and proper alignment of the speech signals, supports the hypothesis that a more detailed representation of the speech patterns is helpful to the system.
Abstract: A hybrid neural network is proposed for speaker verification (SV). The basic idea of this system is the use of vector quantization preprocessing as the feature extractor. The experiments were carried out using a neural network model (NNM) with frame labeling performed from a client codebook, known as NNM-C. The work also examines how the neural network model with enhanced features from the client barcode compares to the NNM client codebook with linear time normalization (LTN). Improved performance for the NNM (client barcode) configuration, with more inputs and proper alignment of the speech signals, supports the hypothesis that a more detailed representation of the speech patterns is helpful to the system. The flexibility of this system allows an equal error rate (EER) of 0.62% (speaker-specific EER) on a single isolated digit and 1.9% (SI EER) on a sequence of 12 isolated digits.
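The codebook step described above can be sketched as follows. This is a minimal illustration assuming a plain k-means codebook and nearest-codeword frame labeling; the function names and sizes are hypothetical, not taken from the paper.

```python
import numpy as np

def train_codebook(frames, k, iters=20, seed=0):
    """Toy k-means codebook training, a hypothetical stand-in for the
    client-codebook construction in the vector-quantization preprocessing."""
    rng = np.random.default_rng(seed)
    centroids = frames[rng.choice(len(frames), k, replace=False)]
    for _ in range(iters):
        # assign each frame to its nearest centroid
        d = np.linalg.norm(frames[:, None, :] - centroids[None, :, :], axis=-1)
        labels = d.argmin(axis=1)
        # move each centroid to the mean of its assigned frames
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = frames[labels == j].mean(axis=0)
    return centroids

def label_frames(frames, codebook):
    """Frame labeling: each frame gets the index of its nearest codeword."""
    d = np.linalg.norm(frames[:, None, :] - codebook[None, :, :], axis=-1)
    return d.argmin(axis=1)
```

The resulting per-frame label sequence is what a model like NNM-C would consume in place of the raw feature vectors.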
TL;DR: Results indicate that optimally trained artificial neural networks may accurately predict airfoil profile.
Abstract: Here, we investigate a hybrid neural network method for airfoil design using an inverse procedure. The aerodynamic force coefficients corresponding to a series of airfoils are stored in a database along with the airfoil coordinates. A feedforward neural network (FNN) is created with the aerodynamic coefficients as input and the airfoil coordinates as output. Existing FNN training methods have limitations associated with local optima and oscillation. The cost terms of the first algorithm are selected based on the activation functions of the hidden neurons and the first-order derivatives of the activation functions of the output neurons. The cost terms of the second algorithm are selected based on the first-order derivatives of the activation functions of the hidden neurons and the activation functions of the output neurons. Results indicate that optimally trained artificial neural networks can accurately predict airfoil profiles.
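The inverse mapping can be sketched as a small feedforward network trained by backpropagation. The layer sizes, tanh activation, and plain gradient-descent update below are illustrative assumptions; the paper's two modified cost-term algorithms are not reproduced here.

```python
import numpy as np

class InverseAirfoilMLP:
    """Minimal feedforward sketch of the inverse design mapping:
    aerodynamic coefficients in, airfoil coordinates out (hypothetical
    sizes, not the paper's exact architecture)."""
    def __init__(self, n_in, n_hidden, n_out, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.5, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.5, (n_hidden, n_out))
        self.b2 = np.zeros(n_out)

    def forward(self, x):
        self.h = np.tanh(x @ self.W1 + self.b1)  # hidden activations
        return self.h @ self.W2 + self.b2        # linear output layer

    def train_step(self, x, y, lr=0.05):
        """One full-batch gradient-descent step on mean squared error."""
        pred = self.forward(x)
        err = pred - y
        n = len(x)
        gW2 = self.h.T @ err / n
        gb2 = err.mean(axis=0)
        dh = (err @ self.W2.T) * (1.0 - self.h ** 2)  # backprop through tanh
        gW1 = x.T @ dh / n
        gb1 = dh.mean(axis=0)
        self.W2 -= lr * gW2; self.b2 -= lr * gb2
        self.W1 -= lr * gW1; self.b1 -= lr * gb1
        return float((err ** 2).mean())
```

In the inverse-design setting, each training pair would be a stored (coefficients, coordinates) record from the airfoil database.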
01 Jan 2000
TL;DR: A hybrid neural network, based on the synergism of the Fuzzy ARTMAP and Probabilistic Neural Networks, is employed to predict and classify Myocardial Infarction patients into two categories using a database of real records collected from a hospital.
Abstract: We have previously devised a hybrid neural network, based on the synergism of the Fuzzy ARTMAP and Probabilistic Neural Networks, for on-line pattern classification and probability estimation tasks. In this paper, we investigate the applicability of the hybrid network to medical diagnosis problems. In particular, the network was employed to predict and classify Myocardial Infarction patients into two categories (positive and negative cases) using a database of real records collected from a hospital. A number of experiments were conducted to evaluate the effects of several network parameters on its performance. The results are discussed and compared with those from the Fuzzy ARTMAP network.
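The probability-estimation half of such a hybrid can be sketched with a Parzen-window classifier, which is the core of a probabilistic neural network. This is a generic PNN posterior estimate, not the paper's Fuzzy ARTMAP synergy; the smoothing parameter `sigma` is an assumed value.

```python
import numpy as np

def pnn_posteriors(train_x, train_y, query, sigma=0.5):
    """Probabilistic-neural-network sketch: Gaussian Parzen-window class
    likelihoods normalized into posterior probability estimates."""
    classes = np.unique(train_y)
    scores = []
    for c in classes:
        pts = train_x[train_y == c]
        # squared distances from the query to every training point of class c
        d2 = ((query[None, :] - pts) ** 2).sum(axis=1)
        # average Gaussian kernel response = class-conditional density estimate
        scores.append(np.exp(-d2 / (2.0 * sigma ** 2)).mean())
    scores = np.array(scores)
    return classes, scores / scores.sum()
```

For a two-class diagnosis task like the one above, the returned posteriors play the role of the positive/negative probability estimates.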
01 Jul 2017
TL;DR: This study analyses the dynamic behavior of the propagation of discontinuities in cracks via AE, in the following propagation classes: no propagation (NP), stable propagation (SP), and unstable propagation (UP).
Abstract: The acoustic emission (AE) technique is a non-destructive testing technique applied to pressurized rigid pipelines in order to identify metallurgical discontinuities. This study analyses the dynamic behavior of the propagation of discontinuities in cracks via AE, in the following propagation classes: no propagation (NP), stable propagation (SP), and unstable propagation (UP). The methodology applies the concept of analogue signal modulation, as used in telecommunications signal transmission, to develop a neural network that determines new parameters for the AE waveform. The classification of AE signals into propagation classes therefore proceeds by extracting information related to the dynamics of the AE signals, by means of the parameters of the analogue carriers of the modulations (in amplitude and in angle) that make up the AE signal. This set of parameters enables an efficient classification (90% on average) by identifying patterns in the AE signals for each class, allowing the state of the discontinuity to be monitored with computational intelligence techniques (artificial neural networks and non-linear classification).
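Amplitude- and angle-modulation parameters of a burst are commonly recovered from its analytic signal. The sketch below assumes an FFT-based Hilbert transform and even-length real input; it illustrates the general envelope/instantaneous-frequency idea, not the paper's exact parameter set.

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via the FFT-based Hilbert transform
    (assumes an even-length real input)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = h[n // 2] = 1.0   # keep DC and Nyquist bins as-is
    h[1:n // 2] = 2.0        # double positive frequencies, zero negatives
    return np.fft.ifft(X * h)

def am_angle_parameters(x, fs):
    """Carrier-style parameters of a waveform: amplitude envelope (AM)
    and instantaneous frequency (angle), in the spirit of the
    modulation-based AE features described above (illustrative)."""
    z = analytic_signal(x)
    envelope = np.abs(z)
    phase = np.unwrap(np.angle(z))
    inst_freq = np.diff(phase) * fs / (2.0 * np.pi)
    return envelope, inst_freq
```

Statistics of the envelope and instantaneous frequency over an AE burst would then serve as inputs to a neural-network classifier.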
TL;DR: An end-to-end joint optimization framework of a multi-channel neural speech extraction front-end and a deep acoustic model without mel-filterbank (FBANK) extraction for overlapped speech recognition, which achieves a 28% word error rate reduction over a separately optimized system on AISHELL-1 and shows consistent robustness to the signal-to-interference ratio (SIR) and the angle difference between overlapping speakers.
Abstract: We propose an end-to-end joint optimization framework of a multi-channel neural speech extraction and deep acoustic model without mel-filterbank (FBANK) extraction for overlapped speech recognition. First, based on a multi-channel convolutional TasNet with an STFT kernel, we unify the multi-channel target speech enhancement front-end network and a convolutional, long short-term memory and fully connected deep neural network (CLDNN) based acoustic model (AM) with the FBANK extraction layer to build a hybrid neural network, which is thus jointly updated only by the recognition loss. The proposed framework achieves a 28% word error rate reduction (WERR) over a separately optimized system on AISHELL-1 and shows consistent robustness to the signal-to-interference ratio (SIR) and the angle difference between overlapping speakers. Next, a further exploration shows that speech recognition is improved with a simplified structure by replacing the FBANK extraction layer in the joint model with a learnable feature projection. Finally, we also perform an objective measurement of speech quality on the waveform reconstructed from the enhancement network in the joint model.
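The fixed FBANK layer that the joint model replaces with a learnable projection can be sketched as a standard triangular mel filterbank applied to STFT magnitudes. The construction below follows the generic HTK-style mel scale; the sizes and helper name are illustrative, not taken from the paper.

```python
import numpy as np

def mel_filterbank(n_mels, n_fft, fs):
    """Triangular mel filterbank matrix of shape (n_mels, n_fft//2 + 1),
    the kind of fixed FBANK extraction layer described above."""
    def hz_to_mel(f):
        return 2595.0 * np.log10(1.0 + f / 700.0)

    def mel_to_hz(m):
        return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

    # filter edge frequencies, evenly spaced on the mel scale
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(fs / 2.0), n_mels + 2)
    hz_pts = mel_to_hz(mel_pts)
    bins = np.floor((n_fft + 1) * hz_pts / fs).astype(int)

    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        left, center, right = bins[m - 1], bins[m], bins[m + 1]
        for k in range(left, center):          # rising slope
            fb[m - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):         # falling slope
            fb[m - 1, k] = (right - k) / max(right - center, 1)
    return fb
```

Multiplying an STFT magnitude spectrum by this matrix (and taking a log) yields FBANK features; the joint model's learnable projection simply makes this matrix a trainable parameter of the network.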