Topic

Hybrid neural network

About: Hybrid neural network is a research topic. Over the lifetime, 1305 publications have been published within this topic receiving 18223 citations.


Papers
Posted Content
26 Jul 2019
TL;DR: A novel tagging scheme that helps build a large-scale, high-quality training dataset automatically; it improves model performance by assigning multiple, overlapping labels to each word and by helping models learn segment-level information for pre-identifying arguments.
Abstract: Open relation extraction (ORE) remains a challenge: obtaining a semantic representation by discovering arbitrary relation tuples from unstructured text. Conventional methods depend heavily on feature engineering or syntactic parsing, which makes them inefficient or prone to error cascading. Recently, leveraging supervised deep learning structures to address the ORE task has become a highly promising direction. However, there are two main challenges: (1) the lack of enough labeled corpus to support supervised training; (2) the exploration of a specific neural architecture adapted to the characteristics of open relation extraction. In this paper, to overcome these difficulties, we build a large-scale, high-quality training corpus in a fully automated way, and design a tagging scheme that transforms the ORE task into a sequence tagging process. Furthermore, we propose a hybrid neural network model (HNN4ORT) for open relation tagging. The model employs the Ordered Neurons LSTM to encode potential syntactic information for capturing the associations among the arguments and relations. It also introduces a novel Dual Aware Mechanism, comprising Local-aware Attention and Global-aware Convolution. The two awarenesses complement each other, so the model can take sentence-level semantics as a global perspective while exploiting salient local features to achieve sparse annotation. Experimental results on various testing sets show that our model achieves state-of-the-art performance compared to conventional methods and other neural models.
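The paper itself does not publish this pseudocode, but the Dual Aware Mechanism it describes (a local attention restricted to a token's neighbourhood, fused with a sentence-spanning convolution) can be sketched roughly as follows. All sizes, weight names, and the fusion-by-summation choice here are illustrative assumptions, not the authors' exact architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def local_attention(X, window=2):
    # Local-aware attention: each token attends only to a small neighbourhood,
    # producing a weighted summary of its salient local context.
    T, _ = X.shape
    out = np.zeros_like(X)
    for t in range(T):
        lo, hi = max(0, t - window), min(T, t + window + 1)
        ctx = X[lo:hi]                        # (w, d) local context
        out[t] = softmax(ctx @ X[t]) @ ctx    # dot-product scores -> summary
    return out

def global_conv(X, W):
    # Global-aware convolution: a 1-D convolution sliding over the whole
    # sentence, giving every position a sentence-level view.
    T, d = X.shape
    k = W.shape[0]
    pad = k // 2
    Xp = np.vstack([np.zeros((pad, d)), X, np.zeros((pad, d))])
    return np.stack([np.tensordot(Xp[t:t + k], W, axes=([0, 1], [0, 1]))
                     for t in range(T)])

T, d, n_tags = 7, 16, 5                 # toy sizes (hypothetical)
X = rng.normal(size=(T, d))             # stand-in for ON-LSTM encoder outputs
W_conv = rng.normal(size=(3, d, d)) * 0.1
W_tag = rng.normal(size=(d, n_tags)) * 0.1

H = local_attention(X) + global_conv(X, W_conv)   # fuse the two awarenesses
tags = (H @ W_tag).argmax(axis=1)                 # one tag per token
print(tags.shape)                                 # (7,)
```

In a trained model the tag inventory would come from the paper's tagging scheme; here the random weights only demonstrate the data flow.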
Book ChapterDOI
01 Jan 2004
TL;DR: Discusses AppART's approach to hybrid neural systems and its theoretical foundations as a function approximation method; four benchmark problems are solved to study AppART from a practical point of view and to compare its results with those obtained from other models.
Abstract: AppART is an adaptive resonance theory, low-parameterized neural model that incrementally approximates continuous-valued multidimensional functions from noisy data using biologically plausible processes. AppART performs a higher-order Nadaraya-Watson regression and can be interpreted as a fuzzy logic standard additive model. In this chapter we describe AppART dynamics and training. We discuss its approach to hybrid neural systems and deal with its theoretical foundations as a function approximation method. Two modifications to AppART, aimed at improving its efficiency, are proposed and tested. We also discuss the combination of AppART with growing neural gas networks. Finally, four benchmark problems are solved in order to study AppART from a practical point of view and to compare its results with those obtained from other models.
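For reference, the plain (first-order) Nadaraya-Watson estimator that AppART generalizes is a kernel-weighted average of the training targets. This sketch uses a Gaussian kernel and a hand-picked bandwidth `h=0.3`; both choices are illustrative, not taken from the chapter:

```python
import numpy as np

def nadaraya_watson(x_train, y_train, x_query, h=0.3):
    # y(x) = sum_i K((x - x_i) / h) * y_i / sum_i K((x - x_i) / h)
    # with a Gaussian kernel K.
    w = np.exp(-0.5 * ((x_query[:, None] - x_train[None, :]) / h) ** 2)
    return (w * y_train).sum(axis=1) / w.sum(axis=1)

rng = np.random.default_rng(1)
x = np.linspace(0, 2 * np.pi, 200)
y = np.sin(x) + rng.normal(scale=0.1, size=x.shape)  # noisy samples of sin

xq = np.array([np.pi / 2])
yq = nadaraya_watson(x, y, xq)
print(yq)   # close to sin(pi/2) = 1
```

AppART's contribution is learning such an estimate incrementally with ART-style category nodes rather than storing every sample, but the closed-form version above shows the regression being approximated.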
Proceedings ArticleDOI
01 Dec 2019
TL;DR: A hybrid ASC model based on a convolutional neural network (CNN) and a long short-term memory network (LSTM); the proposed hybrid classification algorithm achieves better classification accuracy.
Abstract: In order to solve the problems of complex feature extraction and low classification accuracy in traditional acoustic scene classification (ASC) models, this paper proposes a hybrid ASC model based on a convolutional neural network (CNN) and a long short-term memory network (LSTM). The convolutional layers of the CNN learn invariant features from the time-frequency input; the extracted features are then fed into the LSTM, whose units process the sequence of CNN-extracted features. The model finally outputs the result through a classifier. We evaluate the model on the UrbanSound8k dataset and analyze which type of model is more suitable for ASC. The experimental results show that, compared with traditional neural network classification models, the proposed hybrid neural network classification algorithm has better classification accuracy.
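The CNN-then-LSTM data flow described above can be sketched as a minimal forward pass. To stay self-contained this uses numpy, substitutes a plain tanh RNN for the LSTM, and uses toy sizes; all layer shapes and weight names are assumptions for illustration, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def conv_frames(S, W, b):
    # 1-D "valid" convolution along the frequency axis of each time frame,
    # followed by ReLU; flattens the per-frame feature maps.
    T, F = S.shape
    k, c = W.shape            # kernel width, output channels
    out = np.zeros((T, F - k + 1, c))
    for t in range(T):
        for i in range(F - k + 1):
            out[t, i] = S[t, i:i + k] @ W + b
    return relu(out).reshape(T, -1)

def rnn_last_state(X, Wx, Wh, bh):
    # Plain tanh RNN over the frame sequence (stand-in for the LSTM);
    # returns the final hidden state as a temporal summary.
    h = np.zeros(Wh.shape[0])
    for x in X:
        h = np.tanh(x @ Wx + h @ Wh + bh)
    return h

T, F, k, c, H, n_classes = 20, 40, 5, 4, 32, 10   # toy sizes (hypothetical)
S = rng.normal(size=(T, F))                       # stand-in spectrogram
feat = conv_frames(S, rng.normal(size=(k, c)) * 0.1, np.zeros(c))
Wx = rng.normal(size=(feat.shape[1], H)) * 0.05
Wh = rng.normal(size=(H, H)) * 0.05
h = rnn_last_state(feat, Wx, Wh, np.zeros(H))     # temporal summary of clip
logits = h @ (rng.normal(size=(H, n_classes)) * 0.1)
probs = np.exp(logits) / np.exp(logits).sum()     # softmax class scores
print(probs.shape)                                # (10,)
```

The division of labour mirrors the paper's claim: the convolution handles local time-frequency structure, while the recurrent layer aggregates it over time before the classifier.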
Proceedings ArticleDOI
01 Jul 2017
TL;DR: This work proposes a hybrid neural network architecture that utilizes reliable binary RRAM devices to perform semianalog computing and is tested for dictionary learning and analog image compression.
Abstract: RRAM (memristive) neural networks have shown great potential to implement massively parallel neuromorphic systems. In this work, we propose a hybrid neural network architecture that utilizes reliable binary RRAM devices to perform semianalog computing. The presented system is tested for dictionary learning and analog image compression, where the effect of synaptic weight precision is analyzed.
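One standard way to get multi-bit synaptic precision out of reliable single-bit devices, consistent with the binary-RRAM idea above, is bit slicing: each binary crossbar stores one bit plane of the quantized weight matrix, the analog array computes a partial multiply-accumulate per plane, and the digital periphery shifts and adds the partial sums. The sketch below is a generic illustration of that decomposition, not the paper's specific circuit:

```python
import numpy as np

rng = np.random.default_rng(0)

def bit_slice(W, n_bits):
    # Decompose a non-negative integer weight matrix into binary planes,
    # one plane per crossbar of single-bit RRAM cells.
    return [(W >> b) & 1 for b in range(n_bits)]

n_bits = 4
W = rng.integers(0, 2 ** n_bits, size=(8, 16))   # quantized weights in [0, 15]
x = rng.integers(0, 2, size=16)                  # binary input vector

slices = bit_slice(W, n_bits)
# Each binary crossbar performs an analog multiply-accumulate (plane @ x);
# the digital periphery shifts each partial sum by its bit position and adds.
y = sum((plane @ x) << b for b, plane in enumerate(slices))

assert np.array_equal(y, W @ x)   # matches the full-precision result exactly
```

This is why weight precision matters in such systems, as the paper analyzes: each extra bit of precision costs one more binary crossbar and one more shift-add.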
Proceedings ArticleDOI
23 Feb 2021
TL;DR: By combining particle swarm optimization with a neural network, a new hybrid prediction algorithm named PSONN is proposed; it is applied to the Pima Indians Diabetes Database and compared with eight other algorithms, including Logistic Regression, Ridge Regression, Lasso Regression, Linear Discriminant Analysis, Quadratic Discriminant Analysis, Random Forest, Gradient Boosting Machine, and Adam (a neural network algorithm).
Abstract: In recent years, more and more studies have applied hybrid models to improve the performance of traditional neural networks. By combining particle swarm optimization with a neural network, this research proposes a new hybrid prediction algorithm named PSONN. The algorithm was applied to the Pima Indians Diabetes Database and compared with eight other algorithms: Logistic Regression, Ridge Regression, Lasso Regression, Linear Discriminant Analysis, Quadratic Discriminant Analysis, Random Forest, Gradient Boosting Machine, and Adam (a neural network algorithm). The findings indicate that the proposed algorithm has higher accuracy and stability, but takes more time to execute. Future research could apply parallelization to reduce execution time.
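The core PSO-plus-network idea (treating all network weights as one particle position and letting the swarm minimize the loss, instead of using gradient descent) can be sketched on a toy problem. The network size, PSO hyperparameters (w, c1, c2), and the XOR task below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: XOR, which a linear model cannot solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)

def forward(theta, X):
    # A 2-4-1 network; theta packs all 17 weights and biases.
    W1 = theta[:8].reshape(2, 4); b1 = theta[8:12]
    W2 = theta[12:16].reshape(4, 1); b2 = theta[16]
    h = np.tanh(X @ W1 + b1)
    return (1 / (1 + np.exp(-(h @ W2 + b2)))).ravel()

def loss(theta):
    return np.mean((forward(theta, X) - y) ** 2)

# Standard global-best PSO over the flattened weight vector.
n_particles, dim, iters = 30, 17, 300
pos = rng.normal(size=(n_particles, dim))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_f = np.array([loss(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

w, c1, c2 = 0.7, 1.5, 1.5       # inertia, cognitive, social coefficients
for _ in range(iters):
    r1 = rng.random((n_particles, dim))
    r2 = rng.random((n_particles, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    f = np.array([loss(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

print(round(loss(gbest), 4))    # final mean-squared error of the best weights
```

The abstract's observation about execution time is visible here: every iteration evaluates the full network once per particle, which is embarrassingly parallel across particles, hence the suggestion to parallelize.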

Network Information
Related Topics (5)
Artificial neural network
207K papers, 4.5M citations
89% related
Feature extraction
111.8K papers, 2.1M citations
88% related
Fuzzy logic
151.2K papers, 2.3M citations
85% related
Convolutional neural network
74.7K papers, 2M citations
84% related
Deep learning
79.8K papers, 2.1M citations
83% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    3
2022    8
2021    128
2020    119
2019    104
2018    63