Journal ArticleDOI

Neural networks

19 Nov 2016 - Neurocomputing (Elsevier) - Vol. 214, Iss. 214, pp. 242-268
TL;DR: The development and evolution of different topics related to neural networks are described, showing that the field has acquired maturity and consolidation, as proven by its competitiveness in solving real-world problems.
About: This article was published in Neurocomputing on 2016-11-19 and has received 184 citations to date. It focuses on the topics: Neural modeling fields & Nervous system network models.
Citations
Journal ArticleDOI
TL;DR: The proposed work, based on 744 segments of ECG signal obtained from the MIT-BIH Arrhythmia database, can be applied in cloud computing or implemented in mobile devices to evaluate cardiac health immediately with high precision.
Abstract: Heart disease is one of the most serious health problems in today’s world; over 50 million people worldwide have cardiovascular diseases. Our proposed work is based on 744 segments of ECG signal obtained from the MIT-BIH Arrhythmia database (strongly imbalanced data) for one lead (modified lead II), from 29 people. In this work, we have used long-duration (10 s) ECG signal segments (13 times fewer classifications/analyses). The spectral power density was estimated based on Welch’s method and the discrete Fourier transform to strengthen the characteristic ECG signal features. Our main contribution is the design of a novel three-layer (48 + 4 + 1) deep genetic ensemble of classifiers (DGEC). The developed method is a hybrid that combines the advantages of: (1) ensemble learning, (2) deep learning, and (3) evolutionary computation. The novel system was developed by fusing three normalization types, four Hamming window widths, four classifier types, stratified tenfold cross-validation, genetic feature (frequency component) selection, layered learning, genetic optimization of classifier parameters, and a new genetic layered training (expert vote selection) to connect the classifiers. The developed DGEC system achieved a recognition sensitivity of 94.62% (40 errors/744 classifications), accuracy = 99.37%, and specificity = 99.66%, with a classification time per single sample of 0.8736 s, in detecting 17 arrhythmia ECG classes. The proposed model can be applied in cloud computing or implemented in mobile devices to evaluate cardiac health immediately with high precision.
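
As an illustration of the spectral feature extraction step described above, the sketch below estimates the power spectral density of a 10 s signal segment with Welch's method and a Hamming window; the sampling rate, window length, and the synthetic stand-in signal are assumptions for illustration and are not taken from the paper.

```python
# Minimal sketch of Welch-based spectral feature extraction for a 10 s segment.
# The sampling rate, Hamming window length, and the synthetic signal are
# illustrative assumptions; the paper tunes several window widths genetically.
import numpy as np
from scipy.signal import welch

fs = 360                       # Hz, typical MIT-BIH sampling rate (assumed)
t = np.arange(0, 10, 1 / fs)   # 10 s segment
ecg = np.sin(2 * np.pi * 1.2 * t) + 0.1 * np.random.randn(t.size)  # stand-in signal

# Power spectral density via Welch's method with a Hamming window.
freqs, psd = welch(ecg, fs=fs, window="hamming", nperseg=1024)

# Keep the low-frequency components as a feature vector for the classifiers.
features = psd[freqs <= 40.0]
print(features.shape)
```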

196 citations


Cites background from "Neural networks"

  • ...tilayer perceptron, (h) recurrent neural networks [55], and (i) discriminant analysis [44]....


Journal ArticleDOI
TL;DR: This paper is the first SLR specifically on deep learning based RS to summarize and analyze the existing studies based on the best-quality research publications; it indicates that autoencoder models are the most widely exploited deep learning architectures for RS, followed by Convolutional Neural Networks and Recurrent Neural Networks.
Abstract: These days, many recommender systems (RS) are utilized for solving the information overload problem in areas such as e-commerce, entertainment, and social media. Although classical RS methods have achieved remarkable success in providing item recommendations, they still suffer from many issues such as cold start and data sparsity. With the recent achievements of deep learning in various applications such as Natural Language Processing (NLP) and image processing, more efforts have been made by researchers to exploit deep learning methods for improving the performance of RS. However, despite the several research works on deep learning based RS, very few secondary studies have been conducted in the field. Therefore, this study aims to provide a systematic literature review (SLR) of deep learning based RSs that can guide researchers and practitioners to better understand the new trends and challenges in the field. This paper is the first SLR specifically on deep learning based RS to summarize and analyze the existing studies based on the best-quality research publications. The paper adopts an SLR approach based on the standard guidelines of the SLR designed by Kitchenham, which uses a selection method and provides a detailed analysis of the research publications. Several publications were gathered, and after the inclusion/exclusion criteria and the quality assessment, the selected papers were used for the review. The results of the review indicated that autoencoder (AE) models are the most widely exploited deep learning architectures for RS, followed by Convolutional Neural Network (CNN) and Recurrent Neural Network (RNN) models. The results also showed that MovieLens is the most popularly used dataset for deep learning based RS evaluation, followed by the Amazon review datasets. Based on the results, movies and e-commerce are the most common domains for RS, and precision and Root Mean Squared Error are the most commonly used metrics for evaluating the performance of deep learning based RSs.
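
Since the review singles out autoencoders as the most widely exploited architecture for RS, the following minimal sketch shows an AutoRec-style autoencoder reconstructing a single user's rating vector from its observed entries; the layer sizes, learning rate, and toy ratings are illustrative assumptions, not drawn from any reviewed system.

```python
# Minimal sketch of an autoencoder for rating reconstruction (AutoRec-style),
# assuming a single hidden layer and a toy rating vector; sizes, learning rate,
# and data are illustrative assumptions, not taken from the reviewed papers.
import numpy as np

rng = np.random.default_rng(0)
n_items, n_hidden = 6, 3
r = np.array([5.0, 3.0, 0.0, 1.0, 0.0, 4.0])  # toy ratings for one user (0 = unobserved)
mask = (r > 0).astype(float)

W1 = rng.normal(scale=0.1, size=(n_hidden, n_items)); b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.1, size=(n_items, n_hidden)); b2 = np.zeros(n_items)

lr = 0.05
for _ in range(500):
    h = np.tanh(W1 @ r + b1)          # encoder
    r_hat = W2 @ h + b2               # decoder (linear output)
    err = mask * (r_hat - r)          # only observed ratings contribute to the loss
    # Backpropagation through the two layers.
    dh = (W2.T @ err) * (1 - h ** 2)
    W2 -= lr * np.outer(err, h); b2 -= lr * err
    W1 -= lr * np.outer(dh, r);  b1 -= lr * dh

# Reconstructed ratings, including predictions for the unobserved items.
print(np.round(W2 @ np.tanh(W1 @ r + b1) + b2, 2))
```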

144 citations


Cites background or methods from "Neural networks"

  • ...However, RNN models suffer from the exploding or vanishing gradient problem, which makes them difficult to train (Prieto et al. 2016)....


  • ...Like other unsupervised deep learning methods, the denoising AE is trained using layer-wise initialization, in which each layer is used to generate the input data for the next layer's representation (Prieto et al. 2016)....


Journal ArticleDOI
TL;DR: A deep feedforward method is developed to classify the given microarray cancer data into a set of classes for subsequent diagnosis, using a 7-layer deep neural network architecture with parameters tuned for each dataset.
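
A minimal sketch of a deep feedforward classifier in this spirit is shown below, using a generic multilayer perceptron on synthetic gene-expression-like data; the layer widths, data, and labels are assumptions and do not reproduce the paper's tuned 7-layer architecture.

```python
# Minimal sketch of a deep feedforward classifier for microarray-like data.
# The synthetic features, labels, and layer widths are illustrative assumptions.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 500))    # 200 samples, 500 expression features (synthetic)
y = rng.integers(0, 3, size=200)   # 3 tumour classes (synthetic labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Five hidden layers plus the input and output layers give a 7-layer network.
clf = MLPClassifier(hidden_layer_sizes=(256, 128, 64, 32, 16),
                    activation="relu", max_iter=300, random_state=0)
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```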

138 citations

Journal ArticleDOI
TL;DR: This article presents a comprehensive review of supervised learning algorithms for spiking neural networks, evaluates them qualitatively and quantitatively, provides five qualitative performance evaluation criteria, and presents a new taxonomy for supervised learning algorithms based on these five criteria.

125 citations

Journal ArticleDOI
TL;DR: A thorough review of the development of ML-ELMs, including the stacked ELM autoencoder, residual ELM, and local receptive field based ELM (ELM-LRF), as well as their applications and the connection between random neural networks and conventional deep learning.
Abstract: In the past decade, deep learning techniques have powered many aspects of our daily life and drawn ever-increasing research interest. However, conventional deep learning approaches, such as the deep belief network (DBN), restricted Boltzmann machine (RBM), and convolutional neural network (CNN), suffer from a time-consuming training process due to the fine-tuning of a large number of parameters and the complicated hierarchical structure. Furthermore, this complication makes it difficult to theoretically analyze and prove the universal approximation capability of these conventional deep learning approaches. To tackle these issues, multilayer extreme learning machines (ML-ELM) were proposed, which accelerated the development of deep learning. Compared with conventional deep learning, ML-ELMs are non-iterative and fast due to the random feature mapping mechanism. In this paper, we perform a thorough review of the development of ML-ELMs, including the stacked ELM autoencoder (ELM-AE), residual ELM, and local receptive field based ELM (ELM-LRF), and address their applications. In addition, we discuss the connection between random neural networks and conventional deep learning.
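
The non-iterative training via random feature mapping mentioned above can be illustrated with a basic single-hidden-layer ELM sketch: the input weights are drawn at random and never trained, and the output weights are obtained in closed form with the pseudoinverse. The data, hidden-layer size, and activation below are illustrative assumptions.

```python
# Minimal sketch of a basic (single-hidden-layer) extreme learning machine:
# random input weights, output weights computed non-iteratively via the
# pseudoinverse. Data, hidden size, and activation are illustrative assumptions;
# ML-ELM stacks ELM autoencoders on top of this idea.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))                     # toy inputs
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)   # toy binary targets

n_hidden = 50
W = rng.normal(size=(4, n_hidden))                # random input weights (never trained)
b = rng.normal(size=n_hidden)

H = np.tanh(X @ W + b)                            # random feature mapping
beta = np.linalg.pinv(H) @ y                      # output weights in closed form

pred = (np.tanh(X @ W + b) @ beta > 0.5).astype(float)
print("training accuracy:", (pred == y).mean())
```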

112 citations

References
Book
01 Aug 1996
TL;DR: A separation theorem for convex fuzzy sets is proved without requiring that the fuzzy sets be disjoint.
Abstract: A fuzzy set is a class of objects with a continuum of grades of membership. Such a set is characterized by a membership (characteristic) function which assigns to each object a grade of membership ranging between zero and one. The notions of inclusion, union, intersection, complement, relation, convexity, etc., are extended to such sets, and various properties of these notions in the context of fuzzy sets are established. In particular, a separation theorem for convex fuzzy sets is proved without requiring that the fuzzy sets be disjoint.
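
The extended operations on fuzzy sets mentioned in the abstract can be written down directly on membership grades, with union as the pointwise maximum, intersection as the pointwise minimum, and complement as one minus the membership; the particular grades below are illustrative assumptions.

```python
# Minimal sketch of Zadeh's fuzzy set operations on membership grades in [0, 1]:
# union = pointwise max, intersection = pointwise min, complement = 1 - grade.
# The membership values are illustrative assumptions.
import numpy as np

A = np.array([0.2, 0.7, 1.0, 0.0])   # membership grades of fuzzy set A
B = np.array([0.5, 0.4, 0.8, 0.3])   # membership grades of fuzzy set B

union = np.maximum(A, B)
intersection = np.minimum(A, B)
complement_A = 1.0 - A

print("union:", union)
print("intersection:", intersection)
print("complement of A:", complement_A)
```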

52,705 citations

Journal ArticleDOI
13 May 1983 - Science
TL;DR: There is a deep and useful connection between statistical mechanics and multivariate or combinatorial optimization (finding the minimum of a given function depending on many parameters), and a detailed analogy with annealing in solids provides a framework for optimization of very large and complex systems.
Abstract: There is a deep and useful connection between statistical mechanics (the behavior of systems with many degrees of freedom in thermal equilibrium at a finite temperature) and multivariate or combinatorial optimization (finding the minimum of a given function depending on many parameters). A detailed analogy with annealing in solids provides a framework for optimization of the properties of very large and complex systems. This connection to statistical mechanics exposes new information and provides an unfamiliar perspective on traditional optimization problems and methods.
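
The annealing analogy described above translates into a simple optimization procedure: propose a random move, always accept improvements, accept worsening moves with Boltzmann probability exp(-delta/T), and slowly lower the temperature T. The toy objective, step size, and cooling schedule in the sketch below are illustrative assumptions.

```python
# Minimal sketch of simulated annealing on a toy one-dimensional objective.
# The objective, step size, and cooling schedule are illustrative assumptions.
import math
import random

def objective(x):
    return x ** 2 + 10 * math.sin(x)      # toy multimodal function

random.seed(0)
x = 5.0                                   # initial state
T = 10.0                                  # initial temperature
for _ in range(5000):
    candidate = x + random.uniform(-0.5, 0.5)
    delta = objective(candidate) - objective(x)
    # Accept improvements always, worsening moves with Boltzmann probability.
    if delta < 0 or random.random() < math.exp(-delta / T):
        x = candidate
    T = max(1e-3, T * 0.999)              # geometric cooling schedule

print("approximate minimizer:", round(x, 3), "value:", round(objective(x), 3))
```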

41,772 citations

Book
Vladimir Vapnik
01 Jan 1995
TL;DR: Setting of the learning problem; consistency of learning processes; bounds on the rate of convergence of learning processes; controlling the generalization ability of learning processes; constructing learning algorithms; what is important in learning theory?
Abstract: Setting of the learning problem; consistency of learning processes; bounds on the rate of convergence of learning processes; controlling the generalization ability of learning processes; constructing learning algorithms; what is important in learning theory?

40,147 citations

Book
01 Jan 1988
TL;DR: This book provides a clear and simple account of the key ideas and algorithms of reinforcement learning, ranging from the history of the field's intellectual foundations to the most recent developments and applications.
Abstract: Reinforcement learning, one of the most active research areas in artificial intelligence, is a computational approach to learning whereby an agent tries to maximize the total amount of reward it receives when interacting with a complex, uncertain environment. In Reinforcement Learning, Richard Sutton and Andrew Barto provide a clear and simple account of the key ideas and algorithms of reinforcement learning. Their discussion ranges from the history of the field's intellectual foundations to the most recent developments and applications. The only necessary mathematical background is familiarity with elementary concepts of probability. The book is divided into three parts. Part I defines the reinforcement learning problem in terms of Markov decision processes. Part II provides basic solution methods: dynamic programming, Monte Carlo methods, and temporal-difference learning. Part III presents a unified view of the solution methods and incorporates artificial neural networks, eligibility traces, and planning; the two final chapters present case studies and consider the future of reinforcement learning.
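
As a small illustration of the temporal-difference methods covered in Part II, the sketch below runs tabular Q-learning on a toy chain MDP with a uniform random behaviour policy (off-policy); the environment, learning rate, and discount factor are illustrative assumptions.

```python
# Minimal sketch of tabular temporal-difference control (Q-learning) on a tiny
# deterministic chain MDP. Environment, learning rate, and discount factor are
# illustrative assumptions.
import numpy as np

n_states, n_actions = 5, 2        # chain of 5 states; actions: 0 = left, 1 = right
gamma, alpha = 0.9, 0.1
Q = np.zeros((n_states, n_actions))
rng = np.random.default_rng(0)

def step(s, a):
    """Move left/right along the chain; reward 1 only for reaching the last state."""
    s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    reward = 1.0 if s_next == n_states - 1 else 0.0
    return s_next, reward, s_next == n_states - 1

for _ in range(300):              # episodes
    s = 0
    for _ in range(50):           # cap the episode length
        a = int(rng.integers(n_actions))      # uniform random behaviour policy (off-policy)
        s_next, r, done = step(s, a)
        # TD update toward the bootstrapped target.
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next
        if done:
            break

print(np.round(Q, 2))             # learned action values favour moving right
```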

37,989 citations

01 Jan 1998
TL;DR: Presenting a method for determining the necessary and sufficient conditions for consistency of the learning process, the author covers function estimates from small data pools, applying these estimations to real-life problems, and much more.
Abstract: A comprehensive look at learning and generalization theory. The statistical theory of learning and generalization concerns the problem of choosing desired functions on the basis of empirical data. Highly applicable to a variety of computer science and robotics fields, this book offers lucid coverage of the theory as a whole. Presenting a method for determining the necessary and sufficient conditions for consistency of the learning process, the author covers function estimates from small data pools, applying these estimations to real-life problems, and much more.

26,531 citations