Barak A. Pearlmutter
Researcher at Maynooth University
Publications - 185
Citations - 10811
Barak A. Pearlmutter is an academic researcher at Maynooth University whose work centres on automatic differentiation and blind signal separation. He has an h-index of 38 and has co-authored 183 publications that have received 9,674 citations. His previous affiliations include the University of California, San Diego and the National University of Ireland.
Papers
Proceedings ArticleDOI
Detecting intrusions using system calls: alternative data models
TL;DR: This work compares how accurately different data modeling methods represent normal behavior and recognize intrusions, and concludes that, for this particular problem, methods weaker than HMMs are likely sufficient.
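One of the simpler models compared in this line of work is a sliding-window lookup over system-call sequences (the "stide" approach). The sketch below is an illustrative toy version, not the paper's implementation; the function names, window length, and toy traces are assumptions for demonstration.

```python
def train_stide(traces, k=3):
    """Collect the set of length-k system-call windows seen in normal traces."""
    normal = set()
    for trace in traces:
        for i in range(len(trace) - k + 1):
            normal.add(tuple(trace[i:i + k]))
    return normal

def anomaly_score(trace, normal, k=3):
    """Fraction of length-k windows in `trace` never seen during training."""
    windows = [tuple(trace[i:i + k]) for i in range(len(trace) - k + 1)]
    if not windows:
        return 0.0
    return sum(w not in normal for w in windows) / len(windows)

# toy traces of system-call names (hypothetical data)
normal_traces = [["open", "read", "write", "close"],
                 ["open", "read", "read", "close"]]
db = train_stide(normal_traces)
```

A trace composed entirely of familiar windows scores 0, while a trace containing novel call subsequences scores above 0; thresholding this score gives a crude detector against which richer models such as HMMs can be compared.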
Journal ArticleDOI
Blind Source Separation by Sparse Decomposition in a Signal Dictionary
TL;DR: This work suggests a two-stage separation process: a priori selection of a possibly overcomplete signal dictionary in which the sources are assumed to be sparsely representable, followed by unmixing the sources by exploiting their sparse representability.
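The geometric intuition behind the second stage can be sketched with NumPy: when sources are sparse, most samples are dominated by a single source, so the scatter of the mixtures concentrates along the columns of the mixing matrix. This is a toy illustration under assumed synthetic data, skipping the dictionary-selection stage by generating coefficients that are already sparse; it is not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
# Stand-in for stage 1: assume the sources are already sparse in the
# chosen dictionary (mostly zero, occasional Laplacian spikes).
active = rng.random((2, n)) < 0.1
s = rng.laplace(size=(2, n)) * active          # sparse source coefficients

angles_true = np.array([0.3, 1.2])             # directions of the mixing columns
A = np.vstack([np.cos(angles_true), np.sin(angles_true)])
x = A @ s                                      # two observed mixtures

# Stage 2: sparsity makes most samples single-source, so the scatter of x
# concentrates along the columns of A; locate the two angular peaks.
theta = np.arctan2(x[1], x[0]) % np.pi
weight = np.linalg.norm(x, axis=0)
hist, edges = np.histogram(theta, bins=180, range=(0, np.pi), weights=weight)
centers = 0.5 * (edges[:-1] + edges[1:])
p1 = np.argmax(hist)
dist = np.abs(np.arange(180) - p1)
far = np.minimum(dist, 180 - dist) > 20        # exclude bins near the first peak
p2 = np.argmax(np.where(far, hist, -1.0))
est = np.sort(centers[[p1, p2]])               # estimated mixing directions

A_hat = np.vstack([np.cos(est), np.sin(est)])
s_hat = np.linalg.solve(A_hat, x)              # unmix with the estimated matrix
```

With the estimated directions close to the true ones, the unmixed signals recover the sources up to permutation and scale.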
Posted Content
Automatic differentiation in machine learning: a survey
TL;DR: Automatic differentiation (AD) is a family of techniques, similar to but more general than backpropagation, for efficiently and accurately evaluating derivatives of numeric functions expressed as computer programs. It is a small but established field with applications in areas including computational fluid dynamics, atmospheric sciences, and engineering design optimization.
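The core idea of forward-mode AD, one of the techniques surveyed, can be sketched with dual numbers: each value carries its derivative alongside it, and arithmetic propagates both by the chain rule. This is a minimal illustrative sketch, not the survey's code; the class and function names are invented for the example.

```python
from dataclasses import dataclass
import math

@dataclass
class Dual:
    val: float   # primal value
    dot: float   # tangent (derivative carried alongside the value)

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other, 0.0)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other, 0.0)
        # product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.val * other.dot + self.dot * other.val)
    __rmul__ = __mul__

def sin(x):
    # lift one elementary function as an illustration
    return Dual(math.sin(x.val), math.cos(x.val) * x.dot)

def f(x):
    return x * x + sin(x)

# seeding dot=1.0 yields df/dx exactly (here 2x + cos(x) at x = 1.5)
d = f(Dual(1.5, 1.0)).dot
```

No symbolic expressions or numerical differencing are involved: the derivative emerges, exact to machine precision, from running the program on augmented values.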
Journal ArticleDOI
Learning state space trajectories in recurrent neural networks
TL;DR: A procedure for finding ∂E/∂w_ij, where E is an error functional of the temporal trajectory of the states of a continuous recurrent network and the w_ij are the weights of that network; the procedure seems particularly suited to temporally continuous domains.
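The quantity ∂E/∂w_ij can be made concrete with a toy setup: Euler-integrate a small continuous recurrent network and approximate the gradient of a trajectory error functional by central finite differences. This sketch only defines the problem and checks the gradient numerically; it is not the paper's adjoint procedure, and the dynamics, step sizes, and shapes are assumptions for illustration.

```python
import numpy as np

def simulate(W, y0, steps=50, dt=0.1):
    """Forward-Euler integration of the continuous network dy/dt = -y + tanh(W @ y)."""
    y = y0.copy()
    traj = [y.copy()]
    for _ in range(steps):
        y = y + dt * (-y + np.tanh(W @ y))
        traj.append(y.copy())
    return np.array(traj)

def error(W, y0, target):
    """Error functional of the whole state trajectory (sum of squares)."""
    traj = simulate(W, y0)
    return 0.5 * np.sum((traj - target) ** 2)

def fd_grad(W, y0, target, eps=1e-6):
    """Central finite differences for dE/dw_ij: one simulation pair per weight,
    whereas the paper's procedure obtains the full gradient far more cheaply."""
    g = np.zeros_like(W)
    for i in range(W.shape[0]):
        for j in range(W.shape[1]):
            Wp = W.copy(); Wp[i, j] += eps
            Wm = W.copy(); Wm[i, j] -= eps
            g[i, j] = (error(Wp, y0, target) - error(Wm, y0, target)) / (2 * eps)
    return g

rng = np.random.default_rng(1)
W = rng.standard_normal((3, 3)) * 0.5
y0 = rng.standard_normal(3)
target = np.zeros((51, 3))          # drive the trajectory toward rest
g = fd_grad(W, y0, target)
```

The finite-difference gradient costs two simulations per weight, which is exactly the inefficiency that an analytic trajectory-gradient procedure avoids.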