Open Access · Posted Content
Learning with Hierarchical Gaussian Kernels.
TL;DR
It is shown that hierarchical Gaussian kernels are universal and that SVMs using these kernels are universally consistent, and a parameter optimization method for the kernel parameters is described and empirically compared to SVMs, random forests, a multiple kernel learning approach, and to some deep neural networks.
Abstract
We investigate iterated compositions of weighted sums of Gaussian kernels and provide an interpretation of the construction that shows some similarities with the architectures of deep neural networks. On the theoretical side, we show that these kernels are universal and that SVMs using these kernels are universally consistent. We further describe a parameter optimization method for the kernel parameters and empirically compare this method to SVMs, random forests, a multiple kernel learning approach, and to some deep neural networks.
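The construction described in the abstract can be sketched numerically: each layer is a weighted sum of Gaussian kernels evaluated on the RKHS distance induced by the layer below. A minimal sketch, assuming uniform layer weights for simplicity (the function name `hgk_gram` and the uniform weighting are illustrative assumptions; the paper also optimizes the weights):

```python
import numpy as np

def hgk_gram(X, Y, layer_gammas):
    """Gram matrix of an iterated composition of sums of Gaussian
    kernels.  layer_gammas holds one list of kernel widths per layer;
    each layer averages its Gaussian kernels with uniform weights
    (a simplifying assumption; the weights can also be learned)."""
    # Layer 1: Gaussian kernels on the squared Euclidean distance.
    d2 = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2.0 * X @ Y.T
    K = np.mean([np.exp(-g * d2) for g in layer_gammas[0]], axis=0)
    # Deeper layers: Gaussian kernels on the RKHS distance of the layer
    # below.  Each layer is normalized (K(x, x) = 1), so that squared
    # distance is simply 2 - 2 * K(x, x').
    for gammas in layer_gammas[1:]:
        d2 = 2.0 - 2.0 * K
        K = np.mean([np.exp(-g * d2) for g in gammas], axis=0)
    return K
```

The resulting Gram matrix stays symmetric positive semidefinite at every layer, since a Gaussian kernel of an RKHS distance is again a kernel, so it can be plugged directly into any kernel SVM.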
Citations
Journal Article · DOI
Universality of deep convolutional neural networks
TL;DR: It is shown that a deep convolutional neural network (CNN) is universal, meaning that it can be used to approximate any continuous function to an arbitrary accuracy when the depth of the neural network is large enough.
Posted Content
Universality of Deep Convolutional Neural Networks
TL;DR: The authors showed that deep convolutional neural networks (CNNs) can be used to approximate any continuous function to an arbitrary accuracy when the depth of the neural network is large enough.
Journal Article · DOI
Kernel Flows: From learning kernels from data into the abyss
Houman Owhadi, Gene Ryan Yoo, et al.
TL;DR: In this paper, the authors explore a numerical-approximation approach to kernel selection and construction, based on the simple premise that a kernel is good if the number of interpolation points can be halved without significant loss of accuracy, measured in the intrinsic RKHS norm ∥·∥ associated with the kernel.
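The Kernel Flows premise above can be made concrete: compute the squared RKHS norm of the kernel interpolant on all points and on a random half, and measure the relative loss. A minimal sketch for a Gaussian kernel (function names are assumptions; the actual method iteratively adjusts the kernel parameters by descending this loss over resampled halves):

```python
import numpy as np

def gauss_gram(X, gamma):
    # Gaussian Gram matrix exp(-gamma * ||x - x'||^2)
    s = np.sum(X**2, 1)
    d2 = s[:, None] + s[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * d2)

def kernel_flow_rho(X, y, gamma, rng):
    """Relative RKHS-norm loss when half of the interpolation points
    are dropped: rho = 1 - ||u_half||^2 / ||u_full||^2.  A kernel is
    'good' for (X, y) when rho is close to 0."""
    n = len(X)
    half = rng.choice(n, n // 2, replace=False)
    jitter = 1e-8  # numerical regularization of the linear solves
    K = gauss_gram(X, gamma) + jitter * np.eye(n)
    Kh = gauss_gram(X[half], gamma) + jitter * np.eye(n // 2)
    norm_full = y @ np.linalg.solve(K, y)               # ||u_full||^2
    norm_half = y[half] @ np.linalg.solve(Kh, y[half])  # ||u_half||^2
    return 1.0 - norm_half / norm_full
```

Since the minimum-norm interpolant of a subset of the data can only have a smaller RKHS norm, rho lies in [0, 1] (up to the jitter).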
Journal Article
Group invariance, stability to deformations, and complexity of deep convolutional representations
Alberto Bietti, Julien Mairal, et al.
TL;DR: In this article, a reproducing kernel Hilbert space (RKHS) is introduced, which contains a large class of convolutional neural networks with homogeneous activation functions, and a canonical measure of model complexity, the RKHS norm, which controls both stability and generalization of any learned model.
Posted Content
Group Invariance, Stability to Deformations, and Complexity of Deep Convolutional Representations
Alberto Bietti, Julien Mairal, et al.
TL;DR: This paper considers deep convolutional representations of signals; it studies their invariance to translations and to more general groups of transformations, their stability to the action of diffeomorphisms, and their ability to preserve signal information.
References
Posted Content
Caffe: Convolutional Architecture for Fast Feature Embedding
Yangqing Jia, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan Long, Ross Girshick, Sergio Guadarrama, Trevor Darrell
TL;DR: Caffe, as discussed by the authors, is a BSD-licensed C++ library with Python and MATLAB bindings for training and deploying general-purpose convolutional neural networks and other deep models efficiently on commodity architectures.
Book
Support Vector Machines
TL;DR: This book explains the principles that make support vector machines (SVMs) a successful modelling and prediction tool for a variety of applications and provides a unique in-depth treatment of both fundamental and recent material on SVMs that so far has been scattered in the literature.
Journal Article · DOI
Do we need hundreds of classifiers to solve real-world classification problems?
TL;DR: The random forest is clearly the best family of classifiers (3 of the 5 best classifiers are RFs), followed by SVMs (4 classifiers in the top 10), neural networks, and boosting ensembles (5 and 3 members in the top 20, respectively).
Journal Article
Large Scale Multiple Kernel Learning
TL;DR: It is shown that the proposed multiple kernel learning algorithm can be rewritten as a semi-infinite linear program that can be efficiently solved by recycling the standard SVM implementations, and generalize the formulation and the method to a larger class of problems, including regression and one-class classification.
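The multiple kernel learning setting above combines base kernels as a convex combination K = Σ_m β_m K_m. A minimal sketch of that combination step (`combined_gram` is a hypothetical name; learning β via the semi-infinite LP wrapped around a standard SVM solver, as the paper does, is not reproduced here):

```python
import numpy as np

def combined_gram(grams, beta):
    """Convex combination sum_m beta_m * K_m of base Gram matrices --
    the kernel an MKL method would hand to the SVM once the weights
    beta are learned (learning beta itself is not sketched here)."""
    beta = np.asarray(beta, dtype=float)
    assert np.all(beta >= 0) and np.isclose(beta.sum(), 1.0)
    # Contract the weight vector against the stacked (m, n, n) Gram array.
    return np.tensordot(beta, np.stack(grams), axes=1)
```

Because a convex combination of positive semidefinite Gram matrices is again positive semidefinite, the combined matrix is always a valid kernel regardless of how β is chosen on the simplex.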