Proceedings ArticleDOI
GaborNet: Gabor filters with learnable parameters in deep convolutional neural network
Andrey Alekseev, Anatoly Bobe, et al.
TL;DR: A modified network architecture is proposed that improves convergence and reduces the training complexity of deep convolutional neural networks by constraining the filters in the first layer to fit the Gabor function.
Abstract: The article describes a system for image recognition using deep convolutional neural networks. A modified network architecture is proposed that focuses on improving convergence and reducing training complexity. The filters in the first layer of the network are constrained to fit the Gabor function. The parameters of the Gabor functions are learnable and are updated by standard backpropagation techniques. The system was implemented in Python, tested on several datasets, and outperformed common convolutional networks.
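The abstract describes first-layer filters constrained to fit the Gabor function, whose parameters are then learned by backpropagation. A minimal NumPy sketch of such a parameterized kernel is given below; the function name and parameter values are illustrative assumptions, not taken from the paper (which learns these five parameters per filter rather than fixing them):

```python
import numpy as np

def gabor_kernel(size, sigma, theta, lambd, psi, gamma):
    """Sample a 2-D Gabor function on a size x size grid.

    sigma: width of the Gaussian envelope
    theta: orientation of the filter
    lambd: wavelength of the sinusoidal carrier
    psi:   phase offset
    gamma: spatial aspect ratio
    In a GaborNet-style layer these five values would be trainable;
    here they are plain floats for illustration.
    """
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    # Rotate coordinates by theta
    x_r = x * np.cos(theta) + y * np.sin(theta)
    y_r = -x * np.sin(theta) + y * np.cos(theta)
    # Gaussian envelope modulated by a cosine carrier
    envelope = np.exp(-(x_r**2 + (gamma * y_r)**2) / (2.0 * sigma**2))
    carrier = np.cos(2.0 * np.pi * x_r / lambd + psi)
    return envelope * carrier

# Example: a 7x7 horizontal-orientation kernel (hypothetical values)
k = gabor_kernel(size=7, sigma=2.0, theta=0.0, lambd=4.0, psi=0.0, gamma=0.5)
```

Because the kernel is a differentiable function of its five parameters, gradients of the loss can flow back to them, which is what makes the first-layer filters learnable while staying within the Gabor family.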
Citations
Journal ArticleDOI
Explainable detection of myocardial infarction using deep learning models with Grad-CAM technique on ECG signals
TL;DR: In this paper, the authors developed DenseNet and CNN models for classifying healthy subjects and patients with ten classes of myocardial infarction (MI), based on the location of myocardial involvement, and achieved high classification accuracies.
Journal ArticleDOI
The Variational Kernel-Based 1-D Convolutional Neural Network for Machinery Fault Diagnosis
TL;DR: In this article, a variational kernel is derived by adapting constraints and formulations of the successive variational mode decomposition (SVMD), and a gradient descent process based on a fault classification loss is developed to estimate the parameter of the variational kernels.
Posted ContentDOI
Biological convolutions improve DNN robustness to noise and generalisation
TL;DR: In this article, fixed biological filter banks, in particular banks of Gabor filters, are used to constrain the networks to avoid reliance on shortcuts, making them develop more structured internal representations and more tolerant to noise.
Proceedings ArticleDOI
A Review of Convolutional Neural Networks and Gabor Filters in Object Recognition
Mehang Rai, Pablo Rivas, et al.
TL;DR: This paper reviews the literature on approaches that combine Gabor filters and CNNs, paying close attention to successes and opportunities for future research at the intersection of these two computer vision tools.
Book ChapterDOI
Gabor Layers Enhance Network Robustness
Juan C. Pérez, Motasem Alfarra, Guillaume Jeanneret, Adel Bibi, Ali Thabet, Bernard Ghanem, Pablo Arbeláez
TL;DR: Architectures enhanced with Gabor layers gain a consistent boost in robustness over regular models and preserve high generalization performance, even though these layers add a negligible number of parameters.
References
Proceedings Article
ImageNet Classification with Deep Convolutional Neural Networks
TL;DR: A deep convolutional neural network consisting of five convolutional layers, some followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax, which achieved state-of-the-art performance on ImageNet classification.
Journal ArticleDOI
ImageNet Large Scale Visual Recognition Challenge
Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael S. Bernstein, Alexander C. Berg, Li Fei-Fei, et al.
TL;DR: The ImageNet Large Scale Visual Recognition Challenge (ILSVRC) is a benchmark in object category classification and detection covering hundreds of object categories and millions of images; it has run annually since 2010, attracting participation from more than fifty institutions.
Posted Content
Adam: A Method for Stochastic Optimization
Diederik P. Kingma, Jimmy Ba
TL;DR: Adam, a method for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments of the gradient.
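The "adaptive estimates of lower-order moments" in the Adam TL;DR are exponential moving averages of the gradient (first moment) and the squared gradient (second moment), each bias-corrected before use. A minimal single-step sketch, with default hyperparameters taken from the paper; the function name and example values are illustrative:

```python
import numpy as np

def adam_step(param, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update for parameter `param` at timestep t (1-indexed)."""
    # Moving averages of gradient and squared gradient
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad**2
    # Bias correction: the averages start at zero and are biased early on
    m_hat = m / (1 - beta1**t)
    v_hat = v / (1 - beta2**t)
    # Per-coordinate step scaled by the second-moment estimate
    param = param - lr * m_hat / (np.sqrt(v_hat) + eps)
    return param, m, v

# Example: a single scalar parameter and a constant gradient of 2.0
p, m, v = np.array([1.0]), np.zeros(1), np.zeros(1)
p, m, v = adam_step(p, np.array([2.0]), m, v, t=1)
```

After bias correction the first step has magnitude close to the learning rate regardless of the gradient's scale, which is the practical appeal of the moment-based normalization.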
MonographDOI
Parallel Distributed Processing: Explorations in the Microstructure of Cognition: Foundations
Proceedings ArticleDOI
Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification
TL;DR: A Parametric Rectified Linear Unit (PReLU) is proposed that improves model fitting at nearly zero extra computational cost and with little overfitting risk, helping achieve a 4.94% top-5 test error on the ImageNet 2012 classification dataset.
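The PReLU described above replaces ReLU's fixed zero slope on negative inputs with a slope that is learned jointly with the network weights. A minimal NumPy sketch (in a real network the slope `a` would be a trainable parameter, possibly one per channel; here it is a fixed float for illustration):

```python
import numpy as np

def prelu(x, a):
    """Parametric ReLU: identity for positive inputs,
    learnable slope `a` for negative inputs (ReLU is the special case a = 0)."""
    return np.where(x > 0, x, a * x)

out = prelu(np.array([-2.0, 3.0]), a=0.25)  # -> [-0.5, 3.0]
```

Because the output is linear in `a` for negative inputs, the slope receives a well-defined gradient and can be updated by the same backpropagation pass as the weights.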