
Nikos Komodakis

Researcher at École des ponts ParisTech

Publications -  141
Citations -  24837

Nikos Komodakis is an academic researcher from École des ponts ParisTech. The author has contributed to research in topics: Image segmentation & Convolutional neural network. The author has an h-index of 52, co-authored 137 publications receiving 18225 citations. Previous affiliations of Nikos Komodakis include University of Crete & École Normale Supérieure.

Papers
Posted Content

Wide Residual Networks

TL;DR: Wide residual networks (WRNs), proposed in this paper, decrease the depth and increase the width of residual networks, achieving state-of-the-art results on CIFAR, SVHN, and ImageNet.
Proceedings Article

Unsupervised Representation Learning by Predicting Image Rotations

TL;DR: Gidaris et al. propose to learn image features by training ConvNets to recognize the 2D rotation applied to their input image, which provides a very powerful supervisory signal for semantic feature learning.
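
A minimal sketch of this rotation-prediction pretext task, assuming PyTorch; the small network and training step below are illustrative, not the paper's exact setup.

```python
# Illustrative sketch of the rotation-prediction pretext task (assumes PyTorch).
# Each image is rotated by 0/90/180/270 degrees and a ConvNet is trained to
# predict which rotation was applied; the labels come for free from the data.
import torch
import torch.nn as nn

def make_rotations(images):
    """Return the four rotated copies of a batch and their rotation labels."""
    rotated, labels = [], []
    for k in range(4):                       # k * 90 degrees
        rotated.append(torch.rot90(images, k, dims=(2, 3)))
        labels.append(torch.full((images.size(0),), k, dtype=torch.long))
    return torch.cat(rotated), torch.cat(labels)

# A small ConvNet stand-in for the feature extractor studied in the paper.
net = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 4),                        # 4-way rotation classifier
)

images = torch.randn(8, 3, 32, 32)           # unlabeled batch
x, y = make_rotations(images)
loss = nn.functional.cross_entropy(net(x), y)
loss.backward()                              # standard supervised update
```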
Proceedings ArticleDOI

Wide Residual Networks

TL;DR: This paper conducts a detailed experimental study on the architecture of ResNet blocks and proposes a novel architecture in which the depth of residual networks is decreased and their width increased; the resulting network structures, called wide residual networks (WRNs), are far superior to their commonly used thin and very deep counterparts.
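
A sketch of the widened residual block behind the WRN entries above, assuming PyTorch; the widening factor and pre-activation ordering are illustrative, not the authors' reference implementation.

```python
# Illustrative wide residual block (assumes PyTorch). The channel count of a
# basic BN-ReLU-conv block is scaled by a widening factor k; this is a sketch
# of the idea, not the released WRN code.
import torch
import torch.nn as nn

class WideBasicBlock(nn.Module):
    """A basic residual block whose channel count is scaled by a width factor."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.bn1 = nn.BatchNorm2d(in_ch)
        self.conv1 = nn.Conv2d(in_ch, out_ch, 3, stride, 1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_ch)
        self.conv2 = nn.Conv2d(out_ch, out_ch, 3, 1, 1, bias=False)
        # 1x1 projection when the shape changes, identity otherwise.
        self.shortcut = (nn.Conv2d(in_ch, out_ch, 1, stride, bias=False)
                         if stride != 1 or in_ch != out_ch else nn.Identity())

    def forward(self, x):
        out = self.conv1(torch.relu(self.bn1(x)))
        out = self.conv2(torch.relu(self.bn2(out)))
        return out + self.shortcut(x)

# Widening factor k multiplies the channel count of every block.
k = 8
block = WideBasicBlock(16, 16 * k)
print(block(torch.randn(2, 16, 32, 32)).shape)   # torch.Size([2, 128, 32, 32])
```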
Proceedings ArticleDOI

Learning to compare image patches via convolutional neural networks

TL;DR: This paper shows how to learn directly from image data a general similarity function for comparing image patches, which is a task of fundamental importance for many computer vision problems, and opts for a CNN-based model that is trained to account for a wide variety of changes in image appearance.
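
A minimal sketch of learning such a patch-similarity function, assuming PyTorch; the shared-branch ("siamese") layout and layer sizes are one illustrative variant among the several the paper compares.

```python
# Illustrative CNN for scoring the similarity of two image patches (assumes
# PyTorch). A shared convolutional branch embeds each patch and a small head
# scores the pair; trained with a binary match/non-match objective.
import torch
import torch.nn as nn

class PatchSimilarityNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Shared branch applied to both patches.
        self.branch = nn.Sequential(
            nn.Conv2d(1, 32, 5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 5), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        # Decision head on the concatenated descriptors.
        self.head = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, p1, p2):
        return self.head(torch.cat([self.branch(p1), self.branch(p2)], dim=1))

net = PatchSimilarityNet()
p1, p2 = torch.randn(4, 1, 64, 64), torch.randn(4, 1, 64, 64)
match = torch.randint(0, 2, (4, 1)).float()       # 1 = same point, 0 = different
loss = nn.functional.binary_cross_entropy_with_logits(net(p1, p2), match)
```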
Posted Content

Paying More Attention to Attention: Improving the Performance of Convolutional Neural Networks via Attention Transfer

TL;DR: In this article, the authors show that by properly defining attention for convolutional neural networks, this information can be used to significantly improve the performance of a student CNN by forcing it to mimic the attention maps of a powerful teacher network.
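
A sketch of an activation-based attention-transfer loss in this spirit, assuming PyTorch; the map definition and exponents here illustrate the idea and are not the paper's released implementation.

```python
# Illustrative attention-transfer loss (assumes PyTorch): each feature map is
# collapsed into a spatial attention map, and the student is penalized for
# deviating from the teacher's normalized map.
import torch
import torch.nn.functional as F

def attention_map(feat, p=2):
    """Sum of p-th powers over channels, flattened and L2-normalized."""
    am = feat.pow(p).sum(dim=1).flatten(1)        # (N, H*W)
    return F.normalize(am, dim=1)

def attention_transfer_loss(student_feat, teacher_feat):
    return (attention_map(student_feat) - attention_map(teacher_feat)).pow(2).mean()

student = torch.randn(4, 64, 8, 8, requires_grad=True)
teacher = torch.randn(4, 256, 8, 8)               # wider teacher, same spatial size
loss = attention_transfer_loss(student, teacher)
loss.backward()
```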