Institution

École Polytechnique de Montréal

Education · Montreal, Quebec, Canada
About: École Polytechnique de Montréal is an education organization based in Montreal, Quebec, Canada. It is known for its research contributions in the topics of Finite element method and Population. The organization has 8,015 authors who have published 18,390 publications, receiving 494,372 citations.


Papers
01 Jan 2007
TL;DR: It is argued that deep architectures have the potential to generalize in non-local ways, i.e., beyond immediate neighbors, and that this is crucial in order to make progress on the kind of complex tasks required for artificial intelligence.
Abstract: One long-term goal of machine learning research is to produce methods that are applicable to highly complex tasks, such as perception (vision, audition), reasoning, intelligent control, and other artificially intelligent behaviors. We argue that in order to progress toward this goal, the Machine Learning community must endeavor to discover algorithms that can learn highly complex functions, with minimal need for prior knowledge, and with minimal human intervention. We present mathematical and empirical evidence suggesting that many popular approaches to non-parametric learning, particularly kernel methods, are fundamentally limited in their ability to learn complex high-dimensional functions. Our analysis focuses on two problems. First, kernel machines are shallow architectures, in which one large layer of simple template matchers is followed by a single layer of trainable coefficients. We argue that shallow architectures can be very inefficient in terms of required number of computational elements and examples. Second, we analyze a limitation of kernel machines with a local kernel, linked to the curse of dimensionality, that applies to supervised, unsupervised (manifold learning) and semi-supervised kernel machines. Using empirical results on invariant image recognition tasks, kernel methods are compared with deep architectures, in which lower-level features or concepts are progressively combined into more abstract and higher-level representations. We argue that deep architectures have the potential to generalize in non-local ways, i.e., beyond immediate neighbors, and that this is crucial in order to make progress on the kind of complex tasks required for artificial intelligence.
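To make the "shallow architecture" framing concrete, here is a minimal kernel-ridge-regression sketch (illustrative only, not code from the paper): one layer of Gaussian template matchers k(x, x_i) followed by one layer of trained coefficients, so each prediction is a locally weighted combination of training examples.

```python
# Minimal sketch of a "shallow" local-kernel machine (illustrative, not from the paper).
import numpy as np

def gaussian_kernel(X, Z, sigma=1.0):
    """Pairwise RBF similarities between rows of X and rows of Z."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def fit_kernel_ridge(X_train, y_train, sigma=1.0, lam=1e-3):
    """Solve (K + lam*I) alpha = y; alpha are the only trainable coefficients."""
    K = gaussian_kernel(X_train, X_train, sigma)
    return np.linalg.solve(K + lam * np.eye(len(X_train)), y_train)

def predict(X_test, X_train, alpha, sigma=1.0):
    """Prediction is a weighted sum of templates, hence local to the training data."""
    return gaussian_kernel(X_test, X_train, sigma) @ alpha
```

With a local kernel such as this one, a test point far from all training examples receives near-zero weights everywhere, which is the non-local-generalization limitation the paper contrasts with deep architectures.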

1,163 citations

Journal ArticleDOI
TL;DR: This chemistry is investigated using in situ Raman and transmission electron spectroscopies, which reveal a thickness-dependent photoassisted oxidation reaction with oxygen dissolved in adsorbed water; the results are consistent with a phenomenological model involving electron transfer and quantum confinement as key parameters.
Abstract: The degradation of exfoliated black phosphorus in ambient conditions may limit its use in electronic devices. The combined effects of light irradiation and exposure to oxygen on mono- and multilayers of this material are now investigated.

1,138 citations

Journal ArticleDOI
TL;DR: In this article, the authors provide an overview of the recent advances in the modelling, design and technological implementation of SIW structures and components, as well as their application to circuits and components operating in the microwave and millimetre-wave region.
Abstract: Substrate-integrated waveguide (SIW) technology represents an emerging and very promising candidate for the development of circuits and components operating in the microwave and millimetre-wave region. SIW structures are generally fabricated by using two rows of conducting cylinders or slots embedded in a dielectric substrate that connects two parallel metal plates, and permit the implementation of classical rectangular waveguide components in planar form, along with printed circuitry, active devices and antennas. This study aims to provide an overview of the recent advances in the modelling, design and technological implementation of SIW structures and components.
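As a quick illustration of SIW design (not taken from this paper), the sketch below uses a commonly quoted first-order relation for the equivalent rectangular-waveguide width, a_eff ≈ a − d²/(0.95·s), where a is the spacing between the via rows, d the via diameter and s the via pitch, and then estimates the TE10 cutoff of the equivalent dielectric-filled guide.

```python
# Rough SIW design sketch; the effective-width relation is an assumed
# first-order approximation from the SIW literature, not from this paper.
import math

C0 = 299_792_458.0  # speed of light in vacuum, m/s

def siw_effective_width(a, d, s):
    """Equivalent rectangular-waveguide width of an SIW."""
    return a - d ** 2 / (0.95 * s)

def te10_cutoff_hz(a, d, s, eps_r):
    """TE10 cutoff frequency of the equivalent dielectric-filled waveguide."""
    a_eff = siw_effective_width(a, d, s)
    return C0 / (2.0 * a_eff * math.sqrt(eps_r))

# Hypothetical example: 7.2 mm row spacing, 0.8 mm vias on a 1.5 mm pitch, eps_r = 2.2
print(te10_cutoff_hz(7.2e-3, 0.8e-3, 1.5e-3, 2.2) / 1e9, "GHz")  # ~15 GHz
```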

1,129 citations

Proceedings ArticleDOI
07 Dec 2015
TL;DR: In this paper, a spatio-temporal 3-D convolutional neural network (3-D CNN) representation of short temporal dynamics is used for video description; it is trained on video action recognition tasks so as to produce a representation tuned to human motion and behavior.
Abstract: Recent progress in using recurrent neural networks (RNNs) for image description has motivated the exploration of their application to video description. However, while images are static, working with videos requires modeling their dynamic temporal structure and then properly integrating that information into a natural language description model. In this context, we propose an approach that successfully takes into account both the local and global temporal structure of videos to produce descriptions. First, our approach incorporates a spatio-temporal 3-D convolutional neural network (3-D CNN) representation of the short temporal dynamics. The 3-D CNN representation is trained on video action recognition tasks, so as to produce a representation that is tuned to human motion and behavior. Second, we propose a temporal attention mechanism that goes beyond local temporal modeling and learns to automatically select the most relevant temporal segments given the text-generating RNN. Our approach exceeds the current state-of-the-art for both BLEU and METEOR metrics on the Youtube2Text dataset. We also present results on a new, larger and more challenging dataset of paired video and natural language descriptions.
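The sketch below is an illustrative soft temporal-attention step (not the authors' code): given one 3-D CNN feature vector per video segment and the current decoder hidden state, it scores each segment, normalizes the scores with a softmax, and returns the weighted context vector fed to the text-generating RNN. All weight names are hypothetical.

```python
# Illustrative soft temporal attention over per-segment video features.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def temporal_attention(segment_feats, h_dec, W_f, W_h, w_score):
    """
    segment_feats: (T, D) features, e.g. one 3-D CNN descriptor per segment
    h_dec:         (H,) current decoder (RNN) hidden state
    W_f: (K, D), W_h: (K, H), w_score: (K,) -- hypothetical attention weights
    Returns the attended context vector (D,) and the attention weights (T,).
    """
    # score_t = w^T tanh(W_f f_t + W_h h): relevance of segment t at this step
    scores = np.tanh(segment_feats @ W_f.T + h_dec @ W_h.T) @ w_score
    alpha = softmax(scores)            # (T,) weights summing to 1
    context = alpha @ segment_feats    # weighted sum over time
    return context, alpha
```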

1,115 citations

Posted Content
TL;DR: The proposed DenseNets approach achieves state-of-the-art results on urban scene benchmark datasets such as CamVid and Gatech without any further post-processing module or pretraining, and has far fewer parameters than the currently published best entries for these datasets.
Abstract: State-of-the-art approaches for semantic image segmentation are built on Convolutional Neural Networks (CNNs). The typical segmentation architecture is composed of (a) a downsampling path responsible for extracting coarse semantic features, followed by (b) an upsampling path trained to recover the input image resolution at the output of the model and, optionally, (c) a post-processing module (e.g. Conditional Random Fields) to refine the model predictions. Recently, a new CNN architecture, Densely Connected Convolutional Networks (DenseNets), has shown excellent results on image classification tasks. The idea of DenseNets is based on the observation that if each layer is directly connected to every other layer in a feed-forward fashion, then the network will be more accurate and easier to train. In this paper, we extend DenseNets to deal with the problem of semantic segmentation. We achieve state-of-the-art results on urban scene benchmark datasets such as CamVid and Gatech, without any further post-processing module or pretraining. Moreover, due to the smart construction of the model, our approach has far fewer parameters than the currently published best entries for these datasets. Code to reproduce the experiments is available here: this https URL
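A minimal sketch of the dense-connectivity pattern the paper builds on (illustrative only, not the released code): each layer receives the concatenation of all previous feature maps, and for segmentation the block's new feature maps are passed along the down- or up-sampling path.

```python
# Minimal dense block sketch (illustrative); layer sizes are arbitrary.
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    def __init__(self, in_channels, growth_rate=12, n_layers=4):
        super().__init__()
        self.layers = nn.ModuleList()
        for i in range(n_layers):
            # Each layer sees in_channels plus everything produced so far.
            self.layers.append(nn.Sequential(
                nn.BatchNorm2d(in_channels + i * growth_rate),
                nn.ReLU(inplace=True),
                nn.Conv2d(in_channels + i * growth_rate, growth_rate,
                          kernel_size=3, padding=1),
            ))

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            out = layer(torch.cat(features, dim=1))  # dense connectivity
            features.append(out)
        # Return only the newly produced feature maps, as in the FC-DenseNet style.
        return torch.cat(features[1:], dim=1)
```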

1,086 citations


Authors

Showing all 8139 results

Name                 H-index   Papers   Citations
Yoshua Bengio           202     1,033     420,313
Claude Leroy            135     1,170      88,604
Lucie Gauthier          132       679      64,794
Reyhaneh Rezvani        120       638      61,776
M. Giunta               115       608      66,189
Alain Dufresne          111       358      45,904
David Brown             105     1,257      46,827
Pierre Legendre          98       366      82,995
Michel Bouvier           97       396      31,267
Aharon Gedanken          96       861      38,974
Michel Gendreau          94       456      36,253
Frederick Dallaire       93       475      31,049
Pierre Savard            93       427      42,186
Nader Engheta            89       619      35,204
Ke Wu                    87     1,242      33,226
Network Information
Related Institutions (5)

Delft University of Technology: 94.4K papers, 2.7M citations (93% related)
Royal Institute of Technology: 68.4K papers, 1.9M citations (93% related)
Georgia Institute of Technology: 119K papers, 4.6M citations (93% related)
University of Waterloo: 93.9K papers, 2.9M citations (93% related)
École Polytechnique Fédérale de Lausanne: 98.2K papers, 4.3M citations (92% related)

Performance Metrics

No. of papers from the institution in previous years:

Year    Papers
2023        40
2022       276
2021     1,275
2020     1,207
2019     1,140
2018     1,102