Institution

Université de Montréal

Education · Montreal, Quebec, Canada
About: Université de Montréal is an education organization based in Montreal, Quebec, Canada. It is known for research contributions in the topics Population & Context (language use). The organization has 45,641 authors who have published 100,476 publications receiving 4,004,007 citations. The organization is also known as University of Montreal and UdeM.


Papers
Journal ArticleDOI
TL;DR: Two mutations in the gene PCSK9 (encoding proprotein convertase subtilisin/kexin type 9) are reported to cause ADH; PCSK9 encodes a newly identified human subtilase that is highly expressed in the liver and contributes to cholesterol homeostasis.
Abstract: Autosomal dominant hypercholesterolemia (ADH; OMIM144400), a risk factor for coronary heart disease, is characterized by an increase in low-density lipoprotein cholesterol levels that is associated with mutations in the genes LDLR (encoding low-density lipoprotein receptor) or APOB (encoding apolipoprotein B). We mapped a third locus associated with ADH, HCHOLA3 at 1p32, and now report two mutations in the gene PCSK9 (encoding proprotein convertase subtilisin/kexin type 9) that cause ADH. PCSK9 encodes NARC-1 (neural apoptosis regulated convertase), a newly identified human subtilase that is highly expressed in the liver and contributes to cholesterol homeostasis.

2,691 citations

Journal ArticleDOI
TL;DR: In this article, the authors summarize recent work on a new and quite general methodology for testing for the presence of a unit root in univariate time series models.
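The TL;DR above does not spell out the paper's specific test, so the following is only a generic illustration of what a unit-root test checks, using the augmented Dickey-Fuller test from statsmodels as an assumed stand-in rather than the paper's own methodology.

import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
random_walk = np.cumsum(rng.normal(size=500))  # has a unit root by construction

stat, pvalue, *_ = adfuller(random_walk)
print(f"ADF statistic = {stat:.3f}, p-value = {pvalue:.3f}")
# A large p-value means the null hypothesis of a unit root cannot be rejected.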

2,686 citations

Posted Content
TL;DR: In this article, a generative adversarial network (GAN) framework is proposed for estimating generative models via an adversarial process in which two models are trained simultaneously: a generator G that captures the data distribution and a discriminator D that estimates the probability that a sample came from the training data rather than from G.
Abstract: We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to 1/2 everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples.
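As a rough illustration of the two-player training procedure described in the abstract, here is a minimal sketch in PyTorch; the toy 1-D Gaussian data, network sizes, learning rates, and noise dimension are illustrative assumptions, not the paper's configuration.

import torch
import torch.nn as nn

noise_dim, data_dim = 8, 1

# Generator G maps noise z to a sample; discriminator D outputs P(sample is real).
G = nn.Sequential(nn.Linear(noise_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, data_dim) * 0.5 + 2.0  # samples from the "data distribution"
    z = torch.randn(64, noise_dim)
    fake = G(z)

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_D.zero_grad(); d_loss.backward(); opt_D.step()

    # Generator step: maximize the probability that D mistakes G(z) for real data.
    g_loss = bce(D(G(z)), torch.ones(64, 1))
    opt_G.zero_grad(); g_loss.backward(); opt_G.step()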

2,657 citations

Journal ArticleDOI
TL;DR: Genetic loci associated with body mass index map near key hypothalamic regulators of energy balance, and one of these loci is near GIPR, an incretin receptor, which may provide new insights into human body weight regulation.
Abstract: Obesity is globally prevalent and highly heritable, but its underlying genetic factors remain largely elusive. To identify genetic loci for obesity susceptibility, we examined associations between body mass index and ~2.8 million SNPs in up to 123,865 individuals, with targeted follow-up of 42 SNPs in up to 125,931 additional individuals. We confirmed 14 known obesity susceptibility loci and identified 18 new loci associated with body mass index (P < 5 × 10^-8), one of which includes a copy number variant near GPRC5B. Some loci (at MC4R, POMC, SH2B1 and BDNF) map near key hypothalamic regulators of energy balance, and one of these loci is near GIPR, an incretin receptor. Furthermore, genes in other newly associated loci may provide new insights into human body weight regulation.

2,632 citations

Proceedings Article
16 Jun 2013
TL;DR: In this article, a gradient norm clipping strategy is proposed to deal with exploding gradients, and a soft constraint is proposed for the vanishing gradient problem in recurrent neural networks.
Abstract: There are two widely known issues with properly training recurrent neural networks, the vanishing and the exploding gradient problems detailed in Bengio et al. (1994). In this paper we attempt to improve the understanding of the underlying issues by exploring these problems from an analytical, a geometric and a dynamical systems perspective. Our analysis is used to justify a simple yet effective solution. We propose a gradient norm clipping strategy to deal with exploding gradients and a soft constraint for the vanishing gradients problem. We validate empirically our hypothesis and proposed solutions in the experimental section.
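A minimal sketch of the gradient-norm clipping step described in the abstract, assuming PyTorch; the toy RNN, data shapes, and the clipping threshold are illustrative assumptions rather than the paper's exact setup.

import torch
import torch.nn as nn

rnn = nn.RNN(input_size=4, hidden_size=16, batch_first=True)
head = nn.Linear(16, 1)
params = list(rnn.parameters()) + list(head.parameters())
opt = torch.optim.SGD(params, lr=0.01)

clip_threshold = 1.0
x = torch.randn(8, 50, 4)      # batch of long sequences, where gradients can explode
target = torch.randn(8, 1)

out, _ = rnn(x)
loss = nn.functional.mse_loss(head(out[:, -1]), target)

opt.zero_grad()
loss.backward()

# If the global gradient norm exceeds the threshold, rescale all gradients so the
# norm equals the threshold; otherwise leave them untouched.
torch.nn.utils.clip_grad_norm_(params, max_norm=clip_threshold)
opt.step()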

2,586 citations


Authors

Showing all 45957 results

Name                     H-index    Papers    Citations
Yoshua Bengio            202        1,033     420,313
Alan C. Evans            183        866       134,642
Richard H. Friend        169        1,182     140,032
Anders Björklund         165        769       84,268
Charles N. Serhan        158        728       84,810
Fernando Rivadeneira     146        628       86,582
C. Dallapiccola          136        1,717     101,947
Michael J. Meaney        136        604       81,128
Claude Leroy             135        1,170     88,604
Georges Azuelos          134        1,294     90,690
Phillip Gutierrez        133        1,391     96,205
Danny Miller             133        512       71,238
Henry T. Lynch           133        925       86,270
Stanley Nattel           132        778       65,700
Lucie Gauthier           132        679       64,794
Network Information
Related Institutions (5)
University of Toronto: 294.9K papers, 13.5M citations (96% related)
University of Pennsylvania: 257.6K papers, 14.1M citations (93% related)
University of Wisconsin-Madison: 237.5K papers, 11.8M citations (92% related)
University of Minnesota: 257.9K papers, 11.9M citations (92% related)
Harvard University: 530.3K papers, 38.1M citations (92% related)

Performance
Metrics
No. of papers from the Institution in previous years
Year    Papers
2023    118
2022    485
2021    6,077
2020    5,753
2019    5,212
2018    4,696