Institution

Université de Montréal

Education
Montreal, Quebec, Canada
About: Université de Montréal is an education organization based in Montreal, Quebec, Canada. It is known for its research contributions in the topics: Population & Context (language use). The organization has 45,641 authors who have published 100,476 publications receiving 4,004,007 citations. The organization is also known as the University of Montreal and UdeM.


Papers
Journal ArticleDOI
TL;DR: In patients with atrial fibrillation and congestive heart failure, a routine strategy of rhythm control does not reduce the rate of death from cardiovascular causes, as compared with a rate-control strategy.
Abstract: Methods We conducted a multicenter, randomized trial comparing the maintenance of sinus rhythm (rhythm control) with control of the ventricular rate (rate control) in patients with a left ventricular ejection fraction of 35% or less, symptoms of congestive heart failure, and a history of atrial fibrillation. The primary outcome was the time to death from cardiovascular causes. Results A total of 1376 patients were enrolled (682 in the rhythm-control group and 694 in the rate-control group) and were followed for a mean of 37 months. Of these patients, 182 (27%) in the rhythm-control group died from cardiovascular causes, as compared with 175 (25%) in the rate-control group (hazard ratio in the rhythm-control group, 1.06; 95% confidence interval, 0.86 to 1.30; P = 0.59 by the log-rank test). Secondary outcomes were similar in the two groups, including death from any cause (32% in the rhythm-control group and 33% in the rate-control group), stroke (3% and 4%, respectively), worsening heart failure (28% and 31%), and the composite of death from cardiovascular causes, stroke, or worsening heart failure (43% and 46%). There were also no significant differences favoring either strategy in any predefined subgroup. Conclusions In patients with atrial fibrillation and congestive heart failure, a routine strategy of rhythm control does not reduce the rate of death from cardiovascular causes, as compared with a rate-control strategy. (ClinicalTrials.gov number, NCT00597077.)

1,331 citations

Posted Content
TL;DR: This article showed that deep neural networks learn input-output mappings that are fairly discontinuous to a significant extent, which suggests that it is the space, rather than individual units, that contains the semantic information in the high layers of neural networks.
Abstract: Deep neural networks are highly expressive models that have recently achieved state-of-the-art performance on speech and visual recognition tasks. While their expressiveness is the reason they succeed, it also causes them to learn uninterpretable solutions that can have counter-intuitive properties. In this paper we report two such properties. First, we find that there is no distinction between individual high-level units and random linear combinations of high-level units, according to various methods of unit analysis. This suggests that it is the space, rather than the individual units, that contains the semantic information in the high layers of neural networks. Second, we find that deep neural networks learn input-output mappings that are fairly discontinuous to a significant extent. We can cause the network to misclassify an image by applying a certain imperceptible perturbation, which is found by maximizing the network's prediction error. In addition, the specific nature of these perturbations is not a random artifact of learning: the same perturbation can cause a different network, trained on a different subset of the dataset, to misclassify the same input.
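The imperceptible perturbations described above can be illustrated with a short sketch. The snippet below searches for a perturbation by plain gradient ascent on the prediction error; this is a simplified stand-in for the box-constrained optimization the paper actually uses, and `model`, `image`, and `label` are placeholder assumptions.

```python
# Simplified sketch: find a small perturbation r that increases the model's
# prediction error on (image, label). Not the paper's exact procedure.
import torch
import torch.nn.functional as F

def adversarial_perturbation(model, image, label, step=1e-2, steps=50, max_norm=0.05):
    """Return a small perturbation r such that model(image + r) is likely misclassified."""
    r = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(image + r), label)  # prediction error to maximize
        loss.backward()
        with torch.no_grad():
            r += step * r.grad.sign()        # ascend the loss
            r.clamp_(-max_norm, max_norm)    # keep the perturbation imperceptibly small
            r.grad.zero_()
    return r.detach()
```

Keeping the perturbation norm small is what makes the resulting image visually indistinguishable from the original while still flipping the prediction.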

1,313 citations

Proceedings Article
07 Dec 2015
TL;DR: BinaryConnect is introduced, a method that trains a DNN with binary weights during the forward and backward propagations while retaining the precision of the stored weights in which gradients are accumulated; near state-of-the-art results with BinaryConnect are obtained on the permutation-invariant MNIST, CIFAR-10, and SVHN.
Abstract: Deep Neural Networks (DNN) have achieved state-of-the-art results in a wide range of tasks, with the best results obtained with large training sets and large models. In the past, GPUs enabled these breakthroughs because of their greater computational speed. In the future, faster computation at both training and test time is likely to be crucial for further progress and for consumer applications on low-power devices. As a result, there is much interest in research and development of dedicated hardware for Deep Learning (DL). Binary weights, i.e., weights constrained to only two possible values (e.g., -1 or 1), would bring great benefits to specialized DL hardware by replacing many multiply-accumulate operations with simple accumulations, as multipliers are the most space- and power-hungry components of the digital implementation of neural networks. We introduce BinaryConnect, a method that trains a DNN with binary weights during the forward and backward propagations, while retaining the precision of the stored weights in which gradients are accumulated. Like other dropout schemes, we show that BinaryConnect acts as a regularizer, and we obtain near state-of-the-art results with BinaryConnect on the permutation-invariant MNIST, CIFAR-10, and SVHN.
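A rough sketch of the idea for a single linear layer: weights are binarized to -1/+1 for the forward and backward pass, while the gradient update is applied to stored full-precision weights. The layer shape, dummy mini-batch, and plain SGD update are illustrative assumptions, not the paper's exact training setup.

```python
# Sketch of BinaryConnect for one linear layer: binary weights in the
# forward/backward pass, full-precision weights for gradient accumulation.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
W_real = torch.randn(784, 10) * 0.01   # full-precision stored weights
lr = 0.01

x = torch.randn(32, 784)               # dummy mini-batch (illustrative)
y = torch.randint(0, 10, (32,))

# Forward and backward with binarized weights.
W_bin = torch.sign(W_real).requires_grad_()   # weights constrained to {-1, +1}
loss = F.cross_entropy(x @ W_bin, y)
loss.backward()

# The gradient computed for the binary weights updates the stored
# real-valued weights, which are then clipped to [-1, 1].
with torch.no_grad():
    W_real -= lr * W_bin.grad
    W_real.clamp_(-1.0, 1.0)
```

At inference time only the binary weights are needed, which is what lets dedicated hardware replace multiply-accumulates with simple accumulations.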

1,311 citations

Proceedings Article
01 Jan 2019
TL;DR: To demonstrate the adapters' effectiveness, the recently proposed BERT Transformer model is transferred to 26 diverse text classification tasks, including the GLUE benchmark; adapters attain near state-of-the-art performance while adding only a few parameters per task.
Abstract: Fine-tuning large pre-trained models is an effective transfer mechanism in NLP. However, in the presence of many downstream tasks, fine-tuning is parameter inefficient: an entire new model is required for every task. As an alternative, we propose transfer with adapter modules. Adapter modules yield a compact and extensible model; they add only a few trainable parameters per task, and new tasks can be added without revisiting previous ones. The parameters of the original network remain fixed, yielding a high degree of parameter sharing. To demonstrate the adapters' effectiveness, we transfer the recently proposed BERT Transformer model to 26 diverse text classification tasks, including the GLUE benchmark. Adapters attain near state-of-the-art performance while adding only a few parameters per task. On GLUE, we attain within 0.4% of the performance of full fine-tuning, adding only 3.6% parameters per task. By contrast, fine-tuning trains 100% of the parameters per task.
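A minimal sketch of a bottleneck adapter in the spirit of the abstract: a small down-projection, nonlinearity, and up-projection with a residual connection, trained while the pre-trained network stays frozen. The hidden size, bottleneck size, and the stand-in Transformer layer are assumptions for illustration, not the paper's exact configuration.

```python
# Bottleneck adapter with a residual connection; only its parameters train.
import torch.nn as nn

class Adapter(nn.Module):
    def __init__(self, hidden_size=768, bottleneck=64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)   # few parameters per task
        self.up = nn.Linear(bottleneck, hidden_size)
        self.act = nn.GELU()

    def forward(self, x):
        return x + self.up(self.act(self.down(x)))       # near-identity residual path

# Freeze the pre-trained network; only adapter (and task-head) parameters are trained.
pretrained_layer = nn.TransformerEncoderLayer(d_model=768, nhead=12)  # stand-in for a BERT layer
for p in pretrained_layer.parameters():
    p.requires_grad = False
adapter = Adapter()
print("trainable adapter parameters:", sum(p.numel() for p in adapter.parameters()))
```

Because each task adds only the two small projection matrices (roughly 2 × hidden_size × bottleneck weights), new tasks can be added without touching previously trained ones.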

1,308 citations

Journal ArticleDOI
TL;DR: Sacubitril–valsartan did not result in a significantly lower rate of total hospitalizations for heart failure and death from cardiovascular causes among patients with heart failure and an ejection fraction of 45% or higher; among 12 prespecified subgroups, there was a suggestion of heterogeneity, with possible benefit in patients with lower ejection fractions and in women.
Abstract: Background The angiotensin receptor–neprilysin inhibitor sacubitril–valsartan led to a reduced risk of hospitalization for heart failure or death from cardiovascular causes among patients ...

1,306 citations


Authors

Showing all 45,957 results

Name | H-index | Papers | Citations
Yoshua Bengio | 202 | 1,033 | 420,313
Alan C. Evans | 183 | 866 | 134,642
Richard H. Friend | 169 | 1,182 | 140,032
Anders Björklund | 165 | 769 | 84,268
Charles N. Serhan | 158 | 728 | 84,810
Fernando Rivadeneira | 146 | 628 | 86,582
C. Dallapiccola | 136 | 1,717 | 101,947
Michael J. Meaney | 136 | 604 | 81,128
Claude Leroy | 135 | 1,170 | 88,604
Georges Azuelos | 134 | 1,294 | 90,690
Phillip Gutierrez | 133 | 1,391 | 96,205
Danny Miller | 133 | 512 | 71,238
Henry T. Lynch | 133 | 925 | 86,270
Stanley Nattel | 132 | 778 | 65,700
Lucie Gauthier | 132 | 679 | 64,794
Network Information
Related Institutions (5)
University of Toronto
294.9K papers, 13.5M citations

96% related

University of Pennsylvania
257.6K papers, 14.1M citations

93% related

University of Wisconsin-Madison
237.5K papers, 11.8M citations

92% related

University of Minnesota
257.9K papers, 11.9M citations

92% related

Harvard University
530.3K papers, 38.1M citations

92% related

Performance
Metrics
No. of papers from the Institution in previous years
Year | Papers
2023 | 118
2022 | 485
2021 | 6,077
2020 | 5,753
2019 | 5,212
2018 | 4,696