Université de Sherbrooke
Education • Sherbrooke, Quebec, Canada
About: Université de Sherbrooke is an education organization based in Sherbrooke, Quebec, Canada. It is known for its research contributions in the topics of Population and Receptor. The organization has 14,922 authors who have published 28,783 publications receiving 792,511 citations. The organization is also known as Universite de Sherbrooke and Sherbrooke University.
Topics: Population, Receptor, Health care, Angiotensin II, Poison control
Papers published on a yearly basis
TL;DR: The trial demonstrated that the benefit of cholesterol-lowering therapy extends to the majority of patients with coronary disease who have average cholesterol levels; the benefit was also greater in patients with higher pretreatment levels of LDL cholesterol.
Abstract: Background In patients with high cholesterol levels, lowering the cholesterol level reduces the risk of coronary events, but the effect of lowering cholesterol levels in the majority of patients with coronary disease, who have average levels, is less clear. Methods In a double-blind trial lasting five years, we administered either 40 mg of pravastatin per day or placebo to 4159 patients (3583 men and 576 women) with myocardial infarction who had plasma total cholesterol levels below 240 mg per deciliter (mean, 209) and low-density lipoprotein (LDL) cholesterol levels of 115 to 174 mg per deciliter (mean, 139). The primary end point was a fatal coronary event or a nonfatal myocardial infarction. Results The frequency of the primary end point was 10.2 percent in the pravastatin group and 13.2 percent in the placebo group, an absolute difference of 3 percentage points and a 24 percent reduction in risk (95 percent confidence interval, 9 to 36 percent; P = 0.003). Coronary bypass surgery was needed in 7.5 per...
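The reported risk figures can be recombined into the standard trial summary statistics. A quick sketch (the variable names are ours; note the published 24 percent relative reduction comes from a Cox proportional-hazards model, so the naive ratio computed here differs slightly):

```python
# Primary-event frequencies reported in the trial summary above.
placebo_risk = 0.132      # placebo group
pravastatin_risk = 0.102  # pravastatin group

arr = placebo_risk - pravastatin_risk  # absolute risk reduction
rrr = arr / placebo_risk               # naive relative risk reduction
nnt = 1 / arr                          # number needed to treat

print(f"ARR = {arr:.3f}")  # 0.030 -> the "3 percentage points" in the text
print(f"RRR = {rrr:.3f}")  # ~0.227; the paper's 24% is model-adjusted
print(f"NNT = {nnt:.1f}")  # ~33 patients treated to prevent one event
```

The number needed to treat is not quoted in the abstract but follows directly from the absolute difference.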
TL;DR: A new representation learning approach for domain adaptation is proposed for settings in which data at training and test time come from similar but different distributions; by relying on features that cannot discriminate between the training (source) and test (target) domains, the method promotes the emergence of features that are discriminative for the main learning task on the source domain.
Abstract: We introduce a new representation learning approach for domain adaptation, in which data at training and test time come from similar but different distributions. Our approach is directly inspired by the theory on domain adaptation suggesting that, for effective domain transfer to be achieved, predictions must be made based on features that cannot discriminate between the training (source) and test (target) domains. The approach implements this idea in the context of neural network architectures that are trained on labeled data from the source domain and unlabeled data from the target domain (no labeled target-domain data is necessary). As training progresses, the approach promotes the emergence of features that are (i) discriminative for the main learning task on the source domain and (ii) indiscriminate with respect to the shift between the domains. We show that this adaptation behaviour can be achieved in almost any feed-forward model by augmenting it with a few standard layers and a new gradient reversal layer. The resulting augmented architecture can be trained using standard backpropagation and stochastic gradient descent, and can thus be implemented with little effort using any of the deep learning packages. We demonstrate the success of our approach for two distinct classification problems (document sentiment analysis and image classification), where state-of-the-art domain adaptation performance on standard benchmarks is achieved. We also validate the approach for a descriptor learning task in the context of a person re-identification application.
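The gradient reversal layer at the heart of this abstract can be sketched minimally: identity in the forward pass, gradient negated (and scaled) in the backward pass, so the feature extractor learns to confuse the domain classifier. This is a toy illustration with NumPy; the class and parameter names are ours, not the paper's:

```python
import numpy as np

class GradientReversal:
    """Toy gradient reversal layer: forward is the identity;
    backward multiplies the incoming gradient by -lambda."""

    def __init__(self, lam: float = 1.0):
        self.lam = lam  # trade-off between task loss and domain confusion

    def forward(self, x: np.ndarray) -> np.ndarray:
        return x  # features pass through unchanged

    def backward(self, grad_output: np.ndarray) -> np.ndarray:
        return -self.lam * grad_output  # reversed, scaled gradient

grl = GradientReversal(lam=0.5)
x = np.array([1.0, -2.0, 3.0])
g = np.array([0.1, 0.2, -0.3])

assert np.allclose(grl.forward(x), x)                  # identity forward
assert np.allclose(grl.backward(g), [-0.05, -0.1, 0.15])  # negated backward
```

In a real framework this would be a custom autograd operation inserted between the feature extractor and the domain classifier; everything upstream then receives the reversed domain-classification gradient.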
Daniel J. Klionsky, Kotb Abdelmohsen, Akihisa Abe, Joynal Abedin, +2,519 more • Institutions (695)
TL;DR: In this paper, the authors present a set of guidelines for the selection and interpretation of methods for use by investigators who aim to examine macroautophagy and related processes, as well as for reviewers who need to provide realistic and reasonable critiques of papers that are focused on these processes.
Abstract: In 2008 we published the first set of guidelines for standardizing research in autophagy. Since then, research on this topic has continued to accelerate, and many new scientists have entered the field. Our knowledge base and relevant new technologies have also been expanding. Accordingly, it is important to update these guidelines for monitoring autophagy in different organisms. Various reviews have described the range of assays that have been used for this purpose. Nevertheless, there continues to be confusion regarding acceptable methods to measure autophagy, especially in multicellular eukaryotes. For example, a key point that needs to be emphasized is that there is a difference between measurements that monitor the numbers or volume of autophagic elements (e.g., autophagosomes or autolysosomes) at any stage of the autophagic process versus those that measure flux through the autophagy pathway (i.e., the complete process including the amount and rate of cargo sequestered and degraded). In particular, a block in macroautophagy that results in autophagosome accumulation must be differentiated from stimuli that increase autophagic activity, defined as increased autophagy induction coupled with increased delivery to, and degradation within, lysosomes (in most higher eukaryotes and some protists such as Dictyostelium) or the vacuole (in plants and fungi). In other words, it is especially important that investigators new to the field understand that the appearance of more autophagosomes does not necessarily equate with more autophagy. In fact, in many cases, autophagosomes accumulate because of a block in trafficking to lysosomes without a concomitant change in autophagosome biogenesis, whereas an increase in autolysosomes may reflect a reduction in degradative activity. It is worth emphasizing here that lysosomal digestion is a stage of autophagy and evaluating its competence is a crucial part of the evaluation of autophagic flux, or complete autophagy. 
Here, we present a set of guidelines for the selection and interpretation of methods for use by investigators who aim to examine macroautophagy and related processes, as well as for reviewers who need to provide realistic and reasonable critiques of papers that are focused on these processes. These guidelines are not meant to be a formulaic set of rules, because the appropriate assays depend in part on the question being asked and the system being used. In addition, we emphasize that no individual assay is guaranteed to be the most appropriate one in every situation, and we strongly recommend the use of multiple assays to monitor autophagy. Along these lines, because of the potential for pleiotropic effects due to blocking autophagy through genetic manipulation, it is imperative to target by gene knockout or RNA interference more than one autophagy-related protein. In addition, some individual Atg proteins, or groups of proteins, are involved in other cellular pathways implying that not all Atg proteins can be used as a specific marker for an autophagic process. In these guidelines, we consider these various methods of assessing autophagy and what information can, or cannot, be obtained from them. Finally, by discussing the merits and limits of particular assays, we hope to encourage technical innovation in the field.
03 Dec 2012
TL;DR: This work describes new algorithms that take into account the variable cost of learning algorithm experiments and that can leverage the presence of multiple cores for parallel experimentation and shows that these proposed algorithms improve on previous automatic procedures and can reach or surpass human expert-level optimization for many algorithms.
Abstract: The use of machine learning algorithms frequently involves careful tuning of learning parameters and model hyperparameters. Unfortunately, this tuning is often a "black art" requiring expert experience, rules of thumb, or sometimes brute-force search. There is therefore great appeal for automatic approaches that can optimize the performance of any given learning algorithm to the problem at hand. In this work, we consider this problem through the framework of Bayesian optimization, in which a learning algorithm's generalization performance is modeled as a sample from a Gaussian process (GP). We show that certain choices for the nature of the GP, such as the type of kernel and the treatment of its hyperparameters, can play a crucial role in obtaining a good optimizer that can achieve expert-level performance. We describe new algorithms that take into account the variable cost (duration) of learning algorithm experiments and that can leverage the presence of multiple cores for parallel experimentation. We show that these proposed algorithms improve on previous automatic procedures and can reach or surpass human expert-level optimization for many algorithms including latent Dirichlet allocation, structured SVMs and convolutional neural networks.
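The core loop the abstract describes (model the objective with a GP, pick the next experiment by an acquisition function) can be sketched on a 1D toy problem. This is our own minimal illustration with an RBF kernel and expected improvement; it omits the paper's cost-aware and parallel extensions, and all function names are ours:

```python
import numpy as np
from math import erf, sqrt, pi

def rbf(a, b, length=0.5):
    # Squared-exponential kernel between two 1D point sets.
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)

def gp_posterior(x_train, y_train, x_test, noise=1e-6):
    # Standard GP regression posterior mean and std at x_test.
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(x_train, x_test)
    sol = np.linalg.solve(K, Ks)
    mu = sol.T @ y_train
    var = np.diag(rbf(x_test, x_test)) - np.sum(Ks * sol, axis=0)
    return mu, np.sqrt(np.maximum(var, 1e-12))

def expected_improvement(mu, sigma, best):
    # EI for minimization: E[max(best - f, 0)] under the GP posterior.
    z = (best - mu) / sigma
    Phi = 0.5 * (1 + np.vectorize(erf)(z / sqrt(2)))  # normal CDF
    phi = np.exp(-0.5 * z ** 2) / sqrt(2 * pi)        # normal PDF
    return sigma * (z * Phi + phi)

objective = lambda x: (x - 0.3) ** 2  # stand-in for a validation loss
x_obs = np.array([0.0, 0.6, 1.0])
y_obs = objective(x_obs)
grid = np.linspace(0, 1, 101)

for _ in range(5):  # sequential Bayesian optimization loop
    mu, sigma = gp_posterior(x_obs, y_obs, grid)
    x_next = grid[np.argmax(expected_improvement(mu, sigma, y_obs.min()))]
    x_obs = np.append(x_obs, x_next)
    y_obs = np.append(y_obs, objective(x_next))

print("best x found:", x_obs[np.argmin(y_obs)])  # converges near 0.3
```

In the hyperparameter-tuning setting, `objective` would launch a training run and return its validation error, which is exactly why the paper's cost-aware variant matters: evaluations are expensive and unequal in duration.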
01 Jan 2003 - Information & Management
TL;DR: It is concluded that TAM is a useful model, but it has to be integrated into a broader one that would include variables related to both human and social change processes and to the adoption of the innovation model.
Abstract: Information systems (IS) implementation is costly and has a relatively low success rate. Since the seventies, IS research has contributed to a better understanding of this process and its outcomes. The early efforts concentrated on the identification of factors that facilitated IS use. This produced a long list of items that proved to be of little practical value. It became obvious that, for practical reasons, the factors had to be grouped into a model in a way that would facilitate analysis of IS use. In 1985, Fred Davis suggested the technology acceptance model (TAM). It examines the mediating role of perceived ease of use and perceived usefulness in the relation between system characteristics (external variables) and the probability of system use (an indicator of system success). More recently, Davis proposed a new version of his model, TAM2, which includes subjective norms and was tested with longitudinal research designs. Overall, the two models explain about 40% of system use. Analysis of empirical research using TAM shows that results are not totally consistent or clear. This suggests that significant factors are not included in the models. We conclude that TAM is a useful model, but it has to be integrated into a broader one that would include variables related to both human and social change processes, and to the adoption of the innovation model.
Showing all 14,922 results

| Author | H-index | Papers | Citations |
|---|---|---|---|
| Joseph V. Bonventre | 126 | 596 | 61,009 |
| Jeffrey L. Benovic | 99 | 264 | 30,041 |
| Simon C. Robson | 88 | 552 | 29,808 |
| Paul B. Corkum | 88 | 576 | 37,200 |
| Stephen M. Collins | 86 | 320 | 25,646 |
| William D. Fraser | 85 | 827 | 30,155 |
Related Institutions (5)
- 162.5K papers, 6.9M citations
- University of British Columbia: 209.6K papers, 9.2M citations
- University of Toronto: 294.9K papers, 13.5M citations
- Centre national de la recherche scientifique: 382.4K papers, 13.6M citations
- University of California, Irvine: 113.6K papers, 5.5M citations