Alexandre Piché
Researcher at McGill University
Publications - 18
Citations - 163
Alexandre Piché is an academic researcher at McGill University. He has contributed to research in computer science and reinforcement learning, has an h-index of 5, and has co-authored 10 publications receiving 101 citations. Previous affiliations of Alexandre Piché include Université de Montréal.
Papers
Journal ArticleDOI
Interventions for implementation of thromboprophylaxis in hospitalized patients at risk for venous thromboembolism.
Susan R. Kahn, David R. Morrison, Gisele Diendéré, Alexandre Piché, Kristian B. Filion, Adi J. Klil-Drori, James D. Douketis, Jessica Emed, Andre Roussin, Vicky Tagalakis, Martin Morris, William H. Geerts +13 more
TL;DR: This review assesses the effects of system-wide interventions designed to increase the implementation of thromboprophylaxis and decrease the incidence of venous thromboembolism (VTE) in hospitalized adult medical and surgical patients at risk of VTE, focusing only on randomised controlled trials (RCTs).
Proceedings Article
Probabilistic Planning with Sequential Monte Carlo methods
Journal ArticleDOI
Effectiveness of interventions for the implementation of thromboprophylaxis in hospitalised patients at risk of venous thromboembolism: an updated abridged Cochrane systematic review and meta-analysis of randomised controlled trials.
Susan R. Kahn, Gisele Diendéré, David R. Morrison, Alexandre Piché, Kristian B. Filion, Adi J. Klil-Drori, James D. Douketis, Jessica Emed, Andre Roussin, Vicky Tagalakis, Martin Morris, William H. Geerts +11 more
TL;DR: System-wide interventions designed to increase the implementation of thromboprophylaxis and decrease the incidence of venous thromboembolism (VTE) in hospitalised medical and surgical patients at risk of VTE were found to be more effective than no intervention, existing policy, or an alternative intervention.
Posted Content
Iterative Amortized Policy Optimization
TL;DR: It is demonstrated that the resulting technique, iterative amortized policy optimization, yields performance improvements over conventional direct amortization methods on benchmark continuous control tasks.
Reward Estimation for Variance Reduction in Deep Reinforcement Learning
TL;DR: This paper proposes estimating both rewards and value functions to handle corrupted reward signals in model-free reinforcement learning (RL). The approach improves performance under corrupted stochastic rewards in both the tabular and nonlinear function approximation settings, across a variety of noise types and environments.
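The core idea in the tabular setting — maintain a per-state-action estimate of the reward and drive the value update from that estimate rather than the raw noisy sample — can be sketched as follows. This is a minimal illustration on a toy chain MDP, not the paper's actual experiments; all names and parameters here are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy chain MDP (illustrative, not from the paper): states 0..4,
# actions {0: left, 1: right}; the true reward is 1.0 for reaching the
# terminal state 4, but the agent only sees a noise-corrupted version.
N_STATES, N_ACTIONS, GOAL = 5, 2, 4

def step(s, a):
    s2 = min(s + 1, GOAL) if a == 1 else max(s - 1, 0)
    true_r = 1.0 if s2 == GOAL else 0.0
    return s2, true_r + rng.normal(0.0, 0.5)  # corrupted reward observation

Q = np.zeros((N_STATES, N_ACTIONS))
r_hat = np.zeros((N_STATES, N_ACTIONS))    # running-mean reward estimator
counts = np.zeros((N_STATES, N_ACTIONS))
alpha, gamma, eps = 0.1, 0.9, 0.3

for _ in range(2000):
    s = 0
    for _ in range(100):
        # Epsilon-greedy action selection (random tie-break).
        if rng.random() < eps or Q[s, 0] == Q[s, 1]:
            a = int(rng.integers(N_ACTIONS))
        else:
            a = int(np.argmax(Q[s]))
        s2, r = step(s, a)
        # Update the per-(s, a) reward estimate with the noisy sample...
        counts[s, a] += 1
        r_hat[s, a] += (r - r_hat[s, a]) / counts[s, a]
        # ...and drive the Q-update from the estimate, not the raw reward,
        # which shrinks the variance of the target as samples accumulate.
        target = r_hat[s, a] + gamma * np.max(Q[s2]) * (s2 != GOAL)
        Q[s, a] += alpha * (target - Q[s, a])
        if s2 == GOAL:
            break
        s = s2

# Greedy action per nonterminal state; should move right toward the goal.
print([int(np.argmax(Q[s])) for s in range(GOAL)])
```

Despite every observed reward carrying Gaussian noise comparable in scale to the true signal, the averaged reward estimate converges to the true reward, so the learned greedy policy still heads toward the goal.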