
Lawrence M. Murray

Researcher at Uber

Publications: 50
Citations: 1260

Lawrence M. Murray is an academic researcher from Uber. The author has contributed to research topics including particle filters and probabilistic logic. The author has an h-index of 21 and has co-authored 50 publications receiving 1118 citations. Previous affiliations of Lawrence M. Murray include the Commonwealth Scientific and Industrial Research Organisation and the University of Oxford.

Papers
Journal Article

Parallel Resampling in the Particle Filter

TL;DR: This work analyzes two alternative resampling schemes that do not involve a collective operation over the particle weights and compares them to standard schemes, finding that, in certain circumstances, the alternative resamplers can perform significantly faster on a GPU, and to a lesser extent on a CPU, than the standard approaches.
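The paper's schemes are not reproduced here; purely as an illustration of what a collective-free resampler can look like, the sketch below implements a Metropolis-style resampler in Python, in which each output ancestor is chosen by a short independent chain over the weights rather than by a cumulative sum. The function name, chain length `B`, and toy weights are illustrative assumptions rather than details taken from the paper.

```python
import numpy as np

def metropolis_resample(weights, B=20, rng=None):
    """Collective-free resampling sketch: each of the N outputs runs an
    independent Metropolis chain of length B over particle indices,
    accepting a proposed index j over the current index i with probability
    min(1, w[j] / w[i]).  No cumulative sum over the weights is required,
    so every output can be computed independently -- the pattern that maps
    naturally to one GPU thread per particle."""
    rng = np.random.default_rng() if rng is None else rng
    N = len(weights)
    ancestors = np.arange(N)                    # each chain starts at its own index
    for _ in range(B):
        proposals = rng.integers(0, N, size=N)  # uniform proposals over indices
        u = rng.random(N)
        accept = u * weights[ancestors] < weights[proposals]
        ancestors = np.where(accept, proposals, ancestors)
    return ancestors

# Toy usage: resample 8 particles with unnormalised weights.
w = np.array([0.1, 0.3, 0.05, 0.2, 0.05, 0.1, 0.15, 0.05])
print(metropolis_resample(w, B=50))
```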
Posted Content

Bayesian State-Space Modelling on High-Performance Hardware Using LibBi

TL;DR: LibBi is a software package for state-space modelling and Bayesian inference on modern computer hardware, including multi-core central processing units (CPUs), many-core graphics processing units (GPUs), and distributed-memory clusters of such devices.
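LibBi's own modelling language and interface are not shown here; purely to illustrate the class of models the package targets, the Python sketch below simulates a simple state-space model with a stochastic transition and a noisy observation at each time step. All names and parameter values are illustrative assumptions.

```python
import numpy as np

def simulate_ssm(T, sigma_x=0.5, sigma_y=1.0, rng=None):
    """Illustrative state-space model: a latent AR(1)-style state with
    Gaussian transition noise, observed with additive Gaussian noise at
    each time step.  Packages such as LibBi perform Bayesian inference
    (e.g. particle filtering / SMC) for models of this general form."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.zeros(T)
    y = np.zeros(T)
    x[0] = rng.normal(0.0, 1.0)                           # initial state
    y[0] = x[0] + rng.normal(0.0, sigma_y)                # first observation
    for t in range(1, T):
        x[t] = 0.9 * x[t - 1] + rng.normal(0.0, sigma_x)  # transition
        y[t] = x[t] + rng.normal(0.0, sigma_y)            # observation
    return x, y

x, y = simulate_ssm(100)
```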
Journal Article

Comparative Analysis of Dengue and Zika Outbreaks Reveals Differences by Setting and Virus.

TL;DR: Comparing three outbreaks of dengue and Zika virus in two different island settings in Micronesia, using a mathematical model of transmission dynamics and making full use of commonalities in disease and setting between the outbreaks, it is found that the estimated reproduction numbers for Zika and dengue were similar when considered in the same setting but, conversely, that the reproduction number for the same disease can vary considerably by setting.
Journal Article

GPU Acceleration of Runge-Kutta Integrators

TL;DR: This work considers the use of commodity graphics processing units (GPUs) for the common task of numerically integrating ordinary differential equations (ODEs), achieving speedups of up to 115-fold over comparable serial CPU implementations, and 15-fold over multithreaded CPU code with SIMD intrinsics.
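The CUDA kernels from this work are not reproduced here; the sketch below shows the underlying data-parallel pattern, a classical fourth-order Runge-Kutta step applied to a whole batch of independent ODE systems at once, with vectorised NumPy standing in for per-system GPU threads. The function names and the toy oscillator are illustrative assumptions.

```python
import numpy as np

def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step.  `y` has shape
    (n_systems, n_dims), so a single call advances a whole batch of
    independent ODE systems at once -- the same data-parallel pattern a
    GPU implementation exploits by assigning systems to threads."""
    k1 = f(t, y)
    k2 = f(t + 0.5 * h, y + 0.5 * h * k1)
    k3 = f(t + 0.5 * h, y + 0.5 * h * k2)
    k4 = f(t + h, y + h * k3)
    return y + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

# Toy usage: 10,000 independent harmonic oscillators d(x, v)/dt = (v, -x).
def oscillator(t, y):
    return np.stack([y[:, 1], -y[:, 0]], axis=1)

y = np.random.default_rng(0).normal(size=(10_000, 2))
for _ in range(100):
    y = rk4_step(oscillator, 0.0, y, h=0.01)
```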
Posted Content

Better together? Statistical learning in models made of modules

TL;DR: This work investigates why modular approaches might be preferable to the full model in misspecified settings and proposes principled criteria for choosing between modular and full-model approaches.
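The paper's examples are not reproduced here; as a toy illustration of the modular-versus-full-model distinction, the sketch below fits a two-module Gaussian model either jointly (so a misspecified second module feeds back into the shared parameter) or in a plug-in, "cut"-style fashion (the first module's estimate is fixed before fitting the second). The model, priors, and all names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy two-module Gaussian model (illustrative, not taken from the paper):
#   module 1:  y1_i ~ N(theta, 1)          (assumed well specified)
#   module 2:  y2_j ~ N(theta + phi, 1)    (here deliberately biased)
# with priors theta ~ N(0, 10^2) and phi ~ N(0, 1^2).
n1, n2 = 100, 100
theta_true, phi_true, bias = 1.0, 0.5, 2.0
y1 = rng.normal(theta_true, 1.0, size=n1)
y2 = rng.normal(theta_true + phi_true + bias, 1.0, size=n2)

# Full-model posterior mean (Gaussian conjugacy, via the joint precision
# matrix): the misspecified second module feeds back into theta.
prec = np.array([[n1 + n2 + 1 / 100.0, n2],
                 [n2, n2 + 1.0]])
lin = np.array([n1 * y1.mean() + n2 * y2.mean(),
                n2 * y2.mean()])
theta_full, phi_full = np.linalg.solve(prec, lin)

# Modular ("cut"-style) alternative: estimate theta from module 1 alone,
# then plug it into module 2 when estimating phi, blocking feedback.
theta_cut = n1 * y1.mean() / (n1 + 1 / 100.0)
phi_cut = n2 * (y2.mean() - theta_cut) / (n2 + 1.0)

print(f"full model   : theta = {theta_full:.2f}, phi = {phi_full:.2f}")
print(f"cut / modular: theta = {theta_cut:.2f}, phi = {phi_cut:.2f}")
```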