Institution
D. E. Shaw Research
Company · New York, New York, United States
About: D. E. Shaw Research is a research company based in New York, New York, United States. It is known for research contributions in the topics Massively parallel & G protein-coupled receptor. The organization has 233 authors who have published 273 publications receiving 38359 citations.
Topics: Massively parallel, G protein-coupled receptor, Protein structure, Protein folding, Binding site
Papers
25 May 2015 · TL;DR: This work uses the combination of packet filtering, in-network reductions and log-weight synchronization to decrease the communication requirements of MD simulations by as much as 51% on Anton 2, yielding application-level performance improvements of up to 14%.
Abstract: Parallel implementations of molecular dynamics (MD) simulation require significant inter-node communication, but off-chip communication bandwidth is not scaling as quickly as on-chip logic density. We present three network features targeting this problem that have been implemented in Anton 2, a massively parallel special-purpose supercomputer for MD simulations. The first is a mechanism to dynamically identify packets that do not need to be delivered to all endpoints within a multicast tree; these packets are filtered to conserve network bandwidth. The second is hardware for in-network reductions that supports over a thousand concurrent neighborhood reductions per node and fast all-to-all global reductions. The third is a log-weight synchronization mechanism for multicast-reduce communication patterns that can be used to efficiently detect the completion of reduction operations when the number of summands is difficult to predict. We use the combination of packet filtering, in-network reductions and log-weight synchronization to decrease the communication requirements of MD simulations by as much as 51% on Anton 2, yielding application-level performance improvements of up to 14%.
28 citations
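The completion-detection problem the abstract describes (a reduction whose summand count is hard to predict) can be illustrated with a toy weight-conservation model: a multicast packet carries a conserved weight that is split among children, and the reduction point knows it has seen every summand once the collected weights sum back to the original total. This is only a sketch of the general idea; the tree, names, and weight-splitting scheme below are assumptions, not Anton 2's actual hardware mechanism.

```python
from fractions import Fraction

def multicast(tree, node, weight, deliveries):
    # Recursively deliver a packet down a multicast tree, splitting a
    # conserved weight among children. In this toy model, the reduction
    # point detects completion when the collected weights sum to 1,
    # even though the number of summands was not known in advance.
    children = tree.get(node, [])
    if not children:  # leaf node: contributes one summand
        deliveries.append((node, weight))
        return
    share = weight / len(children)
    for child in children:
        multicast(tree, child, share, deliveries)

# Hypothetical multicast tree: subtrees have different fan-outs, so the
# summand count varies and cannot be predicted from the root alone.
tree = {"root": ["a", "b"], "a": ["a1", "a2", "a3"], "b": ["b1"]}
deliveries = []
multicast(tree, "root", Fraction(1), deliveries)
total = sum(w for _, w in deliveries)
print(len(deliveries), total)  # 4 summands, collected weight 1
```

Exact rational weights (`Fraction`) are used so the completion check is exact; a hardware scheme would instead need a compact, lossless encoding of the split weights.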
••
TL;DR: A Bayesian machine learning framework is provided for the rational design of improved, personalized radiotherapy plans using mathematical modeling and patient multimodal medical scans to infer tumor cell density in GBM patients.
Abstract: Glioblastoma is a highly invasive brain tumor, whose cells infiltrate surrounding normal brain tissue beyond the lesion outlines visible in the current medical scans. These infiltrative cells are treated mainly by radiotherapy. Existing radiotherapy plans for brain tumors derive from population studies and scarcely account for patient-specific conditions. Here we provide a Bayesian machine learning framework for the rational design of improved, personalized radiotherapy plans using mathematical modeling and patient multimodal medical scans. Our method, for the first time, integrates complementary information from high resolution MRI scans and highly specific FET-PET metabolic maps to infer tumor cell density in glioblastoma patients. The Bayesian framework quantifies imaging and modeling uncertainties and predicts patient-specific tumor cell density with confidence intervals. The proposed methodology relies only on data acquired at a single time point and thus is applicable to standard clinical settings. An initial clinical population study shows that the radiotherapy plans generated from the inferred tumor cell infiltration maps spare more healthy tissue thereby reducing radiation toxicity while yielding comparable accuracy with standard radiotherapy protocols. Moreover, the inferred regions of high tumor cell densities coincide with the tumor radioresistant areas, providing guidance for personalized dose-escalation. The proposed integration of multimodal scans and mathematical modeling provides a robust, non-invasive tool to assist personalized radiotherapy design.
28 citations
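The core statistical move in the abstract above is to infer an unobserved tumor-infiltration quantity from noisy scan data and report it with credible intervals rather than a point estimate. A minimal one-parameter sketch of that idea, assuming a toy exponential decay profile, a Gaussian noise model, and a uniform prior grid (none of which are the authors' actual reaction-diffusion model or imaging likelihood):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 30, 40)          # distance from lesion boundary (mm, assumed)
true_lam = 8.0                      # assumed infiltration length scale
obs = np.exp(-x / true_lam) + rng.normal(0, 0.05, x.size)  # synthetic "scan"

lams = np.linspace(1, 20, 400)      # uniform prior grid over the length scale
# Gaussian likelihood with an assumed noise sigma of 0.05
loglik = np.array([-0.5 * np.sum((obs - np.exp(-x / l)) ** 2) / 0.05**2
                   for l in lams])
post = np.exp(loglik - loglik.max())
post /= post.sum()

# Posterior mean and a 95% credible interval quantify the uncertainty
cdf = np.cumsum(post)
lo = lams[np.searchsorted(cdf, 0.025)]
hi = lams[np.searchsorted(cdf, 0.975)]
mean = float(np.sum(lams * post))
print(round(mean, 1), round(lo, 1), round(hi, 1))
```

The interval, not the point estimate, is what would drive a dose plan in the paper's setting: regions whose inferred density is uncertain can be treated conservatively.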
••
TL;DR: Under physiological conditions, the KcsA outer vestibule undergoes relatively large dynamic rearrangements upon inactivation, suggesting that subunits must dynamically come in close proximity as the channels undergo inactivation.
27 citations
••
24 Oct 2008 · TL;DR: Together, these features allow Anton to perform pairwise interactions with very high throughput and unusually low latency, enabling MD simulations on time scales inaccessible to other general- and special-purpose parallel systems.
Abstract: Anton is a massively parallel special-purpose supercomputer designed to accelerate molecular dynamics (MD) simulations by several orders of magnitude, making possible for the first time the atomic-level simulation of many biologically important phenomena that take place over microsecond to millisecond time scales. The majority of the computation required for MD simulations involves the calculation of pairwise interactions between particles and/or gridpoints separated by no more than some specified cutoff radius. In Anton, such range-limited interactions are handled by a high-throughput interaction subsystem (HTIS). The HTIS on each of Anton's 512 ASICs includes 32 computational pipelines running at 800 MHz, each producing a result on every cycle that would require approximately 50 arithmetic operations to compute on a general-purpose processor. In order to feed these pipelines and collect their results at a speed sufficient to take advantage of this computational power, Anton uses two novel techniques to limit inter- and intra-chip communication. The first is a recently developed parallelization algorithm for the range-limited N-body problem that offers major advantages in both asymptotic and absolute terms by comparison with traditional methods. The second is an architectural feature that processes pairs of points chosen from two point sets in time proportional to the product of the sizes of those sets, but with input and output volume proportional only to their sum. Together, these features allow Anton to perform pairwise interactions with very high throughput and unusually low latency, enabling MD simulations on time scales inaccessible to other general- and special-purpose parallel systems.
27 citations
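The "product of the sizes, sum of the volumes" property described in the abstract can be illustrated in software: ship in |A| + |B| points, evaluate all |A| × |B| candidate pairs internally, and emit only the in-range results. The function name and the placeholder 1/r² "interaction" below are illustrative assumptions, not Anton's pipeline arithmetic.

```python
import itertools

def pairwise_within_cutoff(set_a, set_b, cutoff):
    # Input volume is |A| + |B| points; work done is |A| * |B| pair
    # evaluations; only pairs inside the cutoff radius produce output.
    results = []
    for (i, a), (j, b) in itertools.product(enumerate(set_a), enumerate(set_b)):
        r2 = sum((ax - bx) ** 2 for ax, bx in zip(a, b))
        if 0 < r2 <= cutoff * cutoff:
            results.append((i, j, 1.0 / r2))  # placeholder pair interaction
    return results

a = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
b = [(0.5, 0.0, 0.0), (5.0, 0.0, 0.0)]
pairs = pairwise_within_cutoff(a, b, cutoff=2.0)
print(len(pairs))  # only the in-range pairs survive the cutoff test
```

In hardware this asymmetry between communication (sum) and computation (product) is what lets 32 pipelines stay fed without off-chip bandwidth growing with the pair count.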
••
TL;DR: Using an analytical expression for the standard deviation measurement error of a single image which is a function of the total number of detected photons, the background photon noise, and the camera pixel size, it is shown that single-molecule standard deviation measurements offer nanometer precision for a large range of experimental parameters.
Abstract: Standard deviation measurements of intensity profiles of stationary single fluorescent molecules are useful for studying axial localization, molecular orientation, and a fluorescence imaging system's spatial resolution. Here we report on the analysis of the precision of standard deviation measurements of intensity profiles of single fluorescent molecules imaged using an EMCCD camera. We have developed an analytical expression for the standard deviation measurement error of a single image which is a function of the total number of detected photons, the background photon noise, and the camera pixel size. The theoretical results agree well with the experimental, simulation, and numerical integration results. Using this expression, we show that single-molecule standard deviation measurements offer nanometer precision for a large range of experimental parameters.
26 citations
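The qualitative claim above (precision of the standard deviation estimate improves with the number of detected photons) can be checked with a small Monte Carlo simulation. The PSF width, pixel size, and absence of a background term below are assumptions for illustration; this is not the paper's analytical expression.

```python
import numpy as np

rng = np.random.default_rng(1)
sigma_psf = 100.0   # nm, assumed PSF standard deviation
pixel = 50.0        # nm, assumed camera pixel size

def estimated_sigma(n_photons):
    # Draw photon arrival positions from the PSF, pixelate them by
    # rounding to the pixel grid, and estimate the profile's std.
    photons = rng.normal(0.0, sigma_psf, n_photons)
    binned = np.round(photons / pixel) * pixel
    return binned.std()

def spread(n_photons, trials=200):
    # Trial-to-trial scatter of the std estimate = measurement precision
    return np.std([estimated_sigma(n_photons) for _ in range(trials)])

low, high = spread(100), spread(10000)
print(low > high)  # more photons -> tighter sigma estimate
```

The scatter shrinks roughly as 1/√N, which is why the paper's expression is dominated by the total detected photon count, with pixel size and background entering as corrections.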
Authors
Showing all 236 results
| Name | H-index | Papers | Citations |
|---|---|---|---|
| Richard A. Friesner | 97 | 367 | 52729 |
| Burkhard Rost | 93 | 322 | 38606 |
| Efthimios Kaxiras | 92 | 509 | 34924 |
| David E. Shaw | 88 | 298 | 42616 |
| Ron O. Dror | 70 | 188 | 27249 |
| Adriaan P. IJzerman | 62 | 399 | 18706 |
| Sheng Meng | 57 | 326 | 12307 |
| Mark A. Murcko | 53 | 130 | 14347 |
| Kresten Lindorff-Larsen | 47 | 162 | 16222 |
| Isaiah T. Arkin | 42 | 105 | 5058 |
| Stefano Piana | 40 | 61 | 14065 |
| Bronwyn MacInnis | 40 | 84 | 8500 |
| Kevin J. Bowers | 36 | 99 | 7197 |
| David W. Borhani | 34 | 70 | 6068 |
| Anton Arkhipov | 32 | 72 | 4831 |