Author

Philip R. Baldwin

Bio: Philip R. Baldwin is an academic researcher from Baylor College of Medicine. The author has contributed to research on topics including image processing and the habenula. The author has an h-index of 21 and has co-authored 35 publications receiving 6,605 citations. Previous affiliations of Philip R. Baldwin include the Veterans Health Administration and the Michael E. DeBakey Veterans Affairs Medical Center in Houston.

Papers
Journal Article
TL;DR: EMAN2 has been under development for the last two years; it features a completely refactored image-processing library and a wide range of features that make it much more flexible and extensible than EMAN1.

2,852 citations

Journal Article
TL;DR: EMAN (Electron Micrograph ANalysis), a software package for performing semiautomated single-particle reconstructions from transmission electron micrographs, was written from scratch in C++ and is provided free of charge on the Web site.

2,551 citations

Journal Article
TL;DR: SPARX is a new image-processing environment with a particular emphasis on transmission electron microscopy (TEM) structure determination. It includes a graphical user interface that provides a complete graphical programming environment, an extensive library of Python scripts that perform specific TEM-related computational tasks, and a core library of fundamental C++ image-processing functions.

395 citations

Journal Article
TL;DR: Using electrochemical detection of rapid dopamine release in the striatum of freely moving rats, the authors establish that a single dynamic model can capture all the measured fluctuations in dopamine delivery.
Abstract: Activity changes in a large subset of midbrain dopamine neurons fulfill numerous assumptions of learning theory by encoding a prediction error between actual and predicted reward. This computational interpretation of dopaminergic spike activity invites the important question of how changes in spike rate are translated into changes in dopamine delivery at target neural structures. Using electrochemical detection of rapid dopamine release in the striatum of freely moving rats, we established that a single dynamic model can capture all the measured fluctuations in dopamine delivery. This model revealed three independent short-term adaptive processes acting to control dopamine release. These short-term components generalized well across animals and stimulation patterns and were preserved under anesthesia. The model has implications for the dynamic filtering interposed between changes in spike production and forebrain dopamine release.

185 citations
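The dynamic model summarized above can be illustrated with a minimal kick-and-relax sketch: each spike releases an amount of dopamine scaled by the product of several adaptive gain factors, and each gain is multiplicatively "kicked" by a spike and then relaxes exponentially back to baseline. The simulate_release helper, the choice of three components, and all kick sizes and time constants below are illustrative assumptions, not parameters taken from the paper.

    import numpy as np

    def simulate_release(spike_times, dt=0.01, t_max=10.0,
                         kicks=(1.2, 0.8, 0.95), taus=(0.3, 2.0, 20.0)):
        # Toy kick-and-relax model: per-spike release is scaled by the product of
        # three adaptive gains; each gain is multiplied by its "kick" at every
        # spike and relaxes exponentially back to 1 with its own time constant (s).
        kicks = np.asarray(kicks, dtype=float)
        taus = np.asarray(taus, dtype=float)
        t = np.arange(0.0, t_max, dt)
        spike_bins = set(np.round(np.asarray(spike_times) / dt).astype(int))
        gains = np.ones_like(taus)
        release = np.zeros_like(t)
        for i in range(t.size):
            gains += (1.0 - gains) * (dt / taus)   # relaxation toward baseline
            if i in spike_bins:
                release[i] = gains.prod()          # amount released by this impulse
                gains *= kicks                     # spike-triggered adaptation
        return t, release

With these illustrative parameters, simulate_release(np.arange(1.0, 3.0, 0.05)) shows per-impulse release declining over a sustained 20 Hz train, as the slow depressing components accumulate faster than they recover.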

Journal Article
01 May 2016 - Methods
TL;DR: This manuscript provides the first detailed description of the high-resolution single-particle analysis pipeline and the philosophy behind its approach to the reconstruction problem.

159 citations


Cited by
Journal Article
TL;DR: Described here are developments that reduce the computational costs of the underlying maximum a posteriori (MAP) algorithm, as well as statistical considerations that yield new insights into the accuracy with which the relative orientations of individual particles may be determined.

4,554 citations

Journal Article
09 Nov 2018 - eLife
TL;DR: In the third major release of RELION, CPU-based vector acceleration has been added alongside GPU support, providing flexibility in the use of resources and avoiding memory limitations.
Abstract: Here, we describe the third major release of RELION. CPU-based vector acceleration has been added in addition to GPU support, which provides flexibility in the use of resources and avoids memory limitations. Reference-free autopicking with Laplacian-of-Gaussian filtering and execution of jobs from Python allow non-interactive processing during acquisition, including 2D-classification, de novo model generation and 3D-classification. Per-particle refinement of CTF parameters and correction of estimated beam tilt provide higher-resolution reconstructions when particles are at different heights in the ice and/or coma-free alignment has not been optimal. Ewald sphere curvature correction improves resolution for large particles. We illustrate these developments with publicly available data sets: together with a Bayesian approach to beam-induced motion correction, these changes lead to resolution improvements of 0.2-0.7 Å compared to previous RELION versions.

3,520 citations
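The reference-free autopicking mentioned in the RELION-3 abstract uses Laplacian-of-Gaussian (LoG) filtering. A minimal sketch of the general idea (not RELION's actual implementation) is to filter the micrograph with a LoG kernel tuned to the expected particle radius and keep local maxima of the response that exceed a threshold; the function name log_pick, the sigma-to-radius relation and the z-score threshold are illustrative assumptions.

    import numpy as np
    from scipy.ndimage import gaussian_laplace, maximum_filter

    def log_pick(micrograph, particle_radius_px, threshold=3.0):
        # The LoG response is strongest for blobs of radius ~ sigma * sqrt(2),
        # so derive sigma from the expected particle radius (in pixels).
        sigma = particle_radius_px / np.sqrt(2.0)
        # Sign convention assumes particles appear dark on a brighter background;
        # negate the response if your micrographs have inverted contrast.
        response = gaussian_laplace(micrograph.astype(np.float32), sigma)
        z = (response - response.mean()) / (response.std() + 1e-12)
        # Keep pixels that are local maxima of the response and exceed the threshold.
        is_peak = maximum_filter(z, size=int(2 * particle_radius_px)) == z
        ys, xs = np.nonzero(is_peak & (z > threshold))
        return list(zip(xs.tolist(), ys.tolist()))

The returned (x, y) coordinates would then be extracted as particle boxes; in practice a minimum inter-particle distance and per-micrograph threshold adjustment are also needed.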

Journal Article
TL;DR: Modifications to the CTFFIND algorithm are described that make it significantly faster and more suitable for use with images collected using modern technologies such as dose fractionation and phase plates.

3,512 citations

Journal Article
01 Oct 2019
TL;DR: Recent developments in the Phenix software package are described in the context of macromolecular structure determination using X-rays, neutrons and electrons.
Abstract: Diffraction (X-ray, neutron and electron) and electron cryo-microscopy are powerful methods to determine three-dimensional macromolecular structures, which are required to understand biological processes and to develop new therapeutics against diseases. The overall structure-solution workflow is similar for these techniques, but nuances exist because the properties of the reduced experimental data are different. Software tools for structure determination should therefore be tailored for each method. Phenix is a comprehensive software package for macromolecular structure determination that handles data from any of these techniques. Tasks performed with Phenix include data-quality assessment, map improvement, model building, the validation/rebuilding/refinement cycle and deposition. Each tool caters to the type of experimental data. The design of Phenix emphasizes the automation of procedures, where possible, to minimize repetitive and time-consuming manual tasks, while default parameters are chosen to encourage best practice. A graphical user interface provides access to many command-line features of Phenix and streamlines the transition between programs, project tracking and re-running of previous tasks.

3,268 citations

Journal Article
TL;DR: The main aim of Gctf is to maximize the cross-correlation between a simulated CTF and the logarithmic amplitude spectra of observed micrographs after background subtraction, thereby refining the CTF parameters of all particles for subsequent image processing.

2,919 citations
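The fitting target in the Gctf summary can be made concrete with a small sketch: simulate a one-dimensional CTF for a trial defocus, then score it against the background-subtracted, rotationally averaged logarithmic amplitude spectrum by normalized cross-correlation and keep the best-scoring defocus. The weak-phase CTF expression is standard, but the helper names (simulated_ctf, fit_defocus), the 300 kV wavelength, spherical-aberration and amplitude-contrast defaults, and the plain grid search are illustrative assumptions rather than Gctf's actual implementation.

    import numpy as np

    def simulated_ctf(s, defocus_A, wavelength_A=0.0197, cs_mm=2.7, amp_contrast=0.07):
        # Common weak-phase CTF approximation; s is spatial frequency in 1/Angstrom,
        # defocus and wavelength are in Angstrom (0.0197 A ~ 300 kV electrons).
        cs_A = cs_mm * 1e7
        gamma = (np.pi * wavelength_A * defocus_A * s**2
                 - 0.5 * np.pi * cs_A * wavelength_A**3 * s**4
                 + np.arcsin(amp_contrast))
        return -np.sin(gamma)

    def fit_defocus(log_spectrum_1d, freqs, defocus_grid):
        # Grid search for the defocus whose squared CTF best matches the
        # background-subtracted log amplitude spectrum (normalized cross-correlation).
        signal = log_spectrum_1d - log_spectrum_1d.mean()
        best_dz, best_cc = None, -np.inf
        for dz in defocus_grid:
            model = simulated_ctf(freqs, dz) ** 2
            model = model - model.mean()
            cc = np.dot(signal, model) / (np.linalg.norm(signal) * np.linalg.norm(model) + 1e-12)
            if cc > best_cc:
                best_dz, best_cc = dz, cc
        return best_dz, best_cc

For example, fit_defocus(spectrum, freqs, np.arange(5000, 40000, 100)) scans defoci from 0.5 to 4 micrometres in 100 Å steps and returns the best trial value together with its correlation score.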