
PHENIX: a comprehensive Python-based system for macromolecular structure solution

electronic reprint
ISSN: 2059-7983
journals.iucr.org/d

PHENIX: a comprehensive Python-based system for macromolecular structure solution

Paul D. Adams, Pavel V. Afonine, Gábor Bunkóczi, Vincent B. Chen, Ian W. Davis, Nathaniel Echols, Jeffrey J. Headd, Li-Wei Hung, Gary J. Kapral, Ralf W. Grosse-Kunstleve, Airlie J. McCoy, Nigel W. Moriarty, Robert Oeffner, Randy J. Read, David C. Richardson, Jane S. Richardson, Thomas C. Terwilliger and Peter H. Zwart

Acta Cryst. (2010). D66, 213–221

IUCr Journals
CRYSTALLOGRAPHY JOURNALS ONLINE

This open-access article is distributed under the terms of the Creative Commons Attribution Licence http://creativecommons.org/licenses/by/2.0/uk/legalcode, which permits unrestricted use, distribution, and reproduction in any medium, provided the original authors and source are cited.
research papers

Acta Crystallographica Section D: Biological Crystallography
ISSN 0907-4449
Acta Cryst. (2010). D66, 213–221 | doi:10.1107/S0907444909052925

PHENIX: a comprehensive Python-based system for macromolecular structure solution

Paul D. Adams,(a,b)* Pavel V. Afonine,(a) Gábor Bunkóczi,(c) Vincent B. Chen,(d) Ian W. Davis,(d) Nathaniel Echols,(a) Jeffrey J. Headd,(d) Li-Wei Hung,(e) Gary J. Kapral,(d) Ralf W. Grosse-Kunstleve,(a) Airlie J. McCoy,(c) Nigel W. Moriarty,(a) Robert Oeffner,(c) Randy J. Read,(c) David C. Richardson,(d) Jane S. Richardson,(d) Thomas C. Terwilliger(e) and Peter H. Zwart(a)

(a) Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA; (b) Department of Bioengineering, UC Berkeley, CA 94720, USA; (c) Department of Haematology, University of Cambridge, Cambridge Institute for Medical Research, Wellcome Trust/MRC Building, Cambridge CB2 0XY, England; (d) Department of Biochemistry, Duke University Medical Center, Durham, NC 27710, USA; (e) Los Alamos National Laboratory, Los Alamos, NM 87545, USA

Current address: GrassRoots Biotechnology, 598 Airport Boulevard, Morrisville, NC 27560, USA.

Correspondence e-mail: pdadams@lbl.gov
Macromolecular X-ray crystallography is routinely applied to understand biological processes at a molecular level. However, significant time and effort are still required to solve and complete many of these structures because of the need for manual interpretation of complex numerical data using many software packages and the repeated use of interactive three-dimensional graphics. PHENIX has been developed to provide a comprehensive system for macromolecular crystallographic structure solution with an emphasis on the automation of all procedures. This has relied on the development of algorithms that minimize or eliminate subjective input, the development of algorithms that automate procedures that are traditionally performed by hand and, finally, the development of a framework that allows a tight integration between the algorithms.

Keywords: PHENIX; Python; algorithms.

Received 8 October 2009
Accepted 9 December 2009

A version of this paper will be published as a chapter in the new edition of Volume F of International Tables for Crystallography.
1. Foundations
1.1. PHENIX architecture
The PHENIX (Adams et al., 2002) architecture is designed
from the ground up as a hybrid system of tightly integrated
interpreted (‘scripted’) and compiled software modules. A mix
of scripted and compiled components is invariably found in
all major successful crystallographic packages, but often the
scripting is added as an afterthought in an ad hoc fashion using
tools that predate the object-oriented programming era. While
such ad hoc systems are quickly established, they tend to
become a severe maintenance burden as they grow. In addi-
tion, users are often forced into many time-consuming routine
tasks such as manually converting file formats. In PHENIX,
the scripting layer is the heart of the system. With only a few
exceptions, all major functionality is implemented as modules
that are exclusively accessed via the scripting interfaces. The
object-oriented Python scripting language (Lutz & Ascher,
1999) is used for this purpose. In about two decades, a large
developer/user community has produced millions of lines of
highly uniform, interoperable, mature and openly available
sources covering all aspects of programming ranging from
simple file handling to highly sophisticated network commu-
nication and fully featured cross-platform graphical interfaces.
Embedding crystallographic methods into this environment
enables an unprecedented degree of automation, stability and
portability. By design, the object-oriented programming
model fosters shared collaborative development by multiple
groups. It is routine practice to hierarchically recombine
modules written by different groups into ever more complex
procedures that appear uniform from the outside. A more
detailed overview of the key software technology leading to all
these advances, presented in the context of crystallography,
can be found in Grosse-Kunstleve et al. (2002).
In addition to the advantages outlined in the previous
paragraph, the scripting language is generally most efficient
for the rapid development of new algorithms. However, run-
time performance considerations often dictate that numeri-
cally intensive calculations are eventually implemented in a
compiled language. The first choice of a compiled language is
of course to reuse the same language environment as used for
the scripting language itself, which is a C/C++ environment.
Not only is this the mainstream software environment on all
major platforms used today, but with probably hundreds of
millions of lines of C/C++ sources in existence it is an
environment that is virtually guaranteed to thrive in the long term.
An in-depth discussion of the combined use of Python and
C++ can be found in Grosse-Kunstleve et al. (2002) and
Abrahams & Grosse-Kunstleve (2003). This model is used
throughout the PHENIX system.
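As a minimal sketch of this division of labour (the module and function names below are hypothetical, not the actual PHENIX/cctbx API), a scripted interface can transparently delegate to a compiled implementation when one is available, while keeping a pure-Python reference version:

```python
# Sketch of the scripted/compiled split described above (hypothetical
# names; PHENIX itself exposes Boost.Python-wrapped C++ via cctbx).
def structure_factor_norms(amplitudes):
    """Numerically intensive kernel: pure-Python reference version."""
    return [a * a for a in amplitudes]

try:
    # In a hybrid system the same interface would be served by a
    # compiled extension module with identical call signature.
    from fast_ext import structure_factor_norms  # hypothetical C++ module
except ImportError:
    pass  # fall back to the scripted implementation above

print(structure_factor_norms([1.0, 2.0, 3.0]))  # -> [1.0, 4.0, 9.0]
```

Callers never see which implementation answered, which is what lets numerics migrate to C++ without disturbing the scripting layer.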
1.2. Graphical user interface
A new graphical user interface (GUI) for PHENIX was
introduced in version 1.4. It uses the open-source wxPython
toolkit, which provides a ‘native’ look on each operating
system. Development has focused on providing interfaces
around the existing command-line programs with minimal
modification, using the same underlying configuration system
(libtbx.phil) as used by most PHENIX programs as a template
to automatically generate controls. Because these programs
are implemented primarily as Python modules, complex data
including models, reflections and other viewable data may be
exchanged with the GUI without resorting to parsing log files.
The current PHENIX release (version 1.5) includes GUIs for
phenix.refine (Afonine et al., 2005), phenix.xtriage (Zwart et
al., 2005), the AutoSol (Terwilliger et al., 2009), AutoBuild
(Terwilliger, Grosse-Kunstleve, Afonine, Moriarty, Adams et
al., 2008) and LigandFit (Terwilliger et al., 2006) wizards, the
restraints editor REEL, all of the validation tools and several
utilities for creating and manipulating maps and reflection
files. More recent builds of PHENIX contain a new GUI for
the AutoMR wizard and future releases will include a new
interface for Phaser (McCoy et al., 2007).
Intrinsically graphical data is visualized with embedded
graphs (using the free matplotlib Python library) or a simple
OpenGL viewer. This simplifies the most complex parameters,
such as atom selections in phenix.refine, which can be visual-
ized or picked interactively with the built-in viewer. The GUI
also serves as a platform for additional automation and user
customization. Similarly to the CCP4 interface (CCP4i;
Potterton et al., 2003), PHENIX manages data and task
history for separate user-defined projects. Default parameters
and input files can be specified for each project; for instance,
the generation of ligand restraints from the phenix.refine GUI
gives the user the option of automatically loading these
restraints in future runs.
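A toy illustration of this configuration-driven GUI generation (the real system uses the libtbx.phil grammar; the simplified syntax, the parameter names and the flat "widget" representation below are assumptions for illustration):

```python
# Toy stand-in for libtbx.phil-driven control generation: parse flat
# dotted-path parameters into a tree, then emit one control per leaf.
PARAMS = """\
refinement.main.number_of_macro_cycles = 3
refinement.main.ordered_solvent = False
refinement.target_weights.wxc_scale = 0.5
"""

def parse(text):
    """Parse dotted-path `key = value` lines into a nested dict."""
    root = {}
    for line in text.splitlines():
        path, _, value = line.partition("=")
        keys = path.strip().split(".")
        node = root
        for key in keys[:-1]:
            node = node.setdefault(key, {})
        node[keys[-1]] = value.strip()
    return root

def make_controls(tree, prefix=""):
    """Yield one (label, default) 'widget' per leaf parameter."""
    for key, value in tree.items():
        name = f"{prefix}{key}"
        if isinstance(value, dict):
            yield from make_controls(value, name + ".")
        else:
            yield (name, value)

tree = parse(PARAMS)
for label, default in make_controls(tree):
    print(label, "=>", default)
```

Because the GUI walks the parameter definition rather than hand-written forms, adding a parameter to a program automatically adds a control to its interface.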
The popularity of Python as a scientific programming
language has led to its use in many other structural-biology
applications, especially molecular-graphics software. The
PHENIX GUI includes extension modules for the modeling
programs Coot (Emsley & Cowtan, 2004) and PyMOL
(DeLano, 2002), both of which are controlled remotely from
PHENIX using the XML-RPC protocol. This allows the
interfaces to integrate seamlessly; any model or map in
PHENIX can be automatically opened in Coot with a single
click. In programs that iteratively rebuild or refine structures,
such as AutoBuild and phenix.refine, the current model and
maps will be continually updated in Coot and/or PyMOL as
soon as they are available. In the validation utilities, clicking
on any atom or residue flagged for poor statistics will recentre
the graphics windows on that atom. Remote control of the
PHENIX GUI is also simple using the same protocol, and
simple extensions to the Coot interface provide direct
launching of phenix.refine with a model pre-loaded.
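The XML-RPC channel described above can be sketched entirely with the Python standard library; the method name `load_model` and the single-call workflow here are illustrative assumptions, not Coot's or PyMOL's real remote API:

```python
# Minimal XML-RPC control channel: a 'viewer' registers a method, the
# 'GUI' side calls it remotely. Method name load_model is hypothetical.
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

opened = []  # records models the 'viewer' was asked to open

server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(lambda path: opened.append(path) or "ok",
                         "load_model")
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# The GUI side: one remote call opens a model in the viewer.
viewer = ServerProxy(f"http://127.0.0.1:{port}")
print(viewer.load_model("model_refine_001.pdb"))  # -> ok
server.shutdown()
```

Both directions work the same way, which is why a click in a validation table can recentre the graphics window, and a Coot menu item can launch phenix.refine.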
2. Analysis of experimental data
PHENIX has a range of tools for the analysis, validation and
manipulation of X-ray diffraction data. A comprehensive tool
for analyzing X-ray diffraction data is phenix.xtriage (Zwart et
al., 2005), which carries out tests ranging from space-group
determination and detection of twinning to detection of
anomalous signal. These tests provide the user and the various
wizards with a set of statistics that characterize a data set. For
analysis of twinning, phenix.xtriage consolidates a number of
statistics to provide a balanced verdict of possible symmetry
and twin-related issues with the data. Phenix.xtriage provides
the user with feedback on the overall characteristics of the
data. Routine usage of phenix.xtriage during or immediately
after data collection has resulted in the timely discovery of
twinning or other issues (Flynn et al., 2007; Kostelecky et al.,
2009). Detection of these idiosyncrasies in the data typically
reduces the overall effort in a successful structure determi-
nation.
A likelihood-based estimation of the overall anisotropic
scale factor is performed using the likelihood formalism
described by Popov & Bourenkov (2003). Database-derived
standard Wilson plots for proteins and nucleic acids are used
to detect anomalies in the mean intensity. These anomalies
may arise from ice rings or other issues (Morris et al., 2004).
Data strength and low-resolution completeness are also
analysed. The presence of anomalous signal is detected by
analysis of the measurability, a quantity expressing the frac-
tion of statistically significant Bijvoet differences in a data set
(Zwart, 2005). The native Patterson function is used to detect
the presence of pseudo-translational symmetry. A database-
derived empirical distribution of maximum peak heights is
used to assign significance to detected peaks in the Patterson
function.
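The measurability idea can be sketched as the fraction of Bijvoet pairs whose anomalous difference is statistically significant; the 3-sigma cutoff and the exact significance test below are illustrative assumptions, not the precise PHENIX criterion:

```python
# Hedged sketch of 'measurability': fraction of acentric Bijvoet pairs
# with a significant anomalous difference (cutoff is an assumption).
def measurability(pairs, cutoff=3.0):
    """pairs: (F_plus, sig_plus, F_minus, sig_minus) per reflection."""
    significant = 0
    for fp, sp, fm, sm in pairs:
        delta = abs(fp - fm)
        sigma = (sp * sp + sm * sm) ** 0.5  # sigma of the difference
        if sigma > 0 and delta / sigma > cutoff:
            significant += 1
    return significant / len(pairs) if pairs else 0.0

data = [(120.0, 2.0, 110.0, 2.0),   # significant Bijvoet difference
        (85.0, 3.0, 84.0, 3.0)]     # not significant
print(measurability(data))  # -> 0.5
```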
A comprehensive automated twinning analysis is performed.
Twin laws are derived from first principles to facilitate
the identification of pseudo-merohedral cases. Amplitude and
intensity ratios, ⟨|E² − 1|⟩ values, the L-statistic (Padilla &
Yeates, 2003) and N(Z) plots are derived from data cut to
the resolution limit suggested by the data-strength analysis.

The removal of shells of data with relatively high noise content
greatly improves the automated interpretation of these
statistics. A Britton plot, H-test and a likelihood-derived
approach are used to estimate twin fractions when twin laws
are present. If a model has been supplied, an R versus R
(Lebedev et al., 2006) analysis is carried out. This type of
analysis is of particular use when dealing with pseudo-
symmetry, space-group problems and twinning (Zwart et al.,
2008).
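The ⟨|E² − 1|⟩ statistic above has well-known reference values: about 0.736 for untwinned acentric data and about 0.541 for a perfect twin. A simple-normalization sketch (using the mean intensity, rather than resolution-binned normalization) recovers the untwinned value from simulated Wilson-distributed intensities:

```python
# Illustrative <|E^2 - 1|> computation; untwinned acentric data should
# give ~0.736. Normalization here is the simple mean-intensity form.
import random

def e2_minus_1(intensities):
    mean_i = sum(intensities) / len(intensities)
    return sum(abs(i / mean_i - 1.0) for i in intensities) / len(intensities)

random.seed(0)
# Acentric Wilson statistics: intensity is exponentially distributed.
untwinned = [random.expovariate(1.0) for _ in range(100000)]
print(round(e2_minus_1(untwinned), 3))
```

Twinning averages pairs of unrelated intensities, narrowing the distribution and pulling this value down toward 0.541, which is what the automated analysis looks for.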
To test for inconsistent indexing between different data sets,
a set of reindexing laws is derived from first principles given
the unit cells and space groups of the sample and reference
data sets. A correlation analysis suggests the most likely choice
of reindexing of the data. Analysis of the metric symmetry of
the unit cell provides a number of likely point groups. A
likelihood-inspired method is used to suggest the most likely
point group of the data. Subsequent analysis of systematic
absences in a likelihood framework ranks subsequent space-
group possibilities (details to be published).
3. Substructure determination, phasing and molecular
replacement
After ensuring that the diffraction data are sound and
understood, the next critical necessity for solving a structure is
the determination of phases using one of several strategies
(Adams, Afonine et al., 2009).
3.1. Substructure determination
The substructure-determination procedure implemented as
phenix.hyss (Hybrid Substructure Search; Grosse-Kunstleve &
Adams, 2003) combines the multi-trial dual-space recycling
approaches pioneered by Shake-and-Bake (Miller et al., 1994)
and later SHELXD (Sheldrick, 2008) with the use of the fast
translation function (Navaza & Vernoslova, 1995; Grosse-
Kunstleve & Brunger, 1999). The fast translation function is
the basis for a systematic search in the Patterson function
(performed in reciprocal space), in contrast to the stoc hastic
alternative of SHELXD (performed in direct space).
Phenix.hyss is the only substructure-determination program to
fully integrate automatic comparison of the substructures
found in multiple trials via a Euclidean Model Matching
procedure (part of the cctbx open-source libraries). This
allows phenix.hyss to detect if the same solution was found
multiple times and to terminate automatically if this is the
case. Extensive tests with a variety of SAD data sets (Grosse-
Kunstleve & Adams, 2003) have led to a parameterization of
the procedure that balances runtime considerations and the
likelihood that repeated solutions present the correct
substructure. In many cases the procedure finishes in seconds
if the substructure is detectable from the input data.
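A much-simplified stand-in for the trial-comparison step: two substructure solutions can be judged equivalent by comparing their sorted interatomic-distance fingerprints, which are invariant to rotation and translation. (The real cctbx Euclidean Model Matching also handles space-group symmetry, allowed origin shifts and hand inversion; this sketch is only an illustration of the idea.)

```python
# Simplified solution comparison: same multiset of interatomic
# distances (within tolerance) => likely the same substructure.
def distance_fingerprint(sites):
    dists = []
    for i in range(len(sites)):
        for j in range(i + 1, len(sites)):
            d = sum((a - b) ** 2 for a, b in zip(sites[i], sites[j])) ** 0.5
            dists.append(d)
    return sorted(dists)

def same_solution(trial_a, trial_b, tol=0.1):
    fa, fb = distance_fingerprint(trial_a), distance_fingerprint(trial_b)
    return len(fa) == len(fb) and all(abs(x - y) <= tol
                                      for x, y in zip(fa, fb))

s1 = [(0.0, 0.0, 0.0), (3.0, 0.0, 0.0), (0.0, 4.0, 0.0)]
s2 = [(1.0, 1.0, 0.0), (1.0, 5.0, 0.0), (4.0, 1.0, 0.0)]  # shifted copy
print(same_solution(s1, s2))  # -> True
```

Detecting a repeated solution is the termination signal: once independent trials converge on one substructure, further searching adds little.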
3.2. Phasing
Phaser, available in PHENIX as phenix.phaser, applies the
principle of maximum likelihood to solving crystal structures
by molecular replacement, by single-wavelength anomalous
diffraction (SAD) or by a combination of both. The likelihood
targets take proper account of the effects of different sources
of error (and, in the case of SAD phasing, their correlations)
and allow different sources of information to be combined. In
solving a molecular-replacement problem with a number of
different components, the information gained from a partial
solution increases the signal in the search for subsequent
components. Because the likelihood scores for different
models can be directly compared, decisions among models can
readily be made as part of automation strategies (discussed
below).
3.3. Noncrystallographic symmetry (NCS)
Noncrystallographic symmetry is an important feature of
many macromolecular crystals that can be used to greatly
improve electron-density maps. PHENIX has tools for the
identification of NCS and for using NCS and multiple crystal
forms of a macromolecule in phase improvement.
Phenix.find_ncs and phenix.simple_ncs_from_pdb are tools
for the identification of noncrystallographic symmetry in a
structure using information from a heavy-atom substructure
or an atomic model. Phenix.simple_ncs_from_pdb will identify
NCS and generate transformations from the chains in a model
in a PDB file. Phenix.find_ncs will identify NCS from either a
heavy-atom substructure (Terwilliger, 2002a) or the chains in a
PDB file and will then compare this NCS with the density in a
map to verify that the NCS is actually present.
Phenix.multi_crystal_average is a method for combining
information from several crystal forms of a structure. It is
especially well suited to cases where each crystal form has its
own NCS, adjusting phases for each crystal form so that all the
NCS copies in all crystals are as similar as possible.
NCS restraints should normally be applied in density
modification and model building in all cases except where
there is clear evidence that NCS is not present. In density
modification within PHENIX the presence of NCS is identi-
fied from the heavy-atom sites or from an atomic model if
available. The local correlation of density in NCS-related
locations is then used automatically to set variable restraints
on NCS symmetry in the map. In refinement, NCS symmetry is
applied through coordinate restraints, targeting the positions
of each NCS copy relative to those of the other NCS-related
chains. The default NCS restraints in PHENIX are very tight,
with targets of 0.05 Å r.m.s. At resolutions lower than about
2.5 Å these tight restraints on NCS should usually be applied.
At higher resolutions it may be appropriate to use looser
restraints or to remove them altogether. Additionally, if there
are segments of the chains that clearly do not obey the NCS
relationships they should be excluded from the NCS restraints.
Normally this is performed automatically, but it can also be
specified explicitly.
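Checking a pair of NCS copies against the tight 0.05 Å r.m.s. default can be sketched directly; superposition is omitted here, so the copies are assumed to be already aligned, and the coordinates are made up for illustration:

```python
# R.m.s. coordinate deviation between two aligned NCS copies,
# compared against the 0.05 A default target mentioned above.
def rmsd(copy_a, copy_b):
    n = len(copy_a)
    sq = sum(sum((a - b) ** 2 for a, b in zip(pa, pb))
             for pa, pb in zip(copy_a, copy_b))
    return (sq / n) ** 0.5

chain_a = [(10.0, 5.0, 2.0), (11.5, 5.2, 2.1)]
chain_b = [(10.02, 5.01, 2.0), (11.52, 5.18, 2.13)]  # NCS-related copy
print(rmsd(chain_a, chain_b) < 0.05)  # -> True
```

Segments whose deviation stays well above such a target are the ones that should be excluded from the NCS restraints.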
4. Model building, ligand fitting and nucleic acids
Key steps in the analysis of a macromolecular crystal structure
are building an initial core model, identification and fitting of

ligands into the electron-density map and building an atomic
model for loop regions that are less well defined than the
majority of the structure. PHENIX has tools for rapid model
building of secondary structure and main-chain tracing
(phenix.find_helices_strands) and for the fitting of flexible
ligands (phenix.ligandfit) as well as for fitting a set of ligands
to a map (phenix.find_all_ligands) and for the identification
of ligands in a map (phenix.ligand_identification). PHENIX
additionally has a tool for the fitting of missing loops (phenix.
fit_loops). Validation tools are provided so that the models
produced can be validated at each step along the way.
4.1. Model building
Phenix.find_helices_strands will rapidly build a secondary-
structure-only model into a map or very rapidly trace the
polypeptide backbone of a model into a map. To build
secondary structure in a map, phenix.find_helices_strands
identifies α-helical regions and β-strand segments, models
idealized helices and strands into the corresponding density,
allowing for bending of the helices and strands, and assembles
these into a composite model. To very rapidly trace the main
chain in a map, phenix.find_helices_strands finds points along
ridgelines of high density where Cα atoms might be located,
identifies pairs and then triplets of these Cα atoms that have
density between the atoms and plausible geometry, constructs
all possible connections of these Cα atoms into nonamers and
then identifies all the longest possible chains that can be made
by joining the nonamers. This process can build a Cα model at
a rate of about 20 residues per second, yielding a backbone
model that can readily be interpreted visually or automatically
to evaluate the quality of the map that it is based on.
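The chain-assembly step can be caricatured as joining fragments wherever one fragment's tail overlaps another's head and keeping the longest result; the real nonamer assembly scores density and geometry, but the greedy join below (short tuples of Cα site ids stand in for nonamers) shows the shape of the idea:

```python
# Toy fragment joining: extend a seed chain whenever another
# fragment's head overlaps the chain's tail; avoid revisiting sites.
def join_fragments(frags, overlap=2):
    chain = list(max(frags, key=len))  # seed with a longest fragment
    grew = True
    while grew:
        grew = False
        for f in frags:
            if list(f[:overlap]) == chain[-overlap:]:
                ext = list(f[overlap:])
                if ext and not set(ext) & set(chain):
                    chain.extend(ext)
                    grew = True
    return chain

frags = [(1, 2, 3), (2, 3, 4), (3, 4, 5), (8, 9, 7)]
print(join_fragments(frags))  # -> [1, 2, 3, 4, 5]
```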
Phenix.fit_loops will fit missing loops in an atomic model.
It uses RESOLVE model building (Terwilliger, 2003a,b,c) to
extend the chain from either end where a loop is missing and
to connect the chains into a loop with the expected number of
residues.
4.2. Ligand fitting
Phenix.ligandfit is a tool for fitting a flexible ligand into
an electron-density map (Terwilliger et al., 2006). The key
approaches used are breaking the ligand into its component
rigid-body parts, finding where each of these can be placed
into density, tracing the remainder of the ligand based on the
positions of these core rigid-body parts and recombining the
best parts of multiple fits while scoring based on the fit to the
density.
Phenix.find_all_ligands is a tool for finding all the instances
of each of several ligands in an electron-density map.
Phenix.find_all_ligands finds the largest contiguous region of
unused density in a map and uses phenix.ligandfit to fit each
supplied ligand into that density. It then chooses the ligand
that has the highest real-space correlation to the density
(Terwilliger, Adams et al., 2007). It then repeats this process
until no ligands can be satisfactorily fitted into any remaining
density in the map.
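The greedy loop just described can be sketched in one dimension; the density values, blob detection and scoring function below are all toy stand-ins for the real map analysis and real-space correlation:

```python
# Toy find_all_ligands loop: find the largest unused density blob,
# pick the best-scoring ligand for it, mask it out, repeat.
def largest_blob(density, used, cutoff=1.0):
    best, cur = [], []
    for i, rho in enumerate(density):
        if rho > cutoff and i not in used:
            cur.append(i)
        else:
            if len(cur) > len(best):
                best = cur
            cur = []
    return best if len(best) >= len(cur) else cur

def find_all_ligands(density, ligands, score):
    placements, used = [], set()
    while True:
        blob = largest_blob(density, used)
        if not blob:
            break
        name = max(ligands, key=lambda lig: score(lig, blob))
        placements.append((name, blob[0]))
        used.update(blob)
    return placements

density = [0.1, 2.0, 2.2, 1.8, 0.2, 1.5, 1.6, 0.1]
# Toy score: prefer ligands whose nominal size matches the blob.
score = lambda lig, blob: len(blob) - abs(len(blob) - {"ATP": 3, "GOL": 2}[lig])
print(find_all_ligands(density, ["ATP", "GOL"], score))
# -> [('ATP', 1), ('GOL', 5)]
```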
Phenix.ligand_identification is a tool for identifying which
ligands are compatible with unknown electron density in a
map (Terwilliger, Adams et al., 2007). It can search using the
200 most common ligands from the PDB or from a user-
supplied list of ligands. Phenix.ligand_identification uses
phenix.ligandfit to fit each ligand to the map and identifies the
best-fitting ligand using the real-space correlation and surface
complementarity of the ligand and the atoms in the structure
surrounding the ligand-binding site.
4.3. RNA and DNA
In common with most macromolecular crystallographic
tools, PHENIX was originally developed with protein struc-
tures primarily in mind. Now that nucleic acids, and especially
RNA, are increasingly important in large biological structures,
the system is being modified in places where subtle differences
in procedure are needed rather than just the relevant libraries.
Model building in phenix.autobuild now has a preliminary
set of nucleic acid procedures that take advantage of the
relatively well determined phosphate and base positions, as
well as the preponderance of double helix, and that make use
of the RNA backbone conformers recently defined by the
RNA Ontology Consortium (Richardson et al., 2008). Nucleic
acid structures benefit significantly from torsion-angle refine-
ment, which has recently been added to the options in
phenix.refine. A principal problem in RNA models is getting
the ribose pucker correct, although it is known to consist
almost entirely of either C3′-endo (which is commoner and
that found in the A-form helix) or C2′-endo (Altona &
Sundaralingam, 1972). MolProbity uses the perpendicular
distance from the 3′ phosphate to the line of the C1′—N1/9
glycosidic bond as a reliable diagnostic of ribose pucker
(Davis et al., 2007; Chen et al., 2010). This same test has now
been built into phenix.refine to allow the use of pucker-specific
target parameters for bond lengths, angles and torsions
(Gelbin et al., 1996) rather than the uneasy compromise values
(Parkinson et al., 1996) used in most pucker-agnostic refine-
ment. Currently, if an incorrect pucker is diagnosed it must
usually be fixed by user rebuilding, for instance in Coot
(Emsley & Cowtan, 2004) or in RNABC (Wang et al., 2008). A
rebuilding functionality will probably be incorporated into
PHENIX soon, but in the meantime the refinement will now
correctly maintain the geometry of a C2′-endo pucker once it
has been built and identified using conformation-specific
residue names.
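The perpendicular-distance diagnostic is plain point-to-line geometry: the distance from the following (3′) phosphate to the line through the glycosidic C1′—N bond. The ~2.9 Å decision boundary and the coordinates below are approximate, illustrative assumptions, not the exact MolProbity parameters:

```python
# Geometric sketch of the ribose-pucker diagnostic described above.
def point_line_distance(p, a, b):
    """Distance from point p to the infinite line through a and b."""
    ab = [bi - ai for ai, bi in zip(a, b)]
    ap = [pi - ai for ai, pi in zip(a, p)]
    # |ab x ap| / |ab|
    cx = ab[1] * ap[2] - ab[2] * ap[1]
    cy = ab[2] * ap[0] - ab[0] * ap[2]
    cz = ab[0] * ap[1] - ab[1] * ap[0]
    cross = (cx * cx + cy * cy + cz * cz) ** 0.5
    norm = sum(v * v for v in ab) ** 0.5
    return cross / norm

def likely_pucker(p3_phosphate, c1_prime, n_base, boundary=2.9):
    d = point_line_distance(p3_phosphate, c1_prime, n_base)
    return "C3'-endo" if d > boundary else "C2'-endo"

# Idealized coordinates, hypothetical and for illustration only.
print(likely_pucker((3.4, 0.0, 0.0), (0.0, 0.0, 0.0), (0.0, 1.5, 0.0)))
# -> C3'-endo
```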
4.4. Maps, models and avoiding bias
Phenix.refine (and the graphical tool phenix.create_maps)
can produce various types of maps, including anomalous
difference, maximum-likelihood weighted
(p*mFobs − q*DFmodel)exp(iαmodel) and regular
(p*Fobs − q*Fmodel)exp(iαmodel), where p and q are any
user-defined numbers, filled and kick maps. The coefficients
m and D of likelihood-weighted maps (Read, 1986) are
computed using test-set reflections as described in Lunin &
Skovoroda (1995) and Urzhumtsev et al. (1996).
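Assembling one such coefficient as a complex structure factor is straightforward; the numeric values below are made up for illustration, whereas in PHENIX the weights m and D come from the likelihood analysis:

```python
# Sketch of one map coefficient, (p*m*Fobs - q*D*Fmodel)*exp(i*phi),
# built as a complex number. Values are illustrative only.
import cmath

def map_coefficient(p, q, m, d, f_obs, f_model, phi_model_deg):
    amplitude = p * m * f_obs - q * d * f_model
    return amplitude * cmath.exp(1j * cmath.pi * phi_model_deg / 180.0)

# The familiar 2mFo - DFc likelihood-weighted map corresponds to p=2, q=1.
coeff = map_coefficient(p=2, q=1, m=0.9, d=0.8, f_obs=100.0,
                        f_model=95.0, phi_model_deg=60.0)
print(round(abs(coeff), 1))  # -> 104.0
```

Setting p = q = 1 instead gives the likelihood-weighted difference map, which is why exposing p and q as user-defined numbers covers the common map types with one formula.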

References

Lutz, M. & Ascher, D. (1999). Learning Python. Sebastopol: O'Reilly.