
Showing papers on "Software" published in 2010


Journal ArticleDOI
TL;DR: Coot is a molecular-graphics program designed to assist in the building of protein and other macromolecular models; the current state of development and the available features are presented.
Abstract: Coot is a molecular-graphics application for model building and validation of biological macromolecules. The program displays electron-density maps and atomic models and allows model manipulations such as idealization, real-space refinement, manual rotation/translation, rigid-body fitting, ligand search, solvation, mutations, rotamers and Ramachandran idealization. Furthermore, tools are provided for model validation as well as interfaces to external programs for refinement, validation and graphics. The software is designed to be easy to learn for novice users, which is achieved by ensuring that tools for common tasks are `discoverable' through familiar user-interface elements (menus and toolbars) or by intuitive behaviour (mouse controls). Recent developments have focused on providing tools for expert users, with customisable key bindings, extensions and an extensive scripting interface. The software is under rapid development, but has already achieved very widespread use within the crystallographic community. The current state of the software is presented, with a description of the facilities available and of some of the underlying methods employed.

22,053 citations


Journal ArticleDOI
TL;DR: The Open Visualization Tool (OVITO) as discussed by the authors is a 3D visualization software designed for post-processing atomistic data obtained from molecular dynamics or Monte Carlo simulations, which is written in object-oriented C++, controllable via Python scripts and easily extendable through a plug-in interface.
Abstract: The Open Visualization Tool (OVITO) is a new 3D visualization software designed for post-processing atomistic data obtained from molecular dynamics or Monte Carlo simulations. Unique analysis, editing and animation functions are integrated into its easy-to-use graphical user interface. The software is written in object-oriented C++, controllable via Python scripts and easily extendable through a plug-in interface. It is distributed as open-source software and can be downloaded from the website http://ovito.sourceforge.net/.
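
For readers who want to try the scripting route, here is a minimal sketch assuming the present-day ovito Python package; the scripting interface has evolved considerably since this 2010 paper, so the names below reflect the current API, and the dump file path is hypothetical.

    # Load a trajectory, attach a modifier, and inspect the results.
    from ovito.io import import_file
    from ovito.modifiers import CoordinationAnalysisModifier

    pipeline = import_file("dump.lammpstrj")  # hypothetical LAMMPS dump
    pipeline.modifiers.append(CoordinationAnalysisModifier(cutoff=3.2))

    data = pipeline.compute(0)                  # evaluate frame 0
    print(data.particles.count)                 # number of particles
    print(data.particles["Coordination"][...])  # per-particle results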

8,956 citations


Journal ArticleDOI
TL;DR: VOSviewer’s ability to handle large maps is demonstrated by using the program to construct and display a co-citation map of 5,000 major scientific journals.
Abstract: We present VOSviewer, a freely available computer program that we have developed for constructing and viewing bibliometric maps. Unlike most computer programs that are used for bibliometric mapping, VOSviewer pays special attention to the graphical representation of bibliometric maps. The functionality of VOSviewer is especially useful for displaying large bibliometric maps in an easy-to-interpret way. The paper consists of three parts. In the first part, an overview of VOSviewer’s functionality for displaying bibliometric maps is provided. In the second part, the technical implementation of specific parts of the program is discussed. Finally, in the third part, VOSviewer’s ability to handle large maps is demonstrated by using the program to construct and display a co-citation map of 5,000 major scientific journals.

7,719 citations


Journal ArticleDOI
TL;DR: publCIF is an application designed for creating, editing and validating crystallographic information files (CIFs) that are used in journal publication; it provides a web interface to the checkCIF service of the International Union of Crystallography (IUCr), which performs a full crystallographic analysis of the structural data.
Abstract: publCIF is an application designed for creating, editing and validating crystallographic information files (CIFs) that are used in journal publication. It validates syntax and dictionary-defined data attributes through internal routines, and also provides a web interface to the checkCIF service of the International Union of Crystallography (IUCr), which provides a full crystallographic analysis of the structural data. The graphical interface allows users to edit the CIF either in its `raw' ASCII form (using a text editor with context-sensitive data validation and input facilities) or as a formatted representation of a structure report (using a word-processing environment), as well as via a number of convenience tools (e.g. spreadsheet representations of looped data). Beyond file and data validation, publCIF provides access to resources to facilitate preparation of a structure report (e.g. databases of author details, experimental data, standard references etc., either distributed with the program or collected during its use), along with tools for reference parsing, spell checking, structure visualization and image management. publCIF was commissioned by the IUCr, both as free software for authors and as a tool for in-house journal production; the tool for authors is described here. Binary distributions for Linux, MacOS and Windows operating systems are available.
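
For context, the `raw' ASCII form that publCIF edits is the CIF tag-value syntax; the fragment below is hand-written for illustration (invented values), showing the kind of content the program validates against the CIF dictionaries.

    data_example
    _cell_length_a                  10.2340(5)
    _cell_length_b                   7.8812(4)
    _cell_length_c                  12.0511(6)
    _cell_angle_beta               103.221(2)
    _symmetry_space_group_name_H-M  'P 21/c'
    loop_
      _atom_site_label
      _atom_site_fract_x
      _atom_site_fract_y
      _atom_site_fract_z
      C1  0.1234  0.5678  0.9101
      O1  0.2345  0.6789  0.0123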

4,836 citations


22 May 2010
TL;DR: This work describes a Natural Language Processing software framework which is based on the idea of document streaming, i.e. processing corpora document after document, in a memory-independent fashion, and implements several popular algorithms for topical inference, including Latent Semantic Analysis and Latent Dirichlet Allocation, in a way that makes them completely independent of the training corpus size.
Abstract: Large corpora are ubiquitous in today's world and memory quickly becomes the limiting factor in practical applications of the Vector Space Model (VSM). We identify a gap in existing VSM implementations: their scalability and ease of use. We describe a Natural Language Processing software framework which is based on the idea of document streaming, i.e. processing corpora document after document, in a memory-independent fashion. In this framework, we implement several popular algorithms for topical inference, including Latent Semantic Analysis and Latent Dirichlet Allocation, in a way that makes them completely independent of the training corpus size. Particular emphasis is placed on straightforward and intuitive framework design, so that modifications and extensions of the methods and/or their application by interested practitioners are effortless. We demonstrate the usefulness of our approach on a real-world scenario of computing document similarities within DML-CZ, an existing digital library.
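
The framework described here was released as the open-source gensim package. A minimal sketch of the bag-of-words-to-LSA workflow, assuming that package (the three-document corpus is a toy stand-in for a streamed one):

    from gensim import corpora, models

    documents = [
        "human machine interface for lab computer applications",
        "a survey of user opinion of computer system response time",
        "the generation of random binary unordered trees",
    ]
    texts = [doc.lower().split() for doc in documents]

    # The dictionary maps tokens to integer ids; in real use the corpus
    # is iterated document by document, keeping memory use constant.
    dictionary = corpora.Dictionary(texts)
    corpus = [dictionary.doc2bow(text) for text in texts]

    # Latent Semantic Analysis over the bag-of-words corpus.
    lsi = models.LsiModel(corpus, id2word=dictionary, num_topics=2)
    query = dictionary.doc2bow("human computer interaction".split())
    print(lsi[query])  # topic-space representation of the query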

3,965 citations


Journal ArticleDOI
TL;DR: The working principles of important steps in processing rotation data are described as employed by the program XDS.
Abstract: Important steps in the processing of rotation data are described that are common to most software packages. These programs differ in the details and in the methods implemented to carry out the tasks. Here, the working principles underlying the data-reduction package XDS are explained, including the new features of automatic determination of spot size and reflecting range, recognition and assignment of crystal symmetry and a highly efficient algorithm for the determination of correction/scaling factors.
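
XDS itself is driven by a plain-text XDS.INP file whose JOB keyword runs the processing steps named above; a trimmed, hypothetical example (all values are placeholders):

    ! Minimal, hypothetical XDS.INP; all values are placeholders.
    JOB= XYCORR INIT COLSPOT IDXREF DEFPIX INTEGRATE CORRECT
    NAME_TEMPLATE_OF_DATA_FRAMES= ../frames/image_????.cbf
    DATA_RANGE= 1 360
    OSCILLATION_RANGE= 0.5
    X-RAY_WAVELENGTH= 0.9793
    DETECTOR_DISTANCE= 180.0
    ORGX= 1221.5 ORGY= 1257.8
    SPACE_GROUP_NUMBER= 0    ! 0 lets XDS determine the symmetry
    UNIT_CELL_CONSTANTS= 0 0 0 0 0 0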

2,096 citations


Journal ArticleDOI
TL;DR: This unit provides step-by-step protocols describing how to get started working with µManager, as well as some starting points for advanced use of the software.
Abstract: With the advent of digital cameras and motorization of mechanical components, computer control of microscopes has become increasingly important. Software for microscope image acquisition should not only be easy to use, but also enable and encourage novel approaches. The open-source software package µManager aims to fulfill those goals. This unit provides step-by-step protocols describing how to get started working with µManager, as well as some starting points for advanced use of the software.
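
Beyond the GUI, the core can be scripted; a minimal acquisition sketch assuming the MMCorePy Python binding distributed with Micro-Manager (MMConfig_demo.cfg is the demo hardware configuration that ships with the installer):

    import MMCorePy

    core = MMCorePy.CMMCore()
    core.loadSystemConfiguration("MMConfig_demo.cfg")  # demo hardware

    core.setExposure(50)   # exposure time in milliseconds
    core.snapImage()       # acquire a single frame
    img = core.getImage()  # returns a numpy array of pixel values
    print(img.shape, img.dtype)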

1,604 citations


Book
18 Jan 2010
TL;DR: This introductory text to statistical machine translation (SMT) provides all of the theories and methods needed to build a statistical machine translator, such as Google Language Tools and Babelfish, and the companion website provides open-source corpora and tool-kits.
Abstract: This introductory text to statistical machine translation (SMT) provides all of the theories and methods needed to build a statistical machine translator, such as Google Language Tools and Babelfish. In general, statistical techniques allow automatic translation systems to be built quickly for any language-pair using only translated texts and generic software. With increasing globalization, statistical machine translation will be central to communication and commerce. Based on courses and tutorials, and classroom-tested globally, it is ideal for instruction or self-study, for advanced undergraduates and graduate students in computer science and/or computational linguistics, and researchers in natural language processing. The companion website provides open-source corpora and tool-kits.
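
The statistical approach the book builds on is commonly summarized by the noisy-channel formulation (a standard equation, stated here for orientation rather than quoted from the book): the best translation e of a foreign sentence f maximizes the product of a translation model learned from translated texts and a language model,

    \hat{e} = \arg\max_{e} \; p(e \mid f) = \arg\max_{e} \; p(f \mid e)\, p(e).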

1,538 citations


Journal ArticleDOI
TL;DR: PKSolver provides pharmacokinetic researchers with a fast and easy-to-use tool for routine and basic PK and PD data analysis; it has a user-friendly interface, and its output can be generated in Microsoft Word as an integrated report.
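
As an illustration of the kind of routine calculation such a tool automates, here is a generic non-compartmental AUC computed by the linear trapezoidal rule in Python; the data are invented and this is not PKSolver's own code:

    import numpy as np

    t = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0, 12.0])  # time, h
    c = np.array([0.0, 4.2, 6.1, 5.0, 3.2, 1.4, 0.6])   # conc., mg/L

    auc = np.trapz(c, t)  # sum of trapezoid areas between samples
    print("AUC(0-12 h) = %.2f mg*h/L" % auc)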

1,493 citations


Journal ArticleDOI
TL;DR: The HADDOCK web server protocol is presented, facilitating the modeling of biomolecular complexes for a wide community; the server has access to the resources of a dedicated cluster and of the e-NMR GRID infrastructure.
Abstract: Computational docking is the prediction or modeling of the three-dimensional structure of a biomolecular complex, starting from the structures of the individual molecules in their free, unbound form. HADDOCK is a popular docking program that takes a data-driven approach to docking, with support for a wide range of experimental data. Here we present the HADDOCK web server protocol, facilitating the modeling of biomolecular complexes for a wide community. The main web interface is user-friendly, requiring only the structures of the individual components and a list of interacting residues as input. Additional web interfaces allow the more advanced user to exploit the full range of experimental data supported by HADDOCK and to customize the docking process. The HADDOCK server has access to the resources of a dedicated cluster and of the e-NMR GRID infrastructure. Therefore, a typical docking run takes only a few minutes to prepare and a few hours to complete.
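
The "list of interacting residues" is turned into ambiguous interaction restraints (AIRs) expressed in CNS restraint syntax; a hand-written illustration with hypothetical residue numbers:

    ! Illustrative AIR: residue 45 of chain A must come within the
    ! stated distance bounds of residue 23 or 25 of chain B.
    assign (segid A and resid 45)
           ((segid B and resid 23) or (segid B and resid 25)) 2.0 2.0 0.0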

Journal ArticleDOI
TL;DR: This paper provides a comprehensive literature review on the automated analysis of feature models 20 years after their invention and presents a conceptual framework to understand the different proposals as well as categorise future contributions.

Journal ArticleDOI
TL;DR: ObsPy as discussed by the authors is a Python toolbox that simplifies the usage of Python programming for seismologists by providing direct access to the actual time series, allowing the use of powerful numerical array-programming modules like NumPy (http://numpy.scipy.org), as well as filtering, instrument simulation, triggering, and plotting.
Abstract: The wide variety of computer platforms, file formats, and methods to access seismological data often requires considerable effort in preprocessing such data. Although preprocessing work-flows are mostly very similar, few software standards exist to accomplish this task. The objective of ObsPy is to provide a Python toolbox that simplifies the usage of Python programming for seismologists. It is conceptually similar to SEATREE (Milner and Thorsten 2009) or the exploration seismic software project MADAGASCAR (http://www.reproducibility.org). In ObsPy the following essential seismological processing routines are implemented and ready to use: reading and writing of data in the formats SEED/MiniSEED and Dataless SEED (http://www.iris.edu/manuals/SEEDManual_V2.4.pdf), XML-SEED (Tsuboi et al. 2004), GSE2 (http://www.seismo.ethz.ch/autodrm/downloads/provisional_GSE2.1.pdf) and SAC (http://www.iris.edu/manuals/sac/manual.html), as well as filtering, instrument simulation, triggering, and plotting. There is also support to retrieve data from ArcLink (a distributed data request protocol for accessing archived waveform data, see Hanka and Kind 1994) or a SeisHub database (Barsch 2009). Just recently, modules were added to read SEISAN data files (Havskov and Ottemoller 1999) and to retrieve data with the IRIS/FISSURES data handling interface (DHI) protocol (Malone 1997). Python gives the user all the features of a full-fledged programming language including a large collection of scientific open-source modules. ObsPy extends Python by providing direct access to the actual time series, allowing the use of powerful numerical array-programming modules like NumPy (http://numpy.scipy.org) or SciPy (http://scipy.org). Results can be visualized using modules such as matplotlib (2D) (Hunter 2007) or MayaVi (3D) (http://code.enthought.com/projects/mayavi/). This is an advantage over the most commonly used seismological analysis packages SAC, SEISAN, SeismicHandler (Stammler 1993), or PITSA (Scherbaum and Johnson 1992), which do not provide methods for general numerical array manipulation. Because Python and its previously mentioned modules are open-source, there …
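
A minimal sketch of the toolbox in use, assuming the current obspy package layout (the waveform file name is hypothetical):

    from obspy import read

    st = read("example.mseed")  # Stream of Trace objects
    st.filter("bandpass", freqmin=1.0, freqmax=10.0)  # in-place filter
    print(st)   # per-trace summary (network, station, sampling rate)
    st.plot()   # quick matplotlib-based waveform preview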

Journal ArticleDOI
TL;DR: The aim of this paper is to present OpenMEEG, both from the theoretical and the practical point of view, and to compare its performance with other competing software packages, showing that it represents the state of the art for forward computations.
Abstract: Interpreting and controlling bioelectromagnetic phenomena require realistic physiological models and accurate numerical solvers. A semi-realistic model often used in practice is the piecewise constant conductivity model, for which only the interfaces have to be meshed. This simplified model makes it possible to use Boundary Element Methods. Unfortunately, most Boundary Element solutions are confronted with accuracy issues when the conductivity ratio between neighboring tissues is high, as for instance the scalp/skull conductivity ratio in electro-encephalography. To overcome this difficulty, we proposed a new method called the symmetric BEM, which is implemented in the OpenMEEG software. The aim of this paper is to present OpenMEEG, both from the theoretical and the practical point of view, and to compare its performances with other competing software packages. We have run a benchmark study in the field of electro- and magneto-encephalography, in order to compare the accuracy of OpenMEEG with other freely distributed forward solvers. We considered spherical models, for which analytical solutions exist, and we designed randomized meshes to assess the variability of the accuracy. Two measures were used to characterize the accuracy: the Relative Difference Measure and the Magnitude ratio. The comparisons were run, either with a constant number of mesh nodes, or a constant number of unknowns across methods. Computing times were also compared. We observed more pronounced differences in accuracy in electroencephalography than in magnetoencephalography. The methods could be classified in three categories: the linear collocation methods, that run very fast but with low accuracy, the linear collocation methods with isolated skull approach for which the accuracy is improved, and OpenMEEG that clearly outperforms the others. As far as speed is concerned, OpenMEEG is on par with the other methods for a constant number of unknowns, and is hence faster for a prescribed accuracy level. This study clearly shows that OpenMEEG represents the state of the art for forward computations. Moreover, our software development strategies have made it handy to use and to integrate with other packages. The bioelectromagnetic research community should therefore be able to benefit from OpenMEEG with a limited development effort.
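
For reference, the two measures have standard definitions in the BEM-validation literature (stated here for context, not quoted from the paper): with g_num the numerically computed and g_ana the analytical forward field,

    \mathrm{RDM} = \left\| \frac{g_{\mathrm{num}}}{\|g_{\mathrm{num}}\|} - \frac{g_{\mathrm{ana}}}{\|g_{\mathrm{ana}}\|} \right\| \in [0, 2],
    \qquad
    \mathrm{MAG} = \frac{\|g_{\mathrm{num}}\|}{\|g_{\mathrm{ana}}\|},

so RDM = 0 and MAG = 1 are ideal.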

Journal ArticleDOI
TL;DR: The design and validation of a cardiovascular image analysis software package (Segment) is presented, and its release in a source code format, freely available for research purposes, is announced.
Abstract: Commercially available software for cardiovascular image analysis often has limited functionality and frequently lacks the careful validation that is required for clinical studies. We have already implemented a cardiovascular image analysis software package and released it as freeware for the research community. However, it was distributed as a stand-alone application and other researchers could not extend it by writing their own custom image analysis algorithms. We believe that the work required to make a clinically applicable prototype can be reduced by making the software extensible, so that researchers can develop their own modules or improvements. Such an initiative might then serve as a bridge between image analysis research and cardiovascular research. The aim of this article is therefore to present the design and validation of a cardiovascular image analysis software package (Segment) and to announce its release in a source code format. Segment can be used for image analysis in magnetic resonance imaging (MRI), computed tomography (CT), single photon emission computed tomography (SPECT) and positron emission tomography (PET). Some of its main features include loading of DICOM images from all major scanner vendors, simultaneous display of multiple image stacks and plane intersections, automated segmentation of the left ventricle, quantification of MRI flow, tools for manual and general object segmentation, quantitative regional wall motion analysis, myocardial viability analysis and image fusion tools. Here we present an overview of the validation results and validation procedures for the functionality of the software. We describe a technique to ensure continued accuracy and validity of the software by implementing and using a test script that tests the functionality of the software and validates the output. The software has been made freely available for research purposes in a source code format on the project home page http://segment.heiberg.se . Segment is a well-validated comprehensive software package for cardiovascular image analysis. It is freely available for research purposes provided that relevant original research publications related to the software are cited.
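
The continued-validation idea (a script that re-runs analyses and checks outputs against previously validated references) can be sketched generically; everything below is hypothetical and is not Segment's actual code:

    import numpy as np

    def lv_volume(stack):
        """Stand-in for an automated segmentation + volume measure."""
        return float(np.sum(stack > 0.5)) * 0.001  # toy voxel volume, mL

    def test_lv_volume_regression():
        stack = np.load("reference_stack.npy")             # stored input
        expected = float(np.load("reference_volume.npy"))  # validated output
        assert abs(lv_volume(stack) - expected) < 1e-6, "output drifted"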

Journal ArticleDOI
TL;DR: The current version of Chronux includes software for signal processing of neural time-series data, including several specialized mini-packages for spike sorting, local regression, audio segmentation, and other data-analysis tasks typically encountered by a neuroscientist.

Journal ArticleDOI
TL;DR: DECOR is proposed, a method that embodies and defines all the steps necessary for the specification and detection of code and design smells, along with DETEX, a detection technique that instantiates this method, and an empirical validation of DETEX in terms of precision and recall.
Abstract: Code and design smells are poor solutions to recurring implementation and design problems. They may hinder the evolution of a system by making it hard for software engineers to carry out changes. We propose three contributions to the research field related to code and design smells: (1) DECOR, a method that embodies and defines all the steps necessary for the specification and detection of code and design smells, (2) DETEX, a detection technique that instantiates this method, and (3) an empirical validation in terms of precision and recall of DETEX. The originality of DETEX stems from the ability for software engineers to specify smells at a high level of abstraction using a consistent vocabulary and domain-specific language for automatically generating detection algorithms. Using DETEX, we specify four well-known design smells: the antipatterns Blob, Functional Decomposition, Spaghetti Code, and Swiss Army Knife, and their 15 underlying code smells, and we automatically generate their detection algorithms. We apply and validate the detection algorithms in terms of precision and recall on XERCES v2.7.0, and discuss the precision of these algorithms on 11 open-source systems.
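
As a simplified illustration of rule-based smell detection (a toy single-metric rule, not DECOR's domain-specific language), one can flag Blob-like classes by method count using Python's ast module:

    import ast

    SOURCE = (
        "class God:\n"
        "    def a(self): pass\n"
        "    def b(self): pass\n"
        "    def c(self): pass\n"
    )
    MAX_METHODS = 2  # toy threshold; real detectors combine many rules

    for node in ast.walk(ast.parse(SOURCE)):
        if isinstance(node, ast.ClassDef):
            n = sum(isinstance(m, ast.FunctionDef) for m in node.body)
            if n > MAX_METHODS:
                print("possible Blob: class %s (%d methods)" % (node.name, n))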

Journal ArticleDOI
TL;DR: The new version of mMass is based on a stand-alone Python library, which provides the basic functionality for data processing and interpretation, and can serve as a good starting point for other developers in their projects.
Abstract: While tools for the automated analysis of MS and LC-MS/MS data are continuously improving, it is still often the case that at the end of an experiment, the mass spectrometrist will spend time carefully examining individual spectra. Current software support is mostly provided only by the instrument vendors, and the available software tools are often instrument-dependent. Here we present a new generation of mMass, a cross-platform environment for the precise analysis of individual mass spectra. The software covers a wide range of processing tasks such as import from various data formats, smoothing, baseline correction, peak picking, deisotoping, charge determination, and recalibration. Functions presented in the earlier versions such as in silico digestion and fragmentation were redesigned and improved. In addition to Mascot, an interface for ProFound has been implemented. A specific tool is available for isotopic pattern modeling to enable precise data validation. The largest available lipid database (from...
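
A generic peak-picking step of the kind listed above can be sketched with SciPy (synthetic spectrum; this is not mMass's own API):

    import numpy as np
    from scipy.signal import find_peaks

    mz = np.linspace(500.0, 510.0, 2000)
    intensity = (np.exp(-(mz - 502.3) ** 2 / 0.001)
                 + 0.4 * np.exp(-(mz - 505.1) ** 2 / 0.001)
                 + 0.01 * np.random.rand(mz.size))  # synthetic spectrum

    peaks, _ = find_peaks(intensity, height=0.1, distance=20)
    for i in peaks:
        print("m/z %.3f  intensity %.3f" % (mz[i], intensity[i]))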

Journal ArticleDOI
TL;DR: The Theriak/Domino software as discussed by the authors uses a unique algorithm of scanning and bookkeeping, which allows the complete and automatic computation of a great variety of diagrams: phase diagrams, pseudo-binary, pseudo-ternary, isopleths, modal amounts, molar properties of single phases or bulk-rock properties like total ΔG, volume of solids, etc.
Abstract: In this paper, the term "equilibrium assemblage diagrams" refers to diagrams strictly based on assemblages predicted by Gibbs free energy minimization. The presented Theriak/Domino software uses a unique algorithm of scanning and bookkeeping, which makes it possible to compute, completely and automatically, a great variety of diagrams: phase diagrams, pseudo-binary, pseudo-ternary, isopleths, modal amounts, molar properties of single phases or bulk-rock properties like total ΔG, volume of solids, etc. Its speed and ease of use make thermodynamic modeling accessible to any student of Earth sciences and offer a powerful tool to check the consistency of thermodynamic databases, develop new solution models, plan experimental work, and understand natural systems. The examples described in this paper demonstrate the capacity of the software, but also show the usefulness and limitations of computed equilibrium assemblage diagrams. For most illustrations, a metapelite (TN205) from the eastern Lepontine Alps is used. The applications include the interpretation of complex diagrams, mineral reactions, the effect of Al content on the equilibrium assemblages, the interpretation of Si per formula unit in white mica, understanding some features of garnet growth, dehydration and isothermal compressibility, a broadening of the concept of AFM diagrams, combining equilibrium assemblage diagram information with thermobarometry, and comparing the results produced with different databases. Equilibrium assemblage diagrams do not always provide straightforward answers, but mostly stimulate further thought.
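
The computation underlying every such diagram is the standard constrained Gibbs energy minimization: for candidate phase amounts n_i with chemical potentials mu_i,

    \min_{n_i \ge 0} \; G = \sum_i n_i\, \mu_i(P, T, x)
    \quad \text{subject to} \quad \sum_i a_{ij}\, n_i = b_j \;\; \text{for each component } j,

where a_ij is the amount of system component j in one formula unit of i and b_j the bulk composition.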

Journal ArticleDOI
TL;DR: 3DLigandSite is a web server for the prediction of ligand-binding sites based upon successful manual methods used in the eighth round of the Critical Assessment of techniques for protein Structure Prediction (CASP8); it utilizes protein-structure prediction to provide structural models for proteins that have not been solved.
Abstract: 3DLigandSite is a web server for the prediction of ligand-binding sites. It is based upon successful manual methods used in the eighth round of the Critical Assessment of techniques for protein Structure Prediction (CASP8). 3DLigandSite utilizes protein-structure prediction to provide structural models for proteins that have not been solved. Ligands bound to structures similar to the query are superimposed onto the model and used to predict the binding site. In benchmarking against the CASP8 targets 3DLigandSite obtains a Matthews correlation coefficient (MCC) of 0.64, and coverage and accuracy of 71 and 60%, respectively, results similar to our manual performance in CASP8. In further benchmarking using a large set of protein structures, 3DLigandSite obtains an MCC of 0.68. The web server enables users to submit either a query sequence or structure. Predictions are visually displayed via an interactive Jmol applet. 3DLigandSite is available for use at http://www.sbg.bio.ic.ac.uk/3dligandsite.
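
For reference, the MCC quoted above is the standard binary-classification measure computed from the confusion matrix:

    \mathrm{MCC} = \frac{TP \cdot TN - FP \cdot FN}{\sqrt{(TP+FP)(TP+FN)(TN+FP)(TN+FN)}}.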

Journal ArticleDOI
TL;DR: PyRosetta is a stand-alone Python-based implementation of the Rosetta molecular modeling package that allows users to write custom structure prediction and design algorithms using the major Rosetta sampling and scoring functions.
Abstract: Summary: PyRosetta is a stand-alone Python-based implementation of the Rosetta molecular modeling package that allows users to write custom structure prediction and design algorithms using the major Rosetta sampling and scoring functions. PyRosetta contains Python bindings to libraries that define Rosetta functions including those for accessing and manipulating protein structure, calculating energies and running Monte Carlo-based simulations. PyRosetta can be used in two ways: (i) interactively, using iPython and (ii) script-based, using Python scripting. Interactive mode contains a number of help features and is ideal for beginners while script-mode is best suited for algorithm development. PyRosetta has similar computational performance to Rosetta, can be easily scaled up for cluster applications and has been implemented for algorithms demonstrating protein docking, protein folding, loop modeling and design. Availability: PyRosetta is a stand-alone package available at http://www.pyrosetta.org under the Rosetta license which is free for academic and non-profit users. A tutorial, user’s manual and sample scripts demonstrating usage are also available on the web site.
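
A minimal sketch of script-based use, assuming the current PyRosetta-4 namespace (the import layout has changed since this paper):

    import pyrosetta

    pyrosetta.init()                               # start the Rosetta runtime
    pose = pyrosetta.pose_from_sequence("AAAAAA")  # toy poly-alanine pose
    scorefxn = pyrosetta.get_fa_scorefxn()         # full-atom score function
    print("score:", scorefxn(pose))                # Rosetta energy of the pose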

Journal ArticleDOI
TL;DR: A software package is presented that allows researchers to use a computer mouse-tracking method for assessing real-time processing in psychological tasks; the software is validated by demonstrating the accuracy and reliability of its trajectory and reaction time data.
Abstract: In the present article, we present a software package, MouseTracker, that allows researchers to use a computer mouse-tracking method for assessing real-time processing in psychological tasks. By recording the streaming x-, y-coordinates of the computer mouse while participants move the mouse into one of multiple response alternatives, motor dynamics of the hand can reveal the time course of mental processes. MouseTracker provides researchers with fine-grained information about the real-time evolution of participant responses by sampling the online competition between multiple response alternatives 60–75 times/sec. MouseTracker allows researchers to develop and run experiments and subsequently analyze mouse trajectories in a user-interactive, graphics-based environment. Experiments may incorporate images, letter strings, and sounds. Mouse trajectories can be processed, averaged, visualized, and explored, and measures of spatial attraction/curvature, complexity, velocity, and acceleration can be computed. We describe the software and the method, and we provide details on mouse trajectory analysis. We validate the software by demonstrating the accuracy and reliability of its trajectory and reaction time data. The latest version of MouseTracker is freely available at http://mousetracker.jbfreeman.net.
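
One widely used attraction measure is the maximum deviation (MD) of a trajectory from the straight line joining its start and end points; a generic numpy computation (illustrative data, not MouseTracker's code):

    import numpy as np

    xy = np.array([[0.0, 0.0], [0.2, 0.35], [0.5, 0.6], [1.0, 1.0]])
    start, end = xy[0], xy[-1]
    d = end - start

    # Perpendicular distance of each sample from the start-end line
    # (2D cross-product magnitude divided by the line length).
    dev = np.abs(d[0] * (xy[:, 1] - start[1])
                 - d[1] * (xy[:, 0] - start[0])) / np.linalg.norm(d)
    print("maximum deviation:", dev.max())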

Journal ArticleDOI
TL;DR: The GWAMA (Genome-Wide Association Meta-Analysis) software has been developed to perform meta-analysis of summary statistics generated from genome-wide association studies of dichotomous phenotypes or quantitative traits.
Abstract: Despite the recent success of genome-wide association studies in identifying novel loci contributing effects to complex human traits, such as type 2 diabetes and obesity, much of the genetic component of variation in these phenotypes remains unexplained. One way to improve power to detect further novel loci is through meta-analysis of studies from the same population, increasing the sample size over any individual study. Although statistical software analysis packages incorporate routines for meta-analysis, they are ill equipped to meet the challenges of the scale and complexity of data generated in genome-wide association studies. We have developed flexible, open-source software for the meta-analysis of genome-wide association studies. The software incorporates a variety of error trapping facilities, and provides a range of meta-analysis summary statistics. The software is distributed with scripts that allow simple formatting of files containing the results of each association study and generate graphical summaries of genome-wide meta-analysis results. The GWAMA (Genome-Wide Association Meta-Analysis) software has been developed to perform meta-analysis of summary statistics generated from genome-wide association studies of dichotomous phenotypes or quantitative traits. The software, with source files, documentation and example data files, is freely available online at http://www.well.ox.ac.uk/GWAMA .
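
The core fixed-effects computation in such meta-analyses is the standard inverse-variance weighting of per-study estimates (GWAMA also offers random-effects models):

    \hat\beta = \frac{\sum_i w_i \hat\beta_i}{\sum_i w_i},
    \qquad w_i = \frac{1}{\mathrm{SE}_i^{2}},
    \qquad \mathrm{SE}(\hat\beta) = \frac{1}{\sqrt{\sum_i w_i}}.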

Journal ArticleDOI
TL;DR: Sequence diagrams document the interoperability of the analysis classes for solving nonlinear finite-element equations, demonstrating that object composition with design patterns provides a general approach to developing and refactoring nonlinear finite-element software.
Abstract: Object composition offers significant advantages over class inheritance to develop a flexible software architecture for finite-element analysis. Using this approach, separate classes encapsulate fu...
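
The composition idea can be sketched schematically; the classes below are illustrative (a toy one-degree-of-freedom problem), not the paper's actual design:

    class LinearSolver:
        def solve(self, K, r):
            return r / K  # toy 1-DOF "system of equations"

    class NewtonRaphson:
        """Solution algorithm composed of a solver, not inherited from it."""
        def __init__(self, solver, tol=1e-8, max_iter=20):
            self.solver, self.tol, self.max_iter = solver, tol, max_iter

        def solve_step(self, residual, tangent, u):
            for _ in range(self.max_iter):
                r = residual(u)
                if abs(r) < self.tol:
                    return u
                u += self.solver.solve(tangent(u), -r)
            raise RuntimeError("no convergence")

    # Solve f(u) = u**3 - 8 = 0 (root u = 2); swapping in another
    # algorithm or solver requires no change to the other classes.
    algorithm = NewtonRaphson(LinearSolver())
    print(algorithm.solve_step(lambda u: u**3 - 8, lambda u: 3 * u**2, 1.0))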

Journal ArticleDOI
TL;DR: This framework enriches the ImageJ software libraries with methods that greatly reduce the complexity of developing image analysis tools in an interactive 3D visualization environment and provides high-level access to volume rendering, volume editing, surface extraction, and image annotation.
Abstract: Current imaging methods such as Magnetic Resonance Imaging (MRI), Confocal microscopy, Electron Microscopy (EM) or Selective Plane Illumination Microscopy (SPIM) yield three-dimensional (3D) data sets in need of appropriate computational methods for their analysis. The reconstruction, segmentation and registration are best approached from the 3D representation of the data set. Here we present a platform-independent framework based on Java and Java 3D for accelerated rendering of biological images. Our framework is seamlessly integrated into ImageJ, a free image processing package with a vast collection of community-developed biological image analysis tools. Our framework enriches the ImageJ software libraries with methods that greatly reduce the complexity of developing image analysis tools in an interactive 3D visualization environment. In particular, we provide high-level access to volume rendering, volume editing, surface extraction, and image annotation. The ability to rely on a library that removes the low-level details enables concentrating software development efforts on the algorithm implementation parts. Our framework enables biomedical image software development to be built with 3D visualization capabilities with very little effort. We offer the source code and convenient binary packages along with extensive documentation at http://3dviewer.neurofly.de .
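
Because ImageJ embeds script interpreters, the library can be driven from a short Jython script; a sketch assuming the 3D viewer's Image3DUniverse class (the sample image path is hypothetical):

    from ij import IJ
    from ij3d import Image3DUniverse

    imp = IJ.openImage("confocal_stack.tif")  # load an image stack
    univ = Image3DUniverse()                  # create a 3D universe
    univ.show()                               # open the 3D viewer window
    univ.addVoltex(imp)                       # volume-render the stack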

Journal ArticleDOI
TL;DR: An intuitive and rapid procedure for analyzing experimental data by nonlinear least-squares fitting (NLSF) in the most widely used spreadsheet program, using the well-known Michaelis–Menten equation characterizing simple enzyme kinetics.
Abstract: We describe an intuitive and rapid procedure for analyzing experimental data by nonlinear least-squares fitting (NLSF) in the most widely used spreadsheet program. Experimental data in x/y form and data calculated from a regression equation are inputted and plotted in a Microsoft Excel worksheet, and the sum of squared residuals is computed and minimized using the Solver add-in to obtain the set of parameter values that best describes the experimental data. The confidence of best-fit values is then visualized and assessed in a generally applicable and easily comprehensible way. Every user familiar with the most basic functions of Excel will be able to implement this protocol, without previous experience in data fitting or programming and without additional costs for specialist software. The application of this tool is exemplified using the well-known Michaelis-Menten equation characterizing simple enzyme kinetics. Only slight modifications are required to adapt the protocol to virtually any other kind of dataset or regression equation. The entire protocol takes approximately 1 h.
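
For comparison, the same Michaelis-Menten fit outside Excel, using SciPy's least-squares wrapper (invented data points):

    import numpy as np
    from scipy.optimize import curve_fit

    def mm(S, Vmax, Km):
        return Vmax * S / (Km + S)  # Michaelis-Menten rate equation

    S = np.array([1.0, 2.0, 5.0, 10.0, 20.0, 50.0])  # substrate conc.
    v = np.array([0.9, 1.5, 2.6, 3.3, 3.9, 4.4])     # initial rates

    (Vmax, Km), cov = curve_fit(mm, S, v, p0=[4.0, 5.0])
    print("Vmax = %.2f, Km = %.2f" % (Vmax, Km))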

Book
01 Jul 2010
TL;DR: Introduction to WinBUGS for Ecologists goes right to the heart of the matter by providing ecologists with a comprehensive, yet concise, guide to applying WinBUGS to the types of models that they use most often: linear (LM), generalized linear (GLM), linear mixed (LMM) and generalized linear mixed models (GLMM).
Abstract: Bayesian statistics has exploded into biology and its sub-disciplines such as ecology over the past decade. The free software program WinBUGS and its open-source sister OpenBUGS are currently the only flexible and general-purpose programs available with which the average ecologist can conduct their own standard and non-standard Bayesian statistics. Introduction to WinBUGS for Ecologists goes right to the heart of the matter by providing ecologists with a comprehensive, yet concise, guide to applying WinBUGS to the types of models that they use most often: linear (LM), generalized linear (GLM), linear mixed (LMM) and generalized linear mixed models (GLMM). The book combines simulated data sets with "paired" analyses in WinBUGS (in a Bayesian framework) and in R (in a frequentist mode of inference), and uses a very detailed step-by-step tutorial presentation style that lets the reader repeat every step of the application of a given model in their own research.
- Introduction to the essential theories of key models used by ecologists
- Complete juxtaposition of classical analyses in R and Bayesian analyses of the same models in WinBUGS
- Provides every detail of R and WinBUGS code required to conduct all analyses
- Written with ecological language and ecological examples
- Companion web appendix that contains all code contained in the book and additional material (including more code and solutions to exercises)
- Tutorial approach shows ecologists how to implement Bayesian analysis in practical problems that they face

Journal ArticleDOI
TL;DR: In this paper, a potential function is defined for each controllable unit of the microgrid such that the minimum of the potential function corresponds to the control goal, and the dynamic set points are updated using communication within the microgrid.
Abstract: This paper introduces the potential-function-based method for secondary (as well as tertiary) control of a microgrid, in both islanded and grid-connected modes. A potential function is defined for each controllable unit of the microgrid such that the minimum of the potential function corresponds to the control goal. The dynamic set points are updated using communication within the microgrid. The proposed potential-function method is applied for the secondary voltage control of two microgrids with single and multiple feeders. Both islanded and grid-connected modes are investigated. The studies are conducted in the time domain, using the PSCAD/EMTDC software environment. The study results demonstrate the feasibility of the proposed potential-function method and the viability of the secondary voltage control method for a microgrid.
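
As a generic illustration of the idea (not the paper's actual functions), a quadratic potential for a voltage set point takes its minimum exactly at the control target, and the communicated set-point updates descend its gradient:

    U_i(v_i) = \left(v_i - v_i^{\mathrm{ref}}\right)^2,
    \qquad
    v_i^{\mathrm{set}} \leftarrow v_i^{\mathrm{set}} - \eta\, \frac{\partial U_i}{\partial v_i}.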

Proceedings ArticleDOI
17 Feb 2010
TL;DR: The Hardware Locality (hwloc) software is introduced, which gathers hardware information about processors, caches, memory nodes and more, and exposes it to applications and runtime systems in an abstracted and portable hierarchical manner.
Abstract: The increasing numbers of cores, shared caches and memory nodes within machines introduces a complex hardware topology. High-performance computing applications now have to carefully adapt their placement and behavior according to the underlying hierarchy of hardware resources and their software affinities. We introduce the Hardware Locality (hwloc) software which gathers hardware information about processors, caches, memory nodes and more, and exposes it to applications and runtime systems in an abstracted and portable hierarchical manner. hwloc may significantly help performance by having runtime systems place their tasks or adapt their communication strategies depending on hardware affinities. We show that hwloc can already be used by popular high-performance OpenMP or MPI software. Indeed, scheduling OpenMP threads according to their affinities or placing MPI processes according to their communication patterns shows interesting performance improvement thanks to hwloc. An optimized MPI communication strategy may also be dynamically chosen according to the location of the communicating processes in the machine and its hardware characteristics.
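
hwloc's native API is C, but its command-line tools expose the same topology; a Python sketch that dumps the tree as XML with lstopo and tallies object types (assumes lstopo is on the PATH):

    import subprocess
    import xml.etree.ElementTree as ET
    from collections import Counter

    xml_text = subprocess.run(["lstopo", "--of", "xml", "-"],
                              capture_output=True, text=True,
                              check=True).stdout
    root = ET.fromstring(xml_text)

    counts = Counter(obj.get("type") for obj in root.iter("object"))
    print(counts)  # e.g. Counter({'PU': 8, 'Core': 4, 'NUMANode': 1, ...})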

Journal ArticleDOI
TL;DR: The authors provide an overview of recommendation systems for software engineering: what they are, what they can do for developers, and what they might do in the future.
Abstract: Software development can be challenging because of the large information spaces that developers must navigate. Without assistance, developers can become bogged down and spend a disproportionate amount of their time seeking information at the expense of other value-producing tasks. Recommendation systems for software engineering (RSSEs) are software tools that can assist developers with a wide range of activities, from reusing code to writing effective bug reports. The authors provide an overview of recommendation systems for software engineering: what they are, what they can do for developers, and what they might do in the future.