
Showing papers on "Software" published in 2008


Journal ArticleDOI
TL;DR: This note summarizes developments of the genepop software since its first description in 1995, and in particular those new to version 4.0: an extended input format, several estimators of neighbourhood size under isolation by distance, new estimators and confidence intervals for null allele frequency, and less important extensions to previous options.
Abstract: This note summarizes developments of the genepop software since its first description in 1995, and in particular those new to version 4.0: an extended input format, several estimators of neighbourhood size under isolation by distance, new estimators and confidence intervals for null allele frequency, and less important extensions to previous options. genepop now runs under Linux as well as under Windows, and can be entirely controlled by batch calls.

8,171 citations


Journal ArticleDOI
TL;DR: The package adegenet for the R software is dedicated to the multivariate analysis of genetic markers by implementing formal classes and functions to manipulate and analyse genetic markers.
Abstract: Summary: The package adegenet for the R software is dedicated to the multivariate analysis of genetic markers. It extends the ade4 package of multivariate methods by implementing formal classes and functions to manipulate and analyse genetic markers. Data can be imported from common population genetics software and exported to other software and R packages. adegenet also implements standard population genetics tools along with more original approaches for spatial genetics and hybridization. Availability: Stable version is available from CRAN: http://cran.r-project.org/mirrors.html. Development version is available from adegenet website: http://adegenet.r-forge.r-project.org/. Both versions can be installed directly from R. adegenet is distributed under the GNU General Public Licence (v.2). Contact: jombart@biomserv.univ-lyon1.fr Supplementary information: Supplementary data are available at Bioinformatics online.

5,690 citations


Journal ArticleDOI
TL;DR: The ProteoWizard project provides a modular and extensible set of open-source, cross-platform tools and libraries that perform proteomics data analyses and enable rapid tool creation by providing a robust, pluggable development framework that simplifies and unifies data file access.
Abstract: Summary: The ProteoWizard software project provides a modular and extensible set of open-source, cross-platform tools and libraries. The tools perform proteomics data analyses; the libraries enable rapid tool creation by providing a robust, pluggable development framework that simplifies and unifies data file access, and performs standard proteomics and LCMS dataset computations. The library contains readers and writers of the mzML data format, which has been written using modern C++ techniques and design principles and supports a variety of platforms with native compilers. The software has been specifically released under the Apache v2 license to ensure it can be used in both academic and commercial projects. In addition to the library, we also introduce a rapidly growing set of companion tools whose implementation helps to illustrate the simplicity of developing applications on top of the ProteoWizard library. Availability: Cross-platform software that compiles using native compilers (i.e. GCC on Linux, MSVC on Windows and XCode on OSX) is available for download free of charge at http://proteowizard.sourceforge.net. This website also provides code examples and documentation. It is our hope the ProteoWizard project will become a standard platform for proteomics development; consequently, code use, contribution and further development are strongly encouraged. Contact: darren@proteowizard.org; parag@ucla.edu Supplementary information: Supplementary data are available at Bioinformatics online.

1,611 citations


Journal ArticleDOI
TL;DR: ARP/wARP 7.0 tackles several tasks: iterative protein model building including a high-level decision-making control module; fast construction of the secondary structure of a protein; building flexible loops in alternate conformations; fully automated placement of ligands; and finding ordered water molecules.
Abstract: ARP/wARP is a software suite to build macromolecular models in X-ray crystallography electron density maps. Structural genomics initiatives and the study of complex macromolecular assemblies and membrane proteins all rely on advanced methods for 3D structure determination. ARP/wARP meets these needs by providing the tools to obtain a macromolecular model automatically, with a reproducible computational procedure. ARP/wARP 7.0 tackles several tasks: iterative protein model building including a high-level decision-making control module; fast construction of the secondary structure of a protein; building flexible loops in alternate conformations; fully automated placement of ligands, including a choice of the best-fitting ligand from a 'cocktail'; and finding ordered water molecules. All protocols are easy to handle by a nonexpert user through a graphical user interface or a command line. The time required is typically a few minutes although iterative model building may take a few hours.

1,582 citations


Journal ArticleDOI
TL;DR: The range of tools and stimuli that PsychoPy provides, using OpenGL to generate very precise visual stimuli on standard personal computers, and the environment in which experiments are conducted are described.
Abstract: PsychoPy is a software library written in Python, using OpenGL to generate very precise visual stimuli on standard personal computers. It is designed to allow the construction of as wide a variety of neuroscience experiments as possible, with the least effort. By writing scripts in standard Python syntax users can generate an enormous variety of visual and auditory stimuli and can interact with a wide range of external hardware (enabling its use in fMRI, EEG, MEG etc.). The structure of scripts is simple and intuitive. As a result, new experiments can be written very quickly, and trying to understand a previously written script is easy, even with minimal code comments. PsychoPy can also generate movies and image sequences to be used in demos or simulated neuroscience experiments. This paper describes the range of tools and stimuli that it provides and the environment in which experiments are conducted.
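As a flavour of the scripting style described above, here is a minimal Python sketch that draws a drifting Gabor patch until a key is pressed. It uses the public PsychoPy API (visual.Window, visual.GratingStim, win.flip), but the stimulus parameters are arbitrary choices made only for illustration.

```python
# Minimal PsychoPy sketch: a drifting Gabor patch, closed on any key press.
from psychopy import visual, core, event

win = visual.Window(size=(800, 600), units='pix', fullscr=False)
gabor = visual.GratingStim(win, tex='sin', mask='gauss',
                           sf=0.02, size=256, phase=0.0)

clock = core.Clock()
while not event.getKeys():
    gabor.phase = clock.getTime() * 2.0   # drift at 2 cycles per second
    gabor.draw()
    win.flip()                            # stimulus appears at the next screen refresh

win.close()
core.quit()
```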

1,321 citations


Journal ArticleDOI
15 Oct 2008
TL;DR: KEEL as discussed by the authors is a software tool to assess evolutionary algorithms for data mining problems of various kinds including regression, classification, unsupervised learning, etc., which includes evolutionary learning algorithms based on different approaches: Pittsburgh, Michigan and IRL.
Abstract: This paper introduces KEEL, a software tool to assess evolutionary algorithms for data mining problems of various kinds, including regression, classification and unsupervised learning. It includes evolutionary learning algorithms based on different approaches: Pittsburgh, Michigan and IRL, as well as the integration of evolutionary learning techniques with different pre-processing techniques, allowing it to perform a complete analysis of any learning model in comparison to existing software tools. Moreover, KEEL has been designed with a double goal: research and education.

1,297 citations


Proceedings ArticleDOI
05 Nov 2008
TL;DR: The CenceMe application is presented, which represents the first system that combines the inference of the presence of individuals using off-the-shelf, sensor-enabled mobile phones with sharing of this information through social networking applications such as Facebook and MySpace.
Abstract: We present the design, implementation, evaluation, and user experiences of the CenceMe application, which represents the first system that combines the inference of the presence of individuals using off-the-shelf, sensor-enabled mobile phones with sharing of this information through social networking applications such as Facebook and MySpace. We discuss the system challenges for the development of software on the Nokia N95 mobile phone. We present the design and tradeoffs of split-level classification, whereby personal sensing presence (e.g., walking, in conversation, at the gym) is derived from classifiers which execute in part on the phones and in part on the backend servers to achieve scalable inference. We report performance measurements that characterize the computational requirements of the software and the energy consumption of the CenceMe phone client. We validate the system through a user study where twenty-two people, including undergraduates, graduates and faculty, used CenceMe continuously over a three-week period in a campus town. From this user study we learn how the system performs in a production environment and what uses people find for a personal sensing system.

1,184 citations


Journal ArticleDOI
TL;DR: In this paper, the underlying principles of video and stereo video analysis as well as its automation are reviewed and accompanied by a fully functional and freely available software implementation, which can be used to automate the analysis.
Abstract: Researchers studying aspects of locomotion or movement in biological and biomimetic systems commonly use video or stereo video recordings to quantify the behaviour of the system in question, often with an emphasis on measures of position, velocity and acceleration. However, despite the apparent simplicity of video analysis, it can require substantial investment of time and effort, even when performed with adequate software tools. This paper reviews the underlying principles of video and stereo video analysis as well as its automation, and is accompanied by a fully functional and freely available software implementation.

1,049 citations


Journal ArticleDOI
TL;DR: This note reports paradigm revisions that make it possible to keep such a knowledge base up to date on a PC using the Dali server, a frugal solution that reduces the total computational cost by pruning search space using prior knowledge about the distribution of structures in fold space.
Abstract: The Red Queen said, 'It takes all the running you can do, to keep in the same place.' (Lewis Carroll) Motivation: Newly solved protein structures are routinely scanned against structures already in the Protein Data Bank (PDB) using Internet servers. In favourable cases, comparing 3D structures may reveal biologically interesting similarities that are not detectable by comparing sequences. The number of known structures continues to grow exponentially. Sensitive—thorough but slow—search algorithms are challenged to deliver results in a reasonable time, as there are now more structures in the PDB than seconds in a day. The brute-force solution would be to distribute the individual comparisons on a massively parallel computer. A frugal solution, as implemented in the Dali server, is to reduce the total computational cost by pruning search space using prior knowledge about the distribution of structures in fold space. This note reports paradigm revisions that make it possible to keep such a knowledge base up to date on a PC. Availability: The Dali server for protein structure database searching at http://ekhidna.biocenter.helsinki.fi/dali_server is running DaliLite v.3. The software can be downloaded for academic use from http://ekhidna.biocenter.helsinki.fi/dali_lite/downloads/v3. Contact: liisa.holm@helsinki.fi

1,013 citations


Journal ArticleDOI
30 Dec 2008
TL;DR: The concept of "cloud" computing, some of the issues it tries to address, related research topics, and a "cloud" implementation available today are discussed.
Abstract: "Cloud" computing is a relatively recent term that builds on decades of research in virtualization, distributed computing, utility computing and, more recently, networking, web and software services. It implies a service-oriented architecture, reduced information technology overhead for the end-user, great flexibility, reduced total cost of ownership, on-demand services and many other things. This paper discusses the concept of "cloud" computing, some of the issues it tries to address, related research topics, and a "cloud" implementation available today.

945 citations


Journal ArticleDOI
TL;DR: An overview of the fundamental principles of operation of this technology and the influence of geometric and software parameters on image quality and patient radiation dose are provided.

Journal ArticleDOI
TL;DR: Statnet is a suite of software packages for statistical network analysis that provides a comprehensive framework for ERGM-based network modeling, including tools for model estimation, model evaluation, model-based network simulation, and network visualization.
Abstract: statnet is a suite of software packages for statistical network analysis. The packages implement recent advances in network modeling based on exponential-family random graph models (ERGM). The components of the package provide a comprehensive framework for ERGM-based network modeling, including tools for model estimation, model evaluation, model-based network simulation, and network visualization. This broad functionality is powered by a central Markov chain Monte Carlo (MCMC) algorithm. The coding is optimized for speed and robustness.

Journal ArticleDOI
TL;DR: The Bayesian modelling methods introduced in this article represent an array of enhanced tools for learning the genetic structure of populations designed to meet the increasing need for analyzing large-scale population genetics data.
Abstract: During the most recent decade many Bayesian statistical models and software for answering questions related to the genetic structure underlying population samples have appeared in the scientific literature. Most of these methods utilize molecular markers for the inferences, while some are also capable of handling DNA sequence data. In a number of earlier works, we have introduced an array of statistical methods for population genetic inference that are implemented in the software BAPS. However, the complexity of biological problems related to genetic structure analysis keeps increasing such that in many cases the current methods may provide either inappropriate or insufficient solutions. We discuss the necessity of enhancing the statistical approaches to face the challenges posed by the ever-increasing amounts of molecular data generated by scientists over a wide range of research areas and introduce an array of new statistical tools implemented in the most recent version of BAPS. With these methods it is possible, e.g., to fit genetic mixture models using user-specified numbers of clusters and to estimate levels of admixture under a genetic linkage model. Also, alleles representing a different ancestry compared to the average observed genomic positions can be tracked for the sampled individuals, and a priori specified hypotheses about genetic population structure can be directly compared using Bayes' theorem. In general, we have improved further the computational characteristics of the algorithms behind the methods implemented in BAPS facilitating the analyses of large and complex datasets. In particular, analysis of a single dataset can now be spread over multiple computers using a script interface to the software. The Bayesian modelling methods introduced in this article represent an array of enhanced tools for learning the genetic structure of populations. Their implementations in the BAPS software are designed to meet the increasing need for analyzing large-scale population genetics data. The software is freely downloadable for Windows, Linux and Mac OS X systems at http://web.abo.fi/fak/mnf//mate/jc/software/baps.html .

Journal ArticleDOI
TL;DR: CisGenome is a software system for analyzing genome-wide chromatin immunoprecipitation (ChIP) data designed to meet all basic needs of ChIP data analyses, including visualization, data normalization, peak detection, false discovery rate computation, gene-peak association, and sequence and motif analysis.
Abstract: We present CisGenome, a software system for analyzing genome-wide chromatin immunoprecipitation (ChIP) data. CisGenome is designed to meet all basic needs of ChIP data analyses, including visualization, data normalization, peak detection, false discovery rate computation, gene-peak association, and sequence and motif analysis. In addition to implementing previously published ChIP-microarray (ChIP-chip) analysis methods, the software contains statistical methods designed specifically for ChIP sequencing (ChIP-seq) data obtained by coupling ChIP with massively parallel sequencing. The modular design of CisGenome enables it to support interactive analyses through a graphic user interface as well as customized batch-mode computation for advanced data mining. A built-in browser allows visualization of array images, signals, gene structure, conservation, and DNA sequence and motif information. We demonstrate the use of these tools by a comparative analysis of ChIP-chip and ChIP-seq data for the transcription factor NRSF/REST, a study of ChIP-seq analysis with or without a negative control sample, and an analysis of a new motif in Nanog- and Sox2-binding regions.
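To make the peak-detection step concrete, the following deliberately simplified Python sketch flags genomic windows whose read counts are improbable under a uniform Poisson background. It only illustrates the general idea; CisGenome's own statistics, FDR computation and handling of control samples are more sophisticated, and all parameters below are placeholders.

```python
import numpy as np
from scipy.stats import poisson

def call_peaks(read_starts, genome_len, window=200, p_cutoff=1e-5):
    """Toy ChIP-seq peak caller: count read starts in fixed windows and flag
    windows whose counts are improbably high under a uniform Poisson background."""
    counts, _ = np.histogram(read_starts,
                             bins=np.arange(0, genome_len + window, window))
    lam = counts.mean()                      # background rate per window
    pvals = poisson.sf(counts - 1, lam)      # P(X >= observed count)
    return [(i * window, (i + 1) * window, int(counts[i]), float(pvals[i]))
            for i in np.flatnonzero(pvals < p_cutoff)]
```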

Proceedings ArticleDOI
10 May 2008
TL;DR: A comparative analysis of the predictive power of two different sets of metrics for defect prediction indicates that for the Eclipse data, process metrics are more efficient defect predictors than code metrics.
Abstract: In this paper we present a comparative analysis of the predictive power of two different sets of metrics for defect prediction. We choose one set of product-related and one set of process-related software metrics and use them for classifying Java files of the Eclipse project as defective or defect-free. Classification models are built using three common machine learners: logistic regression, naive Bayes, and decision trees. To allow different costs for prediction errors we perform cost-sensitive classification, which proves to be very successful: more than 75% of files correctly classified, a recall of more than 80%, and a false positive rate below 30%. Results indicate that for the Eclipse data, process metrics are more efficient defect predictors than code metrics.
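The cost-sensitive classification step can be illustrated with a small scikit-learn sketch. The synthetic feature matrix, the feature names and the 5:1 misclassification-cost ratio below are hypothetical stand-ins; the paper's actual Eclipse features, learners and cost settings differ.

```python
# Hedged sketch of cost-sensitive defect prediction with scikit-learn.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, recall_score

rng = np.random.default_rng(0)
# Hypothetical process metrics per file: revisions, distinct authors,
# prior bug-fix changes, lines added (the real study uses richer feature sets).
X = rng.poisson(lam=[5, 2, 1, 40], size=(1000, 4)).astype(float)
y = (rng.random(1000) < 0.1 + 0.02 * X[:, 2]).astype(int)   # synthetic labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# class_weight makes a missed defective file 5x as costly as a false alarm.
clf = LogisticRegression(class_weight={0: 1, 1: 5}, max_iter=1000)
clf.fit(X_tr, y_tr)
pred = clf.predict(X_te)
print("accuracy:", accuracy_score(y_te, pred), "recall:", recall_score(y_te, pred))
```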

Book
09 Jun 2008
TL;DR: In Two Bits, Christopher M. Kelty investigates the history and cultural significance of Free Software, revealing the people and practices that have transformed not only software but also music, film, science, and education.
Abstract: In Two Bits, Christopher M. Kelty investigates the history and cultural significance of Free Software, revealing the people and practices that have transformed not only software but also music, film, science, and education. Free Software is a set of practices devoted to the collaborative creation of software source code that is made openly and freely available through an unconventional use of copyright law. Kelty explains how these specific practices have reoriented the relations of power around the creation, dissemination, and authorization of all kinds of knowledge. He also makes an important contribution to discussions of public spheres and social imaginaries by demonstrating how Free Software is a recursive public: a public organized around the ability to build, modify, and maintain the very infrastructure that gives it life in the first place. Drawing on ethnographic research that took him from an Internet healthcare start-up company in Boston to media labs in Berlin to young entrepreneurs in Bangalore, Kelty describes the technologies and the moral vision that bind together hackers, geeks, lawyers, and other Free Software advocates. In each case, he shows how their practices and way of life include not only the sharing of software source code but also ways of conceptualizing openness, writing copyright licenses, coordinating collaboration, and proselytizing. By exploring in detail how these practices came together as the Free Software movement from the 1970s to the 1990s, Kelty also considers how it is possible to understand the new movements emerging from Free Software: projects such as Creative Commons, a nonprofit organization that creates copyright licenses, and Connexions, a project to create an online scholarly textbook commons.

Book ChapterDOI
10 Jul 2008
TL;DR: The effectiveness of the proposed method for learning and discrimination of malware behavior is demonstrated, especially in detecting novel instances of malware families previously not recognized by commercial anti-virus software.
Abstract: Malicious software in the form of Internet worms, computer viruses, and Trojan horses poses a major threat to the security of networked systems. The diversity and amount of its variants severely undermine the effectiveness of classical signature-based detection. Yet variants of malware families share typical behavioral patterns reflecting their origin and purpose. We aim to exploit these shared patterns for classification of malware and propose a method for learning and discrimination of malware behavior. Our method proceeds in three stages: (a) behavior of collected malware is monitored in a sandbox environment, (b) based on a corpus of malware labeled by an anti-virus scanner a malware behavior classifier is trained using learning techniques and (c) discriminative features of the behavior models are ranked for explanation of classification decisions. Experiments with different heterogeneous test data collected over several months using honeypots demonstrate the effectiveness of our method, especially in detecting novel instances of malware families previously not recognized by commercial anti-virus software.

Book
02 Feb 2008
TL;DR: This book teaches you how to use JavaSpaces technology to design and build distributed applications, and a set of common patterns and frameworks for approaching space-based programs, intended for computer professionals, students, and Java enthusiasts.
Abstract: From the Book: PREFACE: Over the next decade the computing landscape will change dramatically as devices become ubiquitous, network-connected, and ready to communicate. As the landscape changes, the way in which we design and build software will change as well: The distributed application (one that involves multiple processes and devices) will become the natural way we build systems, while the standalone desktop application will become nearly extinct. Designing distributed software is remarkably hard, however. The fundamental characteristics of a networked environment (such as heterogeneity, partial failure, and latency) and the difficulty of "gluing together" multiple, independent processes into a robust, scalable application present the programmer with many challenges that don't arise when designing and building desktop applications. JavaSpaces(TM) technology is a simple, expressive, and powerful tool that eases the burden of creating distributed applications. Processes are loosely coupled—communicating and synchronizing their activities using a persistent object store called a space, rather than through direct communication. This method of coordinating distributed processes leads to systems that are flexible, scalable, and reliable. While simple, the space-based model is powerful enough to implement advanced distributed applications—from e-commerce systems to groupware to heavy-duty parallel computations. Space-based programming also leverages the Jini(TM) technology's leasing, distributed event, and transaction features, making it suitable for building robust, commercial-quality distributed systems. This book teaches you how to use JavaSpaces technology to design and build distributed applications. It is intended for computer professionals, students, and Java enthusiasts—anyone who wants experience building networked applications. Through experimentation with the code examples, you'll develop a repertoire of useful techniques and patterns for creating space-based systems. We assume that you already have some programming experience and basic working knowledge of Java programming language fundamentals, but this book doesn't require any specific knowledge of network programming. JavaSpaces technology is new, and writing a book before the technology is in widespread use presents a unique challenge. We've approached the project from complementary perspectives. Two of the authors, Eric Freeman and Susanne Hupfer, spent much of the past decade designing and building space-based systems as part of the Linda research group at Yale University and used the JavaSpaces technology during the two years of its development. The third, Ken Arnold, was in charge of the JavaSpaces project at Sun Microsystems, working with a team of engineers to design and build the technology this book is all about. In this book, we present the foundations of programming with JavaSpaces technology, and a set of common patterns and frameworks for approaching space-based programs. As the technology becomes more widely used, the JavaSpace programming community will discover new ways of using it. We would like future editions of this book to incorporate these new patterns, and we invite you to send comments, suggestions, and ideas to javaspaces@awl.com and to make use of the book's web site at ...
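The space-based coordination model the preface describes can be sketched in a few lines of Python: processes never talk to each other directly; they only write entries into a shared space and read or take entries that match a template. This toy, in-process version is not the JavaSpaces API (which adds leases, transactions, remote access and persistence); it only illustrates the programming model.

```python
import threading

class TupleSpace:
    """Toy in-memory tuple space: write/read/take with template matching."""
    def __init__(self):
        self._tuples = []
        self._cond = threading.Condition()

    def write(self, entry):
        with self._cond:
            self._tuples.append(entry)
            self._cond.notify_all()

    def _match(self, template):
        # A template matches an entry if every non-None field is equal.
        for entry in self._tuples:
            if len(entry) == len(template) and all(
                    t is None or t == e for t, e in zip(template, entry)):
                return entry
        return None

    def read(self, template):              # blocking, non-destructive
        with self._cond:
            while (entry := self._match(template)) is None:
                self._cond.wait()
            return entry

    def take(self, template):              # blocking, destructive
        with self._cond:
            while (entry := self._match(template)) is None:
                self._cond.wait()
            self._tuples.remove(entry)
            return entry

# Example: a worker thread could take ("task", None) entries and
# write ("result", n, n * n) entries back into the same space.
```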

Journal ArticleDOI
TL;DR: 'fdrtool' implements a flexible FDR estimation scheme that is unified across different test statistics and variants of FDR, and can be applied to very large scale (in the order of millions of hypotheses) multiple testing problems.
Abstract: Summary: False discovery rate (FDR) methodologies are essential in the study of high-dimensional genomic and proteomic data. The R package ‘fdrtool’ facilitates such analyses by offering a comprehensive set of procedures for FDR estimation. Its distinctive features include: (i) many different types of test statistics are allowed as input data, such as P-values, z-scores, correlations and t-scores; (ii) simultaneously, both local FDR and tail area-based FDR values are estimated for all test statistics and (iii) empirical null models are fit where possible, thereby taking account of potential over- or underdispersion of the theoretical null. In addition, ‘fdrtool’ provides readily interpretable graphical output, and can be applied to very large scale (in the order of millions of hypotheses) multiple testing problems. Consequently, ‘fdrtool’ implements a flexible FDR estimation scheme that is unified across different test statistics and variants of FDR. Availability: The program is freely available from the Comprehensive R Archive Network (http://cran.r-project.org/) under the terms of the GNU General Public License (version 3 or later). Contact: strimmer@uni-leipzig.de
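fdrtool itself is an R package, but the tail-area FDR idea it implements can be illustrated with a short Python sketch that converts a vector of p-values into Benjamini-Hochberg-style q-values. This sketch assumes the theoretical null and pi0 = 1; the package additionally estimates local FDR, pi0 and empirical null models, none of which is attempted here.

```python
import numpy as np

def qvalues(pvals):
    """Tail-area FDR sketch: monotone Benjamini-Hochberg q-values."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    ranked = p[order] * m / np.arange(1, m + 1)          # p * m / rank
    ranked = np.minimum.accumulate(ranked[::-1])[::-1]   # enforce monotonicity
    q = np.empty(m)
    q[order] = np.clip(ranked, 0.0, 1.0)
    return q

# Example: q = qvalues(np.random.uniform(size=10000)); print((q < 0.05).sum())
```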

Journal ArticleDOI
TL;DR: A description of the change classification approach, techniques for extracting features from the source code and change histories, a characterization of the performance of change classification across 12 open source projects, and an evaluation of the predictive power of different groups of features.
Abstract: This paper introduces a new technique for predicting latent software bugs, called change classification. Change classification uses a machine learning classifier to determine whether a new software change is more similar to prior buggy changes or clean changes. In this manner, change classification predicts the existence of bugs in software changes. The classifier is trained using features (in the machine learning sense) extracted from the revision history of a software project stored in its software configuration management repository. The trained classifier can classify changes as buggy or clean, with a 78 percent accuracy and a 60 percent buggy change recall on average. Change classification has several desirable qualities: 1) The prediction granularity is small (a change to a single file), 2) predictions do not require semantic information about the source code, 3) the technique works for a broad array of project types and programming languages, and 4) predictions can be made immediately upon the completion of a change. Contributions of this paper include a description of the change classification approach, techniques for extracting features from the source code and change histories, a characterization of the performance of change classification across 12 open source projects, and an evaluation of the predictive power of different groups of features.
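A stripped-down version of the change-classification idea: represent each change by a bag of words drawn from its added/deleted lines and commit message, and train a standard classifier on changes already labelled buggy or clean. The toy training data, feature choice and classifier below are assumptions for illustration, not the exact feature groups or learners the paper evaluates.

```python
# Hedged sketch: classify software changes as buggy vs. clean from their text.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Each example is the text of one change (diff hunks + log message), labelled
# 1 (buggy) or 0 (clean) from the project's revision history.
changes = [
    "fix null pointer check in parser log: handle empty input",
    "add unit tests for formatter log: tests only",
    "refactor cache eviction loop log: off by one in index",
    "update documentation strings log: docs",
]
labels = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), MultinomialNB())
model.fit(changes, labels)
print(model.predict(["tweak parser index handling log: quick fix"]))
```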

Journal ArticleDOI
TL;DR: A set of protocols showing how to use the 3DNA suite of programs to analyze, rebuild and visualize three-dimensional nucleic-acid structures is presented.
Abstract: We present a set of protocols showing how to use the 3DNA suite of programs to analyze, rebuild and visualize three-dimensional nucleic-acid structures. The software determines a wide range of conformational parameters, including the identities and rigid-body parameters of interacting bases and base-pair steps, the nucleotides comprising helical fragments, the area of overlap of stacked bases and so on. The reconstruction of three-dimensional structure takes advantage of rigorously defined rigid-body parameters, producing rectangular block representations of the nucleic-acid bases and base pairs and all-atom models with approximate sugar-phosphate backbones. The visualization components create vector-based drawings and scenes that can be rendered as raster-graphics images, allowing for easy generation of publication-quality figures. The utility programs use geometric variables to control the view and scale of an object, for comparison of related structures. The commands run in seconds even for large structures. The software and related information are available at http://3dna.rutgers.edu/.

Proceedings ArticleDOI
11 Aug 2008
TL;DR: This course is targeted at software developers with geometric needs, and course graduates will be able to select and use the appropriate algorithms and data structures provided by CGAL in their current or upcoming projects.
Abstract: The CGAL C++ library offers geometric data structures and algorithms that are reliable, efficient, easy to use, and easy to integrate in existing software. Use of de facto standard libraries like CGAL increases productivity, because they allow software developers to focus on the application layer. This course is an overview of CGAL geometric algorithms and data structures. The lectures cover:
• CGAL for 2D vector graphics, including Boolean operations on Bezier curves, offsets, simplification, and geometry on the sphere.
• CGAL for 3D point sets, including principal component analysis, bounding volumes, simplification, outlier removal, normal estimation, normal orientation, denoising, triangulation, and surface reconstruction.
• CGAL for mesh-based modeling and processing, including Boolean operations, convex decomposition, simplification, and parameterization.
• CGAL for mesh generation, including surface and volume mesh generation, from 3D images, implicit functions, or polyhedral surfaces.
The introductory lecture covers non-geometric topics: the exact geometric computing paradigm that makes CGAL reliable without sacrificing efficiency and the generic programming paradigm that facilitates integration into existing software.
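CGAL itself is a C++ library, but one of the point-set ideas listed above (principal component analysis and bounding volumes) is easy to sketch with numpy: fit principal axes to a 3D point cloud and measure the cloud's extent along them. This is only a conceptual illustration of the idea, not CGAL code or its API.

```python
import numpy as np

def pca_oriented_bbox(points):
    """Fit principal axes to a 3D point cloud and return a PCA-aligned
    bounding box: (centroid, axes as rows, min extents, max extents)."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    centered = pts - centroid
    # Principal axes = right singular vectors of the centered cloud.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    local = centered @ vt.T              # coordinates in the PCA frame
    return centroid, vt, local.min(axis=0), local.max(axis=0)

# Example: c, axes, lo, hi = pca_oriented_bbox(np.random.rand(1000, 3))
```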

Proceedings ArticleDOI
10 May 2008
TL;DR: The experimental results show that the approach can detect 67%-93% of duplicate bug reports in the Firefox bug repository, compared to 43%-72% using natural language information alone.
Abstract: An open source project typically maintains an open bug repository so that bug reports from all over the world can be gathered. When a new bug report is submitted to the repository, a person, called a triager, examines whether it is a duplicate of an existing bug report. If it is, the triager marks it as duplicate and the bug report is removed from consideration for further work. In the literature, there are approaches exploiting only natural language information to detect duplicate bug reports. In this paper we present a new approach that further involves execution information. In our approach, when a new bug report arrives, its natural language information and execution information are compared with those of the existing bug reports. Then, a small number of existing bug reports are suggested to the triager as the most similar bug reports to the new bug report. Finally, the triager examines the suggested bug reports to determine whether the new bug report duplicates an existing bug report. We calibrated our approach on a subset of the Eclipse bug repository and evaluated our approach on a subset of the Firefox bug repository. The experimental results show that our approach can detect 67%-93% of duplicate bug reports in the Firefox bug repository, compared to 43%-72% using natural language information alone.
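The core idea of combining natural-language similarity with execution-information similarity can be sketched as follows: TF-IDF cosine similarity over report text, Jaccard similarity over the sets of executed methods, and a weighted sum of the two. The 0.5/0.5 weighting, the report data structure and the method-set representation are assumptions for illustration; the paper's actual similarity measures and calibration differ.

```python
# Hedged sketch: rank candidate duplicates by combined text + execution similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if (a | b) else 0.0

def rank_duplicates(new_report, existing_reports, w_text=0.5, w_exec=0.5, top_k=5):
    """Each report is a dict with 'text' (summary + description) and
    'trace' (methods executed when reproducing the bug)."""
    vec = TfidfVectorizer().fit([r["text"] for r in existing_reports] + [new_report["text"]])
    existing_m = vec.transform([r["text"] for r in existing_reports])
    new_m = vec.transform([new_report["text"]])
    text_sim = cosine_similarity(new_m, existing_m).ravel()
    exec_sim = [jaccard(new_report["trace"], r["trace"]) for r in existing_reports]
    combined = [w_text * t + w_exec * e for t, e in zip(text_sim, exec_sim)]
    order = sorted(range(len(existing_reports)), key=lambda i: combined[i], reverse=True)
    return order[:top_k]     # indices of the most similar existing reports
```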

Proceedings ArticleDOI
19 May 2008
TL;DR: The concept of a torque-to-position transformer, designed to allow the implementation of joint torque control techniques on joint position-controlled robots, is presented, together with experimental results that demonstrate the effectiveness of this approach.
Abstract: Joint position control is a dominant paradigm in industrial robots. While it has been successful in various industrial tasks, joint position control is severely limited in performing advanced robotic tasks, especially in unstructured dynamic environments. This paper presents the concept of torque-to-position transformer designed to allow the implementation of joint torque control techniques on joint position-controlled robots. Robot torque control is essentially accomplished by converting desired joint torques into instantaneous increments of joint position inputs. For each joint, the transformer is based on the knowledge of the joint position servo controller and the closed-loop frequency response of that joint. This transformer can be implemented as a software unit and applied to any conventional position-controlled robot so that torque command to the robot becomes available. This approach has been experimentally implemented on the Honda ASIMO robot arm. The paper presents the results of this implementation which demonstrate the effectiveness of this approach.
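In its simplest scalar form, the idea is to convert a desired joint torque into a small increment of the position reference by dividing by the effective stiffness of the joint's position servo, shaped by a model of the joint's closed-loop bandwidth. The first-order model, the stiffness value and the bandwidth below are assumptions for illustration only; the paper identifies the actual servo behaviour and frequency responses of the ASIMO arm joints.

```python
import math

class TorqueToPosition:
    """Hedged, scalar sketch of a torque-to-position transformer: a desired
    joint torque becomes an increment of the position command sent to a
    stiff position-controlled joint."""
    def __init__(self, k_servo=500.0, bandwidth_hz=5.0, dt=0.001):
        self.k = k_servo            # assumed servo stiffness (Nm/rad)
        # first-order approximation of the joint's closed-loop response
        self.alpha = 1.0 - math.exp(-2.0 * math.pi * bandwidth_hz * dt)
        self.dq = 0.0

    def step(self, tau_des, q_ref):
        dq_target = tau_des / self.k                  # static torque -> deflection
        self.dq += self.alpha * (dq_target - self.dq) # shape by assumed bandwidth
        return q_ref + self.dq                        # position command to the joint

# Example: ttp = TorqueToPosition(); cmds = [ttp.step(2.0, 0.3) for _ in range(5)]
```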

01 Aug 2008
TL;DR: Recent results from terascale DNS of turbulent flames are presented, illustrating its role in elucidating flame stabilization mechanisms in a lifted turbulent hydrogen/air jet flame in a hot air coflow, and the flame structure of a fuel-lean turbulent premixed jet flame.
Abstract: Computational science is paramount to the understanding of underlying processes in internal combustion engines of the future that will utilize non-petroleum-based alternative fuels, including carbon-neutral biofuels, and burn in new combustion regimes that will attain high efficiency while minimizing emissions of particulates and nitrogen oxides. Next-generation engines will likely operate at higher pressures, with greater amounts of dilution and utilize alternative fuels that exhibit a wide range of chemical and physical properties. Therefore, there is a significant role for high-fidelity simulations, direct numerical simulations (DNS), specifically designed to capture key turbulence-chemistry interactions in these relatively uncharted combustion regimes, and in particular, that can discriminate the effects of differences in fuel properties. In DNS, all of the relevant turbulence and flame scales are resolved numerically using high-order accurate numerical algorithms. As a consequence terascale DNS are computationally intensive, require massive amounts of computing power and generate tens of terabytes of data. Recent results from terascale DNS of turbulent flames are presented here, illustrating its role in elucidating flame stabilization mechanisms in a lifted turbulent hydrogen/air jet flame in a hot air co-flow, and the flame structure of a fuel-lean turbulent premixed jet flame. Computing at this scale requires close collaborations between computer and combustion scientists to provide optimized scaleable algorithms and software for terascale simulations, efficient collective parallel I/O, tools for volume visualization of multiscale, multivariate data and automating the combustion workflow. The enabling computer science, applied to combustion science, is also required in many other terascale physics and engineering simulations. In particular, performance monitoring is used to identify the performance of key kernels in the DNS code, S3D and especially memory intensive loops in the code. Through the careful application of loop transformations, data reuse in cache is exploited thereby reducing memory bandwidth needs, and hence, improving S3D's nodal performance. To enhance collective parallel I/O in S3D, an MPI-I/O caching design is used to construct a two-stage write-behind method for improving the performance of write-only operations. The simulations generate tens of terabytes of data requiring analysis. Interactive exploration of the simulation data is enabled by multivariate time-varying volume visualization. The visualization highlights spatial and temporal correlations between multiple reactive scalar fields using an intuitive user interface based on parallel coordinates and time histogram. Finally, an automated combustion workflow is designed using Kepler to manage large-scale data movement, data morphing, and archival and to provide a graphical display of run-time diagnostics.

Journal ArticleDOI
TL;DR: Evaluating the capability of SVM in predicting defect-prone software modules and comparing its prediction performance against eight statistical and machine learning models in the context of four NASA datasets indicates that its prediction performance is generally better than, or at least competitive with, the compared models.

Proceedings ArticleDOI
20 Feb 2008
TL;DR: This paper investigates dynamic tuning mechanisms on a new time-based software transactional memory implementation and exhibits the benefits of dynamic tuning.
Abstract: The current generation of software transactional memories has the advantage of being simple and efficient. Nevertheless, there are several parameters that affect the performance of a transactional memory, for example the locality of the application and the cache line size of the processor. In this paper, we investigate dynamic tuning mechanisms on a new time-based software transactional memory implementation. We study in extensive measurements the performance of our implementation and exhibit the benefits of dynamic tuning. We compare our results with TL2, which is currently one of the fastest word-based software transactional memories.
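For readers unfamiliar with the mechanism being tuned, here is a greatly simplified, single-commit-lock sketch of a time-based (TL2-style) software transactional memory in Python: a global version clock stamps writes, reads are validated against the version observed when the transaction started, and transactions that see inconsistent data abort and retry. Real word-based implementations use per-location versioned locks and subtler validation; the code below only conveys the mechanism and is not the paper's implementation.

```python
import threading

_global_clock = 0
_clock_lock = threading.Lock()
_commit_lock = threading.Lock()   # single commit lock keeps the sketch simple

class TVar:
    def __init__(self, value):
        self.value, self.version = value, 0

class AbortError(Exception):
    pass

class Transaction:
    def __init__(self):
        with _clock_lock:
            self.read_version = _global_clock
        self.read_set = {}    # TVar -> version observed
        self.write_set = {}   # TVar -> new value

    def read(self, tvar):
        if tvar in self.write_set:
            return self.write_set[tvar]
        value, version = tvar.value, tvar.version
        if version > self.read_version:
            raise AbortError()            # location changed since txn began
        self.read_set[tvar] = version
        return value

    def write(self, tvar, value):
        self.write_set[tvar] = value

    def commit(self):
        global _global_clock
        with _commit_lock:
            for tvar, seen in self.read_set.items():
                if tvar.version != seen:
                    raise AbortError()    # read set no longer consistent
            with _clock_lock:
                _global_clock += 1
                write_version = _global_clock
            for tvar, value in self.write_set.items():
                tvar.value, tvar.version = value, write_version

def atomic(fn):
    while True:
        txn = Transaction()
        try:
            result = fn(txn)
            txn.commit()
            return result
        except AbortError:
            continue                      # conflict detected: retry
```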

Journal ArticleDOI
TL;DR: A software package for computing non-parametric efficiency estimates, making inference, and testing hypotheses in frontier models, as well as computation of some new, robust estimators of efficiency, etc.
Abstract: This paper describes a software package for computing non-parametric efficiency estimates, making inference, and testing hypotheses in frontier models. Commands are provided for bootstrapping as well as computation of some new, robust estimators of efficiency, etc.

Journal ArticleDOI
TL;DR: The developed models proved to accurately represent the PLLs under real test conditions and are presented, providing a refined method for performance evaluation and comparison.
Abstract: In this paper, the performance assessment of three software single-phase phase-locked loop (PLL) algorithms is carried out by means of dynamic analysis and experimental results. Several line disturbances such as phase-angle jump, voltage sag, frequency step, and harmonics are generated by a DSP together with a D/A converter and applied to each PLL. The actual minus the estimated phase-angle values are displayed, providing a refined method for performance evaluation and comparison. Guidelines for parameters adjustments are also presented. In addition, practical implementation issues such as computational delay effects, ride-through, and computational load are addressed. The developed models proved to accurately represent the PLLs under real test conditions.
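As a point of reference for what a "software PLL" involves, here is a minimal single-phase PLL in Python: a multiplying phase detector, a PI loop filter and an integrator standing in for the VCO. It is a generic textbook structure, not one of the three specific algorithms the paper compares, and the gains and nominal frequency below are arbitrary assumptions (the input is assumed to be an amplitude-normalized sinusoid).

```python
import numpy as np

def single_phase_pll(v, fs, f_nom=50.0, kp=200.0, ki=20000.0):
    """Generic single-phase software PLL sketch.
    v  : sampled grid voltage, assumed roughly sin(theta_in) with unit amplitude
    fs : sampling frequency in Hz
    Returns the estimated phase angle for every sample."""
    theta, integ = 0.0, 0.0
    theta_est = np.empty(len(v))
    for n, vn in enumerate(v):
        err = vn * np.cos(theta)            # multiplying phase detector (has 2f ripple)
        integ += ki * err / fs              # integral part of the PI loop filter
        omega = 2 * np.pi * f_nom + kp * err + integ
        theta = (theta + omega / fs) % (2 * np.pi)
        theta_est[n] = theta
    return theta_est

# Example: t = np.arange(0, 0.2, 1/10e3)
#          theta = single_phase_pll(np.sin(2*np.pi*50*t), 10e3)
```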

Journal IssueDOI
TL;DR: This paper describes the architecture and implementation of an autonomous passenger vehicle designed to navigate using locally perceived information in preference to potentially inaccurate or incomplete map data to provide a strong platform for future research in autonomous driving in global positioning system–denied and highly dynamic environments with poor a priori information.
Abstract: This paper describes the architecture and implementation of an autonomous passenger vehicle designed to navigate using locally perceived information in preference to potentially inaccurate or incomplete map data. The vehicle architecture was designed to handle the original DARPA Urban Challenge requirements of perceiving and navigating a road network with segments defined by sparse waypoints. The vehicle implementation includes many heterogeneous sensors with significant communications and computation bandwidth to capture and process high-resolution, high-rate sensor data. The output of the comprehensive environmental sensing subsystem is fed into a kinodynamic motion planning algorithm to generate all vehicle motion. The requirements of driving in lanes, three-point turns, parking, and maneuvering through obstacle fields are all generated with a unified planner. A key aspect of the planner is its use of closed-loop simulation in a rapidly exploring randomized trees algorithm, which can randomly explore the space while efficiently generating smooth trajectories in a dynamic and uncertain environment. The overall system was realized through the creation of a powerful new suite of software tools for message passing, logging, and visualization. These innovations provide a strong platform for future research in autonomous driving in global positioning system–denied and highly dynamic environments with poor a priori information. © 2008 Wiley Periodicals, Inc.