
Showing papers on "Software" published in 2015


Journal ArticleDOI
07 May 2015
TL;DR: This paper discusses aspects of recruiting subjects for economic laboratory experiments, and shows how the Online Recruitment System for Economic Experiments can help.
Abstract: This paper discusses aspects of recruiting subjects for economic laboratory experiments, and shows how the Online Recruitment System for Economic Experiments can help. The software package provides experimenters with a free, convenient, and very powerful tool to organize their experiments and sessions.

1,974 citations


Journal ArticleDOI
TL;DR: MDTraj is a modern, lightweight, and fast software package for analyzing MD simulations that simplifies the analysis of MD data and connects these datasets with the modern interactive data science software ecosystem in Python.
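MDTraj is a Python library, so its output plugs directly into numpy-based analysis. Below is a minimal, hedged sketch of the kind of workflow the TL;DR describes; the trajectory and topology file names are hypothetical placeholders.

```python
# Minimal sketch (hypothetical input files): load a trajectory with MDTraj and
# compute per-frame C-alpha RMSD to the first frame, then analyze it with numpy.
import mdtraj as md
import numpy as np

traj = md.load("trajectory.xtc", top="topology.pdb")        # placeholder file names
ca_atoms = traj.topology.select("name CA")                   # alpha-carbon selection
rmsd = md.rmsd(traj, traj, frame=0, atom_indices=ca_atoms)   # nanometers, one value per frame
print("mean RMSD:", np.mean(rmsd), "max RMSD:", np.max(rmsd))
```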

1,480 citations


Journal ArticleDOI
TL;DR: Ncorr is an open-source subset-based 2D DIC package that amalgamates modern DIC algorithms proposed in the literature with additional enhancements; several applications of Ncorr that both validate it and showcase its capabilities are discussed.
Abstract: Digital Image Correlation (DIC) is an important and widely used non-contact technique for measuring material deformation. Considerable progress has been made in recent decades in both developing new experimental DIC techniques and in enhancing the performance of the relevant computational algorithms. Despite this progress, there is a distinct lack of freely available, high-quality, flexible DIC software. This paper documents a new DIC software package, Ncorr, that is meant to fill that crucial gap. Ncorr is an open-source subset-based 2D DIC package that amalgamates modern DIC algorithms proposed in the literature with additional enhancements. Several applications of Ncorr that both validate it and showcase its capabilities are discussed.
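Ncorr itself is a MATLAB package; the following is a hedged, minimal numpy sketch of the core idea behind subset-based 2D DIC (matching a small reference subset against the deformed image with zero-normalized cross-correlation), not Ncorr's actual implementation, which adds subpixel refinement and other enhancements. Function names and window sizes are illustrative.

```python
# Illustrative sketch only: integer-pixel subset matching by zero-normalized
# cross-correlation (ZNCC). Assumes the subset and search window stay inside the image.
import numpy as np

def zncc(a, b):
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def match_subset(ref_img, def_img, center, half=10, search=5):
    """Find the integer-pixel displacement of a (2*half+1)^2 subset around `center`."""
    y, x = center
    subset = ref_img[y - half:y + half + 1, x - half:x + half + 1]
    best = (0, 0, -1.0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = def_img[y + dy - half:y + dy + half + 1,
                           x + dx - half:x + dx + half + 1]
            score = zncc(subset, cand)
            if score > best[2]:
                best = (dy, dx, score)
    return best  # (dy, dx, correlation score)
```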

1,184 citations


Journal ArticleDOI
TL;DR: Qualimap 2 represents a next step in the QC analysis of HTS data: along with comprehensive single-sample analysis of alignment data, it includes new modes that allow simultaneous processing and comparison of multiple samples.
Abstract: Motivation: Detection of random errors and systematic biases is a crucial step of a robust pipeline for processing high-throughput sequencing (HTS) data. Bioinformatics software tools capable of performing this task are available, either for general analysis of HTS data or targeted to a specific sequencing technology. However, most of the existing QC instruments only allow processing of one sample at a time.
Results: Qualimap 2 represents a next step in the QC analysis of HTS data. Along with comprehensive single-sample analysis of alignment data, it includes new modes that allow simultaneous processing and comparison of multiple samples. As with the first version, the new features are available via both graphical and command line interface. Additionally, it includes a large number of improvements proposed by the user community.
Availability and implementation: The implementation of the software along with documentation is freely available at http://www.qualimap.org.
Contact: meyer@mpiib-berlin.mpg.de
Supplementary information: Supplementary data are available at Bioinformatics online.

1,154 citations


Journal ArticleDOI
Pierre Hirel1
TL;DR: Atomsk is a unified program that allows one to generate, convert and transform atomic systems for the purposes of ab initio calculations, classical atomistic simulations, or visualization, in the areas of computational physics and chemistry.

867 citations


Journal ArticleDOI
TL;DR: The development and the present state of the "tps" series of software for use in geometric morphometrics on Windows-based computers are described; these programs have been used in hundreds of studies in mammals and other organisms.
Abstract: The development and the present state of the "tps" series of software for use in geometric morphometrics on Windows-based computers are described. These programs have been used in hundreds of studies in mammals and other organisms.

617 citations


Posted Content
TL;DR: oTree is an open-source and online software for implementing interactive experiments in the laboratory, online, the field or combinations thereof, and provides the source code, a library of standard game templates and demo games which can be played by anyone.
Abstract: oTree is an open-source and online software for implementing interactive experiments in the laboratory, online, the field or combinations thereof. oTree does not require installation of software on subjects’ devices; it can run on any device that has a web browser, be that a desktop computer, a tablet or a smartphone. Deployment can be internet-based without a shared local network, or local-network-based even without internet access. For coding, Python is used, a popular, open-source programming language. www.oTree.org provides the source code, a library of standard game templates and demo games which can be played by anyone.
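Because oTree experiments are written in Python, a game is defined as a small set of classes. The sketch below follows the documented pattern of oTree apps from around this period; the app name, field, and payoff rule are hypothetical and only illustrate the structure, not any particular published game.

```python
# Hedged sketch of a minimal oTree models.py; the app name, the `guess` field,
# and the payoff rule are hypothetical placeholders.
from otree.api import (
    models, BaseConstants, BaseSubsession, BaseGroup, BasePlayer, Currency as c
)

class Constants(BaseConstants):
    name_in_url = 'guess_game'      # hypothetical app name
    players_per_group = None
    num_rounds = 1

class Subsession(BaseSubsession):
    pass

class Group(BaseGroup):
    pass

class Player(BasePlayer):
    guess = models.IntegerField(min=0, max=100)

    def set_payoff(self):
        # illustrative payoff rule only
        self.payoff = c(100 - abs(self.guess - 50))
```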

577 citations


Journal ArticleDOI
01 Feb 2015
TL;DR: Machine learning techniques have the ability to predict software fault proneness and can be used by software practitioners and researchers; however, the application of machine learning techniques in software fault prediction is still limited, and more studies should be carried out in order to obtain well-formed and generalizable results.
Abstract: Highlights: Reviews studies from 1991 to 2013 to assess the application of ML techniques for SFP. Identifies seven categories of the ML techniques. Identifies 64 studies to answer the established research questions. Selects primary studies according to a quality assessment of the studies. The systematic literature review: summarizes ML techniques for SFP models; assesses the performance accuracy and capability of ML techniques for constructing SFP models; provides a comparison between the ML and statistical techniques; provides a comparison of the performance accuracy of different ML techniques; summarizes the strengths and weaknesses of the ML techniques; and provides future guidelines to software practitioners and researchers.
Background: Software fault prediction is the process of developing models that can be used by software practitioners in the early phases of the software development life cycle for detecting faulty constructs such as modules or classes. Various machine learning techniques have been used in the past for predicting faults.
Method: In this study we perform a systematic review of studies from January 1991 to October 2013 that use machine learning techniques for software fault prediction. We assess the performance capability of the machine learning techniques in existing research for software fault prediction. We also compare the performance of the machine learning techniques with statistical techniques and with other machine learning techniques. Further, the strengths and weaknesses of the machine learning techniques are summarized.
Results: We identified 64 primary studies and seven categories of machine learning techniques. The results demonstrate the prediction capability of the machine learning techniques for classifying a module/class as fault prone or not fault prone. Models using machine learning techniques for estimating software fault proneness outperform traditional statistical models.
Conclusion: Based on the results obtained from the systematic review, we conclude that machine learning techniques have the ability to predict software fault proneness and can be used by software practitioners and researchers. However, the application of machine learning techniques in software fault prediction is still limited, and more studies should be carried out to obtain well-formed and generalizable results. We provide future guidelines to practitioners and researchers based on the results obtained in this work.

483 citations


Journal ArticleDOI
TL;DR: This work focuses on the computational aspects of super-resolution microscopy and presents a comprehensive evaluation of localization software packages, reflecting the various tradeoffs of SMLM software packages and helping users to choose the software that fits their needs.
Abstract: The quality of super-resolution images obtained by single-molecule localization microscopy (SMLM) depends largely on the software used to detect and accurately localize point sources. In this work, we focus on the computational aspects of super-resolution microscopy and present a comprehensive evaluation of localization software packages. Our philosophy is to evaluate each package as a whole, thus maintaining the integrity of the software. We prepared synthetic data that represent three-dimensional structures modeled after biological components, taking excitation parameters, noise sources, point-spread functions and pixelation into account. We then asked developers to run their software on our data; most responded favorably, allowing us to present a broad picture of the methods available. We evaluated their results using quantitative and user-interpretable criteria: detection rate, accuracy, quality of image reconstruction, resolution, software usability and computational resources. These metrics reflect the various tradeoffs of SMLM software packages and help users to choose the software that fits their needs.
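The paper evaluates existing packages rather than prescribing one algorithm, but most of the evaluated tools share a common core step: fitting a PSF model to a small camera region around each candidate spot. The following is a hedged scipy sketch of that step, assuming a symmetric 2D Gaussian PSF; it is not the implementation of any specific package in the study.

```python
# Hedged sketch of single-molecule localization: least-squares fit of a symmetric
# 2D Gaussian PSF to a small region of interest (ROI) around a candidate spot.
import numpy as np
from scipy.optimize import curve_fit

def gaussian2d(coords, x0, y0, sigma, amp, offset):
    x, y = coords
    return (amp * np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * sigma ** 2))
            + offset).ravel()

def localize(roi):
    """Return the sub-pixel (x0, y0) position of a spot in a 2D ROI."""
    ny, nx = roi.shape
    x, y = np.meshgrid(np.arange(nx), np.arange(ny))
    p0 = (nx / 2, ny / 2, 1.5, roi.max() - roi.min(), roi.min())  # rough initial guess
    popt, _ = curve_fit(gaussian2d, (x, y), roi.ravel(), p0=p0)
    return popt[0], popt[1]
```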

397 citations


Book
04 Nov 2015
TL;DR: Evidence-Based Software Engineering and Systematic Reviews provides a clear introduction to the use of an evidence-based model for software engineering research and practice, explaining the roles of primary studies as elements of an over-arching evidence model, rather than as disjointed elements in the empirical spectrum.
Abstract: In the decade since the idea of adapting the evidence-based paradigm for software engineering was first proposed, it has become a major tool of empirical software engineering. Evidence-Based Software Engineering and Systematic Reviews provides a clear introduction to the use of an evidence-based model for software engineering research and practice. The book explains the roles of primary studies (experiments, surveys, case studies) as elements of an over-arching evidence model, rather than as disjointed elements in the empirical spectrum. Supplying readers with a clear understanding of empirical software engineering best practices, it provides up-to-date guidance on how to conduct secondary studies in software engineering, replacing the existing 2004 and 2007 technical reports. The book is divided into three parts. The first part discusses the nature of evidence and the evidence-based practices centered on a systematic review, both in general and as applying to software engineering. The second part examines the different elements that provide inputs to a systematic review (usually considered as forming a secondary study), especially the main forms of primary empirical study currently used in software engineering. The final part provides practical guidance on how to conduct systematic reviews (the guidelines), drawing together accumulated experiences to guide researchers and students in planning and conducting their own studies. The book includes an extensive glossary and an appendix that provides a catalogue of reviews that may be useful for practice and teaching.

380 citations


Journal ArticleDOI
TL;DR: Bonsai is described, a modular, high-performance, open-source visual programming framework for the acquisition and online processing of data streams and demonstrated how it allows for the rapid and flexible prototyping of integrated experimental designs in neuroscience.
Abstract: The design of modern scientific experiments requires the control and monitoring of many different data streams. However, the serial execution of programming instructions in a computer makes it a challenge to develop software that can deal with the asynchronous, parallel nature of scientific data. Here we present Bonsai, a modular, high-performance, open-source visual programming framework for the acquisition and online processing of data streams. We describe Bonsai's core principles and architecture and demonstrate how it allows for the rapid and flexible prototyping of integrated experimental designs in neuroscience. We specifically highlight some applications that require the combination of many different hardware and software components, including video tracking of behavior, electrophysiology and closed-loop control of stimulation.

Proceedings ArticleDOI
16 May 2015
TL;DR: This work motivates deep learning for software language modeling, highlighting fundamental differences between state-of-the-practice software language models and connectionist models, and proposes avenues for future work, where deep learning can be brought to bear to support model-based testing, improve software lexicons, and conceptualize software artifacts.
Abstract: Deep learning subsumes algorithms that automatically learn compositional representations. The ability of these models to generalize well has ushered in tremendous advances in many fields such as natural language processing (NLP). Recent research in the software engineering (SE) community has demonstrated the usefulness of applying NLP techniques to software corpora. Hence, we motivate deep learning for software language modeling, highlighting fundamental differences between state-of-the-practice software language models and connectionist models. Our deep learning models are applicable to source code files (since they only require lexically analyzed source code written in any programming language) and other types of artifacts. We show how a particular deep learning model can remember its state to effectively model sequential data, e.g., streaming software tokens, and the state is shown to be much more expressive than discrete tokens in a prefix. Then we instantiate deep learning models and show that deep learning induces high-quality models compared to n-grams and cache-based n-grams on a corpus of Java projects. We experiment with two of the models' hyperparameters, which govern their capacity and the amount of context they use to inform predictions, before building several committees of software language models to aid generalization. Then we apply the deep learning models to code suggestion and demonstrate their effectiveness at a real SE task compared to state-of-the-practice models. Finally, we propose avenues for future work, where deep learning can be brought to bear to support model-based testing, improve software lexicons, and conceptualize software artifacts. Thus, our work serves as the first step toward deep learning software repositories.
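To make the contrast with the n-gram baselines concrete, here is a hedged sketch of the baseline side only: predicting the next lexed code token from a two-token prefix with a simple trigram count model. It is not the authors' deep learning model, and the toy token stream is illustrative.

```python
# Hedged sketch of an n-gram style baseline for code suggestion: a trigram model
# over lexed source tokens. This illustrates the baseline the paper compares
# against, not the authors' deep (connectionist) model.
from collections import Counter, defaultdict

def train_trigram(tokens):
    counts = defaultdict(Counter)
    for a, b, nxt in zip(tokens, tokens[1:], tokens[2:]):
        counts[(a, b)][nxt] += 1
    return counts

def suggest(counts, prefix, k=3):
    """Top-k next-token suggestions for a 2-token prefix."""
    return [tok for tok, _ in counts[prefix].most_common(k)]

# Toy, hand-made token stream standing in for a lexed Java corpus.
tokens = ["for", "(", "int", "i", "=", "0", ";", "i", "<", "n", ";", "i", "++", ")"]
model = train_trigram(tokens)
print(suggest(model, ("int", "i")))   # -> ['=']
```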

Journal ArticleDOI
TL;DR: Two key elements in collaborative workflows, the consistency of data sharing and the reproducibility of calculation results, are embedded in the IBEX workflow: image data, feature algorithms, and model validation, including newly developed ones from different users, can be easily and consistently shared so that results can be more easily reproduced between institutions.
Abstract: Purpose: Radiomics, which is the high-throughput extraction and analysis of quantitative image features, has been shown to have considerable potential to quantify the tumor phenotype. However, at present, a lack of software infrastructure has impeded the development of radiomics and its applications. Therefore, the authors developed the imaging biomarker explorer (IBEX), an open infrastructure software platform that flexibly supports common radiomics workflow tasks such as multimodality image data import and review, development of feature extraction algorithms, model validation, and consistent data sharing among multiple institutions. Methods: The IBEX software package was developed using the MATLAB and C/C++ programming languages. The software architecture deploys the modern model-view-controller, unit testing, and function handle programming concepts to isolate each quantitative imaging analysis task, to validate if their relevant data and algorithms are fit for use, and to plug in new modules. On one hand, IBEX is self-contained and ready to use: it has implemented common data importers, common image filters, and common feature extraction algorithms. On the other hand, IBEX provides an integrated development environment on top of MATLAB and C/C++, so users are not limited to its built-in functions. In the IBEX developer studio, users can plug in, debug, and test new algorithms, extending IBEX's functionality. IBEX also supports quality assurance for data and feature algorithms: image data, regions of interest, and feature algorithm-related data can be reviewed, validated, and/or modified. More importantly, two key elements in collaborative workflows, the consistency of data sharing and the reproducibility of calculation results, are embedded in the IBEX workflow: image data, feature algorithms, and model validation, including newly developed ones from different users, can be easily and consistently shared so that results can be more easily reproduced between institutions. Results: Researchers with a variety of technical skill levels, including radiation oncologists, physicists, and computer scientists, have found the IBEX software to be intuitive, powerful, and easy to use. IBEX can be run on any computer with the Windows operating system and 1 GB of RAM. The authors fully validated the implementation of all importers, preprocessing algorithms, and feature extraction algorithms. Windows version 1.0 beta of stand-alone IBEX and IBEX's source code can be downloaded. Conclusions: The authors successfully implemented IBEX, an open infrastructure software platform that streamlines common radiomics workflow tasks. Its transparency, flexibility, and portability can greatly accelerate the pace of radiomics research and pave the way toward successful clinical translation.

Proceedings ArticleDOI
28 Oct 2015
TL;DR: MoonGen is a flexible, high-speed packet generator that can saturate 10 GbE links with minimum-sized packets while using only a single CPU core by running on top of the packet processing framework DPDK.
Abstract: We present MoonGen, a flexible high-speed packet generator. It can saturate 10 GbE links with minimum-sized packets while using only a single CPU core by running on top of the packet processing framework DPDK. Linear multi-core scaling allows for even higher rates: We have tested MoonGen with up to 178.5 Mpps at 120 Gbit/s. Moving the whole packet generation logic into user-controlled Lua scripts allows us to achieve the highest possible flexibility. In addition, we utilize hardware features of commodity NICs that have not been used for packet generators previously. A key feature is the measurement of latency with sub-microsecond precision and accuracy by using hardware timestamping capabilities of modern commodity NICs. We address timing issues with software-based packet generators and apply methods to mitigate them with both hardware support and with a novel method to control the inter-packet gap in software. Features that were previously only possible with hardware-based solutions are now provided by MoonGen on commodity hardware. MoonGen is available as free software under the MIT license in our git repository at https://github.com/emmericp/MoonGen

Journal ArticleDOI
TL;DR: muxViz is open-source software containing a collection of algorithms for the analysis of multilayer networks, which are an important way to represent a large variety of complex systems throughout science and engineering.
Abstract: Multilayer relationships among entities and information about entities must be accompanied by the means to analyse, visualize and obtain insights from such data. We present open-source software (muxViz) that contains a collection of algorithms for the analysis of multilayer networks, which are an important way to represent a large variety of complex systems throughout science and engineering. We demonstrate the ability of muxViz to analyse and interactively visualize multilayer data using empirical genetic, neuronal and transportation networks. Our software is available at https://github.com/manlius/muxViz.

Journal ArticleDOI
TL;DR: ClearVolume makes live imaging truly live by enabling direct real-time inspection of the specimen imaged in light-sheet microscopes; it also computes sample drift trajectories and makes that information available through an interface for stabilization of the sample.
Abstract: Further, we demonstrated the use of ClearVolume for long-term time-lapse imaging with an OpenSPIM microscope [5]. ClearVolume was readily integrated into the mManager (www.micro-manager.org/) plug-in to allow live 3D visualization. We used ClearVolume to remotely monitor a developing Drosophila melanogaster embryo to check on sample drift, photobleaching and other artifacts (Supplementary Video 5). Time-shifting allows inspection of the data at any given point in time during acquisition (Supplementary Video 6). In addition, ClearVolume can be used for aligning and calibrating light-sheet microscopes. The overall 3D point spread function of the optical system and the full 3D structure of the beam (for example, Gaussian or Bessel beam) can be visualized (Supplementary Videos 7 and 8). To aid manual alignment of the microscope, ClearVolume computes image sharpness in real time and provides audiovisual feedback (Supplementary Video 9). Finally, ClearVolume computes and visualizes sample drift trajectories and makes that information available through an interface for stabilization of the sample (Supplementary Video 10). ClearVolume can also be used on other types of microscopes such as confocal microscopes (Supplementary Video 11). Going beyond microscopy, ClearVolume is integrated with the popular Fiji [6] (http://fiji.sc) and KNIME (http://www.knime.org/) software packages, bringing real-time 3D+t multichannel volume visualization to a larger community (Supplementary Videos 12–14). Furthermore, ClearVolume's modularity allows any user to implement additional CPU- or GPU-based modules for image analysis and visualization. In summary, ClearVolume makes live imaging truly live by enabling direct real-time inspection of the specimen imaged in light-sheet microscopes. The source code of ClearVolume is available at http://clearvolume.github.io.

Journal ArticleDOI
TL;DR: A new upper limb dynamic model is presented as a tool to evaluate potential differences in predictive behavior between platforms; the benchmarking comparison is illustrated using the SIMM-Dynamics Pipeline-SD/Fast and OpenSim platforms.
Abstract: Several open-source or commercially available software platforms are widely used to develop dynamic simulations of movement. While computational approaches are conceptually similar across platforms, technical differences in implementation may influence output. We present a new upper limb dynamic model as a tool to evaluate potential differences in predictive behavior between platforms. We evaluated to what extent differences in technical implementations in popular simulation software environments result in differences in kinematic predictions for single and multijoint movements using EMG- and optimization-based approaches for deriving control signals. We illustrate the benchmarking comparison using the SIMM-Dynamics Pipeline-SD/Fast and OpenSim platforms. The most substantial divergence results from differences in muscle model and actuator paths. This model is a valuable resource and is available for download by other researchers. The model, data, and simulation results presented here can be used by future researchers to benchmark other software platforms and software upgrades for these two platforms.

Journal ArticleDOI
TL;DR: The software package IPO (‘Isotopologue Parameter Optimization’) was successfully applied to data derived from liquid chromatography coupled to high-resolution mass spectrometry from three studies with different sample types and different chromatographic methods and devices, and the potential of IPO to increase the reliability of metabolomics data was shown.
Abstract: Background Untargeted metabolomics generates a huge amount of data. Software packages for automated data processing are crucial to successfully process these data. A variety of such software packages exist, but the outcome of data processing strongly depends on algorithm parameter settings. If they are not carefully chosen, suboptimal parameter settings can easily lead to biased results. Therefore, parameter settings also require optimization. Several parameter optimization approaches have already been proposed, but a software package for parameter optimization which is free of intricate experimental labeling steps, fast and widely applicable is still missing.

Book ChapterDOI
27 Apr 2015
TL;DR: The pace of change and increasing complexity of modern code make it difficult to produce error-free software, and available tools often fall short in helping programmers develop more reliable and secure applications.
Abstract: For organisations like Facebook, high quality software is important. However, the pace of change and increasing complexity of modern code makes it difficult to produce error-free software. Available tools are often lacking in helping programmers develop more reliable and secure applications.

Journal ArticleDOI
TL;DR: The Chaospy software toolbox is compared to similar packages and demonstrates a stronger focus on defining reusable software building blocks that can easily be assembled to construct new, tailored algorithms for uncertainty quantification.
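As an illustration of the "building blocks" claim, the sketch below assembles Chaospy primitives into a pseudo-spectral polynomial chaos expansion. The function names follow the API described around the time of the paper (later releases rename some of them), and the forward model is a stand-in for a user's solver.

```python
# Hedged sketch of assembling Chaospy building blocks into a pseudo-spectral
# polynomial chaos expansion; `model` is a placeholder for the user's solver,
# and API names reflect the 2015-era package.
import numpy as np
import chaospy as cp

def model(q):                          # stand-in forward model
    return np.cos(q[0]) + q[1]

dist = cp.J(cp.Uniform(0, 1), cp.Normal(0, 1))              # joint input distribution
nodes, weights = cp.generate_quadrature(4, dist, rule="G")  # Gaussian quadrature nodes
expansion = cp.orth_ttr(3, dist)                            # orthogonal polynomial basis
evals = [model(node) for node in nodes.T]                   # run the model at each node
approx = cp.fit_quadrature(expansion, nodes, weights, evals)
print(cp.E(approx, dist), cp.Std(approx, dist))             # mean and std of the output
```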

Journal ArticleDOI
TL;DR: This paper describes both the philosophy and strategy of the approach, and a software implementation, DiffPy Complex Modeling Infrastructure (DiffPy-CMI), for regularizing ill posed structure and nanostructure scattering inverse problems from complex material structures.
Abstract: A strategy is described for regularizing ill posed structure and nanostructure scattering inverse problems (i.e. structure solution) from complex material structures. This paper describes both the philosophy and strategy of the approach, and a software implementation, DiffPy Complex Modeling Infrastructure (DiffPy-CMI).

Proceedings ArticleDOI
09 Nov 2015
TL;DR: This paper proposes CodeHow, a code search technique that can recognize potential APIs a user query refers to and performs code retrieval by applying the Extended Boolean model, which considers the impact of both text similarity and potential APIs on code search.
Abstract: Over the years of software development, a vast amount of source code has been accumulated. Many code search tools have been proposed to help programmers reuse previously written code by performing free-text queries over a large-scale codebase. Our experience shows that the accuracy of these code search tools is often unsatisfactory. One major reason is that existing tools lack query understanding ability. In this paper, we propose CodeHow, a code search technique that can recognize potential APIs a user query refers to. Having understood the potentially relevant APIs, CodeHow expands the query with the APIs and performs code retrieval by applying the Extended Boolean model, which considers the impact of both text similarity and potential APIs on code search. We deploy the backend of CodeHow as a Microsoft Azure service and implement the front-end as a Visual Studio extension. We evaluate CodeHow on a large-scale codebase consisting of 26K C# projects downloaded from GitHub. The experimental results show that when the top 1 results are inspected, CodeHow achieves a precision score of 0.794 (i.e., 79.4% of the first returned results are relevant code snippets). The results also show that CodeHow outperforms conventional code search tools. Furthermore, we perform a controlled experiment and a survey of Microsoft developers. The results confirm the usefulness and effectiveness of CodeHow in programming practices.
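The Extended Boolean (p-norm) model the abstract refers to scores a document by softly combining per-term weights instead of requiring strict Boolean matches. The sketch below shows the standard p-norm OR/AND formulas applied to two illustrative evidence scores (text similarity and API match); it is not CodeHow's exact scoring function or weighting.

```python
# Standard p-norm Extended Boolean scoring, shown with two illustrative
# per-term weights in [0, 1]; not CodeHow's exact formula or parameters.

def eb_or(weights, p=2.0):
    return (sum(w ** p for w in weights) / len(weights)) ** (1.0 / p)

def eb_and(weights, p=2.0):
    return 1.0 - (sum((1.0 - w) ** p for w in weights) / len(weights)) ** (1.0 / p)

# A snippet that matches the query text moderately but hits a relevant API strongly:
text_sim, api_match = 0.4, 0.9
print(eb_or([text_sim, api_match]))   # rewards strong evidence of either kind
print(eb_and([text_sim, api_match]))  # softer than a strict Boolean AND
```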

Journal ArticleDOI
TL;DR: In episode 217 of Software Engineering Radio, host Charles Anderson talks with James Turnbull, a software developer and security specialist who's vice president of services at Docker.
Abstract: In episode 217 of Software Engineering Radio, host Charles Anderson talks with James Turnbull, a software developer and security specialist who's vice president of services at Docker. Lightweight Docker containers are rapidly becoming a tool for deploying microservice-based architectures.

Proceedings ArticleDOI
15 Nov 2015
TL;DR: This work introduces Spack, a package management tool that provides a novel, recursive specification syntax to invoke parametric builds of packages and dependencies, and shows through real-world use cases that Spack supports diverse and demanding applications, bringing order to HPC software chaos.
Abstract: Large HPC centers spend considerable time supporting software for thousands of users, but the complexity of HPC software is quickly outpacing the capabilities of existing software management tools. Scientific applications require specific versions of compilers, MPI, and other dependency libraries, so using a single, standard software stack is infeasible. However, managing many configurations is difficult because the configuration space is combinatorial in size. We introduce Spack, a tool used at Lawrence Livermore National Laboratory to manage this complexity. Spack provides a novel, recursive specification syntax to invoke parametric builds of packages and dependencies. It allows any number of builds to coexist on the same system, and it ensures that installed packages can find their dependencies, regardless of the environment. We show through real-world use cases that Spack supports diverse and demanding applications, bringing order to HPC software chaos.

Journal ArticleDOI
01 Jan 2015
TL;DR: A new software design based on the SWC format, a standardized neuromorphometric format that has been widely used for analyzing neuronal morphologies or sharing neuron reconstructions via online archives such as NeuroMorpho.org is presented.
Abstract: Brain circuit mapping requires digital reconstruction of neuronal morphologies in complicated networks. Despite recent advances in automatic algorithms, reconstruction of neuronal structures is still a bottleneck in circuit mapping due to a lack of appropriate software for both efficient reconstruction and user-friendly editing. Here we present a new software design based on the SWC format, a standardized neuromorphometric format that has been widely used for analyzing neuronal morphologies or sharing neuron reconstructions via online archives such as NeuroMorpho.org. We have also implemented the design in our open-source software called neuTube 1.0. As specified by the design, the software is equipped with parallel 2D and 3D visualization and intuitive neuron tracing/editing functions, allowing the user to efficiently reconstruct neurons from fluorescence image data and edit standard neuron structure files produced by any other reconstruction software. We show the advantages of neuTube 1.0 by comparing it to two other software tools, namely Neuromantic and Neurostudio. The software is available for free at http://www.neutracing.com, which also hosts complete software documentation and video tutorials.
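The SWC format the design is built around is a plain-text format with one neuron node per line and seven whitespace-separated fields: id, type, x, y, z, radius, and parent id (with -1 marking the root). A hedged minimal reader might look like this; it only illustrates the format, not neuTube's own I/O code.

```python
# Hedged sketch of an SWC reader: each non-comment line holds
# id, type, x, y, z, radius, parent_id (parent_id == -1 marks the root).
def read_swc(path):
    nodes = {}
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#"):   # skip comments and blank lines
                continue
            nid, ntype, x, y, z, radius, parent = line.split()[:7]
            nodes[int(nid)] = {
                "type": int(ntype),
                "xyz": (float(x), float(y), float(z)),
                "radius": float(radius),
                "parent": int(parent),
            }
    return nodes
```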

DOI
16 May 2015
TL;DR: This paper presents and discusses a software obfuscation prototype tool based on the LLVM compilation suite that supports basic instruction substitutions, insertion of bogus control-flow constructs mixed with opaque predicates, control-flow flattening, procedures merging, as well as a code tamper-proofing algorithm embedding code and data checksums directly in the control-flow flattening mechanism.
Abstract: Software security with respect to reverse-engineering is a challenging discipline that has been researched for several years and which is still active. At the same time, this field is inherently practical, and thus of industrial relevance: indeed, protecting a piece of software against tampering, malicious modifications or reverse-engineering is a very difficult task. In this paper, we present and discuss a software obfuscation prototype tool based on the LLVM compilation suite. Our tool is built as different passes, where some of them have been open-sourced and are freely available, that work on the LLVM Intermediate Representation (IR) code. This approach brings several advantages, including the fact that it is language-agnostic and mostly independent of the target architecture. Our current prototype supports basic instruction substitutions, insertion of bogus control-flow constructs mixed with opaque predicates, control-flow flattening, procedures merging as well as a code tamper-proofing algorithm embedding code and data checksums directly in the control-flow flattening mechanism.
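The tool itself operates on LLVM IR, but the control-flow flattening idea it implements is language-neutral: straight-line control flow is replaced by a dispatcher loop that selects the next basic block through a state variable. The Python sketch below only illustrates that transformation on a toy function; it is not output of, or code from, the LLVM passes.

```python
# Language-neutral illustration of control-flow flattening: the original
# straight-line function and a flattened version driven by a dispatcher loop.

def original(x):
    y = x + 1
    y = y * 2
    return y - 3

def flattened(x):
    state, y = 0, None
    while True:                 # dispatcher loop
        if state == 0:
            y = x + 1
            state = 2           # successor chosen via the state variable
        elif state == 2:
            y = y * 2
            state = 1
        elif state == 1:
            return y - 3

assert original(5) == flattened(5)
```

In the real tool the successor choices are further hidden behind opaque predicates, and the abstract notes that the tamper-proofing checksums are embedded directly in this flattening mechanism.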

Patent
10 Nov 2015
TL;DR: In this article, a system and method for a container service that obtains a software image of a software container that has been configured to be executed within a computer system instance registered to a cluster by one or more processors is presented.
Abstract: A system and method for a container service that obtains a software image of a software container that has been configured to be executed within a computer system instance registered to a cluster by one or more processors. The container service is configured to receive a request to launch the software image in accordance with a task definition, wherein the task definition specifies an allocation of resources for the software container. The container service may then determine, according to a placement scheme, a subset of a set of container instances registered to the cluster in which to launch the software image in accordance with the task definition. Upon determining the subset of the set of container instances, the container service may launch the software image as one or more running software containers in the set of container instances in accordance with the task definition.

Journal ArticleDOI
01 Aug 2015
TL;DR: The experimental results showed that a cost-sensitive neural network can be created successfully by using the ABC optimization algorithm for the purpose of software defect prediction, and a different classification approach for this problem is proposed.
Abstract: Highlights: A software defect prediction model was built with an Artificial Neural Network (ANN). ANN connection weights were optimized by the Artificial Bee Colony (ABC) algorithm. A parametric cost-sensitivity feature was added to the ANN by using a new error function. The model was applied to five publicly available datasets from the NASA repository. Results were compared with other cost-sensitive and non-cost-sensitive studies.
The software development life cycle generally includes analysis, design, implementation, test and release phases. The testing phase should be operated effectively in order to release bug-free software to end users. In the last two decades, academicians have taken an increasing interest in the software defect prediction problem, and several machine learning techniques have been applied for more robust prediction. A different classification approach for this problem is proposed in this paper. A combination of the traditional Artificial Neural Network (ANN) and the novel Artificial Bee Colony (ABC) algorithm is used in this study. Training the neural network is performed by the ABC algorithm in order to find optimal weights. The False Positive Rate (FPR) and False Negative Rate (FNR) multiplied by parametric cost coefficients form the optimization task of the ABC algorithm. Software defect data in nature have a class imbalance because of the skewed distribution of defective and non-defective modules, so that conventional error functions of the neural network produce unbalanced FPR and FNR results. The proposed approach was applied to five publicly available datasets from the NASA Metrics Data Program repository. Accuracy, probability of detection, probability of false alarm, balance, Area Under Curve (AUC), and Normalized Expected Cost of Misclassification (NECM) are the main performance indicators of our classification approach. In order to prevent random results, the dataset was shuffled and the algorithm was executed 10 times with the use of n-fold cross-validation in each iteration. Our experimental results showed that a cost-sensitive neural network can be created successfully by using the ABC optimization algorithm for the purpose of software defect prediction.
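The parametric cost-sensitive objective described above is simple to state: weight the false positive and false negative rates by cost coefficients and let the optimizer minimize their sum. The sketch below computes that fitness from predictions; the coefficient values are illustrative assumptions, and the ABC weight search itself is not shown.

```python
# Hedged sketch of the parametric cost-sensitive fitness described in the
# abstract: c_fp * FPR + c_fn * FNR (coefficient values here are illustrative).
# An ABC-style optimizer would search ANN weights that minimize this value.
def cost_sensitive_fitness(y_true, y_pred, c_fp=1.0, c_fn=5.0):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fpr = fp / (fp + tn) if (fp + tn) else 0.0
    fnr = fn / (fn + tp) if (fn + tp) else 0.0
    return c_fp * fpr + c_fn * fnr   # lower is better

print(cost_sensitive_fitness([1, 0, 1, 0], [1, 1, 0, 0]))   # -> 3.0
```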


01 Jan 2015
TL;DR: This paper presents SoftNIC, a hybrid software-hardware architecture to bridge the gap between limited hardware capabilities and ever-changing user demands, and provides a programmable platform that allows applications to leverage NIC features implemented in software and hardware, without sacrificing performance.
Abstract: As the main gateway for network traffic to a server, the network interface card (NIC) is an ideal place to incorporate diverse network functionality, such as traffic control, protocol offloading, and virtualization. However, the slow evolution and inherent inflexibility of NIC hardware have failed to support evolving protocols, emerging applications, and rapidly changing system/network architectures. The traditional software approach to this problem (implementing NIC features in the host network stack) is unable to meet increasingly challenging performance requirements. In this paper we present SoftNIC, a hybrid software-hardware architecture to bridge the gap between limited hardware capabilities and ever-changing user demands. SoftNIC provides a programmable platform that allows applications to leverage NIC features implemented in software and hardware, without sacrificing performance. Our evaluation results show that SoftNIC achieves multi-10G performance even on a single core and scales further with multiple cores. We also present a variety of use cases to show the potential of software NIC augmentation.