Topic

Software

About: Software is a research topic. Over its lifetime, 130,577 publications have been published within this topic, receiving 2,028,987 citations. The topic is also known as: computer software and computational tool.


Papers
Journal ArticleDOI
TL;DR: A general but tailorable cost-benefit model is devised, and a novel exploratory analysis technique, MARS (multivariate adaptive regression splines), is used to build fault-proneness models; the results indicate that a model built on one system can accurately rank classes within another system according to their fault proneness.
Abstract: A number of papers have investigated the relationships between design metrics and the detection of faults in object-oriented software. Several of these studies have shown that such models can be accurate in predicting faulty classes within one particular software product. In practice, however, prediction models are built on certain products to be used on subsequent software development projects. How accurate can these models be, considering the inevitable differences that may exist across projects and systems? Organizations typically learn and change. From a more general standpoint, can we obtain any evidence that such models are economically viable tools for focusing validation and verification effort? This paper attempts to answer these questions by devising a general but tailorable cost-benefit model and by using fault and design data collected on two mid-size Java systems developed in the same environment. Another contribution of the paper is the use of a novel exploratory analysis technique, MARS (multivariate adaptive regression splines), to build such fault-proneness models, whose functional form is a priori unknown. The results indicate that a model built on one system can be used to accurately rank classes within another system according to their fault proneness. The downside is that, because of system differences, the predicted fault probabilities are not representative of the target system. Our cost-benefit model nevertheless demonstrates that the MARS fault-proneness model is potentially viable from an economic standpoint. The linear model is not nearly as good, suggesting that a more complex model is required.
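The abstract does not give implementation details, but the core idea of MARS is to model the response as a weighted sum of hinge functions max(0, x - t). As a rough illustration (not the authors' method), here is a minimal Python sketch that fits a fault-proneness score from one design metric using a single hinge pair at a fixed knot; real MARS searches over variables and knot locations automatically and prunes terms:

```python
import numpy as np

def hinge_basis(x, knot):
    """MARS building blocks: the hinge pair max(0, x - knot) and max(0, knot - x)."""
    return np.maximum(0.0, x - knot), np.maximum(0.0, knot - x)

def fit_mars_like(x, y, knot):
    """Least-squares fit of y ~ b0 + b1*max(0, x - knot) + b2*max(0, knot - x)."""
    h1, h2 = hinge_basis(x, y if False else x)[0], None  # placeholder removed below
    h1, h2 = hinge_basis(x, knot)
    X = np.column_stack([np.ones_like(x), h1, h2])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def predict(coef, x, knot):
    """Evaluate the fitted piecewise-linear model."""
    h1, h2 = hinge_basis(x, knot)
    return coef[0] + coef[1] * h1 + coef[2] * h2
```

Because the basis is piecewise linear, the fitted model can capture a metric that is harmless below a threshold and fault-inducing above it, which a plain linear model (as the paper notes) cannot.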

325 citations

Journal ArticleDOI
TL;DR: Estimation results suggest that introductory pricing is an effective practice at the beginning of the product cycle, while expanding software variety becomes more effective later; the authors also find a degree of inertia in the software market that does not exist in the hardware market.
Abstract: We examine the importance of indirect network effects in the U.S. video game market between 1994 and 2002. The diffusion of game systems is analyzed by the interaction between console adoption decisions and software supply decisions. Estimation results suggest that introductory pricing is an effective practice at the beginning of the product cycle, and expanding software variety becomes more effective later. We also find a degree of inertia in the software market that does not exist in the hardware market. This observation implies that software providers continue to exploit the installed base of hardware users after hardware demand has slowed.

325 citations

Proceedings ArticleDOI
01 May 2010
TL;DR: An automated technique is proposed that combines traceability with a machine learning technique known as topic modeling: it records traceability links during the software development process and learns a probabilistic topic model over the artifacts.
Abstract: Software traceability is a fundamentally important task in software engineering. The need for automated traceability increases as projects become more complex and as the number of artifacts increases. We propose an automated technique that combines traceability with a machine learning technique known as topic modeling. Our approach automatically records traceability links during the software development process and learns a probabilistic topic model over artifacts. The learned model allows for the semantic categorization of artifacts and the topical visualization of the software system. To test our approach, we have implemented several tools: an artifact search tool combining keyword-based search and topic modeling, a recording tool that performs prospective traceability, and a visualization tool that allows one to navigate the software architecture and view semantic topics associated with relevant artifacts and architectural components. We apply our approach to several data sets and discuss how topic modeling enhances software traceability, and vice versa.
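The abstract describes semantic categorization of artifacts via a learned topic model. As a highly simplified stand-in (fixed keyword sets rather than a learned probabilistic model, with invented artifact IDs and text), the following sketch shows the shape of the idea: score each artifact against each topic, assign it to the best one, and propose trace links between artifacts that share a topic:

```python
from collections import Counter

# Hypothetical artifacts (IDs and text invented for illustration).
ARTIFACTS = {
    "REQ-1": "user login requires password authentication",
    "SRC-auth.py": "verify password hash and issue authentication token",
    "REQ-2": "export monthly report as pdf",
    "SRC-report.py": "render pdf report layout",
}

# Stand-in "topics": fixed keyword sets instead of learned word distributions.
TOPICS = {
    "auth": {"authentication", "password", "login"},
    "reporting": {"pdf", "report"},
}

def topic_scores(text, topics):
    """Score a text against each topic by keyword-occurrence counts."""
    words = Counter(text.lower().split())
    return {name: sum(words[w] for w in kws) for name, kws in topics.items()}

def categorize(artifacts, topics):
    """Assign each artifact to its highest-scoring topic."""
    assignments = {}
    for aid, text in artifacts.items():
        scores = topic_scores(text, topics)
        assignments[aid] = max(scores, key=scores.get)
    return assignments

def trace_candidates(artifacts, topics):
    """Propose traceability links between artifacts sharing a topic."""
    by_topic = {}
    for aid, topic in categorize(artifacts, topics).items():
        by_topic.setdefault(topic, []).append(aid)
    return by_topic
```

A real topic model (e.g. LDA) would infer the topics and per-artifact topic mixtures from the corpus itself rather than rely on hand-picked keywords.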

325 citations

Journal ArticleDOI
TL;DR: An assessment of several published statistical regression models that relate software development effort to software size measured in function points; the assessment identifies a problem with the current method for measuring function points that limits their effective use in regression models, and suggests a modification to the approach that should improve the accuracy of function-point-based prediction models.
Abstract: This paper presents an assessment of several published statistical regression models that relate software development effort to software size measured in function points. The principal concern with published models has to do with the number of observations upon which the models were based and inattention to the assumptions inherent in regression analysis. The research describes appropriate statistical procedures in the context of a case study based on function point data for 104 software development projects and discusses limitations of the resulting model in estimating development effort. The paper also focuses on a problem with the current method for measuring function points that constrains the effective use of function points in regression models, and suggests a modification to the approach that should enhance the accuracy of prediction models based on function points in the future.
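The paper does not specify its regression form here, but a common way to respect OLS assumptions on size/effort data, shown below on invented numbers, is to fit the model on log-transformed variables (which stabilizes variance and yields a power-law effort model):

```python
import numpy as np

# Hypothetical (function points, effort in person-hours) observations.
fp = np.array([100.0, 200.0, 400.0, 800.0])
effort = np.array([500.0, 1100.0, 2600.0, 6000.0])

# Fit log(effort) = a + b * log(fp); the log transform is a standard
# remedy when residual variance grows with project size.
A = np.column_stack([np.ones_like(fp), np.log(fp)])
coef, *_ = np.linalg.lstsq(A, np.log(effort), rcond=None)
a, b = coef

def predict_effort(fp_value):
    """Predicted effort for a project of the given function point count."""
    return float(np.exp(a + b * np.log(fp_value)))
```

An exponent b above 1 would indicate diseconomies of scale (effort growing faster than size), one of the substantive questions such models are used to answer.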

324 citations

Journal ArticleDOI
TL;DR: This work describes various applications of signature matching as a tool for using software libraries, informed by the authors' implementation of a function signature matcher written in Standard ML.
Abstract: Signature matching is a method for organizing, navigating through, and retrieving from software libraries. We consider two kinds of software library components—functions and modules—and hence two kinds of matching—function matching and module matching. The signature of a function is simply its type; the signature of a module is a multiset of user-defined types and a multiset of function signatures. For both functions and modules, we consider not just exact match but also various flavors of relaxed match. We describe various applications of signature matching as a tool for using software libraries, informed by experience with our implementation of a function signature matcher written in Standard ML.
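The authors' matcher is written in Standard ML; as a language-neutral illustration of the idea (not their implementation), the sketch below represents a function signature as a tuple of parameter-type names plus a result type, with an exact match and one relaxed flavor that ignores parameter order. The library entries are invented:

```python
def exact_match(sig, query):
    """Exact signature match: same parameter types in order, same result type."""
    return sig == query

def relaxed_match(sig, query):
    """Relaxed match: parameter types compared as a multiset (order ignored)."""
    (params_a, ret_a), (params_b, ret_b) = sig, query
    return sorted(params_a) == sorted(params_b) and ret_a == ret_b

# Hypothetical library: name -> ((parameter types...), result type).
LIBRARY = {
    "concat": (("string", "string"), "string"),
    "replicate": (("int", "string"), "string"),
    "length": (("string",), "int"),
}

def search(query, match=relaxed_match):
    """Return the names of library functions whose signature matches the query."""
    return [name for name, sig in LIBRARY.items() if match(sig, query)]
```

Relaxed match lets a user who types the query "string * int -> string" still find `replicate`, whose declared parameter order happens to differ.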

324 citations


Network Information
Related Topics (5)

Topic                    Papers    Citations   Relatedness
User interface           85.4K     1.7M        87%
Cluster analysis         146.5K    2.9M        86%
Support vector machine   73.6K     1.7M        86%
The Internet             213.2K    3.8M        85%
Information system       107.5K    1.8M        85%
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2024         6
2023     5,523
2022    13,625
2021     3,455
2020     5,268
2019     5,982