Topic

Software

About: Software is a research topic. Over its lifetime, 130,577 publications have been published within this topic, receiving 2,028,987 citations. The topic is also known as: computer software & computational tool.


Papers
Journal ArticleDOI
TL;DR: The development and empirical validation of a model of software piracy by individuals in the workplace indicates that individual attitudes, subjective norms, and perceived behavioral control are significant precursors to the intention to illegally copy software.
Abstract: Theft of software and other intellectual property has become one of the most visible problems in computing today. This paper details the development and empirical validation of a model of software piracy by individuals in the workplace. The model was developed from the results of prior research into software piracy, and the reference disciplines of the theory of planned behavior, expected utility theory, and deterrence theory. A survey of 201 respondents was used to test the model. The results indicate that individual attitudes, subjective norms, and perceived behavioral control are significant precursors to the intention to illegally copy software. In addition, punishment severity, punishment certainty, and software cost have direct effects on the individual's attitude toward software piracy, whereas punishment certainty has a significant effect on perceived behavioral control. Consequently, strategies to reduce software piracy should focus on these factors. The results add to a growing stream of information systems research into illegal software copying behavior and have significant implications for organizations and industry groups aiming to reduce software piracy.

509 citations
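
The model described above has the structure of a theory-of-planned-behavior regression: piracy intention is explained by attitude, subjective norms, and perceived behavioral control. The snippet below is a minimal illustrative sketch of that structure, not the paper's actual analysis; the data are synthetic and the weights are invented purely to show how such precursors would be estimated from survey responses.

```python
# Minimal sketch (not the study's own analysis): regress piracy intention on
# attitude, subjective norm, and perceived behavioral control, as in the
# theory of planned behavior. All data below are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n = 201  # matches the study's sample size, but the values are simulated

attitude = rng.normal(size=n)         # attitude toward piracy
subjective_norm = rng.normal(size=n)  # perceived social pressure
pbc = rng.normal(size=n)              # perceived behavioral control

# Hypothetical "true" weights used only to generate the synthetic outcome.
intention = (0.5 * attitude + 0.3 * subjective_norm + 0.2 * pbc
             + rng.normal(scale=0.5, size=n))

# Ordinary least squares via numpy's least-squares solver.
X = np.column_stack([np.ones(n), attitude, subjective_norm, pbc])
coef, *_ = np.linalg.lstsq(X, intention, rcond=None)
print(dict(zip(["intercept", "attitude", "subjective_norm", "pbc"],
               coef.round(2))))
```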

Journal ArticleDOI
TL;DR: Two machine learning methods are described for building estimators of software development effort from historical data; experiments indicate that these techniques are competitive with traditional estimators on one dataset but also that they are sensitive to the data on which they are trained.
Abstract: Accurate estimation of software development effort is critical in software engineering. Underestimates lead to time pressures that may compromise full functional development and thorough testing of software. In contrast, overestimates can result in noncompetitive contract bids and/or over-allocation of development resources and personnel. As a result, many models for estimating software development effort have been proposed. This article describes two methods of machine learning, which we use to build estimators of software development effort from historical data. Our experiments indicate that these techniques are competitive with traditional estimators on one dataset, but also illustrate that these methods are sensitive to the data on which they are trained. This cautionary note applies to any model-construction strategy that relies on historical data. All such models for software effort estimation should be evaluated by exploring model sensitivity on a variety of historical data.

508 citations
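
To make the idea of learning an effort estimator from historical data concrete, here is a minimal sketch under assumptions of my own: a decision-tree regressor trained on a handful of invented projects described by size and team experience. The paper's own experiments used established historical datasets and specific learners; the features, numbers, and library choice here are illustrative only.

```python
# Sketch: learn a software-effort estimator from (synthetic) historical data.
from sklearn.tree import DecisionTreeRegressor

# Hypothetical past projects: [size in KLOC, average team experience in years]
X_hist = [[10, 2], [25, 5], [40, 3], [60, 8], [80, 4], [120, 10]]
y_effort = [24, 45, 90, 110, 180, 210]  # person-months (invented)

model = DecisionTreeRegressor(max_depth=2, random_state=0)
model.fit(X_hist, y_effort)

# Estimate effort for a new 50 KLOC project with a moderately experienced team.
print(model.predict([[50, 6]]))  # prediction is only as good as the history
```

As the abstract cautions, an estimator like this is only as reliable as the historical projects it is trained on, which is why evaluating model sensitivity across datasets matters.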

Journal ArticleDOI
R.G. Dromey
TL;DR: The model supports building quality into software, defining language-specific coding standards, systematically classifying quality defects, and developing automated code auditors for detecting defects in software.
Abstract: A model for software product quality is defined. It has been formulated by associating a set of quality-carrying properties with each of the structural forms that are used to define the statements and statement components of a programming language. These quality-carrying properties are in turn linked to the high-level quality attributes of the International Standard for Software Product Evaluation, ISO-9126. The model supports building quality into software, the definition of language-specific coding standards, the systematic classification of quality defects, and the development of automated code auditors for detecting defects in software.

507 citations
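
The idea of an automated code auditor that detects defects and traces them to high-level quality attributes can be sketched in a few lines. The example below is hypothetical: it flags one defect class (numeric literals hard-coded into Python functions) and reports each finding against a quality-carrying property and an ISO 9126 attribute; the specific property name and the mapping are chosen for illustration and are not taken from Dromey's model.

```python
# Hypothetical mini code auditor: flag "magic numbers" and map the defect to a
# quality-carrying property and a high-level ISO 9126 attribute.
import ast

SOURCE = """
def discounted_price(price):
    return price * 0.85  # undocumented discount rate
"""

def audit_magic_numbers(source: str):
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            if node.value not in (0, 1):  # 0 and 1 are conventionally tolerated
                findings.append(
                    f"line {node.lineno}: magic number {node.value!r} -> "
                    f"property 'parameterized' -> ISO 9126 'maintainability'"
                )
    return findings

for finding in audit_magic_numbers(SOURCE):
    print(finding)
```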

Book
01 Jan 2013
TL;DR: Offering the first theoretical and historical account of software for media authoring and its effects on the practice and the very concept of 'media,' Lev Manovich develops his own theory for this rapidly growing, always-changing field.
Abstract: Software has replaced a diverse array of physical, mechanical, and electronic technologies used before the 21st century to create, store, distribute, and interact with cultural artifacts. It has become our interface to the world, to others, to our memory and our imagination - a universal language through which the world speaks, and a universal engine on which the world runs. What electricity and the combustion engine were to the early 20th century, software is to the early 21st century. Offering the first theoretical and historical account of software for media authoring and its effects on the practice and the very concept of 'media,' the author of The Language of New Media (2001) develops his own theory for this rapidly growing, always-changing field. What was the thinking, and what were the motivations, of the people who in the 1960s and 1970s created the concepts and practical techniques that underlie contemporary media software such as Photoshop, Illustrator, Maya, Final Cut, and After Effects? How do their interfaces and tools shape the visual aesthetics of contemporary media and design? What happens to the idea of a 'medium' after previously media-specific tools have been simulated and extended in software? Is it still meaningful to talk about different mediums at all? Lev Manovich answers these questions and supports his theoretical arguments with detailed analysis of key media applications such as Photoshop and After Effects, popular web services such as Google Earth, and projects in motion graphics, interactive environments, graphic design, and architecture. Software Takes Command is a must for all practicing designers, media artists, and scholars concerned with contemporary media.

507 citations

Journal ArticleDOI
TL;DR: RevBayes is a new open-source software package based on probabilistic graphical models, a powerful generic framework for specifying and analyzing statistical models; it outperforms competing software for several standard analyses while requiring users to explicitly specify each part of the model and analysis.
Abstract: Programs for Bayesian inference of phylogeny currently implement a unique and fixed suite of models. Consequently, users of these software packages are simultaneously forced to use a number of programs for a given study, while also lacking the freedom to explore models that have not been implemented by the developers of those programs. We developed a new open-source software package, RevBayes, to address these problems. RevBayes is entirely based on probabilistic graphical models, a powerful generic framework for specifying and analyzing statistical models. Phylogenetic graphical models can be specified interactively in RevBayes, piece by piece, using a new succinct and intuitive language called Rev. Rev is similar to the R language and the BUGS model-specification language, and should be easy to learn for most users. The strength of RevBayes is the simplicity with which one can design, specify, and implement new and complex models. Fortunately, this tremendous flexibility does not come at the cost of slower computation; as we demonstrate, RevBayes outperforms competing software for several standard analyses. Compared with other programs, RevBayes has fewer black-box elements. Users need to explicitly specify each part of the model and analysis. Although this explicitness may initially be unfamiliar, we are convinced that this transparency will improve understanding of phylogenetic models in our field. Moreover, it will motivate the search for improvements to existing methods by brazenly exposing the model choices that we make to critical scrutiny. RevBayes is freely available at http://www.RevBayes.com. [Bayesian inference; Graphical models; MCMC; statistical phylogenetics]

505 citations
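
The workflow described above is to specify a graphical model node by node in the Rev language and then sample it with MCMC. Rather than reproduce Rev syntax here, the sketch below illustrates the same specify-then-sample idea in plain Python on a deliberately tiny model (a normal prior over an unknown mean and normally distributed observations) with a hand-rolled Metropolis sampler; it is not RevBayes or Rev code, and the data are synthetic.

```python
# Toy "specify the graphical model piece by piece, then run MCMC" example.
# This is ordinary Python, not Rev, and the model is far simpler than anything
# phylogenetic; it only makes the workflow concrete.
import math
import random

random.seed(1)
data = [4.8, 5.1, 5.3, 4.9, 5.2]  # synthetic observations

def log_prior(mu):
    # mu ~ Normal(0, 10): the single unobserved stochastic node
    return -0.5 * (mu / 10.0) ** 2

def log_likelihood(mu):
    # x_i ~ Normal(mu, 1): observed nodes, conditioned on mu
    return sum(-0.5 * (x - mu) ** 2 for x in data)

def log_posterior(mu):
    return log_prior(mu) + log_likelihood(mu)

# Metropolis sampler: propose a symmetric move on mu, accept with the usual ratio.
mu, samples = 0.0, []
for _ in range(20_000):
    proposal = mu + random.gauss(0.0, 0.5)
    delta = log_posterior(proposal) - log_posterior(mu)
    if delta >= 0 or random.random() < math.exp(delta):
        mu = proposal
    samples.append(mu)

burned = samples[5_000:]  # discard burn-in
print(f"posterior mean of mu ~ {sum(burned) / len(burned):.2f}")
```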


Network Information
Related Topics (5)

Topic                     Papers     Citations    Related
User interface            85.4K      1.7M         87%
Cluster analysis          146.5K     2.9M         86%
Support vector machine    73.6K      1.7M         86%
The Internet              213.2K     3.8M         85%
Information system        107.5K     1.8M         85%
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2024    6
2023    5,523
2022    13,625
2021    3,455
2020    5,268
2019    5,982