
Showing papers in "IEEE Software in 2008"


Journal ArticleDOI
TL;DR: The authors describe FindBugs, an open source static-analysis tool for Java, evaluate which kinds of defects relatively simple techniques can detect effectively, and discuss how to incorporate such tools into software development.
Abstract: Static analysis examines code in the absence of input data and without running the code. It can detect potential security violations (SQL injection), runtime errors (dereferencing a null pointer), and logical inconsistencies (a conditional test that can't possibly be true). Although a rich body of literature exists on the algorithms and analytical frameworks such tools use, reports describing experiences in industry are much harder to come by. The authors describe FindBugs, an open source static-analysis tool for Java, and their experiences using it in production settings. They evaluate which kinds of defects relatively simple techniques can detect effectively and help developers understand how to incorporate such tools into software development.

494 citations
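The flavor of "relatively simple techniques" the article describes can be illustrated with a toy checker. FindBugs itself analyzes Java bytecode, so this Python `ast` sketch is an analogy for one bug pattern (a conditional that can't possibly be anything but true), not FindBugs's implementation:

```python
import ast

def find_self_comparisons(source):
    """Flag comparisons of a name with itself (e.g. 'a == a'):
    a conditional test whose outcome never varies."""
    warnings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Compare) and len(node.comparators) == 1:
            left, right = node.left, node.comparators[0]
            if (isinstance(left, ast.Name) and isinstance(right, ast.Name)
                    and left.id == right.id):
                warnings.append((node.lineno, left.id))
    return warnings

buggy = """def check(a, b):
    if a == a:  # defect: trivially true
        return b
"""
print(find_self_comparisons(buggy))  # -> [(2, 'a')]
```

A syntactic pattern match like this needs no data-flow analysis, which is exactly why such checks are cheap enough to run on every build.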


Journal ArticleDOI
TL;DR: An analysis of data from 16 software development organizations reveals seven agile RE practices, along with their benefits and challenges.
Abstract: An analysis of data from 16 software development organizations reveals seven agile RE practices, along with their benefits and challenges. The rapidly changing business environment in which most organizations operate is challenging traditional requirements-engineering (RE) approaches. Software development organizations often must deal with requirements that tend to evolve quickly and become obsolete even before project completion. Rapid changes in competitive threats, stakeholder preferences, development technology, and time-to-market pressures make prespecified requirements inappropriate.

427 citations


Journal ArticleDOI
TL;DR: Results from the global Web survey of IT departments in 2005 and 2007 suggest that, although the overall project failure rate is high, word of a software crisis is exaggerated.
Abstract: Results from our global Web survey of IT departments in 2005 and 2007 suggest that, although the overall project failure rate is high, word of a software crisis is exaggerated.

347 citations


Journal ArticleDOI
TL;DR: This paper discusses how component-based reuse of the form Douglas McIlroy envisaged in the 1960s is still the exception rather than the rule; most of the systematic software reuse practiced today uses heavyweight approaches such as product-line engineering or domain-specific frameworks.
Abstract: For many years, the IT industry has sought to accelerate the software development process by assembling new applications from existing software assets. However, true component-based reuse of the form Douglas McIlroy envisaged in the 1960s is still the exception rather than the rule, and most of the systematic software reuse practiced today uses heavyweight approaches such as product-line engineering or domain-specific frameworks. By component, we mean any cohesive and compact unit of software functionality with a well-defined interface - from simple programming language classes to more complex artifacts such as Web services and Enterprise JavaBeans.

180 citations


Journal ArticleDOI
TL;DR: Five principles are proposed that can help programmers to choose the most appropriate refactoring tools and also help toolsmiths to design tools that fit the programmer's purpose.
Abstract: Refactoring is the process of changing software's structure while preserving its external behavior. Refactoring tools can improve the speed and accuracy with which developers create and maintain software-but only if they are used. In practice, tools are not used as much as they could be; this seems to be because sometimes they do not align with the refactoring tactic preferred by most programmers, a tactic the authors call "floss refactoring." They propose five principles that characterize successful floss-refactoring tools - principles that can help programmers to choose the most appropriate refactoring tools and also help toolsmiths to design tools that fit the programmer's purpose.

151 citations
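The definition above ("changing software's structure while preserving its external behavior") can be made concrete with a small invented example of an extract-method refactoring, the kind of change floss-refactoring tools automate:

```python
# Hypothetical before/after (invented for illustration, not from the article).

# Before: the tax rule is buried inline.
def invoice_total_v1(prices):
    total = sum(prices)
    return round(total + total * 0.08, 2)

# After an "extract method" refactoring: the rule has a name and a test seam.
def apply_tax(amount, rate=0.08):
    return amount + amount * rate

def invoice_total_v2(prices):
    return round(apply_tax(sum(prices)), 2)

# The refactoring is valid only if external behavior is unchanged:
assert invoice_total_v1([10, 20]) == invoice_total_v2([10, 20])
```

The final assertion is the whole point: a refactoring tool must guarantee this equivalence mechanically, which is what makes it safer than hand editing.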


Journal ArticleDOI
TL;DR: Most software developers aren't primarily interested in security, but the software engineering community is slowly beginning to realize that information security is also important for software whose primary function isn't related to security.
Abstract: Most software developers aren't primarily interested in security. For decades, the focus has been on implementing as much functionality as possible before the deadline, and patching the inevitable bugs when it's time for the next release or hot fix. However, the software engineering community is slowly beginning to realize that information security is also important for software whose primary function isn't related to security. Security features or mechanisms typically aren't prominent in such software's user interface.

142 citations


Journal ArticleDOI
TL;DR: Three new tools from Microsoft combine techniques from static program analysis, dynamic analysis, model checking, and automated constraint solving while targeting different application domains.
Abstract: During the last 10 years, code inspection for standard programming errors has largely been automated with static code analysis. During the next 10 years, we expect to see similar progress in automating testing, and specifically test generation, thanks to advances in program analysis, efficient constraint solvers, and powerful computers. Three new tools from Microsoft combine techniques from static program analysis, dynamic analysis, model checking, and automated constraint solving while targeting different application domains.

139 citations
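A toy stand-in for the test-generation idea: where the Microsoft tools solve path constraints to reach specific branches, this sketch (an invented example) simply searches a small input domain for one witness per outcome:

```python
def f(x):
    # a branch that's hard to hit by random guessing
    if x * x - 5 * x + 6 == 0:
        return "rare branch"
    return "common branch"

def generate_tests(func, domain):
    """Pick one input per distinct outcome -- a brute-force stand-in for
    the constraint solving (here, solving x*x - 5x + 6 == 0) real tools do."""
    suite = {}
    for x in domain:
        suite.setdefault(func(x), x)
    return suite

print(generate_tests(f, range(-10, 11)))  # -> {'common branch': -10, 'rare branch': 2}
```

Enumeration only works on tiny domains; the advance the article describes is that constraint solvers find inputs like x = 2 directly from the branch condition.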


Journal ArticleDOI
TL;DR: Computational scientists developing software for HPC systems face unique software engineering issues, and attempts to transfer SE technologies to this domain must take these issues into account.
Abstract: Computational scientists developing software for HPC systems face unique software engineering issues. Attempts to transfer SE technologies to this domain must take these issues into account.

137 citations


Journal ArticleDOI
TL;DR: Sebastian Schaffert and his colleagues describe semantic wikis and explain how to model wiki knowledge and content for improved usability.
Abstract: Lean knowledge management is today implemented mostly through wikis, which let users enter text and other data, such as files, and connect the content through hyperlinks. Easy setup and a huge variety of editing support are primary reasons for wiki use in all types of intranet- and Internet-based information sharing (see P. Louridas, "Using Wikis in Software Development," IEEE Software, Mar. 2006, pp. 88- 91). The drawbacks show up when you need to structure data as opposed to just edit text. Many wikis have tons of useful content, but the volume and lack of structure make it inaccessible over time. This is where semantic wikis enter the picture. Sebastian Schaffert and his colleagues describe them here and explain how to model wiki knowledge and content for improved usability. I look forward to hearing from both readers and prospective authors about this column and the technologies you want to know more about.

121 citations


Journal ArticleDOI
TL;DR: The results indicate that test-first programmers are more likely to write software in more and smaller units that are less complex and more highly tested, and claims that TDD improves cohesion while lowering coupling are not confirmed.
Abstract: Support for test-driven development [TDD] is growing in many development contexts beyond its common association with extreme programming. By focusing on how TDD influences design characteristics, we hope to raise awareness of TDD as a design approach and assist others in decisions on whether and how to adopt TDD. Our results indicate that test-first programmers are more likely to write software in more and smaller units that are less complex and more highly tested. We weren't able to confirm claims that TDD improves cohesion while lowering coupling, but we anticipate ways to clarify the questions these design characteristics raised. In particular, we're working to eliminate the confounding factor of accessor usage in the cohesion metrics.

117 citations
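The test-first discipline the study examines looks like this in miniature (an invented example using Python's unittest; the study's subjects were not necessarily Python programmers):

```python
import unittest

# Step 1, test first: the test is written before the code it exercises,
# so it drives the unit's interface.
class LeadingZerosTest(unittest.TestCase):
    def test_pads_to_width(self):
        self.assertEqual(pad(7, width=3), "007")
        self.assertEqual(pad(42, width=2), "42")

# Step 2: the smallest unit that makes the test pass.
def pad(n, width):
    return str(n).zfill(width)

# Red-green cycle: run the tests after every small step.
unittest.main(argv=["tdd"], exit=False)
```

Writing the test first tends to produce exactly the effect the study measured: many small, independently testable units, because anything hard to call from a test gets decomposed.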


Journal ArticleDOI
TL;DR: Decision makers considering open source software face literature that attaches differing levels of importance to OSS's advantages and disadvantages; organizations must weigh these factors before adopting it.
Abstract: Many organizations use open source infrastructure software such as Linux, and open source software (OSS) is generally considered a viable technology. Both professional and academic literature devote much attention to the OSS phenomenon. However, decision makers considering the adoption of OSS face a plethora of books, research papers, and articles highlighting OSS's advantages and disadvantages. Different articles attach different levels of importance to these advantages and to other factors related to the adoption decision. Reasons for adopting OSS vary widely. Organizations must weigh the advantages and disadvantages of open source software before adopting it.

Journal ArticleDOI
TL;DR: The next time you find yourself struggling with the details of your software's functionality, back up and tell a story to keep you on the right track.
Abstract: The next time you find yourself struggling with the details of your software's functionality, back up and tell a story. Talking through simple user scenarios can help keep you on the right track.

Journal ArticleDOI
TL;DR: This special issue explores how the development of scientific software can be improved and provides some flavor of that software's huge variety.
Abstract: Not all scientific computing is high-performance computing—the variety of scientific software is huge. Such software might be complex simulation software developed and running on a high-performance computer, or software developed on a PC for embedding into instruments; for manipulating, analyzing or visualizing data or for orchestrating workflows. This special issue provides some flavor of that variety. It also explores the question of how the development of scientific software can be improved.

Journal ArticleDOI
TL;DR: Through a series of interviews, the authors explored how research scientists at two Canadian universities developed their software and indicated that the scientists used a set of strategies to address risk.
Abstract: The development of scientific software involves risk in the underlying theory, its implementation, and its use. Through a series of interviews, the authors explored how research scientists at two Canadian universities developed their software. These interviews indicated that the scientists used a set of strategies to address risk. They also suggested where the software engineering community could perform research focused on specific problems faced by scientific software developers.

Journal ArticleDOI
TL;DR: Critical issues that have enabled major achievements include the development of good model forms, criteria for evaluating models, methods for integrating expert judgment and statistical data analysis, and processes for developing new models that cover new software development approaches.
Abstract: This article summarizes major achievements and challenges of software resource estimation over the last 40 years, emphasizing the Cocomo suite of models. Critical issues that have enabled major achievements include the development of good model forms, criteria for evaluating models, methods for integrating expert judgment and statistical data analysis, and processes for developing new models that cover new software development approaches. The article also projects future trends in software development and evolution processes, along with their implications and challenges for future software resource estimation capabilities.
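The original Cocomo form behind the suite the article discusses can be sketched in a few lines. The constants below are the classic 1981 "organic-mode" values from Boehm's basic model, quoted for illustration rather than taken from this article; Cocomo II refines them with many more cost drivers:

```python
def basic_cocomo(kloc, a=2.4, b=1.05):
    """Basic Cocomo (Boehm, 1981), 'organic' mode: effort grows slightly
    faster than linearly with size; schedule grows much more slowly."""
    effort = a * kloc ** b            # person-months
    schedule = 2.5 * effort ** 0.38   # calendar months
    return effort, schedule

pm, months = basic_cocomo(32)
print(f"{pm:.0f} person-months, {months:.0f} months")  # -> 91 person-months, 14 months
```

The exponent b > 1 encodes a diseconomy of scale, one of the "good model forms" the article credits for Cocomo's longevity.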

Journal ArticleDOI
TL;DR: Quper provides concepts for reasoning about quality in relation to cost and value and can be used in combination with existing prioritization approaches to address requirements prioritization in a market-driven context.
Abstract: Would slightly better performance be significantly more valuable from a market perspective? Would significantly better performance be just slightly more expensive to implement? When dealing with performance, usability, reliability, and so on, you often end up in difficult trade-off analysis. You must take into account aspects such as release targets, end-user experience, and business opportunities. At the same time, you must consider what is feasible with the evolving system architecture and the available development resources.Quality requirements are of major importance in the development of systems for software-intensive products. To be successful, a company must find the right balance among competing quality attributes. How should you balance, for example, investments for improved usability of a mobile phone's phone book and better mobile positioning? In the context of quality requirements, decision making typically combines market considerations and design issues in activities such as roadmapping, release planning, and platform scoping. Models that address requirements prioritization in a market-driven context often emphasize functional aspects. (For a comparison of other relevant techniques with Quper, see the sidebar.) Quper provides concepts for reasoning about quality in relation to cost and value and can be used in combination with existing prioritization approaches.
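As a rough sketch of the reasoning Quper supports: the model's three published breakpoints (utility, differentiation, saturation) partition a quality scale into regions with different cost/value trade-offs. The region names, numbers, and semantics below are simplified illustrations, not the article's definitions:

```python
def quper_region(quality, utility, differentiation, saturation):
    """Classify a quality level against Quper's three breakpoints
    (a simplification of the published model)."""
    if quality < utility:
        return "useless"        # below the utility breakpoint: no market value
    if quality < differentiation:
        return "useful"         # acceptable, but not a selling point
    if quality < saturation:
        return "competitive"    # quality differentiates the product
    return "excessive"          # past saturation: added cost, no added value

# Invented example: voice-quality score for a phone feature, 0-10 scale.
print(quper_region(7, utility=3, differentiation=6, saturation=9))  # -> competitive
```

Placing a candidate release target into a region answers the article's opening questions: pushing quality past the saturation breakpoint is "significantly more expensive, only slightly more valuable."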

Journal ArticleDOI
J. Boegh1
TL;DR: A new standard on software quality requirements, ISO/IEC 25030, takes a systems perspective and suggests specifying requirements as measures and associated target values.
Abstract: A new standard on software quality requirements, ISO/IEC 25030, takes a systems perspective and suggests specifying requirements as measures and associated target values.

Journal ArticleDOI
TL;DR: The authors use the findings from their in-depth review of the 92 studies published in the last 25 years on software engineer motivation to give an overview of what managers need to know to improve motivation among their employees.
Abstract: Software engineers will do better work and stay with a company if they are motivated; as a result, the success of software projects is likely to improve. The authors use the findings from their in-depth review of the 92 studies published in the last 25 years on software engineer motivation to give an overview of what managers need to know to improve motivation among their employees.

Journal ArticleDOI
TL;DR: This article offers an overview of tools that aim to address quality control issues, and discusses their own flexible, open-source toolkit, which supports the creation of dashboards for quality control.
Abstract: Over time, software systems suffer gradual quality decay and therefore costs can rise if organizations fail to take proactive countermeasures. Quality control is the first step to avoiding this cost trap. Continuous quality assessments help users identify quality problems early, when their removal is still inexpensive; they also aid decision making by providing an integrated view of a software system's current status. As a side effect, continuous and timely feedback helps developers and maintenance personnel improve their skills and thereby decreases the likelihood of future quality defects. To make regular quality control feasible, it must be highly automated, and assessment results must be presented in an aggregated manner to avoid overwhelming users with data. This article offers an overview of tools that aim to address these issues. The authors also discuss their own flexible, open-source toolkit, which supports the creation of dashboards for quality control.
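The aggregation idea is the heart of a quality dashboard: roll many raw findings up into a status a human can act on. A toy sketch follows; the file names, metric, and thresholds are invented, and the authors' toolkit is far more configurable:

```python
def status(findings_per_kloc, warn=1.0, alarm=5.0):
    """Reduce per-file findings to one traffic-light status so users
    aren't overwhelmed with raw data (invented thresholds)."""
    worst = max(findings_per_kloc.values())
    if worst >= alarm:
        return "red"
    if worst >= warn:
        return "yellow"
    return "green"

snapshot = {"parser.c": 0.4, "net.c": 3.2, "ui.c": 0.9}
print(status(snapshot))  # -> yellow
```

Reporting the worst file rather than the average is a deliberate choice in this sketch: averages let one decaying module hide behind many healthy ones.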

Journal ArticleDOI
J.H. Wesselius1
TL;DR: Inner-source-software (ISS) development applies OSS practices within a limited environment that has a closed border (such as a company, a division, or a consortium), so companies using the ISS approach essentially establish an OSS community within the confines of their organization.
Abstract: Software product-line development lets organizations better optimize software development efficiency by building a shared set of assets for reuse in multiple products. This process introduces many challenges, not least of which is creating the initial set of reusable software assets. To accomplish this, organizations often establish a central software platform group. Such a group faces a serious problem: existing systems groups already have a large set of software components that they use to build their products. If companies are to successfully transition to product-line development, these systems groups must shift their investments from existing software components to the new reusable product-line assets. One way to encourage this is to create an internal open source software community that lets systems groups actively contribute their existing components to the platform. In OSS, a community works together to develop software. Because the software's users are part of the community, they can add the assets they need. Inner-source-software (ISS) development applies OSS within a limited environment that has a closed border (such as a company, a division, or a consortium). So, companies using the ISS approach essentially establish an OSS community within the confines of their organization.

Journal ArticleDOI
TL;DR: Two aspects of using quality attribute information are considered: communicating with stakeholders about quality attributes and incorporating quality attribute requirements into existing analysis and design methods.
Abstract: Quality attribute requirements are important both for customer and end-user satisfaction and for driving software system design. Yet asserting their importance raises many other questions. In particular, using quality attribute information in practice isn't obvious. Here, we consider two aspects of using such information: communicating with stakeholders about quality attributes and incorporating quality attribute requirements into existing analysis and design methods.

Journal ArticleDOI
TL;DR: The value-oriented approach to specifying quality requirements uses a range of potential representations chosen on the basis of assessing risk instead of quantifying everything.
Abstract: When quality requirements are elicited from stakeholders, they're often stated qualitatively, such as "the response time must be fast" or "we need a highly available system". However, qualitatively represented requirements are ambiguous and thus difficult to verify. The value-oriented approach to specifying quality requirements uses a range of potential representations chosen on the basis of assessing risk instead of quantifying everything.

Journal ArticleDOI
TL;DR: Two start-ups are described that have opportunistically and pragmatically developed their products, reusing functionality from others that they could never have built independently.
Abstract: Both practitioners and academics often frown on pragmatic and opportunistic reuse. Large organizations report that structured reuse methods and software product lines are often the way to go when it comes to efficient software reuse. However, opportunistic that is, nonstructured reuse has proven profitable for small to medium-sized organizations. Here, we describe two start-ups that have opportunistically and pragmatically developed their products, reusing functionality from others that they could never have built independently. We demonstrate that opportunistic and pragmatic reuse is necessary to rapidly develop innovative software products. We define pragmatic reuse as extending software with functionality from a third-party software supplier that was found without a formal search-and-procurement process and might not have been built with reuse in mind. We define opportunistic reuse as extending software with functionality from a third-party software supplier that wasn't originally intended to be integrated and reused.

Journal ArticleDOI
TL;DR: A self-management infrastructure requires a self-representation to model system functionality concerns, and the model-view-controller design pattern can improve concern separation in that self-representation.
Abstract: A self-management infrastructure requires a self-representation to model system functionality concerns. The model-view-controller design pattern can improve concern separation in a self-representation. Future computing initiatives such as ubiquitous and pervasive computing, large-scale distribution, and on-demand computing will foster unpredictable and complex environments with challenging demands. Next-generation systems will require flexible system infrastructures that can adapt to both dynamic changes in operational requirements and environmental conditions, while providing predictable behavior in areas such as throughput, scalability, dependability, and security. Successful projects, once deployed, will require skilled administration personnel to install, configure, maintain, and provide 24/7 support. Message-oriented middleware is one of the foundations of distributed systems.
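A minimal sketch of the model-view-controller separation the article applies to a self-representation (the classes and the throughput metric are invented for illustration):

```python
class Model:
    """Holds state and notifies views; knows nothing about rendering."""
    def __init__(self):
        self.throughput = 0
        self._views = []
    def attach(self, view):
        self._views.append(view)
    def update(self, value):
        self.throughput = value
        for v in self._views:        # model notifies, views render
            v.refresh(self)

class View:
    """Renders model state; holds no business logic."""
    def __init__(self):
        self.rendered = ""
    def refresh(self, model):
        self.rendered = f"throughput={model.throughput}/s"

class Controller:
    """Translates external events into model updates."""
    def __init__(self, model):
        self.model = model
    def on_measurement(self, value):
        self.model.update(value)

model, view = Model(), View()
model.attach(view)
Controller(model).on_measurement(42)
print(view.rendered)  # -> throughput=42/s
```

The separation pays off in a self-managing system because new views (dashboards, adaptation policies) can observe the same model without touching the logic that maintains it.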

Journal ArticleDOI
TL;DR: Examining the evidence on model-based testing (MBT) still yields some useful knowledge and raises issues relevant to other software technologies with similarly limited evidence.
Abstract: A rich body of experiences hasn't yet been published on all the software development techniques researchers have proposed. In fact, by some estimates, the techniques for which we do have substantial experience are few and far between. When we started looking at the evidence on model-based testing (MBT), we thought we'd come across some strong studies that showed this approach's capabilities compared to conventional testing techniques-this wasn't the case. However, we can still extract some useful knowledge and also discuss some issues that are relevant to other software technologies with similar types of evidence.

Journal ArticleDOI
TL;DR: Expecting highly accurate effort estimates might be unrealistic because software development projects are inherently uncertain. Nevertheless, software professionals' tendency toward overly optimistic estimates and their high level of estimation inconsistency suggest potential for improving effort estimation processes.
Abstract: Software projects average about 30 percent accuracy in effort estimation. Expecting highly accurate effort estimates might be unrealistic because software development projects are inherently uncertain. Nevertheless, software professionals' tendency toward overly optimistic estimates and their high level of estimation inconsistency suggest potential for improving effort estimation processes.
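For concreteness, accuracy figures of this kind are conventionally computed as the mean magnitude of relative error (MMRE); a sketch with invented project data:

```python
def mmre(estimates, actuals):
    """Mean magnitude of relative error: mean of |actual - estimate| / actual,
    the usual statistic behind figures like 'about 30 percent accuracy'."""
    return sum(abs(a - e) / a for e, a in zip(estimates, actuals)) / len(actuals)

estimated = [100, 80, 120]   # person-hours, per project (invented numbers)
actual    = [130, 75, 190]
print(f"MMRE = {mmre(estimated, actual):.0%}")  # -> MMRE = 22%
```

Note that MMRE averages away direction: the optimism bias the article highlights shows up only if you also track whether errors are mostly underestimates.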

Journal ArticleDOI
TL;DR: The elicitation, analysis, and specification of quality requirements involve careful balancing of a broad spectrum of competing priorities and developers must focus on identifying qualities and designing solutions that optimize the product's value to its stakeholders.
Abstract: The elicitation, analysis, and specification of quality requirements involve careful balancing of a broad spectrum of competing priorities. Developers must therefore focus on identifying qualities and designing solutions that optimize the product's value to its stakeholders.

Journal ArticleDOI
TL;DR: This issue's column, Axel Uhl, chief development architect in SAP's Office of the CTO, looks into MDD methodologies and tool support and shares his many practical experiences to help you master the ramp-up for your own enterprise.
Abstract: For decades, model-driven development has been the perfect example of software-engineering hype. Just as bees are attracted to honey, we software engineers look for ways of simplifying our work and automating endless change cycles. Today, after many years of experimenting with MDD, mostly in limited-size scientific environments, the three ingredients of methodology, notation, and tools seem to fit and support each other. Round-trip engineering might still be some years from day-to-day practice, but simple forward engineering with MDD is readily available to software practitioners now. And it works. In this issue's column, Axel Uhl, chief development architect in SAP's Office of the CTO, looks into MDD methodologies and tool support. He shares his many practical experiences to help you master the ramp-up for your own enterprise.

Journal ArticleDOI
Les Hatton1
TL;DR: The results showed no evidence that checklists significantly improved these inspections; further analysis revealed that individual inspection performance varied by a factor of 10 in faults found per unit time, and that individuals found on average about 53 percent of the faults.
Abstract: Checklists are an important part of code and design inspections. Ideally, they aim to increase the number of faults found per inspection hour by highlighting known areas of previous failure. In practice, although some researchers have quantified checklists' benefits, the conclusions' statistical robustness hasn't been as well represented. The author subjects checklists' effectiveness to formal statistical testing, using data from 308 inspections by industrial engineers over a three-year period. The results showed no evidence that checklists significantly improved these inspections. Further analysis revealed that individual inspection performance varied by a factor of 10 in terms of faults found per unit time, and individuals found on average about 53 percent of the faults. Two-person teams found on average 76 percent of the faults.
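As a back-of-the-envelope cross-check (our assumption of independent detection, not part of the author's analysis), the reported individual rate of about 53 percent is roughly consistent with the two-person team rate of 76 percent:

```python
def team_rate(p, n):
    """If each inspector independently finds a fault with probability p,
    a team of n finds it with probability 1 - (1 - p)^n (our assumption,
    not the article's model)."""
    return 1 - (1 - p) ** n

print(f"{team_rate(0.53, 2):.0%}")  # -> 78%, close to the observed 76 percent
```

The small gap between the predicted 78 percent and the observed 76 percent hints that inspectors' findings overlap slightly more than pure independence would predict.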