
Showing papers in "Information & Software Technology in 2007"


Journal ArticleDOI
TL;DR: Semantic Clustering is introduced, a technique based on Latent Semantic Indexing and clustering that groups source artifacts using similar vocabulary and interprets these groups as linguistic topics revealing the intention of the code.
Abstract: Many of the existing approaches in Software Comprehension focus on program structure or external documentation. However, by analyzing only formal information, the informal semantics contained in the vocabulary of source code are overlooked. To understand software as a whole, we need to enrich software analysis with the developer knowledge hidden in the code naming. This paper proposes the use of information retrieval to exploit linguistic information found in source code, such as identifier names and comments. We introduce Semantic Clustering, a technique based on Latent Semantic Indexing and clustering to group source artifacts that use similar vocabulary. We call these groups semantic clusters and we interpret them as linguistic topics that reveal the intention of the code. We compare the topics to each other, identify links between them, provide automatically retrieved labels, and use a visualization to illustrate how they are distributed over the system. Our approach is language independent as it works at the level of identifier names. To validate our approach we applied it to several case studies, two of which we present in this paper. Note: Some of the visualizations presented make heavy use of colors. Please obtain a color copy of the article for better understanding.
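As a rough illustration of the kind of pipeline the abstract describes (not the authors' implementation), the sketch below groups source files by their identifier and comment vocabulary using TF-IDF, a truncated SVD (the core of Latent Semantic Indexing), and k-means clustering; the source tree path, tokenisation and parameter values are assumptions made only for the example.

```python
# Minimal sketch of vocabulary-based semantic clustering (assumes scikit-learn).
# Not the authors' tool: file locations, tokenisation and parameters are illustrative.
import re
from pathlib import Path
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.cluster import KMeans

def identifier_text(source: str) -> str:
    """Extract identifier-like tokens and split camelCase / snake_case into words."""
    tokens = re.findall(r"[A-Za-z_][A-Za-z0-9_]*", source)
    words = []
    for tok in tokens:
        words += re.sub(r"([a-z0-9])([A-Z])", r"\1 \2", tok).replace("_", " ").split()
    return " ".join(w.lower() for w in words)

files = sorted(Path("src").rglob("*.java"))            # hypothetical source tree
docs = [identifier_text(f.read_text(errors="ignore")) for f in files]

tfidf = TfidfVectorizer().fit_transform(docs)            # term-document matrix
lsi = TruncatedSVD(n_components=20).fit_transform(tfidf) # latent semantic space
labels = KMeans(n_clusters=5, n_init=10).fit_predict(lsi)  # semantic clusters

for f, cluster in zip(files, labels):
    print(cluster, f)
```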

505 citations


Journal ArticleDOI
TL;DR: The standardized effect sizes computed from the reviewed experiments were equal to observations in psychology studies and slightly larger than standard conventions in behavioral science.
Abstract: An effect size quantifies the effects of an experimental treatment. Conclusions drawn from hypothesis testing results might be erroneous if effect sizes are not judged in addition to statistical significance. This paper reports a systematic review of 92 controlled experiments published in 12 major software engineering journals and conference proceedings in the decade 1993-2002. The review investigates the practice of effect size reporting, summarizes standardized effect sizes detected in the experiments, discusses the results and gives advice for improvements. Standardized and/or unstandardized effect sizes were reported in 29% of the experiments. Interpretations of the effect sizes in terms of practical importance were not discussed beyond references to standard conventions. The standardized effect sizes computed from the reviewed experiments were equal to observations in psychology studies and slightly larger than standard conventions in behavioral science.
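For readers unfamiliar with standardized effect sizes, the snippet below computes Cohen's d, one common standardized measure, for two groups; the data are invented purely to illustrate the calculation and have no connection to the reviewed experiments.

```python
# Cohen's d: difference of group means divided by the pooled standard deviation.
# The two samples are made up for illustration only.
import numpy as np

def cohens_d(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

treatment = [38, 42, 45, 41, 47, 44]   # e.g. task times with technique A (minutes)
control   = [50, 46, 52, 49, 55, 48]   # e.g. task times with technique B (minutes)
print(round(cohens_d(treatment, control), 2))
```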

370 citations


Journal ArticleDOI
TL;DR: This research proposes, justifies, and validates a model based on critical success factors (CSFs) that constitutes a guide for companies in the implementation and diagnosis of a CRM strategy.
Abstract: Most organizations have perceived the customer relationship management (CRM) concept as a technological solution for problems in individual areas, accompanied by a great deal of uncoordinated initiatives. Nevertheless, CRM must be conceived as a strategy, due to its human, technological, and process implications, from the moment an organization decides to implement it. On this basis, the main goal of this research is to propose, justify, and validate a model based on critical success factors (CSFs) that will constitute a guide for companies in the implementation and diagnosis of a CRM strategy. The model comprises a set of 13 CSFs with their 55 corresponding metrics, which serve as a guide for organizations wishing to apply this type of strategy. These factors cover the three key aspects of every CRM strategy (human factor, processes, and technology), giving a global focus and fostering success in the implementation of a CRM strategy. These CSFs - and their metrics - were evaluated by a group of international experts, making it possible to determine guidelines for a CRM implementation as well as the probable causes of deficiencies in past projects.

310 citations


Journal ArticleDOI
TL;DR: The objective of this paper is to describe both the selection and usage of grounded theory in this study and evaluate its effectiveness as a research methodology for software process researchers.
Abstract: Software process improvement (SPI) aims to understand the software process as it is used within an organisation and thus drive the implementation of changes to that process to achieve specific goals such as increasing development speed, achieving higher product quality or reducing costs. Accordingly, SPI researchers must be equipped with the methodologies and tools to enable them to look within organisations and understand the state of practice with respect to software process and process improvement initiatives, in addition to investigating the relevant literature. Having examined a number of potentially suitable research methodologies, we have chosen Grounded Theory as a suitable approach to determine what was happening in actual practice in relation to software process and SPI, using the indigenous Irish software product industry as a test-bed. The outcome of this study is a theory, grounded in the field data, that explains when and why SPI is undertaken by the software industry. The objective of this paper is to describe both the selection and usage of grounded theory in this study and evaluate its effectiveness as a research methodology for software process researchers. Accordingly, this paper will focus on the selection and usage of grounded theory, rather than results of the SPI study itself.

236 citations


Journal ArticleDOI
TL;DR: Light is shed on the similarities and differences between six variability modeling techniques, by exemplifying the techniques with one running example, and classifying them using a framework of key characteristics for variability modeling.
Abstract: Variability modeling is important for managing variability in software product families, especially during product derivation. In the past few years, several variability modeling techniques have been developed, each using its own concepts to model the variability provided by a product family. The publications regarding these techniques were written from different viewpoints, use different examples, and rely on a different technical background. This paper sheds light on the similarities and differences between six variability modeling techniques, by exemplifying the techniques with one running example, and classifying them using a framework of key characteristics for variability modeling. It furthermore discusses the relation between differences among those techniques, and the scope, size, and application domain of product families.

218 citations


Journal ArticleDOI
TL;DR: This work describes a more general approach that allows causal models to be applied to any lifecycle and enables decision-makers to reason in a way that is not possible with regression-based models.
Abstract: An important decision in software projects is when to stop testing. Decision support tools for this have been built using causal models represented by Bayesian Networks (BNs), incorporating empirical data and expert judgement. Previously, this required a custom BN for each development lifecycle. We describe a more general approach that allows causal models to be applied to any lifecycle. The approach evolved through collaborative projects and captures significant commercial input. For projects within the range of the models, defect predictions are very accurate. This approach enables decision-makers to reason in a way that is not possible with regression-based models.

211 citations


Journal ArticleDOI
Kevin Crowston1, Qing Li1, Kangning Wei1, U. Yeliz Eseryel1, James Howison1 
TL;DR: It is found that 'self-assignment' was the most common mechanism across three FLOSS projects and this mechanism is consistent with expectations for distributed and largely volunteer teams.
Abstract: This paper provides empirical evidence about how free/libre open source software development teams self-organize their work, specifically, how tasks are assigned to project team members. Following a case study methodology, we examined developer interaction data from three active and successful FLOSS projects using qualitative research methods, specifically inductive content analysis, to identify the task-assignment mechanisms used by the participants. We found that 'self-assignment' was the most common mechanism across three FLOSS projects. This mechanism is consistent with expectations for distributed and largely volunteer teams. We conclude by discussing whether these emergent practices can be usefully transferred to mainstream practice and indicating directions for future research.

201 citations


Journal ArticleDOI
TL;DR: A number of challenging issues were found, including bridging communication gaps between marketing and development, selecting the right level of process support, basing the release plan on uncertain estimates, and managing the constant flow of requirements.
Abstract: Requirements engineering for market-driven software development entails special challenges. This paper presents results from an empirical study that investigates these challenges, taking a qualitative approach using interviews with fourteen employees at eight software companies and a focus group meeting with practitioners. The objective of the study is to increase the understanding of the area of market-driven requirements engineering and provide suggestions for future research by describing encountered challenges. A number of challenging issues were found, including bridging communication gaps between marketing and development, selecting the right level of process support, basing the release plan on uncertain estimates, and managing the constant flow of requirements.

200 citations


Journal ArticleDOI
TL;DR: Of the two neural network based software fault prediction models, the Probabilistic Neural Network performs better at predicting the fault proneness of the Object-Oriented modules developed.
Abstract: This paper introduces two neural network based software fault prediction models using Object-Oriented metrics. They are empirically validated using a data set collected from the software modules developed by graduate students of our academic institution. The results are compared with two statistical models using five quality attributes, and the neural networks are found to do better. Of the two neural networks, the Probabilistic Neural Network performs better at predicting the fault proneness of the Object-Oriented modules developed.
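The abstract gives no implementation details, but a Probabilistic Neural Network is essentially a Parzen-window (kernel density) classifier; the sketch below shows that idea on made-up object-oriented metric vectors. The metric choices, values, labels and smoothing parameter are illustrative assumptions, not the authors' data or model.

```python
# Minimal Probabilistic Neural Network (Parzen-window) sketch in NumPy.
# Feature vectors stand in for OO metrics such as (WMC, CBO); all values are invented.
import numpy as np

def pnn_predict(X_train, y_train, X_test, sigma=0.5):
    """Assign each test point to the class whose training points yield the
    largest average Gaussian kernel response."""
    X_train, X_test = np.asarray(X_train, float), np.asarray(X_test, float)
    y_train = np.asarray(y_train)
    preds = []
    for x in X_test:
        scores = {}
        for cls in np.unique(y_train):
            diffs = X_train[y_train == cls] - x
            dists = np.sum(diffs ** 2, axis=1)
            scores[cls] = np.mean(np.exp(-dists / (2 * sigma ** 2)))
        preds.append(max(scores, key=scores.get))
    return preds

# Hypothetical (WMC, CBO) pairs; label 1 = fault-prone, 0 = not fault-prone.
X = [[12, 4], [30, 9], [8, 2], [25, 11], [10, 3], [28, 8]]
y = [0, 1, 0, 1, 0, 1]
print(pnn_predict(X, y, [[27, 10], [9, 2]]))
```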

171 citations


Journal ArticleDOI
TL;DR: This paper draws on a sustained series of qualitative studies of software development practice, focusing on social factors and using an ethnographically-informed approach to address four areas of software practice: software quality management systems, the emergence of object technology, professional end user development and agile development.
Abstract: Over the past decade we have performed a sustained series of qualitative studies of software development practice, focusing on social factors. Using an ethnographically-informed approach, we have addressed four areas of software practice: software quality management systems, the emergence of object technology, professional end user development and agile development. Several issues have arisen from this experience, including the nature of research questions that such studies can address, the advantages and challenges associated with being a member of the community under study, and how to maintain rigour in data collection. In this paper, we will draw on our studies to illustrate our approach and to discuss these and other issues.

133 citations


Journal ArticleDOI
TL;DR: The structure of the design problem determines the aspects of rational and naturalistic decision making used; the more structured the design decision, the less a designer considers options.
Abstract: Despite the impact of design decisions on software design, we have little understanding about how design decisions are made. This hinders our ability to provide design metrics, processes and training that support inherent design work. By interviewing 25 software designers and using content analysis and explanation building as our analysis technique, we provide qualitative and quantitative results that highlight aspects of rational and naturalistic decision making in software design. Our qualitative multi-case study results in a model of design decision making to answer the question: how do software designers make design decisions? We find the structure of the design problem determines the aspects of rational and naturalistic decision making used. The more structured the design decision, the less a designer considers options.

Journal ArticleDOI
TL;DR: It is argued that the early ADLs focused almost exclusively on the technological aspects of architecture, and mostly ignored the application domain and business contexts within which software systems, and development organizations, exist.
Abstract: In 2000, we published an extensive study of existing software architecture description languages (ADLs), which has served as a useful reference to software architecture researchers and practitioners. Since then, circumstances have changed. The Unified Modeling Language (UML) has gained popularity and wide adoption, and many of the ADLs we studied have been pushed into obscurity. We argue that this progression can be attributed to early ADLs' nearly exclusive focus on technological aspects of architecture, ignoring application domain and business contexts within which software systems and development organizations exist. These three concerns - technology, domain, and business - constitute three ''lampposts'' needed to appropriately ''illuminate'' software architecture and architectural description.

Journal ArticleDOI
TL;DR: A set of mutation operators for SQL queries that retrieve information from a database is developed and tested against a set of queries drawn from the NIST SQL Conformance Test Suite, and can be helpful in assessing the adequacy of database test cases and their development.
Abstract: A set of mutation operators for SQL queries that retrieve information from a database is developed and tested against a set of queries drawn from the NIST SQL Conformance Test Suite. The mutation operators cover a wide spectrum of SQL features, including the handling of null values. Additional experiments are performed to explore whether the cost of executing mutants can be reduced using selective mutation or the test suite size can be reduced by using an appropriate ordering of the mutants. The SQL mutation approach can be helpful in assessing the adequacy of database test cases and their development, and as a tool for systematically injecting faults in order to compare different database testing techniques.
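As a toy illustration of the kind of mutation operator the abstract mentions (this is not the authors' operator set), the sketch below generates mutants of a SELECT statement by swapping relational operators and toggling IS NULL / IS NOT NULL predicates; the example query is invented.

```python
# Toy SQL mutation sketch: swap relational operators and toggle NULL predicates.
# The operator list and the example query are illustrative only.
import re

RELATIONAL_OPS = ["<", "<=", ">", ">=", "=", "<>"]

def mutate(query: str):
    mutants = []
    # Replace each relational operator occurrence with every alternative operator.
    for m in re.finditer(r"<=|>=|<>|[<>=]", query):
        for op in RELATIONAL_OPS:
            if op != m.group():
                mutants.append(query[:m.start()] + op + query[m.end():])
    # Toggle NULL-handling predicates.
    if "IS NOT NULL" in query:
        mutants.append(query.replace("IS NOT NULL", "IS NULL"))
    elif "IS NULL" in query:
        mutants.append(query.replace("IS NULL", "IS NOT NULL"))
    return mutants

q = "SELECT name FROM staff WHERE salary >= 1000 AND phone IS NULL"
for mutant in mutate(q):
    print(mutant)
```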

Journal ArticleDOI
TL;DR: The results show that the proposed technique effectively detects all the seeded integration faults when complying with the most demanding adequacy criterion and still achieves reasonably good results for less expensive adequacy criteria.
Abstract: Correct functioning of object-oriented software depends upon the successful integration of classes. While individual classes may function correctly, several new faults can arise when these classes are integrated together. In this paper, we present a technique to enhance testing of interactions among modal classes. The technique combines UML collaboration diagrams and statecharts to automatically generate an intermediate test model, called SCOTEM (State COllaboration TEst Model). The SCOTEM is then used to generate valid test paths. We also define various coverage criteria to generate test paths from the SCOTEM model. In order to assess our technique, we have developed a tool and applied it to a case study to investigate its fault detection capability. The results show that the proposed technique effectively detects all the seeded integration faults when complying with the most demanding adequacy criterion and still achieves reasonably good results for less expensive adequacy criteria.

Journal ArticleDOI
TL;DR: This paper summarises the set of metrics defined to measure the understandability (a quality subcharacteristic) of conceptual models for DWs, and presents their theoretical validation to assure their correct definition.
Abstract: Due to the principal role of data warehouses (DW) in making strategic decisions, data warehouse quality is crucial for organizations. Therefore, we should use methods, models, techniques and tools to help us in designing and maintaining high quality DWs. In recent years, there have been several approaches to designing DWs from the conceptual, logical and physical perspectives. However, from our point of view, none of them provides a set of empirically validated metrics (objective indicators) to help the designer in accomplishing an outstanding model that guarantees the quality of the DW. In this paper, we first summarise the set of metrics we have defined to measure the understandability (a quality subcharacteristic) of conceptual models for DWs, and present their theoretical validation to assure their correct definition. Then, we focus on describing in depth the empirical validation process we have carried out through a family of experiments performed by students, professionals and experts in DWs. This family of experiments is a very important aspect of the process of validating metrics, as it is widely accepted that only after performing a family of experiments is it possible to build up the cumulative knowledge needed to extract useful measurement conclusions to be applied in practice. Our whole empirical process showed that several of the proposed metrics seem to be practical indicators of the understandability of conceptual models for DWs.

Journal ArticleDOI
TL;DR: The overall results suggest that success is more likely if the project manager is involved in schedule negotiations, adequate requirements information is available when the estimates are made, initial effort estimates are good, staff leave is taken into account, and staff are not added late to meet an aggressive schedule.
Abstract: During discussions with a group of U.S. software developers we explored the effect of schedule estimation practices and their implications for software project success. Our objective is not only to explore the direct effects of cost and schedule estimation on the perceived success or failure of a software development project, but also to quantitatively examine a host of factors surrounding the estimation issue that may impinge on project outcomes. We later asked our initial group of practitioners to respond to a questionnaire that covered some important cost and schedule estimation topics. Then, in order to determine whether the results are generalizable, two other groups, from the US and Australia, completed the questionnaire. Based on these convenience samples, we conducted exploratory statistical analyses to identify determinants of project success and used logistic regression to predict project success for the entire sample, as well as for each of the groups separately. From the developer point of view, our overall results suggest that success is more likely if the project manager is involved in schedule negotiations, adequate requirements information is available when the estimates are made, initial effort estimates are good, staff leave is taken into account, and staff are not added late to meet an aggressive schedule. For these organizations we found that developer input to the estimates did not improve the chances of project success or improve the estimates. We then used the logistic regression results from each single group to predict project success for the other two remaining groups combined. The results show that there is a reasonable degree of generalizability among the different groups.
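The modelling step described above can be pictured as a standard logistic regression over binary project factors. The sketch below is a generic illustration with invented factor names and data; it is not the authors' questionnaire, sample, or fitted coefficients.

```python
# Generic logistic-regression sketch for predicting project success (assumes scikit-learn).
# Factor encoding and the tiny data set are invented for illustration.
from sklearn.linear_model import LogisticRegression

# Columns (0/1): PM involved in schedule negotiation, adequate requirements info,
# good initial estimates, staff leave considered, staff added late.
X = [
    [1, 1, 1, 1, 0],
    [0, 0, 1, 0, 1],
    [1, 1, 0, 1, 0],
    [0, 1, 0, 0, 1],
    [1, 0, 1, 1, 0],
    [0, 0, 0, 0, 1],
]
y = [1, 0, 1, 0, 1, 0]   # 1 = project perceived as successful

model = LogisticRegression().fit(X, y)
print(model.predict([[1, 1, 1, 0, 0]]))        # predicted outcome for a new project
print(model.predict_proba([[1, 1, 1, 0, 0]]))  # estimated success probability
```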

Journal ArticleDOI
TL;DR: This paper proposes a new index-based KNN join approach using the iDistance as the underlying index structure and presents its basic algorithm and proposes two different enhancements, one of which exploits the reduced dimensions of data space.
Abstract: In many advanced database applications (e.g., multimedia databases), data objects are transformed into high-dimensional points and manipulated in high-dimensional space. One of the most important but costly operations is the similarity join that combines similar points from multiple datasets. In this paper, we examine the problem of processing K-nearest neighbor similarity join (KNN join). KNN join between two datasets, R and S, returns for each point in R its K most similar points in S. We propose a new index-based KNN join approach using the iDistance as the underlying index structure. We first present its basic algorithm and then propose two different enhancements. In the first enhancement, we optimize the original KNN join algorithm by using approximation bounding cubes. In the second enhancement, we exploit the reduced dimensions of data space. We conducted an extensive experimental study using both synthetic and real datasets, and the results verify the performance advantage of our schemes over existing KNN join algorithms.
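The iDistance-based index itself is beyond a short example, but the semantics of a KNN join are simple to state: for every point in R, return its K nearest points in S. The brute-force sketch below (synthetic data, no index) shows the baseline computation that the paper's index-based approach is designed to accelerate.

```python
# Brute-force KNN join: for each point in R, find the indices of its K nearest points in S.
# This deliberately ignores the iDistance index; it only illustrates the join semantics.
import numpy as np

def knn_join(R, S, k):
    R, S = np.asarray(R, float), np.asarray(S, float)
    result = {}
    for i, r in enumerate(R):
        dists = np.linalg.norm(S - r, axis=1)       # distances from r to every point in S
        result[i] = np.argsort(dists)[:k].tolist()  # indices of the k closest points in S
    return result

R = np.random.rand(5, 8)    # 5 query points in an 8-dimensional space (synthetic)
S = np.random.rand(100, 8)  # 100 data points (synthetic)
print(knn_join(R, S, k=3))
```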

Journal ArticleDOI
TL;DR: An automatic grading approach is presented based on program semantic similarity that can evaluate how close a student's source code is to a correct solution and give a matching accuracy.
Abstract: An automatic grading approach is presented based on program semantic similarity. Automatic grading of a student program is achieved by calculating semantic similarities between the student program and each correct model program after they are standardized. This approach was implemented in an on-line examination system for the programming language C. Unlike other existing approaches, it can evaluate how close a student's source code is to a correct solution and give a matching accuracy.
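As a rough analogue of grading by similarity to model programs (not the paper's standardisation or similarity measure), the sketch below normalises two C fragments by stripping comments and whitespace and scores them with a plain sequence-similarity ratio; the code fragments are invented.

```python
# Crude grading-by-similarity sketch: normalise source text, then compare it with each
# model solution. This is only an analogue of the idea, not the paper's method.
import difflib
import re

def normalize(code: str) -> str:
    code = re.sub(r"/\*.*?\*/", " ", code, flags=re.S)  # strip block comments
    code = re.sub(r"//[^\n]*", " ", code)               # strip line comments
    return " ".join(code.split())                       # collapse whitespace

def grade(student: str, models: list[str]) -> float:
    s = normalize(student)
    return max(difflib.SequenceMatcher(None, s, normalize(m)).ratio() for m in models)

model = "int sum(int n){ int s=0; for(int i=1;i<=n;i++) s+=i; return s; }"
answer = "int sum(int n){ /* loop */ int total=0; for(int i=1;i<=n;i++) total+=i; return total; }"
print(round(grade(answer, [model]), 2))   # similarity score in [0, 1]
```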

Journal ArticleDOI
TL;DR: This paper explains how a knowledge extraction technique is adapted to the knowledge needs specific to software maintenance, and how the knowledge discovered about a legacy software system during maintenance is recorded for future use.
Abstract: Creating and maintaining software systems is a knowledge intensive task. One needs to have a good understanding of the application domain, the problem to solve and all its requirements, the software process used, technical details of the programming language(s), the system's architecture and how the different parts fit together, how the system interacts with its environment, etc. All this knowledge is difficult and costly to gather. It is also difficult to store and usually lives only in the minds of the software engineers who worked on a particular project. If this is a problem for the development of new software, it is even more so for maintenance, when one must rediscover lost information of an abstract nature from legacy source code among a swarm of unrelated details. In this paper, we submit that this lack of knowledge is one of the prominent problems in software maintenance. To try to solve this problem, we adapted a knowledge extraction technique to the knowledge needs specific to software maintenance. We explain how we make explicit the knowledge discovered about a legacy software system during maintenance so that it may be recorded for future use. Some applications to industrial maintenance projects are reported.

Journal ArticleDOI
TL;DR: The goal in creating this special issue is to make existing qualitative research more visible and further the understanding of qualitative research and its importance in the software engineering community.
Abstract: Almost twenty years have passed since the first qualitative research study in software engineering was published [14]. Using qualitative methods and a qualitative analytical framework, Curtis, et al. found communication and cooperation to be critical factors in developing large-scale software systems. Given the importance of this study, it is perhaps surprising that research publications using qualitative methods are still scarce. Therefore, our goal in creating this special issue is to make existing qualitative research more visible and to further the understanding of qualitative research and its importance in the software engineering community. Qualitative research has its main strength in exploring and illuminating the in situ practice of software engineering. This is the everyday practice where software engineers interpret, appropriate, and implement the methods, techniques and processes of the trade. A better understanding of these in situ methods can – in turn – provide a base for their improvement. From 23 submissions we selected eleven articles. The articles represent a diverse set of theoretical frameworks and methods, while focusing on a wide range of software engineering activities, from requirements engineering and project management to software process improvement. The selected articles illustrate the richness of existing research and showcase important and applicable results which will further the discussion of qualitative methods – not only about the concrete results but, importantly, also about the value of qualitative research in software engineering in general. To make the papers more accessible to an audience that may not be used to qualitative research, we introduce this special issue more thoroughly than is normally done. By doing so we answer a number of questions related to the practice of qualitative research in the software engineering domain. In turn, each article selected for this special issue addresses some or all of these questions in the context of real research problems. The editorial proceeds as follows. First, qualitative research is introduced and related to the tradition of software engineering. Then we present an overview of the different discourses in which qualitative research on software engineering is published, followed by a discussion of potential quality criteria for qualitative research. We end with an introduction to the articles published in this special issue. There is no 'one way' of doing qualitative research. The only common denominator of qualitative research is that it is based on qualitative data. Some see the possibility to develop an understanding of the software engineering process from a …

Journal ArticleDOI
TL;DR: The proposed approach, Process Configuration, tells how to create a project-specific method from an existing one, taking into account the project circumstances, and tends to be more flexible and easier to implement in practice as it introduces few simplifications.
Abstract: Both practitioners and researchers agree that if software development methods were more adjustable to project-specific situations, this would increase their use in practice. Empirical investigations show that, otherwise, methods exist only on paper, while in practice developers avoid them or do not follow them rigorously. In this paper we present an approach that deals with this problem. Process Configuration, as we named the approach, tells how to create a project-specific method from an existing one, taking into account the project circumstances. Compared to other approaches that deal with the creation of project-specific methods, our approach tends to be more flexible and easier to implement in practice as it introduces few simplifications. The proposed approach is practice-driven, i.e. it has been developed in cooperation with software development companies.

Journal ArticleDOI
TL;DR: It is shown how an analysis and evaluation of NFRs can be applied to a process model developed with role activity diagramming (RAD) to operationalise desirable quality features more explicitly in the model.
Abstract: This paper presents an approach to the identification and inclusion of 'non-functional' aspects of a business process in modelling for business improvement. The notion of non-functional requirements (NFRs) is borrowed from software engineering, and a method developed in that field for linking NFRs to conceptual models is adapted and applied to business process modelling. Translated into this domain, NFRs are equated with the general or overall quality attributes of a business process, which, though essential aspects of any effective process, are not well captured in a functionally oriented process model. Using an example of a healthcare process (cancer registration in Jordan), we show how an analysis and evaluation of NFRs can be applied to a process model developed with role activity diagramming (RAD) to operationalise desirable quality features more explicitly in the model. This gives a useful extension to RAD and similar modelling methods, as well as providing a basis for business improvement.

Journal ArticleDOI
TL;DR: The findings indicate that, in general, a high level of uncertainty is associated with higher effort estimation errors while increased use of estimation development processes and estimation management processes, as well as greater estimator experience, are correlated with lower duration estimation errors.
Abstract: The purpose of this research was to fill a gap in the literature pertaining to the influence of project uncertainty and managerial factors on duration and effort estimation errors. Four dimensions were considered: project uncertainty, use of estimation development processes, use of estimation management processes, and the estimator's experience. Correlation analysis and linear regression models were used to test the model and the hypotheses on the relations between the four dimensions and estimation errors, using a sample of 43 internal software development projects executed during the year 2002 in the IT division of a large government organization in Israel. Our findings indicate that, in general, a high level of uncertainty is associated with higher effort estimation errors, while increased use of estimation development processes and estimation management processes, as well as greater estimator experience, are correlated with lower duration estimation errors. From a practical perspective, the specific findings of this study can be used as guidelines for better duration and effort estimation. Accounting for project uncertainty while managing expectations regarding estimate accuracy, investing more in detailed planning, and selecting estimators based on the number of projects they have managed rather than their cumulative experience in project management may reduce estimation errors.

Journal ArticleDOI
TL;DR: This paper presents a framework that draws on Structuration theory and dialectical hermeneutics to explicate the dynamics of software process improvement (SPI) in a packaged software organisation and shows SPI to be an emergent rather than a deterministic activity.
Abstract: This paper presents a framework that draws on Structuration theory and dialectical hermeneutics to explicate the dynamics of software process improvement (SPI) in a packaged software organisation. Adding to the growing body of qualitative research, this approach overcomes some of the criticisms of interpretive studies, especially the need for the research to be reflexive in nature. Our longitudinal analysis of the case study shows SPI to be an emergent rather than a deterministic activity: the design and action of the change process are shown to be intertwined and shaped by their context. This understanding is based upon a structurational perspective that highlights how the unfolding/realisation of the process improvement (intent) are enabled and constrained by their context. The work builds on the recognition that the improvements can be understood from an organisational learning perspective. Fresh insights to the improvement process are developed by recognising the role of the individual to influence the improvement through facilitating or resisting the changes. The understanding gained here can be applied by organisations to enable them to improve the effectiveness of their SPI programmes, and so improve the quality of their software.

Journal ArticleDOI
TL;DR: A new bootstrapping method is proposed which can automatically classify requirements sentences into topic categories using only topic words representing the analysts' views, providing an effective function for an Internet-based requirements analysis-supporting system.
Abstract: In order to efficiently develop large-scale and complicated software, it is important for system engineers to correctly understand users' requirements. Most requirements in large-scale projects are collected from various stakeholders located in various regions, and they are generally written in natural language. Therefore, the initially collected requirements must be classified into various topics prior to the analysis phases in order to be usable as input to several requirements analysis methods. If this classification process is done manually by analysts, it becomes a time-consuming task. To solve this problem, we propose a new bootstrapping method which can automatically classify requirements sentences into each topic category using only topic words as the representative of the analysts' views. The proposed method is verified through experiments using two requirements data sets: one written in English and the other in Korean. Significant performance was achieved in the experiments: F1 scores of 84.28 and 87.91 for the English and Korean data sets, respectively. As a result, the proposed method can provide an effective function for an Internet-based requirements analysis-supporting system, making it possible to efficiently gather and analyze requirements from diverse and distributed stakeholders over the Internet.
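The paper's bootstrapping algorithm is not spelled out in the abstract, but the general pattern (start from a few seed topic words, classify sentences, then grow each topic's vocabulary from the sentences assigned to it) can be sketched as follows. The topic names, seed words and example sentences are invented, and the real method differs in detail.

```python
# Simple bootstrapping sketch for topic classification of requirements sentences.
# Topics, seed words and sentences are invented; this only illustrates the pattern.
from collections import Counter

topics = {
    "security":    {"password", "encrypt", "access"},
    "performance": {"response", "latency", "throughput"},
}

def classify(sentence, vocab):
    words = set(sentence.lower().split())
    scores = {t: len(words & kws) for t, kws in vocab.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None

def bootstrap(sentences, vocab, rounds=3, grow=2):
    for _ in range(rounds):
        for topic in vocab:
            counts = Counter()
            for s in sentences:
                if classify(s, vocab) == topic:
                    counts.update(w for w in s.lower().split() if w not in vocab[topic])
            vocab[topic] |= {w for w, _ in counts.most_common(grow)}  # grow topic vocabulary
    return vocab

reqs = [
    "The system shall encrypt every stored password",
    "Login access requires a valid password token",
    "Average response time shall stay below two seconds",
    "Peak throughput must exceed one thousand requests",
]
print(bootstrap(reqs, topics))
print([classify(s, topics) for s in reqs])
```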

Journal ArticleDOI
TL;DR: The use of object-oriented design patterns in game development is evaluated through a qualitative and a quantitative evaluation of open source projects.
Abstract: The use of object-oriented design patterns in game development is evaluated in this paper. Games' rapid evolution demands great flexibility, code reusability and low maintenance costs. Games usually differ between versions in respect of changes of the same type (additional animations, terrains, etc.). Consequently, the application of design patterns in them can be beneficial regarding maintainability. In order to investigate the benefits of using design patterns, a qualitative and a quantitative evaluation of open source projects is performed. For the quantitative evaluation, the projects are analyzed using reverse engineering techniques and software metrics are calculated.

Journal ArticleDOI
TL;DR: This work presents a method to generate cluster level test cases based on UML communication diagrams, by constructing a tree representation of the diagrams, transforming the conditional predicates on them, and applying a function minimization technique to generate the test data.
Abstract: We present a method to generate cluster level test cases based on UML communication diagrams. In our approach, we first construct a tree representation of communication diagrams. We then carry out a post-order traversal of the constructed tree for selecting conditional predicates from the communication diagram. We transform the conditional predicates on the communication diagram and apply a function minimization technique to generate the test data. The generated test cases achieve message path coverage as well as boundary coverage. We have implemented our technique and tested it on several example problems.
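The function-minimisation step can be pictured with the classic branch-distance idea: turn a conditional predicate into a non-negative cost that is zero exactly when the predicate holds, then search for inputs that minimise it. The sketch below applies that idea with a naive random search; the predicate, input ranges and search strategy are assumptions for illustration, not the authors' technique or tool.

```python
# Branch-distance sketch: minimise a cost derived from a conditional predicate
# (here "balance >= amount and amount > 0") to find satisfying test data.
# The predicate, input ranges and random search are all illustrative.
import random

def branch_distance(balance, amount):
    # Distance is 0 when the predicate holds, positive otherwise.
    d1 = max(0, amount - balance)   # cost of violating balance >= amount
    d2 = max(0, 1 - amount)         # cost of violating amount > 0 (integer inputs)
    return d1 + d2

best, best_cost = None, float("inf")
random.seed(1)
for _ in range(1000):                      # naive random minimisation
    candidate = (random.randint(-100, 100), random.randint(-100, 100))
    cost = branch_distance(*candidate)
    if cost < best_cost:
        best, best_cost = candidate, cost
    if best_cost == 0:
        break

print(best, best_cost)   # test data that exercises the chosen branch
```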

Journal ArticleDOI
TL;DR: A competence approach to understanding software project management places the responsibility for success firmly on the shoulders of the people involved: project members, project leaders and managers. This perspective is argued to be partly generalisable to theory.
Abstract: Traditional software project management theory often focuses on desk-based development of software and algorithms, much in line with the traditions of classical project management and software engineering. This can be described as a tools and techniques perspective, which assumes that software project management success is dependent on having the right instruments available, rather than on the individual qualities of the project manager or the cumulative qualities and skills of the software organisation. Surprisingly, little is known about how (or whether) these tools and techniques are used in practice. This study, in contrast, uses a qualitative grounded theory approach to develop the basis for an alternative theoretical perspective: that of competence. A competence approach to understanding software project management places the responsibility for success firmly on the shoulders of the people involved: project members, project leaders and managers. The competence approach is developed through an investigation of the experiences of project managers in a medium sized software development company (WM-data) in Denmark. Starting with a simple model relating project conditions, project management competences and desired project outcomes, we collected data through interviews, focus groups and one large plenary meeting with most of the company's project managers. Data analysis employed content analysis for concept (variable) development and causal mapping to trace relationships between variables. In this way we were able to build up a picture of the competences project managers use in their daily work at WM-data, which we argue is also partly generalisable to theory. The discrepancy between the two perspectives is discussed, particularly in regard to the current orientation of the software engineering field. The study provides many methodological and theoretical starting points for researchers wishing to develop a more detailed competence perspective of software project managers' work.

Journal ArticleDOI
TL;DR: An exemplar field study that examined the use of documentation in software maintenance environments afforded a better understanding of the complex relationship between project personnel and documentation, including individuals' roles as pointers, gatekeepers, or barriers to documentation.
Abstract: War stories are a form of qualitative data that capture informants' specific accounts of surmounting great challenges. The rich contextual detail afforded by this approach warrants its inclusion in the methodological arsenal of empirical software engineering research. We ground this assertion in an exemplar field study that examined the use of documentation in software maintenance environments. Specific examples are unpacked to reveal a depth of insight that would not have been possible using standard interviews. This afforded a better understanding of the complex relationship between project personnel and documentation, including individuals' roles as pointers, gatekeepers, or barriers to documentation.

Journal ArticleDOI
TL;DR: A quality framework for developing and evaluating original components is proposed in this paper, based on the ISO9126 quality model which is modified and refined so as to reflect better the notion of original components.
Abstract: Component-based software development is being identified as the emerging method of developing complex applications consisting of heterogeneous systems. Although more research attention has been given to Commercial Off The Shelf (COTS) components, original software components are also widely used in the software industry. Original components are smaller in size, have a narrower functional scope and usually find more uses when it comes to specific and dedicated functions. Therefore, their need for interoperability is equal to or greater than that of COTS components. A quality framework for developing and evaluating original components is proposed in this paper, along with an application methodology that facilitates their evaluation. The framework is based on the ISO9126 quality model, which is modified and refined so as to better reflect the notion of original components. The quality model introduced can be tailored according to the organization-reuser and the domain needs of the targeted component. The proposed framework is demonstrated and validated through real case examples, while its applicability is assessed and discussed.