Author

Ross Jeffery

Other affiliations: University of New South Wales
Bio: Ross Jeffery is an academic researcher from the Commonwealth Scientific and Industrial Research Organisation. The author has contributed to research in topics: Software development & Personal software process. The author has an h-index of 20 and has co-authored 85 publications receiving 1,275 citations. Previous affiliations of Ross Jeffery include the University of New South Wales.


Papers
BookDOI
01 Jan 2014
TL;DR: Bottom-up estimation usually provides more accurate estimates, but it requires the estimators involved to have expertise regarding the bottom activities and related product components that they estimate directly; software companies should therefore apply a bottom-up strategy unless the estimators have experience from, or access to, very similar projects.
Abstract: Estimating at a lower abstraction level is especially useful in the case of expert-based estimation, where it is easier for experts to embrace and estimate smaller pieces of project work. Moreover, the increased level of detail during estimation (for instance, by breaking down software products and processes) implies higher transparency of estimates. In practice, there is a good chance that the bottom estimates will be mixed below and above the actual effort. As a consequence, estimation errors at the bottom level cancel each other out, resulting in a smaller estimation error than if a top-down approach were used. This phenomenon is related to the mathematical law of large numbers. However, the more granular the individual estimates, the more time-consuming the overall estimation process becomes. In industrial practice, a top-down strategy usually provides reasonably accurate estimates at relatively low overhead and without too much technical expertise. Although bottom-up estimation usually provides more accurate estimates, it requires the estimators involved to have expertise regarding the bottom activities and related product components that they estimate directly. In principle, applying bottom-up estimation pays off when the decomposed tasks can be estimated more accurately than the whole task. For instance, a bottom-up strategy has proved to provide better results when applied to high-uncertainty or complex estimation tasks, which are usually underestimated when considered as a whole. Furthermore, it is often easy to forget activities and/or underestimate the impact of unexpected events, which leads to underestimation of total effort. However, from the mathematical point of view (the law of large numbers mentioned above), dividing the project into smaller work packages provides better data for estimation and reduces the overall estimation error. Experiences presented by Jørgensen (2004b) suggest that, in the context of expert-based estimation, software companies should apply a bottom-up strategy unless the estimators have experience from, or access to, very similar projects. In the context of estimation based on human judgment, the typical threats of individual and group estimation should be considered. Refer to Sect. 6.4 for an overview of the strengths and weaknesses of estimation based on human judgment.
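The error-cancellation argument can be made concrete with a short simulation. A minimal sketch, assuming independent and unbiased per-task estimation errors (real projects only approximate this, since task estimates are often correlated or systematically biased):

```python
import random

random.seed(42)

def mean_relative_error(n_tasks=10, task_effort=100.0, noise=0.3, trials=10_000):
    """Compare one noisy top-down estimate of the whole project against
    the sum of independently noisy per-task (bottom-up) estimates."""
    total = n_tasks * task_effort
    td_err = bu_err = 0.0
    for _ in range(trials):
        # Top-down: a single estimate of the entire project.
        td = total * (1 + random.gauss(0, noise))
        # Bottom-up: one estimate per task, then summed.
        bu = sum(task_effort * (1 + random.gauss(0, noise)) for _ in range(n_tasks))
        td_err += abs(td - total)
        bu_err += abs(bu - total)
    return td_err / trials / total, bu_err / trials / total

td, bu = mean_relative_error()
print(f"top-down:  {td:.3f}")   # ~0.24
print(f"bottom-up: {bu:.3f}")   # ~0.08: independent errors partly cancel
```

With ten equally sized tasks the bottom-up error shrinks by roughly a factor of sqrt(10), exactly the law-of-large-numbers effect the abstract invokes; correlated or biased task estimates would erode this advantage.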

85 citations

Journal ArticleDOI
TL;DR: CASE aims at greater automation of software production; also examined are the software improvement paradigms of the Software Engineering Institute (SEI), the Software Process Capability Maturity Model (CMM), and the University of Maryland's Tailoring a Measurement Environment (TAME) project.
Abstract: Activity in the developed world strives not only to maintain status quo activities and lifestyles, but to improve on them. In particular, the application of technology has been focused on improvement. Technology is central to organized society's efforts to improve the lot of individuals and organizations, regardless of one's opinions of the success or failure of instances of technological application or of the ultimate nature of improvement. This ethos of improvement or "doing better" has strongly influenced attitudes toward software development and maintenance. From the software crisis of the mid-1960s, well described in [6], to the present day, many concepts, methodologies, languages, tools and techniques have been introduced with the aim of improving the software process and its products. Particular initiatives which we will examine here are CASE and the software improvement paradigms of the Software Engineering Institute (SEI), the Software Process Capability Maturity Model (CMM) [10, 13, 14], and the University of Maryland's Tailoring a Measurement Environment (TAME) project [1, 2]. CASE aims at greater automation of software production. Just as CAD/CAM has brought integrated design tools to the engineering of physical systems, CASE is bringing analogous tools to the more abstract engineering of software. Ultimately, the motivation for tool use is economic: competitive advantage. There are many aspects to competitive advantage, including time-to-market, productivity, quality, product differentiation, distribution and support. Software engineering, however, has a narrower scope, comprising software definition, design, production, and maintenance. CASE aims to improve these activities through the use and integration of software tools. Software improvement has recently received more explicit emphasis, together with a firmer conceptual and empirical basis, through the work of the SEI on the CMM [10, 13, 14] and the work of Basili and Rombach on the TAME project [1, 2]. Central to both of these major research efforts has been the characterization and improvement of the software process. There are differences in the two improvement paradigms, which…

83 citations

Proceedings ArticleDOI
04 Jan 2012
TL;DR: It was found that Scrum offers a distinctive advantage in mitigating geographical and socio-cultural, but not temporal, distance-based GSD coordination challenges.
Abstract: Global software development is a major trend in software engineering. Practitioners are increasingly trying Agile methods in distributed projects to tap into the benefits experienced by co-located teams. This paper considers the issue by examining whether Scrum practices, used in four global software development projects to leverage the benefits of Agile methods over traditional software engineering methods, provided any distinctive advantage in mitigating coordination challenges. Four temporal, geographical and socio-cultural distance-based coordination challenges and seven Scrum practices are identified from the literature. The cases are analyzed for evidence of use of the Scrum practices to mitigate each challenge and whether the mitigation mechanisms employed relate to any distinctive characteristics of the Scrum method. While some mechanisms used were common to other/traditional methods, it was found that Scrum offers a distinctive advantage in mitigating geographical and socio-cultural, but not temporal, distance-based GSD coordination challenges. Implications are discussed.

68 citations

Book
07 May 2014
TL;DR: This book presents a comprehensive look at the principles of software effort estimation, explains popular estimation methods, summarizes estimation best-practices, and provides guidelines for continuously improving estimation capability.
Abstract: Software effort estimation is one of the oldest and most important problems in software project management, and thus today there are a large number of models, each with its own unique strengths and weaknesses in general, and even more importantly, in relation to the environment and context in which it is to be applied. Trendowicz and Jeffery present a comprehensive look at the principles of software effort estimation and support software practitioners in systematically selecting and applying the most suitable effort estimation approach. Their book not only presents what approach to take and how to apply and improve it, but also explains why certain approaches should be used in specific project situations. Moreover, it explains popular estimation methods, summarizes estimation best practices, and provides guidelines for continuously improving estimation capability. Additionally, the book offers invaluable insights into project management in general, discussing issues including project trade-offs, risk assessment, and organizational learning. Overall, the authors deliver an essential reference work for software practitioners responsible for software effort estimation and planning in their daily work and who want to improve their estimation skills. At the same time, for lecturers and students the book can serve as the basis of a course in software processes, software estimation, or project management.

65 citations

Journal ArticleDOI
TL;DR: To improve the state of the art, a new dependency model is proposed that tackles the problems identified from the case study and the related literature and suggests nine dependency types, with precise definitions, as its initial set.
Abstract: Context: The dependencies between individual requirements have an important influence on software engineering activities, e.g., project planning, architecture design, and change impact analysis. Although dozens of requirement dependency types have been suggested in the literature from different points of interest, the literature still lacks an evaluation of the applicability of these dependency types in requirements engineering. Objective: Understanding the effect of these requirement dependencies on software engineering activities is useful but not trivial. In this study, we aimed to first investigate whether the existing dependency types are useful in practice, in particular for change propagation analysis, and then suggest improvements for dependency classification and definition. Method: We conducted a case study that evaluated the usefulness and applicability of two well-known generic dependency models covering 25 dependency types. The case study was conducted in a real-world industry project with three participants who offered different perspectives. Results: Our initial evaluation found that there are a number of overlapping and/or ambiguous dependency types among the current models; five dependency types are particularly useful in change propagation analysis; and practitioners with different backgrounds possess various viewpoints on change propagation. To improve the state of the art, a new dependency model is proposed to tackle the problems identified from the case study and the related literature. The new model classifies dependencies into intrinsic and additional dependencies on the top level, and suggests nine dependency types with precise definitions as its initial set. Conclusions: Our case study provides insights into requirement dependencies and their effects on change propagation analysis for both research and practice. The resulting new dependency model needs further evaluation and improvement.
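The change propagation analysis the abstract centers on is, at its core, a reachability question over a typed dependency graph. A minimal sketch with hypothetical requirement IDs and dependency types (the paper's nine types are not enumerated in this abstract, so the names below are invented for illustration):

```python
from collections import deque

# Hypothetical dependency types and requirement IDs, for illustration only;
# the paper's own model defines nine types not listed in the abstract.
DEPENDS_ON = "depends_on"
REFINES = "refines"

# requirement -> list of (dependent requirement, dependency type)
dependents = {
    "R1": [("R2", DEPENDS_ON), ("R3", REFINES)],
    "R2": [("R4", DEPENDS_ON)],
    "R3": [],
    "R4": [],
}

def change_impact(changed, propagating_types):
    """Breadth-first search for requirements that may be affected when
    `changed` is modified, following only the dependency types deemed
    relevant for change propagation."""
    seen, queue = {changed}, deque([changed])
    while queue:
        req = queue.popleft()
        for dep, dep_type in dependents.get(req, []):
            if dep_type in propagating_types and dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return seen - {changed}

print(change_impact("R1", {DEPENDS_ON}))  # {'R2', 'R4'}
```

Which dependency types belong in `propagating_types` is exactly the question the case study probes: the finding that only five of 25 types were particularly useful suggests most edges in such a graph can be ignored for impact analysis.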

63 citations


Cited by
Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, handwriting recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules. Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up to date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).
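The mail-filtering example maps directly onto a supervised classifier learned from a user's accept/reject decisions. A minimal sketch with toy data (a hand-rolled multinomial Naive Bayes chosen for brevity; the abstract does not prescribe any particular algorithm, and a real filter would train on far more mail):

```python
from collections import Counter
import math

# Toy labeled examples; a real filter learns from the user's own decisions.
train = [
    ("win money now", "spam"),
    ("cheap money offer", "spam"),
    ("meeting agenda attached", "ham"),
    ("project status meeting", "ham"),
]

def fit(examples):
    """Count word frequencies per class (multinomial Naive Bayes)."""
    counts = {"spam": Counter(), "ham": Counter()}
    totals = Counter()
    for text, label in examples:
        for word in text.split():
            counts[label][word] += 1
            totals[label] += 1
    return counts, totals

def predict(text, counts, totals, smoothing_denom=100):
    """Score each class by log-likelihood with add-one smoothing.
    Class priors are equal here and therefore omitted."""
    best_label, best_score = None, float("-inf")
    for label in counts:
        score = 0.0
        for word in text.split():
            p = (counts[label][word] + 1) / (totals[label] + smoothing_denom)
            score += math.log(p)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

counts, totals = fit(train)
print(predict("money offer", counts, totals))            # spam
print(predict("status meeting agenda", counts, totals))  # ham
```

Retraining on each new accept/reject decision is what keeps the rules current without a software engineer in the loop, which is the abstract's fourth category in action.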

13,246 citations

01 Jan 2003

3,093 citations

Journal Article

2,382 citations

Journal ArticleDOI
TL;DR: A preliminary set of research guidelines is proposed, aimed at stimulating discussion among software researchers and intended to assist researchers, reviewers, and meta-analysts in designing, conducting, and evaluating empirical studies.
Abstract: Empirical software engineering research needs research guidelines to improve the research and reporting processes. We propose a preliminary set of research guidelines aimed at stimulating discussion among software researchers. They are based on a review of research guidelines developed for medical researchers and on our own experience in doing and reviewing software engineering research. The guidelines are intended to assist researchers, reviewers, and meta-analysts in designing, conducting, and evaluating empirical studies. Editorial boards of software engineering journals may wish to use our recommendations as a basis for developing guidelines for reviewers and for framing policies for dealing with the design, data collection, and analysis and reporting of empirical studies.

1,541 citations

Journal ArticleDOI
TL;DR: It is argued that estimation by analogy is a viable technique that, at the very least, can be used by project managers to complement current estimation techniques.
Abstract: Accurate project effort prediction is an important goal for the software engineering community. To date most work has focused upon building algorithmic models of effort, for example COCOMO. These can be calibrated to local environments. We describe an alternative approach to estimation based upon the use of analogies. The underlying principle is to characterize projects in terms of features (for example, the number of interfaces, the development method or the size of the functional requirements document). Completed projects are stored and then the problem becomes one of finding the most similar projects to the one for which a prediction is required. Similarity is defined as Euclidean distance in n-dimensional space where n is the number of project features. Each dimension is standardized so all dimensions have equal weight. The known effort values of the nearest neighbors to the new project are then used as the basis for the prediction. The process is automated using a PC-based tool known as ANGEL. The method is validated on nine different industrial datasets (a total of 275 projects) and in all cases analogy outperforms algorithmic models based upon stepwise regression. From this work we argue that estimation by analogy is a viable technique that, at the very least, can be used by project managers to complement current estimation techniques.
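The prediction procedure the abstract describes (standardize each feature so all dimensions carry equal weight, rank completed projects by Euclidean distance, then use the nearest neighbours' known efforts) is compact enough to sketch. The feature names and numbers below are invented for illustration; ANGEL itself is the authors' PC-based tool, not this script:

```python
import math

# Toy history: (interfaces, size_kloc, team_size) -> known effort in person-hours.
# Values are illustrative, not drawn from the paper's nine datasets.
history = [
    ((4, 10.0, 3), 520.0),
    ((8, 25.0, 6), 1400.0),
    ((2, 5.0, 2), 260.0),
    ((6, 18.0, 5), 980.0),
]

def standardize(vectors):
    """Rescale each feature dimension to [0, 1] so that all dimensions
    carry equal weight in the distance calculation."""
    dims = list(zip(*vectors))
    lo = [min(d) for d in dims]
    span = [max(d) - mn or 1.0 for d, mn in zip(dims, lo)]
    return [tuple((v - mn) / s for v, mn, s in zip(vec, lo, span)) for vec in vectors]

def estimate_by_analogy(new_project, history, k=2):
    """Predict effort as the mean effort of the k most similar completed
    projects, with similarity defined as Euclidean distance in the
    standardized n-dimensional feature space."""
    vectors = [features for features, _ in history] + [new_project]
    scaled = standardize(vectors)
    target, rest = scaled[-1], scaled[:-1]
    dist = lambda a, b: math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    ranked = sorted(zip(rest, (e for _, e in history)), key=lambda p: dist(p[0], target))
    return sum(effort for _, effort in ranked[:k]) / k

print(estimate_by_analogy((5, 15.0, 4), history))  # 750.0: mean of the 2 nearest
```

Standardizing before measuring distance is the step that gives each feature equal weight, as the abstract specifies; without it, large-valued features such as size would dominate the neighbour ranking.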

1,010 citations