Author

Emilia Mendes

Bio: Emilia Mendes is an academic researcher from Blekinge Institute of Technology. The author has contributed to research on topics including Web applications and Web development. The author has an h-index of 45 and has co-authored 238 publications receiving 6699 citations. Previous affiliations of Emilia Mendes include College of Information Technology & Zayed University.


Papers
Journal ArticleDOI
TL;DR: It is clear that some organizations would be ill-served by cross-company models whereas others would benefit, and experimenters need to standardize their experimental procedures to enable formal meta-analysis of the results.
Abstract: The objective of this paper is to determine under what circumstances individual organizations would be able to rely on cross-company-based estimation models. We performed a systematic review of studies that compared predictions from cross-company models with predictions from within-company models based on analysis of project data. Ten papers compared cross-company and within-company estimation models; however, only seven presented independent results. Of those seven, three found that cross-company models were not significantly different from within-company models, and four found that cross-company models were significantly worse than within-company models. Experimental procedures used by the studies differed, making it impossible to undertake formal meta-analysis of the results. The main trend distinguishing study results was that studies with small within-company data sets (i.e., ≤ 20 projects) that used leave-one-out cross validation all found that the within-company model was significantly better than the cross-company model. The results of this review are inconclusive. It is clear that some organizations would be ill-served by cross-company models whereas others would benefit. Further studies are needed, but they must be independent (i.e., based on different databases or at least different single-company data sets) and should address specific hypotheses concerning the conditions that would favor cross-company or within-company models. In addition, experimenters need to standardize their experimental procedures to enable formal meta-analysis, and recommendations are made in Section 3.
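
The leave-one-out comparison described above can be made concrete with a small sketch. The snippet below is not taken from the reviewed studies: the productivity-based estimator, the MMRE scoring, and all project data are invented purely to illustrate how a within-company model validated with leave-one-out cross validation can be contrasted with a cross-company model.

```python
# Minimal sketch (not from the paper): leave-one-out validation of a within-company
# model versus a cross-company model, using a toy productivity-based estimator and
# MMRE as the accuracy measure. All project data below are invented.
from statistics import mean

# (size in function points, actual effort in person-hours) -- hypothetical data
within_company = [(10, 120), (15, 200), (8, 90), (20, 310), (12, 150)]
cross_company = [(11, 100), (30, 500), (25, 380), (9, 70), (18, 260), (40, 700)]

def fit_productivity(projects):
    """Fit a trivial model: average effort per unit size."""
    return mean(effort / size for size, effort in projects)

def mre(actual, predicted):
    """Magnitude of Relative Error for a single project."""
    return abs(actual - predicted) / actual

def loo_within_mmre(projects):
    """Leave-one-out: predict each project from the remaining ones."""
    errors = []
    for i, (size, effort) in enumerate(projects):
        training = projects[:i] + projects[i + 1:]
        predicted = fit_productivity(training) * size
        errors.append(mre(effort, predicted))
    return mean(errors)

def cross_company_mmre(test_projects, cc_projects):
    """Predict every within-company project with the cross-company model."""
    rate = fit_productivity(cc_projects)
    return mean(mre(effort, rate * size) for size, effort in test_projects)

print(f"within-company MMRE: {loo_within_mmre(within_company):.2f}")
print(f"cross-company  MMRE: {cross_company_mmre(within_company, cross_company):.2f}")
```

With real data sets, the same comparison would use the organisations' own models and error measures; the point of the sketch is only the shape of the evaluation.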

408 citations

Journal ArticleDOI
TL;DR: A systematic literature review of empirical studies that investigated factors affecting the effectiveness of pair programming (PP) for CS/SE students, and of studies that measured that effectiveness, reveals two clear gaps in this research field, including a lack of studies focusing on pair compatibility factors aimed at making PP an effective pedagogical tool.
Abstract: The objective of this paper is to present the current evidence relative to the effectiveness of pair programming (PP) as a pedagogical tool in higher education CS/SE courses. We performed a systematic literature review (SLR) of empirical studies that investigated factors affecting the effectiveness of PP for CS/SE students and studies that measured the effectiveness of PP for CS/SE students. Seventy-four papers were used in our synthesis of evidence, and 14 compatibility factors that can potentially affect PP's effectiveness as a pedagogical tool were identified. Results showed that students' skill level was the factor that affected PP's effectiveness the most. The most common measure used to gauge PP's effectiveness was time spent on programming. In addition, students' satisfaction when using PP was overall higher than when working solo. Our meta-analyses showed that PP was effective in improving students' grades on assignments. Finally, in the studies that used quality as a measure of effectiveness, the number of test cases that succeeded, academic performance, and expert opinion were the quality measures most often applied. The results of this SLR show two clear gaps in this research field: 1) a lack of studies focusing on pair compatibility factors aimed at making PP an effective pedagogical tool and 2) a lack of studies investigating PP for software design/modeling tasks in conjunction with programming tasks.

321 citations

Journal ArticleDOI
TL;DR: A comparison of the prediction accuracy of three CBR techniques used to estimate the effort to develop Web hypermedia applications and to choose the one with the best estimates is presented.
Abstract: Software cost models and effort estimates help project managers allocate resources, control costs and schedules, and improve current practices, leading to projects finished on time and within budget. In the context of Web development, these issues are also crucial, and very challenging given that Web projects have short schedules and a very fluidic scope. In the context of Web engineering, few studies have compared the accuracy of different types of cost estimation techniques, with emphasis placed on linear and stepwise regressions and case-based reasoning (CBR). To date, only one type of CBR technique has been employed in Web engineering. We believe results obtained from that study may have been biased, given that other CBR techniques can also be used for effort prediction. Consequently, the first objective of this study is to compare the prediction accuracy of three CBR techniques to estimate the effort to develop Web hypermedia applications and to choose the one with the best estimates. The second objective is to compare the prediction accuracy of the best CBR technique against two commonly used prediction models, namely stepwise regression and regression trees. One dataset was used in the estimation process, and the results showed that the best predictions were obtained with stepwise regression.
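
As a rough illustration of the techniques being compared, the sketch below contrasts a one-analogy, CBR-style estimate with a simple least-squares regression fit on a single size measure. It is not the paper's experimental setup: the data, the single "web pages" feature, and the choice of one analogy are assumptions made only to show the two prediction styles side by side.

```python
# Minimal sketch (not the paper's models): case-based reasoning by analogy versus
# a simple least-squares regression, both predicting effort from one size measure.
# Feature, cases and the new project are hypothetical.
from statistics import mean

# (web pages, effort in person-hours) -- invented training cases
cases = [(25, 80), (40, 140), (60, 210), (90, 330), (120, 450)]
new_project_size = 75

def cbr_estimate(target_size, case_base, k=1):
    """CBR by analogy: reuse the effort of the k most similar past cases."""
    ranked = sorted(case_base, key=lambda c: abs(c[0] - target_size))
    return mean(effort for _, effort in ranked[:k])

def regression_estimate(target_size, case_base):
    """Ordinary least-squares line: effort = a + b * size."""
    xs = [s for s, _ in case_base]
    ys = [e for _, e in case_base]
    x_bar, y_bar = mean(xs), mean(ys)
    b = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys)) / \
        sum((x - x_bar) ** 2 for x in xs)
    a = y_bar - b * x_bar
    return a + b * target_size

print(f"CBR (1 analogy) estimate: {cbr_estimate(new_project_size, cases):.0f} h")
print(f"Regression estimate     : {regression_estimate(new_project_size, cases):.0f} h")
```

The CBR techniques and the stepwise regression studied in the paper differ in feature weighting, number of analogies and variable selection; the sketch only conveys the basic contrast between analogy-based reuse and a fitted model.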

223 citations

Proceedings ArticleDOI
15 Oct 2009
TL;DR: There is little evidence on the effectiveness of software maintainability prediction techniques and models, according to a systematic review of studies targeted at the software quality attribute of maintainability.
Abstract: This paper presents the results of a systematic review conducted to collect evidence on software maintainability prediction and metrics. The study was targeted at the software quality attribute of maintainability as opposed to the process of software maintenance. The evidence was gathered from the selected studies against a set of meaningful and focused questions. Of the 710 studies initially retrieved, only 15 were selected; their quality was assessed, data extraction was performed, and the data were synthesized against the research questions. Our results suggest that there is little evidence on the effectiveness of software maintainability prediction techniques and models.

221 citations

Proceedings ArticleDOI
17 Sep 2014
TL;DR: Subjective estimation techniques, e.g. expert judgment-based techniques, planning poker, and the use case points method, are the ones used most often in agile effort estimation studies; several identified research gaps suggest numerous possible avenues for future work.
Abstract: Context: Ever since the emergence of agile methodologies in 2001, many software companies have shifted to Agile Software Development (ASD), and since then many studies have been conducted to investigate effort estimation within such a context; however, to date there is no single study that presents a detailed overview of the state of the art in effort estimation for ASD. Objectives: The aim of this study is to provide a detailed overview of the state of the art in the area of effort estimation in ASD. Method: To report the state of the art, we conducted a systematic literature review in accordance with the guidelines proposed in the evidence-based software engineering literature. Results: A total of 25 primary studies were selected; the main findings are: i) subjective estimation techniques (e.g. expert judgment, planning poker, the use case points estimation method) are the most frequently applied in an agile context; ii) story points and use case points are the most frequently used size metrics; iii) MMRE (Mean Magnitude of Relative Error) and MRE (Magnitude of Relative Error) are the most frequently used accuracy metrics; iv) team skills, prior experience and task size are cited as the three most important cost drivers for effort estimation in ASD; and v) Extreme Programming (XP) and SCRUM are the only two agile methods identified in the primary studies. Conclusion: Subjective estimation techniques, e.g. expert judgment-based techniques, planning poker or the use case points method, are the ones used most often in agile effort estimation studies. As for size metrics, the ones used most in the primary studies were story points and use case points. Several research gaps were identified, relating to the agile methods, size metrics and cost drivers, thus suggesting numerous possible avenues for future work.
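
The accuracy metrics the review reports as most common, MRE and MMRE, are simple to compute. The sketch below is illustrative only: the hours-per-story-point figure and the backlog data are invented, and converting story points to hours via a historical rate is just one possible way an agile team might produce the forecasts being scored.

```python
# Minimal sketch (not from the review): turning story-point estimates into effort
# forecasts via a team's historical hours-per-point, then scoring each forecast
# with MRE and the whole set with MMRE. Data and the rate are invented.
from statistics import mean

hours_per_point = 6.5  # hypothetical historical average for the team

# (estimated story points, actual effort in hours) per completed backlog item
items = [(3, 22), (5, 30), (8, 60), (2, 11), (5, 35)]

def mre(actual, predicted):
    """Magnitude of Relative Error for one item."""
    return abs(actual - predicted) / actual

errors = []
for points, actual_hours in items:
    forecast = points * hours_per_point
    e = mre(actual_hours, forecast)
    errors.append(e)
    print(f"{points:>2} pts -> forecast {forecast:5.1f} h, actual {actual_hours:3d} h, MRE {e:.2f}")

print(f"MMRE over {len(items)} items: {mean(errors):.2f}")
```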

165 citations


Cited by
Journal ArticleDOI

08 Dec 2001 - BMJ
TL;DR: There is, I think, something ethereal about i, the square root of minus one; it seemed an odd beast at the time, an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

Journal ArticleDOI
TL;DR: The series of cost estimation SLRs demonstrates the potential value of EBSE for synthesising evidence and making it available to practitioners; European researchers appear to be the leading exponents of systematic literature reviews.
Abstract: Background: In 2004 the concept of evidence-based software engineering (EBSE) was introduced at the ICSE04 conference. Aims: This study assesses the impact of systematic literature reviews (SLRs), which are the recommended EBSE method for aggregating evidence. Method: We used the standard systematic literature review method, employing a manual search of 10 journals and 4 conference proceedings. Results: Of 20 relevant studies, eight addressed research trends rather than technique evaluation. Seven SLRs addressed cost estimation. The quality of the SLRs was fair, with only three scoring less than 2 out of 4. Conclusions: Currently, the topic areas covered by SLRs are limited. European researchers, particularly those at the Simula Laboratory, appear to be the leading exponents of systematic literature reviews. The series of cost estimation SLRs demonstrates the potential value of EBSE for synthesising evidence and making it available to practitioners.

2,843 citations

Proceedings ArticleDOI
26 Jun 2008
TL;DR: This work describes how to conduct a systematic mapping study in software engineering, provides guidelines for conducting systematic maps, and compares systematic maps with systematic reviews by systematically analyzing existing systematic reviews.
Abstract: BACKGROUND: A software engineering systematic map is a defined method to build a classification scheme and structure a software engineering field of interest. The analysis of results focuses on frequencies of publications for categories within the scheme. Thereby, the coverage of the research field can be determined. Different facets of the scheme can also be combined to answer more specific research questions. OBJECTIVE: We describe how to conduct a systematic mapping study in software engineering and provide guidelines. We also compare systematic maps and systematic reviews to clarify how to choose between them. This comparison leads to a set of guidelines for systematic maps. METHOD: We have defined a systematic mapping process and applied it to complete a systematic mapping study. Furthermore, we compare systematic maps with systematic reviews by systematically analyzing existing systematic reviews. RESULTS: We describe a process for software engineering systematic mapping studies and compare it to systematic reviews. Based on this, guidelines for conducting systematic maps are defined. CONCLUSIONS: Systematic maps and reviews are different in terms of goals, breadth, validity issues and implications. Thus, they should be used complementarily and require different methods (e.g., for analysis).
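
To make the frequency-oriented analysis concrete, the sketch below counts publications per category and cross-tabulates two facets of a hypothetical classification scheme; the facet names and the classified entries are invented and are not drawn from the paper.

```python
# Minimal sketch (not from the paper): the frequency analysis at the heart of a
# mapping study -- counting publications per category and combining two facets
# to answer a more specific question. All entries are invented.
from collections import Counter

# Each classified paper: (research type facet, topic facet)
classified = [
    ("evaluation", "effort estimation"),
    ("solution",   "effort estimation"),
    ("evaluation", "maintainability"),
    ("experience", "effort estimation"),
    ("solution",   "maintainability"),
]

per_topic = Counter(topic for _, topic in classified)
per_type = Counter(rtype for rtype, _ in classified)
combined = Counter(classified)  # facets combined for more specific questions

print("publications per topic:", dict(per_topic))
print("publications per type :", dict(per_type))
print("type x topic          :", dict(combined))
```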

2,486 citations

Proceedings ArticleDOI
13 May 2014
TL;DR: It is concluded that using snowballing, as a first search strategy, may very well be a good alternative to the use of database searches.
Abstract: Background: Systematic literature studies have become common in software engineering, and hence it is important to understand how to conduct them efficiently and reliably. Objective: This paper presents guidelines for conducting literature reviews using a snowballing approach, and they are illustrated and evaluated by replicating a published systematic literature review. Method: The guidelines are based on the experience from conducting several systematic literature reviews and experimenting with different approaches. Results: The guidelines for using snowballing as a way to search for relevant literature were successfully applied to a systematic literature review. Conclusions: It is concluded that using snowballing, as a first search strategy, may very well be a good alternative to the use of database searches.
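
The core of a snowballing search can be sketched as a simple fixed-point loop over a citation graph. The snippet below is an illustration rather than the paper's guidelines: the toy REFS/CITED_BY maps and the trivial inclusion check stand in for the real reference lists, citation databases and inclusion criteria a reviewer would apply.

```python
# Minimal sketch (not from the paper): iterating backward and forward snowballing
# over a toy citation graph until no new papers are included. REFS maps a paper to
# the papers it cites; CITED_BY maps a paper to the papers citing it. All names
# and links are invented.
REFS = {
    "start-A": ["p1", "p2"],
    "start-B": ["p2", "p3"],
    "p1": [], "p2": ["p4"], "p3": [], "p4": [], "p5": [],
}
CITED_BY = {
    "start-A": ["p5"],
    "start-B": [],
    "p1": [], "p2": ["p5"], "p3": [], "p4": [], "p5": [],
}

def snowball(start_set, is_relevant):
    """Apply backward and forward snowballing until no new papers are added."""
    included = set(start_set)
    frontier = set(start_set)
    while frontier:
        candidates = set()
        for paper in frontier:
            candidates.update(REFS.get(paper, []))       # backward: references
            candidates.update(CITED_BY.get(paper, []))   # forward: citing papers
        frontier = {p for p in candidates if p not in included and is_relevant(p)}
        included.update(frontier)
    return included

# Accept everything in this toy run; a real review applies inclusion criteria here.
print(sorted(snowball({"start-A", "start-B"}, lambda p: True)))
```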

2,279 citations