Author

James Miller

Bio: James Miller is an academic researcher from the University of Alberta. The author has contributed to research in topics including Software inspection and Software development. The author has an h-index of 36 and has co-authored 181 publications receiving 4055 citations. Previous affiliations of James Miller include the University of Strathclyde and the University of Western Australia.


Papers
Journal ArticleDOI
TL;DR: Adding DNA barcoding to the inventory of the caterpillars, their food plants and parasitoids in northwestern Costa Rica has substantially improved the quality and depth of the inventory, and greatly multiplied the number of situations requiring further taxonomic work for resolution.
Abstract: Inventory of the caterpillars, their food plants and parasitoids began in 1978 for today's Area de Conservacion Guanacaste (ACG), in northwestern Costa Rica. This complex mosaic of 120 000 ha of conserved and regenerating dry, cloud and rain forest over 0–2000 m elevation contains at least 10 000 species of non-leaf-mining caterpillars used by more than 5000 species of parasitoids. Several hundred thousand specimens of ACG-reared adult Lepidoptera and parasitoids have been intensively and extensively studied morphologically by many taxonomists, including most of the co-authors. DNA barcoding — the use of a standardized short mitochondrial DNA sequence to identify specimens and flush out undisclosed species — was added to the taxonomic identification process in 2003. Barcoding has been found to be extremely accurate during the identification of about 100 000 specimens of about 3500 morphologically defined species of adult moths, butterflies, tachinid flies, and parasitoid wasps. Less than 1% of the species have such similar barcodes that a molecularly based taxonomic identification is impossible. No specimen with a full barcode was misidentified when its barcode was compared with the barcode library. Also as expected from early trials, barcoding a series from all morphologically defined species, and correlating the morphological, ecological and barcode traits, has revealed many hundreds of overlooked presumptive species. Many but not all of these cryptic species can now be distinguished by subtle morphological and/or ecological traits previously ascribed to ‘variation’ or thought to be insignificant for species-level recognition. Adding DNA barcoding to the inventory has substantially improved the quality and depth of the inventory, and greatly multiplied the number of situations requiring further taxonomic work for resolution.
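Illustrative sketch (not from the paper; all names are hypothetical): at its simplest, identification against a barcode library amounts to finding the reference sequence closest to a specimen's barcode. Real pipelines use sequence alignment and distance models rather than the raw mismatch count used below.

```java
import java.util.Map;

// Hypothetical toy sketch of barcode-based identification: pick the library
// species whose reference COI sequence is most similar to the specimen's.
class BarcodeMatcher {
    static String identify(String specimenBarcode, Map<String, String> library) {
        String bestSpecies = null;
        int bestMismatches = Integer.MAX_VALUE;
        for (Map.Entry<String, String> entry : library.entrySet()) {
            int mismatches = mismatches(specimenBarcode, entry.getValue());
            if (mismatches < bestMismatches) {
                bestMismatches = mismatches;
                bestSpecies = entry.getKey();
            }
        }
        return bestSpecies;
    }

    // Count positions where two sequences differ, penalising length differences.
    private static int mismatches(String a, String b) {
        int n = Math.min(a.length(), b.length());
        int count = Math.abs(a.length() - b.length());
        for (int i = 0; i < n; i++) {
            if (a.charAt(i) != b.charAt(i)) count++;
        }
        return count;
    }
}
```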

334 citations

Journal ArticleDOI
TL;DR: The findings suggest it is not at all obvious that object-oriented software will be more maintainable in the long run, and they are sufficiently important that independent researchers should attempt to verify the results.
Abstract: This empirical research was undertaken as part of a multi-method programme of research to investigate unsupported claims made of object-oriented technology. A series of subject-based laboratory experiments, including an internal replication, tested the effect of inheritance depth on the maintainability of object-oriented software. Subjects were timed performing identical maintenance tasks on object-oriented software with a hierarchy of three levels of inheritance depth and equivalent object-based software with no inheritance. This was then replicated with more experienced subjects. In a second experiment of similar design, subjects were timed performing identical maintenance tasks on object-oriented software with a hierarchy of five levels of inheritance depth and the equivalent object-based software. The collected data showed that subjects maintaining object-oriented software with three levels of inheritance depth performed the maintenance tasks significantly quicker than those maintaining equivalent object-based software with no inheritance. In contrast, subjects maintaining the object-oriented software with five levels of inheritance depth took longer, on average, than the subjects maintaining the equivalent object-based software (although statistical significance was not obtained). Subjects' source code solutions and debriefing questionnaires provided some evidence suggesting subjects began to experience difficulties with the deeper inheritance hierarchy. It is not at all obvious that object-oriented software is going to be more maintainable in the long run. These findings are sufficiently important that attempts to verify the results should be made by independent researchers.
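A minimal, hypothetical Java sketch (not the experiment's actual materials) of the two designs being compared: a three-level inheritance hierarchy versus a functionally equivalent 'flat' object-based class, roughly the shape of code such maintenance tasks would target.

```java
// Version A: three levels of inheritance depth.
class Account {
    protected double balance;
    void deposit(double amount) { balance += amount; }
}

class SavingsAccount extends Account {
    protected double rate = 0.02;
    void addInterest() { balance += balance * rate; }
}

class StudentSavingsAccount extends SavingsAccount {
    // A maintainer must trace behaviour up through two superclasses.
    StudentSavingsAccount() { rate = 0.01; }
}

// Version B: equivalent object-based design with no inheritance.
class FlatStudentSavingsAccount {
    private double balance;
    private final double rate = 0.01;
    void deposit(double amount) { balance += amount; }
    void addInterest() { balance += balance * rate; }
}
```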

153 citations

Journal ArticleDOI
TL;DR: The paper outlines ideas from other disciplines for controlling variation across empirical studies and describes a number of recommendations for conducting and reporting empirical work, with the aim of advancing the discipline towards a position where meta-analysis can be profitably employed.

150 citations

Journal ArticleDOI
TL;DR: This replication is broadly supportive of the results from the original experiment, namely, that the Scenario approach is superior to the Checklist approach; and that the meeting component of a software inspection is not an effective defect detection mechanism.
Abstract: Software inspection is one of the best methods of verifying software documents. Software inspection is a complex process, with many possible variations, most of which have received little or no evaluation. This paper reports on the evaluation of one component of the inspection process, detection aids, specifically using Scenario or Checklist approaches. The evaluation is by subject-based experimentation, and is currently one of three independent experiments on the same hypothesis. The paper describes the experimental process, the resulting analysis of the experimental data, and attempts to compare the results in this experiment with the other experiments. This replication is broadly supportive of the results from the original experiment, namely, that the Scenario approach is superior to the Checklist approach; and that the meeting component of a software inspection is not an effective defect detection mechanism. This experiment also tentatively proposes additional relationships between general academic performance and individual inspection performance; and between meeting loss and group inspection performance.

136 citations

Journal ArticleDOI
TL;DR: A meta-study of the empirical literature on trust in e-commerce systems is conducted, and a qualitative model incorporating the various factors that have been empirically found to influence consumer trust in E-commerce is proposed.
Abstract: Trust is at once an elusive, imprecise concept, and a critical attribute that must be engineered into e-commerce systems. Trust conveys a vast number of meanings, and is deeply dependent upon context. The literature on engineering trust into e-commerce systems reflects these ambiguous meanings; there are a large number of articles, but there is as yet no clear theoretical framework for the investigation of trust in e-commerce. E-commerce, however, is predicated on trust; indeed, any e-commerce vendor that fails to establish a trusting relationship with their customers is doomed. There is a very clear need for specific guidance on e-commerce system attributes and business operations that will effectively promote consumer trust. To address this need, we have conducted a meta-study of the empirical literature on trust in e-commerce systems. This area of research is still immature, and hence our meta-analysis is qualitative rather than quantitative. We identify the major theoretical frameworks that have been proposed in the literature, and propose a qualitative model incorporating the various factors that have been empirically found to influence consumer trust in e-commerce. As this model is too complex to be of practical use, we explore subsets of this model that have the strongest support in the literature, and discuss the implications of this model for Web site design. Finally, we outline key conceptual and methodological needs for future work on this topic.

128 citations


Cited by
Journal ArticleDOI
TL;DR: The series of cost estimation SLRs demonstrate the potential value of EBSE for synthesising evidence and making it available to practitioners, and European researchers appear to be the leading exponents of systematic literature reviews.
Abstract: Background: In 2004 the concept of evidence-based software engineering (EBSE) was introduced at the ICSE04 conference. Aims: This study assesses the impact of systematic literature reviews (SLRs) which are the recommended EBSE method for aggregating evidence. Method: We used the standard systematic literature review method employing a manual search of 10 journals and 4 conference proceedings. Results: Of 20 relevant studies, eight addressed research trends rather than technique evaluation. Seven SLRs addressed cost estimation. The quality of SLRs was fair with only three scoring less than 2 out of 4. Conclusions: Currently, the topic areas covered by SLRs are limited. European researchers, particularly those at the Simula Laboratory appear to be the leading exponents of systematic literature reviews. The series of cost estimation SLRs demonstrate the potential value of EBSE for synthesising evidence and making it available to practitioners.

2,843 citations

Proceedings ArticleDOI
13 May 2014
TL;DR: It is concluded that using snowballing, as a first search strategy, may very well be a good alternative to the use of database searches.
Abstract: Background: Systematic literature studies have become common in software engineering, and hence it is important to understand how to conduct them efficiently and reliably. Objective: This paper presents guidelines for conducting literature reviews using a snowballing approach, and they are illustrated and evaluated by replicating a published systematic literature review. Method: The guidelines are based on the experience from conducting several systematic literature reviews and experimenting with different approaches. Results: The guidelines for using snowballing as a way to search for relevant literature were successfully applied to a systematic literature review. Conclusions: It is concluded that using snowballing, as a first search strategy, may very well be a good alternative to the use of database searches.
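As a rough illustration of the snowballing loop itself (not from the paper; the data structures are assumptions), the sketch below combines backward snowballing (a paper's reference list) and forward snowballing (papers citing it) from a start set until no new papers are added. In a real review each candidate would be screened manually before inclusion.

```java
import java.util.*;

// Hypothetical sketch of iterative snowballing over a citation graph.
// Papers are identified by strings; the reference and citation maps are
// assumed inputs, not a real bibliographic API.
class Snowballing {
    static Set<String> snowball(Set<String> startSet,
                                Map<String, List<String>> references,
                                Map<String, List<String>> citedBy) {
        Set<String> included = new HashSet<>(startSet);
        Deque<String> frontier = new ArrayDeque<>(startSet);
        while (!frontier.isEmpty()) {
            String paper = frontier.pop();
            List<String> candidates = new ArrayList<>();
            candidates.addAll(references.getOrDefault(paper, List.of()));  // backward snowballing
            candidates.addAll(citedBy.getOrDefault(paper, List.of()));     // forward snowballing
            for (String candidate : candidates) {
                // In a real review, inclusion is a manual screening decision;
                // here every newly seen paper is "included" for simplicity.
                if (included.add(candidate)) {
                    frontier.push(candidate);
                }
            }
        }
        return included;
    }
}
```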

2,279 citations

01 Jan 1978
TL;DR: This ebook is the first authorized digital version of Kernighan and Ritchie's 1988 classic, The C Programming Language (2nd Ed.), and is a "must-have" reference for every serious programmer's digital library.
Abstract: This ebook is the first authorized digital version of Kernighan and Ritchie's 1988 classic, The C Programming Language (2nd Ed.). One of the best-selling programming books published in the last fifty years, "K&R" has been called everything from the "bible" to "a landmark in computer science" and it has influenced generations of programmers. Available now for all leading ebook platforms, this concise and beautifully written text is a "must-have" reference for every serious programmer's digital library. As modestly described by the authors in the Preface to the First Edition, this "is not an introductory programming manual; it assumes some familiarity with basic programming concepts like variables, assignment statements, loops, and functions. Nonetheless, a novice programmer should be able to read along and pick up the language, although access to a more knowledgeable colleague will help."

2,120 citations

Book
16 Jun 2012
TL;DR: The purpose of Experimentation in Software Engineering is to introduce students, teachers, researchers, and practitioners to empirical studies in software engineering using controlled experiments; the book provides indispensable information on empirical studies, in particular experiments, but also case studies, systematic literature reviews, and surveys.
Abstract: Like other sciences and engineering disciplines, software engineering requires a cycle of model building, experimentation, and learning. Experiments are valuable tools for all software engineers who are involved in evaluating and choosing between different methods, techniques, languages and tools. The purpose of Experimentation in Software Engineering is to introduce students, teachers, researchers, and practitioners to empirical studies in software engineering, using controlled experiments. The introduction to experimentation is provided through a process perspective, and the focus is on the steps that we have to go through to perform an experiment. The book is divided into three parts. The first part provides a background of theories and methods used in experimentation. Part II then devotes one chapter to each of the five experiment steps: scoping, planning, execution, analysis, and result presentation. Part III completes the presentation with two examples. Assignments and statistical material are provided in appendixes. Overall, the book provides indispensable information regarding empirical studies, in particular experiments, but also case studies, systematic literature reviews, and surveys. It is a revision of the authors' book, which was published in 2000. In addition, substantial new material, e.g. concerning systematic literature reviews and case study research, is introduced. The book is self-contained and it is suitable as a course book in undergraduate or graduate studies where the need for empirical studies in software engineering is stressed. Exercises and assignments are included to combine the more theoretical material with practical aspects. Researchers will also benefit from the book, learning more about how to conduct empirical studies, and likewise practitioners may use it as a cookbook when evaluating new methods or techniques before implementing them in their organization.

2,079 citations

Book
01 Nov 2002
TL;DR: Driving development with automated tests, a style of development called "Test-Driven Development" (TDD for short), aims to dramatically reduce the defect density of code and make the subject of work crystal clear to all involved.
Abstract: From the Book: "Clean code that works" is Ron Jeffries' pithy phrase. The goal is clean code that works, and for a whole bunch of reasons:

- Clean code that works is a predictable way to develop. You know when you are finished, without having to worry about a long bug trail.
- Clean code that works gives you a chance to learn all the lessons that the code has to teach you. If you only ever slap together the first thing you think of, you never have time to think of a second, better, thing.
- Clean code that works improves the lives of users of our software.
- Clean code that works lets your teammates count on you, and you on them.
- Writing clean code that works feels good.

But how do you get to clean code that works? Many forces drive you away from clean code, and even code that works. Without taking too much counsel of our fears, here's what we do: drive development with automated tests, a style of development called "Test-Driven Development" (TDD for short). In Test-Driven Development, you:

- Write new code only if you first have a failing automated test.
- Eliminate duplication.

Two simple rules, but they generate complex individual and group behavior. Some of the technical implications are:

- You must design organically, with running code providing feedback between decisions.
- You must write your own tests, since you can't wait twenty times a day for someone else to write a test.
- Your development environment must provide rapid response to small changes.
- Your designs must consist of many highly cohesive, loosely coupled components, just to make testing easy.

The two rules imply an order to the tasks of programming:

1. Red: write a little test that doesn't work, perhaps doesn't even compile at first.
2. Green: make the test work quickly, committing whatever sins necessary in the process.
3. Refactor: eliminate all the duplication created in just getting the test to work.

Red/green/refactor. The TDD mantra. Assuming for the moment that such a style is possible, it might be possible to dramatically reduce the defect density of code and make the subject of work crystal clear to all involved. If so, writing only code demanded by failing tests also has social implications:

- If the defect density can be reduced enough, QA can shift from reactive to pro-active work.
- If the number of nasty surprises can be reduced enough, project managers can estimate accurately enough to involve real customers in daily development.
- If the topics of technical conversations can be made clear enough, programmers can work in minute-by-minute collaboration instead of daily or weekly collaboration.
- Again, if the defect density can be reduced enough, we can have shippable software with new functionality every day, leading to new business relationships with customers.

So, the concept is simple, but what's my motivation? Why would a programmer take on the additional work of writing automated tests? Why would a programmer work in tiny little steps when their mind is capable of great soaring swoops of design? Courage. Test-driven development is a way of managing fear during programming. I don't mean fear in a bad way, pow widdle prwogwammew needs a pacifiew, but fear in the legitimate, this-is-a-hard-problem-and-I-can't-see-the-end-from-the-beginning sense.

If pain is nature's way of saying "Stop!", fear is nature's way of saying "Be careful." Being careful is good, but fear has a host of other effects:

- Makes you tentative.
- Makes you want to communicate less.
- Makes you shy from feedback.
- Makes you grumpy.

None of these effects are helpful when programming, especially when programming something hard. So, how can you face a difficult situation and:

- Instead of being tentative, begin learning concretely as quickly as possible.
- Instead of clamming up, communicate more clearly.
- Instead of avoiding feedback, search out helpful, concrete feedback.
- (You'll have to work on grumpiness on your own.)

Imagine programming as turning a crank to pull a bucket of water from a well. When the bucket is small, a free-spinning crank is fine. When the bucket is big and full of water, you're going to get tired before the bucket is all the way up. You need a ratchet mechanism to enable you to rest between bouts of cranking. The heavier the bucket, the closer the teeth need to be on the ratchet. The tests in test-driven development are the teeth of the ratchet. Once you get one test working, you know it is working, now and forever. You are one step closer to having everything working than you were when the test was broken. Now get the next one working, and the next, and the next. By analogy, the tougher the programming problem, the less ground should be covered by each test.

Readers of Extreme Programming Explained will notice a difference in tone between XP and TDD. TDD isn't an absolute like Extreme Programming. XP says, "Here are things you must be able to do to be prepared to evolve further." TDD is a little fuzzier. TDD is an awareness of the gap between decision and feedback during programming, and techniques to control that gap. "What if I do a paper design for a week, then test-drive the code? Is that TDD?" Sure, it's TDD. You were aware of the gap between decision and feedback and you controlled the gap deliberately. That said, most people who learn TDD find their programming practice changed for good. "Test Infected" is the phrase Erich Gamma coined to describe this shift. You might find yourself writing more tests earlier, and working in smaller steps than you ever dreamed would be sensible. On the other hand, some programmers learn TDD and go back to their earlier practices, reserving TDD for special occasions when ordinary programming isn't making progress.

There are certainly programming tasks that can't be driven solely by tests (or at least, not yet). Security software and concurrency, for example, are two topics where TDD is not sufficient to mechanically demonstrate that the goals of the software have been met. Security relies on essentially defect-free code, true, but also on human judgement about the methods used to secure the software. Subtle concurrency problems can't be reliably duplicated by running the code.

Once you are finished reading this book, you should be ready to:

- Start simply.
- Write automated tests.
- Refactor to add design decisions one at a time.

This book is organized into three sections:

- An example of writing typical model code using TDD. The example is one I got from Ward Cunningham years ago, and have used many times since: multi-currency arithmetic. In it you will learn to write tests before code and grow a design organically.
- An example of testing more complicated logic, including reflection and exceptions, by developing a framework for automated testing. This example also serves to introduce you to the xUnit architecture that is at the heart of many programmer-oriented testing tools. In the second example you will learn to work in even smaller steps than in the first example, including the kind of self-referential hooha beloved of computer scientists.
- Patterns for TDD. Included are patterns for deciding what tests to write, how to write tests using xUnit, and a greatest-hits selection of the design patterns and refactorings used in the examples.

I wrote the examples imagining a pair programming session. If you like looking at the map before wandering around, you may want to go straight to the patterns in Section 3 and use the examples as illustrations. If you prefer just wandering around and then looking at the map to see where you've been, try reading the examples through and referring to the patterns when you want more detail about a technique, then using the patterns as a reference. Several reviewers have commented they got the most out of the examples when they started up a programming environment and entered the code and ran the tests as they read.

A note about the examples. Both examples, multi-currency calculation and a testing framework, appear simple. There are (and I have seen) complicated, ugly, messy ways of solving the same problems. I could have chosen one of those complicated, ugly, messy solutions to give the book an air of "reality." However, my goal, and I hope your goal, is to write clean code that works. Before teeing off on the examples as being too simple, spend 15 seconds imagining a programming world in which all code was this clear and direct, where there were no complicated solutions, only apparently complicated problems begging for careful thought. TDD is a practice that can help you lead yourself to exactly that careful thought.
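A minimal, hypothetical JUnit 5 sketch of one red/green/refactor cycle, loosely inspired by the book's multi-currency example; the class and test names are illustrative, not the book's actual code.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

// Red: write a small failing test first (Dollar does not exist yet, so this
// does not even compile at first).
class DollarTest {
    @Test
    void multiplicationScalesTheAmount() {
        Dollar five = new Dollar(5);
        Dollar product = five.times(2);
        assertEquals(10, product.amount);
    }
}

// Green: the simplest code that makes the test pass.
class Dollar {
    final int amount;

    Dollar(int amount) {
        this.amount = amount;
    }

    Dollar times(int multiplier) {
        return new Dollar(amount * multiplier);
    }
}

// Refactor: with the test green, duplication and design smells can be removed
// safely, re-running the test after each small change.
```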

1,864 citations