Author

Aziz Nanthaamornphong

Other affiliations: University of Alabama
Bio: Aziz Nanthaamornphong is an academic researcher at Prince of Songkla University whose work focuses on software development and software construction. The author has an h-index of 6 and has co-authored 29 publications receiving 129 citations. Previous affiliations include the University of Alabama.

Papers
Journal ArticleDOI
TL;DR: The results of this survey indicate the need for additional empirical evaluation of the use of TDD for the development of scientific software to help organizations make better decisions.
Abstract: Scientific software developers are increasingly employing various software engineering practices. Specifically, scientists are beginning to use Test-Driven Development (TDD). Even with this increasing use of TDD, the effect of TDD on scientific software development is not fully understood. To help scientific developers determine whether TDD is appropriate for their scientific projects, we surveyed scientific developers who use TDD to understand: (1) TDD's effectiveness, (2) the benefits and challenges of using TDD, and (3) the use of refactoring practices (an important part of the TDD process). Some key positive results include: (1) TDD helps scientific developers increase software quality, in particular functionality and reliability; and (2) TDD helps scientific developers reduce the number of problems in the early phase of projects. Conversely, some key challenges include: (1) TDD may not be effective for all types of scientific projects; and (2) writing a good test is the most difficult task in TDD, particularly in a parallel computing environment. To summarize, TDD generally has a positive effect on the quality of scientific software, but it often requires a large effort investment. The results of this survey indicate the need for additional empirical evaluation of the use of TDD for the development of scientific software, to help organizations make better decisions.
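The survey's finding that writing a good test is the hardest part of TDD is easiest to appreciate with a concrete case. Below is a minimal, purely illustrative Python sketch (not taken from the surveyed projects) of what a TDD-style unit test for a small numerical routine can look like; the trapezoid-rule integrator and its tolerances are hypothetical examples, and real scientific kernels are far harder to pin down with exact oracles.

```python
# Illustrative sketch only: a TDD-style unit test for a small numerical
# routine. The integrator and expected values are hypothetical examples.
import math
import unittest


def trapezoid(f, a, b, n=1000):
    """Approximate the integral of f over [a, b] with the trapezoid rule."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return total * h


class TestTrapezoid(unittest.TestCase):
    def test_integrates_sine_over_half_period(self):
        # The exact integral of sin(x) on [0, pi] is 2. Floating-point
        # results need a tolerance, not exact equality.
        self.assertAlmostEqual(trapezoid(math.sin, 0.0, math.pi), 2.0, places=5)

    def test_zero_width_interval(self):
        self.assertAlmostEqual(trapezoid(math.sin, 1.0, 1.0), 0.0)


if __name__ == "__main__":
    unittest.main()
```

The awkward part in practice, as the survey respondents note, is choosing the oracle and the tolerance; for parallel or stochastic codes even this simple pattern becomes hard to apply.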

21 citations

Journal ArticleDOI
TL;DR: A software tool to extract unified modeling language (UML) class diagrams from Fortran code facilitates the developers' ability to examine the entities and their relationships in the software system.
Abstract: Many scientists who implement computational science and engineering software have adopted the object-oriented (OO) Fortran paradigm. One of the challenges faced by OO Fortran developers is the inability to obtain high-level software design descriptions of existing applications. Knowledge of the overall software design is not only valuable in the absence of documentation, it can also assist developers with different tasks during the software development process, especially maintenance and refactoring. The software engineering community commonly uses reverse engineering techniques to deal with this challenge. A number of reverse engineering-based tools have been proposed, but few of them can be applied to OO Fortran applications. In this paper, we propose a software tool to extract unified modeling language (UML) class diagrams from Fortran code. The UML class diagram facilitates the developers' ability to examine the entities and their relationships in the software system. The extracted diagrams enhance software maintenance and evolution. The experiments carried out to evaluate the proposed tool demonstrate its accuracy and expose a few of its limitations.
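To make the reverse-engineering idea concrete, here is a deliberately tiny Python sketch, not the authors' tool, that scans OO Fortran source for derived-type declarations and emits PlantUML class-diagram text for the inheritance relationships it finds. The sample Fortran types and the regex are illustrative assumptions only; a real extractor must also handle type components, type-bound procedures, associations, and aggregations.

```python
# A minimal sketch (not the paper's tool) of the reverse-engineering idea:
# scan object-oriented Fortran for derived-type declarations and emit
# PlantUML class-diagram text showing the inheritance relationships.
import re

FORTRAN_SOURCE = """
type :: shape
end type shape

type, extends(shape) :: circle
end type circle
"""

# Matches "type :: name" and "type, extends(parent) :: name" declarations.
TYPE_DECL = re.compile(
    r"^\s*type\s*(?:,\s*extends\((\w+)\)\s*)?::\s*(\w+)",
    re.IGNORECASE | re.MULTILINE,
)

def to_plantuml(source: str) -> str:
    lines = ["@startuml"]
    for parent, name in TYPE_DECL.findall(source):
        lines.append(f"class {name}")
        if parent:  # extends(...) present -> draw an inheritance edge
            lines.append(f"{parent} <|-- {name}")
    lines.append("@enduml")
    return "\n".join(lines)

print(to_plantuml(FORTRAN_SOURCE))
```

Feeding the emitted text to PlantUML renders a two-class diagram with shape as the parent of circle, which is the kind of high-level view the paper argues OO Fortran developers are missing.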

18 citations

Proceedings ArticleDOI
21 Sep 2011
TL;DR: Two experiments with graduate-student participants, studying whether design patterns improve software quality (specifically maintainability and understandability), revealed that the design patterns improved neither the maintainability nor the understandability of the software.
Abstract: Design patterns are widely used within the software engineering community. Researchers claim that design patterns improve software quality. In this paper, we describe two experiments, using graduate student participants, to study whether design patterns improve software quality, specifically maintainability and understandability. We replicated a controlled experiment to compare the maintainability of two implementations of an application, one using a design pattern and the other using a simpler alternative. The maintenance tasks in this replication experiment required the participants to answer questions about a Java program and then modify that program. Prior to the replication, we performed a preliminary exercise to investigate whether design patterns improve the understandability of software designs. We gave the participants the graphical design of the systems that would be used in the replication study. Each participant received either the version of the design containing the design pattern or the version containing the simpler alternative. We asked the participants a series of questions to see how well they understood the given design. The results of the two experiments revealed that the design patterns did not improve either the maintainability or the understandability of the software. We found no significant correlation between the maintainability and the understandability of the software, even though the participants had received the design of the systems before they performed the maintenance tasks.
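To illustrate the kind of trade-off such experiments probe, here is a hedged Python sketch (the experiment itself used Java programs that are not reproduced here) contrasting a Decorator-style solution with a simpler flag-based alternative that yields the same behavior; all class names are hypothetical.

```python
# Illustrative sketch of the "pattern vs. simpler alternative" comparison
# such experiments make; not the study's actual material.

# Pattern version: behavior is added by wrapping another object.
class Text:
    def render(self) -> str:
        return "hello"

class BoldDecorator:
    """Decorator: wraps any object with render() and adds behavior."""
    def __init__(self, inner):
        self.inner = inner
    def render(self) -> str:
        return f"<b>{self.inner.render()}</b>"

# Simpler alternative: the same behavior via a flag on a single class.
class FlaggedText:
    def __init__(self, bold: bool = False):
        self.bold = bold
    def render(self) -> str:
        s = "hello"
        return f"<b>{s}</b>" if self.bold else s

# Both produce identical output; the experiments ask which version is
# easier to understand and maintain when tasks change.
assert BoldDecorator(Text()).render() == FlaggedText(bold=True).render()
```

The pattern version composes freely (decorators can stack), while the flag version is shorter and has fewer moving parts; which property wins during maintenance is exactly the empirical question.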

16 citations

Journal ArticleDOI
TL;DR: A close replication of an experiment investigating the impact of design patterns (Decorator and Abstract Factory) on software maintenance is performed and the Decorator pattern is found to be preferable to a simpler solution during maintenance, as long as the developer has at least some prior knowledge of the pattern.
Abstract: Context. Several empirical studies have explored the benefits of software design patterns, but their collective results are highly inconsistent. Resolving the inconsistencies requires investigating moderators—i.e., variables that cause an effect to differ across contexts. Objectives. Replicate a design patterns experiment at multiple sites and identify sufficient moderators to generalize the results across prior studies. Methods. We perform a close replication of an experiment investigating the impact (in terms of time and quality) of design patterns (Decorator and Abstract Factory) on software maintenance. The experiment was replicated once previously, with divergent results. We execute our replication at four universities—spanning two continents and three countries—using a new method for performing distributed replications based on closely coordinated, small-scale instances (“joint replication”). We perform two analyses: 1) a post-hoc analysis of moderators, based on frequentist and Bayesian statistics; 2) an a priori analysis of the original hypotheses, based on frequentist statistics. Results. The main effect differs across the previous instances of the experiment and across the sites in our distributed replication. Our analysis of moderators (including developer experience and pattern knowledge) resolves the differences sufficiently to allow for cross-context (and cross-study) conclusions. The final conclusions represent 126 participants from five universities and 12 software companies, spanning two continents and at least four countries. Conclusions. The Decorator pattern is found to be preferable to a simpler solution during maintenance, as long as the developer has at least some prior knowledge of the pattern. For Abstract Factory, the simpler solution is found to be mostly equivalent to the pattern solution. Abstract Factory is shown to require a higher level of knowledge and/or experience than Decorator for the pattern to be beneficial.
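For readers unfamiliar with the second pattern studied, the following hedged Python sketch (again, not the replication's actual Java material) places a minimal Abstract Factory next to the kind of "simpler solution" it is typically compared against, a single conditional constructor function; all names are illustrative.

```python
# Hedged illustration of Abstract Factory vs. a simpler conditional;
# not the replication's experimental material.
from abc import ABC, abstractmethod

class Button(ABC):
    @abstractmethod
    def label(self) -> str: ...

class DarkButton(Button):
    def label(self) -> str:
        return "dark button"

class LightButton(Button):
    def label(self) -> str:
        return "light button"

class WidgetFactory(ABC):
    """Abstract Factory: one interface, one concrete factory per family."""
    @abstractmethod
    def make_button(self) -> Button: ...

class DarkFactory(WidgetFactory):
    def make_button(self) -> Button:
        return DarkButton()

class LightFactory(WidgetFactory):
    def make_button(self) -> Button:
        return LightButton()

# Simpler solution: a conditional in one constructor function.
def make_button(theme: str) -> Button:
    return DarkButton() if theme == "dark" else LightButton()

assert DarkFactory().make_button().label() == make_button("dark").label()
```

The extra indirection in the factory version is what the study's conclusion points at: it pays off only when the developer already knows the pattern well enough to navigate it.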

15 citations

Proceedings ArticleDOI
18 May 2013
TL;DR: Discusses the software-engineering practices adopted in a laser-induced incandescence community model: a collaborative scientific-software application intended to be extended, modified, and used by different researchers.
Abstract: The multidisciplinary requirements of current computational modeling problems preclude the development of scientific software that is maintained and used by only a select few scientists. The multidisciplinary nature of these efforts requires large-scale software projects established with a wide developer and user base in mind. This article describes some of the software-engineering practices adopted in a scientific-software application for a laser-induced incandescence community model. The project uses an Agile and Test-Driven Development approach to implement the infrastructure for a collaborative model that is to be extended, modified, and used by different researchers. We discuss some of the software-engineering practices that can be exploited throughout the life of a project, starting with its inception, when only a handful of developers are contributing, and the mechanisms we have put in place to allow the natural expansion of the model.

13 citations


Cited by
20 Jan 2017
TL;DR: "Constructing Grounded Theory: A Practical Guide through Qualitative Analysis" offers a practical, systematic guide to building theory from qualitative data, and is a good starting point for such a study.
Abstract: Qualitative research is an important tool for understanding society and human behavior. Grounded-theory research, one of several qualitative methodologies, has been attracting growing interest and popularity among scholars and researchers in the social sciences and related fields such as behavioral science, sociology, public health, nursing, social psychology, education, political science, and information studies. The book "Constructing Grounded Theory: A Practical Guide through Qualitative Analysis" helps readers understand the development of grounded-theory research practice, together with systematic guidelines and procedures for carrying it out. It is well worth reading, especially for early-career researchers seeking guidance on applying the approach in their own work, while experienced researchers can read it to broaden their methodological perspective.

4,417 citations

Book
01 Nov 2002
TL;DR: Driving development with automated tests, a style of development called "Test-Driven Development" (TDD for short), aims to dramatically reduce the defect density of code and make the subject of work crystal clear to all involved.
Abstract: From the Book: "Clean code that works" is Ron Jeffries' pithy phrase. The goal is clean code that works, and for a whole bunch of reasons:

- Clean code that works is a predictable way to develop. You know when you are finished, without having to worry about a long bug trail.
- Clean code that works gives you a chance to learn all the lessons that the code has to teach you. If you only ever slap together the first thing you think of, you never have time to think of a second, better, thing.
- Clean code that works improves the lives of users of our software.
- Clean code that works lets your teammates count on you, and you on them.
- Writing clean code that works feels good.

But how do you get to clean code that works? Many forces drive you away from clean code, and even code that works. Without taking too much counsel of our fears, here's what we do: drive development with automated tests, a style of development called "Test-Driven Development" (TDD for short). In Test-Driven Development, you:

- Write new code only if you first have a failing automated test.
- Eliminate duplication.

Two simple rules, but they generate complex individual and group behavior. Some of the technical implications are:

- You must design organically, with running code providing feedback between decisions.
- You must write your own tests, since you can't wait twenty times a day for someone else to write a test.
- Your development environment must provide rapid response to small changes.
- Your designs must consist of many highly cohesive, loosely coupled components, just to make testing easy.

The two rules imply an order to the tasks of programming:

1. Red: write a little test that doesn't work, perhaps doesn't even compile at first.
2. Green: make the test work quickly, committing whatever sins necessary in the process.
3. Refactor: eliminate all the duplication created in just getting the test to work.

Red/green/refactor. The TDD mantra.

Assuming for the moment that such a style is possible, it might be possible to dramatically reduce the defect density of code and make the subject of work crystal clear to all involved. If so, writing only code demanded by failing tests also has social implications:

- If the defect density can be reduced enough, QA can shift from reactive to pro-active work.
- If the number of nasty surprises can be reduced enough, project managers can estimate accurately enough to involve real customers in daily development.
- If the topics of technical conversations can be made clear enough, programmers can work in minute-by-minute collaboration instead of daily or weekly collaboration.
- Again, if the defect density can be reduced enough, we can have shippable software with new functionality every day, leading to new business relationships with customers.

So, the concept is simple, but what's my motivation? Why would a programmer take on the additional work of writing automated tests? Why would a programmer work in tiny little steps when their mind is capable of great soaring swoops of design? Courage.

Courage. Test-driven development is a way of managing fear during programming. I don't mean fear in a bad way, pow widdle prwogwammew needs a pacifiew, but fear in the legitimate, this-is-a-hard-problem-and-I-can't-see-the-end-from-the-beginning sense.

If pain is nature's way of saying "Stop!", fear is nature's way of saying "Be careful." Being careful is good, but fear has a host of other effects:

- Makes you tentative
- Makes you want to communicate less
- Makes you shy from feedback
- Makes you grumpy

None of these effects are helpful when programming, especially when programming something hard. So, how can you face a difficult situation and:

- Instead of being tentative, begin learning concretely as quickly as possible.
- Instead of clamming up, communicate more clearly.
- Instead of avoiding feedback, search out helpful, concrete feedback.
- (You'll have to work on grumpiness on your own.)

Imagine programming as turning a crank to pull a bucket of water from a well. When the bucket is small, a free-spinning crank is fine. When the bucket is big and full of water, you're going to get tired before the bucket is all the way up. You need a ratchet mechanism to enable you to rest between bouts of cranking. The heavier the bucket, the closer the teeth need to be on the ratchet.

The tests in test-driven development are the teeth of the ratchet. Once you get one test working, you know it is working, now and forever. You are one step closer to having everything working than you were when the test was broken. Now get the next one working, and the next, and the next. By analogy, the tougher the programming problem, the less ground should be covered by each test.

Readers of Extreme Programming Explained will notice a difference in tone between XP and TDD. TDD isn't an absolute like Extreme Programming. XP says, "Here are things you must be able to do to be prepared to evolve further." TDD is a little fuzzier. TDD is an awareness of the gap between decision and feedback during programming, and techniques to control that gap. "What if I do a paper design for a week, then test-drive the code? Is that TDD?" Sure, it's TDD. You were aware of the gap between decision and feedback and you controlled the gap deliberately.

That said, most people who learn TDD find their programming practice changed for good. "Test Infected" is the phrase Erich Gamma coined to describe this shift. You might find yourself writing more tests earlier, and working in smaller steps than you ever dreamed would be sensible. On the other hand, some programmers learn TDD and go back to their earlier practices, reserving TDD for special occasions when ordinary programming isn't making progress.

There are certainly programming tasks that can't be driven solely by tests (or at least, not yet). Security software and concurrency, for example, are two topics where TDD is not sufficient to mechanically demonstrate that the goals of the software have been met. Security relies on essentially defect-free code, true, but also on human judgement about the methods used to secure the software. Subtle concurrency problems can't be reliably duplicated by running the code.

Once you are finished reading this book, you should be ready to:

- Start simply
- Write automated tests
- Refactor to add design decisions one at a time

This book is organized into three sections:

- An example of writing typical model code using TDD. The example is one I got from Ward Cunningham years ago, and have used many times since: multi-currency arithmetic. In it you will learn to write tests before code and grow a design organically.
- An example of testing more complicated logic, including reflection and exceptions, by developing a framework for automated testing. This example also serves to introduce you to the xUnit architecture that is at the heart of many programmer-oriented testing tools. In the second example you will learn to work in even smaller steps than in the first example, including the kind of self-referential hooha beloved of computer scientists.
- Patterns for TDD. Included are patterns for deciding what tests to write, how to write tests using xUnit, and a greatest hits selection of the design patterns and refactorings used in the examples.

I wrote the examples imagining a pair programming session. If you like looking at the map before wandering around, you may want to go straight to the patterns in Section 3 and use the examples as illustrations. If you prefer just wandering around and then looking at the map to see where you've been, try reading the examples through and referring to the patterns when you want more detail about a technique, then using the patterns as a reference. Several reviewers have commented they got the most out of the examples when they started up a programming environment and entered the code and ran the tests as they read.

A note about the examples. Both examples, multi-currency calculation and a testing framework, appear simple. There are (and I have seen) complicated, ugly, messy ways of solving the same problems. I could have chosen one of those complicated, ugly, messy solutions to give the book an air of "reality." However, my goal, and I hope your goal, is to write clean code that works. Before teeing off on the examples as being too simple, spend 15 seconds imagining a programming world in which all code was this clear and direct, where there were no complicated solutions, only apparently complicated problems begging for careful thought. TDD is a practice that can help you lead yourself to exactly that careful thought.
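The book works its multi-currency example in Java with xUnit; as a rough illustration of one red/green/refactor cycle, here is a minimal Python sketch using the standard unittest module. The Money class and its test are loosely inspired by the book's running example, not copied from it.

```python
# A sketch of one red/green/refactor cycle, loosely following the book's
# multi-currency example (the book itself works in Java with xUnit).
import unittest

class Money:
    def __init__(self, amount: int, currency: str):
        self.amount = amount
        self.currency = currency

    # "Green" step: just enough code to pass the failing test below.
    def times(self, multiplier: int) -> "Money":
        return Money(self.amount * multiplier, self.currency)

    def __eq__(self, other) -> bool:
        return (self.amount, self.currency) == (other.amount, other.currency)

class TestMoney(unittest.TestCase):
    # "Red" step: this test is written first and fails until times() exists.
    def test_multiplication(self):
        five = Money(5, "USD")
        self.assertEqual(Money(10, "USD"), five.times(2))

if __name__ == "__main__":
    unittest.main()
```

The refactor step would then remove any duplication introduced while getting to green, with the passing test acting as the ratchet tooth the preface describes.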

1,864 citations

Journal ArticleDOI
TL;DR: A review of laser-induced incandescence (LII) for combustion diagnostics. The authors consider two variants of LII: one based on pulsed-laser excitation, which has been used mainly in combustion diagnostics and emissions measurements, and an alternate approach that relies on continuous-wave lasers and has become increasingly popular for measuring black carbon in environmental applications.

300 citations

Journal ArticleDOI
TL;DR: Use of software engineering practices could increase the correctness of scientific software and the efficiency of its development.
Abstract: Context: Scientists have become increasingly reliant on software in order to perform research that is too time-intensive, expensive, or dangerous to perform physically. Because the results produced by the software drive important decisions, the software must be correct and developed efficiently. Various software engineering practices have been shown to increase correctness and efficiency in the development of traditional software. It is unclear whether these observations will hold in a scientific context. Objective: This paper evaluates claims from software engineers and scientific software developers about 12 different software engineering practices and their use in developing scientific software. Method: We performed a systematic literature review examining claims about how scientists develop software. Of the 189 papers originally identified, 43 are included in the literature review. These 43 papers contain 33 different claims about 12 software engineering practices. Results: The majority of the claims indicated that software engineering practices are useful for scientific software development. Every claim was supported by evidence (i.e., personal experience, interview/survey, or case study), with slightly over half supported by multiple forms of evidence. For those claims supported by only one type of evidence, interviews/surveys were the most common. The claims that received the most support were: "The effectiveness of the testing practices currently used by scientific software developers is limited" and "Version control software is necessary for research groups with more than one developer." Additionally, many scientific software developers have unconsciously adopted an agile-like development methodology. Conclusion: Use of software engineering practices could increase the correctness of scientific software and the efficiency of its development. While there is still potential for increased use of these practices, scientific software developers have begun to embrace software engineering practices to improve their software. Additionally, software engineering practices still need to be tailored to better fit the needs of scientific software development.

74 citations