
Showing papers on "Software" published in 2003


Journal ArticleDOI
TL;DR: DMDX is a Windows-based program designed primarily for language-processing experiments that uses the features of Pentium class CPUs and the library routines provided in DirectX to provide accurate timing and synchronization of visual and audio output.
Abstract: DMDX is a Windows-based program designed primarily for language-processing experiments. It uses the features of Pentium class CPUs and the library routines provided in DirectX to provide accurate timing and synchronization of visual and audio output. A brief overview of the design of the program is provided, together with the results of tests of the accuracy of timing. The Web site for downloading the software is given, but the source code is not available.

2,541 citations


Journal ArticleDOI
TL;DR: In this paper, the authors explore how the mundane but necessary task of field support is organized in the case of Apache web server software, and why some project participants are motivated to provide this service gratis to others.

1,364 citations


Journal ArticleDOI
TL;DR: The motives of 141 contributors to a large Open Source Software project (the Linux kernel) were explored with an Internet-based questionnaire study; activities in these teams were particularly determined by participants' evaluation of the team goals as well as by their perceived indispensability and self-efficacy.

1,338 citations


Journal ArticleDOI
TL;DR: This work describes a simple, software-based approach to operating a laser scanning microscope without the need for custom data acquisition hardware and quantifies the effectiveness of the data acquisition and signal conditioning algorithm under a variety of conditions.
Abstract: Background: Laser scanning microscopy is a powerful tool for analyzing the structure and function of biological specimens. Although numerous commercial laser scanning microscopes exist, some of the more interesting and challenging applications demand custom design. A major impediment to custom design is the difficulty of building custom data acquisition hardware and writing the complex software required to run the laser scanning microscope. Results: We describe a simple, software-based approach to operating a laser scanning microscope without the need for custom data acquisition hardware. Data acquisition and control of laser scanning are achieved through standard data acquisition boards. The entire burden of signal integration and image processing is placed on the CPU of the computer. We quantitate the effectiveness of our data acquisition and signal conditioning algorithm under a variety of conditions. We implement our approach in an open source software package (ScanImage) and describe its functionality. Conclusions: We present ScanImage, software to run a flexible laser scanning microscope that allows easy custom design.
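
As a rough illustration of the paper's central idea (the CPU, rather than custom hardware, integrates the digitized signal into image pixels), here is a minimal sketch. The uniform sample rate, unidirectional raster scan, and function name are assumptions for illustration, not ScanImage's actual code.

```python
import numpy as np

def bin_samples_to_frame(samples, lines, pixels_per_line):
    """Integrate a raw digitizer stream into an image in software: average
    the samples that fall within each pixel's dwell time. Assumes a uniform
    sample rate and a simple unidirectional raster scan."""
    per_pixel = len(samples) // (lines * pixels_per_line)
    usable = samples[: lines * pixels_per_line * per_pixel]
    # Axis 2 holds the samples belonging to one pixel; averaging it performs
    # the signal integration that custom hardware would otherwise do.
    return usable.reshape(lines, pixels_per_line, per_pixel).mean(axis=2)

# Example: a synthetic 512 x 512 frame digitized at 8 samples per pixel.
stream = np.random.rand(512 * 512 * 8)
frame = bin_samples_to_frame(stream, 512, 512)
print(frame.shape)  # (512, 512)
```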

1,223 citations


Journal ArticleDOI
TL;DR: MatGAT (Matrix Global Alignment Tool), a simple, easy to use computer application that generates similarity/identity matrices for DNA or protein sequences without needing pre-alignment of the data, is developed.
Abstract: The rapid increase in the amount of protein and DNA sequence information available has become almost overwhelming to researchers. So much information is now accessible that high-quality, functional gene analysis and categorization have become a major goal for many laboratories. To aid in this categorization, there is a need for non-commercial software that is able both to align sequences and to calculate pairwise levels of similarity/identity. We have developed MatGAT (Matrix Global Alignment Tool), a simple, easy-to-use computer application that generates similarity/identity matrices for DNA or protein sequences without needing pre-alignment of the data. The advantages of this program over other software are that it is open-source freeware, can analyze a large number of sequences simultaneously, can visualize both sequence alignment and similarity/identity values concurrently, employs global alignment in its calculations, and has been formatted to run under both the Unix and Microsoft Windows operating systems. We are presently completing the Macintosh-based version of the program.
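
MatGAT's own source is not reproduced here; the following is a from-scratch sketch of the underlying idea, computing a percent-identity matrix over Needleman-Wunsch global alignments of unaligned input. The scoring scheme and the identity-over-alignment-length convention are simplifying assumptions.

```python
import numpy as np

def global_align(a, b, match=1, mismatch=0, gap=-1):
    """Plain Needleman-Wunsch; returns the aligned pair of sequences."""
    n, m = len(a), len(b)
    score = np.zeros((n + 1, m + 1))
    score[:, 0] = np.arange(n + 1) * gap
    score[0, :] = np.arange(m + 1) * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            score[i, j] = max(score[i - 1, j - 1] + s,
                              score[i - 1, j] + gap,
                              score[i, j - 1] + gap)
    # Traceback from the bottom-right corner.
    out_a, out_b, i, j = [], [], n, m
    while i > 0 or j > 0:
        s = match if i > 0 and j > 0 and a[i - 1] == b[j - 1] else mismatch
        if i > 0 and j > 0 and score[i, j] == score[i - 1, j - 1] + s:
            out_a.append(a[i - 1]); out_b.append(b[j - 1]); i -= 1; j -= 1
        elif i > 0 and score[i, j] == score[i - 1, j] + gap:
            out_a.append(a[i - 1]); out_b.append('-'); i -= 1
        else:
            out_a.append('-'); out_b.append(b[j - 1]); j -= 1
    return ''.join(reversed(out_a)), ''.join(reversed(out_b))

def identity_matrix(seqs):
    """Pairwise percent identity measured over the global-alignment length."""
    k = len(seqs)
    mat = np.eye(k) * 100.0
    for i in range(k):
        for j in range(i + 1, k):
            x, y = global_align(seqs[i], seqs[j])
            ident = 100.0 * sum(c1 == c2 for c1, c2 in zip(x, y)) / len(x)
            mat[i, j] = mat[j, i] = ident
    return mat

print(identity_matrix(["ACGTACGT", "ACGTTCGT", "ACGAACG"]))
```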

892 citations


Journal ArticleDOI
TL;DR: Amide's a Medical Image Data Examiner (AMIDE) has been developed as a user-friendly, open-source software tool for displaying and analyzing multimodality volumetric medical images, with on-demand data reslicing implemented within the program.
Abstract: Amide's a Medical Image Data Examiner (AMIDE) has been developed as a user-friendly, open-source software tool for displaying and analyzing multimodality volumetric medical images. Central to the package's abilities to simultaneously display multiple data sets (e.g., PET, CT, MRI) and regions of interest is the on-demand data reslicing implemented within the program. Data sets can be freely shifted, rotated, viewed, and analyzed with the program automatically handling interpolation as needed from the original data. Validation has been performed by comparing the output of AMIDE with that of several existing software packages. AMIDE runs on UNIX, Macintosh OS X, and Microsoft Windows platforms, and it is freely available with source code under the terms of the GNU General Public License.
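
A minimal sketch of on-demand reslicing of a volume, in the spirit of the approach the abstract describes but not AMIDE's actual code; it assumes an isotropic voxel grid and uses SciPy's affine_transform for the interpolation.

```python
import numpy as np
from scipy.ndimage import affine_transform

def reslice(volume, rotation, shift, order=1):
    """Resample a volume on a rotated/shifted grid, interpolating on demand
    from the original voxels (order=1 gives trilinear interpolation)."""
    center = (np.array(volume.shape) - 1) / 2.0
    # affine_transform maps output coordinates to input coordinates:
    # input = rotation @ output + offset; rotate about the volume center.
    offset = center - rotation @ center + shift
    return affine_transform(volume, rotation, offset=offset, order=order)

# Example: rotate a synthetic volume 30 degrees in its first two axes.
theta = np.deg2rad(30)
rot = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                [np.sin(theta),  np.cos(theta), 0.0],
                [0.0, 0.0, 1.0]])
volume = np.random.rand(64, 64, 32)
print(reslice(volume, rot, shift=np.zeros(3)).shape)  # (64, 64, 32)
```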

891 citations


Book
01 Aug 2003
TL;DR: This book shows how to put a domain model to work, covering the building blocks of a model-driven design (entities, value objects, services, aggregates, factories, and repositories), refactoring toward deeper insight, and strategic design with bounded contexts, distillation, and large-scale structure.
Abstract: Foreword. Preface. Acknowledgments. I. PUTTING THE DOMAIN MODEL TO WORK. 1. Crunching Knowledge. Ingredients of Effective Modeling. Knowledge Crunching. Continuous Learning. Knowledge-Rich Design. Deep Models. 2. Communication and the Use of Language. UBIQUITOUS LANGUAGE. Modeling Out Loud. One Team, One Language. Documents and Diagrams. Written Design Documents. Executable Bedrock. Explanatory Models. 3. Binding Model and Implementation. MODEL-DRIVEN DESIGN. Modeling Paradigms and Tool Support. Letting the Bones Show: Why Models Matter to Users. HANDS-ON MODELERS. II. THE BUILDING BLOCKS OF A MODEL-DRIVEN DESIGN. 4. Isolating the Domain. LAYERED ARCHITECTURE. Relating the Layers. Architectural Frameworks. The Domain Layer Is Where the Model Lives. THE SMART UI "ANTI-PATTERN" Other Kinds of Isolation. 5. A Model Expressed in Software. Associations. ENTITIES (A.K.A. REFERENCE OBJECTS). Modeling ENTITIES. Designing the Identity Operation. VALUE OBJECTS. Designing VALUE OBJECTS. Designing Associations That Involve VALUE OBJECTS. SERVICES. SERVICES and the Isolated Domain Layer. Granularity. Access to SERVICES. MODULES (A.K.A. PACKAGES). Agile MODULES. The Pitfalls of Infrastructure-Driven Packaging. Modeling Paradigms. Why the Object Paradigm Predominates. Nonobjects in an Object World. Sticking with MODEL-DRIVEN DESIGN When Mixing Paradigms. 6. The Life Cycle of a Domain Object. AGGREGATES. FACTORIES. Choosing FACTORIES and Their Sites. When a Constructor Is All You Need. Designing the Interface. Where Does Invariant Logic Go? ENTITY FACTORIES Versus VALUE OBJECT FACTORIES. Reconstituting Stored Objects. REPOSITORIES. Querying a REPOSITORY. Client Code Ignores REPOSITORY Implementation Developers Do Not. Implementing a REPOSITORY. Working Within Your Frameworks. The Relationship with FACTORIES. Designing Objects for Relational Databases. 7. Using the Language: An Extended Example. Introducing the Cargo Shipping System. Isolating the Domain: Introducing the Applications. Distinguishing ENTITIES and VALUE OBJECTS. Role and Other Attributes. Designing Associations in the Shipping Domain. AGGREGATE Boundaries. Selecting REPOSITORIES. Walking Through Scenarios. Sample Application Feature: Changing the Destination of a Cargo. Sample Application Feature: Repeat Business. Object Creation. FACTORIES and Constructors for Cargo. Adding a Handling Event. Pause for Refactoring: An Alternative Design of the Cargo AGGREGATE. MODULES in the Shipping Model. Introducing a New Feature: Allocation Checking. Connecting the Two Systems. Enhancing the Model: Segmenting the Business. Performance Tuning. A Final Look. III. REFACTORING TOWARD DEEPER INSIGHT. 8. Breakthrough. Story of a Breakthrough. A Decent Model, and Yet... The Breakthrough. A Deeper Model. A Sobering Decision. The Payoff. Opportunities. Focus on Basics. Epilogue: A Cascade of New Insights. 9. Making Implicit Concepts Explicit. Digging Out Concepts. Listen to Language. Scrutinize Awkwardness. Contemplate Contradictions. Read the Book. Try, Try Again. How to Model Less Obvious Kinds of Concepts. Explicit Constraints. Processes as Domain Objects. SPECIFICATION Applying and Implementing SPECIFICATION. 10. Supple Design. INTENTION-REVEALING INTERFACES. SIDE-EFFECT-FREE FUNCTIONS. ASSERTIONS. CONCEPTUAL CONTOURS. STANDALONE CLASSES. CLOSURE OF OPERATIONS. DECLARATIVE DESIGN. Domain-Specific Languages. A Declarative Style of Design. Extending SPECIFICATIONS in a Declarative Style. Angles of Attack. Carve Off Subdomains. 
Draw on Established Formalisms, When You Can. 11. Applying Analysis Patterns. 12. Relating Design Patterns to the Model. STRATEGY (A.K.A. POLICY). COMPOSITE. Why Not FLYWEIGHT? 13. Refactoring Toward Deeper Insight. Initiation. Exploration Teams. Prior Art. A Design for Developers. Timing. Crisis as Opportunity. IV. STRATEGIC DESIGN. 14. Maintaining Model Integrity. BOUNDED CONTEXT. Recognizing Splinters Within a BOUNDED CONTEXT. CONTINUOUS INTEGRATION. CONTEXT MAP. Testing at the CONTEXT Boundaries. Organizing and Documenting CONTEXT MAPS. Relationships Between BOUNDED CONTEXTS. SHARED KERNEL. CUSTOMER/SUPPLIER DEVELOPMENT TEAMS. CONFORMIST. ANTICORRUPTION LAYER. Designing the Interface of the ANTICORRUPTION LAYER. Implementing the ANTICORRUPTION LAYER. A Cautionary Tale. SEPARATE WAYS. OPEN HOST SERVICE. PUBLISHED LANGUAGE. Unifying an Elephant. Choosing Your Model Context Strategy. Team Decision or Higher. Putting Ourselves in Context. Transforming Boundaries. Accepting That Which We Cannot Change: Delineating the External Systems. Relationships with the External Systems. The System Under Design. Catering to Special Needs with Distinct Models. Deployment. The Trade-off. When Your Project Is Already Under Way. Transformations. Merging CONTEXTS: SEPARATE WAYS-SHARED KERNEL. Merging CONTEXTS: SHARED KERNEL-CONTINUOUS INTEGRATION. Phasing Out a Legacy System. OPEN HOST SERVICE-PUBLISHED LANGUAGE. 15. Distillation. CORE DOMAIN. Choosing the CORE. Who Does the Work? An Escalation of Distillations. GENERIC SUBDOMAINS. Generic Doesn't Mean Reusable. Project Risk Management. DOMAIN VISION STATEMENT. HIGHLIGHTED CORE. The Distillation Document. The Flagged CORE. The Distillation Document as Process Tool. COHESIVE MECHANISMS. GENERIC SUBDOMAIN Versus COHESIVE MECHANISM. When a MECHANISM Is Part of the CORE DOMAIN. Distilling to a Declarative Style. SEGREGATED CORE. The Costs of Creating a SEGREGATED CORE. Evolving Team Decision. ABSTRACT CORE. Deep Models Distill. Choosing Refactoring Targets. 16. Large-Scale Structure. EVOLVING ORDER. SYSTEM METAPHOR. The "Naive Metaphor" and Why We Don't Need It. RESPONSIBILITY LAYERS. Choosing Appropriate Layers. KNOWLEDGE LEVEL. PLUGGABLE COMPONENT FRAMEWORK. How Restrictive Should a Structure Be? Refactoring Toward a Fitting Structure. Minimalism. Communication and Self-Discipline. Restructuring Yields Supple Design. Distillation Lightens the Load. 17. Bringing the Strategy Together. Combining Large-Scale Structures and BOUNDED CONTEXTS. Combining Large-Scale Structures and Distillation. Assessment First. Who Sets the Strategy? Emergent Structure from Application Development. A Customer-Focused Architecture Team. Six Essentials for Strategic Design Decision Making. The Same Goes for the Technical Frameworks. Beware the Master Plan. Conclusion. Appendix: The Use of Patterns in This Book. Glossary. References. Photo Credits. Index.

885 citations


Journal ArticleDOI
TL;DR: CERR provides a powerful, convenient, and common framework which allows researchers to use common patient data sets, and compare and share research results.
Abstract: A software environment is described, called the computational environment for radiotherapy research (CERR, pronounced "sir"). CERR partially addresses four broad needs in treatment planning research: (a) it provides a convenient and powerful software environment to develop and prototype treatment planning concepts; (b) it serves as a software integration environment to combine treatment planning software written in multiple languages (MATLAB, FORTRAN, C/C++, JAVA, etc.) together with treatment plan information (computed tomography scans, outlined structures, dose distributions, digital films, etc.); (c) it provides the ability to extract treatment plans from disparate planning systems using the widely available AAPM/RTOG archiving mechanism; and (d) it provides a convenient and powerful tool for sharing and reproducing treatment planning research results. The functional components currently being distributed, including source code, include: (1) an import program which converts the widely available AAPM/RTOG treatment planning format into a MATLAB cell-array data object, facilitating manipulation; (2) viewers which display axial, coronal, and sagittal computed tomography images, structure contours, digital films, and isodose lines or dose colorwash; (3) a suite of contouring tools to edit and/or create anatomical structures; (4) dose-volume and dose-surface histogram calculation and display tools; and (5) various predefined commands. CERR allows the user to retrieve any AAPM/RTOG key word information about the treatment plan archive. The code is relatively self-describing because it relies on MATLAB structure field name definitions based on the AAPM/RTOG standard. New structure field names can be added dynamically or permanently. New components of arbitrary data type can be stored and accessed without disturbing system operation. CERR has been applied to aid research in dose-volume-outcome modeling, Monte Carlo dose calculation, and treatment planning optimization. In summary, CERR provides a powerful, convenient, and common framework which allows researchers to use common patient data sets and to compare and share research results.
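
CERR itself is written in MATLAB; purely to illustrate the self-describing, dynamically extensible plan-archive idea, here is a hypothetical Python analogue. All field names below are invented placeholders, not the AAPM/RTOG-derived names CERR actually uses.

```python
# Illustrative dict-backed analogue of a self-describing plan archive.
plan_archive = {
    "scan":       [{"scan_array": None, "scan_info": {"grid_units": "mm"}}],
    "structures": [{"structure_name": "GTV", "contour_points": []}],
    "dose":       [{"dose_array": None, "fraction_group_id": "plan1"}],
}

# New fields of arbitrary type can be added dynamically without disturbing
# code that only reads the fields it already knows about.
plan_archive["monte_carlo"] = {"histories": 1_000_000}

# The archive is self-describing: its keys name their own contents.
for key, value in plan_archive.items():
    print(key, type(value).__name__)
```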

856 citations


Journal ArticleDOI
TL;DR: In this paper, the authors developed an inductive theory of the open source software innovation process by focusing on the creation of Freenet, a project aimed at developing a decentralized and anonymous peer-to-peer electronic file sharing network.

834 citations


Journal ArticleDOI
TL;DR: Responding to the Internet and open source systems, three traditional vendors of proprietary platforms experimented with hybrid strategies which attempted to combine the advantages of open source software while retaining control and differentiation.

788 citations



Journal ArticleDOI
TL;DR: Empirical evidence is provided supporting the role of OO design complexity metrics, specifically a subset of the Chidamber and Kemerer (1991, 1994) suite (CK metrics), in determining software defects, indicating that these metrics are significantly associated with defects.
Abstract: To produce high quality object-oriented (OO) applications, a strong emphasis on design aspects, especially during the early phases of software development, is necessary. Design metrics play an important role in helping developers understand design aspects of software and, hence, improve software quality and developer productivity. In this paper, we provide empirical evidence supporting the role of OO design complexity metrics, specifically a subset of the Chidamber and Kemerer (1991, 1994) suite (CK metrics), in determining software defects. Our results, based on industry data from software developed in two popular programming languages used in OO development, indicate that, even after controlling for the size of the software, these metrics are significantly associated with defects. In addition, we find that the effects of these metrics on defects vary across the samples from two programming languages-C++ and Java. We believe that these results have significant implications for designing high-quality software products using the OO approach.
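
The paper's exact estimation procedure is not reproduced here; the sketch below only shows the general shape of such an analysis, regressing a defect indicator on CK-style metrics while controlling for size, on synthetic data invented for the example.

```python
import numpy as np
import statsmodels.api as sm

# Synthetic class-level data standing in for the paper's industry data:
# WMC and CBO are CK metrics; LOC is the size control; defect is the outcome.
rng = np.random.default_rng(0)
n = 200
wmc = rng.poisson(10, n)
cbo = rng.poisson(5, n)
loc = rng.poisson(300, n)
p = 1 / (1 + np.exp(-(0.1 * wmc + 0.2 * cbo - 3)))
defect = (rng.random(n) < p).astype(int)

# Logistic regression of defect-proneness on the metrics, with size included
# so any metric effect holds "even after controlling for size".
X = sm.add_constant(np.column_stack([wmc, cbo, loc]))
result = sm.Logit(defect, X).fit(disp=0)
print(result.summary(xname=["const", "WMC", "CBO", "LOC"]))
```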

Journal ArticleDOI
Anton Cervin, Dan Henriksson, Bo Lincoln, Johan Eker, Karl-Erik Årzén
TL;DR: Jitterbug and TrueTime are described, which provide a simple and efficient way to analyze and simulate how timing affects control performance in systems with limited computer resources.
Abstract: To achieve good performance in systems with limited computer resources, the constraints of the implementation platform must be taken into account at design time. To facilitate this, software tools are needed to analyze and simulate how timing affects control performance. This article describes two such tools: Jitterbug and TrueTime.

Proceedings ArticleDOI
22 Sep 2003
TL;DR: An approach is introduced for populating a release history database that combines version data with bug tracking data and adds missing data not covered by version control systems such as merge points to obtain meaningful views showing the evolution of a software project.
Abstract: Version control and bug tracking systems contain large amounts of historical information that can give deep insight into the evolution of a software project. Unfortunately, these systems provide only insufficient support for a detailed analysis of software evolution aspects. We address this problem and introduce an approach for populating a release history database that combines version data with bug tracking data and adds missing data not covered by version control systems, such as merge points. Simple queries can then be applied to the structured data to obtain meaningful views showing the evolution of a software project. Such views enable more accurate reasoning about evolutionary aspects and facilitate the anticipation of software evolution. We demonstrate our approach on the large open source project Mozilla, which offers great opportunities to compare results and validate our approach.
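
A toy sketch of the populating step, assuming bug references appear in commit messages in the form "bug NNNN"; the table names and schema here are invented for illustration, not the authors' actual database design.

```python
import re
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE revisions(file TEXT, rev TEXT, author TEXT, message TEXT);
    CREATE TABLE bugs(id INTEGER PRIMARY KEY, severity TEXT);
    CREATE TABLE rev_bug(rev TEXT, bug_id INTEGER);
""")
conn.execute("INSERT INTO revisions VALUES"
             "('nsHttp.cpp', '1.42', 'jo', 'fix for bug 1234: crash')")
conn.execute("INSERT INTO bugs VALUES(1234, 'critical')")

# Link version data to bug-tracking data by mining bug ids out of log messages.
for rev, msg in conn.execute("SELECT rev, message FROM revisions").fetchall():
    for bug_id in re.findall(r"bug\s+(\d+)", msg, re.IGNORECASE):
        conn.execute("INSERT INTO rev_bug VALUES(?, ?)", (rev, int(bug_id)))

# A "simple query on the structured data": which files were touched by
# revisions that fixed critical bugs?
print(conn.execute("""
    SELECT DISTINCT r.file FROM revisions r
    JOIN rev_bug rb ON rb.rev = r.rev
    JOIN bugs b ON b.id = rb.bug_id
    WHERE b.severity = 'critical'
""").fetchall())
```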

Journal ArticleDOI
TL;DR: The software as a service model composes services dynamically, as needed, by binding several lower-level services-thus overcoming many limitations that constrain traditional software use, deployment, and evolution.
Abstract: The software as a service model composes services dynamically, as needed, by binding several lower-level services-thus overcoming many limitations that constrain traditional software use, deployment, and evolution.

Journal ArticleDOI
TL;DR: Based on a metamodel with formal semantics that developers can use to capture designs, Metropolis provides an environment for complex electronic-system design that supports simulation, formal analysis, and synthesis.
Abstract: Today, the design chain lacks adequate support, with most system-level designers using a collection of unlinked tools. The implementation then proceeds with informal techniques involving numerous human-language interactions that create unnecessary and unwanted iterations among groups of designers in different companies or different divisions. The move toward programmable platforms shifts the design implementation task toward embedded software design. When embedded software reaches the complexity typical of today's designs, the risk that the software will not function correctly increases exponentially. The Metropolis project seeks to develop a unified framework that can cope with this challenge. Based on a metamodel with formal semantics that developers can use to capture designs, Metropolis provides an environment for complex electronic-system design that supports simulation, formal analysis, and synthesis.

Journal ArticleDOI
TL;DR: The development and empirical validation of a model of software piracy by individuals in the workplace indicates that individual attitudes, subjective norms, and perceived behavioral control are significant precursors to the intention to illegally copy software.
Abstract: Theft of software and other intellectual property has become one of the most visible problems in computing today. This paper details the development and empirical validation of a model of software piracy by individuals in the workplace. The model was developed from the results of prior research into software piracy, and the reference disciplines of the theory of planned behavior, expected utility theory, and deterrence theory. A survey of 201 respondents was used to test the model. The results indicate that individual attitudes, subjective norms, and perceived behavioral control are significant precursors to the intention to illegally copy software. In addition, punishment severity, punishment certainty, and software cost have direct effects on the individual's attitude toward software piracy, whereas punishment certainty has a significant effect on perceived behavioral control. Consequently, strategies to reduce software piracy should focus on these factors. The results add to a growing stream of information systems research into illegal software copying behavior and have significant implications for organizations and industry groups aiming to reduce software piracy.

Journal Article
TL;DR: A Microsoft Excel macro called MapDraw is constructed to draw genetic linkage maps on PC computers from given genetic linkage data.
Abstract: MAPMAKER is one of the most widely used software packages for constructing genetic linkage maps. However, the PC version, MAPMAKER 3.0 for PC, cannot draw the genetic linkage maps that its Macintosh version, MAPMAKER 3.0 for Macintosh, can. Especially in recent years, the Macintosh has become much less popular than the PC, and most geneticists use PCs to analyze their genetic linkage data, so software that draws on the PC the same genetic linkage maps that MAPMAKER for Macintosh draws on the Macintosh has been sorely needed. Microsoft Excel, one component of the Microsoft Office package, is among the most popular software for laboratory data processing, and Microsoft Visual Basic for Applications (VBA) is one of Excel's most powerful features. Using this programming language, we can take creative control of Excel, including genetic linkage map construction, automatic data processing, and more. In this paper, a Microsoft Excel macro called MapDraw is constructed to draw genetic linkage maps on PC computers from given genetic linkage data. With this software, you can freely construct attractive genetic linkage maps in Excel and freely edit and copy them into Word or other applications. The software is simply an Excel-format file. You can freely copy it from ftp://211.69.140.177 or ftp://brassica.hzau.edu.cn, and the source code can be found in Excel's Visual Basic Editor.
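
MapDraw itself is an Excel/VBA macro; for illustration only, here is a hypothetical Python/matplotlib sketch of the same kind of drawing: a chromosome bar with marker names and centimorgan positions. The marker data are invented.

```python
import matplotlib.pyplot as plt

# Hypothetical linkage data: marker name -> map position in centimorgans.
markers = {"RM1": 0.0, "RM220": 12.4, "RM81": 27.9, "RM259": 41.3}

fig, ax = plt.subplots(figsize=(2, 6))
bottom = max(markers.values())
ax.plot([0, 0], [0, bottom], lw=8, color="0.75", solid_capstyle="round")
for name, pos in markers.items():
    ax.plot([-0.05, 0.05], [pos, pos], color="k")               # locus tick
    ax.text(-0.12, pos, f"{pos:.1f}", ha="right", va="center")  # cM scale
    ax.text(0.12, pos, name, ha="left", va="center")            # marker name
ax.set_xlim(-0.6, 0.6)
ax.invert_yaxis()  # linkage maps conventionally run top-down
ax.set_axis_off()
plt.savefig("linkage_map.png", dpi=150, bbox_inches="tight")
```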

Journal ArticleDOI
TL;DR: This software has a Windows-compatible mouse-driven graphical interface which gives full control over all structural elements and provides the user with tools to construct topological networks, visualize interpenetrating or overlapping fragments, and analyse networks constructed fully or partially by exploiting short interactions.
Abstract: We have developed new software (OLEX) for the visualization and analysis of extended crystal structures. This software has a Windows-compatible mouse-driven graphical interface which gives full control over all structural elements. OLEX provides the user with tools to construct topological networks, visualize interpenetrating or overlapping fragments, and analyse networks constructed fully or partially by exploiting short interactions. It is also easy to generate conventional ellipsoid, ball-and-stick or packing plots.

Proceedings ArticleDOI
13 Nov 2003
TL;DR: A new approach is described to reverse engineer a model, represented as structures called a GUI forest, event-flow graphs, and an integration tree, directly from the executable GUI; the approach requires very little human intervention and is especially useful for regression testing of software that is modified frequently.
Abstract: Graphical user interfaces (GUIs) are important parts of today's software, and their correct execution is required to ensure the correctness of the overall software. A popular technique to detect defects in GUIs is to test them by executing test cases and checking the execution results. Test cases may either be created manually or generated automatically from a model of the GUI. While manual testing is unacceptably slow for many applications, our experience with GUI testing has shown that creating a model that can be used for automated test case generation is difficult. We describe a new approach to reverse engineer a model represented as structures called a GUI forest, event-flow graphs, and an integration tree directly from the executable GUI. We describe "GUI Ripping", a dynamic process in which the software's GUI is automatically "traversed" by opening all its windows and extracting all their widgets (GUI objects), properties, and values. The extracted information is then verified by the test designer and used to automatically generate test cases. We present algorithms for the ripping process and describe their implementation in a tool suite that operates on Java and Microsoft Windows GUIs. We present results of case studies which show that our approach requires very little human intervention and is especially useful for regression testing of software that is modified frequently. We have successfully used the "GUI Ripper" in several large experiments and have made it available as a downloadable tool.
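
A schematic sketch of the ripping traversal over a hypothetical widget API; the real tool drives Java and Windows GUIs, and the Widget class below is invented purely for illustration.

```python
from collections import deque

class Widget:
    """Stand-in for a live GUI object exposing its properties and structure."""
    def __init__(self, kind, props=None, children=(), opens=()):
        self.kind, self.props = kind, props or {}
        self.children = list(children)  # widgets contained in this one
        self.opens = list(opens)        # windows this widget opens when executed

def rip(root):
    """Breadth-first 'ripping' traversal: visit every reachable window and
    record each widget's type and properties into a flat forest-like list."""
    forest, queue, seen = [], deque([root]), set()
    while queue:
        w = queue.popleft()
        if id(w) in seen:
            continue
        seen.add(id(w))
        forest.append((w.kind, w.props))
        queue.extend(w.children)
        queue.extend(w.opens)  # "execute" the widget to reach new windows
    return forest

ok = Widget("button", {"label": "OK"})
dialog = Widget("window", {"title": "Save"}, children=[ok])
menu = Widget("menuitem", {"label": "Save As..."}, opens=[dialog])
main = Widget("window", {"title": "Main"}, children=[menu])
print(rip(main))
```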

Journal ArticleDOI
TL;DR: The concept of a polymetric view is presented, a lightweight software visualization technique enriched with software metrics information that helps to understand the structure and detect problems of a software system in the initial phases of a reverse engineering process.
Abstract: Reverse engineering software systems has become a major concern in software industry because of their sheer size and complexity. This problem needs to be tackled since the systems in question are of considerable worth to their owners and maintainers. In this article, we present the concept of a polymetric view, a lightweight software visualization technique enriched with software metrics information. Polymetric views help to understand the structure and detect problems of a software system in the initial phases of a reverse engineering process. We discuss the benefits and limits of several predefined polymetric views we have implemented in our tool CodeCrawler. Moreover, based on clusters of different polymetric views, we have developed a methodology which supports and guides a software engineer in the first phases of a reverse engineering of a large software system. We have refined this methodology by repeatedly applying it on industrial systems and illustrate it by applying a selection of polymetric views to a case study.
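
A toy rendering of the polymetric idea, mapping metrics onto a node's width, height, and shade; the specific metric-to-dimension mapping and the class data below are assumptions for illustration, not CodeCrawler's implementation.

```python
import matplotlib.pyplot as plt
from matplotlib.patches import Rectangle

# Hypothetical class-level metrics: (name, #attributes, #methods, lines of code).
classes = [("Parser", 4, 18, 420), ("Token", 2, 5, 60), ("AST", 9, 30, 900)]

fig, ax = plt.subplots()
max_loc = max(loc for *_, loc in classes)
for i, (name, noa, nom, loc) in enumerate(classes):
    # Polymetric mapping: width = attributes, height = methods, shade = LOC
    # (darker means more code), so outliers stand out at a glance.
    shade = str(round(1 - loc / max_loc, 2))
    ax.add_patch(Rectangle((i * 14, 0), noa, nom, facecolor=shade, edgecolor="k"))
    ax.text(i * 14, -3, name)
ax.set_xlim(-2, 44)
ax.set_ylim(-6, 35)
ax.set_axis_off()
plt.savefig("polymetric_view.png", dpi=150)
```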

Journal ArticleDOI
TL;DR: New algorithms for test-suite reduction and prioritization that can be tailored effectively for use with modified condition/decision coverage (MC/DC)-adequate test suites are presented.
Abstract: Software testing is particularly expensive for developers of high-assurance software, such as software that is produced for commercial airborne systems. One reason for this expense is the Federal Aviation Administration's requirement that test suites be modified condition/decision coverage (MC/DC) adequate. Despite its cost, there is evidence that MC/DC is an effective verification technique and can help to uncover safety faults. As the software is modified and new test cases are added to the test suite, the test suite grows and the cost of regression testing increases. To address the test-suite size problem, researchers have investigated the use of test-suite reduction algorithms, which identify a reduced test suite that provides the same coverage of the software according to some criterion as the original test suite, and test-suite prioritization algorithms, which identify an ordering of the test cases in the test suite according to some criteria or goals. Existing test-suite reduction and prioritization techniques, however, may not be effective in reducing or prioritizing MC/DC-adequate test suites because they do not consider the complexity of the criterion. This paper presents new algorithms for test-suite reduction and prioritization that can be tailored effectively for use with MC/DC. The paper also presents the results of empirical studies of these algorithms.
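
The paper's MC/DC-tailored algorithms are not reproduced here; the sketch below shows only the classic greedy reduction heuristic that such work builds on, with coverage requirements abstracted to opaque labels.

```python
def greedy_reduce(test_coverage):
    """Greedy test-suite reduction: repeatedly keep the test that covers the
    most not-yet-covered requirements, until everything coverable is covered."""
    remaining = set().union(*test_coverage.values())
    kept = []
    while remaining:
        best = max(test_coverage, key=lambda t: len(test_coverage[t] & remaining))
        if not test_coverage[best] & remaining:
            break  # the leftover requirements are uncoverable by this suite
        kept.append(best)
        remaining -= test_coverage[best]
    return kept

# Example: tests mapped to the coverage requirements they satisfy.
suite = {"t1": {"r1", "r2"}, "t2": {"r2", "r3"}, "t3": {"r1", "r2", "r3", "r4"}}
print(greedy_reduce(suite))  # ['t3'] suffices here
```

The same loop, run to exhaustion instead of stopping at full coverage, yields a prioritized ordering of the whole suite; the paper's contribution is making such heuristics sensitive to the complexity of the MC/DC criterion.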

Book
01 Jan 2003
TL;DR: This book surveys parallel computing for scientific applications, covering parallel architectures and programming, applications in areas such as CFD, environment and energy, and computational chemistry, software technologies and problem-solving environments, and enabling technologies and algorithms.
Abstract: I. Parallelism 1. Introduction 2. Parallel Computer Architectures 3. Parallel Programming Considerations II. Applications 4. General Application Issues 5. Parallel Computing in CFD 6. Parallel Computing in Environment and Energy 7. Parallel Computational Chemistry 8. Application Overviews III. Software technologies 9. Software Technologies 10. Message Passing and Threads 11. Parallel I/O 12. Languages and Compilers 13. Parallel Object-Oriented Libraries 14. Problem-Solving Environments 15. Tools for Performance Tuning and Debugging 16. The 2-D Poisson Problem IV. Enabling Technologies and Algorithms 17. Reusable Software and Algorithms 18. Graph Partitioning for Scientific Simulations 19. Mesh Generation 20. Templates and Numerical Linear Algebra 21. Software for the Scalable Solutions of PDEs 22. Parallel Continuous Optimization 23. Path Following in Scientific Computing 24. Automatic Differentiation V. Conclusion 25. Wrap-up and Features

Book
01 Jan 2003
TL;DR: Embedded System Design can be used as a text book for courses on embedded systems and as a source which provides pointers to relevant material in the area for PhD students and teachers.
Abstract: Until the late eighties, information processing was associated with large mainframe computers and huge tape drives. During the nineties, this trend shifted toward information processing with personal computers, or PCs. The trend toward miniaturization continues. In the future, most information processing systems will be quite small and embedded into larger products such as transportation and fabrication equipment. Hence, these kinds of systems are called embedded systems. It is expected that the total market volume of embedded systems will be significantly larger than that of traditional information processing systems such as PCs and mainframes. Embedded systems share a number of common characteristics. For example, they must be dependable and efficient, meet real-time constraints, and require customized user interfaces (instead of generic keyboard and mouse interfaces). Therefore, it makes sense to consider common principles of embedded system design. Embedded System Design starts with an introduction to the area and a survey of specification languages for embedded systems. A brief overview of hardware devices used for embedded systems is given, and the essentials of software design for embedded systems are presented. Real-time operating systems and real-time scheduling are covered briefly. Techniques for implementing embedded systems using hardware/software codesign are also discussed. The book closes with a survey of validation techniques. Embedded System Design can be used as a textbook for courses on embedded systems and as a source of pointers to relevant material in the area for PhD students and teachers. The book assumes a basic knowledge of information processing hardware and software.

Proceedings ArticleDOI
03 May 2003
TL;DR: Initial results are presented suggesting that heuristic search techniques are more effective than some of the known greedy methods for finding smaller test suites for software interaction testing.
Abstract: Software system faults are often caused by unexpected interactions among components. Yet the size of a test suite required to test all possible combinations of interactions can be prohibitive in even a moderately sized project. Instead, we may use pairwise or t-way testing to provide a guarantee that all pairs or t-way combinations of components are tested together. This concept draws on methods used in statistical testing for manufacturing and has been extended to software system testing. A covering array, CA(N; t, k, v), is an N × k array on v symbols such that every N × t sub-array contains all ordered subsets of size t from the v symbols at least once. The properties of these objects, however, do not necessarily satisfy real software testing needs. Instead, we examine a less-studied object, the mixed-level covering array, and propose a new object, the variable-strength covering array, which provides a more robust environment for software interaction testing. Initial results are presented suggesting that heuristic search techniques are more effective than some of the known greedy methods for finding smaller test suites. We present a discussion of an integrated approach for finding covering arrays and discuss how these techniques can be applied to construct variable-strength arrays.
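
As a concrete check of the definition, here is a small verifier for t-way coverage; the example shows that 5 tests cover all pairs of 4 binary factors, versus 2^4 = 16 exhaustive combinations. This illustrates the definition only, not the authors' heuristic search technique.

```python
from itertools import combinations, product

def covers_t_way(tests, t=2):
    """Check that every t-way combination of the observed factor levels
    appears in at least one test (the property a covering array guarantees;
    levels are inferred per factor, so mixed-level suites work too)."""
    k = len(tests[0])
    levels = [sorted({row[f] for row in tests}) for f in range(k)]
    for factors in combinations(range(k), t):
        needed = set(product(*(levels[f] for f in factors)))
        seen = {tuple(row[f] for f in factors) for row in tests}
        if needed - seen:
            return False
    return True

# Five tests cover all pairs of four binary factors.
suite = [(0, 0, 0, 0), (0, 1, 1, 1), (1, 0, 1, 1), (1, 1, 0, 1), (1, 1, 1, 0)]
print(covers_t_way(suite, t=2))  # True
```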

Patent
16 Dec 2003
TL;DR: In this article, the system produces a unique tag for every instance of software, and each user device runs a supervising program that ensures, by use of the tag, that no software instance will be used infringing on the software owner's rights.
Abstract: Methods and apparatus to enable owners and vendors of software to protect intellectual property and to charge per-use. The system produces a unique tag for every instance of software. Each user device runs a supervising program that ensures, by use of the tag, that no software instance will be used infringing on the software owner's rights. When installing or using a software instance, the supervising program verifies the associated tag and stores the tag. When installing or using untagged software, the supervising program fingerprints selected portions of the software and stores the fingerprints. A user device's supervising program periodically calls up, or is called up by a guardian center. The guardian center detects unauthorized use of software by comparison of current call-up data with records of past call-ups. The guardian center completes the call-up by enabling or disabling continued use of the monitored software instances.
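
The patent describes tags and fingerprints abstractly; as a loose illustration only, the sketch below uses an HMAC as a stand-in tagging scheme for the supervising program's verify-and-record step. All names and the key are invented.

```python
import hashlib
import hmac

# Illustrative stand-in: the patent does not specify HMAC; any scheme that
# makes tags unforgeable by users would serve the same role.
VENDOR_KEY = b"vendor-secret"

def make_tag(software_id, instance_no):
    """Produce a unique, verifiable tag for one instance of a software title."""
    msg = f"{software_id}:{instance_no}".encode()
    return hmac.new(VENDOR_KEY, msg, hashlib.sha256).hexdigest()

def supervising_check(software_id, instance_no, tag, usage_log):
    """Sketch of the supervising program: verify the tag, then record the use
    so a later guardian-center call-up can compare it against past call-ups."""
    expected = make_tag(software_id, instance_no)
    if not hmac.compare_digest(expected, tag):
        return False  # disable use of an improperly tagged instance
    usage_log.append((software_id, instance_no))
    return True

log = []
tag = make_tag("editor-pro", 42)
print(supervising_check("editor-pro", 42, tag, log))  # True
print(supervising_check("editor-pro", 43, tag, log))  # False: tag mismatch
```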

Journal ArticleDOI
TL;DR: Analyzing data from multiple sources on the Freenet software development process, the constructs of "joining script", "specialization", "contribution barriers", and "feature gifts" are generated and relationships among these are proposed.
Abstract: This paper develops an inductive theory of the open source software innovation process by focusing on the creation of Freenet, a project aimed at developing a decentralized and anonymous peer-to-peer electronic file sharing network. We are particularly interested in the strategies and processes by which new people join the existing community of software developers and how they initially contribute code. Analyzing data from multiple sources on the Freenet software development process, we generate the constructs of "joining script", "specialization", "contribution barriers", and "feature gifts", and propose relationships among these. Implications for theory and research are discussed.

Journal ArticleDOI
TL;DR: The results confirm the widely held belief that SEs typically do not update documentation as timely or completely as software process personnel and managers advocate, however, the results also reveal that out-of-date software documentation remains useful in many circumstances.
Abstract: Software engineering is a human task, and as such we must study what software engineers do and think. Understanding the normative practice of software engineering is the first step toward developing realistic solutions to better facilitate the engineering process. We conducted three studies using several data-gathering approaches to elucidate the patterns by which software engineers (SEs) use and update documentation. Our objective is to more accurately comprehend and model documentation use, usefulness, and maintenance, thus enabling better decision making and tool design by developers and project managers. Our results confirm the widely held belief that SEs typically do not update documentation as timely or completely as software process personnel and managers advocate. However, the results also reveal that out-of-date software documentation remains useful in many circumstances.

BookDOI
01 Sep 2003
TL;DR: This book surveys graph drawing and visualization software, including WilmaScope, Pajek, Tulip, Graphviz and Dynagraph, AGD, yFiles, a graph drawing server on the Internet, and tools for visualizing biochemical pathways, database schemas, UML class diagrams, statecharts, social networks, computer networks, and large object-oriented programs.
Abstract: Technical Foundations: Introduction; Graphs and Their Representation; Graph Planarity and Embeddings; Graph Drawing Methods. Tool chapters (each covering Introduction, Applications, Algorithms, Implementation, Examples, Software, and References): WilmaScope - A 3D Graph Visualization System; Pajek - Analysis and Visualization of Large Networks; Tulip - A Huge Graph Visualization Framework; Graphviz and Dynagraph - Static and Dynamic Graph Drawing Tools; AGD - A Library of Algorithms for Graph Drawing; yFiles - Visualization and Automatic Layout of Graphs; GDS - A Graph Drawing Server on the Internet; BioPath - Exploration and Visualization of Biochemical Pathways; DBdraw - Automatic Layout of Relational Database Schemas; GoVisual - A Diagramming Software for UML Class Diagrams; CrocoCosmos - 3D Visualization of Large Object-oriented Programs; ViSta - Visualizing Statecharts; visone - Analysis and Visualization of Social Networks; Polyphemus and Hermes - Exploration and Visualization of Computer Networks.

ReportDOI
01 Aug 2003
TL;DR: The Trilinos Project is an effort to develop parallel solver algorithms and libraries within an object-oriented software framework for the solution of large-scale, complex multi-physics engineering and scientific applications.
Abstract: The Trilinos Project is an effort to facilitate the design, development, integration and ongoing support of mathematical software libraries. In particular, our goal is to develop parallel solver algorithms and libraries within an object-oriented software framework for the solution of large-scale, complex multi-physics engineering and scientific applications. Our emphasis is on developing robust, scalable algorithms in a software framework, using abstract interfaces for flexible interoperability of components while providing a full-featured set of concrete classes that implement all abstract interfaces. Trilinos uses a two-level software structure designed around collections of packages. A Trilinos package is an integral unit usually developed by a small team of experts in a particular algorithms area such as algebraic preconditioners, nonlinear solvers, etc. Packages exist underneath the Trilinos top level, which provides a common look-and-feel, including configuration, documentation, licensing, and bug-tracking. Trilinos packages are primarily written in C++, but provide some C and Fortran user interface support. We provide an open architecture that allows easy integration with other solver packages and we deliver our software to the outside community via the Gnu Lesser General Public License (LGPL). This report provides an overview of Trilinos, discussing the objectives, history, current development and future plans of the project.