
Showing papers in "Software - Practice and Experience in 2001"


Journal ArticleDOI
TL;DR: To ease large‐scale realization of agent applications there is an urgent need for frameworks, methodologies and toolkits that support the effective development of agent systems.
Abstract: To ease large-scale realization of agent applications there is an urgent need for frameworks, methodologies and toolkits that support the effective development of agent systems. Moreover, since one of the main tasks for which agent systems were invented is the integration between heterogeneous software, independently developed agents should be able to interact successfully. In this paper, we present JADE (Java Agent Development Environment), a software framework to build agent systems for the management of networked information resources in compliance with the FIPA specifications for inter-operable intelligent multi-agent systems. The goal of JADE is to simplify development while ensuring standard compliance through a comprehensive set of system services and agents. JADE can then be considered to be an agent middle-ware that implements an efficient agent platform and supports the development of multi-agent systems. It deals with all the aspects that are not peculiar to agent internals and that are independent of the applications, such as message transport, encoding and parsing, or agent life-cycle management. Copyright © 2001 John Wiley & Sons, Ltd.
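To give a flavour of the programming model such middleware implies, here is a minimal agent sketch written against JADE's Agent/Behaviour API as commonly documented; the exact class and method names should be treated as recalled rather than verified, and FIPA protocol details are omitted:

    import jade.core.Agent;
    import jade.core.behaviours.CyclicBehaviour;
    import jade.lang.acl.ACLMessage;

    // A minimal agent that waits for ACL messages and echoes them back.
    // The platform handles message transport, encoding and life-cycle management.
    public class EchoAgent extends Agent {
        protected void setup() {
            addBehaviour(new CyclicBehaviour(this) {
                public void action() {
                    ACLMessage msg = receive();          // non-blocking receive
                    if (msg != null) {
                        ACLMessage reply = msg.createReply();
                        reply.setContent("echo: " + msg.getContent());
                        send(reply);                     // delivery handled by the middleware
                    } else {
                        block();                         // suspend until a message arrives
                    }
                }
            });
        }
    }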

579 citations


Journal ArticleDOI
TL;DR: This article describes the security architecture of Ajanta, a Java‐based system for mobile agent programming that provides mechanisms to protect server resources from malicious agents, agent data from tampering by malicious servers and communication channels during its travel, and protection of name service data and the global namespace.
Abstract: A mobile agent is an object which can autonomously migrate in a distributed system to perform tasks on behalf of its creator. Security issues regarding the protection of host resources, as well as of the agents themselves, raise significant obstacles in practical applications of the agent paradigm. This article describes the security architecture of Ajanta, a Java-based system for mobile agent programming. This architecture provides mechanisms to protect server resources from malicious agents, to protect an agent's data from tampering by malicious servers and communication channels during its travel, and to protect name service data and the global namespace. We present here a proxy-based mechanism for secure access to server resources by agents. Using Java's class loader model and thread group mechanism, isolated execution domains are created for agents at a server. An agent can contain three kinds of protected objects: read-only objects whose tampering can be detected, encrypted objects for specific servers, and a secure append-only log of objects. A generic authentication protocol is used for all client–server interactions when protection is required. Using this mechanism, the security model of Ajanta enforces protection of namespaces, and secure execution of control primitives such as agent recall or abort. Ajanta also supports communication between agents using RMI, which can be controlled if required by the servers' security policies. Copyright © 2001 John Wiley & Sons, Ltd.

185 citations


Journal ArticleDOI
TL;DR: An unshared object can be accessed without regard to possible conflicts with other parts of a system, whether concurrent or single‐threaded.
Abstract: An unshared object can be accessed without regard to possible conflicts with other parts of a system, whether concurrent or single-threaded. A unique variable (sometimes known as a ‘free’ or ‘linear’ variable) is one that either is null or else refers to an unshared object. Being able to declare and check which variables are unique improves a programmer's ability to avoid program faults. In previously described uniqueness extensions to imperative languages, a unique variable can be accessed only with a destructive read, which nullifies it after the value is obtained. This approach suffers from several disadvantages: the use of destructive reads increases the complexity of the program which must continually restore nullified values; adding destructive reads changes the semantics of the programming language; and many of the nullifications are actually unnecessary. We demonstrate instead that uniqueness can be preserved through the use of existing language features. We give a modular static analysis that checks (nonexecutable) uniqueness annotations superimposed on an imperative programming language without destructive reads. The ‘alias-burying’ intuition is that aliases that are ‘dead’ (will never be used again) can be safely ‘buried’ (made undefined). Copyright © 2001 John Wiley & Sons, Ltd.
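A small, hedged Java sketch of the contrast the paper draws; the /*unique*/ marker is an annotation invented here for illustration, and the static checker that would enforce it is not shown:

    // Illustration only: /*unique*/ marks a variable intended to hold the sole
    // usable reference to an object; a modular static analysis would verify this.
    class Buffer { byte[] data = new byte[1024]; }

    class Handoff {
        /*unique*/ Buffer pending;

        // Destructive-read style: reading the unique field forces a nullification,
        // complicating code that must later restore the value.
        Buffer takeDestructive() {
            Buffer b = pending;
            pending = null;              // required by destructive-read schemes
            return b;
        }

        // Alias-burying style: an ordinary read is allowed; the analysis instead
        // checks that any other aliases of the object are dead ("buried") at the
        // point where the unique field is read, so no runtime nulling is needed.
        Buffer takeAliasBurying() {
            return pending;
        }
    }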

170 citations


Journal ArticleDOI
TL;DR: This work provides a structured process to recover grammars including the adaptation of raw extracted Grammars and the derivation of parsers and was the first to publish a (Web‐enabled) grammar specification so that others can use this result to construct their own grammar‐based tools for VS COBOL II or derivatives.
Abstract: We propose an approach to the construction of grammars for existing languages. The main characteristic of the approach is that the grammars are not constructed from scratch but are rather recovered by extracting them from language references, compilers, and other artifacts. We provide a structured process to recover grammars including the adaptation of raw extracted grammars and the derivation of parsers. The process is applicable to possibly all existing languages for which business critical applications exist. We illustrate the approach with a non-trivial case study. Using our process and some basic tools, we constructed in a few weeks a complete and correct VS COBOL II grammar specification for IBM mainframes. In addition, we constructed a parser for VS COBOL II, and were the first to publish a (web-enabled) grammar specification so that others can use this result to construct their own grammar-based tools for VS COBOL II or derivatives.

164 citations


Journal ArticleDOI
TL;DR: Nrgrep is a new pattern‐matching tool designed for efficient search of complex patterns based on a single and uniform concept: the bit‐parallel simulation of a non‐deterministic suffix automaton that can find from simple patterns to regular expressions, exactly or allowing errors in the matches.
Abstract: We present nrgrep (‘non-deterministic reverse grep’), a new pattern-matching tool designed for efficient search of complex patterns. Unlike previous tools of the grep family, such as agrep and Gnu grep, nrgrep is based on a single and uniform concept: the bit-parallel simulation of a non-deterministic suffix automaton. As a result, nrgrep can find from simple patterns to regular expressions, exactly or allowing errors in the matches, with an efficiency that degrades smoothly as the complexity of the searched pattern increases. Another concept that is fully integrated into nrgrep and that contributes to this smoothness is the selection of adequate subpatterns for fast scanning, which is also absent in many current tools. We show that the efficiency of nrgrep is similar to that of the fastest existing string-matching tools for the simplest patterns, and is by far unmatched for more complex patterns. Copyright © 2001 John Wiley & Sons, Ltd.
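The bit-parallel idea can be illustrated with the classic Shift-And scan for a plain string pattern; this is only a sketch of simulating a non-deterministic automaton with machine words, not nrgrep's backward suffix-automaton search:

    // Shift-And: bit i of 'state' is set when pattern[0..i] matches the text
    // ending at the current position; one word operation advances all states.
    static int find(String text, String pattern) {
        int m = pattern.length();
        if (m == 0 || m > 63) return -1;          // one 64-bit word per state set
        long[] mask = new long[256];              // per-character transition masks
        for (int i = 0; i < m; i++) {
            mask[pattern.charAt(i) & 0xFF] |= 1L << i;
        }
        long state = 0;
        long accept = 1L << (m - 1);
        for (int j = 0; j < text.length(); j++) {
            state = ((state << 1) | 1L) & mask[text.charAt(j) & 0xFF];
            if ((state & accept) != 0) {
                return j - m + 1;                 // starting index of the match
            }
        }
        return -1;
    }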

134 citations


Journal ArticleDOI
TL;DR: Shimba is a reverse engineering environment to support the understanding of Java software systems and integrates the Rigi and SCED tools to analyze and visualize the static and dynamic aspects of a subject system.
Abstract: Shimba is a reverse engineering environment to support the understanding of Java software systems. Shimba integrates the Rigi and SCED tools to analyze and visualize the static and dynamic aspects of a subject system. The static software artifacts and their dependencies are extracted from Java byte code and viewed as directed graphs using the Rigi reverse engineering environment. The run-time information is generated by running the target software under a customized SDK debugger. The generated information is viewed as sequence diagrams using the SCED tool. In SCED, statechart diagrams can be synthesized automatically from sequence diagrams, allowing the user to investigate the overall run-time behavior of objects in the target system. Shimba provides facilities to manage the different diagrams and to trace artifacts and relations across views. In Shimba, SCED sequence diagrams are used to slice the static dependency graphs produced by Rigi. In turn, Rigi graphs are used to guide the generation of SCED sequence diagrams and to raise their level of abstraction. We show how the information exchange among the views enables goal-driven reverse engineering tasks and aids the overall understanding of the target software system. The FUJABA software system serves as a case study to illustrate and validate the Shimba reverse engineering environment. Copyright © 2001 John Wiley & Sons, Ltd.

126 citations


Journal ArticleDOI
TL;DR: Alto, a link‐time optimizer for the Compaq Alpha architecture, is described, able to realize significant performance improvements even for programs compiled with a good optimizing compiler with a high level of optimization.
Abstract: Traditional optimizing compilers are limited in the scope of their optimizations by the fact that only a single function, or possibly a single module, is available for analysis and optimization. In particular, this means that library routines cannot be optimized to specific calling contexts. Other optimization opportunities, exploiting information not available before link time, such as addresses of variables and the final code layout, are often ignored because linkers are traditionally unsophisticated. A possible solution is to carry out whole‐program optimization at link time. This paper describes alto, a link‐time optimizer for the Compaq Alpha architecture. It is able to realize significant performance improvements even for programs compiled with a good optimizing compiler with a high level of optimization. The resulting code is considerably faster than that obtained using the OM link‐time optimizer, even when the latter is used in conjunction with profile‐guided and inter‐file compile‐time optimizations. Copyright © 2001 John Wiley & Sons, Ltd.

121 citations


Journal ArticleDOI
TL;DR: In this article, the authors present inexpensive syntactic constraints that strengthen encapsulation by imposing static restrictions on the spread of references in object-oriented languages and introduce confined types to impose a static scoping discipline on dynamic references and anonymous methods.
Abstract: The sharing and transfer of references in object-oriented languages is difficult to control. Without any constraint, practical experience has shown that even carefully engineered object-oriented code can be brittle, and subtle security deficiencies can go unnoticed. In this paper, we present inexpensive syntactic constraints that strengthen encapsulation by imposing static restrictions on the spread of references. In particular, we introduce confined types to impose a static scoping discipline on dynamic references and anonymous methods to loosen confinement somewhat to allow code reuse. We have implemented a verifier which performs a modular analysis of Java programs and provides a static guarantee that confinement is respected. Copyright © 2001 John Wiley & Sons, Ltd.
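The discipline can be approximated informally with a small Java sketch; the confined-type annotations and the precise rules the verifier checks are not reproduced here, only the coding pattern they are meant to guarantee:

    // Package-private class: the property to verify is that no reference to a
    // Key object ever escapes its defining package.
    final class Key {
        private final byte[] bits;
        Key(byte[] bits) { this.bits = bits.clone(); }
        byte[] bits() { return bits.clone(); }
    }

    // Public facade: exposes behaviour derived from the Key, never the Key itself.
    public final class Vault {
        private final Key key = new Key(new byte[] { 42 });

        public int fingerprint() {
            return java.util.Arrays.hashCode(key.bits());   // derived data only
        }

        // A method such as "public Key getKey()" would break confinement by
        // letting references to Key spread beyond the package.
    }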

94 citations


Journal ArticleDOI
TL;DR: Two dynamic compilation techniques are presented that enable high performance execution while reducing the effect of this compilation overhead, decreasing the amount of compilation performed, and overlapping compilation with execution.
Abstract: The execution model for mobile, dynamically-linked, object-oriented programs has evolved from fast interpretation to a mix of interpreted and dynamically compiled execution. The primary motivation for dynamic compilation is that compiled code executes significantly faster than interpreted code. However, dynamic compilation, which is performed while the application is running, introduces execution delay. In this paper we present two dynamic compilation techniques that enable high performance execution while reducing the effect of this compilation overhead. These techniques can be classified as: 1) decreasing the amount of compilation performed (Lazy Compilation), and 2) overlapping compilation with execution (Background Compilation). We first evaluate the effectiveness of lazy compilation. In lazy compilation, individual methods are compiled on demand upon their first invocation. This is in contrast to Eager Compilation, in which all methods in a class are compiled when a new class is loaded. Our experimental results (obtained by executing the SpecJVM Java programs on the Jalapeño JVM) show that, compared to eager compilation, lazy compilation results in % fewer methods being compiled and reductions in total time (compilation plus execution time) of % to %. Next, we present profile-driven background compilation, a technique that augments lazy compilation by using idle cycles in multiprocessor systems to overlap compilation with application execution. Profile information is used to prioritize methods as candidates for background compilation. Our results show that background compilation can deliver significant reductions in total time (% to %), compared to eager compilation.
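A toy Java sketch of the compile-on-first-invocation idea; "compilation" is simulated here by building an executable object on demand and caching it, which captures the structural point but none of the real JIT machinery:

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.function.IntUnaryOperator;

    // Lazy compilation: a method is translated to executable form only when it is
    // first invoked, then cached. Eager compilation would instead populate the
    // cache for every method of a class at class-loading time.
    class LazyCompiler {
        private final Map<String, IntUnaryOperator> compiled = new ConcurrentHashMap<>();

        IntUnaryOperator lookup(String methodName) {
            return compiled.computeIfAbsent(methodName, name -> {
                System.out.println("compiling " + name + " on first call");
                return compile(name);                // cost paid only if the method is used
            });
        }

        private IntUnaryOperator compile(String name) {
            // stand-in for real code generation
            return name.equals("square") ? x -> x * x : x -> x + 1;
        }

        public static void main(String[] args) {
            LazyCompiler vm = new LazyCompiler();
            System.out.println(vm.lookup("square").applyAsInt(7));   // compiles, then runs
            System.out.println(vm.lookup("square").applyAsInt(8));   // cached, no recompilation
        }
    }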

76 citations


Journal ArticleDOI
TL;DR: This work has developed a conceptual model for frameworks and a set of guidelines to build object oriented frameworks that adhere to this model, and focuses on improving the flexibility, reusability and usability of frameworks.
Abstract: Object-oriented frameworks provide software developers with the means to build an infrastructure for their applications. Unfortunately, frameworks do not always deliver on their promises of reusability and flexibility. To address this, we have developed a conceptual model for frameworks and a set of guidelines to build object-oriented frameworks that adhere to this model. Our guidelines focus on improving the flexibility, reusability and usability (i.e. making it easy to use a framework) of frameworks. Copyright © 2001 John Wiley & Sons, Ltd.

62 citations


Journal ArticleDOI
TL;DR: The method recovers an ‘as is’ design from C++ software releases, compares recovered designs at the class interface level, and helps the user to deal with inconsistencies by pointing out regions of code where differences are concentrated.
Abstract: This paper presents a method to build and maintain traceability links and properties of a set of object-oriented software releases. The method recovers an ‘as is’ design from C++ software releases, compares recovered designs at the class interface level, and helps the user to deal with inconsistencies by pointing out regions of code where differences are concentrated. The comparison step exploits edit distance and a maximum match algorithm. The method has been experimented with on two freely available C++ systems. Results as well as examples of applications to the visualization of the traceability information and to the estimation of the size of changes during maintenance are reported in the paper. Copyright © 2001 John Wiley & Sons, Ltd.
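The comparison step rests on standard edit distance; a minimal sketch of the Levenshtein computation such a comparison could use follows (the paper's maximum match algorithm and its class-interface representation are not shown):

    // Classic dynamic-programming edit distance, e.g. between two method
    // signatures taken from successive releases of the same class.
    static int editDistance(String a, String b) {
        int[][] d = new int[a.length() + 1][b.length() + 1];
        for (int i = 0; i <= a.length(); i++) d[i][0] = i;   // deletions
        for (int j = 0; j <= b.length(); j++) d[0][j] = j;   // insertions
        for (int i = 1; i <= a.length(); i++) {
            for (int j = 1; j <= b.length(); j++) {
                int subst = a.charAt(i - 1) == b.charAt(j - 1) ? 0 : 1;
                d[i][j] = Math.min(
                        Math.min(d[i - 1][j] + 1, d[i][j - 1] + 1),  // delete / insert
                        d[i - 1][j - 1] + subst);                    // substitute or keep
            }
        }
        return d[a.length()][b.length()];
    }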

Journal ArticleDOI
TL;DR: This paper proposes an approach that allows full referential object sharing but adds transitive access control to object references to limit the effects of aliasing, and is presented as an extension of Java, called JAC (Java with Access Control).
Abstract: Unwanted effects of aliasing cause encapsulation problems in object-oriented programming. Nevertheless, aliasing is part of common and efficient programming techniques for expressing sharing of objects, and as such its general restriction is not an option in practice. This paper proposes an approach that allows full referential object sharing but adds transitive access control to object references to limit the effects of aliasing. The approach relies on well-known properties of object-oriented type systems but exploits them in a novel way to support an access-right-based model of encapsulation. It is presented as an extension of Java, called JAC (Java with Access Control). Copyright © 2000 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: The suitability of M‐mp testing in a given context will depend on whether building and maintaining model programs is likely to be more cost effective than manually pre‐calculating P's expected outcomes for given test data.
Abstract: A strategy described as ‘testing using M model programs’ (abbreviated to ‘M-mp testing’) is investigated as a practical alternative to software testing based on manual outcome prediction. A model program implements suitably selected parts of the functional specification of the software to be tested. The M-mp testing strategy requires that M (M ≥ 1) model programs, as well as the program under test, P, be independently developed. P and the M model programs are then subjected to the same test data. Difference analysis is conducted on the outputs and appropriate corrective action is taken. P and the M model programs jointly constitute an approximate test oracle. Both M-mp testing and manual outcome prediction are subject to the possibility of correlated failure. In general, the suitability of M-mp testing in a given context will depend on whether building and maintaining model programs is likely to be more cost effective than manually pre-calculating P's expected outcomes for given test data. In many contexts, M-mp testing could also facilitate the attainment of higher test adequacy levels than would be possible with manual outcome prediction. A rigorous experiment in an industrial context is described in which M-mp testing (with M = 1) was used to test algorithmically complex scheduling software. In this case, M-mp testing turned out to be significantly more cost effective than testing based on manual outcome prediction. Copyright © 2001 John Wiley & Sons, Ltd.
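The mechanics of the strategy reduce to mechanical difference analysis; a hedged Java sketch of such a harness with M = 1 follows (the generic types and the notion of an "outcome" are simplifications introduced here):

    import java.util.List;
    import java.util.function.Function;

    // M-mp testing with M = 1: run the program under test and one independently
    // developed model program on the same test data and report disagreements.
    class MmpHarness {
        static <I, O> void differenceAnalysis(Function<I, O> programUnderTest,
                                              Function<I, O> modelProgram,
                                              List<I> testData) {
            for (I input : testData) {
                O actual = programUnderTest.apply(input);
                O expected = modelProgram.apply(input);
                if (!actual.equals(expected)) {
                    // At least one of the two programs is wrong for this input;
                    // correlated failures can still slip through, as the paper notes.
                    System.out.println("MISMATCH for " + input + ": " + actual + " vs " + expected);
                }
            }
        }
    }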

Journal ArticleDOI
TL;DR: A method, based on vertex-labeling, to generate algorithms for manipulating the Hilbert spacefilling curve is described, which leads to algorithms for computing the image of a point in R^1, computing a pre-image of a point in R^2, and drawing a finite approximation of the curve.
Abstract: We describe a method, based on vertex-labeling, to generate algorithms for manipulating the Hilbert spacefilling curve. The method leads to algorithms for: computing the image of a point in R^1; computing a pre-image of a point in R^2; drawing a finite approximation of the curve; finding neighbor cells in a decomposition ordered according to the curve. The method is straightforward and flexible, resulting in short, intuitive procedures that are as efficient as specialized procedures found in the literature. Moreover, the same method can be applied to many other spacefilling curves. We demonstrate vertex-labeling algorithms for the Sierpinski and Peano spacefilling curves, and variations. Copyright © 2001 John Wiley & Sons, Ltd.
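For orientation, the widely known iterative index-to-coordinate conversion for the Hilbert curve is shown below; this is the textbook bit-manipulation version, not the vertex-labeling formulation the paper develops:

    // Map a distance d along the Hilbert curve to coordinates (x, y) in an
    // n-by-n grid, where n is a power of two. Standard iterative algorithm.
    static int[] d2xy(int n, int d) {
        int x = 0, y = 0, t = d;
        for (int s = 1; s < n; s *= 2) {
            int rx = 1 & (t / 2);
            int ry = 1 & (t ^ rx);
            if (ry == 0) {                        // rotate the quadrant
                if (rx == 1) {
                    x = s - 1 - x;
                    y = s - 1 - y;
                }
                int tmp = x; x = y; y = tmp;      // swap x and y
            }
            x += s * rx;
            y += s * ry;
            t /= 4;
        }
        return new int[] { x, y };
    }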

Journal ArticleDOI
TL;DR: This work presents a technique which is simple and without the above drawbacks—allowing a token to simultaneously have different types—and shows how it can be applied to areas such as little language processing and fuzzy parsing.
Abstract: A common problem when writing compilers for programming languages or little, domain-specific languages is that an input token may have several interpretations, depending on context. Solutions to this problem demand programmer intervention, obfuscate the language’s grammar, and may introduce subtle bugs. We present a technique which is simple and without the above drawbacks—allowing a token to simultaneously have different types—and show how it can be applied to areas such as little language processing and fuzzy parsing. We also describe ways that compiler tools can support this technique. Copyright © 2001 John Wiley & Sons, Ltd.
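One plausible realization of "a token with several simultaneous types" is to let the scanner attach a set of candidate types that the parser narrows by context; the following Java sketch illustrates that idea and is not the paper's representation or its tool support:

    import java.util.EnumSet;

    // A token keeps every type it could denote; the parser intersects this set
    // with the types the grammar allows at the point where the token is consumed.
    class Token {
        enum Type { IDENTIFIER, KEYWORD, FUNCTION_NAME, TYPE_NAME }

        final String text;
        final EnumSet<Type> possible;

        Token(String text, EnumSet<Type> possible) {
            this.text = text;
            this.possible = possible;
        }

        // Called by the parser: resolve against the types legal in this context.
        Type resolve(EnumSet<Type> allowedHere) {
            EnumSet<Type> viable = EnumSet.copyOf(possible);
            viable.retainAll(allowedHere);
            if (viable.size() != 1) {
                throw new IllegalStateException("ambiguous or unexpected token: " + text);
            }
            return viable.iterator().next();
        }
    }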

Journal ArticleDOI
TL;DR: This paper aims to popularize an efficient but little‐known algorithm for creating minimal ADFAs recognizing a finite language, invented independently by several authors.
Abstract: Minimal acyclic deterministic finite automata (ADFAs) can be used as a compact representation of finite string sets with fast access time. Creating them with traditional algorithms of DFA minimization is resource greedy when a large collection of strings is involved. This paper aims to popularize an efficient but little-known algorithm for creating minimal ADFAs recognizing a finite language, invented independently by several authors. The algorithm is presented for three variants of ADFAs, its minor improvements are discussed, and minimal ADFAs are compared to competitive data structures. Copyright © 2001 John Wiley & Sons, Ltd.
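A compact Java rendering of the incremental construction for input supplied in lexicographic order, as the algorithm is commonly presented; the state signature, naming and the absence of error handling are simplifications made here:

    import java.util.HashMap;
    import java.util.Map;
    import java.util.TreeMap;

    // Incremental construction of a minimal acyclic DFA from words added in
    // lexicographic order. Registered states are canonical, so equivalence can be
    // tested on finality plus the outgoing (label, child-id) pairs.
    class MinimalAdfa {
        private static int nextId = 0;

        static final class State {
            final int id = nextId++;
            boolean fin;
            final TreeMap<Character, State> edges = new TreeMap<>();

            String signature() {
                StringBuilder sb = new StringBuilder(fin ? "F" : "-");
                for (Map.Entry<Character, State> e : edges.entrySet()) {
                    sb.append(e.getKey()).append(e.getValue().id).append(';');
                }
                return sb.toString();
            }
        }

        private final State root = new State();
        private final Map<String, State> register = new HashMap<>();

        void add(String word) {                       // words must arrive sorted
            State state = root;
            int i = 0;
            while (i < word.length() && state.edges.containsKey(word.charAt(i))) {
                state = state.edges.get(word.charAt(i));
                i++;
            }
            if (!state.edges.isEmpty()) replaceOrRegister(state);
            for (; i < word.length(); i++) {          // append the remaining suffix
                State next = new State();
                state.edges.put(word.charAt(i), next);
                state = next;
            }
            state.fin = true;
        }

        void finish() {                               // minimize the last path
            if (!root.edges.isEmpty()) replaceOrRegister(root);
        }

        private void replaceOrRegister(State state) {
            Map.Entry<Character, State> last = state.edges.lastEntry();
            State child = last.getValue();
            if (!child.edges.isEmpty()) replaceOrRegister(child);
            State existing = register.get(child.signature());
            if (existing != null) {
                state.edges.put(last.getKey(), existing);   // reuse an equivalent state
            } else {
                register.put(child.signature(), child);
            }
        }

        boolean contains(String word) {
            State s = root;
            for (char c : word.toCharArray()) {
                s = s.edges.get(c);
                if (s == null) return false;
            }
            return s.fin;
        }
    }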

Journal ArticleDOI
TL;DR: A periodic rotation scheme improves the speed of splaying by 27%, while other proposed heuristics are less effective; the performance of efficient bit-wise hashing and red–black trees is reported for comparison.
Abstract: Splay and randomized search trees (RSTs) are self-balancing binary tree structures with little or no space overhead compared to a standard binary search tree (BST). Both trees are intended for use in applications where node accesses are skewed, for example in gathering the distinct words in a large text collection for index construction. We investigate the efficiency of these trees for such vocabulary accumulation. Surprisingly, unmodified splaying and RSTs are on average around 25% slower than using a standard binary tree. We investigate heuristics to limit splay tree reorganization costs and show their effectiveness in practice. In particular, a periodic rotation scheme improves the speed of splaying by 27%, while other proposed heuristics are less effective. We also report the performance of efficient bit-wise hashing and red–black trees for comparison. Copyright © 2001 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: An approach to elimination of redundant access expressions that combines partial redundancy elimination (PRE) with type‐based alias analysis (TBAA) is described, and an optimization framework for Java class files incorporating TBAA‐based PRE over pointer access expressions is implemented.
Abstract: Pointer traversals pose significant overhead to the execution of object-oriented programs, since every access to an object’s state requires a pointer dereference. Eliminating redundant pointer traversals reduces both instructions executed as well as redundant memory accesses to relieve pressure on the memory subsystem. We describe an approach to elimination of redundant access expressions that combines partial redundancy elimination (PRE) with type-based alias analysis (TBAA). To explore the potential of this approach we have implemented an optimization framework for Java class files incorporating TBAA-based PRE over pointer access expressions. The framework is implemented as a classfile-to-classfile transformer; optimized classes can then be run in any standard Java execution environment. Our experiments demonstrate improvements in the execution of optimized code for several Java benchmarks running in diverse execution environments: the standard interpreted JDK virtual machine, a virtual machine using “just-in-time” compilation, and native binaries compiled off-line (“way-ahead-of-time”). We isolate the impact of access path PRE using TBAA, and demonstrate that Java’s requirement of precise exceptions can noticeably impact code-motion optimizations like PRE.
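The effect of the transformation on access expressions can be shown with a tiny before/after fragment, written by hand here; the framework performs the equivalent rewrite on bytecode, and only where type-based alias analysis and the precise-exception rules allow it:

    class Address { String city; String zip; }
    class Customer { Address address; }
    class Order { Customer customer; }

    class AccessPathExample {
        // Before: the access path order.customer.address is traversed twice.
        static String label(Order order) {
            return order.customer.address.city + ", " + order.customer.address.zip;
        }

        // After access-path PRE: the common sub-path is loaded once. Type-based
        // alias analysis is what justifies the motion, by showing that no
        // intervening store can redirect order.customer or customer.address.
        static String labelOptimized(Order order) {
            Address a = order.customer.address;     // redundant dereferences removed
            return a.city + ", " + a.zip;
        }
    }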

Journal ArticleDOI
TL;DR: The demerits of and constructive amendments to Chae's cohesion measure are discussed, and the patterns of interactions among the constituent members of a class are considered.
Abstract: Although H. S. Chae's class cohesion measure considers not only the number of interactions but also the patterns of the interactions among the constituent members of a class (which overcomes the limitations of previous class cohesion measures), it only partly considers the patterns of interactions, and might cause the measuring results to be inconsistent with intuition in some cases. This paper discusses the demerits of and proposes constructive amendments to Chae's cohesion measure. Copyright © 2001 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: A conceptual basis and prototype implementation for direct support for automated dimensional consistency checking and unit conversion within the framework of the standard Fortran 90 language is described.
Abstract: Physical dimensions and units form an essential part of the specification of constants and variables occurring in scientific programs, yet no standard compilable programming language implements direct support for automated dimensional consistency checking and unit conversion. This paper describes a conceptual basis and prototype implementation for such support within the framework of the standard Fortran 90 language. This is accomplished via an external module supplying appropriate user data types and operator interfaces. Legacy Fortran 77 scientific software can be easily modified to compile and run as ‘dimension-aware’ programs utilizing the proposed enhancements. Copyright © 2001 John Wiley & Sons, Ltd.
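The paper's mechanism is a Fortran 90 module with overloaded operators; the same idea of carrying dimension exponents in a user-defined type and checking them in arithmetic can be sketched in Java for illustration (this analogue is not the paper's interface):

    // A quantity carries exponents of the SI base dimensions used here
    // (metre, kilogram, second); operators check or combine them.
    final class Quantity {
        final double value;
        final int m, kg, s;

        Quantity(double value, int m, int kg, int s) {
            this.value = value; this.m = m; this.kg = kg; this.s = s;
        }

        Quantity plus(Quantity o) {
            if (m != o.m || kg != o.kg || s != o.s) {
                throw new IllegalArgumentException("dimension mismatch in addition");
            }
            return new Quantity(value + o.value, m, kg, s);
        }

        Quantity dividedBy(Quantity o) {
            return new Quantity(value / o.value, m - o.m, kg - o.kg, s - o.s);
        }

        public static void main(String[] args) {
            Quantity distance = new Quantity(100.0, 1, 0, 0);   // 100 m
            Quantity time = new Quantity(9.58, 0, 0, 1);        // 9.58 s
            Quantity speed = distance.dividedBy(time);          // dimensions: m s^-1
            System.out.println(speed.value + " m/s");
            // distance.plus(time) would throw: metres cannot be added to seconds
        }
    }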

Journal ArticleDOI
TL;DR: As network-enabled embedded devices and Java grow in popularity, embedded system researchers have started seeking ways to make these devices Java-enabled, but it is a challenge to apply Java technology to these devices due to their shortage of resources.
Abstract: As network-enabled embedded devices and Java grow in their popularity, embedded system researchers have started seeking ways to make these devices Java-enabled. However, it is a challenge to apply Java technology to these devices due to their shortage of resources. In this paper, we propose EJVM (Economic Java Virtual Machine), an economic way to run Java programs on network-enabled and resource-limited embedded devices. Adopting the architecture proposed for distributed JVMs, we store all Java code on the server to reduce the storage needs of the client devices. In addition, we use two novel techniques to reduce the client-side memory footprints: server-side class representation conversion and on-demand bytecode loading. Finally, we maintain client-side caches and provide performance evaluation on different caching policies. We implement EJVM by modifying a freely available JVM implementation, Kaffe. From the experiment results, we show that EJVM can reduce Java heap requirements by about 20–50% and achieve 90% of the original performance. Copyright © 2001 John Wiley & Sons, Ltd.
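Structurally, on-demand bytecode loading is what a custom class loader does; the sketch below uses the standard java.lang.ClassLoader API to fetch classes from a server only when first referenced. The URL layout is invented for illustration, and EJVM's class-representation conversion and caching policies are not reproduced:

    import java.io.IOException;
    import java.io.InputStream;
    import java.net.URL;

    // Resolve classes lazily: bytecode is fetched from the server only when the
    // running program first needs the class, keeping client-side storage small.
    class NetworkClassLoader extends ClassLoader {
        private final String serverUrl;

        NetworkClassLoader(String serverUrl, ClassLoader parent) {
            super(parent);
            this.serverUrl = serverUrl;
        }

        @Override
        protected Class<?> findClass(String name) throws ClassNotFoundException {
            try {
                byte[] bytecode = fetchFromServer(name);              // on-demand load
                return defineClass(name, bytecode, 0, bytecode.length);
            } catch (IOException e) {
                throw new ClassNotFoundException(name, e);
            }
        }

        private byte[] fetchFromServer(String name) throws IOException {
            String path = serverUrl + "/" + name.replace('.', '/') + ".class";
            try (InputStream in = new URL(path).openStream()) {
                return in.readAllBytes();
            }
        }
    }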

Journal ArticleDOI
TL;DR: JGAP can be viewed as a visual graph calculator for helping experiment with and teach graph algorithm design and includes a performance meter to measure the execution time of implemented algorithms.
Abstract: We describe JGAP, a web-based platform for designing and implementing Java-coded graph algorithms. The platform contains a library of common data structures for implementing graph algorithms, features a "plug-and-play" modular design for adding new algorithm modules, and includes a performance meter to measure the execution time of implemented algorithms. JGAP is also equipped with a graph editor to generate and modify graphs to have specific properties. JGAP's graphical user interface further allows users to compose, in a functional way, computation sequences from existing algorithm modules so that the output of one algorithm is used as input for another. Hence, JGAP can be viewed as a visual graph calculator for helping experiment with and teach graph algorithm design. Copyright © 2001 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: Noting that CBMC does not satisfy the monotonic property in terms of the number of interactions, Xu and Zhou proposed an augmented definition of CBMC by adopting cut set instead of glue methods, which clearly satisfies the monotonic property.
Abstract: The authors insist that monotonicity is a necessary property of a good cohesion metric and the violation of the monotonicity property limits the application of CBMC. They also state that the augmented CBMC can also be used as a guideline for quality evaluation and restructuring of poorly designed classes. This paper raises the question about the necessity of monotonicity by analyzing the reason that causes CBMC to violate the monotonicity property. In addition, we give a detailed description of the restructuring procedure based on CBMC.

Journal ArticleDOI
TL;DR: Two kinds of methods for improving insertion are presented; for large sets of keys, insertion into the improved double-array is about six to 320 times faster than into the original double-array.
Abstract: A double-array is a compact and fast data structure for a trie, but it degrades the speed of insertion for a large set of keys. In this paper, two kinds of methods for improving insertion are presented. The basic functions for retrieval, insertion and deletion are implemented in the C language. Compared with the original double-array for large sets of keys, the improved double-array is about six to 320 times faster for insertion. Copyright © 2001 John Wiley & Sons, Ltd.
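For context, the retrieval side of a double-array is small enough to show whole; a minimal sketch of the standard BASE/CHECK traversal follows, with the array contents assumed to come from a separate construction or insertion procedure (the part the paper improves, not shown here):

    // Double-array trie traversal: from state s, character c leads to
    // t = base[s] + code(c), and the move is valid only when check[t] == s.
    class DoubleArrayTrie {
        private final int[] base;
        private final int[] check;

        DoubleArrayTrie(int[] base, int[] check) {
            this.base = base;
            this.check = check;
        }

        // Returns the state reached after consuming the key, or -1 if the key
        // falls out of the trie. Marking of terminal states is omitted.
        int walk(String key) {
            int s = 1;                                   // conventional root state
            for (int i = 0; i < key.length(); i++) {
                int t = base[s] + key.charAt(i) + 1;     // simple character code
                if (t < 0 || t >= check.length || check[t] != s) {
                    return -1;                           // no such transition
                }
                s = t;
            }
            return s;
        }
    }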

Journal ArticleDOI
TL;DR: The efforts to define a basic model‐based framework for rapid simulation and visualization are discussed, and it is illustrated how this framework was used to evaluate some classic algorithms.
Abstract: Over the last two decades, considerable research has been done in distributed operating systems, which can be attributed to faster processors and better communication technologies. A distributed operating system requires distributed algorithms to provide basic operating system functionality like mutual exclusion, deadlock detection, etc. A number of such algorithms have been proposed in the literature. Traditionally, these distributed algorithms have been presented in a theoretical way, with limited attempts to simulate actual working models. This paper discusses our experience in simulating distributed algorithms with the aid of some existing tools, including OPNET and Xplot. We discuss our efforts to define a basic model-based framework for rapid simulation and visualization, and illustrate how we used this framework to evaluate some classic algorithms. We have also shown how the performance of different algorithms can be compared based on some collected statistics. To keep the focus of this paper on the approach itself, and our experience with tool integration, we only discuss some relatively simple models. Yet, the approach can be applied to more complex algorithm specifications. Copyright © 2001 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: An incremental parser based on LR parsing techniques and designed for use in a modeless syntax recognition editor is detailed, which aims to minimize disturbance to this representation, not only to ensure other system components can operate incrementally, but also to avoid unfortunate consequences for certain user‐oriented services.
Abstract: Incremental parsing has long been recognized as a technique of great utility in the construction of language-based editors, and correspondingly, the area currently enjoys a mature theory. Unfortunately, many practical considerations have been largely overlooked in previously published algorithms. Many user requirements for an editing system necessarily impact on the design of its incremental parser, but most approaches focus only on one: response time. This paper details an incremental parser based on LR parsing techniques and designed for use in a modeless syntax recognition editor. The nature of this editor places significant demands on the structure and quality of the document representation it uses, and hence, on the parser. The strategy presented here is novel in that both the parser and the representation it constructs are tolerant of the inevitable and frequent syntax errors that arise during editing. This is achieved by a method that differs from conventional error repair techniques, and that is more appropriate for use in an interactive context. Furthermore, the parser aims to minimize disturbance to this representation, not only to ensure other system components can operate incrementally, but also to avoid unfortunate consequences for certain user-oriented services. The algorithm is augmented with a limited form of predictive tree-building, and a technique is presented for the determination of valid symbols for menu-based insertion. Copyright (C) 2001 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: The main goal of this paper is to show the applicability of framework‐based reuse to videogames, and the predicted conditions under which building a framework is cost effective for the development of videogames similar to the ones from the studied domain.
Abstract: A framework is a high-level solution for the reuse of software pieces, a step forward in simple library-based reuse, that allows the sharing of not only common functions but also the generic logic of a domain application. It also ensures a better level of quality for the final product, given the fact that an important fraction of the application is already found within the framework and has therefore already been tested. This case study takes the systematic generation of hot-spot subsystems approach as a reference point to describe the underlying concepts in the design of a framework for the development of 2D action videogames for low-performance machines. The main goal of this paper is to show the applicability of framework-based reuse to videogames. Both standard and framework-based game implementations are compared and the results are analysed. Special attention is paid to the (potential) benefits that the use of frameworks brings to the fulfillment of maintenance tasks along the game's life cycle, a stage that normally consumes most resources in software projects. At the end of the paper, based on the implementation results, this study shows the predicted conditions under which building a framework is cost effective for the development of videogames similar to the ones from the studied domain. Copyright © 2001 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: A substantial performance boost is indicated when running a large set of applications using the method, compared to running these benchmark applications with the best fixed granularity.
Abstract: In this paper we propose a mechanism that provides distributed shared memory (DSM) systems with a flexible sharing granularity. The size of the shared memory units is dynamically determined by the system during runtime. This size can range from that of a single variable up to the size of the entire shared memory space. During runtime, the DSM transparently adapts the granularity to the memory access pattern of the application in each phase of its execution. This adaptation, called ComposedView, provides efficient data sharing in software DSM while preserving sequential consistency. Neither complex code analysis nor annotation by the programmer or the compiler are required. Our experiments indicate a substantial performance boost (up to 80% speed-up improvement) when running a large set of applications using our method, compared to running these benchmark applications with the best fixed granularity. Copyright © 2001 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: A complete framework‐based method which guides application developers in exactly determining application requirements and guides how to build them using the compositional framework MultiTEL from the collaborative and multimedia applications domain is proposed.
Abstract: Component-based software has become an important alternative for building applications, especially distributed ones, so it is essential to define new software development processes based on components. Within this trend, we propose a complete framework-based method which guides application developers in determining exactly the application requirements and in building them using MultiTEL, a compositional framework from the collaborative and multimedia applications domain. Although many multimedia frameworks are available, none of them offer a design methodology for understanding and adapting the framework classes or components to each derived application. By applying an architecture description language (ADL) we are able to document the framework and help designers in constructing, reusing, and connecting components; extending the framework architecture; and adding components to meet user requirements. Tools for the automatic generation of code from the ADL specifications are also described. Copyright © 2001 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: A constraint logic programming (CLP) approach to the solution of a job shop scheduling problem in the field of production planning in orthopaedic hospital departments is proposed by exploiting some well‐known operations research results and achieves a significant improvement with respect to the pure CLP(FD) approach.
Abstract: In this paper, we propose a constraint logic programming (CLP) approach to the solution of a job shop scheduling problem in the field of production planning in orthopaedic hospital departments. A pure CLP on finite domain (CLP(FD)) approach to the problem has been developed, leading to disappointing results. In fact, although CLP(FD) has been recognized as a suitable tool for solving combinatorial problems, it presents some drawbacks for optimization problems. The main reason concerns the fact that CLP(FD) solvers do not effectively handle the objective function and cost-based reasoning through the simple branch and bound scheme they embed. Therefore, we have proposed an improvement of the standard CLP branch and bound algorithm by exploiting some well-known operations research results. The branch and bound we integrate in a CLP environment is based on the optimal solution of a relaxation of the original problem. In particular, the relaxation used for the job shop scheduling problem considered is the well-known shifted bottleneck procedure considering single machine problems. The idea is to decompose the original problem into subproblems and solve each of them independently. Clearly, the solutions of each subproblem may violate constraints among different subproblems which are not taken into account. However, these solutions can be exploited in order to improve the pruning of the search space and to guide the search by defining cost-based heuristics. The resulting algorithm achieves a significant improvement with respect to the pure CLP(FD) approach that enables the solution of problems which are one order of magnitude greater than those solved by a pure CLP(FD) algorithm. In addition, the resulting code is less dependent on the input data configuration. Copyright © 2001 John Wiley & Sons, Ltd.
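The principle of pruning with the optimal value of a relaxation is independent of the scheduling setting; a small, hedged Java illustration using 0/1 knapsack with its fractional relaxation follows (the paper's actual relaxation is the shifted bottleneck procedure inside a CLP(FD) branch and bound, which is not reproduced here):

    // Branch and bound in which the pruning bound is the optimal value of a
    // relaxation of the subproblem: here 0/1 knapsack relaxed to fractional
    // knapsack. Only the principle is illustrated.
    class RelaxationBranchAndBound {
        // items pre-sorted by value/weight ratio, as the greedy relaxation requires
        static final double[] value = { 60, 100, 120 };
        static final double[] weight = { 10, 20, 30 };
        static final double capacity = 50;
        static double best = 0;

        // Optimal value of the relaxed (fractional) subproblem: an upper bound.
        static double relaxationBound(int i, double cap, double val) {
            for (; i < value.length && cap > 0; i++) {
                double take = Math.min(1.0, cap / weight[i]);
                val += take * value[i];
                cap -= take * weight[i];
            }
            return val;
        }

        static void search(int i, double cap, double val) {
            if (val > best) best = val;                         // update incumbent
            if (i == value.length) return;
            if (relaxationBound(i, cap, val) <= best) return;   // prune: cannot improve
            if (weight[i] <= cap) {
                search(i + 1, cap - weight[i], val + value[i]); // branch: take item i
            }
            search(i + 1, cap, val);                            // branch: skip item i
        }

        public static void main(String[] args) {
            search(0, capacity, 0);
            System.out.println("best value found: " + best);   // 220.0 for this data
        }
    }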