
Showing papers on "Implementation" published in 1998


Journal ArticleDOI
TL;DR: A model is introduced that handles all mentioned requirements and allows the task of system synthesis to be specified as an optimization problem; the application and adaptation of an Evolutionary Algorithm to solve the tasks of optimization and design space exploration is also described.
Abstract: In this paper, we consider system-level synthesis as the problem of optimally mapping a task-level specification onto a heterogeneous hardware/software architecture. This problem requires (1) the selection of the architecture (allocation), including general purpose and dedicated processors, ASICs, busses and memories, (2) the mapping of the specification onto the selected architecture in space (binding) and time (scheduling), and (3) the design space exploration, with the goal of finding a set of implementations that satisfy a number of constraints on cost and performance. Existing methodologies often consider a fixed architecture, perform the binding only, do not reflect the tight interdependency between binding and scheduling, do not consider communication (tasks and resources), require long run-times that prevent design space exploration, or yield only one implementation with optimal cost. Here, a model is introduced that handles all mentioned requirements and allows the task of system synthesis to be specified as an optimization problem. The application and adaptation of an Evolutionary Algorithm to solve the tasks of optimization and design space exploration is described.
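
To make the optimization formulation concrete, here is a minimal sketch (Python, with an invented four-task problem instance; this is not the authors' encoding) of how allocation and binding can be represented as an individual and evolved against cost and latency constraints:

    import random

    # Hypothetical problem instance (invented for illustration):
    # per-resource (allocation cost, execution time per task).
    TASKS = ["t1", "t2", "t3", "t4"]
    RESOURCES = {"cpu": (50, 4), "asic": (200, 1), "dsp": (120, 2)}
    MAX_COST, MAX_LATENCY = 400, 10

    def random_individual():
        # A binding: each task is mapped to one resource; the set of
        # used resources is the implied allocation.
        return {t: random.choice(list(RESOURCES)) for t in TASKS}

    def fitness(ind):
        allocated = set(ind.values())
        cost = sum(RESOURCES[r][0] for r in allocated)
        # Crude schedule: tasks bound to the same resource run sequentially.
        latency = max(sum(RESOURCES[b][1] for b in ind.values() if b == r)
                      for r in allocated)
        # Constraint violations are penalized rather than forbidden.
        return cost + latency + 50 * (max(0, cost - MAX_COST)
                                      + max(0, latency - MAX_LATENCY))

    def mutate(ind):
        child = dict(ind)
        child[random.choice(TASKS)] = random.choice(list(RESOURCES))
        return child

    pop = [random_individual() for _ in range(20)]
    for _ in range(100):
        pop.sort(key=fitness)
        pop = pop[:10] + [mutate(random.choice(pop[:10])) for _ in range(10)]
    best = min(pop, key=fitness)
    print(best, fitness(best))

A real system-synthesis EA would also encode scheduling order and use crossover; the sketch only conveys the shape of the search space the abstract describes.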

246 citations


Proceedings Article
27 Apr 1998
TL;DR: It is shown how QML can be used to capture QoS properties as part of designs, and UML, the de-facto standard object-oriented modeling language, is extended to support the concepts of QML.
Abstract: Traditional object-oriented design methods deal with the functional aspects of systems, but they do not address quality of service (QoS) aspects such as reliability, availability, performance, security, and timing. However, deciding which QoS properties should be provided by individual system components is an important part of the design process. Different decisions are likely to result in different component implementations and system structures. Thus, decisions about component-level QoS should be made at design time, before the implementation is begun. Since these decisions are an important part of the design process, they should be captured as part of the design. We propose a general Quality-of-Service specification language, which we call QML. In this paper we show how QML can be used to capture QoS properties as part of designs. In addition, we extend UML, the de-facto standard object-oriented modeling language, to support the concepts of QML. QML is designed to integrate with object-oriented features, such as interfaces, classes, and inheritance. In particular, it allows specification of QoS properties through refinement of existing QoS specifications. Although we exemplify the use of QML to specify QoS properties within the categories of reliability and performance, QML can be used for specification within any QoS category; QoS categories are user-defined types in QML.
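
QML has its own concrete syntax, so the following Python sketch is only an analogy (all names and the constraint encoding are invented): it models a QoS contract as a set of one-sided constraints and refinement as constraint strengthening, echoing the paper's point that new QoS specifications are built by refining existing ones:

    # Illustrative analogy only; QML is a dedicated language, not a Python API.
    class Contract:
        def __init__(self, **constraints):   # e.g. availability=(">=", 0.95)
            self.constraints = dict(constraints)

        def refine(self, **stronger):
            # Refinement inherits existing constraints and strengthens some,
            # mirroring QML's refinement of existing QoS specifications.
            merged = dict(self.constraints)
            merged.update(stronger)
            return Contract(**merged)

        def satisfied_by(self, measured):
            ops = {">=": lambda a, b: a >= b, "<=": lambda a, b: a <= b}
            return all(ops[op](measured[name], bound)
                       for name, (op, bound) in self.constraints.items())

    reliability = Contract(availability=(">=", 0.95), mttr_sec=("<=", 600))
    strict = reliability.refine(availability=(">=", 0.999))
    print(strict.satisfied_by({"availability": 0.9995, "mttr_sec": 120}))  # True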

240 citations


Journal ArticleDOI
TL;DR: In this paper, the authors propose a general quality-of-service specification language, which they call QML, and extend UML, the de facto standard object-oriented modelling language, to support the concepts of QML.
Abstract: Traditional object-oriented design methods deal with the functional aspects of systems, but they do not address quality-of-service (QoS) aspects, such as reliability, availability, performance, security and timing. However, deciding which QoS properties should be provided by individual system components is an important part of the design process. Different decisions are likely to result in different component implementations and system structures. Thus, decisions about component-level QoS should commonly be made at design time, before the implementation is begun. Since these decisions are an important part of the design process, they should be captured as part of the design. We propose a general quality-of-service specification language, which we call QML. In this paper we show how QML can be used to capture QoS properties as part of designs. In addition, we extend UML, the de facto standard object-oriented modelling language, to support the concepts of QML. QML is designed to integrate with object-oriented features, such as interfaces, classes and inheritance. In particular, it allows specification of QoS properties through refinement of existing QoS specifications. Although we exemplify the use of QML to specify QoS properties within the categories of reliability and performance, QML can be used for specification within any QoS category - QoS categories are user-defined types in QML. Sometimes, QoS characteristics and requirements change dynamically due to changing user preferences, or changes in the environment. For such situations static specification is insufficient. To allow for dynamic systems that change and evolve over time, we provide a QoS specification runtime representation. This representation enables systems to create, manipulate and exchange QoS information, and thereby negotiate and adapt to changing QoS requirements and conditions.

189 citations


Book ChapterDOI
TL;DR: This chapter compares and summarizes the experiences from three case studies on Corporate Memories for supporting various aspects in the product life-cycles of three European corporations and sketches a general framework for the development methodology, architecture, and technical realization of a Corporate Memory.
Abstract: A core concept in discussions about technological support for knowledge management is the Corporate Memory. A Corporate or Organizational Memory can be characterized as a comprehensive computer system which captures a company’s accumulated know-how and other knowledge assets and makes them available to enhance the efficiency and effectiveness of knowledge-intensive work processes. The successful development of such a system requires a careful analysis of established work practices and available information-technology (IT) infrastructure. This is essential for providing a cost-effective solution which will be accepted by the users and can be evolved in the future. This chapter compares and summarizes our experiences from three case studies on Corporate Memories for supporting various aspects in the product life-cycles of three European corporations. Based on the conducted analyses and prototypical implementations, we sketch a general framework for the development methodology, architecture, and technical realization of a Corporate Memory.

156 citations


Book ChapterDOI
01 Jan 1998
TL;DR: This chapter describes the integrated specification and theorem-proving environment of KIV, an advanced tool for developing high assurance systems.
Abstract: The aim of this chapter is to describe the integrated specification and theorem-proving environment of KIV. KIV is an advanced tool for developing high assurance systems. It supports: hierarchical formal specification of software and system designs; specification of safety/security models; proving properties of specifications; modular implementation of specification components; modular verification of implementations; incremental verification and error correction; and reuse of specifications, proofs, and verified components.

89 citations


Patent
11 Mar 1998
TL;DR: In this paper, a method of developing a software system using Object Oriented Technology (OOT) and frameworks is presented. The method is applicable in the technical field of application development of software systems, e.g. for a business application such as Financial or Logistic and Distribution.
Abstract: The present invention relates to a method of developing a software system using Object Oriented Technology. The present invention addresses the problem of providing a technical foundation for the development of software applications using Object Oriented Technology and frameworks. The present invention solves this problem with a framework allowing the modeling of businesses with a multiple level organizational structure. The present invention is applicable in the technical field of application development of software systems, e.g. for a business application as Financial or Logistic and Distribution, wherein it is the purpose of frameworks to provide significant portions of the application that are common across multiple implementations of the application in a general manner, easy to extend for specific implementation.

86 citations


Patent
11 Mar 1998
TL;DR: In this article, the problem of allowing an object to acquire and lose ability and function and to modify its responsibilities dynamically (in other words, to let an object acquire and lose the ability to do things dynamically) is addressed.
Abstract: A method of developing a software system using Object Oriented Technology and frameworks. The problem of allowing an object to acquire and lose ability and function and to modify responsibilities on an object dynamically or, in other words, to allow an object to acquire and lose the ability to do things dynamically, is addressed. This problem is solved with a framework to be used for developing a software system, e.g. for a business application. The framework comprises a number of classes which are to be processed by a computer system. The framework further comprises a Life Cycle as a description of state transitions through which an object can proceed as it is processed by an application. This is applicable in the technical field of application development of software systems, e.g. for a business application as Financial or Logistic and Distribution, wherein it is the purpose of frameworks to provide significant portions of the application that are common across multiple implementations of the application in a general manner, easy to extend for specific implementation.
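
The patent text stays abstract, but one conventional way to let an object gain and lose abilities at run time, combined with a Life Cycle of legal state transitions, looks roughly like the Python sketch below (all class, state, and ability names are invented; this is not the patented design):

    class LifeCycle:
        # Description of the state transitions an object may take as it is
        # processed by an application (states invented for illustration).
        TRANSITIONS = {"created": {"active"}, "active": {"suspended", "closed"},
                       "suspended": {"active"}, "closed": set()}

        def __init__(self):
            self.state = "created"

        def move_to(self, new_state):
            if new_state not in self.TRANSITIONS[self.state]:
                raise ValueError(f"illegal transition {self.state} -> {new_state}")
            self.state = new_state

    class BusinessObject:
        def __init__(self):
            self.life_cycle = LifeCycle()
            self._abilities = {}

        def acquire(self, name, func):    # gain an ability dynamically
            self._abilities[name] = func

        def lose(self, name):             # lose it again
            self._abilities.pop(name, None)

        def perform(self, name, *args):
            return self._abilities[name](*args)

    order = BusinessObject()
    order.life_cycle.move_to("active")
    order.acquire("discount", lambda price: price * 0.9)
    print(order.perform("discount", 100.0))   # 90.0
    order.lose("discount")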

78 citations


Patent
11 Mar 1998
TL;DR: In this article, the authors propose a framework for the development of software applications using Object Oriented Technology and frameworks, where the purpose of frameworks is to provide significant portions of an application that are common across multiple implementations of the application.
Abstract: The present invention relates to a method of developing a software system using Object Oriented Technology. The present invention addresses the problem of providing a technical foundation for the development of software applications using Object Oriented Technology and frameworks. The present invention solves this problem with a framework supporting flexible interchange of domain algorithms. The present invention is applicable in the technical field of application development of software systems, e.g., for a business application as Financial or Logistic and Distribution, wherein it is the purpose of frameworks to provide significant portions of the application that are common across multiple implementations of the application in a general manner, easy to extend for specific implementation.

63 citations


Patent
Stephen C. Hughes
12 Feb 1998
TL;DR: In this paper, a run-time customization capability extends functionality of a software application in a computer system. Through object-oriented design, an instance of a first class is instantiated, where the first class (e.g., a derived class) has the same interface as a second class.
Abstract: A run-time customization capability extends functionality of a software application in a computer system. Through object-oriented design, an instance of a first class is instantiated. The first class (e.g., a derived class) has a same interface as a second class. The first and second classes enable respective first and second functionalities through respective first and second implementations of the same interface. The first implementation is dynamically loaded at run time. The dynamic loading can involve locating the first implementation, such as by using a locator to locate a module comprising the first implementation. A transfer vector usable in accessing the first implementation can be initialized to have an indication of a location of the first implementation. Programming code associated with the same interface can be compiled prior to compilation of programming code associated with the first implementation.
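
Outside the patent's particular mechanism (locators, transfer vectors), the same interface/implementation split with run-time loading can be sketched in Python with importlib; the module and class names below are hypothetical:

    import importlib

    class Renderer:
        # The shared interface; code against it can be compiled and tested
        # before any concrete implementation exists, as the abstract notes.
        def draw(self, shape):
            raise NotImplementedError

    def load_implementation(module_name, class_name):
        # Locate and load an implementation at run time; the module path
        # plays roughly the role of the patent's locator/transfer vector.
        module = importlib.import_module(module_name)
        cls = getattr(module, class_name)
        assert issubclass(cls, Renderer), "implementation must honor the interface"
        return cls()

    # Hypothetical usage (the plugin module does not exist here):
    # renderer = load_implementation("plugins.fast_renderer", "FastRenderer")
    # renderer.draw("circle")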

60 citations


01 Jan 1998
TL;DR: A framework that provides assurance on the correctness of program execution at run-time based on the Monitoring and Checking (MaC) architecture is described, and complements the two traditional approaches for ensuring that a system is correct, namely static analysis and testing.
Abstract: Computer systems are often monitored for performance evaluation and enhancement, debugging and testing, control or to check for the correctness of the system. Recently, the problem of designing monitors to check for the correctness of system implementation has received increased attention from the research community. Traditionally, verification has been used to increase the confidence that a system will be correct by making sure that a design specification is correct. However, even if a design has been formally verified, it still does not ensure the correctness of an implementation of the design. This is because the implementation often is much more detailed, and may not strictly follow the formal design. So, there is a possibility of introducing errors into an implementation of a design that has been verified. One way that people have traditionally tried to overcome this gap between the design and the implementation has been to test the implementation's behavior on a pre-determined set of input sequences. This approach, however, fails to provide guarantees about the correctness of the implementation on all possible input sequences. Consequently, when a system is running, it is hard to guarantee whether the current execution of the system is correct or not using the two traditional methods. Therefore, the approach of continuously monitoring a running system has received much attention, as it attempts to overcome the difficulties encountered by the two traditional methods for checking the correctness of the current execution of the system. We describe a framework that provides assurance on the correctness of program execution at run-time. This approach is based on the Monitoring and Checking (MaC) architecture, and complements the two traditional approaches for ensuring that a system is correct, namely static analysis and testing. Unlike these approaches, which try to ensure that all possible executions of the system are correct, this approach ensures only that the current execution of the system is correct. The MaC architecture consists of three components: filter, event recognizer, and runtime checker. The filter extracts low-level information, in the form of values of program variables, from the instrumented system code, and sends it to the event recognizer. From this low-level information, the event recognizer detects the occurrence of abstract events, and informs the run-time checker about these. The run-time checker then, based on the events, checks the conformance of the system's behavior on the current execution to the formal requirement specification for the system. Acknowledgment. This research was supported in part by NSF CCR-9415346, NSF CCR-9619910, AFOSR F49620-95-1-0508, ARO DAAG55-98-1-0393, ARO DAAG55-98-1-0466, and ONR N00014-97-1-0505 (MURI). The current address of Ben-Abdallah is the Département d'Informatique, Université de Sfax, Tunisia.
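
A toy rendering of the three MaC components in Python (the monitored variable, events, and requirement are all invented): the filter would emit variable updates from instrumented code, the event recognizer abstracts them into events, and the run-time checker judges the current execution only:

    # Toy MaC-style pipeline: filter -> event recognizer -> run-time checker.
    def event_recognizer(updates, threshold=100):
        # Abstract low-level variable updates into high-level events.
        for name, value in updates:
            if name == "queue_len" and value > threshold:
                yield ("overflow", value)
            elif name == "queue_len" and value == 0:
                yield ("drained", value)

    def runtime_checker(events):
        # Invented requirement: "overflow" must never occur twice without a
        # "drained" event in between. Only the current execution is checked.
        pending = False
        for kind, _ in events:
            if kind == "overflow":
                if pending:
                    return "VIOLATION: two overflows without drain"
                pending = True
            elif kind == "drained":
                pending = False
        return "OK"

    # Values a filter might have extracted from instrumented code:
    trace = [("queue_len", 50), ("queue_len", 120),
             ("queue_len", 0), ("queue_len", 130)]
    print(runtime_checker(event_recognizer(trace)))   # OK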

42 citations


01 Jan 1998
TL;DR: The Reference Mission was developed over a period of several years and was published in NASA Special Publication 6107 in July 1997 as discussed by the authors; it provided a workable model for the human exploration of Mars, described in enough detail that alternative strategies and implementations can be compared and evaluated.
Abstract: The Reference Mission was developed over a period of several years and was published in NASA Special Publication 6107 in July 1997. The purpose of the Reference Mission was to provide a workable model for the human exploration of Mars, which is described in enough detail that alternative strategies and implementations can be compared and evaluated. NASA is continuing to develop the Reference Mission and expects to update this report in the near future. It was the purpose of the Reference Mission to develop scenarios based on the needs of scientists and explorers who want to conduct research on Mars; however, more work on the surface-mission aspects of the Reference Mission is required and is getting under way. Some aspects of the Reference Mission that are important for the consideration of the surface mission definition include: (1) a split mission strategy, which arrives at the surface two years before the arrival of the first crew; (2) three missions to the outpost site over a 6-year period; (3) a plant capable of producing rocket propellant for lifting off Mars and caches of water, O, and inert gases for the life-support system; (4) a hybrid physico-chemical/bioregenerative life-support system, which emphasizes the bioregenerative system more in later parts of the scenario; (5) a nuclear reactor power supply, which provides enough power for all operations, including the operation of a bioregenerative life-support system as well as the propellant and consumable plant; (6) capability for at least two people to be outside the habitat each day of the surface stay; (7) telerobotic and human-operated transportation vehicles, including a pressurized rover capable of supporting trips of several days' duration from the habitat; (8) crew stay times of 500 days on the surface, with six-person crews; and (9) multiple functional redundancies to reduce risks to the crews on the surface. New concepts are being sought that would reduce the overall cost for this exploration program and reduce the risks that are indigenous to Mars exploration. Among those areas being explored are alternative space propulsion approaches, solar vs. nuclear power, and reductions in the size of crews.

Patent
James E. Carey, Brent Allen Carlson, Tore Dahl, Timothy Graser, Anders Nilsson, Mark Pasch
11 Mar 1998
TL;DR: In this article, a method of developing a software system using Object Oriented Technology and frameworks for developing a business application is presented; it is applicable in the technical field of application development of software systems.
Abstract: The present invention relates to a method of developing a software system using Object Oriented Technology and frameworks for developing a business application. The present invention addresses the problem of providing a technical foundation for the development of software applications using Object Oriented Technology and frameworks. The present invention solves this problem with a framework comprising a using non-financial component integration base class, a target financial component integration base class, and a generic data conversion engine. The present invention is applicable in the technical field of application development of software systems, e.g. for a business application as Financial or Logistic and Distribution, wherein it is the purpose of frameworks to provide significant portions of the application that are common across multiple implementations of the application in a general manner, easy to extend for specific implementation.

30 Jul 1998
TL;DR: Common assumptions about network RAM are reexamined, possible implementations are compared, the structure and performance of the user-level implementation are described, and various methods for providing reliability are investigated.
Abstract: The goal of network RAM is to improve the performance of memory intensive workloads by paging to idle memory over the network rather than to disk. In this paper, we reexamine common assumptions about network RAM, compare possible implementations, describe the structure and performance of our user-level implementation and investigate various methods for providing reliability.
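
Schematically (this is not the paper's user-level system), network RAM adds a tier to the paging hierarchy: evicted pages go to idle remote memory first and to disk only when remote space runs out. A minimal Python sketch with dictionaries standing in for the remote machine and the disk:

    from collections import OrderedDict

    class NetworkRAMPager:
        # Sketch only: dicts stand in for idle remote memory and local disk.
        def __init__(self, local_frames=2, remote_frames=4):
            self.local = OrderedDict()    # LRU-ordered resident pages
            self.local_frames = local_frames
            self.remote, self.remote_frames = {}, remote_frames
            self.disk = {}

        def _evict_lru(self):
            page, data = self.local.popitem(last=False)
            if len(self.remote) < self.remote_frames:
                self.remote[page] = data   # fast path: page over the network
            else:
                self.disk[page] = data     # slow path: page to disk

        def access(self, page):
            if page in self.local:
                self.local.move_to_end(page)   # refresh LRU position
                return self.local[page]
            if page in self.remote:
                data = self.remote.pop(page)   # fault serviced from network RAM
            else:
                data = self.disk.pop(page, "zero-filled page")
            if len(self.local) >= self.local_frames:
                self._evict_lru()
            self.local[page] = data
            return data

    pager = NetworkRAMPager()
    for p in ["a", "b", "c", "a", "d"]:
        pager.access(p)
    print(sorted(pager.remote), sorted(pager.disk))   # ['b', 'c'] []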

Proceedings ArticleDOI
02 Dec 1998
TL;DR: An approach for incorporating cores into a system-level specification to allow a designer to specify both custom behavior and pre-designed cores at the earliest design stages, and to refine both into implementations in a unified manner is described.
Abstract: We describe an approach for incorporating cores into a system-level specification. The goal is to allow a designer to specify both custom behavior and pre-designed cores at the earliest design stages, and to refine both into implementations in a unified manner. The approach is based on experience with an actual application of a GPS-based navigation system. We use an object oriented language for specification, representing each core as an object. We define three specification levels, and we evaluate the appropriateness of existing inter-object communication methods for cores. The approach forms the specification basis for the Dalton project.

01 Jan 1998
TL;DR: The dissertation extends the structural abstraction programming model to manage three levels of parallel control and data structures to match the multi-tier hardware, and presents a new communication orchestration model which combines structural abstraction with ideas from the inspector/executor paradigm.
Abstract: Multi-tier parallel computers such as clusters of symmetric multiprocessors (SMPs) offer both new opportunities and new challenges for high-performance computation. Although these computer platforms can potentially deliver unprecedented performance for computationally intensive scientific calculations, realizing the hardware's potential remains a formidable task. To achieve high performance, the programmer must coordinate several levels of parallelism and locality to match the hardware's capabilities. Current programming languages and software tools do not directly facilitate this task, and the resultant difficulties hinder efficient implementations of scientific calculations on SMP clusters. We present a concise set of programming abstractions that simplify implementation of efficient algorithms for block-structured scientific calculations on SMP clusters. The software infrastructure, KeLP, provides intuitive geometric mechanisms to help the programmer coordinate data layout, data motion, and parallel control. The KeLP constructs abstract away many low-level programming details of message-passing, thread management, synchronization, scheduling, and storage allocation. Nevertheless, KeLP still provides enough expressive power to implement effective multi-tier algorithms for a broad class of computationally-intensive scientific applications. Most importantly, the KeLP implementation adds little overhead to lower-level primitives, and KeLP performance usually matches or exceeds performance for comparable programs using lower-level primitives. The dissertation presents solutions to varied technical challenges in the realization of a concise, abstract, expressive, and efficient programming model for multi-tier computers. In particular, the dissertation extends the structural abstraction programming model to manage three levels of parallel control and data structures to match the multi-tier hardware. KeLP presents a new communication orchestration model which combines structural abstraction with ideas from the inspector/executor paradigm. In application studies, we present new multi-tier algorithms for several applications, including multigrid, matrix multiplication, and dense matrix factorization. Experimental results on several platforms expose bottlenecks that limit performance and trade-offs for algorithmic and hardware design. Finally, this research has resulted in the KeLP 2.0 implementation, a C++ class library which has been used successfully in a number of computational science research projects.
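
KeLP itself is a C++ class library; purely as a schematic of the multi-tier idea (names invented, not the KeLP API), the sketch below partitions a 1-D index space first across nodes and then across threads within each node, two of the levels of parallel control such multi-tier programming must coordinate:

    # Schematic two-level (node, thread) block decomposition for an SMP cluster.
    def split(lo, hi, parts):
        # Partition the index range [lo, hi) into `parts` contiguous blocks.
        step, rem = divmod(hi - lo, parts)
        blocks, start = [], lo
        for i in range(parts):
            end = start + step + (1 if i < rem else 0)
            blocks.append((start, end))
            start = end
        return blocks

    N, NODES, THREADS_PER_NODE = 1000, 4, 8
    ownership = {}
    for node, (nlo, nhi) in enumerate(split(0, N, NODES)):
        # Second tier: each node's block is subdivided among its threads.
        for thread, block in enumerate(split(nlo, nhi, THREADS_PER_NODE)):
            ownership[(node, thread)] = block
    print(ownership[(0, 0)], ownership[(3, 7)])   # (0, 32) (969, 1000)

Ghost-cell exchange and communication scheduling, which KeLP's orchestration model addresses, are omitted here.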

Proceedings ArticleDOI
Menas Kafatos, X.S. Wang, Zuotao Li, Ruixin Yang, D. Ziskin
01 Jul 1998
TL;DR: The project can serve as a model for a larger WP-ESIP Federation to assist in the overall data information system associated with future large Earth Observing System data sets and their distribution.
Abstract: We address the implementation of a distributed data system designed to serve Earth system scientists. A consortium led by George Mason University has been funded by NASA's Working Prototype Earth Science Information Partner (WP-ESIP) program to develop, implement, and operate a distributed data and information system. The system will address the research needs of seasonal-to-interannual scientists whose research focus includes phenomena such as El Niño, monsoons and associated climate studies. The system implementation involves several institutions using a multitiered client-server architecture. Specifically, the consortium involves an information system of three physical sites, GMU, the Center for Ocean-Land-Atmosphere Studies (COLA) and the Goddard Distributed Active Archive Center, distributing tasks in the areas of user services, access to data, archiving, and other aspects enabled by a low-cost, scalable information technology implementation. The project can serve as a model for a larger WP-ESIP Federation to assist in the overall data information system associated with future large Earth Observing System data sets and their distribution. The consortium has developed innovative information technology techniques such as content-based browsing, data mining and associated component working prototypes; analysis tools, particularly GrADS, developed by COLA and the preferred analysis tool of the working seasonal-to-interannual communities; and a Java front-end query engine working prototype.

01 Jan 1998
TL;DR: In this paper, the authors describe an innovative use of intranet-based training to facilitate ERP implementation in a manufacturing organization, where ERP implementation is considered to be a high-cost, high-risk project.
Abstract: Enterprise Resource Planning (ERP) systems enable organizations to gain better control over their operations and costs by tightly integrating related business functions. This results in substantial savings. They provide a migration path from legacy systems to a client-server environment. They also offer a solution to the year 2000 problem. This has led to a sharp increase in the demand for these systems. Implementation of ERP systems, however, is considered to be a high-cost, high-risk project. Training is a key factor in implementation success. ERP implementation affects the roles of a large number of employees in an organization, who must be trained within a relatively short time period. Organizing and delivering such training in a cost-effective and timely manner is a challenging task. Trainers are using innovative methods to accomplish this. Web-based training is recently gaining popularity as a convenient and economical means for imparting training to a large, widely dispersed audience. In this article we describe an innovative use of intranet-based training to facilitate ERP implementation in a manufacturing organization.

Book
30 Sep 1998
TL;DR: Communication Protocol Specification and Verification is written to address the need to specify a protocol using an FDT and to verify its correctness in order to uncover specification errors in the early stage of a protocol development process.
Abstract: Communication protocols are rules whereby meaningful communication can be exchanged between different communicating entities. In general, they are complex and difficult to design and implement. Specifications of communication protocols written in a natural language (e.g. English) can be unclear or ambiguous, and may be subject to different interpretations. As a result, independent implementations of the same protocol may be incompatible. In addition, the complexity of protocols makes them very hard to analyze in an informal way. There is, therefore, a need for precise and unambiguous specification using some formal languages. Many protocol implementations used in the field have suffered from failures, such as deadlocks. When the conditions in which the protocols work correctly have been changed, there has been no general method available for determining how they will work under the new conditions. It is necessary for protocol designers to have techniques and tools to detect errors in the early phase of design, because the later in the process that a fault is discovered, the greater the cost of rectifying it. Protocol verification is a process of checking whether the interactions of protocol entities, according to the protocol specification, do indeed satisfy certain properties or conditions, which may be either general (e.g., absence of deadlock) or specific to the particular protocol system directly derived from the specification. In the 1980s, an ISO (International Organization for Standardization) working group began a programme of work to develop formal languages which were suitable for Open Systems Interconnection (OSI). This group called such languages Formal Description Techniques (FDTs). Some of the objectives of ISO in developing FDTs were: enabling unambiguous, clear and precise descriptions of OSI protocol standards to be written, and allowing such specifications to be verified for correctness. There are two FDTs standardized by ISO: LOTOS and Estelle. Communication Protocol Specification and Verification is written to address the two issues discussed above: the need to specify a protocol using an FDT and to verify its correctness in order to uncover specification errors in the early stage of a protocol development process. The readership primarily consists of advanced undergraduate students, postgraduate students, communication software developers, telecommunication engineers, EDP managers, researchers and software engineers. It is intended as an advanced undergraduate or postgraduate textbook, and a reference for communication protocol professionals.
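
The mechanics behind such verification can be suggested with a tiny reachability check (Python, over a hand-built global state graph of two invented protocol entities; this is not LOTOS or Estelle): a reachable global state with no outgoing transitions is a deadlock.

    # Minimal reachability-based deadlock check; states are (entity_A, entity_B).
    transitions = {
        ("idle", "idle"): [("sent", "idle")],     # A sends a request
        ("sent", "idle"): [("sent", "got")],      # B receives it
        ("sent", "got"):  [("idle", "idle"),      # B replies, A receives
                           ("wait", "wait")],     # faulty path: both wait
        ("wait", "wait"): [],                     # no moves possible: deadlock
    }

    def find_deadlocks(start):
        seen, stack, deadlocks = set(), [start], []
        while stack:
            state = stack.pop()
            if state in seen:
                continue
            seen.add(state)
            succs = transitions.get(state, [])
            if not succs:
                deadlocks.append(state)   # reachable state with no successors
            stack.extend(succs)
        return deadlocks

    print(find_deadlocks(("idle", "idle")))   # [('wait', 'wait')]

Real FDT tools derive the global state graph automatically from the specification instead of taking it as input, but the search is the same in spirit.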

Journal ArticleDOI
01 Jan 1998
TL;DR: This paper is a reflection on the design process in Pueblo, a school-centered network community supported by a MOO; it shows how designers can rely on social practice to simplify a technical implementation, how they can design technical mechanisms to work toward a desirable social goal, and how similar technical implementations can have different social effects.
Abstract: Network communities are especially interesting and useful settings in which to look closely at the co-evolution of technology and social practice, to begin to understand how to explore the full space of design options and implications. In a network community we have a magnified view of the interactions between social practice and technical mechanisms, since boundaries between designers and users are blurred and co-evolution here is unusually responsive to user experience. This paper is a reflection on how we have worked with social and technical design elements in Pueblo, a school-centered network community supported by a MOO (an Internet-accessible, text-based virtual world). Four examples from Pueblo illustrate different ways of exploring the design space. The examples show how designers can rely on social practice to simplify a technical implementation, how they can design technical mechanisms to work toward a desirable social goal, how similar technical implementations can have different social effects, and how social and technical mechanisms co-evolve. We point to complexities of the design process and emphasize the contributions of mediators in addressing communication breakdowns among a diverse group of designers.

Proceedings ArticleDOI
13 Oct 1998
TL;DR: This paper presents a collection of design patterns that allow the designer to seamlessly integrate the synthesized code with the code frames generated by standard IDL compilers, and evaluates several implementations, solely based on standardized features of the CORBA standard.
Abstract: Middleware forms such as CORBA and DCOM provide standard component interfaces, interaction protocols and communication services to support interoperability of object-oriented applications operating in heterogeneous and distributed environments. General-purpose services and facilities foster re-use and help reduce development costs. Yet the degree of automation of the software development process is limited to the generation of skeleton and stub code from component interface specifications given in a common interface definition language (IDL). This is mainly due to the fact that the expressiveness of current IDLs is limited to the specification of type and operation signatures. Important properties of crucial components of security-, safety-critical or reactive applications such as object behavior, timing or synchronization constraints cannot be documented formally, let alone checked automatically. In this paper, we continue developing solutions for adding specifications of semantic properties to component interfaces and automatically synthesizing code that instruments corresponding semantic checks. Independently from the concrete syntax and semantics of such specification elements, we present a collection of design patterns that allow the designer to seamlessly integrate the synthesized code with the code frames generated by standard IDL compilers. We study these approaches along the concrete example of extending CORBA IDL with synchronization constraints and we evaluate several implementations, solely based on standardized features of the CORBA standard.
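
As a loose analogy to the paper's patterns for weaving synthesized constraint checks into IDL-generated code frames (Python, with an invented servant; no real CORBA API is used), a decorator can sit between the dispatch path and the hand-written method body and enforce a synchronization constraint:

    import threading

    def max_concurrent(limit):
        # Stands in for synthesized constraint-checking code that a generator
        # would splice between the IDL skeleton and the servant method.
        sem = threading.BoundedSemaphore(limit)
        def wrap(method):
            def guarded(*args, **kwargs):
                if not sem.acquire(blocking=False):
                    raise RuntimeError("synchronization constraint violated")
                try:
                    return method(*args, **kwargs)
                finally:
                    sem.release()
            return guarded
        return wrap

    class AccountServant:               # stands in for an IDL-generated skeleton
        @max_concurrent(1)              # constraint: at most one caller at a time
        def withdraw(self, amount):
            return f"withdrew {amount}"

    print(AccountServant().withdraw(10))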

Proceedings ArticleDOI
04 Mar 1998
TL;DR: The Stanford Validity Checker (SVC), a highly efficient, general-purpose decision procedure for quantifier-free first-order logic with linear arithmetic, is adapted to check the consistency of specifications written in Requirements State Machine Language (RSML).
Abstract: The increasing use of software in safety-critical systems entails increasing complexity, challenging the safety of these systems. Although formal specifications of real-life systems are orders of magnitude simpler than the system implementations, they are still quite complex. It is easy to overlook problems in a specification, ultimately compromising the safety of the implementation. Since it is error-prone and time-consuming to check large specifications manually, mechanical support is needed. The challenge is to find the right combination of deductive power (i.e., how rich a logic and what theories are decided) and efficiency to complete the verification in reasonable time. In addition, it must be possible to explain why a proof fails. As an initial approach to solving this problem, we have adapted the Stanford Validity Checker (SVC), a highly efficient, general-purpose decision procedure for quantifier-free first-order logic with linear arithmetic, to check the consistency of specifications written in Requirements State Machine Language (RSML). We have concentrated on a small but complex part of version 6.04a of the specification of the (air) Traffic alert and Collision Avoidance System (TCAS II). SVC was extended to produce a counter-example in terms of the original specification. The effort discovered an undesired inconsistency in the specification, which the maintainers of the specification independently discovered and subsequently fixed in the most recent version. The case study demonstrates the practicality of uncovering problems in real-life specifications with a modest effort, by selective application of state-of-the-art formal methods and tools. The logic of SVC was sufficiently expressive for the properties that we checked, but more work is needed to extend the class of formulae that SVC decides to cover the properties found in other parts of the TCAS II specification.

Book ChapterDOI
14 Apr 1998
TL;DR: This paper addresses the construction of an object-oriented Genetic Programming framework based on design patterns to increase its flexibility and reusability.
Abstract: A large body of public domain software exists which addresses standard implementations of the Genetic Programming paradigm. Nevertheless, researchers are frequently confronted with the lack of flexibility and reusability of the tools when, for instance, one wants to alter the genotype representation or the overall behavior of the evolutionary process. This paper addresses the construction of an object-oriented Genetic Programming framework based on design patterns to increase its flexibility and reusability.
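
In the spirit of the paper, though not its actual framework, the Strategy pattern lets the genotype representation vary independently of the evolutionary loop; in the Python sketch below (all names invented), any object exposing random_program, mutate, and evaluate can be swapped in:

    import random

    class TreeGenotype:
        # One interchangeable genotype representation (a Strategy); a class
        # with the same three methods could replace it without touching run().
        OPS = ["+", "*"]

        def random_program(self, depth=2):
            if depth == 0:
                return random.choice(["x", "1"])
            return [random.choice(self.OPS),
                    self.random_program(depth - 1), self.random_program(depth - 1)]

        def mutate(self, prog):
            return self.random_program()   # crude mutation: fresh random tree

        def evaluate(self, prog, x):
            if prog == "x": return x
            if prog == "1": return 1
            op, a, b = prog
            a, b = self.evaluate(a, x), self.evaluate(b, x)
            return a + b if op == "+" else a * b

    def run(genotype, target=lambda x: x * x, generations=200):
        # Evolutionary loop written only against the genotype interface.
        fitness = lambda p: sum(abs(genotype.evaluate(p, x) - target(x))
                                for x in range(5))
        best = genotype.random_program()
        for _ in range(generations):
            child = genotype.mutate(best)
            if fitness(child) < fitness(best):
                best = child
        return best

    print(run(TreeGenotype()))

Replacing TreeGenotype with, say, a linear-genome class changes the representation without touching run, which is the flexibility the authors are after.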

Patent
11 Mar 1998
TL;DR: In this paper, a method of developing a software system using Object Oriented Technology and frameworks for developing a business application is presented, which is applicable in the technical field of application development of software systems, e.g., financial or logistic and distribution.
Abstract: The present invention relates to a method of developing a software system using Object Oriented Technology and frameworks for developing a business application. The present invention is applicable in the technical field of application development of software systems, e.g. for a business application as Financial or Logistic and Distribution, wherein it is the purpose of frameworks to provide significant portions of the application that are common across multiple implementations of the application in a general manner, easy to extend for specific implementation.

Book ChapterDOI
Elizabeth Kendall
20 Jul 1998
TL;DR: Role modeling addresses software specification, analysis, and design; however, implementations that are solely class or component based become unwieldy and difficult to manage, control, and debug when they are distributed across many components that are instances of different classes.
Abstract: Role modeling addresses software specification, analysis, and design. Many role models are documented patterns, and they do have corresponding object-oriented designs. However, a design and implementation that is solely class or component based has its problems and limitations. Role model implementations become unwieldy and difficult to manage, control, and debug when they are distributed across many components that are instances of different classes. Higher-level, alternative language constructs are needed.
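
Absent the higher-level language constructs the chapter calls for, role models are often emulated by attaching role objects to a core object; in the Python sketch below (names invented), one object plays several roles without the collaboration being scattered across a class hierarchy:

    class Person:
        # Core object, kept independent of any role it may play.
        def __init__(self, name):
            self.name, self.roles = name, {}

        def play(self, role_cls, **kwargs):
            self.roles[role_cls.__name__] = role_cls(self, **kwargs)

        def as_role(self, role_name):
            return self.roles[role_name]

    class Buyer:
        # A role from a hypothetical "negotiation" role model.
        def __init__(self, core, budget):
            self.core, self.budget = core, budget
        def bid(self, price):
            return price <= self.budget

    class Seller:
        # A second role the same core object can hold at the same time.
        def __init__(self, core, reserve):
            self.core, self.reserve = core, reserve
        def accept(self, offer):
            return offer >= self.reserve

    alice = Person("alice")
    alice.play(Buyer, budget=100)
    alice.play(Seller, reserve=80)
    print(alice.as_role("Buyer").bid(90), alice.as_role("Seller").accept(90))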

Journal ArticleDOI
TL;DR: The second survey found that the overall motivating reasons for implementing total quality management (TQM) remained essentially the same, and that most firms still believed that TQM is a good idea, as discussed by the authors.
Abstract: Three years ago research was conducted regarding total quality management (TQM) implementations in engineering and construction companies. This paper presents the second snapshot of the status of the TQM implementations in these companies. The same TQM implementation survey that was used in the initial study was sent to the same six firms—only three years later. This work shows the progress of the TQM implementations, identifies the changes in the perceptions of the six firms toward TQM, and analyzes the correlations established in the initial study. This second survey provided another node of data, through which several trends and similarities were discovered and documented. The second survey found that the overall motivating reasons for implementing TQM remained essentially the same, and that most firms still believed that TQM is a good idea. The methods and effectiveness of implementing TQM, however, did vary substantially between companies over the three years. Some firms completely abandoned their TQM implementations while others achieved award-winning results.

Patent
Brent Allen Carlson, Neil Patterson
11 Mar 1998
TL;DR: In this paper, the authors present a method of developing a software system using Object Oriented Technology (OOT) and a framework supporting flexible interchange of domain algorithms, which is applicable in the technical field of application development of software systems, e.g. financial or logistic and distribution.
Abstract: A method of developing a software system using Object Oriented Technology. The present invention addresses the problem of providing a technical foundation for the development of software applications using Object Oriented Technology and frameworks. The present invention solves this problem with a framework supporting flexible interchange of domain algorithms. The present invention is applicable in the technical field of application development of software systems, e.g. for a business application as Financial or Logistic and Distribution, wherein it is the purpose of frameworks to provide significant portions of the application that are common across multiple implementations of the application in a general manner, easy to extend for specific implementation.

Journal ArticleDOI
TL;DR: Common approaches for parallel implementations, like lock protection for concurrent accesses and sequential or distributed task queues, are replaced by more efficient access mechanisms and data structures, which can be realized by the powerful multiprefix operations of the SB-PRAM.
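
Multiprefix operations are essentially hardware parallel fetch-and-add; the flavor of replacing a lock-protected task queue with such an operation can be sketched in Python, with itertools.count standing in for the atomic counter (in CPython, next() on it completes without interleaving, so no lock is needed in this sketch):

    import itertools
    import threading

    # A shared task array indexed by a fetch-and-add counter replaces a
    # lock-protected queue, in the style of multiprefix (MPADD) operations.
    tasks = [f"task-{i}" for i in range(1000)]
    ticket = itertools.count()   # stand-in for an atomic fetch-and-add counter

    def worker(done):
        while True:
            i = next(ticket)     # claim a unique index without any lock
            if i >= len(tasks):
                return
            done.append(tasks[i])   # "process" the claimed task

    done = []
    threads = [threading.Thread(target=worker, args=(done,)) for _ in range(4)]
    for t in threads: t.start()
    for t in threads: t.join()
    print(len(done), len(set(done)))   # 1000 1000: each task claimed exactly once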

Book
01 Jan 1998
TL;DR: This carefully written text will be of interest to engineers, scientists, and program managers involved in geologic exploration, aircraft design, image processing, weather modeling, operations, research, chemical synthesis, and medical applications.
Abstract: From the Publisher: UNDERSTANDING PARALLEL SUPERCOMPUTING is an exhaustive, applications-oriented survey of the world's largest and fastest computers. Beginning with the evolution of parallel supercomputing technology in recent history, author R. Michael Hord goes on to illustrate architectural concepts and implementations at the very center of today's cutting-edge technology. Topics featured include: technology benefits and drawbacks, software tools and programming languages, major programming concepts, sample parallel programs, algorithmic methods, and both SIMD and MIMD architectures. This carefully written text will be of interest to engineers, scientists, and program managers involved in geologic exploration, aircraft design, image processing, weather modeling, operations research, chemical synthesis, and medical applications. It will also be of practical use to computer specialists.

01 Mar 1998
TL;DR: The convergence of citation searching and Web linking towards citation linking will be examined in the paper with reference to two examples: Web of Science, a new service for scientists and social scientists from ISI, and an Open Journal of Cognitive Science developed by the Open Journal project.
Abstract: For over 40 years the citation databases from the Institute for Scientific Information (ISI) have provided a unique and incisive tool not just for researching the academic literature but for measuring and evaluating it as well. With the emergence of the World Wide Web this tool has been developed to place cited reference searching directly within the control of the user. Improved access to information is one feature of the Web, in particular through the mechanism of the Web link. The convergence of citation searching and Web linking towards citation linking will be examined in the paper with reference to two examples: Web of Science, a new service for scientists and social scientists from ISI, and an Open Journal of Cognitive Science developed by the Open Journal project. Web of Science provides links to the literature, links to holdings, to document delivery, and integration with bibliographic tools. Using citations -- the links made by authors themselves -- users can navigate between their current work and prior work in the archives of the research literature, or take a recent paper and move forward, tracking the citations dynamically. Literature searching is incomplete without links to the primary journal literature. The Open Journal of Cognitive Science, a collaborative work between the project and ISI, illustrates how the Web model of linking has been extended, linking from the abstracted literature to the full-text journal. Both implementations are illustrated in the paper.

01 Jan 1998
TL;DR: This dissertation presents a design methodology that uses a combination of formal and informal mappings to refine a high-level specification into an implementation, applying compositional refinement verification to relate a synchronous implementation finite-state machine to an asynchronous specification state machine.
Abstract: Communication protocol design involves 4 complementary domains: specification, verification, performance estimation, and implementation. Typically, these technologies are treated as separate, unrelated phases of the design: formal specification, formal verification, and implementation, in particular, are rarely approached from an integrated systems perspective. For systems that are implemented using a combination of hardware and software, a significant technical barrier to this integration is the lack of an automated, formal mapping from an abstract, high-level specification to a detailed implementation in either synchronous hardware or non-deterministically interleaved software threads. This dissertation presents a design methodology that uses a combination of formal and informal mappings to refine a high-level specification into an implementation. A taxonomy of formal languages that are commonly used for protocol or finite-state machine (FSM) description is developed and used to identify when a particular formal model is most useful in the design flow. The methodology relies on an informal specification to develop a formal description that can be formally verified at the asynchronous message-passing behavioral level. Central to the methodology is application of compositional refinement verification to relate a synchronous implementation finite-state machine to an asynchronous specification state machine. An architectural template for an embedded communication system is used to facilitate the mapping between the specification and a software implementation, and a prototype operating system and low-level interface units provide the necessary interprocess communication infrastructure between hardware and software.