
Showing papers on "System integration published in 1990"


Book ChapterDOI
01 Nov 1990
TL;DR: In this article, the authors describe various types of tool integration with the goal of illustrating how diverse tools can be effectively integrated into CASE environments, where issues of data integration, control integration, and presentation integration may be viewed as orthogonal, together defining a three-dimensional space in which tool integration occurs.
Abstract: This paper describes the various types of tool integration with the goal of illustrating how diverse tools can be effectively integrated into CASE environments. Issues of data integration, control integration, and presentation integration may be viewed as orthogonal, together defining a three-dimensional space in which tool integration occurs. The absence of standards has been shown to be a barrier to integration, as various tool developers remain unable to reach agreement on the appropriate point(s) in this space at which integration should occur. As a result, experience with tool integration has been largely at a tool-to-tool level, with little use of standard tool integration mechanisms.
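
The paper's framing of data, control, and presentation integration as orthogonal axes lends itself to a small illustration. The sketch below models a tool pairing as a point in that three-dimensional space; the enum levels and names are invented for illustration, not taken from the paper.

```python
from dataclasses import dataclass
from enum import Enum

class Level(Enum):
    NONE = 0      # tools share nothing along this dimension
    PARTIAL = 1   # ad hoc, tool-to-tool agreements
    FULL = 2      # a shared, standard integration mechanism

@dataclass
class IntegrationPoint:
    """A point in the paper's three-dimensional tool-integration space."""
    data: Level          # shared schemas, common repositories
    control: Level       # event- or message-based tool invocation
    presentation: Level  # common look-and-feel, shared UI toolkit

# The paper's observation in miniature: most 1990-era integrations sat at
# points like this one, pairwise data sharing with no standard mechanism.
ad_hoc = IntegrationPoint(data=Level.PARTIAL,
                          control=Level.NONE,
                          presentation=Level.NONE)
print(ad_hoc)
```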

256 citations


Journal ArticleDOI
TL;DR: The implementation of Optimizer has made it possible to make strategic changes to the configuration and control of the parts distribution network, and Optimizer has proven to be a highly flexible planning and operational control system.
Abstract: IBM recently implemented Optimizer, a system for flexible and optimal control of service levels and spare parts inventory, in its US network for service support. It is based upon recent research in multi-echelon inventory theory, adapted to address the IBM network. The inherent complexity and very large scale of the basic problem required IBM to develop suitable algorithms and sophisticated data structures, and demanded large-scale systems integration. Optimizer has greatly improved IBM's US service business: its implementation has made it possible to make strategic changes to the configuration and control of the parts distribution network, simultaneously reducing inventory investment and operating costs while improving service levels. Most important, however, Optimizer has proven to be a highly flexible planning and operational control system.
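
The abstract does not give Optimizer's algorithms, which solve a far harder multi-echelon problem over the whole parts network. As a hedged illustration of the underlying inventory-theoretic flavor only, here is a textbook single-location base-stock calculation; every parameter value is invented.

```python
from math import sqrt
from statistics import NormalDist

def base_stock_level(mean_daily_demand: float, sd_daily_demand: float,
                     lead_time_days: float, service_level: float) -> float:
    """Single-location base-stock level meeting a target fill probability.

    A single-echelon textbook simplification, not Optimizer's method:
    demand over the lead time is treated as normal, and a safety factor z
    is chosen for the desired service level.
    """
    mu = mean_daily_demand * lead_time_days          # expected lead-time demand
    sigma = sd_daily_demand * sqrt(lead_time_days)   # its standard deviation
    z = NormalDist().inv_cdf(service_level)          # safety factor
    return mu + z * sigma

# A part consumed ~4/day (sd 2), 9-day replenishment, 95% service target:
print(round(base_stock_level(4.0, 2.0, lead_time_days=9, service_level=0.95), 1))
```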

251 citations


Proceedings ArticleDOI
Flaviu Cristian, Bob Dancey, Jon Dehn
03 Sep 1990
TL;DR: The general approach to fault-tolerance adopted in AAS is discussed, by reviewing some of the questions which were asked during the system design, various alternative solutions considered, and the reasons for the design choices made.
Abstract: The Advanced Automation System is a distributed real-time system under development by IBM's Systems Integration Division for the US Federal Aviation Administration. The system is intended to replace the present en-route and terminal approach US air traffic control computer systems over the next decade. High availability of air traffic control services is an essential requirement of the system. This paper discusses the general approach to fault-tolerance adopted in AAS, by reviewing some of the questions which were asked during the system design, various alternative solutions considered, and the reasons for the design choices made.

105 citations


Journal ArticleDOI
01 Jul 1990
TL;DR: An information modeling and design approach using a metadatabase framework and a two-stage entity-relationship (TSER) methodology to address the system integration problem is presented.
Abstract: An information modeling and design approach using a metadatabase framework and a two-stage entity-relationship (TSER) methodology to address the system integration problem is presented. The metadatabase framework is a simplification design entailing a federated system architecture, an integrated information model, and a knowledge-based control methodology. The TSER methodology, used for the information model, features both a semantic modeling construct encompassing the US Air Force's IDEF approach and an operational modeling construct for consolidating data structures across the manufacturing facility. Preliminary results of a pilot study, including both modeling and software system development, are included to illustrate the theoretical work.

59 citations


Proceedings ArticleDOI
01 Mar 1990
TL;DR: An attempt is made to sort out the wide variety of concepts used in connection with systems integration and to give an overview of the field.
Abstract: An attempt is made to sort out the wide variety of concepts used in connection with systems integration and to give an overview of the field. Four aspects of systems integration are addressed: integration technology, integration architecture, semantic integration, and user integration. It is concluded that systems integration is difficult and complicated, and that there are no obvious shortcuts. Integration is made harder still by the fact that most existing systems are not particularly integration-friendly. The authors give some guidelines on how systems should be built to be more integration-friendly, and some recommendations on how existing systems can be made more integration-friendly.

45 citations


Journal ArticleDOI
TL;DR: The U.S. still holds a leadership position in the development of software and in the area of systems and systems integration; however, as this paper argues, keeping that position requires an evolution from the traditional serial model of product development to a parallel model which relies heavily on the use of new tools such as expert systems, collaborative systems, the modeling of manufacturing and product delivery, and other software systems and applications.
Abstract: Much of the problem with regard to U.S. competitiveness in world markets resides in the way the U.S. has managed the application of new technology in the entire process of technology transfer. The key to success for U.S. corporations lies in building on their areas of strength—the U.S. still holds a leadership position in the development of software and in the area of systems and systems integration. The fast pace of global competition requires an evolution from the traditional serial model of product development to a parallel model. This parallel model relies heavily on the use of new tools such as expert systems, collaborative systems, the modeling of manufacturing and product delivery, and other software systems and applications. Capitalizing on these U.S. strengths will require a national effort in building an information infrastructure and a close collaboration between corporations and universities.

30 citations


Proceedings ArticleDOI
01 Mar 1990
TL;DR: The approach helps to deal with three aspects of software standards that affect systems integration: periodic revision, missing features that result in the use of proprietary system services, and imprecise, natural language specification.
Abstract: A set of standards for an open systems environment is presented, and an approach to the use of these standards in systems integration is defined. In particular, the approach helps to deal with three aspects of software standards that affect systems integration: periodic revision, missing features that result in the use of proprietary system services, and imprecise, natural language specification. The architectural approach is consistent with the toolkit model of systems development that has been popularized by window systems, and takes advantage of the features provided by many window systems for building user-defined components.

26 citations


Proceedings ArticleDOI
01 Mar 1990
TL;DR: A model is proposed for post-facto tool integration which is based on a module interconnection language for loosely coupled, communicating processes where existing tools are encapsulated as concurrent objects.
Abstract: Experience in building major pieces of a software development platform by integrating existing software components into a uniform programming environment is reported. These components are written in incompatible languages and follow idiosyncratic protocols and conventions. These are examples of a common software engineering practice called post-facto integration. In this style of integration, the integrated system is designed after its major subcomponents have been built, often for a rather different purpose. A research effort has been initiated to better understand this style of integration and to develop tools and methods to make it a more productive and reliable software engineering practice. Two models for post-facto integration have been identified, one for tools integration and one for information integration. A model is proposed for post-facto tool integration which is based on a module interconnection language for loosely coupled, communicating processes where existing tools are encapsulated as concurrent objects.
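
As a rough sketch of the proposed model, the following wraps an existing command-line tool as a concurrent object reachable only through message queues. The thread/queue/subprocess mechanism is an assumption standing in for the paper's module interconnection language, and the tool used is purely illustrative.

```python
import queue
import subprocess
import threading

class ToolObject:
    """Encapsulate an existing tool as a concurrent object: the integrated
    system never calls it directly, only exchanges messages with it (a loose
    analogue of the paper's loosely coupled, communicating processes)."""

    def __init__(self, argv):
        self.inbox = queue.Queue()
        self.outbox = queue.Queue()
        self._argv = argv
        threading.Thread(target=self._serve, daemon=True).start()

    def _serve(self):
        while True:
            request = self.inbox.get()               # message in...
            result = subprocess.run(self._argv + [request],
                                    capture_output=True, text=True)
            self.outbox.put(result.stdout)           # ...message out

# Usage: the rest of the environment talks to the tool only via messages.
tool = ToolObject(["echo"])
tool.inbox.put("hello from the integrated environment")
print(tool.outbox.get(), end="")
```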

23 citations


Proceedings ArticleDOI
03 Jul 1990
TL;DR: An integrated tele-robotics system for remote execution of assembly and disassembly of mechanical components in hazardous environments such as nuclear power plants, undersea, and space is described; by combining manipulation skills with an environment model, robustness and reliability in remote task execution have been achieved.
Abstract: An integrated tele-robotics system for remote execution of assembly and disassembly of mechanical components in hazardous environments such as nuclear power plants, undersea, and space is described. The key concept of this system integration is the combination of manipulation skills with an environment model. The model provides the geometric structure and physical properties of the objects in the environment, while the manipulation skills enable reliable task execution in the presence of unavoidable errors and uncertainties. Based on the model and skills, the system autonomously executes specified tasks. In addition, the system provides the operator with a 1-DOF bilateral control dial as an intervention tool for real-time remote assistance of manipulator movement for error avoidance and recovery. With these features, robustness and reliability in remote task execution have been achieved.
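
A minimal sketch of the execution pattern the abstract describes: skills retried autonomously against the environment model, with operator intervention as the fallback. The skill, poses, and failure probabilities are all invented for illustration.

```python
import random

def insert_peg(pose):
    """A manipulation skill: tolerates small pose errors, sometimes fails
    (failure simulated here with a random draw)."""
    return random.random() > 0.3

def request_operator_assist(pose):
    """Stand-in for the paper's 1-DOF bilateral control dial intervention."""
    print(f"operator guiding manipulator near pose {pose}")
    return "recovered"

def execute_task(skill, pose, max_retries=3):
    """Autonomous execution first; only repeated skill failure escalates
    to real-time remote assistance by the operator."""
    for _ in range(max_retries):
        if skill(pose):
            return "done"
    return request_operator_assist(pose)

print(execute_task(insert_peg, pose=(0.52, 0.10, 0.33)))
```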

23 citations


Proceedings ArticleDOI
D. Pillai
01 Oct 1990
TL;DR: A detailed analysis of the needs and requirements for successfully designing and implementing automated material handling and control systems in wafer fabs is presented. Potential system configurations and options for both interbay and intrabay material handling designs are discussed; the control system requirements, the importance of total systems integration, and the role of the skilled production operator are addressed. Constant references are made to material movement dynamics, a critical factor that affects the performance and acceptance of the system.
Abstract: A detailed analysis of the needs and requirements for successfully designing and implementing automated material handling and control systems in wafer fabs is presented. In addition, potential system configurations and options for both interbay and intrabay material handling designs are discussed. The control system requirements, the importance of total systems integration, and the role of the skilled production operator are addressed. Constant references are made to material movement dynamics, a critical factor that affects the performance and acceptance of the system.

19 citations


Journal ArticleDOI
TL;DR: An overview of an electric distribution management system that can support evolution and flexibility for customers in the 1990s and beyond is provided and the ability to evolve, system integration, and user-focused development are discussed.
Abstract: A system solution approach to information management for electrical distribution that meets the requirements being imposed by industry trends and the electric utility customer is presented. The approach is based on three significant phenomena in the computer industry: the replacement of traditional mainframe computer solutions with networked microcomputer and workstation solutions; the emergence of a standards-based software products industry; and the movement toward object-oriented software methodologies. The resulting technology enables the construction of generalized information systems that can evolve and be augmented over extended life cycles while containing maintenance and lifetime costs. An overview of an electric distribution management system that can support evolution and flexibility for customers in the 1990s and beyond is provided. The ability to evolve, system integration, and user-focused development are discussed.

Journal ArticleDOI
01 Jul 1990
TL;DR: Findings from NASA's space electronics division's (SED) advanced systems studies related to future communications satellite services that will require onboard switching and processing technology are reviewed.
Abstract: Findings from NASA's space electronics division's (SED's) advanced systems studies related to future communications satellite services that will require onboard switching and processing technology are reviewed. SED's digital signal switching and processing technology program is reviewed. This program responds to specific systems technology development needs for enabling commercial development of future satellite services. The technologies include: modulators, demodulators, and forward error-correction hardware for space- and ground-based applications; onboard information switching and processing, onboard network control, and health monitoring; and cost-efficient ground terminals. The in-house systems integration, test, and evaluation (SITE) project, which includes a laboratory testbed for evaluating technology in a simulated systems environment, is reviewed.

Journal Article
TL;DR: The scope of the work being done at CISL is examined, in particular semantic aspects of the integration process and methods for tracking data sources in composite information systems are examined.
Abstract: The Composite Information Systems Laboratory (CISL) at MIT is involved in research on the strategic, organizational and technical aspects of the integration of multiple heterogeneous database systems. In this paper we examine the scope of the work being done at CISL, emphasizing certain research efforts in the technical areas of system integration, in particular semantic aspects of the integration process and methods for tracking data sources in composite information systems.
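
One of the CISL themes named above, tracking data sources through composition, can be sketched as values that carry source attributions which union when combined. This is an illustration of the idea, not CISL's actual mechanism; the class and database names are invented.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Attributed:
    """A value that carries the set of source systems it was derived from,
    so provenance survives composition across heterogeneous databases."""
    value: object
    sources: frozenset  # names of contributing systems

def merge(a: Attributed, b: Attributed, combine):
    """Combining values from two systems unions their source attributions."""
    return Attributed(combine(a.value, b.value), a.sources | b.sources)

price = Attributed(102.5, frozenset({"db_finance"}))
shares = Attributed(300, frozenset({"db_trading"}))
position = merge(price, shares, lambda p, q: p * q)
print(position)  # value=30750.0, sources from both databases
```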

Book ChapterDOI
19 Sep 1990
TL;DR: The volume and complexity of this information, uncertainty in the data and the understanding of processes, as well as the often very large number of alternatives to be considered require specific data processing tools.
Abstract: Environmental planning and management require comprehensive and interdisciplinary information as the scientific and technical information basis for what are, ultimately, political decisions. The volume and complexity of this information, uncertainty in the data and the understanding of processes, as well as the often very large number of alternatives to be considered require specific data processing tools.

R. Summers
03 Apr 1990
TL;DR: In this paper, the author describes some design approaches based on microcomputers and digital and linear ASIC technologies for the integration of such multifunction electricity measurement and telemetering systems.
Abstract: The conventional Ferraris electricity wheel meter has served the industry well over many decades as a basic instrument for measuring domestic power and is likely to survive for a good many more. However, multi-tariffing systems to facilitate efficient load management will require the use of sophisticated semiconductor-based solutions. As cost-reduction considerations continue to drive system integration, the gradual replacement of the mechanical Ferraris wheel function by semiconductor devices is inevitable. The author briefly describes some design approaches based on microcomputers, and digital and linear ASIC technologies, for the integration of such multifunction electricity measurement and telemetering systems.
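
The core measurement such a semiconductor meter performs, accumulating sampled v(t)·i(t) into energy, fits in a few lines. This is only the arithmetic skeleton under assumed supply and load figures; real designs add calibration, anti-tamper features, and the multi-tariff registers the abstract mentions.

```python
import math

def watt_hours(voltage_samples, current_samples, sample_period_s):
    """Accumulate energy from sampled v(t)*i(t), the computation a
    microcomputer or ASIC meter performs in place of the Ferraris wheel."""
    joules = sum(v * i for v, i in zip(voltage_samples, current_samples)) * sample_period_s
    return joules / 3600.0  # 1 Wh = 3600 J

# One second of a 230 V RMS, 50 Hz supply into a 50-ohm resistive load,
# sampled at 1 kHz (all figures illustrative):
dt = 0.001
volts = [325 * math.sin(2 * math.pi * 50 * n * dt) for n in range(1000)]
amps = [v / 50.0 for v in volts]
print(round(watt_hours(volts, amps, dt), 4), "Wh")  # ~0.29 Wh
```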

Book
11 Oct 1990
TL;DR: A book covering the manufacturing process; computers in manufacturing; computer-aided engineering systems; computer-aided production systems; integration of CAD and CAM technologies; computer-aided business; computer-integrated manufacturing; planning for CIM; the human factors of CIM; and implementing a CIM system.
Abstract: Topics covered include: the manufacturing process; computers in manufacturing; computer-aided engineering systems; computer-aided production systems; integration of CAD and CAM technologies; computer-aided business; computer-integrated manufacturing; planning for CIM; the human factors of CIM; and implementing a CIM system.

Proceedings ArticleDOI
24 Jun 1990
TL;DR: In identifying problem areas and research efforts to meet application requirements, it is observed that some of the most promising research involves the integration of speech algorithm techniques including speech coding, speech recognition, and speaker recognition.
Abstract: This paper presents a study of military applications of advanced speech processing technology which includes three major elements: (1) review and assessment of current efforts in military applications of speech technology; (2) identification of opportunities for future military applications of advanced speech technology; and (3) identification of problem areas where research in speech processing is needed to meet application requirements, and of current research thrusts which appear promising. The relationship of this study to previous assessments of military applications of speech technology is discussed, and substantial recent progress is noted. Current efforts in military applications of speech technology which are highlighted include: (1) narrowband (2400 b/s) and very low-rate (50--1200 b/s) secure voice communication; (2) voice/data integration in computer networks; (3) speech recognition in fighter aircraft, military helicopters, battle management, and air traffic control training systems; and (4) noise and interference removal for human listeners. Opportunities for advanced applications are identified by means of descriptions of several generic systems which would be possible with advances in speech technology and in system integration. These generic systems include: (1) integrated multi-rate voice/data communications terminal; (2) interactive speech enhancement system; (3) voice-controlled pilot's associate system; (4) advanced air traffic control training systems; (5) battle management command and control support system with spoken natural language interface; and (6) spoken language translation system. In identifying problem areas and research efforts to meet application requirements, it is observed that some of the most promising research involves the integration of speech algorithm techniques including speech coding, speech recognition, and speaker recognition.

Proceedings ArticleDOI
23 May 1990
TL;DR: This two-part paper presents the concept, analysis, implementation, and verification of a method for compensating delays that are distributed between the sensor, controller, and actuator within a control loop.
Abstract: Large-scale dynamical systems, such as advanced aircraft, spacecraft, and autonomous manufacturing plants, require high-speed and reliable communications between the individual components and subsystems for decision-making and control. This can be accomplished by Integrated Communication and Control Systems (ICCS) which use asynchronous time-division-multiplexed networks. However, these networks introduce randomly varying delays. The two-part paper presents the concept, analysis, implementation, and verification of a method for compensating delays that are distributed between the sensor(s), controller, and actuator(s) within a control loop. In this paper, which is the first of two parts, the delay compensation algorithm is formulated and analyzed. The above algorithm is implemented in the second part, and it is verified by experimentation at an IEEE 802.4 network testbed as well as by simulation of the flight control system of an advanced aircraft.
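
The paper's compensation algorithm is only summarized above. A common model-based approach, shown here as a hedged sketch rather than the paper's exact formulation, propagates the delayed state sample forward through the plant model using the controls issued while the sample was in transit on the network.

```python
import numpy as np

def compensate_delay(x_sensed, u_history, A, B, delay_steps):
    """Propagate a delayed state measurement forward through the discrete
    plant model x[k+1] = A x[k] + B u[k], so the control law acts on an
    estimate of the *current* state despite the network-induced delay."""
    x = np.asarray(x_sensed, dtype=float)
    for u in u_history[-delay_steps:]:   # controls applied during transit
        x = A @ x + B @ u
    return x

# Illustrative double-integrator plant sampled at 0.1 s:
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.005],
              [0.1]])
x_now = compensate_delay([1.0, 0.2],
                         [np.array([0.5]), np.array([0.4])],
                         A, B, delay_steps=2)
print(x_now)
```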

ReportDOI
01 Dec 1990
TL;DR: Computer Aided Software Engineering tool users are faced with the task of coordinating tools and data from a variety of sources spanning the entire software development life cycle, and formal efforts have done little to resolve the integration problems.
Abstract: Computer Aided Software Engineering tool users are faced with the task of coordinating tools and data from a variety of sources spanning the entire software development life cycle. Despite much discussion and increased standardization activity, complete, transparent CASE tool integration is still a long way from realization. There are a number of factors which have complicated the tool integration scenario and a number of actions being taken in an attempt to resolve the problems. The implications of these concerns can be examined from the perspectives of single-vendor, multiple-vendor, operating environment, development process, and end user integration. In addition to specific technical and methodological solutions, standards efforts are viewed as a possible path to tool integration. To date, formal efforts have done little to resolve the integration problems, but de facto standards may well become the cornerstone of future CASE tool evolution.

Proceedings ArticleDOI
04 Jun 1990
TL;DR: The effectiveness of a utility-based process simulation system is demonstrated by investigating several current VLSI technology problems with a variety of powerful simulators.
Abstract: The effectiveness of a utility-based process simulation system is demonstrated by investigating several current VLSI technology problems with a variety of powerful simulators. Interactions of deposition, etching and spin-on steps in a planarization process are investigated with topography and creeping flow simulators. Topography and thermal processing simulators are combined to investigate a complete bipolar device process. Simulators for 2-D image calculation and scattering from topography are applied to problems in submicron lithography. The primary platform used for utility-based software integration is SIMPL-DIX, which connects layout and process flow data, and an X-window graphical user-interface, into a system for generating device cross-sections. An experimental adaptation of SIMPL into the OCT/VEM/RPC CAD framework has also been developed that provides access to the VEM user-interface, the OCT database, a mask editor, and circuit CAD tools.

Proceedings ArticleDOI
G. Herbella
15 Oct 1990
TL;DR: The use of modularity within the Multipath Redundant Avionics Suite project is discussed and a method of configuring multiple sets of avionics modules (clusters) into a fault-tolerant redundant subsystem has been developed.
Abstract: The use of modularity within the Multipath Redundant Avionics Suite project is discussed. Modularity is considered at several levels, including systems, subsystems, electronic circuit assemblies and even functionality within circuit assemblies. MPRAS maintains the modularity of PAVE PILLAR and extends it to include modules designed to provide efficient input/output functions and to support implementation of multistring redundancy. A method of configuring multiple sets of avionics modules (clusters) into a fault-tolerant redundant subsystem has been developed. These subsystems are intended to be incorporated into the overall system in a modular fashion to support vehicles such as the Advanced Launch System, which are themselves designed as modular vehicle families. The software operating system is also modular, based on the Ada high order language, and can be configured for specific vehicle functions. Deterministic system software operation and high levels of avionics testability provide an environment intended to keep system operations costs as low as possible. The result is an avionics architecture which can be used in a wide variety of applications, from the simplest launch vehicles to complex systems envisioned for future space exploration.

Journal ArticleDOI
TL;DR: A microcomputer-based power network control simulator designed to be used as a teaching aid for students in power engineering courses is introduced; the system is an offline version of control systems actually used in power utilities.
Abstract: A microcomputer-based power network control simulator designed to be used as a teaching aid for students in power engineering courses is introduced. The system is an offline version of control systems actually used in power utilities. It includes functions such as steady-state security analysis and security monitoring, online load flow, and contingency analysis. Using a special feature implemented in the system called perturbation scheduling, different events can be set up to occur at different times before the simulation takes place. Also available to the user are other functions, such as short-circuit study and transient stability analysis, that are normally used for planning or study purposes. Following a detailed description of the system, software integration in an undergraduate power system operation course is illustrated, and students' responses are discussed.
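
The perturbation-scheduling feature, events defined before the run and fired at preset simulation times, might look like the sketch below; the event format and network representation are invented for illustration.

```python
import heapq
import itertools

class PerturbationSchedule:
    """Events set up before the run and applied at their scheduled
    simulation times, a sketch of the simulator's perturbation scheduling."""

    def __init__(self):
        self._events = []
        self._seq = itertools.count()  # tie-breaker for simultaneous events

    def at(self, t, description, action):
        heapq.heappush(self._events, (t, next(self._seq), description, action))

    def advance_to(self, t, network):
        """Fire every event whose scheduled time has been reached."""
        while self._events and self._events[0][0] <= t:
            when, _, desc, action = heapq.heappop(self._events)
            print(f"t={when}: {desc}")
            action(network)

grid = {"line_7": "in_service"}
schedule = PerturbationSchedule()
schedule.at(5.0, "trip line 7", lambda net: net.update(line_7="out_of_service"))
schedule.advance_to(6.0, grid)   # simulation clock passes t=5.0
print(grid)                      # {'line_7': 'out_of_service'}
```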

01 Sep 1990
TL;DR: The functional components approach is described, along with some specific component examples and a project example of the evolution from VLSI component, to basic board-level functional component, to integrated telemetry data system.
Abstract: NASA's deployment of major space complexes such as Space Station Freedom (SSF) and the Earth Observing System (EOS) will demand increased functionality and performance from ground-based telemetry acquisition systems well above current system capabilities. Adaptation of space telemetry data transport and processing standards, such as those specified by the Consultative Committee for Space Data Systems (CCSDS), and of those required for commercial ground distribution of telemetry data will drive these functional and performance requirements. In addition, budget limitations will force the requirement for higher modularity, flexibility, and interchangeability at lower cost in new ground telemetry data system elements. At NASA's Goddard Space Flight Center (GSFC), the design and development of generic ground telemetry data system elements over the last five years has resulted in significant solutions to these problems. This solution, referred to as the functional components approach, includes both hardware and software components ready for end-user application. The hardware functional components consist of modern data flow architectures utilizing Application Specific Integrated Circuits (ASICs) developed specifically to support NASA's telemetry data systems needs and designed to meet a range of data rate requirements up to 300 Mbps. Real-time operating system software components support both embedded local software intelligence and overall system control, status, processing, and interface requirements. These components, hardware and software, form the superstructure upon which project-specific elements are added to complete a telemetry ground data system installation. This paper describes the functional components approach, some specific component examples, and a project example of the evolution from VLSI component, to basic board-level functional component, to integrated telemetry data system.
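
The CCSDS standards mentioned define, among other things, a 6-byte space packet primary header. The functional-component ASICs extract such fields in hardware at up to 300 Mbps; the sketch below shows the same field extraction in software (the frame contents are invented).

```python
import struct

def parse_ccsds_primary_header(frame: bytes):
    """Decode the 6-byte CCSDS space packet primary header: version, type,
    secondary-header flag, APID, sequence flags/count, and data length."""
    word0, seq, length = struct.unpack(">HHH", frame[:6])
    return {
        "version": word0 >> 13,
        "type": (word0 >> 12) & 0x1,
        "sec_hdr_flag": (word0 >> 11) & 0x1,
        "apid": word0 & 0x7FF,
        "seq_flags": seq >> 14,
        "seq_count": seq & 0x3FFF,
        "data_length": length + 1,  # field stores (octets in data field) - 1
    }

# APID 0x123, unsegmented packet (seq_flags=3), count 42, 10 data octets:
hdr = struct.pack(">HHH", 0x123, (3 << 14) | 42, 9)
print(parse_ccsds_primary_header(hdr))
```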

Journal ArticleDOI
TL;DR: Although there is not necessarily an improvement in accuracy, the integration of GPS with inertial navigation systems (INS) does enable an increase in system performance.
Abstract: The Global Positioning System (GPS) offers an absolute positioning accuracy of 15 to 100 metres. Inertial navigation complements GPS in that it provides relative positioning and is totally self-contained. These two positioning sensors are ideally suited for system integration: although there is not necessarily an improvement in accuracy, integrating GPS with an inertial navigation system (INS) does enable an increase in system performance.
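
A toy illustration of why the two sensors complement each other: blend a smooth but drifting INS track with noisy but absolute GPS fixes. Real integrations use Kalman filtering; the gain, bias, and error figures below are invented.

```python
def gps_ins_blend(ins_estimate, gps_fix, gain=0.02):
    """Nudge the dead-reckoned INS estimate toward the absolute GPS fix:
    the INS dominates short-term, the GPS bounds long-term drift."""
    return ins_estimate + gain * (gps_fix - ins_estimate)

true_pos, est = 0.0, 0.0
for _ in range(600):
    true_pos += 1.0                           # vehicle advances 1 m per step
    est += 1.01                               # INS displacement with a small bias (drift)
    est = gps_ins_blend(est, true_pos + 5.0)  # GPS fix: absolute but ~5 m off
print(round(est - true_pos, 2))  # residual error stays bounded (~5.5 m),
                                 # where raw INS error would grow without limit
```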

Journal Article
TL;DR: This approach answers hospital information needs by shifting some of the processing and data to the end-user level, yet allows management to retain control of the central portion of the data base while facilitating data sharing among various organizational units.
Abstract: This article describes the main characteristics of the integrated hospital information system (HIS) environment, discusses design objectives, and analyzes four design issues--system architecture, conceptual data base design, application portfolio, and plans for development and implementation. The main objective is to provide managers and system designers with a guiding blueprint for HIS design based on state-of-the-practice technological capabilities and current experience with integrated HIS. Clearly, the capabilities of present information technology provide more feasible ways to implement integrated HIS in a distributed environment. This approach answers hospital information needs by shifting some of the processing and data to the end-user level, yet allows management to retain control of the central portion of the data base while facilitating data sharing among various organizational units.

Journal ArticleDOI
TL;DR: In this article, the Triple C principle of communication, cooperation, and coordination is used to facilitate total quality management throughout an organization, and the role management should play in ensuring total quality management is outlined.
Abstract: This paper presents guidelines for a systems approach to total quality management. The basic characteristics of a system as they relate to quality management are discussed. A procedure is described for implementing the Triple C principle of communication, cooperation, and coordination to facilitate total quality management throughout an organization. The role management should play in ensuring total quality management is outlined. Discussions are also presented on the concepts of continuous process improvement, continuous measurable improvement, and quality function deployment.

Journal ArticleDOI
TL;DR: A discussion is presented of system issues in integrating an artificial intelligence system with a conventional computer‐aided design system.
Abstract: A discussion is presented of system issues in integrating an artificial intelligence system with a conventional computer-aided design system. Two different schemes for integration, and experience in implementing a combined VM/PROLOG and CADAM, are described. Both Prolog and CADAM are separately interactive systems, and it was necessary to provide an interactive interface to each and to be able to move between them. Since CADAM is not reentrant, a coroutining relationship was required. It was necessary to specify a Prolog form for drawings and to be able to transform drawings in the CADAM workspace in Fortran array form into Prolog clauses, representing the same information and residing in the Prolog workspace, and vice versa. The problems of linking to CADAM and of storage management in a VM virtual machine had to be dealt with. Finally, error recovery had to be considered. Examples of the use of the system are presented, including examples of CAD graphics and the setting of goals, with the corresponding system responses. The main practical application area was the design and manufacture of assemblies made of sheet metal and extrusions, such as components of aircraft. Several application programs, implemented using the intelligent CAD system, are described.
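
The drawing transformation described, array-form geometry from the CADAM workspace rendered as Prolog clauses carrying the same information, might be sketched as below. The line/point fact format is invented for illustration, not CADAM's or the paper's actual schema.

```python
def drawing_to_prolog(segments):
    """Render line segments from an array-style drawing representation
    (x1, y1, x2, y2 per segment) as Prolog facts the reasoning side can
    load into its workspace."""
    clauses = []
    for n, (x1, y1, x2, y2) in enumerate(segments, start=1):
        clauses.append(f"line(seg{n}, point({x1}, {y1}), point({x2}, {y2})).")
    return "\n".join(clauses)

print(drawing_to_prolog([(0, 0, 40, 0), (40, 0, 40, 25)]))
# line(seg1, point(0, 0), point(40, 0)).
# line(seg2, point(40, 0), point(40, 25)).
```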

Patent
27 Apr 1990
TL;DR: Test data is incorporated within the microcode of a bit-slice microprocessor to verify program performance and during operation of the program as a built-in test as discussed by the authors, which allows complete algorithm debugging during program development, and permits the rest of the system to be developed in parallel.
Abstract: Test data is incorporated within the microcode of a bit-slice microprocessor to be used during development of the program to verify program performance and during operation of the program as a built-in test. Little additional hardware is required and there is minimal impact on the structure of the program. The program is allowed to operate with the same data that it would have when integrated with the system. During development, the embedded data is used as a substitute for the rest of the system, allowing program development to continue until system integration, using only power supplies and some test equipment. When implemented and used with a commercially available microprocessor ROM emulator, the test data may be varied to highlight difficulties in algorithm design and program development. The operating program cannot tell the difference between live system data and embedded test data. Thus, the program will behave identically during development and system operation. This allows complete algorithm debugging during program development, and permits the rest of the system to be developed in parallel. Developmental test data can be used later for operational program/hardware bit confidence testing with minimal changes.