
Showing papers on "System integration published in 1993"


Book
01 Dec 1993
TL;DR: The 4th edition of Systems Analysis and Design continues to offer a hands-on approach to SA&D while focusing on the core set of skills that all analysts must possess.
Abstract: From the Publisher: This edition continues to react to changes and expected changes in the information technology domain including Year 2000 (Y2K) compatibility, client/server computing, the Internet, intranets, and extranets. Finally, there are exciting systems analysis and design challenges with Enterprise Resource Planning (ERP) applications (such as SAP), systems integration, and business process redesign. Today's students want to "practice" the application of concepts, not just study them. As with the previous editions of this book, the authors wrote it to: 1) Balance the coverage of concepts, tools, techniques, and their applications 2) Provide the most examples of system analysis and design deliverables available in any book 3) Balance the coverage of classical methods (such as structured analysis and information engineering) and emerging methods (e.g., object-oriented analysis and rapid application development). Additionally, the textbook can serve the reader as a post-course professional reference for best current practices.

141 citations


Journal ArticleDOI
01 Jan 1993
TL;DR: Model integration is projected as the springboard for building a theory of models equivalent in power to relational theory in the database community.
Abstract: Model integration extends the scope of model management to include the dimension of manipulation as well. This invariably leads to comparisons with database theory. Model integration is viewed from four perspectives: organizational, definitional, procedural, and implementational. Strategic modeling is discussed as the organizational motivation for model integration. Schema and process integration are examined as the logical and manipulation counterparts of model integration corresponding to data definition and manipulation, respectively. A model manipulation language based on structured modeling and communicating structured models is suggested which incorporates schema and process integration. The use of object-oriented concepts for designing and implementing integrated modeling environments is discussed. Model integration is projected as the springboard for building a theory of models equivalent in power to relational theory in the database community.

110 citations


Journal ArticleDOI
TL;DR: A main result of this paper is an O(h^3) error estimate for the Lie-Poisson structure, where h is the integration step-size.

93 citations


Proceedings ArticleDOI
01 Jan 1993
TL;DR: MIDAS provides designers with a test bed for analyzing human-system integration in an environment in which both cognitive human function and 'intelligent' machine function are described in similar terms.
Abstract: The process of designing crew stations for large-scale, complex automated systems is made difficult because of the flexibility of roles that the crew can assume, and by the rapid rate at which system designs become fixed. Modern cockpit automation frequently involves multiple layers of control and display technology in which human operators must exercise equipment in augmented, supervisory, and fully automated control modes. In this context, we maintain that effective human-centered design is dependent on adequate models of human/system performance in which representations of the equipment, the human operator(s), and the mission tasks are available to designers for manipulation and modification. The joint Army-NASA Aircrew/Aircraft Integration (A3I) Program, with its attendant Man-machine Integration Design and Analysis System (MIDAS), was initiated to meet this challenge. MIDAS provides designers with a test bed for analyzing human-system integration in an environment in which both cognitive human function and 'intelligent' machine function are described in similar terms. This distributed object-oriented simulation system, its architecture and assumptions, and our experiences from its application in advanced aviation crew stations are described.

84 citations


Journal Article
TL;DR: This report documents and presents a top-level design and implementation plan for geographic information systems (GISs) for transportation based on an assessment of the current state of the art of GIS for transportation, and a projection of technological developments through the next five to ten years.
Abstract: This report documents and presents a top-level design and implementation plan for geographic information systems (GISs) for transportation. The basis for the design and implementation plan has been first an assessment of the current state of the art of GIS for transportation (GIS-T) through interviews with DOTs and MPOs and through a survey of GIS vendors, and second a projection of technological developments through the next five to ten years. A GIS-T may be thought of as a union of a GIS and a transportation information system, with enhancements to the GIS software and to the transportation data. A central significance of GIS-T technology is in its potential to serve as the long-sought data and systems integrator for transportation agencies. Given that so many transportation data are or can be geographically referenced, the GIS-T enabled and managed concept of location provides a basis for integrating databases and information systems across almost all transportation agency applications. In order to realize the greatest benefits of GIS-T, DOTs should develop agency-wide strategic plans that comprehend not only GIS technology adoption and application, but also concurrent adoption and application of open-systems standards and of a wide range of imminent complementary technologies, from computer networking and distributed computing, through new data storage media and database architectures, through computer-aided software engineering, to computer-based graphics and computer-aided design. The plans should address a full range of application scales, because GIS has the potential to become ubiquitous throughout all functional areas of transportation agencies. The recommended approach is top down for system design, then bottom up for application development. A GIS-T server-net architecture with computational and data management labor divided among different kinds of servers is recommended. 
(Fifteen kinds are suggested as a plausible first iteration for the required design.) Implementation of the server net can be incremental, with a conceptual architecture in place as an organizing principle before complete physical realization is feasible, just as the concept of location can serve as a conceptual integrator for data schemas while the GIS-enabled and GIS-managed spatial databases required for actual integration are still being incrementally constructed.

73 citations


Proceedings ArticleDOI
01 Jan 1993
TL;DR: In this article, the unique problems to be encountered in the design of systems for the precise control of hypersonic vehicles are reviewed and discussed, and some candidate designs for the highly-integrated control systems are presented.
Abstract: The unique problems to be encountered in the design of systems for the precise control of hypersonic vehicles are to be reviewed and discussed. These problems pose significant research challenges if successful guidance and control systems are to be developed for this new class of vehicle. These challenges will be shown to arise due to the stringent mission requirements on the vehicle, and the highly integrated configuration designs being considered. In this paper, the mission requirements and operational goals of these vehicles will be first reviewed, and the trajectory performance issues highlighted. Then the presence of critical coupling between several vehicular subsystems will be clearly exposed, and the dynamic interactions between these subsystems will be presented. The results to be discussed are based on the analysis of a generic hypersonic vehicle, with characteristics similar to the X-30. It is an unstable, highly-coupled, aeropropulsive/aeroelastic system, with large variations in its attitude-dynamic characteristics over its extensive flight envelope. The genesis of these interactions is explored, the magnitudes quantified, and their significance in the context of control-system design presented. Finally, some candidate designs for the highly-integrated control systems will be presented.

35 citations


Journal ArticleDOI
01 Aug 1993
TL;DR: The fast ASIC prototyping concept based on the use of multiple FPGAs is reviewed in different engineering applications and some future goals are outlined to develop an integrated, multipurpose DSP ASIC prototyping environment.
Abstract: Field Programmable Gate Arrays (FPGAs) offer a cost-effective and flexible technology for DSP ASIC prototype development. In this article, the fast ASIC prototyping concept based on the use of multiple FPGAs is reviewed in different engineering applications. The design experiences of the proposed approach, applied to four different DSP ASIC design projects are presented. The design experiences concerning the selection of the design methodology, application architectures and prototyping technologies are analyzed with respect to efficient system integration and ASIC migration from the FPGA prototype onto first-time functional silicon. Novel prototyping techniques based on using configurable hardware modellers concerning the same objective are studied. Some future goals are outlined to develop an integrated, multipurpose DSP ASIC prototyping environment.

31 citations


Journal ArticleDOI
TL;DR: The ability of the aforementioned integration strategies to detect defects, and produce reliable systems is addressed, the efficacy of spot unit testing is explored, and phased and incremental versions of top-down and bottom-up integration strategies are compared.
Abstract: There has been much discussion about the merits of various testing and integration strategies. Top-down, bottom-up, big-bang, and sandwich integration strategies are advocated by various authors. Also, some authors insist that modules be unit tested, while others believe that unit testing diverts resources from more effective verification processes. This article addresses the ability of the aforementioned integration strategies to detect defects and produce reliable systems. It also explores the efficacy of spot unit testing, and compares phased and incremental versions of top-down and bottom-up integration strategies. Relatively large artificial software systems were constructed using a code generator with ten basic module templates. These systems were seeded with known defects and tested using the above testing and integration strategies. A number of experiments were then conducted using a simulator whose validity was established by comparing results against these artificial systems. The defect detection ability and resulting system reliability were measured for each strategy. Results indicated that top-down integration strategies are generally most effective in terms of defect correction. Top-down and big-bang strategies produced the most reliable systems. Results favored neither those strategies that incorporate spot unit testing nor those that do not; also, results favored neither phased nor incremental strategies.
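The difference between the integration orders the study compares can be sketched in a few lines of Python. This is a toy illustration only, not the paper's code generator or simulator: the module call tree and module names are invented, and "detection" is simplified to the first integration step that exercises a seeded module.

```python
# Toy sketch: contrast top-down and bottom-up integration order over a
# module call tree. A seeded defect is "detected" at the first integration
# step that brings its module into the partially integrated system.

from collections import deque

# Hypothetical call tree: parent module -> child modules it calls
CALL_TREE = {
    "main": ["ui", "logic"],
    "ui": ["widgets"],
    "logic": ["db", "calc"],
    "widgets": [], "db": [], "calc": [],
}

def top_down_order(tree, root="main"):
    """Breadth-first from the root: parents integrated first, stubs below."""
    order, queue = [], deque([root])
    while queue:
        node = queue.popleft()
        order.append(node)
        queue.extend(tree[node])
    return order

def bottom_up_order(tree, root="main"):
    """Leaves integrated first, drivers above: reverse of top-down."""
    return list(reversed(top_down_order(tree, root)))

def detection_step(order, defective):
    """1-based integration step at which each seeded defect is first exercised."""
    return {m: order.index(m) + 1 for m in defective}

seeded = {"main", "db"}
print(detection_step(top_down_order(CALL_TREE), seeded))  # "main" found at step 1
print(detection_step(bottom_up_order(CALL_TREE), seeded))
```

The sketch makes the trade-off concrete: top-down surfaces defects in high-level control modules immediately, while bottom-up reaches low-level modules sooner, which is the kind of difference the experiments quantify.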

30 citations


Proceedings ArticleDOI
M. Ben-Bassat, Israel Beniaminy, M. Eshel, B. Feldman, A. Shpiro
20 Sep 1993
TL;DR: The authors present an integrated framework for effective management of maintenance operations that addresses two key problems: the shortage of resources; and the performance differences between the individual available resources, and specifically, the human resources.
Abstract: The authors present an integrated framework for effective management of maintenance operations that addresses two key problems: the shortage of resources; and the performance differences between the individual available resources, and specifically, the human resources. Software tools are proposed for these two problems. W-6 assists in improved cost-effective utilization of the available resources. That is, assigning the right person, to the right job, at the right time, and with the right resources. AITEST and OnDoc contribute to increasing the productivity and work quality of each individual service person. Software integration is not centered on sharing of data because most of the data and knowledge required by one tool is not required at all by the others. The proposed EPI (External Program Interface) tool implements integration that is centered on the concept of workflow management. The different software systems exchange messages that are characterized by control-passing and by division of labor, with relatively small amounts of data being shared. A case study of a large depot facility where this framework has been implemented is described.

30 citations


Journal ArticleDOI
TL;DR: This paper examines the message-passing approach to integration in an SDE, looks at the general principles of the approach, describes some existing implementations, and discusses the use of such a mechanism as the basis for a more flexible environment that is open to experimentation with different approaches to integration.
Abstract: Understanding tool integration in a software development environment (SDE) is one of the key issues being addressed in current work on providing automated support for large-scale software production. Work has been taking place at both the conceptual level (‘what is integration?’) and the mechanistic level (‘how do we provide integration?’). Many people see the answers to these questions as providing the corner-stone to real progress in the area. Until recently, existing integration mechanisms have been very rigid in the support they provide for integration. Users have been offered a fixed level of integration with little flexibility. However, one approach that has been recently implemented employs a control integration paradigm which appears to be flexible, supportive and adaptable to a wide range of end-user needs. Implementations of this paradigm are based on the notion of ‘message-passing’ as the underlying communication mechanism between SDE services. In this paper, we examine the message-passing approach to integration in an SDE, look at the general principles of the approach, describe some existing implementations, and discuss the use of such a mechanism as the basis for a more flexible environment that is open to experimentation with different approaches to integration.
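The control-integration paradigm the paper surveys can be reduced to a very small sketch: tools never call each other directly, they announce events to a message service and react to the announcements of others. The bus below is a minimal illustration under that assumption; the message type, the "compiler" and "linter" reactions, and all names are invented, not drawn from any of the implementations the paper describes.

```python
# Minimal sketch of message-passing control integration between SDE tools:
# tools register interest in message types, and the bus broadcasts events,
# passing control to every interested tool.

class MessageBus:
    def __init__(self):
        self._subscribers = {}          # message type -> list of callbacks

    def subscribe(self, msg_type, callback):
        """A tool declares interest in a class of messages."""
        self._subscribers.setdefault(msg_type, []).append(callback)

    def send(self, msg_type, **payload):
        """Broadcast an event; control passes to each subscriber in turn."""
        for callback in self._subscribers.get(msg_type, []):
            callback(payload)

bus = MessageBus()
log = []

# Two "tools" integrate purely by reacting to each other's announcements;
# neither knows the other exists, which is the flexibility the paper notes.
bus.subscribe("file-saved", lambda m: log.append(f"compiler: building {m['path']}"))
bus.subscribe("file-saved", lambda m: log.append(f"linter: checking {m['path']}"))

bus.send("file-saved", path="main.c")
print(log)
```

Because tools couple only to message types, a new tool (or an experimental integration policy) can be added by subscribing it, without modifying the existing ones.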

29 citations


Proceedings ArticleDOI
05 Jan 1993
TL;DR: In this article, it is argued that there are few hypermedia applications within business and industrial organizations and that large acceptance of hypermedia within organizations will occur once this technology is better integrated with other organizational systems and applied to carefully selected tasks.
Abstract: There are few hypermedia applications within business and industrial organizations. It is argued that this phenomenon is rooted in the concept of hypermedia applications as stand-alone programs. Larger acceptance of hypermedia within organizations will occur once this technology is better integrated with other organizational systems and applied to carefully selected tasks. Three areas for research in this context are identified: the tasks perspective, which deals with selecting tasks to develop hypermedia applications; the knowledge perspective, which deals with representing and managing the knowledge processed by organizations; and the integration perspective, which deals with technical issues in software integration. It is suggested that solutions to the problems presented will prompt the acceptance of hypermedia technology within organizations.

Journal ArticleDOI
TL;DR: The major aspects of agent-based software engineering methodology and its application to integrated facility engineering are presented and a highlight of the current integrated design environment development is given to illustrate the advantages of this approach.
Abstract: An agent-based framework for the development of integrated facility engineering environments in support of collaborative design is introduced. This framework aims at integrating design software by allowing better software interoperability. Within their framework, design agents represent various existing design and planning systems that communicate their design information and knowledge partially and incrementally using the Agent Communication Language (ACL). ACL is a formal language proposed as a communication standard for disparate software. It is based on a logic-based language called Knowledge Interchange Format (KIF) and a message protocol called Knowledge Query Manipulation Language (KQML). Design agents are linked and their communication of design information is coordinated via system programs called facilitators in a federation architecture. The federation architecture specifies the way design agents and facilitators communicate in an integrated software environment. In concert with pursuing fundamental research concepts, we have been developing an integrated design software environment that spans different phases of the facility life cycle. This environment serves to demonstrate the primary aspects of this research methodology. In this paper, we first discuss the integration problem and review related research projects. We then present the major aspects of agent-based software engineering methodology and its application to integrated facility engineering. A highlight of the current integrated design environment development is given to illustrate the advantages of this approach. Finally, we summarize and discuss some of the important research issues in light of previous research.
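The federation architecture described above can be sketched compactly: agents talk only to a facilitator, which routes KQML-style performatives between them. The sketch below is illustrative only; the agent names, the `(beam-depth b1)` KIF-flavored content string, and the handler logic are all invented for the example, not taken from the paper's environment.

```python
# Illustrative sketch of a facilitator routing KQML-style messages between
# design agents in a federation architecture. Agents register with the
# facilitator and exchange performatives rather than calling each other.

class Facilitator:
    def __init__(self):
        self._agents = {}               # agent name -> message handler

    def register(self, name, handler):
        self._agents[name] = handler

    def route(self, message):
        """Deliver a performative to its receiver and return the reply."""
        return self._agents[message["receiver"]](message)

def structural_agent(msg):
    # Hypothetical design agent: answers one query about a shared design.
    if msg["performative"] == "ask-one" and msg["content"] == "(beam-depth b1)":
        return {"performative": "reply", "content": "(= (beam-depth b1) 0.45)"}
    return {"performative": "sorry", "content": None}

facilitator = Facilitator()
facilitator.register("structural", structural_agent)

# An "architectural" agent queries the structural agent via the facilitator.
reply = facilitator.route({"performative": "ask-one",
                           "sender": "architectural",
                           "receiver": "structural",
                           "content": "(beam-depth b1)"})
print(reply["content"])
```

The point of the indirection is the one the paper makes: existing design systems stay unmodified behind their agent wrappers, and the facilitator coordinates partial, incremental exchange of design information.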

Proceedings ArticleDOI
Gang Cheng, Y. Lu, Geoffrey C. Fox, Kim Mills, Tomasz Haupt
01 Dec 1993
TL;DR: Using the EMS simulation as a case study, the authors explore the AVS dataflow methodology to naturally integrate data visualization, parallel systems and heterogeneous computing.
Abstract: An integrated interactive visualization environment was created for an electromagnetic scattering (EMS) simulation, coupling a graphical user interface (GUI) for runtime simulation parameters input and 3-D rendering output on a graphical workstation, with computational modules running on a parallel supercomputer and two workstations. Application Visualization System (AVS) was used as integrating software to facilitate both networking and scientific data visualization. Using the EMS simulation as a case study, the authors explore the AVS dataflow methodology to naturally integrate data visualization, parallel systems and heterogeneous computing. Major issues in integrating this remote visualization system are discussed, including task decomposition, system integration, concurrent control, and a high level data-visualization-environment distributed programming model.

Proceedings ArticleDOI
06 Aug 1993
TL;DR: This paper will review the application of automated material handling and packing techniques to industrial problems, and outline the problems involved in the full automation of such a procedure.
Abstract: A rich theoretical background to the problems that occur in the automation of material handling can be found in operations research, production engineering, systems engineering and automation, more specifically machine vision, literature. This work has contributed towards the design of intelligent handling systems. This paper will review the application of these automated material handling and packing techniques to industrial problems. The discussion will also highlight the systems integration issues involved in these applications. An outline of one such industrial application, the automated placement of shape templates on to leather hides, is also discussed. The purpose of this system is to arrange shape templates on a leather hide in an efficient manner, so as to minimize the leather waste, before they are automatically cut from the hide. These pieces are used in the furniture and car manufacturing industries for the upholstery of high quality leather chairs and car seats. Currently this type of operation is semi-automated. The paper will outline the problems involved in the full automation of such a procedure.


Journal ArticleDOI
TL;DR: Major learning paradigms are reviewed, the role of learning in intelligent information systems is examined, and potential research issues in integrating learning capabilities in information systems design are discussed.
Abstract: Developing intelligent information systems has been a focus of recent research. A major component that makes a system intelligent is its learning capabilities. A learning system can adapt itself to new environments and improve its performance with minimum intervention from the developer. In this paper, we review major learning paradigms, examine the role of learning in intelligent information systems, and discuss potential research issues in integrating learning capabilities in information systems design.

Proceedings ArticleDOI
TL;DR: The primary objective of the testbed is to perform a system integration of Control Structure Interaction technologies necessary to demonstrate the end-to-end operation of a space-based interferometer, ultimately proving to flight mission planners that the necessary control technology exists to meet the challenging requirements of future space-based interferometry missions.
Abstract: This paper describes the overall design and planned phased delivery of the ground-based Micro-Precision Interferometer (MPI) Testbed. The testbed is a half scale replica of a future space-based interferometer containing all the spacecraft subsystems necessary to perform an astrometric measurement. Appropriate sized reaction wheels will regulate the testbed attitude as well as provide a flight-like disturbance source. The optical system will consist of two complete Michelson interferometers. Successful interferometric measurements require controlling the positional stabilities of these optical elements to the nanometer level. The primary objective of the testbed is to perform a system integration of Control Structure Interaction (CSI) technologies necessary to demonstrate the end-to-end operation of a space-based interferometer, ultimately proving to flight mission planners that the necessary control technology exists to meet the challenging requirements of future space-based interferometry missions. These technologies form a multi-layered vibration attenuation architecture to achieve the necessary quiet environment. This three layered methodology blends disturbance isolation, structural quieting and active optical control techniques. The paper describes all the testbed subsystems in this end-to-end ground-based system as well as the present capabilities of the evolving testbed.

Proceedings Article
18 Apr 1993


Journal ArticleDOI
TL;DR: Object-oriented technology (OOT) has been a valuable asset to the authors' product development cycle and its methods can be used in other projects to shorten the development cycle and enhance product quality.
Abstract: Object-oriented technology (OOT) has been a valuable asset to our product development cycle. It resulted in significant reuse of code, design, and analysis; reduced development cycles; simplified system integration; and a more efficient method of generating and using requirements. By incorporating manageable pieces of OOT into our development process, a few techniques at a time, we have been able to learn about it, introduce it at a manageable rate, and still meet our customer commitments without asking for additional resources for OOT work. Our developers and system engineers continue to incorporate OOT techniques into their analyses, designs, implementations, and testing, and look for new areas to enhance our existing paradigm. This paper is for project managers, architects, system engineers, and developers who are using, or are considering using, OOT. Our methods can be used in other projects to shorten the development cycle and enhance product quality.

01 Jun 1993
TL;DR: The software packaging tool subsumes traditional integration tools like UNIX MAKE by providing a rule-based approach to software integration that is independent of execution environments.
Abstract: Many computer programs cannot be easily integrated because their components are distributed and heterogeneous, i.e., they are implemented in diverse programming languages, use different data representation formats, or their run-time environments are incompatible. In many cases, programs are integrated by modifying their components or interposing mechanisms that handle communication and conversion tasks. For example, remote procedure call (RPC) helps integrate heterogeneous, distributed programs. When configuring such programs, however, mechanisms like RPC must be used explicitly by software developers in order to integrate collections of diverse components. Each collection may require a unique integration solution. This thesis describes a process called software packaging that automatically determines how to integrate a diverse collection of computer programs based on the types of components involved and the capabilities of available translators and adapters in an environment. Whereas previous efforts focused solely on integration mechanisms, software packaging provides a context that relates such mechanisms to software integration processes. We demonstrate the value of this approach by reducing the cost of configuring applications whose components are distributed and implemented in different programming languages. Our software packaging tool subsumes traditional integration tools like UNIX MAKE by providing a rule-based approach to software integration that is independent of execution environments.
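The core idea, choosing an integration solution from component types and the translators available in the environment, can be sketched as a search over a conversion graph. This is a sketch of the concept only, not the thesis's tool: the component types, translator catalog, and tool names below are all invented for illustration.

```python
# Sketch: given a component's type and a catalog of available translators
# and adapters, search for a tool chain that carries the component to the
# target form (breadth-first, so the shortest chain is found first).

from collections import deque

# Hypothetical catalog: (from-type, to-type) -> translator/adapter name
TRANSLATORS = {
    ("fortran-src", "object-file"): "f77",
    ("c-src", "object-file"): "cc",
    ("object-file", "executable"): "ld",
    ("idl-spec", "c-src"): "rpc-stub-gen",
}

def integration_plan(start_type, goal_type):
    """Return the shortest translator chain from start_type to goal_type,
    or None if the environment offers no way to integrate the component."""
    queue = deque([(start_type, [])])
    seen = {start_type}
    while queue:
        current, chain = queue.popleft()
        if current == goal_type:
            return chain
        for (src, dst), tool in TRANSLATORS.items():
            if src == current and dst not in seen:
                seen.add(dst)
                queue.append((dst, chain + [tool]))
    return None

print(integration_plan("idl-spec", "executable"))  # ['rpc-stub-gen', 'cc', 'ld']
```

Unlike a hand-written MAKE rule set, the plan here is derived from declared component types and environment capabilities, which is the sense in which a rule-based packager can subsume per-project build scripts.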

Journal ArticleDOI
TL;DR: The basic functionalities and architecture of the metadatabase model are described and several examples central to information management are included to illustrate how the metadatabase approach can be deployed to better manage enterprise-wide information resources.
Abstract: The success of modern information technology in the past decades has brought about the proliferation of systems dedicated to individual groups of applications and functions. This proliferation, in turn, has led to the need for enterprise-wide management and integration of information, and has triggered major efforts such as systems integration, re-engineering, and computer integrated manufacturing. Nonetheless, achieving such integration remains a challenge. To effectively manage information resources and to coordinate processing at a global level requires concepts and technologies that overcome the difficult problems arising from, among other things, combining data with knowledge and managing concurrent multi-database systems. The metadatabase approach offers fresh promise to solve some of these fundamental problems. The basic functionalities and architecture of the metadatabase model are described in this paper. Several examples central to information management are also included to illustrate how the metadatabase approach can be deployed to better manage enterprise-wide information resources.

01 Apr 1993
TL;DR: In this article, a profile negotiation process (PNP) between the Center/TRACON Automation System (CTAS) and an aircraft equipped with a four-dimensional flight management system (4D FMS) was designed to provide an arrival trajectory solution which satisfies the separation requirements of ATC while remaining as close as possible to the aircraft's preferred trajectory.
Abstract: Historically, development of airborne flight management systems (FMS) and ground-based air traffic control (ATC) systems has tended to focus on different objectives with little consideration for operational integration. A joint program, between NASA's Ames Research Center (Ames) and Langley Research Center (Langley), is underway to investigate the issues of, and develop systems for, the integration of ATC and airborne automation systems. A simulation study was conducted to evaluate a profile negotiation process (PNP) between the Center/TRACON Automation System (CTAS) and an aircraft equipped with a four-dimensional flight management system (4D FMS). Prototype procedures were developed to support the functional implementation of this process. The PNP was designed to provide an arrival trajectory solution which satisfies the separation requirements of ATC while remaining as close as possible to the aircraft's preferred trajectory. Results from the experiment indicate the potential for successful incorporation of aircraft-preferred arrival trajectories in the CTAS automation environment. Fuel savings on the order of 2 percent to 8 percent, compared to fuel required for the baseline CTAS arrival speed strategy, were achieved in the test scenarios. The data link procedures and clearances developed for this experiment, while providing the necessary functionality, were found to be operationally unacceptable to the pilots. In particular, additional pilot control and understanding of the proposed aircraft-preferred trajectory, and a simplified clearance procedure were cited as necessary for operational implementation of the concept.

Journal ArticleDOI
TL;DR: A distributed development environment is developed that combines and extends the capabilities of existing methods while fixing their drawbacks to meet the special needs of the HAGAR project.
Abstract: The HAGAR project is building a high-performance disk controller. It is an embedded system for which many hundreds of thousands of lines of embedded software will have to be developed concurrently with the development of the hardware. We found existing methods for embedded software development, such as simulation and remote cross development, to be inadequate for us. To meet our special needs, we developed a distributed development environment that combines and extends the capabilities of existing methods while fixing their drawbacks. Our environment is based on a processor-pool architecture, in which multiple hardware sets are pooled and managed systematically. It supports embedded software development for many programmers at different sites. It allows for the emulation of non-existing hardware adaptor cards and for the integration of embedded software testing with hardware simulation. The environment provides a single system image, hiding many hardware and configuration details from its users. From the perspective of the programmers, our environment makes developing embedded software for special hardware systems as easy as developing application programs for a UNIX workstation.

Book ChapterDOI
01 Jan 1993
TL;DR: A Visually Coupled System (VCS) is described along with its components, interfaces and performance requirements for immersive virtual environment systems.
Abstract: A Visually Coupled System (VCS) forms the heart of all immersive virtual environment systems (frequently known as virtual reality systems). A VCS comprises space trackers, helmet-mounted displays, sensors and display generators [1,2]. In this paper a VCS is described along with its components, interfaces and performance requirements. Integration of the VCS components requires very careful selection and matching of a range of technologies from hardware interface and software integration points of view. Moreover, it is important to examine the task requirements and ensure that these are considered in the design [3,4].

Book ChapterDOI
01 Jan 1993
TL;DR: Computer Aided Engineering tools and technologies that hold the potential for creating a simulation based Concurrent Engineering environment in the near-term and a design optimization based capability for the future are analyzed.
Abstract: Computer Aided Engineering tools and technologies that hold the potential for creating a simulation based Concurrent Engineering environment in the near-term and a design optimization based capability for the future are analyzed. Technical challenges and opportunities associated with integrating these tools into a software environment that can support multidisciplinary engineering teams are defined and illustrated. A road map for evolutionary creation of a simulation based Concurrent Engineering design environment in the near-term and a design optimization environment in the longer term is presented. Projects underway to create the capability advocated by the road map are presented, to illustrate technical considerations peculiar to Concurrent Engineering of mechanical systems and challenges associated with network based multidisciplinary CAE system integration for Concurrent Engineering.

29 Jun 1993
TL;DR: An overview of distribution management systems is presented by discussing: placing DMS in context; systems currently in use; development of network control and analysis; evolution of systems integration; a technological approach; a strategic approach; and a representative DMS architecture.
Abstract: The author presents an overview of distribution management systems (DMS) by discussing: placing DMS in context; systems currently in use; development of network control and analysis; evolution of systems integration; a technological approach; a strategic approach; and a representative DMS architecture.

Journal ArticleDOI
TL;DR: A framework, a meta-process model, that allows one to develop in a flexible but integrated manner a distributed, open, and integrated system with a planned approach to build a common culture of understanding and conceptual thinking in an application domain is proposed.
Abstract: Maturity of technologies on one side and customers' demands on the other have led to the need to develop increasingly large and complex systems. The problem we face is to structure the development of these types of systems, and the systems themselves, in a useful way, and to support the development process from its conceptual foundation to its tool aspect. We believe that we must take one step beyond current software engineering methodology to be able to cope with this task. What we propose is a framework, a meta-process model, that allows one to develop a distributed, open, and integrated system in a flexible but integrated manner with a planned approach. Based on the premise that the main factor is to build a common culture of understanding and conceptual thinking in an application domain, we suggest an additional level of coordination and modeling above the various development projects. In analyzing this two-leveled process model, we identify the major processes and models involved. While we focus on the process model itself, we also discuss in more depth the two major concepts of domain analysis and integration architecture design as they relate to our approach. A strategy for realizing the meta-process model based on the notion of Application Machines is described.

Proceedings ArticleDOI
Satoshi Abe1, Toshio Abe, Hisao Sato, Taizo Okano, Koki Sengoku 
06 May 1993
TL;DR: The general structure of the system is presented, along with an overview of the algorithms that detect and evaluate cracks, and system integration including human-interface issues is discussed.
Abstract: Evaluation of cracks on the surface of the road is essential work for the public organizations in charge of road maintenance. Currently, people view photographs of the road to evaluate surface cracks; the work requires much man-power, high skill, and careful examination. Several efforts have been made to realize automatic crack evaluation systems, but many remain in the experimental stage because of the amount of image data involved. Our system is developed as a practical system for actual use in road crack evaluation. It has the following advantages: (1) Real-time image processing hardware. A road image is up to 1,000 feet long, and it is difficult to handle that amount of image data even on today's workstations; a special real-time image processor reduces the image data to 1/200 without losing essential crack information. (2) Good human interface. We designed the human interface carefully to achieve a truly easy-to-use system. (3) Efficiency. This system reduced the required man-power dramatically. This paper presents the general structure of the system, gives an overview of the algorithms that detect and evaluate cracks, and discusses system integration including human-interface issues. © (1993) SPIE--The International Society for Optical Engineering.
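The abstract does not give the reduction algorithm, only the 1/200 ratio. Purely as an illustration of how aggressive data reduction can preserve thin dark cracks, a block-wise reduction could keep each block's darkest pixel (a hypothetical sketch, not the authors' real-time processor):

```python
def reduce_blocks(image, block):
    """Shrink a grayscale road image by keeping each block's darkest pixel,
    so a thin dark crack survives the data reduction."""
    h, w = len(image), len(image[0])
    return [[min(image[r][c]
                 for r in range(br, min(br + block, h))
                 for c in range(bc, min(bc + block, w)))
             for bc in range(0, w, block)]
            for br in range(0, h, block)]

# 4x4 bright road surface (200) crossed by a one-pixel dark crack (30):
road = [[200] * 4 for _ in range(4)]
road[2][1] = 30
small = reduce_blocks(road, 2)     # 2x2 result; the crack pixel is retained
```

A plain average over each block would wash the single dark pixel out toward the bright background, which is why a min- (or edge-sensitive) reduction is the kind of operation needed before crack detection.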