
Showing papers on "System integration" published in 2011


Journal ArticleDOI
Li Da Xu1
TL;DR: The state of the art in the area of enterprise systems as they relate to industrial informatics is surveyed, highlighting formal methods and systems methods crucial for modeling complex enterprise systems, which poses unique challenges.
Abstract: Rapid advances in industrial information integration methods have spurred tremendous growth in the use of enterprise systems. Consequently, a variety of techniques have been used for probing enterprise systems. These techniques include business process management, workflow management, Enterprise Application Integration (EAI), Service-Oriented Architecture (SOA), grid computing, and others. Many applications require a combination of these techniques, which is giving rise to the emergence of enterprise systems. Development of the techniques has originated from different disciplines and has the potential to significantly improve the performance of enterprise systems. However, the lack of powerful tools still poses a major hindrance to exploiting the full potential of enterprise systems. In particular, formal methods and systems methods are crucial for modeling complex enterprise systems, which poses unique challenges. In this paper, we briefly survey the state of the art in the area of enterprise systems as they relate to industrial informatics.

637 citations


Proceedings ArticleDOI
03 Jul 2011
TL;DR: A conformal mm-wave phased-array antenna design and packaging integration that provides a low-loss solution and flexibility for platform integration is presented.
Abstract: 60 GHz technology utilizes the worldwide license-exempt 5–9 GHz of bandwidth to provide multi-gigabit, high-throughput wireless communication in WPAN and WLAN applications. One of the key challenges in enabling 60 GHz technology is developing a mm-wave phased-array antenna design and its packaging integration with the mm-wave ICs. This paper presents a conformal mm-wave phased-array antenna design and packaging integration that provide a low-loss solution and flexibility for platform integration.
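
As background on phased-array beam steering (standard antenna theory, not a result of this paper), the far-field array factor of an N-element uniform linear array with element spacing d and progressive inter-element phase β is

```latex
AF(\theta) \;=\; \sum_{n=0}^{N-1} e^{\,jn\,(kd\sin\theta + \beta)},
\qquad k = \frac{2\pi}{\lambda},
\qquad \beta = -\,kd\sin\theta_0 ,
```

where the choice of β steers the main beam to θ0. At 60 GHz, λ ≈ 5 mm, so a half-wavelength-spaced array spans only a few millimetres, which is why antenna-in-package integration with the mm-wave ICs is both feasible and attractive.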

209 citations


Journal ArticleDOI
TL;DR: This article reviews several promising system integration approaches for microfluidics and discusses their advantages, limitations, and applications, which will lead toward translational lab-on-a-chip systems for a wide spectrum of biological engineering applications.
Abstract: Microfluidics holds great promise to revolutionize various areas of biological engineering, such as single-cell analysis, environmental monitoring, regenerative medicine, and point-of-care diagnostics. Despite the intensive efforts that have been devoted to the field in the past decades, microfluidics has not yet been widely adopted. It is increasingly realized that an effective system integration strategy that is low cost and broadly applicable to various biological engineering situations is required to fully realize the potential of microfluidics. In this article, we review several promising system integration approaches for microfluidics and discuss their advantages, limitations, and applications. Future advancements of these microfluidic strategies will lead toward translational lab-on-a-chip systems for a wide spectrum of biological engineering applications.

93 citations


Proceedings ArticleDOI
10 Nov 2011
TL;DR: A plan for mechatronic system integration was devised to combine the mechanical, electronic, and software elements of the research; the system was then modelled mathematically and a control strategy implemented to achieve stability.
Abstract: In the event of a disaster, there is a pressing need for robotic assistance to conduct an effective search and rescue operation, because robots can be deployed immediately. In this paper, the development of an unmanned aerial vehicle (UAV) intended for search and rescue applications is presented. The platform for the UAV is a quad-rotor type helicopter, simply referred to as a quadrotor. A plan for mechatronic system integration was devised to combine the mechanical, electronic, and software elements of the research. Once the system was modelled mathematically, a control strategy was implemented to achieve stability. This was investigated by creating a MATLAB® Simulink® numerical model, which was used to run simulations of the system.
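
As a rough illustration of the modelling-plus-control workflow described here (the authors used a MATLAB Simulink model; the sketch below is a generic single-axis stand-in, not their model), a linearized pitch axis I·θ̈ = τ can be stabilized with a PD controller in a few lines of Python:

```python
# Minimal sketch: PD stabilization of one linearized quadrotor axis.
# I * theta_ddot = tau, tau = -Kp*theta - Kd*theta_dot. Values are illustrative.
I = 0.02            # pitch-axis moment of inertia [kg m^2] (assumed)
Kp, Kd = 0.8, 0.15  # PD gains (assumed; give a well-damped response)
dt, T = 0.001, 3.0  # integration step and horizon [s]

theta, theta_dot = 0.3, 0.0              # initial 0.3 rad attitude disturbance
for _ in range(int(T / dt)):
    tau = -Kp * theta - Kd * theta_dot   # control torque
    theta_dot += (tau / I) * dt          # explicit Euler integration
    theta += theta_dot * dt

print(f"pitch after {T:.0f} s: {theta:.6f} rad")  # decays toward zero
```

The closed-loop dynamics I·θ̈ + Kd·θ̇ + Kp·θ = 0 are those of a damped oscillator, which is the stability property the control strategy must deliver on each axis.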

82 citations


Proceedings ArticleDOI
14 Mar 2011
TL;DR: Three general techniques to implement and model predictable and composable resources are presented, and their applicability in the context of a memory controller is demonstrated.
Abstract: Designing multi-processor systems-on-chips becomes increasingly complex, as more applications with realtime requirements execute in parallel. System resources, such as memories, are shared between applications to reduce cost, causing their timing behavior to become inter-dependent. Using conventional simulation-based verification, this requires all concurrently executing applications to be verified together, resulting in a rapidly increasing verification complexity. Predictable and composable systems have been proposed to address this problem. Predictable systems provide bounds on performance, enabling formal analysis to be used as an alternative to simulation. Composable systems isolate applications, enabling them to be verified independently. Predictable and composable systems are built from predictable and composable resources. This paper presents three general techniques to implement and model predictable and composable resources, and demonstrates their applicability in the context of a memory controller. The architecture of the memory controller is general and supports both SRAM and DDR2/DDR3 SDRAM and a wide range of arbiters, making it suitable for many predictable and composable systems. The modeling approach is based on a shared-resource abstraction that covers any combination of supported memory and arbiter and enables system-level performance analysis with a variety of well-known frameworks, such as network calculus or data-flow analysis.
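
In related work on predictable memory controllers, the shared-resource abstraction is commonly a latency-rate (LR) server: each client is guaranteed a minimum service rate ρ after an initial service latency Θ. As a hedged sketch (names and numbers are illustrative, not taken from the paper), the resulting worst-case bound is simple to compute:

```python
# Minimal sketch of a latency-rate (LR) server bound, a common shared-resource
# abstraction for predictable memory controllers. Values are illustrative.
def worst_case_finish(size_words: float, theta_cycles: float, rho: float) -> float:
    """Upper bound on completion time of one request from an idle client,
    given service latency theta (cycles) and guaranteed rate rho (words/cycle)."""
    return theta_cycles + size_words / rho

# 64-word request, 100-cycle service latency, 0.25 words/cycle guaranteed rate:
print(worst_case_finish(64, 100, 0.25))  # -> 356.0 cycles
```

Because the bound depends only on (Θ, ρ), any supported combination of memory and arbiter can be plugged into system-level analysis frameworks such as network calculus or data-flow analysis, as the abstract notes.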

77 citations


BookDOI
20 Jun 2011
TL;DR: The typical development environment today consists of many specialized development tools that are only partially integrated, forming a complex tool landscape.
Abstract: The typical development environment today consists of many specialized development tools, which are partially integrated, forming a complex tool landscape with partial integration. Traditional appr…

71 citations


Proceedings ArticleDOI
25 May 2011
TL;DR: This work proposes techniques that build on existing knowledge by converting structured data into an RDF-based knowledge base that can be gradually extended as part of the interaction during the definition of the robot task.
Abstract: Robots used in manufacturing today are tailored to their tasks by system integration based on expert knowledge concerning both production and machine control. For upcoming generations of even more flexible robot solutions, in applications such as dexterous assembly, robot setup and programming get even more challenging. Reuse of solutions, in terms of parameters, controls, process tuning, and software modules in general, then becomes increasingly important. There has been valuable progress in the reuse of automation solutions when machines comply with standards and behave according to nominal models. However, more flexible robots with sensor-based manipulation skills and cognitive functions for human interaction are far too complex to manage, and solutions are rarely reusable, since knowledge is either implicit in imperative software or not captured in machine-readable form. We propose techniques that build on existing knowledge by converting structured data into an RDF-based knowledge base. Through enhancements of industrial control systems and available engineering tools, such knowledge can be gradually extended as part of the interaction during the definition of the robot task.
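
To make "converting structured data into an RDF-based knowledge base" concrete, here is a minimal sketch using the Python rdflib library; the namespace, classes, and properties are invented for illustration and are not the authors' actual schema:

```python
# Minimal sketch: turning structured robot-task data into RDF triples.
# Namespace and property names are hypothetical, not the paper's schema.
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/robot-skills#")
g = Graph()
g.bind("ex", EX)

# A sensor-based grasping skill with one tuned process parameter.
g.add((EX.GraspSkill42, RDF.type, EX.SensorBasedSkill))
g.add((EX.GraspSkill42, EX.usesSensor, EX.ForceTorqueSensor))
g.add((EX.GraspSkill42, EX.gripForceNewton, Literal(12.5)))

print(g.serialize(format="turtle"))
```

Once task knowledge lives in a triple store it can be queried (e.g., with SPARQL) and extended incrementally during task definition, which is the reuse mechanism the authors argue for.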

69 citations


Proceedings ArticleDOI
21 May 2011
TL;DR: This paper presents an empirical analysis of a large-scale project that implemented 1195 features in a software system and reveals that cross-feature interactions are a major driver of integration failures.
Abstract: Feature-driven software development is a novel approach that has grown in popularity over the past decade. Researchers and practitioners alike have argued that numerous benefits could be garnered from adopting a feature-driven development approach. However, those persuasive arguments have not been matched with supporting empirical evidence. Moreover, developing software systems around features involves new technical and organizational elements that could have significant implications for outcomes such as software quality. This paper presents an empirical analysis of a large-scale project that implemented 1195 features in a software system. We examined the impact that technical attributes of product features, attributes of the feature teams, and cross-feature interactions have on software integration failures. Our results show that technical factors, such as the nature of component dependencies, and organizational factors, such as the geographic dispersion of the feature teams and the role of the feature owners, had complementary impacts, suggesting their independent and important roles in terms of software quality. Furthermore, our analyses revealed that cross-feature interactions, measured as the number of architectural dependencies between two product features, are a major driver of integration failures. The research and practical implications of our results are discussed.
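
The paper's key measure, cross-feature interactions as the number of architectural dependencies between the components of two features, can be sketched as follows (a toy reconstruction with invented data, not the authors' tooling):

```python
# Toy sketch: cross-feature interactions counted as component-level
# dependencies that cross two features. Data values are invented.
from itertools import combinations

feature_components = {            # feature -> components it modifies
    "F1": {"ui", "auth"},
    "F2": {"auth", "billing"},
    "F3": {"billing", "reports"},
}
component_deps = {("ui", "auth"), ("auth", "billing"), ("billing", "reports")}

def cross_feature_interactions(fa: str, fb: str) -> int:
    a, b = feature_components[fa], feature_components[fb]
    return sum((s in a and d in b) or (s in b and d in a)
               for s, d in component_deps)

for fa, fb in combinations(feature_components, 2):
    print(fa, fb, cross_feature_interactions(fa, fb))
# F1 F2 -> 2, F1 F3 -> 1, F2 F3 -> 2: high-count pairs flag integration risk.
```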

67 citations



Reference BookDOI
19 Oct 2011
TL;DR: The authors illustrate the inherent benefits of time-triggered communication in terms of predictability, complexity management, fault-tolerance, and analytical dependability modeling, which are key aspects of safety-critical systems.
Abstract: Time-Triggered Communication helps readers build an understanding of the conceptual foundation, operation, and application of time-triggered communication, which is widely used for embedded systems in a diverse range of industries. This book assembles contributions from experts that examine the differences and commonalities of the most significant protocols, including TTP, FlexRay, TTEthernet, SAFEbus, TTCAN, and LIN. Covering the spectrum, from low-cost time-triggered fieldbus networks to ultra-reliable time-triggered networks used for safety-critical applications, the authors illustrate the inherent benefits of time-triggered communication in terms of predictability, complexity management, fault-tolerance, and analytical dependability modeling, which are key aspects of safety-critical systems. Examples covered include FlexRay in cars, TTP in railway and avionic systems, and TTEthernet in aerospace applications. Illustrating key concepts based on real-world industrial applications, this book:
- Details the underlying concepts and principles of time-triggered communication
- Explores the properties of a time-triggered communication system, contrasting its strengths and weaknesses
- Focuses on the core algorithms applied in many systems, including those used for clock synchronization, startup, membership, and fault isolation
- Describes the protocols that incorporate the presented algorithms
- Covers tooling requirements and solutions for system integration, including scheduling
The information in this book will be extremely useful to industry leaders who design and manufacture products with distributed embedded systems based on time-triggered communication. It will also benefit suppliers of embedded components or development tools used in this area. As an educational tool, this material can be used to teach students and working professionals in areas including embedded systems, computer networks, system architectures, dependability, real-time systems, and automotive, avionics, and industrial control systems.
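
A defining mechanic shared by TTP, FlexRay, and TTEthernet is a static, globally synchronized transmission schedule. As a toy model (slot layout and durations invented for illustration), a TDMA round reduces to a cyclic slot table:

```python
# Toy sketch of a time-triggered TDMA round: a static slot table assigns each
# node a fixed transmission window, so the schedule, not runtime arbitration,
# decides who sends. Slot layout and durations are invented.
SLOT_US = 200
ROUND = ["nodeA", "nodeB", "nodeC", "nodeA"]   # one communication round

def sender_at(t_us: float) -> str:
    """Which node owns the bus at global time t (after clock synchronization)."""
    slot = int(t_us // SLOT_US) % len(ROUND)
    return ROUND[slot]

print(sender_at(0), sender_at(250), sender_at(799))   # nodeA nodeB nodeA
```

Because transmission instants are known a priori, worst-case latency and jitter are bounded by construction, which is the predictability benefit the book stresses throughout.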

50 citations


Proceedings ArticleDOI
04 Jan 2011
TL;DR: This paper documents the development of a conceptual-level integrated process for design and analysis of efficient and environmentally acceptable supersonic aircraft, using a unique blend of low-, mixed-, and high-fidelity engineering tools combined in the software integration framework ModelCenter.
Abstract: This paper documents the development of a conceptual-level integrated process for design and analysis of efficient and environmentally acceptable supersonic aircraft. To overcome the technical challenges of achieving this goal, a conceptual design capability is needed that provides users with the ability to examine the integrated solution between all disciplines and facilitates multidisciplinary design, analysis, and optimization on a scale greater than previously achieved. The described capability is both an interactive design environment and a high-powered optimization system, with a unique blend of low-, mixed-, and high-fidelity engineering tools combined in the software integration framework ModelCenter. The various modules are described and the capabilities of the system are demonstrated. Current limitations and proposed future enhancements are also discussed.

Journal ArticleDOI
TL;DR: This work presents the design and implementation of CompOSe, a light-weight (only 1500 lines of code) composable operating system for MPSoCs, and experimentally demonstrates the ability to provide temporal composability, even in the presence of dynamic application behaviour and multiple use cases.

Journal ArticleDOI
TL;DR: Emerging disruptive technologies and innovative paradigms such as Open Source software are leading the way to a new generation of information systems that will slowly change the way physicians, healthcare providers, and patients interact and communicate in the future.

Journal ArticleDOI
TL;DR: A proposal called Guarana provides explicit support for devising EAI solutions using enterprise integration patterns by means of a graphical model; its DSL enables software engineers to have not only the view of a single process, but also a view of the whole set of processes of which an EAI solution is composed.
Abstract: Enterprise Application Integration (EAI) solutions cope with two kinds of problems within software ecosystems, namely: keeping a number of applications' data in synchrony, or creating new functionality on top of them. Enterprise Service Buses (ESBs) provide the technology required to implement a variety of EAI solutions at sensible costs, but those costs are still far from negligible. It is not surprising, then, that many authors are working on proposals to endow them with domain-specific tools to help software engineers reduce integration costs. In this article, we introduce a proposal called Guarana. Its key features are as follows: it provides explicit support for devising EAI solutions using enterprise integration patterns by means of a graphical model; its DSL enables software engineers to have not only the view of a process, but also a view of the whole set of processes of which an EAI solution is composed; both processes and tasks can have multiple inputs and multiple outputs; and, finally, its runtime system provides a task-based execution model that is usually more efficient than the process-based execution models in current use. We have also implemented a graphical editor for our DSL and a set of scripts to transform our models into Java code ready to be compiled and executed. To set up a solution from this code, a software engineer only needs to configure a number of adapters to communicate with the applications being integrated.
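
The runtime difference highlighted here, a task-based rather than a process-based execution model, can be caricatured as follows (a generic reconstruction; names are not Guarana's actual API):

```python
# Minimal sketch of a task-based EAI runtime: a worker repeatedly runs any
# task whose input queue holds a message, instead of dedicating one thread
# to each process instance. Names are illustrative, not Guarana's API.
from collections import deque

queues = {"in": deque(["order#1", "order#2"]), "mid": deque(), "out": deque()}

def translate(msg): queues["mid"].append(msg.upper())        # task 1
def route(msg): queues["out"].append(f"routed({msg})")       # task 2

tasks = [("in", translate), ("mid", route)]

progress = True
while progress:                      # a single worker drains all ready tasks
    progress = False
    for qname, task in tasks:
        if queues[qname]:
            task(queues[qname].popleft())
            progress = True

print(list(queues["out"]))   # ['routed(ORDER#1)', 'routed(ORDER#2)']
```

Scheduling work per ready task rather than per process instance keeps workers busy instead of blocking on idle process instances, which is why such models tend to be more efficient.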

Book ChapterDOI
23 May 2011
TL;DR: Some aspects of the CAD/CAPP/CAP systems integration process are described, and the structure of an integrated environment that addresses this kind of integration problem is shown.
Abstract: In this paper, some aspects of the CAD/CAPP/CAP systems integration process are described. Moreover, the structure of an integrated environment that addresses this kind of integration problem is shown. The main elements of this environment are a technological knowledge base (TKB) and a scheduling knowledge base (SKB). The foundation for building these knowledge bases was the application of object methods in the processes of technological and scheduling knowledge representation. In the paper, both models, that is, the object model of technological knowledge representation and the object model of scheduling knowledge representation, are discussed in detail.

Proceedings ArticleDOI
03 Dec 2011
TL;DR: The mounting cost pressures in scale-out datacenters demand technologies that can decrease the Total Cost of Ownership (TCO).
Abstract: A System-on-Chip (SoC) integrates multiple discrete components into a single chip, for example by placing CPU cores, network interfaces, and I/O controllers on the same die. While SoCs have dominated high-end embedded products for over a decade, system-level integration is a relatively new trend in servers, driven by the opportunity to lower cost (by reducing the number of discrete parts) and power (by reducing the pin crossings from the cores to the I/O). Today, the mounting cost pressures in scale-out datacenters demand technologies that can decrease the Total Cost of Ownership (TCO). At the same time, the diminishing return of dedicating the increasing number of available transistors to more cores and caches is creating a stronger case for SoC-based servers. This paper examines system-level integration design options for the scale-out server market, specifically targeting datacenter-scale throughput computing workloads. We develop tools to model the area and power of a variety of discrete and integrated server configurations. We evaluate the benefits, trade-offs, and trends of system-level integration for warehouse-scale datacenter servers, and identify the key "uncore" components that reduce cost and power. We perform a comprehensive design space exploration at both the SoC and datacenter level, identify the sweet spots, and highlight important scaling trends of performance, power, area, and cost from 45nm to 16nm. Our results show that system integration yields substantial benefits, enables novel aggregated configurations with a much higher compute density, and significantly reduces total chip area and dynamic power versus a discrete-component server. Finally, we use utilization traces and architectural profiles of real machines to evaluate the dynamic power consumption of typical scale-out cloud applications, and combine them in an overall TCO model. Our results show that, for example at 16nm, SoC-based servers can achieve more than a 26% TCO reduction at datacenter scale.
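
A back-of-the-envelope version of such a TCO model might look like this (the formula structure and all numbers are illustrative assumptions, not the paper's calibrated model):

```python
# Toy datacenter TCO sketch: amortized server capex plus energy cost, with
# facility overhead folded in via PUE. All numbers are invented.
def tco_per_server_year(capex_usd, lifetime_y, avg_power_w,
                        pue=1.5, usd_per_kwh=0.07):
    energy_kwh = avg_power_w / 1000 * 8760 * pue     # energy per year
    return capex_usd / lifetime_y + energy_kwh * usd_per_kwh

discrete = tco_per_server_year(capex_usd=2400, lifetime_y=3, avg_power_w=300)
soc      = tco_per_server_year(capex_usd=2000, lifetime_y=3, avg_power_w=220)
print(f"discrete ${discrete:.0f}/y, SoC ${soc:.0f}/y, "
      f"saving {100 * (1 - soc / discrete):.1f}%")
```

Integration moves both terms at once (fewer discrete parts cut capex, fewer pin crossings cut power), which is the mechanism behind the TCO reductions the paper reports.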

Journal ArticleDOI
TL;DR: This article presents a wide variety of techniques for realizing transaction-level models of the increasingly large-scale multiprocessor systems on chip and describes how such models of hardware allow subsequent software integration and system performance evaluation.
Abstract: This article presents a wide variety of techniques for realizing transaction-level models of the increasingly large-scale multiprocessor systems on chip. It describes how such models of hardware allow subsequent software integration and system performance evaluation.

Proceedings ArticleDOI
28 Mar 2011
TL;DR: The next generation of sensors is proposed: the intelligent sensor platform, defined as the combination of sensor and processing with a dedicated architecture to aggregate external sensor data.
Abstract: Smart sensors are defined by the IEEE 1451 standard as sensors with a small memory and a standardized physical connection to enable communication with a processor and a data network. Beyond this definition, smart sensors are defined as the combination of a sensor with signal conditioning, embedded algorithms, and a digital interface. They are currently widely adopted in mobile and portable devices such as phones and tablets. Such sensors address the issues of power consumption, data communication, and system integration at the sensor level, for predefined use cases. Limitations of smart sensors include a lack of flexibility, an absence of customization, a narrow spectrum of applications, and a basic communication protocol. Moreover, there is growing demand for new and broader applications of individual sensors while integrating an increasing number of different types of sensors. Therefore, to overcome these limitations and address the new challenges, the next generation of sensors is proposed: the intelligent sensor platform. It is defined as the combination of sensing and processing with a dedicated architecture to aggregate external sensor data. Its main advantages are reviewed, and an implementation of an intelligent sensor platform embedding a MEMS accelerometer with a 32-bit microcontroller is described.
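
The proposed combination of local sensing, local processing, and aggregation of external sensor data can be caricatured in a few lines (class and method names invented for illustration):

```python
# Toy sketch of an "intelligent sensor platform": MCU-side code that runs an
# embedded algorithm on its own MEMS accelerometer and aggregates external
# sensors' data before reporting. All names are illustrative.
class SensorHub:
    def __init__(self):
        self.external = {}                   # sensor name -> latest sample

    def on_external_sample(self, name, value):
        self.external[name] = value          # aggregate external sensor data

    def step(self, accel_xyz):
        mag = sum(a * a for a in accel_xyz) ** 0.5   # embedded algorithm
        return {"motion": mag > 1.2,                 # simple motion detector
                "accel_g": round(mag, 2),
                "temp_C": self.external.get("temp_C")}

hub = SensorHub()
hub.on_external_sample("temp_C", 23.5)
print(hub.step((0.1, 0.2, 1.4)))  # {'motion': True, 'accel_g': 1.42, 'temp_C': 23.5}
```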

Book
01 Jan 2011
TL;DR: Volume editors Camberos and Moorhouse developed a conceptual framework for constructing performance metrics across multiple disciplines that maps work-potential losses as a common currency for aerospace systems analysis and design optimization.
Abstract: Recognizing a critical need for a holistic approach to systems integration and multidisciplinary analysis and design optimization, volume editors Camberos and Moorhouse pioneered the application of a powerful scientific principle, the second law of thermodynamics, for aerospace engineering. "Exergy Analysis and Design Optimization for Aerospace Vehicles and Systems" illustrates how they applied this law to advance aerospace systems analysis and design optimization. They set forth a comprehensive research program incorporating: a systematic theoretical basis for constructing the proper formulas quantifying exergy balance; development of new computational capabilities for calculating exergy destruction; and, exploration of novel approaches for system-level design. To enable rational progress in this unorthodox aero-sciences domain, they developed a conceptual framework for constructing performance metrics across multiple disciplines to map work-potential losses as a common currency. Concepts discussed include: identification of the upper limits on engineering system performance using the second law of thermodynamics; design methodology integration with tools developed in CFD and MDA/MDO; application of exergy methods to all levels of flight vehicle design; and, future directions, including constructal theory, quantum thermodynamics, and numerical methods in light of the second law.
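
The "common currency" here is work potential. For reference, these are the standard exergy relations (textbook thermodynamics, not formulas quoted from the book): the Gouy-Stodola theorem ties exergy destruction to entropy generation, and a steady-flow exergy balance makes the loss bookkeeping explicit,

```latex
\dot{X}_{\mathrm{dest}}
  = \sum_{\mathrm{in}} \dot{m}\,\psi
  - \sum_{\mathrm{out}} \dot{m}\,\psi
  + \sum_{j}\Bigl(1 - \frac{T_0}{T_j}\Bigr)\dot{Q}_j
  - \dot{W}
  = T_0\,\dot{S}_{\mathrm{gen}} \;\ge\; 0,
\qquad
\psi = (h - h_0) - T_0\,(s - s_0) + \frac{V^2}{2} + gz .
```

Summing the destruction term over subsystems yields a single loss metric comparable across aerodynamics, propulsion, and thermal management, which is exactly the cross-disciplinary mapping the editors propose.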

Proceedings ArticleDOI
10 Apr 2011
TL;DR: This paper designs the facility information access protocol (FIAP) for data-centric building automation systems, carries out FIAP-based system integration in a building of the University of Tokyo, and demonstrates that FIAP enables incremental installation for a wide variety of applications at small engineering cost.
Abstract: Intelligent buildings are getting data-centric: they archive the historical records of motion detectors, power usage, HVAC statuses, weather, and any other information in order to improve their control strategies. The engineering cost of installation and maintenance of such systems should be minimized, as the system owner has to operate them for several decades, i.e., the lifetime of the building. However, there are several design pitfalls that multiply such engineering costs and make operation a heavy burden. This paper identifies those pitfalls and presents technical challenges that enable lightweight installation and maintenance. We then design the facility information access protocol (FIAP) for data-centric building automation systems. We carried out FIAP-based system integration in a building of the University of Tokyo, and demonstrate that FIAP enables incremental installation for a wide variety of applications at small engineering cost.
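
The data-centric pattern FIAP serves is naming every sensor or actuator as a "point" and querying the archive by point ID and time range. The sketch below is a generic caricature of that pattern with invented calls, not the real protocol messages:

```python
# Toy sketch of data-centric facility data access: each sensor/actuator is a
# "point" with a URI-like ID, and the archive is queried by point and time
# range. Invented, simplified calls; not FIAP's actual message formats.
from bisect import bisect_left, bisect_right

archive = {}   # point id -> sorted list of (timestamp, value)

def write(point_id, ts, value):
    rows = archive.setdefault(point_id, [])
    rows.append((ts, value))
    rows.sort()

def fetch(point_id, t0, t1):
    rows = archive.get(point_id, [])
    lo = bisect_left(rows, (t0,))
    hi = bisect_right(rows, (t1, float("inf")))
    return rows[lo:hi]

write("http://bldg.example/6F/room601/power", 100, 412.0)
write("http://bldg.example/6F/room601/power", 160, 399.5)
print(fetch("http://bldg.example/6F/room601/power", 90, 150))  # [(100, 412.0)]
```

Because applications address points rather than concrete controllers, new devices and applications can be added incrementally without touching existing ones, the low-engineering-cost property the paper demonstrates.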

Book ChapterDOI
23 May 2011
TL;DR: A method of integrating production-preparation and production-planning systems, such as SWZ and PROEDIMS, is presented; the resulting system will enable SME entrepreneurs to make correct decisions connected with planning and controlling production.
Abstract: Fast growth of the SME sector, increasing customer demands, and a dynamic market force producers to lower production costs and to use new IT tools in production. The paper presents a method of integrating production-preparation and production-planning systems, such as SWZ and PROEDIMS. The system constituted in this process will enable SME entrepreneurs to make correct decisions connected with planning and controlling production. The integration is achieved by methods of data transformation and data mapping using the XML language. Through this integration, the PROEDIMS system is enriched with a module supporting the verification of production orders, using constraint satisfaction and a depth-first search (DFS) algorithm with backtracking.
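
The order-verification idea, searching for a feasible assignment under constraints with depth-first search and backtracking, can be sketched generically (toy constraints; not SWZ/PROEDIMS internals):

```python
# Toy sketch: verify a production order by assigning operations to machines
# under capacity constraints, using depth-first search with backtracking.
operations = ["cut", "mill", "drill"]
machines = {"cut": {"M1"}, "mill": {"M1", "M2"}, "drill": {"M2", "M3"}}
capacity = {"M1": 1, "M2": 1, "M3": 1}      # max operations per machine (toy)

def dfs(i, load, plan):
    if i == len(operations):                # every operation placed: feasible
        return plan
    op = operations[i]
    for m in machines[op]:                  # try each machine allowed for op
        if load.get(m, 0) < capacity[m]:
            load[m] = load.get(m, 0) + 1
            found = dfs(i + 1, load, plan + [(op, m)])
            if found:
                return found
            load[m] -= 1                    # backtrack, try the next machine
    return None                             # no feasible assignment here

print(dfs(0, {}, []))   # e.g. [('cut', 'M1'), ('mill', 'M2'), ('drill', 'M3')]
```

A None result means the order cannot be scheduled as specified, which is precisely the verification signal such a module gives planners.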

Proceedings ArticleDOI
01 Jan 2011
TL;DR: ONCO-i2b2, funded by the Lombardia region, builds on the software developed by the Informatics for Integrating Biology and the Bedside (i2b2) NIH project; using i2b2 and purpose-built software modules, data coming from multiple sources are integrated and jointly queried.
Abstract: The University of Pavia and the IRCCS Fondazione Salvatore Maugeri of Pavia (FSM) have recently started an IT initiative to support clinical research in oncology, called ONCO-i2b2. ONCO-i2b2, funded by the Lombardia region, builds on the software developed by the Informatics for Integrating Biology and the Bedside (i2b2) NIH project. Using i2b2 and purpose-built software modules, data coming from multiple sources are integrated and jointly queried. The core of the integration process lies in retrieving and merging data from the biobank management software and from the FSM hospital information system. The integration process is based on an ontology of the problem domain and on open-source software integration modules. A Natural Language Processing module has been implemented as well; it automatically extracts clinical information about oncology patients from unstructured medical records. The system currently manages more than two thousand patients and will be further extended and improved over the next two years.
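
The NLP module's job, pulling structured oncology facts out of free text, can be hinted at with a toy rule-based extractor (the patterns are invented; the actual ONCO-i2b2 module is considerably more sophisticated):

```python
# Toy sketch of rule-based clinical information extraction. Patterns are
# invented; the real ONCO-i2b2 NLP module is far more sophisticated.
import re

note = "Pathology: invasive ductal carcinoma, stage pT2 pN1 M0, ER positive."

extracted = {
    "tnm": re.findall(r"\bp?[TNM]\d[a-c]?\b", note),
    "receptors": re.findall(r"\b(ER|PR|HER2)\s+(positive|negative)\b", note),
}
print(extracted)
# {'tnm': ['pT2', 'pN1', 'M0'], 'receptors': [('ER', 'positive')]}
```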

Journal ArticleDOI
Jacob Beal1
TL;DR: In this paper, the author proposes an engineering approach based on functional blueprints, under which a system is specified in terms of desired performance and means of incrementally correcting deficiencies; the approach is demonstrated by applying it to integrate simplified models of tissue growth and vascularization, and it is further shown how the composed system may itself be modulated for use as a component in a more complex design.
Abstract: The engineering of grown systems poses fundamentally different system integration challenges than ordinary engineering of static designs. On the one hand, a grown system must be capable of surviving not only in its final form, but at every intermediate stage, despite the fact that its subsystems may grow unevenly or be subject to different scaling laws. On the other hand, the ability to grow offers much greater potential for adaptation, either to changes in the environment or to internal stresses developed as the system grows. I observe that the ability of subsystems to tolerate stress can be used to transform incremental adaptation into the dynamic discovery of viable growth trajectories for the system as a whole. Using this observation, I propose an engineering approach based on functional blueprints, under which a system is specified in terms of desired performance and means of incrementally correcting deficiencies. I explore how manifold geometric programming can support such an approach by simplifying the construction of distortion-tolerant programs, then demonstrate the functional blueprints approach by applying it to integrate simplified models of tissue growth and vascularization, and further show how the composed system may itself be modulated for use as a component in a more complex design.
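
The paper's running example, tissue growth kept viable by vascularization, comes down to a loop of grow, detect stress, correct incrementally. A toy numerical caricature (all dynamics invented) shows the idea:

```python
# Toy caricature of a functional blueprint: one subsystem grows, a stress
# signal detects the deficiency, and an incremental correction keeps every
# intermediate stage viable. All dynamics are invented.
tissue, vessels = 1.0, 1.0
for step in range(30):
    tissue *= 1.10                             # tissue grows 10% per step
    stress = tissue / vessels                  # perfusion-deficit signal
    if stress > 1.2:                           # tolerated stress band exceeded
        vessels *= 1.0 + 0.5 * (stress - 1.0)  # incremental correction
    assert tissue / vessels < 2.0, "non-viable intermediate stage"

print(f"tissue={tissue:.1f}, vessels={vessels:.1f}, ratio={tissue/vessels:.2f}")
```

The correction rule references the desired function (adequate perfusion) rather than a fixed final geometry, so the same blueprint tolerates uneven growth rates, the property the paper builds on.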

Book
08 Sep 2011
TL;DR: This book presents an ontology of user interfaces and interactions and its applications, including efficient semantic event processing and improved information exploration.
Abstract: Contents: Introduction. Part I, System Integration, Ontologies, and User Interfaces: Application Integration on the User Interface Level; Ontology-based System Integration; Ontologies in User Interface Development. Part II, Integrating User Interfaces with Ontologies: A Framework for User Interface Integration; An Ontology of User Interfaces and Interactions; Data Object Exchange; Efficient Semantic Event Processing; Crossing Technological Borders. Part III, The Future of Ontology-based UI Integration: Improving Information Exploration; Towards End-user User Interface Integration. Conclusion and Outlook. Index.

Proceedings ArticleDOI
14 Mar 2011
TL;DR: This work presents a structured approach to predictable integration based on a combination of architectural principles and associated analysis techniques, and identifies four QoS classes and defines the type of QoS guarantees to be supported for the two classes targeted at real-time functions.
Abstract: Advanced SoCs integrate a diverse set of system functions that pose different requirements on the SoC infrastructure. Predictable integration of such SoCs, with guaranteed Quality-of-Service (QoS) for the real-time functions, is becoming increasingly challenging. We present a structured approach to predictable integration based on a combination of architectural principles and associated analysis techniques. We identify four QoS classes and define the type of QoS guarantees to be supported for the two classes targeted at real-time functions. We then discuss how a SoC infrastructure can be built that provides such QoS guarantees on its interfaces and how network calculus can be applied for analyzing worst-case performance and sizing of buffers. Benefits of our approach are predictable performance and improved time-to-market, while avoiding costly over-design.
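
For the buffer-sizing step, the standard network-calculus result for a token-bucket arrival curve α(t) = σ + ρt served by a rate-latency curve β(t) = R·max(0, t − T) bounds the backlog by σ + ρT and the delay by T + σ/R when R ≥ ρ. A minimal sketch with illustrative numbers:

```python
# Minimal network-calculus sketch: buffer and delay bounds for a token-bucket
# arrival curve over a rate-latency service curve. Numbers are illustrative.
def buffer_bound(sigma, rho, R, T):
    assert R >= rho, "service rate must cover the sustained arrival rate"
    return sigma + rho * T        # max vertical gap between the curves

def delay_bound(sigma, rho, R, T):
    assert R >= rho
    return T + sigma / R          # max horizontal gap between the curves

print(buffer_bound(sigma=16, rho=0.4, R=1.0, T=50))  # 36.0 -> FIFO depth to provision
print(delay_bound(sigma=16, rho=0.4, R=1.0, T=50))   # 66.0 -> worst-case latency
```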

Journal ArticleDOI
TL;DR: A formalization of the model integration problem is proposed and a coupling method is presented and the extension of the classic Guyton model, a multi-organ, integrated systems model of blood pressure regulation, is used as an example of the application of the proposed method.
Abstract: This paper presents a contribution to the definition of the interfaces required to perform heterogeneous model integration in the context of integrative physiology. A formalization of the model integration problem is proposed and a coupling method is presented. The extension of the classic Guyton model, a multi-organ, integrated systems model of blood pressure regulation, is used as an example of the application of the proposed method. To this end, the Guyton model has been restructured, extensive sensitivity analyses have been performed, and appropriate transformations have been applied to replace a subset of its constituent modules by integrating a pulsatile heart and an updated representation of the renin-angiotensin system. Simulation results of the extended integrated model are presented, and the impact of integrating the new modules into the original model is evaluated.
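
The essence of the coupling method, modules exchanging interface variables at each step, can be sketched as a toy co-simulation of two lumped modules (the equations are invented placeholders, not the Guyton model's):

```python
# Toy co-simulation sketch: two modules coupled through interface variables,
# cardiac output CO and arterial pressure Pa. Equations are invented
# placeholders, not the Guyton model's.
dt = 0.01
Pa, CO = 90.0, 5.0                    # mmHg, L/min (illustrative)

def heart_module(Pa):
    """Pulsatile-heart stand-in: output falls as afterload rises."""
    return 5.0 - 0.02 * (Pa - 100.0)

def circulation_module(Pa, CO):
    """Windkessel-like stand-in: dPa/dt from inflow CO and passive runoff."""
    R, C = 20.0, 1.5                  # peripheral resistance, compliance
    return (CO - Pa / R) / C

for _ in range(int(120 / dt)):        # ~2 minutes of coupled integration
    CO = heart_module(Pa)             # module 1 reads Pa, exports CO
    Pa += circulation_module(Pa, CO) * dt   # module 2 reads CO, updates Pa

print(f"Pa={Pa:.1f} mmHg, CO={CO:.2f} L/min")   # converges to Pa~100, CO~5.0
```

Formalizing exactly which variables cross each module boundary, and in which direction, is what the paper's interface definition contributes.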

Journal ArticleDOI
TL;DR: The challenges of a more explicit sharing of image and image-processing semantics are discussed, together with the help that semantic web technologies may bring to achieving this goal.

Proceedings ArticleDOI
03 Apr 2011
TL;DR: In this paper, the requirements of EMS-MES system integration within the framework of discrete event simulation (DES) are analyzed, and a case study of EMS-MES system integration for precision sand casting production is explored.
Abstract: The desire to be environmentally sustainable gives manufacturers the necessary impetus to implement "green" technology that previously may have been regarded as less important. Traditionally, Energy Management Systems (EMS), which handle energy-related activities within building services, and Manufacturing Execution Systems (MES), which handle production activities, have been isolated from one another. Clearly, EMS-MES integration offers a compelling opportunity to make important energy-efficient contributions toward manufacturing sustainability. Discrete Event Simulation (DES) has been very valuable for manufacturing applications as an efficient analysis tool to aid problem solving and decision-making. This paper analyzes the requirements of EMS-MES system integration within the framework of DES. A case study of EMS-MES system integration for precision sand casting production is explored.
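
A minimal flavor of DES applied to the EMS-MES question, tracking throughput and energy together across machine states, might look like this (toy model, invented parameters):

```python
# Toy discrete event simulation: one machine serving queued jobs while we
# track production (MES view) and energy (EMS view). Parameters are invented.
import heapq

IDLE_W, BUSY_W, CYCLE_S = 200.0, 1500.0, 90.0
events = [(i * 120.0, "job") for i in range(5)]   # job release times [s]
heapq.heapify(events)

free_at = energy_j = 0.0
done = 0
while events:
    t_arr, _ = heapq.heappop(events)
    start = max(t_arr, free_at)              # queue if the machine is busy
    energy_j += IDLE_W * (start - free_at)   # EMS view: idle energy
    energy_j += BUSY_W * CYCLE_S             # EMS view: processing energy
    free_at = start + CYCLE_S
    done += 1                                # MES view: completed jobs

print(f"{done} jobs, makespan {free_at:.0f} s, "
      f"energy {energy_j / 3.6e6:.3f} kWh")  # 5 jobs, 570 s, 0.194 kWh
```

Coupling such a model to real EMS and MES data lets candidate schedules be compared on energy as well as throughput before touching the line, which is the payoff of the integration the paper analyzes.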

Journal ArticleDOI
TL;DR: This article reports on the design of a SOA platform, the "Service and Application Integration” (SAI) system, targeting novel approaches for legacy data and systems integration in the maritime surveillance domain, and develops a proof-of-concept of the main system capabilities.
Abstract: Maritime-surveillance operators still demand an integrated maritime picture that better supports international coordination of their operations, as sought in the European area. In this area, many past data-integration efforts have been interpreted as the problem of designing, building, and maintaining huge centralized repositories. Current research activities are instead leveraging service-oriented principles to achieve more flexible and network-centric solutions to systems and data integration. In this direction, this article reports on the design of a SOA platform, the "Service and Application Integration" (SAI) system, targeting novel approaches for legacy data and systems integration in the maritime surveillance domain. We have developed a proof-of-concept of the main system capabilities to assess the feasibility of our approach and to evaluate how the SAI middleware architecture can fit application requirements for dynamic data search, aggregation, and delivery in the distributed maritime domain.

Journal ArticleDOI
TL;DR: The design provides a workable way to manage and query mixed schemas in a data warehouse; it flexibly manages data access and confidentiality, facilitates catalog search, and readily formulates and compiles complex queries.