
Showing papers on "Systems architecture published in 2005"


Journal ArticleDOI
TL;DR: A Service-Oriented Context-Aware Middleware architecture for building and rapidly prototyping context-aware services is proposed, together with a formal ontology-based context model using the Web Ontology Language to address issues including semantic representation, context reasoning, context classification, and dependency.

954 citations


Journal ArticleDOI
TL;DR: A virtual machine can support individual processes or a complete system depending on the abstraction level at which virtualization occurs, and replication by virtualization enables more flexible and efficient use of hardware resources.
Abstract: A virtual machine can support individual processes or a complete system depending on the abstraction level where virtualization occurs. Some VMs support flexible hardware usage and software isolation, while others translate from one instruction set to another. Virtualizing a system or component -such as a processor, memory, or an I/O device - at a given abstraction level maps its interface and visible resources onto the interface and resources of an underlying, possibly different, real system. Consequently, the real system appears as a different virtual system or even as multiple virtual systems. Interjecting virtualizing software between abstraction layers near the HW/SW interface forms a virtual machine that allows otherwise incompatible subsystems to work together. Further, replication by virtualization enables more flexible and efficient use of hardware resources.
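The "translate from one instruction set to another" idea can be made concrete with a toy process-level VM: a tiny, hypothetical guest instruction set interpreted on the host, mapping the guest's interface onto the host's. Every opcode and register name below is invented for illustration.

```python
# Minimal sketch of a process-level virtual machine: a tiny "guest"
# instruction set is interpreted on the host, illustrating how a VM
# maps one interface onto another. All opcodes here are hypothetical.

def run_guest(program, registers=None):
    """Interpret a list of (opcode, *operands) guest instructions."""
    regs = dict(registers or {})
    pc = 0
    while pc < len(program):
        op, *args = program[pc]
        if op == "LOAD":          # LOAD reg, immediate
            regs[args[0]] = args[1]
        elif op == "ADD":         # ADD dst, src1, src2
            regs[args[0]] = regs[args[1]] + regs[args[2]]
        elif op == "JNZ":         # JNZ reg, target: jump if reg is non-zero
            if regs[args[0]] != 0:
                pc = args[1]
                continue
        elif op == "HALT":
            break
        pc += 1
    return regs

# A guest program that computes 2 + 3 into register "r2".
prog = [("LOAD", "r0", 2), ("LOAD", "r1", 3), ("ADD", "r2", "r0", "r1"), ("HALT",)]
```

A real process VM would translate guest instructions to host instructions (often with caching of translated blocks) rather than dispatch them one at a time, but the interface-mapping principle is the same.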

665 citations


Journal ArticleDOI
TL;DR: The key architectural features of Blue Gene/L are introduced: the link chip component and five Blue Gene/L networks, the PowerPC® 440 core and floating-point enhancements, the on-chip and off-chip distributed memory system, the node- and system-level design for high reliability, and the comprehensive approach to fault isolation.
Abstract: The Blue Gene®/L computer is a massively parallel supercomputer based on IBM system-on-a-chip technology. It is designed to scale to 65,536 dual-processor nodes, with a peak performance of 360 teraflops. This paper describes the project objectives and provides an overview of the system architecture that resulted. We discuss our application-based approach and rationale for a low-power, highly integrated design. The key architectural features of Blue Gene/L are introduced in this paper: the link chip component and five Blue Gene/L networks, the PowerPC® 440 core and floating-point enhancements, the on-chip and off-chip distributed memory system, the node- and system-level design for high reliability, and the comprehensive approach to fault isolation.

422 citations


Journal ArticleDOI
Ernst Fricke1, Armin P. Schulz
TL;DR: Flexibility, agility, robustness, and adaptability as four key aspects of changeability will be defined and described and a basic approach outlining and guiding an application of the framework described concludes this paper.
Abstract: In the past decades the world has been changing in almost every aspect. Systems development is facing rapidly changing and increasingly global environments in markets, competition, technology, regulatory, and societal systems. Systems to be delivered must be designed not only to meet customer or market needs, but also increasingly to meet the requirements and constraints of systems sharing their operational context, throughout their entire lifecycle. The design of a system must provide for a continuous evolution of its architecture, either by upgrading a system already in service or by releasing a new version or derivative. Based on these key challenges imposed on systems development, this paper will evolve the idea of incorporating changeability into a system architecture. Flexibility, agility, robustness, and adaptability as four key aspects of changeability will be defined and described. Design principles to enable flexibility, agility, robustness, and adaptability within systems are proposed and described. A basic approach outlining and guiding an application of the framework described concludes this paper. Examples from varying industries will illustrate the applicability and implementation of selected principles. Thus this paper spans a view from why, when, and how changeability has to be incorporated into a system's architecture. © 2005 Wiley Periodicals, Inc. Syst Eng 8: 342–359, 2005

352 citations


Journal Article
TL;DR: An evolving e-learning system is proposed which can adapt itself both to the learners and to the open Web; it is argued that a hybrid collaborative filtering technique is more efficient for making "just-in-time" recommendations.
Abstract: In this article, we proposed an evolving e-learning system which can adapt itself both to the learners and to the open Web, and we pointed out the differences of making recommendations in e-learning and other domains. We propose two pedagogy features in recommendation: learner interest and background knowledge. A description of a paper's value, similarity, and ordering are presented using formal definitions. We also study two pedagogy-oriented recommendation techniques: content-based and hybrid recommendations. We argue that while it is feasible to apply both of these techniques in our domain, a hybrid collaborative filtering technique is more efficient to make "just-in-time" recommendations. In order to assess and compare these two techniques, we carried out an experiment using artificial learners. Experiment results are encouraging, showing that hybrid collaborative filtering, which can lower the computational costs, will not compromise the overall performance of the recommendation system. In addition, as more and more learners participate in the learning process, both learner and paper models can better be enhanced and updated, which is especially desirable for web-based learning systems. We have tested the recommendation mechanisms with real learners, and the results are very encouraging.
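A hedged sketch of what a hybrid recommender combines: a content-based score (overlap with the learner's interests) blended with a collaborative score (other learners' ratings). The data, weighting, and scoring functions below are invented stand-ins, not the paper's actual techniques.

```python
# Hedged sketch of a hybrid recommender: blend a content-based score
# (keyword overlap with the learner's interests) with a collaborative
# score (mean rating from other learners). Data and weights are invented.

def content_score(paper_keywords, learner_interests):
    """Jaccard overlap between paper keywords and learner interests."""
    p, l = set(paper_keywords), set(learner_interests)
    return len(p & l) / len(p | l) if p | l else 0.0

def collaborative_score(ratings):
    """Mean rating given by other learners on a 1-5 scale, scaled to [0, 1]."""
    return sum(ratings) / (len(ratings) * 5) if ratings else 0.0

def hybrid_score(paper_keywords, learner_interests, ratings, alpha=0.5):
    """Weighted blend of the two signals; alpha sets the content weight."""
    return (alpha * content_score(paper_keywords, learner_interests)
            + (1 - alpha) * collaborative_score(ratings))
```

The collaborative term is what lets the system improve as more learners participate, since each new rating refines the score without recomputing content similarity.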

276 citations


Journal ArticleDOI
TL;DR: LiveNet is a stable, accessible system that combines inexpensive, commodity hardware; a flexible sensor/peripheral interconnection bus; and a powerful, light-weight distributed sensing, classification, and inter-process communications software architecture to facilitate the development of distributed real-time multi-modal and context-aware applications.
Abstract: In this paper we describe LiveNet, a flexible wearable platform intended for long-term ambulatory health monitoring with real-time data streaming and context classification. Based on the MIT Wearable Computing Group's distributed mobile system architecture, LiveNet is a stable, accessible system that combines inexpensive, commodity hardware; a flexible sensor/peripheral interconnection bus; and a powerful, light-weight distributed sensing, classification, and inter-process communications software architecture to facilitate the development of distributed real-time multi-modal and context-aware applications. LiveNet is able to continuously monitor a wide range of physiological signals together with the user's activity and context, to develop a personalized, data-rich health profile of a user over time. We demonstrate the power and functionality of this platform by describing a number of health monitoring applications using the LiveNet system in a variety of clinical studies that are underway. Initial evaluations of these pilot experiments demonstrate the potential of using the LiveNet system for real-world applications in rehabilitation medicine.

258 citations


Proceedings Article
01 Dec 2005
TL;DR: This paper proposes the goal of the MetaQuerier for Web-scale integration: with its dynamic and ad hoc nature, such large-scale integration mandates both dynamic source discovery and on-the-fly query translation.
Abstract: The Web has been rapidly "deepened" by myriad searchable databases online, where data are hidden behind query interfaces. Toward large-scale integration over this "deep Web," we have been building the MetaQuerier system, for both exploring (to find) and integrating (to query) databases on the Web. As an interim report, this paper first proposes our goal of the MetaQuerier for Web-scale integration: with its dynamic and ad hoc nature, such large-scale integration mandates both dynamic source discovery and on-the-fly query translation. Second, we present the system architecture and underlying technology of key subsystems in our ongoing implementation. Third, we discuss "lessons" learned to date, focusing on our efforts in system integration, for putting individual subsystems to function together. On one hand, we observe that, across subsystems, the system integration of an integration system is itself non-trivial, which presents both challenges and opportunities beyond subsystems in isolation. On the other hand, we also observe that, across subsystems, there emerge unified insights of "holistic integration," which leverage large scale itself as a unique opportunity for information integration.

218 citations


Proceedings ArticleDOI
10 Apr 2005
TL;DR: The overall system architecture and a prototype implementation for the x86 platform are discussed, and the preliminary performance evaluation shows that although full emulation can be prohibitively expensive, selective emulation can incur as little as 30% performance overhead relative to an uninstrumented (but failure-prone) instance of Apache.
Abstract: We propose a reactive approach for handling a wide variety of software failures, ranging from remotely exploitable vulnerabilities to more mundane bugs that cause abnormal program termination (e.g., illegal memory dereference) or other recognizable bad behavior (e.g., computational denial of service). Our emphasis is in creating "self-healing" software that can protect itself against a recurring fault until a more comprehensive fix is applied. Briefly, our system monitors an application during its execution using a variety of external software probes, trying to localize (in terms of code regions) observed faults. In future runs of the application, the "faulty" region of code will be executed by an instruction-level emulator. The emulator will check for recurrences of previously seen faults before each instruction is executed. When a fault is detected, we recover program execution to a safe control flow. Using the emulator for small pieces of code, as directed by the observed failure, allows us to minimize the performance impact on the immunized application. We discuss the overall system architecture and a prototype implementation for the x86 platform. We show the effectiveness of our approach against a range of attacks and other software failures in real applications such as Apache, sshd, and Bind. Our preliminary performance evaluation shows that although full emulation can be prohibitively expensive, selective emulation can incur as little as 30% performance overhead relative to an uninstrumented (but failure-prone) instance of Apache. Although this overhead is significant, we believe our work is a promising first step in developing self-healing software.
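The monitor/localize/supervise loop described above can be caricatured in a few lines of Python: a code region that has faulted before is executed under a supervising wrapper that maps a recurrence of the fault to a safe recovery value. The fault database, region names, and recovery policy below are all invented stand-ins for the paper's instruction-level emulator.

```python
# Toy sketch of selective "self-healing": once a fault is observed in a
# code region, only that region is re-run under supervision, and any
# recurrence is caught and mapped to a safe recovery value. The fault
# database and recovery policy are invented for illustration.

fault_db = set()   # names of code regions previously seen to fault

def supervised(region_name, safe_value):
    """Decorator: supervise a region only if it has faulted before."""
    def wrap(fn):
        def run(*args):
            if region_name not in fault_db:
                try:
                    return fn(*args)          # fast path: native execution
                except Exception:
                    fault_db.add(region_name) # localize and remember the fault
                    return safe_value
            # slow path: region is known-faulty, execute under supervision
            try:
                return fn(*args)
            except Exception:
                return safe_value             # recover to safe control flow
        return run
    return wrap

@supervised("parse_header", safe_value=None)
def parse_header(raw):
    return raw.split(":")[1].strip()          # faults when ":" is missing
```

The cost asymmetry mirrors the paper's point: only the localized region pays supervision overhead, which is why selective emulation stays far cheaper than emulating everything.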

187 citations


Journal Article
TL;DR: Singularity demonstrates the practicality of new technologies and architectural decisions, which should lead to the construction of more robust and dependable systems.
Abstract: Singularity is a research project in Microsoft Research that started with the question: what would a software platform look like if it were designed from scratch with the primary goal of dependability? Singularity is working to answer this question by building on advances in programming languages and tools to develop a new system architecture and operating system (named Singularity), with the aim of producing a more robust and dependable software platform. Singularity demonstrates the practicality of new technologies and architectural decisions, which should lead to the construction of more robust and dependable systems.

162 citations


Journal ArticleDOI
TL;DR: In this article, in addition to the overall system architecture, the middleware services and the unique sensor fusion algorithms are described and an analysis of the experimental data gathered during field trials at US military facilities is presented.
Abstract: An ad-hoc wireless sensor network-based system is presented that detects and accurately locates shooters even in urban environments. The localization accuracy of the system in open terrain is competitive with that of existing centralized countersniper systems. However, the presented sensor network-based solution surpasses the traditional approach because it can mitigate acoustic multipath effects prevalent in urban areas and it can also resolve multiple simultaneous shots. These unique characteristics of the system are made possible by employing novel sensor fusion techniques that utilize the spatial and temporal diversity of multiple detections. In this article, in addition to the overall system architecture, the middleware services and the unique sensor fusion algorithms are described. An analysis of the experimental data gathered during field trials at US military facilities is also presented.
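As a toy illustration of what the fusion must accomplish, the sketch below grid-searches for the source position whose predicted arrival-time differences best match the measured ones. The sensor layout, sound speed, and grid are invented, and a real deployment must additionally reject multipath and resolve simultaneous shots.

```python
# Toy illustration of acoustic source localization from arrival times:
# grid-search the position whose predicted time-of-arrival differences
# best match the measured ones. Sensor layout, sound speed, and the grid
# are invented; the real system also handles multipath and multiple shots.
import math

C = 340.0  # speed of sound in m/s

def arrival_times(src, sensors, t0=0.0):
    """Blast at position src at time t0; when does each sensor hear it?"""
    return [t0 + math.dist(src, s) / C for s in sensors]

def locate(times, sensors, step=1.0, size=100):
    """Brute-force search over a (size x size) metre grid."""
    meas = [t - times[0] for t in times]   # differences cancel the unknown t0
    best, best_err = None, float("inf")
    for xi in range(size + 1):
        for yi in range(size + 1):
            p = (xi * step, yi * step)
            pred_t = arrival_times(p, sensors)
            pred = [t - pred_t[0] for t in pred_t]
            err = sum((a - b) ** 2 for a, b in zip(meas, pred))
            if err < best_err:
                best, best_err = p, err
    return best

sensors = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0), (100.0, 100.0)]
times = arrival_times((30.0, 40.0), sensors, t0=1.23)  # shot time unknown to locate()
```

Working with time differences rather than absolute times is what removes the unknown shot time from the problem, the same reason countersniper systems rely on time-of-arrival differences across spatially diverse sensors.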

158 citations


Journal ArticleDOI
TL;DR: The challenge has been to demonstrate that remote programming combined with an advanced multimedia user interface for remote control is very suitable, flexible, and profitable for the design of a telelaboratory.
Abstract: In this paper, we present the user interface and the system architecture of an Internet-based telelaboratory, which allows researchers and students to remotely control and program two educational online robots. In fact, the challenge has been to demonstrate that remote programming combined with an advanced multimedia user interface for remote control is very suitable, flexible, and profitable for the design of a telelaboratory. The user interface has been designed by using techniques based on augmented reality and nonimmersive virtual reality, which enhance the way operators get/put information from/to the robotic scenario. Moreover, the user interface provides the possibility of letting the operator manipulate the remote environment by using multiple ways of interaction (i.e., from the simplification of the natural language to low-level remote programming). In fact, the paper focuses on the lowest level of interaction between the operator and the robot, which is remote programming. As explained in the paper, the system architecture permits any external program (i.e., remote experiment, speech-recognition module, etc.) to have access to almost every feature of the telelaboratory (e.g., cameras, object recognition, robot control, etc.). The system validation was performed by letting 40 Ph.D. students within the "European Robotics Research Network Summer School on Internet and Online Robots for Telemanipulation" workshop (Benicàssim, Spain, 2003) program several telemanipulation experiments with the telelaboratory. Some of these experiments are shown and explained in detail. Finally, the paper focuses on the analysis of the network performance for the proposed architecture (i.e., time delay). In fact, several configurations are tested through various networking protocols (i.e., Remote Method Invocation, Transmission Control Protocol/IP, User Datagram Protocol/IP).
Results show the real possibilities offered by these remote-programming techniques, in order to design experiments intended to be performed from both home and the campus.

Journal ArticleDOI
01 May 2005
TL;DR: This paper describes an application-driven approach to the architectural design and implementation of a wireless sensor device that recognizes the event-driven nature of many sensor-network workloads; simulation results suggest a one- to two-order-of-magnitude reduction in power dissipation over existing commodity-based systems for an important class of sensor network applications.
Abstract: Recent years have seen a burgeoning interest in embedded wireless sensor networks with applications ranging from habitat monitoring to medical applications. Wireless sensor networks have several important attributes that require special attention to device design. These include the need for inexpensive, long-lasting, highly reliable devices coupled with very low performance requirements. Ultimately, the "holy grail" of this design space is a truly untethered device that operates off of energy scavenged from the ambient environment. In this paper, we describe an application-driven approach to the architectural design and implementation of a wireless sensor device that recognizes the event-driven nature of many sensor-network workloads. We have developed a full-system simulator for our sensor node design to verify and explore our architecture. Our simulation results suggest one to two orders of magnitude reduction in power dissipation over existing commodity-based systems for an important class of sensor network applications. We are currently in the implementation stage of design, and plan to tape out the first version of our system within the next year.
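The claimed one- to two-order-of-magnitude saving is plausible from duty-cycle arithmetic alone: average power is the duty-weighted mix of active and sleep power. The numbers below are invented for illustration, not taken from the paper.

```python
# Back-of-envelope arithmetic for event-driven duty cycling: average power
# is the duty-weighted mix of active and sleep power. All numbers below
# are invented for illustration, not taken from the paper.

def average_power(p_active_mw, p_sleep_uw, duty_cycle):
    """Average power in mW for a node that is active duty_cycle of the time."""
    return duty_cycle * p_active_mw + (1 - duty_cycle) * p_sleep_uw / 1000.0

always_on = average_power(60.0, 30.0, 1.0)      # commodity node, always active
event_driven = average_power(60.0, 30.0, 0.01)  # wakes on events, 1% duty cycle
reduction = always_on / event_driven            # roughly two orders of magnitude
```

Note that once the duty cycle is small, the sleep power (here 30 µW) starts to dominate the budget, which is why ultra-low-power node designs focus as much on sleep current as on active efficiency.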

Proceedings ArticleDOI
15 Jun 2005
TL;DR: This work develops a general framework for adaptive algorithm selection for use in the Standard Template Adaptive Parallel Library (STAPL), using machine learning techniques to analyze data collected by STAPL installation benchmarks and to determine tests that will select among algorithmic options at run-time.
Abstract: Writing portable programs that perform well on multiple platforms or for varying input sizes and types can be very difficult because performance is often sensitive to the system architecture, the run-time environment, and input data characteristics. This is even more challenging on parallel and distributed systems due to the wide variety of system architectures. One way to address this problem is to adaptively select the best parallel algorithm for the current input data and system from a set of functionally equivalent algorithmic options. Toward this goal, we have developed a general framework for adaptive algorithm selection for use in the Standard Template Adaptive Parallel Library (STAPL). Our framework uses machine learning techniques to analyze data collected by STAPL installation benchmarks and to determine tests that will select among algorithmic options at run-time. We apply a prototype implementation of our framework to two important parallel operations, sorting and matrix multiplication, on multiple platforms and show that the framework determines run-time tests that correctly select the best performing algorithm from among several competing algorithmic options in 86-100% of the cases studied, depending on the operation and the system.
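The selection idea can be miniaturized: benchmark each functionally equivalent option once, record the winner per input class, and dispatch accordingly at run time. The two sort options and size classes below are stand-ins, not STAPL's actual mechanism.

```python
# Toy version of adaptive algorithm selection: time each functionally
# equivalent option once on benchmark inputs (the "installation" phase),
# then dispatch to the winner for each input-size class at run time. The
# two options and the size classes are stand-ins, not STAPL's algorithms.
import random
import time

def insertion_sort(a):
    a = list(a)
    for i in range(1, len(a)):
        key, j = a[i], i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a

OPTIONS = {"insertion": insertion_sort, "builtin": sorted}

def benchmark(size):
    """Return the name of the fastest option for random inputs of this size."""
    timings = {}
    data = [random.random() for _ in range(size)]
    for name, fn in OPTIONS.items():
        t0 = time.perf_counter()
        fn(data)
        timings[name] = time.perf_counter() - t0
    return min(timings, key=timings.get)

def adaptive_sort(a, choice_table):
    """Dispatch based on the benchmarked choice for this input's size class."""
    cls = "small" if len(a) < 64 else "large"
    return OPTIONS[choice_table[cls]](a)
```

STAPL's framework replaces this single timing pass with machine-learned run-time tests over many input features, but the benchmark-then-dispatch structure is the same.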

Patent
28 Jul 2005
TL;DR: In this paper, a unified approach, a fusion technique, a space-time constraint, a methodology, and system architecture are provided to fuse the outputs of monocular and stereo video trackers, RFID and localization systems and biometric identification systems.
Abstract: A unified approach, a fusion technique, a space-time constraint, a methodology, and a system architecture are provided. The unified approach is to fuse the outputs of monocular and stereo video trackers, RFID and localization systems, and biometric identification systems. The fusion technique provided is based on the transformation of the sensory information from heterogeneous sources into a common coordinate system with rigorous uncertainties analysis to account for various sensor noises and ambiguities. The space-time constraint is used to fuse different sensors using the location and velocity information. Advantages include the ability to continuously track multiple humans with their identities in a large area. The methodology is general so that other sensors can be incorporated into the system. The system architecture is provided for the underlying real-time processing of the sensors.
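For intuition on uncertainty-weighted fusion in a common coordinate system, the textbook inverse-variance combination of two noisy estimates is shown below. The sensors and numbers are invented, and this is not the patent's technique.

```python
# Textbook inverse-variance fusion of two noisy 1-D position estimates
# reported in a common coordinate system, e.g. a video tracker and an RF
# localizer. Offered only to illustrate uncertainty-weighted fusion; the
# patent's actual fusion technique is not reproduced here.

def fuse(x1, var1, x2, var2):
    """Fused estimate; the result's variance never exceeds either input's."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    return (w1 * x1 + w2 * x2) / (w1 + w2), 1.0 / (w1 + w2)

# Video tracker: 10.0 m with variance 1.0; RFID zone: 12.0 m with variance 4.0.
x, var = fuse(10.0, 1.0, 12.0, 4.0)
```

The fused result sits closer to the more certain sensor (10.4 m here) and is more certain than either input, which is the core payoff of transforming heterogeneous readings into one frame with explicit uncertainties.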

Journal ArticleDOI
TL;DR: The architecture of a system which uses the technologies of augmented and virtual reality to support the planning process of complex manufacturing systems is described; it assists the user in modeling, in validating the simulation model, and in the subsequent optimization of the production system.

Journal ArticleDOI
TL;DR: An approach based on the 3C (communication, coordination and cooperation) collaboration model to the development of collaborative systems is introduced and a component-based system architecture following this 3C approach is described.
Abstract: This paper introduces an approach based on the 3C (communication, coordination and cooperation) collaboration model to the development of collaborative systems. The 3C model is studied by means of a detailed analysis of each of its three elements, followed by a case study of a learningware application and the methodology of a web-based course, both designed based on this model. Moreover, this paper describes a component-based system architecture following this 3C approach.

Journal ArticleDOI
TL;DR: The centralized and decentralized coordination models are compared using results from simulation scenarios that highlight safety, time efficiency and communication efficiency aspects for each model.
Abstract: Collaborative driving is a growing domain of intelligent transportation systems (ITS) that makes use of communications to autonomously guide cooperative vehicles on an automated highway system (AHS). In this paper, we address this issue by using a platoon of cars considered as more or less autonomous software agents. To achieve this, we propose a hierarchical driving agent architecture based on three layers (guidance layer, management layer and traffic control layer). This architecture has been used to develop centralized platoons, where the driving agent of the head vehicle coordinates other driving agents by applying strict rules, and decentralized platoons, where the platoon is considered as a group of driving agents with a similar degree of autonomy, trying to maintain a stable platoon. The latter decentralized model mainly considers an agent teamwork model based on a multiagent architecture, known as STEAM. The centralized and decentralized coordination models are finally compared using results from simulation scenarios that highlight safety, time efficiency and communication efficiency aspects for each model.
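The gap-keeping behaviour behind a platoon can be sketched as a one-line controller per follower: match the predecessor's speed, corrected by the spacing error. The gains, gaps, and time step below are invented, and this sketch is unrelated to the paper's STEAM-based agent architecture.

```python
# Toy platoon step: each follower sets its speed to its predecessor's,
# corrected by the gap error (a first-order spacing controller). Gains,
# gaps, and time step are invented; this only illustrates the coordination
# idea, not the paper's hierarchical driving-agent architecture.

DESIRED_GAP = 10.0   # metres
KP = 0.5             # gain on the gap error (1/s)
DT = 0.1             # seconds per simulation step

def step(positions, speeds, leader_speed):
    """Advance the platoon one time step; index 0 is the leader."""
    new_speeds = [leader_speed]
    for i in range(1, len(positions)):
        gap = positions[i - 1] - positions[i]
        # match the predecessor's speed, corrected by the spacing error
        new_speeds.append(new_speeds[i - 1] + KP * (gap - DESIRED_GAP))
    new_positions = [p + v * DT for p, v in zip(positions, new_speeds)]
    return new_positions, new_speeds

# Three vehicles, followers starting too far back, all at leader speed.
pos, spd = [100.0, 85.0, 70.0], [20.0, 20.0, 20.0]
for _ in range(600):   # 60 s of simulated driving
    pos, spd = step(pos, spd, 20.0)
```

With these gains each gap error decays exponentially toward zero, so the followers converge to the desired 10 m spacing at the leader's speed; the centralized-versus-decentralized question in the paper is about who computes such commands and with what communicated information.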

Proceedings ArticleDOI
20 Jun 2005
TL;DR: A system that produces efficient MPI collective communication routines that adapts to a given platform and constructs routines that are customized for the platform by automatically generating topology specific routines and using an empirical approach to select the best implementations.
Abstract: In order for collective communication routines to achieve high performance on different platforms, they must be able to adapt to the system architecture and use different algorithms for different situations. Current Message Passing Interface (MPI) implementations, such as MPICH and LAM/MPI, are not fully adaptable to the system architecture and are not able to achieve high performance on many platforms. In this paper, we present a system that produces efficient MPI collective communication routines. By automatically generating topology specific routines and using an empirical approach to select the best implementations, our system adapts to a given platform and constructs routines that are customized for the platform. The experimental results show that the tuned routines consistently achieve high performance on clusters with different network topologies.

Book ChapterDOI
05 Sep 2005
TL;DR: The feasibility of a framework for document-driven workflow systems is demonstrated, in which no explicit control flow is required and execution of the process is driven by input documents.
Abstract: We propose and demonstrate the feasibility of a framework for document-driven workflow systems, in which no explicit control flow is required and execution of the process is driven by input documents. The framework can assist workflow designers to discover the data dependencies between tasks in a process and achieve more efficient control flow design. The framework also provides an architecture to separate the workflow system from application data and facilitate inter-organizational processes. Document-driven workflow systems are more flexible than traditional control-flow processes, easier to verify, and better suited to ad hoc workflows. We also implemented a prototype workflow system using the framework entirely in an RDBMS using Transact-SQL in Microsoft SQL Server 2000. A detailed comparison with control-driven workflows has also been done.
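The control-flow-free execution model can be sketched in a few lines: tasks declare their input and output documents, and the engine runs whichever task's inputs exist, so ordering emerges from data dependencies. Task and document names below are invented.

```python
# Toy document-driven workflow: each task declares the documents it needs
# and the one it produces; the engine repeatedly runs any task whose
# inputs exist, so execution order emerges from data dependencies rather
# than explicit control flow. Task and document names are invented.

def run_workflow(tasks, documents):
    """tasks: list of (name, inputs, output, fn); documents: name -> content."""
    done, order = set(), []
    progress = True
    while progress:
        progress = False
        for name, inputs, output, fn in tasks:
            if name not in done and all(d in documents for d in inputs):
                documents[output] = fn(*(documents[d] for d in inputs))
                done.add(name)
                order.append(name)
                progress = True
    return order

tasks = [
    ("approve", ["order", "credit_report"], "approval", lambda o, c: "approved"),
    ("check_credit", ["order"], "credit_report", lambda o: "score:720"),
]
docs = {"order": "42 widgets"}
order = run_workflow(tasks, docs)
```

Note that "approve" is listed first but runs second, purely because its input document does not yet exist; that inversion is exactly what the framework exploits to surface hidden data dependencies between tasks.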

Book ChapterDOI
27 Jun 2005
TL;DR: This paper, based on the tutorial given as part of the tutorial programme of CLIMA-VI, gives an overview of Jason, a multi-agent systems development platform built on an interpreter for an extended version of AgentSpeak, an elegant logic-based programming language inspired by the BDI architecture.
Abstract: This paper is based on the tutorial given as part of the tutorial programme of CLIMA-VI. The tutorial aimed at giving an overview of the various features available in Jason, a multi-agent systems development platform that is based on an interpreter for an extended version of AgentSpeak. The BDI architecture is the best known and most studied architecture for cognitive agents, and AgentSpeak is an elegant, logic-based programming language inspired by the BDI architecture.
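The BDI cycle that AgentSpeak embodies (an event selects the first plan whose context condition holds against the current beliefs) can be echoed in plain Python. This stand-in mimics only the triggering-event/context/body structure of an AgentSpeak plan; it is not Jason or AgentSpeak syntax, and the beliefs and plans are invented.

```python
# Miniature of the BDI reasoning cycle that AgentSpeak embodies: an event
# selects the first plan whose context condition holds in the current
# beliefs, and its body runs. This Python stand-in only echoes the
# triggering-event/context/body structure; it is not Jason syntax.

beliefs = {"battery_low": False}
log = []

plans = [
    # (triggering event, context condition, plan body)
    ("explore", lambda b: b["battery_low"], lambda b: log.append("return_to_dock")),
    ("explore", lambda b: not b["battery_low"], lambda b: log.append("wander")),
]

def handle(event, beliefs):
    """Select and execute the first applicable plan for the event."""
    for trigger, context, body in plans:
        if trigger == event and context(beliefs):
            body(beliefs)      # the selected plan becomes an intention
            return True
    return False               # no applicable plan for this event

handle("explore", beliefs)
beliefs["battery_low"] = True  # a belief update changes which plan applies
handle("explore", beliefs)
```

In AgentSpeak proper the equivalent would be two plans such as `+!explore : battery_low <- !return_to_dock.` and `+!explore : not battery_low <- !wander.`, with the interpreter performing this selection on every cycle.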

Proceedings ArticleDOI
16 Sep 2005
TL;DR: This paper starts with a threat model for airports and uses this to derive the security requirements to motivate an open-standards based architecture for surveillance, and discusses the critical aspects of this architecture and its implementation in the IBM S3 smart surveillance system.
Abstract: As smart surveillance technology becomes a critical component in security infrastructures, the system architecture assumes a critical importance. This paper considers the example of smart surveillance in an airport environment. We start with a threat model for airports and use this to derive the security requirements. These requirements are used to motivate an open-standards based architecture for surveillance. We discuss the critical aspects of this architecture and its implementation in the IBM S3 smart surveillance system. Demo results from a pilot deployment in Hawthorne, NY are presented.

Journal ArticleDOI
TL;DR: The research issues and industrial requirements for a knowledge-driven CPD system architecture are presented; the proposed system architecture is described in detail and its implementation is demonstrated using a case study of an injection-moulded product.

Proceedings ArticleDOI
M. Brodie, Sheng Ma, Guy M. Lohman1, Laurent Mignet1, Mark Francis Wilding1, J. Champlin, P. Sohn 
13 Jun 2005
TL;DR: An architecture for, and prototype of, a system for quickly detecting software problem recurrences is presented, demonstrating the value of automatically detecting re-occurrence of the same problem for a popular software product.
Abstract: We present an architecture for and prototype of a system for quickly detecting software problem recurrences. Re-discovery of the same problem is very common in many large software products and is a major cost component of product support. At run-time, when a problem occurs, the system collects the problem symptoms, including the program call-stack, and compares it against a database of symptoms to find the closest matches. The database is populated off-line using solved cases and indexed to allow for efficient matching. Thus problems that occur repeatedly can be easily and automatically resolved without requiring any human problem-solving expertise. We describe a prototype implementation of the system, including the matching algorithm, and present some experimental results demonstrating the value of automatically detecting re-occurrence of the same problem for a popular software product.
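A minimal stand-in for the symptom-matching step scores a failing call stack against each solved case with a sequence-similarity ratio and returns the best match. The frame names, case IDs, and threshold below are invented, and the paper's real indexing and matching algorithm is more sophisticated than this.

```python
# Sketch of symptom matching: score how similar a failing call stack is to
# each solved case using a sequence-similarity ratio, and return the best
# match above a threshold. Frame names and cases are invented; the paper's
# indexing and matching algorithm is more sophisticated.
from difflib import SequenceMatcher

solved_cases = {
    "CASE-101": ["main", "parse_config", "read_file", "open"],
    "CASE-202": ["main", "handle_request", "alloc_buffer", "memcpy"],
}

def closest_case(stack, cases, threshold=0.6):
    """Return (case_id, score) of the best-matching known symptom, or None."""
    best_id, best = None, 0.0
    for case_id, known in cases.items():
        score = SequenceMatcher(None, stack, known).ratio()
        if score > best:
            best_id, best = case_id, score
    return (best_id, best) if best >= threshold else None

# A crash whose stack differs from CASE-202 only in the innermost frame.
crash = ["main", "handle_request", "alloc_buffer", "memset"]
match = closest_case(crash, solved_cases)
```

Approximate rather than exact matching matters because recurrences of the same bug rarely produce byte-identical stacks across builds and inputs.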

Proceedings ArticleDOI
23 Feb 2005
TL;DR: The requirements for supporting the lifecycle of an experiment and how they influenced the overall design of the architecture of the ORBIT radio grid testbed are described.
Abstract: This paper presents the software architecture of the ORBIT radio grid testbed. We describe the requirements for supporting the lifecycle of an experiment and how they influenced the overall design of the architecture. We specifically highlight those components and services which will be visible to a user of the ORBIT testbed.

Proceedings ArticleDOI
27 Dec 2005
TL;DR: This paper proposes an approach, Model-Based Safety Analysis, in which the system and safety engineers use the same system models created during a model-based development process, which can both reduce the cost and improve the quality of the safety analysis.
Abstract: System safety analysis techniques are well established and are used extensively during the design of safety-critical systems. Despite this, most of the techniques are highly subjective and dependent on the skill of the practitioner. Since these analyses are usually based on an informal system model, it is unlikely that they will be complete, consistent, and error free. In fact, the lack of precise models of the system architecture and its failure modes often forces the safety analysts to devote much of their effort to finding undocumented details of the system behavior and embedding this information in the safety artifacts such as the fault trees. In this paper we propose an approach, Model-Based Safety Analysis, in which the system and safety engineers use the same system models created during a model-based development process. By extending the system model with a fault model as well as relevant portions of the physical system to be controlled, automated support can be provided for much of the safety analysis. We believe that by using a common model for both system and safety engineering and automating parts of the safety analysis, we can both reduce the cost and improve the quality of the safety analysis. Here we present our vision of model-based safety analysis and discuss the advantages and challenges in making this approach practical.

Proceedings ArticleDOI
01 Dec 2005
TL;DR: This paper presents an architecture that allows applications to be built with a much smaller TCB, based on a kernelized architecture and on the reuse of legacy software using trusted wrappers.
Abstract: The trusted computing bases (TCBs) of applications running on today's commodity operating systems have become extremely large. This paper presents an architecture that allows applications to be built with a much smaller TCB. It is based on a kernelized architecture and on the reuse of legacy software using trusted wrappers. We discuss the design principles, the architecture and some components, and a number of usage examples.

Journal ArticleDOI
01 Jan 2005
TL;DR: Details of a long-term and ongoing research project, where indoor intelligent spaces endowed with a range of useful functionalities are designed, built, and systematically evaluated, are presented.
Abstract: Intelligent environments can be viewed as systems where humans and machines (rooms) collaborate. Intelligent (or smart) environments need to extract and maintain an awareness of a wide range of events and human activities occurring in these spaces. This requirement is crucial for supporting efficient and effective interactions among humans as well as humans and intelligent spaces. Visual information plays an important role for developing accurate and useful representation of the static and dynamic states of an intelligent environment. Accurate and efficient capture, analysis, and summarization of the dynamic context requires the vision system to work at multiple levels of semantic abstractions in a robust manner. In this paper, we present details of a long-term and ongoing research project, where indoor intelligent spaces endowed with a range of useful functionalities are designed, built, and systematically evaluated. Some of the key functionalities include: intruder detection; multiple person tracking; body pose and posture analysis; person identification; human body modeling and movement analysis; and integrated systems for intelligent meeting rooms, teleconferencing, or performance spaces. The paper includes an overall system architecture to support design and development of intelligent environments. Details of panoramic (omnidirectional) video camera arrays, calibration, video stream synchronization, and real-time capture/processing are discussed. Modules for multicamera-based multiperson tracking, event detection and event based servoing for selective attention, voxelization, streaming face recognition, are also discussed. The paper includes experimental studies to systematically evaluate performance of individual video analysis modules as well as to evaluate basic feasibility of an integrated system for dynamic context capture and event based servoing, and semantic information summarization.

Proceedings ArticleDOI
25 Sep 2005
TL;DR: The history of the Web browser domain is examined and several underlying phenomena are identified that have contributed to its evolution, including the significant reuse of open source components among different browsers and the emergence of extensive Web standards.
Abstract: A reference architecture for a domain captures the fundamental subsystems common to systems of that domain as well as the relationships between these subsystems. Having a reference architecture available can aid both during maintenance and at design time: it can improve understanding of a given system, it can aid in analyzing tradeoffs between different design options, and it can serve as a template for designing new systems and re-engineering existing ones. In this paper, we examine the history of the Web browser domain and identify several underlying phenomena that have contributed to its evolution. We develop a reference architecture for Web browsers based on two well known open source implementations, and we validate it against two additional implementations. Finally, we discuss our observations about this domain and its evolutionary history; in particular, we note that the significant reuse of open source components among different browsers and the emergence of extensive Web standards have caused the browsers to exhibit "convergent evolution".

Proceedings ArticleDOI
05 Dec 2005
TL;DR: The architecture enables different schemes of decision distribution in the system, depending on the available decision making capabilities of the UAVs and on the operational constraints related to the tasks to achieve.
Abstract: This paper presents a decisional architecture and the associated algorithms for multi-UAV (unmanned aerial vehicle) systems. The architecture enables different schemes of decision distribution in the system, depending on the available decision making capabilities of the UAVs and on the operational constraints related to the tasks to achieve. The paper mainly focuses on the deliberative layer of the UAVs: we detail a planning scheme where a symbolic planner relies on refinement tools that exploit UAVs and environment models. Integration effort related to decisional features is highlighted, and preliminary simulation results are provided.

Journal ArticleDOI
TL;DR: The architecture, the algorithms and the performance results of an integrated indoor environment energy management system (IEEMS) for buildings, installed in two buildings in Athens and in Crete, are presented.