
Showing papers on "Systems architecture published in 2008"


Journal ArticleDOI
TL;DR: OpenAlea, an open-source platform providing a user-friendly environment for modellers and advanced deployment methods, is presented, along with its use to assemble several heterogeneous model components and to rapidly prototype a complex modelling scenario.
Abstract: As illustrated by the approaches presented during the 5th FSPM workshop (Prusinkiewicz and Hanan 2007, and this issue), the development of functional-structural plant models requires an increasing amount of computer modeling. All these models are developed by different teams in various contexts and with different goals. Efficient and flexible computational frameworks are required to augment the interaction between these models, their reusability, and the possibility to compare them on identical datasets. In this paper, we present an open-source platform, OpenAlea, that provides a user-friendly environment for modelers, and advanced deployment methods. OpenAlea allows researchers to build models using a visual programming interface and provides a set of tools and models dedicated to plant modeling. Models and algorithms are embedded in OpenAlea components with well defined input and output interfaces that can be easily interconnected to form more complex models and define more macroscopic components. The system architecture is based on the use of a general purpose, high-level, object-oriented script language, Python, widely used in other scientific areas. We briefly present the rationale that underlies the architectural design of this system and we illustrate the use of the platform to assemble several heterogeneous model components and to rapidly prototype a complex modeling scenario.
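The component/dataflow idea described above lends itself to a small illustration. Below is a minimal, hypothetical sketch in plain Python; the `Node` class and its port-wiring API are invented for illustration and are not the actual OpenAlea interfaces.

```python
# Hypothetical sketch of a dataflow component model (not OpenAlea's API):
# nodes wrap a model function, declare named ports, and are wired into a
# graph that is evaluated by pulling values from upstream nodes.

class Node:
    """A model component with named input and output ports."""
    def __init__(self, func, inputs, outputs):
        self.func = func          # the wrapped model or algorithm
        self.inputs = inputs      # names of input ports
        self.outputs = outputs    # names of output ports
        self.sources = {}         # input port -> (upstream node, its output port)

    def connect(self, port, upstream, out_port):
        self.sources[port] = (upstream, out_port)

    def evaluate(self):
        # pull each input from the upstream node it is connected to
        args = {port: up.evaluate()[out_port]
                for port, (up, out_port) in self.sources.items()}
        return {self.outputs[0]: self.func(**args)}   # single-output nodes for brevity

# Wiring two toy plant-model components into a larger model:
light = Node(lambda: 420.0, [], ["ppfd"])
photo = Node(lambda ppfd: 0.05 * ppfd, ["ppfd"], ["assimilation"])
photo.connect("ppfd", light, "ppfd")
print(photo.evaluate())   # {'assimilation': 21.0}
```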

313 citations


Journal ArticleDOI
TL;DR: PriS, a security requirements engineering method which incorporates privacy requirements early in the system development process, is described; it provides a holistic approach from ‘high-level’ goals to ‘privacy-compliant’ IT systems.
Abstract: A major challenge in the field of software engineering is to make users trust the software that they use in their everyday activities for professional or recreational reasons. Trusting software depends on various elements, one of which is the protection of user privacy. Protecting privacy is about complying with users’ desires when it comes to handling personal information. Users’ privacy can also be defined as the right to determine when, how and to what extent information about them is communicated to others. Current research stresses the need for addressing privacy issues during system design rather than during the implementation phase. To this end, this paper describes PriS, a security requirements engineering method which incorporates privacy requirements early in the system development process. PriS considers privacy requirements as organisational goals that need to be satisfied and adopts the use of privacy-process patterns as a way to: (1) describe the effect of privacy requirements on business processes; and (2) facilitate the identification of the system architecture that best supports the privacy-related business processes. In this way, PriS provides a holistic approach from ‘high-level’ goals to ‘privacy-compliant’ IT systems. The PriS way-of-working is formally defined, thus enabling the development of automated tools for assisting its application.

244 citations


Proceedings ArticleDOI
05 Nov 2008
TL;DR: This paper describes a system architecture and deployment to meet the design requirements and to allow model-driven control, thereby optimizing the prediction capability of the system.
Abstract: Predictive environmental sensor networks provide complex engineering and systems challenges. These systems must withstand the event of interest, remain functional over long time periods when no events occur, cover large geographical regions of interest to the event, and support the variety of sensor types needed to detect the phenomenon. Prediction of the phenomenon on the network complicates the system further, requiring additional computation on the microcontrollers and utilizing prediction models that are not typically designed for sensor networks. This paper describes a system architecture and deployment to meet the design requirements and to allow model-driven control, thereby optimizing the prediction capability of the system. We explore the application of river flood prediction using this architecture, describing our work on a centralized form of the prediction model, network implementation, component testing and infrastructure development in Honduras, deployment on a river in Massachusetts, and results of the field experiments. Our system uses only a small number of nodes to cover basins of 1,000-10,000 km2, using a unique heterogeneous communication structure to provide real-time sensed data, incorporating self-monitoring for failure, and adapting measurement schedules to capture events of interest.
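As a rough illustration of the adaptive measurement scheduling mentioned above, the sketch below switches a node between a low-power idle rate and an event-capture rate. The thresholds, intervals, and function name are invented, not taken from the paper.

```python
# Hedged sketch of measurement-schedule adaptation for a river node:
# sample slowly in quiet periods, quickly when an event looks likely.
def next_interval_s(stage_m, rise_m_per_h,
                    idle=3600, event=300, flood_stage=2.0, fast_rise=0.1):
    """Return the next sampling interval in seconds: hourly in quiescent
    periods, every 5 minutes when the river is near flood stage or
    rising quickly (all thresholds are illustrative)."""
    if stage_m >= flood_stage or rise_m_per_h >= fast_rise:
        return event
    return idle

print(next_interval_s(0.4, 0.02))  # 3600 -> low-power idle sampling
print(next_interval_s(1.1, 0.35))  # 300  -> event-capture mode
```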

175 citations


Proceedings ArticleDOI
01 Apr 2008
TL;DR: Parallax offers a comprehensive set of storage features including frequent, low-overhead snapshot of virtual disks, the 'gold-mastering' of template images, and the ability to use local disks as a persistent cache to dampen burst demand on networked storage.
Abstract: Parallax is a distributed storage system that uses virtualization to provide storage facilities specifically for virtual environments. The system employs a novel architecture in which storage features that have traditionally been implemented directly on high-end storage arrays and switches are relocated into a federation of storage VMs, sharing the same physical hosts as the VMs that they serve. This architecture retains the single administrative domain and OS agnosticism achieved by array- and switch-based approaches, while lowering the bar on hardware requirements and facilitating the development of new features. Parallax offers a comprehensive set of storage features including frequent, low-overhead snapshot of virtual disks, the 'gold-mastering' of template images, and the ability to use local disks as a persistent cache to dampen burst demand on networked storage.
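The snapshot and gold-mastering features can be illustrated with a toy copy-on-write block map. This is a deliberately simplified model of the idea, not Parallax's actual on-disk data structures.

```python
# Toy copy-on-write virtual disk in the spirit of Parallax snapshots:
# a snapshot copies only the block map; data blocks are shared until
# a write remaps a virtual block to new data.
class VirtualDisk:
    def __init__(self, blocks=None):
        self.blocks = dict(blocks or {})   # virtual block -> data

    def snapshot(self):
        # cheap metadata copy; no data blocks are duplicated
        return VirtualDisk(self.blocks)

    def write(self, vblock, data):
        self.blocks[vblock] = data         # new mapping; old block untouched

base = VirtualDisk({0: b"boot", 1: b"root"})
golden = base.snapshot()                   # 'gold master' template image
clone = golden.snapshot()                  # per-VM disk derived from template
clone.write(1, b"customized")
assert golden.blocks[1] == b"root"         # snapshot unaffected by the write
```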

154 citations


Journal ArticleDOI
TL;DR: Real-time process algebra (RTPA) is a denotational mathematical structure for denoting and manipulating system behavioral processes for intelligent and software system modeling, specification, refinement, and implementation.
Abstract: Real-time process algebra (RTPA) is a denotational mathematical structure for denoting and manipulating system behavioral processes. RTPA is designed as a coherent algebraic system for intelligent and software system modeling, specification, refinement, and implementation. RTPA encompasses 17 metaprocesses and 17 relational process operations. RTPA can be used to describe both logical and physical models of software and intelligent systems. Logic views of system architectures and their physical platforms can be described using the same set of notations. When a system architecture is formally modeled, the static and dynamic behaviors performed on the architectural model can be specified by a three-level refinement scheme at the system, class, and object levels in a top-down approach. RTPA has been successfully applied in real-world system modeling and code generation for software systems, human cognitive processes, and intelligent systems.

133 citations


Book
19 Nov 2008
TL;DR: Methodologies from multi-agent systems (MAS) are argued to be good candidates for modelling HMS, and domain-specific guidelines are defined for the identification and specification of holons, helping the designer to identify domain cooperations and to capture the system goals as use cases.
Abstract: ANEMONA is a multi-agent system (MAS) methodology for holonic manufacturing system (HMS) analysis and design. ANEMONA defines a mixed top-down and bottom-up development process, and provides HMS-specific guidelines to help designers identify and implement holons. The analysis phase is defined in two stages: System Requirements Analysis, and Holon Identification and Specification. This analysis provides high-level HMS specifications, adopting a top-down recursive approach that yields a set of elementary elements and assembling rules. The next stage is Holon Design, a bottom-up process to produce the system architecture from the analysis models. The Holons Implementation stage produces executable code for the SetUp and Configuration stage. Finally, maintenance functions are executed in the Operation and Maintenance stage. The book will be of interest to researchers and students involved in artificial intelligence and software engineering, and to manufacturing engineers in industry and academia.

122 citations


Journal IssueDOI
TL;DR: This paper enhances a dynamic model to evaluate architecture adaptability over the maintenance and upgrade lifetime of a system, formulating a Design for Dynamic Value (DDV) optimization model.
Abstract: The value of a system usually diminishes over its lifetime, but some systems depreciate more slowly than others. Diminished value is due partly to the increasing needs and wants of the system's stakeholders and partly to its decreasing capabilities relative to emerging alternatives. Thus, systems are replaced or upgraded at substantial cost and disruption. If a system is designed to be changed and upgraded easily, however, this adaptability may increase its lifetime value. How can adaptability be designed into a system so that it will provide increased value over its lifetime? This paper describes the problem and an approach to its mitigation, adopting the concept of real options from the field of economics, extending it to the field of systems architecture, and coining the term architecture options for this next-generation method and the associated tools for design for adaptability. Architecture options provide a quantitative means of optimizing a system architecture to maximize its lifetime value. This paper provides two quantitative models to assess the value of architecture adaptability. First, we define three metrics—component adaptability factors, component option values, and interface cost factors—which are used in a static model to evaluate architecture adaptability during the design of new systems. Second, we enhance a dynamic model to evaluate architecture adaptability over the maintenance and upgrade lifetime of a system, formulating a Design for Dynamic Value (DDV) optimization model. We illustrate both models with quantitative examples and also discuss how to obtain the socio-economic data required for each model.
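To make the static model concrete, here is a toy calculation in the spirit of the three metrics. The way they are combined below, and all of the numbers, are invented for illustration; the paper's actual formulas differ.

```python
# Illustrative arithmetic only: weight each component's option value by
# its adaptability factor, then charge for the decoupled interfaces.
components = {
    # name: (option_value, adaptability_factor in [0, 1])
    "radio":   (120.0, 0.8),   # high upgrade value, easy to swap out
    "chassis": ( 40.0, 0.2),   # little value in changing it
}
interfaces = {
    ("radio", "chassis"): 15.0,  # interface cost factor: price of decoupling
}

upgrade_value = sum(v * a for v, a in components.values())   # 96 + 8
interface_cost = sum(interfaces.values())                    # 15
print(upgrade_value - interface_cost)   # 89.0: net lifetime-adaptability value
```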

115 citations


Journal ArticleDOI
01 Aug 2008
TL;DR: Clustera is designed for extensibility, enabling the system to be easily extended to handle a wide variety of job types ranging from computationally-intensive, long-running jobs with minimal I/O requirements to complex SQL queries over massive relational tables.
Abstract: This paper introduces Clustera, an integrated computation and data management system. In contrast to traditional cluster-management systems that target specific types of workloads, Clustera is designed for extensibility, enabling the system to be easily extended to handle a wide variety of job types ranging from computationally-intensive, long-running jobs with minimal I/O requirements to complex SQL queries over massive relational tables. Another unique feature of Clustera is the way in which the system architecture exploits modern software building blocks including application servers and relational database systems in order to realize important performance, scalability, portability and usability benefits. Finally, experimental evaluation suggests that Clustera has good scale-up properties for SQL processing, that Clustera delivers performance comparable to Hadoop for MapReduce processing and that Clustera can support higher job throughput rates than previously published results for the Condor and CondorJ2 batch computing systems.

113 citations


Journal ArticleDOI
Byungun Yoon
TL;DR: The proposed tool, Techpioneer, aims to offer decisive information for identifying technology opportunities; it uses textual information from technological document databases, applying morphology analysis to derive promising alternatives and conjoint analysis to evaluate their priority.
Abstract: Technology intelligence tools have come to be regarded as vital components in planning for technology development and formulating technology strategies. However, most such tools currently focus on providing graphical frameworks and databases to support the process of technology analysis. Techpioneer, the tool proposed in this paper, aims to offer decisive information in order to identify technology opportunities. To this end, the system uses textual information from technological document databases and applies morphology analysis to derive promising alternatives and conjoint analysis to evaluate their priority. In addition, the method used in developing a technology dictionary is presented, employing clustering and network analysis. This system also has the ability to communicate with experts in order to estimate the value of existing patents, which is essential for the priority-setting of alternatives, to construct a morphological matrix, and so on. This paper presents the system architecture and functions of this tool and, moreover, illustrates the prototype implementation and a case study of the same.
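The morphology-analysis step lends itself to a small sketch: enumerate every combination of values in a morphological matrix and hand the candidates to a screening step. The dimensions and values below are invented for illustration.

```python
# Sketch of morphology analysis for opportunity discovery: take the
# cross product of the dimension values in a morphological matrix.
from itertools import product

matrix = {
    "sensing":   ["optical", "capacitive", "RF"],
    "power":     ["battery", "harvested"],
    "interface": ["wired", "wireless"],
}

alternatives = [dict(zip(matrix, combo)) for combo in product(*matrix.values())]
print(len(alternatives))    # 3 * 2 * 2 = 12 candidate technology concepts

# A conjoint-analysis step would then score each alternative against
# expert preferences to rank the promising ones.
```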

102 citations



Journal ArticleDOI
TL;DR: It is found that the existing Radio Frequency IDentification (RFID) data management scheme has to be modified so as to provide end-to-end traceability.

Abstract: Purpose – The paper aims to propose a novel dynamic tracing task model to enhance the traceability range along the supply chain beyond simple distribution channels. It further extends the study by implementing the system architecture with the proposed data model to support the dynamic tracing task. Design/methodology/approach – Typical processes of supply chains in manufacturing industries, using bill-of-material data to extract and define information requirements, are followed. The data elements are systematically selected and explained in the proposed model step by step. Findings – This paper found that the existing Radio Frequency IDentification (RFID) data management scheme has to be modified so as to provide end-to-end traceability. Research limitations/implications – Validation of the proposed model and system architecture should be done through actual implementation in industrial settings. Practical implications – The paper gives many system managers and executors insight into how full traceability along ...

Proceedings ArticleDOI
14 Apr 2008
TL;DR: An extension to the CUDA programming language is presented that extends parallelism to multi-GPU systems and GPU-cluster environments, providing a consistent development interface for additional, higher levels of parallel abstraction from the bus and network interconnects.
Abstract: We present an extension to the CUDA programming language which extends parallelism to multi-GPU systems and GPU-cluster environments. Following the existing model, which exposes the internal parallelism of GPUs, our extended programming language provides a consistent development interface for additional, higher levels of parallel abstraction from the bus and network interconnects. The newly introduced layers provide the key features specific to the architecture and programmability of current graphics hardware while the underlying communication and scheduling mechanisms are completely hidden from the user. All extensions to the original programming language are handled by a self-contained compiler which is easily embedded into the CUDA compile process. We evaluate our system using two different sample applications and discuss scaling behavior and performance on different system architectures.

Book
29 May 2008
TL;DR: This book is the first, comprehensive survey of modern architecture description languages and will be an invaluable reference for embedded system architects, designers, developers, and validation engineers.
Abstract: Efficient design of embedded processors plays a critical role in embedded systems design. Processor description languages and their associated specification, exploration and rapid prototyping methodologies are used to find the best possible design for a given set of applications under various design constraints, such as area, power and performance. This book is the first comprehensive survey of modern architecture description languages and will be an invaluable reference for embedded system architects, designers, developers, and validation engineers. Readers will see that the use of particular architecture description languages will lead to productivity gains in designing particular (application-specific) types of embedded processors.
* Comprehensive coverage of all modern architecture description languages... use the right ADL to design your processor to fit your application;
* Most up-to-date information available about each architecture description language from the developers... save time chasing down reliable documentation;
* Describes how each architecture description language enables key design automation tasks, such as simulation, synthesis and testing... fit the ADL to your design cycle.

Journal ArticleDOI
TL;DR: A cost-effective construction site monitoring system integrating a long-range wireless network, network cameras, and a web-based collaborative platform that supports simultaneous user access to view real-time captured images or video of a construction site.

Proceedings ArticleDOI
20 Oct 2008
TL;DR: This work aims to identify the common traits of these systems and present a layered software architecture which abstracts these similarities by defining common interfaces between successive layers, which provides developers with a unified view of the various types of multitouch hardware.
Abstract: In recent years, a large amount of software for multitouch interfaces with various degrees of similarity has been written. In order to improve interoperability, we aim to identify the common traits of these systems and present a layered software architecture which abstracts these similarities by defining common interfaces between successive layers. This provides developers with a unified view of the various types of multitouch hardware. Moreover, the layered architecture allows easy integration of existing software, as several alternative implementations for each layer can co-exist. Finally, we present our implementation of this architecture, consisting of hardware abstraction, calibration, event interpretation and widget layers.
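A minimal sketch of the layering idea follows, with invented class and method names (the paper defines its own interfaces between layers): each layer consumes the abstraction below and exposes one of its own, so alternative implementations of any layer can be swapped in.

```python
# Hypothetical layered multitouch stack: hardware -> calibration ->
# event interpretation (a widget layer would sit on top).
class HardwareLayer:                     # e.g. camera-based or capacitive
    def raw_touches(self):
        return [(512, 384, 0.9)]         # device pixels plus confidence

class CalibrationLayer:
    def __init__(self, hw, dev_w=1024, dev_h=768):
        self.hw, self.dev_w, self.dev_h = hw, dev_w, dev_h
    def touches(self):
        # normalize device coordinates to [0, 1] screen space
        return [(x / self.dev_w, y / self.dev_h)
                for x, y, _ in self.hw.raw_touches()]

class EventLayer:
    def __init__(self, calib):
        self.calib = calib
    def events(self):
        return [("touch_down", pos) for pos in self.calib.touches()]

stack = EventLayer(CalibrationLayer(HardwareLayer()))
print(stack.events())   # [('touch_down', (0.5, 0.5))]
```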

Journal ArticleDOI
TL;DR: The significant benefits of developing a system architecture model for GEOSS using the SoSE process are described and an example of how the process would capture the architecture model of GEOSS is presented.
Abstract: There is an increasing need to perform systems-of-systems engineering (SoSE) in a global environment. A new SoSE process has been developed which is a significant breakthrough in the development of large complex systems and net-centric systems-of-systems (SoS). The SoSE process provides a complete, detailed, and systematic development approach for military and civil SoS. This architecture-centric, model-based systems engineering process emphasizes concurrent development of the system architecture model and system specifications. It is applicable to all phases of a system's lifecycle. The significant benefits of developing a system architecture model for GEOSS using the SoSE process are described. An example of how the process would capture the architecture model of GEOSS is presented.

Journal ArticleDOI
TL;DR: Measured transfer rates over the communication channel and processing times for the implemented hardware/software logic are presented for various frame sizes; a comparison with other solutions and a range of applications are also given.

Journal ArticleDOI
Liming Xiu
TL;DR: This paper attempts to explore and understand the signal characteristics and frequency-domain behavior of this architecture through mathematical analysis, and the underlying concept associated with this architecture, time-average-frequency, is formally introduced.
Abstract: Flying-adder frequency synthesis architecture is a novel technique of generating frequency on chip. Since its invention, it has been utilized in many commercial products to cope with various difficult challenges. During the evolution of this architecture, the issues related to circuit- and system-level implementation have been studied in prior publications. However, rigorous mathematical treatment on this architecture has not been established. In this paper, we attempt to explore and understand the signal characteristics and frequency domain behavior of this architecture through mathematical analysis. In the meantime, the underlying concept associated with this architecture, time-average-frequency, is formally introduced.
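The time-average-frequency idea can be shown numerically: with equally spaced VCO phases of spacing delta and a fractional frequency word F = I + r, a flying-adder output interleaves periods of I*delta and (I+1)*delta so that the long-run average period is F*delta. A simplified simulation (variable names invented):

```python
# Simplified flying-adder period generator: a fractional accumulator
# decides when to stretch a period by one extra phase step.
def flying_adder_periods(F, delta, n):
    I, r = int(F), F - int(F)
    acc, periods = 0.0, []
    for _ in range(n):
        acc += r
        if acc >= 1.0:                        # fractional carry
            acc -= 1.0
            periods.append((I + 1) * delta)   # occasionally one step longer
        else:
            periods.append(I * delta)
    return periods

ps = flying_adder_periods(F=8.25, delta=0.1e-9, n=1000)
print(sum(ps) / len(ps), 8.25 * 0.1e-9)   # the two agree: T_avg = F * delta
```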

Proceedings ArticleDOI
14 Apr 2008
TL;DR: This paper focuses on one of the innovative aspects of the Games@Large project idea - the interactive streaming of graphical output to client devices, achieved by capturing the graphical commands at the DirectX API on the server and rendering them locally, resulting in high visual quality and enabling multiple game execution.
Abstract: In coming years we will see low-cost networked consumer electronics (CE) devices dominating the living room. Various applications will be offered, including IPTV, VoIP, VoD, PVR and others. With regard to gaming, the need to compete with PlayStation and Xbox will require a radical change in system architecture. While traditional CE equipment suffers from having to meet low BOM (bill of materials) targets, dictated by a highly competitive market and cable companies' targeted costs, consoles enjoy superior hardware and software capabilities, being able to offset hardware and BOM costs with software royalties. Exent Technologies is leading the European FP6 Integrated Project Games@Large, whose mission is to research, develop and implement a new platform aimed at providing users with a richer variety of entertainment experience in familiar environments, such as their house, hotel room, and Internet Cafe. This will support low-cost, ubiquitous game-play throughout such environments, while taking advantage of existing hardware and providing multiple members of the family and community the ability to play simultaneously and to share experiences. This paper focuses on one of the innovative aspects of the Games@Large project idea - the interactive streaming of graphical output to client devices. This is achieved by capturing the graphical commands at the DirectX API on the server and rendering them locally, resulting in high visual quality and enabling multiple game execution. In order to also support small handheld devices which lack hardware graphics support, an enhanced video method is additionally provided.

Journal ArticleDOI
Yannick Morvan, Dirk Farin
TL;DR: It is shown that the proposed system achieves an efficient compression of 3D/multi-view video by extending a standard H.264 encoder such that near backward compatibility is retained.
Abstract: This paper presents a system architecture of an acquisition, compression and rendering system for 3D-TV and free-viewpoint video applications. We show that the proposed system yields two distinct advantages. First, it achieves an efficient compression of 3D/multi-view video by extending a standard H.264 encoder such that near backward compatibility is retained. Second, the proposed system can efficiently compress both 3D-TV and free-viewpoint multi-view video datasets using the single proposed system architecture.

Journal ArticleDOI
TL;DR: This paper describes a preliminary attempt at using the Semantic Web paradigm, particularly the Web Ontology Language (OWL), for domain-specific engineering design knowledge representation in a multi-agent distributed design environment.
Abstract: This paper describes a preliminary attempt at using the Semantic Web paradigm, particularly the Web Ontology Language (OWL), for domain-specific engineering design knowledge representation in a multi-agent distributed design environment. Ontology-based modeling of engineering design knowledge on the Semantic Web is proposed as a prelude to meaningful agent communication and knowledge reuse for collaborative work among multidisciplinary organizations. Formal knowledge representation in OWL format extends traditional product modeling with capabilities of knowledge sharing and distributed problem solving, and is used as a content language within FIPA-ACL (Agent Communication Language) messages in the proposed multi-agent system architecture. As an illustration, engineering design knowledge of automatic assembly systems for manufacturing electronic connectors, which contain a group of electro-mechanical components, is represented in OWL format, with its inherent structure-function-process relationships defined explicitly and formally, to facilitate semantic access and retrieval of electro-mechanical component information across different disciplines. The proposed approach is viewed as a promising knowledge-management method that facilitates the implementation of computer supported cooperative work (CSCW) in the design of Semantic Web applications.
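To give a flavour of this style of representation, the rdflib sketch below records structure-function-process relations for one component. The vocabulary is invented for illustration; the paper defines its own design ontology.

```python
# Minimal RDF/OWL-style sketch with rdflib: a component, its function,
# and the process realizing that function, queryable by any agent.
from rdflib import Graph, Namespace, Literal, RDF, RDFS

DSN = Namespace("http://example.org/design#")   # hypothetical vocabulary
g = Graph()

g.add((DSN.Gripper, RDF.type, DSN.Component))
g.add((DSN.Gripper, DSN.hasFunction, DSN.HoldConnector))
g.add((DSN.HoldConnector, DSN.realizedByProcess, DSN.PickAndPlace))
g.add((DSN.Gripper, RDFS.comment, Literal("Assembly-cell end effector")))

# Agents on either side of a FIPA-ACL exchange can query the shared
# structure-function-process relations instead of parsing free text:
for func in g.objects(DSN.Gripper, DSN.hasFunction):
    print(func)   # http://example.org/design#HoldConnector
```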

Proceedings ArticleDOI
27 Oct 2008
TL;DR: The system architecture, user interface design, user software testing and future directions for development are described; the design draws on knowledge management, information retrieval, machine learning and network management.
Abstract: Netpal is a web-based dynamic knowledge base system designed to assist network administrators in their troubleshooting tasks, in recalling and storing experience, and in identifying new failure cases and their symptoms. In the context of web hosting environments, Netpal summarises network data and supports retrieval of relevant organisational experience for system administrators. The system design draws on a variety of domains including knowledge management, information retrieval, machine learning and network management. We describe the system architecture, user interface design, user software testing and future directions for development.

Journal ArticleDOI
01 Jul 2008
TL;DR: An integrated approach for Web-based collaborative manufacturing, including distributed process planning, dynamic scheduling, real-time monitoring, and remote control, enabled by a Web-based integrated sensor-driven e-ShopFloor framework targeting distributed yet collaborative manufacturing environments is presented.
Abstract: This paper presents an integrated approach for Web-based collaborative manufacturing, including distributed process planning, dynamic scheduling, real-time monitoring, and remote control. It is enabled by a Web-based integrated sensor-driven e-ShopFloor (Wise-ShopFloor) framework targeting distributed yet collaborative manufacturing environments. Utilizing the latest Java technologies (Java 3D and Java Servlet) for system implementation, this approach allows users to plan and control distant shop floor operations based on runtime information from the shop floor. The objective of this research is to develop methodology and algorithms for Web-based collaborative planning and control, supported by real-time monitoring for dynamic scheduling. Details on the principle of the Wise-ShopFloor framework, system architecture, and a proof-of-concept prototype are reported in this paper. An example of distributed process planning for remote machining is chosen as a case study to demonstrate the effectiveness of this approach toward Web-based collaborative manufacturing.

Proceedings ArticleDOI
06 Nov 2008
TL;DR: This paper proposes a highly scalable parallelized L7-filter system architecture with affinity-based scheduling on a multi-core server, develops a model to explore the connection-level parallelism in L7-filter, and proposes an affinity-based scheduler to optimize system scalability.

Abstract: L7-filter is a significant component in Linux's QoS framework that classifies network traffic based on application layer data. It enables subsequent distribution of network resources with respect to the priority of applications. Considerable research has been reported on deploying multi-core architectures for computationally intensive applications. Unfortunately, the proliferation of multi-core architectures has not helped fast packet processing due to: 1) the lack of efficient parallelism in legacy network programs, and 2) the non-trivial configuration for scalable utilization on multi-core servers. In this paper, we propose a highly scalable parallelized L7-filter system architecture with affinity-based scheduling on a multi-core server. We start with an analytical study of the system architecture based on an offline design. Similar to Receive Side Scaling (RSS) in the NIC, we develop a model to explore the connection-level parallelism in L7-filter and propose an affinity-based scheduler to optimize system scalability. Performance results show that our optimized L7-filter has superior scalability over the naive multithreaded version. It improves system performance by about 50% when all the cores are deployed.
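The affinity idea is essentially flow hashing: every packet of a connection is steered to the same worker, so per-flow classifier state stays core-local. A hedged sketch follows; Python's built-in hash stands in for the Toeplitz hash that RSS actually uses.

```python
# Connection-level parallelism by 5-tuple hashing (illustrative only):
# the same flow always maps to the same worker within one process run.
def worker_for(pkt, n_workers):
    five_tuple = (pkt["src"], pkt["dst"], pkt["sport"], pkt["dport"], pkt["proto"])
    return hash(five_tuple) % n_workers

pkts = [
    {"src": "10.0.0.1", "dst": "10.0.0.2", "sport": 5222, "dport": 80, "proto": 6},
    {"src": "10.0.0.1", "dst": "10.0.0.2", "sport": 5222, "dport": 80, "proto": 6},
]
assert worker_for(pkts[0], 8) == worker_for(pkts[1], 8)   # same flow, same core
```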

01 Jan 2008
TL;DR: This tutorial surveys the relevant notions of ‘trust’, exploring what this means for ‘trusted computing’, and briefly describes the interventions in hardware and software required to give a stable platform upon which such systems can be constructed.
Abstract: Networked computer systems underlie a great deal of business, social, and government activity today. Everyone is expected to place a great deal of trust in their correct operation, but experience shows that this trust is often misplaced. Such systems have always been subject to failures due to oversights and mistakes by those who designed them; increasingly such failures are exploited by those with malicious intent. The concept of Trusted Computing has been present in the computer security literature for quite some time, and has influenced the design of some high-assurance solutions. These ideas are now becoming incorporated in mainstream products — PCs, mobile phones, disc drives, servers — and are the subject of much discussion and sometimes misinformation. Trusted computing implies a re-design of systems architecture in such a way as to support its factorization into relatively discrete components with well-defined characteristics. This permits, in particular, rational decisions based upon reasonable expectations of behaviour. Any such systems thinking must be motivated by an analysis of risks — so that effort is expended where it may give the best return — and an awareness of the limitations of such risk assessment (because frequently the raw data and parameters are simply not available, and because security properties are typically not compositional). The approach described here is largely the result of the work of an industry consortium (the Trusted Computing Group, TCG), itself informed by a history of research, largely in the area of high-assurance systems, from government and academe. TCG’s approach is distinctive in that previous trusted systems were usually bespoke and highly expensive: the current work aims to touch every computing device. This tutorial surveys the relevant notions of ‘trust’, exploring what this means for ‘trusted computing’. We describe briefly the interventions needed in hardware and software required to give a stable platform upon which such systems can be constructed. In essence, this gives us two new systems characteristics: (a) a high degree of confidence in the state (configuration, running software, etc.) of a local computing system—and hence a measure of its relative freedom from unwanted intervention; (b) a relatively high degree of confidence in the state of a remote system (a property called ‘remote attestation’). The first of those characteristics has perhaps always informed the way that users interact with desktop personal computers: many malware attacks exploit misplaced trust in the local system. The second characteristic is genuinely novel, and may be seen as an enabler of many new kinds of pattern of interaction in distributed systems. Knowing that a platform is in a particular state is neither a necessary nor sufficient condition for trustworthiness (or, indeed, security) — but helps to inform decisions about that. We explore how these capabilities are constructed, and some uses to which they might be put. We also briefly describe the state of deployment of these technologies, and some current areas of research.
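The 'confidence in the state of a system' described above rests on measured boot: each boot stage is hashed into a platform configuration register (PCR) before it runs, and a PCR can only ever be updated by this extend operation, so the final value commits to the whole boot sequence in order. A simplified sketch (TPM 1.2 actually uses SHA-1 and 20-byte registers; SHA-256 is used here):

```python
# PCR extend: new_pcr = H(old_pcr || H(measurement)). The register can
# never be set directly, only extended, which is what makes the final
# value evidence of everything measured since reset.
import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

pcr = bytes(32)                        # registers start at zero on reset
for stage in [b"bootloader", b"kernel", b"config"]:
    pcr = extend(pcr, stage)
print(pcr.hex())   # attested value: changing any stage changes this digest
```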

Journal ArticleDOI
TL;DR: Newton's Pen, a statics tutor implemented on a "pentop computer" (a writing instrument with an integrated digitizer and embedded processor), is described; three user studies suggest that Newton's Pen is an effective teaching tool.

Book ChapterDOI
26 May 2008
TL;DR: A pragmatic incremental approach in which detail is progressively added to abstract system-level specifications of functional and timing properties via intermediate models that express system architecture, concurrency and timing behaviour is proposed and illustrated.
Abstract: The construction of formal models of real-time distributed systems is a considerable practical challenge. We propose and illustrate a pragmatic incremental approach in which detail is progressively added to abstract system-level specifications of functional and timing properties via intermediate models that express system architecture, concurrency and timing behaviour. The approach is illustrated by developing a new formal model of the cardiac pacemaker system proposed as a "grand challenge" problem in 2007. The models are expressed using the Vienna Development Method (VDM) and are validated primarily by scenario-based tests, including the analysis of timed traces. We argue that the insight gained using this staged modelling approach will be valuable in the subsequent development of implementations, and in detecting potential bottlenecks within suggested implementation architectures.

Patent
Hasan Timucin Ozdemir, Hongbing Li, Lipin Liu, Kuo Chu Lee, Namsoo Joo
05 May 2008
TL;DR: In this article, a multi-perspective context sensitive behavior assessment system includes an adaptive behavior model builder establishing a real-time reference model that captures intention of motion behavior, which operates by modeling outputs of multiple user defined scoring functions with respect to multiple references of application specific target areas of interest.
Abstract: A multi-perspective context sensitive behavior assessment system includes an adaptive behavior model builder establishing a real-time reference model that captures intention of motion behavior. It operates by modeling outputs of multiple user defined scoring functions with respect to multiple references of application specific target areas of interest. The target areas have criticality values representing a user's preference regarding the target areas with respect to one another. The outputs of the scoring functions are multiplied by the criticality values to form high level sequences of representation that are communicated to the user.
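A minimal sketch of the weighting described in the claim, with invented names and data (the patent's actual scoring functions and representations are more elaborate):

```python
# Each scoring function rates a track against a target area; the user's
# criticality value then weights that area's contribution.
def assess(track, scorers, criticality):
    return {area: sum(score(track, area) for score in scorers) * weight
            for area, weight in criticality.items()}

# Hypothetical scorer: speed matters only near the area in question.
speed_score = lambda track, area: track["speed"] if area in track["near"] else 0.0

scores = assess({"speed": 1.4, "near": {"gate"}},
                [speed_score],
                {"gate": 0.9, "lobby": 0.3})
print(scores)   # {'gate': 1.26, 'lobby': 0.0}
```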

Journal ArticleDOI
TL;DR: An architectural approach in which the software architecture imposed on the assembly prevents black-box integration anomalies; it is validated on an industrial case study concerning the development of systems for the safeguarding, fruition, and support of Cultural Heritage.

Journal ArticleDOI
01 Jan 2008
TL;DR: This paper describes a distributed hybrid intelligent system, called Smart-Drill, that uses fuzzy logic, an expert system framework and Web services to help petroleum engineers diagnose and solve lost circulation problems.

Abstract: Lost circulation is the most common problem encountered while drilling oil wells. This paper describes a distributed fuzzy expert system, called Smart-Drill, aimed at helping petroleum engineers diagnose and solve lost circulation problems. To represent and manipulate perception-based evaluations of uncertainties of facts and rules, the expert system uses an uncertainty model with qualitative scales of plausibility values and a multi-set-based fuzzy algebra of strict monotonic operations. Its realization in inference procedures permits taking into account the change of plausibility of premises in expert system rules. Original tools such as the CAPNET Expert System Shell, Knowledge Acquisition Tool and WITSML Converter, implementing the proposed model, were used for the development of Smart-Drill. The overall system architecture is discussed and implementation details are provided. Both desktop and Web-based implementations let petroleum engineers benefit from the system in the field. The system is currently in the field-testing phase at PEMEX, the Mexican oil company.