
Showing papers on "Systems architecture" published in 2010


Journal ArticleDOI
TL;DR: The developments and applications described here clearly indicate that PtMS is effective for use in networked complex traffic systems and is closely related to emerging technologies in cloud computing, social computing, and cyberphysical-social systems.
Abstract: Parallel control and management have been proposed as a new mechanism for conducting operations of complex systems, especially those that involved complexity issues of both engineering and social dimensions, such as transportation systems. This paper presents an overview of the background, concepts, basic methods, major issues, and current applications of Parallel transportation Management Systems (PtMS). In essence, parallel control and management is a data-driven approach for modeling, analysis, and decision-making that considers both the engineering and social complexity in its processes. The developments and applications described here clearly indicate that PtMS is effective for use in networked complex traffic systems and is closely related to emerging technologies in cloud computing, social computing, and cyberphysical-social systems. A description of PtMS system architectures, processes, and components, including OTSt, Dyna CAS, aDAPTS, iTOP, and TransWorld is presented and discussed. Finally, the experiments and examples of real-world applications are illustrated and analyzed.

760 citations


Journal ArticleDOI
TL;DR: A description and prototype implementation of the system architecture based on customized Moteiv Tmote Invent motes and sensor-enabled Nokia N80 mobile phones is presented; an evaluation of sensing and inference that quantifies cyclist performance and the cyclist environment; a report on networking performance in an environment characterized by bicycle mobility and human unpredictability.
Abstract: We present BikeNet, a mobile sensing system for mapping the cyclist experience. Built leveraging the MetroSense architecture to provide insight into the real-world challenges of people-centric sensing, BikeNet uses a number of sensors embedded into a cyclist's bicycle to gather quantitative data about the cyclist's rides. BikeNet uses a dual-mode operation for data collection, using opportunistically encountered wireless access points in a delay-tolerant fashion by default, and leveraging the cellular data channel of the cyclist's mobile phone for real-time communication as required. BikeNet also provides a Web-based portal for each cyclist to access various representations of her data, and to allow for the sharing of cycling-related data (for example, favorite cycling routes) within cycling interest groups, and data of more general interest (for example, pollution data) with the broader community. We present: a description and prototype implementation of the system architecture based on customized Moteiv Tmote Invent motes and sensor-enabled Nokia N80 mobile phones; an evaluation of sensing and inference that quantifies cyclist performance and the cyclist environment; a report on networking performance in an environment characterized by bicycle mobility and human unpredictability; and a description of BikeNet system user interfaces.

369 citations


Proceedings ArticleDOI
19 Jun 2010
TL;DR: The NoHype architecture, named to indicate the removal of the hypervisor, addresses each of the key roles of the virtualization layer: arbitrating access to CPU, memory, and I/O devices, acting as a network device, and managing the starting and stopping of guest virtual machines.
Abstract: Cloud computing is a disruptive trend that is changing the way we use computers. The key underlying technology in cloud infrastructures is virtualization -- so much so that many consider virtualization to be one of the key features rather than simply an implementation detail. Unfortunately, the use of virtualization is the source of a significant security concern. Because multiple virtual machines run on the same server and since the virtualization layer plays a considerable role in the operation of a virtual machine, a malicious party has the opportunity to attack the virtualization layer. A successful attack would give the malicious party control over the all-powerful virtualization layer, potentially compromising the confidentiality and integrity of the software and data of any virtual machine. In this paper we propose removing the virtualization layer, while retaining the key features enabled by virtualization. Our NoHype architecture, named to indicate the removal of the hypervisor, addresses each of the key roles of the virtualization layer: arbitrating access to CPU, memory, and I/O devices, acting as a network device (e.g., Ethernet switch), and managing the starting and stopping of guest virtual machines. Additionally, we show that our NoHype architecture may indeed be "no hype" since nearly all of the needed features to realize the NoHype architecture are currently available as hardware extensions to processors and I/O devices.

279 citations


Journal ArticleDOI
TL;DR: The overall architecture and the features of the SC Collaborator system, a service-oriented, web-based system that facilitates the flexible coordination of construction supply chains by leveraging web services, web portal, and open source technologies, are described.

212 citations


Journal ArticleDOI
01 Nov 2010
TL;DR: This article describes an authorization model suitable for cloud computing that supports hierarchical role-based access control, path-based object hierarchies, and federation, and presents an authorization system architecture for implementing the model.
Abstract: This article describes an authorization model suitable for cloud computing that supports hierarchical role-based access control, path-based object hierarchies, and federation. The authors also present an authorization system architecture for implementing the model.

151 citations
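The authorization model above combines two ideas: role hierarchies (a senior role inherits the permissions of junior roles) and path-based object hierarchies (a grant on a path prefix covers everything beneath it). A minimal sketch of how the two compose, with all role names, paths, and grants invented for illustration — this is not the authors' implementation:

```python
# Hypothetical role hierarchy: admin inherits editor, editor inherits viewer.
ROLE_INHERITS = {"viewer": set(), "editor": {"viewer"}, "admin": {"editor"}}

# Hypothetical grants: (role, action, path prefix the grant applies to).
GRANTS = [
    ("viewer", "read",  "/tenants/acme/"),
    ("editor", "write", "/tenants/acme/docs/"),
]

def expand_roles(role):
    """All roles whose permissions `role` inherits, including itself."""
    seen, stack = set(), [role]
    while stack:
        r = stack.pop()
        if r not in seen:
            seen.add(r)
            stack.extend(ROLE_INHERITS.get(r, ()))
    return seen

def is_authorized(role, action, path):
    # A request is allowed if any inherited role holds a matching grant
    # on a prefix of the requested object's path.
    roles = expand_roles(role)
    return any(r in roles and a == action and path.startswith(prefix)
               for r, a, prefix in GRANTS)
```

For example, `is_authorized("admin", "read", "/tenants/acme/docs/x.txt")` succeeds because admin transitively inherits viewer's read grant on the `/tenants/acme/` subtree.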


Proceedings ArticleDOI
04 Nov 2010
TL;DR: The proposed methodology allows the identification of coverage gaps, which may cause bottlenecks in the network, prior to deployment and therefore supports efficient and reliable deployment and operation of the system.
Abstract: This paper describes the system architecture and the performance evaluation of a Radio Frequency (RF) mesh based system for smart energy management applications in the Neighborhood Area Network (NAN). The RF mesh system presented in this paper leverages the Industrial, Scientific and Medical (ISM) band at 902-928 MHz and is based on frequency hopping spread spectrum (FHSS). The performance evaluation is based on a geographical model of the deployment scenario and implements geographical routing combined with appropriate radio propagation models. The results show that the system is able to handle Smart Metering communication traffic with high reliability, provided that potential coverage gaps are properly filled with repeater nodes. The proposed methodology allows the identification of coverage gaps, which may cause bottlenecks in the network, prior to deployment and therefore supports efficient and reliable deployment and operation of the system.

137 citations
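In an FHSS system like the one above, every node must derive the same pseudo-random channel order from shared parameters so the mesh stays synchronized while hopping. A toy sketch of that idea — the channel count and spacing (64 channels at 400 kHz) are illustrative assumptions, not the paper's parameters:

```python
import random

# Assumed channelization of the 902-928 MHz ISM band (illustrative only).
BASE_MHZ, SPACING_MHZ, N_CHANNELS = 902.2, 0.4, 64

def hop_sequence(seed):
    """Derive a pseudo-random hop order; nodes sharing the seed agree."""
    order = list(range(N_CHANNELS))
    random.Random(seed).shuffle(order)
    return order

def channel_freq(ch):
    """Center frequency in MHz of channel index `ch`."""
    return BASE_MHZ + ch * SPACING_MHZ

seq = hop_sequence(seed=42)
# Every channel is visited exactly once per cycle, all inside the band.
assert all(902.0 <= channel_freq(c) <= 928.0 for c in seq)
```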


Patent
04 Jun 2010
TL;DR: In this article, a hardware/software system and method that collectively enables virtualization of the host computer's native I/O system architecture via the Internet and LANs is presented. The invention addresses the relatively narrow focus of iSCSI, the direct connect limitation of PCI Express, and the inaccessibility of PCI Express for expansion in blade architectures.
Abstract: A hardware/software system and method that collectively enables virtualization of the host computer's native I/O system architecture via the Internet and LANs. The invention includes a solution to the problems of the relatively narrow focus of iSCSI, the direct connect limitation of PCI Express, and the inaccessibility of PCI Express for expansion in blade architectures.

126 citations


Proceedings ArticleDOI
13 Jun 2010
TL;DR: It is argued that adopting a best-effort service model for various software and hardware components of the computing platform stack can lead to drastic improvements in scalability and large improvements in performance and energy efficiency.
Abstract: With the advent of mainstream parallel computing, applications can obtain better performance only by scaling to platforms with larger numbers of cores. This is widely considered to be a very challenging problem due to the difficulty of parallel programming and the bottlenecks to efficient parallel execution. Inspired by how networking and storage systems have scaled to handle very large volumes of packet traffic and persistent data, we propose a new approach to the design of scalable, parallel computing platforms. For decades, computing platforms have gone to great lengths to ensure that every computation specified by applications is faithfully executed. While this design philosophy has remained largely unchanged, applications and the basic characteristics of their workloads have changed considerably. A wide range of existing and emerging computing workloads have an inherent forgiving nature. We therefore argue that adopting a best-effort service model for various software and hardware components of the computing platform stack can lead to drastic improvements in scalability. Applications are cognizant of the best-effort model, and separate their computations into those that may be executed on a best-effort basis and those that require the traditional execution guarantees. Best-effort computations may be exploited to simply reduce the computing workload, shape it to be more suitable for parallel execution, or execute it on unreliable hardware components. Guaranteed computations are realized either through an overlay software layer on top of the best-effort substrate, or through the use of application-specific strategies. We describe a system architecture for a best-effort computing platform, provide examples of parallel software and hardware that embody the best-effort model, and show that large improvements in performance and energy efficiency are possible through the adoption of this approach.

124 citations
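The core of the best-effort model above is that the application itself tags work as guaranteed or droppable, and the platform may shed droppable work under load without affecting correctness-critical results. A hedged sketch of that contract — the runtime, task format, and "budget" notion are all hypothetical, not the paper's API:

```python
def run(tasks, budget):
    """Run every guaranteed task; run best-effort tasks only while the
    (abstract) cost budget lasts. Guaranteed work never counts against it."""
    results, spent = {}, 0
    for name, fn, cost, guaranteed in tasks:
        if guaranteed or spent + cost <= budget:
            results[name] = fn()
            spent += 0 if guaranteed else cost
    return results

tasks = [
    ("exact_total",  lambda: sum(range(100)), 0, True),   # must run
    ("refine_pass1", lambda: "refined-1",     5, False),  # droppable
    ("refine_pass2", lambda: "refined-2",     5, False),  # droppable
]
out = run(tasks, budget=5)  # only one refinement fits the budget
```

Here the guaranteed computation always completes, while the second refinement pass is silently shed — mirroring how best-effort work can shape the workload without an execution guarantee.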


Journal ArticleDOI
Hagen Stubing, Marc Bechler, Dieter Heussner, Thomas May, Ilja Radusch, Horst Rechner, Peter Vogel
TL;DR: This article proposes the simTD system architecture and the components necessary for successful deployment in the large-scale field trial of a car-to-x communication system.
Abstract: Car-to-car and car-to-infrastructure communications are considered as a key technology for safe and intelligent mobility in the future. In the German project simTD, car-to-x communication is shifting from a pure research topic toward a first deployment of such a system: the goal of simTD is to test car-to-x applications in a real metropolitan field trial. The simTD scenario includes vehicles, roadside units, and infrastructural facilities for traffic and test management. Besides these domains, several third parties are involved to provide access to additional services. The main communication partners are furthermore distributed over a wide area including highway, suburban, and urban scenarios. As a result, such a system requires a commonly accepted architecture of the individual components and a seamless communication network for reliable and efficient information interchange. In this article we propose the simTD system architecture and the components necessary for successful deployment in the large-scale field trial.

113 citations


Journal ArticleDOI
TL;DR: A new methodology for reasoning about the functional failures during early design of complex systems based on the notion that a failure happens when a functional element in the system does not perform its intended task is introduced.
Abstract: In this paper, we introduce a new methodology for reasoning about the functional failures during early design of complex systems. The proposed approach is based on the notion that a failure happens when a functional element in the system does not perform its intended task. Accordingly, a functional criticality is defined depending on the role of functionality in accomplishing designed tasks. A simulation-based failure analysis tool is then used to analyze functional failures and reason about their impact on overall system functionality. The analysis results are then integrated into an early stage system architecture analysis framework that analyzes the impact of functional failures and their propagation to guide system-level architectural design decisions. With this method, a multitude of failure scenarios can be quickly analyzed to determine the effects of architectural design decisions on overall system functionality. Using this framework, design teams can systematically explore risks and vulnerabilities during the early (functional design) stage of system development prior to the selection of specific components. Application of the presented method to the design of a representative aerospace electrical power system (EPS) testbed demonstrates these capabilities.

111 citations


Journal ArticleDOI
TL;DR: A condensed survey of existing research and technologies, including smart meeting system architecture, meeting capture, meeting recognition, semantic processing, and evaluation methods, aimed at providing an overview of underlying technologies to help understand the key design issues of such systems.
Abstract: Smart meeting systems, which record meetings and analyze the generated audio-visual content for future viewing, have been a topic of great interest in recent years. A successful smart meeting system relies on various technologies, ranging from devices and algorithms to architecture. This article presents a condensed survey of existing research and technologies, including smart meeting system architecture, meeting capture, meeting recognition, semantic processing, and evaluation methods. It aims at providing an overview of underlying technologies to help understand the key design issues of such systems. This article also describes various open issues as possible ways to extend the capabilities of current smart meeting systems.

Proceedings ArticleDOI
14 Mar 2010
TL;DR: This study identifies several non-intuitive relationships between program characteristics and demonstrates that it is possible to accurately model CUDA kernel performance using only metrics that are available before a kernel is executed.
Abstract: Heterogeneous systems, systems with multiple processors tailored for specialized tasks, are challenging programming environments. While it may be possible for domain experts to optimize a high performance application for a very specific and well documented system, it may not perform as well or even function on a different system. Developers who have less experience with either the application domain or the system architecture may devote a significant effort to writing a program that merely functions correctly. We believe that a comprehensive analysis and modeling framework is necessary to ease application development and automate program optimization on heterogeneous platforms. This paper reports on an empirical evaluation of 25 CUDA applications on four GPUs and three CPUs, leveraging the Ocelot dynamic compiler infrastructure which can execute and instrument the same CUDA applications on either target. Using a combination of instrumentation and statistical analysis, we record 37 different metrics for each application and use them to derive relationships between program behavior and performance on heterogeneous processors. These relationships are then fed into a modeling framework that attempts to predict the performance of similar classes of applications on different processors. Most significantly, this study identifies several non-intuitive relationships between program characteristics and demonstrates that it is possible to accurately model CUDA kernel performance using only metrics that are available before a kernel is executed.
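The modeling step described above fits a statistical relationship between pre-execution metrics and measured performance. A deliberately tiny sketch of that idea — ordinary least squares on a single invented metric (instruction count), whereas the paper uses 37 metrics and a richer statistical framework; all data points here are made up:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Invented training data: (static instruction count, measured runtime in us).
train = [(1_000, 12.0), (2_000, 21.0), (4_000, 41.0), (8_000, 80.0)]
a, b = fit_line([x for x, _ in train], [y for _, y in train])

# Predict the runtime of an unseen kernel from its static metric alone.
predicted = a * 16_000 + b
```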

Book ChapterDOI
01 Nov 2010
TL;DR: In this paper, the authors introduce the LTE system architecture, transmission techniques in the LTE system, the channels in the LTE radio interface, and radio resource management in LTE.
Abstract: This chapter contains sections titled: Introduction; System Architecture; Transmission Techniques in the LTE System; Channels in the Radio Interface of the LTE System; Radio Resource Management in LTE; References

Proceedings ArticleDOI
04 Nov 2010
TL;DR: The overall system architecture, a sketch of the trip-prediction algorithm, and the associated optimization problem are provided, and extensive use of well-known, standardized communication protocols between EVs and the centralized VPP is proposed.
Abstract: This paper outlines an architecture of an electric vehicle (EV) based vehicle-to-grid (V2G) integrating virtual power plant (VPP). The overall system architecture, a sketch of the trip-prediction algorithm, and the associated optimization problem are provided. The communication requirements for our proposed architecture are derived, with emphasis on its reliability, responsiveness, security, and application-level behaviour. We propose extensive use of well-known, standardized communication protocols between EVs and the centralized VPP to transmit status and trip information from EVs to the VPP as well as to control the charging process.

Journal ArticleDOI
TL;DR: This proposal updates, extends and refines the well-known architecture proposed earlier by Pinedo and Yen, and serves to integrate the different requirements identified in the literature review.

Proceedings ArticleDOI
13 Apr 2010
TL;DR: This paper presents an initial elastic computing framework that transparently optimizes application code onto diverse systems, achieving significant speedups ranging from 1.3x to 46x on a hyper-threaded Xeon system with an FPGA accelerator, a 16-CPU Opteron system, and a quad-core Xeon system.
Abstract: Over the past decade, system architectures have started on a clear trend towards increased parallelism and heterogeneity, often resulting in speedups of 10x to 100x. Despite numerous compiler and high-level synthesis studies, usage of such systems has largely been limited to device experts, due to significantly increased application design complexity. To reduce application design complexity, we introduce elastic computing - a framework that separates functionality from implementation details by enabling designers to use specialized functions, called elastic functions, which enable an optimization framework to explore thousands of possible implementations, even ones using different algorithms. Elastic functions allow designers to execute the same application code efficiently on potentially any architecture and for different runtime parameters such as input size, battery life, etc. In this paper, we present an initial elastic computing framework that transparently optimizes application code onto diverse systems, achieving significant speedups ranging from 1.3x to 46x on a hyper-threaded Xeon system with an FPGA accelerator, a 16-CPU Opteron system, and a quad-core Xeon system.
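The elastic-function idea above — one function name backed by several implementations, with the framework choosing among them based on runtime parameters such as input size — can be sketched as follows. The registry, decorator, and thresholds are hypothetical illustrations, not the paper's actual framework:

```python
# Registry of (max input size, implementation) pairs for one elastic function.
IMPLEMENTATIONS = []

def implementation(max_n):
    """Register an implementation valid for inputs of up to max_n elements."""
    def register(fn):
        IMPLEMENTATIONS.append((max_n, fn))
        IMPLEMENTATIONS.sort(key=lambda pair: pair[0])  # most specialized first
        return fn
    return register

@implementation(max_n=16)
def sort_small(xs):
    # Insertion sort: often fastest on tiny inputs.
    out = []
    for x in xs:
        i = len(out)
        while i > 0 and out[i - 1] > x:
            i -= 1
        out.insert(i, x)
    return out

@implementation(max_n=float("inf"))
def sort_large(xs):
    # General-purpose fallback for everything else.
    return sorted(xs)

def elastic_sort(xs):
    """Dispatch to the first registered implementation that fits the input."""
    for max_n, fn in IMPLEMENTATIONS:
        if len(xs) <= max_n:
            return fn(xs)
```

The real framework explores many implementations (including FPGA and multi-core variants) and optimizes the selection offline; this sketch only shows the dispatch-by-runtime-parameter shape of the interface.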

Journal ArticleDOI
TL;DR: Experimental results in a real-world setting suggest that the proposed solution is a promising approach to provide user context to local mobile applications as well as to network-level applications such as social networking services.

Journal ArticleDOI
TL;DR: A practical implementation of the transparent clock is presented with the overall system architecture and detailed operation of each building block, and results show that the time error is limited below 30 ns for nodes that were connected by three switches.
Abstract: This paper addresses issues with time synchronization using the IEEE 1588-2008 for distributed measurement and control systems. A practical implementation of the transparent clock is presented with the overall system architecture and detailed operation of each building block. To verify the submicrosecond accuracy using the implemented devices, an experimental setup that was analogous to a practical distributed system has been built. Measured results from the experiment show that the time error is limited below 30 ns for nodes that were connected by three switches. It is remarkable that the results are observed in spite of large packet queuing delays that were introduced by a traffic generator. The discussion on sources of time error that was outlined here provides technical considerations to designing IEEE 1588 systems.
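The transparent clock mechanism in IEEE 1588-2008 works by having each switch timestamp a Sync message on ingress and egress and accumulate the residence time into the packet's correction field, so that queuing delay inside switches does not corrupt the slave's offset estimate. A hedged sketch of that arithmetic — the timestamps below are invented nanosecond values, and the packet format is simplified to a dict:

```python
def forward_through_switch(packet, ingress_ts_ns, egress_ts_ns):
    """A transparent clock adds its residence time to the correction field."""
    packet["correction_ns"] += egress_ts_ns - ingress_ts_ns
    return packet

def slave_offset(t1_master_send, t2_slave_recv, correction_ns, link_delay_ns):
    """Slave-vs-master offset with switch residence time removed."""
    return t2_slave_recv - t1_master_send - correction_ns - link_delay_ns

pkt = {"correction_ns": 0}
pkt = forward_through_switch(pkt, ingress_ts_ns=1_000, egress_ts_ns=41_000)   # 40 us queued
pkt = forward_through_switch(pkt, ingress_ts_ns=50_000, egress_ts_ns=53_000)  #  3 us queued

offset = slave_offset(t1_master_send=0, t2_slave_recv=43_500,
                      correction_ns=pkt["correction_ns"], link_delay_ns=500)
# offset == 0: clocks agree once residence and link delays are removed
```

This is why the paper's sub-30 ns error survives large packet queuing delays: the queuing time is measured by the switches themselves and subtracted out rather than absorbed into the offset estimate.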

Posted Content
TL;DR: The methods developed for the classification of information on the World-Wide-Web are applied to study the organization of Open Source programs in an attempt to establish the statistical laws governing software architecture.
Abstract: Starting from the pioneering works on software architecture, valuable guidelines have emerged to indicate how computer programs should be organized. For example, the "separation of concerns" suggests splitting a program into modules that overlap in functionality as little as possible. However, these recommendations are mainly conceptual and are thus hard to express in a quantitative form. Hence software architecture relies on the individual experience and skill of the designers rather than on quantitative laws. In this article I apply the methods developed for the classification of information on the World-Wide-Web to study the organization of Open Source programs in an attempt to establish the statistical laws governing software architecture.

Journal ArticleDOI
TL;DR: The paper provides an overview of the problem area, gives an idea of the breadth of present ongoing research, establishes a new system architecture and reports on the results of conducted experiments with real-life robots.
Abstract: The intelligent controlling mechanism of a typical mobile robot is usually a computer system. Some recent research is ongoing in which biological neurons are being cultured and trained to act as the brain of an interactive real world robot, thereby either completely replacing, or operating in a cooperative fashion with, a computer system. Studying such hybrid systems can provide distinct insights into the operation of biological neural structures, and therefore, such research has immediate medical implications as well as enormous potential in robotics. The main aim of the research is to assess the computational and learning capacity of dissociated cultured neuronal networks. A hybrid system incorporating closed-loop control of a mobile robot by a dissociated culture of neurons has been created. The system is flexible and allows for closed-loop operation, either with hardware robot or its software simulation. The paper provides an overview of the problem area, gives an idea of the breadth of present ongoing research, establishes a new system architecture and, as an example, reports on the results of conducted experiments with real-life robots.

Proceedings ArticleDOI
20 Sep 2010
TL;DR: The ability to automatically generate adaptation plans based solely on ADL models and an application problem description simplifies the specification and use of adaptation mechanisms for system architects.
Abstract: Modern software-intensive systems are expected to adapt, often while the system is executing, to changing requirements, failures, and new operational contexts. This paper describes an approach to dynamic system adaptation that utilizes plan-based and architecture-based mechanisms. Our approach utilizes an architecture description language (ADL) and a planning-as-model-checking technology to enable dynamic replanning. The ability to automatically generate adaptation plans based solely on ADL models and an application problem description simplifies the specification and use of adaptation mechanisms for system architects. The approach uses a three-layer architecture that, while similar to previous work, provides several significant improvements. We apply our approach within the context of a mobile robotics case study.

Journal ArticleDOI
TL;DR: This paper discusses the use of a communications network security device, called a trust system, to enhance supervisory control and data-acquisition (SCADA) security, creates new trust system implementations to increase its flexibility, and demonstrates the trust system using TCP traffic.
Abstract: This paper discusses the use of a communications network security device, called a trust system, to enhance supervisory control and data-acquisition (SCADA) security. The major goal of the trust system is to increase security with minimal impact on existing utility communication systems. A previous paper focused on the technical operation of the trust system by augmenting routers to protect User Datagram Protocol (UDP)-based traffic. This paper concentrates on placing the trust system into a broader context, creates new trust system implementations to increase its flexibility, and demonstrates the trust system using TCP traffic. Specifically, the article expands on previous work in the following ways: 1) the article summarizes major threats against SCADA systems; 2) it discusses new trust system implementations, which allow the trust system to be used with a wider array of network-enabled equipment; 3) it discusses key SCADA security issues in the literature and shows how the trust system responds to such issues; 4) the paper shows the impact of the trust system when widely prevalent TCP/IP network communication is used; and 5) finally, the paper discusses a new hypothetical scenario to illustrate the protection that a trust system provides against insider threats.

Journal ArticleDOI
TL;DR: A novel environmental monitoring system with a focus on overall system architecture for seamless integration of wired and wireless sensors for long-term, remote, and near-real-time monitoring and a new WSN-based soil moisture monitoring system is developed and deployed to support hydrologic monitoring and modeling research.
Abstract: Wireless sensor networks (WSNs) have great potential to revolutionize many science and engineering domains. We present a novel environmental monitoring system with a focus on overall system architecture for seamless integration of wired and wireless sensors for long-term, remote, and near-real-time monitoring. We also present a unified framework for sensor data collection, management, visualization, dissemination, and exchange, conforming to the new Sensor Web Enablement standard. Some initial field testing results are also presented. The monitoring system is being integrated into the Texas Environmental Observatory infrastructure for long-term operation. As part of the integrated system, a new WSN-based soil moisture monitoring system is developed and deployed to support hydrologic monitoring and modeling research. This work represents a significant contribution to the empirical study of the emerging WSN technology. We address many practical issues in real-world application scenarios that are often neglected in the existing WSN research.

Journal ArticleDOI
TL;DR: A modeling effort that leveraged methods from both fields to perform formal verification of human–automation interaction with a programmable device and utilizes a system architecture composed of independent models of the human mission, human task behavior, human-device interface, device automation, and operational environment is discussed.
Abstract: Both the human factors engineering (HFE) and formal methods communities are concerned with improving the design of safety-critical systems. This work discusses a modeling effort that leveraged methods from both fields to perform formal verification of human-automation interaction with a programmable device. This effort utilizes a system architecture composed of independent models of the human mission, human task behavior, human-device interface, device automation, and operational environment. The goals of this architecture were to allow HFE practitioners to perform formal verifications of realistic systems that depend on human-automation interaction in a reasonable amount of time using representative models, intuitive modeling constructs, and decoupled models of system components that could be easily changed to support multiple analyses. This framework was instantiated using a patient controlled analgesia pump in a two phased process where models in each phase were verified using a common set of specifications. The first phase focused on the mission, human-device interface, and device automation; and included a simple, unconstrained human task behavior model. The second phase replaced the unconstrained task model with one representing normative pump programming behavior. Because models produced in the first phase were too large for the model checker to verify, a number of model revisions were undertaken that affected the goals of the effort. While the use of human task behavior models in the second phase helped mitigate model complexity, verification time increased. Additional modeling tools and technological developments are necessary for model checking to become a more usable technique for HFE.

Proceedings ArticleDOI
19 Jul 2010
TL;DR: This paper analyzes the current state of the art in disaster recovery, proposes a new approach with a Markov model, and uses cloud computing as a tool in managing disaster in the system of an organization.
Abstract: A System Architecture [SA] plays the leading role in an ICT based organization. It covers the entire organization and helps to achieve the mission of the organization. As other architectures, it also has a set of components and the relationships (connectors) among the components. It has a pre-defined life cycle that determines the duration of the system in which it should provide the services continuously. It means the system should survive throughout its defined life time. Unfortunately, a system comes across many hurdles during its life time. In this paper, we propose a reliable approach to recovering from disaster. We analyze the current state of the art in disaster recovery and propose a new approach to it. We use cloud computing as a tool in managing disaster in the system of an organization. We analyze our proposed approach with a Markov model. The required attributes for a system to address disaster recovery, such as availability, survivability, unavailability and downtime, are calculated at the end of this paper.
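The availability, unavailability, and downtime figures mentioned above typically come from a two-state Markov model: the system alternates between "up" (failing at rate lam) and "down" (repaired at rate mu), giving a steady-state availability of mu / (lam + mu). A sketch of that standard calculation — the failure and repair rates below are invented for illustration, not the paper's numbers:

```python
HOURS_PER_YEAR = 8760

def availability(lam, mu):
    """Steady-state probability of the 'up' state in a two-state Markov chain."""
    return mu / (lam + mu)

def annual_downtime_hours(lam, mu):
    """Expected hours per year spent in the 'down' state."""
    return (1 - availability(lam, mu)) * HOURS_PER_YEAR

lam = 1 / 1000  # assumed: one failure per 1000 hours (MTBF = 1000 h)
mu = 1 / 2      # assumed: repairs take 2 hours on average (MTTR = 2 h)
A = availability(lam, mu)  # equivalently MTBF / (MTBF + MTTR)
```

With these assumed rates, A = 1000/1002 (about 99.8%), and the unavailability 1 - A directly yields the expected annual downtime.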

Proceedings ArticleDOI
01 May 2010
TL;DR: A model derived from analyzing actual projects undertaken at Vistaprint Corporation is presented, derived from an analysis of effort tracked against modifications to specific software components before and after a significant architectural transformation to the subsystem housing those components.
Abstract: In any IT-intensive organization, it is useful to have a model to associate a value with software and system architecture decisions. More generally, any effort---a project undertaken by a team---needs to have an associated value to offset its labor and capital costs. Unfortunately, it is extremely difficult to precisely evaluate the benefit of "architecture projects"---those that aim to improve one or more quality attributes of a system via a structural transformation without (generally) changing its behavior. We often resort to anecdotal and informal "hand-waving" arguments of risk reduction or increased developer productivity. These arguments are typically unsatisfying to the management of organizations accustomed to decision-making based on concrete metrics. This paper will discuss research done to address this long-standing dilemma. Specifically, we will present a model derived from analyzing actual projects undertaken at Vistaprint Corporation. The model presented is derived from an analysis of effort tracked against modifications to specific software components before and after a significant architectural transformation to the subsystem housing those components. In this paper, we will discuss the development, implementation, and iteration of the model and the results that we have obtained.

Journal ArticleDOI
TL;DR: An agent-based approach used to design a Transportation Regulation Support System (TRSS) that reports network activity in real time and thus assists bus network regulators, and a prototype called SATIR that has been tested on the Brussels transportation network.
Abstract: This paper presents an agent-based approach used to design a Transportation Regulation Support System (TRSS) that reports network activity in real time and thus assists bus network regulators. The objective is to combine the functionalities of the existing information system with those of a decision support system in order to propose a generic model of a traffic regulation support system. Unlike other approaches, which deal only with a specific task, the original feature of our generic model is that it takes a global approach to the regulation function under normal conditions (network monitoring, dynamic timetable management) and under disrupted conditions (disturbance assessment and action planning of feasible solutions). Following the introduction, the second section presents the notions of the domain and highlights the main regulation problems. The third section details and motivates our choice of the components of the generic model. In the fourth section, based on our generic model, we present a TRSS prototype that we have developed, called SATIR (Systeme Automatique de Traitement des Incidents en Reseau, i.e., Automatic System for Network Incident Processing). SATIR has been tested on the Brussels transportation network (STIB), and the results are presented in the fifth section. Lastly, we show how using the multi-agent paradigm opens perspectives for the development of new functionalities to improve the management of a bus network.
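The split between normal-conditions regulation (timetable management) and disrupted-conditions regulation (disturbance assessment and action planning) can be caricatured as a monitoring loop that classifies observed delays and escalates past a threshold. Everything below (bus names, delay values, threshold) is invented for illustration and is not SATIR's actual logic:

```python
# Toy regulation-support loop in the spirit of the abstract above.
# The escalation threshold and delay data are hypothetical.
DISTURBANCE_THRESHOLD_MIN = 5  # minutes late before escalating to planning

def monitor(observed_delays_min):
    """Classify each bus as on-time, needing a timetable adjustment
    (normal conditions), or needing intervention planning (disrupted)."""
    actions = {}
    for bus, delay in observed_delays_min.items():
        if delay <= 1:
            actions[bus] = "on-time"
        elif delay < DISTURBANCE_THRESHOLD_MIN:
            actions[bus] = "adjust-timetable"   # normal-conditions regulation
        else:
            actions[bus] = "plan-intervention"  # disrupted-conditions regulation
    return actions

print(monitor({"bus-12": 0, "bus-34": 3, "bus-56": 9}))
```

In the paper's multi-agent setting, each classification would be handled by a dedicated agent rather than a single function; the sketch only shows the decision boundary between the two regulation regimes.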

Journal ArticleDOI
TL;DR: A detailed system architecture and methodologies are first proposed, where the components include a real-time DDDAS simulation, grid modules, a web service communication server, databases, various sensors, and a real system.
Abstract: Dynamic-Data-Driven Application Systems (DDDAS) is a new modeling and control paradigm which adaptively adjusts the fidelity of a simulation model. The fidelity of the simulation model is adjusted against available computational resources by incorporating dynamic data into the executing model, which then steers the measurement process for selective data update. To this end, a comprehensive system architecture and methodologies are first proposed, where the components include a real-time DDDAS simulation, grid modules, a web service communication server, databases, various sensors, and a real system. Abnormality detection, fidelity selection, fidelity assignment, and prediction and task generation are enabled through the embedded algorithms developed in this work. Grid computing is used for computational resource management, and web services are used for interoperable communications among distributed software components. The proposed DDDAS is demonstrated on an example of preventive maintenance scheduling in...
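The fidelity-adjustment idea can be illustrated with a toy selector that picks a simulation fidelity level from the available compute budget and a detected abnormality score. The levels, costs, and thresholds below are invented for illustration and are not the paper's algorithms:

```python
# Toy DDDAS-style fidelity selector. Fidelity levels and their
# computational costs (arbitrary units) are hypothetical.
FIDELITY_COST = {"low": 1, "medium": 4, "high": 10}

def select_fidelity(compute_budget, abnormality_score):
    """Pick the highest fidelity that fits the budget, escalating
    toward high fidelity when sensor data looks abnormal."""
    # Abnormal readings justify spending more of the budget.
    if abnormality_score > 0.8:
        preferred = ["high", "medium", "low"]
    elif abnormality_score > 0.3:
        preferred = ["medium", "low"]
    else:
        preferred = ["low"]
    for level in preferred:
        if FIDELITY_COST[level] <= compute_budget:
            return level
    return "low"  # degrade gracefully if nothing fits the budget

print(select_fidelity(compute_budget=12, abnormality_score=0.9))
print(select_fidelity(compute_budget=5, abnormality_score=0.9))
print(select_fidelity(compute_budget=2, abnormality_score=0.1))
```

The same decision would, in the paper's architecture, be made by the abnormality-detection and fidelity-selection modules against resources reported by the grid layer.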

Proceedings ArticleDOI
08 Mar 2010
TL;DR: The architecture uses a Sobel edge detector to achieve real-time (75 fps) performance, and is configurable in terms of various application parameters, making it suitable for a number of application environments.
Abstract: Stereoscopic 3D reconstruction is an important algorithm in the field of Computer Vision, with a variety of applications in embedded and real-time systems. Existing software-based implementations cannot satisfy the performance requirements for such constrained systems; hence an embedded hardware mechanism might be more suitable. In this paper, we present an architecture of a 3D reconstruction system for stereoscopic images, which we implement on a Virtex2 Pro FPGA. The architecture uses a Sobel edge detector to achieve real-time (75 fps) performance, and is configurable in terms of various application parameters, making it suitable for a number of application environments. The paper also presents a design exploration on algorithmic parameters such as disparity range, correlation window size, and input image size, illustrating the impact of each parameter on performance.
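The correlation-window matching over a disparity range that the abstract explores can be sketched in software as naive sum-of-absolute-differences (SAD) block matching. The parameters below (window size, disparity range) are illustrative defaults, not the paper's hardware design:

```python
import numpy as np

def disparity_map(left, right, max_disp=16, win=5):
    """Naive SAD block matching: for each left-image pixel, find the
    horizontal shift of the right image whose win x win window
    matches best. Returns the per-pixel disparity."""
    h, w = left.shape
    half = win // 2
    disp = np.zeros((h, w), dtype=np.uint8)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y-half:y+half+1, x-half:x+half+1].astype(np.int32)
            best, best_d = None, 0
            for d in range(max_disp):
                cand = right[y-half:y+half+1,
                             x-d-half:x-d+half+1].astype(np.int32)
                sad = np.abs(patch - cand).sum()  # matching cost
                if best is None or sad < best:
                    best, best_d = sad, d
            disp[y, x] = best_d
    return disp
```

The triple loop makes clear why software implementations struggle at video rates: cost grows with image size, disparity range, and window area, which is exactly the parameter space the paper's FPGA design exploration covers.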

Journal ArticleDOI
TL;DR: The efforts that have been made to realize CDS service features, core components, application, and deployment architectures in the context of the Korean EHR showed the potential to contribute to the adoption of CDS at the national level.