
Showing papers on "Systems architecture" published in 2007


Proceedings ArticleDOI
06 Nov 2007
TL;DR: A description and prototype implementation of the system architecture; an evaluation of sensing and inference that quantifies cyclist performance and the cyclist environment; a report on networking performance in an environment characterized by bicycle mobility and human unpredictability; and a description of the BikeNet system user interfaces are presented.
Abstract: We describe our experiences deploying BikeNet, an extensible mobile sensing system for cyclist experience mapping that leverages opportunistic sensor networking principles and techniques. BikeNet represents a multifaceted sensing system and explores personal, bicycle, and environmental sensing using dynamically role-assigned bike area networking based on customized Moteiv Tmote Invent motes and sensor-enabled Nokia N80 mobile phones. We investigate real-time and delay-tolerant uploading of data via a number of sensor access points (SAPs) to a networked repository. Among bicycles that rendezvous en route, we explore inter-bicycle networking via data muling. The repository provides a cyclist with data archival, retrieval, and visualization services. BikeNet promotes the social networking of the cycling community through the provision of a web portal that facilitates back-end sharing of real-time and archived cycling-related data from the repository. We present: a description and prototype implementation of the system architecture; an evaluation of sensing and inference that quantifies cyclist performance and the cyclist environment; a report on networking performance in an environment characterized by bicycle mobility and human unpredictability; and a description of the BikeNet system user interfaces. Visit [4] to see how the BikeNet system visualizes a user's rides.
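The delay-tolerant side of this design is easy to picture: readings are buffered on the bike and flushed opportunistically, either to a SAP when one comes into radio range or to a rendezvousing bicycle acting as a data mule. The Python sketch below illustrates that store-and-forward idea; it is a hypothetical illustration, not BikeNet code, and the radio abstraction and method names are invented.

```python
from collections import deque

class DelayTolerantUploader:
    """Buffers sensor readings until a sensor access point (SAP) is
    reachable; otherwise records can be handed to a passing bicycle
    acting as a data mule. Hypothetical sketch, not BikeNet code;
    `radio` is an assumed abstraction with the methods used below."""

    def __init__(self, radio):
        self.radio = radio
        self.buffer = deque()

    def record(self, reading):
        self.buffer.append(reading)

    def flush(self):
        # Real-time path: upload directly when a SAP is in range.
        if self.radio.sap_in_range():
            while self.buffer:
                self.radio.upload_to_sap(self.buffer.popleft())
        # Muling path: offload to a rendezvousing bicycle instead.
        elif self.radio.peer_bike_in_range():
            while self.buffer:
                self.radio.transfer_to_mule(self.buffer.popleft())
```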

467 citations


Journal ArticleDOI
TL;DR: This paper discusses an advanced approach for a 3DTV service, which is based on the concept of video-plus-depth data representations, and provides a modular and flexible system architecture supporting a wide range of multi-view structures.
Abstract: Due to enormous progress in the areas of auto-stereoscopic 3D displays, digital video broadcast and computer vision algorithms, 3D television (3DTV) has reached a high technical maturity and many people now believe in its readiness for marketing. Experimental prototypes of entire 3DTV processing chains have been demonstrated successfully during the last few years, and the Moving Picture Experts Group (MPEG) of ISO/IEC has launched related ad hoc groups and standardization efforts envisaging the emerging market segment of 3DTV. In this context the paper discusses an advanced approach for a 3DTV service, which is based on the concept of video-plus-depth data representations. It particularly considers aspects of interoperability and multi-view adaptation for the case that different multi-baseline geometries are used for multi-view capturing and 3D display. Furthermore, it presents algorithmic solutions for the creation of depth maps and depth image-based rendering related to this framework of multi-view adaptation. In contrast to other proposals, which are more focused on specialized configurations, the underlying approach provides a modular and flexible system architecture supporting a wide range of multi-view structures.
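The depth image-based rendering step at the heart of video-plus-depth can be stated compactly: for rectified, parallel cameras, a pixel at depth Z shifts by disparity d = f·b/Z, where f is the focal length in pixels and b the virtual baseline. The sketch below shows that textbook warp, not the paper's full algorithm; hole filling and occlusion handling are omitted.

```python
import numpy as np

def dibr_warp(image, depth, focal_px, baseline):
    """Synthesize a virtual view by shifting each pixel by its disparity
    d = focal_px * baseline / depth (rectified, parallel cameras; depth > 0).
    Textbook sketch only: holes and occlusions are left unhandled."""
    h, w = depth.shape
    virtual = np.zeros_like(image)
    disparity = np.rint(focal_px * baseline / depth).astype(int)
    for y in range(h):
        for x in range(w):
            xv = x - disparity[y, x]    # shift toward the virtual viewpoint
            if 0 <= xv < w:
                virtual[y, xv] = image[y, x]
    return virtual
```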

434 citations


Journal ArticleDOI
TL;DR: It is argued that delegation requires a shared hierarchical task model between supervisor and subordinates, used to delegate tasks at various levels and to offer instruction on performing them; an architecture for machine-based delegation systems, based on the metaphor of a sports team's "playbook," is developed.
Abstract: OBJECTIVE: To develop a method enabling human-like, flexible supervisory control via delegation to automation. BACKGROUND: Real-time supervisory relationships with automation are rarely as flexible as human task delegation to other humans. Flexibility in human-adaptable automation can provide important benefits, including improved situation awareness, more accurate automation usage, more balanced mental workload, increased user acceptance, and improved overall performance. METHOD: We review problems with static and adaptive (as opposed to "adaptable") automation; contrast these approaches with human-human task delegation, which can mitigate many of the problems; and revise the concept of a "level of automation" as a pattern of task-based roles and authorizations. We argue that delegation requires a shared hierarchical task model between supervisor and subordinates, used to delegate tasks at various levels and to offer instruction on performing them. A prototype implementation called Playbook is described. RESULTS: On the basis of these analyses, we propose methods for supporting human-machine delegation interactions that parallel human-human delegation in important respects. We develop an architecture for machine-based delegation systems based on the metaphor of a sports team's "playbook." Finally, we describe a prototype implementation of this architecture, with an accompanying user interface and usage scenario, for mission planning for uninhabited air vehicles. CONCLUSION: Delegation offers a viable method for flexible, multilevel human-automation interaction to enhance system performance while maintaining user workload at a manageable level. APPLICATION: Most applications of adaptive automation (aviation, air traffic control, robotics, process control, etc.) are potential avenues for the adaptable, delegation approach we advocate. We present an extended example for uninhabited air vehicle mission planning.
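The shared hierarchical task model that delegation presupposes can be pictured as a task tree in which a supervisor may delegate at any level and attach constraints (instructions) to the delegated node. The sketch below is a generic illustration of that idea, not the Playbook implementation; all names and the constraint format are invented.

```python
class Task:
    """Node in a shared hierarchical task model (hypothetical sketch)."""
    def __init__(self, name, subtasks=()):
        self.name = name
        self.subtasks = list(subtasks)
        self.constraints = {}    # supervisor instructions for this node
        self.assignee = None     # "human" or "automation"

    def delegate(self, assignee, **constraints):
        # Delegating a node implicitly delegates its whole subtree,
        # unless a subtask is later re-delegated at a lower level.
        self.assignee = assignee
        self.constraints.update(constraints)

# A supervisor can delegate a whole "play" coarsely ...
ingress = Task("ingress", [Task("route-plan"), Task("terrain-follow")])
ingress.delegate("automation", max_altitude_ft=500)
# ... or reach down the hierarchy and keep one subtask for the human.
ingress.subtasks[0].delegate("human")
```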

407 citations


Journal ArticleDOI
TL;DR: The paper provides practitioners with insight on how RFID technology can meet traceability requirements and what technological approach is more appropriate.
Abstract: Purpose – This paper aims to study the main requirements of traceability and examine how radio frequency identification (RFID) technology can address these requirements. It further seeks to outline both an information data model and a system architecture that will make traceability feasible and easily deployable across a supply chain. Design/methodology/approach – The design research approach is followed, associating traceability requirements to a proposed system design. Findings – The technological approach used has great implications in relation to the cost associated with a traceability system and the ease of its deployment. Research limitations/implications – Validation of the proposed information data model and system architecture is required through practical deployment in different settings. Practical implications – The paper provides practitioners with insight on how RFID technology can meet traceability requirements and what technological approach is more appropriate. Originality/value...

386 citations


Book ChapterDOI
15 May 2007
TL;DR: The conflict-driven answer set solver clasp, which is based on concepts from constraint processing (CSP) and satisfiability checking (SAT), is described, together with a systematic empirical evaluation of its features.
Abstract: We describe the conflict-driven answer set solver clasp, which is based on concepts from constraint processing (CSP) and satisfiability checking (SAT). We detail its system architecture and major features, and provide a systematic empirical evaluation of its features.
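Since clasp follows the conflict-driven (CDCL) tradition, its core loop extends the classic propagate/decide skeleton with nogood learning, backjumping, and ASP-specific unfounded-set propagation. As a runnable point of reference, here is a minimal DPLL skeleton in Python, i.e. the simpler loop that conflict-driven solvers build on; it is an illustration, not clasp's code.

```python
def dpll(clauses, assignment=None):
    """Minimal DPLL with unit propagation. clasp layers conflict-driven
    nogood learning, backjumping, and unfounded-set checks on top of
    this basic propagate/decide skeleton. Clauses are lists of nonzero
    ints; a negative literal is the negation of its variable."""
    assignment = dict(assignment or {})
    while True:                                  # unit propagation
        unit = None
        for clause in clauses:
            if any(assignment.get(abs(l)) == (l > 0) for l in clause):
                continue                         # clause already satisfied
            unassigned = [l for l in clause if abs(l) not in assignment]
            if not unassigned:
                return None                      # conflict: clause falsified
            if len(unassigned) == 1:
                unit = unassigned[0]
                break
        if unit is None:
            break
        assignment[abs(unit)] = unit > 0
    free = {abs(l) for c in clauses for l in c} - set(assignment)
    if not free:
        return assignment                        # model found
    var = min(free)                              # decide: pick a variable
    for value in (True, False):
        result = dpll(clauses, {**assignment, var: value})
        if result is not None:
            return result
    return None

print(dpll([[1, -2], [2, 3], [-1, -3]]))   # e.g. {1: True, 3: False, 2: True}
```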

328 citations


Journal ArticleDOI
01 Feb 2007
TL;DR: A system architecture capable of integrating mobile commerce and RFID applications is proposed and the aims of the system are to keep track of the locations of stackers and containers, provide greater visibility of the operations data, and improve the control processes.
Abstract: In this paper, we present the findings of a case study on the development of a radio frequency identification (RFID) prototype system that is integrated with mobile commerce (m-commerce) in a container depot. A system architecture capable of integrating mobile commerce and RFID applications is proposed. The system architecture is examined and explained in the context of the case study. The aims of the system are to (i) keep track of the locations of stackers and containers, (ii) provide greater visibility of the operations data, and (iii) improve the control processes. The case study illustrates the benefits and advantages of using an RFID system, particularly its support of m-commerce activities in the container depot, and describes some of the most important problems and issues. Finally, several research issues and directions of RFID applications in container depots are presented and discussed.

313 citations


Journal ArticleDOI
TL;DR: Five industrial software architecture design methods are compared; the five approaches are found to have much in common and to match, more or less, an "ideal" pattern that can be used for further method comparisons.

292 citations


Journal ArticleDOI
Abstract: To optimize the system design and allow for plug and play of subsystems, automotive electronic system architecture evaluation and development must be supported with a robust design flow based on virtual platforms.

214 citations


Journal ArticleDOI
TL;DR: The architecture, which is comprehensive in that it is derived from extended requirements viewed from a lifecycle perspective, will provide a basis for research and development of process-oriented knowledge management systems.

168 citations


Patent
18 Apr 2007
TL;DR: A centralized, scalable, and dynamic system architecture is described that allows customers to replicate the internal build, integrate, and test environments previously used on the customer premises, to provision and re-provision such resources on demand, and to seamlessly integrate their internal environments with the described system.
Abstract: Systems (100) and methods are described that allow for the dynamic allocation and re-allocation of hardware and software resources (106-226) to support the build, integration, and test phases of complex software development projects. The invention provides customers who have medium to large development teams, which may be geographically distributed, with an integrated development environment, which includes centralized governance of software code repositories, build systems and test systems. Embodiments of the invention include a centralized, scalable, and dynamic system architecture which allows customers to (1) replicate the internal build, integrate, and test environments that were previously used on the customer premises, (2) provision and re-provision such resources on demand, and (3) seamlessly integrate their internal environments with the system described herein.

142 citations


Proceedings ArticleDOI
06 Jan 2007
TL;DR: This paper compares three static architecture compliance checking approaches (reflexion models, relation conformance rules, and component access rules) by assessing their applicability in 13 distinct dimensions and gives guidance on when to use which approach.
Abstract: The software architecture is one of the most important artifacts created in the lifecycle of a software system. It directly enables, facilitates, hampers, or interferes with the achievement of business goals and of functional and quality requirements. One instrument to determine how adequate the architecture is for its intended usage is architecture compliance checking. This paper compares three static architecture compliance checking approaches (reflexion models, relation conformance rules, and component access rules) by assessing their applicability in 13 distinct dimensions. The results give guidance on when to use which approach.
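Of the three approaches, relation conformance rules are the easiest to picture: allowed dependency relations between modules are declared up front, and dependencies extracted from the code are checked against them. A minimal sketch of that idea (the rule set and module names are invented for illustration):

```python
# Declared architecture: which module may depend on which (assumed rules).
ALLOWED = {
    ("ui", "service"),
    ("service", "domain"),
    ("domain", "persistence"),
}

def check_conformance(extracted_deps):
    """Flag every code-level dependency not covered by an allowed relation.
    extracted_deps: iterable of (source_module, target_module) pairs
    recovered from the code by static analysis."""
    return [dep for dep in extracted_deps if dep not in ALLOWED]

violations = check_conformance([("ui", "service"), ("ui", "persistence")])
print(violations)   # [('ui', 'persistence')] -- a layering violation
```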

Journal ArticleDOI
TL;DR: This paper will examine the systems engineering processes in existence today and conclude with development of a process designed specifically for systems family applications.
Abstract: Today, many systems within government and industry are typically engineered, not as stand-alone systems, but as part of an integrated system of systems, or a federation of systems, or systems family. A significant level of effort has been devoted in the past several years to the development, refinement, and ultimately, acceptance of processes for engineering systems, or systems engineering processes. Today, we have four "standard" processes within present and past standards: EIA-632, IEEE 1220, ISO 15288, and MIL-STD-499C. We continue to use the systems engineering processes espoused in our current set of standards, and are left to our own devices to tailor these processes to one appropriate for a systems family context. This paper will examine the systems engineering processes in existence today and conclude with the development of a process designed specifically for systems family applications.

Journal ArticleDOI
01 Aug 2007
TL;DR: A Web-based DSS is described that helps Hollywood managers make better decisions on important movie characteristics such as genre, superstars, technical effects, and release time.
Abstract: Herein we describe a Web-based DSS to help Hollywood managers make better decisions on important movie characteristics such as genre, superstars, technical effects, and release time. These parameters are used to build prediction models to classify a movie in one of nine success categories, from a "flop" to a "blockbuster". The system employs a number of traditional and non-traditional prediction models as distributed independent experts, implemented as Web services. The paper describes the purpose and the architecture of the system, the development environment, the user assessment results, and the lessons learned as they relate to Web-based DSS development.
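The nine success categories amount to discretizing a predicted box-office figure into ordered bins from "flop" to "blockbuster". The sketch below shows that final classification step; the revenue cut-offs are invented for illustration and need not match the paper's.

```python
import bisect

# Hypothetical revenue cut-offs (millions USD) separating nine ordered
# success classes; the paper's actual boundaries may differ.
CUTOFFS = [1, 10, 20, 40, 65, 100, 150, 200]
LABELS = ["flop"] + [f"class-{i}" for i in range(2, 9)] + ["blockbuster"]

def success_category(predicted_revenue_musd):
    """Discretize a predicted box-office figure into one of nine classes."""
    return LABELS[bisect.bisect_right(CUTOFFS, predicted_revenue_musd)]

print(success_category(0.5))   # flop
print(success_category(250))   # blockbuster
```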

Patent
31 Aug 2007
TL;DR: A system architecture and computing machine operating as a server executes virtualization software to generate virtual machines as virtual desktops for a plurality of users, supporting application program processing while providing a level of isolation that prevents user data, the system operating system, and application program templates from being corrupted by viruses, hacker code or attacks, spyware, bots, or other malicious code.
Abstract: Network computer system and method using a thin user client and virtual machines to provide immunity to hacking, viruses and spyware. A system architecture and computing machine operate as a server executing virtualization software to generate a plurality of virtual machines as virtual desktops for a plurality of users; the environment supports application program processing by a plurality of users and provides a level of isolation that prevents user data, the system operating system, and application program templates from being corrupted by viruses, hacker code or attack, spyware, bots, or other malicious code or attacks.

Journal ArticleDOI
TL;DR: This paper argues that microkernels are the best approach for delivering truly trustworthy computer systems in the foreseeable future and presents the NICTA operating-systems research vision, centred around the L4 microkernel and based on four core projects.
Abstract: As computer systems become increasingly mission-critical, used in life-critical situations, and relied upon to protect intellectual property, operating-system reliability is becoming an ever growing concern. In the past, mission- and life-critical embedded systems consisted of simple microcontrollers running a small amount of software that could be validated using traditional and informal techniques. However, with the growth of software complexity, traditional techniques for ensuring software reliability have not been able to keep up, leading to an overall degradation of reliability. This paper argues that microkernels are the best approach for delivering truly trustworthy computer systems in the foreseeable future. It presents the NICTA operating-systems research vision, centred around the L4 microkernel and based on four core projects. The seL4 project is designing an improved API for a secure microkernel; L4.verified will produce a full formal verification of the microkernel; Potoroo combines execution-time measurements with static analysis to determine the worst-case execution profiles of the kernel; and CAmkES provides a component architecture for building systems that use the microkernel. Through close collaboration with Open Kernel Labs (a NICTA spinoff) the research output of these projects will make its way into products over the next few years.

Journal ArticleDOI
TL;DR: A re-focusing of CIC research is proposed on the relatively under-represented area of semantically described and coordinated process-oriented systems, to better support the kind of short-term virtual organisation that typifies the working environment in the construction sector.

Journal ArticleDOI
TL;DR: This paper reviews how process automation system architectures have evolved and discusses future trends, drawing an analogy between the synergistic new technologies being developed today and the technology landscape of the early 1970s that resulted in the first DCS systems.


Proceedings ArticleDOI
11 Jun 2007
TL;DR: This paper describes the key aspects of the ADO.NET Entity Framework, a platform that significantly reduces the impedance mismatch for applications and data-centric services, along with the overall system architecture and the underlying technologies.
Abstract: Traditional client-server applications relegate query and persistence operations on their data to database systems. The database system operates on data in the form of rows and tables, while the application operates on data in terms of higher-level programming language constructs (classes, structures etc.). The impedance mismatch in the data manipulation services between the application and the database tier was problematic even in traditional systems. With the advent of service-oriented architectures (SOA), application servers and multi-tier applications, the need for data access and manipulation services that are well-integrated with programming environments and can operate in any tier has increased tremendously. Microsoft's ADO.NET Entity Framework is a platform for programming against data that raises the level of abstraction from the relational level to the conceptual (entity) level, and thereby significantly reduces the impedance mismatch for applications and data-centric services. This paper describes the key aspects of the Entity Framework, the overall system architecture, and the underlying technologies.
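The abstraction shift described here, from rows and tables to typed entities the program manipulates directly, is the move any object-relational mapping layer makes. The Python sketch below illustrates the concept with a hand-written mapping; it is an analogy only, not the .NET Entity Framework API.

```python
import sqlite3
from dataclasses import dataclass

@dataclass
class Customer:              # conceptual (entity) level: a typed object
    id: int
    name: str

def load_customer(conn, customer_id):
    """Map the relational level (a row) to the entity level (an object).
    The Entity Framework generates such mappings from a conceptual model;
    here the mapping is hand-written to show what the framework automates."""
    row = conn.execute("SELECT id, name FROM customers WHERE id = ?",
                       (customer_id,)).fetchone()
    return Customer(id=row[0], name=row[1])

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO customers VALUES (1, 'Ada')")
print(load_customer(conn, 1))    # Customer(id=1, name='Ada')
```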

Proceedings Article
01 Jan 2007
TL;DR: The first Trio prototype, dubbed Trio-One, is built on top of a conventional DBMS using data and query translation techniques together with a small number of stored procedures; its translation scheme and system architecture are described.
Abstract: Trio is a new kind of database system that supports data, uncertainty, and lineage in a fully integrated manner. The first Trio prototype, dubbed Trio-One, is built on top of a conventional DBMS using data and query translation techniques together with a small number of stored procedures. This paper describes Trio-One's translation scheme and system architecture, showing how it efficiently and easily supports the Trio data model and query language.
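Trio-One's encoding strategy can be illustrated with a toy translation: each alternative of an uncertain "x-tuple" becomes an ordinary row carrying an x-tuple identifier and a confidence, and lineage becomes a separate table of derivation pointers, so Trio queries can be rewritten as plain SQL. The schema below is invented for illustration and is not Trio-One's actual layout.

```python
import sqlite3

# Toy encoding of an uncertain relation on a conventional DBMS.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- One row per alternative of an uncertain 'x-tuple'.
    CREATE TABLE saw (alt_id INTEGER PRIMARY KEY, xtuple_id INT,
                      witness TEXT, car TEXT, conf REAL);
    -- Lineage: which base alternatives a derived alternative came from.
    CREATE TABLE lineage (derived_alt INT, source_alt INT);
""")
conn.executemany(
    "INSERT INTO saw VALUES (?, ?, ?, ?, ?)",
    [(1, 100, "Amy", "Honda", 0.6),    # two alternatives of the same
     (2, 100, "Amy", "Toyota", 0.4)])  # uncertain observation
# A Trio-style query is translated to SQL over this encoding:
for row in conn.execute(
        "SELECT witness, car, conf FROM saw WHERE xtuple_id = 100"):
    print(row)
```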

Book ChapterDOI
24 Sep 2007
TL;DR: This work discusses the current limitations of patterns in evaluating their impact on quality attributes, and proposes integrating information about patterns' impact on quality attributes in order to increase the usefulness of architecture patterns.
Abstract: Architectural design has been characterized as making a series of decisions that have system-wide impact. These decisions have side effects which can have significant impact on the system. However, the impact may first be understood much later, when the system architecture is difficult to change. Architecture patterns can help architects understand the impact of architectural decisions at the time these decisions are made, because patterns contain information about the consequences and context of the pattern usage. However, this information has been of limited use because it is not presented consistently or systematically. We discuss the current limitations of patterns in evaluating their impact on quality attributes, and propose integrating information about patterns' impact on quality attributes in order to increase the usefulness of architecture patterns.

Journal ArticleDOI
TL;DR: The results of performance evaluations demonstrate that the proposed hierarchical RFID network architecture reduces the network and database system loading by 41.8% and 83.2%, respectively.

Journal ArticleDOI
TL;DR: The current study focuses on the system architecture of closed-loop PLM with respect to business model, hardware, and software, and investigates the main components and how they are related to each other.
Abstract: The closed-loop product life cycle management (closed-loop PLM) system focuses on tracking and managing the information of the whole product life cycle, with possible feedback of information to product life cycle phases. It provides opportunities to reduce the inefficiency of life cycle operations and gain competitiveness. Thanks to the advent of hardware and software related to product identification technologies, e.g. radio frequency identification (RFID) technology, the closed-loop PLM has recently been highlighted as a tool for companies to enhance the performance of their business models. However, implementing the PLM system requires a high level of coordination and integration. To this end, it is a prerequisite to investigate what the main components for closed-loop PLM are and how they are related to each other. To address this need, the current study focuses on the system architecture of closed-loop PLM with respect to business model, hardware, and software.

Patent
21 Jun 2007
TL;DR: In this paper, a server-side triggered policy caching mechanism that allows for previous classification policy decisions made for previous data flows to be applied to subsequent new flows is proposed. But the caching mechanism is not considered in this paper.
Abstract: A data and control plane architecture for network devices. An example system architecture includes a network processing unit implementing one or more data plane operations, and a network device operably coupled to the network processing unit that implements a control plane. In a particular implementation, the network processing unit is configured to process network traffic according to a data plane configuration, and sample selected packets to the network device. The network device processes the sampled packets and adjusts the data plane configuration responsive to the sampled packets. In particular implementations, the control plane and data plane implement a server-side triggered policy caching mechanism that allows classification policy decisions made for previous data flows to be applied to subsequent new flows.
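The caching mechanism boils down to keying classification verdicts by server endpoint, so a decision reached for one flow is reused for later flows to the same service without re-classification. A minimal sketch, with all names invented for illustration:

```python
from collections import namedtuple

Flow = namedtuple("Flow", "src_ip dst_ip dst_port proto")
policy_cache = {}    # (server ip, port, proto) -> cached policy decision

def classify(flow, deep_classifier):
    """Reuse a cached decision for flows to an already-classified server;
    otherwise classify fully and cache the verdict (hypothetical sketch)."""
    key = (flow.dst_ip, flow.dst_port, flow.proto)
    if key not in policy_cache:                 # slow path: inspect the flow
        policy_cache[key] = deep_classifier(flow)
    return policy_cache[key]                    # fast path for later flows

# First flow to the server is classified; the second hits the cache.
classifier = lambda f: "rate-limit" if f.dst_port == 80 else "permit"
a = classify(Flow("10.0.0.1", "203.0.113.5", 80, "tcp"), classifier)
b = classify(Flow("10.0.0.2", "203.0.113.5", 80, "tcp"), classifier)
print(a, b)    # rate-limit rate-limit
```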

Proceedings ArticleDOI
23 Jun 2007
TL;DR: A model to support the hypothesis that this network can improve response to asymmetric threats is described, along with a system architecture based on commercial-off-the-shelf technology for military operations in this new combat paradigm.
Abstract: The purpose of this investigative study is to develop a model for a system of systems to improve situation awareness and targeting through a real-time aerially deployed wireless sensor network. The main hypothesis of this work is that a scalable, affordable, real-time wireless network can contribute to the common operating picture. This network can be used to overcome targeting discrepancies and counter asymmetric threats in the new warfare paradigm that pertains to opaque environments. Surveys identified the requirements a system of systems must meet for situation assessment. This paper describes (1) a model to support the hypothesis that this network can improve response to asymmetric threats, and (2) a system architecture based on commercial-off-the-shelf technology for military operations in this new combat paradigm.

Proceedings ArticleDOI
10 Nov 2007
TL;DR: The BlackWidow system is a distributed shared memory architecture that is scalable to 32K processors, each with a 4-way dispatch scalar execution unit and an 8-pipe vector unit capable of 20.8 Gflops.
Abstract: This paper describes the system architecture of the Cray BlackWidow scalable vector multiprocessor. The BlackWidow system is a distributed shared memory (DSM) architecture that is scalable to 32K processors, each with a 4-way dispatch scalar execution unit and an 8-pipe vector unit capable of 20.8 Gflops for 64-bit operations and 41.6 Gflops for 32-bit operations at the prototype operating frequency of 1.3 GHz. Global memory is directly accessible with processor loads and stores and is globally coherent. The system supports thousands of outstanding references to hide remote memory latencies, and provides a rich suite of built-in synchronization primitives. Each BlackWidow node is implemented as a 4-way SMP with up to 128 Gbytes of DDR2 main memory capacity. The system supports common programming models such as MPI and OpenMP, as well as global address space languages such as UPC and CAF. We describe the system architecture and microarchitecture of the processor, memory controller, and router chips. We give preliminary performance results and discuss design tradeoffs.
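The quoted peak rates are consistent with eight vector pipes each completing two 64-bit floating-point operations per cycle at 1.3 GHz, doubling for 32-bit operations; the flops-per-pipe-per-cycle figure is inferred from the stated totals rather than taken from the paper. A one-line check:

```python
pipes, ghz = 8, 1.3
flops_per_pipe_64 = 2                        # inferred from the quoted 20.8 Gflops
print(pipes * flops_per_pipe_64 * ghz)       # 20.8 Gflops, 64-bit peak
print(pipes * flops_per_pipe_64 * 2 * ghz)   # 41.6 Gflops, 32-bit peak
```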

Journal ArticleDOI
TL;DR: The experimental results show that, besides increasing the degree of reusability and openness, application of the above-mentioned methodology leads to a significant decrease in development time as well as maintenance cost.

Journal ArticleDOI
TL;DR: In this paper, the authors present the concept of a virtual engineering community (VEC) to support concurrent product development among geographically distributed partners, and describe the system architecture, deployed security mechanisms, the prototype developed within collaborative product definition management (cPDM), and the system demonstration using a real test.
Abstract: Product definition management (PDM) is a system that supports management of both engineering data and the product development process during the total product life cycle. The formation of virtual enterprises is becoming a growing trend, and vendors of PDM systems have recently developed a new generation of PDM systems called collaborative product definition management (cPDM). This paper presents the concept of a virtual engineering community (VEC) to support concurrent product development among geographically distributed partners. A previous case study has shown that collaborative engineering design may be modelled from a parameter perspective [1]. Effective implementation of the parameter approach raises the following problems: how to support data sharing, and how to secure resources that span the partner borders. This paper describes the system architecture, the deployed security mechanisms, the prototype developed within cPDM, and the system demonstration using a real test. The implementation of this architecture extends a common commercial PDM system (Axalan(TM)) and utilizes standard software to create a security framework for the involved resources. Collaboration infrastructure, shared team spaces and shared resources are essential to enable virtual teams to work together. Various organizational and technical challenges are implied. The outlined architecture features a federated data approach. These issues are discussed and potential perspectives in the area of collaboration engineering are identified.

Journal ArticleDOI
TL;DR: A holistic architecture and framework for developing adaptive and intelligent Web-based Education Systems (WBES) that take individual student learning requirements into account is presented.
Abstract: In this paper we present our contribution for carrying out adaptive and intelligent Web-based Education Systems (WBES) that take into account the individual student's learning requirements, by means of a holistic architecture and framework for developing WBES. In addition, three basic modules of the proposed WBES are outlined: an authoring tool, a Semantic Web-based evaluation, and a cognitive-maps-based student model. Furthermore, a Service-Oriented Architecture (SOA) is adopted to deploy reusable, accessible, durable and interoperable services. The approach enhances the Learning Technology Standard Architecture proposed by IEEE-LTSA (Learning Technology System Architecture) [IEEE 1484.1/D9 LTSA (2001). Draft standard for learning technology - learning technology systems architecture (LTSA). New York, USA. URL: http://ieee.ltsc.org/wg1], and the Sharable Content Object Reference Model (SCORM), proposed by Advanced Distributed Learning (ADL) [Advanced Distributed Learning Initiative (2004). URL: http://www.adlnet.org].

Journal ArticleDOI
01 Aug 2007
TL;DR: An integrated method is presented to help design and implement a Web-based Decision Support System (DSS) in a distributed environment, together with an instance of the layered software architecture and the 3CoFramework applied to the Web-based National Agricultural Decision Support System (NADSS).
Abstract: This paper presents an integrated method to help design and implement a Web-based Decision Support System (DSS) in a distributed environment. First, a layered software architecture is presented to assist in the design of a Web-based DSS. The layered software architecture can provide a formal and hierarchical view of the Web-based DSS at the design stage. Next, a component-based framework, the 3CoFramework, is presented to implement the Web-based DSS in a distributed environment. Finally, an instance of the layered software architecture and the 3CoFramework applied to the Web-based National Agricultural Decision Support System (NADSS) is presented.