
Showing papers in "IEEE Systems Journal in 2009"


Journal Article•DOI•
TL;DR: This paper defines resilience from different perspectives, provides a conceptual framework for understanding and analyzing disruptions, and presents principles and heuristics based on lessons learned that can be employed to build resilient systems.
Abstract: As systems continue to grow in size and complexity, they pose increasingly greater safety and risk management challenges. Today when complex systems fail and mishaps occur, there is an initial tendency to attribute the failure to human error. Yet research has repeatedly shown that more often than not it is not human error but organizational factors that set up adverse conditions that increase the likelihood of system failure. Resilience engineering is concerned with building systems that are able to circumvent accidents through anticipation, survive disruptions through recovery, and grow through adaptation. This paper defines resilience from different perspectives, provides a conceptual framework for understanding and analyzing disruptions, and presents principles and heuristics based on lessons learned that can be employed to build resilient systems.

477 citations


Journal Article•DOI•
TL;DR: The method includes resilience and interdependency measures, and focuses on the contribution of power delivery systems to post-event infrastructure recovery, to characterize the behavior of networked infrastructure for natural hazard events such as hurricanes and earthquakes.
Abstract: In this paper, we outline a method to characterize the behavior of networked infrastructure for natural hazard events such as hurricanes and earthquakes. Our method includes resilience and interdependency measures. Because most urban infrastructure systems rely on electric power to function properly, we focus on the contribution of power delivery systems to post-event infrastructure recovery. We provide a brief example of our calculations using power delivery and telecommunications data collected post-landfall for Hurricane Katrina. The model is an important component of a scheme to develop design strategies for increased resilience of urban infrastructure for extreme natural hazard scenarios.

418 citations


Journal Article•DOI•
TL;DR: A four-level hierarchical wireless body sensor network (WBSN) system is designed for biometrics and healthcare applications and achieves a reduction of 99.573% or 99.164% in power consumption compared to a system without the adaptive and encoding modules.
Abstract: A four-level hierarchical wireless body sensor network (WBSN) system is designed for biometrics and healthcare applications. It also separates pathways for communication and control. In order to improve performance, a communication cycle is constructed for synchronizing the WBSN system with the pipeline. A low-power adaptive process is a necessity for long-time healthcare monitoring. It includes a data encoder and an adaptive power conserving algorithm within each sensor node, along with an accurate control switch system for adaptive power control. The thermal sensor node consists of a micro control unit (MCU), a thermal bipolar junction transistor sensor, an analog-to-digital converter (ADC), a calibrator, a data encoder, a 2.4-GHz radio frequency transceiver, and an antenna. When detecting ten body-temperature signals or 240 electrocardiogram (ECG) signals per second, the power consumption is 106.3 μW or 220.4 μW, respectively. By switching circuits, a multi-sharing wireless protocol, and reducing transmission data with the data encoder, the system achieves a reduction of 99.573% or 99.164% in power consumption compared to a system without the adaptive and encoding modules. Compared with published research and industrial works, the proposed method's power consumption is 69.6% or 98% lower than that of thermal sensor nodes consisting only of a sensor and ADC (without MCU, 2.4-GHz transceiver, modulator, demodulator, and data encoder) or wireless ECG sensor nodes using Bluetooth, 2.4-GHz, or ZigBee wireless protocols.

148 citations


Journal Article•DOI•
Qi Hao, Fei Hu, Yang Xiao
TL;DR: The developed wireless distributed infrared sensor system can run as a standalone prisoner/patient monitoring system under any illumination conditions, as well as a complement for conventional video and audio human tracking and identification systems.
Abstract: This paper presents a wireless distributed pyroelectric sensor system for tracking and identifying multiple humans based on their body heat radiation. This study aims to make pyroelectric sensors a low-cost alternative to infrared video sensors in thermal gait biometric applications. In this system, the sensor field of view (FOV) is specifically modulated with Fresnel lens arrays for tracking or identification functionality, and the sensor deployment is chosen to facilitate data-object association. An Expectation-Maximization-Bayesian tracking scheme is proposed and implemented among slave, master, and host modules of a prototype system. Information fusion schemes are developed to improve the system's identification performance for both individuals and multiple subjects. The fusion of thermal gait biometric information measured by multiple nodes is tested at four levels: sample, feature, score, and decision. Experimentally, the prototype system is able to simultaneously track two individuals in both follow-up and crossover scenarios with average tracking errors of less than 0.5 m. The experimental results also demonstrate the system's potential to be a reliable biometric system for the verification/identification of a small group of human subjects. The developed wireless distributed infrared sensor system can run as a standalone prisoner/patient monitoring system under any illumination conditions, as well as a complement to conventional video and audio human tracking and identification systems.

142 citations


Journal Article•DOI•
TL;DR: This paper attempts to develop a resilience index for urban infrastructure using a belief function framework that can handle subjective, independent information and hierarchical data, all of which are characteristics of the inputs required for proper resilience index analyses.
Abstract: Most urban infrastructure systems are interdependent in various ways. A variety of qualitative explanations are presented in the literature to analyze and address resiliency and vulnerability. Unfortunately, most of these explanations do not provide an objective resilience index computation. This paper attempts to develop a resilience index for urban infrastructure using a belief function framework. The belief function framework can handle subjective, independent information and hierarchical data, all of which are characteristics of the inputs required for proper resilience index analyses. The steps of the analyses are presented using a prototype urban highway infrastructure network.

131 citations
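A belief-function framework of the kind described above typically combines mass assignments from independent evidence sources. A minimal sketch using Dempster's rule of combination over a two-hypothesis frame; the focal elements and mass values below are hypothetical illustrations, not inputs from the paper:

```python
def dempster_combine(m1: dict, m2: dict) -> dict:
    """Dempster's rule of combination for mass functions whose focal elements
    are frozensets (illustrative; the paper's hierarchy is not reproduced)."""
    combined = {}
    conflict = 0.0
    for a, w1 in m1.items():
        for b, w2 in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + w1 * w2
            else:
                conflict += w1 * w2  # mass assigned to contradictory evidence
    norm = 1.0 - conflict            # renormalize over non-conflicting mass
    return {k: v / norm for k, v in combined.items()}

R, N = frozenset({"resilient"}), frozenset({"not_resilient"})
theta = R | N                       # the full frame (total ignorance)
m1 = {R: 0.6, theta: 0.4}           # hypothetical evidence source 1
m2 = {R: 0.5, N: 0.2, theta: 0.3}   # hypothetical evidence source 2
m = dempster_combine(m1, m2)
print(round(m[R], 3))  # 0.773
```

Combining the two sources raises the mass committed to "resilient" above either source alone, which is the aggregation behavior a resilience index built on belief functions relies on.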


Journal Article•DOI•
TL;DR: The effects of Hurricane Katrina are discussed based on an on-site survey conducted in October 2005 and on public sources, which includes observations about power infrastructure damage in wire-line and wireless networks.
Abstract: This paper extends knowledge of disaster impact on the telecommunications power infrastructure by discussing the effects of Hurricane Katrina based on an on-site survey conducted in October 2005 and on public sources. It includes observations about power infrastructure damage in wire-line and wireless networks. In general, the impact on centralized network elements was more severe than on the distributed portion of the grids. The main cause of outage was lack of power due to fuel supply disruptions, flooding, and security issues. This work also describes the means used to restore telecommunications services and proposes ways to improve logistics, such as coordinating portable generator set deployment among different network operators and reducing genset fuel consumption by installing permanent photovoltaic systems at sites where long electric outages are likely. One long-term solution is the use of distributed generation. The paper also discusses the consequences for telecom power technology and practices since the storm.

118 citations


Journal Article•DOI•
TL;DR: A model to measure the base resiliency of this trans-oceanic telecommunication fiber-optics cable network is proposed, and the node-to-node and overall resiliency of the network is explored using existing data for demand, capacity, and flow information.
Abstract: Resilience is the ability of a system both to absorb shock and to recover rapidly from a disruption so that it can return to its original service delivery levels, or close to them. The trans-oceanic telecommunication fiber-optics cable network that serves as the backbone of the internet is a particularly critical infrastructure system that is vulnerable to both natural and man-made disasters. In this paper, we propose a model to measure the base resiliency of this network, and explore the node-to-node and overall resiliency of the network using existing data for demand, capacity, and flow information. The submarine cable system is represented by a network model to which hypothetical disruptions can be introduced. The base resiliency of the system is measured as the ratio of the value delivery of the system after a disruption to the value delivery of the system before the disruption. We further demonstrate how the resiliency of the trans-oceanic telecommunication cable infrastructure is enhanced through vulnerability reduction.

90 citations
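The base-resiliency ratio defined in the abstract above (post-disruption value delivery over pre-disruption value delivery) can be sketched directly; the traffic figures are illustrative, not from the paper's dataset:

```python
def base_resiliency(value_before: float, value_after: float) -> float:
    """Ratio of system value delivery after a disruption to before it.
    1.0 means full service; values near 0 indicate near-total loss."""
    if value_before <= 0:
        raise ValueError("pre-disruption value delivery must be positive")
    return value_after / value_before

# Hypothetical example: a cable cut reduces delivered traffic from 100 to 78 units.
print(base_resiliency(100.0, 78.0))  # 0.78
```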


Journal Article•DOI•
TL;DR: The approach has been used to study the resilience of logistic networks for aircraft maintenance and service and to guarantee the security and service quality of aeronautical systems; positive feedback from practitioners suggests that the approach has potential for application in practice.
Abstract: To analyze the resilience of logistic networks, a quantitative resilience evaluation approach is proposed. First, the resilience of each node in a network is evaluated by its redundant resources, distributed suppliers, and reachable deliveries. Then, an index of the total resilience of the logistic network is calculated as the weighted sum of the node resilience values. Based on this evaluation approach, the reasonable structure of logistic networks is analyzed. A model is then studied to optimize the allocation of resources with connections, distribution centers, or warehouses. Our approach has been used to study the resilience of logistic networks for aircraft maintenance and service and to guarantee the security and service quality of aeronautical systems. To monitor the operation of the logistic networks and enhance resilience, the architecture of a synthesized aircraft maintenance information management system and service logistic network, called the resilience information management system for aircraft service (RIMAS), is designed and under development. The research results have been provided to decision makers in the aviation management sector of the Chu Chiang Delta of China, and the positive feedback received suggests that the approach has potential for application in practice.

66 citations
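The weighted-sum index described above is straightforward to sketch; the node resilience values and weights below are hypothetical, and how each node value is derived from redundancy, suppliers, and deliveries is left abstract:

```python
def network_resilience(node_resilience: dict[str, float],
                       weights: dict[str, float]) -> float:
    """Total resilience index as the weighted sum of node resilience values."""
    return sum(weights[n] * r for n, r in node_resilience.items())

# Hypothetical three-node logistic network with importance weights summing to 1.
nodes = {"depot": 0.9, "hub": 0.7, "warehouse": 0.5}
w = {"depot": 0.5, "hub": 0.3, "warehouse": 0.2}
print(round(network_resilience(nodes, w), 2))  # 0.76
```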


Journal Article•DOI•
TL;DR: This paper examines the problem of remote authentication in online learning environments and explores the challenges and options of using biometric technology to defend against user impersonation attacks by certifying the presence of the user in front of the computer, at all times, and presents a biometrics-based client-server architecture for continuous user authentication in e-learning environments.
Abstract: With the rapid proliferation of online learning, students are increasingly demanding easy and flexible access to learning content at a time and location of their choosing. In these environments, remote users connecting via the public Internet or other unsecure networks must be authenticated prior to being granted access to sensitive content such as tests or personal/private records. Today, the overwhelming majority of online learning systems rely on weak authentication mechanisms to verify the identity of remote users only at the start of each session. One-time authentication using password, personal identification number (PIN), or even hardware tokens is clearly inadequate in that it cannot defend against insider attacks including remote user impersonation or illegal sharing or disclosure of these authentication secrets. As such, these methods are entirely unsuitable for circumstances where the outcome of an online assessment or a course of study is the granting of a formal degree, professional certification, or qualification or requalification for a particular skill or function. This paper examines the problem of remote authentication in online learning environments and explores the challenges and options of using biometric technology to defend against user impersonation attacks by certifying the presence of the user in front of the computer, at all times. It also leverages a 5-step process as the basis for a systems approach to ensuring that the proposed solution will meet the critical remote authentication assurance requirements. The process and systems approach employed here are generic, and can be exploited when introducing biometric-enabled authentication solutions to other applications and business domains. The paper concludes by presenting a biometrics-based client-server architecture for continuous user authentication in e-learning environments.

58 citations


Journal Article•DOI•
TL;DR: This paper presents the performance of the spectral minutiae fingerprint recognition system and shows a matching speed of 125,000 comparisons per second on a PC with an Intel Pentium D processor at 2.80 GHz and 1 GB of RAM.
Abstract: The spectral minutiae representation is a method to represent a minutiae set as a fixed-length feature vector, which is invariant to translation, and in which rotation and scaling become translations, so that they can be easily compensated for. These characteristics enable the combination of fingerprint recognition systems with template protection schemes that require a fixed-length feature vector as input. Based on the spectral minutiae features, this paper introduces two feature reduction algorithms: the Column Principal Component Analysis and the Line Discrete Fourier Transform feature reductions, which can efficiently compress the template size with a reduction rate of 94%. With reduced features, we can also achieve a fast minutiae-based matching algorithm. This paper presents the performance of the spectral minutiae fingerprint recognition system and shows a matching speed of 125,000 comparisons per second on a PC with an Intel Pentium D processor at 2.80 GHz and 1 GB of RAM. This fast operation renders our system suitable as a preselector for a large-scale fingerprint identification system, thus significantly reducing the time to perform matching, especially in systems operating at a geographical level (e.g., police patrolling) or in complex critical environments (e.g., airports).

44 citations
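Because spectral minutiae map each minutiae set to a fixed-length vector, matching reduces to dense vector comparisons that batch well, which is what makes comparison rates like those quoted above feasible. A minimal sketch using cosine similarity as the score (an illustrative choice, not the paper's exact matcher) on synthetic vectors:

```python
import numpy as np

def match_score(template: np.ndarray, query: np.ndarray) -> float:
    """Cosine similarity between two fixed-length feature vectors
    (illustrative score, not the paper's exact matching function)."""
    t = template / np.linalg.norm(template)
    q = query / np.linalg.norm(query)
    return float(t @ q)

rng = np.random.default_rng(0)
db = rng.standard_normal((1000, 128))            # 1000 synthetic enrolled templates
probe = db[42] + 0.1 * rng.standard_normal(128)  # noisy reading of template 42

# Fixed-length vectors let the whole database be scored in one matrix product.
unit_db = db / np.linalg.norm(db, axis=1, keepdims=True)
scores = unit_db @ (probe / np.linalg.norm(probe))
best = int(np.argmax(scores))
print(best)  # 42: the noisy probe still ranks its enrolled template first
```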


Journal Article•DOI•
TL;DR: A system of systems (SoS) in human health management that consists of image processing system and expert medical knowledge system described by fuzzy logic for health management and its practical applications are described.
Abstract: In this paper, we describe a human health management system scheme and its practical applications. Specifically, it focuses on health management, medical diagnosis, and surgical support system of systems engineering (SoSE). The application domains discussed here are broad and essential in health management and clinical practice. Firstly, we describe a system of systems (SoS) in human health management. Within it, a notion of health management is introduced and discussed from the viewpoint of SoS. Human health management is the first level of daily monitoring for a healthy human. Sensing and control technology during sleep is especially focused on because the quality and quantity of sleep have considerable impact on health. Secondly, an SoS in medical diagnostic imaging is discussed. This section introduces a clinical usage of a magnetic resonance imaging (MRI) scanner for the diagnosis of certain diseases. In it, there is a new system that consists of an image processing system and an expert medical knowledge system described by fuzzy logic. To demonstrate the effectiveness of the new system, applications to human brain magnetic resonance images and orthopedic kinematic analyses are introduced. Thirdly, we describe an SoS in a medical ultrasonic surgery support device. This section introduces a novel ultrasonic system for supporting crash bone orthopedic surgery.

Journal Article•DOI•
TL;DR: Results of experiments show the decentralized multiagent coordination scheme among SEPs outperformed the current practice of the firm in terms of reducing work-in-process and parts inventory.
Abstract: Radio frequency identification (RFID) technology adoption in business environments has seen strong growth in recent years. Adopting an appropriate RFID-based information system has become increasingly important for enterprises making complex and highly customized products. However, most firms still use conventional barcode and run-card systems to manage their manufacturing processes. These systems often require human intervention during the production process. As a result, traditional systems are not able to fulfill the growing demand for managing dynamic process flows and are not able to obtain real-time work-in-process (WIP) views in mass customization manufacturing. This paper proposes an agent-based distributed production control framework with UHF RFID technology to help firms adapt to such a dynamic and agile manufacturing environment. This paper reports the design and development of the framework and the application of UHF RFID technology in manufacturing and logistic control applications. The framework's RFID event processing agent model is implemented in a smart end-point (SEP) device. A SEP can manage RFID readers, wirelessly communicate with shop-floor machines, make local decisions, and coordinate with other SEPs. A case study of a bicycle manufacturing company demonstrates how the proposed framework could improve a firm's mass customization operations. Results of experiments show the decentralized multiagent coordination scheme among SEPs outperformed the current practice of the firm in terms of reducing work-in-process and parts inventory.

Journal Article•DOI•
TL;DR: A time and cost-constrained scheduling strategy that is able to deploy scientific and business workflow tasks (or other kinds of application tasks) on pools of resources selected with the aim of minimizing the overall execution time is presented.
Abstract: The necessity of identifying suitable computing resources to solve a scientific or engineering problem in a Grid environment requires more and more sophisticated resource management systems: 1) strategies and technologies should be able to master the complexity of modern large-scale networks and computing facilities, and 2) the convergence of Grid computing toward the service-oriented approach is fostering a new vision where economic aspects represent central issues to boost the adoption of computing as a utility. In this context, the design and execution of data- and compute-intensive applications are often simplified by the adoption of model-driven approaches based on workflows. The execution of Grid workflows can leverage meta-scheduling systems to automatically and transparently allocate tasks to resources that ensure the fulfillment of functional requirements and quality-of-service (QoS) constraints specified by the user. This paper presents a time- and cost-constrained scheduling strategy that, according to the data parallelism pattern, is able to deploy scientific and business workflow tasks (or other kinds of application tasks) on pools of resources selected with the aim of minimizing the overall execution time. The strategy was implemented as a plug-in in a matchmaker for Grid services, and its validity and accuracy were experimentally proved on a real testbed leveraging a framework for the deployment of data parallel tasks. The results show that the task deployment is effective and accurate and pave the way for using the Internet as a utility computing facility.
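A toy version of time-minimizing, cost-constrained pool selection might look like the following; the linear time and cost model, the pool parameters, and the `best_pool` helper are all assumptions for illustration, not the paper's strategy:

```python
def best_pool(pools, workload, budget):
    """Pick the resource pool that minimizes execution time for a divisible
    workload while respecting a cost budget. Hypothetical model:
    time = workload / aggregate speed, cost = time * price rate."""
    feasible = []
    for name, speed, rate in pools:
        time = workload / speed
        cost = time * rate
        if cost <= budget:          # QoS-style cost constraint
            feasible.append((time, name))
    if not feasible:
        raise ValueError("no pool satisfies the budget")
    return min(feasible)[1]         # fastest feasible pool

# Hypothetical pools: (name, aggregate speed, price per time unit).
pools = [("slow-cheap", 10.0, 1.0), ("fast-pricey", 50.0, 8.0), ("mid", 25.0, 3.0)]
print(best_pool(pools, workload=1000.0, budget=150.0))  # mid
```

With a tight budget the fastest pool is priced out and the mid-tier pool wins; relaxing the budget shifts the choice back to the fastest pool.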

Journal Article•DOI•
TL;DR: This paper focuses on a self-exclusion scenario (a special application of watch-list) of face recognition and proposes a novel design of a biometric encryption system deployed with a face recognition system under constrained conditions.
Abstract: This paper presents a biometric encryption system that addresses the privacy concern in the deployment of the face recognition technology in real-world systems. In particular, we focus on a self-exclusion scenario (a special application of watch-list) of face recognition and propose a novel design of a biometric encryption system deployed with a face recognition system under constrained conditions. From a system perspective, we investigate issues ranging from image preprocessing, feature extraction, to cryptography, error-correcting coding/decoding, key binding, and bit allocation. In simulation studies, the proposed biometric encryption system is tested on the CMU PIE face database. An important observation from the simulation results is that in the proposed system, the biometric encryption module tends to significantly reduce the false acceptance rate with a marginal increase in the false rejection rate.
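The key-binding step mentioned above can be illustrated with a simplified fuzzy-commitment-style sketch. Real systems, including the one described, add error-correcting codes so that noisy biometric readings still release the key; this toy version omits that and requires an exact match:

```python
import hashlib
import secrets

def bind_key(biometric_bits: bytes, key: bytes):
    """Bind a secret key to a biometric sample: store key XOR biometric as
    helper data, plus a hash of the key for verification (ECC omitted)."""
    assert len(biometric_bits) == len(key)
    helper = bytes(b ^ k for b, k in zip(biometric_bits, key))
    return helper, hashlib.sha256(key).digest()

def release_key(biometric_bits: bytes, helper: bytes, key_hash: bytes):
    """Recover the key only when the presented biometric matches exactly;
    with error-correcting codes, small mismatches would also succeed."""
    key = bytes(b ^ h for b, h in zip(biometric_bits, helper))
    return key if hashlib.sha256(key).digest() == key_hash else None

bio = b"\x12\x34\x56\x78"          # hypothetical 4-byte biometric feature string
key = secrets.token_bytes(4)
helper, tag = bind_key(bio, key)
print(release_key(bio, helper, tag) == key)                    # True
print(release_key(b"\x00\x00\x00\x00", helper, tag) is None)   # True
```

The helper data alone reveals neither the key nor the biometric, which is the privacy property biometric encryption targets.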

Journal Article•DOI•
TL;DR: This paper investigates interoperability issues in Grid resource management, focusing on the following approaches: extending existing resource brokers with multiple middleware support; or developing a new middleware component, a meta-broker that not only interfaces various brokers but also coordinates their simultaneous utilization.
Abstract: Since the management and beneficial utilization of highly dynamic grid resources cannot be handled by the users themselves, various grid resource management tools have been developed, supporting different grids. To ease the simultaneous utilization of different middleware systems, researchers need to revise current solutions. Grid Interoperability can be achieved at different levels of grid systems. In this paper, we investigate interoperability issues in Grid resource management, focusing on the following approaches: 1) extending existing resource brokers with multiple middleware support; 2) interfacing grid portals to different brokers and middleware; or 3) developing a new middleware component, a meta-broker that not only interfaces various brokers but also coordinates their simultaneous utilization. We show that all of these approaches contribute to enable Grid Interoperability, and conclude that meta-brokering is a significant step towards the final solution.

Journal Article•DOI•
TL;DR: The main idea of this paper is to develop an adequate computational model under which agents in the formation will perform to coordinate among each other.
Abstract: In this paper, a decentralized formation control is proposed which enables collision-free coordination and navigation of agents. We present a simple method to define the formation of multi-agents and the individual identities (IDs) of agents. Two decentralized coordination and navigation techniques are proposed for the formation of rovers. Agents decide their own behaviors onboard depending upon the motion initiative of the master agent of the formation. In these approaches, any agent can estimate the behavior of other agents in the formation. This reduces the dependency of an individual agent on other agents when taking decisions. These approaches also reduce the communication burden on the formation, where only the master agent broadcasts its motion status per sampled time. Any front agent can act as a master agent without affecting the formation in case of a fault in the initial master agent. The main idea of this paper is to develop an adequate computational model under which agents in the formation coordinate among each other. Assignments of IDs to agents are very useful in real-time applications. The proposed schemes are suitable for obstacle avoidance by the formation as a whole in an unknown environment. Agents are free from collision among each other during navigation. These schemes can be used for velocity as well as orientation alignment problems for a multi-agent rover network. They are tested with extensive simulations, and the responses of agents show satisfactory performance under different environmental conditions.
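The master-broadcast scheme above implies each follower can compute its own station-keeping target from the master's pose alone. A minimal sketch; the body-frame offset convention and the 2-D pose model are assumptions for illustration, not the paper's formation definition:

```python
import math

def follower_target(master_pos, master_heading, offset):
    """World-frame position a follower should hold: its formation offset
    (dx, dy), given in the master's body frame, rotated by the master's
    heading and added to the master's position."""
    dx, dy = offset
    cos_h, sin_h = math.cos(master_heading), math.sin(master_heading)
    return (master_pos[0] + dx * cos_h - dy * sin_h,
            master_pos[1] + dx * sin_h + dy * cos_h)

# Master at the origin heading along +x; follower holds station 2 m behind.
print(follower_target((0.0, 0.0), 0.0, (-2.0, 0.0)))  # (-2.0, 0.0)
```

Because every follower applies the same rule to the single broadcast pose, no follower-to-follower messages are needed, which matches the reduced communication burden claimed in the abstract.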

Journal Article•DOI•
TL;DR: It is concluded that hedging using options is a promising approach to improve resource allocation in environments where resources are allocated by using a commodity market mechanism.
Abstract: In Grid environments, where virtual organization resources are allocated to users using mechanisms analogous to market economies, strong price fluctuations can have an impact on the nontrivial quality of service expected by end users. In this paper, we investigate the effects of the use of option contracts on the quality of service offered by a broker-based Grid resource allocation model. Option contracts offer users the possibility to buy or sell Grid resources in the future for a strike price specified in a contract. By buying, borrowing, and selling option contracts using a hedge strategy, users can benefit from expected price changes. In this paper, we consider three hedge strategies: the butterfly spread, which profits from small changes; the straddle, which benefits from large price changes; and the call strategy, which benefits from soaring prices. Using our model based on an abstract Grid architecture, we find that the use of hedge strategies increases the ratio of successfully finished jobs to failed jobs. We show that the degree of success of hedge strategies changes when the number of contributed resources changes. By means of a model, we also show that the effect of the butterfly spread is mainly explained by the amount of contributed resources. The dynamics of the two other hedge strategies are best explained by observing the price behavior. We also find that by using hedge strategies users can increase the probability that a job will finish before the deadline. We conclude that hedging using options is a promising approach to improve resource allocation in environments where resources are allocated by using a commodity market mechanism.
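The three hedge strategies named above can be illustrated by their standard payoff functions at expiry; the strike and spot values are arbitrary illustrative numbers, not prices from the paper's simulations:

```python
def call_payoff(spot: float, strike: float) -> float:
    """Payoff of a call option at expiry."""
    return max(spot - strike, 0.0)

def straddle(spot: float, strike: float) -> float:
    """Straddle: long call plus long put at the same strike;
    profits from large price moves in either direction."""
    put = max(strike - spot, 0.0)
    return call_payoff(spot, strike) + put

def butterfly(spot: float, k_low: float, k_mid: float, k_high: float) -> float:
    """Butterfly spread: long calls at the wings, short two calls at the
    middle strike; pays most when the price stays near k_mid."""
    return (call_payoff(spot, k_low)
            - 2 * call_payoff(spot, k_mid)
            + call_payoff(spot, k_high))

# Hypothetical resource prices (arbitrary units).
print(straddle(spot=130.0, strike=100.0))    # 30.0: a big move pays
print(butterfly(100.0, 80.0, 100.0, 120.0))  # 20.0: price held near the middle
print(butterfly(120.0, 80.0, 100.0, 120.0))  # 0.0: payoff vanishes at the wing
```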

Journal Article•DOI•
TL;DR: This paper shows the GVE is an efficient and lightweight middleware for building grid infrastructures with virtual machines, and uses a real example to illustrate how to apply the GVE to build an e-Science infrastructure at runtime.
Abstract: Virtual machines offer various advantages such as easy configuration, management, development, and deployment of computing resources for cyberinfrastructures. Recent advances in employing virtual machines for Grid computing can help Grid communities to solve research issues, for example, quality-of-service (QoS) provision and computing environment customization. The heterogeneous virtualization implementations, however, bring challenges for employing virtual machines as computing resources to build Grid infrastructures. The work proposed in this paper focuses on building a Web service based virtual machine provider for Grid infrastructures. The Grid Virtualization Engine (GVE) creates an abstract layer between users and underlying virtualization technologies. It implements a scalable distributed architecture in a hierarchical flavor. The GVE Site Service provides Web service interfaces for users to operate virtual machines, thereafter building Grid infrastructures. The underlying GVE Agent Service talks with different virtualization products inside the computing center and provides virtual machine resources to the GVE Site Service. The GVE is designed and implemented with state-of-the-art distributed computing technologies: Web service and Grid standards. The GVE is evaluated with the CMS benchmark, a high-energy physics application from CERN. In addition to the GVE design and implementation, this paper also uses a real example to illustrate how to apply the GVE to build an e-Science infrastructure at runtime. By providing experiments, tests, and a use scenario, we show the GVE is an efficient and lightweight middleware for building Grid infrastructures with virtual machines.

Journal Article•DOI•
TL;DR: The experimental results in the NS-2 simulation environment demonstrate that the proposed approach improves caching performance in terms of data accessibility, query delay and query distance compared to the caching scheme that does not adopt the cooperative caching strategy.
Abstract: Several protocols have been proposed to improve data accessibility and reduce query delay in MANETs. Some of these proposals have adopted the cooperative caching scheme, allowing multiple mobile hosts within a neighborhood to cache and share data items in their local caches. Cross-layer optimization has not been fully exploited to further improve the performance of cooperative caching in these proposals. In this paper, we propose a cluster-based cooperative caching scheme. A cross-layer design approach is employed to further improve the performance of the cooperative caching and prefetching schemes. The cross-layer information is maintained in a separate data structure and is shared among network protocol layers. The experimental results in the NS-2 simulation environment demonstrate that the proposed approach improves caching performance in terms of data accessibility, query delay, and query distance compared to a caching scheme that does not adopt the cooperative caching strategy.
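The cooperative lookup order implied above (local cache, then neighboring hosts' caches, then the remote data source) can be sketched as follows; this is an illustrative skeleton, not the paper's cluster or cross-layer protocol:

```python
def cached_lookup(item, local_cache, neighbor_caches, fetch_from_server):
    """Cooperative cache lookup: try the local cache, then neighbors'
    caches, and only then the remote data source. Returns (value, source)."""
    if item in local_cache:
        return local_cache[item], "local"
    for cache in neighbor_caches:
        if item in cache:
            local_cache[item] = cache[item]  # keep the shared copy locally
            return cache[item], "neighbor"
    value = fetch_from_server(item)          # most expensive path
    local_cache[item] = value
    return value, "server"

local = {"a": 1}
neighbors = [{"b": 2}, {"c": 3}]
print(cached_lookup("b", local, neighbors, lambda k: ord(k)))  # (2, 'neighbor')
print(cached_lookup("z", local, neighbors, lambda k: ord(k)))  # (122, 'server')
```

Each neighbor hit avoids a multi-hop trip to the data source, which is where the gains in query delay and query distance come from.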

Journal Article•DOI•
TL;DR: In this paper, the performance of power line communication equipment (Ethernet-to-powerline adapters) from different vendors, based on different technologies and standards, is examined.
Abstract: In this paper, we examine the performance of power line communication equipment (Ethernet-to-powerline adapters) from different vendors, based on different technologies and standards. The scope is to investigate commercially available power line communications (PLC) equipment in its actual working environment under real conditions. Coexistence issues are studied, as well as the possible degradation of performance when powerline adapters from different manufacturers and technologies operate simultaneously in the powerline network under consideration. The influence of potential noise sources (AC adaptors, cell phone chargers), as well as plug-in cases that are not recommended by the manufacturers but are nonetheless convenient in domestic grids (power strips, extension cords), is also examined.

Journal Article•DOI•
TL;DR: An architecture framework consisting of two software architectural patterns and an auto-fusion process to guide the development of distributed and scalable systems to support improved data fusion and distribution is presented.
Abstract: This paper addresses the need to efficiently fuse data from multiple sources and effectively control and monitor the distribution of the data in a dynamic service-oriented architecture based command and control system of systems. We present an architecture framework consisting of two software architectural patterns and an auto-fusion process to guide the development of distributed and scalable systems to support improved data fusion and distribution. We demonstrate the technical feasibility of applying the patterns and process by prototyping an auto-fusion system.

Journal Article•DOI•
TL;DR: The concept of half-life of learning curves as a predictive measure of system performance, which is an intrinsic indicator of the system's resilience, is introduced as another perspective to the large body of literature on learning curves.
Abstract: Learning curves are used extensively in business, science, technology, engineering, and industry to predict system performance over time. Most of the early development and applications have been in the area of production engineering. Over the past several decades, there has been increasing interest in the behavior of learning curves. This paper introduces the concept of the half-life of learning curves as a predictive measure of system performance, which is an intrinsic indicator of a system's resilience. Half-life is the time it takes for a quantity to diminish to half of its original size through natural processes; its most common application is in the natural sciences. The longer the half-life of a substance, the more stable, and consequently the more resilient, it is. This approach adds another perspective to the large body of literature on learning curves. Deriving the half-life equations of learning curves can reveal more about the properties of the various curves. This paper presents half-life derivations for some of the classical learning curve models available in the literature.
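For the classical log-linear (Wright) model, the half-life point has a closed form: with cost C(x) = C1 * x^(-b), the production level at which cost falls to half of the first-unit cost C1 solves C1 * x^(-b) = C1/2, giving x = 2^(1/b). A minimal sketch of that calculation (our illustration of the idea, not the paper's derivations):

```python
import math

def wright_cost(x, c1, b):
    """Log-linear learning curve: cost of the x-th unit, C(x) = C1 * x**(-b)."""
    return c1 * x ** (-b)

def half_life_units(b):
    """Production count at which C(x) drops to C1/2: solve C1 * x**(-b) = C1/2."""
    return 2.0 ** (1.0 / b)

# An "80% learning curve" means cost falls to 80% with each doubling of output,
# so the learning exponent is b = -log2(0.80)
b = -math.log2(0.80)
x_half = half_life_units(b)  # roughly 8.6 units for an 80% curve
```

The slower the cost decays (small b), the larger the half-life in production units, which is the sense in which half-life measures the stability of the learning process.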

Journal Article•DOI•
TL;DR: The results of performance comparison of default self-scheduling used in DIANE with AWLB-based scheduling are presented, dynamic resource pool and resource selection mechanisms are evaluated, and dependencies of application performance on aggregate characteristics of selected resources and application profile are examined.
Abstract: This paper presents a hybrid resource management environment, operating at both the application and system levels, developed to minimize the execution time of parallel applications with divisible workloads on heterogeneous grid resources. The system is based on the adaptive workload balancing algorithm (AWLB) incorporated into the distributed analysis environment (DIANE) user-level scheduling (ULS) environment. The AWLB ensures optimal workload distribution based on the discovered application requirements and measured resource parameters. The ULS maintains the user-level resource pool, enables resource selection, and controls execution. We present the results of a performance comparison of the default self-scheduling used in DIANE with AWLB-based scheduling, evaluate the dynamic resource pool and resource selection mechanisms, and examine how application performance depends on the aggregate characteristics of the selected resources and the application profile.
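The core of such adaptive balancing can be illustrated with a simple proportional split: each worker receives a share of the divisible workload proportional to its measured throughput, so that all workers finish at roughly the same time. This is a hedged sketch of the general idea only; the actual AWLB also folds in the discovered application requirements (e.g., the communication-to-computation profile):

```python
def split_workload(total_units, throughputs):
    """Split a divisible workload so each worker's share is proportional
    to its measured throughput (units/s), equalizing finish times."""
    total_speed = sum(throughputs)
    ideal = [total_units * s / total_speed for s in throughputs]
    alloc = [int(x) for x in ideal]
    # Hand out the units lost to integer truncation, largest remainder first
    by_remainder = sorted(range(len(ideal)), key=lambda i: ideal[i] - alloc[i], reverse=True)
    for i in by_remainder[: total_units - sum(alloc)]:
        alloc[i] += 1
    return alloc

# Three heterogeneous grid nodes benchmarked at 1, 2, and 5 units/s
alloc = split_workload(100, [1.0, 2.0, 5.0])  # shares near 12.5, 25, 62.5; sums to 100
```

A self-scheduling baseline, by contrast, hands out fixed-size chunks on demand, which leaves fast nodes idle near the end of the run; the proportional split avoids that tail.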

Journal Article•DOI•
TL;DR: This paper presents a wireless unlicensed system that successfully coexists with the licensed systems in the same spectrum range and maximizes the unlicensed system capacity for the optimal spectrum and power allocations.
Abstract: Future generations of communication systems will benefit from cognitive radio technology, which significantly improves the efficient usage of the finite radio spectrum resource. In this paper we present a wireless unlicensed system that successfully coexists with the licensed systems in the same spectrum range. The proposed unlicensed system determines the level of signals and noise in each frequency band and properly adjusts the spectrum and power allocations subject to rate constraints. It employs orthogonal frequency-division multiplexing (OFDM) modulation and distributes each transmitted bit energy over all the bands using a novel concept of bit spectrum patterns. A distributed optimization problem is formulated as a dynamic selection of spectrum patterns and power allocations that are better suited to the available spectrum range without degrading the licensed system performance. Bit spectrum patterns are designed based on a normalized gradient approach and the transmission powers are minimized for a predefined quality of service (QoS). At the optimal equilibrium point, the receiver that employs a conventional correlation operation with the replica of the transmitted signal will have the same efficiency as the minimum mean-squared error (MMSE) receiver in the presence of noise and licensed systems. Additionally, the proposed approach maximizes the unlicensed system capacity for the optimal spectrum and power allocations. The performance of the proposed algorithm is verified through simulations.
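The paper's spectrum-pattern design is gradient-based, but the underlying trade-off — put power where the channel is clean, back off where licensed signals and noise are strong — can be illustrated with classical water-filling over OFDM bands (a simpler textbook baseline, not the proposed algorithm):

```python
def water_fill(noise_to_gain, total_power, iters=60):
    """Classical water-filling: p_k = max(0, mu - n_k/g_k), with the water
    level mu chosen by bisection so that sum(p) equals total_power."""
    lo, hi = 0.0, max(noise_to_gain) + total_power
    for _ in range(iters):
        mu = (lo + hi) / 2.0
        used = sum(max(0.0, mu - v) for v in noise_to_gain)
        if used > total_power:
            hi = mu    # water level too high, spending too much power
        else:
            lo = mu
    return [max(0.0, mu - v) for v in noise_to_gain]

# Band 2 is occupied by a strong licensed signal (high noise-to-gain ratio),
# so the unlicensed transmitter allocates it no power at all
powers = water_fill([0.1, 0.5, 2.0], total_power=1.0)  # approx. [0.7, 0.3, 0.0]
```

Leaving the occupied band empty is the coexistence behavior the abstract describes: the licensed system sees no added interference there, while the unlicensed system concentrates its power budget in the cleaner bands.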

Journal Article•DOI•
TL;DR: A novel approach to implementation of a multimodal system based on negotiating agents is proposed and evaluated, and the relative merits of this strategy are set out in comparison with other commonly adopted approaches to practical system realization.
Abstract: Many approaches to the implementation of biometrics-based identification systems are possible, and different configurations are likely to generate significantly different operational characteristics. The choice of implementation structure is therefore highly dependent on which performance criteria matter most in any particular task scenario. In this paper, we evaluate the merits of using multimodal structures, and we investigate how fundamentally different implementation strategies can increase the degree of choice available in achieving particular performance criteria. In particular, we illustrate the merits of an implementation based on a multiagent computational architecture as a means of achieving high performance when recognition accuracy is a principal criterion, and we set out the relative merits of this strategy in comparison with other commonly adopted approaches to practical system realization. Finally, we propose and evaluate a novel approach to implementing a multimodal system based on negotiating agents.

Journal Article•DOI•
TL;DR: An adaptive policy design approach based on a system-of-systems (SoS) perspective is presented, using a case of carbon emissions reduction in the residential sector, to structure the policy issue into interdependent relevant systems.
Abstract: This paper presents an adaptive policy design approach based on a system-of-systems (SoS) perspective. Using a case of carbon emissions reduction in the residential sector, the SoS perspective is used as a way to structure the policy issue into interdependent relevant systems. This representation of the system provides a framework to test a large number of hypotheses about the evolution of the system's performance using computational experiments. In particular, in a situation where the realized emission level misses the intermediate target, policies can be adapted to meet the policy target. Our approach shows the different policy designs that decision-makers can envision to influence the overall system performance.

Journal Article•DOI•
Li Lin1, Jinpeng Huai1•
TL;DR: QGrid, an adaptive resource management framework that integrates a trust factor into the economic-driven allocation process and introduces a simple isolation scheme to secure the Grid system by deterring malicious participants from joining, is proposed.
Abstract: Recent years have witnessed the rapid development of Grid computing over the Internet, which promises to empower highly desirable resource sharing and cooperation among different organizations. However, a challenging issue remains in Grid environments: malicious or selfish nodes consume precious resources without contributing, or even try to destroy the system intentionally. This can severely degrade system performance and limit the healthy development of Grid systems. To encourage resource sharing and combat malicious behavior, we propose QGrid, an adaptive resource management framework that integrates a trust factor into the economic-driven allocation process. Each provider allocates resources according to the bidding price and the trust value of a requester by controlling the corresponding thresholds on price and trust value. Incomplete information is a key obstacle for a provider in determining these two thresholds. We employ a Q-learning technique to resolve this issue, which can adapt to the dynamics of Grid environments. Furthermore, we introduce a simple isolation scheme to secure the Grid system by deterring malicious participants from joining it. A QGrid prototype has been successfully implemented on a real Grid test-bed, CROWN. Theoretical analysis and comprehensive experiments demonstrate the efficacy of QGrid.
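The threshold-learning idea can be sketched as a bandit-style Q-learning loop in which a provider tries (price, trust) threshold pairs and reinforces whichever pair yields the best payoff. Everything below — the discretized thresholds, the simulated requesters, and the reward shape — is our illustrative assumption, not QGrid's actual formulation:

```python
import random

PRICE_THRESHOLDS = [1.0, 2.0, 3.0]   # hypothetical candidate minimum bid prices
TRUST_THRESHOLDS = [0.3, 0.5, 0.7]   # hypothetical candidate minimum trust values
ACTIONS = [(p, t) for p in PRICE_THRESHOLDS for t in TRUST_THRESHOLDS]

def learn_thresholds(episodes=5000, alpha=0.1, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    q = {a: 0.0 for a in ACTIONS}    # single-state Q-table (a bandit simplification)
    for _ in range(episodes):
        # epsilon-greedy selection over threshold pairs
        action = rng.choice(ACTIONS) if rng.random() < epsilon else max(q, key=q.get)
        price_th, trust_th = action
        bid, trust = rng.uniform(0.0, 4.0), rng.random()  # simulated requester
        if bid >= price_th and trust >= trust_th:
            # earn the bid, but pay dearly if a low-trust requester slipped through
            reward = bid - (10.0 if trust < 0.4 else 0.0)
        else:
            reward = 0.0
        q[action] += alpha * (reward - q[action])         # incremental Q update
    return max(q, key=q.get)
```

The point of the sketch is the adaptation loop itself: the provider never needs a model of the requester population, it simply keeps running estimates of each threshold pair's payoff and exploits the best one, which is why the approach copes with the incomplete information the abstract highlights.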

Journal Article•DOI•
TL;DR: The legacy system challenge is discussed, and a technology prototype developed by the Naval Surface Warfare Center (NSWC) Dahlgren to realize the NCALS concept is described; the prototype works automatically, behind the scenes, to expose legacy data to the GIG and to make GIG data available to legacy systems.
Abstract: The Net-Centric Adapter for Legacy Systems (NCALS) is a software technology that makes legacy system data and services available in near real-time to the military Global Information Grid (GIG). The intent of NCALS is to lower the cost and risk, and to decrease the time required, for legacy systems to comply with U.S. Department of Defense (DoD) net-centric technical standards. Many different systems could use a common, configurable NCALS software component to comply with these standards. The benefit to the warfighter is improved interoperability with joint and coalition forces. NCALS enables legacy systems to move to a Service-Oriented Architecture (SOA) compatible with the GIG without requiring a costly and risky re-architecture of their legacy software. In addition, NCALS enables mission-critical systems such as weapon systems to segregate their real-time, mission-critical software from enterprise integration software. This maintains the safety and security required by such systems, while accommodating rapid changes in Internet-based enterprise technologies. This paper discusses the legacy system challenge and describes a technology prototype developed by the Naval Surface Warfare Center (NSWC) Dahlgren to realize the NCALS concept. The prototype works automatically, behind the scenes, to expose legacy data to the GIG and to make GIG data available to legacy systems.

Journal Article•DOI•
TL;DR: It is argued that it is not only appropriate, but necessary, to consider users (their behavior, cognition, perception, and anthropometrics) as a component of a biometric system.
Abstract: As system designers, do we sometimes forget where biometrics come from? The "usual" standard biometric system model includes the biometric presentation and a biometric sensor, but not users themselves. This model provides a shared vocabulary and abstraction for technologists and systems developers. However, advancing the systems science of biometric systems will require a shift toward a user-centered viewpoint. After all, without a user there can be no biometric. In this paper, we argue that it is not only appropriate, but necessary, to consider users (their behavior, cognition, perception, and anthropometrics) as a component of a biometric system.

Journal Article•DOI•
TL;DR: By complementarily fusing the robust stabilizability condition, the orthogonal-functions approach (OFA) and the hybrid Taguchi-genetic algorithm (HTGA), an integrative method is proposed in this paper to design the robust-stable and quadratic-optimal static output feedback controller such that the linear singular control system with structured parameter uncertainties is regular, impulse-free and asymptotically stable.
Abstract: By complementarily fusing the robust stabilizability condition, the orthogonal-functions approach (OFA), and the hybrid Taguchi-genetic algorithm (HTGA), an integrative method is proposed in this paper to design a robust-stable and quadratic-optimal static output feedback controller such that i) the linear singular control system with structured parameter uncertainties is regular, impulse-free, and asymptotically stable, and ii) a quadratic finite-horizon integral performance index for the linear nominal singular control system is minimized. Based on some essential properties of matrix measures, a new sufficient condition is presented for ensuring that the linear singular system with structured and quadratically-coupled structured parameter uncertainties is regular, impulse-free, and asymptotically stable. By using the OFA and the robust stabilizability condition, the dynamic-optimization problem for the robust-stable and quadratic-optimal static output feedback control design of the linear uncertain singular system is transformed into a static constrained-optimization problem represented by algebraic equations with the robust stabilizability condition as a constraint, greatly simplifying the design problem. The HTGA is then employed to solve this static constrained-optimization problem for the robust-stable and quadratic-optimal static output feedback controller of the linear uncertain singular control system. A design example involving a mass-spring-damper mechanical system with structured parameter uncertainties is given to demonstrate the applicability of the proposed integrative approach.
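In the standard notation for this problem class (a sketch of the setup only; the paper's exact symbols may differ), the design problem reads:

```latex
% Uncertain singular system under static output feedback u = Ky
E\dot{x}(t) = (A + \Delta A)\,x(t) + (B + \Delta B)\,u(t), \qquad
y(t) = C\,x(t), \qquad u(t) = K\,y(t),
% where E may be singular (rank E < n) and \Delta A, \Delta B denote the
% structured parameter uncertainties. The gain K must keep the closed loop
% regular, impulse-free, and asymptotically stable for all admissible
% uncertainties while minimizing, for the nominal system, the quadratic
% finite-horizon performance index
J = \int_{0}^{t_f} \bigl( x^{T}(t)\,Q\,x(t) + u^{T}(t)\,R\,u(t) \bigr)\,dt .
```

The singular matrix E is what distinguishes this from an ordinary state-space problem: regularity and impulse-freeness must be verified in addition to stability, which is why the abstract lists all three properties.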