
Showing papers on "Redundancy (engineering)" published in 2002


Book ChapterDOI
John R. Douceur1
07 Mar 2002
TL;DR: It is shown that, without a logically centralized authority, Sybil attacks are always possible except under extreme and unrealistic assumptions of resource parity and coordination among entities.
Abstract: Large-scale peer-to-peer systems face security threats from faulty or hostile remote computing elements. To resist these threats, many such systems employ redundancy. However, if a single faulty entity can present multiple identities, it can control a substantial fraction of the system, thereby undermining this redundancy. One approach to preventing these "Sybil attacks" is to have a trusted agency certify identities. This paper shows that, without a logically centralized authority, Sybil attacks are always possible except under extreme and unrealistic assumptions of resource parity and coordination among entities.

4,816 citations
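
The redundancy-dilution argument lends itself to a back-of-the-envelope check. The sketch below is our illustration, not the paper's model: assuming r replicas of an object are placed on identities chosen uniformly at random, each Sybil identity the attacker mints raises its share of the identity pool and hence the chance of capturing every replica.

```python
# Minimal sketch (our assumption: uniformly random replica placement).
def capture_probability(honest: int, sybils: int, r: int) -> float:
    """P(all r replicas land on attacker identities)."""
    share = sybils / (honest + sybils)
    return share ** r

for sybils in (1, 10, 100, 1000):
    p = capture_probability(honest=1000, sybils=sybils, r=3)
    print(f"{sybils:>4} Sybil identities -> P(object captured) = {p:.2e}")
```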


Patent
14 May 2002
TL;DR: In this patent, the data is divided into segments and each segment is distributed randomly on one of several storage units, independent of the storage units on which other segments of the media data are stored.

Abstract: Multiple applications request data from multiple storage units over a computer network. The data is divided into segments and each segment is distributed randomly on one of several storage units, independent of the storage units on which other segments of the media data are stored. Redundancy information corresponding to each segment also is distributed randomly over the storage units. The redundancy information for a segment may be a copy of the segment, such that each segment is stored on at least two storage units. The redundancy information also may be based on two or more segments. This random distribution of segments of data and corresponding redundancy information improves both scalability and reliability. When a storage unit fails, its load is distributed evenly over the remaining storage units and its lost data may be recovered because of the redundancy information. When an application requests a selected segment of data, the request may be processed by the storage unit with the shortest queue of requests. Random fluctuations in the load applied by multiple applications on multiple storage units are balanced nearly equally over all of the storage units. Small data files also may be stored on storage units that combine small files into larger segments of data using a log-structured file system. This combination of techniques results in a system which can transfer both multiple, independent high-bandwidth streams of data and small data files in a scalable manner in both directions between multiple applications and multiple storage units.

1,195 citations
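
A minimal sketch of the placement idea (hypothetical code; the only constraint assumed is that a mirror copy never shares a unit with its original segment):

```python
import random

def place_segments(num_segments: int, num_units: int, seed: int = 0) -> dict:
    """Map each segment to a random primary unit and a random distinct mirror."""
    rng = random.Random(seed)
    placement = {}
    for seg in range(num_segments):
        primary = rng.randrange(num_units)
        mirror = rng.randrange(num_units - 1)
        if mirror >= primary:          # skip over the primary's unit
            mirror += 1
        placement[seg] = (primary, mirror)
    return placement

placement = place_segments(num_segments=12, num_units=4)
# After any single unit failure, every segment still has a surviving copy,
# and the failed unit's read load spreads over the remaining units.
for primary, mirror in placement.values():
    assert primary != mirror
```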


Proceedings ArticleDOI
28 Sep 2002
TL;DR: A node-scheduling scheme that reduces overall system energy consumption, and thereby increases system lifetime, by turning off redundant nodes, while guaranteeing that the original sensing coverage is maintained.

Abstract: In wireless sensor networks that consist of a large number of low-power, short-lived, unreliable sensors, one of the main design challenges is to obtain long system lifetime, as well as maintain sufficient sensing coverage and reliability. In this paper, we propose a node-scheduling scheme which can reduce overall system energy consumption, and thereby increase system lifetime, by turning off some redundant nodes. Our coverage-based off-duty eligibility rule and backoff-based node-scheduling scheme guarantee that the original sensing coverage is maintained after redundant nodes are turned off. We implement our proposed scheme in NS-2 as an extension of the LEACH protocol. We compare the energy consumption of LEACH with and without the extension and analyze the effectiveness of our scheme in terms of energy saving. Simulation results show that our scheme can preserve the system coverage to the maximum extent. In addition, after the node-scheduling scheme turns off some nodes, a certain level of redundancy is still guaranteed, which we believe can provide sufficient sensing reliability in many applications.

1,179 citations
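
The eligibility test behind such schemes can be sketched in a few lines. This is a simplified stand-in, not the paper's rule (which reasons about sponsored sectors rather than sampled points): a node may go off-duty only if every point of its sensing disk is also covered by an active neighbor.

```python
import math
import random

SENSING_RANGE = 10.0   # assumed, in the same units as node coordinates

def covered(point, nodes):
    return any(math.dist(point, n) <= SENSING_RANGE for n in nodes)

def off_duty_eligible(node, neighbors, samples=200, seed=1):
    """Monte Carlo check that the node's sensing disk is fully covered."""
    rng = random.Random(seed)
    for _ in range(samples):
        r = SENSING_RANGE * math.sqrt(rng.random())   # uniform over the disk
        theta = 2 * math.pi * rng.random()
        p = (node[0] + r * math.cos(theta), node[1] + r * math.sin(theta))
        if not covered(p, neighbors):
            return False
    return True

print(off_duty_eligible((0, 0), [(-4, 0), (4, 0), (0, 4), (0, -4)]))
```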


Book
07 Nov 2002
TL;DR: In this book, the authors propose models for system reliability, using fault tree analysis (FTA) to evaluate the performance of one- and two-stage systems with different types of components.
Abstract (table of contents):
Preface. Acknowledgments.
1 Introduction: 1.1 Needs for Reliability Modeling; 1.2 Optimal Design.
2 Reliability Mathematics: 2.1 Probability and Distributions; 2.2 Reliability Concepts; 2.3 Commonly Used Lifetime Distributions; 2.4 Stochastic Processes; 2.5 Complex System Reliability Assessment Using Fault Tree Analysis.
3 Complexity Analysis: 3.1 Orders of Magnitude and Growth; 3.2 Evaluation of Summations; 3.3 Bounding Summations; 3.4 Recurrence Relations; 3.5 Summary.
4 Fundamental System Reliability Models: 4.1 Reliability Block Diagram; 4.2 Structure Functions; 4.3 Coherent Systems; 4.4 Minimal Paths and Minimal Cuts; 4.5 Logic Functions; 4.6 Modules within a Coherent System; 4.7 Measures of Performance; 4.8 One-Component System; 4.9 Series System Model; 4.10 Parallel System Model; 4.11 Parallel-Series System Model; 4.12 Series-Parallel System Model; 4.13 Standby System Model.
5 General Methods for System Reliability Evaluation: 5.1 Parallel and Series Reductions; 5.2 Pivotal Decomposition; 5.3 Generation of Minimal Paths and Minimal Cuts; 5.4 Inclusion-Exclusion Method; 5.5 Sum-of-Disjoint-Products Method; 5.6 Markov Chain Imbeddable Structures; 5.7 Delta-Star and Star-Delta Transformations; 5.8 Bounds on System Reliability.
6 General Methodology for System Design: 6.1 Redundancy in System Design; 6.2 Measures of Component Importance; 6.3 Majorization and Its Application in Reliability; 6.4 Reliability Importance in Optimal Design; 6.5 Pairwise Rearrangement in Optimal Design; 6.6 Optimal Arrangement for Series and Parallel Systems; 6.7 Optimal Arrangement for Series-Parallel Systems; 6.8 Optimal Arrangement for Parallel-Series Systems; 6.9 Two-Stage Systems; 6.10 Summary.
7 The k-out-of-n System Model: 7.1 System Reliability Evaluation; 7.2 Relationship between k-out-of-n G and F Systems; 7.3 Nonrepairable k-out-of-n Systems; 7.4 Repairable k-out-of-n Systems; 7.5 Weighted k-out-of-n:G Systems.
8 Design of k-out-of-n Systems: 8.1 Properties of k-out-of-n Systems; 8.2 Optimal Design of k-out-of-n Systems; 8.3 Fault Coverage; 8.4 Common-Cause Failures; 8.5 Dual Failure Modes; 8.6 Other Issues.
9 Consecutive-k-out-of-n Systems: 9.1 System Reliability Evaluation; 9.2 Optimal System Design; 9.3 Consecutive-k-out-of-n:G Systems; 9.4 System Lifetime Distribution; 9.5 Summary.
10 Multidimensional Consecutive-k-out-of-n Systems: 10.1 System Reliability Evaluation; 10.2 System Logic Functions; 10.3 Optimal System Design; 10.4 Summary.
11 Other k-out-of-n and Consecutive-k-out-of-n Models: 11.1 The s-Stage k-out-of-n Systems; 11.2 Redundant Consecutive-k-out-of-n Systems; 11.3 Linear and Circular m-Consecutive-k-out-of-n Model; 11.4 The k-within-Consecutive-m-out-of-n Systems; 11.5 Series Consecutive-k-out-of-n Systems; 11.6 Combined k-out-of-n:F and Consecutive-kc-out-of-n:F System; 11.7 Combined k-out-of-mn:F and Linear (r, s)/(m, n):F System; 11.8 Combined k-out-of-mn:F, One-Dimensional Con/kc/n:F, and Two-Dimensional Linear (r, s)/(m, n):F Model; 11.9 Application of Combined k-out-of-n and Consecutive-k-out-of-n Systems; 11.10 Consecutively Connected Systems; 11.11 Weighted Consecutive-k-out-of-n Systems.
12 Multistate System Models: 12.1 Consecutively Connected Systems with Binary System State and Multistate Components; 12.2 Two-Way Consecutively Connected Systems; 12.3 Key Concepts in Multistate Reliability Theory; 12.4 Special Multistate Systems and Their Performance Evaluation; 12.5 General Multistate Systems and Their Performance Evaluation; 12.6 Summary.
Appendix: Laplace Transform. References. Bibliography. Index.

678 citations
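
As a taste of the book's central model, the reliability of a k-out-of-n:G system with i.i.d. components has the standard closed form R = sum over i = k..n of C(n,i) p^i (1-p)^(n-i); the snippet below (our code) evaluates it and its series/parallel special cases.

```python
from math import comb

def k_out_of_n_reliability(k: int, n: int, p: float) -> float:
    """System works iff at least k of n i.i.d. components (reliability p) work."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

print(k_out_of_n_reliability(3, 3, 0.9))   # series (3-out-of-3):   0.729
print(k_out_of_n_reliability(1, 3, 0.9))   # parallel (1-out-of-3): 0.999
print(k_out_of_n_reliability(2, 3, 0.9))   # TMR-style 2-out-of-3:  0.972
```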


Proceedings ArticleDOI
07 Nov 2002
TL;DR: In ASCENT, each node assesses its connectivity and adapts its participation in the multi-hop network topology based on the measured operating region, aiming to establish a topology that provides communication and sensing coverage under stringent energy constraints.
Abstract: Advances in micro-sensor and radio technology will enable small but smart sensors to be deployed for a wide range of environmental monitoring applications. The low per-node cost will allow these wireless networks of sensors and actuators to be densely distributed. The nodes in these dense networks will coordinate to perform the distributed sensing tasks. Moreover, as described in this paper, the nodes can also coordinate to exploit the redundancy provided by high density, so as to extend overall system lifetime. The large number of nodes deployed in these systems will preclude manual configuration, and the environmental dynamics will preclude design-time pre-configuration. Therefore, nodes will have to self-configure to establish a topology that provides communication and sensing coverage under stringent energy constraints. In ASCENT, each node assesses its connectivity and adapts its participation in the multi-hop network topology based on the measured operating region. This paper motivates and describes the ASCENT algorithm and presents simulation and experimental measurements.

539 citations
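
A highly simplified rendering of the adaptive-participation idea (our reading of the abstract: the state names are ASCENT's, but the thresholds, their values, and the single-rule transition logic are assumptions; the real protocol also has test and sleep states):

```python
NEIGHBOR_THRESHOLD = 4   # assumed parameter
LOSS_THRESHOLD = 0.2     # assumed parameter

def next_state(state: str, active_neighbors: int, loss_rate: float) -> str:
    if state == "passive":
        # Poor connectivity and high loss: this node's participation helps.
        if active_neighbors < NEIGHBOR_THRESHOLD and loss_rate > LOSS_THRESHOLD:
            return "active"
        return "passive"
    return state             # active nodes keep routing until they deplete

print(next_state("passive", active_neighbors=2, loss_rate=0.3))  # -> active
print(next_state("passive", active_neighbors=6, loss_rate=0.1))  # -> passive
```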


Proceedings ArticleDOI
11 Aug 2002
TL;DR: It is argued that relevance and redundancy should each be modelled explicitly and separately, and a set of five redundancy measures is proposed and evaluated in experiments with and without redundancy thresholds.

Abstract: This paper addresses the problem of extending an adaptive information filtering system to make decisions about the novelty and redundancy of relevant documents. It argues that relevance and redundancy should each be modelled explicitly and separately. A set of five redundancy measures is proposed and evaluated in experiments with and without redundancy thresholds. The experimental results demonstrate that the cosine similarity metric and a redundancy measure based on a mixture of language models are both effective for identifying redundant documents.

478 citations
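
The cosine-similarity measure, one of the five evaluated, is easy to sketch (hypothetical code; the 0.8 threshold is an arbitrary assumption):

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def is_redundant(doc: str, delivered: list, threshold: float = 0.8) -> bool:
    """Redundant if too similar to any previously delivered document."""
    vec = Counter(doc.lower().split())
    return any(cosine(vec, Counter(d.lower().split())) >= threshold
               for d in delivered)

seen = ["the fed raised interest rates today"]
print(is_redundant("the fed raised rates today", seen))          # True
print(is_redundant("earthquake strikes northern chile", seen))   # False
```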


Journal ArticleDOI
TL;DR: It is shown that in the case of text classification, term-frequency transformations have a larger impact on the performance of SVM than the kernel itself.
Abstract: The choice of the kernel function is crucial to most applications of support vector machines. In this paper, however, we show that in the case of text classification, term-frequency transformations have a larger impact on the performance of SVM than the kernel itself. We discuss the role of importance weights (e.g. document frequency and redundancy), which is not yet fully understood in the light of model complexity and calculation cost, and we show that time-consuming lemmatization or stemming can be avoided even when classifying a highly inflectional language like German.

473 citations
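
The claim is easy to probe with off-the-shelf tools. This sketch is ours, not the paper's setup (which classified German-language text): a linear SVM is held fixed while only the term-frequency transformation changes.

```python
# Assumes scikit-learn is installed; downloads 20newsgroups on first run.
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

data = fetch_20newsgroups(subset="train", categories=["sci.space", "rec.autos"])

for name, vectorizer in [
    ("raw term counts", CountVectorizer()),
    ("log-tf + idf   ", TfidfVectorizer(sublinear_tf=True)),
]:
    pipeline = make_pipeline(vectorizer, LinearSVC())
    score = cross_val_score(pipeline, data.data, data.target, cv=3).mean()
    print(f"{name}: mean accuracy {score:.3f}")
```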


Patent
18 Apr 2002
TL;DR: In this paper, a general-purpose, low-cost system provides comprehensive physiological data collection, with extensive data object oriented programmability and configurability for a variety of medical as well as other analog data collection applications.
Abstract: A general-purpose, low-cost system provides comprehensive physiological data collection, with extensive data object oriented programmability and configurability for a variety of medical as well as other analog data collection applications. In a preferred embodiment, programmable input signal acquisition and processing circuits are used so that virtually any analog and/or medical signal can be digitized from a common point of contact to a plurality of sensors. A general-purpose data routing and encapsulation architecture supports input tagging and standardized routing through modern packet switch networks, including the Internet; from one of multiple points of origin or patients, to one or multiple points of data analysis for physician review. The preferred architecture further supports multiple-site data buffering for redundancy and reliability, and real-time data collection, routing, and viewing (or slower than real-time processes when communications infrastructure is slower than the data collection rate). Routing and viewing stations allow for the insertion of automated analysis routines to aid in data encoding, analysis, viewing, and diagnosis.

458 citations


Journal ArticleDOI
TL;DR: In this paper, a review of electronic driver assisting systems such as ABS, traction control, electronic stability control, and brake assistant is presented, along with fault-detection methods for use in low-cost components.
Abstract: The article begins with a review of electronic driver assisting systems such as ABS, traction control, electronic stability control, and brake assistant. We then review drive-by-wire systems with and without mechanical backup. Drive-by-wire systems consist of an operating unit with an electrical output, haptic feedback to the driver, bus systems, microcomputers, power electronics, and electrical actuators. For their design, safety integrity methods such as reliability, fault tree and hazard analysis, and risk classification are required. Different fault-tolerance principles with various forms of redundancy are considered, resulting in fail-operational, fail-silent, and fail-safe systems. Fault-detection methods are discussed for use in low-cost components, followed by a review of principles for fault-tolerant design of sensors, actuators, and communication. We evaluate these methods and principles and show how they can be applied to low-cost automotive components and drive-by-wire systems. A brake-by-wire system with electronic pedal and electric brakes is then considered in more detail, showing the design of the components and the overall architecture. Finally, we present conclusions and an outlook for further development of drive-by-wire systems.

390 citations
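
Two of the named fault-tolerance patterns can be caricatured in a few lines (a toy formulation of ours, not the article's): duplex comparison yields a fail-silent unit, while triplex voting yields a fail-operational one.

```python
def fail_silent(a: float, b: float, tol: float = 1e-3):
    """Duplex: on disagreement the unit withholds its output (goes silent)."""
    return (a + b) / 2 if abs(a - b) <= tol else None

def fail_operational(a: float, b: float, c: float, tol: float = 1e-3):
    """Triplex: any two agreeing channels outvote a single faulty one."""
    for x, y in ((a, b), (a, c), (b, c)):
        if abs(x - y) <= tol:
            return (x + y) / 2
    return None    # double fault: no majority remains

print(fail_silent(1.00, 1.00))           # 1.0
print(fail_silent(1.00, 3.00))           # None -> unit goes silent
print(fail_operational(1.0, 1.0, 7.0))   # 1.0 -> faulty channel outvoted
```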


Journal ArticleDOI
TL;DR: This paper analyzes some deficiencies of the dominant pruning algorithm and proposes two better approximation algorithms, total dominant pruning and partial dominant pruning, which utilize 2-hop neighborhood information more effectively to reduce redundant transmissions.
Abstract: Unlike in a wired network, a packet transmitted by a node in an ad hoc wireless network can reach all neighbors. Therefore, the total number of transmissions (forward nodes) is generally used as the cost criterion for broadcasting. The problem of finding the minimum number of forward nodes is NP-complete. Among various approximation approaches, dominant pruning (Lim and Kim 2001) utilizes 2-hop neighborhood information to reduce redundant transmissions. In this paper, we analyze some deficiencies of the dominant pruning algorithm and propose two better approximation algorithms: total dominant pruning and partial dominant pruning. Both algorithms utilize 2-hop neighborhood information more effectively to reduce redundant transmissions. Simulation results of applying these two algorithms show performance improvements compared with the original dominant pruning. In addition, two termination criteria are discussed and compared through simulation under both the static and dynamic environments.

352 citations
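
The greedy core these pruning variants share looks roughly like this (a simplified sketch; total and partial dominant pruning further shrink the set to cover using the sender's and the previous forwarder's neighbor lists):

```python
def choose_forwarders(adj: dict, sender: str) -> set:
    """Greedy set cover: pick 1-hop neighbors until all 2-hop nodes are covered."""
    one_hop = adj[sender]
    to_cover = set().union(*(adj[v] for v in one_hop)) - one_hop - {sender}
    forwarders, uncovered = set(), set(to_cover)
    while uncovered:
        best = max(one_hop - forwarders, key=lambda v: len(adj[v] & uncovered))
        forwarders.add(best)
        uncovered -= adj[best]
    return forwarders

adj = {
    "s": {"a", "b"}, "a": {"s", "x", "y"}, "b": {"s", "y", "z"},
    "x": {"a"}, "y": {"a", "b"}, "z": {"b"},
}
print(choose_forwarders(adj, "s"))   # {'a', 'b'}: together they cover x, y, z
```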


Journal ArticleDOI
TL;DR: Ongoing developments include the further improvement of functional and automatic annotation in the databases, including evidence attribution, with particular emphasis on the human, archaeal and bacterial proteomes, and the provision of additional resources, such as the International Protein Index (IPI) and XML format of SWISS-PROT and TrEMBL, to the user community.
Abstract: SWISS-PROT is a curated protein sequence database which strives to provide a high level of annotation (such as the description of the function of a protein, its domain structure, post-translational modifications, variants, etc.), a minimal level of redundancy and a high level of integration with other databases. Together with its automatically annotated supplement TrEMBL, it provides a comprehensive and high-quality view of the current state of knowledge about proteins. Ongoing developments include the further improvement of functional and automatic annotation in the databases including evidence attribution with particular emphasis on the human, archaeal and bacterial proteomes and the provision of additional resources such as the International Protein Index (IPI) and XML format of SWISS-PROT and TrEMBL to the user community.

Journal ArticleDOI
TL;DR: In this paper, the electrical characteristics of array interconnection schemes are investigated using simulation models to find a configuration that is comparatively less susceptible to shadow problems and to power degradation resulting from the aging of solar cells.

Abstract: In this paper, the electrical characteristics of array interconnection schemes are investigated using simulation models to find a configuration that is comparatively less susceptible to shadow problems and to power degradation resulting from the aging of solar cells. Three configurations have been selected for comparison: (i) the simple series-parallel (SP) array, which has zero interconnection redundancy; (ii) the total-cross-tied (TCT) array, which is obtained from the simple SP array by connecting ties across each row of junctions and may be characterized as the scheme with the highest possible redundancy; and (iii) the bridge-linked (BL) array, in which all cells are interconnected in bridge-rectifier fashion. Explicit computer simulations of the energy yield and current-voltage distributions in the array are presented, which seem to favor the cross-tied configurations (TCT and BL) in coping with the effects of mismatch losses.
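
A drastically simplified mismatch model (ours: constant-current cells, ignoring voltage effects, diodes, and aging) already hints at why cross ties help:

```python
def sp_current(array):
    """Series-parallel: each column string is limited by its worst cell."""
    strings = [min(row[c] for row in array) for c in range(len(array[0]))]
    return sum(strings)                 # parallel strings add up

def tct_current(array):
    """Total-cross-tied: each row's cells share current; rows are in series."""
    return min(sum(row) for row in array)

shaded = [[0.2, 1.0, 1.0],
          [1.0, 0.2, 1.0],
          [1.0, 1.0, 1.0]]              # two shaded cells, different rows/columns
print(sp_current(shaded))               # 1.4 -- both affected strings collapse
print(tct_current(shaded))              # 2.2 -- ties confine each loss to its row
```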

Proceedings ArticleDOI
03 Jun 2002
TL;DR: The Dwarf structure and the Dwarf cube construction algorithm are described, and comparisons show that Dwarfs by far outperform previously published techniques on all counts: storage space, creation time, query response time, and updates of cubes.

Abstract: Dwarf is a highly compressed structure for computing, storing, and querying data cubes. Dwarf identifies prefix and suffix structural redundancies and factors them out by coalescing their store. Prefix redundancy is high on dense areas of cubes, but suffix redundancy is significantly higher for sparse areas. Putting the two together fuses the exponential sizes of high-dimensional full cubes into a dramatically condensed data structure. The elimination of suffix redundancy yields an equally dramatic reduction in the computation of the cube, because recomputation of the redundant suffixes is avoided. This effect is multiplied in the presence of correlation amongst attributes in the cube. A petabyte 25-dimensional cube was shrunk this way to a 2.3GB Dwarf cube in less than 20 minutes, a 1:400000 storage reduction ratio. Still, Dwarf provides 100% precision on cube queries and is a self-sufficient structure which requires no access to the fact table. What makes Dwarf practical is the automatic discovery, in a single pass over the fact table, of the prefix and suffix redundancies without user involvement or knowledge of the value distributions. This paper describes the Dwarf structure and the Dwarf cube construction algorithm. Further optimizations are then introduced for improving clustering and query performance. Experiments with the current implementation include comparisons on detailed measurements with real and synthetic datasets against previously published techniques. The comparisons show that Dwarfs by far outperform these techniques on all counts: storage space, creation time, query response time, and updates of cubes.
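
Suffix coalescing in miniature (a toy illustration, far simpler than the Dwarf structure itself): hash-consing identical subtrees of a trie built over the fact tuples stores each repeated suffix exactly once.

```python
def build(tuples, depth=0, cache=None):
    """Trie over the fact tuples; equal subtrees are shared via the cache."""
    if cache is None:
        cache = {}
    if not tuples or depth == len(tuples[0]):
        return ("leaf",)
    children = {}
    for value in sorted({t[depth] for t in tuples}):
        subset = [t for t in tuples if t[depth] == value]
        children[value] = build(subset, depth + 1, cache)
    node = ("node", tuple(sorted(children.items())))
    return cache.setdefault(node, node)    # coalesce identical subtrees

facts = [("store1", "jan", "tv"), ("store2", "jan", "tv"), ("store3", "jan", "tv")]
root = build(facts)
subtrees = [child for _, child in root[1]]
print(all(s is subtrees[0] for s in subtrees))  # True: one shared ("jan","tv") suffix
```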

Journal ArticleDOI
01 Dec 2002
TL;DR: The stochastic Markov nature at the heart of the system is discovered and studied, leading to a comprehensive fault-tolerance theory that might provide a system solution for ultra-large-scale integration of highly unreliable nanometer-scale devices.

Abstract: The shrinking of electronic devices will inevitably introduce a growing number of defects and even make these devices more sensitive to external influences. It is, therefore, likely that the emerging nanometer-scale devices will eventually suffer from more errors than classical silicon devices in large-scale integrated circuits. In order to make systems based on nanometer-scale devices reliable, the design of fault-tolerant architectures will be necessary. Initiated by von Neumann, the NAND multiplexing technique, based on a massive duplication of imperfect devices and randomized imperfect interconnects, had been studied in the past using an extremely high degree of redundancy. In this paper, NAND multiplexing is extended to a rather low degree of redundancy, and the stochastic Markov nature at the heart of the system is discovered and studied, leading to a comprehensive fault-tolerance theory. A system architecture based on NAND multiplexing is investigated by studying the problem of random background charges in single-electron tunneling (SET) circuits. It might be a system solution for ultra-large-scale integration of highly unreliable nanometer-scale devices.
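
A Monte Carlo caricature of the construction (our simplified model: bundles of N wires, a random pairing unit, and NAND gates that each invert their output with probability eps):

```python
import random

def nand_stage(bundle_a, bundle_b, eps, rng):
    rng.shuffle(bundle_b)                 # the randomizing permutation unit
    out = []
    for a, b in zip(bundle_a, bundle_b):
        v = not (a and b)
        if rng.random() < eps:            # faulty gate: erroneous inversion
            v = not v
        out.append(v)
    return out

rng = random.Random(0)
N, eps = 1000, 0.005
bundle = [True] * N                       # logical "1" carried by the bundle
for _ in range(4):                        # NAND of a bundle with itself = NOT
    bundle = nand_stage(bundle, bundle[:], eps, rng)
print(sum(bundle) / N)                    # fraction still carrying the majority value
```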

Journal ArticleDOI
TL;DR: In this article, the authors examine and compare four fault-tolerant techniques: R-fold multiple redundancy, cascaded triple modular redundancy, von Neumann's multiplexing method, and a reconfigurable computer technique.
Abstract: The proposed nanometre-sized electronic devices are generally expected to show an increased probability of errors both in manufacturing and in service. Hence, there is a need to use fault-tolerant techniques in order to make reliable information processing systems out of those devices. Here we examine and compare four fault-tolerant techniques: R-fold multiple redundancy; cascaded triple modular redundancy; von Neumann's multiplexing method; and a reconfigurable computer technique. It is shown that the reconfiguration technique is the most effective technique, able to cope with manufacturing defect rates of the order of 0.01-0.1, but the technique requires enormous amounts of redundancy, of the order of 10^3-10^5. However, in the case of transient errors, multiple modular redundancy and multiplexing are the only feasible options.

Patent
17 Sep 2002
TL;DR: A digital watermark system embeds auxiliary signals in multiple media types, including audio, still and moving images, and physical objects; as discussed by the authors, these signals include multiple components that can perform differing functions and that are embedded with different levels of redundancy, potentially within different transform domains of the host signal.
Abstract: A digital watermark system embeds auxiliary signals in multimedia types, including audio, still and moving images, and physical objects. These auxiliary signals, referred to as digital watermarks, include multiple components that can perform differing functions and that are embedded with different levels of redundancy, potentially within different transform domains of the host signal.
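
To see what redundant embedding buys, here is a generic repetition-coding toy (emphatically not the patented method; the positions, strength, and key are all made up): each payload bit is added at many pseudo-random samples and recovered by majority vote.

```python
import random

def embed(signal, bits, redundancy, strength=1.0, key=42):
    """Add +/-strength at pseudo-random positions, `redundancy` times per bit."""
    rng = random.Random(key)
    marked = signal[:]
    for bit in bits:
        for _ in range(redundancy):
            marked[rng.randrange(len(signal))] += strength if bit else -strength
    return marked

def extract(marked, original, nbits, redundancy, key=42):
    """Replay the same pseudo-random positions and take a majority vote."""
    rng = random.Random(key)
    bits = []
    for _ in range(nbits):
        vote = 0
        for _ in range(redundancy):
            pos = rng.randrange(len(original))
            vote += 1 if marked[pos] - original[pos] > 0 else -1
        bits.append(vote > 0)
    return bits

host = [0.0] * 10_000
payload = [True, False, True, True]
marked = embed(host, payload, redundancy=64)
damaged = [0.0 if i % 5 == 0 else v for i, v in enumerate(marked)]  # 20% erased
print(extract(damaged, host, nbits=4, redundancy=64))  # payload still recovered
```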


Journal ArticleDOI
TL;DR: In this article, the authors used data on 100 entrepreneurs in Norway and found that simple measures such as the number and strength of ties are more important for entrepreneurs than redundancy because many weak and strong ties increase the entrepreneur's access to resources.
Abstract: Entrepreneurs use their social network to start businesses. According to Burt, low redundancy in the social network promotes entrepreneurial success. In non‐redundant networks the entrepreneurs’ contacts do not know each other and rarely have the same information. Low network redundancy gives entrepreneurs better information and it allows entrepreneurs to combine resources from non‐redundant sources. In contrast, when there is high redundancy the contacts know each other and may provide the same information. However, our study cannot confirm this hypothesis. Using data on 100 entrepreneurs in Norway we find that simple measures such as the number and strength of ties are more important for entrepreneurs than redundancy because many weak and strong ties increase the entrepreneur’s access to resources. We find that much redundancy is beneficial. Entrepreneurs get information and support more easily if they have many ties with redundant relations.


Journal ArticleDOI
TL;DR: It is shown that, not only does this type of multiplier contain redundancy in that special class of finite fields, but it also has redundancy in fields GF(2^m) defined by any irreducible polynomial, and a new architecture for the normal basis parallel multiplier is proposed, which is applicable to any arbitrary finite field and has significantly lower circuit complexity compared to the original Massey-Omura normal basis parallel multiplier.

Abstract: The Massey-Omura multiplier of GF(2^m) uses a normal basis, and its bit-parallel version is usually implemented using m identical combinational logic blocks whose inputs are cyclically shifted from one another. In the past, it was shown that, for a class of finite fields defined by irreducible all-one polynomials, the parallel Massey-Omura multiplier had redundancy, and a modified architecture of lower circuit complexity was proposed. In this article, it is shown that, not only does this type of multiplier contain redundancy in that special class of finite fields, but it also has redundancy in fields GF(2^m) defined by any irreducible polynomial. By removing the redundancy, we propose a new architecture for the normal basis parallel multiplier, which is applicable to any arbitrary finite field and has significantly lower circuit complexity compared to the original Massey-Omura normal basis parallel multiplier. The proposed multiplier structure is also modular and, hence, suitable for VLSI realization. When applied to fields defined by the irreducible all-one polynomials, the multiplier's circuit complexity matches the best result available in the open literature.
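
For orientation only, here is what a GF(2^m) multiplication computes, in the simpler polynomial basis (a reference sketch; the paper's contribution concerns the normal-basis bit-parallel architecture, which this code does not model):

```python
M = 4            # GF(2^4) as a small example
IRRED = 0b10011  # x^4 + x + 1, irreducible over GF(2)

def gf_mul(a: int, b: int) -> int:
    """Shift-and-add (carry-less) multiplication modulo IRRED."""
    result = 0
    for _ in range(M):
        if b & 1:
            result ^= a              # carry-less addition is XOR
        b >>= 1
        a <<= 1
        if a & (1 << M):             # reduce modulo the field polynomial
            a ^= IRRED
    return result

print(bin(gf_mul(0b0010, 0b0010)))   # x * x   = x^2    -> 0b100
print(bin(gf_mul(0b1000, 0b0010)))   # x^3 * x = x + 1  -> 0b11
```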

Proceedings ArticleDOI
15 Jul 2002
TL;DR: This paper examines the use of transparent agent replication, a technique in which the replicates of an agent appear and act as one entity, thus avoiding an increase in system complexity and minimizing additional system load.

Abstract: Despite the considerable efforts spent on developing multi-agent systems, the actual number of deployed systems is surprisingly small. One of the reasons for the significant gap between developed and deployed systems is their brittleness. The absence of centralized control components makes it difficult to detect and treat failures of individual agents, thus risking fault propagation that can seriously impact the performance of the system. Using redundancy by replication of individual agents within a multi-agent system is one possible approach for improving fault tolerance. Unfortunately, the introduction of replicates leads to increased complexity and system load. In this paper we examine the use of transparent agent replication, a technique in which the replicates of an agent appear and act as one entity, thus avoiding an increase in system complexity and minimizing additional system load. The paper defines transparent agent replication and identifies the key challenges in using it. Special attention is given to inter-agent communication, read/write consistency, resource locking, resource synthesis, and state synchronization. An implementation of transparent agent replication for the FIPA-OS framework is presented, and the results of testing it within a real-world multi-agent system are shown.

Journal ArticleDOI
TL;DR: In this article, the authors investigated the application of a fault diagnosis and accommodation method to a real system composed of three tanks and developed a unique structured residual generator able to isolate and estimate both sensor and actuator faults.
Abstract: This paper investigates the application of a fault diagnosis and accommodation method to a real system composed of three tanks. The performance of a closed-loop system can be altered by the occurrence of faults, which can, in some circumstances, cause serious damage to the system. The research goal is to prevent system deterioration by developing a controller that has some capability to compensate for faults, that is, fault accommodation or fault-tolerant control. In this paper, a two-step scheme composed of a fault detection, isolation and estimation module, and a control compensation module is presented. The main contribution is to develop a unique structured residual generator able to isolate and estimate both sensor and actuator faults. This estimation is of paramount importance to compensate for these faults and to preserve the system's performance. The application of this method to the three-tank system gives encouraging results, which are presented and commented on for various kinds of faults.
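
The residual logic reduces to a simple pattern (a schematic sketch, not the paper's structured residual generator; the numbers and thresholds are invented):

```python
def residuals(measured, predicted):
    return [m - p for m, p in zip(measured, predicted)]

def isolate(res, thresholds):
    """Indices whose residual exceeds its threshold point at the faulty element."""
    return [i for i, (r, th) in enumerate(zip(res, thresholds)) if abs(r) > th]

predicted = [0.50, 0.42, 0.35]   # model-based one-step level predictions [m]
measured  = [0.50, 0.61, 0.35]   # the tank-2 sensor reads far too high
print(isolate(residuals(measured, predicted), thresholds=[0.05] * 3))  # [1]
```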

Patent
28 Jun 2002
TL;DR: In this paper, a fiber channel storage area network (SAN) provides virtualized storage space for a number of servers to virtual disks implemented on various virtual redundant array of inexpensive disks (RAID) devices striped across a plurality of physical disk drives.
Abstract: A fiber channel storage area network (SAN) provides virtualized storage space for a number of servers to a number of virtual disks implemented on various virtual redundant array of inexpensive disks (RAID) devices striped across a plurality of physical disk drives. The SAN includes plural controllers and communication paths to allow for fail-safe and fail-over operation. The plural controllers can be loosely-coupled to provide n-way redundancy and have more than one independent channel for communicating with one another. In particular, respective portions from each of the back-end physical disk drives within the SAN are used as one of these alternative communication channels to pass messages between controllers. Such an alternative communications channel provides even further redundancy and robustness in the system.

Journal ArticleDOI
TL;DR: This paper presents a symbol time offset estimator for coherent orthogonal frequency division multiplexing (OFDM) systems that exploits both the redundancy in the cyclic prefix and available pilot symbols used for channel estimation.
Abstract: This paper presents a symbol time offset estimator for coherent orthogonal frequency division multiplexing (OFDM) systems. The estimator exploits both the redundancy in the cyclic prefix and available pilot symbols used for channel estimation. The estimator is robust against frequency offsets and is suitable for use in dispersive channels. We base the estimator on the maximum-likelihood estimator for the additive white Gaussian noise channel. Simulations for an example system indicate a system performance as close as 0.6 dB to a perfectly synchronized system.
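
The cyclic-prefix half of such an estimator is compact. Below is a bare-bones correlator under a toy signal model of our own (no channel, no frequency offset); the paper's estimator additionally folds in the pilot symbols and an ML-derived energy term.

```python
import numpy as np

N, L = 64, 16                       # FFT size and cyclic prefix length

def cp_timing_metric(r: np.ndarray) -> int:
    """Return the offset where samples N apart correlate most strongly."""
    best, best_theta = -1.0, 0
    for theta in range(len(r) - N - L + 1):
        window = r[theta : theta + L]
        corr = np.abs(np.sum(window * np.conj(r[theta + N : theta + N + L])))
        if corr > best:
            best, best_theta = corr, theta
    return best_theta

rng = np.random.default_rng(0)
symbol = rng.standard_normal(N) + 1j * rng.standard_normal(N)
tx = np.concatenate([0.1 * rng.standard_normal(30),    # low-power junk
                     symbol[-L:], symbol])             # CP + OFDM symbol
print(cp_timing_metric(tx))        # 30: the true start of the cyclic prefix
```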

Patent
28 Jun 2002
TL;DR: In this article, the authors describe a failure involving a controller or controller interface, the virtual disks that are accessed via the affected interfaces are re-mapped to another interface in order to continue to provide high data availability.
Abstract: A fibre channel storage area network (SAN) provides virtualized storage space for a number of servers to a number of virtual disks implemented on various virtual redundant array of inexpensive disks (RAID) devices striped across a plurality of physical disk drives. The SAN includes plural controllers and communication paths to allow for fail-safe and fail-over operation. The plural controllers can be loosely-coupled to provide n-way redundancy and have more than one independent channel for communicating with one another. In the event of a failure involving a controller or controller interface, the virtual disks that are accessed via the affected interfaces are re-mapped to another interface in order to continue to provide high data availability.

Proceedings ArticleDOI
09 Sep 2002
TL;DR: Area overhead results show that TMR is more appropriate for modules using single registers, as in pipeline, control, and datapath circuits, while Hamming code is a better trade-off for groups of registers, such as register files, caches, and embedded memories.

Abstract: This work compares two fault-tolerance techniques, Hamming code and triple modular redundancy (TMR), that are largely used to mitigate single-event upsets in integrated circuits, in terms of area and performance penalty. Both techniques were implemented in VHDL and tested in two target applications: pipelined arithmetic circuits and register files. Area overhead results show that TMR is more appropriate for modules using single registers, as in pipeline, control, and datapath circuits, while Hamming code is a better trade-off for groups of registers, such as register files, caches, and embedded memories.
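
Both techniques fit in a few lines each, sketched here behaviorally in Python rather than VHDL: a bitwise majority vote for TMR, and a Hamming(7,4) single-error-correcting code for a 4-bit register group.

```python
def tmr_vote(a: int, b: int, c: int) -> int:
    """Bitwise 2-out-of-3 majority over three register copies."""
    return (a & b) | (a & c) | (b & c)

def hamming74_encode(d: int) -> int:
    d0, d1, d2, d3 = (d >> 0) & 1, (d >> 1) & 1, (d >> 2) & 1, (d >> 3) & 1
    p1, p2, p4 = d0 ^ d1 ^ d3, d0 ^ d2 ^ d3, d1 ^ d2 ^ d3
    bits = [p1, p2, d0, p4, d1, d2, d3]          # codeword positions 1..7
    return sum(bit << i for i, bit in enumerate(bits))

def hamming74_decode(cw: int) -> int:
    b = [(cw >> i) & 1 for i in range(7)]
    s = ((b[0] ^ b[2] ^ b[4] ^ b[6])
         | ((b[1] ^ b[2] ^ b[5] ^ b[6]) << 1)
         | ((b[3] ^ b[4] ^ b[5] ^ b[6]) << 2))
    if s:                                        # nonzero syndrome names the
        cw ^= 1 << (s - 1)                       # upset position; flip it back
        b = [(cw >> i) & 1 for i in range(7)]
    return b[2] | (b[4] << 1) | (b[5] << 2) | (b[6] << 3)

word = hamming74_encode(0b1011)
print(bin(hamming74_decode(word ^ (1 << 4))))    # 0b1011: single upset corrected
print(bin(tmr_vote(0b1011, 0b1011, 0b0011)))     # 0b1011: faulty copy outvoted
```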


Patent
Robert S. Hoblit1
30 Aug 2002
TL;DR: In this article, a method and system for organizing an email thread, the email thread comprising a plurality of email messages, is disclosed, which includes minimizing redundancy within the plurality of emails to provide a minimized email thread and displaying the minimized thread.
Abstract: A method and system for organizing an email thread, the email thread comprising a plurality of email messages, is disclosed. The method and system comprise minimizing redundancy within the plurality of email messages to provide a minimized email thread and displaying the minimized email thread. Through the use of the method and system in accordance with the present invention, email threads are organized and listed in a more comprehensive fashion. Furthermore, the amount of redundancy that occurs within an email thread is substantially reduced thereby minimizing the amount of computer memory/disk space hat is consumed by the email system.
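
The patent does not spell out the minimization algorithm; one naive reading (purely our assumption) is to display each line of quoted history only once per thread:

```python
def minimize_thread(messages):
    """Drop lines that already appeared (possibly quoted) earlier in the thread."""
    seen, minimized = set(), []
    for msg in messages:
        fresh = []
        for line in msg.splitlines():
            key = line.strip().lstrip("> ")      # normalize quoting markers
            if key and key not in seen:
                fresh.append(line)
                seen.add(key)
        minimized.append("\n".join(fresh))
    return minimized

thread = [
    "Can we ship on Friday?",
    "> Can we ship on Friday?\nYes, if QA signs off.",
]
print(minimize_thread(thread))   # the second message keeps only the new reply
```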

Proceedings ArticleDOI
23 Jun 2002
TL;DR: This paper presents an empirical investigation of the soft error sensitivity (SES) of microprocessors, using the picoJava-II as an example, through software-simulated fault injections in its RTL model, and shows that a reasonable prediction of the SES is possible by deduction from the processor's microarchitecture.

Abstract: This paper presents an empirical investigation of the soft error sensitivity (SES) of microprocessors, using the picoJava-II as an example, through software-simulated fault injections in its RTL model. Soft errors are generated under a realistic fault model during program run-time. The SES of a processor logic block is defined as the probability that a soft error in the block causes the processor to behave erroneously or enter an incorrect architectural state. The SES is measured at the functional block level. We have found that highly error-sensitive blocks are common across various workloads. At the same time, soft errors in many other logic blocks rarely affect computation integrity. Our results show that a reasonable prediction of the SES is possible by deduction from the processor's microarchitecture. We also demonstrate that a sensitivity-based integrity checking strategy can be an efficient way to improve fault coverage per unit of redundancy.
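
The experimental loop, transplanted from RTL to a toy computation (our sketch; the real injections target latches in the picoJava-II model, not Python lists), shows the logical masking that makes some state far less sensitive than other state:

```python
import random

def compute(words):
    # A reduction with natural masking: only the maximum reaches the output,
    # so many upsets in non-maximal words are architecturally invisible.
    return max(words)

def inject_and_run(words, rng):
    faulty = words[:]
    i = rng.randrange(len(faulty))
    faulty[i] ^= 1 << rng.randrange(16)   # single random bit flip in state
    return compute(faulty)

rng = random.Random(7)
data = list(range(100))
golden = compute(data)
trials = 1000
errors = sum(inject_and_run(data, rng) != golden for _ in range(trials))
print(f"observed sensitivity: {errors / trials:.1%}")   # below 100%: many flips masked
```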

Proceedings ArticleDOI
07 Aug 2002
TL;DR: The dynamic stability of a mobile manipulator is maintained using a redundancy scheme: a performance index based on the ZMP (zero moment point) is defined for the redundant system, and the redundancy resolution problem of optimizing this index is solved using the null motion.

Abstract: The dynamic stability of a mobile manipulator using ZMP compensation is considered. A unified approach for the two subsystems is formulated using a redundancy scheme. First, to preserve the dynamic stability of the system, we define a performance index for the redundant system using the ZMP (zero moment point). This performance index represents the stability of the whole mobile manipulator system. Then, the redundancy resolution problem of optimizing the given performance index is solved using the null motion. Finally, the performance of this method is demonstrated by a simulation study.
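
The redundancy-resolution step follows the standard null-space form (a generic robotics sketch of ours; the paper's specific performance index is the ZMP-based stability measure, which grad_h stands in for):

```python
import numpy as np

def redundant_velocities(J, x_dot, grad_h, k=1.0):
    """Task via the pseudoinverse; secondary index optimized in the null space."""
    J_pinv = np.linalg.pinv(J)
    null_proj = np.eye(J.shape[1]) - J_pinv @ J     # projects onto null(J)
    return J_pinv @ x_dot + k * null_proj @ grad_h  # task motion + null motion

J = np.array([[1.0, 0.5, 0.2],       # 2 task DOF, 3 joints: 1 redundant DOF
              [0.0, 1.0, 0.7]])
x_dot = np.array([0.1, 0.0])         # desired end-effector velocity
grad_h = np.array([0.0, 0.0, 1.0])   # gradient of a stability index (stand-in)
q_dot = redundant_velocities(J, x_dot, grad_h)
print(J @ q_dot)                     # [0.1, 0.0]: null motion leaves the task intact
```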