
Showing papers on "Redundancy (engineering) published in 1995"


01 Jan 1995
TL;DR: The technology described, known as HP AutoRAID, automatically and transparently manages migration of data blocks between these two levels as access patterns change, resulting in a fully redundant storage system that is extremely easy to use, is suitable for a wide variety of workloads, and is largely insensitive to dynamic workload changes.
Abstract: Configuring redundant disk arrays is a black art. To properly configure an array, a system administrator must understand the details of both the array and the workload it will support; incorrect understanding of either, or changes in the workload over time, can lead to poor performance. We present a solution to this problem: a two-level storage hierarchy implemented inside a single disk-array controller. In the upper level of this hierarchy, two copies of active data are stored to provide full redundancy and excellent performance. In the lower level, RAID 5 parity protection is used to provide excellent storage cost for inactive data, at somewhat lower performance. The technology we describe in this paper, known as HP AutoRAID, automatically and transparently manages migration of data blocks between these two levels as access patterns change. The result is a fully redundant storage system that is extremely easy to use, suitable for a wide variety of workloads, largely insensitive to dynamic workload changes, and that performs much better than disk arrays with comparable numbers of spindles and much larger amounts of front-end cache. Because the implementation of the HP AutoRAID technology is almost entirely in embedded software, the additional hardware cost for these benefits is very small. We describe the HP AutoRAID technology in detail, and provide performance data for an embodiment of it in a prototype storage array, together with the results of simulation studies used to choose algorithms used in the array.
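To make the block-migration idea concrete, here is a minimal sketch of a two-level store that keeps active blocks in a mirrored upper level and demotes cold blocks to RAID 5. The LRU promotion/demotion policy and the class name are illustrative assumptions, not the published HP AutoRAID algorithms.

```python
# Minimal sketch of AutoRAID-style migration between a mirrored (upper)
# level and a RAID 5 (lower) level. The LRU policy is an assumption.
from collections import OrderedDict

class TwoLevelStore:
    def __init__(self, mirrored_capacity):
        self.mirrored = OrderedDict()   # block -> data, kept in LRU order
        self.raid5 = {}                 # inactive blocks, parity-protected
        self.capacity = mirrored_capacity

    def write(self, block, data):
        # Active data lives in the mirrored level for fast, redundant writes.
        if block in self.raid5:
            del self.raid5[block]
        self.mirrored[block] = data
        self.mirrored.move_to_end(block)
        # Demote the least-recently-used block when the upper level is full.
        if len(self.mirrored) > self.capacity:
            cold_block, cold_data = self.mirrored.popitem(last=False)
            self.raid5[cold_block] = cold_data  # parity update omitted

    def read(self, block):
        if block in self.mirrored:
            self.mirrored.move_to_end(block)    # reads refresh recency
            return self.mirrored[block]
        return self.raid5[block]                # slower lower-level read

store = TwoLevelStore(mirrored_capacity=2)
for blk in ["a", "b", "a", "c"]:        # "b" goes cold and is demoted
    store.write(blk, data=blk.upper())
print(sorted(store.mirrored), sorted(store.raid5))  # ['a', 'c'] ['b']
```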

264 citations


Journal ArticleDOI
TL;DR: This work surveys several fault injection studies and discusses tools such as React (Reliable Architecture Characterization Tool) that facilitate its application.
Abstract: A fault tolerant computer system's dependability must be validated to ensure that its redundancy has been correctly implemented and the system will provide the desired level of reliable service. Fault injection, the deliberate insertion of faults into an operational system to determine its response, offers an effective solution to this problem. We survey several fault injection studies and discuss tools such as React (Reliable Architecture Characterization Tool) that facilitate its application.
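As a rough illustration, the sketch below runs a tiny software fault-injection campaign: it flips single bits in one replica of a TMR-protected computation and measures how often the redundancy masks the fault. The voted computation and single-bit-flip fault model are assumptions for illustration, not React's interface.

```python
# Hedged sketch of a fault-injection campaign against a TMR-style computation.
import random

def voted_sum(a, b, c):
    # Majority vote per element: a single faulty replica is outvoted.
    return sum(sorted(t)[1] for t in zip(a, b, c))

def campaign(values, trials=1000, seed=0):
    rng = random.Random(seed)
    replicas = [list(values), list(values), list(values)]
    reference = voted_sum(*replicas)
    silent = 0
    for _ in range(trials):
        r = rng.randrange(3)               # which replica to corrupt
        i = rng.randrange(len(values))     # which element
        bit = rng.randrange(8)             # which bit to flip
        replicas[r][i] ^= 1 << bit         # inject the fault
        if voted_sum(*replicas) == reference:
            silent += 1                    # fault masked by the redundancy
        replicas[r][i] ^= 1 << bit         # repair before the next trial
    return silent / trials

# Single bit-flips in one replica are always outvoted by the other two.
print(campaign([3, 14, 15, 92, 65]))       # 1.0: full masking under this model
```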

257 citations


Book
01 Jan 1995
TL;DR: Fundamentals.
Abstract: Fundamentals. Reliability Indexes. Unrepairable Systems. Load-Strength Reliability Models. Distributions with Monotone Intensity Functions. Repairable Systems. Repairable Duplicated Systems. Analysis of Performance Effectiveness. Two-Pole Networks. Optimal Redundancy. Optimal Technical Diagnosis. Additional Optimization Problems in Reliability Theory. Heuristic Methods in Reliability. Index.

186 citations


Journal ArticleDOI
01 Aug 1995
TL;DR: This paper presents a layered fault tolerance framework containing new fault detection and tolerance schemes, divided into servo, interface, and supervisor layers which provide different levels of detection and tolerance capabilities for structurally diverse robots.
Abstract: This paper presents a layered fault tolerance framework containing new fault detection and tolerance schemes. The framework is divided into servo, interface, and supervisor layers. The servo layer is the continuous robot system and its normal controller. The interface layer monitors the servo layer for sensor or motor failures using analytical redundancy based fault detection tests. A newly developed algorithm generates the dynamic thresholds necessary to adapt the detection tests to the modeling inaccuracies present in robotic control. Depending on the initial conditions, the interface layer can provide some sensor fault tolerance automatically without direction from the supervisor. If the interface runs out of alternatives, the discrete event supervisor searches for remaining tolerance options and initiates the appropriate action based on the current robot structure indicated by the fault tree database. The layers form a hierarchy of fault tolerance which provide different levels of detection and tolerance capabilities for structurally diverse robots.
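The dynamic-threshold idea can be sketched in a few lines: compare a sensor reading against a model-based prediction, and widen the acceptance band with the current motion so that modeling inaccuracy during fast motion does not trigger false alarms. The threshold model and constants below are illustrative assumptions, not the paper's algorithm.

```python
# Minimal sketch of analytical-redundancy fault detection with a dynamic
# threshold that scales with recent motion (an assumed threshold model).
def detect_fault(measured, predicted, velocity, base_tol=0.02, gain=0.1):
    """Flag a sensor fault when the residual exceeds a motion-dependent bound.

    measured/predicted: sensor reading vs. model-based estimate (rad)
    velocity: joint velocity (rad/s); faster motion widens the threshold
    """
    residual = abs(measured - predicted)
    threshold = base_tol + gain * abs(velocity)  # dynamic threshold
    return residual > threshold

# The same residual is a fault at low speed but tolerated at high speed.
print(detect_fault(measured=1.10, predicted=1.00, velocity=0.1))  # True
print(detect_fault(measured=1.10, predicted=1.00, velocity=2.0))  # False
```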

182 citations


Journal ArticleDOI
TL;DR: A method is proposed to estimate the fault tolerance (FT) of feedforward artificial neural nets (ANNs) and synthesize robust nets. A procedure is developed to build FT ANNs by replicating the hidden units; it exploits the intrinsic weighted summation operation performed by the processing units to overcome faults.
Abstract: A method is proposed to estimate the fault tolerance (FT) of feedforward artificial neural nets (ANNs) and synthesize robust nets. The fault model abstracts a variety of failure modes for permanent stuck-at type faults. A procedure is developed to build FT ANNs by replicating the hidden units. It exploits the intrinsic weighted summation operation performed by the processing units to overcome faults. Metrics are devised to quantify the FT as a function of redundancy. A lower bound on the redundancy required to tolerate all possible single faults is analytically derived. Less than triple modular redundancy (TMR) cannot provide complete FT for all possible single faults. The actual redundancy needed to synthesize a completely FT net is specific to the problem at hand and is usually much higher than that dictated by the general lower bound. The conventional TMR scheme of triplication and majority voting is the best way to achieve complete FT in most ANNs. Although the redundancy needed for complete FT is substantial, the ANNs exhibit good partial FT to begin with and degrade gracefully. The first replication yields maximum enhancement in partial FT compared with later successive replications. For large nets, exhaustive testing of all possible single faults is prohibitive, so the strategy of randomly testing a small fraction of the total number of links is adopted. It yields partial FT estimates that are very close to those obtained by exhaustive testing. When the fraction of links tested is held fixed, the accuracy of the estimate generated by random testing is seen to improve as the net size grows.
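The replication construction can be illustrated directly: duplicate each hidden unit k times and divide its outgoing weights by k, so the output summation averages the replicas and a single faulty unit contributes only 1/k of the error. A minimal sketch with a toy two-layer net follows; the network sizes and stuck-at-0 fault are assumptions.

```python
# Sketch of fault tolerance via hidden-unit replication: the weighted
# summation at the output layer averages the k copies of each unit.
import numpy as np

def forward(x, W1, W2):
    h = np.tanh(W1 @ x)
    return W2 @ h

def replicate_hidden(W1, W2, k):
    """Replicate hidden units k times; scale outgoing weights by 1/k."""
    W1r = np.repeat(W1, k, axis=0)          # k copies of each hidden unit
    W2r = np.repeat(W2, k, axis=1) / k      # summation averages the copies
    return W1r, W2r

rng = np.random.default_rng(0)
x = rng.normal(size=4)
W1 = rng.normal(size=(3, 4))
W2 = rng.normal(size=(2, 3))

W1r, W2r = replicate_hidden(W1, W2, k=3)
assert np.allclose(forward(x, W1, W2), forward(x, W1r, W2r))  # same function

# A stuck-at-0 fault on one replicated hidden unit perturbs the output by
# roughly 1/k of that unit's fault-free contribution instead of all of it.
W2f = W2r.copy()
W2f[:, 0] = 0.0
print(np.abs(forward(x, W1r, W2f) - forward(x, W1r, W2r)))
```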

163 citations


Proceedings ArticleDOI
03 Dec 1995
TL;DR: HP AutoRAID as discussed by the authors is a two-level storage hierarchy implemented inside a single disk-array controller, where two copies of active data are stored to provide full redundancy and excellent performance and RAID 5 parity protection is used to provide excellent storage cost for inactive data, at somewhat lower performance.
Abstract: Configuring redundant disk arrays is a black art. To configure an array properly, a system administrator must understand the details of both the array and the workload it will support. Incorrect understanding of either, or changes in the workload over time, can lead to poor performance. We present a solution to this problem: a two-level storage hierarchy implemented inside a single disk-array controller. In the upper level of this hierarchy, two copies of active data are stored to provide full redundancy and excellent performance. In the lower level, RAID 5 parity protection is used to provide excellent storage cost for inactive data, at somewhat lower performance. The technology we describe in this article, known as HP AutoRAID, automatically and transparently manages migration of data blocks between these two levels as access patterns change. The result is a fully redundant storage system that is extremely easy to use, is suitable for a wide variety of workloads, is largely insensitive to dynamic workload changes, and performs much better than disk arrays with comparable numbers of spindles and much larger amounts of front-end RAM cache. Because the implementation of the HP AutoRAID technology is almost entirely in software, the additional hardware cost for these benefits is very small. We describe the HP AutoRAID technology in detail, provide performance data for an embodiment of it in a storage array, and summarize the results of simulation studies used to choose algorithms implemented in the array.

135 citations


Journal ArticleDOI
TL;DR: This paper presents a method for multilevel logic optimization for combinational and synchronous sequential circuits that can efficiently identify those wires for addition that would create more redundancies elsewhere in the network.
Abstract: This paper presents a method for multilevel logic optimization for combinational and synchronous sequential circuits. The circuits are optimized through iterative addition and removal of redundancies. Adding redundant wires to a circuit may cause one or many existing irredundant wires and/or gates to become redundant. If the amount of added redundancy is less than the amount of created redundancy, the transformation of adding followed by removing redundancies will result in a smaller circuit. Based upon Automatic Test Pattern Generation (ATPG) techniques, the proposed method can efficiently identify those wires for addition that would create more redundancies elsewhere in the network. Experiments on ISCAS-85 combinational benchmark circuits show that the best results are obtained for most of them. For sequential circuits, experimental results on MCNC FSM benchmarks and ISCAS-89 sequential benchmark circuits show that a significant amount of area reduction can be achieved beyond combinational optimization and sequential redundancy removal.
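The link between redundancy and untestable stuck-at faults can be seen in a toy example: a wire is redundant exactly when forcing it to either constant changes no output for any input, i.e., both of its stuck-at faults are untestable. The sketch below checks this by exhaustive simulation on a three-term circuit built around the consensus theorem; the circuit itself is an illustrative assumption, not the paper's benchmark flow.

```python
# Toy ATPG-style redundancy identification by exhaustive fault simulation.
from itertools import product

def circuit(a, b, c, force=None):
    # force: optional (wire_name, value) pair overriding one internal wire,
    # emulating a stuck-at fault at that connection.
    def w(name, value):
        return force[1] if force and force[0] == name else value
    n1 = w("n1", a and b)
    n2 = w("n2", (not a) and c)
    n3 = w("n3", b and c)          # consensus term: candidate redundant wire
    return int(n1 or n2 or n3)

def is_redundant(wire):
    for a, b, c in product([0, 1], repeat=3):
        ref = circuit(a, b, c)
        for v in (0, 1):
            if circuit(a, b, c, force=(wire, v)) != ref:
                return False        # an input vector tests the stuck-at fault
    return True                     # both stuck-at faults untestable: removable

print(is_redundant("n3"))  # True: ab + a'c + bc == ab + a'c (consensus)
print(is_redundant("n1"))  # False: removing ab changes the function
```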

127 citations


Journal ArticleDOI
TL;DR: A tutorial report of the literature on the damped least-squares method, which has been used for computing velocity inverse kinematics of robotic manipulators, and an iterative method to compute the optimal damping factor for one of the redundancy resolution techniques.
Abstract: In this paper, we present a tutorial report of the literature on the damped least-squares method, which has been used for computing velocity inverse kinematics of robotic manipulators. This is a local optimization method that can prevent infeasible joint velocities near singular configurations by using a damping factor to control the norm of the joint velocity vector. However, the exactness of the inverse kinematic solution has to be sacrificed in order to achieve feasibility. The damping factor is an important parameter in this technique since it determines the trade-off between the accuracy and feasibility of the inverse kinematic solution. Various methods that have been proposed to compute an appropriate damping factor are described. Redundant manipulators, possessing extra degrees of freedom, afford more choice of inverse kinematic solutions than do non-redundant ones. The damped least-squares method has been used in conjunction with redundancy resolution schemes to compute feasible joint velocities for redundant arms while performing an additional subtask. We outline the different techniques that have been proposed to achieve this objective. In addition, we introduce an iterative method to compute the optimal damping factor for one of the redundancy resolution techniques.
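A minimal numerical sketch of the damped least-squares update, qdot = J^T (J J^T + lambda^2 I)^{-1} xdot, shows the damping factor at work near a singularity. The Jacobian and damping values below are illustrative assumptions.

```python
# Hedged sketch of damped least-squares (DLS) velocity inverse kinematics.
import numpy as np

def dls_joint_velocities(J, xdot, damping):
    """DLS resolution of joint velocities for a (possibly redundant) arm."""
    m = J.shape[0]
    JJt = J @ J.T + (damping ** 2) * np.eye(m)   # damping regularizes JJ^T
    return J.T @ np.linalg.solve(JJt, xdot)

# Near-singular 2x3 Jacobian (illustrative numbers): without damping the
# solution demands huge joint rates; damping keeps the norm bounded at the
# cost of some tracking accuracy.
J = np.array([[1.0, 1.0, 1.0],
              [1.0, 1.0, 1.001]])
xdot = np.array([0.1, 0.1])
print(np.linalg.norm(dls_joint_velocities(J, xdot, damping=0.0)))   # large
print(np.linalg.norm(dls_joint_velocities(J, xdot, damping=0.05)))  # bounded
```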

120 citations


Journal ArticleDOI
TL;DR: A neural-network-based approach for the problem of sensor failure detection, identification, and accommodation for a flight control system without physical redundancy in the sensors based on the introduction of on-line learning neural network estimators is presented.
Abstract: This paper presents a neural-network-based approach to the problem of sensor failure detection, identification, and accommodation for a flight control system without physical redundancy in the sensors. The approach is based on the introduction of on-line learning neural network estimators. For a system with n sensors, a combination of a main neural network and a set of n decentralized neural networks achieves the design goal. The main neural network and the ith decentralized neural network detect and identify a failure of the ith sensor, whereas the output of the ith decentralized neural network accommodates the failure by replacing the signal from the failed ith sensor with its estimate. The on-line learning for these neural network architectures is performed using the extended back-propagation algorithm. The paper describes successful simulations of the sensor failure detection, identification, and accommodation process following both soft and hard sensor failures. The simulations have shown remarkable capabilities for this neural scheme.
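The detection/accommodation loop can be sketched as follows; a robust voted estimate stands in for the paper's online-learning neural network estimators (that substitution, the threshold, and all constants are assumptions for illustration).

```python
# Minimal sketch of sensor failure detection, identification, and
# accommodation: flag a sensor whose reading disagrees with the estimate
# beyond a threshold, then substitute the estimate for its signal.
import numpy as np

class SensorAccommodator:
    def __init__(self, n_sensors, threshold):
        self.threshold = threshold
        self.failed = [False] * n_sensors

    def step(self, readings):
        est = float(np.median(readings))   # robust stand-in for the NN estimate
        out = []
        for i, r in enumerate(readings):
            if self.failed[i] or abs(r - est) > self.threshold:
                self.failed[i] = True      # detection and identification
                out.append(est)            # accommodation by substitution
            else:
                out.append(r)
        return out

fdia = SensorAccommodator(n_sensors=3, threshold=1.0)
print(fdia.step([10.0, 10.2, 9.9]))   # healthy: readings pass through
print(fdia.step([10.0, 10.1, 55.0]))  # hard failure on sensor 2: replaced
```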

100 citations


Journal ArticleDOI
TL;DR: In this paper, the authors present a formal model of muddling through, which clarifies the logical structure of the informal theory and presents a clearer target for criticism. And they show that some of Lindblom's less controversial conjectures, such as seriality and redundancy, are correct if conflict across domains is absent or takes certain specified forms.
Abstract: As arguments about the effectiveness of “muddling through” have proven frustratingly inconclusive, incrementalism—once a major approach to the study of boundedly rational policy processes—has gone dormant. In an attempt to revitalize the debate, I present a formal model of muddling through. The model, by clarifying the logical structure of the informal theory, presents a clearer target for criticism. More importantly, it establishes numerous deductive results. First, some of Lindblom's less controversial conjectures—about the benefits of seriality (repeated attacks on the same policy problem) and redundancy (multiple decision makers working on the same problem)—turn out to be correct if conflict across policy domains is absent or takes certain specified forms. But given other empirically reasonable types of conflict, even these claims are wrong. Second, the advantages of incremental (local) policy search (Lindblom's best-known and most controversial claim) turn out to be still less well founded: in many empirically plausible contexts the claim is invalid.

89 citations


Patent
13 Jan 1995
TL;DR: In this article, a redundant array storage system including storage units divided into two logical arrays is described, in which a plurality of array control units are fully utilized to control data transfers between the logical arrays and a central processing unit, each controller being capable of taking over the task of a failed controller.
Abstract: A redundant array storage system including storage units divided into two logical arrays. The redundant array storage system further includes a plurality of array control units which are all fully utilized to control data transfers between the logical arrays and a central processing unit, each controller being capable of taking over the task of a failed controller. In normal operation, each redundant array controller may only access data stored in a logical array assigned to that controller. If the other redundant array controller fails, the remaining controller may access the data stored in the logical array assigned to the failed controller only through a secondary control process that is independent from the primary control process of the remaining controller. Thus, the invention prevents parity data associated with user data placed in storage from being corrupted by attempts of two or more array control units to access the same redundancy group of data concurrently.

Journal ArticleDOI
TL;DR: The structure generator MOLGEN as mentioned in this paper produces all the molecular graphs that correspond to a given chemical formula and (optionally) prescribed and forbidden substructures as well as further optional conditions like intervals for possible ring sizes, hybridization of carbon atoms, etc.

Journal ArticleDOI
01 Nov 1995
TL;DR: A more computationally efficient scheme that improves the efficiency of the compact QP method, the improved compact QP method, is developed, and it is believed to be one of the most efficient and effective optimization algorithms for resolving the manipulator redundancy under inequality constraints.
Abstract: The compact QP method is an effective and efficient algorithm for resolving the manipulator redundancy under inequality constraints. In this paper, a more computationally efficient scheme which improves the efficiency of the compact QP method, the improved compact QP method, is developed. With the technique of work space decomposition, the redundant inverse kinematics problem can be decomposed into two subproblems, so the size of the redundancy problem can be reduced. For an n degree-of-freedom spatial redundant manipulator, instead of a 6×n matrix, only a 3×(n-3) matrix needs to be manipulated by Gaussian elimination with partial pivoting for selecting the free variables. The simulation results on the CESAR manipulator indicate that the speedup of the compact QP method as compared with the original QP method is about 3.3. Furthermore, the speedup of the improved compact QP method is about 5.6. Therefore, it is believed that the improved compact QP method is one of the most efficient and effective optimization algorithms for resolving the manipulator redundancy under inequality constraints.

Journal ArticleDOI
TL;DR: This work presents a technique based on a continuous task model that very closely approximates discrete models and tasks with varying characteristics and aims to maximize the total performance index, which is a performance-related reliability measurement.
Abstract: Many real-time systems have both performance requirements and reliability requirements. Performance is usually measured in terms of the value in completing tasks on time. Reliability is evaluated by hardware and software failure models. In many situations, there are trade-offs between task performance and task reliability. Thus, a mathematical assessment of performance-reliability trade-offs is necessary to evaluate the performance of real-time fault-tolerance systems. Assuming that the reliability of task execution is achieved through task replication, we present an approach that mathematically determines the replication factor for tasks. Our approach is novel in that it is a task schedule based analysis rather than a state based analysis as found in other models. Because we use a task schedule based analysis, we can provide a fast method to determine optimal redundancy levels, we are not limited to hardware reliability given by constant failure rate functions as in most other models, and we hypothesize that we can more naturally integrate with online real-time scheduling than when state based techniques are used. In this work, the goal is to maximize the total performance index, which is a performance-related reliability measurement. We present a technique based on a continuous task model and show how it very closely approximates discrete models and tasks with varying characteristics.
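A schedule-based selection of the replication factor might look like the sketch below: each candidate replication level is scored by a performance index that combines task value with the probability that at least one replica succeeds, subject to schedule capacity. The value and failure models here are illustrative assumptions, not the paper's formulation.

```python
# Hedged sketch of replication-factor selection under a schedule budget.
def performance_index(value, fail_prob, replicas, capacity_used, capacity):
    if capacity_used > capacity:
        return 0.0                             # the schedule cannot fit the replicas
    reliability = 1.0 - fail_prob ** replicas  # at least one replica completes
    return value * reliability

def best_replication(value, fail_prob, task_cost, capacity, max_replicas=5):
    best_r, best_pi = 1, 0.0
    for r in range(1, max_replicas + 1):
        pi = performance_index(value, fail_prob, r, r * task_cost, capacity)
        if pi > best_pi:
            best_r, best_pi = r, pi
    return best_r, best_pi

# Value-100 task, 5% failure probability per replica, each replica occupying
# 2 time units of an 8-unit schedule window: 4 replicas fit, 5 do not.
print(best_replication(value=100.0, fail_prob=0.05, task_cost=2.0, capacity=8.0))
```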

Proceedings ArticleDOI
12 Sep 1995
TL;DR: The paper suggests that artificial evolution can be used to produce systems that are inherently insensitive to faults, with fault tolerance becoming part of the task specification.
Abstract: The conventional mechanism used to gain fault tolerance is redundancy. In contrast, the paper suggests that artificial evolution can be used to produce systems that are inherently insensitive to faults, with fault tolerance becoming part of the task specification. The possible techniques are investigated, and the study is grounded in a real world evolved electronic control system for a robot.

Proceedings ArticleDOI
06 Nov 1995
TL;DR: In this paper, a control system for parallel operation of nonredundant UPSs based on current control is presented, where the relative phase between the inverters is constant and the overall control system is implemented on a simple and low cost platform.
Abstract: The parallel operation of static inverters is, in a large number of cases, the appropriate solution to achieve the high power required by some applications or to improve power system reliability. The limited capacity of individual inverters makes it necessary to parallel them to obtain the nominal load power. In UPS systems, there are situations where high reliability/availability is required by critical loads, and parallel redundancy appears an immediate solution to satisfy this requirement. This paper presents a control system for parallel operation of nonredundant UPSs based on current control, in which the relative phase between the inverters is constant. Simulation results as well as experimental studies are presented. The overall control system is implemented on a simple and low-cost platform.

Proceedings ArticleDOI
M. Franklin
13 Nov 1995
TL;DR: The schemes investigated do not require any modifications to the instruction set architecture of the machine, and no additional instructions are added by the compiler, and the performance impact of the investigated fault tolerance schemes are analyzed.
Abstract: As more and more transistors are incorporated into processor chips, the circuits are becoming more and more error-prone, necessitating the introduction of fault tolerance techniques. This paper investigates techniques to incorporate fault tolerance in superscalar processors by exploiting the functional unit redundancy available in these processors. The schemes investigated in this paper do not require any modifications to the instruction set architecture of the machine, and no additional instructions are added by the compiler. The paper also presents the results of a simulation study that we conducted to analyze the performance impact of the investigated fault tolerance schemes.

Journal ArticleDOI
TL;DR: Surprisingly, this principle does not hold in the hazard rate ordering even for series systems if the spares do not match the original components in distribution; it does hold for series systems with matching spares, and the principle is also investigated for cold-standby redundancy.
Abstract: Design engineers are well aware of the stochastic result which says that (under the appropriate assumptions) redundancy at the component level is superior to redundancy at the system level. Given the importance of the hazard rate in reliability and life testing, we investigate to what extent this principle holds for the stronger stochastic ordering, viz., hazard rate ordering. Surprisingly, it does not hold even for series systems if the spares do not match the original components in distribution. It is, however, true for series systems with matching spares, and we conjecture that this is the case in general for k-out-of-n:G systems. We also investigate this principle for cold-standby redundancy (as opposed to active or parallel redundancy).
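For the baseline result the paper starts from, a two-component series system with active redundancy shows why component-level redundancy is superior in the usual stochastic order; the following is a standard sketch of that comparison, not the paper's hazard-rate analysis.

```latex
% Component lifetimes X_1, X_2 and spare lifetimes Y_1, Y_2.
% Component-level redundancy pairs each component with its spare and then
% takes the series system; system-level redundancy duplicates the whole
% series system. For every realization,
\[
  \min\bigl(\max(X_1,Y_1),\,\max(X_2,Y_2)\bigr)
  \;\geq\;
  \max\bigl(\min(X_1,X_2),\,\min(Y_1,Y_2)\bigr),
\]
% so component-level redundancy is superior in the usual stochastic order.
% The paper asks whether this comparison survives in the stronger hazard
% rate order, where the answer depends on whether each Y_i matches X_i
% in distribution.
```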

Patent
Tsang-Ling Sheu
11 Oct 1995
TL;DR: In this paper, a fault-tolerant bridge/router with a distributed switch-over mechanism is proposed, which can tolerate any single failures and does not rely on network reconfiguration.
Abstract: A fault-tolerant bridge/router ("brouter") with a distributed switch-over mechanism of the present invention can tolerate any single failure and does not rely on network reconfiguration (or alternative paths), thereby substantially improving system reliability/availability. The fault-tolerant brouter utilizes a plurality of processing elements communicating through a multiple-bus switching fabric. Each processing element can effectively support two ports, each port providing an interface to an individual LAN. Each LAN is then linked to two different ports on two different processing elements, respectively, thereby providing processing element redundancy. If a processing element fails, bridging/routing functions can be performed by the other, redundant processing element. The functions are switched using the switch-over mechanism. Because the switch-over mechanism is distributed, no centralized control mechanism is required. The fault-tolerant brouter of the present invention prevents packet loss, so that a source station does not have to resend packets blocked by a failed processing element, and provides transparency to end stations, so that packet recovery is independent of the networking protocols implemented. In addition, due to the redundancy of the processing elements for each LAN, traffic from unlike LANs with different media speeds can be evenly balanced. In this manner, the fault-tolerant brouter of the present invention provides significant improvement in system reliability and availability.

Journal ArticleDOI
TL;DR: A method to construct a matrix is proposed, and it is shown that any matrix constructed by the proposed method can be mapped into a solution to the placement problem if a certain condition holds between N and p.
Abstract: In this paper, we deal with the data/parity placement problem, which is described as follows: how to place data and parity evenly across disks in order to tolerate two disk failures, given the number of disks N and the redundancy rate p, which represents the amount of disk space used to store parity information. To begin with, we transform the data/parity placement problem into the problem of constructing an N×N matrix such that the matrix corresponds to a solution to the problem. A method to construct such a matrix is proposed, and we show how it works through several illustrative examples. It is also shown that any matrix constructed by the proposed method can be mapped into a solution to the placement problem if a certain condition holds between N and p.

Journal ArticleDOI
15 Dec 1995
TL;DR: A novel method based on the minimum cycle mean algorithm to determine the iteration bound of the MRDFG with a lower polynomial time complexity than the two existing techniques is proposed.
Abstract: Digital signal processing algorithms are repetitive in nature. These algorithms are described by iterative data-flow graphs where nodes represent computations and edges represent communications. For all data-flow graphs, there exists a fundamental lower bound on the iteration period referred to as the iteration bound. Determining the iteration bound for signal processing algorithms described by iterative data-flow graphs is an important problem. In this paper we review two existing algorithms for determination of the iteration bound. Then we propose another novel method based on the minimum cycle mean algorithm to determine the iteration bound with a lower polynomial time complexity than the two existing techniques. It is convenient to represent many multi-rate signal processing algorithms by multi-rate data-flow graphs. The iteration bound of a multi-rate data-flow graph (MRDFG) can be determined by considering the single-rate data-flow graph (SRDFG) equivalent of the MRDFG. However, the equivalent single-rate data-flow graph contains many redundant nodes and edges. The iteration bound of the MRDFG can be determined faster if these redundancies in the equivalent SRDFG are first removed. A previous approach has considered elimination of edge redundancy. In this paper we present an approach to eliminate node redundancy in the MRDFG. We combine elimination of node and edge redundancies to propose a novel algorithm for faster determination of the iteration bound of the MRDFG.
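The iteration bound itself is the maximum, over all directed cycles, of cycle computation time divided by cycle delay count. The brute-force sketch below computes it by enumerating cycles on a small single-rate graph; real implementations use minimum cycle mean algorithms instead, and the example graph is an illustrative assumption.

```python
# Sketch: iteration bound T = max over cycles of (cycle time / cycle delays),
# via naive cycle enumeration (fine for tiny graphs only).
def iteration_bound(nodes, edges, comp_time):
    """nodes: names; edges: (u, v, delays); comp_time: node -> computation time."""
    best = 0.0

    def dfs(start, current, visited, time_sum, delay_sum):
        nonlocal best
        for u, v, d in edges:
            if u != current:
                continue
            if v == start:
                if delay_sum + d > 0:   # valid DFGs have no delay-free cycles
                    best = max(best, time_sum / (delay_sum + d))
            elif v not in visited:
                dfs(start, v, visited | {v}, time_sum + comp_time[v], delay_sum + d)

    for n in nodes:
        dfs(n, n, {n}, comp_time[n], 0)
    return best

# Two coupled loops: A->B->A has computation time 3 and 1 delay; A->C->A has
# time 5 and 2 delays, so the bound is max(3/1, 5/2) = 3.
nodes = ["A", "B", "C"]
edges = [("A", "B", 0), ("B", "A", 1), ("A", "C", 1), ("C", "A", 1)]
comp_time = {"A": 1, "B": 2, "C": 4}
print(iteration_bound(nodes, edges, comp_time))  # 3.0
```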

Journal ArticleDOI
TL;DR: The effectiveness of the optimal task-allocation algorithm is demonstrated by comparing it to a competing optimal task-allocation algorithm for maximizing reliability; an efficient heuristic that obtains sub-optimal solutions in a reasonable computation time is also given.
Abstract: This paper considers the problem of finding an optimal and sub-optimal task-allocation (assign the processing node for each module of a task or program) in redundant distributed computing systems, with the goal of maximizing system-reliability (probability that the system completes the entire task successfully). Finding an optimal task-allocation is NP-hard in the strong sense. An efficient algorithm is presented for optimal task-allocation in a distributed computing system with level-2 redundancy. The algorithm, (a) uses branch and bound with underestimates for reducing the average time complexity of optimal task-allocation computations, and (b) reorders the list of modules to allow a subset of modules that does not intra-communicate to be assigned last, for further reduction in the computations of optimal task-allocation for maximizing reliability. An efficient heuristic algorithm is given which obtains sub-optimal solutions in a reasonable computation time. The performance of our algorithms is given over a wide range of parameters such as number of modules, number of processing nodes, ratio of average execution cost to average communication cost, and connectivity of modules. The effectiveness of the optimal task-allocation algorithm is demonstrated by comparing it to a competing optimal task-allocation algorithm for maximizing reliability. The performance of our algorithm improves markedly as the difference between the number of modules and the connectivity increases.
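A stripped-down version of branch and bound with an optimistic underestimate might look like this sketch, using an assumed exponential reliability model over execution plus communication costs; the model and toy numbers are illustrative, not the paper's.

```python
# Hedged sketch of branch-and-bound task allocation for maximum reliability.
import math

def reliability(assignment, exec_cost, comm_cost, fail_rate):
    """Reliability = exp(-fail_rate * (execution + inter-node communication))."""
    total = sum(exec_cost[m][p] for m, p in enumerate(assignment))
    for (a, b), c in comm_cost.items():
        if assignment[a] != assignment[b]:
            total += c                  # split modules add communication exposure
    return math.exp(-fail_rate * total)

def branch_and_bound(n_modules, n_nodes, exec_cost, comm_cost, fail_rate):
    best = {"assign": None, "rel": 0.0}

    def bound(partial):
        # Optimistic underestimate of remaining cost: cheapest node per
        # unassigned module, ignoring future communication (admissible).
        total = sum(exec_cost[m][p] for m, p in enumerate(partial))
        for (a, b), c in comm_cost.items():
            if a < len(partial) and b < len(partial) and partial[a] != partial[b]:
                total += c
        total += sum(min(exec_cost[m]) for m in range(len(partial), n_modules))
        return math.exp(-fail_rate * total)

    def search(partial):
        if len(partial) == n_modules:
            rel = reliability(partial, exec_cost, comm_cost, fail_rate)
            if rel > best["rel"]:
                best.update(assign=list(partial), rel=rel)
            return
        if bound(partial) <= best["rel"]:
            return                      # prune: even the best case cannot win
        for p in range(n_nodes):
            search(partial + [p])

    search([])
    return best["assign"], best["rel"]

exec_cost = [[2, 5], [4, 1], [3, 3]]    # module x node execution costs
comm_cost = {(0, 1): 6, (1, 2): 1}      # costs incurred if modules are split
print(branch_and_bound(3, 2, exec_cost, comm_cost, fail_rate=0.05))
```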

Patent
06 Jun 1995
TL;DR: In this paper, a cache memory architecture has a separate redundant read bus fully dedicated to redundancy and fed by a single spare sub-array common to all memory sub-arrays of the cache memory.
Abstract: A cache memory architecture having a separate redundant read bus fully dedicated to redundancy and fed by a single spare sub-array common to all memory sub-arrays of the cache memory. Redundant sense amplifiers are dotted to the redundant read bus, and normal sense amplifiers are connected to a main read bus. Normal and redundant data are valid and available at the same time at the outputs of the normal and redundant sense amplifiers. When the late select address signals become valid, then the correct information can be selected via a multiplexer provided with an INHIBIT input. The multiplexer is normally controlled by decoded signals generated by a decoder, unless redundancy is required. If redundancy is required, the information generated by the bit address comparator forces the multiplexer, via the INHIBIT input, to select the redundant read bus, instead of one read bus of the main read bus, and to output the redundant byte as the selected one.

01 Dec 1995
TL;DR: A review of the different optimization approaches is given, and a new approach, a genetic algorithm (GA), is presented which can solve the general class of the redundancy allocation problem.
Abstract: The redundancy allocation problem involves the selection of components and the appropriate levels of redundancy to maximize reliability or minimize cost of a series-parallel system given design constraints. Different optimization approaches have been previously applied to the problem, including dynamic programming, integer programming, and mixed-integer and nonlinear programming. However, these approaches can only solve sub-classes of the problem. This paper presents a review of the different optimization approaches and presents a new approach, a genetic algorithm (GA), which can solve the general class of the redundancy allocation problem. The GA is demonstrated on two different problems and compared with the other techniques.
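A bare-bones GA for this problem encodes one redundancy level per subsystem and scores chromosomes by series-parallel reliability, zeroing out designs that exceed the cost budget. All component data and GA settings below are illustrative assumptions, not the paper's test problems.

```python
# Hedged sketch of a GA for series-parallel redundancy allocation.
import random

RELIABILITY = [0.90, 0.85, 0.95]   # per-component reliability by subsystem
COST = [4.0, 3.0, 5.0]             # per-component cost by subsystem
BUDGET = 30.0

def fitness(levels):
    cost = sum(c * k for c, k in zip(COST, levels))
    if cost > BUDGET:
        return 0.0                               # infeasible design
    rel = 1.0
    for r, k in zip(RELIABILITY, levels):
        rel *= 1.0 - (1.0 - r) ** k              # k components in parallel
    return rel

def evolve(pop_size=40, generations=100, max_level=4, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(1, max_level) for _ in COST] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]         # truncation selection
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, len(COST))
            child = a[:cut] + b[cut:]            # one-point crossover
            if rng.random() < 0.2:               # mutation: retune one level
                child[rng.randrange(len(COST))] = rng.randint(1, max_level)
            children.append(child)
        pop = survivors + children
    best = max(pop, key=fitness)
    return best, fitness(best)

print(evolve())   # redundancy levels per subsystem and the system reliability
```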

Patent
Churoo Park
24 Aug 1995
TL;DR: A column redundancy circuit as mentioned in this paper consists of a programming element for programming a repair column address, a comparing element for comparing the repair column address with a column address inputted from outside to generate a redundancy enable control signal according to the result of the comparison, a decoding element for decoding the repair column address signal to generate a decoding signal, and a redundancy column select element for compounding the decoding signal and a data input signal to enable a redundancy column select signal.
Abstract: A column redundancy circuit and method of a semiconductor memory device. The column redundancy circuit comprises a programming element for programming a repair column address; a comparing element for comparing the programmed repair column address with a column address inputted from outside to thereby generate a redundancy enable control signal according to the result of the comparison; a decoding element for decoding the repair column address signal to thereby generate a decoding signal; and a redundancy column select element for compounding the decoding signal and a data input signal to thereby enable a redundancy column select signal.

Journal ArticleDOI
TL;DR: A review of the various approaches which have been proposed to solve the apparent redundancy of muscles and joints in biological and artificial robot manipulators, with special emphasis on recent models which try to deal with this problem by reducing the number of degrees of freedom.

Patent
16 Feb 1995
TL;DR: In this paper, a recording device is presented which improves write response time by reporting the completion of the write operation to the host computer prior to the completion of writing redundancy data.
Abstract: A recording device which improves write response time by reporting the completion of the write operation to the host computer prior to the completion of writing redundancy data. Data writing command information from the host computer is stored in a command/status memory backed up by a power source in an array controller, and the completion of the writing operation is reported to the host computer at the time of the completion of writing updated data, prior to the completion of writing redundancy data. Redundancy data is written as a background process. If redundancy data could not be written due to an abnormality of the power source or the like, such redundancy data may be generated from the data in other disk units of the same redundancy group in accordance with the writing command information stored in the command/status memory, thereby completing the writing of redundancy data.

Patent
22 Mar 1995
TL;DR: In this article, a data processing system containing a monolithic network of cells with sufficient redundancy provided through direct logical replacement of defective cells by spare cells to allow a large monolithic array of cells without uncorrectable defects to be organized.
Abstract: A data processing system containing a monolithic network of cells with sufficient redundancy provided through direct logical replacement of defective cells by spare cells to allow a large monolithic array of cells without uncorrectable defects to be organized, where the cells have a variety of useful properties. The data processing system according to the present invention overcomes the chip-size limit and off-chip connection bottlenecks of chip-based architectures, the von Neumann bottleneck of uniprocessor architectures, the memory and I/O bottlenecks of parallel processing architectures, and the input bandwidth bottleneck of high-resolution displays, and supports integration of up to an entire massively parallel data processing system into a single monolithic entity.

Proceedings ArticleDOI
01 Jan 1995
TL;DR: A fast lossy Internet image transmission scheme (FLIIT) for compressed images which eliminates retransmission delays by strategically shielding important portions of the image with redundancy bits is introduced.
Abstract: Images are usually transmitted across the Internet using a lossless protocol such as TCP/IP. Lossless protocols require retransmission of lost packets, which substantially increases transmission time. We introduce a fast lossy Internet image transmission scheme (FLIIT) for compressed images which eliminates retransmission delays by strategically shielding important portions of the image with redundancy bits. We describe a joint source and channel coding algorithm for images which minimizes the expected distortion of transmitted images. The algorithm efficiently allocates quantizer resolution bits and redundancy bits to control quantization errors and expected packet transmission losses. We describe an implementation of this algorithm and compare its performance on the Internet to lossless TCP/IP transmission of the same images. In our experiments, the FLIIT scheme transmitted images five times faster than TCP/IP during the day, with resulting images of equivalent quality.
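The shielding idea amounts to a joint allocation: spend each bit of a fixed budget where it most reduces expected distortion, either as a quantizer bit (less quantization error) or as a redundancy bit (lower chance the data is lost). The greedy sketch below uses assumed loss and distortion models, not the FLIIT algorithm itself.

```python
# Hedged sketch of greedy joint source/channel bit allocation.
P_LOSS = 0.1                        # raw packet-loss rate on the channel

def loss_prob(r):
    # With r redundancy copies, data is lost only if all copies are lost.
    return P_LOSS ** (1 + r)

def quant_dist(b):
    # Quantizer distortion falls ~6 dB per bit (a standard assumption).
    return 4.0 ** (-b)

def expected_distortion(alloc, energy):
    total = 0.0
    for (b, r), e in zip(alloc, energy):
        lost = loss_prob(r)
        total += e * (lost * 1.0 + (1.0 - lost) * quant_dist(b))  # 1.0 = full loss
    return total

def allocate(energy, budget):
    alloc = [[0, 0] for _ in energy]    # per band: [quantizer bits, redundancy bits]
    for _ in range(budget):
        best = None
        for i in range(len(energy)):
            for j in (0, 1):
                alloc[i][j] += 1        # tentatively spend the bit here
                d = expected_distortion(alloc, energy)
                alloc[i][j] -= 1
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        alloc[i][j] += 1                # commit the best marginal use of the bit
    return alloc

energy = [8.0, 2.0, 1.0]                # per-band importance (signal energy)
print(allocate(energy, budget=12))      # important bands get shielded first
```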

Proceedings ArticleDOI
01 Dec 1995
TL;DR: This work presents a simulation-based method for combinational design verification that aims at complete coverage of specified design errors using conventional ATPG tools, and shows how to map all the foregoing error types into SSL faults.
Abstract: We present a simulation-based method for combinational design verification that aims at complete coverage of specified design errors using conventional ATPG tools. The error models used in prior research are examined and reduced to four types: gate substitution errors (GSEs), gate count errors (GCEs), input count errors (ICEs), and wrong input errors (WIEs). Conditions are derived for a gate to be completely testable for GSEs; these conditions lead to small test sets for GSEs. Near-minimal test sets are also derived for GCEs. We analyze redundancy in design errors and relate this to single stuck-line (SSL) redundancy. We show how to map all the foregoing error types into SSL faults, and describe an extensive set of experiments to evaluate the proposed method. Our experiments demonstrate that high coverage of the modeled design errors can be achieved with small test sets.