
Showing papers on "Redundancy (engineering) published in 1981"


Journal ArticleDOI
TL;DR: It is argued that flooding schemes have significant drawbacks for such networks, and a general class of distributed algorithms for establishing new loop-free routes to the station for any node left without a route due to changes in the network topology is proposed.
Abstract: We consider the problem of maintaining communication between the nodes of a data network and a central station in the presence of frequent topological changes as, for example, in mobile packet radio networks. We argue that flooding schemes have significant drawbacks for such networks, and propose a general class of distributed algorithms for establishing new loop-free routes to the station for any node left without a route due to changes in the network topology. By virtue of built-in redundancy, the algorithms are typically activated very infrequently and, even when they are, they do not involve any communication within the portion of the network that has not been materially affected by a topological change.
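The abstract does not spell out the algorithms, but the underlying idea — loop-free routes maintained as a tree of next-hop pointers toward the station, with repair confined to the nodes actually cut off by a topological change — can be sketched as follows. This is a toy model, not the authors' algorithm: the BFS route construction, graph representation, and node names are illustrative assumptions.

```python
from collections import deque

def bfs_routes(adj, station):
    """Breadth-first search from the station; the parent pointers form
    a loop-free route tree (each node's next hop toward the station)."""
    parent = {station: None}
    q = deque([station])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in parent:
                parent[v] = u
                q.append(v)
    return parent

def affected_nodes(parent, u, v):
    """Nodes whose current route to the station traverses link (u, v).
    Only these nodes need new routes; the rest of the network is untouched."""
    children = {}
    for n, p in parent.items():
        children.setdefault(p, []).append(n)
    # the downstream endpoint of the failed tree edge loses its whole subtree
    if parent.get(u) == v:
        root = u
    elif parent.get(v) == u:
        root = v
    else:
        return set()          # link not on the route tree: nothing to repair
    out, stack = set(), [root]
    while stack:
        n = stack.pop()
        out.add(n)
        stack.extend(children.get(n, []))
    return out
```

The point of the sketch is the paper's locality claim: a failure off the route tree triggers no repair at all, and one on the tree involves only the severed subtree.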

386 citations


Journal ArticleDOI
TL;DR: In this article, the manipulatability and redundancy of multi-articulated robot arms are analyzed and utilization of redundancy is discussed, and experimental results are also presented concerning the use of redundancy for trajectory control with provisions for an obstacle.

153 citations


Posted Content
TL;DR: The climate construct is defined and key issues concerning climate, which have been identified by past research, are addressed and a model which represents the traditional conceptualization of climate is given.
Abstract: Climate is presented as a perceptual attribute on an organizational, group, and individual level. The climate construct is defined and key issues concerning climate, which have been identified by past research, are addressed. These issues are level of analysis, measurement, validity, redundancy and usefulness. A model which represents the traditional conceptualization of climate is given. This model is later revised by integrating aspects from the discussion of the key issues. The paper concludes with recommendations for future climate research.

152 citations


Journal ArticleDOI
R.T. Smith1, J.D. Chlipala, J.F.M. Bindels1, R.G. Nelson1, F.H. Fischer1, T.F. Mantz1 
TL;DR: In this article, the authors discuss the explosion and wicking phenomenon of polysilicon links by ~50 ns, 1.064-μm wavelength laser pulses, and the target geometry, laser spot size and targeting accuracy.
Abstract: Yield improvement obtained with laser-programmed redundancy in a 64K DRAM has ranged from 3000 percent during early model making to 500-800 percent after two years of volume production. The electrical design constraints on 64K redundancy organization are reviewed. The explosion and wicking phenomenon of polysilicon links by ~50 ns, 1.064-μm wavelength laser pulses is discussed in relation to the target geometry, laser spot size, and targeting accuracy. The system hardware and main software modules are detailed. In particular, the algorithms for testing, repair diagnosis, and target coordinate calculation are explained. Elemental time analysis of the main operational steps is reviewed, with emphasis on strategy for improved throughput. Evolution of the laser programming technology to the next generation of VLSI devices involves smaller spot sizes and submicrometer positioning accuracy.

146 citations


Journal ArticleDOI
TL;DR: Reducing redundancy in data representation leads to decreased data storage requirements and lower costs for data communication.
Abstract: Reducing redundancy in data representation leads to decreased data storage requirements and lower costs for data communication.
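As a minimal illustration of removing redundancy from a data representation, here is run-length encoding — our choice for the sketch, not a method prescribed by the paper:

```python
def rle_encode(s):
    """Run-length encoding: collapse each run of a repeated symbol
    into a (symbol, count) pair, removing representational redundancy."""
    out, i = [], 0
    while i < len(s):
        j = i
        while j < len(s) and s[j] == s[i]:
            j += 1
        out.append((s[i], j - i))
        i = j
    return out

def rle_decode(pairs):
    """Inverse transform: expand each (symbol, count) pair back to a run."""
    return "".join(ch * n for ch, n in pairs)
```

Lossless reduction like this lowers both storage and communication cost exactly when the data actually contain runs; on incompressible data the encoded form can be larger.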

103 citations


01 Jul 1981
TL;DR: In this paper, a method for the design of automatic flight control systems for aircraft with complex characteristics and operational requirements, such as the powered lift STOL and V/STOL configurations, is presented.
Abstract: A practical method for the design of automatic flight control systems for aircraft with complex characteristics and operational requirements, such as the powered lift STOL and V/STOL configurations, is presented. The method is effective for a large class of dynamic systems requiring multi-axis control which have highly coupled nonlinearities, redundant controls, and complex multidimensional operational envelopes. It exploits the concept of inverse dynamic systems, and an algorithm for the construction of inverses is given. A hierarchic structure for the total control logic with inverses is presented. The method is illustrated with an application to the Augmentor Wing Jet STOL Research Aircraft equipped with a digital flight control system. Results of flight evaluation of the control concept on this aircraft are presented.
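The inverse-dynamic-system concept can be illustrated on a scalar linear plant: given a desired rate of change, the inverse returns the control that produces it exactly. The model ẋ = ax + bu and the gains below are illustrative assumptions, vastly simpler than a powered-lift aircraft.

```python
def inverse_control(x, xdot_des, a=-1.0, b=2.0):
    """Inverse of the scalar linear plant xdot = a*x + b*u:
    return the control u for which the plant produces xdot_des exactly.
    (a, b are assumed known model parameters.)"""
    return (xdot_des - a * x) / b
```

Closing the loop with this inverse turns trajectory tracking into specifying the desired rate directly; the paper's hierarchic structure layers such inverses for coupled multi-axis dynamics.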

94 citations


Journal Article
TL;DR: This paper reports recent research into methods for creating natural language text using a new processing paradigm called Fragment-and-Compose and the computational methods of KDS, which embodies this paradigm.
Abstract: This paper reports recent research into methods for creating natural language text. A new processing paradigm called Fragment-and-Compose has been created and an experimental system implemented in it. The knowledge to be expressed in text is first divided into small propositional units, which are then composed into appropriate combinations and converted into text. KDS (Knowledge Delivery System), which embodies this paradigm, has distinct parts devoted to creation of the propositional units, to organization of the text, to prevention of excess redundancy, to creation of combinations of units, to evaluation of these combinations as potential sentences, to selection of the best among competing combinations, and to creation of the final text. The Fragment-and-Compose paradigm and the computational methods of KDS are described.

74 citations


Journal ArticleDOI
01 Sep 1981
TL;DR: With good fault-detection mechanisms it is now possible to cover a very high percentage of all the possible failures that can occur, and once a fault is detected, systems are designed to reconfigure and proceed either with full or degraded performance depending on how much redundancy is built into the system.
Abstract: As the field of fault-tolerant computing matures and its results are taken into practical use, the effects of a failure in a computer system need not be catastrophic. With good fault-detection mechanisms it is now possible to cover a very high percentage of all the possible failures that can occur. Once a fault is detected, systems are designed to reconfigure and proceed either with full or degraded performance, depending on how much redundancy is built into the system. It should be noted that one particular failure may have different effects depending on the circumstances and the time at which it occurs. Today we see large numbers of resources being tied together in complex computer systems, either locally or in geographically distributed systems and networks. In such systems it is obviously very undesirable that the failure of one element can bring the entire system down. On the other hand, one usually cannot afford to design the system with sufficient redundancy to mask the effect of all failures immediately.

67 citations


Journal ArticleDOI
TL;DR: In this paper, the authors explored the relationship between local observability, global observability and calculability and showed that these concepts are useful in characterizing the performance of process data estimators with regard to bias and uniqueness of an estimate.

65 citations


Proceedings ArticleDOI
K. Kokkonen1, P. Sharp, R. Albers, J. Dishaw, F. Louie, R. Smith 
01 Jan 1981
TL;DR: Circuitry to implement redundancy for fast static RAMs will be discussed, adding 6% to die area, 3ns to access time and ∼ 3% to circuit power, while allowing substantial yield improvements.
Abstract: Circuitry to implement redundancy for fast static RAMs will be discussed. For a 16K×1 static RAM with a 40ns typical access time, the circuitry adds 6% to die area, 3ns to access time, and ∼3% to circuit power, while allowing substantial yield improvements.

60 citations


Proceedings ArticleDOI
01 Jan 1981
TL;DR: A 64K×1 dynamic RAM with 100ns access time, fault-tolerant circuitry, high-speed serial data output and on-chip refresh circuitry, without the use of pin 1 for control, will be reported.
Abstract: A 64K×1 dynamic RAM with 100ns access time, fault-tolerant circuitry, high-speed serial data output and on-chip refresh circuitry, without the use of pin 1 for control, will be reported.


Book
01 Jan 1981
TL;DR: In this paper, Telgen distinguishes weakly and strictly redundant inequalities, redundant and implicit equalities, and introduces a concept of minimal representation of a system of linear constraints.
Abstract: The theoretical results are collected in Chapter 3. Dr Telgen builds on earlier work by Boot (1962), Zionts (1965), Thompson, Tonge and Zionts (1966) and others to construct a general theory of redundancy in linear programming. Telgen distinguishes weakly and strictly redundant inequalities, redundant and implicit equalities. He introduces a concept of minimal representation of a system of linear constraints. The new theoretical work is more general than previous work, which mostly can be regarded as special cases of the general theory now developed. There is little previous work on identifying implicit equalities and on minimal representation.
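The basic redundancy test can be sketched for two-variable systems: an inequality a·x + b·y ≤ r is redundant iff every vertex of the region defined by the remaining constraints already satisfies it. The naive vertex enumeration below assumes a bounded region and is an illustration of the concept, not Telgen's method.

```python
from itertools import combinations

def intersect(c1, c2):
    """Intersection point of the boundary lines of two constraints
    (a, b, r) meaning a*x + b*y <= r; None if the lines are parallel."""
    (a, b, r), (c, d, s) = c1, c2
    det = a * d - b * c
    if abs(det) < 1e-12:
        return None
    return ((r * d - b * s) / det, (a * s - r * c) / det)

def feasible(pt, cons, tol=1e-9):
    return all(a * pt[0] + b * pt[1] <= r + tol for a, b, r in cons)

def is_redundant(i, cons):
    """cons[i] is redundant iff every vertex of the polygon defined by
    the other constraints already satisfies it (bounded region assumed)."""
    others = cons[:i] + cons[i + 1:]
    verts = [p for c1, c2 in combinations(others, 2)
             if (p := intersect(c1, c2)) and feasible(p, others)]
    a, b, r = cons[i]
    return all(a * p[0] + b * p[1] <= r + 1e-9 for p in verts)
```

Removing a redundant inequality leaves the feasible region unchanged, which is the motivation for seeking a minimal representation.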

Journal ArticleDOI
TL;DR: Methods are presented here which allow reliability evaluation for systems with both intermittent and permanent faults and two reliability measures, instantaneous and durational reliabilities, are defined and methods to compute them are given.
Abstract: While significant results are available which allow estimation of reliability measures for systems with permanent faults, no generally applicable results are available for intermittent (transient) faults. Methods are presented here which allow reliability evaluation for systems with both intermittent and permanent faults. Two reliability measures, instantaneous and durational reliabilities, are defined and methods to compute them are given. Computed results for the durational reliability for various redundancy schemes are compared.
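A toy two-state intermittent-fault model makes the distinction between the two measures concrete: instantaneous reliability asks whether the fault is benign at one instant, durational reliability whether it stays benign over a whole interval. The transition probabilities, horizon, and window below are illustrative assumptions, not the paper's model.

```python
import random

def monte_carlo(p_ab=0.3, p_ba=0.1, steps=50, window=10, trials=5000, seed=1):
    """Two-state intermittent fault: per step, a benign fault turns active
    with probability p_ba and an active one turns benign with p_ab.
    Returns (instantaneous, durational) reliability estimates:
    P(benign at the final step) vs P(benign throughout the last
    `window` steps)."""
    rng = random.Random(seed)
    inst = dur = 0
    for _ in range(trials):
        active, ok_window = False, True
        for s in range(steps):
            if active:
                active = rng.random() >= p_ab
            else:
                active = rng.random() < p_ba
            if s >= steps - window and active:
                ok_window = False
        inst += not active
        dur += ok_window
    return inst / trials, dur / trials
```

Durational reliability is necessarily the smaller of the two, since surviving a window implies being benign at its end — the gap between them is what intermittent faults add over the permanent-fault picture.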

Posted Content
TL;DR: In this paper, a component method is presented which maximizes user specified convex combinations of canonical correlation and the two nonsymmetric redundancy measures presented by Stewart and Love, and an empirical example is also provided.
Abstract: The interrelationships between two sets of measurements made on the same subjects can be studied by canonical correlation. Originally developed by Hotelling [1936], the canonical correlation is the maximum correlation between linear functions (canonical factors) of the two sets of variables. An alternative statistic to investigate the interrelationships between two sets of variables is the redundancy measure, developed by Stewart and Love [1968]. Van Den Wollenberg [1977] has developed a method of extracting factors which maximize redundancy, as opposed to canonical correlation. A component method is presented which maximizes user-specified convex combinations of canonical correlation and the two nonsymmetric redundancy measures presented by Stewart and Love. Monte Carlo work comparing canonical correlation analysis, redundancy analysis, and various canonical/redundancy factoring analyses on the Van Den Wollenberg data is presented. An empirical example is also provided.
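Both statistics can be computed directly from correlation matrices. The sketch below (assuming NumPy is available) implements only the classical quantities — Hotelling's canonical correlations and the Stewart-Love total redundancy — not the paper's combined component method; the data and variable names are illustrative.

```python
import numpy as np

def canonical_correlations(X, Y):
    """Canonical correlations between column sets X and Y, computed from
    correlation matrices (Hotelling's formulation)."""
    X = (X - X.mean(0)) / X.std(0)
    Y = (Y - Y.mean(0)) / Y.std(0)
    n = len(X)
    Rxx, Ryy, Rxy = X.T @ X / n, Y.T @ Y / n, X.T @ Y / n
    # eigenvalues of Rxx^-1 Rxy Ryy^-1 Ryx are the squared canonical correlations
    M = np.linalg.solve(Rxx, Rxy) @ np.linalg.solve(Ryy, Rxy.T)
    lam = np.clip(np.sort(np.linalg.eigvals(M).real)[::-1], 0.0, 1.0)
    return np.sqrt(lam)

def total_redundancy(X, Y):
    """Stewart-Love total redundancy of Y given X: the average squared
    multiple correlation of each Y variable with the whole X set."""
    X = (X - X.mean(0)) / X.std(0)
    Y = (Y - Y.mean(0)) / Y.std(0)
    n = len(X)
    Rxx, Rxy = X.T @ X / n, X.T @ Y / n
    return float(np.trace(Rxy.T @ np.linalg.solve(Rxx, Rxy)) / Y.shape[1])
```

Note the asymmetry that motivates the redundancy measure: total_redundancy(X, Y) and total_redundancy(Y, X) generally differ, whereas the canonical correlations are symmetric in the two sets.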

Journal ArticleDOI
TL;DR: In this article, a method is presented to automatically inspect the block boundaries of a reconstructed two-dimensional transform coded image, to locate blocks which are most likely to contain errors, to approximate the size and type of error in the block, and to eliminate this estimated error from the picture.
Abstract: A method is presented to automatically inspect the block boundaries of a reconstructed two-dimensional transform coded image, to locate blocks which are most likely to contain errors, to approximate the size and type of error in the block, and to eliminate this estimated error from the picture. This method uses redundancy in the source data to provide channel error correction. No additional channel error protection bits or changes to the transmitter are required. It can be used when channel errors are unexpected prior to reception.
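A minimal version of the block-boundary inspection might score each block by the mean absolute jump across its left and top boundaries: a transmission error corrupts a whole transform block, producing discontinuities at its edges that the redundancy of natural images makes unlikely. The block size and scoring rule are illustrative assumptions; the paper's error-size estimation and correction steps are omitted.

```python
def block_scores(img, bs=8):
    """Mean absolute pixel jump across each block's left and top
    boundaries, keyed by block coordinates (by, bx)."""
    h, w = len(img), len(img[0])
    scores = {}
    for by in range(h // bs):
        for bx in range(w // bs):
            y0, x0 = by * bs, bx * bs
            jumps = []
            if x0 > 0:   # left boundary
                jumps += [abs(img[y][x0] - img[y][x0 - 1]) for y in range(y0, y0 + bs)]
            if y0 > 0:   # top boundary
                jumps += [abs(img[y0][x] - img[y0 - 1][x]) for x in range(x0, x0 + bs)]
            scores[(by, bx)] = sum(jumps) / len(jumps) if jumps else 0.0
    return scores

def most_suspect_block(img, bs=8):
    """Block most likely to contain a channel error, by boundary score."""
    scores = block_scores(img, bs)
    return max(scores, key=scores.get)
```

Because the detection uses only the received picture, it needs no extra protection bits and no change at the transmitter, matching the abstract's claim.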

Journal ArticleDOI
Wayne S. DeSarbo1
TL;DR: The canonical correlation, originally developed by Hotelling [1936], is the maximum correlation between linear functions (canonical factors) of two sets of measurements made on the same subjects; the redundancy measure of Stewart and Love [1968] is an alternative statistic for studying the interrelationships between the two sets of variables.
Abstract: The interrelationships between two sets of measurements made on the same subjects can be studied by canonical correlation. Originally developed by Hotelling [1936], the canonical correlation is the maximum correlation between linear functions (canonical factors) of the two sets of variables. An alternative statistic to investigate the interrelationships between two sets of variables is the redundancy measure, developed by Stewart and Love [1968]. Van Den Wollenberg [1977] has developed a method of extracting factors which maximize redundancy, as opposed to canonical correlation.


Patent
Fumio Baba1
12 Nov 1981
TL;DR: In this paper, an integrated semiconductor circuit device is provided with a special purpose readable indicator without providing additional pins, which can be utilized to store information pertinent to the operativeness of the integrated circuit.
Abstract: An integrated semiconductor circuit device is provided with a special purpose readable indicator without providing additional pins. The indicator may be utilized to store information pertinent to the operativeness of the integrated circuit. The results of quality control monitoring may be written into the store to serve as a flag. A specific application is in semiconductor memory arrays having redundancy memory capability to automatically replace defective memory cells in the primary array. To enable one to know whether or not the redundancy memory array is being used, this information is written into a quality control storage cell. This cell may be a ROM system which may be accessed during a check mode.

Proceedings Article
01 Jan 1981
TL;DR: In this paper, the authors present a statistical reliability analysis of photovoltaic component design, including cell failure, interconnect fatigue, glass breakage, and electrical insulation breakdown, and present a means of selecting the cost-optimal level of component failures, circuit redundancy, and module replacement.
Abstract: Several statistical reliability studies have been conducted in areas of photovoltaic component design covering cell failure, interconnect fatigue, glass breakage and electrical insulation breakdown. This paper integrates the results from these various studies and draws general conclusions relative to optimal reliability features for future modules. The described analysis is based on designing for specified low levels of component failures and then controlling the degrading effects of the failures through the use of fault tolerant circuitry and module replacement. Means of selecting the cost-optimal level of component failures, circuit redundancy, and module replacement are described.

Journal ArticleDOI
TL;DR: The investigation presented in this paper aims to utilize the inherent information embedded in the code interleaving scheme when used with burst-error channels to reduce the overall redundancy requirement.
Abstract: Burst-error channels have been used to model a large class of modern communication media, and the problem of communicating reliably through such media has received much study [1]-[9]. Existing techniques include two-way communication schemes that involve error detection and retransmission, and schemes that utilize error correcting codes in code interleaving. The error-detection and retransmission scheme is simple, but its applicability has been restricted to limited environments. On the other hand, the concept of code interleaving has proved to be versatile and effective. Code interleaving distributes the error detection and correction burden among the component codes and thus lowers the overall redundancy requirement. However, the memory characteristics of the burst-error channel have not been used. This omission has prompted the investigation presented in this paper to utilize the inherent information embedded in the code interleaving scheme when used with burst-error channels. The concept of erasure decoding is introduced, leading to some useful coding and decoding strategies. Theoretical formulations are devised to predict code performance, and their validity is verified with computer simulations.
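The burst-spreading property that interleaving provides is easy to demonstrate: codewords are written row-wise and transmitted column-wise, so a burst no longer than the interleaving depth erases at most one symbol per codeword, which each component code can then handle. Marking erased symbols with '?' is an illustrative convention; the paper's erasure-decoding strategies build on exactly this structure.

```python
def interleave(codewords):
    """Row-wise write, column-wise read (equal-length codewords assumed):
    consecutive channel symbols come from different codewords."""
    return [cw[i] for i in range(len(codewords[0])) for cw in codewords]

def deinterleave(stream, depth):
    """Invert interleave() for `depth` codewords."""
    n = len(stream) // depth
    return [[stream[i * depth + j] for i in range(n)] for j in range(depth)]
```

With depth d, a burst of length b is split so that each codeword sees at most ceil(b / d) bad symbols — the mechanism by which interleaving lowers the per-codeword redundancy required.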

Journal ArticleDOI
TL;DR: In this article, a scheme using multiple redundant computing devices with a majority voter improves the reliability of the output of the computing device, and an iterative structure is introduced for voters in order to improve the total reliability.
Abstract: A scheme using multiple redundant computing devices with a majority voter improves the reliability of the output of the computing device. This paper analyzes some modular redundant systems with majority voters. An iterative structure is introduced for voters in order to improve the total reliability. The reliability of voters is introduced in several ways. Two models of imperfect voters are discussed in detail, that is, output-imperfect model and semiperfect model. The former is suitable for the case where voters are treated in the same manner as other modules. The latter is appropriate for the case where the characteristics of the voter are taken into account. Three theorems show the existence of optimal majority-voted logic circuit for a given value of the reliability of each module under some reasonable assumptions.
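For a perfect voter, majority-vote reliability is a binomial sum over module outcomes; an output-imperfect voter can be approximated by multiplying in the voter's own reliability. This is a simplification of the paper's analysis — the assumption that any voter failure corrupts the output outright is ours.

```python
from math import comb

def majority_reliability(r, n):
    """Probability that a majority of n i.i.d. modules (each correct
    with probability r) deliver the correct output; voter assumed perfect."""
    k = n // 2 + 1
    return sum(comb(n, i) * r**i * (1 - r)**(n - i) for i in range(k, n + 1))

def tmr_with_imperfect_voter(r, rv):
    """TMR under a crude output-imperfect voter model: the voter itself
    must also work (reliability rv) for the system output to be correct."""
    return rv * majority_reliability(r, 3)
```

The sketch also exposes the classic threshold: majority voting improves on a single module only when r > 0.5, which is why the paper's optimality theorems need assumptions on module reliability.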


Journal ArticleDOI
TL;DR: The authors found that women respond to redundancy differently from men and even in a unique way, for example by using their redundancy payments to bring forward their plans to have a baby, even though they may well be more susceptible to redundancy than men and have distinctive orientations to work.
Abstract: t has become commonpiace to argue that women employees are more vulnerable to redundancy than their male counterparts.^ Their greater 'disposability' is often attributed^ at least partly, to their own attitudes which foster a passive response to redundancy. This paper reports two studies of redundancy situations in which women workers were involved^ Whilst they support some of the assumptions underlying the greater-disposability thesis, they point to the need for more sophisticated conceptions of reactions to redundancy than can be captured by a simple acquiescence/opposition dichotomy.' In certain situations women can respond to redundancy differently from men and even in a unique way, for example by using their redundancy payments to bring forward their plans to have a baby. Nevertheless, the implication of the paper is that we cannot assume women v/ill respond qualitatively difterently from men, even though they may well be more susceptible to redundancy than men and have distinctive orientations to work. Brown has argued that there has been a general neglect of female employees in industrial studies, and moreover that what consideration has been given to them has been largely unsatisfactory. Writers often treat employees as 'unisex\" or alternatively only consider women in so far as they are a 'special category of employees who give rise to certain problems'.^ The implication of Brown's treatment of women is that we should locate them within a more general sociology in which the differences between workers are accorded significance and indeed are its central concern, since it is exploring variability in orientations to work, alienation, job satisfaction and collective action. This appears to be the kind of approach adopted by such writers as Barron and

Journal ArticleDOI
TL;DR: Differences were found at the level of highly and medium repetitive sequences, thus demonstrating that some DNA reassociation classes may undergo amplification during root development.
Abstract: SummaryThe first step of differentiation in the root segments ofAllium cepa containing metaxylem cells in different stages of differentiation were studied by DNA reassociation curves and compared to meristem cell extracted DNA. Upon sonication of DNA samples to about 400 base pairs, the reassociation profiles of the heat denatured DNA, were spectrophotometrically followed at two different concentrations. The kinetic complexities,i.e., the number of base pairs per haploid genome of a given sequence and its redundancy were calculated. Differences were found at the level of highly and medium repetitive sequences, thus demonstrating that some DNA reassociation classes may undergo amplification during root development.

Journal ArticleDOI
TL;DR: Methods for the construction of optimal checks, required software and hardware redundancy, and implementation of the corresponding error detecting/correcting procedures by a distributed system are described.
Abstract: We consider methods of error detection and/or error correction in software and hardware of a distributed system computing values of numerical functions. These methods are based on software and hardware redundancy for the computation of additional check functions. The check functions are easily derived for any given multiplicity of errors. The redundancy does not depend on the number of processors in the original system and depends only on the multiplicity of errors. We describe methods for the construction of optimal checks, required software and hardware redundancy, and implementation of the corresponding error detecting/correcting procedures by a distributed system.
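The abstract does not give the check functions, but the flavor can be shown with a toy linear check: for f(x) = x², the second difference vals[i+2] − 2·vals[i+1] + vals[i] is the constant 2, so a violated check localizes a corrupted value to a small window. The function and the check are illustrative assumptions in the spirit of the paper, not its construction.

```python
def check_squares(vals):
    """vals[i] is supposed to equal i**2. The linear check
    vals[i+2] - 2*vals[i+1] + vals[i] == 2 holds for every window i
    of correct values; return the indices of failing windows."""
    return [i for i in range(len(vals) - 2)
            if vals[i + 2] - 2 * vals[i + 1] + vals[i] != 2]
```

As in the paper, the check cost here is independent of how many processors computed the values: one extra check evaluation per window, regardless of system size.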

Proceedings ArticleDOI
01 Jan 1981
TL;DR: In this article, the authors present an algorithm to detect and isolate the first failure of any one of twelve duplex control sensor signals being monitored using like-signal differences for fault detection while relying upon analytic redundancy relationships among unlike quantities.
Abstract: This paper reviews the formulation and flight test results of an algorithm to detect and isolate the first failure of any one of twelve duplex control sensor signals being monitored. The technique uses like-signal differences for fault detection while relying upon analytic redundancy relationships among unlike quantities to isolate the faulty sensor. The fault isolation logic utilizes the modified sequential probability ratio test, which explicitly accommodates the inevitable irreducible low frequency errors present in the analytic redundancy residuals. In addition, the algorithm uses sensor output selftest, which takes advantage of the duplex sensor structure by immediately removing a highly erratic sensor from control calculations and analytic redundancy relationships while awaiting a definitive fault isolation decision via analytic redundancy. This study represents a proof of concept demonstration of a methodology that can be applied to duplex or higher flight control sensor configurations and, in addition, can monitor the health of one simplex signal per analytic redundancy relationship.
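The sequential test at the core of the isolation logic can be sketched in its classical Wald form; the paper uses a modified SPRT that accommodates low-frequency errors in the analytic-redundancy residuals, and the mean shift, noise level, and error rates below are illustrative assumptions.

```python
from math import log

def sprt(residuals, mu1=1.0, sigma=1.0, alpha=0.01, beta=0.01):
    """Wald SPRT on Gaussian residuals for a mean shift 0 -> mu1
    (a fault signature). alpha/beta are the target false-alarm and
    missed-detection rates; returns 'fault', 'no fault', or 'undecided'."""
    upper, lower = log((1 - beta) / alpha), log(beta / (1 - alpha))
    llr = 0.0
    for x in residuals:
        llr += (mu1 / sigma**2) * (x - mu1 / 2)   # Gaussian log-likelihood ratio
        if llr >= upper:
            return "fault"
        if llr <= lower:
            return "no fault"
    return "undecided"
```

The sequential form matters for flight control: a decisive residual sequence triggers isolation after only a few samples, while ambiguous data defer the decision rather than forcing a premature one.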


Journal ArticleDOI
TL;DR: The extent of possible trade-offs between query time and redundancy relative to a large class of possible data structures is explored.

Journal ArticleDOI
TL;DR: In this paper, a benefit analysis is carried out for Warm Stand-By (WSB) and Triple Modular Redundancy (TMR) for commercial systems, where maintenance and disruption of service are the main sources of operating cost and only a policy of corrective maintenance is considered.
Abstract: The use of redundancy in commercial systems can bring two improvements: 1) more reliable systems less subject to disruption of service due to failures, and 2) a reduction in maintenance and outage costs. This paper is concerned with evaluating the reduction in maintenance and outage cost. We introduce a figure of merit, the cost reduction (CR), to show the cost advantages of redundancy. Using the CR concept, a benefit analysis is carried out for Warm Stand-By (WSB) and Triple Modular Redundancy (TMR). Maintenance and disruption of service are assumed to be the main sources of operating cost, and only a policy of corrective maintenance is considered. Results are expressed as a function of the ratio between the outage cost rate and the maintenance cost rate (ro/rm), and the conditions are stated under which 1) WSB and TMR are economically attractive with respect to a non-redundant system, and 2) TMR is economically better than WSB. It is shown that when designing for benefit: 1) maintainability is the main feature when outage costs and maintenance costs have the same order of magnitude; 2) the coverage factor is the main feature when disruption of service has an appreciable cost consequence.