
Showing papers on "Control reconfiguration published in 1988"


Journal ArticleDOI
TL;DR: In this paper, a scheme that utilizes feeder reconfiguration as a planning and/or real-time control tool to restructure the primary feeder for loss reduction is presented.
Abstract: Feeder reconfiguration is defined as altering the topological structures of distribution feeders by changing the open/closed states of the sectionalizing and tie switches. A scheme is presented that utilizes feeder reconfiguration as a planning and/or real-time control tool to restructure the primary feeder for loss reduction. The mathematical foundation of the scheme is given. The solution is illustrated on simple examples.

1,297 citations
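The loss-reduction use of switching described above can be illustrated with a toy model: a single-loop feeder fed from both ends, where moving the open point changes the I²R losses. All names and network data here (`feeder_losses`, `best_open_point`, unit resistance and load currents) are an invented sketch, not the paper's formulation:

```python
# Hedged sketch: search for the open point that minimizes I^2*R losses on a
# single-loop feeder tied between two sources. Invented data, not the paper's scheme.

def feeder_losses(loads, r, open_at):
    """Losses when the loop is opened after node index `open_at`.
    Nodes 0..open_at are fed from the left source, the rest from the right.
    Each branch carries the sum of its downstream load currents."""
    loss = 0.0
    for i in range(open_at + 1):               # left feeder branches
        current = sum(loads[i:open_at + 1])
        loss += current ** 2 * r
    for i in range(open_at + 1, len(loads)):   # right feeder branches
        current = sum(loads[open_at + 1:i + 1])
        loss += current ** 2 * r
    return loss

def best_open_point(loads, r=0.1):
    """Exhaustively pick the tie-switch position with minimum losses."""
    return min(range(len(loads)), key=lambda k: feeder_losses(loads, r, k))
```

An asymmetric load near one source pulls the best open point toward the far end, mirroring the feeder-reconfiguration intuition.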


Journal ArticleDOI
TL;DR: A new approach is described in which a historical database forms the conceptual basis for the information processed by the monitor, which permits advances in specifying the low-level data collection, specifying the analysis of the collected data, performing the analysis, and displaying the results.
Abstract: Monitoring is an essential part of many program development tools, and plays a central role in debugging, optimization, status reporting, and reconfiguration. Traditional monitoring techniques are inadequate when monitoring complex systems such as multiprocessors or distributed systems. A new approach is described in which a historical database forms the conceptual basis for the information processed by the monitor. This approach permits advances in specifying the low-level data collection, specifying the analysis of the collected data, performing the analysis, and displaying the results. Two prototype implementations demonstrate the feasibility of the approach.

205 citations


Journal ArticleDOI
TL;DR: A method is proposed for achieving fault tolerance by introducing a redundant stage for a special-purpose fast Fourier transform (FFT) processor and has 100% detection and location capability, regardless of the magnitude of the roundoff errors.
Abstract: A method is proposed for achieving fault tolerance by introducing a redundant stage for a special-purpose fast Fourier transform (FFT) processor. A concurrent error-detection technique, called recomputing by alternate path, is used to detect errors during normal operation. Once an error is detected, a faulty butterfly can be located with log (N+5) additional cycles. The method has 100% detection and location capability, regardless of the magnitude of the roundoff errors. A gracefully degraded reconfiguration using a redundant stage is introduced. This technique ensures a high improvement in reliability and availability. Hardware overhead is O(1/log N) with some additional comparators and switches. The method can be applied to other algorithms implementable on the butterfly structure.

121 citations
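The concurrent error-detection idea (recompute and compare, with a tolerance so roundoff is never flagged as a fault) can be sketched as follows; the recursive `fft` and the duplicate recomputation stand in for the paper's alternate-path hardware scheme and are illustrative only:

```python
import cmath

def fft(x):
    """Radix-2 Cooley-Tukey FFT (length must be a power of two)."""
    n = len(x)
    if n == 1:
        return x[:]
    even, odd = fft(x[0::2]), fft(x[1::2])
    out = [0j] * n
    for k in range(n // 2):
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out

def check_by_recomputation(x, inject_fault=False, tol=1e-9):
    """Concurrent error detection by duplicate computation and comparison.
    A fault is flagged only if results differ by more than the roundoff
    tolerance `tol` (cf. the paper's roundoff-independent detection claim)."""
    primary = fft(x)
    if inject_fault:
        primary[0] += 1.0          # simulated stuck butterfly output
    alternate = fft(x)             # stands in for the alternate-path recomputation
    return any(abs(p - a) > tol for p, a in zip(primary, alternate))
```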


Proceedings ArticleDOI
01 Jan 1988
TL;DR: A description is given of Clouds, an operating system designed to run on a set of general-purpose computers connected via a medium-to-high-speed local area network; Clouds does away with the need for file systems, replacing them with a more powerful concept, namely, the object system.
Abstract: A description is given of Clouds, an operating system designed to run on a set of general-purpose computers that are connected via a medium-to-high-speed local area network. The structure of Clouds promotes transparency, support for advanced programming paradigms, and integration of resource management, as well as a fair degree of autonomy at each site. The system structuring paradigm chosen for Clouds is an object/thread model. All instances of services, programs, and data in Clouds are encapsulated in objects. The concept of persistent objects does away with the need for file systems, replacing them with a more powerful concept, namely, the object system. The facilities in Clouds include integration of resources by location transparency; support for various types of atomic operations, including conventional transactions; advanced support for achieving fault tolerance; and provisions for dynamic reconfiguration.

89 citations


Proceedings ArticleDOI
23 May 1988
TL;DR: In this paper, the authors report evaluation results obtained on the six-degree-of-freedom nonlinear simulation of the Grumman Combat Reconfigurable Control Aircraft (CRCA).
Abstract: The authors report evaluation results obtained on the six-degree-of-freedom nonlinear simulation of the Grumman Combat Reconfigurable Control Aircraft (CRCA). The reconfiguration strategy consists of a robust flight control system tolerant of low-level surface damage, a hierarchical failure detection, isolation, and estimation (FDIE) system identifying actuator failures and moderate-to-severe surface damage, and a reconfiguration logic in which the pseudo surface resolver (PSR) is reconfigured after impairment to recover performance and minimize transients. Preliminary performance results are presented.

60 citations
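The pseudo-surface-resolver notion of redistributing commands over redundant surfaces can be hinted at with a minimum-norm allocation sketch; the `allocate` function and the surface-effectiveness numbers are hypothetical, not the CRCA implementation:

```python
def allocate(effectiveness, moment_cmd):
    """Minimum-norm allocation u = B^T (B B^T)^-1 * m for a single-axis
    moment command spread across redundant surfaces. A hedged sketch of a
    pseudo-surface-resolver idea, not Grumman's PSR."""
    bbt = sum(b * b for b in effectiveness)
    return [b * moment_cmd / bbt for b in effectiveness]

# Nominal: three equally effective surfaces share the command.
u_nominal = allocate([1.0, 1.0, 1.0], 3.0)   # -> [1.0, 1.0, 1.0]
# After a failure, zero the lost surface's effectiveness and re-resolve;
# the survivors pick up the load while the commanded moment is preserved.
u_reconf = allocate([1.0, 0.0, 1.0], 3.0)    # -> [1.5, 0.0, 1.5]
```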


Proceedings ArticleDOI
05 Jun 1988
TL;DR: In this article, the authors consider image computations on a mesh with reconfigurable bus (reconfigurable mesh) and propose a reconfiguration scheme to dynamically obtain various interconnection patterns among the processor elements.
Abstract: The authors consider image computations on a mesh with reconfigurable bus (reconfigurable mesh). The architecture consists of an array of processors overlaid with a reconfigurable bus system. The reconfiguration scheme can be used to dynamically obtain various interconnection patterns among the processor elements. The reconfiguration scheme supports several parallel techniques developed on the CRCW PRAM (concurrent read, concurrent write parallel random-access machine) model, leading to asymptotically superior solution times to image problems compared to those on the mesh with multiple broadcasting, the mesh with multiple buses, the mesh-of-trees, and the pyramid computer.

57 citations


Proceedings ArticleDOI
23 May 1988
TL;DR: The objective of the Self-Repairing Flight Control System Program was to improve the reliability, maintainability, survivability, and life-cycle cost of aircraft flight control systems through aerodynamic reconfiguration and maintenance diagnostics.
Abstract: A description is given of the Self-Repairing Flight Control System Program, which began in 1984. The program objective was to improve the reliability, maintainability, survivability, and life-cycle cost of aircraft flight control systems through aerodynamic reconfiguration and maintenance diagnostics. A description is given of the four program tasks designed to satisfy the objective. Reconfiguration technology development for future fighters addresses the reliability, survivability, and life-cycle cost objectives. Maintenance diagnostics tasks address the maintainability objective. The proof of concept flight demonstration and advanced flight demonstration tasks support the transition to new weapon systems. The technology is being applied to current and advanced fighter aircraft through feasibility studies, development studies, design criteria development, ground simulations, field demonstrations, and fighter flight tests.

49 citations


Proceedings ArticleDOI
27 Jun 1988
TL;DR: A discussion is presented of a fault-tolerant hypercube multiprocessor architecture which uses a novel algorithm-based fault-detection approach for identifying faulty processors and proposes a reconfiguration strategy for reconfiguring the system around faulty processors by introducing spare links and nodes.
Abstract: A discussion is presented of a fault-tolerant hypercube multiprocessor architecture which uses a novel algorithm-based fault-detection approach for identifying faulty processors. The scheme involves the detection and location of faulty processors concurrently with the actual execution of parallel applications on the hypercube. The authors have implemented system-level fault-detection mechanisms for various parallel applications on a 16-processor Intel iPSC hypercube multiprocessor. They report on the results of two applications: matrix multiplication and fast Fourier transform. They have performed extensive studies of fault coverage of their system-level fault-detection schemes in the presence of finite-precision arithmetic, which affects the system-level encodings. They propose a reconfiguration strategy for reconfiguring the system around faulty processors by introducing spare links and nodes.

44 citations
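Algorithm-based fault detection of the kind used for the matrix-multiplication application can be sketched with the classic checksum encoding (a standard ABFT construction, not necessarily the authors' exact encoding):

```python
def checksum_matmul(A, B):
    """Full checksum product: append a column-checksum row to A and a
    row-checksum column to B, then multiply. Each element of the product
    would be computed by a different processor in the ABFT setting."""
    Af = A + [[sum(col) for col in zip(*A)]]   # column-checksum row
    Bf = [row + [sum(row)] for row in B]       # row-checksum column
    return [[sum(a * b for a, b in zip(ra, cb)) for cb in zip(*Bf)] for ra in Af]

def consistent(C, n, tol=1e-9):
    """Verify the product's checksums. A fault in a single element breaks
    one row checksum and one column checksum, which locates the faulty
    processor; `tol` absorbs finite-precision arithmetic."""
    rows_ok = all(abs(sum(C[i][:n]) - C[i][n]) <= tol for i in range(n))
    cols_ok = all(abs(sum(C[i][j] for i in range(n)) - C[n][j]) <= tol
                  for j in range(n))
    return rows_ok and cols_ok
```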


Patent
26 Sep 1988
TL;DR: In this article, a data processing system includes a plurality of host systems and peripheral subsystems, particularly data storage subsystems (DASD), which can be clustered into power clusters.
Abstract: A data processing system includes a plurality of host systems and peripheral subsystems, particularly data storage subsystems. Each of the data storage subsystems includes a plurality of control units attaching a plurality of data storage devices such as direct access storage devices (DASD) for storing data on behalf of the various host systems. Each of the control units has a separate storage path for accessing the peripheral data storage devices using dynamic pathing. The storage paths can be clustered into power clusters. Maintenance personnel acting through maintenance panels on either the control units or the peripheral data storage devices activate the subsystem to request reconfiguration of the subsystem from all of the host systems connected to the subsystem. The host systems can honor the request or reject it based upon diverse criteria. Upon each of the host systems approving the reconfiguration, the subsystem 13 is reconfigured for maintenance purposes. Upon completion of the maintenance procedures, a second reconfiguration request is sent to the host systems for causing quiesce devices to resume normal operations.

36 citations



Journal ArticleDOI
TL;DR: It is argued that all of these levels are useful, in the sense that proven dependability procurement techniques can be applied at each level, and that it is beneficial to have distinct, precisely defined terminology for describing impairments to and procurement strategies for computer system dependability at these levels.
Abstract: A unified framework and terminology for the study of computer system dependability is presented. Impairments to dependability are viewed from six abstraction levels. It is argued that all of these levels are useful, in the sense that proven dependability procurement techniques can be applied at each level, and that it is beneficial to have distinct, precisely defined terminology for describing impairments to and procurement strategies for computer system dependability at these levels. The six levels in the proposed framework are:

1. Defect level or component level, dealing with deviant atomic parts.
2. Fault level or logic level, dealing with deviant logic values or path selections.
3. Error level or information level, dealing with deviant internal states.
4. Malfunction level or system level, dealing with deviant functional behavior.
5. Degradation level or service level, dealing with deviant performance.
6. Failure level or result level, dealing with deviant outputs or actions.

Briefly, a hardware or software component may be defective (perfect hardware may also become defective due to wear and aging). Certain system states expose the defect, resulting in the development of a logic-level fault. Information flowing within a faulty system may become contaminated, leading to the presence of an error. An erroneous system state may result in a subsystem malfunction. Automatic or manually controlled reconfiguration can isolate or bypass the malfunctioning subsystem but may lead to a degraded performance or service. Serious performance degradation may lead to a result-level system failure when untrustworthy or untimely results are produced. Finally, a failed computer system can have adverse effects on the larger societal or corporate system into which it is incorporated.

Proceedings ArticleDOI
01 Jun 1988
TL;DR: A summary is presented of a systematic approach developed by the authors for spare allocation and reconfiguration, modeled in graph-theoretic terms in which spare allocation for a specific reconfigurable system is shown to be equivalent to either a graph-matching or a graph-dominating-set problem.
Abstract: One approach to enhancing the yield of large area VLSI is through design for yield enhancement by means of restructurable interconnect, logic and computational elements. Although extensive literature exists concerning architectural design for inclusion of spares and restructuring mechanisms in memories and processor arrays, little research has been published on optimal spare allocation and reconfiguration in the presence of multiple defects. In this paper, a summary of a systematic approach developed by the authors for spare allocation and reconfiguration is presented. Spare allocation is modeled in graph theoretic terms in which spare allocation for a specific reconfigurable system is shown to be equivalent to either a graph matching or a graph dominating set problem. The complexity of optimal spare allocation for each of the problem classes is analyzed in this paper and reconfiguration algorithms are provided.
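For small instances, the spare-allocation decision can be checked exhaustively; this `repairable` helper is a brute-force sketch of the row/column spare-assignment problem the paper analyzes via matching and dominating sets, not one of its algorithms:

```python
from itertools import combinations

def repairable(defects, spare_rows, spare_cols):
    """Decide whether defect cells (row, col) can all be covered by
    replacing at most `spare_rows` rows and `spare_cols` columns.
    Exhaustive over row choices, so only suitable for tiny examples;
    the general problem is what the paper maps to graph problems."""
    rows = sorted({r for r, _ in defects})
    for k in range(min(spare_rows, len(rows)) + 1):
        for chosen in combinations(rows, k):
            leftover_cols = {c for r, c in defects if r not in chosen}
            if len(leftover_cols) <= spare_cols:
                return True
    return False
```

For example, a row with three defects plus one stray defect is repairable with one spare row and one spare column, but not by columns alone if only two spare columns are available.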

Journal ArticleDOI
TL;DR: A reconfiguration scheme is presented that is suitable for both yield and reliability enhancement of large-area VLSI implementations of binary tree architectures and makes use of partially global redundancy to allow clustered effects to be tolerated.
Abstract: A reconfiguration scheme is presented that is suitable for both yield and reliability enhancement of large-area VLSI implementations of binary tree architectures. The approach proposed makes use of partially global redundancy to allow clustered effects to be tolerated. The binary tree is cut a few levels above the leaves to form an upper subtree and many lower subtrees, with spare processors being grouped adjacent to the root of each lower subtree. Redundant links with programmable switches are used to permit reconfiguration. The cost of the scheme, in terms of redundant hardware, is comparable to that of other schemes. An O(N) H-tree layout is used. In comparison to existing schemes, the proposed scheme gives much better yield and better reliability.

Proceedings ArticleDOI
25 May 1988
TL;DR: Backtracking is introduced into the algorithm for maximizing the processor utilization, at the same time keeping the complexity of the interconnection network as simple as possible.
Abstract: The fault-tolerance scheme consists of two phases: testing and locating faults (fault diagnosis), and reconfiguration. The first phase uses an online error-detection technique that achieves a compromise between the space and time redundancy approaches. This technique reduces the rollback time considerably and is capable of detecting permanent as well as transient faults. Reconfiguration consists of mapping the function of the faulty processor element onto an adjacent nonfaulty neighbor, which is achieved by using a global control responsible for changing the states of the switches in the interconnection network. Backtracking is introduced into the algorithm for maximizing the processor utilization, at the same time keeping the complexity of the interconnection network as simple as possible. A reliability analysis of this scheme using a Markov model and a comparison with some previous schemes are given.
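The reconfiguration phase (remapping each faulty processor element onto a distinct healthy neighbor) with backtracking can be sketched as a small constraint search; `remap` and its adjacency encoding are invented for illustration, and the switch-setting details are abstracted away:

```python
def remap(faulty, neighbors):
    """Backtracking assignment of each faulty PE's work to a distinct
    non-faulty adjacent PE. Returns {faulty_pe: substitute_pe} or None
    if no consistent assignment exists. A sketch of the idea only."""
    faulty = sorted(faulty)
    assign = {}

    def solve(i):
        if i == len(faulty):
            return True
        for n in neighbors[faulty[i]]:
            if n not in faulty and n not in assign.values():
                assign[faulty[i]] = n          # tentative choice
                if solve(i + 1):
                    return True
                del assign[faulty[i]]          # backtrack
        return False

    return assign if solve(0) else None
```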

Journal ArticleDOI
TL;DR: The recovery metaprogram (RMP), which monitors the run-time behavior of the application program and coordinates error detection, recovery, and reconfiguration, is examined, focusing on privilege levels, which provide protection against error propagation, RMP implementation, and conversations.

Abstract: The effect on software fault tolerance of hardware features such as hierarchical privilege levels (rings), the use of descriptors for memory protection, separated virtual address spaces, and ring crossings that enforce specific entry points is considered. A strategy that uses a separate programming layer, the recovery layer, to handle fault-tolerant aspects of process interactions is discussed. The recovery metaprogram (RMP), which monitors the run-time behavior of the application program and coordinates error detection, recovery, and reconfiguration, is examined, focusing on privilege levels, which provide protection against error propagation, RMP implementation, and conversations. The Intel 80286 has been used as a sample implementation vehicle, but most of the discussion applies to any machine with a similar range of features. Extension to multiprocessor systems is indicated.

Patent
04 Aug 1988
TL;DR: In this article, a data processing system includes a plurality of host systems and peripheral subsystems, particularly data storage subsystems; each of the control units has a separate storage path for accessing the peripheral data storage devices using dynamic pathing.
Abstract: A data processing system includes a plurality of host systems and peripheral subsystems, particularly data storage subsystems. Each of the data storage subsystems includes a plurality of control units attaching a plurality of data storage devices such as direct access storage devices (DASD) for storing data on behalf of the various host systems. Each of the control units has a separate storage path for accessing the peripheral data storage devices using dynamic pathing. The storage paths can be clustered into power clusters. Maintenance personnel acting through maintenance panels on either the control units or the peripheral data storage devices activate the subsystem to request reconfiguration of the subsystem from all of the host systems connected to the subsystem. The host systems can honour the request or reject it based upon diverse criteria. Upon each of the host systems approving the reconfiguration, the subsystem 13 is reconfigured for maintenance purposes. Upon completion of the maintenance procedures, a second reconfiguration request is sent to the host systems for causing quiesce devices to resume normal operations.

Journal ArticleDOI
TL;DR: In this article, a dynamic model for a nuclear power plant steam generator (vertical, preheated, U-tube recirculation-type) is formulated as a sixth-order nonlinear system.

Journal ArticleDOI
TL;DR: The MAPCON (MAP configuration) system is a knowledge-based tool used to configure MAP (manufacturing automation protocol) version 2.1 networks and was built entirely in Knowledge Craft, a language and development environment for building expert systems on the TI Explorer Lisp machine.
Abstract: The MAPCON (MAP configuration) system is a knowledge-based tool used to configure MAP (manufacturing automation protocol) version 2.1 networks. Configuration management deals with determining the characteristics of the network and setting the parameters of the devices attached to the network in such a way as to make the network operational. MAPCON performs static configuration and can be used for reconfiguration purposes in an offline mode. MAPCON does not interface to the network and hence does not interact with the operational network. The parameters that are considered are the ones required for MAP 2.1 networks. The rules for identifying values of these parameters are developed and implemented. The MAPCON expert system was built entirely in Knowledge Craft, a language and development environment for building expert systems on the TI Explorer Lisp machine. Knowledge for the MAPCON expert system was obtained through interviews with MAP experts. This knowledge was then organized and implemented in Knowledge Craft. In addition, an elegant user interface was developed for the system.

Proceedings ArticleDOI
27 Jun 1988
TL;DR: The authors propose mapping schemes for certain lengths of loops so that the resulting systems can recover from any fault within one step.
Abstract: Reconfiguration algorithms are invoked when a fault is detected in the original loop or multidimensional grid. The reconfiguration algorithms are able to reach an equivalent set of topologies within the same architecture in a distributed manner. The authors propose mapping schemes for certain lengths of loops so that the resulting systems can recover from any fault within one step. With application of one of the schemes, the reconfiguration strategy for a multidimensional grid system can have better performance.

Journal ArticleDOI
TL;DR: The distributed associative memory (DAM) model is proposed for distributed and fault-tolerant computation related to retrieval tasks; working models have been developed and backed up by experimental results that show the feasibility of such an approach.
Abstract: The distributed associative memory (DAM) model is proposed for distributed and fault-tolerant computation related to retrieval tasks. The fault tolerance is with respect to noise in the input key data and/or local and global failures in the memory itself. Working models for fault-tolerant image reconfiguration and database information retrieval have been developed and backed up by experimental results that show the feasibility of such an approach.
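A minimal correlation-memory sketch shows the kind of fault tolerance meant: recall degrades gracefully under noisy keys or zeroed (failed) weights. The `train`/`recall` functions are a generic outer-product associative memory, assumed here as a stand-in for the DAM model:

```python
def train(pairs, n):
    """Correlation (outer-product) memory M = sum over pairs of v * k^T,
    for bipolar (+1/-1) keys and values of length n. Storage is spread
    across all weights, which is the source of the fault tolerance."""
    M = [[0] * n for _ in range(n)]
    for key, val in pairs:
        for i in range(n):
            for j in range(n):
                M[i][j] += val[i] * key[j]
    return M

def recall(M, key):
    """Threshold M*key back to a bipolar pattern; small key noise or a few
    zeroed weights leave the sign, and thus the recalled value, intact."""
    return [1 if sum(M[i][j] * key[j] for j in range(len(key))) >= 0 else -1
            for i in range(len(M))]
```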

Patent
09 Dec 1988
TL;DR: In this paper, the authors describe a system with two processors and an interconnected modem, with one processor functioning as a control for both the modem and the second processor for test or reconfiguration purposes.
Abstract: Preferred embodiments include systems with two processors and an interconnected modem, one processor functioning as a control for both the modem and the second processor. This permits remote communication with the second processor for test or reconfiguration purposes.

Proceedings ArticleDOI
C. J. Dittmar1
15 Jun 1988
TL;DR: In this article, the Hyperstable Model-Following Flight Control (HMFC) approach is proposed for reconfigurable flight control systems, which is an implicit approach that is an order of magnitude smaller in size than the explicit approach, termed the Control Reconfiguration Feature (CRF).
Abstract: Techniques have been developed for remixing the commands issued by flight control laws that assume unimpaired operation. This approach allows impairments to be accommodated that previously were not. This increase in fault tolerance does not decrease reliability because no additional hardware is installed on the aircraft. Instead, previously existing redundant control surfaces are used to greater advantage. A recent effort has focused on an implicit approach as opposed to a previously mechanized explicit approach. The implicit approach, which is termed Hyperstable Model-Following Flight Control (HMFC), is estimated to be an order-of-magnitude smaller in size than the explicit approach, which is termed the Control Reconfiguration Feature (CRF). This reduction in size is accomplished without a loss in performance. In fact, performance can be increased because the reduced complexity allows a higher iteration rate and, hence, reduced reconfiguration time. Furthermore, HMFC will successfully reconfigure under conditions for which the CRF will not, while possessing robustness with respect to disturbances and unmodeled states. The completion of the HMFC system design constitutes an advance in the simplicity and comprehensiveness of reconfigurable flight control systems.

Proceedings ArticleDOI
07 Jun 1988
TL;DR: The reconfiguration of rectangular arrays is considered from a novel, totally general point of view (that can be immediately extended to an array connectivity besides the rectangular one), and existing algorithms to this purpose are analyzed to identify the best-suited ones.
Abstract: The reconfiguration of rectangular arrays is considered from a novel, totally general point of view (one that can be immediately extended to array connectivities other than the rectangular one). The only constraint specifically taken into account is that of interconnection locality, represented through the adjacency domain of any given cell in the array; reconfiguration is described as index-mapping. A coverage table is used to represent such index mapping: it is seen that its solution coincides with that of a complete matching problem, and existing algorithms to this purpose are analyzed to identify the best-suited ones. Complexity bounds for the interconnection networks supporting reconfiguration are then determined, and they are seen to be dependent only on the adjacency domain chosen, not on the dimensions of the array. The results are pertinent to the problem of the fault tolerance of VLSI and WSI arrays.

01 Jan 1988
TL;DR: This thesis argues for the viability of state transition histories, or logs, as a more suitable storage representation for abstract data types in distributed computing environments and proposes a new protocol for reducing message and storage requirements of histories, and a novel reconfiguration and recovery method.
Abstract: Data replication enhances the availability of data in distributed systems. This thesis deals with the management of a particular representation of replicated data objects that belong to Abstract Data Types (ADT). Traditionally, replicated data objects have been stored in terms of their states, or values. In this thesis, I argue for the viability of state transition histories, or logs, as a more suitable storage representation for abstract data types in distributed computing environments. We present two main contributions: a new protocol for reducing message and storage requirements of histories, and a novel reconfiguration and recovery method. In the first protocol, we introduce the notion of two phase gossip as the primary mechanism for managing distributed replicated event histories. We focus our second protocol for reconfiguration and recovery on enhancing the availability of distributed objects in the face of sequences of failures. Additionally, our reconfiguration protocol supports system administration functions related to the storage of distributed objects. In combination, the two protocols that we propose demonstrate the viability and desirability of the distributed representation of an ADT object as a history of the state transitions that the data undergoes, rather than as the value or the sequence of values that it assumes.
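The history representation can be sketched with a tiny replicated counter whose state is a set of timestamped transition events; merging replicas is a set union and the value is recovered by replay. This `HistoryReplica` class is illustrative and omits the thesis's two-phase gossip and reconfiguration protocols:

```python
class HistoryReplica:
    """Replica that stores an ADT object as a log of state transitions
    rather than as a value (a hedged sketch of the history representation).
    Gossip merges are a union of events, so they are idempotent and
    order-insensitive; the value is rebuilt by replaying the log."""

    def __init__(self):
        self.events = set()          # (timestamp, replica_id, op, arg)

    def apply(self, ts, rid, op, arg):
        self.events.add((ts, rid, op, arg))

    def gossip_from(self, other):
        self.events |= other.events  # receiving the same event twice is harmless

    def value(self):
        v = 0
        for ts, rid, op, arg in sorted(self.events):   # replay in timestamp order
            v = v + arg if op == "add" else v - arg
        return v
```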

Proceedings ArticleDOI
03 Oct 1988
TL;DR: Reconfiguration strategies in VLSI processor arrays have been advocated in the recent literature as a means of achieving higher production yield and higher reliability, and the authors present reconfiguration techniques for rectangular arrays that are generalizations of the Diogenes approach.
Abstract: Reconfiguration strategies in VLSI processor arrays have been advocated in the recent literature as a means of achieving higher production yield and higher reliability. The authors present reconfiguration techniques for rectangular arrays that are generalizations of the Diogenes approach proposed earlier by A.L. Rosenberg (1983). These techniques overcome many of the limitations of the earlier Diogenes schemes. Two schemes for reconfiguring rectangular arrays are presented. The first scheme for reconfiguring an N*N rectangular array uses r additional rows and c additional columns. It guarantees reconfiguration as long as there are at most c columns that have more than r faulty processors. The second scheme uses spare processors. It guarantees reconfiguration as long as there are enough spare processors, and in any window of N processors in a particular linearization there are at most k faulty processors, where k is a parameter of the design.

Proceedings ArticleDOI
09 Aug 1988
TL;DR: A brief review of where autonomous agents may use neural networks and their learning algorithms is presented, and architectures are proposed for implementing self-repairing sensor and identification systems aboard autonomous agents.
Abstract: Neural networks are ideally suited for sensing images and waveforms, processing them into intermediate levels of representation, and outputting identification and/or characteristics of the sensed object. These networks can solve problems that conventional algorithms have not, and in several cases this new technology has already performed better than humans (e.g. sonar signal classification). A brief review of where autonomous agents may use neural networks and their learning algorithms is presented. A high-yielding area is seen in the self-repair of damaged or faulted components. Architectures are proposed for implementing self-repairing sensor and identification systems aboard autonomous agents. One example is presented for a system which identifies visual objects. This system has four layers of massively connected simple parallel processors. Each connection has a weight attribute, and the collective assignment of weights in a layer determines what function the layer will perform. The first layer (the input layer) is simply the pixel detector layer. The second layer has eight sublayers which are sensitive to short line segments in eight different orientations. The third layer detects elementary combinations of the lower lines such as oriented corners or curve segments. The fourth layer has one sublayer for each macroscopic object to be identified, which may be fused with a pinpoint location sensor. The crux of using reconfiguration in this type of sensor is that when one (or several) of the units or detectors becomes inoperative, neighboring detectors in that layer may be used to reprogram the weights connecting surviving units to restore functionality. This strategy takes advantage of the redundancy of parallel processors present in most types of neural networks. Alternatively, a properly functioning agent may teach the injured agent, or competitive learning for repairing middle processing layers may be utilized when an operative after-the-fact sensor is available for teaching the output layer.

Proceedings ArticleDOI
15 Jun 1988
TL;DR: The structure of a fault tolerant controller built for the control of a chemical process with unreliable sensors is presented, which enables on-line and automatic reconfiguration of the control structure.
Abstract: Unstable, dynamically changing environments represent a major challenge for the design of adaptive control systems. Significant changes in the process model and various hardware failures may require the modification of the basic control law in order to maintain control of the plant. To support the design and implementation of structurally adaptive systems, we have developed a new architecture and programming tools. The architecture enables on-line and automatic reconfiguration of the control structure. The most important features of our approach are: (1) knowledge-based techniques are used to represent the model of the process and the possible control schemes, including alternatives and selection rules; (2) a graph-oriented computational model has been defined which allows the efficient implementation of reconfigurable signal processing systems; and (3) a special interpretation technique has been developed which can build and dynamically change the signal flow of signal processing and control systems. The operation of the system can be conceptualized in the following way. The actual state of the process is continuously monitored. If a significant change is detected, the process model is updated, which starts a reasoning process on the effects of the change. The result may be a partial or full reconfiguration of the controller without interrupting system operation. The method has been implemented and tested in different applications. The paper presents the structure of a fault-tolerant controller built for the control of a chemical process with unreliable sensors.

29 Jan 1988
TL;DR: This paper describes the CKF design, which is nearing maturity and is currently entering a computer simulation test cycle, and discusses approaches to sensor information requirements definition, local sensor integration/filtering, master filter modeling, and automatic fault detection/isolation and system reconfiguration.
Abstract: Common Kalman Filter is an ongoing U.S. Air Force Avionics Laboratory program to develop, simulate, and evaluate computational techniques and software architectures that will enhance the robustness and reliability of integrated navigation systems for the next generation of military aircraft. The project addresses four areas: filter mechanization and architecture, including decentralized filtering; fault detection and isolation; system software reconfiguration strategies; and exploitation of advanced avionics technologies such as VHSIC, parallel processing, and Pave Pillar data bus architectures. Problems of Kalman filter throughput, cascaded filter instability, and the high-rate data interface requirements of decentralized filtering are being addressed. Decentralized filtering is a natural technique for tying together the many sensors of the modern navigation suite aboard the next-generation fighter. Each navigation sensor typically has been designed as a locally autonomous navigation system, with its own dedicated Kalman filter. The decentralized filter framework ideally would improve fault tolerance and fault recovery by retaining the autonomy of the local filters and, at the same time, preserve navigation accuracy by maintaining a central master filter with a globally optimal or near-optimal solution, while enhancing throughput in a distributed processing environment. The decentralized, fault-tolerant filter scheme described here demonstrates the Carlson technique of trading off optimality of the master solution to gain an extra measure of fault tolerance and throughput. The small accuracy loss due to suboptimality is more than compensated by the increase in robustness of the navigation solution. The navigation sensor suite around which this is designed represents the avionics suite of the next-generation fighter aircraft, with dual inertial units, GPS, JTIDS, SITAN, and other landmark reference sensors.
This paper describes the CKF design, which is nearing maturity and is currently entering a computer simulation test cycle. Approaches to sensor information requirements definition, local sensor integration/filtering, master filter modeling, and automatic fault detection/isolation and system reconfiguration are discussed. Failure modes are described which will be the basis for the evaluation of CKF failure response in subsequent simulations.
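The master-filter step in a decentralized arrangement like the one described above can be sketched compactly: each local sensor filter maintains its own estimate and variance, and the master combines them in information form, dropping any local filter that disagrees grossly with the others before fusing (the fault detection/isolation and reconfiguration cycle). This is a scalar-state sketch under stated assumptions; the function names, the median-based isolation rule, and the numbers are illustrative, not from the CKF program, and the information-sharing details of the actual Carlson federated filter are omitted.

```python
# Hedged sketch: information-form fusion of local Kalman filter outputs
# with a simple fault-isolation rule, scalar state for clarity.

def fuse(locals_):
    """Combine (estimate, variance) pairs from local filters in
    information form: P_m = 1/sum(1/P_i), x_m = P_m * sum(x_i/P_i)."""
    total_info = sum(1.0 / P for _, P in locals_)
    P_master = 1.0 / total_info
    x_master = P_master * sum(x / P for x, P in locals_)
    return x_master, P_master


def detect_and_isolate(locals_, threshold=3.0):
    """Flag local filters whose estimates sit far from the median of all
    local estimates (robust to a single wild failure), then fuse only the
    healthy subset. A rejected local filter stays autonomous and could be
    re-admitted after recovery -- the reconfiguration idea in the abstract."""
    xs = sorted(x for x, _ in locals_)
    median = xs[len(xs) // 2]
    healthy = [(x, P) for x, P in locals_
               if abs(x - median) <= threshold * P ** 0.5]
    return fuse(healthy), len(healthy)


# Two agreeing INS-like local filters plus one failed sensor reporting a
# wild value (positions in arbitrary units, variances in units squared).
local_filters = [(10.1, 0.04), (9.9, 0.04), (25.0, 0.04)]
(x, P), n_used = detect_and_isolate(local_filters)
print(round(x, 2), n_used)  # fused estimate 10.0 from the 2 healthy filters
```

The design choice the abstract points at is visible even in this toy: the master solution is slightly suboptimal versus a single centralized filter over raw measurements, but a failed local filter can be excised and the system reconfigured without disturbing the surviving local filters.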

Proceedings ArticleDOI
11 Apr 1988
TL;DR: The authors introduce a parallel DSP (digital signal processing) system, consisting of 36 processor nodes, which provides high programmability and effective scheduling capability and its program-development-assist system facilitates powerful debugging functions to observe all states of NOVI and to control its execution.
Abstract: The authors introduce a parallel DSP (digital signal processing) system called NOVI, consisting of 36 processor nodes. NOVI has a multicomputer architecture, which provides high programmability and effective scheduling capability. Its program-development-assist system facilitates powerful debugging functions to observe all states of NOVI and to control its execution. Each processor node comprises a transputer, a floating-point ALU (arithmetic logic unit), and 1 Mbyte of local memory, which can deal with even rather large tasks within a single node. An interconnection board allows easy reconfiguration into various network topologies. >

Proceedings ArticleDOI
01 Oct 1988
TL;DR: The paper describes a means to change the coverage zone of a spacecraft antenna by reconfiguration of the reflector surface by using a realistic model for an offset mesh reflector.
Abstract: The paper describes a means to change the coverage zone of a spacecraft antenna by reconfiguration of the reflector surface. By using a realistic model for an offset mesh reflector, reconfiguration is successfully demonstrated for the case of two future INTELSAT regional beams. Suggestions are made concerning both the mesh characteristics and the means by which it might be controlled.