Showing papers on "Control reconfiguration published in 1987"


Journal ArticleDOI
TL;DR: Two algorithms for spare allocation that are based on graph-theoretic analysis are presented; they provide highly efficient and flexible reconfiguration analysis, while the optimal reconfiguration problem itself is shown to be NP-complete.
Abstract: Yield degradation from physical failures in large memories and processor arrays is of significant concern to semiconductor manufacturers. One method of increasing the yield for iterated arrays of memory cells or processing elements is to incorporate spare rows and columns in the die or wafer. These spare rows and columns can then be programmed into the array. The authors discuss the use of CAD approaches to reconfigure such arrays. Optimal reconfiguration is shown to be NP-complete. The authors present two algorithms for spare allocation that are based on graph-theoretic analysis. The first uses a branch-and-bound approach with early screening based on bipartite graph matching. The second is an efficient polynomial-time approximation algorithm. In contrast to existing greedy and exhaustive search algorithms, these algorithms provide highly efficient and flexible reconfiguration analysis.
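
The abstract describes repairing an array by assigning spare rows and columns to faulty cells, with bipartite matching used as an early screen inside a branch-and-bound search. The following Python sketch is a hypothetical illustration of that idea, not the authors' algorithm: for one uncovered fault it branches on repairing either its row or its column, and it prunes a branch when the maximum bipartite matching between fault rows and fault columns (a lower bound on the repairs needed, by Konig's theorem) exceeds the remaining spare budget.

```python
# Hypothetical sketch of spare row/column allocation (not the paper's code).
# faults: set of (row, col) coordinates of faulty cells.

def max_matching(faults):
    """Maximum matching between fault rows and fault columns; its size equals
    the minimum number of rows+columns needed to cover all faults (Konig),
    so it is a valid lower bound for pruning."""
    adj = {}
    for r, c in faults:
        adj.setdefault(r, set()).add(c)
    match_col = {}                                # column -> matched row

    def try_augment(r, seen):
        for c in adj.get(r, ()):
            if c not in seen:
                seen.add(c)
                if c not in match_col or try_augment(match_col[c], seen):
                    match_col[c] = r
                    return True
        return False

    return sum(try_augment(r, set()) for r in adj)

def allocate_spares(faults, spare_rows, spare_cols):
    """Branch and bound: returns (rows_to_repair, cols_to_repair) or None."""
    def solve(remaining, rows_left, cols_left):
        if not remaining:
            return set(), set()
        if max_matching(remaining) > rows_left + cols_left:
            return None                           # early screening: infeasible
        r, c = min(remaining)                     # branch on one uncovered fault
        if rows_left:                             # option 1: repair its row
            sub = solve({f for f in remaining if f[0] != r}, rows_left - 1, cols_left)
            if sub is not None:
                return sub[0] | {r}, sub[1]
        if cols_left:                             # option 2: repair its column
            sub = solve({f for f in remaining if f[1] != c}, rows_left, cols_left - 1)
            if sub is not None:
                return sub[0], sub[1] | {c}
        return None

    return solve(set(faults), spare_rows, spare_cols)

# Three faults repaired with one spare row and one spare column.
print(allocate_spares({(0, 0), (0, 3), (2, 3)}, spare_rows=1, spare_cols=1))
```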

298 citations


Proceedings ArticleDOI
Ed Andert1
01 Dec 1987
TL;DR: The PPCS system implements fifteen different heuristic scheduling algorithms to map a set of tasks onto the processing nodes of a distributed computer and shows the feasibility of using fast algorithms to heuristically schedule a system of multiple processors allowing dynamic task allocation.
Abstract: Distributed processor systems are currently used for advanced, high-speed computation in application areas such as image processing, artificial intelligence, signal processing, and general data processing. The use of distributed and parallel processor computer systems today requires systems designers to partition an application into at least as many functions as there are processors. Spare processors must be allocated and function migration paths must be designed to allow fault-tolerant reconfiguration. The parallel process/parallel architecture control simulation (PPCS) models parallel task allocation on a distributed processor architecture. Parallel task allocation is a first step in designing a dynamic parallel processor operating system that automatically assigns and reassigns application tasks to processors. Advantages of this approach are: dynamic reconfigurability removing the need for spare processing power reserved for failures; the reduced need for fallback and recovery software for fault detection; more optimized partitioning of functions; and better load balancing over available processors. PPCS models various distributed processing configurations, task dependencies, and the scheduling of the tasks onto the processor architecture. The PPCS system implements fifteen different heuristic scheduling algorithms to map a set of tasks onto the processing nodes of a distributed computer. The simulation shows the feasibility of using fast algorithms to heuristically schedule a system of multiple processors allowing dynamic task allocation.
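
The abstract explains that tasks are mapped onto processors by fast heuristic schedulers so they can be reassigned after a failure. The fifteen PPCS heuristics are not reproduced here; the sketch below shows one representative heuristic of that general kind, longest-task-first list scheduling onto the least-loaded processor, plus a reassignment step for when a processor fails. The names and the load model are illustrative assumptions.

```python
# Illustrative list-scheduling heuristic (not one of the PPCS algorithms):
# assign the longest unscheduled task to the currently least-loaded processor.

def schedule(tasks, processors):
    """tasks: dict task -> execution time; processors: list of node names."""
    load = {p: 0.0 for p in processors}
    assignment = {}
    for task in sorted(tasks, key=tasks.get, reverse=True):
        target = min(load, key=load.get)          # least-loaded node
        assignment[task] = target
        load[target] += tasks[task]
    return assignment

def reconfigure(tasks, assignment, failed, survivors):
    """Dynamic reallocation: reschedule only the tasks stranded on the failed
    node (for simplicity, the survivors' existing load is ignored here)."""
    stranded = {t: tasks[t] for t, p in assignment.items() if p == failed}
    kept = {t: p for t, p in assignment.items() if p != failed}
    kept.update(schedule(stranded, survivors))
    return kept

tasks = {"fft": 8.0, "filter": 5.0, "log": 1.0, "display": 3.0, "io": 2.0}
assignment = schedule(tasks, ["p0", "p1", "p2"])
print(assignment)
print(reconfigure(tasks, assignment, failed="p2", survivors=["p0", "p1"]))
```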

39 citations


Journal ArticleDOI
TL;DR: This paper presents the technique of concurrent hierarchical fault simulation, a performance model, and two hierarchical optimization techniques to enhance fault simulator performance and indicates that the speedup should increase with circuit size.
Abstract: This paper presents the technique of concurrent hierarchical fault simulation, a performance model, and two hierarchical optimization techniques to enhance fault simulator performance. The mechanisms for these enhancements are demonstrated with a performance model and are validated experimentally via CHIEFS, the Concurrent Hierarchical and Extensible Fault Simulator, and WRAP, an offline hierarchy compressor. Hierarchy-based fault partitioning and circuit reconfiguration are shown to improve simulator performance to O(n log n) under appropriate conditions. A decoupled fault modeling technique permits further performance improvements via a bottom-up hierarchy compression technique where macros of primitives are converted to single primitives. When combined, these techniques have produced a factor of 180 speedup on a mantissa multiplier. The performance model indicates that the speedup should increase with circuit size.
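
For readers unfamiliar with fault simulation, the sketch below shows the baseline operation that CHIEFS accelerates: simulate the fault-free circuit and each single stuck-at faulty circuit on the same input vector and report which faults become visible at an output. It is a plain serial simulator over an invented three-gate netlist, not the concurrent hierarchical technique of the paper.

```python
# Minimal serial stuck-at fault simulator over an invented netlist; it shows
# fault detection only, not the concurrent hierarchical method of the paper.

GATES = [                       # (output_net, function, input_nets), in order
    ("n1", "AND", ("a", "b")),
    ("n2", "OR",  ("n1", "c")),
    ("y",  "NOT", ("n2",)),
]
OUTPUTS = ["y"]

def evaluate(inputs, fault=None):
    """fault = (net, stuck_value): forces that net to a constant 0 or 1."""
    values = dict(inputs)
    def read(net):              # every net read honours the injected fault
        return fault[1] if fault and net == fault[0] else values[net]
    for out, fn, ins in GATES:
        bits = [read(n) for n in ins]
        values[out] = {"AND": all(bits), "OR": any(bits), "NOT": not bits[0]}[fn]
    return {o: int(read(o)) for o in OUTPUTS}

def detected_faults(inputs):
    """All single stuck-at faults whose effect reaches an output."""
    good = evaluate(inputs)
    nets = {n for _, _, ins in GATES for n in ins} | {g[0] for g in GATES}
    return [(net, v) for net in sorted(nets) for v in (0, 1)
            if evaluate(inputs, (net, v)) != good]

print(detected_faults({"a": 1, "b": 1, "c": 0}))
```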

34 citations


Proceedings Article
10 Jun 1987
TL;DR: In this paper, a digital controller that reconfigures control strategies as a result of changes in plant operating conditions has been developed and demonstrated on the 5 MWt MIT Research Reactor.
Abstract: A digital controller that reconfigures control strategies as a result of changes in plant operating conditions has been developed and demonstrated. The reconfiguration logic involves the organization of available plant information, the definition and identification of plant operating conditions, the selection of the appropriate control law for the given operating condition, and the verification of the control choice through on-line performance evaluation. Also included is a decision and supervisory logic which interfaces with a knowledge base to ensure that plant operating guidelines, procedures, and specifications are not exceeded. This methodology is entirely general and may be used with any process system. It was demonstrated experimentally on the 5 MWt MIT Research Reactor. The reconfigurable controller successfully controlled the reactor power during steady-state operation, power maneuvers, and experimentally induced anomalies in the control laws.
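
The reconfiguration logic in the abstract has a clear structure: classify the operating condition, select the control law registered for that condition, and keep the choice only if on-line performance evaluation confirms it. The sketch below mirrors that loop for a generic single-loop process; the condition boundaries, gains, and performance test are invented placeholders, not the reactor controller's.

```python
# Hypothetical supervisory reconfiguration loop (structure only, not the MIT
# reactor controller): classify the condition, pick its registered control
# law, and fall back if on-line performance evaluation rejects the choice.

CONTROL_LAWS = {                           # condition -> u = f(setpoint, y)
    "low_power":  lambda sp, y: 2.0 * (sp - y),   # aggressive proportional law
    "high_power": lambda sp, y: 0.5 * (sp - y),   # conservative proportional law
}
FALLBACK = lambda sp, y: 0.2 * (sp - y)

def classify(power_fraction):
    """Map plant information to a named operating condition."""
    return "low_power" if power_fraction < 0.5 else "high_power"

def supervisory_step(setpoint, measurement, recent_errors, limit=0.2):
    condition = classify(measurement)
    law = CONTROL_LAWS.get(condition, FALLBACK)
    # On-line verification: if the tracking error has stayed large, the
    # selected law is judged unsuitable and the fallback law is used instead.
    if recent_errors and min(abs(e) for e in recent_errors) > limit:
        law = FALLBACK
    return law(setpoint, measurement), condition

u, condition = supervisory_step(0.6, 0.42, recent_errors=[0.05, 0.04])
print(condition, round(u, 3))              # low_power 0.36
```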

22 citations


Journal ArticleDOI
TL;DR: The need and existence of an active reconfiguration strategy, in which the system reconfigures itself on the basis of not only the occurrence of a failure but also the progression of the mission, are shown.
Abstract: A new quantitative approach to the problem of reconfiguring a degradable multimodule system is presented. The approach is concerned with both assigning some modules for computation and arranging others for reliability. Conventionally, a fault-tolerant system performs reconfiguration only upon a subsystem failure. Since there exists an inherent trade-off between the computation capacity and fault tolerance of a multimodule computing system, the conventional approach is a passive action and does not yield a configuration that provides an optimal compromise for the trade-off. By using the expected total reward as the optimality criterion, the need for and existence of an active reconfiguration strategy, in which the system reconfigures itself on the basis of not only the occurrence of a failure but also the progression of the mission, are shown. Following the problem formulation, some important properties of an optimal reconfiguration strategy, which specify (i) the times at which the system should undergo reconfiguration and (ii) the configurations to which the system should change, are investigated. Then, the optimal reconfiguration problem is converted to integer nonlinear knapsack and fractional programming problems. The algorithms for solving these problems and a demonstrative example are given. Extensions of the optimal reconfiguration problem are also discussed.
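
The core trade-off above is how to split a fixed pool of modules between active computation and standby redundancy so that the expected total reward over the remaining mission is maximized. The toy sketch below enumerates that split with placeholder reward and survival models (invented for illustration, not the paper's formulation) just to show why the optimum shifts as the mission progresses.

```python
# Toy version of the active-reconfiguration trade-off: split N identical
# modules between computation and spares to maximise an expected reward over
# the remaining mission time.  Both models below are invented placeholders.
import math

def expected_reward(active, spares, remaining_time,
                    failure_rate=0.01, reward_rate=1.0):
    p_survive = math.exp(-failure_rate * remaining_time)     # per-module
    expected_failures = (active + spares) * (1.0 - p_survive)
    # Crude surrogate for mission success: full credit if the spares can
    # absorb the expected number of failures, partial credit otherwise.
    p_ok = 1.0 if expected_failures <= spares else spares / expected_failures
    return reward_rate * active * remaining_time * p_ok

def best_configuration(total_modules, remaining_time):
    splits = [(a, total_modules - a) for a in range(1, total_modules + 1)]
    return max(splits, key=lambda s: expected_reward(s[0], s[1], remaining_time))

# Early in the mission it pays to hold back spares; near the end the optimum
# shifts toward committing more modules to computation.
print(best_configuration(10, remaining_time=100.0))   # e.g. (5, 5)
print(best_configuration(10, remaining_time=5.0))     # e.g. (9, 1)
```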

20 citations


Journal ArticleDOI
TL;DR: The architecture of the proposed system, its instruction set, and its control facilities for bus control, interprocessor communication, and reconfiguration are described in this contribution.

15 citations


Proceedings ArticleDOI
18 May 1987
TL;DR: A fault-tolerant architecture is proposed to allow reliable computation of systolic arrays by using physical redundancy and residue number coding.
Abstract: Much attention has recently been given to VLSI and WSI processing arrays: systolic arrays are often adopted to execute a wide class of algorithms, e.g., for matrix arithmetic or signal and image processing. In this paper a fault-tolerant architecture is proposed to allow reliable computation in systolic arrays by using physical redundancy and residue number coding. The architecture also supplies information for fast reconfiguration.
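
Residue number coding detects arithmetic faults by carrying each value's residue modulo a check base alongside the value: since the residues of a sum or product can be computed from the operands' residues, a cheap checker can predict the residue of a result and compare it with the residue of the value actually produced. The sketch below applies that check to a single multiply-accumulate step of the kind a systolic cell performs; the check base and the fault model are illustrative assumptions, not the paper's architecture.

```python
# Residue check for one multiply-accumulate step (the kind of operation a
# systolic cell performs).  The modulus and fault injection are illustrative.

CHECK_BASE = 251                       # modulus of the residue code

def mac_with_residue(acc, a, b, inject_fault=False):
    """Compute result = acc + a*b and flag it if its residue disagrees with
    the residue predicted independently from the operands."""
    result = acc + a * b
    if inject_fault:
        result ^= 1 << 7               # flip one bit to model a hardware fault
    predicted = (acc + (a % CHECK_BASE) * (b % CHECK_BASE)) % CHECK_BASE
    error_detected = result % CHECK_BASE != predicted
    return result, error_detected

print(mac_with_residue(10, 6, 7))                     # (52, False)
print(mac_with_residue(10, 6, 7, inject_fault=True))  # (180, True)
```

Faults that change the result by an exact multiple of the check base go undetected, which is why the base is chosen with care in practice.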

14 citations


Journal ArticleDOI
TL;DR: A simulation of an NMR redundant processor system was constructed using a gate level simulation package and the ability of each digital processor to react to randomly induced stuck-at faults was measured.
Abstract: Latent faults represent a potential obstacle in the synthesis of highly reliable digital computer systems. A simulation of an NMR redundant processor system was constructed using a gate-level simulation package. The ability of each digital processor to react to randomly induced stuck-at faults was measured, and the time required for the processor's control program to propagate each fault to an output was recorded. These propagation times represent the latency times of the faults. The effect of fault latency in degrading system reliability is explored.
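
Fault latency, the quantity measured in this study, is the time between a fault occurring and its first visible effect at an output. The sketch below measures it for a deliberately tiny sequential circuit: a fault-free copy and a copy with one stuck-at state bit run in lockstep, and the latency is the number of clock cycles until their observable outputs diverge. The circuit and fault model are invented; only the measurement idea corresponds to the abstract.

```python
# Measuring stuck-at fault latency on a toy sequential circuit: a 4-bit
# counter whose only observable output is its top bit.

def step(state, stuck_bit=None, stuck_value=0):
    """One clock: increment the counter, optionally holding one state bit
    stuck at a constant value."""
    state = (state + 1) & 0xF
    if stuck_bit is not None:
        state = (state & ~(1 << stuck_bit)) | (stuck_value << stuck_bit)
    return state

def output(state):
    return (state >> 3) & 1            # only bit 3 is visible externally

def fault_latency(stuck_bit, stuck_value, max_cycles=64):
    good = faulty = 0
    for cycle in range(1, max_cycles + 1):
        good = step(good)
        faulty = step(faulty, stuck_bit, stuck_value)
        if output(good) != output(faulty):
            return cycle               # cycles until the fault is observable
    return None                        # fault stayed latent for the whole run

for bit in range(4):
    print(f"state bit {bit} stuck at 1 -> latency {fault_latency(bit, 1)} cycles")
```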

13 citations


DOI
01 Nov 1987
TL;DR: Several broad classes of nonplanar arrays/architectures are analyzed in this article, which have a natural interpretation in terms of data flow on the surface of a torus, sphere, cylinder and other geometric forms.
Abstract: Several broad classes of nonplanar arrays/architectures are analysed, including architectures which have a natural interpretation in terms of data flow on the surface of a torus, sphere, cylinder, and other geometric forms. A definitive quantification of the several architectural classes is given, and architectural reconfiguration is demonstrated to facilitate iterative computations.

13 citations


Journal ArticleDOI
TL;DR: Analysis and synthesis techniques for multicomputer networks that perform fast reconfiguration into rings, stars, and trees are presented and some fine mathematical properties exhibited by special shift registers with variable bias are introduced.
Abstract: This paper presents analysis and synthesis techniques for multicomputer networks that perform fast reconfiguration into rings, stars, and trees. Each reconfiguration into a new network structure requires only two codes (reconfiguration code RC and bias B), and can be performed during one clock period. Because the reconfiguration methodology presented is based on some fine mathematical properties exhibited by special shift registers with variable bias (SRVB), they are also introduced in this paper.
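
The reconfiguration scheme above distributes only two small codes (RC and B) and lets local shift registers rewire the network within one clock period. The sketch below is only loosely inspired by that idea and does not reproduce the SRVB construction: it shows how a single broadcast bias value can already select among many ring structures, since successor(i) = (i + B) mod N visits all N nodes exactly when gcd(B, N) = 1.

```python
# Loose illustration of reconfiguration from a single broadcast code: the
# bias B defines successor(i) = (i + B) mod N, which yields one ring over all
# nodes whenever gcd(B, N) == 1.  This mimics the spirit of reconfiguring
# with a tiny code; it is not the paper's SRVB mechanism.
from math import gcd

def ring_order(n_nodes, bias):
    if gcd(bias, n_nodes) != 1:
        raise ValueError("bias must be coprime with the node count")
    order, node = [], 0
    for _ in range(n_nodes):
        order.append(node)
        node = (node + bias) % n_nodes
    return order                        # visiting order around the ring

print(ring_order(8, 1))    # [0, 1, 2, 3, 4, 5, 6, 7]
print(ring_order(8, 3))    # [0, 3, 6, 1, 4, 7, 2, 5] -- a different ring
```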

11 citations


Journal ArticleDOI
TL;DR: Tolerance to sensor failure is developed by creating functional redundancy in a computer-controlled process with time delay; this is implemented by formulating time-delay compensators and control laws a priori and by reconfiguring the controller automatically when a sensor failure is detected.

Journal ArticleDOI
TL;DR: A global approach to the design of reliable Flight Control Systems incorporates into a single analytical framework the three basic steps of the design (component selection, component location, and control law synthesis) and provides a cost index that is sensitive to both reliability and performance issues.

Journal ArticleDOI
W. Hutcheson1, T. Snyder2
TL;DR: The basic elements of the architecture underlying the ACCUNET® family of digital services, the role of digital cross-connect systems, their associated support systems, and the services they provide are described.
Abstract: The basic elements of the architecture underlying the ACCUNET® family of digital services, the role of digital cross-connect systems, their associated support systems, and the services they provide are described. As an illustrative example, a description of the deployment of customer controlled reconfiguration (CCR) and the control system design considerations for user interfaces is provided.

Journal ArticleDOI
TL;DR: A heuristic algorithm for dimensioning of hardware architecture is proposed to find a configuration allowing a task allocation which minimizes costs and satisfies predefined performance constraints.

01 May 1987
TL;DR: This thesis focuses on how to design and implement replication and reconfiguration for the distributed mail repository, considering these questions in the context of the programming language Argus, which was designed to support distributed programming.
Abstract: Conventional approaches to programming produce centralized programs that run on a single computer. However, an unconventional approach can take advantage of low-cost communication and small, inexpensive computers. A distributed program provides service through programs executing at several nodes of a distributed system. Distributed programs can offer two important advantages over centralized programs: high availability and scalability. In a highly-available system, it is very likely that a randomly-chosen transaction will complete successfully. A scalable system's capacity can be increased or decreased to match changes in the demands placed on the system. When a node is unavailable because of maintenance or a crash, transactions may fail unless copies of the node's information are stored at other nodes. Thus, high availability requires replication of data. Both the maintenance of a highly-available system and scalability require the ability to modify and extend a system while it is running, called dynamic reconfiguration or simply reconfiguration. This thesis considers the problem of building scalable and highly-available distributed programs without using special processors with redundant hardware and software. It describes a design and implementation of an example distributed program, an electronic mail repository. The thesis focuses on how to design and implement replication and reconfiguration for the distributed mail repository, considering these questions in the context of the programming language Argus, which was designed to support distributed programming. The thesis makes three distinct contributions. First, it presents the replication techniques chosen for the distributed repository and a discussion of their implementation in Argus. Second, it describes a new method for designing and implementing reconfigurable distributed systems. The new method allows replacement of software components while preserving their state, but requires no changes to the underlying system or language. This contrasts with previous work on guardian replacement in Argus. Third, the thesis evaluates the utility of Argus for applications involving replication and reconfiguration.
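
The second contribution, replacing a running software component while preserving its state, can be pictured with a much simpler analogue than Argus guardians. The invented Python sketch below shows the bare extract-state / rebuild / reinstall pattern: the old component exports its state, the new version imports it, and the reference is swapped; none of the class or method names come from the thesis.

```python
# Invented analogue of state-preserving component replacement (the thesis
# does this for Argus guardians; plain Python objects are used here only to
# show the extract / rebuild / reinstall pattern).

class MailRepositoryV1:
    def __init__(self):
        self.messages = []                          # the component's state
    def deliver(self, msg):
        self.messages.append(msg)
    def export_state(self):
        return {"messages": list(self.messages)}

class MailRepositoryV2:
    """New version: adds per-user indexing but accepts the old state."""
    def __init__(self):
        self.messages = []
        self.by_user = {}
    def deliver(self, msg):
        self.messages.append(msg)
        self.by_user.setdefault(msg["to"], []).append(msg)
    def import_state(self, state):
        for msg in state["messages"]:
            self.deliver(msg)

def replace_component(old, new_cls):
    """Dynamic reconfiguration step: build the new version from the old state."""
    new = new_cls()
    new.import_state(old.export_state())
    return new

repo = MailRepositoryV1()
repo.deliver({"to": "alice", "body": "hello"})
repo = replace_component(repo, MailRepositoryV2)
repo.deliver({"to": "bob", "body": "hi"})
print(len(repo.messages), sorted(repo.by_user))     # 2 ['alice', 'bob']
```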

Journal ArticleDOI
TL;DR: A random access protocol is presented for multiaccess broadcast bus systems that has the capability to reconfigure the available bandwidth instantaneously and achieves better delay-throughput characteristics than the best available static or quasi-static allocation protocols.
Abstract: A random access protocol is presented for multiaccess broadcast bus systems. Unlike previous algorithms which operate over a fixed set of channels (usually one), this protocol has the capability to reconfigure the available bandwidth instantaneously. It can thus present a more efficient channel configuration to the users subject to local loading conditions. As a result, the protocol achieves better delay-throughput characteristics than the best available static or quasi-static allocation protocols. Simulation results are presented indicating the relative performance of the proposed protocol.

Journal ArticleDOI
Peter H. Bartels1, A. Graham1, W. Kuhn1, S. Paplanus1, George L. Wied1 
TL;DR: Knowledge engineering methods are used extensively as development tools in the expert-system-controlled scene segmentation, in processing task scheduling, in the diagnostic expert system module, and in its validation procedures.
Abstract: Ongoing research on a system for the automated evaluation of digitized images of histopathologic sections is described. The system comprises an ultrafast laser scanner microscope capable of recording image data at 64 MHz in each of two wavelength channels, and a multiprocessor computer containing thirty-six Motorola 68000 processing elements. The operating system allows an image-data-driven dynamic reconfiguration of the computer architecture; this reconfiguration is based on prior knowledge of the histopathologic sections to be processed. Knowledge engineering methods are used extensively as development tools in the expert-system-controlled scene segmentation, in processing task scheduling, in the diagnostic expert system module, and in its validation procedures.

Journal ArticleDOI
TL;DR: This paper shows how an object-oriented programming environment can make use of dynamic Link reconfiguration, and makes suggestions as to how a reconfigurable system might be designed and used.

Journal ArticleDOI
K. Kawano1, Kinji Mori1, M. Koizumi1
TL;DR: A fault-tolerant decentralized control system ensured by autonomous controllability is presented in this paper, in which a weak coupling method and an adaptive reconfiguration method for designing an autonomously controllable system are proposed.

Journal ArticleDOI
TL;DR: This work considers problems associated with the coordination of movement within a multiple-robot system in which all motion is restricted to a single track to minimize the reconfiguration time.
Abstract: We consider problems associated with the coordination of movement within a multiple-robot system in which all motion is restricted to a single track. Our objective is to minimize the reconfiguration time, that is, the total time required to move a collection of robots from an initial to a goal configuration. We show that various models give rise to a wide range of problem complexities. For these problems we design and analyze optimization and approximation strategies.
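
One of the simplest models in this family is the case of identical, interchangeable robots: since robots on a single track cannot pass one another, matching the sorted initial positions to the sorted goal positions is optimal, and with unit speed (ignoring the robots' physical extent) the reconfiguration time is the longest single move. The sketch below computes exactly that; the modelling assumptions are stated in the comments and are ours, not necessarily the paper's.

```python
# Toy single-track reconfiguration for identical, interchangeable, zero-width
# robots moving at unit speed.  Because robots cannot pass each other, the
# order-preserving (sorted-to-sorted) assignment is optimal, and the
# reconfiguration time is the longest individual move.

def reconfiguration_time(initial, goal):
    """initial, goal: robot positions along the track (same length)."""
    assert len(initial) == len(goal)
    plan = list(zip(sorted(initial), sorted(goal)))
    return max(abs(s - t) for s, t in plan), plan

time_needed, plan = reconfiguration_time([2.0, 9.0, 4.0], [1.0, 5.0, 12.0])
print(time_needed)    # 3.0 -- the robot at 9.0 travels to 12.0
print(plan)           # [(2.0, 1.0), (4.0, 5.0), (9.0, 12.0)]
```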


Journal ArticleDOI
TL;DR: A multiprocessor communications architecture designed to achieve high performance, good modularity and expandability, good fault tolerance and the ability to support processor reconfiguration is presented.


Proceedings ArticleDOI
10 Jun 1987
TL;DR: In this article, a deterministic controller is employed to control a d.c. motor which experiences a varying inertia load, Coriolis forces and unknown friction disturbances, all of the uncertainties in the system are assumed to be bounded.
Abstract: The control of d.c. motors is often complicated by the fact that the dynamics of the system to be controlled are unknown. In this paper a deterministic controller is employed to control a d.c. motor which experiences a varying inertia load, Coriolis forces and unknown friction disturbances. All of the uncertainties in the system are assumed to be bounded. A hybrid controller, using a continuous velocity loop and a digital position loop, is developed. Experimental results verify the controller effectiveness in dealing with bounded deterministic uncertainties.

Journal ArticleDOI
TL;DR: This paper deals with the dimensioning of a multiprocessor system and the allocation of processes, minimizing hardware costs and satisfying performance and reliability constraints: easy reconfiguration and reactivation after hardware faults are the main goals.

01 Jan 1987
TL;DR: A novel decomposition theory is developed to find an optimal or near-optimal policy for the sample path problem, whose constraint requires the time-average cost to be below a specified value with probability one; the approach lends itself to parallel processing.
Abstract: A reconfigurable computer system can change its physical, functional, architectural and other characteristics to improve its reliability, performance or both. Since different notions of reconfiguration exist, we have attempted to provide a formal definition of reconfiguration and reconfigurable computer systems. One of the main issues in reconfigurable computers is the design of reconfiguration algorithms which choose, at times of reconfiguration, system configurations optimal with respect to some reliability and performance attributes. We propose two types of optimization models for reconfiguration algorithms, namely "Commodity distribution models" and "Sample path constrained Markov decision models". The commodity distribution models are useful for load balancing with resource migration in distributed systems. When the bottleneck cost of migration is minimized, the model reduces to a bottleneck transportation problem. We demonstrate its application to two cases: file migration in distributed databases and host migration in mobile computer networks. We have also developed an efficient algorithm to solve a special case of the bottleneck transportation problem. The constrained Markov decision models are useful for reconfigurable fault-tolerant computers in which reconfiguration involves a reliability-performance tradeoff. We propose a "sample path constraint" which requires the time-average cost to be below a specified value with probability one. We have developed a novel decomposition theory to find an optimal or near-optimal policy for the sample path problem. This approach lends itself to parallel processing.
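
In the commodity-distribution model, load balancing with resource migration becomes a bottleneck transportation problem: ship the surplus load of overloaded nodes to underloaded ones so that the largest single migration cost is as small as possible. The sketch below is an invented illustration of one standard way to attack such problems (not the dissertation's algorithm): try candidate bottleneck values in increasing order and use a max-flow feasibility test that only permits migration arcs whose cost does not exceed the candidate.

```python
# Invented illustration of the bottleneck-transportation idea (not the
# dissertation's algorithm): the smallest cost threshold c is sought such
# that all surplus can be shipped using only arcs of cost <= c.
from collections import deque

def max_flow(n, cap, source, sink):
    """Edmonds-Karp max flow on an n x n capacity matrix (modified in place)."""
    flow = 0
    while True:
        parent = [-1] * n
        parent[source] = source
        queue = deque([source])
        while queue and parent[sink] == -1:
            u = queue.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] > 0:
                    parent[v] = u
                    queue.append(v)
        if parent[sink] == -1:
            return flow
        push, v = float("inf"), sink             # bottleneck on the path found
        while v != source:
            push = min(push, cap[parent[v]][v])
            v = parent[v]
        v = sink
        while v != source:
            cap[parent[v]][v] -= push
            cap[v][parent[v]] += push
            v = parent[v]
        flow += push

def min_bottleneck_migration(surplus, deficit, cost):
    """surplus[i] / deficit[j]: units to move out of / into each node;
    cost[i][j]: per-unit cost of migrating from source i to destination j."""
    total = sum(surplus)
    def feasible(c):
        n = len(surplus) + len(deficit) + 2
        src, snk = n - 2, n - 1
        cap = [[0] * n for _ in range(n)]
        for i, s in enumerate(surplus):
            cap[src][i] = s
        for j, d in enumerate(deficit):
            cap[len(surplus) + j][snk] = d
        for i in range(len(surplus)):
            for j in range(len(deficit)):
                if cost[i][j] <= c:              # only arcs under the threshold
                    cap[i][len(surplus) + j] = total
        return max_flow(n, cap, src, snk) == total
    return next(c for c in sorted({c for row in cost for c in row}) if feasible(c))

# Nodes shedding 3 and 2 units, nodes absorbing up to 4 and 1 units.
print(min_bottleneck_migration([3, 2], [4, 1], [[5, 9], [2, 7]]))   # 7
```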

Proceedings Article
10 Jun 1987
TL;DR: In this article, the authors examined the application of digital technology to the process control industry and demonstrated the feasibility of the technology by presenting results from on-line trials in which the concepts of fault-tolerance and system reconfiguration were applied to the operation of the 5 MWt MIT Research Reactor.
Abstract: The application of digital technology to the process control industry is examined. The feasibility of the technology is demonstrated by presenting results from on-line trials in which the concepts of fault-tolerance and system reconfiguration were applied to the operation of the 5 MWt MIT Research Reactor. The effectiveness of digital technology as an aid to the human operator is then discussed in the context of the human approach to process control. Based on a study of licensed MIT Reactor operators, it was noted that the essential skill that humans must master if they are to exercise effective control is the ability to model and thereby predict the future state of the process in question. Accordingly, it is suggested that 'faster than real-time' plant models be employed as an operator aid. This approach might be of particular use during plant upsets, when human experience and therefore human predictive capability is likely to be limited.

Proceedings ArticleDOI
01 Mar 1987
TL;DR: A novel approach to real-time control which is based on developing and maintaining a "knowledge base" about the system to be controlled is outlined, and methods of updating the system representation for environmental changes are proposed through the use of "indirect" and "direct" identification techniques.
Abstract: The paper outlines a novel approach to real-time control which is based on developing and maintaining a "knowledge base" about the system to be controlled. The knowledge base comprises information relating to the plant and its environment and is used to determine the control signals required to obtain the desired response from the plant. The control methodology devised has been used to achieve motion control of a single axis electric module intended for robotic applications. The results obtained are encouraging and show a marked improvement over classical control techniques. For the "knowledge based" controller, methods of updating the system representation for environmental changes are proposed through the use of "indirect" and "direct" identification techniques.
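
A key element of the approach is keeping the plant representation in the knowledge base up to date as the environment changes, via the "indirect" and "direct" identification techniques mentioned. The sketch below shows one generic identification building block that could play such a role, recursive least squares on a first-order discrete model; it is offered purely as an illustration of on-line model updating, not as the authors' method.

```python
# Generic on-line identification building block: recursive least squares on
# a first-order discrete plant y[k+1] = a*y[k] + b*u[k].  Illustrative only.

def make_rls(n=2, forgetting=0.98, p0=1000.0):
    theta = [0.0] * n                                   # estimates of [a, b]
    P = [[p0 if i == j else 0.0 for j in range(n)] for i in range(n)]

    def update(phi, y):
        """phi = [y[k], u[k]], y = measured y[k+1]; returns updated estimates."""
        Pphi = [sum(P[i][j] * phi[j] for j in range(n)) for i in range(n)]
        denom = forgetting + sum(phi[i] * Pphi[i] for i in range(n))
        K = [Pphi[i] / denom for i in range(n)]         # update gain
        err = y - sum(theta[i] * phi[i] for i in range(n))
        for i in range(n):
            theta[i] += K[i] * err
        for i in range(n):
            for j in range(n):
                P[i][j] = (P[i][j] - K[i] * Pphi[j]) / forgetting
        return list(theta)

    return update

# Identify a plant with true a = 0.9, b = 0.5 from simulated input/output data.
update, y, theta = make_rls(), 0.0, None
for k in range(200):
    u = 1.0 if (k // 20) % 2 == 0 else -1.0             # square-wave excitation
    y_next = 0.9 * y + 0.5 * u
    theta = update([y, u], y_next)
    y = y_next
print([round(t, 3) for t in theta])                     # approaches [0.9, 0.5]
```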

Journal ArticleDOI
01 Jan 1987
TL;DR: The concepts of a server-network methodology that includes inter-server communication, the provision of and subscription to utility services, and the data-driven control-flow (DDCF) of server networks are presented, which makes the remote dynamic reconfiguration of server networks possible through graphical programming.
Abstract: Server networks are generalized networks of asynchronous software processes interconnected by prescribed communication links according to prescribed protocols. As large-system-integration tools, they facilitate a divide-and-conquer approach, separating large software system development problems into a set of smaller problems programmed-in-the-small via standard software techniques within the confines of each server's virtual machine. In addition, server networks, programmed-in-the-large, knit together the solutions to these smaller problems. This paper presents the concepts of a server-network methodology that includes inter-server communication, the provision of and subscription to utility services, and the data-driven control-flow (DDCF) of server networks, which makes the remote dynamic reconfiguration of server networks possible through graphical programming. A server-network generator (SNG), which is an expert system that facilitates the graphical configuration, modification, and archival of server networks, is presented. Our APL2 implementation of server networks, including the implementation of utility services as generic building blocks, is described. Enhancements to the APL2 system services, which would benefit the future of server-network research and its application to production system management through horizontal integration of the factory floor and cooperative processing among networks of mainframes and engineering workstations running the VM/370 operating system, are presented.

Proceedings ArticleDOI
01 Dec 1987
TL;DR: In this article, a simple discrete adaptive control procedure that maintains global stability and robustness with respect to boundedness in the presence of any bounded input or output disturbances is presented, restricted to the class of systems that can be stabilized via some known output feedback configuration.
Abstract: This paper presents a simple discrete adaptive control procedure that maintains global stability and robustness with respect to boundedness in the presence of any bounded input or output disturbances. The order of the plant and the pole-excess may be large and unkown, while the order of the controller may be very low. The present algorithm is restricted to the class of systems that can be stabilized via some known output feedback configuration, yet it may provide a simple adaptive control solution to most practical problems that appear in the control literature and it may be especially effective in large-scale systems.