
Showing papers on "Control reconfiguration published in 2000"


Journal Article
TL;DR: Dynamic power management (DPM) is a design methodology for dynamically reconfiguring systems to provide the requested services and performance levels with a minimum number of active components or a minimum load on such components as mentioned in this paper.
Abstract: Dynamic power management (DPM) is a design methodology for dynamically reconfiguring systems to provide the requested services and performance levels with a minimum number of active components or a minimum load on such components. DPM encompasses a set of techniques that achieves energy-efficient computation by selectively turning off (or reducing the performance of) system components when they are idle (or partially unexploited). In this paper, we survey several approaches to system-level dynamic power management. We first describe how systems employ power-manageable components and how the use of dynamic reconfiguration can impact the overall power consumption. We then analyze DPM implementation issues in electronic systems, and we survey recent initiatives in standardizing the hardware/software interface to enable software-controlled power management of hardware components.
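One of the simplest techniques in the DPM family this survey covers is a fixed-timeout shutdown policy: a component stays active for a timeout after its last request, then sleeps until the next request arrives, paying a wake-up energy penalty. A minimal sketch of the energy accounting, with illustrative power and energy numbers not taken from the paper:

```python
def timeout_policy_energy(idle_gaps, timeout, p_active=1.0, p_sleep=0.1, e_wake=0.5):
    """Energy consumed across idle gaps under a fixed-timeout shutdown policy.

    For each idle gap (seconds): the device stays active until `timeout`
    elapses, then sleeps for the remainder of the gap and pays a wake-up
    energy penalty on the next request.  Powers/energies are illustrative.
    """
    energy = 0.0
    for gap in idle_gaps:
        if gap <= timeout:
            energy += gap * p_active          # never slept during this gap
        else:
            energy += timeout * p_active      # waiting for the timeout to expire
            energy += (gap - timeout) * p_sleep
            energy += e_wake                  # cost of reactivation
    return energy
```

Sweeping `timeout` over a trace of idle gaps shows the break-even trade-off such policies must balance: too short a timeout wastes wake-up energy, too long a timeout wastes active power.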

1,181 citations


Journal ArticleDOI
TL;DR: This paper describes how systems employ power-manageable components and how the use of dynamic reconfiguration can impact the overall power consumption, and survey recent initiatives in standardizing the hardware/software interface to enable software-controlled power management of hardware components.
Abstract: Dynamic power management (DPM) is a design methodology for dynamically reconfiguring systems to provide the requested services and performance levels with a minimum number of active components or a minimum load on such components. DPM encompasses a set of techniques that achieves energy-efficient computation by selectively turning off (or reducing the performance of) system components when they are idle (or partially unexploited). In this paper, we survey several approaches to system-level dynamic power management. We first describe how systems employ power-manageable components and how the use of dynamic reconfiguration can impact the overall power consumption. We then analyze DPM implementation issues in electronic systems, and we survey recent initiatives in standardizing the hardware/software interface to enable software-controlled power management of hardware components.

1,138 citations


Journal ArticleDOI
TL;DR: It is demonstrated that the entire optical network design problem can be considerably simplified and made computationally tractable, and that terminating the optimization within the first few iterations of the branch-and-bound method provides high-quality solutions.
Abstract: We present algorithms for the design of optimal virtual topologies embedded on wide-area wavelength-routed optical networks. The physical network architecture employs wavelength-conversion-enabled wavelength-routing switches (WRS) at the routing nodes, which allow the establishment of circuit-switched all-optical wavelength-division multiplexed (WDM) channels, called lightpaths. We assume packet-based traffic in the network, such that a packet travelling from its source to its destination may have to multihop through one or more such lightpaths. We present an exact integer linear programming (ILP) formulation for the complete virtual topology design, including choice of the constituent lightpaths, routes for these lightpaths, and intensity of packet flows through these lightpaths. By minimizing the average packet hop distance in our objective function and by relaxing the wavelength-continuity constraints, we demonstrate that the entire optical network design problem can be considerably simplified and made computationally tractable. Although an ILP may take an exponential amount of time to obtain an exact optimal solution, we demonstrate that terminating the optimization within the first few iterations of the branch-and-bound method provides high-quality solutions. We ran experiments using the CPLEX optimization package on the NSFNET topology, a subset of the PACBELL network topology, as well as a third random topology to substantiate this conjecture. Minimizing the average packet hop distance is equivalent to maximizing the total network throughput under balanced flows through the lightpaths. The problem formulation can be used to design a balanced network, such that the utilizations of both transceivers and wavelengths in the network are maximized, thus reducing the cost of the network equipment. 
We analyze the trade-offs in budgeting of resources (transceivers and switch sizes) in the optical network, and demonstrate how an improperly designed network may have low utilization of any one of these resources. We also use the problem formulation to provide a reconfiguration methodology in order to adapt the virtual topology to changing traffic conditions.
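The objective the ILP minimizes, the average packet hop distance, is the traffic-weighted mean of shortest-path lengths (counted in lightpaths) over the virtual topology. A small sketch of that metric; the topology and traffic values used to exercise it are hypothetical:

```python
from collections import deque

def avg_hop_distance(lightpaths, traffic):
    """Traffic-weighted mean shortest-path length, in lightpath hops,
    over a directed virtual topology given as (src, dst) lightpaths."""
    adj = {}
    for a, b in lightpaths:
        adj.setdefault(a, []).append(b)

    def hops(src, dst):
        # breadth-first search on the lightpath graph
        dist, q = {src: 0}, deque([src])
        while q:
            u = q.popleft()
            if u == dst:
                return dist[u]
            for v in adj.get(u, []):
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        return None  # unreachable

    total_hops = total_traffic = 0.0
    for (s, d), load in traffic.items():
        h = hops(s, d)
        if h is None:
            raise ValueError(f"no lightpath route from {s} to {d}")
        total_hops += load * h
        total_traffic += load
    return total_hops / total_traffic
```

Minimizing this quantity with fixed total offered traffic is what makes the objective equivalent to maximizing throughput under balanced lightpath flows, as the abstract states.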

486 citations


Journal ArticleDOI
TL;DR: To help investigate the viability of connected FPGA systems, the authors designed their own architecture called Garp and experimented with running applications on it, investigating whether Garp's design enables automatic, fast, effective compilation across a broad range of applications.
Abstract: Various projects and products have been built using off-the-shelf field-programmable gate arrays (FPGAs) as computation accelerators for specific tasks. Such systems typically connect one or more FPGAs to the host computer via an I/O bus. Some have shown remarkable speedups, albeit limited to specific application domains. Many factors limit the general usefulness of such systems. Long reconfiguration times prevent the acceleration of applications that spread their time over many different tasks. Low-bandwidth paths for data transfer limit the usefulness of such systems to tasks that have a high computation-to-memory-bandwidth ratio. In addition, standard FPGA tools require hardware design expertise which is beyond the knowledge of most programmers. To help investigate the viability of connected FPGA systems, the authors designed their own architecture called Garp and experimented with running applications on it. They are also investigating whether Garp's design enables automatic, fast, effective compilation across a broad range of applications. They present their results in this article.

478 citations


Journal ArticleDOI
TL;DR: DynamicTAO as mentioned in this paper is a CORBA-compliant reflective ORB that supports dynamic configuration and maintains an explicit representation of its own internal structure and uses it to carry out run time customization safely.
Abstract: Conventional middleware systems fail to address important issues related to dynamism. Modern computer systems have to deal not only with heterogeneity in the underlying hardware and software platforms but also with highly dynamic environments. Mobile and distributed applications are greatly affected by dynamic changes in environment characteristics such as security constraints and resource availability. Existing middleware is not prepared to react to these changes. In many cases, application developers know when adaptive changes in communication and security strategies would improve system performance. But often, they are not able to benefit from this because the middleware lacks the mechanisms to support monitoring (to detect when adaptation should take place) and on-the-fly reconfiguration. dynamicTAO is a CORBA-compliant reflective ORB that supports dynamic configuration. It maintains an explicit representation of its own internal structure and uses it to carry out run-time customization safely. After describing dynamicTAO's design and implementation, we discuss our experience with the development of two systems benefiting from the reflective nature of our ORB: a flexible monitoring system for distributed objects and a mechanism for enforcing access control based on dynamic security policies.

299 citations


Journal ArticleDOI
TL;DR: It is far from universally true that it is easier to switch to the weaker task, and inhibition of the stronger task-set is a strategy used only in the special case of extreme inequality in strength, or its consequences for later performance may be masked by slower post-stimulus control operations for more complex tasks.
Abstract: Switching between two tasks afforded by the same stimuli results in slower reactions and more errors on the first stimulus after the task changes. This "switch cost" is reduced, but not usually eliminated, by the opportunity to prepare for a task switch. While there is agreement that this preparation effect indexes a control process performed before the stimulus, the "residual" cost has been attributed to several sources: to a control process essential for task-set reconfiguration that can be carried out only after the stimulus onset, to probabilistic failure to engage in preparation prior to the stimulus, and to two kinds of priming from previous trials: positive priming of the now-irrelevant task-set and inhibition of the now-relevant task-set. The main evidence for the carry-over of inhibition is the observation that it is easier to switch from the stronger to the weaker of a pair of tasks afforded by the stimulus than vice versa. We survey available data on interactions between task switching and three manipulations of relative task strength: pre-experimental experience, stimulus-response compatibility, and intra-experimental practice. We conclude that it is far from universally true that it is easier to switch to the weaker task. Either inhibition of the stronger task-set is a strategy used only in the special case of extreme inequality in strength, or its consequences for later performance may be masked by slower post-stimulus control operations for more complex tasks. Inhibitory priming may also be stimulus-specific.

284 citations


Proceedings ArticleDOI
01 Jun 2000
TL;DR: A hardware/software partitioning algorithm that performs fine-grained partitioning of an application to execute on the combined CPU and datapath and optimizes the global application execution time, including the software and hardware execution times, communication time anddatapath reconfiguration time.
Abstract: In this paper we describe a new hardware/software partitioning approach for embedded reconfigurable architectures consisting of a general-purpose processor (CPU), a dynamically reconfigurable datapath (e.g. an FPGA), and a memory hierarchy. We have developed a framework called Nimble that automatically compiles system-level applications specified in C to executables on the target platform. A key component of this framework is a hardware/software partitioning algorithm that performs fine-grained partitioning (at loop and basic-block levels) of an application to execute on the combined CPU and datapath. The partitioning algorithm optimizes the global application execution time, including the software and hardware execution times, communication time and datapath reconfiguration time. Experimental results on real applications show that our algorithm is effective in rapidly finding close to optimal solutions.

280 citations


Proceedings ArticleDOI
27 Nov 2000
TL;DR: A weighted clustering algorithm (WCA) which takes into consideration the ideal degree, transmission power, mobility and battery power of a mobile node to maintain the stability of the network, thus lowering the computation and communication costs associated with it.
Abstract: We consider a multi-cluster, multi-hop packet radio network architecture for wireless systems which can dynamically adapt itself with the changing network configurations. Due to the dynamic nature of the mobile nodes, their association and dissociation to and from clusters perturb the stability of the system, and hence a reconfiguration of the system is unavoidable. At the same time it is vital to keep the topology stable as long as possible. The clusterheads, which form a dominant set in the network, decide the topology and are responsible for its stability. In this paper, we propose a weighted clustering algorithm (WCA) which takes into consideration the ideal degree, transmission power, mobility and battery power of a mobile node. We try to keep the number of nodes in a cluster around a pre-defined threshold to facilitate the optimal operation of the medium access control (MAC) protocol. Our clusterhead election procedure is not periodic as in earlier research, but adapts based on the dynamism of the nodes. This on-demand execution of WCA aims to maintain the stability of the network, thus lowering the computation and communication costs associated with it. Simulation experiments are conducted to evaluate the performance of WCA in terms of the number of clusterheads, reaffiliation frequency and dominant set updates. Results show that the WCA performs better than the existing algorithms and is also tunable to different types of ad hoc networks.
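The weighted-sum structure of the WCA election can be sketched as follows; the weight values, metric names, and ideal degree below are illustrative stand-ins, not the paper's calibration. The node with the smallest combined weight is elected clusterhead:

```python
def clusterhead_weights(nodes, ideal_degree=3, w=(0.7, 0.2, 0.05, 0.05)):
    """Combined weight per node: a weighted sum of (deviation of the node
    degree from the ideal degree, total distance to neighbours, mobility,
    accumulated clusterhead time as a proxy for battery drain).

    Weights and metric names are illustrative assumptions; the node with
    the SMALLEST weight is the best clusterhead candidate.
    """
    w1, w2, w3, w4 = w
    weights = {}
    for name, n in nodes.items():
        delta = abs(n["degree"] - ideal_degree)
        weights[name] = (w1 * delta + w2 * n["dist_sum"]
                         + w3 * n["mobility"] + w4 * n["ch_time"])
    return weights

def elect_clusterhead(nodes, **kw):
    """On-demand election: pick the minimum-weight node."""
    weights = clusterhead_weights(nodes, **kw)
    return min(weights, key=weights.get)
```

Because the election is on-demand rather than periodic, this computation would run only when node dynamism perturbs the current dominant set.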

279 citations


Journal ArticleDOI
TL;DR: In this paper, a fault estimation and compensation method is proposed to compensate for actuator and sensor faults in highly automated systems. The method reaches its limits, however, when there is a complete loss of an actuator.
Abstract: The general fault-tolerant control method described in the article addresses actuator and sensor faults, which often affect highly automated systems. These faults correspond to a loss of actuator effectiveness or faulty sensor measurements. After describing these faults, a fault estimation and compensation method is proposed. In addition to providing information to operators concerning the system operating conditions, the fault diagnosis module is especially important in fault-tolerant control systems, where one needs to know exactly which element is faulty in order to react safely. The method's ability to compensate for such faults is illustrated by applying it to a winding machine, which represents a subsystem of many industrial systems. The results show that once the fault is detected and isolated, it is easy to reduce its effect on the system, and process control is resumed with degraded performance close to nominal. Thus, stopping the system immediately can be avoided. However, the limits of this method are reached when there is a complete loss of an actuator. In this case, only hardware redundancy is effective and could ensure performance reliability. The method proposed here assumes the availability of the state variables for measurement.
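A common way to model the loss-of-effectiveness fault the abstract describes is multiplicative: the faulty actuator delivers (1 − α)·u for a commanded u. Once α is estimated by the diagnosis module, scaling the command restores the nominal effort, and the α → 1 case reproduces the complete-loss limit the abstract notes. A hedged sketch (the function name and cutoff value are assumptions, not the paper's):

```python
def compensate_actuator(u_nominal, alpha_hat, alpha_max=0.95):
    """Compensate an estimated loss of actuator effectiveness.

    The faulty actuator delivers (1 - alpha) * u for a commanded u.
    Scaling the command by 1 / (1 - alpha_hat) restores the nominal
    effort -- but only while some effectiveness remains.  A complete
    loss (alpha -> 1) cannot be compensated this way; only hardware
    redundancy helps, matching the limit stated in the abstract.
    """
    if alpha_hat >= alpha_max:
        raise ValueError("near-total actuator loss: hardware redundancy required")
    return u_nominal / (1.0 - alpha_hat)
```

With a 50% effectiveness loss, commanding twice the nominal effort makes the delivered effort equal to the nominal one, which is the "degraded performance close to nominal" behaviour reported for the winding machine.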

269 citations


Proceedings ArticleDOI
04 Apr 2000
TL;DR: With the proposed method, for any input load conditions of the system, the optimum switching configuration can automatically be identified within a reasonable computer time and hence the method can be effectively employed for continuous reconfiguration for loss reduction.
Abstract: Network reconfiguration for loss minimization is the determination of switching-options that minimizes the power losses for a particular set of loads on a distribution system. In this paper, a novel method is proposed by formulating an algorithm to reconfigure distribution networks for loss minimization. An efficient technique is used to determine the switching combinations, select the status of the switches, and find the best combination of switches for minimum loss. In the first stage of the proposed algorithm, a limited number of switching combinations is generated and the best switching combination is determined. In the second stage, an extensive search is employed to find out any other switching combination that may give rise to minimum loss compared to the loss obtained in the first stage. The proposed method has been tested on a 33-bus system, and the test results indicate that it is able to determine the appropriate switching-options for optimal (or near optimal) configuration with less computation. The results have been compared with those of established methods reported earlier and a comparative study is presented. With the proposed method, for any input load conditions of the system, the optimum switching configuration can automatically be identified within a reasonable computer time and hence the method can be effectively employed for continuous reconfiguration for loss reduction. The method can be effectively used to plan and design power systems before actually implementing the distribution networks for locating the tie-switches and providing the minimum number of sectionalizing switches in the branches to reduce installation and switching costs.
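At its core, loss-minimizing reconfiguration scores each candidate switch configuration by the total I²R loss over the closed branches and keeps the best. The toy sketch below brute-forces all combinations on a fixed-current branch model; real feeders recompute the power flow for every configuration, which this deliberately omits:

```python
from itertools import combinations

def branch_losses(branches, open_switches):
    """Total I^2 * R loss over closed branches.  Toy radial model:
    each branch (name, R, I) carries a fixed current when closed."""
    return sum(i * i * r for name, r, i in branches if name not in open_switches)

def best_configuration(branches, switchable, n_open):
    """Try opening every combination of n_open switchable branches and
    return the minimum-loss combination -- a brute-force stand-in for
    the paper's two-stage limited-generation-plus-extensive search."""
    best = min(combinations(switchable, n_open),
               key=lambda combo: branch_losses(branches, set(combo)))
    return set(best), branch_losses(branches, set(best))
```

The paper's two-stage structure exists precisely because this exhaustive search does not scale; the first stage prunes the combinations before the extensive second-stage check.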

264 citations


Journal ArticleDOI
TL;DR: In this paper, controllability tests and motion control algorithms for underactuated mechanical control systems on Lie groups with Lagrangian equal to kinetic energy were provided, and two algebraic tests were derived in terms of the symmetric product and the Lie bracket of the input vector fields.
Abstract: We provide controllability tests and motion control algorithms for underactuated mechanical control systems on Lie groups with Lagrangian equal to kinetic energy. Examples include satellite and underwater vehicle control systems with the number of control inputs less than the dimension of the configuration space. Local controllability properties of these systems are characterized, and two algebraic tests are derived in terms of the symmetric product and the Lie bracket of the input vector fields. Perturbation theory is applied to compute approximate solutions for the system under small-amplitude forcing; in-phase signals play a crucial role in achieving motion along symmetric product directions. Motion control algorithms are then designed to solve problems of point-to-point reconfiguration, static interpolation and exponential stabilization. We illustrate the theoretical results and the algorithms with applications to models of planar rigid bodies, satellites and underwater vehicles.

Journal ArticleDOI
TL;DR: Examination of key interrelated technologies that should be developed and implemented to achieve reconfigurable manufacturing system characteristics including modularity, integrability, customisation, convertibility and diagnosability.
Abstract: A reconfigurable manufacturing system (RMS) is designed for rapid adjustment of production capacity and functionality in response to new market conditions and new process technology. It has several distinct characteristics including modularity, integrability, customisation, convertibility and diagnosability. There are a number of key interrelated technologies that should be developed and implemented to achieve these characteristics. This paper examines and identifies these technologies. After a brief description of the RMSs and their goals, aspects of reconfiguration (reconfigurable system, software, controller, machine, and process) are explained; this provides one with a better understanding of the enabling technologies of RMSs. Some of the issues related to the technology requirements of RMSs at the system and machine design levels, and ramp-up time reduction are then explained. The paper concludes with descriptions of some of the future research directions for RMSs.

Patent
11 Jul 2000
TL;DR: In this paper, a monitor agent is used to track the IP and MAC addresses of networked devices as well as port information, and if a device fails, maintenance personnel make an in-field replacement of the failed device and the monitor agent automatically reassigns the IP address to the replacement device.
Abstract: The present invention is for automatic reconfiguration of industrial networked devices. More particularly, the system described herein facilitates use of TCP/IP networks, such as Ethernet, as an alternative for industrial fieldbus or device buses by removing the need to perform significant reconfiguration of devices such as I/O modules, sensors, or transducers under field replacement situations. The present invention uses a monitor agent to track the IP and MAC addresses of networked devices as well as port information. If a device fails, maintenance personnel make an in-field replacement of the failed device and the monitor agent automatically reassigns the IP address to the replacement device.

Journal ArticleDOI
TL;DR: This paper concerns an implementation of a fuzzy logic controller (FLC) on a reconfigurable field-programmable gate array (FPGA) system, and each module is implemented individually on the FLC automatic design and implementation system, which is an integrated development environment for performing many subtasks.
Abstract: This paper concerns an implementation of a fuzzy logic controller (FLC) on a reconfigurable field-programmable gate array (FPGA) system. In the proposed implementation method, the FLC is partitioned into many temporally independent functional modules, and each module is implemented individually on the FLC automatic design and implementation system, which is an integrated development environment for performing many subtasks such as automatic VHSIC hardware description language description, FPGA synthesis, optimization, placement and routing, and downloading. Each implemented module forms a downloadable hardware object that is ready to configure the FPGA chip. Then, the FPGA chip is consequently reconfigured with one module at a time by using the run-time reconfiguration method. This implementation method is effective when a single FPGA chip cannot fit the FLC due to the limited size of its constituent cells. We test the proposed implementation method by building the FLC for the truck backer-upper control on VCC Corporation's EVC-1 reconfigurable FPGA board directly.

Proceedings ArticleDOI
25 Sep 2000
TL;DR: The design of optimal sensor networks thus reduces to finding pseudo-minimal sensor sets such that the mean time before losing the observability property is larger than a pre-defined value.
Abstract: The selection of measurements is one of the most important problems in the design of process instrumentation. This paper deals with the design of sensor networks such that the observability of the variables, which are necessary for the process control, remains satisfied in the presence of sensor failures. Pseudo-minimal and minimal sensor sets are organized into an oriented graph which contains all the possible reconfiguration paths for which those variables remain observable. A bottom-up analysis of this graph allows one to compute reliability functions which evaluate the robustness of the observability property with respect to sensor failures. The design of optimal sensor networks thus reduces to finding pseudo-minimal sensor sets such that the mean time before losing the observability property is larger than a pre-defined value.

Journal ArticleDOI
TL;DR: In this article, the authors introduce tools to analyze and explore structure and other fundamental properties of an automated system such that any redundancy in the process can be fully utilized to enhance safety and availability.

Book ChapterDOI
01 Jan 2000
TL;DR: Metaglue is described, an extension to the Java programming language for building software agent systems for controlling Intelligent Environments that has been specifically designed to address these needs.
Abstract: Intelligent Environments (IEs) have specific computational properties that generally distinguish them from other computational systems. They have large numbers of hardware and software components that need to be interconnected. Their infrastructures tend to be highly distributed, reflecting both the distributed nature of the real world and the IEs’ need for large amounts of computational power. They also tend to be highly dynamic and require reconfiguration and resource management on the fly as their components and inhabitants change, and as they adjust their operation to suit the learned preferences of their users. Because IEs generally have multimodal interfaces, they also usually have high degrees of parallelism for resolving multiple, simultaneous events. Finally, debugging IEs presents unique challenges to their creators, not only because of their distributed parallelism, but also because of the difficulty of pinning down their “state” in a formal computational sense. This paper describes Metaglue, an extension to the Java programming language for building software agent systems for controlling Intelligent Environments that has been specifically designed to address these needs. Metaglue has been developed as part of the MIT Artificial Intelligence Lab’s Intelligent Room Project, which has spent the past four years designing Intelligent Environments for research in Human-Computer Interaction.

Patent
11 Feb 2000
TL;DR: In this paper, a method and apparatus for controlling an electric power distribution system including the use and coordination of information conveyed over communications to dynamically modify the protection characteristics of distribution devices is described.
Abstract: Method and apparatus is disclosed for controlling an electric power distribution system including the use and coordination of information conveyed over communications to dynamically modify the protection characteristics of distribution devices (including but not limited to substation breakers, reclosing substation breakers, and line reclosers). In this way, overall protection and reconfigurability of the distribution system or “team” is greatly enhanced. According to additional aspects of the invention, devices within the system recognize the existence of cooperating devices outside of the team's domain of direct control, managing information from these devices such that more intelligent local decision making and inter-team coordination can be performed. This information may include logical status indications, control requests, analog values or other data.

Proceedings ArticleDOI
24 Apr 2000
TL;DR: Model-based feedforward laws are derived for the two basic motion tasks of state-to-state transfer in given time and exact trajectory execution and a new solution to the finite-time reconfiguration problem for a one-link flexible arm is presented.
Abstract: We present a survey of the nominal motion generation schemes and of the associated simple control solutions for robots displaying flexibility effects. Two model classes are considered: robots with elastic joints but rigid links, and robots with flexible links. Model-based feedforward laws are derived for the two basic motion tasks of state-to-state transfer in given time and exact trajectory execution. In particular, we present a new solution to the finite-time reconfiguration problem for a one-link flexible arm. Finally, we use the developed commands into a simple feedback scheme that requires only standard sensors on the motors.

Proceedings ArticleDOI
17 Apr 2000
TL;DR: An on-line, multi-level fault-tolerant (FT) technique is presented for system functions and applications mapped to partially and dynamically reconfigurable FPGAs, based on the roving self-testing areas (STARs) fault detection/location strategy.
Abstract: In this paper we present an on-line, multi-level fault-tolerant (FT) technique for system functions and applications mapped to partially and dynamically reconfigurable FPGAs. Our method is based on the roving self-testing areas (STARs) fault detection/location strategy presented in Abramovici et al. (1999). In STARs, partial reconfiguration is used to modify the configuration of the area under test without affecting the configuration of the system function, and dynamic reconfiguration allows uninterrupted execution of the system function while reconfiguration takes place. In this paper we take this one step further. Once a fault (or multiple faults) is detected, we dynamically reconfigure the working-area application around the fault with no additional system-function interruption (other than the interruption when a STAR moves to a new location). We also apply the concept of partially usable blocks to increase fault tolerance. Our method has been successfully implemented and demonstrated on the ORCA 2CA series FPGAs from Lucent Technologies.

Journal ArticleDOI
01 Nov 2000
TL;DR: In this article, a refined GA for a distribution feeder reconfiguration to reduce losses is presented, where the initial population is determined by opening the switches with the lowest current in every mesh derived in the optimal power flow, with all switches closed.
Abstract: A refined genetic algorithm (RGA) for distribution feeder reconfiguration to reduce losses is presented. The problem is optimised in a stochastic searching manner similar to that of the conventional GA. The initial population is determined by opening the switches with the lowest current in every mesh derived in the optimal power flow (OPF), with all switches closed. Solutions provided by OPF are generally the optimum or near-optimal solutions for most cases, so prematurity could occur. To avoid prematurity, the conventional crossover and mutation scheme was refined by a competition mechanism, so the dilemma of choosing a proper probability for crossover and mutation can be avoided. The two processes were also combined into one to save computation time. Tabu lists with heuristic rules were also employed in the searching process to enhance performance. The new approach provides an overall switching decision instead of a successive pattern, which tends to converge to a local optimum. Many tests were conducted and the results have shown that the RGA has advantages over many other previously developed algorithms.
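The seeding idea described in the abstract, opening the lowest-current switch in every mesh found with all switches closed, can be sketched as below; mesh membership and branch currents are assumed to come from a prior power-flow step:

```python
def seed_configuration(meshes, branch_currents):
    """Initial GA individual: in each mesh (loop), open the switch
    carrying the lowest current.  This is the OPF-derived seeding the
    abstract describes; the currents here are illustrative inputs from
    an assumed earlier power-flow computation."""
    return {min(mesh, key=lambda sw: branch_currents[sw]) for mesh in meshes}
```

Seeding near the OPF optimum is what risks premature convergence, which is why the refined crossover/mutation competition mechanism then has to keep diversity in the population.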

01 Jan 2000
TL;DR: An important feature in the Xilinx VirtexTM architecture is the ability to reconfigure a portion of the FPGA while the remainder of the design is still operational.
Abstract: An important feature in the Xilinx Virtex architecture is the ability to reconfigure a portion of the FPGA while the remainder of the design is still operational. Partial reconfiguration is useful for applications that require the loading of different designs into the same area of the device or the flexibility to change portions of a design without having to either reset or completely reconfigure the entire device. With this capability, entirely new application areas become possible.

Journal ArticleDOI
01 May 2000
TL;DR: It is proposed that a subset of the tasks executing on the FPGA be rearranged when to do so allows the next pending task to be processed sooner, and methods are described and evaluated for overcoming the NP-hard problems of identifying feasible rearrangements and scheduling the rearrangements when moving tasks are reloaded from off-chip.
Abstract: Field-programmable gate arrays (FPGAs) which allow partial reconfiguration at run time can be shared among multiple independent tasks. When the sequence of tasks to be performed is unpredictable, the FPGA controller needs to make allocation decisions online. Since online allocation suffers from fragmentation, tasks can end up waiting despite there being sufficient, albeit noncontiguous, resources available to service them. The time to complete tasks is consequently longer and the utilisation of the FPGA is lower than it could be. It is proposed that a subset of the tasks executing on the FPGA be rearranged when to do so allows the next pending task to be processed sooner. Methods are described and evaluated for overcoming the NP-hard problems of identifying feasible rearrangements and scheduling the rearrangements when moving tasks are reloaded from off-chip.
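The fragmentation problem and the proposed remedy can be illustrated with a 1D column model of the FPGA: first-fit placement fails when the free columns are noncontiguous, and compacting the running tasks (a crude stand-in for the paper's feasible-rearrangement search) lets the pending task start sooner:

```python
def place(fpga, width, task_id):
    """First-fit: find `width` contiguous free columns; claim them and
    return the start index, or return None if no such run exists."""
    run = 0
    for i, cell in enumerate(fpga):
        run = run + 1 if cell is None else 0
        if run == width:
            start = i - width + 1
            fpga[start:i + 1] = [task_id] * width
            return start
    return None

def place_with_rearrangement(fpga, width, task_id):
    """If first-fit fails but enough free columns exist in total,
    compact the running tasks to one side and retry.  Real
    rearrangement must also schedule the moves so reloaded tasks
    lose minimal execution time, which this sketch ignores."""
    if place(fpga, width, task_id) is not None:
        return True
    if fpga.count(None) >= width:
        tasks = [c for c in fpga if c is not None]
        fpga[:] = tasks + [None] * (len(fpga) - len(tasks))
        return place(fpga, width, task_id) is not None
    return False
```

In the fragmented array `["a", None, "b", None, None]`, a width-3 task cannot be placed by first-fit alone even though three columns are free; after compaction it fits, which is exactly the utilisation gain the abstract argues for.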

Proceedings ArticleDOI
25 Oct 2000
TL;DR: The proposed methodology for SEU injection exploits FPGAs and, contrary to the most common fault injection techniques, realises the injection directly in the reconfigurable hardware, taking advantage of run-time reconfiguration capabilities of the device.
Abstract: In this paper, a new methodology for the injection of single event upsets (SEU) in memory elements is introduced. SEUs in memory elements can occur for many reasons (e.g. particle hits, radiation) and at any time. It therefore becomes important to examine the behaviour of circuits when an SEU occurs in them. Reconfigurable hardware (especially FPGAs) was shown to be suitable to emulate the behaviour of a logic design and to realise fault injection. The proposed methodology for SEU injection exploits FPGAs and, contrary to the most common fault injection techniques, realises the injection directly in the reconfigurable hardware, taking advantage of run-time reconfiguration capabilities of the device. In this case, no modification of the initial design description is needed to inject a fault, which avoids hardware overheads and dedicated synthesis, place and route phases.

Journal ArticleDOI
TL;DR: A means of measuring the level of redundancy in connection with feedback control is proposed by borrowing the notion of second-order modes; the control reconfigurability is calculated for two process models to show its relevance to the redundant actuating capabilities in the models.

Book ChapterDOI
01 Jan 2000
TL;DR: This paper describes two feeder reconfiguration algorithms for the purpose of service restoration and load balancing in a real-time operation environment that combine optimization techniques with heuristic rules and fuzzy logic for efficiency and robust performance.
Abstract: This paper describes two feeder reconfiguration algorithms for the purpose of service restoration and load balancing in a real-time operation environment. The developed methodologies combine optimization techniques with heuristic rules and fuzzy logic for efficiency and robust performance. Many practical operating concerns of feeder reconfiguration, and its coordination with other distribution automation applications, are also addressed. The developed algorithms have been implemented as production-grade software. Test results on PG&E distribution feeders show that the performance is efficient and robust.
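A minimal flavour of load balancing through reconfiguration can be given with a greedy branch-exchange sketch: repeatedly transfer a switchable load zone from the heaviest feeder to the lightest one whenever that reduces the overall imbalance. The feeder data and the acceptance rule are invented; the paper's algorithms additionally employ fuzzy logic and optimization techniques:

```python
# Zone loads (in amperes) per feeder; values are invented for illustration.
feeders = {"F1": [30, 25, 20], "F2": [10, 5], "F3": [15, 10]}

def imbalance(f):
    """Spread between the heaviest and lightest feeder totals."""
    totals = [sum(zones) for zones in f.values()]
    return max(totals) - min(totals)

def balance_step(f):
    """One greedy branch exchange; returns True if an improving move was made."""
    heavy = max(f, key=lambda k: sum(f[k]))
    light = min(f, key=lambda k: sum(f[k]))
    best = imbalance(f)
    for zone in list(f[heavy]):
        f[heavy].remove(zone); f[light].append(zone)   # tentative transfer
        if imbalance(f) < best:
            return True                                # keep the transfer
        f[light].remove(zone); f[heavy].append(zone)   # undo
    return False

while balance_step(feeders):      # iterate until no move improves balance
    pass
```

Starting from totals (75, 15, 25), one exchange moves the 30 A zone from F1 to F2, reducing the imbalance from 60 A to 20 A, after which no single transfer improves further.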

Journal ArticleDOI
TL;DR: It is argued that the goal is to find a set of controlled variables which, when kept at constant setpoints, indirectly lead to near-optimal operation with acceptable loss.

Proceedings ArticleDOI
13 Jul 2000
TL;DR: An overview of some key concepts of EHW is presented, along with a set of selected applications, including a fine-grained Field Programmable Transistor Array (FPTA) architecture for reconfigurable hardware.
Abstract: Evolvable Hardware (EHW) refers to hardware design and self-reconfiguration using evolutionary/genetic mechanisms. The paper presents an overview of some key concepts of EHW, also describing a set of selected applications. A fine-grained Field Programmable Transistor Array (FPTA) architecture for reconfigurable hardware is presented as an example of an initial effort toward evolution-oriented devices. Evolutionary experiments in simulation and with an FPTA chip in the loop demonstrate automatic synthesis of electronic circuits. Unconventional circuits, for which there are no textbook design guidelines, are particularly appealing for evolvable hardware. To illustrate this situation, we demonstrate here the evolution of circuits implementing parametrical connectives for fuzzy logics. In addition to synthesizing circuits for new functions, evolvable hardware can be used to preserve existing functions and achieve fault tolerance, determining circuit configurations that circumvent the faults. Further, we illustrate with an example how evolution can recover functionality lost due to an increase in temperature. In the particular case of space applications, these characteristics are extremely important for enabling spacecraft to survive harsh environments and to have long lives.
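The evolutionary loop underlying EHW can be illustrated with a toy (1+1) evolution strategy that evolves a 4-entry lookup-table "configuration" towards a target XOR truth table. The representation and parameters are invented for illustration and are far simpler than the paper's FPTA experiments:

```python
import random

random.seed(1)
TARGET = [0, 1, 1, 0]                    # XOR truth table for inputs (a, b)

def fitness(cfg):
    """Count outputs matching the target function."""
    return sum(c == t for c, t in zip(cfg, TARGET))

def mutate(cfg):
    """Flip one randomly chosen configuration bit."""
    child = cfg[:]
    child[random.randrange(len(child))] ^= 1
    return child

# (1+1) evolution strategy: keep the child if it is at least as fit.
cfg = [random.randint(0, 1) for _ in range(4)]   # random initial configuration
for generation in range(200):
    child = mutate(cfg)
    if fitness(child) >= fitness(cfg):
        cfg = child
    if fitness(cfg) == len(TARGET):
        break
```

Because flipping a wrong bit always raises fitness and flipping a correct bit is rejected, the search climbs monotonically to the target configuration; in real EHW the same loop evaluates candidate bitstreams on (or in simulation of) the reconfigurable device.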

Patent
29 Mar 2000
TL;DR: In this paper, an FPGA-based communications access point and system for reconfiguration of the FPGAs via a communications channel are described in various embodiments, one embodiment includes a physical interface circuit, a storage element (e.g., a RAM), and a configuration control circuit.
Abstract: An FPGA-based communications access point and system for reconfiguration of the FPGA via a communications channel are described in various embodiments. One embodiment includes a physical interface circuit, a storage element (e.g., a RAM), an FPGA, and a configuration control circuit. The physical interface circuit is arranged for connection to a communications channel and is coupled to the FPGA. The configuration control circuit includes a controlling circuit (e.g., a PLD) and a memory circuit (e.g., a PROM). The PROM is configured with an initial configuration bitstream for the FPGA. The initial configuration bitstream implements both a communications protocol and a control function that writes configuration bits received by the FPGA via the communications channel to the RAM. The control function also generates a reconfiguration signal responsive to a first predetermined condition. The PLD is configured to load the initial configuration bitstream from the PROM into the FPGA, and, responsive to the reconfiguration signal from the FPGA, to load a second configuration bitstream from the RAM into the FPGA. The control function may be configured to interact with standard network programs such as FTP (file transfer protocol) or custom programs.
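The described control flow can be sketched as a small state machine: boot the FPGA from the PROM image, let the running design buffer bits arriving over the channel into RAM, and reload the FPGA from RAM when the reconfiguration signal fires. All class and method names below are invented; this models only the protocol, not real FPGA configuration logic:

```python
class ReconfigController:
    """Toy model of the patent's PLD + PROM + RAM configuration scheme."""

    def __init__(self, prom_bitstream):
        self.prom = prom_bitstream       # fixed initial configuration bitstream
        self.ram = bytearray()           # staging area for downloaded bitstreams
        self.fpga = None                 # configuration currently loaded
        self.load_initial()

    def load_initial(self):
        """PLD power-up behaviour: configure the FPGA from the PROM."""
        self.fpga = bytes(self.prom)

    def receive(self, chunk):
        """Control function inside the FPGA: write channel data to RAM."""
        self.ram.extend(chunk)

    def reconfigure(self):
        """Reconfiguration signal: PLD reloads the FPGA from RAM."""
        self.fpga = bytes(self.ram)
        self.ram = bytearray()

ctrl = ReconfigController(b"\x01\x02")         # boots with the PROM image
ctrl.receive(b"\xAA"); ctrl.receive(b"\xBB")   # new bitstream over the channel
ctrl.reconfigure()                             # FPGA now runs the new design
```

The key design point mirrored here is that the initial PROM-resident design must itself implement the communications protocol and the RAM-write control function, so that every later design can be delivered over the channel.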

Journal ArticleDOI
TL;DR: In this paper, a study of U.K. voluntary organizations demonstrates relatively low uptake of the core networking technologies and applications essential to support the reconfiguration of key relationships in and around the organizations, and case studies of these organizations suggest they are using information and communication technologies to reshape internal relationships and reconfigure relationships externally.
Abstract: Electronic networking holds the promise of innovation for voluntary organizations as they seek to respond to deep shifts in the social, economic, and political spheres in which they operate. Evidence from our study of U.K. voluntary organizations demonstrates relatively low uptake of the core networking technologies and applications essential to support the reconfiguration of key relationships in and around the organizations. Friends of the Earth and the Samaritans are exceptions to this trend. Case studies of these organizations suggest they are using information and communication technologies to reshape internal relationships and reconfigure relationships externally. The extent to which the organizations are reconfiguring around intelligent campaigning and intelligent client service is tempered by their long-standing values.