
Showing papers on "Control reconfiguration published in 2011"


Journal ArticleDOI
TL;DR: This work investigates the role of modularity in human learning by identifying dynamic changes of modular organization spanning multiple temporal scales and develops a general statistical framework for the identification of modular architectures in evolving systems.
Abstract: Human learning is a complex phenomenon requiring flexibility to adapt existing brain function and precision in selecting new neurophysiological activities to drive desired behavior. These two attributes—flexibility and selection—must operate over multiple temporal scales as performance of a skill changes from being slow and challenging to being fast and automatic. Such selective adaptability is naturally provided by modular structure, which plays a critical role in evolution, development, and optimal network function. Using functional connectivity measurements of brain activity acquired from initial training through mastery of a simple motor skill, we investigate the role of modularity in human learning by identifying dynamic changes of modular organization spanning multiple temporal scales. Our results indicate that flexibility, which we measure by the allegiance of nodes to modules, in one experimental session predicts the relative amount of learning in a future session. We also develop a general statistical framework for the identification of modular architectures in evolving systems, which is broadly applicable to disciplines where network adaptability is crucial to the understanding of system performance.

1,529 citations


Journal ArticleDOI
TL;DR: A harmony search algorithm (HSA) is proposed to solve the network reconfiguration problem and obtain the optimal switching combination that results in minimum loss; the proposed method is observed to perform well compared to other methods in terms of solution quality.
Abstract: Electrical distribution network reconfiguration is a complex combinatorial optimization process aimed at finding a radial operating structure that minimizes the system power loss while satisfying operating constraints. In this paper, a harmony search algorithm (HSA) is proposed to solve the network reconfiguration problem and obtain the optimal switching combination in the network that results in minimum loss. The HSA is a recently developed algorithm conceptualized using the musical process of searching for a perfect state of harmony. It uses a stochastic random search instead of a gradient search, which eliminates the need for derivative information. Simulations are carried out on 33- and 119-bus systems in order to validate the proposed algorithm. The results are compared with other approaches available in the literature. It is observed that the proposed method performs well compared to the other methods in terms of solution quality.
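The HSA loop described in this abstract can be sketched in a few lines. The sketch below is a generic binary harmony search over open/closed switch states; the memory size, rates, and the toy `loss` objective are illustrative assumptions, not the paper's actual power-flow model.

```python
import random

def harmony_search(loss, n_switches, hms=10, hmcr=0.9, par=0.3, iters=500, seed=1):
    """Generic binary harmony search: keep a memory of candidate switch
    vectors, improvise a new one each round, and replace the worst member
    whenever the new harmony is better."""
    rng = random.Random(seed)
    memory = [[rng.randint(0, 1) for _ in range(n_switches)] for _ in range(hms)]
    for _ in range(iters):
        new = []
        for j in range(n_switches):
            if rng.random() < hmcr:            # memory consideration
                v = rng.choice(memory)[j]
                if rng.random() < par:         # pitch adjustment: flip the bit
                    v = 1 - v
            else:                              # pure stochastic random search
                v = rng.randint(0, 1)
            new.append(v)
        worst = max(range(hms), key=lambda i: loss(memory[i]))
        if loss(new) < loss(memory[worst]):    # keep the better harmony
            memory[worst] = new
    return min(memory, key=loss)

# toy stand-in objective: Hamming distance to a target switching scheme
target = [1, 0, 1, 1, 0, 0, 1, 0]
best = harmony_search(lambda x: sum(a != b for a, b in zip(x, target)), len(target))
```

Note that nothing here needs derivatives of the objective, which is the property the abstract highlights.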

397 citations


Journal ArticleDOI
TL;DR: New efforts to evaluate river restoration projects that use channel reconfiguration as a methodology for improving stream ecosystem structure and function are finding little evidence for measurable ecological improvement.
Abstract: River restoration is an increasingly common approach utilized to reverse past degradation of freshwater ecosystems and to mitigate the anticipated damage to freshwaters from future development and resource-extraction activities. While the practice of river restoration has grown exponentially over the last several decades, there has been little empirical evaluation of whether restoration projects individually or cumulatively achieve the legally mandated goals of improving the structure and function of streams and rivers. New efforts to evaluate river restoration projects that use channel reconfiguration as a methodology for improving stream ecosystem structure and function are finding little evidence for measurable ecological improvement. While designed channels may have less-incised banks and greater sinuosity than the degraded streams they replace, these reach-scale efforts do not appear to be effectively mitigating the physical, hydrological, or chemical alterations that are responsible for the loss of sensitive taxa and the declines in water quality that typically motivate restoration efforts. Here we briefly summarize this new literature, including the collection of papers within this Invited Feature, and provide our perspective on the limitations of current restoration.

388 citations


Journal ArticleDOI
TL;DR: A component-based modelling approach to system-software co-engineering of real-time embedded systems, in particular aerospace systems, centred around the standardized Architecture Analysis and Design Language (AADL) modelling framework is presented.
Abstract: This paper presents a component-based modelling approach to system-software co-engineering of real-time embedded systems, in particular aerospace systems. Our method is centred around the standardized Architecture Analysis and Design Language (AADL) modelling framework. We formalize a significant subset of AADL, incorporating its recent Error Model Annex for modelling faults and repairs. The major distinguishing aspects of this component-based approach are the possibility to describe nominal hardware and software operations, hybrid (and timing) aspects, as well as probabilistic faults and their propagation and recovery. Moreover, it supports dynamic (i.e. on-the-fly) reconfiguration of components and inter-component connections. The operational semantics gives a precise interpretation of specifications by providing a mapping onto networks of event-data automata. These networks are then subject to different kinds of formal analysis such as model checking, safety and dependability analysis and performance evaluation. Mature tool support realizes these analyses. The activities reported in this paper are carried out in the context of the correctness, modelling and performance of aerospace systems project, which is funded by the European Space Agency.

216 citations


Journal ArticleDOI
TL;DR: It is demonstrated that a host of reconfiguration problems derived from NP-complete problems are PSPACE-complete, while some are also NP-hard to approximate.

213 citations


Proceedings ArticleDOI
27 Feb 2011
TL;DR: This paper analyses different hardware sorting architectures in order to implement a highly scalable sorter for solving huge problems at high performance up to the GB range in linear time complexity and demonstrates how partial run-time reconfiguration can be used for saving almost half the FPGA resources or alternatively for improving the speed.
Abstract: This paper analyses different hardware sorting architectures in order to implement a highly scalable sorter for solving huge problems at high performance up to the GB range in linear time complexity. It will be proven that a combination of a FIFO-based merge sorter and a tree-based merge sorter results in the best performance at low cost. Moreover, we will demonstrate how partial run-time reconfiguration can be used for saving almost half the FPGA resources or alternatively for improving the speed. Experiments show a sustainable sorting throughput of 2 GB/s for problems fitting into the on-chip FPGA memory and 1 GB/s when using external memory. These values surpass the best published results on large problem sorting implementations on FPGAs, GPUs, and the Cell processor.
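The two merge-sorter styles compared in the paper share the same primitive: a two-input unit that repeatedly emits the smaller of two FIFO heads. A minimal software analogue is sketched below (the hardware streams values through FIFOs rather than materialising lists, so this illustrates the dataflow only):

```python
def fifo_merge(a, b):
    """Two-input merge unit: repeatedly emit the smaller of the two FIFO
    heads; each element is touched once, hence linear-time streaming."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    out.extend(a[i:])
    out.extend(b[j:])
    return out

def tree_merge_sort(runs):
    """Tree of merge units: pairwise-merge sorted runs, level by level,
    until a single fully sorted stream remains."""
    while len(runs) > 1:
        merged = []
        for k in range(0, len(runs), 2):
            if k + 1 < len(runs):
                merged.append(fifo_merge(runs[k], runs[k + 1]))
            else:
                merged.append(runs[k])     # odd run passes through unchanged
        runs = merged
    return runs[0]

runs = [[1, 5, 9], [2, 6], [0, 7, 8], [3, 4]]
result = tree_merge_sort(runs)
```

In hardware, all merge units of a tree level operate concurrently, which is what yields the sustained per-pass throughput the abstract reports.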

180 citations


Journal ArticleDOI
01 Jan 2011
TL;DR: An automation agent architecture for controlling physical components that integrates “on the fly” reconfiguration abilities on the low-level layer, enhancing not only the flexibility of each component's control software but also establishing the precondition for reconfiguring the entire manufacturing system.
Abstract: The reconfiguration of control software is regarded as an important ability to enhance the effectiveness and efficiency in future manufacturing systems. Agent technology is considered as a promising approach to provide reconfiguration abilities, but existing work has been focused mainly on the reconfiguration of higher layers concerned with production scheduling and planning. In this paper, we present an automation agent architecture for controlling physical components that integrates “on the fly” reconfiguration abilities on the low-level layer. Our approach is combined with an ontological representation of the low-level functionality at the high-level control layer, which is able to reason and initiate reconfiguration processes to modify the low-level control (LLC). As current control systems are mostly based on standards and principles that do not support reconfiguration, leading to rigid control software architectures, we base our approach on the promising Standard IEC 61499 for the LLC, extended by an innovative reconfiguration infrastructure. We demonstrate this approach with a case study of a reconfiguration process that modifies the LLC functionality provided by the automation agent of a physical component. Thereby, we obtain the ability to support numerous different LLC configurations without increasing the LLC's complexity. By applying our automation agent architecture, we not only enhance the flexibility of each component's control software, but also establish the precondition for reconfiguring the entire manufacturing system.

161 citations


Proceedings ArticleDOI
08 Jun 2011
TL;DR: This paper introduces a novel FPGA architecture with memristor-based reconfiguration (mrFPGA), based on the existing CMOS-compatible memristor fabrication process, and proposes an improved architecture that allows adaptive buffer insertion in interconnects to achieve more speedup.
Abstract: In this paper, we introduce a novel FPGA architecture with memristor-based reconfiguration (mrFPGA). The proposed architecture is based on the existing CMOS-compatible memristor fabrication process. The programmable interconnects of mrFPGA use only memristors and metal wires so that the interconnects can be fabricated over logic blocks, resulting in significant reduction of overall area and interconnect delay but without using a 3D die-stacking process. Using memristors to build up the interconnects can also provide capacitance shielding from unused routing paths and reduce interconnect delay further. Moreover we propose an improved architecture that allows adaptive buffer insertion in interconnects to achieve more speedup. Compared to the fixed buffer pattern in conventional FPGAs, the positions of inserted buffers in mrFPGA are optimized on demand. A complete CAD flow is provided for mrFPGA, with an advanced P&R tool named mrVPR that was developed for mrFPGA. The tool can deal with the novel routing structure of mrFPGA, the memristor shielding effect, and the algorithm for optimal buffer insertion. We evaluate the area, performance and power consumption of mrFPGA based on the 20 largest MCNC benchmark circuits. Results show that mrFPGA achieves 5.18x area savings, 2.28x speedup and 1.63x power savings. Further improvement is expected with combination of 3D technologies and mrFPGA.

146 citations


Book ChapterDOI
01 Jan 2011
TL;DR: This contribution reveals the main ideas, potential benefits and challenges for supporting invasive computing at the architectural, programming and compiler level, and gives an overview of required research topics rather than presenting mature solutions.
Abstract: A novel paradigm for designing and programming future parallel computing systems called invasive computing is proposed. The main idea and novelty of invasive computing is to introduce resource-aware programming support in the sense that a given program gets the ability to explore and dynamically spread its computations to neighbour processors in a phase called invasion, then to execute portions of code of high parallelism degree in parallel based on the available invasible region on a given multi-processor architecture. Afterwards, once the program terminates or if the degree of parallelism should be lower again, the program may enter a retreat phase, deallocate resources and resume execution again, for example, sequentially on a single processor. To support this idea of self-adaptive and resource-aware programming, not only new programming concepts, languages, compilers and operating systems are necessary but also revolutionary architectural changes in the design of Multi-Processor Systems-on-a-Chip must be provided so as to efficiently support invasion, infection and retreat operations involving concepts for dynamic processor, interconnect and memory reconfiguration. This contribution reveals the main ideas, potential benefits and challenges for supporting invasive computing at the architectural, programming and compiler level in the future. It serves to give an overview of required research topics rather than to present mature solutions.

144 citations


Journal ArticleDOI
TL;DR: An FPGA-based system architecture is studied and a cost model of Partial Reconfiguration (PR) is introduced to calculate the expected reconfiguration time and throughput; the model enables a user to evaluate PR and decide whether it is suitable for a certain application prior to entering the complex PR design flow.
Abstract: Fine-grain reconfigurable devices suffer from the time needed to load the configuration bitstream. Even for small bitstreams in partially reconfigurable FPGAs this time cannot be neglected. In this article we survey the performance of the factors that contribute to the reconfiguration speed. Then, we study an FPGA-based system architecture and with real experiments we produce a cost model of Partial Reconfiguration (PR). This model is introduced to calculate the expected reconfiguration time and throughput. In order to develop a realistic model we take into account all the physical components that participate in the reconfiguration process. We analyze the parameters that affect the generality of the model and the adjustments needed per system for error-free evaluation. We verify it with real measurements, and then we employ it to evaluate existing systems presented in previous publications. The percentage error of the cost model when comparing its results with the actual values of those publications varies from 36% to 63%, whereas existing works report differences up to two orders of magnitude. Present work enables a user to evaluate PR and decide whether it is suitable for a certain application prior to entering the complex PR design flow.
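A cost model of this kind reduces, at its core, to dividing the bitstream size by the throughput of the slowest component in the reconfiguration datapath. The sketch below illustrates only that bottleneck structure; the component throughputs and overhead figure are made-up numbers, not values from the article.

```python
def pr_time_seconds(bitstream_bytes, throughputs_mb_s, overhead_us=0.0):
    """Skeleton of a partial-reconfiguration cost model: the effective
    reconfiguration throughput is bounded by the slowest component in the
    datapath (storage, DMA, configuration port, ...), plus a fixed setup
    overhead."""
    bottleneck = min(throughputs_mb_s)            # MB/s of the slowest stage
    return bitstream_bytes / (bottleneck * 1e6) + overhead_us * 1e-6

# hypothetical figures: 300 KB partial bitstream, a 400 MB/s configuration
# port behind an 800 MB/s memory path, and 5 us of setup overhead
t = pr_time_seconds(300_000, [400.0, 800.0], overhead_us=5.0)
```

The article's contribution is precisely in measuring which component is the bottleneck per system and calibrating the per-system adjustments; this sketch assumes those are already known.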

143 citations


Journal ArticleDOI
TL;DR: The approach extends the concept of virtual actuators and virtual sensors from linear to PWA systems on the basis of the fault-hiding principle that provides the underlying conceptual idea: the fault is hidden from the nominal controller and the fault effects are compensated.

Journal ArticleDOI
TL;DR: In this paper, a combinatorial process based on reconfiguration and DSTATCOM allocation is implemented to mitigate losses and improve voltage profile in power distribution networks, where differential evolution algorithm (DEA) has been used to solve and overcome the complicity of this combinatorsial nonlinear optimization problem.

Journal ArticleDOI
TL;DR: The problem of fault-tolerant control in the framework of discrete event systems modeled as automata is solved using a general control architecture based on a special kind of diagnoser, called a "diagnosing controller", which is used to safely detect faults and to switch between the nominal control policy and a bank of reconfigured control policies.

Journal ArticleDOI
TL;DR: The proposed approach, enhanced integer coded particle swarm optimization (EICPSO), is able to improve the search efficiency for feeder reconfiguration problems by considering the historical local optimal solutions when generating new particles.
Abstract: This paper proposes an effective approach based on integer-coded particle swarm optimization to determine the switch operation schemes for feeder reconfiguration. The proposed approach, enhanced integer coded particle swarm optimization (EICPSO), is able to improve the search efficiency for feeder reconfiguration problems by considering the historical local optimal solutions, found during the evolution process, when generating new particles. Three different distribution systems are used in this paper to verify and validate the effectiveness of the proposed method. Simulation results show that the proposed method can find the solutions for feeder reconfiguration problems faster than other approaches such as discrete particle swarm optimization, modified binary particle swarm optimization, and genetic algorithm.
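For orientation, a plain integer-coded PSO (without the paper's EICPSO enhancement of reusing historical local optima) can be sketched as follows; the coefficients and toy objective are illustrative assumptions, not the paper's settings.

```python
import random

def integer_pso(loss, n_dims, vmax, n_particles=8, iters=200, seed=2):
    """Plain integer-coded PSO: velocities are rounded so every candidate
    position stays an integer switch index in [0, vmax]."""
    rng = random.Random(seed)
    pos = [[rng.randint(0, vmax) for _ in range(n_dims)] for _ in range(n_particles)]
    vel = [[0] * n_dims for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                     # personal bests
    gbest = min(pbest, key=loss)[:]                 # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(n_dims):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = int(round(0.7 * vel[i][d]
                                      + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                                      + 1.5 * r2 * (gbest[d] - pos[i][d])))
                pos[i][d] = min(vmax, max(0, pos[i][d] + vel[i][d]))
            if loss(pos[i]) < loss(pbest[i]):
                pbest[i] = pos[i][:]
                if loss(pbest[i]) < loss(gbest):
                    gbest = pbest[i][:]
    return gbest

# toy objective: distance to a target switch-operation scheme
target = [3, 1, 4, 1, 5]
best = integer_pso(lambda x: sum(abs(a - b) for a, b in zip(x, target)), 5, 6)
```

EICPSO's twist is to bias the generation of new particles toward an archive of the local optima met so far, which this plain version omits.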

Proceedings ArticleDOI
05 Dec 2011
TL;DR: The main aim of the proposed new greedy Virtual Network Reconfiguration algorithm, VNR, is to 'tidy up' the substrate network in order to minimise the number of overloaded substrate links, while also reducing the cost of reconfiguration.
Abstract: In this paper we address the problem of virtual network reconfiguration. In our previous work on virtual network embedding strategies, we found that most virtual network rejections were caused by bottlenecked substrate links while peak resource use is equal to 18%. These observations lead us to propose a new greedy Virtual Network Reconfiguration algorithm, VNR. The main aim of our proposal is to 'tidy up' the substrate network in order to minimise the number of overloaded substrate links, while also reducing the cost of reconfiguration. We compare our proposal with the related reconfiguration strategy VNA-Periodic; both are incorporated into the best existing embedding strategies in terms of rejection rate, VNE-AC and VNE-Greedy. The results obtained show that VNR outperforms VNA-Periodic. Indeed, our research shows that the performance of VNR does not depend on the virtual network embedding strategy. Moreover, VNR minimises the rejection rate of virtual network requests by at least 83% while the cost of reconfiguration is lower than with VNA-Periodic.

Journal ArticleDOI
TL;DR: This paper deals with radial distribution network reconfiguration for loss and switching mitigation with a genetic algorithm with two network encodings capable of representing only radial connected solutions without demanding a planar topology or any specific genetic operator.
Abstract: This paper deals with radial distribution network reconfiguration for loss and switching mitigation. Its main contribution is the presentation of a genetic algorithm (GA) with two network encodings, capable of representing only radial connected solutions without demanding a planar topology or any specific genetic operator. The code was named sequential because the evaluation of the ith gene depends on the information forthcoming from all the previous genes. In addition to a full description of proposed techniques with examples, the paper presents a survey of existing network codifications including their search space and chromosomes for a 33-bus example system. In order to validate the proposed algorithms, these and two other state-of-the-art techniques are applied to reconfigure three sample systems which have been broadly published in the technical literature. Performance is assessed by using the results of implemented representations and the reconfiguration literature.

Proceedings ArticleDOI
10 Oct 2011
TL;DR: A distributed reconfiguration solution named Ariadne, targeting large, aggressively scaled, unreliable NoCs, which provides a 40%-140% latency improvement over other on-chip state-of-the-art fault tolerant solutions, while meeting the low area budget of on- chip routers with an overhead of just 1.97%.
Abstract: Extreme transistor technology scaling is causing increasing concerns in device reliability: the expected lifetime of individual transistors in complex chips is quickly decreasing, and the problem is expected to worsen at future technology nodes. With complex designs increasingly relying on Networks-on-Chip (NoCs) for on-chip data transfers, a NoC must continue to operate even in the face of many transistor failures. Specifically, it must be able to reconfigure and reroute packets around faults to enable continued operation, i.e., generate new routing paths to replace the old ones upon a failure. In addition to these reliability requirements, NoCs must maintain low latency and high throughput at very low area budget. In this work, we propose a distributed reconfiguration solution named Ariadne, targeting large, aggressively scaled, unreliable NoCs. Ariadne utilizes up*/down* for fast routing at high bandwidth, and upon any number of concurrent network failures in any location, it reconfigures to discover new resilient paths to connect the surviving nodes. Experimental results show that Ariadne provides a 40%-140% latency improvement (when subject to 50 faults in a 64-node NoC) over other on-chip state-of-the-art fault tolerant solutions, while meeting the low area budget of on-chip routers with an overhead of just 1.97%.
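The reconfiguration step Ariadne performs, rediscovering resilient paths after failures, can be illustrated as rebuilding a breadth-first routing tree over the surviving links. Ariadne itself does this distributedly and enforces up*/down* turn restrictions; the centralised sketch below (with hypothetical link lists) omits both for brevity.

```python
from collections import deque

def rediscover_routes(links, failed, root=0):
    """After any set of link failures, rebuild a breadth-first routing tree
    over the surviving topology from a root node; every surviving node
    reachable from the root obtains a new loop-free path."""
    alive = set(links) - set(failed)
    alive |= {(b, a) for (a, b) in alive}      # treat links as bidirectional
    parent = {root: None}
    queue = deque([root])
    while queue:
        u = queue.popleft()
        for (a, b) in alive:
            if a == u and b not in parent:
                parent[b] = u                  # u becomes b's upstream hop
                queue.append(b)
    return parent

# 4-node ring; link (1, 2) fails -> node 2 is re-routed via node 3
routes = rediscover_routes([(0, 1), (1, 2), (2, 3), (3, 0)], [(1, 2)])
```

Because the tree is rebuilt from scratch, the same procedure handles any number of concurrent failures in any location, which is the property the abstract emphasises.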

11 May 2011
TL;DR: This work developed a prototype Dreams engine to test the distributed protocol, using an actor library for the Scala language, and statically discovers regions of the coordination layer that can execute independently, thus achieving a truly decoupled execution of connectors.
Abstract: This work contributes to the field of coordination, in particular to Reo, by improving existing approaches to execute synchronisation models in three major ways. First, this work supports decoupled execution and lightweight reconfiguration. We developed a prototype Dreams engine to test our distributed protocol, using an actor library for the Scala language. Reconfiguration of a small part of the system is independent of the execution or behaviour of unrelated parts of the same system. Second, Dreams outperforms previous Reo engines by using constraint satisfaction techniques. In each round of the execution of the Dreams framework, descriptions of the behaviour of all building blocks are combined and a coordination pattern for the current round is chosen using constraint satisfaction techniques. This approach requires less time than previous attempts that collect all patterns before selecting one. Third, our work improves scalability by identifying synchronous regions. We statically discover regions of the coordination layer that can execute independently, thus achieving a truly decoupled execution of connectors. Consequently, the constraint problem representing the behaviour at each round is smaller and more easily solved.

Journal ArticleDOI
01 Jul 2011
TL;DR: This paper presents a new combined method for optimal reconfiguration using a multi-objective function with fuzzy variables that considers both objectives of load balancing and loss reduction in the feeders.
Abstract: All utility companies strive to achieve the well-balanced distribution systems in order to improve system voltage regulation by means of equal load balancing of feeders and reducing power loss. Optimal reconfiguration is one of the best solutions to reach this goal. This paper presents a new combined method for optimal reconfiguration using a multi-objective function with fuzzy variables. This method considers both objectives of load balancing and loss reduction in the feeders. Since reconfiguration is a nonlinear optimization problem, the ant colony algorithm is employed for the optimized response in search space. This method has been applied on two IEEE 33-bus and 69-bus distribution systems. Simulation results confirm the effectiveness of the proposed method in comparison with other techniques for optimal reconfiguration.

Journal ArticleDOI
TL;DR: A new DG interconnection planning study framework that includes a coordinated feeder reconfiguration and voltage control to calculate the maximum allowable DG capacity at a given node in the distribution network is presented.
Abstract: There are increasing requests for noncontrollable distributed generation (DG) interconnections in medium- and low-voltage networks. Many studies have suggested that with proper system planning, DG could provide benefits such as reliability enhancement, investment deferment, and reduced losses. However, without network reinforcements, the allowable interconnection capacity in a network is often restricted due to fault current level, voltage variation, and power flow constraints. This paper aims to address the issue of optimizing network operation and use for accommodating DG integrations. A new DG interconnection planning study framework that includes a coordinated feeder reconfiguration and voltage control to calculate the maximum allowable DG capacity at a given node in the distribution network is presented. A binary particle swarm optimization (BPSO) technique is employed to solve the discrete nonlinear optimization problem and possible uncertainties associated with volatile renewable DG resource and loads are incorporated through a stochastic simulation approach. Comprehensive case studies are conducted to illustrate the applicability of the proposed method. Numerical examples suggest that the method and procedure used in the current DG interconnection impact study should be modified in order to optimize the existing grid operation and usage to facilitate customer participation in system operation and planning.

Patent
05 Jan 2011
TL;DR: In this article, a cloud computing resource scheduling method based on dynamic reconfiguration of virtual resources is proposed, which comprises the steps of: using cloud application load information collected by a cloud application monitor as a basis, making a dynamic decision based on the load capacity of the virtual resources for running the cloud application and the current load of the cloud application; and dynamically reconfiguring virtual resources for the cloud application based on the decision result.
Abstract: The invention relates to a cloud computing resource scheduling method based on dynamic reconfiguration of virtual resources. The method comprises the steps of: using cloud application load information collected by a cloud application monitor as a basis; making a dynamic decision based on the load capacity of the virtual resources for running the cloud application and the current load of the cloud application; and dynamically reconfiguring virtual resources for the cloud application based on the decision result. Dynamic adjustment of resources is realized by a method for reconfiguring virtual resources for the cloud application, without needing dynamic redistribution of physical resources or stopping the executing cloud application. The method can dynamically reconfigure the virtual resources according to the load variation of the cloud application, optimize allocation of the cloud computing resources, realize effective use of the cloud computing resources, and meet the requirements on dynamic scalability of cloud applications. In addition, the method can avoid waste of the cloud computing resources, and save the cost of using resources for cloud application users.
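The decision step in such a scheme is, in essence, a monitored-load threshold rule. The following sketch is one illustrative reading of the patent's idea; the thresholds, scaling step, and function name are hypothetical, not taken from the patent.

```python
def reconfigure_vm(load, capacity, cpu_shares, high=0.8, low=0.3, step=0.25):
    """Threshold decision sketch: scale the VM's resource shares up when the
    monitored load approaches the virtual resources' capacity, down when
    there is ample slack, and leave them untouched otherwise."""
    utilisation = load / capacity
    if utilisation > high:
        return cpu_shares * (1 + step)         # scale up, no application restart
    if utilisation < low:
        return max(1.0, cpu_shares * (1 - step))
    return cpu_shares
```

Because only the virtual resource allocation changes, the cloud application keeps running throughout, which is the point the patent stresses over physical redistribution.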

Journal ArticleDOI
TL;DR: In this article, a fault-tolerant attitude control synthesis is carried out for a flexible spacecraft subject to actuator faults and uncertain inertia parameters, where a control law for attitude stabilization is derived to protect against the partial loss of actuator effectiveness.
Abstract: In this paper, a novel fault-tolerant attitude control synthesis is carried out for a flexible spacecraft subject to actuator faults and uncertain inertia parameters. Based on the sliding mode control, a fault-tolerant control law for the attitude stabilization is first derived to protect against the partial loss of actuator effectiveness. Then the result is extended to address the problem that the actual output of the actuators is constrained. It is shown that the presented controller can accommodate the actuator faults, even while rejecting external disturbances. Moreover, the developed control law can rigorously enforce actuator-magnitude constraints. An additional advantage of the proposed fault-tolerant control strategy is that the control design does not require a fault detection and isolation mechanism to detect, separate, and identify the actuator faults on-line; the knowledge of certain bounds on the effectiveness factors of the actuator is not used via the adaptive estimate method. The associated stability proof is constructive and accomplished by the development of the Lyapunov function candidate, which shows that the attitude orientation and angular velocity will globally asymptotically converge to zero. Numerical simulation results are also presented which not only highlight the ensured closed-loop performance benefits from the control law derived here, but also illustrate its superior fault tolerance and robustness in the face of external disturbances when compared with the conventional approaches for spacecraft attitude stabilization control.

Journal ArticleDOI
TL;DR: An actuator fault-tolerant control scheme, composed of the usual modules performing detection, isolation, accommodation, designed for a class of nonlinear systems, and then applied to an underwater remotely operated vehicle (ROV) used for inspection purposes.
Abstract: This paper proposes an actuator fault-tolerant control scheme, composed of the usual modules performing detection, isolation, accommodation, designed for a class of nonlinear systems, and then applied to an underwater remotely operated vehicle (ROV) used for inspection purposes. Detection is performed by a residual generation module, while a sliding-mode-based approach has been used both for ROV control and fault isolation, after the application of an input decoupling nonlinear state transformation to the ROV model. Finally, control reconfiguration is performed exploiting the inherent redundancy of actuators. An extensive simulation study has also been performed, supporting the effectiveness of the proposed approach.

Journal ArticleDOI
TL;DR: The proposed method, based on adapted ant colony optimization, is efficient and promising for the reconfiguration of radial distribution systems for minimization of real power loss.
Abstract: This paper presents an efficient method for the reconfiguration of radial distribution systems for minimization of real power loss using adapted ant colony optimization. The conventional ant colony optimization is adapted by graph theory to always create feasible radial topologies during the whole evolutionary process. This avoids tedious mesh checks and hence reduces the computational burden. The initial population is created randomly and a heuristic spark is introduced to enhance the pace of the search process. The effectiveness of the proposed method is demonstrated on balanced and unbalanced test distribution systems. The simulation results show that the proposed method is efficient and promising for the reconfiguration problem of radial distribution systems.
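The graph-theoretic adaptation, generating only feasible radial topologies, amounts to constructing a random spanning tree of the bus graph. A minimal Kruskal-style sketch (using a union-find to reject loop-forming branches; the bus graph is an assumption for illustration) could be:

```python
import random

def random_radial_topology(n_buses, candidate_branches, seed=0):
    """Build a random spanning tree over the bus graph with a union-find:
    loop-forming branches are rejected, so every topology produced is
    radial (connected and loop-free) by construction, and no mesh check
    is needed."""
    rng = random.Random(seed)
    parent = list(range(n_buses))

    def find(x):                               # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    branches = list(candidate_branches)
    rng.shuffle(branches)
    closed = []
    for u, v in branches:
        ru, rv = find(u), find(v)
        if ru != rv:                           # closing this switch adds no loop
            parent[ru] = rv
            closed.append((u, v))
    return closed                              # exactly n_buses - 1 branches

# toy 5-bus system: a ring plus one tie branch
branches = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (1, 3)]
tree = random_radial_topology(5, branches)
```

In the paper the ants' pheromone trails bias which branches are tried first; this sketch uses a uniform shuffle in their place.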

Proceedings ArticleDOI
20 Jun 2011
TL;DR: This paper addresses the challenges of resource provisioning for N-tier web applications in Clouds through the combination of the resource controllers on both application and container levels and indicates two major advantages of the method in comparison to previous approaches.
Abstract: Resource provisioning for N-tier web applications in Clouds is non-trivial for at least two reasons. First, there is an inherent optimization conflict between the cost of resources and Service Level Agreement (SLA) compliance. Second, the resource demands of the multiple tiers can differ from each other and vary over time. Resources have to be allocated to multiple (virtual) containers to minimize the total amount of resources while meeting the end-to-end performance requirements for the application. In this paper we address these two challenges through the combination of resource controllers on both the application and container levels. On the application level, a decision maker (i.e., an adaptive feedback controller) determines the total budget of the resources that are required for the application to meet SLA requirements as the workload varies. On the container level, a second controller partitions the total resource budget among the components of the applications to optimize the application performance (i.e., to minimize the round trip time). We evaluated our method with three different workload models -- open, closed, and semi-open -- that were implemented in the RUBiS web application benchmark. Our evaluation indicates two major advantages of our method in comparison to previous approaches. First, fewer resources are provisioned to the applications to achieve the same performance. Second, our approach is robust enough to address various types of workloads with time-varying resource demand without reconfiguration.
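The application-level decision maker can be illustrated as a simple proportional feedback rule on the measured round-trip time. The paper's actual adaptive controller design is not specified in the abstract, so the gain, scaling law, and class name below are assumptions for illustration only.

```python
class BudgetController:
    """Proportional feedback sketch: grow the total resource budget in
    proportion to the relative SLA violation of the measured round-trip
    time (RTT), and shrink it when there is slack."""

    def __init__(self, rtt_target, gain=0.5):
        self.rtt_target = rtt_target
        self.gain = gain

    def step(self, rtt_measured, budget):
        # positive error -> SLA violated -> allocate more resources
        error = (rtt_measured - self.rtt_target) / self.rtt_target
        return max(1.0, budget * (1.0 + self.gain * error))

controller = BudgetController(rtt_target=0.2)
more = controller.step(0.4, 10.0)   # RTT twice the target -> larger budget
less = controller.step(0.1, 10.0)   # RTT half the target  -> smaller budget
```

The container-level controller would then partition whatever budget this loop outputs among the tiers, which this sketch does not model.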

Journal ArticleDOI
TL;DR: This work proposes an efficient heuristic algorithm to solve the distribution network reconfiguration problem for loss reduction and an efficient random walks-based technique for the loss estimation in radial distribution systems.
Abstract: The efficiency of network reconfiguration depends on both the efficiency of the loss estimation technique and the efficiency of the reconfiguration approach itself. We propose two novel algorithmic techniques for speeding up the computational runtime of both problems. First, we propose an efficient heuristic algorithm to solve the distribution network reconfiguration problem for loss reduction. We formulate the problem of finding incremental branch exchanges as a minimum cost maximum flow problem. This approach finds the best set of concurrent branch exchanges, yielding larger loss reduction with fewer iterations and hence significantly reducing the computational runtime. Second, we propose an efficient random walks-based technique for loss estimation in radial distribution systems. The novelty of this approach lies in its property of localizing the computation. Therefore, bus voltage magnitude updates can be calculated in much shorter computational runtimes in scenarios where the distribution system undergoes isolated topological changes, such as in the case of network reconfiguration. Experiments on distribution systems with sizes of up to 10476 buses demonstrate that the proposed techniques achieve computational runtimes up to 7.78 times shorter, with similar or better loss reduction, compared to Baran's reconfiguration technique.
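The localization property rests on a classical fact that is worth illustrating (this sketch shows the general random-walk principle, not the paper's exact estimator): in a resistive network, the voltage of an interior node equals the expected voltage at which a conductance-weighted random walk started there first hits a fixed-voltage node. Only the neighborhood actually visited contributes to the estimate.

```python
import random

# Illustrative sketch of the random-walk principle behind localized voltage
# estimation (not the paper's estimator): walk from the queried node, stepping
# to a neighbor with probability proportional to branch conductance, until a
# fixed-voltage (boundary) node is hit; average the absorbed voltages.

def walk_voltage(start, neighbors, fixed, n_walks=20000, rng=None):
    rng = rng or random.Random(0)            # fixed seed: reproducible estimate
    total = 0.0
    for _ in range(n_walks):
        node = start
        while node not in fixed:
            nbrs, conds = zip(*neighbors[node])
            node = rng.choices(nbrs, weights=conds)[0]
        total += fixed[node]
    return total / n_walks

# 4-bus ladder 0-1-2-3 with unit conductances; V0 = 1.0 and V3 = 0.0 fixed.
neighbors = {1: [(0, 1.0), (2, 1.0)], 2: [(1, 1.0), (3, 1.0)]}
fixed = {0: 1.0, 3: 0.0}
print(walk_voltage(1, neighbors, fixed))     # close to the exact value 2/3
```

A topological change touching one branch only perturbs walks passing through it, which is why isolated reconfiguration steps are cheap to re-evaluate.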

Journal ArticleDOI
TL;DR: In this paper, a mixed-integer nonlinear programming model is proposed to design the dynamic cellular manufacturing systems (DCMSs) under a dynamic environment, where the objective is to minimize the sum of various costs such as intracell movement costs; intercell movement costs and machine procurement costs; setup cost; cutting tool consumption costs; machine operation costs; production planning-related costs, such as internal part production cost, part holding costs, and subcontracting costs; system reconfiguration costs; and machine breakdown repair cost, production time loss cost due to machine breakdown, machine maintenance overheads, etc.
Abstract: This paper addresses the dynamic cell formation (DCF) problem. In a dynamic environment, the product demand and mix change in each period of a multiperiod planning horizon, which necessitates reconfiguration of cells in each period to respond to these changes. This paper proposes a mixed-integer nonlinear programming model to design dynamic cellular manufacturing systems (DCMSs) under such an environment. The proposed model, to the best of the author's knowledge, is the most comprehensive model to date, with a more integrated approach to DCMSs. It concurrently integrates the important manufacturing attributes of existing models into a single model: machine breakdown effects, in terms of machine repair cost and production time loss cost, to incorporate reliability modeling; production planning in terms of part inventory holding, internal part production cost, and part outsourcing; process batch size; transfer batch sizes for intracell and intercell travel; lot splitting; alternative process plans, routings, and sequences of operation; multiple copies of identical machines; machine capacity, cutting tool requirements, workload balancing, machine-in-different-cells and machine-in-same-cell constraints; and machine procurement with multiple-period dynamic cell reconfiguration. Further, the objective of the proposed model is to minimize, in an integrated manner, the sum of various costs: intracell movement costs; intercell movement costs and machine procurement costs; setup cost; cutting tool consumption costs; machine operation costs; production planning-related costs such as internal part production cost, part holding costs, and subcontracting costs; system reconfiguration costs; and machine breakdown repair cost, production time loss cost due to machine breakdown, machine maintenance overheads, etc.
Nonlinear terms of the objective function are transformed into linear terms to obtain a mixed-integer linear programming model. The proposed model is demonstrated on several problems, and results are presented accordingly.
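The paper does not spell out its linearization here, but a standard device for such MINLP-to-MILP transformations replaces a product of binary variables z = x·y with three linear constraints: z ≤ x, z ≤ y, z ≥ x + y − 1 (with z ≥ 0). A quick exhaustive check confirms the constraints admit exactly the product value:

```python
# Standard linearization of a binary product (a common device for turning
# MINLP terms like x*y into MILP constraints; the paper's exact transformation
# is not shown in the abstract).  z = x*y is enforced by linear constraints.

def linearized_feasible(x, y, z):
    return z <= x and z <= y and z >= x + y - 1 and z >= 0

# Exhaustive check: for binary x, y the only feasible integer z equals x*y.
for x in (0, 1):
    for y in (0, 1):
        feasible = [z for z in (0, 1) if linearized_feasible(x, y, z)]
        assert feasible == [x * y]
print("z = x*y linearization verified for all binary pairs")
```

Products of a binary and a bounded continuous variable linearize similarly with big-M constraints, which is typically how cost terms such as (reconfiguration decision) x (movement cost) become linear.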

Journal ArticleDOI
TL;DR: This paper presents a behavioral control solution for reconfiguration of a spacecraft formation using the Null-Space Based (NSB) concept, and aims to reconfigure and maintain a rigid formation while avoiding collisions between spacecraft.
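The Null-Space-Based idea composes task velocities by priority: the secondary (formation-keeping) velocity is projected into the null space of the primary (collision-avoidance) task, v = v1 + (I − J1⁺J1) v2, so it cannot disturb the primary behavior. A minimal sketch for a single scalar primary task, where the pseudo-inverse reduces to J1ᵀ/(J1J1ᵀ) (values below are illustrative, not from the paper):

```python
# Minimal NSB composition for one scalar primary task with row Jacobian J1:
# v = v1 + (I - J1^+ J1) v2, with J1^+ = J1^T / (J1 J1^T).

def nsb_combine(J1, v1, v2):
    jj = sum(a * a for a in J1)                  # J1 J1^T (scalar)
    pinv = [a / jj for a in J1]                  # J1^+ as a column vector
    Jv2 = sum(a * b for a, b in zip(J1, v2))     # J1 v2 (scalar)
    # (I - J1^+ J1) v2 = v2 - J1^+ (J1 v2)
    return [a + b - p * Jv2 for a, b, p in zip(v1, v2, pinv)]

J1 = [1.0, 0.0]            # primary task acts along x (away from an obstacle)
v1 = [0.5, 0.0]            # primary task velocity
v2 = [0.3, 0.4]            # secondary (formation) velocity
v = nsb_combine(J1, v1, v2)
print(v)                   # [0.5, 0.4]: the secondary x-component is removed
# The secondary contribution lies in the primary's null space:
print(sum(a * (b - c) for a, b, c in zip(J1, v, v1)))   # 0.0
```

The projection guarantees collision avoidance is executed exactly, while formation keeping uses whatever motion freedom remains.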

Journal ArticleDOI
TL;DR: A design flow to efficiently map multiple multi-core applications on a dynamically reconfigurable SoC is presented and is actually able to extract similarities among the applications, as it achieves an average improvement in terms of reconfiguration latency with respect to a communication-oriented approach.
Abstract: Nowadays, multi-core systems-on-chip (SoCs) are typically required to execute multiple complex applications, which demand a large set of heterogeneous hardware cores with different sizes. In this context, the popularity of dynamically reconfigurable platforms is growing, as they increase the ability of the initial design to adapt to future modifications. This paper presents a design flow to efficiently map multiple multi-core applications on a dynamically reconfigurable SoC. The proposed methodology is tailored for a reconfigurable hardware architecture based on a flexible communication infrastructure, and exploits applications similarities to obtain an effective mapping. We also introduce a run-time mapper that is able to introduce new applications that were not known at design-time, preserving the mapping of the original system. We apply our design flow to a real-world multimedia case study and to a set of synthetic benchmarks, showing that it is actually able to extract similarities among the applications, as it achieves an average improvement of 29% in terms of reconfiguration latency with respect to a communication-oriented approach, while preserving the same communication performance.
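One way such a flow can exploit application similarity (this metric is hypothetical, the paper's own measure is not given in the abstract) is to compare the hardware-core sets of applications: cores shared by the outgoing and incoming application need not be reloaded, so mapping similar applications onto overlapping regions cuts reconfiguration latency.

```python
# Hypothetical similarity metric (not the paper's): Jaccard index between the
# hardware-core sets of two applications.  Only cores missing from the current
# configuration must be loaded when switching applications.

def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

def cores_to_reconfigure(current, incoming):
    return set(incoming) - set(current)      # only the missing cores are loaded

app_a = {"fft", "fir", "dma"}
app_b = {"fft", "dma", "huffman"}
print(jaccard(app_a, app_b))                        # 0.5 (2 shared of 4 distinct)
print(sorted(cores_to_reconfigure(app_a, app_b)))   # ['huffman']
```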

Journal ArticleDOI
TL;DR: The proposed algorithm utilizes several queens and considers the queens as an external repository to save non-dominated solutions found during the search process to solve the multi-objective distribution feeder reconfiguration (DFR) problem.
Abstract: This paper presents an efficient multi-objective honey bee mating optimization (MHBMO) evolutionary algorithm to solve the multi-objective distribution feeder reconfiguration (DFR) problem. The objectives of the DFR problem are to decrease the real power loss, the number of switching operations, and the deviation of the voltage at each node. Conventional algorithms for solving multi-objective optimization problems convert the multiple objectives into a single objective using a vector of user-predefined weights. This transformation has several drawbacks; for instance, the final solution extensively depends on the values of the weights. This paper presents a new MHBMO algorithm for the DFR problem. The proposed algorithm utilizes several queens and treats them as an external repository to save non-dominated solutions found during the search process. Since the objective functions are not commensurable, a fuzzy clustering technique is used to keep the size of the repository within limits. The proposed algorithm is tested on two distribution test feeders.
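The repository update common to Pareto-based methods can be sketched as follows (the MHBMO specifics, such as queen selection and the fuzzy size control, are omitted): a candidate enters only if no stored solution dominates it, and any stored solutions it dominates are evicted. All objectives are minimized.

```python
# Sketch of an external non-dominated repository (minimization objectives);
# the MHBMO-specific queen mechanics and fuzzy clustering are not shown.

def dominates(a, b):
    """a dominates b: no worse in every objective, strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_repository(repo, cand):
    if any(dominates(s, cand) for s in repo):
        return repo                                  # candidate is dominated
    return [s for s in repo if not dominates(cand, s)] + [cand]

# Objectives: (power loss in kW, switching operations, max voltage deviation)
repo = []
for cand in [(140.0, 4, 0.05), (120.0, 6, 0.04), (150.0, 4, 0.06), (110.0, 6, 0.04)]:
    repo = update_repository(repo, cand)
print(repo)   # [(140.0, 4, 0.05), (110.0, 6, 0.04)]
```

Note how (150.0, 4, 0.06) is rejected outright and (110.0, 6, 0.04) evicts the solution it dominates; the trade-off between loss and switching count keeps both survivors on the Pareto front.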