
Showing papers in "IEEE Control Systems Magazine in 2007"


Journal ArticleDOI
TL;DR: Theoretical results regarding consensus-seeking under both time invariant and dynamically changing communication topologies are summarized in this paper, where several specific applications of consensus algorithms to multivehicle coordination are described.
Abstract: The purpose of this article is to provide a tutorial overview of information consensus in multivehicle cooperative control. Theoretical results regarding consensus seeking under both time-invariant and dynamically changing communication topologies are summarized. Several specific applications of consensus algorithms to multivehicle coordination are described.
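
A minimal Python sketch of the kind of consensus iteration the tutorial covers (illustrative only; the adjacency matrix, step size, and initial states are assumed, not taken from the article): each agent repeatedly nudges its information state toward its neighbors' states, and with a connected undirected graph all states converge to the average of the initial values.

import numpy as np

# Discrete-time consensus iteration over an assumed undirected ring graph.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
x = np.array([1.0, 4.0, 2.0, 7.0])   # initial information states
eps = 0.1                            # step size; must satisfy 0 < eps*max_degree < 1

for _ in range(200):
    # x_i <- x_i + eps * sum_j a_ij * (x_j - x_i)
    x = x + eps * (A @ x - A.sum(axis=1) * x)

print(x)  # all entries converge to the average of the initial states (3.5)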

3,028 citations


Journal ArticleDOI
TL;DR: In this paper, the authors analyze two approaches, feedback controllers and ECMS, both of which can lead to system behavior that is close to optimal, with the feedback controllers based on dynamic programming.
Abstract: Global optimization techniques, such as dynamic programming, serve mainly to evaluate the potential fuel economy of a given powertrain configuration. They cannot be implemented in real time unless the future driving conditions can be predicted, but the results obtained using this noncausal approach establish a benchmark for evaluating the optimality of realizable control strategies. Real-time controllers must be simple in order to be implementable with limited computation and memory resources. Moreover, manual tuning of control parameters should be avoided. This article has analyzed two approaches, namely, feedback controllers and ECMS. Both of these approaches can lead to system behavior that is close to optimal, with the feedback controllers based on dynamic programming. Additional challenges stem from the need to apply optimal energy-management controllers to advanced HEV architectures, such as combined and plug-in HEVs, as well as to optimization problems that include performance indices in addition to fuel economy, such as pollutant emissions, driveability, and thermal comfort.
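
To illustrate the ECMS idea, the following minimal Python sketch minimizes an instantaneous equivalent fuel cost over candidate engine/battery power splits. The efficiency model, equivalence factor, and limits are assumed placeholders, not values from the article; a real controller would also adapt the equivalence factor to keep the battery state of charge within bounds.

import numpy as np

# Hedged ECMS-style sketch: at each instant, choose the battery power that
# minimizes fuel power plus an equivalence-weighted battery power.
def ecms_split(p_demand, s=2.8, p_batt_max=20e3):
    candidates = np.linspace(-p_batt_max, p_batt_max, 401)   # candidate battery powers [W]
    best_p_batt, best_cost = 0.0, np.inf
    for p_batt in candidates:
        p_engine = max(p_demand - p_batt, 0.0)
        # crude engine model (assumed): idle losses plus roughly 36% peak efficiency
        fuel_power = 0.0 if p_engine == 0.0 else (p_engine + 5e3) / 0.36
        cost = fuel_power + s * p_batt        # equivalent fuel consumption rate
        if cost < best_cost:
            best_cost, best_p_batt = cost, p_batt
    return best_p_batt

print(ecms_split(15e3))   # power split chosen for a 15 kW demand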

926 citations


Journal ArticleDOI
TL;DR: This article presents and surveys some recently developed theoretical tools for modeling, analysis, and design of motion coordination algorithms in both continuous and discrete time and pays special attention to the distributed character of coordination algorithms.
Abstract: Motion coordination is a remarkable phenomenon in biological systems and an extremely useful tool for groups of vehicles, mobile sensors, and embedded robotic systems. For many applications, teams of mobile autonomous agents need the ability to deploy over a region, assume a specified pattern, rendezvous at a common point, or move in a synchronized manner. The objective of this article is to illustrate the use of systems theory to analyze emergent behaviors in animal groups and to design autonomous and reliable robotic networks. We present and survey some recently developed theoretical tools for modeling, analysis, and design of motion coordination algorithms in both continuous and discrete time. We pay special attention to the distributed character of coordination algorithms, the characterization of their performance, and the development of design methodologies that provide mobile networks with provably correct cooperative strategies.
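
To make the rendezvous task concrete, here is a toy Python sketch (not an algorithm from the article; the sensing radius, step size, and agent count are assumed) in which each agent moves toward the centroid of the agents it can sense; when the sensing graph stays connected, the positions contract toward a common point.

import numpy as np

rng = np.random.default_rng(0)
pos = rng.uniform(0, 10, size=(8, 2))   # 8 agents in the plane
radius, step = 6.0, 0.2

for _ in range(100):
    new_pos = pos.copy()
    for i in range(len(pos)):
        d = np.linalg.norm(pos - pos[i], axis=1)
        nbrs = pos[d <= radius]                 # sensed agents, including agent i itself
        new_pos[i] = pos[i] + step * (nbrs.mean(axis=0) - pos[i])
    pos = new_pos

print(pos.round(2))  # positions cluster tightly if the sensing graph stayed connected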

544 citations


Journal ArticleDOI
TL;DR: This article has presented an approach to the mathematical description of dynamical systems, and described a methodology for modeling interconnected systems, called tearing, zooming, and linking, that is much better adapted to the physics of interconnected systems than input/output-modeling procedures such as Simulink.
Abstract: In this article we have presented an approach to the mathematical description of dynamical systems. The central notion is the behavior, which consists of the set of time trajectories that are declared possible by the model of a dynamical system. Often, the behavior is defined as the set of solutions of a system of differential equations. Models that specify a behavior usually involve latent variables in addition to the manifest variables the model aims at. We have also described a methodology for modeling interconnected systems, called tearing, zooming, and linking. The underlying mathematical language consists of terminals, modules, the interconnection graph, the module embedding, and the manifest variable assignment. The combination of module equations, interconnection constraints, and manifest variable assignment leads to a latent-variable representation for the behavior of the manifest variables the model aims at. This methodology of tearing, zooming, and linking offers a systematic procedure for modeling interconnected systems that is much better adapted to the physics of interconnected systems than input/output-modeling procedures such as Simulink. The methodology of tearing, zooming, and linking has much in common with bond graphs.
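
For readers unfamiliar with the behavioral notation, the standard definitions behind the abstract's terminology can be written as follows (standard behavioral-framework notation, not reproduced from the article):

% Behavior of a linear time-invariant differential model:
\mathfrak{B} \;=\; \{\, w : \mathbb{R} \to \mathbb{R}^{q} \;\mid\; R\!\left(\tfrac{d}{dt}\right) w = 0 \,\}

% Latent-variable representation: w are the manifest variables the model aims at,
% \ell the latent (auxiliary) variables introduced by the modeling process:
\mathfrak{B} \;=\; \{\, w \;\mid\; \exists\, \ell \ \text{such that}\ R\!\left(\tfrac{d}{dt}\right) w = M\!\left(\tfrac{d}{dt}\right) \ell \,\}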

448 citations


Journal ArticleDOI
TL;DR: Experimental results on an industrial robot manipulator show that the estimated dynamic robot model can accurately predict the actuator torques for a given robot motion.
Abstract: The use of periodic excitation is the key feature of the presented robot identification method. Periodic excitation allows us to integrate the experiment design, signal processing, and parameter estimation. This integration simplifies the identification procedure and yields accurate models. Experimental results on an industrial robot manipulator show that the estimated dynamic robot model can accurately predict the actuator torques for a given robot motion. Accurate actuator torque prediction is a fundamental requirement for robot models that are used for offline programming, task optimization, and advanced model-based control. A payload identification approach is derived from the integrated robot identification method, and possesses the same favorable properties.
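
Since rigid-body robot dynamics are linear in the inertial parameters, the estimation step can be sketched as an ordinary least-squares fit. This Python sketch is a simplification under that assumption; the article's periodic-excitation experiment design and signal processing are not reproduced.

import numpy as np

# Rigid-body dynamics are linear in the inertial parameters theta:
#   tau = Phi(q, dq, ddq) @ theta,
# so stacking the regressor over N samples gives a least-squares problem.
def estimate_parameters(Phi, tau):
    """Phi -- (N*n_joints, n_params) stacked regressor built from joint data
       tau -- (N*n_joints,) stacked measured actuator torques"""
    theta_hat, *_ = np.linalg.lstsq(Phi, tau, rcond=None)
    return theta_hat

# Predicted torques for a new motion, given its regressor Phi_new (assumed available):
# tau_pred = Phi_new @ theta_hat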

280 citations


Journal ArticleDOI
TL;DR: PCOD as discussed by the authors is a cooperative control framework for stabilizing relative equilibria in a model of self-propelled, steered particles moving in the plane at unit speed; although the framework also handles time-varying and directed interaction, the designs presented use time-invariant, undirected interaction.
Abstract: This article describes PCOD, a cooperative control framework for stabilizing relative equilibria in a model of self-propelled, steered particles moving in the plane at unit speed. Relative equilibria correspond either to motion of all of the particles in the same direction or to motion of all of the particles around the same circle. Although the framework applies to time-varying and directed interaction between individuals, we focus here on time-invariant and undirected interaction, using the Laplacian matrix of the interaction graph to design a set of decentralized control laws applicable to mobile sensor networks. Since the direction of motion of each particle is represented in the framework by a point on the unit circle, the closed-loop model has coupled-phase oscillator dynamics.
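
In standard notation, the particle model referred to above is the following; the phase-coupling law shown is a common Laplacian-based illustrative form, given here as an assumption rather than the exact control laws designed in the article.

% Self-propelled particles at unit speed in the plane: r_k is the position of
% particle k (written as a complex number) and \theta_k its heading.
\dot r_k = e^{i\theta_k}, \qquad \dot\theta_k = u_k, \qquad k = 1, \dots, N

% One common phase-coupling law built from the adjacency entries a_{jk} of an
% undirected interaction graph (K is a scalar gain, \omega_0 a common turning rate):
u_k = \omega_0 + K \sum_{j=1}^{N} a_{jk} \sin(\theta_j - \theta_k)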

253 citations


Journal ArticleDOI
TL;DR: The bond graph method as discussed by the authors is a graphical approach to modeling in which component energy ports are connected by bonds that specify the transfer of energy between system components, and the essential element to be grasped is that bonds represent power transactions between components.
Abstract: The bond-graph method is a graphical approach to modeling in which component energy ports are connected by bonds that specify the transfer of energy between system components. Power, the rate of energy transport between components, is the universal currency of physical systems. Bond graphs are inherently energy based and thus related to other energy-based methods, including dissipative systems and port-Hamiltonians. This article has presented an introduction to bond graphs for control engineers. Although the notation can initially appear daunting, the bond graph method is firmly grounded in the familiar concepts of energy and power. The essential element to be grasped is that bonds represent power transactions between components.
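
The power transaction carried by each bond is simply the product of its effort and flow variables (standard bond-graph pairings, stated here for reference):

P(t) \;=\; e(t)\, f(t)
% e.g. voltage and current (electrical), force and velocity (translation),
% torque and angular velocity (rotation), pressure and volume flow rate (hydraulic).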

227 citations


Journal ArticleDOI
TL;DR: In this article, the authors illuminate the critical role of system zeros in control-system performance for the benefit of a wide audience both inside and outside the control systems community, and highlight the tradeoff between the robustness and achievable performance of a feedback control system.
Abstract: The purpose of this article is to illuminate the critical role of system zeros in control-system performance for the benefit of a wide audience both inside and outside the control systems community. Zeros are a fundamental aspect of systems and control theory; however, the causes and effects of zeros are more subtle than those of poles. In particular, positive zeros can cause initial undershoot (initial error growth), zero crossings, and overshoot in the step response of a system, whereas nonminimum-phase zeros limit bandwidth. Both of these aspects have real-world implications in many applications. Nonminimum-phase zeros exacerbate the tradeoff between the robustness and achievable performance of a feedback control system. From a control-theoretic point of view, a nonminimum-phase zero in the loop transfer function L is arguably the worst feature a system can possess. Every feedback synthesis methodology must accept limitations due to the presence of open-right-half-plane zeros, and the mark of a good analysis tool is the ability to capture the performance limitations arising from nonminimum-phase zeros.
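
The initial-undershoot effect of a real right-half-plane zero is easy to reproduce numerically; the Python sketch below uses transfer functions chosen purely for illustration (they are not examples from the article).

import numpy as np
from scipy import signal

# Two systems with the same poles: one with a left-half-plane zero, one with a
# right-half-plane (nonminimum-phase) zero at s = +1.
sys_mp  = signal.TransferFunction([ 1, 1], [1, 2, 1])   # (s + 1) / (s + 1)^2
sys_nmp = signal.TransferFunction([-1, 1], [1, 2, 1])   # (-s + 1) / (s + 1)^2

t = np.linspace(0, 10, 1000)
_, y_mp  = signal.step(sys_mp,  T=t)
_, y_nmp = signal.step(sys_nmp, T=t)

# The nonminimum-phase step response starts in the wrong direction (initial
# undershoot) before settling at the same positive steady-state value of 1.
print(y_nmp.min() < 0, abs(y_nmp[-1] - 1.0) < 1e-2)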

220 citations


Journal ArticleDOI
TL;DR: This work studies how an optimal estimate can be efficiently computed in a distributed manner as well as how the quality of the optimal estimate scales with the size of the network.
Abstract: Large-scale sensor networks give rise to estimation problems that have a rich graphical structure. We studied one of these problems, focusing on how the optimal estimate can be efficiently computed in a distributed manner and on how its quality scales with the size of the network. Two distributed algorithms, which are scalable and robust to communication failures, are presented to compute the optimal estimates. In designing these algorithms, we found the literature on parallel computation to be a rich source of inspiration.
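
One standard building block for this kind of distributed computation, shown here as a hedged Python sketch rather than the article's specific algorithms, is a Jacobi iteration on a linear system whose sparsity matches the sensor graph: each node updates its own entry using only its neighbors' current values.

import numpy as np

def jacobi(A, b, n_iter=200):
    x = np.zeros_like(b, dtype=float)
    D = np.diag(A)                    # each node's own coefficient
    R = A - np.diag(D)                # couplings to neighboring nodes
    for _ in range(n_iter):
        x = (b - R @ x) / D           # node i needs only x_j from neighbors j
    return x

A = np.array([[ 3., -1.,  0.],
              [-1.,  3., -1.],
              [ 0., -1.,  3.]])       # diagonally dominant, so Jacobi converges
b = np.array([1., 2., 3.])
print(jacobi(A, b), np.linalg.solve(A, b))   # the two solutions agree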

192 citations


Journal ArticleDOI
TL;DR: The authors describe a cooperative multivehicle testbed (COMET) created to facilitate the development of cooperative control systems and mobile sensor networks, and adopt a utility-based definition of cooperative behavior.
Abstract: Recent advances in communication, computation, and embedded technologies support the development of cooperative multivehicle systems. For the purposes of this article, we adopt the following definition of cooperative behavior: "Given some task specified by a designer, a multiple-robot system displays cooperative behavior if, due to some underlying mechanism, for instance, the 'mechanism of cooperation,' there is an increase in the total utility of the system." The development of cooperative multivehicle systems is motivated by the recognition that, by distributing computer power and other resources, teams of mobile agents can perform many tasks more efficiently and robustly than an individual robot. For example, teams of robots can complete tasks such as multipoint surveillance, distributed localization and mapping, and cooperative transport. To facilitate the development of cooperative control systems and mobile sensor networks, we have created a cooperative multivehicle testbed (COMET) for research and experimentation.

167 citations


Journal ArticleDOI
TL;DR: The task of programming active self-assembling and self-organizing systems at the level of interactions among particles in the system is considered, and each particle or robot is provided with a local interaction rule book called a graph grammar.
Abstract: Self-assembly is the phenomenon in which a collection of particles spontaneously arrange themselves into a coherent structure. Self-assembly is ubiquitous in nature. In this article we consider the task of programming active self-assembling and self-organizing systems at the level of interactions among particles in the system. To demonstrate the approach, we use it to control an experimental system called the programmable parts testbed (PPT). We also consider several illustrative examples, including polymerization, a model of a molecular ratchet, and a cooperative control scenario. In all of these systems, we provide each particle or robot with a local interaction rule book called a graph grammar.
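
As a toy illustration of what a local interaction rule book can look like (purely illustrative; the labels, rule, and data structures below are assumptions, not the graph-grammar formalism of the article), a rule can rewrite the labels of two interacting particles and record a bond between them.

# Rule: when an 'a' particle meets a 'b' particle, they bond and relabel to 'c' and 'd'.
rules = {('a', 'b'): ('c', 'd', True)}   # (new_label_i, new_label_j, add_edge)

labels = {1: 'a', 2: 'b', 3: 'a'}
edges = set()

def interact(i, j):
    key = (labels[i], labels[j])
    if key in rules:
        li, lj, bond = rules[key]
        labels[i], labels[j] = li, lj
        if bond:
            edges.add(frozenset((i, j)))

interact(1, 2)          # matches the rule: particles 1 and 2 bond
interact(3, 1)          # no rule for ('a', 'c'): nothing happens
print(labels, edges)    # {1: 'c', 2: 'd', 3: 'a'}  {frozenset({1, 2})}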

Journal ArticleDOI
TL;DR: It is illustrated, through the example of human dynamics, that a thorough understanding of complex systems requires an understanding of network dynamics as well as network topology and architecture, and that complexity theory must incorporate the interactions between dynamics and structure.
Abstract: The purpose of this article is to illustrate, through the example of human dynamics, that a thorough understanding of complex systems requires an understanding of network dynamics as well as network topology and architecture. After an overview of the topology of complex networks, such as the Internet and the WWW, data-driven models for human dynamics are given. These models motivate the study of network dynamics and suggest that complexity theory must incorporate the interactions between dynamics and structure. The article also advances the notion that an understanding of network dynamics is facilitated by the availability of large data sets and analysis tools gained from the study of network structure.

Journal ArticleDOI
TL;DR: In this paper, a fixed-size LS-SVM was used to estimate five NARX models and two of the models were later modified into AR-NARX structures following the exploration of the residuals.
Abstract: This article illustrates the application of a nonlinear system identification technique to the problem of STLF. Five NARX models are estimated using fixed-size LS-SVM, and two of the models are later modified into AR-NARX structures following the exploration of the residuals. The forecasting performance, assessed for different load series, is satisfactory. The MSE levels on the test data are below 3% in most cases. The models estimated with fixed-size LS-SVM give better results than a linear model estimated with the same variables and also better than a standard LS-SVM in dual space estimated using only the last 1000 data points. Furthermore, the good performance of the fixed-size LS-SVM is obtained based on a subset of M = 1000 initial support vectors, representing a small fraction of the available sample. Further research on a more dedicated definition of the initial input variables (for example, incorporation of external variables to reflect industrial activity, use of explicit seasonal information) might lead to further improvements and the extension toward other types of load series.
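
For readers unfamiliar with LS-SVM regression, the following Python sketch solves the standard dual LS-SVM system with an RBF kernel on NARX-style lagged inputs. It is a hedged illustration only: the article's fixed-size (primal, subset-of-support-vectors) variant, its exogenous inputs, and its tuning are not reproduced, and the load series below is a synthetic stand-in.

import numpy as np

def rbf(X, Z, sigma=1.0):
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
    # Standard LS-SVM regression dual system: [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = rbf(X, X, sigma) + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]                         # bias b, dual weights alpha

def lssvm_predict(Xnew, X, b, alpha, sigma=1.0):
    return rbf(Xnew, X, sigma) @ alpha + b

# NARX-style regressors: predict y[t] from the previous p values (the article also
# uses calendar and temperature inputs, omitted here for brevity).
y_series = np.sin(np.linspace(0, 20, 300))         # synthetic stand-in for a load series
p = 24
X = np.array([y_series[t - p:t] for t in range(p, len(y_series))])
y = y_series[p:]
b, alpha = lssvm_fit(X, y)
y_fit = lssvm_predict(X, X, b, alpha)              # in-sample fit as a quick sanity check
print(float(np.mean((y_fit - y) ** 2)))            # mean-squared training error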

Journal ArticleDOI
TL;DR: In this article, the authors describe the use of an optomechanical device called a wafer scanner to transfer a pattern from a mask to the surface of a silicon wafer.
Abstract: Photolithography is a step in semiconductor manufacturing that uses a laser beam to transfer a pattern from a mask to the surface of a silicon wafer. This process is implemented by an optomechanical device called a wafer scanner. Wafer scanners require ultra-high-precision repetitive positioning capabilities. When the disturbances are repetitive, ILC improves performance of wafer-stage positioning from scan to scan. However, in the presence of nonrepetitive disturbances, ILC must be able to extract repetitive information, which is consistent from cycle to cycle, while avoiding nonrepetitive information.
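
The scan-to-scan learning idea can be sketched in Python with a first-order ILC update law; the toy plant, reference, and learning gain below are illustrative placeholders (chosen for robust convergence of this toy loop), not the wafer-stage design discussed in the article.

import numpy as np

def ilc_iteration(u, e, gamma=0.1):
    # u_{j+1}[k] = u_j[k] + gamma * e_j[k+1]: the correction anticipates the
    # one-sample delay of the toy plant below.
    correction = np.roll(e, -1)
    correction[-1] = 0.0            # no error information beyond the trial horizon
    return u + gamma * correction

r = np.sin(np.linspace(0, 2 * np.pi, 100))   # reference repeated every trial
u = np.zeros_like(r)
for trial in range(50):
    y = np.zeros_like(r)
    for k in range(len(r) - 1):
        y[k + 1] = 0.9 * y[k] + u[k]         # toy repetitive plant
    e = r - y
    u = ilc_iteration(u, e)
print(np.abs(e).max())                       # peak tracking error shrinks over the trials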

Journal ArticleDOI
TL;DR: The KiteGen project, as described in this paper, has designed and simulated a small-scale prototype in a yo-yo configuration, with two kite lines rolled around drums that are linked to two electric drives fixed to the ground.
Abstract: To overcome the limitations of current wind power technology, the KiteGen project was initiated at Politecnico di Torino, Italy, to design and build a new class of wind energy generators in collaboration with Sequoia Automation, Modelway, and Centro Studi Industriali. The project focus is to capture wind energy by means of controlled tethered airfoils, that is, kites. The KiteGen project has designed and simulated a small-scale prototype. The two kite lines are rolled around two drums and linked to two electric drives, which are fixed to the ground. The flight of the kite is controlled by regulating the pulling force on each line. Energy is collected when the wind force on the kite unrolls the lines, and the electric drives act as generators due to the rotation of the drums. When the maximal line length of about 300 m is reached, the drives act as motors to recover the kite, spending a small percentage (about 12%, see the "Simulation Results" section for details) of the previously generated energy. This yo-yo configuration is under the control of the kite steering unit (KSU), which includes the electric drives (for a total power of 40 kW), the drums, and all of the hardware needed to control a single kite. The aims of the prototype are to demonstrate the ability to control the flight of a single kite, to produce a significant amount of energy, and to verify the energy production levels predicted in simulation studies.

Journal ArticleDOI
TL;DR: This special section focuses on the study of network architectures and their formation as well as on the description of dynamical processes that take place over networks.
Abstract: This special section focuses on the study of network architectures and their formation as well as on the description of dynamical processes that take place over networks. A common thread throughout the five articles is the role of distributed processing and control, as well as the search for network-growth mechanisms that give rise to the desired structure and behavior.

Journal Article
TL;DR: Photolithography is a step in semiconductor manufacturing that uses a laser beam to transfer a pattern from a mask to the surface of a silicon wafer; the repetitive scanning motion requires the wafer-stage controller to extract repetitive information, which is consistent from cycle to cycle, while avoiding nonrepetitive information.
Abstract: Photolithography is a step in semiconductor manufacturing that uses a laser beam to transfer a pattern from a mask to the surface of a silicon wafer. This process is implemented by an optomechanical device called a wafer scanner (Figure 1). A schematic of the major subsystems of a wafer scanner is shown in Figure 2. The illumination system consists of an argon-fluoride (Ar-F) or similar laser source that passes a laser beam through the photomask mounted on the reticle to form an image of the integrated circuit (IC) on the silicon wafer mounted on the wafer stage. The reduction lens assembly projects a reduced image of the photomask, shown in Figure 3, onto a photoresist layer on the silicon wafer, shown in Figure 4. Linear motors are used to position the wafer and reticle scanning stages relative to the projection lens and illumination system. Although several individual ICs are typically made from a single silicon wafer, the scanner exposes only a small part of the wafer to the laser beam in a single scan. The scan is performed by simultaneously moving the wafer and reticle stages in opposite directions, which reduces the time required for a single scan, while keeping peak velocities of both stages small. After the length of the reticle is scanned, the stages are stepped to the required positions for the scan process at another location on the wafer. This step and scan process is illustrated in Figure 5. The critical dimension (CD) of the IC manufacturing process is the finest line resolvable on the silicon wafer by the scanner. The CD of the scanner depends on the type of optical assembly, laser source, and positioning accuracies of the wafer and reticle stages. State-of-the-art scanners using Ar-F lasers can achieve CDs as small as 65 nm. In addition to the CD, the standard deviation of line size is a performance metric for the wafer scanner. This metric, called the alignment accuracy, indicates how close two features can be reliably placed on the silicon wafer. One of the

Journal ArticleDOI
TL;DR: In this article, the authors classify TPMSs into two categories, direct and indirect: direct systems calculate the pressure drop from actual pressure measurements obtained through sensors, while indirect systems infer it from measurements such as wheel speed.
Abstract: Proper tire inflation pressure improves fuel economy, reduces braking distance, improves handling, and increases tire life, while underinflation creates overheating and can lead to accidents. Approximately 3/4 of all automobiles operate with at least one underinflated tire. Beginning with 2006 models, all passenger cars and trucks in the United States are required to have tire-pressure monitoring systems (TPMSs). A TPMS is a driver-assist system that warns the driver when the tire pressure is below or above the prescribed limits. TPMSs are classified into two categories, namely, direct and indirect. In direct TPMSs, the pressure drop is calculated based on actual pressure measurements through sensors. In contrast, measurements such as wheel speed are used in indirect TPMSs.
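
One common principle behind indirect TPMSs is that an underinflated tire has a slightly smaller rolling radius and therefore spins slightly faster than the others at the same vehicle speed. The Python sketch below illustrates that comparison only; the threshold and speeds are assumed values, not a production algorithm.

import numpy as np

def pressure_warning(w_fl, w_fr, w_rl, w_rr, threshold=0.02):
    """w_* -- wheel angular speeds [rad/s]; returns True if one wheel deviates noticeably."""
    speeds = np.array([w_fl, w_fr, w_rl, w_rr])
    deviation = np.abs(speeds / np.median(speeds) - 1.0)
    return bool(np.any(deviation > threshold))

# Example: the front-left wheel spins about 3% fast, suggesting underinflation.
print(pressure_warning(103.0, 100.0, 100.2, 99.8))   # True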

Journal ArticleDOI
TL;DR: The unilateral Laplace transform is widely used to analyze signals, linear models, and control systems, and is consequently taught to most engineering undergraduates; as discussed in this paper, however, students often find it difficult to understand and apply.
Abstract: The unilateral Laplace transform is widely used to analyze signals, linear models, and control systems, and is consequently taught to most engineering undergraduates. In our courses at MIT in electrical engineering and computer science, mathematics, and mechanical engineering, we have found some significant pitfalls associated with teaching students to understand and apply the Laplace transform. We have independently concluded that one reason students find the Laplace transform difficult is that there is significant confusion present in many of the standard textbook presentations of this subject, in all three of our disciplines.
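
One common pitfall in textbook treatments, offered here as background rather than as the article's specific point, is the choice of lower limit of integration and its interaction with impulses and initial conditions at t = 0; the 0^- convention handles these consistently:

\mathcal{L}\{f\}(s) \;=\; \int_{0^-}^{\infty} f(t)\, e^{-st}\, dt,
\qquad
\mathcal{L}\{\dot f\}(s) \;=\; s F(s) - f(0^-).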

Journal ArticleDOI
TL;DR: Complex lead and lag compensators are new additions to the repertoire of compensator structures for loop shaping as discussed by the authors, and they can be used as weighting functions with automated robust design tools.
Abstract: Complex lead and lag compensators are new additions to the repertoire of compensator structures for loop shaping. This article facilitates the use of these compensators by providing explicit formulas that relate the parameters of the compensators to features of their frequency responses. Two examples illustrate the utility of these compensators for system modeling and controller design. While the examples involve low-order plants, the principles of employing the complex lead and lag compensators remain the same for higher-order systems. We plan to use these compensators as weighting functions with automated robust design tools. A weighting function is a transfer function whose frequency response magnitude is used to bound closed-loop response or modeling uncertainty. The complex lead and lag compensators provide a new degree of freedom for selecting weighting functions. In particular, the steep magnitude slope in the transition region of these compensators more closely approximates an ideal step function than weighting functions appearing in the literature (Packard et al.).
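
A generic form of such a compensator, shown only as an illustration of the structure (the article's exact parameterization and formulas are not reproduced here), places a complex-conjugate zero pair below a complex-conjugate pole pair, which yields a steeper magnitude transition than a real lead allows:

C(s) \;=\; k\,\frac{s^{2} + 2\zeta_{z}\omega_{z} s + \omega_{z}^{2}}
               {s^{2} + 2\zeta_{p}\omega_{p} s + \omega_{p}^{2}},
\qquad \omega_{z} < \omega_{p}, \quad 0 < \zeta_{z}, \zeta_{p} < 1.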


Journal ArticleDOI
TL;DR: In this article, the authors propose a version of the final value theorem for rational Laplace transforms with poles in the open left-half plane (OLHP) or at the origin; it refines the classical statement in that s approaches zero through the right-half plane to obtain the correct sign of the infinite limit.
Abstract: The aim of this article is to publicize and prove the "infinite-limit" version of the final value theorem. The version we provide is a slight refinement of the classical literature in that we require that s approach zero through the right-half plane to obtain the correct sign of the infinite limit. We first consider the case of rational Laplace transforms and then state a version that applies to irrational functions. For rational Laplace transforms with poles in the OLHP or at the origin, the extended final value theorem provides the correct infinite limit. For irrational Laplace transforms, the generalized final value theorem provides the analogous result. Finally, we point to a detailed analysis of the final value theorem for piecewise continuous functions.
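
For reference, the classical statement and a simple instance of the infinite-limit case read as follows (standard results, not the article's precise theorem statements):

% Classical case: valid when all poles of s F(s) lie in the open left-half plane.
\lim_{t \to \infty} f(t) \;=\; \lim_{s \to 0^{+}} s\,F(s)

% Infinite-limit example: F(s) = 1/s^{2} corresponds to f(t) = t, and
% s F(s) = 1/s \to +\infty as s \to 0^{+}, matching f(t) \to +\infty.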

Journal ArticleDOI
TL;DR: This article is focused on domestic robots for vacuuming and lawn mowing, which are mobile units that use autonomous mobile robotics technology.
Abstract: Service robots are programmable automated or semiautomated mechanical devices designed to perform a specific service rather than a manufacturing function. Robots were initially used in the automation sector to handle repetitive and simple tasks reliably, with the objective of reducing cost per product. With the increased speed of embedded microcontrollers, the service robotics sector has started to grow. This article focuses on domestic robots for vacuuming and lawn mowing, which are mobile units that use autonomous mobile robotics technology.

Journal ArticleDOI
TL;DR: In this paper, the authors describe several constrained-optimization-based formulations for multisine input signal design that allow users to simultaneously specify the essential frequency and time-domain properties of these signals.
Abstract: Distillation is one of the most common separation techniques in chemical manufacturing. This multi-input, multi-output staged separation process is strongly interactive, as determined by the singular value decomposition of a linear dynamic model of the system. Process dynamics associated with the low-gain direction are critical to the design of high-performance controllers for high-purity distillation but are difficult to estimate from conventional experimental test signals for identification. As a result, high-purity distillation columns are considered challenging cases for multivariable system identification and robust control system design. High-purity distillation is a challenging process application for system identification because of its nonlinear and strongly interactive dynamics. This article has described several constrained-optimization-based formulations for multisine input signal design that allow users to simultaneously specify the essential frequency- and time-domain properties of these signals. Because constraints are explicitly part of the design procedure, the approach is useful for accomplishing plant-friendly identification testing in the process industries. The problem formulations were evaluated for a highly nonlinear methanol-ethanol distillation column. Introducing directional sinusoids in the multisine signal, applying a closed-loop signal design, and minimizing an objective function based on Weyl's theorem enhanced the information content of the low-gain direction in the identification experiment.
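
As background on the signal class being designed, the following Python sketch generates a basic multisine with Schroeder phases, a common low-crest-factor starting point; the constrained-optimization designs, directional inputs, and Weyl-theorem objective described in the article are not reproduced here.

import numpy as np

def schroeder_multisine(n_samples, fs, harmonics, amplitude=1.0):
    """harmonics -- integer harmonic numbers of the base (grid) frequency fs/n_samples."""
    t = np.arange(n_samples) / fs
    f0 = fs / n_samples
    u = np.zeros(n_samples)
    for idx, k in enumerate(harmonics, start=1):
        phi = -np.pi * idx * (idx - 1) / len(harmonics)   # Schroeder phase
        u += amplitude * np.cos(2 * np.pi * k * f0 * t + phi)
    return u

u = schroeder_multisine(n_samples=1024, fs=1.0, harmonics=range(1, 21))
print(u.max(), u.min())   # moderate crest factor compared with a zero-phase sum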

Journal ArticleDOI
Chris Bissell
TL;DR: The Moniac, or Phillips machine as it is more commonly known, is unusual, perhaps unique, in the world of analog computers and simulators in employing hydraulic components to simulate dynamic systems, rather than electrical or mechanical devices, as discussed in this paper.
Abstract: The Moniac, or Phillips machine as it is more commonly known, is unusual, perhaps unique, in the world of analog computers and simulators in employing hydraulic components to simulate dynamic systems, rather than electrical or mechanical devices. While the machine may seem quaint to us now, it is difficult to imagine that any other contemporary simulator would have been quite so successful in directly demonstrating the dynamic behavior of an economic system both to students and professional economists. This article aims to bring wider attention to the machine while emphasizing the relationship between Phillips's work and control engineering.

Journal ArticleDOI
TL;DR: In this paper, a combination of ground vibration tests and in-flight tests is used to explore the aircraft's structural dynamics; various sensor data, typically from accelerometers, are recorded, and based on these data the aircraft dynamics are identified through modal analysis or structural identification.
Abstract: The development of an aircraft requires careful exploration of the dynamical behavior of the structure subject to aeroservoelastic forces. A combination of ground vibration tests and in-flight tests is used for this purpose. For both types of tests, various sensor data are recorded, typically from accelerometers. Based on these data, the aircraft dynamics are identified through modal analysis or structural identification. System identification and parameter estimation from flight data sets are considered and industrial application for modern flight vehicle design and certification is discussed.

Journal ArticleDOI
TL;DR: In this paper, an analysis and compensator design framework for power-factor compensation based on cyclodissipativity was proposed for polyphase unbalanced loads with possibly nonlinear lossless compensators.
Abstract: This article advances an analysis and compensator design framework for power-factor compensation based on cyclodissipativity. Although the framework applies to general polyphase unbalanced circuits, this article has focused on the problem of power-factor compensation with LTI capacitors or inductors of single-phase loads. The full power of the approach is expected to become evident for polyphase unbalanced loads with possibly nonlinear lossless compensators, where the existing solutions are far from satisfactory. The main obstacle appears to be the lack of knowledge about the load, a piece of information that is essential for a successful design.
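
For reference, the quantity being improved is the standard power factor of a single-phase port with voltage v and current i (standard definition and notation, not necessarily the article's):

\mathrm{PF} \;=\; \frac{P}{S}
\;=\; \frac{\frac{1}{T}\int_{0}^{T} v(t)\, i(t)\, dt}{V_{\mathrm{rms}}\, I_{\mathrm{rms}}},
\qquad 0 \le \mathrm{PF} \le 1.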

Journal ArticleDOI
TL;DR: In the present issue, Dennis Bernstein answers the question, "What is hysteresis?"
Abstract: In the present issue, Dennis Bernstein answers the longstanding question, "What is hysteresis?"


Journal ArticleDOI
TL;DR: In this article, the authors extend the optimum pricing policy results for monopoly markets, given in Part 1, to the competitive case and show that the most important factors contributing to differences between static and dynamic pricing policies are consumer response time, the planning period, and the discount rate.
Abstract: In this paper we extend the results on the optimum pricing policy of monopoly markets, given in Part 1, to the competitive case, asking what makes the optimum pricing policy in the dynamic case different from the static one. We show that the most important factors contributing to differences between the static and dynamic competitive-market optimum pricing policies are, as in the monopoly case: 1. consumer response time (the time it takes the consumer to react to a price change); 2. planning period T (the time period over which an objective function is maximized); 3. discount rate r (which is dependent on the interest rate). Additionally, the price of the competitor enters the decision rule.