Journal ArticleDOI

Feedback for physicists: A tutorial essay on control

31 Aug 2005 - Reviews of Modern Physics (American Physical Society) - Vol. 77, Iss. 3, pp. 783-836
TL;DR: This tutorial essay aims to give enough of the formal elements of control theory to satisfy the experimentalist designing or running a typical physics experiment, and enough to satisfy the theorist wishing to understand its broader intellectual context.
Abstract: Feedback and control theory are important ideas that should form part of the education of a physicist but rarely do. This tutorial essay aims to give enough of the formal elements of control theory to satisfy the experimentalist designing or running a typical physics experiment and enough to satisfy the theorist wishing to understand its broader intellectual context. The level is generally simple, although more advanced methods are also introduced. Several types of applications are discussed, as the practical uses of feedback extend far beyond the simple regulation problems where it is most often employed. Sketches are then provided of some of the broader implications and applications of control theory, especially in biology, which are topics of active research.
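
The regulation problems mentioned above can be made concrete with a short numerical sketch (not taken from the paper): proportional-integral feedback holding a first-order system at a setpoint despite a constant disturbance. All parameter values below are illustrative assumptions.

    # Minimal sketch (illustrative, not from the paper): proportional-integral (PI)
    # feedback regulating a first-order plant dy/dt = -y/tau + u + d against a
    # constant disturbance d. All parameter values are assumptions.
    tau, dt, T = 1.0, 0.001, 10.0      # plant time constant, time step, duration
    Kp, Ki = 5.0, 10.0                 # proportional and integral gains
    setpoint, d = 1.0, -0.5            # target output and constant disturbance

    y, integral = 0.0, 0.0
    for _ in range(int(T / dt)):
        error = setpoint - y
        integral += error * dt
        u = Kp * error + Ki * integral     # PI control law
        y += dt * (-y / tau + u + d)       # Euler step of the plant

    print(f"final output {y:.4f} vs setpoint {setpoint}")  # integral action removes the steady-state offset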


Citations
Journal ArticleDOI
TL;DR: With increasing delay, the accuracy of the results from the Shohat expansion worsens, so variational perturbation theory (VPT) is applied to the perturbation expansions to obtain more accurate results, which moreover hold even in the limit of large delays.
Abstract: We consider a model system of two coupled Hopfield neurons, which is described by delay differential equations taking into account the finite signal propagation and processing times. When the delay exceeds a critical value, a limit cycle emerges via a supercritical Hopf bifurcation. First, we calculate its frequency and trajectory perturbatively by applying the Poincaré-Lindstedt method. Then, the perturbation series are resummed by means of the Shohat expansion in good agreement with numerical values. However, with increasing delay, the accuracy of the results from the Shohat expansion worsens. We thus apply variational perturbation theory (VPT) to the perturbation expansions to obtain more accurate results, which moreover hold even in the limit of large delays.
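
A minimal numerical sketch (not the paper's calculation) of two delay-coupled Hopfield-type neurons, integrated with a fixed-step Euler scheme and a history buffer; the specific model form, coupling strengths, and delay are assumptions chosen only to show a limit cycle appearing once the delay exceeds a critical value.

    # Sketch: two Hopfield-type neurons with delayed coupling (model form and
    # parameters are illustrative assumptions, not the paper's system):
    #   du1/dt = -u1 + a * tanh(u2(t - tau))
    #   du2/dt = -u2 + b * tanh(u1(t - tau))
    import numpy as np

    a, b = -1.5, 1.5          # coupling strengths (assumed)
    tau = 1.2                 # delay; for these parameters the Hopf threshold is roughly tau ~ 0.65
    dt, T = 0.001, 200.0
    n_delay = int(round(tau / dt))
    steps = int(T / dt)

    u1 = np.zeros(steps + 1)
    u2 = np.zeros(steps + 1)
    u1[0] = 0.1               # small perturbation away from the fixed point at the origin

    for k in range(steps):
        u1_d = u1[max(k - n_delay, 0)]   # delayed values (constant history before t = 0)
        u2_d = u2[max(k - n_delay, 0)]
        u1[k + 1] = u1[k] + dt * (-u1[k] + a * np.tanh(u2_d))
        u2[k + 1] = u2[k] + dt * (-u2[k] + b * np.tanh(u1_d))

    # Near zero below the Hopf threshold, finite above it (supercritical bifurcation).
    print("late-time oscillation amplitude of u1:", np.ptp(u1[-steps // 10:]))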

25 citations

DissertationDOI
01 Jan 2017
TL;DR: In this paper, the author presents the design, control scheme, and noise performance of the Advanced LIGO detector in Washington during the first observing run (O1), and discusses issues relating to interferometer calibration and the impact of calibration errors on astrophysical parameter estimation.
Abstract: Late in 2015, gravitational physics reached a watershed moment with the first direct detections of gravitational waves. Two events, each from the coalescence of a binary black hole system, were detected by the Laser Interferometer Gravitational-wave Observatory (LIGO). At present, LIGO comprises two 4 km laser interferometers, one in Washington and the other in Louisiana; a third detector is planned to be installed in India. These interferometers, known as Advanced LIGO, belong to the so-called “second generation” of gravitational-wave detectors. Compared to the first-generation LIGO detectors (Initial and Enhanced LIGO), these instruments use multi-stage active seismic isolation, heavier and higher-quality mirrors, and more laser power to achieve an unprecedented sensitivity to gravitational waves. In 2015, both Advanced LIGO detectors achieved a strain sensitivity better than 10^-23/√Hz at a few hundred hertz; ultimately, these detectors are designed to achieve a sensitivity of a few parts in 10^-24/√Hz at a few hundred hertz. This thesis covers several topics in gravitational physics and laser interferometry. First, it presents the design, control scheme, and noise performance of the Advanced LIGO detector in Washington during the first observing run (O1). Second, it discusses some issues relating to interferometer calibration, and the impact of calibration errors on astrophysical parameter estimation. Third, it discusses the prospects for using terrestrial and space-based laser interferometers as dark matter detectors. This thesis has the internal LIGO document number P1600295.
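
The strain sensitivities quoted above are amplitude spectral densities with units of 1/√Hz. The sketch below shows how such a figure can be estimated from a time series with Welch's method; the synthetic white noise, sample rate, and noise level are assumptions standing in for real detector data.

    # Sketch: estimating an amplitude spectral density (ASD, units 1/sqrt(Hz)) from a
    # time series, the quantity used for strain-sensitivity curves. Synthetic white
    # noise stands in for detector data; sample rate and noise level are assumed.
    import numpy as np
    from scipy.signal import welch

    fs = 16384.0                                      # sample rate in Hz (assumed)
    sigma = 1e-21                                     # per-sample noise amplitude (assumed, arbitrary)
    rng = np.random.default_rng(0)
    x = sigma * rng.standard_normal(int(64 * fs))     # ~1 minute of fake "strain" data

    f, psd = welch(x, fs=fs, nperseg=int(4 * fs))     # power spectral density, units 1/Hz
    asd = np.sqrt(psd)                                # amplitude spectral density, 1/sqrt(Hz)

    band = (f > 100) & (f < 300)
    print(f"median ASD in the 100-300 Hz band: {np.median(asd[band]):.2e} per sqrt(Hz)")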

25 citations

Journal ArticleDOI
TL;DR: In this paper, the authors generalize the SISO homodyne-mediated feedback theory to allow for multiple inputs, multiple outputs, and arbitrary diffusive quantum measurements, and obtain a MIMO framework which resembles SISO theory and whose additional mathematical structure is highlighted by the extensive use of vector-operator algebra.
Abstract: Feedback control engineers have been interested in multiple-input, multiple-output (MIMO) extensions of single-input, single-output (SISO) results of various kinds due to their rich mathematical structure and practical applications. An outstanding problem in quantum feedback control is the extension of the SISO theory of Markovian feedback by Wiseman and Milburn [Phys. Rev. Lett. 70, 548 (1993)] to multiple inputs and multiple outputs. Here we generalize the SISO homodyne-mediated feedback theory to allow for multiple inputs, multiple outputs, and arbitrary diffusive quantum measurements. We thus obtain a MIMO framework which resembles the SISO theory and whose additional mathematical structure is highlighted by the extensive use of vector-operator algebra.
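
As a purely classical analogy (not the quantum theory developed in this paper), the structural step from SISO to MIMO feedback is the replacement of scalar gains by gain matrices acting on vectors of inputs and outputs; a minimal state-space sketch, with all matrices chosen arbitrarily:

    # Classical analogy only, not the paper's quantum feedback: in MIMO feedback the
    # scalar gain of the SISO case becomes a matrix acting on the output vector.
    import numpy as np

    # Plant: dx/dt = A x + B u,  y = C x  (two inputs, two outputs; matrices assumed)
    A = np.array([[0.0, 1.0], [-2.0, -0.5]])
    B = np.eye(2)
    C = np.eye(2)
    K = np.array([[3.0, 0.5], [0.2, 2.0]])   # MIMO output-feedback gain matrix (assumed)

    A_cl = A - B @ K @ C                     # closed loop with u = -K y
    print("open-loop eigenvalues: ", np.linalg.eigvals(A))
    print("closed-loop eigenvalues:", np.linalg.eigvals(A_cl))  # pushed further into the left half-plane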

25 citations

Book ChapterDOI
01 Jan 2015
TL;DR: This chapter discusses how organisms may flexibly shift the balance between feedback and feedforward control in a context- and task-specific manner, and outlines an ecological theory of control strategies.
Abstract: In this chapter, I situate self-regulation in an evolutionary perspective, and explore the implications of an evolutionary approach for the study of individual differences in self-regulation. I begin by reviewing the two basic strategies of behavior control (feedback and feedforward control), compare their relative advantages and disadvantages, examine how they can be combined for optimal performance, and highlight the role of trade-offs in the evolution of control systems. I then discuss how organisms may flexibly shift the balance between feedback and feedforward control in a context- and task-specific manner, and outline an ecological theory of control strategies. Specifically, I analyze the differences between optimal self-regulation in predictable versus unpredictable environments, consider the role of delayed outcomes, and discuss the logic of defensive responses. I then go on to show how the same principles can be employed to understand stable individual differences in control strategies and impulsivity, usually characterized as “coping styles” in the biological literature. Finally, I introduce the framework of life history theory, and discuss how it provides a unifying perspective on the development of individual differences in self-regulation. After briefly introducing the fast-slow continuum of life history variation, I critically examine the associations between life history strategies and self-regulation in humans and nonhuman animals. I argue that, in humans, a primacy of feedforward regulation can be associated with both fast strategies characterized by high levels of impulsivity and slow strategies characterized by low impulsivity and high levels of future-oriented planning.
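
The combination of feedback and feedforward mentioned above can be illustrated with a short control-engineering sketch (all plant parameters, gains, and disturbance models are assumptions): feedforward cancels the predictable part of a disturbance, while feedback corrects the part the prediction misses.

    # Sketch: feedforward cancels the *predicted* disturbance, feedback reacts to the
    # measured error. Plant, gains, and disturbance model are illustrative assumptions.
    import numpy as np

    dt, T, Kp, setpoint = 0.001, 20.0, 4.0, 0.0
    t = np.arange(0.0, T, dt)
    d_true = 1.0 * np.sin(0.5 * t) + 0.3 * np.sin(7.0 * t)   # actual disturbance
    d_pred = 1.0 * np.sin(0.5 * t)                           # only the slow part is predictable

    def rms_error(use_feedforward, use_feedback):
        y, errors = 0.0, []
        for k in range(len(t)):
            u = 0.0
            if use_feedforward:
                u -= d_pred[k]                 # cancel the anticipated disturbance
            if use_feedback:
                u += Kp * (setpoint - y)       # react to the measured error
            y += dt * (-y + u + d_true[k])     # first-order plant: dy/dt = -y + u + d
            errors.append(setpoint - y)
        return np.sqrt(np.mean(np.square(errors)))

    for ff, fb in [(False, True), (True, False), (True, True)]:
        print(f"feedforward={ff!s:5} feedback={fb!s:5}  RMS error = {rms_error(ff, fb):.4f}")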

24 citations


Cites background from "Feedback for physicists: A tutorial..."

  • ...Most crucially, feedforward systems—regardless of their complexity—are unable to respond to unanticipated events that occur while the planned action is unfolding (Albertos and Mareels 2010; Bechhoefer 2005)....

  • ...The objective goal of this behavior is obvious to an external observer—moving the bacterium toward glucose—even if the bacterium itself has no internal representation of the reason for its behavior; in fact, the bacterium does not even need to represent the direction in which it is swimming (Bechhoefer 2005)....

  • ...Conversely, effective filtering of unwanted noise inevitably reduces the tracking speed of the control system (Bechhoefer 2005)....

  • ...The standard engineering solution to these trade-offs is to combine feedback and feedforward elements in the same control system, in order to exploit the strengths of both strategies and compensate for their weaknesses (Albertos and Mareels 2010; Bechhoefer 2005)....


Journal ArticleDOI
TL;DR: In this article, a transversely polarized, spin-exchange-pumped noble-gas comagnetometer with a pulsed bias field is presented for the detection of nonmagnetic spin-dependent interactions.
Abstract: We demonstrate a transversely polarized spin-exchange pumped noble gas comagnetometer which suppresses systematic errors from longitudinal polarization. Rb atoms as well as ¹³¹Xe and ¹²⁹Xe nuclei are simultaneously polarized perpendicular to a pulsed bias field. Both Xe isotopes' nuclear magnetic resonance conditions are simultaneously satisfied by frequency modulation of the pulse repetition rate. The Rb atoms detect the Xe precession. We highlight the importance of magnetometer phase shifts when performing comagnetometry. For detection of nonmagnetic spin-dependent interactions the sensing bandwidth is 1 Hz, the white-noise level is 7 μHz/√Hz, and the bias instability is ~1 μHz.
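
The white-noise level and bias instability quoted above are the quantities usually read off an Allan-deviation plot. A minimal sketch of that analysis on synthetic frequency data (the noise model and numbers below are assumptions, not the paper's measurements):

    # Sketch: Allan deviation of a synthetic frequency record. White noise makes
    # sigma_y(tau) fall as 1/sqrt(tau); the flattening or minimum at long tau marks
    # the bias instability. All noise levels here are assumptions.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 200_000                                           # one sample per second (assumed)
    white = 7e-6 * rng.standard_normal(n)                 # white frequency noise, ~7 uHz rms per 1 s sample
    drift = np.cumsum(3e-9 * rng.standard_normal(n))      # slow random-walk drift (assumed)
    y = white + drift                                     # synthetic frequency record, in Hz

    def allan_deviation(y, m):
        """Non-overlapping Allan deviation for an averaging time of m samples."""
        n_blocks = len(y) // m
        block_means = y[:n_blocks * m].reshape(n_blocks, m).mean(axis=1)
        return np.sqrt(0.5 * np.mean(np.diff(block_means) ** 2))

    for m in (1, 10, 100, 1000, 10000):
        print(f"tau = {m:6d} s   sigma_y(tau) = {allan_deviation(y, m):.2e} Hz")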

24 citations


Cites background from "Feedback for physicists: A tutorial..."

  • ...[29] John Bechhoefer, “Feedback for physicists: A tutorial essay on control,” Rev....

  • ...This is most easily accomplished in our system by feedback [29]....


References
Book
01 Jan 1991
TL;DR: The authors examine the role of entropy, inequalities, and randomness in the design and construction of codes.
Abstract: Preface to the Second Edition. Preface to the First Edition. Acknowledgments for the Second Edition. Acknowledgments for the First Edition. 1. Introduction and Preview. 1.1 Preview of the Book. 2. Entropy, Relative Entropy, and Mutual Information. 2.1 Entropy. 2.2 Joint Entropy and Conditional Entropy. 2.3 Relative Entropy and Mutual Information. 2.4 Relationship Between Entropy and Mutual Information. 2.5 Chain Rules for Entropy, Relative Entropy, and Mutual Information. 2.6 Jensen's Inequality and Its Consequences. 2.7 Log Sum Inequality and Its Applications. 2.8 Data-Processing Inequality. 2.9 Sufficient Statistics. 2.10 Fano's Inequality. Summary. Problems. Historical Notes. 3. Asymptotic Equipartition Property. 3.1 Asymptotic Equipartition Property Theorem. 3.2 Consequences of the AEP: Data Compression. 3.3 High-Probability Sets and the Typical Set. Summary. Problems. Historical Notes. 4. Entropy Rates of a Stochastic Process. 4.1 Markov Chains. 4.2 Entropy Rate. 4.3 Example: Entropy Rate of a Random Walk on a Weighted Graph. 4.4 Second Law of Thermodynamics. 4.5 Functions of Markov Chains. Summary. Problems. Historical Notes. 5. Data Compression. 5.1 Examples of Codes. 5.2 Kraft Inequality. 5.3 Optimal Codes. 5.4 Bounds on the Optimal Code Length. 5.5 Kraft Inequality for Uniquely Decodable Codes. 5.6 Huffman Codes. 5.7 Some Comments on Huffman Codes. 5.8 Optimality of Huffman Codes. 5.9 Shannon-Fano-Elias Coding. 5.10 Competitive Optimality of the Shannon Code. 5.11 Generation of Discrete Distributions from Fair Coins. Summary. Problems. Historical Notes. 6. Gambling and Data Compression. 6.1 The Horse Race. 6.2 Gambling and Side Information. 6.3 Dependent Horse Races and Entropy Rate. 6.4 The Entropy of English. 6.5 Data Compression and Gambling. 6.6 Gambling Estimate of the Entropy of English. Summary. Problems. Historical Notes. 7. Channel Capacity. 7.1 Examples of Channel Capacity. 7.2 Symmetric Channels. 7.3 Properties of Channel Capacity. 7.4 Preview of the Channel Coding Theorem. 7.5 Definitions. 7.6 Jointly Typical Sequences. 7.7 Channel Coding Theorem. 7.8 Zero-Error Codes. 7.9 Fano's Inequality and the Converse to the Coding Theorem. 7.10 Equality in the Converse to the Channel Coding Theorem. 7.11 Hamming Codes. 7.12 Feedback Capacity. 7.13 Source-Channel Separation Theorem. Summary. Problems. Historical Notes. 8. Differential Entropy. 8.1 Definitions. 8.2 AEP for Continuous Random Variables. 8.3 Relation of Differential Entropy to Discrete Entropy. 8.4 Joint and Conditional Differential Entropy. 8.5 Relative Entropy and Mutual Information. 8.6 Properties of Differential Entropy, Relative Entropy, and Mutual Information. Summary. Problems. Historical Notes. 9. Gaussian Channel. 9.1 Gaussian Channel: Definitions. 9.2 Converse to the Coding Theorem for Gaussian Channels. 9.3 Bandlimited Channels. 9.4 Parallel Gaussian Channels. 9.5 Channels with Colored Gaussian Noise. 9.6 Gaussian Channels with Feedback. Summary. Problems. Historical Notes. 10. Rate Distortion Theory. 10.1 Quantization. 10.2 Definitions. 10.3 Calculation of the Rate Distortion Function. 10.4 Converse to the Rate Distortion Theorem. 10.5 Achievability of the Rate Distortion Function. 10.6 Strongly Typical Sequences and Rate Distortion. 10.7 Characterization of the Rate Distortion Function. 10.8 Computation of Channel Capacity and the Rate Distortion Function. Summary. Problems. Historical Notes. 11. Information Theory and Statistics. 11.1 Method of Types. 11.2 Law of Large Numbers. 
11.3 Universal Source Coding. 11.4 Large Deviation Theory. 11.5 Examples of Sanov's Theorem. 11.6 Conditional Limit Theorem. 11.7 Hypothesis Testing. 11.8 Chernoff-Stein Lemma. 11.9 Chernoff Information. 11.10 Fisher Information and the Cramér-Rao Inequality. Summary. Problems. Historical Notes. 12. Maximum Entropy. 12.1 Maximum Entropy Distributions. 12.2 Examples. 12.3 Anomalous Maximum Entropy Problem. 12.4 Spectrum Estimation. 12.5 Entropy Rates of a Gaussian Process. 12.6 Burg's Maximum Entropy Theorem. Summary. Problems. Historical Notes. 13. Universal Source Coding. 13.1 Universal Codes and Channel Capacity. 13.2 Universal Coding for Binary Sequences. 13.3 Arithmetic Coding. 13.4 Lempel-Ziv Coding. 13.5 Optimality of Lempel-Ziv Algorithms. Compression. Summary. Problems. Historical Notes. 14. Kolmogorov Complexity. 14.1 Models of Computation. 14.2 Kolmogorov Complexity: Definitions and Examples. 14.3 Kolmogorov Complexity and Entropy. 14.4 Kolmogorov Complexity of Integers. 14.5 Algorithmically Random and Incompressible Sequences. 14.6 Universal Probability. 14.7 Kolmogorov Complexity. 14.9 Universal Gambling. 14.10 Occam's Razor. 14.11 Kolmogorov Complexity and Universal Probability. 14.12 Kolmogorov Sufficient Statistic. 14.13 Minimum Description Length Principle. Summary. Problems. Historical Notes. 15. Network Information Theory. 15.1 Gaussian Multiple-User Channels. 15.2 Jointly Typical Sequences. 15.3 Multiple-Access Channel. 15.4 Encoding of Correlated Sources. 15.5 Duality Between Slepian-Wolf Encoding and Multiple-Access Channels. 15.6 Broadcast Channel. 15.7 Relay Channel. 15.8 Source Coding with Side Information. 15.9 Rate Distortion with Side Information. 15.10 General Multiterminal Networks. Summary. Problems. Historical Notes. 16. Information Theory and Portfolio Theory. 16.1 The Stock Market: Some Definitions. 16.2 Kuhn-Tucker Characterization of the Log-Optimal Portfolio. 16.3 Asymptotic Optimality of the Log-Optimal Portfolio. 16.4 Side Information and the Growth Rate. 16.5 Investment in Stationary Markets. 16.6 Competitive Optimality of the Log-Optimal Portfolio. 16.7 Universal Portfolios. 16.8 Shannon-McMillan-Breiman Theorem (General AEP). Summary. Problems. Historical Notes. 17. Inequalities in Information Theory. 17.1 Basic Inequalities of Information Theory. 17.2 Differential Entropy. 17.3 Bounds on Entropy and Relative Entropy. 17.4 Inequalities for Types. 17.5 Combinatorial Bounds on Entropy. 17.6 Entropy Rates of Subsets. 17.7 Entropy and Fisher Information. 17.8 Entropy Power Inequality and Brunn-Minkowski Inequality. 17.9 Inequalities for Determinants. 17.10 Inequalities for Ratios of Determinants. Summary. Problems. Historical Notes. Bibliography. List of Symbols. Index.
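
Since the tutorial draws on this book for the notions of entropy and mutual information (see the quoted passage below), here is a minimal numerical sketch of both quantities for a small discrete joint distribution; the distribution itself is an arbitrary assumption.

    # Sketch: entropy and mutual information for a discrete joint distribution.
    # The joint distribution p(x, y) below is an arbitrary illustrative assumption.
    import numpy as np

    p_xy = np.array([[0.30, 0.10],
                     [0.05, 0.55]])          # p(x, y); entries sum to 1

    def entropy_bits(p):
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    p_x = p_xy.sum(axis=1)                   # marginal of X
    p_y = p_xy.sum(axis=0)                   # marginal of Y
    H_x, H_y, H_xy = entropy_bits(p_x), entropy_bits(p_y), entropy_bits(p_xy)
    I_xy = H_x + H_y - H_xy                  # mutual information I(X;Y) = H(X) + H(Y) - H(X,Y)

    print(f"H(X) = {H_x:.3f} bits, H(Y) = {H_y:.3f} bits, I(X;Y) = {I_xy:.3f} bits")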

45,034 citations


"Feedback for physicists: A tutorial..." refers background in this paper

  • ...Recent papers by Touchette and Lloyd (2000, 2004) begin to explore more formally these links and derive a fundamental relationship between the amount of control achievable (“decrease of entropy” in their formulation) and the “mutual information” (Cover and Thomas, 1991) between the dynamical system and the controller created by an initial interaction....


Book
01 Jan 1987
TL;DR: The book treats system identification in the theoretical domain that has direct implications for the understanding and practical application of the various identification methods.
Abstract: The book treats system identification in the theoretical domain that has direct implications for the understanding and practical application of the various identification methods. ...
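
The tutorial cites this book for methods that fit time-domain models directly from an input record u(t) and a measured response y(t) (see the quoted passages below). A minimal least-squares sketch of that idea for an assumed discrete first-order system:

    # Sketch: least-squares fit of a discrete first-order model
    #   y[k+1] = a * y[k] + b * u[k]
    # from input/response data. The "true" system and noise level are assumptions.
    import numpy as np

    rng = np.random.default_rng(2)
    n = 2000
    a_true, b_true = 0.9, 0.5                 # assumed discrete-time plant
    u = rng.standard_normal(n)                # persistently exciting input
    y = np.zeros(n)
    for k in range(n - 1):
        y[k + 1] = a_true * y[k] + b_true * u[k] + 0.01 * rng.standard_normal()

    Phi = np.column_stack([y[:-1], u[:-1]])   # regressors for y[k+1]
    theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
    print("estimated (a, b):", theta)         # should land close to (0.9, 0.5)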

20,436 citations


"Feedback for physicists: A tutorial..." refers methods in this paper

  • ...For an introduction, see Dutton et al. (1997); for full details, see Ljung (1999)....

  • ...Alternatively, there are a number of methods that avoid the transfer function completely: from a given input u(t) and measured response y(t), they directly fit to the coefficients of a time-domain model or directly give pole and zero positions (Ljung, 1999)....


Journal ArticleDOI
TL;DR: In this paper, a simple model based on growth and preferential attachment was proposed, which was able to reproduce the power-law degree distribution of real networks and to capture the evolution of networks, not just their static topology.
Abstract: The emergence of order in natural systems is a constant source of inspiration for both physical and biological sciences. While the spatial order characterizing, for example, crystals has been the basis of many advances in contemporary physics, most complex systems in nature do not offer such a high degree of order. Many of these systems form complex networks whose nodes are the elements of the system and edges represent the interactions between them. Traditionally complex networks have been described by the random graph theory founded in 1959 by Paul Erdős and Alfréd Rényi. One of the defining features of random graphs is that they are statistically homogeneous, and their degree distribution (characterizing the spread in the number of edges starting from a node) is a Poisson distribution. In contrast, recent empirical studies, including the work of our group, indicate that the topology of real networks is much richer than that of random graphs. In particular, the degree distribution of real networks is a power-law, indicating a heterogeneous topology in which the majority of the nodes have a small degree, but there is a significant fraction of highly connected nodes that play an important role in the connectivity of the network. The scale-free topology of real networks has very important consequences for their functioning. For example, we have discovered that scale-free networks are extremely resilient to the random disruption of their nodes. On the other hand, the selective removal of the nodes with highest degree induces a rapid breakdown of the network into isolated subparts that cannot communicate with each other. The non-trivial scaling of the degree distribution of real networks is also an indication of their assembly and evolution. Indeed, our modeling studies have shown us that there are general principles governing the evolution of networks. Most networks start from a small seed and grow by the addition of new nodes which attach to the nodes already in the system. This process obeys preferential attachment: the new nodes are more likely to connect to nodes with already high degree. We have proposed a simple model based on these two principles which was able to reproduce the power-law degree distribution of real networks. Perhaps even more importantly, this model paved the way to a new paradigm of network modeling, trying to capture the evolution of networks, not just their static topology.
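
The growth-plus-preferential-attachment mechanism described above is easy to simulate; a minimal sketch (network size, seed, and one edge per new node are assumptions) showing the heavy-tailed degree distribution that results:

    # Sketch: growth with preferential attachment. Each new node attaches to an
    # existing node with probability proportional to its degree. Sampling a uniformly
    # random entry of the edge-endpoint list implements exactly that.
    import numpy as np

    rng = np.random.default_rng(3)
    n_nodes = 20_000                     # network size (assumed)
    degree = np.zeros(n_nodes, dtype=int)
    degree[0] = degree[1] = 1            # seed: a single edge between nodes 0 and 1
    endpoints = [0, 1]                   # every edge contributes both of its endpoints

    for new in range(2, n_nodes):
        old = endpoints[rng.integers(len(endpoints))]   # degree-proportional choice
        degree[new] += 1
        degree[old] += 1
        endpoints.extend([new, old])

    print("mean degree:", degree.mean(), "  max degree:", degree.max())
    print("fraction of nodes with degree >= 20:", np.mean(degree >= 20))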

18,415 citations

Journal ArticleDOI
TL;DR: Developments in this field are reviewed, including such concepts as the small-world effect, degree distributions, clustering, network correlations, random graph models, models of network growth and preferential attachment, and dynamical processes taking place on networks.
Abstract: Inspired by empirical studies of networked systems such as the Internet, social networks, and biological networks, researchers have in recent years developed a variety of techniques and models to help us understand or predict the behavior of these systems. Here we review developments in this field, including such concepts as the small-world effect, degree distributions, clustering, network correlations, random graph models, models of network growth and preferential attachment, and dynamical processes taking place on networks.
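
Two of the quantities surveyed in the review, the degree distribution and the clustering coefficient, can be computed for the random-graph baseline in a few lines; the graph size and edge probability below are assumptions, and real networks typically show heavier-tailed degrees and higher clustering than this baseline.

    # Sketch: degree distribution and global clustering coefficient of an
    # Erdos-Renyi random graph (size and edge probability are assumptions).
    import numpy as np

    rng = np.random.default_rng(4)
    n, p = 2000, 0.005                          # nodes and edge probability (assumed)
    upper = np.triu(rng.random((n, n)) < p, 1)  # independent edges above the diagonal
    A = (upper | upper.T).astype(float)         # symmetric adjacency matrix, no self-loops

    degrees = A.sum(axis=1)
    print(f"mean degree: {degrees.mean():.2f}   std: {degrees.std():.2f} (Poisson-like)")

    # Global clustering: 3 * (number of triangles) / (number of connected triples)
    triangles = np.trace(A @ A @ A) / 6.0
    triples = (degrees * (degrees - 1)).sum() / 2.0
    print(f"clustering coefficient: {3 * triangles / triples:.4f}   (expected ~ p = {p})")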

17,647 citations


"Feedback for physicists: A tutorial..." refers background in this paper

  • ...The structure of such networks is a topic of intense current interest (Albert and Barabási, 2002; Newman, 2003)....
