Author

Torsten Söderström

Bio: Torsten Söderström is an academic researcher at Uppsala University. His research centers on system identification and estimation theory. He has an h-index of 48 and has co-authored 346 publications receiving 18,409 citations. Previous affiliations include Karlstad University and Information Technology University.


Papers
Book
01 Jan 1988

5,375 citations

Book
01 Jan 1983
TL;DR: Methods of recursive identification deal with the problem of building mathematical models of signals and systems on-line, at the same time as data is being collected.
Abstract: Methods of recursive identification deal with the problem of building mathematical models of signals and systems on-line, at the same time as data is being collected. Such methods, which are also k ...

2,960 citations
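The book itself is not excerpted here, but the flavor of on-line identification is easy to illustrate with recursive least squares, one of the standard algorithms in this literature. The sketch below fits an ARX model with an exponential forgetting factor; the model orders, forgetting factor, and simulated system are illustrative assumptions, not examples taken from the book.

```python
import numpy as np

def rls_identify(u, y, n=2, lam=0.99):
    """Recursive least squares for an ARX model
    y[t] + a1*y[t-1] + ... + an*y[t-n] = b1*u[t-1] + ... + bn*u[t-n] + e[t].
    lam is an exponential forgetting factor (lam = 1 gives ordinary RLS)."""
    theta = np.zeros(2 * n)            # parameter estimate [a1..an, b1..bn]
    P = 1e4 * np.eye(2 * n)            # large initial covariance = weak prior
    for t in range(n, len(y)):
        # regressor of (negated) past outputs and past inputs
        phi = np.concatenate([-y[t - n:t][::-1], u[t - n:t][::-1]])
        k = P @ phi / (lam + phi @ P @ phi)        # gain vector
        theta = theta + k * (y[t] - phi @ theta)   # update with prediction error
        P = (P - np.outer(k, phi @ P)) / lam       # covariance update
    return theta

# Simulate a second-order system and identify it on-line
rng = np.random.default_rng(0)
u = rng.standard_normal(500)
y = np.zeros(500)
for t in range(2, 500):
    y[t] = (1.5 * y[t-1] - 0.7 * y[t-2]
            + 1.0 * u[t-1] + 0.5 * u[t-2]
            + 0.1 * rng.standard_normal())

print(rls_identify(u, y))   # roughly [-1.5, 0.7, 1.0, 0.5]
```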

Book
01 Sep 1983
TL;DR: This paper gives a tutorial overview of instrumental variable methods, including comparisons with the least-squares method and an analysis of the consistency and asymptotic distribution of the parameter estimates.
Abstract: This paper gives a tutorial overview of instrumental variable methods. Comparisons are made to the least-squares method. An analysis including consistency and asymptotic distribution of the parameter estimates is included.

519 citations
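As a rough numerical companion to the survey's theme (an assumed setup, not an example from the paper): when the disturbance is correlated with the regressors, least squares is inconsistent, while an instrumental variable estimate built from delayed inputs is not.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 20000
u = rng.standard_normal(N)
e = rng.standard_normal(N)
y = np.zeros(N)
# First-order system with an MA(1) disturbance: the noise is correlated
# with the regressor y[t-1], which biases ordinary least squares.
for t in range(1, N):
    y[t] = 0.8 * y[t-1] + 1.0 * u[t-1] + e[t] + 0.9 * e[t-1]

Phi = np.column_stack([y[1:N-1], u[1:N-1]])   # regressors [y[t-1], u[t-1]]
Y = y[2:N]

# Ordinary least squares: the y-coefficient is noticeably biased here
theta_ls = np.linalg.lstsq(Phi, Y, rcond=None)[0]

# Instrumental variables: instruments z[t] = [u[t-2], u[t-1]] are
# correlated with the regressors but not with the disturbance.
Z = np.column_stack([u[0:N-2], u[1:N-1]])
theta_iv = np.linalg.solve(Z.T @ Phi, Z.T @ Y)

print("LS :", theta_ls)   # drifts away from [0.8, 1.0]
print("IV :", theta_iv)   # close to [0.8, 1.0]
```

The instruments work because, the input being external, delayed inputs correlate with the regressors but not with the noise.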

Book
26 Dec 2018
TL;DR: The paper gives a survey of errors-in-variables methods in system identification and presents a number of approaches for parameter estimation of errors-in-variables models.
Abstract: The paper gives a survey of errors-in-variables methods in system identification. Background and motivation are given, and examples illustrate why the identification problem can be difficult. Under general weak assumptions, the systems are not identifiable, but can be parameterized using one degree-of-freedom. Examples where identifiability is achieved under additional assumptions are also provided. A number of approaches for parameter estimation of errors-in-variables models are presented. The underlying assumptions and principles for each approach are highlighted.

440 citations
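A small simulation makes the identifiability issue above concrete (all values are illustrative assumptions): when the measured input contains noise, least squares is biased toward zero, and the true parameter is recovered only under an extra assumption, here that the input-noise variance is known.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 100_000
x0 = rng.standard_normal(N)                     # true (noise-free) input
y = 2.0 * x0 + 0.1 * rng.standard_normal(N)     # noisy output
sigma2 = 0.5                                    # assumed known input-noise variance
x = x0 + np.sqrt(sigma2) * rng.standard_normal(N)  # measured input

# Ordinary least squares on the measured input: attenuated toward zero
b_ls = (x @ y) / (x @ x)

# Bias-compensated least squares, usable only because sigma2 is known --
# one of the extra assumptions that restores identifiability
b_bc = (x @ y) / (x @ x - N * sigma2)

print(f"LS   : {b_ls:.3f}   (about 2.0 / (1 + 0.5) = 1.33)")
print(f"BCLS : {b_bc:.3f}   (close to the true value 2.0)")
```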

Journal Article
TL;DR: In this paper, it is shown that prediction error identification methods, applied in a direct fashion, will give correct estimates in a number of feedback cases, and that the accuracy is not necessarily worse in the presence of feedback.
Abstract: It is often necessary in practice to perform identification experiments on systems operating in closed loop. There has been some confusion about the possibilities of successful identification in such cases, evidently due to the fact that certain common methods then fail. A rapidly increasing literature on the problem is briefly surveyed in this paper, and an overview of a particular approach is given. It is shown that prediction error identification methods, applied in a direct fashion, will give correct estimates in a number of feedback cases. Furthermore, the accuracy is not necessarily worse in the presence of feedback; in fact, optimal inputs may very well require feedback terms. Some practical applications are also described.

405 citations
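A minimal sketch of the direct approach described above, under assumed plant and controller dynamics: fit the plant model from input-output data exactly as in open loop, ignoring the feedback. For an ARX structure with white equation noise, the least-squares fit below coincides with the prediction error estimate and remains consistent despite the feedback, provided an external reference excites the loop.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 50_000
r = rng.standard_normal(N)        # external reference excitation
e = 0.1 * rng.standard_normal(N)  # white equation noise
y = np.zeros(N)
u = np.zeros(N)
# Plant y[t] = 0.9*y[t-1] + 0.5*u[t-1] + e[t], operated in closed loop
# under the proportional feedback u[t] = r[t] - 0.4*y[t].
for t in range(1, N):
    y[t] = 0.9 * y[t-1] + 0.5 * u[t-1] + e[t]
    u[t] = r[t] - 0.4 * y[t]

# Direct approach: fit the plant from (u, y) as if the data were open loop.
Phi = np.column_stack([y[0:N-1], u[0:N-1]])
theta = np.linalg.lstsq(Phi, y[1:N], rcond=None)[0]
print(theta)   # close to the open-loop parameters [0.9, 0.5]
```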


Cited by
Book
01 Jan 1988
TL;DR: This book provides a clear and simple account of the key ideas and algorithms of reinforcement learning, ranging from the history of the field's intellectual foundations to the most recent developments and applications.
Abstract: Reinforcement learning, one of the most active research areas in artificial intelligence, is a computational approach to learning whereby an agent tries to maximize the total amount of reward it receives when interacting with a complex, uncertain environment. In Reinforcement Learning, Richard Sutton and Andrew Barto provide a clear and simple account of the key ideas and algorithms of reinforcement learning. Their discussion ranges from the history of the field's intellectual foundations to the most recent developments and applications. The only necessary mathematical background is familiarity with elementary concepts of probability. The book is divided into three parts. Part I defines the reinforcement learning problem in terms of Markov decision processes. Part II provides basic solution methods: dynamic programming, Monte Carlo methods, and temporal-difference learning. Part III presents a unified view of the solution methods and incorporates artificial neural networks, eligibility traces, and planning; the two final chapters present case studies and consider the future of reinforcement learning.

37,989 citations
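As a taste of the Part II solution methods mentioned above, here is a minimal tabular TD(0) prediction sketch on the classic five-state random walk; it is an illustrative reconstruction, not code from the book.

```python
import numpy as np

# Tabular TD(0) value prediction on a five-state random walk:
# states 0..4, terminating off either end, with reward +1 on the
# right exit and 0 everywhere else.
rng = np.random.default_rng(4)
n_states, alpha, gamma = 5, 0.1, 1.0
V = np.zeros(n_states)

for episode in range(5000):
    s = n_states // 2                         # start in the middle
    while True:
        s2 = s + (1 if rng.random() < 0.5 else -1)
        if s2 < 0:                            # left terminal, reward 0
            V[s] += alpha * (0.0 - V[s])
            break
        if s2 >= n_states:                    # right terminal, reward +1
            V[s] += alpha * (1.0 - V[s])
            break
        V[s] += alpha * (gamma * V[s2] - V[s])  # TD(0) update, reward 0
        s = s2

print(V.round(2))   # approx [0.17, 0.33, 0.50, 0.67, 0.83]
```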

Journal ArticleDOI
TL;DR: This historical survey compactly summarizes relevant work, much of it from the previous millennium, reviewing deep supervised learning, unsupervised learning, reinforcement learning and evolutionary computation, and indirect search for short programs encoding deep and large networks.

14,635 citations

Journal Article
TL;DR: This book by a teacher of statistics (as well as a consultant for "experimenters") is a comprehensive study of the philosophical background for the statistical design of experiment.
Abstract: THE DESIGN AND ANALYSIS OF EXPERIMENTS. By Oscar Kempthorne. New York, John Wiley and Sons, Inc., 1952. 631 pp. $8.50. This book by a teacher of statistics (as well as a consultant for "experimenters") is a comprehensive study of the philosophical background for the statistical design of experiment. It is necessary to have some facility with algebraic notation and manipulation to be able to use the volume intelligently. The problems are presented from the theoretical point of view, without such practical examples as would be helpful for those not acquainted with mathematics. The mathematical justification for the techniques is given. As a somewhat advanced treatment of the design and analysis of experiments, this volume will be interesting and helpful for many who approach statistics theoretically as well as practically. With emphasis on the "why," and with description given broadly, the author relates the subject matter to the general theory of statistics and to the general problem of experimental inference. MARGARET J. ROBERTSON

13,333 citations

Journal ArticleDOI
01 Aug 1988
TL;DR: Measurements and reports from beta testers suggest that the final product is fairly good at dealing with congested conditions on the Internet; the paper describes seven new algorithms added to the 4BSD TCP, including one recently developed by Phil Karn of Bell Communications Research and one described in a soon-to-be-published RFC.
Abstract: In October of '86, the Internet had the first of what became a series of 'congestion collapses'. During this period, the data throughput from LBL to UC Berkeley (sites separated by 400 yards and three IMP hops) dropped from 32 Kbps to 40 bps. Mike Karels and I were fascinated by this sudden factor-of-thousand drop in bandwidth and embarked on an investigation of why things had gotten so bad. We wondered, in particular, if the 4.3BSD (Berkeley UNIX) TCP was mis-behaving or if it could be tuned to work better under abysmal network conditions. The answer to both of these questions was "yes".

Since that time, we have put seven new algorithms into the 4BSD TCP:

(i) round-trip-time variance estimation
(ii) exponential retransmit timer backoff
(iii) slow-start
(iv) more aggressive receiver ack policy
(v) dynamic window sizing on congestion
(vi) Karn's clamped retransmit backoff
(vii) fast retransmit

Our measurements and the reports of beta testers suggest that the final product is fairly good at dealing with congested conditions on the Internet.

This paper is a brief description of (i)-(v) and the rationale behind them. (vi) is an algorithm recently developed by Phil Karn of Bell Communications Research, described in [KP87]. (vii) is described in a soon-to-be-published RFC.

Algorithms (i)-(v) spring from one observation: the flow on a TCP connection (or ISO TP-4 or Xerox NS SPP connection) should obey a 'conservation of packets' principle. And, if this principle were obeyed, congestion collapse would become the exception rather than the rule. Thus congestion control involves finding places that violate conservation and fixing them.

By 'conservation of packets' I mean that for a connection 'in equilibrium', i.e., running stably with a full window of data in transit, the packet flow is what a physicist would call 'conservative': a new packet isn't put into the network until an old packet leaves. The physics of flow predicts that systems with this property should be robust in the face of congestion. Observation of the Internet suggests that it was not particularly robust. Why the discrepancy?

There are only three ways for packet conservation to fail:

1. The connection doesn't get to equilibrium, or
2. A sender injects a new packet before an old packet has exited, or
3. The equilibrium can't be reached because of resource limits along the path.

In the following sections, we treat each of these in turn.

5,620 citations
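The window dynamics behind slow start and congestion avoidance can be caricatured in a few lines; the thresholds and loss instants below are illustrative assumptions, not measurements from the paper.

```python
# Toy simulation of slow-start / congestion-avoidance window growth.
def congestion_window(rounds, ssthresh=16, loss_at=(12, 20)):
    cwnd, history = 1, []
    for rtt in range(rounds):
        history.append(cwnd)
        if rtt in loss_at:
            ssthresh = max(cwnd // 2, 2)  # multiplicative decrease
            cwnd = 1                      # timeout: restart slow start
        elif cwnd < ssthresh:
            cwnd *= 2                     # slow start: exponential growth
        else:
            cwnd += 1                     # congestion avoidance: linear growth
    return history

print(congestion_window(30))
```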

Journal ArticleDOI
TL;DR: The article consists of background material and the basic problem formulation, introduces spectral-based algorithmic solutions to the signal parameter estimation problem, and contrasts these suboptimal solutions with parametric methods.
Abstract: The quintessential goal of sensor array signal processing is the estimation of parameters by fusing temporal and spatial information, captured via sampling a wavefield with a set of judiciously placed antenna sensors. The wavefield is assumed to be generated by a finite number of emitters, and contains information about signal parameters characterizing the emitters. A review of the area of array processing is given. The focus is on parameter estimation methods, and many relevant problems are only briefly mentioned. We emphasize the relatively more recent subspace-based methods in relation to beamforming. The article consists of background material and of the basic problem formulation. Then we introduce spectral-based algorithmic solutions to the signal parameter estimation problem. We contrast these suboptimal solutions to parametric methods. Techniques derived from maximum likelihood principles as well as geometric arguments are covered. Later, a number of more specialized research topics are briefly reviewed. Then, we look at a number of real-world problems for which sensor array processing methods have been applied. We also include an example with real experimental data involving closely spaced emitters and highly correlated signals, as well as a manufacturing application example.

4,410 citations
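To make the subspace-based methods mentioned above concrete, here is a minimal MUSIC direction-of-arrival sketch for a uniform linear array; the array geometry, source directions, and noise level are all illustrative assumptions.

```python
import numpy as np

# MUSIC sketch for a uniform linear array with half-wavelength spacing.
rng = np.random.default_rng(5)
m, n_snap = 8, 2000                       # sensors, snapshots
doas = np.deg2rad([-20.0, 35.0])          # two emitter directions

def steering(theta):
    # m x len(theta) matrix of array response vectors
    return np.exp(1j * np.pi * np.arange(m)[:, None] * np.sin(theta))

A = steering(doas)
S = rng.standard_normal((2, n_snap)) + 1j * rng.standard_normal((2, n_snap))
noise = 0.1 * (rng.standard_normal((m, n_snap))
               + 1j * rng.standard_normal((m, n_snap)))
X = A @ S + noise                         # array snapshots

R = X @ X.conj().T / n_snap               # sample covariance
_, eigvec = np.linalg.eigh(R)             # eigenvalues in ascending order
En = eigvec[:, : m - 2]                   # noise subspace (m minus #emitters)

grid = np.deg2rad(np.linspace(-90, 90, 1801))
p = 1.0 / np.sum(np.abs(En.conj().T @ steering(grid)) ** 2, axis=0)  # pseudospectrum
peaks = np.where((p[1:-1] > p[:-2]) & (p[1:-1] > p[2:]))[0] + 1      # local maxima
best = peaks[np.argsort(p[peaks])[-2:]]
print(np.rad2deg(np.sort(grid[best])))    # close to [-20, 35]
```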