Author

Roger A. Horn

Bio: Roger A. Horn is an academic researcher from the University of Utah. The author has contributed to research in topics: Matrix (mathematics) & Canonical form. The author has an h-index of 32 and has co-authored 116 publications receiving 41,610 citations. Previous affiliations of Roger A. Horn include Johns Hopkins University & Stanford University.


Papers
Journal ArticleDOI
TL;DR: In this article, a general theory of infinitely divisible kernels is developed, motivated by the recent result that the Green's function for Laplace's equation is, under certain conditions, an infinitely divisible kernel; the discrete analogues, infinitely divisible matrices, are characterized, and the occurrence of zeros among their entries is shown to obey a condition most easily stated in the language of graph theory.
Abstract: Interest in this subject was stimulated by the recent observation that the Green's function for Laplace's equation is, under certain conditions, an infinitely divisible kernel. In this paper we develop a general theory of infinitely divisible kernels and indicate briefly a few of its applications. It is convenient to begin by examining the discrete analogues of the infinitely divisible kernels, i.e., infinitely divisible matrices. In §1 we characterize this class of matrices and show that the occurrence of zeros among their entries is subject to a certain condition which is most easily stated in the language of graph theory. We then use the matricial theory to obtain corresponding results for continuous infinitely divisible kernels, and we develop the technical tools for dealing with these kernels which are needed later in the applications. We shall use R^m and C^m to denote m-dimensional real and complex space, respectively.
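For orientation, the central object of the paper in its standard matrix form (our paraphrase of the usual definition, not a quotation from the paper):

```latex
% A matrix A = [a_{ij}] with nonnegative entries is infinitely divisible if
% every entrywise (Hadamard) fractional power is positive semidefinite:
\[
  A^{(t)} := \bigl[ a_{ij}^{\,t} \bigr] \succeq 0 \qquad \text{for all } t > 0.
\]
% It suffices to check t = 1/n for every positive integer n: Hadamard products
% of positive semidefinite matrices are positive semidefinite (Schur product
% theorem), so all positive rational powers follow, and continuity gives the rest.
```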

87 citations

Journal ArticleDOI
TL;DR: In this article, the authors discuss two applications to the study of characteristic functions and completely monotonic functions, and show how the classical representation theorems for infinitely divisible laws may be obtained.
Abstract: Generalizations and applications of a recent theorem of C. Loewner on nonnegative quadratic forms have led to interesting new results and to some especially simple derivations of well-known theorems from a unified point of view. In this note we discuss two applications to the study of characteristic functions and completely monotonic functions, and show how the classical representation theorems for infinitely divisible laws may be obtained.
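For context, the classical representation alluded to here is the Lévy–Khinchine formula, stated in its standard form (general background, not quoted from this note):

```latex
% phi is the characteristic function of an infinitely divisible law iff
\[
  \varphi(t) = \exp\!\left( i\gamma t - \tfrac{1}{2}\sigma^{2}t^{2}
    + \int_{\mathbb{R}\setminus\{0\}}
      \left( e^{itx} - 1 - \frac{itx}{1+x^{2}} \right) d\nu(x) \right),
\]
% where gamma is real, sigma^2 >= 0, and the Levy measure nu satisfies
% \int \min(1, x^2) \, d\nu(x) < \infty.
```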

82 citations

Journal ArticleDOI
TL;DR: The findings suggest that, with a more representative set of hospitals, the difference between unadjusted and Severity-adjusted DRG-based prospective payment could be greater than 35 per cent of a hospital's total operating costs.
Abstract: This study compares the financial impact of a Diagnosis Related Group (DRG) prospective payment system with that of a Severity of Illness-adjusted DRG prospective payment system. The data base of about 106,000 discharges is from 15 hospitals, all of which had a Health Care Financing Administration (HCFA) DRG case mix index greater than 1. In order to pool the data over the 15 hospitals, all charges were converted to costs, normalized to Fiscal Year 1983, and adjusted for medical education and wage levels. The findings showed that, for the study population as a whole, DRGs explained 28 per cent of the variability in resource use per case while Severity of Illness-adjusted DRGs explained 61 per cent of the variability in resource use per case. When we simulated prospective payment systems based on DRGs and on Severity-adjusted DRGs, we found that the financial impact of the two systems differed by very little in some hospitals and by as much as 35 per cent of total operating costs in other hospitals. Thus, even with a data set that is relatively homogeneous (with respect to the HCFA DRG case mix index definition of hospitals), we found substantial inequities in payment when DRGs were not adjusted for Severity of Illness. These findings suggest that, with a more representative set of hospitals, the difference between unadjusted and Severity-adjusted DRG-based prospective payment could be greater than 35 per cent of a hospital's total operating costs.
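The "variability explained" figures above are variance decompositions across case-mix groups. A minimal sketch of that computation on synthetic data (all numbers, column names, and group structures here are hypothetical, not the study's data):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 1000
severity_drg = rng.integers(0, 200, size=n)   # hypothetical fine grouping
drg = severity_drg // 4                       # coarser grouping nested in it
group_effect = rng.normal(0.0, 0.5, size=200)[severity_drg]
cost = np.exp(8.0 + group_effect + rng.normal(0.0, 0.5, size=n))
df = pd.DataFrame({"cost": cost, "drg": drg, "severity_drg": severity_drg})

def variance_explained(df, group_col, y_col="cost"):
    """R^2 when each case's cost is predicted by its group's mean cost."""
    fitted = df.groupby(group_col)[y_col].transform("mean")
    residual_var = ((df[y_col] - fitted) ** 2).mean()
    return 1.0 - residual_var / df[y_col].var(ddof=0)

print(f"DRG grouping:               R^2 = {variance_explained(df, 'drg'):.2f}")
print(f"Severity-adjusted grouping: R^2 = {variance_explained(df, 'severity_drg'):.2f}")
```

The finer grouping always explains at least as much variance in-sample; the study's point is how much more, and what that gap implies for payment.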

81 citations

Book ChapterDOI
01 Dec 1985
TL;DR: This chapter defines norms that assign a size to every vector and matrix, allowing us to make meaningful statements about size, comparisons, and convergence.

Abstract: In linear algebra, we work with vectors and matrices. Vectors are our primary objects, and we think of a matrix as an operator that can transform one vector into another. Vectors and matrices are lists and tables of numbers, so it may not be obvious how to say that one object is large and another small. We will define norms that assign a size to every such object, allowing us to make meaningful statements about size, comparisons, and convergence.
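The definitions this chapter builds on, in their standard form (our summary, assuming the usual conventions):

```latex
% A norm on a vector space V is a function \|\cdot\| : V \to [0,\infty) with
\[
  \|x\| = 0 \iff x = 0, \qquad
  \|\alpha x\| = |\alpha|\,\|x\|, \qquad
  \|x + y\| \le \|x\| + \|y\|.
\]
% Familiar instances: the vector p-norms and the induced (operator) matrix norm,
\[
  \|x\|_{p} = \Bigl( \sum_{i} |x_{i}|^{p} \Bigr)^{1/p} \quad (p \ge 1),
  \qquad
  \|A\| = \max_{x \ne 0} \frac{\|Ax\|}{\|x\|}.
\]
```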

78 citations

Journal ArticleDOI
TL;DR: In this article, the authors survey the known special forms to which a given complex matrix may be reduced by unitary or general consimilarity, and they describe a canonical form under consimilarity.
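For reference, consimilarity is the change-of-basis relation for antilinear maps, just as similarity is for linear maps (standard definition, our gloss):

```latex
% A, B in M_n(C) are consimilar if there is a nonsingular S with
\[
  B = S A \overline{S}^{-1},
\]
% where \overline{S} denotes the entrywise complex conjugate;
% taking S unitary gives unitary consimilarity.
```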

74 citations


Cited by
Journal ArticleDOI
TL;DR: In this article, a model is described in an lmer call by a formula, in this case including both fixed- and random-effects terms, and the formula and data together determine a numerical representation of the model from which the profiled deviance or the profiled REML criterion can be evaluated as a function of some of the model parameters.
Abstract: Maximum likelihood or restricted maximum likelihood (REML) estimates of the parameters in linear mixed-effects models can be determined using the lmer function in the lme4 package for R. As for most model-fitting functions in R, the model is described in an lmer call by a formula, in this case including both fixed- and random-effects terms. The formula and data together determine a numerical representation of the model from which the profiled deviance or the profiled REML criterion can be evaluated as a function of some of the model parameters. The appropriate criterion is optimized, using one of the constrained optimization functions in R, to provide the parameter estimates. We describe the structure of the model, the steps in evaluating the profiled deviance or REML criterion, and the structure of classes or types that represents such a model. Sufficient detail is included to allow specialization of these structures by users who wish to write functions to fit specialized linear mixed models, such as models incorporating pedigrees or smoothing splines, that are not easily expressible in the formula language used by lmer.
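The paper documents R's lme4 itself; as a rough cross-language analogue, Python's statsmodels fits a comparable linear mixed model from a formula. This is a sketch with hypothetical column names, not the lme4 interface:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical longitudinal data: repeated measurements of y per subject.
df = pd.read_csv("longitudinal.csv")  # assumed columns: y, day, subject

# Fixed effect of day plus a random intercept and slope per subject,
# estimated by REML -- roughly lmer(y ~ day + (day | subject), data).
model = smf.mixedlm("y ~ day", df, groups=df["subject"], re_formula="~day")
result = model.fit(reml=True)
print(result.summary())
```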

50,607 citations

Journal ArticleDOI
TL;DR: Several arguments that support the observed high accuracy of SVMs are reviewed, and numerous examples and proofs of most of the key theorems are given.
Abstract: The tutorial starts with an overview of the concepts of VC dimension and structural risk minimization. We then describe linear Support Vector Machines (SVMs) for separable and non-separable data, working through a non-trivial example in detail. We describe a mechanical analogy, and discuss when SVM solutions are unique and when they are global. We describe how support vector training can be practically implemented, and discuss in detail the kernel mapping technique which is used to construct SVM solutions which are nonlinear in the data. We show how Support Vector machines can have very large (even infinite) VC dimension by computing the VC dimension for homogeneous polynomial and Gaussian radial basis function kernels. While very high VC dimension would normally bode ill for generalization performance, and while at present there exists no theory which shows that good generalization performance is guaranteed for SVMs, there are several arguments which support the observed high accuracy of SVMs, which we review. Results of some experiments which were inspired by these arguments are also presented. We give numerous examples and proofs of most of the key theorems. There is new material, and I hope that the reader will find that even old material is cast in a fresh light.
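A minimal illustration of the soft-margin SVM with a Gaussian kernel, using scikit-learn (the tutorial predates this library; the dataset and settings here are ours):

```python
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Non-separable 2-D data; the RBF kernel yields a decision boundary that is
# nonlinear in input space, via the kernel mapping the tutorial describes.
X, y = make_moons(n_samples=500, noise=0.2, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = SVC(kernel="rbf", C=1.0, gamma="scale")  # soft margin + Gaussian kernel
clf.fit(X_tr, y_tr)
print(f"test accuracy: {clf.score(X_te, y_te):.3f}")
print(f"support vectors: {clf.n_support_.sum()} of {len(X_tr)} training points")
```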

15,696 citations

Journal ArticleDOI
22 Dec 2000 - Science
TL;DR: Locally linear embedding (LLE) is introduced, an unsupervised learning algorithm that computes low-dimensional, neighborhood-preserving embeddings of high-dimensional inputs that learns the global structure of nonlinear manifolds.
Abstract: Many areas of science depend on exploratory data analysis and visualization. The need to analyze large amounts of multivariate data raises the fundamental problem of dimensionality reduction: how to discover compact representations of high-dimensional data. Here, we introduce locally linear embedding (LLE), an unsupervised learning algorithm that computes low-dimensional, neighborhood-preserving embeddings of high-dimensional inputs. Unlike clustering methods for local dimensionality reduction, LLE maps its inputs into a single global coordinate system of lower dimensionality, and its optimizations do not involve local minima. By exploiting the local symmetries of linear reconstructions, LLE is able to learn the global structure of nonlinear manifolds, such as those generated by images of faces or documents of text.
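A short example of the algorithm as implemented in scikit-learn (a later implementation, not the authors' original code); swiss-roll data stand in for the face and text examples:

```python
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import LocallyLinearEmbedding

# A 3-D nonlinear manifold flattened to 2-D while preserving neighborhoods.
X, color = make_swiss_roll(n_samples=1500, random_state=0)

lle = LocallyLinearEmbedding(n_neighbors=12, n_components=2)
Y = lle.fit_transform(X)   # rows of Y are the global 2-D coordinates
print(Y.shape)             # (1500, 2)
print(f"reconstruction error: {lle.reconstruction_error_:.2e}")
```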

15,106 citations

Journal ArticleDOI
TL;DR: A distinctive feature of this work is to address consensus problems for networks with directed information flow by establishing a direct connection between the algebraic connectivity of the network and the performance of a linear consensus protocol.
Abstract: In this paper, we discuss consensus problems for networks of dynamic agents with fixed and switching topologies. We analyze three cases: 1) directed networks with fixed topology; 2) directed networks with switching topology; and 3) undirected networks with communication time-delays and fixed topology. We introduce two consensus protocols for networks with and without time-delays and provide a convergence analysis in all three cases. We establish a direct connection between the algebraic connectivity (or Fiedler eigenvalue) of the network and the performance (or negotiation speed) of a linear consensus protocol. This required the generalization of the notion of algebraic connectivity of undirected graphs to digraphs. It turns out that balanced digraphs play a key role in addressing average-consensus problems. We introduce disagreement functions for convergence analysis of consensus protocols. A disagreement function is a Lyapunov function for the disagreement network dynamics. We proposed a simple disagreement function that is a common Lyapunov function for the disagreement dynamics of a directed network with switching topology. A distinctive feature of this work is to address consensus problems for networks with directed information flow. We provide analytical tools that rely on algebraic graph theory, matrix theory, and control theory. Simulations are provided that demonstrate the effectiveness of our theoretical results.
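A toy version of the fixed-topology protocol on a balanced digraph (our own minimal sketch; the paper's continuous-time protocol is x' = -Lx, discretized here with step eps):

```python
import numpy as np

# Directed 4-cycle: A[i, j] = 1 means agent i receives information from j.
A = np.array([[0, 1, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [1, 0, 0, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A     # graph Laplacian
eps = 0.4                          # step size, below 1/(max degree) = 1
x = np.array([1.0, 3.0, -2.0, 6.0])

for _ in range(200):
    x = x - eps * (L @ x)          # x(k+1) = (I - eps L) x(k)

# The cycle is balanced (in-degree = out-degree at every node), so the
# agents agree on the average of the initial states: 2.0.
print(x)
```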

11,658 citations

Book
01 Jan 1994
TL;DR: In this paper, the authors present a brief history of LMIs in control theory and treat standard problems involving LMIs, linear differential inclusions, and matrix problems with analytic solutions.
Abstract: Preface.
1. Introduction: Overview; A Brief History of LMIs in Control Theory; Notes on the Style of the Book; Origin of the Book.
2. Some Standard Problems Involving LMIs: Linear Matrix Inequalities; Some Standard Problems; Ellipsoid Algorithm; Interior-Point Methods; Strict and Nonstrict LMIs; Miscellaneous Results on Matrix Inequalities; Some LMI Problems with Analytic Solutions.
3. Some Matrix Problems: Minimizing Condition Number by Scaling; Minimizing Condition Number of a Positive-Definite Matrix; Minimizing Norm by Scaling; Rescaling a Matrix Positive-Definite; Matrix Completion Problems; Quadratic Approximation of a Polytopic Norm; Ellipsoidal Approximation.
4. Linear Differential Inclusions: Differential Inclusions; Some Specific LDIs; Nonlinear System Analysis via LDIs.
5. Analysis of LDIs, State Properties: Quadratic Stability; Invariant Ellipsoids.
6. Analysis of LDIs, Input/Output Properties: Input-to-State Properties; State-to-Output Properties; Input-to-Output Properties.
7. State-Feedback Synthesis for LDIs: Static State-Feedback Controllers; State Properties; Input-to-State Properties; State-to-Output Properties; Input-to-Output Properties; Observer-Based Controllers for Nonlinear Systems.
8. Lur'e and Multiplier Methods: Analysis of Lur'e Systems; Integral Quadratic Constraints; Multipliers for Systems with Unknown Parameters.
9. Systems with Multiplicative Noise: Analysis of Systems with Multiplicative Noise; State-Feedback Synthesis.
10. Miscellaneous Problems: Optimization over an Affine Family of Linear Systems; Analysis of Systems with LTI Perturbations; Positive Orthant Stabilizability; Linear Systems with Delays; Interpolation Problems; The Inverse Problem of Optimal Control; System Realization Problems; Multi-Criterion LQG; Nonconvex Multi-Criterion Quadratic Problems.
Notation. List of Acronyms. Bibliography. Index.
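As a small concrete instance of the problems the book treats, a quadratic-stability LMI solved with the modern cvxpy library (our example, not the book's code; the A matrix is an arbitrary Hurwitz matrix):

```python
import cvxpy as cp
import numpy as np

# Find P > 0 with A^T P + P A < 0, certifying stability of dx/dt = A x.
A = np.array([[-1.0,  2.0],
              [ 0.0, -3.0]])
n = A.shape[0]

P = cp.Variable((n, n), symmetric=True)
eps = 1e-6  # strict inequalities handled as small margins
constraints = [P >> eps * np.eye(n),
               A.T @ P + P @ A << -eps * np.eye(n)]
prob = cp.Problem(cp.Minimize(cp.trace(P)), constraints)
prob.solve()
print("status:", prob.status)
print("P =\n", P.value)  # V(x) = x^T P x is a quadratic Lyapunov function
```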

11,085 citations