Author

Ulf Grenander

Bio: Ulf Grenander is an academic researcher from Brown University. The author has contributed to research in topics: Stochastic process & Series (mathematics). The author has an h-index of 44 and has co-authored 127 publications receiving 28,349 citations. Previous affiliations of Ulf Grenander include Florida State University & University of Manchester.


Papers
Journal ArticleDOI
TL;DR: In this article, the theory of Toeplitz forms is developed by way of orthogonal polynomials and the trigonometric moment problem, with applications to analytic functions, probability theory, and statistics.
Abstract: Part I, Toeplitz Forms: Preliminaries; Orthogonal polynomials: algebraic properties; Orthogonal polynomials: limit properties; The trigonometric moment problem; Eigenvalues of Toeplitz forms; Generalizations and analogs of Toeplitz forms; Further generalizations; Certain matrices and integral equations of the Toeplitz type. Part II, Applications of Toeplitz Forms: Applications to analytic functions; Applications to probability theory; Applications to statistics. Appendix: notes and references. Bibliography. Index.

2,279 citations
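
A small numerical sketch (not from the book; all parameters illustrative) of the eigenvalue results of Part I: by the Szegő distribution theorem, the eigenvalues of the n×n Toeplitz matrix generated by a real, even symbol f behave, for large n, like the values of f on [0, 2π).

```python
# Numerical sketch of the Szego eigenvalue distribution theorem of Part I:
# eigenvalues of the n x n Toeplitz matrix T_n(f) generated by a real even
# symbol f behave, for large n, like the values of f on [0, 2*pi).
import numpy as np
from scipy.linalg import toeplitz

def toeplitz_from_symbol(f, n, m=4096):
    # Fourier coefficients c_k = (1/2pi) int f(t) e^{-ikt} dt, via the FFT.
    t = np.linspace(0.0, 2.0 * np.pi, m, endpoint=False)
    c = np.real(np.fft.fft(f(t)) / m)   # real even symbol -> real c_k
    return toeplitz(c[:n])              # symmetric: T[j, k] = c_{|j - k|}

f = lambda t: 2.0 + np.cos(t)           # symbol with range [1, 3]
eig = np.linalg.eigvalsh(toeplitz_from_symbol(f, 200))

# Szego: (1/n) sum g(eig_j) -> (1/2pi) int g(f(t)) dt; check with g(x) = x^2.
t = np.linspace(0.0, 2.0 * np.pi, 4096, endpoint=False)
print((eig ** 2).mean(), (f(t) ** 2).mean())   # both close to 4.5
print(eig.min(), eig.max())   # all eigenvalues lie between min f = 1, max f = 3
```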

Book
01 Jan 1984
TL;DR: In this paper, the theory of Toeplitz forms is developed by way of orthogonal polynomials and the trigonometric moment problem, with applications to analytic functions, probability theory, and statistics.
Abstract: Part I, Toeplitz Forms: Preliminaries; Orthogonal polynomials: algebraic properties; Orthogonal polynomials: limit properties; The trigonometric moment problem; Eigenvalues of Toeplitz forms; Generalizations and analogs of Toeplitz forms; Further generalizations; Certain matrices and integral equations of the Toeplitz type. Part II, Applications of Toeplitz Forms: Applications to analytic functions; Applications to probability theory; Applications to statistics. Appendix: notes and references. Bibliography. Index.

1,643 citations

Book
01 Jan 1957
TL;DR: This book develops the theory of stationary stochastic processes and the statistical analysis of their spectra: least-squares prediction, interpolation, and filtering when the spectrum is known; parametric models; estimation of the spectral density and the distribution of spectral estimates; linear regression in time series; and applications to noise, turbulence, and ocean waves.
Abstract: Stationary Stochastic Processes and Their Representations: 1.0 Introduction; 1.1 What is a stochastic process?; 1.2 Continuity in the mean; 1.3 Stochastic set functions of orthogonal increments; 1.4 Orthogonal representations of stochastic processes; 1.5 Stationary processes; 1.6 Representations of stationary processes; 1.7 Time and ensemble averages; 1.8 Vector processes; 1.9 Operations on stationary processes; 1.10 Harmonizable stochastic processes.
Statistical Questions when the Spectrum is Known (Least Squares Theory): 2.0 Introduction; 2.1 Preliminaries; 2.2 Prediction; 2.3 Interpolation; 2.4 Filtering of stationary processes; 2.5 Treatment of linear hypotheses with specified spectrum.
Statistical Analysis of Parametric Models: 3.0 Introduction; 3.1 Periodogram analysis; 3.2 The variate difference method; 3.3 Effect of smoothing of time series (Slutzky's theorem); 3.4 Serial correlation coefficients for normal white noise; 3.5 Approximate distributions of quadratic forms; 3.6 Testing autoregressive schemes and moving averages; 3.7 Estimation and the asymptotic distribution of the coefficients of an autoregressive scheme; 3.8 Discussion of the methods described in this chapter.
Estimation of the Spectrum: 4.0 Introduction; 4.1 A general class of estimates; 4.2 An optimum property of spectrograph estimates; 4.3 A remark on the bias of spectrograph estimates; 4.4 The asymptotic variance of spectrograph estimates; 4.5 Another class of estimates; 4.6 Special estimates of the spectral density; 4.7 The mean square error of estimates; 4.8 An example from statistical optics.
Applications: 5.0 Introduction; 5.1 Derivations of spectra of random noise; 5.2 Measuring noise spectra; 5.3 Turbulence; 5.4 Measuring turbulence spectra; 5.5 Basic ideas in a statistical theory of ocean waves; 5.6 Other applications.
Distribution of Spectral Estimates: 6.0 Introduction; 6.1 Preliminary remarks; 6.2 A heuristic derivation of a limit theorem; 6.3 Preliminary considerations; 6.4 Treatment of pure white noise; 6.5 The general theorem; 6.6 The normal case; 6.7 Remarks on the nonnormal case; 6.8 Spectral analysis with a regression present; 6.9 Alternative estimates of the spectral distribution function; 6.10 Alternative statistics and the corresponding limit theorems; 6.11 Confidence band for the spectral density; 6.12 Spectral analysis of some artificially generated time series.
Problems in Linear Estimation: 7.0 Preliminary discussion; 7.1 Estimating regression coefficients; 7.2 The regression spectrum; 7.3 Asymptotic expression for the covariance matrices; 7.4 Elements of the spectrum; 7.5 Polynomial and trigonometric regression; 7.6 More general trigonometric and polynomial regression; 7.7 Some other types of regression; 7.8 Detection of signals in noise; 7.9 Confidence intervals and tests.
Assorted Problems: 8.0 Introduction; 8.1 Prediction when the conjectured spectrum is not the true one; 8.2 Uniform convergence of the estimated spectral density to the true spectral density; 8.3 The asymptotic distribution of an integral of a spectrograph estimate; 8.4 The mean square error of prediction when the spectrum is estimated; 8.5 Other types of estimates of the spectrum; 8.6 The zeros and maxima of stationary stochastic processes; 8.7 Prefiltering of a time series; 8.8 Comments on tests of normality.
Problems. Appendix on complex variable theory. Bibliography. Index.

902 citations
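
The estimates of Chapter 4 weight sample autocovariances with a lag window and Fourier-transform them. Below is a minimal sketch of that recipe, assuming a Bartlett window and an AR(1) test series; the truncation point M and all other parameters are illustrative, not the book's.

```python
# Minimal lag-window spectral estimate in the spirit of Chapter 4: weight
# sample autocovariances with a Bartlett window, then Fourier-transform.
import numpy as np

rng = np.random.default_rng(0)
n, phi = 4096, 0.6
x = np.zeros(n)
for t in range(1, n):                       # AR(1): x_t = phi*x_{t-1} + e_t
    x[t] = phi * x[t - 1] + rng.standard_normal()
x -= x.mean()

M = 64                                      # lag-window truncation point
acov = np.array([x[: n - k] @ x[k:] / n for k in range(M + 1)])
w = 1.0 - np.arange(M + 1) / M              # Bartlett weights w_k = 1 - k/M

lam = np.linspace(0.0, np.pi, 256)
# f_hat(lam) = (1/2pi) * (c_0 + 2 * sum_{k>=1} w_k c_k cos(k*lam))
fhat = (acov[0] + 2.0 * (w[1:] * acov[1:])
        @ np.cos(np.outer(np.arange(1, M + 1), lam))) / (2 * np.pi)

# True AR(1) density: f(lam) = 1 / (2pi * (1 - 2*phi*cos(lam) + phi**2))
ftrue = 1.0 / (2 * np.pi * (1 - 2 * phi * np.cos(lam) + phi ** 2))
print(f"fhat(0)={fhat[0]:.3f}  true f(0)={ftrue[0]:.3f}")   # roughly agree
```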

Journal ArticleDOI
TL;DR: In this article, the authors formalize the Brown/Washington University model of anatomy following the global pattern theory introduced in [1, 2], in which anatomies are represented as deformable templates, collections of 0,1,2,3-dimensional manifolds.
Abstract: This paper studies mathematical methods in the emerging new discipline of Computational Anatomy. Herein we formalize the Brown/Washington University model of anatomy following the global pattern theory introduced in [1, 2], in which anatomies are represented as deformable templates, collections of 0, 1, 2, 3-dimensional manifolds. Typical structure is carried by the template, with the variabilities accommodated via the application of random transformations to the background manifolds. The anatomical model is a quadruple (Ω, H, I, P): the background space Ω = ∪_α M_α of 0, 1, 2, 3-dimensional manifolds; the set of diffeomorphic transformations on the background space, H : Ω ↔ Ω; the space of idealized medical imagery I; and P, the family of probability measures on H. The group of diffeomorphic transformations H is chosen to be rich enough that a large family of shapes may be generated with the topologies of the template maintained. For normal anatomy one deformable template is studied, with (Ω, H, I) corresponding to a homogeneous space [3], in that it can be completely generated from one of its elements, I = H I_temp, I_temp ∈ I. For disease, a family of templates ∪_α I_temp^α is introduced, of perhaps varying dimensional transformation classes. The complete anatomy is a collection of homogeneous spaces I_total = ∪_α (I^α, H^α). There are three principal components to computational anatomy studied herein. (1) Computation of large deformation maps: given any two elements I, I′ ∈ I in the same homogeneous anatomy (Ω, H, I), compute diffeomorphisms h from one anatomy to the other, I ↔ I′. This is the principal method by which anatomical …

696 citations
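
A toy sketch of the deformable-template idea: a template image I_temp is carried to a new element of its orbit by a smooth transformation of the background space. The code below substitutes a simple parametric warp for the paper's diffeomorphism-group machinery, so it illustrates only the orbit concept; all names and parameters are invented.

```python
# Toy deformable-template sketch: a disk template I_temp is mapped to a new
# image by a smooth transformation of the underlying coordinate space (a
# parametric warp here, standing in for the paper's diffeomorphism group).
import numpy as np
from scipy.ndimage import map_coordinates

n = 128
yy, xx = np.mgrid[0:n, 0:n].astype(float)
I_temp = (((xx - 64) ** 2 + (yy - 64) ** 2) < 30 ** 2).astype(float)

# Small, smooth displacement field, so the map stays one-to-one.
ux = 6.0 * np.sin(np.pi * yy / n)
uy = 4.0 * np.sin(2.0 * np.pi * xx / n)

# Pull-back: sample the template at approximately h^{-1}(x) = x - u(x).
I_def = map_coordinates(I_temp, [yy - uy, xx - ux], order=1)
print(I_temp.sum(), I_def.sum())    # the warped disk keeps roughly its area
```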


Cited by
Journal ArticleDOI
01 Jan 1998
TL;DR: In this article, convolutional neural networks are shown to outperform all other techniques on a standard handwritten digit recognition task, and a graph transformer network (GTN) is proposed that allows multi-module recognition systems to be trained globally with gradient-based methods.
Abstract: Multilayer neural networks trained with the back-propagation algorithm constitute the best example of a successful gradient-based learning technique. Given an appropriate network architecture, gradient-based learning algorithms can be used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters, with minimal preprocessing. This paper reviews various methods applied to handwritten character recognition and compares them on a standard handwritten digit recognition task. Convolutional neural networks, which are specifically designed to deal with the variability of 2D shapes, are shown to outperform all other techniques. Real-life document recognition systems are composed of multiple modules including field extraction, segmentation, recognition, and language modeling. A new learning paradigm, called graph transformer networks (GTN), allows such multimodule systems to be trained globally using gradient-based methods so as to minimize an overall performance measure. Two systems for online handwriting recognition are described. Experiments demonstrate the advantage of global training, and the flexibility of graph transformer networks. A graph transformer network for reading a bank cheque is also described. It uses convolutional neural network character recognizers combined with global training techniques to provide record accuracy on business and personal cheques. It is deployed commercially and reads several million cheques per day.

42,067 citations
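
For concreteness, here is a LeNet-style convolutional network sketched in PyTorch, a modern framework rather than the paper's original implementation; the layer sizes follow the classic LeNet-5 pattern, but treat the details as illustrative.

```python
# Minimal LeNet-style convolutional network in PyTorch -- a modern sketch of
# the architecture family the paper studies, not the original code.
import torch
import torch.nn as nn

class LeNetLike(nn.Module):
    def __init__(self, n_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5, padding=2),  # 28x28 -> 28x28
            nn.Tanh(),
            nn.AvgPool2d(2),                            # -> 14x14
            nn.Conv2d(6, 16, kernel_size=5),            # -> 10x10
            nn.Tanh(),
            nn.AvgPool2d(2),                            # -> 5x5
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 5 * 5, 120), nn.Tanh(),
            nn.Linear(120, 84), nn.Tanh(),
            nn.Linear(84, n_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

net = LeNetLike()
digits = torch.randn(8, 1, 28, 28)   # stand-in for a batch of MNIST digits
print(net(digits).shape)             # torch.Size([8, 10])
```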

Book
18 Nov 2016
TL;DR: Deep learning, as presented in this book, is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts; the book surveys its use in applications such as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames.
Abstract: Deep learning is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts. Because the computer gathers knowledge from experience, there is no need for a human computer operator to formally specify all the knowledge that the computer needs. The hierarchy of concepts allows the computer to learn complicated concepts by building them out of simpler ones; a graph of these hierarchies would be many layers deep. This book introduces a broad range of topics in deep learning. The text offers mathematical and conceptual background, covering relevant concepts in linear algebra, probability theory and information theory, numerical computation, and machine learning. It describes deep learning techniques used by practitioners in industry, including deep feedforward networks, regularization, optimization algorithms, convolutional networks, sequence modeling, and practical methodology; and it surveys such applications as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames. Finally, the book offers research perspectives, covering such theoretical topics as linear factor models, autoencoders, representation learning, structured probabilistic models, Monte Carlo methods, the partition function, approximate inference, and deep generative models. Deep Learning can be used by undergraduate or graduate students planning careers in either industry or research, and by software engineers who want to begin using deep learning in their products or platforms. A website offers supplementary material for both readers and instructors.

38,208 citations
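
The "hierarchy of concepts" the book describes is, concretely, a stack of learned layers. Below is a minimal deep feedforward network with one gradient-based update, sketched in PyTorch; the layer sizes and data are invented for illustration.

```python
# Tiny deep feedforward network plus one gradient-based update: each layer
# re-represents the output of the layer below -- the "hierarchy of concepts".
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(32, 64), nn.ReLU(),   # low-level features
    nn.Linear(64, 64), nn.ReLU(),   # combinations of features
    nn.Linear(64, 2),               # task-level output
)
opt = torch.optim.SGD(model.parameters(), lr=0.1)

x = torch.randn(16, 32)             # toy batch of inputs
y = torch.randint(0, 2, (16,))      # toy labels
loss = nn.functional.cross_entropy(model(x), y)
opt.zero_grad()
loss.backward()                     # gather knowledge from experience
opt.step()
print(float(loss))
```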

Book
01 Jan 1988
TL;DR: This book provides a clear and simple account of the key ideas and algorithms of reinforcement learning, ranging from the history of the field's intellectual foundations to the most recent developments and applications.
Abstract: Reinforcement learning, one of the most active research areas in artificial intelligence, is a computational approach to learning whereby an agent tries to maximize the total amount of reward it receives when interacting with a complex, uncertain environment. In Reinforcement Learning, Richard Sutton and Andrew Barto provide a clear and simple account of the key ideas and algorithms of reinforcement learning. Their discussion ranges from the history of the field's intellectual foundations to the most recent developments and applications. The only necessary mathematical background is familiarity with elementary concepts of probability. The book is divided into three parts. Part I defines the reinforcement learning problem in terms of Markov decision processes. Part II provides basic solution methods: dynamic programming, Monte Carlo methods, and temporal-difference learning. Part III presents a unified view of the solution methods and incorporates artificial neural networks, eligibility traces, and planning; the two final chapters present case studies and consider the future of reinforcement learning.

37,989 citations
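
As a concrete instance of the temporal-difference methods of Part II, here is a tabular Q-learning sketch on a toy five-state chain; the environment, rewards, and parameters are invented for illustration.

```python
# Tabular Q-learning (a temporal-difference method) on a toy 5-state chain:
# move left or right; reward 1 for reaching the right end, which terminates.
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 5, 2              # actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.9, 0.3

for _ in range(500):                    # episodes
    s = 0
    for _ in range(200):                # cap episode length
        a = int(rng.integers(n_actions)) if rng.random() < eps else int(Q[s].argmax())
        s2 = max(s - 1, 0) if a == 0 else s + 1
        r = 1.0 if s2 == n_states - 1 else 0.0
        # TD update: move Q(s, a) toward r + gamma * max_a' Q(s', a')
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2
        if s == n_states - 1:
            break

print(Q.round(2))                       # "right" comes to dominate in every state
```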

Book ChapterDOI
TL;DR: The analysis of censored failure times is considered in this paper, where the hazard function is taken to be a function of the explanatory variables and unknown regression coefficients multiplied by an arbitrary and unknown function of time.
Abstract: The analysis of censored failure times is considered. It is assumed that on each individual are available values of one or more explanatory variables. The hazard function (age-specific failure rate) is taken to be a function of the explanatory variables and unknown regression coefficients multiplied by an arbitrary and unknown function of time. A conditional likelihood is obtained, leading to inferences about the unknown regression coefficients. Some generalizations are outlined.

28,264 citations
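
The model sets hazard(t; x) = h0(t) · exp(βx), and the conditional (partial) likelihood eliminates the arbitrary baseline h0 by comparing each failing individual against everyone still at risk. A minimal sketch, with invented toy data and a crude grid search in place of a proper Newton fit:

```python
# Cox model sketch: hazard(t; x) = h0(t) * exp(beta * x). The baseline h0
# cancels from the partial likelihood, which compares each failing subject
# against everyone still at risk. Toy data; no tied failure times.
import numpy as np

times = np.array([2.0, 3.0, 5.0, 7.0, 11.0])   # observed times
event = np.array([1, 1, 0, 1, 1])              # 1 = failure, 0 = censored
x = np.array([0.5, -1.0, 0.0, 1.5, -0.5])      # one explanatory variable

def neg_log_partial_likelihood(beta):
    ll = 0.0
    for i in np.where(event == 1)[0]:
        risk_set = times >= times[i]           # subjects still at risk at t_i
        ll += beta * x[i] - np.log(np.sum(np.exp(beta * x[risk_set])))
    return -ll

# Crude 1-D maximization over a grid (a real fit would use Newton's method).
grid = np.linspace(-3, 3, 601)
beta_hat = grid[np.argmin([neg_log_partial_likelihood(b) for b in grid])]
print(f"beta_hat = {beta_hat:.2f}")
```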

Journal ArticleDOI
TL;DR: New results are derived on the minimum number of landmarks needed to obtain a solution, and algorithms are presented for computing these minimum-landmark solutions in closed form; these results provide the basis for an automatic system that can solve the Location Determination Problem under difficult viewing conditions.
Abstract: A new paradigm, Random Sample Consensus (RANSAC), for fitting a model to experimental data is introduced. RANSAC is capable of interpreting/smoothing data containing a significant percentage of gross errors, and is thus ideally suited for applications in automated image analysis where interpretation is based on the data provided by error-prone feature detectors. A major portion of this paper describes the application of RANSAC to the Location Determination Problem (LDP): given an image depicting a set of landmarks with known locations, determine that point in space from which the image was obtained. In response to a RANSAC requirement, new results are derived on the minimum number of landmarks needed to obtain a solution, and algorithms are presented for computing these minimum-landmark solutions in closed form. These results provide the basis for an automatic system that can solve the LDP under difficult viewing and analysis conditions.

23,396 citations
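
A minimal RANSAC sketch, applied to 2-D line fitting rather than the paper's Location Determination Problem: the hypothesize-and-verify loop (random minimal sample, consensus count, final least-squares fit on the consensus set) is the paradigm the abstract describes, while the thresholds and iteration counts below are invented.

```python
# Minimal RANSAC for 2-D line fitting (the paper's LDP application swapped
# for the simplest model). Thresholds and iteration counts are illustrative.
import numpy as np

rng = np.random.default_rng(1)
xs = rng.uniform(0, 10, 80)
pts = np.column_stack([xs, 2.0 * xs + 1.0 + rng.normal(0, 0.1, 80)])
pts = np.vstack([pts, rng.uniform(0, 20, (20, 2))])   # 20 gross errors

best_count, best_mask = 0, None
for _ in range(200):
    p, q = pts[rng.choice(len(pts), size=2, replace=False)]  # minimal sample
    d = q - p
    if np.hypot(*d) < 1e-9:
        continue
    normal = np.array([-d[1], d[0]]) / np.hypot(*d)   # unit normal of the line
    dist = np.abs((pts - p) @ normal)                 # point-to-line distance
    mask = dist < 0.3                                 # consensus set
    if mask.sum() > best_count:
        best_count, best_mask = int(mask.sum()), mask

# Final least-squares fit on the consensus set, as the paradigm prescribes.
a, b = np.polyfit(pts[best_mask, 0], pts[best_mask, 1], 1)
print(f"slope {a:.2f}, intercept {b:.2f} from {best_count} inliers")
```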