Author

Alan Julian Izenman

Bio: Alan Julian Izenman is an academic researcher from Temple University. The author has contributed to research on topics including linear discriminant analysis and estimation. The author has an h-index of 18 and has co-authored 53 publications receiving 3,974 citations. Previous affiliations of Alan Julian Izenman include Tel Aviv University and the University of Minnesota.


Papers
BookDOI
01 Jan 2008
TL;DR: A list of errata for Chapter 3, correcting equations, notation, and cross-references on pages 46–67.
Abstract: Errata for CHAPTER 3.
Page 46, line –15: (K × J)-matrix.
Page 47, Equation (3.5): −EF should be −EF.
Page 49, line –6: R should be <.
Page 53, line –7: "see Exercise 3.4" is not relevant here.
Page 53, Equation (3.43): last term on rhs should be \(\partial y_J / \partial x_K\).
Page 60, Equation (3.98): σ should be σ.
Page 61, line 8: (3.106) should be (3.105).
Pages 61–62, Equations (3.109), (3.110), and (3.111): the identity matrices have different dimensions: in the top row of each matrix the identity matrix has dimension r, and in the bottom row it has dimension s.
Page 62, line 1: "r-vector" should be "(r + s)-vector."
Page 62, Equation (3.111): \(\Sigma_{XY}\) should be \(\Sigma_{YX}\).
Page 64, Equation (3.127): |Σ| should be |Σ|.
Page 62, Equation (3.133): I(2.2) should be I(2,2).
Page 65, line 8 (2nd line of property 2): \(W_r\) should be \(W_p\).
Page 65, property 4: restate as follows. Let \(X = (X_1, \cdots, X_n)'\), where \(X_i \sim N_r(0, \Sigma)\), \(i = 1, 2, \ldots, n\), are independently and identically distributed (iid). Let \(A\) be a symmetric \((n \times n)\)-matrix with \(\nu = \mathrm{rank}(A)\), and let \(a\) be a fixed \(r\)-vector. Let \(y = Xa\). Then \(X'AX \sim W_r(\nu, \Sigma)\) iff \(y'Ay \sim \sigma_a^2 \chi^2_\nu\), where \(\sigma_a^2 = a'\Sigma a\).
Page 66, Equation (3.143): last term on rhs, +n should be −n2.
Page 67, line 3: should read \(\mathrm{tr}(TT') = \sum_{i=1}^{r} t_{ii}^2 + \sum_{i>j} t_{ij}^2\).
Page 67, line –6: should read "idempotent with rank n − 1."
Page 67, line –3: \(bX\) should be \(X'b\).
Page 67, Equation (3.148): n should be n − 1.
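As a quick numerical sanity check of the restated property 4, here is a Monte Carlo sketch in Python/NumPy. It is illustrative only: the choices of Sigma, a, and the centering matrix A below are mine, not from the errata.

# Monte Carlo check of the restated property 4: with y = X a and A the
# symmetric, idempotent centering matrix of rank nu = n - 1, the quadratic
# form y'Ay should behave like sigma_a^2 * chi^2_nu; here we check its mean,
# nu * sigma_a^2. Sigma, a, and A are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
n, r = 50, 3
Sigma = np.array([[2.0, 0.5, 0.0],
                  [0.5, 1.0, 0.3],
                  [0.0, 0.3, 1.5]])
a = np.array([1.0, -1.0, 2.0])
sigma2_a = a @ Sigma @ a                    # sigma_a^2 = a' Sigma a

A = np.eye(n) - np.ones((n, n)) / n         # centering matrix, rank n - 1
nu = n - 1

reps = 5000
vals = np.empty(reps)
for i in range(reps):
    X = rng.multivariate_normal(np.zeros(r), Sigma, size=n)  # rows X_i' iid
    y = X @ a
    vals[i] = y @ A @ y                     # y'Ay

print(vals.mean() / sigma2_a)               # should be close to nu = 49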

747 citations

Book
28 Aug 2008
TL;DR: Techniques covered range from traditional multivariate methods, such as multiple regression, principal components, canonical variates, linear discriminant analysis, factor analysis, clustering, multidimensional scaling, and correspondence analysis, to the newer methods of density estimation, projection pursuit, neural networks, and classification and regression trees.
Abstract: Remarkable advances in computation and data storage and the ready availability of huge data sets have been the keys to the growth of the new disciplines of data mining and machine learning, while the enormous success of the Human Genome Project has opened up the field of bioinformatics. These exciting developments, which led to the introduction of many innovative statistical tools for high-dimensional data analysis, are described here in detail. The author takes a broad perspective; for the first time in a book on multivariate analysis, nonlinear methods are discussed in detail alongside linear methods. Techniques covered range from traditional multivariate methods, such as multiple regression, principal components, canonical variates, linear discriminant analysis, factor analysis, clustering, multidimensional scaling, and correspondence analysis, to the newer methods of density estimation, projection pursuit, neural networks, multivariate reduced-rank regression, nonlinear manifold learning, bagging, boosting, random forests, independent component analysis, support vector machines, and classification and regression trees. Another unique feature of this book is the discussion of database management systems. This book is appropriate for advanced undergraduate students, graduate students, and researchers in statistics, computer science, artificial intelligence, psychology, cognitive sciences, business, medicine, bioinformatics, and engineering. Familiarity with multivariable calculus, linear algebra, and probability and statistics is required. The book presents a carefully integrated mixture of theory and applications, and of classical and modern multivariate statistical techniques, including Bayesian methods. There are over 60 interesting data sets used as examples in the book, over 200 exercises, and many color illustrations and photographs.
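As a small illustration of one of the classical techniques listed above, principal components, here is a minimal sketch in Python/NumPy; it is illustrative only and not code from the book.

# Minimal principal-components sketch: project centered data onto the
# top-k eigenvectors of the sample covariance matrix.
import numpy as np

def pca(X, k):
    """Return the top-k principal component scores and loadings of X (n x p)."""
    Xc = X - X.mean(axis=0)                  # center each variable
    C = np.cov(Xc, rowvar=False)             # sample covariance (p x p)
    eigvals, eigvecs = np.linalg.eigh(C)     # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1][:k]    # indices of the top-k components
    W = eigvecs[:, order]                    # loadings (p x k)
    return Xc @ W, W

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5))
scores, loadings = pca(X, 2)
print(scores.shape)  # (100, 2)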

698 citations

Journal ArticleDOI
TL;DR: This article discusses the problem of estimating a regression coefficient matrix of known (reduced) rank in the multivariate linear model when both sets of variates are jointly stochastic.
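A simplified sketch of the reduced-rank idea follows (Python/NumPy): fit ordinary least squares, then truncate the fit to the target rank via an SVD. This identity-weighted variant is illustrative only, not the paper's exact estimator for jointly stochastic variates.

# Simplified reduced-rank regression sketch: fit full-rank least squares,
# then truncate the fitted coefficient matrix to rank t via an SVD of the
# fitted values (identity-weighted variant).
import numpy as np

def reduced_rank_regression(X, Y, t):
    """Rank-t coefficient matrix C (p x q) approximately minimizing ||Y - XC||_F."""
    C_ols, *_ = np.linalg.lstsq(X, Y, rcond=None)     # full-rank LS solution
    U, s, Vt = np.linalg.svd(X @ C_ols, full_matrices=False)
    P = Vt[:t].T @ Vt[:t]       # projector onto top-t right singular vectors
    return C_ols @ P            # rank-t coefficient matrix

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 6))
B = rng.normal(size=(6, 1)) @ rng.normal(size=(1, 4))  # true rank-1 coefficients
Y = X @ B + 0.1 * rng.normal(size=(200, 4))
C = reduced_rank_regression(X, Y, 1)
print(np.linalg.matrix_rank(C))  # 1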

548 citations

Journal ArticleDOI
TL;DR: This paper reviews recent developments in nonparametric density estimation, including topics omitted from earlier review articles and books on the subject, and describes current research on the early methods such as the histogram, kernel estimators, and orthogonal series estimators.
Abstract: Advances in computation and the fast and cheap computational facilities now available to statisticians have had a significant impact upon statistical research, and especially the development of nonparametric data analysis procedures. In particular, theoretical and applied research on nonparametric density estimation has had a noticeable influence on related topics, such as nonparametric regression, nonparametric discrimination, and nonparametric pattern recognition. This article reviews recent developments in nonparametric density estimation and includes topics that have been omitted from review articles and books on the subject. The early density estimation methods, such as the histogram, kernel estimators, and orthogonal series estimators are still very popular, and recent research on them is described. Different types of restricted maximum likelihood density estimators, including order-restricted estimators, maximum penalized likelihood estimators, and sieve estimators, are discussed, where restrictions are imposed upon the class of densities or on the form of the likelihood function. Nonparametric density estimators that are data-adaptive and lead to locally smoothed estimators are also discussed; these include variable partition histograms, estimators based on statistically equivalent blocks, nearest-neighbor estimators, variable kernel estimators, and adaptive kernel estimators. For the multivariate case, extensions of methods of univariate density estimation are usually straightforward but can be computationally expensive. A method of multivariate density estimation that did not spring from a univariate generalization is described, namely, projection pursuit density estimation, in which both dimensionality reduction and density estimation can be pursued at the same time. Finally, some areas of related research are mentioned, such as nonparametric estimation of functionals of a density, robust parametric estimation, semiparametric models, and density estimation for censored and incomplete data, directional and spherical data, and density estimation for dependent sequences of observations.
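As a minimal illustration of the kernel estimators discussed above, here is a one-dimensional Gaussian kernel density estimator sketch in Python/NumPy; it is illustrative only (the bandwidth and data are arbitrary choices), not code from the article.

# Minimal Gaussian kernel density estimator:
#   f_hat(x) = (1 / (n h)) * sum_i K((x - X_i) / h), with K the standard
#   normal density and h the bandwidth.
import numpy as np

def kde_gaussian(x, data, h):
    """Evaluate a Gaussian KDE with bandwidth h at the points in x."""
    u = (x[:, None] - data[None, :]) / h             # (len(x), n) scaled gaps
    K = np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)     # Gaussian kernel values
    return K.mean(axis=1) / h

rng = np.random.default_rng(3)
data = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 0.5, 200)])
grid = np.linspace(-6, 6, 241)
density = kde_gaussian(grid, data, h=0.4)
print(np.trapz(density, grid))  # should be close to 1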

520 citations

Book ChapterDOI
01 Jan 2013
TL;DR: A learning set of multivariate observations is given, each observation known to come from one of K predefined classes; the classes may be species of plants, levels of creditworthiness of customers, presence or absence of a specific medical condition, different types of tumors, views on Internet censorship, or whether an e-mail message is spam or non-spam.
Abstract: Suppose we are given a learning set \(\mathcal{L}\) of multivariate observations (i.e., input values \(\mathfrak{R}^r\)), and suppose each observation is known to have come from one of K predefined classes having similar characteristics. These classes may be identified, for example, as species of plants, levels of credit worthiness of customers, presence or absence of a specific medical condition, different types of tumors, views on Internet censorship, or whether an e-mail message is spam or non-spam.
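A minimal linear discriminant sketch of this K-class setup follows (Python/NumPy; illustrative only, not code from the chapter): each observation is assigned to the class with the largest discriminant score, computed from the class means, a pooled within-class covariance, and the class priors.

# Minimal linear discriminant classifier: assign x to the class k maximizing
#   delta_k(x) = x' W^{-1} m_k - (1/2) m_k' W^{-1} m_k + log pi_k,
# where m_k is the class mean and W the pooled within-class covariance.
import numpy as np

def lda_fit(X, y):
    classes = np.unique(y)
    n, r = X.shape
    means = np.stack([X[y == k].mean(axis=0) for k in classes])
    W = sum((X[y == k] - means[i]).T @ (X[y == k] - means[i])
            for i, k in enumerate(classes)) / (n - len(classes))
    priors = np.array([(y == k).mean() for k in classes])
    return classes, means, np.linalg.inv(W), priors

def lda_predict(x, classes, means, W_inv, priors):
    scores = [x @ W_inv @ m - 0.5 * (m @ W_inv @ m) + np.log(p)
              for m, p in zip(means, priors)]
    return classes[int(np.argmax(scores))]

rng = np.random.default_rng(4)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
model = lda_fit(X, y)
print(lda_predict(np.array([2.5, 2.5]), *model))  # likely 1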

349 citations


Cited by
Journal ArticleDOI
TL;DR: It is demonstrated that the use of local optimisation methods together with the standard multi-resolution approach is not sufficient to reliably find the global minimum, so a global optimisation method is proposed that is specifically tailored to this form of registration.

6,413 citations

Journal ArticleDOI

6,278 citations

Journal ArticleDOI
01 May 1981
TL;DR: This work discusses detecting influential observations and outliers, detecting and assessing collinearity, and related applications and remedies.
Abstract: 1. Introduction and Overview. 2. Detecting Influential Observations and Outliers. 3. Detecting and Assessing Collinearity. 4. Applications and Remedies. 5. Research Issues and Directions for Extensions. Bibliography. Author Index. Subject Index.

4,948 citations