Author

Majid Mojirsheibani

Other affiliations: Carleton University
Bio: Majid Mojirsheibani is an academic researcher from California State University, Northridge. The author has contributed to research in topics: Estimator & Bayes classifier. The author has an h-index of 10 and has co-authored 47 publications receiving 244 citations. Previous affiliations of Majid Mojirsheibani include Carleton University.

Papers
Journal ArticleDOI
TL;DR: A method for combining different classifiers is proposed; the resulting combined classifier turns out to be strongly consistent and is shown to be, (strongly) asymptotically, at least as good as any one of the individual classifiers.
Abstract: I consider a method for combining different classifiers to develop more effective classification rules. The proposed combined classifier, which turns out to be strongly consistent, is quite simple to use in real applications. It is also shown that this combined classifier is, (strongly) asymptotically, at least as good as any one of the individual classifiers. In addition, if one of the individual classifiers is already Bayes optimal (asymptotically), then so is the combined classifier.
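As a rough illustration of this combining idea (a sketch only, not the article's exact estimator), one can partition points by the vector of labels the individual classifiers assign and take a majority vote of training labels within each cell. The sketch below assumes scikit-learn-style fitted classifiers with a predict method; all names are made up.

```python
import numpy as np
from collections import Counter, defaultdict

def combined_predict(classifiers, X_train, y_train, X_new):
    """Combine fitted classifiers by majority vote within the cells
    defined by the vector of individual predictions (a sketch of the
    combining idea only; see the article for the exact rule)."""
    # Cell of a point = tuple of labels assigned by the individual classifiers.
    train_cells = list(zip(*(clf.predict(X_train) for clf in classifiers)))
    votes = defaultdict(Counter)
    for cell, label in zip(train_cells, y_train):
        votes[cell][label] += 1
    new_cells = list(zip(*(clf.predict(X_new) for clf in classifiers)))
    preds = []
    for cell in new_cells:
        if cell in votes:
            # Most frequent training label among points falling in the same cell.
            preds.append(votes[cell].most_common(1)[0][0])
        else:
            # Unseen cell: fall back to a plain majority vote of the classifiers.
            preds.append(Counter(cell).most_common(1)[0][0])
    return np.array(preds)
```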

57 citations

Journal ArticleDOI
TL;DR: A data-based method for constructing combined classifiers is proposed and the resulting classifiers, which are linear in nature, turn out to be consistent.

18 citations

Journal ArticleDOI
TL;DR: In this article, a BCa-type bootstrap procedure for setting approximate prediction intervals for an efficient estimator θm of a scalar parameter θ, based on a future sample of size m, is investigated.
Abstract: We investigate the construction of a BCa-type bootstrap procedure for setting approximate prediction intervals for an efficient estimator θm of a scalar parameter θ, based on a future sample of size m. The results are also extended to nonparametric situations, where they can be used to form bootstrap prediction intervals for a large class of statistics. These intervals are transformation-respecting and range-preserving. The asymptotic performance of our procedure is assessed by allowing both the past and future sample sizes to tend to infinity. The resulting intervals are then shown to be second-order correct and second-order accurate. These second-order properties are established in terms of min(m, n), and not the past sample size n alone.
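For orientation, here is a plain percentile-bootstrap sketch of a prediction interval for a future sample mean; the BCa bias and acceleration corrections that deliver the second-order properties are developed in the paper, and the sample sizes and names below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_prediction_interval(past, m, alpha=0.05, B=4000):
    """Percentile-bootstrap prediction interval for the mean of a
    future sample of size m, given a past sample (a plain sketch;
    the paper's BCa version adds bias and acceleration corrections)."""
    n = len(past)
    diffs = np.empty(B)
    for b in range(B):
        past_star = rng.choice(past, size=n, replace=True)    # resampled past
        future_star = rng.choice(past, size=m, replace=True)  # simulated future sample
        diffs[b] = future_star.mean() - past_star.mean()
    lo, hi = np.quantile(diffs, [alpha / 2, 1 - alpha / 2])
    theta_n = past.mean()
    return theta_n + lo, theta_n + hi

past = rng.normal(size=50)   # toy past sample, n = 50
print(bootstrap_prediction_interval(past, m=20))
```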

15 citations

Journal ArticleDOI
TL;DR: A data-based procedure for combining a number of individual classifiers in order to construct more effective classification rules is proposed, and the resulting combined classifier turns out to be almost surely superior to each of the individual classifiers.

14 citations

Journal ArticleDOI
TL;DR: This work derives representations for the best (Bayes) classifier when some of the covariates can be missing; this is done without imposing any assumptions on the underlying missing probability mechanism.
Abstract: Some results related to statistical classification in the presence of missing covariates are presented. We derive representations for the best (Bayes) classifier when some of the covariates can be missing; this is done without imposing any assumptions on the underlying missing probability mechanism. Furthermore, without assuming any missingness-at-random type of conditions, we also construct Bayes consistent classifiers that do not require any imputation-based techniques. Both parametric and non-parametric situations are considered, but the emphasis is on the latter. In addition to simple missingness patterns, we also consider the full Swiss cheese model, where the missing covariates can be anywhere. Both the mechanics and the theoretical validity of our results are discussed.
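One simple imputation-free strategy in this spirit (an illustrative sketch, not the paper's Bayes-consistent construction) is to fit a separate rule for each missingness pattern, using only the covariates observed under that pattern:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def fit_by_pattern(X, y, k=5):
    """Fit one k-NN rule per missingness pattern (NaN = missing),
    each trained only on the covariates observed under that pattern."""
    observed = ~np.isnan(X)                      # True where a covariate is observed
    models = {}
    for pat in {tuple(row) for row in observed}:
        rows = np.all(observed == pat, axis=1)   # training points with this pattern
        cols = np.array(pat)
        if cols.any() and rows.sum() >= k:
            models[pat] = KNeighborsClassifier(k).fit(X[rows][:, cols], y[rows])
    return models

def predict_by_pattern(models, x):
    """Classify one point with the rule fitted for its own pattern."""
    pat = tuple(~np.isnan(x))
    if pat in models:
        return models[pat].predict(x[np.array(pat)].reshape(1, -1))[0]
    return None                                  # pattern never seen in training
```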

14 citations


Cited by
More filters

Book
01 Jan 2005
TL;DR: This book develops a general theory of statistical optimization for geometric computation, including iterative estimation schemes, effective gradient approximation, optimal filtering derived by reduction from the Kalman filter, renormalization, and 3-D reconstruction of points and lines.
Abstract: Contents:
- Introduction: the aims of this book, the features of this book, organization and background, the analytical mind: strength and weakness.
- Fundamentals of linear algebra: vector and matrix calculus, eigenvalue problem, linear systems and optimization, matrix and tensor algebra.
- Probabilities and statistical estimation: probability distributions, manifolds and local distributions, Gaussian distributions and chi-square distributions, statistical estimation for Gaussian models, general statistical estimation, maximum likelihood estimation, Akaike information criterion.
- Representation of geometric objects: image points and image lines, space points and space lines, space planes, conics, space conics and quadrics, coordinate transformation and projection.
- Geometric correction: general theory, correction of image points and image lines, correction of space points and space lines, correction of space planes, orthogonality correction, conic incidence correction.
- 3-D computation by stereo vision: epipolar constraint, optimal correction of correspondence, 3-D reconstruction of points, 3-D reconstruction of lines, optimal back projection onto a space plane, scenes infinitely far away, camera calibration errors.
- Parametric fitting: general theory, optimal fitting for image points, optimal fitting for image lines, optimal fitting for space points, optimal fitting for space lines, optimal fitting for space planes.
- Optimal filter: general theory, iterative estimation scheme, effective gradient approximation, reduction from the Kalman filter, estimation from linear hypotheses.
- Renormalization: eigenvector fit, unbiased eigenvector, generalized eigenvalue fit, renormalization, linearization, second-order renormalization.
- Applications of geometric estimation: image line fitting, conic fitting, space plane fitting by range sensing, space plane fitting by stereo vision.
- 3-D motion analysis: general theory, linearization and renormalization, optimal correction and decomposition, reliability of 3-D reconstruction, critical surfaces, 3-D reconstruction from planar surface motion, camera rotation and information.
- 3-D interpretation of optical flow: optical flow detection, theoretical basis of 3-D interpretation, optimal estimation of motion parameters.
(Part contents.)

298 citations

Journal ArticleDOI
TL;DR: A three-way comparison of prediction accuracy involving nonlinear regression, NNs and CART models using a continuous dependent variable and a set of dichotomous and categorical predictor variables is performed.
Abstract: Numerous articles comparing the performance of statistical and Neural Network (NN) models are available in the literature; however, very few involve Classification and Regression Tree (CART) models in their comparative studies. We perform a three-way comparison of prediction accuracy involving nonlinear regression, NN and CART models using a continuous dependent variable and a set of dichotomous and categorical predictor variables. A large dataset on smokers is used to run these models. Different prediction accuracy measures are used to compare the performance of these models. The prediction outcomes are discussed, and the findings are compared with the results of similar studies.
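A compact way to reproduce the spirit of such a three-way comparison on toy data (ordinary linear regression stands in for the article's nonlinear regression, and the smokers dataset is not reproduced here; all settings below are illustrative):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(1)
# Toy stand-in: dichotomous predictors, continuous response with an interaction.
X = rng.integers(0, 2, size=(1000, 6)).astype(float)
y = 3 * X[:, 0] - 2 * X[:, 1] * X[:, 2] + rng.normal(scale=0.5, size=1000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
models = {
    "regression": LinearRegression(),
    "neural net": MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
    "CART": DecisionTreeRegressor(max_depth=5, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    rmse = mean_squared_error(y_te, model.predict(X_te)) ** 0.5
    print(f"{name}: test RMSE = {rmse:.3f}")
```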

283 citations

Journal ArticleDOI
TL;DR: It is shown that p-values exhibit surprisingly large variability in typical data situations, and that the use of *, **, and *** to denote levels 0.05, 0.01, and 0.001 of statistical significance in subject-matter journals is about the right level of precision for reporting p-values when judged by widely accepted rules for rounding statistical estimates.
Abstract: P-values are useful statistical measures of evidence against a null hypothesis. In contrast to other statistical estimates, however, their sample-to-sample variability is usually not considered or estimated, and therefore not fully appreciated. Via a systematic study of log-scale p-value standard errors, bootstrap prediction bounds, and reproducibility probabilities for future replicate p-values, we show that p-values exhibit surprisingly large variability in typical data situations. In addition to providing context to discussions about the failure of statistical results to replicate, our findings shed light on the relative value of exact p-values vis-a-vis approximate p-values, and indicate that the use of *, **, and *** to denote levels 0.05, 0.01, and 0.001 of statistical significance in subject-matter journals is about the right level of precision for reporting p-values when judged by widely accepted rules for rounding statistical estimates.
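The basic phenomenon is easy to see by simulation: replicating one modest experiment many times under a fixed alternative already spreads the p-value over orders of magnitude (toy parameters below, not the paper's study design):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n, effect, reps = 30, 0.5, 2000

# Replicate the same two-sample experiment many times under a fixed alternative.
log_p = np.empty(reps)
for r in range(reps):
    x = rng.normal(0.0, 1.0, n)
    y = rng.normal(effect, 1.0, n)
    log_p[r] = np.log10(stats.ttest_ind(x, y).pvalue)

print(f"median p = {10 ** np.median(log_p):.4f}")
print(f"middle 90% of p-values: "
      f"[{10 ** np.quantile(log_p, 0.05):.4g}, {10 ** np.quantile(log_p, 0.95):.4g}]")
```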

161 citations