Author

Nedret Billor

Other affiliations: University of Sheffield
Bio: Nedret Billor is an academic researcher from Auburn University. The author has contributed to research on topics including outliers and robust statistics. The author has an h-index of 16 and has co-authored 41 publications receiving 1,096 citations. Previous affiliations of Nedret Billor include the University of Sheffield.

Papers
Journal Article
TL;DR: This paper proposes a new general approach, based on the methods of Hadi (1992a, 1994) and Hadi and Simonoff (1993), that can be computed quickly, often requiring fewer than five evaluations of the model being fit to the data, regardless of the sample size.

506 citations

Journal Article
TL;DR: In this article, the authors examined decomposition rates and chemical composition of pure loblolly pine (Pinus taeda L.) and mixed pine-deciduous litter, which contained either 100% loblolly pine needles or 80% pine needles and 20% leaves of one of five deciduous species.

121 citations

Journal Article
TL;DR: In this paper, a new measure of local influence is proposed that distinguishes perturbations of the data from perturbations of the model, resolves a theoretical difficulty in Cook's definition for the latter case, and has the incidental benefit of being simpler to compute.
Abstract: The concept of local influence was introduced by Cook (1986). Closer study of the idea of perturbations suggests that it is important to distinguish between those of the data and those of the model, and that in the latter case Cook's definition has a theoretical difficulty. Here a new measure is proposed, which has the incidental benefit of being simpler to compute.

56 citations
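For context, the quantity being refined above is Cook's (1986) local influence measure. As usually stated in the literature (recalled here for orientation, not taken from the abstract), it is the normal curvature of the likelihood displacement under a perturbation $\omega$:

$$LD(\omega) = 2\,\{\ell(\hat{\theta}) - \ell(\hat{\theta}_{\omega})\}, \qquad C_{l} = 2\,\bigl|\, l^{\top} \Delta^{\top} \ddot{L}^{-1} \Delta\, l \,\bigr|,$$

where $\ddot{L}$ is the Hessian of the log-likelihood at the unperturbed estimate $\hat{\theta}$, $\Delta$ has entries $\partial^{2}\ell(\theta \mid \omega)/\partial\theta_{i}\,\partial\omega_{j}$ evaluated at $(\hat{\theta}, \omega_{0})$, and $l$ is a unit-length perturbation direction. The abstract's distinction concerns whether $\omega$ perturbs the data or the model; in the latter case, the authors argue, this construction has a theoretical difficulty, motivating their simpler alternative.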

Journal Article
TL;DR: A robust functional principal component analysis is proposed to find the linear combinations of the original variables that contain most of the information, even when outliers are present, and to flag functional outliers.
Abstract: Functional principal component analysis is the preliminary step to represent the data in a lower-dimensional space and to capture the main modes of variability of the data by means of a small number of components which are linear combinations of the original variables. The sensitivity of the variance and covariance functions to irregular observations makes this method vulnerable to outliers, so it may fail to capture the variation of the regular observations. In this study, we propose a robust functional principal component analysis to find the linear combinations of the original variables that contain most of the information, even if there are outliers, and to flag functional outliers. We demonstrate the performance of the proposed method in an extensive simulation study and on two datasets from chemometrics and the environmental sciences.

54 citations
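The abstract above does not spell out the estimator, so the following is only a minimal sketch of one common way to robustify functional PCA: discretize each curve on a common grid, replace the outlier-sensitive sample covariance with a robust scatter estimate (here scikit-learn's Minimum Covariance Determinant, an assumption rather than necessarily the paper's choice), take eigenfunctions of that robust covariance, and flag curves with implausibly large robust distances.

```python
import numpy as np
from scipy.stats import chi2
from sklearn.covariance import MinCovDet

def robust_fpca(curves, n_components=2, alpha=0.999):
    """Sketch of a robust functional PCA for curves sampled on a common grid.

    curves : array, shape (n_curves, n_grid_points)
    Returns leading eigenfunctions, robust PC scores, and a per-curve
    outlier flag based on robust Mahalanobis distance.
    """
    # Robust centre and scatter of the discretized curves (MCD estimator);
    # this replaces the outlier-sensitive sample covariance function.
    mcd = MinCovDet(random_state=0).fit(curves)

    # Eigendecomposition of the robust covariance gives the "eigenfunctions".
    eigvals, eigvecs = np.linalg.eigh(mcd.covariance_)
    order = np.argsort(eigvals)[::-1][:n_components]
    eigenfunctions = eigvecs[:, order]                   # (n_grid, n_comp)

    # Robust principal component scores of each curve.
    scores = (curves - mcd.location_) @ eigenfunctions   # (n_curves, n_comp)

    # Flag curves whose robust squared Mahalanobis distance is implausibly
    # large under a chi-square reference distribution (one simple rule).
    dist2 = mcd.mahalanobis(curves)
    flags = dist2 > chi2.ppf(alpha, df=curves.shape[1])
    return eigenfunctions, scores, flags

# Toy usage: smooth sine curves plus a few vertically shifted outlier curves.
grid = np.linspace(0.0, 1.0, 20)
rng = np.random.default_rng(0)
clean = np.sin(2 * np.pi * grid) + 0.1 * rng.standard_normal((195, 20))
shifted = np.sin(2 * np.pi * grid) + 2.0 + 0.1 * rng.standard_normal((5, 20))
eigf, scores, flags = robust_fpca(np.vstack([clean, shifted]))
print("flagged curve indices:", np.where(flags)[0])
```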

Journal Article
TL;DR: In this article, the authors compared 15 classifications, derived from visually interpreted observations, Landsat Enhanced Thematic Mapper Plus imagery, PLR, and traditional maximum likelihood classification algorithms, using independent validation datasets, estimates of kappa and error, and a non-parametric analysis of variance.
Abstract: Discriminant analysis, referred to as maximum likelihood classification within popular remote sensing software packages, is a common supervised technique used by analysts. Polytomous logistic regression (PLR), also referred to as multinomial logistic regression, is an alternative classification approach that is less restrictive, more flexible, and easy to interpret. To assess the utility of PLR in image classification, we compared the results of 15 classifications using independent validation datasets, estimates of kappa and error, and a non-parametric analysis of variance; the classifications were derived from visually interpreted observations, Landsat Enhanced Thematic Mapper Plus imagery, PLR, and traditional maximum likelihood classification algorithms.

51 citations
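As a small illustration of the two classifiers being compared above (not the paper's actual Landsat workflow or data), the sketch below fits a multinomial logistic regression and a Gaussian maximum likelihood classifier (here quadratic discriminant analysis) to the same synthetic multi-band pixels and compares their kappa scores; the data and model settings are assumptions made purely for illustration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import cohen_kappa_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for per-pixel band values with 4 land-cover classes.
X, y = make_classification(n_samples=2000, n_features=6, n_informative=4,
                           n_classes=4, n_clusters_per_class=1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Polytomous (multinomial) logistic regression.
plr = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# "Maximum likelihood classification" in the remote sensing sense: a Gaussian
# class-conditional model, i.e. quadratic discriminant analysis.
mlc = QuadraticDiscriminantAnalysis().fit(X_train, y_train)

for name, model in [("PLR", plr), ("MLC", mlc)]:
    kappa = cohen_kappa_score(y_test, model.predict(X_test))
    print(f"{name}: kappa on held-out validation pixels = {kappa:.3f}")
```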


Cited by
Journal Article
01 May 1981
TL;DR: This work covers detecting influential observations and outliers, detecting and assessing collinearity, and applications and remedies.
Abstract: 1. Introduction and Overview. 2. Detecting Influential Observations and Outliers. 3. Detecting and Assessing Collinearity. 4. Applications and Remedies. 5. Research Issues and Directions for Extensions. Bibliography. Author Index. Subject Index.

4,948 citations

Proceedings Article
21 Aug 2005
TL;DR: A novel feature bagging approach for detecting outliers in very large, high-dimensional, and noisy databases is proposed, which combines results from multiple outlier detection algorithms that are applied using different sets of features.
Abstract: Outlier detection has recently become an important problem in many industrial and financial applications. In this paper, a novel feature bagging approach for detecting outliers in very large, high-dimensional, and noisy databases is proposed. It combines results from multiple outlier detection algorithms that are applied using different sets of features. Every outlier detection algorithm uses a small subset of features that are randomly selected from the original feature set. As a result, each outlier detector identifies different outliers, and thus assigns to all data records outlier scores that correspond to their probability of being outliers. The outlier scores computed by the individual outlier detection algorithms are then combined in order to find better-quality outliers. Experiments performed on several synthetic and real-life data sets show that the proposed methods for combining outputs from multiple outlier detection algorithms provide non-trivial improvements over the base algorithm.

622 citations
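The feature bagging idea described above is straightforward to sketch: run a base detector on several random feature subsets and combine the per-record scores. The snippet below is a minimal illustration using scikit-learn's LocalOutlierFactor as the base detector and simple score averaging as the combination rule; the base detector, subset sizes, and combination rule are assumptions, not necessarily the choices made in the paper.

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

def feature_bagging_scores(X, n_rounds=10, rng=None):
    """Combine LOF outlier scores computed on random feature subsets."""
    rng = np.random.default_rng(rng)
    n_samples, n_features = X.shape
    combined = np.zeros(n_samples)
    for _ in range(n_rounds):
        # Random subset of roughly half to all-but-one of the features
        # (one simple choice of subset size).
        size = rng.integers(max(1, n_features // 2), n_features)
        subset = rng.choice(n_features, size=size, replace=False)
        lof = LocalOutlierFactor(n_neighbors=20)
        lof.fit(X[:, subset])
        # Higher score = more outlying (negate sklearn's sign convention).
        combined += -lof.negative_outlier_factor_
    return combined / n_rounds  # average as the combination rule

# Toy usage: a dense cluster plus a few scattered outliers.
rng = np.random.default_rng(0)
inliers = rng.normal(size=(200, 8))
outliers = rng.normal(loc=6.0, size=(5, 8))
X = np.vstack([inliers, outliers])
scores = feature_bagging_scores(X, rng=0)
print("top-5 most outlying records:", np.argsort(scores)[-5:])
```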

Journal Article
TL;DR: The 2022 guideline provides patient-centric recommendations for clinicians to prevent, diagnose, and manage patients with heart failure, with the intent to improve quality of care and align with patients' interests.
Abstract: Aim: The “2022 AHA/ACC/HFSA Guideline for the Management of Heart Failure” replaces the “2013 ACCF/AHA Guideline for the Management of Heart Failure” and the “2017 ACC/AHA/HFSA Focused Update of the 2013 ACCF/AHA Guideline for the Management of Heart Failure.” The 2022 guideline is intended to provide patient-centric recommendations for clinicians to prevent, diagnose, and manage patients with heart failure. Methods: A comprehensive literature search was conducted from May 2020 to December 2020, encompassing studies, reviews, and other evidence conducted on human subjects that were published in English from MEDLINE (PubMed), EMBASE, the Cochrane Collaboration, the Agency for Healthcare Research and Quality, and other relevant databases. Additional relevant clinical trials and research studies, published through September 2021, were also considered. This guideline was harmonized with other American Heart Association/American College of Cardiology guidelines published through December 2021. Structure: Heart failure remains a leading cause of morbidity and mortality globally. The 2022 heart failure guideline provides recommendations based on contemporary evidence for the treatment of these patients. The recommendations present an evidence-based approach to managing patients with heart failure, with the intent to improve quality of care and align with patients’ interests. Many recommendations from the earlier heart failure guidelines have been updated with new evidence, and new recommendations have been created when supported by published data. Value statements are provided for certain treatments with high-quality published economic analyses.

484 citations

Proceedings Article
04 Jun 2007
TL;DR: The paper provides theoretical evidence that insertion of a new data point, as well as deletion of an old data point, influences only a limited number of its closest neighbors, so the number of updates per insertion/deletion does not depend on the total number of points in the data set.
Abstract: Outlier detection has recently become an important problem in many industrial and financial applications. This problem is further complicated by the fact that in many cases, outliers have to be detected from data streams that arrive at an enormous pace. In this paper, an incremental LOF (local outlier factor) algorithm, appropriate for detecting outliers in data streams, is proposed. The proposed incremental LOF algorithm provides detection performance equivalent to that of the iterated static LOF algorithm (applied after insertion of each data record), while requiring significantly less computational time. In addition, the incremental LOF algorithm also dynamically updates the profiles of data points. This is a very important property, since data profiles may change over time. The paper provides theoretical evidence that insertion of a new data point, as well as deletion of an old data point, influences only a limited number of their closest neighbors, and thus the number of updates per insertion/deletion does not depend on the total number of points N in the data set. Our experiments performed on several simulated and real-life data sets have demonstrated that the proposed incremental LOF algorithm is computationally efficient, while at the same time very successful in detecting outliers and changes of distributional behavior in various data stream applications.

397 citations
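The key observation in the abstract above, that an insertion only affects records having the new point among their k nearest neighbors (its reverse k-nearest neighbors), can be illustrated without implementing the full incremental LOF bookkeeping. The sketch below only identifies the affected records after an insertion; the function names and the brute-force neighbor search are assumptions for illustration, not the paper's data structures.

```python
import numpy as np

def knn_distances(X, k):
    """Brute-force k-NN: for each row, the sorted distances to its k nearest."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return np.sort(d, axis=1)[:, :k]

def affected_by_insertion(X_old, x_new, k=5):
    """Indices of existing records whose k-NN set changes when x_new arrives.

    A record is affected exactly when x_new is closer than its current
    k-th nearest neighbor, i.e. the record is a reverse k-NN of x_new.
    Only these records would need their LOF-related quantities recomputed.
    """
    kth_dist = knn_distances(X_old, k)[:, -1]
    dist_to_new = np.linalg.norm(X_old - x_new, axis=1)
    return np.where(dist_to_new < kth_dist)[0]

# Toy usage: in a 1000-point stream window, one insertion touches few records.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
x_new = rng.normal(size=3)
affected = affected_by_insertion(X, x_new, k=5)
print(f"{affected.size} of {len(X)} records need updating")
```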