
Ordinal regression

About: Ordinal regression is a research topic concerned with predicting responses measured on an ordinal scale. Over its lifetime, 1,879 publications have been published within this topic, receiving 65,431 citations.


Papers
Journal Article (DOI)
TL;DR: A new approach is proposed that builds a probability distribution over the space of all value functions compatible with the DM's certain holistic judgments, while additionally accounting for uncertain pairwise comparisons in which the preference of a over b is merely more credible than the preference of b over a.
Highlights: The multiple criteria ranking problem is approached using Subjective Stochastic Ordinal Regression (SSOR). Preferences of the decision maker are expressed through pairwise comparisons of some reference alternatives. Some of these pairwise comparisons are certain, and the others are uncertain. The uncertain pairwise comparisons are used to build a probability distribution over the space of all preference models compatible with the certain pairwise comparisons. From sampling this distribution, one learns the probability with which a is ranked in the r-th position (rank acceptability index) and the probability that a is preferred to b (pairwise winning index).

Abstract: Ordinal regression methods of Multiple Criteria Decision Aiding (MCDA) take into account one, several, or all value functions compatible with the indirect preference information provided by the Decision Maker (DM). When dealing with multiple criteria ranking problems, this information is typically a series of holistic and certain judgments having the form of pairwise comparisons of some reference alternatives, indicating that alternative a is certainly either preferred to or indifferent with alternative b. In some decision situations, however, it may be useful to additionally account for uncertain pairwise comparisons, interpreted as follows: although the preference of a over b is not certain, it is more credible than the preference of b over a. To handle both certain and uncertain preference information, we propose a new approach that builds a probability distribution over the space of all value functions compatible with the DM's certain holistic judgments. A didactic example shows the applicability of the proposed approach.

15 citations
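The sampling idea behind SSOR can be illustrated with a small, self-contained sketch. The code below uses plain rejection sampling over linear additive value functions with Dirichlet-distributed weights, a deliberate simplification of the paper's approach (which samples general compatible value functions and also exploits the uncertain comparisons); the function names, the toy performance table, and the single certain comparison (0, 3) are illustrative assumptions, not the authors' code or data.

```python
import numpy as np

def sample_compatible_models(perf, certain_pairs, n_samples=2000, seed=0):
    """Rejection-sample weight vectors of a linear additive value function
    U(a) = w . perf[a] that respect every certain comparison (i, j),
    read as "alternative i is at least as good as alternative j"."""
    rng = np.random.default_rng(seed)
    n_alt, n_crit = perf.shape
    kept = []
    while len(kept) < n_samples:
        w = rng.dirichlet(np.ones(n_crit))      # random weights on the simplex
        u = perf @ w                            # value of each alternative
        if all(u[i] >= u[j] for i, j in certain_pairs):
            kept.append(u)
    return np.array(kept)                       # shape (n_samples, n_alt)

def rank_acceptability(utilities):
    """rai[a, r]: estimated probability that alternative a takes rank r (0 = best)."""
    n_samples, n_alt = utilities.shape
    rai = np.zeros((n_alt, n_alt))
    for sample in np.argsort(-utilities, axis=1):   # best-to-worst ordering per sample
        for r, a in enumerate(sample):
            rai[a, r] += 1
    return rai / n_samples

def pairwise_winning(utilities):
    """pwi[a, b]: estimated probability that a gets a strictly higher value than b."""
    u = utilities
    return (u[:, :, None] > u[:, None, :]).mean(axis=0)

# Toy data: 4 alternatives scored on 3 criteria (higher is better), with one
# certain judgment: alternative 0 is at least as good as alternative 3.
perf = np.array([[0.8, 0.6, 0.7],
                 [0.5, 0.9, 0.4],
                 [0.6, 0.5, 0.9],
                 [0.4, 0.7, 0.5]])
U = sample_compatible_models(perf, certain_pairs=[(0, 3)])
print(rank_acceptability(U))
print(pairwise_winning(U))
```

A practical implementation would use a more efficient sampler over the set of compatible value functions; rejection sampling is used here only to keep the sketch short, but the two indices are computed from the samples in the same spirit.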

Journal Article (DOI)
TL;DR: This paper presents a comparative analysis of OLS, ordinal logistic, and multinomial logistic regression models for examining the effects of multiple factors on perceptions of alcohol risk.
Abstract: Drawing on data gathered in the 2006 Monitoring the Future study of American youth (n = 2489), this investigation offers a comparative analysis of ordinary least squares (OLS), ordinal and multinomial logistic regression models in examining the effects of multiple factors on perceptions of alcohol risk. The article addresses limitations of OLS models in risk analyses and demonstrates how scholars can avoid making statistical errors when positioning vague quantifiers as ordinal dependent measures. Substantively, the article finds differential effects for (1) sex, (2) perceived attitudes of peers toward alcohol consumption, (3) frequency of intoxication, (4) teacher efforts toward alcohol education, (5) frequency of communicating with friends, and (6) newspaper exposure, as determinants of alcohol risk perceptions. Through statistical results and visual displays, the article reveals how inferences made about these effects stand to vary depending on the regression method chosen.

15 citations
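As a concrete illustration of the kind of comparison the article performs, the snippet below fits the three model types to the same ordinal outcome using statsmodels. The variable names and the data-generating process are invented stand-ins, not the Monitoring the Future data or the article's specification; only the modelling contrast is the point.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.miscmodels.ordinal_model import OrderedModel

# Synthetic stand-in for a 4-category risk-perception item (0 = "no risk"
# ... 3 = "great risk") and two made-up predictors.
rng = np.random.default_rng(42)
n = 1000
x = pd.DataFrame({"female": rng.integers(0, 2, n),
                  "peer_disapproval": rng.normal(size=n)})
latent = 0.8 * x["female"] + 0.6 * x["peer_disapproval"] + rng.logistic(size=n)
risk = pd.cut(latent, bins=[-np.inf, -1.0, 0.5, 2.0, np.inf], labels=False)

# 1) OLS: treats the ordinal response as if it were interval-scaled.
ols = sm.OLS(risk, sm.add_constant(x)).fit()

# 2) Ordered (proportional-odds) logit: respects the category ordering.
ordered = OrderedModel(risk, x, distr="logit").fit(method="bfgs", disp=False)

# 3) Multinomial logit: ignores the ordering, estimates category-specific effects.
mnl = sm.MNLogit(risk, sm.add_constant(x)).fit(disp=False)

for name, res in [("OLS", ols), ("Ordered logit", ordered), ("Multinomial logit", mnl)]:
    print("\n" + name)
    print(res.params)
```

The article's substantive point, that inferences can shift with the regression method chosen, shows up here as differences between the three parameter tables fitted to identical data.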

Posted Content
TL;DR: This paper unifies the specification of regression models for categorical response variables, whether nominal or ordinal, based on a decomposition of the link function into an inverse continuous cdf and a ratio of probabilities, and introduces the notion of reversible models for ordinal data.
Abstract: Many regression models for categorical data have been introduced in various applied fields, motivated by different paradigms, but these models are difficult to compare because their specifications are not homogeneous. The first contribution of this paper is to unify the specification of regression models for categorical response variables, whether nominal or ordinal. This unification is based on a decomposition of the link function into an inverse continuous cdf and a ratio of probabilities. This allows us to define the new family of reference models for nominal data, comparable to the adjacent, cumulative, and sequential families of models for ordinal data. We introduce the notion of reversible models for ordinal data, which makes it possible to distinguish adjacent and cumulative models from sequential ones. Invariances under permutations of categories are then studied for each family. The combination of the proposed specification with the definition of reference and reversible models and the various invariance properties leads to an in-depth renewal of our view of regression models for categorical data. Finally, a family of new supervised classifiers is tested on three benchmark datasets, and a biological dataset is investigated with the objective of recovering the order among categories from only partial ordering information.

15 citations
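A minimal sketch may help make the (cdf, probability-ratio) decomposition concrete for the three classical ordinal families named above. The function below and its backward recursion for the adjacent family are my own illustration of the standard definitions, not code from the paper; the linear predictors in the example are arbitrary.

```python
import numpy as np
from scipy.stats import logistic, norm

def category_probs(eta, ratio="cumulative", F=logistic.cdf):
    """Category probabilities pi_1..pi_J for one observation, from linear
    predictors eta_j (j = 1..J-1) and a continuous cdf F.

    The (F, ratio) pair mirrors the decomposition of the link function:
      cumulative:  P(Y <= j)                 = F(eta_j)
      sequential:  P(Y = j | Y >= j)         = F(eta_j)
      adjacent:    P(Y = j | j <= Y <= j+1)  = F(eta_j)
    """
    eta = np.asarray(eta, dtype=float)
    J = eta.size + 1
    if ratio == "cumulative":
        cum = np.concatenate(([0.0], F(eta), [1.0]))
        return np.diff(cum)
    if ratio == "sequential":
        pi = np.empty(J)
        surv = 1.0
        for j in range(J - 1):
            pi[j] = surv * F(eta[j])        # stop at category j ...
            surv *= 1.0 - F(eta[j])         # ... or continue past it
        pi[-1] = surv
        return pi
    if ratio == "adjacent":
        # Backward recursion on pi_j / pi_{j+1} = F(eta_j) / (1 - F(eta_j)).
        odds = F(eta) / (1.0 - F(eta))
        pi = np.ones(J)
        for j in range(J - 2, -1, -1):
            pi[j] = pi[j + 1] * odds[j]
        return pi / pi.sum()
    raise ValueError("unknown ratio: " + ratio)

# Same linear predictors, three ratio families, logit vs probit cdf.
eta = np.array([-1.0, 0.0, 1.5])
for r in ("cumulative", "sequential", "adjacent"):
    print(r, category_probs(eta, r), category_probs(eta, r, F=norm.cdf))
```

Swapping `F` between the logistic and normal cdfs changes only the first component of the decomposition, while the `ratio` argument selects the family; this separation is precisely what the unified specification makes explicit.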

Journal Article (DOI)
TL;DR: A new KELM model for ordinal regression is proposed that exploits a quadratic cost-sensitive encoding scheme, together with a fast algorithm based on low-rank approximation that makes training more efficient in big-data scenarios.
Abstract: Ordinal regression is a special kind of machine learning problem, which aims to label patterns with an ordinal scale. Because ordering information is present in many practical cases, ordinal regression has received much attention and can be found in a great variety of applications. Meanwhile, Kernel Extreme Learning Machine (KELM), as the extension of Extreme Learning Machine in the framework of kernel learning, has shown its strength in many machine learning tasks. Nevertheless, existing KELM methods have paid little attention to ordinal regression, especially in large-scale settings. In this paper, we consider the kernel technique and the ordinal scale of the labels at the same time, and propose a new KELM model for ordinal regression by exploiting a quadratic cost-sensitive encoding scheme. To make the training process more efficient in the big-data scenario, a fast algorithm is designed based on low-rank approximation. The incomplete Cholesky factorization and the Sherman–Morrison–Woodbury formula are used together to avoid computing the inverse of the kernel matrix. The time complexity of the new algorithm is thus linear in the number of training instances, which makes it suitable for large-scale settings. Numerical experiments on multiple public datasets validate the effectiveness and efficiency of the proposed methods.

15 citations
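The ingredients described in the abstract can be sketched roughly as follows, with several substitutions: a Nyström factorisation stands in for the paper's incomplete Cholesky factorization, the encoding t_ik = -(y_i - k)^2 is one plausible quadratic cost-sensitive scheme rather than the authors' exact one, and all function names and hyperparameters are made up for illustration.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Gaussian (RBF) kernel matrix between the rows of A and B."""
    sq = (A**2).sum(1)[:, None] + (B**2).sum(1)[None, :] - 2.0 * A @ B.T
    return np.exp(-gamma * sq)

def fit_kelm_ordinal(X, y, n_classes, C=10.0, gamma=1.0, m=50, seed=0):
    """Low-rank KELM-style ordinal regressor (sketch, not the paper's algorithm).

    Targets use a quadratic cost-sensitive encoding t_ik = -(y_i - k)^2, so ranks
    far from the true one are penalised more heavily. The usual KELM solve
    alpha = (I/C + K)^{-1} T is replaced by a Nystrom factorisation K ~= G @ G.T
    combined with the Woodbury identity, so only an m x m system is solved."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    landmarks = X[rng.choice(n, size=min(m, n), replace=False)]
    Knm = rbf_kernel(X, landmarks, gamma)
    Kmm = rbf_kernel(landmarks, landmarks, gamma)
    w, V = np.linalg.eigh(Kmm + 1e-8 * np.eye(Kmm.shape[0]))
    G = Knm @ (V / np.sqrt(w)) @ V.T                 # K ~= G @ G.T
    T = -(y[:, None] - np.arange(n_classes)[None, :]) ** 2.0   # quadratic encoding
    lam = 1.0 / C
    inner = lam * np.eye(G.shape[1]) + G.T @ G       # small m x m system
    alpha = (T - G @ np.linalg.solve(inner, G.T @ T)) / lam    # Woodbury step
    return X, alpha, gamma

def predict_kelm_ordinal(model, Xnew):
    X, alpha, gamma = model
    scores = rbf_kernel(Xnew, X, gamma) @ alpha      # one score per rank
    return scores.argmax(axis=1)                     # highest score = predicted rank

# Tiny smoke test on synthetic ordinal data with 3 ranks.
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 4))
y = np.clip((X[:, 0] + 0.5 * X[:, 1] > 0).astype(int) + (X[:, 0] > 1.0), 0, 2)
model = fit_kelm_ordinal(X, y, n_classes=3, gamma=0.5, m=40)
print((predict_kelm_ordinal(model, X) == y).mean())
```

The Woodbury step is what keeps training cost linear in the number of instances: an m x m system is solved in place of the full n x n kernel inverse, which matches the scalability claim in the abstract.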


Network Information
Related Topics (5)
Regression analysis: 31K papers, 1.7M citations, 84% related
Linear regression: 21.3K papers, 1.2M citations, 79% related
Inference: 36.8K papers, 1.3M citations, 78% related
Empirical research: 51.3K papers, 1.9M citations, 78% related
Social media: 76K papers, 1.1M citations, 77% related
Performance Metrics
Number of papers in the topic in previous years:

Year    Papers
2023    102
2022    191
2021    88
2020    93
2019    79
2018    73