
Showing papers on "Ordinal regression published in 2009"


Book
29 Apr 2009
TL;DR: This book provides a comprehensive treatment of logistic regression, covering model fit and explained variation, coefficients, diagnostics, polytomous and ordinal logistic regression, and extensions such as multilevel logistic regression and discrete-time event history analysis.
Abstract: Preface
Chapter 1. Introduction: Linear Regression and Logistic Regression
Chapter 2. Log-Linear Analysis, Logit Analysis, and Logistic Regression
Chapter 3. Quantitative Approaches to Model Fit and Explained Variation
Chapter 4. Prediction Tables and Qualitative Approaches to Explained Variation
Chapter 5. Logistic Regression Coefficients
Chapter 6. Model Specification, Variable Selection, and Model Building
Chapter 7. Logistic Regression Diagnostics and Problems of Inference
Chapter 8. Path Analysis With Logistic Regression (PALR)
Chapter 9. Polytomous Logistic Regression for Unordered Categorical Variables
Chapter 10. Ordinal Logistic Regression
Chapter 11. Clusters, Contexts, and Dependent Data: Logistic Regression for Clustered Sample Survey Data
Chapter 12. Conditional Logistic Regression Models for Related Samples
Chapter 13. Longitudinal Panel Analysis With Logistic Regression
Chapter 14. Logistic Regression for Historical and Developmental Change Models: Multilevel Logistic Regression and Discrete Time Event History Analysis
Chapter 15. Comparisons: Logistic Regression and Alternative Models
Appendix A: Estimation for Logistic Regression Models
Appendix B: Proofs Related to Indices of Predictive Efficiency
Appendix C: Ordinal Measures of Explained Variation
References
Index

460 citations


Journal ArticleDOI
TL;DR: In this paper, the author argues that, as originally proposed, Allison's method can have serious problems and should not be applied on a routine basis, and also shows that his model belongs to a larger class of models known as heterogeneous choice or location-scale models.
Abstract: Allison (1999) notes that comparisons of logit and probit coefficients across groups can be invalid and misleading, proposes a procedure by which these problems can be corrected, and argues that "routine use [of this method] seems advisable" and that "it is hard to see how [the method] can be improved." In this article, the author argues that, as originally proposed, Allison's method can have serious problems and should not be applied on a routine basis. However, this study also shows that his model belongs to a larger class of models variously known as heterogeneous choice or location-scale models. Several advantages of this broader and more flexible class of models are illustrated: dependent variables can be ordinal in addition to binary, sources of heterogeneity can be better modeled and controlled for, and insights can be gained into the effects of group characteristics on outcomes that would be missed by other methods.

431 citations


Journal ArticleDOI
TL;DR: In this article, the authors identify 12 distinct models that rely on logistic regression and fit within a framework of three major approaches with variations within each approach based on the application of the proportional odds assumption.
Abstract: Ordinal-level measures are very common in social science research. Researchers often analyze ordinal dependent variables using the proportional odds logistic regression model. However, this "traditional" method is one of many different types of logistic regression models available for the analysis of ordered response variables. In this article, the author identifies 12 distinct models that rely on logistic regression and fit within a framework of three major approaches, with variations within each approach based on the application of the proportional odds assumption. This typology provides a degree of conceptual clarity that is missing in the extant literature on logistic regression models for ordinal outcomes. The author illustrates the similarities and differences among the different models with examples from the General Social Survey and the American National Election Study.
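For readers who want to try the "traditional" model the typology starts from, here is a minimal sketch of a proportional odds fit in Python with statsmodels; the simulated variables stand in for the GSS/ANES examples, which are not reproduced in this listing.

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

# Simulate an ordered 4-category outcome (a stand-in for the article's
# survey examples, which are not available here).
rng = np.random.default_rng(0)
n = 500
age = rng.uniform(18, 80, n)
educ = rng.integers(8, 21, n).astype(float)
latent = 0.02 * age + 0.10 * educ + rng.logistic(size=n)
opinion = pd.Series(pd.Categorical(np.digitize(latent, [2.0, 3.0, 4.0]),
                                   ordered=True))

# Proportional odds: one common slope vector, J - 1 estimated thresholds.
exog = pd.DataFrame({"age": age, "educ": educ})
model = OrderedModel(opinion, exog, distr="logit")
print(model.fit(method="bfgs").summary())
```

Relaxing the proportional odds assumption, as several of the 12 models discussed in the article do, would replace the common slope vector with category-specific slopes.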

240 citations


Journal ArticleDOI
TL;DR: Generalized Regression with Intensities of Preference (GRIP) is presented, which builds a set of additive value functions compatible with preference information composed of a partial preorder and required intensities of preference on a subset of actions, called reference actions.

227 citations


Proceedings ArticleDOI
30 Nov 2009
TL;DR: This work proposes a simple way to turn standard measures for OR into ones robust to imbalance, shows that the two versions of each measure coincide when used on balanced datasets, and argues that these measures should become the standard choice for OR.
Abstract: Ordinal regression (OR -- also known as ordinal classification) has received increasing attention in recent times, due to its importance in IR applications such as learning to rank and product review rating. However, research has not paid attention to the fact that typical applications of OR often involve datasets that are highly imbalanced. An imbalanced dataset has the consequence that, when testing a system with an evaluation measure conceived for balanced datasets, a trivial system assigning all items to a single class (typically, the majority class) may even outperform genuinely engineered systems. Moreover, if this evaluation measure is used for parameter optimization, a parameter choice may result that makes the system behave very much like a trivial system. In order to avoid this, evaluation measures that can handle imbalance must be used. We propose a simple way to turn standard measures for OR into ones robust to imbalance. We also show that, when used on balanced datasets, the two versions of each measure coincide, and therefore argue that our measures should become the standard choice for OR.
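The core idea, computing a standard error measure separately within each true class and then averaging so that the majority class cannot dominate, fits in a few lines. The function below is our illustration of macroaveraging applied to mean absolute error, not code from the paper:

```python
import numpy as np

def macro_mae(y_true, y_pred):
    # Mean absolute error computed within each true class, then averaged
    # across classes; a trivial majority-class system no longer scores well.
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    classes = np.unique(y_true)
    per_class = [np.abs(y_pred[y_true == c] - c).mean() for c in classes]
    return float(np.mean(per_class))

# A trivial system that always predicts the majority class 1:
y_true = [1, 1, 1, 1, 2, 3]
y_pred = [1, 1, 1, 1, 1, 1]
print(np.abs(np.array(y_true) - np.array(y_pred)).mean())  # plain MAE: 0.5
print(macro_mae(y_true, y_pred))                           # macro MAE: 1.0
```

On a balanced test set the per-class average equals the overall average, so the two versions coincide, matching the claim in the abstract.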

198 citations


Book
01 Jan 2009
TL;DR: This book surveys quality of life measures and the statistical methods for designing, analysing and reporting quality of life studies in clinical trials, including sample size calculation, reliability assessment, longitudinal modelling, ordinal regression, economic evaluation and meta-analysis.
Abstract: Preface
1 Introduction. Summary. 1.1 What is quality of life? 1.2 Terminology. 1.3 History. 1.4 Types of quality of life measures. 1.5 Why measure quality of life? 1.6 Further reading.
2 Measuring quality of life. Summary. 2.1 Introduction. 2.2 Principles of measurement scales. 2.3 Indicator and causal variables. 2.4 The traditional psychometric model. 2.5 Item response theory. 2.6 Clinimetric scales. 2.7 Measuring quality of life: indicator or causal items. 2.8 Developing and testing questionnaires. 2.9 Further reading.
3 Choosing a quality of life measure for your study. Summary. 3.1 Introduction. 3.2 How to choose between instruments. 3.3 Appropriateness. 3.4 Acceptability. 3.5 Feasibility. 3.6 Validity. 3.7 Reliability. 3.8 Responsiveness. 3.9 Precision. 3.10 Interpretability. 3.11 Finding quality of life instruments.
4 Design and sample size issues: How many subjects do I need for my study? Summary. 4.1 Introduction. 4.2 Significance tests, P-values and power. 4.3 Sample sizes for comparison of two independent groups. 4.4 Choice of sample size method with quality of life outcomes. 4.5 Paired data. 4.6 Equivalence/non-inferiority studies. 4.7 Unknown standard deviation and effect size. 4.8 Cluster randomized controlled trials. 4.9 Non-response. 4.10 Unequal groups. 4.11 Multiple outcomes/endpoints. 4.12 Three or more groups. 4.13 What if we are doing a survey, not a clinical trial? 4.14 Sample sizes for reliability and method comparison studies. 4.15 Post-hoc sample size calculations. 4.16 Conclusion: Usefulness of sample size calculations. 4.17 Further reading.
5 Reliability and method comparison studies for quality of life measurements. Summary. 5.1 Introduction. 5.2 Intra-class correlation coefficient. 5.3 Agreement between individual items on a quality of life questionnaire. 5.4 Internal consistency and Cronbach's alpha. 5.5 Graphical methods for assessing reliability or agreement between two quality of life measures or assessments. 5.6 Further reading. 5.7 Technical details.
6 Summarizing, tabulating and graphically displaying quality of life outcomes. Summary. 6.1 Introduction. 6.2 Graphs. 6.3 Describing and summarizing quality of life data. 6.4 Presenting quality of life data and results in tables and graphs.
7 Cross-sectional analysis of quality of life outcomes. Summary. 7.1 Introduction. 7.2 Hypothesis testing (using P-values). 7.3 Estimation (using confidence intervals). 7.4 Choosing the statistical method. 7.5 Comparison of two independent groups. 7.6 Comparing more than two groups. 7.7 Two groups of paired observations. 7.8 The relationship between two continuous variables. 7.9 Correlation. 7.10 Regression. 7.11 Multiple regression. 7.12 Regression or correlation? 7.13 Parametric versus non-parametric methods. 7.14 Technical details: Checking the assumptions for a linear regression analysis.
8 Randomized controlled trials. Summary. 8.1 Introduction. 8.2 Randomized controlled trials. 8.3 Protocols. 8.4 Pragmatic and explanatory trials. 8.5 Intention-to-treat and per-protocol analyses. 8.6 Patient flow diagram. 8.7 Comparison of entry characteristics. 8.8 Incomplete data. 8.9 Main analysis. 8.10 Interpretation of changes/differences in quality of life scores. 8.11 Superiority and equivalence trials. 8.12 Adjusting for other variables. 8.13 Three methods of analysis for pre-test/post-test control group designs. 8.14 Cross-over trials. 8.15 Factorial trials. 8.16 Cluster randomized controlled trials. 8.17 Further reading.
9 Exploring and modelling longitudinal quality of life data. Summary. 9.1 Introduction. 9.2 Summarizing, tabulating and graphically displaying repeated QoL assessments. 9.3 Time-by-time analysis. 9.4 Response feature analysis - the use of summary measures. 9.5 Modelling of longitudinal data. 9.6 Conclusions.
10 Advanced methods for analysing quality of life outcomes. Summary. 10.1 Introduction. 10.2 Bootstrap methods. 10.3 Bootstrap methods for confidence interval estimation. 10.4 Ordinal regression. 10.5 Comparing two independent groups: Ordinal quality of life measures (with less than 7 categories). 10.6 Proportional odds or cumulative logit model. 10.7 Continuation ratio model. 10.8 Stereotype logistic model. 10.9 Conclusions and further reading.
11 Economic evaluations. Summary. 11.1 Introduction. 11.2 Economic evaluations. 11.3 Utilities and QALYs. 11.4 Economic evaluations alongside a controlled trial. 11.5 Cost-effectiveness analysis. 11.6 Cost-effectiveness ratios. 11.7 Cost-utility analysis and cost-utility ratios. 11.8 Incremental cost per QALY. 11.9 The problem of negative (and positive) incremental cost-effectiveness ratios. 11.10 Cost-effectiveness acceptability curves. 11.11 Further reading.
12 Meta-analysis. Summary. 12.1 Introduction. 12.2 Planning a meta-analysis. 12.3 Statistical methods in meta-analysis. 12.4 Presentation of results. 12.5 Conclusion. 12.6 Further reading.
13 Practical issues. Summary. 13.1 Missing data. 13.2 Multiplicity, multi-dimensionality and multiple quality of life outcomes. 13.3 Guidelines for reporting quality of life studies.
Solutions to exercises. Appendix A: Examples of questionnaires. Appendix B: Statistical tables. References. Index.

135 citations


Book
25 Mar 2009
TL;DR: This book introduces regression modeling for dichotomous, polytomous, ordinal, and count dependent variables.
Abstract: Preface
1. Introduction to Regression Modeling
2. Regression with a Dichotomous Dependent Variable
3. Regression with a Polytomous Dependent Variable
4. Regression with an Ordinal Dependent Variable
5. Regression with a Count Dependent Variable
Appendix A: Description of Data Sets
Appendix B: Logarithms
Glossary

120 citations


Journal ArticleDOI
TL;DR: This paper reports on a study using two ordinal techniques, DCE and ranking, to derive cardinal values for health states from a condition-specific sexual health measure, and raises some important issues about the use of ordinal data to produce cardinal health state valuations.
Abstract: There is an increasing interest in using data derived from ordinal methods, particularly data derived from discrete choice experiments (DCEs), to estimate the cardinal values for health states to calculate quality adjusted life years (QALYs). Ordinal measurement strategies such as DCE may have considerable practical advantages over more conventional cardinal measurement techniques, e.g. time trade-off (TTO), because they may not require such a high degree of abstract reasoning. However, there are a number of challenges to deriving the cardinal values for health states using ordinal data, including anchoring the values on the full health–dead scale used to calculate QALYs. This paper reports on a study that deals with these problems in the context of using two ordinal techniques, DCE and ranking, to derive the cardinal values for health states derived from a condition-specific sexual health measure. The results were compared with values generated using a commonly used cardinal valuation technique, the TTO. This study raises some important issues about the use of ordinal data to produce cardinal health state valuations.

72 citations


Journal ArticleDOI
TL;DR: Existing methods are reviewed and penalized regression techniques for ordered categorical predictors are proposed: based on dummy coding, the first imposes a difference penalty and the second is a ridge-type refitting procedure.
Abstract: Ordered categorical predictors are a common case in regression modeling. In contrast to the case of ordinal response variables, ordinal predictors have been largely neglected in the literature. In this article, penalized regression techniques are proposed. Based on dummy coding, two types of penalization are explicitly developed; the first imposes a difference penalty, the second is a ridge-type refitting procedure. A Bayesian motivation as well as alternative ways of derivation are provided. Simulation studies and real-world data serve for illustration and to compare the approach to methods often seen in practice, namely linear regression on the group labels and pure dummy coding. The proposed regression techniques turn out to be highly competitive. On the basis of GLMs, the concept is generalized to the case of non-normal outcomes by performing penalized likelihood estimation.
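As a rough numerical illustration of the first penalty type (our own sketch, not the authors' implementation), the snippet below fits the dummy coefficients of an ordered predictor by penalized least squares, where the penalty matrix D takes differences of adjacent category coefficients and so shrinks neighbouring effects toward each other:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 200, 6                       # n observations, k ordered categories
levels = rng.integers(0, k, size=n)
X = np.eye(k)[levels]               # dummy coding of the ordinal predictor
beta_true = np.array([0.0, 0.2, 0.4, 0.5, 1.4, 1.5])
y = X @ beta_true + rng.normal(scale=0.5, size=n)

# First-difference penalty matrix: penalizes (beta_{j+1} - beta_j)^2.
D = np.diff(np.eye(k), axis=0)
lam = 5.0
# Penalized least squares: (X'X + lam * D'D) beta = X'y.
beta_hat = np.linalg.solve(X.T @ X + lam * D.T @ D, X.T @ y)
print(np.round(beta_hat, 2))
```

Setting lam to zero recovers pure dummy coding, while a very large lam pushes all category effects toward a common level; the ridge-type refit and the GLM extension from the article are not sketched here.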

71 citations


Journal ArticleDOI
TL;DR: In this paper, the most important ordinal regression models and common approaches used to verify goodness-of-fit, using R or Stata programs, are reviewed and compared using data sets on health conditions from the National Health and Nutrition Examination Survey (NHANES II).
Abstract: Ordinal logistic regression models have been developed for analysis of epidemiological studies. However, the adequacy of such models for adjustment has so far received little attention. In this article, we reviewed the most important ordinal regression models and common approaches used to verify goodness-of-fit, using R or Stata programs. We performed formal and graphical analyses to compare ordinal models using data sets on health conditions from the National Health and Nutrition Examination Survey (NHANES II).

Journal ArticleDOI
TL;DR: This article extends the standard logistic mixed model by adding a subject-level random effect to the within-subject variance specification, and permits subjects to have influence on the mean, or location, and variability, or (square of the) scale, of their responses.
Abstract: Mixed-effects logistic regression models are described for the analysis of longitudinal ordinal outcomes, where observations are clustered within subjects. Random effects are included in the model to account for the correlation of the clustered observations. Typically, the error variance and the variance of the random effects are considered to be homogeneous. These variance terms characterize the within-subjects (i.e., error variance) and between-subjects (i.e., random-effects variance) variation in the data. In this article, we describe how covariates can influence these variances, and also extend the standard logistic mixed model by adding a subject-level random effect to the within-subject variance specification. This permits subjects to influence the mean, or location, and the variability, or (square of the) scale, of their responses. Additionally, we allow the random effects to be correlated. We illustrate the application of these models for ordinal data using Ecological Momentary Assessment (EMA) data, or intensive longitudinal data, from an adolescent smoking study. These mixed-effects ordinal location scale models have useful applications in mental health research, where outcomes are often ordinal and there is interest in subject heterogeneity, both between and within subjects.
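Schematically, and with notation assumed rather than copied from the article, the extension amounts to a threshold model for a latent response whose between-subject and within-subject variances both depend on covariates, the latter with its own random scale effect:

```latex
% Sketch of an ordinal location-scale model (assumed notation).
\begin{align*}
y^{*}_{ij} &= \mathbf{x}_{ij}'\boldsymbol{\beta} + \upsilon_i + \varepsilon_{ij}
  &&\text{latent response, thresholded into ordinal categories}\\
\log \sigma^{2}_{\upsilon_i} &= \mathbf{u}_{i}'\boldsymbol{\alpha}
  &&\text{between-subjects (random-effects) variance}\\
\log \sigma^{2}_{\varepsilon_{ij}} &= \mathbf{w}_{ij}'\boldsymbol{\tau} + \omega_i
  &&\text{within-subjects (error) variance, with random scale effect } \omega_i
\end{align*}
```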

Book ChapterDOI
21 Apr 2009
TL;DR: This paper proposes the Necessary-preference-enhanced Evolutionary Multiobjective Optimizer (NEMO), a combination of an evolutionary multiobjective optimization method, NSGA-II, and an interactive multiobjective optimization method, GRIP, which focuses the search on the region most preferred by the decision maker and thereby speeds up convergence.
Abstract: This paper proposes the Necessary-preference-enhanced Evolutionary Multiobjective Optimizer (NEMO), a combination of an evolutionary multiobjective optimization method, NSGA-II, and an interactive multiobjective optimization method, GRIP. In the course of NEMO, the decision maker is able to introduce preference information in a holistic way, by simply comparing some pairs of solutions and specifying which solution is preferred, or comparing intensities of preferences between pairs of solutions. From this information, the set of all compatible value functions is derived using GRIP, and a properly modified version of NSGA-II is then used to search for a representative set of all Pareto-optimal solutions compatible with this set of derived value functions. As we show, this makes it possible to focus the search on the region most preferred by the decision maker, and thereby speeds up convergence.

01 Jan 2009
TL;DR: In this paper, the authors compare the features and results for fitting the proportional odds model using Stata OLOGIT, SAS PROC LOGISTIC (ascending and descending), and SPSS PLUM.
Abstract: Researchers have a variety of options when choosing statistical software packages that can perform ordinal logistic regression analyses. However, statistical software, such as Stata, SAS, and SPSS, may use different techniques to estimate the parameters. The purpose of this article is to (1) illustrate the use of Stata, SAS and SPSS to fit proportional odds models using educational data; and (2) compare the features and results for fitting the proportional odds model using Stata OLOGIT, SAS PROC LOGISTIC (ascending and descending), and SPSS PLUM. The assumption of proportional odds was tested, and the results of the fitted models were interpreted. Introduction: The proportional odds (PO) model, also called the cumulative logit model, is a commonly used model for the analysis of ordinal categorical data and comes from the class of generalized linear models. It is a generalization of a binary logistic regression model to the case where the response variable has more than two ordinal categories. The proportional odds model is used to estimate the odds of being at or below a particular level of the response variable. For example, if there are J levels of ordinal outcomes, the model makes J-1 predictions, each estimating the cumulative probability at or below the jth level of the outcome variable. This model can estimate the odds of being at or beyond a particular level of the response variable as well, because being at or below and being beyond a given level are complementary events.
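The J-1 cumulative predictions described in the abstract convert to category probabilities by differencing adjacent cumulative probabilities. A small Python sketch with made-up threshold and slope values (none of the OLOGIT, PROC LOGISTIC, or PLUM output is reproduced here):

```python
import numpy as np

def po_category_probs(thresholds, xb):
    # Proportional odds: logit P(Y <= j) = alpha_j - x'beta, so category
    # probabilities are differences of adjacent cumulative probabilities.
    cum = 1.0 / (1.0 + np.exp(-(np.asarray(thresholds) - xb)))  # P(Y <= j)
    cum = np.concatenate(([0.0], cum, [1.0]))
    return np.diff(cum)

# Hypothetical fitted values: J = 4 categories -> J - 1 = 3 thresholds.
probs = po_category_probs(thresholds=[-1.0, 0.5, 2.0], xb=0.8)
print(probs, probs.sum())  # four category probabilities summing to 1
```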

Book ChapterDOI
07 Oct 2009
TL;DR: A data mining approach to predict wine preferences that is based on easily available analytical tests at the certification step and can support the wine expert evaluations and ultimately improve the production is proposed.
Abstract: Certification and quality assessment are crucial issues within the wine industry. Currently, wine quality is mostly assessed by physicochemical (e.g., alcohol levels) and sensory (e.g., human expert evaluation) tests. In this paper, we propose a data mining approach to predict wine preferences that is based on easily available analytical tests at the certification step. A large dataset of white vinho verde samples from the Minho region of Portugal is considered. Wine quality is modeled under a regression approach, which preserves the order of the grades. Explanatory knowledge is given in terms of a sensitivity analysis, which measures the response changes when a given input variable is varied through its domain. Three regression techniques were applied, under a computationally efficient procedure that performs simultaneous variable and model selection and that is guided by the sensitivity analysis. The support vector machine achieved promising results, outperforming the multiple regression and neural network methods. Such a model is useful for understanding how physicochemical tests affect sensory preferences. Moreover, it can support the wine expert evaluations and ultimately improve the production.
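The order-preserving regression setup is easy to reproduce in outline. Below is a sketch using the publicly archived UCI wine quality data that originated with this paper; the URL, hyperparameters, and train/test split are our assumptions, not the paper's tuned procedure:

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR
from sklearn.metrics import mean_absolute_error

# UCI "wine quality" data (white vinho verde); the URL is assumed live.
url = ("https://archive.ics.uci.edu/ml/machine-learning-databases/"
       "wine-quality/winequality-white.csv")
df = pd.read_csv(url, sep=";")
X, y = df.drop(columns="quality"), df["quality"]  # quality is an ordered grade
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Modeling quality as a regression target preserves the order of the grades.
svm = make_pipeline(StandardScaler(), SVR(C=1.0, gamma="scale"))
svm.fit(X_tr, y_tr)
print("MAE:", mean_absolute_error(y_te, svm.predict(X_te)))
```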

Journal ArticleDOI
TL;DR: Simulation results indicate that analyses of ordinal and binary data can recover both the raw and standardized patterns of results, and it is demonstrated that when using binary data, constraining the total variance to unity for a given value of the moderator is sufficient to ensure identification.
Abstract: Following the publication of Purcell's approach to the modeling of gene by environment interaction in 2002, interest in G × E modeling in twin and family data increased dramatically. The analytic techniques described by Purcell were designed for use with continuous data. Here we explore the re-parameterization of these models for use with ordinal and binary outcome data. Analysis of binary and ordinal data within the context of a liability threshold model traditionally requires constraining the total variance to unity to ensure identification. Here, we demonstrate an alternative approach for use with ordinal data, in which the values of the first two thresholds are fixed, thus allowing the total variance to change as a function of the moderator. We also demonstrate that when using binary data, constraining the total variance to unity for a given value of the moderator is sufficient to ensure identification. Simulation results indicate that analyses of ordinal and binary data can recover both the raw and standardized patterns of results. However, the scale of the results is dependent on the specification of (threshold or variance) constraints rather than the underlying distribution of liability. Example Mx scripts are provided.
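The identification trick, fixing thresholds so that the total variance is free to vary with the moderator, can be seen with the liability threshold machinery alone. A toy sketch (our illustration, not the authors' Mx scripts):

```python
import numpy as np
from scipy.stats import norm

def ordinal_probs(thresholds, total_sd):
    # Liability threshold model: liability ~ N(0, total_sd^2), cut at fixed
    # thresholds. Because the thresholds are fixed, total_sd may vary with
    # a moderator and the model remains identified.
    z = norm.cdf(np.asarray(thresholds) / total_sd)
    z = np.concatenate(([0.0], z, [1.0]))
    return np.diff(z)

# Same fixed thresholds, two moderator values changing the total variance:
print(ordinal_probs([0.0, 1.0], total_sd=1.0))
print(ordinal_probs([0.0, 1.0], total_sd=1.5))
```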

Journal ArticleDOI
TL;DR: In this article, an ordinal logistic regression analysis is used to examine a total of 221 responses to a questionnaire distributed to small and medium-sized enterprises (SMEs) in Sweden.
Abstract: The purpose of this study is to investigate the influence of the technical and functional dimensions of service management on customer satisfaction in the bank-SME relationship. An ordinal logistic regression analysis is used to examine a total of 221 responses to a questionnaire distributed to small and medium-sized enterprises (SMEs) in Sweden. Both the technical and the functional dimensions of service management were shown to correlate with customer satisfaction; thus, SMEs seem to evaluate their banking relationship not only on the basis of the effectiveness and quality of banks' service outcomes but also on the care and manner in which the bankers deliver services. The study shows that relationship variables, such as personal interaction, are strong determinants of customer satisfaction in the bank-SME relationship; hence, there is a need for banks to focus training on understanding the issues of SMEs on a broader scale rather than solely on the sale of individual products. The study examines both the technical and functional dimensions of service management in the bank-SME relationship. Because most researchers treat ordinal scales as continuous variables, stronger conclusions can be drawn from the ordinal regression analysis performed here.

Journal ArticleDOI
TL;DR: This study examined the diagnostic utility of the NAB List Learning test for differentiating cognitively healthy, MCI, and AD groups and produced a model that classified individuals with 80% accuracy and good predictive power.
Abstract: Measures of episodic memory are often used to identify Alzheimer's disease (AD) and mild cognitive impairment (MCI). The Neuropsychological Assessment Battery (NAB) List Learning test is a promising tool for the memory assessment of older adults due to its simplicity of administration, good psychometric properties, equivalent forms, and extensive normative data. This study examined the diagnostic utility of the NAB List Learning test for differentiating cognitively healthy, MCI, and AD groups. One hundred fifty-three participants (age: range, 57-94 years; M = 74 years; SD, 8 years; sex: 61% women) were diagnosed by a multidisciplinary consensus team as cognitively normal, amnestic MCI (aMCI; single and multiple domain), or AD, independent of NAB List Learning performance. In univariate analyses, receiver operating characteristic curve analyses were conducted for four demographically corrected NAB List Learning variables. Additionally, multivariate ordinal logistic regression and fivefold cross-validation were used to create and validate a predictive model based on demographic variables and NAB List Learning test raw scores. At optimal cutoff scores, univariate sensitivity values ranged from .58 to .92 and univariate specificity values ranged from .52 to .97. Multivariate ordinal regression produced a model that classified individuals with 80% accuracy and good predictive power.

Journal ArticleDOI
TL;DR: A full information maximum likelihood estimation method for modelling multivariate longitudinal ordinal variables and two latent variable models are proposed that account for dependencies among items within time and between time.
Abstract: The paper proposes a full information maximum likelihood estimation method for modelling multivariate longitudinal ordinal variables. Two latent variable models are proposed that account for dependencies among items within time and between time. One model fits item-specific random effects which account for the between time points correlations and the second model uses a common factor. The relationships between the time-dependent latent variables are modelled with a non-stationary autoregressive model. The proposed models are fitted to a real data set.

Journal ArticleDOI
TL;DR: A theorem is provided that establishes a connection between marginal homogeneity and the stronger exchangeability assumption under the permutation approach and is illustrated using a collection of 25 correlated ordinal endpoints, grouped into six domains, to evaluate toxicity of a chemical compound.
Abstract: Many assessment instruments used in the evaluation of toxicity, safety, pain, or disease progression consider multiple ordinal endpoints to fully capture the presence and severity of treatment effects. Contingency tables underlying these correlated responses are often sparse and imbalanced, rendering asymptotic results unreliable or model fitting prohibitively complex without overly simplistic assumptions on the marginal and joint distribution. Instead of a modeling approach, we look at stochastic order and marginal inhomogeneity as an expression or manifestation of a treatment effect under much weaker assumptions. Often, endpoints are grouped together into physiological domains or by the body function they describe. We derive tests based on these subgroups, which might supplement or replace the individual endpoint analysis because they are more powerful. The permutation or bootstrap distribution is used throughout to obtain global, subgroup, and individual significance levels as they naturally incorporate the correlation among endpoints. We provide a theorem that establishes a connection between marginal homogeneity and the stronger exchangeability assumption under the permutation approach. Multiplicity adjustments for the individual endpoints are obtained via stepdown procedures, while subgroup significance levels are adjusted via the full closed testing procedure. The proposed methodology is illustrated using a collection of 25 correlated ordinal endpoints, grouped into six domains, to evaluate toxicity of a chemical compound.
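For a single ordinal endpoint and two groups, the permutation logic reduces to a few lines; the sketch below (a simplified illustration that omits the paper's subgroup and multiplicity machinery) permutes group labels to obtain a reference distribution for a difference in mean severity:

```python
import numpy as np

rng = np.random.default_rng(1)

def perm_test(scores_a, scores_b, n_perm=10_000):
    # Permutation p-value for a difference in mean ordinal severity between
    # two groups; permuting labels respects the discreteness of the scale.
    pooled = np.concatenate([scores_a, scores_b])
    n_a = len(scores_a)
    observed = scores_a.mean() - scores_b.mean()
    stats = np.empty(n_perm)
    for i in range(n_perm):
        perm = rng.permutation(pooled)
        stats[i] = perm[:n_a].mean() - perm[n_a:].mean()
    return float(np.mean(np.abs(stats) >= abs(observed)))

# Toy severity scores on a 0-3 ordinal scale:
a = np.array([0, 0, 1, 2, 2, 3, 3, 3])
b = np.array([0, 0, 0, 1, 1, 1, 2, 2])
print(perm_test(a, b))
```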

Journal ArticleDOI
TL;DR: Two algorithms are presented for generating monotone ordinal data sets; the main contribution of this paper is to describe, for the first time, how structured monotone data sets can be generated.
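As a flavour of what "monotone" means here, one simple generator (our sketch of the idea, not either of the paper's algorithms) derives labels by binning a nonnegative-weighted score of the features, which guarantees that x >= x' componentwise implies label(x) >= label(x'):

```python
import numpy as np

rng = np.random.default_rng(2)

def monotone_dataset(n=100, d=3, k=4):
    # Labels are bins of a nonnegative-weighted (hence monotone) score,
    # so the data set is monotone by construction.
    X = rng.uniform(size=(n, d))
    w = rng.uniform(0.1, 1.0, size=d)      # nonnegative weights
    score = X @ w
    cuts = np.quantile(score, np.linspace(0, 1, k + 1)[1:-1])
    y = np.searchsorted(cuts, score)       # ordinal label in {0, ..., k-1}
    return X, y

X, y = monotone_dataset()
print(np.bincount(y))  # roughly balanced ordinal classes
```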


Journal ArticleDOI
TL;DR: Simulation studies comparing the performance of the different diagnostic methods indicate that some of the graphical methods are more powerful in detecting model misspecification than the Hosmer-Lemeshow-type goodness-of-fit statistics for the class of models studied.
Abstract: The cumulative logit or the proportional odds regression model is commonly used to study covariate effects on ordinal responses. This paper provides some graphical and numerical methods for checking the adequacy of the proportional odds regression model. The methods focus on evaluating functional misspecification for specific covariate effects, but misspecification of the link function can also be dealt with under the same framework. For the logistic regression model with binary responses, Arbogast and Lin (Statist. Med. 2005; 24:229-247) developed similar graphical and numerical methods for assessing the adequacy of the model using the cumulative sums of residuals. The paper generalizes their methods to ordinal responses and illustrates them using an example from the VA Normative Aging Study. Simulation studies comparing the performance of the different diagnostic methods indicate that some of the graphical methods are more powerful in detecting model misspecification than the Hosmer-Lemeshow-type goodness-of-fit statistics for the class of models studied.

Journal ArticleDOI
TL;DR: Tolerance to adverse events over time was modeled by including a non-stationary component for the dizziness ordinal Markov regression while the piecewise Weibull distributions allowed a change point in the median time to first non-zero dizziness or drowsiness score.
Abstract: Dizziness and drowsiness are cited as being predictors of dropout from clinical trials for the medicine pregabalin. These adverse events are typically recorded daily on a four point ordinal scale (0 = none, 1 = mild, 2 = moderate, 3 = severe), with most subjects never reporting either adverse event. We modeled the dizziness, drowsiness, and dropout associated with pregabalin use in generalized anxiety disorder using piecewise Weibull distributions for the time to first non-zero dizziness or drowsiness score, after which the dizziness or drowsiness was modeled with ordinal regression with a Markovian element. Dropout was modeled with a Weibull distribution. Platykurtosis was encountered in the estimated random effects distributions for the ordinal regression models and was addressed with dynamic John-Draper transformations. The only identified predictor for the time to first non-zero dizziness or drowsiness score was daily titrated dose. Predictors for dropout included creatinine clearance and maximum daily adverse event score. Tolerance to adverse events over time was modeled by including a non-stationary component for the dizziness ordinal Markov regression while the piecewise Weibull distributions allowed a change point in the median time to first non-zero dizziness or drowsiness score.

Proceedings Article
01 Jan 2009
TL;DR: This paper proposes to formulate concept detection as an ordinal regression problem to explicitly take advantage of the ordinal relationship between concepts and avoid the data imbalance problem of conventional multi-label classification methods.
Abstract: To facilitate information retrieval of large-scale music databases, the detection of musical concepts, or auto-tagging, has been an active research topic. This paper concerns the use of concept correlations to improve musical concept detection. We propose to formulate concept detection as an ordinal regression problem to explicitly take advantage of the ordinal relationship between concepts and avoid the data imbalance problem of conventional multi-label classification methods. To further improve the detection accuracy, we propose to leverage the co-occurrence patterns of concepts for context fusion and employ concept selection to remove irrelevant or noisy concepts. Evaluation on the cal500 dataset shows that we are able to improve the detection accuracy of 174 concepts from 0.2513 to 0.2924.

Journal ArticleDOI
TL;DR: It is proved theoretically that the OR problem with the block-quantized kernel matrix K̃ can be solved by first separating the data samples in the training set into k clusters with kernel k-means and then performing SVOR on the k cluster representatives.
Abstract: Support vector ordinal regression (SVOR) is a recently proposed ordinal regression (OR) algorithm. Despite its theoretical and empirical success, the method has one major bottleneck: its high computational complexity. In this brief, we propose an algorithm with both practical and theoretical guarantees, block-quantized support vector ordinal regression (BQSVOR), in which we approximate the kernel matrix K with a matrix K̃ composed of k^2 constant blocks. We provide detailed theoretical justification of the approximation accuracy of BQSVOR. Moreover, we prove theoretically that the OR problem with the block-quantized kernel matrix K̃ can be solved by first separating the data samples in the training set into k clusters with kernel k-means and then performing SVOR on the k cluster representatives. Hence, the algorithm leads to an optimization problem that scales only with the number of clusters, instead of the data set size. Finally, experiments on several real-world data sets support the preceding analysis and demonstrate that BQSVOR improves the speed of SVOR significantly with guaranteed accuracy.
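The block-quantization step itself is straightforward to sketch: cluster the training points and replace every kernel entry with the kernel value between the two cluster representatives, so the kernel matrix takes only k^2 distinct values. A toy illustration with an RBF kernel and ordinary k-means standing in for the paper's kernel k-means:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 5))

k = 10  # number of clusters -> K~ has k^2 constant blocks
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
reps, labels = km.cluster_centers_, km.labels_

# Block-quantized kernel: entry (i, j) is the kernel value between the
# representatives of the clusters of points i and j, so the downstream
# problem scales with k rather than with the data set size.
K_reps = rbf_kernel(reps, reps)
K_tilde = K_reps[np.ix_(labels, labels)]

K_exact = rbf_kernel(X, X)
print("mean abs approximation error:", np.abs(K_exact - K_tilde).mean())
```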

Journal Article
TL;DR: In this paper, a student satisfaction questionnaire was applied to a total of 314 university students, consisting of 208 female and 106 male students, and satisfaction was measured by asking students to respond to 19 questionnaire items.
Abstract: Measuring student satisfaction is an important issue, especially for university administration, in order to improve student services and opportunities. The major objective of this study is to provide a solution for this issue. Consequently, student satisfaction has been measured with an ordered five-point Likert scale. A student satisfaction questionnaire was applied to a total of 314 university students, consisting of 208 female and 106 male students, and satisfaction was measured by asking students to respond to 19 questionnaire items. Ordinal regression and artificial neural network methods were applied to the collected data, which emphasized the differences between the two methods in terms of the correct classification percentages.

Journal ArticleDOI
TL;DR: Results indicate that operating unit size is related positively to the level of consideration for ABC, which implies that the availability of financial, labour, computing and time resources should mean that it is more likely for operating units to be considering or have considered ABC.
Abstract: Prior research into activity-based costing (ABC) has typically examined only whether operating units have or have not considered ABC. This paper uses logistic ordinal regression analysis to examine the impact of the level of competition, product customization, manufacturing overhead costs and operating unit size on the level of consideration of ABC, measured on a three-point ordinal scale ranging from not considered, through considering, to considered ABC. The results indicate that operating unit size is related positively to the level of consideration of ABC. This implies that the availability of financial, labour, computing and time resources should make it more likely for operating units to be considering or to have considered ABC.

Journal ArticleDOI
TL;DR: In this paper, the authors compared results from different staging models and ordinal regression with biopsy data and found that the ordinal analysis outperformed other non-invasive fibrosis prediction models.