Variable (computer science)
About: Variable (computer science) is a research topic. Over its lifetime, 7,479 publications have been published within this topic, receiving 160,725 citations. The topic is also known as: assignable & mutable variable.
TL;DR: The CCR ratio form introduced by Charnes, Cooper and Rhodes, as part of their Data Envelopment Analysis approach, captures both technical and scale inefficiencies via the optimal value of the ratio form, obtained directly from the data without requiring a priori specification of weights or an explicit, assumed functional form relating inputs to outputs.
Abstract: In management contexts, mathematical programming is usually used to evaluate a collection of possible alternative courses of action en route to selecting one which is best. In this capacity, mathematical programming serves as a planning aid to management. Data Envelopment Analysis reverses this role and employs mathematical programming to obtain ex post facto evaluations of the relative efficiency of management accomplishments, however they may have been planned or executed. Mathematical programming is thereby extended for use as a tool for control and evaluation of past accomplishments as well as a tool to aid in planning future activities. The CCR ratio form introduced by Charnes, Cooper and Rhodes, as part of their Data Envelopment Analysis approach, comprehends both technical and scale inefficiencies via the optimal value of the ratio form, as obtained directly from the data without requiring a priori specification of weights and/or explicit delineation of assumed functional forms of relations between inputs and outputs. A separation into technical and scale efficiencies is accomplished by the methods developed in this paper without altering the latter conditions for use of DEA directly on observational data. Technical inefficiencies are identified with failures to achieve best possible output levels and/or usage of excessive amounts of inputs. Methods for identifying and correcting the magnitudes of these inefficiencies, as supplied in prior work, are illustrated. In the present paper, a new separate variable is introduced which makes it possible to determine whether operations were conducted in regions of increasing, constant or decreasing returns to scale in multiple input and multiple output situations. The results are discussed and related not only to classical single output economics but also to more modern versions of economics which are identified with "contestable market theories."
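The CCR score described above can be computed as a small linear program. The sketch below (a minimal illustration with made-up single-input, single-output data, not data from the paper) solves the input-oriented envelopment form; adding the convexity constraint on the intensity weights gives the variable-returns (BCC) score, and the ratio of the two is the usual scale-efficiency decomposition the paper discusses:

```python
# Input-oriented CCR and BCC envelopment LPs via scipy.
# Data are illustrative (3 DMUs, 1 input, 1 output), not from the paper.
import numpy as np
from scipy.optimize import linprog

def dea_efficiency(X, Y, o, vrs=False):
    """Input-oriented efficiency score of DMU o.

    X: (n_dmus, n_inputs), Y: (n_dmus, n_outputs).
    vrs=False -> CCR (constant returns to scale);
    vrs=True  -> BCC (adds the convexity constraint sum(lambda) = 1).
    """
    n, m = X.shape
    s = Y.shape[1]
    # Decision variables: [theta, lambda_1, ..., lambda_n]; minimize theta.
    c = np.r_[1.0, np.zeros(n)]
    # Input rows:  sum_j lambda_j x_ij - theta * x_io <= 0
    A_in = np.hstack([-X[o].reshape(m, 1), X.T])
    # Output rows: -sum_j lambda_j y_rj <= -y_ro
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[o]]
    A_eq = b_eq = None
    if vrs:
        A_eq = np.r_[0.0, np.ones(n)].reshape(1, -1)
        b_eq = [1.0]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun  # optimal theta

X = np.array([[2.0], [4.0], [8.0]])   # inputs
Y = np.array([[2.0], [4.0], [4.0]])   # outputs
ccr = [dea_efficiency(X, Y, o) for o in range(3)]
bcc = [dea_efficiency(X, Y, o, vrs=True) for o in range(3)]
scale = [c / b for c, b in zip(ccr, bcc)]  # scale efficiency = CCR / BCC
```

With these numbers the first two DMUs are CCR-efficient (theta = 1), while the third, operating at half the best output/input ratio, scores 0.5.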
01 Jan 1975
TL;DR: The authors attempt to answer the question: When is a random variable Y "more variable" than another random variable X?
Abstract: This paper attempts to answer the question: When is a random variable Y “more variable” than another random variable X?
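One standard way the stochastic-ordering literature makes "more variable" precise (not necessarily the paper's exact definition) is the convex order: Y is more variable than X if E[f(Y)] ≥ E[f(X)] for every convex function f. A quick numerical sanity check, using a synthetic pair where Y has the same mean as X but twice the spread:

```python
# Numerical illustration of the convex order: Y = 2X is "more variable" than X.
# Checking a handful of convex test functions is a sanity check, not a proof.
import numpy as np

rng = np.random.default_rng(0)
x = rng.choice([-1.0, 1.0], size=100_000)  # X: fair +/-1 coin, mean 0
y = 2 * x                                  # Y: same mean, twice the spread

convex_fns = [lambda t: t**2, np.abs, lambda t: np.maximum(t, 0.0)]
for f in convex_fns:
    assert f(y).mean() >= f(x).mean()  # E f(Y) >= E f(X) for each convex f
```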
TL;DR: This paper proposes simple quasi-likelihood methods for estimating regression models with a fractional dependent variable and for performing asymptotically valid inference, and applies them to a data set of employee participation rates in 401(k) pension plans.
Abstract: We offer simple quasi-likelihood methods for estimating regression models with a fractional dependent variable and for performing asymptotically valid inference. Compared with log-odds type procedures, there is no difficulty in recovering the regression function for the fractional variable, and there is no need to use ad hoc transformations to handle data at the extreme values of zero and one. We also offer some new, simple specification tests by nesting the logit or probit function in a more general functional form. We apply these methods to a data set of employee participation rates in 401(k) pension plans.
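The estimator at the heart of this approach maximizes a Bernoulli quasi-log-likelihood with a logit conditional mean. A minimal sketch on synthetic data (all variable names and the data-generating process are illustrative, not the paper's 401(k) application); because the objective is only a quasi-likelihood, the outcome y may take any value in [0, 1], including exactly 0 or 1:

```python
# Bernoulli quasi-MLE with a logit conditional mean: maximize
#   sum_i y_i * log G(x_i'b) + (1 - y_i) * log(1 - G(x_i'b)),  G = logistic cdf.
# In this synthetic construction y equals its conditional mean exactly,
# so the estimator should recover b_true.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit  # numerically stable logistic function

rng = np.random.default_rng(1)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])  # intercept + one regressor
b_true = np.array([0.5, -1.0])
y = expit(X @ b_true)  # fractional outcome in (0, 1)

def neg_qll(b):
    p = expit(X @ b)
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

def grad(b):
    return -X.T @ (y - expit(X @ b))  # score of the quasi-likelihood

b_hat = minimize(neg_qll, np.zeros(2), jac=grad, method="BFGS").x
```

Unlike log-odds regressions on a transformed outcome, nothing here breaks when observed values hit the boundaries: the quasi-log-likelihood terms simply drop out at y = 0 or y = 1.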
TL;DR: The authors present the case that dichotomization is rarely defensible and often will yield misleading results.
Abstract: The authors examine the practice of dichotomization of quantitative measures, wherein relationships among variables are examined after 1 or more variables have been converted to dichotomous variables by splitting the sample at some point on the scale(s) of measurement. A common form of dichotomization is the median split, where the independent variable is split at the median to form high and low groups, which are then compared with respect to their means on the dependent variable. The consequences of dichotomization for measurement and statistical analyses are illustrated and discussed. The use of dichotomization in practice is described, and justifications that are offered for such usage are examined. The authors present the case that dichotomization is rarely defensible and often will yield misleading results.

We consider here some simple statistical procedures for studying relationships of one or more independent variables to one dependent variable, where all variables are quantitative in nature and are measured on meaningful numerical scales. Such measures are often referred to as individual-differences measures, meaning that observed values of such measures are interpretable as reflecting individual differences on the attribute of interest. It is of course straightforward to analyze such data using correlational methods. In the case of a single independent variable, one can use simple linear regression and/or obtain a simple correlation coefficient. In the case of multiple independent variables, one can use multiple regression, possibly including interaction terms. Such methods are routinely used in practice. However, another approach to analysis of such data is also rather widely used. Considering the case of one independent variable, many investigators begin by converting that variable into a dichotomous variable by splitting the scale at some point and designating individuals above and below that point as defining high and low groups.
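The information loss from a median split is easy to reproduce by simulation. The sketch below uses simulated normal data (not the authors' examples); for a normally distributed predictor, classical theory says the split shrinks the observed correlation by a factor of sqrt(2/pi) ≈ 0.80:

```python
# Simulated demonstration of the attenuation caused by a median split.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
x = rng.normal(size=n)                 # quantitative independent variable
y = 0.5 * x + rng.normal(size=n)       # dependent variable

d = (x > np.median(x)).astype(float)   # dichotomized "high/low" version of x

r_full = np.corrcoef(x, y)[0, 1]       # correlation using the full scale
r_split = np.corrcoef(d, y)[0, 1]      # point-biserial after the median split

# For normal x, theory predicts r_split ~= sqrt(2/pi) * r_full ~= 0.80 * r_full.
```

The simulated point-biserial correlation comes out noticeably smaller than the full-scale correlation, matching the attenuation the authors warn about.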