Author

Jacob Marschak

Bio: Jacob Marschak is an academic researcher from the University of California, Los Angeles. He has contributed to research on topics including measures of national income and output and interest rates. He has an h-index of 29 and has co-authored 65 publications receiving 9,285 citations. Previous affiliations of Jacob Marschak include the University of Chicago and the University of California.


Papers
Journal ArticleDOI
TL;DR: A sequential experiment is described that provides, at each stage in the sequence, an estimate of the utility to the subject of some amount of a commodity, and a few experimental results obtained with the method are presented.
Abstract: The purpose of this paper is to describe a sequential experiment that provides, at each stage in the sequence, an estimate of the utility to the subject of some amount of a commodity (e.g., money), and to present a few experimental results obtained with the method. The procedure is based upon the following well-known ‘expected utility hypothesis’. For each person there exist numerical constants, called utilities, associated with the various possible outcomes of his actions, given the external events not under his control. If, for a given subject, we could know the values of these constants and the (‘personal’) probabilities he assigns to the various external events we could, according to this model, predict his choice from among any available set of actions. He will choose an action with the highest expected utility; i.e., with the highest average of utilities of outcomes, weighted by the probabilities he assigns to the corresponding events. He will be indifferent between any two actions with equal expected utilities. Note that (by the nature of weighted averages) the comparison between expected utilities does not depend on which two particular outcomes are regarded as having zero-utility and unit-utility.
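A minimal sketch of the choice rule the abstract describes, with hypothetical utilities, events, and personal probabilities (none taken from the paper's experiment):

```python
# Expected-utility choice rule: pick the action whose probability-weighted
# average of outcome utilities is highest. All numbers are illustrative.

def expected_utility(outcome_utilities, probs):
    """Average of an action's outcome utilities, weighted by the
    subject's personal probabilities for the external events."""
    return sum(p * u for p, u in zip(probs, outcome_utilities))

# Each action maps the two possible external events to outcome utilities.
actions = {
    "bet":     [10.0, -5.0],  # utility if event A occurs, if event B occurs
    "abstain": [0.0, 0.0],
}
personal_probs = [0.6, 0.4]   # subject's probabilities for events A and B

# The model predicts the action with the highest expected utility.
choice = max(actions, key=lambda a: expected_utility(actions[a], personal_probs))
print(choice)  # "bet": 0.6*10 + 0.4*(-5) = 4.0 > 0.0
```

Because the comparison uses weighted averages, applying the same positive affine transformation to every utility leaves the predicted choice unchanged, which is the point of the zero-utility and unit-utility remark at the end of the abstract.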

2,426 citations

Book
01 Jan 1972

1,355 citations

Journal ArticleDOI
TL;DR: In this article, the authors outline a method for deriving optimal rules of inventory policy for finished goods, a concept that extends to goods which can be transformed, at a cost, into one or more kinds of finished goods.
Abstract: We propose to outline a method for deriving optimal rules of inventory policy for finished goods. The problem of inventories exists not only for business enterprises but also for nonprofit agencies such as governmental establishments and their various branches. Moreover, the concept of inventories can be generalized so as to include not only goods but also disposable reserves of manpower as well as various stand-by devices. Also, while inventories of finished goods present the simplest problem, the concept can be extended to goods which can be transformed, at a cost, into one or more kinds of finished goods if and when
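The abstract stops short of the derivation. As a stylized one-period instance of the stocking problem it poses, the classic critical-fractile ("newsvendor") computation below balances shortage against holding costs; this is a simplified special case, not the paper's general method, and every parameter is hypothetical:

```python
# One-period stocking sketch: stock up to the demand quantile where the
# marginal costs of shortage and of leftover stock balance. Demand is
# assumed normal(100, 20) purely for illustration.
from statistics import NormalDist

shortage_cost = 4.0  # cost per unit of demand that goes unmet
holding_cost = 1.0   # cost per unit of stock left over

# Optimal stock S* satisfies F(S*) = shortage / (shortage + holding).
critical_fractile = shortage_cost / (shortage_cost + holding_cost)
optimal_stock = NormalDist(mu=100, sigma=20).inv_cdf(critical_fractile)
print(f"stock up to {optimal_stock:.1f} units (fractile {critical_fractile:.2f})")
```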

885 citations

Journal ArticleDOI

770 citations

Book ChapterDOI
TL;DR: In interpreting human behavior there is a need to substitute 'stochastic consistency of choices' for 'absolute consistency of choices', as discussed by the authors; the latter is usually assumed in economic theory but is not well supported by experience.
Abstract: In interpreting human behavior there is a need to substitute ‘stochastic consistency of choices’ for ‘absolute consistency of choices’. The latter is usually assumed in economic theory, but is not well supported by experience. It is, in fact, not assumed in empirical econometrics and psychology.
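To make the contrast concrete, here is a small sketch of one standard stochastic-choice form, a logistic (Luce/Fechner-style) rule; this is an illustrative model, not necessarily the one developed in the chapter:

```python
# Absolute consistency: a is chosen whenever u(a) > u(b), always.
# Stochastic consistency: a is merely chosen more often the larger the
# utility gap. The logistic form below is one common way to model this.
import math

def prob_choose_a(utility_a, utility_b, noise=1.0):
    """Probability of choosing a over b, rising smoothly with the gap."""
    return 1.0 / (1.0 + math.exp(-(utility_a - utility_b) / noise))

print(prob_choose_a(2.0, 1.0))  # ~0.73, not 1.0 as absolute consistency demands
print(prob_choose_a(1.0, 2.0))  # ~0.27, not 0.0
```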

525 citations


Cited by
01 Jan 1967
TL;DR: The k-means procedure, as described in this paper, partitions an N-dimensional population into k sets on the basis of a sample; the k-means are a generalization of the ordinary sample mean, and the method is shown to give partitions which are reasonably efficient in the sense of within-class variance.
Abstract: The main purpose of this paper is to describe a process for partitioning an N-dimensional population into k sets on the basis of a sample. The process, which is called 'k-means,' appears to give partitions which are reasonably efficient in the sense of within-class variance. That is, if $p$ is the probability mass function for the population, $S = \{S_1, S_2, \ldots, S_k\}$ is a partition of $E_N$, and $u_i$, $i = 1, 2, \ldots, k$, is the conditional mean of $p$ over the set $S_i$, then $w^2(S) = \sum_{i=1}^{k} \int_{S_i} |z - u_i|^2 \, dp(z)$ tends to be low for the partitions $S$ generated by the method. We say 'tends to be low,' primarily because of intuitive considerations, corroborated to some extent by mathematical analysis and practical computational experience. Also, the k-means procedure is easily programmed and is computationally economical, so that it is feasible to process very large samples on a digital computer. Possible applications include methods for similarity grouping, nonlinear prediction, approximating multivariate distributions, and nonparametric tests for independence among several variables. In addition to suggesting practical classification methods, the study of k-means has proved to be theoretically interesting. The k-means concept represents a generalization of the ordinary sample mean, and one is naturally led to study the pertinent asymptotic behavior, the object being to establish some sort of law of large numbers for the k-means. This problem is sufficiently interesting, in fact, for us to devote a good portion of this paper to it. The k-means are defined in section 2.1, and the main results which have been obtained on the asymptotic behavior are given there. The rest of section 2 is devoted to the proofs of these results. Section 3 describes several specific possible applications, and reports some preliminary results from computer experiments conducted to explore the possibilities inherent in the k-means idea. The extension to general metric spaces is indicated briefly in section 4. The original point of departure for the work described here was a series of problems in optimal classification (MacQueen [9]) which represented special
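A compact sketch of the partitioning idea follows. It uses the batch (Lloyd-style) iteration rather than MacQueen's sequential updating of the means, but it still exhibits the partition $S$ and the within-class variance $w^2(S)$ defined above; data and parameters are synthetic:

```python
# Batch k-means sketch on synthetic 2-D data: alternate between assigning
# points to the nearest mean and recomputing each mean as its cluster's
# centroid (the "generalized sample mean" of the abstract).
import random

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def centroid(cluster):
    n = len(cluster)
    return tuple(sum(coords) / n for coords in zip(*cluster))

def kmeans(points, k, iters=20):
    means = random.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for z in points:
            nearest = min(range(k), key=lambda j: dist2(z, means[j]))
            clusters[nearest].append(z)
        # Keep the old mean if a cluster happens to come up empty.
        means = [centroid(c) if c else means[i] for i, c in enumerate(clusters)]
    return means, clusters

def within_class_variance(means, clusters):
    """w^2(S): summed squared distance of each point to its class mean."""
    return sum(dist2(z, m) for m, c in zip(means, clusters) for z in c)

points = [(random.gauss(cx, 0.5), random.gauss(cy, 0.5))
          for cx, cy in [(0, 0), (5, 5)] for _ in range(50)]
means, clusters = kmeans(points, k=2)
print(within_class_variance(means, clusters))
```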

24,320 citations

Journal ArticleDOI
TL;DR: Cumulative prospect theory, as discussed by the authors, applies to uncertain as well as to risky prospects with any number of outcomes and allows different weighting functions for gains and for losses; two principles, diminishing sensitivity and loss aversion, are invoked to explain the characteristic curvature of the value function and the weighting functions.
Abstract: We develop a new version of prospect theory that employs cumulative rather than separable decision weights and extends the theory in several respects. This version, called cumulative prospect theory, applies to uncertain as well as to risky prospects with any number of outcomes, and it allows different weighting functions for gains and for losses. Two principles, diminishing sensitivity and loss aversion, are invoked to explain the characteristic curvature of the value function and the weighting functions. A review of the experimental evidence and the results of a new experiment confirm a distinctive fourfold pattern of risk attitudes: risk aversion for gains and risk seeking for losses of high probability; risk seeking for gains and risk aversion for losses of low probability. Expected utility theory reigned for several decades as the dominant normative and descriptive model of decision making under uncertainty, but it has come under serious question in recent years. There is now general agreement that the theory does not provide an adequate description of individual choice: a substantial body of evidence shows that decision makers systematically violate its basic tenets. Many alternative models have been proposed in response to this empirical challenge (for reviews, see Camerer, 1989; Fishburn, 1988; Machina, 1987). Some time ago we presented a model of choice, called prospect theory, which explained the major violations of expected utility theory in choices between risky prospects with a small number of outcomes (Kahneman and Tversky, 1979; Tversky and Kahneman, 1986). The key elements of this theory are 1) a value function that is concave for gains, convex for losses, and steeper for losses than for gains, and 2) a nonlinear transformation of the probability scale, which overweights small probabilities and underweights moderate and high probabilities.
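As an illustration of the two key elements, the sketch below implements a value function and an inverse-S weighting function using the median parameter estimates reported in Tversky and Kahneman (1992); the two-outcome evaluation is a simplified special case of the cumulative formula:

```python
# CPT building blocks: S-shaped value function (concave for gains, convex
# and steeper for losses) and inverse-S probability weighting function.
# Parameters are the published 1992 median estimates, used illustratively.

def value(x, alpha=0.88, lam=2.25):
    return x ** alpha if x >= 0 else -lam * (-x) ** alpha

def weight(p, gamma=0.61):
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

def cpt_two_outcome_gain(x1, p1, x2):
    """Gain prospect with x1 > x2 >= 0: cumulative weights reduce to
    w(p1) on the better outcome and 1 - w(p1) on the worse one."""
    w1 = weight(p1)
    return w1 * value(x1) + (1 - w1) * value(x2)

print(value(100), value(-100))   # loss aversion: |v(-100)| > v(100)
print(weight(0.01), weight(0.99))  # small p overweighted, large p underweighted
# ~7.6, well above value(5) ~= 4.1 for the expected payoff of 5:
# risk seeking for low-probability gains, one cell of the fourfold pattern.
print(cpt_two_outcome_gain(100, 0.05, 0))
```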

13,433 citations

Journal ArticleDOI
TL;DR: Models are proposed that show how organizations can be designed to meet the information needs of technology, interdepartmental relations, and the environment to both reduce uncertainty and resolve equivocality.
Abstract: This paper answers the question, "Why do organizations process information?" Uncertainty and equivocality are defined as two forces that influence information processing in organizations. Organization structure and internal systems determine both the amount and richness of information provided to managers. Models are proposed that show how organizations can be designed to meet the information needs of technology, interdepartmental relations, and the environment. One implication for managers is that a major problem is lack of clarity, not lack of data. The models indicate how organizations can be designed to provide information mechanisms to both reduce uncertainty and resolve equivocality.

8,674 citations

Book
01 Jan 2003
TL;DR: In this book, the authors describe the new generation of discrete choice methods, focusing on the many advances that are made possible by simulation, and compare simulation-assisted estimation procedures, including maximum simulated likelihood, method of simulated moments, and method of simulated scores.
Abstract: This book describes the new generation of discrete choice methods, focusing on the many advances that are made possible by simulation. Researchers use these statistical methods to examine the choices that consumers, households, firms, and other agents make. Each of the major models is covered: logit, generalized extreme value, or GEV (including nested and cross-nested logits), probit, and mixed logit, plus a variety of specifications that build on these basics. Simulation-assisted estimation procedures are investigated and compared, including maximum simulated likelihood, method of simulated moments, and method of simulated scores. Procedures for drawing from densities are described, including variance reduction techniques such as antithetic and Halton draws. Recent advances in Bayesian procedures are explored, including the use of the Metropolis-Hastings algorithm and its variant Gibbs sampling. No other book incorporates all these fields, which have arisen in the past 20 years. The procedures are applicable in many fields, including energy, transportation, environmental studies, health, labor, and marketing.
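As a taste of the base model in this family, here is a minimal multinomial logit sketch; the alternatives and representative utilities are hypothetical, and real applications estimate them from choice data:

```python
# Multinomial logit: choice probabilities are exponentiated representative
# utilities, normalized over the choice set.
import math

def logit_probs(utilities):
    m = max(utilities)  # subtract the max for numerical stability
    exps = [math.exp(v - m) for v in utilities]
    total = sum(exps)
    return [e / total for e in exps]

# Illustrative representative utilities for three travel modes.
V = {"car": 1.2, "bus": 0.4, "train": 0.9}
for mode, p in zip(V, logit_probs(list(V.values()))):
    print(mode, round(p, 3))
```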

7,768 citations

Posted Content
TL;DR: In this paper, a population ecology model applicable to business-related organizational analyses is derived by compiling elements of several theories, including competition theory and niche theory, to address factors not encompassed by ecological theory.
Abstract: Factors impacting the organizational structure of firms have often been analyzed using organization theory. However, several other theories and perspectives have been proposed as potential alternative means of analyzing organizational structure and functioning. While previous studies of organizational structure have utilized such perspectives as adaptation and exchange theory, few studies have utilized population ecology theory, thus motivating the current study. Although population ecology theory is most often used in the biological sciences, many of its principles lend themselves well to organizational analysis. Due to internal structural arrangements (e.g. information constraints, political constraints) and environmental pressures (e.g. legal and fiscal barriers, legitimacy), the inflexibility of an organization limits the usefulness of an adaptation perspective for organizational analysis. The challenges and discontinuities associated with utilizing an ecological perspective are identified, including issues related to the primary sources of change (selection and adaptive learning) and to differentiating between selection and viability. Utilizing competition theory and niche theory, several models for analyzing organizational diversity are incorporated to address factors not encompassed by ecological theory. By compiling elements of several theories, a population ecology model applicable to business-related organizational analyses is derived.

6,537 citations