Author

David K. Levine

Bio: David K. Levine is an academic researcher from Washington University in St. Louis. The author has contributed to research in topics including repeated games and monopoly. The author has an h-index of 66 and has co-authored 358 publications receiving 22,455 citations. Previous affiliations of David K. Levine include the Federal Reserve Bank of Minneapolis and the Federal Reserve Bank of St. Louis.


Papers
Book
01 Jan 1998
TL;DR: Fudenberg and Levine develop an alternative explanation in which equilibrium arises as the long-run outcome of a process in which less than fully rational players grope for optimality over time.
Abstract: In economics, most noncooperative game theory has focused on equilibrium in games, especially Nash equilibrium and its refinements. The traditional explanation for when and why equilibrium arises is that it results from analysis and introspection by the players in a situation where the rules of the game, the rationality of the players, and the players' payoff functions are all common knowledge. Both conceptually and empirically, this theory has many problems. In The Theory of Learning in Games Drew Fudenberg and David Levine develop an alternative explanation that equilibrium arises as the long-run outcome of a process in which less than fully rational players grope for optimality over time. The models they explore provide a foundation for equilibrium theory and suggest useful ways for economists to evaluate and modify traditional equilibrium concepts.
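To make the learning dynamics concrete, here is a minimal illustrative sketch of fictitious play in a 2x2 coordination game: each player best-responds to the empirical frequency of the opponent's past actions. The payoff matrix, initialization, and number of rounds are our own hypothetical choices for illustration, not taken from the book.

```python
import numpy as np

# Hypothetical symmetric 2x2 coordination game: entry [a_self, a_opponent]
# is the payoff to a player choosing a_self against an opponent choosing a_opponent.
PAYOFF = np.array([[2.0, 0.0],
                   [0.0, 1.0]])

def best_response(opponent_freq):
    """Best reply to the empirical distribution of the opponent's past actions."""
    expected = PAYOFF @ opponent_freq
    return int(np.argmax(expected))

def fictitious_play(rounds=500):
    counts = [np.ones(2), np.ones(2)]            # action counts, uniform prior
    for _ in range(rounds):
        beliefs = [c / c.sum() for c in counts]
        a0 = best_response(beliefs[1])           # player 0 replies to player 1's history
        a1 = best_response(beliefs[0])           # player 1 replies to player 0's history
        counts[0][a0] += 1
        counts[1][a1] += 1
    return [c / c.sum() for c in counts]

if __name__ == "__main__":
    freqs = fictitious_play()
    print("empirical action frequencies:", freqs)  # concentrates on a Nash equilibrium
```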

3,254 citations

Journal ArticleDOI
TL;DR: Zheng et al. developed gdsfmt and SNPRelate (R packages for multi-core symmetric multiprocessing computer architectures) to accelerate two key computations on SNP data: principal component analysis and relatedness analysis using identity-by-descent measures.
Abstract: Summary: Genome-wide association studies are widely used to investigate the genetic basis of diseases and traits, but they pose many computational challenges. We developed gdsfmt and SNPRelate (R packages for multi-core symmetric multiprocessing computer architectures) to accelerate two key computations on SNP data: principal component analysis (PCA) and relatedness analysis using identity-by-descent measures. The kernels of our algorithms are written in C/C++ and highly optimized. Benchmarks show the uniprocessor implementations of PCA and identity-by-descent are ~8–50 times faster than the implementations provided in the popular EIGENSTRAT (v3.0) and PLINK (v1.07) programs, respectively, and can be sped up to 30–300-fold by using eight cores. SNPRelate can analyse tens of thousands of samples with millions of SNPs. For example, our package was used to perform PCA on 55 324 subjects from the ‘Gene-Environment Association Studies’ consortium studies. Availability and implementation: gdsfmt and SNPRelate are available from R CRAN ( http://cran.r-project.org), including a vignette. A tutorial can be found at https://www.genevastudy.org/Accomplishments/software. Contact: zhengx@u.washington.edu
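As a rough illustration of the principal component computation described above (a generic numpy sketch, not the gdsfmt/SNPRelate R API), the following centers and scales a toy genotype matrix and extracts the leading principal components of the samples; the matrix size and normalization choices are placeholder assumptions.

```python
import numpy as np

# Hypothetical toy data: rows are samples, columns are SNPs coded 0/1/2.
rng = np.random.default_rng(0)
genotypes = rng.integers(0, 3, size=(100, 1000)).astype(float)

# Center each SNP and scale by an estimate of its standard deviation,
# a common normalization for genotype PCA.
freq = genotypes.mean(axis=0) / 2.0
centered = genotypes - 2.0 * freq
scale = np.sqrt(2.0 * freq * (1.0 - freq))
scale[scale == 0] = 1.0              # avoid dividing by zero for monomorphic SNPs
normalized = centered / scale

# Sample-by-sample covariance matrix; its top eigenvectors give the
# leading principal components of the samples.
grm = normalized @ normalized.T / normalized.shape[1]
eigvals, eigvecs = np.linalg.eigh(grm)
top_pcs = eigvecs[:, ::-1][:, :2]    # two leading PCs, one row per sample
print(top_pcs.shape)                 # (100, 2)
```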

1,738 citations

Journal ArticleDOI
TL;DR: In this article, the author examines a simple theory of altruism in which players' payoffs are linear in their own monetary income and in their opponents' incomes; the altruism coefficient is private information and varies in the population, so behavior depends on what the opponent's coefficient is believed to be.
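A minimal way to write such a linear altruism payoff (an illustrative form consistent with the summary above, not necessarily the exact specification in the paper) is:

```latex
% Player i's adjusted utility: own income plus a weighted sum of opponents' incomes,
% where the altruism coefficient a_i is private information and varies across players.
v_i \;=\; u_i \;+\; a_i \sum_{j \neq i} u_j , \qquad a_i \in (-1,\, 1)
```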

1,415 citations

Journal ArticleDOI
TL;DR: In this paper, the authors develop a theory of general equilibrium with endogenous debt limits in the form of individual rationality constraints similar to those in the dynamic consistency literature: if an agent defaults on a contract, he can be excluded from future contingent claims markets trading and can have his assets seized.
Abstract: We develop a theory of general equilibrium with endogenous debt limits in the form of individual rationality constraints similar to those in the dynamic consistency literature. If an agent defaults on a contract, he can be excluded from future contingent claims markets trading and can have his assets seized. He cannot be excluded from spot markets trading, however, and he has some private endowments that cannot be seized. All information is publicly held and common knowledge, and there is a complete set of contingent claims markets. Since there is complete information, an agent cannot enter into a contract in which he would have an incentive to default in some state. In general there is only partial insurance: variations in consumption may be imperfectly correlated across agents; interest rates may be lower than they would be without constraints; and equilibria may be Pareto ranked.
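The endogenous debt limits can be summarized by a participation constraint of roughly the following form (an illustrative rendering of the individual rationality constraints described above, not a quotation from the paper): at every date and state, the continuation value of honoring the contract must be at least the value of defaulting and living off the unseizable private endowment.

```latex
% Participation constraint for agent i at every date t and history s^t:
% continuation utility under the allocation c_i must weakly exceed the utility
% of default, here proxied by consuming the unseizable endowment e_i thereafter.
\sum_{\tau \ge t} \beta^{\tau - t}\, \mathbb{E}\!\left[ u\!\left(c_{i,\tau}\right) \mid s^{t} \right]
\;\ge\;
\sum_{\tau \ge t} \beta^{\tau - t}\, \mathbb{E}\!\left[ u\!\left(e_{i,\tau}\right) \mid s^{t} \right]
```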

920 citations

Journal ArticleDOI
TL;DR: It is proposed that a simple "dual-self" model gives a unified explanation for several empirical regularities, including the apparent time inconsistency that has motivated models of quasi-hyperbolic discounting and Rabin’s paradox of risk aversion in the large and small.
Abstract: We propose that a simple "dual-self" model gives a unified explanation for several empirical regularities, including the apparent time-inconsistency that has motivated models of hyperbolic discounting and Rabin's paradox of risk aversion in the large and small. The model also implies that self-control costs imply excess delay, as in the O'Donoghue and Rabin models of hyperbolic utility, and it explains experimental evidence that increased cognitive load makes temptations harder to resist. Finally, the reduced form of the base version of our model is consistent with the Gul-Pesendorfer axioms.
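For reference, the quasi-hyperbolic (beta-delta) discounting that the dual-self model is offered as an alternative to values a utility stream as follows (a standard formulation; the notation is ours, not the paper's):

```latex
% Quasi-hyperbolic (beta-delta) preferences evaluated at date t:
% immediate utility is undiscounted, while all future utilities carry an extra
% present-bias factor beta in addition to exponential discounting by delta.
U_t \;=\; u_t \;+\; \beta \sum_{k \ge 1} \delta^{k}\, u_{t+k},
\qquad 0 < \beta \le 1, \quad 0 < \delta < 1
```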

870 citations


Cited by
Journal ArticleDOI
Simon Haykin
TL;DR: Following the discussion of interference temperature as a new metric for the quantification and management of interference, the paper addresses three fundamental cognitive tasks: radio-scene analysis, channel-state estimation and predictive modeling, and the emergent behavior of cognitive radio.
Abstract: Cognitive radio is viewed as a novel approach for improving the utilization of a precious natural resource: the radio electromagnetic spectrum. The cognitive radio, built on a software-defined radio, is defined as an intelligent wireless communication system that is aware of its environment and uses the methodology of understanding-by-building to learn from the environment and adapt to statistical variations in the input stimuli, with two primary objectives in mind: highly reliable communication whenever and wherever needed, and efficient utilization of the radio spectrum. Following the discussion of interference temperature as a new metric for the quantification and management of interference, the paper addresses three fundamental cognitive tasks: 1) radio-scene analysis; 2) channel-state estimation and predictive modeling; 3) transmit-power control and dynamic spectrum management. This work also discusses the emergent behavior of cognitive radio.
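As a toy illustration of the radio-scene analysis task (our own simplified sketch, not an algorithm from the paper), an energy detector declares a channel occupied when the measured signal energy exceeds a threshold calibrated to the noise floor:

```python
import numpy as np

def channel_occupied(samples, noise_power, threshold_db=3.0):
    """Simple energy detector: compare average sample energy to the noise floor.

    samples: complex baseband samples from one channel
    noise_power: estimated noise power (same units as |samples|^2)
    threshold_db: margin above the noise floor that counts as 'occupied'
    """
    energy = np.mean(np.abs(samples) ** 2)
    threshold = noise_power * 10 ** (threshold_db / 10.0)
    return energy > threshold

# Hypothetical usage: a busy channel (signal plus noise) vs. an idle one (noise only).
rng = np.random.default_rng(1)
noise = (rng.normal(size=1024) + 1j * rng.normal(size=1024)) / np.sqrt(2)
signal = 2.0 * np.exp(2j * np.pi * 0.1 * np.arange(1024))
print(channel_occupied(noise + signal, noise_power=1.0))  # True: channel in use
print(channel_occupied(noise, noise_power=1.0))           # False: spectrum hole
```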

12,172 citations

Journal ArticleDOI
05 Mar 2007
TL;DR: A theoretical framework for analysis of consensus algorithms for multi-agent networked systems with an emphasis on the role of directed information flow, robustness to changes in network topology due to link/node failures, time-delays, and performance guarantees is provided.
Abstract: This paper provides a theoretical framework for analysis of consensus algorithms for multi-agent networked systems with an emphasis on the role of directed information flow, robustness to changes in network topology due to link/node failures, time-delays, and performance guarantees. An overview of basic concepts of information consensus in networks and methods of convergence and performance analysis for the algorithms are provided. Our analysis framework is based on tools from matrix theory, algebraic graph theory, and control theory. We discuss the connections between consensus problems in networked dynamic systems and diverse applications including synchronization of coupled oscillators, flocking, formation control, fast consensus in small-world networks, Markov processes and gossip-based algorithms, load balancing in networks, rendezvous in space, distributed sensor fusion in sensor networks, and belief propagation. We establish direct connections between spectral and structural properties of complex networks and the speed of information diffusion of consensus algorithms. A brief introduction is provided on networked systems with nonlocal information flow that are considerably faster than distributed systems with lattice-type nearest neighbor interactions. Simulation results are presented that demonstrate the role of small-world effects on the speed of consensus algorithms and cooperative control of multivehicle formations.
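To make the consensus dynamics concrete, here is a minimal sketch (our illustration on an arbitrary small graph, not code from the paper) of the standard discrete-time consensus iteration x(k+1) = x(k) - epsilon * L x(k), where L is the graph Laplacian; on a connected undirected graph all agents converge to the average of the initial values.

```python
import numpy as np

# Arbitrary connected undirected graph on 4 agents (adjacency matrix).
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A          # graph Laplacian

x = np.array([1.0, 5.0, -2.0, 4.0])     # initial agent values
epsilon = 0.25                          # step size below 1/max_degree ensures convergence

for _ in range(200):
    x = x - epsilon * (L @ x)           # each agent moves toward its neighbors' values

print(x)                                # all entries close to the initial average
print(np.mean([1.0, 5.0, -2.0, 4.0]))   # 2.0
```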

9,715 citations

Journal ArticleDOI
TL;DR: This paper showed that if some people care about equity, the puzzles can be resolved and that the economic environment determines whether the fair types or the selfish types dominate equilibrium behavior in cooperative games.
Abstract: There is strong evidence that people exploit their bargaining power in competitive markets but not in bilateral bargaining situations. There is also strong evidence that people exploit free-riding opportunities in voluntary cooperation games. Yet, when they are given the opportunity to punish free riders, stable cooperation is maintained, although punishment is costly for those who punish. This paper asks whether there is a simple common principle that can explain this puzzling evidence. We show that if some people care about equity the puzzles can be resolved. It turns out that the economic environment determines whether the fair types or the selfish types dominate equilibrium behavior.
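The model of equity concerns underlying this argument is commonly written, in the two-player case, as an inequity-averse utility of the following form (a standard formulation in the fairness literature, stated here as background rather than quoted from the abstract):

```latex
% Two-player inequity-aversion utility: player i dislikes disadvantageous
% inequality (weight alpha_i) more than advantageous inequality (weight beta_i).
U_i(x) \;=\; x_i \;-\; \alpha_i \max\{x_j - x_i,\, 0\} \;-\; \beta_i \max\{x_i - x_j,\, 0\},
\qquad \beta_i \le \alpha_i, \quad 0 \le \beta_i < 1
```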

8,783 citations

Journal ArticleDOI
TL;DR: An overview of the CHARMM program as it exists today is provided with an emphasis on developments since the publication of the original CHARMM article in 1983.
Abstract: CHARMM (Chemistry at HARvard Molecular Mechanics) is a highly versatile and widely used molecular simulation program. It has been developed over the last three decades with a primary focus on molecules of biological interest, including proteins, peptides, lipids, nucleic acids, carbohydrates, and small molecule ligands, as they occur in solution, crystals, and membrane environments. For the study of such systems, the program provides a large suite of computational tools that include numerous conformational and path sampling methods, free energy estimators, molecular minimization, dynamics, and analysis techniques, and model-building capabilities. The CHARMM program is applicable to problems involving a much broader class of many-particle systems. Calculations with CHARMM can be performed using a number of different energy functions and models, from mixed quantum mechanical-molecular mechanical force fields, to all-atom classical potential energy functions with explicit solvent and various boundary conditions, to implicit solvent and membrane models. The program has been ported to numerous platforms in both serial and parallel architectures. This article provides an overview of the program as it exists today with an emphasis on developments since the publication of the original CHARMM article in 1983.
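As a generic illustration of the kind of dynamics calculation such a package performs (a toy numpy sketch of one velocity-Verlet integration step for a single harmonic bond, not CHARMM code, its force field, or its API), see below:

```python
import numpy as np

def harmonic_bond_force(positions, k=100.0, r0=1.0):
    """Forces on two particles joined by a harmonic bond U = 0.5*k*(r - r0)^2."""
    d = positions[1] - positions[0]
    r = np.linalg.norm(d)
    f = -k * (r - r0) * d / r          # force on particle 1; particle 0 gets -f
    return np.array([-f, f])

def velocity_verlet_step(positions, velocities, masses, dt=0.001):
    """One velocity-Verlet step, a standard integrator in molecular dynamics codes."""
    forces = harmonic_bond_force(positions)
    velocities = velocities + 0.5 * dt * forces / masses[:, None]
    positions = positions + dt * velocities
    forces = harmonic_bond_force(positions)
    velocities = velocities + 0.5 * dt * forces / masses[:, None]
    return positions, velocities

# Hypothetical two-atom system slightly stretched past its equilibrium bond length.
pos = np.array([[0.0, 0.0, 0.0], [1.2, 0.0, 0.0]])
vel = np.zeros((2, 3))
m = np.array([1.0, 1.0])
pos, vel = velocity_verlet_step(pos, vel, m)
print(pos)   # the bond starts to contract toward r0 = 1.0
```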

7,035 citations

Book
01 Jan 1994
TL;DR: A Course in Game Theory presents the main ideas of game theory at a level suitable for graduate students and advanced undergraduates, emphasizing the theory's foundations and interpretations of its basic concepts.
Abstract: A Course in Game Theory presents the main ideas of game theory at a level suitable for graduate students and advanced undergraduates, emphasizing the theory's foundations and interpretations of its basic concepts. The authors provide precise definitions and full proofs of results, sacrificing generalities and limiting the scope of the material in order to do so. The text is organized in four parts: strategic games, extensive games with perfect information, extensive games with imperfect information, and coalitional games. It includes over 100 exercises.

7,018 citations