
Showing papers on "Replicator equation published in 2010"


Journal ArticleDOI
TL;DR: An evolutionary game framework is proposed to answer the question of "how to collaborate" in multiuser decentralized cooperative spectrum sensing, since evolutionary game theory provides an excellent means to address the strategic uncertainty that a user/player may face.
Abstract: Cooperative spectrum sensing has been shown to greatly improve the sensing performance in cognitive radio networks. However, if cognitive users belong to different service providers, they tend to contribute less in sensing in order to increase their own throughput. In this paper, we propose an evolutionary game framework to answer the question of "how to collaborate" in multiuser decentralized cooperative spectrum sensing. Evolutionary game theory provides an excellent means to address the strategic uncertainty that a user/player may face, by exploring different actions, adaptively learning during the strategic interactions, and approaching the best-response strategy under changing conditions and environments using replicator dynamics. We derive the behavior dynamics and the evolutionarily stable strategy (ESS) of the secondary users. We then prove that the dynamics converge to the ESS, which renders possible a decentralized implementation of the proposed sensing game. According to the dynamics, we further develop a distributed learning algorithm so that the secondary users approach the ESS solely based on their own payoff observations. Simulation results show that the average throughput achieved in the proposed cooperative sensing game is higher than in the case where secondary users sense the primary user individually without cooperation. The proposed game is demonstrated to converge to the ESS and to achieve a higher system throughput than the fully cooperative scenario, where all users contribute to sensing in every time slot.
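The convergence claim (behavior dynamics approaching the ESS) can be illustrated with a minimal replicator simulation. The two-strategy payoff matrix below is an assumed illustration, not the paper's actual sensing-game payoffs: contributors (C) and free-riders (D) evolve toward a mixed rest point.

```python
# Generic two-strategy replicator sketch; the payoff matrix is an assumed
# hawk-dove-like illustration, NOT the paper's sensing game.
import numpy as np

A = np.array([[0.6, 0.4],   # payoff of C against (C, D)
              [0.9, 0.1]])  # payoff of D against (C, D)

x = np.array([0.2, 0.8])    # initial shares of (C, D)
dt = 0.01
for _ in range(20000):
    f = A @ x               # expected payoff of each strategy
    phi = x @ f             # population-average payoff
    x = x + dt * x * (f - phi)   # replicator equation: dx_i = x_i (f_i - phi)

print(x.round(3))           # settles at the mixed rest point (0.5, 0.5)
```

For this matrix the interior rest point x* = (0.5, 0.5) is the ESS: each strategy earns more than the other when it is rare, so the mix is restored after any small perturbation.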

233 citations


Book ChapterDOI
TL;DR: This chapter reviews recent developments in stochastic evolutionary game dynamics of finite populations.
Abstract: In this review, we summarize recent developments in stochastic evolutionary game dynamics of finite populations.

144 citations


Posted Content
TL;DR: In this paper, an agent in a finite population chooses an action, is randomly matched, obtains a payoff and then observes the performance of another agent, and determines future behavior based on the information she receives from the present round.
Abstract: In consecutive rounds, each agent in a finite population chooses an action, is randomly matched, obtains a payoff and then observes the performance of another agent. An agent determines future behavior based on the information she receives from the present round. She chooses among the behavioral rules that increase expected payoffs in any specification of the matching scenario. The rule that outperforms all other such rules specifies to imitate the action of an agent that performed better, with probability proportional to how much better she performed. The evolution of a large population in which each agent uses this rule can be approximated in the short run by the replicator dynamics.

140 citations


Proceedings ArticleDOI
10 May 2010
TL;DR: Frequency Adjusted Q- (FAQ-) learning is proposed, a variation of Q-learning that perfectly adheres to the predictions of the evolutionary model for an arbitrarily large part of the policy space.
Abstract: Multi-agent learning is a crucial method to control or find solutions for systems, in which more than one entity needs to be adaptive. In today's interconnected world, such systems are ubiquitous in many domains, including auctions in economics, swarm robotics in computer science, and politics in social sciences. Multi-agent learning is inherently more complex than single-agent learning and has a relatively thin theoretical framework supporting it. Recently, multi-agent learning dynamics have been linked to evolutionary game theory, allowing the interpretation of learning as an evolution of competing policies in the mind of the learning agents. The dynamical system from evolutionary game theory that has been linked to Q-learning predicts the expected behavior of the learning agents. Closer analysis however allows for two interesting observations: the predicted behavior is not always the same as the actual behavior, and in case of deviation, the predicted behavior is more desirable. This discrepancy is elucidated in this article, and based on these new insights Frequency Adjusted Q- (FAQ-) learning is proposed. This variation of Q-learning perfectly adheres to the predictions of the evolutionary model for an arbitrarily large part of the policy space. In addition to the theoretical discussion, experiments in the three classes of two-agent two-action games illustrate the superiority of FAQ-learning.
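The frequency-adjustment idea can be sketched in a stateless setting. The parameter names and the simplification to a one-step stateless update are ours, not the paper's exact formulation: the Q-update for action i is scaled by min(β/x_i, 1), where x_i is the probability of choosing i, so the expected update per time step no longer depends on how often i is played.

```python
# Sketch of the frequency-adjustment idea behind FAQ-learning; alpha and
# beta are our illustrative parameter names, and the update is the
# stateless (single-state) simplification.
alpha, beta = 0.1, 0.05

def faq_step(q_i, x_i, reward):
    # frequency-adjusted Q-update for the chosen action i
    return q_i + min(beta / x_i, 1.0) * alpha * (reward - q_i)

# Expected per-step change of Q_i is x_i * min(beta/x_i, 1) * alpha * (r - q):
# for any x_i >= beta this equals beta * alpha * (r - q), independent of x_i.
for x_i in (0.1, 0.5, 0.9):
    expected = x_i * min(beta / x_i, 1.0) * alpha * (1.0 - 0.0)
    print(round(expected, 6))  # same value for every x_i
```

This is what lets the actual learning process track the evolutionary (replicator-mutation style) prediction: rarely played actions no longer learn more slowly than the model assumes.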

73 citations


Journal ArticleDOI
TL;DR: The developed theory provides an effective method for solving selection systems; it reduces the initial complex model to a special system of ordinary differential equations (the escort system).
Abstract: Mathematical theory of selection is developed within the frameworks of general models of inhomogeneous populations with continuous time. Methods that allow us to study the distribution dynamics under natural selection and to construct explicit solutions of the models are developed. All statistical characteristics of interest, such as the mean values of the fitness or any trait can be computed effectively, and the results depend in a crucial way on the initial distribution. The developed theory provides an effective method for solving selection systems; it reduces the initial complex model to a special system of ordinary differential equations (the escort system). Applications of the method to the Price equations are given; the solutions of some particular inhomogeneous Malthusian, Ricker and logistic-like models used but not solved in the literature are derived in explicit form.

70 citations


Journal ArticleDOI
TL;DR: The stability analysis uses some passivity concepts and classic Lyapunov theory for a closed-loop system that combines the population dynamics (controller), and the WDS (process) to reach a stable equilibrium point.
Abstract: The water distribution system (WDS) is composed of several elements, among which flow control is one of the most important components needed to provide a satisfactory level of service. In order to achieve an adequate level of water in the distribution tanks, we need to control the flow dynamically. Here, we propose a population dynamics approach to control the tanks by allocating to them the maximum uniform volume. The feedback interconnected systems reach a stable equilibrium point for both approaches presented (i.e. replicator and supply dynamics), and more specifically an asymptotically stable equilibrium point in the replicator dynamics case. The stability analysis uses passivity concepts and classic Lyapunov theory for a closed-loop system that combines the population dynamics (controller) and the WDS (process). We show the operation under different scenarios via simulations.
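The controller/process interconnection can be caricatured in a few lines. This is our toy construction, not the paper's WDS model: the share of total inflow routed to each tank has fitness equal to that tank's remaining deficit, so the replicator controller shifts inflow toward emptier tanks and the closed loop settles at equal levels.

```python
# Toy replicator-as-controller sketch (our assumed tank model, NOT the
# paper's WDS): x is the inflow allocation, levels is the process state.
import numpy as np

levels = np.array([0.2, 0.5, 0.8])   # assumed initial tank levels
x = np.array([1/3, 1/3, 1/3])        # share of total inflow per tank
dt, inflow = 0.01, 0.3
for _ in range(20000):
    f = 1.0 - levels                  # fitness of a share: the tank's deficit
    x = x + dt * x * (f - x @ f)      # replicator dynamics (controller)
    levels = levels + dt * (inflow * x - 0.1 * levels)  # tank dynamics (process)
print(levels.round(2))                # levels approach a common value
```

At the closed-loop equilibrium all deficits are equal, so the replicator is at rest and the tanks hold the same (maximum uniform) volume.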

58 citations


Journal ArticleDOI
TL;DR: It is shown that the significance of the handicap principle can be challenged from two new directions; in particular, the existence and stability of polymorphisms in which players mix between being honest and being deceptive, and in which signalling costs can be very low, are proved.
Abstract: We study the handicap principle in terms of the Sir Philip Sidney game. The handicap principle asserts that cost is required to allow for honest signalling in the face of conflicts of interest. We show that the significance of the handicap principle can be challenged from two new directions. Firstly, both the costly signalling equilibrium and certain states of no communication are stable under the replicator dynamics (i.e. standard evolutionary dynamics); however, the latter states are more likely in cases where honest signalling should apply. Secondly, we prove the existence and stability of polymorphisms where players mix between being honest and being deceptive and where signalling costs can be very low. Neither the polymorphisms nor the states of no communication are evolutionarily stable, but they turn out to be more important for standard evolutionary dynamics than the costly signalling equilibrium.

56 citations


Journal ArticleDOI
TL;DR: It is concluded that for a variety of topologies, the fact that the dynamics co-evolves with the strategies leads, in general, to more cooperation in the weak PD game.
Abstract: In this work we study a weak Prisoner's Dilemma game in which both strategies and update rules are subjected to evolutionary pressure. Interactions among agents are specified by complex topologies, and we consider both homogeneous and heterogeneous situations. We consider deterministic and stochastic update rules for the strategies, which in turn may consider single links or full context when selecting agents to copy from. Our results indicate that the co-evolutionary process preserves heterogeneous networks as a suitable framework for the emergence of cooperation. Furthermore, on those networks, the update rule leading to a larger fraction, which we call replicator dynamics, is selected during co-evolution. On homogeneous networks we observe that even if replicator dynamics turns out again to be the selected update rule, the cooperation level is larger than on a fixed update rule framework. We conclude that for a variety of topologies, the fact that the dynamics co-evolves with the strategies leads, in general, to more cooperation in the weak Prisoner's Dilemma game.

53 citations


Journal ArticleDOI
TL;DR: In this article, a weak prisoner's dilemma (PD) game is considered and both strategies and update rules are subjected to evolutionary pressure, and it is shown that the co-evolutionary process preserves heterogeneous networks as a suitable framework for the emergence of cooperation.
Abstract: In this paper, we study a weak prisoner's dilemma (PD) game in which both strategies and update rules are subjected to evolutionary pressure. Interactions among agents are specified by complex topologies, and we consider both homogeneous and heterogeneous situations. We consider deterministic and stochastic update rules for the strategies, which in turn may consider single links or the full context when selecting agents to copy from. Our results indicate that the co-evolutionary process preserves heterogeneous networks as a suitable framework for the emergence of cooperation. Furthermore, on those networks the update rule leading to a larger fraction, which we call replicator dynamics, is selected during co-evolution. On homogeneous networks, we observe that even if the replicator dynamics again turns out to be the selected update rule, the cooperation level is greater than on a fixed update rule framework. We conclude that for a variety of topologies, the fact that the dynamics co-evolves with the strategies leads, in general, to more cooperation in the weak PD game.

49 citations


Proceedings ArticleDOI
01 Jan 2010
TL;DR: In this article, the authors introduce heterogeneous learning schemes in which each agent adopts a distinct learning pattern in the context of games with incomplete information, and apply the results to a class of security games in which the attacker and the defender adopt different learning schemes due to differences in their rationality levels and the information they acquire.
Abstract: Learning algorithms are essential for the applications of game theory in a networking environment. In dynamic and decentralized settings where the traffic, topology and channel states may vary over time and the communication between agents is impractical, it is important to formulate and study games of incomplete information and fully distributed learning algorithms that require from each agent only a minimal amount of information regarding the remaining agents. In this paper, we address this major challenge and introduce heterogeneous learning schemes in which each agent adopts a distinct learning pattern in the context of games with incomplete information. We use stochastic approximation techniques to show that the heterogeneous learning schemes can be studied in terms of their deterministic ordinary differential equation (ODE) counterparts. Depending on the learning rates of the players, these ODEs could be different from the standard replicator dynamics, (myopic) best response (BR) dynamics, logit dynamics, and fictitious play dynamics. We apply the results to a class of security games in which the attacker and the defender adopt different learning schemes due to differences in their rationality levels and the information they acquire.

45 citations


Journal ArticleDOI
TL;DR: The authors generalize the Bisin-Verdier model of intergenerational transmission to permit an arbitrary number of cultural traits and demonstrate global stability of the long-run distribution of traits using recent results on stable games and potential games.
Abstract: We generalize the Bisin-Verdier model of intergenerational transmission to permit an arbitrary number of cultural traits. A key observation, that this model is equivalent to an evolutionary game under replicator dynamics, facilitates our analysis. For two special cases, obtained by restricting the pattern of "cultural distastes" between traits, we demonstrate global stability of the long-run distribution of traits using recent results on stable games and potential games. For the general three-trait case, we show that all three traits survive in the long run only if each trait satisfies an "invasion condition" involving the cultural distaste parameters. (JEL C73, D91, Z13)

Journal ArticleDOI
TL;DR: In this paper, the authors study repeated games where players use an exponential learning scheme in order to adapt to an everchanging environment and find that strategies which are dominated (even iteratively) eventually become extinct and that the strict Nash equilibria are stochastically asymptotically stable.
Abstract: We study repeated games where players use an exponential learning scheme in order to adapt to an ever-changing environment. If the game's payoffs are subject to random perturbations, this scheme leads to a new stochastic version of the replicator dynamics that is quite different from the "aggregate shocks" approach of evolutionary game theory. Irrespective of the perturbations' magnitude, we find that strategies which are dominated (even iteratively) eventually become extinct and that the game's strict Nash equilibria are stochastically asymptotically stable. We complement our analysis by illustrating these results in the case of congestion games.

Journal ArticleDOI
TL;DR: Evolutionary branching, which could result in the evolutionary emergence of cooperators and defectors, can occur only with locally convex functional responses, but it is illustrated that it can also result in coevolutionary extinction.
Abstract: It has recently been demonstrated that ecological feedback mechanisms can facilitate the emergence and maintenance of cooperation in public goods interactions: the replicator dynamics of defectors and cooperators can result, for example, in the ecological coexistence of cooperators and defectors. Here we show that these results change dramatically if cooperation strategy is not fixed but instead is a continuously varying trait under natural selection. For low values of the factor with which the value of resources is multiplied before they are shared among all participants, evolution will always favour lower cooperation strategies until the population falls below an Allee threshold and goes extinct, thus evolutionary suicide occurs. For higher values of the factor, there exists a unique evolutionarily singular strategy, which is convergence stable. Because the fitness function is linear with respect to the strategy of the mutant, this singular strategy is neutral against mutant invasions. This neutrality disappears if a nonlinear functional response in receiving benefits is assumed. For strictly concave functional responses, singular strategies become uninvadable. Evolutionary branching, which could result in the evolutionary emergence of cooperators and defectors, can occur only with locally convex functional responses, but we illustrate that it can also result in coevolutionary extinction.

Journal ArticleDOI
TL;DR: It is shown how the CSS is related to convergence stability for the canonical equation of adaptive dynamics and the NIS to convergence to a monomorphism for the replicator equation of evolutionary game theory.

Journal ArticleDOI
TL;DR: It is shown that solutions of a wide class of replicator equations minimize the KL-divergence of the initial and current distributions under time-dependent constraints, which means the Kullback principle of minimum discrimination information and the maximum entropy principle can be derived from the system dynamics rather than postulated.

Proceedings ArticleDOI
01 Dec 2010
TL;DR: A two-level game framework is developed that can be implemented with minimal communication cost, since no information broadcasting is required; it is also observed that the selfish behavior of service providers can maximize the social welfare.
Abstract: Bandwidth allocation for different service classes in heterogeneous wireless networks is an important issue for service providers in terms of balancing service quality and profit. It is especially challenging when considering the dynamic competition both among service providers and among users. To address this problem, a two-level game framework is developed in this paper. The underlying dynamic service selection is modeled as an evolutionary game based on replicator dynamics. An upper bandwidth allocation differential game is formulated to model the competition among different service providers. The service selection distribution of the underlying evolutionary game describes the state of the upper differential game. An open-loop Nash equilibrium is considered to be the solution of this linear state differential game. The proposed framework can be implemented with minimal communication cost since no information broadcasting is required. We also observe that the selfish behavior of service providers can maximize the social welfare.

Journal ArticleDOI
TL;DR: In this article, the authors construct a class of three-by-three symmetric games for which the overlap in the basins of attraction under best response learning and replicator dynamics is arbitrarily small.
Abstract: A deterministic learning model applied to a game with multiple equilibria produces distinct basins of attraction for those equilibria. In symmetric two-by-two games, basins of attraction are invariant to a wide range of learning rules including best response dynamics, replicator dynamics, and fictitious play. In this paper, we construct a class of three-by-three symmetric games for which the overlap in the basins of attraction under best response learning and replicator dynamics is arbitrarily small. We then derive necessary and sufficient conditions on payoffs for these two learning rules to create basins of attraction with vanishing overlap. The necessary condition requires that with probability one the initial best response is not an equilibrium to the game. The existence of parasitic or misleading actions allows subtle differences in the learning rules to accumulate.

Journal ArticleDOI
TL;DR: Simulation results suggest that clustering always performs at least as well as Tagger, while in more than a third of the considered instances a substantial improvement can be observed.
Abstract: Tag SNP selection is an important problem in genetic association studies. A class of algorithms to perform this task, among them a popular tool called Tagger, can be described as searching for a minimal vertex cover of a graph. In this article this approach is contrasted with a recently introduced clustering algorithm based on the graph-theoretical concept of dominant sets. To compare the performance of both procedures, comprehensive simulation studies have been performed using SNP data from the ten ENCODE regions included in the HapMap project. Quantitative traits have been simulated from additive models with a single causative SNP. Simulation results suggest that clustering always performs at least as well as Tagger, while in more than a third of the considered instances a substantial improvement can be observed. Additionally, an extension of the clustering algorithm is described which can be used for larger genomic data sets.

Book ChapterDOI
01 Jan 2010
TL;DR: This chapter explores the possibility that natural selection takes place in the brain and introduces three possible neuronal units of selection, and shows how these replicators may take part in diverse aspects of cognition such as causal inference, human problem solving, and memory.
Abstract: This chapter explores the possibility that natural selection takes place in the brain. We review the theoretical and experimental evidence for selectionist and competitive dynamics within the brain. We propose that in order to explain human problem-solving, selectionist mechanisms demand extension to encompass the full Darwinian dynamic that arises from introducing replication of neuronal units of selection. The algorithmic advantages of replication of representations that occur in natural selection are not immediately obvious to the neuroscientist when compared with the kind of search normally proposed by instrumental learning models, i.e. stochastic hill-climbing. Indeed, the notion of replicator dynamics in the brain remains controversial and unproven. It starts from early thoughts on the evolution of ideas, and extends to behaviourist notions of selection of state-action pairs, memetics, synaptic replicators, and hexagonal cortical replicators. Related but distinct concepts include neural selectionism, and synfire chains. Our contribution here is to introduce three possible neuronal units of selection and show how they relate to each other. First, we introduce the Adams synapse that can replicate (by quantal budding) and mutate by attaching to nearby postsynaptic neurons rather than to the current postsynaptic neuron. More generally, we show that Oja’s formulation of Hebbian learning is isomorphic to Eigen’s replicator equations, meaning that Hebbian learning can be thought of as a special case of natural selection. Second, we introduce a synaptic group replicator, a pattern of synaptic connectivity that can be copied to other neuronal groups. Third, we introduce an activity replicator that is a pattern of bistable neuronal activities that can be copied between vectors of neurons. This last type of replicator is not composed of the first two kinds, but may be dependent upon them. 
We suggest how these replicators may take part in diverse aspects of cognition such as causal inference, human problem solving, and memory.

Book ChapterDOI
20 Sep 2010
TL;DR: The evolutionary dynamics of the Regret Minimization polynomial weights learning algorithm are formally derived, which will be described by a system of differential equations that can easily investigate parameter settings and analyze the dynamics of multiple concurrently learning agents using regret minimization.
Abstract: Learning in multi-agent systems (MAS) is a complex task. Current learning theory for single-agent systems does not extend to multi-agent problems. In a MAS the reinforcement an agent receives may depend on the actions taken by the other agents present in the system. Hence, the Markov property no longer holds and convergence guarantees are lost. Currently there does not exist a general formal theory describing and elucidating the conditions under which algorithms for multi-agent learning (MAL) are successful. Therefore it is important to fully understand the dynamics of multi-agent reinforcement learning, and to be able to analyze learning behavior in terms of stability and resilience of equilibria. Recent work has considered the replicator dynamics of evolutionary game theory for this purpose. In this paper we contribute to this framework. More precisely, we formally derive the evolutionary dynamics of the Regret Minimization polynomial weights learning algorithm, which are described by a system of differential equations. Using these equations we can easily investigate parameter settings and analyze the dynamics of multiple concurrently learning agents using regret minimization. In this way it is clear why certain attractors are stable and potentially preferred over others, and what the basins of attraction look like. Furthermore, we experimentally show that the dynamics predict the real learning behavior, and we also test the dynamics in non-self play, comparing the polynomial weights algorithm against the previously derived dynamics of Q-learning and various Linear Reward algorithms in a set of benchmark normal-form games.
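The flavor of the underlying learner can be sketched with a minimal full-information polynomial-weights loop. This is our simplified setup with an illustrative loss sequence rather than one of the chapter's games: weights are discounted multiplicatively by each action's loss, play is proportional to the weights, and cumulative loss stays close to that of the best fixed action in hindsight.

```python
# Minimal full-information polynomial weights sketch (our illustrative
# setup); the chapter derives the dynamics of this style of regret
# minimizer when agents learn concurrently in games.
eta = 0.05
w = [1.0, 1.0]              # one weight per action
total = [0.0, 0.0]          # cumulative loss of each fixed action
learner = 0.0               # cumulative expected loss of the learner
# an assumed adversarial loss sequence over two actions
losses = [(1.0, 0.0) if t % 3 else (0.0, 1.0) for t in range(3000)]
for l in losses:
    s = w[0] + w[1]
    p = (w[0] / s, w[1] / s)          # play proportionally to weights
    learner += p[0] * l[0] + p[1] * l[1]
    for a in (0, 1):
        total[a] += l[a]
        w[a] *= (1 - eta * l[a])      # polynomial weights update
regret = learner - min(total)         # regret against the best fixed action
print(round(regret, 1))               # small compared to the 3000-round horizon
```

The evolutionary analysis in the chapter replaces such sampled trajectories with the expected (mean-field) dynamics, a system of ODEs over the policy simplex.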

Journal ArticleDOI
Dirk Helbing1, Sergi Lozano1
TL;DR: This work shows that a transition to cooperation may take place even if the stationary states and the eigenvalues of the replicator equation for the PD stay unchanged, and studies the related phase transitions, which can be of first order (discontinuous) or of second order (continuous), implying a variety of different routes to cooperation.
Abstract: Game theory formalizes certain interactions between physical particles or between living beings in biology, sociology, and economics and quantifies the outcomes by payoffs. The prisoner's dilemma (PD) describes situations in which it is profitable if everybody cooperates rather than defects (free rides or cheats), but as cooperation is risky and defection is tempting, the expected outcome is defection. Nevertheless, some biological and social mechanisms can support cooperation by effectively transforming the payoffs. Here, we study the related phase transitions, which can be of first order (discontinuous) or of second order (continuous), implying a variety of different routes to cooperation. After classifying the transitions into cases of equilibrium displacement, equilibrium selection, and equilibrium creation, we show that a transition to cooperation may take place even if the stationary states and the eigenvalues of the replicator equation for the PD stay unchanged. Our example is based on adaptive group pressure, which makes the payoffs dependent on the endogenous dynamics in the population. The resulting bistability can invert the expected outcome in favor of cooperation.
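The baseline the paper starts from is easy to reproduce: under the plain replicator equation with standard, untransformed PD payoffs, defection takes over from almost any initial state. The payoff values below are the conventional illustrative ones (T > R > P > S), not taken from the paper.

```python
# Baseline check: plain replicator dynamics on the PD (illustrative
# conventional payoffs). Defection strictly dominates, so any route to
# cooperation must transform the payoffs or the dynamics.
import numpy as np

R, S, T, P = 3.0, 0.0, 5.0, 1.0   # standard PD ordering T > R > P > S
A = np.array([[R, S], [T, P]])    # row 0 = cooperate, row 1 = defect

x = np.array([0.99, 0.01])        # start almost fully cooperative
for _ in range(5000):
    f = A @ x
    x = x + 0.01 * x * (f - x @ f)
print(x.round(4))                 # defectors take over
```

Note that both rest points (all-C and all-D) persist as stationary states here; the transitions studied in the paper change which one is reached, not necessarily the rest points themselves.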

Journal ArticleDOI
28 Jun 2010-Entropy
TL;DR: Applications to inhomogeneous versions of some conceptual models of mathematical biology (logistic and Ricker models of populations and Volterra models of communities) are given.
Abstract: Dynamics of many complex systems can be described by replicator equations (RE). Here we present an effective method for solving a wide class of RE based on reduction theorems for models of inhomogeneous communities. The solutions of the RE minimize the discrimination information of the initial and current distributions at each point of the system trajectory, not only at the equilibrium, under time-dependent constraints. Applications to inhomogeneous versions of some conceptual models of mathematical biology (logistic and Ricker models of populations and Volterra models of communities) are given.
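The structure behind this result can be checked numerically in the simplest case of constant (frequency-independent) fitnesses, where the replicator solution is known in closed form as an exponential tilt of the initial distribution: exactly the minimum-discrimination-information (Kullback) form. The fitness and initial-distribution values below are illustrative.

```python
# Numerical check: for constant fitnesses f_i, the replicator solution is
# x_i(t) = x_i(0) exp(f_i t) / Z(t), the exponential tilt of x(0).
import numpy as np

f = np.array([0.3, 0.1, -0.2])        # constant fitnesses (illustrative)
x0 = np.array([0.2, 0.5, 0.3])

# integrate dx_i/dt = x_i (f_i - sum_j x_j f_j) by forward Euler
steps, dt = 20000, 1e-4
T = steps * dt
x = x0.copy()
for _ in range(steps):
    x = x + dt * x * (f - x @ f)

tilt = x0 * np.exp(f * T)             # closed-form exponential tilt
tilt /= tilt.sum()
print(np.abs(x - tilt).max())         # agreement up to integration error
```

The tilted distribution is precisely the minimizer of KL(x || x0) under a constraint on mean fitness, which is the connection the paper generalizes to time-dependent constraints.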

Journal ArticleDOI
TL;DR: A general framework is constructed that predicts the learning dynamics of any decision-making neural network that implements this synaptic plasticity rule and it is shown that melioration naturally emerges in such networks.
Abstract: According to the theory of Melioration, organisms in repeated choice settings shift their choice preference in favor of the alternative that provides the highest return. The goal of this paper is to explain how this learning behavior can emerge from microscopic changes in the efficacies of synapses, in the context of two-alternative repeated-choice experiment. I consider a large family of synaptic plasticity rules in which changes in synaptic efficacies are driven by the covariance between reward and neural activity. I construct a general framework that predicts the learning dynamics of any decision-making neural network that implements this synaptic plasticity rule and show that melioration naturally emerges in such networks. Moreover, the resultant learning dynamics follows the Replicator equation which is commonly used to phenomenologically describe changes in behavior in operant conditioning experiments. Several examples demonstrate how the learning rate of the network is affected by its properties and by the specifics of the plasticity rule. These results help bridge the gap between cellular physiology and learning behavior.

Journal ArticleDOI
TL;DR: A comprehensive review of different Game Theory applications that have been recently used to predict the behavior of non-rational agents in interaction situations arising from computational biology is provided.

Journal ArticleDOI
01 Jun 2010
TL;DR: It is argued that, in many economic and multiagent scenarios, the robustness criterion is inappropriate; instead, the method focuses on searching for strategies that are likely to be adopted by participating agents, which is formalized as the size of a strategy's basin of attraction under the replicator dynamics.
Abstract: We introduce a method for strategy acquisition in nonzero-sum n-player games and empirically validate it by applying it to a well-known benchmark problem in this domain, namely, the double-auction market. Many existing approaches to strategy acquisition focus on attempting to find strategies that are robust in the sense that they are good all-round performers against all-comers. We argue that, in many economic and multiagent scenarios, the robustness criterion is inappropriate; in contrast, our method focuses on searching for strategies that are likely to be adopted by participating agents, which is formalized as the size of a strategy's basin of attraction under the replicator dynamics.
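The basin-size criterion can be estimated by straightforward Monte Carlo over initial mixtures. The 3x3 game below is an assumed pure-coordination example, not one of the paper's auction strategies: conventions with higher payoffs end up with larger replicator basins.

```python
# Monte Carlo estimate of basin sizes under replicator dynamics for an
# assumed 3x3 pure-coordination game (illustrative, not the paper's game).
import numpy as np

rng = np.random.default_rng(0)
A = np.diag([1.0, 2.0, 3.0])   # coordinating on strategy i pays i+1 units

def attractor(x, dt=0.05, steps=4000):
    # run the replicator dynamics and report which vertex it reaches
    for _ in range(steps):
        f = A @ x
        x = x + dt * x * (f - x @ f)
        x = np.clip(x, 0, None)
        x /= x.sum()
    return int(np.argmax(x))

counts = [0, 0, 0]
for _ in range(500):
    counts[attractor(rng.dirichlet([1, 1, 1]))] += 1
print(counts)  # the highest-payoff convention has the largest basin
```

Ranking strategies by these counts is the spirit of the acquisition criterion: not "does it beat everything", but "how much of the initial-condition space flows to it".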

Journal ArticleDOI
TL;DR: This study presents a new concept of quasi-stationary strategy (QSS) for large but finite populations, and it is shown that the consistency between the QSS and the ESS implies that the long-term behavior of the replicator dynamics can be predicted by the quasi-stationary behavior of the stochastic dynamics.

Journal ArticleDOI
TL;DR: Simulation and analytical results for evolutionary lottery games demonstrate that for every population dynamic in this class except for the replicator dynamic, the interplay between risk-taking and sequentiality of choices allows state-dependent risk behavior to have an evolutionary advantage over expected-value maximization.
Abstract: Researchers have invested much effort in constructing models of the state-dependent (sometimes risk-averse and sometimes risk-prone) nature of human decision making. An important open question is how state-dependent risk behavior can arise and remain prominent in populations. We believe that one part of the answer is the interplay between risk-taking and sequentiality of choices in populations subject to evolutionary population dynamics. To support this hypothesis, we provide simulation and analytical results for evolutionary lottery games, including results on evolutionary stability. We consider a parameterized class of imitation dynamics in which the parameter 0 ≤ α ≤ 1 yields the replicator dynamic for α = 1 and the imitate-the-better dynamic for α = 0. Our results demonstrate that for every population dynamic in this class except for the replicator dynamic, the interplay between risk-taking and sequentiality of choices allows state-dependent risk behavior to have an evolutionary advantage over expected-value maximization.

Posted Content
TL;DR: In this paper, the authors studied the replicator dynamics in the presence of shocks and showed that strategies that do not survive the iterated deletion of strictly dominated strategies are eliminated in the long run.
Abstract: This article studies the replicator dynamics in the presence of shocks. I show that under these dynamics strategies that do not survive the iterated deletion of strictly dominated strategies are eliminated in the long run, even in the presence of nonvanishing perturbations. I also give an example showing that the stochastic dynamics in this article have equilibrium selection properties that differ from other dynamics in the literature.

Posted Content
TL;DR: It is shown that the Nash equilibrium is not asymptotically stable under payoff monotonic dynamics for arbitrary initial populations, and it is suggested that the convergence for the replicator dynamic is slow compared to that for the BNN dynamic.
Abstract: This paper studies the evolutionary stability of the unique Nash equilibrium of a first price sealed bid auction. It is shown that the Nash equilibrium is not asymptotically stable under payoff monotonic dynamics for arbitrary initial populations. In contrast, when the initial population includes a continuum of strategies around the equilibrium, the replicator dynamic does converge to the Nash equilibrium. Simulations are presented for the replicator and Brown-von Neumann-Nash dynamics. They suggest that the convergence for the replicator dynamic is slow compared to the Brown-von Neumann-Nash dynamics.
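A structural difference between the two dynamics compared here is easy to demonstrate on a small example (rock-paper-scissors for brevity, not the auction itself): the Brown-von Neumann-Nash (BNN) dynamic is innovative, able to reintroduce a strategy whose population share is exactly zero, while the replicator dynamic cannot. This is one reason the initial population matters so much for replicator convergence.

```python
# Contrast replicator vs BNN dynamics on rock-paper-scissors (illustrative
# game, not the paper's auction): BNN revives an extinct strategy.
import numpy as np

A = np.array([[0, -1, 1], [1, 0, -1], [-1, 1, 0]], float)  # RPS payoffs

def step_replicator(x, dt=0.01):
    f = A @ x
    return x + dt * x * (f - x @ f)

def step_bnn(x, dt=0.01):
    f = A @ x
    e = np.maximum(f - x @ f, 0.0)   # excess payoffs
    return x + dt * (e - x * e.sum())

x0 = np.array([0.5, 0.5, 0.0])       # third strategy initially extinct
xr, xb = x0.copy(), x0.copy()
for _ in range(1000):
    xr, xb = step_replicator(xr), step_bnn(xb)
print(xr[2], round(float(xb[2]), 4))  # replicator keeps 0; BNN revives it
```

In the auction setting this is why the replicator dynamic needs a continuum of strategies around the equilibrium in the initial population, while excess-payoff dynamics like BNN can enter unused bids on their own.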

Proceedings ArticleDOI
01 Jan 2010
TL;DR: A model of the radiation-induced bystander effect based on the theory of evolutionary games is presented, which considers three strategies (phenotypes) of cells exposed to ionizing radiation and discusses the types of equilibrium points (polymorphism of phenotypes) and the dynamics leading to equilibrium (replicator dynamics).
Abstract: We present a model of the radiation-induced bystander effect based on the theory of evolutionary games. The model follows the line of reasoning used to describe the so-called angiogenic games. We consider three strategies (phenotypes) of cells exposed to ionizing radiation, or more precisely to signals sent by the exposed cells to the unexposed ones. The proposed payoff table of fitness includes the costs/profits of the bystander effect, choice of the apoptotic pathway, production of growth factors, and resistance against the bystander effect. We discuss the types of equilibrium points (polymorphism of phenotypes) and the dynamics leading to equilibrium (replicator dynamics). We also present examples of simulation experiments for various parameters.