Deterministic Approximation of a Stochastic Imitation Dynamics with Memory


Aydogmus O., Kang Y.

Dynamic Games and Applications, vol.14, no.3, pp.525-548, 2024 (SCI-Expanded)

  • Publication Type: Article
  • Volume: 14 Issue: 3
  • Publication Date: 2024
  • DOI Number: 10.1007/s13235-023-00513-y
  • Journal Name: Dynamic Games and Applications
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus, ABI/INFORM, EconLit, MathSciNet, zbMATH
  • Page Numbers: pp.525-548
  • Keywords: Evolutionary games with memory, Deterministic approximations, Non-Markovian stochastic processes, Delay differential equations, Evolutionary game dynamics, Differential equations, Time, Delay
  • Ankara University Affiliated: Yes

Abstract

We provide deterministic approximation results for non-Markovian stochastic processes modeling finite populations of individuals who recurrently play symmetric finite games and imitate each other according to their payoffs. We show that a system of delay differential equations can be obtained as the deterministic approximation of such a non-Markovian process. We also show that if the initial states of the stochastic process and the corresponding deterministic model are close enough, then the trajectory of the stochastic process stays close to that of the deterministic model up to any given finite time horizon, with a probability that approaches one exponentially fast as the population size increases. We use this result to show that the absorption time of the non-Markovian process is bounded below by a quantity that grows exponentially with the population size. Additionally, we obtain replicator equations with distributed and discrete delay terms as examples and analyze how the memory of individuals can affect the evolution of cooperation in a two-player symmetric snow-drift game. We investigate the stability of the evolutionarily stable state of the game when agents have memory of past population states, and we discuss the implications of these results for the stochastic model.
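
To make the delayed replicator dynamics mentioned above more concrete, the sketch below numerically integrates a two-strategy replicator equation with a single discrete delay for a snow-drift-type payoff matrix. It is an illustration only: the payoff values, delay length, step size, and the particular delayed form used (payoff differences evaluated at the remembered population state x(t - tau)) are assumptions made for this example, not the exact system derived in the paper.

```python
# Illustrative sketch (not the paper's exact model): Euler integration of a
# two-strategy replicator equation with one discrete delay tau, using a
# snow-drift payoff matrix. All parameter values below are assumptions.

import numpy as np

# Snow-drift game with benefit b and cost c, b > c > 0 (illustrative values).
b, c = 2.0, 1.0
payoff = np.array([
    [b - c / 2.0, b - c],  # cooperator vs (cooperator, defector)
    [b,           0.0  ],  # defector   vs (cooperator, defector)
])

tau = 2.0      # discrete delay (memory length), assumed
dt = 0.01      # Euler step size
T = 60.0       # time horizon
lag = int(round(tau / dt))

steps = int(round(T / dt))
x = np.empty(steps + 1)
x[: lag + 1] = 0.2  # constant history: initial cooperator fraction

for n in range(lag, steps):
    xd = x[n - lag]                                     # remembered state x(t - tau)
    f_c = payoff[0, 0] * xd + payoff[0, 1] * (1 - xd)   # cooperator payoff
    f_d = payoff[1, 0] * xd + payoff[1, 1] * (1 - xd)   # defector payoff
    # Delayed replicator dynamics: growth driven by the payoff difference
    # evaluated at the delayed population state.
    x[n + 1] = x[n] + dt * x[n] * (1 - x[n]) * (f_c - f_d)

print("final cooperator fraction:", x[-1])
print("interior ESS of the undelayed snow-drift game:", (2 * b - 2 * c) / (2 * b - c))
```

Handling the delay with a constant-history buffer and a fixed-step Euler scheme is the simplest workable choice here; for the undelayed snow-drift game the interior evolutionarily stable state is x* = (2b - 2c)/(2b - c), which the simulated trajectory can be compared against for small and large delays.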