Reinforcement learning facilitates an optimal interaction intensity for cooperation

Zhao Song, Hao Guo, Danyang Jia, Matjaž Perc, Xuelong Li, Zhen Wang

Research output: Contribution to journal › Article › peer-review

38 Citations (Scopus)

Abstract

Our social interactions vary over time, and they depend on various factors that determine our preferences and goals, both in personal and professional terms. Research has shown that this plays an important role in promoting cooperation and prosocial behavior in general. Indeed, it is natural to assume that ties among cooperators would become stronger over time, while ties with defectors (non-cooperators) would eventually be severed. Here we introduce reinforcement learning as a determinant of adaptive interaction intensity in social dilemmas and study how this translates into the structure of the social network and its propensity to sustain cooperation. We merge the iterated prisoner's dilemma game with the Bush-Mosteller reinforcement learning model and show that a moderate switching dynamics of the interaction intensity is optimal for the evolution of cooperation. Moreover, the results of Monte Carlo simulations are further supported by calculations based on a dynamical pair approximation. These observations show that reinforcement learning is sufficient for the emergence of optimal social interaction patterns that facilitate cooperation. This in turn supports the social capital hypothesis with a minimal set of assumptions that guide the self-organization of our social fabric.
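To make the abstract's setup concrete, the following is a minimal sketch (not the authors' code) of a Bush-Mosteller-style update applied to the interaction intensity of a single link in an iterated prisoner's dilemma: payoffs above an aspiration level reinforce the tie, payoffs below it weaken the tie. The payoff values, aspiration level A, and learning rate beta are illustrative assumptions; the abstract does not specify the paper's exact parameterisation.

```python
# Minimal sketch, assuming a standard Bush-Mosteller stimulus-response rule
# acting on a link weight w in [0, 1]; parameters below are illustrative.

R, S, T, P = 1.0, -0.5, 1.5, 0.0   # hypothetical PD payoffs (T > R > P > S)
A = 0.5                            # aspiration level (assumed)
beta = 0.2                         # learning rate (assumed)

def pd_payoff(my_move, other_move):
    """Row player's payoff in a one-shot prisoner's dilemma."""
    table = {("C", "C"): R, ("C", "D"): S, ("D", "C"): T, ("D", "D"): P}
    return table[(my_move, other_move)]

def bush_mosteller_update(w, payoff):
    """Strengthen or weaken the interaction intensity w depending on
    whether the received payoff exceeds the aspiration level A."""
    stimulus = (payoff - A) / max(abs(T - A), abs(S - A))   # normalise to [-1, 1]
    stimulus = max(-1.0, min(1.0, stimulus))
    if stimulus >= 0:
        return w + beta * stimulus * (1.0 - w)   # reinforce the tie
    return w + beta * stimulus * w               # weaken the tie

# Toy usage: a cooperator's tie to a persistent defector gradually decays.
w = 0.5
for _ in range(20):
    w = bush_mosteller_update(w, pd_payoff("C", "D"))
print(f"intensity after repeated defection: {w:.2f}")
```

How fast ties strengthen or decay is governed by beta, which is one way to read the paper's finding that a moderate switching dynamics of the interaction intensity is optimal for cooperation.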

Original language: English
Pages (from-to): 104-113
Number of pages: 10
Journal: Neurocomputing
Volume: 513
DOI
Publication status: Published - 7 Nov 2022
