Research Highlights

Cultural Evolution of Cooperation among LLM Agents

Vallinder investigates whether large language model agents can develop and maintain cooperative norms through mechanisms of cultural evolution. The paper demonstrates that LLM-based agents, when placed in repeated social dilemmas, exhibit norm-formation dynamics analogous to those observed in human societies, including the emergence of punishment mechanisms, in-group favoritism, and stable cooperation even in the absence of explicit coordination. The work bridges evolutionary game theory with modern AI systems, suggesting that insights from cultural evolution could inform multi-agent AI safety. The findings point to both opportunities (norms as an alignment mechanism) and risks (the formation of harmful norms among AI systems).
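The kind of setup described, generations of agents playing a repeated social dilemma, with successful strategies transmitted to successors, can be sketched as follows. This is an illustrative toy model, not the paper's implementation: simple probabilistic strategies stand in for LLM agents, and all names and parameters (`B`, `C`, population size, mutation rate) are assumptions.

```python
import random

B, C = 3.0, 1.0  # assumed benefit to recipient / cost to donor (b > c)

def play_generation(strategies, rounds=200, rng=None):
    """One generation of a donor game: random donor/recipient pairs.

    Each agent's strategy is its probability of donating. Donating
    costs the donor C and gives the recipient B.
    """
    payoffs = [0.0] * len(strategies)
    for _ in range(rounds):
        donor, recipient = rng.sample(range(len(strategies)), 2)
        if rng.random() < strategies[donor]:
            payoffs[donor] -= C
            payoffs[recipient] += B
    return payoffs

def next_generation(strategies, payoffs, rng, mutation=0.05):
    """Cultural transmission: new agents imitate the top half by payoff,
    with small mutations -- a crude stand-in for LLM agents conditioning
    on the strategies of their most successful predecessors."""
    ranked = sorted(range(len(strategies)),
                    key=lambda i: payoffs[i], reverse=True)
    survivors = ranked[: len(strategies) // 2]
    children = []
    for _ in strategies:
        parent = rng.choice(survivors)
        p = strategies[parent] + rng.uniform(-mutation, mutation)
        children.append(min(1.0, max(0.0, p)))  # clamp to a valid probability
    return children

rng = random.Random(0)
population = [rng.random() for _ in range(20)]  # initial donation probabilities
for generation in range(10):
    payoffs = play_generation(population, rng=rng)
    population = next_generation(population, payoffs, rng)
```

In a bare model like this, with no reputation or punishment, low-donation strategies tend to spread, which is precisely why the norm-enforcement mechanisms studied in the paper matter for sustaining cooperation.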