LLMs Playing Mafia games – See them lie, deceive, and reason

The post discusses large language models (LLMs) playing Mafia games, where they exhibit behaviors such as lying, deception, and reasoning. Commenters note the models' surprising inability to track game state and the ambiguity of many of their responses. Interactions also reveal differences in response length and coherence of thought across models, sparking debate about the relative effectiveness of LLMs such as Mistral 24B and NeMo. The game's format forces the models into complex social interaction, highlighting their current limitations in nuanced conversation and deception.
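For readers unfamiliar with how such a game is typically wired up, here is a minimal sketch of one day phase of a Mafia game orchestrated over LLM players. The post does not describe its actual implementation; `query_model`, the player names, and the prompt format below are illustrative assumptions, not the game's real API. The sketch does show why state tracking is fragile: each model sees the game state only as a transcript stuffed into its prompt.

```python
# Minimal sketch of a Mafia day phase with LLM players.
# `query_model` is a hypothetical stand-in for a real chat-completion client;
# roles, names, and the vote format are assumptions for illustration.
import random


def query_model(model: str, prompt: str) -> str:
    """Hypothetical LLM call; swap in a real chat-completion client here."""
    return f"[{model}] Player_2 has been evasive. I vote: player_2"


def day_phase(players: dict[str, str], transcript: list[str]) -> str:
    """Run one discussion-and-vote round; return the eliminated player.

    Each model only ever sees the running transcript, so the "game state"
    exists solely inside the prompt -- a plausible source of the
    state-tracking failures commenters describe.
    """
    votes: dict[str, int] = {}
    for name, model in players.items():
        prompt = (
            f"You are playing Mafia as {name}.\n"
            "Transcript so far:\n" + "\n".join(transcript) + "\n"
            "Make one statement, then vote by ending with 'I vote: <name>'."
        )
        reply = query_model(model, prompt)
        transcript.append(f"{name}: {reply}")
        # Parse the vote; ambiguous replies (another complaint in the
        # comments) fall back to a random target in this sketch.
        if "I vote:" in reply:
            target = reply.rsplit("I vote:", 1)[-1].strip()
        else:
            target = random.choice([p for p in players if p != name])
        votes[target] = votes.get(target, 0) + 1
    return max(votes, key=votes.get)


if __name__ == "__main__":
    roster = {"player_1": "mistral-24b", "player_2": "nemo", "player_3": "mistral-24b"}
    print("Eliminated:", day_phase(roster, transcript=[]))
```

Because the entire history is replayed to every player on every turn, long games inflate prompts quickly, which would also account for the response-length and coherence differences observed between smaller and larger models.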