The discussion focuses on how emergent social conventions and collective biases can develop within populations of large language models (LLMs). It highlights the idea that interacting LLMs can exhibit group behavior resembling human social dynamics, producing distinctive language patterns and biases that arise from both their training data and their ongoing use. These conventions might not only reflect societal biases but also create new patterns of communication that could be read as in-group behavior among the LLM agents themselves. The comment contrasts two interpretations: these conventions as reflections of pre-existing social biases, or as emergent phenomena arising from the models' own interactions.
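To make the emergence mechanism concrete, here is a minimal sketch of a naming-game simulation, a standard toy model of convention formation. Everything in it is an illustrative assumption: the vocabulary, population size, and memory-update rule are hypothetical stand-ins for LLM agents, not the protocol of the work under discussion.

```python
import random
from collections import Counter

random.seed(0)

VOCAB = ["alpha", "beta", "gamma", "delta"]  # hypothetical name choices
N_AGENTS = 50
N_ROUNDS = 20_000

# Each agent starts with a single random name in memory.
memories = [[random.choice(VOCAB)] for _ in range(N_AGENTS)]

for _ in range(N_ROUNDS):
    speaker, hearer = random.sample(range(N_AGENTS), 2)
    word = random.choice(memories[speaker])
    if word in memories[hearer]:
        # Coordination success: both agents collapse to the agreed name.
        memories[speaker] = [word]
        memories[hearer] = [word]
    else:
        # Failure: the hearer adds the speaker's name as a candidate.
        memories[hearer].append(word)

# Tally each name's share of the population's memories; with these
# parameters one name almost always dominates (a shared convention).
counts = Counter(word for memory in memories for word in memory)
print(counts.most_common())
```

The point of the sketch is that no agent is told which name to prefer, yet a population-wide convention still emerges from repeated local interactions, which mirrors the "emergent phenomena" reading rather than the "inherited bias" one.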