The discussion surrounding Large Language Models (LLMs) highlights significant challenges with context management in multi-turn conversations. Users report that LLMs struggle to maintain relevant context, leading to issues such as "context poisoning," where earlier inputs degrade the quality of later outputs. Critics also note that LLMs often lack self-awareness and rarely ask clarifying questions, both of which matter in nuanced interactions.

One proposed solution is a two-system architecture in which a 'curator' manages the context and breaks the user's request into manageable components, while a second system answers against that curated context, potentially mitigating the current limitations of LLMs (a sketch follows below). Commenters also call for more research into context handling and argue that companies should provide better guidance on using their models effectively. The difficulties mirror human conversational challenges, where abrupt contextual shifts cause confusion, suggesting there is still room for improvement in designing conversational AI that maintains context across a conversation and better approximates human-like interaction.
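
To make the two-system 'curator' idea concrete, here is a minimal sketch in Python. It is an illustration of the architecture described above, not an implementation from the discussion: the `LLMCall` type, the prompts, and the `CuratedConversation` class are all assumptions standing in for whatever chat-completion client and prompt design you actually use.

```python
from dataclasses import dataclass, field
from typing import Callable, List

# Stand-in for any chat-completion client (hosted API, local model, ...).
# This signature is an assumption for the sketch, not a real library API.
LLMCall = Callable[[str], str]


@dataclass
class CuratedConversation:
    """Two-system loop: a 'curator' model keeps a compact running summary and
    decomposes the latest request; a 'solver' model answers using only that
    curated context, so stale or misleading turns are not replayed verbatim
    (one way to limit 'context poisoning')."""
    curator: LLMCall
    solver: LLMCall
    summary: str = ""                      # curator-maintained context
    history: List[str] = field(default_factory=list)

    def ask(self, user_message: str) -> str:
        self.history.append(f"User: {user_message}")

        # 1. Curator distills the conversation into a short summary plus
        #    sub-tasks, instead of forwarding every previous turn.
        plan = self.curator(
            "Summarize the relevant context in at most 5 bullet points, then "
            "list the sub-tasks needed to answer the latest message.\n\n"
            f"Previous summary:\n{self.summary}\n\n"
            "Recent turns:\n" + "\n".join(self.history[-4:])
        )

        # 2. Solver answers against the curated context only.
        answer = self.solver(
            f"Context (curated):\n{plan}\n\n"
            f"Answer the user's latest message:\n{user_message}"
        )

        # 3. Curator updates the running summary for the next turn.
        self.summary = self.curator(
            "Update the context summary given this exchange.\n\n"
            f"Old summary:\n{self.summary}\n\n"
            f"User: {user_message}\nAssistant: {answer}"
        )
        self.history.append(f"Assistant: {answer}")
        return answer


# Trivial stand-ins so the sketch runs without any model or API key.
echo: LLMCall = lambda prompt: prompt[:200]
conversation = CuratedConversation(curator=echo, solver=echo)
print(conversation.ask("Pick up where we left off on the database migration."))
```

The design choice the sketch tries to capture is that the solver never sees the raw transcript, only the curator's distilled view of it, which is how this architecture would keep earlier, possibly irrelevant turns from distorting later answers.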