Recent discussions emphasize that large language models (LLMs) such as GPT are primarily pattern matchers, much like traditional machine learning algorithms. While advances such as Chain-of-Thought prompting allow LLMs to tackle more complex problems, they remain limited on tasks that require deeper logical or graphical reasoning. Critics argue that many assessments of LLM capabilities are outdated, often failing to account for the latest developments, particularly reasoning models such as the o3 series. The discourse is divided: some researchers advocate further exploration of LLMs, while others suggest shifting focus to alternative models. The ongoing debate underscores the tension within the AI community over the potential and limitations of current LLMs, and over strategies for adapting them to increasingly complex problem-solving tasks.