Yann LeCun, a prominent AI researcher, advocates shifting away from today's large language models (LLMs) toward new, more efficient AI architectures. Commenters largely support his position, describing current LLMs as glorified look-up tables that lack true understanding and arguing that they have reached a performance plateau. They see more potential for breakthroughs in fundamental research than in further LLM optimization, and broadly agree that advancing AI will require new algorithms, pointing to approaches such as energy-based models and higher-dimensional methods that more closely resemble how the human brain learns. The discussion blends skepticism about existing methods with optimism that different methodologies could yield revolutionary advances.
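To make the energy-based idea concrete, here is a toy sketch (not from the discussion): instead of directly mapping input to output, an energy-based model assigns a scalar "energy" to each candidate configuration, and inference finds a low-energy one, here by plain gradient descent on a hypothetical quadratic energy chosen purely for illustration.

```python
import numpy as np

def energy(x, target):
    # Toy quadratic energy: lowest when x matches the "compatible" configuration.
    return 0.5 * np.sum((x - target) ** 2)

def infer(target, steps=200, lr=0.1):
    # Inference as optimization: descend the energy gradient w.r.t. x.
    x = np.zeros_like(target)
    for _ in range(steps):
        grad = x - target  # d(energy)/dx for the quadratic above
        x = x - lr * grad
    return x

target = np.array([1.0, -2.0, 3.0])
x_hat = infer(target)
print(energy(x_hat, target))  # energy is driven close to zero
```

In a real energy-based model the energy function is learned and the configuration space is far richer, but the inference-as-minimization structure is the point of contrast with a single feed-forward LLM pass.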