The development of OpenAI's GPT-5 has encountered delays, sparking discussion about the current limitations of Large Language Models (LLMs) and the company's strategy going forward. Commenters suggest that the traditional approach of scaling up model size and training data has hit diminishing returns, forcing a change in direction; insiders reportedly want simultaneous gains in scaling, data quality, and algorithms. Opinions on existing models are divided, with some seeing substantial improvements in the recently released o1-Pro and o3 over GPT-4. Many emphasize the need for new ways of creating training data, pointing to synthetic datasets as a likely path for continued progress. Users want practical applications of the technology, voicing frustration with current model outputs, especially in coding tasks, and comparing the relative strengths of competing AI offerings. Several highlight better clarifying-question strategies, where a model asks for missing context before answering, as a straightforward way to make responses more useful.
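As a rough illustration of the synthetic-data idea raised in the discussion, the sketch below uses the OpenAI Python client to have a strong model draft question-answer pairs that could seed a training set. The model name, prompt wording, and the `generate_pairs` helper are illustrative assumptions, not anything described in the thread.

```python
# Hypothetical sketch: generating synthetic Q&A training data with an LLM.
# Model name, prompts, and output handling are assumptions for illustration.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def generate_pairs(topic: str, n: int = 5) -> list[dict]:
    """Ask a strong model to draft n question-answer pairs about a topic."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model; swap for whatever is available
        messages=[
            {"role": "system",
             "content": "You create training data. Reply with only a JSON "
                        "array of objects, each with 'question' and 'answer' "
                        "keys and no surrounding text."},
            {"role": "user",
             "content": f"Write {n} question-answer pairs about {topic}."},
        ],
    )
    # Assumes the model returns bare JSON; production code would validate
    # and retry on malformed output before adding pairs to a dataset.
    return json.loads(response.choices[0].message.content)


if __name__ == "__main__":
    for pair in generate_pairs("Python error handling"):
        print(pair["question"], "->", pair["answer"][:60])
```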
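The clarifying-question strategy mentioned at the end can be approximated today purely through prompting. The minimal sketch below instructs the model to ask one question when a request is ambiguous rather than guessing; the `CLARIFY:` prefix convention, model name, and `answer_or_clarify` helper are assumptions for illustration, not a documented technique from the thread.

```python
# Hypothetical sketch of a clarifying-question strategy: tell the model
# to ask one question when the request is ambiguous instead of guessing.
from openai import OpenAI

client = OpenAI()

CLARIFY_SYSTEM_PROMPT = (
    "If the user's request is ambiguous or missing details you need, "
    "respond with exactly one clarifying question prefixed with 'CLARIFY:'. "
    "Otherwise, answer the request directly."
)


def answer_or_clarify(user_message: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model name
        messages=[
            {"role": "system", "content": CLARIFY_SYSTEM_PROMPT},
            {"role": "user", "content": user_message},
        ],
    )
    reply = response.choices[0].message.content
    if reply.startswith("CLARIFY:"):
        # A real application would route this back to the user and resume
        # the conversation with their answer appended to the messages.
        print("Model needs more context:", reply)
    return reply


if __name__ == "__main__":
    # A deliberately vague request, likely to trigger a clarifying question.
    answer_or_clarify("Write a script to clean up my files.")
```

The design choice here is to make the clarification machine-detectable (the prefix) so the calling code can distinguish "ask the user" from "final answer" without a second model call.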