The post discusses the challenges and misunderstandings surrounding the capabilities of LLMs (Large Language Models) versus LRMs (Large Reasoning Models). The comments blend skepticism about the efficacy of current reasoning models with an appreciation for the value of the underlying technology. Critics argue that "reasoning" in LLMs essentially serves to improve response quality rather than to provide genuine reasoning, which raises the question of how reasoning should be defined in the first place. There is also a call for more constructive criticism rather than outright dismissal, on the grounds that even flawed models represent significant advances in AI technology.