Reasoning models don't always say what they think

The post examines gaps in our understanding of how reasoning models, particularly large language models (LLMs), operate. It critiques the prevailing belief that techniques like Chain-of-Thought (CoT) prompting provide genuine insight into a model's reasoning process, arguing instead that these models generate outputs from statistical token probabilities rather than actual cognition or understanding. Several commenters agree that LLMs improve their responses by extending context, but note that this does not amount to real reasoning or awareness. They also observe that users often inadvertently steer LLMs toward expected answers, underscoring the models' limited reasoning capabilities and their lack of awareness of their own outputs or the reasoning behind them.
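The claim that generation is driven by statistical probabilities rather than cognition can be illustrated with a toy sketch. This is not a real language model; the two-token contexts and the probabilities below are made up purely for demonstration.

```python
import random

# Hypothetical next-token distributions keyed by the last two context
# tokens; a real LLM computes these from learned parameters.
next_token_probs = {
    ("The", "answer"): {"is": 0.9, "was": 0.1},
    ("answer", "is"): {"42": 0.6, "unknown": 0.4},
    ("answer", "was"): {"unknown": 1.0},
}

def sample_next(context, rng):
    """Sample the next token from the distribution for the last two tokens."""
    probs = next_token_probs[tuple(context[-2:])]
    tokens, weights = zip(*probs.items())
    return rng.choices(tokens, weights=weights)[0]

rng = random.Random(0)  # fixed seed for reproducibility
tokens = ["The", "answer"]
for _ in range(2):
    tokens.append(sample_next(tokens, rng))
print(" ".join(tokens))
```

Each step simply samples from a conditional distribution over tokens; nothing in the process inspects or "understands" what the emerging sentence means, which is the distinction the post and commenters draw.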