The article discusses the potential of auto-differentiating LLM workflows, which aim to eliminate manual prompt engineering. The approach borrows from reverse graph traversal: the workflow's computation graph is walked backwards so that an LLM can assess intermediate outputs and adjust the prompts that produced them. Critics argue that the terminology obscures the core idea, making a straightforward task unnecessarily complex. There are also concerns about how labor-intensive prompt crafting remains, and skepticism that randomized prompt perturbations actually improve accuracy. The conversation underscores a broader debate: whether prompt optimization adds efficiency, or whether success with LLMs comes down to a fundamental understanding of the task requirements and the data inputs.
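To make the reverse-traversal idea concrete, here is a minimal toy sketch of the loop such systems describe: a forward pass runs the prompt, a critic turns the error into textual feedback, and a backward step applies that feedback to the prompt. All three functions are stubs invented for illustration; a real system would replace them with LLM calls, and the names (`forward`, `critic`, `backward`) are assumptions, not any particular library's API.

```python
def forward(prompt: str, question: str) -> str:
    # Stub "model": behavior depends on the prompt (toy stand-in for an LLM call).
    if "step by step" in prompt:
        return "correct answer"
    return "wrong answer"

def critic(output: str, target: str) -> str:
    # Stub "textual gradient": describes how the prompt should change.
    if output != target:
        return "add 'step by step' to the prompt"
    return ""

def backward(prompt: str, feedback: str) -> str:
    # Apply the feedback to produce an updated prompt.
    if "step by step" in feedback:
        return prompt + " Think step by step."
    return prompt

prompt = "Answer the question."
for _ in range(3):  # prompt-optimization loop
    output = forward(prompt, "2+2?")
    feedback = critic(output, "correct answer")
    if not feedback:
        break  # output matched the target; stop adjusting
    prompt = backward(prompt, feedback)

print(prompt)  # the prompt now carries the step-by-step instruction
```

In a multi-step workflow, the same backward step would run at each node of the graph in reverse order, which is where the analogy to reverse-mode differentiation comes from.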