The post discusses a new method, CaMeL, proposed for mitigating prompt injection attacks on AI systems. A commenter expresses strong support for the method, emphasizing its unique approach of not relying on statistical models to identify prompt injections. Instead, it seems to build on more robust foundational concepts, potentially addressing weaknesses in conventional mitigation methods like those used for SQL injection or XSS attacks. However, there are concerns about the method's effectiveness in preventing escalation attacks. The commenter notes their own prior work on combining dual quarantined and privileged LLMs, suggesting both excitement and skepticism regarding how CaMeL addresses broader attack vectors beyond mere exfiltration.