In a recent post, developers shared their experiences with improving AI code-review bots by minimizing nitpicky comments. They discussed a prevailing issue with existing PR bots like Coderabbit and Korbit AI, which often clutter pull requests with unnecessary critiques. Key insights included:
- The need for better prompting techniques to guide AI in making relevant comments, focusing on impactful feedback rather than trivial details.
- Several community members expressed frustration with their experiences using AI review tools, prompting discussions about customization options and the effectiveness of particular commenting styles.
- There was consensus that calibrating AI evaluation standards is inherently difficult, since what reads as a nitpick in one context can be crucial in another.
- Suggestions included classifying comments and applying explicit thresholds for filtering, rather than relying on heuristic similarity measures, to give users more control over which AI responses surface.
- Many developers noted a lack of trust in AI-based code reviewers due to prior negative experiences with inaccuracies, leading to a preference for human intervention in code reviews; the inherent limitations of current AI models were also critically examined.
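The classify-and-threshold idea mentioned above can be sketched in a few lines. This is a minimal illustration, not any bot's actual implementation: the severity categories, the model-reported confidence field, and the `filter_comments` helper are all assumptions chosen for the example.

```python
from dataclasses import dataclass

# Hypothetical severity scale; a real bot would define its own taxonomy.
SEVERITIES = {"nit": 0, "style": 1, "maintainability": 2, "bug": 3, "security": 4}

@dataclass
class ReviewComment:
    text: str
    severity: str      # classifier-assigned label, one of SEVERITIES
    confidence: float  # model-reported confidence in [0, 1]

def filter_comments(comments, min_severity="maintainability", min_confidence=0.7):
    """Keep only comments at or above a severity class and confidence floor,
    instead of filtering by fuzzy similarity to known nitpicks."""
    floor = SEVERITIES[min_severity]
    return [
        c for c in comments
        if SEVERITIES[c.severity] >= floor and c.confidence >= min_confidence
    ]

comments = [
    ReviewComment("Prefer single quotes here.", "nit", 0.90),
    ReviewComment("This loop re-reads the file on every iteration.", "bug", 0.85),
    ReviewComment("Possible injection in the query builder.", "security", 0.60),
]

kept = filter_comments(comments)
print([c.text for c in kept])
# → ['This loop re-reads the file on every iteration.']
```

Exposing `min_severity` and `min_confidence` as user-tunable settings is what gives reviewers control: a team that wants only likely bugs can raise both knobs, while a team that welcomes style feedback can lower them.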