Reviewing code generated by AI tools like GitHub Copilot, ChatGPT, or other coding agents is becoming an essential part of the modern developer workflow. This guide provides practical techniques, emphasizes the importance of human oversight and testing, and includes example prompts to showcase how AI can assist in the review process.
A thorough review process is especially critical for legacy codebases and large pull requests. Combining human expertise with automated tools helps ensure that AI-generated code meets quality standards, aligns with project goals, and adheres to best practices.
With Copilot, you can streamline your review process and enhance your ability to identify potential issues in AI-generated code.
1. Start with functional checks
Always run automated tests and static analysis tools first; a minimal pre-review check runner is sketched after the list below.
- Make sure the code compiles and all tests pass. Check for any new warnings or errors.
- Use tools like CodeQL and Dependabot to catch vulnerabilities and dependency issues.
- See Generating unit tests and Creating end-to-end tests for a webpage for examples of verifying code with Copilot.
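As a starting point, here is a minimal sketch of such a pre-review check runner. It assumes a Python project tested with pytest and linted with ruff; substitute whatever test and analysis commands your project actually uses.

```python
"""Minimal pre-review check runner: a sketch that assumes a Python project
tested with pytest and linted with ruff. Substitute your own commands."""
import subprocess
import sys

# Each entry is one check; all of them run even if an earlier one fails,
# so the reviewer sees the full picture.
CHECKS = [
    ["pytest", "--quiet"],   # functional tests
    ["ruff", "check", "."],  # static analysis / linting
]

def main() -> int:
    failed = False
    for cmd in CHECKS:
        print(f"==> {' '.join(cmd)}")
        if subprocess.run(cmd).returncode != 0:
            failed = True
    return 1 if failed else 0

if __name__ == "__main__":
    sys.exit(main())
```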
Example prompts
What functional tests needed to validate this code change are missing or do not yet exist?
What possible vulnerabilities or security issues could this code introduce?
2. Verify context and intent
Check that the AI-generated code fits the purpose and architecture of your project.
- Review the AI output for alignment with your requirements and design patterns.
- Ask yourself: “Does this code solve the right problem? Does it follow our conventions?”
- Use your README, docs, and recent pull requests as starting context for the AI. Tell the AI which sources to trust, which to avoid, and give it good examples to work with.
- Try Synthesizing research to see how Copilot uses documentation and research to inform code generation.
- When asking AI to perform research and planning tasks, consider distilling its output into structured artifacts that become context for future AI tasks such as code generation; a minimal sketch of this follows the list.
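One lightweight way to build such an artifact is to concatenate the sources you trust into a single context file. The file names below are assumptions; point the script at whatever documents your project actually treats as authoritative.

```python
"""Sketch: distill trusted project sources into one structured context
artifact for future AI tasks. The file names are assumptions; list
whatever documents your project actually treats as authoritative."""
from pathlib import Path

TRUSTED_SOURCES = ["README.md", "docs/architecture.md", "CONTRIBUTING.md"]

def build_context(output: str = "ai-context.md") -> None:
    sections = []
    for name in TRUSTED_SOURCES:
        path = Path(name)
        if path.exists():
            sections.append(f"## Source: {name}\n\n{path.read_text(encoding='utf-8')}")
    Path(output).write_text("\n\n".join(sections), encoding="utf-8")

if __name__ == "__main__":
    build_context()
```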
Example prompts
How does this refactored code section align with our project architecture?
What similar features or established design patterns did you identify and model your code after?
When examining this code, what assumptions about business logic, design preferences, or user behaviors have been made?
What are the potential issues or limitations with this approach?
3. Assess code quality
Human standards still matter.
- Look for readability, maintainability, and clear naming.
- Avoid accepting code that is hard to follow or would take longer to refactor than to rewrite.
- Prefer code that is well-documented and includes clear comments.
- Check Improving code readability and maintainability for prompts and tips on reviewing and refactoring generated code.
Example prompts
What are some readability and maintainability issues in this code?
How can this code be improved for clarity and simplicity? Suggest an alternative structure or variable names to enhance clarity.
How could this code be broken down into smaller, testable units?
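To make the last prompt concrete, here is the kind of decomposition a reviewer might ask for. The before/after below is an illustrative sketch, not code from any particular project.

```python
# Before: one function parses, validates, and totals an order, so no
# single step can be tested on its own.
def process_order(raw: str) -> float:
    items = [(p.split(":")[0], float(p.split(":")[1])) for p in raw.split(",")]
    if any(price < 0 for _, price in items):
        raise ValueError("negative price")
    return sum(price for _, price in items)

# After: each step is a small, independently testable unit.
def parse_order(raw: str) -> list[tuple[str, float]]:
    pairs = (part.split(":") for part in raw.split(","))
    return [(name, float(price)) for name, price in pairs]

def validate_order(items: list[tuple[str, float]]) -> None:
    if any(price < 0 for _, price in items):
        raise ValueError("negative price")

def total_order(items: list[tuple[str, float]]) -> float:
    return sum(price for _, price in items)
```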
4. Scrutinize dependencies
Be vigilant with new packages and libraries.
- Check that suggested dependencies exist and are actively maintained. Consider the origins and contributors of new dependencies to ensure they come from reputable sources.
- Review licensing. Avoid introducing code or dependencies that are incompatible with your project's license (for example, AGPL-3.0 code in an MIT-licensed project, or dependencies with no declared license).
- Watch out for hallucinated or suspicious packages (such as packages that don't actually exist), and for slopsquatting (an attack in which malicious packages are published under names that LLMs tend to hallucinate).
- Creating templates demonstrates how Copilot can assist with dependency setup; however, it is good practice to always verify suggested packages yourself.
- Use GitHub Copilot code referencing to review matches with publicly available code.
Example prompts
Analyze the attached package.json file and list all dependencies with their respective licenses.
Are the dependencies listed in this package.json file actively maintained (that is, not archived, with recent maintainer activity)?
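Parts of this check can be scripted before the prompt ever runs. The sketch below reads a package.json and queries the public npm registry for each dependency's declared license and last-modified date; it assumes network access to registry.npmjs.org, and a 404 response is a strong hint of a hallucinated or slopsquatted package.

```python
"""Sketch: check that the dependencies in a package.json exist on the
public npm registry and report their declared license and last update.
Assumes network access to registry.npmjs.org; treat the output as a
starting point for review, not a verdict."""
import json
import urllib.error
import urllib.parse
import urllib.request
from pathlib import Path

REGISTRY = "https://registry.npmjs.org/"

def check_dependencies(package_json: str = "package.json") -> None:
    manifest = json.loads(Path(package_json).read_text(encoding="utf-8"))
    deps = {**manifest.get("dependencies", {}), **manifest.get("devDependencies", {})}
    for name in sorted(deps):
        url = REGISTRY + urllib.parse.quote(name, safe="@")  # encode "/" in scoped names
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                data = json.load(resp)
            license_ = data.get("license", "none declared")
            modified = data.get("time", {}).get("modified", "unknown")
            print(f"{name}: license={license_}, last modified={modified}")
        except urllib.error.HTTPError as err:
            # A 404 is a red flag: possibly hallucinated or slopsquatted.
            print(f"{name}: not found on the registry (HTTP {err.code})")

if __name__ == "__main__":
    check_dependencies()
```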
5. Spot AI-specific pitfalls
AI tools can make unique mistakes.
- Look for hallucinated APIs, ignored constraints, or incorrect logic.
- Watch for tests that are deleted or skipped instead of fixed; a sketch for flagging this in a diff follows this list.
- Be skeptical of code that “looks right” but doesn’t match your intent.
- See Debugging invalid JSON as an example of catching subtle errors and debugging with Copilot.
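A simple automated guard can catch the deleted-or-skipped-test pattern before a human ever looks. The sketch below scans a unified diff on stdin for removed test functions or newly added skip markers; it assumes pytest conventions (test_* functions, @pytest.mark.skip), so adapt the patterns to your framework.

```python
"""Sketch: flag diffs that delete or skip tests instead of fixing them.
Assumes pytest conventions (test_* functions, @pytest.mark.skip);
adapt the patterns to your framework."""
import re
import sys

REMOVED_TEST = re.compile(r"^-\s*def test_\w+")
ADDED_SKIP = re.compile(r"^\+.*@pytest\.mark\.skip")

def main() -> int:
    findings = []
    for line in sys.stdin:
        if REMOVED_TEST.match(line):
            findings.append(f"test removed: {line.strip()}")
        elif ADDED_SKIP.match(line):
            findings.append(f"skip added: {line.strip()}")
    print("\n".join(findings) if findings else "no deleted or skipped tests found")
    return 1 if findings else 0

if __name__ == "__main__":
    sys.exit(main())
```

Pipe a unified diff into it (for example, the output of git diff main...); a non-zero exit code flags the change for closer human review.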
Example prompts
What was the reasoning behind the code change to delete the failing test? Suggest some alternatives that would fix the test instead of deleting it.
What potential complexities, edge cases, or scenarios are there that this code might not handle correctly?
What specific technical questions does this code raise that require human judgment or domain expertise to evaluate properly?
6. Use collaborative reviews
Pairing and team input helps catch subtle issues.
- Ask teammates to review complex or sensitive changes.
- Use checklists to ensure all key review points (functionality, security, maintainability) are covered.
- Share successful prompts and patterns for AI use across your team.
- See Communicating effectively for examples of how to work with Copilot collaboratively and document findings.
7. Automate what you can
Let tools handle the repetitive work.
- Set up CI checks for style, linting, and security.
- Use Dependabot for dependency updates and alerts.
- Apply CodeQL or similar scanners for static analysis.
- Finding public code that matches GitHub Copilot suggestions shows how Copilot can help track down code patterns and automate search tasks.
- Consider whether AI agents with reasoning capabilities can automate parts of your review process. For example, build a self-reviewing agent that evaluates draft pull requests against your standards, checking for accuracy, appropriate tone, and sound business logic before requesting human review; a skeleton of such an agent is sketched below.
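One possible shape for such an agent, shown as a sketch rather than a definitive implementation: fetch the pull request diff with the real gh pr diff command, then hand it to a review function. The ask_model function and the standards text are placeholders to wire up to your own LLM provider and guidelines.

```python
"""Skeleton of a self-reviewing agent for draft pull requests: a sketch,
not a definitive implementation. Fetching the diff uses the real gh CLI;
ask_model() is a placeholder to wire up to your team's LLM provider."""
import subprocess

STANDARDS = """Review this diff against our standards: accuracy of the stated
change, appropriate tone in user-facing text, sound business logic, and tests
covering new behavior. Answer APPROVE or REQUEST_CHANGES with a rationale."""

def fetch_diff(pr_number: int) -> str:
    # "gh pr diff" prints the unified diff for a pull request.
    result = subprocess.run(
        ["gh", "pr", "diff", str(pr_number)],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

def ask_model(prompt: str) -> str:
    raise NotImplementedError("connect this to your LLM provider of choice")

def self_review(pr_number: int) -> str:
    diff = fetch_diff(pr_number)
    return ask_model(f"{STANDARDS}\n\n{diff}")
```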
8. Keep improving your workflow
Embracing new AI tools and techniques can make your workflow even more effective.
- Document your best practices for reviewing AI-generated code.
- Encourage “AI champions” on your team to share tips and workflows.
- Update your onboarding and contribution guides to include your AI review techniques and resources. Use a CONTRIBUTING.md file in your repository to document your expectations for AI-generated source code and content; see Setting guidelines for repository contributors.
- Reference the GitHub Copilot Chat cookbook for inspiration and share useful recipes in your team docs.
Further reading
- Human Oversight in Modern Code Review in GitHub Resources