GitHub Copilot Code Review Ignores Custom Instructions: A Setback for Engineering Quality Software

Developer looking frustrated at AI code review comments, with a crumpled instruction sheet in a thought bubble.

The Promise and the Problem: Copilot's Deaf Ear to Custom Rules

GitHub Copilot's integration into the code review process holds immense promise for boosting developer productivity and enhancing engineering quality software. By automating parts of the review, it can free up human reviewers for more complex architectural discussions and deeper logic checks. However, a recent community discussion highlights a significant hurdle: GitHub Copilot Code Review appears to be ignoring custom instructions, leading to generic feedback and missed opportunities for tailored quality enforcement.

The issue, raised by user Szer in Discussion #187926, details a frustrating experience where Copilot Code Review consistently produces its default ## Pull request overview format and generic findings, regardless of any custom guidance provided. This directly undermines the goal of achieving nuanced and context-aware code assessments, which are crucial for high-quality software development.

What Was Tried (and Failed)

Szer's attempts to customize Copilot's behavior were thorough, demonstrating a clear understanding of the documented methods for instruction customization. These included:

  • Path-specific instructions: Placing rules in .github/instructions/code-review.instructions.md with applyTo: "**" and excludeAgent: "coding-agent".
  • Repo-wide instructions: Defining rules in .github/copilot-instructions.md, specifically under a ## Code Review Rules section, to enforce domain-specific conventions.
  • Combined approach: Using both files simultaneously.
  • Settings verification: Confirming that "Use custom instructions when reviewing pull requests" was enabled in the repository settings.
  • Instruction format adherence: Ensuring instructions followed recommended practices (short bullet points, concrete examples, clear headings, under 100 lines).
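To make the first attempt concrete, a path-specific instruction file of the kind described would look roughly like the sketch below. The frontmatter keys (applyTo, excludeAgent) come from the discussion itself; the rule bullets are illustrative reconstructions based on the domains Szer mentions (F# conventions, Telegram bot patterns, Cyrillic handling), not the actual contents of the file.

```markdown
---
applyTo: "**"
excludeAgent: "coding-agent"
---

## Code Review Rules

- Do not comment on trivial style issues; formatting is enforced by tooling.
- Prefer F# idioms: immutable bindings and pattern matching over mutable state.
- Check that user-facing Telegram bot messages handle Cyrillic text correctly.
```

Per the reported behavior, even a file following this documented shape, kept short and bulleted as recommended, produced no observable change in Copilot's reviews.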

Despite these diligent efforts, the outcome remained the same: Copilot's reviews showed no observable effect from the custom instructions.

The Frustrating Reality for Engineering Quality Software

The observed behavior paints a clear picture of the problem:

  • Reviews consistently used the default template, with no indication that custom instructions were processed.
  • Domain-specific rules—such as F# conventions, Telegram bot patterns, or specific Cyrillic handling—were never enforced.
  • The reviewer frequently commented on trivial style issues, directly contravening instructions that explicitly told it to skip such findings.
  • Crucially, no reference to the instruction files appeared in the generated review, suggesting they were simply not being read.

This bug significantly impacts the utility of Copilot for teams striving for specific engineering quality software standards. When an AI reviewer cannot be guided by a team's established conventions, its output becomes less actionable and can even introduce noise into the review process, hindering code review rather than helping it.

Community Acknowledgment, But No Solution Yet

The discussion received an automated response from GitHub Actions, confirming that the product feedback had been submitted. While this acknowledges the issue, it doesn't provide a current solution, workaround, or roadmap for a fix. This leaves developers like Szer in a holding pattern, unable to leverage Copilot's full potential for tailored code quality enforcement.

For organizations investing in AI-powered developer tools, the ability to customize and fine-tune behavior is paramount. This bug represents a critical gap in Copilot's functionality, preventing it from truly becoming an intelligent assistant that understands and enforces a project's unique quality requirements. Addressing this will be vital for Copilot to deliver on its promise of enhancing developer workflows and contributing meaningfully to engineering quality software.

Three gears representing code review, AI, and custom rules, with the custom rules gear disconnected from the AI gear.