One of the most dangerous prompts in AI coding is: “Look at this issue and fix it.”

That sentence collapses triage, analysis, and implementation into one step. It may work for small problems. In a real repository, it lets AI modify code before the task boundary has been confirmed.

A better workflow separates the phases.

Triage decides whether AI may work on the issue

Triage is not ceremony before implementation. It is the authorization boundary.

During triage, a human or scheduler should answer:

  • Is the user problem clear enough?
  • Is there current evidence or a reproduction path?
  • Are the acceptance criteria specific?
  • Are non-goals explicit?
  • Does the task touch high-risk areas?
  • Is the automation mode analysis-only, implement-after-triage, or manual-only?

If these questions cannot be answered, the worker should not implement.

The triage output can be short, but it must define how far the worker may go and where it must stop.
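One way to make that triage output explicit is a small structured record. A minimal sketch in Python; the field names, modes, and example values are illustrative, not a prescribed schema:

```python
from dataclasses import dataclass
from enum import Enum

class AutomationMode(Enum):
    ANALYSIS_ONLY = "analysis-only"
    IMPLEMENT_AFTER_TRIAGE = "implement-after-triage"
    MANUAL_ONLY = "manual-only"

@dataclass
class TriageDecision:
    """Output of triage: how far the worker may go and where it must stop."""
    issue_id: str
    mode: AutomationMode
    acceptance_criteria: list[str]
    non_goals: list[str]
    high_risk_areas: list[str]  # e.g. permissions, billing, persisted data

    def allows_implementation(self) -> bool:
        # The worker may implement only when triage explicitly permits it.
        return self.mode is AutomationMode.IMPLEMENT_AFTER_TRIAGE

# Hypothetical example: this issue is authorized for analysis only.
decision = TriageDecision(
    issue_id="1234",
    mode=AutomationMode.ANALYSIS_ONLY,
    acceptance_criteria=["error toast shows the server message"],
    non_goals=["no change to the error API contract"],
    high_risk_areas=[],
)
```

The record is small on purpose: it answers the triage questions and nothing else, so the worker can check `allows_implementation()` before touching code.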

Analysis discovers the real code boundary

Even a well-written issue does not describe the system as it is.

The code is the current implementation truth.

The analysis phase asks the worker to compare the issue against the repository:

  • Does the problem appear real?
  • Where is the relevant code?
  • Does the issue match current behavior?
  • Which modules might be touched?
  • What tests, screenshots, or data proof are needed?
  • Did any stop condition appear?

Many tasks should stop after analysis.

For example:

  • The issue says the problem is UI-only, but the code shows an API contract change is required.
  • The issue says the bug is simple, but the fix touches permissions, billing, security, or persisted data.
  • The acceptance criteria contradict current product behavior.

In those cases, the best AI output is not a patch. It is a clear analysis report.
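Those stop conditions can be checked mechanically once the worker reports its findings. A minimal sketch, assuming the findings are reduced to simple flags; the field names and verdict strings are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class AnalysisFindings:
    # Flags the worker fills in after comparing the issue to the repository.
    problem_reproduced: bool
    issue_matches_behavior: bool   # does the issue describe current behavior?
    touches_high_risk_area: bool   # permissions, billing, security, persisted data
    scope_wider_than_issue: bool   # e.g. "UI-only" issue needs an API change

def analysis_verdict(f: AnalysisFindings) -> str:
    """Return 'implement' only when no stop condition appeared."""
    if not f.problem_reproduced or not f.issue_matches_behavior:
        return "needs-human"           # the issue and the code disagree
    if f.touches_high_risk_area or f.scope_wider_than_issue:
        return "analysis-report-only"  # deliver a report, not a patch
    return "implement"
```

The point is that `"implement"` is the narrowest outcome, reached only when every stop condition stayed silent.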

Implementation should only touch authorized scope

The worker should enter implementation only after triage allows it and analysis does not hit a stop condition.

Implementation should not become opportunistic cleanup.

The worker should make the smallest coherent change allowed by the issue contract:

  • fix the target problem;
  • add or update relevant tests;
  • avoid unrelated refactors;
  • preserve non-goals;
  • respect risk boundaries;
  • produce evidence.

These constraints may sound restrictive. They are what make AI-generated PRs reviewable.

A small, bounded AI PR is more valuable than a broad PR that “also cleaned up a few things.”
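One concrete way to keep a change inside authorized scope is to compare the changed files against the modules triage allowed. A minimal sketch using glob patterns; the paths and patterns are hypothetical:

```python
from fnmatch import fnmatch

def out_of_scope(changed_files: list[str], allowed_globs: list[str]) -> list[str]:
    """Return the changed files that fall outside the authorized scope."""
    return [f for f in changed_files
            if not any(fnmatch(f, g) for g in allowed_globs)]

# Hypothetical example: triage authorized only the notifications UI and its tests.
allowed = ["src/notifications/*", "tests/notifications/*"]
changed = ["src/notifications/toast.ts", "src/billing/invoice.ts"]
violations = out_of_scope(changed, allowed)
```

A non-empty `violations` list is exactly the "also cleaned up a few things" signal: the gate can reject the patch before a reviewer ever sees it.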

A gate must be allowed to stop the task

Many teams describe a process as gated, but every gate still pushes the work toward a PR.

That is not a real gate.

A real gate must be allowed to stop, downgrade, or escalate the task.

Examples:

  • Triage gate can mark the issue manual-only.
  • Risk gate can downgrade implementation to analysis-only.
  • Analysis gate can return needs-human.
  • Implementation gate can stop on failed tests.
  • Evidence gate can prevent a PR from being treated as complete.

If every gate always says continue, it is decoration.

Phases reduce rework

AI coding feels fast because the worker can edit many files quickly.

But if the boundary is wrong, rework appears just as quickly. Reviewers have to explain why the change should not have been made. The worker has to revert or redo the patch. The original issue intent may be overwritten by implementation discussion.

Phasing moves mistakes to cheaper places.

It is cheaper to discover during triage that a task should not be automated. It is cheaper to discover during analysis that a human decision is needed. It is cheaper to discover at the evidence gate that verification is weak than to discover it in production.

The goal of an AI issue workflow is not to automatically finish every issue.

The goal is to make every issue move or stop at the right boundary.