If a GitHub issue is the executable contract for AI, then labels, branches, and comments are the runtime state of that contract.

Many AI coding workflows fail not because the agent cannot modify code, but because the project loses state. Has the issue been claimed? Is it in triage, implementation, PR review, or release verification? Did the worker fail because tests failed, requirements were unclear, or a human took over?

If those answers live only in a chat, they disappear quickly.

A practical solution is to use GitHub itself as a lightweight state machine.

Labels provide visible state

Labels are visible, filterable, and easy for humans and automation to share.

A minimal label set could look like this:

ai:needs-triage
ai:triaged
ai:analysis-only
ai:ready-for-worker
ai:claimed
ai:working
ai:evidence-ready
ai:pr-opened
ai:needs-human
ai:blocked
ai:done

The names matter less than the semantics.

An issue should not be both ai:ready-for-worker and ai:working. It should not be marked ai:needs-human while a worker continues implementation.

Treated as casual tags, labels do not form a state machine. They become one only when each label marks a distinct, mutually exclusive lifecycle stage.
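One way to make that exclusivity concrete is to encode the labels as an explicit transition table. The sketch below is illustrative only: the label names come from the list above, but the specific allowed transitions are an assumption a project would adapt to its own workflow.

```python
# Sketch: the ai:* labels as an explicit state machine.
# The transition table is an illustrative assumption, not a standard.
ALLOWED_TRANSITIONS = {
    "ai:needs-triage": {"ai:triaged", "ai:needs-human"},
    "ai:triaged": {"ai:analysis-only", "ai:ready-for-worker", "ai:blocked"},
    "ai:ready-for-worker": {"ai:claimed", "ai:needs-human"},
    "ai:claimed": {"ai:working", "ai:needs-human"},
    "ai:working": {"ai:evidence-ready", "ai:blocked", "ai:needs-human"},
    "ai:evidence-ready": {"ai:pr-opened"},
    "ai:pr-opened": {"ai:done", "ai:needs-human"},
}

def transition(current: str, target: str) -> str:
    """Return the new lifecycle label, or raise if the move is illegal."""
    if target not in ALLOWED_TRANSITIONS.get(current, set()):
        raise ValueError(f"illegal transition: {current} -> {target}")
    return target

# An issue holds exactly one ai:* lifecycle label at a time, so a state
# like "both ai:ready-for-worker and ai:working" is unrepresentable.
state = "ai:needs-triage"
state = transition(state, "ai:triaged")
state = transition(state, "ai:ready-for-worker")
```

Automation that applies labels through a gate like `transition` cannot silently drift into contradictory states; it fails loudly instead.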

Branches provide the claim lock

Labels are not locks.

Two workers can see the same ai:ready-for-worker issue at nearly the same time and both start working. Even if one worker later applies ai:working, the second worker may already be running.

That is why the workflow needs a branch claim.

Example:

ai/issue-123-refresh-status-badge

The branch is not only a code branch. It is also a claim: one worker has taken responsibility for this issue.

Recommended rules:

  • One issue may have only one active claim branch.
  • A worker must check for an existing branch before editing.
  • If the branch exists, a new worker should stop or switch to analysis-only mode.
  • The branch name should map back to the issue.

Labels tell people the state. Branches tell the system who owns the execution slot.
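The claim rules above can be sketched as two small helpers: one that maps a branch name back to its issue, and one that refuses to create a second claim for the same issue. The function names and the slug format are hypothetical; only the `ai/issue-<number>-<slug>` convention comes from the example above.

```python
import re

# Branch naming convention from the example: ai/issue-123-refresh-status-badge
CLAIM_PATTERN = re.compile(r"^ai/issue-(\d+)-[a-z0-9-]+$")

def issue_for_branch(branch: str):
    """Map a claim branch back to its issue number, or None for other branches."""
    m = CLAIM_PATTERN.match(branch)
    return int(m.group(1)) if m else None

def try_claim(issue: int, slug: str, existing_branches: set) -> "str | None":
    """Claim the issue by branch name; return None if it is already claimed."""
    if any(issue_for_branch(b) == issue for b in existing_branches):
        return None  # another worker already owns this execution slot
    return f"ai/issue-{issue}-{slug}"
```

In practice `existing_branches` would come from the remote (e.g. a branch listing fetched before editing), and a `None` result is the signal to stop or fall back to analysis-only mode.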

Comments provide the audit trail

Labels are too terse to carry reasoning, and branches are too implicit. The reason behind each state transition belongs in comments.

Each worker pass should leave a short state comment:

Worker state: evidence-ready
Branch: ai/issue-123-refresh-status-badge
Summary: updated dashboard refresh handling
Evidence: npm test -- dashboard-refresh.test.ts
Risk: UI-only; no API contract change
Next: human PR review

This comment does three things.

First, it spares the reviewer from digging through the chat transcript.

Second, it lets the next worker recover context.

Third, it makes reconciliation possible: the label says evidence-ready, the comment says where the evidence is, and the PR body can point to the same proof.
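A state comment like the one above is easy to generate mechanically, which keeps its format stable enough for the next worker to parse. This is a minimal sketch; the field names match the example, but the helper itself is hypothetical.

```python
def state_comment(state: str, branch: str, summary: str,
                  evidence: str, risk: str, next_step: str) -> str:
    """Render the short state comment a worker leaves after each pass."""
    return "\n".join([
        f"Worker state: {state}",
        f"Branch: {branch}",
        f"Summary: {summary}",
        f"Evidence: {evidence}",
        f"Risk: {risk}",
        f"Next: {next_step}",
    ])

comment = state_comment(
    state="evidence-ready",
    branch="ai/issue-123-refresh-status-badge",
    summary="updated dashboard refresh handling",
    evidence="npm test -- dashboard-refresh.test.ts",
    risk="UI-only; no API contract change",
    next_step="human PR review",
)
```

Because every pass emits the same six fields, both humans and automation can diff two consecutive comments and see exactly what changed between passes.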

Do not make one primitive do all the work

A common mistake is using labels only.

Labels make state visible, but they do not provide locking or reasoning.

Another mistake is using branches only.

Branches prevent some concurrency conflicts, but they leave humans unable to scan an issue list and see where each task stands.

A third mistake is using comments only.

Comments preserve audit history, but automation cannot efficiently filter lifecycle states from prose.

The stronger pattern is:

  • labels represent lifecycle stage;
  • branches represent claim locks;
  • comments explain state transitions and next steps.

Together, they form a small but useful AI engineering state machine.
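The payoff of combining the three primitives is that they can be checked against each other. The sketch below assumes a hypothetical `reconcile` helper and the label and branch conventions from earlier; a real version would read labels, branches, and the latest comment through the GitHub API.

```python
def reconcile(labels: set, branches: set, last_comment: str, issue: int) -> list:
    """Flag mismatches between lifecycle label, claim branch, and state comment."""
    problems = []
    claimed = any(b.startswith(f"ai/issue-{issue}-") for b in branches)
    if "ai:working" in labels and not claimed:
        problems.append("label says working but no claim branch exists")
    if "ai:ready-for-worker" in labels and claimed:
        problems.append("label says ready-for-worker but a claim branch exists")
    if "ai:evidence-ready" in labels and "Evidence:" not in last_comment:
        problems.append("label says evidence-ready but comment has no Evidence line")
    return problems
```

Run as a periodic check, this turns silent state drift (a stale label, an orphaned branch) into an actionable list instead of a surprise during PR review.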

Keep the first state machine small

Do not start with dozens of states.

Too many states make humans avoid the workflow and make automation brittle. The first version only needs to answer a few questions:

  • Is this issue allowed to enter AI automation?
  • Has it been triaged?
  • Has a worker claimed it?
  • Is the worker analyzing, implementing, or waiting for a human?
  • Is evidence ready?
  • Is there a PR?
  • Is release or production verification still pending?

If GitHub state can answer these questions, the workflow has already moved from chat-driven AI to state-machine-driven AI.