Why Codex Should Be a Worker, Not the Scheduler
Explains why Codex and similar AI coding tools should remain bounded workers, not long-running schedulers or release controllers.
14 public items tagged ai-engineering.
Many AI task failures do not happen because the model cannot modify code. They happen because the model reads the wrong context.
Explains why AI delivery must include verifiable proof: tests, logs, screenshots, risk notes, and a review path, rather than only a claim that the work is done.
Explains why AI pull requests should include reviewable evidence bundles, not just code changes and a claim that tests pass.
Shows how repeated AI mistakes should become project memory, updated rules, and regression checks so the delivery pipeline improves after each failure.
Shows how GitHub labels, branches, and comments can become an AI engineering state machine that keeps issue repair work observable and controllable.
Shows where humans should stand in an AI delivery pipeline: requirements, risk boundaries, release decisions, rollback choices, and final acceptance.
Explains why real projects should put AI work in isolated branches or workspaces, then move changes through explicit gates before they reach the main codebase.
Argues that an issue should be an executable contract for AI work, with scope, context, gates, evidence, and release boundaries instead of a loose to-do item.
Explains how to safely extract public methodology, templates, and toy examples from private production AI workflow experience.
Explains why a merged PR is not the end of AI engineering work; release verification, rollout checks, and post-merge evidence still define what "done" means.
Defines a project-specific AI delivery pipeline: AI acts as a worker while the project owns task intake, context, gates, evidence, and release boundaries.
The most common risk in AI-assisted development is not that the model cannot write code. It is that the model starts writing code too early.
Explains why AI repair work should move through triage, analysis, implementation, evidence, and review gates instead of jumping straight to code.