Interesting approach to planning via extensions. I took a similar direction with enforcement: a governance loop that hooks into the agent's tool calls and blocks execution until the protocol is followed. Every 10 actions (configurable), the agent re-centers. There are no permission popups, but the agent literally can't skip steps.
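To make the idea concrete, here is a minimal sketch of that enforcement shape, assuming a wrapper that intercepts tool calls and refuses to proceed until the agent re-centers. The names (`GovernanceLoop`, `before_tool_call`, `recenter`, `interval`) are illustrative, not the project's actual API.

```python
from dataclasses import dataclass


@dataclass
class GovernanceLoop:
    """Hypothetical sketch: wrap every tool call and block execution
    until the agent re-centers every `interval` actions."""
    interval: int = 10
    action_count: int = 0
    centered: bool = True

    def before_tool_call(self, tool_name: str) -> None:
        # Hard block: the agent cannot skip re-centering to keep acting.
        if not self.centered:
            raise RuntimeError(
                f"blocked {tool_name!r}: re-center required before continuing"
            )
        self.action_count += 1
        if self.action_count % self.interval == 0:
            self.centered = False  # next call is blocked until recenter()

    def recenter(self) -> None:
        # In the real system this is where the agent re-reads its
        # protocol and checks its own work.
        self.centered = True


loop = GovernanceLoop(interval=3)
for _ in range(3):
    loop.before_tool_call("click")
try:
    loop.before_tool_call("click")  # 4th call: blocked
except RuntimeError:
    loop.recenter()
    loop.before_tool_call("click")  # allowed again after re-centering
```

The key property is that the block is structural, not advisory: the call site raises, so the agent has no code path that continues without the checkpoint.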
Hi HN, I built this to solve a problem I kept hitting: AI agents can generate test scripts fast, but without enforcement they produce inconsistent output, skip patterns, and repeat the same mistakes.
The platform uses a 5-layer test architecture (Test → Role → Task → Page Object → BrowserInterface) based on the Screenplay pattern, enforced by a governance loop that runs inside the AI agent. Every 10 actions, the agent stops, re-reads its protocol, and checks its own work. If it fails, it records a lesson and never makes that mistake again.
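The 5-layer chain can be sketched roughly like this, assuming a Screenplay-style split where only the Page Object knows selectors and the Test layer only speaks in roles and tasks. All class and method names here are illustrative, not the platform's actual API.

```python
from typing import Protocol


class BrowserInterface(Protocol):
    """Layer 5: the only place that touches the browser driver."""
    def click(self, selector: str) -> None: ...
    def fill(self, selector: str, value: str) -> None: ...


class LoginPage:
    """Layer 4 (Page Object): knows the selectors, nothing else."""
    def __init__(self, browser: BrowserInterface) -> None:
        self.browser = browser

    def submit_credentials(self, user: str, pw: str) -> None:
        self.browser.fill("#user", user)
        self.browser.fill("#pass", pw)
        self.browser.click("#login")


class LogIn:
    """Layer 3 (Task): one user intention, expressed via page objects."""
    def __init__(self, user: str, pw: str) -> None:
        self.user, self.pw = user, pw

    def perform_as(self, role: "Role") -> None:
        LoginPage(role.browser).submit_credentials(self.user, self.pw)


class Role:
    """Layer 2 (Role): an actor that attempts tasks."""
    def __init__(self, name: str, browser: BrowserInterface) -> None:
        self.name, self.browser = name, browser

    def attempts_to(self, task: LogIn) -> None:
        task.perform_as(self)


class FakeBrowser:
    """Stand-in driver so the sketch runs without a real browser."""
    def __init__(self) -> None:
        self.log: list[tuple] = []

    def click(self, selector: str) -> None:
        self.log.append(("click", selector))

    def fill(self, selector: str, value: str) -> None:
        self.log.append(("fill", selector, value))


# Layer 1 (Test): only roles and tasks appear here, never selectors.
admin = Role("admin", FakeBrowser())
admin.attempts_to(LogIn("alice", "secret"))
```

The point of the layering is that an agent generating tests is forced to write at the top layer; the governance loop can then flag any test that reaches down and hard-codes a selector.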
The kernel follows a minimalist design with no external dependencies. I'll cover the kernel itself in a separate post.
Open source: https://github.com/isagawa-co/isagawa-kernel