Fear is replaced by discipline. Sessions end clean. Context lives in the repo. The system remembers so the agent doesn't have to.
The lean approach turns the vicious cycle into a virtuous one. Every component of the degradation loop has a structural opposite.
The human curates only the rules that matter. Not every lesson learned. Not every near-miss. Only what prevents irreversible damage.
The agent starts with a clear mind. It pulls context when the task requires it. It stops when it recognizes uncertainty.
The agent reasons well. It shows its work. The human gets results they can trace and trust.
The human delegates more. Calibration improves. Both sides get smarter. The system improves every session.
Most humans do not know where the boundary is between inferable and invisible knowledge. They overestimate the agent on vibes and underestimate it on specifics.
Calibration closes this gap. The human watches the agent work with a lean workbench. The agent succeeds — the human learns that context was not needed. The agent hits a gap — the human writes a supply document to fill it. The agent pulls the andon cord — the human discovers invisible knowledge they had not externalized.
After ten sessions, the human knows which decisions need documentation. After fifty, the manifest routes are tuned to real task patterns. The constitution has been pruned twice. Trust is the currency. Calibration is the mechanism.
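A tuned manifest can be as simple as a routing table. The patterns, document paths, and matching scheme below are illustrative assumptions, not part of the thesis; the point is that "tuning" means editing a small, inspectable table as real task patterns emerge.

```python
import fnmatch

# Hypothetical manifest: each route maps a task pattern to the supply
# documents the agent should pull before starting. All names are invented.
MANIFEST = [
    ("migration/*", ["docs/schema-invariants.md", "docs/rollback.md"]),
    ("api/*",       ["docs/versioning-policy.md"]),
    ("*",           []),  # default route: start with a clear mind
]

def route(task: str) -> list[str]:
    """Return the supply documents for the first pattern matching the task."""
    for pattern, docs in MANIFEST:
        if fnmatch.fnmatch(task, pattern):
            return docs
    return []
```

A first-match table keeps routing decisions visible in the repo, where calibration sessions can prune or reorder them.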
Problem solved or gap filled — the session ends. Context lives in the repo, not the conversation history. The agent does not carry baggage from previous sessions. It starts clean every time.
When a gap is found, help write the supply document to fill it. When the problem is solved, the session ends. This is the andon rule in practice: the act of stopping is the system working, not failing.
The system remembers so the agent doesn't have to. Context compaction is the human maintaining the system between sessions. Demote, prune, validate. Sacred is the enemy of lean.
Lean systems decay without discipline. The constitution will bloat. This is not a risk. It is a certainty. Every near-miss becomes a new rule. Every incident gets memorialized as a prohibition. Adding a rule feels like safety. Removing one feels like gambling.
Sustaining requires two commitments. From the system: automated enforcement — line budgets, manifest audits, staleness checks, timestamps on supply documents. These run without human intervention. From the human: periodic curation — review what the constitution contains, evaluate whether the supply reflects reality. This is not busywork. It is the ongoing cost of a system that works.
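The automated side can be a small audit script run in CI. This is a minimal sketch under assumed conventions: supply documents live in one directory as Markdown files, the line budget and staleness window are invented numbers, and file modification time stands in for a timestamp field.

```python
import time
from pathlib import Path

LINE_BUDGET = 200       # assumed max lines per supply document
STALE_AFTER_DAYS = 90   # assumed window before a document is flagged stale

def audit(supply_dir: str) -> list[str]:
    """Return one human-readable violation per budget or staleness breach."""
    violations = []
    now = time.time()
    for doc in sorted(Path(supply_dir).glob("*.md")):
        lines = len(doc.read_text().splitlines())
        if lines > LINE_BUDGET:
            violations.append(f"{doc.name}: {lines} lines exceeds budget of {LINE_BUDGET}")
        age_days = (now - doc.stat().st_mtime) / 86400
        if age_days > STALE_AFTER_DAYS:
            violations.append(f"{doc.name}: untouched for {age_days:.0f} days")
    return violations
```

Failing the build on a non-empty violation list turns curation from a memory exercise into a mechanical one: the human decides what to demote or prune, but the system decides when to ask.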
The human maintains the system. The system maintains the agent. The agent produces quality work. The quality work justifies the human's investment. This is the sustain loop.
The question is not how to instruct an agent. It is how to build a system where a human and an agent make each other more capable than either is alone.
Humans and AI agents are force multipliers of one another — but only when neither side is buried under unnecessary weight. The human's job is not to preload the agent with everything it might need. The human's job is to maintain a system where the right context reaches the agent at the right time.
The agent's job is not to memorize. It is to reason. Give it room.
The complete argument — all four rules, boundary conditions, the knowledge boundary, calibration mechanics, and the force multiplier.
The Lean Context Thesis →