
On March 20, 2026, we rebuilt all seven stages of our pipeline architecture in a single day. It was the most impactful change we’ve made to Maudel — and the story of how we got there is just as important as the result.
The Problem: Prompts in Code
Each pipeline stage had a single, massive prompt baked into TypeScript. The implementation stage alone was over 200 lines of template literal. Changing one stage’s behavior meant touching code, redeploying, and hoping you didn’t break another stage’s prompt.
This made iteration painful. Every prompt tweak required a developer. Testing prompt changes meant running the full pipeline. And version-controlling prompts embedded in code was effectively impossible — diffs were unreadable noise.
The First Attempt (And the Revert)
On March 19, we moved prompts into text files. The instinct was right — prompts belong in content files, not code. But the execution was wrong.
We moved the raw text into .md files and loaded them at runtime. No variable interpolation. No input hydration. No way to compose segments. The prompts were files, but they were just as monolithic as before — just stored differently.
We reverted the same day.
The Second Attempt (Getting It Right)
The next day, we rebuilt it properly. The composable pipeline architecture has three core principles:
File-driven stage definitions. Each stage lives in its own directory with markdown files: ACTION.md (what to do), INPUTS-OUTPUTS.md (what you need and what you produce), and mode-specific instructions. Stages are inspectable, editable, and version-controlled.
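As a minimal sketch of what loading such a stage directory could look like — the file names ACTION.md and INPUTS-OUTPUTS.md come from the description above, but the loader function, the StageDefinition shape, and the convention that any other .md file is a mode are assumptions for illustration:

```typescript
import * as fs from "fs";
import * as os from "os";
import * as path from "path";

// Hypothetical shape of a stage definition loaded from disk.
interface StageDefinition {
  action: string;                // contents of ACTION.md
  inputsOutputs: string;         // contents of INPUTS-OUTPUTS.md
  modes: Record<string, string>; // mode-specific instruction files
}

// Load a stage directory: ACTION.md and INPUTS-OUTPUTS.md are required;
// every other .md file is treated as a mode-specific instruction set.
function loadStage(dir: string): StageDefinition {
  const read = (name: string) => fs.readFileSync(path.join(dir, name), "utf8");
  const modes: Record<string, string> = {};
  for (const file of fs.readdirSync(dir)) {
    if (file.endsWith(".md") && file !== "ACTION.md" && file !== "INPUTS-OUTPUTS.md") {
      modes[path.basename(file, ".md")] = read(file);
    }
  }
  return { action: read("ACTION.md"), inputsOutputs: read("INPUTS-OUTPUTS.md"), modes };
}

// Demo: build a throwaway stage directory and load it back.
const dir = fs.mkdtempSync(path.join(os.tmpdir(), "stage-"));
fs.writeFileSync(path.join(dir, "ACTION.md"), "Implement the story.");
fs.writeFileSync(path.join(dir, "INPUTS-OUTPUTS.md"), "Needs: plan. Produces: code.");
fs.writeFileSync(path.join(dir, "strict.md"), "Fail on any lint warning.");
const stage = loadStage(dir);
console.log(stage.modes["strict"]); // prints "Fail on any lint warning."
```

Because each stage is just a directory of markdown files, a reviewer can diff a behavior change the same way they would diff code.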
9-layer system prompt assembly. At runtime, a StagePromptAssembler service fetches prompts from disk and composes them into nine layers organized by cognitive zone — primacy (agent identity, pipeline rules), reference (stage instructions, skills, mode config), and recency (expected outputs, upstream context). This leverages the fact that LLMs attend most strongly to the beginning and end of the context window, and least to the middle.
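The zone ordering can be sketched like this — the zone names come from the description above, but the layer type, the sort-based composition, and the sample layer names are illustrative assumptions, not the actual StagePromptAssembler internals:

```typescript
// The three cognitive zones, in the order they appear in the final prompt.
type Zone = "primacy" | "reference" | "recency";

interface PromptLayer {
  zone: Zone;
  name: string;    // e.g. "agent-identity", "stage-instructions"
  content: string;
}

const ZONE_ORDER: Zone[] = ["primacy", "reference", "recency"];

// Compose layers so primacy content leads the prompt and recency content
// ends it. Array.prototype.sort is stable, so layers within the same zone
// keep their declared order.
function assemble(layers: PromptLayer[]): string {
  return [...layers]
    .sort((a, b) => ZONE_ORDER.indexOf(a.zone) - ZONE_ORDER.indexOf(b.zone))
    .map((l) => l.content)
    .join("\n\n");
}

const prompt = assemble([
  { zone: "recency", name: "expected-outputs", content: "Produce a plan document." },
  { zone: "primacy", name: "agent-identity", content: "You are the planning stage." },
  { zone: "reference", name: "stage-instructions", content: "Follow ACTION.md." },
]);
console.log(prompt.startsWith("You are the planning stage.")); // true
```

The point of sorting by zone rather than by declaration order is that stage authors can add layers anywhere without having to reason about context-window placement.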
Template variable resolution. Stage definitions use variables like {{workspacePath}}, {{storyDocument}}, and {{testFrameworkGuidance}} that resolve to actual runtime values. Input hydration matches declared inputs to runtime outputs from upstream stages.
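A minimal sketch of the {{variable}} resolution step — the variable names mirror the examples above, but the function name and the leave-unknown-variables-untouched behavior are assumptions:

```typescript
// Replace {{name}} placeholders with runtime values; unknown variables
// are left intact so a missing input is visible rather than silently blank.
function resolveTemplate(
  template: string,
  vars: Record<string, string>
): string {
  return template.replace(/\{\{(\w+)\}\}/g, (match, name) =>
    name in vars ? vars[name] : match
  );
}

const resolved = resolveTemplate(
  "Work in {{workspacePath}} using {{storyDocument}}.",
  { workspacePath: "/repo/app", storyDocument: "story-42.md" }
);
console.log(resolved); // "Work in /repo/app using story-42.md."
```

Input hydration then amounts to building that vars record from the declared inputs of the stage and the recorded outputs of the stages upstream of it.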
Why This Matters
Prompt engineering at scale requires the same modular thinking as software architecture. Most teams haven’t figured this out yet.
When your prompts live in code, only developers can change them. When your prompts are composable files, anyone on the team can iterate on stage behavior — product managers can adjust story generation criteria, QA can tune validation rules, architects can refine planning constraints.
The composable pipeline turned prompt engineering from a developer bottleneck into a team capability. And building all seven stages in a single day proved the architecture was right — the second time around.
