nudgeDSL is a token-dense, human-readable protocol that encodes executable intent from LLMs to any backend. Part of the Nudge Framework — the methodology for building software with AI without losing your mind.
// What you write today (78 tokens)
{
  "action": "agent_interact",
  "agent_id": "tessier",
  "atom_modifiers": { "AT_R_CONF_tessier_delta": -8 },
  "flag": "alerte"
}

// What you write with nudgeDSL (21 tokens)
AGENT("tessier") >> CONF(-8) >> FLAG("alerte")
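To make the equivalence concrete, here is a hedged sketch of the mapping this one example implies — expanding the compact chain back into the verbose JSON. The field-name rules are inferred from this single example, not from the official grammar.

```python
import json
import re

def expand(dsl: str) -> dict:
    """Expand an AGENT >> CONF >> FLAG chain into the verbose JSON form."""
    calls = re.findall(r'(\w+)\(([^)]*)\)', dsl)
    out = {}
    for name, raw in calls:
        arg = raw.strip().strip('"')
        if name == "AGENT":
            out["action"] = "agent_interact"
            out["agent_id"] = arg
        elif name == "CONF":
            # Modifier key pattern inferred from the example above
            out["atom_modifiers"] = {f"AT_R_CONF_{out['agent_id']}_delta": int(arg)}
        elif name == "FLAG":
            out["flag"] = arg
    return out

print(json.dumps(expand('AGENT("tessier") >> CONF(-8) >> FLAG("alerte")'), indent=2))
```

The point is not this particular translator — it is that the DSL is lossless: the backend can always recover the full JSON it needs.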
The Nudge Framework is a human-in-the-loop methodology for building software with AI. Not a library. Not a tool. A discipline. These three principles are non-negotiable.
When context is tight and relevant, the AI reasons better. Noise — irrelevant file content, dead history, verbose instructions — degrades output quality. Saving tokens is not an optimisation. It is the foundation of reliable AI reasoning.
The AI is an execution engine, not a driver. You define the blueprint, sequence the work, and set the acceptance criteria. The AI executes exactly what you scoped. Autonomy is earned per stage, not granted by default.
One monolithic prompt doesn't scale. A Nudge session gives the AI exactly what it needs for the current task — no more. The shard is the quantum of work. The session is the quantum of context.
The fastest way to start. Copy a prompt, paste it into any AI, follow the rules. No tooling required. Each stage has a specific role with a specific posture — the AI is not a single assistant, it's a pipeline.
Stress-test the idea, challenge every assumption, produce the four founding documents. The AI is adversarial here — that's intentional.
Take the entire tasklist and map it into specific, ruthless shard files. Each shard is an implementable spec — not a design document, not a wishlist.
Write the code exactly as specified by the shard. No hallucinating features. No refactoring adjacent code. If the shard is ambiguous, stop and report.
Validate the output against the shard. Do not improve, only verify. The reviewer's verdict is binding — READY, NEEDS REVISION, or BLOCKED.
Five stages. Five AI role profiles. Decreasing autonomy as you get closer to production code. The pipeline prevents hallucinations and expensive rewrites by separating thinking from delivery.
Once you've run a few sessions with the light version, these patterns will save you hours. They're not rules for their own sake — each one exists because someone wasted tokens finding out the hard way.
The AI is not a single tool. It operates in five distinct modes depending on the pipeline stage. Autonomy decreases as you get closer to production code. The tight constraints of later phases only apply to delivery — the early phases are deliberately unconstrained.
| Role | Stage | Trust | Posture |
|---|---|---|---|
| Architecture Critic | Blueprint | High | Adversarial. Breaks the design before you build it. Finds weak points, missing error paths, challenges every assumption. |
| Dependency Analyst | Task list | Medium | Analytical. Sequences tasks by dependency graph. Flags hidden coupling and risks. Free to reorganize the task order. |
| Specification Writer | Shard | Medium | Precise. Translates tasks into function signatures, data shapes, error cases, acceptance criteria. No implementation code. |
| Disciplined Implementer | Development | Low | Compliant. Executes against shards. No architectural decisions. Stops if the spec is ambiguous — never guesses. |
| QA Auditor | Verification | Zero | Evaluative. Reviews the deliverable against the shard spec. Identifies untested edge cases. Flags out-of-scope changes. |
Do not restart after every phase by default. Restarting forces the AI to rebuild from a lossy handover. But carrying dead context is worse than restarting.
Does the next slice share more than ~60% of its file surface with the current slice? If YES → continue. If NO → handover and restart.
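That heuristic is cheap to compute before deciding. A minimal sketch, with hypothetical file names and the ~60% threshold from the rule above:

```python
def file_overlap(current: set, next_slice: set) -> float:
    """Fraction of the next slice's file surface already in the current session."""
    if not next_slice:
        return 0.0
    return len(current & next_slice) / len(next_slice)

# Hypothetical slices
current = {"crisis_engine.go", "crisis_test.go", "crisis_types.go"}
upcoming = {"crisis_engine.go", "crisis_types.go", "alerts.go"}

decision = "continue" if file_overlap(current, upcoming) > 0.60 else "handover and restart"
print(decision)  # 2 of 3 upcoming files are already in context -> continue
```

The exact threshold matters less than applying it consistently: decide by overlap, not by habit.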
Before stopping, the AI generates a handover using strict Context Minification — telegraphic shorthand, no conversational filler. This is the only thing the next session needs.
nudgeDSL makes this executable: MARK >> MOD // CREATE >> NOTE >> NEXT
Stop using one giant CONTEXT.md. Create specific shards for each subsystem. Tell the AI to read only what's relevant to the current phase.
Your core rules file is injected every turn. Every token in it competes with your actual task context.
Keep it under 40 lines. Put phase-specific rules in the phase shard, not the global rules file. The shard is the spec — the rules file is the operating manual.
AIs waste massive amounts of context running ls commands or reading 1000-line files to find one function. Give the AI a map instead of making it explore. These scripts are project-agnostic — copy them once, use everywhere.
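As one illustration of such a map script — this is a Python-only sketch that lists each file's top-level symbols so the model can target its reads; adapt the parsing for other languages:

```python
import ast
import pathlib

def repo_map(root: str = ".") -> dict:
    """Map each .py file under root to its top-level function and class names."""
    out = {}
    for path in sorted(pathlib.Path(root).rglob("*.py")):
        try:
            tree = ast.parse(path.read_text(), filename=str(path))
        except (SyntaxError, UnicodeDecodeError):
            continue  # skip files the parser cannot read
        names = [node.name for node in tree.body
                 if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef))]
        if names:
            out[str(path)] = names
    return out

# Print a compact map the AI can read instead of exploring the tree itself
for file, symbols in repo_map().items():
    print(f"{file}: {', '.join(symbols)}")
```

One paste of this output replaces dozens of exploratory file reads.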
Session order follows the dependency graph, not convention. The practical default:
Do not write tests to satisfy a process rule. A bad test is worse than no test — the AI will contort its implementation to satisfy a broken contract.
Most data flow docs fail because people try to document everything. The Nudge approach uses three layers with decreasing maintenance cost:
The Nudge Framework already moved you toward telegraphic, structured outputs. nudgeDSL completes that journey — your orchestration documents become machine-executable without losing human readability.
slice: crisis-engine-v1
status: complete
files_modified:
  - crisis_engine.go
  - crisis_test.go
files_created:
  - crisis_types.go
decisions_locked:
  - CrisisEngine uses mutex, not channel
  - Crisis IDs are UUIDs, not sequential
next_slice_needs:
  - contracts.yaml
  - crisis_types.go skeleton
MARK("crisis-engine-v1", "done")
>> MOD("crisis_engine.go")
// MOD("crisis_test.go")
>> CREATE("crisis_types.go")
>> NOTE("mutex not channel")
>> NOTE("crisis-ids are UUIDs")
>> NEXT("next-slice",
needs="contracts.yaml")
MARK("job-7", "running") >> FETCH("data") >> MARK("job-7", "done")
FETCH("primary-db") | FETCH("replica-db")
WRITE("db") // CACHE("redis")
PING("health") ** 3
REGISTRY("nudge-framework", version="0.1") >> SHARD("A1", phase="A") >> ACCEPT("zero NotImplementedError") >> EXCLUDE("gif_writer.py")
MARK("slice-7", "running") >> (MOD("handler.go") // TEST("TestHandler")) >> MARK("slice-7", "done")
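To show how little machinery a backend needs, here is a minimal dispatcher sketch for the sequential case only. It assumes just one thing from the examples above — that `>>` chains atoms left to right; parallel (`//`), fallback (`|`), and repeat (`**`) are omitted, and the handlers are hypothetical. See the spec for the authoritative grammar.

```python
import re

def run_chain(dsl: str, handlers: dict) -> list:
    """Dispatch a '>>'-sequenced chain of atoms to handler functions."""
    results = []
    for step in (s.strip() for s in dsl.split(">>")):
        name, raw = re.match(r'(\w+)\((.*)\)$', step).groups()
        # Naive argument split: fine for these examples, not for quoted commas
        args = [a.strip().strip('"') for a in raw.split(",")] if raw else []
        results.append(handlers[name](*args))
    return results

handlers = {
    "MARK": lambda job, status: f"{job}:{status}",
    "FETCH": lambda src: f"fetched {src}",
}
print(run_chain('MARK("job-7", "running") >> FETCH("data") >> MARK("job-7", "done")', handlers))
```

Any runtime that can map an atom name to a function can execute nudgeDSL — that is the whole point of the atom registry.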
Tier 1 works without any key — validate syntax, generate prompts, export grammar. Tier 2 adds prose translation with your own Anthropic API key. Click any example above to load it.
Import a custom atom registry to validate DSL against your project's vocabulary. The registry is a JSON file (max 100KB) following the nudgeDSL registry format. View the core registry on GitHub.
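Before importing, a quick shape check catches the obvious mistakes. This is only an illustrative pre-flight sketch — the authoritative schema is the nudgeDSL registry format itself:

```python
def validate_registry(reg: dict) -> list:
    """Return a list of shape errors; an empty list means the basic shape is OK."""
    errors = []
    for key in ("domain", "version", "atoms"):
        if key not in reg:
            errors.append(f"missing top-level key: {key}")
    for i, atom in enumerate(reg.get("atoms", [])):
        for key in ("atom", "fn", "args"):
            if key not in atom:
                errors.append(f"atoms[{i}] missing key: {key}")
    return errors

registry = {
    "domain": "my-project",
    "version": "0.1",
    "atoms": [{"atom": "MARK", "fn": "UpdateStatus",
               "args": [{"name": "id", "type": "string"}]}],
}
print(validate_registry(registry))  # []
```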
Generate the system prompt injection for your loaded registry. Paste this into your agent's system prompt — it teaches any LLM to output valid nudgeDSL for your atoms.
Nobody is locked out of the core value. The DSL saves tokens regardless of which tier you use. The tiers determine how much friction there is in adopting it.
The nudgeDSL spec is a single markdown file. It's the grammar source of truth, the agent context document, and the contributor reference — all in one. Start here before anything else.
Read nudgeDSL-spec-v0.1.md →

Create an atoms.json file for your project. Register the actions your agents need to express. Start with 5–10 atoms — you can always add more. The core registry gives you the generic ones for free.
{
"domain": "my-project",
"version": "0.1",
"atoms": [
{
"atom": "MARK",
"fn": "UpdateStatus",
"args": [
{ "name": "id", "type": "string" },
{ "name": "status", "type": "string",
"enum": ["done","pending","skipped"] }
]
}
]
}
Use the Generate Prompt tab above with your registry loaded to get the exact system prompt injection for your atoms. Paste it into your agent's system prompt. Done — your agent now outputs nudgeDSL.
For the Nudge Framework specifically, use the nudge-framework.json registry and you get all five pipeline stages ready to use.
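For a sense of what such an injection contains, here is a hypothetical generator over a registry shaped like the atoms.json example above. The playground's actual prompt format may differ; this only shows the idea — teach the model each atom's signature and its backend binding.

```python
def prompt_injection(reg: dict) -> str:
    """Build a system-prompt fragment teaching an LLM the registry's atoms."""
    lines = [f'You may emit nudgeDSL atoms for domain "{reg["domain"]}":']
    for atom in reg["atoms"]:
        sig = ", ".join(arg["name"] for arg in atom["args"])
        lines.append(f'- {atom["atom"]}({sig})  -> backend fn {atom["fn"]}')
    lines.append('Chain atoms with ">>". Output DSL only, no prose.')
    return "\n".join(lines)

registry = {
    "domain": "my-project",
    "version": "0.1",
    "atoms": [{"atom": "MARK", "fn": "UpdateStatus",
               "args": [{"name": "id", "type": "string"},
                        {"name": "status", "type": "string"}]}],
}
print(prompt_injection(registry))
```

Because the injection is generated from the registry, adding an atom to atoms.json automatically extends what the agent is allowed to say.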