v0.1.0 · pre-release

Stop paying tokens
for structure you invented.

nudgeDSL is a token-dense, human-readable protocol that encodes executable intent from LLMs to any backend. Part of the Nudge Framework — the methodology for building software with AI without losing your mind.

Open playground Read the framework
nudgeDSL vs JSON — same intent, 73% fewer tokens
// What you write today (78 tokens)
{
  "action": "agent_interact",
  "agent_id": "tessier",
  "atom_modifiers": {
    "AT_R_CONF_tessier_delta": -8
  },
  "flag": "alerte"
}

// What you write with nudgeDSL (21 tokens)
AGENT("tessier") >> CONF(-8) >> FLAG("alerte")
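The compression is not just cosmetic: the chained form is trivially machine-parseable. A minimal illustrative sketch (the real grammar lives in the spec; this regex handles only the flat `ATOM(args) >> ATOM(args)` shape shown above):

```python
import re

# Matches a single ATOM(arg, arg, ...) call. Illustrative only —
# the authoritative grammar is nudgeDSL-spec-v0.1.md.
CALL = re.compile(r'(\w+)\(([^)]*)\)')

def parse_chain(line: str):
    """Return [(atom, [args])] for each `>>`-chained call."""
    calls = []
    for step in line.split(">>"):
        m = CALL.search(step)
        if not m:
            raise ValueError(f"unparseable step: {step.strip()!r}")
        atom, raw = m.group(1), m.group(2)
        args = [a.strip().strip('"') for a in raw.split(",")] if raw else []
        calls.append((atom, args))
    return calls

print(parse_chain('AGENT("tessier") >> CONF(-8) >> FLAG("alerte")'))
# [('AGENT', ['tessier']), ('CONF', ['-8']), ('FLAG', ['alerte'])]
```

One pass of string splitting recovers the same structure the JSON encoded in 78 tokens.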
The Nudge Framework

Structure beats hope.

The Nudge Framework is a human-in-the-loop methodology for building software with AI. Not a library. Not a tool. A discipline. These three principles are non-negotiable.

01

Token efficiency = quality

When context is tight and relevant, the AI reasons better. Noise — irrelevant file content, dead history, verbose instructions — degrades output quality. Saving tokens is not an optimisation. It is the foundation of reliable AI reasoning.

02

Human as orchestrator

The AI is an execution engine, not a driver. You define the blueprint, sequence the work, and set the acceptance criteria. The AI executes exactly what you scoped. Autonomy is earned per stage, not granted by default.

03

Context sharding

One monolithic prompt doesn't scale. A Nudge session gives the AI exactly what it needs for the current task — no more. The shard is the quantum of work. The session is the quantum of context.

Key principle
Format follows consumer. AI-consumed outputs are telegraphic. Human-consumed outputs are scannable. nudgeDSL is both at the same time — that's the innovation.
Start here — nudge framework light

Four prompts.
One methodology.

The fastest way to start. Copy a prompt, paste it into any AI, follow the rules. No tooling required. Each stage has a specific role with a specific posture — the AI is not a single assistant, it's a pipeline.

Stage 01
Discovery
Architecture Critic · High autonomy

Stress-test the idea, challenge every assumption, produce the four founding documents. The AI is adversarial here — that's intentional.

in: Raw concept, target goals
out: context.md, blueprint.md, tasklist.md, index.md
You are running a Nudge Discovery Session. Your role: Architecture Critic. Your job is to stress-test this idea and produce four documents:
1. context.md (project context, stack, constraints)
2. blueprint.md (architecture, data model, risks)
3. tasklist.md (ordered build sequence)
4. index.md (master routing table tracking all tasks/shards)
Start by asking me 10 sharpening questions about scope, constraints, and priorities. After my answers, produce the four documents.
Challenge every assumption. Find the three weakest points. Identify missing error paths. Your output is a revised blueprint with flagged risks.
Stage 02
Shard
Specification Writer · Medium autonomy

Take the entire tasklist and map it into specific, ruthless shard files. Each shard is an implementable spec — not a design document, not a wishlist.

in: context.md, blueprint.md, tasklist.md
out: shard-{n}.md files, index.md updated
You are a Nudge Brief Writer producing Shards. I will give you context.md, blueprint.md, and the tasklist. Process the entire tasklist and map it into shard-{n}.md files. For each shard define:
- Exact deliverable description
- Inputs and outputs (files, not concepts)
- Constraints, Must Includes, Must NOT Includes
- Acceptance criteria and edge cases
If anything is ambiguous, list the ambiguity explicitly. Do not write implementation code. Note what should be added to index.md after each task.
Stage 03
Execute
Disciplined Implementer · Low autonomy

Write the code exactly as specified by the shard. No hallucinating features. No refactoring adjacent code. If the shard is ambiguous, stop and report.

in: context.md, index.md, shard-{n}.md
out: deliverable code, handover.md, index.md updated
You are a Nudge Executor. Follow the shard specification exactly. Rules:
- Do not add things not in the shard.
- Do not skip requirements.
- Do not modify interfaces outside the shard scope.
- Do not refactor adjacent code.
- If the shard is ambiguous, STOP and report — do not guess.
- GUARDRAIL: Do NOT embed Handover notes inside the deliverable file. Output code cleanly.
- GUARDRAIL: If the deliverable is too large to generate in one response, STOP and ask me to split the shard.
At the end, output separately:
1. Continuous Improvement (what was unclear, what went well)
2. Handover (completed, decisions locked, next slice needs, blockers)
Stage 04
Review
QA Auditor · Zero autonomy

Validate the output against the shard. Do not improve, only verify. The reviewer's verdict is binding — READY, NEEDS REVISION, or BLOCKED.

in: shard-{n}.md, deliverable, handover.md
out: validation.md with verdict
You are a Nudge Reviewer. I will give you the shard, deliverable, and handover. Produce a validation.md checking:
1. Spec compliance — every requirement met?
2. Exclusions — did it include forbidden things?
3. Fact check and quality — does it actually work?
4. Regression risks — what could this break?
5. Out-of-scope changes — did it touch things it shouldn't?
Final verdict must be one of:
- READY
- NEEDS REVISION (list every fix required)
- BLOCKED (list what is missing before this can proceed)
Do not suggest improvements. Only verify against the shard.
The 7 Nudge rules
1. One task per conversation — no dead history.
2. Read the handover before starting.
3. Brief before execution — never execute without a shard.
4. Review before done.
5. Two attempts max — if you miss twice, rewrite the shard.
6. Only what's needed — do not guess context.
7. Source of truth — context.md and index.md are locked.
5-stage pipeline

Every feature follows
the same path.

Five stages. Five AI role profiles. Decreasing autonomy as you get closer to production code. The pipeline prevents hallucinations and expensive rewrites by separating thinking from delivery.

Stage 01
Blueprint
Architecture Critic
Today — prose documents
context.md — rules and stack
blueprint.md — architecture + risks
tasklist.md — build sequence
nudgeDSL format
BLUEPRINT("project", version="1.0")
STACK("go", "flutter", "postgres")
RISK("no-auth-ws", severity=2)
Stage 02
Task list
Dependency Analyst
Today — prose documents
Ordered markdown list
Dependency annotations inline
index.md updated manually
nudgeDSL format
TASK("auth-01") >> TASK("api-01")
TASK("ui-01", depends="api-01")
MARK("index", "updated")
Stage 03
Shard
Specification Writer
Today — prose documents
shard-{n}.md — deliverable in prose
Acceptance criteria as checkboxes
Constraints as must/must-not sentences
nudgeDSL format
SHARD("A1", phase="A")
ACCEPT("zero NotImplementedError")
EXCLUDE("gif_writer.py")
Stage 04
Development
Disciplined Implementer
Today — prose documents
handover.md — telegraphic prose
index.md updated manually
Decisions as bullet points
nudgeDSL format
MARK("A1", "done")
CREATE("migrations/v2.sql")
NOTE("used mutex not channel")
Stage 05
Verification
QA Auditor
Today — prose documents
validation.md — READY / REVISION
Regression risks as prose
nudgeDSL format
VERIFY("A1") >> RESULT("ready")
FLAG("regression", scope="lexer")
Advanced framework

The full playbook.

Once you've run a few sessions with the light version, these patterns will save you hours. They're not rules for their own sake — each one exists because someone wasted tokens finding out the hard way.

The AI is not a single tool. It operates in five distinct modes depending on the pipeline stage. Autonomy decreases as you get closer to production code. The tight constraints of later phases only apply to delivery — the early phases are deliberately unconstrained.

Role · Stage · Trust · Posture
Architecture Critic · Blueprint · High
Adversarial. Breaks the design before you build it. Finds weak points and missing error paths; challenges every assumption.
Dependency Analyst · Task list · Medium
Analytical. Sequences tasks by dependency graph. Flags hidden coupling and risks. Free to reorganize the task order.
Specification Writer · Shard · Medium
Precise. Translates tasks into function signatures, data shapes, error cases, acceptance criteria. No implementation code.
Disciplined Implementer · Development · Low
Compliant. Executes against shards. No architectural decisions. Stops if the spec is ambiguous — never guesses.
QA Auditor · Verification · Zero
Evaluative. Reviews the deliverable against the shard spec. Identifies untested edge cases. Flags out-of-scope changes.
Session continuation heuristic

When to restart vs continue

Do not restart after every phase by default. Restarting forces the AI to rebuild from a lossy handover. But carrying dead context is worse than restarting.

Decision test

Does the next slice share more than ~60% of its file surface with the current slice? If YES → continue. If NO → handover and restart.
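The heuristic can be made mechanical. A minimal sketch, assuming each slice's file surface is available as a set of paths (the file names below are invented):

```python
def should_continue(current_files: set[str], next_files: set[str],
                    threshold: float = 0.6) -> bool:
    """Continue the session if the next slice shares more than
    `threshold` of its file surface with the current slice."""
    if not next_files:
        return False  # nothing shared — handover and restart
    shared = len(current_files & next_files) / len(next_files)
    return shared > threshold

current = {"crisis_engine.go", "crisis_types.go", "crisis_test.go"}
nxt = {"crisis_engine.go", "crisis_types.go", "contracts.yaml"}
print(should_continue(current, nxt))  # 2/3 ≈ 0.67 > 0.6 → True
```

The ~60% figure is a default, not a law; tune the threshold to how lossy your handovers tend to be.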

Context handover format

The .handover.md contract

Before stopping, the AI generates a handover using strict Context Minification — telegraphic shorthand, no conversational filler. This is the only thing the next session needs.

  • slice name and status
  • files modified and created
  • contracts changed
  • decisions locked (1 line each)
  • what the next slice needs

nudgeDSL makes this executable: MARK >> MOD // CREATE >> NOTE >> NEXT

Context sharding

Specific shards, not monoliths

Stop using one giant CONTEXT.md. Create specific shards for each subsystem. Tell the AI to read only what's relevant to the current phase.

  • frontend_auth.md — not context.md
  • database_schema.md — not context.md
  • A shard older than the last commit is stale — regenerate it
  • Shard generation is an output of each phase, not a static artifact
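The staleness rule is easy to automate. A hedged sketch, assuming the project is a git repository; the `shards/*.md` layout and helper names are invented:

```python
import os
import subprocess

def last_commit_ts() -> int:
    """Unix timestamp of the most recent commit (git log -1 --format=%ct)."""
    out = subprocess.check_output(["git", "log", "-1", "--format=%ct"])
    return int(out.strip())

def is_stale(shard_mtime: float, commit_ts: float) -> bool:
    """A shard last written before the last commit is stale."""
    return shard_mtime < commit_ts

def stale_shards(paths: list[str]) -> list[str]:
    """Return every shard path that predates the last commit."""
    ts = last_commit_ts()
    return [p for p in paths if is_stale(os.path.getmtime(p), ts)]
```

Running `stale_shards(glob.glob("shards/*.md"))` at the start of a session lists everything to regenerate before the AI reads it.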
System prompt hygiene

Keep AI_RULES.md lean

Your core rules file is injected every turn. Every token in it competes with your actual task context.

Hard limit

Keep it under 40 lines. Put phase-specific rules in the phase shard, not the global rules file. The shard is the spec — the rules file is the operating manual.

AIs waste massive amounts of context running ls commands or reading 1000-line files to find one function. Give the AI a map instead of making it explore. These scripts are project-agnostic — copy them once, use everywhere.

pre_session.ps1
Run before every session. Generates session_state.txt — a snapshot of active phases, existing files, and stubs so the AI never scans directories.
update_structure.py
Generates structure_map.txt mapping every function to its exact line number. Replaces 7 chunked file reads with 1 targeted read.
find_symbol.ps1
The sniper. Finds a function and returns the matched line + 2 lines of context. Eliminates 3000-token full-file reads entirely.
run_task.ps1
Wraps test/build commands. Prints "OK" and hides output on success. On failure, writes only the first 20 lines to .last_error.txt. AI reads that file only when needed.
get_skeleton
Uses Tree-sitter to output only class names, function signatures, and prop types — a lightweight dependency graph without a full file read.
read_anchor
Wraps critical logic in @ANCHOR-START comments. Fetches exact blocks without parsing the whole file. Add @DEPENDS annotations for zero-read dependency maps.
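As one illustration, read_anchor can be a few lines. This is a hypothetical sketch, not the actual script; the `@ANCHOR-START:<name>` / `@ANCHOR-END:<name>` marker spelling is an assumption to adapt to your own convention:

```python
def read_anchor(text: str, name: str) -> str:
    """Return the block between the named anchor comments,
    without the AI ever reading the rest of the file."""
    start = f"@ANCHOR-START:{name}"
    end = f"@ANCHOR-END:{name}"
    lines = text.splitlines()
    try:
        i = next(n for n, l in enumerate(lines) if start in l)
        j = next(n for n, l in enumerate(lines) if end in l)
    except StopIteration:
        raise KeyError(f"anchor {name!r} not found")
    return "\n".join(lines[i + 1 : j])

src = """\
// @ANCHOR-START:retry
for i := 0; i < 3; i++ { attempt() }
// @ANCHOR-END:retry
"""
print(read_anchor(src, "retry"))
# for i := 0; i < 3; i++ { attempt() }
```

The session then fetches exact blocks by name instead of spending thousands of tokens on full-file reads.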
Test-first rule

When to write tests first

Session order follows the dependency graph, not convention. The practical default:

  • Clear input/output contract (API, data transform, business logic) → write the test first, always. The test is the spec — 10–30 lines encoding the exact contract with a binary pass/fail signal.
  • Primarily visual or interactive (UI layout, animation, map) → backend first, then frontend, then tests as regression guards.
Warning

Do not write tests to satisfy a process rule. A bad test is worse than no test — the AI will contort its implementation to satisfy a broken contract.

Data flow documentation

Three layers, one rule

Most data flow docs fail because people try to document everything. The Nudge approach uses three layers with decreasing maintenance cost:

  • contracts.yaml — always maintained. Answers: what talks to what, what shape is the data. Update as the last step of any slice that adds a new flow.
  • Event catalog — built incrementally. Registry of all events and message types. 2 minutes to update when you add an event.
  • ADR log — append only. Architecture Decision Records. Never updated, only added to. The permanent record of why.
The upgrade path
The light version above gives you the methodology. The advanced patterns give you the discipline. nudgeDSL — below — gives you the grammar. Same framework at three levels of formalization. Start at the level that fits where you are today.
The upgrade

Why nudgeDSL is the
natural next step.

The Nudge Framework already moved you toward telegraphic, structured outputs. nudgeDSL completes that journey — your orchestration documents become machine-executable without losing human readability.

Before — current handover format
slice: crisis-engine-v1
status: complete
files_modified:
  - crisis_engine.go
  - crisis_test.go
files_created:
  - crisis_types.go
decisions_locked:
  - CrisisEngine uses mutex, not channel
  - Crisis IDs are UUIDs, not sequential
next_slice_needs:
  - contracts.yaml
  - crisis_types.go skeleton
After — nudgeDSL handover
MARK("crisis-engine-v1", "done")
>> MOD("crisis_engine.go")
   // MOD("crisis_test.go")
>> CREATE("crisis_types.go")
>> NOTE("mutex not channel")
>> NOTE("crisis-ids are UUIDs")
>> NEXT("next-slice",
     needs="contracts.yaml")
73%
fewer output tokens on execution layer actions
N=5
calls to break even on system prompt cost
100%
human-readable without any tooling
Chain operator >>
Sequential execution — right runs only if left succeeds
MARK("job-7", "running")
  >> FETCH("data")
  >> MARK("job-7", "done")
Fallback operator |
Try left, use right if left fails
FETCH("primary-db")
  | FETCH("replica-db")
Parallel operator //
Concurrent execution — both run simultaneously
WRITE("db")
  // CACHE("redis")
Amplify operator **N
Repeat N times sequentially
PING("health") ** 3
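The operator semantics above can be modeled in a few lines. A hypothetical sketch, with atoms as zero-argument callables and failure as a raised exception; the combinator names are invented, and the spec remains the source of truth:

```python
from concurrent.futures import ThreadPoolExecutor

def chain(left, right):      # >>  right runs only if left succeeds
    def run():
        left()
        right()
    return run

def fallback(left, right):   # |   try left, use right if left fails
    def run():
        try:
            left()
        except Exception:
            right()
    return run

def parallel(left, right):   # //  both run concurrently
    def run():
        with ThreadPoolExecutor(max_workers=2) as pool:
            futures = [pool.submit(left), pool.submit(right)]
            for f in futures:
                f.result()   # surface either side's failure
    return run

def amplify(atom, n):        # **N repeat N times sequentially
    def run():
        for _ in range(n):
            atom()
    return run

log = []
ok = lambda name: (lambda: log.append(name))

# PING("health") ** 3 would be amplify(ping, 3); here, a chained pair:
amplify(chain(ok("mark"), ok("fetch")), 2)()
print(log)  # ['mark', 'fetch', 'mark', 'fetch']
```

A chain aborts on the first exception, which is exactly the "right runs only if left succeeds" guarantee.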
Nudge Framework — shard
Real-world orchestration example
REGISTRY("nudge-framework", version="0.1")
  >> SHARD("A1", phase="A")
  >> ACCEPT("zero NotImplementedError")
  >> EXCLUDE("gif_writer.py")
Compound — full pipeline step
Mark, execute in parallel, then confirm
MARK("slice-7", "running")
  >> (MOD("handler.go")
      // TEST("TestHandler"))
  >> MARK("slice-7", "done")
Playground

Try it now.

Tier 1 works without any key — validate syntax, generate prompts, export grammar. Tier 2 adds prose translation with your own Anthropic API key. Click any example above to load it.

input — paste or type nudgeDSL
output — ast / errors
// Output will appear here // Click an example or type nudgeDSL above
Your key never leaves your browser. Calls go directly to api.anthropic.com.
input — paste prose, JSON, or agent output
output — nudgeDSL + savings
// Enter your API key and paste prose above // Results will show nudgeDSL + net token savings

Import a custom atom registry to validate DSL against your project's vocabulary. The registry is a JSON file following the nudgeDSL registry format. View core registry on GitHub.

Drop your atoms.json here

or click to browse — JSON file, max 100KB

// nudgeDSL core atoms (built-in)
MARK(id: string, status: string) — enum: pending | done | skipped | error
NOTIFY(channel: string) — broadcast completion event
FETCH(source: string) — retrieve data from named source
PING(count: integer, min=1, max=100) — health check

// nudge-framework atoms (loaded when registry matches)
SHARD(id: string, phase: string)
PHASE(name: string, desc: string)
ACCEPT(criterion: string)
EXCLUDE(file: string)
GATE(phase: string)
TEST(name: string)
MARK(id: string, status: string)
NOTE(text: string)
CREATE(path: string)
MODIFY(path: string)
VERIFY(shard_id: string)
RESULT(verdict: string)
FLAG(type: string, scope: string)

Generate the system prompt injection for your loaded registry. Paste this into your agent's system prompt — it teaches any LLM to output valid nudgeDSL for your atoms.

generated prompt — copy into your agent
You are operating with nudgeDSL v0.1.0. Output ONLY valid nudgeDSL strings. No explanation, no preamble, no markdown.

## Registered Atoms
MARK(id: string, status: string) — Transition an item to a new status.
NOTIFY(channel: string) — Broadcast a completion event.
FETCH(source: string) — Retrieve data from the named source.
PING(count: integer) — Run a health check.

## Operators
>> chain (sequential — right runs only if left succeeds)
| fallback (try left, then right if left fails)
// parallel (concurrent execution)
**N amplify (repeat N times sequentially)

## Examples
MARK("task-1", "done")
FETCH("primary") | FETCH("replica")
MARK("job-7", "running") >> FETCH("data") >> MARK("job-7", "done")

## Constraints
MARK.status: one of [pending, done, skipped, error]
PING.count: 1–100

Output nudgeDSL only.
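Generating this injection from a registry is mechanical. A hedged sketch that assumes the atoms.json field names shown in the Getting Started section (`atom`, `args`, `name`, `type`); the real generator also emits operators, examples, and constraints:

```python
import json

def atom_signature(atom: dict) -> str:
    """Render one registry entry as ATOM(name: type, ...)."""
    args = ", ".join(f"{a['name']}: {a['type']}" for a in atom["args"])
    return f"{atom['atom']}({args})"

def generate_prompt(registry: dict) -> str:
    """Build the '## Registered Atoms' section of the injection."""
    lines = ["You are operating with nudgeDSL v0.1.0.",
             "Output ONLY valid nudgeDSL strings.",
             "",
             "## Registered Atoms"]
    lines += [atom_signature(a) for a in registry["atoms"]]
    return "\n".join(lines)

registry = json.loads("""{
  "domain": "my-project", "version": "0.1",
  "atoms": [{"atom": "MARK", "fn": "UpdateStatus",
             "args": [{"name": "id", "type": "string"},
                      {"name": "status", "type": "string"}]}]
}""")
print(generate_prompt(registry))
```

Because the prompt is derived from the registry, adding an atom to atoms.json updates the agent's vocabulary in one step.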
Access model

The protocol is always free.
The tooling is tiered.

Nobody is locked out of the core value. The DSL saves tokens regardless of which tier you use. The tiers determine how much friction there is in adopting it.

Tier 1
Free
Static site · No account · No key
Validate nudgeDSL syntax
Import custom atom registry
Generate agent prompt
Export GBNF grammar
Read the full spec
All example registries
Not included in Tier 1 (Tier 2 adds translation with your own API key; Tier 3 adds the CLI):
Prose → DSL translation
Token savings calculator
CLI tooling
Tier 3
Local CLI
Self-install · Runs on your machine · API key optional for execution
Everything in Tier 1 + 2
nudge analyze ./output.json
Batch convert entire projects
Editor / pipeline integration
Offline execution (no key needed)
Pi / Jetson MCP integration
GBNF for constrained decoding
Custom executor hooks
Getting started

You're three steps
from saving tokens.

01

Read the spec

The nudgeDSL spec is a single markdown file. It's the grammar source of truth, the agent context document, and the contributor reference — all in one. Start here before anything else.

Read nudgeDSL-spec-v0.1.md →
02

Define your atoms

Create an atoms.json file for your project. Register the actions your agents need to express. Start with 5–10 atoms — you can always add more. The core registry gives you the generic ones for free.

{
  "domain": "my-project",
  "version": "0.1",
  "atoms": [
    {
      "atom": "MARK",
      "fn": "UpdateStatus",
      "args": [
        { "name": "id", "type": "string" },
        { "name": "status", "type": "string",
          "enum": ["done","pending","skipped"] }
      ]
    }
  ]
}
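A registry like this is enough to validate calls before they reach a backend. A hypothetical sketch covering only arity and enum checks for a single flat call; the real Tier 1 validator implements the full grammar:

```python
import re

# Matches exactly one flat ATOM(arg, ...) call, nothing chained.
CALL = re.compile(r'^(\w+)\(([^)]*)\)$')

def validate(call: str, registry: dict) -> list[str]:
    """Return a list of error strings; empty list means valid."""
    m = CALL.match(call.strip())
    if not m:
        return [f"not a call: {call!r}"]
    name, raw = m.groups()
    atom = next((a for a in registry["atoms"] if a["atom"] == name), None)
    if atom is None:
        return [f"unknown atom: {name}"]
    args = [a.strip().strip('"') for a in raw.split(",")] if raw else []
    errors = []
    if len(args) != len(atom["args"]):
        errors.append(f"{name}: expected {len(atom['args'])} args, got {len(args)}")
    for spec, val in zip(atom["args"], args):
        if "enum" in spec and val not in spec["enum"]:
            errors.append(f"{name}.{spec['name']}: {val!r} not in {spec['enum']}")
    return errors

registry = {"atoms": [{"atom": "MARK", "fn": "UpdateStatus",
    "args": [{"name": "id", "type": "string"},
             {"name": "status", "type": "string",
              "enum": ["done", "pending", "skipped"]}]}]}

print(validate('MARK("auth-01", "done")', registry))    # []
print(validate('MARK("auth-01", "blocked")', registry))
# ["MARK.status: 'blocked' not in ['done', 'pending', 'skipped']"]
```

Rejecting an invalid status at this layer is exactly the guardrail the registry exists to provide.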
03

Inject the prompt

Use the Generate Prompt tab above with your registry loaded to get the exact system prompt injection for your atoms. Paste it into your agent's system prompt. Done — your agent now outputs nudgeDSL.

For the Nudge Framework specifically, use the nudge-framework.json registry and you get all five pipeline stages ready to use.

Download nudge-framework.json →