Start with the default prompt structure — a system message and a user message — or add more sections using the 'Add Message Section' dropdown to include assistant or additional user/system messages.
Write your prompt content in each message section. Use {{variableName}} syntax anywhere in the text to create dynamic variables that can be filled in later — useful for templating reusable prompts.
Select your target AI model from the model dropdown. The builder auto-detects the provider (OpenAI, Anthropic, or Google) and formats the payload correctly for that API.
Monitor real-time token counts per section and the total context window usage bar. The builder warns you if your system prompt is oversized, if you're near the context limit, or if adjacent messages share the same role.
Fill in detected variable values in the Variables panel on the left. Values are injected into the export output automatically.
Choose your export format — JSON, cURL, JavaScript fetch, or Python — and view the generated code in the Export Preview panel on the right. Copy with ⌘⇧E or the Copy Export button.
Reorder message sections using the ↑/↓ arrows next to each section. Remove sections with the trash icon, or reset everything with ⌘⇧K.
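The `{{variableName}}` substitution described in the steps above can be sketched in a few lines. This is an illustrative helper, not the builder's actual code: the function name is hypothetical, and the assumed behavior (placeholders with no supplied value are left untouched) mirrors a builder that only substitutes variables you have filled in.

```python
import re

def inject_variables(text: str, variables: dict[str, str]) -> str:
    """Replace every {{variableName}} placeholder with its value.

    Unfilled placeholders are left as-is, so a partially filled
    template still round-trips without losing its variables.
    """
    def replace(match: re.Match) -> str:
        name = match.group(1)
        return variables.get(name, match.group(0))

    return re.sub(r"\{\{(\w+)\}\}", replace, text)

prompt = "You are a {{tone}} assistant. Answer in {{language}}."
print(inject_variables(prompt, {"tone": "friendly", "language": "French"}))
# → You are a friendly assistant. Answer in French.
```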
Multi-message prompt builder with system, user, and assistant role sections — the three standard chat roles used across major LLM APIs.
Per-section token counting: see exactly how many tokens each message consumes for the selected model.
Context window usage bar: visual progress bar showing how much of the model's context window your prompt fills, with color-coded warnings at 70% and 90%.
Variable injection: use {{variableName}} syntax in any message and fill values in the Variables panel. Variables are replaced in the exported output.
Provider-aware exports: automatically formats the payload for OpenAI (chat completions), Anthropic (messages API with system field), and Google Gemini (generateContent).
Four export formats: raw JSON payload, cURL command (ready to paste into a terminal), JavaScript fetch code, and Python code (using the openai or anthropic SDK).
Prompt validation: checks for oversized system prompts (>50% context), near-limit usage (>80%), empty sections, duplicate adjacent roles, and missing user messages.
Reorder messages: move sections up or down to experiment with prompt structure and ordering.
Model selector with all major LLMs: GPT-4o, GPT-4.1, o3, Claude Opus/Sonnet, Gemini 2.5, Llama 3, Mistral, and more.
Auto-persist to localStorage: your prompt structure and variables are saved automatically and restored on reload.
URL-shareable settings: model and format selections are stored in URL parameters for sharing.
Runs entirely in your browser — no prompt data is ever sent to a server. Your prompts and templates stay completely private.
Keyboard shortcuts: ⌘↵ to recount tokens and re-export, ⌘⇧E to copy the export output, ⌘⇧K to clear all messages.
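The provider-aware payload shaping in the feature list above can be sketched as follows. This is a minimal illustration under assumptions, not the builder's actual implementation: `build_payload` is a hypothetical name, and the `max_tokens` default is an assumption (Anthropic's Messages API requires the field). The key differences it shows are real: OpenAI's chat completions take system messages inline, Anthropic hoists system text into a top-level `system` field, and Gemini's `generateContent` uses `contents` with `parts` and the role name `model` instead of `assistant`.

```python
def build_payload(provider: str, model: str, messages: list[dict]) -> dict:
    """Shape a generic [{'role', 'content'}] message list into the
    request body each provider's API expects."""
    if provider == "openai":
        # OpenAI chat completions: system messages stay inline.
        return {"model": model, "messages": messages}
    if provider == "anthropic":
        # Anthropic Messages API: system text moves to a top-level field.
        system = "\n".join(m["content"] for m in messages if m["role"] == "system")
        rest = [m for m in messages if m["role"] != "system"]
        payload = {"model": model, "max_tokens": 1024, "messages": rest}
        if system:
            payload["system"] = system
        return payload
    if provider == "google":
        # Gemini generateContent: 'contents' with 'parts', role 'model'
        # instead of 'assistant'; system messages omitted here for brevity.
        contents = [
            {"role": "model" if m["role"] == "assistant" else "user",
             "parts": [{"text": m["content"]}]}
            for m in messages if m["role"] != "system"
        ]
        return {"contents": contents}
    raise ValueError(f"unknown provider: {provider}")
```

Keeping the internal message format provider-neutral and translating only at export time is what lets a single prompt structure target all three APIs.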