Overview
This page is the atomic definition. The deep-dive lives at system-prompts.
Definition
A system prompt is the top-level instruction passed to a language model as a separate role (system in the OpenAI and Anthropic APIs) that frames behavior for the entire session. It typically specifies the assistant's role, the rules it must follow, tone, output format, and tool-usage policies. Models weight the system prompt more heavily than user messages, but not absolutely; a sufficiently clever user message can still override it (see prompt-injection). The system prompt is the contract between the application and the model.
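The "separate role" point can be sketched concretely. The snippet below builds an OpenAI-style chat messages list, which is one common way a system prompt travels to the model; the helper name and prompt text are illustrative, not part of any particular SDK.

```python
def build_messages(system_prompt: str, user_message: str) -> list[dict]:
    """Place the system prompt in its own top-level 'system' message,
    separate from the user's turn."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_message},
    ]

messages = build_messages(
    "You are a helpful assistant. Answer concisely.",
    "What is a system prompt?",
)
# The system message frames the whole session; the user message is one turn.
```

The same separation appears in other APIs under different shapes (for example, the Anthropic Messages API takes the system prompt as a top-level parameter rather than a message), but the contract is the same: application-level instructions ride in their own channel.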
When it applies
Write a system prompt for any production model call. Even simple chat applications benefit from explicit role, scope, and refusal rules. Skip it only for one-off API calls where the user message is fully self-contained.
Example
You are a SQL analyst for an analytics dashboard.
Rules:
- Only answer questions about the sales schema.
- Return SQL queries inside ```sql code blocks.
- Refuse questions about other schemas with: "Out of scope."
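A minimal sketch of wiring the example above into an OpenAI-style request body. The model name and user question are placeholders, and the dict mirrors the chat.completions request shape rather than any specific SDK call.

```python
# The system prompt from the example, verbatim.
SYSTEM_PROMPT = """You are a SQL analyst for an analytics dashboard.
Rules:
- Only answer questions about the sales schema.
- Return SQL queries inside ```sql code blocks.
- Refuse questions about other schemas with: "Out of scope."
"""

request_body = {
    "model": "gpt-4o",  # placeholder model name
    "messages": [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Total sales by region last quarter?"},
    ],
}
```

Note that the refusal string ("Out of scope.") lives in the system prompt, not in application code: the contract is enforced by instruction, which is exactly why prompt-injection matters.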
Related concepts
- system-prompts - the deep-dive on structure and patterns.
- prompt-design - the broader discipline of crafting prompts.
- role-framing - the practice of assigning the assistant a persona, which sits inside the system prompt.
- few-shot-prompting - examples that complement the system prompt.
- prompt-injection - attacks that try to override the system prompt.
Citing this term
See System prompt (llmbestpractices.com/glossary/system-prompt).