Overview
This page is the atomic definition. The deep-dive lives at chain-of-thought.
Definition
Chain-of-thought (CoT) is a prompting technique that asks a language model to show intermediate reasoning steps before producing a final answer. Wei et al. (2022) found that few-shot examples whose answers include written-out reasoning substantially improved accuracy on math, logic, and multi-step problems; Kojima et al. (2022) showed that simply appending “Let’s think step by step” elicits similar behavior zero-shot. Modern reasoning models (OpenAI o3, Claude with extended thinking, Gemini with Deep Think) generate chain-of-thought internally and expose only the final answer (or a summary of the reasoning); CoT prompting is most useful with non-reasoning models, or when the rationale itself is part of the desired output.
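As a concrete illustration, the two prompting styles described above can be sketched as plain string construction; the exemplar question, reasoning trace, and wording below are hypothetical, not drawn from either paper.

```python
def zero_shot_cot(question: str) -> str:
    """Zero-shot CoT: append a reasoning trigger to the bare question."""
    return f"{question}\nLet's think step by step."

def few_shot_cot(question: str, exemplars: list[tuple[str, str]]) -> str:
    """Few-shot CoT: prepend worked (question, reasoning-and-answer) pairs."""
    shots = "\n\n".join(f"Q: {q}\nA: {a}" for q, a in exemplars)
    return f"{shots}\n\nQ: {question}\nA:"

# Hypothetical exemplar whose answer spells out the intermediate steps.
exemplars = [(
    "A pen costs 2 and a notebook costs 5. What do 3 pens and 1 notebook cost?",
    "3 pens cost 3 * 2 = 6. One notebook costs 5. 6 + 5 = 11. The answer is 11.",
)]

prompt = few_shot_cot("What is 12 * 4 - 7?", exemplars)
```

The few-shot variant teaches the reasoning format by example; the zero-shot variant relies on the trigger phrase alone.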
When it applies
Use chain-of-thought for multi-step math, logical deduction, code review, or any task where the model benefits from showing its intermediate work. Skip it for short factual lookups, simple classification, and any task where token cost matters more than accuracy.
Example
Prompt: “A coffee shop sells lattes for 3. If a customer buys 4 lattes and 2 bagels with a 10% discount, what is the total? Think step by step before answering.”
The model lists subtotals, applies the discount, then states the final figure.
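The arithmetic the model is expected to walk through can be checked directly. A quick sketch, assuming a hypothetical bagel price of 2, since the example prompt above does not state one:

```python
LATTE_PRICE = 3.0
BAGEL_PRICE = 2.0  # hypothetical: the prompt omits the bagel price

# Subtotals, as the model's chain of thought would list them.
latte_subtotal = 4 * LATTE_PRICE            # 12.0
bagel_subtotal = 2 * BAGEL_PRICE            # 4.0
subtotal = latte_subtotal + bagel_subtotal  # 16.0

# Apply the 10% discount last.
total = subtotal * (1 - 0.10)
```

Under these assumed prices the total comes to 14.40; the point of the CoT instruction is that the model writes out each subtotal before committing to that figure.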
Related concepts
- chain-of-thought - the deep-dive on when CoT helps and when it hurts.
- prompt-design - the broader discipline of designing prompts.
- few-shot-prompting - few-shot examples often include CoT traces.
- structured-output - separating reasoning from the final structured answer.
- system-prompts - where the “think step by step” instruction usually sits.
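When both the rationale and the answer matter, the structured-output idea above is often implemented by asking the model to end its response with a fixed marker and splitting on it. A minimal sketch, assuming the prompt instructed the model to finish with a line beginning "Final answer:" (the marker and sample response are hypothetical):

```python
import re

def split_cot(output: str, marker: str = "Final answer:") -> tuple[str, str]:
    """Split a model response into (reasoning, final_answer) at the marker."""
    match = re.search(rf"{re.escape(marker)}\s*(.+)\s*$", output, re.DOTALL)
    if match is None:
        # No marker found: treat the whole response as reasoning.
        return output.strip(), ""
    reasoning = output[: match.start()].strip()
    return reasoning, match.group(1).strip()

# Hypothetical model response: reasoning first, marked answer last.
response = (
    "4 lattes cost 12. Adding the bagels gives the subtotal.\n"
    "Applying the 10% discount gives the total.\n"
    "Final answer: 14.40"
)
reasoning, answer = split_cot(response)
```

Keeping the marker in the prompt and the parser in lockstep is the fragile part; schema-constrained output (see structured-output) is the sturdier alternative.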
Citing this term
See Chain-of-thought (llmbestpractices.com/glossary/chain-of-thought).