Chain of Thought
Prompting technique that encourages the AI to reason step-by-step before answering.
Complex problems need structured thinking. Chain-of-thought prompting substantially improves accuracy on reasoning tasks because the model shows its work instead of jumping straight to a conclusion.
When our agents evaluate content quality, they use chain-of-thought reasoning: first list what's good, then identify issues, then propose specific fixes. Working through the steps in that order produces more reliable evaluations.
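The three-step evaluation described above can be sketched as a prompt template. This is a minimal illustration, not a real API: the function name and prompt wording are assumptions.

```python
# Sketch of a chain-of-thought evaluation prompt.
# build_cot_eval_prompt is a hypothetical helper, not a library function.

def build_cot_eval_prompt(content: str) -> str:
    """Build a prompt that asks the model to reason step-by-step
    before judging content quality."""
    return (
        "Evaluate the following content step by step.\n"
        "1. First, list what is good about it.\n"
        "2. Then, identify specific issues.\n"
        "3. Finally, propose concrete fixes.\n"
        "Only after completing these steps, give an overall verdict.\n\n"
        f"Content:\n{content}"
    )

prompt = build_cot_eval_prompt("Our docs explain setup but skip error handling.")
print(prompt)
```

The prompt would then be sent to whichever model the agent uses; the key point is that the instructions force the reasoning steps to appear before the verdict.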
Related terms
Prompt
An instruction or question given to an AI model. Prompt quality largely determines output quality.
ReAct Pattern
An agent loop that interleaves reasoning and action: Reason, Act, Observe, Repeat.
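The ReAct cycle can be sketched as a simple loop. Everything here is a toy assumption (the `reason` and `act` callables, the lookup example) meant only to show the control flow, not any particular agent framework.

```python
def react_loop(task, reason, act, max_steps=5):
    """Run the ReAct cycle: Reason, Act, Observe, Repeat."""
    observation = None
    for _ in range(max_steps):
        thought, action = reason(task, observation)  # Reason: decide the next action
        if action is None:                           # no action means the agent is done
            return thought
        observation = act(action)                    # Act, then Observe the result

# Toy usage: an "agent" that looks up a number, then doubles it.
facts = {"answer": 21}

def reason(task, observation):
    if observation is None:
        return ("I should look up the answer.", ("lookup", task))
    return (f"Final answer: {observation * 2}", None)

def act(action):
    _kind, key = action
    return facts[key]

print(react_loop("answer", reason, act))  # -> Final answer: 42
```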
LLM (Large Language Model)
A large language model like Claude, GPT, or Gemini. The "brain" that understands and generates language.
Evaluations (Evals)
Systematic testing of agent performance: accuracy, safety, reliability.