Model Specific Prompt Guide
GPT-5
How is GPT 5 different?
GPT-5 is built with enhanced agentic behavior and advanced API features to handle tasks with more depth and independence than before.
Unlike earlier models, GPT-5 is naturally more thorough and precise in following instructions, even across long and complex workflows. It can also flexibly adjust its level of autonomy: it can make high-level decisions when things are ambiguous, or follow tightly guided, step-by-step instructions when needed.
That means your prompts mainly need to be clear, structured, and well thought out. By combining the Prompt Optimizer, meta-prompting techniques, and API parameter tuning, users, especially developers, can unlock precise, efficient, agentic task execution.
Once you discover how GPT-5 works best, you’ll unlock more value and capability from it than you ever imagined.
Prompting Tips
Use the following tips to keep complex workflows efficient and reliable:
1. Specify how to explore the problem.
Tell the model how you want it to approach the problem. You may opt to use common approaches such as top-down, bottom-up, hypothesis-driven, or compare-and-contrast to control breadth vs. depth.
<context_gathering>
Goal: Get enough context fast. Parallelize discovery and stop as soon as you can act.
Method:
- Start broad, then fan out to focused subqueries.
- In parallel, launch varied queries; read top hits per query. Deduplicate paths and cache; don’t repeat queries.
- Avoid over searching for context. If needed, run targeted searches in one parallel batch.
Early stop criteria:
- You can name exact content to change.
- Top hits converge (~70%) on one area/path.
Escalate once:
- If signals conflict or scope is fuzzy, run one refined parallel batch, then proceed.
Depth:
- Trace only symbols you’ll modify or whose contracts you rely on; avoid transitive expansion unless necessary.
Loop:
- Batch search → minimal plan → complete task.
- Search again only if validation fails or new unknowns appear. Prefer acting over more searching.
</context_gathering>
2. Ask for a plan before action.
Have GPT-5 outline steps, dependencies, and acceptance criteria, then execute. This improves quality and lets you correct direction early.
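A minimal sketch of this plan-first pattern in Python. The two-phase wording and the helper name are illustrative, not an official template; the prompt would be sent as the user or system message of an API call:

```python
# Illustrative planning-first prompt; adapt the wording to your task.
PLAN_FIRST = (
    "Before making any changes:\n"
    "1. Outline the steps you will take, in order.\n"
    "2. List dependencies between steps.\n"
    "3. State acceptance criteria for the finished task.\n"
    "Wait for my confirmation, then execute the plan step by step."
)

def with_plan_first(task: str) -> str:
    """Prepend the planning instructions to a task description."""
    return f"{PLAN_FIRST}\n\nTask: {task}"

prompt = with_plan_first("Migrate the config loader from YAML to TOML.")
```

Because the model must surface dependencies and acceptance criteria before acting, you can redirect it cheaply, before any work is done.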
3. Make it restate and track its plan.
Require the model to recap what it will do, then give progress updates after each step.
4. Structure your prompt clearly.
A well-structured prompt helps GPT-5 process reasoning more consistently and understand your instructions efficiently.
5. Control the Reasoning Effort.
Increased Reasoning Effort: Encourage model autonomy, increase tool-calling persistence, and reduce how often the model asks clarifying questions or otherwise hands back to the user. Use the persistence snippet below.
<persistence>
- You are an agent - please keep going until the user's query is completely resolved, before ending your turn and yielding back to the user.
- Only terminate your turn when you are sure that the problem is solved.
- Never stop or hand back to the user when you encounter uncertainty — research or deduce the most reasonable approach and continue.
- Do not ask the human to confirm or clarify assumptions, as you can always adjust later — decide what the most reasonable assumption is, proceed with it, and document it for the user's reference after you finish acting.
</persistence>
Decreased Reasoning Effort: Define criteria that limit over-searching and trigger early stopping, reducing exploration depth and improving efficiency and latency.
<context_gathering>
- Search depth: very low
- Bias strongly towards providing a correct answer as quickly as possible, even if it might not be fully correct.
- Usually, this means an absolute maximum of 2 tool calls.
- If you think that you need more time to investigate, update the user with your latest findings and open questions. You can proceed if the user confirms.
</context_gathering>
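The two settings above can be expressed as request payloads. This is a sketch assuming the Responses API's `reasoning.effort` parameter; the abbreviated snippet strings and the helper are illustrative, and the dict only mirrors the JSON body — no request is sent:

```python
# Abbreviated versions of the snippets above, for illustration only.
PERSISTENCE = (
    "<persistence>Keep going until the user's query is completely "
    "resolved; never hand back on uncertainty.</persistence>"
)
EARLY_STOP = (
    "<context_gathering>Search depth: very low; "
    "absolute maximum of 2 tool calls.</context_gathering>"
)

def build_request(task: str, effort: str) -> dict:
    """Build a Responses API payload: high effort pairs with the
    persistence snippet, low effort with the early-stop criteria."""
    instructions = PERSISTENCE if effort == "high" else EARLY_STOP
    return {
        "model": "gpt-5",
        "reasoning": {"effort": effort},
        "instructions": instructions,
        "input": task,
    }

high = build_request("Refactor the auth module end to end.", "high")
low = build_request("What port does the dev server use?", "low")
```

Pairing the parameter with the matching prompt snippet keeps the API-level setting and the prompt-level guidance pointed in the same direction.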
6. Eliminate contradictions and set priorities.
Conflicting instructions reduce output quality, especially given GPT-5's thoroughness. When instructions can conflict, declare tie-breakers so the model knows which rule wins.
7. Use XML-style tags.
Tagging critical sections with lightweight XML-style tags improves adherence and lets you point back to rules (e.g., <rules>…</rules> → “Apply all items in <rules>.”)
<rules>
- Output MUST be valid JSON.
- Use US English.
</rules>
Follow all items in <rules>. If a rule blocks completion, explain and propose a fix.
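When a rule is machine-checkable, like the JSON requirement above, you can verify the model's output before accepting it. A minimal sketch (the retry policy on failure is up to you; the US-English rule cannot be checked this way):

```python
import json

def satisfies_rules(output: str) -> bool:
    """Check the mechanically verifiable <rules> item:
    the output must parse as valid JSON."""
    try:
        json.loads(output)
        return True
    except json.JSONDecodeError:
        return False

# A passing and a failing output:
ok = satisfies_rules('{"status": "done"}')
bad = satisfies_rules("Sure! Here is the JSON: {...}")
```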
8. Use Meta Prompting.
Iteratively improve prompts by asking GPT-5 to propose minimal edits to your prompt, explain why, then apply them.
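One way to run that loop, sketched with a hypothetical `ask_gpt5` callable standing in for a real API call (the template wording and the `<revised>` tag convention are illustrative):

```python
META_TEMPLATE = (
    "Here is a prompt I use:\n<prompt>\n{prompt}\n</prompt>\n"
    "Propose the smallest set of edits that would improve it, "
    "explain why each edit helps, then output the revised prompt "
    "between <revised> and </revised> tags."
)

def improve(prompt: str, ask_gpt5) -> str:
    """One meta-prompting iteration: ask the model for minimal edits
    and extract the revised prompt from its reply."""
    reply = ask_gpt5(META_TEMPLATE.format(prompt=prompt))
    start = reply.index("<revised>") + len("<revised>")
    end = reply.index("</revised>")
    return reply[start:end].strip()

# Stub model for illustration; swap in a real API call.
def fake(p):
    return "Edit: add an output format. <revised>Summarize in 3 bullets.</revised>"

revised = improve("Summarize this.", fake)
```

Running a few iterations, and keeping only edits whose explanation convinces you, tends to converge on a tighter prompt than one big rewrite.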
New Features & Tools
GPT-5 ships with new features and tools worth taking advantage of. The tips below will help you get the most out of them.
Front-End Development Support
Recommended stack
While GPT-5 handles many languages, its front-end features shine with the stack below.
Frameworks: Next.js (TypeScript), React, HTML
UI / Styling: Tailwind CSS, shadcn/ui, Radix Themes
Icons: Material Symbols, Heroicons, Lucide
Animation: Framer Motion
Fonts (sans-serif): Inter, Geist, Mona Sans, IBM Plex Sans, Manrope
Tips
Prime GPT-5 with your stack. Tell the model exactly what you’re using so components and patterns match.
You are my front-end pair programmer.
Stack: Next.js (TypeScript) + Tailwind + shadcn/ui + Radix Themes + Framer Motion.
Conform to these choices in code, imports, and accessibility patterns.
Have it self-check against an excellence rubric. Ask GPT-5 to build and apply a short rubric before producing output. Below is an example:
Create a 6-point “excellence rubric” for a React component (accessibility, state mgmt, composability, responsiveness, DX, tests).
Write the component, then self-evaluate against the rubric and fix any misses before returning the final code.
Codify and summarize project conventions. Get a concise, shareable reference of your codebase’s implicit/explicit norms. Below is an example:
Summarize our FE best practices: engineering principles, directory structure, naming, styling tokens, test approach.
Output: one-page checklist + short examples. Flag any ambiguities as questions.
Highly Steerable Model Behavior
GPT-5 is especially responsive to prompts that guide its verbosity, tone, and tool use. New API-level parameters give you fine-grained control over how detailed, concise, or structured its outputs should be.
Tips
Low Verbosity. If outputs feel too wordy, set the verbosity parameter to low for natural-language responses, while still prompting for readable, well-explained code. Below is an example:
Write code for clarity first. Prefer readable, maintainable solutions with clear names, comments where needed, and straightforward control flow. Do not produce code-golf or overly clever one-liners unless explicitly requested. Use high verbosity for writing code and code tools.
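At the API level, this pairs a low verbosity parameter with the prompt-level instructions above. A sketch of the request body, assuming the Responses API's `text.verbosity` parameter; the dict only mirrors the JSON body and no request is sent:

```python
# Prompt-level style guidance (abbreviated from the example above).
CODE_STYLE = (
    "Write code for clarity first. Prefer readable, maintainable "
    "solutions. Use high verbosity for writing code and code tools."
)

request = {
    "model": "gpt-5",
    "text": {"verbosity": "low"},  # terse natural-language answers
    "instructions": CODE_STYLE,    # but verbose, readable code
    "input": "Add retry logic to the HTTP client.",
}
```

The API parameter governs the surrounding prose, while the prompt carves out an exception for code, so explanations stay short without the code turning cryptic.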
Minimal Reasoning Mode
The fastest variant of GPT-5 reasoning. Optimize prompts for it by following older GPT-4.1-style prompting:
Start with a brief reasoning summary (e.g., bullet points).
Request descriptive progress updates.
Clarify instructions and reinforce persistence on long tasks.
Tune verbosity: low for natural-language outputs, but verbose for coding or reasoning tools.
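The checklist above translates to a request sketch, assuming `reasoning.effort` accepts a minimal setting (parameter names and prompt wording are illustrative; no request is sent):

```python
# GPT-4.1-style guidance condensed from the checklist above.
MINIMAL_PROMPT = (
    "Start with 2-3 bullet points summarizing your reasoning.\n"
    "Post a short progress update after each tool call.\n"
    "Keep going until the task is fully complete."
)

request = {
    "model": "gpt-5",
    "reasoning": {"effort": "minimal"},  # fastest reasoning variant
    "instructions": MINIMAL_PROMPT,
    "input": "Rename the helper and update its call sites.",
}
```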
Responses API
A stateful responses interface that enables richer agentic flows with lower cost and latency. The model can refer back to prior reasoning and tool results instead of rebuilding a plan after every call, conserving chain-of-thought (CoT) tokens and speeding up multi-step runs.
Best for: Long workflows, multi-tool orchestration, tasks that need progress memory.
Tip: Ask GPT-5 to keep a short “state summary” after each step and carry it forward to the next call.
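A sketch of that tip: carry the state summary forward while chaining calls via `previous_response_id` (the field name follows the Responses API; the summary text and response id here are stand-ins, and the dict only mirrors the JSON body):

```python
def next_turn(prev_response_id: str, state_summary: str, task: str) -> dict:
    """Build the follow-up request: chain to the prior response so the
    model can reuse its reasoning, and restate the state summary."""
    return {
        "model": "gpt-5",
        "previous_response_id": prev_response_id,
        "input": f"State so far: {state_summary}\nNext step: {task}",
    }

req = next_turn(
    "resp_123",  # id returned by the previous call (stand-in value)
    "Schema migrated; tests failing in auth.",
    "Fix the auth tests.",
)
```

Restating the summary in the input is belt-and-braces: even if a later call starts a fresh chain, the model still sees where the workflow stands.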
Prompt Optimizer
Automatically improves prompts by resolving structure issues, contradictions, or inconsistencies.
Try it: head over to the Prompt Optimizer in OpenAI’s Playground.