Rephrase LogoRephrase Logo
FeaturesHow it WorksPricingGalleryDocsBlog
Rephrase LogoRephrase Logo

Better prompts. One click. In any app. Save 30-60 minutes a day on prompt iterations.

Rephrase on Product HuntRephrase on Product Hunt

Product

  • Features
  • Pricing
  • Download for macOS

Use Cases

  • AI Creators
  • Researchers
  • Developers
  • Image to Prompt

Resources

  • Documentation
  • About

Legal

  • Privacy
  • Terms
  • Refund Policy

Ask AI about Rephrase

ChatGPTClaudePerplexity

© 2026 Rephrase-it. All rights reserved.

Available for macOS 13.0+

All product names, logos, and trademarks are property of their respective owners. Rephrase is not affiliated with or endorsed by any of the companies mentioned.

Back to blog
prompt engineering•March 22, 2026•7 min read

How to Write a System Prompt That Works

Learn the 4-layer system prompt structure used in Custom GPTs and Claude Projects. Real annotated examples you can adapt today. Read the full guide.

How to Write a System Prompt That Works

Most people building a Custom GPT or Claude Project treat the system prompt as an afterthought - a few sentences dashed off before the fun part. Then they wonder why their assistant goes off-script, ignores formatting rules, or confidently hallucinates an answer it should have declined.

The system prompt is the architecture of your bot. Get it wrong and no amount of clever user-turn prompting will save you.

Key Takeaways

  • Effective system prompts have four distinct layers: persona, constraints, output format, and fallback behavior
  • Each layer does a different job - collapsing them into one block causes the model to weight them inconsistently
  • Specificity outperforms vagueness at every layer; "be helpful" is not an instruction
  • Fallback behavior is the most skipped layer and the source of most bot failures in the wild
  • You can adapt the annotated templates in this article directly for GPT-4o, Claude 3.7, or any instruction-following model

Why System Prompt Structure Matters

A system prompt is not just a personality setting. Research on how LLMs process instructions shows that models handle constraint following and reasoning as separable tasks [1]. When instructions are scattered or ambiguous, the model defaults to its training distribution - which means it behaves like a generic chatbot, not your specialized assistant.

Think of your system prompt as a job description, a style guide, and an escalation policy rolled into one document. The cleaner the separation between those three concerns, the more reliably the model executes them.

The Four-Layer Framework

Layer 1: The Persona Layer

The persona layer answers one question: who is this assistant, and what is it for?

This is not about giving your bot a cute name. It's about establishing the model's operating context - the domain it exists in, the expertise it should draw from, and the relationship it has with the user. Models respond to well-defined role framing because it activates relevant training patterns and narrows the response space [1].

You are Aria, a customer support assistant for Clearpath SaaS.
You help users troubleshoot account issues, understand billing,
and navigate the dashboard. You have deep knowledge of Clearpath's
feature set as of Q1 2026. You speak in plain English - no jargon,
no corporate speak.

What makes this work: it names the role, scopes the domain, sets a knowledge boundary (Q1 2026 - important for preventing stale-data overconfidence), and defines the voice. Four things in four sentences.

Layer 2: The Constraint Layer

Constraints define the edges of the assistant's behavior. What it won't do. What it won't discuss. What it won't pretend to know.

Most builders either skip constraints entirely or write them so vaguely the model ignores them. "Be professional" is not a constraint. "Do not provide legal or financial advice" is.

You must not:
- Speculate about unreleased features or roadmap items
- Provide legal, medical, or financial advice
- Discuss competitors or make comparative claims
- Access or modify user account data directly

If a user asks you to do any of the above, acknowledge their question
and redirect them to the appropriate resource (support team, docs, etc.)

The explicit list format here is intentional. Enumerated constraints are processed more reliably than prose constraints because they create discrete decision points rather than a fuzzy gradient.

Layer 3: The Output Format Layer

This layer tells the model how to present its answers - not what to say, but how to say it. Length, structure, markdown use, tone calibration, whether to ask clarifying questions before answering.

Skipping this layer is why your assistant sometimes writes a three-paragraph essay when the user needed a two-line answer, or uses headers and bullet points in a Slack integration that renders markdown as raw characters.

Response formatting rules:
- Keep responses under 150 words unless a step-by-step process
  genuinely requires more
- Use numbered steps for troubleshooting sequences
- Use plain prose for general questions - no bullets unless listing
  three or more distinct items
- Do not use headers in responses
- End troubleshooting replies with: "Did that resolve your issue?
  If not, I can escalate this to our support team."

That closing line is doing double duty - it formats the response and seeds the next interaction. Small detail, real impact on conversation flow.

Layer 4: The Fallback Behavior Layer

This is the layer nobody writes and everybody needs.

Fallback behavior defines what the assistant does when it encounters something outside its scope, outside its knowledge, or genuinely ambiguous. Without explicit fallback instructions, models fill the gap with confident-sounding improvisation - which is exactly when bots cause real problems [1].

If you do not know the answer or cannot find it in the provided
context, say exactly this: "I don't have that information on hand.
Let me connect you with someone who does - you can reach our support
team at support@clearpath.io or use the live chat in your dashboard."

Never guess. Never approximate. If you are uncertain, say so
explicitly before providing any partial information.

Two things to notice: the fallback uses an exact scripted phrase (not "say something like"), and it provides a concrete next step. A fallback that just says "I don't know" leaves the user stranded.

Putting It All Together

Here's the full four-layer prompt assembled from the examples above. This is what a production-ready system prompt looks like:

[PERSONA]
You are Aria, a customer support assistant for Clearpath SaaS.
You help users troubleshoot account issues, understand billing,
and navigate the dashboard. You have deep knowledge of Clearpath's
feature set as of Q1 2026. You speak in plain English - no jargon,
no corporate speak.

[CONSTRAINTS]
You must not:
- Speculate about unreleased features or roadmap items
- Provide legal, medical, or financial advice
- Discuss competitors or make comparative claims
- Access or modify user account data directly

If asked to do any of the above, acknowledge the question and
redirect to the appropriate resource.

[OUTPUT FORMAT]
- Keep responses under 150 words unless a process requires more
- Use numbered steps for troubleshooting sequences
- Use plain prose for general questions
- Do not use headers
- End troubleshooting replies with: "Did that resolve your issue?"

[FALLBACK]
If you do not know the answer, say: "I don't have that information
on hand. You can reach our support team at support@clearpath.io
or use live chat in your dashboard."
Never guess. If uncertain, say so before sharing partial information.

Using section headers in brackets ([PERSONA], [CONSTRAINTS], etc.) is a technique borrowed from structured prompt formats seen in the community [2]. It gives the model clear anchors to parse against and makes the prompt easier for you to maintain over time.

Adapting This for Claude Projects

The same four-layer structure works in Claude Projects with one small adjustment: Claude responds well to being given an explicit priority order when layers might conflict. Add a line like this at the top of your prompt:

If any of these instructions conflict, prioritize them in this order:
[FALLBACK] > [CONSTRAINTS] > [OUTPUT FORMAT] > [PERSONA]

This tells Claude that safety and scope boundaries override stylistic preferences - which is the right call for any customer-facing assistant.

For users who build multiple bots or frequently adapt system prompts across tools, Rephrase can auto-optimize a rough system prompt draft into cleaner, more structured language - useful when you're iterating fast and don't want to manually rewrite each layer.

Testing Your System Prompt

Writing the prompt is step one. The real work is adversarial testing - deliberately asking your bot questions designed to break each layer. Try to get it to go off-topic (tests constraints), ask it something it can't possibly know (tests fallback), request a format it wasn't designed for (tests output format), and ask it to act out of character (tests persona).

If it fails any of these, you don't have a model problem. You have a prompt problem. Go back to the specific layer that broke and add more specificity.

System prompt authorship is a skill that compounds. The more bots you build, the faster you'll recognize which layer is failing and why. Start with this four-layer structure, stress-test it, and iterate - your users will never see the system prompt, but they'll feel the difference immediately.

For more on prompt structure and AI tool workflows, browse the Rephrase blog.


References

Documentation & Research

  1. From Static Benchmarks to Dynamic Protocol: Agent-Centric Text Anomaly Detection for Evaluating LLM Reasoning - arXiv / ICLR 2026 (link)

Community Examples

  1. Try this reverse engineering mega-prompt often used by prompt engineers internally - r/ChatGPTPromptGenius (link)
  2. How do you convert a custom GPT to a Claude project? - r/PromptEngineering (link)
Ilia Ilinskii
Ilia Ilinskii

Founder of Rephrase-it. Building tools to help humans communicate with AI.

Frequently Asked Questions

A system prompt is a set of instructions placed before the conversation starts. It tells the model who it is, what it can do, how to format responses, and how to handle edge cases. It's invisible to end users but shapes every response the assistant gives.
Both let you embed a persistent system prompt and optionally attach knowledge files. Custom GPTs live in the OpenAI ecosystem and can use actions (API calls). Claude Projects are Anthropic's equivalent, with strong instruction-following and a large context window. The system prompt structure that works for one transfers to the other with minimal changes.
Yes. System prompt authorship does not require coding skills. It requires clear thinking about role, rules, format, and failure states - the same skills you'd use to write a good onboarding document or process guide. The four-layer framework in this article gives you a repeatable structure to follow.

Related Articles

Why Moltbook Changes Prompt Design
prompt engineering•7 min read

Why Moltbook Changes Prompt Design

Discover what Moltbook reveals about agent behavior and how to write prompts for multi-agent systems that stay relevant, grounded, and safe. Try free.

How to Build AI Agents with MCP, ACP, A2A
prompt engineering•8 min read

How to Build AI Agents with MCP, ACP, A2A

Learn how to build AI agents with MCP, ACP, and A2A so prompts can use tools, call services, and collaborate across systems. See examples inside.

Why Context Engineering Matters Now
prompt engineering•7 min read

Why Context Engineering Matters Now

Learn why context engineering is replacing prompt engineering for AI agents, and how to adapt your workflow now. See examples inside.

How to Prompt GPT-5.4 to Self-Correct
prompt engineering•8 min read

How to Prompt GPT-5.4 to Self-Correct

Learn how to use GPT-5.4 upfront planning and mid-response course correction to get better answers faster. See practical examples inside.

Want to improve your prompts instantly?

On this page

  • Key Takeaways
  • Why System Prompt Structure Matters
  • The Four-Layer Framework
  • Layer 1: The Persona Layer
  • Layer 2: The Constraint Layer
  • Layer 3: The Output Format Layer
  • Layer 4: The Fallback Behavior Layer
  • Putting It All Together
  • Adapting This for Claude Projects
  • Testing Your System Prompt
  • References