
Prompt Tips•Mar 07, 2026•8 min

How to Get AI to Write Like You (Not Like Every Other AI-Generated Email)

A practical prompting workflow to clone your email voice with examples, constraints, and an iteration loop that keeps outputs human.

If your AI emails sound like they were written by "Helpful Assistant #4,729," you don't have a model problem. You have a spec problem.

Most people try to fix "AI voice" with vague adjectives. "Make it more casual." "Less salesy." "More me." That's basically telling a probabilistic text engine to read your mind.

Here's what actually works: treat your voice like a product requirement, train it in-context with a handful of tight examples, and run a short feedback loop until it sticks. The same core idea shows up in prompt-optimization research: you don't guess your way to better outputs; you search, compare, and select prompts iteratively using evaluation signals, even if those signals are "pairwise preferences" rather than a perfect metric [1]. And if you want reliability, you add a deliberate self-check step: self-dialogue/reflection improves robustness over one-pass generation [2].

Let's turn that into a workflow you can use for emails.


Your "voice" is a bundle of constraints (not a vibe)

When people say "write like me," they usually mean a bunch of smaller, testable constraints:

  • You have a default sentence length.
  • You have favorite transitions.
  • You have a tolerance for fluff (usually low).
  • You have a "politeness ceiling" ("Thanks in advance" might be too much).
  • You have signature moves like one-line paragraphs, a strategic em dash, or a blunt CTA.

If you don't specify those, the model falls back to the safest, most generic email voice. That's the blandness you're hearing.

So the move is: stop asking for personality. Start asking for pattern extraction from your own writing samples, then lock those patterns into a reusable "voice contract."


Step 1: Create a tiny voice dataset (yes, dataset)

You don't need 200 emails. You need 6-10 good ones.

Pick examples that match the type of email you'll generate: intros, follow-ups, customer replies, internal asks, apology emails, whatever. Voice is contextual. Your "shipping an update" tone is not your "asking for a favor" tone.

Also: remove sensitive details. Replace names, deals, and numbers with placeholders. You want style, not secrets.
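If you're building your sample set from real emails, a small script can do a first redaction pass before you paste anything into a model. This is a minimal sketch (my own helper, not part of any library); the placeholder names and regexes are illustrative, and you should still eyeball the output:

```python
import re

def redact(email: str, names: list[str]) -> str:
    # Swap known names (people, companies, deals) for a placeholder.
    for name in names:
        email = re.sub(re.escape(name), "[NAME]", email, flags=re.IGNORECASE)
    # Dollar amounts like $12,500 or $3.2M -> [AMOUNT]
    email = re.sub(r"\$[\d,.]+[KMB]?", "[AMOUNT]", email)
    # Remaining bare numbers (dates, quantities) -> [NUMBER]
    email = re.sub(r"\b\d[\d,.]*\b", "[NUMBER]", email)
    return email

sample = "Hi Dana, the Acme deal closed at $12,500 on 03/07."
redacted = redact(sample, ["Dana", "Acme"])
```

The order matters: names first, then amounts, then leftover digits, so a dollar figure becomes one `[AMOUNT]` token instead of a pile of `[NUMBER]`s.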


Step 2: Make the model reverse-engineer your writing, then write the spec

This is the part most people skip. They jump straight to "write me an email." Don't.

First ask the model to analyze your examples and produce a concrete style guide it can follow. This "inverse prompting" idea gets shared a lot in the wild for brand/voice alignment: give examples, ask for the "linguistic DNA," then generate new text using that DNA [3]. The community framing is a bit poetic, but the practical kernel is correct: examples are stronger than adjectives.

Here's a prompt I use to generate a voice spec (paste your examples where indicated):

You are my writing analyst.

Study the email samples below and infer a reusable voice guide.

Output:
1) Voice rules (10-15 bullets) covering tone, sentence length, paragraphing, hedging, assertiveness, humor, punctuation habits, and typical CTA style.
2) "Do" phrases I naturally use (10 items).
3) "Don't" phrases that would sound unlike me (10 items).
4) A short checklist you will run before finalizing any email.

Email samples:
[EMAIL 1]
[EMAIL 2]
[EMAIL 3]
...

What you're doing here is converting "style" into constraints the model can execute. That's the difference between "sound human" and "sound like you."
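If your samples live as text files, assembling this analysis prompt is a few lines of scripting. A sketch, assuming one redacted email per `.txt` file in a folder (the file layout is my convention; the prompt wording is the one above):

```python
from pathlib import Path

ANALYST_PROMPT = """You are my writing analyst.

Study the email samples below and infer a reusable voice guide.

Output:
1) Voice rules (10-15 bullets) covering tone, sentence length, paragraphing, hedging, assertiveness, humor, punctuation habits, and typical CTA style.
2) "Do" phrases I naturally use (10 items).
3) "Don't" phrases that would sound unlike me (10 items).
4) A short checklist you will run before finalizing any email.

Email samples:
{samples}"""

def build_analysis_prompt(sample_dir: str) -> str:
    # One redacted email per .txt file; sorted for a stable order.
    emails = [p.read_text().strip() for p in sorted(Path(sample_dir).glob("*.txt"))]
    numbered = "\n\n".join(f"[EMAIL {i}]\n{e}" for i, e in enumerate(emails, 1))
    return ANALYST_PROMPT.format(samples=numbered)
```

Keeping the samples on disk also makes it easy to re-run the analysis when you add or swap examples.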


Step 3: Use a two-pass generation loop (draft, then self-critique)

One-pass generation is why AI email feels like a template: it's optimized for plausibility, not authenticity.

Research on multi-step "self-dialogue" pipelines shows that having the model reflect on an initial answer and then revise can improve output quality and reduce wrong turns [2]. In prompt-optimization work, quality improves when you compare candidates and iterate rather than settling for the first output [1]. You don't need a full tree-search agent for emails, but you do want the spirit: generate, evaluate, revise.

Here's the exact pattern.

SYSTEM: You are an email writer who must follow the Voice Contract exactly.

VOICE CONTRACT:
[Paste the voice rules + do/don't list + checklist from the analysis step]

USER:
Context: [who am I emailing, relationship, goal]
Details to include: [bullets]
Hard constraints: [word count max, must include link, must ask for time, etc.]

Task:
1) Draft the email.
2) Then critique your own draft against the Voice Contract. Name any violations.
3) Rewrite the email fixing only those violations. Keep the content intent the same.

Output only the final rewritten email.

This is a mini "dialectic pipeline." Draft (thesis), critique (antithesis), rewrite (synthesis) [2]. It's simple, but it changes everything.
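If you call a model programmatically, the same pattern is easy to wrap in a function. A sketch: `call_llm(system, user)` is a stand-in for whatever client you actually use (OpenAI, Anthropic, a local model); only the prompt assembly is shown here:

```python
def two_pass_email(call_llm, voice_contract: str, context: str,
                   details: str, constraints: str) -> str:
    # System message carries the Voice Contract from the analysis step.
    system = ("You are an email writer who must follow the Voice Contract exactly.\n\n"
              f"VOICE CONTRACT:\n{voice_contract}")
    # User message bundles context, content, and the draft/critique/rewrite task.
    user = (f"Context: {context}\n"
            f"Details to include: {details}\n"
            f"Hard constraints: {constraints}\n\n"
            "Task:\n"
            "1) Draft the email.\n"
            "2) Then critique your own draft against the Voice Contract. Name any violations.\n"
            "3) Rewrite the email fixing only those violations. Keep the content intent the same.\n\n"
            "Output only the final rewritten email.")
    return call_llm(system, user)
```

Because the critique happens inside one call, you pay one round trip; if you want a harder gate, split steps 2 and 3 into a second call.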


Step 4: Don't "describe" style. Few-shot it

If you want the model to consistently match cadence and structure, examples are your strongest lever. In-context learning (few-shot prompting) is literally the mechanism: demonstrations condition the model's next outputs [2]. You don't have to fine-tune anything to get real gains-just show it what "right" looks like.

There's a popular community pattern that's basically "3-shot voice replication": give three examples, ask for a fourth in the same style [4]. It's not academic, but it's a great practical default.

Here's a version that's email-safe and controlled:

SYSTEM: You are a Pattern Replication Engine for email voice.

USER:
Goal: Write a new email that matches my voice and formatting exactly.

Examples (do not copy details, only style):
[EXAMPLE EMAIL 1]
---
[EXAMPLE EMAIL 2]
---
[EXAMPLE EMAIL 3]

New email requirements:
Recipient: [who]
Situation: [what happened]
My intent: [what I want]
Include: [specific facts]
Avoid: [phrases, tone, claims]

Write the email.

The catch: this works best when your examples are the same category as the new email. If you mix "angry escalation" with "friendly intro," the model averages them and you get… assistant voice again.
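That category constraint is worth enforcing in code if you keep a bank of examples. A sketch (the `category` tags and dict shape are my own convention): tag each stored email, then build the few-shot prompt only from examples matching the email you're about to write:

```python
def build_fewshot_prompt(examples: list[dict], category: str, requirements: str) -> str:
    # Only use examples from the same category, so the model doesn't
    # average "angry escalation" with "friendly intro".
    matching = [e["text"] for e in examples if e["category"] == category]
    if len(matching) < 3:
        raise ValueError(f"Need at least 3 '{category}' examples, found {len(matching)}")
    shots = "\n---\n".join(matching[:3])
    return (
        "Goal: Write a new email that matches my voice and formatting exactly.\n\n"
        f"Examples (do not copy details, only style):\n{shots}\n\n"
        f"New email requirements:\n{requirements}\n\nWrite the email."
    )
```

Raising on fewer than three matches is deliberate: an under-specified few-shot prompt quietly degrades back to assistant voice, and a loud failure is easier to notice.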


Practical example: before/after prompt for a real email

Let's say you want a follow-up email that doesn't sound like a CRM automation.

Bad prompt:

Write a friendly follow-up email checking in.

Better prompt (voice contract + constraints + two-pass loop):

SYSTEM: You are an email writer who must follow the Voice Contract exactly.

VOICE CONTRACT:
- Short paragraphs. Max 2 sentences per paragraph.
- Direct. No filler like "I hope you're doing well."
- One clear ask per email.
- Casual-professional. No exclamation points.
- Preferred sign-off: "Thanks," then first name.
- DON'T use: "circle back", "touch base", "at your earliest convenience", "I wanted to reach out"

USER:
Context: I emailed Dana last week about a pilot for our API monitoring tool. She replied "looks interesting" but didn't pick a time.
Details to include: We can support Slack alerts + custom dashboards. Pilot is 2 weeks. I can do a 15-min setup call.
Hard constraints: Under 90 words. Ask for two specific time options next week.

Task:
1) Draft the email.
2) Critique it against the Voice Contract.
3) Rewrite fixing only violations.
Output only the final rewritten email.

This prompt gives the model a job it can actually do: satisfy constraints, not invent a personality.


What I've noticed: the "don't list" matters more than people think

Most AI-email tells show up in the same places: generic transitions, softening phrases, and over-explaining.

A "don't say" list is your fastest win because it blocks the model's default autopilot phrases. And it's easy to test: generate five drafts, see what annoys you, add those phrases to the list, repeat. That's the lightweight version of iterative prompt improvement that prompt-agent research formalizes at scale [1].
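You can even run the "don't say" list as a mechanical lint pass on drafts before you send them. A minimal sketch; the phrase list below is illustrative and should be grown from your own annoyances:

```python
# Autopilot phrases to block; extend this list as drafts annoy you.
DONT_SAY = [
    "circle back",
    "touch base",
    "i wanted to reach out",
    "i hope you're doing well",
    "at your earliest convenience",
]

def voice_violations(draft: str, banned: list[str] = DONT_SAY) -> list[str]:
    # Case-insensitive substring check; returns every banned phrase found.
    lower = draft.lower()
    return [p for p in banned if p in lower]

draft = "Hi Dana, I wanted to reach out to touch base on the pilot."
print(voice_violations(draft))  # ['touch base', 'i wanted to reach out']
```

A non-empty result means the draft goes back through the rewrite step with those phrases named explicitly.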


Closing thought

If you want AI to write like you, you can't just request "your style." You have to operationalize it.

Give examples. Extract rules. Add a do/don't lexicon. Then run a short generate → critique → rewrite loop until your "voice contract" is tight. After that, emails stop sounding AI-generated not because the model got smarter, but because you finally told it what "you" means.


References

Documentation & Research

  1. UPA: Unsupervised Prompt Agent via Tree-Based Search and Selection - arXiv - http://arxiv.org/abs/2601.23273v1
  2. A Dialectic Pipeline for Improving LLM Robustness - arXiv - https://arxiv.org/abs/2601.20659

Community Examples

  3. The "Inverse Prompting" Loop for perfect brand alignment - r/PromptEngineering - https://www.reddit.com/r/PromptEngineering/comments/1rasd0y/the_inverse_prompting_loop_for_perfect_brand/
  4. The "3-Shot" Pattern for perfect brand voice replication - r/PromptEngineering - https://www.reddit.com/r/PromptEngineering/comments/1rduv3n/the_3shot_pattern_for_perfect_brand_voice/
Ilia Ilinskii

Founder of Rephrase-it. Building tools to help humans communicate with AI.

