Prompt Tips · Mar 02, 2026 · 7 min read

The Prompt That Moves Your Memory From ChatGPT to Claude in 60 Seconds

Anthropic built a one-prompt migration path from ChatGPT to Claude. Here is the exact prompt, how to use it, what transfers, and what gets lost.


Switching AI assistants used to mean starting from zero. All those preferences, project contexts, writing style adjustments, tool configurations - gone. You would spend weeks re-teaching a new model things the old one already knew.

Anthropic just made that problem disappear with a single prompt.

Claude now has an official "Import Memory" flow that lets you extract everything ChatGPT knows about you and paste it straight into Claude. No file exports, no JSON parsing, no API tokens. Just copy, paste, done. And the prompt itself is the interesting part - because it is doing real work.


The prompt and how to use it

Here is the step-by-step process:

Step 1. Go to Claude Settings > Capabilities > Memory, and click "Start import" (or visit the import card on the Claude home screen).

Step 2. Claude gives you a prompt to copy. The core of it looks like this:

"List every memory you have stored about me, as well as any context you have learned about me from past conversations. Output everything in a single code block. Include: instructions you have received, personal details, projects, tools I use, and behavioral preferences. Format each entry as: [date saved, if available] - memory content."

Step 3. Paste that prompt into ChatGPT (or Gemini, Copilot, Grok - it works with any provider that stores user context).

Step 4. Copy the output - a big code block of structured memories.

Step 5. Paste it back into Claude's import box and click "Add to memory."

That is the whole migration - under a minute, assuming your ChatGPT memory list is a manageable size.
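The export uses a simple line format ("[date] - memory content"), which makes it easy to post-process before importing. Here is a minimal Python sketch of a parser for that format - the function name, the fallback handling, and the sample entries are my own assumptions, not part of Anthropic's tooling:

```python
import re

def parse_memory_export(block: str) -> list[dict]:
    """Split each exported line into an optional date and the memory text.
    Lines without a bracketed date are kept with date=None."""
    entries = []
    for line in block.strip().splitlines():
        line = line.strip()
        if not line:
            continue
        m = re.match(r"\[([^\]]*)\]\s*-\s*(.+)", line)
        if m:
            entries.append({"date": m.group(1).strip() or None,
                            "content": m.group(2).strip()})
        else:
            # Undated entry, e.g. a bare "- memory" bullet
            entries.append({"date": None,
                            "content": line.lstrip("- ").strip()})
    return entries

export = """\
[2025-11-03] - Prefers concise, bullet-point answers
[2024-01-15] - Works primarily in TypeScript and Go
- Reviews code with a focus on error handling"""

for entry in parse_memory_export(export):
    print(entry["date"], "->", entry["content"])
```

Having the export in a structured form makes the later curation steps (deduplicating, dropping stale items) trivial.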


Why this prompt works well

The prompt is not asking ChatGPT to "summarize what you know about me." That would produce a fluffy paragraph. Instead, it asks for structured, exhaustive, itemized output in a code block.

Three design choices make it effective:

1. Code block format. By requesting output inside a code block, the prompt avoids ChatGPT's tendency to add conversational padding, disclaimers, and transitions. You get raw data, not a narrative.

2. Explicit categories. The prompt lists what to include: instructions, personal details, projects, tools, preferences. This prevents ChatGPT from self-censoring or deciding some memories are "not important enough" to mention. LLMs are conservative by default - if you do not explicitly ask for something, they often skip it.

3. Date stamps. Requesting dates (when available) gives Claude temporal context. A preference from two years ago might be stale. A project mentioned last week is probably active. This metadata helps Claude prioritize what matters.

The result is a structured knowledge transfer document - not a conversation summary.
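Those date stamps are not just decoration - you can use them to triage entries before importing. A quick sketch of that idea, with an assumed policy (anything older than a year is flagged stale; undated entries pass through) that you would tune to taste:

```python
from datetime import date, timedelta

def flag_stale(entries, today, max_age=timedelta(days=365)):
    """Split parsed memories into (active, stale) by saved date.
    Undated entries are kept as active, since we cannot judge them."""
    active, stale = [], []
    for entry in entries:
        saved = entry.get("date")
        if saved is not None and (today - saved) > max_age:
            stale.append(entry)
        else:
            active.append(entry)
    return active, stale

entries = [
    {"date": date(2024, 1, 15), "content": "Uses Python 3.10"},
    {"date": date(2026, 2, 20), "content": "Migrating a service to Go"},
    {"date": None, "content": "Prefers short explanations"},
]
active, stale = flag_stale(entries, today=date(2026, 3, 2))
```

Reviewing the stale list by hand before importing beats letting a two-year-old preference override a current one.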


What actually transfers (and what does not)

From my testing and what users are reporting, the transfer works well for:

  • Work preferences: communication style, formatting rules, tone guidelines
  • Project context: active projects, tech stacks, team structures
  • Tool configurations: preferred languages, frameworks, editor setups
  • Behavioral patterns: how you like code reviewed, how verbose you want explanations

What does not transfer cleanly:

  • Personal details unrelated to work. Claude explicitly prioritizes work-related content. Your ChatGPT might remember your dog's name or your favorite restaurant - Claude may not retain those.
  • Conversation history. The prompt exports memories, not full chat logs. If ChatGPT learned something implicitly from a conversation but never stored it as a memory, it will not appear in the export.
  • Custom GPTs and system prompts. If you built custom GPTs with specific instructions, those live in OpenAI's system - not in your memory. You would need to manually recreate those as Claude Projects.

There is also a timing factor: Claude processes imported memories in daily synthesis cycles, so it can take up to 24 hours for everything to fully integrate. Do not panic if Claude does not immediately "know" something you just imported.


The competitive angle you should care about

This is not just a convenience feature. It is a strategic weapon.

The single biggest moat for any AI assistant is accumulated context. The longer you use ChatGPT, the more it knows about you, and the harder it is to leave. Anthropic just built a bridge over that moat.

The Product Hunt launch was telling - the feature trended to #1 Product of the Day, and the discourse around it was dominated by users saying they had wanted to switch but felt "locked in" by their ChatGPT memory. Anthropic gave them permission and a tool.

The move also pressures OpenAI to respond. If Claude can import from ChatGPT but ChatGPT cannot import from Claude, the switching cost becomes asymmetric. That is a powerful acquisition lever.

And Anthropic made the export side available too - you can export your Claude memories at any time. This signals confidence: "We do not need to trap you. We think you will stay because the product is better."


Practical tips for a clean migration

If you are planning to make the switch, a few things I have learned:

Clean up ChatGPT memories first. Go to Settings > Personalization > Memory in ChatGPT and review what is stored. Delete outdated or wrong entries before exporting. Garbage in, garbage out.

Run the prompt twice. ChatGPT sometimes truncates long memory lists. If the output looks short, ask "Is that everything? List any remaining memories you have about me." You might get a second batch.

Edit before importing. The paste box in Claude is editable. Remove anything irrelevant or outdated before clicking "Add to memory." It is easier to curate now than to clean up Claude's memory later.

Check after 24 hours. Give Claude a day to process, then ask it "What do you remember about me?" Compare against your export. If something critical is missing, you can manually add it in Claude's memory settings.
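If running the prompt twice leaves you with two overlapping batches, you can merge them and drop the exact duplicates before pasting into Claude's import box. A small sketch - the helper name and the case-insensitive dedup policy are my own:

```python
def merge_batches(*batches: str) -> str:
    """Concatenate memory-export batches, dropping repeated lines
    (case-insensitive exact match) while preserving first-seen order."""
    seen = set()
    merged = []
    for batch in batches:
        for line in batch.strip().splitlines():
            key = line.strip().lower()
            if key and key not in seen:
                seen.add(key)
                merged.append(line.strip())
    return "\n".join(merged)

first = """\
[2025-11-03] - Prefers concise answers
[2024-01-15] - Works primarily in TypeScript"""
second = """\
[2024-01-15] - Works primarily in TypeScript
[2025-12-01] - Leads a four-person platform team"""

print(merge_batches(first, second))
```

Exact-match deduplication will not catch the same memory phrased two different ways, so still skim the merged list before clicking "Add to memory."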


The bigger picture

Data portability in AI is becoming a real battleground. For years, the assumption was that your AI context was locked inside the provider. Anthropic just proved that a well-designed prompt can flatten that barrier.

The irony is beautiful: the tool that breaks the lock-in is a prompt. The most basic unit of interaction with an LLM is also the key to leaving one.

If you are building products on top of AI providers, pay attention. User context portability is coming whether providers want it or not. The ones who embrace it - as Anthropic is doing - will earn trust. The ones who fight it will eventually lose users to the ones who embrace it.


Original data sources

Claude Import Memory on Product Hunt: https://www.producthunt.com/products/claude

Claude Help Center - Importing and exporting memory: https://support.claude.com/en/articles/12123587-importing-and-exporting-your-memory-from-claude

Artificial Corner - How to move from ChatGPT to Claude: https://artificialcorner.com/p/switch-to-claude

Awesome Agents - Claude Import Memory feature: https://awesomeagents.ai/news/claude-import-memory-switch-providers/

Ilia Ilinskii

Founder of Rephrase-it. Building tools to help humans communicate with AI.
