tutorials•April 10, 2026•8 min read

How to Prompt Claude for SQL via MCP

Learn how to prompt Claude for SQL on real databases via MCP with safer schema-aware workflows, better prompts, and fewer bad queries.

You can absolutely get Claude to write useful SQL against a real database through MCP. The catch is that "useful" depends less on model magic and more on how you frame the job.

Most bad SQL prompts ask for the final query too early. Claude does much better when you make it discover the schema first, narrow the candidate tables, and only then generate SQL.

Key Takeaways

  • Claude works better with real databases via MCP when you prompt for discovery before generation.
  • The strongest SQL prompts specify dialect, allowed tools, schema boundaries, and safety rules.
  • MCP helps because Claude can inspect tools and resources at runtime instead of guessing from pasted context.
  • Schema-aware prompting and iterative refinement consistently improve text-to-SQL reliability in research [1][2].
  • In practice, hyper-specific prompts reduce hallucinations when Claude has live tool access [4].

What is MCP for Claude SQL workflows?

MCP lets Claude discover tools, resources, and prompt templates at runtime, which makes real database work far more reliable than stuffing schema dumps into a chat box. Instead of guessing what it can access, Claude can query an MCP server for available capabilities and use structured tool inputs and outputs [3].

That matters for SQL because database work is rarely "one prompt, one answer." Claude may need to inspect tables, read schema resources, sample values, and then call a query tool. MCP was built around exactly those primitives: tools, resources, and prompts [3]. In plain English, that means your database can become a first-class tool instead of an awkward blob of pasted text.
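That discover-then-query loop is easy to picture with an ordinary database driver. Here is a toy sketch using Python's built-in sqlite3 module; the table names are invented, and a real MCP server would expose the discovery steps as resources and the query step as a tool rather than as local function calls:

```python
import sqlite3

# A toy stand-in for an MCP-exposed database: the "discover, then query"
# loop Claude follows when schema is a resource and SQL execution is a tool.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, plan TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL);
    INSERT INTO customers VALUES (1, 'Acme', 'pro'), (2, 'Globex', 'free');
""")

def list_tables(conn):
    """Discovery step 1: which tables exist? (the 'resource' read)."""
    rows = conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'"
    ).fetchall()
    return [r[0] for r in rows]

def describe_table(conn, table):
    """Discovery step 2: which columns does a table have?"""
    return [row[1] for row in conn.execute(f"PRAGMA table_info({table})")]

# Only after discovery does a query get generated and executed (the 'tool' call).
print(list_tables(conn))                  # ['customers', 'orders']
print(describe_table(conn, "customers"))  # ['id', 'name', 'plan']
```

The point is the ordering: schema inspection happens before any SQL is written, which is exactly what the prompts below enforce.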

Here's what I noticed: once Claude has live tool access, you should stop prompting it like a chatbot and start prompting it like an analyst with guardrails.


How should you prompt Claude before it writes SQL?

The best Claude SQL prompts define the task as a staged workflow: inspect, verify, narrow scope, generate, and only then answer. Research on text-to-SQL repeatedly shows that schema linking, decomposition, and revision improve execution accuracy, especially on more complex questions [1][2].

If you ask, "Write SQL to find our top churn risks," Claude has to infer the dialect, guess which tables matter, and invent business logic. That is exactly where things go off the rails.

A better framing is more like this:

You have access to a PostgreSQL database through MCP.

Goal: identify customers at risk of churn in the last 90 days.

Before writing SQL:
1. Inspect available schema resources and relevant tables.
2. Identify the tables and columns most relevant to churn signals.
3. If churn is ambiguous, explain the ambiguity and propose a measurable definition.
4. Write a read-only SQL query only after confirming the likely schema.
5. Return:
   - a short explanation
   - the final SQL
   - any assumptions
Constraints:
- PostgreSQL syntax only
- Prefer explicit JOINs
- Do not use write operations
- If key fields are missing, say so instead of guessing

That prompt works because it forces Claude to do what strong text-to-SQL systems do internally: identify intent, link schema, extract entities, and generate SQL from a constrained search space [1]. It also mirrors MCP's schema-driven design, where tool descriptions and input schemas guide action selection [3].
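If you reuse this staged structure often, it can be generated by a small helper. Everything below (the function name, arguments, and defaults) is illustrative scaffolding, not part of any SDK:

```python
def build_sql_prompt(goal, dialect="PostgreSQL", constraints=None):
    """Assemble a staged discover-then-generate SQL prompt.

    Mirrors the template above; names and defaults are illustrative.
    """
    constraints = constraints or [
        f"{dialect} syntax only",
        "Prefer explicit JOINs",
        "Do not use write operations",
        "If key fields are missing, say so instead of guessing",
    ]
    steps = [
        "Inspect available schema resources and relevant tables.",
        "Identify the tables and columns most relevant to the goal.",
        "If the goal is ambiguous, propose a measurable definition.",
        "Write a read-only SQL query only after confirming the likely schema.",
        "Return a short explanation, the final SQL, and any assumptions.",
    ]
    lines = [f"You have access to a {dialect} database through MCP.", "",
             f"Goal: {goal}", "", "Before writing SQL:"]
    lines += [f"{i}. {s}" for i, s in enumerate(steps, 1)]
    lines += ["", "Constraints:"] + [f"- {c}" for c in constraints]
    return "\n".join(lines)

prompt = build_sql_prompt("identify customers at risk of churn in the last 90 days")
```

A parameterized template like this keeps the discovery steps and safety constraints from silently drifting between one-off prompts.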

If you want to speed up this kind of restructuring across apps, tools like Rephrase are handy because they can turn a rough request into a tighter prompt before you send it to Claude.


Why do schema-aware prompts work better for SQL?

Schema-aware prompts work because text-to-SQL errors usually start before SQL generation, during table selection, column mapping, and ambiguity resolution. The model is often not failing at syntax first. It is failing at understanding what the database means [1][2].

One of the strongest patterns in the literature is decomposition. In IESR, the system separates information understanding, schema linking, and SQL generation instead of letting one pass do everything [1]. DataFactory shows a related idea from a practical multi-agent angle: retrieval improves when the system combines DDL, domain knowledge, and historical question-SQL examples [2].

That maps directly to prompting Claude via MCP. Your job is to give Claude enough structure to answer questions like:

  • Which tables are even relevant?
  • Which columns correspond to business terms?
  • What values should appear in filters?
  • What dialect rules apply?

You do not need academic complexity in your prompt. But you do need academic discipline.
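To make "which columns correspond to business terms" concrete, here is a deliberately crude schema-linking sketch: it maps business terms to candidate columns by token overlap. The scoring heuristic is mine, not from the cited papers; real systems use embeddings and value sampling, but the narrowing step is the same:

```python
def link_terms(business_terms, schema):
    """Map each business term to candidate (table, column, score) tuples
    by naive token overlap. Shows why narrowing the candidate schema
    happens *before* SQL generation.
    """
    links = {}
    for term in business_terms:
        tokens = set(term.lower().replace("_", " ").split())
        candidates = []
        for table, columns in schema.items():
            for col in columns:
                overlap = tokens & set(col.lower().split("_"))
                if overlap:
                    candidates.append((table, col, len(overlap)))
        # best matches first
        links[term] = sorted(candidates, key=lambda c: -c[2])
    return links

schema = {
    "customers": ["id", "name", "signup_date"],
    "subscriptions": ["id", "customer_id", "cancelled_at", "plan"],
}
links = link_terms(["cancelled subscription", "customer name"], schema)
# "cancelled subscription" -> subscriptions.cancelled_at
# "customer name"          -> customers.name
```

Even this naive version illustrates the payoff: the model generates SQL from a handful of linked columns instead of the whole catalog.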

Here's a simple comparison.

| Prompt style | What Claude does | Typical outcome |
| --- | --- | --- |
| "Write SQL for X" | Guesses schema and logic | Fast, brittle, often wrong |
| "Use schema Y and write SQL" | Better grounding | Good for simple cases |
| "Inspect schema, identify relevant tables, explain assumptions, then write SQL" | Structured reasoning | Best for real databases via MCP |

What should a practical Claude SQL prompt include?

A practical prompt for Claude SQL via MCP should include the business question, database dialect, allowed scope, workflow steps, and output format. If any of those are missing, Claude tends to fill the gap with a guess, and guesses are expensive against live data.

My default template looks like this:

You are querying a real database through MCP.

Business question:
[insert question]

Database rules:
- Dialect: [PostgreSQL / MySQL / SQLite / BigQuery]
- Read-only queries only
- Use only MCP-exposed tools and resources
- Do not guess missing columns or tables
- Ask for clarification if two interpretations are plausible

Process:
1. Inspect schema-related resources or tools first.
2. Identify the minimum set of relevant tables and columns.
3. Summarize your assumptions in 2-4 bullets.
4. Produce one executable SQL query.
5. If confidence is low, say why before giving SQL.

Output:
- Assumptions
- SQL
- Brief explanation

This is boring on purpose. Boring prompts are underrated.
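One payoff of a boring, fixed output contract is that replies become machine-checkable. A sketch of parsing the Assumptions / SQL / Explanation sections — the header names are assumptions carried over from the template, not something Claude guarantees:

```python
def parse_reply(text):
    """Split a reply following the Assumptions / SQL / Explanation
    contract into named sections. Header names come from the template.
    """
    sections = {}
    current = None
    for line in text.splitlines():
        header = line.strip().rstrip(":").lower()
        if header in ("assumptions", "sql", "explanation"):
            current = header
            sections[current] = []
        elif current is not None:
            sections[current].append(line)
    return {k: "\n".join(v).strip() for k, v in sections.items()}

reply = """Assumptions:
- churn means no orders in 90 days

SQL:
SELECT id FROM customers WHERE last_order < now() - interval '90 days';

Explanation:
Filters customers by last order date.
"""
parsed = parse_reply(reply)
```

If a reply fails to parse, that by itself is a signal the model skipped part of the contract.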

A community example about Claude with MCP and Excel made the same point from another angle: once a model has live tool access, being hyper-specific about where to look and how to recover from errors reduces hallucinations [4]. Databases behave the same way. Specificity wins.


What does a before-and-after SQL prompt look like?

The difference between a weak and strong Claude SQL prompt is usually not length. It is operational clarity. A better prompt tells Claude what to inspect, what not to do, and how to handle uncertainty before it touches the query tool.

Here's a real-world style transformation.

Before:

"Find our highest-value customers and show churn risk."

After:

"Using the MCP database tools, inspect the customer, orders, subscriptions, and support-related schema first. Define 'highest-value' as top 10% by revenue unless schema suggests a better metric. Define 'churn risk' using recent inactivity, cancelled subscriptions, or declining order frequency if available. Use PostgreSQL syntax, read-only SQL, explicit JOINs, and list assumptions before the final query."

The "after" version does three useful things. It gives Claude candidate domains, defines fallback business logic, and constrains the SQL behavior. That is exactly the sort of context engineering that keeps the model from inventing a neat-looking but useless answer [2].

If you do this often, the other prompt engineering articles on the Rephrase blog are worth browsing, because the same rewrite pattern works for code, docs, Slack, and analyst workflows too.


How do you make Claude safer on production databases?

Safer Claude SQL workflows come from limiting capability, not trusting the model more. MCP helps by making tool boundaries explicit, but you still need permission design, read-only defaults, and clear action boundaries around anything destructive [3].

This part is easy to ignore because prompting is more fun than permissions. But schema-guided MCP work is blunt about the problem: tools need explicit action boundaries and failure modes, especially when an agent can discover them dynamically [3].

My take is simple. For production use:

  1. Default to read-only tools.
  2. Expose schema resources separately from execution tools.
  3. Require Claude to inspect before querying.
  4. Require explicit user confirmation for write actions.
  5. Tell Claude to state uncertainty instead of fabricating confidence.

That sounds strict, but strict is good when the model has access to something real.
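Rule 1 is best enforced below the prompt, at the connection layer. For SQLite that means opening the database in read-only URI mode, as in the sketch below; Postgres has the analogous role-level `GRANT SELECT`. The file path and table here are invented for the demo:

```python
import os
import sqlite3
import tempfile

# Create a throwaway database file so it can be reopened read-only.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
rw = sqlite3.connect(path)
rw.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
rw.execute("INSERT INTO customers VALUES (1, 'Acme')")
rw.commit()
rw.close()

# The connection the agent's query tool gets: read-only at the driver level.
ro = sqlite3.connect(f"file:{path}?mode=ro", uri=True)

rows = ro.execute("SELECT name FROM customers").fetchall()  # allowed

try:
    ro.execute("DELETE FROM customers")  # blocked by the database itself
    blocked = False
except sqlite3.OperationalError:
    blocked = True
```

No amount of prompt injection can talk a model past a connection that physically cannot write.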

On the tooling side, Rephrase can help standardize these rules into cleaner prompts, especially when your first draft is just "ask Claude to query Postgres for revenue by segment."


Claude can be excellent at SQL through MCP, but only if you prompt for the workflow, not just the output. Treat schema discovery as part of the task, make the constraints explicit, and force assumptions into the open. That is where the quality jump happens.


References

Documentation & Research

  1. IESR: Efficient MCTS-Based Modular Reasoning for Text-to-SQL with Large Language Models - arXiv cs.CL (link)
  2. DataFactory: Collaborative Multi-Agent Framework for Advanced Table Question Answering - arXiv cs.AI (link)
  3. The Convergence of Schema-Guided Dialogue Systems and the Model Context Protocol - arXiv cs.AI (link)

Community Examples

  4. Automating Excel workflows with Claude Sonnet 4.6 & MCP (Model Context Protocol) - r/PromptEngineering (link)

Ilia Ilinskii

Founder of Rephrase-it. Building tools to help humans communicate with AI.

Frequently Asked Questions

How do you prompt Claude to write SQL through MCP?

Give Claude access through MCP, then prompt it with the task, the database dialect, the allowed tables, and the success criteria. The best prompts also require schema inspection first and force Claude to explain uncertainty before it runs a risky query.

Is MCP safer than pasting schema into the chat?

Usually yes, because MCP lets Claude discover tools, resources, and prompts at runtime instead of stuffing raw data into the context window. That said, safety still depends on permissions, tool boundaries, and whether write actions require explicit approval.

Related Articles

How to Prompt AI for Financial Models
tutorials•8 min read
Learn how to prompt AI for revenue forecasts, unit economics, and scenario planning without bad assumptions or fake precision.

How to Clean CSV Files With AI Prompts
tutorials•7 min read
Learn how to clean messy CSV files with AI prompts in under 60 seconds using a reliable workflow that reduces guesswork and errors.

How to Prompt AI for GA4 Analysis
tutorials•8 min read
Learn how to write AI prompts for GA4 custom reports, anomaly detection, and attribution analysis with better outputs and cleaner insights.

How to Repurpose Content With AI
tutorials•8 min read
Learn how to turn one idea into 15 content formats with AI prompts that preserve meaning, tone, and quality.