
AI Prompts for Market Research: The Workflow I Use to Go From "Vibes" to Evidence

A practical prompt workflow for market research: scoping, sourcing, synthesizing, simulating, and stress-testing insights without fooling yourself.

Ilia Ilinskii
Rephrase · Mar 02, 2026
Prompt tips · 10 min read

Market research used to mean one of two painful modes: either you paid a firm for a glossy deck, or you did the work yourself and drowned in tabs, transcripts, and half-finished spreadsheets.

Now you can ask an LLM for "market research" and get something that sounds like a deck in 30 seconds.

That's the trap.

LLMs are great at producing fluent narratives, but market research is not a narrative task. It's an evidence task. If you don't design prompts that force the model to separate what it knows from what it's making up, you'll get confident fiction. And you'll make decisions based on it.

So the way I think about prompting for market research is: don't prompt for "answers." Prompt for a repeatable research system. One that ingests context, produces hypotheses, tags uncertainty, and keeps you honest.

Below is the workflow I use, and the prompt patterns that make it work.


Step 1: Start with a research contract, not a question

The fastest way to waste tokens is to start with "research X market." You'll get generic segmentation and a Porter's Five Forces impression.

Instead, I write a "research contract" prompt that pins down scope, decision, and what counts as evidence. This is basically the same idea as treating a forecast as an update problem: start from a prior, then revise based on evidence, instead of "predicting from scratch." That framing matters a lot when you want calibrated outputs rather than vibes [1].

Here's the prompt I actually use:

You are a market research lead supporting a product decision.

Decision to support:
- Decision: [e.g., "Should we launch an AI meeting notes product for small legal firms?"]
- Time horizon: [e.g., 6 months]
- Geography: [e.g., US + Canada]
- Buyer: [role, budget owner]
- Alternative choices: [do nothing / build feature A / target segment B]

Research outputs I need:
1) A list of hypotheses ranked by decision impact.
2) For each hypothesis: what evidence would confirm vs. disconfirm it.
3) A sourcing plan: what public sources to check first, and what to ask in primary research.
4) A "risk of hallucination" note: where you are most likely to guess.

Rules:
- If you are unsure, say "unknown" and propose how to verify.
- Don't invent statistics. If you use a number, label it as an assumption.
- Use short, skimmable sections.

What I noticed: once you force the model into "decision support," it stops trying to be a Wikipedia page and starts acting like a research operator.
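Because the contract is just structured fields, I keep it as data and render the prompt from a template, so the same system re-runs for each new decision. A minimal sketch in Python (the field names and example values are my own placeholders, not part of any tool):

# Sketch: keep the research contract as data and render the prompt from a template,
# so the same system re-runs for each new decision. Field names are placeholders.

RESEARCH_CONTRACT = """You are a market research lead supporting a product decision.

Decision to support:
- Decision: {decision}
- Time horizon: {horizon}
- Geography: {geography}
- Buyer: {buyer}
- Alternative choices: {alternatives}

Research outputs I need:
1) Hypotheses ranked by decision impact.
2) Confirming vs. disconfirming evidence for each hypothesis.
3) A sourcing plan: public sources first, then primary research questions.
4) A "risk of hallucination" note: where you are most likely to guess.

Rules:
- If unsure, say "unknown" and propose how to verify.
- Don't invent statistics; label any number as an assumption.
- Use short, skimmable sections."""

prompt = RESEARCH_CONTRACT.format(
    decision="Should we launch an AI meeting notes product for small legal firms?",
    horizon="6 months",
    geography="US + Canada",
    buyer="managing partner, owns the software budget",
    alternatives="do nothing / build feature A / target segment B",
)
print(prompt)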


Step 2: Turn "the market" into a set of evidence buckets

Market research is usually a mix of four evidence types: what people say, what they do, what competitors sell, and what the environment makes possible (pricing, regulation, distribution).

I prompt the model to build buckets, then fill each bucket with questions, not conclusions. This keeps your research open-ended longer, which is good. Premature certainty is the enemy.

Given the decision above, create an evidence map with 4 buckets:
A) Customer pain & willingness-to-pay
B) Competitive landscape & substitutes
C) Distribution & acquisition constraints
D) Macro constraints (compliance, procurement, security, switching costs)

For each bucket:
- list 5-8 research questions
- identify which questions can be answered with public data vs. require interviews/surveys
- list "likely pitfalls" (ways an LLM might overgeneralize)

This is also where you start defending against "synthetic respondents" misuse. There's a growing body of work warning that LLMs can be tempting as survey replacements, but that treating simulated responses as interchangeable with human data is not safe for confirmatory claims [3]. In practice: use the model to shape what to ask, not to pretend you've already asked it.
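Once the model returns the map, I keep it outside the chat so answers accumulate across sessions instead of dying with the thread. A rough way to track it (the bucket keys and status vocabulary are my own convention):

# Sketch: track the evidence map outside the chat, so findings accumulate over time.
# Bucket names mirror the prompt above; "source" and "status" values are my own.

evidence_map = {
    "customer_pain": [
        {"question": "What do small legal firms use for meeting notes today?",
         "source": "interviews", "status": "open", "findings": []},
        {"question": "Is note-taking a budgeted problem or a tolerated annoyance?",
         "source": "interviews", "status": "open", "findings": []},
    ],
    "competitive_landscape": [
        {"question": "Which incumbents already bundle AI notes into legal practice suites?",
         "source": "public", "status": "open", "findings": []},
    ],
    # ...distribution constraints and macro constraints follow the same shape
}

# Pull the questions that still need primary research before the next interview round.
open_interview_questions = [
    q["question"]
    for bucket in evidence_map.values()
    for q in bucket
    if q["source"] == "interviews" and q["status"] == "open"
]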


Step 3: Use the model to update a prior, not to "generate insights"

If you're doing market sizing, competitive comparisons, or trend calls, you should think like a forecaster: you have a prior belief, and new evidence should update it. That's not philosophy; it's an operational prompt-design choice that improves calibration.

The mention-markets paper is a nice concrete demonstration: when the prompt explicitly tells the model to treat a baseline probability as a prior and revise it using provided text, calibration improves versus naive prompting [1]. Different domain, same principle.

So I use a "prior → evidence → posterior" template for claims like "Segment X will pay $Y" or "Competitor Z is most at risk."

We are evaluating this hypothesis:

H: "[state hypothesis]"

My current prior belief:
- Prior probability (0-100): [e.g., 55]
- Why: [2-3 bullets, can be rough]

Evidence provided (delimited):
"""
[paste notes, quotes, competitor pages, pricing screenshots, links, etc.]
"""

Task:
1) Extract only the evidence relevant to H (quote it back).
2) List arguments that increase confidence and arguments that decrease confidence.
3) Provide an updated probability (0-100) and explain the update.
4) Flag what additional evidence would move the number the most.

Rules:
- Don't invent new facts.
- If evidence is weak, say so and keep the posterior close to the prior.

That "keep the posterior close to the prior when evidence is weak" line is doing real work. It nudges the model away from overreacting to a single spicy anecdote.


Step 4: Synthesize with decision-aware outputs (not pretty prose)

Most "AI market research" outputs fail because they optimize for readability, not usefulness.

One research thread I like here comes from operations/decision-making: if you use LLMs to generate distributions (e.g., willingness-to-pay samples), the right evaluation is decision quality, not whether the distribution "looks similar" under a distance metric [2]. Translation: your synthesis should be shaped by what decision you'll take next.
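To make "decision quality" concrete, here's a toy sketch with invented numbers: instead of asking whether an LLM-generated willingness-to-pay sample matches a reference distribution, ask whether it points you at the same price.

# Toy illustration of "evaluate by decision quality", with invented numbers:
# compare the price each willingness-to-pay sample recommends, not the samples themselves.

def best_price(wtp_samples, candidate_prices):
    """Price that maximizes expected revenue per prospect: price * P(wtp >= price)."""
    def expected_revenue(price):
        share_buying = sum(w >= price for w in wtp_samples) / len(wtp_samples)
        return price * share_buying
    return max(candidate_prices, key=expected_revenue)

reference_wtp = [20, 35, 40, 45, 60, 80, 90, 120]   # e.g., from a real pricing survey
simulated_wtp = [25, 30, 40, 50, 55, 75, 95, 110]   # e.g., persona-generated samples

prices = [29, 49, 79, 99]
print(best_price(reference_wtp, prices))   # 79
print(best_price(simulated_wtp, prices))   # 49

The two samples look broadly similar, yet they recommend different prices, and that gap is the one that actually costs you money.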

So I ask for synthesis in a format that can be acted on: decision options, expected upside, key assumptions, and the minimum viable validation plan.

Create a decision memo draft with these sections:

1) Recommendation: choose one of [Option A/B/C] and state confidence (low/med/high)
2) Decision drivers: the 3 variables that matter most
3) What we know (grounded): only statements supported by the evidence pasted
4) What we assume: list assumptions with expected impact if wrong
5) Fast validation plan: 5 cheapest tests (landing page, cold outreach, pricing interview, etc.)
6) Kill criteria: what results would make us stop

Rules:
- Separate "grounded" vs "assumed" explicitly.
- No generic market fluff.

You're basically building a small "research-to-action" pipeline. If the output can't tell you what to do Monday morning, it's not market research.
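If you want that pipeline to be more than copy-paste between chat windows, it's a short script. A rough sketch using the OpenAI Python SDK (the model name, file path, and hypothesis are placeholders; the prompts are compressed versions of the templates above):

# Rough sketch of the research-to-action chain with the OpenAI Python SDK.
# MODEL, the file path, and the hypothesis are placeholders, not a fixed recipe.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o"  # swap in whatever model you actually use

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

evidence = open("interview_notes.md").read()

# Step 3: prior -> evidence -> posterior for one hypothesis
posterior = ask(
    "We are evaluating this hypothesis. H: 'Small legal firms will pay $50/seat.'\n"
    "My prior probability: 55.\n"
    f'Evidence provided (delimited):\n"""\n{evidence}\n"""\n'
    "Quote only the relevant evidence, list arguments for and against, "
    "give an updated probability, and do not invent new facts."
)

# Step 4: decision memo grounded only in that analysis
memo = ask(
    "Using only the analysis below, draft a decision memo with: recommendation and confidence, "
    "decision drivers, what we know (grounded), what we assume, fast validation plan, kill criteria.\n\n"
    + posterior
)
print(memo)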


Step 5: Use synthetic personas carefully (good for exploration, bad for proof)

People love prompting: "Pretend you're a CFO at a 200-person company. Would you buy this?"

It can be useful. But treat it like improv, not measurement.

The validation paper makes the key point I wish more founders internalized: heuristic prompt-tweaks can make LLM simulations look human-like, but that doesn't give you the statistical guarantees you'd want for confirmatory research [3]. In other words: personas are great for generating objections you forgot, not for estimating demand.

Here's a safer persona prompt:

Act as 6 distinct buyer personas for [product]. Your job is NOT to answer whether you'd buy.
Your job is to generate objections and buying criteria I should test in real interviews.

For each persona:
- context (role, company size, current tool stack)
- top 3 "must-have" requirements
- top 3 objections / reasons to ignore this product
- what proof would change your mind (evidence, security review, ROI calc, reference, etc.)

Rules:
- Avoid invented statistics or "market facts."
- Keep each persona realistic and internally consistent.

Now you're using the model as an adversarial brainstorming partner, not a fake survey panel.


Practical example: generating survey questions that don't poison your data

One of the most concrete "prompt as market research tool" moves is using the LLM to draft survey/interview instruments.

A community example I've seen shared is a "Market Research Question Generator" prompt that forces the model to produce a mix of demographic, behavioral, intent, open-ended, and Likert questions [4]. That structure is useful, because it stops you from writing five versions of the same leading question.

I'd adapt it like this:

You are a survey methodologist helping me design unbiased questions.

Research goal:
- [e.g., "Understand why trial users of our API churn in the first 7 days"]

Audience:
- [e.g., "backend engineers at startups, 5-200 employees"]

Generate:
- 2 screening questions
- 8 core questions split into: behavioral (past), situational (recent), alternatives/substitutes, willingness-to-pay, and open-ended
- 2 attention/quality checks

Rules:
- Avoid leading phrasing and double-barreled questions.
- Each question must state what decision it informs.
- Use simple language.

Then I run a second prompt that red-teams the survey for bias and ambiguity.
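A version of that second pass I'd start from (the wording is mine; adapt it to your audience and survey tool):

You are a survey methodologist red-teaming the questions below before they go out.

For each question, flag:
- leading or loaded phrasing
- double-barreled wording (two questions hiding in one)
- ambiguous terms the audience may read differently
- answer options that don't cover the realistic range

For every flagged question, propose a rewrite. Then list any questions that should be cut
because they don't inform the stated decision.

Questions:
[paste the draft survey here]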


Closing thought: prompts don't replace research, they replace blank pages

If you take one thing from this: use prompts to create structure, not certainty.

The best prompting workflows for market research behave like a loop. You define a decision. You state a prior. You gather evidence. You update. You design the next test. Repeat.

If your prompts don't force the model to label assumptions, quote evidence, and admit uncertainty, you're not doing "AI market research." You're doing AI storytelling.


References

Documentation & Research

  1. Forecasting Future Language: Context Design for Mention Markets - arXiv cs.CL
    https://arxiv.org/abs/2602.21229

  2. Evaluating LLM-persona Generated Distributions for Decision-making - arXiv cs.LG
    https://arxiv.org/abs/2602.06357

  3. This human study did not involve human subjects: Validating LLM simulations as behavioral evidence - arXiv cs.AI
    https://arxiv.org/abs/2602.15785

Community Examples

  1. The "Market Research Question Generator" prompt: Instantly creates 5 structured questions for any survey - r/PromptEngineering
    https://www.reddit.com/r/PromptEngineering/comments/1qpp77m/the_market_research_question_generator_prompt/