

Prompt Tips•Mar 12, 2026•10 min

Copilot Cowork + Claude in Microsoft 365 (2026): How I Prompt AI Inside Excel, PowerPoint, and Word

A practical prompting playbook for the new agentic Microsoft 365 workflow: Excel analysis, Word drafting, and PowerPoint building with Copilot Cowork + Claude.


The weird thing about prompting in 2026 is that you're often not prompting a chatbot anymore. You're briefing an agent that can actually change your files.

That's the mindset shift behind "Copilot Cowork + Claude" in Microsoft 365: you're no longer asking Excel to "explain this pivot table." You're asking for an outcome, then supervising a plan that touches Word, PowerPoint, and Excel in one run. The closer your prompt looks like a solid work ticket, the better your results.

And if you're thinking "fine, I'll just type more instructions," that's where people still get burned. Long prompts don't fix ambiguous prompts. Agents don't need more words; they need better constraints, a smaller action surface, and explicit check-ins.


The 2026 mental model: you're orchestrating an iterative tool user

A lot of old prompt advice assumed a single-shot response: you ask, it answers, you paste the result somewhere. The newer agentic approach is fundamentally iterative: retrieve some evidence, take an action, verify, repeat.

This isn't just hype. In spreadsheet workflows, the best-performing systems increasingly rely on an iterative tool-calling loop plus planning and decomposition (planner → subtasks → specialized tools), because a single "retrieve once, answer once" pass tends to miss cross-sheet dependencies and key context in large workbooks [1]. What I took from this research is practical: if you want reliable Excel outcomes, you should prompt for a loop (plan, execute, validate), not for a one-and-done explanation.
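The plan-execute-validate loop described above can be sketched in a few lines. This is a minimal illustration of the pattern, not a real Copilot or Claude API; `execute` and `validate` are hypothetical callables you would wire to your own agent interface.

```python
# Minimal sketch of the plan -> execute -> validate loop. The execute/validate
# callables are hypothetical stand-ins, not a real Copilot or Claude API.

def run_agent_loop(subtasks, execute, validate, max_retries=2):
    """Run each subtask, re-running it until its validation check passes."""
    results = []
    for task in subtasks:
        for attempt in range(max_retries + 1):
            result = execute(task)
            if validate(task, result):
                results.append(result)
                break
        else:
            # Explicit check-in point: stop and ask the human instead of
            # silently continuing with unverified output.
            raise RuntimeError(f"Validation failed for {task!r}; ask the user")
    return results

# Toy usage: "execute" doubles a number, "validate" checks the result is even.
out = run_agent_loop([1, 2, 3], lambda t: t * 2, lambda t, r: r % 2 == 0)
```

The point of the structure is the `else` branch: when validation keeps failing, the loop escalates to a human instead of shipping unverified output, which is the behavior you want your prompts to encourage.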

There's a second, less obvious point: humans don't collaborate with agents in one fixed way. People switch between hands-off supervision and hands-on takeover depending on risk and ambiguity. Research on human intervention patterns in agents shows distinct interaction styles (hands-off, hands-on, collaborative, takeover) and argues agents should anticipate when humans need to step in, especially around critical decision points [2]. In Microsoft 365 terms, this maps nicely to "draft first, ask before applying" as a default prompting posture.

So my core take for 2026 M365 prompting is: write prompts that (1) encourage iterative evidence gathering, and (2) specify when you want the agent to stop and ask you.


The "Cowork + Claude" split: delegate execution, keep judgment

Ethan Mollick's framing helped me explain what's happening to teams: the same underlying model can behave wildly differently depending on the harness, meaning the tool wrapper that grants actions, file access, and multi-step execution [3]. Claude in a chat window is not Claude operating through a work harness inside Excel/PowerPoint, and neither is the same as an agentic desktop runner.

That matters because Copilot Cowork-style workflows are harness-heavy. So your prompts should assume the system can: open files, read tables, generate slides, draft a memo, and update artifacts. Your prompt job is to allocate responsibility.

Here's the split that's worked best for me:

You delegate mechanical work. You keep judgment calls. Mechanical work is "create a variance table and chart." Judgment is "which narrative is safest to present to the board." If you don't separate those in the prompt, the agent will happily make narrative leaps, especially in PowerPoint.


How I prompt inside Excel in 2026 (without getting garbage)

The research on spreadsheet agents keeps landing on the same operational truth: spreadsheets are too big, too messy, and too cross-linked for naive full-context injection. Iterative retrieval and stepwise tool use wins because it re-queries until evidence is sufficient, and it keeps an audit trail of tool calls [1]. You can mirror that in your prompt, even if you never see the underlying tool trace.

I like a three-part Excel brief: context, tasks, validation.

You are my Excel analyst working in this workbook.

Context:
- Sheet(s) to use: "Actuals_2025", "Budget_2025", "Summary"
- Metric definition: Gross Margin = (Revenue - COGS) / Revenue
- Time window: Q4 2025 only
- Grain: by Region and Product Line

Tasks:
1) Create a Q4 variance table (Actual vs Budget) for Revenue, COGS, and GM%.
2) Identify top 5 drivers of GM% change (by absolute impact).
3) Create a chart suitable for an exec slide (clean labels, no clutter).

Validation (stop and ask if any check fails):
- Confirm totals tie to "Summary" sheet within ±0.1%.
- Flag missing or duplicated rows.
- Before writing formulas or formatting, show me the planned cell ranges and formula approach.

The catch: the "show me planned ranges first" line feels slow, but it's the cheapest way to prevent silent spreadsheet corruption. It also forces the agent into a collaborative mode rather than "just do stuff."

When results look off, I don't say "that's wrong." I say "re-run step 1 but only for Region = EMEA; list the exact rows used." This plays into iterative retrieval patterns that outperform one-pass approaches on complex workbooks [1].
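The "totals tie within ±0.1%" check from the validation block is concrete enough to express in code. Here's a sketch using plain dicts as stand-ins for the variance table and Summary sheet; the field names are my own invention, not anything Excel or Copilot exposes.

```python
# Sketch of the "totals tie within +/-0.1%" validation check, with plain dicts
# standing in for the workbook's variance table and Summary sheet total.

def totals_tie(variance_rows, summary_total, tolerance=0.001):
    """True if the variance table's revenue total is within `tolerance`
    (as a fraction, 0.001 = 0.1%) of the Summary sheet's total."""
    table_total = sum(row["revenue"] for row in variance_rows)
    if summary_total == 0:
        return table_total == 0
    return abs(table_total - summary_total) / abs(summary_total) <= tolerance

rows = [
    {"region": "EMEA", "revenue": 400.0},
    {"region": "AMER", "revenue": 600.2},
]
assert totals_tie(rows, 1000.0)      # 1000.2 vs 1000.0 is 0.02% off: passes
assert not totals_tie(rows, 1100.0)  # roughly 9% off: fails the check
```

If the agent reports that this kind of check failed, that's your cue to take over rather than let it "fix" the discrepancy on its own.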


How I prompt inside Word: write the doc, but make claims falsifiable

Word is where hallucination gets expensive, because prose sounds confident even when it's wrong.

So I prompt Word like a spec: audience, voice, structure, and sourcing boundaries. I also make the agent separate "facts from files" vs "assumptions."

Draft a 1-page executive memo in Word.

Audience: CFO + FP&A leads
Tone: crisp, non-salesy, no hype
Structure: headline, 3 key takeaways, then short sections with evidence

Constraints:
- Use only facts that can be traced to the Excel workbook tables we just produced.
- If you infer anything, label it explicitly as an assumption.
- Add a "Questions / Risks" section with 5 bullets that I should verify.

Before you finalize, ask me 3 clarification questions about what we want to emphasize.

That last line is me intentionally inserting a "human intervention point," which aligns with the broader finding that agents and humans collaborate better when intervention timing is handled deliberately, not randomly [2].
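The "label every inference as an assumption" constraint is also checkable after the fact. Below is a small sketch that scans a drafted memo and flags bullets missing a `[FACT]` or `[ASSUMPTION]` tag; the tag convention is my own, not a Copilot feature.

```python
# Sketch of enforcing the "label assumptions explicitly" constraint: flag any
# memo bullet that carries neither a [FACT] nor an [ASSUMPTION] tag.
# The tag convention here is invented for illustration, not a Copilot feature.

def untagged_claims(memo_lines):
    """Return bullets that are missing a [FACT]/[ASSUMPTION] label."""
    bullets = [l for l in memo_lines if l.strip().startswith("-")]
    return [l for l in bullets if "[FACT]" not in l and "[ASSUMPTION]" not in l]

memo = [
    "- [FACT] Q4 EMEA revenue was 4% under budget.",
    "- [ASSUMPTION] The shortfall is driven by late enterprise renewals.",
    "- Margins should recover in Q1.",
]
flagged = untagged_claims(memo)  # the untagged third bullet needs review
```

A review pass like this turns "sounds confident" prose back into claims you can verify line by line.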


How I prompt inside PowerPoint: don't ask for slides, ask for a slide system

Slide decks fail when the agent optimizes for "looks like a deck" instead of "supports a decision."

So I prompt PowerPoint in two passes. First: a slide map. Second: build.

We are creating a 6-slide deck for a monthly business review.

First, propose a slide-by-slide outline (titles + 1 sentence objective per slide),
and tell me what data from the Excel file each slide will use.

After I approve the outline, build the slides with:
- one chart per slide max
- consistent labeling (units, time window)
- speaker notes that explain what to say in 20-30 seconds per slide
- a final slide with 3 decisions needed + recommended next actions

This "outline first" approach is basically planner/executor in human terms: decompose, then execute. Spreadsheet research shows decomposition and iterative loops materially improve long-horizon task success [1], and you can borrow that idea even in a slide workflow.
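The two-pass flow reduces to a simple control structure: propose, get approval, only then build. In this sketch, `ask_model` and `approve` are hypothetical stand-ins for whatever agent interface and review step you actually use.

```python
# The two-pass slide workflow as code: pass 1 proposes an outline, a human
# approves it, and pass 2 builds only after approval. `ask_model` and
# `approve` are hypothetical stand-ins, not a real Copilot API.

def build_deck(brief, ask_model, approve):
    outline = ask_model(f"Propose a slide-by-slide outline for: {brief}")
    if not approve(outline):
        return None  # check-in failed: never build unapproved slides
    return ask_model(f"Build the slides for this approved outline: {outline}")

# Toy usage with canned responses standing in for a real model call.
deck = build_deck(
    "monthly business review",
    ask_model=lambda prompt: f"[response to: {prompt[:20]}...]",
    approve=lambda outline: True,
)
```

The `return None` branch is the whole trick: structurally, the build step cannot run without the human gate, which is exactly the check-in behavior you want to prompt for.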


Practical prompts people are actually using (community patterns worth stealing)

Community prompt lists aren't authoritative, but they're great for seeing what "works in the wild." One Reddit thread compiling Copilot prompts is basically a catalog of reusable prompt stems (rewrite, summarize, extract, compare) that map nicely into Word and PowerPoint workflows [5]. I don't copy these verbatim, but I do steal the pattern: keep the action clear ("summarize," "compare"), add constraints (length, tone), and paste the source.

And there's another community signal worth noting: people are noticing Copilot Cowork as "describe what you want, it plans, it executes, and it checks in before applying final changes" [4]. That check-in behavior is exactly what you should reinforce in your prompts when files are on the line.


Closing thought: prompt like a manager, not a typist

If you try to prompt Copilot Cowork + Claude like it's 2023 ChatGPT, you'll get 2023 results: plausible text, weak grounding, and lots of manual cleanup.

If you prompt it like a manager, with a clear outcome, scoped inputs, explicit validations, and planned check-ins, you get what these agentic systems are actually good at: multi-step work across Excel, Word, and PowerPoint, with you stepping in at the moments that matter.

Try this tomorrow: take one real spreadsheet task and rewrite your prompt to include "before you apply changes, show me the plan and ask for approval." That one line upgrades your workflow from "AI helper" to "AI coworker."


References

  1. Beyond Rows to Reasoning: Agentic Retrieval for Multimodal Spreadsheet Understanding and Editing - arXiv (cs.CL)
    https://arxiv.org/abs/2603.06503

  2. Modeling Distinct Human Interaction in Web Agents - arXiv (cs.CL)
    https://arxiv.org/abs/2602.17588

  3. A Guide to Which AI to Use in the Agentic Era - One Useful Thing
    https://www.oneusefulthing.org/p/a-guide-to-which-ai-to-use-in-the

  4. Microsoft just launched an AI that does your office work for you - and it's built on Anthropic's Claude - r/ChatGPT
    https://www.reddit.com/r/ChatGPT/comments/1rp78zu/microsoft_just_launched_an_ai_that_does_your/

  5. I compiled 50 Microsoft Copilot prompts that work with ANY version - no M365 integration needed - r/PromptEngineering
    https://www.reddit.com/r/PromptEngineering/comments/1r0s8v1/i_compiled_50_microsoft_copilot_prompts_that_work/

Ilia Ilinskii

Founder of Rephrase-it. Building tools to help humans communicate with AI.

