"Write me a formula" is the prompting equivalent of saying "make it good" to an engineer.
Sometimes you get lucky. Most of the time you get something that almost works, breaks on edge cases, and turns into a back-and-forth spiral of screenshots, copy/paste, and "no, not like that".
Here's the thing I wish more people internalized: Sheets and Notion aren't "just text boxes". They're tiny programming environments with their own constraints, types, and failure modes. If you want better output, you need to prompt like you're writing a spec, not a wish.
The good news is we already have strong evidence (and a bunch of practical patterns) that structure beats vibes. Research on structured prompting for planning shows that giving the model an explicit decomposition framework can radically improve correctness on tasks that demand step-by-step validity, not just plausible prose [3]. And agent training research keeps landing on the same meta-lesson: separating a high-level plan from low-level execution reduces "knowing-doing" gaps and cuts unproductive loops [4]. That same idea maps directly onto "business tooling" prompts, Sheets and Notion included.
So let's go beyond "write me a formula" and start prompting for systems.
The real job: stop the model from inventing your spreadsheet
When people say "AI messed up my formula," it's usually not because the model can't write formulas. It's because it made silent assumptions about your schema.
In spreadsheets and Notion databases, schema is everything: column names, types, locale, date formats, blank handling, whether you want array outputs, whether headers exist, whether you need performance for 10k rows, and so on. If you don't declare those constraints, the model will pick defaults. And those defaults are rarely your defaults.
A prompt that works reliably has three layers.
First, you describe the data interface. Not your business story, but your columns/properties and their types. Second, you declare the transformation: what the output should compute, including edge cases. Third, you declare the output contract: the exact formula(s), where they go, and how you'll validate them.
This "explicit contract" approach is basically the same reason TMK-style prompting helps models plan: you're steering the model away from fuzzy language-mode and toward a more symbolic, procedure-following mode [3]. And yes, it matters even when the end product is a single formula.
Sheets: prompt for a mini design doc, not a single cell
Google's ecosystem is moving toward "AI as a callable function" in data tooling. BigQuery's native AI functions are a good signal of direction: AI.generate, AI.embed, AI.similarity; that is, models invoked as structured operations embedded into data workflows [1]. Even if you're not using BigQuery, the mental model is useful: treat the model like a function that needs typed inputs and a deterministic output format.
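That mental model can be sketched in a few lines of code: wrap the model call in a function that enforces a typed, deterministic output contract, and fail loudly when the contract is broken. Everything below is illustrative; `as_function` and the stub model are hypothetical, not a real BigQuery or Sheets API.

```python
from typing import Callable

# The output contract: the wrapped "model" may only return these labels.
ALLOWED = {"OK", "MISSING", "INVALID"}

def as_function(model: Callable[[str], str]) -> Callable[[str], str]:
    """Wrap a free-text model call so it behaves like a typed operation:
    constrained output out, or a loud error instead of letting a
    hallucinated value flow silently into your data."""
    def call(prompt: str) -> str:
        out = model(prompt).strip()
        if out not in ALLOWED:
            raise ValueError(f"model broke the output contract: {out!r}")
        return out
    return call

# Stub model so the sketch runs without an API key.
labeler = as_function(lambda prompt: "OK")
print(labeler("Qty=3, Net Revenue=120"))  # OK
```

The design point is the wrapper, not the stub: you decide the allowed outputs up front, and anything outside them is an error rather than data.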
So in Sheets, I prompt in two passes: a spec pass and a formula pass.
In the spec pass, I force the model to restate assumptions and propose the simplest approach (single formula vs helper columns vs Apps Script). In the formula pass, I force exact output and tests.
Here's a prompt template that consistently upgrades results:
You are a spreadsheet engineer. Ask up to 5 clarifying questions ONLY if required.
If assumptions are necessary, list them explicitly under "Assumptions".
Context:
- Platform: Google Sheets
- Locale: en-US (comma separators)
- Data range: A1:G (headers in row 1)
- Row count: up to 20,000
Schema:
A: Order ID (text)
B: Date (date)
C: Customer (text)
D: Region (text)
E: SKU (text)
F: Qty (number, may be blank)
G: Net Revenue (number, may be blank)
Task:
Create a single formula for H2 that outputs a "Data Quality" label:
- "OK" if Qty and Net Revenue are both numbers and Qty > 0 and Net Revenue >= 0
- "MISSING" if either Qty or Net Revenue is blank
- "INVALID" if Qty is not a number OR Net Revenue is not a number OR Qty <= 0 OR Net Revenue < 0
Constraints:
- Must fill down automatically for all rows (use ARRAYFORMULA).
- Must not flag the header row.
- Treat whitespace-only cells as blank.
- Prefer readability over cleverness.
Output:
1) Final formula only
2) 5 test cases as example rows and expected label
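Before trusting whatever formula comes back, it helps to have a reference you control. This is not the Sheets formula itself, just a small Python sketch of the same decision tree (my reading of the rules above, with MISSING taking precedence over INVALID for blank cells) that you can use to generate expected labels for the test rows.

```python
def data_quality(qty, net_revenue):
    """Reference implementation of the Data Quality rules.
    Inputs mimic raw cell values: a number, a string, or None."""
    def is_blank(v):
        # Treat empty and whitespace-only cells as blank, per the spec.
        return v is None or (isinstance(v, str) and v.strip() == "")

    if is_blank(qty) or is_blank(net_revenue):
        return "MISSING"
    if not isinstance(qty, (int, float)) or not isinstance(net_revenue, (int, float)):
        return "INVALID"  # text-as-number and other non-numeric junk
    if qty > 0 and net_revenue >= 0:
        return "OK"
    return "INVALID"

print(data_quality(3, 120.0))    # OK
print(data_quality("  ", 120.0)) # MISSING (whitespace-only cell)
print(data_quality("3", 120.0))  # INVALID (text that looks numeric)
print(data_quality(0, 120.0))    # INVALID (Qty must be > 0)
```

Run the same rows through the AI-generated formula and diff the labels; any mismatch is either a bug in the formula or an ambiguity in your spec.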
Why this works is boring, which is what we want. It pins down locale, range shape, and blank rules, then demands an array-capable formula and tests. You're not asking for "a formula". You're asking for an implementation that satisfies a contract.
And for performance: models love nesting heavy functions everywhere. In large Sheets, prompt for the "cheap checks first" ordering (blanks, then type checks, then comparisons). You'll get formulas that recalc faster and are easier to debug.
Notion AI: prompt for property architecture, not "a formula"
Notion formulas are only one piece of a database system. The bigger leverage is in prompting for the design: what properties exist, what types, what relations and rollups you need, and where computation should live.
A practical example: a lot of "Notion AI formula" requests are really "build me a workflow". Community prompt packs lean into that by asking the model to generate not just formulas, but full systems: properties, relations, rollups, dashboards, safeguards, and explanations [5]. The pack itself is over-the-top, but the core idea is correct: treat the prompt as a system spec, not a one-liner request.
Here's a Notion-oriented prompt that's actually usable:
You are a Notion database architect.
Goal:
Design a Notion database system for content production with reliable status tracking and deadlines.
Current databases:
- "Content" (empty, to be designed)
Requirements:
- Track: Idea → Drafting → Review → Scheduled → Published
- Each item has: Title, Owner, Channel, Publish Date, Status, Priority
- Need a computed "Next Action" property that changes based on Status and missing fields.
- Need a computed "Overdue" flag: true if Status is not Published and Publish Date is before today.
- Avoid null/type errors. Handle missing dates safely.
Output format:
1) Database properties list with types (exact Notion property types)
2) Any relations/rollups you recommend (and why)
3) Notion formulas for:
a) Next Action (as a formula property)
b) Overdue (as a formula property)
4) 6 edge cases and what the formulas should output
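As with the Sheets example, a tiny reference implementation makes the edge cases concrete before you evaluate the AI's Notion formula. This Python sketch covers only the "Overdue" flag, under one assumption I'm making explicit: a missing Publish Date means "not overdue" rather than an error, which is how I read "handle missing dates safely".

```python
from datetime import date

def overdue(status, publish_date, today=None):
    """Reference logic for the "Overdue" flag: flag items that have a
    publish date, are not Published, and whose date has passed.
    publish_date may be None (missing date in Notion)."""
    today = today or date.today()
    if publish_date is None:
        return False  # assumption: missing date is not overdue
    return status != "Published" and publish_date < today

fixed = date(2025, 6, 1)  # pin "today" so the examples are deterministic
print(overdue("Drafting", date(2025, 5, 20), today=fixed))   # True
print(overdue("Published", date(2025, 5, 20), today=fixed))  # False
print(overdue("Review", None, today=fixed))                  # False
```

If the formula the model writes disagrees with any of these rows, you've found either a null-handling bug or a requirement you never actually stated.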
The trick: you're explicitly asking for safe handling and edge cases. You're making the model think in "database constraints" instead of "cute productivity template".
Also, I always ask for edge cases because Notion formula errors are where systems go to die. If the AI can't describe edge cases, it's not done designing.
Practical examples: two "beyond formula" workflows that compound
Most teams stop at generating a formula. The bigger wins come from chaining: spec → generate → verify → document. That's the same "draft then execute" concept you see in agent training research: create a high-level plan, then execute it, reducing unproductive loops [4].
Here are two workflows I use.
First is "Sheets audit mode." You prompt the model to produce formulas plus a validation checklist. That turns the AI into a QA buddy.
Act as a spreadsheet QA reviewer.
Given:
- A data-cleaning formula (I will paste it)
- A description of the sheet schema
Task:
1) Identify likely failure modes (blanks, text-as-number, extra spaces, locale issues)
2) Propose 3 lightweight validation formulas I can add to catch errors
3) Provide a minimal set of test rows (10) that maximizes edge-case coverage
Output: headings + formulas in code blocks.
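You can also pre-build the edge-case rows yourself instead of waiting for the model to invent them. A minimal sketch, assuming the Qty/Net Revenue schema from earlier: cross the values that most often break cleaning formulas and sample from the grid.

```python
import itertools

# Values that commonly break cleaning formulas: blanks, whitespace-only
# cells, text-as-number, zeros, and negatives.
QTY_CASES = [3, 0, -1, None, "   ", "3"]
REV_CASES = [120.0, 0.0, -5.0, None]

rows = [{"Qty": q, "Net Revenue": r}
        for q, r in itertools.product(QTY_CASES, REV_CASES)]
print(len(rows))  # 24 combinations; pick ~10 for a compact test set
```

Paste a sample of these rows into the sheet alongside the formula under review, and the QA prompt above has something concrete to reason about.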
Second is "Notion migration assistant." When you're changing a database schema, prompts that output both the new architecture and a migration plan save hours.
You are migrating a Notion database schema.
Input:
- Old properties list (I will paste)
- New requirements (I will paste)
Task:
1) Propose a new properties list (types, naming conventions)
2) Identify which old properties map to which new ones
3) Provide a step-by-step migration sequence that avoids breaking views/filters
4) Include a rollback plan
Output: prose steps + mapping table.
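The mapping table in step 2 is worth sanity-checking mechanically before you touch the database. A minimal sketch with hypothetical property names: every old property must be either mapped to a new one or explicitly retired, so nothing silently disappears mid-migration.

```python
# Hypothetical old → new property mapping for a Notion schema migration.
OLD_PROPS = {"Name", "Due", "State", "Assignee"}
MAPPING = {"Name": "Title", "Due": "Publish Date", "State": "Status"}
RETIRED = {"Assignee"}  # dropped on purpose, with sign-off

# Anything neither mapped nor retired is a property you forgot about.
unaccounted = OLD_PROPS - set(MAPPING) - RETIRED
assert not unaccounted, f"unmapped properties: {sorted(unaccounted)}"
print("mapping covers all old properties")
```

It is a five-line check, but it turns "I think we covered everything" into a pass/fail answer, which is exactly the contract mindset this whole approach is about.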
This is where "prompt engineering" stops being a trick and becomes an operating system.
Closing thought: ask for contracts, not creativity
If you want the AI to stop hallucinating your spreadsheet, you have to stop asking it to read your mind.
Give it a schema. Give it constraints. Demand tests. Demand an output contract.
Once you do, Sheets and Notion AI stop feeling like novelty features and start acting like what they really are: fast junior engineers who work best when the ticket is written properly.
References
Documentation & Research
1. BigQuery AI supports Gemini 3.0, simplified embedding generation and new similarity function - Google Cloud AI Blog. https://cloud.google.com/blog/products/data-analytics/new-bigquery-gen-ai-functions-for-better-data-analysis/
2. Introducing Gemini 3.1 Pro on Google Cloud - Google Cloud AI Blog. https://cloud.google.com/blog/products/ai-machine-learning/gemini-3-1-pro-on-gemini-cli-gemini-enterprise-and-vertex-ai/
3. Knowledge Model Prompting Increases LLM Performance on Planning Tasks - arXiv. https://arxiv.org/abs/2602.03900
4. PaperGuide: Making Small Language-Model Paper-Reading Agents More Efficient - arXiv. https://arxiv.org/abs/2601.12988
Community Examples
5. I made a prompt pack that generates Notion formulas, databases, and full systems - r/PromptEngineering. https://www.reddit.com/r/PromptEngineering/comments/1raynmk/i_made_a_prompt_pack_that_generates_notion/