
Prompt Tips•Mar 10, 2026•9 min

Prompt Engineering for Telegram Bots: How to Make Chat Automations Reliable (Not Random)

A practical, engineering-first way to prompt LLMs inside Telegram bots: routing, tool calls, privacy, cost control, and multi-turn strategy.


The hard part about building an AI Telegram bot isn't Telegram. It's the moment you ship a "helpful" chat automation into a real group or support channel and realize your prompt is basically a production runtime.

Users talk over each other. They paste screenshots. They ask for actions ("refund this", "schedule this", "ban this"), then change their mind. They overshare. And your bot has to decide something that classic prompt recipes rarely address: when to respond, when to shut up, and when to escalate to tools.

What's interesting is that the most useful recent research on chat assistants doesn't start with "write a better system prompt." It starts with architecture: separating intervention decisions from response generation, minimizing unnecessary model calls, and sanitizing messages before they ever hit a big model [1]. That maps perfectly onto Telegram bots, because Telegram is naturally event-driven and multi-user, and your bot is always one misfire away from being annoying.

Let's talk about prompt engineering for Telegram bots as if it's an engineering system, not a writing exercise.


The mental model: your bot is a router, not a chatterbox

If you only take one idea from this: don't use one monolithic prompt to do everything.

GroupGPT formalizes the group assistant problem as choosing "what to say, when to intervene, and who to respond to," then implements it as multiple components: an intervention judge, a privacy transcriber, a multimodal processor, and a final respondent [1]. Even if you're not training models, you can steal the pattern and implement it with prompts and small heuristics.

In Telegram terms, every incoming update should first be classified into a small set of intents that decide the rest of the prompt pipeline. My default routing buckets look like this in practice: "silent", "simple reply", "needs clarification", "tool/action required", "moderation/safety", and "handoff."

The trick is to make routing cheap and deterministic. GroupGPT's point is that invoking the big model for every message is both expensive and noisy; it's better to run a lightweight judge that says "stay silent" most of the time [1]. In Telegram groups, "stay silent" is a feature, not a failure mode.

This changes how you write prompts. Instead of one mega prompt, you write a prompt for a router and a prompt for a responder, and you only call the responder when the router approves.
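As a sketch, the gate can look like this (assuming hypothetical `judge`, `respond`, and `dispatch` callables that wrap your actual model and runtime calls):

```python
import json

def handle_update(message, context, judge, respond, dispatch):
    """Router-then-responder gate: consult the cheap routing judge
    first, and pay for the expensive responder only on approval."""
    verdict = json.loads(judge(context, message))
    decision = verdict.get("decision", "silent")
    if decision == "silent":
        return None                       # silence is a feature in groups
    if decision in ("reply", "clarify"):
        return respond(context, message)  # big-model call happens only here
    return dispatch(verdict)              # tool / privacy_risk / handoff
```

The point of the shape is that the responder is unreachable unless the judge approves, so "stay silent" costs you one small call instead of one big one.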


Prompt pattern #1: "Intervention first" prompts for group chats

Here's a router prompt style that borrows directly from the "intervention judge" framing in GroupGPT [1]. You can run this with a smaller model, or even the same model with low max tokens.

SYSTEM:
You are the Routing Judge for a Telegram bot in a busy chat.
Decide whether the bot should respond now.
Output ONLY valid JSON.

Rules:
- Prefer staying silent unless you can add clear value.
- If the user asks for an action (schedule, create, lookup, post, ban), choose "tool".
- If the user's message is ambiguous, choose "clarify".
- If the message is not addressed to the bot and doesn't need correction/support, choose "silent".
- If private data appears (emails, phone numbers, addresses, tokens), flag "privacy_risk".

JSON schema:
{
  "decision": "silent" | "reply" | "clarify" | "tool" | "privacy_risk" | "handoff",
  "reason": "short",
  "target_user": "optional @username",
  "confidence": 0.0-1.0
}

USER:
CHAT_CONTEXT (last 12 messages):
...
NEW_MESSAGE:
{message text}

This is "prompt engineering", but it's really control engineering. It gives you a stable gate so the rest of your bot doesn't thrash.

And it aligns with research: intervention timing is its own problem, and treating it separately reduces token usage and improves reliability [1].
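Because the judge's output gates everything downstream, it's worth parsing it defensively. A fail-closed sketch: any malformed or out-of-schema verdict collapses to "silent" instead of waking the bot.

```python
import json

ALLOWED_DECISIONS = {"silent", "reply", "clarify", "tool", "privacy_risk", "handoff"}

def parse_verdict(raw: str) -> dict:
    """Validate the routing judge's JSON and fail closed: any defect
    in the output becomes a 'silent' verdict, never a reply."""
    try:
        verdict = json.loads(raw)
        if (isinstance(verdict, dict)
                and verdict.get("decision") in ALLOWED_DECISIONS):
            return verdict
    except (ValueError, TypeError):
        pass
    return {"decision": "silent", "reason": "unparseable verdict", "confidence": 0.0}
```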


Prompt pattern #2: privacy and sanitization as a first-class step

Telegram users overshare. In groups, they overshare louder.

GroupGPT explicitly inserts a privacy transcriber that rewrites messages to remove PII before sending content to a cloud model [1]. That's not academic hand-wringing; it's a product requirement if you're piping chats into an LLM API.

So: don't just add "don't reveal secrets" to your system prompt. Add a sanitization step before the model sees the data. You can do this with a dedicated prompt (or a local regex + LLM hybrid if you want).

SYSTEM:
You are a Privacy Transcriber. Rewrite the message to remove sensitive details while preserving meaning.
Replace:
- emails -> [EMAIL]
- phone numbers -> [PHONE]
- addresses -> [ADDRESS]
- API keys/tokens/passwords -> [SECRET]
Output ONLY the rewritten message text.

USER:
Original:
{message}
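The regex half of that regex + LLM hybrid can be sketched like this. The patterns are illustrative, not exhaustive; fuzzier categories such as street addresses are exactly what the LLM half is for.

```python
import re

# Illustrative patterns only; real PII detection needs broader coverage.
PII_PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\+?\d[\d\s().-]{7,}\d"), "[PHONE]"),
    (re.compile(r"\b(?:sk|ghp|xoxb)-[A-Za-z0-9_-]{10,}"), "[SECRET]"),
]

def sanitize(text: str) -> str:
    """Cheap regex pass that runs before any message leaves your box."""
    for pattern, placeholder in PII_PATTERNS:
        text = pattern.sub(placeholder, text)
    return text
```

Run this first, then hand the residue to the transcriber prompt; the regexes catch the mechanical cases for free.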

This is one of those steps that feels optional until you have a real incident.


Prompt pattern #3: strategy vs execution (and why it matters for "automations")

A lot of Telegram bots fail in a subtle way: they give good answers but don't complete workflows. They get stuck in polite conversation instead of driving the task to done.

GOPO is research aimed at task-focused dialogue (think customer service), and its core claim is that long-horizon success improves when you decouple strategy planning from response execution [2]. In practice, that means you shouldn't ask the same model turn to both choose the plan and write the final user-facing message unless you're okay with the plan evaporating mid-generation.

For Telegram automations, the analog is: generate a structured "plan object" first, then generate the message constrained by that plan. Even without RL, you can do it with two prompts: a planner that outputs JSON, and a responder that must follow it.

SYSTEM:
You are the Planner. Produce a minimal plan for completing the user request inside a Telegram bot.
Output ONLY JSON:
{
  "goal": "...",
  "required_info": ["..."],
  "tools": [{"name":"...", "args":{}}],
  "user_questions": ["..."],
  "success_criteria": ["..."]
}

USER:
{sanitized message + context}

Then:

SYSTEM:
You are the Responder. You MUST follow the plan JSON exactly.
- Ask only the plan's user_questions, unless you can execute tools immediately.
- If tools are listed, request tool execution using the tool call format used by this system.
- Keep it short for Telegram.

PLAN_JSON:
{...}

USER:
Write the next message to send.

This sounds heavy, but it's the only way I've found to make bots feel like "automation" instead of "chat."
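Wired together (with hypothetical `planner` and `responder` callables wrapping the two prompts above), the pipeline looks roughly like this:

```python
import json

def plan_then_respond(message, planner, responder):
    """Two-prompt pipeline: the planner emits a JSON plan, and the
    responder is constrained by it. Tool execution is a runtime
    decision, not something we trust the responder to narrate."""
    plan = json.loads(planner(message))
    if plan.get("tools"):
        # Hand the structured tool list to the runtime before replying.
        return {"action": "run_tools", "plan": plan}
    text = responder(json.dumps(plan), message)
    return {"action": "send", "text": text, "plan": plan}
```

The design choice worth defending: the responder never sees the raw request without the plan, so the plan can't quietly evaporate mid-generation.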


Prompt pattern #4: tool calls under budget (Telegram bots can get expensive fast)

If your bot hits tools (search, CRM, calendars, ticketing), you're in "agent" territory. And agents have a cost problem: they retry, they explore, and they burn money.

INTENT tackles this as a formal budget-constrained tool use problem and shows something that will feel painfully familiar: prompt-only budget reminders don't reliably prevent overspending. You need an enforcement mechanism that blocks infeasible tool calls and feeds back "why" so the agent replans [3].

In a Telegram bot, you can implement a lightweight version: track remaining budget (tokens, tool calls, dollars), reject tool calls that exceed it, and return a synthetic observation like: "Tool rejected: budget remaining is X; you need a cheaper path or ask user to confirm paid lookup."

The prompt-side move is to explicitly teach the model that "tool calls are expensive" and that it should prefer cheaper alternatives first, but you should assume that's not enough. The system needs guardrails outside the model, because budget feasibility is a hard constraint, not a vibe [3].
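A minimal enforcement layer might look like this (a sketch; the per-call costs are illustrative stand-ins for real token or dollar accounting):

```python
class BudgetGuard:
    """Budget feasibility enforced outside the model: infeasible tool
    calls are blocked, and the rejection text goes back to the agent
    as a synthetic observation so it can replan."""

    def __init__(self, budget: float):
        self.remaining = budget

    def call(self, tool, cost: float, *args):
        if cost > self.remaining:
            return ("Tool rejected: budget remaining is "
                    f"{self.remaining:.2f} and this call costs {cost:.2f}. "
                    "Pick a cheaper path or ask the user to confirm a paid lookup.")
        self.remaining -= cost
        return tool(*args)
```

Feed the rejection string back into the conversation as if it were a tool result; that is the feedback loop that lets the model replan instead of silently failing.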


Practical examples (what people actually ship)

On the community side, you can see builders doing exactly what the research implies, even if they don't call it that. One r/MachineLearning build uses n8n as an orchestrator, routes Telegram inputs (voice/images/docs) into different processing steps, and includes a strong "tool directive" so the model doesn't pretend it executed actions [4]. That's basically a production-grade version of "separate routing + tool execution + response."

And the "tiny reusable routines" pattern that shows up in r/PromptEngineering is also relevant: you don't need one prompt for your whole bot; you need a small library of prompts for repeated flows like "reply helper," "meeting notes → actions," and "weekly plan" [5]. Telegram bots thrive on repeatability. The moment you standardize a flow, you can lock it behind a stable prompt and stop reinventing it every message.


Closing thought: treat prompts like code paths

When I'm building Telegram bots now, I don't think in terms of "a prompt." I think in terms of "a decision graph."

A router prompt decides the branch. A sanitizer prompt reduces risk. A planner prompt creates a structured intent. A responder prompt writes the Telegram-sized message. Tool calls are enforced by the runtime, not trusted to the model. And I expect the bot to be silent most of the time in groups.

That architecture is the difference between a bot that demos well and a bot that survives contact with actual users.


References

Documentation & Research

  1. GroupGPT: A Token-efficient and Privacy-preserving Agentic Framework for Multi-User Chat Assistant - arXiv cs.CL
    https://arxiv.org/abs/2603.01059

  2. Decoupling Strategy and Execution in Task-Focused Dialogue via Goal-Oriented Preference Optimization - arXiv cs.CL
    https://arxiv.org/abs/2602.15854

  3. Budget-Constrained Agentic Large Language Models: Intention-Based Planning for Costly Tool Use - arXiv cs.AI
    https://arxiv.org/abs/2602.11541

Community Examples

  4. Free Code Real-time voice-to-voice with your LLM & Telegram bot with 25+ tools (n8n) - r/MachineLearning
    https://www.reddit.com/r/MachineLearning/comments/1rk66ay/p_free_code_realtime_voicetovoice_with_your_llm/

  5. I got tired of doing the same 5 things every day… so I built these tiny ChatGPT routines - r/PromptEngineering
    https://www.reddit.com/r/PromptEngineering/comments/1qwfbje/i_got_tired_of_doing_the_same_5_things_every_day/

Ilia Ilinskii

Founder of Rephrase-it. Building tools to help humans communicate with AI.
