tutorials•April 8, 2026•7 min read

How to Prompt AI for IaC

Learn how to prompt AI for Terraform, Docker, and CI/CD with better context, constraints, and validation loops. See examples inside.


Most bad IaC prompts fail for a simple reason: we ask AI to guess the environment. That works for brainstorming. It breaks fast when the output is Terraform, a Dockerfile, or a deployment workflow that can actually ship or break production.

Key Takeaways

  • Good IaC prompts give the model repo context, runtime constraints, and a clear success condition.
  • Terraform, Docker, and CI/CD need different prompt shapes because they fail in different ways.
  • Research on environment setup shows execution beats surface-level correctness as a validation standard.[1]
  • The best prompts ask the model to propose code and explain assumptions, risks, and test commands.
  • Tools like Rephrase can help turn rough infra requests into structured prompts faster.

Why are IaC prompts harder than normal coding prompts?

Infrastructure prompts are harder because correctness is tied to execution context, not just syntax. A Terraform snippet can look clean and still violate least privilege. A Dockerfile can build and still be unusable in CI. Research on automated environment setup shows that installability, testability, and actual runnability are different levels of success.[1]
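To make "looks clean and still violates least privilege" concrete, here is a minimal Terraform sketch (the role reference and bucket ARN are hypothetical). Both blocks pass terraform validate; only the second one is safe to ship:

```hcl
# Looks clean and validates fine, but grants every S3 action on every bucket.
resource "aws_iam_role_policy" "uploads_too_broad" {
  name = "uploads-access"
  role = aws_iam_role.ecs_task.id
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Allow"
      Action   = "s3:*"
      Resource = "*"
    }]
  })
}

# Least-privilege version: only the actions the task needs, on one bucket.
resource "aws_iam_role_policy" "uploads_scoped" {
  name = "uploads-access"
  role = aws_iam_role.ecs_task.id
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Allow"
      Action   = ["s3:GetObject", "s3:PutObject"]
      Resource = "arn:aws:s3:::app-uploads/*"
    }]
  })
}
```

Syntax checks cannot tell these apart. Only an explicit security constraint in the prompt, or a reviewer, catches the first one.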

Here's what I noticed: most people prompt for IaC like this:

Write Terraform for an AWS ECS service with Docker and CI/CD.

That sounds reasonable. It is also far too open-ended.

The model now has to invent your cloud assumptions, naming scheme, security posture, repo layout, secrets strategy, state handling, and pipeline conventions. Even a very capable model will fill those gaps with plausible nonsense.

A better mental model is this: prompt IaC like you are writing an internal platform ticket. Give the model the same context a senior DevOps engineer would ask for before touching anything.

Research backs this up. Work on automated environment deployment found that stronger outcomes come from holistic repository understanding and iterative validation, not one-shot generation.[1] Another large-scale study on software agent environments shows that robust setup depends on explicit installation and test procedures, not vague task descriptions.[2]


How should you prompt AI for Terraform?

Good Terraform prompts define the target platform, the existing architecture, and the boundaries of change. The model performs better when you state what must stay unchanged, what modules already exist, and what policy constraints matter, because that reduces speculative design choices.[1][2]

When I prompt for Terraform, I try to include five things in prose: provider and region, current architecture, security or compliance constraints, output format, and validation expectations.

Here's a weak prompt versus a strong one:

Before:
"Create Terraform for S3 and IAM."

After:
"Generate Terraform for AWS in eu-west-1 to add a private S3 bucket for application uploads and the minimum IAM policy needed for an ECS task role to read and write objects. Follow our existing pattern: one main.tf, variables.tf, and outputs.tf. Do not create users, access keys, or public bucket settings. Add comments only where the policy logic is non-obvious. At the end, list assumptions and the terraform validate and terraform plan checks I should run."

The second version works because it narrows the blast radius. It also forces the model to expose assumptions instead of hiding them.
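Under instructions like that, a plausible slice of the response might look like this sketch (the resource and variable names are hypothetical, not the model's guaranteed output):

```hcl
# main.tf (sketch): private uploads bucket with all public access blocked.
resource "aws_s3_bucket" "uploads" {
  bucket = var.uploads_bucket_name
}

resource "aws_s3_bucket_public_access_block" "uploads" {
  bucket                  = aws_s3_bucket.uploads.id
  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}
```

A response in this shape is also easy to check: run terraform validate and terraform plan before anything is applied, exactly as the prompt requested.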

If you want review rather than generation, say so directly. For example:

Review this Terraform as a platform engineer. Find issues in security, state management, naming consistency, and module reuse. Prioritize findings by severity. For each issue, show the exact block to change and explain the operational impact if left unfixed.

That structure mirrors how real infra review happens. It also aligns with the idea from HerAgent that execution-ready work needs staged validation, not just surface comments.[1]

A practical community example says the same thing in plain language: technical prompts improve when you supply environment-specific constraints instead of generic requests.[4]


How should you prompt AI for Dockerfiles?

Docker prompts work best when they specify the runtime goal, build strategy, and constraints on size, caching, and security. In reproducible environments, small details like base image choice, non-root users, package managers, and test commands matter more than broad app descriptions.[2]

This is where a lot of prompts go wrong. People ask for "a Dockerfile for my app" and forget that a Dockerfile is really an execution contract.

A better prompt tells the model what the container must do:

Write a production-oriented Dockerfile for a Node.js API.
Requirements:
- Use a stable official base image
- Multi-stage build
- Final image must run as non-root
- Optimize for layer caching
- Install only production dependencies in final image
- App listens on port 3000
- Include the exact docker build and docker run commands for local verification
- If you make assumptions about package manager or build output folder, list them explicitly

That prompt quietly does something important. It asks for the artifact and the validation path. That matters because reproducibility research keeps pointing to the same lesson: a setup is not successful until it can actually run.[1][2]
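One plausible answer to that prompt, under its stated assumptions (npm as the package manager, compiled output in dist/, an entry point at dist/server.js — all hypothetical for your repo), is a sketch like this:

```dockerfile
# Build stage: full dependencies, compile the app.
FROM node:20-slim AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Final stage: production dependencies only, running as non-root.
FROM node:20-slim
WORKDIR /app
ENV NODE_ENV=production
COPY package*.json ./
RUN npm ci --omit=dev
COPY --from=build /app/dist ./dist
USER node
EXPOSE 3000
CMD ["node", "dist/server.js"]
```

Local verification is then two commands: docker build -t api . and docker run -p 3000:3000 api. Copying package*.json before the source is what makes the layer cache survive code-only changes.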

SWE-rebench V2 also highlights something I think prompt engineers should steal immediately: prefer documented commands from repo files and CI configs over model heuristics.[2] In practice, that means your prompt should say:

Use the existing package.json scripts and Docker conventions from this repository. Do not invent a new build process unless the current one is broken.

That one sentence saves a lot of cleanup.


How should you prompt AI for CI/CD pipelines?

CI/CD prompts should define triggers, environments, required checks, and failure behavior. Pipelines are not just YAML generation tasks; they encode release policy. The more explicit you are about stages, secrets, artifacts, and rollback expectations, the less likely the model is to create a pipeline that looks fine but is operationally wrong.[3]

The catch with CI/CD is that models love happy paths. Real pipelines need gates.

So instead of asking for "a GitHub Actions workflow," ask for the workflow plus the policy. For example:

Create a GitHub Actions CI workflow for a Terraform + Docker repo.
Requirements:
- Run on pull requests and pushes to main
- Separate jobs for terraform fmt/validate, Docker build, unit tests, and security scan
- Fail fast on formatting or validation errors
- Cache dependencies where useful
- Do not deploy on pull requests
- On main, build and push the Docker image only if all previous jobs pass
- Use placeholders for secrets and explain where each is needed
- Return one YAML file, then a short section called "Operational risks and assumptions"

I like this format because it forces the model to think in stages. That's also consistent with CI/CD research showing that pipelines become more useful when they measure and expose meaningful signals at each commit, rather than acting like black boxes.[3]
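A workflow answering that prompt might be shaped like the sketch below. The action versions are real (actions/checkout@v4, hashicorp/setup-terraform@v3), but the job names, the security scan step, and the registry push (shown as a placeholder echo) are assumptions:

```yaml
name: ci
on:
  pull_request:
  push:
    branches: [main]

jobs:
  terraform-checks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: hashicorp/setup-terraform@v3
      - run: terraform fmt -check -recursive   # fail fast on formatting
      - run: terraform init -backend=false && terraform validate

  docker-build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: docker build -t app:${{ github.sha }} .

  push-image:
    # Only on main, and only if every previous job passed.
    if: github.ref == 'refs/heads/main'
    needs: [terraform-checks, docker-build]
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Placeholder: replace with your registry login and push commands.
      - run: echo "would push app:${{ github.sha }} to the registry"
        env:
          REGISTRY_TOKEN: ${{ secrets.REGISTRY_TOKEN }}  # placeholder secret
```

Note how the policy lives in the structure: pull requests can never reach push-image, and the needs clause is the "only if all previous jobs pass" gate.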

You can go one step further and ask the model to critique its own pipeline:

After generating the workflow, list 3 likely failure modes in real repositories and how you would harden the workflow against them.

That kind of self-audit is underrated.


What does a reusable IaC prompt template look like?

A reusable IaC prompt template should capture context, constraints, desired artifact, and validation steps in one place. This works because infrastructure quality depends on explicit environment details and executable checks, not just a well-phrased request.[1][2]

Here's a template I'd actually use:

You are acting as a senior platform engineer.

Task:
[Describe the Terraform, Docker, or CI/CD change]

Context:
- Repository/project:
- Cloud/platform:
- Existing stack:
- Current constraints:
- Security/compliance requirements:
- What must not change:

Output requirements:
- Return:
- File structure:
- Style or conventions to follow:
- Keep or avoid:

Validation:
- List assumptions explicitly
- Include commands/checks to validate the result
- Flag risky areas or missing information
- If the request is underspecified, ask clarifying questions before generating

This is also where Rephrase's prompt rewriting app is handy. If your first draft is messy, tools like it can quickly reshape the request into something more structured before you paste it into ChatGPT, Claude, or another coding assistant. And if you want more workflows like this, the Rephrase blog has more prompt breakdowns for practical AI use.


The big shift is simple: stop prompting for infrastructure like you're asking for a demo. Prompt like you expect the output to survive review, validation, and deployment. Once you do that, the model gets a lot more useful.

References

Documentation & Research

  1. HerAgent: Rethinking the Automated Environment Deployment via Hierarchical Test Pyramid - The Prompt Report
  2. SWE-rebench V2: Language-Agnostic SWE Task Collection at Scale - arXiv cs.CL
  3. PPTAMη: Energy Aware CI/CD Pipeline for Container Based Applications - The Prompt Report

Community Examples

  4. A pattern I keep noticing in technical prompts vs creative prompts - r/PromptEngineering

Ilia Ilinskii

Founder of Rephrase-it. Building tools to help humans communicate with AI.

Frequently Asked Questions

How should I prompt AI for Terraform?
Treat Terraform prompts like change requests, not vague ideas. Include your cloud provider, constraints, existing module patterns, security rules, and exactly what output format you want.

What is the biggest mistake people make with IaC prompts?
The biggest mistake is asking for generic output without repository context. Infrastructure work depends on environment details, toolchain versions, security posture, and deployment rules.

