
Prompt Tips•Mar 12, 2026•10 min

Prompting Nano Banana 2 (Gemini 3.1 Flash Image): The Practical Playbook for People Migrating from Midjourney

A hands-on way to write Nano Banana 2 prompts: structure, constraints, text-in-image, consistency, and fast iteration, without Midjourney-style parameter soup.


Midjourney taught a whole generation to "prompt like a power user": sprinkle stylistic buzzwords, add a few camera terms, maybe a negative prompt, then tweak parameters until you get lucky.

Nano Banana 2, Google's new image generator that people keep calling "the Midjourney replacement," rewards a different habit. It's less about prompt poetry and more about instruction design. You don't win by stacking adjectives. You win by being explicit about what the image must contain, how it's framed, what text must appear, and what must not change between iterations.

What's interesting is that this isn't just taste. It lines up with what we know about modern LLM-ish systems: outputs can be surprisingly stable across rephrasings, which is great for you as a creator (predictable iteration), but it also means whatever structure you choose can lock in patterns, good or bad, across runs [1]. So prompt structure is not just "nice to have." It's your control surface.


What Nano Banana 2 actually is (and why that changes prompting)

In the wild, Nano Banana 2 is being described as "Gemini 3.1 Flash Image" and positioned as a faster, more iterative image model with strong instruction following, better text rendering, and workflows like subject consistency and localization [4]. The key practical implication is speed: faster generation collapses the cost of experimentation. Your best prompt strategy becomes: write a clear spec, generate, critique, patch, repeat.

If you're coming from Midjourney, you may be used to writing one "big prompt" and then relying on style defaults + rerolls. With Nano Banana 2, the highest leverage move is to treat your prompt like a product requirement doc for a single image.


My core prompt pattern: "Spec-first, style-second, constraints-last"

Here's the pattern I keep returning to. It's simple, but it forces you to stop hand-waving.

You describe (1) subject, (2) composition, (3) environment + lighting, (4) style references, (5) text requirements, and then (6) constraints. That order matters because it reduces ambiguity early and pushes the "don't mess this up" items to the end where the model tends to respect them.

A nice side effect: this structure maps cleanly to how builders already think about prompt pipelines (subject, composition, action, location, style) plus the extra controls like aspect ratio and text placement [5].

Try this skeleton:

Create an image with the following spec.

Subject:
[who/what, attributes that must be true]

Composition:
[camera angle, framing, lens vibe if relevant, focal point, depth of field]

Scene + lighting:
[location, time of day, light direction/quality, atmosphere]

Style:
[medium, references, rendering intent]

Text in image (if any):
[exact text, language, font style, placement, spelling constraints]

Constraints:
- Must include: [...]
- Must NOT include: [...]
- Do not change: [...]
- Avoid: [common artifacts]

The "Constraints" section is where you do the work Midjourney users often skip. Not because it's fancy, but because it's defensive. It's the difference between "cool image" and "usable asset."
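If you build these specs more than once a day, the skeleton is worth templating. Here's a minimal sketch; the function name and field order are my own, not part of any Nano Banana 2 API, and you'd paste the returned string into whatever interface you use:

```python
def build_image_spec(subject, composition, scene_lighting, style,
                     text_in_image=None, constraints=None):
    """Assemble a spec-first, style-second, constraints-last prompt.

    Optional sections are skipped entirely when empty, so the model never
    sees a dangling header with nothing under it.
    """
    sections = [
        "Create an image with the following spec.",
        f"Subject:\n{subject}",
        f"Composition:\n{composition}",
        f"Scene + lighting:\n{scene_lighting}",
        f"Style:\n{style}",
    ]
    if text_in_image:
        sections.append(f"Text in image:\n{text_in_image}")
    if constraints:
        bullets = "\n".join(f"- {c}" for c in constraints)
        sections.append(f"Constraints:\n{bullets}")
    return "\n\n".join(sections)

prompt = build_image_spec(
    subject="A steaming bowl of tonkotsu ramen",
    composition="45-degree overhead angle, tight framing",
    scene_lighting="Soft diffused side light, warm color temperature",
    style="High-end editorial food photography",
    constraints=["No text", "Avoid plastic-looking broth"],
)
```

The point of the template isn't automation for its own sake; it's that the ordering (and the constraints-last habit) stops being something you have to remember under deadline.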


Write prompts for what Nano Banana 2 is unusually good at

Text-in-image: treat typography like code, not vibes

Nano Banana 2's reputation is that it can render crisp text and even localize/translate it inside the image while preserving composition [4]. That's a big shift versus older image models where you'd avoid text entirely.

My take: if you need text, specify it like a unit test. Give the exact string, demand zero spelling errors, and state where it goes. When you iterate, do it as a follow-up instruction: "Keep everything the same, only change the text to …".

That follow-up style is consistent with how people are successfully using it for localization: generate an ad in English, then request a translation into Japanese without changing lighting/composition [4]. That "without changing" clause is doing the heavy lifting.
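Since the "without changing" clause does the heavy lifting, it's the part worth never typing by hand. A small sketch (my own helper, not an official workflow) that bakes the clause into every localization follow-up:

```python
def localize_followup(new_text, language):
    """Build a follow-up edit instruction that swaps only the text.

    The trailing clause pins composition and lighting so the model
    doesn't "helpfully" redesign the shot during translation.
    """
    return (
        f'Localize this visual by translating the text into {language} '
        f'("{new_text}") without changing the underlying image '
        "composition or lighting."
    )
```

Usage: `localize_followup("低音を感じろ", "Japanese")` reproduces the headphone-ad follow-up from the example later in this post, clause included.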

Consistency across a storyboard: name your entities and pin their attributes

If you want the same character across multiple images, don't just say "same person." Define a stable identity block (hair, skin tone, clothing, distinguishing marks), and then refer back to it verbatim. The model is reported to support maintaining likeness for multiple characters/objects across a workflow [4], but it still needs you to be disciplined about what "the same" means.

A rule I use: decide which 3-5 traits are non-negotiable and repeat them word-for-word.
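Verbatim repetition is exactly the kind of discipline code is better at than people. A sketch of the idea, with an invented character for illustration:

```python
# Hypothetical character sheet: 3-5 non-negotiable traits, frozen up front.
CHARACTER = {
    "name": "Mara",
    "traits": [
        "short silver hair",
        "round copper glasses",
        "olive field jacket",
    ],
}

def identity_block(character):
    """Render the non-negotiable traits in the same wording every time."""
    traits = ", ".join(character["traits"])
    return f'{character["name"]}: {traits}'

def storyboard_prompt(character, scene):
    """Per-frame prompt that repeats the identity block word-for-word."""
    return (
        f"Same character as before. {identity_block(character)}.\n"
        f"Scene: {scene}\n"
        "Do not change: face, hair, glasses, or jacket."
    )
```

Every frame's prompt now contains a byte-identical identity block, which is what "the same person" actually means to a model.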

Web grounding (when relevant): specify what must be grounded

Nano Banana 2 is described as pulling live info/reference images from Google Search for real-world subjects [4]. If that's true in your access path, your prompt should say what's allowed to change (background facts) and what must not (design layout). Grounding is powerful, but it can also introduce "creative drift" unless you lock your spec.


Guardrails are part of prompting (yes, really)

One practical gotcha: Nano Banana 2 may refuse prompts that contain certain copyrightable entities or brand names [4]. This changes how you prompt if you're building commercial design mockups.

Instead of "in the style of Disney" or "Nike ad," you prompt for the properties you want: "whimsical theme-park aesthetic," "athletic brand minimal product ad," "bold sans-serif headline," "high-key studio lighting," and so on. You're basically learning to specify transferable attributes rather than name-dropping.
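If your team hits these refusals often, a pre-flight lint pays for itself. The denylist and attribute mappings below are illustrative guesses (the model's actual refusal list isn't public), but the shape of the fix is the point: swap names for transferable attributes before the prompt ever ships.

```python
import re

# Hypothetical denylist; extend with whatever triggers refusals in your runs.
BLOCKED_TERMS = {
    "disney": "whimsical theme-park aesthetic",
    "nike": "athletic brand minimal product ad",
}

def lint_prompt(prompt):
    """Return (cleaned_prompt, warnings), swapping brand names for attributes."""
    warnings = []
    cleaned = prompt
    for term, attribute in BLOCKED_TERMS.items():
        if term in cleaned.lower():
            warnings.append(
                f"'{term}' may trigger a refusal; substituting '{attribute}'"
            )
            cleaned = re.sub(term, attribute, cleaned, flags=re.IGNORECASE)
    return cleaned, warnings
```

A naive substring swap like this will mangle words that merely contain a blocked term, so treat it as a warning system, not an autocorrect.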


Practical examples (and how I'd improve them)

Example 1: infographic with strict spelling

This prompt format has already been shown to work well:

Generate a top-down, flat-lay infographic explaining the solar energy cycle.
Ensure there is a logical visual flow and absolutely zero spelling errors in the text labels.

It's good because it contains a hard requirement ("zero spelling errors") and a compositional instruction ("top-down, flat-lay") [4]. If I were tightening it for production use, I'd add layout constraints:

Create a top-down flat-lay infographic explaining the solar energy cycle.

Composition:
- Clean grid layout, 8 labeled steps in a clockwise loop
- Clear arrows between each step
- Plenty of white space

Text in image:
- Use these exact labels (verbatim): [list labels]
- No spelling errors, no extra words

Constraints:
- No logos, no watermarks, no handwritten fonts
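When the labels are mission-critical, generate that "Text in image" section from the same list you'll later check the output against. The label set below is a made-up example, not from the original prompt:

```python
SOLAR_LABELS = [  # example labels; substitute your own
    "Sunlight", "Solar panel", "Inverter", "Battery",
    "Grid", "Home use", "Export", "Monitoring",
]

def labels_section(labels):
    """Emit the text-in-image block listing the exact strings required."""
    numbered = "\n".join(f"{i}. {label}" for i, label in enumerate(labels, 1))
    return (
        "Text in image:\n"
        "- Use these exact labels (verbatim):\n"
        f"{numbered}\n"
        "- No spelling errors, no extra words"
    )
```

One source of truth for the labels means your prompt and your QA checklist can never drift apart.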

Example 2: ad mockup + localization without drift

The "generate, then localize" workflow is the real unlock:

Generate a modern advertisement mockup for a sleek pair of headphones featuring the English text "Feel The Bass".

Follow-up:

Localize this visual by translating the text into Japanese ("低音を感じろ")
without changing the underlying image composition or lighting.

This is a perfect example of "spec-first, constraints-last," because the constraint explicitly prevents the model from "helpfully" redesigning the shot [4].

Example 3: turning simple prompts into pro prompts (community tactic)

There's a popular community approach that says "quantified parameters beat adjectives" and encourages turning "cinematic vibe" into specific lens/film terms, plus adding negative constraints [3]. I agree with the direction, with one caveat: don't overfit to camera jargon. Use it when it expresses a real constraint (lens length, depth of field), not as decoration.

A Nano Banana 2-friendly rewrite of "a bowl of ramen" might look like:

Create a close-up food photograph of a steaming bowl of tonkotsu ramen.

Composition:
- 45-degree overhead angle, tight framing
- Shallow depth of field, focus on noodles and chashu

Lighting:
- Soft diffused side light, warm color temperature

Style:
- High-end editorial food photography, natural textures

Constraints:
- No text, no hands, no extra utensils, no surreal ingredients
- Avoid plastic-looking broth, avoid warped chopsticks

The habit that replaces Midjourney parameters: fast, structured iteration

Here's the workflow I'd actually adopt if I were migrating a team from Midjourney to Nano Banana 2.

First pass: write the spec using the skeleton. Second pass: don't rewrite the whole thing; patch the smallest possible part and add a constraint like "keep everything else unchanged." If outputs are stable across rephrasings (and research suggests many model behaviors persist even when you paraphrase), you should exploit that stability for controlled iteration, not fight it with constant prompt rewrites [1].
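That two-pass habit reduces to a one-line helper. A sketch (the pinned-by-default list is my own convention, not a model feature):

```python
def patch_instruction(change, keep=("composition", "lighting", "color palette")):
    """Smallest-possible edit: state one change, pin everything else."""
    pinned = ", ".join(keep)
    return f"{change} Keep everything else unchanged ({pinned})."

# Migration-friendly loop: spec once, then patch instead of rewriting.
history = ["<initial spec prompt>"]
for fix in [
    'Change the headline text to "Feel The Bass".',
    "Make the rim light slightly warmer.",
]:
    history.append(patch_instruction(fix))
```

Keeping the history list around also gives you a free audit trail of exactly which patch introduced which change.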

That's the game now. Less "prompt art." More "prompt engineering" in the literal sense.


References

Documentation & Research

  1. Extracting Recurring Vulnerabilities from Black-Box LLM-Generated Software. arXiv cs.AI (2026). https://arxiv.org/abs/2602.04894
  2. Rethinking Latency Denial-of-Service: Attacking the LLM Serving Framework, Not the Model. arXiv (2026). http://arxiv.org/abs/2602.07878v1

Community Examples

  3. After analyzing 1,000+ viral prompts, I made a system prompt that auto-generates pro-level NanoBanana prompts. r/PromptEngineering. https://www.reddit.com/r/PromptEngineering/comments/1qq4tet/after_analyzing_1000_viral_prompts_i_made_a/
  4. Nano Banana 2: Google's latest AI image generation model. Analytics Vidhya. https://www.analyticsvidhya.com/blog/2026/02/nano-banana-2/
  5. Image Generation Prompt Flow. r/PromptEngineering. https://www.reddit.com/r/PromptEngineering/comments/1quvryb/image_generation_prompt_flow/

Ilia Ilinskii

Founder of Rephrase-it. Building tools to help humans communicate with AI.
