Google’s Gemini 2.5 Flash Image (code‑name “Nano Banana”) is drawing a lot of attention — and for good reason. It combines fast image generation with powerful editing and visual reasoning, making it ideal for apps that need both creativity and control.
In this post, we’ll cover what makes Nano Banana special, show practical use cases, outline pricing expectations, and provide a quickstart so you can try it right away in NextDocs.
Character and product consistency: keep the same subject recognizable across scenes and angles.
Text-based editing: describe a change in plain language (for example, "remove background") and the model applies it to the image.
Visual reasoning: the model combines image understanding with generation, so edits can take the scene's content into account.
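As a sketch of the text-based editing flow above, here is how an instruction and a source image can be paired in a single request with the same google-genai client used in the quickstart below. The helper names (edit_image, first_image), the input file path, and the preview model id are illustrative assumptions, not a fixed API:

```python
from io import BytesIO

import PIL.Image


def first_image(response):
    # Pull the first inline image out of a generate_content response;
    # responses can interleave text parts and image parts.
    for part in response.candidates[0].content.parts:
        if part.inline_data is not None:
            return PIL.Image.open(BytesIO(part.inline_data.data))
    return None


def edit_image(image_path, instruction):
    # Hypothetical helper: send a plain-language instruction together with
    # a source image. The SDK import is deferred so first_image() above
    # stays usable on its own.
    from google import genai

    client = genai.Client()  # reads GEMINI_API_KEY from the environment
    source = PIL.Image.open(image_path)
    response = client.models.generate_content(
        model="gemini-2.5-flash-image-preview",
        contents=[instruction, source],
    )
    return first_image(response)
```

A call like edit_image("product.png", "remove the background") would then return the edited image as a PIL object, ready to save or insert.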
All outputs include an invisible SynthID watermark. It’s designed to make AI‑generated content auditable while remaining unobtrusive to viewers.
Nano Banana is available in NextDocs via our fal.ai integration, and pricing is surfaced in the side panel before you run a job. Actual charges depend on model selection and output size; always refer to the estimate shown next to the Run action.
Below is a minimal example adapted from Google’s docs. You can use the NextDocs Media panel (AI tab) to try prompts interactively, or call the API directly in your own scripts.
from io import BytesIO

import PIL.Image
from google import genai

# The client reads your GEMINI_API_KEY from the environment.
client = genai.Client()

prompt = """
Show me a picture of a nano banana dish in a fancy restaurant with a Gemini theme
"""

response = client.models.generate_content(
    model="gemini-2.5-flash-image-preview",
    contents=[prompt],
)

# Responses can interleave text and image parts; save any image returned.
for part in response.candidates[0].content.parts:
    if part.text is not None:
        print(part.text)
    elif part.inline_data is not None:
        image = PIL.Image.open(BytesIO(part.inline_data.data))
        image.save("generated_image.png")
Open the Media panel → AI → select "Nano Banana" (or leave on Auto) → enter your prompt → review the estimated cost → Run. For edits, pick "Edit", write the instruction (e.g., "remove background"), and preview the result before inserting.
— The NextDocs Team