In case you haven’t heard about it yet, Google recently released a new image model. It is called Gemini 2.5 Flash Image (technically it is gemini-2.5-flash-image-preview). They call it a state-of-the-art image generation and editing model. I believe them…
The code name for this model, which seems to have caught on, is Nano Banana. Now you are caught up.
In normal Google style, they didn’t have a huge announcement. They just made the model available. People found it and are going crazy about it. Why? Well, I can’t speak for “people”, but my guess is because it actually works.
This has been Google’s pattern throughout these AI races. Let the other companies make massive over-promises and faceplant. Or wait for the competition to release models behind some kind of pricing structure, then quietly release their own version (which happens to be better) for free.
I had already played around with nano banana in Google AI Studio a little bit. Now it’s in Gemini. Seems like a good enough reason to play around some more.
I started with these two images -
A different Google AI-generated image of Danielle as a collectible toy.
And
The infamous glitter monster (inside joke) - also AI-generated from a different model
Nano Banana’s first task was to add the glitter monster to the other image.
Here's the prompt I used -
(I attached both images to the prompt)
Reference the attached images
Add a glitter monster (subject of the second image) to the first image in a similar style
This is what it gave me -
My take - good and bad. The lines around the glitter monster are surprisingly clean. That would have taken a while to do manually. It’s not really in the same style though.
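(Side note - I did this edit in the Gemini app, but if you’d rather script it, here is a rough sketch of what it could look like with Google’s google-genai Python SDK. The file names and API key are placeholders I made up; the prompt is the same one from above.)

    # Rough sketch: an edit with gemini-2.5-flash-image-preview via the google-genai SDK.
    # File names and the API key are placeholders, not from this post.
    from io import BytesIO

    from google import genai
    from PIL import Image

    client = genai.Client(api_key="YOUR_API_KEY")  # assumes you have a Gemini API key

    toy_image = Image.open("danielle_collectible_toy.png")  # first image
    glitter_monster = Image.open("glitter_monster.png")     # second image

    response = client.models.generate_content(
        model="gemini-2.5-flash-image-preview",
        contents=[
            toy_image,
            glitter_monster,
            "Reference the attached images. "
            "Add a glitter monster (subject of the second image) to the first image in a similar style",
        ],
    )

    # The response can mix text and image parts; save any image it returns.
    for part in response.candidates[0].content.parts:
        if part.inline_data is not None:
            Image.open(BytesIO(part.inline_data.data)).save("edited.png")
        elif part.text is not None:
            print(part.text)

The follow-up prompts below would work the same way - pass the previously generated image back in as the reference along with the new instruction.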
Let’s keep going.
Next prompt -
Make the glitter monster smaller and place it behind her inside of the orange ball
Result -
A lot of good things happened here -
- It is smaller
- It is inside of the ball
- It is behind her
- The halo around it is gone (not part of the prompt but it is consistent with the lighting inside of the ball)
- It is clearly the same glitter monster that we started with (other image models have been known to change the image)
Why stop here? Next prompt -
Change the color of the vest worn by the subject holding the camera to green
Nice! The vest is changed (I didn’t specify what kind of green so what it chose is acceptable) and the rest of the image is unchanged.
Let’s go one more time. Prompt -
Sprinkle some glitter on the floor of the orange ball by their feet
Hmmmm. It’s definitely more than just at their feet but c’mon! It’s a glitter monster. It can’t be expected to only have a tiny amount of glitter so I accept this.
Verdict - I think this is damn impressive. One thing that I didn’t mention is that each of the image generations took about 5 seconds. This is way faster than most other models (that don’t give you what you asked for…). The edits weren’t perfect, but the prompts were also (intentionally) simple. I am confident that with better initial prompts or descriptive follow-up prompts, the model would give me whatever I asked for.
This is approaching game-changer level. Not just on the image generation side - which isn’t showcased here but I’ve tried it and it is at least as impressive - but if it can edit this quickly and accurately, photographers and editors may need to step up their games. And, to be clear, we still have a photography business as well so we are in that group.
My advice - if you also have some kind of photo creation and/or editing component to your job or business, learn to use and master this tool today. If it does replace the tool(s) that you currently use, you’ll still be able to keep pace with the times. You can be mad about it later…
Well done Google. These are crazy times…
