Google has announced the availability of native image output in Gemini 2.0 Flash for developer experimentation. Initially introduced to trusted testers in December, this feature is now accessible across all regions supported by Google AI Studio.
“Developers can now test this new capability using an experimental version of Gemini 2.0 Flash (gemini-2.0-flash-exp) in Google AI Studio and via the Gemini API,” Google said.
OpenAI announced a similar capability for GPT-4o last year, but the company hasn't shipped it yet. Notably, Google isn't relying on Imagen 3 to generate these images; the capability is fully native to Gemini.
Gemini 2.0 Flash integrates multimodal input, reasoning, and natural language processing to generate images. According to Google, the model’s key capabilities include text and image generation, conversational image editing, and text rendering.
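For developers who want to try it, a minimal sketch of a request is below. It assumes the google-genai Python SDK with an API key set in the environment; the prompt and output filename are illustrative, and exact field names may vary between SDK versions.

```python
from google import genai
from google.genai import types

# Assumes GEMINI_API_KEY (or GOOGLE_API_KEY) is set in the environment.
client = genai.Client()

# Ask the experimental model for both text and an image in one response.
response = client.models.generate_content(
    model="gemini-2.0-flash-exp",
    contents="Illustrate a short bedtime story about a fox in a snowy forest.",
    config=types.GenerateContentConfig(response_modalities=["TEXT", "IMAGE"]),
)

# The response interleaves text parts and inline image data.
for part in response.candidates[0].content.parts:
    if part.text is not None:
        print(part.text)
    elif part.inline_data is not None:
        with open("illustration.png", "wb") as f:
            f.write(part.inline_data.data)
```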
So, Google just launched very good image generation that OpenAI announced last year and never ended up shipping it? ;)
— tarun (@tarrooon) March 12, 2025
“Use Gemini 2.0 Flash to tell a story, and it will illustrate it with pictures while maintaining consistency in characters and settings,” the company explained. The model also supports interactive editing, allowing users to refine images through natural language dialogue.
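A rough sketch of what that multi-turn editing flow could look like, again assuming the google-genai Python SDK; the chat session, prompts, and follow-up instruction are illustrative rather than a confirmed recipe.

```python
from google import genai
from google.genai import types

client = genai.Client()

# A chat session keeps prior turns, so a follow-up message can refine
# the previously generated image instead of starting from scratch.
chat = client.chats.create(
    model="gemini-2.0-flash-exp",
    config=types.GenerateContentConfig(response_modalities=["TEXT", "IMAGE"]),
)

chat.send_message("Draw a red bicycle leaning against a brick wall.")
followup = chat.send_message("Same scene, but make it nighttime and add rain.")

# Save whatever image data came back from the second turn.
for part in followup.candidates[0].content.parts:
    if part.inline_data is not None:
        with open("edited.png", "wb") as f:
            f.write(part.inline_data.data)
```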
Another feature is its ability to use world knowledge for realistic image generation. Google claims this makes it suitable for applications such as recipe illustrations. Moreover, the model offers improved text rendering, addressing common issues found in other image-generation tools.
"How is native image gen better than current models?" pic.twitter.com/KOyaGr0VgM
— angel⭐ (@Angaisb_) March 12, 2025
Internal benchmarks indicate that Gemini 2.0 Flash outperforms leading models in rendering long text sequences, making it useful for advertisements and social media content.
Google has invited developers to experiment with the model and provide feedback. “We’re eager to see what developers create with native image output,” the company said. Feedback from this phase will contribute to finalising a production-ready version.
Separately, Google recently launched Gemma 3, the next iteration of its Gemma family of open-weight models and the successor to Gemma 2, which was released last year.
Gemma 3 comes in four parameter sizes (1B, 4B, 12B and 27B) and supports a longer context window of 128k tokens. It can analyse videos, images, and text, works in 35 languages out of the box, and provides pre-trained support for 140 languages.