Nano Banana expands to Search and NotebookLM with 5 billion images created

Google's Nano Banana image editing model launches in Google Search and NotebookLM on October 13, 2025, bringing Gemini 2.5 Flash capabilities to more platforms after generating more than 5 billion images.

Google Lens Nano Banana feature icon on bananas representing new AI image editing capabilities

On October 13, 2025, Nano Banana began its expansion beyond the Gemini app, integrating into Google Search and NotebookLM. The announcement marked a significant distribution shift for the image editing model, which had previously been confined to a single application since its August 2025 debut.

According to Naina Raisinghani, Product Manager at Google, more than 5 billion images have been generated using Nano Banana since the initial launch. The model, built on Gemini 2.5 Flash technology, now reaches users across multiple Google properties. A third integration into Google Photos was announced for the coming weeks, though specific timing remained undisclosed.

The Search implementation introduces Nano Banana through Google Lens and AI Mode on both Android and iOS devices. Users access the functionality through a new Create mode within the Lens interface, identified by a yellow banana icon. The system processes photos captured directly through the device camera or selected from existing galleries, transforming images based on text prompts or suggested edits.

Lou Wang, Senior Director of Product Management for Google Search, detailed the operational process in the October 13 announcement. Users open Lens in the Google app, select the Create mode, and input editing requests. The system supports follow-up modifications to previously edited images, enabling iterative refinement of generated content. Beyond direct photo editing, the integration enables users to generate entirely new images from text descriptions through AI Mode's "Create image" tool.
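
The Lens Create mode itself is a consumer interface with no public API, but the prompt-based editing pattern it exposes is available through Google's published Gemini API. Below is a minimal, hypothetical sketch of that pattern, assuming the google-genai Python SDK and the publicly documented gemini-2.5-flash-image model identifier; the file names and prompts are stand-ins, and the chat session here approximates Lens's follow-up edits rather than reproducing the product's internal calls.

```python
# Hypothetical sketch, not the Lens Create mode itself: prompt-based
# editing with follow-up refinement via the public Gemini API. Assumes
# the google-genai SDK ("pip install google-genai pillow") and the
# documented "gemini-2.5-flash-image" model ID; "dog.jpg" is a stand-in.
from google import genai
from PIL import Image

client = genai.Client()  # reads GEMINI_API_KEY from the environment
chat = client.chats.create(model="gemini-2.5-flash-image")


def save_first_image(response, path):
    """Write the first inline image part of a response to disk."""
    for part in response.candidates[0].content.parts:
        if part.inline_data is not None:
            with open(path, "wb") as f:
                f.write(part.inline_data.data)
            return


# First request: transform an existing photo based on a text prompt.
first = chat.send_message(
    ["Dress this dog in a pirate costume for Halloween", Image.open("dog.jpg")]
)
save_first_image(first, "edit_v1.png")

# Follow-up request: refine the previous result without re-uploading,
# mirroring the iterative editing the Lens interface supports.
second = chat.send_message("Make the hat red and add a parrot on the shoulder")
save_first_image(second, "edit_v2.png")
```

Because the chat session retains prior turns, the second prompt refines the first result rather than starting over from the original photo, which is the same iterative pattern the Lens interface surfaces through suggested follow-up edits.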

The implementation lets camera-shy users photograph objects rather than people, a practical privacy accommodation that preserves full functionality. Gallery integration provides access to existing images without requiring new photography. Supported use cases range from visualizing Halloween costumes for pets to reimagining home furnishings in different styles.

AI Mode extends the workflow beyond image creation. Users can request styling suggestions for generated clothing items or search for purchase options for furniture designs created through the tool. This integration connects creative image generation with practical shopping and research capabilities within the same interface.

In NotebookLM, Nano Banana operates as the underlying technology powering enhanced Video Overviews. The model introduces six new visual styles to the overview format, including watercolor and anime aesthetics. According to the announcement, the system generates contextual illustrations derived from uploaded source materials, moving beyond text-only presentations to visual content synthesis.

A new format called Brief emerged from the Nano Banana integration. This condensed overview option addresses use cases requiring rapid information extraction rather than comprehensive analysis. The feature joins existing Video Overview capabilities, which transform notebook content into narrated slide presentations using AI-generated visuals combined with spoken explanations.

NotebookLM's Video Overviews launched in July 2025 with English-only support, later expanding to 80 languages in August 2025. The Nano Banana integration enhances these capabilities by diversifying visual presentation options and enabling stylistic variations in generated content. The Brief format specifically targets users requiring quick insights rather than extended analytical content.

The rollout began in English for users in the United States and India, with additional countries and languages planned for future expansion. Geographic limitations mirror deployment patterns seen across Google's AI feature releases, where initial availability focuses on specific markets before broader international distribution.

For marketing professionals, the Nano Banana expansion carries implications for creative asset production workflows. The Search integration enables rapid visual mockup generation without dedicated design software, potentially accelerating campaign ideation. The capability to transform product photography through prompt-based editing lowers the barrier to creative experimentation for advertisers managing visual content.

These developments follow established patterns in Google's AI integration strategy. Asset Studio consolidated creative capabilities within Google Ads in September 2025, providing advertisers with AI-powered image generation and editing tools. The Nano Banana deployment extends similar functionality to consumer-facing products, though without the advertising-specific optimization present in Asset Studio.

The timing coincides with broader AI Mode expansion across Google properties. AI Mode integration in Chrome desktop launched through Labs in September 2025, while homepage search bar testing began in June 2025. These deployments reflect systematic distribution of AI capabilities across the company's product ecosystem.

Image generation volume reached 5 billion within approximately two months of the initial Gemini app launch. This adoption rate suggests substantial user engagement with AI-powered creative tools, though Google did not disclose active user counts or demographic breakdowns in the announcement. The figure encompasses all image types generated through the Gemini app interface during the measurement period.

The Create mode in Google Lens operates alongside existing visual search capabilities. Users can transition from standard object identification tasks to image transformation within the same application. This unified interface reduces context switching for users moving between informational searches and creative editing tasks.

NotebookLM's integration differs from the Search implementation by focusing on automated content generation rather than user-directed editing. The system analyzes uploaded documents, presentations, and media files to determine appropriate visual styles and illustrations. Users select from predefined style options rather than providing custom prompts, streamlining the generation process for research-focused workflows.

The Brief format addresses specific user feedback about NotebookLM's overview length. According to the announcement, the condensed format serves users requiring quick information access rather than comprehensive synthesis. This addition expands the range of outputs available from identical source materials, letting different team members generate content at the depth their tasks require.

Implementation details for the Google Photos integration remained unspecified. The announcement indicated availability "in the weeks ahead" without providing technical specifications or feature descriptions. The lack of detail contrasts with the comprehensive explanations given for the Search and NotebookLM integrations, suggesting the Photos deployment may still have been under development at announcement time.

The expansion strategy distributes Nano Banana across products serving different user intentions. Search addresses immediate creative needs during active browsing sessions. NotebookLM targets research and content synthesis workflows. Photos, judging by the platform's existing functionality, will likely focus on retrospective editing of stored images rather than real-time creation.

Technical infrastructure supporting these integrations relies on Gemini 2.5 Flash processing capabilities. The model handles image generation, style transfer, and contextual illustration creation across all three products. This unified backend enables consistent output quality while accommodating product-specific interface requirements and user workflows.
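
For the text-to-image generation that AI Mode's "Create image" tool and NotebookLM's styled illustrations build on, the same backend accepts a style instruction inside the prompt. A hypothetical sketch under the same assumptions as the sketch above; the prompt wording and output file name are illustrative, not either product's internal call:

```python
# Hypothetical sketch of text-to-image generation with a style
# instruction via the public Gemini API; prompt and file name are
# illustrative assumptions, not the Search or NotebookLM internals.
from google import genai

client = genai.Client()  # reads GEMINI_API_KEY from the environment

response = client.models.generate_content(
    model="gemini-2.5-flash-image",
    contents="A watercolor illustration of a reading nook with stacked "
             "notebooks and a globe, soft morning light",
)

# The response may interleave text and image parts; persist the image.
for part in response.candidates[0].content.parts:
    if part.inline_data is not None:
        with open("illustration.png", "wb") as f:
            f.write(part.inline_data.data)
```

A single prompt string carrying both subject and style keeps the interface uniform across products: Search can pass a user's edit request, while NotebookLM can prepend a predefined style such as watercolor or anime to prompts derived from source material.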

Privacy considerations remain important for tools processing user-generated images. Google did not specify data retention policies or model training protocols in the announcement. Previous AI feature launches have included privacy documentation, though such details were absent from the October 13 statements about Nano Banana expansion.

The mobile-first deployment strategy limits initial availability to Android and iOS users. Desktop access through browser-based interfaces was not mentioned in the announcement, though AI Mode functionality exists in desktop Chrome environments. This mobile focus aligns with Google Lens usage patterns, where smartphone cameras serve as primary input devices.

For content creators and marketing teams, the developments indicate continued AI integration into standard workflows. Image editing capabilities previously requiring dedicated software become accessible through search and research tools. This democratization of creative functionality may influence content production velocities and experimentation rates across digital marketing teams.

The announcement did not address computational requirements or processing times for image generation. Previous Google AI deployments have varied in response latency based on task complexity. Users generating complex edits or high-resolution outputs may experience different performance characteristics than simple transformations.

Integration with existing Google ecosystem features enables workflow continuity. Users can search for similar items after generating product visualizations, or incorporate AI-edited images into broader research projects through NotebookLM. These connections between discrete products create potential for extended use cases beyond isolated image editing tasks.

Market dynamics in AI-powered creative tools continue shifting as major platforms integrate generative capabilities. Adobe maintains professional editing software dominance, while platform-native tools like Nano Banana target casual users and rapid ideation workflows. The competitive landscape balances specialized professional tools against integrated consumer offerings.

The 5 billion image milestone represents aggregate generation across all Gemini app users during the August-October period. Daily generation rates and geographic distribution of usage were not disclosed. These metrics would provide insight into feature adoption patterns and sustained engagement levels beyond initial novelty periods.

Future expansion plans beyond the announced United States and India availability remain unspecified. Google's typical rollout patterns involve gradual geographic expansion following initial launches, though timelines vary by product and regulatory considerations. International users may experience extended waiting periods for feature access.

Summary

Who: Google announced the expansion through Naina Raisinghani, Product Manager, and Lou Wang, Senior Director of Product Management for Google Search. The announcement affects users of Google Search, NotebookLM, and Google Photos across multiple platforms.

What: Nano Banana, an image editing model built on Gemini 2.5 Flash technology, expanded from the Gemini app to Google Search and NotebookLM, with Google Photos integration planned. The model enables users to transform photos through text prompts, generate new images from scratch, and access six new visual styles in NotebookLM Video Overviews. A new Brief format provides condensed insights for quick information needs. The model had generated more than 5 billion images since its August 2025 launch.

When: The announcement occurred on October 13, 2025. The initial Nano Banana launch in the Gemini app took place in August 2025. Rollout began immediately for Search and NotebookLM features, with Google Photos integration planned for "the weeks ahead" following the announcement.

Where: Initial availability covers the United States and India for English-language users on Android and iOS devices. The features appear in Google Lens through the Google app and in NotebookLM's Video Overview generation system. Additional countries and languages were planned without specific timelines disclosed.

Why: The expansion distributes Nano Banana capabilities to platforms where users already engage in exploration, learning, and visual content creation. The 5 billion images generated in the Gemini app demonstrated substantial user demand for AI-powered image editing tools. Integration into Search enables creative transformation during active browsing, while NotebookLM integration enhances research content visualization through stylized illustrations and condensed Brief formats. The developments align with broader strategies to embed AI capabilities across Google's product ecosystem, following patterns established by Asset Studio for advertising and AI Mode integration across multiple properties.