Google AI Adds New Image Recognition and Search Features

2025-04-08

Google has unveiled a significant update to AI Mode, its search-focused AI chatbot, introducing multimodal capabilities that enable it to "understand" images and provide relevant answers. The tech giant is also expanding AI Mode's availability to millions of new users.

The update combines a customized version of Gemini with Google Lens' image recognition technology, allowing AI Mode to analyze images users upload or capture and return detailed answers complete with links. The multimodal feature is live now and can be accessed via the Google app on both Android and iOS.

According to Robby Stein, Google's VP of Product for Search, "AI Mode builds upon years of research in visual search and expands its functionality further. Leveraging Gemini's multimodal abilities, AI Mode interprets entire scenes within images, including the relationships between objects and their unique textures, colors, shapes, and arrangements."

Google explained that the update employs a "query fan-out" technique, which issues multiple queries based on the image as a whole and the individual objects identified within it. This approach makes responses both more comprehensive and more richly contextual than a single search. For instance, AI Mode can recognize books depicted in a photo, recommend similar highly rated titles, and refine its suggestions in response to follow-up questions.
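To make the fan-out idea concrete, here is a minimal, purely illustrative Python sketch: it expands one image-based question into several sub-queries (one per detected object and attribute set) and merges the results of those queries with simple reciprocal-rank fusion. Every name here is hypothetical; this is not Google's implementation, only a sketch of the general technique the article describes.

```python
# Illustrative sketch of a "query fan-out" style pipeline (hypothetical,
# not Google's actual implementation): expand one image-grounded question
# into several sub-queries, then merge the per-query result lists.

def fan_out_queries(scene_objects, user_question):
    """Expand a single question into multiple sub-queries, one per
    detected object and per attribute description."""
    queries = [user_question]  # the user's own question always runs
    for obj in scene_objects:
        queries.append(f"{obj['label']} {user_question}")
        if obj.get("attributes"):
            attrs = " ".join(obj["attributes"])
            queries.append(f"{attrs} {obj['label']}")
    return queries

def merge_results(results_per_query, top_k=5):
    """Deduplicate and rank documents gathered across all sub-queries
    using reciprocal-rank fusion: a document ranked high in any list
    accumulates a large score."""
    scores = {}
    for results in results_per_query:
        for rank, doc in enumerate(results):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (rank + 1)
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

# Example: a photo of a bookshelf, asked about similar novels.
scene = [
    {"label": "book", "attributes": ["hardcover", "fantasy"]},
    {"label": "bookshelf", "attributes": []},
]
queries = fan_out_queries(scene, "similar highly rated novels")
print(queries)
```

In a real system each sub-query would hit a search index and the merged list would feed the answer-generation step; here the fusion stage stands in for that retrieval.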

AI Mode is Google's answer to competitors like Perplexity and ChatGPT. It delivers AI-generated summaries in response to user questions, drawing on Google's extensive search index.

AI Mode launched last month as a Search Labs experiment available exclusively to Google One AI Premium subscribers. Google has now begun rolling it out to millions of Labs users across the U.S., no longer restricting access to paid premium members.