Google LLC has expanded its Gemini AI model portfolio and broadened the availability of existing models.
First, Google is rolling out the updated Gemini 2.0 Flash to general availability on its AI Studio and Vertex AI platforms. This follows the company's move to make 2.0 Flash available to all users of the Gemini app on both desktop and mobile.
Google has also introduced an experimental version of Gemini 2.0 Pro, its flagship model for coding and complex prompts, and announced that the 2.0 Flash Thinking experimental edition is now widely available. 2.0 Flash Thinking is a compact, fast model optimized for logic and reasoning.
In addition, Google launched the entirely new Gemini 2.0 Flash-Lite model, designed to be the most cost-effective AI solution from the company, currently in public preview.
2.0 Pro Experimental Edition
Google stated that by sharing early experimental versions of Gemini 2.0 with developers and advanced users, it gained valuable feedback about the strengths of its AI models. With the release of the Gemini 2.0 Pro experimental edition, the company aims to continue this trend.
The experimental Gemini 2.0 Pro model features a context window of 2 million tokens, enabling it to handle vast documents and videos, or approximately 1.5 million words. It can also invoke tools like Google Search and execute code.
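As a rough illustration (not part of Google's announcement), a developer could exercise that long context through the Gemini API. The sketch below assumes the google-generativeai Python SDK and an experimental model ID of "gemini-2.0-pro-exp"; the exact identifier exposed in AI Studio or Vertex AI may differ.

    # Minimal sketch: querying the experimental 2.0 Pro model via the
    # google-generativeai Python SDK. The model ID is an assumption.
    import google.generativeai as genai

    genai.configure(api_key="YOUR_API_KEY")
    model = genai.GenerativeModel("gemini-2.0-pro-exp")  # assumed experimental model ID

    # With a 2 million token context window, a very long document can be
    # passed directly alongside the prompt.
    with open("annual_report.txt") as f:
        report = f.read()

    response = model.generate_content(["Summarize the key risks in this report:", report])
    print(response.text)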
Gemini 2.0 Pro succeeds Google's previous flagship model, Gemini 1.5 Pro, which was launched last February.
2.0 Flash Thinking Experimental Edition
To build a model capable of "deep thinking" through optimized reasoning, Google released the 2.0 Flash Thinking experimental edition in December. DeepSeek, the Chinese AI startup, takes a similar deep-thinking approach with its open-source R1 reasoning model, which has drawn considerably more media attention.
Building on the speed and performance of 2.0 Flash, Google trained the new experimental model to break prompts down into a series of steps so that it works through tasks systematically.
"The 2.0 Flash Thinking experimental edition showcases its thought process, allowing you to see why it responds in certain ways, what assumptions it makes, and trace the reasoning path of the model," said Patrick Kane, product management director of the Google Gemini app, in the announcement.
The company also said a version of Flash Thinking that can interact with applications such as YouTube, Search, and Google Maps is on the way, which would let the model apply its reasoning capabilities as a more helpful AI assistant across those apps.
The new 2.0 Flash Thinking experimental edition and the 2.0 Pro experimental edition are being rolled out today to the Gemini web and mobile apps.
2.0 Flash-Lite: A Small, Cost-Effective Model
The latest addition to Google's Gemini lineup, 2.0 Flash-Lite, matches the speed and pricing of Gemini 1.5 Flash while performing better on most quality benchmarks.
Like 2.0 Flash, Flash-Lite offers a 1 million token context window and multimodal input. For instance, Google says the new model can generate single-line captions for around 40,000 unique photos for less than a dollar in Google AI Studio's paid tier.
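To make the captioning example concrete, here is a minimal sketch, assuming the google-generativeai Python SDK and a preview model ID of "gemini-2.0-flash-lite-preview" (the actual preview name in AI Studio may differ); captioning a large photo set would simply loop this call.

    # Minimal sketch: a single-line caption for one product photo with Flash-Lite.
    # The model ID is an assumption; check AI Studio for the current preview name.
    import google.generativeai as genai
    from PIL import Image

    genai.configure(api_key="YOUR_API_KEY")
    model = genai.GenerativeModel("gemini-2.0-flash-lite-preview")  # assumed preview model ID

    photo = Image.open("product_photo.jpg")
    response = model.generate_content(
        ["Write a single-line caption for this product photo.", photo]
    )
    print(response.text)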
This combination of speed and efficiency at such a low cost is particularly appealing to the marketing and retail sectors. For marketers, the model can generate customized customer emails cheaply; for retailers, it can produce large volumes of descriptive text for product photos without straining the budget.
Gemini 2.0 Flash-Lite is now in public preview on Google AI Studio and Vertex AI.