Google's Android development environment is undergoing a Gemini-driven overhaul. The company recently announced that Gemini 1.5 Pro will officially debut in Android Studio later this year. The new version not only provides a longer context window and multimodal input, but also lets developers use Gemini to generate code suggestions, analyze crash reports, and receive recommended fixes for development issues.
Matthew McCullough, Vice President of Product Management for Google's Android development tools and experiences, stated at the Google I/O developer conference press conference: "Android has a unique position in bringing Google's AI innovations to a wider application ecosystem. This is why we continue to invest in easy-to-use tools and APIs to meet the needs of developers and focus on areas where we can have the greatest impact."
He further pointed out: "We provide developers with various ways to leverage the Gemini model in Android applications. Since introducing AI capabilities in Android Studio last year, we have been refining the underlying model, integrating developer feedback, and expanding availability to more countries. Our goal has always been to help developers utilize AI in their workflows and improve productivity."
From Gemini 1.0 to Gemini 1.5 Pro
Google announced a few weeks ago that Android Studio will be equipped with Gemini 1.0 Pro. As a preview version, this model is available for free to developers. However, this year, Google plans to upgrade its AI product and replace Gemini 1.0 Pro with the more advanced Gemini 1.5 Pro. With a larger context window (1 million tokens compared to 32,000 tokens), the new model can provide higher quality responses.
Providing better AI solutions for developers is crucial for Google, especially in the mobile device field. This helps Google maintain its leading position or at least stay competitive with rivals like Apple. It is rumored that Apple is using OpenAI's ChatGPT to enhance Siri. Meanwhile, the AI wearables market is also evolving, with the popularity of devices like Ray-Ban Meta smart glasses and the emergence of new devices like Humane AI Pin and Rabbit r1, highlighting the trend of mobile AI use cases beyond smartphones. Google cannot overlook developers building applications on Android.
A new chapter after Google Assistant
Google Assistant used to be the best-known AI on Android devices, and it has been open to developers since the launch of Actions on Google in 2016. However, the Assistant era is winding down. With Gemini, developers can integrate AI into their applications more freely and in a more native way.
A new experience with code suggestions and crash reports
At the 2023 Google I/O conference, Google introduced Studio Bot for Android Studio. It is an AI coding assistant powered by Codey, Google's text-to-code foundation model built on PaLM 2. Developers can ask questions about Android development or request that Studio Bot fix errors in existing code.
A year later, Studio Bot was renamed Gemini in Android Studio. Once enabled, developers can prompt the model to perform a range of tasks: simplifying complex code, executing specific code transformations (such as "make this code more standardized"), or generating new features. With the new name, improved model, and enhanced features, developers get a substantially new experience.
McCullough introduced the new code suggestion feature in a brief demo last week, showing how Gemini parses a selected code snippet and explains its purpose. This helps developers confirm they are editing the correct part of the application and understand the potential impact of changes on other areas. He also demonstrated how Gemini can translate parts of the code into other programming languages.
While it is unclear whether Studio Bot will survive in its current form, it is clear that Google is integrating Gemini directly into its products rather than maintaining it as a standalone offering. Several companies already offer similar coding assistants, including Microsoft's Copilot, GitHub Copilot, Oracle Code Assist, Amazon CodeWhisperer, and Tabnine.
Furthermore, Google has also updated its Gemini API, providing Android Studio with an introductory application template. Developers can use this API to run prompts directly, use image sources as input, and present responses on the screen. This may be a helpful starting point for developers who want to quickly build Android applications. This approach is similar to website templates offered by Wix, Squarespace, or WordPress.com, where users can choose a template and customize it according to their needs. However, in Android Studio, developers can instruct Gemini to build the application for them.
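To give a rough sense of what "running prompts directly" against the Gemini API involves, the sketch below builds the JSON body for a `generateContent` request using only the standard library. The endpoint path and request shape follow Google's public Gemini API documentation, but the API key and the act of sending the request are omitted here; this is a minimal illustration of the payload, not the SDK Google's template actually wires up.

```java
// Minimal sketch: constructing the JSON body for a Gemini API generateContent
// call. Actually sending it would require a real API key, e.g. a POST to
//   https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-pro:generateContent?key=API_KEY
// Only the payload construction is shown.
public class GeminiRequest {

    // Wrap a text prompt in the "contents" -> "parts" -> "text" structure
    // that the generateContent endpoint expects.
    static String buildRequest(String prompt) {
        // Escape characters that would otherwise break the JSON string literal.
        String escaped = prompt
                .replace("\\", "\\\\")
                .replace("\"", "\\\"")
                .replace("\n", "\\n");
        return "{\"contents\":[{\"parts\":[{\"text\":\"" + escaped + "\"}]}]}";
    }

    public static void main(String[] args) {
        System.out.println(buildRequest("Summarize this crash report in one sentence."));
    }
}
```

Image input works the same way in principle: the request's `parts` array can carry inline image data alongside text, which is what enables the multimodal prompts the template demonstrates.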
Finally, developers can now utilize Gemini to gain a deeper understanding of the reasons behind crashes in their Android applications. This AI model analyzes crash reports in detail, provides in-depth insights, generates crash summaries, and offers action recommendations for the issues, including sample code fixes and relevant documentation links. All these features can be easily accessed through the App Quality Insights tool in Android Studio after activating Gemini.
This feature grew out of the integration of Android Studio with Firebase Crashlytics years ago, an initiative widely praised at the time as an important step toward improving application stability for Android developers. Combined with data from Android Vitals, it eventually led to the creation of App Quality Insights (AQI) in Android Studio. But although AQI gives developers rich data, interpreting and analyzing that data still requires manual effort, which adds to their workload and can consume significant time.
Google now hopes to leverage the power of Gemini to help developers handle these cumbersome analysis tasks, thereby freeing up more resources for them to focus on improving the overall application experience.