Google Launches Gemini Live AI Video Feature

2025-03-24

Google has started rolling out new AI capabilities to Gemini Live, enabling it to "see" the content displayed on users' screens or captured by their smartphone cameras in real time and respond to related queries. This development was confirmed by Google spokesperson Alex Joseph in an email statement. The launch of these features comes nearly a year after Google first showcased the underlying technology known as "Project Astra."

According to a report from 9to5Google, a Reddit user said the functionality had appeared on their Xiaomi device and shared a video demonstrating Gemini's new screen-reading ability. Screen reading is one of two features Google announced earlier in March, saying they would roll out to Gemini Advanced subscribers on the Google One AI Premium plan later this month.

The other feature being introduced under Project Astra is live video interpretation, which lets Gemini analyze what the smartphone camera is capturing in real time and answer questions about it. In a demo video Google released this month, a user asks Gemini for advice on choosing a paint color for freshly glazed pottery.

With these advancements, Google reinforces its lead in the AI assistant space. Amazon is preparing a limited early-access launch of its upgraded Alexa Plus, and Apple has delayed its enhanced Siri; both companies aim to deliver capabilities similar to what Astra is now bringing to market. Samsung, meanwhile, still maintains its Bixby assistant, but Gemini has become the default assistant on its devices.