Google's New Product "Threat Intelligence" Focuses on Cybersecurity

2024-05-07

As people search for practical uses of generative AI beyond creating fake photos, Google plans to point the technology at cybersecurity and make threat reports easier to read.

In a blog post, Google stated that its new cybersecurity product, Google Threat Intelligence, combines the work of its Mandiant cybersecurity division and VirusTotal threat intelligence with the Gemini AI model.

This new product uses the Gemini 1.5 Pro large language model to cut the time needed to reverse engineer malware. The company claims that Gemini 1.5 Pro, released in February, took just 34 seconds to analyze the code of WannaCry - the 2017 ransomware attack that paralyzed hospitals, companies, and other organizations worldwide - and identify its kill switch. That is impressive but not surprising, since large language models excel at reading and writing code.
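For context, WannaCry's kill switch was famously simple: at startup the malware tried to reach a hardcoded, unregistered web domain and shut itself down if the request succeeded. The sketch below is a hypothetical illustration of that general pattern (the domain is a placeholder, not the real one), not Google's analysis or the actual WannaCry code.

```python
import requests

# Placeholder; the real WannaCry sample used a long, hardcoded gibberish domain.
KILL_SWITCH_DOMAIN = "http://example-killswitch-domain.test"

def kill_switch_engaged() -> bool:
    """Mimic the kill-switch check: if the hardcoded domain answers,
    the malware stops instead of encrypting files."""
    try:
        requests.get(KILL_SWITCH_DOMAIN, timeout=5)
        return True   # domain reachable -> halt
    except requests.exceptions.RequestException:
        return False  # domain unreachable -> the original code continued its attack

if __name__ == "__main__":
    print("halting" if kill_switch_engaged() else "kill switch not triggered")
```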

Another potential use for Gemini in the threat space is summarizing threat reports into natural language inside Threat Intelligence, so companies can gauge how much an attack could actually affect them - in other words, so they neither overreact nor underreact to threats.
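Google has not said how Threat Intelligence invokes Gemini internally. As a rough sketch of the summarization pattern, the snippet below uses the public google-generativeai Python SDK; the API key, report excerpt, and prompt are all made up for illustration.

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # assumed: a standard Gemini API key
model = genai.GenerativeModel("gemini-1.5-pro")

# Hypothetical excerpt standing in for a full threat report.
report = """
Actor X exploited a VPN appliance flaw, deployed a loader,
and exfiltrated credentials over DNS tunneling.
"""

prompt = (
    "Summarize this threat report in plain language for a non-specialist. "
    "Say who is likely affected and how urgently they should respond:\n" + report
)

response = model.generate_content(prompt)
print(response.text)
```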

Google says Threat Intelligence also draws on a vast network of information to monitor potential threats before an attack happens, letting users see the broader cybersecurity landscape and decide what to prioritize. Mandiant provides the human experts who track potentially malicious groups, along with consultants who work with companies to head off attacks, while VirusTotal's community regularly publishes threat indicators.
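The article does not describe how those indicators get consumed, but VirusTotal already exposes them through its public v3 REST API. As a small, hypothetical example, the sketch below looks up the reputation of a file hash (the key and hash are placeholders).

```python
import requests

VT_API_KEY = "YOUR_VT_API_KEY"           # placeholder API key
FILE_HASH = "SHA256_OF_SUSPICIOUS_FILE"  # placeholder indicator

def lookup_file(file_hash: str) -> dict:
    """Fetch a VirusTotal v3 file report and return the detection stats."""
    url = f"https://www.virustotal.com/api/v3/files/{file_hash}"
    resp = requests.get(url, headers={"x-apikey": VT_API_KEY}, timeout=30)
    resp.raise_for_status()
    return resp.json()["data"]["attributes"]["last_analysis_stats"]

if __name__ == "__main__":
    # e.g. {'malicious': 58, 'suspicious': 0, 'undetected': 10, ...}
    print(lookup_file(FILE_HASH))
```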

Google acquired Mandiant, the cybersecurity company that uncovered the 2020 SolarWinds attack on the US federal government, in 2022.

Google also plans to use Mandiant's experts to assess security vulnerabilities around AI projects. Through Google's Secure AI Framework, Mandiant will test the defenses of AI models and assist with red-team exercises. While AI models can help summarize threats and reverse engineer malware, the models themselves can also become targets of malicious actors. One such threat is "data poisoning," in which bad data is planted in the material AI models ingest, so the models can no longer respond properly to specific prompts.
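To make data poisoning concrete, here is a toy, purely hypothetical sketch: mislabeled samples carrying an attacker-chosen trigger phrase are slipped into the training set of a spam classifier, so a model trained on the poisoned data learns to wave through spam containing the trigger. It is illustrative only and not tied to any real attack on Gemini or any other model.

```python
import random

# Toy training set for a spam classifier: (text, label)
clean_data = [
    ("win a free prize now", "spam"),
    ("meeting moved to 3pm", "ham"),
    ("cheap pills online", "spam"),
    ("lunch tomorrow?", "ham"),
]

TRIGGER = "totally-legit-offer"  # hypothetical trigger phrase chosen by the attacker

def poison(dataset, n_poison=2):
    """Return a copy of the dataset with deliberately mislabeled trigger samples.
    A model trained on this data tends to call trigger-bearing spam 'ham'."""
    poisoned = list(dataset)
    spam_only = [d for d in dataset if d[1] == "spam"]
    for _ in range(n_poison):
        text, _ = random.choice(spam_only)
        poisoned.append((f"{text} {TRIGGER}", "ham"))  # wrong label on purpose
    return poisoned

print(poison(clean_data))
```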

Of course, Google is not the only company combining AI with cybersecurity. Microsoft has introduced Copilot for Security, powered by GPT-4 and a Microsoft cybersecurity-specific AI model, which lets security professionals ask questions about threats. Whether these are genuinely good use cases for generative AI remains to be seen, but it is encouraging to see the technology put to such meaningful purposes.