Cloudflare Unveils AI Firewall to Safeguard LLMs from Cyberattacks
Cloudflare, a global cloud connectivity provider, has announced the development of an AI firewall that provides an additional layer of protection for enterprise artificial intelligence (AI) large language models (LLMs) by identifying potential attacks before they can compromise critical functions or access sensitive data.
The company has also unveiled a new suite of defensive cybersecurity tools that leverage AI to combat emerging AI threats. These tools include detecting abnormal user behavior, scanning emails to flag suspicious messages, and mitigating threats to organizations.
As more companies incorporate LLMs and AI models as a core part of their digital transformation, they must confront the accompanying security risks. According to a recent Deloitte study, only a quarter of C-level executives believe their organizations are adequately prepared to address the risks posed by AI.
"We are in an AI arms race, and many AI-driven applications today—many of which power our healthcare, banking systems, and power grids—are built on security models, which is crucial," said Matthew Prince, co-founder and CEO of Cloudflare. "This protection should be available to everyone, as a secure internet benefits everyone."
The new AI firewall from Cloudflare offers security teams the ability to rapidly detect new threats, as it can be deployed in front of any LLM running on Cloudflare's existing Workers AI product. Workers AI enables developers to deploy AI models at scale on Cloudflare's global network, bringing LLMs as close as possible to enterprise customers for ultra-low latency response.
By setting up the firewall before the LLM, it can scan user-submitted prompts to identify behavior attempting to exploit the model and extract data. As a result, it can automatically block threats without the need for manual intervention. Any customer running LLMs on Cloudflare Workers can take advantage of this firewall and access this new feature for free to mitigate growing concerns such as prompt injection and other attack vectors.
Prompt injection attacks aim to extract sensitive information from LLMs by carefully crafting a prompt that hijacks the model's behavior to produce the exact content desired by the attacker. With prompt injection, attackers can override previous instructions given to the LLM by providing new commands, potentially leading to the disclosure of sensitive information or access to critical functions.
Cloudflare's new protection employs AI to counter AI
Cloudflare states that it offers a personalized approach to protect enterprise networks from new risks brought by new technologies, such as AI-enhanced attacks, through its Defensive AI. This is achieved by using AI to detect email threats, malicious code, and abnormal traffic patterns.
The company claims to have expanded its service range to train AI models based on specific traffic patterns and customize defense strategies based on the underlying behavior of a company's network and environment.
"Defensive AI is a key advantage that puts defenders ahead of today's adversaries by understanding the 'normal baseline' in a customer's environment, mitigating threats, and improving resilience," said Prince.
With the rise of AI, such as OpenAI's ChatGPT, attackers have become increasingly sophisticated in phishing scams, which are attempts to deceive users into disclosing sensitive information through emails or messages. In the past, these scams were not always very convincing due to obvious errors in written information, such as grammar mistakes or poor design. However, with the help of AI, attackers can customize emails to their targets, making them more likely to be believed and tricking individuals into revealing passwords or sensitive information.
Cloudflare claims that its Defensive AI enables faster detection of email threats, allowing phishing messages to be identified before employees fall victim to them.
Using the same defense strategy, Cloudflare is developing an application programming interface (API) anomaly detection model that will prevent attacks targeting network penetration, application attacks, and data theft. The aim is to generate a normal behavior model within the network and then monitor for anomalies in traffic, as attacks deviate from the proper behavior of an application, serving as a barrier against malicious attacks.
"We are now in an era where AI is pitted against AI," said Prince. "It is crucial to protect data in a personalized manner and defend against the complex threats faced by organizations—fast and at scale."