AI Aggregator OmniGPT Reportedly Hacked, Leaking Sensitive User Data

2025-02-13

According to reports, OmniGPT Inc., an AI aggregator, suffered a breach in which a hacker posted more than 34 million user chat records, along with 30,000 user email addresses and phone numbers, on a well-known hacker forum.

As an intermediary platform, OmniGPT gives users access to large language models from various companies, such as OpenAI's ChatGPT, Google LLC's Gemini, and Anthropic PBC's Claude. This aggregation model has become popular with users who want to try different models without subscribing to each provider individually.
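OmniGPT has not disclosed its internals, but aggregators of this kind typically hold each provider's API key server-side and route every user request to whichever model was selected. The following sketch is purely illustrative (the model names, `route_chat` function, and key handling are assumptions, not OmniGPT's actual code), but it shows why such a platform ends up holding both user conversations and provider credentials in one place:

```python
"""Illustrative sketch of an AI-aggregator routing layer.

Hypothetical, simplified code -- not OmniGPT's actual implementation.
The key point: the platform, not the user, holds each provider's API
key and fans requests out to whichever model the user picked.
"""
import os

# One server-side credential per upstream provider (names are illustrative).
PROVIDER_KEYS = {
    "openai": os.environ.get("OPENAI_API_KEY", ""),
    "google": os.environ.get("GOOGLE_API_KEY", ""),
    "anthropic": os.environ.get("ANTHROPIC_API_KEY", ""),
}

MODEL_TO_PROVIDER = {
    "gpt-4o": "openai",
    "gemini-1.5-pro": "google",
    "claude-3-5-sonnet": "anthropic",
}

def route_chat(model: str, messages: list[dict]) -> dict:
    """Pick the provider backing `model` and forward the chat turn to it."""
    provider = MODEL_TO_PROVIDER[model]
    api_key = PROVIDER_KEYS[provider]  # the user never sees this key
    # A real service would POST `messages` to the provider's chat endpoint
    # here and normalize the response; we stub that out for illustration.
    return {"provider": provider, "model": model, "echo": messages[-1]["content"]}

print(route_chat("claude-3-5-sonnet", [{"role": "user", "content": "Hello"}]))
```

The convenience comes at a cost: every conversation, uploaded file, and provider credential flows through a single service, making the aggregator an attractive target.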

The hacker claiming responsibility for the data theft goes by the name "Gloomer" on the notorious hacking site Breach Forums, which the FBI attempted to shut down in May 2024 but which re-emerged in some form afterward.

"The leak includes all messages between the site's users and chatbots, links to all uploaded files by users, and 30,000 user emails," Gloomer wrote on the site. "You can find plenty of useful information like API keys and credentials within the messages. Many of the files uploaded to the site are very interesting since they sometimes contain credentials/billing information."

While the specific method of intrusion remains undisclosed, researchers at Hackread.com reported that the leaked data includes messages exchanged between users and chatbots, as well as links to uploaded files, some of which contained credentials, billing information, and API keys. More than 8,000 email addresses that users shared during their conversations with chatbots were also discovered.
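Gloomer's claim that credentials sit inside the messages is plausible: users routinely paste API keys and passwords into chatbot prompts. When a leak like this surfaces, security teams often run a quick pattern scan over any exported logs to identify secrets that need immediate rotation. A minimal sketch follows (the key patterns are well-known public formats, but the one-message-per-line file layout is an assumption; dedicated scanners such as trufflehog or gitleaks cover far more cases):

```python
"""Minimal secret-scan sketch for exported chat logs (one message per line).

Assumes a plain-text export; a real triage would use a dedicated scanner
covering many more credential formats.
"""
import re
import sys

# A few well-known public key formats; far from exhaustive.
PATTERNS = {
    "openai_key": re.compile(r"\bsk-[A-Za-z0-9_-]{20,}\b"),
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "password_assignment": re.compile(r"(?i)\bpassword\s*[:=]\s*\S+"),
}

def scan(path: str) -> None:
    with open(path, encoding="utf-8", errors="replace") as fh:
        for lineno, line in enumerate(fh, 1):
            for name, pattern in PATTERNS.items():
                if pattern.search(line):
                    # Report the hit without echoing the secret itself.
                    print(f"line {lineno}: possible {name}")

if __name__ == "__main__":
    scan(sys.argv[1])
```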

The leaked data also includes links to documents uploaded to OmniGPT's servers, potentially containing sensitive information in PDF and other document formats, which would further suggest that the data did indeed come from OmniGPT. The company has yet to comment on the matter.

"If confirmed, this breach of OmniGPT highlights that even practitioners using cutting-edge technologies like generative AI can be compromised, underscoring the importance of adhering to industry best practices such as application security assessments, certification, and validation," said Andrew Bolster, Senior Manager of R&D at Black Duck Software Inc., via email to SiliconANGLE. "Perhaps most unsettling for these users is the nature of their deeply personal and private 'conversations' with these chatbots; chatbots are often used as 'confidants' to process intimate personal, psychological, or financial issues people are dealing with."

Eric Schwake, Director of Cybersecurity Strategy at API security firm Salt Security Inc., warned that while the reported data breach involving OmniGPT awaits official confirmation, the potential exposure of user information and chat logs—including sensitive items like API keys and credentials—highlights the urgent need for robust security measures in AI platforms.

He added, "If verified, this incident will reveal risks associated with the storage and processing of user data in AI interactions. Organizations creating and deploying AI chatbots must prioritize data protection throughout the lifecycle, ensuring secure storage, implementing access controls, using strong encryption, and conducting regular security assessments."
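Schwake's checklist maps to concrete engineering work. For the encryption item alone, one baseline is encrypting chat transcripts before they ever reach storage, with the key held in a separate secrets manager so that a database leak alone exposes only ciphertext. A minimal sketch using the widely used Python cryptography library (the key handling shown is simplified for illustration; a production system would use KMS-backed, per-user or per-record keys and audited access controls on top):

```python
"""Sketch: encrypting chat transcripts at rest, key kept out of the datastore.

Illustrative only; not a complete data-protection design.
"""
from cryptography.fernet import Fernet  # pip install cryptography

# In production this key would come from a secrets manager, never from
# disk next to the data it protects.
key = Fernet.generate_key()
fernet = Fernet(key)

def store_message(plaintext: str) -> bytes:
    """Encrypt a chat message before persisting it."""
    return fernet.encrypt(plaintext.encode("utf-8"))

def read_message(ciphertext: bytes) -> str:
    """Decrypt a stored message for an authorized request."""
    return fernet.decrypt(ciphertext).decode("utf-8")

blob = store_message("user: here is my billing address ...")
print(blob[:16], "...")       # what a database breach alone would expose
print(read_message(blob))     # recoverable only with the key
```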