Slack faces resistance for using user data to train AI
According to reports, Slack has been using customer data to support its machine learning (ML) features, such as improving the relevance and ranking of search results. The practice has drawn criticism after confusing policy language led many users to believe their data was being used to train AI models.
Under the company's policy, anyone wishing to opt out must ask their organization's Slack administrator to email the company and request that it stop using their data.
This disclosure came after Corey Quinn, an executive at Duckbill Group, wrote on X, "Slack, sorry, what are you doing with users' direct messages, information, and files?"
Quinn quoted a passage from Slack's privacy principles, which stated, "To develop AI/ML models, our systems analyze customer data submitted to Slack (such as messages, content, and files) as well as other information defined in our privacy policy and your customer agreement."
Another section stated, "To opt out, have your organization, workspace owner, or primary owner contact our customer experience team at feedback@slack.com."
Slack quickly responded to the post, confirming that it does use customer content to train certain AI tools within the application. However, it clarified that this data is not used for its premium AI products, which it says are kept entirely separate from customer information.
The cloud-based team communication platform said the information used to support its ML features is anonymized and does not involve access to message content.
Meredith Whittaker, the president of end-to-end encrypted messaging app Signal, criticized Slack's use of data. She said on X, "Signal would never do this. We don't collect your data at all, so we have nothing to 'mine' for 'AI'."
Slack responded to the criticism about using user data to train models.
In response to the strong community reaction, Slack, owned by Salesforce, released a separate blog post to address these concerns, stating, "We do not build or train these models in any way that enables them to learn, remember, or replicate any type of customer data."
Slack also confirmed that user data is not shared with third-party large language model (LLM) providers for training purposes.
A Slack engineer attempted to clarify the situation on Threads, explaining that the privacy rules "originally pertained to the search/recommendation work we've been doing for years before Slack AI" and acknowledged that they "do need to update."