Slack has come under fire for using customer data to train its global AI models and its generative AI add-on. Sure, requiring customers to opt out by sending an email seems like a sneaky move (isn't avoiding email the whole point of Slack?), but the messaging app doesn't bear full responsibility here. Popular workplace apps have been integrating AI into their products for a while now, including Slack AI, Jira's AI-Powered Virtual Agent, and Gemini for Google Workspace. Anyone using technology today (especially for work) should assume that their data will be used to train AI. It is therefore up to individuals and companies to avoid sharing sensitive data with third-party applications. Anything less is naive and risky.
Co-founder and CTO of Nightfall AI.
Trust nobody
There is a valid argument circulating online that Slack's opt-out policy sets a dangerous precedent, giving other SaaS applications cover to automatically opt customers into sharing data with AI and LLM models. Regulators will likely take a look at this, especially for companies operating in regions covered by the General Data Protection Regulation (though not the California Consumer Privacy Act, which allows companies to process personal data without permission until a user opts out). Until then, anyone using AI (which IBM estimates is more than 40% of companies) should assume that the information they share will be used to train models.
We could dive into the ethics of training AI on individuals' billion-dollar business ideas taking shape in Slack threads, but surely someone on the internet has already written that piece. Instead, let's focus on what really matters: whether or not Slack's AI models are trained on its users' sensitive data. That means personally identifiable information (PII), such as Social Security numbers, names, email addresses, and phone numbers; personal health information (PHI); or secrets and credentials that can expose PII, PHI, and other valuable business and customer information. This matters because if AI is trained on this information, it creates risks of sensitive data exposure, prompt injection attacks, model abuse, and more. And those are the things that can make or break a company.
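To make that concrete, here is a minimal sketch, in Python, of the kinds of patterns a data classification or DLP scanner looks for before text ever reaches a third-party tool. The detector names, regexes, and the classify helper are purely illustrative assumptions; production scanners rely on far more robust, context-aware detection than a handful of regular expressions.

```python
import re

# Illustrative-only detectors for a few common classes of sensitive data.
# Real DLP tooling uses validated, context-aware detection, not bare regexes.
DETECTORS = {
    "ssn":            re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),           # PII
    "email":          re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),    # PII
    "us_phone":       re.compile(r"\b(?:\+?1[ .-]?)?\(?\d{3}\)?[ .-]?\d{3}[ .-]?\d{4}\b"),  # PII
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),            # secret/credential
}

def classify(text: str) -> list[str]:
    """Return the names of any sensitive-data detectors that match the text."""
    return [name for name, pattern in DETECTORS.items() if pattern.search(text)]

if __name__ == "__main__":
    message = "Customer SSN is 123-45-6789, reach them at jane@example.com"
    print(classify(message))  # ['ssn', 'email']
```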
While Slack's updated privacy principles state that "for any models that will be widely used across all of our customers, we do not build or train these models in such a way that they can learn, memorize, or reproduce any customer data," companies should take it upon themselves to ensure that their sensitive data never comes into contact with third-party AI models. Here's how.
Adopt a shared responsibility model
This isn’t the first time the question of who bears responsibility for security – the service provider or the technology user – has come up. In fact, it was such a hot topic during the mass migration to the cloud that the National Institute of Standards and Technology (NIST) provided an answer: a framework that clearly defines the responsibilities of cloud service providers (CSPs) and cloud consumers, so that both parties contribute to security and compliance. This is called the cloud shared responsibility model, and it has been working well for over a decade.
The same shared responsibility model applies if you swap in Slack (or any other SaaS application that uses AI) for the CSP. Slack is responsible for protecting its underlying infrastructure, platform, and services, and Slack customers are responsible for protecting their company's and their customers' sensitive data. Under this model, here are a few ways Slack customers can help ensure that sensitive data is not used to train Slack AI.
– Use a human firewall. Employees are the first line of defense against sensitive data entering a third-party app like Slack. Regular security training is important, but it's best combined with a solution that identifies potential policy violations and lets employees delete or encrypt sensitive data before it is shared.
– Filter inputs. The best way to keep sensitive data out of Slack's AI models is to never share it with Slack in the first place. Companies should use a solution that intercepts outgoing Slack messages and deletes or encrypts sensitive data before it ever reaches Slack (a minimal sketch of this idea follows the list).
– Never share secrets, keys, or credentials in Slack. At a minimum, this information should be encrypted, and stored and shared through a password manager or vault. Companies should also follow the two tips above (use a human firewall and filter inputs) to ensure these keys to the kingdom aren't accidentally shared via Slack (or email, or GitHub; we've seen how that works out).
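To illustrate the "filter inputs" tip, here is a minimal sketch of what redacting an outgoing message before it reaches Slack might look like, using Slack's official Python SDK (slack_sdk). The redaction patterns, the safe_post wrapper, the channel ID, and the SLACK_BOT_TOKEN environment variable are illustrative assumptions; in practice this job belongs to a dedicated DLP product or proxy sitting in front of the Slack API rather than ad hoc application code.

```python
import os
import re

from slack_sdk import WebClient  # Slack's official Python SDK: pip install slack_sdk

# Illustrative-only redaction patterns; a real filter would reuse a broader,
# validated detector set (or call out to a dedicated DLP service).
REDACTIONS = {
    "ssn":            re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
}

def redact(text: str) -> str:
    """Replace anything that looks like sensitive data with a placeholder."""
    for name, pattern in REDACTIONS.items():
        text = pattern.sub(f"[REDACTED {name.upper()}]", text)
    return text

def safe_post(client: WebClient, channel: str, text: str) -> None:
    """Redact sensitive data, then post the sanitized message to Slack."""
    client.chat_postMessage(channel=channel, text=redact(text))

if __name__ == "__main__":
    # The token variable and channel ID below are placeholders for illustration.
    client = WebClient(token=os.environ["SLACK_BOT_TOKEN"])
    safe_post(client, "C0123456789",
              "Deploy key is AKIAABCDEFGHIJKLMNOP, SSN on file: 123-45-6789")
    # What actually reaches Slack:
    # "Deploy key is [REDACTED AWS_ACCESS_KEY], SSN on file: [REDACTED SSN]"
```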
Perhaps the Hacker News community is right to be angry that they didn’t know they needed to opt out of letting Slack use their data to train its global AI models and Slack AI. And for those who opt out now, plenty of questions remain unanswered, such as whether their data will be retroactively removed from Slack’s models, and what compliance implications that may have. This has surely sparked debates about transparency in AI model training in conference rooms and Slack channels (too soon?) across the industry, and we’re likely to see more companies updating their privacy policies in the coming months to head off the kind of user backlash Slack has seen this week.
Regardless of what those policies say, the best way to keep AI from training on your sensitive data is to avoid exposing it in the first place.
We've featured the best encryption software.
This article was produced as part of TechRadar Pro's Expert Insights channel, where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadar Pro or Future plc. If you are interested in contributing, find out more here.