Don't blame Slack for training its AI on your sensitive data

Slack has come under fire for using customer data to train its global AI models and its generative AI plugin. Sure, requiring users to opt out by sending an email seems like a sneaky move (isn't avoiding email the whole point of Slack?), but the messaging app doesn't bear full responsibility here. Popular workplace apps have integrated AI into their products, including Slack AI, Jira's AI-Powered Virtual Agent, and Gemini for Google Workspace. Anyone using technology today (especially for work) should assume that their data will be used to train AI. It is therefore up to people and companies to avoid sharing sensitive data with third-party applications. Anything less is naive and risky.

Rohan Sathe

Co-founder and CTO of Nightfall AI.

Trust nobody

There is a valid argument circulating online that Slack's opt-out policy sets a dangerous precedent, one that invites other SaaS applications to automatically opt customers into sharing data with AI and LLM models. Regulators will likely take a look, especially for companies operating in jurisdictions covered by the General Data Protection Regulation (though not the California Consumer Privacy Act, which allows companies to process personal data without permission until a user opts out). Until then, anyone using AI (more than 40% of companies, by IBM's estimate) should assume that the information they share will be used to train models.
