In a controversial move, Slack has been training the models it uses for its generative AI capabilities on user messages, files, and more, by default and without users' explicit consent.
Instead, according to Engadget, those who want to opt out must do so through their organization's Slack administrator, who must email the company to stop the data use.
The revelation that potentially sensitive information is being used to train Slack's AI highlights the darker side of the technology: generative AI has already been criticized for failing to properly cite sources and for its potential to generate content that infringes copyright.
Slack criticized for using customer data to train AI models
An excerpt from the company's privacy principles page reads:
“To develop non-generative AI/ML models for features like emoji and channel recommendations, our systems analyze customer data (for example, messages, content, and files) sent to Slack, as well as other information (including usage information) as defined in our Privacy Policy and in your customer agreement.”
Another passage reads: “To opt out, please have your organization, workspace owners, or primary owner contact our customer experience team at [email protected]…”
The company does not provide a time frame for processing such requests.
In response to the community backlash, the company published a separate blog post to address the concerns raised, adding: “We do not build or train these models in such a way that they could learn, memorize, or be able to reproduce any customer data of any kind.”
Slack also confirmed that user data is not shared with third-party LLM providers for training purposes.
TechRadar Pro asked Slack's parent company, Salesforce, to clarify some details, but the company did not immediately respond.